Q: Everything leads to "one" number You get an invitation from an Institute of Mathematics and you are told if you can solve a simple puzzle you can be part of their elite institution. You have to fill in the blank. $6 \rightarrow 6$, $7 \rightarrow 11$, $8 \rightarrow 3$, $9 \rightarrow 13$, $10 \rightarrow$ __ Can you get into the institute? A: Seems the answer is 5? Because... these are numbers 6-10 in the integer sequence "Number of halving steps to reach 1 in the '3x+1' problem": 0, 1, 5, 2, 4, 6, 11, 3, 13, 5,... OEIS elaborates: The total number of steps to reach 1 under the modified '3x+1' map: T := n -> n/2 if n is even, n -> (3n+1)/2 if n is odd. [The bold part is probably what the title references.]
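For the record, the sequence is easy to check with a few lines of Python (a sketch; the function name is mine):

```python
def steps_to_one(n):
    """Total steps to reach 1 under the modified '3x+1' map:
    n -> n/2 if n is even, n -> (3n+1)/2 if n is odd."""
    count = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else (3 * n + 1) // 2
        count += 1
    return count

print([steps_to_one(n) for n in range(6, 11)])  # [6, 11, 3, 13, 5]
```

Running it over n = 1..10 reproduces the OEIS terms quoted above, confirming that the blank is 5.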
{ "pile_set_name": "StackExchange" }
Q: Avoiding using Option.Value I have a type like this: type TaskRow = { RowIndex : int TaskId : string Task : Task option } A function returns a list of these records to be processed further. Some of the functions doing that processing are only relevant for TaskRow items where Task is Some. I'm wondering what the best way is to go about that. The naive way would be doing let taskRowsWithTasks = taskRows |> Seq.filter (fun row -> Option.isSome row.Task) and passing that to those functions, simply assuming that Task will never be None and using Task.Value, risking an NRE if I don't pass in that one special list. That is exactly what the current C# code does but seems rather unidiomatic for F#. I shouldn't be 'assuming' things but rather let the compiler tell me what will work. More 'functional' would be to pattern match every time the value is relevant and then do/return nothing (and use choose or the like) for None, but that seems repetitive and wasteful as the same work would be done multiple times. Another thought was introducing a second, slightly different type: type TaskRowWithTask = { RowIndex : int TaskId : string Task : Task } The original list would then be filtered into a 'sublist' of this type, to be used where appropriate. I guess that would be okay from a functional perspective, but I wonder whether there's a nicer, idiomatic way without resorting to this kind of 'helper type'. Thanks for any pointers! A: There's quite a bit of value in knowing that the tasks have already been filtered, so having two different types can be helpful.
Instead of defining two different types (which, in F#, isn't that big a deal, though), you could also consider defining a generic Row type: type Row<'a> = { RowIndex : int TaskId : string Item : 'a } This enables you to define a projection like this: let project = function | { RowIndex = ridx; TaskId = tid; Item = Some t } -> Some { RowIndex = ridx; TaskId = tid; Item = t } | _ -> None let taskRowsWithTasks = taskRows |> Seq.map project |> Seq.choose id If the initial taskRows value has the type seq<Row<Task option>>, then the resulting taskRowsWithTasks sequence has the type seq<Row<Task>>. A: I agree with you, the more "pure functional" way is to repeat the pattern match, I mean use a function with Seq.choose that does the filtering, instead of saving it to another structure. let tasks = Seq.choose (fun {Task = t} -> t) taskRows The problem is performance as it would be calculated many times, but you can use Seq.cache so behind the scenes it's saved into an intermediate structure, while keeping your code more "pure functional" looking.
Q: Problem with async task I've been banging my head against this but can't solve it. The error being thrown is An asynchronous module or handler completed while an asynchronous operation was still pending. source error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. I'm sending a request to generate a boleto (bank payment slip), but I'm getting this asynchronous error. I'm using the Fluent plugin for this. Code: public async Task<FileStreamResult> Teste(){ var response = url.WithHeaders( new { Authorization = "Basic " + Convert.ToBase64String( Encoding.UTF8.GetBytes(ApiKey + ":" + Password)), ContentType = "application/x-www-form-urlencoded" }).PostUrlEncodedAsync(boleto).Result; var pdfBytes = response.Content.ReadAsByteArrayAsync().Result; pdfBytesResult = pdfBytes; MemoryStream ms = new MemoryStream(pdfBytesResult); return new FileStreamResult(ms, "application/pdf"); } The problem only occurs when the site is in production (IIS); it doesn't happen locally. A: The problem was not in the code but in the proxy, which was blocking external access.
Q: Why isn't my if-else block ever getting hit, even though it should be? (Just need another pair of eyes.) I am making a Falling Sand style game in Java, and I'm having weird issues with an if-else block that I have. In my doGravity() method, I have various blocks of conditions that will cause different things to happen, and for some odd reason, one block is NEVER getting hit. When I have this block count how many times each condition is hit, the left and right blocks are hit almost evenly: else if(world[x][y+1]==EMPTY && (x-1 >= 0) && world[x-1][y+1] == EMPTY && (x+1 < world.length) && world[x+1][y+1]==EMPTY) { int r = rand.nextInt(50); if(r == 0) { world[x-1][y+1] = world[x][y]; //System.out.println("GO: right"); countRight++; } else if(r == 1) { world[x+1][y+1] = world[x][y]; //System.out.println("GO: left"); countLeft++; } else { world[x][y+1] = world[x][y]; countCenter++; } world[x][y] = EMPTY; } Next comes this condition, which also equally distributes left and right. else if((x-1 >= 0) && world[x-1][y+1] == EMPTY && (x+1 < world.length) && world[x+1][y+1]==EMPTY) { if(rand.nextBoolean()) { world[x-1][y+1] = world[x][y]; //countLeft++; } else { world[x+1][y+1] = world[x][y]; //countRight++; } world[x][y] = EMPTY; } But when I count these blocks, the left block NEVER gets hit, even when the space to the left is open. I feel like it's probably just some stupid typo that I can't see for some reason. else if((x-1 >= 0) && world[x-1][y+1] == EMPTY) { world[x-1][y+1] = world[x][y]; world[x][y] = EMPTY; countLeft++; System.out.println("Hit Left"); } else if((x+1 < world.length) && world[x+1][y+1] == EMPTY) { world[x+1][y+1] = world[x][y]; world[x][y] = EMPTY; countRight++; System.out.println("Hit Right"); } UPDATE: If I comment out the left block at the end, absolutely nothing changes. The sand acts exactly the same. If I comment out the right block at the end, it acts the same as if I comment out both blocks. I cannot figure this out. It should work... 
but it doesn't. UPDATE: Here's the full source code. I have no idea what this could possibly be. It will, in fact, drive me insane. http://pastebin.com/mXCbCvmb A: Your pastebin code does show "Hit left", you just need to change the creation of world (line 65 in the pastebin) to world = new Color[worldWidth][worldHeight+1]; Because of the y+1 part, I would suppose. Other than that it grows both to the left and to the right. EDIT: http://pastebin.com/GVmSzN4z I twiddled a little with your doGravity to make the drops a little more symmetric.
Q: encoding and decoding a binary guid in PHP I have a C# .NET 3.5 application using the ADO.NET Driver for MySQL (Connector/NET). It stores a Guid.NewGuid() in the MySQL database as a BINARY(16). I have another application using PHP 5.3.4 that needs to be able to read that binary value as a GUID string and encode a GUID string in the same 16-byte binary value. As an example I want to be able to convert between these two things: GUID string: E241346C-504F-4BE5-BDF3-1B8274815597 BINARY(16): 65 32 34 31 33 34 36 63 2d 35 30 34 66 2d 34 62 How can I do this in PHP? A: The problem was that the field was BINARY(16), but the MySQL connector was trying to store a CHAR(36) in there. (They changed the way GUIDs were formatted in v6.1.1) So, the last 20 bytes of my GUID were being silently truncated. (I do not know why the connector didn't throw an exception.) Once I changed the database to store GUIDs as a CHAR(36), it worked fine. Alternatively, I could have used the oldguids=true element in my connection string.
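Incidentally, the byte dump in the question is itself evidence of the truncation described in the answer: those sixteen bytes are exactly the ASCII codes of the first sixteen characters of the lowercased GUID string. A quick Python check (illustrative only):

```python
guid = "E241346C-504F-4BE5-BDF3-1B8274815597"

# A CHAR(36) GUID crammed into a BINARY(16) column keeps only the
# first 16 ASCII bytes of the (lowercased) string representation.
stored = guid.lower().encode("ascii")[:16]
print(stored.hex(" "))  # 65 32 34 31 33 34 36 63 2d 35 30 34 66 2d 34 62
```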
Q: Distributed CRON in Kubernetes Distributed CRON in Kubernetes is still a work in progress (https://github.com/kubernetes/kubernetes/issues/2156). What do you use for CRON jobs in Kubernetes today? Do you recommend any solution that works well with Spring/JVM-based services? Spring/JVM startup time is quite high, and if the CRON scheduler started a new JVM for each job, the startup time might be much higher than the time of the actual work - is there any solution that could run the job in an existing JVM? Thank you, Jakub A: I think Mesos Chronos is still the ideal solution.
Q: Decrypt AES-256-CBC String (need IV, string/data format?) I've been going around in circles from Apple's CCCrypto docs, frameworks and other SO answers and am not making any headway. I think I need to figure out how to get an IV from an encrypted string that I receive. I receive a JSON payload which contains a String. That string is encrypted in AES-256-CBC. (From a Laravel PHP instance that I think uses OpenSSL). The string itself, decrypted, is another JSON object. I have a pre-defined key. The string I receive looks something like: eJahdkawWKajashwlkwAkajsne8ehAhdhsiwkdkdhwNIEhHEheLlwhwlLLLLhshnNWhwhabwiIWHWHwh= (but is a lot longer). I'm trying to use this answer here: Issue using CCCrypt (CommonCrypt) in Swift But am a) unsure if I'm properly converting the string to data and b) how to get the IV (initialization vector) from the string I receive. Using that answer I do get "success", however when I try to pass it to the NSJSONSerializer I never got a good result (it always fails) but I do get data out - I think it's garbage. Edit: I really misunderstood my original problem - I was receiving a base64 encoded string that I needed to decode into JSON (which went fine). Then using the linked answer and importing CommonCrypto I thought I'd be able to get usable data but I am not. @Rob Napier's answer is extremely helpful. I think my problem is that the instance of Laravel in question is using OpenSSL. A: There is no really commonly used standard format for AES encrypted data (there are several "standard formats" but they're not commonly used....) The only way to know how the data you have is encrypted is to look at the documentation for the data format, or failing that, the encrypting code itself. In good encryption formats, the IV is sent along with the data. But in many common (insecure) formats, there is a hard-coded IV (sometimes 16 bytes of 0x00). 
If there's a password, you also need to find out how they've converted the password to a key (there are several ways to do this, some good, some horrible). In a good format, the key derivation may include some random "salt" that you need to extract from the data. You'll also need to know if there is an HMAC or similar authentication (which might be stored at the beginning or the end of the data, and may include its own salt). There just isn't any good way to know without documentation from the sender. Any decently encrypted format is going to look like random noise, so figuring it out just by looking at the final message is pretty hard. If this comes out of Laravel's encrypt function, then that seems to be ultimately this code: public function encrypt($value) { $iv = mcrypt_create_iv($this->getIvSize(), $this->getRandomizer()); $value = base64_encode($this->padAndMcrypt($value, $iv)); // Once we have the encrypted value we will go ahead base64_encode the input // vector and create the MAC for the encrypted value so we can verify its // authenticity. Then, we'll JSON encode the data in a "payload" array. $mac = $this->hash($iv = base64_encode($iv), $value); return base64_encode(json_encode(compact('iv', 'value', 'mac'))); } If this is correct, then you should have been passed base64-encoded JSON with three fields: the IV (iv), the ciphertext (value), and what looks like an HMAC encrypted using the same key as the plaintext (mac). The data you've given above doesn't look like JSON at all (even after base-64 decoding). This assumes that the caller used this encrypt function, though. There are many, many ways to encrypt, though, so you need to know how the actual server you're talking to did it.
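If the payload really is this Laravel envelope, the outer layers can be peeled off with the standard library alone; only the final AES-256-CBC step needs a crypto package. A hedged Python sketch (function name is mine; assumes the iv/value/mac scheme shown above, with the MAC computed as HMAC-SHA256 over the base64 IV concatenated with the base64 ciphertext):

```python
import base64
import hashlib
import hmac
import json

def unpack_laravel_payload(payload_b64, key):
    """Decode base64(json({iv, value, mac})) and verify the HMAC.

    Returns the raw IV and ciphertext; AES-256-CBC decryption of the
    ciphertext with that IV and key is the remaining step (it needs a
    third-party crypto library such as pycryptodome).
    """
    data = json.loads(base64.b64decode(payload_b64))
    iv_b64, value_b64, mac = data["iv"], data["value"], data["mac"]
    expected = hmac.new(key, (iv_b64 + value_b64).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):
        raise ValueError("MAC mismatch - wrong key or tampered payload")
    return base64.b64decode(iv_b64), base64.b64decode(value_b64)
```

If the very first base64/JSON step fails on the string in the question, that alone shows the sender is not using this encrypt function, which is the answer's point about needing the sender's documentation.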
Q: Rearrange the words to make meaningful sentence I have a hard time rearranging these words (kind of sentence to be made is indicated in the brackets): 1) Magnanimous her how our all ignore of to weaknesses (Exclamatory) 2) to the agent did plan tour the itinerary satisfaction your (Interrogative) A: In the first sentence, it should be "how", not "now". How magnanimous of her to ignore all our weaknesses! "itinerary" is always a noun, and it usually goes with "to plan". It's planned by a person ("agent") who organises tours ("tour agent"). It's past tense because of "did". "agent" is the subject. Did the tour agent plan the itinerary to your satisfaction?
Q: How can I update each file in a ZIP archive? When trying the following I get the error "Collection was modified; enumeration operation may not execute.". How can I loop through Zip entries and update them? using (ZipArchive archive = ZipFile.Open(@"c:\file.zip",ZipArchiveMode.Update)) { foreach (ZipArchiveEntry entry in archive.Entries) { archive.CreateEntryFromFile(@"c:\file.txt", entry.FullName); } } A: You can't update a collection whilst enumerating through it. You could convert to a for loop instead, taking a snapshot of the count first so the entries you add don't keep the loop going: int count = archive.Entries.Count; for (int i = 0; i < count; i++) { archive.CreateEntryFromFile(@"c:\file.txt", archive.Entries[i].FullName); } You may find it helpful to have a read of the API reference on Enumerators. "Enumerators can be used to read the data in the collection, but they cannot be used to modify the underlying collection."
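The underlying rule - take a snapshot of the collection before mutating it - is language-agnostic. Here is the same pattern in Python with the standard zipfile module (illustrative only; note that appending an entry under an existing name leaves both copies in the archive, with readers resolving to the newest one):

```python
import io
import zipfile

# Build a small in-memory archive to stand in for c:\file.zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("a.txt", "old a")
    z.writestr("b.txt", "old b")

with zipfile.ZipFile(buf, "a") as z:
    # list() snapshots the names, so appending entries inside the
    # loop cannot disturb the iteration.
    for name in list(z.namelist()):
        z.writestr(name, "replacement")
```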
Q: Reverse a string using recursion in Java I want to reverse a whole String. For example, "Cat is running" should give output "running is cat". I have tried a lot but I am unable to do it. It shows "gninnur si taC". Kindly help me: it should take "cat" as a single word instead of taking 'c' as a single character. Here is the code: public static void main(String[] args) { String str = "Cat is running"; System.out.println("Before recursion: " + str); System.out.println("After recursion: " + reverse(str)); } public static String reverse(String str) { if(str.isEmpty()) return str; String s = ""; for(int i = 0; i < str.length(); i++) { s = s + str.charAt(i); } return reverse(s.substring(1)) + s.charAt(0); } A: You have to find the first word in the String, pass the rest of the String to the recursive call, and append the first word at the end: public static String reverse(String str) { if(str.isEmpty() || !str.contains(" ")) return str; int sep = str.indexOf(' '); return reverse(str.substring(sep+1)) + " " + str.substring(0,sep); } Output: Before recursion: Cat is running After recursion: running is Cat BTW, the loop in your original code is pointless. You can simply use str directly instead of creating a copy of it. You can make it even shorter with: public static String reverse(String str) { int sep = str.indexOf(' '); return sep >= 0 ? reverse(str.substring(sep+1)) + " " + str.substring(0,sep) : str; }
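The same split-recurse-append idea reads almost identically in other languages; for comparison, a Python rendition of the answer's short form (a sketch, name mine):

```python
def reverse_words(s):
    # Recurse on everything after the first space, then append the first word.
    sep = s.find(" ")
    return s if sep < 0 else reverse_words(s[sep + 1:]) + " " + s[:sep]

print(reverse_words("Cat is running"))  # running is Cat
```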
Q: Trying to send an email in PHP only after the submit button is pressed and the form is valid I am new to PHP and currently getting back to HTML. I have made a form and have the data sent and validated by PHP, but I am trying to send the email to myself only after the data has been validated and is correct. Currently if the page is loaded I think it sends an email, and it will send whenever I hit submit without the data being correct. Here is where I validate the data: <?php //Set main variables for the data. $fname = $lname = $email = $subject = $website = $likedsite = $findoption = $comments = ""; //Set the empty error variables. $fnameErr = $lnameErr = $emailErr = $subjectErr = $commentsErr = $websiteErr = $findoptionErr = ""; //Check to see if the form was submitted. if ($_SERVER["REQUEST_METHOD"] == "POST") { //Check the 'First Name' field. if (empty($_POST["fname"])) { $fnameErr = "First Name is Required."; } else { $fname = validate_info($_POST["fname"]); } //Check the 'Last Name' field. if (empty($_POST["lname"])) { $lnameErr = "Last Name is Required."; } else { $lname = validate_info($_POST["lname"]); } //Check the 'E-Mail' field. if (empty($_POST["email"])) { $emailErr = "E-Mail is Required."; } else { $email = validate_info($_POST["email"]); //Check if valid email. if (!filter_var($email, FILTER_VALIDATE_EMAIL)) { $emailErr = "Invalid E-Mail Format."; } } //Check the 'Subject' field. if (empty($_POST["subject"])) { $subjectErr = "Subject is Required."; } else { $subject = validate_info($_POST["subject"]); } //Check the 'Website' field. if (empty($_POST["siteurl"])) { $website = ""; } else { $website = validate_info($_POST["siteurl"]); //Check if valid URL. if (!preg_match("/\b(?:(?:https?|ftp):\/\/|www\.)[-a-z0-9+&@#\/%?=~_|!:,.;]*[-a-z0-9+&@#\/%=~_|]/i",$website)) { $websiteErr = "Invalid URL."; } } //Check the 'How Did You Find Us' options. 
if (empty($_POST["howfind"])) { $findoptionErr = "Please Pick One."; } else { $findoption = validate_info($_POST["howfind"]); } //Check the comment box. if (empty($_POST["questioncomments"])) { $commentsErr = "Questions/Comments are Required."; } else { $comments = validate_info($_POST["questioncomments"]); } //Pass any un-required data. $likedsite = validate_info($_POST["likedsite"]); } function validate_info($data) { $data = trim($data); $data = stripslashes($data); $data = htmlspecialchars($data); return $data; } ?> Sorry it's a little lengthy. Here is where I try to send the email. I have tried two different attempts and both have the same result. <?php if (!empty($fnameErr) || !empty($lnameErr) || !empty($subjectErr) || !empty($emailErr) || !empty($commentErr) || !empty($websiteErr) || !empty($findoptionErr)) { echo "Sent!!"; } else { echo "Not Sent!!"; } //Make the message. $message = " First Name: $fname.\n Last Name: $lname.\n Website: $website\n Did They Like the Site? $likedsite.\n How They Found Us. $findoption.\n Question/Comments:\n $comments. "; $message = wordwrap($message, 70); $headers = "From: $email"; mail("[email protected]", $subject, $message, $headers); ?> Once again sorry for the length. Thanks in advance. Also sorry if this is a duplicate question or not described well enough; I am new to Stack Overflow. A: Please try: <?php //Set main variables for the data. $fname = $lname = $email = $subject = $website = $likedsite = $findoption = $comments = ""; //Set the empty error variables. $fnameErr = $lnameErr = $emailErr = $subjectErr = $commentsErr = $websiteErr = $findoptionErr = ""; //Initialize variable used to identify form is valid OR not. $formValid = true; //Check to see if the form was submitted. if ($_SERVER["REQUEST_METHOD"] == "POST") { //Check the 'First Name' field. 
if (empty($_POST["fname"])) { $formValid = false;//Form not validate $fnameErr = "First Name is Required."; } else { $fname = validate_info($_POST["fname"]); } //Check the 'Last Name' field. if (empty($_POST["lname"])) { $formValid = false;//Form not validate $lnameErr = "Last Name is Required."; } else { $lname = validate_info($_POST["lname"]); } //Check the 'E-Mail' field. if (empty($_POST["email"])) { $formValid = false;//Form not validate $emailErr = "E-Mail is Required."; } else { $email = validate_info($_POST["email"]); //Check if valid email. if (!filter_var($email, FILTER_VALIDATE_EMAIL)) { $formValid = false;//Form not validate $emailErr = "Invalid E-Mail Format."; } } //Check the 'Subject' field. if (empty($_POST["subject"])) { $formValid = false;//Form not validate $subjectErr = "Subject is Required."; } else { $subject = validate_info($_POST["subject"]); } //Check the 'Website' field. if (empty($_POST["siteurl"])) { $website = ""; } else { $website = validate_info($_POST["siteurl"]); //Check if valid URL. if (!preg_match("/\b(?:(?:https?|ftp):\/\/|www\.)[-a-z0-9+&@#\/%?=~_|!:,.;]*[-a-z0-9+&@#\/%=~_|]/i",$website)) { $formValid = false;//Form not validate $websiteErr = "Invalid URL."; } } //Check the 'How Did You Find Us' options. if (empty($_POST["howfind"])) { $formValid = false;//Form not validate $findoptionErr = "Please Pick One."; } else { $findoption = validate_info($_POST["howfind"]); } //Check the comment box. if (empty($_POST["questioncomments"])) { $formValid = false;//Form not validate $commentsErr = "Questions/Comments are Required."; } else { $comments = validate_info($_POST["questioncomments"]); } //Pass any un-required data. $likedsite = validate_info($_POST["likedsite"]); } //If every variable value set, send mail OR display error... if (!$formValid){ echo"Form not validate..."; } else { //Make the message. $message = " First Name: $fname.\n Last Name: $lname.\n Website: $website\n Did They Like the Site? $likedsite.\n How They Found Us. 
$findoption.\n Question/Comments:\n $comments. "; $message = wordwrap($message, 70); $headers = "From: $email"; $sendMail = mail("[email protected]", $subject, $message, $headers); if($sendMail){ echo "Mail Sent!!"; } else { echo "Mail Not Sent!!"; } } function validate_info($data) { $data = trim($data); $data = stripslashes($data); $data = htmlspecialchars($data); return $data; } ?> I edited my answer as per some changes. Now this code only sends mail if the form's required fields are not empty and all field values are valid as per your validation. Let me know if there is any concern!
Q: django trailing slash base url I'm serving my Django application under a URL prefix in WSGI: WSGIScriptAlias /rp1 /var/www/reports/app.wsgi WSGIPythonPath /var/www/reports/ DocumentRoot /var/www/reports/ <Directory /var/www/reports> Order deny,allow Allow from all </Directory> I access this application at this URL: 172.30.12.37/rp1. What is the best way to add the /rp1 prefix to all my URLs? A: You shouldn't need to do anything special in urls.py. The '/rp1' prefix will automagically be accommodated. If, though, you are constructing redirects from code/templates, you do need to make sure you use the proper APIs to construct the URL rather than hardcoding the path. Some settings in the Django settings module, such as LOGIN_URL, do need to explicitly include the prefix though. So, be more specific about what the actual problem you are having is.
Q: Separating lettuce seeds I grew some good lettuce this year, cos (white seed) and crutterbunch (dark seed). I like the crutterbunch better and it is sometimes tough to find the seed in the store, so I made sure to cut all the cos before it flowered and let the other go to seed. No other lettuces local to me within a mile at least. As individual flower heads ripened and showed fluff I picked them carefully and saved some good black/brown seed. It was easy to sieve and blow off chaff, and from these I saved pretty much 100% good dark seed so I have lots for next year. I got fed up with this slow process, decided to upscale the production and waited for the flowering head to mature much more, and then just cut the whole head with lots of stalk and brought it in the house to dry. After much threshing and drying the winnowing process produced the mix as in the picture: As you can see there is a lot of immature seed mixed in with the fully mature dark seed and a bit of chaff. Quite a different result from the painstaking careful selection of fully ripe seed. I'm now faced with how to separate the pepper from the salt. I tried setting up a blower but it did not work; the salt fell in the same place as the pepper. Experimenting with water separation (heavy seed sinks, immature seed floats) only works partly and was very messy. I could have waited longer for the heads to mature fully but was trying to be careful not to have lettuce all over the garden next year. For my own use I could also just sow the white along with the dark in confidence that the white will not germinate. But how to separate the white from the dark? A: Seeds are rounder and harder than chaff. Pour the stuff slowly onto a smooth inclined board. With a little shaking, you should be able to find an angle where most of the seeds roll down, and most of the chaff stays near where you poured it. Take it slow, and clear off the chaff regularly. It may well take several passes to get good separation. 
Unripe seeds likely sink in water, or float. The ripe ones likely stay suspended easily. Try a small batch to see. Skim off the float to separate, whichever thing happens. You'll have to dry the ripe seed again, but if you don't let it sit too long, should not get germination. Some such separation protocols call for sugar solutions of different density to aid the separation. I've tried that, but you end up using a lot of sugar to get a sticky mess that needs cleaning up. So: Hardness, Roundness, Density.
Q: Building a diversified portfolio at market high I am a Canadian, setting up an ETF-based RRSP in Questrade. Right now I have 70% of my portfolio in Canadian bonds, the other 30% of my portfolio is in cash, in USD. I am modelling my portfolio after the Canadian Couch Potato's conservative portfolio, so the 30% currently in cash was destined for equity. However, I can hardly stand the idea of buying equity right now, when it is at 52-week high nearly worldwide. In fact US equity (SPY) is at its all-time high. I am also worried that there is political risk in the US, to say it mildly. In some ways I think it makes no difference, and that everyone that currently has equity and is not selling it is making the same choice I would be making (choosing to hold equity rather than something else). What do I do? Do I just bite the bullet and buy equity ETFs to round out my portfolio, despite the fact that the markets are at such a high? A: Considering that the market has grown consistently in the macro sense (the regression curve slopes up) the market is often at an "all-time high". So the only reason you would not want to invest in equities is if you have a short investment horizon and think there is a correction soon, in which case you'd buy when equities are down relative to today. The answer really depends on your investment horizon and how much risk you are willing to take in exchange for higher returns in the long run. The longer your investment horizon, the greater the chance that the market will gain over that time. So unless you have a very short investment horizon and no appetite for risk, investing now or waiting for a correction (i.e. trying to "time the market") probably won't make a difference. Sure you might get lucky and be able to "buy the low", but you'll be just as likely to miss out on future gains while you wait for a drop. 
A: Your initial instincts are correct, that if the money was already invested a long time ago, it is the same outcome as if you invest it now. If the market is due for a correction, that will affect your portfolio the same whether you bought yesterday or ten years ago. The only thing that changes when your $10,000 investment drops by 10% to $9,000 is the psychology of knowing that you paid $5,000 ten years ago verses $10,000 now. If you paid ten years ago, a $5,000 gain becomes a $4,000 gain, which isn't as good, but still feels good. If you paid yesterday, you see the whole $1,000 loss for what it is. It should be the same pain either way. As long as you have appropriately accounted for the risks you perceive in the market, any market moves will affect you the same way no matter what the cost basis was. Hopefully by rebalancing your portfolio after a correction, you position yourself for greater gains during the subsequent recovery.
Q: People's reputation vanishing So this one person has a ton of badges but one reputation point! I don't think this is possible unless there's A. a bug, or B. (unlikely) his questions got flagged as SPAM. I've seen this happen to a lot of people and it's really annoying because I can't see people's reputation. Please fix. I have a picture of it here... My real question is: why does this happen? Thanks. This has happened to about ten profiles I clicked on. A: This is not a bug. Retrosaur normally has tens of thousands of reputation points and earned those badges through normal use of the site. Right now this user is on a temporary suspension, so the rep was temporarily set to 1 by the system. You can see the suspension info by clicking on the user's profile page. This account is temporarily suspended to cool down. The suspension period ends on Jul 9 at 23:22. Once the suspension ends, the user's reputation points will go back to normal. The specific reasons why a user was suspended are private between the moderators and the suspended user, but in general it is because the user has a pattern of not putting in the effort to learn and improve over time or a pattern of disruptive behavior. More information about temporary suspension can be found in this blog post.
Q: Aggregating dictionary using datetime I have a dictionary WO of the format: WO = {datetime: {'V1', 'V2', 'V3', 'V4'}} Where datetime is the key of the (example) format: datetime.date(2014, 6, 20) And V1 through V4 are lists containing floating values. Example: WO = {datetime.date(2014, 12, 20): {'V1': [11, 15, 19], 'V2': [12, 3, 4], 'V3': [50, 55, 56], 'V4': [100, 112, 45]}, datetime.date(2014, 12, 21): {'V1': [10, 12, 9], 'V2': [16, 13, 40], 'V3': [150, 155, 156], 'V4': [1100, 1132, 457]}, datetime.date(2014, 12, 22): {'V1': [107, 172, 79], 'V2': [124, 43, 44], 'V3': [503, 552, 561], 'V4': [1000, 1128, 457]}} If I want to aggregate values in V1 through to V4 according to the week for a given date, for example: my_date = datetime.date(2014, 5, 23) For this given date, aggregate all values in V1 through to V4 for this week, where the week starts from Monday. year, week, weekday = datetime.date(my_date).isocalendar() This line gives me the week and weekday for this particular date. If I have a function as: def week(date): ''' date is in 'datetime.date(year, month, date)' format This function is supposed to aggregate values in 'V1', 'V2', 'V3' and 'V4' for a whole week according to the parameter 'date' ''' How should I proceed next to define such a function? A: From what I understood, you want to do some manipulation over all V1...V4 values for the week containing a given date. First, find the Monday (week start) of the given date: year, week, weekday = my_date.isocalendar() last_monday_date = my_date - datetime.timedelta(days = weekday - 1) would give you the date of that week's Monday. Then you can use this for a date range over the week days: Creating a range of dates in Python And lastly, in the date-range for loop, iterate over the WO values and get your result.
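Putting the answer's pieces together, a complete sketch of the week function (assuming, as the question implies, that "aggregate" means concatenating the V1-V4 lists across the Monday-to-Sunday week containing the date):

```python
import datetime

def week(WO, date):
    """Concatenate V1..V4 across the Mon-Sun week containing `date`."""
    weekday = date.isocalendar()[2]              # 1 = Monday ... 7 = Sunday
    monday = date - datetime.timedelta(days=weekday - 1)
    agg = {"V1": [], "V2": [], "V3": [], "V4": []}
    for i in range(7):
        day = monday + datetime.timedelta(days=i)
        for key, values in WO.get(day, {}).items():
            agg[key].extend(values)
    return agg
```

With the example WO above, week(WO, datetime.date(2014, 12, 20)) picks up the 20th and 21st (Saturday and Sunday of the week starting Monday the 15th) but not the 22nd, which begins the next week.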
Q: Are all OpenPGP public key servers equal? An error occurs when I execute apt-get update. W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://download.virtualbox.org jessie InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY A2F683C52980AECF I found an answer: gpg --keyserver key_server_name --recv-keys A2F683C52980AECF gpg --armor --export A2F683C52980AECF| apt-key add - Among the many public key servers there are pool.sks-keyservers.net and keys.gnupg.net; are they equal? gpg --keyserver pool.sks-keyservers.net --recv-keys A2F683C52980AECF gpg --keyserver keys.gnupg.net --recv-keys A2F683C52980AECF Do the two commands take the same effect? A: Most OpenPGP key servers are integrated in the SKS key server pool and exchange keys among each other. You can have a look at the pool status pages to get an overview of the contained servers. pool.sks-keyservers.net resolves to a (weighted) random choice of those servers. Actually, keys.gnupg.net is a simple alias for pool.sks-keyservers.net (technically speaking, a CNAME to this domain): $ host keys.gnupg.net keys.gnupg.net is an alias for pool.sks-keyservers.net. [...] In the end, it does not matter which server you choose, as long as it is contained in the pool. Using pool.sks-keyservers.net is a safe choice, and keys.gnupg.net is equivalent. A typical example of a key server that is not synchronizing is the PGP Global Directory, which also performs a simple ownership verification of the mail addresses contained.
Q: Max Cardinality Integer Programming Problem

I browsed a lot on Google to find any similar solution to my problem below, particularly in the network flow literature, but couldn't find any similar problems. I think my problem can probably be solved by an appropriate integer programming formulation, but I can't exactly figure that out either.

Suppose I have $m$ variables, along with a matrix of weights between every pair of variables:

$\begin{matrix}
 & | & N_1 & N_2 & ... & N_m \\
\_ & \_ & \_ & \_ & \_ & \_ \\
N_1 & | & 0 & w_{1,2} & ... & w_{1,m} \\
N_2 & | & w_{1,2} & 0 & ... & w_{2,m} \\
... & | & ... & ... & ... & ... \\
N_m & | & w_{1,m} & w_{2,m} & ... & 0
\end{matrix}$

where $w_{i, j} = w_{j, i}$ (and if needed $0 \le |w_{j, i}| \le 1$).

I want to choose a maximum-cardinality set $\{N_k\}$ out of the $m$ variables such that the weights $w_{k_1, k_2}$ between any $N_{k_1}$ and $N_{k_2}$ selected in the set $\{N_k\}$ satisfy:

$-\beta \le w_{k_1, k_2} \le \beta$,

where $\beta$ is a constant provided to the problem (and if needed $|\beta| < 1$, otherwise we can obviously choose all $m$ variables).

Can someone help me determine whether this can be solved as (or reformulated into) any known network flow problem, or alternatively, whether there is an easy way to frame this as an integer program? So far, I have been able to write:

$\max \sum_{k=1}^m X_k$

$s.t.$

$-\beta \le w_{i,j} X_i X_j \le \beta \quad \forall i, j$

But I can't figure out how to separate the $X_i X_j$ in a linear way (to use an integer programming package).

A: The weights are entirely irrelevant. After receiving your $\beta$, any pair with $-\beta \leq w_{i, j} \leq \beta$ is an edge and any $N_k$ is a vertex, forming a graph. Then your problem is simply finding the maximum clique.
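To make the reduction concrete, here is a small sketch. The weight matrix and the value of beta are made up; the brute-force search is only viable for tiny m, since maximum clique is NP-hard in general, so for real instances you would hand the graph to a dedicated clique or MIP solver.

```python
from itertools import combinations

# Made-up symmetric weight matrix for m = 4 variables (zero diagonal).
W = [[0.0,  0.2, 0.9,  0.1],
     [0.2,  0.0, 0.3, -0.8],
     [0.9,  0.3, 0.0,  0.2],
     [0.1, -0.8, 0.2,  0.0]]
beta = 0.5

m = len(W)
# The answer's reduction: (i, j) is an edge iff -beta <= w_ij <= beta.
edges = {(i, j) for i, j in combinations(range(m), 2)
         if -beta <= W[i][j] <= beta}

# Maximum clique by brute force, largest subsets first.
best = max((S for r in range(m, 0, -1) for S in combinations(range(m), r)
            if all(pair in edges for pair in combinations(S, 2))),
           key=len)
print(best)  # (0, 1): at most two of these variables are pairwise compatible
```

With these weights, the pairs (0,2) and (1,3) violate the beta band, which rules out every triangle, so the maximum feasible set has size 2.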
Q: Speeding up JPG "videos" pulled from a proxied web cam?

I have a snippet of JavaScript that is used to pull a series of JPGs from a webcam that is inside our network. Right now it's being reverse-proxied through Apache2 and embedded in our website. The thing is that to keep the display from "freezing" with timeouts, I've had to adjust the consultant's snippet to 2 seconds. Someone suggested that it would be possible to cache some of the data better and speed it up with the proper JavaScript coding. What I have now is:

<center>
<img id="video" src="../../img/unconnected.jpg" alt="video" height="480" width="640" />
</center>
<script type="text/javascript">// <![CDATA[
var sourceImage = "http://www.ourwebsite.com/video/link/image.cgi?v=jpg:640x480&seq=1";
function show() {
    document.getElementById("video").src = sourceImage + Math.random();
    setTimeout(show, 2000);
}
show();
// ]]></script>

Is there a way to do this?

A: It's not gorgeous, but it takes care of the by-the-time-an-image-arrives-it's-expired problem: the next frame is requested only once the current one has loaded (or failed), instead of on a fixed timer. Bonus points for writing an adaptive algorithm that adjusts the timing so that each image is fetched in advance but arrives a short while before it expires ;)

sourceImage = "http://www.yourwebsite.com/video/image.cgi?v=jpg:640x480&seq=1";
function show() {
    document.getElementById("video").src = sourceImage + Math.random();
}
document.getElementById("video").onload = show;
document.getElementById("video").onerror = show;
show();
Q: Matching an expression in an input, counting matches and returning each entire line

I have an input like:

"Hey bro bro its amazing
bazinga"

I used this to count how many occurrences I found:

var count = (this.script.match(/bro/g) || []).length;
console.log('Total: ' + count);

It returns 2, which is perfect. But I would like to know if there is a simple way to get the entire line for each match. So the output should be:

Total: 2
Hey bro bro its amazing

A: /^.*?bro.*?$/gm (with the multiline (m) flag) will match the entire line whenever an occurrence of bro is found on it. Then you just need to return the length of the array for the total number of matching lines.

var str = `Hey bro bro is
bro amazing
bazinga`;

var rows = (str.match(/^.*?bro.*?$/gm) || []);
console.log(rows);
console.log(rows.length);
Q: sqlite3 and Python: date modifier

When modifying the date of a column in Python with sqlite3, I receive the None value.

datetime(Date, '+4 months')

works, but

datetime(Date, '+' || Test || 'months')

does not and sends me back the None value. Test is an integer column. Any idea?

A: Add a space before months:

% sqlite3
sqlite> select datetime('2018-02-13', '+' || 4 || ' months');
2018-06-13 00:00:00
sqlite> select datetime('2018-02-13', '+' || 4 || 'months');
sqlite>
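The same fix, reproduced from Python's sqlite3 module. The table and column names here just mirror the question; the no-space query is included only to show the failure mode (the answer's sqlite3 session shows it yielding NULL, which Python surfaces as None).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Date TEXT, Test INTEGER)")
conn.execute("INSERT INTO t VALUES ('2018-02-13', 4)")

# Missing space: '+4months' is not recognized as a modifier in the
# answer's SQLite session, so the expression comes back as None.
bad = conn.execute(
    "SELECT datetime(Date, '+' || Test || 'months') FROM t").fetchone()[0]

# With the space, '+4 months' is parsed correctly.
good = conn.execute(
    "SELECT datetime(Date, '+' || Test || ' months') FROM t").fetchone()[0]
print(good)  # 2018-06-13 00:00:00
```

Note that `'+' || Test` works because SQLite implicitly casts the integer to text during string concatenation; only the missing space breaks the modifier.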
Q: What are the possible linking matrices of a quasi-positive link?

I was surprised recently to come across a 3-component link where the linking number of two of the components was negative. For a while I thought I had made a mistake; then I thought a little more and realized that there was no particular reason that the linking numbers had to be all positive. The linking matrix I found was

$$\begin{pmatrix} * & 3 & 3 \\ 3 & * & -1 \\ 3 & -1 & * \end{pmatrix}.$$

Here is the link. (I haven't attempted to find a quasi-positive representative yet; the proof that it's quasi-positive was pretty indirect.)

A: There seem to be hardly any restrictions on the linking matrices you can get from a quasipositive link. For example, this construction shows that everything except the first row and column of the linking matrix can be completely arbitrary:

Start with any link $L$, represent it as the closure of a braid, and then draw the braid with horizontal and vertical segments so that horizontal segments always pass over vertical segments. Then a typical small piece of the braid will look like this:

To this picture, add one new strand that generally runs along the left edge of the braid, but near a horizontal segment does something like this:

Then the resulting braid is strongly quasipositive, and its closure is a link $L' = L \cup U$ consisting of $L$ together with an unknot winding around it. Applying this construction to the negative Hopf link gives something close to your link, except the extra strand only needs to wind around twice.

The linking matrices coming from this construction always have nonnegative row sums, even if the entries can be very negative, and I think it should be possible to show that the same holds for any strongly quasipositive link, using the fact that a strand of a quasipositive braid can only participate in negative crossings while it's moving to the right. However, this doesn't work for non-strongly quasipositive links. This two-component link is the closure of a quasipositive braid, but has linking number $-1$:
Q: merge replication - can't create snapshot - timeout - SQL Server 2008

I have a SQL Server 2008 database, and I need merge replication because I want to sync with mobile devices afterwards. So I created a replication, but when it comes to starting the snapshot agent, the agent tries to start for about 20 minutes and then shows the message:

The replication agent has not logged a progress message in 10 minutes. This might indicate an unresponsive agent or high system activity. Verify that records are being replicated to the destination and that connections to the Subscriber, Publisher, and Distributor are still active.

There aren't any other error messages, neither in the snapshot agent status window nor in the agent log window. I don't have the administrator of the domain, but I do have the local administrator and a domain user with admin privileges. Both have all rights to the database and are in the access list of the replication. The SQL Server Agent runs under the local administrator account, and there are 3 merge replications on the server that are working. The job also runs under the local administrator.

Thank you for your help,
Karl

A: So it works again... Maybe someone else will have the same issue one day, so I post the solution here:

I researched on the server and found out that the SQL Server service is running under a local user. The reason for this is that there were problems with the backup system used by our customers, so they changed it years ago. Because of the local user account, a 15404 error occurs.

Knowing that I mustn't use domain accounts, I also solved the initial problem with my snapshot agent. I searched for hours (nearly days ;) ) and it was just this little change: When the replication is created, the job is created too. The job has three steps. The job owner is the local admin, as it is for the server agent service. But the second step of my job (replication snapshot) has one setting: "Run as". And by default this isn't the job owner but the user running the creation, in my case my domain account. Now that I set it to the local administrator as well, everything works fine again.

Thanks,
Karl
Q: Do structured bindings and forwarding references mix well?

I know I can do

auto&& bla = something();

and depending on the constness of the return value of something, I'd get a different type for bla. Does this also work in the structured bindings case, e.g.

auto&& [bla, blabla] = something();

I would guess so (structured bindings piggy-back on auto initializers, which behave like this), but I can't find a definitive yes.

Update: Preliminary tests seem to do what I expect (derive the constness properly):

#include <tuple>

using thing = std::tuple<char, int*, short&, const double, const float&>;

int main()
{
    char c = 0;
    int i = 1;
    short s = 2;
    double d = 3.;
    float f = 4.f;
    thing t{c, &i, s, d, f};

    auto&& [cc, ii, ss, dd, ff] = t;
    c = 10;
    *ii = 11;
    ss = 12;
    dd = 13.;
    ff = 14.f;
}

Live demo, gives errors as I'd expect if auto&& is doing its job:

main.cpp: In function 'int main()':
main.cpp:20:10: error: assignment of read-only reference 'dd'
     dd = 13.;
          ^~~
main.cpp:21:10: error: assignment of read-only reference 'ff'
     ff = 14.f;

I'd still like to know exactly where this behaviour is specified.

Note: Using "forwarding references" to mean this behaviour might be stretching it, but I don't have a good name to give the const-deduction part of auto&& (or a template's T&&, for that matter).

A: Yes. Structured bindings and forwarding references mix well†. In general, any place‡ you can use auto, you can use auto&& to acquire the different meaning. For structured bindings specifically, this comes from [dcl.struct.bind]:

Otherwise, e is defined as-if by

attribute-specifier-seq(opt) decl-specifier-seq ref-qualifier(opt) e initializer ;

where the declaration is never interpreted as a function declaration and the parts of the declaration other than the declarator-id are taken from the corresponding structured binding declaration.

There are further restrictions on these sections in [dcl.dcl]:

A simple-declaration with an identifier-list is called a structured binding declaration ([dcl.struct.bind]). The decl-specifier-seq shall contain only the type-specifier auto and cv-qualifiers. The initializer shall be of the form "= assignment-expression", of the form "{ assignment-expression }", or of the form "( assignment-expression )", where the assignment-expression is of array or non-union class type.

Putting it together, we can break down your example:

auto&& [bla, blabla] = something();

as declaring this unnamed variable:

auto  &&  e = something();
~~~~                         decl-specifier-seq
      ~~                     ref-qualifier
              ~~~~~~~~~~~~   initializer

The behavior that follows is derived from [dcl.spec.auto] (specifically here). There, we do deduction against the initializer:

template <typename U> void f(U&& );
f(something());

where the auto was replaced by U, and the && carries over. Here's our forwarding reference. If deduction fails (which it could only do if something() were void), our declaration is ill-formed. If it succeeds, we grab the deduced U and treat our declaration as if it were:

U&& e = something();

which makes e an lvalue or rvalue reference, const-qualified or not, based on the value category and type of something(). The rest of the structured bindings rules follow in [dcl.struct.bind], based on the underlying type of e, whether or not something() is an lvalue, and whether or not e is an lvalue reference.

† With one caveat. For a structured binding, decltype(e) is always the referenced type, not the type you might expect it to be. For instance:

template <typename F, typename Tuple>
void apply1(F&& f, Tuple&& tuple) {
    auto&& [a] = std::forward<Tuple>(tuple);
    std::forward<F>(f)(std::forward<decltype(a)>(a));
}

void foo(int&&);

std::tuple<int> t(42);
apply1(foo, t); // this works!

I pass my tuple in as an lvalue, which you'd expect to pass its underlying elements in as lvalue references, but they actually get forwarded. This is because decltype(a) is just int (the referenced type), and not int& (the meaningful way in which a behaves). Something to keep in mind.

‡ There are two places I can think of where this is not the case. In trailing-return-type declarations, you must use just auto. You can't write, e.g.:

auto&& foo() -> decltype(...);

The only other place I can think of where this might not be the case is part of the Concepts TS, where you can use auto in more places to deduce/constrain types. There, using a forwarding reference when the type you're deducing isn't a reference type would be ill-formed, I think:

std::vector<int> foo();
std::vector<auto> a = foo();    // ok, a is a vector<int>
std::vector<auto&&> b = foo();  // error, int doesn't match auto&&
Q: Fictitious forces and $\omega$

I have been studying fictitious forces, such as the centrifugal force and Coriolis force. The equation for the centrifugal force is given by:

$$F_{centrifugal}=-m\omega\times(\omega\times r)$$

My question is this: what does $\omega$ represent? I can see three possible options for a situation where the origins of the inertial frame and non-inertial frame do not coincide:

1. The angular velocity of the origin of the non-inertial frame about the axis of the inertial frame.

2. The angular velocity of the axes of the non-inertial frame about its own origin.

3. A combination of the above.

If it is 3, please can you explain how we combine them.

A: Option 4, none of the above. Your option 1 is wrong because points don't rotate. Your option 2 is closer to correct, but ultimately still wrong. You're overly hung up on points (the origin).

It might help to get a handle on what "rotation" is. Points don't rotate. Better said, a rotated point is indistinguishable from the original. What about one-dimensional space? Rotating a line about itself doesn't change the coordinate of a point on that line one iota. Once again, rotation doesn't quite make sense here.

Two-dimensional space is where the concept of rotation begins, and indeed, it helps to look at rotations in higher-dimensional spaces as composites of two-dimensional rotations. Only one parameter ("angle") is needed to describe a rotation in two-dimensional space. Rotation is not a two-vector in two-dimensional space. Rotation is not a four-vector in four-dimensional space. Six parameters are needed to describe rotations in four-dimensional space, ten in five-dimensional space. Our three-dimensional space is the only space where the number of parameters needed to describe a rotation is equal to the dimensionality of the space. This is one of the reasons why we can treat angular velocity in three-dimensional space as if it were a vector.

Another key reason is the concept of an axis of rotation. That this axis must exist is the key point of Euler's rotation theorem. Any sequence of rotations in three-dimensional space can be described in terms of a single rotation about some axis by some angle. That axis specifies a direction and the rotation angle specifies a magnitude. Direction and magnitude: that's a vector!

One reason I said "option 4, none of the above" is that you appeared to be a bit hung up on the origin. The origin doesn't really matter. It might help to visualize this by making some coordinate-system markers. It's easy, something like these:

Imagine making a bunch of them. Next, go find a playground with a kids' roundabout:

Spread your markers about the roundabout. Put one dead center, put others elsewhere. Now give it a spin. Think of each of those markers as representing a coordinate system. The coordinate system at the center of the roundabout is undergoing pure rotation. All of the others are undergoing a combination of rotation and acceleration.

The origin does matter when it comes to the velocity and acceleration of the origin of the frame, but it doesn't matter when it comes to angular velocity. All of those reference frames share the same angular velocity vector.
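As a concrete check of both the formula and the "origin doesn't matter" point, here is a small sketch. The mass, spin rate, and positions are made-up numbers, and the cross product is written out by hand to stay dependency-free.

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

m = 2.0                     # kg (made-up)
omega = (0.0, 0.0, 3.0)     # rad/s, spin about the z-axis
r = (0.5, 0.0, 0.0)         # m, measured from any point on the rotation axis

# F = -m * omega x (omega x r): radially outward, magnitude m * w^2 * r_perp
F = tuple(-m * c for c in cross(omega, cross(omega, r)))
print(F)  # 2 * 3**2 * 0.5 = 9 N outward along x

# Sliding r along the rotation axis changes nothing, because only the
# component of r perpendicular to the axis enters omega x (omega x r):
r_shifted = (0.5, 0.0, 7.0)
F2 = tuple(-m * c for c in cross(omega, cross(omega, r_shifted)))
```

The second computation is the answer's point in miniature: every marker on the roundabout shares the same angular velocity vector, and the centrifugal term cares only about the perpendicular distance from the axis, not about where the frame's origin sits along it.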
Q: Visual Studio 2012 - How do you delete a new file?

I have Visual Studio 2012 with TFS. I created a new file (call it "x.h") and before I checked it in I decided I didn't need it. MSDN makes it sound so simple:

In either Solution Explorer or Source Control Explorer, browse to the folder or file that you want to delete. Select the items that you want to delete, open their shortcut menu, and choose Delete. When you are ready, check in your changes.

So I went to Source Control Explorer, right-clicked the file, and chose Delete. It was removed from source control and my pending changes but is still on disk and in Solution Explorer. When I right-click the file in Solution Explorer, Delete is not an option and Exclude From Project is disabled. I might have more luck if I check it in first and then delete it, but that seems very unnecessary. Hopefully I'm just missing something obvious!

How do I delete this new file ("x.h") from my solution?

A: Yep. It was something obvious. You can't delete files from the solution while it is building. I just tried again and the Delete option magically reappeared; I realized that a build had finished since I last tried.

In short, there are three different angles from which a user can try to delete a file while a build is occurring, and the behavior is different for each:

1. Undo the file add from Pending Changes: nice error messages are given.

2. Delete the file from Source Control Explorer: it lets you remove the file as I described in the question but leaves it on disk and in Solution Explorer (same behavior regardless of whether or not a build is happening).

3. Delete the file from Solution Explorer: it quietly prevents you from shooting yourself in the foot and doesn't explain why.

The right way to do this is to cancel the build (or let it complete), then delete from Source Control Explorer AND from Solution Explorer.
Q: Kettle ETL: how to convert to a time data type

I have a Table input step that gets some data from a SQL Server table. One field has values of type time, e.g. 02:22:57.0000000. The destination table (Table output step) is a PostgreSQL table and has data type time for that field. But PDI seems to think the time from the source table is of type String and generates an error:

ERROR: column "contact_time" is of type time without time zone but expression is of type character varying

I tried using a Select values step, but there is no time type, only Date and Timestamp. What should I do?

A: You can use a Select values step and, in the Meta-data tab, select Type as Timestamp and Format as HH:mm:ss. This will convert your string input to a timestamp.

Hope this helps :)
Q: Gnuplot: Second legend in multiplot behind the grid

I'm using a multiplot with two boxes for two sets of data as legends. However, I came across the following problem: when using a grid, the second box is always behind the grid. Using the following code (borrowed from another question on SE and modified):

set term pngcairo
set output "legends.png"
set multiplot

# make a box around the legend
set key box

# fix the margins, this is important to ensure alignment of the plots.
set lmargin at screen 0.15
set rmargin at screen 0.98
set tmargin at screen 0.90
set bmargin at screen 0.15

set xrange[0:2*pi]
set yrange[-1:1]
set grid

# main plot command
plot sin(x) title "sinus"

# turn everything off
unset xlabel    #label off
unset ylabel
set border 0    #border off
unset xtics     #tics off
unset ytics
#unset grid     #grid off

set key at graph 0.5, 0.5
plot cos(x) ls 2 lw 2 title "cosinus"

The output you get is:

I would like the second box to be opaque to the grid, just like the first one. The command #unset grid doesn't do anything, since there is no grid if you disable xtics and ytics.

A: Use opaque at the second key:

...
set key at graph 0.5, 0.5
set key opaque
...
Q: :first-child, :last-child not working

I have a graphic element that I want to position on top of another element. Because I do not want to create CSS selectors that have no re-use, I have applied pseudo-classes. The :first-child element that is supposed to sit on top of the other is hidden by the other element. Here is my JS Bin:

HTML

<div class="wrapper">
  <div></div>
  <div></div>
</div>

CSS

.wrapper {
  position: relative;
}

.wrapper:first-child {
  position: absolute;
  width: 50px;
  height: 50px;
  top: 25px;
  left: 25px;
  background: red;
  z-index: 100;
}

.wrapper:last-child {
  position: relative;
  width: 100px;
  height: 100px;
  background: orange;
}

Added: Thanks for all your input. Now I understand where I got this wrong (the pseudo-class is to be applied to the child element). I made up the sample to isolate my problem, but I am actually doing this for <img> elements, where one <img> sits on top of the other <img>, and the HTML structure is

<div class="brand">
  <a href="#"><img src=""/></a>
  <a href="#"><img src=""/></a>
</div>

and the CSS is

.brand img:last-child {
  position: relative;
}

.brand img:first-child {
  position: absolute;
  top: 10px;
  left: 15px;
}

They are not working like the simple div example. I guess that is because there is another anchor element that needs to be taken into account.

A: You should use .wrapper > div:first-child and .wrapper > div:last-child instead.

.wrapper > div:first-child {
  position: absolute;
  width: 50px;
  height: 50px;
  top: 25px;
  left: 25px;
  background: red;
  z-index: 100;
}

.wrapper > div:last-child {
  position: relative;
  width: 100px;
  height: 100px;
  background: orange;
}

As per your edit, you should do it like this:

.brand a:last-child img {
  position: relative;
}

.brand a:first-child img {
  position: absolute;
  top: 10px;
  left: 15px;
}
Q: List of lists python: combine list elements that are the same size

I have a list of lists in Python:

[[1],[2],[3,4],[5,6],[7,8,9,10,11],[12,13,14,15,16],[17]]

I would like to combine the sublists into a single sublist if they hold the same number of elements:

[[1,2,17],[3,4,5,6],[7,8,9,10,11,12,13,14,15,16]]

Is there a simple way of doing this?

A: Use groupby and chain from itertools.

Ex:

from itertools import groupby, chain

lst = [[1],[2],[3,4],[5,6],[7,8,9,10,11],[12,13,14,15,16],[17]]
result = [list(chain.from_iterable(v))
          for k, v in groupby(sorted(lst, key=lambda h: len(h)), lambda x: len(x))]
print(result)

Output:

[[1, 2, 17], [3, 4, 5, 6], [7, 8, 9, 10, 11, 12, 13, 14, 15, 16]]

sorted(lst, key=lambda h: len(h)) sorts your list by len; then groupby groups the sorted list by the len of each sublist.
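If you'd rather avoid the sort-before-groupby requirement (groupby only merges adjacent runs, which is why the answer sorts first), an equivalent single-pass sketch buckets sublists by length in a dict and then orders the output by length, matching the groupby version:

```python
from collections import defaultdict

lst = [[1], [2], [3, 4], [5, 6], [7, 8, 9, 10, 11], [12, 13, 14, 15, 16], [17]]

groups = defaultdict(list)
for sub in lst:
    groups[len(sub)].extend(sub)   # bucket each sublist by its length

result = [groups[k] for k in sorted(groups)]
print(result)
# [[1, 2, 17], [3, 4, 5, 6], [7, 8, 9, 10, 11, 12, 13, 14, 15, 16]]
```

This keeps elements in their original order within each bucket, same as the itertools version.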
Q: WiX: Check if Application Initialization is installed on 2008 R2

I need to check if Application Initialization is installed on a 2008 R2 server. The app is not installed as a feature; it is an IIS module that I downloaded from the following link. The problem I'm having is: where do its folders actually get placed, so that I can perform a search in my WiX project to see if they exist or not?

A: TL;DR: Look for the Version value in HKLM\SOFTWARE\Microsoft\IIS Extensions\Application Initilaization. The current version is 7.1.1636.0.

Full answer: Since this is an MSI installation package, you can open it using Orca and search for any registry key being created. Then in Orca, you open the Registry table and find the row with Registry=reg8BD5741527F144C70BDB7B0134BC7B84. In it, you will find the Key where the value will be created, the Name of it and the Value. This way, you can easily perform a registry search and evaluate whether the module is installed.

EDIT: To perform a search during launch and verify if the module is installed, add the following code:

<Property Id="MODULEINSTALLED">
  <RegistrySearch Id="IsModuleInstalled"
                  Root="HKLM"
                  Key="SOFTWARE\Microsoft\IIS Extensions\Application Initilaization"
                  Name="Version"
                  Type="raw" />
</Property>

Then use the property in a condition:

<Condition Message="This application requires Application Initialization module. Please install the Application Initialization module then run this installer again.">
  <![CDATA[Installed OR MODULEINSTALLED]]>
</Condition>
Q: Android supportsRtl conflict, suggested to add 'tools:replace'

I have an Android app and I've defined android:supportsRtl="false" (without it, the app was RTL and I don't want that). Now I'm using this library:

compile 'com.iceteck.silicompressorr:silicompressor:2.0'

and my code doesn't compile:

Error:Execution failed for task ':app:processDebugManifest'.
Manifest merger failed : Attribute application@supportsRtl value=(false) from AndroidManifest.xml:40:9-36
is also present at [com.iceteck.silicompressorr:silicompressor:2.0] AndroidManifest.xml:17:9-35 value=(true).
Suggestion: add 'tools:replace="android:supportsRtl"' to <application> element at AndroidManifest.xml:36:5-189:19 to override.

I'm not completely sure how I should implement the suggestion, or whether there is another way to solve it. I've tried

tools:replace="android:supportsRtl=false"

but it still doesn't compile. Thanks!

A: Ideally, you would use true for supportsRtl. And, ideally, the library would not be setting any value for supportsRtl. However, the instructions in the error message should be straightforward:

Suggestion: add 'tools:replace="android:supportsRtl"' to <application> element at AndroidManifest.xml:36:5-189:19 to override

Since we don't have your manifest, we have to guess as to what is on those lines. Most likely, you should add tools:replace="android:supportsRtl" to your <application> element, in addition to the android:supportsRtl="false" that you already have there.
Q: Angular guard service: inject service

I need to inject a service in a guard. This guard checks if the user was invited; if yes, he can access the route. To check this condition, I need to call a service which fetches this information from the DB. I have a cyclical dependency error. I understand that we shouldn't inject services in guards, but in this case I need to do it:

providers: [AuthService, HackService, HacksStorageService, AuthGuard, EmailGuard],

And the guard:

import { ActivatedRouteSnapshot, RouterStateSnapshot, CanActivate } from "../../../node_modules/@angular/router";
import { HacksStorageService } from "../shared/hacks-storage.service";

export class EmailGuard implements CanActivate {
  constructor(
    private hacksStorageService: HacksStorageService,
  ) {}

  canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot) {
    this.hacksStorageService.fetchHack(); // check if user was invited. should be in the participants array
    return true;
  }
}

I am pretty confused. Usually I used guards to see whether the user is logged in or not, so I usually imported stuff from Firebase, not from my own services, so there were no cyclical dependencies. Now I want to check a condition based on my own data. How can I inject my own data in the EmailGuard, if I am not allowed to inject services because of the cyclical dependency? Thanks.

A: You can inject services in guards. If your service returns synchronously, then you can just return immediately, like in your sample code. Otherwise, I did it like this (using Firebase auth):

import { Injectable } from '@angular/core';
import { CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot, Router } from '@angular/router';
import { Observable, of } from 'rxjs';
import { map, first } from 'rxjs/operators';
import { AngularFireAuth } from '@angular/fire/auth';
import { Paths } from './paths';

@Injectable({ providedIn: 'root' })
export class SignInGuard implements CanActivate {
  constructor(private afAuth: AngularFireAuth, private router: Router) { }

  canActivate(
    next: ActivatedRouteSnapshot,
    state: RouterStateSnapshot): Observable<boolean> | Promise<boolean> | boolean {
    return this.afAuth.user.pipe(
      first(),
      map(user => {
        if (user) {
          this.router.navigate([Paths.MAIN_PATH]);
          return false;
        } else {
          return true;
        }
      })
    );
  }
}
Q: Absolute path to TYPO3 RTE CSS

In the Page TSconfig I use the line:

RTE.default.contentCSS = fileadmin/template/css/rte.css

to link my RTE stylesheet. Is there any way to use an absolute path to an external file? When I try entering one, it doesn't seem to work.

A: This shouldn't be possible, as it would allow for XSS. Since CSS can execute JavaScript (thanks to IE), this would be a security concern. Subresource Integrity has only been available for about a year and thus is still not implemented in every browser still in use.

If your server is broken, fix it. If you can't fix it, abandon the project. If you can't abandon it, you could create an extension holding your viable configuration files, as extensions presumably work. All in all, "half of the installation works" always points to a permissions problem or a typo in the configuration. In the Install Tool there is a check for folder permissions which might help.
Q: Basic React code not producing any output

I just started with React and wrote my very first code, but unlike in the video (tutorial), no output is generated.

function People(prop){
  return(
    <div className="peer">
      <h1>{prop.nam}</h1>
      <h3>age: {prop.age}</h3>
    </div>
  );
}

ReactDOM.render(<peer name="Mark" age="20"/>, document.querySelector("#p1"));

.peer{
  border: 2px solid black;
  padding: 10px;
  width: 150px;
  box-shadow: 5px 10px blue;
}

<script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script>
<div id="p1"></div>

A: You're not passing your People component correctly in your render call. It should look something like this:

ReactDOM.render(<People name="Mark" age="20"/>, document.querySelector("#p1"));

Also worth noting: stylistically, you should name your prop parameter props. You're also missing an e in your name prop inside the component ({prop.nam} should be {prop.name}).
Q: find objects of specific class from a List<>

I've got a base class, say called Fruit. I then have a few child classes of that, say Banana : Fruit, Apple : Fruit, etc. I then have a list of objects of different types, Banana, Apple, whatever. That looks like this:

List<Fruit> f = new List<Fruit>{ new Banana(), new Banana(), new Apple(), new Banana() };

I want a function that can take a list of fruits as well as a type, and give me back a list with only that type's objects in it. So if I call find_obj(f, Banana) (or something), it should give me back a list containing only Bananas. I might be showing extreme ignorance here, and I apologise. Is this even possible? I know I could do something like this if I knew the class beforehand:

public List<Fruit> GimmeBanana(List<Fruit> f)
{
    List<Fruit> Output = new List<Fruit>{ };
    foreach (Fruit fr in f)
    {
        if (fr is Banana) {
            Output.Add(fr);
        }
    }
    return Output;
}

But I don't know how to make that work for any class.

A: Such a method already exists in the framework - OfType<T>:

List<Banana> bananas = f.OfType<Banana>().ToList();
Q: Prove that elements of Aut(G), as a subset of Sym(G), are bijective

I know that Aut(G) is defined as the set of all isomorphisms from a group G to itself. But I've also seen another, supposedly equivalent definition that states:

Aut(G) = {f in Sym(G) | f(g1g2) = f(g1)*f(g2)},

where * denotes permutation composition.

The condition above is the definition of a group homomorphism; how can I prove that any element of Aut(G) is also bijective?

A: Every element in $\;Sym(G)\;$ is bijective, but not necessarily a homomorphism. Thus, as any automorphism is, in particular, a bijection (being an isomorphism, as you wrote, it is injective and surjective), we have

Aut$\,(G)\subset Sym\,(G)$,

which means precisely that any automorphism of $\;G\;$ is trivially (by definition) a bijection.
Q: List which contains objects of different subclasses Now I have a class Animal, with three subclasses extending it: Dog, Cat, and Fish. class Dog extends Animal { public void bark(){} } class Cat extends Animal { public void catchMouse(){} } class Fish extends Animal { public void swim(){} } And I have a list: List<Animal> listOfAnimals = new ArrayList<>(); Then I use a static method to add objects of Dog, Cat and Fish to the list: public static void addAnimal(List<Animal> list, AnimalInfo info) { Animal animal = new Animal(); switch (info) { case 0: animal = new Dog(); break; case 1: animal = new Cat(); break; case 2: animal = new Fish(); break; } list.add(animal); } I call this static method 3 times and add a Dog object, a Cat object and a Fish object to the list in order. Now the list should contain a Dog instance, a Cat instance and a Fish instance. Now I want to call bark() on the Dog instance: list.get(0).bark(); But this will not work obviously. What is the best way to achieve this? Use: ((Dog) list.get(0)).bark(); ? (Note the parentheses: the cast must be applied before the method call.) A: I think utilizing inheritance here could be a good approach in this case, but I thought I'd share an alternative. Another approach here is to use the visitor pattern. This is especially good when you don't know what you need to do with an object when you declare its class, or if you have contextual behaviour (such as updating another object's state) that you want to keep separate from your Animal classes (Separation of Concerns) abstract class Animal { abstract void accept(AnimalVisitor visitor); } class Dog extends Animal { void bark() { ... } @Override void accept(AnimalVisitor visitor) { visitor.visit(this); } } class Cat extends Animal { void meow() { ... } @Override void accept(AnimalVisitor visitor) { visitor.visit(this); } } interface AnimalVisitor { void visit(Dog dog); void visit(Cat cat); } // Somewhere else...
AnimalVisitor voiceVisitor = new AnimalVisitor() { @Override void visit(Dog dog) { dog.bark(); } @Override void visit(Cat cat) { cat.meow(); } } animalList.get(0).accept(voiceVisitor);
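To make the sketch above concrete, here is a self-contained, compilable variant; the methods return strings instead of printing so the dispatch is easy to observe, and the class name VisitorDemo is purely illustrative:

```java
// Element hierarchy: each concrete Animal routes the visitor to the
// overload that matches its own static type (double dispatch).
abstract class Animal {
    abstract String accept(AnimalVisitor visitor);
}

class Dog extends Animal {
    String bark() { return "woof"; }
    @Override
    String accept(AnimalVisitor visitor) { return visitor.visit(this); }
}

class Cat extends Animal {
    String meow() { return "meow"; }
    @Override
    String accept(AnimalVisitor visitor) { return visitor.visit(this); }
}

interface AnimalVisitor {
    String visit(Dog dog);
    String visit(Cat cat);
}

public class VisitorDemo {
    // One visitor per behaviour; no instanceof checks or casts needed.
    static String voiceOf(Animal animal) {
        AnimalVisitor voice = new AnimalVisitor() {
            public String visit(Dog dog) { return dog.bark(); }
            public String visit(Cat cat) { return cat.meow(); }
        };
        return animal.accept(voice);
    }

    public static void main(String[] args) {
        java.util.List<Animal> animals =
                java.util.Arrays.asList(new Dog(), new Cat());
        for (Animal a : animals) {
            System.out.println(voiceOf(a));   // woof, then meow
        }
    }
}
```

The trade-off of the pattern is that adding a new Animal subclass means touching every visitor; the payoff is that adding a new behaviour is just one new visitor, with no changes to the hierarchy.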
Q: Issue with scroll in Visual Studio 2012 I have a problem with Visual Studio 2012: I cannot scroll using the scroll bar. It shows like this: Is there any suggestion to fix the issue? Thanks. A: You can reinstall VS 2012; that may fix the issue.
Q: Negative scale in Matrix 4x4 After some rotations and to/from quaternion conversions I get a negative scale from a 4x4 matrix; is that possible? I need that scale to draw a sprite on screen, so I get the sprite flipped. How should I deal with this problem? Should I just handle negative scale in the sprite-drawing method, i.e. if MatrixHasNegativeScale then invert scale, draw sprite with inverted scale? After m4.initWithHeadPitchRoll(0, 0, 180); I already get a negative scale. Or is something wrong with my matrix class? Edit: I create a transformation matrix (rotation + scale + translate) with a rotation around Oz by 180 degrees, and when I extract the scale from it, it has a negative value. Is that normal? A: If your matrix and quaternion classes are functioning properly, then a sequence of rotations should not ever give you a reflection (inverting or flipping a sprite). You should not just sweep the problem under the rug by writing code to flip something if it comes out with a reflection; you should try to figure out the actual problem. That being said, based on the comments, it's not clear to me that you actually have a reflection showing up. Negative components in a matrix show up naturally as a result of rotations. For example a 2D rotation matrix for a 180-degree rotation is [ -1 0 ] [ 0 -1 ] The presence of negative values in the matrix - even along the main diagonal - doesn't mean anything by itself. You have to look at the determinant of the matrix to see whether it's orientation-preserving (positive) or orientation-reversing (negative), and in this case the determinant is +1, so this is a perfectly legit rotation matrix, with no reflection.
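A quick numerical check of that point, in plain Python (no matrix library needed):

```python
import math

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

theta = math.pi  # 180 degrees about the z axis
rot_z = [[math.cos(theta), -math.sin(theta), 0.0],
         [math.sin(theta),  math.cos(theta), 0.0],
         [0.0,              0.0,             1.0]]
# Negative diagonal entries, yet the determinant is +1: a pure rotation.
print(round(det3(rot_z), 9))

# A reflection (mirroring the x axis) has determinant -1 instead.
mirror_x = [[-1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(det3(mirror_x))
```

So the test for a genuine reflection in your transform is the sign of the determinant (of the upper-left 3x3 block of the 4x4), not the sign of any individual diagonal entry.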
Q: Does redirecting the client using the PHP header function always work in the client's modern browsers? I want to redirect users back to the login page if they wrote the wrong password. I usually use header('Location: login.php?msg=wrong password'); when they type the wrong password or something. Does this method always work in modern browsers? What about the old ones? Is there a chance that a browser won't let the redirection happen? (If it only won't work in old browsers, then how old and which browsers?) Is this the best way to redirect users? In the mentioned example, how do professional web developers solve this issue of redirecting users and sending the GET variables to that URL? Note that I use header to redirect users both to the same page and sometimes to another page; my main concern is sending the client to a page (another page or the same) with the GET variables set, to show a message on that page. EDIT: I had no idea I had to even state this, but based on answers I think I should: Yes, I am aware that PHP is back end and works on the server and not the client! I am asking whether modern browsers (meaning the client's browser) support the redirection that is sent by the server. Obviously when I said "redirect client" in the title, I didn't mean redirect the server. A: All major browsers support redirects and have done so seamlessly for 20 years. The last browser that didn't support redirects well was Netscape 4. That was about 1997. Even then, redirects worked but: The screen would flash. The redirect URL was added to history. That meant that it was difficult to use the back button from the page after the redirect. Users would have to hit the back button twice really quickly or they would end up just getting the redirect again. Normally when you have questions about whether browsers support a feature you can consult https://caniuse.com/.
However, redirects have worked well for so long that they don't feel it is a browser feature even worth covering. Redirecting to a URL that includes parameters is fully supported as well. Your example redirect will work as you intend in all modern browsers. Even though redirects work really well, they probably are not the correct solution to your particular problem. Ideally your login form would be produced by the same PHP file that checks the credentials. This would allow you to show the form again with the error message without doing any redirects. Typically redirects are used after successful login to take the user to whatever page they need to see next after logging in.
Q: How to get model number of the phone? I am targeting Android with Delphi XE7. I would like to obtain the model number of the phone. That is, I would like to obtain the information highlighted in this image: How can I achieve this? A: You can use DeviceType := JStringToString(TJBuild.JavaClass.MODEL); OSName := GetCodename(JStringToString(TJBuild_VERSION.JavaClass.RELEASE)); OSVersion := JStringToString(TJBuild_VERSION.JavaClass.RELEASE); There is a sample here. I hope it'll be useful
Q: sum if values oracle sql I have got a query like select distinct i.charge_type, cp.name, sum(i.amount) from charge.gp_schedule gp, charge.gsm_charge_plan i, charge cp where i.code = gp.sales_audit_code and cp.code = gp.code group by i.charge_type, cp.name which outputs for example the following GSMFixedCharge FCFBBR15 15 GSMUsageCharge Call Charges 2.16 GSMUsageCharge Service Charges 2 GSMFixedCharge Line Rental 23.98 GSMFixedCharge FCFAFPBL 1 How can I further sum the values only based on 'GSMUsageCharge' in the same query so the desired output would be GSMFixedCharge FCFBBR15 15 GSMUsageCharge Call Charges 4.16 GSMFixedCharge Line Rental 23.98 GSMFixedCharge FCFAFPBL 1 I have tried something like select distinct i.charge_type, cp.name, DECODE(i.charge_type, 'GSMUsageCharge', sum (i.amount) group by i.charge_type) result but it does not work... A: You could try the following query. First, the cp.name value is modified for the 'GSMUsageCharge' rows so that 'Service Charges' is folded into 'Call Charges'. Next, the amounts are summed. Please note that you do not need to use DISTINCT when you use GROUP BY. SELECT charge_type, name, sum(amount) FROM ( select i.charge_type, CASE WHEN i.charge_type = 'GSMUsageCharge' AND cp.name = 'Service Charges' THEN 'Call Charges' ELSE cp.name END name, i.amount amount from charge.gp_schedule gp, charge.gsm_charge_plan i, charge cp where i.code = gp.sales_audit_code and cp.code = gp.code ) modified_names group by charge_type, name;
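To try the grouping trick outside Oracle, here is the same shape in SQLite from Python. The single table is a simplified stand-in for the three-way join, and note that any asterisks in the question's sample output are just bold markers, not part of the data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE charges (charge_type TEXT, name TEXT, amount REAL)")
conn.executemany("INSERT INTO charges VALUES (?, ?, ?)", [
    ("GSMFixedCharge", "FCFBBR15",        15.0),
    ("GSMUsageCharge", "Call Charges",     2.16),
    ("GSMUsageCharge", "Service Charges",  2.0),
    ("GSMFixedCharge", "Line Rental",     23.98),
    ("GSMFixedCharge", "FCFAFPBL",         1.0),
])

# Rename 'Service Charges' to 'Call Charges' in the inner query, then
# group: the two usage rows collapse into one summed row.
rows = conn.execute("""
    SELECT charge_type, name, SUM(amount)
    FROM (SELECT charge_type,
                 CASE WHEN charge_type = 'GSMUsageCharge'
                       AND name = 'Service Charges'
                      THEN 'Call Charges' ELSE name END AS name,
                 amount
          FROM charges)
    GROUP BY charge_type, name
""").fetchall()
for row in rows:
    print(row)
```

The GSMUsageCharge row comes out as ('GSMUsageCharge', 'Call Charges', 4.16), matching the desired output; the fixed-charge rows are untouched.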
Q: npm Gitlab - use of url and token I am currently trying to use the gitlab module to connect to my gitlab repository but I find the documentation of gitlab too vague to reproduce. The documentation is provided here: First of all, they say to connect to gitlab you have to do the following: gitlab = (require 'gitlab') url: 'http://example.com' token: 'abcdefghij123456' I don't understand the purpose of this url at all. As for the token, what I believe is that the token is used to identify which gitlab account we want to connect to. Am I right here? My second confusion is a bit more general. They show further in the documentation that you can use this module to listen to 'users' and 'projects'. What exactly are the users here? I understand that the projects are the projects that are hosted by my gitlab account, but what's the purpose of the users? Can one account (which I'm assuming we connect to via the token) have multiple users? This is really confusing. Please explain how to implement this gitlab module. I am looking forward to any insight provided on these queries. Thank you. A: I think your question has more to do with the node package than Gitlab itself; you should report issues to their issue tracker. To answer some of your questions: Why the url? | Gitlab can also be installed on-premises, so this url can be different for a company using its own installation of Gitlab. Token | This is a personal access token; you can create them using these docs and give the token only the permissions it needs. The docs also state the use-cases. Monitor users | This means you can see what users interact with your repo in what way, so e.g. commit, push, create issues, etc. User account | Technically a user account has at least 1 login, and a login belongs to a user account. Since you can have both a default user account and a connected Google login, a user can have more logins. This module seems to focus on user accounts instead of logins though.
Q: Is a readonly field in C# thread safe? Is a readonly field in C# thread safe? public class Foo { private readonly int _someField; public Foo() { _someField = 0; } public Foo(int someField) { _someField = someField; } public void SomeMethod() { doSomething(_someField); } } Have gone through some posts: What are the benefits to marking a field as readonly in C#? - JaredPar suggests that readonly fields once constructed are immutable and hence safe. Readonly Fields and Thread Safety, suggests that there is some risk if constructors do a lot of work. So, if the readonly field is used as in the code above, and constructors are light, is it thread-safe? What if _someField is a referrence type (e.g. an array of strings)? A: Yes - your code doesn't expose this within either constructor, so no other code can "see" the object before it's been fully constructed. The .NET memory model (as of .NET 2) includes a write barrier at the end of every constructor (IIRC - search Joe Duffy's blog posts for more details) so there's no risk of another thread seeing a "stale" value, as far as I'm aware. I'd personally still usually use a property instead, as a way of separating implementation from API, but from a thread-safety point of view it's fine. A: That depends what's in the field. Reading from a readonly field, or from any field that is smaller than the word length (including all reference types) is an atomic operation. However, the object inside the readonly field may or may not be thread-safe.
Q: mod of an unsigned char from secure hash function I have an unsigned char test_SHA1[20] of 20 bytes which is a return value from a hash function. With the following code, I get this output unsigned char test_SHA1[20]; char hex_output[41]; for(int di = 0; di < 20; di++) { sprintf(hex_output + di*2, "%02x", test_SHA1[di]); } printf("SHA1 = %s\n", hex_output); 50b9e78177f37e3c747f67abcc8af36a44f218f5 The last 9 bits of this number are 0x0f5 (= 245 in decimal), which I would get by taking a mod 512 of test_SHA1. To take mod 512 of test_SHA1, I do int x = (unsigned int)test_SHA1 % 512; printf("x = %d\n", x); But x turns out to be 158 instead of 245. A: The cast (unsigned int)test_SHA1 converts the array's address (a pointer), not the digest value, so you are taking the address mod 512; that is where the unrelated 158 comes from. The low 9 bits of the digest live in its last two bytes, so extract them from those bytes with a bit-wise and with 0x1ff instead of using the % operator: int x = ((test_SHA1[18] << 8) | test_SHA1[19]) & 0x1ff;
Q: Congratulations Eric Lippert for finally winning the 'c#-language' badge Yesterday John Saunders added a new tag, c#-language, to seven questions. There are quite many C# questions on Stack Overflow, but these seven questions have high voted answers from Eric Lippert, who already received the new c#-language badge for them. Should there be a different tag for these question? Should we differentiate C#-language questions from questions where C# is a tool, and is this the way? Is it acceptable to retag questions that way? It does look a little like an exploit, which was undone in the past. A: IMHO, [c#-language] doesn't appear to signify anything that [c#] doesn't already indicate. I certainly don't see anything common to the questions currently tagged that would appear to require a separate tag - indeed, this one would appear to require the consideration of a specific implementation... That said, there is a [ecma262] tag for questions on the ECMAScript standard itself or questions on its implementation that are potentially orthogonal to JavaScript use or any specific implementation of it. So I could see a similar tag ([c#-specification] or [ecma334] say...) used for questions on the C# standard that aren't directly concerned with the use of language or its common implementations. Assuming there are actually questions that would benefit from such a thing... A: Having bruted through the [language] tag long ago, I can see the use for having a tag to identify questions about the features or elements of a specific language, or any such general questions about the language rather than using the language. We have a fair enough number of them. However, I think we are better suited to using a tag like [language-features] or [language-design] in concert with the [c#] tag to illustrate this goal than to create a new tag exclusively for C#. "Aren't tags that depend on other tags bad?" you might ask, but I would envision this about the same as a tag like [strings]. 
We still have that as a tag, and it's just as dependent on the language as a general language features tag would be. Nevermind that a string is one such feature/design element anyway. It clearly indicates what the question is about, so it works well as a tag. And we won't have to setup a unique one for every language, to boot! A: I created that tag in order to differentiate between questions about the C# programming langauge itself, and questions about everything else, but where the questioner happens to be using the C# programming language. In my opinion, the c# tag has become meaningless, as a tag. It does not categorize the question, it simply indicates the programming language used by the questioner. I began using the c#-language tag to indicate questions that are specifically about the programming language. Think about it. Is there really no difference between problem with using alias name in query in ms access (the question doesn't even contain any C# code), and Limitations of the dynamic type in C#? Think about it another way. Should all questions tagged c# also be tagged .net? After all, the questioner is likely using .net in his C# program. How about tagging them visual-studio since Visual Studio was probably used to write the program? Or oxygen since that's probably what the questioner was breathing at the time? Yet another way to think about the distinction: in front of me is the book "Essential C# 4.0" by Mark Michaelis. An excellent book. The first 13 chapters of this book fall firmly into the area for which I intended the c#-language tag. Only when you get to Chapter 14, "Collection Interfaces with Standard Query Operators" would I say you've entered the gray area. Subsequent chapters, "LINQ with Query Expressions", "Building Custom Collections", "Reflection, Attributes, and Dynamic Programming", up to Chapter 21, "The Common Language Infrastructure", move further and further away from what I had in mind. 
I probably wouldn't remove a c#-language tag placed on questions about most of these, but I would not add one. Contrast this with another great book I have here, "Windows Forms 2.0 Programming", by Chris Sells and Michael Weinhardt. Even though the examples are all written in C#, I would say that none of the chapters of this book are about c#-language. Now, I happened to start off with Eric Lippert answers simply as a quick way of finding questions that were likely to be about the language itself. It never crossed my mind that tagging these particular questions would lead to Eric winning the badge for the tag. OTOH, he can now write the tag wiki for it. I just reread the original blog post on suspensions: "A Day in the Penalty Box". The reasons for suspension are stated as: There’s only one rule of behavior that really matters, whether on Stack Overflow, or anywhere else: don’t be a jerk. How do you know you’re being a jerk? Other users react negatively to your posts, posting negative responses and generally causing a commotion. There is a broad sense of community resentment over your behavior, and you are frequently cited in discussion about the community. The moderators get regular email complaints about your behavior. You make snide or rude comments “behind people’s backs”, in public places. Considering that there has been no attempt to inform me of what my bad behavior was, I have to go by the above. Was I being a jerk? In what way? I know it's the weekend, and look forward to answers during the week.
Q: JLabel icon not changing during run-time When I run my program, I need to add an image to my GUI during run time. As far as I know, getting the image from the source file works: public ImageIcon getImage() { ImageIcon image = null; if (length > 6.0) { //TODO } else { try { image = new ImageIcon(ImageIO.read( new File("car.png"))); } catch (IOException ex) { JOptionPane.showMessageDialog(null, "An Error occured! Image not found.", "Error", JOptionPane.ERROR_MESSAGE); } } return image; } I then add the image to a JLabel using the .setIcon() method, but nothing changes on the GUI. public void addCarImage(Car newCar, int spaceNo) { ImageIcon carImage; carImage = newCar.getImage(); JOptionPane.showMessageDialog(null, "Car", "Car", JOptionPane.INFORMATION_MESSAGE, carImage); carList[spaceNo - 1] = newCar; carLabel[spaceNo - 1].setIcon(carImage); } The JOptionPane message was added to see if the image will actually load, and it does. Any ideas? I've used google to look for solutions, such as repaint()/revalidate()/updateUI() and they didn't work. Edit - carLabels are added like so (before adding images). JLabels are initially blank. carLabel = new JLabel[12]; for (int i = 0; i < carLabel.length; i++) { carLabel[i] = new JLabel(); } carPanel.setLayout(new GridLayout(3, 4)); for (int i = 0; i < 12; i++) { carPanel.add(carLabel[i]); } A: Please make sure you do it on the swing thread. Also, make sure image was loaded correctly. Here is a simple code that I used to test and it is fine. 
public class Main { public static void main(String[] args) { final JFrame frame = new JFrame("TEST"); frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE); final JLabel label = new JLabel(); ImageIcon icon = null; try { icon = new ImageIcon(ImageIO.read(new File("C:\\images\\errorIcon.png"))); } catch (IOException e) { e.printStackTrace(); } frame.getContentPane().setLayout(new BorderLayout()); frame.getContentPane().add(label, BorderLayout.CENTER); frame.setSize(200,200); SwingUtilities.invokeLater(new Runnable() { @Override public void run() { frame.setVisible(true); } }); try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } final ImageIcon finalIcon = icon; SwingUtilities.invokeLater(new Runnable() { @Override public void run() { if(finalIcon != null && finalIcon.getImageLoadStatus() == MediaTracker.COMPLETE){ label.setIcon(finalIcon); } } }); } } Yarik.
Q: If there were no space between nuclei, how big would the Earth be? I have some people telling me it would be the size of baseball. I am quite doubtful on this. If this is true, then the gaps must be so incredibly huge that everything should be transparent. I am not a physics student, just wondering. A: Instead of assuming the earth is made of metallic hydrogen, let's just compare Earth's density of $5.52 \times 10^3 kg/m^3$ to that of neutrons' $2.3 \times 10^{17} kg/m^3$ because degenerate matter consisting of neutrons is what you get when electrons are forced into nuclei. That's a density increase of about $4.17 \times 10^{13}$ (at least 3 orders of magnitude different from Pranav's proton-atom volume ratio estimate), or a radial decrease of about $3.47 \times 10^4$. This puts a neutron Earth radius at 184 metres or a huge baseball STADIUM including car park. A: When you look at crystalline substances, there is really not that much space between the atoms. What people mean when they say that an atom is mostly empty space, is that the INSIDE of the atom is very sparsely populated with stuff. This is because the stuff in question, the nucleus and the electrons, are tiny in comparison to the actual size of the atom. The nucleus is typically a few hundred thousandths of an angstrom unit, or $\approx 10^{-15} \text{meters}$ in diameter. The entire atom, on the other hand, has a diameter to the order of an angstrom, or $10^5$ times larger than the nucleus. The electrons are even tinier, so much so that the volume that each electron occupies is negligible in comparison to the nucleus. A simple calculation will tell you that the volume occupied by an atom is $\approx 10^{15}$ times the volume of its building blocks, for the Hydrogen atom. 
A similar calculation for an atom of a heavier element would tell you $V_{atom} \approx 10^{10} \times V_{nucleus}$ If you applied this reduction, by removing all that empty volume inside the atom, to the Earth, you would get an $$ V_{earth}'=V_{earth} \times 10^{-12} \\ d_{earth}' = d_{earth} \times 10^{-4} = 1300 \text{m} $$ P.S.: The empty space isn't really empty space. The currently accepted model of the atom says that an electron can be anywhere inside that space, but has a higher probability of being in a particular part of it. As David pointed out, it is more accurate to say "Increasing the density of the earth to the density of the nucleus, while keeping the mass the same."
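The compression factors quoted in both answers are easy to check with a few lines of Python (the constants are the usual textbook values):

```python
# Same mass, higher density: r' = r / (rho_new / rho_old)^(1/3)
earth_density = 5.52e3      # kg/m^3
neutron_density = 2.3e17    # kg/m^3 (degenerate neutron matter)
earth_radius = 6.371e6      # m

density_ratio = neutron_density / earth_density
radial_factor = density_ratio ** (1 / 3)
compressed_radius = earth_radius / radial_factor

print(f"density ratio ~ {density_ratio:.2e}")      # ~4.17e13
print(f"radial factor ~ {radial_factor:.2e}")      # ~3.47e4
print(f"new radius    ~ {compressed_radius:.0f} m")  # ~184 m
```

This reproduces the figures in the first answer: a density jump of about 4.17e13, a radial shrink of about 3.47e4, and a neutron-matter Earth roughly 184 m in radius.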
Q: Two models in one Django DetailView and filtering by relation between them I have a question. In my DetailView I want to place data from two models. Moreover, I want to filter them, so that my scenario-detail shows only the comments related to a specific scenario, related by ForeignKey->Scenario. My views.py: class ScenarioDetailView(LoginRequiredMixin, DetailView): model = Scenario def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context['comments'] = Comment.objects.all() return context And in scenario_detail.html I have a simple {{ comments }}. I was thinking of filtering my Comment objects in views.py, something like: Comment.objects.get(commentScenario=Scenario.id) but it didn't work at all. My models.py: class Scenario(models.Model): scenarioTitle = models.CharField(max_length=256) scenarioArea = models.ForeignKey(ScenarioArea, on_delete=models.CASCADE) scenarioAuthor = models.ForeignKey(User, on_delete=models.CASCADE) scenarioDate = models.DateTimeField(default=timezone.now) scenarioDescription = models.TextField() def __str__(self): return self.scenarioTitle def get_absolute_url(self): return reverse('scenario-detail', kwargs={'pk': self.pk}) class Comment(models.Model): commentText = models.CharField(max_length=256) commentScenario = models.ForeignKey(Scenario, on_delete=models.CASCADE) commentAuthor = models.ForeignKey(User, on_delete=models.CASCADE) commentDate = models.DateTimeField(default=timezone.now) def __str__(self): return self.commentText And urls.py: path('scenario/<int:pk>/', ScenarioDetailView.as_view(), name='scenario-detail'), Could someone help me? A: You don't need to send any extra context to your template in order to show the related comments, as you already have them through the backward relationship from Scenario. So you can simply use Scenario.comment_set.all in your template to access them. As an example: {% for comment in object.comment_set.all %} {{ comment }} {% endfor %}
Q: Stop waiting/timing out for resources in Chrome automation I have automated Chrome with the Python Selenium WebDriver. Is there any Chrome option or method to stop waiting for third-party resources? When I launched google-chrome in debug mode, it waited on the Google fonts below for 2 minutes. https://fonts.googleapis.com/css?family=Open+Sans:400,300,500,600,700,800 https://fonts.googleapis.com/css?family=Roboto:100,300,400,500,700,900 Environment limitations: 1. No internet 2. No sudo permission This is not a rendering issue; the browser is waiting on those resources and Selenium is throwing a timeout exception. I need to stop this waiting and continue the automation without errors. A: Selenium Wire has a way of waiting for requests to finish before proceeding with test execution. See https://pypi.org/project/selenium-wire/#waiting-for-a-request
Q: Strain in a freely falling body I wanted to know if there would be any extension in a freely falling body due to gravity . I know that a rod tied to the ceiling at one end will show an extension $MgL/2AY$. But if the same rod is falling freely under gravity, I am not able to go about this problem. Will the extension remain the same as that of the tied to the ceiling problem or will it change and how? Is it possible that there will be no extension at all? A: This is an interesting (and illuminating) question. The question to ask is as to what causes the stress in a body. The idea is that if different parts of a solid body are subjected to such external forces that they have a tendency to accelerate at different rates then the internal forces of the solid body will activate themselves so as to try to keep the different parts of the solid body from flying away from each other with such different accelerations. Of course, no solid body is perfectly rigid and thus, even with the presence of such internal forces (i.e., stress), there would be some deformation of the solid body (i.e., strain). Now, in the presence of gravity, if a solid body is freely falling (i.e., there are no external forces on the body except for the forces of gravity), then the acceleration of a material point of the body due to external forces (i.e., gravity) is just the same as the value of the gravitational field at the location of that material point. Thus, if the gravitational field is uniform, then all the material points of the body will accelerate with the same acceleration simply under the influence of the external force of gravity and the internal forces wouldn't need to activate themselves at all. However, if the gravitational field is non-uniform, then different material points of the body will tend to accelerate with different accelerations and thus, there would be some non-zero stress in the body. 
In your example, thus, if you treat the gravitational field of the Earth as a nearly uniform gravitational field then there would be no stress in the rod while it's freely falling. However, if you take into account the non-uniformity in the gravitational field of the Earth (which would be necessary if your object is big enough in its spatial extension), then non-zero stress would be induced in the rod.
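For reference, the $MgL/2AY$ figure quoted in the question is just the integral of the local strain over the hanging rod, and the same bookkeeping shows why free fall in a uniform field gives zero extension. Measuring $x$ from the lower (free) end, the tension at height $x$ must support the weight $Mgx/L$ hanging below it, so

```latex
T(x) = \frac{Mg\,x}{L}, \qquad
\delta = \int_0^L \frac{T(x)}{A Y}\,\mathrm{d}x
       = \int_0^L \frac{Mg\,x}{L A Y}\,\mathrm{d}x
       = \frac{M g L}{2 A Y}.
```

In free fall under a uniform field every element already accelerates at $g$ with no help from its neighbours, so $T(x) = 0$ along the rod and the extension vanishes; only the tidal (non-uniform) part of the field can produce stress, as described above.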
Q: Conditional step in a pipeline Given a pipeline something like "A|B|C|D|E", I want to make step C conditional on the result of step B. Something like this: A | B | if [ $? -ne 0 ]; then C; else cat; fi | D | E But this doesn't seem to work; C is never executed no matter what the result of B. I'm looking for a better solution. I understand that each step of a pipeline runs in its own subshell. So I can't pass an environment variable back to the pipeline. But this pipeline is in a Gnu Parallel environment where many such pipelines are running concurrently and none of them knows any unique value (they just process the data stream and don't need to know the source, the parent script handles the necessary separation). That means that using a temporary file isn't practical, either, since there isn't any way to make the file names unique. Even $$ doesn't seem to give me a value that is the same in each of the steps. A: You can't make a pipeline conditional because the commands are run in parallel. If C weren't run until B exited, where would B's output be piped to? You'll need to store B's output in a variable or a temporary file. For instance: out=$(mktemp) trap 'rm "$out"' EXIT if A | B > "$out"; then C < "$out" else cat "$out" fi | D | E
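Here is a runnable sketch of that temp-file pattern, with stub functions standing in for the real stages A through E (the stubs are hypothetical; substitute your actual commands):

```shell
#!/bin/sh
# Stub stages standing in for the real A..E
A() { printf 'hello\n'; }
B() { tr '[:lower:]' '[:upper:]'; }   # exits 0 on success
C() { sed 's/^/C saw: /'; }
D() { cat; }
E() { sed 's/$/!/'; }

out=$(mktemp)
trap 'rm -f "$out"' EXIT

# B's output is buffered in $out, so its exit status is known
# before we decide whether to run C.
result=$(
    if A | B > "$out"; then
        C < "$out"
    else
        cat "$out"
    fi | D | E
)
printf '%s\n' "$result"
```

Since B succeeds here, C runs and the script prints "C saw: HELLO!"; flip the if/else branches if, as in the question's original attempt, C should only run when B fails.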
Q: Uploading a file with billions of records on a server and inserting the records in a DB In our application, developed in Java/J2EE, we need to import the records from a file uploaded by the client. The content of the file will be of a sort as below: id,email,name,last-name,text 1,[email protected],John,Lives in LA ...billions of such records in a file. While the upload process is going on, the client must get regular updates on its progress. We are able to upload the file correctly, and inserting all its records is also done. But now we want it to happen in the shortest possible time. Suggested approaches are: multi-threading (fork/join, multiple threads) or JMS. Please suggest. A: If you want the truly shortest possible time, copy the file in chunks over to the server with the database (maybe using a Java SCP implementation if it's available), then do your DB's version of LOAD DATA INFILE (that's MySQL's flavor). The more sensible approach is just doing batch inserts. As for the suggested approaches (multi-threading, JMS): they probably won't help. JMS doesn't solve this, and parallelism won't help when you're IO-bound (the size of the pipe or the speed of the disks is really what's getting you). Edit: you could see a benefit from multithreading if you have one reader thread that reads the file and another writer thread that does the DB access (producer/consumer). The reason this can help is that you're always reading and you're always writing. If you write this correctly, you'll be able to spawn multiple insertion threads so you can try to run it in parallel and see if it helps.
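A minimal sketch of the producer/consumer batching idea from the edit, with the actual JDBC addBatch/executeBatch calls replaced by a counter so it runs stand-alone (the class and method names are illustrative):

```java
import java.util.*;
import java.util.concurrent.*;

public class BatchLoader {
    // Sentinel telling the writer the reader is done.
    static final String[] POISON = new String[0];

    // Reader thread parses lines into the queue; writer thread drains
    // the queue and "executes" one batch per batchSize rows.
    public static int load(List<String> lines, int batchSize) throws Exception {
        BlockingQueue<String[]> queue = new ArrayBlockingQueue<>(1024);
        ExecutorService pool = Executors.newFixedThreadPool(2);

        Future<?> reader = pool.submit(() -> {
            try {
                for (String line : lines) queue.put(line.split(","));
                queue.put(POISON);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Future<Integer> writer = pool.submit(() -> {
            int batches = 0, inBatch = 0;
            while (true) {
                String[] row = queue.take();
                if (row == POISON) break;
                // With a real DB: preparedStatement.addBatch() here.
                if (++inBatch == batchSize) { batches++; inBatch = 0; } // executeBatch()
            }
            if (inBatch > 0) batches++;   // flush the final partial batch
            return batches;
        });

        reader.get();
        int result = writer.get();
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        List<String> lines = new ArrayList<>();
        for (int i = 0; i < 25; i++) lines.add(i + ",user" + i + "@example.com");
        System.out.println(load(lines, 10));   // 25 rows in batches of 10
    }
}
```

With a real database you would replace the counting with PreparedStatement.addBatch() per row and executeBatch() per full batch, and the reader side is also a natural place to report progress back to the client.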
Q: iText7 : com.itextpdf.kernel.PdfException: Document was closed. It is impossible to execute action I got com.itextpdf.kernel.PdfException: Document was closed. It is impossible to execute action. error on iText7. 1 // UPDATE FROM HERE 2 PdfFont font; 3 { 4 GcsFilename gcsFilename = new GcsFilename("fonts", "msgothic001.ttf"); 5 try (GcsInputChannel inputChannel = 6 gcsService.openPrefetchingReadChannel(gcsFilename, 0, BUFFER_SIZE)) { 7 font = 8 PdfFontFactory.createFont( 9 getBytes(Channels.newInputStream(inputChannel)), 10 PdfEncodings.IDENTITY_H, 11 true); 12 } 13 } 14 // UPDATE UNTIL HERE 15 16 WriterProperties wp = new WriterProperties(); 17 wp.useSmartMode(); 18 try (PdfDocument writeDoc = new PdfDocument(new PdfWriter(outputStream, wp))) { 19 20 List<Integer> keyList = Arrays.asList(Integer.valueOf(1), Integer.valueOf(2), Integer.valueOf(3)); 21 for (Integer keyNumber : keyList) { 22 LOGGER.info(keyNumber); // (1) 23 ByteArrayOutputStream baos = new ByteArrayOutputStream(); 24 try (PdfWriter writer = new PdfWriter(baos); 25 PdfDocument readDoc = 26 new PdfDocument(new PdfReader(new ByteArrayInputStream(inputBytes)), writer)) { 27 PdfAcroForm pdfAcroForm = PdfAcroForm.getAcroForm(readDoc, false); 28 Map<String, PdfFormField> fieldMap = pdfAcroForm.getFormFields(); 29 if (fieldMap != null && fieldMap.size() > 0) { 30 Set<String> fieldNameSet = new HashSet<>(fieldMap.keySet()); 31 for (String fieldName : fieldNameSet) { 32 pdfAcroForm.renameField(fieldName, fieldName + "_" + keyNumber); 33 } 34 fieldMap = pdfAcroForm.getFormFields(); 35 } 36 37 38 // UPDATE FROM HERE 39 PdfFormField form = fieldMap.get("Customer_" + keyNumber); 40 form.setFont(font).setValue("Test Test"); 41 // UPDATE UNTIL HERE 42 43 } // (2) We got the error on this line 44 45 try (PdfDocument readDoc = 46 new PdfDocument(new PdfReader(new ByteArrayInputStream(baos.toByteArray())))) { 47 readDoc.copyPagesTo(1, readDoc.getNumberOfPages(), writeDoc, new PdfPageFormCopier()); 48 } 49 } 50 } I 
got this output:

13:55:45.962 1 // (1)
13:55:47.252 2 // (1)
13:55:47.782 com.itextpdf.kernel.PdfException: Document was closed. It is impossible to execute action.
    at com.itextpdf.kernel.pdf.PdfDocument.checkClosingStatus(PdfDocument.java:1887)
    at com.itextpdf.kernel.pdf.PdfDocument.getWriter(PdfDocument.java:645)
    at com.itextpdf.kernel.pdf.PdfObject.makeIndirect(PdfObject.java:228)
    at com.itextpdf.kernel.pdf.PdfDictionary.makeIndirect(PdfDictionary.java:491)
    at com.itextpdf.kernel.pdf.PdfDictionary.makeIndirect(PdfDictionary.java:57)
    at com.itextpdf.kernel.pdf.PdfObject.makeIndirect(PdfObject.java:249)
    at com.itextpdf.kernel.pdf.PdfDictionary.makeIndirect(PdfDictionary.java:479)
    at com.itextpdf.kernel.pdf.PdfDictionary.makeIndirect(PdfDictionary.java:57)
    at com.itextpdf.kernel.font.PdfFont.makeObjectIndirect(PdfFont.java:600)
    at com.itextpdf.kernel.font.PdfType0Font.getFontDescriptor(PdfType0Font.java:672)
    at com.itextpdf.kernel.font.PdfType0Font.flushFontData(PdfType0Font.java:828)
    at com.itextpdf.kernel.font.PdfType0Font.flush(PdfType0Font.java:600)
    at com.itextpdf.kernel.pdf.PdfDocument.flushFonts(PdfDocument.java:1848)
    at com.itextpdf.kernel.pdf.PdfDocument.close(PdfDocument.java:800)
    at (our source (2) )

Why do I get this error? How can I fix it?

[UPDATE] I found that setting a value with the font "MS Gothic" (a standard font on Japanese Windows) causes this error. It seems some fonts cause this error while others do not. I also tried with HELVETICA, and it does not cause the error. I have updated my program (lines 1 to 14 and lines 38 to 41).
A: I have made an almost literal copy of your code:

package com.itextpdf.samples;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import com.itextpdf.forms.PdfAcroForm;
import com.itextpdf.forms.PdfPageFormCopier;
import com.itextpdf.forms.fields.PdfFormField;
import com.itextpdf.io.font.FontProgram;
import com.itextpdf.io.font.FontProgramFactory;
import com.itextpdf.kernel.font.PdfFontFactory;
import com.itextpdf.kernel.pdf.PdfDocument;
import com.itextpdf.kernel.pdf.PdfReader;
import com.itextpdf.kernel.pdf.PdfWriter;
import com.itextpdf.kernel.pdf.WriterProperties;

public class Test {
    public static void main(String[] args) throws IOException {
        FontProgram fontProgram = FontProgramFactory.createFont("c:/windows/fonts/msgothic.ttc,1");
        FileOutputStream outputStream = new FileOutputStream("test.pdf");
        WriterProperties wp = new WriterProperties();
        wp.useSmartMode();
        try (PdfDocument writeDoc = new PdfDocument(new PdfWriter(outputStream, wp))) {
            for (int keyNumber = 0; keyNumber < 3; keyNumber++) {
                ByteArrayOutputStream baos = new ByteArrayOutputStream();
                try (PdfWriter writer = new PdfWriter(baos);
                    PdfDocument readDoc = new PdfDocument(new PdfReader("form.pdf"), writer)) {
                    PdfAcroForm pdfAcroForm = PdfAcroForm.getAcroForm(readDoc, false);
                    Map<String, PdfFormField> fieldMap = pdfAcroForm.getFormFields();
                    if (fieldMap != null && fieldMap.size() > 0) {
                        Set<String> fieldNameSet = new HashSet<>(fieldMap.keySet());
                        for (String fieldName : fieldNameSet) {
                            pdfAcroForm.renameField(fieldName, fieldName + "_" + keyNumber);
                        }
                        fieldMap = pdfAcroForm.getFormFields();
                    }
                    PdfFormField form = fieldMap.get("name_" + keyNumber);
                    form.setFont(PdfFontFactory.createFont(fontProgram)).setValue("Test Test");
                }
                try (PdfDocument readDoc =
                    new PdfDocument(new PdfReader(new ByteArrayInputStream(baos.toByteArray())))) {
                    readDoc.copyPagesTo(1, readDoc.getNumberOfPages(), writeDoc, new PdfPageFormCopier());
                }
            }
        }
    }
}

It doesn't throw any errors when I run it. I had to make some changes, because I didn't know what parameters such as keyList were about. Can you execute my example, and tell me if the problem persists?

If my example still throws an error: maybe you aren't using the most recent version of iText 7. Please upgrade and try anew. If my example doesn't throw an error: try adapting my example step by step until the error happens again. Tell us which was the last step you performed before the error occurred.

Update: When you create a PdfFont and you use that PdfFont instance in the context of a PdfDocument, that PdfFont "belongs" to that document, and you can no longer reuse it. You should create a new PdfFont instance for every document. This doesn't mean you can't reuse a FontProgram, though. I have updated my example: I use the FontProgramFactory to create a FontProgram (I used the quick & dirty way). I don't reuse any PdfFont, but I use the FontProgram to create a new PdfFont for every PdfDocument.
Q: Does a company that only deals with information still need offices?

For my working group at my present employer (a start-up), we are discussing the possibility of granting the right to work exclusively from home, to anyone who wishes to. Our work is purely information processing in one way or another: some of us are programmers, others are data analysts or technical writers. How would the workflow be influenced?

I see that people won't have to spend time commuting, which in some extreme cases means saving up to 3 hours each day. Granted, they won't dedicate 3 extra hours of work if they telecommute, but it means lower costs and less stress for them. Or does face-to-face contact have some essential property that we can't ignore?

A: Because there is no substitute for face-to-face interaction. That is why some companies who used to allow telecommuting have taken it away. It may be more efficient for the individual but not for the group. More work gets done in those face-to-face casual meetings in the hall than you may realize.

And potential clients and investors want to meet with you at a professional office space. Why do you think senior managers need much nicer offices than junior people? The people they meet with would not take them seriously without them.
Q: How to get arc-length of polar function $r= 4(1-\sin{\phi})$?

How can I get the arc length of this polar function?
$$ r= 4(1-\sin{\phi})$$
$$-\frac{\pi}{2}\leq\phi\leq\frac{\pi}{2}$$

I know that the arc length of a polar curve can be calculated by
$$l=\int_\alpha^\beta\sqrt{r^2+(r')^2}\,d\phi$$

So:
$$r'=-4\cos{\phi}$$
$$l=\int_\frac{-\pi}{2}^\frac{\pi}{2}\sqrt{4^2(1-\sin{\phi})^2+(-4\cos{\phi})^2}\,d\phi=\int_\frac{-\pi}{2}^\frac{\pi}{2}\sqrt{16-32\sin{\phi}+16\sin^2{\phi}+16\cos^2{\phi}}\,d\phi=\cdots$$
$$=\int_\frac{-\pi}{2}^\frac{\pi}{2}\sqrt{-32\sin{\phi}}\,d\phi$$

Does this really imply that I need to calculate $\sqrt{32} \int_\frac{-\pi}{2}^\frac{\pi}{2} \sqrt{-\sin{\phi}}\,d\phi$? If so, then how can I do that?

A: You should have
$$16-32\sin\phi +16(\sin^2(\phi)+\cos^2(\phi))=32(1-\sin\phi)$$
For the integral, note that
$$\int{\sqrt{1-\sin x}}\,dx=\int{\frac{\sqrt{1-\sin^2(x)}}{\sqrt{1+\sin x}}}\,dx=\int{\frac{\cos x}{\sqrt{1+\sin x}}}\,dx=2\sqrt{1+\sin x}+C$$
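Putting the answer's two observations together gives the final value. This worked completion is not part of the original answer, but follows directly from it (the second identity is valid here because $\cos x\ge 0$ on $[-\pi/2,\pi/2]$):

```latex
l=\int_{-\pi/2}^{\pi/2}\sqrt{32(1-\sin\phi)}\,d\phi
 =4\sqrt{2}\int_{-\pi/2}^{\pi/2}\sqrt{1-\sin\phi}\,d\phi
 =4\sqrt{2}\,\Bigl[\,2\sqrt{1+\sin\phi}\,\Bigr]_{-\pi/2}^{\pi/2}
 =4\sqrt{2}\,\bigl(2\sqrt{2}-0\bigr)=16.
```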
Q: Index out of range. Must be non-negative

private void dataGridView1_DoubleClick(object sender, EventArgs e)
{
    try
    {
        DataTable dt = new DataTable();

        // In the database there are three columns: proid, proname, and unitprice.
        // I need these to be retrieved into the respective textboxes when I
        // double-click the value in the DataGridView.
        SqlDataAdapter da = new SqlDataAdapter(" select * from tblproduct where proid = "
            + Convert.ToInt16(dataGridView1.SelectedRows[0].Cells[0].Value.ToString()) + "", con);
        // Something is wrong near this select statement, so I get the error
        // "Index was out of range".
        da.Fill(dt);
        textBox1.Text = dt.Rows[0][0].ToString();
        textBox2.Text = dt.Rows[0][1].ToString();
        textBox3.Text = dt.Rows[0][2].ToString();
    }
    catch (Exception error)
    {
        MessageBox.Show(error.ToString());
    }
}

A: This can only occur on four different lines:

// either SelectedRows or Cells is zero length
dataGridView1.SelectedRows[0].Cells[0]

// either Rows is zero length or there are no columns returned
dt.Rows[0][0]

// either Rows is zero length or there is only 1 column returned
dt.Rows[0][1]

// either Rows is zero length or there are only 2 columns returned
dt.Rows[0][2]

The most likely lines?

// there are no SelectedRows
dataGridView1.SelectedRows[0].Cells[0]

// there are no Rows returned
dt.Rows[0][0]
Q: Is there a way to export shortcuts from Visual Studio? I have changed the default shortcuts at work and want to synchronize at home. Is there a way to export and import only the shortcuts? I'm using Visual Studio 2015 Community Edition. A: Try Tools --> Import and Export Settings from Visual Studio. You can deselect all the options except "All Settings --> Options --> Environment --> Keyboard". This procedure creates a file .vssettings that you can import at home.
Q: Wait for vkCmdCopyBuffer using Vulkan semaphores

I have two command buffers cb1, cb2 and I am using semaphores to make sure that the execution of cb2 waits for the execution of cb1. Both command buffers are submitted in a batch to the same queue. cb1 has only a vkCmdCopyBuffer command in it. Are the semaphores enough to guarantee that cb2 will only run after vkCmdCopyBuffer completes the memory transfer, or should I add a barrier command in cb1 shortly after vkCmdCopyBuffer?

A: You generally don't need semaphores within a single queue; they're primarily for synchronizing across queues. If you're submitting both command buffers in the same batch (same VkSubmitInfo), in fact, you can't use semaphores, since the semaphores are waited on before any command buffers in the batch start, and signaled after all of the command buffers in the batch have completed. For execution and memory dependencies within a queue, you typically want a pipeline barrier or SetEvent/WaitEvent pair.
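In C-style pseudocode, the pipeline barrier the answer recommends could be recorded into cb1 right after the copy. This is an illustrative fragment, not a complete program: `cmd`, `dstBuf`, and the destination stage/access masks are placeholders for whatever cb2 actually does with the data.

```
VkBufferMemoryBarrier barrier = {0};
barrier.sType               = VK_STRUCTURE_TYPE_BUFFER_MEMORY_BARRIER;
barrier.srcAccessMask       = VK_ACCESS_TRANSFER_WRITE_BIT;  /* writes by vkCmdCopyBuffer */
barrier.dstAccessMask       = VK_ACCESS_SHADER_READ_BIT;     /* placeholder: how cb2 reads it */
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.buffer              = dstBuf;                        /* the copy destination */
barrier.offset              = 0;
barrier.size                = VK_WHOLE_SIZE;

vkCmdPipelineBarrier(cmd,
    VK_PIPELINE_STAGE_TRANSFER_BIT,       /* after the transfer stage ...     */
    VK_PIPELINE_STAGE_VERTEX_SHADER_BIT,  /* ... before cb2's consuming stage */
    0,
    0, NULL,      /* global memory barriers */
    1, &barrier,  /* buffer memory barriers */
    0, NULL);     /* image memory barriers  */
```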
Q: Can't put print statement in generator

What's going on here? Why doesn't the latter work?

def foo(arg):
    print(arg)

for _ in (foo(x) for x in range(10)):
    pass  # Works

for _ in (print(x) for x in range(10)):
    pass  # Doesn't work

A: It would work in Python 3.x, but not in Python 2.x, because print is a statement in Python 2.x and you cannot put a statement in a generator expression. If you insist, you can make it work by converting print to a Python 3-compatible function in Python 2.x with:

from __future__ import print_function

But even in Python 3.x it is not recommended to put a function that always returns None in a generator expression, since a generator is meant to produce values.
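To see the Python 3 behaviour concretely, here is a small self-contained sketch (not from the original post): the generator expression over print is legal, the side effects happen as the generator is consumed, and every produced value is None.

```python
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    # Valid in Python 3, where print is a function
    results = list(print(x) for x in range(3))

print(results)                # [None, None, None] -- print always returns None
print(repr(buf.getvalue()))   # '0\n1\n2\n' -- the prints ran while consuming the generator
```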
Q: Bulk create fails with 1 million rows

For learning purposes I've created CSV files with 10,000 rows and a second file with 1 million rows, 4 columns each. The idea is to add the data to the table as efficiently as possible. (I am using a SQL database.)

First approach - @transaction.atomic:

@transaction.atomic
def insert(request):
    data_reader = csv.reader(open(csv_filepathname), delimiter=',', quotechar='"')
    for row in data_reader:
        nela = Nela()  # my model name
        nela.name = row[0]
        nela.last = row[1]
        nela.almost = row[2]
        nela.number = row[3]
        nela.save()

insert(request)

1) For 10,000 rows it takes 2.1006858348846436 s to insert the data.
2) For 1 million rows it was something around ~220.xx s.

My second approach is the usage of bulk_create():

def insert(request):
    data_reader = csv.reader(open(csv_filepathname), delimiter=',', quotechar='"')
    header = next(data_reader)
    Nela.objects.bulk_create([Nela(name=row[0], last=row[1], almost=row[2], number=row[3])
                              for row in data_reader])

insert(request)

1) For 10,000 rows it takes 0.4592757225036621 s, which is cool and over 4 times faster than with @transaction.atomic.
2) However, for 1 million rows it fails and exceeds the limits of the SQL backend.

Does anyone have any idea why this error appears? I've been reading the Django docs on bulk_create, but besides a note about batch_size for SQLite I can't find anything useful.

A: The problem is the million-object list you create and feed to bulk_create. Even if you just pass in an iterator, Django would turn it into a list so that it can optimize the insert before proceeding. The solution is to do your own batching outside of Django with islice.

from itertools import islice

data_reader = csv.reader(open(csv_filepathname), delimiter=',', quotechar='"')
header = next(data_reader)
batch_size = 10000
while True:
    batch = [Nela(name=row[0], last=row[1], almost=row[2], number=row[3])
             for row in islice(data_reader, batch_size)]
    if not batch:
        break
    Nela.objects.bulk_create(batch, batch_size)
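The chunking idiom in the answer is independent of Django; a minimal stand-alone sketch of the same pattern, consuming any iterator in fixed-size batches:

```python
from itertools import islice

def batched(iterable, batch_size):
    """Yield successive lists of at most batch_size items from iterable."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

# Each chunk could be handed to bulk_create (or any other batch consumer).
chunks = list(batched(range(10), 4))
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```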
Q: How to create list of sections from list of consecutive points

This problem is easily solved with a classical for loop:

for (i = 0; i < points.size() - 1; i++) {
    PointAG p1 = this.points.get(i);
    PointAG p2 = this.points.get(i + 1);
    sections.add(new LineSection(p1, p2));
}

Is there a possibility to achieve the same in a functional way, for example with two iterators?

A: Depends on what you mean by "in functional way". If you mean "using streams", then the following might be one way:

List<LineSection> sections = IntStream.range(1, points.size())
    .mapToObj(i -> new LineSection(this.points.get(i - 1), this.points.get(i)))
    .collect(Collectors.toList());

But that isn't really any shorter or easier to read than a normal for loop, so why do it?

List<LineSection> sections = new ArrayList<>();
for (int i = 1; i < points.size(); i++)
    sections.add(new LineSection(this.points.get(i - 1), this.points.get(i)));

A: You can do it with a single iterator thus:

Iterator<PointAG> it = points.iterator();
if (it.hasNext()) {
    PointAG prev = it.next();
    while (it.hasNext()) {
        PointAG next = it.next();
        sections.add(new LineSection(prev, next));
        prev = next;
    }
}

You can write the loop body without any extra variables:

while (it.hasNext()) {
    sections.add(new LineSection(prev, prev = it.next()));
}

This exploits the guaranteed left-to-right evaluation order of Java, meaning that the first prev is evaluated before the re-assignment. This might not be the best approach: side effects inside expressions are easy to miss when reading code; use the one you are comfortable reading.

Doing it like this - with an iterator - will be more efficient than indexing for non-RandomAccess list implementations, for example LinkedList.
Q: Is it okay for a command to slurp more arguments than it is passed to?

Is it okay for a control sequence \foo to include another control sequence \slurp which slurps more arguments than \foo actually passes to it? For example, is it okay to do this:

\documentclass{article}
\newcommand\foo [1]{#1 \slurp}
\newcommand\slurp[3]{#1 #2 #3}
\begin{document}
\foo{a}{b}{c}{d}
\end{document}

Instead of this?

\documentclass{article}
\newcommand\foo [4]{#1 \slurp{#2}{#3}{#4}}
\newcommand\slurp[3]{#1 #2 #3}
\begin{document}
\foo{a}{b}{c}{d}
\end{document}

A: Is this okay? Yes indeed! In fact, there is an abundance of usages for such macro definitions. Most notably the fundamental definitions for starred variants of commands. For example, article defines \section as

\newcommand\section{\@startsection {section}{1}{\z@}%
                                   {-3.5ex \@plus -1ex \@minus -.2ex}%
                                   {2.3ex \@plus.2ex}%
                                   {\normalfont\Large\bfseries}}

See how it takes zero arguments, even though we typically specify/use it as \section[<toc>]{<title>}?! That is because \@startsection takes 6 arguments, and then does a test to see whether the user added a star or not. From latex.ltx:

\def\@startsection#1#2#3#4#5#6{%
  \if@noskipsec \leavevmode \fi
  \par
  \@tempskipa #4\relax
  \@afterindenttrue
  \ifdim \@tempskipa <\z@
    \@tempskipa -\@tempskipa \@afterindentfalse
  \fi
  \if@nobreak
    \everypar{}%
  \else
    \addpenalty\@secpenalty\addvspace\@tempskipa
  \fi
  \@ifstar
    {\@ssect{#3}{#4}{#5}{#6}}%
    {\@dblarg{\@sect{#1}{#2}{#3}{#4}{#5}{#6}}}}

As such, the arguments we typically specify for \section are only gobbled by a macro two stages down the road. Another good example of why this is good practice has to do with changes in category codes. Once arguments are consumed for use, their category codes are not changeable. So, a helper macro is usually used to modify the category codes before gobbling any arguments.
There are numerous other examples in the LaTeX kernel, from basic font macros to dealing with the ToC, even to defining a new command via \newcommand:

\def\newcommand{\@star@or@long\new@command}

Again, another macro that doesn't take any argument, but performs some operation prior to passing the torch to another macro. In general, this principle is well-used throughout the kernel and packages.

A: As explained in Werner's answer, this is common practice. All macros having a *-variant are defined in this way:

\newcommand{\foo}{\@ifstar{\@sfoo}{\@foo}}
\newcommand{\@sfoo}[1]{Foo with * applied to #1}
\newcommand{\@foo}[1]{Foo without * applied to #1}

or variants thereof. Similarly, macros having more than one optional argument, such as \makebox, must take a long route for deciding whether there are no, one, or two optional arguments:

\newcommand{\bar}{\@ifnextchar[{\@bar@i}{\@bar}}
\def\@bar@i[#1]{\@ifnextchar[{\@bar@ii{#1}}{\@bar@iii{#1}}}
\def\@bar@ii#1[#2]#3{Bar has two optional arguments, #1 and #2, and #3}
\def\@bar@iii#1#2{Bar has one optional argument, #1, and #2}
\def\@bar#1{Bar has no optional argument and #1}

With xparse the situation is quite different: since *-variants and optional arguments can be specified in a fairly general way, it's preferred to load all actual arguments:

\usepackage{xparse}

\NewDocumentCommand{\foo}{sm}{%
  \IfBooleanTF{#1}
    {Foo with * applied to #2}
    {Foo without * applied to #2}%
}

\NewDocumentCommand{\bar}{oom}{%
  \IfNoValueTF{#1}
    {Bar has no optional argument and #3}
    {\IfNoValueTF{#2}
      {Bar has one optional argument, #1, and #3}
      {Bar has two optional arguments, #1 and #2, and #3}%
    }%
}

This is "the future" with LaTeX3.
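A minimal compilable file illustrating the \@ifstar pattern from the second answer (the macro names here are illustrative, and the \makeatletter/\makeatother guards are needed because the helper names contain @):

```latex
\documentclass{article}
\makeatletter
\newcommand{\foo}{\@ifstar{\foo@star}{\foo@nostar}}
\newcommand{\foo@star}[1]{Foo with * applied to #1}
\newcommand{\foo@nostar}[1]{Foo without * applied to #1}
\makeatother
\begin{document}
\foo{a}\par
\foo*{b}
\end{document}
```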
Q: Java Regex replacement for negative bytes in string

I'm getting a date from the web (HTML): " abril   2013  Viernes 19". I've tried all the normal regexes with no success. Finally I discovered the string bytes (str.getBytes()), and these are the values:

[-96, 97, 98, 114, 105, 108, -96, -96, -96, 50, 48, 49, 51, -96, -96, 86, 105, 101, 114, 110, 101, 115, -96, 49, 57]

What are these -96? How can I replace 1 or more -96, or whatever empty space it is, by 1 space?

A: The byte -96 (A0 in hexadecimal, or 160 as an unsigned byte) is the non-breaking space in the ISO-8859-1 character encoding, which is probably the encoding you used to transform the string to bytes.

A: The first byte (-96) is negative because in Java bytes are signed. It corresponds to character 160 (256 - 96), which is a non-breaking space. You'll need to specify that character directly in your regular expression.

str = str.replaceAll(String.valueOf((char) 160), " ");

A: You should be able to use the Character.isSpaceChar function to do this. As mentioned in a response to a related question, you can use it in a Java regex like this:

String sampleString = "\u00A0abril\u00A0\u00A02013\u00A0Viernes\u00A019";
String result = sampleString.replaceAll("\\p{javaSpaceChar}", " ");

I think that will do exactly what you want while avoiding any need to deal with raw bytes.
Q: Can I deploy my ASP.NET MVC 4 application on .NET 4?

I want to know whether an ASP.NET MVC 4 application can run on a .NET 4 server. I am trying to deploy my MVC 4 application and I get this error:

500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed.

I am targeting the .NET 4 framework but I am still getting this error, and I am not sure what is happening on the server. I have looked at other posts, but I did not find any that clarify whether MVC 4 can run on a .NET 4 server.

A: Yes. ASP.NET MVC 4 will run on ASP.NET 4 and ASP.NET 4.5. I am running a site on ASP.NET 4, and it is also confirmed in this blog post by Scott Gu. This assumes that you are not targeting any new 4.5 features in your application.

Be sure that you are copying the required MVC 4 .dll files into your bin folder if the server does not already have them installed. This post by Phil Haack explains how to bin deploy version 3. Hanselman has a similar article. You will need to check the version 4 release notes for the current .dlls required.

We may be able to provide a more specific answer if you can provide some additional details, and let us know what version of IIS and Visual Studio you are running.

A: Right-click your MVC4/WebAPI project, choose the "Add Library Package Reference..." item, and check the "ASP.NET MVC" checkbox.

Install .NET Framework 4 Client Profile on your server, because the reference item "System.Net.Http" targets the .NET 4 Client Profile.

Publish your MVC4 project to your server.

Enjoy... :)
Q: How to change page size in Grid.mvc

I have the following controller:

public ActionResult Grid()
{
    schoolEntities db = new schoolEntities();
    List<Student> result = db.Students.ToList();

    // I can't use pagesizelist here, taken from the view
    ViewBag.pageSize = int.Parse(pagesizelist.SelectedValue);

    return View(result);
}

and the related view:

...
@Html.DropDownList("Page",
    new SelectList(new Dictionary<string, int> { { "10", 10 }, { "20", 20 }, { "50", 50 } },
        "Key", "Value"),
    new { id = "pagesizelist" })

<div class="code-cut">
@Html.Grid(Model).Columns(Columns =>
{
    Columns.Add(c => c.StudentID).Titled("Id").Filterable(true);
    Columns.Add(c => c.LastName).Titled("Last name").Filterable(true);
    Columns.Add(c => c.FirstName).Titled("First name").Filterable(true);
    Columns.Add(c => c.EnrollmentDate).Titled("Enrollment date").Filterable(true);
    Columns.Add()
    ...
}).WithPaging(ViewBag.pageSize).Sortable(true)

I would like to somehow set the .WithPaging() parameter dynamically according to the change in the DropDownList.

A: Wrap the "Page" ddl within a form, subscribe to its client-side "onchange", submit the form there, handle the "change size" operation within a separate action method, specify the new page size value, and reload the entire view.

View:

@model IEnumerable<Student>
@using GridMvc.Html

<script type="text/javascript">
    function onDdlPageChange(sender) {
        $("#formIdHere").submit();
    }
</script>

@using (Html.BeginForm("Grid", "Home", FormMethod.Post, new { id = "formIdHere" }))
{
    @Html.DropDownList("Page",
        new SelectList(new Dictionary<string, int> { { "10", 10 }, { "20", 20 }, { "50", 50 } },
            "Key", "Value", ViewBag.pageSize),
        new { id = "pagesizelist", onchange = "onDdlPageChange(this);" })

    @Html.Grid(Model).Columns(Columns =>
    {
        Columns.Add(c => c.StudentID).Titled("Id").Filterable(true);
        Columns.Add(c => c.LastName).Titled("Last name").Filterable(true);
        Columns.Add(c => c.FirstName).Titled("First name").Filterable(true);
        Columns.Add(c => c.EnrollmentDate).Titled("Enrollment date").Filterable(true);
        //Columns.Add();
    }).WithPaging(ViewBag.pageSize).Sortable(true)
}

Controller:

public class HomeController : Controller
{
    public static readonly string viewNameWithGrid = "Grid"; // the same as the action name
    public static readonly int defaultPageSize = 10;

    private static readonly string SavedPageSizeSessionKey = "PageSizeKey";

    public int SavedPageSize
    {
        get
        {
            if (Session[SavedPageSizeSessionKey] == null)
                Session[SavedPageSizeSessionKey] = defaultPageSize;
            return (int)Session[SavedPageSizeSessionKey];
        }
        set { Session[SavedPageSizeSessionKey] = value; }
    }

    // Initial load
    [HttpGet]
    public ActionResult Grid()
    {
        return GetViewWithGrid(SavedPageSize);
    }

    // Change page size
    [HttpPost]
    public ActionResult Grid(int? Page) // Page = DropDownList id
    {
        if (Page.HasValue)
            SavedPageSize = Page.Value;
        return GetViewWithGrid(SavedPageSize);
    }

    ActionResult GetViewWithGrid(int pageSize)
    {
        schoolEntities db = new schoolEntities();
        List<Student> result = db.Students.ToList();
        //ViewBag.pageSize = int.Parse(pagesizelist.SelectedValue);
        ViewBag.pageSize = pageSize;
        return View(viewNameWithGrid, result);
    }
}
Q: Update data in multiple tables

Three tables as follows:

mysql> select * from food;
+--------+------+-------+
| foodid | name | price |
+--------+------+-------+
|      1 | 雞   |   100 |
|      2 | 鴨   |   200 |
|      3 | 魚   |   300 |
|      4 | 肉   |   400 |
+--------+------+-------+
4 rows in set

mysql> select * from drink;
+---------+------+-------+
| drinkid | name | price |
+---------+------+-------+
|       1 | 紅茶 |    50 |
|       2 | 綠茶 |   100 |
|       3 | 奶茶 |   150 |
+---------+------+-------+
3 rows in set

mysql> select * from order_table;
+----+-----------+--------+---------+------------+-------------+-------------+
| id | user_name | foodid | drinkid | food_count | drink_count | total_price |
+----+-----------+--------+---------+------------+-------------+-------------+
|  2 | 小明      |      3 |       2 |          2 |           2 |           0 |
|  3 | 小華      |      1 |       1 |          1 |           8 |           0 |
|  4 | 小英      |      1 |       3 |          3 |           3 |           0 |
|  6 | 小a       |      2 |       1 |          4 |           6 |           0 |
|  7 | 小b       |      2 |       2 |          5 |           4 |           0 |
|  8 | 小c       |      2 |       3 |          6 |          10 |           0 |
|  9 | 大A       |      3 |       1 |          9 |           8 |           0 |
| 10 | 大B       |      3 |       2 |          5 |           4 |           0 |
| 11 | 大C       |      3 |       3 |         10 |           3 |           0 |
+----+-----------+--------+---------+------------+-------------+-------------+

foodid in order_table is linked to foodid in the food table, and drinkid in order_table is linked to drinkid in the drink table. Now I want to calculate the total price:

Total_price = (food.price for order_table.foodid) * order_table.food_count
            + (drink.price for order_table.drinkid) * order_table.drink_count;

So, please let me know the command to update the total price. Thanks a lot.

A: Something like this should be close:

SELECT COALESCE(F.Price,0)*OT.Food_Count + COALESCE(D.Price,0)*OT.Drink_Count Total_price
FROM Order_Table OT
LEFT JOIN Food F ON OT.FoodId = F.FoodId
LEFT JOIN Drink D ON OT.DrinkId = D.DrinkId

And to actually update that column:

UPDATE Order_Table OT
LEFT JOIN Food F ON OT.FoodId = F.FoodId
LEFT JOIN Drink D ON OT.DrinkId = D.DrinkId
SET OT.Total_Price = COALESCE(F.Price,0)*OT.Food_Count + COALESCE(D.Price,0)*OT.Drink_Count
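To check the arithmetic, here is a self-contained sketch in Python/SQLite (not from the original post). SQLite has no MySQL-style UPDATE ... JOIN, so the same calculation is written with correlated subqueries; the prices mirror the tables above, and the two sample rows correspond to ids 2 and 3.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE food  (foodid INTEGER, name TEXT, price INTEGER);
CREATE TABLE drink (drinkid INTEGER, name TEXT, price INTEGER);
CREATE TABLE order_table (id INTEGER, user_name TEXT, foodid INTEGER, drinkid INTEGER,
                          food_count INTEGER, drink_count INTEGER, total_price INTEGER);
INSERT INTO food  VALUES (1,'chicken',100),(2,'duck',200),(3,'fish',300),(4,'meat',400);
INSERT INTO drink VALUES (1,'black tea',50),(2,'green tea',100),(3,'milk tea',150);
INSERT INTO order_table VALUES (2,'customer_a',3,2,2,2,0), (3,'customer_b',1,1,1,8,0);
""")

# Equivalent of the UPDATE ... JOIN from the answer, in SQLite syntax
con.execute("""
UPDATE order_table SET total_price =
      COALESCE((SELECT f.price FROM food  f WHERE f.foodid  = order_table.foodid),  0) * food_count
    + COALESCE((SELECT d.price FROM drink d WHERE d.drinkid = order_table.drinkid), 0) * drink_count
""")

totals = dict(con.execute("SELECT id, total_price FROM order_table"))
print(totals)  # {2: 800, 3: 500} -- e.g. id 2: 300*2 + 100*2 = 800
```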
Q: Bad Company 2 - Any way to remove/disable C4? The last few nights I've killed dudes at MCOMs after they've put the C4 on it. When they die though, the C4 stays on the MCOM (this is stupid IMO - BF2 handled it much better). So someone else just throws a grenade in and the MCOM gets nearly 50% damage on it. Is there a way to disable/remove it after killing someone who has placed it? This also happened on a tank I was in. A: No, as of now, there is no way to disable C4. If you kill the guy who planted it, it will disappear eventually. I haven't found any details on exactly how long it persists, but I know from experience that C4 does disappear within a minute or two of the C4-planter's death. So, if you can keep anyone from hitting the MCOM with explosives for a minute or so (maybe less), you've probably averted the danger the C4 presented. Unfortunately for the defenders, none of this applies to AT Mines, which never disappear on their own, even after the Engineer who planted them is killed.
Q: Square API Won't Return List of Locations - Invalid Request Error

Here is my code:

// Build request URL
$url = 'https://connect.squareup.com/v2/locations/';

// Build and execute CURL request
$options = array(
    CURLOPT_RETURNTRANSFER => true,   // return web page
    CURLOPT_HEADER         => false,  // don't return headers
    CURLOPT_FOLLOWLOCATION => true,   // follow redirects
    CURLOPT_MAXREDIRS      => 10,     // stop after 10 redirects
    CURLOPT_ENCODING       => "",     // handle compressed
    CURLOPT_AUTOREFERER    => true,   // set referrer on redirect
    CURLOPT_CONNECTTIMEOUT => 120,    // time-out on connect
    CURLOPT_TIMEOUT        => 120,    // time-out on response
    CURLOPT_CUSTOMREQUEST  => 'GET',
    CURLOPT_HTTPHEADER     => array(
        'Authorization: Bearer ' . $accessToken,
        'Accept: application/json',
    )
);

$ch = curl_init($url);
curl_setopt_array($ch, $options);
$content = curl_exec($ch);
curl_close($ch);
var_dump($content);

Here is what I get back:

string(158) "{"errors":[{"category":"INVALID_REQUEST_ERROR","code":"NOT_FOUND","detail":"API endpoint for URL path `/v2/locations/` and HTTP method `GET` is not found."}]}"

I am pounding my head on this one... I tried using the Square SDK but calling from it doesn't return a list of locations either. I have created an application on the Square Developer Dashboard. $accessToken is set to the sandbox access token listed there.

A: You have added an extra / at the end of the URL. You should instead use:

$url = 'https://connect.squareup.com/v2/locations';

Other than that your code works!
Q: R Plot_ly list Plots Multiple Plots in .Rmd R-Studio Run Mode, but not when Knitted

I am trying to plot multiple plots from a list of data.frames. I am using Markdown to render the data. Within R-Studio, when I click the ">" run button, I get all the plots. The code I am trying to use is:

### Plot each list

```{r plotSWBS6IByPhaseAndWave, echo=TRUE, eval=TRUE}
plotList <- list()
for(i in 1:length(seriesFigureSaleDataBS6I_PhaseWave)) {
  plotList[[i]] <- plot_ly(data = seriesFigureSaleDataBS6I_PhaseWave[[i]],
                           x = ~priceDate,
                           y = ~amount,
                           color = ~actionFigurePackageName,
                           colors = "Pastel2",
                           type = "scatter",
                           mode = "markers") %>%
    layout(title = paste("Phase", seriesFigureSaleDataBS6I_PhaseWave[[i]]$Phase,
                         "& Wave", seriesFigureSaleDataBS6I_PhaseWave[[i]]$Wave))
}

# p <- lapply(seriesFigureSaleDataBS6I_PhaseWave,
#             function(phaseWaveRow) plot_ly(data = phaseWaveRow, x = ~priceDate, y = ~amount,
#                                            color = ~actionFigureUniqueId, colors = "Pastel2"))

print(class(seriesFigureSaleDataBS6I_PhaseWave))
print(summary(seriesFigureSaleDataBS6I_PhaseWave))
#rm(seriesFigureSaleDataBS6I_PhaseWave)

plotList
```

The list looks like:

print(summary(seriesFigureSaleDataBS6I_PhaseWave))
            Length Class      Mode
40th.1      35     data.frame list
40th.2      35     data.frame list
40th.Legacy 35     data.frame list
Blue.5      35     data.frame list
Blue.6      35     data.frame list
Blue.7      35     data.frame list
Blue.8      35     data.frame list
...

The output in run mode looks like:

The knit output just gives me the R console output:

## [[1]]
##
## [[2]]
##
## [[3]]
##
## [[4]]
##
## [[5]]

If I try the following code, I lose the R console output (which is good) and get the plots in R-Studio "run" mode, but get no plot output in knit mode:

for(i in 1:length(seriesFigureSaleDataBS6I_PhaseWave)) {
  print(plot_ly(data = seriesFigureSaleDataBS6I_PhaseWave[[i]],
                x = ~priceDate,
                y = ~amount,
                color = ~actionFigurePackageName,
                colors = "Pastel2",
                type = "scatter",
                mode = "markers") %>%
          layout(title = paste("Phase", seriesFigureSaleDataBS6I_PhaseWave[[i]]$Phase,
                               "& Wave", seriesFigureSaleDataBS6I_PhaseWave[[i]]$Wave)))
}

A: Use the htmltools::tagList function:

---
title: "Knit a List of Plotly Graphs"
output: html_document
---

```{r, include = F}
library(dplyr)
library(plotly)
library(htmltools)
```

```{r, echo=TRUE, eval=TRUE}
dat <- list(mtcars, mtcars)

plots <- lapply(dat, function(x) {
  plot_ly(data = x, x = ~hp, y = ~mpg) %>%
    add_trace(type = "scatter", mode = "markers")
})

tagList(plots)
```
Q: How can I add HTML content using document.createElement() and appendChild()? So I am creating a simple login page and have chosen to display a modal dialog that will state 'Logging you in...' and then have a spinning loader, I can easily add the text I want with document.createTextNode('Logging you in...);. My issue is that I don't know how to then add my CSS3 spinning loader underneath the text. Here is my code: <script> function activateModal() { // initialize modal element var modalEl = document.createElement('h2'); modalEl.setAttribute("class", "modal-header"); modalEl.style.width = '400px'; modalEl.style.height = '300px'; modalEl.style.margin = '100px auto'; modalEl.style.backgroundColor = '#828282'; modalEl.style.color = '#ffffff'; // add content var loginTxt = document.createTextNode('Attempting to log you in...'); modalEl.appendChild(loginTxt); // show modal mui.overlay('on', modalEl); } </script> As requested; https://jsfiddle.net/jackherer/ffghkc8k/ I'd like to be able to place a loader/spinner in there ? like this one .. https://codepen.io/jackherer/pen/dpBzym As I'm just beginning to learn javascript I would really appreciate any advice people A: OK so this is what you will need. From your expected outcome in codepen, I've forked your jsfiddle and modified a bit working example jsfiddle . You already have the scss animation ready, so it's just adding to your modal box. To add the spinner, update the activateModal script function activateModal() { // initialize modal element var modalEl = document.createElement('h3'); modalEl.setAttribute("class", "modal-header"); modalEl.style.width = '400px'; modalEl.style.height = '300px'; modalEl.style.margin = '100px auto'; modalEl.style.backgroundColor = '#828282'; modalEl.style.color = '#ffffff'; modalEl.style.textAlign = 'center'; //Extra addition. You may remove this. 
// A span element so you can add styling var loginTxt = document.createElement("span"); loginTxt.innerHTML = 'Attempting to log you in...'; modalEl.appendChild(loginTxt); modalEl.setAttribute("class", "loader"); //SPINNER START var newDiv = document.createElement("div"); newDiv.setAttribute('class', 'loader'); // For simplicity, I chose to use the svg content as inner html. You may create this as an element newDiv.innerHTML = '<svg class="circular" viewBox="25 25 50 50"> <circle class="path" cx="50" cy="50" r="20" fill="none" stroke-width="2" stroke-miterlimit="10"/></svg>'; modalEl.appendChild(newDiv); //SPINNER END // show modal mui.overlay('on', modalEl); } To be honest you seem to have everything sorted. If you expected a different outcome let us know. P.S: I understand :) You'll get used to it.
Q: Secure authentication in jboss portal I am developing a Portal application and using jboss portal for this purpose. My current application authenticates the user from the jboss DB, using the j_security_check servlet with username and password as POST parameters. Now, if I use firebug or any HTTP monitor, then I can see the username and password, which is a security issue. What is a better and more secure way to authenticate in JBoss? A: Securing web applications is a vast subject. It entirely depends on your needs. From your post, what you want (to start with) is secure communication. You can use SSL with JBoss to ensure a secure channel. I recommend taking a look at the JBoss security documentation. I am sure you will have more concrete doubts / concerns when you start working with it - then we will try to help :) I found a very good source of JBoss information: JBoss in Action. It refers to JBoss 5, so many areas might be outdated, but others would still apply. I am using JBoss 6 and it has been of great help.
Q: Managed package with chatter FeedItem on env where Chatter is disabled If the managed package has functionality based on FeedItem and Chatter enabled, what will happen if the target env doesn't have Chatter enabled? What are the practices to handle the code in this case? A: You will not be able to install the managed package until you switch on Chatter in the target org; unfortunately you will receive an installation error when you try to install. You could make it a configuration/custom setting and not create the dependency on Chatter in your DE org, i.e. refer to FeedItem dynamically in your code, thus not creating a dependency on having Chatter installed in the target. Using dynamic code would also allow you to handle the failure gracefully at run-time. Schema.SObjectType targetType = Schema.getGlobalDescribe().get('FeedItem'); if(targetType == null) { System.debug('Chatter not enabled'); } else { SObject feedItem = targetType.newSObject(); // do stuff with FeedItem }
Q: Turning off Toyota Yaris ABS system I unplugged the ABS fuse from my Toyota Yaris 2006 1.5 Hatchback CE to turn off the ABS system, which was giving me trouble when trying to stop on snowy/icy roads. The car stops much better now, so that problem is fixed. However, the 50 Amp fuse says ABS1/VSC1 and I don't know what VSC1 stands for. Is it something important I might miss later on? A: VSC stands for Vehicle Stability Control. It's the part which works in conjunction with the ABS to keep your vehicle stable during a bad braking situation. It will also help you in corners and such which might be a little too much for your vehicle. You decide whether it's important to you or not.
Q: images in assets folder not loading I'm getting a 304 error when I try to load a png file from the assets folder. I'm trying to load pictures from the assets folder. const path = require('path'); const express = require('express'); const webpack = require('webpack'); const config = require('./webpack.config.dev'); const app = express(); const compiler = webpack(config); app.use("/assets", express.static("assets")); app.use(require('webpack-dev-middleware')(compiler, { noInfo: true, publicPath: config.output.publicPath })); app.use(require('webpack-hot-middleware')(compiler)); app.get('*', (req, res) => { res.sendFile(path.join(__dirname, 'index.html')); }); app.listen(process.env.PORT || 3000, (err) => { if (err) { console.log(err); return; } console.log('Listening at http://localhost:3000'); }); file tree looks like this Project Name |-- /assets |-- /dist |-- /src `-- server.js How should I approach this problem? Is it a webpack problem or a simple express one? Thanks! A: HTTP 304 ("Not Modified") is not an error: it means the browser is loading the image from its cache. If you have been making changes, try clearing your cache or opening the browser in private browsing mode so it won't load from it. It may have cached an image/location that is still invalid, which is why it isn't loading.
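As a side note on what a 304 actually is: the browser sends a conditional request carrying the validator it cached (If-None-Match / If-Modified-Since), and the server answers 304 with an empty body when the cached copy is still current. A minimal Python sketch of that revalidation logic (the helper names make_etag and respond are invented for illustration; this is not Express's API):

```python
import hashlib

def make_etag(body: bytes) -> str:
    """A strong ETag: a hash of the current file contents."""
    return '"%s"' % hashlib.sha256(body).hexdigest()[:16]

def respond(request_headers: dict, body: bytes):
    """Return (status, payload) the way a static file server would."""
    etag = make_etag(body)
    if request_headers.get("If-None-Match") == etag:
        # Client's cached copy is still current: no body is resent.
        return 304, b""
    return 200, body

# First request: no validator, so the full body is sent with a 200.
first_status, payload = respond({}, b"png bytes")

# Revalidation: the client echoes the ETag it cached; the body is skipped.
etag = make_etag(b"png bytes")
revalidated_status, empty = respond({"If-None-Match": etag}, b"png bytes")
```

So a 304 in the network tab just means the revalidation succeeded; if the image still doesn't display, the problem is what was cached, not the status code itself.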
Q: How is COSMOS possible? I just saw that COSMOS is an OS written in MSIL langage, and I just wonder how that is possible? I always thought that MSIL needed a CLR to work, and CLR needed an OS behind it. Thanks for explanations. A: I refer you to the second and third sentences of the Wikipedia article on COSMOS, which I reproduce for you here: Cosmos is an open source operating system written in C#. It also encompasses a compiler (IL2CPU) for converting Common Intermediate Language (.NET) bytecode into native instructions. The operating system is compiled together with a user program and associated libraries using IL2CPU to create a bootable standalone native binary
Q: How can I grant AI Platform training jobs access to Cloud SQL resources in the same project? I have an image which will run my training job. The training data is in a Cloud SQL database. When I run the cloud_sql_proxy on my local machine, the container can connect just fine. ❯ docker run --rm us.gcr.io/myproject/trainer:latest mysql -uroot -h"'172.17.0.2'" -e"'show databases;'" Running: `mysql -uroot -h'172.17.0.2' -e'show databases;'` Database information_schema mytrainingdatagoeshere mysql performance_schema I'm using mysql just to test the connection, the actual training command is elsewhere in the container. When I try this via the AI Platform, I can't connect. ❯ gcloud ai-platform jobs submit training firsttry3 \ --region us-west2 \ --master-image-uri us.gcr.io/myproject/trainer:latest \ -- \ mysql -uroot -h"'34.94.1.2'" -e"'show tables;'" Job [firsttry3] submitted successfully. Your job is still active. You may view the status of your job with the command $ gcloud ai-platform jobs describe firsttry3 or continue streaming the logs with the command $ gcloud ai-platform jobs stream-logs firsttry3 jobId: firsttry3 state: QUEUED ❯ gcloud ai-platform jobs stream-logs firsttry3 INFO 2019-12-16 22:58:23 -0700 service Validating job requirements... INFO 2019-12-16 22:58:23 -0700 service Job creation request has been successfully validated. INFO 2019-12-16 22:58:23 -0700 service Job firsttry3 is queued. INFO 2019-12-16 22:58:24 -0700 service Waiting for job to be provisioned. INFO 2019-12-16 22:58:26 -0700 service Waiting for training program to start. ERROR 2019-12-16 22:59:32 -0700 master-replica-0 Entered Slicetool Container ERROR 2019-12-16 22:59:32 -0700 master-replica-0 Running: `mysql -uroot -h'34.94.1.2' -e'show tables;'` ERROR 2019-12-16 23:01:44 -0700 master-replica-0 ERROR 2003 (HY000): Can't connect to MySQL server on '34.94.1.2' It seems like the host isn't accessible from wherever the job gets run. How can I grant AI platform access to Cloud Sql? 
I have considered including the cloud sql proxy in the training container, and then injecting service account credentials as user args, but since they're both in the same project I was hoping that there would be no need for this step. Are these hopes misplaced? A: So unfortunately, not all Cloud products get sandboxed into the same network, so you won't be able to connect automatically between products. So the issue you're having is that AI Platform can't automatically reach the Cloud SQL instance at the 34.xx.x.x IP address. There's a couple ways you can look into fixing it, although caveat, I don't know AI Platform's networking setup well (I'll have to do it and blog about it here soonish). First, is you can try to see if you can connect AI Platform to a VPC (Virtual Private Cloud) network, and put your Cloud SQL instance into the same VPC. That will allow them to talk to each other over a Private IP (going to likely be different than the IP you have now). In the Connection details for the Cloud SQL instance you should see if you have a Private IP, and if not, you can enable it in the instance settings (requires a shutdown and restart). Otherwise, you can be sure a Public IP address is setup, which might be the 34.xx.x.x IP, and then allowlist (whitelist, but I'm trying to change the terminology) the Cloud IP address for AI Platform. You can read about the way GCP handles IP ranges here: https://cloud.google.com/compute/docs/ip-addresses/ Once those ranges are added to the Authorized Networks in the Cloud SQL connection settings you should be able to connect directly from AI Platform. Original response: Where's the proxy running when you're trying to connect to it from the AI platform? Still on your local machine? So basically, in scenario 1, you're running the container locally with docker run, and connecting to your local IP: 172.17.0.2, and then when you shift up to the AI platform, you're connecting to your local machine at 34.xx.x.x? 
So first, you probably want to remove your actual home IP address from your original question. People are rude on the internet and that could end badly if that's really your home IP. Second, how sure are you that you've opened a hole in your firewall to allow traffic in from the AI platform? Generally speaking, that would be where I'd assume the issue is: your connection on your local machine is being refused, and that's what produces the "unable to connect" error.
Q: Configure Jasig CAS deployerContextConfig using cas.properties I'm working on an implementation of Jasig CAS version 3.5.2.1 CAS 3.5.2.1 is a Spring 3.1 MVC application. Currently, the app uses a ContextLoaderListener to populate the WebApplicationContext from an xml file called deployerContextConfig.xml Can I use properties (such as those loaded from cas.properties) within the deployerContextConfig.xml file? If so, how? A: I am on CAS 3.5.0 but I think it would be the same as your version. First the web.xml will load all the *.xml file inside /WEB-INF/spring-configuration directory and /WEB-INF/deployerConfigContext.xml <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/spring-configuration/*.xml /WEB-INF/deployerConfigContext.xml </param-value> </context-param> The /WEB-INF/spring-configuration/propertyFileConfigurer.xml will load the cas.properties file <bean id="propertyPlaceholderConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer" p:location="/WEB-INF/cas.properties" /> Inside the deployerConfigContext.xml: <!-- Define the contextSource --> <bean id="contextSourceRepository" class="org.springframework.ldap.core.support.LdapContextSource"> <property name="pooled" value="false" /> <property name="urls"> <bean class="org.springframework.util.StringUtils" factory-method="commaDelimitedListToSet"> <constructor-arg type="java.lang.String" value="${ldap.repository.server.urls}" /> </bean> </property> <property name="userDn" value="${ldap.authentication.manager.userdn}" /> <property name="password" value="${ldap.authentication.manager.password}" /> <property name="baseEnvironmentProperties"> <map> <entry key="com.sun.jndi.ldap.connect.timeout" value="${ldap.authentication.jndi.connect.timeout}" /> <entry key="com.sun.jndi.ldap.read.timeout" value="${ldap.authentication.jndi.read.timeout}" /> <entry key="java.naming.security.authentication" value="${ldap.authentication.jndi.security.level}" /> 
</map> </property> </bean> And your cas.properties: ldap.repository.server.urls=ldap://ldap.usfca.edu:389
Q: The difference between XPath's //item[1] and descendant-or-self::item[1] This is probably a very basic question, but I don't understand the difference in the title. For example, suppose we have the following XML: <?xml version="1.0" encoding="UTF-8"?> <root> <item attr1="val1" attr2="val2"> <item attr3="val3"> <item attr4="val4"> <item attr5="val5"/> </item> </item> </item> </root> If I apply the following stylesheet to it: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xs" version="2.0"> <xsl:output indent="yes"/> <xsl:template match="/"> <root> <p1-1> <xsl:copy-of select="//item"/> </p1-1> <p1-2> <xsl:copy-of select="descendant-or-self::item"/> </p1-2> </root> </xsl:template> </xsl:stylesheet> the same nodes are selected for both p1-1 and p1-2: <item attr1="val1" attr2="val2"> <item attr3="val3"> <item attr4="val4"> <item attr5="val5"/> </item> </item> </item> <item attr3="val3"> <item attr4="val4"> <item attr5="val5"/> </item> </item> <item attr4="val4"> <item attr5="val5"/> </item> <item attr5="val5"/> However, if I change it to <xsl:template match="/"> <root> <p1-1> <xsl:copy-of select="//item[1]"/> </p1-1> <p1-2> <xsl:copy-of select="descendant-or-self::item[1]"/> </p1-2> </root> </xsl:template> the result is <?xml version="1.0" encoding="UTF-8"?> <root> <p1-1> <item attr1="val1" attr2="val2"> <item attr3="val3"> <item attr4="val4"> <item attr5="val5"/> </item> </item> </item> <item attr3="val3"> <item attr4="val4"> <item attr5="val5"/> </item> </item> <item attr4="val4"> <item attr5="val5"/> </item> <item attr5="val5"/> </p1-1> <p1-2> <item attr1="val1" attr2="val2"> <item attr3="val3"> <item attr4="val4"> <item attr5="val5"/> </item> </item> </item> </p1-2> </root> The predicate [1] produces different results. According to the XPath specification, the abbreviated //item[1] is equivalent to (fn:root(self::node()) treat as document-node())/descendant-or-self::node()/child::item[1].
3.2 Path Expressions A "//" at the beginning of a path expression is an abbreviation for the initial steps (fn:root(self::node()) treat as document-node())/descendant-or-self::node()/ (however, "//" by itself is not a valid path expression [err:XPST0003].) It's a little confusing, but put simply the question is: seen from the document node, how do descendant-or-self::node()/child::item[1] and descendant-or-self::item[1] differ? I don't really understand why the two produce different results. I would appreciate it if someone could explain. [Addendum] Incidentally, the specification itself says the results differ. 3.3.5 Abbreviated Syntax Note: The path expression //para[1] does not mean the same as the path expression /descendant::para[1]. The latter selects the first descendant para element; the former selects all descendant para elements that are the first para children of their respective parents. A: "descendant-or-self::node()/child::item[1] takes its position among child elements, so it is always a matter of how many preceding siblings a node has; for descendant-or-self::item[1] the context is each descendant item, so it is simply the first descendant-or-self::item, and preceding siblings don't matter at all. Is that the right interpretation?" Yes, I think that is right. In a word, it comes down to whether the predicate [1] attaches to the group as a whole or not. //item[1] = descendant-or-self::node()/child::item[1] As @SOTH has already said, descendant-or-self::node()/child::item[1] means "first select descendant-or-self::node(), then from each of those select child::item[1]", so every item element that occurs first among its item siblings is selected. The result is the set of first-child item elements, and in the sample several item elements are indeed selected. descendant-or-self::item[1] = (descendant-or-self::item)[1] = (//item)[1] = (descendant-or-self::node()/child::item)[1] By contrast, descendant-or-self::item[1] means "first select descendant-or-self::item, then pick the first element of that result group with the predicate [1]". Since exactly one element is always picked from the group, only one item element is selected in the sample. To summarize: //item[1] and (//item)[1] differ in what the predicate attaches to; //item[1] selects every item[1] (each first item child of its parent); (//item)[1] selects the head of the whole item group.
Q: error in Multiple Select statements in Insert statement I am writing a query which has multiple SELECT statements in an INSERT statement INSERT INTO dbo.Products (ProductName, SupplierID, CategoryID, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued) VALUES ('Twinkies' , (SELECT SupplierID FROM dbo.Suppliers WHERE CompanyName = 'Lyngbysild'), (SELECT CategoryID FROM dbo.Categories WHERE CategoryName = 'Confections'), 0, 0, 10, 0) Actually it gives these errors: Msg 1046, Level 15, State 1, Line 4 Subqueries are not allowed in this context. Only scalar expressions are allowed. Msg 102, Level 15, State 1, Line 4 Incorrect syntax near ','. Note that these two SELECT statements each return only one value. A: Just change VALUES to SELECT and remove the outer parentheses. INSERT INTO dbo.Products (ProductName, SupplierID, CategoryID, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued) SELECT 'Twinkies' , (SELECT SupplierID FROM dbo.Suppliers WHERE CompanyName = 'Lyngbysild'), (SELECT CategoryID FROM dbo.Categories WHERE CategoryName = 'Confections'), 0, 0, 10, 0 You may also need a TOP 1 on the subexpressions, but that would give a different error message: subquery returned more than one value.
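For reference, the accepted pattern can be exercised end to end with Python's built-in sqlite3 module (the schema below is a stripped-down invention for the demo; note that SQLite, unlike SQL Server, happens to accept scalar subqueries inside VALUES, but the INSERT ... SELECT form shown in the answer works in both engines):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Suppliers  (SupplierID INTEGER PRIMARY KEY, CompanyName TEXT);
    CREATE TABLE Categories (CategoryID INTEGER PRIMARY KEY, CategoryName TEXT);
    CREATE TABLE Products   (ProductName TEXT, SupplierID INTEGER, CategoryID INTEGER,
                             UnitsInStock INTEGER, UnitsOnOrder INTEGER,
                             ReorderLevel INTEGER, Discontinued INTEGER);
    INSERT INTO Suppliers  VALUES (7, 'Lyngbysild');
    INSERT INTO Categories VALUES (3, 'Confections');
""")

# INSERT ... SELECT: the scalar subqueries become columns of a one-row SELECT,
# so each lookup is evaluated once and fed straight into the insert.
conn.execute("""
    INSERT INTO Products (ProductName, SupplierID, CategoryID,
                          UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued)
    SELECT 'Twinkies',
           (SELECT SupplierID FROM Suppliers  WHERE CompanyName  = 'Lyngbysild'),
           (SELECT CategoryID FROM Categories WHERE CategoryName = 'Confections'),
           0, 0, 10, 0
""")

row = conn.execute("SELECT ProductName, SupplierID, CategoryID FROM Products").fetchone()
```

If a lookup matches no row, the scalar subquery yields NULL rather than raising, which is worth guarding for in production code.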
Q: Are there any reasons to not install binaries, etc in a user's home dir? I'm a user on a shared computing environment. More often than not, the system doesn't have most of the libraries I need, or the binaries and programs are at least 4-5 versions old. It's so cumbersome to email the sysadmins each time to update packages etc, that I've started installing them to a folder in my home dir. My question is: are there any negatives to doing this? Can I also install the latest version of my shell to my home dir and chsh to use that? Certain packages have a lot of files. Will this affect login times (I presume the system has to stat() my entire home dir and check with quota)? A: Typical practice is to have a ~/.bin directory with symlinks to your local executables. That way, you don't have to update your PATH for each new app, just the links. Yes, what you describe is a common practice, though usually one doesn't need the number of packages/libraries you seem to. Do be careful if applications or libraries make assumptions about where they're installed... There should not be any significant effect on login times.
Q: AzureDevops Create Project library I would like to create projects, users and so on via C#. There is a library from Microsoft for adding work items, comments, ... (see here). Is there something similar for creating projects and users? I only found the documentation for the API here. A: Is there something similar for creating projects and users? You can use the ProjectHttpClient.QueueCreateProject method to create a project. It comes from Microsoft.TeamFoundation.Core.WebApi.dll, which is also contained in the Azure DevOps Services .NET SDK (just like the WorkItemTrackingHttpClient class shared in your question). And here's the official sample on how to use the C# client API to create a project.
Q: Instantiating IEnumerable> with an array in C# I probably miss something with array construction syntax in c#. If I have the following function (thanks to type variance feature in c# 4): IEnumerable<IEnumerable<T>> test<T>() { return new List<List<T>>(); } How can I write similar one that instantiates array of array? A: IEnumerable<IEnumerable<T>> test<T>() { return new [] { new T[] {} }; } I noticed that there used to be a bug in the mono compiler (at least, the 2.6.7 gmcs would crash...) that required you to spell new IEnumerable<T>[] { new T[] {}}; This is no longer a problem with e.g. mono 2.11 Alternatively you could use a yield block (which would be less efficient) IEnumerable<IEnumerable<T>> test<T>() { yield return new T[] {}; } Enumerable.Empty<> Are you aware of Enumerable.Empty<>? IEnumerable<IEnumerable<T>> test<T>() { return Enumerable.Empty<IEnumerable<T>>(); }
Q: Confusion with 'lifting' functions in Scala In the book Functional Programming In Scala, there's an example of 'Lift' where a function with type A => B is promoted to Option[A] => Option[B]. This is how lift is implemented: def lift[A,B](f: A => B):Option[A] => Option[B] = _ map f I have a couple of confusions regarding this: The first one is, what is the '_' here? And secondly, when I remove the return type from the def, expecting the type-inference to do its magic I get the following exception: scala> def lift[A,B](f: A => B) = _ map f <console>:7: error: missing parameter type for expanded function ((x$1) => x$1.map(f)) def lift[A,B](f: A => B) = _ map f Can somebody explain what's going on here? Thanks A: lift is a function that returns a function. The function returned lifts a value (unnamed) by applying the function f to that value. The unnamed value to be lifted is referred to as _. You can certainly give it a more explicit name: def lift[A,B](f: A => B): Option[A] => Option[B] = { value => value map f } The return type of that function (the one being returned) needs to be either stated explicitly or determined implicitly. As written the compile can infer that what is to be returned is an Option[B] (more specifically, lift is returning a function Option[A] => Option[B] (explicitly stated), while that function has return type Option[B] (determined implicitly)). Without that type information, the compiler needs some other indication of what the return type(s) are. Alternately, define lift thus: def lift[A,B](f: A => B) = { value: Option[A] => value map f } Here you're explicitly stating the type for value, and the compiler can infer the return type of the function returned to be Option[B], because f: A => B will map type A to B, the return type for lift to be Option[A] => Option[B].
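An analogous definition in Python may help show what the underscore stands for; here typing.Optional and None stand in for Scala's Option and None (a loose analogy, since Python has no real Option type and no map on it):

```python
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def lift(f: Callable[[A], B]) -> Callable[[Optional[A]], Optional[B]]:
    # The returned closure plays the role of Scala's `_ map f`:
    # an absent value (None) propagates untouched, a present value is transformed.
    def lifted(value: Optional[A]) -> Optional[B]:
        return None if value is None else f(value)
    return lifted

double = lift(lambda x: x * 2)
```

As in the Scala version, lift itself is a function that returns a function; the explicit `value` parameter here corresponds to naming the underscore.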
Q: Neovim spellcheck: all the words highlighted as misspelled I've been trying to set up spell checking in neovim. init.vim: setlocal spell spelllang=ru_ru The first time, it downloaded ru.utf-8.spl and ru.utf-8.sug, just as it probably should. Unfortunately, as soon as I try :set spell/invspell, all the Russian words get highlighted. What am I missing? UPD: English spell checking doesn't work either (everything misspelled). The files are here: /usr/share/nvim/runtime/spell/en.utf-8.spl, which looks OK. UPD: English spell checking started to work, I am not sure why (I tried spelllang=ru_ru,en_us); Russian won't work even as a single option. A: set spelllang=ru_ru,en_us nnoremap 99 :set invspell<CR> This solution is reasonably convenient. A better one is welcome :)
Q: Graphics API for Game Development Well, I've researched a lot and have come to a basic conclusion on APIs: as it seems the API choice for learning doesn't matter that much, as long as you can really learn 3D programming, or so it seems. My background/current fields of study: I have a sort of solid background on programming, algorithms and object oriented designs. I have made a few 2D games: using XNA(C#) and allegro 5(C++), I even wrote a sort of small framework for a few 2D games I had to make for a course (using allegro) to speed up my game making process, so I am quite familiar with the basic concepts of game logic and game loop structure. I have made a very few experiments with 3D using XNA, just got the basic idea actually. I have a basic, but solid, knowledge on Linear Algebra and Calculus. I am currently studying Artificial Intelligence, Machine Learning (to be more specific, in case anyone wants to know: rules extraction for relational databases using Markov Logic for a reading system), doing some research in this field right now. Well, I love games and have a lot of fun with game programming especially AI and game logic. I want to move to 3D game development now, and even though I don't really want to work too much on the graphics programming and stuff, I want to have the knowledge (well, I'd like to know what's going on and how to take advantage of that: if I want to write some good AI for 3D environments I need to have a good idea of how to deal with it). I just started (today) reading the examples and documentation of Direct3D (9 and 11) from what I've read if I'm going to learn DirectX I should go for 11 if low-hardware compatibility and XP users aren't my goals. I'd like your opinion on whether I should use DirectX or something like SDL/OpenGL/OpenAL I have nothing against none I just want to know which would be better for someone who wants to focus on some new AI for games and game logic. 
I will be using C++/Python for the games, some C#/XML for some tools and such. Well, I'm basically going to be developing stuff in/for a Windows environment (nothing against Linux; I use Linux for some of my AI research software). Well, I hope I made my question clear, and thanks in advance to anyone willing to help. A: As stated here and by @leppie, OpenGL is cross-platform, so you only have to learn one framework, and it will work on most platforms today. Although I have never really worked with OpenGL, in my experience with open source projects, good documentation is hard to come by, although you will most likely find a ton of examples online. So you might have a hard time finding good documentation (again, basing myself on previous open source projects I have used). On the other hand, DirectX is something which runs in a Windows environment. It also seems to be the tool of choice for many game developers, which seems to be what you are after. Using DirectX will also allow you to exploit Visual Studio (which in my opinion is a very, if not the most, powerful IDE I have ever used) as shown here. All in all though, what you are asking has been a debate which has been going on for years (here and here just to name a few). EDIT: Like most things in the programming world, a certain platform will give you an edge if you try to do a given list of things. Different lists require different platforms/frameworks, etc. I have never really messed around with these technologies, but I think that Microsoft sometimes tends to make the life of developers using their products slightly easier than those using open source products. What you could do is try to go through some code, or try to come up with your own, to do the same thing using the two frameworks and then choose the easier one.
Q: Is S a group under matrix addition Another matrix question! Let $$S=\{A \in M_2(\mathbb{R}):f(A)=0\}\text{ and }f\left(\begin{bmatrix}a&b\\c&d \end{bmatrix}\right)=b$$ Is S a group under matrix addition. Either prove that (S,+) is a group or show that one of the group axioms fails. Group axioms: (G1) the set G is closed under the operation $*$ (G2) The operation $*$ is associative - $(a*b)*c=a*(b*c)$ (G3) There exists an element $e \in G$ such that $a*e=a=e*a$ for all $a \in G$ (G4) for each $a \in G$ there exists an element $a^{-1}$ of G such that $a*a^{-1}=e=a^{-1}*a$ Here's what I've got so far: (G1) let $A=\begin{bmatrix}a&0\\c&d\end{bmatrix}$ and $B=\begin{bmatrix}e&0\\g&h \end{bmatrix}$ where $A, B \in S$ then $A+B=\begin{bmatrix}a+e&0\\c+g&d+h \end{bmatrix}$ which is $\in S$ so it is closed. - (G1) is satisfied (G2) using $A$ and $B$ as above and $C=\begin{bmatrix}j&0\\m&n \end{bmatrix},\\ (A+B)+C = \begin{bmatrix}a+e&0\\c+g&d+h \end{bmatrix}+\begin{bmatrix}j&0\\m&n \end{bmatrix} = \begin{bmatrix}a+e+j&0\\c+g+m&d+h+n\end{bmatrix}$ and $A+(B+C)=\begin{bmatrix}a&0\\c&d\end{bmatrix}+\begin{bmatrix}e+j&0\\g+m&h+n\end{bmatrix}=\begin{bmatrix}a+e+j&0\\c+g+m&d+h+n\end{bmatrix}$ so $(A+B)+C=A+(B+C)$ - (G2) is satisfied (G3) let $e=\begin{bmatrix}0&0\\0&0\end{bmatrix}$ pretty easy to see that $A+e=A=e+A$ - (G3) satisfied (G4) let $A^{-1}=\begin{bmatrix}-a&0\\-c&-d\end{bmatrix}$ $A+A^{-1}=\begin{bmatrix}a&0\\c&d\end{bmatrix}+\begin{bmatrix}-a&0\\-c&-d\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}=e$ and $A^{-1}+A=\begin{bmatrix}-a&0\\-c&-d\end{bmatrix}+\begin{bmatrix}a&0\\c&d\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}=e$ (G4) satisfied So I have proved all the axioms (hopefully correctly?!) therefore we can say that (S,+) is a group. I know that's a bit of reading but I would really appreciate your confirmation that this is right / where I went wrong if it isn't. 
The one I'm really worried about is G1, and I know then that if this is wrong and (S,+) is not closed then I don't need to worry about the rest. And also if there is another way to do it? A: This is great! So here's some feedback on the presentation: Proof of (G1): This reads backwards in my opinion: "let $A=\begin{bmatrix}a&0\\c&d\end{bmatrix}$ and $B=\begin{bmatrix}e&0\\g&h \end{bmatrix}$ where $A, B \in S$..." I suggest writing: "let $A, B \in S$. Then, by definition, $A=\begin{bmatrix}a&0\\c&d\end{bmatrix}$ and $B=\begin{bmatrix}e&0\\g&h \end{bmatrix}$ where $a,c,d,e,g,h \in \mathbb{R}$. ..." "...which is $\in S$ so it is closed" could be extended to: "...which shows $f(A+B)=0$ so $A+B \in S$, and $S$ is closed under matrix addition." Again, I suggest writing "let $C \in S$, so $C=$...". Also " - (G2) satisfied" is a bit abbreviated. Nothing to add. "let $A^{-1}=\begin{bmatrix}-a&0\\-c&-d\end{bmatrix}$" This should be denoted $-A$ not $A^{-1}$. Additionally, the notation $-A$ and $A^{-1}$ somewhat presupposes that an inverse exists. Further comment: If we know $(M_2(\mathbb{R}),+)$ is a group, then we need only to check that $S$ is a subgroup. We could use a subgroup test, which is a bit quicker. We just check (a) that $S$ is non-empty, and (b) that if $A,B \in S$, then $A-B \in S$ too.
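As a sanity check alongside the proof, the four axioms can be spot-checked numerically. A quick Python sketch, encoding a matrix in S as a tuple (a, b, c, d) with b = 0 (the sample values are arbitrary):

```python
# Numeric spot-check of the four axioms for S = {A in M2(R) : f(A) = b = 0},
# representing a 2x2 matrix [[a, b], [c, d]] as the tuple (a, b, c, d).
def f(A):
    """The 'upper-right entry' functional defining membership in S."""
    return A[1]

def add(A, B):
    """Entrywise matrix addition."""
    return tuple(x + y for x, y in zip(A, B))

def neg(A):
    """Additive inverse (entrywise negation)."""
    return tuple(-x for x in A)

A = (1.5, 0.0, -2.0, 3.0)      # arbitrary members of S (b-entry = 0)
B = (-4.0, 0.0, 7.0, 0.5)
C = (2.0, 0.0, 0.0, -1.0)
e = (0.0, 0.0, 0.0, 0.0)       # the zero matrix, identity of (S, +)

closure     = f(add(A, B)) == 0                       # (G1)
associative = add(add(A, B), C) == add(A, add(B, C))  # (G2)
identity    = add(A, e) == A == add(e, A)             # (G3)
inverse     = add(A, neg(A)) == e == add(neg(A), A)   # (G4)
```

A numeric check is of course no substitute for the proof; it only confirms the algebra for particular instances.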
Q: Text input top and left border appears wrong I have a text input but the color is not right on top and left border, why is this, and how can I fix this? http://jsfiddle.net/9ehBs/ HTML <input type="text" class="searchbox" /> CSS .searchbox { width: 200px; height: 30px; padding: 10px; font-size: 28px; font-weight: 900; font-family: Ebrima; color: rgb(54,54,54); border-width: 13px; border-color: rgb(46,94,115); float: left; margin-left: 10px; margin-top: 10px; } A: You need border-style: solid. See your updated fiddle. It would be much more efficient to use the shorthand, i.e. border: 13px solid rgb(46,94,115);
Q: PHP ArrayAccess set multidimensional EDIT: I realized the amount of text might be intimidating. The essence of this question: How to implement ArrayAccess in a way that makes setting multidimensional values possible?     I am aware that this was discussed here already but I seem unable to implement the ArrayAccess interface correctly. Basically, I've got a class to handle the app configuration with an array and implemented ArrayAccess. Retrieving values works fine, even values from nested keys ($port = $config['app']['port'];). Setting values works only for one-dimensional arrays, though: As soon as I try to (un)set a value (eg. the port in the previous example), i get the following error message: Notice: Indirect modification of overloaded element <object name> has no effect in <file> on <line> Now the general opinion seems to be that the offsetGet() method has to return by reference (&offsetGet()). That, however, does not solve the problem and I'm afraid I don't know how to implement that method correctly - why is a getter method used to set a value? The php doc here is not really helpful either. To directly replicate this (PHP 5.4-5.6), please find a sample code attached below: <?php class Config implements \ArrayAccess { private $data = array(); public function __construct($data) { $this->data = $data; } /** * ArrayAccess Interface * */ public function offsetSet($offset, $value) { if (is_null($offset)) { $this->data[] = $value; } else { $this->data[$offset] = $value; } } public function &offsetGet($offset) { return isset($this->data[$offset]) ? 
$this->data[$offset] : null; } public function offsetExists($offset) { return isset($this->data[$offset]); } public function offsetUnset($offset) { unset($this->data[$offset]); } } $conf = new Config(array('a' => 'foo', 'b' => 'bar', 'c' => array('sub' => 'baz'))); $conf['c']['sub'] = 'notbaz';     EDIT 2: The solution, as Ryan pointed out, was to use ArrayObject instead (which already implements ArrayAccess, Countable and IteratorAggregate). To apply it to a class holding an array, structure it like so: <?php class Config extends \ArrayObject { private $data = array(); public function __construct($data) { $this->data = $data; parent::__construct($this->data); } /** * Iterator Interface * */ public function getIterator() { return new \ArrayIterator($this->data); } /** * Count Interface * */ public function count() { return count($this->data); } }   I used this for my Config library libconfig which is available on Github under the MIT license. A: I am not sure if this will be useful. I have noticed that the ArrayObject class is 'interesting'... I am not sure that this is even an 'answer'. It is more an observation about this class. It handles the 'multidimensional array' stuff correctly as standard. You may be able to add methods to make it do more of what you wish? 
<?php

class Config extends \ArrayObject
{
    // private $data = array();

    public function __construct(array $data = array())
    {
        parent::__construct($data);
    }
}

$conf = new Config(array('a' => 'foo', 'b' => 'bar', 'c' => array('sub' => 'baz')));

$conf['c']['sub'] = 'notbaz';
$conf['c']['sub2'] = 'notbaz2';
var_dump($conf, $conf['c'], $conf['c']['sub']);

unset($conf['c']['sub']);
var_dump('isset?: ', isset($conf['c']['sub']));
var_dump($conf, $conf['c'], $conf['c']['sub2']);

Output:

object(Config)[1]
  public 'a' => string 'foo' (length=3)
  public 'b' => string 'bar' (length=3)
  public 'c' =>
    array
      'sub' => string 'notbaz' (length=6)
      'sub2' => string 'notbaz2' (length=7)

array
  'sub' => string 'notbaz' (length=6)
  'sub2' => string 'notbaz2' (length=7)

string 'notbaz' (length=6)

string 'isset?: ' (length=8)
boolean false

object(Config)[1]
  public 'a' => string 'foo' (length=3)
  public 'b' => string 'bar' (length=3)
  public 'c' =>
    array
      'sub2' => string 'notbaz2' (length=7)

array
  'sub2' => string 'notbaz2' (length=7)

string 'notbaz2' (length=7)
Q: How can I use the Material Design icons in XML in Android?
I managed to clone the git repository of Material Design icons, but now I'm struggling with how to use it. I want to use the icon resources in the XML files, in XML attributes like android:icon="@drawable/***, so I searched quite many articles and none of them seems clear so far. Can someone explain?

A: You don't need to clone the repository to do this. If you're using an up-to-date version of Android Studio you can import the individual icons you need directly. Right-click on your drawable folder and in the menu go to New > Vector Asset. From there the default option is to select any of the Material icons to use, or you have the option to use your own SVG files.

A: In your project directory there's a drawable folder; the path should be: /app/src/main/res/drawable
Your icons have to be put in that folder in order to be used with the @drawable/ command in XML. Here is a similar question, and here is the link to the Android Drawable Importer.
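As a concrete sketch of the android:icon usage the question mentions - assuming the Vector Asset import produced a drawable named ic_search_black_24dp (the file name here is just an example), a menu resource can reference it directly:

```xml
<!-- res/menu/main_menu.xml (illustrative) -->
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item
        android:id="@+id/action_search"
        android:title="Search"
        android:icon="@drawable/ic_search_black_24dp" />
</menu>
```

Note that vector drawables are natively supported from API 21 onward; on older devices vector support generally goes through the support library (e.g. app:srcCompat on ImageView).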
Q: Operator in coherent state basis
I am reading "Introductory Quantum Optics" by Christopher C. Gerry and Peter L. Knight but I don't understand a solution from which you can obtain the matrix elements of an operator in the number basis if you know the diagonal coherent-state matrix elements of that operator. Page 56:

The diagonal elements of an operator $\hat{F}$ in a coherent state basis completely determine the operator. From Eqs. (3.76) and (3.77) we have
$$ \langle\alpha|\hat{F}|\alpha\rangle e^{\alpha^*\alpha} = \sum_n\sum_m \frac{\alpha^{*m}\alpha^n}{\sqrt{m!n!}} \langle m |\hat{F} | n \rangle$$
Treating $\alpha$ and $\alpha^*$ as independent variables it is apparent that
$$ \frac{1}{\sqrt{m!n!}} \left. \left[ \frac{\partial^{n+m} \left( \langle \alpha | \hat{F} | \alpha \rangle e^{\alpha^*\alpha} \right)}{\partial\alpha^{*m}\partial\alpha^n }\right] \right|_{\alpha^*=0 \\ {\alpha=0}} = \langle m | \hat{F} | n \rangle $$

I am a bit confused right now. In the first equation, $n$ and $m$ are just indices, but when I take the derivatives in the second equation into account, $n$ and $m$ are both outside of the sum and that doesn't really make sense to me.
$$ \frac{1}{\sqrt{m!n!}} \left. \left[ \frac{\partial^{n+m} \left( \langle \alpha | \hat{F} | \alpha \rangle e^{\alpha^*\alpha} \right)}{\partial\alpha^{*m}\partial\alpha^n }\right] \right|_{\alpha^*=0 \\ {\alpha=0}} = \frac{1}{\sqrt{m!n!}} \left. \left[ \frac{\partial^{n+m}}{\partial\alpha^{*m}\partial\alpha^n} \sum_n\sum_m \frac{\alpha^{*m}\alpha^n}{\sqrt{m!n!}} \langle m |\hat{F} | n \rangle \right] \right|_{\alpha^*=0 \\ {\alpha=0}}$$
As I understood it, $n$ and $m$ only have a real meaning inside of the sums, and I don't really know how to apply the derivatives to prove the solution.

A: Your confusion is due to a slight abuse of notation, but it is fairly common practice. One should not treat $m$ and $n$ to have the same meaning in the first and second equation within your quote.
In the first equation, $m,n$ are dummy summation variables; in the second they are specific choices. Let us rename the $m,n$ in the second equation to $m',n'$. Now, when performing the derivative, you will pull out precisely the piece of the sum in the first equation where $m=m',n=n'$, arriving at the desired result. To get this, you also use that $\langle m | \hat{F} |n \rangle$ is independent of $\alpha, \alpha^*$.
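Spelling this step out: differentiating the double sum term by term (with the renaming $m\to m'$, $n\to n'$ on the derivative) and then setting $\alpha = \alpha^* = 0$ kills every term except the one with $m=m'$, $n=n'$:
$$ \left. \frac{\partial^{n'+m'}}{\partial\alpha^{*m'}\partial\alpha^{n'}} \sum_n\sum_m \frac{\alpha^{*m}\alpha^n}{\sqrt{m!n!}} \langle m |\hat{F} | n \rangle \right|_{\alpha^*=0 \\ {\alpha=0}} = \frac{m'!\,n'!}{\sqrt{m'!\,n'!}} \langle m' |\hat{F} | n' \rangle = \sqrt{m'!\,n'!}\, \langle m' |\hat{F} | n' \rangle, $$
and the prefactor $\frac{1}{\sqrt{m'!n'!}}$ in the book's formula then leaves exactly $\langle m' | \hat{F} | n' \rangle$.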
Q: When putting a string in preferences, it doesn't change. Any ideas why?
String path = workspaceField.getText();
//prefs.remove("workspaceDirectory");
prefs.put("workspaceDirectory", path);

splitPane = commands.getSplitPane();
WebScrollPane oldTree = (WebScrollPane) splitPane.getLeftComponent();
splitPane.remove(oldTree);
WebScrollPane newTree = commands.createFileTree();
splitPane.setLeftComponent(newTree);
dialog.dispose();

The above code gets a file path from a text field, then puts that in a String preference called "workspaceDirectory". The issue is that that preference does not change. The commented prefs.remove call removes the preference successfully, but it doesn't change the preference when prefs.put("workspaceDirectory", path) is called. I don't receive any errors.

The method createFileTree():

public WebScrollPane createFileTree() {
    fileTree = new WebFileTree(prefs.get("workspaceDirectory", WorkspaceManager.createWorkspaceDirectory()));
    fileTreeScrollPane = new WebScrollPane(fileTree);
    fileTreeScrollPane.setVerticalScrollBarPolicy(WebScrollPane.VERTICAL_SCROLLBAR_ALWAYS);
    fileTree.addMouseListener(new FileTreeListener(this));
    return fileTreeScrollPane;
}

That's all createFileTree does, but it doesn't affect anything. If I comment out the code that changes the components, prefs.put does nothing. Any ideas what causes this or is stopping the preference from being changed?

A: From the javadoc for java.util.Preferences:

All of the methods that modify preferences data are permitted to operate asynchronously; they may return immediately, and changes will eventually propagate to the persistent backing store with an implementation-dependent delay. The flush method may be used to synchronously force updates to the backing store. Normal termination of the Java Virtual Machine will not result in the loss of pending updates -- an explicit flush invocation is not required upon termination to ensure that pending updates are made persistent.
So, if you make a change to the preferences and then immediately try to read that change, your results may not be what you would expect.
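To make the write to the backing store deterministic, you can call flush() right after put(). A minimal standalone sketch (the node name here is made up for the demo; in the question's code it would simply be prefs.flush() after prefs.put(...)):

```java
import java.util.prefs.BackingStoreException;
import java.util.prefs.Preferences;

public class PrefsFlushDemo {
    public static void main(String[] args) throws BackingStoreException {
        // Throwaway preference node purely for this demo.
        Preferences prefs = Preferences.userRoot().node("prefs_flush_demo");

        prefs.put("workspaceDirectory", "/tmp/workspace");
        // Synchronously force the (possibly asynchronous) update out to the
        // backing store instead of relying on the implementation-dependent delay.
        prefs.flush();

        // Within the same run, get() already reflects the put().
        System.out.println(prefs.get("workspaceDirectory", "<unset>"));

        // Clean up: remove the demo node, then flush via the root.
        prefs.removeNode();
        Preferences.userRoot().flush();
    }
}
```

flush() matters for persistence across processes and JVM runs; it pushes pending updates out so another process (or the next run) sees them.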
Q: How can I create a sophisticated table like the one attached?
I'm writing a paper where I systemize knowledge. At the moment I have a very big Excel file. In order to fit this table onto A4 paper, I would have to create a table similar to the one attached. As a matter of fact, I have read many papers where similar tables are shown. What software do I need to create something like this? Is this LaTeX?

A: The table can be created in LibreOffice Calc. The following screenshot shows a similar table that has been created in LibreOffice Calc. The table can be fitted within A4-sized paper in landscape orientation, but not portrait.

Text format details
The following font families and sizes are used:
Liberation Serif 10 for text
DejaVu Serif 10 for symbols

The following symbols are found in the Unicode Character Database:
✻ U+273B TEARDROP-SPOKED ASTERISK
† U+2020 DAGGER
○ U+25CB WHITE CIRCLE
◐ U+25D0 CIRCLE WITH LEFT HALF BLACK
● U+25CF BLACK CIRCLE

For quick lookup, refer to Unicode symbols on Wikipedia. Users on the Linux platform can use GNOME Character Map, which is based on the Unicode Character Database 6.3.0 (as of version 3.10.1).

Format discrepancy
The appearance of symbols may vary when using a different font family, i.e. the Liberation font family renders some symbols at a different size; DejaVu Serif renders them at the same size. Also, a symbol might need repositioning (Go to: Format > Character > Position) to mimic the text appearance in the original table.
The slanted text is achieved by changing the text orientation (Go to: Format > Cells... > Alignment) from 0 to 60 degrees. By doing this in LibreOffice, the cell border lines will be shifted towards the right. Therefore, one might have to apply similar formatting of cell borders to the next several columns to get equal lengths of table border lines in the final document (either in Print Preview or Export PDF).
Remarks: Reproduced the table in LibreOffice Calc 5.1 on Linux.
This answer shall be useful and applicable for those who use LibreOffice on Windows.

A: As Narzard pointed out, you can certainly do this in Excel. Firstly, you will need the circles being used to represent the data. These are available in a font called WingDings 2. Information on how to use the black medium circle and semi-circles can be found on that page.
Once you have the data in Excel, you will then need to angle the column headers. Highlight the row that the headers are in, then click on the diagonal "ab" in the Alignment group on the Home tab. Select the diagonal option, or any other option you might prefer. Then select the columns that correspond with your header information and adjust the width to squeeze the data together. You may also want to resize your columns or rows to accommodate these changes.

A: As Narzard pointed out, you can certainly do this in Excel, but if you want it to look like a 'professional' document, or are considering it for publication, go with LaTeX. These symbols are in the "wasysym" package in LaTeX. I won't lie to you though: LaTeX has a very steep learning curve, so if you just need to get it done, stick with Excel.
I do want to make one point about the table. As a piece of printing, it's pretty advanced. As a piece of information design, it's not very good. Often in these sorts of tables the "optimum" row is the one that has all the circles filled - think of the Consumer Reports magazine reviews, for example. If that is true, why use "-" for "doesn't have this property" and not a clear circle ("O")? In complex graphics like these you want the readers' eyes to do the visual recognition with as little work as possible. Another example is how the reader has to work out which columns belong under "Adoption": it looks like "Message Repudiation" might be one, but then again probably not. All soluble by some other element such as color, banding or a vertical separator.
Good luck with your thesis!
Q: How changing object value of tuple is possible using copy by reference?
I am learning Python data structures. If tuples are considered immutable, can anyone explain to me how changing the object value of a tuple is possible using copy by reference?
>>> tuple1=[20,30,40]
>>> tuple2=tuple1
>>> print(tuple2)
[20, 30, 40]
>>> tuple2[1]=10
>>> print(tuple2)
[20, 10, 40]
>>> print(tuple1)
[20, 10, 40]

A: Tuples are immutable in the sense that you can't change what objects are contained (i.e. the tuple itself remains unchanged). For example
>>> t = (1, 2)
>>> t[0] = 3
is not possible. However, what you can do is modify objects that are referred to by tuples. For example:
t = (1, [2])
print(t)
t[1][0] = 3
print(t)
which outputs
(1, [2])
(1, [3])
The tuple t still contains the same objects as before; however, one of the objects (t[1]) has changed (leaving the tuple untouched).
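A short runnable sketch making the same point - note that the question's tuple1 is actually a list (so item assignment works, and both names alias one object), while a real tuple rejects item assignment even though a mutable element inside it can still change:

```python
# The question's "tuple1" is really a list: assignment works,
# and both names refer to the same object.
list1 = [20, 30, 40]
list2 = list1
list2[1] = 10
assert list1 is list2            # one object, two names
assert list1 == [20, 10, 40]     # the change is visible through both names

# An actual tuple rejects item assignment...
t = (20, 30, 40)
try:
    t[1] = 10
except TypeError:
    print("tuples are immutable")

# ...but a mutable object *inside* a tuple can still be modified in place.
t2 = (1, [2])
t2[1].append(3)
print(t2)  # (1, [2, 3])
```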
Q: Optional dependency on target in Makefile
Is it possible to create an optional dependency on a target in Makefile (GNU Make)?

help:
	@echo Usage:

clean:
	@echo Cleaning...

build: clean?
	@echo Building...

The expected output:
$ make
Usage:
$ make clean
Cleaning...
$ make build
Building...
$ make build clean
Cleaning...
Building...
$ make clean build
Cleaning...
Building...

The set of targets is fixed and can not be extended, i.e.

rebuild: clean build
.PHONY: rebuild

is not acceptable.

A: Yes, you can do this:

build: $(filter clean,$(MAKECMDGOALS))
	@echo Building...

See: https://www.gnu.org/software/make/manual/html_node/Goals.html and https://www.gnu.org/software/make/manual/html_node/Text-Functions.html#index-filter
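To see the $(filter ...) trick end to end, here is a quick shell demo (the makefile is written to /tmp purely for the demo; recipe lines need real tabs, hence printf '\t'):

```shell
# $(filter clean,$(MAKECMDGOALS)) keeps "clean" as a prerequisite of
# "build" only when "clean" was also named on the command line.
printf 'help:\n\t@echo Usage:\nclean:\n\t@echo Cleaning...\nbuild: $(filter clean,$(MAKECMDGOALS))\n\t@echo Building...\n' > /tmp/opt_dep.mk

make -f /tmp/opt_dep.mk build        # prints only: Building...
make -f /tmp/opt_dep.mk clean build  # prints: Cleaning... then Building...
```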
Q: How do I remotely patch openSSH to allow handshake obfuscation?
I believe that my Chinese ISP has started detecting and bandwidth-limiting SSH connections. I'm not using the default SSH port, so I suspect that they are using deep packet filtering to detect the unencrypted handshake. I am aware of one solution to this which patches openSSH to encrypt the handshake. However, I have two problems:
This project is three years old, so if I were to just compile it as is, I would be forsaking three years of security updates for openSSH.
My server is in the US and my only way of accessing it is through SSH - so downtime or messing up isn't an option.
As such, I have two questions:
Can I somehow patch a current version of openSSH to use the obfuscated handshake? If so, how would I do this? Which files would I need to add/modify in the source?
How could I avoid any downtime for the SSH server while doing this patch? Can I install two SSH servers side by side?
If anyone could help me with this - or knew of a better way to obfuscate SSH handshakes - my appreciation would be boundless!

A: You can use Obfsproxy to beat DPI; here's a screenshot explaining what it does:
Screenshot shamelessly taken from the TorProject site
Q: Change textview properties programmatically from a fragment class
I am relatively new to Android Studio and I am starting to explore the Android Navigation Drawer activities more. I have been trying to change the font type of a textview from a fragment class and I just can't. I have tried many different solutions available here and none of them worked. In the picture, you can see my latest attempt. In this image you can see the last code I tried to change the font from the fragment class.

A: I use this library and it works superbly on all devices.

Steps:
1) Add this line in your gradle:

compile 'com.github.balrampandey19:FontOnText:0.0.1'

If these lines are missing, add them in your gradle:

allprojects {
    repositories {
        maven { url "https://jitpack.io" }
    }
}

2) Add your font in the asset folder like this.
3) Then replace your view in XML with:

<com.balram.library.FotTextView
    android:id="@+id/vno_tv"
    .
    .
    android:textSize="14sp"
    app:font="regular.ttf" />

This line is important to set the custom font you want: app:font="regular.ttf"
You can do the same for Buttons and EditTexts. OR, if you want to use the same font throughout the whole application, you can follow this guide here.
Q: stack trace analysis tools
Are there any recommendations for tools that can analyze the stack trace from a crash? If not, are there any guides to writing one? I mean a tool that can look at a bunch of these dumps and catch patterns. These are crash logs of C++ applications.

A: For Windows I recommend WinDbg: http://msdn.microsoft.com/en-us/windows/hardware/gg463009
There are various useful sites you can google for that will describe how to use it. You can also open crash dumps using Visual Studio: http://msdn.microsoft.com/en-us/library/fk551230%28v=vs.90%29.aspx
There is a similar SO post here for Linux: Which is the best Linux C/C++ debugger (or front-end to gdb) to help teaching programming? gdb should be fine for what you are asking for, and Eclipse can serve as a front-end to it.
Q: What is a reasonable speed for long distances on a bike?
I am curious what a reasonable speed to travel on a bike is. Speed will obviously vary based on the conditions in which you are riding. I am planning on taking the GPS out with me this weekend to see how quickly I go. Before I did that I wanted to get some benchmarks. For the most part I will be riding an older road bike on crushed rock. (Very small rock, with good rolling resistance but still much worse than pavement). I will also be riding that road bike on the road (i.e. pavement in North America, tarmac in Great Britain). What is a reasonable speed on these two surfaces? I am more interested in speed over long distances, i.e. if you were going 80 km what would your target speed be?

A: Speed varies widely by cyclist, depending on fitness, road conditions and traffic. Some of my observations (cruising speed based on a flat, paved road in good condition):
20km/h (12.4 mph) - many "occasional" cyclists ride around this speed
25km/h (15.5 mph) - most commuters
30km/h (18.6 mph) - fast commuters, slower roadies
35km/h (21.7 mph) - fast roadies
Any faster than that on a long flat and they're probably a racer (based on who I pass and who passes me when riding around 30km/h).
Average speed will usually be slower than you think, once traffic stops and hills are factored in, especially over longer distances (like 80km). On my 21km commute I'll hit 30+ on every long stretch I can, but my average still only works out to 24km/h. For longer rides I cruise around 27-28 km/h, which is more sustainable; averaging 22-24 over a very long ride (200km) is a great pace for me.
A: Average speed is extremely dependent on:
Your fitness (main factor)
Weather (particularly wind)
Road surface quality
Interruptions like traffic lights, dog-walkers on bike lanes
Accumulated fatigue over multiple days
How hilly the terrain is (although this can be balanced out by the faster descent)

As you mentioned, the best way to see is using a GPS and seeing how fast you go. I've found over the course of about 6 months of riding, my average speed over long rides is around the average of my shorter rides (I'm classifying "long" as around 150-200km, and "short" as maybe 30-80km).
For example, here is a plot of my distances vs average speed (the axes are in km/h and km):
The >50km rides averaging 25-30km/h are mostly group rides. Ignoring those, beyond about 80km the averages begin to converge to an average of 20km/h (although at 80km I've ranged from about 15-25km/h, but this includes when I just started riding..)
These numbers are all specific to me, and even still they vary (particularly over time): these averages are spread over a few different bikes (start to April was on a hybrid bike, April to mid May was on one road bike, and the rest was on a different road bike) - but the spikes are almost all related to either terrain (there's a large dip in July related to a Strava hill-climbing challenge), fatigue (the dip in August was another Strava challenge, to cycle long distances over consecutive days), or other factors mentioned above.
Sorry for the rather rambly answer, but it hopefully conveys that average speed depends on a lot of factors, and it's hard to give a specific answer.

A: I've already answered this question, but this is a different answer; I've recently started using a website called Strava (they do also have iPhone/Android apps as well as accepting GPX uploads which can be generated by many platforms and devices - I use MotionX-GPS for the iPhone).
Their (I think unique) central point is to allow users to defined specific 'segments' of their ride and then anyone whose uploaded route passes over that segment is included in a virtual league table. This allows you to easily compare yourself to others over short routes, climbs, sprints and so on. So long as you cycle in reasonably populated areas, you'll be amazed at how many segments your ride already covers, at least around the London area, I was. (I've no connection to the website, apart from being a satisfied, paying customer.)