id | question | title | tags | accepted_answer
---|---|---|---|---
_softwareengineering.256150 | Background: I'm working on an application that will manage backup generators. These generators need to be able to be linked together. For example, Generator B might serve as a backup for Generator A. If A fails, then the software turns B on. Or they might be linked in a load-sharing configuration, where Generators A and B run together and share the load. I'm making a function that will make the link, given 2 generators and the type of link desired: public void LinkGenerators(Generator A, Generator B, Type linkType); In writing my tests, I've come up with a large number of invalid parameter configurations. My LinkGenerators function looks like this: public void LinkGenerators(Generator A, Generator B, Type linkType) { if (linkType.BaseType != typeof(Link)) { throw new ArgumentException("linkType is not a valid link type"); } if (linkAlreadyExistsFor(A, B)) { throw new InvalidOperationException("Link for A and B already exists"); } if (A.Equals(B) || B.Equals(A)) { throw new InvalidOperationException("A and B cannot be the same generator"); } if (A == null || B == null || linkType == null) { throw new ArgumentException("Cannot pass a null argument"); } ... // Actually make the link after making sure all the arguments are valid. } Most of the LinkGenerators function consists of verifying that the parameters are good. The actual link creation takes 2 lines of code. There's a bit of business logic (verifying that the link doesn't already exist and that the generators are not the same) mixed in with a bit of functional logic (making sure that the linkType derives from Link, arguments aren't null...), and that makes me... uncomfortable. Is a long list of parameter checks an anti-pattern or a code smell? And if so, what can one do about it? (Just to make this clear to close-voters probably misunderstanding the question: this is not a question about coding style for conditions like this one.) | Is a long list of parameter checks an anti-pattern? 
| c#;exceptions;parameters | When you need to check conditions that aren't business rules, that's where it gets suspect and a bit smelly. I try to avoid those where possible.Some of your tests seem suspect: if (linkType.BaseType != typeof(Link))This looks like a check that should be made by the type system. i.e. its a type restriction and ideally your function signature should, as much as possible, only accept parameters of the correct type. How to do that depends on more details of what you are doing. if (A.Equals(B) || B.Equals(A))Why do you feel the need to check it both ways? If A.Equals(B) != B.Equals(A) you are in for a world of hurt. if (A == null || B == null || linkType == null)There is no way to tell which parameter was null from the resulting exception. It's also done after the other checks which means that you probably already caused an exception trying to dereference a null parameter.I avoid null checks in general. Typically, they aren't that helpful because the code in question will end up throwing an exception after tripping on the null anyways and I haven't gained anything by strewing null checks throughout my code. If you do insist on a null check, I suggest extracting it into a utility function. |
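The ordering issue this answer calls out (null checks must come first, and the exception should name the offending argument) can be illustrated with a short sketch. This is a language-neutral illustration in Python rather than the asker's C#, and every class and function name in it is invented for the example, not taken from the original code:

```python
class Generator:
    def __init__(self, name):
        self.name = name

class Link:
    pass

class BackupLink(Link):
    pass

# Registry of already-made links, keyed by the pair of generators.
_existing_links = set()

def _require_not_none(**params):
    # Null checks run first, before anything dereferences the arguments,
    # and the error message names the parameter that was None.
    for name, value in params.items():
        if value is None:
            raise ValueError(f"{name} must not be None")

def link_generators(a, b, link_type):
    _require_not_none(a=a, b=b, link_type=link_type)
    # Type restriction, checked only after we know link_type is not None.
    if not (isinstance(link_type, type) and issubclass(link_type, Link)):
        raise TypeError(f"{link_type!r} is not a valid link type")
    # Business rules come after the functional guards.
    if a is b:
        raise ValueError("a and b cannot be the same generator")
    key = frozenset((id(a), id(b)))
    if key in _existing_links:
        raise ValueError("a link between these generators already exists")
    _existing_links.add(key)
    return link_type()
```

Note that a single symmetric identity check (`a is b` here, one `A.Equals(B)` in the C#) replaces the redundant both-ways comparison the answer questions.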
_codereview.60202 | Given a singly linked list, swap kth node from beginning with kth node from end. Swapping of data is not allowed, only pointers should be changed. This code is attributed to geeksforgeeks. I'm looking for code-review, optimizations and best practices. Why I don't extend or reuse: I am prepping for interviews, and interviewers explicitly want you to code, in my experience. I request the reviewer to not insist on reusing, as I am aware in real life reusability is the right approach. This does not work in interviews.Why don't I use a Util class instead nesting method inside linked list? That is because I need the Node to be an internal data structure. Had I made a Util class, it would have no access to internal data structure and perform operations on the node's pointers.public class SwapKth<T> { private Node<T> first; private int size; public SwapKth(List<T> items) { addAll(items); } private void addAll(List<T> items) { Node<T> prev = null; size = items.size(); for (T item : items) { Node<T> node = new Node<T>(item); if (prev == null) { first = prev = node; } else { prev.next = node; prev = node; } } } private static class Node<T> { private Node<T> next; private T item; Node(T item) { this.item = item; } } public void swap (int n) { if (n == 0) { throw new IllegalArgumentException(The value of n should be greater than 0.); } if (n > size) { throw new IllegalArgumentException(The value of n: + n + is greater than: + size); } // code to reach the nth node from front. Node<T> x = first; Node<T> prevX = null; for (int i = 0; x != null && i < (n - 1); x = x.next, i = i + 1) { prevX = x; } // code to reach the nth node from the end. Node<T> temp = x.next; // note: we have x.next in place. Node<T> y = first; Node<T> prevY = null; for (; temp != null; temp = temp.next, y = y.next) { prevY = y; } // if 'x' and 'y' happen to be the same node. // eg: 1->2->3->4->5, swap 3rd from start with 3rd from the end. 
if (x == y) return; Node<T> prevFirst = null; Node<T> first = null; Node<T> prevSecond = null; Node<T> second = null; if (n <= size/2) { prevFirst = prevX; first = x; prevSecond = prevY; second = y; } else { prevFirst = prevY; first = y; prevSecond = prevX; second = x; } if (first.next == second) { adjacentSwap(prevFirst, first, second); } else { distantSwap(prevFirst, first, prevSecond, second); } } /** * Swap 3rd from start with 3rd form the end, in a linkedlist like. * 1->2->3->4->5->6 * Here node 3, and 4 are adjacent * * Edge case: * 1->2 (swap 1 with 2) */ public void adjacentSwap(Node<T> firstPrev, Node<T> first, Node<T> second) { first.next = second.next; second.next = first; if (firstPrev != null) { firstPrev.next = second; } else { this.first = second; } } /** * Swap 2nd not from the front and 2nd node from the end. * 1->2->3->4->5->6 * Here node 2nd node and 5th node are not adjacent * * Edge case: * 1->2->3->4->5->6 (swap 1 with 6) * */ public void distantSwap(Node<T> firstPrev, Node<T> first, Node<T> secondPrev, Node<T> second) { if (firstPrev != null) { firstPrev.next = first.next; secondPrev.next = second.next; second.next = firstPrev.next; first.next = secondPrev.next; firstPrev.next = second; secondPrev.next = first; } else { secondPrev.next = second.next; second.next = first.next; first.next = secondPrev.next; this.first = second; secondPrev.next = first; } } // size of new linkedlist is unknown to us, in such a case simply return the list rather than an array. public List<T> toList() { final List<T> list = new ArrayList<>(); if (first == null) return list; for (Node<T> x = first; x != null; x = x.next) { list.add(x.item); } return list; } @Override public int hashCode() { int hashCode = 1; for (Node<T> x = first; x != null; x = x.next) hashCode = 31*hashCode + (x.item == null ? 
0 : x.hashCode()); return hashCode; } @Override public boolean equals(Object obj) { if (this == obj) return true; if (obj == null) return false; if (getClass() != obj.getClass()) return false; SwapKth<T> other = (SwapKth<T>) obj; Node<T> currentListNode = first; Node<T> otherListNode = other.first; while (currentListNode != null && otherListNode != null) { if (currentListNode.item != otherListNode.item) return false; currentListNode = currentListNode.next; otherListNode = otherListNode.next; } return currentListNode == null && otherListNode == null; }}public class SwapKthTest { @Test public void testEvenLength() { SwapKth<Integer> sk1 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5, 6)); sk1.swap(1); assertEquals(Arrays.asList(6, 2, 3, 4, 5, 1), sk1.toList()); SwapKth<Integer> sk2 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5, 6)); sk2.swap(2); assertEquals(Arrays.asList(1, 5, 3, 4, 2, 6), sk2.toList()); SwapKth<Integer> sk3 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5, 6)); sk3.swap(3); assertEquals(Arrays.asList(1, 2, 4, 3, 5, 6), sk3.toList()); SwapKth<Integer> sk4 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5, 6)); sk4.swap(4); assertEquals(Arrays.asList(1, 2, 4, 3, 5, 6), sk4.toList()); SwapKth<Integer> sk5 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5, 6)); sk5.swap(5); assertEquals(Arrays.asList(1, 5, 3, 4, 2, 6), sk2.toList()); SwapKth<Integer> sk6 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5, 6)); sk6.swap(6); assertEquals(Arrays.asList(6, 2, 3, 4, 5, 1), sk6.toList()); } @Test public void testOddLength() { SwapKth<Integer> sk7 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5)); sk7.swap(1); assertEquals(Arrays.asList(5, 2, 3, 4, 1), sk7.toList()); SwapKth<Integer> sk8 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5)); sk8.swap(2); assertEquals(Arrays.asList(1, 4, 3, 2, 5), sk8.toList()); SwapKth<Integer> sk9 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5)); sk9.swap(3); assertEquals(Arrays.asList(1, 2, 3, 4, 5), 
sk9.toList()); SwapKth<Integer> sk10 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5)); sk10.swap(4); assertEquals(Arrays.asList(1, 4, 3, 2, 5), sk10.toList()); SwapKth<Integer> sk11 = new SwapKth<Integer>(Arrays.asList(1, 2, 3, 4, 5)); sk11.swap(5); assertEquals(Arrays.asList(5, 2, 3, 4, 1), sk11.toList()); } @Test public void testTwoElement() { SwapKth<Integer> sk12 = new SwapKth<Integer>(Arrays.asList(1, 2)); sk12.swap(1); assertEquals(Arrays.asList(2, 1), sk12.toList()); SwapKth<Integer> sk13 = new SwapKth<Integer>(Arrays.asList(1, 2)); sk13.swap(2); assertEquals(Arrays.asList(2, 1), sk13.toList()); } @Test public void testSingleElement() { SwapKth<Integer> sk14 = new SwapKth<Integer>(Arrays.asList(1)); sk14.swap(1); assertEquals(Arrays.asList(1), sk14.toList()); }} | Swap Kth node from beginning with Kth node from end in a Linked List | java;algorithm;linked list | null |
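Both scanning loops in the code above are instances of one underlying trick: once a lead pointer sits k nodes ahead, walking it and a trailing pointer forward together until the lead falls off the end leaves the trailer on the kth node from the end. A minimal sketch of just that technique (in Python for brevity; the node fields loosely mirror the Java Node class, and this is an illustration of the pointer walk, not the full swap):

```python
class Node:
    def __init__(self, item, next=None):
        self.item = item
        self.next = next

def from_list(items):
    # Build a singly linked list; prepend in reverse so order is preserved.
    head = None
    for item in reversed(items):
        head = Node(item, head)
    return head

def kth_from_end(head, k):
    # Advance a lead pointer k nodes, then move lead and trail together:
    # when lead runs off the end, trail is the kth node from the end.
    lead = trail = head
    for _ in range(k):
        if lead is None:
            return None  # fewer than k nodes in the list
        lead = lead.next
    while lead is not None:
        lead = lead.next
        trail = trail.next
    return trail
```

This finds the kth-from-end node in a single extra traversal without knowing the list's size up front, which is why the posted code keeps `x.next` around as the sentinel for its second loop.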
_unix.74181 | My scenario: I have Server A at home, and Server B at my parents' house. I also have Laptop A and Laptop B at home. I do all sorts of strange ssh hops - I might connect Laptop A -> Server A -> Server B. Or Laptop B -> Server A -> Laptop A. Or Server A -> Server B. I've got keychain set up on all of these machines, and like a good security-conscious geek, I have eval `keychain --clear` in my .zlogin file. My problem is that even though I unlock my keys on Laptop A, once I've connected to Server A it tries to use the keychain/ssh-agent on Server A, so I have to unlock those keys, too. I've got ForwardAgent set up on all of these machines as well. What I would prefer to happen is something like this: log in to Laptop A; unlock Laptop A's keys; ssh to Server A; ssh to Server B, using the keys from Laptop A. How can I do this? | How can I use ForwardAgent with keychain, or detect a new login? | ssh;security | null |
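Since this row went unanswered in the dump: agent forwarding of this kind is normally configured per host on the client side, and keychain on the remote hosts must be told to reuse the forwarded agent instead of starting its own. The sketch below is an assumption-laden illustration, not a verified fix: the host names are placeholders, `ProxyJump` needs a reasonably recent OpenSSH, and keychain's `--inherit` behavior should be checked against its man page.

```
# ~/.ssh/config on Laptop A (host names are placeholders)
Host server-a
    HostName servera.example.com
    ForwardAgent yes

Host server-b
    HostName serverb.example.com
    ProxyJump server-a      # hop through Server A, carrying the forwarded agent
    ForwardAgent yes
```

On the servers, invoking keychain with an option like `--inherit any` (rather than `--clear` alone) should make it adopt the forwarded SSH_AUTH_SOCK, so the keys unlocked on Laptop A are used instead of prompting for the server's own keys.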
_cs.65906 | I have attached the problem below: Consider a 512-KByte cache with 64-word cachelines (a cacheline is also known as a cache block; each word is 4 bytes). This cache uses a write-back scheme, and the address is 32 bits wide. Let's try to start doing the first row. To find the cacheline index, I would have to know the number of cachelines in the cache. I don't know that. The problem doesn't provide any information about the cache that is being used. It's going to be a miss since at the beginning the cache is empty. It wasn't modified because the request is to read. For the tag, I would have to know the bit size of the address, but in this case it's in hex. What does "data" mean? What is "caused replace"? What is "write-back to memory"? It would be nice if someone could do the first two rows for me and go through every step in this process. I would do the other ones. I would like the first two rows to be done because there are two different types of request, read and write. I need orientation. It would be good as well if someone could point out resources for understanding this question. Thanks!!! Update: In previous computations, I got that the cache offset is 8 bits, the cache index is 11 bits and the tag is 13 bits. I converted 0x128 to binary and got 0b100101000. This implies that the cacheline index is 1, the offset within the line is 0x28, and the tag is 0x0; however, I still can't understand what "data", "caused replace" and "write-back to memory" mean. | Direct Mapped, Cache Transactions | computer architecture;cpu cache | null |
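The updated computation (8-bit offset, 11-bit index, 13-bit tag) checks out, and the field split can be verified mechanically; a small sketch, assuming a direct-mapped, byte-addressed cache as the numbers imply. As for the remaining column names (these are their standard meanings, though the original assignment may define them slightly differently): "data" is the memory block currently held in the indexed line, "caused replace" asks whether a valid line with a different tag had to be evicted on this access, and "write-back to memory" asks whether that evicted line was dirty and therefore had to be written back under the write-back policy.

```python
CACHE_BYTES = 512 * 1024                      # 512 KByte cache
WORD_BYTES = 4
LINE_WORDS = 64
LINE_BYTES = LINE_WORDS * WORD_BYTES          # 256 bytes per line
NUM_LINES = CACHE_BYTES // LINE_BYTES         # 2048 lines

OFFSET_BITS = LINE_BYTES.bit_length() - 1     # log2(256)  = 8
INDEX_BITS = NUM_LINES.bit_length() - 1       # log2(2048) = 11
TAG_BITS = 32 - INDEX_BITS - OFFSET_BITS      # 32 - 11 - 8 = 13

def split_address(addr):
    # Decompose a 32-bit byte address into (tag, index, offset).
    offset = addr & (LINE_BYTES - 1)
    index = (addr >> OFFSET_BITS) & (NUM_LINES - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset
```

For the asker's example address 0x128, this yields tag 0, index 1, offset 0x28, matching the hand computation in the update.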
_cs.21778 | I'm trying to understand the Huffman compression algorithm. Let's assume the word YESSSS. According to the Huffman tree we will get: S: 4 times -> code 0; Y: once -> code 01; E: once -> code 00. At the end, YESSSS will become: 01 00 0 0 0 0. So far everything is clear. Now my problem is the space between the binary words. How can this be stored in memory? In other words, how will the computer know that the first character has two bits, the second character has two bits, and the four other characters have only one bit each? Because 01 00 0 0 0 0 doesn't have the same meaning as 01 00 00 00: 01 00 0 0 0 0 means YESSSS, while 01 00 00 00 means YEEE. Any ideas please? | How to determine letter boundaries in Huffman encoded strings? | coding theory | null |
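The ambiguity the asker observes is exactly why Huffman codes are constructed to be prefix-free: no codeword is a prefix of another, so no separators are needed. The code in the question (S=0, Y=01, E=00) is not prefix-free, since 0 is a prefix of both 01 and 00; an actual Huffman tree for YESSSS would yield something like S=0, Y=10, E=11 (the exact assignment of 10 vs 11 is arbitrary). A sketch of separator-free decoding with such a code:

```python
# A prefix-free code: no codeword is a prefix of any other.
CODE = {"S": "0", "Y": "10", "E": "11"}

def encode(text):
    return "".join(CODE[ch] for ch in text)

def decode(bits):
    reverse = {v: k for k, v in CODE.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        # Prefix-freeness guarantees that the first codeword match
        # is the only possible one, so we can emit it immediately.
        if buf in reverse:
            out.append(reverse[buf])
            buf = ""
    return "".join(out)
```

With this code, YESSSS encodes to the unambiguous bitstring 10110000, and the decoder recovers letter boundaries with no stored spaces at all.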
_webmaster.82839 | I'm new to this site and making webpages that aren't only run locally on a single computer.I'm about to start building a website for someone. She already had another person make a basic website but didn't like it and wants some UI changes. There isn't much fancy stuff in terms of functionality (e.g. just HTML/CSS and a little bit of JavaScript).She already says she has a place to host the website, but was asking if I could make it using wordpress and then give her the source code. Is this possible? I know someone can make a free website using wordpress.com but can they then transfer the contents to another host?Also I'm open to other CMS, or doing things using Dreamweaver.What's the normal procedure when someone says make me a website, I've already got a few half-baked webpages? | How should I handle a client with a partial site already built who wants me to take over? | web hosting;wordpress;web development | She already says she has a place to host the website, but was asking if I could make it using wordpress and then give her the source code. Is this possible?Sort of. You can create a free site on wordpress.com but you can't export the look and feel of it...just the content. Alternately, if you have PHP and MySQL hosting, you can download WordPress from wordpress.org and create a self-hosted site. Those can be transferred, including the Theme (look and feel) as well as the content in a variety of ways. However, your target server has to be able to run PHP and MySQL for this to work and transferring a WordPress site is not the simplest action in the world for the uninitiated. 
It involves copying the wp-content folder and exporting the complete database.What's the normal procedure when someone says make me a website, I've already got a few half-baked webpages?There isn't a single normal procedure because with or without half-baked web pages, you still need to take the time to assess the client's capabilities and needs and design a solution appropriate to the situation. Lots of people just ask (or exclaim) just use WordPress! but, while it's a versatile tool, WordPress is not appropriate for every application, situation, and/or client. You need to figure out who's going to keep the site updated (both in terms of content and software if a CMS is in play), what the necessary features are going to be both today and into the future, budgets, timelines, and more. This is what makes webmastering something of an art and not a science. You are simultaneously called on to be a designer, developer, artist, philosopher, prognosticator, therapist, accountant, and sometimes an executioner. So my advice in your specific situation is to pretend that you are building a new site from the ground up and make your client aware that you will need to conduct a project analysis before any real work can begin. How you charge for all of this is up to you. Personally, I have a flat fee that I charge for the project analysis portion that is payable up-front and then I produce an estimate based on the analysis for the site building that of course varies depending on the job specifications. |
_softwareengineering.314362 | There has been a discussion recently (with my colleagues) on whether or not checks should be performed on client machines to check for specific software, within a web based platform, mainly because we have some requirements for certain user actions and to help diagnose certain issues we could log whether or not they actually had that software. On a users PC at an internal company this seems fairly straight forward as a lot of companies will have the same software on each machine, but this is on client machines outside of the company. As it would be client side via a web interface (built using ASP.NET) I guess I'd use JavaScript to do the look-ups, however it occurs to me that there are potential pit-falls with this, namely:A user may have similar software to what you say is a requirement e.g. Open Office instead of MS OfficeA user may have installed the software in a different location to the default areaA huge variety of software types on multiple different operating systems (Windows, iOS, Android, etc).Software versions can vary so that would need accounted for, especially as they can be installed in different areas (e.g. Visual Studio).I also remember old discussions from many moons ago that checking for things like Browser version, or even Operating System, were relatively frowned upon and to me this seems semi-similar.So my question is as follows:Is there a best practice for, or against, checking for software on a client machine via a web browser? Be it for Adobe, or Word, or whatever other software a website may want a user to have. 
Ideally some reference material would be great. To me it seems there are a lot of downsides, and it also puts a lot of onus on the webpage to ensure a user has the correct software (I'm a believer in the practice that a UI should have little to no working knowledge of its back-end system or anything else). Edit: just to point out, my question is not related to using JavaScript to do this; it is about best practice on whether you should attempt to do some form of check. | Should I check for software on a user's machine via a web browser? | web development;asp.net | These checks invariably fail. Try to avoid tying your application to specific software, browsers or versions. Code to standards. As it appears you are Microsoft based, you may need to code around issues with different Internet Explorer versions. Try to keep these to a minimum. From a security standpoint, you don't want to prevent users from upgrading to a supported version. You especially don't want to force them to remain on an insecure version. I have run into numerous issues with built-in checks. Two significant version checks (both from vendors that should know better) I have had to deal with are: a virus scanner Java console that was pinned to a specific patch level of Java, which failed whenever Java was updated (for security fixes); and a program that was configured to run on only 4 versions of Internet Explorer, stopping at IE8. It works fine on IE9, if you spoof the User Agent. |
_codereview.142592 | I have a PHP bot running on a shared host. My account often get suspended. When I asked to web hosting service, they said that my account suspended because of excessive MySQL usage.There is an upload.php and db_functions.php files. Upload PHP runs every half hour. I open and close MySQL connections in every function.Is building a MySQL connection at the beginning of the upload.php and closing at the end of the upload.php able to prevent excessive usage?Which one is effective between these two situation?1st situationupload.phprequire_once(WEBSITE_ROOT.'/'.APP_DIRECTORY.'/functions/db_fns.php');$subreddits = subreddit_getir($db_user[category]);$posts = grab_reddit_picture($subreddits);foreach($posts as $post){ $hashtags = make_hashtag($db_user[category], $post[subreddit]); //#city #culture etc. if($post[type]=='resim'){ $rawImage = file_get_contents($post[url]); if(!$rawImage){break;} $basename = preg_replace('/^.+[\\\\\\/]/', '', $post[url]); if(!is_picture_inserted($db_user[tw_id], $post[url])){ $pic_id = insert_picture($db_user[tw_id], $post[title], $post[url], $post[type]); file_put_contents(WEBSITE_ROOT.'/'.APP_DIRECTORY.'/images/'.$basename, $rawImage); //image_text($basename, $db_user[tw_name]); // assign access token on each page load $cb->setToken($db_user['oauth_token'], $db_user['oauth_token_secret']); $reply = $cb->statuses_updateWithMedia(array( 'status' => substr(convert_hashtag($post[title]), 0, 72).' '.$hashtags.' ', 'media[]' => WEBSITE_ROOT.'/'.APP_DIRECTORY.'/images/'.$basename )); print_r($reply); //break; } }else{ if(!is_picture_inserted($db_user[tw_id], $post[url])){ insert_picture($db_user[tw_id], $post[title], $post[url], $post[type]); $cb->setToken($db_user['oauth_token'], $db_user['oauth_token_secret']); $reply = $cb->statuses_update(array( 'status' => substr(convert_hashtag($post[title]), 0, 100).' '.$post['url'].' 
'.$hashtags )); print_r($reply); } }}db_functions.php<?php function db_connect() { $connection = new mysqli(MYSQL_HOSTNAME, USERNAME, PASSWORD, DATABASE); if (!$connection) { echo 'No connection!'; mysql_error(); } if (!$connection->select_db(DATABASE)) { echo 'No database!'; mysql_error(); } $connection->query(SET NAMES UTF8); return $connection; } function db_result_to_array($result) { $res_array = array(); for ($count = 0; $row = mysqli_fetch_assoc($result); $count++) { $res_array[$count] = $row; } return $res_array; } function is_user_created($id, $tablo_adi) { $conn = db_connect(); $query = SELECT id FROM .$tablo_adi. WHERE tw_id = .$id; $result = $conn->query($query); $result = $result->fetch_array(); if($result) { return $result; } else { return false; } if(mysqli_ping($conn)) { $conn->close(); } } function create_user($user_id, $oauth_token, $oauth_token_secret){ $conn = db_connect(); $query = sprintf(INSERT into reddit_twitter SET tw_id = %s, oauth_token = '%s', oauth_token_secret = '%s', $user_id, $oauth_token, $oauth_token_secret); $result = $conn->query($query); if(!$result){ echo 'Dnyay ele geirmeye falan m alyorsun?'; echo $conn->error; }else{ return $conn->insert_id; } $conn->close(); } function select_users(){ $conn = db_connect(); $query = SELECT * FROM reddit_twitter ORDER by id DESC; $result = $conn->query($query); $result = db_result_to_array($result); return $result; $conn->close(); } function insert_picture($tw_id, $title, $picture, $type){ $conn = db_connect(); $query = sprintf(INSERT into pics SET tw_id = '%s', title = '%s', picture = '%s', type = '%s' , $tw_id, $conn->real_escape_string($title), $picture, $type ); $result = $conn->query($query); if(!$result){ echo 'Dnyay ele geirmeye falan m alyorsun?'; echo $conn->error; }else{ return $conn->insert_id; } $conn->close(); } function is_picture_inserted($uid, $basename) { $conn = db_connect(); $query = SELECT id FROM pics WHERE tw_id = .$uid. 
AND picture = '.$basename.'; $result = $conn->query($query); $result = mysqli_fetch_assoc($result); if($result) { return $result[id]; } else { echo $conn->error; return false; } if(mysqli_ping($conn)) { $conn->close(); } }?>2nd situationupload.phprequire_once(WEBSITE_ROOT.'/'.APP_DIRECTORY.'/functions/db_fns.php');$conn = db_connect();$subreddits = subreddit_getir($db_user[category]);$posts = grab_reddit_picture($subreddits);foreach($posts as $post){ $hashtags = make_hashtag($db_user[category], $post[subreddit]); //#city #culture etc. if($post[type]=='resim'){ $rawImage = file_get_contents($post[url]); if(!$rawImage){break;} $basename = preg_replace('/^.+[\\\\\\/]/', '', $post[url]); if(!is_picture_inserted($db_user[tw_id], $post[url])){ $pic_id = insert_picture($db_user[tw_id], $post[title], $post[url], $post[type]); file_put_contents(WEBSITE_ROOT.'/'.APP_DIRECTORY.'/images/'.$basename, $rawImage); //image_text($basename, $db_user[tw_name]); // assign access token on each page load $cb->setToken($db_user['oauth_token'], $db_user['oauth_token_secret']); $reply = $cb->statuses_updateWithMedia(array( 'status' => substr(convert_hashtag($post[title]), 0, 72).' '.$hashtags.' ', 'media[]' => WEBSITE_ROOT.'/'.APP_DIRECTORY.'/images/'.$basename )); print_r($reply); //break; } }else{ if(!is_picture_inserted($db_user[tw_id], $post[url])){ insert_picture($db_user[tw_id], $post[title], $post[url], $post[type]); $cb->setToken($db_user['oauth_token'], $db_user['oauth_token_secret']); $reply = $cb->statuses_update(array( 'status' => substr(convert_hashtag($post[title]), 0, 100).' '.$post['url'].' 
'.$hashtags )); print_r($reply); } }}$conn->close(); db_functions.php<?php function db_connect() { $connection = new mysqli(MYSQL_HOSTNAME, USERNAME, PASSWORD, DATABASE); if (!$connection) { echo 'No connection!'; mysql_error(); } if (!$connection->select_db(DATABASE)) { echo 'No database!'; mysql_error(); } $connection->query(SET NAMES UTF8); return $connection; } function db_result_to_array($result) { $res_array = array(); for ($count = 0; $row = mysqli_fetch_assoc($result); $count++) { $res_array[$count] = $row; } return $res_array; } function is_user_created($id, $tablo_adi) { $query = SELECT id FROM .$tablo_adi. WHERE tw_id = .$id; $result = $conn->query($query); $result = $result->fetch_array(); if($result) { return $result; } else { return false; } } function create_user($user_id, $oauth_token, $oauth_token_secret){ $query = sprintf(INSERT into reddit_twitter SET tw_id = %s, oauth_token = '%s', oauth_token_secret = '%s', $user_id, $oauth_token, $oauth_token_secret); $result = $conn->query($query); if(!$result){ echo 'Dnyay ele geirmeye falan m alyorsun?'; echo $conn->error; }else{ return $conn->insert_id; } } function select_users(){ $query = SELECT * FROM reddit_twitter ORDER by id DESC; $result = $conn->query($query); $result = db_result_to_array($result); return $result; } function insert_picture($tw_id, $title, $picture, $type){ $query = sprintf(INSERT into pics SET tw_id = '%s', title = '%s', picture = '%s', type = '%s' , $tw_id, $conn->real_escape_string($title), $picture, $type ); $result = $conn->query($query); if(!$result){ echo 'Dnyay ele geirmeye falan m alyorsun?'; echo $conn->error; }else{ return $conn->insert_id; } } function is_picture_inserted($uid, $basename) { $query = SELECT id FROM pics WHERE tw_id = .$uid. 
AND picture = '.$basename.'; $result = $conn->query($query); $result = mysqli_fetch_assoc($result); if($result) { return $result[id]; } else { echo $conn->error; return false; } }?> | Upload bot, using MySQL excessively | performance;php;mysql;comparative review | null |
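On the actual question: yes, the second arrangement, one connection opened at the start of upload.php and closed at the end, is the cheaper one, since connecting and closing inside every helper multiplies MySQL overhead. Two things still need fixing in the second version: the helpers reference an undefined `$conn` (PHP functions don't see globals implicitly, so the handle must be passed in or wrapped in a class), and the sprintf-built SQL should become parameterized queries. A sketch of that structure in Python with sqlite3 (the table and method names only loosely mirror the PHP; this illustrates the pattern, not a drop-in replacement):

```python
import sqlite3

class PictureStore:
    """One connection for the whole run, passed in once, instead of
    connect/close inside every helper function."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS pics (tw_id TEXT, picture TEXT)"
        )

    def is_picture_inserted(self, tw_id, picture):
        cur = self.conn.execute(
            "SELECT 1 FROM pics WHERE tw_id = ? AND picture = ?",
            (tw_id, picture),   # parameterized: no string-built SQL
        )
        return cur.fetchone() is not None

    def insert_picture(self, tw_id, picture):
        with self.conn:         # one transaction per insert
            self.conn.execute(
                "INSERT INTO pics (tw_id, picture) VALUES (?, ?)",
                (tw_id, picture),
            )
```

In the PHP, the equivalent is `$conn = db_connect();` once in upload.php, every `db_fns.php` function taking `$conn` as its first argument, and a single `$conn->close();` at the end.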
_unix.102232 | Has anyone made progress on setting up null encryption in FreeBSD 8 ipsec? # ./setkey -c add 10.10.19.50 10.10.19.100 esp 1680464666 -m transport -E null -A hmac-md5 "authentication!!" ; The result of line 1: Invalid argument. The patch from here changes nothing; same error. | FreeBSD 8.2 setkey null encryption | freebsd;ipsec | null |
_webapps.53994 | There is a website sharing scientific courses in video format. They upload videos to YouTube and then embed them in their website. They share their videos unlisted, so there is no explicit way to get the video link. There are a lot of videos, and if you want to rewatch one of them, you need to search for it for minutes; the website has no content search feature. I keep a record of my favorite videos in my notes for watching them again in the future. In order to find the link of a video I use this workaround: click the Watch Later button on the video; open/sign in to YouTube; go to my Watch Later list; get the video link from there. Is there a simpler way to do this? A Firefox extension or a userscript maybe? | How do I find the link of an embedded and unlisted YouTube video? | youtube | Whether a video is unlisted has no effect on whether you can get the link from an embed. This instead depends on how they have set up the player. It's possible to hide controls from the embed, whether or not the video is unlisted. First make sure you have checked all the usual ways of getting the link: the YouTube logo link in the lower right corner (the most obvious one - almost certainly disabled); the symbol in the upper right corner which looks like three dots connected with lines (only visible when the video is stopped or when you hover over the player) - click that to get a clickable link and a text box that you can copy the URL from; and right-clicking the video, which might give you a "copy URL" option. Assuming all of those don't work, there are of course various ways you can get the link from the source code. If you tell me more about the site, I can write a bookmarklet which does this for you. What I need to know: Does the site use frames? Which embed code does it use - new (iframe) or old (embed/object)? Could you post a sample? Can a single page have more than one video? |
_unix.358172 | I enabled ufw within the server instance, while connected through ssh. But now I am not able to connect to the server in any way. Is there a way to disable ufw on the server? I couldn't find any way from the AWS console. It is an Ubuntu server. | I enabled ufw in an aws instance. How to stop that? | linux;ubuntu;aws;ufw | I did the same sort of thing once; even worse, though, as I fried the instance so that it couldn't even boot. What you do is: Boot some other instance; anything will do that can mount the system disk of your old system. Find the old system disk of your old system on the 'Volumes' tab of the EC2 console. Shut down your old instance and detach the system volume from it as per the AWS documentation. Attach it to your new instance, as per the AWS documentation, as some boring, non-boot drive. Edit the files on it wherein you did the bad thing that made it so that you couldn't access it any more - in your case, the 'ufw' configuration files. Now go the other way: detach it from your new instance and hook it back up to your old instance as a system disk. Boot your old instance and hopefully you can get to it now. If not, rinse, repeat. This method has the advantage that you don't lose any data or configuration work that you might have created between the time that you last did a 'snapshot' of the instance and the time that you fried it. |
_cs.70951 | Consider the language:$A'_{TM} = \{\langle M,w\rangle: M \mbox{ is a TM with access to an oracle for } A_{TM} \mbox{ and } M \mbox{ accepts } w\}$Clearly, we expect that any language is decidable relative to itself, including $A'_{TM}$. So let $F$ be an oracle decider for $A'_{TM}$ that has access to an oracle for $A'_{TM}$. In particular, we can construct $F$ as follows.$F = $ On input $\langle M, w \rangle$, where $M$ is a TM and $w$ is a string:Query the oracle for $A'_{TM}$ with $\langle M, w \rangle$.If the oracle replies YES, accept. If the oracle replies NO, reject.Now, consider the following TM, $D$.$D$ = On input $\langle M \rangle$, where $M$ is a TM.Check if $M$ is an oracle TM with an oracle for $A'_{TM}$. If it is not, reject. Otherwise, proceed to the next step.Simulate $F$ on $\langle M, M \rangle$. If it accepts, reject. If it rejects, accept.Now, suppose that we feed $\langle D \rangle$ as input to $D$. Then clearly, computation proceeds to Step 2, since $D$ has an oracle for $A'_{TM}$. But Step 2 yields a contradiction, since we are forced to conclude that $D$ accepts $\langle D \rangle$ if and only if $D$ rejects $\langle D \rangle$. So it seems we must conclude that $A'_{TM}$ is both decidable and undecidable relative to itself, which seems absurd. | Can a Turing machine be both decidable and undecidable relative to itself? | turing machines;undecidability;oracle machines | A Turing machine doesn't come with an oracle. The oracle comes from outside. Rather, an oracle Turing machine is a Turing machine that has a special way of accessing an oracle. When you run the Turing machine with a specific oracle $O$, whenever the machine activates the special mechanism for accessing an oracle, you forward the request to $O$. A phrase like $M$ is an oracle TM with an oracle for $A'_{TM}$ is thus meaningless.You are saying that you run $D$ on the input $\langle D \rangle$, but you haven't specified what oracle you are running $D$ with. 
If you want $F$ to have the proper semantics, then you have to run $D$ with an oracle for $A'_{TM}$. When run in this way, $D$ accepts $\langle D \rangle$ iff $F$ rejects $\langle D,D \rangle$ when run with $A'_{TM}$, and so iff $D$ rejects $\langle D \rangle$ when run with $A_{TM}$.Notice that there is no contradiction: $D$ accepts $\langle D \rangle$ when run with the oracle $A'_{TM}$ iff it rejects $\langle D \rangle$ when run with the oracle $A_{TM}$.The real reason this works out is that a machine cannot get as input an oracle TM together with an oracle, since the oracle is an infinite object and so it cannot be specified as input. |
_webapps.41287 | How do I enable popups in Google Calendar when tab is closed? | How to enable popups in Google Calendar when tab is closed? | google calendar | null |
_unix.348084 | Some program was connected from my local IP 111.111.111.111 to 130.239.18.176:80; how do I get its PID number? netstat -np (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 111.111.111.111:46243 52.89.80.240:443 TIME_WAIT - tcp 0 1 111.111.111.111:36553 173.239.79.210:443 SYN_SENT 2630/firefox-esr tcp 0 0 111.111.111.111:48470 130.239.18.176:80 ESTABLISHED - tcp 0 1 111.111.111.111:36552 173.239.79.210:443 SYN_SENT 2630/firefox-esr tcp 0 1 111.111.111.111:34202 74.125.204.101:80 SYN_SENT 2630/firefox-esr tcp 0 0 111.111.111.111:52243 203.208.48.79:443 ESTABLISHED 2630/firefox-esr tcp 0 1 111.111.111.111:46521 74.125.203.93:443 SYN_SENT 2630/firefox-esr tcp 0 1 111.111.111.111:34200 74.125.204.101:80 SYN_SENT 2630/firefox-esr tcp 0 0 111.111.111.111:48424 130.239.18.176:80 ESTABLISHED - tcp 0 0 111.111.111.111:46238 52.89.80.240:443 TIME_WAIT - tcp 0 1 111.111.111.111:46523 74.125.203.93:443 SYN_SENT 2630/firefox-esr tcp 0 0 111.111.111.111:34204 74.125.204.101:80 TIME_WAIT - tcp 0 0 111.111.111.111:33700 104.24.98.177:443 ESTABLISHED 2630/firefox-esr tcp 0 1 111.111.111.111:34206 74.125.204.101:80 SYN_SENT 2630/firefox-esr tcp 0 0 127.0.0.1:49941 127.0.0.1:80 ESTABLISHED 2630/firefox-esr | How to get the PID of the process connected to an external IP? | netstat | The netstat output explains it fairly well: (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) Just run it as root (e.g. sudo netstat -np)
_webmaster.87090 | I'm using Nginx as my server. I have just successfully installed my domain with a PositiveSSL certificate. I edited the vhost of my main domain as follows: server {listen 80;server_name example.com www.example.com;return 301 https://example.com$request_uri;}server {listen 443 ssl spdy; ssl on;ssl_certificate /****/example-bundle.crt;ssl_certificate_key /***/example.com.key;ssl_session_timeout 20m;ssl_session_cache shared:SSL:10m;ssl_protocols TLSv1 TLSv1.1 TLSv1.2;ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4;ssl_prefer_server_ciphers on;add_header Strict-Transport-Security max-age=31536000; includeSubdomains;;ssl_stapling on;... I don't edit anything in the subdomain's vhost, which means the subdomain listens on port 80. OK. When I access the main domain, it is OK: access to http://example.com auto-redirects to https://example.com. Here is the problem: when I access the subdomain, it also auto-redirects to https and gets an error because the certificate is invalid for the subdomain. I DON'T WANT the subdomain to have SSL. I only need SSL for the main domain. How can I fix this for my subdomain? This is the subdomain's vhost: server { server_name www.sub.example.com; rewrite ^(.*) http://sub.example.com$1 permanent; }server { listen 80; access_log off; error_log off; # error_log /*******/logs/error.log; root /home/*******/public_html;include /etc/nginx/conf/ddos2.conf; index index.php index.html index.htm; server_name sub.example.com;........ Thank you!
| Error SSL with subdomain | subdomain;https | The following line is causing this: add_header Strict-Transport-Security max-age=31536000; includeSubdomains;; This tells browsers to require HTTPS on both the main domain and all subdomains. Remove the includeSubdomains directive from the HSTS header and that should help.
_codereview.147738 | I just recently started reading the python.org documents and it looked like a really interesting language. Now, since Visual Studio 2017 supports Python, I decided to finally actually start learning it. My friend challenged me to write a program that checks if a number is an Armstrong number. import sys def isArmstrong(number): arr = str(number) count = len(arr) res = 0 for x in range(0, count): res += int(arr[x]) ** count return res == number if len(sys.argv) > 1: arg = sys.argv[1] print(arg + " is an armstrong number: " + str(isArmstrong(int(arg)))) else: print("No arguments passed! :-(") Now this is the first solution that I could come up with. As you can see, it converts the number to a string to determine the amount of digits, and then handles them individually. Is this the smartest way to do it, or can it be done without converting to a string? If not: is there anything I can do to improve this code performance-wise? | Decide if number is Armstrong number | python;python 3.x | null
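The question above asks whether the check can be done without converting the number to a string. It can, by peeling digits off with divmod; here is a sketch (the function name and the handling of 0 are my own choices, not from the post):

```python
def is_armstrong(number):
    """Check for an Armstrong number using only integer arithmetic.

    Digits are extracted with divmod instead of str(); intended for
    non-negative integers, mirroring the original isArmstrong.
    """
    # Count the digits without building a string.
    count = 0
    n = number
    while n > 0:
        n //= 10
        count += 1
    count = max(count, 1)  # treat 0 as a one-digit number

    # Sum each digit raised to the digit count.
    total = 0
    n = number
    while n > 0:
        n, digit = divmod(n, 10)
        total += digit ** count
    return total == number
```

With this version, is_armstrong(153) and is_armstrong(9474) are true, while is_armstrong(10) is false.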
_unix.367751 | I have built a basic cluster consisting of the following machines: (1) an HP Inspiron 530 desktop and (2) a Jetson TK1 development board. Both are running Linux and I've built mpich 3.2 from source on both machines. I'm able to run a basic hello world program on this cluster, but as soon as a communication operation is involved (like MPI_Bcast), the program hangs. I've checked to verify that hydra is launching parallel jobs on both machines. However, as soon as the program reaches a communication operation, it hangs. I've disabled the Unix firewall on both machines, but it still won't work. | Why does My MPI Cluster Hang? | parallelism;mpi | null |
_unix.79619 | Within the shell, typing ALT+. or using !$ recalls the last passed argument of the previous command. I use this all the time, but how do you do that when you detached the previous command?$ do-something foo a_long_file_name &How do I get a_long_file_name on the prompt, and not the ampersand? | Shell: How do I get the last argument the previous command when it was detached? | bash;command line;command history;readline;line editor | null |
_softwareengineering.335727 | I'm unclear how TDD, the methodology, handles the following case. Suppose I want to implement the mergesort algorithm, in Python. I begin by writing assert mergesort([]) == [] and the test fails with NameError: name 'mergesort' is not defined. I then add def mergesort(a): return [] and my test passes. Next I add assert mergesort([5]) == [5] and my test fails with AssertionError, which I make pass with def mergesort(a): if not a: return [] else: return a. Next, I add assert mergesort([10, 30, 20]) == [10, 20, 30] and I now have to try to make this pass. I know the mergesort algorithm so I write: def mergesort(a): if not a: return [] else: left, right = a[:len(a)//2], a[len(a)//2:] return merge(mergesort(left), mergesort(right)) And this fails with NameError: name 'merge' is not defined. Now here's the question. How can I run off and start implementing merge using TDD? It seems like I can't, because I have this hanging, unfulfilled, failing test for mergesort, which won't pass until merge is finished! If this test hangs around, I can never really do TDD because I won't be green during my TDD iterations constructing merge. It seems like I am stuck with the following three ugly scenarios, and would like to know (1) which one of these the TDD community prefers, or (2) is there another approach I am missing? I've watched several Uncle Bob TDD walkthroughs and don't recall seeing a case like this before! Here are the 3 cases: (1) Implement merge in a different directory with a different test suite. (2) Don't worry about being green when developing the helper function; just manually keep track of which tests you really want to pass. (3) Comment out (GASP!) or delete the lines in mergesort that call merge; then, after getting merge to work, put them back in. These all look silly to me (or am I looking at this wrong?). Does anyone know the preferred approach? | Can the TDD methodology be applied top-down? | tdd | Here are some alternative ways to look at your options.
But first, the rules of TDD, from Uncle Bob with emphasis by me:You are not allowed to write any production code unless it is to make a failing unit test pass.You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.You are not allowed to write any more production code than is sufficient to pass the one failing unit test.So, one way to read rule number 3 is that you need the merge function to pass the test, so you can implement it -- but only in its most basic form.Or, alternately, you start by writing the merge operation inline, and then refactor it out into a function after getting the test to work.Another interpretation is that you're writing mergesort, you know that you'll need a merge operation (ie, it isn't YAGNI, which is what the sufficient rule attempts to curtail). Therefore, you should have started with tests for the merge, and only then proceeded to tests for the overall sort. |
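The refactoring reading of rule 3 above can be made concrete: write the merge operation inline, get the mergesort test green, then extract merge as its own function. A sketch of what the extracted result might look like (illustrative Python, not code from the answer):

```python
def merge(left, right):
    """Merge two already-sorted lists; extracted from mergesort as a refactoring."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # One of the two slices is exhausted; append the remainder of the other.
    result.extend(left[i:])
    result.extend(right[j:])
    return result


def mergesort(a):
    if len(a) <= 1:
        return list(a)
    left, right = a[:len(a) // 2], a[len(a) // 2:]
    return merge(mergesort(left), mergesort(right))
```

Once merge exists as a named function, it can grow its own focused tests without the mergesort test ever going red again.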
_softwareengineering.145583 | Possible Duplicate: Version control for independent developers? I am not sure if I should use a code repository when I am the only one working on a project. | Should I use a code repository if I am the only one working on a project? | java;svn;solo development;repository;version control | null
_codereview.11339 | This allows users to upload a file from the browser into my Rails app using the paperclip gem. Once the file is uploaded it gets saved into the filesystem. When the user then goes in and the show method or the edit method is invoked, the image is shown to the user. This is fine for image files, but for .csv and .txt files I don't want to show the preview in the browser. This code is clunky and I know there is a better way to do this. <% if @user.image? %><%= filetype = @user.image.url %><br/><% if filetype.include? ".jpeg" %> <b>isJpeg</b> <%= image_tag @user.image.url %> <br /> <%= link_to @user.image.url, @user.image.url %><% end %> <% if filetype.include? ".gif" %> <b>isGif</b> <%= image_tag @user.image.url %> <br /> <%= link_to @user.image.url, @user.image.url %><% end %><% if filetype.include? ".png" %> <b>isPNG</b> <%= image_tag @user.image.url %> <br /> <%= link_to @user.image.url, @user.image.url %><% end %><% if filetype.include? ".jpg" %> <b>isJPG</b> <%= image_tag @user.image.url %> <br /> <%= link_to @user.image.url, @user.image.url %><% end %><% if filetype.include? ".csv" %> <b>isCSV</b> <p>Your file was a csv file and has no preview</p> <%= link_to @user.image.url, @user.image.url %><% end %><%= image_tag @user.image.url %> <br /><%= link_to @user.image.url, @user.image.url %><% end %> | Uploading a file from the browser | beginner;ruby;ruby on rails;image;file system | tokland is right (on both counts), you should push all that logic into a helper. You can also add a bit of OpenStruct into the mix to make the helper nicer: # in app/helpers/application_helper.rb or another helper def user_image_info(user) info = OpenStruct.new(:has_image? => false) return info if(!user.image?) # There might be better ways to do this but I don't know paperclip. u = user.image.url %w[jpeg gif png jpg csv].find do |ext| # A small abuse of `find` but reasonable in this case.
info.is = "is#{ext.upcase}" if(u.include?(".#{ext}")) end if(info.is == 'isCSV') info.preview_link = '<p>Your file was a csv file and has no preview</p>'.html_safe else info.preview_link = (image_tag(user.image.url) + '<br>').html_safe end info end Then in your ERB, you could do something like this: <% info = user_image_info(@user) %><% if info.has_image? %> <b><%= info.is %></b> <%= info.preview_link %> <%= link_to @user.image.url, @user.image.url %><% end %>
_codereview.133828 | This a solution to the CodeEval's challenge Lucky Tickets.After a failed attempt with brute-force calculations (no surprise here), I have tried to implement an algorithm that I found on StackOverflow.The code runs without errors and my results match the output given in the example. However, when I submit the code for evaluation I obtain a partially solved status and a low score (17.5/100).It is certain my code can be optimized (especially the use of dictionary which I believe takes a toll on the memory usage) however I have, yet, not found a better way. Also I do not understand the reason why this code only partially solves the problem.public class LuckyTickets: ChallengeTemplate { /// <summary> /// N = total ticket length (2, 4, 6, etc.) /// Key : N/2 /// Value : Dictionary where /// Key = calculated sum for the N/2 digits /// Value = how many times this sum is found /// </summary> Dictionary<int, Dictionary<double, BigInteger>> allSums = new Dictionary<int, Dictionary<double, BigInteger>>(); public override void Execute() { // initialize the dictionary, f(0,0) = 1 allSums.Add(0, new Dictionary<double, BigInteger>()); allSums[0].Add(0, 1); // each line contains an even number corresponding to the length of the ticket foreach (var line in this.Lines) { int halflength = int.Parse(line) / 2; if (!allSums.ContainsKey(halflength)) { allSums.Add(halflength, new Dictionary<double, BigInteger>()); } // calculate the maximum sum we can possibly find (eg. if ticket length = 6, max sum is 9+9+9 = 27) double maxSumPossible = 9 * halflength; // recursively, for each sum, find how many times we can calculate it with n digits for (double i = maxSumPossible; i >= 0; i--) { GetSumCount(halflength, i); } BigInteger total = 0; foreach (var kv in allSums[halflength]) { total += (kv.Value * kv.Value); } Console.WriteLine(total.ToString(#)); } } private BigInteger GetSumCount(int n, double sum) { // does this length exist? 
if (!allSums.ContainsKey(n)) { allSums.Add(n, new Dictionary<double, BigInteger>()); } // if the count has already been calculated, return it BigInteger count; if (allSums[n].TryGetValue(sum, out count)) { return count; } else if (n >= 1 && sum >= 0) { // apply algorithm: // f(n, m) = f(n-1, m) + f(n-1, m-1) + f(n-1, m-2) count = 0; for (int i = 0; i <= 9; i++) { count += GetSumCount(n - 1, sum - i); } allSums[n][sum] = count; return count; } else { return 0; } } } | CodeEval - Lucky Tickets challenge | c#;algorithm;programming challenge;recursion | null |
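The recurrence the question applies, f(n, m) = f(n-1, m) + f(n-1, m-1) + ... + f(n-1, m-9), can be expressed without the nested dictionaries. A Python sketch of the same algorithm (an illustration of the approach, not a translation of the C# above; the function name is mine):

```python
def lucky_tickets(length):
    """Count tickets of the given even length whose first-half digit sum
    equals the second-half digit sum.

    counts[s] holds f(n, s): the number of n-digit strings (leading
    zeros allowed) whose digits sum to s.
    """
    half = length // 2
    counts = [1]  # f(0, 0) = 1
    for _ in range(half):
        # One DP step: append one digit 0..9 to every shorter string.
        new_counts = [0] * (len(counts) + 9)
        for s, c in enumerate(counts):
            for digit in range(10):
                new_counts[s + digit] += c
        counts = new_counts
    # A ticket is lucky iff both halves hit the same digit sum.
    return sum(c * c for c in counts)
```

For example, lucky_tickets(2) is 10, lucky_tickets(4) is 670, and lucky_tickets(6) is 55252, the classic results for this problem. Arbitrary-precision integers are built into Python, so no BigInteger equivalent is needed.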
_unix.28631 | I am completely blocked, but temporarily. I have an Ubuntu 11.10 host in good condition, on which I installed VirtualBox. In VirtualBox I installed Debian Squeeze with kernel 2.6.32-5-686 as a guest. I tried to recompile the kernel, removing all driver components and then re-enabling only the drivers necessary for the Debian guest to operate. The steps for compiling and installing the kernel were: log in as root # nano /etc/apt/sources.list # apt-get update # apt-get install debconf-utils debhelper dpkg-dev build-essential kernel-package libncurses5-dev # uname -r 2.6.32-5-686 # wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.32.5.tar.bz2 # tar xvjf linux-2.6.32.5.tar.bz2 # mv linux-2.6.32.5/ /usr/src/ # cd /usr/src/linux-2.6.32.5/ # cp /boot/config-2.6.32-5-686 .config # make allnoconfig # make menuconfig (we selected the penultimate entry, Load an Alternate Configuration File, then exited the menu and saved) # make-kpkg --append-to-version -tango --initrd buildpackage -us -uc (the image is now in /usr/src/) # dpkg -i linux-image-2.6.32.5-tango-tango-2.6.32.5-10.00Custom_i386.deb On boot, the first GRUB entry gives the error: kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0). lsmod Module Size Used by ppdev 4058 0 lp 5570 0 binfmt_misc 4907 1 fuse 44268 1 loop 9769 0 snd_intel8x0 19595 1 snd_ac97_codec 79200 1 snd_intel8x0 ac97_bus 710 1 snd_ac97_codec snd_pcm 47226 2 snd_intel8x0,snd_ac97_codec snd_seq 35463 0 snd_timer 12270 2 snd_pcm,snd_seq snd_seq_device 3673 1 snd_seq parport_pc 15799 0 parport 22554 3 ppdev,lp,parport_pc snd 34423 8 snd_intel8x0,snd_ac97_codec,snd_pcm,snd_seq,snd_timer,snd_seq_device psmouse 44809 0 pcspkr 1207 0 serio_raw 2916 0 ac 1640 0 joydev 6739 0 evdev 5609 8 i2c_piix4 7076 0 button 3598 0 i2c_core 12787 1 i2c_piix4 soundcore 3450 1 snd snd_page_alloc 5045 2 snd_intel8x0,snd_pcm ext3 94396 5 jbd 32317 1 ext3 mbcache 3762 1 ext3 usbhid 28008 0 hid 50909 1 usbhid sg 19937 0 sr_mod 10770 0 cdrom 26487 1 sr_mod sd_mod 26005 7
crc_t10dif 1012 1 sd_mod ata_generic 2247 0 ohci_hcd 16999 0 ata_piix 17736 0 ahci 27410 6 ehci_hcd 28693 0 thermal 9206 0 libata 115869 3 ata_generic,ata_piix,ahci thermal_sys 9378 1 thermal usbcore 98969 4 usbhid,ohci_hcd,ehci_hcd nls_base 4541 1 usbcore scsi_mod 104853 4 sg,sr_mod,sd_mod,libata e1000 77317 0 root@debian:/boot# root@debian:/boot# lspci 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] 00:01.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01) 00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter 00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 02) 00:04.0 System peripheral: InnoTek Systemberatung GmbH VirtualBox Guest Service 00:05.0 Multimedia audio controller: Intel Corporation 82801AA AC'97 Audio Controller (rev 01) 00:06.0 USB Controller: Apple Computer Inc. KeyLargo/Intrepid USB 00:07.0 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08) 00:0b.0 USB Controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB2 EHCI Controller 00:0d.0 SATA controller: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller (rev 02) root@debian:/boot# root@debian:/boot# lscpu Architecture: i686 CPU(s): 1 Thread(s) per core: 1 Core(s) per socket: 1 CPU socket(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 15 Stepping: 13 CPU MHz: 1983.975 root@debian:/boot# The .config file is in the link: .config. Please help! | kernel panic error after recompilation | debian;kernel;virtualbox | null
_unix.293445 | For example: $ pwd prints /home/my_name. Is it possible to identify whether this command was typed by a human or run by a script? | Distinguish between commands typed by a human and commands run by a script | shell script;keyboard;interactive | null
_unix.322081 | 3:root@SERVER:/root # cat wtf.sh shows: echo date ; date ; echo su - root -c date ; su - root -c date. 3:root@SERVER:/root # ksh wtf.sh prints: date / Wed Nov 9 13:15:01 MEZ 2016 / su - root -c date / Wed Nov 9 12:15:01 UTC 2016. 3:root@SERVER:/root # grep TZ /etc/environment prints: TZ=MEZ-1MESZ-2,M3.5.0/02:00,M10.5.0/03:00. 3:root@SERVER:/root # oslevel -s prints: 6100-09-06-1543. Why do they differ? Even the crontab shows UTC, but the system TZ is MEZ. | AIX: Why does the timezone differ when using su or in crontab? | date;aix;timezone | null
_unix.195437 | I have a USB stick which is recognized by my laptop, but there is no partition to mount. The device is listed in /dev/ as sdb, but there is no partition /dev/sdb1. fdisk -l also displays nothing about /dev/sdb. I also tried to reread the partition table with partprobe, which had no effect. I did not update my kernel; I rebooted nonetheless, still the same result. I also found this question: "Mounting USB Drive that is Not Recognized", which was solved by the answers here: bbs.archlinux.org, but those didn't help me since my /etc/udev/rules.d/ is empty. dmesg outputs the following when I plug in the stick: usb 3-1: new high-speed USB device number 2 using xhci_hcd usb-storage 3-1:1.0: USB Mass Storage device detected scsi host6: usb-storage 3-1:1.0 usbcore: registered new interface driver usb-storage usbcore: registered new interface driver uas scsi 6:0:0:0: Direct-Access 2238 PRAM 1.00 PQ: 0 ANSI: 0 CCS sd 6:0:0:0: [sdb] Attached SCSI removable disk lsusb displays my stick: Bus 003 Device 004: ID 13fe:3100 Kingston Technology Company Inc.
2/4 GB stickThe output of udevadm monitor is:monitor will print the received events for:UDEV - the event which udev sends out after rule processingKERNEL - the kernel ueventKERNEL[475.967958] add /devices/pci0000:00/0000:00:14.0/usb3/3-2 (usb)KERNEL[475.968047] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0 (usb)KERNEL[475.968467] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7 (scsi)KERNEL[475.968757] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/scsi_host/host7 (scsi_host)UDEV [476.071403] add /devices/pci0000:00/0000:00:14.0/usb3/3-2 (usb)UDEV [476.136149] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0 (usb)UDEV [476.136834] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7 (scsi)UDEV [476.138221] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/scsi_host/host7 (scsi_host)KERNEL[476.971068] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0 (scsi)KERNEL[476.971147] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0 (scsi)KERNEL[476.971196] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/scsi_disk/7:0:0:0 (scsi_disk)KERNEL[476.971334] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/scsi_device/7:0:0:0 (scsi_device)KERNEL[476.971780] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/bsg/7:0:0:0 (bsg)KERNEL[476.971813] add /devices/virtual/bdi/8:16 (bdi)KERNEL[476.971898] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/block/sdb (block)UDEV [476.972288] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0 (scsi)KERNEL[476.972597] change /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/block/sdb (block)UDEV [476.973246] add /devices/virtual/bdi/8:16 (bdi)UDEV [476.973710] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0 (scsi)UDEV [476.974907] add 
/devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/scsi_disk/7:0:0:0 (scsi_disk)UDEV [476.974949] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/scsi_device/7:0:0:0 (scsi_device)UDEV [476.975955] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/bsg/7:0:0:0 (bsg)UDEV [476.986215] add /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/block/sdb (block)UDEV [477.000467] change /devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/host7/target7:0:0/7:0:0:0/block/sdb (block)Since it seems that the stick is nearly dead I wanted to make a backup of the data so that I could restore it later. I found this How Do I Create a Bit-Identical Image of a USB Stick?So I ran:sudo dd if=/dev/sdb of=~/usb.img bs=4MWhich resulted in:dd: failed to open /dev/sdb: No medium foundI also tried the same with ddrescue with the same result:ddrescue: Can't open input file: No medium foundSo do you know if there is any way to access or at least save the data on my stick? 
Thanks for the help!EDIT1Today I wanted to post the output of lsblk requested in the comments and my stick showed a different behaviour than when I originally asked this question.dmesg displayed the following:usb 1-2: new high-speed USB device number 9 using xhci_hcdusb-storage 1-2:1.0: USB Mass Storage device detectedscsi host7: usb-storage 1-2:1.0scsi 7:0:0:0: Direct-Access Kingston DT R500 PMAP PQ: 0 ANSI: 0 CCSsd 7:0:0:0: [sdb] 31227904 512-byte logical blocks: (15.9 GB/14.8 GiB)sd 7:0:0:0: [sdb] Write Protect is offsd 7:0:0:0: [sdb] Mode Sense: 23 00 00 00sd 7:0:0:0: [sdb] No Caching mode page foundsd 7:0:0:0: [sdb] Assuming drive cache: write throughsdb: sdb1sd 7:0:0:0: [sdb] Attached SCSI removable diskusb 1-2: reset high-speed USB device number 9 using xhci_hcdusb 1-2: device descriptor read/64, error -110usb 1-2: device descriptor read/64, error -110usb 1-2: reset high-speed USB device number 9 using xhci_hcdusb 1-2: device descriptor read/64, error -110usb 1-2: device descriptor read/64, error -110usb 1-2: reset high-speed USB device number 9 using xhci_hcdusb 1-2: device descriptor read/8, error -110usb 1-2: device descriptor read/8, error -110usb 1-2: reset high-speed USB device number 9 using xhci_hcdusb 1-2: device descriptor read/8, error -110usb 1-2: device descriptor read/8, error -110usb 1-2: USB disconnect, device number 9sd 7:0:0:0: [sdb] UNKNOWN Result: hostbyte=0x01 driverbyte=0x00sd 7:0:0:0: [sdb] CDB: cdb[0]=0x28: 28 00 01 dc 7f 80 00 00 08 00blk_update_request: I/O error, dev sdb, sector 31227776xhci_hcd 0000:00:14.0: xHCI xhci_drop_endpoint called with disabled ep ffff880213bcf240xhci_hcd 0000:00:14.0: xHCI xhci_drop_endpoint called with disabled ep ffff880213bcf288usb 1-2: new high-speed USB device number 10 using xhci_hcdusb 1-2: device descriptor read/64, error -110usb 1-2: device descriptor read/64, error -110usb 1-2: new high-speed USB device number 11 using xhci_hcdusb 1-2: device descriptor read/64, error -110usb 1-2: device 
descriptor read/64, error -110usb 1-2: new high-speed USB device number 12 using xhci_hcdusb 1-2: device descriptor read/8, error -110usb 1-2: device descriptor read/8, error -110usb 1-2: new high-speed USB device number 13 using xhci_hcdusb 1-2: device descriptor read/8, error -110usb 1-2: device descriptor read/8, error -110usb usb1-port2: unable to enumerate USB deviceI also connected the stick to my Windows Computer. It could access the stick but there were no files showing. | USB Stick recognized but not mountable | arch linux;mount;usb drive;dd | null |
_unix.346638 | I have a pipe-delimited file with multiple fields (Field_1 to Field_10), and some of the fields hold dollar amounts. I would like to get the cumulative sum of, say, Field_1, Field_5, and Field_6 ($ amounts), and produce an output equal to SUM(Field_1)+SUM(Field_5)+SUM(Field_6). | Summation of multiple fields' cumulative sums | shell script;text processing;numeric data | null
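The question above has no accepted answer; here is a hypothetical Python sketch of the requested summation (the helper name, the 1-based field numbers, and the assumption that amounts may carry a leading $ or thousands commas are all mine):

```python
def sum_fields(lines, field_numbers, delimiter="|"):
    """Return the grand total of the given 1-based fields across all records.

    Strips an optional leading '$' and any thousands commas from each
    amount before converting it to a number.
    """
    total = 0.0
    for line in lines:
        fields = line.rstrip("\n").split(delimiter)
        for n in field_numbers:
            raw = fields[n - 1].strip().lstrip("$").replace(",", "")
            if raw:
                total += float(raw)
    return total
```

Usage might look like: with open("data.txt") as f: print(sum_fields(f, [1, 5, 6])), where data.txt is whatever the real input file is called.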
_opensource.2870 | A little background: I have a library project which I put under the LGPL v2.1. However, I have two applications inside the project tree using the library. It would be somewhat inconvenient to make these two small applications have their own license, so I figured I could just use the LGPL for all the code in the tree, including those applications.Does it make sense to have an application under the LGPL? What would change when compared to, e.g., the GPL? | Does it make sense to have a program under the LGPL? | gpl;lgpl | The LGPL v2.1 is specifically designed for libraries; in particular, it allows distribution of modified versions of the library only in certain circumstances (section 2), includingThe modified work must itself be a software library.However, you can distribute your library and its associated programs under the LGPL, because section 3 of the LGPL allows it to be upgraded to standard GPL (version 2 or later). Thus a recipient making changes and wishing to redistribute them can do so either under the LGPL or the GPL, whichever is appropriate. (After distribution occurs under the GPL though, further distribution of modified works on top of that can only use the GPL.)Having said all that, using two licenses within your source code isn't all that complex: all you need to do is include both the LGPL and the GPL, and make sure the source files' headers indicate which license applies to them (as explained at the end of the license documentation). |
_unix.46709 | Is there any particular command to find out what servers (like: apache2, mysql-server, backup-server etc) are running inside a dedicated server?If I will reboot my dedicated server will they all start automatically?What is the safe way of reboot a dedicated server with all its applications server running inside it?Note: I am in a dedicated Debian server. | How to check what is running in a server? | linux | I assume that application servers are using ports [Apache, Mysql do]If so you can use netstat -lepunt to find out the services running in your server.If you want to know the services are started at boot time check for init scripts in /etc/init.d/. Most of the time services like Apache and MySQL servers are started at boot time if they are installed using a package manager. If not you can create an init script to start them at the boot time. |
_codereview.124200 | I have made a Priority Queue implemented as a Binary Search Tree. I would appreciate comments related to optimization and proper functionality, but any comments/critiques are welcome.PQ.h:#ifndef PQ_H#define PQ_H#include <iostream>struct Node{ int value; Node * left; Node * right; Node(int value) : value(value), left(NULL), right(NULL) { }};class PQ{ private: Node * root; void Push(int value, Node * node); int Pop(Node * node, Node * parent); void DeletePQ(Node * node); public: PQ(); ~PQ(); void Push(int value); int Pop(); void DeletePQ(); bool IsEmpty();};#endifPQ.C:#include PQ.hPQ::PQ(){ root = NULL;}PQ::~PQ(){ DeletePQ();}void PQ::DeletePQ(){ DeletePQ(root);}void PQ::DeletePQ(Node * node){ if(node != NULL) { DeletePQ(node->left); DeletePQ(node->right); delete node; }}void PQ::Push(int value){ if(root != NULL) { Push(value, root); } else { root = new Node(value); }}void PQ::Push(int value, Node * node){ if(value < node->value) { if(node->left != NULL) { Push(value, node->left); } else { node->left = new Node(value); } } else { if(node->right != NULL) { Push(value, node->right); } else { node->right = new Node(value); } }}int PQ::Pop(){ int value; if(root != NULL) { if(root->right != NULL) { value = Pop(root->right, root); } else { value = root->value; if(root->left != NULL) { Node * temp = root; root = root->left; delete temp; } else { delete root; root = NULL; } } } else { value = -1; } return value;}int PQ::Pop(Node * node, Node * parent){ int value; if(node->right != NULL) { value = Pop(node->right, node); } else { value = node->value; if(node->left != NULL) { parent->right = node->left; } else { parent->right = NULL; } delete node; } return value;}bool PQ::IsEmpty(){ bool isEmpty; if(root == NULL) { isEmpty = true; } else { isEmpty = false; } return isEmpty;} | Priority Queue (Binary Search Tree) Implementation | c++;tree;binary search;priority queue | Handling booleansComputation of isEmpty is anti-idiomatic. 
Consider isEmpty = (root == NULL);Now it is easy to see that isEmpty is not needed at all. The whole method can be shortened tobool PQ::IsEmpty(){ return root == NULL;}Pop return on failureReturning -1 on failure means that the tree cannot have nodes with -1 as a value (a caller cannot tell such node from a failure). Consider throwing an exception instead.Recursive PushPush doesn't need to be recursive:void PQ::Push(int value, Node * node){ while (1) { if (value < node->value) { if (node->left) { node = node->left; } else { node->left = new Node(value); return; } } else { if (node->right) { node = node->right; } else { node->right = new Node(value); return; } } }}Recursive PopPop needs not to be recursive either:int PQ::Pop(Node * node, Node * parent){ while (node->right) { parent = node; node = node->right; } do_actual_deletion}Edit: as requested, no multiple returns, no infinite loops. I am not sure it is cleaner than the original. void PQ::Push(int value, Node * node) { Node ** insertionPoint = 0; while (insertionPoint == 0) { if (value < node->value) { if (node->left) { node = node->left; } else { insertionPoint = &node->left; } } else { if (node->right) { node = node->right; } else { insertionPoint = &node->right; } } } *insertionPoint = new Node(value); } |
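The reviewer's iterative push and pop-max, together with the raise-instead-of-returning--1 suggestion, translate readily to other languages; a Python sketch of the same ideas (my own rendering, not part of the review):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None


class PQ:
    """Max-priority queue backed by an unbalanced binary search tree."""

    def __init__(self):
        self.root = None

    def push(self, value):
        # Iterative descent to the empty slot, as in the review.
        if self.root is None:
            self.root = Node(value)
            return
        node = self.root
        while True:
            if value < node.value:
                if node.left is None:
                    node.left = Node(value)
                    return
                node = node.left
            else:
                if node.right is None:
                    node.right = Node(value)
                    return
                node = node.right

    def pop(self):
        # The maximum is the rightmost node; raise instead of returning -1.
        if self.root is None:
            raise IndexError("pop from empty priority queue")
        parent, node = None, self.root
        while node.right is not None:
            parent, node = node, node.right
        if parent is None:
            self.root = node.left
        else:
            parent.right = node.left
        return node.value
```

Tracking the attachment point while walking down replaces both the recursion and the parent parameter of the original Pop.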
_cs.42302 | As in the title, is it possible to find a clique with more than 2 nodes in a bipartite graph? | Can I find a clique with more than 2 nodes in a bipartite graph? | graph theory;graphs;clique | A graph is bipartite if and only if it is 2-colorable. A clique of size at least 3 contains a triangle, and a triangle $K_3$ clearly cannot be colored with 2 colors. It follows we can't find a triangle in a bipartite graph, so the corresponding decision problem is very easy for a bipartite graph. |
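The equivalence the answer relies on — bipartite iff 2-colorable, and a 2-colorable graph cannot contain a K3 — can be checked mechanically. A small sketch (not from the post; a standard BFS 2-coloring plus a brute-force triangle search):

```python
from collections import deque


def is_bipartite(adj):
    """BFS 2-coloring; succeeds exactly when the graph is bipartite."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False  # two adjacent nodes share a color: odd cycle
    return True


def has_triangle(adj):
    """Brute-force search for a clique of size 3."""
    nodes = sorted(adj)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if b not in adj[a]:
                continue
            for c in nodes:
                if c > b and c in adj[a] and c in adj[b]:
                    return True
    return False
```

On any graph where is_bipartite returns True, has_triangle must return False, which is exactly the claim in the answer.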
_unix.34534 | Using Ubuntu 10.04I'm using Clonezilla to create and install a customized image. I'd like to be able to set up multiple static IP addresses. For example, I'd like the first interface to have the IP 10.0.0.1 and the second interface to have the IP 10.0.0.2I have tried setting up two connections using NetworkManager, but can't associate a specific MAC address with them as these will be different for all target machines. Because of this, NetworkManager sets both adapters to the same IP address (10.0.0.1).I clear the udev network rules before imaging, so all machines are guaranteed to have an eth0 interface. However some targets are going to use a USB NIC adapter for the second interface, so they are not guaranteed to have an eth1. That is, it's possible that different USB NIC adapters will be used, and Linux will assign each a new interface name.Is there any way to assign the two IP addresses to the different adapters without the second having a fixed name using NetworkManager or /etc/network/interfaces? | Configuring multiple static IP addresses for a disk image | ubuntu;debian;networking;networkmanager;ethernet | null |
_vi.9887 | Can you please suggest any latex plugin with livereload/preview support? All of the plugins Google suggested do not support preview/reload, or use 3rd-party tools that use other 3rd-party tools and so on... Which one do you use or can suggest (if you're sure it really works on Linux)? Thank you! | vim latex livereload/preview plugin | filetype tex | I've created a function that compiles tex files and can be mapped to any key binding. As @Dalker said, the pdf viewer will automatically refresh when the displayed file changes, and it really does. I'm not exactly sure about the git-related stuff @VanLaser said... I think he meant that I have to save the file changes first... maybe not.

function! MKTex()
    if (&ft == 'tex')
        let s:dir = expand('%:p:h')
        let s:file = expand('%:p')
        execute '!pdflatex -interaction=nonstopmode -output-directory ' . s:dir . ' ' . s:file
    else
        echo "For .tex files only."
    endif
endfunction
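For anyone who wants the same compile step outside Vim, here is a hedged Python sketch of what MKTex() shells out to. The helper names are mine, and actually running compile_tex() assumes pdflatex is on the PATH; merely building the command does not.

```python
import os
import subprocess


def build_pdflatex_cmd(tex_path):
    """Mirror MKTex(): compile into the .tex file's own directory."""
    tex_path = os.path.abspath(tex_path)
    return [
        "pdflatex",
        "-interaction=nonstopmode",
        "-output-directory", os.path.dirname(tex_path),
        tex_path,
    ]


def compile_tex(tex_path):
    """Guard like the vimscript does (tex files only), then shell out."""
    if not tex_path.endswith(".tex"):
        raise ValueError("For .tex files only.")
    return subprocess.run(build_pdflatex_cmd(tex_path), check=False)
```

Paired with any PDF viewer that reloads on file change, this gives the same save-compile-refresh loop the answer describes.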
_unix.152268 | I have a tmux session already running. I have created it just withtmuxnow I can leave the session and re-enter withtmux a -t 0how can I share this session with other users? Usually one has to create the session with -S option, but I haven't. Is there a way to share my session? | share existing tmux session | tmux | null |
_unix.216052 | I am running Kali 1.1 on Parallels 9 for Mac and am having some irritating issues. Firstly, the WiFi doesn't work (normal for use within a virtual machine), but it also isn't working with my external wireless card with a confirmed functioning chipset. I am hoping the second issue's solution will solve this one too. Parallels Tools - I tried three or four different ways (from a virtual disc, copied to the Kali environment from the virtual disc [as it didn't have execute permissions on the disk itself], and also following several threads from forums - listed below). All of these attempts resulted in the error message: An error occurred when downloading required components for Parallels Tools installation. -Kernel sources Install these components and try again. I don't have any clues as to what sources may be missing. Any other ideas about how this might be solved? Anyone already done it?? I can't find anything else. Another thing I tried was to boot Kali from a USB drive (dual booting, not virtually through Parallels), which took a long time as one must 'copy' the .iso file to the USB stick with the 'dd' function. This however also didn't work. It actually didn't even make it into Kali; it froze while still at the dual boot screen, subsequently blowing out my USB port and requiring an SMC reset on my MacBook Pro. All of these experiences are leading me to believe that Kali, running on kernel version 3.18, is simply not compatible with current versions of the other software involved. I would be grateful for any help.
Here are 5 links to forums/methods I have used, none of which worked so far.https://superuser.com/questions/671747/cant-install-parallels-tools-on-debian-7-2-0https://forum.parallels.com/threads/parallels-tools-9-0-24229-and-kali-1-0-7.303731/forum (dot) parallels.com/threads/problem-installing-tools-on-3-18-kernel.327491/forum (dot) parallels.com/threads/trying-to-install-parallels-tools-on-kali-linux.327779/download (dot) parallels.com/desktop/v6/docs/en/Parallels_Desktop_Users_Guide/22570.htmEDIT #1: For a look at the log file, please follow the link in the first comment below from AustinMW: naxrevlis (dot) com/?p=19EDIT #2: I have now just decided to dual boot from SSD, which is working as one would hope. I did manage to make a bootable Kali USB, using a program called Rufus in Windows, which created both EFI and BIOS boot methods.Thanks for your help in advance. | Kali 1.3 - Installing Parallel Tools | linux;configuration;kali linux;parallels | null |
_opensource.4247 | The Json license has an appendix where it says: The Software shall be used for Good, not Evil.If I want to use a library with the Json license am I forced to add this quote to my license? Update: Changed the title so that its question and text ask the same question | Is every project using a Json licensed library forced to add use it for good, not evil statement to its license? | licensing;derivative works | Yes, assuming that it is reasonable to interpret this permission notice as the whole of the license text.GPL-compatibility for permissive licenses like MIT and BSD relies on the doctrine of sublicensing; you may give downstream recipients fewer rights than you received, but not more. In this case you never had the right to use the software for evil, which means you cannot possibly give it to recipients. |
_webmaster.68924 | I'm trying to use schema.org to indicate advertisements on my website. I'm concerned that putting adverts within the content of a blog page will hurt my SEO and I'm hoping using schema might help make things clear to search engines. This is what I have so far, an image with a link.<div itemscope itemtype=https://schema.org/WPAdBlock> <a href= itemprop=url><img src= itemprop=image></a></div> Is the above schema I'm using correct? Should I rather be inserting these adverts with JavaScript or jQuery rather than just HTML? I am not using an ad network; these are custom ads. | How to use Schema.org for Adverts on a website - WPAdBlock | seo;advertising;microdata | Yes, your use of Microdata and schema.org is correct. Instead of div, you might want to consider using the sectioning element aside: The element can be used for [] advertising, [] That way the advertisement is separate from the main content flow of your document. Oh, and don't forget to provide an alt attribute for the image. Depending on the ad, you might also want to use the nofollow link type for the link (Google urges to use it for paid links). For transparency (and, sometimes, legal) reasons, you might want to explicitly note that it's an advertisement. For example, by using an Advertisement heading (which makes sense if you are using a sectioning element, and is nice if you want screen reader users to easily skip it if they want to), or just a simple note, or maybe a link to a page explaining why you are advertising and what happens with possible income or referrer data etc.
_unix.229403 | We have a Cisco router which allows for rate limiting (they call it policing) but permitting bursting on a per-TCP connection basis. For example, we can cap the bandwidth at 50mbit but the cap won't be imposed until 4 megabytes have been transferred. This is enforced per each TCP connection that is made.Is there some way to do this in Linux? Also, are there any drawbacks to such a solution? In case it's helpful to anyone, the Cisco command for setting the bursting is the third parameter to the police command which is run under a policy-map (at least on our ASA 5505).The goal of this is to allow a server to take advantage of 95/5 bursting and serve web pages as quickly as possible for normal users but reduce the chances of bursting more than 5% of the time (such as if doing a server to server transfer or large files being downloaded from a website). I understand with a DDoS attack that went on too long this might not be a solution, but for various reasons that's not a concern here. | Rate limit network but allow bursting per TCP connection before limiting | networking;bandwidth | This is doable in linux with iptables and tc. You configure iptables to MARK packets on a connection where some number of bytes have been transferred. You then use tc to put those marked packets in a class in a queuing discipline to ratelimit the bandwidth.One somewhat tricky part is to limit the connection for both uploads and downloads. tc doesn't support traffic shaping of the ingress. You can get around this by shaping the egress on your webserver-facing interface (which will shape downloads to your webserver), and shaping egress on your upstream-provider facing interface (which will shape uploads from your webserver). You aren't really shaping the ingress (download) traffic, as you can't control how quickly your upstream provider sends data. 
But, shaping your webserver-facing interface will result in packets being dropped and the uploader shrinking their TCP window to accommodate the bandwidth limit.

Example: (assumes this is on a linux-based router, where the webserver-facing interface is eth0 and upstream is eth1)

# mark the packets for connections over 4MB being forwarded out eth1
# (uploads from webserver)
iptables -t mangle -A FORWARD -p tcp -o eth1 -m connbytes --connbytes 4194304: --connbytes-dir both --connbytes-mode bytes -j MARK --set-mark 50

# mark the packets for connections over 4MB being forwarded out eth0
# (downloads to webserver)
iptables -t mangle -A FORWARD -p tcp -o eth0 -m connbytes --connbytes 4194304: --connbytes-dir both --connbytes-mode bytes -j MARK --set-mark 50

# Setup queuing discipline for server-download traffic
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:50 htb rate 50mbit

# Setup queuing discipline for server-upload traffic
tc qdisc add dev eth1 root handle 1: htb
tc class add dev eth1 parent 1: classid 1:50 htb rate 50mbit

# set the tc filters to catch the marked packets and direct them appropriately
tc filter add dev eth0 parent 1:0 protocol ip handle 50 fw flowid 1:50
tc filter add dev eth1 parent 1:0 protocol ip handle 50 fw flowid 1:50

If you want to do this on the webserver itself instead of on a linux router, you can still use the upload portions of the above stuff. One notable change is you'd replace FORWARD with OUTPUT. For download you'd need to set up a queuing discipline using an Intermediate Functional Block device, or ifb. In short, it uses a virtual interface so that you can treat ingress traffic as egress, and shape it from there using tc. More info on how to set up an ifb can be found here: https://serverfault.com/questions/350023/tc-ingress-policing-and-ifb-mirroring

Note that this type of stuff tends to require a lot of tuning to scale.
One immediate concern is that connbytes relies upon the conntrack module, which tends to hit scaling walls with large numbers of connections. I'd recommend heavy load testing. Another caveat is that this doesn't work at all for UDP, since it is stateless. There are other techniques to tackle that, but it looks like your requirements are for TCP only.

Also, to undo all of the above, do the following:

# Flush the mangle FORWARD chain (don't run this if you have other stuff in there)
iptables -t mangle -F FORWARD

# Delete the queuing disciplines
tc qdisc del dev eth0 root
tc qdisc del dev eth1 root |
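The connbytes match that drives the whole setup can be modelled in a few lines. This toy sketch (mine, not part of the answer) tracks per-connection byte totals and reports when a flow crosses the inclusive lower bound of --connbytes 4194304:, i.e. the point at which its packets would start being MARKed:

```python
class ConnBytesTracker:
    """Toy stand-in for iptables' connbytes match (not the real thing).

    account() returns True once a connection's running total reaches the
    threshold -- the point at which its packets would start being MARKed.
    """

    def __init__(self, threshold=4 * 1024 * 1024):
        self.threshold = threshold
        self.totals = {}

    def account(self, conn_id, nbytes):
        self.totals[conn_id] = self.totals.get(conn_id, 0) + nbytes
        return self.totals[conn_id] >= self.threshold  # inclusive, like '4194304:'
```

The real match is stateful per conntrack entry inside the kernel; this only illustrates the threshold logic that lets the first 4MB of each connection burst at full speed.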
_codereview.91379 | I've managed to create a working shopping cart and my next concern is security, mostly about the architecture and session security.Should I make sessions somehow secure, if there's no authenticated login and sessions are deleted when browser closes? or is session_start() enough in this case?Would the server side validation be enough strong in the add_to_cart.php and is that proper way to exit PHP code in case of errors?Are the database queries safe or should I take some extra measures?Are there some high security risks with my approach I should take into account?Cart will be hosted on SSL-secured server. Do I need to specify something in the code, to only make it use SSL?If anyone find this cart useful, feel free to use it.session.php// Check if session is created. If not, then create.if (session_status() == PHP_SESSION_NONE) { session_start();}db_connect.php$host = localhost;$db_name = xx;$username = xx;$password = xx;try { $con = new PDO(mysql:host={$host};dbname={$db_name}, $username, $password); $con->exec(set names utf8);}//to handle connection errorcatch(PDOException $exception){ echo Connection error: . 
$exception->getMessage();}products.php<?php $query = SELECT id, name, price, image FROM shoes ORDER BY id; $stmt = $con->prepare( $query ); $stmt->execute(); $num = $stmt->rowCount(); while ($row = $stmt->fetch(PDO::FETCH_ASSOC)){ extract($row); echo <div class=\item\> <div class=\product-id\>{$id}</div> <div class=\category\>shoes</div> <div class=\image\> <img src=\images/{$image}\ class=\product-image\ alt=\product\/> </div> <div class=\name\> {$name} </div> <div class=\price\> {$price}</div> <div class=\quantity\><input type=\text\ value=\1\ class=\maara\ /></div> <input type=\button\ class=\lisaa\ value=\Lis\/> </div> ; }?>ajax.jsfunction add() {$(.lisaa).click(function(e) { var id = $(this).closest(.item).find(.product-id).text(); var category = $(this).closest(.item).find('.category').text(); var quantity = $(this).closest(.item).find('.maara').val(); var action = add; $.ajax({ url: 'add_to_cart.php', type: 'POST', data: {'id': id, 'category': category, 'quantity': quantity, 'action': action}, beforeSend: function() { $(#wait).show(); }, complete: function() { $(#wait).hide(); }, success: function(data) { if(data.indexOf(error) >= 0) { $(.success, .errors).hide(); $(.errors).html(data).slideDown(); } else { $(.shoppingcart-container).load(cart.php); $(.errors, .success).hide(); $(.success).html(data).slideDown(); } }, error: function(request, status, error) { alert(error!); } });});}add_to_cart.php// Check server request methodif ($_SERVER[REQUEST_METHOD] == POST) {// Check if action is setif(isset($_POST['action']) ? $_POST['action'] : ) { // Check if action is add if($_POST['action'] == add) { // Error variable. $error = ; // Success variable. $success = ; // VALIDATE ID if (isset($_POST['id']) ? $_POST['id'] : ) { // clean input $id = test_input($_POST[id]); // Check if id is numerical if (!is_numeric($id)) { // Show invalid ID as a return data echo error: Invalid ID. 
Not numerical.; // Add a value to error variable $error = Error; // Exit php exit; } } // If id doesn't exist else { // Show invalid ID as a return data echo error: Invalid ID. Empty id.; // Add a value to error variable $error = Error; // Exit php exit; } // VALIDATE Category if (isset($_POST['category']) ? $_POST['category'] : ) { // clean input $category = test_input($_POST[category]); // Category must match your product categories if(!preg_match('[shoes|shirts]', $category)) { // Show invalid category as a return data echo error: invalid category.; // Add a value to error variable $error = Error; // Exit php exit; } } // If category doesn't exist else { // Show invalid category as a return data echo error: Invalid category.; // Add a value to error variable $error = Error; // Exit php exit; } // VALIDATE Quantity if (isset($_POST['quantity']) ? $_POST['quantity'] : ) { // clean input $quantity = test_input($_POST[quantity]); // Check if quantity is numerical if (!is_numeric($quantity)) { // Show invalid category as a return data echo error: Invalid quantity format.; // Add a value to error variable $error = Error; // Exit php exit; } } // Check if errors are false if ($error == false) { // Connect to database and select row from table, which matches category variable $query = SELECT id, name, price, image FROM {$category} WHERE id={$id}; $stmt = $con->prepare( $query ); $stmt->execute(); } else { // Show error as return data echo error: errors occurred with db.; // Add a value to error variable $error = Error; // Exit php exit; } // Check if query contains a row if($stmt->rowCount() <= 0) { // Add a value to error variable $error = Error; // exit php exit; } // Get values of the item, which matched our database search while ($row = $stmt->fetch(PDO::FETCH_ASSOC)){ $name = $row['name']; $price = $row['price']; $image = $row['image']; } // Check if session variable cart exists. If not, then create. 
if (!isset($_SESSION['cart']) || empty($_SESSION['cart'])) { $_SESSION['cart'] = array(); } // Check if array is set if(isset($_SESSION['cart']['id'])) { // If array is set, check if our product id exists in cart already if(in_array($id, $_SESSION['cart']['id'])) { foreach($_SESSION['cart']['id'] as $key => $val) { if ($val == $id ) { // Update product quantity $_SESSION['cart']['quantity'][$key] = $quantity + $_SESSION['cart']['quantity'][$key]; // Show succesfull quantity update message as return data echo {$name} x {$quantity} quantity added; // Add a value to success variable $success = {$name} x {$quantity} quantity added; // Exit php exit; } } } } // If product doesn't exist in cart and errors are false, add new item to cart if ($error == false) { $_SESSION['cart']['id'][] = $id; $_SESSION['cart']['category'][] = $category; $_SESSION['cart']['name'][] = $name; $_SESSION['cart']['price'][] = $price; $_SESSION['cart']['quantity'][] = $quantity; $_SESSION['cart']['image'][] = $image; // Show succesfully added message as return data echo {$name} x {$quantity} succesfully added; // Add a value to success variable $success = {$name} x {$quantity} succesfully added; // exit php exit; } }}}cart.phpfunction showcart() { // If cart variable is not empty, then do the following if(!empty($_SESSION['cart'])) { // Few variables, to collect total amount of items and total price $total = ; $counter = ; // Start shoppingcart div echo <div class=\shoppingcart\>; // Loop through cart items foreach ($_SESSION['cart']['id'] as $key => $value) { // Add product's price into variable $singleproduct = $_SESSION['cart']['price'][$key]; // Add product's quantity into variable $quantityproduct = $_SESSION['cart']['quantity'][$key]; // Replace , with . to make calculations $singleformat = str_replace(',' , '.' 
, $singleproduct); // Count product's amount x quantity $multipleproducts = $singleformat * $quantityproduct; // Change number formatting $multipleformat = number_format($multipleproducts, 2, ,, ); // Create html output, which contains the product information echo <div class=\shoppingcart-items\>; echo(<div class=\shoppingcart-image\><img src=\images/{$_SESSION['cart']['image'][$key]}\ class=\shoppingcart-image\/></div>); echo(<div class=\shoppingcart-itemname\>{$_SESSION['cart']['name'][$key]}</div>); echo(<div class=\shoppingcart-quantity\> {$_SESSION['cart']['quantity'][$key]} x </div>); echo(<div class=\shoppingcart-price\> {$multipleformat} <br /> <span class=\singleproduct-price\> ({$singleproduct} / kpl)</span></div>); // Calculate total price of products $total += $singleformat * $quantityproduct; // Calculate total items amount $counter += $quantityproduct; // Change total price number format $totalsum = number_format($total, 2, ,, ); echo </div>; // End foreach loop } // End shopping cart div echo </div>; // Create bottom for shopping cart, which contains total amount of items and total price echo <div class=\shoppingcart-bottom\> <div class=\summa\><a href=\lomake.php\>Kori</a></div> <div class=\tuotteiden-maara\>{$counter} tuotetta <br />{$totalsum} </div> </div>; } // if cart variable is empty, then show the following else { echo <div class=\shoppingcart\>; echo ostoskori on tyhj; echo </div>; }}test_input functionfunction test_input($data){ $data = trim($data); $data = stripslashes($data); $data = htmlspecialchars($data); return $data;} | Securing PHP shopping cart | php;mysql;security;ajax;e commerce | SecurityQuestionsShould I make sessions somehow secure, if there's no authenticated login and sessions are deleted when browser closes? or is session_start() enough in this case?The default should be fine. 
Most of session security is about server configuration, and for a shopping cart you probably don't need stuff like regenerating the session id regularly, binding it to ip and/or user agent, etc.

Would the server side validation be strong enough in the add_to_cart.php and is that the proper way to exit PHP code in case of errors?
No and no (see below).

Are the database queries safe or should I take some extra measures?
No, use prepared statements (see below).

Cart will be hosted on an SSL-secured server. Do I need to specify something in the code, to only make it use SSL?
This is also mainly a server configuration thing, but you can try to enforce HTTPS via PHP code.

Vulnerabilities
Your code is vulnerable to SQL injection via the $category variable which is user supplied ($_POST['category']) and then put into the query. Basically what you have is this:

$query = SELECT id, name, price, image FROM $_POST['category'] WHERE id={$id};

Your code is also vulnerable to XSS by anyone who can add products. I will not go into that, because I'm assuming it's intended, but if I were you, I would still defend against it (maybe in the future you allow third parties to add products; maybe you don't want your sales-person to be able to escalate privileges to admin; etc).

Your code is also vulnerable to CSRF, but for a shopping cart that's not really that bad. It makes it possible for an attacker to add items to the cart of a victim if the victim visits some website that contains HTML and JavaScript code by the attacker. This could lead to the victim accidentally buying something they did not want to buy if they don't check their cart during checkout, and might annoy you or your users.
But there is no real profit for the attacker (except annoying people), and it's a difficult attack (the timing must be right and the victim must not thoroughly check their cart), so it's not a real danger.

Attack
An attack like this should work: localhost/addtocart.php?action=add&quantity=1&id=1&category=shoes where id=-1 union all select user,password,3,4 from mysql.users %23

Your defense
You apply two checks to the category: !preg_match('[shoes|shirts]', $category) and test_input (which is stripslashes + htmlspecialchars). The first one is insufficient (eg shoesFooBar passes), and the second one doesn't have anything to do with SQL injection (htmlspecialchars defends against XSS, stripslashes doesn't do anything useful).

The correct defense
First of all, what you want to do is use prepared statements for all variable data. It doesn't matter where it comes from; if it's not hardcoded, use prepared statements (if it comes from the database it might have been user supplied in the past, which would open you up to second order injection).

But what about things where you can't use prepared statements? For example your $category. Here you want to use whitelists:

$whitelistTableNames = array(shoes, shirts);
if (in_array($_POST['category'], $whitelistTableNames, TRUE)) {
    $query = SELECT id, name, price, image FROM $_POST['category'] WHERE id=?;
    // prepare and execute
}

I put the TRUE in there for strict checking (===), just in case someone adds a 0 to the whitelist.

Defense in general
Looking at your code, it seems that you are not really sure what you are defending against, and are just using a couple of functions in the hope that it works. This is not the correct approach.
I think it would benefit you greatly if you just tried out the most common vulnerabilities (eg XSS and SQL injection) yourself, so you know how they work, and what can defend against them.Regarding the various functions you use:htmlspecialchars: this is the proper defense against XSS in most situations (please note this list of places where it does not defend against XSS). It should not be applied when inserting something in the database, but when echoing anything non-hardcoded to the user. This makes sense: You should defend at that moment when it could be exploited, not at any other moment, because it would be hard to maintain (You would constantly have to check if you cleaned a variable already or not), and can be vulnerable (maybe you have another method to add data to the database that doesn't clean values).stripslashes: Does what it says: It removes slashes that are used to escape stuff. This function was useful when magic quotes was still used, which it mostly isn't anymore, so the function doesn't have all that much purpose. Definitely never use it for any security, as it doesn't provide any.is_numeric: This is what in your code protects most user input from becoming an SQL injection. It's secure, but it's not the recommended way to handle SQL injection (again, use prepared statements, one reason is the same as for XSS: you want to defend directly where the vulnerability is, not earlier). If you know that you need a numeric value, you might as well use filter_input with an int filter instead of applying is_numeric at some point in-between getting input and supplying it to a query. This should not be your main defense against SQL injection, but is a nice addition as defense in depth.MiscYou can use single as well as double quotes to define a string, make use of this. 
If you only have double quotes inside a string, terminate it with single quotes to avoid escaping all the double quotes.

use guard-clauses to reduce the nesting of your code (eg if ($_SERVER[REQUEST_METHOD] !== POST || empty($_POST['action'])) return;).

your add_to_cart.php code is just one long block, which makes it hard to read and maintain. Try to introduce functions which separate logical units of code.

use fewer newlines. Currently, nearly all your statements get their own paragraph, which is just too much; it makes your code harder to read.

use fewer comments. Don't rephrase what your code already told a reader (eg Exit php, clean input, etc), and don't add comments because your code looks confusing (eg If id doesn't exist because the if was opened so far away; if you need comments like these, reduce the length of your code).

just exiting on error makes your code hard to reuse, because the calling code can't control it. Try to return, throw exceptions, or similar instead.

Why have $error if you are exiting on error anyways? This seems unnecessary. |
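The whitelist advice from the answer translates directly to other languages. A minimal Python sketch (names are mine) of the same pattern: whitelist the table name, since identifiers cannot be bound parameters, and leave the id to a real placeholder.

```python
ALLOWED_TABLES = ("shoes", "shirts")


def build_product_query(category):
    """Table names cannot be bound parameters, so whitelist them strictly;
    the id goes through a placeholder and is bound at execute time."""
    if category not in ALLOWED_TABLES:  # strict: 0 or 'shoesFooBar' both fail
        raise ValueError("invalid category")
    return "SELECT id, name, price, image FROM %s WHERE id = ?" % category
```

The returned string would then be prepared and executed with the id bound as a parameter (e.g. PDO's execute([$id]) in PHP, or sqlite3's execute(query, (product_id,)) in Python), so no user input ever reaches the SQL text itself.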
_softwareengineering.341077 | I am looking at an IoT-like system which must deal with a wide range of data complexities. As an example, I'd like to model a 'thing' and I know all things have a location. However, when the actual data arrives from these things (from different vendors), something like latitude can be spelled differently; it may follow different naming standards (vendor1=lat, vendor2=latitude). I want to build an abstract model at the application layer which knows how to manage these gaps. Preferably using an XML data representation (ie, I could use a std like OAGIS/MIMOSA). Couple of questions: 1) Has anyone done something similar? 2) Seems like it could be done at the data ingestion layer (ie, harmonize all data into 1 def of lat) or it could be done at query time (when someone executes an API, find the right column and return it). Has anyone done a comparison of these two models? | Generic Type System | architecture;data structures;type systems | null |
_unix.19178 | I have 2 dedicated servers both running CentOS 5.3 and Plesk 10.0.0.I'm trying to migrate stuff from one to the other but it's not happening, just after I fill out Migration settings and press Next I get Host xx.xx.xx.xx is not accessibleWhat I've done so far that didn't have any effect whatsoever:stopped the firewall on the source serverenabled PermitRootLogin yes in /etc/ssh/sshd_configand restarted the serviceAny ideas?UPDATE: I was able to initiate the migration from the new server (which is blank) to an old one (I need vice versa), so there's definitely something relating settings.I pulled out a log file, and this is where it fails:23117: 2011-08-22 16:34:22,949 INFO Executing <subprocess[23118] '/usr/local/psa/admin/bin/launchpad --send-scout --host=XX.XXX.XXX.XXX --login=root --session-path=/usr/local/psa/PMM/msessions/2011082216342281'>23117: 2011-08-22 16:34:32,983 INFO Subprocess raised ExecuteException: Subprocess <subprocess[23118] '/usr/local/psa/admin/bin/launchpad --send-scout --host=XX.XXX.XXX.XXX --login=root --session-path=/usr/local/psa/PMM/msessions/2011082216342281'> was finished with exit code 10== STDOUT ====================== STDERR ====================Cannot send scout to the remote host23117: 2011-08-22 16:34:32,983 ERROR Subprocess <subprocess[23118] '/usr/local/psa/admin/bin/launchpad --send-scout --host=XX.XXX.XXX.XXX --login=root --session-path=/usr/local/psa/PMM/msessions/2011082216342281'> was finished with exit code 10== STDOUT ====================== STDERR ====================Cannot send scout to the remote host23117: 2011-08-22 16:34:32,983 INFO Outgoing packet:<?xml version=1.0 encoding=UTF-8?><response><errcode>130</errcode><errmsg>Host XX.XXX.XXX.XXX is not accessible</errmsg></response> | Plesk 10 migration failure | centos;plesk | Solved. Edited hosts.deny and hosts.allow and that did it. |
_unix.38911 | When I try to open a file using vim inside tmux the whole window freezes. I have to kill the window with C-a &.Here are my ~/.vimrc settings::set autoindent:set ts=4:set number:set shiftwidth=4:set showmode:filetype on:filetype plugin on:syntax enable:set mouse=aand ~/.tmux.conf# I like Ctrl-a as the default hotkeyunbind C-bset-option -g prefix C-a# Split window using | and -unbind %bind | split-window -hbind - split-window -v# Set status barset -g status-bg blackset -g status-fg whiteset -g status-left #[fg=green]#H# Highlight active windowset-window-option -g window-status-current-bg red# Makes window numbering start from 1, instead of 0set -g base-index 1I am facing the problem in RHEL. However the same config works fine in my Mac. I guess, things were working fine till my RHEL box got restarted and I tried to recover a file in from vi swap file.Any ideas on how to fix this?[edit]: I tried ssh to other box inside tmux and running vi there. Works fine in remote box ![added later]Following the suggestion of @jasonwryan, I added the line set -g default-terminal screen-256color at the end of tmux.conf. That prevented programs like less from working.echo $TERM inside tmux is screen and outside tmux is xterm.Searching for $TERM led me to https://wiki.archlinux.org/index.php/Tmux, from where I added the line set -g default-terminal screen-256color as the first line of tmux.conf. This made the $TERM inside tmux to screen-256color. But now when I start vi inside tmux, it displays the following error:E558: Terminal entry not found in terminfo'screen-256color' not known. Available builtin terminals are: builtin_riscos builtin_amiga builtin_beos-ansi builtin_ansi builtin_pcansi builtin_win32 builtin_vt320 builtin_vt52 builtin_xterm builtin_iris-ansi builtin_debug builtin_dumbdefaulting to 'ansi'Looks like I have solved the issue. Just added set -g default-terminal xterm as the first line of my ~/.tmux.conf and it worked ! 
| Vim not running inside tmux | vim;tmux | I solved the issue by adding the line

set -g default-terminal xterm

as the first line of my ~/.tmux.conf, and it worked fine. However, as @jasonwryan has pointed out, the TMUX FAQ clearly states that: "Most display problems are due to incorrect TERM! Before reporting problems make SURE that TERM settings are correct inside and outside tmux. Inside tmux TERM must be screen or similar (such as screen-256color). Outside, it must match your terminal..." I only post this answer as it actually solved my problem. Please feel free to add your alternative solutions. |
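A quick way to check the mismatch discussed in this thread, namely whether the terminfo entries tmux may advertise actually exist on the box, is to loop over infocmp (part of ncurses; a "missing" line for screen-256color corresponds to the E558 error above):

```shell
# Report which of the relevant terminfo entries this system has installed
for t in screen screen-256color xterm; do
  if infocmp "$t" >/dev/null 2>&1; then
    echo "$t: present"
  else
    echo "$t: missing"
  fi
done
```

On a box where screen-256color is missing, installing the distribution's extra terminfo package (often named ncurses-term, though the name varies by release) is the usual remedy.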
_unix.187132 | I just updated my system and upon reboot I've found myself in emergency mode. This is a dm-crypt+LUKS EFI system (a ThinkPad) using gummiboot. journalctl -xb reports that /boot could not be mounted. Following this thread, I tried downgrading my kernel to 3.18.2 using pacman -U, and while it did downgrade, I still can't boot normally. Thinking the kernel upgrade process just caught a glitch, I tried re-updating my kernel (from /var/cache/pacman/pkg/), but that didn't affect the next boot. mkinitcpio gave a warning that the boot partition wasn't mounted. The line currently in my /etc/fstab is:

LABEL=EFI /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2

uname -r tells me the emergency mode is using Linux kernel 3.18.2 instead of the 3.18.6 kernel I updated to. pacman -Q says Linux 3.18.6-1. journalctl -xb | grep -i failed | less shows that systemd failed to load the display manager and failed to start Load Kernel Modules. Two units failed according to systemctl --state=failed. When I start up, and when I try to connect to the internet with netctl, I get the codepage cp437 error and am prompted again for my root password. Further investigation reveals that this is the MS-DOS/FAT extended ASCII encoding specified for my EFI partition in /etc/fstab. If this is just a misalignment between /boot and /, how could I resync them past pacman -U? I'd really appreciate suggestions for restoring my system. Thanks in advance. | How to restore normal boot process after pacman update on EFI? | centos;plesk | arch linux;boot;systemd;uefi | jasonwryan pointed me in the right direction. 
I performed the following steps:

1) Downloaded the latest installation media and made a bootable USB
2) Unlocked (decrypted) my LUKS LVM volumes
3) Mounted my volumes under the live USB's file system in /mnt/arch, a directory I created (including /mnt/arch/boot and /mnt/arch/home)
4) Connected to the internet with wifi-menu
5) Used arch-chroot to change root
6) Updated with pacman
7) Rebooted |
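In command form, those steps might look roughly like the sketch below. The device nodes, volume-group names and LV names are placeholders for this particular layout, not something to copy verbatim:

```shell
# Booted from the install USB (steps 1-7 above); all device names are examples
cryptsetup open /dev/sda2 cryptlvm          # 2) unlock the LUKS container
mkdir -p /mnt/arch
mount /dev/mapper/vg-root /mnt/arch         # 3) root LV
mount /dev/sda1 /mnt/arch/boot              #    the EFI partition (LABEL=EFI)
mount /dev/mapper/vg-home /mnt/arch/home
wifi-menu                                   # 4) get online
arch-chroot /mnt/arch pacman -Syu           # 5+6) update from inside the chroot
arch-chroot /mnt/arch mkinitcpio -p linux   #      regenerate initramfs with /boot mounted
reboot                                      # 7)
```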
_cstheory.5891 | Let $G( n, m )$ be the set of all possible connected graphs of $n$ nodes and $m$ edges such that, for each $g_1 \in G( n, m )$, $g_2 \in G( n, m )$, if $g_1 \neq g_2$ then $g_1$ and $g_2$ are non-isomorphic.

Question: How large can $|G( n, m )|$ be? Is it polynomial in both $n$ and $m$? Or is it superpolynomial in either $n$ or $m$? | Number of non-isomorphic connected graphs of $n$ nodes and $m$ edges | graph theory;co.combinatorics;graph isomorphism | According to Bollobás (Random Graphs), if you make natural assumptions on $n$ and $m$, there are about $n!$ times more labelled graphs on $n$ vertices and $m$ edges than unlabelled ones, so there are roughly $\frac{1}{n!}{{n \choose 2} \choose m}$ unlabelled graphs on $n$ vertices and $m$ edges. If you pick something like $m = \frac{1}{2}{n \choose 2}$, all those graphs should be connected with high probability, so I'd say: massively superpolynomial! Of course, you can break those natural assumptions by setting $m = 0$, or $m<n-1$ in your case... Nathann |
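To make the superpolynomial claim concrete: with $m = \frac{1}{2}\binom{n}{2}$, the central binomial coefficient bound $\binom{2k}{k} \ge \frac{2^{2k}}{2k+1}$ and the crude estimate $n! \le n^n$ give

```latex
|G(n,m)| \;\ge\; \frac{1}{n!}\binom{\binom{n}{2}}{\tfrac{1}{2}\binom{n}{2}}
         \;\ge\; \frac{2^{\binom{n}{2}}}{\bigl(\binom{n}{2}+1\bigr)\, n^{n}}
         \;=\; 2^{\,\Theta(n^{2}) \,-\, O(n \log n)},
```

so the count is in fact exponential in $n^2$ up to lower-order factors, far beyond any polynomial in $n$ and $m$.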
_unix.149424 | I think locks, mutexes, and semaphores are used to synchronize multiple (threads or processes?) accessing something simultaneously. Must this something be some shared memory between the multiple (threads or processes)? If yes, does that mean locks, mutexes, and semaphores are only used between multiple threads of a process, not between multiple processes, because multiple processes don't share memory, while multiple threads of the same process do? Thanks. | Are lock, mutex, and semaphore for between threads or between processes? | process;lock;thread | null |
_vi.11104 | Auto-comments are the most annoying feature of text editors and IDEs for me. I've searched -- nay, scavenged -- high and low to figure out how to get rid of literally any semblance of automatic comment continuation, and for whatever reason it's proving impossible and incredibly annoying. Right now, I have tried the following three lines, which work for most languages.

autocmd BufNewFile,BufRead,FileType * set formatoptions-=cro
autocmd BufNewFile,BufRead,FileType * setlocal formatoptions-=cro
au FileType c,cpp setlocal comments-=:// comments+=f://

But whenever I open a file that isn't considered C++ at first (i.e. not the correct extension) and use setf cpp, all auto-comments seem to come back to haunt me. Even sometimes when opening a known-to-be-C++ file, this still happens. I'm not sure what the cause is, but it's aggravating. How on Earth can I tell vim that I really truly do not want auto-comments, ever, under any circumstances? I feel like I've been plagued by this for years. | Disable absolutely all auto-comments, for real | vimrc;neovim;autocmd;comments | The culprit is $VIMRUNTIME/ftplugin/c.vim, and likely all the other standard ft plugins. If you want everything they define except the setting for 'formatoptions', I don't see any simple solution. (Just in case: $VIMRUNTIME is set within vim.) Maybe you could listen for OptionSet to prevent inserting cro in &fo. But beware of possible infinite loops. I've never tried it. BTW, I would have left 'comments' alone. This option could be used by plugins that toggle comments on blocks, or that generate documentation -- I actually use it in mu-template to insert license captions as comments, whichever the current language is. |
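For completeness, the workaround most often recommended for this exact symptom (it is not from the answer above, so treat it as an assumption to verify): a FileType autocommand defined in the vimrc after `filetype plugin on` runs after $VIMRUNTIME/ftplugin/c.vim, so it can re-clear the flags every time the filetype is set, including via :setf cpp. Removing the flags one at a time matters, because fo-=cro only removes that exact substring:

```vim
" Re-clear comment-continuation flags after every ftplugin has run
augroup NoAutoComments
  autocmd!
  autocmd FileType * setlocal formatoptions-=c formatoptions-=r formatoptions-=o
augroup END
```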
_unix.216491 | Is there a way in Gnome3 on CentOS7 to list the actual keyboard shortcuts for things like the activities view? I can find lots of web pages that tell me what they should be, but I'd like to know for sure. For instance, a Gnome Help site says that the shortcut for the activities view is Alt-F1, but that just brings up the Application menu. I want a shorter sequence to bring this up. That same page also refers to a Super key, but I don't have that key on this HP Z Book. After I get a list of these shortcuts, how can I change them? | How to determine or set keyboard shortcut for activities view in gnome3 on centos7? | centos;gnome3 | You can see all shortcuts under Keyboard in the Settings and add custom ones. And you can change the key for the applications menu in the Gnome Tweak Tool. |
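If a command-line route is wanted too, the same information is exposed through gsettings; the schema and key names below are the usual GNOME 3 ones, but verify them on a CentOS 7 box before relying on them:

```shell
# List every window-manager keybinding currently in effect
gsettings list-recursively org.gnome.desktop.wm.keybindings | sort

# The Activities overview is normally on the "overlay" key (Super);
# machines without a Super key can rebind it, e.g. to the Menu key:
gsettings get org.gnome.mutter overlay-key
gsettings set org.gnome.mutter overlay-key 'Menu'
```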
_unix.364468 | I bought a disk with some bad sectors, planning to fix them and then use it as part of a RAID 6 cluster. I can do bad-sector fixing under Windows, where there are very good bad-block fixing tools, but under Windows the process is very slow; one sector fix takes 15 minutes. In my experience, Linux is better at dealing with devices that don't respond in time, and this results in a far faster process under Linux. However, I checked the fsck manual but did not find any useful option for surface & bad-block scanning or bad-block reallocation. How can I scan the surface of my hard disk and fix/reallocate bad sectors in Linux from the command line? | How can I do disk surface scanning, and fix/reallocate bad sectors in Linux from the command line? | hard disk;fsck | This answer is about magnetic disks. SSDs are different. Also, this is for a disk with no data (or no data you care to preserve) on it; see my answer to Can I fix bad blocks on my hard disk with a single command for what to do if you have important data on the disk.

Disks made since at least the late 90s manage bad blocks themselves. In brief, a disk will handle a bad block by transparently replacing it with a spare sector. It will do so if (a) while reading, it discovers the block is weak, but ECC is enough to recover the data; (b) while writing, it discovers the sector header is bad; (c) while writing, if a read previously detected the sector as bad, but the data was not recoverable. The disk firmware typically lets you monitor this process (the counts, at least) via SMART attributes. Typically there will be at least a count of reallocated sectors and two counts of pending sectors (discovered bad on read, ECC failed, not yet written to).

There are two ways to get the disk to notice bad sectors:

Use smartctl -t offline /dev/sdX to tell the disk firmware to do an offline surface scan. 
You then just leave the disk alone (completely idle will be fastest) until it's done (check the Offline data collection status in smartctl -c /dev/sdX). This will typically update the offline uncorrectable count in SMART. (Note: drives can be configured to automatically run an offline check routinely.)
Have Linux read the entire disk, e.g., badblocks -b 4096 -c 1024 -s /dev/sdX. This will typically update the current pending sector count in SMART.

Either of the above may also increase the reallocated sector count; this is case (a), the ECC recovered the data. Now, to recover the sectors you just need to write to them. Normally, that'd be a simple pv -pterba /dev/zero > /dev/sdX (or just plain cat, or dd), but you plan to make these part of a RAID array. The RAID init will write to the entire disk anyway, so that's pointless. The only exception is the beginning and end of the disk; it's possible a few tens of megabytes will be missed (due to alignment, headers, etc.). So:

disk=/dev/sdX
end=$(echo "$(/sbin/blockdev --getsize64 $disk)/4096-32768" | bc)
dd if=/dev/zero bs=4096 count=32768 of=$disk            # first 128 MiB
dd if=/dev/zero bs=4096 seek=$end count=32768 of=$disk  # last 128 MiB

I think I managed to avoid the all-too-easy fencepost error [1] above, so that should blank the first and last 128 MiB of the disk. Then let the mdadm RAID init write the rest. It's harmless (except for trivial wear, and wasting hours of time) to zero the whole disk if you'd like to, though.

Another thing to do, if your disks support it: smartctl -l scterc,40,100 (or whatever numbers) to tell the disk that you want it to give up on correcting read errors quicker; 40 would be 4 seconds. The two numbers are read errors and write errors; mdraid will easily correct read errors via parity (and write the failed sector back to the disk to let it reallocate). Write errors, though, will fail the disk out of the array.

PS: Make sure to keep an eye on the reallocated sectors count. That attribute going to failed is bad news. 
And if it's continuously increasing, that's bad news too. PPS: Make sure your RAID arrays are scrubbed (every sector read and all the parity verified) routinely. Many distros already ship a script that does this monthly. This will detect & repair any new bad blocks, as otherwise seldom-read bad blocks can linger and ultimately cause rebuild failure.

[1] Fencepost error: a type of off-by-one error from failing to count one of the ends. Named from "if you have a fence post every 3 ft, how many fence posts in a 9 ft freestanding fence?" The correct answer is 4; the fencepost error is 3, from not counting the post at the beginning or at the end. |
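The "keep an eye on" advice from the PS can be scripted; smartctl -A prints one attribute per row with the name in column 2 and the raw value in column 10 (smartmontools assumed installed, /dev/sdX a placeholder):

```shell
# Print just the SMART counters worth watching on this disk
disk=/dev/sdX
smartctl -A "$disk" |
  awk '/Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ {print $2, $10}'
```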
_unix.384974 | Is it possible to do something like this:

inputNum=$1
files=($(find /dir/to/check -mtime $inputNum))

Basically the idea is that I can use an input parameter to set the number of days to find files and set it to a variable array. I am not sure on the syntax to make this readable in bash. | Variable in find command set to new variable in bash | bash;shell script;files;find;variable | The output of find is not reliably post-processable unless you use -print0 instead of -print (-print is implied when no action is specified). To post-process the output of find -print0 and store the file paths in an array:

With bash 4.4+:

readarray -td '' files < <(find /dir/to/check -mtime "$inputNum" -print0)

With older versions:

files=()
while IFS= read -rd '' file; do
  files+=("$file")
done < <(find /dir/to/check -mtime "$inputNum" -print0)

More generally, you'd want to read the recommendations at: Why is looping over find's output bad practice? |
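A self-contained check of the bash 4.4+ form against a throwaway directory; note that the filename containing a space comes through intact:

```shell
# Create scratch files, collect them NUL-safely, count them (bash >= 4.4)
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/with space"
readarray -td '' files < <(find "$dir" -type f -print0)
printf '%d\n' "${#files[@]}"
rm -rf "$dir"
```

The printf reports 3, one array element per file regardless of whitespace in the names.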
_softwareengineering.252643 | Just talking about internal applications or intranet web apps... At some companies I've seen their business-logic piece (Model in MVC / VM in MVVM) on one (or both) sides of a Web Service. And on the other side of the web service is the Persistence:

MVC/MVVM > Service Layer > Persistence

This is only for intranet/internal application customers, and both the web or app code and the persistence (usually ORM) DLLs sit on the same server, or even in the same folder. I'm used to seeing internal apps and intranet websites that reference a business layer... then that business layer connects to persistence. So the app itself is persistence-ignorant. But with my own apps, if something needs to be exposed externally, that something is opened up via a web service. Otherwise, everything stays internal. Is there a reason why I've seen a couple of different companies do this? They didn't seem to know the answer themselves. | Should my internal MVC/MVVM application use Web Services for Persistence? | mvc;web services;mvvm;persistence | I tend to think that's a bit silly. More formally, it's an example of what I would call speculative generality. The counterargument would be that the architecture you describe allows for other sorts of clients to be easily plugged into the same system with less effort, and that one never really knows what sort of new direction the project might take. (Realistically, though, sometimes these things can be known to a very great extent. Pretending everything's an unknown and requires generalization can be a very bad way to work, in my experience.) I suppose that the deciding factor is how much effort the team thinks it will take to use Web Services. Superficially, it doesn't seem like the sort of thing that would require too much extra work. 
However, debugging and configuration implications must be considered (as opposed to just the code itself), and that's where I think that the Web Services approach probably damages the developer experience. |
_unix.56665 | Is there a command that compares a floppy disk image (e.g. a .iso file) to the actual contents of the floppy the image was written on (e.g. /dev/fd0)? | How do I compare a file with a floppy image and the actual floppy's content? | diff;floppy | A floppy device file is a file. Any command that reads files will work on it.

cmp /dev/fd0 image.fat

Pass the -l option if you want a list of all differing bytes; for human consumption, this is mostly useful in the form

cmp -l /dev/fd0 image.fat | wc -l

to know how many bytes differ. Run cmp -s /dev/fd0 image.fat if you don't want any output, just a return status of 0 if the two files are identical and 1 if they're different. This compares the images byte by byte. If the floppy and the image contain files and you only want to compare the files and not the metadata (file dates, etc.) nor the empty space, mount the floppy and the image and compare the directory trees. |
_codereview.78729 | I created my first java game in LibGDX and it's working fine but I'm 100% sure a lot of my code can be written shorter than now. Does anyone have tips how I can make this code better?Like the Gdx.input.getX(), if I run this on Android, it's fine, but when I run this on PC, you just have to hover over your screen to change the players position without clicking. On Android you have to tap first.Does anyone have any tips how to do this better and make the game playable on pc? So you have to click to move the bar.Here is a screenshot of my game right now.Here are my 4 classes with the code I wrote:MainScreen.javapackage ***.***.***;import com.badlogic.gdx.Gdx;import com.badlogic.gdx.Screen;import com.badlogic.gdx.graphics.GL20;import com.badlogic.gdx.graphics.Texture;import com.badlogic.gdx.graphics.g2d.BitmapFont;import com.badlogic.gdx.graphics.g2d.SpriteBatch;import ***.***.***.Player;import ***.***.***.Enemy;import ***.***.***.Ball;public class MainScreen implements Screen { //GAMESTATE = 0 ---- MENU //GAMESTATE = 1 ---- INIT/RESET //GAMESTATE = 2 ---- START //GAMESTATE = 3 ---- UPDATE //GAMESTATE = 4 ---- GAMEOVER //GAMESTATE = 5 ---- PAUSE //GAMESTATE = 6 ---- EXIT //GAMESTATE = 100 ---- WIN int GAMESTATE = 0; int timer = 30; int countdown = 90; int score=0,lives=5; private Player player; private Enemy enemy; private Ball ball; private BitmapFont font; private SpriteBatch batch; Texture BackGround; Texture StartScreen; public static float difficulty; float width = Gdx.graphics.getWidth(); float height = Gdx.graphics.getHeight(); float screenwidth = width/270; float screenheight = height/480; public MainScreen() { Gdx.app.log(GameScreen, Attached); player = new Player(); enemy = new Enemy(); ball = new Ball(); batch = new SpriteBatch(); //Load in Fonts font = new BitmapFont(Gdx.files.internal(data/whitetext.fnt), Gdx.files.internal(data/whitetext.png), false); BackGround = new Texture(data/background.jpg); StartScreen = new 
Texture(data/start.jpg); } @Override public void render(float delta) { //Draw the background batch.begin(); batch.draw(BackGround,0,0,270.0f*screenwidth,480.0f*screenheight); batch.end(); //MENU if(GAMESTATE==0){ if(timer!=0)timer--; if (Gdx.input.isTouched()&&timer==0) { GAMESTATE=1; } batch.begin(); font.setColor(1.0f, 1.0f, 1.0f, 1.0f); font.setScale(1.0f*screenwidth,1.0f*screenheight); font.draw(batch, PONG, 60*screenwidth, 400*screenheight); font.setScale(0.5f*screenwidth,0.5f*screenheight); font.draw(batch, Tap to start, 20*screenwidth, 240*screenheight); batch.end(); } //INIT if(GAMESTATE==1){ player.init(); enemy.init(); ball.init(); GAMESTATE=2; timer=30; difficulty=1.0f; countdown=90; } //START if(GAMESTATE==2){ if(countdown==0){ GAMESTATE=3; } countdown--; batch.begin(); font.setColor(1.0f, 1.0f, 1.0f, 1.0f); font.setScale(1.0f*screenwidth,1.0f*screenheight); if(countdown>=60&&countdown<=90)font.draw(batch, 3, 120*screenwidth, 300*screenheight); if(countdown>=30&&countdown<=60)font.draw(batch, 2, 120*screenwidth, 300*screenheight); if(countdown>=0&&countdown<=30)font.draw(batch, 1, 120*screenwidth, 300*screenheight); batch.end(); } //UPDATE if(GAMESTATE==3){ player.update(); enemy.update(); ball.update(); //DRAW SCORE/LIVES batch.begin(); font.setColor(1.0f, 1.0f, 1.0f, 1.0f); font.setScale(0.2f*screenwidth,0.2f*screenheight); font.draw(batch, Score: +score+/5, 190*screenwidth, 475*screenheight); font.draw(batch, Lives: +lives, 5*screenwidth, 475*screenheight); batch.end(); } if(lives==0)GAMESTATE=4; if(score==5)GAMESTATE=100; if(ball.getY()<0*screenheight){ lives--; GAMESTATE=1; } if(ball.getY()>480*screenwidth-16*screenwidth){ score++; GAMESTATE=1; } //LOSE SCREEN if(GAMESTATE==4){ if(timer!=0)timer--; if (Gdx.input.isTouched()&&timer==0) { GAMESTATE=1; } batch.begin(); font.setColor(1.0f, 1.0f, 1.0f, 1.0f); font.setScale(0.5f*screenwidth,0.5f*screenheight); font.draw(batch, Game Over, 20*screenwidth, 300*screenheight); 
font.setScale(0.3f*screenwidth,0.3f*screenheight); font.draw(batch, Tap to Start, 20*screenwidth, 260*screenheight); batch.end(); score=0;lives=5; } //WIN SCREEN if(GAMESTATE==100){ if(timer!=0)timer--; if (Gdx.input.isTouched()&&timer==0) { GAMESTATE=1; } batch.begin(); font.setColor(1.0f, 1.0f, 1.0f, 1.0f); font.setScale(0.5f*screenwidth,0.5f*screenheight); font.draw(batch, You Won!, 20*screenwidth, 300*screenheight); font.setScale(0.3f*screenwidth,0.3f*screenheight); font.draw(batch, Tap to Start, 20*screenwidth, 260*screenheight); batch.end(); score=0;lives=5; } //Player Ball Colission if(ball.getX()+8*screenwidth>player.getX()-40*screenwidth&&ball.getX()+8*screenwidth<player.getX()+40*screenwidth){ if(ball.getY()>player.getY()&&ball.getY()<player.getY()+16*screenheight){ float zy; zy = 3*screenheight; ball.setZy(zy); difficulty+=0.1; } } //Enemy Ball Colission if(ball.getX()+8*screenwidth>enemy.getX()-40*screenwidth&&ball.getX()+8*screenwidth<enemy.getX()+40*screenwidth){ if(ball.getY()+16*screenheight>enemy.getY()&&ball.getY()+16*screenheight<enemy.getY()+16*screenheight){ float zy; zy = -3*screenheight; ball.setZy(zy); difficulty+=0.5; } } //Enemy AI if(enemy.getX()<ball.getX()){ float ex; ex = 3.0f*screenwidth*difficulty; enemy.setZX(ex); } if(enemy.getX()>ball.getX()){ float ex; ex = -3.0f*screenwidth*difficulty; enemy.setZX(ex); } } public float getDif() { return difficulty; } public void setDif(float difficulty) { this.difficulty = difficulty; } @Override public void resize(int width, int height) { } @Override public void show() { } @Override public void hide() { } @Override public void pause() { } @Override public void resume() { } @Override public void dispose() { }}Ball.javapackage ***.***.***;import java.util.Random;import com.badlogic.gdx.Gdx;import com.badlogic.gdx.graphics.Texture;import com.badlogic.gdx.graphics.g2d.SpriteBatch;import ***.***.***.MainScreen;import ***.***.***.Player;public class Ball { SpriteBatch batch; Texture ballsprite; 
Random random = new Random(); float width = Gdx.graphics.getWidth(); float height = Gdx.graphics.getHeight(); float screenwidth = width/270; float screenheight = height/480; float x,y,zx,zy,randomx,randomw; float difficulty; private MainScreen main; public Ball(){ } public void init(){ batch = new SpriteBatch(); ballsprite = new Texture(data/ball.png); x = 127*screenwidth; y = 232*screenheight; randomw = random.nextInt(2); if(randomw==1)randomw=-1;else randomw=1; randomx = random.nextInt(20) + 20; zx = randomx/10*screenwidth*randomw; zy = 3*screenwidth; main = new MainScreen(); } public void update(){ difficulty = main.getDif(); //Colission check if(x<0*screenheight)zx=3*screenwidth; if(x>270*screenwidth-16*screenwidth)zx=-3*screenwidth; x += zx*difficulty; y += zy*difficulty; //Draw ball batch.begin(); batch.draw(ballsprite,x,y,16*screenheight,16*screenwidth); batch.end(); } public float getX() { return x; } public float getY() { return y; } public void setZy(float zy) { this.zy = zy; }}Player.javapackage ***.***.***;import com.badlogic.gdx.Gdx;import com.badlogic.gdx.graphics.Texture;import com.badlogic.gdx.graphics.g2d.Sprite;import com.badlogic.gdx.graphics.g2d.SpriteBatch;public class Player { float width = Gdx.graphics.getWidth(); float height = Gdx.graphics.getHeight(); float screenwidth = width/270; float screenheight = height/480; SpriteBatch batch; Texture playersprite; public float x,y; public float getX() { return x; } public float getY() { return y; } public Player(){ } public void init(){ x=135*screenwidth; y=40*screenheight; batch = new SpriteBatch(); playersprite = new Texture(data/bar.png); } public void update(){ x = Gdx.input.getX(); batch.begin(); batch.draw(playersprite,x-40*screenwidth,y,80*screenwidth,16*screenheight); batch.end(); }}Enemy.javapackage ***.***.***;import com.badlogic.gdx.Gdx;import com.badlogic.gdx.graphics.Texture;import com.badlogic.gdx.graphics.g2d.SpriteBatch;public class Enemy { SpriteBatch batch; Texture enemysprite; 
float width = Gdx.graphics.getWidth(); float height = Gdx.graphics.getHeight(); float screenwidth = width/270; float screenheight = height/480; public float x,y; public float zx; public Enemy(){ } public void init(){ x=135*screenwidth; y=444*screenheight; batch = new SpriteBatch(); enemysprite = new Texture(data/bar.png); } public void update(){ batch.begin(); batch.draw(enemysprite,x-40*screenwidth,y,80*screenwidth,16*screenheight); batch.end(); x+=zx; if(x>229*screenwidth)x=229*screenwidth; if(x<41*screenwidth)x=41*screenwidth; } public float getX() { return x; } public float getY(){ return y; } public void setZX(float zx) { this.zx = zx; }}PongMain.javapackage ***.***.***;import com.badlogic.gdx.Game;import com.badlogic.gdx.Gdx;import ***.***.***.MainScreen;public class PongMain extends Game { @Override public void create() { setScreen(new MainScreen()); }} | Simple LibGDX Pong game | java;game;android;ios;libgdx | Scope You should reduce the scope of variables to the minimum needed. So, if possible make them private. Something like public float x,y;public float getX() { return x;}public float getY() { return y;} should be avoided because it removes encapsulation. Sometimes it is ok to expose a variable to the public, but you should usually hide them behind methods. Mixing both is a code smell. Declaring and initializing of multiple variables should be avoided because readability matters. So instead of int score=0,lives=5; it should be private int score = 0;private int lives = 5; By extracting the rendering of the different states to separate methods, your render() method will be easier to maintain and read. Your variables would be happy to get some space to breathe. So instead of font.setScale(0.5f*screenwidth,0.5f*screenheight);it should look like font.setScale(0.5f * screenwidth, 0.5f * screenheight); Based on the IDE you are using, there is a keyboard shortcut to format the code by using proper indention and spacing. This will add readability to your code. 
Using braces {} for single if statements will make your code less error-prone and more structured, so content which belongs together comes into focus and can be grasped at first glance. If you decide not to use braces, you should stick to the chosen style; right now you are mixing the usage. Shortening of method names or variable names should be avoided for readability. If you have a method SetDif(), it could mean setting the Difference or the Difficulty. Some construct like

//Player Ball Colission
if(ball.getX()+8*screenwidth>player.getX()-40*screenwidth&&ball.getX()+8*screenwidth<player.getX()+40*screenwidth){
    if(ball.getY()>player.getY()&&ball.getY()<player.getY()+16*screenheight){
        float zy;
        zy = 3*screenheight;
        ball.setZy(zy);
        difficulty+=0.1;
    }
}

should be extracted to a separate method like hasBallCollision(Player player) which should return true if both if conditions evaluate to true. This can then be called for the enemy too, which will make your code DRY (don't repeat yourself) by removing code duplication. This would lead to something like

if (hasBallCollision(player)){
    float zy;
    zy = 3 * screenheight;
    ball.setZy(zy);
    difficulty += 0.1;
}

which could be reduced to

if (hasBallCollision(player)){
    ball.setZy(3 * screenheight);
    difficulty += 0.1;
}

You have a lot of magic numbers in your code. You should try to extract them into well-named constants. For the case that the game is won or lost, you should return early from the render() method.

Update: Unfortunately you can't use the hasBallCollision() method with the enemy and the player.
If we look closer at the first condition we see that you are adding 8 * screenwidth to the left side and substracting 40 * screenwidth from the right sideif(ball.getX() + 8 * screenwidth > player.getX() - 40 * screenwidth && ball.getX() + 8 * screenwidth < player.getX() + 40 * screenwidth) this can be simplified to if(ball.getX() > player.getX() - 32 * screenwidth && ball.getX() < player.getX() + 32 * screenwidth) or more readable int offset = 32 * screenwidth; if(ball.getX() > player.getX() - offset && ball.getX() < player.getX() + offset) For the code you had used for the enemy ball collision the second condition of the second if statement had been && ball.getY() + 16 * screenheight < enemy.getY() + 16 * screenheight) this should be simplified to && ball.getY() < enemy.getY() |
_webmaster.106020 | I work at a company with a very prolific tracking tag, which uses a tracking pixel to pass information. Currently, our tag uses a bare <img> inside of a <noscript>, and the image does not have an alt attribute.Our SEO wants to add an alt with some keywords to the tracking pixel, but I noted that the Facebook and Google pixels do not do this. I'm unable to justify my resistance to making this change besides a strong gut-feeling and comparison to competitors.Is there any technical or SEO reason why this would be a bad idea? Have Facebook and Google simply overlooked this, or perhaps they're aware it offers no benefit - but wouldn't do any harm either.I did note a few older threads discussing that all images should have an alt and that in the case of tracking pixels, an empty alt would be most appropriate. However, neither FB nor Google have an alt at all.This question is not about how to track users on our site. My company is providing the tag that others use for their tracking. We're the second most prolific web tag behind Google Analytics. If I add an alt attribute, it will be included across all websites who install our tag moving forward. This is not about adding an alt attribute to a single tag on our own website. | Tradeoffs of including alt attribute in tracking pixel | seo;tracking;alt attribute | null |
_cs.35479 | Why are Hamming codes the best 1-error-correcting codes? I need references. I know that Hamming codes are the best 1-error-correcting codes, but I want to know why they are best. | Hamming and BCH codes | coding theory;error correcting codes;hamming code | The Hamming codes are optimal in the sense that among all codes with the same block length and minimum distance, they contain the most codewords. We know this because Hamming codes are perfect codes: their number of codewords matches the Hamming bound, which is an upper bound on the number of codewords in a code with given block length and minimum distance. |
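Spelling out the bound the answer refers to: for a binary code of length $n$ that corrects $t = 1$ error (minimum distance $3$), the Hamming (sphere-packing) bound limits the number of codewords $M$ to

```latex
M \;\le\; \frac{2^{n}}{\sum_{i=0}^{1}\binom{n}{i}} \;=\; \frac{2^{n}}{n+1},
```

and for $n = 2^{r}-1$ the Hamming code achieves $M = 2^{n-r} = 2^{n}/(n+1)$, meeting the bound with equality; for instance, the $[7,4,3]$ code has $16 = 2^{7}/8$ codewords. Codes attaining this bound are exactly the perfect codes mentioned in the answer.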
_codereview.123781 | Within my app I have a UIButton (an avatar button which shows a profileVC) within a UICollectionView; this button shows up in about 4-5 other views. I'm currently adding the target within cellForItemAtIndexPath in each view controller and pushing the view from a public function pushNewViews. I was wondering if there was a better way to do this? (less repetition) class ShotsViewController: UIViewController{ //CollectionView override func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell { let cell = collectionView.dequeueReusableCellWithReuseIdentifier(cell2, forIndexPath: indexPath) as! mainCell cell.userImg.addTarget(self, action: #selector(ShotsViewController.showProfileView(_:)), forControlEvents: .TouchUpInside) }//Target Func func showProfileView(sender: AnimatableButton) { let profileUser = shots[sender.tag].user pushNewViews.showProfileViewController(profileUser, navigation: navigationController!, storyboard: storyboard!)} }class pushNewViews{ class func showProfileViewController(user: User, navigation: UINavigationController, storyboard: UIStoryboard){ let vc = storyboard.instantiateViewControllerWithIdentifier(profileView) as! ProfileViewController vc.user = user navigation.pushViewController(vc, animated: true) }} | UICollectionView button target | ios;swift | You have a couple of options here depending on how explicitly you want to couple this view to the push action. One option which I like is to rely on the existing responder chain. This gives you a lot of flexibility to allow views to emit events from anywhere in the view hierarchy and handle those events in a parent view, view controller, or the app delegate. 
I like that it avoids the overhead of connecting cells to view controllers (either through target-action patterns as you have now or through a delegate protocol), but with the tradeoff that it becomes less obvious which class will respond to an event, and it requires more integration testing.

// Traverse the responder chain looking for the first responder who conforms to a provided generic type
extension UIResponder {
    func handoffEvent<T>(@noescape eventHandler: (T) -> Void) {
        var nextResponder: UIResponder?
        nextResponder = self
        while((nextResponder) != nil) {
            if let responder = nextResponder as? T {
                eventHandler(responder)
                return
            }
            nextResponder = nextResponder?.nextResponder()
        }
        // FIXME: you probably want some warning if you fail to find a responder of the expected type
    }
}

// Define a protocol for our responder
protocol ControllerRouter {
    func pushUserProfileViewController(user: User)
}

// Views can then send events up the responder chain to be handled by some responder who adopts
// our protocol, but the view doesn't need to know anything about who that responder is
class SomeCustomView: UIView {
    @IBAction func didPressProfileButton() {
        self.handoffEvent { (handler: ControllerRouter) in
            handler.pushUserProfileViewController(self.user)
        }
    }
}

// Some class in the responder chain needs to implement our protocol to handle the event
class ViewControllerOrAppDelegateOrWhatever: ControllerRouter {
    func pushUserProfileViewController(user: User) {
        guard let navigationController = self.navigationController,
              let profileViewController = self.storyboard.instantiateViewControllerWithIdentifier("profileView") as? ProfileViewController else {
            // FIXME: do not ignore unexpected nils
            return
        }
        profileViewController.user = user
        navigationController.pushViewController(profileViewController, animated: true)
    }
} |
_unix.94443 | I'd like my NFS server to always use the same user ID to own the files, whatever user ID is used on the client.Is this possible and how? | How to configure a fixed user ID on an NFS server? | osx;nfs | null |
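One common way to get this behaviour is UID mapping in the server's exports file; a sketch, where the paths, network, and numeric IDs are placeholders:

```
# BSD-style exports(5), as used by the macOS NFS server:
# map every client user (including root) to local uid 501, gid 20
/Volumes/Share -mapall=501:20 -network 192.168.1.0 -mask 255.255.255.0

# Linux equivalent in /etc/exports: squash all client uids to one anonymous id
/srv/share 192.168.1.0/24(rw,all_squash,anonuid=1000,anongid=1000)
```

After editing the file, the export list has to be reloaded (e.g. `nfsd update` on macOS, `exportfs -ra` on Linux).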
_unix.15023 | I want to replace all files in a target path with the same name as original.file AND the same hash as original.file with new.file. What's the command to do this?Say I have updated the contents of a file, and now I want all other copies of that file in a certain path to be updated as well.In most cases the following code would work:find /target_path/ -iname original.file -exec cp new.file '{}' \;However if original.file is readme.txt for example, many unrelated files would be overwritten. | Replace all files with identical hash | linux;bash;hashsum | This will require a test to see if the checksums match before deciding to run the cp, so you will have to run a subshell as the -exec argument of find. This should do the job:find /target_path/ -iname original.file -exec bash -c '[[ $(md5sum < original.file) = $(md5sum < "$1") ]] && cp new.file "$1"' bash {} \;Reading the files via stdin keeps the file names out of md5sum's output (so only the hashes are compared), and passing {} as a positional parameter avoids injecting file names into the bash -c string.
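The same name-plus-hash check can also be written as a small script, which sidesteps shell quoting entirely — a sketch using Python's standard library (sha256 here instead of md5; the function name is made up):

```python
import hashlib
import shutil
from pathlib import Path

def replace_matching_copies(root, original, replacement):
    """Overwrite every file under `root` whose name (case-insensitive, like
    find -iname) and content hash both match `original` with `replacement`."""
    original = Path(original)
    want_name = original.name.lower()
    want_hash = hashlib.sha256(original.read_bytes()).digest()
    for path in Path(root).rglob('*'):
        if path.is_file() and path.name.lower() == want_name:
            if hashlib.sha256(path.read_bytes()).digest() == want_hash:
                shutil.copyfile(replacement, path)
```

A file named readme.txt with different contents is left alone, which is exactly the safety the question asks for.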
_unix.168807 | I am looking for a way to mount a ZIP archive as a filesystem so that I can transparently access files within the archive. I only need read access -- the ZIP will not be modified. RAM consumption is important since this is for a (resource constrained) embedded system. What are the available options? | Mount zip file as a read-only filesystem | linux;filesystems;embedded | fuse-zip is an option and claims to be faster than the competition.# fuse-zip -r archivetest.zip /mntarchivemount is another:# archivemount -o readonly archivetest.zip /mntBoth will probably need to open the whole archive, therefore won't be particularly quick. Have you considered extracting the ZIP to a HDD or USB-stick beforehand and simply mounting that read-only? |
_cstheory.8973 | Let ${\bf A}$ be a full-rank $n\times n$ matrix with elements over $\mathbb{GF}(2)$. What is the worst-case complexity of calculating $n$ linearly independent (over $\mathbb{GF}(2)$) vectors, such that each one of them obeys $${\bf A}{\bf x} = {\bf x}$$ again over $\mathbb{GF}(2)$? | Complexity to calculate a full set of eigenvectors over a finite field | cc.complexity theory;linear algebra;matrices | There will generally not exist a set of $n$ linearly independent vectors $x$ such that $Ax = x$; this can only happen for $A$ being the identity matrix. On the other extreme, there may be no such vectors at all. For example, the matrix$$ A = \left(\begin{array}{cc} 0 & 1\\ 1 & 1 \end{array}\right) $$has full rank, but there are no nonzero vectors $x$ such that $Ax = x$. This is consistent with the fact that the characteristic polynomial of $A$ is $\lambda^2 + \lambda + 1$, which is irreducible over GF(2).If you want to find the maximum number $k$ of linearly independent vectors $x$ such that $Ax=x$, compute the nullspace of $A - I$. This can be done using Gaussian elimination in time $O(n^3)$. Several other algorithms for this task also exist, and are described in textbooks on computational linear algebra.
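The answer's recipe — Gaussian elimination on $A - I$, which over GF(2) is the same as $A + I$ (addition is XOR) — can be sketched in a few lines of Python. A toy implementation, not the bit-packed $O(n^3)$ variants a real library would use:

```python
def gf2_nullspace(M):
    """Return a basis (list of 0/1 vectors) of the kernel of M over GF(2)."""
    m = [row[:] for row in M]
    n_rows, n_cols = len(m), len(m[0])
    pivot_cols = []                      # pivot_cols[r] = column of row r's pivot
    r = 0
    for c in range(n_cols):
        pr = next((i for i in range(r, n_rows) if m[i][c]), None)
        if pr is None:
            continue                     # free column
        m[r], m[pr] = m[pr], m[r]
        for i in range(n_rows):          # clear column c in every other row
            if i != r and m[i][c]:
                m[i] = [a ^ b for a, b in zip(m[i], m[r])]
        pivot_cols.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n_cols) if c not in pivot_cols):
        v = [0] * n_cols
        v[free] = 1
        for row, pc in enumerate(pivot_cols):
            v[pc] = m[row][free]         # back-substitute the free choice
        basis.append(v)
    return basis

def fixed_vectors(A):
    """Basis of {x : A x = x} over GF(2): the kernel of A + I."""
    n = len(A)
    a_plus_i = [[A[i][j] ^ (1 if i == j else 0) for j in range(n)] for i in range(n)]
    return gf2_nullspace(a_plus_i)
```

For the answer's example matrix [[0,1],[1,1]] this returns an empty basis, matching the observation that it has no nonzero fixed vectors.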
_vi.12983 | I am trying to do a search and replace on a large database file where some of the urls are escaped and some not, e.g.:http://www.example.comhttp:\\/\\/www.example.comI thought I could use a simple character class in a regex like this:%s~http\(:[/\\]+www.example.com\)~https\1~gBut won't work no matter how I escape it. Can I use character classes in vim search and replace? If not, how can I match the url so that any amount of backslashes and forward slashes are matched? | replacing urls and escaped urls | regular expression;substitute | In your search pattern of the :s part you are looking for www. which is not part of your source. So it won't match. Better would be to make that part optionally:%s~http\(:[/\\]\+\(www.\)\?example.com\)~https\1~gor even better: %s~http:[/\\]\+\(\(www.\)\?example.com\)~https://\1~gwhich will also normalize the slashes following the protocol part of your URI. |
_codereview.73622 | http://en.wikipedia.org/wiki/Monotone_cubic_interpolationWe have implemented it using the formula from Wikipedia :public class MonotoneCubicSplineInterpolation { public static double[] Calc(double[] xs, double[] ys, double[] x_interp) { var length = xs.Length; // Deal with length issues if (length != ys.Length) { IPDevLoggerWrapper.Error(Need an equal count of xs and ys); throw new Exception(Need an equal count of xs and ys); } if (length == 0) { return null; } if (length == 1) { return new double[] {ys[0]}; } // Get consecutive differences and slopes var delta = new double[length - 1]; var m = new double[length]; for (int i = 0; i < length - 1; i++) { delta[i] = (ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]); if (i > 0) { m[i] = (delta[i - 1] + delta[i]) / 2; } } var toFix = new List<int>(); for (int i = 1; i < length - 1; i++) { if ((delta[i] > 0 && delta[i - 1] < 0) || (delta[i] < 0 && delta[i - 1] > 0)) { toFix.Add(i); } } foreach (var val in toFix) { m[val] = 0; } m[0] = delta[0]; m[length - 1] = delta[length - 2]; toFix.Clear(); for (int i = 0; i < length - 1; i++) { if (delta[i] == 0) { toFix.Add(i); } } foreach (var val in toFix) { m[val] = 0; m[val + 1] = 0; } var alpha = new double[length - 1]; var beta = new double[length - 1]; var dist = new double[length - 1]; var tau = new double[length - 1]; for (int i = 0; i < length - 1; i++) { alpha[i] = m[i] / delta[i]; beta[i] = m[i + 1] / delta[i]; dist[i] = Math.Pow(alpha[i], 2) + Math.Pow(beta[i], 2); tau[i] = 3/Math.Sqrt(dist[i]); } toFix.Clear(); for (int i = 0; i < length - 1; i++) { if (dist[i] > 9) { toFix.Add(i); } } foreach (var val in toFix) { m[val] = tau[val] * alpha[val] * delta[val]; m[val + 1] = tau[val] * beta[val] * delta[val]; } var y_interp = new double[x_interp.Length]; int ind = 0; foreach (var x in x_interp) { int i; for (i = xs.Length - 2; i >= 0; --i) { if (xs[i] <= x) { break; } } var h = xs[i + 1] - xs[i]; var t = (x - xs[i])/h; var t2 = Math.Pow(t, 2); var t3 = 
Math.Pow(t, 3); var h00 = 2*t3 - 3*t2 + 1; var h10 = t3 - 2*t2 + t; var h01 = -2*t3 + 3*t2; var h11 = t3 - t2; y_interp[ind++] = h00*ys[i] + h10*h*m[i] + h01*ys[i + 1] + h11*h*m[i + 1]; continue; } return y_interp; } }Please comment about style, correctness and complexity. | Monotone cubic interpolation | c#;algorithm | Don't throw Exceptionthrow new Exception(Need an equal count of xs and ys);It forces client code to catch any subclass of Exception. In this case I would throw an ArgumentException.The continue at the end of the last loop is redundant.Here you're using double.Equalsif (delta[i] == 0)From MSDNThe Equals method should be used with caution, because two apparently equivalent values can be unequal due to the differing precision of the two values.That link covers two techniques for dealing with this.As far as I can tell, toFix can be removed. For example,var toFix = new List<int>();for (int i = 1; i < length - 1; i++){ if ((delta[i] > 0 && delta[i - 1] < 0) || (delta[i] < 0 && delta[i - 1] > 0)) { toFix.Add(i); }}foreach (var val in toFix){ m[val] = 0;}Can be rewritten asfor (int i = 1; i < length - 1; i++){ if ((delta[i] > 0 && delta[i - 1] < 0) || (delta[i] < 0 && delta[i - 1] > 0)) { m[i] = 0; }} |
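The double.Equals caution in the review is easy to demonstrate; the same accumulated-rounding issue exists in any IEEE-754 language (shown here in Python):

```python
import math

# 0.1 and 0.2 have no exact binary representation, so their sum
# is not bit-for-bit equal to the literal 0.3
exact = (0.1 + 0.2 == 0.3)             # False
close = math.isclose(0.1 + 0.2, 0.3)   # tolerance-based comparison instead

# the same applies to the delta[i] == 0 test in the reviewed code:
# compare against a small epsilon rather than exact zero
def is_zero(x, eps=1e-12):
    return abs(x) < eps
```

The appropriate epsilon depends on the scale of the data, which is why the linked MSDN advice asks for caution rather than giving one universal constant.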
_codereview.55035 | Is it a good practice to initialize the max to -100000 and min to 100000? Is there any other way to initialize both min and max to 0? import javax.swing.*; import java.util.*;public class arrayTajba{ public static void main(String[] args) throws Exception { String userStringInput = ; String display = ; int max = -100000; int min = 100000; int total = 0; double average; int i = 0; int [] num = new int [5]; for (i = 0; i < num.length; i++) { userStringInput = JOptionPane.showInputDialog(null,Please enter 5 numbers + (i+1),Input Table,JOptionPane.QUESTION_MESSAGE); num[i] = Integer.parseInt(userStringInput); display+=num[i] + , ; total+=num[i]; if (num[i] < min) { min = num[i]; } if (num[i] > max) { max = num[i]; } } average = total / num.length; JOptionPane.showMessageDialog(null,The numbers you have entered are:\n + display + \nSum of all numbers is: + total + \nAverage is: + average + \nMinimum number is: + min + \nMaximum number is: + max,Output Table ,JOptionPane.INFORMATION_MESSAGE); } }i | Min and Max initialization | java;array | null |
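Rather than magic sentinels like ±100000 (which silently fail for inputs outside that range), the usual fixes are the type's extremes — Integer.MAX_VALUE / Integer.MIN_VALUE in Java — or seeding both min and max from the first element, which also makes "initialize both to 0" unnecessary. The seeding pattern, illustrated in Python:

```python
def min_max(values):
    """Return (min, max) of a non-empty iterable in one pass,
    seeding both from the first element instead of sentinel constants."""
    it = iter(values)
    try:
        lo = hi = next(it)
    except StopIteration:
        raise ValueError('min_max() arg is an empty iterable')
    for v in it:
        if v < lo:
            lo = v
        elif v > hi:
            hi = v
    return lo, hi
```

In the Java program above the analogue is simply min = max = num[0] before the loop (reading the first input outside the loop, or doing the comparisons after all five reads).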
_webapps.19028 | If ytimg.com is blocked by NoScript, I can only view the title, some comments, etc. The show more button usually does not work.But when I (temporarily) allow ytimg.com it allows show more, but also starts downloading the video (Aah! It is EATING my bandwidth!!! Stop, disconnect!).How to view the show more info (where useful information or links are provided) without consuming too much traffic? | How to view YouTube's show more without downloading the video? | youtube | ytimg.com is where YouTube stores all its static content. (Javascript, stylesheets, etc.) That includes both the script which sets up the player and the script which powers show more. (They do that to save bandwidth and make things snappier by preventing your browser from sending your YouTube cookies when retrieving files that don't care anyway.)If you want to have scripts like show more without starting the video downloading and you're using NoScript, the simplest solution is to go into NoScript Options > Embeddings and check Apply these restrictions to whitelisted sites too.That'll get you FlashBlock-like behaviour (which is designed to be a secure protection against Flash exploits, unlike FlashBlock which is for annoyance-reduction) even on sites which you've marked as trusted.The only downside is that NoScript for non-mobile Firefox doesn't yet have the extended settings support to allow you to set or unset Apply these restrictions to whitelisted sites too on a per-site basis, so you'll get FlashBlock-like behaviour everywhere and you can't whitelist any sites.The alternative would be to allow ytimg.com and then install Greasemonkey and a script like YousableTubeFix which lets you set Prevent both autoplay and autobuffering. |
_unix.97046 | Why doesn't this work?cat /dev/video1 | mplayer -If I could get that to work, then I could play & record video at the same time using 'tee' to feed mplayer and mencoder.I want to play live video (from /dev/video1:input=1:norm=NTSC) and record it at the same time without introducing lag.mplayer plays the video fine (no noticeable lag).mencoder records it fine.But I can't figure out how to tee the output from /dev/video so that I can feed it to both at the same time. (I know ways to encode it, then immediately play the encoded video, but that introduces too much lag).If mplayer and mencoder would read from stdin, then I could use 'tee' to solve this. How can I do it?[BTW, I'd be happy with ANY solution that plays & records at the same time, as long as it doesn't add lag - I'm not wedded to mplayer. But encoding first and then playing adds lag.] | How to get mplayer to play from stdin? | linux;video;stdin;mplayer;tee | null |
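One way to fan the device stream out to two consumers is to do the tee in a small script. A sketch: mplayer does accept - for stdin, but whether the raw /dev/video1 stream is playable that way depends on the driver, and the mencoder command line below is a placeholder:

```python
import subprocess

def tee_to_commands(src, commands, chunk=65536):
    """Copy the byte stream `src` into the stdin of every command in
    `commands` (argv lists) — a minimal stand-in for
    `cat /dev/video1 | tee >(recorder) | player`.
    Note: writes are sequential, so one stalled consumer blocks the others."""
    procs = [subprocess.Popen(cmd, stdin=subprocess.PIPE) for cmd in commands]
    try:
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            for p in procs:
                p.stdin.write(buf)
    finally:
        for p in procs:
            p.stdin.close()
        for p in procs:
            p.wait()

# hypothetical usage (player/encoder flags are placeholders):
# with open('/dev/video1', 'rb') as video:
#     tee_to_commands(video, [['mplayer', '-'],
#                             ['mencoder', '-', '-o', 'capture.avi']])
```

In a shell that supports process substitution the same idea is `cat /dev/video1 | tee >(recorder-command) | player-command`.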
_cstheory.9010 | The closest pair of points problem deals with the task to find a pair of points with the global minimum distance. There is a problem, when all points share the same x-coordinate, or at least a large number of points.I just don't know why, I heard the running time becomes $n^2$What about it? What is the problem, since the proof for the algorithm shows, that at max. 8 points can reside in the $2 \delta \times \delta$ area. | What is the problem in closest pair problem if all points share the same x-coordinate | ds.algorithms;cg.comp geom;time complexity | Points with the same x-coordinate do not cause any substantial problem. However, if you implement the divide-and-conquer algorithm carelessly, they may cause a problem. One way to deal with them is by using symbolic perturbation. |
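The $n^2$ worry comes from implementations that split on the x-value alone: with every point at the same x, nothing is strictly left of the median value and the recursion makes no progress. Splitting by rank under lexicographic (x, then y) order — one concrete form of the symbolic perturbation the answer mentions — keeps the two halves balanced. A small illustration:

```python
# degenerate input: every point shares x = 5
points = [(5, y) for y in range(8)]

# careless split by x-value: nothing is strictly left of the median x
median_x = 5
left_by_value = [p for p in points if p[0] < median_x]   # empty -> no progress

# split by rank in lexicographic (x, then y) order instead:
points.sort()                 # Python tuples compare lexicographically
mid = len(points) // 2
left, right = points[:mid], points[mid:]                 # always balanced
```

The 8-points-per-$2\delta\times\delta$-rectangle packing argument in the combine step is unaffected; only the divide step needs the tie-breaking.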
_webmaster.10049 | I'm trying to guess how many loyal users I have by counting the number of people that have visited the site 10 times. How can I answer this question with Google Analytics?Visitor Loyalty is a tempting answer, but the label for loyalty is Visits that were the visitor's nth visit, and I want something more like Visitors that visited n times.For example, we have 40 visits in the 51-100 visit range, but I think that could be a single user who visited 91 times. Or two users who visited 71 times each. The whole chart makes a good logic puzzle (I wonder if there's a unique solution) but doesn't easily answer the question I have. | Google analytics: how many visitors have visited n times? | google analytics | You can build a custom report using the Count of Visits dimension and Unique Visitor metric to get the answer you want. However, remember that each time someone visits they add an additional count to the Count of Visits dimension without being removed from the previous ones. So each group is a subset of the one above it. For example, if you have two visitors, one who visits 2 times and another who visits 3 times, the report will be as follows:

-------------------------------------
| Count of Visits | Unique Visitors |
-------------------------------------
|        1        |        2        |
|        2        |        2        |
|        3        |        1        |
-------------------------------------
_unix.98263 | I would like to try Linux on my Acer Revo 3600 (ION) in the hopes that it will perform better than Windows 7, which is extremely sluggish.All I need it to do is:Run Plex Media ServerRun uTorrentRun LogMeIn (or VNC if necessary)Be able to connect to my network over WiFi (with dongle)Be able to mount SMB shares on the networkSupport HDMI Video & Audio outputSupport the Revo's hardware accelerated HD playbackI want an easy installation -- I am no Linux expert -- but something that is lean and will run smoothly. | Best flavor of Linux for an Acer Revo 3600 as media server? | hdmi | null |
_unix.349798 | Nginx's error log shows some OpenSSL Handshake errors and while searching for the cause I found confusing outputs of what OpenSSL version is used.Details:Debian Jessie 8.7 64 Bit# apt-cache policy opensslopenssl: Installed: 1.0.1t-1+deb8u6 Candidate: 1.0.1t-1+deb8u6 Version table: 1.0.2k-1~bpo8+1 0 100 http://ftp.debian.org/debian/ jessie-backports/main amd64 Packages *** 1.0.1t-1+deb8u6 0 500 http://security.debian.org/ jessie/updates/main amd64 Packages 100 /var/lib/dpkg/status 1.0.1t-1+deb8u5 0 500 http://mirror.hetzner.de/debian/packages/ jessie/main amd64 Packages 500 http://http.debian.net/debian/ jessie/main amd64 Packages# apt-cache policy nginxnginx: Installed: 1.9.10-1~bpo8+4 Candidate: 1.10.3-1~bpo8+1 Version table: 1.10.3-1~bpo8+1 0 100 http://ftp.debian.org/debian/ jessie-backports/main amd64 Packages *** 1.9.10-1~bpo8+4 0 100 /var/lib/dpkg/status 1.6.2-5+deb8u4 0 500 http://mirror.hetzner.de/debian/packages/ jessie/main amd64 Packages 500 http://http.debian.net/debian/ jessie/main amd64 Packages 500 http://security.debian.org/ jessie/updates/main amd64 Packages# nginx -Vnginx version: nginx/1.9.10built with OpenSSL 1.0.2j 26 Sep 2016 (running with OpenSSL 1.0.2k 26 Jan 2017)# openssl version -aOpenSSL 1.0.1t 3 May 2016 (Library: OpenSSL 1.0.2k 26 Jan 2017)How can nginx runs with openssl 1.0.2k and openssl version -a says that the Library is OpenSSL 1.0.2k but apt-cache policy openssl says installed is 1.0.1t ?Could someone shed some light, please? | How to distinguish which version of OpenSSL is installed? | debian;openssl;nginx | The openssl package contains the front-end binary, not the library. You're tracking Jessie for that package (with its security updates).The library itself is libssl1.0.0, and you're tracking Jessie backports for that package (along with Nginx; you're just a few versions behind for the latter). This is what Nginx uses, and is the library version identified by the openssl front-end. 
You can see the version of the library on your system with apt-cache policy libssl1.0.0 (as well as the availability of newer versions, if any).
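The built-with vs running-with distinction in the nginx -V output above shows up in any dynamically linked program: each binary reports the libssl it actually loads at run time, independent of what the separately packaged openssl front-end prints. For instance, Python exposes the version of the library its ssl module is linked against:

```python
import ssl

# Version string of the libssl loaded by this interpreter at run time.
# On the system in the question this would report 1.0.2k (the backported
# libssl1.0.0) even though `openssl version` says 1.0.1t, because the
# `openssl` CLI binary comes from a different package.
print(ssl.OPENSSL_VERSION)
print(hex(ssl.OPENSSL_VERSION_NUMBER))  # same version, encoded as an integer
```

This is a quick sanity check that a given application really picked up an upgraded library after a security update.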
_unix.1983 | When I restore a splitted session of screen, I've got only one print session and have to reconfigure the number of display session. Is there another way to have the original screen configuration? | GNU screen - Restore a session with splitted screen | gnu screen | null |
_softwareengineering.123797 | Why doesn't standard C++ respect system (foreign or hardware) exceptions?E.g. when a null pointer dereference occurs, the stack isn't unwound, destructors aren't called, and RAII doesn't work. The common advice is to use the system API. But on certain systems, specifically Win32, this doesn't work. To enable stack unwinding for this C++ code// class Foo;// void bar(const Foo&);bar(Foo(1, 2));one should generate something like this C codeFoo tempFoo;Foo_ctor(&tempFoo);__try { bar(&tempFoo);}__finally { Foo_dtor(&tempFoo);}and it's impossible to implement this as a C++ library. Upd:The standard doesn't forbid handling system exceptions. But it seems that popular compilers like g++ don't respect system exceptions on any platform, just because the standard doesn't require it.The only thing that I want is to use RAII to make code readable and the program reliable. I don't want to put hand-crafted try\finally around every call to unknown code. For example in this reusable code, AbstractA::foo is such unknown code:void func(AbstractA* a, AbstractB* b) { TempFile file; a->foo(b, file);}Maybe one will pass to func such an implementation of AbstractA which every Friday will not check if b is NULL, so an access violation will happen, the application will terminate and the temporary file will not be deleted. How many months will users suffer because of this issue, until either the author of func or the author of AbstractA does something about it?Related: Is `catch(...) { throw; }` a bad practice? | C++ and system exceptions | c++;exception handling | null
_codereview.12777 | Question goes like this : input: 1output:{}input: 2output:{}{}{{}}input: 3output:{}{}{}{{}}{}{}{{}}This is my program : public class PrintBraces { int n; char []braces; void readData() { java.util.Scanner scn=new java.util.Scanner(System.in); System.out.print(Please enter the value of n : ); n=scn.nextInt(); } void manipulate() { braces=new char[n*2]; for(int i=0;i<2*n;i+=2) { braces[i]='{'; braces[i+1]='}'; } for(int i=0;i<n;i++) { int oddNo=2*i-1; if(oddNo>0) { char temp=braces[oddNo]; braces[oddNo]=braces[oddNo+1]; braces[oddNo+1]=temp; print(); temp=braces[oddNo]; braces[oddNo]=braces[oddNo+1]; braces[oddNo+1]=temp; } else { print(); } } } void print() { for(int i=0;i<2*n;i++) { System.out.print(braces[i]); } System.out.println(); }}class PrintMain{ public static void main(String args[]) { PrintBraces pb=new PrintBraces(); pb.readData(); pb.manipulate(); }}As expected, I get the correct answer.I have solved it but I think it isn't efficient enough. Can anyone optimise it? And I would love to see any other alternative approaches for the same problem. May be a recursive one?Also, I am open to any suggestions for improving my programming style. Any good practices that I may be violating in my code? | Printing braces | java;optimization | null |
_webmaster.17887 | I started noticing that for some search results by google, there is a x hours ago before the description under the link. How does google determine this?There is also this link Get more results from the past 24 hours. How does this work? Is that based on last crawl timestamp? if so, how can one make google crawl a dynamic site more frequently? should we set that in HTTP headers like last-modified? | Google search results now shows last updated for some results. How does it work? | google search | null |
_codereview.42770 | Is there any way to make that code shorter? I still want to use jQuery. I don't want to use any validation script.$(#form).submit(function (e) { var tmp = $('#select-1').val(); var tmp1 = $('#select-2').val(); var tmp2 = $('#select-3').val(); var error = $('#error-1'); var error2 = $('#error-2'); var error3 = $('#error-3');if (tmp == '0' || tmp == 'Select') { e.preventDefault(); error.show();} else { error.hide();}if (tmp1 == '0' || tmp1 == 'Select') { e.preventDefault(); error2.show();} else { error2.hide();}if (tmp2 == '0' || tmp2 == 'Select') { e.preventDefault(); error3.show();} else {error3.hide();}});});HTML<form action= id=form> <div> <label for=select-1>Value 1</label> <select id=select-1> <option value=0>Select</option> <option value=1>Select 1</option> <option value=2>Select 2</option> <option value=3>Select 3</option> </select> <i id=error-1 class=error>Error</i> </div> <div> <label for=select-2>Value 2</label> <select id=select-2> <option value=0>Select</option> <option value=1>Select 1</option> <option value=2>Select 2</option> <option value=3>Select 3</option> </select> <i id=error-2 class=error>Error</i> </div> <div> <label for=select-3>Value 3</label> <select id=select-3> <option value=0>Select</option> <option value=1>Select 1</option> <option value=2>Select 2</option> <option value=3>Select 3</option> </select> <i id=error-3 class=error>Error</i> </div> <div> <button type=submit id=formsubmission>Submit</button></div></form> | Select validation | javascript;jquery;validation | your html has pattern, so it might be easier if you do this way.(function ($) { $.fn.xSelect = function (e) { return this.each(function () { var $this = $(this); if ($this.val() === '0') { e.preventDefault(); $this.next(.error).show(); }else{ $this.next(.error).hide(); } }); };})(jQuery);then use like$(#form).submit(function (e) { $(#select-1,#select-2,#select-3).xSelect(e);}); notice one thing: the event e need to pass in xSelect to make preventDefault 
work.For a better design, the parameter of a jQuery plugin should be a JSON object, so you can offer more options; this just gives you an idea of how it's done. Please check my jsfiddle example: jsfiddle
_vi.8240 | This is probably an incredibly simple question, but I did not find any answer so far (I must lack the right sources, and I don't know where to search in vim's help).I have a condition and I would like it to include 'AND', likeif (condition1 .AND. condition2) do what I want you to doendifbut I couldn't find the syntax. Same thing for 'OR'. | Use conditional operators AND or OR in an IF statement? | vimscript | As @lcd047 said in his comment, vimscript use C-like operators && and ||.You can find description of their usage on :h expr2. Some important points mentioned by the doc are the followingYou'll find that the operators can be concatenated and && takes precedence over ||, so&nu || &list && &shell == cshIs equivalent to&nu || (&list && &shell == csh)Also once the result is known, the expression short-circuits, that is, further arguments are not evaluated. This is like what happens in C.If you use: if a || bThe expression will be valid even is b is not defined. |
_scicomp.20677 | I am trying to solve two non-linear equations self-consistently in a Gummel loop. Sometimes (every once in a while), I get to a situation when the loop repeats itself with wrong solutions and a certain error persists. As a simple example, consider the following two equations:\begin{align}&y = -x + 1 \\&y = \sqrt{x}\end{align}and suppose the loop reaches to $x=0$ for the input of first equation, which leads to $y=1$ for the input of the second equation. This results in $x=1$ for the input of the first equation, leading to $y=0$, and the situation repeats itself (and of course, does not converge to a correct solution).I was wondering if there is a good and comprehensive reference on this particular problem and on the properties of equations which lead to such behaviour. Also, what is the best way to avoid such difficulties in general? | Convergence problem in iterative method | iterative method;nonlinear equations;convergence | null |
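The loop described in the question (x bouncing between 0 and 1 forever) is the classic 2-cycle failure of an undamped fixed-point/Gummel iteration. The standard cure in device-simulation practice is under-relaxation: mix only a fraction of the new iterate into the old one. For the toy pair y = -x + 1, y = √x the combined sweep is g(x) = 1 - √x, and damping makes it converge to the true intersection x = (3 - √5)/2 ≈ 0.382:

```python
def g(x):
    # one sweep of the toy system: y = sqrt(x), then x = 1 - y
    return 1.0 - x ** 0.5

# undamped iteration from x = 0 oscillates forever: 0 -> 1 -> 0 -> 1 -> ...
# (exactly the repeating loop described in the question)

def damped_fixed_point(g, x0, alpha=0.5, tol=1e-12, max_iter=200):
    """Under-relaxed iteration x <- (1 - alpha) * x + alpha * g(x)."""
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - alpha) * x + alpha * g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError('no convergence')
```

The damping factor alpha trades speed for robustness; texts on nonlinear iteration (and on Gummel vs. full-Newton coupling) discuss how the spectral radius of the damped map governs convergence.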
_unix.286522 | I've installed Ubuntu 15.10 on a Dell Inspiron 13-7353. Things work okay overall, but two critical problems have appeared.First, occasionally, when the laptop wakes up from sleep the screen will come on and I can see the desktop, but things are frozen. No keyboard entry or mouse cursor movement. Even the caps lock LED won't toggle.A second problem has popped up after the last few updates. After waking from sleep, and using for a minute or so, the trackpad cursor freezes. If I have a terminal window up and in focus, I can still type in commands.Would anyone happen to have some experience or advice to resolve these issues? I've tried to do an upgrade from 15.10 to 16.04, and my attempts have been unsuccessful. The upgrade seems to make it through, but on reboot things never make it to the login screen. I have to fire up Clonezilla and restore my 15.10 backup. I keep the 15.10 install updated with the latest stable kernels and updates.Are there any Dell device drivers available to try?Is there a better/tested distribution and version of Linux to use? I'd actually probably prefer Debian, but when I tried that distro I couldn't get the wireless to work. | Ubuntu 15.10 on Dell Inspiron 13-7353 | hardware | null
_softwareengineering.108688 | I have already done a PHP project, and I did a number of things pretty wrong :)I just had all pages as scripts with php mixed in with html. I also wasn't using a framework like cakephp. And I didn't really use objects, nor had any sort of test suite. :)This time around I want to get it right. What should I do in terms of good practice approaches? And what am I ok not using? Any suggestions? Tips? | New PHP project, how to best architect it | php;frameworks;mvc;system architecture | From what you've provided, it's difficult to offer anything other than general guidelines.SOLID principles of OO design and best practices such as those found in Kent Beck's Smalltalk Best Practice Patternsan architecture that is appropriate for your domain and well understood by you and/or your team (MVC, for example)mature frameworks or libraries with an eye to familiarity, community engagement, documentation, stability, etca coding style that favors consistency, readability and maintainabilitya commitment to automated testing and/or TDDa modern version control system (git, mercurial)a willingness to treat PHP like a real language and not just a collection of cobbled-together HTML templatesmost importantly, a process that is iteratively self-evaluating and self-improvingEdit: Finally, don't try to do too much at once. Make a change, give it time to set in, and evaluate that change. Keep what works. |
_unix.364396 | Is it OK for two or more processes concurrently read/write to the same unix socket?I've done some testing.Here's my sock_test.sh, which spawns 50 clients each of which concurrently write 5K messages:#! /bin/bash --SOC='/tmp/tst.socket'test_fn() { soc=$1 txt=$2 for x in {1..5000}; do echo ${txt} | socat - UNIX-CONNECT:${soc} done}for x in {01..50}; do test_fn ${SOC} Test_${x} &doneI then create a unix socket and capture all traffic to the file sock_test.txt:# netcat -klU /tmp/tst.socket | tee ./sock_test.txtFinally I run my test script (sock_test.sh) and monitor on the screen all 50 workers doing their job. At the end I check whether all messages have reached their destination:# ./sock_test.sh# sort ./sock_test.txt | uniq -cTo my surprise there were no errors and all 50 workers have successfully sent all 5K messages.I suppose I must conclude that simultaneous writing to unix sockets is OK? Was my concurrency level too low to see collisions?Is there something wrong with my test method? How then I test it properly?EDITFollowing the excellent answer to this question, for those more familiar with python there's my test bench:#! 
/usr/bin/python3 -u# coding: utf-8import socketfrom concurrent import futurespow_of_two = ['B','KB','MB','GB','TB']bytes_dict = {x: 1024**pow_of_two.index(x) for x in pow_of_two}SOC = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)SOC.connect('/tmp/tst.socket')def write_buffer( char: 'default is a' = 'a', sock: 'default is /tmp/tst.socket' = SOC, step: 'default is 8KB' = 8 * bytes_dict['KB'], last: 'default is 2MB' = 2 * bytes_dict['MB']): print('## Dumping to the socket: {0}'.format(sock)) while True: in_memory = bytearray([ord(char) for x in range(step)]) msg = 'Dumping {0} bytes of {1}' print(msg.format(step, char)) sock.sendall(bytes(str(step), 'utf8') + in_memory) step += step if last % step >= last: breakdef workers(concurrency=5): chars = concurrency * ['a', 'b', 'c', 'd'] with futures.ThreadPoolExecutor() as executor: for c in chars: executor.submit(write_buffer, c)def parser(chars, file='./sock_test.txt'): with open(file=file, mode='rt', buffering=8192) as f: digits = set(str(d) for d in range(0, 10)) def is_digit(d): return d in digits def printer(char, size, found, junk): msg = 'Checking {}, Expected {:8s}, Found {:8s}, Junk {:8s}, Does Match: {}' print(msg.format(char, size, str(found), str(junk), size == str(found))) char, size, found, junk = '', '', 0, 0 prev = None for x in f.read(): if is_digit(x): if not is_digit(prev) and prev is not None: printer(char, size, found, junk) size = x else: size += x else: if is_digit(prev): char, found, junk = x, 1, 0 else: if x==char: found += 1 else: junk += 1 prev = x else: printer(char, size, found, junk)if __name__ == __main__: workers() parser(['a', 'b', 'c', 'd'])Then in the output you may observe lines like the following:Checking b, Expected 131072 , Found 131072 , Junk 0 , Does Match: TrueChecking d, Expected 262144 , Found 262144 , Junk 0 , Does Match: TrueChecking b, Expected 524288 , Found 219258 , Junk 0 , Does Match: FalseChecking d, Expected 524288 , Found 219258 , Junk 0 , Does Match: FalseChecking 
c, Expected 8192 , Found 8192 , Junk 0 , Does Match: TrueChecking c, Expected 16384 , Found 16384 , Junk 0 , Does Match: TrueChecking c, Expected 32768 , Found 32768 , Junk 610060 , Does Match: TrueChecking c, Expected 524288 , Found 524288 , Junk 0 , Does Match: TrueChecking b, Expected 262144 , Found 262144 , Junk 0 , Does Match: TrueYou can see that payload in some cases (b, d) is incomplete, however missing fragments are received later (c). Simple math proves it:# Expectedb + d = 524288 + 524288 = 1048576# Found b,d + extra fragment on the other check on cb + d + c = 219258 + 219258 + 610060 = 1048576Therefore simultaneous writing to unix sockets is OK NOT OK. | Concurrently reading/writing to the same unix socket? | linux;unix sockets | That is a very short test line. Try something larger than the buffer size used by either netcat or socat, and sending that string in multiple times from the multiple test instances; here's a sender program that does that:#!/usr/bin/env expectpackage require Tcl 8.5set socket [lindex $argv 0]set character [string index [lindex $argv 1] 0]set length [lindex $argv 2]set repeat [lindex $argv 3]set fh [open | socat - UNIX-CONNECT:$socket w]# avoid TCL buffering screwing with our resultschan configure $fh -buffering noneset teststr [string repeat $character $length]while {$repeat > 0} { puts -nonewline $fh $teststr incr repeat -1}And then a launcher to call that a bunch of times (25) using different test characters of great length (9999) a bunch of times (100) to hopefully blow well past any buffer boundary:#!/bin/sh# NOTE this is a very bad idea on a shared systemSOCKET=/tmp/blablafor char in a b c d e f g h i j k l m n o p q r s t u v w x y; do ./sender -- $SOCKET $char 9999 100 &donewaitHmm, I don't have a netcat hopefully nc on Centos 7 will suffice:$ nc -klU /tmp/blabla > /tmp/outAnd then elsewhere we feed data to that$ ./launcherNow our /tmp/out will be awkward as there are no newlines (some things buffer based on newline so 
newlines can influence test results if that is the case, see setbuf(3) for the potential for line-based buffering) so we need code that looks for a change of a character, and counts how long the previous sequence of identical characters was.#include <stdio.h>int main(int argc, char *argv[]){ int current, previous; unsigned long count = 1; previous = getchar(); if (previous == EOF) return 1; while ((current = getchar()) != EOF) { if (current != previous) { printf(%lu %c\n, count, previous); count = 0; previous = current; } count++; } printf(%lu %c\n, count, previous); return 0;}Oh boy C! Let's compile and parse our output...$ make parsecc parse.c -o parse$ ./parse < /tmp/out | head49152 b475136 a57344 b106496 a49152 b49152 a38189 r57344 b57344 a49152 b$ Uh-oh. That don't look right. 9999 * 100 should be 999,900 of a single letter in a row, and instead we got...not that. a and b got started early, but it looks like r somehow got some early shots in. That's job scheduling for you. In other words, the output is corrupt. How about near the end of the file?$ ./parse < /tmp/out | tail8192 l8192 v476 d476 g8192 l8192 v8192 l8192 v476 l16860 v$ echo $((9999 * 100 / 8192))122$ echo $((9999 * 100 - 8192 * 122))476$Looks like 8192 is the buffer size on this system. Anyways! Your test input was too short to run past buffer lengths, and gives a false impression that multiple client writes are okay. Increase the amount of data from clients and you will see mixed and therefore corrupt output. |
_softwareengineering.338526 | I have the following problem-About ~60 tables on SQL that some have foreign key to each other,primary key as identity had to change to something else with logic. It was replaced by triggers that checks what is the next number and change the primary key field (by sequence)Now this makes several problems on EF when trying to insert new identity:In order to update the foreign keys i must retrieve back the entity from DB because the primary key is not refreshing.When I add more than one entity, the EF throws exception that the primary key is not unique. (It receive the default value in my case 0 for int)The solution for 1 is good only if I have other unique fields that I can get the entity back from the DB.The best solution for 2 that I could come up with is using Detach entity after each insert.Both are not optimal, to say the least.Would glad to hear if there is any other solution / another approach instead of triggers.At this point i cannot change the structure of the tables on the database, just manipulations on the primary key field.Supplement:The auto generated ID's were abandoned to eliminate duplicates on this field, as these tables will run on several db's that are merged from time to time. | Trigger on primary key instead of identity | sql;entity framework | null |
_webmaster.99068 | I have a desktop version of my site using yourwebsite.com and a mobile version using m.yourwebsite.com. Should I treat them as one site and only submit the desktop version along with its sitemap and preferred URL (www.yourwebsite.com / yourwebsite.com) to search engines, and what what is the right way to use the robots.txt and .htaccess file for each site? One for both or each site has their own files?Does any of this harm search engine visibility, web crawling and SEO ranking or does it make any difference either way?As of now, I am using the robot.text from the desktop version for the mobile version and redirecting mobile users. Any suggestions would be appreciative. | Should I submit both mobile and desktop domain names for indexing on non-responsive sites or is that bad for SEO? | seo;web crawlers;indexing;mobile;negative seo | null |
_unix.355368 | I have a RAID 1 which was managed by Intel Rapid Storage Technology drivers. Im migrating this system to Fedora Linux and I ran mdadm --assemble --scan and it seemed like there was an array available via /dev/md126..I successfully mounted it and accessed the data I needed. But then I wanted to check the health of the array, the BIOS is reporting completely intact.I ran [root@localhost ~]# sudo mdadm --detail /dev/md126/dev/md126: Container : /dev/md/imsm0, member 0 Raid Level : raid1 Array Size : 1953511424 (1863.01 GiB 2000.40 GB) Used Dev Size : 1953511556 (1863.01 GiB 2000.40 GB) Raid Devices : 2 Total Devices : 1 State : clean, degraded Active Devices : 1Working Devices : 1 Failed Devices : 0 Spare Devices : 0 UUID : **OMMITTED** Number Major Minor RaidDevice State - 0 0 0 removed 0 8 0 1 active sync /dev/sdaFrom this output it seems that only 1 device is attached? Also there is a degraded status listed. sda and sdb should both be members of this array. Here is the output of my status command:[root@localhost ~]# cat /proc/mdstatPersonalities : [raid1] md126 : active raid1 sda[0] 1953511424 blocks super external:/md127/0 [2/1] [_U]md127 : inactive sda[0](S) 3028 blocks super external:imsmunused devices: <none>Not sure what md127 is representative of here either. 
In anycase here is my /etc/mdadm.conf :# mdadm.conf written out by anacondaMAILADDR rootAUTO +imsm +1.x -allHow should I configure my existing array properly without corrupting data?@Miorrin****************************EDIT ADDED LSBLK -f - ****************************$ lsblk -fNAME FSTYPE LABEL UUID MOUNTPOINTsdd sdd2 ntfs *OMITTED* sdd1 ntfs System Reserved *OMITTED* sdb ddf_raid_m *OMITTED*ddf1_New_VD sde sde2 ntfs *OMITTED* /run/mediasde1 sdc sdc2 crypto_LUK *OMITTED* luks-*OMITTED* LVM2_membe *OMITTED* fedora-root ext4 *OMITTED* / fedora-swap swap *OMITTED* [SWAP] fedora-home ext4 *OMITTED* /homesdc1 ext4 *OMITTED* /bootsda isw_raid_m The two 1.8T volumes should be in the same raidThank you all for any assistance! | Understanding Linux software RAID, which devices are connected to my mdadm RAID 1? | linux;fedora;raid;mdadm | null |
_codereview.115969 | I'm trying to update my skillset by learning how to write isomorphic JavaScript applications. For my stack, I've chosen React and Express - no database yet.The problem many people seem to face when it comes to writing these kinds of applications is getting data from the server to the client so the view can be rendered properly client-side. Apparently the best way to do this is to JSON encode the data and include that in the server-rendered page.All of this is done and working, but I would really appreciate if somebody who knows what they're doing could weigh in on my techniques before I develop bad habits.A working repository (very small) is available on GitHub, but I have included the relevant concerning code below.To get the data to the client, I pass it down through React until it hits the base template, where it is encoded and included in an invisible element:return this.isServer ? ( <html> <head> <title>{ this.props.title || Untitled }</title> <link rel=stylesheet href=/static/theme/style.scss/> </head> <body> { //If data is to be passed into the view, encode it as JSON and insert it into a hidden element this.props.data && <x-data> { new Buffer(JSON.stringify(this.props.data)).toString(base64) } </x-data> } <h1>My Files</h1> <main> { this.props.children } </main> <script src=/static/js/bundle.js></script> </body> </html>) : this.props.children;The application is rendered into the <main> element on the server, and if this code is running on the client, it is handled elsewhere, in my client entrypoint:import App from ./views/index.jsx;//Get data from server, which is passed down through the x-data element//It is base64 encoded JSONlet props = {};let xData = document.body.querySelector(x-data);if (xData) { let json = JSON.parse(window.atob(xData.innerText)); Object.assign(props, json);}//All pages will export an App class which renders the current pageReactDOM.render( React.createElement(App, props), document.body.querySelector(main));The 
App class is the base React component to render this specific page. I'm not sure how I'll work out having multiple pages in the future, but this is how I have it working now.Am I following a proper design pattern, or have I invented some mutant hellspawn technique that should be vanquished with a quick rm -rf? | Very basic isomorphic JavaScript application | javascript;node.js;express.js;react.js | You can render your webpage on the server complete with all props/state in place and at client-side React will pick up from initial state when he founds markup already existing.So write your components the normal way, query database (or what have you) for props and render on server using ReactDOMServer. Here's a little example.(As far as I can tell, you're doing something similar in your repo, so this is probably late to the party :) |
_unix.102804 | So.. if there is a bad block on the HDD, what can survive better?not using FDE (full disc encryption)using FDE - since the whole disk has 1 partition - the encrypted one, isn't it more likely to loose all data if there is a HDD error, ex.: bad block? | Does using full-disk encryption affect the probability of losing data in case of storage errors? | filesystems;encryption;storage | No, I mean yes, I mean... a little bit of both.If a block is bad, the data of that block is gone. Whether the block contained encrypted data or not, is not relevant at that point. It doesn't make a difference.Of course there are special cases. If your encryption has a metadata header, such as LUKS does, and the bad block happens to be one that holds your encryption key, then that single bad block can render the entire disk unreadable and the entire data lost. If the machine is still running and has the container open, don't reboot without writing down the key first... (e.g. dmsetup table --showkeys).But the same can happen to you in an unencrypted scenario. If critical filesystem metadata goes bad, your files are probably gone. And rescue tools like PhotoRec that try to make sense of remaining clear text data may not be able to yield satisfactory results (depending on whether you stored files in a known format, fragmentation and other things).In the end no matter what you do, you always need backups. After all, a HDD may just as well die completely rather than have just a bad block here and there. |
_unix.243117 | Kindly consider below files: File 1:boo,194,2322foo,999,7559File 2: boo,2322boo,4526foo,4222foo,4223I need to link Field1 in File 1 with Field1 in File 2 and get the related Field2 from File 2, while excluding the result if it's equal to Field3 in File 1.The result should be: boo,4526foo,4222,4223I tried the below script, but it does not exclude the similar values.awk -F, 'NF==3{arr[$1]=$3}{if(arr[$1]==$1){print $2}}' | Exclude similar values from awk | text processing;awk;text | You need to check whether $1 is in arr and if so whether the value is different from $2 and then print:awk -F, 'FNR == NR { arr[$1] = $3; next } { if ($1 in arr && arr[$1] != $2) print $2 }'Using FNR == NR and next is the conventional way to process lines in the first file differently from the lines in other files. Yes, you can flatten it onto one line, but 'one-liner' is a pejorative term unless you're writing APL (or perhaps Perl). |
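For comparison, here is the same join-and-exclude logic sketched in Python (not part of the accepted answer), extended to group the surviving values per key so the output matches the exact format asked for; the file contents are inlined rather than read from disk:

```python
# Inlined copies of File 1 and File 2 from the question.
file1 = "boo,194,2322\nfoo,999,7559\n"
file2 = "boo,2322\nboo,4526\nfoo,4222\nfoo,4223\n"

# Like arr[$1] = $3 in the awk answer: value to exclude, keyed by field 1.
exclude = {}
for line in file1.splitlines():
    key, _, third = line.split(",")
    exclude[key] = third

# Keep File 2 values whose key matches and whose value differs from File 1's
# field 3, grouping them per key.
groups = {}
for line in file2.splitlines():
    key, val = line.split(",")
    if key in exclude and exclude[key] != val:
        groups.setdefault(key, []).append(val)

result = [",".join([k] + vals) for k, vals in groups.items()]
print("\n".join(result))  # -> boo,4526 / foo,4222,4223
```

The grouping step is the only addition over the awk one-liner, which prints each surviving value on its own line.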
_unix.118141 | I wanted to know in technical terms; what is the difference between BSD Kernel and Linux Kernel.In Linux, we can download the source kernel then patch it and make and make modules it. Even we have multiple tools to edit the kernel config such as menuconfig, xconfig and ... .But I couldn't find such kinda vast field on BSD. First, Could I download the BSD kernel? How could I config it? and ... So what am I asking is: (Without referring to ancestry and etymology) Is the Kernel in each case (in)dependent of a distribution?Ways to config Each Kernel and tools available for the job?Whether any Patch work could be done in each case?Availability of the kernel outside the realm of distribution? (Kernel Sources)?Flavour of Kernels available in each case (X??BSD/Linux) Like XEN/Vmware/GEN? | BSD Kernel Vs. Linux kernel? | linux;kernel;linux kernel;kernel modules;bsd | null |
_unix.149926 | I used to be able to kexec into a new kernel immediately after a kickstart (anaconda) install via pxe.I was able to do this by figuring out the current kernel version, and grabbing cmdline options by using /boot/grub/grub.confcmdline=$(awk /kernel.*console/'{$1=$2=; print$0}' /boot/grub/grub.conf)Then:kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initramfs-$(uname -r).img --append=${cmdline}Now I am unable to find what the cmdline options for the next reboot are since /proc/cmdline and the cmdline command only show me what the cmdline is for the installation disc.How would I be able to find out what the cmdline is for next reboot now? | Kexec immediately after Kickstart install (where is cmdline?) | kernel;rhel;kickstart;pxe;anaconda | null |
_unix.59017 | Linux debian squeeze 6.0.6 (2.6.32-5-amd64) is supplied with quite an old 02.100.03.00 version of the mpt2sas driver.I do wish to install a much newer mpt2sas driver version. I know there are backported kernel versions available, like bpo.3 and bpo.4. Those backports both contain version 10 of the mpt2sas driver.The mpt2sas.ko module is already blacklisted from being loaded during boot, with:$ echo 'blacklist mpt2sas' >> /etc/modprobe.d/mpt2sas.conf; depmod; update-initramfs -u -k $(uname -r)For this mpt2sas driver, precompiled binaries are available in rpm format for RHEL5 and SLES10, and there is source code available.How can a much newer mpt2sas driver be installed in debian? | How to install much newer mpt2sas driver version in debian squeeze? | debian;drivers;scsi | Use the newer Linux driver version 15.00.00.00 from LSI. This 700 MB download also contains precompiled binaries for Debian 6.0.5.Installation instructions for amd64 architecture - adapted from the included readme - are:# cd debian/rpms-03# dpkg -i mpt2sas-15.00.00.00-3_Debian6.0.5.amd64.debAnd the output is:Selecting previously deselected package mpt2sas.(Reading database ... 28905 files and directories currently installed.)Unpacking mpt2sas (from mpt2sas-15.00.00.00-3_Debian6.0.5.amd64.deb) ...pre 15.00.00.00Setting up mpt2sas (15.00.00.00-3) ...post 15.00.00.00The mpt driver for kernel 2.6.32-5-amd64 is now version 15.00.00.00Working files in /tmp/mkinitramfs_PvDVif and overlay in /tmp/mkinitramfs-OL_Ko3jrSpost Install Done.The result is that the old driver is renamed from:/lib/modules/2.6.32-5-amd64/kernel/drivers/scsi/mpt2sas/mpt2sas.koto:/lib/modules/2.6.32-5-amd64/kernel/drivers/scsi/mpt2sas/mpt2sas.ko.origand the new driver is installed at:/lib/modules/2.6.32-5-amd64/weak-updates/mpt2sas/mpt2sas.ko
_softwareengineering.336069 | I am developing a Web API, with an n-tier approach, using Entity Framework and using code first approach.My questions is does my DAL, and Business Logic layer are following DI/UoW/DDD pattern if not where should I change my code to make it more standard.What should my service layer look like to bridge between the web api and the business layer. I plan to define role management in this layer, only x user can perform this task.Is the validation defined in core, good there or should it be its own class for each item?This is a sample of the code, and may be missing stuff, but I am trying to emphasize, on how close i am to the DDD. Role implementation is missing. At this stage the Web API layout, and Service Layout isn't created but build on the fly, just wondering if there are tweak that I should do to this.StructureLibrary Name ReferenceAPI Service , Model, CommonService Core, Model, CommonCore DAL, Model, CommonDAL Model, CommonModelCommonModelpublic abstract class Base{ public int ID { get; set; } public Boolean isValid { get; set; } public DateTime createdOn { get; set; } public int createdID { get; set; } public Person createdPerson { get; set; } public DateTime updatedOn { get; set; } public int updatedID { get; set; } public Person updatedPerson { get; set; }}public class Person : Base{ public DateTime DOB { get; set; } public virtual ICollection<Name> PreferredName { get; set; } public virtual ICollection<Prefix> PrefixID { get; set; } public virtual ICollection<Gender> GenderID { get; set; } public virtual ICollection<Ethnicity> EthnicityID { get; set; }}DALpublic interface IPersonRepsoitory{ IEnumerable<Person> GetPersons(); Person GetPersonByID(int id); IEnumerable<Person> GetPersonByFirst(string first); IEnumerable<Person> GetPersonByLast(string last); IEnumerable<Person> GetPersonBirthday(DateTime d); IEnumerable<Person> GetPersonWithGroup(IEnumerable<Roles> r); void InsertPerson(Person p); void DeletePerson(int id); void 
UpdatePerson(Person p); void Save();}public class PersonRepository : IPersonRepsoitory, IDisposable{ private Context context; private bool disposed = false; public PersonRepository(Context _context) { this.context = _context; } public IEnumerable<Person> GetPersons() { return context.Person.ToList(); } public Person GetPersonByID(int id) { return context.Person.Where(x => x.ID == id).FirstOrDefault(); } public IEnumerable<Person> GetPersonByFirst(string first) { return context.Person.Where(x => x.PreferredName.First().firstName == first); } public IEnumerable<Person> GetPersonByLast(string last) { return context.Person.Where(x => x.PreferredName.First().lastName == last); } public IEnumerable<Person> GetPersonBirthday(DateTime d) { return context.Person.Where(x => x.DOB == d); } public IEnumerable<Person> GetPersonWithGroup(IEnumerable<Roles> r) { // need to complete association return null; } public void InsertPerson(Person p) { context.Person.Add(p); } public void DeletePerson(int id) { Person p = context.Person.Find(id); context.Person.Remove(p); } public void UpdatePerson(Person p) { context.Entry(p).State = EntityState.Modified; } public void Save() { context.SaveChanges(); } protected virtual void Dispose(bool disposing) { if (!this.disposed) { if (disposing) { context.Dispose(); } } this.disposed = true; } public void Dispose() { Dispose(true); GC.SuppressFinalize(this); }}Corepublic interface IPersonCore{ IEnumerable<Person> PersonList(); Person UserWithID(int id); IEnumerable<Person> UserWithRole(IEnumerable<Roles> r); int AddPerson(Person p); int RemovePerson(int id); int UpdatePerson(Person p);}public class PersonCore : IPersonCore{ private IPersonRepsoitory dbPerson; private Person currUser; public PersonCore(Person _currUser) { this.dbPerson = new PersonRepository(new Context()); this.currUser = _currUser; } public IEnumerable<Person> PersonList() { return dbPerson.GetPersons(); } public Person UserWithID(int id) { return dbPerson.GetPersonByID(id); } 
public IEnumerable<Person> UserWithRole(IEnumerable<Roles> r) { return dbPerson.GetPersonWithGroup(r); } public int AddPerson(Person p) { if (isValid(p)) { p.createdOn = DateTime.Now; p.updatedOn = DateTime.Now; p.createdPerson = currUser; p.updatedPerson = currUser; dbPerson.InsertPerson(p); return 1; } return 0; } public int RemovePerson(int id) { Person found = dbPerson.GetPersonByID(id); if (found != null) { dbPerson.DeletePerson(found.ID); return 1; } return 0; } public int UpdatePerson(Person p) { if (isValid(p)) { Person found = dbPerson.GetPersonByID(p.ID); if (found != null) { found = p; found.updatedOn = DateTime.Now; found.updatedPerson = currUser; dbPerson.InsertPerson(found); return 1; } } return 0; } private static bool isValid(Person p) { if (p == null) { return false; } // More validation done here return true; }}Servicepublic interface IPersonService{ void GetListPerson();}public class personService : IPersonService{ private IPersonCore personCore; public class personService() { this.personCore = new PersonCore(); } public IEnumerable<Person> GetListPerson() { return personCore.PersonList(); }}Web APIpublic class PersonController : ApiController { private IPersonService personService; public PersonController() { this.personService = new personService(); } public ListPerson[] Get() { return personService.GetListPerson(); } } | Does this structure follow DDD/UoW pattern? | c#;object oriented;asp.net | My questions is does my DAL, and Business Logic layer are following DI/UoW/DDD pattern if not where should I change my code to make it more standard.Regarding DI, I see no evidence of any DI in your code. 
At every layer, you have constructors like:public personService(){ this.personCore = new PersonCore();}To be using DI, it would look like:public personService(IPersonCore personCore){ this.personCore = personCore;}i.e., you are injecting the dependency, not having each class create it for itself.The code is incomplete, but currently there's no evidence of any unit of work in the code.DDD is not a software pattern. Whether you are modelling your domain and using that to drive this design cannot be determined from the code.
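The payoff of constructor injection is easiest to see with a test double. This is an illustrative Python sketch (the class names mirror the C# ones above but are otherwise hypothetical, not the poster's code):

```python
class PersonService:
    def __init__(self, person_core):
        # The dependency is supplied by the caller, not constructed here.
        self._person_core = person_core

    def get_list_person(self):
        return self._person_core.person_list()

class FakePersonCore:
    """A test double — usable only because the dependency is injected."""
    def person_list(self):
        return ["alice", "bob"]

# In production the caller would pass the real core; in a test, the fake.
service = PersonService(FakePersonCore())
people = service.get_list_person()
print(people)  # -> ['alice', 'bob']
```

With `new PersonCore()` hard-coded inside the constructor, substituting the fake would be impossible without editing the class itself.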
_datascience.14875 | I am trying to collect some data for ML, specifically for training a neural network model, and I don't know how big a data set is big enough. So is there a rule of thumb on how much data of dimension DIM one should collect for training a NN model? For example, does it depend on the number of features, the kind of NN model, or something else?Any help will be appreciated. | Data set size versus data dimension, is there a rule of thumb? | machine learning;neural network;data mining;dataset;deep learning | In this video by Caltech prof. Yaser Abu-Mostafa, he explains the relationship between the dimension of a dataset and the size required for any learning model to work.As a general rule of thumb, the size of the dataset should be at least about 10x its dimension, independent of the model used.Also, this link has summaries from some of the relevant papers, viz.For a finite sized data with little or no a priori information, the ratio of the sample size to dimensionality must be as large as possible to suppress optimistically biased evaluations of the performance of the classifier.This says the ratio of the size of the dataset (sample) to its dimension should be as large as possible to reduce classifier bias towards a particular class.The ratio of the sample size to dimensionality should vary inversely proportional to the amount of available knowledge about the class conditional densities.In a classifier setting, the more knowledge we have of each class' probability density, the smaller the sample-size to dimension ratio can be. In simpler terms, we should include as much data as possible, and if we cannot, include as much information as possible in the small dataset itself, because for any model to work we need to feed it a high-variance dataset.
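The 10x heuristic quoted above can be written down directly; note the factor 10 is just the rule of thumb from the lecture, not a guarantee (this snippet is illustrative, not from the answer):

```python
def min_samples(dim, factor=10):
    """Heuristic lower bound on training-set size for `dim` features."""
    return factor * dim

# e.g. 10 features -> ~100 samples; 784 (MNIST pixels) -> ~7840 samples
for dim in (10, 50, 784):
    print(dim, "->", min_samples(dim))
```

In practice the required size also grows with model capacity and label noise, so treat this as a floor, not a target.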
_webapps.26009 | It drives me crazy that I get notifications from people I don't know who have written on the wall of an event that I (along with 3000 other people) was invited to. I see that you can prevent getting emailed whenever this happens, but is there way to avoid the notification (i.e. the red globe with the number lighting up) entirely? | On Facebook, is there a way to stop notifications for other people writing on the wall of an event I have RSVPed to? | facebook;notifications | Yep! For Facebook's Notifications on a specific event, here's how to turn them off.Go to the event page.Click the Gear icon in the upper right.Turn off NotificationsAlso related, to disable Email Notifications for events (every event, not just a specific one)Go to your Account SettingsNotifications from the leftUnder Events uncheck Posts on the wall of an event you've joined |
_softwareengineering.299106 | I am using getters and setters for the purpose of encapsulation.public class Student { private String studentID; private String studentName; private String address; public Student(){ //default constructor } public Student(String studentID, String studentName, String address) { super(); this.studentID = studentID; this.studentName = studentName; this.address = address; } public String getStudentID() { return studentID; } public void setStudentID(String studentID) { this.studentID = studentID; } public String getStudentName() { return studentName; } public void setStudentName(String studentName) { this.studentName = studentName; } public String getAddress() { return address; } public void setAddress(String address) { this.address = address; }}The variables studentID, studentName and address are declared as private, with the intention of encapsulation, but we could also do the same task by changing the variables' access level from private to public. Does using setters and getters really help to achieve encapsulation?The only benefit I can see is that users of the class do not need to know the names of the variables used in the class, since the setters and getters are what make sense to the users of the class, ex- objectofStudentClass.setStudentID("S0001");Question 1: What is the main difference between providing getters and setters and making the variables' access level public?Question 2: Also, here I have made a parameterized constructor matching the variables/fields in the class Student. Does that throw away the concept of encapsulation? | Meaning of using getters and setters and Uses of parameterized Constructor. | java;object oriented;encapsulation | Yes, there is a huge difference.Having getters and setters allows you to change their implementation (for instance, to add range checking, audit logging, statistics updates etc.) in the future without having to change all client code.
Having a public variable would make such maintenance a breaking change. Very often this is the difference between a change being feasible and being too invasive/risky to perform. |
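A minimal sketch of that maintenance benefit (Python used for brevity here — an illustrative example, with names mirroring the Java class above): validation is added to the setter later, and no client call site has to change.

```python
class Student:
    def __init__(self, student_id):
        self._student_id = student_id

    def get_student_id(self):
        return self._student_id

    def set_student_id(self, student_id):
        # Added in a later version: range/format checking. Because clients
        # already go through the setter, none of them need to be edited.
        if not student_id:
            raise ValueError("studentID must not be empty")
        self._student_id = student_id

s = Student("S0001")
s.set_student_id("S0002")   # existing client code, unchanged
print(s.get_student_id())   # -> S0002
```

Had clients been writing `s.studentID = ...` directly, introducing the same check would have meant hunting down and changing every assignment.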
_unix.248745 | The shell script that I have to write has to create new users and automatically assign them to groups. This is my code so far:echo -n "Enter the username: "read textuseradd $textI was able to get the shell script to add the new user (I checked the /etc/passwd entry). However, I have not been able to set a password (entered by the user) for the newly created user. If anyone could help me assign a password to the newly created user it would be of much help. | How to add users to Linux through a shell script | linux;shell script;users;password;group | Output from man 1 passwd:--stdin This option is used to indicate that passwd should read the new password from standard input, which can be a pipe.So to answer your question, use the following script:echo -n "Enter the username: "read unameecho -n "Enter the password: "read -s passwdadduser $unameecho $passwd | passwd $uname --stdinI used read -s for the password, so it won't be displayed while typing.Edit: For Debian users --stdin won't work. Instead of passwd use chpasswd: echo $uname:$passwd | chpasswd
_unix.211206 | My intention is to monitor traffic from/to wan. To achieve that, i want to calculate outgoing and ingoing bytes/second, with iptables counters as data source. Unluckily, I'm not able to understand what to do with FORWARD chain, although I'm aware of INPUT and OUTPUT. I'm focusing on iptables because it actually shows only IPV4 packets and bytes since I don't need Ethernet ones.My configuration scheme is:modem -> OpenWrt routerAnd here's my /etc/config/firewall file:config defaults option syn_flood '1' option input 'ACCEPT' option output 'ACCEPT' option forward 'REJECT'config zone option name 'lan' option input 'ACCEPT' option output 'ACCEPT' option forward 'ACCEPT' option network 'lan'config zone option name 'wan' option input 'REJECT' option output 'ACCEPT' option forward 'REJECT' option masq '1' option mtu_fix '1' option network 'wan'config rule option name 'Allow-DHCP-Renew' option src 'wan' option proto 'udp' option dest_port '68' option target 'ACCEPT' option family 'ipv4'config rule option name 'Allow-Ping' option src 'wan' option proto 'icmp' option icmp_type 'echo-request' option family 'ipv4' option target 'ACCEPT'config rule option name 'Allow-DHCPv6' option src 'wan' option proto 'udp' option src_ip 'fe80::/10' option src_port '547' option dest_ip 'fe80::/10' option dest_port '546' option family 'ipv6' option target 'ACCEPT'config rule option name 'Allow-ICMPv6-Input' option src 'wan' option proto 'icmp' list icmp_type 'echo-request' list icmp_type 'echo-reply' list icmp_type 'destination-unreachable' list icmp_type 'packet-too-big' list icmp_type 'time-exceeded' list icmp_type 'bad-header' list icmp_type 'unknown-header-type' list icmp_type 'router-solicitation' list icmp_type 'neighbour-solicitation' list icmp_type 'router-advertisement' list icmp_type 'neighbour-advertisement' option limit '1000/sec' option family 'ipv6' option target 'ACCEPT'config rule option name 'Allow-ICMPv6-Forward' option src 'wan' option dest '*' option proto 
'icmp' list icmp_type 'echo-request' list icmp_type 'echo-reply' list icmp_type 'destination-unreachable' list icmp_type 'packet-too-big' list icmp_type 'time-exceeded' list icmp_type 'bad-header' list icmp_type 'unknown-header-type' option limit '1000/sec' option family 'ipv6' option target 'ACCEPT'config include option path '/etc/firewall.user'config include 'miniupnpd' option type 'script' option path '/usr/share/miniupnpd/firewall.include' option family 'any' option reload '1'config forwarding option dest 'wan' option src 'lan'If possible, please provide me an answer that works also with OpenWrt-based access points, not only routers.Thank you! | How to get RX and TX bytes querying iptables? | iptables;openwrt;bandwidth | null |
_unix.19252 | I realize that ! has special significance on the commandline in the context of the commandline history, but aside from that, in a runing script the exclamation mark can sometimes cause a parsing error.I think it has something to do with an event, but I have no idea what an event is or what it does. Even so, the same command can behave differently in different situations.The last example, below, causes an error; but why, when the same code worked outside of the command substitution? .. using GNU bash 4.1.5 # This works, with or without a space between ! and p { echo -e foo\nbar | sed -nre '/foo/! p' echo -e foo\nbar | sed -nre '/foo/!p'; }# bar# bar# This works, works when there is a space between ! and p var=$(echo -e foo\nbar | sed -nre '/foo/! p'); echo $var# bar# This causes an ERROR, with NO space between ! and p var=$(echo -e foo\nbar | sed -nre '/foo/!p'); echo $var# bash: !p': event not found | Why does the exclamation mark `!` sometimes upset bash? | bash;command history;quoting | The ! character invokes bash's history substitution. When followed by a string (as in your failing example) it tries to expand to the last history event that began with that string. Just like $var gets expanded to the value of that string, !echo would expand to the last echo command in your history.Space is a breaking character in such expansions. First note how this would work with variables:# var=like# echo $varlike# echo $$# echo Do you $var frogs?Do you like frogs? <- as expected, variable name broken at space# echo Do you $varfrogs?Do you? <- $varfrogs not defined, replaced with blank# echo Do you $ var frogs?Do you $ var frogs? <- $ not a valid variable name, ignoredThe same thing will happen for history expansion. The bang character (!) starts off a history replacement sequence, but only if followed by a string. 
Following it with a space make it literal bang instead of part of a replace sequence.You can avoid this kind of replacement for both variable and history expantion by using single quotes. Your first examples used single quotes and so ran fine. Your last examples are in double quotes and thus bash scanned them for expantion sequences before it did anything else. The only reason the first one didn't trip is that that the space is a break character as shown above. |
_unix.372563 | I am using linux arch. I installed android according to ArchWiki.I uninstalled every skd component and android studio using pacman -Rs. Also, I removed every hidden folder in the home directory that had something to do with android such .android, .gradle, AndroidStudio. But the environment variable ANDROID_HOME is still set and the following command ps aux | grep 'adb' returns adb -L tcp:5037 fork-server server --reply-fd 4Where is the environment variable ANDROID_HOME set and what means adb -L tcp:5037 fork-server server --reply-fd 4? Is android not properly uninstalled?The command adb returns bash: adb: command not foundEDIT:In a /proc/pid/environ, there are still things like /opt/android-sdk/platform-tools:/opt/android-sdk/tools:/opt/android-sdk/tools/bin | Uninstall Android Studio + SDK | arch linux;environment variables;android;adb | null |
_unix.163110 | For some reason, tftpboot does not work with a colon in filename parameter:group { filename node7:linux/pxelinux.0; host machine_7a { hardware ethernet 02:01:02:02:01:11; fixed-address 192.168.10.8; } host machine_7b { hardware ethernet 02:01:02:02:01:12; fixed-address 192.168.10.9; }}If I change the colon to a dash -, the remote machines can tftpboot fine. Do you know why a colon in the filename parameter doesn't work?UPDATE:Output from /var/log/message when I set the filename to node7-linux/pxelinux.0Jan 22 22:52:48 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:11 via br0Jan 22 22:52:48 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 22:52:49 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:11 via br0Jan 22 22:52:49 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 22:52:51 linux-yp1y dhcpd: DHCPREQUEST for 192.168.10.7 (192.168.10.1) from 02:01:02:02:01:11 via br0Jan 22 22:52:51 linux-yp1y dhcpd: DHCPACK on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 22:52:51 linux-yp1y atftpd[12967]: Serving node7-linux/pxelinux.0 to 192.168.10.7:1024Jan 22 22:52:51 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:11 via br0Jan 22 22:52:51 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 22:52:52 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:11 via br0Jan 22 22:52:52 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 22:52:54 linux-yp1y dhcpd: DHCPREQUEST for 192.168.10.7 (192.168.10.1) from 02:01:02:02:01:11 via br0Jan 22 22:52:54 linux-yp1y dhcpd: DHCPACK on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 22:52:54 linux-yp1y atftpd[12967]: Serving node7-linux/pxelinux.cfg/default to 192.168.10.7:1024Jan 22 22:52:54 linux-yp1y atftpd[12967]: Serving node7-linux/vmlinuz to 192.168.10.7:1025Jan 22 22:52:55 linux-yp1y atftpd[12967]: Serving node7-linux/initrd to 192.168.10.7:1026Jan 22 22:53:14 linux-yp1y dhcpd: DHCPDISCOVER from 
02:01:02:02:01:11 via br0Jan 22 22:53:14 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 22:53:14 linux-yp1y dhcpd: DHCPREQUEST for 192.168.10.7 (192.168.10.1) from 02:01:02:02:01:11 via br0Jan 22 22:53:14 linux-yp1y dhcpd: DHCPACK on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 22:53:15 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:12 via br0Jan 22 22:53:15 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.8 to 02:01:02:02:01:12 via br0Jan 22 22:53:15 linux-yp1y dhcpd: DHCPREQUEST for 192.168.10.8 (192.168.10.1) from 02:01:02:02:01:12 via br0Jan 22 22:53:15 linux-yp1y dhcpd: DHCPACK on 192.168.10.8 to 02:01:02:02:01:12 via br0Output from /var/log/message when I set the filename to node7:linux/pxelinux.0:Jan 22 23:16:57 linux-yp1y kernel: [343021.700191] br0: port 6(vnet6) entering forwarding stateJan 22 23:16:57 linux-yp1y kernel: [343021.700296] device vnet6 left promiscuous modeJan 22 23:16:57 linux-yp1y kernel: [343021.700302] br0: port 6(vnet6) entering disabled stateJan 22 23:16:57 linux-yp1y kernel: [343021.721423] br0: port 7(vnet7) entering forwarding stateJan 22 23:16:57 linux-yp1y kernel: [343021.721467] device vnet7 left promiscuous modeJan 22 23:16:57 linux-yp1y kernel: [343021.721470] br0: port 7(vnet7) entering disabled stateJan 22 23:16:58 linux-yp1y kernel: [343021.742969] br2: port 4(vnet8) entering forwarding stateJan 22 23:16:58 linux-yp1y kernel: [343021.743040] device vnet8 left promiscuous modeJan 22 23:16:58 linux-yp1y kernel: [343021.743042] br2: port 4(vnet8) entering disabled stateJan 22 23:16:58 linux-yp1y ifdown: vnet6 Jan 22 23:16:58 linux-yp1y ifdown: Interface not available and no configuration found.Jan 22 23:16:58 linux-yp1y ifdown: vnet7 Jan 22 23:16:58 linux-yp1y ifdown: Interface not available and no configuration found.Jan 22 23:16:58 linux-yp1y ifdown: vnet8 Jan 22 23:16:58 linux-yp1y ifdown: Interface not available and no configuration found.Jan 22 23:17:01 linux-yp1y kernel: [343025.377050] 
device vnet6 entered promiscuous modeJan 22 23:17:01 linux-yp1y kernel: [343025.393810] br0: port 6(vnet6) entering forwarding stateJan 22 23:17:01 linux-yp1y kernel: [343025.393814] br0: port 6(vnet6) entering forwarding stateJan 22 23:17:01 linux-yp1y kernel: [343025.449562] device vnet7 entered promiscuous modeJan 22 23:17:01 linux-yp1y kernel: [343025.466276] br0: port 7(vnet7) entering forwarding stateJan 22 23:17:01 linux-yp1y kernel: [343025.466280] br0: port 7(vnet7) entering forwarding stateJan 22 23:17:01 linux-yp1y kernel: [343025.557549] device vnet8 entered promiscuous modeJan 22 23:17:01 linux-yp1y kernel: [343025.574265] br2: port 4(vnet8) entering forwarding stateJan 22 23:17:01 linux-yp1y kernel: [343025.574270] br2: port 4(vnet8) entering forwarding stateJan 22 23:17:03 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:11 via br0Jan 22 23:17:03 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 23:17:04 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:11 via br0Jan 22 23:17:04 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 23:17:05 linux-yp1y kernel: [343029.580829] br2: port 4(vnet8) entering forwarding stateJan 22 23:17:06 linux-yp1y dhcpd: DHCPREQUEST for 192.168.10.7 (192.168.10.1) from 02:01:02:02:01:11 via br0Jan 22 23:17:06 linux-yp1y dhcpd: DHCPACK on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 23:17:06 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:12 via br0Jan 22 23:17:06 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.8 to 02:01:02:02:01:12 via br0Jan 22 23:17:07 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:12 via br0Jan 22 23:17:07 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.8 to 02:01:02:02:01:12 via br0Jan 22 23:17:09 linux-yp1y dhcpd: DHCPREQUEST for 192.168.10.8 (192.168.10.1) from 02:01:02:02:01:12 via br0Jan 22 23:17:09 linux-yp1y dhcpd: DHCPACK on 192.168.10.8 to 02:01:02:02:01:12 via br0Jan 22 23:17:12 linux-yp1y kernel: [343035.804008] vnet8: no IPv6 routers 
presentJan 22 23:17:12 linux-yp1y kernel: [343036.148005] vnet7: no IPv6 routers presentJan 22 23:17:12 linux-yp1y kernel: [343036.397004] vnet6: no IPv6 routers presentJan 22 23:17:24 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:11 via br0Jan 22 23:17:24 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 23:17:25 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:11 via br0Jan 22 23:17:25 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 23:17:27 linux-yp1y dhcpd: DHCPREQUEST for 192.168.10.7 (192.168.10.1) from 02:01:02:02:01:11 via br0Jan 22 23:17:27 linux-yp1y dhcpd: DHCPACK on 192.168.10.7 to 02:01:02:02:01:11 via br0Jan 22 23:17:27 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:12 via br0Jan 22 23:17:27 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.8 to 02:01:02:02:01:12 via br0Jan 22 23:17:28 linux-yp1y dhcpd: DHCPDISCOVER from 02:01:02:02:01:12 via br0Jan 22 23:17:28 linux-yp1y dhcpd: DHCPOFFER on 192.168.10.8 to 02:01:02:02:01:12 via br0Jan 22 23:17:30 linux-yp1y dhcpd: DHCPREQUEST for 192.168.10.8 (192.168.10.1) from 02:01:02:02:01:12 via br0Jan 22 23:17:30 linux-yp1y dhcpd: DHCPACK on 192.168.10.8 to 02:01:02:02:01:12 via br0 | dhcpd does not allow filename option with a colon in it | filenames;dhcp;pxe;tftp | null |
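For reference, a sketch of the variant from the question that booted successfully. Note that in stock ISC dhcpd the filename argument is normally a quoted string; the quotes look to have been stripped from the excerpt above:

```
group {
    filename "node7-linux/pxelinux.0";   # the dash variant that tftpbooted fine
    host machine_7a {
        hardware ethernet 02:01:02:02:01:11;
        fixed-address 192.168.10.8;
    }
    host machine_7b {
        hardware ethernet 02:01:02:02:01:12;
        fixed-address 192.168.10.9;
    }
}
```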
_softwareengineering.71461 | Are there any A.I. resources that explain the concepts and present source code, similarly to AI Horizon? I've read books and research papers, but they generally present a conceptual approach without really delving into the source code. | Are there any A.I. resources that explain the concepts and present source code? | books;resources;artificial intelligence | Books: Programming Game AI by Example, by Mat Buckland. Covers lots of ground, with good code examples on the CD for everything in the book. Notably, includes State Machines, Goal Driven Behaviour, Path Finding/Planning and Fuzzy Logic. AI Techniques for Game Programming, by Mat Buckland. The book name is a bit of a misnomer: it's about Neural Nets and Genetic Algorithms. The implementations are in a game format, but the information is more general. Again, all of the source is available on the CD. AI Game Programming Wisdom series, by Steve Rabin. The series is formatted like the Game Programming Gems series, i.e. a collection of articles written by industry professionals and professors. The size of the articles makes them easily digestible in chunks. Some of the articles provide implementations; some are more 'high level' descriptions. Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig. The book's website has the source code for the algorithms presented in the book, in several languages. There is also a discussion group, where you can post questions or start a discussion about the material in the book. Communities & tutorials: AI questions on Stack Overflow. Most questions and answers present code, in various languages. AI Horizon: Computer Science and Artificial Intelligence Programming. AIGameDev.com. It has a fair amount of recent articles with code. For some of the content you might need a paid account, but the insider (free) account grants access to some good articles and videos. The Miscellaneous section of P-99 presents interesting AI problems, like Eight Queens, Sudoku, and crossword puzzles, and their solutions in Prolog. Chatbot tutorial, on ai-programming.com
_softwareengineering.339710 | I am creating an application for my client. I use some libraries released on GitHub under the MIT, BSD and Apache licenses. I am also creating documentation (a PDF file) in which I would like to note which libraries and components I've used. What details about each library should I place beside its name/source to satisfy the MIT, BSD and Apache license conditions? Is it enough to give only the name and licence of the resource? Or should I also include the author's name and the full text of the specific license? | MIT, BSD, Apache License: Create application for client | programming practices;project management;mit license;bsd license | IANAL, but as far as I have seen in open source projects, a LICENSES.txt or LICENSES.md is the most common way to do it, in which you group your dependencies based on their licenses (e.g., first block is MIT; second BSD; third Apache2...). License name + project name should be sufficient. If you want to be a bit nicer, include a link for every license to the official text of the license (e.g., https://opensource.org/licenses/MIT; don't use Wikipedia links), or even copy the text of the license there. If you want to be even nicer, and the project has only a handful of authors (e.g., it's not Apache Maven with a lot of contributors), then you could list those authors too, but this I haven't seen in widespread use. P.S.: I kinda have a small OCD, so I would take extra care to really list all my dependencies. With Maven it's doable with the dependencies plugin, although it can get tricky quite quickly (e.g., what do you do with a project which is under Apache2 but has an (L)GPL dependency? It couldn't be Apache2 in the first place, but are you still allowed to use it as Apache2? According to corporate lawyers: as long as you don't know about the (L)GPL dependency you are good to go. This is why I never look for transitive dependencies, only direct ones :))
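To illustrate the LICENSES.md layout the answer describes (grouped by license, with each project's name plus a link to the official license text), here is a sketch using hypothetical project names and URLs:

```markdown
# Third-party licenses

## MIT License (text: https://opensource.org/licenses/MIT)
- ExampleLib (https://github.com/example/examplelib)
- OtherLib (https://github.com/example/otherlib)

## BSD 3-Clause License (text: https://opensource.org/licenses/BSD-3-Clause)
- ThirdLib (https://github.com/example/thirdlib)

## Apache License 2.0 (text: https://www.apache.org/licenses/LICENSE-2.0)
- FourthLib (https://github.com/example/fourthlib)
```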
_webapps.57932 | EDIT: You can find additional information in Is it possible to create a Gmail filter that works on headers other than From, To, Subject? I have two email accounts: one is my main Gmail account, which I check regularly (for simplicity I name it [email protected]), and the other is from the university mail service where I currently study (I name it [email protected]). As the disk quota of the university mail server is very low and there are lots of daily announcements from university staff that contain huge attachments, after about 5-7 days the whole disk quota is used and afterwards no emails are delivered to my inbox. So I created a filter in the university mail service (which uses Zimbra) to forward all received messages to the email alias [email protected] and then delete the message. Now the main problem: I want to create a filter in Gmail to mark these forwarded messages with a custom label, but the information included in the message header, especially the FROM and TO fields, is not enough to distinguish these messages from other ones (as shown below). As you can see, there is no sign of the alias I used as the Forwarding Address in Zimbra's filter settings. I can't even rely on the TO field (which contains [email protected]), because some messages are sent to both of my email addresses, so mistakes would occur if I used this field. By the way, these forwarded messages are marked by Gmail as being sent via mail.uni.com. Is there any way I can use this to filter messages? Did I miss something while setting up the filter in Zimbra? | How to create a filter in Gmail to identify messages forwarded from Zimbra?
| gmail filters;zimbra | Use the deliveredto: search keyword. Search for messages with a particular email address in the Delivered-To line of the message header. Example: deliveredto:[email protected]. Meaning: any message with [email protected] in the Delivered-To: field of the message header (which can help you find messages forwarded from another account or ones sent to an alias). (Google support) So, searching for (or using a filter with) deliveredto:[email protected] should give you what you need.
_softwareengineering.263010 | I have a Visual Studio project that is using an MSBuild task to generate some code files. (The inputs are XML files, and the generated code is quite lengthy but nothing special, just a lot of boilerplate.) I'm not sure how I should handle the generated files: (1) Not include the generated files in the project, and use MSBuild to make sure they get compiled (by adding respective <Compile...> tags to the <Target>). (2) Include the generated files in the project, but exclude them from source control. Any other options? Are there issues with these options? Experiences so far? Recommendations or best practices? | Handling generated files in a Visual Studio Project | visual studio;projects and solutions;msbuild | Personally I include only the source files WRT source control. It's a rare case when generated files should be added to source control; maybe if they were generated once and never changed, and the generation step is lengthy or complex, but otherwise I can't think of a good reason. For compilation: if they need to be added to the project, then I tend to add them. I prefer not to have hidden compilation files as part of the project, as I prefer to know what it's really building without any surprises. I do tend to put generated files in a filtered folder, though, so they are tucked away from daily view.
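A sketch of the first option from the question, compiling generated files without listing them in the project file. The target name, the hook point, and the output path are illustrative assumptions, not from the original post:

```xml
<!-- In the .csproj: pick up generated .cs files just before compilation.
     Generating into $(IntermediateOutputPath) (the obj folder) also keeps
     them out of source control wherever obj is already ignored. -->
<Target Name="IncludeGeneratedCode" BeforeTargets="BeforeCompile">
  <ItemGroup>
    <Compile Include="$(IntermediateOutputPath)Generated\**\*.cs" />
  </ItemGroup>
</Target>
```

Hooking BeforeCompile (a standard extension point in Microsoft.Common.targets) means the item group is populated after the code generation target has run but before the compiler sees the file list.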