id | question | title | tags | accepted_answer |
---|---|---|---|---|
_webapps.13279 | So we have this awesome Flipboard app on the iPad now... or Zite.com, to add another one to the game. I don't own an iPad, yet I have social profiles everywhere, full of awesome news streams that I would LOVE to read in a beautiful way. Is there any Flipboard clone that intelligently presents the most relevant news to me in a browser, not on an iPad? | Is there something like Flipboard for normal browsers? | webapp rec;html5;news | Have you looked at Feedly? It is linked to Google Reader feeds and includes your personal Twitter feed as well. You can also customize layouts, from lists to gallery views, look for suggestions, save articles for later and get recommendations. I don't think it is available for IE, but you can find it for Chrome, Firefox, iPhone and now Android. |
_webmaster.102083 | Please don't close this for being a duplicate; I have already tried the other answered questions and am still having issues. First, I know why the certification for the site breaks when using www. The cert is for just example.com, not www.example.com, and adding www is technically a subdomain and therefore breaks the cert, causing browsers to do the whole 'bad/untrusted cert' thing. My situation is that I have certs for just example.com, and sometimes Google indexes pages using www.example.com. This is obviously an issue, because when people search for things on my site, it is rather annoying and bad press to encounter the 'bad cert' thing. Is there any possible way to force https access and remove the www? I have tried using htaccess to do so, but am still running into the bad cert issue. It should be noted that my current htaccess does what it needs to (force https/no www) when initially connecting via http. It only fails to do so when the initial connection is made with https and using www (presumably the browser is refusing the connection before htaccess has a chance to act). Current htaccess:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)(.+) [OR]
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} ^(www\.)?(.+)
RewriteRule ^ https://%2%{REQUEST_URI} [R=301,L]
Alternatively, if there were a way to keep Google from indexing the pages with www, that would also be an acceptable workaround for me. | SSL/https breaking when using www | htaccess;https;no www | Browsers don't further access the server when the certificate is not trusted/valid, so the .htaccess redirect can't work (it should work as soon as the user adds the certificate as an exception). The best solution is to get a certificate for the hostname with www, too. This not only helps with redirecting users to the correct hostname when they follow links from search results or bookmarks, it also helps those users who type your hostname with www (which is commonly done, even if it's advertised without www), which might still happen long after search engines have removed the hostname with www from their indexes. If that's not possible, you can at least signal search engines that they should prefer the hostname without www. A 301 redirect is the best way here. If that's not possible, the second best is the canonical link type. Of course, both can only work for search engines that ignore the bad certificate (I guess most do). You could also try to use the search engines' webmaster tools to set the preferred hostname. For Google, see Set your preferred domain (www or non-www) (but I don't know if this works for hosts without a trusted certificate). In any case, you have to wait. After some time, the hostname with www should disappear from search results. |
_unix.309362 | I have a file with the content shown below:
***************Encrypted String***************
 ezF7LcHO0Zlb+8kkBeIwtA==
**********************************************
I need to extract only the encrypted password from the above. I searched Google for an answer, and I found this example (below), but it didn't work: sed -n '/***************Encrypted String***************/,/**********************************************/p' $file | Print Lines Between Two Patterns with SED | shell script;sed;regular expression | null |
_unix.186996 | I need to frequently log in to many machines, but I can only log in to them from a proxy machine or by using an SSH tunnel (tunneling through that proxy machine). The problem is that I cannot use authorized_keys on the proxy machine, so I need to enter the password every time I set up the tunnel. How can I automate this? I was thinking about combining expect with some way to enter the password automatically without having to store it in the open. I am using Linux on all of the machines mentioned. | Automating setup of the SSH tunnel | ssh;password;ssh tunneling;expect | null |
_codereview.141568 | I am job-searching and putting together some code samples. Just wondering how it looks. Here is an example of some code, from a custom WordPress theme, that works and is in production: Homepage Template
<?php /* Template Name: Homepage Requirements: Advanced Custom Fields plugin */ ?>
<?php get_header(); ?>
<section class="hero"> <div class="container"> <main> <p class="h1"> <?php the_field('hero_content'); ?> </p> </main> </div> <div class="testimonials"> <ul class="slider"> <?php get_template_part('partials/testimonials'); ?> </ul> </div> </section>
<main id="main"> <?php the_content(); ?> <div class="details"> <div class="heading"> <h1><?php _e('What We Build','ssae'); ?></h1> </div> <?php get_template_part('partials/details'); ?> </div> <div class="stats"> <?php while ( have_rows('stats') ) : the_row(); $label = get_sub_field('label'); $value = get_sub_field('value'); ?> <div class="stat"> <div class="label"><?php echo $label; ?></div> <div class="value"><?php echo $value; ?></div> </div> <?php endwhile; ?> </div> </main>
<?php get_template_part('partials/callout'); ?> <?php get_template_part('partials/featured-cs'); ?> <?php get_footer(); ?>
Partials: callout
<?php /* Callout, links to contact page. */ ?>
<section class="callout"><a href="<?php echo site_url('/contact/'); ?>"> <div class="container"> <div class="text"> <span class="icon-calendar solo"></span> <h2 class="h1"><?php _e('Schedule a Discussion','ssae'); ?></h2> </div> </div> </a></section>
details
<?php /* Display a title, block of text, and icon for each item. Items pulled from a repeater field in the backend. */ ?>
<?php while ( have_rows('details') ) : the_row(); $title = get_sub_field('title'); $icon_class = get_sub_field('icon_class'); $text = get_sub_field('text'); ?> <div class="detail"> <span class="icon-<?php echo $icon_class; ?> solo"></span> <h2><?php echo $title; ?></h2> <?php echo $text; ?> </div> <?php endwhile; ?>
testimonials
<?php /* Display the 3 Most Recent Testimonials (a custom post type) */ ?>
<?php $loop = new WP_Query( array( 'post_type' => 'testimonials', 'posts_per_page' => 3 ) ); while ( $loop->have_posts() ) : $loop->the_post(); ?> <?php $thumb_id = get_post_thumbnail_id($post->id); $thumbnail = wp_get_attachment_image_src( $thumb_id, 'thumbnail' ); $thumb_alt = get_post_meta($thumb_id, '_wp_attachment_image_alt', true); $logo_id = get_field('logo'); $logo = $logo_id['sizes']['grid-thumb']; $logo_alt = get_post_meta($logo_id, '_wp_attachment_image_alt', true); ?> <div class="testimonial"><div class="wrap"> <div class="header"> <div class="headshot"> <img src="<?php echo $thumbnail[0]; ?>" alt="<?php echo $thumb_alt; ?>"> </div> <p class="subheading"><?php the_title(); ?></p> </div> <div class="text"> <?php the_content(); ?> </div> <?php if ( is_page_template('startups-page.php') ) : ?> <img class="logo" src="<?php echo $logo; ?>"> <!-- needs an alt attribute --> <?php endif; ?> </div></div> <?php endwhile; wp_reset_query(); ?>
featured-cs
<?php /* Display selected case studies (a custom post type). Items are chosen in a custom field for this page in the backend. */ ?>
<section class="featured case-studies"> <h1><?php _e('Case Studies','ssae'); ?></h1> <div class="gallery"><div class="container"> <?php if( have_rows('case_studies') ): while ( have_rows('case_studies') ) : the_row(); $post_object = get_sub_field('post'); if( $post_object ) : $post = $post_object; setup_postdata($post); get_template_part('partials/article'); wp_reset_postdata(); endif; endwhile; endif; ?> </div></div> </section>
article
<?php /* Layout for displaying a thumbnail, title, link, and categories for an article. */ ?>
<a class="article" href="<?php the_permalink(); ?>"> <div class="image"> <?php if ( has_post_thumbnail() ) : echo get_the_post_thumbnail( $post->ID, 'grid-thumb'); else : ?> <img src="https://placehold.it/600x440" alt="placeholder" /> <?php endif; ?> </div> <h2><?php the_title(); ?></h2> <span class="category"> <?php /* show a comma-separated list of associated categories as just text, no links */ $cats = ''; foreach( (get_the_category()) as $category ) { if ($category->cat_name != 'Uncategorised') { // don't print this category $cats .= $category->cat_name . ', '; } } echo rtrim($cats, ', '); ?> </span> </a> | Custom Homepage | php;wordpress | null |
_softwareengineering.197953 | I have some questions about VPS and web hosting. As far as I understand, a VPS is a virtual machine on which we can do anything we can do with our local machine: install software, change settings, etc. Web hosting is where we have only a folder in which we place our web site. However, most providers currently advertise their service as VPS web hosting. This confuses me: does that mean they are selling a VPS service which can only host web sites? I need a virtual machine to host one RESTful Java service using Tomcat and Jersey, with MySQL at the backend. My plan was to hire a VPS machine and install Tomcat 7 and MySQL on it. Is this the right way to go? Many thanks. | VPS vs Web Hosting: Which one is good for java web services | java;web services;web hosting | null |
_codereview.11312 | I'm working on an application that turns a raw camera image into a binary (pure black/white) image, and I need this to happen as fast as possible for swift further processing. This is what my code currently looks like: public static boolean[][] createBinaryImage( Bitmap bm ){ int[] pixels = new int[bm.getWidth()*bm.getHeight()]; bm.getPixels( pixels, 0, bm.getWidth(), 0, 0, bm.getWidth(), bm.getHeight() ); int w = bm.getWidth(); // Calculate overall lightness of image long gLightness = 0; int lLightness; int c; for ( int x = 0; x < bm.getWidth(); x++ ) { for ( int y = 0; y < bm.getHeight(); y++ ) { c = pixels[x+y*w]; lLightness = ((c&0x00FF0000 )>>16) + ((c & 0x0000FF00 )>>8) + (c&0x000000FF); pixels[x+y*w] = lLightness; gLightness += lLightness; } } gLightness /= bm.getWidth() * bm.getHeight(); gLightness = gLightness * 5 / 6; // Extract features boolean[][] binaryImage = new boolean[bm.getWidth()][bm.getHeight()]; for ( int x = 0; x < bm.getWidth(); x++ ) for ( int y = 0; y < bm.getHeight(); y++ ) binaryImage[x][y] = pixels[x+y*w] <= gLightness; return binaryImage;} As you can see, it uses a global threshold to tell apart the black and white pixels. I recently switched from bm.getPixel to bm.getPixels, and this yielded a speed improvement of ~33%. Now my question is if there is anything else I can do to speed things up? It currently takes <0.5 seconds to process a 558x256 image, and this is reasonable for my application, but I wonder if there's room for any other simple optimizations. | Fast image binarization on Android | java;android | You can get rid of the nested loop, replacing this: for ( int x = 0; x < bm.getWidth(); x++ ) { for ( int y = 0; y < bm.getHeight(); y++ ) { c = pixels[x+y*w]; lLightness = ((c&0x00FF0000 )>>16) + ((c & 0x0000FF00 )>>8) + (c&0x000000FF); pixels[x+y*w] = lLightness; gLightness += lLightness; } } with this: int size = bm.getWidth()*bm.getHeight(); for (int i = 0; i < size; i++) { c = pixels[i]; // etc. } It will at least save one x+y*w operation per pixel. It may be advantageous to calculate size into a separate variable as above, instead of doing it inside the loop, as in for (int i = 0; i < bm.getWidth()*bm.getHeight(); i++). The condition is checked at each round, so you would end up calling the getters at each round, and who knows what's in them? The JIT compiler may be able to optimize this kind of stuff away - and probably will, if the getters are simple return class member statements - but at least it's not guaranteed. More substantial improvements can be gained by sacrificing perfection. Maybe you needn't take each pixel into account for calculating the threshold? Perhaps count only every 10th in each direction? Then you would probably use the nested loop again. Like this: int width = bm.getWidth(); int height = bm.getHeight(); for ( int x = 0; x < width; x += 10 ) { for ( int y = 0; y < height; y += 10 ) { // etc. } } |
_unix.167621 | I'm dual-booting Windows 8 and Kali Linux. On Linux, my wireless connection doesn't work; it keeps asking for the wifi password but refuses to connect. The same network works fine on Windows 8. Is there some setting I should change to fix this on Linux? | Kali Linux won't connect to wireless network | wifi;kali linux | null |
_unix.195987 | I have a Unix machine with TrueCrypt 7.1a installed. I'm trying to mount a volume to test with the command line, but evidently I don't have the truecrypt command installed. Did the command-line functionality come with the install before the software went crazy, or am I doing something crazy? Or is there a place to download the necessary files to put it on my machine? | How do I get the Truecrypt CLI installed? | osx;gpg;truecrypt | So I found it out myself by scouring the web. You just use the path to the TrueCrypt executable in the app bundle: /Applications/TrueCrypt.app/Contents/MacOS/TrueCrypt -h The above command will get you the help option to see what other options there are for TrueCrypt. Also, here is one of the sources I found for the code: mount-dev-volumes.sh |
_cstheory.11402 | Suppose $f$ is a submodular set function on a universe $U$ of size $n$. For $k \in \{0,\ldots,n\}$, let $$ F(k) = \operatorname*{\mathbb{E}}_{X \in \binom{U}{k}} f(X), $$ where $\binom{U}{k}$ is the set of all subsets of $U$ of size $k$. We are interested in proving the following inequality: $$ F(k) \geq \frac{k}{n} F(n) + \frac{n-k}{n} F(0). $$ One can prove this inequality by induction on $k$ (or even directly, if we're careful), but this doesn't explain why the inequality holds. Instead, we will use some form of term rewriting. Submodularity directly implies the following inequality, for $k \in \{1,\ldots,n-1\}$: $$ F(k) \geq \frac{1}{2} F(k+1) + \frac{1}{2} F(k-1). $$ Imagine applying this inequality over and over again, in an arbitrary way. Here is an example for $n = 3$ and $k = 2$: $$ \begin{align*} F(2) &\geq \frac{1}{2} F(3) + \frac{1}{2} F(1) \\ &\geq \frac{1}{2} F(3) + \frac{1}{4} F(2) + \frac{1}{4} F(0) \\ &\geq \frac{5}{8} F(3) + \frac{1}{8} F(1) + \frac{1}{4} F(0) \\ &\geq \frac{5}{8} F(3) + \frac{1}{16} F(2) + \frac{5}{16} F(0) \\ &\geq \cdots \\ &\geq \frac{2}{3} F(3) + \frac{1}{3} F(0). \end{align*} $$ The dots hide infinitely many steps. There is a way to make this argument completely rigorous. One can cook up some potential function that increases whenever one applies any instance of submodularity (for example, the expectation of squared cardinality). Taking the consequence of $F(k)$ maximizing this potential, it's easy to see that we get an inequality of the form $$ F(k) \geq \alpha F(n) + \beta F(0). $$ If $f$ is any modular function then there is equality. Taking $f = 1$ we find that $\alpha + \beta = 1$, and taking $f(X) = |X|$ we determine $\alpha = k/n$. This argument extends to prove more interesting inequalities on submodular functions. Does this sort of reasoning look familiar? | Term rewriting for proving inequalities | co.combinatorics;term rewriting systems;submodularity | null |
_webmaster.45379 | I have a website that changes hourly; new content is added from different feeds, so there is no original content at all. I have detail pages that show the details of every item. Then I have pages that link to the detail pages; they are overviews of all the detail pages. Think of eBay, for example: you have places where you search and browse, and then you can get into an auction. So I thought, since the detail pages have a short life, I index and follow them while they are active, then I make them noindex and follow when they become inactive. In my sitemap I put the priority of detail pages really low, around 0.2, and the browse or overview pages, since they are permanent and remain there forever, have a priority of 0.9. The other problem I have is that the sitemap changes all the time. Google Webmaster Tools seems to keep the older sitemaps, and it seems like I have to go resubmit the sitemap once in a while. Why is that? Why doesn't Google automatically re-read the sitemap? So I have two questions here: 1- Am I doing the right thing with the sitemap and the priority values, and in noindexing short-lived pages? 2- Do I need to resubmit the sitemap every time it changes (that would be hourly) or every once in a while? I noticed I can call a URL to resubmit the sitemap, according to Google's documentation. Do you think I should do that, and if so, how often? | SEO Sitemap questions about a site that changes all the time | seo;site maintenance | null |
_unix.185632 | I have installed a mail server successfully on my VPS according to https://www.digitalocean.com/community/tutorials/how-to-set-up-a-postfix-e-mail-server-with-dovecot. I created user joe with password ps1 as root successfully. When I log in to my VPS, [email protected] can send and receive email via Postfix and Dovecot. There are two passwords here for joe: one for joe to log in to the VPS, and one for joe to access email. Which one should I fill in as the password in Thunderbird? I tried both of them; both failed. Why can't I connect to my mail on my VPS via Thunderbird locally? | thunderbird can't connect to my email box | email;postfix;dovecot | null |
_webmaster.20010 | Is there a way I can load a CSS stylesheet into a site I'm viewing in a browser? I'm working on my company website. For obvious reasons I don't want to make changes to the live version of the site while I develop. Normally there is a development version I can work on, but this is currently down. All I need to do is add CSS and see these changes as I go along. Firefox's Web Developer can add a local stylesheet, but as soon as you refresh the page or navigate away it stops being added and you have to manually add it again (even with persist features turned on). With all the incremental changes I need to make, this isn't really a solution for me. | Load custom CSS through browser to site being viewed? | css;web development | You can put an alternate stylesheet into the header: <LINK href="mystyle.css" title="Medium" rel="alternate stylesheet" type="text/css"> You can then select it from Firefox with View->Page Style->your stylesheet. Anyone else viewing the page could do that, but I doubt they will think to do it. Presumably it's not a big problem if they do, though. This addon, amongst other things, lets you specify a local stylesheet if you can't change the live site. (I've not used it myself.) |
_scicomp.8782 | I need to find the zero of a function $f(\lambda)$ which is of the form $\sum \frac{c_i^2}{(1+\lambda d_i)^2} - 1$. I tried using Newton's method, and it works sometimes, but it is highly dependent on the initial choice, and the chances of divergence are very high, since the function is almost flat. What would be the best method to find the zero of such a function? For now, I replaced Newton with the secant method, and it seems to work better. | Best method to find the zero of a decreasing function numerically | newton method;solver | Off the top of my head, there are a number of things that may be going wrong. You might want to verify the following: Are the function values or derivatives you are computing numerically stable? Is the rounding error in their evaluation smooth? You can verify this visually by plotting an extremely small interval around a known root such that the function values are $\pm \varepsilon_\mathsf{mach}$. If your function or derivative changes sign near the root, Newton's method won't work. What termination criteria are you using? If you're looking for a root $f(\lambda)=0$ with $\lambda \in [a,b]$, do you stop when $|b-a|<\tau$ or when $|f(\lambda)|<\tau$, where $\tau$ is your tolerance? In either case, what is your rationale for your choice of $\tau$, and how does it fit with the error/interval you observe in the previous point? Newton's method is a good hammer, but there are many functions that will look a lot like your thumb. The secant method is a good choice for functions that are almost linear in the search interval, but will perform poorly as soon as you depart from that assumption. A good alternative is the so-called Illinois algorithm. A major advantage of interval methods is that they don't rely on the function or any of its derivatives being numerically smooth, as you can run them until $fl(f(\lambda))=0$ or until the search interval $[a,b]$ is numerically zero, i.e. $fl(a)=fl(b)$. |
_unix.255305 | Say I have a script that is always run from an interactive shell. I would like this script to launch an interactive subshell that is a replica of the parent (i.e., all environment variables, etc. preserved) and then run arbitrary commands (specifically, I would like to amend PS1 and define a few aliases). I need a genuine subshell (rather than using source, otherwise environment variables won't persist after the script has finished) and it needs to be shell agnostic (i.e., works with bash, zsh, etc.) The only way I've been able to accomplish this, so far, is to launch and script the shell with expect. This is a bit horrible, but it sort-of-works:
expect <(cat <<-EXPECT
	spawn $SHELL
	send "export FOO=\"$foo\"\r"
	send "PS1=\"(foo:$FOO) \\\$PS1\"\r"
	send "alias foo=\"do_somthing --foo=$FOO\"\r"
	send "clear\r"
	interact
EXPECT
)
Is there a better way? (I also notice that this approach appears to introduce a problem with the screen redrawing and character encoding.) The problem with doing something like PS1=$foo $SHELL is that PS1 can be overridden by the shell's global and user .rc files. There doesn't seem to be a shell agnostic way of providing a custom .rc file. | Start a subshell then run commands | shell;expect;subshell | null |
_codereview.108629 | I was reading up on implementing graphs in Python and I came across this essay at python.org about graphs, so I decided to implement it, but with weighted edges. I start with arcs and their cost using a list of lists, and then iterate through it, building a dictionary (adjacency list format) that represents the undirected weighted graph. I am looking for suggestions on how to improve the following code. Here I need to check every time if a vertex already exists in the dictionary, using an if x in dict.keys() statement, which takes linear time for each node.
data=[['A','B',5], ['A','C',4], ['B','C',2], ['B','D',3], ['C','D',10], ['E','F',1],['F','C',6]]
graph={}
for row in data:
    if row[0] in graph.keys():
        graph[row[0]][row[1]]=row[2]
        if row[1] not in graph.keys():
            graph[row[1]]={}
            graph[row[1]][row[0]]=row[2]
        else:
            graph[row[1]][row[0]]=row[2]
    else:
        graph[row[0]]={}
        graph[row[0]][row[1]]=row[2]
        if row[1] not in graph.keys():
            graph[row[1]]={}
            graph[row[1]][row[0]]=row[2]
        else:
            graph[row[1]][row[0]]=row[2]
print graph
The output of the above code is:
>>{'A': {'C': 4, 'B': 5}, 'C': {'A': 4, 'B': 2, 'D': 10, 'F': 6}, 'B': {'A': 5, 'C': 2, 'D': 3}, 'E': {'F': 1}, 'D': {'C': 10, 'B': 3}, 'F': {'C': 6, 'E': 1}} | Converting a List of arcs into Adjacency list representation for graph using dictionary | python;matrix;graph | Overall this is pretty good. Here are a few suggestions for making it even better: You have a lot of repeated code in your if branches. Since this code will run in either branch, it's better to pull it out of the branches. This reduces code repetition, and makes it clearer that this code will always run. Here's what it reduces to:
for row in data:
    if row[0] not in graph.keys():
        graph[row[0]]={}
    if row[1] not in graph.keys():
        graph[row[1]]={}
    graph[row[0]][row[1]]=row[2]
    graph[row[1]][row[0]]=row[2]
Next you have the problem that you need to initialise every vertex with one empty dictionary. Rather than doing this by hand, you can use collections.defaultdict from the standard library. You specify a default factory method, and whenever you try to access a key which hasn't already been set, it fills in the default for you. Now the code becomes:
import collections
graph = collections.defaultdict(dict)
for row in data:
    graph[row[0]][row[1]]=row[2]
    graph[row[1]][row[0]]=row[2]
It can be a bit unwieldy to follow these numeric indices around. You can use tuple unpacking at the top of the for loop to name the individual parts of the row:
for vertex0, vertex1, weight in data:
    graph[vertex0][vertex1] = weight
    graph[vertex1][vertex0] = weight
It would be better to wrap your graph-making code in a function, graph_from_data(), so that you can reuse it in other modules. You should also move the bits of this code that can't be reused into an if __name__ == '__main__' block; this means the same file can be both a module and a script. If I do an import from this file, I won't get an example graph printed as a side effect. |
_softwareengineering.81794 | I'm about to begin a big PHP project with a friend. It's my first time using PHP, and I've been wondering whether I should try developing on Linux, since it's so popular. I've had some past experience with Linux, and the choice of an editor won't be hard since I know vim (though I've looked at VS.PHP and it's putting me off the change). Does using Linux when developing PHP (or any web language) give me an advantage? | What advantages does Linux give me when developing in PHP for the web? | web development;php;linux | It depends what you call web development and how you want to work. For instance, running Photoshop natively is impossible (sure, with some VM or emulation there are ways to do that, or you can simply use GIMP). If you're planning to do pure coding - it depends on what you love during development. You won't get as good a live editor as Dreamweaver, although Eclipse and NetBeans do the job of an IDE. Sure, Eclipse would be the obvious choice here. If you like the WAMP server on Windows, XAMPP is available on Linux, but it's not as simple. I usually end up with just apache2 and the needed modules. On the other hand: Make/bash.sh/fab files feel at home under Linux, and they may increase your performance a lot for repetitive commands. Sure, there are .bat files, but under Linux it's way easier and way clearer how a script should work, what commands it should use, etc. Because it's Linux, you will learn how to deploy on such servers much faster. If you learn VIM (that takes some time) - it's the fastest editor around. Emacs is also fast, but nowhere near VIM's speed of editing. Sure, don't jump on it too soon - it will scare you! So that's 3 points for both sides. All in all - Linux is just an OS. Tools make it good, and the person using it makes it fast/slow. I had problems when I needed older versions of PHP, but overall I use Linux every day not because it's better for development, but because it's a way better OS, although it has a steep learning curve. I must say that I don't have huge experience developing in PHP under Linux, so I might be missing some points. Talking about other web languages: I don't really know about Ruby, but I heard that it's better than on Windows due to some(?) services and system tools that download gems easily. Django is way better on Linux - it runs better, and it takes half as long to deploy as on Windows (just for developing). It's easy to deploy on Linux servers and a pain in the ass to do the same on Windows production servers. Finally, I can just recommend trying it, not because it may bring some speed to your development, but because it is Linux and it is awesome. |
_cs.1394 | I'm trying to figure out a way I could represent a Facebook user as a vector. I decided to go with stacking the different attributes/parameters of the user into one big vector (i.e. age is a vector of size 100, where 100 is the maximum age you can have; if you are, let's say, 50, the first 50 values of the vector would be 1, just like a thermometer). Now I want to represent the Facebook interests as a vector too, and I just can't figure out a way. They are a collection of words, and the space that represents all the words is huge; I can't go for a model like a bag of words or something similar. How should I proceed? I'm still new to this; any reference would be highly appreciated. | How to represent the interests of a Facebook user | machine learning;modelling;social networks;knowledge representation | The interests are categorical data and may be modeled as binary variables (a user either likes them or he does not). You can subsume little-used categories under broader categories. For example, a user who likes a little-known horror movie can simply be marked as liking horror movies. You can even subsume such items under multiple categories if they belong to several. For what you can do with the data, see A Review on Data Clustering Algorithms for Mixed Data |
_softwareengineering.181416 | My company uses a mix of onsite and, increasingly, offsite contractors for development of websites and online applications. Our platform uses a mix of open source software and libraries that we've made a number of modifications to over the years. Some of the modified software is licensed GPLv2 without the linking exception. For various reasons, we do not want the source to be made public. The concern is that if we supply the binaries for our platform to our offsite developers, we are obligated to supply the source code upon request. Additionally, there will be times when we need to distribute the source of our modified libraries. From there, nothing would seem to preclude the contractor from redistributing the work. The GPL FAQ states: . . . when the organization transfers copies to other organizations or individuals, that is distribution. In particular, providing copies to contractors for use off-site is distribution. Furthermore, the GPL states: You may not impose any further restrictions on the recipients' exercise of the rights granted herein. The question is: what can be done to restrict offsite contractors from redistributing our modified code? As a note, I think the GPLv3 addresses this concern with the following clause, so my question is specifically about GPLv2-licensed modified code: You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. | What can a company do to restrict offsite contract developers from redistributing GPLv2-licensed code modifications? | licensing;gpl;contract | In all seriousness, get an attorney experienced in these matters. It seems to me that your contractors are working on your behalf, accessing your software, and this is not distribution (from a common-sense standpoint). I would ensure that the contract with your contractors is a work for hire, causing you to own their work on your behalf (like an employee). If this is not the case, you likely don't own the copyright to their modifications anyway. Common sense would dictate that with an appropriate work-for-hire contract, whereby they are working on your software on your behalf and not for their own use, there would be no conflict with the GPL. |
_cs.41170 | We just had an interesting thought for a routing algorithm for people carpooling. Imagine the following situation: Person 1 is driving his car from the south of city A to city B, far to the north. He is picking up person 2, who is starting in the west of city A, and person 3, who is starting in the east of city A. Persons 2 and 3 only use public transport. Most likely, the optimal solution will be that they meet somewhere in the center or northern part of city A and then drive on from there. Any hints on an algorithm to find that solution? | Route planning for a car driver picking up people using public transport | algorithms;graph theory;routing | A generalized version of this problem is likely $NP$-complete; however, given that very few people are sharing the same car, you can probably get away with an exact, exponential-time algorithm. Assuming your road network is given as a graph, for every node you will want to store: the earliest time each person using public transport can get to that node, and, for every subset of people sharing the car, the earliest time the car can get to that point with that subset of people in it. Basically, if you have $n$ persons and one car, you would get a graph with $2^n+n$ layers (one layer for each subset and one layer for each person using public transport). You can use a Dijkstra-like algorithm to compute these values. You do a multi-start search, starting from the homes of all the people involved. Whenever the car gets to a new node, you check whether any people are ready to join the car; if so, you branch (the car continues without picking them up, but also moves to any layers in the graph that can be reached by picking people up). Whenever a person gets to a new node, you check whether the car has already reached that node (and also branch). Note that it is not really branching, but rather pushing more nodes onto the priority queue. |
_unix.326922 | I accidentally deleted everything in ~/.config/xfce4 in the process of backing it up (yeah, laugh it up). Simple question, is there a way to have xfce write all configurations currently in memory back to disk? | write existing xfce4 configurations to disk | xfce | null |
_unix.162575 | I'm trying to install MPLAB X onto my kali Linux 64-bit OS and every time I get to the last part of installation I receive this message:root@kali:~/Desktop# sudo chmod 755 mla_v2014_07_22_linux_installer.runroot@kali:~/Desktop# sudo ./MPLABX-v2.20-linux-installer.sh 64 Bit, check libraries Check for 32 Bit libraries These 32 bit libraries were not found and are needed for MPLAB X to run: libc.so libdl.so libgcc_s.so libm.so libpthread.so librt.so libstdc++.so libexpat.so libX11.so libXext.soWhen I enter this command I get this message:root@kali:~/Desktop# sudo apt-get install libc6:i386 libx11-6:i386 \ libxext6:i386 libstdc++6:i386 libexpat1:i386Reading package lists... DoneBuilding dependency tree Reading state information... DoneE: Unable to locate package libc6E: Unable to locate package libx11-6E: Unable to locate package libxext6E: Unable to locate package libstdc++6E: Couldn't find any package by regex 'libstdc++6'E: Unable to locate package libexpat1How do I find these libraries?right now it's killing me, It shouldn't be this complicated!!! | where can I find the 32-bit libraries needed to run MPLAB X? | kali linux;libraries | null |
_cs.67178 | I have been reading some material about undecidability of a language, and I am not quite sure of one part. For what I see:I start with a decider H, that has an input <M,w> where M is a Turing Machine and w is a string, so I have the following:H(<M,w>)=accept if M accepts w rejects if M rejects wall fine until this point. Now it says that I construct a new Turing Machine D, and this TM has H as a subroutine. If I make a graph I imagine something like this:------------- D that is a TM| ----- || | H | || |____| ||___________|So it says that D calls H to determine what M would do if the input is M, it means something like:H(M,<M>)Now it says that it should do the opposite this decider H, it means that:H(M,<M>)=reject if M accepts <M> accepts if M rejects <M> ---------(1)and this procedure can be generalized for D like:D(<M>)=accept if M does not accept <M> ----------(2) reject if M accept <M>I have two questions here, for not opening another thread, but in part (1) why the decider H makes the opposite? What would happen if it does not? I have read in a book that the argument is that sometimes a program can receive a program as an input, such as in a compiler, why it decides to do the opposite?The interpretation of part (2) does it mean that the Turing Machine D, with input M, this input refers to the same Turing Machine that was put as an input for the decider H? At the ends there is a contradiction, but I cannot get it, why in step (1) it is decided to do the opposite? | undecidability proof by using Turing Machines | computability;turing machines;undecidability | null |
_unix.117742 | Is there a diagram that shows how the various performance tools such as ip, netstat, perf, top, ps, etc. interact with the various subsystems within the Linux kernel? | Diagram of Linux kernel vs. performance tools? | linux;performance | I came across this diagram which shows exactly this. In the above you can see where tools such as strace, netstat, etc. interact with the Linux kernel's subsystems. I like this diagram because it succinctly shows where each tool latches on to the Linux kernel, which can be extremely helpful when you're first learning about all the tools and their applications.Source: Linux PerfToolsReferencesLinux Performance |
_softwareengineering.37798 | My team creates a lot of one-off web forms. Most of these forms just send an e-mail, and a few do a simple database write. Right now, each form lives in its own separate solution in Visual Studio Team Foundation Server. That means we have close to 100 different form projects, which makes it difficult to maintain consistency. Each form is unique in that the fields are different, but all of them do pretty much the same thing.I'm looking to condense these somehow, and I could really use some guidance. Should I try to create one solution file with all of our form projects in it? There isn't a lot of plumbing code, although I could create a few helper classes to help with e-mail formatting and such. It would be very helpful to be able to share CSS, JavaScript, controls and images across projects.Given that we're a Microsoft shop, are there any tangible benefits to going with something like MVC over Webforms for this specific scenario? I am sold on the concept of MVC as a whole, but would it help me pull together a 15-field data collection form more efficiently if all that form does is send an e-mail? The form that got me thinking about this had a good bit of logic built in to show and hide fields based on the user's responses and seems like it would have been less efficient to use MVC and jQuery. | How to organize repetitive code? | design;refactoring | null |
_softwareengineering.228745 | I've about seven modules arranged like so:ServiceProcessingCommonAccountEmailSchedulingI try to make it my policy to restrict code to the module that actually uses it. Code that is shared by multiple projects (3+) is sent to common. However, there are a few classes that are only used by two projects. In my most recent example, both Account and Processing need some Image Processing done.Is it a code smell two have the same classes found in two modules? Should I move duplicate code into common as soon as it it's used more than once? | Code Duplication in Multi-Module Project | design;refactoring;code smell | null |
_unix.244181 | I ran smartctl -l xerror on a Seagate ST31000528AS (1TB disk with 512-byte sectors), and it gave me (in part):Error 597 [16] occurred at disk power-on lifetime: 11903 hours (495 days + 23 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 00 00 74 59 00 70 bc 00 00 Error: UNC at LBA = 0x74590070bc = 499709407420 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 60 00 00 00 08 00 74 59 00 70 b8 40 00 12d+16:57:44.392 READ FPDMA QUEUED ea 00 00 00 00 00 00 00 00 00 00 a0 00 12d+16:57:40.893 FLUSH CACHE EXT ea 00 00 00 00 00 00 00 00 00 00 a0 00 12d+16:57:40.801 FLUSH CACHE EXT 61 00 00 00 08 00 08 a8 00 43 18 40 00 12d+16:57:40.800 WRITE FPDMA QUEUED 61 00 00 00 08 00 08 af 00 40 68 40 00 12d+16:57:40.800 WRITE FPDMA QUEUEDI'm really confused what this meansin particular the LBA48 it's giving. hdparm -I confirms the disk has 1,953,525,168 sectors; 499,709,407,420 is well beyond that. (It'd need a disk of 256TB to be valid, even with 512-byte sectors.)Judging from the kernel logs, the LBA48 is probably actually 1,953,520,060 testing with dd and hdparm --read-sector confirm that sector is indeed bad. (Indeed, that one also shows up in smartctl -l xselftest).Why is the extended error log giving an LBA48 that appears to be almost 256 (but not quite!) times greater than the real value? Looking at the hex values, it appears the bytes are in the wrong orderis this maybe just a drive firmware bug? | Why is `smartctl -l xerror` LBA well beyond end of disk? | hard disk;sata;smartctl;smart | null |
_unix.36292 | When I do ps -ef, I see TIME field. What does this field mean? From what I understand, this tells the actual CPU time, that the process got (amidst all the context switching). Does the TIME field include the disk read/write time also or only the CPU time? | The TIME field in ps -ef | linux;process;ps;administration;top | null |
_unix.359478 | I have a GTK application in C++ running, and one of the widgets is a dropdown menu. I want to take a screenshot when this menu is open, but nothing happens when I press Print Screen (it works when the menu is closed). I don't think this is a GTK thing: how can I take a screenshot when the menu is open? | Linux take screenshot when menu is open | screenshot | null |
_scicomp.20884 | I am facing the following problem. I need to solve numerically a set of coupled equations $$i\frac{d}{dt}f_{n}^{(i)}(t) = \left[U\cdot n(n-1) + \mu\cdot n\right]f_{n}^{(i)}(t) - \sqrt{n+1}\Phi_i^{*}\ f_{n+1}^{(i)}-\sqrt{n}\Phi_{i}\ f_{n-1}^{(i)}$$where $U,\mu$ are just constants and $$\Phi_{i} = \sum\limits_{n=1}^{N}\left( f_{n-1}^{(i+1)}(t)\right)^{*}\ f_{n}^{(i+1)}(t)\sqrt{n}$$has to be determined self-consistently. These are coupled differential equations where $i = 1,2,\ldots, M$ and $n = 0,1,,\ldots, N$. What kind of numerical methods can be used in order to solve this problem efficiently? To be clear, I am looking for time evolution of each $f_{n}^{(i)}$ (complex numbers). | Time dependent self-consistent equations | algorithms;ode;nonlinear equations;computational physics;computational chemistry | null |
_unix.169690 | I work on a shared linux enviroment (CentOS), but for some reason one of my logins has been locked. When I do a cat /etc/passwd | grep /home, I can find my user:roaming:x:579:579::/home/roaming:/bin/nologinI've got root permission, but don't know what to do to be able to login again. What should I do about this 'no login' thing?? | Linux user not being able to login (/bin/nologin) | users;login | man 8 nologin There is your real answer as to why it isn't working.If you want the user to log in then you need to give them a shell like /bin/bash or something else.You can edit /etc/passwd directly or use usermod -s /bin/bash roaming, all of this needs to be done as root. |
_softwareengineering.316398 | ES2015 introduced the let and const keywords, which essentially have the same semantics apart from reassignment. (const can be seen as a final variable in languages like Java.)I see the point of switching from var to let, but many online articles recommend using const whenever possible (in some form of manual SSA).However, I recall in the question on Java final variables, the accepted most popular answer adviced against using final local variables because it makes the code more difficult to understand.This is reflected in reality as a number of code review comments questioned the over-use of const on local variables.So does const still reduces readability in JavaScript? | TypeScript/ES2015: Prefer `const` instead of `let` reduces readability? | javascript;coding style;typescript | *sigh*... This is why immutable needs to be the default. Even the referenced Java answer suggests this. Note that that answer does not recommend removing final modifiers, just that the author of that answer wouldn't add them in new code (for local variables) because they clutter up the code. However, there is a difference between JavaScript and Java here. In Java, final is an additional modifier on a variable declaration, so you get:final int foo = 3; // versusint foo = 3;In JavaScript, though, const is just an alternate variable declaration, so the situation is:var foo = 3; // (or let nowadays) versusconst foo = 3;I don't think two more characters constitutes clutter. If the alternative being suggested is just foo = 3 then the reviewers are just wrong.Personally, I would always use const when applicable and would suggest it's inclusion in a code review. But then I'm a Haskeller, so I would be like that. 
But also, JavaScript tends to be more pervasively mutable and have a worse debugging story when things do unexpectedly change, that I would say const in JavaScript is more valuable than final in Java (though it's still valuable there.)As Ixrec points out, const only controls the modifiability of the binding, not the object that is bound. So it's perfectly legal to write:const foo = { bar: 3 };foo.bar = 5;This can be used as an argument against using const as someone may be surprised when an unmodifiable object is modified. Newer versions of JavaScript do have a mechanism to actually make objects unmodifiable, namely Object.freeze. It makes a good addition to const (though only use it on objects that you create, for example, as Object.freeze({ ... })). This combination will communicate and enforce your intent. If used consistently, the times where you are mutating things will stick out and clearly signal non-trivial data flow. |
_unix.373252 | is this udev-rule in /etc/udev/rules.d/10-usb-deny.rules :ACTION==add, ATTR{bInterfaceClass}==* \RUN+=/bin/sh -c 'echo 0 >/sys$DEVPATH/../authorized'ACTION==add, ATTR{bDeviceClass}==* \RUN+=/bin/sh -c 'echo 0 >/sys$DEVPATH/../authorized'sufficient to prevent BadUsb attacks?Digging a bit deeper into the rabbit hole:Anything handled by usb-storage driver passes by that rule.So thumbdrives to usb-SSDs are still a threat.Adding:install usb-storage /bin/trueto /etc/modprobe.d/usb-storage.confis the next obvious thing to do, if I want to be on the safe side.The rabbit hole is even much deeper: Anything with a filesystem passes by that rule and can be activated by USBdevices. So CD/DVD and SDcards are probably the most common hardware to block.Adding:install cdrom /bin/trueto /etc/modprobe.d/cdrom.confand install mmc-core /bin/trueto /etc/modprobe.d/mmc-core.confwill block CD/DVD and SDcards. These three fakeinstalls should be sufficient on standard hardware. On a Laptop or a system with SSH-access you could alternatively fakeinstall usbcore and mmc-coreinstaed as there you don't need keyboard or mouse. | How to prevent BADUSB attacks in debian 9 stretch? | linux;debian;security;linux kernel;usb | null |
_unix.340289 | For example:I have a subvolume /home and create a snapshot:btrfs subvolume snapshot /home /temp/snapshotIs there any connection, that tells me that the new subvolume /temp/snapshot was originally cloned from /home?In other words: If I delete everything in /temp/snapshot and create a new empty subvolume /temp/snapshot2, are these subvolume of different types whatsoever? | Can the original source subvolume of a btrfs snapshot be found by examining that snapshot? | filesystems;btrfs | The answer to your first question is yes. Not only can you determine the source subvolume of a snapshot, you can also see the snapshots for a given subvolume.For example if you run: btrfs subvol show /temp/snapshot you'll see something like this:MOUNT_POINT/temp/snapshot Name: snapshot UUID: 862e55f5-d1a0-4742-87ed-b430dd181a97 Parent UUID: 5c1e9a70-3158-6940-94d4-be82e064f8df Received UUID: - Creation time: 2017-01-26 22:34:21 -0500 Subvolume ID: 940 Generation: 29824 Gen at creation: 29824 Parent ID: 5 Top level ID: 5 Flags: readonly Snapshot(s):If that snapshot itself is the source of other snapshots, you'd see them listed under snapshot(s). The Parent UUID is the source subvolume, which you can use with btrfs subvol list and grep to get more information about the source subvolume:$ btrfs subvol list -u . | grep 5c1e9a70-3158-6940-94d4-be82e064f8dfID 878 gen 29824 top level 5 uuid 5c1e9a70-3158-6940-94d4-be82e064f8df path home |
_softwareengineering.159789 | The Wikipedia article on Parrot VM includes this unreferenced claim: Core committers take turns producing releases in a revolving schedule, where no single committer is responsible for multiple releases in a row. This practice has improved the project's velocity and stability.Parrot's Release Manager role documentation doesn't offer any further insight into the process, and I couldn't find any reference for the claim. My first thoughts were that rotating release managers seems like a good idea, sharing the responsibility between as many people as possible, and having a certain degree of polyphony in releases. Is it, though? Rotating release managers has been proposed for Launchpad, and there were some interesting counterarguments: Release management is something that requires a good understanding of all parts of the code and the authority to make calls under pressure if issues come up during the release itselfThe less change we can have to the release process the better from an operational perspectiveDon't really want an engineer to have to learn all this stuff on the job as well as have other things to take care of (regular development responsibilities)Any change of timezones of the releases would need to be approved with the SAsand: I think this would be a great idea (mainly because of my lust for power), but I also think that there should be some way making sure that a release manager doesn't get overwhelmed if something disastrous happens during release week, maybe by have a deputy release manager at the same time (maybe just falling back to Francis or Kiko would be sufficient).The practice doesn't appear to be very common, and the counterarguments seem reasonalbe and convincing. I'm quite confused on how it would improve a project's velocity and stability, is there something I'm missing, or is this just a bad edit on the Wikipedia article? 
Worth noting that the top voted answer in the related Is rotating the lead developer a good or bad idea? question boldly notes: Don't rotate. | How can rotating release managers improve a project's velocity and stability? | release management | null |
_softwareengineering.270988 | import java.util.List;import java.util.ArrayList;public class Test { /** * @param args * @throws Exception */ public static void main(String[] args) throws Exception { // TODO Auto-generated method stub Deque queue = new Deque(); int random = (int)(Math.random() * 100); queue.addLast(10); queue.addLast(20); queue.addLast(random); queue.addLast(40); queue.addLast(50); for(int i = 0; i < 2; i++) { assertTrue(get(queue, 0) == 10); assertTrue(get(queue, 1) == 20); assertTrue(get(queue, 2) == random); assertTrue(get(queue, 4) == 50); assertTrue(get(queue, 3) == 40); try { System.out.println(get(queue, 5)); assertTrue(false); } catch(Exception e) { assertTrue(true); } } } public static void assertTrue(boolean v) { if(!v) { Thread.dumpStack(); System.exit(0); } } public static int get(Deque queue, int index) throws Exception { // 1) Only fill in your code in this method // 2) Do not modify anything else // 3) Use of 'new' keyword is not allowed // 4) Do not use reflection // 5) Do not use string concatenation return queue.getList().get(index); }}class Deque { private List<Integer> items; public Deque() { items = new ArrayList<Integer>(); } public void addFirst(int item) { items.add(0, item); } public void addLast(int item) { items.add(item); } public int removeFirst() { if(isEmpty()) throw new RuntimeException(); return items.remove(0); } public int removeLast() { if(isEmpty()) throw new RuntimeException(); return items.remove(items.size() - 1); } public boolean isEmpty() { return items.size() == 0; } public List<Integer> getList() { return items; }}Thanks~July | How should I change in get method without calling getList() of Deque? 
| java;constructors | You can use recursion to remove index-1 elements from the beginning of the queue, store the the first element and then, as you are unwinding the recursion, put removed elements back on the queue:public static int get(Deque queue, int index) throws Exception { int tmp = queue.removeFirst(); try { if (index == 0) { return tmp; } else { return get(queue, index - 1); } } finally { queue.addFirst(tmp); }} |
_codereview.82869 | I have written a fixed-size block allocator implementation and would like some feedback as to what I could improve in my code and coding practices. Your comments or notes are welcomed!An auxiliary allocator (non-typed, instead unique for the given size of a chunk; without necessary interface for STL containers):#include <iostream>#include <list>#include <vector>template <size_t ChunkSize>class TFixedAllocator { union TNode { // union char data[ChunkSize]; TNode *next; // free chunks of memory form a stack; pointer to the next (or nullptr) }; TNode *free; // the topmost free chunk of memory (or nullptr) std::vector<TNode*> pools; // all allocated pools of memory int size = 1; // size of the last allocated pool of memory const int MAX_SIZE = 1024; void new_pool() { // allocate new pool of memory if (size < MAX_SIZE) { size *= 2; } free = new TNode[size]; // form a stack of chunks of this pool pools.push_back(free); for (int i = 0; i < size; ++i) { free[i].next = &free[i+1]; } free[size-1].next = nullptr; } TFixedAllocator() { // private for singleton new_pool(); }public: TFixedAllocator(const TFixedAllocator&) = delete; // for singleton static TFixedAllocator& instance () { // singleton static TFixedAllocator instance; return instance; } void* allocate() { if (!free) { new_pool(); } TNode* result = free; // allocate the topmost element (saved in free) free = free->next; // and pop it from the stack of free chunks return static_cast<void*>(result); } void deallocate(void* elem) { TNode* node = static_cast<TNode*>(elem); // add to the stack of chunks node->next = free; free = node; } ~TFixedAllocator() { for (auto ptr : pools) { delete ptr; } }};An allocator wrapper for STL containers:template <class T>class TFSBAllocator {public: using value_type = T; using pointer = T*; using const_pointer = const T*; using reference = T&; using const_reference = const T&; template <class U> class rebind { public: using other = TFSBAllocator<U>; }; pointer 
allocate(size_t n) { if (n == 1) { return static_cast<T*>(TFixedAllocator<sizeof(T)>::instance().allocate()); } else { return std::allocator<T>().allocate(n); } } void deallocate(pointer p, size_t n) { if (n == 1) { TFixedAllocator<sizeof(T)>::instance().deallocate(static_cast<void*>(p)); } else { return std::allocator<T>().deallocate(p, n); } } void construct(pointer p, const_reference t) { new (p) T(t); } void destroy(pointer p) { p->~T(); }};A tester:#include <chrono>using namespace std::chrono;template <class List>void test(std::string comment, List l) { std::cout << comment; auto start_time = high_resolution_clock::now(); for (int i = 0; i < 1e7; ++i) { l.push_back(i); } auto end_time = high_resolution_clock::now(); std::cout << duration_cast<milliseconds>(end_time - start_time).count() << ms << std::endl;}int main() { // CodeBlocks 12.13, Release mode test(std::allocator: , std::list<int>()); // std::allocator: 1816ms test( TFSBAllocator: , std::list<int, TFSBAllocator<int>>()); // TFSBAllocator: 204ms}By the way, what is about thread-safety? | Fixed-size block allocator | c++;memory management;stl | About thread safety: it is not thread safe. There is nothing that prevents member-functions such as TFixedAllocator::allocate, TFixedAllocator::new_pool and others to be executed by multiple threads at the same time. Inside these functions several variables are modified. Modifying them from multiple threads without synchronization is an undefined behavior according to the C++ standard. How to fix it? The easiest way to do it is to use one std::mutex for the allocate and deallocate member-functions of the TFixedAllocator class(acquiring it in the very beginning of the function and releasing it in the very end). It is convenient to use an std::lock_guard for this purpose. Something like this:template <size_t ChunkSize>class TFixedAllocator { ... std::mutex allocator_mutex;public: ... 
void* allocate() { std::lock_guard<std::mutex> allocator_lock_guard(allocator_mutex); if (!free) { new_pool(); } TNode* result = free; // allocate the topmost element (saved in free) free = free->next; // and pop it from the stack of free chunks return static_cast<void*>(result); } void deallocate(void* elem) { std::lock_guard<std::mutex> allocator_lock_guard(allocator_mutex); TNode* node = static_cast<TNode*>(elem); // add to the stack of chunks node->next = free; free = node; }};Functions and variables naming: it is conventional to name functions with a verb(to describe an action). I would use get_instance instead of instance and create_new_pool instead of new_pool, for example. You TFixedAllocator class violates the single responsibility principle. Despite managing memory allocation(which is its main responsibility) it also implicitly implements some data structure(a stack?(I mean things related to the union TNode, the free pointer and so on)). It makes the code rather hard to follow: those operations with free and next pointers are not easy to understand(it is not clear why they are performed in the first place). There are two ways to fix it: Create a separate class that implements this data structure.Use a standard container. If what you need is a stack, std::stack is a good option(again, I'm not completely sure if it is feasible because I do not fully understand how this data structure is used in your code).Writing useless comments is definitely a bad practice. For instance, union TNode { // union doesn't make much sense: it is absolutely clear that it is a union without any comments. I'd recommend simply deleting comments like this one. 
Comments inside a function that tell what a block of code does, like here:void deallocate(void* elem) { TNode* node = static_cast<TNode*>(elem); // add to the stack of chunks node->next = free; free = node;}usually indicate that this block of code should have been a separate function(in this case, as I have mentioned before, there should probably be another class that implements this data structure). In general, you should try to write self-documenting code. Spacing and indentation: separating different functions with an empty line makes the code more readable. |
_datascience.15152 | Say you have an organization that requires employees to participate in a Q&A site similar to StackOverflow - questions and answers are voted upon, selected answers get extra points, certain behaviors boost your score etc. What we need to do is assign a rating from 1-100 to these users with even distribution.The behaviors that add points:Ask a question [fixed]Answer a question [fixed]Receive an upvote on a question [determined by relative ranking]Receive an upvote on an answer [determined by relative ranking]Have your answer selected [determined by relative ranking]Responding to a comment, etc [fixed]Likewise, there are behaviors that subtract points.If a user with a high ranking upvotes a question asked by a lower-ranking user, more points should be awarded than the inverse situation. Likewise if a lower-ranking user downvotes a higher-ranking user's question, the impact should be minimal compared to the inverse. There should be a limit to this impact though so that a high-ranking user doesn't unintentionally destroy any momentum of a low-ranking user by issuing a powerful downvote.We have a few challenges here: How do we determine how many points to assign to each type of behavior, with actor/recipient relative rank taken into account?I'm thinking we just assign a flat number to each behavior, that number decided relative to the importance of the other behaviors, and then have a variable score that can alter the score if there is a wide variance between the users. The mechanics of this - does the score double at most? - are unclear.How to we assign this rank? This one is a little easier - I'm thinking we just order the users according to score and then split the dataset into 100 sections, assigning each chunk a number 1-100.Should we be worried about these numbers getting very big? 
The scenario described above has been trivialized; actions taken by these users may happen hundreds of times per day so the scores can become very high, very quickly. Is there a way we can keep this under control while avoiding a large number of duplicate scores?How do we define the fixed scores as the total scores become very big? Over time we may have users with hundreds of thousands of points - but the fixed-score behaviors should still reward them. They should reward lower-ranking users more than higher-ranking users.I don't know if there are some standard practices, algorithms, or terminologies that I should be aware of when facing a problem like this - any input would be appreciated. | Methods / Algorithms for rank scales based on cumulative scoring | data mining;statistics;algorithms;distribution | To solve challenges #3 and #4, let's limit the overall available rank volume. For example, sum of this rank for all the users will be 1 (100%).From challenge #2 I understood, that you accept 2 different ranks: (1) place from 1 to 100, and (2) simple sum of all earned points (fixed and relative). Did I got it right? I so, there is no need to worry about unlimited growth, or fixed scores inflation. Let's just use percentages, not 1-100 ranks.These percentage ranks could be calculated based on interaction behaviors (vote/selecting answer/etc), using PageRank-like algorithm. Such algorithm will consider all previous reactions (and ranks of acted users), obtained by an exact user. Unfortunately, you cannot use PageRank algorithm as is, because it supports only positive links, but you can look for it's extensions. For example, look at this paper with PageRank extension for both positive and negative links (as users can down vote). You can iteratively estimate percentage rank (TrustRank, TR) using this algorithm.The second task is to calculate reward/penalty rate in points for each single action. 
Let's determine (predefine) maximal reward/penalty rate (X) for each type of action. And will use coefficient to discount it, based on TrustRanks of acting users (e.g., author and voter). Slightly modified Sigmoid will map this ratio from [-Inf,+Inf] range to [0,1]. Here for peer users you will have ~0.5 of predefined maximal rate. If voter has TR twice more than author, author will recieve ~0.75 of predefined value, and so on. You can tune steepness with additional parameter, or try to find any other mapping transformation function.Anyway, now simply multiply maximal penalty/reward by this coefficient, and you'll get the number of point, you need to deduct or add. The only issue, I see, is a user with zero TR - such user as a voter will give nothing, and as an object of voting, will recieve the maximal amount of points regardless voter's rank. To avoid this, you can predefine minimal TR (like 1e-10), and don't let user's TR to fall beyond this value. |
_codereview.30179 | I would like to get some feedback on my AS3 code below. It's for an Adobe Air mobile app to preload a website in a StageWebView container.That container will be moved on screen later in the app process. My goal is to show the website content to the user as fast as possible.// OFF SCREEN - PRELOADvar webView:StageWebView = new StageWebView();webView.stage = this.stage;webView.viewPort = new Rectangle( -5000, 0, stage.stageWidth, stage.stageHeight);webView.loadURL(http://www.example.com/foobar.php);// ONSCREENfunction showWeb(e:Event=null){webView.viewPort = new Rectangle( 0, 0, stage.stageWidth, stage.stageHeight);}btn.addEventListener(MouseEvent.CLICK, showWeb)Is there something I could do better?I'm using Adobe Air 3.6 and Flash CS6.Thank yoo | Preload content in StageWebView | actionscript 3 | null |
_webapps.19904 | How do Google Plus Photos and Picasa relate to each other? Are they different names for the same thing, or different interfaces to the same photo albums? http://picasaweb.google.com/ looks quite different than https://plus.google.com/${my account id}/photos, but they show the same photo albums, thus the question. | How does Picasa relate to Google Plus Photos? | google plus photos;picasa web albums | These are services/products provided by Google. Both are completely different; it is just that they are linked with your common Google ID, which you can use for any Google service: Google Docs, Translate, Blogger, Orkut, etc. So the photos you upload to Google Plus actually get saved in Picasa. In short, your account is linked with different Google services. |
_computergraphics.1945 | In a ray tracer, given a point on a sphere (point_of_intersection with a ray) and its normal for that point (point_of_intersection - center_of_sphere) how do I calculate the tangent space for that point? Do I need other data to calculate the tangent space? | Ray tracing - tangent space for a point on a sphere | raytracing;maths | The tangent space is spanned by the tangent to the point and the bitangent (which is orthogonal to both tangent and normal).So you need to calculate the tangent which is achieved by calculating the cross-product of the ray-direction and the normal. $T = N \times DIR$The resulting vector will be orthogonal to the normal and thereby be the tangent.Now calculate the cross-product of the tangent and the normal $BT = T \times N$ to create a vector orthogonal to both. This vector is the bitangent.Tangent $T$ and bitangent $BT$ span a plane which is the tangent-space of your intersection-point. |
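In code, the two cross products from the answer look like this; the small vector helpers are just for illustration, and note that the cross product degenerates (zero vector) when the ray direction is parallel to the normal, so a real tracer would pick a fallback axis in that case:

```python
import math

def cross(a, b):
    # Standard 3D cross product.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def tangent_space(hit_point, sphere_center, ray_dir):
    # Sphere normal at the hit point: (P - C), normalized.
    normal = normalize(tuple(p - c for p, c in zip(hit_point, sphere_center)))
    # Tangent: cross of the normal and the ray direction (orthogonal to both).
    tangent = normalize(cross(normal, ray_dir))
    # Bitangent: orthogonal to both tangent and normal.
    bitangent = cross(tangent, normal)
    return normal, tangent, bitangent
```

The three returned vectors are mutually orthogonal; tangent and bitangent span the tangent plane at the intersection point.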
_webmaster.13802 | More and more browsers support SVG. Is the text in SVG selectable / copy'able? | Is text in SVG selectable/copyable in browsers? | browser support | YesThat's right, you can select and copy text right out of an SVG! The SVG does not store the text and letters as shapes, but by their meaning. Given a good SVG viewer or SVG-capable browser, you can select and copy text as you would in a normal document. Note that this will not work with all SVGs: Since the exact font (the looks) of a text is often important, and not everyonehas every font installed, some SVG artists convert the letters into shapes. This keeps their appearance, but loses the meaning, so a viewer no longer knows what letters the shapes represent.From the W3CIn many viewing scenarios, the user will be able to search for and select text strings and copy selected text strings to the system clipboard |
_cogsci.9929 | There is a problem in all therapies that if the client doesn't have faith or trust in the therapist then it is unlikely that anything can be achieved. Therefore effective therapy for an extreme skeptic relies on convincing them that the therapist does indeed have some talent and ability.Consider a profile that has something akin to the following views on common therapies:When looking at psychotherapy and therapists the list of different types seems endless. On the one end, Freudian or Psychoanalysis is widely discredited quackery. Carl Jung believed in synchronicity, which puts him firmly in the quack bin. Plenty of other therapists start talking spirituality and so again, sadly, are filed under not proper scientific medicine.If someone has the above attitude to therapy and psychotherapy, what sort of therapist is going to be most credible? We are talking about finding a credible therapy for someone who would self-describe as a skeptic, atheist, scientist, etc. | Which schools of psychotherapy are most credible to a hard scientist? | clinical psychology;psychoanalysis | For several reasons, Cognitive Behavioral Therapy (CBT) should be a good fit for someone who has a skeptical and scientific outlook on life. There is a large body of research showing that CBT is effective (see e.g., Hofmann et al. 2012). Obviously, this also depends on the kind of disorder. And of course, other forms of therapy can be effective too. However, for some disorders, CBT may be more effective than other therapies. For example, in a recent randomized trial, CBT was more effective in treating Bulimia Nervosa (Poulsen et al., 2014) than psychoanalytic therapy (the other main form of psychotherapy). By and large, most empirical evidence we have about psychotherapy regards CBT.
Despite your premise that the question is not about effectiveness, I believe that a skeptic would love that.CBT is informed by and has inspired much basic research about the cognitive processes that may underlie different disorders. For example, there is much research about attentional processes in affective disorders (e.g., MacLeod et al., 2002). A skeptic should value this.Another way to frame this is to say that CBT is scientific in the sense that it is based on testable theories and disorder models. In contrast, many concepts in psychoanalysis, such as resistance or repression, may be criticized as unfalsifiable (Popper, 1963). Again, a skeptic should prefer a more scientific therapy form.A core technique in CBT is to question wrong beliefs and assumptions. CBT has a very rational, thought-focused way of explaining and dealing with problems (some say overly). A central tenet of CBT is that people often hold wrong (self-defeating) beliefs about the world and that they engage in schematic thought processes (e.g. someone who has social phobia may have catastrophizing thoughts about how others might react to him in public and therefore avoid such situations). Questioning such beliefs should be right up a skeptic's alley.Conducting behavioral experiments to collect evidence about oneself is an important therapeutic tool in CBT (e.g., Bennett-Levy et al. 2004). Whereas psychoanalytical approaches strongly rely on the interpretation of clients' (unconscious) conflicts, CBT encourages people to collect data about their thoughts and behaviors and to conduct experiments that clarify important questions about themselves. Skeptics should love this facts-driven approach.Whereas psychoanalytic therapies are focused on trying to solve core, unconscious intrapersonal conflicts (brought from the past) in (mostly) long therapies, CBT is problem- and behavior-focused and short.
A skeptic should like this pragmatism of CBT.Even though your question highlights the role of the therapist, this is actually not an important feature of CBT. CBT relies on structured therapeutic manuals, and not so much on the talent or personality of the therapist. In fact, CBT may be effective if conducted via the internet (Andersson et al., 2009), or even in the form of self-help books (Anderson et al., 2005). A skeptic should value that CBT is focused on technique and not on the therapist.
References
Andersson, G. (2009). Using the Internet to provide cognitive behaviour therapy. Behaviour Research and Therapy, 47, 175-180. doi:10.1016/j.brat.2009.01.010
Anderson, L., Lewis, G., Araya, R., Elgie, R., Harrison, G., Proudfoot, J., et al. (2005). Self-help books for depression: how can practitioners and patients make the right choice? British Journal of General Practice, 55, 387-392.
Bennett-Levy, J., Butler, G., Fennell, M., Hackmann, A., Mueller, M., Westbrook, D., & Rouf, K. (2004). Oxford Guide to Behavioural Experiments in Cognitive Therapy. Cognitive Behaviour Therapy: Science and Practice.
Hofmann, S. G., Asnaani, A., Vonk, I. J. J., Sawyer, A. T., & Fang, A. (2012). The Efficacy of Cognitive Behavioral Therapy: A Review of Meta-analyses. Cognitive Therapy and Research, 36, 427-440. doi:10.1007/s10608-012-9476-1
MacLeod, C., Rutherford, E., Campbell, L., Ebsworthy, G., & Holker, L. (2002). Selective attention and emotional vulnerability: assessing the causal basis of their association through the experimental manipulation of attentional bias. Journal of Abnormal Psychology, 111, 107-123.
Popper, K. R. (1963). Conjectures and Refutations. London: Routledge and Kegan Paul.
Poulsen, S., Lunn, S., Daniel, S. I. F., Folke, S., Mathiesen, B. B., Katznelson, H., & Fairburn, C. G. (2014). A Randomized Controlled Trial of Psychoanalytic Psychotherapy or Cognitive-Behavioral Therapy for Bulimia Nervosa. American Journal of Psychiatry, 171, 109-116.
doi:10.1176/appi.ajp.2013.12121511 |
_softwareengineering.114346 | I need a tool (for in-house usage) that will format SQL code (SQL Server/MySQL).There are various 3rd-party tools and online web sites that do it, but not exactly how I need it. So I want to write my own tool that will fit my needs. First question: is there any standard or convention for how SQL code should be formatted? (The tools that I tried format it differently.)Second question: how should I approach this task? Should I first convert the SQL query into some data structure, like a tree? | Algorithm for formatting SQL code | sql;code formatting;parsing | null
_codereview.67530 | I have code which applies a marquee to certain elements using the requestAnimFrame method. However, when I test my application on a lower-spec PC (Intel Celeron 2.13GHz dual core) the CPU usage skyrockets and gets to a minimum of around 80%! (I have to mention that I have 2 elements that the custom marquee is applied to.) Also, there are cases where even 3 or 4 elements are being targeted by the marquee. There is also a case in my application, not so common, where there is another animation that scrolls text from the right to the left.My development environment is a lot different: I'm using a 3.8GHz 8-core CPU with a strong GPU, so my CPU usage is no more than 11%. I'm not sure if it matters, but locally I'm using xampp and on the tested PC I've been using wamp.The code for my marquee is split into a few functions:/** * Applies the marquee function to the elements who are overflowing */function tryMarquee(/**/) { var args = arguments; for(var i=0; i<args.length; i++) { var elem = $(args[i]); var containerHeight = elem.outerHeight(true); var contentHeight = calculateContentHeight(elem); // extract args var settings = $('.marqueeSettings'); var speed = settings.find('input[data-target='+args[i]+']input[name=marqueeSpeed]').val() || 1; var spacer = settings.find('input[data-target='+args[i]+']input[name=marqueeSpacer]').val() == 1; var spacerHeight = settings.find('input[data-target='+args[i]+']input[name=marqueeSpacerHeight]').val() || 60; if(contentHeight > containerHeight && containerHeight > 10) { marquee(args[i], speed, spacer, spacerHeight); } }}/** * Calculates the content height of an element by it's children's height * @param elem * @returns {number} */function calculateContentHeight(elem) { var total = 0; elem.children().not('.clone').each(function() { if(parseInt($(this).css('margin-top')) >= 0) total += $(this).height() + parseInt($(this).css('margin-top')); else total += $(this).height(); }); total -= 10; return total;}/** *
Generates a spacer. * @param marginTop * @param height * @returns {*|jQuery|HTMLElement} */function generateSpacer(marginTop, height) { var spacer = $('<div class=marquee-spacer clone style=margin-top: '+marginTop+'px;></div>'); height = height || 60; spacer.css({ height: height }); return spacer;}/** * returns a clone of an element's children * @param elem * @returns {*|jQuery|HTMLElement} */function duplicateContent(elem) { return elem.children().clone().addClass('clone');}/** * Checks the prayers element still needs the marquee. * @returns {boolean} */function keepPrayersMaruqee() { var containerHeight = elem.outerHeight(true); var contentHeight = calculateContentHeight(prayers); return contentHeight > containerHeight;}These functions are only triggered once so I'm not sure if it has something to do when the application is fully loaded and already running.function marquee(className, scrollAmount, spacer, spacerHeight) { // parse spacer spacer = spacer || false; // select the elements var elemSet = $(className); // loop through the element set elemSet.each(function() { var $this = $(this); $this.addClass('marquee'); /** * TODO: if the container is taller than the content we should clone the content so there's no gap in the loop */ var initialMargin = parseInt($this.find(div).first().css('margin-top')); if(spacer) $this.append(generateSpacer($this, initialMargin, spacerHeight)); $this.append(duplicateContent($this)); (function loop(){ /** * This block of code is only executed on a certain element so check if it still needs to be scrolled * because there is a possibility that the element's children will be removed dynamically. */ if($this.hasClass('prayers')) if( ! 
keepPrayersMaruqee()) { // remove clone elements $this.children('.clone').remove(); // cancel the marquee return; } var first = $this.find(div).first(); var top = parseInt(first.css('margin-top')); var height = first.outerHeight(); if ((height+top) > 0){ first.css('margin-top','-='+scrollAmount); } else { first.appendTo($this); first.css('margin-top',initialMargin); } /** * repeat the animation * @see window.requestAnimFrame */ requestAnimFrame(loop); })(); });}A working fiddle. This is where the magic happens. I was trying to make this code as efficient as possible. Any suggestion on how I can actually improve this code and make it consume much less CPU power? | Custom marquee consumes a lot of CPU power | javascript;optimization;jquery;animation | Profiling in Opera suggests that it is mainly the time taken to draw the text which is consuming CPU (I do this in Opera simply because Opera is the only browser I know of that has a comprehensive profiler which profiles all aspects of page generation). Rendering fonts is one of the slowest things a web browser can do, and it appears that the browser re-renders the text on each frame; that is basically what is consuming the CPU like crazy. And it appears that the repaint is triggered regardless of what mechanism is used to scroll the text.Some fonts are more work to render than others and you'd benefit from a font which is quick to render; unfortunately I don't know which fonts those are. As an experiment I tried putting a (large) image in the marquee and the CPU usage is much less when scrolling an image. This makes sense because it's a lot less work for a computer to blit an image than to render fonts. Hence one option would be to marquee an image of text instead of text. I know this isn't really a good solution for a lot of reasons, but it would be fast.The other solution is a really obvious and simple one - reduce the number of frames rendered per second.
At the moment your code runs at 60fps which is way higher than is really needed. Actually, at a reasonable text scrolling rate for human reading speed, you could get away with 20fps or maybe even 10fps.At the moment you use a constant scroll amount per frame. This is not a good way to do it. Instead you should use a constant scroll rate per unit time, meaning the text scrolls at the same rate even if the browser chooses to render fewer than 60fps. Essentially what you need to do is calculate the elapsed time since the last frame, and use that to determine how far to scroll. The callback for requestAnimationFrame is passed the current time (in milliseconds) so this is quite easy, here is a minimalistic example:function gogogo(scrollRatePerSecond, frame_skip) { var lastFrameTime = null, i = 0; function loop(now) { var delta = lastFrameTime ? now - lastFrameTime : 0; if (++i % (frame_skip + 1) == 0) { lastFrameTime = now; var scrollAmount = delta * scrollRatePerSecond / 1000; // Do something with scrollAmount } requestAnimationFrame(loop); } requestAnimationFrame(loop);}In that code I've also added a frameskip option, which means 'skip this many frames for each one rendered'. You can set the rate per second to get the desired speed, and then adjust frameskip to get the desired performance. |
_softwareengineering.177235 | This may be a very simple question. I'm curious how blocking calls are implemented. Specifically, how do they block? Is this just thread.sleep? | How are blocking calls implemented? | .net | Specifically, I was implementing a socket class in C# and one of the .Net socket methods blocks until data is received. I was just curious how that works.There are several ways to implement a blocking call. The obvious way to do it is to return when the work is done; see Robert Harvey's answer.In the case where there is no work to be done (e.g., waiting for a signal or for input), there are several choices:Spinlock. Basically, code like while(signal not found){}. Spinlocks have almost no overhead and return faster than other methods, but burn CPU until they return. They are useful if the signal you are waiting for is going to arrive fast or if giving up a timeslice is to be avoided, but are generally a bad idea in high-level code.Locks, mutexes, etc. In C#, these are accessed with things like lock(obj){} (causes other threads to block) and ManualResetEvent (blocks until another thread signals). Generally, these are implemented at the kernel or hardware level. E.g., C#'s lock is implemented on x86 machines using the lock cmpxchg assembly instruction (see Junfeng Zhang's blog).
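The two waiting strategies can be sketched in Python (purely illustrative of the concepts; this is not how the .NET runtime actually implements them):

```python
import threading
import time

flag = False                 # polled by the spinning thread
event = threading.Event()    # waited on by the parked thread

def spin_wait():
    # Spinlock-style wait: burn CPU re-checking the flag until it is set.
    while not flag:
        pass

def event_wait():
    # Lock/event-style wait: the OS parks the thread until another signals.
    event.wait()

def signal_later():
    global flag
    time.sleep(0.05)
    flag = True      # releases the spinning thread
    event.set()      # wakes the parked thread

threads = [threading.Thread(target=f) for f in (spin_wait, event_wait, signal_later)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The spinning thread burns CPU until the flag flips, while the event-waiting thread is parked by the scheduler until `set()` is called.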
_codereview.58169 | I need to make thumbnails without empty space and in the original ratio. Please help me check this algorithm to improve it.public function createThumbnail($imagePath, $thumbnailPath, $targetWidth, $targetHeight){ list( $originalWidth, $originalHeight, $originalType ) = getimagesize($imagePath); $targetRatio = $targetWidth / $targetHeight; $originalRatio = $originalWidth / $originalHeight; if ( $originalRatio >= $targetRatio ) { if ( $originalRatio >= 1 ) { $sourceWidth = $originalHeight * $targetRatio; $sourceHeight = $originalHeight; $sourceX = ( $originalWidth - $sourceWidth ) / 2; $sourceY = 0; } else { $sourceWidth = $originalWidth / $originalRatio * $targetRatio; $sourceHeight = $originalHeight; $sourceX = ( $originalWidth - $sourceWidth ) / 2; $sourceY = 0; } } else { if ( $originalRatio >= 1 ) { $sourceWidth = $originalWidth * $originalRatio / $targetRatio; $sourceHeight = $originalHeight; $sourceX = ( $originalWidth - $sourceWidth ) / 2; $sourceY = 0; } else { $sourceWidth = $originalWidth; $sourceHeight = $originalHeight * $originalRatio / $targetRatio; $sourceX = 0; $sourceY = ( $originalHeight - $sourceHeight ) / 2; } } $originalImage = $this->imageCreateFromType( $originalType, $imagePath ); $thumbnailImage = imagecreatetruecolor( $targetWidth, $targetHeight ); imagecopyresampled( $thumbnailImage, $originalImage, 0, 0, $sourceX, $sourceY, $targetWidth, $targetHeight, $sourceWidth, $sourceHeight ); imagepng( $thumbnailImage, $thumbnailPath );} | Create thumbnail in the original ratio without empty space | php;algorithm;image | The first thing that's really needed are some comments describing the different conditions. Sure, you can work them out every time you read the code, but that's error-prone busy-work that you can avoid for future maintainers. You don't need to go crazy with ASCII graphics, though this is one case that might actually deserve them! 
:)Here's an example:if ( $originalRatio >= $targetRatio ) // original is more landscape if ( $originalRatio >= 1 ) // original is landscape; shrink horizontally else // both are portrait; shrink horizontallyelse // original is more portrait if ( $originalRatio >= 1 ) // both are landscape; shrink vertically else // original is portrait; shrink verticallyNote: Assuming those comments are correct, your calculations in the third case are incorrect.As for the calculations themselves, it may be more intuitive to calculate the source width/height in each block and move the origin calculations below. It's certainly less code since you can easily calculate the origin from the size. And if you start the source width/height equal to the original values, you only need to set one dimension in each if block.$sourceWidth = $originalWidth;$sourceHeight = $originalHeight;... shrink $sourceWidth or $sourceHeight ...$sourceX = ($originalWidth - $sourceWidth) / 2;$sourceY = ($originalHeight - $sourceHeight) / 2;This relatively-small function (by procedural coding standards) is very difficult to test. Refactor it into several small functions so that each function does one thing:Read the original image size and type.Calculate the trimmed original size.Calculate the trimmed original origin.Read the original image from disk. 
(imageCreateFromType)Resize to a new image.Write new image to disk.1, 5, and 6 are single function calls to the GD library already, but I can see 4-6 making a nice scale disk image function together.public function createThumbnail($imagePath, $thumbnailPath, $targetWidth, $targetHeight) { list ($originalWidth, $originalHeight, $originalType) = getimagesize($imagePath); $trimmedSize = $this->calculateTrimmedSize( $originalWidth, $originalHeight, $targetWidth, $targetHeight ); $trimmedOrigin = $this->calculateTrimmedOrigin( $originalWidth, $originalHeight, $trimmedSize ); $this->scaleDiskImage( $imagePath, $trimmedOrigin, $trimmedSize, $thumbnailPath, $targetWidth, $targetHeight );}I really don't like dealing with separate width, height, x, and y values and passing them around and would prefer to define Size and Point classes. If this function is the extent of the image manipulation, it's probably not worth the effort, small as it would be. But in larger applications they would clean up the code quite a bit.Here's the same code as if we had those classes plus a custom ImageInfo that encapsulates the path, size, and type.public function createThumbnail($imagePath, $thumbnailPath, $targetSize) { $original = $this->getImageInfo($imagePath); $trimmedSize = $this->calculateTrimmedSize(original->getSize(), $targetSize); $trimmedOrigin = $this->calculateTrimmedOrigin(original->getSize(), $trimmedSize); $this->scaleDiskImage($imagePath, $trimmedOrigin, $trimmedSize, $thumbnailPath, $targetSize);} |
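The trim-then-center calculation suggested in the review can be written language-agnostically; here is a hypothetical Python version of the same math (Python rather than PHP purely for illustration):

```python
def calculate_trimmed_size(orig_w, orig_h, target_w, target_h):
    # Shrink exactly one source dimension so the crop matches the target ratio.
    target_ratio = target_w / target_h
    if orig_w / orig_h >= target_ratio:
        # Original is wider than the target ratio: trim width.
        return orig_h * target_ratio, orig_h
    # Original is taller than the target ratio: trim height.
    return orig_w, orig_w / target_ratio

def calculate_trimmed_origin(orig_w, orig_h, trimmed_w, trimmed_h):
    # Center the trimmed region inside the original image.
    return (orig_w - trimmed_w) / 2, (orig_h - trimmed_h) / 2
```

A 400x200 original cropped to a square target keeps the full height and trims 100px off each side.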
_codereview.135697 | The code uses the XML-based OC Transpo data feed to create a list of the bus name, where it's headed, and the times. Keep in mind that I am a beginner at python so any advice at all is appreciated. import subprocess, pprintfrom bs4 import BeautifulSoupdef format_set(result_set): new_set = [] for el in result_set: new_set.append(str(el.get_text())) return new_setdef get_stop_number(): print('Please enter your desired stop number (or \'quit\'):') stop_numb = raw_input('> ') try: return int(stop_numb) except: print('Exiting...') return 'quit'def get_stop_info(stopNo): try: output = subprocess.check_output(('curl -d appID=ba91e757&apiKey=' '&stopNo={}&format=xml https://api.octranspo1.com/v1.2/GetNextTripsForStopAllRoutes').format(stopNo), shell=True) soup = BeautifulSoup(output, 'xml') except: print('An error occured!') return None summary = [] for el in soup.find_all('Route'): routeNo = int(el.find('RouteNo').get_text()) routeHeading = str(el.find('RouteHeading').get_text()) times = format_set(el.find_all('TripStartTime')) x = [routeNo, routeHeading, times] summary.append(x) return summarydef is_empty(any_structure): if any_structure: return False else: return Trueif __name__ == '__main__': while True: stop_number = get_stop_number() if stop_number == 'quit': break summary = get_stop_info(stop_number) pprint.pprint(summary)Sample output: Please enter your desired stop number (or 'quit'):> 3058 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed100 7756 100 7679 100 77 4654 46 0:00:01 0:00:01 --:--:-- 4656[[92, 'St-Laurent', ['11:30', '12:00', '12:30']], [92, 'Stittsville', []], [96, 'St-Laurent', ['10:51', '11:21', '11:51']], [96, 'Terry Fox', ['10:09', '10:39', '11:09']], [118, 'Hurdman', ['11:20', '11:40', '12:00']], [118, 'Kanata', []], [162, 'Stittsville', ['11:55', '12:55', '13:55']], [162, 'Terry Fox', []], [168, 'Bridlewood', ['11:35', '12:05', '12:35']], [168, 'Terry Fox', []]]Please enter 
your desired stop number (or 'quit'):> quitExiting... | Parses XML to return the bus times at specified bus stop | python;beginner;xml;beautifulsoup;curl | null |
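The per-route extraction can also be done with the standard library's xml.etree instead of BeautifulSoup; the XML below is a made-up stand-in mimicking the feed's element names (Route, RouteNo, RouteHeading, TripStartTime), not real OC Transpo output:

```python
import xml.etree.ElementTree as ET

SAMPLE = """
<Routes>
  <Route>
    <RouteNo>92</RouteNo>
    <RouteHeading>St-Laurent</RouteHeading>
    <Trips>
      <TripStartTime>11:30</TripStartTime>
      <TripStartTime>12:00</TripStartTime>
    </Trips>
  </Route>
  <Route>
    <RouteNo>96</RouteNo>
    <RouteHeading>Terry Fox</RouteHeading>
    <Trips/>
  </Route>
</Routes>
"""

def summarize(xml_text):
    # Build [routeNo, routeHeading, [times...]] triples, like the original code.
    root = ET.fromstring(xml_text)
    summary = []
    for route in root.iter("Route"):
        number = int(route.findtext("RouteNo"))
        heading = route.findtext("RouteHeading")
        times = [t.text for t in route.iter("TripStartTime")]
        summary.append([number, heading, times])
    return summary
```

This avoids both the external BeautifulSoup dependency and shelling out to curl for parsing purposes.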
_unix.325387 | I am using Apache Hive on RedHat. Hive version is 1.2.1. The current Timezone in Hive is EST and I want to convert it to GMT. Can someone please guide me on the same.PS: I want to change the Hive Server Timezone and I am not asking about using a particular function in Hive that can convert EST timezone to GMT timezone. | How to change Hive Server Timezone | apache hive | null |
_codereview.165655 | A string contains many patterns of the form 1(0+)1 where (0+) represents any non-empty consecutive sequence of 0's. The patterns are allowed to overlap.For example, consider the string 1101001; we can see there are two consecutive sequences 1(0)1 and 1(00)1 which are of the form 1(0+)1.public class Solution { static int count = 0; static int patternCount(String s){ String[] sArray = s.split("1"); for(String str : sArray) { if(Pattern.matches("[0]+", str)) { count++; } } return count; } public static void main(String[] args) { int result = patternCount("1001010001"); System.out.println(result);//3 } }Sample Input:100001abc101, 1001ab010abc01001, 1001010001Sample Output:2 2 3But I still feel something might fail in the future; could you please help me optimize my code as per the requirement? | Count patterns that start and end with 1, with 0's in between | java;strings;regex | You can accomplish this by using positive look-behind in your regex and simply counting the number of matches.static int patternCount(String s) { Pattern pattern = Pattern.compile("(?<=1)[0]+(?=1)"); Matcher matcher = pattern.matcher(s); int count = 0; while (matcher.find()) { count++; } return count;}
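For comparison, the same look-behind/look-ahead idea in Python (an illustration, not part of the answer); the zero-width assertions let overlapping 1(0+)1 patterns share their bounding 1s:

```python
import re

def pattern_count(s):
    # (?<=1) and (?=1) match positions, not characters, so a single '1'
    # can terminate one run of zeros and start the next.
    return len(re.findall(r"(?<=1)0+(?=1)", s))
```

This reproduces the sample outputs 2, 2 and 3.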
_softwareengineering.185018 | I'm going through an MVC tutorial, and I notice that the convention seems to be to expose a table's primary key on detail pages/urls (i.e. /Movies/Details/5 as an example from the tutorial).It's obviously not a problem for things like a movie record or a SO post, but it might be a bit different for an invoice or transaction with confidential information on it if the key was sequential or guessable.So, is it just this tutorial or do MVC apps typically expose a table's primary key? Is there a common library or pattern for hiding the key if you don't want to expose it? | MVC exposes database primary keys? | mvc;asp.net mvc | And that's why I think those tutorials simplify things to a degree that confuses the newcomer.Try to do a little tabula rasa and think through the process from scratch, decoupling the concepts.First of all, MVC is a presentation layer. It does not (or should not) even know what is backing it up. It could be a database, an XML file, a text file, an in-memory collection, whatever. It should be abstracted away.What does MVC understand? It receives an HTTP request, parses it using the route engine, resolves it to the corresponding controller/action method and does whatever you write in the action method. End of the story.Do you ask for an id which, behind the scenes, corresponds to a database surrogate key? MVC doesn't know that. For it, it's just an int parameter of a method.It's entirely up to you how you decouple your logic and layers, how you retrieve data and check for security.Assume you get asked for Products/Detail/1. Is your action method under authentication (using the authorize attribute)? If so, are your products only visible to certain users?You'll pass the requested id along with the username to a business logic method, and that method will tell you "here's your product" or "sorry, you cannot view this" based on the logic inside.
And heck, if you designed it well even the business logic method won't know if behind the scenes it used a database or anything else to retrieve the product.I hope this little explanation clarifies things a little. MVC is just about handling HTTP requests and returning a view or some json data or whatever you want to return to the users. Anything that backs it should be abstracted away and implemented as you see fit. |
_unix.268866 | I want to timeout the time command. For example:timeout -s 1 time sleep 5I never get a timeout. Is there any way to do this? | Timeout time command in bash | linux;bash | The -s option in timeout specifies the signal to send. You just need to remove -s:timeout 1 time sleep 5
_cs.35409 | Question: Suppose you have a list of integers and it might contain duplicates. Build a Max Heap using this list. Where would the duplicates of the max integer reside in this Max Heap data structure? Where would the duplicates of other integers reside in the heap?My answer: The duplicates can reside below (as children of) the value, considering a heap (tree structure). Or, if we are using arrays to implement this Max Heap, then the duplicates would reside at indices H[2i + 1] and H[2i + 2], where i is the index of the parent node. Or we can even make a pointer to a linked list of duplicates from the node which has duplicates.The teacher said that there is another way. Could you please give me hints on how to do it? Or at least how to answer this question? | Where To Put Duplicates in Max Heap? | data structures;heaps;priority queues | null
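One hint you can verify empirically (an observation that follows from the heap invariant, not necessarily the teacher's intended answer): every ancestor of a node in a max heap is >= that node, so any duplicate of the maximum must have a parent also equal to the maximum; duplicates of the max therefore form a connected subtree containing the root, while duplicates of other values need not be adjacent at all. A small Python check:

```python
import heapq

def max_heapify(values):
    # Build a max heap using heapq's min heap on negated values.
    heap = [-v for v in values]
    heapq.heapify(heap)
    return [-v for v in heap]

def max_duplicates_touch_root(arr):
    # In array form, the parent of index i is (i - 1) // 2.
    # Check: every occurrence of the max has a parent holding the max too.
    top = arr[0]
    return all(arr[(i - 1) // 2] == top
               for i in range(1, len(arr)) if arr[i] == top)
```

The property holds for any valid max heap, regardless of how it was built.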
_webapps.107329 | On my Facebook account (Desktop Browser) I have a right hand sidebar with online friends I can chat with, and above that, a window with friend's activities, obviously called a Ticker. On the bottom of that sidebar there is a gear, with an option Hide Ticker and Show Ticker, which hides that Friends Activity part of the sidebar, or shows it.Now, on my friend's account, this option is missing completely, you can't even click-hold and drag the sidebar down in order to reveal the Ticker/Friends activity, and I have no idea why is this.Anyone knows how I can bring that option on? | Show Ticker option completely missing | facebook | null |
_unix.340435 | I am running a sendmail server on CentOS 6.8. For MTA connections on port25 I want to use tcpwrappers to reject host with no PTR DNS record.so my hosts.allow looks like :sendmail: ALL EXCEPT UNKNOWNMy problem is the mail submission port on 587 seems to share this setting. The result is that roaming users (mostly on US Cellular) who don't have a PTR record for their current IP address get rejected before they can authenticate.I can fix this by setting up sendmail: ALL in hosts allow, but this about triples the number of garbage connections from spammers on port 25.Does anyone know a way to make sendmail call libwrap for port 25 connections but not for port 587 connections that will be authenticated ?Thanks! | Sendmail 8.14.4 on CentOS 6.8 tcpwrappers problem | configuration;sendmail;tcp wrappers | tcp_wrappers (last stable release: 1997) dates to an awkward phase of the Internet when OS and applications generally lacked suitable protections; since then OS now ship with firewalls by default and applications have all sorts of business logic available (features and milters in the case of sendmail) to keep the spammers to a dull roar. tcp_wrappers is problematic here as it is a single library, so would need two distinct versions of sendmail and probably some patching of sendmail for one to use the library via sendmail and the other sendmailmsp.In this case sendmail has suitable features that will reject connections without rdns but allow relay to authenticated connections via the following sendmail.mc defines (see cf/README under the source for details on these, and how to rebuild sendmail.cf):FEATURE(`delay_checks')dnlFEATURE(`require_rdns')dnl(Lacking such, the next option would be to carry out the necessary business logic via a milter.) Note that the next expected move from the spammers would be to break an authenticated account and spam via that, so log monitoring, rate throttling, and so forth may need to be in place to limit and detect such. |
_cogsci.997 | I'm working on brain imaging (fMRI) and I'm looking for a way to plot brain effective connectivity (dynamic causal modeling) parameters between different brain regions in a 3D plot. The plotting software should have the following input and output:Input:The directed connections from each region to another are defined by a connectivity matrix (no-connections have a value of 0). The 3D coordinates of the brain regions are presented in another matrix, practically using the weighted mean MNI coordinates of the regions as their coordinates for visualization.Output:The connection strength would be signaled by the color of the sticks, and/or numerically. However, a first step could be a simple 3D ball-and-stick diagram without connection parameters. Output as 2D vector or bitmap image: .ps, .eps, .tex, .png, .jpg, etc. The output would be used in a LaTeX document but also in other formats.In the field of molecular analysis (with which I'm completely unfamiliar), I noticed a lot of free and open-source software for creating ball-and-stick plots of molecules (some with publication-quality ray-tracing). Before reinventing the wheel (programming some intermediate software to convert MNI or Talairach 3D coordinates of brain regions to 3D spatial coordinates of atoms and writing them in some molecule file format for molecule ball-and-stick visualization software, e.g. RasMol, to use it for brain connectivity visualization), I want to ask: Is there a good (and preferably free) software for brain connectivity visualization?I have checked the examples of MayaVi, R and TikZ but none of them has anything related to plotting ball-and-stick visualizations.
| Plotting publication-quality ball-and-stick models of brain connectivity in 3D | software;neuroimaging;publication process;fmri | Have you tried:connectomeviewer http://www.connectomeviewer.org/viewerbrainnetviewer http://www.nitrc.org/projects/bnv/ which is a toolbox for the SPM software package http://www.fil.ion.ucl.ac.uk/spm/Gephi http://gephi.org/Trackvis http://trackvis.org/Also Nico Dosenbach has some amazing picture of brain connectivity in this paper http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3135376/. The code is based on matlab and the plotting can be done in the caret software http://brainvis.wustl.edu/wiki/index.php/Main_Page from the Van-Essen Lab. |
_webmaster.22968 | I bought a domain from godaddy.com and I was wondering how I can edit the information on the webpage (HTML, FTP and such). And sorry if this isn't the correct Stack site, but it seems like the best place to post this question. | How to edit the pages on your domain on Godaddy? | web hosting;domains;website design;godaddy | null
_webmaster.88383 | I'm trying to add new functionality to my site which involves passing a whole bunch of parameters (could be hundreds of parameters) all encoded in base64 format. When I attempt the new URL I get an error 403 (access denied). I verified this to be a length issue because I then tried accessing the same domain but instead of base64 code, I used numbers after the URL and I still get the same error.If you feel like scrolling across, you'll see the URL I try to access:http://example.com/1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111I was looking through my apache configuration files and the only thing that stood out which doesn't make sense here is this:LimitRequestFieldSize 8200LimitRequestLine 8200I say it doesn't make sense because I set the values to 8200 which I think means accept up to 8200 characters in the URL (not 400).Is there a setting I can use in apache to fix this problem, because I know its an apache issue but I'm not sure which setting to fix. | Why would a url of example.com/n (where n is the same number repeated 500x) return a 403 error? | url;apache;configuration;403 forbidden | null |
_unix.304087 | I am wondering if there are Linux file systems that support an archive bit for each file: https://en.m.wikipedia.org/wiki/Archive_bit I need something with the same logic in a native filesystem like ext3, not FAT nor NTFS. Best regards, Sala | file system, archive bit | filesystems;archive | Modern Linux systems support custom file attributes, at least on ext4 and btrfs. You can use getfattr to list them and setfattr to set them. Custom attributes are extended attributes in the user namespace, i.e. with a name that starts with the five characters user..$ touch foo$ getfattr foo$ setfattr -n user.archive -v $(date -r foo +%s) foo$ getfattr -d foo# file: foouser.archive=1471478895You can use a custom attribute if you wish. The value can be any short string (how much storage is available depends on the filesystem and on the kernel version; a few hundred bytes should be fine). Here, I use the file's timestamp; a modification would update the actual timestamp but not the copy in the custom attribute.Note that if the file is modified by deleting it and replacing it with a new version, as opposed to overwriting the existing file, the custom attributes will disappear. This should be fine for your purpose: if the attribute is not present then the file should be backed up.Incremental backup programs in the Unix world don't use custom attributes. What they do is compare the timestamp of the file with the timestamp of the backup, and back up the file if it's changed. This is more reliable because it takes the actual state of the backup into account; backups made solely according to the system state are more prone to missing files due to a backup disappearing or due to mistakes made when maintaining the attribute.
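The comparison-based decision described in the last paragraph of the answer can be sketched in a few lines. This is only an illustration, not production code: the extended attribute is simulated with an in-memory dict so the sketch runs on any filesystem, and the helper names are made up; real code on ext4/btrfs would store the value with os.setxattr/os.getxattr under a user.* name such as user.archive.

```python
import os
import tempfile

# Simulated extended-attribute store (path -> stored mtime string).
# A real implementation would use os.setxattr/os.getxattr instead.
_xattrs = {}

def mark_backed_up(path):
    # Record the file's mtime at backup time, like `setfattr -n user.archive`.
    _xattrs[path] = str(int(os.path.getmtime(path)))

def needs_backup(path):
    # Back up if no attribute is present (new or replaced file) or the
    # file's mtime no longer matches the stored copy.
    stored = _xattrs.get(path)
    return stored is None or stored != str(int(os.path.getmtime(path)))
```

A deleted-and-recreated file loses its attribute and so is picked up again, matching the behaviour the answer describes.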
_codereview.83219 | I wrote a simple calculator which uses orders of operations. I would like to know, do you see any serious flaws in logic and what do you think about this solution?It is my second approach to a problem, and the code passes tests (with integers and decimals) for basic operations: ^,|(i used this sign for root square), *, /, +, -. import java.math.*;import java.util.*;public class OrderOfOperations { ArrayList<String> contents; String item; OrderOfOperations check; public static void main (String[] args){ Scanner input = new Scanner(System.in); System.out.println(Enter an operation: ); String a = input.nextLine(); OrderOfOperations go = new OrderOfOperations(); a = go.brackets(a); System.out.println(Result: +a); } public String brackets(String s){ //method which deal with brackets separately check = new OrderOfOperations(); while(s.contains(Character.toString('('))||s.contains(Character.toString(')'))){ for(int o=0; o<s.length();o++){ try{ //i there is not sign if((s.charAt(o)==')' || Character.isDigit(s.charAt(o))) //between separate brackets && s.charAt(o+1)=='('){ //or number and bracket, s=s.substring(0,o+1)+*+(s.substring(o+1)); //it treat it as } //a multiplication }catch (Exception ignored){} //ignore out of range ex if(s.charAt(o)==')'){ //search for a closing bracket for(int i=o; i>=0;i--){ if(s.charAt(i)=='('){ //search for a opening bracket String in = s.substring(i+1,o); in = check.recognize(in); s=s.substring(0,i)+in+s.substring(o+1); i=o=0; } } } } if(s.contains(Character.toString('('))||s.contains(Character.toString(')'))|| s.contains(Character.toString('('))||s.contains(Character.toString(')'))){ System.out.println(Error: incorrect brackets placement); return Error: incorrect brackets placement; } } s=check.recognize(s); return s; } public String recognize(String s){ //method divide String on numbers and operators PutIt putIt = new PutIt(); contents = new ArrayList<String>(); //holds numbers and operators item = ; for(int 
i=s.length()-1;i>=0;i--){ //is scan String from right to left, if(Character.isDigit(s.charAt(i))){ //Strings are added to list, if scan finds item=s.charAt(i)+item; //a operator, or beginning of String if(i==0){ putIt.put(); } }else{ if(s.charAt(i)=='.'){ item=s.charAt(i)+item; }else if(s.charAt(i)=='-' && (i==0 || (!Character.isDigit(s.charAt(i-1))))){ item=s.charAt(i)+item; //this part should recognize putIt.put(); //negative numbers }else{ putIt.put(); //it add already formed number and item+=s.charAt(i); //operators to list putIt.put(); //as separate Strings if(s.charAt(i)=='|'){ //add empty String to list, before | sign, item+= ; //to avoid removing of any meaningful String putIt.put(); //in last part of result method } } } } contents = putIt.result(contents, ^, |); //check Strings contents = putIt.result(contents, *, /); //for chosen contents = putIt.result(contents, +, -); //operators return contents.get(0); } public class PutIt{ public void put(){ if(!item.equals()){ contents.add(0,item); item=; } } public ArrayList<String>result(ArrayList<String> arrayList, String op1, String op2){ int scale = 10; //controls BigDecimal decimal point accuracy BigDecimal result = new BigDecimal(0); for(int c = 0; c<arrayList.size();c++){ if(arrayList.get(c).equals(op1)|| arrayList.get(c).equals(op2)){ if(arrayList.get(c).equals(^)){ result = new BigDecimal(arrayList.get(c-1)).pow(Integer.parseInt(arrayList.get(c+1))); }else if(arrayList.get(c).equals(|)){ result = new BigDecimal(Math.sqrt(Double.parseDouble(arrayList.get(c+1)))); }else if(arrayList.get(c).equals(*)){ result = new BigDecimal(arrayList.get(c-1)).multiply (new BigDecimal(arrayList.get(c+1))); }else if(arrayList.get(c).equals(/)){ result = new BigDecimal(arrayList.get(c-1)).divide (new BigDecimal(arrayList.get(c+1)),scale,BigDecimal.ROUND_DOWN); }else if(arrayList.get(c).equals(+)){ result = new BigDecimal(arrayList.get(c-1)).add(new BigDecimal(arrayList.get(c+1))); }else if(arrayList.get(c).equals(-)){ result = 
new BigDecimal(arrayList.get(c-1)).subtract(new BigDecimal(arrayList.get(c+1))); } try{ //in a case of to out of range ex arrayList.set(c, (result.setScale(scale, RoundingMode.HALF_DOWN). stripTrailingZeros().toPlainString())); arrayList.remove(c + 1); //it replace the operator with result arrayList.remove(c-1); //and remove used numbers from list }catch (Exception ignored){} }else{ continue; } c=0; //loop reset, as arrayList changed size } return arrayList; } }} I tested code for inputs such as:1-(1+1)+1,2*(5*(8/2)),|3+3.6589,9-5/(8-3)*2+6,-1--1--1--1+|4^2,1+7/3*(34.67/23-(-2--5)+6^2),2(2(2(2(2(2))))), | Order of operations algorithm for calculator | java;beginner;calculator;math expression eval | OOPYour internal PutIt class doesn't have any state itself, but uses the fields of the enclosing class, which is quite confusing. PutIt could just as well be removed and its methods made part of OrderOfOperations.It's also rarely needed that a class has a field storing an instance of the class itself (linked lists would be an example where this makes sense). If I remove the check field from your code and call all the methods directly, it still seems to work perfectly.I don't want to do a complete redesign of your code, but if you do want to use classes, classes such as Operation and Equation might be more useful.Error HandlingYour code doesn't handle invalid input very well. As I had no idea what valid input looks like, I tried a couple of things:5 + 3 -> java.lang.NumberFormatException: I would expect at least a message of what string could not be formated (although your program should be able to handle spaces and not throw an error at all).+ 4 2 -> java.lang.ArrayIndexOutOfBoundsException: -1: Again, catching this and presenting a meaningful error message would be nice.NamingVariable names should help a reader understand your code. They should be as expressive as possible.Yours are often very generic. contents of what? what item? check what? PutIt what's it? 
and put it where? go where? short variable names are also almost never good. c, a, and s are all not very expressive. Especially bad is o as a loop variable, because it results in code like i=o=0, which is quite hard to read.methods should be named after what they do. brackets eg doesn't tell me that at all (does it process brackets? does it find brackets?). Same with recognize (what does it recognize? and what does recognize even mean in this context?), put (put what where?) and result (does it print the result? does it create the result? the result of what?).Miscdon't import *, but the concrete classes that you need.use private or public for fields.use a lot more spaces to increase readability (you can use an IDE to format this for you).don't ignore exceptions. If you don't want to deal with them, throw them upwards. Just swallowing them will make it very hard to find bugs.declare fields in as small a scope as possible. contents and item are both only used in recognize and put. It would be a lot better to declare them inside recognize and pass them as arguments to put. |
_codereview.118939 | Ok, before you ask: yes, I need to do this. Sort of.I'm wrapping a 3rd-party API for data access, and I can't use an ORM, so I'm implementing this kind of thing:public interface IRepository<TEntity> where TEntity : class, new(){ /// <summary> /// Projects all entities that match specified predicate into a <see cref=TEntity/> instance. /// </summary> /// <param name=filter>A function expression that returns <c>true</c> for all entities to return.</param> /// <returns></returns> IEnumerable<TEntity> Select(Expression<Func<TEntity, bool>> filter); /// <summary> /// Projects the single that matches specified predicate into a <see cref=TEntity/> instance. /// </summary> /// <exception cref=InvalidOperationException>Thrown when predicate matches more than a single result.</exception> /// <param name=filter>A function expression that returns <c>true</c> for the only entity to return.</param> /// <returns></returns> TEntity Single(Expression<Func<TEntity, bool>> filter); /// <summary> /// Updates the underlying <see cref=View/> for the specified entity. /// </summary> /// <param name=entity>The existing entity with the modified property values.</param> void Update(TEntity entity); /// <summary> /// Deletes the specified entity from the underlying <see cref=View/>. /// </summary> /// <param name=entity>The existing entity to remove.</param> void Delete(TEntity entity); /// <summary> /// Inserts a new entity into the underlying <see cref=View/>. /// </summary> /// <param name=entity>A non-existing entity to create in the system.</param> void Insert(TEntity entity);}Notice the Expression<Func<TEntity, bool>> filter parameter of the Single and Select methods? 
That's so I can write this:using (var repository = new PurchaseOrderRepository()){ var po = repository.Single(x => x.Number == 123456); //...}Instead of this:_headerView.Browse(PONUMBER = \123456\, true);So, this ToFilterExpression extension method allows me to nicely wrap this stringly-typed API with my own strongly-typed API, and hide all the nastiness behind a familiar IRepository abstraction.Here's the extension method in question:public static string ToFilterExpression<TEntity>(this Expression<Func<TEntity, bool>> expression) where TEntity : class, new(){ if (expression == null) { return string.Empty; } var lambdaExpression = (LambdaExpression)expression; lambdaExpression = (LambdaExpression)Evaluator.PartialEval(lambdaExpression); var visitor = new FilterVisitor<TEntity>(lambdaExpression); var result = visitor.Filter; return result;}If you're curious, here's what the client code looks like:static void Main(string[] args){ using (var session = new Session()) { session.Init(/*redacted*/); session.Open(/*redacted*/); using (var context = session.OpenDBLink(DBLinkType.Company, DBLinkFlags.ReadWrite)) using (var repository = new PurchaseOrderHeadersRepository()) { repository.Compose(context); var poNumber = 123456; var date = DateTime.Today.AddMonths(-1); var result = repository.Select(x => x.Number == poNumber && x.OrderDate >= date || x.Number.EndsWith(123)); foreach(var po in result) { Console.WriteLine(PO Number: {0}, po.Number); } } } Console.ReadLine();}...which looks pretty neat compared to what it would be without that wrapper API! The extension method produces this output:PONUMBER = 123456 AND ORDEREDON >= 20160105 OR PONUMBER LIKE \%123\To achieve this, I implemented an ExpressionVisitor, adapting code from an MSDN article. 
Here's the visitor:/// <summary>/// Based on https://msdn.microsoft.com/en-us/library/bb546158.aspx/// </summary>internal class FilterVisitor<TEntity> : ExpressionVisitor where TEntity : class, new(){ private readonly Expression _expression; private string _filter; private readonly IList<EntityPropertyInfo<TEntity>> _properties; public FilterVisitor(Expression expression) { _expression = expression; _properties = typeof (TEntity).GetPropertyInfos<TEntity>().ToList(); } public string Filter { get { if (_filter == null) { _filter = string.Empty; Visit(_expression); } return _filter; } } private readonly ExpressionType[] _binaryOperators = { ExpressionType.Equal, ExpressionType.NotEqual, ExpressionType.GreaterThan, ExpressionType.GreaterThanOrEqual, ExpressionType.LessThan, ExpressionType.LessThanOrEqual }; private readonly IDictionary<ExpressionType, string> _binaryOperations = new Dictionary<ExpressionType, string> { { ExpressionType.Equal, = }, { ExpressionType.NotEqual, != }, { ExpressionType.GreaterThan, > }, { ExpressionType.GreaterThanOrEqual, >= }, { ExpressionType.LessThan, < }, { ExpressionType.LessThanOrEqual, <= }, { ExpressionType.AndAlso, AND }, { ExpressionType.OrElse, OR }, }; private readonly Stack<string> _operators = new Stack<string>(); protected override Expression VisitBinary(BinaryExpression b) { if (_binaryOperators.Contains(b.NodeType)) { foreach (var property in _properties) { var name = property.Property.Name; if (ExpressionTreeHelpers.IsMemberEqualsValueExpression(b, typeof(TEntity), name, b.NodeType)) { var value = ExpressionTreeHelpers.GetValueFromEqualsExpression(b, typeof(TEntity), name, b.NodeType); if (value is DateTime) { value = ((DateTime)value).ToString(yyyyMMdd); } _filter += property.FieldName + _binaryOperations[b.NodeType] + value; if (_operators.Any()) { _filter += _operators.Pop(); } return b; } } } else if (b.NodeType == ExpressionType.AndAlso || b.NodeType == ExpressionType.OrElse) { 
_operators.Push(_binaryOperations[b.NodeType]); } return base.VisitBinary(b); } protected override Expression VisitMethodCall(MethodCallExpression m) { if (m.Method.DeclaringType == typeof(string)) { if (m.Method.Name == StartsWith) { foreach (var property in _properties) { var name = property.Property.Name; if (ExpressionTreeHelpers.IsSpecificMemberExpression(m.Object, typeof(TEntity), name)) { _filter += property.FieldName + LIKE \ + ExpressionTreeHelpers.GetValueFromExpression(m.Arguments[0]) + %\; return m; } } } if (m.Method.Name == EndsWith) { foreach (var property in _properties) { var name = property.Property.Name; if (ExpressionTreeHelpers.IsSpecificMemberExpression(m.Object, typeof(TEntity), name)) { _filter += property.FieldName + LIKE \% + ExpressionTreeHelpers.GetValueFromExpression(m.Arguments[0]) + \; return m; } } } if (m.Method.Name == Contains) { foreach (var property in _properties) { var name = property.Property.Name; if (ExpressionTreeHelpers.IsSpecificMemberExpression(m.Object, typeof(TEntity), name)) { _filter += property.FieldName + LIKE \% + ExpressionTreeHelpers.GetValueFromExpression(m.Arguments[0]) + %\; return m; } } } } return base.VisitMethodCall(m); }}Obviously there are a number of things I could add and support additional constructs and method calls - but this is pretty much good enough for my immediate needs.Is there a better way to do this? | Something like a LINQ provider | c#;repository;extension methods;expression trees | I wouldn't worry too much about the amount of things that aren't supported (yet/if ever) - it's impossible to cover everything in a scenario like this. One thing I would suggest is that you throw exceptions so the caller knows they're doing something unexpected:protected override Expression VisitMethodCall(MethodCallExpression m){ if (m.Method.DeclaringType == typeof(string)) { // might work better as a switch with a default case. if (m.Method.Name == StartsWith) { // ... 
} if (m.Method.Name == EndsWith) { // ... } if (m.Method.Name == Contains) { // ... } throw new NotSupportedException(A meaningful error message); } throw new NotSupportedException(A meaningful error message);}LIKE should be a well named constant.String.Format or string interpolation is nicer than concatenation:var value = expressionTreeHelpers.GetValueFromExpression(m.Arguments[0]);_filter += ${property.FieldName} LIKE \%{value}\;I'm afraid that's about the limit of what I can suggest at the moment. It seems like a good approach to me but I'm not exactly an expert at this kind of thing! I would suggest putting some search string validation/escaping in as it's generally a good idea to be cautious. |
_scicomp.24776 | I'm wondering specifically in regard to a recursive function such as a massive game tree. I can't specifically say how big yet, but it is definitely pushing the limits of a given processor or processor array.Is it correct to say that passing a variable to a function requires an operation? Certainly there must be the flipping of some bits. Does this reduce efficiency? | Is there any computational efficiency to global variables? | efficiency | null
_unix.192287 | I hope you are well!I use Manjaro x64_86, using GRUB2 and EFI. I made a large error after coming home from a night shift. I accidentally removed my boot partition when attempting to format an external hard drive for my girlfriend. I followed instructions from the Manjaro wiki to reinstall grub. Some steps are confusing, specifically at the start it asks you to mount your boot partition to /mnt/boot:mount /dev/sda1 /mnt/bootThen later in the EFI section, it asks you to mount the boot partition to /boot/efi:sudo mount /dev/sda1 /boot/efiI wonder if this has contributed.I happily updated grub using update-grub, without any major hitches:sudo update-grubUnfortunately whenever I need to update grub, it does not work. It appears that my computer is using /boot/efi/grub/grub.cfg rather than the /boot/grub/grub.cfg that is automatically updated when I update my kernel/run the update-grub command from inside my currently running Manjaro install.I have attempted to read through various wikis about GRUB2 and EFI, but each one is asking me to read more and more information. Is there any chance I can change this so it updates automatically again? I promise I'll never use gparted when I'm tired again! :)Thank you in advance.EDIT:Specifically my problem now is that when I use sudo update-grub, it only changes /boot/efi/grub/grub.cfg and when I turn on my pc and grub loads, it appears to be using /boot/grub/grub.cfg instead, which isn't being updated. | Problems in using GRUB2 - Manjaro | linux;grub2;grub;manjaro | I have done some more reading and I have found the way to update my grub.conf in a different directory. 
I have not checked if this will now allow it to update automatically in the future, but at least I have a working solution.My problem was GRUB2 was using /boot/efi/grub/grub.cfgand running the scriptupdate-grub was only updating/boot/grub/grub.cfgSo I simply ran:sudo grub-mkconfig -o /boot/efi/grub/grub.cfgand GRUB2 now loads the additional kernels I had installed. However, I have not made this work automatically. I think I will see if it is functioning later, if it is not, I will sym-link /boot/efi/grub/to /boot/grubI have not tested this further solution, so caution is advised. |
_unix.78861 | I have an embedded setup using an initramfs for the root file system but using a custom ext3 partition mounted on a compact flash IDE drive. Because data integrity in the face of power loss is the most important factor in the entire setup, I have used the following options to mount (below is the entry from my /etc/fstab file<file system> <mount pt> <type> <options> <dump><pass>/dev/sda2 /data ext3 auto,exec,relatime,sync,barrier=1 0 2I came by these options from reading around on the internet. What I am worried about is that the content of /proc/mounts give the following:/dev/sda2 /data ext3 rw,sync,relatime,errors=continue,user_xattr,acl,barrier=1,data=writeback 0 0From what I understand from reading around is that I want to use data=journal option for my mount as this offers the best protection against data corruption. However, from the man page for specific ext3 options for mount it says the following about the writeback option:Data ordering is not preserved - data may be written into the main filesystem after its metadata has been committed to the journal. This is rumoured to be the highest-throughput option. It guarantees internal filesystem integrity, however it can allow old data to appear in files after a crash and journal recovery.I am very confused about this - the man page seems to suggest that for file system integrity I want to specify data=writeback option to mount but most other references I have found (including some published books on embedded linux) suggest that I should be using data=journal. What would be the best approach for me to use? Write speed is not an issue at all - data integrity is though. | What mount option to use for ext3 file system to minimise data loss or corruption? 
| mount;ext3;journaling | Don't get misled by the fact that only writeback mentions internal filesystem integrity.With ext3, whether you use journal, ordered or writeback, file system metadata is always journalled and that means internal file system integrity. The data modes offer a way of control over how ordinary data is written to the file system.In writeback mode, metadata changes are first recorded in the journal and a commit block is written. After the journal has been updated, metadata and data write-outs may proceed. data=writeback can be a severe security risk: if the system crashes while appending to a file, after the metadata has been committed (and additional data blocks allocated), but before the data has been written (data blocks overwritten with new data), then after journal recovery that file may contain blocks filled with data from previously deleted files from any user1. So, if data integrity is your main concern and speed is not important, data=journal is the way to go. |
_unix.260183 | Tmux doesn't pass ctrl-shift-arrow sequences correctly.It doesn't work in emacs, and when I use sed -n l, I see it displays the escape sequence of the arrow key alone instead of the full sequence.For example, ctrl-shift-right passes as ^[[C (which is the same as the escape sequence of the right key), instead of ^[OC (outside tmux).Any idea how to solve this?Note that ctrl-arrow (without shift) and shift-arrow (without ctrl) pass correctly.My .tmux.conf is:# Changes prefix from Ctrl-b to Alt-aunbind C-bset -g prefix M-aset-option -g default-terminal xterm-256color# choosing windows with Alt-#bind -n M-0 select-window -t 0bind -n M-1 select-window -t 1bind -n M-2 select-window -t 2bind -n M-3 select-window -t 3bind -n M-4 select-window -t 4bind -n M-5 select-window -t 5bind -n M-6 select-window -t 6bind -n M-7 select-window -t 7bind -n M-8 select-window -t 8bind -n M-9 select-window -t 9setw -g monitor-activity onset -g visual-activity onset-window-option -g window-status-current-bg whiteset -g mode-mouse onset -g mouse-resize-pane onset -g mouse-select-pane onset -g mouse-select-window on# Toggle mouse onbind m \ set -g mode-mouse on \;\ set -g mouse-resize-pane on \;\ set -g mouse-select-pane on \;\ set -g mouse-select-window on \;\ display 'Mouse: ON'# Toggle mouse offbind M \ set -g mode-mouse off \;\ set -g mouse-resize-pane off \;\ set -g mouse-select-pane off \;\ set -g mouse-select-window off \;\ display 'Mouse: OFF'# disable selecting panes with mouse (because enabling mess with copy-paste)set-option -g mouse-select-pane off# display status bar message for 4 secset-option -g display-time 4000# Start windows and panes at 1, not 0set -g base-index 1set -g pane-base-index 1# enable shift-arrow keysset-window-option -g xterm-keys on# start default shellset-option -g default-shell $SHELL# support for escape char for viset -s escape-time 0 | tmux doesn't pass ctrl-shift-arrow sequences correctly | terminal;keyboard
shortcuts;keyboard;tmux | It looks like tmux is doing the right thing for your example:For example, ctrl-shift-right passes as ^[[C (which is the same as the escape sequence of the right key), instead of ^[OC (outside tmux).because the usual connotation of that sequence is that it is the same as cursor-movement sent from the host. A zero parameter is the same as a missing parameter, which happens to be one.The terminal was not identified; xterm does not do that. For controlshiftright-arrow, xterm may send ^[[1;6C. In this case, tmux absorbs the escape sequence sent, because it is not in the table of known xterm-style keys that it knows about. In tmux, the file xterm-keys.c contains a table, with the comment:/* * xterm-style function keys append one of the following values before the last * character: * * 2 Shift * 3 Alt * 4 Shift + Alt * 5 Ctrl * 6 Shift + Ctrl * 7 Alt + Ctrl * 8 Shift + Alt + Ctrl * * Rather than parsing them, just match against a table. * * There are three forms for F1-F4 (\\033O_P and \\033O1;_P and \\033[1;_P). * We accept any but always output the latter (it comes first in the table). */ |
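The modifier table quoted from tmux's xterm-keys.c can be restated as a tiny encoder: the parameter inserted before the final character is 1 plus a bitmask (Shift=1, Alt=2, Ctrl=4), which is why Ctrl+Shift+Right arrives as ESC [1;6C. The function below is my own sketch of that numbering scheme, not code from tmux or xterm.

```python
def xterm_arrow(final, shift=False, alt=False, ctrl=False):
    # xterm-style modified arrow key: ESC [1;<mod><final>,
    # where <mod> = 1 + Shift*1 + Alt*2 + Ctrl*4 (so 2..8).
    mod = 1 + (1 if shift else 0) + (2 if alt else 0) + (4 if ctrl else 0)
    if mod == 1:
        # Unmodified arrow in normal (non-application) mode, e.g. ESC [C.
        return "\x1b[" + final
    return "\x1b[1;%d%s" % (mod, final)
```

For example, Shift+Right encodes with parameter 2 and Ctrl+Shift+Right with parameter 6, matching the table's rows.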
_unix.243366 | I have a directory bar:# file: bar/# owner: root# group: rootuser::rwxuser:little-jonny:rwxgroup::r-xmask::rwxother::r-xdefault:user::rwxdefault:user:little-jonny:rwxdefault:group::r-xdefault:mask::rwxdefault:other::r-xIf I create a directory inside bar with mkdir baz, I have for baz:# file: baz/# owner: root# group: rootuser::rwxuser:little-jonny:rwxgroup::r-xmask::rwxother::r-xdefault:user::rwxdefault:user:little-jonny:rwxdefault:group::r-xdefault:mask::rwxdefault:other::r-xIf I use mkdir -p foo, I have for foo:# file: foo/# owner: root# group: rootuser::rwxuser:little-jonny:rwx #effective:r-xgroup::r-xmask::r-xother::r-xdefault:user::rwxdefault:user:little-jonny:rwxdefault:group::r-xdefault:mask::rwxdefault:other::r-xWhy does mkdir -p use the default root rights when there is an explicit rule stating default:mask::rwx in bar?How do I force mkdir -p to behave like mkdir? (Without always passing -m775 as root; it should happen only when making directories inside bar.) | When using linux acl mkdir and mkdir -p do different things | linux;posix;acl;shell builtin;mkdir | null
_cs.206 | Let $L_1$, $L_2$, $L_3$, $\dots$ be an infinite sequence of context-free languages, each of which is defined over a common alphabet $\Sigma$. Let $L$ be the infinite union of $L_1$, $L_2$, $L_3$, $\dots$; i.e., $L = L_1 \cup L_2 \cup L_3 \cup \dots$. Is it always the case that $L$ is a context-free language? | Is an infinite union of context-free languages always context-free? | formal languages;context free;closure properties | The union of infinitely many context-free languages may not be context-free. In fact, the union of infinitely many languages can be just about anything: let $L$ be a language, and define for every $l \in L$ the (finite) language $L_l = \{ l \}$. The union over all these languages is $L$. Finite languages are regular, but $L$ may not even be decidable (and thereby definitely not context-free). The closure properties of context-free languages can be found on Wikipedia.
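A finite slice of the answer's construction can be played out concretely. Taking the classic non-context-free language L = {a^n b^n c^n : n >= 1}, every singleton L_w = {w} is finite (hence regular, hence context-free), yet their union is L itself; the sketch below just checks this bookkeeping on the first few words, since the full union is infinite.

```python
def L(n_max):
    # First n_max words of {a^n b^n c^n : n >= 1}.
    return {"a" * n + "b" * n + "c" * n for n in range(1, n_max + 1)}

def singletons(lang):
    # One finite (singleton) language L_w = {w} per word w in lang.
    return [{w} for w in lang]

lang = L(5)
union = set().union(*singletons(lang))  # union of the singletons
```

Each piece of the union is trivially context-free; the union recovers the original language, which is not.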
_unix.242423 | I am connected through VNC to a CentOS 6.4 machine at my workplace. Every five minutes a box pops up that says:Authentication is required to set the network proxy used for downloading packagesAn application is attempting to perform an action that requires privleges. Authentication as the super user is required to perform this actionPassword for root:DetailsRole unknownAction: org.freedesktop.packagekit.system-network-proxy-configureVendor: The PackageKit Project[Cancel] [Authenticate]I don't have the root password, so usually I just click it an make it go away but it tends to come back a few minutes later. My local sysadmin has tried to deal with the problem a few times and given up and told me just to keep closing the popup box. That said, its driving me nuts. Is there some way I can make it so I don't have to see the popup, even if the problem isn't itself fixed? Less preferably, is there some very easy thing I can tell the sysadmin to do to actually fix the problem? | Banish a popup error message | centos;pop up | null |
_webmaster.102061 | Recently, I moved a site from http to https and the sitemaps got updated. I submitted the sitemaps to Google. There were around 9 sitemaps for the site, each of them containing pages/images the site. The site isn't very big to need more than a sitemap (around 1200 pages site). Is it ok to have 9 sitemaps covering the site or shall I recreate just 1 sitemap? | Is it okay to have multiple sitemaps? | google;google search console;sitemap;xml sitemap | null |
_scicomp.311 | I am using the Weka workbench to train a protein fold classifier. I imported my training data into Weka and performed PCA-based feature selection. This seems to have worked fine, but now I cannot evaluate my trained classifier on the test data because the test data contains all the original attributes. Of course, if I try to run the feature selection on the test data, I will come up with a different set of features.In Weka, after you have applied feature selection to a training set, how do you pull those same features out of a test set? | Applying same feature selection to multiple data sets with Weka | machine learning | With any such modelling thing, you are going to have to recalculate the model using the new training set (i.e. the original minus its test set).The usual approach is to randomly extract a subset for testing. Then train using all of the remaining data points.Of course there will be some random variability according to which are extracted, so you can repeat the process to get some statistical significance. Your final model will not be trained on all of your data. You either have to live with that (what I usually see in the Natural Language Processing field), or once you have determined your best parameters compute the final model using all of your data - with the understanding you won't be able to test it.
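The general principle behind the question, in any toolkit, is to derive the transform from the training data only and then apply that same transform to the test data, rather than re-running selection on the test set. A rough language-agnostic sketch of that idea follows; a simple variance threshold stands in for Weka's PCA filter, and the helper names are hypothetical, not Weka API.

```python
def select_features(train_rows, min_variance=0.0):
    # Decide which column indices to keep, using the training data only.
    cols = list(zip(*train_rows))
    def var(c):
        m = sum(c) / len(c)
        return sum((v - m) ** 2 for v in c) / len(c)
    return [i for i, c in enumerate(cols) if var(c) > min_variance]

def apply_selection(rows, keep):
    # Apply the *same* column selection to any other data set (e.g. test).
    return [[row[i] for i in keep] for row in rows]
```

Because the kept indices come from training alone, the training and test sets end up with identical attribute sets, which is exactly what evaluating the classifier requires.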
_codereview.168725 | I wrote an RPN shell stack-based interpreter. It supports the following operators: +, -, *, /, %, ^. Right now it only works with positive numbers. It also features printing the top stack element when a newline is entered, and popping the result with the = operator. I decided to also add one variable available to the user, the x variable, which can be used in two ways: X (capital X) is used for reading the value of the variable and x (lowercase x) is used for writing the value to the variable.

#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <math.h>

#define CRESET "\033[0m"
#define CYELLOW "\033[33m"
#define CRED "\033[31m"

#define MAXOP 100
#define NUMBER '0'
#define MAXSTACK 100
#define BUFSIZE 100

int getop(char []);
void push(double);
double top(void);
double pop(void);
int getch(void);
void ungetch(int);

int sp = 0;
double stack[MAXSTACK];
char buf[BUFSIZE];
int bufp = 0;
double x = 0;

main () {
    int type;
    double _op;
    char s[MAXOP];
    while ((type = getop(s)) != EOF) {
        switch (type) {
        case NUMBER:
            push(atof(s));
            break;
        case '+':
            push(pop() + pop());
            break;
        case '*':
            push(pop() * pop());
            break;
        case '-':
            _op = pop();
            push(pop() - _op);
            break;
        case '^':
            _op = pop();
            push(pow(pop(), _op));
            break;
        case '/':
            _op = pop();
            if (_op != 0.0)
                push(pop() / _op);
            else
                printf(CRED "ERROR: Cannot divide by 0\n" CRESET);
            break;
        case '%':
            _op = pop();
            if (_op != 0.0)
                push(fmod(pop(), _op));
            else
                printf(CRED "ERROR: Cannot divide by 0\n" CRESET);
            break;
        case 'x':
            x = pop();
            break;
        case 'X':
            push(x);
            break;
        case '\n':
            printf(CYELLOW "Top: %.8g\n" CRESET, top());
            break;
        case '=':
            printf(CYELLOW "Result: %.8g\n" CRESET, pop());
            break;
        default:
            printf(CRED "ERROR: Unknown command %s\n" CRESET, s);
            break;
        }
    }
}

void push(double f) {
    if (sp < MAXSTACK)
        stack[sp++] = f;
    else
        printf(CRED "ERROR: Stack is full, cannot push %g\n" CRESET, f);
}

double top(void) {
    if (sp > 0)
        return stack[sp - 1];
    else {
        return 0.0;
    }
}

double pop(void) {
    if (sp > 0)
        return stack[--sp];
    else {
        printf(CRED "ERROR: Stack is empty\n" CRESET);
        return 0.0;
    }
}

int getop(char s[]) {
    int i, c;
    while ((s[0] = c = getch()) == ' ' || c == '\t')
        ;
    s[1] = '\0';
    if (!isdigit(c) && c != '.')
        return c;
    i = 0;
    if (isdigit(c))
        while (isdigit(s[++i] = c = getch()))
            ;
    if (c == '.')
        while (isdigit(s[++i] = c = getch()))
            ;
    s[i] = '\0';
    if (c != EOF)
        ungetch(c);
    return NUMBER;
}

int getch(void) {
    return (bufp > 0) ? buf[--bufp] : getchar();
}

void ungetch(int c) {
    if (bufp >= BUFSIZE)
        printf(CRED "UNGETCH: Too many characters\n" CRESET);
    else
        buf[bufp++] = c;
}

Samples

Input:
> 2 2 + =
Output:
Result: 4
Top: 0

Input:
> 3 x
> X 5 +
Output:
Top: 8

Input (x = 3):
> X 3 ^ 2 +
Output:
Top: 29

Could you review my code? Are there any issues I missed regarding the user input? | Reverse Polish Notation shell interpreter with one variable in C | c;interpreter | Here are some ideas for improving your program.

Use the correct prototype for main
Many compilers will default to an int return type when the function return type is undeclared, but it's not a good idea to omit it, especially with main. Instead of this:

main () {

write this:

int main() {

Eliminate global variables where practical
The code declares and uses 5 global variables. Global variables obfuscate the actual dependencies within code and make maintenance and understanding of the code that much more difficult. They also make the code harder to reuse. For all of these reasons, it's generally far preferable to eliminate global variables and to instead pass pointers to them. That way the linkage is explicit and may be altered more easily if needed. In this case, I'd suggest gathering everything together into a structure within main's scope and then passing that structure to each of the functions.
For example, it might be defined like this:

typedef struct stack_s {
    int sp;
    int bufp;
    double x;
    double stack[MAXSTACK];
    char buf[BUFSIZE];
} StackMachine;

And then within main it could be defined like this:

StackMachine sm = { .sp = 0, .bufp = 0 };

Then push could be reimplemented like this:

void push(StackMachine *sm, double f) {
    if (sm->sp < MAXSTACK)
        sm->stack[sm->sp++] = f;
    else
        printf(CRED "ERROR: Stack is full, cannot push %g\n" CRESET, f);
}

Be consistent with the user interface
When the user uses an empty line to see the top of the stack, it reports 0 instead of "STACK EMPTY" when the stack is empty. This is inconsistent with the other operations. |
_codereview.145590 | I'm implementing my very first Go application and I would like to receive a hint on how to make a certain check faster. I have a huge txt file in which each line is structured like the following:

key text
key text
.
.
key text

Where key is ALWAYS 6 HEX digits long and text is ALWAYS 16 HEX digits long. I need to find and print all the lines which have the same text value. For instance, suppose we have 2 lines like the following:

000000 1234567890123456
111111 1234567890123456

They should both be printed. Here's my code:

r, _ := os.Open("store.txt")
scanner := bufio.NewScanner(r)
for scanner.Scan() {
    line := scanner.Text()
    text := line[7:]
    if !contains(duplicates, text) {
        duplicates = append(duplicates, text)
    } else {
        t, _ := os.Open("store.txt")
        dupScan := bufio.NewScanner(t)
        //currLine := dupScan.Text()
        for dupScan.Scan() {
            currLine := dupScan.Text()
            currCipher := currLine[7:23]
            if( currCipher == text ){
                fmt.Println(currLine)
            }
        }
        t.Close()
    }
}
fmt.Println("Done Check")
//Close File
r.Close()

func contains(s []string, e string) bool {
    for _, a := range s {
        if a == e {
            return true
        }
    }
    return false
}

It works just fine but, since I have millions of lines, it's really slow. I'm only capable of doing it with a single thread, not using goroutines. Have you any suggestion on how I should change this code to speed up the check? | Print all lines of a text file containing the same duplicated word | performance;beginner;file;go | You have a bug
I don't think your program works as intended. If a line exists N > 1 times, it will be printed N(N - 1) times. Because:

- The first time it's seen, it will be added to duplicates and not printed. So far so good.
- The 2nd, 3rd, ...
Nth time it's seen, all N occurrences will be printed.

When N = 2, the program will appear to work fine. But when N > 2 there will be excess output of duplicates, which is probably not the way you intended.

Performance
There are several inefficient operations.

The worst is re-reading the file for every duplicate. If the million lines contain pairs of 500000 values, the file will be read 500001 times.

Another inefficient operation is storing the unique values in a slice, instead of a map. The contains implementation based on a slice has O(n) time complexity, which would be O(1) using a map.

To improve performance you could use a map of counts. Read the entire file once to build this map. Unique values will have a count of 1, duplicates will have > 1. Read the file again, checking the count in the map; if it's > 1 then print the line.

Resource management and error handling
In Go, it's recommended to use defer to close resources soon after you opened them, so you don't forget later. So instead of this:

fh, err := os.Open(path)
// large block of code
fh.Close()

It's recommended to write like this:

fh, err := os.Open(path)
if err != nil {
    panic("could not open file: " + path)
}
defer fh.Close()
// rest of the code

Closely related is error handling. You ignored the error value returned by os.Open, which is not wise. The rest of the program will make little sense if opening the file fails.
Always remember to consider and handle errors.

Coding style
It's good to run go fmt on your program before asking for a review. It automatically reformats it following the standard style of Go.

Suggested implementation
Putting the above tips together:

func main() {
    path := "/tmp/store.txt"
    counts := readCounts(path)
    check(path, counts)
}

func check(path string, counts map[string]int) {
    fh, _ := os.Open(path)
    defer fh.Close()

    scanner := bufio.NewScanner(fh)
    for scanner.Scan() {
        line := scanner.Text()
        text := line[7:]
        if counts[text] > 1 {
            fmt.Println(line)
        }
    }
}

func readCounts(path string) map[string]int {
    fh, err := os.Open(path)
    if err != nil {
        panic("could not open file: " + path)
    }
    defer fh.Close()

    scanner := bufio.NewScanner(fh)
    counts := make(map[string]int)
    for scanner.Scan() {
        line := scanner.Text()
        text := line[7:]
        counts[text] += 1
    }
    return counts
} |
_unix.81258 | I'm trying to figure out whether AUCTeX is actually calling dvipng when it renders LaTeX previews. While this may not the best way to find this out, one possibility is to check whether the executable dvipng is being called at all - nothing else on the system would be using it. The compilation output does not mention dvipng, and top does not show it being run.For non-emacs users, AUCTeX is an emacs package that runs inside emacs and can call external executables, i.e. dvipng.So, my question is: for an arbitrary executable, is there some way to check whether and if so when, it has been run in the recent past? More information, like what arguments it has been called with, would also be useful.I tried seeing whether the emacs process called dvipngby using strace (I don't know if I did this correctly) by doing$ strace emacs corrmodel.tex 2>&1 | grep dvipngand then running a compilation, but I just got the outputread(15, falias 'preview-start-dvipng #[n..., 4096) = 4096Is this a correct procedure? Is there a better way? | Is there some way to find out if/when an executable was called? | executable;audit | null |
_unix.41470 | I want to automatically update a Debian system (actually Debian Wheezy on Raspberry Pi, though that shouldn't make much a difference). I've already seen that there is cron-apt, probably a good choice, as it also includes mail notifications on errors. I am however not sure about updates involving reboots like e.g. new kernel updates? Does cron-apt automatically reboot a system after a new kernel image was installed? Or can it at least notify me on update of specific packages? | Automatically update a Debian system | debian;kernel;upgrade | null |
_webmaster.1782 | I'm looking for an Open Source website backup tool. I'm more interested in Open Source so I can make changes if need be and possibly contribute to the software.

- Automatic scheduled FTP backups from multiple web servers.
- MySQL backups from databases (only partially important, as I can just do MySQL dumps and get those with FTP).
- Differential and/or incremental backups (important for bandwidth and disk space).
- Windows 7 or Linux support.

I'm not really sure if this is a better question for Server Fault, but I feel it can live here easily enough. Thank you for any suggestions. Software I've found: Cobian Backup. Note this is for backing up data on web servers, usually shared hosting. Installing software on the remote server is impossible, so FTP and MySQL access is about it. | Open Source website backup tool suggestions | php;mysql;backups;site maintenance | null
_unix.384541 | I just recently upgraded to Ubuntu 16.04 LTS (running Gnome Flashback-Metacity, if that matters) and whenever I run mutt in Terminal, there is a black box that appears after the exit output mutt provides:This does not happen with another interactive command such as vi nor a command that returns output such as grep.I've tried a variety of settings changes on the default profile with no effect, unless I change to having white text on black background. Here are the current colors settings:Anyone have hints as to where I should be looking to fix this? Thanks! | Terminal window writes black block in output after mutt return | ubuntu;terminal;gnome terminal;mutt | null |
_cs.6668 | I'd like to know if there is a function $f$ from n-bit numbers to n-bit numbers that has the following characteristics:

- $f$ should be bijective
- Both $f$ and $f^{-1}$ should be calculable pretty fast
- $f$ should return a number that has no significant correlation to its input.

The rationale is this: I want to write a program that operates on data. Some information of the data is stored in a binary search tree where the search key is a symbol of an alphabet. With time, I add further symbols to the alphabet. New symbols simply get the next free number available. Hence, the tree will always have a small bias towards smaller keys, which causes more rebalancing than I think should be needed. My idea is to mangle the symbol numbers with $f$ such that they are widely spread over the whole range of $[0, 2^{64}-1]$. Since the symbol numbers only matter during input and output, which happens only once, applying such a function should not be too expensive. I thought about one iteration of the Xorshift random number generator, but I don't really know a way to undo it, although it should theoretically be possible. Does anybody know such a function? Is this a good idea? | Function that spreads input | binary trees;hash;binary arithmetic | null
_webmaster.77559 | I'm under the impression that PHP-based proxies download the content of a site to their server before sending it back to the browser, just like browsing any standard website. Am I correct in saying this? Note that URL filtering is not the issue here. | If a corporate network disables peer-to-peer networking, will that affect web-based proxy services? | proxy;filtering | You are correct - the web proxy will act as a middleman between you and the content being requested through the proxy. P2P being disabled will in no way affect the results of using a web-based proxy, as long as you are able to successfully connect to the web proxy server.

Though there are still some things to consider, depending on the quality of the web proxy you are using. The majority of web-based proxies will have issues with complex JavaScript, Flash objects, Java, and other kinds of on-load event processing. Some web proxies will not properly proxy all content for various reasons (usually because the proxy parser is unable to understand certain elements of a website being viewed through the service), which means any content that doesn't get properly sent through the proxy will still be loaded directly from your browser, which would still be susceptible to any sort of blocks or filters.
_unix.149017 | I tried tailing two files using the option:

tail -0f file1.log -0f file2.log

In Linux I see the error "tail: can process only one file at a time". In AIX I see the error "Invalid options". This works fine when I use:

tail -f file1 -f file2

in Linux, but not in AIX. I want to be able to tail multiple files using -0f or -f in AIX/Linux. multitail is not recognized in either of these OSes. | How to tail multiple files using tail -0f in Linux/AIX | tail | null
_reverseengineering.13204 | I am new to malware analysis .. and I was analyzing some 'windows' apps and found functions that I thought it exist only on malware, is this possible or there is something wrong with my analysis ? I am using Cuckoo sandbox .. the functions are: SetWindowsHookExA, IsDebuggerPresent .. and others as well One of the app examples is AcroRd32.exe: It calls IsDebuggerPresent .. and this is its page on virustotal including all the information related to the sample in addition to the MD5.https://www.virustotal.com/en/file/9e702e7b53f6f00e344a1cb42c63eaf4d52ed4adb5407744a654753569044555/analysis/ | Can benign applications have such APIs? | malware;winapi;virus | IsDebuggerPresent is found in most executables compiled with Visual C++ in the setup code that is executed before the main function. There are also legitimate use cases for SetWindowsHookExA, so you will often see them in clean executables. |
_unix.208528 | I see other people doing this, occasionally. They'll add something like the following to the start of their terminal, sort of a welcome screen: ____ _____ _ _ _____ __ __ _ _ __ _ _ ____ ____ _____ ____ __ __ _ _ __ ___ __ __ __ ____ ___( _ \( _ ) ( \/ )( _ )( )( ) ( \/\/ ) /__\ ( \( )(_ _) (_ _)( _ ) ( _ \( ) /__\ ( \/ ) /__\ / __) /__\ ( \/ )( ___)(__ ) )(_) ))(_)( \ / )(_)( )(__)( ) ( /(__)\ ) ( )( )( )(_)( )___/ )(__ /(__)\ \ / /(__)\ ( (_-. /(__)\ ) ( )__) (_/(____/(_____) (__) (_____)(______) (__/\__)(__)(__)(_)\_) (__) (__) (_____) (__) (____)(__)(__)(__) (__)(__) \___/(__)(__)(_/\/\_)(____) (_)It happens when the shell starts, and I would like to have it happen for me when the shell starts, too. I am pretty proficient with vim for text editing, so I think I could figure out a way to do it. If vim fails, I can use something like the following, but how do I make it come up not garbled when I start my new shell? Please note that this question is not just about ASCII art, but is also about how to successfully add it to my bash, and about possible escapes required for the bash shell to get it to work properly.Creating diagrams in ASCII | How do I add ASCII art to my Bash? | shell script;ascii art | null |
_reverseengineering.15964 | I have two files that have the extensions .P1 and .P2. These files are ROM sound files from a particular 'real world' fruit machine (also known as a slot machine). There is some fruit machine emulation software (MFME) that reads both the sound ROM files and the game ROM files in order to recreate the fruit machine within a PC. I'm playing around creating my own fruit machine with the editor side of MFME, based on an existing fruit machine, and want to edit the sounds on this machine. Unfortunately MFME doesn't have the functionality to edit sound files, only to read them. The sound ROMs apparently comprise all the sounds packed one after another, with a table at the start pointing to where each sound file is (lots of short beeps, buzzes, pings etc. as you would expect from a fruit machine). Does anyone know what software I would need to edit these files in order to insert my own sounds? I originally thought I would need a disassembler, but when I downloaded one it wouldn't open them as they weren't either .exe or .dll files (I tried changing the extension of the file but the disassembler still knew they weren't the correct type of file). How can I go about reverse engineering these files in order to edit them? Any ideas? Thanks. :-) | How to edit a type of sound file used with a fruit machine emulator? | disassembly | The MFME source code is available to read.

You'll probably want to start with interface.cpp, which has most of the logic for deciding which sound format to load in TForm1::Load(String). There are at least ten or so different formats from the look of it.

You can then take a look at sample.cpp, which has the implementation for loading files. LoadJPMSound is of particular interest, but there are other formats to consider.

Here are the key facts for JPM sounds:

- File starts with a 1-byte sample count prefix (so I guess 255 samples max?)
- Next comes a 4-byte magic number (0x5569A55A)
- After that there's some sort of page table which describes the addresses of the sound data within the ROM.
- Each audio page has a 1-byte flag value at the start. The two MSBs specify the kind of sound:
  - 00 means silence, where the remainder of the byte (i.e. flag & 0x3F) specifies how long the silence should be in increments of 20 samples. A zero value here means an empty audio sample.
  - 01 means there are 256 nibbles of audio. I'm not sure if this means 4-bit values, but I haven't seen any bit manipulation in the decoding. The sample rate is set to 160000 divided by the remainder of the flag value plus one.
  - 10 means the same as above, except the number of nibbles is stored as a byte following the flag.
  - 11 means that the sample is a repeating loop. The number of repeats is computed as (flag & 0x7) + 1.

For YMZ samples, the file contains 250 sample entries, each consisting of:

- 16-bit masked sample count, computed as count = (((value & 0xFF00) >> 8) / 6) * 1000
- High byte of buffer pointer.
- High byte of sample pointer.
- Mid byte of buffer pointer.
- Mid byte of sample pointer.
- Low byte of buffer pointer.
- Low byte of sample pointer.

Sample data is in the following format:

- 4-bit YMZ280B format sample data (channel A)
- 4-bit YMZ280B format sample data (channel B)

The YMZ280B format is a step-based format where each next signal value in the wave is encoded as the difference from the previous signal value (or 0 for the first sample). The 4-bit value is an index into a lookup table containing the possible steps. The step LUT is calculated as follows:

// nib from 0000 to 1111
for (nib = 0; nib < 16; nib++) {
    int value = (nib & 0x07) * 2 + 1;
    diff_lookup[nib] = (nib & 0x08) ? -value : value;
}

That should hopefully get you started. The source code should get you the rest.
_softwareengineering.125399 | Could Designing by Contract (DbC) be a way to program defensively? Is one way of programming better in some cases than the other? | Differences between Design by Contract and Defensive Programming | design patterns;contract;defensive programming | Design by Contract and defensive programming are in some sense opposites of each other: in DbC, you define contracts between collaborators and you program under the assumption that the collaborators honor their contracts. In defensive programming, you program under the assumption that your collaborators violate their contracts.A real square root routine written in DbC style would state in its contract that you aren't allowed to pass in a negative number and then simply assume that it can never encounter a negative number. A real square root routine written defensively would assume that it is passed a negative number and take appropriate precautions.Note: it is of course possible that in DbC someone else will check the contract. In Eiffel, for example, the contract system would check for a negative number at runtime and throw an appropriate exception. In Spec#, the theorem prover would check for negative numbers at compile time and fail the build, if it can't prove that the routine will never get passed a negative number. The difference is that the programmer doesn't make this check. |
_cstheory.837 | I wanted to compute the complexity of a smoothed $\ell_0$ algorithm in BigO notation. The algorithm can be found here. Can anybody help me in this regard? | Complexity of Smoothed $\ell_0$ algorithm | compressed sensing | null |
_webmaster.99706 | Please advise: is it good from an SEO perspective if my website content is in a side sticker, like in the attached screenshot? If I click anywhere outside it, this content gets hidden; also, I need to use a slider to read the whole content. | Is it good from SEO perspective to use Content in Side Stickers? | seo;content | null
_unix.359074 | I have another question.I have three different directories but I have the same files inside, with the same name. For example:Directory1:dir1/file1.txtdir1/file2.txtdir1/file3.txtDirectory2:dir2/file1.txtdir2/file2.txtdir2/file3.txtDirectory3:dir3/file1.txtdir3/file2.txtdir3/file3.txtI would like to paste the files with the same name. Like it:dir1/file1.txt + dir2/file1.txt + dir3/file1.txt = file1.txtdir1/file2.txt + dir2/file2.txt + dir3/file2.txt = file2.txtdir1/file3.txt + dir2/file3.txt + dir3/file3.txt = file3.txtHow can I do it? | How can I paste files with the same name from different directories? | files;paste | null |
_softwareengineering.47331 | How often are QA engineers responsible for developing mock objects for unit testing? Is dealing with mock objects just a developer's job? The reason I ask is that I'm interested in QA as my career, and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know to what level unit testing is done by the developer, and from what point the QA engineer takes over testing for better test coverage. Thanks.

Edit: Based on the answers below, I am providing more details about what QA I was referring to. I'm interested more in test automation than in simple QA involved in record-and-play of scripts. So are test automation engineers responsible for developing frameworks, or do they have a team of developers dedicated to framework development? Yes, I was asking about the usage of mock objects for testing from a test automation engineer's perspective. | Mock Objects for Testing - Test Automation Engineer Perspective | testing;qa | Hey.

First question: do you want to use xUnit frameworks, mock frameworks, and write code? If not, don't bother. 90% of jobs for testers don't include writing code, so if it is not something you are looking for, you can skip this set of knowledge. On the other hand, if you like writing code but somehow don't think about being a developer, there is the possibility to work in test automation, which will require coding skills. The particular programming language will depend on the tool/application stack, but you will be required to write code.

As for xUnit frameworks, you probably won't write unit tests (as mentioned, a dev job), but it is possible you will be using them as a runner for your tests. For example, Selenium that was mentioned here doesn't require coding skills if you use SeleniumIDE, which is only one of its products. If you use SeleniumCore, then you are using an API that wraps around the browser. In this case you write code that will perform tests on a given application.
And if you put this code into an xUnit framework, you will have a runner and reports with it.

As for mock objects, you will be using them in very rare situations - maybe when you are building an automation framework for your app. But depending on the approach, you can skip them.

EDIT: As per the new answers and the edit of the main question.

I agree with c_maker - you probably won't be writing unit tests for application code, but it is possible to write unit tests for your automation framework: software is software, even if it is software testing other software. Here again, as c_maker said, if you wrote GUI-level tests with Selenium, those are acceptance tests, not unit tests.

Anyway, check the following links so you will know how the work of a test automation engineer may look: - Quick overview - Bigger explanation - Inspiration for all the above, and a few PDFs describing it
_unix.330107 | I expected

echo |

to do the following:

1. Print the empty string to stdout.
2. Pipe stdout to stdin.

What I would have expected from writing the empty string to stdin is: nothing. What happened instead: a prompt > appears, which behaves like a bash inside the bash:

> echo m
m

Why is that? | Basic bash behavior | bash | What are you trying to pipe into? The | must be followed by another command, and bash shows > prompting you to complete the pipeline.

To do both of:

1. Print the empty string to stdout.
2. Pipe stdout to stdin.

echo -n '' | cat

Here cat is just a placeholder for your second command, which in this case just sends its stdin to its stdout.
_cstheory.16649 | Steiner Tree Problem: Given a weighted graph G(V, E, w), where w is the weight function on edges, and a subset of vertices S ⊆ V called terminals, a Steiner tree is a connected subgraph which connects all vertices in S. Finding a minimum-weight Steiner tree is called the Steiner Tree Problem.

Node-weighted Steiner Tree Problem: Given a weighted graph G(V, E, w), where w is the weight function on nodes, and a subset of vertices S ⊆ V called terminals, a node-weighted Steiner tree is a connected subgraph which connects all vertices in S. Finding a minimum-weight such tree is called the Node-Weighted Steiner Tree Problem.

My question is: can any Steiner Tree Problem be converted to a Node-Weighted Steiner Tree Problem? I have the following approach and need to verify whether it is correct:

Let's assume we have a Steiner tree problem with weighted edges. Consider an edge (u, v) with weight w assigned to it. Let's place a vertex X on this edge, so that the edge splits into two edges: (u, X) and (X, v). Assign zero weight to each of those edges, and assign weight w to vertex X. Repeat this for every edge of the initial graph. The obtained graph will be node-weighted, and any solution to the node-weighted problem can be easily converted to match the initial Steiner Tree Problem.

Basically, I need someone to verify that all of the above is valid. I would also greatly appreciate any references from reliable sources regarding this issue. | Can any Steiner Tree Problem be converted to Node-Weighted Steiner Tree Problem? | graph theory | Sounds right. Denote by $G$ the original graph, and by $G'$ the graph after introducing the new vertices.

The first direction is trivial - every Steiner tree in $G$ corresponds to a Steiner tree in $G'$, with the same weight (by splitting each edge with its new vertex).

The second direction is less trivial. Still, consider a minimal Steiner tree $T$ in $G'$. We can assume w.l.o.g. that $T$ does not contain any leaf that corresponds to a new edge-vertex.
Indeed, this vertex is not in $S$, so if it is a leaf, it can be removed from the tree, keeping it a Steiner tree with minimal weight (unless you allow negative weights, which I assume you don't).

Now that we have a Steiner tree without edge-vertex leaves, we can convert it to a Steiner tree in $G$: root the tree at some vertex that is not an edge-vertex; then, by the construction, all the edge-vertices have a single child in this tree. So you can replace each pair of edges $(u,X),(X,v)$ by $(u,v)$ in the original graph, obtaining a Steiner tree with the same weight.

I don't see any bugs here, but maybe I missed something.
_codereview.107991 | Inspired by reading How-to write a password-safe class?, I tried some clever (or dumb-fool) hack to create a widely-useable secure string using the std::basic_string template, which does not need to be explicitly securely erased itself. At least gcc and clang seem not to choke on it (coliru):

#include <string>

namespace my_secure {

void SecureZeroMemory(void* p, std::size_t n) {
    for (volatile char* x = static_cast<char*>(p); n; --n)
        *x++ = 0;
}

// Minimal allocator zeroing on deallocation
template <typename T> struct secure_allocator {
    using value_type = T;
    secure_allocator() = default;
    template <class U> secure_allocator(const secure_allocator<U>&) {}
    T* allocate(std::size_t n) { return new T[n]; }
    void deallocate(T* p, std::size_t n) {
        SecureZeroMemory(p, n * sizeof *p);
        delete [] p;
    }
};

template <typename T, typename U>
inline bool operator== (const secure_allocator<T>&, const secure_allocator<U>&) {
    return true;
}

template <typename T, typename U>
inline bool operator!= (const secure_allocator<T>&, const secure_allocator<U>&) {
    return false;
}

using secure_string = std::basic_string<char, std::char_traits<char>, secure_allocator<char>>;
}

namespace std {
// Zero the string's own memory on destruction
template<> my_secure::secure_string::~basic_string() {
    using X = std::basic_string<char, std::char_traits<char>, my_secure::secure_allocator<float>>;
    ((X*)this)->~X();
    my_secure::SecureZeroMemory(this, sizeof *this);
}
}

And a short program using it to do nothing much:

//#include "my_secure.h"
using my_secure::secure_string;

#include <iostream>

int main() {
    secure_string s = "Hello World!";
    std::cout << s << '\n';
}

Some specific concerns:

- How badly did I break the standard?
- Does the fact that one of the template arguments is my own type heal the fact that I added my own explicit specialization to ::std?
- Are the two types actually guaranteed to be similar enough that my bait-and-switch in the destructor is OK?
- Is there any actual implementation where the liberties I took with the
standard will come back to haunt me? Did I miss any place where I should zero memory after use? Or is there any chance that anything will slip by? | Hacking a SecureString based on std::basic_string for C++ | c++;strings;c++11;security;securestring | The only thing special about SecureZeroMemory (the Windows version) is that it uses volatile to prevent optimization. Therefore there's no reason to write your own. Just defer to standard algorithms:

std::fill_n((volatile char*)p, n * sizeof(T), 0);

You will probably want to call the global delete:

::operator delete[](p);

Add noexcept:

template <class U> secure_allocator(const secure_allocator<U>&) noexcept {}
_webmaster.81359 | I have a site that has many articles on a specific dog breed. I want to help Google and other search engines understand that each article is specifically on this breed or a topic related to this breed. I thought about using productontology.org to define an additional type. For example, I have the following Schema.org markup:

<article itemscope itemtype="http://schema.org/Article">
  <link itemprop="additionalType" href="http://www.productontology.org/id/dog_breed" />
  <!-- Additional Code + Schema.org markup -->
</article>

Is this the correct way to indicate with Schema.org markup that this article is on (or related to) this specific breed? If not, what is the proper way using Schema.org? Note: I understand fully that the best way is with great content that uses keywords. However, I am looking to know how to do this from a Schema.org perspective. | Schema.org markup for an Article with additionalType? | microdata;schema.org;productontology.org | No, your example would mean that it's a schema:Article and a pto:Dog_breed.

To state what the schema:Article is about, you could use its about property. The elaborate version would be:

<article itemscope itemtype="http://schema.org/Article">
  <div itemprop="about" itemscope itemtype="http://schema.org/Intangible">
    <link itemprop="additionalType" href="http://www.productontology.org/id/Dog_breed" />
    <!-- properties about dog breed(s) -->
  </div>
  <!-- properties about the article -->
</article>

Notes:

- Schema.org has no class for dog breeds, so you'd have to choose the closest broader class. I think it would be schema:Intangible; otherwise the top class schema:Thing.
- Using pto:Dog_breed means that the article is about the concept of dog breeds, or dog breeds in general, but not about a specific dog breed.
- It should be /id/Dog_breed, not /id/dog_breed (URIs are case-sensitive).
_webmaster.78014 | In a mediawiki page, I would like to have a checkbox that applies to all pages, so that templates on the page can use the value of the checkbox and, when the checkbox is changed, the page refreshes. The checkbox could be used, say, to relegate details in the text into footnotes. E.g. an editor could write: Trains from Bristol Parkway {{Details|opened in 1971}} go to London. The user would see: Trains from Bristol Parkway^1 go to London or Trains from Bristol Parkway (opened in 1971) go to London, depending on their preference. The template Details would be something like {{#ifeq:UserPreference|IncludeDetailsInText|({{{1|}}})|<ref>{{{1|}}}</ref>}} How do you fetch the UserPreference? Should I use a Widget? | How to use a user preference in a Mediawiki template | mediawiki;forms | null
_webmaster.67700 | How do I prevent the following website and similar tools from finding subdomains of domains on my server? https://pentest-tools.com/reconnaissance/find-subdomains-of-domain | How do I prevent subdomain finders from finding subdomains on my server? | subdomain | null
_softwareengineering.316431 | In my experience, technical design is made more challenging when it is divorced from implementation, particularly by assigning the roles to different people, because it's easy for the designer to overlook a myriad of implementation details/gotchas. In theory, I like the idea, though. So my question is: should we be striving for this separation? | Is separating design from implementation a net win? | design;development process | null
_cseducators.211 | One of the most challenging concepts to instill in new CS students is 0-indexing (indeed, the pedagogy of this fact probably merits its own discussion). Another difficult topic -- although a slightly more advanced one -- is pointers. (I'm thinking particularly of programming in C.) With this question I'm wondering if part of the difficulty is the syntactic sugar in C that allows the following to be equivalent:

// declare array of 5 ints on the stack
int num[5];

// use array notation to change first element
num[0] = 42;

// use pointer arithmetic to change first element
*(num + 0) = 42;

With array notation, we are explicit with 0-indexing, but the logic of it isn't apparent. Yet, with pointer arithmetic, it's clearer (I think...) why we use 0: the pointer stores the base address, so dereferencing the pointer brings us to that address, which is where the array logically begins. Comfort with this leads to the topic of memory management on the heap with something like this:

// declare array of 5 ints on the heap
int *num = malloc(sizeof(int) * 5);

// use pointer arithmetic to change first element
*num = 42;

I am toying with expanding my introduction on arrays next year to include this particular use of pointers. I recognize that it would involve taking maybe a single lesson on arrays and expanding it to something closer to a week in order to tie together all these ideas. However, is it worth putting aside the syntactic sugar in order to understand the indexing of arrays more accurately? On the other hand, does the introduction of pointers and memory complicate the process so much that confusion about arrays will increase rather than decrease? (I'm thinking of this as a lesson-idea feedback discussion.) | Lesson Idea: Arrays, Pointers, and Syntactic Sugar | lesson ideas;arrays;c;syntax;abstraction | null
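One way to make the equivalence concrete in class is a short compilable demonstration (a sketch for the lesson, not code from the original post) showing that the pointer form and the subscript form reach the same element:

```c
/* num[i] is defined as *(num + i): the array name decays to a pointer
 * to its first element, so index 0 means "zero elements past the base
 * address" -- which is exactly why arrays start at 0. */
int first_via_pointer(void) {
    int num[5] = {0};
    *(num + 0) = 42;   /* write through pointer arithmetic...         */
    return num[0];     /* ...read it back with subscripting: gives 42 */
}
```

Stepping through both lines ties the subscript notation back to the base-address picture.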
_codereview.15165 | I have the following Ruby code snippet:

url = "http://somewiki.com/index.php?book=#{author}:#{title}&action=edit"
encoded = URI.encode(url)
encoded.gsub(/&/, "%26").sub(/%26action/, "&action")

The last line is needed since I have to encode the ampersands (&) in the author and title, but I need to leave the last one (&action...) intact. Consider this example for clarity:

author = "John & Diane"
title = "Our Life & Work"
url = "http://somewiki.com/index.php?book=#{author}:#{title}&action=edit"
encoded = URI.encode(url)
encoded.gsub(/&/, "%26").sub(/%26action/, "&action")
# => "http://somewiki.com/index.php?book=John%20%26%20Diane:Our%20Life%20%26%20Work&action=edit"

Although this result is satisfactory, I'd like to clean up the original code by getting rid of the gsub/sub calls. Any ideas? PS: I could clean both params before passing them to the url = ... line, but then the % characters will be re-encoded as %25 by the URI.encode call, so I don't think that's an option. I'd love to be proved wrong here, though. | Encoding special characters in URLs with Ruby | ruby | Unless you need the unencoded url (for logging?), I would just:

encoded_url = "http://somewiki.com/index.php?book=#{CGI.escape(author)}:#{CGI.escape(title)}&action=edit"
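The accepted answer's approach can be wrapped in a small helper (a sketch; note that CGI.escape encodes spaces as +, which is equivalent to %20 in a query string):

```ruby
require 'cgi'

# Escape only the user-supplied pieces before assembling the URL, so the
# literal "&action=edit" separator is never touched.
def wiki_edit_url(author, title)
  "http://somewiki.com/index.php?book=#{CGI.escape(author)}:#{CGI.escape(title)}&action=edit"
end
```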
_webmaster.101450 | I've set up DFP to do dynamic allocation, but I can't seem to see any Ad Exchange stats now. It says "AdSense/Ad Exchange stats", and the Ad Exchange section is all 0's. I've also noticed the stats do not show in the standard AdX system. How can I tell if AdX is actually filling? It looks like only AdSense is filling, based on the Ad Exchange stats showing 0. | Where do I see stats for AdX ad units when using DFP? | doubleclick ad exchange | null
_unix.4462 | Possible Duplicate: Make package explicitly installed in pacman. So I've been stupid and removed my /var/lib/pacman/local dir. Without backup, but with a root shell open at tty4. Oh yeah, pacman -Sf with the base repo also overwrites your /etc/passwd. Thank god for .pacorig. I still have my pacman logs, but all the packages are already installed. Pacman outputs errors that the files already exist. How can I install packages without modifying files? (pacman -S --fake --noextract) | Pacman/Arch: Install package(s) without really installing them | arch linux;package management;pacman | null
_webmaster.35184 | I was looking at some cloud hosting prices. Consider an entry-level self-hosted server:

PRICE: 40
----------
CPU: i5 (4x 2.66 GHz)
RAM: 16GB
Hard disk: 2TB
Bandwidth: 10TB/month with 100Mbps

Now consider an equivalent on a cloud structure (for example phpfog):

PRICE: 29$
--------------
RAM: 613MB (LOL WUT?)
CPU: 2 Burst ECUs
Storage: 10GB (WUT?)

Basically, with cloud, to have the same hardware as your entry-level dedicated server you have to pay 300-400... Is this normal? Am I missing something? | Cloud hosting vs self hosting price | cloud | null
_unix.339758 | Say I have a custom kernel from my distribution; how can I get a list of all the options the kernel was built with? It's possible to get them by reading the config file from the kernel package in the vendor's repo, but is there any other way? I mean ways to get that information from the kernel itself, maybe from procfs? | How to determine the options the Linux kernel was built with? | linux;kernel;configuration;options;procfs | In addition to what @Stephen Kitt said, at least on my Debian system you can find the information in:

/boot/config-<version>

Where version, in my case, is:

3.16.0-4-686-pae

So, issuing:

less /boot/config-3.16.0-4-686-pae

spits out the kernel configs in a long list!
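In addition to the /boot file the answer points at, some kernels expose their configuration through procfs: /proc/config.gz exists when the kernel was built with CONFIG_IKCONFIG_PROC=y. A small sketch that tries both sources:

```shell
# Print the running kernel's build configuration, trying procfs first
# and falling back to the distro-installed file under /boot.
show_kernel_config() {
    if [ -r /proc/config.gz ]; then
        zcat /proc/config.gz
    elif [ -r "/boot/config-$(uname -r)" ]; then
        cat "/boot/config-$(uname -r)"
    else
        echo "no readable kernel config found" >&2
    fi
}
```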