id | question | title | tags | accepted_answer |
---|---|---|---|---|
_webmaster.54184 | I have a site that is a portal or directory for service providers. We opened a page for every service provider on our site, but now we get a lot of applications from those providers who want sites of their own. We want to make a full site for every service provider, but rather put them on subdomain URLs. (They don't mind, it's OK for them.) So, my site is www.example.com and their sites will be: provider.example.com. Now I have two questions: Can the content on the provider sites harm my site in SEO? If one of those subdomains is punished by Google because the owner does black hat SEO, how will it affect the root domain? Can it make the root domain get punished? | Can third party content on sub-domains harm the main site's search rankings? | seo;subdomain;blackhat | null |
_unix.23217 | I want to play an audio file over VoIP in order that people can listen to it with a phone call. One way to do this would be to set up the audio file as an audio device where the audio starts playing at a specified time.There are better ways of playing my file over VoIP, but I'm still curious as to how you would set up the audio device that I discuss. What would I do?You can see this as the reverse of this question. | Audio file as an audio input device | audio;voip | null |
_softwareengineering.301967 | I'm developing a Java web application. It is a three layered architecture:web > service > repository.I'm thinking of creating many exceptions - each specific to each individual error and in the service layer where my business logic resides I want to throw specific exceptions that is tied to the business error.And in the web layer (further up the stack where it is closer to the front end) is where I'm catching it and handling with them accordingly.Someone said to me, don't do it this way because it is bad practice and advised me to just throw one single generic business exception instead - and he was adamant he was right. When I heard this, my reaction was that it goes against all the stuff I learned in university, past experiences, and the stuff I read in common tech books. | Is it bad practice to throw multiple custom exceptions in Java? | java;exceptions;exception handling | Is it bad practice to throw multiple custom exceptions in Java?No. It is good practice.The only situation where multiple exceptions might be a (slightly) bad idea is if there is absolutely no possibility (ever!) of catching individual exceptions; i.e. the fine-grained exceptions won't serve any functional purpose. However, my opinion is that if you can make that assertion, then there is probably something wrong with the way that you / your team is using Java. And the counter-argument is that sensible use of multiple custom exceptions will help you to document the APIs ... even if you are never going to do more than catch-log-and-bail-out at runtime.This is not to say that lots of custom exceptions will always be good:If you go overboard and create a separate exception for everything, then you are probably adding unnecessary complexity. (In a lot of cases, different exception messages are sufficient.)If you don't have a sensible inheritance hierarchy for your custom exceptions, then you may end up regretting it. (A well-designed hierarchy allows you to catch classes of exceptions, or declare methods as throwing them. It can make your code simpler.) |
_codereview.111844 | This code is just some exercise code printing questions and waiting for answers. It is working perfectly well as far as I can tell. I would like to know if this code could be considered acceptable or how it could be improved.import java.util.ArrayList;import java.util.Scanner;public class MainClass { public static void main(String[] args) { ArrayList<Citizens> list = new ArrayList<Citizens>(); Citizens p1 = new Portuguese(); list.add(p1); p1.addName(); p1.addAge(); p1.addAdress(); Citizens p2 = new German(); list.add(p2); p2.addName(); p2.addAge(); p2.addAdress(); boolean addCitizen = true; while (addCitizen) { Scanner input = new Scanner(System.in); System.out.println(Do you want to add another citizen?Y or N?); String answer = input.next(); if (answer.equals(y)) { System.out.println(Do you want to add a Portuguese or a German citizen?PT or GER?); answer = input.next(); if (answer.equals(pt)) { Citizens p3 = new Portuguese(); list.add(p3); p3.addName(); p3.addAge(); p3.addAdress(); } else if (answer.equals(ger)) { Citizens p3 = new German(); list.add(p3); p3.addName(); p3.addAge(); p3.addAdress(); } else if (!answer.equals(pt) || (!answer.equals(ger))) { System.out.println(Please choose PT or GER!); } } else if (answer.equals(n)) { System.out.println(Youre not gonna add a new citizen!); addCitizen = false; } else { System.out.println(Please enter y or n); } } System.out.println(The end); }}class Citizensimport java.util.Scanner;public class Citizens { private String name; private int age; private String adress; String answer; public int answernr; boolean afirmativeanswer = true; Scanner input = new Scanner(System.in); public void addName() { System.out.println(Do you want to add the citizen name?Y or N?); answer = input.nextLine(); while (!answer.equals(y) || (!answer.equals(n))) { if (answer.equals(y)) { System.out.println(Please give the citizens name !); String giveName = input.nextLine(); this.setName(giveName); break; } else if (answer.equals(n)) { System.out.println(Not adding a name!); break; } else { System.out.println(Please enter y or n); answer = input.nextLine(); } } } public void addAge() { System.out.println(Do you want to add + this.getName() + s age? Y or N?); while (afirmativeanswer) { answer = input.nextLine(); if (answer.equals(y)) { System.out.println(Please enter + this.getName() + s age!); answernr = input.nextInt(); this.setAge(answernr); afirmativeanswer = false; } else if (answer.equals(n)) { System.out.println(You choose not to add + this.getName() + s age!); afirmativeanswer = false; } else { System.out.println(Please enter y or n); } } } public void addAdress() { System.out.println(Do you want to add + this.getName() + s Adress?Y or N?); afirmativeanswer = true; while (afirmativeanswer) { answer = input.next(); if (answer.equals(y)) { System.out.println(Enter + this.getName() + s Adress!); answer = input.next(); this.setAdress(answer); afirmativeanswer = false; } else if (answer.equals(n)) { System.out.println(You choose not to add + this.getName() + s Adress!); afirmativeanswer = false; } else { System.out.println(Please choose Y or N!); } } } public void setName(String s) { this.name = s; } public String getName() { return name; } public void setAge(int i) { this.age = i; } public int getAge() { return age; } public void setAdress(String c) { this.adress = c; } public String getAdress() { return adress; } public String toString() { return name; }}The Portuguese and German classes have just one constructor each. 
| Simple input/output practice | java;io | Naming: Naming is fundamental to making your code easier to read. ArrayList<Citizens> list = new ArrayList<Citizens>(); name this citizens. Citizens p1 = new Portuguese(); p1 is not an adequate name; rather name it person1, and also rename p2 to person2. Rename Citizens to Citizen: person1 is a Citizen, not Citizens. Same thing for person2. Also use correct English spelling and follow Java capitalization conventions when you name a variable: boolean afirmativeanswer = true; should be affirmativeAnswer. Java is a language where CamelCase is mostly respected. Why didn't you use it? Method: A constructor is mostly used to access that class's methods and attributes parallel to other objects created. For an infinite loop, you can do while(true) (preferred) or for(;;). Use break to exit from the while loop. So you don't even need to use afirmativeanswer. Use the toLowerCase() method so as not to differentiate between UPPERCASE and lowercase input. There is no need to make answer and answernr instance variables. Just put them into the methods when you need them. |
_cs.43923 | I'm trying to map a 12 digit number into a fixed width file. For a number of reasons, it must be compressed in such a way that it is guaranteed to be less than or equal to 9 characters (alphanumeric is fine). My first thought was a change of base, but I can't find an equation which gives an upper bound on the number of characters needed for a given base. For example, transforming 123456789101 into base 32 gives 3IV9I6JD, which is 8 digits. How do I find a base which is guaranteed to need 9 or fewer characters to represent a 12 digit number? | How to find a basis which is guaranteed to need 9 or less characters to represent a 12 digits number? | databases;encoding scheme;base conversion | null |
_reverseengineering.10952 | I have a question about dynamic linking on Linux. Consider the following disassembly of an ARM binary.8300 <printf@plt-0x40>: ....8320: e28fc600 add ip, pc, #0, 128324: e28cca08 add ip, ip, #8, 20 ; 0x80008328: e5bcf344 ldr pc, [ip, #836]! ; 0x344 ....83fc <main>: ....8424: ebffffbd bl 8320 <_init+0x2c>the main() function calls printf() at 8424: bl 8320. Where 8320 is an address in the .plt shown above. Now, the code in .plt makes call to dynamic linker to invoke printf() routine. My question is how the dynamic linker will be able to say that it is a call to printf()? | How the dynamic linker determines which routine to call on Linux? | elf;dynamic linking | null |
_vi.7450 | I have a file that has decided to scroll synchronously when I open it in a second window and start scrolling. scrollbind and diff are set to off. (:set scrollbind? prints noscrollbind and :set diff? prints nodiff.)What else can I try?I'm using neovim. | Turn off synchronous scrolling not caused by either scrollbind or diff | neovim;vimdiff;scrolling | As Christian Brabandt suggested in a comment, this can be caused by the 'cursorbind' setting.From :help 'cursorbind':When this option is set, as the cursor in the current window moves other cursorbound windows (windows that also have this option set) move their cursors to the corresponding line and column.You can use :verbose set cursorbind? to find out what is switching this setting on. |
_unix.44056 | How do you go about creating a bootable backup ISO of my computer (dom0)? I have tried remastersys and this backs up everything but Xen does not work properly. LiveCDs also do not work.Does anyone have any ideas for how I could? Would a simple cat /dev/sda > /dev/sdb work ? | How to backup entire xen dom0 (on debian) | debian;backup;xen | null |
_softwareengineering.294953 | We have a feature branch right now in development that must not be deployed to production. At the moment there is nothing that would prevent such a mistake from happening.Deployment happens manually at the moment (SVN export + FTP to production). Migration to a better model is planned but not available in the short term.Now when you work on a branch locally, then export and upload the files it is really easy to accidentally deploy the wrong branch.What are some easy and light-weight ways to prevent accidental deployments? We also would be happy with preventing accidental code execution. Downtime is much more acceptable than execution of the feature branches code. So it would be OK for the app to refuse to work in production. This would be noticed right away. | How to prevent accidental deployment of branch under a legacy deployment process? | deployment;branching | Deployment happens manually at the moment (SVN export + FTP to production). Migration to a better model is planned but not available in the short term.I do not know what better model you have in mind, but you seem to think of something which is so complicated that you cannot implement it by yourself in one or two hours. Why don't you just put the SVN export + FTP steps you currently do manually into a simple shell script? SVN and FTP are available as command line tools for every OS I know. You just have to care for these two things:make sure the deploy script exports only from the trunk, but not from the feature branch. make sure noone in your team tries to deploy manually, only by that script.This does not only solve your current problem, it also makes the deployment more smooth and less error prone when you have no feature branch. |
_softwareengineering.175481 | I remember reading that there are no existing data structures which allow for random-access into a variable length encoding, like UTF-8, without requiring additional lookup tables.The main question I have is, is this even a useful property? I mean, to look up and replace random single codepoints in O(1) time. | Is O(1) random access into variable length encoding strings useful? | unicode | I would give the traditional, and really quite boring answer of it depends.Is random access to individual characters (glyphs) in a string a useful property? Yes, definitely.Do you need access to individual code points? I guess that could be useful in certain situations that aren't too contrived if you are doing extensive handling of text data, such as in for example word processing or text rendering. Data (text encoding) normalization is another possible use-case that I can think of. I'm sure there are other good uses as well.Does it need to be in O(1) time? Really, with a few exceptions that are unlikely to apply in the general case, not necessarily. If O(1) time access is a requirement, it's probably easier to just use a fixed-length encoding such as UTF-32. (And you will still be dealing with cache misses and swap space fetches, so for sufficiently long strings it won't be O(1) anyway... :)) |
_unix.321833 | I am unable to reinstall a package (libturbojpeg0). I have the following error# dpkg --audit The following packages are in a mess due to serious problems duringinstallation. They must be reinstalled for them (and any packagesthat depend on them) to function properly: libturbojpeg0:amd64 TurboJPEG runtime library - SIMD optimizedI cannot remove it# apt-get remove libturbojpeg0[...]dpkg: error processing package libturbojpeg0:amd64 (--remove): package is in a very bad inconsistent state; you should reinstall it before attempting a removalErrors were encountered while processing: libturbojpeg0:amd64E: Sub-process /usr/bin/dpkg returned an error code (1)What can I do in order to reinstall this package? | The following packages are in a mess due to serious problems during installation | software installation;package management;dpkg;debian installer | You should ask apt-get to reinstall it: apt-get --reinstall install libturbojpeg0:amd64 |
_vi.2480 | When Vim reports errors within a function it typically reports relative line numbers. For example:Error detected while processing function Foo:line 11:E123: Blah blahHere indicating the issue is at line 11 relative to start of function Foo. Guess this is a result of how Vim load functions etc. but is there a way to make it report absolute line numbers? As in line-number of script-file holding the function. | Absolute script-file line-numbers in Vim-function errors | vimscript;error | There was an RFC suggesting this, but there was no response from Bram at the time. |
_unix.281360 | So I just made a 70GB partition on my 1TB ssd by using the Kali Linux live CD.After I installed everything, the system booted into Kali just fine, with no problems. Although, when I try to restart and boot into windows it does not show me a windows 10 option. I can actually boot into windows 10 if I go into my BIOS and make sure it is set to UEFI. If I set it to CSM (I guess that's an older bios compatibility thing), it boots Kali Linux with no problem.So both operating systems will boot, but I do not have a simple way to switch between them.Are there any fixes for this?EDIT: I installed Kali onto the 70gb partition on the ssd, not onto the CD | Dual boot windows 10 and Kali Linux unusual problems | windows;kali linux;dual boot | null |
_unix.222560 | Here's what I have in my service file, arkos-redis.service:GNU nano 2.4.2 File: /usr/lib/systemd/user/arkos-redis.service [Unit]Description=Advanced key-value store[Service]ExecStart=/usr/bin/redis-server /etc/arkos/arkos-redis.confExecStop=/usr/bin/redis-cli shutdown[Install]WantedBy=default.targetBut when boot I get the following status:[vagrant@arkos-vagrant etc]$ systemctl --user status arkos-redis.servicearkos-redis.service - Advanced key-value store Loaded: loaded (/usr/lib/systemd/user/arkos-redis.service; enabled; vendor preset: enabled) Active: inactive (dead) | Why is my Systemd unit arkos-redis loaded, but inactive (dead)? | systemd | Because your service file is in /usr/lib/systemd/user, it is treated as a user service, and is started by your own instance of systemd (run as systemd --user). This means, among other things, that the process is started under your user, not root, and is started for each user that logs in. Based on the reference to the config file in /etc, I would guess that only one instance of this process should be running at any given time, and that it should run as root (or some other system accout). If this process is supposed to start as root, move this file to /usr/lib/systemd/system (or better yet, /etc/systemd/system, since it's your own service file) and ignore the rest of this answer.If your service file is supposed to start under your own user, then note that only the following targets are available in user mode: When systemd runs as a user instance, the following special units are available, which have similar definitions as their system counterparts: default.target, shutdown.target, sockets.target, timers.target, paths.target, bluetooth.target, printer.target, smartcard.target, sound.target.Neither multi-user.target nor network.target are available, and so your service won't start automatically. If you want it to start, change multi-user.target to default.target, and get rid of After=network.target. Then, run systemctl --user enable arkos-redis.service. |
_datascience.10741 | I have implemented my own mini neural network program [1]. Currently, it does not have batch updates; it only updates the parameters by simple backpropagation using SGD after each forward pass. I was trying to implement batch updates and batch normalisation [2]. 1) For simple batch updates, instead of updating the parameters each time, for each image in a batch of size 'n' I should backpropagate and accumulate the deltas for all the parameters and finally update them once at the end of the batch. 2) For batch normalisation (BN), I went through the paper and I am sort of clear on the idea, but I am confused regarding how to implement it. Generally, I would multiply the matrices in the net one after the other for a single image to get the final input, but with BN, do I need to feed forward all the images in the batch up to the first layer, then normalise the values, then forward pass these values to the second layer, then normalise again, and so on? Once I reach the final layer, should I backpropagate the error for the corresponding input-output pair and update the parameters immediately, as the forward pass for all the images in the batch has been done already? Going by the way I have described it, it seems to require a lot of parameter tracking throughout the batch. It will be helpful if you can point out a better way to do it or anything that I have misunderstood so far. | Implementing Batch normalisation in Neural network | machine learning;neural network;backpropagation;batch normalization | null |
_softwareengineering.220781 | I'm in a situation with my company where this may be an important distinction.Is there any distinction between source code and source files in a technical context?Is there any distinction between source code and source files in a legal context?Thanks.Edit: I saw some close votes on this. I want to note that this is a possible issue between two companies - and where necessary, we'll definitely use legal counsel. I'm asking this because I'm attempting to be prepared if I'm asked for any technical definitions (as the developer role in this). | Source code vs source files? | legal;source code | null |
_unix.139743 | I have 2 VPS servers, one in China and another in the US. The server (openvpn client) in China is connected to the US one via openvpn.I also have squid running in the China server.I want to redirect all traffic to squid through the openvpn tunnel to the US server, so users can access blocked sites including youtube.com, facebook, twitter and the likes.I currently have all outgoing http and https traffic on the China server going through the openvpn tunnel. I verify this, because when visiting normal sites, the public IP address has already become the US IP address. Yet I still cannot access blocked sites due to DNS pollution, and all these sites resolve to the unreachable IP address.How can I circumvent the DNS pollution issue in this case?I know there are other ways to bypass the gfw (e.g. SSH tunnel, VPN) but this method is the most convenient one for ordinary users. | bypass firewall with Openvpn + Squid | dns;openvpn;squid | null |
_webmaster.2305 | I currently have my full contact details (including my phone number and postal address) in the WHOIS information for all the domain names I have registered so far. I wonder how bad of an idea this is and whether I should remove such information from the WHOIS.What are the possible consequences of leaving such information there and what are the benefits (if any) for leaving it? Are there any known incidents for troubles because of WHOIS information? Finally, what would you recommend? Would you recommend using a WHOIS privacy service?Just to add, I believe there are services, like domaintools.com, that archive the WHOIS information, so, is it too late to remove my contact details? (if you think I should remove it).Many thanksUpdate:I just received an email from my domain registrar which included this:(Under ICANN rules and the terms of your registration agreement, PROVIDING FALSE CONTACT INFORMATION CAN BE GROUNDS FOR DOMAIN NAME CANCELLATION.) To review the ICANN policy, visit: http://www.icann.org/en/whois/wdrp-registrant-faq.htmSo, it's clear that there's no choice as to providing your real contact details. A WHOIS privacy service is the best option if anyone wants to hide their contact details (this is best done from the beginning as there are services, like domaintools.com, that archive the WHOIS information). | Postal address in Whois information, how bad of an idea is this? | whois | null |
_unix.225212 | Let's say I have ls | xargs -n1 -p rm, how do I use yes or yes n to automatically answer the questions generated by the -p flag?I tried yes n | (ls | xargs -n1 -p rm) but didn't work.P.S. I tried to add the yes tag, but didn't have enough rep.UPDATE: The question is not really about rm, it's about how to use yes properly. I have an alias or a function that uses xargs -p and I like the fact that it asks me and shows me what it's doing before doing it. When I know what it will do, I would like to be able to use yes to automatically go through all of the xargs -p in the function. So even though the example uses rm, it's not really about it. Also just to be extra clear, I don't want to modify my alias or function to use or not use -p. I rather just input yes externally. Tbh I thought that something like yes | some_function_asking_me_questions or some_function_asking_me_questions <( yes ) would have worked, but it didn't. 2nd EDIT: Another example: I have an alias to list AWS SNS topics in a region like: alias delete_snstopics=list_sns | cut -f 2 | xargs -n1 -p aws sns delete-topic --topic-arn Then I have a function that for each region in AWS finds and prompts for deletion for those SNS topics. I want to see the aws sns delete-topic --topic-arn $1 that the xargs would run, because the id of the SNS topic is different every time and if something goes wrong I can match up the SNS id in the web console. Moreover at times I might not want to delete the SNS topic in a particular region. And that's why I want to use yes with this function, so that I can use the same function for partial deletion and full deletion, and still get useful output. Makes sense? | How to use `yes` with `xargs -p`? | xargs | null |
_codereview.77490 | Basically, I wish to have 3 printers to print in error state, success state or info state, which means nothing but different colors. class ColorCode{ private static $options = [ "dark_gray" => "1;30", "light_gray" => "0;37", "blue" => "0;34", "green" => "0;32", "cyan" => "0;36", "red" => "0;31", "yellow" => "1;33" ]; public static function get($key){ if(isset(self::$options[$key])){ return self::$options[$key]; } else { return ''; } } } class ColorPrint { public $color = ''; function __construct($color) { $this->color = ColorCode::get($color); } public function printc($msg) { echo "\033[" . $this->color . "m" . $msg . "\033[0m\n"; } } $out = [ 'error' => new ColorPrint('red'), 'info' => new ColorPrint('cyan'), 'success' => new ColorPrint('green'), ]; $out['error']->printc('Invalid Option'); | Printing to terminal in a color | php;console | null |
_unix.227122 | I'm working in a manufacturing company (a small one) and one of our clients required a test using a Linux (Ubuntu) computer; we are testing some wi-fi modules. The test is simple. Beforehand: powering up the units (loaded with test FW) will create an access point where the SSID is the MAC address of said unit and there is no password. The test is as follows: connect the laptop to the wi-fi access point; inside a terminal use the command sh mfg.sh (file provided by them), which has sudo inside so it will require a password the first time it is used; the script will do everything and just output the text PASSED or FAILED; power down the unit, and if you want to test another one keep the terminal open. The test is pretty simple, right? The issue comes with the access points (AP). For starters, there is something about the Wi-Fi module on the laptop such that it doesn't see the next AP but keeps seeing the old one (even after powering down the unit), so we have to do a manual service restart using sudo service network-manager restart; after the restart we now see the AP. Then another issue: every unit has a different SSID (different MAC address), so every time you connect to a new one a file is born in /etc/NetworkManager/system-connections/ and after 50 or so files the connection to a new AP is slow; to revert that I delete all the previous APs using sudo rm -r /etc/NetworkManager/system-connections/*. OK, so here comes the question. Since the low-level operators are only familiar with Windows computers, is there a way to encapsulate these commands into an icon-type executable? Like, just double click it, it opens a terminal, does its thing and closes; also it would be best to not be asked for a password to use sudo. I tried double clicking the mfg.sh file but it only opened in an editor. I'm pretty proficient with computers but this is my first approach to Linux environments; every comment and suggestion is very appreciated. | Make a command to remove NetworkManager connections | scripting;executable;networkmanager | null |
_cs.54130 | Dijkstra, in his essay On the cruelty of really teaching computing science, makes the following proposal for an introductory programming course:On the one hand, we teach what looks like the predicate calculus, but we do it very differently from the philosophers. In order to train the novice programmer in the manipulation of uninterpreted formulae, we teach it more as boolean algebra, familiarizing the student with all algebraic properties of the logical connectives. To further sever the links to intuition, we rename the values {true, false} of the boolean domain as {black, white}.On the other hand, we teach a simple, clean, imperative programming language, with a skip and a multiple assignment as basic statements, with a block structure for local variables, the semicolon as operator for statement composition, a nice alternative construct, a nice repetition and, if so desired, a procedure call. To this we add a minimum of data types, say booleans, integers, characters and strings. The essential thing is that, for whatever we introduce, the corresponding semantics is defined by the proof rules that go with it.Right from the beginning, and all through the course, we stress that the programmer's task is not just to write down a program, but that his main task is to give a formal proof that the program he proposes meets the equally formal functional specification. While designing proofs and programs hand in hand, the student gets ample opportunity to perfect his manipulative agility with the predicate calculus. Finally, in order to drive home the message that this introductory programming course is primarily a course in formal mathematics, we see to it that the programming language in question has not been implemented on campus so that students are protected from the temptation to test their programs.He emphasises that this is a serious proposal, and outlines various possible objections, including that his idea is utterly unrealistic and far too difficult.But that kite won't fly either for the postulate has been proven wrong: since the early 80's, such an introductory programming course has successfully been given to hundreds of college freshmen each year. [Because, in my experience, saying this once does not suffice, the previous sentence should be repeated at least another two times.]Which course is Dijkstra referring to, and is there any other literature on it? The essay appeared in 1988 when Dijkstra was at the University of Texas at Austin, which is probably a clue -- they host the Dijkstra archive but it is huge, and I'm particularly interested in hearing from others about this course.I don't want to discuss whether Dijkstra's idea is good or realistic here. This is a cross-post from matheducators.se where it didn't attract any answers for a couple of weeks and the mods didn't want to migrate it. | on `On the cruelty of really teaching computing science' | reference request;logic;education | null |
_datascience.22112 | This is a duplicate question of https://stackoverflow.com/questions/45592174/possible-incorrect-usage-of-custom-eval-metric-in-mxnet, as this seems to be a better forum to ask machine learning problems.I am working on a problem and were trying to solve using MXNet. I was trying to use a custom metric in the code. The code for the same is:def calculate_sales_from_bucket(bucketArray): return numpy.asarray(numpy.power(10, calculate_max_index_from_bucket(bucketArray)))def calculate_max_index_from_bucket(bucketArray): answerArray = [] for bucketValue in bucketArray: index, value = max(enumerate(bucketValue), key=operator.itemgetter(1)) answerArray.append(index) return answerArraydef custom_metric(label, bucketArray): return numpy.mean(numpy.power(calculate_sales_from_bucket(label)-calculate_sales_from_bucket(bucketArray),2))model.fit( train_iter, # training data eval_data=val_iter, # validation data batch_end_callback = mx.callback.Speedometer(batch_size, 1000), # output progress for each 1000 data batches num_epoch = 10, # number of data passes for training optimizer = 'adam', eval_metric = mx.metric.create(custom_metric), optimizer_params=(('learning_rate', 1),))I am getting the output as:INFO:root:Epoch[0] Validation-custom_metric=38263835679935.953125INFO:root:Epoch[1] Batch [1000] Speed: 91353.72 samples/sec Train-custom_metric=39460550891.057487INFO:root:Epoch[1] Batch [2000] Speed: 96233.05 samples/sec Train-custom_metric=9483.127650INFO:root:Epoch[1] Batch [3000] Speed: 90828.09 samples/sec Train-custom_metric=57538.891485INFO:root:Epoch[1] Batch [4000] Speed: 93025.54 samples/sec Train-custom_metric=59861.927745INFO:root:Epoch[1] Train-custom_metric=8351.460495INFO:root:Epoch[1] Time cost=9.466INFO:root:Epoch[1] Validation-custom_metric=38268.250469INFO:root:Epoch[2] Batch [1000] Speed: 94028.96 samples/sec Train-custom_metric=58864.659051INFO:root:Epoch[2] Batch [2000] Speed: 94562.38 samples/sec Train-custom_metric=9482.873310INFO:root:Epoch[2] Batch [3000] Speed: 93198.68 samples/sec Train-custom_metric=57538.891485INFO:root:Epoch[2] Batch [4000] Speed: 93722.89 samples/sec Train-custom_metric=59861.927745INFO:root:Epoch[2] Train-custom_metric=8351.460495INFO:root:Epoch[2] Time cost=9.341INFO:root:Epoch[2] Validation-custom_metric=38268.250469In this case, irrespective of change in train-custom_metric for batches, the train-custom_metric is still the same. Like in case of batch 1000 for epoch 1 and epoch 2.I believe that this is an issue as the Train-custom_metric and Validation-custom_metric is not changing irrespective of the value of epoch steps.I am a beginner in MXNet and I might be wrong in this assumption. Can you confirm if I am passing eval_metric in the correct way? | Possible incorrect usage of custom eval_metric in MXNet | machine learning;neural network | null |
_codereview.123380 | When iterating over the members of an object for logging the keys / values one gets a stair case effect.Therefore I wrote myself this function which takes care for a left-alignment of the values.Any hints concerning flaws and improvement-recommendations welcome.// #### START TEST #######################var person = { yourMobilPhoneNumber : 01234171819, firstName : 'theFirstName', lastName : 'theLastName', mail : '[email protected]', zip : '12345', street : 'theNameOfMyStreet', city : 'someCitySomewhere', yourVeryPersonalWebpage : 'http://that-is-me.com', id : 12345, calculate: function() { return 3 + 4; }};displayMembers(person);// #### END TEST #######################// Displays the members of an assigned // object on the console.// -- Parameter -------------// Object - An object which members // shall be displayed.function displayMembers(obj) { var i; var max = (function() { var ret = 0; var keys = Object.keys(obj); for (i = 0; i < keys.length; i++) { if (keys[i].length > ret) ret = keys[i].length; } return ret; })(); var getSpacer = function(len, state) { if (state.length < len) { return getSpacer(len, state += ' '); } else { return state; } } for (i in obj) { console.log('%s: %s%s', i, getSpacer(max - i.length, ''), obj[i]); }} | JavaScript function for logging the members of an object (with horizontal alignment) | javascript;functional programming | Simplification with for/in loopsYour loop through Object.keys() in the function you have for the max variable is reinventing JavaScript's for/in loop which loops over and objects keys. Here's how you could simplifying that code:for(var key in obj) { if(!obj.hasOwnProperty(key)) { continue; } if(key.length > ret) { ret = key.length; }}Unnecessary recursionYour getSpacer function is using recursion when it really does not need to be; the function would be a lot simpler and a lot faster if you used a neat JavaScript trick for repeating characters:function getSpacer(len) { return Array(len + 1).join( );}Now, there's no need for recursion - that means there is less interaction with the stack - and, rather than switching to a loop, this nice solution can be used. |
_unix.18303 | I have a logfile with timestamps in it. Occasionally there are multiple timestamps in one line. Now I would like to remove all of the timestamps from a line but keep the first one.I can do s/pattern//2 but that only removes the second occurrence and sed doesn't allow something like s/pattern//2-.Any suggestions? | sed: delete all occurrences of a string except the first one | sed | null |
_softwareengineering.92847 | A recent previous question of mine had an answer that sparked a different and unrelated question in my mind:Customer wants to modify the .properties files packaged in our WAR fileThe question that I thought of after reading this answer is, just how low-risk is the data being collected on people (non-users, lets just say, people) in my application?A first name and last nameCompany or organization that person currently is employed at.(Optional) An email address(Optional) A persons phone numberA photograph of the persons faceAn digitally signed PDF document physically signed with electronic signature pad (a persons hand written signature)No other sensitive data like social security numbers, credit card numbers or anything that can accurately identify a person with 100% accuracy. How sensitive would you rate the data types listed above? Is identity theft even remotely possible with the above information?In light of all the recent news outbreaks of hacking successes and data breaches, if such a thing were to happen to my application (assume that I have reasonable security measures, SSL, encrypted passwords with salt, account lock after so many failed attempts, etc...), what kind of a response would be appropriate for my organization in your opinion? Should every attempt be made to notify the persons that this information has been compromised? Is it worth it?Thanks for sharing your thoughts. | What would you define as sensitive user data? | security | Anything that can be used to harm your users is sensitiveIt's not only 'sensitive' when it allows for identity theft, that is but one form of harm.If data can be used that way depends on the context.For example: the first and last names and the portrait are definitively sensitive user data in, uhmm, 'adult toy stores', they are not on facebook.The phone number may be non-sensitive for all those who let it print in phone books, but it may be for the unlucky ones that get stalked.The user is in a better position to judge his context than you, therefore i would consider all of your items sensitve, until proven otherwise or told by the user. |
_unix.346620 | I want to show messages from a logfile in realtime on desktop. (xfce4 on fedora 24)My idea is to do this by using notify-send and tail -f in a shell script.So far I have two shell scripts:read_data.shwrite_data.shBoth fork a process and communicates via a pipe.write_data.sh:tail -f /var/log/logfile > mypiperead_data.sh:mkfifo mypipewhile truedo echo read now from pipe if read line <mypipe; then echo $line fidoneUnfortunately I get an error message:EPIPE (Broken pipe)I used strace to analyze what's going on:write_data.sh:strace tail -f /var/log/logfile > mypipe....write(1, Message from logfile..., 281) = -1 EPIPE (Broken pipe)--- SIGPIPE {si_signo=SIGPIPE, si_code=SI_USER, si_pid=7314, si_uid=0} ---+++ killed by SIGPIPE +++strace read_data.sh...read(0, \n, 1) = 1dup2(10, 0) = 0fcntl(10, F_GETFD) = 0x1 (flags FD_CLOEXEC)close(10) = 0open(., O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3fstat(3, {st_mode=S_IFDIR|0550, st_size=4096, ...}) = 0getdents(3, /* 133 entries */, 32768) = 5400getdents(3, /* 0 entries */, 32768) = 0close(3) = 0write(1, message from logfile ....) = 62rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0write(1, read now from pipe\n, 19read now from pipe) = 19open(/tmp/mypipe, O_RDONLYread_data.sh blocks at this point.Any idea why this happens? | show messages from logfile in realtime on desktop | pipe | null |
_unix.46224 | I've successfully VPNed to my University server on a Fedora 17 Linux terminal.$ sudo openconnect -u UNIVERSITY_USERNAMEID sslvpn.nameofuniversity.edu[sudo] password for PCUSERNAME: Attempting to connect to xxx.xxx.xxx.xxx:xxxSSL negotiation with sslvpn.nameofuniversity.eduConnected to HTTPS on sslvpn.nameofuniversity.eduGET https://sslvpn.nameofuniversity.edu/Got HTTP response: HTTP/1.0 302 Object MovedSSL negotiation with sslvpn.nameofuniversity.eduConnected to HTTPS on sslvpn.nameofuniversity.eduGET https://sslvpn.nameofuniversity.edu/+webvpn+/index.htmlPlease enter your username and password.Password:POST https://sslvpn.nameofuniversity.edu/+webvpn+/index.htmlGot CONNECT response: HTTP/1.1 200 OKCSTP connected. DPD 30, Keepalive 0Connected tun0 as xxx.xx.xx.xx, using SSLAfter this, I opened a new terminal and executed an ssh command which normally works when I'm on campus.I get the following output on the terminal:# ssh -vvv -Y UNIVERSITY_USERNAMEID@serverOpenSSH_5.9p1, OpenSSL 1.0.0j-fips 10 May 2012debug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 50: Applying options for *debug2: ssh_connect: needpriv 0debug1: Connecting to server [xxx.xxx.xx.xx] port 22.debug1: Connection established.debug1: permanently_set_uid: 0/0debug1: identity file /root/.ssh/id_rsa type -1debug1: identity file /root/.ssh/id_rsa-cert type -1debug1: identity file /root/.ssh/id_dsa type -1debug1: identity file /root/.ssh/id_dsa-cert type -1ssh_exchange_identification: Connection closed by remote hostTo no avail, I've appended the following to file /etc/hosts.allowSSHD: ALLSSHD: .nameofuniversity.edu : allSSHD: ipaddress : allWhat can be wrong here? | Can't SSH via password to a Remote Server | linux;ssh;vpn | null |
_unix.147158 | On my Archlinux box I want to limit some applications to a smaller amount of memory using cgroups. Since I use swap, I need to account memory+swap. How can I enable swap accounting? | How to enable swap accounting for memory cgroup in Archlinux? | arch linux;memory;swap;cgroups | Arch Linux' kernel has the swap accounting disabled by default (cf. the kernel config file). You can enable it by booting with swapaccount=1 in your kernel cmdline (cf. the kernels Kconfig documentation. |
_codereview.129135 | For some post-processing, I need to flatten a structure like this {'foo': { 'cat': {'name': 'Hodor', 'age': 7}, 'dog': {'name': 'Mordor', 'age': 5}}, 'bar': { 'rat': {'name': 'Izidor', 'age': 3}}}Each bottom entries will appear as a row on the output. The heading keys will appear each row, flattened. Perhaps an example is better than my mediocre explanation:[{'age': 5, 'animal': 'dog', 'foobar': 'foo', 'name': 'Mordor'}, {'age': 7, 'animal': 'cat', 'foobar': 'foo', 'name': 'Hodor'}, {'age': 3, 'animal': 'rat', 'foobar': 'bar', 'name': 'Izidor'}]I first wrote this function: def flatten(data, primary_keys): out = [] keys = copy.copy(primary_keys) keys.reverse() def visit(node, primary_values, prim): if len(prim): p = prim.pop() for key, child in node.iteritems(): primary_values[p] = key visit(child, primary_values, copy.copy(prim)) else: new = copy.copy(node) new.update(primary_values) out.append(new) visit(data, { }, keys) return outout = flatten(a, ['foobar', 'animal']) I was not really satisfied because I have to use copy.copy to protect my input arguments. Obviously, when using flatten one does not want its input data to be altered.So I thought about one alternative that uses more global variables (at least global to flatten) and uses an index instead of directly passing primary_keys to visit. However, this does not really help me to get rid of the ugly initial copy:keys = copy.copy(primary_keys)keys.reverse()So here is my final version: def flatten(data, keys): data = copy.copy(data) keys = copy.copy(keys) keys.reverse() out = [] values = {} def visit(node, id): if id: id -= 1 for key, child in node.iteritems(): values[keys[id]] = key visit(child, id) else: node.update(values) out.append(node) visit(data, len(keys)) return out I am sure some Python magic will help in this case. | Flatten a nested dict structure in Python | python;dictionary | Both algorithms recurse using the length of keys to stop, so I am going to assume that the nested dictionaries always have the same level of nesting too. If your input can be of the form:{'foo': { 'cat': {'name': 'Hodor', 'age': 7}, 'dog': {'name': 'Mordor', 'age': 5}}, 'bar': { 'rat': {'name': 'Izidor', 'age': 3}}, 'baz': 'woops',}then your approach can't handle it and neither will mine.I quickly stopped trying to understand how your algorithm work and started to think about how I would implement it myself. This indicates that:your algorithm is not that trivial;it is poorly documented.You should at least have comments indicating why you use some approaches: reversing the keys and iterating over them in decreasing order, storing your group names/values (values[keys[id]] = key) as you go into nesting levels and updating the last dictionary when you reach itSpeaking about updating the last dictionary, note that data = copy.copy(data) does not protect your node.update(values) to modify the original data in place. You either need to use copy.deepcopy or to change the updated dictionary (create a new one and update it with both node and values).Now let me show you an other approach. Rather than wrapping a function that access global variables (this is what visit look like) into flatten, you can make flatten the recursive function by splitting keys into its head and tail part. 
When there is no element left, you won't be able to do it and you can stop the recursion by returning the data you're on: this is one of the most nested dictionaries.Otherwise, you can iterate over the key/values pairs, flatten the values using the tail as a new set of keys and then, build a list out of the flattened values and the {head: key} dictionary.To make things a bit more efficient, I'll use generators instead of building lists, so youll want to change your calls from out = flatten(a, ['foobar', 'animal']) to out = list(flatten(a, ['foobar', 'animal'])) (calls of the form for flattened in flatten(a, ['foobar', 'animal']): don't need to be changed though):def flatten(data, group_names): try: group, group_names = group_names[0], group_names[1:] except IndexError: # No more key to extract, we just reached the most nested dictionnary yield data.copy() # Build a new dict so we don't modify data in place return # Nothing more to do, it is already considered flattened for key, value in data.iteritems(): # value can contain nested dictionaries # so flatten it and iterate over the result for flattened in flatten(value, group_names): flattened.update({group: key}) yield flattenedI also changed keys to group_names to be able to use the generic names key and value when iterating over data.In case the input data can ever contain less nested levels than the amount of items in group_names, you'll reach a point where data.iteritems() will raise and AttributeError. You can catch that if you so choose:def flatten(data, group_names): try: group, group_names = group_names[0], group_names[1:] except IndexError: # No more key to extract, we just reached the most nested dictionnary yield data.copy() # Build a new dict so we don't modify data in place return # Nothing more to do, it is already considered flattened try: for key, value in data.iteritems(): # value can contain nested dictionaries # so flatten it and iterate over the result for flattened in flatten(value, group_names): flattened.update({group: key}) yield flattened except AttributeError: yield {group: data}Soa = { 'foo': { 'cat': {'name': 'Hodor', 'age': 7}, 'dog': {'name': 'Mordor', 'age': 5}, }, 'bar': { 'rat': {'name': 'Izidor', 'age': 3}, }, 'baz': 'woops',}list(flatten(a, ['foobar', 'animal']))will return[{'animal': 'woops', 'foobar': 'baz'}, {'age': 5, 'animal': 'dog', 'foobar': 'foo', 'name': 'Mordor'}, {'age': 7, 'animal': 'cat', 'foobar': 'foo', 'name': 'Hodor'}, {'age': 3, 'animal': 'rat', 'foobar': 'bar', 'name': 'Izidor'}] |
_softwareengineering.181850 | Having read a post yesterday, I realized I did not know much about the origin of exceptions. Is it an OOP related concept only? I tend to think it is, but again there are database exceptions. | Are exceptions an OOP concept? | programming languages;exceptions | Exceptions are not an OOP concept. But they are not completely unrelated either in one little tiny point.As other answers have shown: The concept of exceptions has made it in several non-OOP languages. Nothing in that concept requires something from OOP.But any if not all OOP languages which takes OOP seriously require exceptions because the other methods of error-handling fail at one specific point: The constructor. One of the points of OOP is that an object shall encapsulate and manage its internal state completely and consistently. This also means that in pure OOP you need a concept to create a new object with a conistent state atomically - everything from memory allocation (if required) to the initialisation to a meaningful state (i.e. simple zeroing the memory is not enough) must be done in one expression. Hence a constructor is required:Foo myFoo = Foo(foo, bar, 42);But this means that the constructor can also fail due to some error. How to propagate the error information out from the constructor without exceptions?Return value? Fails since in some languages new could return only null but not any meaningfull information. In other languages (e.g. C++) myFoo is not a pointer. You could not check it against null. Also you cannot ask myFoo about the error - it is not initialized and therefore does not exist in OOP thinking.Global error flag? So much about encapsulating state and then some global variable? Go to h... ;-)A mixture? In no way better.?So exceptions are a more fundamental concept than OOP but OOP builds upon them in a natural way. |
_unix.372524 | i used this code to backup a directory to an ftpmirror -R /media/root/7CBAA4537758FCA/SAVE/DeesBootCD_SlipStreamed2/ /luksftpmnt/put-into-images-folder/DeesBootCD_SlipStreamed2and it worked well but some directories on original have small size but when copied they have no size.....the contents of these directories are correctly mirrored - all files same size, NO hidden files or symlinks that may have been corrupted.Original directory example:stat VCRTL/ File: 'VCRTL/' Size: 4096 Blocks: 8 IO Block: 4096 directoryDevice: 811h/2065d Inode: 44034 Links: 1Access: (0777/drwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)Access: 2015-07-26 04:34:06.897335000 +0000Modify: 2011-10-24 17:45:20.000000000 +0000Change: 2015-07-25 01:49:32.655572000 +0000 Birth: -Remote ftp directory example:stat VCRTL/ File: 'VCRTL/' Size: 0 Blocks: 0 IO Block: 4096 directoryDevice: 29h/41d Inode: 23451 Links: 1Access: (0777/drwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)Access: 2017-06-19 12:20:00.000000000 +0000Modify: 2017-06-19 12:20:00.000000000 +0000Change: 2017-06-19 12:20:00.000000000 +0000 Birth: - | lftp changes directory size without changing files in directory | directory;directory structure;lftp | null |
_unix.167571 | I have some chef recipes referencing the package php53u. It seems that at the time the recipes were written, the package was available in ius repo, but now it seems that package php installs version 5.3.3 and there is no php53u package at all. Can I force yum to use old version of repository so that the package can be found? | Can I force install of a package delete from yum repo? | yum | Can I force yum to use old version of repository so that the package can be found?No you just kind of have to go based on what the repo has available. If they got rid of that package then that's all she wrote. You may try using google to find an IUS mirror repo that is still available but hasn't been synced in a while. That's kind of laborious but it may work.but now it seems that package php installs version 5.3.3 and there is no php53u package at all.I don't have much experience with IUS so I don't know what's different about this build than the standard RHEL build, but there's a php55u package currently available for EL6 and a php54 for EL5. Are either of those acceptable for your purposes? |
_datascience.17709 | I am working on a project that is about Natural Language Processing. However, I am stuck at the point where I have an ANN that has a fixed number of input neurons. I am trying to do sentiment analysis using the IMDB movie review set. To be able to do that, firstly, I calculated the word embeddings for each word by creating a word-context matrix and then applying SVD. So I have the word embedding matrix. But I do not know the best way to compress a sentence's vector (which contains embeddings for each word in the sentence) into a fixed size to be able to feed the neural net. I tried PCA but the result was not satisfying. Any help? | Best way to fix the size of a sentence [Sentiment Analysis] | word embeddings;sentiment analysis | The easiest way is to average the word embeddings. This works quite well. Another thing you can try is to represent each document as a bag of words - i.e. - to have a vector the size of your vocabulary, where each element in the vector represents the number of times a certain word has been mentioned in your document (for example, the first element in the vector will represent how many times the word a was mentioned, and so on). Afterwards, to reduce the size of the vector you can use techniques like LDA, SVD, or autoencoders. |
_webmaster.85900 | I have a personal weather station connected to my LAN that provides its info (e.g. temperature, rainfall, humidity) via a web UI. I would like to scrape this info so I can make a better webpage of my own, and keep a record of the daily weather in a spreadsheet.I know there are scrapers like parsehub and import.io, but they seem to work off the cloud which means they cannot access the websites residing locally on my home network.What are some of the best ways to scrape content off a local website on a regular basis? | Scrape Intranet Website | scraper sites | null |
_webmaster.62930 | I sell small game servers on a website and used to be able to have customers pay through Google Checkout. Since that has been replaced with Google Wallet, I've been trying to set it up. However, I think I have to apply to use the buttons now and the requirements are:-Have the ability to process their own payments with a qualified payment processor-Maintain PCI compliance-Sell physical goods online or through their Android native application, and do not sell any digital goods through their Android native applicationSince I don't meet any of these requirements, is it true that I cannot use any Google service as a payment method? Am I doing this wrong?For reference, I'm getting them from this site: https://developers.google.com/wallet/instant-buy/ then click sign up | Google Wallet use case | payment gateway | The requirements to use Google Wallet Instant Buy are as follows:-Instant Buy is currently only available to U.S. buyers for transactions in USD currency. Do not display the Buy with Google button for non-US transactions.There is an $1800 transaction limit with Instant Buy. For items beyond this limit, use alternative payment methods.Review the detailed Content policies to make sure your specific goods or services are supported.If you exclusively sell digital goods such as movies or games, use Google Play In-app Billing for Android or the Digital Goods API instead of Instant Buy.You must have your own payment processor for processing credit card payments.You will need to sign up for a payment provider in order to accept card payments otherwise you will not be able to use Google Wallet. Surely this is something you can do so that you can then use GWIB? |
_codereview.159939 | I tried to implement the builder pattern of GoF. After searching for almost every related posts/examples on the Internet, I'm still confused. But I found that there are two kinds of patterns, which are both called builder:Bloch Builder : A famous post, but a comment pointed out it's actually a fluent interface, still useful, though. Also notice that it's not the same as Method Chaining, see the differences.Builder Pattern @thejavageek.com : The author gives a very clear structure of Builder Pattern: Director, Builder, ConcreteBuilder, Product.Finally I decide to make my own version of Builder Pattern, which follows the first link above, with some modifications:The static method Pizza.makePizza() acts as the Director.For simplicity, I didn't make the Factory a factory method or abstract factory, but it can be without problem.Before you want to make a pizza, you have to override some methods, which you can put anything must be done before the pizza is created.Main.javapublic class Main { public static void main(String[] args) throws InterruptedException { Factory myFactory = new Factory(); Pizza myPizza = Pizza.makePizza(new Pizza.Builder(myFactory) { @Override public void prepareDough() { myFactory.prepareDough(); } @Override public void prepareToppings() { myFactory.prepareToppings(); } }.withSize(20).withBacon().withPepperoni()); System.out.println(myPizza); }}Pizza.javapublic class Pizza { private final int size; private final boolean cheese; private final boolean pepperoni; private final boolean bacon; private Pizza(Builder builder) { size = builder.size; cheese = builder.cheese; pepperoni = builder.pepperoni; bacon = builder.bacon; } public static Pizza makePizza(Builder builder) throws InterruptedException { builder.prepareDough(); Thread.sleep(2000); builder.prepareToppings(); Thread.sleep(2000); return new Pizza(builder); } public static abstract class Builder { private int size; private boolean cheese = false; private boolean pepperoni = false; private boolean bacon = false; public Builder(Factory factory) { } public Builder withSize(int size) { this.size = size; return this; } public Builder withCheese() { cheese = true; return this; } public Builder withPepperoni() { pepperoni = true; return this; } public Builder withBacon() { bacon = true; return this; } protected abstract void prepareDough(); protected abstract void prepareToppings(); } public String toString() { return String.format(pizza={size=%d, cheese=%s, pepperoni=%s, bacon=%s}, size, cheese, pepperoni, bacon); }}Factory.javapublic class Factory { public Factory() {} public void prepareDough() { System.out.println(Preparing dough...); } public void prepareToppings() { System.out.println(Preparing toppings...); }}Result:Preparing dough...Preparing toppings...pizza={size=20, cheese=false, pepperoni=true, bacon=true} | Pizza maker with Bloch builder | java;fluent interface | Your main() code feels cumbersome, and the modelling seems unnatural to me. 
I also find that .withSize(20).withBacon().withPepperoni() is hard to read, perhaps due to its placement after the definition of the anonymous subclass of Pizza.Builder.public class Main { public static void main(String[] args) throws InterruptedException { Factory myFactory = new Factory(); Pizza myPizza = Pizza.makePizza(new Pizza.Builder(myFactory) { @Override public void prepareDough() { myFactory.prepareDough(); } @Override public void prepareToppings() { myFactory.prepareToppings(); } }.withSize(20).withBacon().withPepperoni()); System.out.println(myPizza); }}Assuming that you are interested in constructing an immutable Pizza, I would prefer to see:public class Main { public static void main(String[] args) throws InterruptedException { Pizza myPizza = new PizzaBase(20).addTopping(Pizza.Topping.BACON) .addTopping(Pizza.Topping.PEPPERONI) .bake(); System.out.println(myPizza); }}In particular:This interface mimics the process of making a pizza.The base is mandatory, and you must specify its size. The toppings are optional.Offering an .addTopping() method that accepts a parameter is more flexible. You might even be able to .addTopping(CHEESE) twice to get extra cheese.I'm not convinced that dependency injection for the Factory is worthwhile. |
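A rough sketch of the fluent, immutable construction style suggested in this review. The original is Java; Python is used here only for brevity, the PizzaBase/addTopping/bake names mirror the review's snippet, and the implementation behind them is an assumption rather than anything taken from the post.
from enum import Enum

class Topping(Enum):
    CHEESE = "cheese"
    PEPPERONI = "pepperoni"
    BACON = "bacon"

class Pizza:
    """Immutable result object produced by PizzaBase.bake()."""
    def __init__(self, size, toppings):
        self._size = size
        self._toppings = tuple(toppings)

    def __repr__(self):
        names = ", ".join(t.value for t in self._toppings) or "plain"
        return f"Pizza(size={self._size}, toppings={names})"

class PizzaBase:
    """Builder: the mandatory size is given up front, toppings are optional."""
    def __init__(self, size, toppings=()):
        self._size = size
        self._toppings = tuple(toppings)

    def add_topping(self, topping):
        # Returning a new builder keeps every intermediate value immutable.
        return PizzaBase(self._size, self._toppings + (topping,))

    def bake(self):
        return Pizza(self._size, self._toppings)

if __name__ == "__main__":
    pizza = PizzaBase(20).add_topping(Topping.BACON).add_topping(Topping.PEPPERONI).bake()
    print(pizza)  # Pizza(size=20, toppings=bacon, pepperoni)
Because each add_topping call returns a fresh builder, partially configured pizzas can be shared or reused without one caller mutating another's state, which is the main appeal of the style the reviewer recommends.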
_softwareengineering.181932 | I have recently witnessed more and more problems similar to the ones explained in this article on feature intersections. Another term for it would be product lines, though I tend to attribute these to actually different products, whereas I usually encounter these problems in the form of possible product configurations.The basic idea of this type of problem is simple: You add a feature to a product, but somehow things get complicated due to a combination of other existing features. Eventually, QA finds a problem with a rare combination of features that no one thought of before and what should have been a simple bugfix may even turn into requiring major design changes.The dimensions of this feature intersection problem are of a mind-blowing complexity. Let's say the current software version has N features and you add one new feature. Let's also simplify things by saying that each of the features can turned on or off only, then you already have 2^(N+1) possible feature combinations to consider. Due to a lack of better wording / search terms, I refer to the existence of these combinations as feature intersection problem. (Bonus points for an answer including reference(s) for a more established term.)Now the question I struggle with is how to deal with this complexity problem on each level of the development process. For obvious cost reasons, it is impractical up to the point of being utopian, to want to address each combination individually. After all, we try to stay away from exponential complexity algorithms for a good reason, but to turn the very development process itself into an exponentially sized monster is bound to lead to utter failure.So how do you get the best result in a systematic fashion that does not explode any budgets and is complete in a decent, useful, and professionally acceptable way.Specification: When you specify a new feature - how do you ensure that it plays well with all the other children? I can see that one could systematically examine each existing feature in combination with the new feature - but that would be in isolation of the other features. Given the complex nature of some features, this isolated view is often already so involved that it needs a structured approach all in itself, let alone the 2^(N-1) factor caused by the other features that one willingly ignored.Implementation: When you implement a feature - how do you ensure your code interacts / intersects properly in all cases.Again, I am wondering about the sheer complexity. I know of various techniques to reduce the error potential of two intersecting features, but none that would scale in any reasonable fashion. I do assume though, that a good strategy during the specification should keep the problem at bay during implementation.Verification: When you test a feature - how do you deal with the fact, that you can only test a fraction of this feature intersection space? It is tough enough to know that testing a single feature in isolationguarantees nothing anywhere near error-free code, but when you reducethat to a fraction of 2^-N it seems like hundreds of tests are noteven covering a single drop of water in all oceans combined. 
Even worse, the most problematic errors are those that stem from the intersection of features, which one might not expect to lead to any problems - but how do you test for these if you don't expect such a strong intersection?While I would like to hear how others deal with this problem, I am primarily interested in literature or articles analyzing the topic in greater depth. So if you personally follow a certain strategy it would be nice to include corresponding sources in your answer. | Dealing with Feature Intersections | complexity;product features | We already knew mathematically that verification of a program is impossible in finite time in the most general case, due to the halting problem. So this kind of problem is not new.In practice, good design can provide decoupling such that the number of intersecting features is far less than 2^N, though it certainly seems to be above N even in well designed systems. As far as sources, it seems to me that almost every book or blog about software design is effectively trying to reduce that 2^N as much as possible, though I don't know of any that cast the problem in the same terms as you do.For an example of how design might help with this, in the article mentioned some of the feature intersection happened because replication and indexing were both triggered of the eTag. If they had available another communication channel to signal the need for each of those separately then possibly they could have controlled the order of events more easily and had fewer issues.Or, maybe not. I don't know anything about RavenDB. Architecture can't prevent feature intersection issues if the features really are inexplicably intertwined, and we can never know in advance we won't want a feature that really does have the worst case of 2^N intersection. But architecture can at least limit intersections due to implementation issues. Even if I'm wrong about RavenDB and eTags (and I'm just using it for the sake of argument - they're smart people and probably got it right), it should be clear how architecture can help. Most patterns people talk about are designed explicitly with the goal of reducing the number of code changes required by new or changing features. This goes way back - for example Design Patterns, Elements of Reusable Object-Oriented Software, the introduction states Each design pattern lets some aspect of the architecture vary independently of other aspects, thereby making a system more robust to a particular kind of change.My point is, one can get some sense of the Big O of feature intersections in practice by, well, looking at what happens in practice. In researching this answer, I found that most analysis of function points/development effort (i.e. - productivity) found either less than linear growth of project effort per function point, or very slightly above linear growth. Which I found a bit surprising. This had a pretty readable example.This (and similar studies, some of which use function points instead of lines of code) doesn't prove feature intersection doesn't occur and cause problems, but it seems like reasonable evidence that it's not devastating in practice. |
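The combinatorial blow-up described above is usually attacked by sampling the intersection space rather than enumerating it. One common sampling strategy is pairwise (all-pairs) coverage, which this answer does not prescribe but which fits its argument that harmful interactions usually involve only a few features. The sketch below uses made-up feature names and plain two-feature combinations rather than a full covering-array algorithm.
from itertools import combinations

FEATURES = ["replication", "indexing", "sharding", "encryption", "audit_log"]

def pairwise_configs(features):
    """Yield one configuration per pair of features, with just that pair enabled.

    Testing every subset would need 2**len(features) runs; covering each
    two-feature intersection needs only len(features)-choose-2 runs.
    """
    for a, b in combinations(features, 2):
        yield {name: name in (a, b) for name in features}

if __name__ == "__main__":
    configs = list(pairwise_configs(FEATURES))
    print(f"{2 ** len(FEATURES)} exhaustive configs vs {len(configs)} pairwise configs")
    for cfg in configs[:3]:
        print(cfg)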
_webmaster.103320 | My question is: if my title tag and meta description get some of their content or text from the DB, how would this affect SEO? e.g. Isuzu D-Max 4x2 LS MT - Specs, Specification and Price List, where Isuzu D-Max 4x2 LS MT was fetched from the DB. Does this still count as a complete or good title? TIA! | Title Tag and Meta Description | seo;title;database | null
_codereview.165793 | I recently ran into a question on M.SE asking,For positive integers x, y, z find a solution s.t. \$ \frac{x}{y+z} + \frac{y}{x+z} + \frac{z}{x+y} = 4 \$so rather than think, I made a quick brute force c++ program to check for solutions:unsigned lim = 251;for(double x=1; x<lim; x++) { for(double y=1; y<lim; y++) { for(double z=1; z<lim; z++) { printf(x=%.0f, y=%.0f, z=%.0f\n,x,y,z); if( std::abs((x/(y+z) + y/(x+z) + z/(y+x)) - 4) <= 1E-15) { printf(solution: x=%.0f, y=%.0f, z=%.0f\n,x,y,z); } } }}All of this being within main, of course. This is obviously slow so I was looking for optimizations. A couple of my thoughts were to remove the std::abs call because the inside expression must be positive, perhaps I could simply check == 4 rather than account for precision. I couldn't figure out a way to drop a for loop, because you can't isolate a variable. Otherwise I'm not sure what to do.Questions:How can this code be sped up? My goal is lim=1E3 in under a minute. As of now it takes 31.0643 s for lim=251.Can the for-loops be reduced? As of now formatting is not really an issue for me, but can this be simplified down to a single expression and/or loop? Or maybe look a little cleaner, without sacrificing optimization?Update: After some research I'm now aware that the smallest known solution to this has numbers with 81 digits. I'm not concerned with this, just the above questions. | Searching for three positive integers as a solution to an equation | c++;performance;c++11;mathematics | Here are some simple optimizations you can make that can push your limit a bit higher:Don't use floating point arithmetic. Floating point arithmetic is slow compared to integer arithmetic (this is not always true with the advent of things like FPUs, but it is a good rule of thumb). Even worse, floating point arithmetic is prone to things like rounding errors, which means you might find solutions which don't work and miss solutions that do work.One way to avoid using floating point arithmetic here is to expand out the equation into the form:\$x(x+y)(x+z) + y(y+x)(y+z) + z(z+x)(z+y) = 4(y+z)(x+z)(x+y)\$In this form you can check whether the equation is satisfied just using integer arithmetic (make sure you use a integer type large enough so that neither side overflows, however).Make use of symmetry (thanks @Deduplicator). Another simple observation you can make is that this equation is symmetric in \$x\$, \$y\$, and \$z\$, i.e. if you have a solution \$(x, y, z)\$, then any permutation of this solution also works. One way to use this fact is to only loop over \$1 \leq y \leq x\$ and \$1 \leq z \leq y\$. This cuts down on your total number of iterations by a factor of \$3! = 6\$. Reduce number of degrees of freedom. One interesting observation is that, once you fix \$x\$ and \$y\$, you don't have many choices left for \$z\$. So instead of looping over all \$z\$ in the range \$[1, L]\$, you can more efficiently compute the possible values of \$z\$. (This removes one of your for loops).How do you find the possible values of \$z\$? Well, if we know what \$x\$ and \$y\$ are, the above equation reduces to a cubic in \$z\$; i.e., something for the form \$c_3z^3 + c_2z^2 + c_1z + c_0 = 0\$. There are a couple ways to solve this for \$z\$; the easiest way I can think of (but definitely not the fastest) is to try all the factors of \$c_0\$ (by the Rational Root Theorem, any integer \$z\$ that satisfies this must divide \$c_0\$). 
This does involve factoring \$c_0\$, however, which can take a while depending on how you implement it. A faster method is to use binary search to find the roots, but you have to be a bit careful here, since the cubic might not be monotone increasing/decreasing over the entire interval. The correct way to do this is to first find the roots of the derivative of the cubic (the derivative of the cubic is a quadratic, so you can do this with just the quadratic formula); these are the critical points of the cubic, so the cubic is monotone increasing/decreasing on the intervals between these points, and you can binary search here. Learn about elliptic curves. So it turns out that the smallest solutions to this Diophantine equation are immense, and it's pretty much impossible to find any via any method like this (I forget how large they are, but I believe each of x, y, and z is at least 10 digits long). The question then becomes, how did people find this answer in the first place?The trick here is that the cubic above (in fact, pretty much any homogeneous cubic in three variables) is an instance of a mathematical object known as an elliptic curve, and that these objects have a ton of nice properties. One of these nice properties is that there is a method whereby you can take any two integer solutions to this equation and add them to get a third integer solution. So one approach which works is to start with an integer solution (but not a positive integer solution) such as (11, 9, -5), and repeatedly add it to itself until you end up with a positive integer solution (as far as I know, there's no guarantee you ever will end up with a positive integer solution, but in this case you do after a couple of steps). EDIT:I ran some tests to compare the above optimizations (see here). On my machine for limit=1000:OP's code (removing the printf per iteration) takes ~14.5 seconds.OP's code with optimization 1 takes ~4.2 seconds.OP's code with optimizations 1 + 2 takes ~0.7 seconds.OP's code with optimizations 1 + 2 + 3 takes ~0.25 seconds. For larger limits, these differences are more pronounced. For limit=5000:OP's code with optimizations 1 + 2 + 3 takes ~7.9 seconds.OP's code with optimizations 1 + 2 takes ~86 seconds.The other two cases each took at least 5 minutes. |
_softwareengineering.180732 | Do NSURLConnection service objects and XML/JSON parser objects fall within the controller layer or the model layer? Why?Is it OK to have business logic in the controller? Or should it be in the model layer only?Can the model layer be represented by NSArray/NSDictionary objects or should it be strictly structured with custom objects to comply with the MVC pattern, given that my app doesn't require persistent storage. | Where do the parser and service objects fit in MVC? | mvc;ios | null |
_unix.155682 | I am running some long-running programs (for scientific purposes) on a multi-core Linux box. The processes are controlled by a small daemon which restarts the next job when one finishes (I run 3-6 at once) - but I've noticed they don't always use 100% CPU. The programs (and the daemon) are written in Python.When I run this code on Mac OS X, I can run programs for weeks and they will always use as many system resources are available, while the machine is running at a normal temperature.I've just started trying to run such things on Debian Linux (on another machine), with 6 Cores and much more RAM than the jobs need. I am running 5 such jobs at once. When I first started the jobs a few days ago, I had 5 Python processes in top, each using 100% CPU. About a day later, I checked how it was all doing and I had 3 processes at 100% CPU and two running at 50% CPU. Most recently (about 4 days in) I have 5 processes all running at 20% CPU.What could be causing this? Nothing seems to suggest that CPU usage management tools are pre-installed on Debian Wheezy, and I have not installed anything like that myself (to my knowledge), nor configured it. Also, since the limits seem to vary depending on how long the demon has been running, I'm not convinced that it could be such a system. I have checked if the machine is overheating, and it doesn't seem to be much hotter than the (cold) room it's in; the air from the fans/vents is both unobstructed and cool.The processes are still running, so I can measure anything that might be useful for debugging (length of time running, process priority, etc) in order to debug this problem. Can anyone tell me where to start, or what any possible solutions might be?UPDATE:When I try the same thing with 3 threads instead of 5, I am dropped to 33% each (after initial drops to 50%.)Is there any program or scheduling policy that limits all child processes of a single process to a total of 100%? as that's what seems to be happening.Next test is to directly run the scripts in separate screen shells, (BTW, the first script was launched from inside screen) and see if we get any slowdown. Carving the jobs up by hand like this is an OK-ish workaround, but quite irritating (and should be unnecessary.) In general, of course, this sort of problem may not be solvable this way, but since all results from each job are saved to disk rather than returned to the thread manager, I'll get away with it.UPDATE 2:Separate processes launched from different screen instances are still going at 100% CPU after 14 hours, will report back if I see any slowdown but as expected this case is unaffected by any throttling.anyone care to write (or point me at) something that explains process priority on Linux? I am wondering if my spawning processes is being marked as a lower priority (since it uses very little CPU itself), and then the child processes are inheriting this.EDIT:I've been asked about the script I'm running, and the function of forking the daemon processes.The long running script is a big calculation, which always runs at 100% CPU until it finishes, and does nothing funny about parallelisation or multiprocessing. (this is a widely tested assertion.) To clarify further - the only times I've seen these processes run at less than 100% CPU on my Mac are when overheating, or when paging/swapping. 
Neither of these is relevant to the Linux case.Here is the function that forks out, and then manages the long running processes:from multiprocessing import Processimport time, sys, os# An alternative entry point which runs N jobs in parallel over a list of files.# Note, since this is designed to be used as a library function, we return from the initial# function call rather than exiting.def run_over_file_list(script_path, list_of_data_files, num_processes, timeout=float('inf')): try: pid = os.fork() if pid > 0: # exit first parent return except OSError, e: print >>sys.stderr, fork #1 failed: %d (%s) % (e.errno, e.strerror) sys.exit(1) # decouple from parent environment os.chdir(/) os.setsid() os.umask(0) # do second fork try: pid = os.fork() if pid > 0: # exit from second parent, print eventual PID before print Daemon PID %d % pid sys.exit(0) except OSError, e: print >>sys.stderr, fork #2 failed: %d (%s) % (e.errno, e.strerror) sys.exit(1) # OK, we're inside a manager daemon. if os.path.isfile(status_filename): raise Exception(a daemon is already running. failed.) f = open(status_filename, w) f.write(str(os.getpid())) f.close() jobs = [script_path] * num_processes data_files_remaining = [f for f in list_of_data_files] update_files_remaining_file(len(data_files_remaining)) assert num_processes <= len(data_files_remaining) restart = False with nostdout(): while True: processes = [] for job in jobs: p = Process(target=file_list_worker, args=(job, data_files_remaining.pop(0))) p.started = time.time() p.start() processes.append(p) stop = False while True: time.sleep(10) ended = [] for i, p in enumerate(processes): if not p.is_alive(): j = i ended.append((j,p)) elif time.time() - p.started > timeout: p.terminate() j = i ended.append((j,p)) if not stop: for tup in ended: if not data_files_remaining: stop = True break i, e = tup new_p = Process(target=file_list_worker, args=(jobs[i], data_files_remaining.pop(0))) new_p.started = time.time() new_p.start() processes[i] = new_p # old e will be garbage collected else: if len(ended) == len(processes) and not data_files_remaining: stop = False break try: command = check_for_command() if command == stop: stop = True elif command == restart: stop = True restart = True elif command == kill: for p in processes: p.terminate() clear_command() os.remove(status_filename) exit(0) except NoCommandError: pass update_files_remaining_file(len(data_files_remaining)) clear_command() update_files_remaining_file(len(data_files_remaining)) if not restart: os.remove(status_filename) break else: jobs = None restart = False # While in a fork, we should never return (will continue running the original/calling script in parallel, hilarity ensues.) 
exit(0)EDIT 2:prioritySo, everything seems to run with priority 20 from whatever source; the pre-throttled processes, the post-throttled processes, the daemon manager, the processes run directly from the shell under screen.ulimit -afrom bash:core file size (blocks, -c) 0data seg size (kbytes, -d) unlimitedscheduling priority (-e) 0file size (blocks, -f) unlimitedpending signals (-i) 127788max locked memory (kbytes, -l) 64max memory size (kbytes, -m) unlimitedopen files (-n) 1024pipe size (512 bytes, -p) 8POSIX message queues (bytes, -q) 819200real-time priority (-r) 0stack size (kbytes, -s) 8192cpu time (seconds, -t) unlimitedmax user processes (-u) 127788virtual memory (kbytes, -v) unlimitedfile locks (-x) unlimitedfrom fish:Maximum size of core files created (kB, -c) 0Maximum size of a processs data segment (kB, -d) unlimitedMaximum size of files created by the shell (kB, -f) unlimitedMaximum size that may be locked into memory (kB, -l) 64Maximum resident set size (kB, -m) unlimitedMaximum number of open file descriptors (-n) 1024Maximum stack size (kB, -s) 8192Maximum amount of cpu time in seconds (seconds, -t) unlimitedMaximum number of processes available to a single user (-u) 127788Maximum amount of virtual memory available to the shell (kB, -v) unlimitedfrom fish under screen:(exactly the same as normal fish.)Much later updateI have also noticed this bug with long-running processes run from separate shells. e.g:Instance 1: 17% (one core of 6 at 100%.)Instance 2: 8% (one core of 6 at 50%.)Instance 3: 8% (one core of 6 at 50%.)if I change the priority of instance 2 to be very high, the state becomes:Instance 1: 17% (one core of 6 at 100%.)Instance 2: 17% (one core of 6 at 100%.)Instance 3: 0% (one core of 6 at 0%.)If the priorities are all equalised again, we return to the first state.I am starting to think that this problem may be related to a specific hardware configuration or something, but I lack the tools/knowledge to debug further. | What's limiting my CPU usage in forked processes? | debian;cpu;top | null |
_unix.71050 | I use Slackware64-current on my work laptop and though I love running Slackware, sometimes it hangs on me near to the point of starting considering falling back to Windows. I develop on Java and use Eclipse IDE. I know Java IDE's are not supposed to be lightweight, but, on occasions, it really gets stuck and as I normally have a console open, I run top only to find out Java is taking 90%+ of CPU load.How can I make sure it does not gets (much!) in my way?I know this is a rather open question, but I'll be more than happy to provide more details as needed.EDIT: Hardware specsCPU: Intel(R) Core(TM) i5-2520M CPU @ 2.50GHzRAM: 4GiB (probably DDR3, not sure though!) | Prevent Java to escalate CPU usage on Slackware64? | java;cpu;top;slackware;eclipse | null |
_codereview.86528 | I'm working on some code that has the following pattern of conditionals inside it. It doesn't look great, but it doesn't look terrible. Please, shed some light on the underlying reason this is not passing my smell test. I'd love to see some alternatives. I'm guessing it's because it's completely ignoring a certain permutation.Note: I'm asking this question with the intent of bettering the code, not code golf.public class RoleManager{ public string Role { get; private set; } internal void SetRole(bool isManager, bool hasBackendAccess) { if (isManager && hasBackendAccess) { Role = FullAccess; } else if (isManager) { Role = Manager; } else if (hasBackendAccess) { Role = LimitedBackend; } }} | Seemingly redundant conditionals...or not? | c# | You have 3 different role values and 1 for not assigning anything, so you'll somehow need 4 discrete values to differentiate. But you could rewrite as:if (isManager) Role = hasBackendAccess ? FullAccess : Manager;else if (hasBackendAccess) Role = LimitedBackend; |
_unix.276199 | The situation I have in mind has the following structure:% some_command | [PRODUCED OUTPUT] || echo 'no output' >&2Here [PRODUCED OUTPUT] stands for some as-yet-unspecified testing command, whose value should be true (i.e. success) iff some_command produces any output at all.Of course, some_command stands for an arbitrarily complex pipeline, and likewise, echo 'no output' >&2 stands for some arbitrary action to perform in case some_command produces no output.Is there a standard test to do the job represented above by [PRODUCED OUTPUT]? grep -qm1 '.' comes close but also reports false on an input consisting of empty lines.Unlike Check if pipe is empty and run a command on the data if it isn't I just want to discard the input if it's present, I don't need to preserve it. | Standard/canonical way to test whether foregoing pipeline produced output? | shell script;zsh;pipe | How about using read?$ cat /dev/null | read pointless || echo no outputno output$ echo something | read pointless || echo no output$ printf \n | read pointless || echo no output$ printf \n | read pointless || echo no output$ false | read pointless || echo no outputno outputAccording to the Open Group definition:EXIT STATUSThe following exit values shall be returned:0Successful completion.>0End-of-file was detected or an error occurred. |
_unix.25640 | I'm trying to make a small change to an automake build.The system to modify uses configure.ac and Makefile.am inputs.For a single object file within one subdirectory I have to invokea script before compiling, to patch config info into the build.I don't see the right location to allow such pre-processingahead of compiling this specific C file. What I've tried isto insert an additional target into all: all-amBut this seems not to be the way to go and in addition I wasn't ableto figure how to overload this generated line. | Add additional processing in an automake build for one object | automake | null |
_unix.386047 | I am using docker on Debian 8, docker mount container, this cause issues with snmpd which logs lot of cannot statfs :Aug 14 16:27:13 docker1 snmpd[26624]: Cannot statfs /run/docker/netns/52b226f1dfca#012: Permission deniedAug 14 16:27:13 docker1 snmpd[26624]: Cannot statfs /mnt/docker/devicemapper/mnt/3f15d8f53ad8ad978a24ec69df0b60783f09fad35f9a9ed96130b2d05b138d02#012: Permission deniedAug 14 16:27:13 docker1 snmpd[26624]: Cannot statfs /mnt/docker/containers/2274df4e7c2f544d7af14a32025e7838c51643929cce6019f7605928daf6ec36/shm#012: Permission deniedAug 14 16:27:13 docker1 snmpd[26624]: Cannot statfs /mnt/docker/devicemapper/mnt/1ccd17d308d0f213c6152e192b810d7a90139862790c9289904187d2cb564eda#012: Permission deniedAug 14 16:27:13 docker1 snmpd[26624]: Cannot statfs /applis/docker/containers/07de11df39db022ed5578c49655ef33f0ed7aef457368623cb4bc8f442978827/shm#012: Permission deniedI would like to suppress those error from my log file.I tried to add this in my snmpd.conf :ignoredisk /mnt/docker/containers/*But it doesn't work. any idea ? | snmpd Cannot statfs : Permission denied | snmp | null |
_reverseengineering.12629 | I've been playing around with decompiling Android apps from dex/jar files to java source code, with varying success. I've tried the usual suspects - JD-GUI, procyon, cfr, krakatau and jadx. I'm having a specific problem with all of them on a particular app in that it's obfuscated, and many classes seem to be split up into multiple files. AFAIK, this isn't allowed in Java.Also, the classes seem to extends multiple base classes, e.g. class a extends Activity, and class a(or renamed to something else but I know is actually class a as it tries to access a's private member variables directly) extends BroadcastReceiver. AFAIK, this isn't allowed in Java either.As a result, the decompiled code is full of errors and hard to follow. I'm not expecting a compileable code form the decompiled code, but I do wish to at least be able to perform meaningful static analysis, which is hard to do when the decompiled source doesn't follow java conventions.Any help on how to resolve this decompilation issue?Thanks. | Decompiled Java classes span multiple files | decompilation;java;deobfuscation | null |
_unix.315327 | I have two computers on the same lanComp A: 192.168.151.19Comp B: 192.1681.151.15 (Static IP address with gateway as <comp A IP address)The setup is like thisInternet <----> Computer A <--------> Computer BBoth computers have a single network card. The computer A is connected to the Internet. Computer B is connected to computer A via a usb-ethernet adapter. I have tried to understand iptables and other related questions, but somehow I still not able to configure this correctly.I use the following iptable rules on comp A iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.151.19:443 iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 192.168.151.19:80 iptables -t nat -A POSTROUTING -j MASQUERADEOn computer B I use the command sudo ip route add default via 192.168.151.19I ran sysctl -w net.ipv4.ip_forward=1 on both the computers. How can I configure this to give Internet connectivity to computer B. Also, I would like to control internet connectivity to computer B by any firewall rule that I add on computer AEdit: The question is essentially same but with a different network setup. In an earlier question of mine, the computer were not connected to each other physically. Now, One computer is connected to the other via a usb-ethernet adapter and the other computer is connected to the internet | Networking between two computers | networking;iptables;port forwarding;connection sharing | null |
_unix.40694 | I have a script converting video files and I run it at server on test data and measure its time by time. In result I saw:real 2m48.326suser 6m57.498ssys 0m3.120sWhy real time is that much lower than user time? Does this have any connection with multithreading? Or what else?Edit: And I think that script was running circa 2m48s | Why real time can be lower than user time | time | The output you show is a bit odd, since real time would usually be bigger than the other two.Real time is wall clock time. (what we could measure with a stopwatch)User time is the amount of time spend in user-mode within theprocessSys is the CPU time spend in the kernel within the process.So I suppose if the work was done by several processors concurrently, the CPU time would be higher than the elapsed wall clock time.Was this a concurrent/multi-threaded/parallel type of application?Just as an example, this is what I get on my Linux system when I issue the time find . command. As expected the elapsed real time is much larger than the others on this single user/single core process.real 0m5.231suser 0m0.072ssys 0m0.088sThe rule of thumb is:real < user: The process is CPU bound and takes advantage of parallel execution on multiple cores/CPUs.real user: The process is CPU bound and takes no advantage of parallel exeuction.real > user: The process is I/O bound. Execution on multiple cores would be of little to no advantage. |
_webmaster.65107 | I keep getting 404.7 (FILE_EXTENSION_DENIED) on a sub directory I have under my Default Web Site. Request Filtering allows htm and html files on the server level and the site level.A few months ago I created a folder under Default Web Site to test some proof of concept stuff and have had no problems. A week or so ago I added a folder with Piwik php \ html code underneath 'Default Web Site' and request filtering has decided to block html extensions.My question is can anyone tell me any reason why my one folder would not have html files blocked and yet another added a few weeks later does have html files blocked? EDIT:I believe I installed Request Filtering a much longer time ago before both of these websites were created when I was doing some Web Deploy stuff. Other than that I have not knowingly applied any request filters to any of the websites. | IIS 7.5 Request Filtering | iis7 | null |
_unix.73053 | What does the -c option of sg do? At least on my machine, the man page fails to explain this option. | What does the -c option of sg do? | linux;bash | It's not documented in the man page, but looking at the source code it looks like it runs the command via /bin/sh -c instead of executing it directly. I think it is there mostly for compatibility with the sg command on other Unix systems. |
_codereview.150533 | I'm working on HackerRank to try to improve my Haskell skills along side with reading Haskell Programming from first principles. I wrote a program that works, but it seems to time out on large input sets. The purpose of the program isGiven a list of n integers a = [a1, a2, ..., an], you have to find those integers which are repeated at least k times. In case no such element exists you have to print -1. If there are multiple elements in a which are repeated at least k times, then print these elements ordered by their first occurrence in the list. So I wrote a few different functions to help with this. count which counts the number of occurrences of an element in a listcount :: Eq a => Integral b => a -> [a] -> bcount e [] = 0count e (a:xs) = (count e xs +) $ if a == e then 1 else 0uniq which removes duplicates from a listuniq :: Eq a => [a] -> [a] -> [a]uniq x [] = x uniq [] (a:xs) = uniq [a] xsuniq x (a:xs) = if a `elem` x then uniq x xs else uniq (a:x) xs filt which filters through a list and removes elements that don't occur at least k times.filt :: Show a => Num a => Read a => Eq a => Integral b => [a] -> b -> [a]filt a b = reverse $ uniq [] [i | i <- a, count i a >= b]printList which prints a list as a space separated list or prints -1 if the list is empty.printList :: Show a => [a] -> IO ()printList [] = putStrLn -1printList a = putStrLn $ unwords [show i | i <- a] readNumbers which takes a space separated string and returns a Num list from that string.readNumbers :: Show a => Eq a => Num a => Read a => String -> [a]readNumbers = map read . wordsrun which throws all of this together and runs this n times.run :: (Show a, Eq a, Num a, Read a) => a -> IO ()run 0 = putStr run n = do a <- getLine b <- getLine printList $ filt (readNumbers b) (readNumbers a !! 1) run $ n - 1main the main function. It gets a number n and then calls run n.main :: IO ()main = do a <- getLine run $ read aThis code works, for example, with the input39 24 5 2 5 4 3 1 3 49 44 5 2 5 4 3 1 3 410 25 4 3 2 1 1 2 3 4 5and gives the desired output of4 5 3-15 4 3 2 1However, with larger datasets this code is incredibly slow. I'm guessing it's because the recursion is less than optimal, but I can't really pinpoint what is taking so long. My best guess is that uniq or count is the limiting factor, but I can't figure out how to optimize them. | Filter Duplicate Elements in Haskell | haskell;recursion;time limit exceeded | If you write uniq as a right fold, you don't need to pass an accumulator through, and the list comes out in the right order:uniq :: Eq a => [a] -> [a]uniq [] = []uniq (x:xs) = (if x `elem` xs then id else (x:)) $ uniq xsfilt :: Show a => Num a => Read a => Eq a => Integral b => [a] -> b -> [a]filt k is = uniq [i | i <- is, count i is >= k](Edit: Actually that one throws out the first of each two equal elements, not the last. Here`s one without that problem:uniq :: Eq a => [a] -> [a]uniq [] = []uniq (x:xs) = x : uniq (filter (/=x) xs))You've commendably already brought count into a form that allows it to be written in terms of library combinators:count :: Eq a => Integral b => a -> [a] -> bcount e = sum . map (\a -> if a == e then 1 else 0)That's a bit ugly due to lambdas though, here's a nicer version:count e = length . 
filter (== e)For separation of monadic and pure code (and generally for factoring out common code from across cases), here's a showList to replace printList:showList :: Show a => [a] -> StringshowList [] = -1showList a = unwords [show i | i <- a] Calling a monadic action a given number of times doesn't need manual recursion, and thus also doesn't need to give the repeated action a name:main :: IO () main = do a <- readLn replicateM_ a $ do [_n, k] <- map read . words <$> getLine numbers <- map read . words <$> getLine putStrLn $ showList $ filt k numbers(I think readNumbers doesn't deserve a name.)In case the order in which the output is given isn't important, here's a version that doesn't require quadratic time because each element is compared to every other:filt k = map head . filter ((>=k) . length) . group . sortwhich relies on Data.Lists sort being faster than quadratic time. |
_webapps.89639 | Say I searched for my childhood's school on Google Maps:After seeing the Earth View of it, I feel like searching for Images of this college, on Google Image search.Is there a way to launch an image search with the same input as the one currently used in Google Maps? (Or any other media-specific Google search such as Videos, etc.)My reasoning is that since I'm gonna go from a Google service to another Google service, there might be a way besides from copy/paste to switch search results type.Otherwise I can copy/paste but that requires selecting the text, going to Google Image search, then pasting, then clicking the search button. Other Google search services have shortcuts to search through different type of media at the top of the page as seen here:I tried clicking on different menus and shortcuts, but none that led to a different type of search. Looking for ways to do this on Google didn't give any results either. | Copying a Google Maps search field to an Image search field | google search;google maps;google image search | null |
_unix.362536 | How to convert log file below to the output file in the bottom using awk in shell scripts.input file format as below:zzz ***Fri 27 March 2017 01:21:00 ESTDevice: C1 C2 C3R1 1 2 3 R2 4 5 6R3 7 8 9zzz ***Fri 27 March 2017 01:22:00 ESTDevice: C1 C2 C3R1 11 12 13 R2 14 15 16R3 17 18 19Output file format:Timestamp R1-C1 R1-C2 R1-C3 R2-C1 R2-C2 R2-C3 R3-C1 R3-C2 R3-C303/08/17 01:21:00 1 2 3 4 5 6 7 8 9 03/08/17 01:22:00 11 12 13 14 15 16 17 18 19 | text file reformat to text file | shell script;awk;conversion | null |
_webapps.87647 | For example, I have 500 items in an album. In it, I have 12 videos and the rest are all images. Is there a way I can see all the videos together or search only for specific videos in the album? | How to list only videos in an album in Flickr | flickr | null |
_unix.385641 | WIM format automatically detects duplicate files and archives without duplication.Is there any alternative on UNIX, Linux or Mac? | An effective archive method for many duplicate files | tar;archive | If the archive is small enough, most archive formats will do a decent job, with the exception of zip. Zip compresses each file independently, but other popular formats (tar.anything, 7z, rar) compress the archive as a whole. If identical files are close enough in the archive then the second occurrence can be compressed down to a few bytes. How close is close enough depends on the archive format.A method that works for any archiver that understands hard links, such as tar, is to first replace the identical files by hard links. This is only applicable if you don't want the files with duplicate contents to have different metadata (permissions, timestamps, etc.). You use fdupes to look for duplicates and then a bit of post-processing to replace duplicates by hard links, assuming that the file names don't contain newlines:fdupes -q -r . | awk ' $0 == {first = ; next} { gsub(/\047/, \047\\\047\047, $0); if (first == ) first = $0; else system(ln -f \047 first \047 \047 $0 \047); }' |
_webmaster.103923 | I am working on Influencer marketing, one of the blogger has written a blog for us but they are linking to our landing page with tracking ids, as well as link redirects before the actual URL is hit, is this good for SEO? I personally dont think so but will like some thoughts. | Redirecting Backlinks | redirects;backlinks | null |
_codereview.141249 | #include <iostream>#include <vector>#include <algorithm>template <typename T>int copy_max(T& to, T& from, unsigned char delim, size_t limit){ typename T::iterator pos = std::find(from.begin(), from.end(), delim); typename T::iterator prev_pos = pos; if(pos == from.end()) { return -1; } size_t index = std::distance(from.begin(), pos); while(pos != from.end() && index <= limit) { prev_pos = pos; pos = std::find(pos + 1, from.end(), delim); index = std::distance(from.begin(), pos); } index = std::distance(from.begin(), prev_pos); if(index == 0) { return -1; } to.insert(to.begin(), from.begin(), prev_pos); from.erase(from.begin(), prev_pos); return index;}void output(const std::vector<unsigned char>& va, const std::vector<unsigned char>& vb ){ std::string str(va.begin(), va.end()); std::cout << To: << str << std::endl; str.assign(vb.begin(), vb.end()); std::cout << From: << str << std::endl; }static int case_num = 1;#define expect(a, b) \std::cout << [ << case_num ++ << ] ; \if(a == b) \std::cout << #a << == << #b << TRUE << std::endl;\else {\std::cout << FAIL << #a << != << #b << FALSE << std::endl;\throw new int;\}int main() { std::string b = Hi\x24The\x24HowAreYou\x24?; std::vector<unsigned char> va; std::vector<unsigned char> vb(b.begin(), b.end()); int index = copy_max(va, vb, '\x24', 5); expect(index, 2); output(va, vb); va.clear(); index = copy_max(va, vb, '\x24', 5); expect(index, 4); va.clear(); index = copy_max(va, vb, '\x24', 5); expect(index, -1); return 0;}Don't mind the main() function and testing macros, this is just something I threw together in a matter of minutes. | Function to copy characters from one buffer to another given the maximum size | c++ | null |
_webmaster.85385 | Can I create slug with unicode letters since all the major browsers support unicode urls (so it won't be transformed into the utf8 equivalent), or should I always use English letters?For example: If I want to create a slug out of the words Buenos das, should it be http://example.com/buenos-das or http://example.com/buenos-dias? | Should slug (SEO/user friendly URLs) contain only English letters? | seo;url;internationalization | null |
_webmaster.99709 | I have created a bilanguage site EN/DEthe language text are dynamically loaded from EN.php or DE.php depending on a $language variable.How can I organise my code / pages for better SEO and how can I organise the tags description for both language.Should I duplicate my full pages to www.mySite.com/en/ and www.mySite.com/de/ ?? | Google SEO for dynamic multilanguage site | seo;php;meta description | null |
_unix.223465 | I'm running Arch Linux with the Gnome desktop environment. Using the Gnome settings I can add the dvorak keyboard layout and switch between with the shortcut key, but whenever start the computer the login screen has the qwerty keyboard set. As soon as I login, it's set back to dvorak though.I've used setxkbmap -layout dvorak and localectl set-x11-keymap dvorak as detailed on the wiki here. But all they achieve is to remove the language toolbar from the login screen, so that I can't change to dvorak before login.Where is the setting that allows me to specify the keyboard layout on the login screen?UPDATE: Here is what my input sources settings look like: | Setting login screen keyboard | arch linux;gnome;keyboard layout | null |
_datascience.5610 | What are Hybrid classifiers used for sentiment analysis? How are they built? Please suggest good tutorial/book/link for reference. Also how are they different from other classifiers like SVM and Naive Bayes? | What are Hybrid Classifiers used in Sentiment Analysis? | machine learning;classification | In sentiment analysis you may want to combine a number of classifiers. Let's say: a separate classifier for emoticons, another one for emotionally loaded terms, another one for some special linguistic patterns and - let's say - yet another one to detect and filter out spam messages. It's all up to you.You can use either SVM, Naive Bayes, or anything else that best suits your problem. You may use majority voting, weights (for example based on cross validation results), or any other more advanced technique to decide which class is the most appropriate one.Also, googling for hybrid sentiment returns tons of papers containing answers to the questions that you have stated. Please, don't ask us to rewrite this papers here. |
_unix.332007 | I am trying to restart my Centos 6.7 system using the command line:init 6But I need it stay down N number of seconds before starting back up again. I have been searching with Google, but I cannot by a variant of the init command that will do this. | How can I tell my system to shutdown, stay off for X seconds, then restart? | centos;init;reboot | As you are on Debian, you want the rtcwake utility.Manual pageNot good for very short sleeps (say less than 10 seconds) as it may take more time to put the system to sleep than that.The basic idea is that you program the RealTimeClock chip as a wake source for n seconds in the future and then suspend, either to ram or disk, or even switch the system off.. |
_unix.330594 | I have configured a network map in Virtual Box. I have one VM that acts as a host on a network. I want this host to access a private network that is located behind a gateway, through a tunnel. The private network prefix is 10.0.20.0/24 and it is connected to a gateway 10.0.20.1. The gateway is connected to another network through another interface 192.168.20.5. The VPN server will be installed on the gateway. I have created a tunnel and assigned it an IP address on the gateway, then I did the same on the host thus connecting to the server. I get that the client is connected to the server and the server to the client.The problem is that I can not ping from the outside host to the private network. I think the problem is my routing table. On the external host, I set default gateway as the tunnel interface. And on the gateway, I add the net where the provate network is and set the gateway as 10.0.20.1Is that correct? | Create tunnel from Host to Gateway | vpn;gateway | null |
_webmaster.56547 | My SSL certificate works on my apache web server but only when I do https://Dedicated_vps_ip. When I do https://domain.com it just doesn't work. I've tried almost everything. What am I doing wrong?Here is a paste of my ssl.conf file: ## This is the Apache server configuration file providing SSL support.# It contains the configuration directives to instruct the server how to# serve pages over an https connection. For detailing information about these# directives see <URL:http://httpd.apache.org/docs/2.2/mod/mod_ssl.html>## Do NOT simply read the instructions in here without understanding# what they do. They're here only as hints or reminders. If you are unsure# consult the online docs. You have been warned. #LoadModule ssl_module modules/mod_ssl.so## When we also provide SSL we have to listen to the# the HTTPS port in addition.#Listen 443#### SSL Global Context#### All SSL configuration in this context applies both to## the main server and all SSL-enabled virtual hosts.### Pass Phrase Dialog:# Configure the pass phrase gathering process.# The filtering dialog program (`builtin' is a internal# terminal dialog) has to provide the pass phrase on stdout.SSLPassPhraseDialog builtin# Inter-Process Session Cache:# Configure the SSL Session Cache: First the mechanism# to use and second the expiring timeout (in seconds).SSLSessionCache shmcb:/var/cache/mod_ssl/scache(512000)SSLSessionCacheTimeout 300# Semaphore:# Configure the path to the mutual exclusion semaphore the# SSL engine uses internally for inter-process synchronization.SSLMutex default# Pseudo Random Number Generator (PRNG):# Configure one or more sources to seed the PRNG of the# SSL library. The seed data should be of good random quality.# WARNING! On some platforms /dev/random blocks if not enough entropy# is available. This means you then cannot use the /dev/random device# because it would lead to very long connection times (as long as# it requires to make more entropy available). But usually those# platforms additionally provide a /dev/urandom device which doesn't# block. So, if available, use this one instead. Read the mod_ssl User# Manual for more details.SSLRandomSeed startup file:/dev/urandom 256SSLRandomSeed connect builtin#SSLRandomSeed startup file:/dev/random 512#SSLRandomSeed connect file:/dev/random 512#SSLRandomSeed connect file:/dev/urandom 512## Use SSLCryptoDevice to enable any supported hardware# accelerators. Use openssl engine -v to list supported# engine names. NOTE: If you enable an accelerator and the# server does not start, consult the error logs and ensure# your accelerator is functioning properly.#SSLCryptoDevice builtin#SSLCryptoDevice ubsec#### SSL Virtual Host Context##<VirtualHost _default_:443># General setup for the virtual host, inherited from global configurationDocumentRoot /var/www/html/drummerzwebsitesServerName www.mineprowebhost.com:443# Use separate log files for the SSL virtual host; note that LogLevel# is not inherited from httpd.conf.ErrorLog logs/ssl_error_logTransferLog logs/ssl_access_logLogLevel warn# SSL Engine Switch:# Enable/Disable SSL for this virtual host.SSLEngine on# SSL Protocol support:# List the enable protocol levels with which clients will be able to# connect. 
Disable SSLv2 access by default:SSLProtocol all -SSLv2# SSL Cipher Suite:# List the ciphers that the client is permitted to negotiate.# See the mod_ssl documentation for a complete list.SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW# Server Certificate:# Point SSLCertificateFile at a PEM encoded certificate. If# the certificate is encrypted, then you will be prompted for a# pass phrase. Note that a kill -HUP will prompt again. A new# certificate can be generated using the genkey(1) command.SSLCertificateFile /etc/ssl/ssl.crt/www_mineprowebhost_com.crt# Server Private Key:# If the key is not combined with the certificate, use this# directive to point at the key file. Keep in mind that if# you've both a RSA and a DSA private key you can configure# both in parallel (to also allow the use of DSA ciphers, etc.)SSLCertificateKeyFile /etc/ssl/ssl.key/www_mineprowebhost_com.key# Server Certificate Chain:# Point SSLCertificateChainFile at a file containing the# concatenation of PEM encoded CA certificates which form the# certificate chain for the server certificate. Alternatively# the referenced file can be the same as SSLCertificateFile# when the CA certificates are directly appended to the server# certificate for convinience.SSLCertificateChainFile /etc/ssl/ssl.crt/www_mineprowebhost_com.ca-bundle# Certificate Authority (CA):# Set the CA certificate verification path where to find CA# certificates for client authentication or alternatively one# huge file containing all of them (file must be PEM encoded)#SSLCACertificateFile /etc/pki/tls/certs/ca-bundle.crt# Client Authentication (Type):# Client certificate verification type and depth. Types are# none, optional, require and optional_no_ca. Depth is a# number which specifies how deeply to verify the certificate# issuer chain before deciding the certificate is not valid.#SSLVerifyClient require#SSLVerifyDepth 10# Access Control:# With SSLRequire you can do per-directory access control based# on arbitrary complex boolean expressions containing server# variable checks and other lookup directives. The syntax is a# mixture between C and Perl. See the mod_ssl documentation# for more details.#<Location />#SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \# and %{SSL_CLIENT_S_DN_O} eq Snake Oil, Ltd. \# and %{SSL_CLIENT_S_DN_OU} in {Staff, CA, Dev} \# and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \# and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \# or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/#</Location># SSL Engine Options:# Set various options for the SSL engine.# o FakeBasicAuth:# Translate the client X.509 into a Basic Authorisation. This means that# the standard Auth/DBMAuth methods can be used for access control. The# user name is the `one line' version of the client's X.509 certificate.# Note that no password is obtained from the user. Every entry in the user# file needs this password: `xxj31ZMTZzkVA'.# o ExportCertData:# This exports two additional environment variables: SSL_CLIENT_CERT and# SSL_SERVER_CERT. These contain the PEM-encoded certificates of the# server (always existing) and the client (only existing when client# authentication is used). This can be used to import the certificates# into CGI scripts.# o StdEnvVars:# This exports the standard SSL/TLS related `SSL_*' environment variables.# Per default this exportation is switched off for performance reasons,# because the extraction step is an expensive operation and is usually# useless for serving static content. 
So one usually enables the# exportation for CGI and SSI requests only.# o StrictRequire:# This denies access when SSLRequireSSL or SSLRequire applied even# under a Satisfy any situation, i.e. when it applies access is denied# and no other module can change it.# o OptRenegotiate:# This enables optimized SSL connection renegotiation handling when SSL# directives are used in per-directory context.#SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire<Files ~ \.(cgi|shtml|phtml|php3?)$> SSLOptions +StdEnvVars</Files><Directory /var/www/cgi-bin> SSLOptions +StdEnvVars</Directory># SSL Protocol Adjustments:# The safe and default but still SSL/TLS standard compliant shutdown# approach is that mod_ssl sends the close notify alert but doesn't wait for# the close notify alert from client. When you need a different shutdown# approach you can use one of the following variables:# o ssl-unclean-shutdown:# This forces an unclean shutdown when the connection is closed, i.e. no# SSL close notify alert is send or allowed to received. This violates# the SSL/TLS standard but is needed for some brain-dead browsers. Use# this when you receive I/O errors because of the standard approach where# mod_ssl sends the close notify alert.# o ssl-accurate-shutdown:# This forces an accurate shutdown when the connection is closed, i.e. a# SSL close notify alert is send and mod_ssl waits for the close notify# alert of the client. This is 100% SSL/TLS standard compliant, but in# practice often causes hanging connections with brain-dead browsers. Use# this only for browsers where you know that their SSL implementation# works correctly.# Notice: Most problems of broken clients are also related to the HTTP# keep-alive facility, so you usually additionally want to disable# keep-alive for those clients, too. Use variable nokeepalive for this.# Similarly, one has to force some clients to use HTTP/1.0 to workaround# their broken HTTP/1.1 implementation. Use variables downgrade-1.0 and# force-response-1.0 for this.SetEnvIf User-Agent .*MSIE.* \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0# Per-Server Logging:# The home of a custom SSL log file. Use this when you want a# compact non-error SSL logfile on a virtual host basis.CustomLog logs/ssl_request_log \ %t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \%r\ %b</VirtualHost> | SSL works on https://dedicatedip but not https://domain | apache;https;virtualhost | null |
_unix.186014 | ---------------------------------------------------update-----------------------on doge the route works:thufir@doge:~$ thufir@doge:~$ routeKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Ifacedefault 192.168.1.1 0.0.0.0 UG 0 0 0 eth0192.168.1.0 * 255.255.255.0 U 1 0 0 eth0thufir@doge:~$ I'd like to get the same results on tleilax, but don't know how to configure that with yast:tleilax:~ # tleilax:~ # routeKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Ifacedefault 192.168.1.1 0.0.0.0 UG 0 0 0 enp3s8loopback * 255.0.0.0 U 0 0 0 lo192.168.1.0 * 255.255.255.0 U 0 0 0 enp3s8tleilax:~ # -------------------------------------------------------original question--------I'm trying to figure out how to use yast. Here's where I'm at: Network Settings Global OptionsOverviewHostname/DNSRouting Hostname and Domain Name Hostname Domain Name tleilax bounceme.net [x] Change Hostname via DHCP [ ] Assign Hostname to Loopback IP Modify DNS configuration Custom Policy Rule Use Default Policy Name Servers and Domain Search List Name Server 1 Domain Search 8.8.8.8 google.com Name Server 2 8.8.4.4 Name Server 3 hosts, etc:tleilax:~ # tleilax:~ # routeKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Ifaceloopback * 255.0.0.0 U 0 0 0 lo192.168.1.0 * 255.255.255.0 U 0 0 0 enp3s8tleilax:~ # tleilax:~ # ip route127.0.0.0/8 dev lo scope link 192.168.1.0/24 dev enp3s8 proto kernel scope link src 192.168.1.2 tleilax:~ # tleilax:~ # hostname -ftleilax.bounceme.nettleilax:~ # tleilax:~ # hostnametleilaxtleilax:~ # tleilax:~ # ifconfigenp3s8 Link encap:Ethernet HWaddr 00:13:20:AC:13:B0 inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::213:20ff:feac:13b0/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:3828 errors:0 dropped:0 overruns:0 frame:0 TX packets:3554 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:331124 (323.3 Kb) TX bytes:1550902 (1.4 Mb)lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:31477 errors:0 dropped:0 overruns:0 frame:0 TX packets:31477 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2453523 (2.3 Mb) TX bytes:2453523 (2.3 Mb)tleilax:~ # tleilax:~ # ping 216.58.216.164connect: Network is unreachabletleilax:~ # tleilax:~ # ping www.google.comping: unknown host www.google.comtleilax:~ # tleilax:~ # cat /etc/issueWelcome to openSUSE 13.1 Bottle - Kernel \r (\l).tleilax:~ # It's notable that, on the LAN, I can ssh into tleilax fine from doge; or ping it:thufir@doge:~$ thufir@doge:~$ ping tleilaxPING tleilax.bounceme.net (192.168.1.2) 56(84) bytes of data.64 bytes from tleilax.bounceme.net (192.168.1.2): icmp_seq=1 ttl=64 time=0.190 ms64 bytes from tleilax.bounceme.net (192.168.1.2): icmp_seq=2 ttl=64 time=0.169 ms64 bytes from tleilax.bounceme.net (192.168.1.2): icmp_seq=3 ttl=64 time=0.178 ms^C--- tleilax.bounceme.net ping statistics ---3 packets transmitted, 3 received, 0% packet loss, time 2000msrtt min/avg/max/mdev = 0.169/0.179/0.190/0.008 msthufir@doge:~$ I'm looking at:https://www.suse.com/documentation/sles11/book_sle_admin/data/sec_basicnet_yast.htmlbut can't figure out yast. I think I've got tleilax using dhcp; in which case I don't see why it can't connect outside the LAN. 
Of course, tleilax can ping the router, or doge:tleilax:~ # tleilax:~ # ping 192.168.1.1PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=1.45 ms64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.333 ms^C--- 192.168.1.1 ping statistics ---2 packets transmitted, 2 received, 0% packet loss, time 1001msrtt min/avg/max/mdev = 0.333/0.894/1.455/0.561 mstleilax:~ # tleilax:~ # ping 192.168.1.3PING 192.168.1.3 (192.168.1.3) 56(84) bytes of data.64 bytes from 192.168.1.3: icmp_seq=1 ttl=64 time=0.230 ms64 bytes from 192.168.1.3: icmp_seq=2 ttl=64 time=0.223 ms64 bytes from 192.168.1.3: icmp_seq=3 ttl=64 time=0.184 ms^C--- 192.168.1.3 ping statistics ---3 packets transmitted, 3 received, 0% packet loss, time 1998msrtt min/avg/max/mdev = 0.184/0.212/0.230/0.023 mstleilax:~ # -----------------------------------------------------------update--------------routing tab: Network Settings Global OptionsOverviewHostname/DNSRouting Default IPv4 Gateway Device 192.168.1.1enp3s8 Default IPv6 Gateway Device - Routing Table DestinationGatewayGenmaskDeviceOptions [Add][Edit][Delete] [ ] Enable IP Forwarding | yast routing problem -- connect: Network is unreachable | linux;networking;opensuse;routing;yast | null |
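The routing table on tleilax shows no default route, which is exactly what "connect: Network is unreachable" means for any destination outside 192.168.1.0/24. A hedged suggestion, assuming the gateway really is 192.168.1.1 as it is on doge: in the YaST Routing tab shown above, set Default IPv4 Gateway to 192.168.1.1 with device enp3s8, or try it from a root shell first:

ip route add default via 192.168.1.1 dev enp3s8     # temporary, gone after reboot
ping -c 2 8.8.8.8                                    # should now reach outside the LAN

To make it persistent outside YaST, openSUSE keeps static routes in /etc/sysconfig/network/routes, where the line "default 192.168.1.1 - -" has the same effect. If the temporary route fixes it, the DHCP setup on enp3s8 is most likely configured not to install a default route, which can also be changed in the YaST network card settings.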
_codereview.4785 | I'm using Winforms C# .NET 3.5. I'm getting frames, and this is how I handle them: delegate void videoStream_NewFrameDelegate(object sender, NewFrameEventArgs eventArgs); public void videoStream_NewFrame(object sender, NewFrameEventArgs eventArgs) { if (ready) { if (this.InvokeRequired) { videoStream_NewFrameDelegate del = new videoStream_NewFrameDelegate(videoStream_NewFrame); this.Invoke(del, new object[] {sender, eventArgs} ); } else { Rectangle rc = ClientRectangle; Bitmap bmp = new Bitmap(rc.Width, rc.Height); Graphics g = Graphics.FromImage((Image)bmp); g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBilinear; g.DrawImage((Bitmap)eventArgs.Frame, rc.X+10, rc.Y+10, rc.Width-20, rc.Height-20); g.Dispose(); this.Image = (Image)bmp; } } }Is there any way to optimize or improve performance?I'm getting low performance when my image is 320x240 and I'm stretching it to 1280x1024, But the same image on 640x480 and stretching to 1280x1024 wont get much loss in performance.I tried to use WPF and still same performance loss. That is weird because WPF is supposed to use DirectX and be fast in image processing.Here is my WPF code: delegate void videoStream_NewFrameDelegate(object sender, NewFrameEventArgs eventArgs); public void videoStream_NewFrame(object sender, NewFrameEventArgs eventArgs) { if (ready) { if (!this.Frame.Dispatcher.CheckAccess()) { videoStream_NewFrameDelegate del = new videoStream_NewFrameDelegate(videoStream_NewFrame); this.Frame.Dispatcher.Invoke(del, new object[] { sender, eventArgs }); } else { Bitmap bmp = (Bitmap)eventArgs.Frame.Clone(); IntPtr hBitmap = bmp.GetHbitmap(); BitmapSource img = Imaging.CreateBitmapSourceFromHBitmap(hBitmap, IntPtr.Zero, Int32Rect.Empty, BitmapSizeOptions.FromEmptyOptions()); bmp.Dispose(); GC.Collect(); this.Frame.Source = img; } } } | Image processing optimization | c#;.net;image;winforms;wpf | I tried to use WPF and still same preformence lose.. that is weird because WPF suppose to use DirectX and be fast in image proccessing.Except, in the WPF version, you are cloning a Bitmap (which has nothing to do with WPF) and then calling Imaging.CreateBitmapFromHBitmap, which eventually resolves to a call to:[DllImport(WindowsCodecs.dll, EntryPoint=IWICImagingFactory_CreateBitmapFromHBITMAP_Proxy)]internal static extern int CreateBitmapFromHBITMAP(IntPtr THIS_PTR, IntPtr hBitmap, IntPtr hPalette, WICBitmapAlphaChannelOption options, out BitmapSourceSafeMILHandle ppIBitmap);Now I don't know what that method is doing, but it may very well be copying the data again, but perhaps not. Either way, your WPF method is performing at least one clone of the entire image and your WinForms version is not.That's not really the meat of it though. You haven't even posted benchmark results, so you need to do that first before assuming any one part of code is slow, and you will have a hard time optimizing until you have that answer. You're two versions aren't even identical in output; the WinForms version performs HighQualityBilinear interpolation, which is certainly going to take a significant amount of time.You need to tell us what you requirements are. 
I ran a quick test with the following code using a 320x240 size image, and then again at 640x480:public Form1(){ InitializeComponent();} private TimeSpan RunTest( int sampleSize ){ var sw = Stopwatch.StartNew(); var baseImg = new Bitmap( @C:\TestImage.jpg ); Bitmap bmp = null; for( int i = 0; i < sampleSize; ++i ) { var rect = picBox.ClientRectangle; bmp = new Bitmap( rect.Width, rect.Height ); using( Graphics g = Graphics.FromImage( bmp ) ) { g.InterpolationMode = InterpolationMode.HighQualityBilinear; g.DrawImage( baseImg, rect ); } picBox.Image = bmp; // the call to Invalidate() that you have is not needed. } sw.Stop(); return TimeSpan.FromMilliseconds( sw.ElapsedMilliseconds );}private void picBox_Click( object sender, EventArgs e ){ int iterations = 100; var time = RunTest( iterations ); MessageBox.Show( String.Format( Elapsed: {0}, Average: {1}, time.TotalMilliseconds, time.TotalMilliseconds / iterations ) );}I then performed another test using the default InterpolationMode. Under a release build that gave me the following results:320x2401. High quality interpolation: Total: 3122ms, Average: 31.22ms, FPS: 322. Default interpolation: Total: 2165ms, Average: 21.65ms, FPS: 46640x4801. High quality interpolation: Total: 3963ms, Average: 39.63ms, FPS: 252. Default interpolation: Total: 2256ms, Average: 22.56ms, FPS: 44Are you saying that 32 frames per second is not good enough for your application? Is 42 ok and is it alright to remove the high quality rendering? You need to provide more details. You can't expect us to optimize your code with no target requirement in mind. Also, as you can see, your claim that the performance is better with large images does not appear to be true, though the difference without high quality bilinear rendering is negligible.If you can forego the Bilinear interpolation you can simply set the StretchMode of the PictureBox to Stretch and just set the Image property to the Frame property of the event args object and be done with it. This will be significantly faster. |
_codereview.20596 | I've inherited a class which has a bunch of properties defined as:public int ID{ get { return m_nOID; } set { m_nOID = value; }}public int ConxnDetail{ get { LoadDetails(); return m_nConxnDetail; } set { m_nConxnDetail = value; }}public bool ConxnConstraints{ get { LoadDetails(); return m_bConxnConstraints; } set { m_bConxnConstraints = value; }}public bool IsShowCables{ get { LoadDetails(); return m_bIsShowCables; } set { m_bIsShowCables = value; }}public int DefaultQueue{ get { LoadDetails(); return m_nDefaultQueue; } set { m_nDefaultQueue = value; }}private User LoadDetails(){ if (!IsLoaded && !IsNew) { Logger.InfoFormat(Loading user details. UserID: [{0}], m_nOID); User user = RemoteActivator.Create<RemoteUser>().GetUserById(m_nOID); if (user != null) { CopyDetails(user); MarkLoaded(); } else ClearDetails(); } return this;}I would absolutely love to clean up the conventions here a bit and use only automatic getters/setters. Is this possible to do? If not, any other suggestions on reducing the boilerplate needed? | Calling tries to lazy load in every properties getter | c#;design patterns | There is the INotifyPropertyChanged event handler, but I'm not sure why you would need that in this case? Instead why not just load the details when the ID changes as that seems to be the only value that will change between LoadDetails calls.Perhaps something like:public int ID{ get { return m_nOID; } set { if(m_NOID != value) { InitialiseUserDetails(m_nOID); } m_nOID = value; }}If this was ok, you could remove the private backing fields as I believe you wish to do. |
_unix.149781 | Let's say I have a program food in /bin/ that creates the file dinner.txt like so:#!/bin/bashtouch dinner.txtI want to use my food program to create dinner.txt in my /home/ folder without calling cd /home/ first. In other words, I want to call food as if it were called from /home/. How would I do this?Edit: Assume I can't edit the actual food program. | How do I call an executable as if it were called from another folder? | bash | (cd ~ && /bin/food)This launches it in a subshell. |
_codereview.138179 | I have been messing around with practicing some c++ making a little console game, and I would like to receive some advice and tips on how to improve my code.Some of the code is not done, meaning that some of the functions are not done and some are not called but I have tested it and it does all works.Some of my concerns are: I don't know if I used dynamic memory allocation properly.I am not sure if I should be using pointers in this.I am not sure if I am following all the standards I should be.Player.h#ifndef PLAYER_H#define PLAYER_H#include <string>#include <vector>using namespace std;class Player{public: void DisplayPlayerInv(vector<string>Inv); //Gets the players inv //void AddPlayerInv(vector<string>Inv); void PlyrArmor(int Armor); //Players armor string PlyrName(string name); //Gets and displays the players name vector<string>Inv;private: vector<string*>pInv; // vector of strings to store items in inventory vector<string>::const_iterator it; // iteraotr for inv int health; int gold; int Armor; string name;};#endifPlayer.cpp#include <iostream>#include <string>#include <vector>#include Player.husing namespace std;void Player::DisplayPlayerInv(vector<string>Inv) // goes through the players inventory{ for (it = Inv.begin(); it < Inv.end(); ++it) { cout << *it << endl; }}void Player::PlyrArmor(int Armor) // gets the players armor{ //Add stuff here later}string Player::PlyrName(string name) // players name{ return name;}store.h#ifndef SHOPS_H#define SHOPS_H#include <iostream>#include <string>#include Player.husing namespace std;class BaseStore: public Player{public: virtual void SellItems() = 0; // function to sell items virtual void BuyItems() = 0; // function to buy items virtual int SMenu(int choice, char LChoice, bool InStore) = 0; // is the main menu for the store //virtual string ListItems() = 0; // lists the items in the storeprotected: int choice; // used for choices inside the store char LChoice; // used for Y or N inside the store static bool InStore;};class Store1 : public BaseStore{public: virtual void SellItems(); // function to sell items virtual int SMenu(int choice, char LChoice, bool InStore); // used for store menu virtual void BuyItems(); // function to buy items //virtual string LiteItems() = 0; // lists the items inside the store};#endifStore.cpp#include <iostream>#include Store.h#include Player.h#include <string>#include <vector>using namespace std;void Store1::SellItems() // sell items menu{ &Player::DisplayPlayerInv;}void Store1::BuyItems() // buy items menu{}int Store1::SMenu(int choice, char LChoice, bool InStore) // stores menu{ InStore = true; while (InStore == true) { cout << Hello welcome to the store what would you like to do?\n\n; cout << 1. Buy Items\n; cout << 2. Sell Items\n; cout << 3. leave\n; cin >> choice; if (choice == 1 && LChoice != 'N' && LChoice != 'n') { cout << would you like to buy items? (Y/N)\n; cin >> LChoice; BuyItems(); } else if (choice == 2 && LChoice != 'N' && LChoice != 'n') { cout << Would you like sell items? (Y/N)\n; cin >> LChoice; SellItems(); } else if (choice == 3 && LChoice != 'N' && LChoice != 'n') { cout << Would you like to leave the store? 
(Y/N)\n; cin >> LChoice; return 0; InStore = false; } else { cout << You Have Made a invalid Choice; } } return 0;}Enemey.h#ifndef ENEMEY_H#define ENEMEY_H#include <string>using namespace std;class Enemey{public: virtual int BasicEnemeyStats(int health, int gold, int Attack, int Armor) = 0; // pure virtual function for the enemey stats virtual string EnemeyName(string name) = 0; // gets the enemey nameprotected: int health; int Attack; int gold; int Armor; string name;};//Enemey classesclass Theif : public Enemey{public: virtual int BasicEnemeyStats(int health, int gold, int Attack, int Armor); virtual string EnemeyName(string name);protected: int Attack; int Armor;};class Troll : public Enemey{public: virtual int BasicEnemeyStats(int health, int gold, int Attack, int Armor); // Enemey Stats virtual string EnemeyName(string name); // gets the Enemeys Nameprotected: int Attack; int Armor;};//Boss Classesclass Boss : public Enemey{public: virtual int BasicEnemeyStats(int health, int gold, int Attack, int Armor); // Enemey stats int DamageMulti(int Attack); // Multiplys damage based on the players armor or attack virtual string EnemeyName(string name); // enemey's nameprotected: int Armor; int Attack;};#endifEnemey.cpp#include <iostream>#include Enemey.h#include <string>using namespace std;// Boss Functionsint Boss::BasicEnemeyStats(int health, int gold, int Attack, int Armor) // Bosses stats { health = 100; // place holder for now gold = 0; Armor = 50; Attack = 50; return (health, gold, Armor, Attack);}int Boss::DamageMulti(int Attack) // damage multiplyer{ int DamageMulti = Attack * 2; //(PlyrArmor) this is what needs to be changed to divide by 2 from player armor stat return DamageMulti;}string Boss::EnemeyName(string name) // gets the enemey name{ return name;}//Enemey Functionsstring Theif::EnemeyName(string name){ return name;}int Theif::BasicEnemeyStats(int health, int gold, int Attack, int Armor) { health = 100; gold = 0; Attack = 20; Armor = 5; return (health, Attack, Armor);}string Troll::EnemeyName(string name){ return name;}int Troll::BasicEnemeyStats(int health, int gold, int Attack, int Armor){ health = 100; gold = 0; Attack = 30; Armor = 12; return (health, Attack, Armor);}World.h#ifndef WORLD_H#define WORLD_H#include <iostream>#include <string>#include <vector>using namespace std;class World // class for the base game basicly the Parent of every other class in the game{public: void Story(int SChoice, bool NameSelectDone); // is for the story int SChoice stands for Story Choice void SetChoice(char LChoice); // Sets SChocie bool NameSelect(bool NameSelectDone, string* plyrName); // Name Set function void SaveGame(); // save game functions void GameOver(bool IsGameOver, bool NameSelectDone); // Game over function int QuitGame(); // quit game function void DisplayControls();private: int SChoice; char LChoice; string* plyrName; bool NameSelectDone; bool PlayAgain; bool IsGameOver; bool PlayerAlive;};#endifWorld.cpp#include <iostream>#include World.h#include Enemey.h#include Player.h#include Store.h#include <string>#include <fstream>#include stdlib.husing namespace std;void World::Story(int SChoice, bool NameSelectDone)// Is the story for the game { Player* p1 = new Player;// player obj Store1* s1 = new Store1; // store obj Theif* t1 = new Theif; bool PlayerAlive = true; // sees if the player is alive NameSelectDone = true; // Name selection variable IsGameOver = false; // Game is not over by default LChoice;// Yes or no character while (NameSelectDone == true && IsGameOver == false 
&& LChoice != 'Q' && LChoice != 'q') { cout << Before we begin at anytime if you type 'S' you can save the game, also at anytime if you type 'Q' you can quit the game.\n; cout << Also your choices will afect the outcome of the story so pick wisely\n; cout << If you use the Letter 'C' you can display all the controls.\n; cout << Type any letter to continue....\n; cin >> LChoice; system(cls); cout << While travling with a group of warriors from the kingdom of 'PlaceHolder'\n; cout << It began to storm as you hear a schreech so loud that it could shatter glass.\n; cout << You look into the stormy clouds and see the sky has been torn and evil forces from the darkness start to pour into the land.\n; cout << Creatures so dark that men begin to reek of piss and other things.\n You look down and see that your land and everyone you have ever known or loved is being slaughterd\n\n; cout << And burned to the ground.\n; cout << While in shock you hear loud stomps and see a wall of darkness coming towards you. the warriors you are with are turned to ash with out any effort.\n; cout << The wall of darkness approches you and then everything fades to black.........................\n\n; cout << endl; cout << Type any letter to continue....\n; cin >> LChoice; system(cls); // clears the screen cout << You wake up after being blacked out, you proceed to stand up and look into the land and see that forests, villages, and farms are burned to the ground.\n; cout << while starring in disbelif you hear a lound shreek you have two paths.\n; cout << 1. Head up the mountains.\n; cout << 2. Head down the path towards the town.\n; cin >> SChoice; // select choice used for numbers if (SChoice == 1) { cout << Type any letter to continue....\n; system(cls); cout << You Choose to head to the mountains....\n; cout << You begin to head to the mountains, on your way a bear spots you in a cave.\n; cout << But something is diffrent...\n; cout << This bear appears to be made of darkness, pure evil.\n << The bear begins to charge you, you have two choices.\n\n; cout << 1. Fight the bear\n; cout << 2. Run for your life\n; cin >> SChoice; switch (SChoice) { case 1: { system(cls); cout << The bear proccedes to charge you, you draw your fists quickly....\n; cout << Type any letter to continue....\n; cin >> LChoice; cout << The bear smashes into you proceding to rip you to shreds.\n; cout << Now with your legs ripped of and your guts and intestines spilling from your torso, you vision fades to black and you die....\n; IsGameOver = true; system(cls); GameOver(NameSelectDone, IsGameOver); break; } case 2: { system(cls); cout << You choose to run for you life.\n; cout << While running you trip over a rock and die.\n; IsGameOver = true; system(cls); GameOver(NameSelectDone, IsGameOver); break; } } } else if (SChoice == 2) { cout << You start walking down the path and you see dead bodies\n.; cout << There is gear from a soilder on the ground what you like to do with it?\n; cout << 1. Pick It Up\n; cout << 2. 
keep walking\n; cin >> SChoice; if (SChoice == 1) { cout << \n\n(Just a quick note if you wish to show what is in your inv type 'I'\n; cout << Basic Armor Equiped\n; p1->Inv.push_back(Basic Armor); cout << Basic Sword Equiped\n; p1->Inv.push_back(Basic Sword); } else if (SChoice == 2) { system(cls); cout << you keep walking and die...\n; } system(cls); cout << I have no idea on what to add so i will just call a bunch of things for practaice.\n\n; cout << Type a letter to continue...\n; cin >> LChoice; system(cls); cout << What would you like to do?\n; cout << 1. Store; cout << 2. call enemeys(stats not working)\n; cout << 3. End game\n\n; cin >> SChoice; switch (SChoice) { case 1: { bool InStore = true; s1->SMenu(SChoice, LChoice, InStore); break; } case 2: { t1->EnemeyName(Boogie); t1->BasicEnemeyStats(100, 0, 10, 10); break; } case 3: { QuitGame(); break; } } } if (LChoice == 'Q' || LChoice == 'q') // Quit game { QuitGame(); NameSelectDone = false; } if (LChoice == 'S' || LChoice == 's') // Save game { SaveGame(); } if (LChoice == 'I' || LChoice == 'i' || SChoice == 'I' || SChoice == 'i') { p1->DisplayPlayerInv(p1->Inv); } if (LChoice == 'C' || LChoice == 'c' || SChoice == 'C' || SChoice == 'c') { DisplayControls(); } delete p1; delete s1; delete t1; }}void World::SetChoice(char LChoice) // sets the players choice{}bool World::NameSelect(bool NameSelectDone, string* plyrName) // Player name select loops untill user enters Y/N{ Player p1; char LChoice; while (NameSelectDone == false) { cout << Hello welcome to the world of PlaceHolder\n; cout << please enter in your name:; cin >> *plyrName; cout << \nIs: << p1.PlyrName(*plyrName) << Your name? (Y/N); cin >> LChoice; if (LChoice == 'N' || LChoice == 'n') { NameSelectDone = false; } else { NameSelectDone = true; } } return (NameSelectDone);}void World::SaveGame() // Save the game{ cout << Saving the game...\n; //Add save function cout << Game has been saved\n;}int World::QuitGame() // Quit and saves the game aswell{ cout << If you havent saved the game i will save it for you; SaveGame(); cout << Thank you for playing; return 0;}void World::GameOver(bool IsGameOver, bool NameSelectDone) // Game over function{ cout << You have died would you like to play again ? (Y/N)\n; cin >> LChoice; if (LChoice == 'Y' || LChoice == 'y') { IsGameOver = false; NameSelectDone = true; cout << Game is ready to restart...\n << (Type any letter to continue)\n; cin >> LChoice; system(cls); Story(NameSelectDone, SChoice); } else if (LChoice == 'N' || LChoice == 'n') { cout << Quitting game...\n; QuitGame(); }}void World::DisplayControls(){ cout << At any time you can type these letters to do certain things.\n\n; cout << Q or q exits the game\n; cout << S or s Saves the game\n; cout << I or i Shows what you have in your inventory\n; cout << C or c Shows the controls menu\n; cout << Type any letter to continue...\n; cin >> LChoice; system(cls);} | Object-oriented console adventure | c++;beginner;console;c++14;adventure game | code level commentsnice looking code, well structured, some good classes, some comments +++, good namesdont do using namespace std. It pollutes your namespace too much. Just get used to saying std::string etc.dont repeat part of you naming when already in a named scope. 
class Player{public: void DisplayPlayerInv(vector<string>Inv); //Gets the players inv //void AddPlayerInv(vector<string>Inv); void PlyrArmor(int Armor); //Players armor string PlyrName(string name); //Gets and displays the players name should beclass Player{public: void DisplayInv(vector<string>Inv); //Gets the players inv //void AddInv(vector<string>Inv); void Armor(int Armor); //Players armor string Name(string name); //Gets and displays the players name adopt a member variable naming standard. I have been shot down for this one, but I still think most people think its valuable. I use m_, the other common one is xxx_. Thus Player becomes:private:int health_;int gold_;int Armor_;string name_;};that vector of pointers to strings looks very suspiciousvector<string*>pInv;I am sure you dont need it, just use vector<string> (or std::vector<std::string>), string will do the right thingHaving an iterator as a member variable is also very odd. They are normally transient things in functions.vector<string>::const_iterator it; // iteraotr for invIn general dont put explicit IO directly in the classes. Ievoid Player::DisplayPlayerInv(vector<string>Inv) // goes through the players inventory{ for (it = Inv.begin(); it < Inv.end(); ++it) { cout << *it << endl; }}if you decide you want to display the output a different way you are stuck. Better to pass in a stream to write to or a function to invoke. I would (in this case) simply return the inventory as a read only list so that I can do what I want with it.be consistenat with names. It makes a huge difference with 10 times as much code, get into good habits int health; int gold; int Armor;why is Armor upper case?. Typically upper case names mean functions or propertiesThis is odd string Player::PlyrName(string name) // players name { return name; }I assume its because you intend(ed?) to have 2 functions one to read the name (with no args) and one to set it (with one arg) , this is a hybrid of the 2Next one look odd too. class BaseStore: public Player {A Store (place where I can buy and sell stuff) is a type of Player? I dont think so. A Player goes to a store to buy and sell things, or at least thats what I expect to see. This smells odd |
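To make a couple of those review points concrete, a small illustrative sketch (shortened names as suggested above; it is not a drop-in replacement for the posted classes):

#include <string>
#include <vector>

class Player {
public:
    // read-only view of the inventory; the menu/World code decides how to print it
    const std::vector<std::string>& inventory() const { return inventory_; }
    void addItem(const std::string& item) { inventory_.push_back(item); }

    const std::string& name() const { return name_; }
    void setName(const std::string& n) { name_ = n; }

private:
    std::vector<std::string> inventory_;   // plain strings, no pointers needed
    std::string name_;
    int health_ = 100;
    int gold_   = 0;
    int armor_  = 0;
};

Keeping all cout/cin in the World and menu code and letting Player (and the enemies) only hold data removes the need for the iterator member and the per-call name parameters, and makes the classes straightforward to test.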
_cstheory.17041 | Consider a 2D grid, and a given planar graph $G$ with $\Delta<4$ (max node degree) and without odd cycles. What conditions should $G$ satisfy so that when it is mapped (or embedded) into the 2D grid, the adjacency of the nodes is maintained (i.e., all adjacent nodes in $G$ remain adjacent in the 2D grid). Accordingly, after embedding of $G$ in the 2D grid, the shortest path distance between adjacent nodes is still 1. The alternative question is what is the condition for a given planar graph (with $\Delta<4$ and w/o odd cycles) to be a 2D grid?Thanks! | Adjacency-Preserving 2D Grid Embedding | graph theory;cg.comp geom;embeddings | Embedding planar graphs (with max degree four) in an adjacency-preserving way onto a grid is NP-complete, meaning that there's unlikely to be simple necessary and sufficient conditions. Actually that's still true even for embedding trees into a grid. See:S. Bhatt and S. Cosmodakis. The complexity of minimizing wire lengths in VLSI layouts.Inform. Proc. Lett. 25:263267, 1987. |
_codereview.98262 | This function conditionally copies and pastes information from one sheet to another based on the column headings to standardize data before exported to SQL database.I am not an expert in VBA and started learning but when I run this code, it takes way too long to process huge data (ex: Excel sheet with 70k rows would take like 2 minutes to 5 minutes). Can anyone make a suggestion to make it time-efficient?Option ExplicitPublic Sub projectionTemplateFormat() Dim t1 As Double, t2 As Double xlSpeed True t1 = Timer 'On Error Resume Next mainProcess 'On Error GoTo 0 t2 = Timer xlSpeed False MsgBox Duration: & t2 - t1 & secondsEnd SubPrivate Sub mainProcess() Const SPACE_DELIM As String = Dim wsIndex As Worksheet Dim wsImport As Worksheet 'Raw Dim wsFinal As Worksheet Dim indexHeaderCol As Range Dim msg As String Dim importHeaderRng As Range Dim importColRng As Range Dim importHeaderFound As Variant Dim importLastRow As Long Dim finalHeaderRng As Range Dim finalColRng As Range Dim finalHeaderRow As Variant Dim finalHeaderFound As Variant Dim header As Variant 'Each item in the FOR loop Dim lastRow As Long 'Manual Headers based on the number of rows in the raw data Dim rngs As Range Set wsIndex = aIndex 'This is the Code Name; top-left pane: aIndex (Index) Set wsImport = bImport 'Direct reference to Code Name: bImport.Range(A1) Set wsFinal = cFinal 'Reference using Sheets collection: ThisWorkbook.Worksheets(Final) Set rngs = ThisWorkbook.Sheets(DATA).Cells lastRow = rngs.Find(What:=*, After:=rngs.Cells(1), Lookat:=xlPart, LookIn:=xlFormulas, SearchOrder:=xlByRows, SearchDirection:=xlPrevious, MatchCase:=False).Row + 1 '+ 1 is added for dummy row in the final sheet; otherwise it won't copy the last row in the import sheet 'Static Data(Account Level information) wsFinal.Range(D3:D & lastRow).Value = Sheets(INDEX).Range(H2).Value wsFinal.Range(AD3:AD & lastRow).Value = Sheets(INDEX).Range(H3).Value wsFinal.Range(X3:X & lastRow).Value = Sheets(INDEX).Range(H4).Value wsFinal.Range(Y3:Y & lastRow).Value = Sheets(INDEX).Range(H5).Value wsFinal.Range(AF3:AF & lastRow).Value = Sheets(INDEX).Range(H6).Value wsFinal.Range(AG3:AG & lastRow).Value = Sheets(INDEX).Range(H7).Value wsFinal.Range(AE3:AE & lastRow).Value = Sheets(INDEX).Range(H8).Value wsFinal.Range(F3:F & lastRow).Value = Sheets(INDEX).Range(H9).Value wsFinal.Range(AC3:AC & lastRow).Value = Sheets(INDEX).Range(H10).Value 'Claim Type wsFinal.Range(E3:E & lastRow).Value = AB With wsImport.UsedRange Set importHeaderRng = .Rows(1) 'Import - Headers importLastRow = .Rows.Count + 1 'Import - Total Rows; + 1 is for taking into consideration of the dummy row in the final sheet End With With wsFinal.UsedRange finalHeaderRow = .Rows(1) 'Final - Headers (as Array) Set finalHeaderRng = .Rows(1) 'Final - Headers (as Range) End With With wsIndex.UsedRange 'Transpose col 3 from Index (without the header), as column names in Import Set indexHeaderCol = .Columns(3).Offset(1, 0).Resize(.Rows.Count - 1, 1) wsImport.Range(wsImport.Cells(1, 1), wsImport.Cells(1, .Rows.Count - 1)).Value2 = Application.Transpose(indexHeaderCol) End With If Len(aIndex.Cells(1, 1).Value2) > 0 Then 'if index cell (1,1) is not empty With Application For Each header In finalHeaderRow 'Loop through all headers in Final If Len(Trim(header)) > 0 Then 'If the Final heade is not empty importHeaderFound = .Match(header, importHeaderRng, 0) 'Find header in Import sheet If IsError(importHeaderFound) Then msg = msg & vbLf & header & SPACE_DELIM & wsImport.Name 'Import doesn't have 
current header Else finalHeaderFound = .Match(header, finalHeaderRng, 0) 'Find header in Final sheet If IsError(finalHeaderFound) Then msg = msg & vbLf & header & SPACE_DELIM & wsFinal.Name 'Import doesn't have current header Else With wsImport Set importColRng = .UsedRange.Columns(importHeaderFound).Offset(1, 0).Resize(.UsedRange.Rows.Count - 1, 1) End With With wsFinal Set finalColRng = .Range(.Cells(3, finalHeaderFound), .Cells(importLastRow, finalHeaderFound)) 'Change 3 to 2 if the dummy row is not included finalColRng.Value2 = vbNullString 'Delete previous values (entire column) End With finalColRng.Value2 = importColRng.Value2 'Copy Import data in Final columns End If End If End If Next header End With ConvertToUppercase extractYearsDim i As Long For i = 3 To lastRow If Not (wsFinal.Cells(i, Q).Value <= 2015 And wsFinal.Cells(i, Q).Value >= 1910) Then With wsFinal .Cells(i, Q).ClearContents End With End If Next i Dim j As Long For j = 3 To lastRow If Not (wsFinal.Cells(j, R).Value <= 2015 And wsFinal.Cells(j, R).Value >= 1910) Then With wsFinal .Cells(j, R).ClearContents End With End If Next j wsFinal.Columns(G).NumberFormat = @ wsFinal.Columns(I).NumberFormat = MM/DD/YYYY wsFinal.Columns(K).NumberFormat = MM/DD/YYYY wsFinal.Columns(A).NumberFormat = @ wsFinal.Columns(B).NumberFormat = @ wsFinal.Columns(C).NumberFormat = @ 'wsFinal.Columns(R).NumberFormat = @ 'wsFinal.Columns(Q).NumberFormat = @ wsFinal.Columns(J).NumberFormat = @ wsFinal.Columns(L).NumberFormat = @ wsFinal.Columns(T).NumberFormat = MM/DD/YYYY wsFinal.Columns(W).NumberFormat = MM/DD/YYYY wsFinal.Columns(V).NumberFormat = MM/DD/YYYY wsFinal.Columns(AD).NumberFormat = MM/DD/YYYY wsFinal.Columns(N).NumberFormat = _($* #,##0.00_);_($* (#,##0.00);_($* -??_);_(@_) wsFinal.Columns(AN).NumberFormat = _($* #,##0.00_);_($* (#,##0.00);_($* -??_);_(@_) wsFinal.Columns(AO).NumberFormat = _($* #,##0.00_);_($* (#,##0.00);_($* -??_);_(@_) wsFinal.Columns(AP).NumberFormat = _($* #,##0.00_);_($* (#,##0.00);_($* -??_);_(@_) 'wsFinal.Columns(AQ).NumberFormat = General applyFormat wsFinal.Range(wsFinal.Cells(2, 1), wsFinal.Cells(importLastRow, wsFinal.UsedRange.Columns.Count)) Dim ws As Worksheet For Each ws In Worksheets ws.Select ActiveWindow.Zoom = 90 Next ws Else MsgBox Missing raw data (Sheet 2 - 'Import'), vbInformation, Missing Raw Data End IfEnd SubFunction 2 Private Sub extractYears() Dim arr As Variant, i As Long, j As Long, ur As Range, colW As Long, colV As Long Set ur = cFinal.UsedRange '3rd sheet If WorksheetFunction.CountA(ur) > 0 Then colW = colNum(Q) colV = colNum(R) arr = ur 'transfer sheet data to memory For i = 3 To getMaxCell(ur).Row 'each row If Len(arr(i, colW)) > 0 Then 'if not empty If Len(arr(i, colW)) > 4 Then 'if it's full date (longer than 4 digits) arr(i, colW) = Format(arr(i, colW), yyyy) 'extract the year part End If End If 'if it contains 4 digit year leave it as is If Len(arr(i, colV)) > 0 Then 'the same logic applied for colV If Len(arr(i, colV)) > 4 Then arr(i, colV) = Format(arr(i, colV), yyyy) End If End If Next ur = arr 'transfer memory data back to sheet End If End SubFunction 2Private Sub applyFormat(ByRef rng As Range) With rng '.ClearFormats With .Font .Name = Georgia .Color = RGB(0, 0, 225) End With .Interior.Color = RGB(216, 228, 188) With .Rows(1) .Font.Bold = True .Interior.ColorIndex = xlAutomatic End With With .Borders .LineStyle = xlDot 'xlContinuous .ColorIndex = xlAutomatic .Weight = xlThin End With End With refit rngEnd SubSub ConvertToUppercase() Dim ws As Object Dim LCell As Range 
'Move through each sheet in your spreadsheet On Error Resume Next ActiveWorkbook.Sheets(FINAL).Activate 'Convert all constants and text values to proper case For Each LCell In Cells.SpecialCells(xlConstants, xlTextValues) LCell.Formula = UCase(LCell.Formula) NextEnd Sub | Transferring information between sheets based on column headings | vba;excel | null |
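One pattern that usually dominates run time on sheets this size, offered as a sketch under the assumption that wsFinal and lastRow are as in the code above: touch the worksheet once per block instead of once per cell. The two year-validation loops read and clear cells one row at a time; reading the block into a Variant array, fixing it in memory and writing it back once is typically orders of magnitude faster.

Dim v As Variant, i As Long
With wsFinal
    v = .Range("Q3:R" & lastRow).Value2        ' one read from the sheet
    For i = 1 To UBound(v, 1)                  ' loop over the in-memory array
        ' assumes the cells hold numbers or are empty, as the original loops do
        If Not (v(i, 1) >= 1910 And v(i, 1) <= 2015) Then v(i, 1) = Empty
        If Not (v(i, 2) >= 1910 And v(i, 2) <= 2015) Then v(i, 2) = Empty
    Next i
    .Range("Q3:R" & lastRow).Value2 = v        ' one write back
End With

The same idea applies to ConvertToUppercase and extractYears; together with the ScreenUpdating/Calculation switches that xlSpeed presumably already toggles, it removes most of the per-cell overhead.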
_webmaster.32699 | I submitted my site to Technorati but they tell me that they cannot read from my feed URL. The feed looks fine to me and it has worked fine on other sites. Any ideas if there is a special requirement for Technorati? | Feeds not working on Technorati | feeds | I would contact Technorati, let them know you have a W3C-valid feed (http://validator.w3.org/appc/), and ask why they can't read it. You are submitting your feed URL to them (http://www.startupsandfinance.com/feed/) and not your FeedBurner URL, right? Did you put the confirmation code into your feeds?
_unix.147359 | On some embedded linux systems, there is no /etc/passwd file and /etc directory is not writable (which means I cannot set user account and password?). Then what is my (default) account name and password, or how to set account and password? I need to get account and password on embedded linux to run ssh server, which requires user account and password for ssh login. | How to check my account on embedded linux without /etc/passwd? | linux;ssh;password;embedded;dropbear | If there is no /etc/passwd, then your embedded system is not running what is usually known as the Linux system, but rather a different operating system which is also based on the Linux kernel. A famous example of an operating system which uses on the Linux kernel but is not Linux is Android. Android doesn't have user accounts (at least not in its basic usage) and repurposes users to isolate applications rather than accounts.Such embedded systems are generally not meant to have user accounts. They have at most a control console, which is solely intended for administration and allows running commands as root. There may be authentication in some form, or the fact that you have physical access may be considered sufficient authentication. There's no general rule there, you have to know (or find out) how your system is designed.If you want to connect with SSH, you'll have to supply credentials to the SSH servers. |
_cstheory.38841 | I am a new student in America and this is my senior (12th grade) year; please help answer my questions quickly. What is the difference between the SAT and ACT? - Should I take them both or just one? - Which one do colleges and universities recommend or require? - Are the subjects or classes students take this year in school meant to prepare them for the SAT, ACT, or AP exams? - Will the college or university I am applying to ask for my school test scores? - If yes, what weight do they give the SAT score versus the school test scores? - If I did badly on school tests, does this affect my college application, or does it not count? - When should I start applying to colleges or universities? - In the Middle East I took SAT 2 in Math and got 500; do colleges here count it? - I haven't done any volunteering yet; is it required when applying to a college? | Please help answer the questions fast | cc.complexity theory;sat;arithmetic circuits | null
_softwareengineering.108848 | Why don't many code review tools seem to be syntax aware or provide more in-depth analysis of changes? Is it simply too hard to do? I find this to be a major hole in most programmers' toolkits. From what I have seen, which admittedly is not much, code review tools just compare code line by line, with many of them not even being able to do syntax highlighting. Is there a solution out there that is smart enough to offer file-level, method-level code review/comparison? One of the simple problems I have is that methods get re-ordered in code and my code review software breaks down completely, but such tools should be able to do so much more. I'm interested in others' opinions and knowledge on the topic of code review/comparison tools. | Why don't many code review tools seem to be syntax aware or provide more in-depth analysis of changes? | code reviews;comparison | null
_webapps.104640 | How can I SUM a variable number of Rows, where the start and end points will be changing constantly? In the image below I would like to be able to add all the values for each choice together. However there may be rows added in as needed(As you can see Wednesday has more rows than Thursday). If possible I'd also like to be able to show how many bookings there were for that day using a variation of the below formula:=CONCATENATE(COUNTA(B2:B13), Bookings)Example Sheet | SUM a variable number of Rows between two points | google spreadsheets | I think I've come up with something (Whenever I post a question I seem to instantly come up with my own solution!). Using the following: =SUM(D$19:INDIRECT(ADDRESS(ROW()-1,COLUMN())))I can add as many rows as I want between the start and end points and it will SUM correctly. The D$19 will update itself to whatever that cell becomes(D$20, D$21, D$22, etc). The INDIRECT(ADDRESS(ROW()-1,COLUMN())) will always get the cell above itself, so it will SUM everything above itself. Is there a better way to do this? I know that the only problem I have right now is if a row is added before 10:00am it will not be counted in the formula. |
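A small variation on that self-answer, assuming the layout of the example sheet: the same "sum everything above me" range can be written without the volatile INDIRECT/ADDRESS pair as =SUM(D$19:INDEX(D:D,ROW()-1)), which still survives inserted rows because INDEX returns a real cell reference. The booking count can be anchored the same way instead of using a fixed range like B2:B13.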
_unix.39106 | I'm trying to access www.belgacom.be using Firefox from my Archlinux system and I get the following error: "Firefox can't find the server at www.belgacom.be". When I use Chromium, it works perfectly. I tried deleting the ~/.mozilla directory, with no success. I've also tried running Firefox from another user session, with no luck. I'm running the latest version of Firefox. Any ideas? | Cannot access a website from Firefox | arch linux;firefox;chrome | null
_webmaster.32979 | Our website is insynchq.com. In the All Pages report under Content -> Site Content we can only see data for some of our pages, like /, /getstarted, and /download. Others, like /gmail, /about, and /mobile are not shown, even if we are sure that there have been visits to them. We use a template for our pages, so the scripts that are loaded for / (for example) should also be loaded for /gmail; it therefore doesn't seem to be a problem with the installation of the tracking code. Can anyone help? Thanks. | Google Analytics is not tracking all of our pages | google analytics | null
_unix.137347 | I need to fill all files with a specific filename (recursively) with a text.In zsh this can be done withecho SomeText > **/TheFileNameI search generic solutions for sh-compatible and/or tcsh shells.Is there a shorter/easier way than the following command?find . -name TheFileName -print0 | xargs -0 sed -n -i -e '1c\nSomeText' | zsh's echo SomeText >> **/filename in other shells | shell;files;find;xargs;replace | find . -name TheFileName -type f -exec sh -c 'for i do echo SomeText > $i; done' sh {} + |
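For completeness, a bash-only variant of the zsh one-liner (hedged: it assumes bash 4 or newer for globstar; the find answer above stays the most portable option):

shopt -s globstar                      # make ** recurse like zsh
for f in **/TheFileName; do
    [ -f "$f" ] && printf 'SomeText\n' > "$f"
done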
_codereview.63181 | I wrote a speech enhancement code for an Android App. The algorithm runs on 256 size frames of voice samples. On my PC the code runs per about 5ms per frame, while on my Nexsus 5 it more like 50ms per frame, making the speech enhancement on a 30sec long recording run for over two minutes. On my PC, enhancing the whole audio file takes under a minute. I am looking to improve run time anywhere I can. I am attaching the code that runs per frame which is where I see biggest increase in run time.for(int j = 0; j < Consts.NUM_OF_CLUSTERS; j++){ tmp_log_h=0; for(int k = 0; k < Consts.SIZE_OF_SEMPELS_IN_CHUNCK; k++) { f_X = (1/FastMath.sqrt(2*FastMath.PI*sigmaSqr.get(j).get(k)))*FastMath.exp(-(FastMath.pow(NoisySignal[k] - mue.get(j).get(k),2))/(2*sigmaSqr.get(j).get(k)));// equation (3) F_X=(0.500 + 0.5*Erf.erf((NoisySignal[k] - mue.get(j).get(k))/(FastMath.sqrt(2*sigmaSqr.get(j).get(k)))));// equation (6) if(j==0) { g_pdf_Y[k]=1/FastMath.sqrt(2*FastMath.PI*Noise_sigmaSqr.get(k))*FastMath.exp(-(FastMath.pow(NoisySignal[k] - Noise_mu.get(k),2))/(2*Noise_sigmaSqr.get(k)));// equation (4) G_cdf_Y[k]=(0.500 + 0.5*Erf.erf((NoisySignal[k] - Noise_mu.get(k))/(FastMath.sqrt(2*Noise_sigmaSqr.get(k)))));// equation (5) } tmp_R_samples = (f_X/F_X);// equation (12.5) if(j==0) { R_noise[k]=g_pdf_Y[k]/G_cdf_Y[k]; // equation (12.5) } Roe[j][k] = 1/(1 + (R_noise[k]/tmp_R_samples)); X_hat[j][k] = NoisySignal[k]*Roe[j][k]+ (mue.get(j).get(k))*(1 - Roe[j][k]) - (sigmaSqr.get(j).get(k))*((f_X*R_noise[k])/(f_X + F_X*R_noise[k])); tmp_h_first=(f_X*G_cdf_Y[k] + F_X*g_pdf_Y[k]);// equation (7.5) tmp_log_h += FastMath.log(tmp_h_first); // equation (7.5) } Log_h_second[j]=tmp_log_h; Log_h_final_vecotr[j]=FastMath.log(c.get(j)) + Log_h_second[j];}Double maxVi = MathHelpers.findMax(Log_h_final_vecotr);double sumExp = 0.0;for(int j = 0; j < Consts.NUM_OF_CLUSTERS; j++){ sumExp +=(FastMath.exp(Log_h_final_vecotr[j] - maxVi));// equation (18) System.out.println(sumExp);}Log_h_final = maxVi + FastMath.log(sumExp);for(int j = 0; j < Consts.NUM_OF_CLUSTERS; j++){ Log_q[j]=FastMath.log(c.get(j)) + Log_h_second[j] - Log_h_final; q[j]=(FastMath.exp(Log_q[j]));// equation (10)}for(int k = 0; k < Consts.SIZE_OF_SEMPELS_IN_CHUNCK; k++){ Double temp_X_estimate = 0.0; for(int j = 0; j < Consts.NUM_OF_CLUSTERS; j++) { temp_X_estimate += (X_hat[j][k]*q[j]); } X_estimate[k]=temp_X_estimate;// equation (9)} | Performance of speech enhancement code for Android app | java;performance;android;audio;signal processing | null |
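Two things in this per-frame loop tend to dominate on Android: every mue.get(j).get(k) / sigmaSqr.get(j).get(k) call unboxes a Double, and several subexpressions that depend only on j and k are recomputed for every frame. A hedged sketch of the usual fix - copy the model parameters into primitive arrays once when the model is loaded and hoist the constants out of the hot path (numClusters, frameSize and the array names here are assumed, not taken from the app):

// one-time conversion, outside the per-frame processing
double[][] mu        = new double[numClusters][frameSize];
double[][] variance  = new double[numClusters][frameSize];
double[][] norm      = new double[numClusters][frameSize];  // 1/sqrt(2*pi*var)
double[][] invTwoVar = new double[numClusters][frameSize];  // 1/(2*var)
for (int j = 0; j < numClusters; j++) {
    for (int k = 0; k < frameSize; k++) {
        mu[j][k]        = mue.get(j).get(k);
        variance[j][k]  = sigmaSqr.get(j).get(k);
        norm[j][k]      = 1.0 / FastMath.sqrt(2.0 * FastMath.PI * variance[j][k]);
        invTwoVar[j][k] = 1.0 / (2.0 * variance[j][k]);
    }
}

// inside the frame loop, equation (3) then becomes cheap:
double d   = NoisySignal[k] - mu[j][k];
double f_X = norm[j][k] * FastMath.exp(-d * d * invTwoVar[j][k]);

Replacing FastMath.pow(x, 2) with x * x helps a little as well; after that, Erf.erf and FastMath.exp are usually what a profiler shows at the top, and a precomputed lookup table with interpolation is the common trade-off on phones if full precision is not required.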
_softwareengineering.257996 | I have code reviewed a piece of Python code, but to me it looks really ugly, hacky and complex for something that can be achieved very easily.The code looks something similar to the following:_extra.py (this is supposed to be a private module because of the underscore):import dependencyclass Extra(dependency.Cls): ...setup.pysetup( ... name='somepkg', extras_require={ ... 'extra': ['dependency']})public.py_dist = pkg_resources.working_set.find( pkg_resources.Requirement.parse('somepkg'))if 'extra' in _dist: from _extra import ExtraSo that the extras_require can be tested using:class ExtraTest(unittest.TestCase): ... # Some test cases where the requirement actually is available def test_require_extra(self): dist = mock.Mock(pkg_resources.Distribution) self.dist.extras = [] public = helper.import_or_reload('public') with self.assertRaises(AttributeError): public.ExtraMy initial thought would be to not have _extra.py exist and have public.py look like:try: import dependencyexcept ImportError passelse: class Extra(dependency.Cls): ...I don't really care that it isn't tested that it can't be imported if it is not in the extras_require.Does the original code seem like a logical solution?Should extras_require be tested in any way? If so, how? | Is there a cleaner way to test extras_require | python;unit testing;testing;dependency management | If this is all done just to make it possible to test the Extra class, then the code is overcomplicating matters. The code is asking for permission rather than for forgiveness.You are quite correct, changing the code to catch the ImportError exception would vastly simplify the codebase. The code shouldn't even try to test the detection, but simply rely on Python's own unit tests to have tested throwing an ImportError properly. At best the unittest should verify what happens if extra is not set (perhaps by the code setting extra = None if the dependency is missing, testing is then as easy as mocking extra = None). |
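A sketch of what the forgiveness version and a test for the missing-dependency case could look like (module names follow the question; the sys.modules entry set to None is what forces the ImportError without uninstalling anything):

# public.py
try:
    import dependency
except ImportError:
    Extra = None                      # the optional feature is simply absent
else:
    class Extra(dependency.Cls):
        ...

# test_public.py
import importlib
import sys
from unittest import mock

def test_extra_absent_without_dependency():
    # a None entry in sys.modules makes "import dependency" raise ImportError
    with mock.patch.dict(sys.modules, {'dependency': None}):
        import public
        public = importlib.reload(public)
        assert public.Extra is None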
_unix.288214 | After rebooting computer I can't log in to desktop as anything else than root. If I try with my normal username and correct password the screen blinks and returns without any message or anything.Q: How do I get back to the desktop as a normal user?Here's what I did before this happened:I'm using Intel onboard graphics and while browsing software in Synaptic I found the xserver-xorg-video-intel driver. It says:The use of this driver is discouraged if your hw is new enough (ca. 2007 and newer). You can try uninstalling this driver and let the server use it's builtin modesetting driver instead.So I uninstalled it.During the same session I also went to the terminal and ran Xorg -configure. And as SU I issued startx and ended up in a new GUI session for root, which I didn't really intended.Before that I also tried to setmode 1920x1080 for my monitor, which failed and I had to accept 1600x1200(Using Debian 8)EDIT: Noticed now that if I at the Display Manager press CTRL+ALT+F1 and jump to terminal and log in as my usual user and issue: startx, nothing seems to happen, but hitting CTRL+C 4-5 times sends me into the desktop for my user. Logging out and trying to log in the normal way still doesn't work. | How do I log in to debian 8 desktop as a normal user? | debian | I think you will see errors in journalctl, or possibly ~/.xsession-errors.Something like what you describe might cause a permissions error. Try running the command chown -R user:user ~user/.??* as root. It should fix permissions on all hidden files used to configure your session. May take some time if that includes a large Thunderbird^WIcedove cache. |
_webapps.73198 | I am an Ubuntu user with Google Chrome 39.0 and whenever I enter a Google Spreadsheet-document (with Chrome), I get a warning header saying that This version of Firefox is no longer supported, please upgrade to a supported browser.What have I done wrong? How come Google thinks I'm using Firefox when I'm using their own browser? I have not enabled emulation in the developer tools.When checking my user-agent at a 3rd party website, it says Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120427 Firefox/15.0a1. | Google spreadsheet thinks Chrome is Firefox | google chrome;google spreadsheets | null |
_unix.273868 | I know that wget have some options to display or not the progress bars.I would like to display the wget transfer in a more short way, or.. a percent or something dynamic but not taking so much space like the classic wget output because i have to insert it into a script with already it's output.My goal is to display to the user that the file is being downloaded, maybe even it's speed but not ruining too much the overall look of the script.Important: My wget passages are inside an if then else script, so I have to retain the error detection functionality.My script have a block that looks like:if wget -O filename http://someurl then some_actionelse some_other_actionfiCan someone provide me some funny examples of customized progress data or bars with this refinements? Thank you ;) | Display wget transfer in a more compact way (while keeping the error detection functionality) | shell script;wget | If you use the dot progress output style, which is something like this: 500K .......... .......... .......... .......... .......... 2% 496K 91sthen you can pipe this (which is on stderr) into awk or similar and just print the 2% field shown in the last-2 column.wget ... --progress=dot -q --show-progress 2>&1 |awk 'NF>2 && $(NF-2) ~ /%/{printf \r %s,$(NF-2)} END{print \r }'This shows you the changing percent value on one line, which is cleared at the end.To preserve the return code of wget for an if..else you can ask bash to make pipelines return the error code of any command that failed (instead of just the rightmost command) by setting in the script:set -o pipefailAlternatively, you could put all of the if..else..fi code unchanged inside a block and pipe the stderr at the end into a single more informative awk such as suggested by cas in the comments:( if wget ... fi if wget ... fi if wget ... fi) 2>&1 | awk '/^Saving to:/ { fn = gensub(/^Saving to: /,,1) }NF>2 && $(NF-2) ~ /%/ { printf \r%s %s,fn,$(NF-2) }END { gsub(/./, ,fn); print \r fn }'or to avoid missing important error messages on stderr, just redirect the stderr on each wget command to a 3rd file descriptor:( if wget ... 2>&3 then ... else ... fi if wget ... 2>&3 then ... else ... fi if wget ... 2>&3 then ... else ... fi) 3>&1 | awk ... |
_unix.181369 | How and where does this Gnome applet get weather information? Same question for sunrise and sunset times. I suppose there is a web API it queries but which one and can I use it?(sorry for the screenshot in french) | How does Gnome clock/calendar applet get weather, sunset and sunrise time information? | gnome;api | gnome-weather uses libgweather underneath which in turn uses several GWeatherProviders (defined in gweather-weather.h) to get weather information for your particular geo-location: * GWeatherProvider:.... * @GWEATHER_PROVIDER_METAR: METAR office, providing current conditions worldwide * @GWEATHER_PROVIDER_IWIN: US weather office, providing 7 days of forecast * @GWEATHER_PROVIDER_YAHOO: Yahoo Weather Service, worldwide but non commercial only * @GWEATHER_PROVIDER_YR_NO: Yr.no service, worldwide but requires attribution * @GWEATHER_PROVIDER_OWM: OpenWeatherMap, worldwide and possibly more reliable, but requires attribution and is limited in the number of queries....You could look into the source code and see how they do it:weather-metar.c,weather-iwin.c,weather-yahoo.c,weather-yrno.c,weather-owm.c. See also weather.cSunrise and sunset times are computed in weather-sun.c |
_unix.354634 | Since I have problems with my Internet connection I'm not able to always run commandapt-get install xdotool in Terminal, so I would like to download the xdotool package manually from website in .zip or .deb format and then install it manually every time I need it. (I'm using Usb Live Kali Linux 2016.2-amd64 and every time I reboot it deletes all files).I've tried to download xdotool from https://github.com/jordansissel/xdotool at the right side in green box Clone or download there is option download ZIP. After that I extract all files in Home folder and then open it in Terminal.in README file there are instructions like this: See the website for more up-to-date documentationhttp://www.semicomplete.com/projects/xdotool/ or the manpage listed below.Compile: make Install: make install Remove: make uninstall You may have to set 'PREFIX' to the location you want to install to. The default PREFIX is /usr/localFor packagers, there's also support for DESTDIR for staged install.I type in make then make install and it always outputs me this error: root@kali:~/xdotool-master# makecc -pipe -O2 -pedantic -Wall -W -Wundef -Wendif-labels -Wshadow -Wpointer-arith -Wbad-function-cast -Wcast-align -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wnested-externs -Winline -Wdisabled-optimization -Wno-missing-field-initializers -g -std=c99 -I/usr/X11R6/include -I/usr/local/include -fPIC -c xdo.cxdo.c:29:34: fatal error: X11/extensions/XTest.h: No such file or directory #include <X11/extensions/XTest.h> ^compilation terminated.Makefile:124: recipe for target 'xdo.o' failedmake: *** [xdo.o] Error 1 root@kali:~/xdotool-master# make installinstall -d /usr/localcc -pipe -O2 -pedantic -Wall -W -Wundef -Wendif-labels -Wshadow -Wpointer-arith -Wbad-function-cast -Wcast-align -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wnested-externs -Winline -Wdisabled-optimization -Wno-missing-field-initializers -g -std=c99 -I/usr/X11R6/include -I/usr/local/include -fPIC -c xdo.cxdo.c:29:34: fatal error: X11/extensions/XTest.h: No such file or directory #include <X11/extensions/XTest.h> ^compilation terminated.Makefile:124: recipe for target 'xdo.o' failedmake: *** [xdo.o] Error 1root@kali:~/xdotool-master# What I'm doing wrong?Can you suggest me other methods of installation of xdotool (but without Internet connection)? | Install problem for xdotool | terminal;software installation;kali linux;xdotool | null |
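Two separate things seem to be going on, noted as a hedged suggestion since it cannot be verified offline: the compile stops because the XTest development header is missing (on Debian-based systems it lives in libxtst-dev, and later build steps would also want libx11-dev and friends), not because of anything wrong in the xdotool sources; and for a live Kali stick it is usually easier to fetch the prebuilt packages once and reinstall them after each boot than to rebuild from source every time:

# on any machine with network access (same architecture and release):
apt-get download xdotool libxdo3

# after each boot of the live system, from the folder holding the .deb files:
dpkg -i libxdo3_*.deb xdotool_*.deb
# if dpkg reports further missing libraries, download those .debs the same way

Keeping the .deb files on the USB stick next to the live image makes the reinstall a single command even without a connection.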
_codereview.160719 | I have two sliders that I can select using my keyboard. The black bar shows what currently is selected. It looks for current_position_in_option_buttons and positions itself as needed. When selected on either the sound or music option you can use the left and right arrows to change the setting. As you can see I have this hacky way to change each setting. if current_position_in_option_buttons == 0: change sound settingif current_position_in_option_buttons == 1: change music settingIs there a better way to do this? I'm wondering what I would do if there are hundreds of options to change. The current approach would lead to hundreds of if statements.global.gd:var sound_volume = 50 setget set_sound_volume, get_sound_volumevar music_volume = 50 setget set_music_volume, get_music_volumeoptions_menu.gd:var current_position_in_option_buttons# option slidersvar option_sliders = []var option_sliders_list_sizefunc change_slider_value_to_right(): if option_sliders[current_position_in_option_buttons].get_type() == HSlider: var cur_val = 0 if current_position_in_option_buttons == 0: #change the volume of sound cur_val = global.get_sound_volume() global.set_sound_volume(cur_val + 5) #TODO make 5 a const value if current_position_in_option_buttons == 1: #change the volume of music cur_val = global.get_music_volume() global.set_music_volume(cur_val + 5) #TODO make 5 a const value option_sliders[current_position_in_option_buttons].set_value(cur_val + 5) #TODO make 5 a const valueI have taken out all superfluous code. If you need more code let me know and I'll post it. | List of options to change in a game menu using a list | gdscript | null |
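A data-driven sketch of the same idea, with hypothetical option entries (STEP and the global getters/setters are assumed to exist as in the posted code): each option row carries its own getter and setter, so a hundred options means a hundred table rows rather than a hundred if-branches.

const STEP = 5

var options = [
    {"name": "sound", "getter": "get_sound_volume", "setter": "set_sound_volume"},
    {"name": "music", "getter": "get_music_volume", "setter": "set_music_volume"},
]

func change_slider_value(direction):
    var opt = options[current_position_in_option_buttons]
    var value = global.call(opt["getter"]) + direction * STEP
    global.call(opt["setter"], value)
    option_sliders[current_position_in_option_buttons].set_value(value)

change_slider_value(1) and change_slider_value(-1) then replace the separate left/right functions.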
_webmaster.57562 | We are about to launch two product pages plus a corporate website. The goal is to keep a blog in all of the sites, but here it comes the question about how to do it in a way we get everything unified but do not mess with Google's web crawlers.We considered the following options:Putting a blog from which we retrieve two categories with custom CSS,so we have a blog that sub splits two category-dependent blogs; thisway we can get the feeds and will point to itPutting two product blogs of which we retrieve their posts into a bigger, corporate blogPutting three independent blogsDespite I was for the first option, so we only have to address our content from the product pages, I would sincerely like to hear your opinion. We are afraid duplicate content or strange link games may make us lose PageRank. How would you do it? | How do I optimize SEO in a multiblog WordPress install? | seo;wordpress;blog;optimization | null |
_softwareengineering.283561 | If one has a 32 bit machine, a single program cannot address more than 2^32 bytes, or 4 GB. Would making use of mmap() allow one to exceed the 4 GB limit? | Memory Limit of a Single Program and mmap | linux;memory | No, you can never exceed 4GiB of simultaneously addressable memory for a 32-bit binary. Usually, the kernel takes half and you are left with 2GiB user. Some kernels support a compromise split of 1GiB/3GiB.However, you can ask the OS to map different portions of a file into memory at different times, essentially performing time multiplexing of the available address space. IMHO, at that point you might as well not memory map anything and just read() from the file into buffers. |
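To make the time-multiplexing idea concrete, a minimal sketch of a 32-bit program walking a file far larger than its address space through a sliding window (error handling trimmed; the 256 MiB window size is an arbitrary page-aligned choice):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#define WINDOW (256UL * 1024 * 1024)      /* multiple of the page size */

int process_file(const char *path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    off_t size = lseek(fd, 0, SEEK_END);
    for (off_t off = 0; off < size; off += WINDOW) {
        size_t len = (size - off) < (off_t)WINDOW ? (size_t)(size - off) : WINDOW;
        void *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
        if (p == MAP_FAILED) { close(fd); return -1; }
        /* ... work on len bytes starting at p ... */
        munmap(p, len);                   /* release the window before mapping the next one */
    }
    close(fd);
    return 0;
}

On a 32-bit build this needs large-file support (_FILE_OFFSET_BITS=64) so that off_t can express offsets beyond 4 GiB; at any moment only the current window, never the whole file, occupies address space.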
_softwareengineering.211056 | Now that I've gotten into a dependency injection groove, I find main methods for different applications all look basically the same. This is similar to stuff you might find on the Guice documentation, but I'll put an example here:public class MyApp { public static void main(String... args) { Injector inj = Guice.createInjector(new SomeModule(), new DatabaseModule() new ThirdModule()); DatabaseService ds = inj.getInstance(DatabaseService.class); ds.start(); SomeService ss = inj.getInstance(SomeService.class); ss.start(); // etc. }}For multiple applications I have main methods that all look just like that. But now, lets say I want to make a brand new app, that, say, reuses my DatabaseModule. I pretty much have to copy and paste the main method and make appropriate changes... I view copy-paste as a code smell. Further, lets say I realize I should probably be putting shutdown hooks in, now I have to go through and change every main method in all my applications to attach shutdown hooks to each service.Is there some good way to template this process and minimize the boilerplate? | Main method templating | java;dependency injection;templates;entry point | Nothing about the code example in your question suggests the need for automation. Every Main method is going to be different.Of course, if you had a database, and you were writing data access code, you might realize that a lot of this code looks almost exactly the same (CRUD methods), and you might be tempted to write a code generator that reads your database table schemas and generates classes that correspond to the entities in each table (customers, for example). You would then have the equivalent of an Object-Relational Mapper, arguably a worthy pursuit.Some programming environments such as Visual Studio have the ability to store code snippets, and even come with a template code generator (T4). But the point of automation is to save time. Consult this chart first, before writing the template:http://xkcd.com/1205/ |
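If the copy-paste itself becomes the pain point, the shared part can be factored into a tiny launcher rather than a template - sketched here with an assumed Service interface (start/stop) that DatabaseService and friends would implement, which also gives a single place for the shutdown hooks mentioned in the question (Java 8 syntax):

public final class Bootstrap {
    public static void run(List<Class<? extends Service>> services, Module... modules) {
        final Injector inj = Guice.createInjector(modules);
        final List<Service> started = new ArrayList<>();
        for (Class<? extends Service> type : services) {
            Service s = inj.getInstance(type);
            s.start();
            started.add(s);
        }
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            for (int i = started.size() - 1; i >= 0; i--) {
                started.get(i).stop();        // stop in reverse start order
            }
        }));
    }
}

// each application's main method then shrinks to:
public static void main(String... args) {
    Bootstrap.run(Arrays.asList(DatabaseService.class, SomeService.class),
                  new SomeModule(), new DatabaseModule(), new ThirdModule());
}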
_webapps.99171 | My users confuse the Save button with Submit when they are only part way through the form. Can I change the button text to something clearer, such as Save & finish later? For now I have turned the feature off because of the confusion, and I had to follow up with users. | Can I change text on Save button for Save & Resume Feature | cognito forms | null
_webmaster.86957 | I'm wondering if the removal of an exact-match keyword could cause rankings to drop on Bing and Yahoo but not on Google. A website my company manages recently experienced a dramatic loss in rankings for several keywords across Bing and Yahoo. After doing a little research, I realized that the rankings dropped around the same time that we amended the website verbiage to be broader. Before the change, the website contained several instances of the keyword hog hunting. Due to a change in strategy, we amended many of these instances to hunting. However, the keyword hog is still mentioned quite often on the website in other contexts, including in the company name and website URL. My working theory is that Google has not dropped our rankings because the engine understands the context around the keywords, and because hog and hunting are both mentioned throughout the website, although not together, it can still infer that they are related. However, because Bing and Yahoo are not as advanced as Google, the context is not understood, and the rankings dropped with the loss of the exact-match keywords. Can anyone confirm that this could be the cause of the drop in rankings? Any opinions would be appreciated! | Could the removal of exact match keywords cause a drop in rankings on Bing and Yahoo but not Google? | seo;ranking;serps;bing;yahoo | null
_cs.78092 | Let $L$ be the following problem: $$L = \{ \langle M,x,1^t \rangle \mid \text{$M$ accepts $x$ after $t$ steps with probability at least $3/4$} \}.$$ Show that if $L\in BPP$ then $NP\subseteq BPP$. I already showed that $L$ is $BPP$-hard. I think a reduction from $3SAT$ to $L$ needs to be shown (logarithmic or polynomial? I'm not sure). It's not clear to me how to do that; I'd be glad for guidance. | if $L\in BPP$ then $NP\subseteq BPP$ | complexity theory | This is similar to the proof that PP contains NP. Given a 3SAT instance $\varphi$ on $n$ variables, construct a machine $M$ that with probability $3/4 - 2^{-n}/4$ accepts outright, and with probability $1/4 + 2^{-n}/4$ chooses a random truth assignment and accepts if that assignment satisfies $\varphi$. If $\varphi$ has $N$ satisfying assignments, then $M$ accepts with probability $$p = \frac{3}{4} - \frac{2^{-n}}{4} + \frac{N}{2^n} \left(\frac{1}{4} + \frac{2^{-n}}{4}\right).$$ If $N = 0$ then $p < 3/4$, whereas if $N \geq 1$ then $p \geq 3/4 + 2^{-2n}/4$. I'll let you fill in the rest of the argument.
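For completeness, here is one way to finish; the polynomial bound $p(n)$ and the packaging of the reduction are my own reading of the intended construction, not spelled out in the answer. Since $M$ only flips polynomially many coins and evaluates $\varphi$ on a single assignment, it runs within $p(n)$ steps for some polynomial $p$. Map $\varphi$ to the triple $\langle M, \varphi, 1^{p(n)} \rangle$, where $M$ is the machine above built from $\varphi$. By the case analysis above, $$\varphi \in 3SAT \iff \langle M, \varphi, 1^{p(n)} \rangle \in L,$$ so computing this triple (a deterministic, polynomial-time reduction) and running the assumed $BPP$ algorithm for $L$ on it decides $3SAT$ with bounded error, giving $3SAT \in BPP$ and hence $NP \subseteq BPP$.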
_unix.297360 | In a Jenkins pipeline script, I am using the checkout function with class GitSCM, as in the following example: checkout( [ $class: 'GitSCM', branches: [[name: '*/<branch>']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [ [credentialsId: '<id>', url: '<url>'] ] ]) On the Stage View for this pipeline, for each build, the Jenkins UI shows me the number of new commits since the previous build. I would like to abort the build if there are 0 new commits. How should I do that? Is there a way to get that information from the checkout function? | Abort pipeline build if no new commits | jenkins | null
_unix.109468 | When I run groups in Ubuntu without my username (delliott@delliott:/var/www$ groups), it prints delliott wheel, i.e. only two of the groups I'm in. When I run groups delliott, it prints delliott : delliott wheel webusers, i.e. all three groups I'm in. whoami returns delliott, which I expect, and id delliott returns uid=1006(delliott) gid=1007(delliott) groups=1007(delliott),1001(wheel),1010(webusers). Why doesn't it list all three groups when I just run groups with no username? | ubuntu groups not returning all my groups | ubuntu | Group membership is updated at login time. Maybe one of your shells was opened before you made the corresponding changes to the groups and hasn't properly reloaded yet. Specifically, the delliott session in the first example appears to be outdated (you added your user to the webusers group and have not reloaded the session since).
_unix.68684 | I am working on a project on two different machines: one running Mac OS X 10.8.3 and one running Red Hat Enterprise Linux. On my Mac, I can do this: vim $(ls -R */*.@(h|cpp) */*/*.@(h|cpp)) and everything works fine. On the Linux box, it fails. All of these work exactly as I expect: ls -R */*.@(h|cpp) */*/*.@(h|cpp); echo $(ls -R */*.@(h|cpp) */*/*.@(h|cpp)); export myfilelist=$(ls -R */*.@(h|cpp) */*/*.@(h|cpp)); echo $myfilelist But vim $(ls -R */*.@(h|cpp) */*/*.@(h|cpp)) produces a set of mangled filenames, e.g. ^[[00m^[[00mevent_builder/include/eb_service.h^[[00m [New DIRECTORY] or: ls 1 %a ^[[00m^[[00mevent_builder/include/eb_service.h^[[00m line 1 2 ^[[00mevent_builder/include/EventBuilder.h^[[00m line 0 Does anyone know why? | starting vim with command substitution | bash;ls;wildcards;command substitution | You have an alias (or function) for ls that colorizes the output. What does type -a ls give you? Instead use vim $(command ls ...). However: don't parse ls. Try shopt -s nullglob globstar and then printf '%s\n' **/*.{h,cpp}
_unix.356202 | Is there a way to assign a swap space or swap file to JUST one process or one group of processes, while other processes still use the normal swap mechanism? Here is some context: I have a process that uses a huge amount of memory (larger than physical memory), and I want to direct the swapped pages owned ONLY by this process to a swap file that I create on an SSD. I am using Linux, and I am open to using containers if that helps with the solution. | Have a private swap file per process | linux;swap;virtual memory | null
_unix.188756 | Is there a way that stdin can 'hop' over a process? For example, in the following command, cat file | ssh host 'mkdir -p /some/directory && cat > /some/directory/file' this will send the stdin from the first cat to mkdir, and the second cat will receive no stdin. I want the stdout from the first cat to hop over mkdir and only be sent to the second cat. I am aware that you can run something like cat file | ssh host 'cat > /tmp/file2 ; mkdir -p /some/directory && mv /tmp/file2 /some/directory/' but that only works when copying a file, or cat file | ssh host 'tee >(mkdir -p /some/directory) >/some/directory/file' but that only works because the mkdir command does not use stdin. Is there a command that will execute another command and replicate this functionality? Something like cat file | ssh host 'stdinhop mkdir -p /some/directory | cat > /some/directory/file' where stdinhop would not send its stdin to mkdir, but redirect it to stdout so the second cat can read it? | stdin 'hop' over process? | shell;io redirection | You can redirect the first command's stdin from /dev/null: anthony@Watt:~$ echo -e 'hello\nworld' | ssh localhost 'cat < /dev/null && cat -n' 1 hello 2 world The lines are numbered, so the second cat got them. If you are not using ssh in there, you'd use a subshell: echo -e 'hello\nworld' | ( cat < /dev/null && cat -n )