id | question | title | tags | accepted_answer
---|---|---|---|---|
_codereview.100358 | I'm trying to practice some OO PHP and I'm just wondering if what I'm doing is okay or not. Please tell me if there are any alarms in my method so that I can stop doing it and learn a better way. In my conditional statement, I was following the rule of no else keyword, based on this article by William Durand, which talks about Object Calisthenics. I also don't know if I'm using the abstract properly, being that I learned it just a few minutes ago - but it works, so I suppose it's being used correctly.

<?php

abstract class Homework {
    protected $coursesTaken = 0;
    protected $minimumCourses = 10;

    abstract function completedOrNot();
}

class HistoryHomework extends Homework {
    public function __construct($coursesTaken) {
        $this->coursesTaken = $coursesTaken;
    }

    public function completedOrNot() {
        if ($this->coursesTaken >= $this->minimumCourses) {
            return "You completed the course in " . $this->coursesTaken . " classes, but you only needed " . $this->minimumCourses . " classes to complete the course! \n\n";
        }
        return "Sorry, you did not complete this course. You only took " . $this->coursesTaken . " classes, and you need a minimum of " . $this->minimumCourses . " to pass. \n\n";
    }
}

$student1 = new HistoryHomework(11);
echo $student1->completedOrNot(); // You have completed the course in 11 classes, but you only needed 10 classes to complete the course!

$student2 = new HistoryHomework(7);
echo $student2->completedOrNot(); // Sorry, you did not complete this course. You only took 7 classes, and you need a minimum of 10 to pass.

| History homework class | php | null |
_unix.19292 | I'm running Ubuntu. I need to know any way to obfuscate .sh shell script file contents in order to make it very difficult to read. Any suggestions are welcome, including using an online obfuscator. | Is there any way I can obfuscate .sh shell script? | shell script | This really depends on who you are trying to prevent from reading the script and what resources you are expecting the system to have. One option is to simply use many different programs to do different parts of your script: shell, awk, sed, perl, etc., as well as lots of obscure parameters of tools, forcing the reader to constantly refer to man pages. Even within a shell, you can create unnecessary functions and variables, making them interdependent in confusing ways. And, of course, give them misleading names. More complicated, you can append binary data to the end of your shell script and have the script extract and execute the binary. I believe nVidia's Linux drivers and Sun's JDK are installed this way (the binary data is an RPM, which the shell extracts and installs). Another example I just downloaded the other day is the soapUI program. In that vein, it is possible to have a text file that can be compiled or interpreted in multiple languages, so it could start as a shell script, compile itself as a C program and execute the result. The IOCCC has some examples. |
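The append-binary-data trick mentioned in the answer above can be sketched as a tiny self-extracting wrapper. This is a hedged illustration, not how the nVidia or Sun installers actually work; the `__PAYLOAD__` marker, the `/tmp` paths, and the sample payload are all invented for the demo (real installers would embed actual binary data and then run what they extract):

```shell
# Build a script whose tail is embedded data, then run it.
cat > /tmp/selfextract.sh <<'EOF'
#!/bin/sh
# Everything after the __PAYLOAD__ marker line is embedded data.
PAYLOAD_LINE=$(awk '/^__PAYLOAD__$/ {print NR + 1; exit}' "$0")
tail -n +"$PAYLOAD_LINE" "$0" > /tmp/extracted.bin
echo "payload extracted to /tmp/extracted.bin"
exit 0
__PAYLOAD__
EOF
# Append the "binary" payload (plain text standing in for real data).
printf 'binary-data-goes-here' >> /tmp/selfextract.sh
sh /tmp/selfextract.sh
```

Because the script `exit 0`s before the marker, the shell never tries to interpret the payload; an installer would follow the extraction step with something like `rpm -i /tmp/extracted.bin`.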
_unix.381766 | I found that sometimes a DHCP server from behind my main router answers DHCP requests from clients in the LAN. Below is an example: the notebook is connected to the LAN via an access point. There is a DHCP server running on the LAN at pfSense. There is also a DHCP server running on Router3. Sometimes, the notebook receives an address from Router3. The question is: how can this be, and how do I patch this breach? In my firewall I have a rule for 192.168.100.0/24 to pass so that I could open the router web GUI from any client. But I don't want its DHCP server serving... UPDATE 1: Probably I was wrongly blaming the Router3 device. I found another device that could provide DHCP services: I also have a multimedia player with built-in AP functionality. I turned it on some time ago, but it didn't work as I expected, so I forgot about it. It has an undocumented DHCP server at 192.168.100.1 inside. I deduced this from the MAC address of that fake DHCP server, whose first bytes are the same as this device's. Now I have turned the AP off and will see how it behaves. | How to prevent DHCP server from behind the router to answer to DHCP requests? | freebsd;routing;dhcp;pfsense | null
_codereview.48129 | I had a rather large method in one public class that I refactored into 2 helper classes. The thing is, though, that those 2 helper classes have dependencies. I refactored them into helper classes so I could mock and test them easily, which worked out perfectly. However, I don't want to have to register my helper classes in the DI container, because I know the public class will always be using those specific implementations. This is how I implemented the public class' constructors:

/// <summary>
/// Internal constructor used by tests for mocking.
/// </summary>
internal TranslationCompiler(ITranslationCatalogTransformer translationCatalogTransformer, ICompiledCatalogTransformer compiledCatalogTransformer)
{
    if (compiledCatalogTransformer == null)
        throw new ArgumentNullException("compiledCatalogTransformer");
    if (translationCatalogTransformer == null)
        throw new ArgumentNullException("translationCatalogTransformer");

    _translationCatalogTransformer = translationCatalogTransformer;
    _compiledCatalogTransformer = compiledCatalogTransformer;
}

/// <summary>
/// Public constructor that passes dependencies to concrete implementations of helper classes.
/// </summary>
public TranslationCompiler(IResourceService resourceService, ITranslationSerializer serializer)
{
    if (resourceService == null)
        throw new ArgumentNullException("resourceService");
    if (serializer == null)
        throw new ArgumentNullException("serializer");

    _translationCatalogTransformer = new TranslationCatalogTransformer(resourceService);
    _compiledCatalogTransformer = new CompiledCatalogTransformer(serializer);
}

Is this an acceptable use of poor man's DI? This way, the DI container only has to know the actual dependencies for the public class to work, while still being very testable. | Using Poor Man's DI to inject helper class dependencies | c#;design patterns;dependency injection | null
_webmaster.89486 | There is a product that consists of multiple services that are hosted on different domains. One of the services is used to host users' packages, which are available through the URL service.productdomain.com/packageid. Only direct links are used to access the packages, so the service doesn't have a homepage, and requests to service.productdomain.com/ will be 302 redirected to productdomain.com/ (the URL of the main product). Is it OK for search engines? Will they request service.productdomain.com/robots.txt and the sitemap (which is defined in robots.txt), and index other pages that are specified in the sitemap? I can also create a homepage with some general information and meta tags if the existing approach doesn't work for search crawlers. | SEO and 302 redirect from homepage to another domain | seo;redirects;homepage | null
_unix.360564 | In OSX, the following command removes patterns but also affects parts of whole words: sed -e $(sed 's:.*:s/&//g:' /path/to/wordsToRemove.txt) /path/to/sourceFile.txt > outFile.txt. wordsToRemove.txt contains the words it and for (one per line). sourceFile.txt contains: it was green forever for candy. outFile.txt contains: was green ever candy. The word forever is matched and has been changed to ever although I wanted to match the word for on its own, not as part of forever. Is it possible to avoid this? | sed file comparison | sed;regular expression | null
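One common fix for questions like the one above (offered here as a hedged sketch, not an accepted answer from the thread) is to anchor each generated substitution with word boundaries when building the sed program. This exact form assumes GNU sed, which supports `\<` and `\>`; the BSD/macOS sed the poster is using would need `[[:<:]]` and `[[:>:]]` instead. The `/tmp` file paths reproduce the question's sample data:

```shell
# Recreate the sample inputs from the question.
printf 'it\nfor\n' > /tmp/wordsToRemove.txt
printf 'it was green forever for candy\n' > /tmp/sourceFile.txt

# Wrap each word in \< \> so "for" no longer matches inside "forever";
# quoting the command substitution keeps the multi-line script intact.
sed -e "$(sed 's:.*:s/\\<&\\>//g:' /tmp/wordsToRemove.txt)" /tmp/sourceFile.txt
```

With the boundaries in place, the standalone words it and for are removed while forever is left intact (only the spacing around the removed words changes).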
_codereview.74879 | I coded an executable program (.exe) that I only want run either from my home computer, our main server, or people in our development team. I have coded logic that will only allow the program to be run from certain IP addresses. The .txt file that's referenced in the getIPlist function looks like this:

[123.456.78.90] = Main Server
[77.34.555.392] = My computer at home
[333.455.3.3] = Assistant coder

And the HTTP address that the parseAllowableIPs function points to is a PHP file that is simply coded: <?php echo $_SERVER['REMOTE_ADDR']; ?>. This code is working great (except for a bit of memory not being freed), so a review or some efficiency tips are welcome.

bool compareIPs(std::string IPlist, std::string userIP)
{
    const char *U = userIP.c_str();
    char str[20];
    const char *goodList[25] = { '\0' };
    std::string uIP = "";
    while (*U)
    {
        if (atoi(U))
        {
            uIP += itoa(atoi(U), str, 10);
            while (*U && *U != '.')
                *U++;
            if (*U && *U == '.')
                uIP += ".";
        }
        if (*U)
            *U++;
    }
    const char *L = IPlist.c_str();
    int count = 0;
    std::string thisIP = "";
    while (*L)
    {
        switch (*L)
        {
        case '[':
            *L++;
            while (*L && *L != ']')
            {
                thisIP += *L;
                *L++;
            }
            goodList[count] = _strdup(thisIP.c_str());
            thisIP = "";
            count++;
            break;
        default:
            break;
        }
        if (*L && *L != '[')
            *L++;
    }
    std::string comp = "";
    // Now check to see if the user's IP matches any in the goodList[]
    for (int a = count; a >= 0; a--)
    {
        if (!goodList[a])
            continue;
        comp = goodList[a];
        if (!uIP.compare(comp))
        {
            // I need to free() what was _strdup()'d
            // but this causes the program to crash.
            //free(&goodList[a]);
            return true;
        }
    }
    return false;
}

#pragma comment(lib, "WinInet.Lib")

std::string getIPlist()
{
    HINTERNET hInternet, hFile;
    DWORD rSize;
    char buffer[1024];
    hInternet = InternetOpen(NULL, INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    hFile = InternetOpenUrl(hInternet, "http://www.myWebServer.com/ip/authorizedIPlist.txt", NULL, 0, INTERNET_FLAG_RELOAD, 0);
    InternetReadFile(hFile, &buffer, sizeof(buffer), &rSize);
    buffer[rSize] = '\0';
    InternetCloseHandle(hFile);
    InternetCloseHandle(hInternet);
    std::string result = buffer;
    return result;
}

bool parseAllowableIPs()
{
    // Grab the list of allowable IP's and store it in a string
    // to be parsed in the compareIPs() function.
    std::string allowableIPlist = getIPlist();
    HINTERNET hInternet, hFile;
    DWORD rSize;
    char buffer[1024];
    hInternet = InternetOpen(NULL, INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    hFile = InternetOpenUrl(hInternet, "http://www.myWebServer.com/ip/index.php", NULL, 0, INTERNET_FLAG_RELOAD, 0);
    InternetReadFile(hFile, &buffer, sizeof(buffer), &rSize);
    buffer[rSize] = '\0';
    InternetCloseHandle(hFile);
    InternetCloseHandle(hInternet);
    std::string result = buffer;
    if (!compareIPs(allowableIPlist, result))
        return false;
    return true;
}

int main()
{
    //... blah blah blah
    // If not an authorized IP, just exit the game.
    if (!parseAllowableIPs())
    {
        MySQL__disconnect();
        exit(588924);
    }
}

| Allow certain IP addresses to run a C++ program | c++;security;networking;windows;authorization | null
_webapps.44216 | Is there a way to learn the time of a notification? For example, on Facebook, when someone comments on my status or tags me in some photo, I receive an instant notification, and when I open my notification window it says 2 hours ago or a few seconds ago, etc. Is there such a feature on G+ as well? | Google Plus when did I get a notification? | google plus;notifications | Certainly if you're receiving email notifications you'll have a timestamp on the email message. Some of the notifications (depending on what they are) do have a date/time on them. You may need to go to View all notifications to see that (or at least click the notification itself). I think, though, that this isn't practical in G+. Notifications for the same item get rolled up into one notification record. (For instance, I see a notification on a post of mine telling me that three people have +1'd it and one person has shared it. Obviously they didn't all do it at the same precise moment.) |
_codereview.23163 | I want to check if the length of a phone number is appropriate for a specified country (let's consider that only some countries have a restriction, and other countries accept phone numbers of various lengths). I have a Map where the correct pairs are defined, so this map can be used as a reference in the condition:

public static ErrCode checkStatePhoneLen(final String state, final String phoneNo) {
    String stateTmp = state.trim();
    String phoneTmp = phoneNo.trim();
    Integer phoneLen = new Integer(phoneTmp.length());
    if (statePhoneNoMap.containsKey(stateTmp)
            && !phoneLen.equals(statePhoneNoMap.get(stateTmp))) {
        return ERROR;
    }
    return SUCCESS;
}

My questions are: Is it better to use temporary variables or direct usage of an already existing object? I can just use state.trim() instead of creating the variable stateTmp, and so on. I think that the advantages of the solution with temporary variables are better readability and debugging, but the disadvantages are the effort of creating a new variable at runtime (or is that somehow optimized by the compiler?) and more lines of code (but I weight the readability factor more than the number-of-lines factor). Is it better to check if the map contains the key and then compare, or to get the value for the given key and then check that it is not null and compare? As in the following example:

Integer definedLen = (Integer) statePhoneNoMap.get(stateTmp);
if (definedLen != null && !definedLen.equals(phoneLen)) {

In this code sample, one more variable is needed, but the condition is clearer. And there is just one operation on the map (get()) instead of two in the previous code (containsKey(), get()). What is the better solution? How would you modify this function? | Defining of new, temporary, variables or usage of already known ones? | java;optimization | public static ErrCode checkStatePhoneLen(final String state, final String phoneNo) {
    String stateTmp = state.trim();
    String phoneTmp = phoneNo.trim();

The issue is that stateTmp is less readable than state.trim(), so if you want to create a new variable, make sure that the name carries your intent. You can go for normalizedState, but since you're only trimming (and are not normalizing capitalization, for example), a variable is useless.

    Integer phoneLen = new Integer(phoneTmp.length());

This one is useful! phoneLen is clearer than phoneNo.trim().length(). This answers your first question: it depends! Use new variable names when they do make things clearer, but never use dummy names like phoneTmp, phone1, and so on. By the way, I may be mistaken, but are you certain Integer is necessary? Java should do autoboxing. I would simply write int phoneLen = phoneNo.trim().length().

    if (statePhoneNoMap.containsKey(stateTmp)
            && !phoneLen.equals(statePhoneNoMap.get(stateTmp))) {
        return ERROR;
    }
    return SUCCESS;

To answer your second question, you don't need to explicitly check for null: it's simpler to compare phoneLen and statePhoneNoMap.get(stateTmp) directly. If the latter is null, then the comparison will return false. If phoneLen is null, you don't want to return a value but throw an exception anyway, and this is what happens with your current code because null.equals(...) throws. It makes more sense to check for success and return ERROR if something went wrong. If you have an ErrCode type instead of a boolean, you have to return more explicit codes! Otherwise just return the condition directly. The code becomes:

public static ErrCode checkStatePhoneLen(final String state, final String phoneNo) {
    int phoneLen = phoneNo.trim().length();
    String stateTrimmed = state.trim();
    if (statePhoneNoMap.containsKey(stateTrimmed)
            && phoneLen == statePhoneNoMap.get(stateTrimmed)) {
        return SUCCESS;
    }
    return ERROR;
} |
_unix.360283 | Would anybody have a quick script to add up the used space out of a df command on Linux? I can do it on a Red Hat RHEL 6, but the RHEL 5 switch for a total is non-existent. I am looking to add up the total of column 2 (the column after the /dev/mapper column).

/dev/mapper/rootvg-LogVol00  7.5G  3.0G  4.2G  43% /
/dev/mapper/rootvg-LogVol02  2.0G  914M  969M  49% /tmp
/dev/mapper/rootvg-LogVol01  3.9G  1.2G  2.6G  31% /home
/dev/mapper/rootvg-LogVol07  992M  492M  450M  53% /opt
/dev/mapper/rootvg-LogVol08  4.9G  1.1G  3.6G  24% /opt/patrol3
/dev/mapper/rootvg-LogVol03  3.9G  1.9G  1.9G  51% /usr
/dev/mapper/rootvg-LogVol05  3.0G  469M  2.3G  17% /usr/local
/dev/mapper/rootvg-LogVol04  5.9G  934M  4.7G  17% /var
/dev/mapper/rootvg-LogVol11  496M  357M  114M  76% /nsr
/dev/mapper/rootvg-LogVol09  3.0G  428M  2.4G  16% /opt/patrol3/perform
/dev/mapper/rootvg-LogVol12   14G  3.0G  9.5G  24% /var/crash

| script to add output from df -Ph | scripting;disk usage | null
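A hedged sketch (not from the thread) of one way to total a df -Ph column with awk: strip the header, normalize the M/G suffixes to MiB, and sum. The sample file below reproduces a few rows of the output above; the path /tmp/df_sample.txt is invented for the demo, and a real run would pipe `df -Ph` straight into the awk program instead:

```shell
# Write a small fixed sample of `df -Ph` output for a reproducible demo.
printf '%s\n' \
  'Filesystem                   Size  Used Avail Use% Mounted on' \
  '/dev/mapper/rootvg-LogVol00  7.5G  3.0G  4.2G  43% /' \
  '/dev/mapper/rootvg-LogVol02  2.0G  914M  969M  49% /tmp' \
  '/dev/mapper/rootvg-LogVol11  496M  357M  114M  76% /nsr' > /tmp/df_sample.txt

# Sum column 2: awk's string-to-number conversion drops the suffix,
# so multiply G values by 1024 and let plain M values pass through.
awk 'NR > 1 {
    n = $2 + 0                  # numeric prefix of the Size field
    if ($2 ~ /G/) n *= 1024     # gigabytes -> MiB
    total += n
}
END { printf "total: %.1f GiB\n", total / 1024 }' /tmp/df_sample.txt
```

The same program sums the Used column instead by switching `$2` to `$3`, which matches the "used space" wording in the question.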
_cs.55647 | I'm performing a simulation of protein-protein interactions. I'm using Python to code logic gates as functions to model protein interactions. My model is basically a series of groups (g0 to g4) containing logic gates (see image). Initially, I set up a list containing my groups, and then for each group a dict that contains proteins (nodes) with their starting values (their so-called seedValues, which are the starting parameters for the network at $t=0$). My question is this: is there some way of iterating through my groups (and their logic gate functions) that begins at group 0 (g0 in the image) at t, then at t=t+1 executes groups g0 and g1 synchronously, then executes the three groups g0, g1 and g2 at t=t+2, and so on until t=m, where m is the number of iterations wanted? Image notes: A and B are switches (the program is supposed to change them, as a way of studying perturbations), C is a constant (never changed). J is the output (mostly for show). D and F are built that way to oscillate whenever A = 0. I've understood that threading might be the solution to my problem, but before I dive into that I'm interested in finding a simpler way of solving this. Because I don't know how to formulate this in Python, I attach some extremely messy pseudocode:

#setting starting conditions
#g is starting group
#m is max number of iterations
#t is time
#u is the number of nodes
#v is the number of groups
g = m = t = 0
u = x (where x is the number of nodes in the model)
v = y (where y is the number of groups in the model)

#implement node iterator
nodeChecker():
    timeStep():
        t = t + 1
    nodeExecute(p):
        #p is the number of groups to execute over, in the interval 1 <= g <= v (p=1 is group 1, p=2 is group 1 and 2, ...)
        execute all nodes inside selected group(s)
        timeStep()
    printResults():
        print results of execution of nodes at time of execution
        print state of unexecuted groups (minus current group(s))

#print the seedValue states of network before execution
at time t, execute nodes in g
    nodeExecute(1)
    printResults()
at time t+1, execute nodes in g and g+1
    nodeExecute(2)
    printResults()
...
at time t = m, execute nodes in group g, g+1 ... g+(u-1)
    nodeExecute(g+(u-1))
    printResults()
stop execution

Code note: 1 <= g <= v is the interval $1 \leq$ g $\leq$ v. x and y aren't code variables; the notation is supposed to indicate u = $x$ and v = $y$. My ambition is to output something like:

t      0  1  2  ...  m
node1  0  1  0       1
node2  1  1  0       0
node3  0  0  1       0

Thank you for your time. | Stepping through a sequence of grouped logic gates | logic;simulation;sequential circuit;bio inspired computing | null
_softwareengineering.98588 | I'm a QA guy and designer and I'm coming to development. How do most developers architect programs? With design, I build up from a vague idea of what I have in mind and adapt my design to keep it moving forward and looking good. With coding, I'm trying the same methodology. I'm constantly testing and debugging my code and I make forward-moving tweaks to it. The problem is, I don't think this is how it's done. When I look at good Javascript, for example, everything is broken out into evenly distributed functions. I don't architect my code in this way. Do you normally have to sit down and draw out your classes and functions before you start writing? | How do you think about and architect programs? | design | Some devs architect their designs in UML completely before starting any coding, and others just jump right in. I've seen good designs both ways. The key, I think, is to be open to redesign and refactoring at any stage of development. A beautifully-architected design, conceived in a 300-page requirements document and drawn out with a stack of state and sequence diagrams, can be utter garbage when coded. Be willing to throw out your work whenever necessary. Have the tests in place to prove that your refactored design works as correctly after changes as before. Having tests that you trust will give you the courage to change. |
_unix.219654 | UPDATE: So it seems that I can access the website from computers outside of the LAN, it's when I try to pull up from any computer on the same LAN as the server that I get an issue. From what I've read it seems like this is a NAT problem. I don't entirely understand the issue, but I know it has to do with how the router treats traffic which is trying to access a public domain that is actually hosted on server connected to the router. My router has an IP triggering feature, and from what I remember about its purpose that may be what I need to configure. Any help in how I would do that would be greatly appreciated!I'm trying to teach myself some server basics by setting up a test server VM in VirtualBox and hosting my own WordPress blog. This is all mostly in preparation for when I finish my thesis, which will include a digital/web version which I would prefer to be able to host myself. Everything has gone pretty smoothly. I got a LAMP set up working, created a couple of test Virtual Hosts, installed WordPress and was able to visit all the Virtual Hosts, including the one with my WordPress blog, from within my LAN. Where I have run into trouble is trying to open the server to the Internet. I bought a domain name and set up dynamic DNS (I'm on a residential Comcast account) using this guide, which seemed to work, but for the life of me I can't seem to get it working and I'm out of troubleshooting ideas. Any help would be greatly appreciated.Setup details:The desktop on which the VM lives is running Windows 7, not sure if you all need hardware specifics, but it's a gaming machine with a decent bit of power.I'm using VirtualBox for the VM, and I have it set up with a Bridged connection.Ubuntu Server 14.04 is the OS on the VMUsing LAMP setup, and I changed my document root to /srv, just made more sense to me.Using Namecheap.com for dynamic DNS. I set it up using the guide above, and got a success message. 
Also it updated the IP in host settings at namecheap.com, all of which leads me to believe that my dynamic DNS is likely configured properly. But I'm a noob, so who knows.On my router I've forwarded ports 80, 443 and even 8080 just in case. I've also put my server in DMZ, and even tried turning off the firewall all together.I'm using a modem and router 2-in-1 from Comcast. It's running eMTA & DOCSIS Software Version:7.6.116.Not sure what all log/conf info will help, so hopefully this isn't overkill...Apache2.conf# Global configuration### ServerRoot: The top of the directory tree under which the server's# configuration, error, and log files are kept.## NOTE! If you intend to place this on an NFS (or otherwise network)# mounted filesystem then please read the Mutex documentation (available# at <URL:http://httpd.apache.org/docs/2.4/mod/core.html#mutex>);# you will save yourself a lot of trouble.## Do NOT add a slash at the end of the directory path.##ServerRoot /etc/apache2# Trying to fix internet acessability issue...# ServerName anarchoanthro.com <-- this got rid of that startup error, but otherwise didn't work.## The accept serialization lock file MUST BE STORED ON A LOCAL DISK.#Mutex file:${APACHE_LOCK_DIR} default## PidFile: The file in which the server should record its process# identification number when it starts.# This needs to be set in /etc/apache2/envvars#PidFile ${APACHE_PID_FILE}## Timeout: The number of seconds before receives and sends time out.#Timeout 300## KeepAlive: Whether or not to allow persistent connections (more than# one request per connection). Set to Off to deactivate.#KeepAlive On## MaxKeepAliveRequests: The maximum number of requests to allow# during a persistent connection. 
Set to 0 to allow an unlimited amount.# We recommend you leave this number high, for maximum performance.#MaxKeepAliveRequests 100## KeepAliveTimeout: Number of seconds to wait for the next request from the# same client on the same connection.#KeepAliveTimeout 5# These need to be set in /etc/apache2/envvarsUser ${APACHE_RUN_USER}Group ${APACHE_RUN_GROUP}## HostnameLookups: Log the names of clients or just their IP addresses# e.g., www.apache.org (on) or 204.62.129.132 (off).# The default is off because it'd be overall better for the net if people# had to knowingly turn this feature on, since enabling it means that# each client request will result in AT LEAST one lookup request to the# nameserver.#HostnameLookups Off# ErrorLog: The location of the error log file.# If you do not specify an ErrorLog directive within a <VirtualHost># container, error messages relating to that virtual host will be# logged here. If you *do* define an error logfile for a <VirtualHost># container, that host's errors will be logged there and not here.#ErrorLog ${APACHE_LOG_DIR}/error.log## LogLevel: Control the severity of messages logged to the error_log.# Available values: trace8, ..., trace1, debug, info, notice, warn,# error, crit, alert, emerg.# It is also possible to configure the log level for particular modules, e.g.# LogLevel info ssl:warn#LogLevel warn# Include module configuration:IncludeOptional mods-enabled/*.loadIncludeOptional mods-enabled/*.conf# Include list of ports to listen onInclude ports.conf# Sets the default security model of the Apache2 HTTPD server. It does# not allow access to the root filesystem outside of /usr/share and /var/www.# The former is used by web applications packaged in Debian,# the latter may be used for local directories served by the web server. 
If# your system is serving content from a sub-directory in /srv you must allow# access here, or in any related virtual host.<Directory /> Options FollowSymLinks AllowOverride None Require all denied</Directory><Directory /usr/share> AllowOverride None Require all granted</Directory><Directory /var/www/> Options FollowSymLinks AllowOverride None Require all granted</Directory><Directory /srv/> Options FollowSymLinks IncludesNOEXEC XBitHack on AllowOverride None Require all granted</Directory># AccessFileName: The name of the file to look for in each directory# for additional configuration directives. See also the AllowOverride# directive.#AccessFileName .htaccess## The following lines prevent .htaccess and .htpasswd files from being# viewed by Web clients.#<FilesMatch ^\.ht> Require all denied</FilesMatch>## The following directives define some format nicknames for use with# a CustomLog directive.## These deviate from the Common Log Format definitions in that they use %O# (the actual bytes sent including headers) instead of %b (the size of the# requested file), because the latter makes it impossible to detect partial# requests.## Note that the use of %{X-Forwarded-For}i instead of %h is not recommended.# Use mod_remoteip instead.#LogFormat %v:%p %h %l %u %t \%r\ %>s %O \%{Referer}i\ \%{User-Agent}i\ vhost_combinedLogFormat %h %l %u %t \%r\ %>s %O \%{Referer}i\ \%{User-Agent}i\ combinedLogFormat %h %l %u %t \%r\ %>s %O commonLogFormat %{Referer}i -> %U refererLogFormat %{User-agent}i agent# Include of directories ignores editors' and dpkg's backup files,# see README.Debian for details.# Include generic snippets of statementsIncludeOptional conf-enabled/*.conf# Include the virtual host configurations:IncludeOptional sites-enabled/*.conf# vim: syntax=apache ts=4 sw=4 sts=4 sr noetUserDir disabled rootports.conf# If you just change the port or add more ports here, you will likely also# have to change the VirtualHost statement in# 
/etc/apache2/sites-enabled/000-default.confListen 80Listen 8080<IfModule ssl_module> Listen 443</IfModule><IfModule mod_gnutls.c> Listen 443</IfModule># vim: syntax=apache ts=4 sw=4 sts=4 sr noetmy-wpsite.conf <-- This is the only site enabled, and I just copied the default.conf and edited it.<VirtualHost *:80> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. #ServerName www.example.com ServerAdmin [email protected] ServerName www.anarchoanthro.com ServerAlias anarchoanthro.com DocumentRoot /srv/wp-anarchoanthro # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined # For most configuration files from conf-available/, which are # enabled or disabled at a global level, it is possible to # include a line for only one particular virtual host. For example the # following line enables the CGI configuration for this host only # after it has been globally disabled with a2disconf. #Include conf-available/serve-cgi-bin.conf # Set /srv/testsite1/cgibin/ as CGI script directory. ScriptAlias /cgi-bin/ /srv/wp-anarchoanthro/cgi-bin/# vim: syntax=apache ts=4 sw=4 sts=4 sr noetAnd here are my logs. I tried to load up anarchoanthro.com, my blog, just before grabbing these. 
Also I'm only including logs from today, hopefully that will narrow things down.access.log95.134.193.184 - - [01/Aug/2015:04:17:41 -0500] \x0fK\x17\xaf$W\xff' 200 28811 - -199.30.228.129 - - [01/Aug/2015:05:07:30 -0500] GET / HTTP/1.1 200 7795 - Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 GTB7.138.105.109.12 - - [01/Aug/2015:05:12:36 -0500] GET / HTTP/1.1 200 29152 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)38.105.109.12 - - [01/Aug/2015:05:12:38 -0500] GET / HTTP/1.1 200 29151 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)38.105.109.12 - - [01/Aug/2015:05:12:39 -0500] GET /wp-content/themes/arcade-basic/library/js/html5.js HTTP/1.1 200 2734 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)38.105.109.12 - - [01/Aug/2015:05:12:39 -0500] GET /wp-includes/js/wp-emoji-release.min.js?ver=4.2.3 HTTP/1.1 200 14953 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)38.105.109.12 - - [01/Aug/2015:05:12:39 -0500] GET /wp-includes/js/jquery/jquery.js?ver=1.11.2 HTTP/1.1 200 96260 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)38.105.109.12 - - [01/Aug/2015:05:12:40 -0500] GET /wp-includes/js/jquery/jquery-migrate.min.js?ver=1.2.1 HTTP/1.1 200 7506 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)38.105.109.12 - - [01/Aug/2015:05:12:40 -0500] GET /wp-content/themes/arcade-basic/library/js/bootstrap.min.js?ver=3.0.3 HTTP/1.1 200 6980 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)38.105.109.12 - - [01/Aug/2015:05:12:40 -0500] GET /wp-content/themes/arcade-basic/library/js/fillsize.js?ver=4.2.3 HTTP/1.1 200 2576 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)38.105.109.12 - - [01/Aug/2015:05:12:40 -0500] GET /wp-content/themes/arcade-basic/library/js/jquery.arctext.js?ver=4.2.3 HTTP/1.1 200 10612 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)38.105.109.12 - - [01/Aug/2015:05:12:40 -0500] GET /wp-content/themes/arcade-basic/library/js/theme.js?ver=4.2.3 HTTP/1.1 200 3052 - Mozilla/4.0 
(compatible; MSIE 7.0; Windows NT 5.1)64.69.91.210 - - [01/Aug/2015:06:02:54 -0500] GET / HTTP/1.1 200 29128 - Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322)192.187.110.98 - - [01/Aug/2015:06:54:53 -0500] GET http://testp2.czar.bielawa.pl/testproxy.php HTTP/1.1 404 356 - Mozilla/5.0 (Windows NT 5.1; rv:32.0) Gecko/20100101 Firefox/31.0141.212.122.59 - - [01/Aug/2015:07:56:56 -0500] CONNECT proxytest.zmap.io:80 HTTP/1.1 200 27778 - Mozilla/5.0 zgrab/0.x141.212.122.59 - - [01/Aug/2015:07:56:57 -0500] GET / HTTP/1.1 200 30504 - Mozilla/5.0 zgrab/0.x104.238.194.164 - - [01/Aug/2015:09:32:09 -0500] GET / HTTP/1.1 200 29153 - Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)46.172.71.251 - - [01/Aug/2015:12:12:51 -0500] GET /rom-0 HTTP/1.1 404 367 - Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)error.log[Sat Aug 01 06:54:53.947240 2015] [:error] [pid 4035] [client 192.187.110.98:56439] script '/srv/wp-anarchoanthro/testproxy.php' not found or unable to stat[Sat Aug 01 11:23:56.393436 2015] [mpm_prefork:notice] [pid 3918] AH00169: caught SIGTERM, shutting down[Sat Aug 01 11:23:57.476298 2015] [mpm_prefork:notice] [pid 4943] AH00163: Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.11 OpenSSL/1.0.1f configured -- resuming normal operations[Sat Aug 01 11:23:57.476333 2015] [core:notice] [pid 4943] AH00094: Command line: '/usr/sbin/apache2'[Sat Aug 01 12:30:02.492747 2015] [mpm_prefork:notice] [pid 4943] AH00169: caught SIGTERM, shutting down[Sat Aug 01 12:30:03.513348 2015] [mpm_prefork:notice] [pid 5037] AH00163: Apache/2.4.7 (Ubuntu) PHP/5.5.9-1ubuntu4.11 OpenSSL/1.0.1f configured -- resuming normal operations[Sat Aug 01 12:30:03.513384 2015] [core:notice] [pid 5037] AH00094: Command line: '/usr/sbin/apache2'other_vhosts_access.log127.0.1.1:80 216.218.206.68 - - [01/Aug/2015:01:31:36 -0500] \x16\x03\x01 400 0 - -127.0.1.1:80 141.212.122.42 - - [01/Aug/2015:03:15:26 -0500] \x16\x03\x01 400 0 - -127.0.1.1:80 65.31.172.201 - - 
[01/Aug/2015:06:20:06 -0500] \x80F\x01\x03\x01 400 0 - -127.0.1.1:80 50.77.106.104 - - [01/Aug/2015:06:44:22 -0500] \x80F\x01\x03\x01 400 0 - -127.0.1.1:80 71.174.188.128 - - [01/Aug/2015:07:29:10 -0500] \x80F\x01\x03\x01 400 0 - -127.0.1.1:80 98.251.14.214 - - [01/Aug/2015:09:31:43 -0500] \x80F\x01\x03\x01 400 0 - -127.0.1.1:80 89.248.171.137 - - [01/Aug/2015:10:22:04 -0500] \x16\x03\x01 400 0 - -anarchoanthro.com:80 177.206.182.186 - - [01/Aug/2015:12:08:54 -0500] \x80F\x01\x03\x01 400 0 - -Result of route commandKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Ifacedefault 10.0.0.1 0.0.0.0 UG 0 0 0 eth010.0.0.0 * 255.255.255.0 U 0 0 0 eth0Any help on this would be greatly appreciated!!! | Cannot Access LAMP Web Server on Ubuntu Server 14.04 | ubuntu;apache httpd;mysql;wordpress | First of all I would check that I could reach the web server from a second PC on your LAN. You would probably need an entry in your hosts file to map the domain name to the internal address. This will confirm that the server is bridged correctly and isn't firewalled, and can route to the LAN.I would then check that the server had a default route pointing to your gateway. Without this it can't reply to - or even acknowledge - inbound requests.Finally I would run a network sniffer such as Wireshark on the server and watch for a controlled connection inbound from outside your LAN. This will confirm traffic is routed correctly.Some ISPs, particularly in USA, block traffic to port 80. You will want to check this, too, if it's appropriate for your situation.You need port 80 for http, 443 for https. You don't need 8080. You might want to forward or at least have your router respond to ping.Many home routers cannot handle an internal request to their external ip address that is then forwarded internally. Exclude this situation from your tests, at least initially.
_unix.198653 | Alright, I have two folders. For simplicity's sake, I'll call them people and animals. The animals folder has a file for each animal, and the people folder has a file for each person with references to which animals that person owns. This is what I have so far:ls -1 ~/animals | cut -d. -f1 | grep -R -f - ~/peopleThe grep syntax I got from here. I'm trying to get it to say:dog: 8cat: 7hippo: 2Instead, I add the -c flag to grep and I get:Bob.txt: 0Cathy.txt: 0John.txt: 0Patrick.txt: 1How do I get counts of the animals in total, not the animals for each person? | How do I get a count of file references inside a folder of files with those references? | files;grep;search | null |
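To get totals per animal rather than per person, one option is to let grep emit one line per match and aggregate with sort | uniq -c. This is a sketch under the assumption that each animals file is named animal.txt and each people file simply mentions animal names; the directory layout and sample contents below are hypothetical, only there to make the example runnable end-to-end. With the real ~/animals and ~/people directories, only the final pipeline is needed.

```shell
# Hypothetical sample data so the pipeline below runs as-is
tmp=$(mktemp -d)
mkdir -p "$tmp/animals" "$tmp/people"
touch "$tmp/animals/dog.txt" "$tmp/animals/cat.txt"
printf 'dog\ncat\n' > "$tmp/people/Bob.txt"
printf 'dog\n' > "$tmp/people/Cathy.txt"

# -o: print one line per match; -h: drop the file-name prefix;
# -w: match whole words only; -f -: read the animal names as patterns from stdin
ls "$tmp/animals" | cut -d. -f1 |
    grep -Rhowf - "$tmp/people" | sort | uniq -c

rm -rf "$tmp"
```

uniq -c prints `count name` (here `1 cat` and `2 dog`); appending something like `awk '{print $2": "$1}'` would turn that into the `dog: 2` style shown in the question.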
_scicomp.910 | Recently, I've encountered a bizarre problem with FORTRAN95. I initialized variables X and Y as follows: X=1.0Y=0.1Later I add them together and print the result: 1.10000000149012After examining the variables, it seems as though 0.1 is not represented in double precision with full accuracy. Is there any way to avoid this? | How to set double precision values in Fortran | fortran;floating point | Another way to do this is to first explicitly specify the precision you desire in the variable using the SELECTED_REAL_KIND intrinsic and then use this to define and initialize the variables. Something like:INTEGER, PARAMETER :: dp = SELECTED_REAL_KIND(15)REAL(dp) :: xx = 1.0_dpA nice advantage to doing it this way is that you can store the definition of dp in a module, then USE that module where needed. Now if you ever want to change the precision of your program, you only have to change the definition of dp in that one place instead of searching and replacing all the D0s at the end of your variable initializations. (This is also why I'd recommend not using the 1.0D-1 syntax to define Y as suggested. It works, but makes it harder to find and change all instances in the future.)This page on the Fortran Wiki gives some good additional information on SELECTED_REAL_KIND. |
_webapps.24189 | Possible Duplicate:My account on Facebook has been hacked - how do I recover it? Somebody hacked my Facebook ID recently, and it's creating a mess in my friend circle. How can I solve this problem and undo this malicious activity? Can anybody help me with this? I am in great need. Thanks. | Hacked the facebook account | facebook;account management;notifications | null
_unix.109026 | I would like to enforce the security access of some files in my Home folder. My concern is about processes running with the same privileges as me having access to those files. I've been wondering about this for some time, because the role based security in Linux is great but weak for things running in the same role. Particularly when it comes to a user account that is very active, every file lying inside the home folder is vulnerable to the user's actions. For example, when installing a malicious Firefox plug-in, the other parts of the OS won't be touched, but all the files inside the home folder can be exposed, and installing a Firefox plug-in is something any user could do without any special privilege. | What are the methods to protect home folder files from other applications running as the same user? | security | You will probably be best off with either a security framework implementing RBAC or MAC (grsecurity for the former; SELinux, AppArmor, Tomoyo Linux for the latter), which lets you define finer-grained permissions per application.Apart from that, recent Linux kernels offer namespaces which allow you to change the way different processes see the whole system. If you mount an empty directory over, say, $HOME for the untrusted process, it won't be able to read your files.
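The mount-namespace idea in that last paragraph can be sketched with util-linux's unshare. This is a hedged example: it assumes the kernel allows unprivileged user namespaces (the --user/--map-root-user flags); on kernels that don't, the same thing works as root with just --mount.

```shell
# Bind an empty directory over $HOME inside a private mount namespace;
# a process started there sees an empty home, while every process
# outside the namespace still sees the real files.
empty=$(mktemp -d)

unshare --user --map-root-user --mount \
    sh -c 'mount --bind "$1" "$HOME" && ls -A "$HOME"' sh "$empty"
# ls -A should print nothing here: the real contents are hidden.
```

An untrusted program (a browser with dubious plug-ins, say) launched inside such a namespace cannot read the real home folder at all, yet nothing outside the namespace changes.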
_codereview.1318 | 3.1 Write a program that computes the arithmetic sum of a constant difference sequence:D0 = A, Dn+1 = C*Dn + BRun the following values and compare to the closed form solution:Maximum index = 10,000, a = 1.0; c = 5; b = 2^-15I am not sure I perfectly understand the problem above or how to confirm that I have the correct answer, but I wrote the following short program and I would appreciate any feedback.(defun d-n-plus-1 (d-n c b) (+ (* d-n c) b))(defun d-depth (depth d-n c b a) (if (equal depth 0) (d-n-plus-1 d-n c b) (d-depth (1- depth) (if (null d-n) a (d-n-plus-1 d-n c b)) c b a)))(format t "result: ~a ~%" (d-depth 10000 nil 5 (expt 2 -15) 1))In addition, I wrote a version that just uses a loop. I'm not crazy about loop for x from 1 to depth or that I used setq in this version. Can you think of a better way? Any other suggestions for this version are also welcome.(defun d_n+1 (d_n c b) (+ (* d_n c) b)) (defun d-depth (depth d_n c b) (loop for x from 0 to depth do (setq d_n (d_n+1 d_n c b))) d_n)(format t "result: ~a ~%" (d-depth 10000 1 5 (expt 2 -15))) | Compute arithmetic sum of a constant difference sequence | lisp;common lisp | I notice three things about your code:You're relying on tail-recursion. This is okay if you know your program will only ever run on implementations that perform tail call optimization, but you should be aware that the Common Lisp standard does not require TCO (and more practically speaking, some implementations indeed do not offer it). So if you want your program to be portable, you should rewrite it using a loop.Your d-depth function takes d-n and a as parameters, but only uses a in place of d-n if d-n is nil. It'd make more sense to me to remove the a parameter and instead pass in the a value as the initial value for d-n (instead of nil).I would also write d_n instead of d-n to emphasize that n is an index, which is usually written using an underscore in ASCII.
Also I'd call d-n-plus-1 d_n+1 instead; there's no reason to spell out plus in lisp.In response to your update:I'm not crazy about loop for x from 1 to depth or that I used setq in this version. Can you think of a better way?for x from 1 to depth is equivalent to repeat depth (unless you actually use x, which you don't). However, in your code you're actually starting at 0, not 1, so you'd need repeat (+ 1 depth) to get the same number of iterations.The setq can be replaced using the for var = start then replacement-form syntax. The result would be something like this:(defun d-depth (depth a c b) (loop repeat (+ 1 depth) for d_n = a then (d_n+1 d_n c b) finally (return d_n))) Or in scheme :p
_codereview.92489 | I am using Python with libtcodpy to build a turn-based strategy game.In this game each unit has a maximum range which varies from 4-10. This range determines how many steps the unit can take and therefore the distance it can move on the game map.I wish to output all the squares which the unit can move to, based upon its range as well as variable terrain, which increases the number of steps necessary to move over it.This function is what I have to generate these squares and it works but is intolerably slow (takes about 1-2 seconds for the squares to highlight on the map):#Gets a list of all the squares the unit can move to (so that they#can be highlighted later)def get_total_squares(max_range, playerx, playery, team): global coord_in_range #List where the final coords will be stored global T_Map #Instance of the Map class, i.e the map #This iterates through an area of the game map defined by the square #created by the player's maximum range (where the player is in the center #of the square) for x in range((playerx-max_range), (playerx+max_range+1)): for y in range ((playery-max_range), (playery+max_range+1)): #This creates a path for every square in the above area path = libtcod.path_new_using_map(new_map, 0) #This computes a path to every square in the above area libtcod.path_compute(path, playerx, playery, x, y) #This gives the number of steps the unit takes to walk one specific path num_steps = libtcod.path_size(path) #This is a blank list which will be populated with the coordinates of #the tiles of one specific path coord_of_path = [] #This populates the above list for i in range(libtcod.path_size(path)): coord_of_path.append(libtcod.path_get(path, i)) #This is a list of all the tiles in the map which can hinder movement #henceforth called terrain_tiles terrain_tiles = [tile for tile in T_Map.tile_array if tile.terrain_name in ('Tall Grass', 'Hills', 'Forest', 'Water')] #This iterates through all the terrain tiles and #if the tile is in the 
path, adds that tiles movement penalty #to the total number of steps to take that path for tile in terrain_tiles: if (tile.x, tile.y) in coord_of_path: num_steps += tile.move_cost #This is what actually determines whether the path is added to the #list of walkable paths; if the path's step count, taking into account #modifications from terrain, is greater than the unit's range that path is not added if num_steps <=max_range: for i in range(libtcod.path_size(path)): coord_in_range.append(libtcod.path_get(path, i)) return coord_in_rangeI'm quite certain something in here is acting as a bottleneck and very certain it has something to do with adjusting the paths due to terrain, since if I do it without considering terrain, it runs very fast.If the full code is necessary (I don't think it should be as, stated above, the problem must lie here) I will post it. | Pathfinding in turn-based strategy game | python;performance;pathfinding | null |
_codereview.148344 | After I got this answer, I edited my code after that answer, but the code still seems kind of slow. Can I make any other improvements? And why does the code seem to get slower with every minute? It seems to start great and continue slow. Am I missing something here?using System;using System.Windows.Forms;using Microsoft.Office.Interop.Excel;using System.IO;using System.Linq;using System.Diagnostics;namespace Report{ public partial class Form1 : Form { public Form1() { InitializeComponent(); } string GetLine(string fileName, int line) { using (var sr = new StreamReader(fileName)) { for (int i = 1; i < line; i++) sr.ReadLine(); return sr.ReadLine(); } } private void button1_Click(object sender, EventArgs e) { Stopwatch timer = new Stopwatch(); timer.Start(); Microsoft.Office.Interop.Excel.Application excelapp = new Microsoft.Office.Interop.Excel.Application(); excelapp.Visible = true; _Workbook workbook = (_Workbook)(excelapp.Workbooks.Open(textBox2.Text)); _Worksheet worksheet = (_Worksheet)workbook.ActiveSheet; DateTime dt = DateTime.Now; foreach (string fileName in Directory.GetFiles(textBox1.Text, *.txt)) { int row = 1, EmptyRow = 0; while (Convert.ToString(worksheet.Range[A + row].Value) != null) { row++; EmptyRow = row; } var line2s = File.ReadLines(fileName).Skip(9).Take(1).ToArray(); string comp = line2s[0]; string compare = comp.Substring(30, comp.Length - 30); if (compare != Failed) { continue; } else { string[] lines = File.ReadAllLines(fileName); string serial = lines[3]; string data = lines[4]; string time = lines[5]; string operat = lines[6]; string result = lines[9]; (worksheet.Cells[EmptyRow, 1] as Microsoft.Office.Interop.Excel.Range).Value2 = serial.Substring(30, serial.Length - 30); (worksheet.Cells[EmptyRow, 2] as Microsoft.Office.Interop.Excel.Range).Value2 = data.Substring(30, data.Length - 30); (worksheet.Cells[EmptyRow, 3] as Microsoft.Office.Interop.Excel.Range).Value2 = time.Substring(30, time.Length - 30); 
(worksheet.Cells[EmptyRow, 4] as Microsoft.Office.Interop.Excel.Range).Value2 = operat.Substring(30, operat.Length - 30); (worksheet.Cells[EmptyRow, 5] as Microsoft.Office.Interop.Excel.Range).Value2 = result.Substring(30, result.Length - 30); foreach (string line in lines) { if (line.Contains(FixtureCoverResistance:)) { (worksheet.Cells[EmptyRow, 6] as Microsoft.Office.Interop.Excel.Range).Value2 = line.Substring(31, line.Length - 31); } else if (line.Contains(FwProgrammingCheck:)) { (worksheet.Cells[EmptyRow, 7] as Microsoft.Office.Interop.Excel.Range).Value2 = line.Substring(31, line.Length - 31); } else if (line.Contains(Checksum =)) { (worksheet.Cells[EmptyRow, 8] as Microsoft.Office.Interop.Excel.Range).Value2 = line.Substring(11, line.Length - 11); } else if (line.Contains(FwEepromCheck:)) { (worksheet.Cells[EmptyRow, 9] as Microsoft.Office.Interop.Excel.Range).Value2 = line.Substring(31, line.Length - 31); } else if (line.Contains(FixtureCoverResistanceAfterProg:)) { (worksheet.Cells[EmptyRow, 11] as Microsoft.Office.Interop.Excel.Range).Value2 = line.Substring(33, line.Length - 33); } } } } TimeSpan ts = timer.Elapsed; label2.Text = ts.ToString(mm\\:ss\\.ff); timer.Stop(); } private void button2_Click(object sender, EventArgs e) { if (folderBrowserDialog1.ShowDialog() == DialogResult.OK) { textBox1.Text = folderBrowserDialog1.SelectedPath; } } private void button3_Click(object sender, EventArgs e) { if (openFileDialog1.ShowDialog() == DialogResult.OK) { textBox2.Text = openFileDialog1.FileName.ToString(); } } }} | Extracting data from .txt files and writing to Excel - follow-up | c#;performance;excel | I am not providing review from programming good practice or architecture perspective. I just want to provide you some recommendations to improve performance of your code you have asked.1) Find first empty cell on worksheetYour while loop way to find first empty row is inefficient. 
Getting a value from a cell via Interop is slow.Instead of this block:while (Convert.ToString(worksheet.Range["A" + row].Value) != null){ row++; EmptyRow = row;}Use a single built-in Excel function:int emptyRow = worksheet.Cells[worksheet.Cells.Rows.Count, "A"].End[-4162].Row + 1;The constant -4162 is the xlUp constant; you can find it in the Excel documentation or in the object browser in the VBE.2) Use a temporary string[] arrayWriting to worksheet cells one by one is inefficient. Work with a string array and write the final array into the worksheet at once.You can initialize the array:string[,] rangeArray = new string[1, 11]; Then instead of writing values to cells:(worksheet.Cells[EmptyRow, 1] as Range).Value2 = serial.Substring(30, serial.Length - 30);(worksheet.Cells[EmptyRow, 2] as Range).Value2 = data.Substring(30, data.Length - 30);store them in the array, which is a very fast operation:rangeArray[0,0] = serial.Substring(30, serial.Length - 30);rangeArray[0,1] = data.Substring(30, data.Length - 30);Finally, write the array to the worksheet:Range c1 = (Range)worksheet.Cells[emptyRow, 1];Range c2 = (Range)worksheet.Cells[emptyRow, 11];Range range = worksheet.Range[c1, c2];range.Value = rangeArray;3) Still not satisfied with performance?If you are still not satisfied with performance, avoid using Office.Interop. You can parse your text files in C# and store the output in a CSV file; the final CSV file opens easily in Excel.There are also alternatives like ExcelLibrary. This should also be faster than Interop.
_webapps.7034 | Are there any third-party websites that use delicious.com's data?Background: I've been searching for information on finding a suitable mattress, and I've found that popular only has about 10 results (not enough signal), while recent gives 9773 results (too much noise). | Are there any third-party websites that use delicious.com's data? | delicious | null |
_softwareengineering.236194 | I'm working on a cross-platform C++ project which doesn't consider Unicode, and it needs changes to support Unicode.There are the following two choices, and I need to decide which one to choose.Using UTF-8 (std::string), which will make it easy to support POSIX systems.Using UTF-32 (std::wstring), which will make it easy to call the Windows API.So for item #1, UTF-8, the benefit is that the code changes will not be too many. But the concern is that some basic rules will be broken for UTF-8; for example, string.size() will not equal the character length.Searching for a '/' in a path will be hard to implement (I'm not 100% sure).So, any more experience? And which one should I choose? | Does it make sense to choose UTF-32, based on the concern that some basic rules will be broken for UTF-8? | c++;programming practices;cross platform;unicode | Use UTF-8. string.size() won't equal the number of code points, but that is mostly a useless metric anyway. In almost all cases, you should either worry about the number of user-perceived characters/glyphs (and for that, UTF-32 fails just as badly), or about the number of bytes of storage used (for this, UTF-32 offers no advantage and uses more bytes to boot).Searching for an ASCII character, such as /, will actually be easier than with other encodings, because you can simply use any byte/ASCII based search routine (even old C strstr if you have 0 terminators). UTF-8 is designed such that all ASCII characters use the same byte representation in UTF-8, and no non-ASCII character shares any byte with any ASCII character.The Windows API uses UTF-16, and UTF-16 doesn't offer string.size() == code_point_count either. It also shares all downsides of UTF-32, more or less.
Furthermore, making the application handle Unicode probably won't be as simple as making all strings UTF-{8,16,32}; good Unicode support can require some tricky logic like normalizing text, handling silly code points well (this can become a security issue for some applications), making string manipulations such as slicing and iteration work with glyphs or code points instead of bytes, etc.There are more reasons to use UTF-8 (and reasons not to use UTF-{16,32}) than I can reasonably describe here. Please refer to the UTF-8 manifesto if you need more convincing. |
_unix.324234 | I would like to specify U-Boot not to use uramdisk to boot because my ramdisk is part of the Linux image. The problem is that even if I choose sdboot which I modified and call bootm {linux} - {devicetree} it checks for the uramdisk.image.gz file existence.EDIT : Whatever I do it doesn't override sdboot property. It's like it is loading my uEnv.txt (and it works because it correctly takes my device tree blob which has a different name) and just after that it overrides the sdboot property...Here is my uEnv.txt file :sdboot=if mmcinfo; then run uenvboot; echo Copying Linux from SD to RAM... && load mmc 0 ${kernel_load_address} ${kernel_image} && echo Copying Device Tree from SD to RAM... && load mmc 0 ${devicetree_load_address} ${devicetree_image} && echo Boot Linux kernel... &&bootm ${kernel_load_address} - ${devicetree_load_address}; fiAnd here is the log I get :U-Boot 2015.07-svn563 (Nov 17 2016 - 17:10:38 +0100)Model: Zynq ZC702 Development BoardI2C: readyDRAM: ECC disabled 512 MiB# Malloc address : 0x1F316000# Malloc size : 12713984 (0x00c20000)# CONFIG_SYS_TEXT_BASE : 0x04000000# U-Boot relocated in RAM at : 0x1ff36000MMC: zynq_sdhci: 0SF: Detected N25Q128 with page size 256 Bytes, erase size 64 KiB, total 16 MiB*** Warning - bad CRC, using default environment# load_addr = 0x00000000In: serialOut: serialErr: serialModel: Zynq ZC702 Development BoardNet: Gem.e000b000Hit any key to stop autoboot: 0 Device: zynq_sdhciManufacturer ID: 3OEM: 5344Name: SE32G Tran Speed: 50000000Rd Block Len: 512SD version 3.0High Capacity: YesCapacity: 29.7 GiBBus Width: 4-bitErase Group Size: 512 Bytesreading uEnv.txt2187 bytes read in 14 ms (152.3 KiB/s)Loaded environment from uEnv.txtImporting environment from SD ...Copying Linux from SD to RAM...reading busybox.img11904111 bytes read in 1003 ms (11.3 MiB/s)reading mlg-x.dtb13851 bytes read in 15 ms (901.4 KiB/s)reading uramdisk.image.gz** Unable to read file uramdisk.image.gz **zynq-uboot> Here is the full trace if 
I let the autoboot fail with uramdisk, then printenv, then reset default env and printenv again :U-Boot 2015.07-svn563 (Nov 17 2016 - 17:10:38 +0100)Model: Zynq ZC702 Development BoardI2C: readyDRAM: ECC disabled 512 MiB# Malloc address : 0x1F316000# Malloc size : 12713984 (0x00c20000)# CONFIG_SYS_TEXT_BASE : 0x04000000# U-Boot relocated in RAM at : 0x1ff36000MMC: zynq_sdhci: 0SF: Detected N25Q128 with page size 256 Bytes, erase size 64 KiB, total 16 MiB*** Warning - bad CRC, using default environment# load_addr = 0x00000000In: serialOut: serialErr: serialModel: Zynq ZC702 Development BoardNet: Gem.e000b000Hit any key to stop autoboot: 0 Device: zynq_sdhciManufacturer ID: 3OEM: 5344Name: SE32G Tran Speed: 50000000Rd Block Len: 512SD version 3.0High Capacity: YesCapacity: 29.7 GiBBus Width: 4-bitErase Group Size: 512 Bytesreading uEnv.txt381 bytes read in 10 ms (37.1 KiB/s)Loaded environment from uEnv.txtImporting environment from SD ...Copying Linux from SD to RAM...reading busybox.img11904111 bytes read in 1004 ms (11.3 MiB/s)reading mlg-x.dtb13839 bytes read in 15 ms (900.4 KiB/s)reading uramdisk.image.gz** Unable to read file uramdisk.image.gz **zynq-uboot> printenv baudrate=115200bitstream_image=system.bit.binboot_image=BOOT.binboot_size=0xF00000bootcmd=run $modebootbootdelay=3bootenv=uEnv.txtdevicetree_image=mlg-x.dtbdevicetree_load_address=0x2000000devicetree_size=0x20000dfu_mmc=run dfu_mmc_info && dfu 0 mmc 0dfu_mmc_info=set dfu_alt_info ${kernel_image} fat 0 1\\;${devicetree_image} fat 0 1\\;${ramdisk_image} fat 0 1dfu_ram=run dfu_ram_info && dfu 0 ram 0dfu_ram_info=set dfu_alt_info ${kernel_image} ram 0x3000000 0x500000\\;${devicetree_image} ram 0x2A00000 0x20000\\;${ramdisk_image} ram 0x2000000 0x600000ethact=Gem.e000b000ethaddr=00:0a:35:00:01:22fdt_high=0x20000000filesize=360fimportbootenv=echo Importing environment from SD ...; env import -t ${loadbootenv_addr} $filesizeinitrd_high=0x20000000ipaddr=10.10.70.102jtagboot=echo TFTPing Linux to RAM... 
&& tftpboot ${kernel_load_address} ${kernel_image} && tftpboot ${devicetree_load_address} ${devicetree_image} && tftpboot ${ramdisk_load_address} ${ramdisk_image} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}kernel_image=busybox.imgkernel_load_address=0x2080000kernel_size=0x500000loadbit_addr=0x100000loadbootenv=load mmc 0 ${loadbootenv_addr} ${bootenv}loadbootenv_addr=0x2000000mmc_loadbit=echo Loading bitstream from SD/MMC/eMMC to RAM.. && mmcinfo && load mmc 0 ${loadbit_addr} ${bitstream_image} && fpga load 0 ${loadbit_addr} ${filesize}modeboot=sdbootnandboot=echo Copying Linux from NAND flash to RAM... && nand read ${kernel_load_address} 0x100000 ${kernel_size} && nand read ${devicetree_load_address} 0x600000 ${devicetree_size} && echo Copying ramdisk... && nand read ${ramdisk_load_address} 0x620000 ${ramdisk_size} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}norboot=echo Copying Linux from NOR flash to RAM... && cp.b 0xE2100000 ${kernel_load_address} ${kernel_size} && cp.b 0xE2600000 ${devicetree_load_address} ${devicetree_size} && echo Copying ramdisk... && cp.b 0xE2620000 ${ramdisk_load_address} ${ramdisk_size} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}preboot=if test $modeboot = sdboot && env run sd_uEnvtxt_existence_test; then if env run loadbootenv; then env run importbootenv; fi; fi; qspiboot=echo Copying Linux from QSPI flash to RAM... && sf probe 0 0 0 && sf read ${kernel_load_address} 0x100000 ${kernel_size} && sf read ${devicetree_load_address} 0x600000 ${devicetree_size} && echo Copying ramdisk... && sf read ${ramdisk_load_address} 0x620000 ${ramdisk_size} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}ramdisk_image=uramdisk.image.gzramdisk_load_address=0x4000000ramdisk_size=0x5E0000rsa_jtagboot=echo TFTPing Image to RAM... 
&& tftpboot 0x100000 ${boot_image} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}rsa_nandboot=echo Copying Image from NAND flash to RAM... && nand read 0x100000 0x0 ${boot_size} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}rsa_norboot=echo Copying Image from NOR flash to RAM... && cp.b 0xE2100000 0x100000 ${boot_size} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}rsa_qspiboot=echo Copying Image from QSPI flash to RAM... && sf probe 0 0 0 && sf read 0x100000 0x0 ${boot_size} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}rsa_sdboot=echo Copying Image from SD to RAM... && load mmc 0 0x100000 ${boot_image} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}sd_uEnvtxt_existence_test=test -e mmc 0 /uEnv.txtsdboot=if mmcinfo; then run uenvboot; echo Copying Linux from SD to RAM... && load mmc 0 ${kernel_load_address} ${kernel_image} && echo Copying Device Tree from SD to RAM... && load mmc 0 ${devicetree_load_address} ${devicetree_image} && echo Boot Linux kernel... && bootm ${kernel_load_address} - ${devicetree_load_address}; fiserverip=10.10.70.101stderr=serialstdin=serialstdout=serialthor_mmc=run dfu_mmc_info && thordown 0 mmc 0thor_ram=run dfu_ram_info && thordown 0 ram 0uenvboot=if run loadbootenv; then echo Loaded environment from ${bootenv}; run importbootenv; fi; if test -n $uenvcmd; then echo Running uenvcmd ...; run uenvcmd; fiusbboot=if usb start; then run uenvboot; echo Copying Linux from USB to RAM... 
&& load usb 0 ${kernel_load_address} ${kernel_image} && load usb 0 ${devicetree_load_address} ${devicetree_image} && load usb 0 ${ramdisk_load_address} ${ramdisk_image} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}; fiEnvironment size: 4843/131068 byteszynq-uboot> env default -f -a## Resetting to default environmentzynq-uboot> printenv baudrate=115200bitstream_image=system.bit.binboot_image=BOOT.binboot_size=0xF00000bootcmd=run $modebootbootdelay=3bootenv=uEnv.txtdevicetree_image=devicetree.dtbdevicetree_load_address=0x2000000devicetree_size=0x20000dfu_mmc=run dfu_mmc_info && dfu 0 mmc 0dfu_mmc_info=set dfu_alt_info ${kernel_image} fat 0 1\\;${devicetree_image} fat 0 1\\;${ramdisk_image} fat 0 1dfu_ram=run dfu_ram_info && dfu 0 ram 0dfu_ram_info=set dfu_alt_info ${kernel_image} ram 0x3000000 0x500000\\;${devicetree_image} ram 0x2A00000 0x20000\\;${ramdisk_image} ram 0x2000000 0x600000ethaddr=00:0a:35:00:01:22fdt_high=0x20000000importbootenv=echo Importing environment from SD ...; env import -t ${loadbootenv_addr} $filesizeinitrd_high=0x20000000ipaddr=10.10.70.102jtagboot=echo TFTPing Linux to RAM... && tftpboot ${kernel_load_address} ${kernel_image} && tftpboot ${devicetree_load_address} ${devicetree_image} && tftpboot ${ramdisk_load_address} ${ramdisk_image} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}kernel_image=uImagekernel_load_address=0x2080000kernel_size=0x500000loadbit_addr=0x100000loadbootenv=load mmc 0 ${loadbootenv_addr} ${bootenv}loadbootenv_addr=0x2000000mmc_loadbit=echo Loading bitstream from SD/MMC/eMMC to RAM.. && mmcinfo && load mmc 0 ${loadbit_addr} ${bitstream_image} && fpga load 0 ${loadbit_addr} ${filesize}nandboot=echo Copying Linux from NAND flash to RAM... && nand read ${kernel_load_address} 0x100000 ${kernel_size} && nand read ${devicetree_load_address} 0x600000 ${devicetree_size} && echo Copying ramdisk... 
&& nand read ${ramdisk_load_address} 0x620000 ${ramdisk_size} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}norboot=echo Copying Linux from NOR flash to RAM... && cp.b 0xE2100000 ${kernel_load_address} ${kernel_size} && cp.b 0xE2600000 ${devicetree_load_address} ${devicetree_size} && echo Copying ramdisk... && cp.b 0xE2620000 ${ramdisk_load_address} ${ramdisk_size} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}preboot=if test $modeboot = sdboot && env run sd_uEnvtxt_existence_test; then if env run loadbootenv; then env run importbootenv; fi; fi; qspiboot=echo Copying Linux from QSPI flash to RAM... && sf probe 0 0 0 && sf read ${kernel_load_address} 0x100000 ${kernel_size} && sf read ${devicetree_load_address} 0x600000 ${devicetree_size} && echo Copying ramdisk... && sf read ${ramdisk_load_address} 0x620000 ${ramdisk_size} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}ramdisk_image=uramdisk.image.gzramdisk_load_address=0x4000000ramdisk_size=0x5E0000rsa_jtagboot=echo TFTPing Image to RAM... && tftpboot 0x100000 ${boot_image} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}rsa_nandboot=echo Copying Image from NAND flash to RAM... && nand read 0x100000 0x0 ${boot_size} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}rsa_norboot=echo Copying Image from NOR flash to RAM... && cp.b 0xE2100000 0x100000 ${boot_size} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}rsa_qspiboot=echo Copying Image from QSPI flash to RAM... && sf probe 0 0 0 && sf read 0x100000 0x0 ${boot_size} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}rsa_sdboot=echo Copying Image from SD to RAM... 
&& load mmc 0 0x100000 ${boot_image} && zynqrsa 0x100000 && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}sd_uEnvtxt_existence_test=test -e mmc 0 /uEnv.txtsdboot=if mmcinfo; then run uenvboot; echo Copying Linux from SD to RAM... && load mmc 0 ${kernel_load_address} ${kernel_image} && load mmc 0 ${devicetree_load_address} ${devicetree_image} && load mmc 0 ${ramdisk_load_address} ${ramdisk_image} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}; fiserverip=10.10.70.101thor_mmc=run dfu_mmc_info && thordown 0 mmc 0thor_ram=run dfu_ram_info && thordown 0 ram 0uenvboot=if run loadbootenv; then echo Loaded environment from ${bootenv}; run importbootenv; fi; if test -n $uenvcmd; then echo Running uenvcmd ...; run uenvcmd; fiusbboot=if usb start; then run uenvboot; echo Copying Linux from USB to RAM... && load usb 0 ${kernel_load_address} ${kernel_image} && load usb 0 ${devicetree_load_address} ${devicetree_image} && load usb 0 ${ramdisk_load_address} ${ramdisk_image} && bootm ${kernel_load_address} ${ramdisk_load_address} ${devicetree_load_address}; fiEnvironment size: 4742/131068 byteszynq-uboot> | How to specify U-Boot not to use uramdisk | linux kernel;u boot;ramdisk | null |
_webapps.28057 | I have two different organizations that I want to use Trello for, using the same email address or is it possible to have a username that is not an email address? | Creating multiple accounts with the same email | trello | null |
_unix.166711 | I'm new to this forum and linux in general and I'm having trouble connecting to the internet in some situations. I have a Thinkpad T530 that's running Linux Mint 13 LTS. The problem I'm having is that I cannot connect to my home network which is password protected. I have no problem connecting to unprotected networks. I would give more information but I'm not at my computer right now. Thanks for any help. | Unable to connect to WPA network | linux mint;wpa | null |
_unix.235554 | I managed to write the following script:

#!/bin/bash

#files list
file1=/tmp/1wall_long.txt
file2=/tmp/1wall_test1.txt
file3=/tmp/1wall_test2.txt
file4=/tmp/1wall_test3.txt
file5=/tmp/3mt_long.txt
file6=/tmp/3mt_OpenSpace_test1.txt
file7=/tmp/3mt_OpenSpace_test2.txt
file8=/tmp/3mt_OpenSpace_test3.txt
file9=/tmp/3rooms_test1.txt
file10=/tmp/3rooms_test2.txt
file11=/tmp/3rooms_test3.txt
file12=/tmp/20mt_OpenSpace_test1.txt
file13=/tmp/20mt_OpenSpace_test2.txt
file14=/tmp/20mt_OpenSpace_test3.txt

#script for 1wall_long file
if [ ! -e $file1 ]; then                            #check if the file exists
    echo "File 1wall_long.txt does not exist"       #if it does not exist, print echo output
else
    sed -i -e 's/- /-/g' $file1                     #remove space on the first 10 values
    awk '{print $7}' $file1 > /tmp/1wall_long_S.txt #print column number 7 and copy the output to a file
    rm $file1                                       #remove old file
fi

The script is repeated for all files described in the variables (basically I have the same script repeated 14 times with different variables). Is there a better way to do it, and what is the best practice in these situations? | bash script - loop function | bash;shell script | Personally, I would avoid hardcoding the file names. That is rarely a good idea and it is usually better to have the option of passing target files as arguments. Additionally, you are modifying the file in place and then deleting the original. That's not efficient; just modify the file on the fly and print the 7th column without having to write it to disk. For example:

#!/usr/bin/env bash

## Iterate over the file names given
for file in "$@"; do
    ## Get the output file's name. The ${file%.*} is
    ## the file's name without its extension.
    outfile=${file%.*}_S.txt
    ## If the file exists
    if [ -e "$file" ]; then
        ## remove the spaces and print the 7th column
        sed 's/- /-/g' "$file" | awk '{print $7}' > "$outfile" &&
        ## Delete the original, but only if the step
        ## above was successful (that's what the && does).
        rm "$file"
    else
        ## If the file doesn't exist, print an error message
        echo "The file $file does not exist!"
    fi
done

Then, you can run the script like this:

foo.sh /tmp/1wall_long.txt /tmp/1wall_test1.txt /tmp/1wall_test2.txt /tmp/1wall_test3.txt /tmp/20mt_OpenSpace_test1.txt /tmp/20mt_OpenSpace_test2.txt /tmp/20mt_OpenSpace_test3.txt /tmp/3mt_long.txt /tmp/3mt_OpenSpace_test1.txt /tmp/3mt_OpenSpace_test2.txt /tmp/3mt_OpenSpace_test3.txt /tmp/3rooms_test1.txt /tmp/3rooms_test2.txt /tmp/3rooms_test3.txt

If you do want to have the names hard coded, just use an array as suggested by @choroba:

#!/usr/bin/env bash

files=(/tmp/1wall_long.txt /tmp/1wall_test1.txt /tmp/1wall_test2.txt /tmp/1wall_test3.txt
       /tmp/20mt_OpenSpace_test1.txt /tmp/20mt_OpenSpace_test2.txt /tmp/20mt_OpenSpace_test3.txt
       /tmp/3mt_long.txt /tmp/3mt_OpenSpace_test1.txt /tmp/3mt_OpenSpace_test2.txt /tmp/3mt_OpenSpace_test3.txt
       /tmp/3rooms_test1.txt /tmp/3rooms_test2.txt /tmp/3rooms_test3.txt)

## Iterate over the file names given
for file in "${files[@]}"; do
    ## Get the output file's name. The ${file%.*} is
    ## the file's name without its extension.
    outfile=${file%.*}_S.txt
    ## If the file exists
    if [ -e "$file" ]; then
        ## remove the spaces and print the 7th column
        sed 's/- /-/g' "$file" | awk '{print $7}' > "$outfile" &&
        ## Delete the original, but only if the step
        ## above was successful (that's what the && does).
        rm "$file"
    else
        ## If the file doesn't exist, print an error message
        echo "The file $file does not exist!"
    fi
done
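As a purely illustrative aside (not part of the original answer), the same per-file pipeline can be sketched outside the shell. This hypothetical Python equivalent mirrors the loop above: strip "dash space" sequences, write the 7th column to a <name>_S.txt file, and remove the original only after a successful rewrite.

```python
import os

def extract_seventh_column(paths):
    """For each existing file, strip 'dash space' sequences, write the
    7th whitespace-separated column to <name>_S.txt, then remove the
    original -- mirroring the shell loop above."""
    for path in paths:
        if not os.path.exists(path):
            print(f"The file {path} does not exist!")
            continue
        base, _ = os.path.splitext(path)
        with open(path) as src, open(base + "_S.txt", "w") as dst:
            for line in src:
                fields = line.replace("- ", "-").split()
                if len(fields) >= 7:
                    dst.write(fields[6] + "\n")
        os.remove(path)  # only reached once the rewrite completed
```

The shell version is shorter for this job; the sketch is only meant to show the same derive-output-name / transform / delete structure.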
_unix.198791 | I'm running Ubuntu 15.04 64-bit Desktop Edition (a Debian-based Linux).

I used sudo dpkg-reconfigure console-setup from the command line to change the default console font type to Terminus. Immediately afterwards the console fonts changed to the sharper looking font face.

However, after a reboot Ctrl+Alt+F1 takes me to a console window that has the original chunkier looking style font face, not my selected choice.

The /etc/default/console-setup file appears to have been changed to my choices.

# CONFIGURATION FILE FOR SETUPCON
# Consult the console-setup(5) manual page.
ACTIVE_CONSOLES=/dev/tty[1-6]
CHARMAP=UTF-8
CODESET=guess
FONTFACE=Terminus
FONTSIZE=8x16
VIDEOMODE=
# The following is an example how to use a braille font
# FONT='lat9w-08.psf.gz brl-8x8.psf'

How do I permanently change the console font to use my preferred font? | How do I permanently change the console TTY font type so it holds after reboot? | command line;console;tty;fonts | null
_codereview.173706 | This is a small utility class I made, format.utility.ts. There are some other one-off utility methods in here I removed for this. The one in question is duration().

class Duration {
    constructor(public value: number, public name: string) { }
}

export class FormatUtility {
    private static durations: Array<Duration> = [
        new Duration(60, 'seconds'),
        new Duration(60, 'minutes'),
        new Duration(24, 'hours'),
        new Duration(7, 'days'),
        new Duration(4, 'weeks'),
        new Duration(12, 'months'),
        new Duration(null, 'years')
    ]

    public static duration(time: number, interval?: string, duration?: Duration): string {
        time = Math.abs(time);
        duration = duration == null
            ? (interval == null ? this.durations[0] : this.durations.find(x => x.name == interval))
            : duration;

        if (!duration) throw new Error(`The interval specified (${interval}) is unknown`);

        if (!duration.value || time <= duration.value)
            return Math.round(time) + ' ' + duration.name;

        let i: number = this.durations.findIndex(x => x.name == duration.name);
        return this.duration(time / duration.value, null, this.durations[i + 1]);
    }
}

tests/examples:

FormatUtility.duration(12)
// '12 seconds'
FormatUtility.duration(87)
// '1 minutes'
FormatUtility.duration(112, 'hours')
// '5 days'
FormatUtility.duration(4585, 'minutes')
// '3 days'
FormatUtility.duration(23999999, 'hours')
// '2976 years'

The point of this is I have a lot of places in my app where I have to display a duration like '12 days ago' or '6 minutes ago', but there are no requirements on the specific interval to use. They wanted to always have 'the most human readable interval for the specific time'. So I came up with this utility and method. Using moment.js elsewhere I get the duration between two dates in seconds and then call FormatUtility.duration(seconds) and have it generate that human readable label for me.

For the review, I'm notoriously awful with recursion, which is why I chose to use it here.
Looking for general feedback on Typescript usage, vanilla JS, recursion, and whatever else. Is there a better way? Or changes to make this better?If anyone sees red flags, please let me know. | Recursive time formatter | datetime;recursion;formatting;typescript | null |
_unix.43894 | When I am running background processes like(I have 9 files suffixed by phastcon):for i in *.phastcon; do cut -f 2 $i >$i.value & doneAfter kicking the Enter, I get the output in terminal showing background id and process id,[1] 22917[2] 22918[3] 22919[4] 22920[5] 22921[6] 22922[7] 22923[8] 22924[9] 22925But hen finished, I got[7] Done cut -f 2 $i > $i.value[8]- Done cut -f 2 $i > $i.value[1] Done cut -f 2 $i > $i.value[2] Done cut -f 2 $i > $i.value[3] Done cut -f 2 $i > $i.value[4] Done cut -f 2 $i > $i.value[5] Done cut -f 2 $i > $i.value[6]- Done cut -f 2 $i > $i.value[9]+ Done cut -f 2 $i > $i.valueThe results are all right.But I can not understand what is the difference of '-' and '+' after the square.Thank you for all helps!Tong | The meaning of '-' and '+' symbol when background processes are finished? | bash | From the bash manpage, in the section JOB CONTROL:In output pertaining to jobs (e.g., the output of the jobs command), the current job is always flagged with a +, and the previous job with a -.This explains the + behind the [9], because that was the last job started. It also explains the - behind [8] and [6], because they were the previous jobs at the moment they finished ([6] was the previous job because [7] and [8] finished before it). |
_unix.275969 | If I use cp in archive mode e.gcp -a /my_old_directory/* /new_location/my_new_directory/to replicate a directory structure and all that is in it.If I then run the same command again will any changed files be refreshed or over-written or skipped?(I know rsync is more advanced at this kind of thing, I'm just curious about cp -a as I can't find any description of what it actually does in this case. | Does cp -a refresh existing files, overwrite or skip | shell;cp | null |
_unix.329495 | I have RAID5 (10 x HDD +2 x Spare SATA setup)Everything was fine until i've replace 1st hdd (sda), system wont boot now.after sdb,sdc,and another hdd failure, system has rebuilded everything, no data loss of something happend, rebooted many times, changes chunk size for more performance, system worked perfectly, but after 1st hdd (sda) failure system wont boot. Any ideas how to solve it?whole system is in one partition /, there is no swap or /boot, or any other partition there right now.I can boot on rescue disk, mount whole raid, status is clean, but it wont boot alone.the operating system is fedora core 25./dev/md127: Version : 1.2 Creation Time : Sat Dec 10 11:02:45 2016 Raid Level : raid5 Array Size : 40007870300160 (36.38 TiB 40 TB) Raid Devices : 10 Total Devices : 12 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Sat Dec 10 21:56:39 2016 State : clean Active Devices : 10Working Devices : 12 Failed Devices : 0 Spare Devices : 2 Layout : left-symmetric Chunk Size : 512K Name : fc25.host:OS_RAID UUID : c130fef5:7fd56abe:4a813111:71db6c3f Events : 719 Number Major Minor RaidDevice State 7 8 32 0 active sync /dev/sdc 8 8 48 1 active sync /dev/sdd 9 8 80 2 active sync /dev/sdf 10 8 96 3 active sync /dev/sdg 16 8 176 4 active sync /dev/sdl 15 8 160 5 active sync /dev/sdk 14 8 144 6 active sync /dev/sdj 13 8 128 7 active sync /dev/sdi 12 8 112 8 active sync /dev/sdh 11 8 64 9 active sync /dev/sde 5 8 0 - spare /dev/sda 6 8 16 - spare /dev/sdb | linux boot raid 5 (config 10HDD +2 spare) after 4hdd replacement | boot;raid5 | null |
_scicomp.25766 | Is there routine in standard BLAS or LAPACK to set strictly-upper triangular part (the part above the diagonal) of a matrix to alpha? I do not want to change diagonal elements so laset is not a good candidate. | Set strictly upper triangular part of a matrix to alpha using BLAS or LAPACK | lapack;blas | null |
_unix.106118 | How to block all the network traffic from one user? But other users are alive to the network that the blocked user is able to connect to other users who have the permission to the internet. | How to block all the network traffic from my running user | networking;opensuse;tor | null |
_cogsci.861 | I've learned through course lectures that infants can recognize faces shortly after birth (Slater & Quinn, 2001), and have a visual preference for human features as young as 1 month old (Sanefuji et al., 2011)[pdf]. Infants are also known to prefer hearing their mother's voices (DeCasper & Fifer, 1980) and motherese (Fernald, 1984). Preferences for higher-pitched singing also suggest that babies may in general prefer higher pitches (Trainor & Zacharias, 1997). In general, I would like to know more comprehensively: what types of sound preferences do young infants have? And what is the youngest age at which these preferences are observed? | What types of sounds do young infants prefer? | social psychology;developmental psychology;perceptual learning | null
_scicomp.14545 | I've heard that classical Gram-Schmidt is more amenable to parallelization than modified Gram-Schmidt; apparently the reason has something to do with level 2 BLAS, which I'm not familiar with. Also, in a comment on this question, @Jed Brown talks about left-looking and right-looking Gram-Schmidt, which I haven't found a reference to.

How is the Gram-Schmidt algorithm parallelized in practice, and how effective is the parallelization? | Parallel Gram-Schmidt algorithms | linear algebra;parallel computing | null
_unix.280751 | I have a 3D surface in a text file which I need to plot on a regular X/Y grid. However, the values for X and Y are not regularly spaced and are not necessarily in ascending order. I need to regularly space the X and Y coordinates and interpolate the value in the Z column. The Z column does not need to be sub-sampled.

Here is an example of the file. The columns are X, Y and Value (or Z):

50459.83 170405.62 0.01
50439.13 170384.92 0.03
50459.83 170384.92 0.04
50480.53 170384.92 0.01
50459.83 170364.22 0.13
50480.53 170364.22 0.14
50397.72 170343.51 0.27
50418.42 170343.51 0.33
50480.53 170343.51 0.32
50501.23 170343.51 0.36
50563.34 170343.51 0.29

I would like an output like:

50460 170400 0.01
50440 170380 0.03
50460 170380 0.04
50480 170380 0.01
50460 170360 0.13

I.e. have X and Y sampled on a 20x20 grid, and have the Z column interpolated to those grid points (which I have not done in the example output). The file is very large, tens of millions of lines. Thank you. | How do I interpolate a text file of 3D coordinates (X,Y,Z) onto a regular grid? | shell;awk | null
_softwareengineering.303835 | Currently we are planning to switch from our software version 5.x.x to 6.x.x. Such major releases contains in our case a lot of refactoring work and changing the software architecture. Instead of creating a new branch for version 6 (git), I thought to create a custom repository for this. In general, developing the new version bases on the old version, so it would be a copy.My problem is that developing version 5 will not stop, because bug fixes and a few minor changes will be done. But now I have two versions I am working on, in two separate repositories. What is the best way to make changes in both, without copy code, or do the work twice? Is there some effective way?Maybe some one else has the same issue before. | Making major version step in software development in separate repository | architecture;version control;git;branching;sdlc | One common way to handle this scenario is to use a trunk/branch concept. What you do is have the single repository and branch the 5.x.x version for maintenance reasons. Then you put all of the your new 6.x.x changes into the trunk. That way you maintain all of the version history of your code. This also allows you to check out the old version and make a targeted fix there without impacting the trunk. Here is what this would look like: |------branch (5.x.x) |----------------------------------------------------------------------Trunk (6.x.x)Then later if you ever need to have a version 7.x.x you could do the same again: |------branch (5.x.x) | |---branch (6.x.x) | |---------------------------------------------------------Trunk (7.x.x) |
_codereview.163292 | This method finds all permutations of a string and stores them in a sorted array. It then returns the element in the middle of the array. def middle_permutation(string) sorted = string.chars.to_a.permutation.map(&:join).sort sorted[sorted.length / 2 - 1]endFor strings over 10+ characters this code is too slow. How can I speed it up?Here are my tests. They pass, but I'd like it to be more efficient:describe Basic tests do Test.assert_equals(middle_permutation(abc),bac) Test.assert_equals(middle_permutation(abcd),bdca) Test.assert_equals(middle_permutation(abcdx),cbxda) Test.assert_equals(middle_permutation(abcdxg),cxgdba) Test.assert_equals(middle_permutation(abcdxgz),dczxgba)end | Finding the middle permutation | performance;ruby;array;combinatorics | Calculating every permutation is very expensive so you want to avoid doing that. There are some ways to handle this:1) Sort the characters before you run your algorithm. That way you are sorting a few characters not billions of permutations.2) If the string has an even number of characters (say abcd) remove the almost middle character (b) and rewrite as something like 'b' + 'acd'.reverse3) If the string has an odd number of characters (say abcde) you can remove the middle character (c) leaving abde and simplify the result as 'c' + middle_permutation('abcd')(3) Will always be followed by (2)So you could rewrite your code as:def middle_permutation2(string) string = string.chars.sort.join() return string if string.length <= 2 if string.length.even? middle = string.length / 2 - 1 remainder = string[0...middle] + string[middle+1..-1] string[middle] + remainder.reverse else middle = string.length / 2 remainder = string[0...middle-1] + string[middle+1..-1] string[middle] + string[middle-1] + remainder.reverse endend |
_cstheory.7026 | I'm interested in a data structure (let's call it a DMV queue, or DMV for short) over keys (say, strings) with the following operations:

empty is a DMV containing no keys.
enqueue(q,k) adds the key k to the back of the DMV q, unless k is already in q, in which case it does nothing.
dequeue(q) deletes the key at the front of the DMV q, if one exists, and returns it.
delete(q,k) removes the key k from the DMV q.
depth(q,k) returns a natural number indicating the approximate number of keys between the key k and the front of the DMV q. Let $k_q$ denote the exact number of keys between k and the front of q. Then there must be some c such that for all k and q, $k_q/c$ < depth(q,k) < $ck_q$.

I think I know how to provide the queue operations in $\Theta(1)$ time and delete and depth in $\Theta(\lg n)$ time (all expected amortized). Is it possible to do better?

My proposed solution is as follows: Maintain a balanced tree with $O(1)$ operations at the ends. Nearly any finger tree will do. This tree will store the keys in queue order at its nodes. Also, annotate every non-spine node with its number of descendants.

Keep a hash table mapping keys to pointers to nodes in the tree.

To enqueue a key k, add k to the back of the tree. This invalidates $O(1)$ node pointers and creates $O(1)$ new node pointers, so we need only perform $O(1)$ hash table operations. Dequeue is similar.

To delete a key, we look it up in the hash table and find its location in the tree, then delete it from the tree. This takes $O(\lg n)$ time in the tree, and invalidates $O(\lg n)$ slots in the hash table. We must also maintain non-spine node size annotations in the tree, but this also only takes logarithmic time.

To find the depth of a key, we first annotate the spine nodes of the tree with their number of descendants. This takes $O(\lg n)$ time. We then look up the key in the hash table and find its location in the tree.
We then follow parent pointers until we reach the root, summing the annotations at left siblings. Note that this is the exact depth. | Keyed queues with depth queries and delete | ds.data structures | Your solution can be modified to do everything in $O(1)$ amortized time. Instead of maintaining a balanced tree, just keep track of the number of successful enqueue operations (those in which something was actually added to the back of the queue) and attach that number to the queue along with the item during a successful enqueue operation. To find the depth of a key, just take the difference between the number of successful enqueues when it was enqueued and the number of successful enqueues when the item currently at the front of the queue was enqueued. Note that this gives the exact depth.

If you're worried about storing excessively large values after many enqueue and dequeue operations, you can also keep track of the size of the queue, and whenever the size is less than half the number of enqueues, re-number the enqueue counts for each entry to 1 through the size and reset the number of enqueues to the size. Everything takes constant amortized time, and each key takes at most one more bit than necessary to represent its depth at insertion.
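To make the counter bookkeeping concrete, here is a small, purely illustrative Python sketch of this idea. The class name DMVQueue and the dict-based storage are editorial choices, not from the answer; note the depth comment hedges the case where intervening deletes make the counter difference an overestimate.

```python
from collections import OrderedDict

class DMVQueue:
    """Keyed FIFO queue following the counter idea above: each stored key
    remembers how many successful enqueues had happened when it was added,
    so depth is a difference of two counters."""

    def __init__(self):
        self._items = OrderedDict()  # key -> enqueue counter, in queue order
        self._enqueues = 0           # number of successful enqueues so far

    def enqueue(self, key):
        if key in self._items:       # already present: do nothing
            return
        self._enqueues += 1
        self._items[key] = self._enqueues

    def dequeue(self):
        if not self._items:
            return None
        key, _ = self._items.popitem(last=False)  # pop from the front
        return key

    def delete(self, key):
        self._items.pop(key, None)

    def depth(self, key):
        # Difference of enqueue counters: exact when no keys between the
        # front and `key` were deleted, an overestimate otherwise.
        front_counter = next(iter(self._items.values()))
        return self._items[key] - front_counter
```

Every operation touches only a constant number of dict entries, which matches the $O(1)$ amortized claim.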
_cs.26047 | Given an undirected and connected graph $G=(V,E)$, two vertices $s,t$, and a vertex $d \in V- \{s,t\}$, we would like to define a legal path as a path from $s$ to $t$ that passes through $d$ (at least once) and is of even length (regarding number of edges). We need to find such a path that is the shortest, in $O(V+E)$ time.

I thought about BFS from $s$ to find a shortest path to $d$, and BFS from $d$ to find a shortest path to $t$, but then it wouldn't necessarily be of even length. Plus, such a path we're looking for is not necessarily simple. Any hints? | Shortest even path that goes through a vertex | algorithms;graphs | Your remark is true: there might be no simple even path from $s$ to $t$; an even path perhaps includes a cycle of odd length.

The shortest path from $s$ to $t$ via $d$ is the shortest path from $s$ to $d$ plus the shortest path from there to $t$.

To compute even-length paths you might consider turning the graph into a bipartite graph using two copies of itself. Double $V$ by adding a copy $V'$. Now duplicate every edge $(x,y)$ into $(x,y')$ and $(x',y)$, where the primes indicate copies in $V'$. Now the shortest path from $s$ to $t$ will be of even length. (And all paths of even length in the original graph are 'represented' in the new graph.)

Problem: the intermediate node $d$ might be $d'$, its copy in $V'$. Your turn to connect the two requirements 'via $d$' and 'even length'.
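As one illustrative way to connect the two requirements (an editorial sketch, not part of the original hint-style answer): BFS over (vertex, parity) states is equivalent to BFS on the doubled graph, and the walk through $d$ is even exactly when its two halves have equal parity. Function names below are made up for illustration.

```python
from collections import deque

def parity_dists(adj, src):
    """BFS over (vertex, parity) states: dist[v][p] is the length of the
    shortest walk from src to v whose length has parity p (None if none)."""
    dist = {v: [None, None] for v in adj}
    dist[src][0] = 0
    queue = deque([(src, 0)])
    while queue:
        v, p = queue.popleft()
        for w in adj[v]:
            if dist[w][1 - p] is None:
                dist[w][1 - p] = dist[v][p] + 1
                queue.append((w, 1 - p))
    return dist

def shortest_even_path_via(adj, s, t, d):
    """Length of the shortest even walk s -> t passing through d, or None.
    The two halves must have equal parity so the total is even."""
    from_s = parity_dists(adj, s)
    from_d = parity_dists(adj, d)
    best = None
    for p in (0, 1):
        a, b = from_s[d][p], from_d[t][p]
        if a is not None and b is not None:
            total = a + b
            if best is None or total < best:
                best = total
    return best
```

Each BFS visits every (vertex, parity) state at most once, so the whole thing stays within the requested $O(V+E)$.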
_datascience.16411 | This question is only about the vocabulary. Do / can you say

data item
data sample
recording
sample
data point
something else

when you talk about elements of the training / test set? For example:

The figure shows 100 data items of the training set.
Database A contains the same data items as database B, but in another format.
The remaining data items were removed from the dataset.
Those 10 classes have 123456 data items.

Please provide papers with examples. According to Google n-grams: | How is a single element of the training set called? | data;terminology | The term you are looking for is "Example".

Source: Martin Zinkevich, Research Scientist at Google (http://martin.zinkevich.org/rules_of_ml/rules_of_ml.pdf)

Instance: The thing about which you want to make a prediction. For example, the instance might be a web page that you want to classify as either "about cats" or "not about cats".

Label: An answer for a prediction task, either the answer produced by a machine learning system or the right answer supplied in training data. For example, the label for a web page might be "about cats".

Feature: A property of an instance used in a prediction task. For example, a web page might have a feature "contains the word 'cat'".

Example: An instance (with its features) and a label.
_webapps.9917 | I'm looking for a strongly visual but still simple alternative to text-based chat. The important feature I'm looking for is a friendly visual representation of user state/emotion (available/away/busy), with a rough analog to actually being in a room together. Chat would still be text-based, persistent (doesn't fade away) and mostly public (directed to the whole room). Visually distinguishing directed public chat would be nice. Private chat must also be possible. Being able to move avatars to different areas of the room would be nice. Simply having large customisable avatars on the side of a chatroom would be a good start. Does anything like this already exist?

Ideally this would be an open standard technology built on existing standards (XMPP?), with no need for a server or a very light server, with an open-source, cross-platform client. And a pony.

For non-web-app answers, see https://superuser.com/questions/217129/simple-visual-virtual-presence-chat-client

Example of a mythical good answer: A one-screen Metaplace room with persistent chat history. (Pity Metaplace no longer exists and didn't have a persistent log.)

Example of a poor answer: Microsoft Comic Chat: very visual but poor/confusing chatroom history, also no longer supported.

Example of a bad answer: Second Life: big download to install, lots of bandwidth to run, complicated virtual environment, complicated user interface. | Simple visual/virtual presence chat client | webapp rec;chat;avatar | null
_unix.247203 | So, basically I was messing up with minix and qemu and I messed up too much.

me@meplepl ~ $ which
bash: /usr/bin/which: cannot execute binary file: Exec format error
me@meplepl ~ $ file /bin/which
/bin/which: Minix-386 executable

I have the same problem with awk and somehow ssh. It turns out I somehow replaced my binaries with those from minix? Is there an easy fix, or do I have to go back to my previous backup? | How do I repair binaries? | binary | If your package manager is in a working state, you can force reinstallation of the packages containing the binaries you overwrote. Depending on your distro:

apt-get --reinstall install *package-name*
rpm -iv --replacepkgs *package-name*
yum reinstall *package-name*
emerge *package-name*
pacman -S *package-name*

If that doesn't work, you say you have backups, so I would just restore /bin and /usr/bin from the backup.

A helpful way to avoid doing this in the future is the age-old advice "don't use the root account when you don't need to". Once qemu is installed you can run it as your user. The benefit is that your user cannot overwrite /usr/bin on the host, so you can't mess up your system.
_webmaster.105025 | My website has 6 million pages of companies, e.g. www.mywebsite.com/company-a. I will generate one sitemap index file and put the URL in robots.txt.

When company A updates some information on its page (for example, its address), I will change the lastmod in the sitemap file that contains the URL www.mywebsite.com/company-a, and I will change the lastmod for this sitemap in the sitemap index file. Afterwards, I will send a ping to Google with my sitemap index file URL.

My question is: Is it really worth doing this? What would be the gain compared to not doing this update and ping? | Sitemap update lastmod and ping. what is the advantage? | seo;sitemap;ping | null
_unix.260240 | I want to be able to completely close termite from within a bash script. I have something like this:

while true; do
    read -n 1 -s result
    case $result in
        [c]* ) exit 0;;
    esac
done

And I want hitting c to close termite. | Exit termite from within bash | bash;arch linux;termite | Execute your command with the exec command to replace your bash with your script; then, when your script interprets the exit command, it will close your terminal. Run your command like:

exec ./myscript.sh

NOTE: Your script must have execute permission.
_unix.261442 | I have a requirement to identify sequence gaps in a set of files. The sequence starts at FILENAME_0001 and ends at FILENAME_9999. After this the sequence is restarted from 0001.

To implement a proper sequence check I used ls -rt to pick the files in order of modified time and then compared with the previous file's sequence number. If the previous file was 9999, I check whether the next one is 0001 (to accommodate the sequence reset). Recently I came across a scenario where files were listed in the below order:

FILENAME_0001
FILENAME_0002
FILENAME_0005
FILENAME_0003
FILENAME_0004
FILENAME_0006
FILENAME_0007

This was because files 3, 4 & 5 had the same modified time to the second. Only the millisecond was different. So I am guessing ls -rt considers only up to the seconds. Could someone suggest a workaround? | File sort by time issue | ls;sort | If your find has printf, print out the mtime in seconds followed by the filename, then use sort, and finally cut:

find . -type f -printf "%T@\t%f\n" | sort -k 1n -k 2 | cut -f 2-

The find outputs TIMESTAMP FILENAME on each line. The sort first sorts the timestamps in numerical order. If the timestamps are equal, it will use the filename as a last resort. The cut removes the timestamp from the output.

EDIT: Your perl solution works, but I would do it differently. Here's the simplest:

find . -type f -print | perl -lne 'print(((stat($_))[9]) . "\t" . $_)' | sort -k 1n -k 2 | cut -f 2-

No need to convert the time to a string and back again. Just output stat's mtime as a numeric value, as find would have done.
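If a scripting language is already in the mix, the same full-precision ordering can be sketched in Python (st_mtime keeps fractional seconds where the filesystem records them). This is an illustrative addition, not part of the original answer; the function name is made up.

```python
import os

def files_by_mtime(directory):
    """Return regular file names sorted by modification time, breaking
    ties by name -- the same ordering the find | sort | cut pipeline
    in this answer produces."""
    entries = [e for e in os.scandir(directory) if e.is_file()]
    return [e.name for e in sorted(entries,
                                   key=lambda e: (e.stat().st_mtime, e.name))]
```

Sorting on the (mtime, name) tuple gives the numeric-then-lexical tie-breaking in one step.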
_codereview.11759 | In my Backbone.Collection I need to parse the response before rendering it in a Backbone.View. The following code works, but it would be great to have some suggestions:

// response is an array of objects
// [{id:1, prop: null }, {id:2, prop: "bar" }]
// the output parsed_response can be
// [{id:1, prop: null, isProp: true },
//  {id:2, prop: "bar", isProp: false }]
parse: function(response){
    var parsed_response;
    parsed_response = _(response).clone();
    var parsed_response = _.map(parsed_response, function (obj) {
        return function (_obj) {
            if (_obj.prop === "" || typeof _obj.prop === null ) {
                _obj.isProp = false;
            } else {
                _obj.isProp = true;
            }
            return _obj;
        }(obj)
    });
    return parsed_response;
}

My questions are: Is there a way to improve the code? Also renaming the variables to make it more clear. Since this code should be reused from other collections, what is the best way to generalise it? | How to parse a collection of objects in Backbone? | javascript;backbone.js | Based on the way your parse function works, you're worried about mutability. Your code can be much shorter. Also, your use of _.clone is wasted since it's only a shallow copy. I've rewritten the parse code and now it looks like this:

// response is an array of objects
// [{id:1, prop: null }, {id:2, prop: "bar" }]
// the output parsed_response can be
// [{id:1, prop: null, isProp: true },
//  {id:2, prop: "bar", isProp: false }]
parse: function(response){
    return _.map(response, function(obj) {
        obj = _.clone(obj);
        obj.isProp = obj.prop !== "" && obj.prop !== null;
        return obj;
    });
}

If you want further explanation let me know :)

The only thing I'm concerned with is how your prop works? Can you just check for a falsey value here (i.e. !obj.prop) or do you really need to explicitly check for an empty string or null?

In terms of re-usability, you're covered here. parse() doesn't reference this, so it can more or less be called statically from anywhere in your code.
But be careful where you use it from: if you start calling this all over the place, it would make more sense to refactor this parse() function into a more generic object.
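As a language-agnostic illustration of the clone-then-augment pattern in this answer (an editorial addition, not from the original), the same transform looks like this in Python; it follows the rewritten code's check, flagging isProp false only for empty string or null.

```python
def parse(response):
    """Return new dicts with an isProp flag, leaving the input untouched
    (a shallow copy per element, like _.clone inside _.map above)."""
    return [{**obj, "isProp": obj.get("prop") not in ("", None)}
            for obj in response]
```

Because each element is shallow-copied, the caller's original objects are never mutated, which is the mutability concern the answer addresses.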
_softwareengineering.272206 | I have been asked to provide a C-library of my code (which I have written in a high-level language). I will hire a programmer to implement my code in C. I would like a short introduction to what a C-library means before I start this process.

Is it correct that I can provide a C-library and that the people who use it will not be able to see the actual source code?

I understand that the library will include .h-files which determine how the people that I give the code to will interact with the library. Can I have just one of these files so that the internal structure is hidden?

In this situation, I assume that the library should be dynamically linked. Why is that? | C-library - newbie guide | c;libraries | Yes. Yes. It doesn't have to be dynamically linked.

In detail: When creating a library (it doesn't have to be in C, but I assume this means it needs to expose its functionality in the form of exported C functions), the source code is turned into the equivalent machine instructions. While a skilled hacker could turn this machine code back into a higher-level language, it's really awkward, and the resulting decompilation is really difficult for a human to understand. So you're pretty much safe from anyone stealing your algorithms, unless they're really worth the effort.

In order to use a library like this, you need a way to tell programs that link with the lib what is inside it; this is typically done with a header file. A single header containing only those exported functions is fine. The only reason people use multiple headers is that they don't want the trouble of maintaining duplicates and so simply ship the headers used in development.

The choice of dynamic or static is up to you. A dynamic library can be replaced with a newer version easily. This is the most common reason to ship in this format.
_unix.203943 | I'm trying to run an older SIMetrix version on Ubuntu 15.04 64-bit (if there is a newer version of SIMetrix around, tell me!). When I do, I get the following error:

user@user-Ubuntu-Laptop:/opt/simetrix_intro_53/bin$ ./SIMetrix
./SIMetrix: error while loading shared libraries: libXext.so.6: cannot open shared object file: No such file or directory

But when I run sudo ldconfig -v | grep Xext the output is libXext.so.6 -> libXext.so.6.4.0. So why is the file not found? Running SIMetrix with sudo doesn't help.

user@user-Ubuntu-Laptop:/opt/simetrix_intro_53/bin$ sudo linux32 --3gb ./SIMetrix
./SIMetrix: error while loading shared libraries: libXext.so.6: cannot open shared object file: No such file or directory

$ file ./SIMetrix
./SIMetrix: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped

$ uname -a
Linux user-Ubuntu-Laptop 3.19.0-16-generic #16-Ubuntu SMP Thu Apr 30 16:09:58 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

$ ldd ./SIMetrix
    linux-gate.so.1 => (0xf777f000)
    libSupportDll.so => /opt/lib/simetrix/5.3/libSupportDll.so (0xf7704000)
    libqt-mt.so.3 => /opt/lib/simetrix/5.3/libqt-mt.so.3 (0xf6ed9000)
    libXext.so.6 => /usr/lib/i386-linux-gnu/libXext.so.6 (0xf6ec3000)
    libX11.so.6 => /usr/lib/i386-linux-gnu/libX11.so.6 (0xf6d78000)
    libpthread.so.0 => /lib/i386-linux-gnu/libpthread.so.0 (0xf6d5b000)
    libstdc++.so.5 => not found
    libm.so.6 => /lib/i386-linux-gnu/libm.so.6 (0xf6d0e000)
    libgcc_s.so.1 => /lib/i386-linux-gnu/libgcc_s.so.1 (0xf6cf0000)
    libc.so.6 => /lib/i386-linux-gnu/libc.so.6 (0xf6b35000)
    libdl.so.2 => /lib/i386-linux-gnu/libdl.so.2 (0xf6b30000)
    libstdc++.so.5 => not found
    libGL.so.1 => not found
    libXmu.so.6 => not found
    libSM.so.6 => not found
    libICE.so.6 => not found
    libstdc++.so.5 => not found
    libxcb.so.1 => /usr/lib/i386-linux-gnu/libxcb.so.1 (0xf6b0d000)
    /lib/ld-linux.so.2 (0xf7780000)
    libXau.so.6 => /usr/lib/i386-linux-gnu/libXau.so.6 (0xf6b08000)
    libXdmcp.so.6 => /usr/lib/i386-linux-gnu/libXdmcp.so.6 (0xf6b01000) | Cannot open shared object file Error even though ldconfig is showing entry | ubuntu;libraries | null
_unix.337752 | I would like to be able to perform actions on files in different locations within a common window/container. Let's say I have the following directory structure: DCIM browser-photos merel.jpg yent.jpg raha.jpg Camera laki.jpg wion.jpg darta.jpg qonad.jpg isha.jpg mowi.jpg kaens.jpg3 directories, 10 filesIs it possible for me to start a file manager like Thunar or Ranger in a way that would let me view all 10 files on a single level (no nesting), and when I perform an action on a single file (e.g. remove) it's performed within the file's location.This isn't just for images. I'm currently going through a few old hard drives and I'm spending a lot of time traversing through unnecessary hierarchies. | Union based file manager | files;directory;thunar;ranger | null |
_softwareengineering.310980 | I'm building a new CMS with Node.js, and I have a question.

Would adding WordPress-like multi-site support to the system be a good idea? Or should I let the user handle it via a reverse proxy like nginx?

I realise that having a single process handling vhosts that share a single database can be advantageous in some ways, but it would have its caveats as well.

I would love to hear your input on this, since I can't decide on my own. | CMS Design: Is multi-site a good idea? | design;architecture;cms | This largely depends on the scale you expect, and (perhaps even more importantly) the scalability of the platforms you plan for your CMS to run on.

For example, a lightweight (i.e. AWS t2.micro) server would handle several (not hundreds, but enough) low-volume sites very easily. However, a single high-volume site (think Wikipedia, etc.) is obviously distributed over many MUCH more heavyweight servers (note the plural).

Where this becomes a problem is in the ability of one site to scale up. Let's say someone's political blog (running on your new CMS) gets linked from CNN. In about five minutes it goes from a few dozen hits a day to hundreds per second. Now, obviously the underlying platform scalability is dependent upon your platform (AWS, Azure, actually hosting it on a real-life computer under your desk, etc.), but the ability of your CMS to leverage the resources it has in such a situation is VERY MUCH impacted by hosting multi-sites rather than having a dedicated instance.

There is also the human element to consider. Let's say your multi-site instance is hosting sites from different users... Now when Fred's politics blog goes viral, Aunty May's Delicious Biscuits Website also slows down (or even goes offline entirely). This is a lot more difficult to explain to poor old Aunty May than if Fred was the holder of all of the virtual sites in the instance (i.e. Fred's politics blog's popularity made Fred's scrap-booking site go offline).

Obviously there are performance benefits to be gained from things like DB connection pooling, etc. for multi-site systems (otherwise they wouldn't exist). But make sure sites can be promoted as they become more popular (and require more resources). A small site that starts out as a virtual entry in a multi-site system may one day become a behemoth (perhaps not as large as Wikipedia, but still) requiring multiple dedicated servers and databases.

Overall, I would take a good look at your expected use-cases/users/etc. How likely is it that there will be popular sites (you can assume unlikely, but never impossible)? How likely is it that one user will want to have multiple virtual sites on this CMS? For that number of users, is it worth the extra development effort? Would most (or all) of those users easily know how/be willing to just use nginx to manage it if you didn't do multi-site?
_webapps.9022 | How can I sync Diigo with Google Bookmarks without having to manually import/export continuously? | Sync Diigo with Google Bookmarks | google bookmarks;diigo | null |
_codereview.158412 | Good morning, gents. I'm trying to practice functional programming using JavaScript. These pure functions just work together to find the difference between two arrays. I know that this might be an impractical example of functional programming, but could you let me know where I can improve, or maybe where I've misunderstood an FP technique?

"use strict";

let find = (arr1) => (arr2) => {
    let diff1 = findDiff(arr1)(arr2);
    let diff2 = findDiff(arr2)(arr1);
    return concat(diff1)(diff2);
};

let concat = (arr1) => (arr2) => arr1.concat(arr2);

let findDiff = (arr1) => (arr2) => arr1.filter( (elem) => arr2.indexOf(elem) < 0 );

let sweet = find( [1,2,3,4,5] )( [1,2,3,5,6] );
console.log(sweet); | Finding the difference between two different arrays using JS and FP | javascript;functional programming | I am concerned that you really haven't defined "difference" here and that perhaps your code does not perform the way you expect.

Your example is quite simple in that you would expect [4, 6]. But what if you had arrays like:

[1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
[1, 2, 3, 5, 6]

What do you expect to be returned? Currently, your function would return [4, 4, 6], but is that what you expect, or would each instance of a repeated value need to be treated differently (i.e. the return would be [1, 2, 3, 4, 4, 5, 6])?

Have you considered flipping your array values into object keys (if type conversion is not a concern), or using Array.sort() on the input arrays in combination with the fromIndex parameter to Array.indexOf(), to minimize the array iteration caused by Array.indexOf()? This may not be a concern if you don't expect to be diffing large arrays.

Do you really want to support the find()() syntax for this? I don't really see much value in nesting your function calls in this manner vs.
just using:

let find = (arr1, arr2) => {
    // function logic
}

You are not ever returning the intermediate function for possible use elsewhere, so I don't follow the need for this approach.

I agree with the comment from @GregBurghardt about not unnecessarily wrapping native array functions.

I don't like the function being called find. Perhaps use something more descriptive of what is actually happening here, like arrayDiff(). You are not "finding" here at all.

To me this whole thing would be clearer/simpler like this:
_unix.363247 | Installed and configured xrdp and I am able to connect from a Windows mstsc.exe but only as root. I found this forum post dealing with the situation where the only user who can log into a machine running xrdp is root: https://forums.kali.org/showthread.php?32062-Unable-to-login-using-xrdp-and-non-root-usernameButthere is no Xwrapper.config in the specified directory on my machine. The commands man Xwrapper.config and man XOrg.wrap do not work.and when I create this file as he specified and reboot, there is nochange.OS: Fedora 19 (it will NOT be upgraded for the purposes of this question)How can I allow other users to log in via RDP and disallow remote root logins? | Only root can log into the machine running xrdp | fedora;xrdp | The sesman.ini config file has entries for allowed users and groups.look at man sesman.ini for the exact usage of theses keys.TerminalServerUsersTerminalServerAdminsAlwaysGroupCheck |
_webapps.85774 | I have just started using cognito forms and have hit a brick wall as I am not a pro when it comes to formulas.I have a section that has (5) Radio button choices with the following titles;2' x 2' 2' x 4' 4' x 4' 4' x 6' 4' x 8'when one of these buttons is selected for example number (2) 2' x 4' I would like to take the answer to that which is 8 and multiply that number by another field which has asked for the quantity.another example is someone types in the quantity field (6) and then they select radio button number (5) which is 4' x 8' the answer to this formula needs to be (4' x 8')*(6) = 192 | How to calculate using cognito form choices from radio button | cognito forms | null |
_webmaster.104440 | I am seeing my Google my business page website visits as - 4,412 while in google analytics sessions - 8,000 for same month (1 month data). We are using utm parameters to capture GMB listings clicks in analytics account. Any idea why is there so much difference between this data. | Discrepancy between Google My Business and Google Analytics Sessions Data | google analytics;visitors;session;google my business | null |
_cs.8952 | Let $B$ be a boolean formula consisting of the usual AND, OR, and NOT operators and some variables. I would like to count the number of satisfying assignments for $B$. That is, I want to find the number of different assignments of truth values to the variables of $B$ for which $B$ assumes a true value. For example, the formula $a\lor b$ has three satisfying assignments; $(a\lor b)\land(c\lor\lnot b)$ has four. This is the #SAT problem.

Obviously an efficient solution to this problem would imply an efficient solution to SAT, which is unlikely, and in fact this problem is #P-complete, and so may well be strictly harder than SAT. So I am not expecting a guaranteed-efficient solution.

But it is well-known that there are relatively few really difficult instances of SAT itself. (See for example Cheeseman 1991, "Where the really hard problems are".) Ordinary pruned search, although exponential in the worst case, can solve many instances efficiently; resolution methods, although exponential in the worst case, are even more efficient in practice. My question is:

Are any algorithms known which can quickly count the number of satisfying assignments of a typical boolean formula, even if such algorithms require exponential time in the general instance? Is there anything noticeably better than enumerating every possible assignment? | Is there a sometimes-efficient algorithm to solve #SAT? | complexity theory;reference request;satisfiability | Counting in the general case

The problem you are interested in is known as #SAT, or model counting. In a sense, it is the classical #P-complete problem. Model counting is hard, even for $2$-SAT! Not surprisingly, the exact methods can only handle instances with around hundreds of variables. Approximate methods exist too, and they might be able to handle instances with around 1000 variables. Exact counting methods are often based on DPLL-style exhaustive search or some sort of knowledge compilation.
The approximate methods are usually categorized as methods that give fast estimates without any guarantees, and methods that provide lower or upper bounds with a correctness guarantee. There are also other methods that might not fit these categories, such as discovering backdoors, or methods that insist on certain structural properties holding for the formulas (or their constraint graphs).

There are practical implementations out there. Some exact model counters are CDP, Relsat, Cachet, sharpSAT, and c2d. The main techniques used by the exact solvers are partial counts, component analysis (of the underlying constraint graph), formula and component caching, and smart reasoning at each node. Another method, based on knowledge compilation, converts the input CNF formula into another logical form. From this form, the model count can be deduced easily (in polynomial time in the size of the newly produced formula). For example, one might convert the formula to a binary decision diagram (BDD). One could then traverse the BDD from the 1 leaf back to the root. For another example, c2d employs a compiler that turns CNF formulas into deterministic decomposable negation normal form (d-DNNF).

If your instances get larger or you don't care about being exact, approximate methods exist too. With approximate methods, we care about the quality of the estimate and the correctness confidence associated with the estimate reported by our algorithm. One approach by Wei and Selman [2] uses MCMC sampling to compute an approximation of the true model count for the input formula. The method is based on the fact that if one can sample (near-)uniformly from the set of solutions of a formula $\phi$, then one can compute a good estimate of the number of solutions of $\phi$. Gogate and Dechter [3] use a model counting technique known as SampleMinisat. It's based on sampling from the backtrack-free search space of a boolean formula.
The technique builds on the idea of importance re-sampling, using DPLL-based SAT solvers to construct the backtrack-free search space. This might be done either completely or up to an approximation.

Sampling for estimates with guarantees is also possible. Building on [2], Gomes et al. [4] showed that, using sampling with a modified randomized strategy, one can get provable lower bounds on the total model count with high probabilistic correctness guarantees.

There is also work that builds on belief propagation (BP). See Kroc et al. [5] and the BPCount they introduce. In the same paper, the authors give a second method, called MiniCount, for providing upper bounds on the model count. There's also a statistical framework which allows one to compute upper bounds under certain statistical assumptions.

Algorithms for #2-SAT and #3-SAT

If you restrict your attention to #2-SAT or #3-SAT, there are algorithms that run in $O(1.3247^n)$ and $O(1.6894^n)$ for these problems respectively [1]. There are slight improvements to these algorithms. For example, Kutzkov [6] improved upon the upper bound of [1] for #3-SAT with an algorithm running in time $O(1.6423^n)$.

As is the nature of the problem, if you want to solve instances in practice, a lot depends on the size and structure of your instances. The more you know, the more capable you are of choosing the right method.

[1] Vilhelm Dahllöf, Peter Jonsson, and Magnus Wahlström. Counting Satisfying Assignments in 2-SAT and 3-SAT. In Proceedings of the 8th Annual International Computing and Combinatorics Conference (COCOON-2002), 535–543, 2002.

[2] W. Wei and B. Selman. A New Approach to Model Counting. In Proceedings of SAT05: 8th International Conference on Theory and Applications of Satisfiability Testing, volume 3569 of Lecture Notes in Computer Science, 324–339, 2005.

[3] R. Gogate and R. Dechter. Approximate Counting by Sampling the Backtrack-free Search Space.
In Proceedings of AAAI-07: 22nd National Conference on Artificial Intelligence, 198–203, Vancouver, 2007.

[4] C. P. Gomes, J. Hoffmann, A. Sabharwal, and B. Selman. From Sampling to Model Counting. In Proceedings of IJCAI-07: 20th International Joint Conference on Artificial Intelligence, 2293–2299, 2007.

[5] L. Kroc, A. Sabharwal, and B. Selman. Leveraging Belief Propagation, Backtrack Search, and Statistics for Model Counting. In CPAIOR-08: 5th International Conference on Integration of AI and OR Techniques in Constraint Programming, volume 5015 of Lecture Notes in Computer Science, 127–141, 2008.

[6] K. Kutzkov. New upper bound for the #3-SAT problem. Information Processing Letters 105(1), 1–5, 2007. |
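As a concrete illustration of the "DPLL-style exhaustive search with partial counts" mentioned in the answer, here is a minimal Python sketch. Clauses use DIMACS-style integer literals (-v means variable v negated). This only shows the branching-and-counting scheme; real tools such as Cachet or sharpSAT add component analysis, caching, and much smarter branching heuristics.

```python
# Minimal DPLL-style model counter (an illustrative sketch, not a real tool).
# A formula is a list of clauses; a clause is a list of non-zero ints,
# where -v means "variable v negated".

def count_models(clauses, n_vars):
    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:                 # clause satisfied, drop it
                continue
            c2 = [l for l in c if l != -lit]
            if not c2:                   # empty clause: contradiction
                return None
            out.append(c2)
        return out

    def dpll(cls, free):
        if cls is None:
            return 0                     # this branch is unsatisfiable
        if not cls:
            return 2 ** free             # partial count: rest is unconstrained
        v = abs(cls[0][0])               # branch on a variable from the first clause
        return (dpll(simplify(cls, v), free - 1) +
                dpll(simplify(cls, -v), free - 1))

    return dpll(clauses, n_vars)

# (a OR b) has 3 models; (a OR b) AND (c OR NOT b) has 4
print(count_models([[1, 2]], 2))           # 3
print(count_models([[1, 2], [3, -2]], 3))  # 4
```

The `2 ** free` line is the "partial count": once all clauses are satisfied, every assignment of the remaining free variables is a model, so whole subtrees are counted without enumeration.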
_webapps.982 | There's nothing I hate more than receiving spammy notifications from other people's applications on Facebook. However, sometimes I come across a useful application that I authorize to interact with my Facebook account, and I cannot tell if it subsequently sends out the same bothersome notifications to my friends. Is there a way to track what an application is sending? | How can I tell when Facebook applications are spamming my friends? | facebook;notifications | You could create a second account just for the purpose of monitoring how your own account is perceived by other people. It's not the best method, but it should give you the most accurate judgement.
_vi.4120 | I created an augroup in my .vimrc containing several autocmds, and I need to enable/disable these autocommands on the fly. The idea is to create a mapping (let's say F4, for example) which would enable these autocommands when pressed once and disable them when pressed again, without having to source a file or reload the .vimrc.

How can I do that? | How to enable/disable an augroup on the fly? | key bindings;autocmd | Building on your answer: you don't need a variable to keep the state of the augroup; you can use exists() for that, provided that you know at least one of the autocmds that are part of the group:

function! ToggleTestAutoGroup()
    if !exists('#TestAutoGroup#BufEnter')
        augroup TestAutoGroup
            autocmd!
            autocmd BufEnter * echom "BufEnter " . bufnr("%")
            autocmd BufLeave * echom "BufLeave " . bufnr("%")
            autocmd TabEnter * echom "TabEnter " . tabpagenr()
            autocmd TabLeave * echom "TabLeave " . tabpagenr()
        augroup END
    else
        augroup TestAutoGroup
            autocmd!
        augroup END
    endif
endfunction

nnoremap <F4> :call ToggleTestAutoGroup()<CR>
_unix.66220 | /usr/share/tipp10$ ll
insgesamt 9408
drwxr-xr-x   3 myname ssl-cert    4096 Feb 26 20:07 ./
drwxr-xr-x 288 root   root       12288 Feb 26 20:07 ../
-rwxrwxrwx   1 myname ssl-cert    9480 Okt  6  2010 error.wav*
drwxrwxrwx   4 myname ssl-cert    4096 Feb 26 20:07 help/
-rwxrwxrwx   1 myname ssl-cert   16368 Dez 30  2010 license_de.txt*
-rwxrwxrwx   1 myname ssl-cert   16291 Dez 30  2010 license_en.txt*
-rwxrwxrwx   1 myname ssl-cert    5928 Okt  6  2010 metronome.wav*
-rwxrwxrwx   1 myname ssl-cert 9537480 Mär 11  2011 tipp10*
-rwxrwxrwx   1 myname ssl-cert    1255 Nov  7  2008 tipp10.png*
-rwxrwxrwx   1 myname ssl-cert   13312 Dez 18  2010 tipp10v2.template*

/usr/share/tipp10$ pwd tipp10
/usr/share/tipp10

/usr/share/tipp10$ file tipp10
tipp10: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.4, not stripped

manuel@P5KC:/usr/share/tipp10$ ldd tipp10
    das Programm ist nicht dynamisch gelinkt  # (the program is not dynamically linked)

/usr/share/tipp10$ ./tipp10
bash: ./tipp10: File not found

Ubuntu 12.04 x64. What the heck is wrong here?

EDIT: --------------------- SOLUTION ---------------------
... for those who don't want to read the complete dup article.

My OS is 64-bit. I thought 32-bit apps would run on 64-bit machines. 32-bit apps can run on a 64-bit machine, but only if the requisite supporting libraries are installed. For running 32-bit programs, try installing the ia32-libs package. | Existing file can not be found? | permissions;files;executable | null
_webapps.59505 | There is a quote formatting option within Gmail, but no unquote to undo the deeper quote level. How do I unquote text? | How do I unquote in Gmail? | gmail | By clicking on the decrease indent icon. |
_unix.153556 | We are using several Ubuntu servers with English as the working language; however, with the en_US locale set on these machines we encounter problems with apt-cacher-ng downloading translation files. One solution is to change the locale to POSIX.

Considering all options, we want to change the locale on all the systems.

What are the consequences for the system of changing locales from en_US to POSIX? Are there any implications for the LC_* variables apart from the change of value? | Consequences of setting up POSIX locales | apt;locale;apt cacher | null
_codereview.129782 | This is the first time I've tried this. I'd like some feedback on how I did, including any bad-practice warnings. For example, is it a really bad idea to allow the code to recreate the table if it doesn't think it exists? Would it be better to simply create the table once in a different file?

<?php
/*
Database and mail functionality
*/

// get user credentials
$config = parse_ini_file('../config.ini'); // path may vary depending on setup

// Create connection
$conn = new mysqli('localhost', $config['username'], $config['password'], $config['dbname']);

// Check connection
if ($conn->connect_error){
    die('Connection failed. ' . $conn->connect_error);
}

// if table not made yet, create it
if(!$conn->query("DESCRIBE visitors")) {
    // sql to create table
    $sql = 'CREATE TABLE visitors(
        id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(30) NOT NULL,
        email VARCHAR(50) NOT NULL,
        message VARCHAR(500),
        reg_date TIMESTAMP
        )';
    if (!$conn->query($sql)){
        die ('Sorry there was an error. Please try again later.');
    }
}

//insert data into table
//clean data for SQL query
$name = mysqli_real_escape_string($conn, $data['name']);
$email = mysqli_real_escape_string($conn, $data['email']);
$message = mysqli_real_escape_string($conn, $data['message']);

$sql = "INSERT INTO visitors (name, email, message) VALUES ('$name', '$email', '$message')";

if ($conn->query($sql) === TRUE) {
    $usrMsg = 'Thank you we\'ll be in touch when we have some news';
} else {
    die ("Sorry, there was an Error. Please try again later.");
}

// close connection
$conn->close();

// email
// addresses and default subject
$to = ''; // add details
$from = ''; // add details
$subject = 'New form entry on website';

// prepare message variables - not sure how to make quotes etc. display properly in email body
$message = wordwrap($data['message']);
$body = <<<_END
Name: {$data['name']}
Email: {$data['email']}
Message: $message
_END;

// send
mail($to, $subject, $body, $from);
?> | PHP store data from form into MySql DB | php;mysql;form | Since, as you mentioned, you are pretty new to this, I will start with some novice-level advice: you should refactor your code into separate functions, both to separate functionality with different purposes and to be able to reuse it.

function connect_db($config) {
    // Create connection
    $conn = new mysqli('localhost', $config['username'], $config['password'], $config['dbname']);
    // Check connection
    if ($conn->connect_error){
        die('Connection failed. ' . $conn->connect_error);
    }
    return $conn;
}

function prepare_visitors_table($conn) {
    // if table not made yet, create it
    if(!$conn->query("DESCRIBE visitors")) {
        // sql to create table
        $sql = 'CREATE TABLE visitors(
            id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name VARCHAR(30) NOT NULL,
            email VARCHAR(50) NOT NULL,
            message VARCHAR(500),
            reg_date TIMESTAMP
            )';
        if (!$conn->query($sql)){
            die ('Sorry there was an error. Please try again later.');
        }
    }
}

function save_visitor($conn, $name, $email, $message){
    //insert data into table
    //clean data for SQL query
    $name = mysqli_real_escape_string($conn, $name);
    $email = mysqli_real_escape_string($conn, $email);
    $message = mysqli_real_escape_string($conn, $message);
    $sql = "INSERT INTO visitors (name, email, message) VALUES ('$name', '$email', '$message')";
    return $conn->query($sql);
}

function send_visitor_notification($name, $email, $message){
    // email
    // addresses and default subject
    $to = ''; // add details
    $from = ''; // add details
    $subject = 'New form entry on website';

    // prepare message variables - not sure how to make quotes etc. display properly in email body
    $message = wordwrap($message);
    $body = <<<_END
Name: {$name}
Email: {$email}
Message: $message
_END;

    // send
    mail($to, $subject, $body, $from);
}

$config = parse_ini_file('../config.ini'); // path may vary depending on setup
$conn = connect_db($config);
prepare_visitors_table($conn);

if (save_visitor($conn, $data['name'], $data['email'], $data['message'])) {
    $usrMsg = 'Thank you we\'ll be in touch when we have some news';
} else {
    die ("Sorry, there was an Error. Please try again later.");
}

send_visitor_notification($data['name'], $data['email'], $data['message']);
$conn->close();
_datascience.11278 | I'm trying to choose an algorithm for filtering spam. I found two options:

1. Create a word dictionary for the spam and non-spam data, calculate the average TF-IDF for each word, and use cosine similarity for filtering.
2. Or use the word dictionary for training a logistic regression model.

Could you suggest which fits my goal best? Maybe I should use some other algorithm. | Cosine similarity or logistic regression for spam filtering | machine learning;classification | Use logistic regression, which allows the weights to be learned. By using cosine similarity you are forcing the weights to be the same for all features (assuming that you normalize the features first). This is putting unnecessary restrictions on the model.
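To make the answer's point concrete (per-feature weights learned from data, instead of the equal weights that cosine similarity implies), here is a stdlib-only sketch of logistic regression over TF-IDF features. The four-document corpus, learning rate, and epoch count are invented for illustration; in practice you would reach for a library such as scikit-learn.

```python
# Toy sketch: TF-IDF features + logistic regression trained by gradient descent.
import math

docs = ["win cash now", "cheap cash offer", "meeting at noon", "lunch at noon"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vocab = sorted({w for d in docs for w in d.split()})
df = {w: sum(w in d.split() for d in docs) for w in vocab}  # document frequency

def tfidf(doc):
    words = doc.split()
    return [words.count(w) / len(words) * math.log(len(docs) / df[w])
            for w in vocab]

X = [tfidf(d) for d in docs]
w = [0.0] * len(vocab)   # one learned weight per feature
b = 0.0
for _ in range(500):     # plain stochastic gradient descent
    for x, y in zip(X, labels):
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        g = p - y        # gradient of the log-loss w.r.t. the logit
        w = [wi - 0.5 * g * xi for wi, xi in zip(w, x)]
        b -= 0.5 * g

def predict(doc):
    z = sum(wi * xi for wi, xi in zip(w, tfidf(doc))) + b
    return 1 / (1 + math.exp(-z))

print(predict("cash offer now"))  # high probability: spam-like words
print(predict("lunch meeting"))   # low probability: ham-like words
```

The learned weights end up positive for spam-only words and negative for ham-only words, which is exactly the flexibility the answer says cosine similarity gives up.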
_webmaster.925 | What are some of the best hosting providers for a Ruby on Rails application?

I have looked into Heroku and it looks like a good option, but would it be better to go with a VPS or grid hosting provider? | Rails hosting providers | web hosting | null
_computerscience.214 | I'm trying to learn about raytracing by implementing things in Python 3. I know this is always going to be slower than something like C++, and I know the speed could also be improved by using GPU raytracing, but for this question I'm not looking for general ways of speeding up, but specifically ways of reducing the number of samples required, which will then be useful for any language I may work with in future.

I have a partly formed idea which I'd like to work on, but first I wanted to run this past the experts to see if this is a pre-existing technique so I don't repeat work that has already been done.

I've searched for "sampling solid angle" and "voronoi sphere sampling" but I can't see any sign of prior work. I'll describe my idea in case it goes by a name I can't think of.

Example image

This is an image of three spheres on a plane (which is actually a very large sphere). One is emissive, one is reflective, and one is matt (as is the floor). Sampling is adaptive so that pixels that quickly reach a stable colour do not take up much time. I limit the total number of samples per pixel to avoid the rendering continuing for too long. Even allowed to run overnight, the resulting image is very grainy, and experimenting with smaller images suggests this size image (1366 by 768) would take weeks to converge with my current approach.

My idea: concentrating samples along colour boundaries

I'd like to be able to concentrate samples where they are needed, and to do this adaptively based on previous samples for the same intersection point or pixel. This will give an unknown bias in the distribution of samples, which means taking the average of the samples will give an inaccurate result.
Instead I would like to consider the size of the voronoi cell on the surface of the unit hemisphere centred on the point of intersection (for sampling light incident at a point on a matt surface) or on the surface of a small circle (for sampling around a pixel).Assume that all points within that voronoi cell are receiving rays of the same colour as the centre of the voronoi cell. Now an estimate of the average colour can be obtained by weighting according to the area of each voronoi cell. Choosing new samples on the boundary between the two voronoi cells with the greatest difference in colour leads to an improvement in the estimate without needing to sample the entire hemisphere. Samples should end up more densely concentrated in areas of higher colour gradient. Areas of flat colour should end up being ignored once they have a few points near their boundary.The extra complication is that in both cases (sampling from a point on a matt surface, or sampling over a circle around a pixel centre) the simplified approach I have described is roughly equivalent to a large number of samples distributed uniformly. To make this work I would need to be able to bias the average by both the voronoi cell areas and the required distribution (cosine around the surface normal for a matt surface or gaussian around a pixel centre).So I have some more thinking to do before I could test this idea, but I wanted to check first if this has either already been done, or already ruled out as unworkable. | Speeding up convergence: am I reinventing the wheel? | raytracing;sampling;efficiency | null |
_codereview.44682 | I have the following code, which works as I need it to; however, it repeats itself over and over. I'm relatively new to PHP and haven't grasped recursive functions yet, which I've been told I'll need to understand, as this function could potentially have many levels.

if ($_POST['parent'] == "None") {
    $parent = "";
} else {
    $parent = $connection->real_escape_string($_POST['parent']);
}

if (!empty($parent)) {
    $query = $connection->query("SELECT * FROM pages WHERE filename='$parent'");
    while($row = $query->fetch_array()) {
        $path_arr = explode('/',$row['path_to']);
        end($path_arr);
        $key = key($path_arr)-1;
        $parent1 = $path_arr[$key];
        if (!empty($parent1)) {
            $query = $connection->query("SELECT * FROM pages WHERE filename='$parent1'");
            while($row = $query->fetch_array()) {
                $path_arr = explode('/',$row['path_to']);
                end($path_arr);
                $key = key($path_arr)-1;
                $parent2 = $path_arr[$key];
                if (!empty($parent2)) {
                    $query = $connection->query("SELECT * FROM pages WHERE filename='$parent2'");
                    while($row = $query->fetch_array()) {
                        $path_arr = explode('/',$row['path_to']);
                        end($path_arr);
                        $key = key($path_arr)-1;
                        $parent3 = $path_arr[$key];
                        if (!empty($parent3)) {
                            $query = $connection->query("SELECT * FROM pages WHERE filename='$parent3'");
                            while($row = $query->fetch_array()) {
                                $path_arr = explode('/',$row['path_to']);
                                end($path_arr);
                                $key = key($path_arr)-1;
                                $parent4 = $path_arr[$key];
                            }
                        }
                    }
                }
            }
        }
    }
}

$path_to = rtrim(ltrim($parent4.'/'.$parent3.'/'.$parent2.'/'.$parent1.'/'.$parent.'/'.$filename,'/'),'/');

As you can see there are minimal differences within the nested while loops, but I'm really not sure how to make the function loop for each level. I was pointed to this, and while I understand what the factorial is, I still am unsure how to refactor my code into a looping function.

Any pointers?

UPDATE

Certainly moving in the right direction of what I'm hoping to achieve with Simon André Forsberg's suggestion.

However (I probably should have made this clearer in my explanation),
The loop(s) determine whether a 'file' (a database entry, masquerading as a file) has a parent 'file', return that parent 'file', then check that parent 'file' for a parent 'file' and return it, and so on until there are no more parents. It is used to create a dynamic path directory, like so:

/parent4/parent3/parent2/parent1/filename/

The line of code that follows the loops (again, I probably should have included this) is below...

$path_to = rtrim(ltrim($parent4.'/'.$parent3.'/'.$parent2.'/'.$parent1.'/'.$parent.'/'.$filename,'/'),'/');

So each level of the loop outputs to this string. I have a feeling I may have to do this using palacsint's method of taking the working part of each loop and creating that as a function.

I may be wrong, but could Simon's method output to an array? First parent in pos 0, second in pos 1, etc.? However, then the query would have to have the filename='$parent' part updated to look at the last array entry.

$query = $connection->query("SELECT * FROM pages WHERE filename='$parent'");

Have I bitten off more than I can chew? | Need to condense the following into a looping function | php | You seem to always end with the check: "Is there another parent?" Or, as it can be formulated: while there is another parent, grab that parent. Luckily for you, there are while loops.

while (!empty($parent)) {
    $query = $connection->query("SELECT * FROM pages WHERE filename='$parent'");
    while($row = $query->fetch_array()) {
        $path_arr = explode('/',$row['path_to']);
        end($path_arr);
        $key = key($path_arr)-1;
        $parent = $path_arr[$key];
        break; // break from the inner loop that fetches each row
    }
}

This code will continue looping while there is a parent available. Note that the new parent uses the same variable as the previous one; this is an important part in making this loop work.

Other suggestions:

Use prepared statements! It is good that you seem to sanitize your inputs by using the real_escape_string method. But prepared statements are always better.
And then you wouldn't have to use the real_escape_string call.

Since this code only gets the first result, you don't need the inner while loop and can replace it with an if instead.

while (!empty($parent)) {
    $query = $connection->query("SELECT * FROM pages WHERE filename='$parent'");
    if ($row = $query->fetch_array()) {
        $path_arr = explode('/',$row['path_to']);
        end($path_arr);
        $key = key($path_arr)-1;
        $parent = $path_arr[$key];
    }
}

I do have to ask though: What's the point in just grabbing the parent until there are no more parents? To store all the parents in an array, let's do it like this:

$parents = array();
while (!empty($parent)) {
    $parents[] = $parent; // add parent to the array
    $query = $connection->query("SELECT * FROM pages WHERE filename='$parent'");
    if ($row = $query->fetch_array()) {
        $path_arr = explode('/',$row['path_to']);
        end($path_arr);
        $key = key($path_arr)-1;
        $parent = $path_arr[$key];
    }
}

Now, when this loop finishes, $parents contains all the parents that were not empty. Now you can loop through that using a foreach loop, or use it with implode or whatever you'd like :)
_unix.23609 | If my current path is a long one and I want to switch to a directory with just one word of the path replaced by something else (say, when using Maven, I want to switch from the main path to the test path), how do I do it?

Some time back, I was able to do it with

$ cd main test

to replace main by test in the path, but not anymore. Any pointers? | Any cd shortcut to switch an intermediate directory in current path? | bash;cd command | You could use a simple function for that (put it in your .bashrc or something like that):

function bcd {
    cd ${PWD/$1/$2}
}

Then you call it like this:

~/tmp $ bcd tmp src
~/src $
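A side note on why cd main test may have worked before: zsh's built-in cd accepts two arguments and substitutes the first string with the second in $PWD, so that shortcut is native to zsh but not to bash. Also, the ${PWD/$1/$2} expansion in the answer above is a bash/zsh feature; a POSIX-portable sketch of the same idea is shown below (function name bcd kept from the answer, demo paths invented):

```shell
# Portable bcd: substitute the first occurrence of $1 with $2 in $PWD via sed.
# Assumes the pattern contains no slashes or sed metacharacters.
bcd() {
    cd "$(printf '%s\n' "$PWD" | sed "s/$1/$2/")" || return
}

# Demo with two mirrored trees (hypothetical paths):
mkdir -p /tmp/demo/main/sub /tmp/demo/test/sub
cd /tmp/demo/main/sub
bcd main test
pwd    # now /tmp/demo/test/sub
```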
_codereview.120567 | #pragma comment(lib, "sfml-network.lib")
#include <iostream>
#include <SFML/Network.hpp>

const unsigned short PORT = 5000;
const std::string IPADDRESS("192.168.0.100");//change to suit your needs

std::string msgSend;
sf::TcpSocket socket;
sf::Mutex globalMutex;
bool quit = false;

void DoStuff(void)
{
    static std::string oldMsg;
    while(!quit)
    {
        sf::Packet packetSend;
        globalMutex.lock();
        packetSend << msgSend;
        globalMutex.unlock();
        socket.send(packetSend);

        std::string msg;
        sf::Packet packetReceive;
        socket.receive(packetReceive);
        if(packetReceive >> msg)
        {
            if(oldMsg != msg)
                if(!msg.empty())
                {
                    std::cout << msg << std::endl;
                    oldMsg = msg;
                }
        }
    }
}

void Server(void)
{
    sf::TcpListener listener;
    listener.listen(PORT);
    listener.accept(socket);
    std::cout << "New client connected: " << socket.getRemoteAddress() << std::endl;
}

bool Client(void)
{
    if(socket.connect(IPADDRESS, PORT) == sf::Socket::Done)
    {
        std::cout << "Connected\n";
        return true;
    }
    return false;
}

void GetInput(void)
{
    std::string s;
    std::cout << "\nEnter \"exit\" to quit or message to send: ";
    std::cin >> s;
    if(s == "exit")
        quit = true;
    globalMutex.lock();
    msgSend = s;
    globalMutex.unlock();
}

int main(int argc, char* argv[])
{
    sf::Thread* thread = 0;
    char who;
    std::cout << "Do you want to be a server (s) or a client (c) ? ";
    std::cin >> who;
    if(who == 's')
        Server();
    else
        Client();
    thread = new sf::Thread(&DoStuff);
    thread->launch();
    while(!quit)
    {
        GetInput();
    }
    if(thread)
    {
        thread->wait();
        delete thread;
    }
    return 0;
} | Basic C++ server and client for chatting over TCP | c++;multithreading;tcp;chat;sfml | null
_unix.327954 | There are so many tutorials out there explaining how to set up a dhcpd server to provide NTP suggestions to DHCP clients that I had always assumed NTP configuration on the clients was carried out automatically. Recently I started seeing clock drift in my local network, so apparently that assumption was wrong. So I set out to see how one can minimize the NTP client configuration, provided one has gone to the effort of setting up ntp-server suggestions through dhcpd.

I have not been able to find much apart from this Ubuntu-specific help tutorial https://help.ubuntu.com/community/UbuntuTime . Even here (see the paragraph under Troubleshooting -> Which configuration file is it using?) the information is scarce, but it says that if an /etc/ntp.conf.dhcp file is found it will be used instead. First of all, the actual location the writer meant is /var/lib/ntp/ntp.conf.dhcp, as observed in /etc/init.d/ntp; but regardless of that, the presence of this file does not guarantee that ntp will request servers from dhclient. As a result, I have to explicitly add the server clause for my local NTP server in ntp.conf.dhcp. But in that case, why do I even set up NTP settings on the dhcpd server?

This seems to go against intuition, i.e. set up NTP settings once (on the server) and let the dhcpd server delegate the information to the clients. How can I minimize (if not avoid altogether) client configuration for NTP? Alternatively, how can I get NTP information through dhclient? Is there a CLI solution that fits all Linux distros?
I assume every client should have the ntpd executables, but I do not know how to proceed from there.

Thank you

EDIT:

Ubuntu client verbose output when running dhclient manually:

sudo dhclient -1 -d -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0
Internet Systems Consortium DHCP Client 4.2.4
Copyright 2004-2012 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/eth0/20:cf:30:0e:6c:12
Sending on   LPF/eth0/20:cf:30:0e:6c:12
Sending on   Socket/fallback
DHCPREQUEST of 192.168.112.150 on eth0 to 255.255.255.255 port 67 (xid=0x2e844b8f)
DHCPACK of 192.168.112.150 from 192.168.112.112
reload: Unknown instance:
invoke-rc.d: initscript smbd, action "reload" failed.
RTNETLINK answers: File exists
 * Stopping NTP server ntpd ...done.
 * Starting NTP server ntpd ...done.
bound to 192.168.112.150 -- renewal in 41963 seconds.

The ntpd service is restarted, yet running ntpq -cpe -cas afterwards I still do not see my local ntp server in the list of ntp servers.

Of course my dhcpd server does have option ntp-servers:

subnet 192.168.112.0 netmask 255.255.255.0 {
    max-lease-time 604800;
    default-lease-time 86400;
    authoritative;
    ignore client-updates;
    option ntp-servers 192.168.112.112; #self
    ... (many other options)
} | how do you set up a linux client to use ntp information provided through dhcp? | configuration;dhcp;ntp;ntpd;dhclient | If the dhcp server you are using is configured to provide the ntp-servers option, you can configure your dhclient to request ntp-servers by adding ntp-servers to the default request line in dhclient.conf, as shown at the end of this example from Ubuntu Linux (16.04 now, but was installed as 12.04):

request subnet-mask, broadcast-address, time-offset, routers,
        domain-name, domain-name-servers, domain-search, host-name,
        dhcp6.name-servers, dhcp6.domain-search, dhcp6.fqdn, dhcp6.sntp-servers,
        netbios-name-servers, netbios-scope, interface-mtu,
        rfc3442-classless-static-routes, ntp-servers;

/etc/ntp.conf and the information from DHCP will be used to create /etc/ntp.conf.dhcp. Your ntpd must be told to use /etc/ntp.conf.dhcp if it exists. On the version of Ubuntu that I'm using, this is done via /etc/dhcp/dhclient-exit-hooks.d/ntp <-- this is the file that tells NTPd to use /etc/ntp.conf.dhcp if it exists, and to just use /etc/ntp.conf if it doesn't.
_webapps.97688 | When I use back quotes to highlight terms in Trello card content, the text comes out in red, which I don't like. I would prefer it formatted the way back-quoting looks here, e.g. a back-quoted text sample. Is it possible to change that on trello.com? | How to change text color for back-quoted text in trello.com | trello | There are a couple of options:

Go to Settings and click "Enable Color Blind Friendly Mode" under Accessibility. This will give you the black on gray that you desire. It will change some other coloring as well. You'll notice that the label colors are now striped.

You can install a browser extension like Stylish and add a CSS rule for the code element to set the text color to black. Something like

code {
  color: black !important;
}
_unix.30309 | I am planning to log in to my college PC using ssh and run some simulations. These simulations take a very long time, so I would like the relevant process to run longer than the ssh session (I want to log in, run the process, log out and collect the results the next day). How can I do it:

1. If the process is a command-line tool that doesn't expect any input (so that I just need the resulting output file)?
2. If the process is a GUI, which sadly doesn't save the results to a file but displays them instead? In this case I was thinking of using an ssh -X ... command, but then I don't know how to reconnect to the open window.

| Running programs over ssh | ssh;remote desktop | Assuming your college's computer runs all the time:

1. Use GNU Screen or tmux and live happily ever after.
2. Apparently, xpra offers that, i.e. it attempts to be Screen for X11. (I've never used it, though.)

(There're other solutions for (1.), e.g. nohup and IO redirection, but Screen probably is the canonical tool for these kinds of issues. (You can then just re-attach to the detached session and see if the simulation still runs etc...))
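For case 1 specifically (a non-interactive command-line tool), nohup plus output redirection is enough on its own if you never need to reattach. A minimal sketch; the sh -c 'echo ...' below is a stand-in for your actual simulation command:

```shell
# Run the job immune to hangup when the ssh session ends;
# stdout/stderr go to a log file, and the PID is saved for later checks.
nohup sh -c 'echo "simulation finished"' > /tmp/sim.log 2>&1 &
echo $! > /tmp/sim.pid

wait "$(cat /tmp/sim.pid)"   # here we just wait; in practice you would log
cat /tmp/sim.log             # back in the next day and read the log file
```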
_unix.361552 | I have two ZFS mount points/a ZFS pool in a FreeBSD 12.0 server, that I can see with df:$ df -h | grep zrootzroot/vms 196G 657M 195G 0% /vmszroot 195G 19K 195G 0% /zrootHow can I know in which partition it is located? Can I know a little more about it? | Mapping ZFS pool to partition | freebsd;zfs | You can know more about your ZFS pool with several commands:$zpool status pool: zroot state: ONLINE scan: none requestedconfig: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 nvd0p4 ONLINE 0 0 0errors: No known data errorsAs you can see, a ZFS pool zroot was created in the nvd0p4 partition.You can also get a few more glimpses about the characteristics of the pool with the command zpool list:$zpool listNAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOTzroot 202G 657M 201G - 0% 0% 1.00x ONLINE -As root, you can also see the history of the ZFS pool usage:$sudo zpool historyHistory for 'zroot':2017-01-16.22:00:43 zpool create zroot /dev/nvd0p42017-01-16.22:48:59 zfs create -V16G -o volmode=dev zroot/linuxdisk02017-01-16.22:49:33 zfs destroy zroot/linuxdisk02017-01-17.20:59:04 zfs create -o mountpoint=/vms zroot/vms2017-01-17.21:21:35 zfs create zroot/vms/testvm2017-01-17.21:21:40 zfs create -sV 16G -o volmode=dev zroot/vms/testvm/disk02017-01-17.21:23:41 zfs destroy -rf zroot/vms/testvm2017-01-30.22:24:59 zfs create zroot/vms/testvm2017-01-30.22:25:04 zfs create -sV 16G -o volmode=dev zroot/vms/testvm/disk02017-01-30.22:35:15 zfs destroy -rf zroot/vms/testvm You can also list the mounted ZFS filesystems:$ zfs mountzroot/vms /vmszroot /zrootZFS has also support for snapshots, jails, and much more. See man zfs and man zpool for more details.See also ZFS Tutorials : Creating ZFS pools and file systems |
_webapps.40388 | It appears that I have to proactively set the time zone I'm in within Google Drive so that times display correctly. Is there a way I can push this setting to all users within the organization? | How do I set the timezone for Google Drive from Google Apps? | google drive;google apps | You can change your timezone in the settings for Google Drive. See below screenshots.

As for changing it for all users, this is not possible. Each user will have to set their timezone individually.
_codereview.29150 | I'm using Twitter Bootstrap and I'm working on a page that has several tabs that all have carousels in them (each with a ton of images). I've managed to write an AJAX script that pulls the images from a JSON file I've created (for one of the carousels). I'm planning on making a similar JSON file for each of the rest, but what I'm wondering is: is there a way to write the AJAX script so that it grabs the id from the HTML, so I don't have to make a unique AJAX script for each version of the carousel?

Here's how my HTML looks for the carousel:

<div id="ShaluCarousel1b" class="carousel slide"><!-- class of slide for animation -->
    <div class="carousel-inner" id="carousel1b">
    </div><!-- /.carousel-inner -->
</div>

And here's my AJAX script:

<script>
    $.ajaxSetup({
        error: function(xhr, status, error) {
            alert("An AJAX error occured: " + status + "\nError: " + error);
        }
    });
    $.ajax({
        url:'json/carousel1b',
        type:'GET',
        success: function(data){
            console.log('grabbing photos');
            var totalPictures = data['totalPictures'];
            for (var i=0; i<totalPictures; i++) {
                console.log("new image");
                if(i<1){
                    console.log("first image");
                    var d = "<div class='item active'>";
                    d += "<a href='" +data.pictures[i]['photo'] +"' rel='prettyPhoto'>";
                    d += "<img src='" +data.pictures[i]['photo'] +"' >";
                    d += "</a>";
                    d += "</div>";
                    $('#carousel1b').append(d);
                } else {
                    console.log("image " +i);
                    var d = "<div class='item'>";
                    d += "<a href='" +data.pictures[i]['photo'] +"' rel='prettyPhoto'>";
                    d += "<img src='" +data.pictures[i]['photo'] +"' alt=''>";
                    d += "</a>";
                    d += "</div>";
                    $('#carousel1b').append(d);
                    $("a[rel^='prettyPhoto']").prettyPhoto();
                    console.log('pp init');
                }
            }
        }
    })
</script>

I'm basically wondering if there is a way to pull the id (carousel1b) from the html and inject it into the AJAX script, in the url:'json/carousel1b' and the $('#carousel1b').append(d) parts. | Simplifying an ajax script in my HTML | ajax;html5 | Depending on when you want to make the ajax request, you'll need to modify the first line of my example. Currently it just gets all of them.
You might want to change it so that it gets only the div that was clicked, or whatever. Anyways here's one way you could do that:

$('div[id*=carousel]').each(function() { //The *= selector gets all the divs with an id that contains "carousel" anywhere in the id.
    var $this = $(this), //Here we select the right div
        id = $this.attr('id'); //And grab the id of the div
    $.ajax({
        url: 'json/' + id, //You don't need to set type: "GET", the "GET" is the default.
        success: function(data) {
            //blah blah blah...
            $this.append(d); //Boom done!
        }
    });
});

The way you have your success function set up, you're appending and changing the DOM each time that for loop goes around, and it goes around for every image. That's not good because DOM manipulations are quite expensive performance wise. So if you have 100 images, you're appending one at a time - 100 times. The better way to do that would be to do all your stuff and save to a variable, string, or object, then append once outside the loop.

Here's an example of what I mean:

var totalPictures = data['totalPictures'],
    d = ""; //We add d outside
for (var i=0; i<totalPictures; i++) {
    if(i<1){
        d = "<div class='item active'>";
        d += "<a href='" +data.pictures[i]['photo'] +"' rel='prettyPhoto'>";
        d += "<img src='" +data.pictures[i]['photo'] +"' >";
        d += "</a>";
        d += "</div>";
    } else {
        d += "<div class='item'>"; //Added a += here so that it'll add onto the first picture
        d += "<a href='" +data.pictures[i]['photo'] +"' rel='prettyPhoto'>";
        d += "<img src='" +data.pictures[i]['photo'] +"' alt=''>";
        d += "</a>";
        d += "</div>";
    }
}
$('#carousel1b').append(d); //Here we append a single time outside the loop
//This cuts our appends down from totalPictures to 1.
//What may seem like a small change will make a huge difference
$("a[rel^='prettyPhoto']").prettyPhoto(); //I assume you'll need this outside since it depends on the appended content
_softwareengineering.262093 | Our client sells several products in an online shop (our software) for an especially low price on the first purchase. Further purchases of each product will fall back to the regular price. E.g. a product costs 10 EUR on the first purchase (quantity fixed to 1) and 20 EUR each on the next purchases (any quantity).

To prevent customers playing tricks and attempting to order products more than once, we compare e-mail and name/address before accepting any order. Although there are still ways to bypass this, we block most naive attempts. Thus customers usually order once and never come back (that's fine; its sole purpose is product promotion).

However, we just encountered a case where a customer tricked our system. But first, let me explain how our checkout works: When hitting the buy button, we generate a unique transaction with all order details and redirect the customer to the payment gateway (e.g. PayPal Standard). As soon as we receive a valid and successful payment notification from the payment service provider (e.g. PayPal IPN), this transaction is converted to an actual order. Since the ordered products are now considered purchased, we would block further attempts to purchase them for the lower price.

The said customer did the following to trick the validation: He opened a second browser tab of the summary page with the buy button. He pressed the buy button in each tab, resulting in two browser tabs (1st tab with transaction ID 1234 and 2nd tab with transaction ID 1235, same session). The customer then paid for both transactions (e.g. on PayPal) and thus successfully generated two orders with the same products for the low price.

While it is possible to detect such a case or even deny the second purchase here (additional validation after the payment notification), we would still need to deal with the money paid by the customer. Is there any technical way to prevent a simultaneous purchase to begin with?
| Simultaneous purchase in online shop bypassing limited offer | concurrency;e commerce;transaction | null |
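This question has no accepted answer in the dump. One standard fix, sketched below with an invented schema, is to make the discounted first purchase idempotent at the database level: a unique constraint on (customer, product) for promo orders means that of two concurrent transactions exactly one insert succeeds, and the losing payment is refunded instead of silently becoming a second discounted order.

```python
import sqlite3

# Hypothetical schema: at most one discounted order per (customer, product).
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE promo_order (
        customer_id INTEGER NOT NULL,
        product_id  INTEGER NOT NULL,
        txn_id      TEXT    NOT NULL,
        UNIQUE (customer_id, product_id)  -- the DB arbitrates the race
    )
""")

def record_promo_order(customer_id, product_id, txn_id):
    """True if this transaction won the discount; False -> refund it."""
    try:
        with db:  # commits on success, rolls back on error
            db.execute("INSERT INTO promo_order VALUES (?, ?, ?)",
                       (customer_id, product_id, txn_id))
        return True
    except sqlite3.IntegrityError:
        return False  # an earlier/concurrent order already took the discount

# The two browser tabs from the question: both payments arrive, one wins.
first = record_promo_order(42, 7, "txn-1234")
second = record_promo_order(42, 7, "txn-1235")
print(first, second)  # True False
```

The same idea works on MySQL/PostgreSQL with a unique index; the second IPN handler then triggers a refund instead of creating an order.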
_unix.317341 | I have folders with names like:

/AAA1\BBB1\CCC1
/AAA2\BBB2\CCC2
/AAA3\BBB3\CCC3

How can I mass rename them to:

/AAA1_BBB1_CCC1
/AAA2_BBB2_CCC2
/AAA3_BBB3_CCC3 | How to mass rename folders on Debian? | debian;directory;rename | null
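No accepted answer is recorded for this row. Assuming the backslashes are literal characters inside each directory name, one possible approach is a small shell loop; the paths below are invented for the demo:

```shell
# Set up sample directories whose names contain literal backslashes
rm -rf /tmp/renamedemo && mkdir -p /tmp/renamedemo
cd /tmp/renamedemo
mkdir 'AAA1\BBB1\CCC1' 'AAA2\BBB2\CCC2'

# Replace every backslash in each name with an underscore
for d in *\\*; do
    mv -- "$d" "$(printf '%s' "$d" | tr '\\' '_')"
done
ls   # AAA1_BBB1_CCC1  AAA2_BBB2_CCC2
```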
_webapps.33644 | There is a user in our Trello organization who only needs to be notified when she is mentioned in a card. She has not subscribed to any boards.The notification emails she gets are filled with notifications from cards she is not involved in. Why is this happening? How can I set it up so that she only gets notifications when she is @mentioned? | Trello user getting notifications for cards/boards she is not involved in or subscribed to? | trello | null |
_unix.379326 | (I give context for my question first, the question itself is at the bottom where it says QUESTION in bold).Take two processes A and B. A checks a condition, sees that it isn't satisfied, and goes to sleep/blocks. B satisfies the condition and wakes A up. If everything happens in that order, then we have no problems.Now if the scheduler goes:A checks condition, it's not satisfiedB satisfies condition, wake A upA goes to sleep/blocksthen we lose the wake-up that B performs for A.I've come across this problem in the context of implementing a blocking semaphore (i.e. one that puts the wait()ing thread to sleep/blocks it instead of letting it spin-wait). Several sources give solutions to this, among them:Andrew Tanenbaum, Modern Operating Systems, 4th edition, p. 130:The essence of the problem here is that a wakeup sent to a process that is not (yet) sleeping is lost. If it were not lost, everything would work. A quick fix is to modify the rules to add a wakeup waiting bit to the picture. When a wakeup is sent to a process that is still awake, this bit is set. Later, when the process tries to go to sleep, if the wakeup waiting bit is on, it will be turned off, but the process will stay awake. The wakeup waiting bit is a piggy bank for storing wakeup signals. The consumer clears the wakeup waiting bit in every iteration of the loop.This article in the Linux journal (Kernel Korner - Sleeping in the Kernel, Linux Journal #137) mentions something similar:This code avoids the lost wake-up problem. How? We have changed our current state to TASK_INTERRUPTIBLE, before we test the condition. So, what has changed? 
The change is that whenever a wake_up_process is called for a process whose state is TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE, and the process has not yet called schedule(), the state of the process is changed back to TASK_RUNNING.Thus, in the above example, even if a wake-up is delivered by process B at any point after the check for list_empty is made, the state of A automatically is changed to TASK_RUNNING. Hence, the call to schedule() does not put process A to sleep; it merely schedules it out for a while, as discussed earlier. Thus, the wake-up no longer is lost.As I understand, this basically says you can mark a process as wanting to go to sleep/block such that a later wakeup can cancel the later sleep/block call.Finally these lecture notes in the bottom couple paragraphs starting at The pseudo-code below shows the implementation of such a semaphore, called a blocking semaphore: gives code for a blocking semaphore and uses an atomic operation Release_mutex_and_block (csem.mutex);. They claim that:Please notice that the P()ing process must atomically become unrunnable and release the mutex. This is becuase of the risk of a lost wakeup. Imagine the case where these were two different operations: release_mutex(xsem.mutex) and sleep(). If a context-switch would occur in between the release_mutex() and the sleep(), it would be possible for another process to perform a V() operation and attempt to dequeue_and_wakeup() the first process. Unfortunately, the first process isn't yet asleep, so it missed the wake-up -- instead, when it again runs, it immediately goes to sleep with no one left to wake it up.Operating systems generally provide this support in the form of a sleep() system call that takes the mutex as a parameter. 
The kernel can then release the mutex and put the process to sleep in an environment free of interruptions (or otherwise protected).QUESTION: Do processes in UNIX have some way of marking them as I'm planning on going to sleep, or a wakeup waiting bit as Tanenbaum calls it? Is there a system call sleep(mutex) that atomically releases a mutex and then puts the process to sleep/blocks it?It's probably somewhat apparent that I'm not familiar with system calls and generally OS internals; if there are any false assumptions apparent in my question or misuses of terminology, I'd be happy to have them pointed out to me. | Lost wakeup problem - how does UNIX deal with it | concurrency | null |
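No accepted answer is recorded for this question. The atomic release-and-sleep that the lecture notes call sleep(mutex) is exactly what POSIX condition variables give user-space code: pthread_cond_wait(&cond, &mutex) atomically unlocks the mutex and blocks, then re-acquires the mutex before returning, so a wakeup cannot fall between the check and the sleep (both happen under the lock). The runnable sketch below shows the same pattern with Python's threading.Condition, whose wait() has the same atomic semantics:

```python
import threading

cond = threading.Condition()  # a mutex plus a wait queue
ready = False                 # the condition "process A" checks
events = []

def consumer():  # process A
    with cond:                   # take the mutex
        while not ready:         # re-check in a loop: wakeups can be spurious
            cond.wait()          # ATOMICALLY release mutex + block;
                                 # mutex is re-acquired before wait() returns
        events.append("A saw the condition")

def producer():  # process B
    global ready
    with cond:
        ready = True     # satisfy the condition...
        cond.notify()    # ...and wake A. If A has not called wait() yet, it
                         # also has not tested `ready` yet, since both happen
                         # while holding the same mutex: no wakeup is lost.

a = threading.Thread(target=consumer)
b = threading.Thread(target=producer)
a.start(); b.start()
a.join(); b.join()
print(events)  # ['A saw the condition']
```

Kernel-side, Linux uses the set-state-then-check pattern the question quotes (TASK_INTERRUPTIBLE before testing the condition), which is the "wakeup waiting bit" idea from Tanenbaum.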
_codereview.82012 | I'm looking for feedback specifically about what I might add to this to make it more usable or general, especially with regard to scala collection traits to implement.import scala.collection.{GenTraversableOnce, SortedSet}object NonEmptySortedSet { def apply[A](elems: A*)(implicit ord: scala.Ordering[A]) = new NonEmptySortedSet(SortedSet[A](elems: _*)) def apply[A](s: SortedSet[A]) = if (s.nonEmpty) Some(new NonEmptySortedSet(s)) else None def apply[A](s: Set[A])(implicit ord: scala.Ordering[A]) = if (s.nonEmpty) Some(new NonEmptySortedSet(SortedSet[A]()(ord) ++ s)) else None object Implicits { implicit class SortedSetOps[A](s: SortedSet[A]) { def toNes = apply(s) } implicit class SetOps[A](s: Set[A]) { def sorted(implicit ord: scala.Ordering[A]) = apply(s) } }}class NonEmptySortedSet[A] private(s: SortedSet[A]) extends SortedSet[A] { override val head: A = s.head override val tail: SortedSet[A] = s.tail def contains(elem: A): Boolean = s.contains(elem) override val isEmpty = false def +(elem: A): NonEmptySortedSet[A] = new NonEmptySortedSet(s + elem) def -(elem: A): SortedSet[A] = s - elem override def ++(elems: GenTraversableOnce[A]): NonEmptySortedSet[A] = new NonEmptySortedSet(s ++ elems) def map[B](f: A => B)(implicit ord: scala.Ordering[B]): NonEmptySortedSet[B] = new NonEmptySortedSet(s.map(f)) def iterator: Iterator[A] = s.iterator implicit val ordering: Ordering[A] = s.ordering override def rangeImpl(from: Option[A], until: Option[A]): SortedSet[A] = s.rangeImpl(from, until) override def keysIteratorFrom(start: A): Iterator[A] = s.keysIteratorFrom(start)} | NonEmptySortedSet implementation | scala;collections | null |
_codereview.119750 | I wonder if there exists a shorter/more elegant functional programming way than listing all the possible cases. Here, a function that determines positions of beginning/end of subintervals greater than threshold is coded. The idea behind the listed code is to mark and retain the beginning of such an interval, then to push a tuple of (beginning,ending) as soon as the interval ends. Feel free to choose any other approach if needed.-- | Determines the intervals greater than threshold.---- Examples:-- >>> intervals 0.5 [0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0]-- [(3,4),(8,10)]-- >>> intervals 0.5 [1,0,0,0,1,1,0,0,0,1,1,1,0]-- [(0,0),(4,5),(9,11)]-- >>> intervals 0.5 [1,0,0,0,1,1,0,0,0,1,1,1,0,1,1,1]-- [(0,0),(4,5),(9,11),(13,15)]intervals :: Ord a => a -> [a] -> [(Int, Int)]intervals threshold ys = f False 0 p where p = zip [0..] . map (> threshold) $ ys f :: Bool -> Int -> [(Int, Bool)] -> [(Int, Int)] f _ _ [] = [] f True startPos ((bPos,b):[]) | b = [(startPos, bPos)] | otherwise = [(startPos, startPos)] f False _ ((bPos,b):[]) | b = [(bPos, bPos)] | otherwise = [] f True startPos ((aPos,a):(bPos,b):as) | a && b = f True startPos ((bPos,b):as) | a && (not b) = ((startPos, aPos)) : (f False 0 as) | otherwise = (startPos, startPos) : (f False 0 ((bPos,b):as)) f False _ ((aPos,a):as) | a = f True aPos as | otherwise = f False 0 as | A function determining intervals of values greater than threshold | haskell | (You can skip right to TL;DR for a simpler approach)Your function actually determines the indices of list elements that are above a threshold. In Haskell, when you have a list, an index is not the idiomatic way to represent its items. What do you want with those indices?Agreed, your version is hard to read. 
For another approach, I start withintervalsT :: [Bool] -> [(Int, Int]and notice that the group function might come in handy to collect subsequent equal elements.*Main> group [True,True,False,False,True][[True,True],[False,False],[True]]mapping length will result in [2,2,1], which is a step closer to the indices. To turn [a,b,c] into [0,a ,a+b, a+b+c], the function scanl' is perfect:*Main> scanl' (+) 0 [2,2,1][0,2,4,5]which we can zip with its own tail. But wait! We lost information whether something is above or below threshold.zip it again with the grouped Bools, filter based on the bools, throw away the bools. This yields:TL;DRintervals p = intervalsT . map (>p)intervalsT :: [Bool] -> [(Int,Int)]intervalsT xs = let grouped = group xs idx = scanl' (+) 0 . map length $ grouped ivs = zip idx (map (subtract 1) $ tail idx) in map snd $ filter fst $ zip (map head grouped) ivs |
_codereview.161680 | My Second take on this can be found here

I wanted to make a simple console game in order to practice OOP. I would really appreciate a review that looks at readability, maintenance, and best practices.

What annoys me a little bit with this code is I don't use interfaces, abstract classes, or inheritance, but I couldn't find a good use case for them here.

Board.java

package com.tn.board;

import com.tn.constants.Constants;
import com.tn.ship.Ship;
import com.tn.utils.Position;
import com.tn.utils.Utils;

import java.awt.Point;
import java.util.Scanner;

public class Board {

    private static final Ship[] ships;
    private char[][] board;

    /**
     * Initialize ships (once).
     */
    static {
        ships = new Ship[]{
            new Ship("Carrier", Constants.CARRIER_SIZE),
            new Ship("Battleship", Constants.BATTLESHIP_SIZE),
            new Ship("Cruiser", Constants.CRUISER_SIZE),
            new Ship("Submarine", Constants.SUBMARINE_SIZE),
            new Ship("Destroyer", Constants.DESTROYER_SIZE)
        };
    }

    /**
     * Constructor
     */
    public Board() {
        board = new char[Constants.BOARD_SIZE][Constants.BOARD_SIZE];
        for(int i = 0; i < Constants.BOARD_SIZE; i++) {
            for(int j = 0; j < Constants.BOARD_SIZE; j++) {
                board[i][j] = Constants.BOARD_ICON;
            }
        }
        placeShipsOnBoard();
    }

    /**
     * Target ship ship.
     *
     * @param point the point
     * @return ship
     */
    public Ship targetShip(Point point) {
        boolean isHit = false;
        Ship hitShip = null;
        for(int i = 0; i < ships.length; i++) {
            Ship ship = ships[i];
            if(ship.getPosition() != null) {
                if(Utils.isPointBetween(point, ship.getPosition())) {
                    isHit = true;
                    hitShip = ship;
                    break;
                }
            }
        }
        final char result = isHit ? Constants.SHIP_IS_HIT_ICON : Constants.SHOT_MISSED_ICON;
        updateShipOnBoard(point, result);
        printBoard();
        return (isHit) ? hitShip : null;
    }

    /**
     * Place ships on board.
     */
    private void placeShipsOnBoard() {
        System.out.printf("%nAlright - Time to place out your ships%n%n");
        Scanner s = new Scanner(System.in);
        for(int i = 0; i < ships.length; i++) {
            Ship ship = ships[i];
            boolean isShipPlacementLegal = false;
            System.out.printf("%nEnter position of %s (length %d): ", ship.getName(), ship.getSize());
            while(!isShipPlacementLegal) {
                try {
                    Point from = new Point(s.nextInt(), s.nextInt());
                    Point to = new Point(s.nextInt(), s.nextInt());
                    while(ship.getSize() != Utils.distanceBetweenPoints(from, to)) {
                        System.out.printf("The ship currently being placed on the board is of length: %d. Change your coordinates and try again", ship.getSize());
                        from = new Point(s.nextInt(), s.nextInt());
                        to = new Point(s.nextInt(), s.nextInt());
                    }
                    Position position = new Position(from, to);
                    if(!isPositionOccupied(position)) {
                        drawShipOnBoard(position);
                        ship.setPosition(position);
                        isShipPlacementLegal = true;
                    } else {
                        System.out.println("A ship in that position already exists - try again");
                    }
                } catch(IndexOutOfBoundsException e) {
                    System.out.println("Invalid coordinates - Outside board");
                }
            }
        }
    }

    private void updateShipOnBoard(Point point, final char result) {
        int x = (int) point.getX() - 1;
        int y = (int) point.getY() - 1;
        board[y][x] = result;
    }

    /**
     *
     * @param position
     * @return
     */
    private boolean isPositionOccupied(Position position) {
        boolean isOccupied = false;
        Point from = position.getFrom();
        Point to = position.getTo();
        outer:
        for(int i = (int) from.getY() - 1; i < to.getY(); i++) {
            for(int j = (int) from.getX() - 1; j < to.getX(); j++) {
                if(board[i][j] == Constants.SHIP_ICON) {
                    isOccupied = true;
                    break outer;
                }
            }
        }
        return isOccupied;
    }

    /**
     *
     * @param position
     */
    private void drawShipOnBoard(Position position) {
        Point from = position.getFrom();
        Point to = position.getTo();
        for(int i = (int) from.getY() - 1; i < to.getY(); i++) {
            for(int j = (int) from.getX() - 1; j < to.getX(); j++) {
                board[i][j] = Constants.SHIP_ICON;
            }
        }
        printBoard();
    }

    /**
     * Print board.
     */
    private void printBoard() {
        System.out.print("\t");
        for(int i = 0; i < Constants.BOARD_SIZE; i++) {
            System.out.print(Constants.BOARD_LETTERS[i] + "\t");
        }
        System.out.println();
        for(int i = 0; i < Constants.BOARD_SIZE; i++) {
            System.out.print((i+1) + "\t");
            for(int j = 0; j < Constants.BOARD_SIZE; j++) {
                System.out.print(board[i][j] + "\t");
            }
            System.out.println();
        }
    }
}

Constants.java

package com.tn.constants;

public class Constants {

    private Constants() {}

    public static final int PLAYER_LIVES = 17; //sum of all the ships

    public static final int CARRIER_SIZE = 5;
    public static final int BATTLESHIP_SIZE = 4;
    public static final int CRUISER_SIZE = 3;
    public static final int SUBMARINE_SIZE = 3;
    public static final int DESTROYER_SIZE = 2;

    public static final char SHIP_ICON = 'X';
    public static final char BOARD_ICON = '-';
    public static final char SHIP_IS_HIT_ICON = 'O';
    public static final char SHOT_MISSED_ICON = 'M';

    public static final char[] BOARD_LETTERS = {'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'};

    public static final int BOARD_SIZE = 10;
}

Player.java

package com.tn.player;

import com.tn.board.Board;
import com.tn.constants.Constants;
import com.tn.ship.Ship;

import java.awt.Point;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

public class Player {

    private int id;
    private int lives;
    private Board board;
    private Map<Point, Boolean> targetHistory;
    private Scanner scanner;

    /**
     * Instantiates a new Player.
     *
     * @param id the id
     */
    public Player(int id) {
        System.out.printf("%n=== Setting up everything for Player %s ====", id);
        this.id = id;
        this.lives = Constants.PLAYER_LIVES;
        this.board = new Board();
        this.targetHistory = new HashMap<>();
        this.scanner = new Scanner(System.in);
    }

    /**
     * Gets id.
     *
     * @return the id
     */
    public int getId() {
        return id;
    }

    /**
     * Gets lives.
     *
     * @return the lives
     */
    public int getLives() {
        return lives;
    }

    /**
     * Decrement live by one.
     */
    public void decrementLiveByOne() {
        lives--;
    }

    /**
     * Turn to play.
     *
     * @param opponent the opponent
     */
    public void turnToPlay(Player opponent) {
        System.out.printf("%n%nPlayer %d, Choose coordinates you want to hit (x y) ", id);
        Point point = new Point(scanner.nextInt(), scanner.nextInt());
        while(targetHistory.get(point) != null) {
            System.out.print("This position has already been tried");
            point = new Point(scanner.nextInt(), scanner.nextInt());
        }
        attack(point, opponent);
    }

    /**
     * Attack
     *
     * @param point
     * @param opponent
     */
    private void attack(Point point, Player opponent) {
        Ship ship = opponent.board.targetShip(point);
        boolean isShipHit = (ship != null) ? true : false;
        if(isShipHit) {
            ship.shipWasHit();
            opponent.decrementLiveByOne();
        }
        targetHistory.put(point, isShipHit);
        System.out.printf("Player %d, targets (%d, %d)", id, (int)point.getX(), (int)point.getY());
        System.out.println("...and " + ((isShipHit) ? "HITS!" : "misses..."));
    }
}

Ship.java

package com.tn.ship;

import com.tn.utils.Position;

public class Ship {

    private String name;
    private int size;
    private int livesLeft;
    private boolean isSunk;
    private Position position;

    public Ship(String name, int size) {
        this.name = name;
        this.size = size;
        this.livesLeft = size;
        this.isSunk = false;
    }

    public String getName() {
        return name;
    }

    public int getSize() {
        return size;
    }

    public int getLivesLeft() {
        return livesLeft;
    }

    public boolean isSunk() {
        return isSunk;
    }

    public void setSunk(boolean sunk) {
        isSunk = sunk;
    }

    public Position getPosition() {
        return position;
    }

    public void setPosition(Position position) {
        this.position = position;
    }

    public void shipWasHit() {
        if(livesLeft == 0) {
            isSunk = true;
            System.out.println("You sunk the " + name);
            return;
        }
        livesLeft--;
    }
}

Position.java

package com.tn.utils;

import com.tn.constants.Constants;

import java.awt.Point;

public class Position {

    private Point from;
    private Point to;

    /**
     * Instantiates a new Position.
     *
     * @param from the from
     * @param to the to
     */
    public Position(Point from, Point to) {
        if(from.getX() > Constants.BOARD_SIZE || from.getX() < 0
                || from.getY() > Constants.BOARD_SIZE || from.getY() < 0
                || to.getX() > Constants.BOARD_SIZE || to.getX() < 0
                || to.getY() > Constants.BOARD_SIZE || to.getY() < 0) {
            throw new ArrayIndexOutOfBoundsException();
        }
        this.from = from;
        this.to = to;
    }

    /**
     * Gets from.
     *
     * @return the from
     */
    public Point getFrom() {
        return from;
    }

    /**
     * Gets to.
     *
     * @return the to
     */
    public Point getTo() {
        return to;
    }
}

Utils.java

package com.tn.utils;

import java.awt.Point;

public class Utils {

    private Utils() {
    }

    /**
     * Distance between points double.
     *
     * @param from the from
     * @param to the to
     * @return the double
     */
    public static double distanceBetweenPoints(Point from, Point to) {
        double x1 = from.getX();
        double y1 = from.getY();
        double x2 = to.getX();
        double y2 = to.getY();
        return Math.sqrt(Math.pow(x1-x2, 2) + Math.pow(y1-y2, 2)) + 1;
    }

    /**
     * Is point between boolean.
     *
     * @param point the point
     * @param position the position
     * @return the boolean
     */
    public static boolean isPointBetween(Point point, Position position) {
        Point from = position.getFrom();
        Point to = position.getTo();
        return from.getY() <= point.getY() && to.getY() >= point.getY()
                && from.getX() <= point.getX() && to.getX() >= point.getX();
    }
}

Game.java

package com.tn.game;

import com.tn.player.Player;

public class Game {

    private Player[] players;

    /**
     * Instantiates a new Game.
     */
    public Game() {
        players = new Player[]{
            new Player(1),
            new Player(2)
        };
    }

    /**
     * Start.
     */
    public void start() {
        int i = 0;
        int j = 1;
        int size = players.length;
        Player player = null;
        while(players[0].getLives() > 0 && players[1].getLives() > 0) {
            players[i++ % size].turnToPlay(players[j++ % size]);
            player = (players[0].getLives() < players[1].getLives()) ?
players[1] : players[0]; } System.out.printf(Congrats Player %d, you won!,player.getId()); }}Main.javapackage com.tn;import com.tn.game.Game;public class Main { public static void main(String[] args) { Game game = new Game(); game.start(); }} | OOP Battleship console game in Java | java;object oriented | Thanks for sharing your code.What annoys me a little bit with this code is I don't use interfaces, abstract classes, or inheritance, Doing OOP means that you follow certain principles which are (amongst others):information hiding / encapsulationsingle responsibilityseparation of concernsKISS (Keep it simple (and) stupid.)DRY (Don't repeat yourself.)Tell! Don't ask.Law of demeter (Don't talk to strangers!)Interfaces, abstract classes, or inheritance support hat principles and should be used as needed. They do not define OOP.IMHO the main reason why your approach fails OOP is that your Model is an array of an primitive type char. This ultimately leads to a procedural approach for the game logic.I would think of an interface like this:interface GameField{ char getIcon(); Result shootAt();}where Result would be an enum: enum Result{ NO_HIT, PARTIAL_HIT, DESTROYED }And I would have different implementations of the interface:public class BorderField implements GameField{ private final char borderName; public BorderField(char borderName){ this.borderName = borderName; } @Override public char getIcon(){ return borderName; } @Override public Result shootAt(){ return Result.NO_HIT; }}public class WaterField implements GameField{ private boolean isThisFieldHit = false; @Override public char getIcon(){ return isThisFieldHit?'M': ' '; } @Override public Result shootAt(){ return Result.NO_HIT; }}public class ShipField implements GameField{ private final Ship ship; private boolean isThisFieldHit = false; public ShipField(Ship ship){ this.ship = ship; } @Override public char getIcon(){ Result shipState = ship.getState(); switch(shipState){ case NO_HIT: return ' '; case 
PARTIAL_HIT: return isThisFieldHit?'O':' '; case DESTROYED: return '#'; } @Override public Result shootAt(){ ship.hit(); return ship.getState(); }}This should be enough, hope you get the idea...Formal issuesNamingFinding good names is the hardest part in programming. So always take your time to think about your identifier names.On the bright side you follow the Java naming conventions.But you should have your method names start with a verb in its present tense.E.g.: shipWasHit() should be named hit().Or distanceBetweenPoints() should be calculateDistanceBetween(). Here the parameters reveal that the distance is between points, so no need to put that in the method name. Be verbose in your variable names. instead of double x1 = from.getX(); double y1 = from.getY(); double x2 = to.getX(); double y2 = to.getY();this variables should rather be named like this: double startPointX = from.getX(); double startPointY = from.getY(); double endPointX = to.getX(); double endPointY = to.getY();Take your names from the problem domain, not from the technical solution.eg.: SHIP_ICON should be SHIP only unless you have another constant within the Ship class.CommentsComments should explain why the code is like it is. Remove all other comments. comments should only be used on interface or abstract methods where they contain the contract that the implementer must fulfill.Constants classPut things together that belong together. Define constants in the class that uses them. |
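The review's sketch stops at the field types; as a minimal illustration of where that design leads, here is a self-contained sketch of a board composed of GameField cells. Only Result, GameField and the field idea come from the answer above; the Board class, its indexing, and the '-'/'M' icons are my assumptions, and WaterField is adjusted to record the shot so its icon can change.

```java
// Illustrative only: Board and the '-'/'M' icons are assumptions; Result and
// GameField follow the shapes proposed above, with WaterField adjusted to
// remember that it was shot at.
enum Result { NO_HIT, PARTIAL_HIT, DESTROYED }

interface GameField {
    char getIcon();
    Result shootAt();
}

class WaterField implements GameField {
    private boolean wasShotAt = false;

    public char getIcon() { return wasShotAt ? 'M' : '-'; }

    public Result shootAt() {
        wasShotAt = true;      // record the miss so the icon can change
        return Result.NO_HIT;
    }
}

class Board {
    private final GameField[][] fields;

    Board(int size) {
        fields = new GameField[size][size];
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++)
                fields[y][x] = new WaterField();   // every cell starts as water
    }

    void place(int x, int y, GameField field) { fields[y][x] = field; }

    // The board never interprets chars itself; it delegates to the cell.
    Result shootAt(int x, int y) { return fields[y][x].shootAt(); }

    char iconAt(int x, int y) { return fields[y][x].getIcon(); }
}

class BoardSketch {
    public static void main(String[] args) {
        Board board = new Board(10);
        System.out.println(board.iconAt(0, 0));   // -
        System.out.println(board.shootAt(0, 0));  // NO_HIT
        System.out.println(board.iconAt(0, 0));   // M
    }
}
```

With this shape, printing the board is just a loop over iconAt, and a Player can call shootAt without any char bookkeeping.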
_webapps.40552 | When creating a flowchart in Visio I can choose how connectors will be rendered as:right-anglestraightcurvedBut when drawing the same chart in a Google drawing there only seems to be a straight type. I can't seem to find a way to draw lines at right angles. Is that possible?Even though there isn't any possibility to draw right-angled connectors by default, there are two missing features that would make such a task at least easier to accomplish:ability to add manual alignment guidelines - using these one could easily draw two straight lines that would connect perfectly at an intersectionability for lines to have connection points similar to shapes - using these one could position one line and then draw the other and connect it to the first line, making both of them connectedHow do you solve this problem? | Connectors (lines and arrows) in Google drawings | google drawing;diagrams | There is no automatic way to create a right-angled line. The best way to do this would be to:Create 2 intersecting lines. I think you mention this in your 1st bullet. One way to make sure that the lines are perpendicular to each other is to hold the Shift key down while dragging out the line. This will automatically snap it to a preset angle (0, 45, 90 degrees, etc). By making 2 lines perpendicular you could make the ends meet and create a right-angled line.Use the polyline tool. You can find this by clicking on the drop-down arrow next to the line icon. This menu will allow you to create lines of different types. Examples are:ArrowCurvePolylineSome of these new lines (not all) will allow you to attach the ends to the connection point of another shape (like a text box), but you'll have to fiddle around with it to find what solution works best for you.If you're looking to create formal flowcharts with Google Drive, there are now apps you can connect to your Drive account that will help you create flowcharts. Examples include Lucidchart and Draw.io. 
To connect these apps to your Drive account, go into your Drive and click Create > Connect more apps |
_unix.351576 | I'm having unexpected behavior using the telnet command on various Linux systems (Linux Mint, Ubuntu server).When trying to connect to a non-existent device, it succeeds. I tested with 1.2.3.4, which is not a placeholder.Telnet$ telnet 1.2.3.4 9100Trying 1.2.3.4...Connected to 1.2.3.4.Escape character is '^]'.^]telnet> Connection closed.Pingfails as expected; this is not the problem!$ ping 1.2.3.4 -c 5PING 1.2.3.4 (1.2.3.4) 56(84) bytes of data.--- 1.2.3.4 ping statistics ---5 packets transmitted, 0 received, 100% packet loss, time 4076msTraceroute (update)$ sudo traceroute -T -p telnet 1.2.3.4traceroute to 1.2.3.4 (1.2.3.4), 30 hops max, 60 byte packets 1 1.2.3.4 (1.2.3.4) 2.693 ms 3.166 ms 3.178 msRoute$ route -nKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 192.168.2.240 0.0.0.0 UG 600 0 0 wlp4s0169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 br-427309471a28172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-427309471a28192.168.2.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp4s0Even after stopping Docker I can still reproduce it:$ route -nKernel IP routing tableDestination Gateway Genmask Flags Metric Ref Use Iface0.0.0.0 192.168.2.240 0.0.0.0 UG 600 0 0 wlp4s0192.168.2.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp4s0QuestionHere is a recording.Why is telnet connecting to a device that doesn't exist? | Telnet connects to a non-existent address | networking;telnet;connectivity | null
_cstheory.16214 | What is known about the computational complexity of factoring integers in general number fields? More specifically:Over the integers we represent integers via their binary expansions. What is the analogous representations of integers in general number fields?Is it known that primality over number fields is in P or BPP?What are the best known algorithms for factoring over number fields? (Do the $\exp \sqrt n$ and the (apparently) $\exp n^{1/3}$ algorithms extend from $\mathbb{Z}$?) Here, factoring refers to finding some representation of a number (represented by $n$ bits) as a product of primes.What is the complexity of finding all factorizations of an integer in a number field? Of counting how many distinct factorizations it has? Over $\mathbb{Z}$ it is known that deciding if a given number has a factor in an interval $[a,b]$ is NP-hard. Over the ring of integers in number fields, can it be the case that finding if there is a prime factor whose norm is in a certain interval is already NP-hard? Is factoring in number fields in BQP?Remarks, motivations and updates.Of course the fact that factorization is not unique over number fields is crucial here. The question (especially part 5) was motivated by this blog post over GLL (see this remark), and also by this earlier TCSexchange question. I presented it also over my blog where Lior Silverman presented a thorough answer. | Complexity of factoring in number fields | cc.complexity theory;nt.number theory;comp number theory | null |
_unix.220640 | I'm trying to sftp to a remote host, yet I keep getting the following output:$ sftp -v X.X.XOpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013debug1: Reading configuration data /home/alexus/.ssh/configdebug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 56: Applying options for *debug1: Connecting to X.X.X [X.X.X.X] port 22.debug1: Connection established.debug1: identity file /home/alexus/.ssh/id_rsa type 1debug1: identity file /home/alexus/.ssh/id_rsa-cert type -1debug1: identity file /home/alexus/.ssh/id_dsa type -1debug1: identity file /home/alexus/.ssh/id_dsa-cert type -1debug1: identity file /home/alexus/.ssh/id_ecdsa type -1debug1: identity file /home/alexus/.ssh/id_ecdsa-cert type -1debug1: identity file /home/alexus/.ssh/id_ed25519 type -1debug1: identity file /home/alexus/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_6.6.1ssh_exchange_identification: read: Connection reset by peerCouldn't read packet: Connection reset by peer$ I tried to Google it, but as of now I have not found a solution. Any suggestions? | sftp (ssh_exchange_identification: read: Connection reset by peer) | ssh;sftp | null
_unix.347981 | I got an Nvidia GTX 970 made by EVGA in my desktop PC. I was using Fedora 25 for almost a month without any problems. I had the drivers installed following the instructions from the RPM Fusion howto.dnf install xorg-x11-drv-nvidia akmod-nvidia kernel-devel-uname-r == $(uname -r)dnf update -yBut life cannot be so simple! Yesterday I made a system update and turned off the machine. Today when it booted I saw a low-resolution mode, completely without Nvidia graphics support. I chose other kernels from the boot menu: the same.I tried reinstalling the Nvidia drivers as in the above-posted howto, also without result. Then I removed the Nvidia driver completely withdnf remove xorg-x11-drv-nvidia\*and did a reboot. Now I ended up with my desktop in a state where it doesn't boot at all. After choosing the kernel in GRUB the monitor goes blank in a moment and I cannot even go to a terminal with Ctrl+Alt+F2. In the GRUB menu I tried the rescue option, which leads me to a kind of limited console, but I cannot install the driver from there due to lack of an internet connection in that mode.I was reading some old thread which explained how to enter text mode, but it was from the pre-systemd era.So... The question: how can I boot in this situation into text mode with internet support on Fedora 25 and systemd to be able to install the drivers? Or is there any other easy way to fix it? | Fedora 25 doesn't boot after removing the Nvidia driver | fedora;boot;drivers;nvidia | null
_codereview.149283 | I just have started to introduce myself into network programming using C++. So I started with Winsock. The code I made is compiled with MinGW and works perfectly!As a beginner, the main purpose of code was to do what I wanted. Now it's time to go to next level!The program will download a webpage source using a given socket and the website address/ip and port. After the connection to socks4 is established, the program sends a packet telling to associate with destination server.Now a GET request is send to sock and it return the response from server.So far so good and we have the following code:main.cpp#include <winsock.h>#include <string>#include <iostream>#include util.hppusing namespace std;int main(void){ //Socks4 info u_short sockPort = 1080; std::string sockIp = xx.xx.xx.xx; //Destination info u_short destPort = 80; std::string destIPorURL = checkip.dyndns.com; WSADATA wsaData; if (WSAStartup(MAKEWORD(2,0), &wsaData)==0) { if (LOBYTE(wsaData.wVersion) < 2) { cout << WSA Version error!; return -1; } } else { cout << WSA Startup Failed; return -1; } /////////////////////////////////////////////////////////////////////////////////////////// //Init socket for socks4 cout << Initialize sock_addr for socks4 connection...; sockaddr_in sock; sock.sin_family = AF_INET; // host byte order sock.sin_port = htons( sockPort ); // short, network byte order if(!utils::getHostIP(sock.sin_addr.S_un.S_addr, sockIp)) // Write ip address in the right format { cout << fail; return -1; } cout << done << endl; //Creating socket handler cout << Creating socket handler...; SOCKET hSocketSock = INVALID_SOCKET; if( (hSocketSock = socket(AF_INET, SOCK_STREAM, 0)) == INVALID_SOCKET ) { cout << fail; return -1; } cout << done << endl; ///////////////////////////////////////////////////////////////////////////////////////////// //Init socket for destination server cout << Initialize sock_addr for destination server...; sockaddr_in dest; dest.sin_family = AF_INET; 
dest.sin_port = htons( destPort ); if(!utils::getHostIP(dest.sin_addr.S_un.S_addr, destIPorURL)) // Write ip address in the right format { cout << fail; return -1; } memset( &(dest.sin_zero), '\0', 8 ); cout << done << endl; //////////////////////////////////////////////////////////////////////////////////////////////// //Time to make connection to socks cout << Connecting to sock server...; if(connect(hSocketSock, reinterpret_cast<sockaddr *>(&sock), sizeof(sock)) != 0) { cout << failed; return -1; } cout << done << endl; //We are connected now to our socket! //All we have to do is to send our desires. //From now, the things differ from SOCKS4 to SOCKS5. //So, the code below apply only for SOCKS4! //So, we'll start with SOCKS4 //Documentation: http://www.openssh.com/txt/socks4.protocol //The packet we have to build // +----+----+----+----+----+----+----+----+----+----+....+----+ // | VN | CD | DSTPORT | DSTIP | USERID |NULL| // +----+----+----+----+----+----+----+----+----+----+....+----+ //# of bytes: 1 1 2 4 variable 1 //This packet is meant to inform socks server who's the destination we want to communicate with. char *initPacket = new char[9]; //9 because we don't use auth. initPacket[0] = 4; //Sock version we use is 4 initPacket[1] = 1; //Connect code memcpy(initPacket + 2, &dest.sin_port, 2); //Copy port into memcpy(initPacket + 4, &dest.sin_addr.S_un.S_addr, 4); //Copy ip address initPacket[8] = 0; //No username for auth provided //Sending our packet! cout << Sending init packet to socks...; if(send(hSocketSock, initPacket, 9, 0) == SOCKET_ERROR ) { cout << fail; return -1; } cout << done << endl; //Don't need init packet anymore as we have send it. delete[] initPacket; //We want a replay. This will tell us if the sock is able to communicate with destination char replay[8]; //WHY 8? 
Because of table below :) memset(&replay, 0, 8); //Reading the response cout << Reading reaponse from sock...; //if(recv(hSocketSock, replay, strlen((const char *)replay), 0) == SOCKET_ERROR) if(recv(hSocketSock, replay, 8, 0) == SOCKET_ERROR) { fail; return -1; } cout << done << endl; // Expected response format: // +----+----+----+----+----+----+----+----+ // | VN | CD | DSTPORT | DSTIP | // +----+----+----+----+----+----+----+----+ //# of bytes: 1 1 2 4 // VN is the version of the reply code and should be 0. CD is the result // code with one of the following values: // // 90: request granted // 91: request rejected or failed // 92: request rejected becasue SOCKS server cannot connect to identd on the client // 93: request rejected because the client program and identd report different user-ids. //So, we have to check if replay is ok :) cout << Checking replay version code...; if(replay[0] != 0) { cout << fail - << (int)replay[0]; return -1; } cout << ok << endl; //Returned code: 90 = access granted cout << Checking replay returned code...; if(replay[1] != 90) { cout << failed - << (int)replay[1]; return -1; } cout << (int)replay[1] << - ok << endl; //Those being said, if everithing is ok, we can use @hSocketSock handler to send/recv data. //Let's download the content of an webpage: std::string headers = GET + destIPorURL + HTTP/1.0\r\nHost: + utils::getHostFromUrl(destIPorURL) + \r\n\r\n; //Send our request cout << endl << Sending custom request...; int sendResult = send(hSocketSock, headers.c_str(), headers.length(), 0); if(sendResult == SOCKET_ERROR) { cout << sendResult << - failed; return -1; } cout << done! 
<< endl; std::string fullResp = ; char buffer[128]; cout << Reading response from server...; while(true) { int retval = recv(hSocketSock, buffer, strlen((const char *)buffer), 0); if(retval == 0) { break; } else if(retval == SOCKET_ERROR) { cout << failed; return -1; } else { buffer[retval] = 0; fullResp += buffer; } } cout << done << endl; cout << What we have got: << endl << fullResp; //Make clean! if(hSocketSock != INVALID_SOCKET) { closesocket(hSocketSock); } cout << endl; return 0;}As you may have seen, the util.hpp contain some namespaces with the following functions used in program (and other useful functions): std::string getHostFromUrl(std::string &url);bool getHostIP(unsigned long &ipAddr, std::string urlOrHostnameOrIp);Don't need a review for this. However I will post content of util.cpp and util.hpp in case someone wants to test. util.hpp:#include <winsock2.h>#include <string>#include <algorithm>#include <vector>namespace utils{ std::string getHostFromUrl(std::string &url); bool getHostIP(unsigned long &ipAddr, std::string urlOrHostnameOrIp); namespace IPAddr { bool isValidIPv4(std::string &ip); std::string reverseIpAddress(std::string ip); std::string decimalToDottedIp(unsigned long ip); unsigned long stripToDecimal(std::string &ip); } namespace strings { std::vector<std::string> split(std::string &s, char delim); std::string removeSubstrs(std::string &source, std::string pattern); }};util.cpp#include <stdexcept>#include <iostream>#include <sstream>#include <stdio.h>#include util.hpp#define cout std::cout#define endl std::endl/////////////////////////////////////////////////////////////////////////////////////// _ _ _ _ _// | \ | | __ _ _ __ ___ ___ ___ _ __ __ _ ___ ___ _ _| |_(_) |___// | \| |/ _` | '_ ` _ \ / _ \/ __| '_ \ / _` |/ __/ _ \ | | | | __| | / __|// | |\ | (_| | | | | | | __/\__ \ |_) | (_| | (_| __/ | |_| | |_| | \__ \// |_| \_|\__,_|_| |_| |_|\___||___/ .__/ \__,_|\___\___| \__,_|\__|_|_|___/// 
|_|/////////////////////////////////////////////////////////////////////////////////////bool utils::getHostIP(unsigned long &ipAddr, std::string url){ HOSTENT *pHostent; std::string hostname = getHostFromUrl(url); if( utils::IPAddr::isValidIPv4(hostname) ) { //IP Address must be reversed in order to be compatible with sockAddr.sin_addr.S_un.S_addr //example: 192.168.1.2 => 2.1.168.192 hostname = utils::IPAddr::reverseIpAddress(hostname); ipAddr = utils::IPAddr::stripToDecimal(hostname); return true; } if (!(pHostent = gethostbyname(hostname.c_str()))) { return false; } if (pHostent->h_addr_list && pHostent->h_addr_list[0]) { ipAddr = *reinterpret_cast<unsigned long *>(pHostent->h_addr_list[0]); return true; } return false;}std::string utils::getHostFromUrl(std::string &url){ std::string urlcopy = url; urlcopy = utils::strings::removeSubstrs(urlcopy, http://); urlcopy = utils::strings::removeSubstrs(urlcopy, www.); urlcopy = utils::strings::removeSubstrs(urlcopy, https://); urlcopy = urlcopy.substr(0, urlcopy.find(/)); return urlcopy;}// ___ ____ _ _ _// | _ _|| _ \ / \ __| | __| | _ __ ___ ___ ___// | | | |_) | / _ \ / _` | / _` || '__|/ _ \/ __|/ __|// | | | __/ / ___ \| (_| || (_| || | | __/\__ \\__ \// |___||_| /_/ \_\\__,_| \__,_||_| \___||___/|___/bool utils::IPAddr::isValidIPv4(std::string &ipv4){ const std::string address = ipv4; std::vector<std::string> arr; int k = 0; arr.push_back(std::string()); for (std::string::const_iterator i = address.begin(); i != address.end(); ++i) { if (*i == '.') { ++k; arr.push_back(std::string()); if (k == 4) { return false; } continue; } if (*i >= '0' && *i <= '9') { arr[k] += *i; } else { return false; } if (arr[k].size() > 3) { return false; } } if (k != 3) { return false; } for (int i = 0; i != 4; ++i) { const char* nPtr = arr[i].c_str(); char* endPtr = 0; const unsigned long a = ::strtoul(nPtr, &endPtr, 10); if (nPtr == endPtr) { return false; } if (a > 255) { return false; } } return true;}std::string 
utils::IPAddr::reverseIpAddress(std::string ip){ std::vector<std::string> octeti = utils::strings::split(ip, '.'); return (octeti[3] + . + octeti[2] + . + octeti[1] + . + octeti[0]);}unsigned long utils::IPAddr::stripToDecimal(std::string &ip){ unsigned long a,b,c,d,base10IP; sscanf(ip.c_str(), %lu.%lu.%lu.%lu, &a, &b, &c, &d); // Do calculations to convert IP to base 10 a *= 16777216; b *= 65536; c *= 256; base10IP = a + b + c + d; return base10IP;}std::string utils::IPAddr::decimalToDottedIp(unsigned long ipAddr){ unsigned short a, b, c, d; std::ostringstream os ; std::string ip = ; a = (ipAddr & (0xff << 24)) >> 24; b = (ipAddr & (0xff << 16)) >> 16; c = (ipAddr & (0xff << 8)) >> 8; d = ipAddr & 0xff; os << d << . << c << . << b << . << a; ip = os.str(); return ip;}// ____ _ _// / ___| | |_ _ __ (_) _ __ __ _ ___// \___ \ | __|| '__|| || '_ \ / _` |/ __|// ___) || |_ | | | || | | || (_| |\__ \// |____/ \__||_| |_||_| |_| \__, ||___/// |___/std::vector<std::string> utils::strings::split(std::string &s, char delim){ std::vector<std::string> elems; std::stringstream ss; ss.str(s); std::string item; while (std::getline(ss, item, delim)) { elems.push_back(item); } return elems;}std::string utils::strings::removeSubstrs(std::string &input, std::string pattern){ std::string source = input; std::string::size_type n = pattern.length(); for (std::string::size_type i = source.find(pattern); i != std::string::npos; i = source.find(pattern)) { source.erase(i, n); } return source;}The thing is that I have some plans for some projects which uses sockets and I want to get a good start!I think that the code I wrote is not so easy to understand. For me it's easy but for the others my not!Hope it is not to soon to say that but I want to write some professional code. So, be as critical as possible!PS: Not sure if it helps, but the output looks like the following: | Download website source through Socks4 using Winsock | c++;networking;socket;tcp | null |
_unix.233014 | My question is related to the sed-specific solution given in this answer for this question of reverse grepping. The sed/grep solution that I am unable to decipher is the following one: sed '1!G;h;$!d' fileCan someone please decipher this command for a beginner like me? I know from VI(M) knowledge that G denotes the last line of the file and that in sed a bang(!) followed by an address work a bit like grep -v that is to say that it will not match that line. But as a whole the inline sed script above is beyond me. | How does the command sed '1!G;h;$!d' reverse the contents of a file? | sed | This reverses the file line by line.sed '1!G;h;$!d' fileFirst, sed has a hold space and a pattern space. We have to distinguish between them before concentrating on that specific command.When sed reads a new line, it is loaded into the pattern space. Therefore, that space is overwritten every time a new line is processed. On the other hand, the hold space is consistent over the whole processing and values can be stored there for later usage.To the command:There are 3 commands in this statement: 1!G, h and $!d1!G means that the G command is executed on every line except the first one (the ! negates the 1). G means to append what is in the hold space into the pattern space.h applies to every line. It copies the pattern space to the hold space (and overwrites it).$!d applies to every line except the last one ($ represents the last line, ! negates it). d is the command to delete the line (pattern space).Now, when the first line is read, sed executes the h command. The first line is copied into the hold space. Then it is deleted, since it matches the $! condition. sed continues with the second line.The second line matches the condition 1! (it's not the first line), and so the hold space (which has the first line) is appended to the pattern space (which has the second line). 
After that, in the pattern space, there is now the second line followed by the first line, delimited by a newline. Now, the h command applies (as in every line); all that is in the pattern space is copied to the hold space. The third statement ($!d) applies: The line is deleted from the pattern space.Step 2 is now done with all lines. We skip to the last line.In the last line ($) nearly all of Step 2 is done, but not the delete part (d). sed, when invoked without -n, prints the pattern space automatically at the end of the processing for each input line. So, when not deleted, the pattern space is printed. It contains now all lines in reversed order. |
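The end-to-end behaviour described above is easy to verify with a three-line input; the command acts like tac:

```shell
# Feed three lines through the command from the question; the output
# is the input reversed line by line.
printf 'one\ntwo\nthree\n' | sed '1!G;h;$!d'
# prints:
# three
# two
# one
```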
_unix.310955 | I want to embed an initramfs into my kernel at build time, but to create the rootfs image I need access to a number of modules for the kernel I'm about to build.What is the proper way to resolve this apparent chicken/egg problem of building the modules, then the initramfs, and then finally the kernel proper?I can't seem to find much documentation on this particular workflow. One option I am considering (since this is an embedded device with a known hardware config) is to just forego modules for any boot-time required code.UpdateI guess a better question is in order. Here's some additional info:This is running on the Raspberry Pi which has it's own closed-source bootloader which can be configured via a couple text files in the boot partition.The main impetus of this question is to ease updates in the nearterm (being able to just ship the kernel/initramfs in a single package without fiddling with bootloader configs that don't support logic for calculating addr offsets), to ease future migration to a secure boot chain (most likely through an interposed uboot loader; more files, more signatures, more pain), and to keep our OS build process simple (doesn't currently use a discrete bootloader).The more I look at it, the more I'm leaning towards just spending a day and getting uboot integrated into our build pipeline and not looking back. It just seems the whole Linux boot process is overly complex--especially for a fixed hardware spec system--and I'm trying to find ways to simplify through degeneralization. | How can I compile Linux kernel modules without compiling the kernel? | linux;linux kernel;kernel modules | null |
_unix.220705 | For example, I create 32768 directories named 0,1,2,...32767. I want to randomly choose one as the path every time I run a command. So I change $PATH to $PATH:/blabla/$RANDOM, but it won't work because $RANDOM is evaluated immediately. How can I delay the evaluation? | How can I delay the evaluation of variables (lazy evaluation)? | shell;environment variables;variable | This isn't a capability of any of the common shells.Recent versions of AT&T ksh have a unique feature among shells called discipline functions. You can execute custom code when a variable is accessed, and if you set .sh.value to a different value, that value is used instead of the value of the variable.function PATH.get { .sh.value=$PATH:/blabla/$RANDOM; }However, even this feature won't help you for PATH since it only triggers when a variable is used by the script, not by internal uses of PATH inside the shell.If you want that for the last PATH element, and you're using bash or zsh, you can use their command-not-found feature to invoke custom code if a command is not found. In bash:command_not_found_handle () { command /blabla/$RANDOM/$@}In zsh:command_not_found_handler () { /blabla/$RANDOM/$1 $@[2,$#]}Apart from these cases, there's no shell feature that'll help you. In any case, no shell feature will help you for programs that are not invoked by a shell.You could use LD_PRELOAD to override the execlp, execvp and execvpe library functions to do something different from breaking up PATH into colon-separated pieces and interpreting each of them as a directory. See Redirect a file descriptor before execution for an LD_PRELOAD example.Alternatively, you could put a PATH entry on a FUSE filesystem that implements a stacked filesystem that makes the given path correspond to a variable underlying directory. 
This will work for programs that just call execve with each PATH element until one works, but it'll confuse programs that first traverse the PATH entries looking for existing, executable files and then execute the one that is found. |
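A runnable toy of the lazy-evaluation idea (a sketch of a wrapper function, not the LD_PRELOAD or FUSE variants): the random directory is chosen inside the function, so it is re-evaluated on every invocation, unlike a one-time PATH assignment. od(1) stands in for bash's $RANDOM to keep the script POSIX sh; the directory count is shrunk from 32768 to 4.

```shell
#!/bin/sh
# Build a few numbered directories, each containing the same command name.
base=$(mktemp -d)
for i in 0 1 2 3; do
  mkdir "$base/$i"
  printf '#!/bin/sh\necho "ran from %s"\n' "$i" > "$base/$i/hello"
  chmod +x "$base/$i/hello"
done

runrand() {
  # A fresh random pick happens here, on every call.
  dir=$(( $(od -An -N2 -tu2 /dev/urandom) % 4 ))
  cmd=$1; shift
  "$base/$dir/$cmd" "$@"
}

runrand hello
runrand hello
```

Each call prints "ran from N" for some independently chosen N, which is exactly the behaviour the one-shot PATH assignment cannot give.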
_unix.127077 | I ended up in a situation wherein I had to access a Linux machine via PuTTY. I made various attempts to SSH in but failed to connect to the machine. I then realised my colleague was accessing the same Linux machine as the root user, and I too wanted access as root. I asked him to log out so that I could log in as root. Is there a way to limit the number of SSH logins on a Linux OS? Is this some kind of security feature that distinguishes a Windows-based OS from a Linux-based OS? I am fairly new to Linux; any genuine answer is appreciated. Thanks | Number of SSH connection(s) on a single linux machine | ssh;limit | null |
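No accepted answer here; for completeness, OpenSSH does not by default cap the number of logins per user, so having to ask a colleague to log out was not an OS limit. If you do want to cap concurrent use, the usual knobs are the following (directive names from the sshd_config and limits.conf man pages; the values and the username are illustrative):

```
# /etc/ssh/sshd_config
MaxSessions 10          # sessions multiplexed over one network connection
MaxStartups 10:30:100   # throttle concurrent *unauthenticated* connections

# /etc/security/limits.conf -- per-user cap enforced by PAM at login
someuser  hard  maxlogins  1
```

After editing sshd_config, the sshd service must be reloaded for the change to take effect.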
_cs.18181 | Does the term schema, in the context of describing a structure, refer to the actual structure of the data, or the description of this structure? I.e. can I talk about the schema of an entity without a schema language? For example, suppose I am writing documentation for some JSON API that always returns a particular structure, say:
{ "name" : "jeroen", "age" : 28 }
Can I say that the schema for the payload returned by this API includes required values for name and age? Or does the term exclusively refer to another document that formally describes this structure using a schema language of some sort? In that case, what is another appropriate term to refer to the recurring pattern in the structure? The GitHub API has a section titled schema: http://developer.github.com/v3/#schema. However, the word schema is not mentioned in the text itself, nor do they use any schema language. This would suggest that schema is just a general term for the structure/attributes of the output. Unless they consider English a schema language, in which case the entire page is a schema. Hmmm. | Precise definition of term: *schema* | terminology;encoding scheme | In order to write the documentation of the structure of the JSON objects returned from your API you can follow two approaches:
describe the common attributes of the objects returned by every function of your API, and then for each API function describe the specific attributes (their type, meaning, and example). Many APIs that use JSON to exchange data are documented using this approach: e.g. 
Google APIs for various services, Facebook Graph API, Twitter API, ... In this case I think it is not correct to use the term schema.
Or you can give a formal description of the output using a schema definition language like JSON-schema (designed by the IETF), which plays the same role as the XML schema definition language for XML. But I don't know if JSON-schema is an established standard (there is an active Google group, but I didn't find any notable example, ...). In this case you should provide a valid (json-)schema for the output of each function. If you choose this approach, then you should say that name and age are two required properties of the (json-)schemas of the (json-)objects returned by the functions of your APIs (but obviously you should provide the full schemas, too) |
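To make the data-versus-description distinction concrete: below, payload is the structure itself, while schema is a separate document describing it. The schema follows JSON-Schema's draft vocabulary; the tiny validator is hand-rolled for illustration only and is not a real JSON-Schema implementation.

```python
# payload: the data. schema: a separate document describing the data.
payload = {"name": "jeroen", "age": 28}

schema = {
    "type": "object",
    "required": ["name", "age"],
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
}

# Minimal hand-rolled check of "type" and "required" -- a real project
# would use a JSON-Schema validator library instead.
TYPES = {"object": dict, "string": str, "integer": int}

def validate(doc, sch):
    if not isinstance(doc, TYPES[sch["type"]]):
        return False
    if any(k not in doc for k in sch.get("required", [])):
        return False
    return all(
        isinstance(doc[k], TYPES[p["type"]])
        for k, p in sch.get("properties", {}).items()
        if k in doc
    )

print(validate(payload, schema))             # True
print(validate({"name": "jeroen"}, schema))  # False: "age" is required
```

With this split, "the schema" unambiguously names the second document, while the recurring pattern in the data itself can simply be called its structure or shape.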
_unix.284696 | Network: Finland(internet)-PC(Sweden)-Uni(Sweden). The Uni IP is fixed. Finland(internet) is reached over a 3G/4G connection. I need to use a VPN provider to change my country, but the VPN provider gives only ppp0, does not provide split tunnelling and does not give a public IP. Internet access comes from a mobile connection of Telia-Sonera of Finland. The first VPN connection allows access to institutional materials, but it requires that the local IP be in Sweden. I give them my username and password, but they have extra security based on the local IP. A second VPN (slave) is required to change the local IP from Finland to Sweden, but current VPN providers give only ppp0, no dynamic IP and no split tunnelling. (2-3 multiples because there are multiple institutions.)
Goals
Mobile internet. I am using Telia-Sonera, located in Finland. I do not know if you can change the server location of the operator dynamically. TODO: ask the operator.
VPN provider. Find one which provides a dynamic IP and/or split tunnelling.
Software. Make split tunnelling work. TODO: how to do this?
Unsuccessful attempts to gain access to split tunnelling
VPN - Tor (NordVPN Tor Sweden). This does not work; the uni connection is rejected.
System
Problem: the VPN provider provides only private access, ppp0. I contacted several VPN providers about this. My current VPN provider is NordVPN. Their answers about having multiple VPN connections at the same time, which I do not believe (since I think they are talking about their NordVPN application only, and their technical proficiency has been generally low):
Unfortunately, it is not possible to have multiple VPN connections active on same computer at the same time. - - No, you can not change the subnet details assigned for you. - - NordVPN routes your entire internet traffic through VPN, thus the only option for two VPN connections is to set up one VPN connection on a virtual machine. - - No we do not support split tunneling. 
but I am ready to change my VPN provider if it is needed here for the task. I am not sure if you can do split tunnelling by software. Proposal: Build a second layer of VPN on top of your established connection inside Finland. Do this with ifconfig, openvpn and not with mass market GUI. (here)
OS X El Capitan
Tunnelblick for both VPN connections? Some OS X part of the thread is discussed here, about How to use unique subnets of two VPN connections? Here is Tunnelblick's demo file config.ovpn without any changes, which you install by just drag-and-drop to Tunnelblick's GUI menubar:
##############################################
# Sample client-side OpenVPN 2.0 config file #
# for connecting to multi-client server.     #
#                                            #
# This configuration can be used by multiple #
# clients, however each client should have   #
# its own cert and key files.                #
#                                            #
# On Windows, you might want to rename this  #
# file so it has a .ovpn extension           #
##############################################

# Specify that we are a client and that we
# will be pulling certain config file directives
# from the server.
client

# Use the same setting as you are using on
# the server.
# On most systems, the VPN will not function
# unless you partially or fully disable
# the firewall for the TUN/TAP interface.
;dev tap
dev tun

# Windows needs the TAP-Win32 adapter name
# from the Network Connections panel
# if you have more than one. On XP SP2,
# you may need to disable the firewall
# for the TAP adapter.
;dev-node MyTap

# Are we connecting to a TCP or
# UDP server? Use the same setting as
# on the server.
;proto tcp
proto udp

# The hostname/IP and port of the server.
# You can have multiple remote entries
# to load balance between the servers.
remote my-server-1 1194
;remote my-server-2 1194

# Choose a random host from the remote
# list for load-balancing. Otherwise
# try hosts in the order specified.
;remote-random

# Keep trying indefinitely to resolve the
# host name of the OpenVPN server. Very useful
# on machines which are not permanently connected
# to the internet such as laptops.
resolv-retry infinite

# Most clients don't need to bind to
# a specific local port number.
nobind

# Downgrade privileges after initialization (non-Windows only)
;user nobody
;group nobody

# Try to preserve some state across restarts.
persist-key
persist-tun

# If you are connecting through an
# HTTP proxy to reach the actual OpenVPN
# server, put the proxy server/IP and
# port number here. See the man page
# if your proxy server requires
# authentication.
;http-proxy-retry # retry on connection failures
;http-proxy [proxy server] [proxy port #]

# Wireless networks often produce a lot
# of duplicate packets. Set this flag
# to silence duplicate packet warnings.
;mute-replay-warnings

# SSL/TLS parms.
# See the server config file for more
# description. It's best to use
# a separate .crt/.key file pair
# for each client. A single ca
# file can be used for all clients.
ca ca.crt
cert client.crt
key client.key

# Verify server certificate by checking
# that the certicate has the nsCertType
# field set to "server". This is an
# important precaution to protect against
# a potential attack discussed here:
# http://openvpn.net/howto.html#mitm
#
# To use this feature, you will need to generate
# your server certificates with the nsCertType
# field set to "server". The build-key-server
# script in the easy-rsa folder will do this.
;ns-cert-type server

# If a tls-auth key is used on the server
# then every client must also have the key.
;tls-auth ta.key 1

# Select a cryptographic cipher.
# If the cipher option is used on the server
# then you must also specify it here.
;cipher x

# Enable compression on the VPN link.
# Don't enable this unless it is also
# enabled in the server config file.
comp-lzo

# Set log file verbosity.
verb 3

# Silence repeating messages
;mute 20
Ubuntu 16.04
I can start to test here also if necessary. It would be great if the above OpenVPN solution works in both systems. How can you make such a tunnel with OpenVPN? 
| How to Do Split Tunnelling with Slave ppp0 VPN + 2nd VPN? | networking;openvpn | I have not managed to complete the solution with Ryder's answer. My current understanding is that you can only reach the target by using a Virtual Machine, as described here about How to Mimic Location of Slave VPN for Primary VPN? by klanomath:
Create a new VM hull and attach it to this NAT network.
Install a familiar OS (e.g. OS X 10.9-10.11) in this VM.
Set up a VPN connection in the VM to your school's VPN server in Sweden in the VPN pane in System Preferences -> Network, then shut down the VM and quit the hypervisor.
Connect to NordVPN in the non-virtualized OS.
Start the VM.
Connect to the school's VPN server in the virtualized OS. |
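For reference, OpenVPN itself can do client-side split tunnelling when you control the client config: the directives below (from the OpenVPN manual) refuse the routes pushed by the provider and send only one prefix through the tunnel. The prefix shown is a made-up example, and this does not help when the provider's own app hides the config — which is why the VM workaround above ends up being the practical answer here.

```
# additions to the client .ovpn -- illustrative sketch, not tested
route-nopull                    # ignore all routes pushed by the server
route 130.237.0.0 255.255.0.0   # route only this (example) uni prefix via VPN
```

Everything else then keeps using the default route outside the tunnel.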
_unix.306787 | I'm trying to combine two different files, one called itemnum and another called items. itemnum contains:ItemNumber12011221132013401410and items contains:ItemLobby FurnitureBallroom SpecialtiesPoolside CartsFormal Dining SpecialsReservation LogsI want to use join here right? So that the output looks like this:ItemNumber:Item1201:Lobby Furniture1221:Ballroom Specialties1320:Poolside Carts1340:Formal Dining Specials1410:Reservation LogsI can't even figure out how to get them to join, let alone add the :I tried join itemnum items > prodinfo, but that just gives me an empty file. | Join lines from two files | join;paste | null |
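The question has no accepted answer, so here is a hedged sketch: join(1) matches rows on a common key field, which these two files don't share, hence the empty result. For a plain line-by-line merge with a : separator, paste(1) is the tool (tags on the question already hint at it). A self-contained run, with the question's files recreated inline:

```shell
#!/bin/sh
# Recreate the two input files from the question.
cat > itemnum <<'EOF'
ItemNumber
1201
1221
1320
1340
1410
EOF
cat > items <<'EOF'
Item
Lobby Furniture
Ballroom Specialties
Poolside Carts
Formal Dining Specials
Reservation Logs
EOF

# paste merges corresponding lines; -d: sets the delimiter to a colon.
paste -d: itemnum items > prodinfo
cat prodinfo
```

This prints ItemNumber:Item, 1201:Lobby Furniture and so on, matching the desired output exactly.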
_unix.72216 | I've read about how to make hard drives secure for encryption, and one of the steps is to write random bits to the drive, in order to make the encrypted data indistinguishable from the rest of the data on the hard drive. However, when I tried using dd if=/dev/urandom of=/dev/sda in the past, the ETA was looking to be on the order of days. I saw something about using badblocks in lieu of urandom, but that didn't seem to help a whole lot. I would just like to know if there are any ways that might help me speed this up, such as options for dd or something else I may be missing, or if the speed is just a limitation of the HD. | Fast Way to Randomize HD? | encryption;dd;random | dd if=/dev/urandom of=/dev/sda, or simply cat /dev/urandom >/dev/sda, isn't the fastest way to fill a disk with random data. Linux's /dev/urandom isn't the fastest cryptographic RNG around. Is there an alternative to /dev/urandom? has some suggestions. In particular, OpenSSL contains a faster cryptographic PRNG:
openssl rand $(</proc/partitions awk '$4=="sda" {print $3*1024}') >/dev/sda
Note that in the end, whether there is an improvement or not depends on which part is the bottleneck: the CPU or the disk. The good news is that filling the disk with random data is mostly useless. First, to dispel a common myth, wiping with zeroes is just as good on today's hardware. With 1980s hard disk technology, overwriting a hard disk with zeroes left a small residual charge which could be recovered with somewhat expensive hardware; multiple passes of overwrite with random data (the Gutmann wipe) were necessary. Today even a single pass of overwriting with zeroes leaves data that cannot realistically be recovered even in laboratory conditions. When you're encrypting a partition, filling the disk with random data is not necessary for the confidentiality of the encrypted data. It is only useful if you need to make space used by encrypted data indistinguishable from unused space. 
Building an encrypted volume on top of a non-randomized container reveals which disk blocks have ever been used by the encrypted volume. This gives a good hint as to the maximum size of the filesystem (though as time goes by it will become a worse and worse approximation), and little more. |
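To try the openssl rand form without touching a real disk, the same command can target an ordinary file — here 1 MiB instead of a whole device; the /proc/partitions awk pipeline in the answer just computes this byte count for a given disk. Assumes the openssl binary is installed.

```shell
#!/bin/sh
# Write 1 MiB of OpenSSL PRNG output to a scratch file (stand-in for /dev/sdX).
size=$((1024 * 1024))
openssl rand "$size" > blob.bin

# Confirm exactly that many bytes were written.
wc -c < blob.bin
```

Swapping blob.bin for a block device (and the size for the device's real byte count) gives the command from the answer; timing the two variants with and without /dev/urandom shows which side — CPU or disk — is your bottleneck.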
_unix.171872 | how to move a header to the last column, using awk or sed
input file looks like this:
Line 1.000N
x y z 
23.88 44.66 56.6
23.81 41.66 53.6
Line 81.000N
x y z 
13.88 34.66 56.6
13.81 41.66 43.6
I would like the output to be in the following format:
23.88 44.66 56.6 1.000N
23.81 41.66 53.6 1.000N
13.88 34.66 56.6 81.000N
13.81 41.66 43.6 81.000N | how to move a header to the last column, using awk or sed | sed;awk | null |
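No accepted answer exists, so here is one hedged approach in awk: remember the tag from each Line header, skip the x y z label rows, and append the remembered tag to every data row. A self-contained run with the question's input recreated inline:

```shell
#!/bin/sh
cat > input.txt <<'EOF'
Line 1.000N
x y z 
23.88 44.66 56.6
23.81 41.66 53.6
Line 81.000N
x y z 
13.88 34.66 56.6
13.81 41.66 43.6
EOF

awk '/^Line/ {hdr = $2; next}   # remember the current header value
     /^x y z/ {next}            # drop the column-label rows
     NF {print $0, hdr}' input.txt
```

The output is the four data rows with 1.000N or 81.000N appended, exactly as in the desired format.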
_cs.31898 | I'm looking for an effective method to simulate a program. The reason I need this is that sometimes I only have the program's description or code, and all I have left is pen and paper when I want to calculate the result at the end, or at the n-th loop, or at step n, etc. For example:
Declare @i int
While @i < 1000 begin
    print @i
    set @i = @i + 1
end
Needless to say, this is a very simple program that everyone can understand at a glance, but what about more complicated ones? I can work through it with pen and paper, but that consumes a lot of time, and I think there may be methods which will make this problem a lot easier! | Effective method for simulating program's cycle | formal methods | null |
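The pen-and-paper technique being asked about is usually called a trace table: one row per step, one column per variable. Mechanically it looks like the sketch below (a small Python stand-in for the question's counter loop; the real time-saver is spotting a closed form, e.g. that the loop's last printed value is bound - 1, instead of tracing all 1000 iterations by hand):

```python
# Trace a tiny counter loop the way you would on paper:
# record (step, i) at the top of each iteration, stop at the bound.
def trace(start, bound):
    rows, i, step = [], start, 0
    while i < bound:
        rows.append((step, i))
        i += 1
        step += 1
    return rows

for step, i in trace(0, 5):
    print(f"step {step}: i = {i}")
```

For anything larger, the same idea scales by tracing only the variables that change and by collapsing runs of identical updates into a formula.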
_cstheory.8489 | The question is: what are examples of clique problem applications? I mean, what problems can be solved by reducing them to the clique problem (sorry for the tautology)? All I came up with is finding social cliques: groups of people who know each other personally. I understand that similar ideas may arise in electronics, CS (e.g., compiler design?) and probably other fields, but I can't think of other interesting problems. I would be glad to see some of them. | Maximum-clique practical applications | ds.algorithms;graph theory;application of theory;clique | null |
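To make the one application the asker already names concrete, here is a brute-force maximum-clique search on a toy social graph (vertices are people, an edge means two people know each other; names and edges are made up for illustration — brute force is exponential and only viable for tiny graphs):

```python
from itertools import combinations

# Toy "who knows whom" graph; each edge is an unordered pair.
edges = {
    frozenset(e)
    for e in [("ann", "bob"), ("ann", "cas"), ("bob", "cas"),
              ("cas", "dan"), ("dan", "eve")]
}
people = {p for e in edges for p in e}

def is_clique(group):
    # A clique: every pair within the group is connected.
    return all(frozenset(pair) in edges for pair in combinations(group, 2))

# Enumerate all subsets, keep the largest one that is a clique.
best = max(
    (set(c)
     for r in range(1, len(people) + 1)
     for c in combinations(sorted(people), r)
     if is_clique(c)),
    key=len,
)
print(sorted(best))  # ['ann', 'bob', 'cas']
```

The largest group of mutual acquaintances here is the triangle ann-bob-cas, which is exactly the "social clique" reduction from the question.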