Dataset columns: id (string, 5-27 chars), question (string, 19-69.9k chars), title (string, 1-150 chars), tags (string, 1-118 chars), accepted_answer (string, 4-29.9k chars)
_softwareengineering.166645
I work in a medium-sized company, but with a very small IT force. Last year (2011), I wrote an application that is very popular with a large group of end-users. We hit a deadline at the end of last year and some functionality (I will call it funcA from now on) that was wanted was not added into the application at the very end. So, this application has been running in live/production since the end of 2011, I might add without issue.

Yesterday, a whole group of end-users started complaining that funcA, which was never in the application, is no longer working. Our priority at this company is that if an application is broken it must be fixed first, prior to prioritized projects.

I have compared code and queries and there is no difference since 2011, which is proofA. I then was able to get one of the end-users to admit that it never worked (proofB), but since then that end-user has gone back and said that it was working previously... I believe the horde of end-users has assimilated her. I have also reviewed my notes for this project, which contain the requirements and daily updates regarding the project and which specifically state "funcA not achieved due to time constraints" (proofC). I have spoken with many of them and I can see where they could be confused, as they are very far from a programming background, but I also know they are intelligent enough to act as a group in order to bypass project prioritization orders and get functionality that they want to make their job easier.

The worst part is that now groupthink is setting in, and my boss and the head of IT are actually starting to believe them, even though there are no code or query changes. As far as reviewing the state of the logic goes, it is very cut and dried, to the point of: if 1 = 1, funcA will not work.

So, this is the end of the description of my scenario, but I am trying not to get severely dinged on my performance metrics due to this, which would essentially have me moved to fixing a production problem that doesn't exist and that will probably take over 1 month.
How to handle this unfortunately non-hypothetical situation with end-users?
project management;requirements;maintenance
Disputes about easily-observable facts are actually quite easy to resolve: just observe the facts. If I say there's a tree with purple wood outside my house, anyone able to come to my house can verify the truth or falsehood of my statement for themselves.

If they're complaining that FuncA used to be in the product and used to work in an earlier version and now it's not working, and you don't think it was ever in the product, ask them to prove it. (Or, in more gentle words, say something like "we're having trouble reproducing the problem. Could you help us out here?")

Give them a copy of the earlier version if they don't still have one, get them in a LiveMeeting, and have them show you how they used to use FuncA. If they can't do it, then (hopefully) they'll realize that it wasn't in there after all and get off your case about it, or at least try a different tactic to get it implemented. (And make sure to get someone from management or PM in on the LiveMeeting.)
_unix.6065
I can change the name of a window with Ctrl-a Shift-a. Instead of editing several window names by hand, is there a way to have them automatically named after the current directory?
GNU Screen: new window name change
terminal;gnu screen;window title
Make your shell change the window title every time it changes directory, or every time it displays a prompt.

For your ~/.bashrc:

if [[ $TERM == screen* ]]; then
  screen_set_window_title () {
    local HPWD=$PWD
    case $HPWD in
      $HOME) HPWD='~';;
      $HOME/*) HPWD='~'${HPWD#$HOME};;
    esac
    printf '\ek%s\e\\' "$HPWD"
  }
  PROMPT_COMMAND="screen_set_window_title; $PROMPT_COMMAND"
fi

Or for your ~/.zshrc (for zsh users):

precmd () {
  local tmp='%~'
  local HPWD=${(%)tmp}
  if [[ $TERM == screen* ]]; then
    printf '\ek%s\e\\' "$HPWD"
  fi
}

For more information, look under "Dynamic titles" in the Screen manual, or under "Titles (naming windows)" in the man page.
_unix.163100
My shell script starts with #!/bin/sh, but the value of my $SHELL is ksh. Will it make a difference if I changed my ksh to sh and then executed the script? I.e., will a script have different behaviour depending on what type of shell is executing it?
Shell script written in a different shell than my current shell
shell;shell script
"Will a script have different behaviour depending on what type of shell is executing it?"

In the sense that bash script.sh and ksh script.sh are likely to behave differently, yes. Commonly, that difference will be that one of them works and one gives an error, but there are a range of options. Many simple scripts will have the same behaviour on common shells, but more complex scripts are likely to hit one of the many differences between the languages provided by different shells.

Will a script behave differently depending on your value of SHELL? Only if the script either invokes $SHELL itself, or tests or otherwise uses its value, directly or indirectly. Ordinary shell scripts generally will not, but they can.

Will a script behave differently depending on the parent shell from which it was invoked? Extremely rarely - the script would have to do a fair bit of work to detect that, to the extent that it would almost have to be on purpose.

I think your use case is running ./script.sh, which is a sh script, from your interactive shell, which is ksh. If that's right, we're in the last case above, and the script will almost certainly behave in the same way as if you were using any other shell yourself. The system will always start up a new /bin/sh process and tell it to execute the script.
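A tiny experiment that makes the distinction concrete (a sketch; the file name and the bash/ksh array construct are illustrative assumptions, and the exact error text depends on what /bin/sh is on your system):

# demo.sh hides a bash/ksh-only feature (arrays) behind a plain /bin/sh shebang.
cat > demo.sh <<'EOF'
#!/bin/sh
arr=(one two three)
echo "${arr[1]}"
EOF
chmod +x demo.sh

./demo.sh      # the shebang wins: /bin/sh runs it regardless of your login shell;
               # where sh is dash or another strict POSIX sh, this dies with a syntax error
bash demo.sh   # the interpreter named on the command line wins: prints "two"
ksh demo.sh    # ksh93 (and most ksh variants) also supports this array syntax: prints "two"

The point of the experiment is that the parent shell you typed the command into never enters the picture; only the shebang or the explicitly named interpreter does.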
_codereview.171240
I wrote an implementation of a HashTable that uses bucket lists to store the key-value pairs implemented with a linked list.Here's the header://HashTable.h#ifndef HASHTABLE_H#define HASHTABLE_H#include ../../List/include/List.h/** * Implementation of a Hashtable based on bucket lists made with Linkedlist */template<typename K, typename V> class HashList;template<typename K, typename V>class HashPair{ private: friend class HashList<K,V>; K key; V value; public: HashPair(); // Default constructor HashPair(const K key,const V value); // constructs a new hash pair given a key and a value K getKey() const; // returns the key void setKey(const K key); // sets the key V getValue() const; // returns the value void setValue(const V v); // sets the value};template<typename K, typename V>class HashList{ private: List<HashPair<K,V>> l; public: List_iterator<HashPair<K,V>> find(const K key) const; // Returns an HashPair given a key if present, null if absent void insert(const K key,const V value) const; // Inserts a key-value pair in the HashList V lookup(const K key) const; // Returns a reference to an HashPair value given a key if present; null otherwise void remove(const K key) const; // Removes an element given a key bool empty() const; // Returns true if the list is empty, false otherwise List_iterator<HashPair<K,V>> begin() const; List_iterator<HashPair<K,V>> end() const; bool finished(List_iterator<HashPair<K,V>> const p) const;};template<typename K, typename V>class HashTable;template<typename K, typename V>class hash_iterator;template<typename K, typename V>bool operator ==(const hash_iterator<K,V> it, const hash_iterator<K,V> it2);template<typename K, typename V>bool operator !=(const hash_iterator<K,V> it, const hash_iterator<K,V> it2);template<typename K, typename V>class hash_iterator{ private: HashTable<K,V>* baseTable; int i; List_iterator<HashPair<K,V>> it; List_iterator<HashPair<K,V>> nextOccurrence(); public: hash_iterator(); hash_iterator(HashTable<K,V>* table); hash_iterator(const hash_iterator& it2); friend bool operator == <>(const hash_iterator it, const hash_iterator it2); friend bool operator != <>(const hash_iterator it, const hash_iterator it2); hash_iterator begin(); hash_iterator end(); hash_iterator operator ++(); //prefix hash_iterator operator ++( int ); //postfix HashPair<K,V> operator *() const;};template<typename K, typename V>class HashTable{ protected: HashList<K,V>* entries; int m; //table dimension friend class hash_iterator<K,V>; public: HashTable(const int capacity); //Creates a new hash table with given dimension ~HashTable(); //Destructor bool contains(const K k) const; //Returns true if the hashtable contains k V lookup(const K k) const; //returns the value being searched if present, nil otherwise V operator [](const K k) const; // same as lookup, with a array like notation void insert(const K key,const V value) const; //Inserts the key-value pair into the table void remove(const K key) const; //Given a key, it removes the key-pair value, if present int Hash(const long int key) const; //Hash function hash_iterator<K,V> begin(); hash_iterator<K,V> end();};namespace keyOnly{ template<typename K> class HashList { private: List<K> l; public: List_iterator<K> find(const K key) const; // Returns an HashPair given a key if present, null if absent void insert(const K key) const; // Inserts a key-value pair in the HashList void remove(const K key) const; // Removes an element given a key bool empty() const; // Returns true if the list is empty, false otherwise 
List_iterator<K> begin() const; List_iterator<K> end() const; bool finished(List_iterator<K> const p) const; }; template<typename K> class HashTable; template<typename K> class hash_iterator; template<typename K> bool operator ==(const hash_iterator<K> it, const hash_iterator<K> it2); template<typename K> bool operator !=(const hash_iterator<K> it, const hash_iterator<K> it2); template<typename K> class hash_iterator { protected: HashTable<K>* baseTable; int i; List_iterator<K> it; List_iterator<K> nextOccurrence(); public: hash_iterator(); hash_iterator(HashTable<K>* table); hash_iterator(const hash_iterator& it2); friend bool operator == <>(const hash_iterator it, const hash_iterator it2); friend bool operator != <>(const hash_iterator it, const hash_iterator it2); hash_iterator begin(); hash_iterator end() const; hash_iterator operator ++(); //prefix hash_iterator operator ++( int ); //postfix K operator *() const; }; template<typename K> class HashTable { protected: HashList<K>* entries; int m; //table dimension friend class hash_iterator<K>; public: HashTable(); //Default constructor HashTable(const int capacity); //Creates a new hash table with given dimension ~HashTable(); //Destructor bool contains(const K k) const; // Returns true if the table contains k void insert(const K key) const; //Inserts the key-value pair into the table void remove(const K key) const; //Given a key, it removes the key-pair value, if present int Hash(const long int key) const; //Hash function };}#include ../src/HashTable.cpp#endifand here's the code:// HashTable.cpp#ifndef HASHTABLE_CPP#define HASHTABLE_CPP#include ../include/HashTable.h#include <cmath>using namespace std;template<typename K, typename V>HashPair<K,V>::HashPair(){ key = K(); value = V();}template<typename K, typename V>HashPair<K,V>::HashPair(const K key,const V value):key(key), value(value){}template<typename K, typename V>K HashPair<K,V>::getKey() const{ return key;}// returns the keytemplate<typename K, typename V>void HashPair<K,V>::setKey(const K key){ this->key = key; }// sets the keytemplate<typename K, typename V>V HashPair<K,V>::getValue() const{ return value;}// returns the valuetemplate<typename K, typename V>void HashPair<K,V>::setValue(const V v){ this->value = value;}// sets the valuetemplate<typename K, typename V>List_iterator<HashPair<K,V>> HashList<K,V>::find(const K key) const{ bool found = false; List_iterator<HashPair<K,V>> e(nullptr); List_iterator<HashPair<K,V>> i = l.begin(); while(!l.finished(i) && !found) { if((*i).key == key) { e = i; found = true; } i++; } return e;}// Returns an HashPair given a key if present, null if absenttemplate<typename K, typename V>void HashList<K,V>::insert(const K key,const V value) const{ List_iterator<HashPair<K,V>> kv = find(key); HashPair<K,V> k(key,value); if (kv != List_iterator<HashPair<K,V>>(nullptr)) { l.write(kv,k); } else { l.insert(k); }}// Inserts a key-value pair in the HashListtemplate<typename K, typename V>V HashList<K,V>::lookup(const K key) const{ List_iterator<HashPair<K,V>> kv = find(key); V e = V(); if (kv != List_iterator<HashPair<K,V>>(nullptr)) { e = (*kv).value; } return e;}// Returns a reference to an HashPair value given a key if present; null otherwisetemplate<typename K, typename V>void HashList<K,V>::remove(const K key) const{ List_iterator<HashPair<K,V>> item = find(key); if(item != List_iterator<HashPair<K,V>>(nullptr)) l.remove(item);}template<typename K, typename V>bool HashList<K,V>::empty() const{ return l.empty();}template<typename K, typename 
V>List_iterator<HashPair<K,V>> HashList<K,V>::begin() const{ return l.begin();}template<typename K, typename V>List_iterator<HashPair<K,V>> HashList<K,V>::end() const{ return l.end();}template<typename K, typename V>bool HashList<K,V>::finished(List_iterator<HashPair<K,V>> const p) const{ return l.finished(p);}template<typename K, typename V>List_iterator<HashPair<K,V>> hash_iterator<K,V>::nextOccurrence(){ i++; it = List_iterator<HashPair<K,V>>(nullptr); while(i < baseTable->m && it == List_iterator<HashPair<K,V>>(nullptr)) { if(baseTable->entries[i].empty()) i++; else it = baseTable->entries[i].begin(); } return it;}template<typename K, typename V>hash_iterator<K,V>::hash_iterator(){ baseTable = nullptr; i = -1; it = List_iterator<HashPair<K,V>>(nullptr);}template<typename K, typename V>hash_iterator<K,V>::hash_iterator(HashTable<K,V>* table){ baseTable = table; i = -1; it = List_iterator<HashPair<K,V>>(nullptr);}template<typename K, typename V>hash_iterator<K,V>::hash_iterator(const hash_iterator& it2){ baseTable = it2.baseTable; i = it2.i; it = it2.it;}template<typename K, typename V>bool operator ==(const hash_iterator<K,V> it, const hash_iterator<K,V> it2){ return (it.baseTable == it2.baseTable && it.it == it2.it);}template<typename K, typename V>bool operator !=(const hash_iterator<K,V> it, const hash_iterator<K,V> it2){ return !(it == it2);}template<typename K, typename V>hash_iterator<K,V> hash_iterator<K,V>::begin(){ if(i != 0) i = -1; hash_iterator<K,V> ret(*this); ret.nextOccurrence(); return ret;}template<typename K, typename V>hash_iterator<K,V> hash_iterator<K,V>::end(){ hash_iterator<K,V> ret(*this); ret.i = baseTable->m; ret.it = List_iterator<HashPair<K,V>>(nullptr); return ret;}template<typename K, typename V>hash_iterator<K,V> hash_iterator<K,V>::operator ++() //prefix{ it++; if (baseTable->entries[i].finished(it)) { it = nextOccurrence(); } return *this;}template<typename K, typename V>hash_iterator<K,V> hash_iterator<K,V>::operator ++( int ) //postfix{ hash_iterator<K,V> oldit(*this); ++(*this); return oldit;}template<typename K, typename V>HashPair<K,V> hash_iterator<K,V>::operator *() const{ return *it;}template<typename K, typename V>HashTable<K,V>::HashTable(const int capacity){ entries = new HashList<K,V> [capacity]; m = capacity;}//Creates a new hash table with given dimensiontemplate<typename K, typename V>HashTable<K,V>::~HashTable(){ delete [] entries;}//Destructortemplate<typename K, typename V>V HashTable<K,V>::lookup(const K k) const{ int i = Hash(hash<K>()(k)); V value = V(); if (!entries[i].empty()) value = entries[i].lookup(k); return value;}//returns the value being searched if present, nil otherwisetemplate<typename K,typename V>bool HashTable<K,V>::contains(const K k) const{ int i = Hash(hash<K>()(k)); if(entries[i].empty()) return false; else { if(entries[i].find(k) == List_iterator<HashPair<K,V>>(nullptr)) return false; else return true; }}//template<typename K,typename V>V HashTable<K,V>::operator [](const K k) const{ return lookup(k);}template<typename K, typename V>void HashTable<K,V>::insert(const K key,const V value) const{ int i = Hash(hash<K>()(key)); entries[i].insert(key,value);}//Inserts the key-value pair into the tabletemplate<typename K, typename V>void HashTable<K,V>::remove(const K key) const{ int k = Hash(hash<K>()(key)); if (!entries[k].empty()) entries[k].remove(key);}//Given a key, it removes the key-pair value, if presenttemplate<typename K, typename V>int HashTable<K,V>::Hash(const long int key) const{ return abs(key) % 
m;}//Hash functiontemplate<typename K, typename V>hash_iterator<K,V> HashTable<K,V>::begin(){ hash_iterator<K,V> ret(this); return ret.begin();}template<typename K, typename V>hash_iterator<K,V> HashTable<K,V>::end(){ hash_iterator<K,V> ret(this); return ret.end();}namespace keyOnly{ template<typename K> List_iterator<K> HashList<K>::find(const K key) const { bool found = false; List_iterator<K> e = List_iterator<K>(nullptr); if(!l.empty()) { List_iterator<K> i = l.begin(); while(!l.finished(i) && !found) { if(*i == key) { e = i; found = true; } i++; } } return e; } // Returns an HashPair given a key if present, null if absent template<typename K> void HashList<K>::insert(const K key) const { List_iterator<K> k = find(key); if (k == List_iterator<K>(nullptr)) { l.insert(key); } else { l.write(k,key); } } // Inserts a key-value pair in the HashList /* template<typename K> K HashList<K>::lookup(K key) { List_iterator<K> k = find(key); K e; if (k != List_iterator<K>(nullptr)) e = *k; return e; } // Returns a reference to an HashPair value given a key if present; null otherwise*/ template<typename K> void HashList<K>::remove(const K key) const { List_iterator<K> item = find(key); if(item != List_iterator<K>(nullptr)) l.remove(item); } template<typename K> bool HashList<K>::empty() const { return l.empty(); } template<typename K> List_iterator<K> HashList<K>::begin() const { return l.begin(); } template<typename K> List_iterator<K> HashList<K>::end() const { return l.end(); } template<typename K> bool HashList<K>::finished(const List_iterator<K> p) const { return l.finished(p); } template<typename K> List_iterator<K> hash_iterator<K>::nextOccurrence() { i++; it = List_iterator<K>(nullptr); while(i < baseTable->m && it == List_iterator<K>(nullptr)) { if(baseTable->entries[i].empty()) i++; else it = baseTable->entries[i].begin(); } return it; } template<typename K> hash_iterator<K>::hash_iterator() { baseTable = nullptr; i = -1; it = List_iterator<K>(nullptr); } template<typename K> hash_iterator<K>::hash_iterator(HashTable<K>* table) { baseTable = table; i = -1; it = List_iterator<K>(nullptr); } template<typename K> hash_iterator<K>::hash_iterator(const hash_iterator& it2) { baseTable = it2.baseTable; i = it2.i; it = it2.it; } template<typename K> bool operator ==(const hash_iterator<K> it, const hash_iterator<K> it2) { return (it.baseTable == it2.baseTable && it.it == it2.it); } template<typename K> bool operator !=(const hash_iterator<K> it, const hash_iterator<K> it2) { return !(it == it2); } template<typename K> hash_iterator<K> hash_iterator<K>::begin() { if(i != 0) i = -1; hash_iterator<K> ret(*this); ret.nextOccurrence(); return ret; } template<typename K> hash_iterator<K> hash_iterator<K>::end() const { hash_iterator<K> ret(*this); ret.i = baseTable->m; ret.it = List_iterator<K>(nullptr); return ret; } template<typename K> hash_iterator<K> hash_iterator<K>::operator ++() //prefix { it++; if (baseTable->entries[i].finished(it)) { it = nextOccurrence(); } return *this; } template<typename K> hash_iterator<K> hash_iterator<K>::operator ++( int ) //postfix { hash_iterator<K> oldit(*this); ++(*this); return oldit; } template<typename K> K hash_iterator<K>::operator *() const { return *it; } template<typename K> HashTable<K>::HashTable(const int capacity) { entries = new HashList<K> [capacity]; m = capacity; } //Creates a new hash table with given dimension template<typename K> HashTable<K>::~HashTable() { delete [] entries; } //Destructor template<typename K> HashTable<K>::HashTable() { 
entries = nullptr; m = -1; } /* template<typename K> K HashTable<K>::lookup(K k) { K key = K(); int i = Hash(hash<K>()(k)); if (!entries[i].empty()) key = entries[i].lookup(k); return key; } //returns the value being searched if present, nil otherwise */ template<typename K> bool HashTable<K>::contains(const K k) const { int i = Hash(hash<K>()(k)); if(entries[i].empty()) return false; else { if(entries[i].find(k) == List_iterator<K>(nullptr)) return false; else return true; } } template<typename K> void HashTable<K>::insert(const K key) const { int i = Hash(hash<K>()(key)); entries[i].insert(key); } //Inserts the key-value pair into the table template<typename K> void HashTable<K>::remove(const K key) const { int k = Hash(hash<K>()(key)); if (!entries[k].empty()) entries[k].remove(key); } //Given a key, it removes the key-pair value, if present template<typename K> int HashTable<K>::Hash(const long int key) const { return abs(key) % m; } //Hash function}#endifIt works mostly fine, but I have one big problem with this code: the quantity of repeated code. As you can see, there are two versions of the HashTable, one that stores key-value pairs and requires two template arguments and one that stores only the key and requires only one. I use the latter to implement a Set that uses a HashTable to store the elements (I don't need to store a key-value pair in this case). I wonder if there is a way to handle the template arguments in C++ without having to handle the two cases separately, as a lot of code is practically the same in both cases. I've looked into variadic template arguments, but they don't seem to be what I need.What I would like to do is, for example, in the insert function to be able to tell if the user used one or two template arguments, and, in the first case, I would insert a key-value pair in the HashTable, in the second case just a key. I don't know if it's even possible in C++, at least my searches have not been conclusive.Other than that, any advice on the code that doesn't have to do with this problem is very well appreciated, especially in the coding style.Yeah, I know there are std::unordered_map and std::unordered_set that do exactly what I need. I would really like to use them, but for now I can't. I'm working on a project for uni where if I need any data structure I have to write it myself, otherwise I would be using the STL any day. Also, the List data structure used in the code has been written by me as well, you can find it, together with other data structures written by me on my GitHub page.
HashTable implementation using bucket lists
c++;template;hash table
null
_unix.277457
I'm trying to install a fresh Ubuntu 15.10 distro on my laptop. I boot my PC from the USB with the Ubuntu installer. On the GRUB menu, I select "test without install", but when it reaches the screen where you can read "Ubuntu" with the moving point below, it freezes and nothing happens.

My laptop is a Mountain Iridium with an Intel Core i7 6700HQ, an M.2 240GB SSD and a 1TB HDD, 16GB DDR4 2133MHz RAM and an Nvidia GTX970M 3GB GDDR5.

Edit: I managed to install Ubuntu 16.04, but the problems still happen. When installation ends and a window prompts you to select "keep testing" or "reboot", I select restart and the PC hangs; I have to long-press the power button to reboot. Once rebooted it gets to the login screen, I enter my password, and... surprise, the computer hangs again.

Edit 2: Trying to install Ubuntu 16.04 again, I get this error message. The PC hangs and I can't install anything.
Unable to install Ubuntu
ubuntu;system installation
null
_softwareengineering.266302
I've been learning HTML5/CSS3 for a month now, and I've built my first demo website. At first I was using a lot of the element selectors like: >, ,, + in combination with the type names for selecting nested tags.

Now I've moved more to the id and class selectors and use the >, ,, + less often for selecting nested tags. Is using id and class selectors a better approach for selecting tags which are (deeply) nested? Are there any downsides to this approach? Or is it just a matter of style?
CSS: When to use which selector
css
This is not only a matter of style. It is a matter of performance and maintainability.

Anecdotally, some selectors are difficult for browsers to implement efficiently. For example, the general sibling selector A ~ B or the descendant selector A B. And of course the universal selector *. Unless actually needed, these should be avoided. The thing that is really fast is using class names or IDs. The comma A, B is not considered a selector.

When you start out with CSS, you might be tempted to write something like ul#steps > li.item-without-bullet. That is not good for various reasons:

- Do not use IDs in selectors, because any ID can only be used once in a document. This prevents you from re-using the styling.
- Avoid element names in selectors. HTML is meant to be used as semantic markup that highlights what each element means. This should be kept mostly separate from the styling. You will still want to style elements directly (e.g. using a different font only for headings, or setting the line height for paragraphs). That is OK, and I wouldn't use classes for that (yuck), but you shouldn't refer to the element names when styling something special such as a nav bar, an image carousel, a pull quote, and so on.
- Avoid ID and class names that focus too much on what text effect they provide (you might as well use inline style attributes), and instead use class names as custom elements or element modifiers, as a way to add your own semantics. A class such as red-text is not as semantic as error--fatal.
- Avoid the descendant selector A B and the child selector A > B. You can usually encode the necessary information through your class names, e.g. steps and steps__step. You can trust that whoever is applying the classes to the HTML structure will respect proper nesting, given sufficiently self-documenting class names.
- Seriously consider a strict naming scheme such as BEM (Block Element Modifier). Yes, it's incredible overkill, but using the BEM naming scheme can help you to properly structure your CSS. It exclusively uses class names, and does not generally use selectors.
_unix.289023
I have an HTTP and HTTPS proxy at my work. Though when I work from home (on the laptop from work) I would like to disable the proxy settings (connect directly to the Internet). Then, when I come back to work, bring back the proxy settings.

The problem is, only a few applications recognize the system-wide proxy settings (set using Linux Mint's network manager and through the HTTPS_PROXY and HTTP_PROXY environment variables). For many other applications (IntelliJ, SBT, Maven, Synaptic, apt-get, git) I had to set them manually, and editing the settings for each of them every time is tedious.

I could probably write a script or something that would edit the settings files of all those applications, but I think it's error-prone (I could corrupt the files) and not really the easiest solution. What I thought about is intercepting the outgoing packets sent to the proxy, repackaging them somehow and sending them directly to the Internet. Would it be possible to do that using an iptables rule or something similar? I'm not really an expert when it comes to networks, proxies, etc., so I'm not even sure if it's doable, not to mention constructing the rule myself. I would be grateful for your help!
Temporarily ignore/bypass proxy settings using iptables when WFH
iptables;proxy
You could install a proxy on your laptop and configure all your apps to use it (on localhost). Then you could change the local proxy's config to either use a parent proxy or not, depending on your location.

Tinyproxy is probably ideal for this task. Here's the description from the Debian package of it:

Package: tinyproxy
Version: 1.8.3-3+b1
Installed-Size: 145
Description-en: A lightweight, non-caching, optionally anonymizing HTTP proxy
 An anonymizing HTTP proxy which is very light on system resources, ideal for smaller networks and similar situations where other proxies (such as Squid) may be overkill and/or a security risk. Tinyproxy can also be configured to anonymize HTTP requests (allowing for exceptions on a per-header basis).
Homepage: https://banu.com/tinyproxy/
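A rough sketch of what that setup might look like, assuming tinyproxy's default port 8888 and a hypothetical work proxy proxy.example.com:3128 (the directive names and config path vary slightly between tinyproxy versions, so check tinyproxy.conf(5) on your system):

# Point everything at the local proxy once, e.g. in ~/.profile:
export http_proxy=http://127.0.0.1:8888
export https_proxy=http://127.0.0.1:8888

# In /etc/tinyproxy.conf (or /etc/tinyproxy/tinyproxy.conf), one line decides whether
# requests are forwarded to the corporate proxy or sent out directly:
#
#   Upstream proxy.example.com:3128      # at work: chain to the parent proxy
#   #Upstream proxy.example.com:3128     # at home: comment it out to go direct
#
# After toggling that line, restart the local proxy:
sudo service tinyproxy restart

The point of the indirection is that only one config line ever changes; IntelliJ, Maven, git and friends all keep pointing at localhost.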
_webapps.108401
I unarchived my list but the checklist no longer displays, even though the list view indicates a checklist with 0/4 displayed.
In Trello, I've unarchived my list but the checklist no longer displays
trello
null
_softwareengineering.169908
I'm re-working the design of an existing application which is built using WebForms. Currently the plan is to work it into an MVP pattern application while using Ninject as the IoC container.

The reason for Ninject being there is that the boss wanted a certain flexibility within the system, so that we can build in different flavors of business logic in the model and let the programmer choose which to use based on the client request, either via XML configuration or a database setting.

I know that Ninject has no need for XML configuration; however, I'm confused about how it can help to dynamically inject the dependency into the system.

Imagine I have an interface IMember and I need to bind this interface to a class decided by an XML or database configuration at the launch of the application; how can I achieve that?
How can I bind an interface to a class decided by an XML or database configuration at the launch of the application?
design patterns;webforms;ninject
null
_webmaster.101412
I have created a web site on the Google platform: ...sites.google.com/site/golfshotpilot/

Furthermore, I have completed all the administrative steps to get the site's analytics. In this context I got a Tracking ID. The tracking works: I get the reports and statistics (evidence below).

However, when I use the Google Webmaster Tool, the system will not verify my account for my own website. I have tried the verification based on the uploaded HTML file. It did not work. Then I tried to apply the authentication via my Tracking ID from the Analytics tool. This did not work either. The error message is: "The Google Analytics tracking code on your site looks malformed." How can this be when the tracking ID is obviously OK? Any ideas for help?

Error message:

Definition of the Analytics settings in the site management settings (here was a hardcopy, but I can insert only two):

Statistics result OK (here was a hardcopy, but I can insert only two):

Evidence that the code exists in the web site's source data:

Any help would be appreciated.
Cannot get a Google Site verified despite a valid Analytics Tracking ID
google analytics;google search console
null
_unix.358362
I am trying to install Linux Mint, but I keep running into problems.

I put the Mint ISO on my USB using Rufus, and then select that USB in the boot menu. But then it only shows an option to start Linux Mint - no install option. (The first option is the same as the second one, but without the compatibility mode.)

So I click on the start Linux Mint option, and that then loads a terminal, expecting me to enter login details.

I also tried installing Solus before Mint, and got the same issues. Any idea?
Can't install Linux Mint 18.1 KDE
linux mint
null
_unix.203426
How would I print the last 3 lines that contain the string #include from tester.c, and if fewer than 3 lines contain the string, print the entire file?

So far I have:

grep #include tester.c | tail -3

But I can't figure out how to include the second half of the requirement. This is homework and the solution must be a single line with no ;, which is why I can't seem to figure it out.
Grep 2 different requirements in one command line statement
grep
null
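Not from the original thread, but one hedged way to meet both halves of the requirement on a single line without a ; (assumes GNU grep/tail, and carries the usual caveat that the || branch would also fire in the unlikely event that tail itself failed):

# At least 3 matching lines: print the last three matches; otherwise print the whole file.
[ "$(grep -c '#include' tester.c)" -ge 3 ] && grep '#include' tester.c | tail -n 3 || cat tester.c

An awk one-liner that reads the file only once would also work, but it is harder to write without statement separators.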
_unix.136945
I have been trying to recover data from a Seagate 7200.11 1.5TB drive (2 ext4 partitions_ for 3 days now, predominantly with ddrescue and testdisk, but because of some critical error on the disk (probably SA damage or something similar?), it gets dropped off /dev when the system accesses some specific sector(s). The closest I have come, I think, is with ddrescue. But the image it created was incomplete and I could not mount it as it gave bad geometry: block count xxx exceeds size of device.. error among others. Last night I again fired up ddrescue, this time on the second partition, and after waiting for 3 hours, went to sleep. At that time, it had copied ~150GB from the ~700GB partition. Command used:ddrescue -n -v -T 30 --skip-size=1M,10M --min-read-rate=50k /dev/sdc2 /media/rescue/Drive2.img /media/rescue/Drive2.logI was pretty disappointed when I woke up and saw that the drive had disappeared from /dev/ and consequently ddrescue showed error size in 200GB+ range. The /var/log/messages contained repeating lines :2014-06-13T10:54:08.526490+05:00 suse kernel: [ 6693.096125] Read(10): 28 00 5a 79 55 88 00 00 08 002014-06-13T10:54:08.526491+05:00 suse kernel: [ 6693.096174] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T10:54:08.526491+05:00 suse kernel: [ 6693.096176] sd 2:0:0:0: [sdc] 2014-06-13T10:54:08.526492+05:00 suse kernel: [ 6693.096176] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T10:54:08.526493+05:00 suse kernel: [ 6693.096177] sd 2:0:0:0: [sdc] CDB: 2014-06-13T10:54:08.526494+05:00 suse kernel: [ 6693.096178] Read(10): 28 00 5a 79 4d e8 00 00 08 002014-06-13T10:54:08.526494+05:00 suse kernel: [ 6693.096226] sd 2:0:0:0: [sdc] Unhandled error codeand these around the time it disappeared from /dev (I think):2014-06-13T07:34:30.290574+05:00 suse kernel: [ 6743.832817] ata3: EH complete2014-06-13T07:34:33.892459+05:00 suse kernel: [ 6747.432198] ata3.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x02014-06-13T07:34:33.892486+05:00 suse kernel: [ 6747.432203] ata3.00: irq_stat 0x400000082014-06-13T07:34:33.892489+05:00 suse kernel: [ 6747.432206] ata3.00: failed command: READ FPDMA QUEUED2014-06-13T07:34:33.892502+05:00 suse kernel: [ 6747.432212] ata3.00: cmd 60/08:00:10:50:08/00:00:5c:00:00/40 tag 0 ncq 4096 in2014-06-13T07:34:33.892511+05:00 suse kernel: [ 6747.432212] res 41/40:08:17:50:08/00:00:5c:00:00/00 Emask 0x409 (media error) <F>2014-06-13T07:34:33.892517+05:00 suse kernel: [ 6747.432215] ata3.00: status: { DRDY ERR }2014-06-13T07:34:33.892519+05:00 suse kernel: [ 6747.432217] ata3.00: error: { UNC }2014-06-13T07:34:34.003455+05:00 suse kernel: [ 6747.543056] ata3.00: configured for UDMA/1332014-06-13T07:34:34.003476+05:00 suse kernel: [ 6747.543074] sd 2:0:0:0: [sdc] Unhandled sense code2014-06-13T07:34:34.003480+05:00 suse kernel: [ 6747.543076] sd 2:0:0:0: [sdc] 2014-06-13T07:34:34.003483+05:00 suse kernel: [ 6747.543078] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE2014-06-13T07:34:34.003486+05:00 suse kernel: [ 6747.543080] sd 2:0:0:0: [sdc] 2014-06-13T07:34:34.003488+05:00 suse kernel: [ 6747.543082] Sense Key : Medium Error [current] [descriptor]2014-06-13T07:34:34.003491+05:00 suse kernel: [ 6747.543085] Descriptor sense data with sense descriptors (in hex):2014-06-13T07:34:34.003502+05:00 suse kernel: [ 6747.543086] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 2014-06-13T07:34:34.003503+05:00 suse kernel: [ 6747.543095] 5c 08 50 17 2014-06-13T07:34:34.003504+05:00 suse kernel: [ 6747.543099] sd 2:0:0:0: [sdc] 
2014-06-13T07:34:34.003505+05:00 suse kernel: [ 6747.543110] Add. Sense: Unrecovered read error - auto reallocate failed2014-06-13T07:34:34.003505+05:00 suse kernel: [ 6747.543111] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:34:34.003506+05:00 suse kernel: [ 6747.543112] Read(10): 28 00 5c 08 50 10 00 00 08 002014-06-13T07:34:34.003507+05:00 suse kernel: [ 6747.543116] end_request: I/O error, dev sdc, sector 15440486632014-06-13T07:34:34.003508+05:00 suse kernel: [ 6747.543118] Buffer I/O error on device sdc2, logical block 32704022014-06-13T07:34:34.003509+05:00 suse kernel: [ 6747.543127] ata3: EH complete2014-06-13T07:34:36.758454+05:00 suse kernel: [ 6750.295735] ata3.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x02014-06-13T07:34:36.758484+05:00 suse kernel: [ 6750.295740] ata3.00: irq_stat 0x400000082014-06-13T07:34:36.758488+05:00 suse kernel: [ 6750.295743] ata3.00: failed command: READ FPDMA QUEUED2014-06-13T07:34:36.758492+05:00 suse kernel: [ 6750.295750] ata3.00: cmd 60/08:00:10:50:08/00:00:5c:00:00/40 tag 0 ncq 4096 in2014-06-13T07:34:36.758496+05:00 suse kernel: [ 6750.295750] res 41/40:08:17:50:08/00:00:5c:00:00/00 Emask 0x409 (media error) <F>2014-06-13T07:34:36.758499+05:00 suse kernel: [ 6750.295752] ata3.00: status: { DRDY ERR }2014-06-13T07:34:36.758502+05:00 suse kernel: [ 6750.295754] ata3.00: error: { UNC }2014-06-13T07:34:36.932467+05:00 suse kernel: [ 6750.469333] ata3.00: configured for UDMA/1332014-06-13T07:34:36.932495+05:00 suse kernel: [ 6750.469351] sd 2:0:0:0: [sdc] Unhandled sense code2014-06-13T07:34:36.932501+05:00 suse kernel: [ 6750.469354] sd 2:0:0:0: [sdc] 2014-06-13T07:34:36.932504+05:00 suse kernel: [ 6750.469355] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE2014-06-13T07:34:36.932507+05:00 suse kernel: [ 6750.469357] sd 2:0:0:0: [sdc] 2014-06-13T07:34:36.932510+05:00 suse kernel: [ 6750.469359] Sense Key : Medium Error [current] [descriptor]2014-06-13T07:34:36.932514+05:00 suse kernel: [ 6750.469362] Descriptor sense data with sense descriptors (in hex):2014-06-13T07:34:36.932534+05:00 suse kernel: [ 6750.469364] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 2014-06-13T07:34:36.932546+05:00 suse kernel: [ 6750.469372] 5c 08 50 17 2014-06-13T07:34:36.932551+05:00 suse kernel: [ 6750.469376] sd 2:0:0:0: [sdc] 2014-06-13T07:34:36.932556+05:00 suse kernel: [ 6750.469379] Add. 
Sense: Unrecovered read error - auto reallocate failed2014-06-13T07:34:36.932560+05:00 suse kernel: [ 6750.469381] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:34:36.932564+05:00 suse kernel: [ 6750.469382] Read(10): 28 00 5c 08 50 10 00 00 08 002014-06-13T07:34:36.932567+05:00 suse kernel: [ 6750.469390] end_request: I/O error, dev sdc, sector 15440486632014-06-13T07:34:36.932572+05:00 suse kernel: [ 6750.469394] Buffer I/O error on device sdc2, logical block 32704022014-06-13T07:34:36.932576+05:00 suse kernel: [ 6750.469420] ata3: EH complete2014-06-13T07:36:15.441806+05:00 suse su: (to root) procyon on /dev/pts/52014-06-13T07:53:20.731456+05:00 suse kernel: [ 7873.286421] ata3: failed to read log page 10h (errno=-5)2014-06-13T07:53:20.731483+05:00 suse kernel: [ 7873.286429] ata3.00: exception Emask 0x1 SAct 0x1 SErr 0x0 action 0x02014-06-13T07:53:20.731487+05:00 suse kernel: [ 7873.286431] ata3.00: irq_stat 0x400000082014-06-13T07:53:20.731488+05:00 suse kernel: [ 7873.286434] ata3.00: failed command: READ FPDMA QUEUED2014-06-13T07:53:20.731490+05:00 suse kernel: [ 7873.286440] ata3.00: cmd 60/08:00:10:59:6d/00:00:60:00:00/40 tag 0 ncq 4096 in2014-06-13T07:53:20.731493+05:00 suse kernel: [ 7873.286440] res 40/00:00:10:59:6d/00:00:60:00:00/40 Emask 0x1 (device error)2014-06-13T07:53:20.731495+05:00 suse kernel: [ 7873.286443] ata3.00: status: { DRDY }2014-06-13T07:53:20.740442+05:00 suse kernel: [ 7873.296009] ata3.00: both IDENTIFYs aborted, assuming NODEV2014-06-13T07:53:20.740462+05:00 suse kernel: [ 7873.296013] ata3.00: revalidation failed (errno=-2)2014-06-13T07:53:20.740464+05:00 suse kernel: [ 7873.296018] ata3: hard resetting link2014-06-13T07:53:21.045453+05:00 suse kernel: [ 7873.599792] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)2014-06-13T07:53:21.065444+05:00 suse kernel: [ 7873.620355] ata3.00: both IDENTIFYs aborted, assuming NODEV2014-06-13T07:53:21.065467+05:00 suse kernel: [ 7873.620359] ata3.00: revalidation failed (errno=-2)2014-06-13T07:53:26.045451+05:00 suse kernel: [ 7878.595494] ata3: hard resetting link2014-06-13T07:53:26.350457+05:00 suse kernel: [ 7878.900156] ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)2014-06-13T07:53:26.395504+05:00 suse kernel: [ 7878.945713] ata3.00: both IDENTIFYs aborted, assuming NODEV2014-06-13T07:53:26.395516+05:00 suse kernel: [ 7878.945717] ata3.00: revalidation failed (errno=-2)2014-06-13T07:53:26.395518+05:00 suse kernel: [ 7878.945719] ata3.00: disabled2014-06-13T07:53:26.395520+05:00 suse kernel: [ 7878.945752] ata3: EH complete2014-06-13T07:53:26.395522+05:00 suse kernel: [ 7878.945774] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.395523+05:00 suse kernel: [ 7878.945775] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.395525+05:00 suse kernel: [ 7878.945776] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T07:53:26.395528+05:00 suse kernel: [ 7878.945777] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.395529+05:00 suse kernel: [ 7878.945778] Read(10): 28 00 60 6d 59 10 00 00 08 002014-06-13T07:53:26.395531+05:00 suse kernel: [ 7878.945782] end_request: I/O error, dev sdc, sector 16177789602014-06-13T07:53:26.395532+05:00 suse kernel: [ 7878.945784] Buffer I/O error on device sdc2, logical block 124866902014-06-13T07:53:26.395534+05:00 suse kernel: [ 7878.945863] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.395535+05:00 suse kernel: [ 7878.945868] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.395537+05:00 suse kernel: [ 7878.945869] Result: hostbyte=DID_BAD_TARGET 
driverbyte=DRIVER_OK2014-06-13T07:53:26.395538+05:00 suse kernel: [ 7878.945872] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.395540+05:00 suse kernel: [ 7878.945873] Read(10): 28 00 60 6d 59 10 00 00 08 002014-06-13T07:53:26.395541+05:00 suse kernel: [ 7878.945882] end_request: I/O error, dev sdc, sector 16177789602014-06-13T07:53:26.395543+05:00 suse kernel: [ 7878.945885] Buffer I/O error on device sdc2, logical block 124866902014-06-13T07:53:26.395544+05:00 suse kernel: [ 7878.945997] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.395546+05:00 suse kernel: [ 7878.946000] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.395547+05:00 suse kernel: [ 7878.946002] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T07:53:26.395548+05:00 suse kernel: [ 7878.946004] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.395550+05:00 suse kernel: [ 7878.946005] Read(10): 28 00 60 6d 59 80 00 00 08 002014-06-13T07:53:26.395551+05:00 suse kernel: [ 7878.946012] end_request: I/O error, dev sdc, sector 16177790722014-06-13T07:53:26.395552+05:00 suse kernel: [ 7878.946015] Buffer I/O error on device sdc2, logical block 124867042014-06-13T07:53:26.395554+05:00 suse kernel: [ 7878.946076] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.395555+05:00 suse kernel: [ 7878.946080] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.395557+05:00 suse kernel: [ 7878.946082] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T07:53:26.395558+05:00 suse kernel: [ 7878.946085] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.395560+05:00 suse kernel: [ 7878.946100] Read(10): 28 00 60 6d 5a 00 00 00 08 002014-06-13T07:53:26.395562+05:00 suse kernel: [ 7878.946141] end_request: I/O error, dev sdc, sector 16177792002014-06-13T07:53:26.395563+05:00 suse kernel: [ 7878.946152] Buffer I/O error on device sdc2, logical block 124867202014-06-13T07:53:26.395580+05:00 suse kernel: [ 7878.946192] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.395582+05:00 suse kernel: [ 7878.946194] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.395584+05:00 suse kernel: [ 7878.946195] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T07:53:26.395585+05:00 suse kernel: [ 7878.946196] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.395587+05:00 suse kernel: [ 7878.946197] Read(10): 28 00 60 6d 5b 00 00 00 08 002014-06-13T07:53:26.395588+05:00 suse kernel: [ 7878.946202] end_request: I/O error, dev sdc, sector 16177794562014-06-13T07:53:26.395590+05:00 suse kernel: [ 7878.946203] Buffer I/O error on device sdc2, logical block 124867522014-06-13T07:53:26.395592+05:00 suse kernel: [ 7878.946221] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.395593+05:00 suse kernel: [ 7878.946223] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.395595+05:00 suse kernel: [ 7878.946224] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T07:53:26.395596+05:00 suse kernel: [ 7878.946224] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.395598+05:00 suse kernel: [ 7878.946227] Read(10): 28 00 60 6d 5d 00 00 00 08 002014-06-13T07:53:26.395599+05:00 suse kernel: [ 7878.946228] end_request: I/O error, dev sdc, sector 16177799682014-06-13T07:53:26.395601+05:00 suse kernel: [ 7878.946229] Buffer I/O error on device sdc2, logical block 124868162014-06-13T07:53:26.395602+05:00 suse kernel: [ 7878.946245] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.395608+05:00 suse kernel: [ 7878.946254] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.395611+05:00 suse kernel: [ 7878.946254] Result: hostbyte=DID_BAD_TARGET 
driverbyte=DRIVER_OK2014-06-13T07:53:26.395612+05:00 suse kernel: [ 7878.946255] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.395615+05:00 suse kernel: [ 7878.946258] Read(10): 28 00 60 6d 61 00 00 00 08 002014-06-13T07:53:26.395616+05:00 suse kernel: [ 7878.946259] end_request: I/O error, dev sdc, sector 16177809922014-06-13T07:53:26.395618+05:00 suse kernel: [ 7878.946260] Buffer I/O error on device sdc2, logical block 124869442014-06-13T07:53:26.395624+05:00 suse kernel: [ 7878.946281] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.395626+05:00 suse kernel: [ 7878.946282] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.395628+05:00 suse kernel: [ 7878.946284] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T07:53:26.395629+05:00 suse kernel: [ 7878.946285] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.395634+05:00 suse kernel: [ 7878.946286] Read(10): 28 00 60 6d 69 00 00 00 08 002014-06-13T07:53:26.395636+05:00 suse kernel: [ 7878.946295] end_request: I/O error, dev sdc, sector 16177830402014-06-13T07:53:26.395637+05:00 suse kernel: [ 7878.946297] Buffer I/O error on device sdc2, logical block 124872002014-06-13T07:53:26.396515+05:00 suse kernel: [ 7878.946314] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.396522+05:00 suse kernel: [ 7878.946315] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.396524+05:00 suse kernel: [ 7878.946316] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T07:53:26.396527+05:00 suse kernel: [ 7878.946318] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.396529+05:00 suse kernel: [ 7878.946319] Read(10): 28 00 60 6d 79 00 00 00 08 002014-06-13T07:53:26.396531+05:00 suse kernel: [ 7878.946323] end_request: I/O error, dev sdc, sector 16177871362014-06-13T07:53:26.396533+05:00 suse kernel: [ 7878.946325] Buffer I/O error on device sdc2, logical block 124877122014-06-13T07:53:26.396534+05:00 suse kernel: [ 7878.946344] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.396536+05:00 suse kernel: [ 7878.946346] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.396538+05:00 suse kernel: [ 7878.946347] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T07:53:26.396540+05:00 suse kernel: [ 7878.946348] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.396542+05:00 suse kernel: [ 7878.946349] Read(10): 28 00 60 6d 99 00 00 00 08 002014-06-13T07:53:26.396544+05:00 suse kernel: [ 7878.946354] end_request: I/O error, dev sdc, sector 16177953282014-06-13T07:53:26.396546+05:00 suse kernel: [ 7878.946356] Buffer I/O error on device sdc2, logical block 124887362014-06-13T07:53:26.396548+05:00 suse kernel: [ 7878.946374] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.396550+05:00 suse kernel: [ 7878.946376] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.396552+05:00 suse kernel: [ 7878.946377] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK2014-06-13T07:53:26.396554+05:00 suse kernel: [ 7878.946379] sd 2:0:0:0: [sdc] CDB: 2014-06-13T07:53:26.396556+05:00 suse kernel: [ 7878.946379] Read(10): 28 00 60 6d d9 08 00 00 08 002014-06-13T07:53:26.396557+05:00 suse kernel: [ 7878.946401] sd 2:0:0:0: [sdc] Unhandled error code2014-06-13T07:53:26.396560+05:00 suse kernel: [ 7878.946403] sd 2:0:0:0: [sdc] 2014-06-13T07:53:26.396561+05:00 suse kernel: [ 7878.946404] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OKUnfortunately, I can't figure out at which block/sector did the issue occur, so I can restart ddrescue from that point on, skipping the troublesome area. 
The same was the case with testdisk when I tried to list the files for recovery; after I had painstakingly selected all the files to copy, testdisk failed to copy a single one of them, because the drive had disappeared during the scanning, I think.

For now, I have restarted ddrescue with this:

ddrescue -n -v -T 30 -A --retrim -d -i 150G --skip-size=500k,10M --min-read-rate=50k /dev/sdc2 /media/rescue/Drive2.img /media/rescue/Drive2.log

But as it is bound to repeat the disappearing-drive phenomenon again and produce an incomplete/almost-useless image, I really need some help figuring out a way to skip the sectors that are causing this problem, or any other tips to recover the data.
Drive disappears from dev during ddrescue copy or testdisk recovery
opensuse;data recovery;ddrescue
null
_unix.199932
I have upgraded a Debian 7 system to Debian 8 and, among other changes, when I log in using gnome-classic, the desktop background is new. However, my gnome menus have not changed, as you can see in this screenshot:

So, I have performed a fresh installation of Debian 8, and on that I get the following look and feel when I log in as gnome-classic:

Why are the icons, the menu look and feel, and the upper tool bar different? Can it be that some packages have not been upgraded properly when going from Debian 7 to Debian 8? If this is the case, is it possible to restore the look and feel of the upper picture in Debian 8?
Different look and feel for gnome classic in Debian 7 and Debian 8
debian;gnome classic
I'm not sure that this is the answer you are asking for, but if you want the look and feel of classic GNOME 2, you can install and use the MATE Desktop Environment (sudo apt-get install mate-desktop-environment)!

I would definitely advise you to use MATE instead of the old GNOME 3 classic mode:

- MATE will use fewer resources
- the so-called classic mode of GNOME 3 is just a fallback mode for when your system faces problems with the graphics card
- the classic mode was discontinued as of GNOME 3.6
- Debian squeeze (2.30+7), wheezy (3.4+7+deb7u1) and jessie (3.14+3) use different versions of GNOME by default!
_codereview.33115
After fumbling around with Ruby for a few weeks I've fallen into a coding pattern that I'm comfortable with. When I create a new class and that class has dependencies, I add a method initialize_dependencies that does the job of creating them. That function is then executed from the constructor.When I write a test for this class, I actually test a subclass where the initialize_dependencies function has been overridden and the dependencies replaced with stubs or mocks. Here is an example of such a test (with most tests and test data removed for brevity):require address_kit/entities/street_addressmodule AddressKit module Validation describe FinnValidationDriver do # Subject returns hard coded results from finn client. subject { Class.new(FinnValidationDriver) { def initialize_dependencies @client = Object.new @client.define_singleton_method(:address_search) { |address_string| FINN_VALIDATION_DRIVER_TEST_DATA[address_string] or [] } end }.new } it considers an address invalid if finn returns no results do address = AddressKit::Entities::StreetAddress.new({ street_name: FOOBAR, house_number: 123 }) subject.valid?(address).must_equal false subject.code.must_equal FinnValidationDriver::CODE_NO_RESULTS end it considers an address invalid if finn returns multiple results do address = AddressKit::Entities::StreetAddress.new({ street_name: FORNEBUVEIEN, house_number: 10 }) subject.valid?(address).must_equal false subject.code.must_equal FinnValidationDriver::CODE_NOT_SPECIFIC end end endendFINN_VALIDATION_DRIVER_TEST_DATA = { FORNEBUVEIEN 10 => [ AddressKit::Entities::StreetAddress.new({ street_name: FORNEBUVEIEN, house_number: 10, entrance: A, postal_code: 1366, city: LYSAKER }), AddressKit::Entities::StreetAddress.new({ street_name: FORNEBUVEIEN, house_number: 10, entrance: B, postal_code: 1366, city: LYSAKER }) ], TORGET 12, ASKIM => [ AddressKit::Entities::StreetAddress.new({ street_name: TORGEIR BJRNARAAS GATE, house_number: 12, postal_code: 1807, city: ASKIM }), AddressKit::Entities::StreetAddress.new({ street_name: TORGET, house_number: 12, postal_code: 1830, city: ASKIM }) ]}This class only has one dependency, which is a client for a web service. Here I replace it with a stub returning static data.I'm fairly new with Ruby so there are probably at least a few issues or potholes with this approach that I'm not seeing. Is there a best practice when it comes to writing classes and tests for them? Am I way off?
Are there any glaring issues with the way I write and test my Ruby classes?
ruby
null
_cs.71539
Knowledge bases and expert systems are usually production rule systems, and as such they lack expressive means for modalities like "agent believes in statement", "agent has a duty to perform an action", "agent has permission to perform an action". Modalities are usually written as modal operators (diamond and square boxes) or as special kinds of implications. https://ts.data61.csiro.au/publications/nictaabstracts/5627.pdf is a good example of how to introduce modalities in defeasible logic without Kripke/relational/possible-world semantics and machinery.

Introducing modalities brings both new symbols into the language and new inference rules (like "permission follows from duty"); https://en.wikipedia.org/wiki/Modal_logic contains a good list of example rules that come with the introduction of modal operators.

So - my question is - how to introduce modalities in expert systems / production rule systems which have no dedicated operators or implications for modalities?

My proposal is to treat modalities as predicates and introduce a special kind of metarule that, for each type of modal predicate, generates the relevant modal predicates, e.g. one that generates the corresponding permission predicate from a duty predicate. There is a book about such an approach: http://www.springer.com/us/book/9783319225562

Is my proposal to treat modalities as a special kind of predicate sound, and are there alternatives?

Specifically, I am trying to introduce modalities in http://opencog.org/; this system supports meta-rules and higher-order rules, so maybe my approach is valid and academically acceptable?
How to express modalities in rule bases, knowledge bases or expert systems?
logic;knowledge representation;modal logic;expert systems
null
_unix.373441
When using ifdown or ifup for the loopback interface in CentOS 7:

[root@localhost etc]# ifup lo

I get the error below:

Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'
Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo'

However, if I use the ifconfig command, it shows that the command (ifdown or ifup) worked successfully. From my testing, using ifconfig lo up/down does not show any signs of errors either.

What is happening?

EDIT: I checked the ifcfg-lo file; it shows the info below:

[root@localhost etc]# ls -la /etc/sysconfig/network-scripts/ifcfg-lo
-rw-r--r--. 1 root root 254 Jun 26 20:07 /etc/sysconfig/network-scripts/ifcfg-lo
Could not load file '/etc/sysconfig/network-scripts/ifcfg-lo' when I use ifdown or ifup
centos;ifconfig
null
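For comparison, a stock CentOS 7 ifcfg-lo normally contains roughly the lines sketched below (reproduced from memory, so treat it as an assumption to compare against rather than an authoritative copy); if the file's contents or its SELinux context (the trailing dot in the ls -la output) look unusual, restorecon on the file may be worth a try:

# /etc/sysconfig/network-scripts/ifcfg-lo -- typical stock contents
DEVICE=lo
IPADDR=127.0.0.1
NETMASK=255.0.0.0
NETWORK=127.0.0.0
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback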
_unix.358352
I want to make a backup, but my tapes don't have space for everything, so I decided not to back up the virtual machines (over 1 TB). I have the virtual machines in .local/share/libvirt/images.

I used this command:

tar cvf - /home/user -X altro/file.esclude | openssl aes-256-cbc -salt -k password | dd bs=80M of=/dev/st0

In altro/file.esclude I put this line:

/home/user/.local/share/libvirt/images

But tar ignores the exclude file and backs up everything!!

So I use:

tar cvf - /home/user --exclude '/home/user/.local/share/libvirt/images' | openssl aes-256-cbc -salt -k password | dd bs=80M of=/dev/st0

And... same thing! Why?

The system is Slackware 14.2 with GNU tar.
Get tar to exclude some files
tar
The -X option must come before the paths to include in the tar file.

So:

tar cvf - /home/user -X altro/file.esclude | openssl aes-256-cbc -salt -k password | dd bs=80M of=/dev/st0

is WRONG.

This:

tar cvf - -X /home/user/altro/file.esclude /home/user | openssl aes-256-cbc -salt -k pass | dd bs=80M of=/dev/st0

is RIGHT.

In the exclude file, I use:

/home/user/.local/share/libvirt/images/*
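If you want to sanity-check the exclude list before committing a multi-hour run to tape, one option is the sketch below (paths taken from the question; it still reads the data that is not excluded, but writes nothing to tape):

# List what tar would pack and count any libvirt image entries; 0 means the exclusion works.
tar cf - -X /home/user/altro/file.esclude /home/user | tar tf - | grep -c 'libvirt/images'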
_webmaster.106772
We have recently moved our old website to a new platform using Angular 2 + Universal. The site is city-sightseeing.com, and while it was fine at the beginning, in the last few weeks we have been having issues with SEO and being de-indexed.

Timeline of events, issues and solutions:

29th March: New site launched, only in English. All looking good and working fine. Redirects were made from the old website to the new one using 301s for the pages we have in the new site. The content has all been re-written for the new site.

12th May: New languages added: Italian, French, Spanish and German. We put the hreflang in all the pages to tell Google about all languages. Issues started here. We saw that Google started giving us soft 404s and de-listing the English site from the listings. The new languages didn't seem to be affected.

24th May: after investigating, we found a few issues that were fixed on this date:

- We changed how the default language was set, to follow Google guidelines.
- We found that while we served the HTML code to Google, the Fetch and Render in Search Console couldn't see our pages properly. We fixed that and now all our pages can be fully seen and rendered properly in Search Console.
- We reviewed our structured data to make sure it was working well.
- A new sitemap was uploaded with all the URLs in all languages.

We started seeing improvements and our pages, especially the English ones which are the main ones for us, started appearing again in the searches and the Google index. Since then we didn't touch anything, giving Google time to go over the whole site.

Since a few days ago, we have started seeing the same issues as before:

- The pages are being de-indexed by Google.
- The soft 404s started growing; these are only for the English pages.
- The number of structured data items started to go down.

We double-checked, and the pages can be properly seen by the bot in the Fetch and Render tool. Any ideas of what can be happening? Thanks!

Here is the render in mobile:
After moving to Angular 2 + Universal Google is reporting soft 404s and removing pages from the index
google search console;soft 404
null
_unix.355203
I've just installed Gnuroot and Gnuroot Wheezy on my Samsung phone (Android 7.0). I can't get past the Create New Rootfs stage, though: my device says unpacking a rootfs, thinks for a few seconds, then stops. Any thoughts?
Gnuroot Wheezy fails to install on Android 7.0
debian;android;gnuroot
null
_computergraphics.221
I know that not so long ago (5-10 years?) it was popular / efficient to bake data out into textures and then read the data back from the textures, often using the built-in texture interpolation to get linear interpolation of the baked-out data. Now that computing time is cheaper compared to texture lookup time, this practice has definitely lessened, if not altogether disappeared. My question is, are baked-out textures still used for anything? Does anyone have any use cases for them on modern architectures? Does it seem likely they will ever make a comeback? (say, if memory technology or basic GPU architecture changes)
Are lookup textures still used for anything?
texture;gpu;hardware
Yes, lookup textures are still used. For example, pre-integrated BRDFs (for ambient lighting, say), or arbitrarily complicated curves baked down to a 1D texture, or a 3D lookup texture for color grading, or a noise texture instead of a PRNG in the shader.ALU is generally cheaper than a texture sample, true, but you still have a limited amount of ALU per frame. GPUs are good at latency hiding and small lookup textures are likely to be in the cache. If your function is complicated enough, it may still be worth using a lookup texture.
_unix.239317
I have a text file I need to verify, and I am trying to figure out how to check that an expected value exists. An example of my file is: InitialPattern: Value1=somevalue Value2=somevalue Value3=somevalue InstallationName=InstallationX I don't care about lines 2-4, but I need to verify that in line 5 InstallationName=Installation1. To throw a wrench in it, this line does not always exist, at which point the pattern starts again with InitialPattern: What I have so far kind of works, but not in the case where the line does not occur at all: instName=Installation1 installationNames=$(cat file.txt | grep InstallationName) IFS=$'\n' read -rd '' -a array <<< $installationNames for element in ${array[@]} do if [[ $element =~ $instName ]]; then test=pass else test=fail break fi done Any ideas? I was looking at this post: Print Matching line and nth line from the matched line, where the user got the fourth line after a pattern occurrence - I was thinking that if I could store this value I could compare it to the expected value, but I am not entirely sure how to store it yet. Any guidance is welcome!
Verification of 4th line value after pattern occurrence
shell script
null
_unix.188455
This isn't an actual problem - but more of a curious question, when I run while true; do ps aux | grep abc; echo done; done I get the following:user 29733 0.0 0.0 11748 924 pts/1 R+ 20:25 0:00 grep --color=auto abcdoneuser 29735 0.0 0.0 11748 920 pts/1 S+ 20:25 0:00 grep --color=auto abcdoneuser 29737 0.0 0.0 11748 924 pts/1 S+ 20:25 0:00 grep --color=auto abcdonedonedonedoneuser 29745 0.0 0.0 11748 924 pts/1 R+ 20:25 0:00 grep --color=auto abcdoneuser 29747 0.0 0.0 11748 924 pts/1 R+ 20:25 0:00 grep --color=auto abcdoneuser 29749 0.0 0.0 11748 924 pts/1 R+ 20:25 0:00 grep --color=auto abcdoneuser 29751 0.0 0.0 11748 924 pts/1 R+ 20:25 0:00 grep --color=auto abcdoneuser 29753 0.0 0.0 11748 924 pts/1 S+ 20:25 0:00 grep --color=auto abcdoneuser 29755 0.0 0.0 11748 924 pts/1 S+ 20:25 0:00 grep --color=auto abcdonedoneuser 29759 0.0 0.0 11748 924 pts/1 R+ 20:25 0:00 grep --color=auto abcdoneuser 29761 0.0 0.0 11748 920 pts/1 R+ 20:25 0:00 grep --color=auto abcdoneSometimes grep doesn't actually see itself in ps aux. Is this just a timing issue between the two processes running? This also happens when I run the commands individually and not in a loop. This is happening both on my computer and another machine over ssh, but it is happening more frequently on the remote computer (which the output is from).Ubuntu 14.04
grep randomly appearing and disappearing in ps aux (ps aux | grep python)
grep;pipe;ps
I think this is just timing, as you mention. Commands in a pipeline run concurrently; you can find more information in In what order do piped commands run?. It might happen more or less frequently on a given machine depending on how much CPU is available and how many processes are running.
_unix.13449
I'm attempting to compile GCC 4.5.2 as part of the Linux from Scratch book (http://www.linuxfromscratch.org/lfs/view/stable/chapter05/gcc-pass1.html). My configure is as follows:./configure \ --target=$LFS_TGT --prefix=/tools \ --disable-nls --disable-shared --disable-multilib \ --disable-decimal-float --disable-threads \ --disable-libmudflap --disable-libssp \ --disable-libgomp --enable-languages=c \ --with-gmp-include=$(pwd)/gmp --with-gmp-lib=$(pwd)/gmp/.libs \ --without-ppl --without-cloogWhen I attempt to make I get the error (after digging around in config.logs):error while loading shared libraries: libgmp.so.10: cannot open shared object file: No such file or directoryI have gmp in a subdirectory and got to this point after successfully compiling it. How can I point GCC to use this library?I'm going through LFS in an attempt to get myself more familiar with Linux behaviour. I've jumped over a fair few hurdles but this particular case is stumping me.If it's relevant: I'm using an Ubuntu 11 host. Any ideas?With thanks.
Cannot find libgmp when compiling GCC 4.5.2
make;gcc
I'm fairly sure the issue was caused by my (dumb) decision to use a combined source + build directory. Cleaning up my environment and re-building to a different folder has addressed this issue.
_unix.325407
Background & Requirements: I've found a number of reference docs and Q&A posts relating to this topic, but I've not been able to figure out a key area of the design. I would like to reject an inbound email based on a custom analysis algorithm - put simply, I have a Python script that does the analysis, and I'm currently testing by invoking it as a mail filter from GNOME Evolution. This all seems to work more or less as expected. There seem to be a couple of nuances with return codes in Python vs. the interpretation by Evolution's mail filter system, but it is otherwise operational. At this stage I'm not tied to a technology or system, other than that it must be open source. Ideally it should run on Debian (or Ubuntu), so Postfix seems to be the best fit. The Problem Area: I've been looking at gateways et al. such as Postfix in order to design an integration that works on a more autonomous level - and to prevent the need to waste time filtering email in inboxes. I can see the lightweight before-queue filters (for example here), and I can see how to call a script from these hooks (e.g. here), but not how to acquire any return codes from the script. What I can't seem to find in the documentation is how you would apply a result code / return code from the script to Postfix, in order for it to determine whether to allow or reject the message. Note that the solution relies on being able to reject an email message, not discard it (for reasons I unfortunately cannot go into here). I thought about a cron job that inspects a list of collated data items collected from email added to the queue and appends more filters to the Postfix configuration automatically. This only solves part of the problem and means something will run on the server even if there is no new email. TL;DR So my questions are: How can I call a script from the MTA and get the script's result? E.g. call scan.py and get either a 0 or 1 back. What mechanism in Postfix (or another similar open source system) should I refer to in the documentation to then bind this result to an action?
Using a custom filter via script to reject inbound email
shell script;python;postfix;mail transport agent
null
_webapps.24040
If I go to Search Settings / Languages / select English / Save, it goes to English but then keeps switching to Slovak. In my cookie settings, I have 'Allow local data to be set' checked. I'm in Ireland and I'd like local search results in English. But Google gives me search results from Slovakia and the Czech Republic, and Google is in Slovak. The same happens whether or not I am signed into my Google account, so I guess there's a file somewhere on my computer telling Chrome to give me search results in Slovak and from Slovakia. How do I make Google default to English in Chrome? Edit: This page has some answers: http://support.google.com/websearch/bin/answer.py?hl=en&answer=533 But none of those things resolve this.
How do I make Google default English in Chrome?
google
null
_unix.38444
I'm new to Linux. I have 2 Debian Squeeze hosts running. Let's call them SqueezeOne and SqueezeTwo. After logging into SqueezeOne, I ran ssh-keygen and added the resulting public key to my authorized keys file: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys I also added a public key generated by puttygen from my Windows desktop to the same key file. I can ssh in from PuTTY just fine without being asked for my password. However, if I type in either of the following commands: ssh localhost or ssh One I get the following error: The authenticity of host 'localhost (127.0.0.1)' can't be established. RSA key fingerprint is 75:56:33:22:c3:da:43:72:11:33:ec:50:f4:d0:dd:c7. Are you sure you want to continue connecting (yes/no)? Host key verification failed. If I go to SqueezeTwo and try to ssh to SqueezeOne, I receive the same message. On SqueezeTwo, there is a ~/.ssh/known_hosts file, which I know I did not create on my own. However, I am not seeing the same known_hosts file on SqueezeOne. On SqueezeTwo, I can ssh to localhost and to itself with no problem. What am I doing wrong?
Cannot SSH to localhost - host key verification failed
ssh
null
_unix.378323
In a folder, I have a number of files with the .dat extension (they were originally in .xvg format, but I changed them to .dat format to plot all the graphs in a single plot). They contain the values along with some written headings etc., as: # Grace project file#@version 50125@page size 792, 612@page scroll 5%@page inout 5%@link page off@map font 8 to Courier, Courier@map font 10 to Courier-Bold, Courier-Bold@map font 11 to Courier-BoldOblique, Courier-BoldOblique@map font 9 to Courier-Oblique, Courier-Oblique@map font 4 to Helvetica, Helvetica@map font 6 to Helvetica-Bold, Helvetica-Bold@map font 7 to Helvetica-BoldOblique, Helvetica-BoldOblique@map font 5 to Helvetica-Oblique, Helvetica-Oblique....@ s0 errorbar riser linestyle 1@ s0 errorbar riser clip off@ s0 errorbar riser clip length 0.100000@ s0 comment rdf_CaNm.xvg@ s0 legend N1@target G0.S0@type xy0 00.002 00.004 0 I.e., there are 327 lines from # Grace project file (being line one) to @type xy (being line 327), and then 0 0 is the 328th line, 0.002 0 is the 329th line, etc. How can I delete the first 327 lines in all of the .dat files in the folder (yes, all those .dat files have the same first 327 lines as stated above) with a command in the terminal?
Regarding deleting first 327 lines in all the .dat files in a folder
text processing
null
_webmaster.81661
(I hope I've got everything translated correctly, because our Google Analytics is in German. I'll write the German word after the translations, in case anyone speaks German and I've got the translations wrong.) We have been looking at the SEO Content pages report (Akquisition > Suchmaschineneoptimierung -> Zielseiten) recently. For some unknown reason we cannot edit that report, so we tried to create a new one with additional info. But Impressions and Clicks are not available in custom reports. OK. Create an extra report with the additional fields and merge the two in Excel. Not nice, but it would work. Our custom report has a filter for source/medium with value google/organic and landing page (Zielseite) as dimension. To our understanding, those should be the same settings that the SEO Content pages report is based on. We have Entries (Einstiege) as the first metric. Again, that should somehow match Clicks in the SEO report. But it does not. (A small difference would be no problem.) Given the same time period, we have really big differences. A page has 5 clicks in the SEO report and 78 entries in the custom report. Another page has 35 clicks in the SEO report and 18 entries in the custom report. So the difference goes in both directions. Why is that? Is our understanding of either source/medium google/organic or the SEO Content pages report wrong? How can we explain these differences? Is there a better/more correct way to get additional info into the SEO Content pages report? We need the revenue value. Any help is appreciated. Sebastian
Different number in Google Analytics SEO landing pages vs. custom report google/organic entries
google analytics
This is due to the data in Akquisition > Suchmaschineneoptimierung -> Zielseiten (or Acquisition > Search Engine Optimisation > Landing Pages) being pulled from the Google Webmaster Tools / Search Console account for the site, while the data in your other report comes from Google Analytics's own tracking - two different tracking methods. Why are the numbers so far out? Well, Google explains why that could be here: Search Console data may differ from the data displayed in other tools. However, I have seen outrageous differences in the data. For instance, a site where many hundreds of pages appear as landing pages, yet these pages are set to noindex and are not indexed in Google, so they couldn't have been the landing page from organic search. Apparently this happens when people land on a different page from search, then leave the website and come back directly to a different page (not via search), which is still counted as an organic session. See also: How can pages which aren't indexed be reported as landing pages in Google Analytics?
_webapps.85070
How do I disable Facebook chat availability? My chat is offline and I am using Windows 7 with Firefox. When I log in with my other account through mobile Facebook and open a chat conversation, I see that the account I use on my computer shows as active just now whenever I move the mouse or refresh the page. How do I disable that? Is there some specific heartbeat message that can be blocked through AdBlock Plus or something?
Disable Facebook chat availability (Active now, active just now, active x minutes ago)
facebook;facebook chat
Okay, I have found the solution to block this status. Basically, Facebook uses a timer and sends a heartbeat message to their servers with the idle time every x minutes. Here is an example of such a web request: https://1-edge-chat.facebook.com/pull?channel=userid&seq=0&partition=-2&clientid=18ae8ecc&cb=ie3k&idle=117&qp=y&cap=8&msgs_recv=0&uid=userid&viewer_uid=userid&msgr_region=FRC&state=offline If you read that link you'll see that &idle=117 is the total number of seconds the account has been idle. The state=offline indicates whether the chat is active or offline. Note that I have replaced my userid, which is a 15-digit number. So to disable this, just add the following rule to your AdBlock filters: https://*-edge-chat.facebook.com One drawback of this method is that you will not receive messages in real time. You must refresh the page to get the messages.
_webapps.1013
Why don't I have access to my Google Calendar tasks using either Lightning for Thunderbird or Evolution? I can sync calendars without any problems, but I do not even have read access to tasks...
How can I have read access to my Google Calendar tasks in a desktop application?
sync;google calendar;tasks;thunderbird
The reason is that Google has not created any functionality in their Google Calendar APIs for doing anything with tasks. Without an API for this, applications would have to screen-scrape, which is a pain and is inconvenient. Many developers want such an API, so this functionality will probably be added soon. Until then, there's not much you can do. Sorry!
_softwareengineering.129043
I have an environment that supports both dictionaries (JSON style) and databases (not relational or anything, just data formatted by row and column). My application doesn't really need database functionality, but I'm somewhat more comfortable with the database system than I am with dictionaries. Is there a major performance advantage to dictionaries? What situations are there where a dictionary is better than a database?
What are the relative advantages of dictionaries versus databases?
database;dictionary
If you want to know whether there is a performance advantage, the best thing to do is measure it yourself. The performance depends a lot on the type of data, the language, the amount of data, etc. It's impossible to give a blanket statement as to when dictionaries are better than databases. Again, it depends on the data, the language, etc. Roughly speaking, dictionaries are better for simple and small datasets, and databases are good for complex and large data sets.
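To show what "measure it yourself" can look like in practice (my own illustration - the question's environment isn't specified, so this uses Python's built-in dict and an in-memory SQLite table purely as stand-ins):

    import sqlite3
    import timeit

    data = {i: str(i) for i in range(100_000)}

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
    conn.executemany("INSERT INTO kv VALUES (?, ?)", data.items())
    conn.commit()

    def dict_lookup():
        return data[54321]

    def db_lookup():
        return conn.execute("SELECT v FROM kv WHERE k = ?", (54321,)).fetchone()[0]

    print("dict  :", timeit.timeit(dict_lookup, number=10_000))
    print("sqlite:", timeit.timeit(db_lookup, number=10_000))

Run against your own data shapes and sizes, this kind of micro-benchmark answers the "is there a major performance advantage" question far more reliably than any general rule.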
_reverseengineering.2119
For learning (and fun) I have been analyzing a text editor application using IDA Pro. While looking at the disassembly, I notice many function calls are made by explicitly calling the name of the function. For example, I notice IDA translates most function calls into the following two formats: call cs:CoCreateInstance or call WinSqmAddToStream But sometimes the format does not use a function name. The following example includes the code leading up to the line in question. The third line of code seems to be missing the function name. (The comments are my own.) mov rcx, [rsp+128h+var_D8] // reg CX gets the address at stack pointer+128h+var_D8 bytes mov r8, [rcx] // the address at reg CX is stored to reg r8 call qword ptr [r8 + 18h] // at address rax+18h, call function defined by qword bytes My questions are as follows: How do I make the connection between call qword ptr <address> and a function in the disassembly? I understand that IDA cannot use a function name here since it does not know the value stored in the register R8... so what causes this? Was there a certain syntax or convention used by the developer? In other words, did the developer call the function WinSqmAddToStream in a different manner than the function at [r8+18h]?
What to do when IDA cannot provide a function name?
ida;disassembly
To connect an indirect call to its target (if you know it) you can do the following: 1) Add a custom cross-reference - either with IDC/Python, or from the Cross References subview. If you use scripting, don't forget to add the XREF_USER flag so IDA does not delete it on reanalysis. 2) Use the callee plugin (Edit->Plugins->Change the callee address, or Alt+F11). This will automatically add a cross-reference and also a comment next to the call. As for why the explicit call is not present in the binary, there can be many explanations. The snippet you're showing looks like a virtual function call, and such calls are usually done in this manner to account for the possibility of the method being overridden in a derived class.
_softwareengineering.284727
So let's say we have three simple resources: Groups, Users, and GroupUsers.Groups - Represent interest groups which can be subscribed by users.{ name: 'Colorado Mountain Biking Group' ownerId: 1 (Some user)}GroupUsers - Represents the junction table in the many to many Groups - Users relationship. Group membership status and some other attributes are stored here. { userId: 2, courseId: 1, color: '#FFFFFF', nickname: 'MBG Colorado', status: 'accepted'}Since our API client will always handle the group from the perspective of the authenticated user, GET /api/groups/1 AUTH(userId=2) should return: Group (Includes GroupUser for authenticated user) { id: 1 name: 'Colorado Mountain Biking Group' ownerId: 1, groupUser: { userId: 2, courseId: 1, color: '#FFFFFF', nickname: 'MBG Colorado', status: 'accepted' } }Someone suggested to me that our API Clients should not know or care about the junction table, all they should care about is the group, so I should instead respond with a merged resource like so: group (In reality group merged with groupUsers for the authenticated user.) { id: 1 name: 'Colorado Mountain Biking Group' ownerId: 1, userId: 2, color: '#FFFFFF', nickname: 'MBG Colorado', status: 'accepted' }The problem I see with this (besides merging issues like repeated Id's and createdAt/updatedAt timestamps)is that the API clients will then use this same resource to further interact with our API endpoints.If an API client wishes to update the groupUser resource he would:PUT /api/groups/1 { id: 1 name: 'Colorado Mountain Biking Group' ownerId: 1, userId: 2, color: '#000000', nickname: 'Some other value', status: 'accepted' }So now we would also need to scan request bodies and differentiate between group and groupUser attributes. Is the hassle worth it? Even when API consumers are other in-house developers?
Is hiding complexity from API clients by merging resources a correct practice?
rest;api
null
_unix.38781
In trying to trace a simple HTTP GET request and its response with nc, I'm running into something strange.This, for example, works fine: the in file ends up containing the HTTP GET request and the out file the response.$ mkfifo p$ (nc -l 4000 < p | tee in | nc web-server 80 | tee out p)&[1] 8299$ echo GET /sample | nc localhost 4000This is contents of /sample...$ cat outThis is contents of /sample...$However, if I replace the tee out p above with tee out >p, then the out file turns out to be empty.$ (nc -l 4000 < p | tee in | nc web-server 80 | tee out > p)&[1] 8312$ echo GET /sample | nc localhost 4000$ cat out$ Why should this be so? EDIT: I'm on RHEL 5.3 (Tikanga).
Different redirection styles with netcat and tee giving different results
netcat;tee
The problem is that you're using shell redirects to read from and write to the same file. Check p afterwards; it will be empty as well. With > p the shell opens the file for writing, truncating it, while it is setting up the pipeline, before it runs the commands. However, using tee, since tee opens the file itself, means that the file isn't truncated until after the contents have been read for the input. This is a well known and documented behavior, and it is the reason you can't simply use redirects to make in-place changes to files.
_softwareengineering.180986
I'm trying to conceive the business logic of this website that has many activities, where the users can build their own combo and get discounts depending on their choices and on how long they are willing to pay for their plan (1, 3, 6 and 12 month plans). I'm having a hard time trying to come up with a solution while keeping the database normalized and with proper relations, without having to resort to JSON-encoded data in the database fields. The system must stay generic enough to fit many business types that rely on plans/activities. I need to know how to structure my tables. Scenario: For example, in the case of a gym, there will be bodybuilding, yoga, boxing, body pump. If the person chooses bodybuilding and boxing, they will have a discount. If they add yoga, boxing and bodybuilding will keep the discount, but yoga will be added without any discount to the price. If the person decides to pay 12 months upfront, they will get a bigger discount. Bodybuilding + Body Pump Bodybuilding..$70 | 28% discount Body Pump.....$70 | Total........$100 1 month - $100 / month 3 months - $ 91 / month 6 months - $ 75 / month Bodybuilding + Body Pump + Yoga Bodybuilding..$70 | 28% discount Body pump.....$70 | Yoga..........$90 | No discount Total........$190 1 month - $190 / month 3 months - $171 / month 6 months - $158 / month I'll be using PHP and MySQL, but that doesn't matter much, only the RDBMS part. Edit for clarification: What I'm really looking for is a database schema for packaging products (or in this case services) together under a single price/offer. Each product must also exist in the system as a standalone product. I still need the ability to report on sales (and profit) by product even if that product was sold as part of a package. I would need the ability to report on package performance.
DB schema for packaging products/services together under a single price/offer
design;php;architecture;database;mysql
At a minimum you need: A Promo Header Table - holds exactly what you get with the promotion (free item, free shipping, $$ off). A Promo Requirements Table (1-to-many to promos; holds all the requirements of the promo). Each record is one requirement; requirements can be "must be an item of Brand X", or "must be SKU 1234", or "order total must be > $50". (When I did this I had a row type flag that told me what kind of requirement it was.) A module that is good at checking whether an order meets a promotion's requirements. Unfortunately, I can't give you an exact ERD, because the promo conditions vary so much from business to business, so it's up to you to make it as complex or simple as you need. For example, do you need promos by SKU? By category, by subcategory, by brand? Do you need to exclude certain SKUs? Brands? Categories? Shipping locations? Lastly, make sure you make this thing easy to maintain and adjust, because just when you think you have covered all your bases, the business team will come up with some new, crazier promo that no customer on earth will understand how to use. EDIT: Now that I better understand your question, here is what you need: a Package Header Table and a Line Items Table. The package name and description are stored on the header; the lines hold the items (including the item prices when sold as part of that package). You need a way of adding a package to an order. When that is done, add each item in the package as a regular line item, but have extra fields that specify that the item is part of a package, plus the PackageID. It is then up to you to decide how you want to code the order printout: either just print a package total, so the customer doesn't see the line item prices, or print it normally but add the package description.
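To make the EDIT concrete, here is a minimal sketch of that package / line-item model in Python dataclasses (my own illustration, not from the original answer - the names Package, PackageItem and OrderLine are hypothetical, and the same shape maps directly onto three tables):

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class PackageItem:            # one row of the package Line Items table
        sku: str
        price_in_package: float   # item price when sold as part of the package

    @dataclass
    class Package:                # the Package Header table
        package_id: int
        name: str
        description: str
        items: List[PackageItem] = field(default_factory=list)

    @dataclass
    class OrderLine:              # a regular order line item
        sku: str
        price: float
        package_id: Optional[int] = None   # set when the line came from a package

    def add_package_to_order(order_lines: List[OrderLine], package: Package) -> None:
        # Each package item becomes a normal line item tagged with the package id,
        # so per-product sales reporting keeps working.
        for item in package.items:
            order_lines.append(OrderLine(sku=item.sku,
                                         price=item.price_in_package,
                                         package_id=package.package_id))

Reporting on package performance is then a matter of grouping order lines by package_id, while per-product reporting groups by sku as before.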
_hardwarecs.4243
I want a wearable camera for a life-logging project, and I find the existing options to be absurdly expensive. So I want to build one. I found a solution using the Raspberry Pi Camera. However, I still find the Raspberry Pi Camera to be pretty expensive for my purpose. I feel I should be able to build the camera for about 20 USD using something like these: Processor for 9 USD or Processor for 5 USD Memory for 6 USD Camera for 5 USD My concern is that I don't know whether one can mount a mobile phone front camera directly on such a processor. Is there a more suitable camera for the project? I want my device to be minimalistic and simple. These are the features I desire: Should take pictures every minute (or any other interval). The pictures shouldn't be too blurry when the wearer is moving or walking. Should be able to transfer the pictures to a PC via USB cable (or by any other means). I would be even happier to learn that I don't need such an elaborate processor. Any advice on how to proceed?
Need hardware suggestion for simple wearable camera
camera
null
_webmaster.56498
I recently moved a WordPress blog to my own domain and have been moving everything over and coming to grips with making the site look half decent. When I run the domain through Google's Webmaster structured data tool, I get multiple errors - in fact, an error for every single item! Every error seems to have the same problem: missing Author and missing Updated, and it mentions something to do with hentry. I googled that to try and figure it out, but it pretty much made my head explode! FYI, I know nothing about CSS etc., and I know I'm a bit over my head with it all, but I think it's too late to turn back to the regular WordPress account now. Ugh. So, are these errors important? And... how do I fix them?
Google Webmasters structured data errors
google;google search console
null
_cs.41070
In relation to the thread Proving that the conversion from CNF to DNF is NP-Hard (and a related Math thread): How about the other direction, from DNF to CNF? Is it easy or hard? On page 2 of this paper, they seem to hint that both directions are equally hard when they say "We are interested in the maximal blow-up of size when switching from the CNF representation to the DNF representation (or vice versa)." But DNF-SAT is in P and CNF-SAT is NP-complete. So given a DNF expression $\phi_1$, there should be an equisatisfiable CNF expression $\phi_2$ whose length is polynomial in the length of $\phi_1$. And the $\phi_1 \to \phi_2$ conversion can be done in poly time. Is this correct? Edit: Changed "equivalent" to "equisatisfiable" (that is, additional variables are allowed in $\phi_2$).
DNF to CNF conversion: Easy or Hard
complexity theory;logic;satisfiability;normal forms
If you are willing to introduce additional variables, you can convert from DNF to CNF form in polynomial time by using the Tseitin transform. The resulting CNF formula will be equisatisfiable with the original DNF formula: the CNF formula will be satisfiable if and only if the original DNF formula was satisfiable. See also https://en.wikipedia.org/wiki/Conjunctive_normal_form#Conversion_into_CNF.If you don't want to allow introduction of additional variables, converting from DNF to CNF form is co-NP-hard. In particular, testing whether a DNF formula is a tautology is co-NP-hard. However, testing whether a CNF formula is a tautology can be done in polynomial time (you just check separately whether every clause is a tautology, which is easy as each clause is a disjunction of literals). Therefore, if you could convert from DNF form to CNF form in polynomial time, without introducing new variables, then you would obtain a polynomial-time algorithm for testing whether a DNF formula is a tautology -- something which seems unlikely, given that we expect P is not equal to co-NP. Or, to put it another way, converting from DNF to CNF form without introducing additional variables is co-NP-hard.This is the difference between equivalence vs equisatisfiability. Equivalence requires the two formulas to have the same set of solutions (and thus does not allow introducing additional variables). Equisatisfiability only requires that either both formulas are satisfiable or both are unsatisfiable (and thus does allow introducing additional variables).
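To make the equisatisfiable construction concrete, here is a small sketch in Python (my own illustration, not part of the original answer) that takes a DNF formula in DIMACS-style integer literals and produces CNF clauses with one fresh selector variable per term:

    def dnf_to_cnf(dnf_terms, num_vars):
        """dnf_terms: list of terms, each term a list of non-zero ints
        (positive for a variable, negative for its negation).
        Returns equisatisfiable CNF clauses over variables 1..num_vars
        plus one fresh selector variable per term."""
        clauses = []
        selectors = []
        next_var = num_vars + 1
        for term in dnf_terms:
            z = next_var          # fresh selector variable for this term
            next_var += 1
            selectors.append(z)
            for lit in term:      # z -> lit, i.e. the clause (not z OR lit)
                clauses.append([-z, lit])
        clauses.append(selectors)  # at least one term must be selected
        return clauses

    # (x1 AND x2) OR (NOT x1 AND x3) becomes clauses over x1..x3 and selectors z4, z5
    print(dnf_to_cnf([[1, 2], [-1, 3]], 3))

If some term of the DNF is true we can set its selector true, and conversely a true selector forces all literals of its term; so the CNF is satisfiable exactly when the DNF is, with only a linear blow-up in size.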
_cs.28894
True or False? Say some data structure can perform $x$ operations in amortized $O(x)$ time. Then for a big enough $y$ it can perform $xy$ operations in worst case $O(xy)$ time. My attempt: $x$ operations in $O(x)$ amortized means $O(1)$ expected time for $1$ operation. Then for $xy$ operations it'd be $O(xy)$ amortized (and I think $O(x^2y)$ worst case). Therefore, the statement is incorrect. But the answer sheet says I'm wrong. Why?
If x operations cost O(x) amortized, then how much do xy operations cost?
algorithm analysis;amortized analysis
Amortized is not probabilistic - it is a worst-case guarantee on the total cost, not an expectation. If $x$ operations take amortized $O(x)$ time, then for a big enough $y$ a sequence of $xy$ operations is guaranteed to take $O(xy)$ time in total in the worst case (i.e., $O(1)$ per operation on average), even though a single operation in the sequence may itself take as much as $O(xy)$ time. https://stackoverflow.com/questions/200384/constant-amortized-time
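A standard worked example (my own addition, not from the original answer) is a dynamic array with doubling. Starting from capacity 1, a sequence of $n$ insertions triggers copies of sizes $1, 2, 4, \dots$ up to at most $n$, so the total work is at most $n + (1 + 2 + 4 + \dots + n) < 3n$, i.e. $O(1)$ amortized per insertion - a worst-case bound on the whole sequence, with no probability involved - even though the single insertion that triggers the last doubling costs $\Theta(n)$ by itself. Scaling the sequence length from $x$ to $xy$ scales this total bound to $O(xy)$ in exactly the same way.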
_scicomp.14401
I am having an issue with the implementation of NLOPT in Python. My objective is to minimize a somewhat complicated Maximum Likelihood function.My function is called mle and there are 6 parameters to estimate. Finding the gradient to this MLE is not trivial, so I decided to turn to a numerical gradient function:def numgrad(f, x, step=1e-6): numgrad(f: function, x: num array, step: num) -> num array Numerically estimates the gradient of a function f which takes an array as its argument. ary = len(x) curr = x * sp.ones((ary, ary)) next = curr + sp.identity(ary) * step delta = sp.apply_along_axis(f, 1, next) - sp.apply_along_axis(f, 1, curr) return delta / stepThen my implementation of NLOPT goes like this:def myfunc(x, grad): if grad.size > 0: grad = numgrad(mle, [x[0], x[1], x[2], x[3], x[4], x[5]], step=1e-14) return mle([x[0], x[1], x[2], x[3], x[4], x[5]])opt = nlopt.opt(nlopt.LD_SLSQP, 6)opt.set_lower_bounds([mmin, smin, ming, bmin, vmin, pmin]) #min bound for each of the param.opt.set_upper_bounds([mmax, smax, maxg, bmax, vmax, pmax])opt.set_min_objective(myfunc)opt.set_xtol_rel(1e-15)opt.maxeval=10000x = opt.optimize([x1, x2, x3, x4, x5, x6])minf = opt.last_optimum_value()print optimum at , x[0], x[1], x[2], x[3], x[4], x[5]print minimum value = , minfprint result code = , opt.last_optimize_result()Now the issue is this .... the minimization process goes wayyy tooo fast. In matlab, it takes approx 1 hour and here in Python 12 seconds ... I don't get the same results in Matlab using fmincon. My feeling is that the code does not recognize the opt.set_xtol_rel(1e-15) and opt.maxeval=10000 statements because even if I increase the number ... no change in the time process... Or the problem is elsewhere... what am I doing wrong?
Maximum function evaluation with NLOPT in Python
optimization;python
You should essentially never estimate the gradient numerically. You say the gradient is difficult to estimate. In general, if it is at all possible to get the gradient exactly, you should do so and use an appropriate gradient-based algorithm (NLopt has several; the one you're using should be fine). However, if you cannot get the gradient exactly, NLopt features several derivative-free algorithms, which you could use instead and expect to get better results. I think this is probably the easiest solution to your problem, and it would give you better results. The speed difference can easily be real, however. Differences in the optimization algorithm, and the fact that Python is generally faster than Matlab, could easily explain the difference. TL;DR: Change from nlopt.LD_SLSQP to nlopt.LN_BOBYQA. Hope this helps.
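As a rough sketch of that change (my own illustration - mle, the bound variables and the start point are taken from the question; also note that set_maxeval is the documented setter, so a plain attribute assignment like opt.maxeval = 10000 may simply not be picked up):

    import nlopt

    def objective(x, grad):
        # LN_BOBYQA is derivative-free, so grad is never filled in here
        return mle([x[0], x[1], x[2], x[3], x[4], x[5]])

    opt = nlopt.opt(nlopt.LN_BOBYQA, 6)
    opt.set_lower_bounds([mmin, smin, ming, bmin, vmin, pmin])
    opt.set_upper_bounds([mmax, smax, maxg, bmax, vmax, pmax])
    opt.set_min_objective(objective)
    opt.set_xtol_rel(1e-8)     # the 1e-15 in the question is at the limit of double precision
    opt.set_maxeval(10000)     # method call, not opt.maxeval = 10000
    x_opt = opt.optimize([x1, x2, x3, x4, x5, x6])
    print(opt.last_optimum_value(), opt.last_optimize_result())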
_unix.372463
I am here seeking help as I have a problem with Linux Mint Cinnamon 18.1. I used to be able to open any folder by right-clicking and choosing Open as Root, but it no longer works: the Open as Root entry is still there in the right-click menu, but nothing happens after clicking it. How can I fix this?
linux mint cinnamon 18.1 - open as root not working - how to fix?
linux;linux mint
null
_codereview.49220
I am trying to learn Clojure for some time. In my experience, it has been rather too easy to produce write-only code.Here is a solution to a simple problem with very little essential complexity. Input and output formats are extremely simple, too. Which means all complexity in it must be accidental. How to improve its legibility, intelligibility?Is the decomposition of the problem into functions all right?Also other specific problems:How to input/output numbers? Is there any benefit to use read-string instead of Double/parseDouble? How to format floating point numbers to fixed precision without messing with the default locale?How to avoid the explicit loop/recur, which is currently a translation of a while loop?Are there definitions that should/shouldn't have bee private/dynamic?ProblemYou start with 0 cookies. You gain cookies at a rate of 2 cookies per second [...]. Any time you have at least C cookies, you can buy a cookie farm. Every time you buy a cookie farm, it costs you C cookies and gives you an extra F cookies per second.Once you have X cookies that you haven't spent on farms, you win! Figure out how long it will take you to win if you use the best possible strategy.(ns cookie-clicker (:use [clojure.string :only [split]]) (:require [clojure.java.io :as io] [clojure.test :refer :all]));;See http://code.google.com/codejam/contest/2974486/dashboard#s=p1(defn parse-double [s] (java.lang.Double/parseDouble s))(defn parse-row [line] (map parse-double (split line #\s+)))(defn parse-test-cases [rdr] (->> rdr line-seq rest (map parse-row)))(def initial-rate 2.0)(defn min-time [c f x] (loop [n 0 ; no of factories used tc 0 ; time cost of factories built r initial-rate ; cookie production rate t (/ x r)] ; total time (let [n2 (inc n) tc2 (+ tc (/ c r)) r2 (+ r f) t2 (+ tc2 (/ x r2))] (if (> t2 t) t (recur n2 tc2 r2 t2)))))(java.util.Locale/setDefault (java.util.Locale/US))(defn ans [n t] (str Case # n : (format %.7f t))) (defn answers [test-cases] (map #(ans %1 (apply min-time %2)) (rest (range)) test-cases))(defn spit-answers [in-file] (with-open [rdr (io/reader in-file)] (doseq [answer (answers (parse-test-cases rdr))] (println answer))))(defn solve [in-file out-file] (with-open [w (io/writer out-file :append false)] (binding [*out* w] (spit-answers in-file))))(def ^:dynamic *tolerance* 1e-6)(defn- within-tolerance [expected actual] (< (java.lang.Math/abs (- expected actual)) *tolerance*))(deftest case-3 (is (within-tolerance 63.9680013 (min-time 30.50000 3.14159 1999.19990))))(defn -main [] (solve resources/cookie_clicker/B-large-practice.in resources/cookie_clicker/B-large-practice.out))
Cookie Clicker Alpha solution
clojure
I have to admit, the actual solving the problem component of this is a little over my head. But, I thought I'd try to answer your questions and give you some style/structure feedback, for what it's worth :)You can simplify your ns declaration like this:(ns cookie-clicker (:require [clojure.string :refer (split)] [clojure.java.io :as io] [clojure.test :refer :all]))(:require foo :refer (bar) does the same thing as :use foo :only (bar), and is generally considered preferable, especially as an alternative to having both :use and :require in your ns declaration)I think Double/parseDouble is a good approach to parsing doubles in string form. Integer/parseInt is usually my go-to for doing the same with integers in string form. This is just a hypothesis, but Double/parseDouble might be faster and/or more accurate than read-string because it's optimized for doubles.FYI, you can leave out the java.lang. and just call it as Double/parseDouble in your code. In light of that, you might consider getting rid of your parse-double function altogether and just using Double/parseDouble whenever you need it. The only thing is that Java methods aren't first-class in Clojure, so you would need to do things like this if you go that route:(defn parse-row [line] (map #(Double/parseDouble %) (split line #\s+)))(Personally, I still like that better, but you might prefer to keep it wrapped in a function parse-double like you have it. It's up to you!)I think needing to mess with the locale might be a locale-specific problem... I tried playing around with (format %.7f ... without changing my locale and it worked as expected. Granted, I'm in the US :)I think the legibility issues you're seeing might be related to having too many functions. You might consider condensing and renaming things and see if you like that better. I would re-structure your program so that you parse the data into the data structure at the top, something like this:(defn parse-test-cases [in-file] (with-open [rdr (io/reader in-file)] (let [rows (rest (line-seq rdr))] (map (fn [row] (map #(Double/parseDouble %) (split row #\s+))) rows))))(I condensed your functions parse-row, parse-test-cases and half of spit-answers into the function above)Then define the functions that do all the work like min-time, and then, at the end:(defn spit-answers [answers out-file] (with-open [w (io/writer out-file :append false)] (.write w (clojure.string/join \n answers)))(def -main [] (let [in resources/cookie_clicker/B-large-practice.in out resources/cookie_clicker/B-large-practice.out test-cases (parse-test-cases in) answers (map-indexed (fn [i [c f x]] (format Case #%d: %.7f (inc i) (min-time c f x))) test-cases)] (spit-answers answers out)))I came up with a few ideas above:In your answers function you use (map ... (rest (range)) (test-cases)) in order to number each case, starting from 1. A simpler way to do this is with map-indexed. 
I used (inc i) for the case numbers, since the index numbering starts at 0.I condensed (str Case # n : (format %.7f t))) into a single call to format.I used destructuring over the arguments to the map-indexed function to represent each case as c f x -- that way it's clearer that each test case consists of those three values, and you can represent the calculation as (min-time c f x) instead of (apply min-time test-case).As for your min-time function, I don't think loop/recur is necessarily a bad thing, and I often tend to rely on it in complicated situations where you're doing more involved work on each iteration, checking conditions, etc. I think it's OK to use it here. But if you want to go a more functional route, you could consider writing a step function and creating a lazy sequence of game states using iterate, like so:(note: I'm writing step as a letfn binding so that it can use arbitrary values of c, f and x that you feed into a higher-order function that I'm calling step-seq -- this HOF takes values for c, f and x and generates a lazy sequence of game states or steps.)(defn step-seq [c f x] (letfn [(step [{:keys [factories time-cost cookie-rate total-time result]}] (let [new-time-cost (+ time-cost (/ c cookie-rate)) new-cookie-rate (+ cookie-rate f) new-total-time (+ new-time-cost (/ x new-cookie-rate))] {:factories (inc factories) :time-cost new-time-cost :cookie-rate new-cookie-rate :total-time new-total-time :result (when (> new-total-time total-time) total-time)}))] (iterate step {:factories 0, :time-cost 0, :cookie-rate 2.0, :total-time (/ x 2.0), :result nil})))Now, finding the solution is as simple as grabbing the :result value from the first step that has one:(defn min-step [c f x] (some :result (step-seq c f x)))
_datascience.6570
I have a healthcare dataset. I have been told to look at a non-parametric approach to solve certain questions related to the dataset. I am a little bit confused about what a non-parametric approach is. Do they mean a density-plot-based approach (such as looking at the histogram)? I know this is a vague question to ask here. However, I don't have access to anybody else whom I can ask, and hence I am asking for some input from others in this forum. Any response/thought would be appreciated. Thanks and regards.
Non-parametric approach to healthcare dataset?
data mining
They are not specifically referring to a plot-based approach. They are referring to a class of methods that must be employed when the data is not normal enough, or the study not well-powered enough, to use regular statistics. Parametric and nonparametric are two broad classifications of statistical procedures, with loose definitions separating them: parametric tests usually assume that the data are approximately normally distributed; nonparametric tests do not rely on a normally distributed data assumption. Using parametric statistics on non-normal data could lead to incorrect results. If you are not sure that your data is normal enough or that your sample size is big enough (n < 30), use nonparametric procedures rather than parametric procedures. Nonparametric procedures generally have less power for the same sample size than the corresponding parametric procedure if the data truly are normal. Take a look at some examples of parametric and analogous nonparametric tests from Tanya Hoskin's Demystifying summary. Here are some summary references: another general table with some different information; Nonparametric Statistics; All of Nonparametric Statistics, by Larry Wasserman; an R tutorial; Nonparametric Econometrics with Python.
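As a concrete illustration (my own addition, using Python/SciPy purely as an example - the original answer does not prescribe a tool), here is a parametric test next to its nonparametric analogue on the same skewed sample:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.exponential(scale=1.0, size=25)   # skewed, small samples:
    b = rng.exponential(scale=1.5, size=25)   # normality is doubtful here

    # Parametric: the two-sample t-test assumes roughly normal data
    t_stat, t_p = stats.ttest_ind(a, b)

    # Nonparametric analogue: Mann-Whitney U makes no normality assumption
    u_stat, u_p = stats.mannwhitneyu(a, b, alternative="two-sided")

    print("t-test p =", t_p, " Mann-Whitney p =", u_p)

With data like this, the Mann-Whitney result is the safer one to report, which is exactly the parametric-vs-nonparametric trade-off described above.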
_unix.376183
In VMware Workstation, I have a VM (CentOS 7.2). Before, I had only one NIC for the VM; it used the NAT network mode. I configured the IP for that NIC in the VM, and it works fine. But then I added another NIC and rebooted the VM. I use ip a to show the state of the NICs. (Because I am remote to the host machine, I cannot copy the text from VMware Workstation, so I can only post a snapshot.) You can see there are two NICs; I added the second one successfully. But I cannot find the configuration file (ifcfg-eno33554984) under the /etc/sysconfig/network-scripts/ directory; it was not generated: EDIT: Note that if I add three NICs at the beginning, three ifcfg-* files are generated here.
In VMware, after I add a second NIC to the VM, why is ifcfg-eno33554984 not generated in the network-scripts/ directory?
centos;configuration;network interface;vmware
null
_webapps.37864
I have been given a link to a Google Docs form where I was supposed to enter some data. The URI looks something like this: https://docs.google.com/spreadsheet/formResponse?formkey=foo Is there any way I can see the responses already submitted by people using the 'foo' value (or maybe some other hack like that)? Also, any idea how Google Docs generates the keys?
How can we view the whole spreadsheet instead of just the form in Google forms?
google spreadsheets;google forms
If you were given the link but do not own the form, then no - responses are private unless the response spreadsheet is shared with you. If the response spreadsheet was shared with you, however, then you can just search for that document title in your Drive list, and it will appear as a normal shared document. No idea how Drive generates document keys, but they are different between the spreadsheet and the actual form, again to make sure that the responses are private unless explicitly shared with others.
_webmaster.60559
I'm considering using i18next for translating text. But then it suddenly struck me: if labels / text are retrieved through JavaScript, won't that affect SEO? And what about screen readers for visually impaired users?
Will i18next affect SEO? What about screen readers?
seo;javascript;language
To my knowledge, Googlebot reads JavaScript. Use the appropriate tags in every language version, as described in https://support.google.com/webmasters/answer/189077?hl=en <link rel=alternate href=http://example.com/de hreflang=de /><link rel=alternate href=http://example.com/bg hreflang=bg /><link rel=alternate href=http://example.com/en hreflang=en />
_webmaster.58013
I want to add www in front of a subdomain, e.g. www.subdomain.domain.com. My blogs are hosted on Blogger and I am using GoDaddy for the custom domains. I have HOST @ entries for 'domain' pointing where Blogger specified. The following subdomains are configured by adding CNAME aliases as follows: subdomain -> ghs.google.com www -> ghs.google.com For domain (including www.domain) I have one blog. For subdomain, I am pointing it to a separate blog using the above entries, and subdomain.domain.com works fine. I read articles on this issue and tried adding the following CNAME entry, but no luck: www.subdomain -> subdomain.domain.com How do I make www.subdomain.domain.com work?
How do I add 'www' before a subdomain, like www.subdomain.domain.com?
subdomain;domain registration
null
_unix.243515
I've downloaded the Raspbian image on this page. I'm trying to compile a kernel that can be used to boot the image within qemu.I downloaded the Linux kernel source from kernel.org and ran:make versatile_defconfigmake menuconfigI then added the following features to the kernel:PCI support (CONFIG_PCI)SCSI Device Support (CONFIG_SCSI)SCSI Disk Support (CONFIG_BLK_DEV_SD)SYM53C8XX Version 2 SCSI Support (CONFIG_SCSI_SYM53C8XX_2)The Extended 3 (ext3) filesystem (CONFIG_EXT3_FS)The Extended 4 (ext4) filesystem (CONFIG_EXT4_FS)I also loop mounted the disk image and:commented out /etc/ld.so.preloadadjusted /etc/fstab to use /dev/sda1 and /dev/sda2I then unmounted the image and attempted to start the machine with:qemu-system-arm \ -M versatilepb \ -m 256 \ -kernel linux-4.3/arch/arm/boot/zImage \ -hda 2015-09-24-raspbian-jessie.img \ -serial stdio \ -append root=/dev/sda2 rootfstype=ext4 rw console=ttyAMA0The kernel was able to mount the filesystem but it immediately ran into some trouble:Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004CPU: 0 PID: 1 Comm: init Not tainted 4.3.0 #1Hardware name: ARM-Versatile PB[<c001b5c0>] (unwind_backtrace) from [<c0017e18>] (show_stack+0x10/0x14)[<c0017e18>] (show_stack) from [<c0069860>] (panic+0x84/0x1ec)[<c0069860>] (panic) from [<c0025b98>] (do_exit+0x81c/0x850)[<c0025b98>] (do_exit) from [<c0025c5c>] (do_group_exit+0x3c/0xb8)[<c0025c5c>] (do_group_exit) from [<c002dfcc>] (get_signal+0x14c/0x59c)[<c002dfcc>] (get_signal) from [<c001bf28>] (do_signal+0x84/0x3a0)[<c001bf28>] (do_signal) from [<c0017a94>] (do_work_pending+0xb8/0xc8)[<c0017a94>] (do_work_pending) from [<c0014f30>] (slow_work_pending+0xc/0x20)---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004At first, I wondered if this wasn't related to SELinux. I tried booting the kernel with:selinux=0 enforcing=0...but it made absolutely no difference.What am I doing wrong? And what does this error mean?UpdatesI have also tried the following, with no luck:I tried compiling with and without CONFIG_VFP enabledI added CONFIG_DEVTMPFS and CONFIG_DEVTMPFS_MOUNTApplying this patch and enabling CPU_V6, CONFIG_MMC_BCM2835, & CONFIG_MMC_BCM2835_DMAUsing the gcc-linaro-arm-linux-gnueabihf-raspbian toolchainCompiling a simple C program with the toolchain and then passing its path to the kernel via init= works - leading me to believe there's a discrepancy between binary formatsfile <sample program>:ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), statically linked, for GNU/Linux 2.6.26, BuildID[sha1]=e5ec8884499c51b248df60aedddfc9acf72cdbd4, not strippedfile <file from the image>:ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=3e92423821f3325f8cb0ec5d918a7a1c76bbd72c, stripped`diff of ELF headerI compiled this simple C program with the toolchain:<path>/arm-linux-gnueabihf-gcc --static simple.c -o simple...and copied it to /root in the image, changing the init= boot parameter to /root/simple. This gives me the following when booting:Starting bash...Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004It seems to be choking on the execv() call.
Why can't the kernel run init?
linux kernel;arm;qemu;init
null
_softwareengineering.106850
In applications that I write at work, I often need to have an external properties/settings file so that certain parameters can be configured after the application is deployed to the end user. The file will usually be text or XML, and I will usually be implementing in C++ or Java. In the past, I have created Singleton classes to manage the injection of these properties/settings into my application. The class would be initialised with the path to a file, be hardcoded with the property keys it is looking for in the file, read the file in, and store all the attributes. Other classes in the application would then perform a call such as propertySingleton::getInstance().getMyParameter(). The more I read and learn about software engineering and design, the more this approach feels clumsy and inherently wrong. I was wondering whether anyone has had to perform similar tasks, and how they would approach this in a well-thought-out object-oriented fashion.
Injecting properties/settings into an application
design;object oriented
null
_webmaster.76884
I am configuring a lightweight content negotiation mechanism using Apache's mod_rewrite. The configuration should deliver several different data representations (for instance, HTML, XML, RDF/XML, and RSS, although I actually have to consider a few more in my application) on a base resource URL, depending on the Accept header of the request: # serve html on base url if requested via accept header RewriteCond %{HTTP_ACCEPT} text/html [OR] RewriteCond %{HTTP_ACCEPT} application/xhtml\+xml RewriteRule ^resource/(.*)$ view/html/$1.html [NC,R=303,L] # serve xml on base url if requested via accept header RewriteCond %{HTTP_ACCEPT} application/xml RewriteRule ^resource/(.*)$ view/xml/$1.xml [NC,R=303,L] # serve rdf on base url if requested via accept header RewriteCond %{HTTP_ACCEPT} application/rdf\+xml RewriteRule ^resource/(.*)$ view/rdf/$1.rdf [NC,R=303,L] # serve rss on base url if requested via accept header RewriteCond %{HTTP_ACCEPT} application/rss\+xml RewriteRule ^resource/(.*)$ view/rss/$1.rss [NC,R=303,L] # serve html as default response (keep at bottom) RewriteRule ^resource/(.*)$ view/html/$1.html [NC,R=303,L] This works fine as long as requests send simple Accept headers, but I run into trouble when mixed Accept headers are sent. In that case, my configuration does not respect any q-value given in the header, and I end up serving the first matching content type according to the (arbitrary) ordering of my RewriteRules. E.g., I incorrectly serve text/html for the following request: Accept: application/rdf+xml;q=0.5,text/html;q=.3 Is there any way I can make my configuration take the q-values of the Accept header into account? Any help is appreciated.
Mod-Rewrite content negotiation for mixed Accept header?
mod rewrite;apache2;semantic web
Use the built-in content negotiation functionality with a type map. You may need to tweak your filenames / URLs or use rewrite rules after applying the type map.
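A rough sketch of what such a type map could look like (my own illustration, not from the original answer - the file and variant names are hypothetical, and this assumes the type-map handler is enabled, e.g. with AddHandler type-map .var): a file resource.var next to the variants, containing

    URI: resource

    URI: resource.html
    Content-type: text/html; qs=0.8

    URI: resource.rdf
    Content-type: application/rdf+xml; qs=0.9

    URI: resource.xml
    Content-type: application/xml; qs=0.5

Apache then weighs the request's Accept q-values against the variants (the qs values express the server-side preference), which is exactly the q-value handling the mod_rewrite chain above lacks.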
_webmaster.69833
I am using a cloud hosting provider (Heroku) to host my webapp. Since I don't have access to a permanent file system, I am storing my sitemap.xml in Amazon S3. I wanted to know the SEO implications of the following 2 options for submitting the sitemap to search engines (Google & Bing) via their webmaster tools: 1) Create an endpoint on my domain, http://mydomain.com/sitemap.xml, that performs a 301 redirect to the S3-hosted sitemap, and provide the URL hosted on my domain to the search engines. This is the option I am currently using. It seems to work fine with Google, but I noticed a sitemap error with Bing - I am monitoring this, as I am not yet sure what the cause is. 2) Apparently, there is a way to do cross-domain sitemap submission, whereby I get the S3 URL approved by the search engine and can then directly submit the S3 URL as my sitemap. Also, I am currently pointing the sitemap entry in robots.txt to the sitemap URL hosted on my domain (not to S3). Is one of these methods preferred from an SEO perspective? Like I said, I am using option (1), but I want to be somewhat confident that the crawlers will be OK with the HTTP 301 that I'm using.
Can externally hosted sitemaps work with Google and Bing?
seo;google search console;sitemap;bing webmaster tools
null
_unix.9949
I placed the U-Boot loader and the kernel into the raw flash image. This image does not contain any root file system. (I copied the U-Boot and kernel images to the flash image using the dd command.) Now I have to change my kernel so that it starts my application at a particular address located in my flash image. How can I change the kernel to start my application on its own?
Kernel - Starting the application
linux;boot;embedded
null
_unix.92362
If I am doing something like the following - creating a temporary file: some process generating output > temp_file then cat temp_file; process substitution: cat <(some process generating output); another way: cat <<<(some process generating output) - I have some doubts regarding these: Is there any limit on the data output size of process substitution <() >() or variable expansion <<<()? Which among these is the fastest, or is there a way to do it faster? My ulimit command output is: bash-3.00$ ulimit -a core file size (blocks, -c) unlimited data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited open files (-n) 256 pipe size (512 bytes, -p) 10 stack size (kbytes, -s) 8480 cpu time (seconds, -t) unlimited max user processes (-u) 8053 virtual memory (kbytes, -v) unlimited
Creating temp file vs process substitution vs variable expansion?
linux;variable;process substitution
Bash process substitution in the form of <(cmd) and >(cmd) is implemented with named pipes if the system supports them. The command cmd is run with its input/output connected to a pipe. When you run e.g. cat <(sleep 10; ls) you can find the created pipe under the directory /proc/pid_of_cat/fd. This named pipe is then passed as an argument to the current command (cat). The buffer capacity of a pipe can be estimated with a tricky usage of the dd command, which sends NUL bytes from /dev/zero to the standard input of the sleep command (which never reads them). Since the process just sleeps for a while, the buffer fills up: (dd if=/dev/zero bs=1 | sleep 999) & Give it a second and then send the USR1 signal to the dd process: pkill -USR1 dd This makes the process print out its I/O statistics: 65537+0 records in 65536+0 records out 65536 bytes (66 kB) copied, 8.62622 s, 7.6 kB/s In my test case, the buffer size is 64 kB (65536 B). How do you use <<<(cmd) expansion? I'm aware that it's a variation of here documents, which is expanded and passed to the command on its standard input. Hopefully, I shed some light on the question about size. Regarding speed, I'm not so sure, but I would assume that both methods can deliver similar throughput.
_cs.76682
Let's consider the following situation. We have a finite alphabet $A$. Let $A = \{a_1, .., a_k\}$. We consider words over $A$ of length exactly $n$. I am trying to solve some problem, and I am going to: generate every word using a non-deterministic Turing machine, and, for every generated word, perform a computation $C$ that uses only constant space and linear time. So it seems that we have to remember (for a moment) the generated word. I mean the situation where we have to generate a word $w$ and then run the computation $C$ on $w$. The scheme of the Turing machine looks like: The question is: Is my Turing machine NPSPACE? I have a problem with thinking about space complexity when it comes to non-deterministic TMs.
Understanding of SPACE in non deterministic Turing Machines
turing machines;space complexity;nondeterminism
A nondeterministic Turing machine is a Turing machine that has a guessing mechanism. It accepts an input if there is a sequence of guesses that leads it to an accepting state. It rejects an input if all guesses lead it to a rejecting state.

The time complexity and space complexity of the Turing machine are defined in exactly the same way as for deterministic Turing machines:

The time complexity on inputs of size $n$ is the maximum number of steps that the machine executes before halting, over all inputs of size $n$ and all guesses.

The space complexity on inputs of size $n$ is the maximum number of tape cells that the machine uses, over all inputs of size $n$ and all guesses.

If your machine always uses only a polynomial amount of space, then it is in NPSPACE. Note that NPSPACE = PSPACE, a consequence of Savitch's theorem, and so you can convert it to a deterministic machine using polynomial space (the amount of space used could increase).
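Restating the two definitions above in symbols (just notation, nothing new): if $t_M(x,g)$ and $s_M(x,g)$ denote the number of steps and the number of tape cells used by the machine $M$ on input $x$ with guess sequence $g$, then $$\mathrm{time}_M(n) = \max_{|x|=n,\ g} t_M(x,g), \qquad \mathrm{space}_M(n) = \max_{|x|=n,\ g} s_M(x,g),$$ and the machine is in NPSPACE whenever $\mathrm{space}_M(n)$ is bounded by a polynomial in $n$.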
_softwareengineering.91130
I'm a nub programmer, using Python, and my current project is a chatbot for an IRC channel I reside in. I wish to make it capable of keeping conversations organized, primarily between itself and one other person.

Right now, I'm creating a conversation object when the bot is initially addressed. The object has the attributes peer (the other conversation member), topic (the basic topic of the conversation), log (a log of past messages to and from the peer), lastsent (super-simplified forms of the most recent 10 messages sent to the channel by the bot), and lastrecv (the last 10 messages sent to the channel by the peer, also super-simplified).

As messages are received, the bot checks the topic and runs through a list of expected replies. If one is matched, the bot chooses a response and sends it to the channel. It then updates the topic if needed, along with the bot's conversation dictionary. The bot has a conversation dictionary; the key is the user's nickname, and the value is the conversation object.

I feel this is unnecessarily excessive. I was wondering what some other approaches to keeping track of conversations are. Are there any simpler, easier approaches?
Chatbot Conversation Objects, your approach?
python;methodology
null
_unix.202218
I'm on a shared machine running CentOS 5.10 that I log onto using VNC from Windows 7. Our default and official shell is csh.

Every time I open a new terminal, I have three particular environment variables (related to the modules system) that are mysteriously set somewhere. I can't find them in .cshrc, nor in .login (which I don't have anyway), nor in /etc/csh.cshrc or /etc/csh.login, or anywhere else I can think of. Is there a way to trace what sources them?

Just to clarify: if I log onto the gateway machine using PuTTY, I don't face that issue.
csh: Terminal inherits environment variables from an unknown location
shell;terminal;environment variables;csh
null
_codereview.153513
I am writing some tests in Selenium WebDriver (on Node.js) and have made a custom function to check the CSS value of an iframe element. I'm a coding beginner.

The script tests an app where the user writes the image width they want (just putting in a number) and the image in an iframe should change width. This is tricky because one must switch iframes, wait for elements to become stale (as the new image width is loaded), then grab the new element and check its CSS value.

Often, the tests are flaky because sometimes it checks before the value has changed, etc. I finally wrote a function that passed 120 times out of 120 times.

// function looks for 'el', then switches iframe and extracts the desired cssValue, then compares it to the 'value' we expect
Page.checkCssValue = function (el, cssValue, value) {
    // find function, the '0' represents the iframe index
    var newEl = this.find(el, 0);
    return newEl.getCssValue(cssValue).then(function (result) {
        if (result !== value) {
            console.log(result + " and " + value + " Do not Match.");
            return Page.checkCssValue(el, cssValue, value);
        } else {
            console.log(result + " and " + value + " Do Match!");
            return result;
        }
    });
};

But I'm not sure if this is considered bad programming and if a while loop would be better?
Recursively checking the css class of an iFrame Element
javascript;node.js;selenium;webdriver
Code structure-wise, the recursive pattern in JavaScript/Selenium and Protractor code is a pretty common one. The biggest problem here is that you don't have a recursive-cycle exit condition - if result never becomes equal to value, you'll eventually get a recursive call stack size overflow error. This is a negative case for you and would probably mean a test failure, but, in the world of end-to-end UI tests, you have to be as specific in your test failures as possible (the difficulty in finding the root cause of a test failure is one of the reasons we should generally write more unit tests as opposed to end-to-end tests, according to the Google Testing Pyramid).

A better approach to tackle flakiness would be to use Explicit Waits, which are designed to continuously execute a function until it evaluates to true, or a timeout is reached:

this.wait( condition, opt_timeout, opt_message ) → Thenable

Schedules a command to wait for a condition to hold. The condition may be specified by a Condition, as a custom function, or as any promise-like thenable.

For a Condition or function, the wait will repeatedly evaluate the condition until it returns a truthy value. If any errors occur while evaluating the condition, they will be allowed to propagate. In the event a condition returns a promise, the polling loop will wait for it to be resolved and use the resolved value for whether the condition has been satisfied. Note the resolution time for a promise is factored into whether a wait has timed out.

Here is how you can apply wait() in your case:

// function looks for 'el', then switches iframe and extracts the desired cssValue, then compares it to the 'value' we expect
Page.checkCssValue = function (el, cssValue, value) {
    // find function, the '0' represents the iframe index
    var newEl = this.find(el, 0);

    // wait for the desired CSS value
    var timeout = 5000; // in milliseconds
    this.driver.wait(function () {
        return newEl.getCssValue(cssValue).then(function (result) {
            return result === value;
        });
    }, timeout, "CSS value '" + cssValue + "' has not become equal to '" + value + "'.");

    return newEl.getCssValue(cssValue);
};

where this.driver is your Selenium WebDriver instance and 5000 is a timeout value in milliseconds.
_codereview.26410
I am required to read the following text from a keyboard (stdin). Please note that it will be entered by the user from the keyboard in this format only. #the total size of physical memory (units are B, KB, MB, GB) 512MB 2 #the following are memory allocations { abc = alloc(1KB); { y_ = alloc(128MB); x1= alloc(128MB); y_ = alloc(32MB); for (i = 0; i < 256; i++) abc[i] =alloc(512kB); x1 = alloc(32MB); x2 = alloc(32MB); x3 = alloc(32MB); x1.next = x2, x2.next = x3, x3.next = x1; } abc = alloc(256MB); }A line beginning with the # sign is considered a comment and is ignored.The first two allocations are physical memory size and number of generations.A global bracket will be opened and it may be followed by a line calledabc = alloc(1KB);where abc is the object name and 1KB is the memory size allocated.x1.next = x2, where x1 points to x2.The for loop is entered in this format and it can have a same-line command or can have nested for loops.for (i = 0; i < 256; i++) abc[i] =alloc(512kB);I have the following code that somewhat takes care of this. I want to know how to improve on it.#include <iostream>#include <algorithm>#include <string>#include <iomanip>#include <limits>#include <stdio.h>#include <sstream>using namespace std;using std::stringstream;string pMem,sGen, comment,val,input,input_for,id_size,id,init_str1, init_str2, inc_str, id_dummy,s_out,sss, id_dummy1;int gen=0, pMem_int=0,i=0, gBrckt =0,cBrckt=0, oBrckt=0, id_size_int,v1,v2, for_oBrckt=0,for_cBrckt=0,y=0, y1=0, g=0;unsigned long pMem_ulong =0, id_size_ulong;char t[20], m[256], init1[10],init2[10],inc[10];unsigned pos_start, pos,pos_strt=0,pos_end=0;string extract(string pMem_extract);unsigned long toByte(int pMem_int_func, string val);void commentIgnore(string& input);void func_insert();void func_insert_for();stringstream out;void commentIgnore_for(string& input_for);int main() { /* Reading the input main memory and num of generations */ /* Ignoring comment line */ cin >> pMem; if(pMem == #) { cin.clear(); pMem.clear(); getline(cin,comment); cin >> pMem; } if(pMem == #) { cin.clear(); pMem.clear(); getline(cin,comment); cin >> pMem; } if(pMem == #) { cin.clear(); pMem.clear(); getline(cin,comment); cin >> pMem; } /* Reading input generations */ cin>> sGen; if(sGen == #) { cin.clear(); sGen.clear(); getline(cin,comment); cin >> sGen; } if(sGen == #) { cin.clear(); sGen.clear(); getline(cin,comment); cin >> sGen; } if(sGen == #) { cin.clear(); sGen.clear(); getline(cin,comment); cin >> sGen; } /* Convert sGen and physical memory to int and report error if not a number */ gen = atoi(sGen.c_str()); if(gen ==0) { cerr << Generation must be a number<<endl; exit(0); } pMem_int = atoi(pMem.c_str()); // cout<< gen<< <<pMem_int<<endl; /* Now that the number from pMem is removed, get its unit B,MB,KB */ extract(pMem); /* returns val(string) */ /* convert the given physical memory to Byte. 
input: pMem_int*/ toByte(pMem_int, val); /* return(pMem_ulong)*/ // move pMem_ulond to another location to keep address intact /* read rest of the inputs */ /* Ignore comment lines before the global bracket */ cin >> input; if(input == #){ cin.clear(); input.clear(); getline(cin,comment); cin >> input; } if(input == #){ cin.clear(); input.clear(); getline(cin,comment); cin >> input; } if(input == #){ cin.clear(); input.clear(); getline(cin,comment); cin >> input; } if(input.compare({) ==0) gBrckt=1; else { cerr<< Syntax error\n; exit(0); } /* Clearing the input stream for next input */ cin.ignore(numeric_limits<streamsize>::max(), '\n'); cin.clear(); input.clear(); //cout<<input: <<input<<endl; while( getline(cin,input)) { if(input == CTRL-D) break; commentIgnore(input); //cout<<inputloop: <<input<<endl; /* If input = '{' or '}'*/ if(input.compare({) ==0) oBrckt = oBrckt + 1; if (input.compare(}) ==0) cBrckt = cBrckt + 1; if (((input.find(alloc))!= string::npos) && (input.find(alloc) < input.find(for))) { func_insert(); //call the allocate function here with name: id, size: id_size_ulong } if ((input.find(for)) != string::npos) { sscanf(input.c_str(), for (%s = %d; %s < %d; %[^)]), init1, &v1, init2, &v2, inc); init_str1 = init1, init_str2 = init2, inc_str = inc; cout<<init1<< =<< v1<< <<init_str1<< < << v2<< << inc_str<<endl; cout << input <<endl; if(init_str1 != init_str2) { cerr << Error!\n; exit(0); } if ((input.find(alloc))!= string::npos) { // unsigned pos = (input.find(alloc)); if((input.find(;)) != string::npos) { pos_start = (input.find())+1); string alloc_substr = input.substr(pos_start); cout<<Substring alloc: << alloc_substr<<endl; func_insert(); //call the allocate function here with name: id, size: id_size_ulong } else { cerr << ERROR: SYNTAX\n; exit(0); } } // cin.ignore(); while(getline(cin,input_for)) { commentIgnore_for(input_for); if ((input_for.find({) != string::npos)) { pos = input_for.find({); for_oBrckt = for_oBrckt+1; string for_brckt = input_for.substr(pos,pos); cout<< Found: << for_oBrckt<<endl; } if ((input_for.find(}) != string::npos)) { pos = input_for.find(}); for_cBrckt = for_cBrckt+1; string for_brckt = input_for.substr(pos,pos); cout<< Found: << for_cBrckt<<endl; } if (((input_for.find(alloc))!= string::npos) && (input_for.find(alloc) < input_for.find(for))) { func_insert_for(); //call the allocate function here with name: id, size: id_size_ulong } if(for_oBrckt == for_cBrckt) break; } cout<<out of break<<endl; } if (((input.find(.next))!= string::npos) && (input.find(.next) < input.find(for))) { func_insert(); //call the allocate function here with name: id, size: id_size_ulong } if(((cBrckt-oBrckt)) == gBrckt) break; }}/*---------------------- Function definitions --------------------------------*//* Function to extract the string part of physical memory */string extract(string pMem_extract) { i=0; const char *p = pMem_extract.c_str(); for(i=0; i<=(pMem_extract.length()); i++) { if (*p=='0'|| *p=='1'|| *p=='2'|| *p=='3'|| *p =='4'|| *p=='5'|| *p=='6'|| *p=='7'|| *p=='8'|| *p=='9') *p++; else { val = pMem_extract.substr(i); return(val); } }}/* Convert the physical memory to bytes. 
return(pMem_ulong);*/unsigned long toByte(int pMem_int_func, string val){ if (val == KB) pMem_ulong = (unsigned long) pMem_int_func * 1024; else if (val == B) pMem_ulong = (unsigned long) pMem_int_func; else if (val == GB) pMem_ulong = (unsigned long) pMem_int_func * 1073741824; else if (val == MB) pMem_ulong = (unsigned long) pMem_int_func * 1048576; else { cerr<<Missing the value in memory, B, KB, MB, GB\n; exit(0); } return(pMem_ulong);}/*Ignoring comment line*/void commentIgnore(string& input){ unsigned found = input.find('#'); if (found!=std::string::npos) input= input.erase(found); else return; return;}void func_insert() { sscanf(input.c_str(), %s = alloc(%[^)]);, t, m); id =t; id_size =m; cout<<Tag: <<id << Memory: <<id_size<<endl; extract(id_size); /* Separates B,MB,KB and GB of input, returns val*/ id_size_int = atoi(id_size.c_str()); /* Convert object size to B */ toByte(id_size_int, val); /* return(pMem_ulong) */ id_size_ulong = pMem_ulong;}void func_insert_for() { sscanf(input_for.c_str(), %s = alloc(%[^)]);, t, m); id =t; id_size =m; if(!((id.find([)) && (id.find(])) != string::npos)) { cout<<Tag: <<id << Memory: <<id_size<<endl; extract(id_size); /* Separates B,MB,KB and GB of input, returns val*/ id_size_int = atoi(id_size.c_str()); /* Convert object size to B */ toByte(id_size_int, val); /* return(pMem_ulong) */ id_size_ulong = pMem_ulong; // allocate here return; } else { if(inc_str.find(++)) y1 =1; if(inc_str.find(=)) { sss = inc_str.substr(inc_str.find(+) +1); y1 = atoi(sss.c_str()); cout<<y1:<<y1<<endl; } pos_strt = id.find([); pos_end = id.find(]) -1; cout<<Positions start and ebd: << pos_strt<<pos_end<<endl; id_dummy = id.substr(0,pos_strt); id = id_dummy; cout<<Tag: <<id_dummy << Memory: <<id_size<<endl; extract(id_size); /* Separates B,MB,KB and GB of input, returns val*/ id_size_int = atoi(id_size.c_str()); /* Convert object size to B */ toByte(id_size_int, val); /* return(pMem_ulong) */ id_size_ulong = pMem_ulong; //allocate here cout<<v1: << v1 << << v2<<endl; // g = 0; for(y = v1; y < v2; y= y+y1) { // allocate here } } return;}void commentIgnore_for(string& input_for){ unsigned found = input_for.find('#'); if (found!=std::string::npos) input_for= input_for.erase(found); else return; return;}
Reading input from keyboard
c++;strings;stream
null
_datascience.9222
I am trying to understand how the shape of the image changes after deconvolution. I am trying to understand the example code of a convolutional autoencoder from neon.

layers = [Conv((4, 4, 8), init=init_uni, activation=Rectlin()),
          Pooling(2),
          Conv((4, 4, 32), init=init_uni, activation=Rectlin()),
          Pooling(2),
          Deconv(fshape=(3, 3, 8), init=init_uni, strides=2, padding=1),
          Deconv(fshape=(3, 3, 8), init=init_uni, strides=2, padding=1),
          Deconv(fshape=(4, 4, 1), init=init_uni, strides=2, padding=0)]

The input shapes and output shapes of each layer are as follows:

Convolution Layer 'ConvolutionLayer': 1 x (28x28) inputs, 8 x (25x25) outputs, padding 0, stride 1
Pooling Layer 'PoolingLayer': 8 x (25x25) inputs, 8 x (12x12) outputs
Convolution Layer 'ConvolutionLayer': 8 x (12x12) inputs, 32 x (9x9) outputs, padding 0, stride 1
Pooling Layer 'PoolingLayer': 32 x (9x9) inputs, 32 x (4x4) outputs
Deconvolution Layer 'DeconvolutionLayer': 32 x (4x4) inputs, 8 x (7x7) outputs
Deconvolution Layer 'DeconvolutionLayer': 8 x (7x7) inputs, 8 x (13x13) outputs
Deconvolution Layer 'DeconvolutionLayer': 8 x (13x13) inputs, 1 x (28x28) outputs

I understand how the shapes change after convolution ('valid'). (Thanks to http://cs231n.github.io/convolutional-networks/)

How does the stride affect the size of the matrix when deconvolution (full convolution) is used?
How does strided deconvolution work?
deep learning;convnet;autoencoder
null
_unix.270349
I have over 400 lines of HTML containing this code for images:

<a class='gallery' href="galimages/boards/board34.jpg" alt="board large"><image src="galimages/boards/thumbs/34.jpg" alt="board thumb"></a>

The first lot are board images and go from number 34 to 160. Is there a way to programmatically number them, since each line of code is identical except for the numbers? I am on CentOS 7 and I normally use the vim editor.
sequentially number a line of linked images with vim or other?
vim
Vim solution

Some suggestions here. I'd create the list of numbers, then substitute the rest of the string around them. I find this strategy easier, since you'd want two of each number. For example, in an empty document:

:put =range(34,160)
:%s,\(.*\),<a class='gallery' href="galimages/boards/board\1.jpg" alt="board large"><image src="galimages/boards/thumbs/\1.jpg" alt="board thumb"></a>

N.B. put creates an empty line on the first line, so you'll have to delete that manually.

Explanation:

:put =range(34,160): Create a range of numbers from 34 to 160, one on each line. As noted, this actually starts the document with a blank line, so manually delete it now or later.
:%s,FOO,BAR: Over the whole document (%), do a search and replace (s), replacing FOO with BAR.
FOO: \(.*\). Replace the whole line (.*), but store the contents (the number) in a capturing group, i.e. \(...\).
BAR: Replace with the string as required, using the number in two places (\1), to create the final lines.

Shell solution

You can use a similar strategy in the shell without using vim:

$ seq 34 160 | sed 's,\(.*\),<a class='\''gallery'\'' href="galimages/boards/board\1.jpg" alt="board large"><image src="galimages/boards/thumbs/\1.jpg" alt="board thumb"></a>,'

Explanation

seq 34 160: Create a range of numbers from 34 to 160, one on each line.
sed: substitute as above. N.B. since I quote the sed argument with ', this script escapes the in-line 's as '\''.
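For completeness, the same idea can also be written as a plain shell loop with printf doing the templating instead of sed (my own variant, not from the answer above; boards.html is just an example output file name):

for i in $(seq 34 160); do
    # the format string uses '\'' to embed literal single quotes around gallery
    printf '<a class='\''gallery'\'' href="galimages/boards/board%s.jpg" alt="board large"><image src="galimages/boards/thumbs/%s.jpg" alt="board thumb"></a>\n' "$i" "$i"
done > boards.html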
_softwareengineering.325306
I am going to develop an application which will be a web application as well as a mobile (Android / iOS / Windows) application. The database in this application will be managed by Hibernate. Also, as it is a cross-platform application, a web service will be used. What I know so far is:

HIBERNATE:
POJO files (the getter-setter ones which will create database tables)
Model (the Java class which will interact with the database)
Controller (basically a servlet which will get data from the view [JSP], set it in a POJO object and pass this object to the Model for any of the CRUD operations)
View (the JSP pages)

REST WEB SERVICE:
Web service implementation class, which has web methods that can be called by URL from the client and can return JSON or XML format data.

So now my question is: How do I integrate these two? Should I put all my POJOs and Model files into the web service? If not, what should I do in this situation? If yes, how do I do that (a simple example)?
How do I integrate hibernate and REST web service in java?
java;rest;web services;hibernate
null
_unix.119232
My distro is Fedora 17 GNOME 64-bit, and the wireless adapter is an Edimax EW-7612UAn V2. My machine is a desktop. I have never used wireless on this computer, and it is not long ago that I installed this operating system. I installed wireless on another computer with Fedora 17 a few years ago, but I don't remember how to set it up.

There is no wireless showing up anywhere, and I couldn't set it up with Network Connections because it didn't see the adapter, I think.

This is what I've done: I've built wpa_supplicant; it had one error, but I fixed it. The driver won't build, I think - the only directions were for building wpa_supplicant, but now I found that there is a driver folder too, with a makefile. I can't build that one; it says that a folder is missing. This is from the build file on their website. I've done everything in the readme file from the vendor, but no wireless is showing up.

sudo lshw -c network -sanitize

*-network
   description: Ethernet interface
   product: 82573E Gigabit Ethernet Controller (Copper)
   vendor: Intel Corporation
   physical id: 0
   bus info: pci@0000:02:00.0
   logical name: p1p1
   version: 03
   serial: [REMOVED]
   size: 100Mbit/s
   capacity: 1Gbit/s
   width: 32 bits
   clock: 33MHz
   capabilities: pm msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
   configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=2.2.14-k duplex=full firmware=1.0-7 ip=[REMOVED] latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
   resources: irq:42 memory:d0080000-d009ffff memory:d0000000-d007ffff ioport:4000(size=32)

After I took the extension cord off the adapter, the system could see it:

   description: Wireless interface
   physical id: 1
   bus info: usb@3:1
   logical name: wlan0
   serial: [REMOVED]
   capabilities: ethernet physical wireless
   configuration: broadcast=yes driver=rtl8192cu driverversion=3.9.10-100.fc17.x86_64 firmware=N/A link=no multicast=yes wireless=IEEE 802.11bgn

But it says that the hardware is disabled in the Network Manager.
Trouble setting up wireless fedora 17
fedora;wifi
As I suggested in this similar Q&A titled: No wired ethernet connection, you want to start at the bottom of the stack when debugging networking issues. Use the following command to confirm that your WiFi NIC has a driver associated with it.

$ sudo lshw -c network -sanitize
  *-network
       description: Wireless interface
       product: Centrino Wireless-N 1000 [Condor Peak]
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:03:00.0
       logical name: wlp3s0
       version: 00
       serial: [REMOVED]
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
       configuration: broadcast=yes driver=iwlwifi driverversion=3.12.11-201.fc19.x86_64 firmware=39.31.5.1 build 35138 ip=[REMOVED] latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
       resources: irq:44 memory:f2400000-f2401fff

Pay special attention to the configuration: line, looking for the portion that shows driver=....
_cstheory.32267
Suppose we are given a set of $n$ boolean variables $x_1,\ldots,x_n$ and a set of $m$ functions $y_1,\ldots,y_m$, where each $y_i$ is the XOR of a (given) subset of these variables. The goal is to compute the minimum number of XOR operations you need to perform in order to compute all of the functions $y_1,\ldots,y_m$.

Note that the result of an XOR operation, say $x_1 \oplus x_2$, might be used in the computation of multiple $y_j$'s but is counted as one operation. Also, note that it might be useful to compute the XOR of a much larger collection of $x_i$'s (larger than any $y_i$ function, e.g. the XOR of all $x_i$'s) in order to compute the $y_i$'s more efficiently.

Equivalently, suppose we have a binary matrix $A$ and a vector $X$, and the goal is to compute the vector $Y$ such that $AX=Y$, where all operations are done in GF(2), using the minimum number of operations. Even the case where each row of $A$ has exactly $k$ ones (say $k=3$) is interesting.

Does anybody know about the complexity (hardness of approximation) of this question?

Mohammad Salavatiopur
smallest circuit size using XOR gates
circuit complexity;approximation hardness;approximation;matrix product
This is NP-hard. See:

Joan Boyar, Philip Matthews, René Peralta. Logic Minimization Techniques with Applications to Cryptology. http://link.springer.com/article/10.1007/s00145-012-9124-7

The reduction is from Vertex Cover and is very nice. Given a graph $(\{1,\ldots,n\},E)$ with $m=|E|$, define an $m \times (n+1)$ matrix $A$ as: $A[i,j] = 1$ if $j < n+1$ and $(i,j) \in E$, and $A[i,n+1] = 1$. In other words, given $n+1$ variables $x_1,\ldots,x_{n+1}$, we want to compute the $m$ linear forms $x_i+x_j+x_{n+1}$ for all $(i,j) \in E$.

A little thought shows that there is an XOR circuit for $A$ with gates of fan-in two computing the linear transformation $A$ with only $m+k$ gates, where $k$ is the optimal vertex cover for the graph. (First compute $x_{i'} + x_{n+1}$ for all $i'$ in the vertex cover, using $k$ operations. The linear forms are then all computable in $m$ more operations.) It turns out that this is also a minimum-size circuit!

The proof that the reduction is correct is not so nice. I would love to see a short proof that this reduction is correct.
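As a concrete toy illustration of the reduction (my own example, not from the paper): take the triangle graph on vertices $\{1,2,3\}$, so $n=3$, $m=3$, and the minimum vertex cover has size $k=2$ (e.g. $\{1,2\}$). The linear forms to compute are $x_1+x_2+x_4$, $x_1+x_3+x_4$, and $x_2+x_3+x_4$. Using the cover, first compute $t_1 = x_1+x_4$ and $t_2 = x_2+x_4$ ($k=2$ gates); then the three forms are obtained as $t_1+x_2$, $t_1+x_3$, and $t_2+x_3$ ($m=3$ more gates), for $m+k=5$ gates in total.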
_webmaster.82563
We have an eCommerce site, and we have had rich snippets implemented (in JSON-LD) since January/February this year (2015). I have read multiple sources (including the Google documentation) and there seem to be three possibilities why they are not showing:

Not enough time has passed (4-12 weeks seems to be the common time quoted)
The markup is wrong
Google has decided not to show the data

I can rule out the first given the amount of time. The second I am fairly confident is not the case, as I have used multiple testing tools. As for the third, well, that's anyone's guess (although the study here suggests that MOST shops should be successful).

A strange note is that the rich snippets do not show on a regular search for our ranking keywords (we rank #1 for a few and page 1 for the majority). However, the rich snippets DO show when we search for a ranking keyword + site:www.fridgefreezerdirect.co.uk, as shown here: [screenshot omitted]

I have done this in an incognito window in the browser and using a VPN, with the same results. Can anyone suggest anything we can do, or reasons this may be happening?
Rich Snippets not working (and working at the same time?)
google search;serps;rich snippets;schema.org
We have been having very similar issues with our website. As with yours, we have waited beyond that time period (6 months in fact), and all the testing tools show the markup as valid.

The third bullet point is most likely the issue. The reasons Google outlines are (in the form of answering a question):

Q: Why doesn't my site show rich snippets? I added everything and the test tool shows it's ok.

A: Google does not guarantee that Rich Snippets will show up for search results from a particular site even if structured data is marked up and can be extracted successfully according to the testing tool. Here are some reasons that marked-up pages might not be shown with Rich Snippets:

- The marked-up structured data is not representative of the main content of the page or potentially misleading.
- Marked-up data is incorrect in a way that the testing tool was not able to catch.
- Marked-up content is hidden from the user.
- The site has very few pages (or very few pages with marked-up structured data) and may not be picked up by Google's Rich Snippets system.

(Source - http://sites.google.com/site/webmasterhelpforum/en/faq-rich-snippets#noshow)

Assuming you don't make any of those mistakes, the Google algorithm has just decided not to show your markup. The appearance of the rich snippets when using the site: operator also suggests this: when using the site: operator, Google doesn't actually factor the ranking algorithm into generating and displaying the results. Therefore, if your rich snippets show with the site: operator, the search engine can pick up the rich snippets, but the algorithm is preventing them from showing.

Unfortunately, unless you have a manual action in your Webmaster Tools (now Search Console) about the rich snippets, you can't do anything more to directly rectify the situation unless the algorithm changes. You can try reaching out on the Google Product Forums to see if you can get someone there to have a look at it.
_unix.67909
I have Windows 8 and need to dual boot Ubuntu. I made a new partition from the Windows disk manager. My machine is a Dell Inspiron 15R-5537 laptop with Windows 8, and I tried to install the latest version of Ubuntu, 16.04.

My machine doesn't allow making more than 4 partitions, so when I shrink space for the new partition I get unallocated space rather than free space. But when I boot Ubuntu and choose Installation Type: Something else, I can't select the unallocated space, which is the newly shrunk partition. The Add (+) option is disabled when I select the unallocated space, like the following: [screenshot omitted]

So I can't install Ubuntu, because I can't select and add partitions for it; my installation is stopped at this point. I'm trying to follow How do I dual boot Ubuntu with Windows 8 in a different partition?
Dual boot Ubuntu with windows 8
linux;ubuntu;dual boot
null
_unix.165572
I wanted to try Sage Math as a free alternative to MATLAB. I installed it from AUR, and it works in the terminal, but I can't access it via the browser. I tried to Google it, but had no luck.

~> sage
Sage Version 6.3, Release Date: 2014-08-10
Type notebook() for the browser-based notebook interface.
Type help() for help.
sage: notebook()
The notebook files are stored in: sage_notebook.sagenb
Open your web browser to http://localhost:8080
Executing twistd --pidfile=sage_notebook.sagenb/sagenb.pid -ny sage_notebook.sagenb/twistedconf.tac
/opt/sage/local/lib/python2.7/site-packages/Crypto/Util/number.py:57: PowmInsecureWarning: Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.
  _warn("Not using mpz_powm_sec. You should rebuild using libgmp >= 5 to avoid timing attack vulnerability.", PowmInsecureWarning)
2014-11-02 19:29:59+0100 [-] Log opened.
2014-11-02 19:29:59+0100 [-] twistd 13.2.0 (/opt/sage/local/bin/python 2.7.8) starting up.
2014-11-02 19:29:59+0100 [-] reactor class: twisted.internet.epollreactor.EPollReactor.
2014-11-02 19:29:59+0100 [-] QuietSite starting on 8080
2014-11-02 19:29:59+0100 [-] Starting factory <__builtin__.QuietSite instance at 0x7f4b7f7ab830>

It opens Chromium at localhost:8080, but it gives me a Connection Refused error. I also tried this in Firefox, with the same results. There's some odd error above, but it doesn't look like it's related.

I'm running up-to-date Arch Linux 64-bit. I'll be grateful for any ideas to get this working.
Sage Math browser interface not working
arch linux;python
null
_computerscience.1998
In a WebGL pixel shader, all functions are inlined as I understand it; however, you can have parameters that are marked as in versus being inout, meaning that their value can change but the value won't persist outside of the function call.

Does this mean that the shader must make a copy of the value for the function to work with when it is an in value? Are shader compilers/optimizers smart enough to know when they don't need to make a copy, or is it best to just mark all parameters as inout and make sure not to modify the ones you don't want modified, if performance is the primary concern?

Thanks!
Cost of parameter passing in webgl pixel shaders?
webgl;pixel shader;efficiency
My experience working with shader compiler stacks a few years back is that they are extremely aggressive, and I doubt you will see any perf difference, but I would suggest testing as much as you can.

I would generally recommend providing the compiler (and human readers) with more information where the language allows it, marking parameters according to their usage. Treating an in-only parameter as in/out is more error-prone for humans than compilers.

Some detail: shaders run almost entirely using registers for variables (I worked with architectures that supported up to 256 32-bit registers) - spilling is hugely expensive. Physical registers are shared between shader invocations - think of this as threads in a HyperThread sense sharing a register pool - and if the shader can be compiled to use fewer registers, there's greater parallelism. The result is that shader compilers work very hard to minimize the number of registers used - without spilling, of course. Thus inlining is common, since it's desirable to optimize register allocation across the whole shader anyway.
_unix.228216
I have a directory which has a folder for every day, and every folder has 1000s of images in it. I want to archive folders older than 30 days into an archive folder.

I tried the following and it messed everything up - it copied all the image files into the archive folder instead of the date folders:

sudo find /home/lanein1/AshtonRPOUT/ -type f -mtime +30 -exec mv '{}' /home/lanein1/AshtonRPOUT/Arch/ \;

My script moved all the images into Arch individually instead of keeping them in their folders.
moving folder older than 30 days to another folder
ubuntu
null
_webapps.12213
I'm trying to make my kids' (ages 9.5 and 9.5) Gmail accounts safer. Is there a way to limit (filter, etc.) inbound email to only email from folks on their contact list?
How to limit inbound Gmail (ideally) or other free web email to only from contact list (to make it kid-safe)
gmail;outlook.com
Thanks to the tip from Al Everett, I poked around on Hotmail:

1. Bring up your Hotmail window.
2. Click on the Options drop-down menu in the upper right corner.
3. Choose More Options.
4. Choose Filters and Reporting.
5. Select the second option under Junk Email Filter: Exclusive.
_unix.322864
Errata: Similar questions about this have been asked, but after searching for a few days there appears to be no answer to this specific scenario.

Description of the problem: The second line in the following bash script triggers the error:

#!/bin/bash
sessionuser=$( ps -o user= -p $$ | awk '{print $1}' )
print $sessionuser

Here is the error message:

Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/%{ <-- HERE (.*?)}/ at /usr/bin/print line 528.

Things I have tried: I have tried every combination of single quotes, backticks, double quotes, and spacing I could think of, both inside and outside the $() command output capture. I have tried using $( exec ... ) where ... is the command being attempted here. I have read up on bash, and searched these forums and many others, and nothing seems to illuminate why this error message is happening or how to work around it.

If the suggestion given in the error message is followed like this:

sessionuser=$( ps -o user= -p 1000 | awk '\{print $1}' )

it results in the following error message combined with the previous one:

awk: cmd. line:1: \{print $1}
awk: cmd. line:1:  ^ backslash not last character on line
Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/%{ <-- HERE (.*?)}/ at /usr/bin/print line 528.

The message refers to line 528 in /usr/bin/print. Here is that line:

$comm =~ s!%{(.*?)}!$_='$ENV{$1}';s/\`//g;s/\'\'//g;$_!ge;

Rationale for my bash script: The string $USER can be rewritten and is therefore not necessarily reliable. The command whoami will return different results depending on whether or not privileges have been elevated for the current user. As such, there is a need for reliably obtaining the current session user's name for portability of scripting, because I am probably not going to keep the same user name forever and would like my scripts to continue working regardless of who I have logged in as.

All of that is because user files are being backed up that have huge directory structures and many files. Every once in a while a file with root ownership and permissions will end up in that backup stack for that user. There are lots of reasons why this happens: sometimes it's just because that user backed up a wallpaper or a theme they like from the system directory structure, sometimes it's because a project was compiled by that user and some of its directories or files needed to be set to root ownership and permissions for it to function in some way, and other times it may be due to some other strange unaccounted-for thing.

I understand that rsync might be able to handle this problem, but I'd like to understand how to tackle the "Unescaped left brace" in a Bash script problem first.

[UPDATE 01]: Some information was missing from my original post, so I'm adding it here.

Here are the relevant system specs:

OS: Xubuntu 16.04 x86_64
Bash: GNU bash, version 4.3.46(1)-release (x86_64-pc-linux-gnu)

Source and rationale for the commands I'm using: the 3rd reply down in the following thread: https://stackoverflow.com/questions/19306771/get-current-users-username-in-bash

Print vs. printf: I posted this question using print instead of printf because the source I copied it from used the print syntax. After using printf I get the same error message with an added error message as output:

Unescaped left brace in regex is deprecated, passed through in regex; marked by <-- HERE in m/%{ <-- HERE (.*?)}/ at /usr/bin/print line 528.
Error: no such file sessions_username_here

where sessions_username_here is a replacement for the actual session's user name, for the purpose of keeping the discussion generalized to whatever username could or might be used.

[UPDATE FINAL]: The chosen solution offered by Stéphane Chazelas clarified all the issues my script was having in a single post. I was mistakenly assuming it was the 2nd line of the script, since the output was complaining about brackets. To be clear, it was the 3rd line that was triggering the warning (see Chazelas' post for why and how), and that is probably why everyone was suggesting printf instead of print. I just needed to be pointed at the 3rd line of the script in order to make sense of those suggestions.

Things that didn't work as suggested:

sessionuser=$(logname)

Resulting error message: logname: no login name ...so maybe that suggestion isn't quite as reliable as it might seem on the surface.

If user privileges are elevated, which is sometimes the case when running scripts, then id -un would output root and not the current session's user name. This would probably be a simple matter of making sure the script drops out of root privileges before execution, which could solve this issue, but that is beyond the scope of this thread.

Things that did or could work as suggested: After I figure out how to verify my script is running in a POSIX environment and somehow de-elevate root privileges, then I could indeed use id -un to acquire the current session's username, but those verifications and de-escalations are beyond the scope of this thread's question.

For now, without POSIX verification, privilege testing, and de-escalation, the script does what was originally intended without error. Here is what that script looks like now:

#!/bin/bash
sessionuser=$( ps -o user= -p $$ | awk '{printf $1}' )
printf '%s\n' $sessionuser

Note: The above script, if run with elevated privileges, still outputs root instead of the current session's username, even though the privilege-escalated command:

sudo ps -o user= -p $$ | awk '{printf $1}'

will output the current session's username and not root. So even though the scope of this thread is answered, I am back to square one with this script.

Thanks again to xtrmz, icarus, and especially Stéphane Chazelas, who somehow was able to catch my misunderstanding of the issue. I'm really impressed with everyone here. Thanks for the help! :)
Saving command output to a variable in bash results in Unescaped left brace in regex is deprecated
bash;escape characters;command substitution
It's the third line (print $sessionuser) that causes that error, not the second.

print is a builtin command to output text in ksh and zsh, but not bash. In bash, you need to use printf or echo instead. Also note that in bash (contrary to zsh, but like ksh), you need to quote your variables.

So zsh's:

print $sessionuser

(though I suspect you meant:

print -r -- $sessionuser

if the intent was to write to stdout the content of that variable followed by a newline) would be in bash:

printf '%s\n' "$sessionuser"

(also works in zsh/ksh).

Some systems also have a print executable command in the file system that is used to send something to a printer, and that's the one you're actually calling here. Proof that it is rarely used is that your implementation (same as mine, part of Debian's mime-support package) has not been updated after perl's upgrade to work around the fact that perl now warns you about those improper uses of { in regular expressions, and nobody noticed.

{ is a regexp operator (for things like x{min,max}). Here in %{(.*?)}, that (.*?) is not a min,max; still, perl is lenient about that and treats those { literally instead of failing with a regexp parsing error. It used to be silent about that, but it now reports a warning to tell you that you probably have a problem in your (here print's) code: either you intended to use the { operator, but then you have a mistake within it, or you didn't, and then you need to escape those {.

BTW, you can simply use:

sessionuser=$(logname)

to get the name of the user that started the login session that script is part of. That uses the getlogin() standard POSIX function. On GNU systems, that queries utmp and generally only works for tty login sessions (as long as something like login or the terminal emulator registers the tty with utmp).

Or:

sessionuser=$(id -un)

to get the name of one user that has the same uid as the effective user id of the process running id (same as the one running that script). It's equivalent to your ps -p $$ approach, because the shell invocation that would execute id would be the same as the one that expands $$, and apart from zsh (via assignment to the EUID/UID/USERNAME special variables), shells can't change their uids without executing a different command (and of course, of all commands, id would not be setuid).

Both id and logname are standard (POSIX) commands (note that on Solaris, for id as for many other commands, you'd need to make sure you place yourself in a POSIX environment so that you call the id command in /usr/xpg4/bin and not the ancient one in /bin; the only purpose of using ps in the answer you linked to is to work around that limitation of /bin/id on Solaris).

If you want to know the user that called sudo, it's via the $SUDO_USER environment variable. That's a username derived by sudo from the real user id of the process that executed sudo. sudo later changes that real user id to that of the target user (root by default), so that $SUDO_USER variable is the only way to know which it was.

Note that when you do:

sudo ps -fp $$

that $$ is expanded by the shell that invokes sudo to the pid of the process that executed that shell, not the pid of sudo or ps, so it will not give you root here.

sudo sh -c 'ps -fp $$'

would give you the process that executed that sh (running as root), which is now either still running sh or possibly ps, for sh invocations that don't fork an extra process for the last command. That would be the same for a script that does that same ps -p $$ and that you run as sudo that-script.

Note that in any case, neither bash nor sudo are POSIX commands, and there are many systems where neither are found.
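A quick way to see the behaviour described above for yourself (my own illustration, run from an interactive shell):

sudo sh -c 'echo "euid name:     $(id -un)"; echo "invoking user: $SUDO_USER"; ps -o user= -p $$'
# id -un and ps report root (the uids after sudo switched user),
# while $SUDO_USER still holds the name of the user who ran sudo.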
_unix.351686
I ran btrfs scrub and got this:

scrub status for 57cf76da-ea78-43d3-94d3-0976308bb4cc
        scrub started at Wed Mar 15 10:30:16 2017 and finished after 00:16:39
        total bytes scrubbed: 390.45GiB with 28 errors
        error details: csum=28
        corrected errors: 0, uncorrectable errors: 28, unverified errors: 0

OK, I have good backups, and I would like to know which files these 28 errors are in so I can restore them from backup. That would save me a lot of time over wiping and restoring the whole disk.
btrfs found uncorrected disk errors, How can I find which files they are in?
btrfs
null
_unix.256933
I recently installed Kali Linux 2.0 and tried to update the software. This is what I did:

I edited /etc/apt/sources.list to contain the following mirrors:

deb http://http.kali.org/kali kali-rolling main non-free contrib
deb http://http.kali.org/kali kali-rolling main contrib non-free
deb-src http://http.kali.org/kali kali-rolling main contrib non-free
deb http://http.kali.org/kali sana main non-free contrib
deb http://security.kali.org/kali-security sana/updates main contrib non-free
deb-src http://http.kali.org/kali sana main non-free contrib
deb-src http://security.kali.org/kali-security sana/updates main contrib non-free

then ran the following commands:

apt-get clean
apt-get update

While running the apt-get update, I was not able to connect to the Kali server. Here is the error message:

Err http://security.kali.org sana/updates InRelease
Err http://http.kali.org sana InRelease
Err http://security.kali.org sana/updates Release.gpg
  Unable to connect to kali.mirror.garr.it:http:
Err http://http.kali.org kali-rolling Release.gpg
  Unable to connect to kali.mirror.garr.it:http:
Err http://http.kali.org sana Release.gpg
  Unable to connect to kali.mirror.garr.it:http:
Segmentation fault
Reading package lists... Done
W: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/InRelease
W: Failed to fetch http://http.kali.org/kali/dists/sana/InRelease
W: Failed to fetch http://security.kali.org/kali-security/dists/sana/updates/InRelease
W: Failed to fetch http://http.kali.org/kali/dists/kali-rolling/Release.gpg  Unable to connect to kali.mirror.garr.it:http:
W: Failed to fetch http://security.kali.org/kali-security/dists/sana/updates/Release.gpg  Unable to connect to kali.mirror.garr.it:http:
W: Failed to fetch http://http.kali.org/kali/dists/sana/Release.gpg  Unable to connect to kali.mirror.garr.it:http:
W: Some index files failed to download. They have been ignored, or old ones used instead.

How can I fix this error?
Unable to update kali linux from regular source repositories
debian;kali linux
You should NEVER modify sources.list in Kali Linux. Here's what should be in it:

deb http://http.kali.org/kali kali-rolling main contrib non-free
# For source package access, uncomment the following line
# deb-src http://http.kali.org/kali kali-rolling main contrib non-free

You probably have no connection to the internet. That's why the apt-get update failed.
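Once the default kali-rolling entry is restored and the network connection is working again, refreshing the package lists is the same pair of commands the question already used:

apt-get clean
apt-get update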
_unix.64097
I recently got an SSD. I use it to store my / as well as my /home directories (on different partitions). For each user, I would like to have most of their folders on my big RAID-1 with 2 hard drives (I'm talking about /home/<user>/Downloads, /home/<user>/Music, /home/<user>/Documents, etc., to make this more clear).

First I thought about symlinks, but I think this wouldn't work, as the whole home directories should be encrypted with ecryptfs. So, how can this be achieved?
have /home/user/Downloads (and other user folders) on a different partition
directory;home;ecryptfs
I found a solution. It is not perfect yet, but I think it can be improved. Basically I did what @rcoup suggested here: https://askubuntu.com/questions/103835/securely-automount-encrypted-drive-at-user-login/165451#165451

On Debian, for some reason, mount.ecryptfs_private is in /sbin/. One can access mount.ecryptfs_private without root privileges; however, instead of

mount.ecryptfs_private extra

I had to use

/sbin/mount.ecryptfs_private extra

I wrote a script to mount every folder in home separately; however, that's maybe not the best way to do it, as every time I move a file (e.g. from Downloads to Music) this process takes some time. Maybe it would be better to use /sbin/mount.ecryptfs_private to just mount one folder and use symlinks then.
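A rough sketch of that last idea (my own, untested; it assumes an ecryptfs alias called extra as in the answer, whose configured mount point is ~/extra - adjust the names and the folder list to your setup):

#!/bin/sh
# mount the single encrypted directory
/sbin/mount.ecryptfs_private extra || exit 1
# replace the usual folders with symlinks into the encrypted mount
for d in Downloads Music Documents; do
    mkdir -p "$HOME/extra/$d"
    if [ ! -L "$HOME/$d" ]; then
        rmdir "$HOME/$d" 2>/dev/null   # only removes the old folder if it is empty
        ln -s "$HOME/extra/$d" "$HOME/$d"
    fi
done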
_unix.196476
How can I add an existing user to a group in FreeBSD? The command usermod does not work.
How to add a user to a group in FreeBSD
freebsd;users
pw is the command you are looking for. To add user klaatu to the group foo, do:

pw groupmod foo -m klaatu

Here is the FreeBSD handbook page on the subject. It's an easy and informative read: Users and Basic Account Management
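To double-check that the change took effect (same example user and group as above):

pw groupshow foo   # lists the members of foo
id klaatu          # lists the groups klaatu belongs to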
_unix.210636
I've noticed that after 3 or 4 days of fail2ban working, whenever I want to see the fail2ban logs, it has compressed them to .gz, which I'm fine with:

-rw-r--r--. 1 root root 90034 May  1 12:49 dmesg.old
-rw-------. 1 root root     0 Jun 14 03:13 fail2ban.log
-rw-------. 1 root root  8974 May 24 02:22 fail2ban.log-20150524.gz
-rw-------. 1 root root    20 May 24 03:44 fail2ban.log-20150601.gz
-rw-------. 1 root root    20 Jun  1 03:30 fail2ban.log-20150607.gz
-rw-------. 1 root root  4785 Jun 14 03:10 fail2ban.log-20150614.gz

The problem is that it stops working: as you can see, my main fail2ban.log has 0 bytes and nothing inside it. I was thinking that maybe fail2ban has nothing to log, but then I look at the secure log and I see the following:

Jun 18 09:24:52 localserver sshd[9641]: input_userauth_request: invalid user Exit [preauth]
Jun 18 09:24:53 localserver sshd[9641]: Connection closed by 123.56.112.165 [preauth]
Jun 18 10:03:19 localserver sshd[10218]: Invalid user alina from 123.56.112.165
Jun 18 10:03:19 localserver sshd[10218]: input_userauth_request: invalid user alina [preauth]
Jun 18 10:03:20 localserver sshd[10218]: Connection closed by 123.56.112.165 [preauth]
Jun 18 10:11:24 localserver sshd[10329]: Invalid user kadmin from 173.201.39.212
Jun 18 10:11:24 localserver sshd[10329]: input_userauth_request: invalid user kadmin [preauth]
Jun 18 10:11:24 localserver sshd[10329]: Received disconnect from 173.201.39.212: 11: Bye Bye [preauth]
Jun 18 10:11:24 localserver sshd[10331]: Received disconnect from 173.201.39.212: 11: Bye Bye [preauth]
Jun 18 10:11:25 localserver sshd[10333]: Invalid user guest from 173.201.39.212
Jun 18 10:11:25 localserver sshd[10333]: input_userauth_request: invalid user guest [preauth]
Jun 18 10:11:25 localserver sshd[10333]: Received disconnect from 173.201.39.212: 11: Bye Bye [preauth]
Jun 18 10:11:25 localserver sshd[10335]: Invalid user pi from 173.201.39.212
Jun 18 10:11:25 localserver sshd[10335]: input_userauth_request: invalid user pi [preauth]
Jun 18 10:11:25 localserver sshd[10335]: Received disconnect from 173.201.39.212: 11: Bye Bye [preauth]
Jun 18 10:11:26 localserver sshd[10337]: Invalid user ubnt from 173.201.39.212
Jun 18 10:11:26 localserver sshd[10337]: input_userauth_request: invalid user ubnt [preauth]
Jun 18 10:11:26 localserver sshd[10337]: Received disconnect from 173.201.39.212: 11: Bye Bye [preauth]
Jun 18 10:11:26 localserver sshd[10339]: Invalid user xbian from 173.201.39.212
Jun 18 10:11:26 localserver sshd[10339]: input_userauth_request: invalid user xbian [preauth]
Jun 18 10:11:26 localserver sshd[10339]: Received disconnect from 173.201.39.212: 11: Bye Bye [preauth]
Jun 18 10:11:26 localserver sshd[10341]: Invalid user admin from 173.201.39.212
Jun 18 10:11:26 localserver sshd[10341]: input_userauth_request: invalid user admin [preauth]
Jun 18 10:11:27 localserver sshd[10341]: Received disconnect from 173.201.39.212: 11: Bye Bye [preauth]
Jun 18 10:42:29 localserver sshd[10741]: Invalid user andrei from 123.56.112.165
Jun 18 10:42:29 localserver sshd[10741]: input_userauth_request: invalid user andrei [preauth]
Jun 18 10:42:29 localserver sshd[10741]: Connection closed by 123.56.112.165 [preauth]

Which makes me mad, because attacks are still taking place and fail2ban is doing nothing about them. I checked whether fail2ban is still running, and it seems to me like it is:

sudo fail2ban-client status
Status
|- Number of jail:      1
`- Jail list:           ssh-iptables

I also made sure that the logpath is correct:

# Jail for more extended banning of persistent abusers
# !!! WARNING !!!
#   Make sure that your loglevel specified in fail2ban.conf/.local
#   is not at DEBUG level -- which might then cause fail2ban to fall into
#   an infinite loop constantly feeding itself with non-informative lines
[recidive]
logpath  = /var/log/fail2ban.log
port     = all
protocol = all
bantime  = 604800  ; 1 week
findtime = 86400   ; 1 day
maxretry = 5

sudo fail2ban-client status ssh-iptables gives the following:

Status for the jail: ssh-iptables
|- Filter
|  |- Currently failed: 0
|  |- Total failed:     1089
|  `- File list:        /var/log/secure
`- Actions
   |- Currently banned: 0
   |- Total banned:     137
   `- Banned IP list:

Any other ideas that could help me fix this problem?
fail2ban stops logging after some time (3-4 days)
logs;jails;fail2ban
null
_softwareengineering.102741
Is this agile? Scrum? Any suggestions on how this can be made more agile under the circumstances? Which points are positives and which can be improved?

The product is developed for a customer who will re-sell it while paying us a royalty.
The team does not get to talk directly to the end user, only to the reseller.
A product requirements document was created before starting development. The requirements are rigid and do not change.
A delivery schedule was agreed on, with milestones such as alpha, beta, etc. and features/times attached to those milestones.
All developers on the Scrum team report to the product owner, a software manager.
Testers on the team report to a QA manager.
The product owner has directed the team towards certain high-risk technical tasks. The output of those tasks is not usable by the end user, but rather some technology/code that will eventually be used in the product.
The product owner has created a backlog based on the requirements.
The product owner is unable to answer some questions regarding the product. He refers to others or to the documented requirements.
The team goes through the motions of Scrum: Daily Scrum, Sprint Planning, Retrospective, etc. There is a ScrumMaster.
Every sprint, the product owner and management decide what backlog items the team works on.
There is a burndown chart, and a Scrum board with stories and tasks. The estimates on those come from the team.
The team sits in an open-floor bull pen shared with other teams, all visible and audible. There is cross-team noise and there is foot traffic around the team area.
The team may be required to attend various meetings not directly related to the goals of the sprint.
There are pressures to select certain technical solutions. Some tools and processes are mandated.
Is this agile? Scrum? How to improve agility?
agile;scrum
null
_cs.35418
I developed a randomized self-adjusting binary search tree years ago, which I called a shuffle tree, but was unable to ever have it published because my proofs were rejected (with little explanation). I've since given up the hope of publishing (I'm not an academic so it doesn't matter so much), but perhaps I can have some closure: I'm going to present the tree here, and perhaps someone can help me understand where my proofs fall short? Through testing, I'm quite certain that my understanding of the data structure is correct, but the proofs were always lacking.First, understand how a top-down splay tree can be implemented around a traverse() function. Shuffle trees can be implemented similarly, where all operations defer to a traverse() function for the balancing operation.I'm going to begin with a C traverse function for shuffle trees, then I'll explain:// returns node with key k,// or returns the leaf containing// the closest key to k.node * Traverse( key k, node *root, int treesize ) { signed int iCounter = rand() % treesize; node *pRet = 0; node *p = root; while ( p ) { pRet = p; if ( k < value(p) ) { p = left(p); if (( ! iCounter )&& p ) { RotateRight( pRet ); pRet = parent(p); } // end if } else if ( value(p) < k ) { p = right(p); if (( ! iCounter )&& p ) { RotateLeft( pRet ); pRet = parent(p); } // end if } else break ; // break while --iCounter; iCounter >>= 1; } // end while return ( pRet );} The rotations used are simple single rotations.Like a scapegoat tree, shuffle trees sample depth to find imbalance, but unlike scapegoat trees, they execute at most one rotation per access to attempt to restore balance.At the beginning of traversal, we set an integer count-down value, to a random number in the range [0,N-1], where N is the size of the tree. As we iterate from a parent node to its child, we decrease the counter with I := (I-1)/2. When the counter equals zero, then the current node becomes a candidate rotation pivot. If we need to iterate past the candidate pivot, then we will commit to the rotation. We rotate the pivot away from the direction of traversal.As search depth increases, the likelihood of a rotation increases. No rotations will occur beyond depth lgN. The counter requires lgN random bits per operation. Shuffle trees also record theirsize, so that the counter can be set. No balancinginformation needs to be recorded in tree nodes.Searches may not navigate to a leaf; if the workingset is clustered near the root, then deep searcheswill not be required. As a result, rotations can occur less frequently in a well-configured tree.If a node in the tree is not weight-balanced, thenan access is more likely to traverse into its largersub-tree. A rotation at the node probably moves some of the descendants from the larger sub-tree tothe smaller one. Since these operations are probabilistic, rotations will occur which can deterioratebalance; but as the imbalance increases, the likelihood of a rotation that improves balanceincreases.In effect, the balancing technique is a kind of random sampling. Nodes are selected randomlyfrom traversal paths and their balance is manipulated. Frequently used data attract more attentionand, therefore, benefit more from balancing activity than infrequently used data. 
The tree eventually approximates a weight-balanced configuration for the data set, where the probability of access is the weight for each node.

Here's where it gets dicey: proving that a traversal occurs in lgN. I argue that the probability that an adversary can select a node to force a rotation that impairs balance is

$(1 - P_w) \cdot \prod_{x \in A} (1 - p_x)$

where $P_w$ is a number estimating the overall weight balance of the tree ($P_w \ge 0.5$) and $p_x$ is the weight of node $x$, the probability it is accessed. Set $A$ is the set containing the pivot and all its ancestors. If rotations occur which impair balance, and $P_w$ increases, then the overall probability of a favorable rotation increases. As $P_w$ increases, the probability of a favorable rotation dwarfs the probability of a poor rotation. The tree does not linearize, and lgN is maintained.

Eh. What do you think?
Proof of Randomized Self-Adjusting Binary Search Tree
data structures;randomized algorithms;correctness proof
null
_codereview.51028
I am working on a project in which I construct a URL with a valid hostname (but not a blocked hostname) and then execute that URL using RestTemplate from my main thread. I also have a single background thread in my application which parses the data from the URL and extracts the block list of hostnames from it.If any block list of hostnames is present, then I won't make a call to that hostname from the main thread and I will try making a call to another hostname. By block list, I mean whenever any server is down, its hostname is on the block list.Here is my background thread code. It will get the data from my service URL and keep on running every 10 minutes once my application has started up. It will then parse the data coming from the URL and store it in a ClientData class variable.public class TempScheduler { private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1); public void startScheduler() { final ScheduledFuture<?> taskHandle = scheduler.scheduleAtFixedRate(new Runnable() { public void run() { try { callServiceURL(); } catch (Exception ex) { ex.printStackTrace(); } } }, 0, 10, TimeUnit.MINUTES); } } // call the service and get the data and then parse // the response. private void callServiceURL() { String url = url; RestTemplate restTemplate = new RestTemplate(); String response = restTemplate.getForObject(url, String.class); parseResponse(response); } // parse the response and store it in a variable private void parseResponse(String response) { //... Map<String, Map<Integer, String>> primaryTables = null; Map<String, Map<Integer, String>> secondaryTables = null; Map<String, Map<Integer, String>> tertiaryTables = null; //... // store the data in ClientData class variables if anything has changed // which can be used by other threads if(changed) { ClientData.setMappings(primaryTables, secondaryTables, tertiaryTables); } // get the block list of hostnames Map<String, List<String>> coloExceptionList = gson.fromJson(response.split(blocklist=)[1], Map.class); List<String> blockList = new ArrayList<String>(); for(Map.Entry<String, List<String>> entry : coloExceptionList.entrySet()) { for(String hosts : entry.getValue()) { blockList.add(hosts); } } // store the block list of hostnames which I am not supposed to make a call // from my main application ClientData.setBlockListOfHostname(blockList); }}Below is my ClientData class in which I am using CountDownLatch -public class ClientData { public static class Mappings { public final Map<String, Map<Integer, String>> primary; public final Map<String, Map<Integer, String>> secondary; public final Map<String, Map<Integer, String>> tertiary; public Mappings( Map<String, Map<Integer, String>> primary, Map<String, Map<Integer, String>> secondary, Map<String, Map<Integer, String>> tertiary ) { this.primary = primary; this.secondary = secondary; this.tertiary = tertiary; } } private static final AtomicReference<Mappings> mappings = new AtomicReference<>(); private static final CountDownLatch hasBeenInitialized = new CountDownLatch(1); // do I need this extra AtomicReference? private static final AtomicReference<List<String>> blockListOfHosts = new AtomicReference<List<String>>(); // do I need this extra latch here? 
private static final CountDownLatch hasBeenInitializedBlockHostnames = new CountDownLatch(1); public static Mappings getMappings() { try { hasBeenInitialized.await(); return mappings.get(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); throw new IllegalStateException(e); } } public static void setMappings( Map<String, Map<Integer, String>> primary, Map<String, Map<Integer, String>> secondary, Map<String, Map<Integer, String>> tertiary ) { setMappings(new Mappings(primary, secondary, tertiary)); } public static void setMappings(Mappings newMappings) { mappings.set(newMappings); hasBeenInitialized.countDown(); } public static void setBlockListOfHostname(List<String> listsOfHostnames) { blockListOfHosts.set(listsOfHostnames); hasBeenInitializedBlockHostnames.countDown(); } public static boolean isExceptionHost(String hostName) { List<String> blockHostList = blockListOfHosts.get(); if (blockHostList != null) { return blockHostList.contains(hostName); } else { return false; } }}Here is my main application thread code in which I find all the hostnames on which I can make a call and then iterate the hostnames list to make a call.If that hostname is null or in the block list, then I won't make a call to that particular hostname and will try the next hostname in the list.@Overridepublic DataResponse call() throws Exception { List<String> hostnames = new LinkedList<String>(); Mappings mappings = ClientData.getMappings(); // use mappings.primary // use mappings.secondary // use mappings.tertiary // .. some code here for (String hostname : hostnames) { // If host name is null or host name is in block list category, skip sending request to this host if (hostname == null || ClientData.isExceptionHost(hostname)) { continue; } try { String url = generateURL(hostname); response = restTemplate.getForObject(url, String.class); break; } catch (RestClientException ex) { // log exception // how to add this hostname in the block list as well in `ClientData` class? } }}I don't need to make a call to the hostname whenever it is down from the main thread. And my background thread gets these detail from one of my services, whenever any server is down. It will have the list of hostnames and whenever they are up, that list will get updated.Do I need extra CountDownLatch for block list of hostname in ClientData class or not?Do I need extra AtomicReference for block list of hostname as well or not?This code will be called at a rate of 1000 requests per second so it has to be fast. For the first time, whenever my blockListOfHosts is being updated from the background thread, I can return false instead of blocking the call using CountDownLatch but it has to be atomic, all the threads should see correct value of the block list of hostnames.And also, whenever any RestClientException is being thrown, I will add that hostname in the blockListOfHosts as well since my background thread is running every 10 minutes so that list won't have this hostname until 10 minutes is done. And whenever this server came back up, my background will update this list automatically.
Constructing a URL for execution using RestTemplate
java;performance;url;rest;atomic
For adding and removing single host names, I would use a simple ConcurrentHashMap without the AtomicReference. Initialize it with an empty map and drop the additional latch.

Update: I really doubt you need to replace the list of hosts all at once, but here's the combined form anyway:

private static final AtomicReference<ConcurrentHashMap<String, String>> blockedHosts =
        new AtomicReference<ConcurrentHashMap<String, String>>(new ConcurrentHashMap<String, String>());

public static boolean isHostBlocked(String hostName) {
    return blockedHosts.get().containsKey(hostName);
}

public static void blockHost(String hostName) {
    blockedHosts.get().put(hostName, hostName);
}

public static void unblockHost(String hostName) {
    blockedHosts.get().remove(hostName);
}

public static void replaceBlockedHosts(List<String> hostNames) {
    ConcurrentHashMap<String, String> newBlockedHosts = new ConcurrentHashMap<>();
    for (String hostName : hostNames) {
        newBlockedHosts.put(hostName, hostName);
    }
    blockedHosts.set(newBlockedHosts);
}
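On the calling side, the catch block in your main thread could then simply record the failed host. This is only a sketch of the wiring; blockHost and replaceBlockedHosts are the hypothetical names from the snippet above, not anything in your existing classes:

try {
    String url = generateURL(hostname);
    response = restTemplate.getForObject(url, String.class);
    break;
} catch (RestClientException ex) {
    // remember this host until the background refresh replaces the whole map
    ClientData.blockHost(hostname);
    // log and fall through to try the next hostname in the list
}

The background thread would then call replaceBlockedHosts(blockList) where it currently calls setBlockListOfHostname(blockList).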
_unix.268308
My actual problem is that Nginx is not able to render pages (403 Forbidden) despite the permissions being set appropriately (in my opinion).

The directory of stackoverflow at the default location:

[user1@wfe1 ~]$ ls /usr/share/nginx/html/stackoverflow/ -al
total 4
drwxr-xr-x. 2 root root 23 Mar  9 02:59 .
drwxrwxr-x. 4 root www  89 Mar  9 02:59 ..
-rw-r--r--. 1 root root  6 Mar  9 02:59 index.html

The directory of stackoverflow at the user location:

[user1@wfe1 ~]$ ls stackoverflow/ -al
total 4
drwxr-xr-x. 2 root  root  23 Mar  9 02:52 .
drwxr-xr-x. 3 nginx nginx 79 Mar  9 02:51 ..
-rw-r--r--. 1 root  root   6 Mar  9 02:52 index.html

Configuration file:

server {
    listen 80;
    server_name localhost;
    root /usr/share/nginx/html/stackoverflow;   #Works
    #root /home/user1/stackoverflow;            #Doesn't work
    index index.html;
}

The one that fails shows a 403 Forbidden error. To get to the root of the issue, I am running the following command and then browsing to the site with my browser, which yields the output shown below...

[root@wfe1 user1]# sudo strace -p 9114 -e trace=file
Process 9114 attached
stat("/home/user1/stackoverflow/index.html", {st_mode=S_IFREG|0644, st_size=6, ...}) = 0
open("/home/user1/stackoverflow/index.html", O_RDONLY|O_NONBLOCK) = -1 EACCES (Permission denied)
open("/home/user1/stackoverflow/favicon.ico", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory)

The output, as you can see, is Permission denied. I would like to know which user account was used to access the file. How can I dig in further?

EDITED the question with newer permissions.
How can I find which user is accessing a file using strace?
linux;centos;nginx;strace
null
_unix.4043
I run a VNC client (tightvnc) along with other applications on a Windows machine. Inside my VNC session, I typically have several xterms and gvim windows open. How can I switch between the applications within VNC? If I do ALT-TAB, that results in switching between the applications in Windows where the whole VNC is considered as a single application; I do not want that. Is there some way to configure some key bindings, any two/three keystroke is okay for me, to do the job?
Task Switcher inside VNC
window manager;vnc
You should be able to modify the keyboard shortcut assigned to switching windows on the VNC server side desktop. You didn't specify what desktop environment you are using, but on my computer (Ubuntu 10.10 with GNOME) this is the Keyboard Shortcuts control panel from the System menu (System > Preferences > Keyboard Shortcuts).

The Alt-Tab function is near the bottom (under Window Management, labelled "Move between windows, using a popup window"). Change it from Alt-Tab to something that isn't a shortcut on either your Windows computer or the apps you're using on the VNC server side.
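If the remote desktop happens to be GNOME 2 with Metacity (as on Ubuntu 10.10), the same change can probably be made from a terminal inside the VNC session. The gconf key below is an assumption based on that setup, so verify it first with the --get call:

# show the current binding (usually "<Alt>Tab")
gconftool-2 --get /apps/metacity/global_keybindings/switch_windows

# rebind window switching to a combination Windows won't intercept
gconftool-2 --type string --set /apps/metacity/global_keybindings/switch_windows "<Super>Tab"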
_unix.291309
How do I back up an LVM partition to an image for recovery purposes? I am trying to run dd on /dev/sda2, but it crashes after about 8 hours, at around 380G.

sudo lvmdiskscan
  /dev/centos/swap [   3.89 GiB]
  /dev/sda1        [ 500.00 MiB]
  /dev/centos/root [  50.00 GiB]
  /dev/sda2        [ 465.27 GiB] LVM physical volume
  /dev/centos/home [ 411.38 GiB]
  /dev/sdb1        [ 931.51 GiB]
  3 disks
  2 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

Anyway, could running from a USB stick make any difference?
How do I create a dd IMAGE from LVM for recovery purposes
dd;disk;lvm
null
_unix.157059
I'm getting the following errors every 10-30 seconds on a virtual Red Hat Enterprise Linux 6.5 server on Amazon's EC2.

Sep 23 09:57:05 ServerName init: ttyS0 (/dev/ttyS0) main process (1612) terminated with status 1
Sep 23 09:57:05 ServerName init: ttyS0 (/dev/ttyS0) main process ended, respawning
Sep 23 09:57:05 ServerName agetty[1613]: /dev/ttyS0: tcgetattr: Input/output error

Does anyone know what is causing this and how I could go about fixing it? Thanks.
ERROR: init: ttyS0 (/dev/ttyS0) main process (1612) terminated with status 1
linux;tty;io;init;amazon ec2
A virtual Red Hat installation probably doesn't have any serial ports connected (which is what /dev/ttyS0 is: COM1 in DOS parlance), so trying to start agetty to listen to the serial port is doomed to fail. Find the line in /etc/inittab that contains agetty and ttyS0 and change respawn to off.

EDIT: In case the system is using upstart, as in Red Hat 6, do stop ttyS0 to stop the service now, and do mv /etc/init/ttyS0.conf /etc/init/ttyS0.conf.NOT to prevent starting the service after a reboot. (There is a better way of preventing the starting but I don't know it at the moment...)
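For the upstart case, the two steps above boil down to something like this; the .NOT suffix is just a convention, any rename that upstart won't pick up will do:

# stop the respawning getty right now
sudo stop ttyS0
# keep it from being started again after a reboot
sudo mv /etc/init/ttyS0.conf /etc/init/ttyS0.conf.NOT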
_webmaster.88394
I have deleted some of the low-quality posts on my website, but those posts are still available in Google search results. Does anyone know how to make Google re-cache my site?
How to remove pages from Google Search Engine Cache
google;search;google cache
null
_webmaster.50487
I have hundreds of duplicate meta description issues in Google Webmaster Tools because of pagination: the pagination pages are showing as duplicates in the tool. So I added rel=next and rel=prev to the pagination link anchor tags, but it doesn't seem to be working.

I read the official Google Webmaster blog post. It never mentions adding the rel attribute to anchor tags, only to link elements, for example:

<link rel="next" href="http://www.example.com/article?story=abc&page=2" />

So does this only work when the rel attribute is added via a <link> element, or will it work with anchor tags as well? I read a few unofficial blogs that said it will work with anchors. Can anyone tell me what I am doing wrong here?
rel=next in anchor tag is not working
google search console;rel;pagination
null
_codereview.90469
JavaScript is my first programming language, and I'm still pretty new. I'm just looking for feedback. What can I do to make this more efficient and handle larger numbers?

// Project Euler - Smallest Multiple
// This program finds the smallest positive number
// that is evenly divisible by all numbers between 1 and n.
function smallestMult(n){
    var dividends = []; // the numbers by which the program must divide
    for (var i = 1; i <= n; i ++){
        dividends.push(i);
    }
    var result = n; // will increase in increments of n
    var count = 0;  // result has been found when count == n
    while (count < n){
        for (var x = 0; x < dividends.length; x ++){
            if (result % dividends[x] === 0){
                count += 1; // increases count for every successful division
            } else {
                count = 0;   // if a division fails, count returns to 0
                result += n; // and the result is increased by n
            }
        }
    }
    return result;
}

console.log(smallestMult(12));
Project Euler #5 - Smallest Multiple
javascript;algorithm;programming challenge
null
_webmaster.66885
I have an Apache 2.2.15 web server with the primary site at /web/mybiz which corresponds to http://mybiz.domain.com. We now have a new subdomain http://abc.mybiz.domain.com with the homepage living at /web/mybiz/abc/index.html. Currently, I have a simple rewrite so when people visit http://abc.mybiz.domain.com, they get redirected to http://mybiz.domain.com/abc/index.html. The includes for that homepage live in /web/mybiz/static and /web/mybiz/images. I need to have it so that people visiting don't see the URL change in the browser, but I cannot figure out how to make it work and keep the includes all working.
Apache URL rewriting with masking and includes outside of DocumentRoot
apache;mod rewrite;url rewriting;masking
null
_unix.767
I know that both apt-get and aptitude are command line package management interfaces on Debian derived Linux, with different options, but I'm still somewhat confused. Under the hood, aren't they using the same APT system? Why does Debian maintain these parallel tools? (Bonus question: what on earth is wajig?)
What is the real difference between apt-get and aptitude? (How about wajig?)
debian;package management;apt;aptitude
The most obvious difference is that aptitude provides a terminal menu interface (much like Synaptic in a terminal), whereas apt-get does not.

Considering only the command-line interfaces of each, they are quite similar, and for the most part, it really doesn't matter which one you use. Recent versions of both will track which packages were manually installed, and which were installed as dependencies (and therefore eligible for automatic removal). In fact, I believe that even more recently, the two tools were updated to actually share the same database of manually vs automatically installed packages, so cases where you install something with apt-get and then aptitude wants to uninstall it are mostly a thing of the past.

There are a few minor differences:

- aptitude will automatically remove eligible packages, whereas apt-get requires a separate command to do so.
- The commands for upgrade vs. dist-upgrade have been renamed in aptitude to the probably more accurate names safe-upgrade and full-upgrade, respectively.
- aptitude actually performs the functions of not just apt-get, but also some of its companion tools, such as apt-cache and apt-mark.
- aptitude has a slightly different query syntax for searching (compared to apt-cache).
- aptitude has the why and why-not commands to tell you which manually installed packages are preventing an action that you might want to take.
- If the actions (installing, removing, updating packages) that you want to take cause conflicts, aptitude can suggest several potential resolutions. apt-get will just say "I'm sorry Dave, I can't allow you to do that."

There are other small differences, but those are the most important ones that I can think of.

In short, aptitude more properly belongs in the category with Synaptic and other higher-level package manager frontends. It just happens to also have a command-line interface that resembles apt-get.

Bonus Round: What is wajig?

Remember how I mentioned those companion tools like apt-cache and apt-mark? Well, there's a bunch of them, and if you use them a lot, you might not remember which ones provide which commands. wajig is one solution to that problem. It is essentially a dispatcher, a wrapper around all of those tools. It also applies sudo when necessary. When you say wajig install foo, wajig says "OK, install is provided by apt-get and requires admin privileges", and it runs sudo apt-get install foo. When you say wajig search foo, wajig says "OK, search is provided by apt-cache and does not require admin privileges", and it runs apt-cache search foo. If you use wajig instead of apt-get, apt-mark, apt-cache and others, then you'll never have this problem:

$ apt-get search foo
E: Invalid operation search

If you want to know what wajig is doing behind the scenes, which tools it is using to implement a particular command, it has --simulate and --teaching modes.
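To make the command-line overlap concrete, here are a few roughly equivalent invocations (foo is just a placeholder package name):

# apt-get and companions              # aptitude
sudo apt-get update                   sudo aptitude update
sudo apt-get dist-upgrade             sudo aptitude full-upgrade
sudo apt-get install foo              sudo aptitude install foo
sudo apt-get autoremove               (not needed: aptitude removes unused dependencies itself)
apt-cache search foo                  aptitude search foo
apt-cache show foo                    aptitude show foo
                                      aptitude why foo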
_unix.26225
This is a query based on personal use. A troll in our local network had downloaded Crikey, the key-event simulator. He used it to simulate events on other computers, leading to unwanted things. For example, a person was playing the popular FPS Urban Terror. The attacker used Crikey to change the player's nickname (/nick trollface). As best as we understand, the attacker ssh'd into our computers, and then switched X windows somehow, and mimicked our events. I was wondering whether someone knew how he switched X Windows and did this.
If you SSH into another computer, how to access other X displays?
ssh;xorg;security;x11;gentoo
This is not supposed to be possible; either you are running a vulnerable version of some software or you have misconfigured something.

Under normal configurations, connecting to an X server requires a sort of password called an X cookie. The cookie is randomly generated when the X server starts and stored in a file. Normally, only the user who started the X server can read this file, and so other users cannot obtain the cookie. For a detailed explanation of how to access an X display when the location of the cookie isn't immediately apparent, such as when accessing the display of a remote machine over an SSH connection, see "Open a window on a remote X display (why Cannot open display)?". See also "Is there a way to communicate with someone at their desktop?" and "Can I launch a graphical program on another user's desktop as root?" regarding accessing another user's X display.

Note that Crikey is not at fault here. Crikey is not an attack program in any way. Essentially, Crikey writes to a file, and it's not Crikey's fault if that file does not have sufficiently restrictive permissions.

Possible avenues of attack include:

- X cookies stored in a file with insufficiently restricted permissions. Check the permissions of ~/.Xauthority or $XAUTHORITY; if this file is readable by anyone but the owner, something is misconfigured.
- X cookies transmitted in clear text over the network. Use SSH.
- X cookies available in clear text because they are stored on an NFS filesystem that anyone with physical access to the network can mount. Don't use NFS (at least not this way) if you don't trust all users with root access to a machine on the network.
- The targeted user ran xhost +. Don't do that.
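A quick way to check the two most common misconfigurations on the victim machine (ordinary commands, nothing Gentoo-specific):

# the cookie file should be readable by its owner only
ls -l ~/.Xauthority        # expect -rw------- owned by you
chmod 600 ~/.Xauthority    # tighten it if it is group- or world-readable

# make sure host-based access control was not disabled with "xhost +"
xhost                      # should report that access control is enabled
xhost -                    # re-enable it if it had been turned off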
_unix.304287
So I have kinda just resigned to using nano for this, but I thought I would put it out on Unix & Linux to A) challenge somebody and B) learn how/if it can be done.

I want to prepend a string to an RSA key file (command=/sbin/shutdown -h now).

Most of the things I found when googling "cat prepend to file" make it so it would end up like this:

command=/sbin/shutdown -h now
ssh-rsa MyRSsAkEyasetcetc

What I need is:

command=/sbin/shutdown -h now ssh-rsa MySRasKeytsadnasdnasd

Aka all one line, prepended to the first line.
Cat prepend to first line, NOT new line
text processing;command line
This is a simple sed command:

sed 's!^!command=/sbin/shutdown -h now !'

If the public key is in a file then you can use the -i flag to edit the file in place:

$ cat key.pub
ssh-rsa MySRasKeytsadnasdnasd
$ sed -i 's!^!command=/sbin/shutdown -h now !' key.pub
$ cat key.pub
command=/sbin/shutdown -h now ssh-rsa MySRasKeytsadnasdnasd
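If the goal is a forced command in ~/.ssh/authorized_keys rather than a standalone key file, the same edit can be applied there directly; the path and the key fragment below are only assumptions taken from your example:

# prefix only the line containing this particular key
sed -i '/MySRasKeytsadnasdnasd/ s!^!command=/sbin/shutdown -h now !' ~/.ssh/authorized_keys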
_unix.349304
Having trouble mounting a btrfs filesystem. Originally created on a server running xbian. Trying to mount on an up-to-date OpenSUSE 42.2 server. Complains about unsupported feature 0x10, open_ctree failed.How can I mount this filesystem ?Mount attempt# file -s /dev/sdc2/dev/sdc2: BTRFS Filesystem (label xbian, sectorsize 4096, nodesize 16384, leafsize 16384)# mount -t btrfs /dev/sdc2 /mntmount: wrong fs type, bad option, bad superblock on /dev/sdc2, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so.#dmesg output[ 119.698406] BTRFS info (device sdc2): disk space caching is enabled[ 119.698409] BTRFS: couldn't mount because of unsupported optional features (10).[ 119.744887] BTRFS: open_ctree failedbtrfs version# rpm -qa|grep btrfsbtrfsprogs-udev-rules-4.5.3-3.1.noarchbtrfsprogs-4.5.3-3.1.x86_64libbtrfs0-4.5.3-3.1.x86_64btrfsmaintenance-0.2-13.1.noarch#btrfs inspect-internalReports unknown flag. This behaviour seen on stock btrfs version supplied with OpenSUSE (btrfs-progs v4.5.3+20160729) and with latest when downloaded from git and compiled (btrfs-progs v4.9.1)# btrfs inspect-internal dump-super /dev/sdc2superblock: bytenr=65536, device=/dev/sdc2---------------------------------------------------------csum 0x394d4988 [match]bytenr 65536flags 0x1 ( WRITTEN )magic _BHRfS_M [match]fsid 71ecbcc5-c88f-4f27-b4d8-763bd801765elabel xbiangeneration 129root 4669440sys_array_size 97chunk_root_generation 102root_level 0chunk_root 131072chunk_root_level 0log_root 0log_root_transid 0log_root_level 0total_bytes 7451181056bytes_used 691642368sectorsize 4096nodesize 16384leafsize 16384stripesize 4096root_dir 6num_devices 1compat_flags 0x0compat_ro_flags 0x0incompat_flags 0x179 ( MIXED_BACKREF | COMPRESS_LZO | COMPRESS_LZOv2 | BIG_METADATA | EXTENDED_IREF | SKINNY_METADATA | unknown flag: 0x10 )csum_type 0csum_size 4cache_generation 129uuid_tree_generation 112dev_item.uuid a8b49751-56e3-4c42-a1d3-40a1554c800cdev_item.fsid 71ecbcc5-c88f-4f27-b4d8-763bd801765e [match]dev_item.type 0dev_item.total_bytes 7451181056dev_item.bytes_used 926941184dev_item.io_align 4096dev_item.io_width 4096dev_item.sector_size 4096dev_item.devid 1dev_item.dev_group 0dev_item.seek_speed 0dev_item.bandwidth 0dev_item.generation 0#
Unable to mount btrfs filesystem open_ctree failed
mount;btrfs
The problem is indeed that the two Linux versions sport slightly different BTRFS versions, i.e. they do not support the same features:

[ 119.698406] BTRFS info (device sdc2): disk space caching is enabled
[ 119.698409] BTRFS: couldn't mount because of unsupported optional features (10).

It seems that xbian has enabled a feature that OpenSUSE 42.2 does not support, which prevents interoperability.

These FS features are optional: this means it is possible to create downward-compatible BTRFS partitions on newer systems that are readable from older systems (without those features), controlled by the parameters that are passed to the mkfs.btrfs program.

The numeric code of the feature is 10 - unknown flag: 0x10. I had a hard time figuring out what that code means (my guess: extended inode references). But since the number is so low, I think this is something basic. I think you cannot make this filesystem readable by unpatched kernels anymore. Otherwise, knowing the feature, we could maybe specify a mount option to avoid the error, like here, where the filesystem compression algorithm is specified:

mount -t btrfs -o compress=lz4 dev /mnt

If we do not know what this feature is, you cannot even update your kernel in OpenSUSE to match xbian. Usually in such a situation, you would rely on ext4 instead for compatibility reasons.
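If you ever recreate the filesystem on the xbian box and want it mountable by the stock OpenSUSE kernel, mkfs.btrfs lets you disable individual optional features at creation time. The feature name below is only an illustration; check what your build actually lists:

# see which optional features this btrfs-progs build knows about
mkfs.btrfs -O list-all

# example: create a filesystem with one feature explicitly disabled
# (replace the feature name and the device with your own)
mkfs.btrfs -O ^no-holes -L xbian /dev/sdXn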
_cogsci.3586
I'm looking for the name of the cognitive bias that is expressed in the following story.A fellow coworker was instrumental in getting a 75 gallon fish tank installed in the lobby of the company that we work at a couple of weeks ago. The water needs to cycle for a while before fish can be introduced. The other day he was testing the water when someone walked by and asked when we would be getting fish. He said probably a week or so. The person was appalled that it would take so long to get fish.The company had been without a fish tank for 20 years and now they've had one for a couple weeks and this person thought one or two more weeks was way too long to wait. What is the name of this bias?
Name of the bias where someone really needs something after they find out it exists
cognitive psychology;motivation;well being;bias
It's not a bias; it is natural human nature. At least the 2-year-old's case is. It's just like how you would not have thought of going to Six Flags (an amusement park) unless it was mentioned to you. When the 2-year-old hears "ice cream", the kid thinks of the sweet taste, or the pleasure ice cream brings. In the kid's case, it is impatience or an inability to defer gratification. For the employee, it is simply that he thinks the cycling of the tank and the fish delivery should take less time than a week or two.

The two examples are different things. I am positive that the 2-year-old's case is NOT a bias. The employee's case may simply be surprise.
_softwareengineering.108248
A meeting today went well. I explained cloud computing, which one of the attendees recognized as being something other than a traditional RDBMS, and I said that with cloud computing everything is software. It didn't seem to produce an "Aha!" moment when I said it, so I wonder what I should say instead.

I thought the main specifics of cloud computing are: integrated services, no traditional RDBMS, resources allocated as software, and a payment model of pay-per-usage instead of pay-per-hardware. And/or should I stress the concept of PaaS, i.e. that it is a platform? Wikipedia says the distinction is between products and services, but we said that about web services 15 years ago.

Thanks in advance for your answers.
How should I communicate the specifics of cloud computing (as compared to other)
google app engine;cloud computing
Although you can host your own cloud, for most businesses it means this:You pay another company to take care of some or all your data. Your data lives on their computers, where their employees take care of your data (in the sense of keeping it alive, not in the sense of keeping it up to date), privileged access to your data (at least some parts of privileged access), the software that manages your data, and the computers that run the software that manages your data. Their employees take care of upgrades to the software and to the computers. Depending on the contract, their employees might take care of disaster recovery, too. Now, if that all sounds too good to be true . . .
_codereview.112783
I wanted to try and use the Sieve of Eratosthenes to find all prime numbers between some arbitrary bounds 1 <= m <= n. The simple implementation of the sieve does not consider the lower bound at all, it always starts from 1 (or actually from 2). So for a big enough n, simply creating an array would be impossible.The algorithm first finds all prime numbers from 1 to sqrt(n), then uses those numbers to find all primes in the given range.I'd like to know if:I'm using more memory than necessaryI'm unnecessarily repeating some operationsI can improve the style of this codeNote: I am not validating user input for simplicity sake.import java.util.*;public class PrimeLister { private static ArrayList<Integer> segmentSieve(int upperBound) { boolean[] primes = new boolean[upperBound + 1]; Arrays.fill(primes, true); ArrayList<Integer> numbers = new ArrayList<>(); for (int i = 2; i <= upperBound; i++) { if (!primes[i]) continue; for (int j = i * i; j <= upperBound; j += i) { primes[j] = false; } if (primes[i]) numbers.add(i); } return numbers; } private static int findOffset(int start, int prime) { for (int i = 0; i < prime; i++) if (start++ % prime == 0) return i; return -1; } public static void listPrimes(int lowerBound, int upperBound) { ArrayList<Integer> segmentPrimes = segmentSieve((int) Math.floor(Math.sqrt(upperBound))); int[] offsets = new int[segmentPrimes.size()]; boolean[] primes = new boolean[1 + upperBound - lowerBound]; Arrays.fill(primes, true); for (int i = 0; i < offsets.length; i++) { int tmp = segmentPrimes.get(i); offsets[i] = findOffset(lowerBound, tmp); for (int j = offsets[i]; j < primes.length; j += tmp) { if (!primes[j] || (j + lowerBound) == tmp) continue; primes[j] = false; } } for (int i = 0; i < primes.length; i++) { if (primes[i] && (i + lowerBound) != 1) System.out.println(i + lowerBound); } System.out.println(); } public static void main(String[] args) { Scanner in = new Scanner(System.in); int lowerBound = in.nextInt(); int upperBound = in.nextInt(); listPrimes(lowerBound, upperBound); in.close(); }}
Windowed Sieve of Eratosthenes in Java
java;algorithm;primes;sieve of eratosthenes
What makes your code difficult to read is nonsense (in the sense of 'carries no meaning') like int tmp, or stuff like the condition (j + lowerBound) == tmp. The superfluous parentheses are just noise but j + lowerBound does not make sense at all, as it corresponds to offsetting the current index j into the window by the window's lower bound. lowerBound + j would make sense, as it corresponds to the actual number represented by the jth slot of the window. The fact that operator + is commutative is beside the point; it's humans who must understand your code. And your code is so difficult to understand that you don't even understand it fully yourself! Trying to express code with the greatest possible clarity can be a great aid in understanding the problem under consideration; on the other hand, churning out code when the problem has not been understood achieves the opposite effect.The problem under consideration has no need of arrays containing offsets into the window. The algorithm just needs one offset during each iteration of the outer loop, just once. The computation of the offset would have warranted some 'hands-free' thinking time before the actual coding. The first location that needs to be worked on by the inner 'cross off the composites' loop is p * p, where p is the current prime; let's call this start. This value needs to be reduced to an offset within the window being worked on, which is trivial (start - lower_bound) if start >= lower_bound. If start < lower_bound then it's time to unsheath the modulo operator.// (p is the current prime)int stride = p << (p & 1);int start = p * p;int offset;if (start >= lower_bound){ offset = start - lower_bound;}else{ int before_the_segment = (lower_bound - start) % stride; offset = before_the_segment == 0 ? 0 : stride - before_the_segment;}// ...An alternative expression for the tricky case would beoffset = (stride - (lower_bound - start) % stride) % stridebut it leaves no clue for the reader as to what's intended... This can be simplified further to drop one module operator, by careful consideration of the magnitudes involved. But it is easy to shoot oneself in the foot. If there's no pressing need for shaving cycles then it's better to leave the simpler - if more verbose - code in place.The original code uses the prime p itself as stride, which has the effect that all even composites get crossed off multiple times (first during the run with p = 2, and then again during all even steps during the runs with other primes). This means that the code does almost double the amount of work that's necessary. A simple fix is to use p + p as stride when p <> 2. In effect this amounts to using a two-spoke wheel and ignoring the even spoke entirely except for its one lone prime occupant (the number 2). Another solution - which halves memory consumption - would be to represent only the even numbers in the sieve array and to pull the only even prime out of thin air when needed. Higher-order wheels would further reduce memory consumption and the amount of work done, but they would complicate the code considerably. By contrast, the two-spoke wheel gives a lot of bang while adding only one minuscule complication to the code.
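To show where that offset computation sits in the overall flow, here is a minimal sketch of the windowed marking loop. The variable names are mine rather than yours, smallPrimes stands for the primes up to sqrt(upperBound) (e.g. from your segmentSieve), and it assumes lowerBound >= 2:

boolean[] isPrime = new boolean[upperBound - lowerBound + 1];
java.util.Arrays.fill(isPrime, true);

for (int p : smallPrimes) {
    int stride = p << (p & 1);          // 2 for p == 2, otherwise 2 * p (skip even multiples)
    long start = (long) p * p;
    long offset;
    if (start >= lowerBound) {
        offset = start - lowerBound;
    } else {
        long beforeTheSegment = (lowerBound - start) % stride;
        offset = beforeTheSegment == 0 ? 0 : stride - beforeTheSegment;
    }
    for (long j = offset; j < isPrime.length; j += stride) {
        isPrime[(int) j] = false;       // lowerBound + j is composite
    }
}
// every index i still set to true now corresponds to the prime lowerBound + i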
_webapps.96209
I mistakenly created my album and set it to unlisted and now I'm unable to locate my album.
Change my album from unlisted to public in Google+
google plus;google plus photos
null
_unix.210747
I am currently using grep like this:

grep "search_string" search_file > output_file

Here is an example:

grep "ich arbeite" deu.txt > out.txt

When used in this way, I only get the last 20 matches in out.txt. I think this is because the search_string contains a space and quotation marks, because when I try:

grep arbeite deu.txt > out.txt

...I get the expected result (all matches show up in out.txt). How can I get grep to return all matches when I search for a string containing spaces?

EDIT: My input looked like this:

...
I wonder why.   Ich frage mich, warum.
I work a lot.   Ich arbeite viel.
I'll ask Tom.   Ich frage mal Tom.
...
I wish Tom wouldn't keep bothering me with questions.   Ich wünschte, Tom würde aufhören, mich mit Fragen zu nerven.
I wonder if Tom realizes how many hours a day I work.   Ich frage mich, ob Tom klar ist, wie viele Stunden täglich ich arbeite.
...
How to get more than 20 lines of output with grep when searching for a string with spaces
shell;grep;macintosh
null
_cs.35283
I am having some problems building a set that should look like this: $S = A\times B \subset \mathbb{N} \times \mathbb{N}$, where $S$ is decidable but $A$ is undecidable. Could somebody give me a clue how to actually do this?
Decidable product with an undecidable projection
computability
Let $A$ be the set of Turing machines that halt, which is clearly undecidable. Now $B$ needs to be something that helps you decide $A$ (a certificate, if you will, in the sense of a certificate for an $NP$ problem). Hint: if a TM halts, it halts in a finite number of steps.
_webmaster.29705
Possible Duplicate:How to provide Google reviews information? Check out this google places page as an example:http://maps.google.com/maps/place?hl=en&sugexp=erf1&pq=virginia+honda+&cp=17&gs_id=50&xhr=t&bav=on.2,or.r_gc.r_pw.r_qf.,cf.osb&ix=sea&biw=1600&bih=1109&um=1&ie=UTF-8&q=los+angeles+honda+dealers&fb=1&gl=ca&hq=honda+dealers&hnear=0x80c2c75ddc27da13:0xe22fdf6f254608f4,Los+Angeles,+CA,+USA&cid=17542835494479136794&ei=rnuxT_xqw4SDB4ynhakJ&sa=X&oi=local_result&ct=placepage-link&resnum=3&sqi=2&ved=0CL0BEOIJMAIAt the bottom you will seeReviews from around the web: citysearch.com (49) - insiderpages.com (18) - dealerrater.com (37)I'm working on a website that is a similar to these directories, people can come and review dealers.We've marked the reviews with rich snippets, the problem is sometimes they do show up on some google places pages and sometimes they do not.It seems completely random, google links to wrong pages, and also the count is not accurate.I was wondering if there is an official guide on how to do this properly or if this is less about technical aspects of doing this and more about business relationships with directories and google.Any help is appreciated.
How do you feed reviews to Google places page for businesses?
seo;google search;local seo;google local search
null
_cs.54006
I am working on a project which needs to traverse a 2d spatial grid, and would like to use a space-filling curve indexing scheme. Unfortunately, I have no guarantee that the input grid will be a power of two along either dimension. In fact, it may be just barely over a power of two in some cases. Virtually extending the grid to be a power of two in size may use too much memory (it is a very large grid). Is it still possible to use a space-filling curve, such as a Morton curve, to index this grid?
Is it possible to use a space-filling curve to index a 2D grid which is not $2^x$ by $2^y$?
data structures
null
_softwareengineering.348278
I'm coding a Hash-Life implementation in C++14 and working on a multi-threaded implementation. It has obvious potential for multi-threading. The task can be broken down into essentially 13 sub-tasks: 9 of the tasks are largely independent and the other 4 depend on 4 of the 9 but not on each other. One task breaks into 9 (parallel), which come back together to make 4 (parallel).

The process is actually recursive, so those 13 will break down into 13 of their own, down to a trivial tier which is just calculated by brute force. At each tier of recursion the tasks get smaller, and it seems likely there's a point where multi-tasking loses its return and the algorithm should go sequential. But that's a detail. Just imagine multiple tiers of parallelism breaking down.

This whole thing is just begging to be pushed through a worker thread pool. What I'm inviting is ideas about smart ways of handling this. A basic thread pool won't do because there's a risk (in practice an inevitability) of an interesting deadlock. If you chop a task into 13 tasks and then wait for them to complete (or twelve, and do one in series, or whatever) you will invite deadlock. Essentially you will end up with all the threads waiting on un-started tasks in the queue, which will never be started because all the threads are waiting on un-started tasks in the queue that will... and so on. You roughly need more threads than tasks, and while it's possible to create threads willy-nilly it isn't an efficient model and, compared to the idea below, results in excessive thread swapping.

This seems to me to be a potentially generic problem. A basic thread-pool model at best ends with n (task) producers and m consumers (workers), but this is a producing-consumer situation that seems like a natural way to parallelise recursion and isn't covered by that n-to-m case.

My main idea is to avoid deadlock when a worker realises it needs a task completed to proceed and that task is un-started, by taking the task back and completing it in series (deadlock avoided). There's an attractive feature here: rather than sleeping and allowing another thread to be swapped in, threads will tend to just remain active and pull work in. There is even scope to estimate the biggest un-started task and start that first, thereby minimising the critical path and minimizing time by maximizing utilization. The downside is that there's an overhead of putting a task in a queue if it then gets pulled out and done in the thread that submitted it. There are ways of mitigating that.

My problem is that all the searching I can do won't come back with any analysis of this kind of problem. People endlessly want to rake over the separate producer vs. consumer problem, and I can't find how to cover this recursive model which seems so natural.

Considering a 'consumers that are themselves producers' model, I'm looking for:

- Resources (patterns, papers, blogs, code, etc.) covering this problem.
- Proposals, insights or thinking.
- Comments or ideas.

That is particularly in consideration of avoiding the deadlock that 'naive' use of a 'basic' thread pool invites.

C++14 isn't important here except that I have a decent threading library on hand. I have a toy solution solving the problem of summing the numbers 1 - n by recursively dividing the range and adding the parts in C++, and am prepared to share it, but it's just a bit too long to put in this post.
Producing Consumers Thread Pool
design patterns;concurrency;parallelism
You are looking at units of work run in parallel. Many programming environments have something like a Task in .net, which is a unit of work that is not tied to a specific thread. The abstraction you need to perform the work might be called a task. These are typically queued and run on worker threads by a subsystem (a library). There can be many more tasks than worker threads.The dependencies are the issue. Avoid parallelizing the higher level loops, or recursion, and only parallelize the leaf node work, where presumably you need enough CPU cycles to be worth running in parallel. The algorithm is then to do all your recursion, loops, etc, in one thread, and create a queue of tasks that have no other dependencies, and can be performed in parallel.If you are having trouble distinguishing between the work needed by the composite nodes and the leaf nodes, the design needs to be revisited. A unit of work (task) that can be run independently on a worker thread should use mutable state that is owned only by that task -- if you need to access shared mutable state, you may gain nothing by parallelization, since you must serialize access to the shared state. In other words, you should not access the producer consumer queues within your units of work.
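To make the "recurse sequentially, parallelize only the leaves" idea concrete, here is a minimal sketch in the spirit of the sum-1-to-n toy problem the question mentions. The names, the grain size and the use of std::async as the scheduler are my choices for illustration, not a prescription; a real implementation would hand the leaf list to a fixed-size worker pool instead:

#include <future>
#include <utility>
#include <vector>

long leafWork(long lo, long hi) {               // placeholder leaf computation
    long s = 0;
    for (long i = lo; i <= hi; ++i) s += i;
    return s;
}

// All recursion happens on the calling thread; it only collects leaf ranges.
void splitIntoLeaves(long lo, long hi, long grain,
                     std::vector<std::pair<long, long>>& leaves) {
    if (hi - lo <= grain) { leaves.emplace_back(lo, hi); return; }
    long mid = lo + (hi - lo) / 2;
    splitIntoLeaves(lo, mid, grain, leaves);
    splitIntoLeaves(mid + 1, hi, grain, leaves);
}

long parallelSum(long n) {
    std::vector<std::pair<long, long>> leaves;
    splitIntoLeaves(1, n, 1 << 16, leaves);

    // Only the independent leaves run concurrently, so no worker ever blocks
    // waiting on another queued task, and the deadlock scenario cannot arise.
    std::vector<std::future<long>> futures;
    for (const auto& r : leaves)
        futures.push_back(std::async(std::launch::async, leafWork, r.first, r.second));

    long total = 0;
    for (auto& f : futures) total += f.get();
    return total;
}

Because the composite (non-leaf) work is trivial here, it stays on one thread; if the composite tiers also carry real work, they can be folded in after their children's futures have been collected, which still never leaves a worker blocked on an unstarted queue entry.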