id | question | title | tags | accepted_answer |
---|---|---|---|---|
_unix.183801 | First of all, I'm on OSX10. My default shell is BASH, which I have set up (through .profile and .bashrc) to automatically run the FISH shell when I open my terminal emulator. This allows me to set up variables etc. in BASH before I load up FISH. Sometimes, however, I want to run scripts which are written for BASH, from my FISH shell. This is necessary because FISH isn't syntactically compatible with BASH. When typing 'bash' in my FISH, the BASH I open automatically opens another FISH on top of itself, because of my .profile/.bashrc. That makes it all fishy (pun intended), because I then have to exit the top FISH to get into the BASH on top of the second FISH. My question is: I know BASH can be loaded up as a login shell (executing .profile), and a non-login shell (executing .bashrc). Would it be possible to add a third 'context', which I can set up to load when BASH is run from inside FISH? That would solve the double-FISH problem because I'd be able not to load either .bashrc or .profile. I hope you understand my question -- thanks in advance for answers! | Custom bash 'context' when running from FISH | bash;osx;fish | You could set a variable in the script which starts fish to note that you're in fish: export IN_FISH=yes. Then, before that, you check whether it's already set: if [ ${IN_FISH} != yes ]; then export IN_FISH=yes; fish; fi (replace fish with the command you use to start fish). Thus, in your first bash, IN_FISH isn't set, so it gets set and fish is started. When you start bash from FISH, IN_FISH is already set, so bash doesn't start fish again... |
_unix.4029 | I know many directories with .d in their name: init.d, yum.repos.d, conf.d. Does it mean directory? If yes, from what does this disambiguate? UPDATE: I've had many interesting answers about what the .d means, but the title of my question was not well chosen. I changed 'mean' to 'stand for', I hope this is clearer now. | What does the .d stand for in directory names? | directory;fhs | The .d suffix here means directory. Of course, this would be unnecessary as Unix doesn't require a suffix to denote a file type, but in that specific case, something was necessary to disambiguate the commands (/etc/init, /etc/rc0, /etc/rc1 and so on) and the directories they use (/etc/init.d, /etc/rc0.d, ...). This convention was introduced at least with Unix System V but possibly earlier. The init command used to be located in /etc but is generally now in /sbin on modern System V OSes. Note that this convention has been adopted by many applications moving from a single configuration file to multiple configuration files located in a single directory, e.g. /etc/sudoers.d. Here again, the goal is to avoid name clashing, not between the executable and the configuration file but between the former monolithic configuration file and the directory containing them. |
_vi.8020 | My colour scheme (morning) doesn't play nice with the quickfix window: I cannot read the selected item's location, because the foreground and background colours are the same. Because of this I want to redefine some highlighting styles, e.g. for Search and LineNr. However, I only want to do this in the quickfix window.When I edit ~/.vim/syntax/qf.vim with my changes, this affects also highlighting in other syntaxes. How can I change highlighting styles for one syntax only? I'm using, for example:hi Search ctermbg=white | Overriding highlighting style for one syntax | syntax highlighting;search;colorscheme;quickfix | null |
_codereview.151279 | There are 2 versions of the get_startup_folders() method below, both work as expected and are called only once.The 1st version has several minor pieces of repeated code which are removed by the use of an inner function in the 2nd version.I've read several times that the use of inner functions is often frowned on by Python pros, except for in a narrow range of specific cases (e.g. closures and factory functions). I was wondering whether the 2nd version below would be considered superior to the 1st version by experienced Python programmers.1st version:def get_startup_folders(self): folders = [] for item in STARTUP_FOLDERS: if item == home: if self.home_folder not in folders: folders.append(self.home_folder) elif item == file: if self.file_folder: if self.file_folder not in folders: folders.append(self.file_folder) elif item == project: for folder in self.project_folders: if folder not in folders: folders.append(folder) elif item == user: for folder in self.user_folders: if folder not in folders: folders.append(folder) # Fail-safe: If empty then use the home folder as a fall back; # self.home_folder is guaranteed to contain the home folder. if len(folders) == 0: folders.append(self.home_folder) return folders2nd version:def get_startup_folders(self): folders = [] def add_folder(folder): if folder and folder not in folders: folders.append(folder) for item in STARTUP_FOLDERS: if item == home: add_folder(self.home_folder) elif item == file: add_folder(self.file_folder) elif item == project: for folder in self.project_folders: add_folder(folder) elif item == user: for folder in self.user_folders: add_folder(folder) # Fail-safe: If empty then use the home folder as a fall back; # self.home_folder is guaranteed to contain the home folder. if len(folders) == 0: add_folder(self.home_folder) return folders | Getting startup folders | python;comparative review;file system | null |
_webapps.65685 | Does the Spotify web player (play.spotify.com) use p2p transfers to supplement the bandwidth available for streaming directly from their own servers? I know their software client used to do that but I wasn't sure about the online version. | Does the Spotify web player (play.spotify.com) use peer-to-peer connections? | spotify | null |
_unix.374282 | I am storing some multi-GB files on two hard drives. After several years in offline storage (unfortunately in far from ideal conditions), I often get some files with bit-rot (the two copies differ), and want to recover the file. The problem is, the files are so big, that within the same file, on some storage devices one bit gets rotten, whereas on another one a different bit gets bit-rotten, and so neither of the disks contains an uncorrupted file.Therefore, instead of calculating the MD5 checksums of the entire files, I would like to calculate these checksums of each 1KB-chunk. With such a small chunk, there is a lot less chance that the same 1KB-chunk will get corrupted on both hard drives. How can this be done? I am sure it shouldn't be hard, but I spent over an hour trying different ways, and keep failing. | How to separately checksum each block of a large file | files;split;hashsum;checksum | I am not offering a complete solution here, but rather I'm hoping to be able to point you along the way to building your own solution. Personally I think there are better tools, such as rsync, but that doesn't seem to fit the criteria in your question.I really wouldn't use split because that requires you to be able to store the split data as well as the original. Instead I'd go for extracting blocks with dd. Something like this approach may be helpful for you.file=/path/to/fileblocksize=1024 # Bytes per blocknumbytes=$(stat -c '%s' $file)numblocks=$((numbytes / blocksize))[[ $((numblocks * blocksize)) -lt $numbytes ]] && : $((numblocks++))blockno=0while [[ $blockno -lt $numblocks ]]do md5sum=$(dd bs=$blocksize count=1 skip=$blockno if=$file 2>/dev/null | md5sum) # Do something with the $md5sum for block $blockno # Here we write to stdout echo $blockno $md5sum : $((blockno++))done |
_cs.13677 | This is the first time that I'm looking in depth into the topic, although I've always been curious.Could someone let me know about online resources (courses, tutorials, etc) and books that cover the basics of the topic?I'd like to explore both the theoretical part and the more practical part of Data Mining. | Getting started with Data Mining | reference request;data mining | You can start with Data Mining: Concepts and Techniques.To see data mining algorithms in practice you can use Weka. |
_codereview.7001 | I'm generating all combinations of an array, so for instance, [a, b, c, d] will generate:[ a, b, ab, c, ac, bc, abc, d, ad, bd, abd, cd, acd, bcd, abcd]Here's the code I've written that does complete this task.What I'd like to know is if there is a better way, as iterating over the array twice feels like I'm cheating, or the complexity of the code is much more computationally expensive than it needs to be.Also, the name for a function that takes an array and returns the combinations, what might that be called? Combinator seems inappropriate.var letters = [a, b, c, d];var combi = [];var temp= ;var letLen = Math.pow(2, letters.length);for (var i = 0; i < letLen ; i++){ temp= ; for (var j=0;j<letters.length;j++) { if ((i & Math.pow(2,j))){ temp += letters[j] } } if (temp !== ) { combi.push(temp); }}console.log(combi.join(\n)); | Generating all combinations of an array | javascript;combinatorics | A recursive solution, originally seen here, but modified to fit your requirements (and look a little more JavaScript-y):function combinations(str) { var fn = function(active, rest, a) { if (!active && !rest) return; if (!rest) { a.push(active); } else { fn(active + rest[0], rest.slice(1), a); fn(active, rest.slice(1), a); } return a; } return fn(, str, []);}Test:combinations(abcd)Output:[abcd, abc, abd, ab, acd, ac, ad, a, bcd, bc, bd, b, cd, c, d]Regarding the name: Don't name it permutations; a permutation is an arrangement of all the original elements (of which there should be n! total). In other words, it already has a precise meaning; don't unnecessarily overload it. Why not simply name it combinations? |
_codereview.3252 | I've only been writing php for a couple of months, and I've never really had anyone to look at any code I have written. I've written this class, that returns an email address from a database, based on a set schedule. I feel like a lot of the time, I'm doing things the long way, or just the wrong way. So I would really appreciate it if someone could review this class, and make any suggestions at all, as far as coding style, optimization, etc..Let me explain how it works - a quick overview.This email script is called by procmail, and the email address it returns gets forwarded by procmail. The database it connects to has 4 tables(right now).The schedule table has 4 columns, and time in/out are military time.date | tc_name | time_in | time_outThe Tour Consultant table has columns:tc_name | friendly_name | emailThe counters table holds the counters for the currently active Tour Consultants:tc_name | countThe lastactivehash table is one field only, and just holds a string of thelast tour consultants that were active. Each time this string changes the counterstable is flushed, and reinitialized with the current active Tour Consultants.How this is supposed to work: Should divy up emails evenly between current active Tour Consultants. If no one is active, it should look ahead to tomorrow, if it is evening, or the same day if it is morning, and process everyone that is working that day.I tried to name the functions in a way that would explain what they are doing. This is a pretty simple class, but if I should comment it out, let me know.Note: get_manual_override is not implemented.<?phpclass schedule {private $tomorrow;private $today;private $timenow;private $dblink;private $active_day;private $active_time;private $counters;public $final_email; function __construct() { $this->dblink = new mysqli('127.0.0.1', '####', '####', '####'); date_default_timezone_set('America/Anchorage'); $this->today = date('Y-m-d'); $this->tomorrow = date('Y-m-d', mktime(0, 0, 0, date(m) , date(d)+1, date(Y))); //military time $this->timenow = date('Hi'); //This is to be implemented as of yet if ($address = $this->get_manual_override){ //email it die(); } else{ //Query the db for everyone that has today's date set if ($this->get_active_full_day($this->today)){ //Filter these results based on who is active at the current time. $this->get_active_time(); } //Fallback, try to get tomorrow. else{ if($this->get_active_full_day($this->tomorrow)){ $this->process_whole_day(); $this->filter_active(); } //Ultimate Fallback, set a default array of emails here (TODO) else{ echo 'didnt find anything'; } } } } //Query the db for people who are active today private function get_active_full_day($date) { $query = SELECT * FROM schedule LEFT OUTER JOIN tour_consultants ON tour_consultants.tc_name = schedule.tc_name WHERE `date` = '$date'; $result = $this->dblink->query($query) ; if((isset($result->num_rows)) && ($result->num_rows != '')) { $itr = 0; //Store the results into an associative array. while ($row = $result->fetch_assoc()) { $this->active_day[$itr]['time_in'] = $row['time_in']; $this->active_day[$itr]['time_out'] = $row['time_out']; $this->active_day[$itr]['tc_name'] = $row['tc_name']; $this->active_day[$itr]['email'] = $row['email']; $itr++; } return true; } else{ return false; } } //This will only run if Today's date is set up in the database. private function get_active_time() { //Loop through the array of active today, and look for people who are currently working. 
//If they are active, add them to the activetime array. foreach($this->active_day as $record => $ar) { if($this->is_between($this->timenow, $ar['time_in'], $ar['time_out'])) $this->active_time[] = $ar; }//If it didn't find anybody currently active. if(!isset($this->active_time)){ $times_out = array(); $times_in = array();//Make an array of everybody working today's times in and times out. $itr = 0; foreach($this->active_day as $record => $ar) { $times_in[$itr] = $ar['time_in']; $times_out[$itr] = $ar['time_out']; $itr++; }//If the time now is less than the minimum of the times in, then it is morning, and process everyone working //Today if($this->timenow < min($times_in)) { if($this->process_whole_day()){ $this->filter_active(); return true; } else{ return false; } //If the time now is later than the max of times out, get everyone working tomorrow, and process them. } elseif($this->timenow > max($times_out)) { if($this->get_active_full_day($this->tomorrow)){ if($this->process_whole_day()){ $this->filter_active(); return true; } else{ return false; } } else{ return false; } } else { //THis else happens if we are probably between shifts... //Process the whole current day here. $this->process_whole_day(); $this->filter_active(); } } //This else is what happens when it does find people working at the current time. else{ $this->filter_active(); return true; } } private function filter_active() { if(!isset($this->active_time)){ return false; } else{/* Get a list of the names of people that were active last time an email was sent. If the list has changed, reset the email counters, if it hasn't changed, get the current counters. */ if($lastactive = $this->get_last_active()){ $curractive = ''; foreach($this->active_time as $arr) { $curractive .= $arr['tc_name']; } if($lastactive != $curractive) { $this->reset_counters(); } else{ $this->counters = $this->get_counters(); }} //Error getting last hash, so reset counters to be safe.else{ $this->reset_counters();} /* Add the counters array to the active time array. */ $min = min($this->counters); foreach($this->active_time as $id => $arr) { if(isset($this->counters[$arr['tc_name']])){ $this->active_time[$id]['sent'] = $this->counters[$arr['tc_name']]; } else{ $this->active_time[$id]['sent'] = 0; $min = 0; } } /* Find the people who have been emailed the least */ foreach($this->active_time as $id => $arr) { if($arr['sent'] == $min){ $leastsent[$id] = $arr; } }/* If more than one person has the same minimum counter, pick a random one of them. */ if(count($leastsent) > 1){ $final = array_rand($leastsent, 1); $final = $leastsent[$final]; } else{ $final = $leastsent['0']; } if(isset($final)) { $newcounter = $final['sent']; /* Increment the counter, and store it in the database, then set the lastactive names in the database. 
($this->activetime) */ $newcounter++; $this->set_counter($final['tc_name'], $newcounter); $this->set_last_active(); $this->final_email = $final['email']; } } } /* Get the list of people who were last active */ private function get_last_active() { $query = SELECT hash FROM lastactivehash WHERE `id` = '0' LIMIT 1; if($result = $this->dblink->query($query)) { while ($row = $result->fetch_assoc()) { $oldhash = $row['hash']; } return $oldhash; } else{ return false; } } /* Set the list of people who were active this time around */ private function set_last_active() { $names = ''; foreach($this->active_time as $arr) { $names .= $arr['tc_name']; } $query = UPDATE lastactivehash SET `hash`='$names' WHERE `id`='0'; $result = $this->dblink->query($query); if($this->dblink->affected_rows != 1) return false; else return true; } /* Get the list of email counters */ private function get_counters() { $query = SELECT * FROM counters; if($result = $this->dblink->query($query)) { while ($row = $result->fetch_assoc()) { $counters[$row['tc_name']] = $row['count']; } return $counters; } else{ return 0; } } /* Set a single email counter */ private function set_counter($name, $count) { if($name == '' || $count == ''){ return false; } else{ $query = UPDATE counters SET `count`='$count' WHERE `tc_name`='$name'; $this->dblink->query($query); if($this->dblink->affected_rows != 1) return false; else return true; } } /* Reset email counters, set everybody with an active time to 0 */ private function reset_counters() { $truncate = TRUNCATE TABLE counters; $this->dblink->query($truncate); foreach($this->active_time as $arr){ $name = $arr['tc_name']; $query = INSERT INTO counters (tc_name, count) VALUES ('$name', '0'); $this->dblink->query($query); if($this->dblink->affected_rows != 1) $bad = 1; $this->counters[$arr['tc_name']] = '0'; } if ($bad = 1) return false; else return true; } /* Simple utility function check if one value is between two others. */ private function is_between($value, $min, $max){ if (($value >= $min) && ($value <= $max)) return true; else return false; } /* Helper function - Set the array of active_day, to currently active, active_time */ private function process_whole_day(){ if((isset($this->active_day)) && (!isset($this->active_time))){ $this->active_time = $this->active_day; return true; } else{ return false; } }} | PHP email selector class | php;mysql | I'd consider:breaking it into smaller object (e.g. extract counters)do not hardcode the dblink connection + settings (e.g. pass the object in the constructor)use phpdoc commentscorrect formattingTry to write unit test for this class, you'll spot all the drawbacks quickly. |
_unix.78106 | I have a computer connected via ethernet cables to my home router that preforms server like duties. Specifically, for some time I was running rtorrent on this machine. I wish I knew when this started to occur (then I could try and associate the break with an event), yet mysteriously all of my active torrents suddenly failed to seed or download. The program error message read Tracker: [Failed sending data to the peer].My first attempt at troubleshooting was to completely uninstall and reinstall rtorrent. I rewrote (read: uncommented the example) my config files and started clean. The first torrent file in erred with the same message. I am familiar with the common key sequences to fix torrents that go rogue in rtorrent: C-d, C-k, C-e, C-r, C-s and many variations on the same, none of them proved fruitful. Next up the operating system. I formatted a rather poorly maintained Linux Mint install and wrote it over with a fresh command-line Ubuntu installation. Feeling sure that I had nuked the issue I installed rtorrent only to once again be greeted with [Failed sending data to the peer].If my thinking is logical the next step would be to investigate the connection between the machine and the router. However I am not really sure how to do that (only an amateur with network engineering) or what to analyze/check or even how to start. Any recommendations for how to proceed? Also if you need any specifics about the setup to offer advice, just ask and I will post links to config files or other information.Extra Info: I can successfully download torrents from other machines on the same subnet.Edit 1: I've verified something suspicious about the machine in question. When it is requested to ping www.google.com it replies unknown host If I discover google's public IP on a different computer (same subnet!) and then using that information request the faulty machine to ping 74.125.239.131 instead it succeeds. Could this DNS issue (that's what it is right?) be affecting the torrent protocol? | How should I proceed in troubleshooting rtorrent? | rtorrent | null |
_unix.119921 | If my pwd is ~/repos/blog/app/views/, I'd like to show only blog/app/views in the prompt i.e. I want to show only the project root. Project root is the parent directory of .git directory. Is there a way I can achieve this? | Show path from project root in ZSH Prompt | zsh;prompt | null |
_unix.156949 | The right button on my kid's acer v5 is broken. It got wet with milk and now it suddenly appears to keep pressed.How can I deactivate the buttons at all to only work with the touchpad?I have OpenSUSE 13.1 with KDE. | how to keep the touchpad but want to deactivate the buttons | kde;opensuse;touchpad | Most touchpads can be manipulated with the command line tools synclient and xinput. You can read more about both of these command line tools here in the ArchLinux wiki:https://wiki.archlinux.org/index.php/Touchpad_SynapticsOf the 2 tools, I do not believe you can disable the uttons using synclient. You may be able to do so using xinput. Of the 2 tools, this is the more cumbersome one to use, but it's not overly difficult.If you run it with the -h switch you'll get the following usage info:$ xinput -husage : xinput get-feedbacks <device name> xinput set-ptr-feedback <device name> <threshold> <num> <denom> xinput set-integer-feedback <device name> <feedback id> <value> xinput get-button-map <device name> xinput set-button-map <device name> <map button 1> [<map button 2> [...]] xinput set-pointer <device name> [<x index> <y index>] xinput set-mode <device name> ABSOLUTE|RELATIVE xinput list [--short || --long || --name-only || --id-only] [<device name>...] xinput query-state <device name> xinput test [-proximity] <device name> xinput create-master <id> [<sendCore (dflt:1)>] [<enable (dflt:1)>] xinput remove-master <id> [Floating|AttachToMaster (dflt:Floating)] [<returnPointer>] [<returnKeyboard>] xinput reattach <id> <master> xinput float <id> xinput set-cp <window> <device> xinput test-xi2 <device> xinput map-to-output <device> <output name> xinput list-props <device> [<device> ...] xinput set-int-prop <device> <property> <format (8, 16, 32)> <val> [<val> ...] xinput set-float-prop <device> <property> <val> [<val> ...] xinput set-atom-prop <device> <property> <val> [<val> ...] xinput watch-props <device> xinput delete-prop <device> <property> xinput set-prop <device> [--type=atom|float|int] [--format=8|16|32] <property> <val> [<val> ...] xinput disable <device> xinput enable <device>I would start with the options whose names include the text button.$ xinput -h 2>&1 | grep button xinput get-button-map <device name> xinput set-button-map <device name> <map button 1> [<map button 2> [...]]You'll need the device's name in order to query it. For that you'll use xinput list.Example$ xinput list Virtual core pointer id=2 [master pointer (3)] Virtual core XTEST pointer id=4 [slave pointer (2)] Logitech Unifying Device. Wireless PID:4013 id=9 [slave pointer (2)] SynPS/2 Synaptics TouchPad id=11 [slave pointer (2)] TPPS/2 IBM TrackPoint id=12 [slave pointer (2)] Virtual core keyboard id=3 [master keyboard (2)] Virtual core XTEST keyboard id=5 [slave keyboard (3)] Power Button id=6 [slave keyboard (3)] Video Bus id=7 [slave keyboard (3)] Sleep Button id=8 [slave keyboard (3)] AT Translated Set 2 keyboard id=10 [slave keyboard (3)] ThinkPad Extra Buttons id=13 [slave keyboard (3)]It's typically SynPS/2 Synaptics TouchPad this device handle, but may vary for your particular hardware.$ xinput get-button-map SynPS/2 Synaptics TouchPad1 2 3 4 5 6 7 8 9 10 11 12 These are all the buttons that are specified for my Thinkpad T410 laptop's touchpad. Any corners and such on the touchpad are also considered buttons, that's why there are so many in the above output. 
You can find out more about which buttons are which number in the above list using the --long switch.Example$ xinput list --long SynPS/2 Synaptics TouchPad ... SynPS/2 Synaptics TouchPad id=11 [slave pointer (2)] Reporting 8 classes: Class originated from: 11. Type: XIButtonClass Buttons supported: 12 Button labels: Button Left Button Middle Button Right Button Wheel Up Button Wheel Down Button Horiz Wheel Left Button Horiz Wheel Right None None None None None Button state: Class originated from: 11. Type: XIValuatorClass Detail for Valuator 0: Label: Rel X Range: 1472.000000 - 5888.000000 Resolution: 75000 units/m Mode: relative Class originated from: 11. Type: XIValuatorClass Detail for Valuator 1: Label: Rel Y Range: 1408.000000 - 4820.000000 Resolution: 105000 units/m Mode: relative Class originated from: 11. Type: XIValuatorClass Detail for Valuator 2: Label: Rel Horiz Scroll Range: 0.000000 - -1.000000 Resolution: 0 units/m Mode: relative Class originated from: 11. Type: XIValuatorClass Detail for Valuator 3: Label: Rel Vert Scroll Range: 0.000000 - -1.000000 Resolution: 0 units/m Mode: relative ...OK that's great, but how do I disable a button?If you take a look at the man page for xinput you'll see the following clue:$ man xinput... --set-button-map device map_button_1 [map_button_2 [...]] Change the button mapping of device. The buttons are specified in physical order (starting with button 1) and are mapped to the logical button provided. 0 disables a button. The default button mapping for a device is 1 2 3 4 5 6 etc....So if you take note of which button is the one that you want to disable using the xinput list --long SynPS/2 Synaptics TouchPad, you could do the following, if say you wanted to disable button #5.$ xinput set-button-map SynPS/2 Synaptics TouchPad 1 2 3 4 0 6 7 8 9 10 11 12NOTE: In the above example, SynPS/2 Synaptics TouchPad can also be replaced by 11, as that is the ID of this particular input, so this is the same as above:$ xinput set-button-map 11 1 2 3 4 0 6 7 8 9 10 11 12Tip on device namesIn the output of xinput list you may have noticed a column with the strings id=#.$ xinput list Virtual core pointer id=2 [master pointer (3)] Virtual core XTEST pointer id=4 [slave pointer (2)] Logitech Unifying Device. Wireless PID:4013 id=9 [slave pointer (2)] SynPS/2 Synaptics TouchPad id=11 [slave pointer (2)] TPPS/2 IBM TrackPoint id=12 [slave pointer (2)]Those IDs can be used instead of the long annoying string: SynPS/2 Synaptics TouchPad.$ xinput list-props 11 |
_webapps.90908 | I have a problem with my google sheets updating the IMPORTXML data everytime i open my sheet.Does importxml function run everytime I open Google Sheets ? Is there a way of turning off this? I don't want it to refresh data every time I open the sheet. Surely there must be a way of controlling when the update must be done. | Import XML data Reloading everytime I open the sheet | google spreadsheets | I don't think there is a built-in way to disable automatic updates of importXML, but here is a workaround. Enter the script given below in the Script Editor. It will add a new menu item, Custom > Update imported data, next time the spreadsheet is opened. Place any importXML formulas in the first row of a sheet and precede them with a backtick, so they are not recognized immediately: `=importXML(http://cnn.com, //div)This doesn't do anything on its own. But when the command Update imported data is executed, it will place the actual formula (without backtick) one row below, so it is executed. After that, it will replace all formulas on the sheet with their output; in particular there will not be any active importXML formula left. The original backticked formula will stay in place, so the data can be refreshed again just by using the same menu item. function onOpen() { var menu = [{name: Update imported data, functionName: update}]; SpreadsheetApp.getActiveSpreadsheet().addMenu(Custom, menu);}function update() { var sheet = SpreadsheetApp.getActiveSheet(); var range = sheet.getDataRange(); range.offset(1, 0).clear(); var values = range.getValues()[0]; for (var j=0; j<values.length; j++) { if (/^`=import/i.test(values[j])) { range.getCell(1, j+1).offset(1, 0).setFormula(values[j].slice(1)); } } SpreadsheetApp.flush(); var range = sheet.getDataRange().offset(1, 0); range.copyTo(range, {contentsOnly: true});}LimitationsWhen updating the output, the script erases everything below the first row, to make room for new data. So you can't have much else on this sheet, other than importXML. Put the rest of logic on other sheets. Alternatively, one can modify the script to keep the first N rows unaffected, and use the rows starting with N+1 for imported data. |
_unix.175584 | I try to run nmcli_dmenu; here is the error message: Error: Object 'networking' is unknown, try 'nmcli help'. Usage: nmcli connection { COMMAND | help } COMMAND := { list | status | up | down | delete } list [id <id> | uuid <id>] status [id <id> | uuid <id> | path <path>] up id <id> | uuid <id> [iface <iface>] [ap <BSSID>] [--nowait] [--timeout <timeout>] down id <id> | uuid <id> delete id <id> | uuid <id> Error: 'con' command 'show' is not valid. Could anybody tell me what's wrong? (Ubuntu, xmonad, network-manager) | nmcli_dmenu doesn't work | networkmanager;dmenu | null |
_ai.2701 | I want to develop an artificial life simulator to simulate cells living in water. I want to see how they search for food, how they live and die, and how they reproduce and evolve. My problem is that I don't know where to start; I have no idea whether there are books or tutorials about how to program this kind of simulator. I also don't know if I can use machine learning here. By the way, I'm a programmer and I want to do it using C++ and Unreal Engine. Where can I find more info about how to do it? | Artificial life simulator | neural networks;machine learning;genetic algorithms | The best approach would be starting with smaller projects involving neural networks and genetic algorithms to gain experience in order to speed up the coding of the project you have proposed; playing around with TensorFlow and Unreal Engine is not a bad idea. Hint: when implementing your idea of artificial life, you should consider that each cell/organism has to have some kind of sensors in order to capture information from the environment; such information, i.e. the position and the distance of the nearest meal and/or predators, the temperature, the pressure and depth of water, should be passed through the neural network to determine the response of the cell. Also, in your environment you should promote the spreading of organisms whose responses are heuristically better, i.e. cells that don't get caught by predators or don't die by starvation. How? Simply by evolving their brain/brains/sensors through a genetic algorithm that favors individuals/species with good parameters. I recommend a nature-inspired AI method called the NEAT model. It explains how to implement neural networks that can be evolved. The paper can be found here: Evolving Neural Networks through Augmenting Topologies. A different approach to NEAT would be Deep Reinforcement Learning; in the link you can find a demo artificial organism that learns how to find meals. There are a ton of parameters and implementations you can consider; the only limit is your creativity. |
_unix.258222 | I am trying to use Jenkins to build a C++ project in a Docker container. I have no problem building in Jenkins, or building in a container outside of Jenkins.Below is what I tried. I am omitting the volumes mapping for clarity.Case 1The following command successfully runs a build in a shell.docker run --rm --interactive=true --tty=true $IMAGE makeHowever when run in Jenkins as an execute shell step Docker returns the following error.cannot enable tty mode on non tty inputCase 2The following command is similar to the previous one but disables interactivity.docker run --rm $IMAGE makeJenkins can run a build successfully. However there are serious issues when aborting a build. The build is immediately marked as aborted but the container keeps running until the build completes. Also the container is not removed after exiting.When run in a shell the command builds successfully but it is not possible to interrupt it. Also the container is removed after exiting.QuestionWould anyone know how to cleanly run builds in Docker containers from Jenkins and retain the capability to abort builds?Using any of the Jenkins plugins is not an option because the Docker calls are inside scripts and cannot be extracted easily. | How to run builds in Docker containers from Jenkins | tty;docker;pty;jenkins | null |
_softwareengineering.206860 | I read in a book that both methods and fields are considered the attributes of a class in Python. However, recently I was told by a friend of mine that methods may not be considered the attributes of a class.Then I decided to check this out again, and found this question mentioned on Wikipedia saying that this is a Python tradition to refer to methods as attributes, in contrast to other object-oriented programming languages.So my question is: why are methods considered the class atributes along with the fields in Python in contrast to other languages?Thank you! | Why are methods considered the class attributes in Python? | object oriented;python;conventions | Your friend was wrong. Methods are attributes.Everything in Python is objects, really, with methods and functions and anything with a __call__() method being callable objects. They are all objects that respond to the () call expression syntax.Attributes then, are objects found by attribute lookup on other objects. It doesn't matter to the attribute lookup mechanism that what is being looked up is a callable object or not.You can observe this behaviour by looking up just the method, and not calling it:>>> class Foo(object):... def bar(self):... pass... >>> f = Foo()>>> f.bar<bound method Foo.bar of <__main__.Foo object at 0x1023f5590>>Here f.bar is an attribute lookup expression, the result is a method object.Of course, your friend me be a little bit right too; what is really happening is that methods are special objects that wrap functions, keeping references to the underlying instance and original function, and are created on the fly by accessing the function as an attribute. But that may be complicating matters a little when you first start out trying to understand Python objects. If you are interested in how all that works, read the Python Descriptor HOWTO to see how attributes with special methods are treated differently on certain types of attribute access.This all works because Python is a dynamic language; no type declarations are required. As far as the language is concerned, it doesn't matter if Foo.bar is a string, a dictionary, a generator, or a method. This is in contrast to compiled languages, where data is very separate from methods, and each field on an instance needs to be pinned down to a specific type beforehand. Methods generally are not objects, thus a separate concept from data. |
_softwareengineering.211696 | I'm making a website in PHP where the user can search a big MySQL database. The user is shown the first result. I want the next button to take the user to the next result, and so on.The trivial solution is for each result page to execute the user's search query again and use OFFSET and LIMIT to get the n-th result that is displayed. But this feels like a Schlemiel the Painter's algorithm: re-executing the same query over and over to get to the n-th result is inefficient.Since others must have faced this situation before: how is this typically solved? | More efficient way to paginate search results | php;mysql | null |
_webmaster.11004 | Possible Duplicate: www.mydomain.com secure but not mydomain.com? Hi. We've got our website running and https / ssl works just great. EXCEPT when users enter http://www.mydomain.com and then enter the secure area and get directed to httpS://www.mydomain.com it works fine. But when users enter just http://mydomain.com and then enter the secure area and get directed to httpS://mydomain.com they get a warning about the certificate being from a website called www.mydomain.com while they're trying to enter mydomain.com... Does our SSL cert not cover both www.mydomain.com and mydomain.com? Are we supposed to buy TWO certs, one for each?? Surely not? Any help or pointers to the standard way of doing things like this would be great. We're using Apache HTTPD to forward requests to Tomcat webapps. Apache is taking care of all the SSL connections. | www.mydomain.com secure but not mydomain.com? | apache;https | null |
_unix.27005 | Suppose I want to encrypt a file so that only I can read it, by knowing my SSH private key password. I am sharing a repo where I want to encrypt or obfuscate sensitive information. By that, I mean that the repo will contain the information but I will open it only in special cases.Suppose I am using SSH-agent, is there some easy way to encrypt the file for only me to open it later?I cannot see why I should use GPG for this, question here; basically I know the password and I want to only decrypt the file by the same password as my SSH key. Is this possible? | Encrypting file only with SSH -priv-key? | ssh;encryption | I think your requirement is valid, but on the other hand it is also difficult, because you are mixing symmetric and asymmetric encryption. Please correct me if I'm wrong.Reasoning:The passphrase for your private key is to protect your private keyand nothing else.This leads to the following situation: You want touse your private key to encrypt something that only you can decrypt.Your private key isn't intended for that, your public key is thereto do that. Whatever you encrypt with your private key can bedecrypted by your public key (signing), that's certainly not what you want. (Whatever gets encrypted byyour public key can only be decrypted by your private key.)So you need to use your public key to encrypt your data, but for that, youdon't need your private key passphrase for that. Only if you want todecrypt it you would need your private key and the passphrase.Conclusion: Basically you want to re-use your passphrase for symmetric encryption. The only program you would want to give your passphrase is ssh-agent and this program does not do encryption/decryption only with the passphrase. The passphrase is only there to unlock your private key and then forgotten.Recommendation: Use openssl enc or gpg -e --symmetric with passphrase-protected keyfiles for encryption. If you need to share the information, you can use the public key infrastucture of both programs to create a PKI/Web of Trust.With openssl, something like this:$ openssl enc -aes-256-ctr -in my.pdf -out mydata.enc and decryption something like$ openssl enc -aes-256-ctr -d -in mydata.enc -out mydecrypted.pdfUpdate:It is important to note that the above openssl commands do NOT prevent the data from being tampered with. A simple bit flip in the enc file will result in corrupted decrypted data as well. The above commands cannot detected this, you need to check this for instance with a good checksum like SHA-256. There are cryptographic ways to do this in an integrated way, this is called a HMAC (Hash-based Message Authentication Code). |
_computerscience.5246 | I want to write plugin (library) for Unity3d (it doesn't matter which framework I will choose for this, question is ), for cutting arbitrary mesh with plane (for simplicity it will be plane for beginning). There are following steps:1) check every triangle whether it lies above, under plane or intersected by plane, assign all vertices to VER_ABOVE or VER_UNDER lists, recalculate triangles, put them in TRI_ABOVE or TRI_UNDER lists2) Split intersected triangles into triangle and quadrilateral, triangulate last one and put them in corresponding lists (TRI_ABOVE or TRI_UNDER)3) Triangulate slice surface, assign it to TRI_ABOVE and TRI_UNDERAnd now the question: Where it is better to calculate intersections, triangulation (for example I will use Constrained Delaunay Triangulation or Sweep Line Non-Convex Polygonal Triangulation cutting into monotone polygonals) and other stuff: on CPU or on GPU (using Shaders), if someone could explain me general pipeline from loading cashing mesh to memory and so on on GPU and on CPU, so I can better understand the most time consuming actions and can optimize some stuff.I will also be thankful for sharing link on code examples calculating similar stuff. | What is better to use for real-time computing Mesh - Plane intersection points, GPU or CPU? | shader;real time;mesh;triangulation;tesselation | doing the calculation to decide whether a point is on one side of a plane or the other is very simple (a single dot product). Doing that 3 times and having a special case when they don't match to split the triangle is pretty fast. It's also a parallel problem. The hardest part is reserving the space for the output.Preparing the data to start computing this on the gpu is already enough overhead that it's not worth doing it on the gpu unless the mesh is very big or you are splitting it multiple times (due to the plane moving for example). Or the data is already in good enough condition to be sent over to the gpu directly through DMA without the cpu needing to touch it at all. |
_unix.7441 | Can root kill init process (the process with pid 1)? What would be its consequences? | Can root kill init process? | root;init | By default, no, that's not allowed. Under Linux (from man 2 kill):The only signals that can be sent to process ID 1, the init process, are those for which init has explicitly installed signal handlers. This is done to assure the system is not brought down accidentally.Pid 1 (init) can decide to allow itself to be killed, in which case the kill is basically a request for it to shut itself down. This is one possible way to implement the halt command, though I'm not aware of any init that does that.On a Mac, killing launchd (its init analogue) with signal 15 (SIGTERM) will immediately reboot the system, without bothering to shut down running programs cleanly. Killing it with the uncatchable signal 9 (SIGKILL) does nothing, showing that Mac's kill() semantics are the same as Linux's in this respect.At the moment, I don't have a Linux box handy that I'm willing to experiment with, so the question of what Linux's init does with a SIGTERM will have to wait. And with init replacement projects like Upstart and Systemd being popular these days, the answer could be variable.UPDATE: On Linux, init explicitly ignores SIGTERM, so it does nothing. @jsbillings has information on what Upstart and Systemd do. |
_webapps.78583 | Every week I get an email message from somebody asking me to confirm my Twitter account. I don't even have a Twitter account! The sender's email address is [email protected], which I find suspicious as it is so generic and does not include the sender's name, which appears in the subject line. I have never opened any of his emails as I suspect this is a malicious phishing scam. In the meantime, I have blocked it so the emails end up in my spam box. Also, I want to report the above to Twitter to see if they can do anything about it, should it be a legitimate user, which I doubt. How do I email or call them as I am not a Twitter user? | How to contact Twitter to report phishing scams | twitter;phishing | null |
_unix.136931 | I've spent so much time trying to get various flavours of Linux to run on my Dell XPS LS02 laptop which only has a HDMI output.I've tried all sorts of solutions and methods including http://bumblebee-project.org/ However none have managed to get me a stable second display working.The question, is there any distribution that supports the intel/nvidia(optimus) hybrid chipset out of the box? | A distro that supports intel/nvidia hybrid graphics out of the box | nvidia;hybrid graphics;intel graphics | null |
_webmaster.16368 | I am interested in learning how to run a server to host a website and provide me with email and all the other features hosting companies offer. I have read snippets from all over about different aspects, but I would like an absolute beginning as I don't really even understand how servers work. Can anyone recommend a site or book that would offer a solid foundation for understanding how a server works? | I want to learn about servers, where should I start? | web hosting;webserver;web development | null |
_softwareengineering.354332 | It's a C#, ASP.NET MVC Project and here is the problem:User can enter their email address in the reset password text field and click on the Reset Password button. Each time the user clicks on the Reset Password button a new email will be sent to that email address.I am trying to limit the user reset password email to only once every x hours.What are the possible solutions?Here is what I have in mind so far, but I guess there are some other smarter ways:Create a cookie on the client side once the first reset password request came, and set the cookie expiry date to x hours. As long as the cookie exists, don't let user reset their password. (The problem with this is that the user can easily use a different browser or remove that cookie manually)Create a new DB table and insert the user email address and expiry date into that table. When user tries to reset his password, as long as the record exists in the DB don't let future reset password emails be sent. | Possible way to handle multiple reset password email | design;design patterns;web development;web applications;passwords | The second approach works well. I used something similar in a recent project. When someone requests a password reset, it generates the verification code (used for the verification link) and puts it into a table along with the email address and a timestamp. If a password reset is requested again, it checks that table first and won't send out the email if another request had been made within the past X minutes.There's a periodic task that then goes through and clears out any password reset entries as they expire (for ex, after 24 hours). |
_cs.53864 | It is known that when a Random Access Machine halts, the output on the registers is going to be $R_1,\ldots,R_{||R_0||}$ after going through its instruction set, where $||R_0||$ denotes the length of the binary representation of $R_0$. Is there any implication of changing the output on the registers to $R_1,\ldots,R_{R_0}$? | Random Access Machine output | complexity theory;turing machines | null |
_softwareengineering.197590 | So I'm finishing up refactoring some code to remove a number of previously-mutable objects and add better generic processing for all the classes in the domain. Just as I thought I was finishing I realized that there is one sub-class that has some additional state. The additional state is a link to other classes that are used as part of the logic for knowing when new domain objects will be created, deleted, or modified. However, this sub-class is only created at bootup, or when someone runs a command to re-read and update to new configuration files. I know that this object will always be created before any of the other objects in my domain that are dependent on it are, such that anyone pointing to this object is guaranteed to point to the same instance. I effectively have the Memoization pattern by an accident of how the structure is created. I could refactor away the mutability, but it would require a bit of work modifying bootup logic that I would prefer to avoid. Or I could change the hash and equals methods to ignore this one set of mutable values so my model will treat this object exactly like its immutable parent, and trust that my knowledge of how it is constructed prevents me from aliasing issues when I do try to use its mutable traits. So how 'wrong' is it to bend my contract for this class this way? | subclass of immutable object not immutable, can this work? | design;mutable | There's nothing inherently wrong with mutable subclasses provided you don't make assumptions about mutability in other parts of your code. As an example, the Foundation framework that's part of the Cocoa and Cocoa Touch frameworks (MacOS X and iOS respectively) has a number of immutable data containers that have mutable subclasses. NSMutableArray is a mutable subclass of the immutable NSArray, NSMutableDictionary is a mutable subclass of the immutable NSDictionary, etc. This works fine if you think of mutability as an added feature rather than something that needs to be removed from the superclass. Most importantly, client code should never try to make changes to an object that's advertised as immutable, even if the object happens to be an instance of a mutable subclass. So, if a method returns an NSArray, you might actually get back an instance of NSMutableArray, but you should always treat it as immutable anyway. |
_unix.210532 | I want to test the LDAP connectivity between my linux machine to the windows domain controler , so I installed successfully the tool- ldapsearch The Linux machine do authentication of users agaisnt the domain controller ( win machine )so to test the LDAP I run this command ldapsearch -x -h domainController.apple.com -b dc=apple,dc=comwhat I get is that: # extended LDIF # # LDAPv3 # base <dc=apple,dc=com> with scope subtree # filter: (objectclass=*) # requesting: ALL # # search result search: 2 result: 1 Operations error text: 00000000: LdapErr: DSID-0C090627, comment: In order to perform this ope ration a successful bind must be completed on the connection., data 0, vece # numResponses: 1can someone help me to understand the results here from ldapsearch tool?or maybe the syntax in the command ldapsearch isnt right ?the ldap.conf as defined in my linux machine:more /etc/ldap.conflogdir /var/log/ldapdebug 0referrals noderef nevernss_getgrent_skipmembers yeshost domainController.apple.combase DC=apple,DC=comuri ldap://domainController.apple.com/ | how to verified LDAP on Linux machine | linux;ldap;active directory;domain | null |
_softwareengineering.214474 | I'm designing a REST API for a three-tier system like: Client application -> Front-end API cloud server -> user's home API server (Home).Home is a home device, and is supposed to maintain connection to Front-end via Websocket or a long poll (this is the first place where we're violating REST. It gets even worse later on). Front-end mostly tunnels Client requests to Home connection and handles some of the calls itself.Sometimes Home sends notifications to Client.Front-end and Home have basically the same API; Client might be connecting to Home directly, over LAN. In this case, Home needs to register some Client actions on a Front-end itself.Pros for REST in this system are:REST is human-readable;REST has a well-defined mapping of verbs (like CRUD), nouns and response codes to protocol objects;It works over HTTP and passes all the possible proxies;REST contras are:We need not only a request-response communication style, but also a publish-subscribe;HTTP error codes might be insufficient to handle three-tier communication errors; Front-end might return 202 Accepted to some async call only to find out that necessary Home connection is broken and there should have been 503;Home needs to send messages to Client. Client will have to poll Front-end or to maintain a connection.We are considering WAMP/Autobahn over Websocket to get publish/subscribe functionality, when it struck me that it's already looking like a message queue.Is it worth evaluating a sort of messaging queue as a transport?Looks like message queue contras are:I'll need to define CRUD verbs and error codes myself on a message level.I read something about higher maintenance cost, but what does it mean?how serious are these considerations? | REST or a message queue in a multi-tier heterogeneous system? | architecture;web services;rest;message queue;websockets | null |
_cs.47544 | Here I want to discuss the linear Voronoi game. The game consists of two players, and a finite set of users placed along a line. Each player has 2m facilities, where m>0 is a fixed integer. The game starts with the first player placing 2 facilities on the line, after which the second player places 2 facilities, and this continues for multiple rounds. Assuming that each user is served by the facility closest to it, the payoff of each player is defined as the number of users served by the facilities of that player. The goal of each player is to maximize his payoff. I am trying to find optimal strategies for the last move of both players to find their best placements in the game. | Voronoi game in discrete space | graphs;discrete mathematics;game theory;computer games | null |
_webapps.92598 | The problem:View Facebook news feedScroll down the pageInfinite scroll loads about 100 posts (10 days)No more posts are shown, instead a box titled Add Friends to See More Stories is shownIt appears to be impossible to go back further in time. In other words, Facebook seems to limit the number of posts that you can view. This has been a problem I've seen for 6 months or longer. In my current case I haven't checked Facebook for about 3 months, so only being able to see 10 days worth of past posts is not useful.I've tried using F.B. Purity and Social Fixer, and using a completely clean browser profile, and switching between but Most Recent and Top Stories, and the problem still occurs. The number of friends doesn't seem to be relevant since users with 4,000+ friends report the same issue.Other users report the same problem: https://www.facebook.com/help/community/question/?id=10205280399081185https://www.facebook.com/help/community/question/?id=10203669060999418but I cannot find any in-depth analysis of the problem online. This appears to be an artificial limit imposed by Facebook. Is there a workaround? Continuing to make calls to the Infinite scroll data source with the correct magic parameters will likely return older stories, but I have not tried to decipher the query format used for those calls. | How to view older stories in Facebook news feed? [Add Friends to See More Stories] | facebook;news feed | null |
_cs.45299 | Say you execute a clear interrupt instruction (CLI) in a pipelined CPU. While that instruction is being fetched, an interrupt occurs, so the instruction after the CLI is from the interrupt handler. You expected no interrupts because of the CLI instruction but you still got one. How is this problem solved? | Clear interrupt instruction in a pipelined CPU | computer architecture;cpu pipelines | Such a control hazard can be handled effectively the same way that a branch misprediction is handled. Either the CLI instruction commits and the interrupt handler fetch is treated as the mispredicted path and fetch is restarted after the CLI, or the CLI instruction is not committed and the interrupt handler is treated as the correct path. |
_codereview.169768 | ProblemAdapted from this HackerRank problemGiven a string S, find the unordered pairs of substrings that are anagrams of each otherTwo strings are anagrams of each other if the letters of one string can be rearranged to form the other string.ApproachAnother way of thinking about anagrams is that two strings are anagrams if the frequency of the characters in both strings is identical.In other words, if you were to create a Map of the number of times every Character is found in each string, both Maps should be identical.`abba` => `{ 'a': 2, 'b': 2 }``bbaa` => `{ 'a': 2, 'b': 2 }`Thus, I approached the problem in the following wayStart with a pair count value of 0Iterate through all substrings of a given lengthCreate a Map that will keep track of all anagrams and their frequency (again, for substrings of the given length)For each substring, represent the frequency of each Character using a Map<Character, Integer>If the character frequencies calculated in Step #4 are not found in the key values of the Map created in Step #3 thenadd them to the Map. If it does exist, then increment the pair count by the number of times the same anagram (i.e. character frequencies) have already been seen. This is because for every time an identical anagram is seen, it can bepaired with every previous instance of the same anagram. After incrementing the pair count, also increment the number of times an anagram has been seen by 1Return the pair count valueImplementationpublic class UnorderedAnagrammaticPairsCounter { public static int countUnorderedAnagrammaticPairs(String s) { int count = 0; for (int substringLength = 1; substringLength <= s.length(); substringLength++) { Map<Map<Character, Integer>, Integer> substringsCounts = new HashMap<>(); for (int index = 0; index <= s.length() - substringLength; index++) { String substring = s.substring(index, index + substringLength); Map<Character, Integer> characterCounts = UnorderedAnagrammaticPairsCounter.getCharacterCounts(substring); Integer substringCounts = substringsCounts.get(characterCounts); if (substringCounts == null) { substringsCounts.put(characterCounts, 1); } else { count += substringCounts; substringsCounts.put(characterCounts, substringCounts + 1); } } } return count; } public static Map<Character, Integer> getCharacterCounts(String s) { Map<Character, Integer> characterCounts = new HashMap<>(); for (char c : s.toCharArray()) { Integer characterCount = characterCounts.get(c); if (characterCount == null) { characterCounts.put(c, 1); } else { characterCounts.put(c, characterCount + 1); } } return characterCounts; }} | Unordered Substring Anagrammatic Pairs | java;strings;programming challenge | null |
_webmaster.57100 | I currently use Google custom search for my static site. When I use it, I sometimes get images next to the entry:I think Google simply takes the first image in the article. But I would rather like it to take the featured image (which is most of the time not in the article at all, but much smaller than all images that are in the articles and shows the topic of the article much better).How can I tell Google which image to use for the preview? | Can I tell Google which image to use for the preview in custom search? | images;site search;google custom search | I believe this is what you are looking for - https://support.google.com/customsearch/answer/1626955?hl=enYou can specify thumbnail images as follows:PageMap data in the section of your HTML page A thumbnail meta tag.Using a PageMapYou can specify a thumbnail image by adding a PageMap (a block of code) to the section of your page. This content is invisible to users, but it can provide valuable information to Custom Search. Create a thumbnail DataObject for your thumbnail image, like this: <!-- <PageMap> <DataObject type=thumbnail> <Attribute name=src value=http://www.example.com/recipes/applepie/applepie.jpg/> <Attribute name=width value=100/> <Attribute name=height value=130/> </DataObject> </PageMap> -->(You can also use PageMaps to create actions and custom attributes.) Using a thumbnail meta tagTo specify a thumbnail image for a page, you can add a thumbnail meta tag to the section of the page, like this: <meta name=thumbnail content=http://example/foo.jpg /> |
_codereview.59895 | For a while now I've been after a lock-free, simple and scalable implementation of a multiple producer, single consumer queue for delegates in C#. I think I finally have it. I've run basic tests on it showing it works, and the design is so simple that I've managed to convince myself it is rock-solid.This relies on a compare-and-swap approach to update the queue, similar to the new lock-free pattern used to generate event field accessors in C# 4.0 (see here), combined with an Interlocked.Exchange to read-and-set the queue to null.Essentially, this derived from the realization that message-queues are really one-shot multicast delegates that reset their invocation lists after message execution!However, parallel code is very hard to get right so I would like confirmation that this pattern is indeed correct. I've found my intuitions can be surprisingly misleading when it comes to parallelism, and there's always some crazy edge-case driving me off...So, to the question: Can anyone confirm to me that the below message queue design pattern is thread-safe?public class MessageQueue{ Action queue; public void Enqueue(Action message) { Action currentQueue; var previousQueue = queue; do { currentQueue = previousQueue; var newQueue = currentQueue + message; previousQueue = Interlocked.CompareExchange(ref queue, newQueue, currentQueue); } while (previousQueue != currentQueue); } public void Process() { var current = Interlocked.Exchange(ref queue, null); if (current != null) { current(); } }} | Lock-free multiple producer single consumer message queue | c#;multithreading;delegates;lock free | As far as thread safety goes, your code is fine. But there's a couple of things I want to point out:The meaning of your variables (previousQueue, newQueue and currentQueue) isn't very clear, at least not to me. And when writing multi-threaded code, readability becomes extremely important.Also, for CAS-loops, I always find a while(true) - break loop a lot easier on the eyes, but that's my personal opinion.Here's my suggestion for improving readability:while(true){ var expectedOldQueue = queue; var newQueue = expectedOldQueue + message; var actualOldQueue = Interlocked.CompareExchange(ref queue, newQueue, expectedOldQueue); if(expectedOldQueue == actualOldQueue) break;}Also, I'm a bit concerned about the design you chose to achieve a multiple producer/single consumer queue, or maybe I'm just not understanding something...If I understood correctly, producers will enqueue actions, instead of items that need to be consumed, and producers will simply call Process to trigger those actions, correct?So, instead of this://producerqueue.EnqueueItem(item);//consumervar item = queue.Dequeue();Console.WriteLine(item);You're proposing this://producerqueue.Enqueue(() => Console.WriteLine(item));//consumerqueue.Process();If so, this worries me because it goes against the main vein of a consumer/producer architecture, where the consumers are detached from the producers, and have no idea how items will be consumed.With your proposal, producers will be in charge of producing work items and define how they are going to be consumed. |
_webapps.6437 | Is it possible to bookmark a link that, when chosen, will automatically open up new document (a word processing document in this case) in Google Documents even if I don't have GD open at the time? Having such a link would be a bit of timesaver for me. | Save a link to create new document in Google Docs | bookmarks;google documents | You can indeed. Looking at the HTTP traffic, when you click the button to create a new document, it goes to https://docs.google.com/document/create first. I've checked and you can hit this page directly and it will do just what you want.As noted by Ava, you can also use:https://docs.google.com/spreadsheets/createhttps://docs.google.com/presentation/create |
_codereview.9186 | I recently discovered that there is no free read-write lock implementation for Ruby available on the Internet, so I built one myself. The intention is to keep it posted on GitHub indefinitely for the Ruby community at large to use.The code is at https://github.com/alexdowad/showcase/blob/master/ruby-threads/read_write_lock.rb. For convenience, it is also repeated below. (There is another version which goes further to avoid reader starvation; it is currently at https://github.com/alexdowad/showcase/blob/fair-to-readers/ruby-threads/read_write_lock.rb.)Alex Kliuchnikau very kindly reviewed the first version and found a serious bug. To fix that bug, I had to rethink the implementation and make some major changes. I am hoping someone can review the new version, first for correctness, and then for any ways to increase performance. Also, if you have a multi-core processor, please try running the file (it has a built-in test script). Problems which don't show up on my single-core machine may show up much more easily with multiple cores.# Ruby read-write lock implementation# Allows any number of concurrent readers, but only one concurrent writer# (And if the write lock is taken, any readers who come along will have to wait)# If readers are already active when a writer comes along, the writer will wait for# all the readers to finish before going ahead# But any additional readers who come when the writer is already waiting, will also# wait (so writers are not starved)# Written by Alex Dowad# Bug fixes contributed by Alex Kliuchnikau# Thanks to Doug Lea for java.util.concurrent.ReentrantReadWriteLock (used for inspiration)# Usage:# lock = ReadWriteLock.new# lock.with_read_lock { data.retrieve }# lock.with_write_lock { data.modify! }# Implementation notes: # A goal is to make the uncontended path for both readers/writers lock-free# Only if there is reader-writer or writer-writer contention, should locks be used# Internal state is represented by a single integer (counter), and updated # using atomic compare-and-swap operations# When the counter is 0, the lock is free# Each reader increments the counter by 1 when acquiring a read lock# (and decrements by 1 when releasing the read lock)# The counter is increased by (1 << 15) for each writer waiting to acquire the# write lock, and by (1 << 30) if the write lock is takenrequire 'atomic'require 'thread'class ReadWriteLock def initialize @counter = Atomic.new(0) # single integer which represents lock state @reader_q = ConditionVariable.new # queue for waiting readers @reader_mutex = Mutex.new # to protect reader queue @writer_q = ConditionVariable.new # queue for waiting writers @writer_mutex = Mutex.new # to protect writer queue end WAITING_WRITER = 1 << 15 RUNNING_WRITER = 1 << 30 MAX_READERS = WAITING_WRITER - 1 MAX_WRITERS = RUNNING_WRITER - MAX_READERS - 1 def with_read_lock acquire_read_lock yield release_read_lock end def with_write_lock acquire_write_lock yield release_write_lock end def acquire_read_lock while(true) c = @counter.value raise Too many reader threads! if (c & MAX_READERS) == MAX_READERS # If a writer is running OR waiting, we need to wait if c >= WAITING_WRITER # But it is possible that the writer could finish and decrement @counter right here... 
@reader_mutex.synchronize do # So check again inside the synchronized section @reader_q.wait(@reader_mutex) if @counter.value >= WAITING_WRITER end else break if @counter.compare_and_swap(c,c+1) end end end def release_read_lock while(true) c = @counter.value if @counter.compare_and_swap(c,c-1) # If one or more writers were waiting, and we were the last reader, wake a writer up if c >= WAITING_WRITER && (c & MAX_READERS) == 1 @writer_mutex.synchronize { @writer_q.signal } end break end end end def acquire_write_lock while(true) c = @counter.value raise Too many writers! if (c & MAX_WRITERS) == MAX_WRITERS if c == 0 # no readers OR writers running # if we successfully swap the RUNNING_WRITER bit on, then we can go ahead break if @counter.compare_and_swap(0,RUNNING_WRITER) elsif @counter.compare_and_swap(c,c+WAITING_WRITER) while(true) # Now we have successfully incremented, so no more readers will be able to increment # (they will wait instead) # However, readers OR writers could decrement right here, OR another writer could increment @writer_mutex.synchronize do # So we have to do another check inside the synchronized section # If a writer OR reader is running, then go to sleep c = @counter.value @writer_q.wait(@writer_mutex) if (c >= RUNNING_WRITER) || ((c & MAX_READERS) > 0) end # We just came out of a wait # If we successfully turn the RUNNING_WRITER bit on with an atomic swap, # Then we are OK to stop waiting and go ahead # Otherwise go back and wait again c = @counter.value break if (c < RUNNING_WRITER) && @counter.compare_and_swap(c,c+RUNNING_WRITER-WAITING_WRITER) end break end end end def release_write_lock while(true) c = @counter.value if @counter.compare_and_swap(c,c-RUNNING_WRITER) if (c & MAX_WRITERS) > 0 # if any writers are waiting... @writer_mutex.synchronize { @writer_q.signal } else @reader_mutex.synchronize { @reader_q.broadcast } end break end end endendif __FILE__ == $0# for performance comparison with ReadWriteLockclass SimpleMutex def initialize; @mutex = Mutex.new; end def with_read_lock @mutex.synchronize { yield } end alias :with_write_lock :with_read_lockend# for seeing whether my correctness test is doing anything...# and for seeing how great the overhead of the test is# (apart from the cost of locking)class FreeAndEasy def with_read_lock yield # thread safety is for the birds... I prefer to live dangerously end alias :with_write_lock :with_read_lockendrequire 'benchmark'TOTAL_THREADS = 40 # set this number as high as practicable!def test(lock) puts READ INTENSIVE (80% read, 20% write): single_test(lock, (TOTAL_THREADS * 0.8).floor, (TOTAL_THREADS * 0.2).floor) puts WRITE INTENSIVE (80% write, 20% read): single_test(lock, (TOTAL_THREADS * 0.2).floor, (TOTAL_THREADS * 0.8).floor) puts BALANCED (50% read, 50% write): single_test(lock, (TOTAL_THREADS * 0.5).floor, (TOTAL_THREADS * 0.5).floor)enddef single_test(lock, n_readers, n_writers, reader_iterations=50, writer_iterations=50, reader_sleep=0.001, writer_sleep=0.001) puts Testing #{lock.class} with #{n_readers} readers and #{n_writers} writers. 
Readers iterate #{reader_iterations} times, sleeping #{reader_sleep}s each time, writers iterate #{writer_iterations} times, sleeping #{writer_sleep}s each time mutex = Mutex.new bad = false data = 0 result = Benchmark.measure do readers = n_readers.times.collect do Thread.new do reader_iterations.times do lock.with_read_lock do mutex.synchronize { bad = true } if (data % 2) != 0 sleep(reader_sleep) mutex.synchronize { bad = true } if (data % 2) != 0 end end end end writers = n_writers.times.collect do Thread.new do writer_iterations.times do lock.with_write_lock do # invariant: other threads should NEVER see data as an odd number value = (data += 1) # if a reader runs right now, this invariant will be violated sleep(writer_sleep) # this looks like a strange way to increment twice; # it's designed so that if 2 writers run at the same time, at least # one increment will be lost, and we can detect that at the end data = value+1 end end end end readers.each { |t| t.join } writers.each { |t| t.join } puts BAD!!! Readers+writers overlapped! if mutex.synchronize { bad } puts BAD!!! Writers overlapped! if data != (n_writers * writer_iterations * 2) end puts resultendtest(ReadWriteLock.new)test(SimpleMutex.new)test(FreeAndEasy.new)end | Read-write lock implementation for Ruby, new version | ruby;multithreading;locking | Have you considered to join reader and writer queues into a single writer-waiting queue? This may make the code easier to synchronize and also solve the issue with readers starvation when constant writes are performed, something like this: if write lock is taken or queue is not empty, new operation (read or write) goes into queueif no write lock is taken and queue is empty, new read operation is performed. New write operation goes into queue and waits for read operations to end when write lock released, run next operation from the queue. If this is read operation, try run next operations until encounter write operation/queue is empty. If write operation is encounered, wait for reads (see item 2)I believe you should encapsulate operations on the counter into separate, say ReadWriteCounter, abstraction. It will hold Atomic @counter instance internally and will perform check and operations through methods like new_writer, writer_running? etc. This should improve code readability.I ran the code at Pentium(R) Dual-Core CPU E5400 @ 2.70GHz machine:MRI Ruby 1.9.2-p290:READ INTENSIVE (80% read, 20% write):Testing ReadWriteLock with 32 readers and 8 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 0.030000 0.030000 0.060000 ( 0.499945)WRITE INTENSIVE (80% write, 20% read):Testing ReadWriteLock with 8 readers and 32 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 0.030000 0.010000 0.040000 ( 1.761913)BALANCED (50% read, 50% write):Testing ReadWriteLock with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 0.030000 0.010000 0.040000 ( 1.121450)READ INTENSIVE (80% read, 20% write):Testing SimpleMutex with 32 readers and 8 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 0.010000 0.020000 0.030000 ( 2.125943)WRITE INTENSIVE (80% write, 20% read):Testing SimpleMutex with 8 readers and 32 writers. 
Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 0.010000 0.010000 0.020000 ( 2.114639)BALANCED (50% read, 50% write):Testing SimpleMutex with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 0.010000 0.010000 0.020000 ( 2.114838)READ INTENSIVE (80% read, 20% write):Testing FreeAndEasy with 32 readers and 8 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each timeBAD!!! Readers+writers overlapped!BAD!!! Writers overlapped! 0.000000 0.000000 0.000000 ( 0.058086)WRITE INTENSIVE (80% write, 20% read):Testing FreeAndEasy with 8 readers and 32 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each timeBAD!!! Readers+writers overlapped!BAD!!! Writers overlapped! 0.010000 0.010000 0.020000 ( 0.060335)BALANCED (50% read, 50% write):Testing FreeAndEasy with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each timeBAD!!! Readers+writers overlapped!BAD!!! Writers overlapped! 0.000000 0.000000 0.000000 ( 0.057053)Jruby 1.6.5.1:READ INTENSIVE (80% read, 20% write):Testing ReadWriteLock with 32 readers and 8 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 0.875000 0.000000 0.875000 ( 0.875000)WRITE INTENSIVE (80% write, 20% read):Testing ReadWriteLock with 8 readers and 32 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 1.880000 0.000000 1.880000 ( 1.880000)BALANCED (50% read, 50% write):Testing ReadWriteLock with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 1.201000 0.000000 1.201000 ( 1.201000)READ INTENSIVE (80% read, 20% write):Testing SimpleMutex with 32 readers and 8 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 2.196000 0.000000 2.196000 ( 2.196000)WRITE INTENSIVE (80% write, 20% read):Testing SimpleMutex with 8 readers and 32 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 2.219000 0.000000 2.219000 ( 2.219000)BALANCED (50% read, 50% write):Testing SimpleMutex with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each time 2.229000 0.000000 2.229000 ( 2.229000)READ INTENSIVE (80% read, 20% write):Testing FreeAndEasy with 32 readers and 8 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each timeBAD!!! Readers+writers overlapped!BAD!!! Writers overlapped! 0.074000 0.000000 0.074000 ( 0.074000)WRITE INTENSIVE (80% write, 20% read):Testing FreeAndEasy with 8 readers and 32 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each timeBAD!!! Readers+writers overlapped!BAD!!! Writers overlapped! 0.060000 0.000000 0.060000 ( 0.060000)BALANCED (50% read, 50% write):Testing FreeAndEasy with 20 readers and 20 writers. Readers iterate 50 times, sleeping 0.001s each time, writers iterate 50 times, sleeping 0.001s each timeBAD!!! Readers+writers overlapped!BAD!!! Writers overlapped! 0.105000 0.000000 0.105000 ( 0.104000) |
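One way to act on the suggestion above, pulling the bit arithmetic into its own abstraction, is sketched below; the class and method names are only illustrative, not taken from the post:

    class ReadWriteCounter
      WAITING_WRITER = 1 << 15
      RUNNING_WRITER = 1 << 30

      def initialize
        @counter = Atomic.new(0)
      end

      # Try to register one more reader; returns true on success
      def try_add_reader
        c = @counter.value
        c < WAITING_WRITER && @counter.compare_and_swap(c, c + 1)
      end

      def writer_running?(c = @counter.value)
        c >= RUNNING_WRITER
      end

      def waiting_writers(c = @counter.value)
        (c & (RUNNING_WRITER - 1)) >> 15
      end

      def readers(c = @counter.value)
        c & (WAITING_WRITER - 1)
      end
    end

The lock methods can then read almost as plain English (for example, break if counter.try_add_reader) while the encoding details stay in one place.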
_codereview.85352 | Challenge: Swap positions in a listSpecifications:Your program should accept as its first argument a path to a filename. The file contains several test cases, one on each line. Each test case is a list of numbers, supplemented with positions to be swapped. List and positions are separated by a colon. Positions start with 0. There may be more than one position swaps, separated by a comma. Positions swaps are processed left to right.Solution:import java.io.File;import java.io.FileNotFoundException;import java.util.ArrayList;import java.util.List;import java.util.Scanner;public class SwapElements { public static void main(String[] args) throws FileNotFoundException { Scanner input = new Scanner(new File(args[0])); while (input.hasNextLine()) { String[] temp = input.nextLine().split(:); printSwapped(swapList(temp[0], temp[1])); } } public static void printSwapped(List<String> list) { StringBuilder result = new StringBuilder(); for (String s : list) { result.append(' ').append(s); } System.out.println(result.substring(1)); } public static List<String> swapList(String input, String swapKey) { List<String> result = modifyToList(input); for (String s : swapKey.split(,)) { String[] keys = s.split(-); int index1 = Integer.parseInt(keys[0].substring(1)); int index2 = Integer.parseInt(keys[1]); String target1 = result.get(index2); String target2 = result.get(index1); result.set(index1, target1); result.set(index2, target2); } return result; } public static List<String> modifyToList(String list) { List<String> result = new ArrayList<>(list.length()); for (String s : list.split(\\s+)) { result.add(s); } return result; } }Sample Input:1 2 3 4 5 6 7 8 9 : 0-8 1 2 3 4 5 6 7 8 9 10 : 0-1, 1-3Sample Output:9 2 3 4 5 6 7 8 1 2 4 3 1 5 6 7 8 9 10It's actually been a while since I did one of these and I get the feeling it shows. The solution itself, of course, works, but I get the sense that it's memory intensive and/or slow. I'd mostly like to focus on optimizing speed, but any and all general feedback is welcome. What do you think? | Swapping Elements in a list | java;performance;algorithm;programming challenge | Switching from a List<Integer> to an int[] speeds things up quite a bit, for the input examples you've provided. 
Note: I've slightly simplified your code to work off a fixed array of strings, merely because it made my testing easier.public class SwapElements { private static final String[] lines = { 1 2 3 4 5 6 7 8 9 : 0-8, 1 2 3 4 5 6 7 8 9 10 : 0-1, 1-3 }; public static void main(String[] args) throws Exception { for (String line : lines) { String[] temp = line.split(:); int[] numbers = toPrimitiveArray(temp[0]); printSwapped(swapList(numbers, temp[1])); } } private static int[] toPrimitiveArray(String input) { String[] numbers = input.split( ); int[] result = new int[numbers.length]; for (int i = 0; i < numbers.length; i++) { result[i] = Integer.parseInt(numbers[i]); } return result; } private static void printSwapped(int[] list) { StringBuilder result = new StringBuilder(); for (int i : list) { result.append(' ').append(i); } System.out.println(result.substring(1)); } private static int[] swapList(int[] input, String swapKey) { for (String s : swapKey.split(,)) { String[] keys = s.split(-); int index1 = Integer.parseInt(keys[0].substring(1)); int index2 = Integer.parseInt(keys[1]); int target1 = input[index2]; int target2 = input[index1]; input[index1] = target1; input[index2] = target2; } return input; }}Some basic benchmarking showed me that this improved performance from 4,125ns to 2,536ns (almost 40% reduction), for processing both input strings.Other comments:I've changed most of your public methods to private, since they don't appear to be used outside of the class.Your original code doesn't correctly close the Scanner you use. These days, you can avoid that entire problem using NIO classes:List<String> fileLines = Files.readAllLines(Paths.get(args[0]));You may wish to refactor your code to introduce a int[] swapElements(String input) method, so you can unit test more easily. |
_codereview.99004 | I am working on rewriting an application in Laravel 5.1. I am new to the exception handling technique introduced in 5.0. I have overlooked taking advantage of throwing/catching exceptions frequently in the past, but am working on this now. I am most likely not up to speed on some of the best practices.public function render($request, Exception $e){ switch (true) { case $e instanceof Exceptions\Auctions\AuctionTypeException: // Should I throw a new exception here? Is passing the old exception message to a new exception good practice? throw new Exceptions\Notifications\AlertException($e->getMessage()); break; case $e instanceof Exceptions\Auctions\AuctionArgumentException: // Should I throw a new exception here? Is passing the old exception message to a new exception good practice? throw new Exceptions\Notifications\AlertException($e->getMessage()); break; //////////////////////////////////////////// // Custom errors to display in error view // //////////////////////////////////////////// case $e instanceof Exceptions\Notifications\AlertException: return response()->view('errors.notification', ['message' => $e->getMessage(), 500]); break; ////////////////////////////////////////////// // Unknown exceptions are rendered normally // ////////////////////////////////////////////// default: return parent::render($request, $e); }}Docs for the exception handler are here.While most of my exceptions will fall under the explicitly ignore and continue umbrella, I feel like I will need to handle my exceptions in a more abstract/polymophic way later down the road, so I am preparing now. This is a huge app, and I will have several custom Exceptions.This code currently works and behaves as excepted. Exceptions that I explicitly state I want to do something special with can be added as a case, and if it is new, or I just want to handle it normally...it falls under the default.I would like to know if I am going down the right path - to me, this seems like a fairly clever way to handle this. Is it a good idea to catch an exception, only to throw another exception with the previous exceptions message? Are there any pitfalls to checking the instance and evaluating true like this? | Exception Handler on a Switch | php;object oriented;laravel | null |
_softwareengineering.299240 | I am trying to standardise my code as much as possible, including DocComments, using PHPCS.It seems that the PEAR standards contain two sniffs that require almost exactly the same tags appear in the Class and File DocBlocks:PEAR.Commenting.ClassCommentPEAR.Commenting.FileCommentBoth of these want to see these tags: @category, @package, @author, @license, @link.----------------------------------------------------------------------FOUND 10 ERRORS AFFECTING 2 LINES---------------------------------------------------------------------- 6 | ERROR | Missing @category tag in file comment 6 | ERROR | Missing @package tag in file comment 6 | ERROR | Missing @author tag in file comment 6 | ERROR | Missing @license tag in file comment 6 | ERROR | Missing @link tag in file comment 13 | ERROR | Missing @category tag in class comment 13 | ERROR | Missing @package tag in class comment 13 | ERROR | Missing @author tag in class comment 13 | ERROR | Missing @license tag in class comment 13 | ERROR | Missing @link tag in class comment----------------------------------------------------------------------It would be silly to repeat these because all my source files contain just a single class (or interface or trait).My question is, which tags should go where. Should they all go in the file comment, all in the class comment, or should they be split between the two. | PHPDoc Comment, Class vs File | php;documentation | Based on what I can find, this is my own opinion which I pose as an answer. I would really like feedback on this. This answer is based on a proposed (not-accepted) standard.Breakdown:Looking through the proposed PSR-5 standard, particularly the description of each tag helped a bit.@categoryDeprecated in favour of @package which does essentially the same thing, so can be removed from the sniff.@packageCan be used in either, however in the file block it applies to: global functions, global constants, global variables, requires and includes. In the class it applies to the class and all containing elements. Assuming that your file only contains a class, the @package tag would be meaningless in the file block.@authorThis can apply to any structural element. The documentation doesn't specifically help answer the question of which, however since the file contains the class, I would say this should appear in the most encompassing element (the file comment), with other authors adding an @author tag to any sub elements they write.@licenseAgain, this can be applied to any structural element, but is applied to all sub elements, therefore the file seems most appropriate.@linkLink is also deprecated in favour of @seeSo:@see@see is looser than @link, and could happily be applied to both the file and the class. For example the file could reference the project website, the file could reference the documentation for the class.Summary:So this is what I think the file should roughly look like<?php/** * FileName.php * @author My Name <[email protected]> * @copyright 2015 My Company * @license Licence Name * @see Link to project website */namespace My/Namespace;use Another/Namespace/Class;/** * Class summary * A longer class description * @package Vendor/Project * @see Link to class documentation */class MyClass { ...} |
_codereview.78802 | I'm learning Haskell using the University of Pennsylvania's online materials. I'm a few lessons in and was looking for some feedback about whether I'm thinking functionally enough or porting over my Python background inappropriately.Below are my answers to the problems set out in lesson three (these were someone's homework once but not any more, and were never mine!). Is my code below making any rookie mistakes for someone coming from another paradigm to functional?import Data.List-- Exercise 1-- Take a list xs and an integer n, and return a list of every nth element of xseveryNth :: [a] -> Int -> [a]everyNth xs n = [snd x | x <- (zip [1..] xs), fst x `mod` n == 0]skips :: [a] -> [[a]]skips xs = map (everyNth xs) [1..length xs]-- Exercise 2-- Take a list of integers and produce a list of local maximatriples :: [a] -> [[a]]triples (x:xs) | length (x:xs) < 3 = [] | otherwise = (x:take 2 xs) : triples xsisMiddleMax :: (Ord a) => [a] -> BoolisMiddleMax (x:y:z:[]) = x < y && y > zlocalMaxima :: [Integer] -> [Integer]localMaxima xs = map maximum . filter isMiddleMax $ triples xs-- Exercise 3-- Take a list of integers 0-9 and return a histogramgroupSort :: (Eq a, Ord a) => [a] -> [[a]]groupSort xs = group $ sort xscountInstance :: (Eq a, Ord a) => [a] -> (a, Int)countInstance xs = (head xs, length xs)countInstances :: (Eq a, Ord a) => [a] -> [(a, Int)]countInstances xs = map (countInstance) $ groupSort xsmaxInstances :: [Int] -> IntmaxInstances xs = maximum $ map (snd) $ countInstances xsspaceOrStar :: [Int] -> Int-> CharspaceOrStar xs y | y `elem` xs = '*' | otherwise = ' 'getData :: (Eq a, Ord a) => [a] -> Int -> [a]getData xs n = [fst x | x <- countInstances xs, snd x >= n]histRow :: [Int] -> StringhistRow xs = map (spaceOrStar xs) [0..9]buildHist :: [Int] -> StringbuildHist xs = intercalate \n . map (histRow) $ map (getData xs) . reverse $ [1..maxInstances xs]histogram :: [Int] -> Stringhistogram xs = buildHist xs ++ \n==========\n0123456789\nmain :: IO()main = do print $ skips ABCD print $ skips hello! print $ skips [1] print $ skips [True, False] print $ skips ([] :: [Int]) print $ localMaxima [2,9,5,6,1] print $ localMaxima [2,3,4,1,5] print $ localMaxima [1,2,3,4,5] putStr $ histogram [1,4,5,4,6,6,3,4,2,4,9] | Am I thinking functionally in these simple Haskell functions? | beginner;haskell;functional programming | Overall it's pretty nice. I like that you've learned how to use $; it really improves readability in my opinion. One thing that immediately caught my attention though was the chain of prints. Haskell can do better! You can for example put those in a list, and mapM_ print over it.let fns = [ skips ABCD , skips hello! , skips [1] , skips [True, False] , skips ([] :: [Int]) , localMaxima [2,9,5,6,1] , localMaxima [2,3,4,1,5] , localMaxima [1,2,3,4,5] ]mapM_ print fnsThis makes it easier to refactor later, if you want to do something else than print, write them to different files, all kinds of stuff. The lisp-ish approach of code-as-data is quite useful when enumerating cases like you did here.isMiddleMax (x:y:z:[]) = x < y && y > zSpooky! Use -Wall to get a proper warning for that one: incomplete patterns in function .... 
You might want to consider adding a meaningful error message:isMiddleMax _ = error Only works on 3-element lists!triples (x:xs) | length (x:xs) < 3 = []Dunno if you know, but there exists a thing called alternate name capture; it's written like so:triples (x:xs @ allXs) | length allXs < 3 = []If you consider this an overkill, why not simply change the clause to length xs < 2? I don't like the same pattern repeated for some reason.countInstances xs = map (countInstance) $ groupSort xsmaxInstances xs = maximum $ map (snd) $ countInstances xsThe parens around countInstance and snd are unnecessary (it's not terrible to leave them, just pointing that out). HLint is a useful tool that can provide such hints for you!Oh, and also, in the second case; multiple $ are a bit less readable; consider this instead:maxInstances xs = maximum . map (snd) . countInstances $ xsThis expresses it in a truly functional way, and makes it susceptible to eta-reduction!:maxInstances = maximum . map (snd) . countInstancesThis is called point-free (not to be confused with pointless :)) notation, and makes it extremely clear that the function is indeed a composition of other functions ((.) is Haskell's composition operator).I'll add more if I notice any more areas for improvement. |
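As a concrete instance of the point-free advice above, two more of the posted helpers collapse the same way (this assumes the file's existing import of Data.List and its countInstance definition):

    groupSort :: Ord a => [a] -> [[a]]
    groupSort = group . sort

    countInstances :: Ord a => [a] -> [(a, Int)]
    countInstances = map countInstance . groupSort

Since Ord implies Eq, the (Eq a, Ord a) constraints in the original signatures can be reduced to just Ord a.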
_softwareengineering.218293 | Pretty simple question. Should package structure closely resemble class hierarchy? If so, how closely? Why or why not?For instance, let's say you've got class A and class B, plus class AFactory and class BFactory. You put class A and class B in the package com.something.elements, and you put AFactory and BFactory in com.something.elements.factories. AFactory and BFactory would be further down the hierarchy package-wise, but they'd be further up class-wise. Is this sort of thing a good idea or a bad idea? | Should package structure closely resemble class hierarchy? | architecture;packages;namespace;hierarchy | According to Uncle Bob, classes should be grouped together into packages if they change together. Presumably, if class A changes, then class AFactory would need to change as well, but class BFactory would not. So, if A and B are unrelated, then each should be in a separate package together with the corresponding factory. On the other hand, if there is a dependency between A and B that forces you to change one when you change the other, then the two classes and the two factories should all be in the same package.If you follow this pattern, then you should be able to build each package into a separate library, independently of the other one. |
_hardwarecs.7979 | I'm thinking about buying the new Nintendo Classic Mini: SNES and I need to know if I could use my Nvidia Shield K1 Android tablet, which has a mini HDMI slot, as a monitor for it. I hope I'm in the right community for my question. | Can I use a tablet as monitor? | tablet;hdmi;video game console | null
_softwareengineering.223456 | I am developing a C# windows service application, which have different configuration files for development, for production system, for test system, like:Dev.configTest.configProd.configNow we are using SVN version control system, and configuration files stored in projects directories:-trunk -MyFooService -Configs Dev.config Test.config Prod.config -MyFooService2 -Configs Dev.config Test.config Prod.configPrior publishing service to production you should get respective configuration file from that folders. How do you think, it is correct? Or configuration files should be placed in another repository? Or in another folder or should not be placed in VCS?Or may be it will be better to place files like that:-trunk -src -MyFooService -MyFooService2 -configs -MyFooService -Configs Dev.config Test.config Prod.config -MyFooService2 -Configs Dev.config Test.config Prod.config | Where to place configuration files sources | c#;configuration | What we've come up with is the following. We only place one config file under version control. It contains the settings of the development environment. It serves two purposes. One, if a developer opens a project and runs it, it should just work (see also the Joel test :)). Two, it serves as a template only. You shouldn't actually store actual configuration settings in version control, but these are two very good reasons to make an exception with the development settings, out of necessity.When we publish a project to a server, we never overwrite configuration files, we only merge changes into it if there were any. The problem you are facing is that environments can change all the time, you can add new environments or modify existing ones, and these environment changes may have nothing to do at all with your development cycle. Even more importantly, we have no control at all over the clients' environments. Why would we want to store those settings in our version control? When we give a published project to our clients, we rename the config files to config.sample, they are forced to edit it according to their needs. |
_unix.339696 | I had a mounted ext4 filesystem (/dev/sdg1) and accidentally ran dd if=/dev/sda of=/dev/sdg, hitting CTRL-C after 1 second, so only 60 MB of data was transferred. /dev/sda1 holds an ext3 root filesystem. What I have now:Restored the partition on sdg (as it was rewritten from sda)All superblocks on sdg1 are from sda1Any ideas how to restore the data? | Overwrote 60MB of mounted filesystem from other filesystem with dd | linux;dd;restore | null
_cstheory.36264 | We say two languages $\;\;\; L\hspace{.02 in},\hspace{-0.02 in}L' \: \subseteq \: \{\hspace{-0.02 in}0,\hspace{-0.05 in}1\hspace{-0.03 in}\}^* \;\;\;$ agree infinitely-often with each otherif and only if there are infinitely-many $n$ such that $\;\;\; L \cap \{\hspace{-0.02 in}0,\hspace{-0.05 in}1\hspace{-0.03 in}\}^n \: = \: L' \cap \{\hspace{-0.02 in}0,\hspace{-0.05 in}1\hspace{-0.03 in}\}^n \:\:\:\:$.For a language $L$ let io-$L$ be the set of languages which agree infinitely-often with $L$.Let io-P be the set of languages that agree infinitely-often with some language in P.Let io-NPH be the infinitely-often version of NPH (NP-hard w.r.t. Cook reductions):$L \in$ io-NPH iff for all $L' \in$ NP, some language in io-$L'$ is polynomial-time Turing reducible to $L$. Is io-P does not contain NP known to imply that io-P $\cup$ io-NPH does not contain NP ? | Is the infinitely-often version of Ladner's theorem known? | cc.complexity theory;np intermediate;structural complexity | null |
_cstheory.14189 | Is there a well-known randomized algorithm for the set cover problem in the literature with an approximation ratio of $O(\log n)$ or $f$, where $f$ is the max frequency of an element? Please don't mention the randomized rounding method with LP (or any other method depending on LP). | Is there a randomized algorithm for set-cover? | ds.algorithms;randomized algorithms;set cover | null
_codereview.6808 | As of now, I am using this code to open a file and read it into a list and parse that list into a string[]:string CP4DataBase = C:\\Program\\Line Balancer\\FUJI DB\\KTS\\KTS - CP4 - Part Data Base.txt;CP4DataBaseRTB.LoadFile(CP4DataBase, RichTextBoxStreamType.PlainText);string[] splitCP4DataBaseLines = CP4DataBaseRTB.Text.Split('\n');List<string> tempCP4List = new List<string>();string[] line1CP4Components;foreach (var line in splitCP4DataBaseLines) tempCP4List.Add(line + Environment.NewLine);string concattedUnitPart = ;foreach (var line in tempCP4List){ concattedUnitPart = concattedUnitPart + line; line1CP4PartLines++;}line1CP4Components = new Regex(\UNIT\,\PARTS\, RegexOptions.Multiline) .Split(concattedUnitPart) .Where(c => !string.IsNullOrEmpty(c)).ToArray();I am wondering if there is a quicker way to do this. This is just one of the files I am opening, so this is repeated a minimum of 5 times to open and properly load the lists.The minimum file size being imported right now is 257 KB. The largest file is 1,803 KB. These files will only get larger as time goes on as they are being used to simulate a database and the user will continually add to them.So my question is: is there a quicker way to do all of the above code? | Speedily Read and Parse Data | c#;parsing | The part of the code that makes it really slow is this:string concattedUnitPart = ;foreach (var line in tempCP4List){ concattedUnitPart = concattedUnitPart + line; line1CP4PartLines++;}You should not concatenate large strings like that. The string gets longer and longer for each iteration, and it's copied into a new string each time.If you Read a file that is 1.8 MB, that consists of lines which varies between 50 and 100 characters, you will have been copying about 10000MB of data before you have the result.Also, it scales very badly, so when the files grow it will grow slower at an exponential rate. To handle a file that is 5 MB you will be copying about 80 000 MB of data.James suggestion to do a replace seems to be a good option. If you want to split and join, you can use the String.Join method:string[] splitCP4DataBaseLines = CP4DataBaseRTB.Text.Split('\n');string concattedUnitPart = String.Join(Environment.NewLine, splitCP4DataBaseLines) + Environment.NewLine; |
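A rough alternative sketch in the same spirit as the String.Join advice: read the file directly and split it once, so neither the RichTextBox round-trip nor the line-by-line concatenation is needed. The marker pattern is copied from the posted Regex and may need its quoting adjusted; the class and method names are illustrative only.

    using System.IO;
    using System.Linq;
    using System.Text.RegularExpressions;

    static class PartDatabase
    {
        public static string[] LoadUnitParts(string path)
        {
            string text = File.ReadAllText(path);          // one read, no RichTextBox
            return new Regex("\"UNIT\",\"PARTS\"", RegexOptions.Multiline)
                .Split(text)
                .Where(c => !string.IsNullOrEmpty(c))
                .ToArray();
        }
    }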
_unix.232489 | I am able to do this, array=(2 46 7 4 2 1 1 1 23 4 5)store=(${array[*]:5:5})echo ${store[@]} # print 1 1 1 23 4 5Now instead of extracting the 5 elements from position 5 from a user array, I need to extract command-line args from 5 and onward. I tried similar way but I am getting empty outputstore=(${$[*]:5:5}) # <----------------- Something to be changed here?echo ${store[@]} # EMPTY OUTPUTAny help, how to store n args from position mth onward in a array? | Storing part of command line arguments into user array | bash;shell;shell script;array | In bash (and also zsh and ksh93, the general form of parameter expansion or Substring Expansion is:${parameter:offset:length}If the length is omitted, you will get from offset to the end of parameter.In your case:array=(2 46 7 4 2 1 1 1 23 4 5)store=( ${array[@]:5} )printf '%s\n' ${store[@]}will generate from 6th element to the last element.With $@:printf '%s\n' ${@:5}will generate from $5 to the end of positional arguments.Also note that you need to quote the array variable to prevent split+glob operator on its elements.With zsh, you can use another syntax:print -rl -- $argv[5,-1] |
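Putting the answer's two points together, storing n arguments from position m (here 5 and 5) into an array inside a bash script or function can be written as:

    store=( "${@:5:5}" )      # quoted to avoid the split+glob operator on each element
    printf '%s\n' "${store[@]}"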
_vi.11250 | When typing LaTeX one often needs to type a \ to invoke a command such as \omega. One of the nicer features of LaTeX is the ability to define your own commands, which can then be detected by Vim if 'define' is properly set. This will allow completion of these commands with <c-x><c-d>. I would like to avoid typing the \ too often as it is located rather awkwardly on my keyboard. Therefore I would like to be able to type ome<c-x><c-d> and have vim complete that to \omega. In other words, I want to be able to complete a word of which I never typed the first character. Is this even possible using the vim completion function or do I need something more capable?Just to be clear, I will of course be remapping <c-x><c-d> in this case, as this is still an awkward combination. | autocomplete without typing first character | autocompletion | null |
_webapps.78516 | When I login to my google account and list the devices that have accessed my account, I see an unknown device access from India. I do not recognize this device and have subsequently changed my password and added two factor authentication. But I still see the same device access my google account less than 2 days ago. I added 2FA about 3 weeks back and have been regularly checking my account access for unknown devices.Any insight into this matter is greatly appreciated. | Google account shows unknown device logged in. Password reset and 2FA does not make this device go away | google account | null |
_unix.14867 | As I read in Wikipedia, Unix started as a revolutionary operating system written mostly in C allowing it to be ported and used on different hardware. Descendants of Unix is mentioned next, mostly BSD. Clones of Unix, Minix/Linux are discussed as well.But what happened to the original Unix operating system?Does it exist as an operating system any more or is it nothing more than a standard like POSIX nowadays?Do note that I am aware of this answer but it has no mention of the fate of the original Unix beyond the derived works. | What is Unix now? | history | We can distinguish UNIX the trademark from Unix the code-base.AT&TUnix was initially developed at Bell Labs, owned by AT&T. This Unix team became AT&T's Unix System Laboratories (USL) and produced Unix System V (Roman numeral for five) or SysV for short. The University of California at Berkeley (UCB) also licenced Unix for academic use, their Computer Systems Research Group (CSRG) later made many important changes and additions (notably TCP/IP) in their Berkeley Software Distribution (BSD) which were later incorporated into many descendants of Unix leading to the BSD vs SysV split. Ultimately a lot of the BSD changes were back-ported into SysV (which we can consider the main ancestral Unix code base).Along the way, many different businesses have licenced this code-base (at various stages in it's development) and used it as the basis of their proprietary Unix operating systems - AIX, HPUX, IRIX, Solaris, Ultrix and dozens of others. Novell (Attachmate)USL was purchased by Novell. At this time, the ancestral Unix code was known as Unix system V release 4 - or SVR4 for short. Novell named their product Unixware to complement the name of their legacy network OS Netware. Novell have been Acquired by Attachmate.The Santa Cruz OperationNovell eventually sold their Unix business to an old SVR3.2 licensee The Santa Cruz Operation (SCO) whose main business up to that point was selling a product named OpenServer that was based on Unix SVR3.2. Novell (since bought by Attachmate) still own some rights to Unix but do not do any work on the source code.Caldera / The Sco Group / TSG Group IncThe Santa Cruz Operation later sold their Unix business to a Linux company Caldera who later renamed themselves to The SCO Group (sometimes referred to as new SCO or SCOG) and who had a disastrous failure of leadership leading to chapter-11 bankruptcy and sale of the Unix business to UnXis, a business formed for this purpose. Subsequently The SCO Group were reorganised into TSG Group Inc and TSG Operations Inc. They have no role regarding maintenance of the ancestral Unix code base. In August 2012 TSG Group Inc converted to chapter 7 bankruptcy.UnXis / XinuosSo now UnXis are responsible for marketing and developing/maintaining Unixware - the ancestral AT&T Unix code base. Because the Santa Cruz Operation (old SCO) originally ported Unix to the x86 platform, I believe x86 and x86_64 are the only target platforms that UnXis directly support. On June 12 2013, UnXis announced it had been renamed Xinuos. Microsoft licensed Unix and ported it to 16-bit Zilog Z8000 - old SCO purchased Xenix from them and ported it to the 16-bit 8086 architecture (used by IBM for their original IBM PC). Old SCO later ported SVR3.2 to x86 as 32-bit SCO-Unix later renamed OpenServer Novell's rights were contested, somewhat futilely, by The SCO Group (now named TSG Group Inc), the bankrupt remnants of the old Linux company Caldera. 
It is not yet clear whether TSG Group Inc have finally discontinued this and related litigation as a result of August 30, 2011 court decisions against them |
_cs.61247 | A friend needs to find the pool balls in an image of a pool table. Would a Hough transform be a good idea? Why/why not? Would RANSAC be better?The question comes from this study guide (http://www.cc.gatech.edu/~afb/classes/CS4495-Spring2015-OMS/), is not homework, and I am not currently taking this class.What I know: RANSAC is least-squares plus a system with voting. We pick the least squares solution that has the least outliers. Hough transform is about finding parametric shapes. You can transform something into Hough space. For a circle you might want to know the radius. If you know more information about your circle (the pool ball) you'll not have as much space to search.I think it makes sense to use Hough transforms in the case of finding pool balls so I am wondering how and why you'd even use RANSAC.Edit: I think you could use RANSAC if you had another image to match to that you knew had pool balls in it. The images would be matched against each other, and the features that came out would be the pool balls. But again, it seems like RANSAC isn't a good idea because it could only fit one model to the whole image. | What are the pros and cons of RANSAC versus Hough Transform? | machine learning;image processing;pattern recognition | null |
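If the ball radius range is roughly known, a circular Hough transform is straightforward to try; below is a hedged OpenCV sketch in which the file name and every numeric parameter are placeholders to be tuned, not values from the question:

    import cv2

    img = cv2.imread("table.jpg")                    # hypothetical input image
    gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30, minRadius=8, maxRadius=25)
    # circles is either None or an array of (x, y, r) candidates for the balls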
_cs.52123 | I am learning the basics of the language C, and so far we have covered up to loops. This question is related to my assignment. The problem is I do not understand what the question is asking. I know the error in the code, but I don't understand what this question is asking me to do. What is an instance? Am I supposed to make a different piece of code? Because I am pretty sure the error is a semantic one (knowledge-based rather than syntax). Here is the code anyway, just for context: | What is this question asking for?-->Test this code, find an error in the program and construct an instance to demonstrate the error (in C) | c | null
_unix.191967 | If I do (in a Bourne-like shell):exec 3> file 4>&3 5> file 6>> fileFile descriptors 3 and 4, since 4 was dup()ed from 3, share the same open file description (same properties, same offset within the file...). While file descriptors 5 and 6 of that process are on a different open file description (for instance, they each have their own pointer in the file).Now, in lsof output, all we see is:zsh 21519 stephane 3w REG 254,2 0 10505865 /home/stephane/filezsh 21519 stephane 4w REG 254,2 0 10505865 /home/stephane/filezsh 21519 stephane 5w REG 254,2 0 10505865 /home/stephane/filezsh 21519 stephane 6w REG 254,2 0 10505865 /home/stephane/fileIt's a bit better with lsof +fg:zsh 21519 stephane 3w REG W,LG 254,2 0 10505865 /home/stephane/filezsh 21519 stephane 4w REG W,LG 254,2 0 10505865 /home/stephane/filezsh 21519 stephane 5w REG W,LG 254,2 0 10505865 /home/stephane/filezsh 21519 stephane 6w REG W,AP,LG 254,2 0 10505865 /home/stephane/file(here on Linux 3.16) in that we see fd 6 has different flags, so it has to be a different open file description from the one on fd 3, 4 or 5, but from that we can't tell fd 5 is on a different open file description. With -o, we could also see the offset, but again same offset doesn't guarantee it's the same open file description.Is there any non-intrusive1 way to find that out? Externally, or for a process' own file descriptors?1. One heuristic approach could be to change the flags of one fd with fcntl() and see what other file descriptors have their flags updated as a result, but that's obviously not ideal nor fool proof | find out which file descriptors share the same open file description | file descriptors | null |
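One possible (and still non-intrusive) approach on Linux >= 3.5 is the kcmp(2) syscall with KCMP_FILE, which reports whether two descriptors of a process refer to the same open file description; it needs the same permission as ptrace. A hedged ctypes sketch follows; the syscall number is the x86_64 value and must be checked against <asm/unistd.h> on other architectures:

    import ctypes, os

    SYS_kcmp = 312            # x86_64 only (assumption for this sketch)
    KCMP_FILE = 0
    libc = ctypes.CDLL(None, use_errno=True)

    def same_open_file_description(pid, fd1, fd2):
        r = libc.syscall(SYS_kcmp, pid, pid, KCMP_FILE, fd1, fd2)
        if r < 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
        return r == 0         # 0 means the two fds share one open file description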
_unix.152615 | Basically, I am trying to edit my .bashrc such that when I type ls or whatever I type on the console, it will be displayed in green. At the same time, all the results / output displayed by ls or other commands (output from a python / java script) will be displayed in a grey color. Is this possible? What would I need to add to the .bashrc file? ThanksUPDATE:Well. Thank you very much for the answers and comments:I saw from this link that I just have to add the following next to the definition of $PS1trap '[[ -t 1 ]] && tput sgr0' DEBUGThen it works. I am not sure there will be any issues. But it seems to work for now. | Bash: How to get different color for the command line and its output? | bash;bashrc | null
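Tying the pieces of the update together, a minimal sketch for ~/.bashrc: colour the prompt and leave the colour open so the typed command stays green, then let the DEBUG trap reset attributes just before each command runs, so its output comes out in the default colour:

    PS1='\[\e[32m\]\u@\h:\w\$ '             # no closing reset, so typed text stays green
    trap '[[ -t 1 ]] && tput sgr0' DEBUG    # the reset line quoted in the update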
_cs.74964 | Undergraduate math student here attempting to understand neural networks. Picked up a text on sale, Neural Network Learning and Expert Systems (Gallant), and I'm just starting on the exercises for chapter 1. The full question is:Prove that a network is a feedforward network if and only if its cells are numbered in a way such that:all $p$ input cells are numbered from $1$ to $p$. whenever cell $u_j$ is connected to cell $u_i$, then $j < i$. The definition of a feedforward network, according to Gallant, is a directed network such that there are no cycles between any nodes. I'm confused for a few reasons. This sounds like I'll need at least a basic understanding of graph theory, which I don't have. Any quick and dirty resources that should cover what I need for this kind of material? I don't even know how to represent a network like this mathematically.I'm used to the numbering of things being pretty arbitrary, for example, numbering basis vectors in a set just to differentiate between them. I don't really have an intuitive idea for why the numbering of input cells is a factor in deciding if a network has directed cycles or not.Thanks in advance. | Prove a network is a feedforward network if and only if the numbering of its cells satisfy these conditions | graph theory;machine learning;neural networks | Let $G = (V, E)$ with $E \subseteq V \times V$ be a directed graph. A cycle is a path of edges $e_1 = (v_1, v_2), e_2 = (v_2, v_3), ..., e_n = (v_n, v_1)$ with $e_i \in E$.Hence, by definition, the second condition means that the graph has no cycles.The first condition is necessary because otherwise there could be hidden nodes receiving input from the input nodes (and not vice versa). One could, of course, simply define input nodes as the nodes of a network which don't have incoming edges. But this might cause problems in the definition of recurrent networks.
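A sketch of both directions, in the answer's notation and assuming (as in Gallant's setup) that input cells have no incoming connections: ($\Leftarrow$) if such a numbering exists, then along any directed path $u_{i_1} \to u_{i_2} \to \dots \to u_{i_k}$ condition 2 forces $i_1 < i_2 < \dots < i_k$, so no path can ever return to its starting cell and no cycle can exist. ($\Rightarrow$) if the network is acyclic, it admits a topological order; list the $p$ input cells first and the remaining cells after them in that order, and renumber the cells along this listing. Every connection then runs from a smaller to a larger index, which is exactly conditions 1 and 2.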
_unix.191187 | I have not used wpa_supplicant before, so am confused as to whether a valid connection is being made. I used wpa_passphrase to get a psk and made that output my wpa_supplicant.conf. I then connect with:wpa_supplicant -Dwext -iwlan0 -c/etc/wpa_supplicant.confand this is the output:rfkill: Cannot open RFKILL control deviceioctl[SIOCSIWAP]: Operation not permittedwlan0: Trying to associate with e8:04:62:23:57:d0 (SSID='Guest' freq=2412 MHz)wlan0: Associated with e8:04:62:23:57:d0wlan0: WPA: Key negotiation completed with e8:04:62:23:57:d0 [PTK=CCMP GTK=CCMP]wlan0: CTRL-EVENT-CONNECTED - Connection to e8:04:62:23:57:d0 completed (auth) [id=0 id_str=]It seems to connect but there are errors at the start, what do they mean? Do they affect the connection or does this look like I am connected correctly? I ask this as I try to give wlan0 an address with dhcp pr udhcpc and it does not get one, any idea why?I have tried these two wpa_supplicant.conf'snetwork={ ssid=Guest #psk=xxxxxxxx psk=<numbers>}andupdate_config=1network={ ssid=Guest proto=RSN key_mgmt=WPA-PSK pairwise=CCMP TKIP group=CCMP TKIP psk=<numbers> }Both give the same thing | wpa_supplicant gives rfkill errors upon connection? | wifi;wpa supplicant | Those two errors of rfkill are by Rfkill, a tool for enabling and disabling wireless devices. Most of the time the kernel does not have rfkill enabled in it. And so there is no /dev/rfkill file present, and rfkill command will give errors like rfkill: Cannot open RFKILL control deviceControl device here means /dev/rfkill |
_datascience.22247 | I have written a function for 10 fold crossvalidation that I want to use for different models, e.g PPR, MARS. However, I get an error when running it and I cannot figure out why it does not work? My CV function: cv10 <- function(reg.fn, formula, dataset, ...){ set.seed(201) ### Number of observations nrow <- nrow(dataset) ### Create a permutation of the observations indices Ind <- sample.int(nrow,nrow, replace = FALSE) ### Compute the size of each of the 10 folds M <- nrow / 10 # 'fold size' ### Initialize the score score <- 0 ### The first fold will then contain the observations which correspond to.. ### ..the indices of the first M elements of Ind. for(i in 1:10){ beg <- i*M end <- (i+1)*M ### Data to train the model data.train <- dataset[Ind[-beg:-end],] ### Data to test the model data.fold <- dataset[Ind[beg:end],] ### Fit the model model <- reg.fn(formula,data=data.train,...) predicted.y <- predict(model,data.fold) ### Update the CV-score score <- sum((predicted.y - data.fold[,1])^2) / M } return(score/10)}Testing using ppr: cv.scores <- numeric(10)### Some codefor(i in 1:10){ score <- cv10(reg.fn = ppr, formula = y~., dataset = data, nterms=i) cv.scores[i] <- scores}cv.scoresThe traceback:> Error in matrix(NA, length(keep), object$q, dimnames = list(rn,> object$ynames)) : > length of 'dimnames' [1] not equal to array extent > 4.> matrix(NA, length(keep), object$q, dimnames = list(rn, object$ynames)) > 3.> predict.ppr(model, data.fold) > 2.> predict(model, data.fold) > 1.> cv10(reg.fn = ppr, formula = y ~ ., dataset = data, nterms = i)The data I am using: structure(list(y = c(23.0551546516262, 27.8893494373006, 3.32468370559938, -13.5852336127512, -5.14668013186906, -0.489523212484223, -14.328750654513, -4.26428395686341, -2.75486620989581, 17.3107345018601, 25.6193450849393, 0.605103858286016, -1.30909806542865, 2.03575942172917, -19.1193524499977, -1.46508279385589, 2.65778970954973, 14.8513018374104, -2.87449028138997, 1.37368992108124, -1.43518738939116, 0.0199676357940499, -1.549025998582, -4.06263285631006, -9.15130335901099, -2.62794216480131, -1.68473200963303, 3.15144283445608, 7.78027589015824, 9.09732626327383), x1 = c(0.286060694657523, -0.344546030966432, 0.325763726232689, -1.69658096808073, -1.2854825202758, -0.0750318862014798, 0.266937353823139, 0.0559340444850217, -2.30403430891787, 0.189004139305415, 0.693296170158882, 0.223809355083932, 0.398456942903131, 1.01347438447768, -0.64785307166209, 0.648452713333917, 0.207342703528518, 0.0643901392726141, 0.669380920067964, -0.374254446133507, -0.244000842201787, -0.988253138922366, 1.24206047974719, -1.68266602919039, 1.44289062580162, -0.465439746975312, 0.693661499094998, -0.0877255722586039, -0.955080382553146, 0.170100884691593), x2 = c(-0.343601401483176, -0.924078839603673, 0.973710320640175, 0.0267187344544633, -1.36283892301834, 0.105184057636645, -0.644019900369909, 0.960031901250783, 0.147336523178527, 0.339467057535232, -0.192287076626924, 0.0722969316029643, 0.389789911800799, -0.328247051156339, -0.090450711707476, 0.716681577815978, 0.0626860575507786, -0.69236622624416, 0.584444051353438, -0.0911664147267412, -0.315213328094698, -0.0806856079787168, 0.484583750517842, -0.120406402869962, 0.596077475841207, -0.36353784662963, -0.780093462571257, 0.324679908484668, 0.508548510215705, -0.193595813912055 ), x3 = c(0.982327855388361, 0.624091435911063, 0.621531522270016, -0.902870741076395, 0.931325903563023, -1.05264178470207, 0.307132555544596, 0.275469955530981, 2.78596687577565, 
-0.590390951909848, -0.0257046477898407, -0.122008374353289, 0.455026913225061, -0.607514744574133, 0.595817459312108, 1.48223488775224, 0.636854208609479, 0.201054337281812, -0.716437866742046, -2.30960460962945, -1.11690418809942, 0.296611889529358, 0.992033628272787, -0.769290105905667, -1.4112664763812, 0.972758797977034, 0.680563892580633, 0.0312007101558726, 2.40109797772769, 0.27149586035907), x4 = c(2.87744884192944, 2.97037391737103, 2.04590974515304, -2.09065303439274, -0.886272139381617, 0.258417838253081, -2.48789734393358, -1.14431498106569, 1.52785618370399, 2.43856811150908, 2.88160788919777, 0.143826744519174, -1.32458955561742, 0.850324050989002, -2.63397432630882, -0.270683331415057, 1.85416122945026, 2.19268380571157, -1.33175755385309, 1.08762756781653, 0.7014160878025, 0.907778979744762, -1.3183526317589, 0.718872689176351, -2.21834870846942, -0.750489700119942, -0.889076801016927, 1.39292777515948, 2.34955989941955, 2.1975970286876), x5 = c(1.48984236368162, 0.869139640762428, 0.748845036625717, 0.351786000608901, -1.47779050566991, -2.3154451409239, 2.20221698212952, 0.414262887380592, 0.244955910040375, 0.429121363729595, -0.317306195296495, -1.38016320237183, 0.694020488858179, 0.305431051706151, -0.398558943204744, -1.00163421976715, 1.29024064725421, -0.770948417017754, 0.741664981312622, 0.169399870781162, -1.35676745536567, 0.471865193264912, 0.960859048309877, 1.46760491067668, 1.4378809852526, 0.0349201858899876, -1.42177690061078, -1.43127605517511, -0.101638629745238, 1.49972397311187 )), .Names = c(y, x1, x2, x3, x4, x5), row.names = c(NA, 30L), class = data.frame) | R - Function for 10 fold crossvalidation | r;cross validation | null |
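To make the fold bookkeeping above easier to check, here is a minimal sketch of contiguous, non-overlapping 10-fold boundaries over a shuffled index vector. It is written in Python rather than R purely to illustrate the indexing (the names fold_indices, n_rows and n_folds are placeholders, not part of the question's code); in 1-based R terms, with M observations per fold, the i-th fold would run from (i-1)*M+1 to i*M.

import random

def fold_indices(n_rows, n_folds=10, seed=201):
    rng = random.Random(seed)
    perm = list(range(n_rows))
    rng.shuffle(perm)                        # permutation of the observation indices
    fold_size = n_rows // n_folds
    folds = []
    for i in range(n_folds):
        beg = i * fold_size                  # fold i covers [beg, end) of the permutation
        end = (i + 1) * fold_size if i < n_folds - 1 else n_rows
        folds.append(perm[beg:end])          # held-out rows; the remaining rows are training rows
    return folds

for i, fold in enumerate(fold_indices(30)):
    print("fold", i, "holds out rows", sorted(fold))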
_cogsci.10225 | Does the thought process in our mind for solving a problem or bringing out a solution to a problem depend on culture or language? If so, how can these skills be represented and addressed? | Is there evidence for cross-cultural differences in problem solving skills? | problem solving;linguistics;cross cultural psychology | null
_scicomp.4708 | I am working on estimating the position and orientation (pose) of a model (a rigid object) from its silhouette in an image. For this, I have constructed an error measure between the model in its pose and the silhouette, which looks roughly like:
$$\epsilon ( \bar{x} ) = \sum_{\forall i} \| f(\bar{x}, m_i) - s_i \|^2$$
where $\bar{x}$ is a six-dimensional vector describing the 3D translation and rotations as
$$ f( \bar{x}, p ) = R_{\bar{x}} \cdot p + t_{\bar{x}} $$
Ordinarily, this could be nonlinear least squares; however, there is a catch: an assignment needs to be made between model points $m_i$ and silhouette points $s_i$, which complicates the evaluation of the error measure. I am approaching the problem as a general nonlinear optimization problem. I already know that this error measure is continuous, but not continuously differentiable due to the aforementioned assignment. I do have gradient information, however, but it does not take the assignment into account and therefore is not completely accurate.
The question: Is there a method which can calculate/approximate and visualize the basins of attraction in this six-dimensional space? If this is absolutely not feasible, is there a method which can calculate/approximate the number of local minima within a bounded region? | Approximating and visualizing basins of attraction | optimization | Visualizing 6-dimensional domains is simply not easy. Unless, of course, your uber-dimensional monitor is back from the repairman. Getting parts from the future is never a quick thing to do, however, so mine languishes in a back room with my busted Holodeck.
Kidding aside, visualizing a basin in 6-d really is not easy. Even computing the limits of a basin of attraction will be difficult. The curse of dimensionality hounds you.
OK, even in a lower number of dimensions, identifying the boundaries of such a basin requires solving MANY optimization problems. After all, a basin of attraction need not be a convex set. It need not be connected. And, since an optimizer starting from distinct starting values will yield results that are still distinct, you must now do some clustering, testing that the multiple solutions truly are the same.
There are other issues of course. Suppose I ask to minimize the function (x-y)^2 in the (x,y) plane? Clearly any point on the line y=x is a solution, and all are equally good. But clustering will have problems here, as it will on any such degeneracies, and identifying degeneracies in 6-d is not always trivial.
Finally, you ask about identifying the NUMBER of local minimizers in any bounded region. This too is quite difficult for a general black-box problem. The field of global optimization has been working on problems like this for many years, though I don't think they can give you any hard, easy-to-compute answers in general. |
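To make the multistart-plus-clustering idea concrete, here is a minimal Python sketch under the stated caveats: the function and tolerance names (estimate_local_minima, tol) are illustrative assumptions, and the naive distance-based merging is exactly the clustering step that breaks down on degenerate cases such as minimizing (x-y)^2.

import numpy as np
from scipy.optimize import minimize

def estimate_local_minima(f, lower, upper, n_starts=200, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    minimizers = []
    for _ in range(n_starts):
        x0 = rng.uniform(lower, upper)                    # random start inside the box
        res = minimize(f, x0, method="L-BFGS-B", bounds=list(zip(lower, upper)))
        if not res.success:
            continue
        # naive clustering: merge any solution that lies within tol of a known minimizer
        if not any(np.linalg.norm(res.x - m) < tol for m in minimizers):
            minimizers.append(res.x)
    return minimizers

# Himmelblau's function has four isolated minima in [-5, 5]^2,
# so this typically reports four distinct minimizers.
f = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
mins = estimate_local_minima(f, np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(len(mins), "distinct minimizers found")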
_codereview.981 | Basically, I'm uploading an excel file and parsing the information then displaying what was parsed in a view.using System.Data;using System.Data.OleDb;using System.Web;using System.Web.Mvc;using QuimizaReportes.Models;using System.Collections.Generic;using System;namespace QuimizaReportes.Controllers{ public class UploadController : Controller { public ActionResult Index() { return View(); } [HttpPost] public ActionResult Index(HttpPostedFileBase excelFile) { if (excelFile != null) { //Save the uploaded file to the disc. string savedFileName = ~/UploadedExcelDocuments/ + excelFile.FileName; excelFile.SaveAs(Server.MapPath(savedFileName)); //Create a connection string to access the Excel file using the ACE provider. //This is for Excel 2007. 2003 uses an older driver. var connectionString = string.Format(Provider=Microsoft.ACE.OLEDB.12.0;Data Source={0};Extended Properties=Excel 12.0;, Server.MapPath(savedFileName)); //Fill the dataset with information from the Hoja1 worksheet. var adapter = new OleDbDataAdapter(SELECT * FROM [Hoja1$], connectionString); var ds = new DataSet(); adapter.Fill(ds, results); DataTable data = ds.Tables[results]; var people = new List<Person>(); for (int i = 0; i < data.Rows.Count - 1; i++) { Person newPerson = new Person(); newPerson.Id = data.Rows[i].Field<double?>(Id); newPerson.Name = data.Rows[i].Field<string>(Name); newPerson.LastName = data.Rows[i].Field<string>(LastName); newPerson.DateOfBirth = data.Rows[i].Field<DateTime?>(DateOfBirth); people.Add(newPerson); } return View(UploadComplete, people); } return RedirectToAction(Error, Upload); } public ActionResult Error() { return View(); } }}Not feeling so confident this is the best approach. Any suggestion any of you MVC3 vets have for this aspiring senior programmer? :) | Not feeling 100% about my Controller design. | c#;mvc;asp.net mvc 3;controller | null |
_softwareengineering.311080 | Intent
Packages should be designed to perform a single function well. Ideally this means that they should be highly modular, and different packages should be able to be 'plugged in' simply by ensuring that they share a common interface. However, I'm unsure how to do this without creating some kind of dependency between the packages or creating ugly 'integration classes' or a 'common interfaces' package.
An Example Concept
I don't want to get too bogged down in domain-level semantics, so I'll use the simple, though slightly contrived, example of an event logger.
Package A
Does XYZ and allows for a custom logging object to be provided. To ensure that the logger is valid, Package A contains a PackageA\Log interface. When a logging object is provided to the classX constructor, it throws an exception if the logger object does not implement PackageA\Log.
Package B
Does ABC and allows for a custom logging object to be provided. To ensure that the logger is valid, Package B contains a PackageB\Log interface. When a logging object is provided to the classA constructor, it throws an exception if the logger object does not implement PackageB\Log.
Package C
A unified logger that provides logging for Package A and Package B... et al. Contains a Log object that is built to provide a generic logger for all packages in the application.
Current Sub-optimal Ideas:
1. Implement neither of the interfaces in the class PackageC\Log but extend the class for each interface variation (e.g. PackageC\Loggers\PackageA extends PackageC\Log). Problem: Requires more maintenance and means that PackageC will have to be modified for every new package that it interfaces with.
2. Implement the interfaces PackageA\Log and PackageB\Log directly in the class PackageC\Log. Problem: If PackageC is re-used in another project, errors will be thrown if either PackageA or PackageB is not present.
3. Create a Common Interfaces package and have all packages implement/require those interfaces for their public interfaces. Problem: Massively impractical; it would only work for integrating packages that you created, with no third-party interaction.
Question
How can PackageC\Log fulfill the requirements of both PackageA\ClassX and PackageB\ClassA? In reality this question is usually more complex, since the interfaces required by PackageA and PackageB are probably not the same. So is option 1 (defined above) the only way to solve this? i.e. does the package that implements the interface have to write integration classes? | Package Interfaces - Coupling & Re-Usability | interfaces;code reuse;dependencies;packages;coupling | Since the actual operations for a logging component will be pretty much the same for any other component/class/... that uses it, it makes sense to only offer one interface from Package C. If Package A and/or Package B require other interfaces for their logging to work, it would make sense to use the Adapter design pattern to connect the interfaces of PackageA\Log to PackageC\Log and PackageB\Log to PackageC\Log. The interface PackageC\Log remains generic, while the predefined logging interfaces can be connected to it. |
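A minimal sketch of that Adapter arrangement, in Python with hypothetical names (PackageALog, PackageBLog, UnifiedLogger), since the question's PackageA\Log and PackageB\Log interfaces are only described abstractly: each adapter lives with the package that needs it (or in glue code), so Package C never has to know about A or B.

from abc import ABC, abstractmethod

class PackageALog(ABC):                     # the interface Package A insists on
    @abstractmethod
    def log(self, message: str) -> None: ...

class PackageBLog(ABC):                     # the interface Package B insists on
    @abstractmethod
    def write_entry(self, level: str, message: str) -> None: ...

class UnifiedLogger:                        # "Package C": generic, knows nothing about A or B
    def record(self, level: str, message: str) -> None:
        print(f"[{level}] {message}")

class PackageALogAdapter(PackageALog):      # adapts the unified logger to A's interface
    def __init__(self, logger: UnifiedLogger):
        self._logger = logger
    def log(self, message: str) -> None:
        self._logger.record("INFO", message)

class PackageBLogAdapter(PackageBLog):      # adapts the unified logger to B's interface
    def __init__(self, logger: UnifiedLogger):
        self._logger = logger
    def write_entry(self, level: str, message: str) -> None:
        self._logger.record(level, message)

logger = UnifiedLogger()
PackageALogAdapter(logger).log("hello from Package A")
PackageBLogAdapter(logger).write_entry("WARN", "hello from Package B")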
_webapps.85182 | I am trying to add an alias email address to my current (and quite old) Hotmail account. The current Hotmail service is managed under the Outlook.com umbrella, and it seems like Microsoft won't provide new email accounts under the hotmail.com domain. When trying to add an alias, the domain is pre-set to outlook.com.Is there a way to configure an alias under the Hotmail domain? | How to create an alias email address for the Hotmail domain? | outlook.com | null |
_webmaster.100724 | OK, so I have a registrar, a web host, and Cloudflare. My domain currently points to my host and I want to add Cloudflare, but none of this is making sense: my host says they don't support Cloudflare usage on their plans so they can't help me, while my registrar says to point my domain to Cloudflare and from Cloudflare to my host. Does this make any sense to anyone? | DNS config for server host and cloudflare | web hosting;dns;cloudflare | null
_unix.143798 | Here's the script. It is successful when I run it from the BASH prompt, but not in the script. Any ideas? When I say fails, I mean the sed regex doesn't match anything, so there is no replaced text. When I run it on the command line, it matches. Also, I might have an answer to this: it has to do with my grep alias and GREP_OPTIONS having a weird interplay. I'll post back with the details on those.
#!/bin/bash
for ((x = 101; x <= 110; x++)); do
 urls="${urls} www$x.site.com/config"
done
curl -s ${urls} | grep -i "Git Commit" | sed -r "s/.*Git Commit<\/td><td>([^<]+).*/\1/g" | Why does the same sed regex (after grep) fail when run in a bash script vs bash command line? | bash;command line;scripting;grep;regular expression | I was actually able to figure this out, and I figure I'd add it here for the next googler who bangs their head against the same wall. I had a grep alias and GREP_OPTIONS set. This caused color highlighting to remain on in the script, even when piping to another command. That usually doesn't play nicely with sed. Here's my .alias and options:
alias grep='grep -i --color'
export GREP_OPTIONS=--color=always
So when running from the script, it doesn't use the aliased command, and GREP_OPTIONS forces color to always be on. When I checked my alias and saw the --color option (which means auto, i.e. don't color output that gets piped to another command like sed), I was confused, because I forgot I had set GREP_OPTIONS as well, so I expected the grep in the script to have color set to auto by default (as it would if I hadn't set the global GREP_OPTIONS). But not so. Here are my new settings (I believe the --color flag to GREP_OPTIONS is redundant, but I leave it there as a reminder):
alias grep='grep --color=always'
export GREP_OPTIONS="--ignore-case --color"
That way, any time I am on the command line, I'll have highlighting on for all my greps (which is usually what I want). But in scripts it will default to coloring only when not piped to another command. I'll still have to add --color=always to many of my scripts (since I tend to prefer highlighting in most cases, even when piping to another command, unless I don't ever see the output). |
_unix.10503 | How can I list in long format all files (located in a directory) which belong to me (rights) and were modified more than 7 days ago? | Listing all my files modified more than X days ago, in long format | shell;wildcards | null |
_softwareengineering.309724 | I want to have or make a program that runs on a (Raspberry Pi) computer cluster, with one Pi executing only video content while another only handles music, etc., under a main program like an AI. Am I going in the right direction? Isn't this parallel computing? | Raspberry pi computer cluster question? | parallel programming | null
_codereview.114683 | My team and I have a very poor understanding of best practices in relation to telnet. We have Task.Delay and Task.Wait in the code, including async voids and from what I understand those are potential areas that can cause deadlocks and other issues.I'm trying to understand async in relation to running a telnet client. Is this a safe implementation of cancellation tokens and telnet connections?public enum TelnetClientStatus : int{ Unset = 0, Connected= 1, Disconnected = 2, Failed = 3}public TelnetClientStatus Status { get; set; } = TelnetClientStatus.Unset;public async Task OpenConnection(){ //Don't allow the session to keep trying to open after the 5 seconds var cancellationTokenSource = new CancellationTokenSource(); cancellationTokenSource.CancelAfter(TimeSpan.FromSeconds(5)); Status = await Task.Run(MakeConnection, cancellationTokenSource.Token);}private async Task<TelnetClientStatus> MakeConnection(){ _streamSocket = new StreamSocket(); _dataWriter = new DataWriter(_streamSocket.OutputStream); _dataReader = new DataReader(_streamSocket.InputStream) { InputStreamOptions = InputStreamOptions.Partial }; try { await _streamSocket.ConnectAsync(new HostName(HostName), Port.ToString()); return TelnetClientStatus.Connected; } catch (TaskCanceledException) //All other exceptions need to be investigated not ignored { //I understand a connection failed due to timeout, give the app some flag to work with //Maybe retry the connection? return TelnetClientStatus.Failed; } catch (Exception ex) { throw ex; }} | Async telnet connection over StreamSocket | c#;asynchronous | null |
_codereview.134980 | I'm not a security expert, but while checking over our AES implementation for our flagship product, I've noticed some strange things, like the output length having a relation with the input length and no apparent use of an IV.@Servicepublic class EncryptionServiceImpl implements EncryptionService { /** The logger for this class */ private static final Logger LOGGER = new Logger(EncryptionServiceImpl.class); /** There's one and only one instance of this class */ private volatile static EncryptionServiceImpl INSTANCE; /** True if EncryptionService is initialized. */ private boolean isInitialized = false; private Cipher cipherEncrypt; private Cipher cipherDecrypt; private String keyHex; /** * Constructor is private, use getInstance to get an instance of this class */ private EncryptionServiceImpl() { initialize(); } /** * Returns the singleton instance of this class. * * @return the singleton instance of this class. */ public static EncryptionServiceImpl getInstance() { if (INSTANCE == null) { synchronized (EncryptionServiceImpl.class) { if (INSTANCE == null) { INSTANCE = new EncryptionServiceImpl(); } } } return INSTANCE; } /** * Initialize EncryptionService. */ private synchronized void initialize() { if (!isInitialized){ // Get key from SystemSettings. SystemSettingsService systemSettingsService = (SystemSettingsService) ServiceFactory.getInstance().createService(SystemSettingsService.class); keyHex = systemSettingsService.getScmuuid(); byte[] keyBytes; // If keyHex is not blank (field scmuuid already exists in the database): if (StringUtils.isNotBlank(keyHex)) { keyBytes = hexToBytes(keyHex); SecretKeySpec secretKeySpec = new SecretKeySpec(keyBytes, AES); try { cipherEncrypt = Cipher.getInstance(AES); cipherDecrypt = Cipher.getInstance(AES); cipherEncrypt.init(Cipher.ENCRYPT_MODE, secretKeySpec); cipherDecrypt.init(Cipher.DECRYPT_MODE, secretKeySpec); } catch (InvalidKeyException e) { throw new InitializationFailureException( Failure to generate a new encryption key., e); } catch (NoSuchAlgorithmException e) { throw new InitializationFailureException( Failure to generate a new encryption key., e); } catch (NoSuchPaddingException e) { throw new InitializationFailureException( Failure to generate a new encryption key., e); } //EncryptionService is initialized. isInitialized = true; } else { /* * If keyHex is blank, either we have an SQL exception or the key hasn't * been generated yet. Trying to use the EncryptionService without proper * initialization, will throw a FatalException. * If the key hasn't been generated yet, the next exception will trigger * the caller to use the generateKey() method in the catch block. */ //throw new NoEncryptionkeyException(); } } } /** * @see shared.bs.encryption.EncryptionService#isInitialized() */ public boolean isInitialized() { return isInitialized; } /** * @see shared.bs.encryption.EncryptionService#decrypt() */ public String decrypt(String value) { if (StringUtils.isBlank(value)){ return null; } // NULL values from log files can be interpreted as a String with value null (see e.g. bug REDACTED) if (value != null && value.equalsIgnoreCase(null)) { return null; } if (getCipherDecrypt() == null) { throw new EncryptionFailureException(Decryption failure. 
EncryptionService is not properly initialized.); } byte[] encryptedBytes = null; byte[] decryptedBytes = null; try { encryptedBytes = hexToBytes(value); decryptedBytes = cipherDecrypt.doFinal(encryptedBytes); } catch (NumberFormatException e) { throw new EncryptionFailureException(Decryption failure., e); } catch (IllegalBlockSizeException e) { throw new EncryptionFailureException(Decryption failure., e); } catch (BadPaddingException e) { throw new EncryptionFailureException(Decryption failure., e); } return new String(decryptedBytes); } /** * @see shared.bs.encryption.EncryptionService#encrypt() */ public String encrypt(String value) { if (StringUtils.isBlank(value)){ return null; } if (getCipherEncrypt() == null) { throw new EncryptionFailureException(Encryption failure. EncryptionService is not properly initialized.); } byte[] encrypted = null; String encHex = null; try { encrypted = cipherEncrypt.doFinal(value.getBytes()); encHex = asHex(encrypted); } catch (IllegalBlockSizeException e) { throw new EncryptionFailureException(Encryption failure., e); } catch (BadPaddingException e) { throw new EncryptionFailureException(Encryption failure., e); } catch (NumberFormatException e) { throw new EncryptionFailureException(Encryption failure., e); } return encHex; } /** convert a byte array to a hex String */ private String asHex(byte buf[]) { StringBuffer strbuf = new StringBuffer(buf.length * 2); int i; for (i = 0; i < buf.length; i++) { if (((int) buf[i] & 0xff) < 0x10) { strbuf.append(0); } strbuf.append(Long.toString((int) buf[i] & 0xff, 16)); } return strbuf.toString(); } /** convert a hex String to a byte array */ private byte[] hexToBytes(String hex) { byte[] bts = new byte[hex.length() / 2]; for (int i = 0; i < bts.length; i++) { bts[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16); } return bts; } /** * @see shared.bs.encryption.EncryptionService#generateKey() */ public synchronized void generateKey(){ if (!isInitialized){ /* * Make sure the key really doesn't already exist and that an initialization failure * isn't the result of an earlier SQLException. * Try again retrieving the key from the database. */ SystemSettingsService systemSettingsService = (SystemSettingsService) ServiceFactory.getInstance().createService(SystemSettingsService.class); String tryKeyHex = systemSettingsService.getScmuuid(); if (StringUtils.isNotBlank(tryKeyHex)) { // Something came back from the database, we try to initialize again and return silently. initialize(); return; } // Generate a new 128 bit strong AES key. 
KeyGenerator kgen; try { kgen = KeyGenerator.getInstance(AES); } catch (NoSuchAlgorithmException e) { throw new InitializationFailureException( Failure to generate a new encryption key., e); } kgen.init(128); // 128 is in standard JCE SecretKey secretKey = kgen.generateKey(); byte[] keyBytes = secretKey.getEncoded(); keyHex = asHex(keyBytes); // We have a keyHex, it's time to generate the ciphers: SecretKeySpec secretKeySpec = new SecretKeySpec(keyBytes, AES); try { cipherEncrypt = Cipher.getInstance(AES); cipherDecrypt = Cipher.getInstance(AES); cipherEncrypt.init(Cipher.ENCRYPT_MODE, secretKeySpec); cipherDecrypt.init(Cipher.DECRYPT_MODE, secretKeySpec); } catch (InvalidKeyException e) { throw new InitializationFailureException( Failure to generate a new encryption key., e); } catch (NoSuchAlgorithmException e) { throw new InitializationFailureException( Failure to generate a new encryption key., e); } catch (NoSuchPaddingException e) { throw new InitializationFailureException( Failure to generate a new encryption key., e); } /* * Persist keyHex (field scmuuid in SystemSettings) and encrypt all existing non-encrypted * passwords and secure build/deploy-parameters in the database with this new key. * removed for brevity in this example: each of these is an extra method call. */ }//End: if (!isInitialized) } I got the impression that this has room for improvement, since information about the plaintext is leaking. Please note that because of legacy code throughout the entire project (mainly database field lengths), all our ciphertext output has to be shorter than 255 characters. In effect, this means that the output has to be 224 bytes long. And yes, I know that we're encrypting passwords. These are not user passwords. Those are handled through external systems like Active Directory and LDAP. These encrypted passwords are used to authenticate to external 3rd party systems where implementing a token-based authentication scheme either is not possible, is not feasible, or has been tried without success. | Implementing AES encryption in Java | java;cryptography;aes | Well, two things. The Cipher.getInstance is not so good: as you said, it's not using an IV; it should be using at least something like AES/CBC/PKCS5Padding and probably a longer key, i.e. 256 bits (which needs to be enabled for the JVM because of US export restrictions; OpenJDK will already have that on, otherwise you'll get an exception during setup).
Edit: See comment. The default setting (and you might want to confirm that with a debugger on the cipher objects - I just did that with some sample code) is ECB mode, which isn't secure at all.
Also take a look on https://crypto.stackexchange.com/ or perhaps crosspost there if you have more concerns.
For the exceptions I'd actually just catch GeneralSecurityException - there's not much point in catching three different subclasses as it's doing the same thing anyway.
The comments aren't amazing. E.g. I can hardly believe that LOGGER is "the logger for the class".
The thread-safe singleton is fine AFAIK, looking at articles on Double-Checked Locking.
The commented-out throw in initialize should really be removed because it's dead code that just confuses the reader.
The flow between generateKey, which basically retries the initialize call, and the initialize call during the constructor is confusing to say the least; either there's something missing, or generateKey basically does do initialisation but never sets isInitialized.
Deduplicating the crypto would be a great idea btw.
And please just initialise variables to their value instead of null with assignment - in most blocks in the code that's no problem at all and reduces the number of lines quite a bit.
The hex parsing/printing looks fine, but again I have a hard time believing there's no library you have available instead of doing it yourself.
Lastly, decrypt and encrypt aren't synchronized - where is the synchronisation happening? Cipher isn't thread-safe; perhaps just the single doFinal call is, but even then any of the thrown exceptions would mean that the cipher object has to be reset to a valid state separately. |
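The recommendation above is about the JCE, but the CBC-with-a-fresh-IV pattern looks the same in any library. Here is a minimal sketch in Python using the cryptography package, purely to show the shape of it (the function names are illustrative, not part of the reviewed code); the Java analogue would be Cipher.getInstance("AES/CBC/PKCS5Padding") initialised with an IvParameterSpec, with the IV stored alongside the ciphertext.

import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    iv = os.urandom(16)                                   # fresh random IV per message
    padder = padding.PKCS7(algorithms.AES.block_size).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv), default_backend()).encryptor()
    return iv + enc.update(padded) + enc.finalize()       # prepend the IV; it is not secret

def decrypt(key: bytes, blob: bytes) -> bytes:
    iv, ct = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CBC(iv), default_backend()).decryptor()
    padded = dec.update(ct) + dec.finalize()
    unpadder = padding.PKCS7(algorithms.AES.block_size).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

key = os.urandom(32)                                      # 256-bit key
blob = encrypt(key, b"secret")
assert decrypt(key, blob) == b"secret"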
_softwareengineering.134826 | I'm teaching myself J2EE technologies using Glassfish as my webserver and EJB container. I'm very interested in learning REST as well, and in developing an application that adheres to the rules of REST. My first project is to write a chat client. The user will go to a webpage and download a page with the JavaScript to run the chat client (which posts the data to the server and fetches it as well). The calls to post data, and to fetch data, from the webserver will be through a RESTful interface. Right now I've done this through servlets that listen on the /chatroom/getMessages and /chatroom/postMessage URIs. The wrinkle that I run into when I try to convert this to a RESTful service using JAX-RS that doesn't use servlets is that I feel like I'm reinventing the wheel. With the servlet specification I had this HTTPSession object that made it very easy to keep track of where someone is in the chat buffer (and therefore which messages should be sent to them when they visit /chatroom/getMessages). But now, when I make it completely RESTful and just use POJOs with JAX-RS (which I actually like better from a style standpoint), I have to reinvent session state if I want it, by handing the person a token and having them hand it back to me every time we talk, just like the automagically generated session cookie would have done for me if I was using servlets. WHY should I implement this with JAX-RS and abandon the servlets? I haven't seen any JAX-RS tutorials that mix servlets and JAX-RS (probably for good reason), so this doesn't seem to be an option. What I really want to know is what compelling reasons there are for going with REST. What does it buy me to not just use the servlets in a RESTful way? | Why should I use JAX-RS REST instead of normal servlets? | web services;rest;soa | A chat service as a RESTful API is a GOOD match!
I think resource-based interfaces are still a very important concept to talk about. The above answer is just not correct, even though it is over 4 years old. In general, there is really NOTHING wrong with building a chat server interface as a resource-oriented RESTful API. It is absolutely valid and a very good match for the principle. There are even example and tutorial pages out there that use this as a quite straightforward example to emphasize the intention behind a RESTful API.
Why it is GOOD
It can be a forum-like service or a realtime chat; it does not matter in this regard. It boils down to the domain model.
EXAMPLE 1 (classic realtime chat):
Say it is a more elaborate chat service; then one needs to create a chat room. The chatroom will be attached to him/her (the user object) and the chatlobby enclosing it (just assumed). One can now either check the user resource or the chatlobby resource, or filter through chatroom resources via additional query params, which is perfectly valid. Which post the client has lately viewed, and things like that, as long as it is not modelled in our chat service domain, is not part of the server state as described in Wikipedia. This is the session state of the client: the client tells which part of the collection of chat message resources it wants to GET. So how can one say that it does not match!
For the forum-like chat it is even simpler.
EXAMPLE 2 (forum-like chat):
A chat forumPost is a Domain Object, as well as a comment on this forumPost. Surely then they will need a unique identifier to allow for a unique resource identifier (URI). One can even see that this looks quite similar to the above example.
It is generally a good example, however you want to turn the chat. With an AOP audit you get created and modified times injected, and can handle authorization ... The core aspect is that this kind of API makes it necessary to understand the domain and the fundamental parts that matter to the user, the consumer, or interacting applications in general.
STATELESS AND STATEFUL:
It is really important to keep the principles in mind. The cited Wikipedia entry on REST, and its paragraph on STATELESS communication, is aimed at the client state, not the state of domain objects. Resources, a.k.a. domain objects, are by definition STATEFUL. The paragraph referenced is about session state, not domain state. If you intend to develop enterprise software, it is crucial to keep the boundaries and rationales straight.
IN A NUTSHELL:
Comments and Posts, as well as Users, are perfect examples for a RESTful API. For this reason a chat service is absolutely fine to start with. Having a sequential identifier will definitely not hurt. Go further and try to use links for the relations (href) and define the relation via the (rel) relation attribute. Then the client can use domain terms to consume the correct endpoints. The client can, via this technique (HATEOAS), explore the entire graph of objects without any knowledge of the domain objects themselves.
Hope this helps to clarify; even though this is an older entry, the topic is still absolutely relevant. |
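To give the resource layout a concrete shape: the sketch below is Python/Flask rather than JAX-RS, and the paths, field names and in-memory store are assumptions made up for illustration. The point it shows is the one made above about session state: the client's position in the message collection travels as a query parameter, so the server holds only domain state (rooms and messages), not per-client session objects.

from flask import Flask, jsonify, request

app = Flask(__name__)
rooms = {1: {"name": "lobby", "messages": []}}       # toy in-memory domain state

@app.get("/chatrooms/<int:room_id>/messages")
def get_messages(room_id):
    since = int(request.args.get("since", 0))        # client-held position in the buffer
    msgs = rooms[room_id]["messages"][since:]
    return jsonify(messages=msgs, next=since + len(msgs))

@app.post("/chatrooms/<int:room_id>/messages")
def post_message(room_id):
    rooms[room_id]["messages"].append(request.get_json()["text"])
    return "", 201

if __name__ == "__main__":
    app.run()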
_codereview.129814 | Just looking for some constructive (harsh) criticism of a project I've completed and handed in. This is a theoretical implementation of the system, specifically has a simplified registration number and a simplified driving licence number generator. I've added in the other classes for clarity (and criticism is welcome for those) but would like focus on the RegistrationNumber.java class and LicenceNumber.java class and if I have guaranteed uniqueness.RentalAgency.javapackage carhireapp;import java.util.*;/* * Author: Andrew Cathcart, S130315904 * Main rental agency class * Contains the companies fleet of cars that they rent, as well as methods to get the currently * rented cars, get the number of available cars of a certain size, see what car a certain * driving licence is renting, issue a car to an individual with a valid licence and also terminate a rental. */public class RentalAgency { private static List<Vehicle> ListOfCars = new ArrayList<Vehicle>(); private static Map<DrivingLicence, Vehicle> FLEET = new HashMap<DrivingLicence, Vehicle>(); // When RentalAgency is created, populate the ListOfCars public RentalAgency() { populateList(); } // A method to populate the map of vehicles with 20 small cars and 10 large // cars private void populateList() { for (int i = 0; i < 20; i++) { ListOfCars.add(new SmallCar()); } for (int i = 0; i < 10; i++) { ListOfCars.add(new LargeCar()); } } // Returns the entire List listOfCars public List<Vehicle> getListOfCars() { return ListOfCars; } // Returns the entire map FLEET public Map<DrivingLicence, Vehicle> getFleet() { return FLEET; } /* * True for small, false for large. For all objects in the list, if the * vehicle in the list is a SmallCar object and is not rented, add to the * counter */ public int availableCars(Boolean isSmall) { int count = 0; for (Vehicle temp : ListOfCars) { if (temp.isSmall() == isSmall) if (!temp.isRented()) { count++; } else if (!temp.isRented()) { count++; } } return count; } // Returns a list of vehicle objects that are currently rented public List<Vehicle> getRentedCars() { List<Vehicle> rentedCars = new ArrayList<Vehicle>(); for (Vehicle temp : ListOfCars) { if (temp.isRented()) { rentedCars.add(temp); } } return rentedCars; } // Returns the car matching a driving licence public Vehicle getCar(DrivingLicence licence) { if (FLEET.containsKey(licence)) { return FLEET.get(licence); } else return null; } public void issueCar(DrivingLicence licence, Boolean isSmall) { Calendar dob = Calendar.getInstance(); dob.setTime(licence.getDriverDateOfBirth()); Calendar today = Calendar.getInstance(); int age = today.get(Calendar.YEAR) - dob.get(Calendar.YEAR); if (today.get(Calendar.MONTH) < dob.get(Calendar.MONTH)) { age--; } else if (today.get(Calendar.MONTH) == dob.get(Calendar.MONTH) && today.get(Calendar.DAY_OF_MONTH) < dob.get(Calendar.DAY_OF_MONTH)) { age--; } Calendar doi = Calendar.getInstance(); doi.setTime(licence.getDateOfIssue()); int yearsHeld = today.get(Calendar.YEAR) - doi.get(Calendar.YEAR); if (today.get(Calendar.MONTH) < doi.get(Calendar.MONTH)) { yearsHeld--; } else if (today.get(Calendar.MONTH) == doi.get(Calendar.MONTH) && today.get(Calendar.DAY_OF_MONTH) < doi.get(Calendar.DAY_OF_MONTH)) { yearsHeld--; } /* * Code to calculate the age of the person and also how many years * they've held their licence Credited to user Zds from * stackoverflow.com and Irene Loos from coderanch.com * http://www.coderanch.com/t/391834/java/java/calculate-age * 
http://stackoverflow.com/questions/1116123/how-do-i-calculate- * someones-age-in-java */ boolean flag = false; // A simple flag to toggle depending on if we find an appropriate car to // issue if ((licence.isFull()) && (!licence.getCurrentlyRenting())) { // If the individual has a full licence and is not currently renting // a car for (Vehicle temp : ListOfCars) { // iterates through the list of Vehicles if (temp.isSmall() == isSmall) { // checks if the user entered true or false for isSmall and // finds cars in the list from this if ((age >= 21) && (yearsHeld >= 1)) { // checks their current age and how many years they've // owned their licence against the requirements if ((!temp.isRented()) && (temp.isFull())) { // It then checks that the car in the list is not // rented and has a full tank temp.setIsRented(true); licence.setCurrentlyRenting(true); FLEET.put(licence, temp); flag = false; break; } else if ((age >= 25) && (yearsHeld >= 5) && (!temp.isRented()) && (temp.isFull())) { temp.setIsRented(true); licence.setCurrentlyRenting(true); FLEET.put(licence, temp); flag = false; break; } else flag = true; } else flag = true; } else flag = true; } } else flag = true; if (flag) { System.out.println(An appropriate car could not be issued); } } // Removes key:value pairs from a map when given a licence object // Also sets DrivingLicence's currentlyRenting status to false and Vehicle's // isRented status to false // Returns the fuel required to fill the tank, else -1 public int terminateRental(DrivingLicence licence) { if (FLEET.containsKey(licence)) { int fuelRequiredToFill = ((FLEET.get(licence).getFuelCapacity()) - (FLEET.get(licence).getCurrentFuel())); licence.setCurrentlyRenting(false); FLEET.get(licence).setIsRented(false); FLEET.remove(licence); return fuelRequiredToFill; } return -1; }}Vehicle.javapackage carhireapp;/* * Author: Andrew Cathcart, S130315904 * A Vehicle interface */public interface Vehicle { public String getRegNum(); public int getFuelCapacity(); public int getCurrentFuel(); public void isTankFull(); public boolean isFull(); public boolean isRented(); public void setIsRented(Boolean bool); public int addFuel(int amount); public int drive(int distance); public boolean isSmall();}AbstractVehicle.javapackage carhireapp;/* * Author: Andrew Cathcart, S130315904 * An Abstract class which implements the Vehicle interface * This class does not include implementation for the drive method in the Vehicle interface */public abstract class AbstractVehicle implements Vehicle { private RegistrationNumber regNum; private int fuelCapacity; private int currentFuel; private boolean isFull; private boolean isRented; public AbstractVehicle() { RegistrationNumber regNumObj = RegistrationNumber.getInstance(); regNum = regNumObj; isFull = true; setIsRented(false); } public String getRegNum() { return regNum.getStringRep(); } public void setFuelCapacity(int capacity) { this.fuelCapacity = capacity; } public int getFuelCapacity() { return fuelCapacity; } public int getCurrentFuel() { return currentFuel; } public void setCurrentFuel(int amount) { currentFuel = amount; isTankFull(); } public void isTankFull() { if ((currentFuel - fuelCapacity) >= 0) { isFull = true; } else isFull = false; } // Calls the isTankFull method and then returns isFull public boolean getIsFull() { isTankFull(); return isFull; } public boolean isRented() { return isRented; } public void setIsRented(Boolean bool) { isRented = bool; } public int addFuel(int amount) { if (amount <= 0) { throw new IllegalArgumentException(You 
must add an amount greater than zero); } if (isFull || !isRented) { return 0; } // If the tank is full or the car is not rented return zero if ((currentFuel + amount) <= fuelCapacity) { currentFuel += amount; if (currentFuel == fuelCapacity) { isFull = true; return amount; } else return amount; } // If the current fuel plus the amount to add is less than or equal to // the fuel capacity, add the amount to the current fuel and if the // current fuel is equal to the fuel capacity then set the boolean // isFull to true and return the amount added if ((currentFuel + amount) > fuelCapacity) { int difference = (fuelCapacity - currentFuel); currentFuel = fuelCapacity; isFull = true; return difference; } // Covers the case where the amount added would cause the current fuel // to exceed the fuel capacity return -1; }}SmallCar.javapackage carhireapp;/* * Author: Andrew Cathcart, S130315904 * Implements the drive method for a small car * Super class is AbstractVehicle */public class SmallCar extends AbstractVehicle {private int smallFuelCapacity = 45; private boolean isSmall = true; // Calls the super constructor, sets the fields appropriately public SmallCar() { super(); super.setFuelCapacity(smallFuelCapacity); super.setCurrentFuel(smallFuelCapacity); } public boolean isSmall() { return isSmall; } // returns the number of whole Litres of fuel consumed during the journey public int drive(int distance) { int fuelUsed = 0; if (distance < 0) { throw new IllegalArgumentException(Distance cannot be less than zero); } if (super.isRented() && (super.getCurrentFuel() > 0)) { fuelUsed = (distance / 25); super.setCurrentFuel(super.getCurrentFuel() - fuelUsed); return fuelUsed; } return fuelUsed; } public boolean isFull() { boolean bool = super.getIsFull(); return bool; } }LargeCar.javapackage carhireapp;/* * Author: Andrew Cathcart, S130315904 * Implements the drive method for a large car * Super class is AbstractVehicle */public class LargeCar extends AbstractVehicle { private int largeFuelCapacity = 65; private boolean isSmall = false; // Calls the super constructor, sets the fields appropriately public LargeCar() { super(); super.setFuelCapacity(largeFuelCapacity); super.setCurrentFuel(largeFuelCapacity); } public boolean isSmall() { return isSmall; } // returns the number of whole Litres of fuel consumed during the journey public int drive(int distance) { int fuelUsed = 0; if (distance < 0) { throw new IllegalArgumentException(Distance cannot be less than zero); } if (super.isRented() && (super.getCurrentFuel() > 0)) { if (distance <= 50) { fuelUsed = (distance / 15); super.setCurrentFuel(super.getCurrentFuel() - fuelUsed); return fuelUsed; } else { int moreThan = (distance - 50); fuelUsed = (50 / 15) + (moreThan / 20); super.setCurrentFuel(super.getCurrentFuel() - fuelUsed); return fuelUsed; } } return fuelUsed; } public boolean isFull() { boolean bool = super.getIsFull(); return bool; }}Implementation - Driving LicenceCar Registration NumberFor this project (though not in real life) a car registration number has two components - a single letter followed by a four digit number. For example: - a1234You must provide access to each component and an appropriate string representation of the registration number.Registration numbers are unique. 
You must guarantee that no two cars have the same registration number.RegistrationNumber.javapackage carhireapp;import java.util.HashMap;import java.util.Map;import java.util.Random;public final class RegistrationNumber { private static final Map<String, RegistrationNumber> REGNUM = new HashMap<String, RegistrationNumber>(); // Stores stringRep with object private final char letter; // One letter private final int numbers; // Four numbers private final String stringRep; // letter + number, e.g. A1234 private RegistrationNumber(char letter, int numbers) { this.letter = letter; this.numbers = numbers; this.stringRep = String.format(%s%04d, letter, numbers); // Pad the string to make sure we always get a four digit number } public static RegistrationNumber getInstance() { Random random = new Random(); // Using the random class instead of math.random as it is a static // method final Character letter = (char) (random.nextInt(26) + 'A'); final int numbers = random.nextInt(9000) + 1000; final String stringRep = letter + numbers + ; if (!REGNUM.containsKey(stringRep)) { REGNUM.put(stringRep, new RegistrationNumber(letter, numbers)); } // If the randomly generated registration plate is unique then create a // new object and return a reference to it else if (REGNUM.containsKey(stringRep)) { return getInstance(); } // If the randomly generated registration plate is not unique, call the // getInstance method again return REGNUM.get(stringRep); // return a reference } public char getLetter() { return letter; } public int getNumbers() { return numbers; } public String getStringRep() { return stringRep; } public String toString() { return RegistrationNumber [letter= + letter + , numbers= + numbers + , stringRep= + stringRep + ]; }}Driving LicenceYou must guarantee the uniqueness of licence numbers.LicenceNumber.javapackage carhireapp;import java.util.Date;import java.util.HashMap;import java.util.Map;import java.util.Random;import java.util.Calendar;public final class LicenceNumber { private static final Map<String, LicenceNumber> LICENCENUM = new HashMap<String, LicenceNumber>(); private final String initials; private final int yearOfIssue; private final int serialNum; private final String stringRep; private LicenceNumber(String initials, int yearOfIssue2, int serialNum) { this.initials = initials; this.yearOfIssue = yearOfIssue2; this.serialNum = serialNum; stringRep = initials + - + yearOfIssue2 + - + serialNum; } public static LicenceNumber getInstance(Name fullName, Date dateOfIssue) { final String initials = fullName.getFirstName().substring(0, 1) + fullName.getLastName().substring(0, 1); Calendar cal = Calendar.getInstance(); cal.setTime(dateOfIssue); final int yearOfIssue = cal.get(Calendar.YEAR); Random r = new Random(); // Using the random class instead of math.random as it is a static // method final int serialNum = r.nextInt(11); final String stringRep = initials + - + yearOfIssue + - + serialNum; if (!LICENCENUM.containsKey(stringRep)) { LICENCENUM.put(stringRep, new LicenceNumber(initials, yearOfIssue, serialNum)); } else if (LICENCENUM.containsKey(stringRep)) { return getInstance(fullName, dateOfIssue); } // If two people have the same name, date of birth and are generated the // same serial number, call the getInstance again return LICENCENUM.get(stringRep); // If the licence number is unique then create a // new object, put it into the HashMap and return a reference to it, // else return a reference } public String getInitials() { return initials; } public int getYearOfIssue() { return 
yearOfIssue; } public int getSerialNum() { return serialNum; } public String getStringRep() { return stringRep; } @Override public String toString() { return LicenceNumber [initials= + initials + , yearOfIssue= + yearOfIssue + , serialNum= + serialNum + , stringRep= + stringRep + ]; }}Name.javapackage carhireapp;/* * Author: Andrew Cathcart, S130315904 * Relied upon by LicenceNumber class * A simple class used to create and store information about a persons name */public final class Name { private final String firstName; private final String lastName; public Name(String firstName, String lastName) { if ((firstName == null) || (firstName.isEmpty())) { throw new IllegalArgumentException(firstName cannot be null or empty); } if ((lastName == null) || (lastName.isEmpty())) { throw new IllegalArgumentException(lastName cannot be null or empty); } this.firstName = firstName; this.lastName = lastName; } public String getFirstName() { return firstName; } public String getLastName() { return lastName; } @Override public String toString() { return firstName= + firstName + lastName= + lastName; }}DrivingLicence.javapackage carhireapp;import java.util.Date;/* * Author: Andrew Cathcart, S130315904 * Relies on the Name and LicenceNumber classes * A simple class to store information about a driving licence */public final class DrivingLicence { private final Name driverName; private final Date driverDateOfBirth; private final Date dateOfIssue; private final LicenceNumber number; private final boolean isFull; private boolean currentlyRenting = false; public DrivingLicence(Name driverName, Date dateOfBirth, Date dateOfIssue, boolean isFull) { this.driverName = driverName; this.driverDateOfBirth = dateOfBirth; this.dateOfIssue = dateOfIssue; this.number = LicenceNumber.getInstance(driverName, dateOfIssue); this.isFull = isFull; } public Name getDriverName() { return driverName; } public Date getDriverDateOfBirth() { return driverDateOfBirth; } public Date getDateOfIssue() { return dateOfIssue; } public LicenceNumber getNumber() { return number; } public boolean isFull() { return isFull; } public void setCurrentlyRenting(Boolean bool) { currentlyRenting = bool; } public boolean getCurrentlyRenting() { return currentlyRenting; } @Override public String toString() { return \nDrivingLicence \ndriverName= + driverName.toString() + \ndriverDateOfBirth= + driverDateOfBirth + \ndateOfIssue= + dateOfIssue + \nnumber= + number.toString() + \nisFull= + isFull + ]; }} | A Car Rental Agency - guaranteeing uniqueness | java;beginner | null |
_webmaster.58552 | On an HTML5 website, there is a single page, index.php. Its code contains the other 5 pages. AJAX navigation is used, so the URLs of the other pages look like example.com/#!/page_Example. I have tried to submit many addresses of this kind to Google using an XML sitemap, but Google indexes only the main page. Also, I've made some 301 redirects from example.com/Example to example.com/#!/page_Example, and submitted them, but the main page is still the only one indexed. How could I submit those URLs to Google? | Submit example.com/#!/page URLs to Google | seo;google;sitemap;ajax | According to Google, you should list your AJAX URLs in a sitemap exactly as you say you've done:
4. Consider updating your Sitemap to list the new AJAX URLs
Crawlers use Sitemaps to complement their discovery crawl. Your Sitemap should include the version of your URLs that you'd prefer to have displayed in search results, so in most cases it would be http://example.com/ajax.html#!key=value.
At this point, I'll have to assume that the problem is somewhere else. In particular, keep in mind that the method Google uses to fetch AJAX pages is not the same as how normal browsers do it, and that it requires extra server-side support (parsing the _escaped_fragment_ query parameter and serving the appropriate server-side generated version of the AJAX content based on it). If your index.php is not set up to do that, Google will never even see your AJAX content, and thus cannot index it.
To start with, I'd suggest using the Fetch as Googlebot feature of the Google Webmaster Tools to see if Google can even load your AJAX content. If it cannot, follow the instructions here to set up your site so that it can. |
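For the server-side support mentioned above: the site in the question is PHP, but the shape of the _escaped_fragment_ handling is easy to sketch in a few lines of Python (the route, snapshot contents and variable names are assumptions for illustration only). When the crawler rewrites example.com/#!/page_Example to example.com/?_escaped_fragment_=page_Example, the server returns a pre-rendered HTML version of that state instead of the empty AJAX shell.

from flask import Flask, request

app = Flask(__name__)

SNAPSHOTS = {                                   # hypothetical pre-rendered pages, keyed by fragment
    "page_Example": "<html><body>Full server-rendered content of page_Example</body></html>",
}
AJAX_SHELL = "<html><body><script src='app.js'></script></body></html>"

@app.route("/")
def index():
    fragment = request.args.get("_escaped_fragment_")
    if fragment in SNAPSHOTS:
        return SNAPSHOTS[fragment]              # crawlable HTML for that AJAX state
    return AJAX_SHELL                           # the normal single-page shell for browsers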
_unix.387704 | I have a Raspberry Pi which swaps to an external NAS (please don't judge me :) ) over CIFS (the link is a direct 100Mbit full-duplex Ethernet connection). Swap works fine until a certain threshold, which is about 100MB. The system then freezes and the network connection with the NAS stops. The Samba share is mounted with automount. Where do I start if I want to find out why the Pi stops using the swapfile? I don't mind the slowdown because I'm just compiling heavy programs, but when it freezes I'm forced to reboot. | System freezes when swapping | raspberry pi;swap;cifs | null
_codereview.172879 | I'm new to programming. I wrote the following code in python. It's a cmd based address book.import osfrom reportlab.pdfgen import canvasfrom cryptography.fernet import Fernetp_d_D = {}d_D = {}dd = {}file_path = os.path.join(os.path.expanduser('~'), 'Desktop', str(os.path.basename(__file__)) + '__database.txt') # The# Location of dict database stored in the users HDDdata = {}key = b'kXDB2gXHHwOoVC6FCwKjDhCa3JW0vRiKkHv3iBuPhx0=' # Secret key value # for Fernet encryption# Loads the decrypted dict into the memory (data gets the decrypted values)def make_eval2(): make_eval() global dd global data decrypt_dict() data = eval(str(dd))# A function that encrypts or decrypts dict and stores into dict dbdef encrypt_dict(write=False, decrypt=False): l = [] for k, v in data.items(): for i in v: if decrypt is False: a = encrypt_decrypt(i) else: a = encrypt_decrypt(i, decrypt=True) l.append(a) ch = [l[x:x + 4] for x in range(0, len(l), 4)] for it in ch: d_D.setdefault(it[0], (it[0], it[1], it[2], it[3])) if write is True: write_read_data('w', text=str(d_D)) global p_d_D p_d_D = d_D.copy() d_D.clear()# Decrypts the dict data and stores into dict dddef decrypt_dict(): l = [] global data global dd if len(data) < 1: print('No entry') else: for k, v in data.items(): for i in v: a = encrypt_decrypt(i, decrypt=True) l.append(a) ch = [l[x:x + 4] for x in range(0, len(l), 4)] for it in ch: dd.setdefault(it[0], (it[0], it[1], it[2], it[3]))# A function that does the actual enc. or dec. of provided dict valuesdef encrypt_decrypt(x, decrypt=False): global key f = Fernet(key) x_en = x.encode() if decrypt is False: a = f.encrypt(x_en).decode() return a else: a = f.decrypt(x_en).decode() return a# Initial file maker that creates an empty dict dbdef make_dict_file_to_write(): global file_path if os.path.exists(file_path): pass else: open(file_path, 'w')# Loads the dict db from HDD to dict data (into the memory)def make_eval(): global data x = write_read_data('r', read=True) if x == '': pass else: data = eval(x)# A modified writing function for writing and reading to/from filesdef write_read_data(mode, **kwargs): global file_path read = kwargs.get('read', False) text = kwargs.get('text', None) with open(file_path, mode) as f_to_w_read: if 'read' in kwargs: if read is True: return f_to_w_read.read() else: return None if 'text' in kwargs: if text is not None and mode == 'w': x = f_to_w_read.write(str(text)) return x return None# Returns the user given entries as encrypted values in the form of a tupledef get_contact_details(): a = '' tags = ['name', 'address', 'cell-number', 'email'] for tag in tags: x = input('Please enter contact %s: ' % tag) a += x.lower() + ',' name, address, cell, email = a.rstrip(',').split(',') h_name, h_address, h_cell, h_email = encrypt_decrypt(name), encrypt_decrypt(address), encrypt_decrypt(cell), \ encrypt_decrypt(email) return h_name, h_address, h_cell, h_email# Adds a new contact entry to the dict db in HDDdef add_contact(): make_eval() contact = get_contact_details() data.setdefault(contact[0], contact) write_read_data('w', text=str(data)) clear_dict() print('New contact added\r\n')# Prints requested entry into a PDF file in the current folder# It is a part of search_contact() functiondef print_data(x): if x in data.keys(): xx = ('Name: %s\r\nAddress: %s\r\nCell-number : %s\r\nEmail: %s\r\n' % (data[x][0].capitalize(), data[x][1]. 
capitalize(), data[x][2].capitalize(), data[x][3].lower())) print(xx) file_name = '%s\'s contact information' % x.capitalize() c = canvas.Canvas(file_name + '.pdf', pagesize='A4') t = c.beginText() t.setFont('Times-Bold', 12) t.setTextOrigin(30, 700) t.textLines(file_name + ' :\r\n') t.textLines(xx) c.drawText(t) c.showPage() c.save() print('Document will be printed after quitting\r\n')# Prints all the entries of the dict db into readable formatdef print_all(): make_eval() if len(data) < 1: print('No entry') else: encrypt_dict(decrypt=True) global p_d_D for k, v in p_d_D.items(): print('Name: %s\r\nAddress: %s\r\nCell-number : %s\r\nEmail: %s\r\n' % (v[0].capitalize(), v[1].capitalize(), v[2], v[3].lower())) p_d_D.clear()# Allows a requested entry to be printed out in the PDF formatdef search_contact(): make_eval2() search_item = input('Enter the full name of the person you wish to search: \r\n').lower() if search_item in data.keys(): print_data(search_item) clear_dict() else: print('No match found\r\n')# Part of edit_contact() function, which provides different editing choices to the userdef choice_edit(): while True: choiceEdit = input('Press A to change address\nPress C ' 'to change cell-number\nPress E to change email\nPress Q ' 'to quit without editing\n').lower() if choiceEdit == 'a' or choiceEdit == 'c' or choiceEdit == 'e' or choiceEdit == 'q': return choiceEdit print('Not correct option\r\n')# Part of edit_contact() functiondef do_it_again(): while True: try: doAgain = int(input('Press 1 to edit again\nPress 2 to save and quit\r\n')) if doAgain == 1 or doAgain == 2: return doAgain print('Not correct option\r\n') except ValueError: print('Select an integer\r\n')# Part of edit_contact() functiondef print_try(x=None): xx = input('Enter new %s\r\n' % x).lower() return xx# Stores the updated dict to the dict dbdef update(f, c): x = tuple(f) # Converts the list X into a tuple data.update({(c, x)}) # Updates data dict encrypt_dict(write=True)# Part of edit_contact() function which gives the option to store edited data or re-edit a requested entrydef next_gen(list_x, edit_pointer, edit_name, print_call=''): list_x[edit_pointer] = print_try(x=print_call) update(list_x, edit_name) doooAgain = do_it_again() if doooAgain == 1: edit_contact() if doooAgain == 2: update(list_x, edit_name) clear_dict()# Edits an stored entry on requestdef edit_contact(): make_eval2() # Decrypts the dict to readable format var_edit_name = input('Enter the contact name that you with to edit: \r\n').lower() if var_edit_name in data.keys(): z = list(data[var_edit_name]) # Converts the tuple into a list to allow edits cho_edit = choice_edit() if cho_edit == 'a': next_gen(z, 1, var_edit_name, print_call='address') elif cho_edit == 'c': next_gen(z, 2, var_edit_name, print_call='cell-number') elif cho_edit == 'e': next_gen(z, 3, var_edit_name, print_call='email') elif cho_edit == 'q': update(z, var_edit_name) clear_dict() else: print('Please select a correct option\r\n') edit_contact() else: print('No such name found\r\n')# Clears the memories of all the dictsdef clear_dict(): global dd, data, p_d_D, d_D dd.clear() data.clear() p_d_D.clear() d_D.clear()# Deletes an entrydef delete_contact(): make_eval2() # Decrypts the dict to readable format var_del_name = input('Enter the contact name that you with to delete: \r\n').lower() if var_del_name in data.keys(): data.pop(var_del_name, None) encrypt_dict(write=True) # Encrypts the dict to unreadable format and stores it in the dict database clear_dict() # Clears the memories 
of all the dicts else: print('No such name found\r\n')# Initially checks if the dict db is empty or notdef initial_emptyness_checker(): make_eval() if len(data) < 1: print('Address book is currently empty\r\n') else: return True# Allows to quit from the appdef quit_here(): make_eval() write_read_data('w', text=str(data)) quit()# Provides options to the user to choose a set of actiondef option_switcher(): options = { 1: add_contact, 2: search_contact, 3: delete_contact, 4: edit_contact, 5: print_all, 6: quit_here} xx = ['Add contract', 'Search', 'Delete', 'Edit', 'Print all entries', 'Quit'] c = 1 for i in xx: print('Option %d: %s' % (c, i)) c += 1 try: select = int(input('\nPlease select an option \r\n')) if 1 <= select < 7: options[select]() else: print('Options 1-6\r\n') except ValueError: print('Invalid option\r\n') option_switcher()def main(): print('Welcome to AddressBookEXtREM V1 by AJ\r\n') make_dict_file_to_write() initial_emptyness_checker() while True: option_switcher()if __name__ == '__main__': main()The above app allows a user to make contact lists and store them in the hard disk. Moreover, it provides security to the stored database file via encryption. User can anytime update or delete or edit stored contacts. Moreover, it provides the user to print out the searched item into a PDF file. The code is pretty long. I hope someone could suggest me to make it shorter or more efficient. | Command based Address book app with encryption and store facilities | python;cryptography | null |
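One detail of the code above worth a closer look is persistence: the contact dict is written to disk with str() and read back with eval(), which is fragile and risky on untrusted input. Below is a hedged sketch of the same round trip with the standard json module; the path and function names are placeholders, not drop-in replacements for the app's write_read_data.

import json

DB_PATH = "addressbook.json"                    # placeholder path

def save_contacts(contacts):
    # contacts maps name -> (name, address, cell, email), all strings
    with open(DB_PATH, "w") as fh:
        json.dump({k: list(v) for k, v in contacts.items()}, fh)

def load_contacts():
    try:
        with open(DB_PATH) as fh:
            return {k: tuple(v) for k, v in json.load(fh).items()}
    except FileNotFoundError:
        return {}

save_contacts({"alice": ("alice", "1 main st", "555-0100", "a@example.com")})
print(load_contacts())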
_webapps.46477 | How can I show the name or email address for responses to a form survey in the results spreadsheet? We want to see each other's responses, but the results sheet seems to be anonymous and only shows time stamps of responses, with no identifier. | How to see names for survey responses? | google forms | null
_cs.39871 | While trying to improve the performance of my collision detection class, I found that ~80% of the time spent at the gpu, it spent on if/else conditions just trying to figure out the bounds for the buckets it should loop through.More precisely: each thread gets an ID, by that ID it fetches its triangle from the memory (3 integers each) and by those 3 it fetches its vertices(3 floats each). Then it transforms the vertices into integer grid points (currently 8x8x8) and transforms them into the triangle bounds on that gridTo transform the 3 points into bounds, it finds the min/max of each dimension among each of the pointsSince the programming language I am using is missing a minmax intrinsic, I made one myself, looks like this: procedure MinMax(a, b, c): local min, max if a > b: max = a min = b else: max = b min = a if c > max: max = c else: if c < min: min = c return (min, max)So on the average it should be 2.5 * 3 *3 = 22.5 comparisons which ends up eating up way more time than the actual triangle - edge intersection tests (around 100 * 11-50 instructions). In fact, I found that pre-calculating the required buckets on the cpu (single threaded, no vectorization), stacking them in a gpu view along with bucket definition and making the gpu do ~4 extra reads per thread was 6 times faster than trying to figure out the bounds on the spot. (note that they get recalculated before every execution since I'm dealing with dynamic meshes)So why is the comparison so horrendously slow on a gpu? | Why are comparisons so expensive on a GPU? | computer architecture;parallel computing | GPUs are SIMD architectures. In SIMD architectures every instruction needs to be executed for every element that you process. (There's an exception to this rule, but it rarely helps).So in your MinMax routine not only does every call need to fetch all three branch instructions, (even if on average only 2.5 are evaluated), but every assignment statement takes up a cycle as well (even if it doesn't actually get executed).This problem is sometimes called thread divergence. If your machine has something like 32 SIMD execution lanes, it will still have only a single fetch unit. (Here the term thread basically means SIMD execution lane.) So internally each SIMD execution lane has a I'm enabled/disabled bit, and the branches actually just manipulate that bit. (The exception is that at the point where every SIMD lane becomes disabled, the fetch unit will generally jump directly to the else clause.)So in your code, every SIMD execution lane is doing:compare (a > b)assign (max = a if a>b)assign (min = b if a>b)assign (max = b if not(a>b))assign (min = a if not(a>b))compare (c > max)assign (max = c if c>max)compare (c < min if not(c>max))assign (min = c if not(c>max) and c<min)It may be the case that on some GPUs this conversion of conditionals to predication is slower if the GPU is doing it itself. As pointed out by @PaulA.Clayton, if your programming language and architecture has a predicated conditional move operation (especially one of the form if (c) x = y else x = z) you might be able to do better. (But probably not much better).Also, placing the c < min conditional inside the else of c > max is unnecessary. It certainly isn't saving you anything, and (given that the GPU has to automatically convert it to predication) may actually be hurting to have it nested in two different conditionals. |
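A concrete illustration of the predication model described in the answer above, as a small Python sketch. This is not GPU code: lists stand in for SIMD lanes and boolean masks stand in for the per-lane enabled/disabled bit, and the lane values are invented for the example.

def simd_minmax(a, b, c):
    # a, b, c are equal-length lists, one element per SIMD lane.
    lanes = range(len(a))
    # Both sides of the a > b branch are issued; the mask only selects the result.
    mask = [a[i] > b[i] for i in lanes]
    mx = [a[i] if mask[i] else b[i] for i in lanes]
    mn = [b[i] if mask[i] else a[i] for i in lanes]
    # Same for the c > max and c < min branches: no lane skips the work.
    mx = [c[i] if c[i] > mx[i] else mx[i] for i in lanes]
    mn = [c[i] if c[i] < mn[i] else mn[i] for i in lanes]
    return mn, mx

print(simd_minmax([3, 1, 5], [2, 4, 5], [9, 0, 7]))  # ([2, 0, 5], [9, 4, 7])

The sketch mirrors the instruction list in the answer: the comparisons and both assignments of each branch are evaluated for every lane, whether or not that lane's mask is set, which is why the per-call cost stays high even when only 2.5 comparisons are "needed" on average.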
_softwareengineering.26375 | For solo projects, do you keep your build / management tools on your local machine, or on a separate server? If the server is not guaranteed to be safer or more reliable than my own machine I struggle to see the point, but maybe I'm missing some things.Note that I'm not debating the value of continuous integration or having a staging environment etc.. just the question of whether it exists on separate hardware. | Separate servers vs local machine for builds, issue tracking etc on solo project | tools;personal projects | That depends, I would say.Pro local machine:Works without net.Easier to maintain.Pro separate server:Some tools (continuous integration) may cause load that is annoying on your local machine.You can access your tools from different machines.You have a copy of your data on a different machine. |
_codereview.14699 | My application hangs up on IE and mobile browsers when these functions are fired. Is there anything that stands out as being obviously performance-killing?$this.find('input.bundle-check').live('change', function() { var $box = $(this), ntn = $box.data().ntn, price = $box.data().price, savings = $box.data().savings; if ($box.is(':checked')) { productsBundled[ntn] = { price : price, savings : savings, ntnid : ntn, qty : 1 }; $box.siblings('label').text(' Selected') $box.closest('.grid-product').fadeTo(300, 0.5) } else { $box.siblings('label').text(' Add Item'); delete productsBundled[ntn]; $box.closest('.grid-product').fadeTo(300, 1.0) } refreshSelectedItems(productsBundled); $this.find('.itemCount').text(concat('(',objectCount(productsBundled),')'));})function refreshSelectedItems(products, remote) { var itemntns = [], totalPrice=0.00, totalSavings=0.00; products = products || {}; remote = remote || 2; if (objectCount(products) > 0) { $.each(products, function(i, item) { $qtyBox = $('.selected-item[data-ntn=' + i + '] .cartqty'); itemntns.push(i); totalPrice += (item.price * ($qtyBox.val() || 1)); totalSavings += (item.savings * ($qtyBox.val() || 1)); // console.log('qtyBox', $qtyBox.val()) }); if(remote > 1) { $.ajax({ url: '/Includes/pageHelper.cfc', type: 'post', async: true, data: { method: getBundleSelectedItems, productList: itemntns.join(',') }, success: function(data) { var $container = $('.selected-items > span'); $container.html(data); $.each($container.find('.selected-item'), function() { var myntn = $(this).data().ntn, $price = $(this).find('.price'), $bunPrice = $('<span />').addClass('bundle-price'); $(this).find('.cartqty').val(productsBundled[myntn].qty); if (productsBundled[myntn].savings > 0) { $bunPrice.text(concat(' $', productsBundled[myntn].price.toFixed(2))); $(this).find('em').hide(); $price.after($bunPrice.after($('<span />').addClass('sale').text(concat(' You save $', productsBundled[myntn].savings.toFixed(2), '!')))); } }) } }) } } else { $('.selected-items > span').html(''); } $this.find('.bundle-saving').text(concat($, totalSavings.toFixed(2))); $this.find('.bundle-addons').text(concat($, totalPrice.toFixed(2))); totalPrice = totalPrice + (parseFloat($('.original-products > div:has(:checked)').data().price) * parseInt($('.original-products > div:has(:checked) .cartqty').val())); $this.find('.bundle-total').text(concat($, totalPrice.toFixed(2)));} | Functions Giving Performance Issues | javascript;jquery | There are a few things which stand out to me:var $box = $(this),ntn = $box.data().ntn,price = $box.data().price,savings = $box.data().savings;You use $box.data() several times; it may be faster to cache the return value of that function. if ($box.is(':checked')) {:checked is not a standard CSS selector, so it will be slower than simply:if (this.checked) {Later, you used $.each($container.find('.selected-item'), function() {I'm not sure if it's faster, but you could just do: $container.find('.selected-item').each(function() {Finally, this line:totalPrice = totalPrice + (parseFloat($('.original-products > div:has(:checked)').data().price) * parseInt($('.original-products > div:has(:checked) .cartqty').val()));The :has selector is not standard CSS, so it can't use browsers' native functions. 
instead, consider using the has() method instead:totalPrice = totalPrice + (parseFloat($('.original-products').children('div').has(':checked').data().price) * parseInt($('.original-products').children('div').has(':checked').find('.cartqty').val(), 10));Note that I also added the radix argument to parseInt. A micro-optimization at best, but it does mean the JS engine doesn't need to guess.Hope that helps. |
_webmaster.93063 | I want to re-direct all requests to http://subdomain.example.com to http://www.example.comWhat should the code be? | 301 redirect a subdomain to the root domain with www | htaccess;301 redirect | null |
_codereview.94694 | I'm new to C# (coming from a JavaScript background) and it seems like this code could be greatly improved.This SQL query:SELECT RegionString,SubRegionString,CountryString,COUNT(*) AS sizeFROM tableGROUP BY RegionString,SubRegionString,CountryStringReturns:RegionString SubRegionString CountryString Size-----------------------------------------------Americas 2Americas NorthAmerica Canada 5Americas NorthAmerica US 3Americas SouthAmerica Chile 3EMEA AsiaPacific Australia 2EMEA AsiaPacific Japan 1EMEA SouthernEurope Turkey 1EMEA WesternEurope 1I made this C# code:public class NameChildObject{ public string name { get; set; } public int size { get; set; } public List<NameChildObject> children { get; set; } public NameChildObject() { children = new List<NameChildObject>(); }}public ActionResult ByRegion(){ var returnResults = new List<NameChildObject>(); var uniqueRegions = (from row in repository.GetAllEntities() select row.RegionString).Distinct(); foreach (string region in uniqueRegions) { returnResults.Add(new NameChildObject() { name = region }); var uniqueSubRegions = (from row in repository.GetAllEntities() where row.RegionString == region select row.SubRegionString).Distinct(); foreach (string subRegion in uniqueSubRegions) { var regionObject = returnResults.Find(row => row.name == region); var countryInfo = (from row in repository.GetAllEntities() where row.SubRegionString == subRegion group row by row.CountryString into g select new NameChildObject() { name = g.Key, size = g.Count() }); regionObject.children.Add(new NameChildObject() { name = subRegion, children = countryInfo.ToList()}); } } return Json(returnResults, JsonRequestBehavior.AllowGet);}To convert the data into this format:[ { name: Americas, size: 0, children: [ { name: , size: 0, children: [ { name: , size: 2, children: [] } ] }, { name: NorthAmerica, size: 0, children: [ { name: Canada, size: 5, children: [] }, { name: US, size: 3, children: [] } ] }, { name: SouthAmerica, size: 0, children: [ { name: Chile, size: 3, children: [] } ] } ] }, { name: EMEA, size: 0, children: [ { name: AsiaPacific, size: 0, children: [ { name: Australia, size: 2, children: [] }, { name: Japan, size: 1, children: [] } ] }, { name: SouthernEurope, size: 0, children: [ { name: Turkey, size: 1, children: [] } ] }, { name: WesternEurope, size: 0, children: [ { name: , size: 1, children: [] } ] } ] }] | Converting table data into nested JSON | c#;asp.net;json;entity framework;linq to sql | null |
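The transformation the question above is after, turning grouped SQL rows into a region / sub-region / country tree of name-size-children nodes, can be done in a single pass over the rows instead of one query per region and sub-region. A language-agnostic sketch of that idea in Python; the sample rows are taken from the question's table, while the helper names are invented for the example:

rows = [
    ("Americas", "", "", 2),
    ("Americas", "NorthAmerica", "Canada", 5),
    ("Americas", "NorthAmerica", "US", 3),
    ("Americas", "SouthAmerica", "Chile", 3),
]

def node(name, size=0):
    # Mirrors the NameChildObject shape: name, size, children.
    return {"name": name, "size": size, "children": []}

def build_tree(rows):
    regions = {}
    for region, sub_region, country, size in rows:
        r = regions.setdefault(region, node(region))
        subs = {child["name"]: child for child in r["children"]}
        s = subs.get(sub_region)
        if s is None:
            s = node(sub_region)
            r["children"].append(s)
        s["children"].append(node(country, size))
    return list(regions.values())

print(build_tree(rows))

The same one-pass grouping maps to C# as a single GroupBy on RegionString followed by one on SubRegionString, so the table is read once rather than once per distinct region and sub-region.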
_unix.200199 | I want to set up dnsmasq on a Raspberry Pi to use as a DNS server for my home network. My goal is to have a setup where, when a new device is connected to the home network, it can be addressed by its hostname without modification to anything on the Pi. So as an example, say I have a machine with mymachine in /etc/hostsname, and I then plug it in to my network with an ethernet cable (and tell it where the DNS server is, of course). I should then be able to go to some other machine on the network and do ping mymachine.lan.mydomain and have it ping mymachine. So I guess for this to work the DHCP server would update the DNS server? Does dnsmasq do this automatically when it is set up to be the DHCP server for the LAN as well? | Does dnsmasq automatically update when running as a dhcp server as well? | dns;dhcp;dnsmasq | null
_webapps.87897 | I have a sheet/tab that contains nothing but a single column where each row is a word, let's call it wordlist.In another sheet/tab, I would like to make it so that for one of the columns, if the value matches ANY of the values from wordlist, that the cell is highlighted (or the input is rejected).It is quite easy to do the opposite of this but I can't figure this one out. I have tried looking up how to do it and I have been messing around with custom formulas for a while now but I can't seem to figure it out. | Highlight/reject cells that contain any of the values from another sheet | google spreadsheets | null |
_cogsci.15592 | So I was sleeping on a road trip and I had to wake up to go to the restroom at a gas station we stopped at. I walked in like a zombie and the cashier sheepishly said hello to me. I took it that he was nervous and then my brain started coming up with some scenario that he was being held hostage by someone, told not to call the police and was somewhere in the store. As I was going to the restroom I sort of convinced myself that this was actually happening and I started to get kind of nervous. Later I realized that the only reason the cashier seemed nervous was because I walked in like I had just risen from the dead. Which got me worried again thinking I'm starting to develop some form of schizophrenia. I'm 16 years old and have been diagnosed with OCD in the past. I know this is a better question for a professional but I figured this would give me a quick reassurance for concern or if that was just an ocd sort of thing. I will be talking to a doctor and my parents afterward I just trust stack exchange's opinion for now. Thank you | Is it common for OCD sufferers to experience schizophrenic episodes | schizophrenia | As someone who has had OCD since the age of 12, I can assure that this sounds very much like an OCD symptom. It is likely that you walked into the store like a zombie, and at the time, did not consider your personal appearance, assuming only that the store owner was acting strange. However, if you had been aware of the way you were acting, you would have likely picked up on this instantly and dismissed it. Unless we are salient to our own features, we are unaware of how others are perceiving us. Your ability to gauge your salience was likely decreased due to just waking up from your nap.As an OCD sufferer, I've definitely had bouts where I was concerned I was developing schizophrenia, only to realize later that I was hyper-aware of every little sign that pointed in that direction. If this is an ongoing worry of yours, confirmation bias will likely amplify the most ambiguous of symptoms related to losing your mind.Let's get a little further into schizophrenia, however. The Diagnostic and Statistical Manual - IV (yes, the old one, though the symptoms have not changed much) states:Characteristic symptoms: Two or more of the following, each present for much of the time during a one->month period (or less, if symptoms remitted with treatment).DelusionsHallucinationsDisorganized speech, which is a manifestation of formal thought disorderGrossly disorganized behavior (e.g. dressing inappropriately, crying frequently) or catatonic behaviorNegative symptoms: Blunted affect (lack or decline in emotional response)Alogia (lack or decline in speech)Avolition (lack or decline in motivation)Imapaired communication (due to hearing voices)Social or occupational dysfunction: one or more major areas of functioning, work, interpersonal relations, self-care, are markedly below the level achieved prior to the onset.Significant duration: Continuous signs of the disturbance persist for at least six months. This six-month period must include at least one month of symptoms (or less, if symptoms remitted with treatment).If signs of disturbance are present for more than a month but less than six months, the diagnosis of schizophreniform disorder is applied. Psychotic symptoms lasting less than a month may be diagnosed as brief psychotic disorder, and various conditions may be classed as psychotic disorder not otherwise specified. 
First and foremost hallucinations and delusions are NOT something that you are able to identify. Schizophrenia leaves the sufferer out of touch with reality. The sufferer believes in their hallucinations and delusions with every fibre of their body, they don't simply think that they are hallucination or becoming deluded. That being said, no, you are not developing schizophrenia. If you were, you wouldn't know it, and once you found out that you had it, you'd be the last one to know.Note positive and negative symptoms, are not good and bad. Think of a positive symptom as an addition to one's life (the addition of delusions), and a negative symptom as something that's being taken away from one's life (motivation). If you read further down the DSM criteria, you will see that one-off events like this do not substantiate a diagnosis of schizophrenia. However, a brief psychotic episode is still possible, possible, but extremely highly unlikely based on the fact that you have a great amount of insight (self-knowledge) of what was going on with you in the store. |
_unix.375379 | I am using Ubuntu and I want to install mplayer for my own use only. Can I install it without superuser (root) rights? | Is it possible to install mplayer without superuser rights | software installation | null
_codereview.25673 | private static List<ConstrDirectedEdge> kruskalConstruct(ConstructionDigraph CG) { int current = CG.srcVertexIndex(); boolean visited[] = new boolean[CG.V()]; visited[current] = true; UF uf = new UF(CG.V()); List<ConstrDirectedEdge> feasibleNeighbours = new ArrayList<ConstrDirectedEdge>(); List<ConstrDirectedEdge> solution = new ArrayList<ConstrDirectedEdge>(); do { /* clear neighbours from prev iteration */ feasibleNeighbours.clear(); /* build feasible neighbour list */ for (ConstrDirectedEdge directedEdge : CG.adj(current)) { int v = CG.getVertex(directedEdge.to()).getSource(); int w = CG.getVertex(directedEdge.to()).getDestination(); if (!visited[directedEdge.to()] && !uf.connected(v, w)) { feasibleNeighbours.add(directedEdge); } } //TODO: code smell if (feasibleNeighbours.isEmpty()) { break; } /* calculate the probability for each neighbour */ double R = calculateR(feasibleNeighbours); System.out.println(R for source is : + R); for (ConstrDirectedEdge feasibleneighbour : feasibleNeighbours) { feasibleneighbour.calcProbability(R, alpha, beta); } /* pick a neighbour */ ConstrDirectedEdge pickedUp = choiceEdgeAtRandom(feasibleNeighbours); visited[pickedUp.to()] = true; current = pickedUp.to(); solution.add(pickedUp); uf.union(CG.getVertex(current).getSource(), CG.getVertex(current).getDestination()); } while (!feasibleNeighbours.isEmpty()); return solution;}I want to eliminate the code smell that is the break in the middle of the loop. As can be seen I have chosen to use a do {} while() in order to do the initialization with the initial node being CG.srcVertexIndex(). What I was thinking about is make the following code: for (ConstrDirectedEdge directedEdge : CG.adj(current)) { int v = CG.getVertex(directedEdge.to()).getSource(); int w = CG.getVertex(directedEdge.to()).getDestination(); if (!visited[directedEdge.to()] && !uf.connected(v, w)) { feasibleNeighbours.add(directedEdge); } }into a separate function that will be returning a List<ConstrDirectedEdge> and then the loop can turn into:while(!(feasibleNeighbours = getFeasibleNeighbours(CG, visited, uf)).isEmpty()) {/* calculate the probability for each neighbour *//* pick a neighbour */}But then I will have to pass visited and uf as parameter which in fact are going to be manipulated by getFeasibleNeighbours functions - in essence I will be using input parameters to store output state which is not a good idea.Finally I could just make visited and uf static private vars and just reinitialize them when needed and use them directly, but again this seems kind of wrong.The code is working fine, however I'd like to hear opinions on how this can be made more readable. | Simplifying finding neighbors in graph | java;graph | null |
_unix.338992 | I have successfully created a live USB with Kali Linux and booted it on my Macbook Pro 13, Mid 2014. Everything works perfectly, except one thing: my internal wireless card isn't detected by Kali, so I cannot connect to Wi-Fi. I have gone through these manuals: https://pentestmac.wordpress.com/2015/11/28/kali-linux-broadcom-wireless-on-macbook/ and https://forums.kali.org/showthread.php?25240-Macbook-Pro-Kali-Mac-OS-Dual-Boot-Install-Guide-amp-WiFi-Guide. Everything goes well, but it fails on the last command, which is modprobe wl. I get this error: FATAL: Module wl not found. Does somebody have experience with this? Is it possible to detect the internal wireless card on a Mac in Kali Linux, or do I have to buy an external USB wireless adapter? Thanks a lot for any help. | Wi-Fi not working on Kali Linux (Macbook Pro 13, Mid 2014) | wifi;kali linux;macintosh | null
_webmaster.87515 | I am building a product comparison web application into my website. It will compare products by specs, price, etc. I am wondering if there are any dangers in not making a user log in before using the page, i.e. crawlers going crazy and querying the database lots of times. Otherwise, I would be letting any guest user add products to the comparison 'basket' and compare them. Edit: I don't want to block robots' access to the page either, as I think this could be good for SEO purposes. Are there any dangers in not making users log in to use it? | Is there SEO value or danger when creating product comparison functionality? | seo;web development;web crawlers | null
_softwareengineering.210125 | This is related to this question but not quite the same. BTW, I'm not a native English speaker.I keep having a hard time choosing a proper name for collections - Lets say that we have a collection of item s and that each item has a name. How will we call the collection of names such that the collection's name will be the most understandable under standard naming conventions?var itemNames = GetItemNames(); // Might be interpreted as a single item with many namesvar itemsNames = GetItemsNames(); // No ambiguity but doesn't look like proper Englishvar itemsName = GetItemsName(); // Also sounds strange, and doesn't seem to fit in e.g. a loop:foreach( itemName in itemsName){...}As native English speakers, how would you name your collections, and how would you expect them to be named? | collection naming - singular or plural | naming;naming standards;collections | What about GetNamesOfItems?Otherwise, GetItemNames can be a good alternative, if the documentation is clear about what is being returned. This also depends on the items itself:GetPersonNames is not clear, since a person can have multiple names,GetProductNames is more explicit in a context where a product can have one and one only name.GetItemsNames looks indeed quite strange.GetItemsName is simply incorrect: we expect as a result a common name shared by multiple items, not a sequence of names.As an example, take a method which returns the prices of products in a category:public Money[] NameHere(CategoryId categoryId) { ... }GetProductPrices could have been interpreted in two ways: either it returns the prices of products, or several prices of a product, one price per currency. In practice, given that the parameter of the method is a category, and not a single product, it is obvious that the first alternative is true.GetPricesOfProducts appears the most clear alternative. There are zero or more products in a category, and for each, we return its price. The only issue is the case when there are actually several prices per product, one price per currency.GetProductsPrices looks strange, but is still clear.GetProductsPrice looks totally wrong: we expect such method to return Money, not Money[].Also note that in the example above, it may be even better to slightly change the signature itself:public Tuple(ProductId, Money)[] GetPricesOfProducts(CategoryId categoryId) { ... }appears easier to understand and removes the ambiguity described in the second point. |
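The same reasoning carries over to other languages. A small Python illustration of the two forms the answer recommends; the class and function names are invented for the example:

class Item:
    def __init__(self, name):
        self.name = name

items = [Item("hammer"), Item("wrench")]

def get_item_names(items):
    # One name per item, so the plural belongs on "names".
    return [item.name for item in items]

def get_names_of_items(items):
    # Reads like prose and cannot be mistaken for "many names of one item".
    return [item.name for item in items]

print(get_item_names(items))      # ['hammer', 'wrench']
print(get_names_of_items(items))  # ['hammer', 'wrench']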
_unix.88277 | (Edited to clarify the role of Emacs in the problem with the display.)My current gnu-screen session has gotten corrupted somehow, and Emacs fails to display UTF-8 characters properly.I've confirmed that in freshly-started gnu-screen processes, Emacs displays UTF-8 characters properly, but at the moment it would be very disruptive to replace the corrupted gnu-screen session with a new one, and insteadI'm looking for ways to further troubleshoot the problem with this corrupted gnu-screen session, and hopefully fix it.FWIW, I give more background below, including a description of what I've done so far to diagnose the problem.I started this gnu-screen session several days ago at my OS X workstation at work with% screen -U...(as I always do). Since then I have re-attached this session from several machines (possibly after first ssh-ing to my workstation at work) using% screen -U -dR(again, this is what I always do). I did precisely this this morning at my workstation at work (the machine where the gnu-screen process is actually running).Today, for the first time since I created this gnu-screen session, I needed to work with files that contain a lot of non-ascii UTF-8 characters. It was then that I discovered that this gnu-screen session must have gotten corrupted somehow, because it displays all these characters as ?, resulting in an unusable display.(As I already alluded to, these UTF-8-rich files are displayed correctly by freshly-started gnu-screen sessions, so I'm pretty sure that the display problem is with the particular gnu-screen session that here I'm calling corrupted. Also, I confirmed that the ??? display shows up in every terminal that I have attached the gnu-screen session from, so the problem is not with the terminal program hosting the gnu-screen session. Lastly, I also confirmed that the problem is not with one particular Emacs session: in the corrupted gnu-screen session, every new Emacs sessions displays the UTF-8 characters as ?, which argues against the problem being with a particular Emacs session.)I've confirmed that utf8 is on by running:utf8 on onThe output of :info is(1,5)/(210,52)+10000 +(-)flow app log UTF-8 0(zsh)And, FWIW:% /usr/local/bin/screen --versionScreen version 4.00.03 (FAU) 23-Oct-06Also, I should point out that newWhat else can I do to troubleshoot this problem?UPDATE: Drav Sloan and Stephane Chazelas both asked about my locale settings:% localeLANG=LC_COLLATE=CLC_CTYPE=CLC_MESSAGES=CLC_MONETARY=CLC_NUMERIC=CLC_TIME=CLC_ALL=Currently, for OS X I don't set any locale-related variables.On Linux systems, my .zshenv does setexport LANG=en_US.utf8export LC_ALL=en_US.utf8...but if I put the same lines in my .zshenv on Darwin, I get error messages to the effect that setting locale failed. I vaguely remember bashing my skull for several hours over the problem of finding the right locale settings for Darwin/Lion. It may have been that setting nothing emerged as the least awful solution to the problem and, after all, at least fresh gnu-screen sessions do display UTF-8 characters correctly, even in the absence of an explicit locale setting. But clearly I need to figure out how to properly set locale in Darwin/Lion...UPDATE2: OK, I think I figured out the reason for the errors I mentioned above: in Darwin/Lion, the string en_US.utf8 is invalid; instead it should be en_US.UTF-8. | Corrupted gnu-screen session not displaying UTF-8 correctly | terminal;emacs;gnu screen | null |
_unix.113538 | I have Oracle XE 10.0.2 installed on my development system.Recently I have been unable to restart it:$ sudo /etc/init.d/oracle-xe stopShutting down Oracle Database 10g Express Edition Instance.Stopping Oracle Net Listener.$ sudo /etc/init.d/oracle-xe startStarting Oracle Net Listener.Starting Oracle Database 10g Express Edition Instance.$ sqlplus SQL*Plus: Release 10.2.0.1.0 - Production on Tue Feb 4 19:54:53 2014Copyright (c) 1982, 2005, Oracle. All rights reserved.Enter user-name: hrEnter password: ERROR:ORA-01089: immediate shutdown in progress - no operations are permittedSo I tried killing off all oracle processes by hand:$ sudo killall oracle tnslsnrThis kills the processes (they are no longer listed in ps). I then try starting Oracle again:$ sudo /etc/init.d/oracle-xe startStarting Oracle Net Listener.Starting Oracle Database 10g Express Edition Instance.SQL*Plus gives the same error./usr/lib/oracle/xe/app/oracle/admin/XE/bdump/alert_XE.log says:Starting Oracle Database 10g Express Edition Instance.Tue Feb 4 19:59:30 2014Starting ORACLE instance (normal)I have not reconfigured Oracle recently, but I have shut it down hard (power off), so it may be in a inconsistent state that I need to force it to recover from. | Oracle: ORA-01089: immediate shutdown in progress - no operations are permitted | oracle database | A complete guide on how to get Oracle on GNU/Linux unstuck from ORA-01089 is here.The idea is basically to log in to Oracle as sysdba and issue a shutdown command (oracle user in OS is the standard in this example from the link provided):root# sudo su - oracleoracle$ sqlplusSQL*Plus: Release 10.2.0.1.0 - Production on Sun Feb 9 15:16:09 2014Copyright (c) 1982, 2005, Oracle. All rights reserved.Enter user-name: / as sysdbaConnected to:Oracle Database 10g Express Edition Release 10.2.0.1.0 - ProductionSQL> shutdown abortORACLE instance shut down.In a single line:echo shutdown abort | sudo su - oracle -c sqlplus / as sysdbaIf this does not work try some of the spells on https://dba.stackexchange.com/questions/15888/oracle-shutdown-method and finish up with:/etc/init.d/oracle-xe stop/etc/init.d/oracle-xe start |
_unix.310534 | so have to program in R a table of data about temperature FOr R to read it correctly it needs to be sorted according to the pdf of lecture notes like the picture belowthe file is one column with all the data. Need to loop a few times to put into columns at the end the last 2 loops will iterate minus one becuase there are 2 missing enrtries.[1][7][13][19][25][31][37][43][49][55][61][67][73][79][85][91][97][103][109][115][121][127][133]-0.367918-0.451778-0.556487-0.505967-0.663492-0.624129-0.531023-0.469536-0.416556-0.347795-0.152032-0.251405-0.218081-0.133076-0.393492-0.207025-0.323398-0.0614220.1291840.0921310.1773810.3629600.370861-0.317154-0.498811-0.568014-0.368630-0.535226-0.675199-0.551480-0.455500-0.538602-0.383147-0.050356-0.297744-0.146923-0.184608-0.322453-0.322901-0.0460980.0990610.0509260.2110060.2969120.3603860.416356-0.317069-0.403252-0.526737-0.315155-0.457892-0.570521-0.444860-0.489551-0.339823-0.356958-0.095295-0.296136-0.358796-0.222896-0.267491-0.216440-0.131010-0.0938730.1861280.0741930.3518740.2913700.491245-0.393357-0.353712-0.475364-0.387099-0.617208-0.558340-0.444257-0.385962-0.316963-0.262097-0.088983-0.303984-0.377482-0.165795-0.257946-0.080250-0.016080-0.1090970.1595650.2691070.3636500.3856380.650217-0.457649-0.577277-0.340468-0.494861-0.684107-0.379505-0.451256-0.305391-0.360309-0.2720090.044418-0.405346-0.441748-0.154384-0.274517-0.3165830.021495-0.0153740.0108360.3849350.3294360.453061-0.468707-0.504825-0.367002-0.585158-0.672176-0.308313-0.388185-0.393436-0.486954-0.257514-0.073264-0.255647-0.194232-0.137509-0.151345-0.2416720.0576380.1254500.0386290.1947620.4084090.325297 | sort text data into correct columns using commands | command line;text formatting | null |
_unix.109671 | I'd like to have a local directory be mounted upon logging into a remote server via SSH. So if I were to ssh into foo.bar.com as user baz then there would be a directory mounted in /home/baz/roaming - that corresponds to my local workstations roaming dir - for the length of my connection.My local workstation in this case is a Mac, but the methodology is likely more generally *nix in nature. Most likely an SSH wrapper would be necessary that performs some sort of mount related actions after a new ssh connection occurs.Has anyone done this? Suggestions or existing utilities would be very helpful! | Mount a workstation dir upon login to server over ssh | ssh;mount | null |
_unix.257733 | I need grep's output to be indented with tabs/spaces. This is the plain, un-indented version: MyCmd | grep id: and I tried this without success: MyCmd | grep id: | echo | How to indent grep's output? | grep;indentation | You could do it with awk instead of grep if that's acceptable: MyCmd | awk '/id:/ {print "    " $0}' or, if you need grep, sed could help: MyCmd | grep id: | sed -e 's/^/    /' The awk version does its own pattern match for lines that contain id: and then prints the spaces before the line. The sed version does the grep as you already did it, but then replaces the start of each line (regex ^ matches the start of a line) with the spaces.
_unix.213611 | I love file. I use it multiple times a day. I love it so much that I install Cygwin on my Windows machines just so I can use it. Anyway, in going through older files on my system, I find there are many files that just report data from the file command. Understandably.Some of these files however do have an indicator in their header of what kind of file they are, but are not found in the magic file database yet. My questions are three-fold:Is there an online repository of magic file definitions that I can use to supplement or update the default ones that came with my OS? (My folder /usr/share/file/magic shows the most recent entry as almost one year ago, and I know people are continually updating these definitions)How can I submit a new definition that I've developed so that the rest of the *nix community can benefit? The online repo?Is it as simple as dropping the magic definition file in the folder, and my OS will magically find it, or do I have to somehow rebuild the definition library? Do I have to do anything with the magic.mgc file, or just the folder of individual definitions?Thank you ahead of time for your help. | Update magic file list and/or submit my own | linux | In the past I've had changes included in the magic file by submitting a Debian bug report but it's probably faster to submit them upstream directly.In answer to your questions:The latest released source can be found here - there's a link to a mirror of the source repo there. Yes, I believe either submitting a bug report or emailing the mailing list should be all that's needed to add a file definition.You can create your own magic file and point file file to it by using the -m option. |
_cstheory.38138 | $k$-Dominating set: Given a graph $G=(V,E)$, where $V$ is a set of vertices and $E$ a set of edges, and an integer $k$, the $k$-Dominating set problem asks whether there exists a subset of vertices $V'$ of $V$ of size at most $k$ such that for every vertex $u \in V$ there is an edge $uv \in E$ for some vertex $v \in V'$. It is easy to see that the $k$-Dominating set problem for planar graphs can be solved in $O(f(k)\log n)$ space. Can we solve the $k$-Dominating set problem for planar graphs in $f(k)+c \log n$ space, where $c$ is some constant? The answer to this question is yes [link], page 11, theorem 2.4. Their proof is based on the FPT algorithm for the $k$-Dominating set problem for planar graphs [link], page 11, theorem 2.4. Can we get a simpler proof or procedure for solving the $k$-Dominating set problem for planar graphs in $f(k)+c \log n$ space, where $c$ is some constant? | parametrized logspace algorithm for k-dominating set for planar graphs | cc.complexity theory;graph algorithms;space complexity;logspace | null
_cs.18749 | The algorithm (from here) - Create a set S of remaining possibilities (at this point there are 1296). The first guess is aabb. Remove all possibilities from S that would not give the same score of colored and white pegs if they were the answer. For each possible guess (not necessarily in S) calculate how many possibilities from S would be eliminated for each possible colored/white score. The score of the guess is the least of such values. Play the guess with the highest score (minimax). Go back to step 2 until you have got it right. I am confused about the 3rd step - what does "how many possibilities from S would be eliminated for each possible colored/white score" mean? What are the correct answer and the guess here? Can someone clarify it some more? | Mastermind (board game) - Five-guess algorithm | algorithms;game theory;board games | The text you quoted seems clear as it is. But I'll try to elaborate on step 3, since you asked:Let $S$ denote the set of possible secrets (given responses to moves you've made so far). Given a candidate guess $g$, you run over all possibilities $s \in S$ and calculate the response that you'd get if you guessed $g$ and the secret was $s$ (the number of black pegs and the number of white pegs); this is the colored/white score. Now, for each colored/white score that could be received, if you were to get that score, you could eliminate some possibilities from $S$ as incompatible with that colored/white score; the goodness of a colored/white score is the number of possibilities eliminated. The helpfulness of a candidate guess $g$ is the minimum of the goodness of all the colored/white scores you could possibly get, in response to $g$. Select the guess $g$ with highest helpfulness.In other words, let $R(g,s)$ denote the number of black pegs and number of white pegs you'd get if the secret were $s$ and you guessed $g$ (this is what your quote calls the colored/white score). Let $Z(g) = \{R(g,s) : s\in S\}$, so that $Z(g)$ denotes the set of colored/white scores you could possibly get if you made guess $g$ (given that the secret $s$ has to be one of the possibilities in $S$). Now, if you have a colored/white score $z$ where $z \in Z(g)$, let$$G(g,z) = |\{ s \in S : R(g,s) \ne z\}|,$$so that $G(g,z)$ is the goodness of getting a colored/white score $z$ in response to guess $g$. Also, let$$H(g) = \min \{G(g,z) : z \in Z(g)\},$$so that $H(g)$ denotes the helpfulness of a candidate guess $g$.Now step 3 says: you should play the guess $g$ that maximizes $H(g)$.
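The definitions in the answer above translate almost line for line into code. Here is a short Python sketch that computes R, Z, G and H over a candidate set S and picks the minimax guess; the peg-scoring helper is the standard Mastermind count and is written out only to keep the sketch self-contained, and the color alphabet and the example response are arbitrary choices, not taken from the answer.

from collections import Counter
from itertools import product

COLORS = "ABCDEF"  # 6 colors, 4 positions -> 1296 codes

def score(guess, secret):
    # R(g, s): (black pegs, white pegs) for guess g against secret s.
    black = sum(g == s for g, s in zip(guess, secret))
    common = sum((Counter(guess) & Counter(secret)).values())
    return black, common - black

def helpfulness(guess, S):
    # H(g) = min over achievable scores z of G(g, z),
    # where G(g, z) = |{ s in S : R(g, s) != z }| = |S| - (number of s giving z).
    by_score = Counter(score(guess, s) for s in S)
    return min(len(S) - count for count in by_score.values())

def best_guess(S, all_codes):
    # Steps 3-4: play a guess (not necessarily in S) maximizing H(g).
    # Unoptimized brute force; fine for choosing a single move.
    return max(all_codes, key=lambda g: helpfulness(g, S))

all_codes = ["".join(p) for p in product(COLORS, repeat=4)]
# Step 2 for an example first move: keep only secrets consistent with
# the (hypothetical) response (1 black, 0 white) to the guess AABB.
S = [c for c in all_codes if score("AABB", c) == (1, 0)]
print(best_guess(S, all_codes))

This is the same scan the five-guess algorithm performs at every move; caching the score(g, s) results makes it fast in practice.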
_unix.37610 | I would like to make a utility that always enables the --color argument for the grep command in any distribution. Is there a way to do this, or do I have to search for a way for each distribution? | Is there a global grep.conf in Unix/Linux? | grep | These simple GNU tools don't have config files. You can use shell aliases: alias grep='grep --color=auto' Put that in your ~/.bashrc file (or the equivalent for the shell you use). Then you will always use that alias for the grep command.
_unix.218855 | I'm new to Mint, but not to Linux. Is it possible to enable icon display for the root desktop? I tried the System Settings > Desktop config utility, but that doesn't display the icons. Is there a config file somewhere?As requested:The desktop environment is Cinnamon. The file manager is Nemo, and running it doesn't cause the icons to appear on the desktop.My objective has already been stated and the reason for it should be obvious. It seems so many of you are paranoid over just about everything these days....Finally, your other comments are otherwise offensive. Are you here to help run this forum or to call people stupid? | MInt 17.2, Enable Icon display for root desktop? Eg Home, Computer,etc | linux mint | null |