Dataset columns: id (string, 5-27 chars); question (string, 19-69.9k chars); title (string, 1-150 chars); tags (string, 1-118 chars); accepted_answer (string, 4-29.9k chars)
_webapps.4724
What online tools are available for managing conference/journal papers, articles and books? I am looking for features like export to IEEE reference style, BibTeX style, etc.
Online BibTeX References
webapp rec
null
_webapps.87064
So I finished a bunch of work offline, and then I synced it... and it did not go well. It merged without deleting a bunch of parts, and Page 7 is gone completely (screenshots in the original post). The errors are just annoying, but losing that entire page is really frustrating. I want to see if I can find the old version still sitting on my computer. Any ideas?
Google Drive Synced with Errors. On Ubuntu, can I recover my offline version of the file before it synced?
google drive;google documents;google chrome;linux
null
_webmaster.83503
I have a problem setting up my server so that my files can't be accessed by people linking directly to them, but can be accessed when a visitor clicks a link on my website to open the file. I tried this:

RewriteCond %{HTTP_REFERER} !^http://19.24.3.13/~child/ [NC]
RewriteCond %{HTTP_REFERER} !^http://19.24.3.13/~child/.*$ [NC]
RewriteRule \.(pdf|doc|docx)$ /~child/ [L]

Problem is, when I want to open these files via my website, I get an error, because the link to the file is a direct link, something I wanted to prevent. So to counter this, I need to let through the referrals from my own website. I tried this:

SetEnvIf Referer ^http://19.24.3.13/~child/.*$ legit_referal
SetEnvIf Referer ^$ legit_referal
<LocationMatch \.(pdf|doc|docx)$ >
    Order Deny,Allow
    Deny from all
    Allow from env=legit_referal
</LocationMatch>

But with no success: I get a server 500 error if I try to access it. As you can see I use IP addresses, because I have no domain name, only the IP. Can someone point me in the right direction?
Restrict access to certain files but not when linked from my own website
htaccess;referrer
Problem is, when I want to open these files via my website, I get an error, because the link to the file is a direct link

An ordinary link on your website is not a direct link. If the browser is sending any referer at all, then when a user clicks a link on your website the referer is your website. If you are not getting a referer header in this instance then something else is going on.

However, you probably do need to allow an empty referer for when users' browsers don't send the HTTP referer header (for whatever reason). For example, when users type the URL directly in their browser (this is a direct link), or simply hit the reload button - presumably you do want to allow this? If you don't allow this then it is possible that some legitimate users might have problems accessing your files.

Your first example looks pretty much OK, except that http://19.24.3.13/~child/ looks a bit weird (this looks like the temporary URL that some shared hosts supply before the domain resolves?). However, the following should work:

RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://19\.24\.3\.13
RewriteRule \.(pdf|docx?)$ - [F]

The above will return a 403 Forbidden for PDF, DOC and DOCX URLs when the HTTP referer is not empty AND not the current host. Note that this allows direct requests (when the HTTP referer is empty). If you wish to prevent direct requests then omit the first RewriteCond directive.

Your second code block results in a 500 Internal Server Error because LocationMatch is not permitted in .htaccess files. You need the Files directive.

UPDATE: I'd previously included %{HTTP_HOST} in the CondPattern (2nd argument to the RewriteCond directive) - that was a stupid mistake! Server variables are not evaluated in the CondPattern (a regex), so it would have matched the literal string %{HTTP_HOST}! That would never happen, so the condition (a negative match) would always have succeeded and the request would always have been blocked!
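A minimal sketch of that last suggestion, reusing the SetEnvIf lines and the env variable name (legit_referal) from the question; FilesMatch, unlike LocationMatch, is allowed in .htaccess:

SetEnvIf Referer ^http://19\.24\.3\.13/~child/ legit_referal
SetEnvIf Referer ^$ legit_referal
<FilesMatch "\.(pdf|docx?)$">
    Order Deny,Allow
    Deny from all
    Allow from env=legit_referal
</FilesMatch>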
_webapps.84718
Often when I'm browsing my google docs in Chrome, I find that I want to open the same folder in the Mac OS X Finder so that I can edit one of the native documents. Right now I'm forced to open my root Google Drive folder and then browse the same folder. How can I do this more quickly, directly to the right folder?
How do I open a selected folder in Google Docs in the Mac Finder?
google drive;google documents;mac
null
_unix.193655
I recently set up OpenVPN on a CentOS 6 machine. The setup went smoothly and I can connect to it fine from a client computer when both computers are on the same network. I want to know how to make the connection when the client computer is on a different network at a different location. At the moment I am getting the following error in the client side's log:

Tue Mar 31 19:20:14 2015 OpenVPN 2.3.6 x86_64-w64-mingw32 [SSL (OpenSSL)] [LZO] [PKCS11] [IPv6] built on Mar 19 2015
Tue Mar 31 19:20:14 2015 library versions: OpenSSL 1.0.1m 19 Mar 2015, LZO 2.08
Enter Management Password:
Tue Mar 31 19:20:14 2015 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.1:25340
Tue Mar 31 19:20:14 2015 Need hold release from management interface, waiting...
Tue Mar 31 19:20:14 2015 MANAGEMENT: Client connected from [AF_INET]127.0.0.1:25340
Tue Mar 31 19:20:15 2015 MANAGEMENT: CMD 'state on'
Tue Mar 31 19:20:15 2015 MANAGEMENT: CMD 'log all on'
Tue Mar 31 19:20:15 2015 MANAGEMENT: CMD 'hold off'
Tue Mar 31 19:20:15 2015 MANAGEMENT: CMD 'hold release'
Tue Mar 31 19:20:15 2015 Socket Buffers: R=[8192->8192] S=[8192->8192]
Tue Mar 31 19:20:15 2015 UDPv4 link local: [undef]
Tue Mar 31 19:20:15 2015 UDPv4 link remote: [AF_INET]192.168.20.17:1194
Tue Mar 31 19:20:15 2015 MANAGEMENT: >STATE:1427822415,WAIT,,,
Tue Mar 31 19:21:15 2015 TLS Error: TLS key negotiation failed to occur within 60 seconds (check your network connectivity)
Tue Mar 31 19:21:15 2015 TLS Error: TLS handshake failed
Tue Mar 31 19:21:15 2015 SIGUSR1[soft,tls-error] received, process restarting
Tue Mar 31 19:21:15 2015 MANAGEMENT: >STATE:1427822475,RECONNECTING,tls-error,,
Tue Mar 31 19:21:15 2015 Restart pause, 2 second(s)

iptables is stopped, so I'm not sure how to make the OpenVPN server accessible from outside the local network.
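A detail worth flagging in the log above: the client is dialing 192.168.20.17, a private LAN address that cannot be reached from a different network. For cross-network use, the client config would normally point at the server's public endpoint (the placeholder below is hypothetical), with the router on the server side forwarding UDP 1194 to the CentOS machine:

# client.ovpn / client.conf
remote <server-public-ip-or-hostname> 1194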
OpenVPN Connect from a Different Network
remote;openvpn;troubleshooting
null
_unix.257590
I need to use SSH on my machine to access my website and its databases (setting up a symbolic link, but I digress). The problem: I enter the command ssh-keygen -t dsa to generate a public/private DSA key pair. I save it in the default location (/home/user/.ssh/id_dsa) and enter a passphrase twice. Then I get this back:

WARNING: UNPROTECTED PRIVATE KEY FILE!
Permissions 0755 for '/home/etc.ssh/id_rsa' are too open.
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.
bad permissions: ignore key: [then the FILE PATH in VAR/LIB/SOMEWHERE]

To work around this I then tried:

sudo chmod 600 ~/.ssh/id_rsa
sudo chmod 600 ~/.ssh/id_rsa.pub

But shortly after, my computer froze up, and on logging back in there was a "could not find .ICEauthority" error. I got round this problem and deleted the SSH files, but I want to use the correct permissions to avoid these issues in future. How should I set up ICEauthority, where should I save the SSH keys, and what permissions should they have? Would using a virtual machine be best? This is all very new and I am on a very steep learning curve, so any help is appreciated.
SSH Key Permissions Chmod settings?
ssh;chmod
null
_codereview.173055
I have a list of palindromes:

List<string> palindromes = new List<string>() { "hijkllkjih", "ijkllkji", "jkllkj", "kllk", "ll", "defggfed", "efggfe", "fggf", "gg", "abccba", "bccb", "cc", "qrrq", "rr", "mnnm", "nn", "pop", "o", "s", "q", "r" };

But I want to remove palindromes that are substrings of other palindromes. So, the list would become:

List<string> palindromes = new List<string>() { "hijkllkjih", "defggfed", "abccba", "qrrq", "mnnm", "pop", "s", "q", "r" };

To do this I have written the following code:

List<int> indexList = new List<int>();
for (int i = 0; i < palindromes.Count; ++i)
{
    if (indexList.Contains(i))
        continue;
    string tmp = palindromes[i];
    for (int j = 0; j < palindromes.Count; ++j)
    {
        if (i == j || indexList.Contains(j) || palindromes[j].Length > palindromes[i].Length)
            continue;
        if (tmp.Contains(palindromes[j]))
            indexList.Add(j);
    }
}
foreach (var index in indexList.OrderByDescending(l => l))
    palindromes.RemoveAt(index);

Can this be improved?
Removing Substrings from List
c#
You can simplify your code to this:

var palindromesCopy = palindromes.ToList();
palindromes.RemoveAll(x => palindromesCopy.Any(y => x != y && y.Contains(x)));

If you want to remove duplicated strings as said by @MoonKnight in comments, just call Distinct on palindromes after removing.
_unix.94109
I am running Linux Mint 13 with the KDE 4 desktop. I would like to launch applications from a terminal (Konsole in my specific case) and set the exact size and location of each window. As an example, if I launch Kate and Chromium from a terminal, I want Kate's window to cover the left half of my screen and Chromium to cover the upper-right quarter of my screen. How can I accomplish this?

PS: I have a 15.6" screen set to a 1920x1080 resolution.
Launching applications from a terminal with specific window size and location
kde;chrome;window management;window geometry
null
_unix.303528
My server is crashing every two days around early afternoon. I've tried overloading the server with CPU-intensive programs but that does not cause it to crash, so I believe a certain program or configuration being run is causing it. I've downloaded crash and tried some simple commands with it, but I'm not sure what it is outputting.

[root@resh boot]# crash /usr/lib/debug/lib/modules/2.6.32-642.1.1.el6.x86_64/vmlinux /var/crash/127.0.0.1-2016-08-02-09\:12\:20/vmcore
KERNEL: /usr/lib/debug/lib/modules/2.6.32-642.1.1.el6.x86_64/vmlinux
DUMPFILE: /var/crash/127.0.0.1-2016-08-02-09:12:20/vmcore [PARTIAL DUMP]
CPUS: 32
DATE: Tue Aug 2 09:09:29 2016
UPTIME: 12:47:24
LOAD AVERAGE: 4.78, 4.66, 4.55
TASKS: 998
NODENAME: resh.cluster.org
RELEASE: 2.6.32-642.1.1.el6.x86_64
VERSION: #1 SMP Tue May 31 21:57:07 UTC 2016
MACHINE: x86_64 (2294 Mhz)
MEMORY: 31.8 GB
PANIC: BUG: unable to handle kernel NULL pointer dereference at 0000000000000002
PID: 42993
COMMAND: kslowd002
TASK: ffff88040d88d520 [THREAD_INFO: ffff880100000000]
CPU: 7
STATE: TASK_RUNNING (PANIC)

crash> bt
PID: 42993 TASK: ffff88040d88d520 CPU: 7 COMMAND: kslowd002
#0 [ffff8801000039c0] machine_kexec at ffffffff8103fdcb
#1 [ffff880100003a20] crash_kexec at ffffffff810d1fe2
#2 [ffff880100003af0] oops_end at ffffffff8154bd00
#3 [ffff880100003b20] no_context at ffffffff810518cb
#4 [ffff880100003b70] __bad_area_nosemaphore at ffffffff81051b55
#5 [ffff880100003bc0] bad_area_nosemaphore at ffffffff81051c23
#6 [ffff880100003bd0] __do_page_fault at ffffffff8105231c
#7 [ffff880100003cf0] do_page_fault at ffffffff8154dc8e
#8 [ffff880100003d20] page_fault at ffffffff8154af95
[exception RIP: unknown or invalid address]
RIP: 0000000000000002 RSP: ffff880100003dd8 RFLAGS: 00010202
RAX: ffffffffa0465a80 RBX: ffff8801bc7da200 RCX: ffff8801bc7da2a8
RDX: 0000000000000002 RSI: 00000000ffffffff RDI: ffff8801bc7da200
RBP: ffff880100003e20 R8: ffffffff81ad12d8 R9: fe2582cc8764a601
R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000000
R13: ffff8801bc7da248 R14: ffff8801bc7da290 R15: 00000000ffffffff
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#9 [ffff880100003dd8] fscache_object_slow_work_execute at ffffffffa0460e9f [fscache]
#10 [ffff880100003e28] slow_work_execute at ffffffff81121363
#11 [ffff880100003e68] slow_work_thread at ffffffff81121645
#12 [ffff880100003ee8] kthread at ffffffff810a662e
#13 [ffff880100003f48] kernel_thread at ffffffff8100c28a

Since it seemed to be happening every two days, I've looked at the cron jobs, but there are no cron jobs that match a schedule of every two days. I've also tried updating the kernel, but that has not helped at all either.
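Two hedged follow-ups inside the same crash session can help narrow this down; both log and ps are standard crash(8) commands:

crash> log    # dump the kernel ring buffer captured in the vmcore
crash> ps     # list the tasks that were running at panic time

Since the backtrace already lands in fscache_object_slow_work_execute, FS-Cache related messages near the end of the log output would be the first thing to look for.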
How can I find the cause of my CentOS 6.8 server crashing (suffering kernel panic) every two days?
centos;crash;kernel panic
null
_softwareengineering.294761
I'm evaluating a library whose public API currently looks like this:

libengine.h

/* Handle, used for all APIs */
typedef size_t enh;

/* Create new engine instance; result returned in handle */
int en_open(int mode, enh *handle);

/* Start an engine */
int en_start(enh handle);

/* Add a new hook to the engine; hook handle returned in h2 */
int en_add_hook(enh handle, int hooknum, enh *h2);

Note that enh is a generic handle, used as a handle to several different datatypes (engines and hooks). Internally, most of these APIs of course cast the handle to an internal structure which they've malloc'd:

engine.c

struct engine
{
    // ... implementation details ...
};

int en_open(int mode, enh *handle)
{
    struct engine *en;
    en = malloc(sizeof(*en));
    if (!en)
        return -1;
    // ...initialization...
    *handle = (enh)en;
    return 0;
}

int en_start(enh handle)
{
    struct engine *en = (struct engine*)handle;
    return en->start(en);
}

Personally, I hate hiding things behind typedefs, especially when it compromises type safety. (Given an enh, how do I know to what it's actually referring?) So I submitted a pull request, suggesting the following API change (after modifying the entire library to conform):

libengine.h

struct engine; /* Forward declaration */
typedef size_t hook_h; /* Still a handle, for other reasons */

/* Create new engine instance, result returned in en */
int en_open(int mode, struct engine **en);

/* Start an engine */
int en_start(struct engine *en);

/* Add a new hook to the engine; hook handle returned in hh */
int en_add_hook(struct engine *en, int hooknum, hook_h *hh);

Of course, this makes the internal API implementations look a lot better, eliminating casts and maintaining type safety to/from the consumer's perspective.

libengine.c

struct engine
{
    // ... implementation details ...
};

int en_open(int mode, struct engine **en)
{
    struct engine *_e;
    _e = malloc(sizeof(*_e));
    if (!_e)
        return -1;
    // ...initialization...
    *en = _e;
    return 0;
}

int en_start(struct engine *en)
{
    return en->start(en);
}

I prefer this for the following reasons:

- Added type safety
- Improved clarity of types and their purpose
- Removed casts and typedefs
- It follows the recommended pattern for opaque types in C

However, the owner of the project balked at the pull request (paraphrased):

    Personally I don't like the idea of exposing the struct engine. I still think the current way is cleaner & more friendly. Initially I used another data type for the hook handle, but then decided to switch to enh, so all kinds of handles share the same data type to keep it simple. If this is confusing, we can certainly use another data type. Let's see what others think about this PR.

This library is currently in a private beta stage, so there isn't much consumer code to worry about (yet). Also, I've obfuscated the names a bit. How is an opaque handle better than a named, opaque struct?

Note: I asked this question at Code Review, where it was closed.
Why use an opaque handle that requires casting in a public API rather than a typesafe struct pointer?
c;api design;libraries
null
_webmaster.102337
Is there any generally accepted average time for crawling a page in Google Webmaster Tools? You can find this report on the crawl stats page. I am trying to audit possible SEO issues; my website currently has an average crawl time of around 700 ms.
What is an acceptable Time spent downloading a page (in milliseconds) in Google Search Console?
google search console;performance
null
_codereview.28310
I started learning JavaScript a week ago, and I made a sorting function on my own. It does work, but I'd like reviews and advice on how to write a better one.

<body>
<ul id="list">
    <li>Art</li>
    <li>Mobile</li>
    <li>Education</li>
    <li>Games</li>
    <li>Magazines</li>
    <li>Sports</li>
</ul>

var list = document.getElementById("list");
var myList = list.getElementsByTagName("li");
var a = [];
for (var i = 0; i < myList.length; i++) {
    a[i] = myList[i].innerHTML;
}
a.sort();
for (var i = 0; i < myList.length; i++) {
    myList[i].innerHTML = a[i];
}

Output:

Art
Education
Games
Magazines
Mobile
Sports
My first javascript sorting
javascript;sorting;dom
null
_unix.139480
I'm new to Linux shell scripting. I have a few input files that I have to parse one by one, so in my work-in-progress (WIP) folder I have one input file at a time. While one file is undergoing processing, I want a log file to be created in the LOG folder with the same name as the input file but with the extension .log. Is there any advice on how I can create another file with the same name as a different file? What I actually want to do is this: copy the filename and store it in a variable, then use the variable to create a $variable.log file and write the log to it.
Create a log file with the same name as input file
bash;shell script;filenames
Assuming that there is only ever one file in the directory:

file_name=$(ls wip_folder)
log_file="${file_name}.log"
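A slightly fuller sketch along the same lines, using the WIP and LOG folder names from the question; "process" stands in for the real parsing command (hypothetical), and ${base%.*} strips the input file's extension before .log is appended:

for f in WIP/*; do
    base=$(basename "$f")             # e.g. input.txt
    log_file="LOG/${base%.*}.log"     # e.g. LOG/input.log
    process "$f" > "$log_file" 2>&1   # hypothetical processing command
done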
_scicomp.128
Informally in our lab, we have developed 2 metrics to compare CFD solvers over the range of machines we have access to. One is called COMP, which stands for COde Machine Performance. This single number is supposed to represent the absolute performance of a given code on a given machine. It is computed, for a given run, by multiplying the number of cells per computing/processing core by the number of iterations performed and then dividing by the runtime. In an ideal situation, this number should be constant no matter the number of cores being used, the size of the grid or the duration of the run. It directly indicates on how many cells one iteration can be performed by one core in one second. By extension, the acronym COMP will be used as a unit for measuring the performance of the codes. For example, if a given run yields 3.2 k COMPs, it means the code is able to process one iteration on 3200 cells per core per second, or 3200 iterations on one cell per core per second, or any similar combination. Derived from COMP, we have obvious metrics like speed, speed-up and efficiency, which just express raw performance and scaling in different ways.

The other metric, which is designed to compare the efficiency of different schemes/codes on the same machines, simply looks at the amount of CPU time required per unit of physical time simulated. Of course, this leaves many parameters out of the analysis, like the grid or the accuracy of the obtained solution. But we strive to compare runs of equivalent grid/accuracy (for example, if you compare a 4th-order scheme with a 2nd-order scheme, you should probably use half as many points for a similar accuracy).

What do you think of these metrics? Do they appear valid to you? Do you know or use other similar metrics to benchmark your CFD solver? I should also add that we usually deal with explicit schemes on structured grids, although we are now starting to do some comparisons with a DG code. The reasoning might be different for unstructured grids and/or implicit schemes.
Benchmarking performance in CFD: how to compare machines and codes?
performance;fluid dynamics;hpc;benchmarking
At the end of the day, the only things that matter are wall-clock time to a chosen accuracy and Watts (or dollars) to a chosen accuracy.

For implementation and hardware performance, I like to measure in terms of memory bandwidth and flop/s per Watt or per dollar. If the performance of the code is very far below the hardware peak according to these metrics, then there is likely some implementation inefficiency. Alternatively, if even very simple benchmarks like STREAM perform well below machine peak, then the machine may have bottlenecks that reduce its realizable performance.

The efficiency of an algorithm really can't be measured by cells per second except within a restricted class. If you measure by that metric, I argue that you have missed the point.
_unix.348771
Environment:

- Fedora 25 (4.9.12-200.fc25.x86_64)
- GNOME Terminal 3.22.1, using VTE version 0.46.1 +GNUTLS
- VIM - Vi IMproved 8.0 (2016 Sep 12, compiled Feb 22 2017 16:26:11)
- tmux 2.2

I recently started using tmux and have observed that the colors within Vim change depending on whether I'm running inside or outside of tmux. There are screenshots of Vim outside (left) and inside (right) of tmux while viewing a Git diff in the original post.

My TERM variable is:

- Outside tmux: xterm-256color
- Inside tmux: screen-256color

Vim reports these terminal types as expected (via :set term?):

- Outside tmux: term=xterm-256color
- Inside tmux: term=screen-256color

Vim also reports both instances are running in 256-color mode (via :set t_Co?):

- Outside tmux: t_Co=256
- Inside tmux: t_Co=256

There are many similar questions out there regarding getting Vim to run in 256-color mode inside tmux (the best answer I found is here), but I don't think that's my problem given the above information. I can duplicate the problem outside of tmux if I run Vim with the terminal type set to screen-256color:

$ TERM=screen-256color vim

So that makes me believe there's simply some difference between the xterm-256color and screen-256color terminal capabilities that causes the difference in color. Which leads to the question posed in the title: what specifically in the terminal capabilities causes the Vim colors to be different? I see the differences between running :set termcap inside and outside of tmux, but I'm curious as to which variables actually cause the difference in behavior.

Independent of the previous question, is it possible to have the Vim colors be consistent when running inside or outside of tmux? Some things I've tried include explicitly setting the default terminal tmux uses in ~/.tmux.conf to various values (some against the advice of the tmux FAQ):

set -g default-terminal screen-256color
set -g default-terminal xterm-256color
set -g default-terminal screen.xterm-256color
set -g default-terminal tmux-256color

and starting tmux using tmux -2. In all cases, Vim continued to display different colors inside of tmux.
Why do Vim colors look different inside and outside of tmux?
terminal;vim;tmux;colors
tmux doesn't support the terminfo capability bce (back color erase), which vim checks for, to decide whether to use its default color scheme. That characteristic of tmux has been mentioned a few times:

- Reset background to transparent with tmux?
- Clear to end of line uses the wrong background color in tmux
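A quick way to confirm this on your own machine, assuming ncurses' infocmp is available:

infocmp xterm-256color | grep bce     # bce appears in the capability list
infocmp screen-256color | grep bce    # no output: bce is absent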
_vi.9062
It worked fine on my home PC, but it messes up colors on my work machine. Here is my .vimrc:

set nocompatible
filetype off
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()
Plugin 'VundleVim/Vundle.vim'
Plugin 'FelikZ/ctrlp-py-matcher'
Plugin 'JulesWang/css.vim'
Plugin 'altercation/vim-colors-solarized'
Plugin 'ap/vim-css-color'
Plugin 'bronson/vim-trailing-whitespace'
Plugin 'cakebaker/scss-syntax.vim'
Plugin 'christoomey/vim-tmux-navigator'
Plugin 'gcorne/vim-sass-lint'
Plugin 'kien/ctrlp.vim'
Plugin 'mattn/emmet-vim'
Plugin 'mileszs/ack.vim'
Plugin 'mxw/vim-jsx'
Plugin 'othree/html5.vim'
Plugin 'othree/javascript-libraries-syntax.vim'
Plugin 'othree/yajs.vim'
Plugin 'scrooloose/nerdtree'
Plugin 'scrooloose/syntastic'
Plugin 'tpope/vim-surround'
Plugin 'vim-airline/vim-airline'
Plugin 'vim-airline/vim-airline-themes'
call vundle#end()
filetype plugin on
filetype plugin indent on
set autoread
set wildmenu
set ruler
set backspace=eol,start,indent
set whichwrap+=<,>,h,l
set lazyredraw
set noerrorbells
set novisualbell
set t_vb=
set tm=500
set tabstop=4
set shiftwidth=4
set smarttab
set expandtab
set smartindent
set autoindent
set gdefault
set showmatch
set hlsearch
set incsearch
set ignorecase
set ffs=unix,dos,mac
set fencs=utf-8,cp1251,koi8-r,ucs-2,cp866
set relativenumber
set cursorline
set showcmd
set laststatus=2
set statusline=\ %{HasPaste()}%F%m%r%h\ %w\ \ CWD:\ %r%{getcwd()}%h\ \ \ Line:\ %l
set nobackup
set noswapfile
let NERDTreeShowHidden=1
let g:ctrlp_custom_ignore = 'vendor\|node_modules\|.git'
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*
let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
let g:syntastic_check_on_open = 1
let g:syntastic_check_on_wq = 1
let g:syntastic_javascript_checkers = ['eslint']
let g:syntastic_css_checkers = ['csslint']
let g:syntastic_sass_checkers = ['sass_lint']
let g:syntastic_scss_checkers = ['sass_lint']
let g:syntastic_php_checkers = ['php']
let mapleader=
nnoremap <leader>g :CtrlP<CR>
nnoremap <leader>f /
nnoremap <leader>F :Ack!<space>
nnoremap <leader>h :%s/
nnoremap <leader>o :NERDTreeToggle<CR>
nnoremap <leader><space> :noh<CR>
nnoremap <leader>n :tabn<CR>
nnoremap <leader>p :tabp<CR>
nnoremap j gj
nnoremap k gk
nnoremap <leader>j <C-]>
nnoremap <leader>k <C-O>
nnoremap <F4> :Ack! <cword><CR>
nnoremap <F12> :!ctags -R --exclude=node_modules .<cr>
inoremap <Up> <NOP>
inoremap <Down> <NOP>
inoremap <Left> <NOP>
inoremap <Right> <NOP>
noremap <Up> <NOP>
noremap <Down> <NOP>
noremap <Left> <NOP>
noremap <Right> <NOP>
inoremap <C-h> <C-o>h
inoremap <C-j> <C-o>j
inoremap <C-k> <C-o>k
inoremap <C-l> <C-o>l
map <Leader>bg :let &background = ( &background == "dark"? "light" : "dark" )<CR>
nnoremap <leader>s :StackOverflow<space>
set pastetoggle=<F2>
syntax enable
syntax sync minlines=256
set t_Co=256
set background=dark
colorscheme solarized
" Run these commands before startup
autocmd VimEnter * AirlineTheme solarized
autocmd BufWritePre * :%s/\s\+$//e
autocmd FileType php set omnifunc=phpcomplete#CompletePHP
autocmd FileType javascript set omnifunc=javascriptcomplete#CompleteJS
autocmd FileType css set omnifunc=csscomplete#CompleteCSS
autocmd FileType html set omnifunc=htmlcomplete#CompleteTags

In my .bashrc I export TERM=xterm-256color. Am I missing something?

Edit: it happens both in and out of tmux; Vim version 7.4.576; terminal: bash. The problem I'm trying to solve is the cursorline and relativenumber column being black; I should've clarified this earlier. (Screenshot in the original post.)
Solarized colorscheme is not displayed correctly (cursorline and relativenumber column)
vimrc;colorscheme;plugin vundle;plugin solarized
null
_webmaster.69418
How can I report to the University of Kentucky that someone is using their webpage to get SEO rank? This is the link: http://www.uky.edu/seeblue/Social/index.html (they did take it down after I spoke with them about it).
How can I report to the University of Kentucky that someone is using their webpage to get SEO rank?
seo
Contact the university directly: http://www.uky.edu/UKHome/subpages/contact.html Ask to be patched through to whoever maintains their website, or just ask for their email address. Let them know about the situation and leave it with them. To answer your question: I don't think this is illegal, just shady.
_unix.343134
I am new to Linux and I use Lubuntu. I installed the bootloader on a USB drive, and I connect the USB drive in order to load the OS, which is installed on a hard-drive partition. I need to use the USB drive for other things, so I need to format it, and I have a clean unused SD card. Could someone tell me how to put the bootloader on the SD card?
Create bootloader on an SD card
linux;boot loader;lubuntu
sudo fdisk -l
# find your SD card device in the output
sudo grub-install /dev/<insert sd drive name here>
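A hedged variant for keeping GRUB's files on the card itself rather than in the running system's /boot (the mount point /mnt/sdcard is hypothetical):

sudo mount /dev/<sd card partition> /mnt/sdcard
sudo grub-install --boot-directory=/mnt/sdcard/boot /dev/<insert sd drive name here>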
_vi.9935
I have Bash files in which I have constructs that include (@)_ as part of the variable name. For example:

(@)_VariableName
${(@)_VariableName[@]}
${#(@)_VariableName[@]}
${!(@)_VariableName[@]}

The Bash files are pre-processed and the literal string (@)_ is replaced with characters so that the final result is valid Bash. The sequence (@)_, if present, will always be at the beginning of a variable name. I'd like to modify the sh.vim syntax file so that such variable names are considered valid in identifier highlighting and not shown with shDerefWordError highlighting. I've tried modifying:

syn match shDerefVar contained "{\@<=!\k\+" nextgroup=@shDerefVarList

to:

syn match shDerefVar contained "{\@<=\((@)_\)\?!\k\+" nextgroup=@shDerefVarList

but this has not worked. Note that this allowance might reasonably be made conditional on a variable such as b:is_eggsh. Can anyone suggest a solution?
Modify sh.vim to accept (@)_ as part of a variable name for Bash highlighting
syntax highlighting;filetype sh
You are close. The problem is that you took the syntax definition for Bash's special ${!varname}, and that doesn't match.

I would also define a separate syntax group, used exclusively for your placeholder variables. The sh.vim syntax script is extensible via syntax clusters; that's how you install it as an additional dereference:

syn match shDerefPlaceholder contained "{\@<=[!#]\?(@)_\k\+" nextgroup=@shDerefVarList
syn cluster shDerefList add=shDerefPlaceholder

Alternative

Integrating with an existing syntax can be messy, especially with an advanced one like shell (which supports several sub-modes). An alternative would be using :match Preproc /(@)_[a-zA-Z_][a-zA-Z0-9_]*/; this always applies over existing syntax highlighting. The only complication: this is window-local, so you'd need some :autocmds to install it, like this:

au BufNewFile,BufRead * if &syntax == 'sh' | match Preproc /(@)_[a-zA-Z_][a-zA-Z0-9_]*/ | endif
_softwareengineering.311620
I'm going to release a XUL application to my users. It is freeware (and maybe it will be open-sourced later). Previously I used to package it with XULRunner and ship everything to the end user. Now I have to use firefox -app /path/to/application.ini instead. However, these slight modifications are required:

- I'd like to remove all unnecessary files and directories, including these directories: browser, components, defaults, dictionaries, gmp-clearkey, icons, webapprt, and almost all .ini files. This will save ~18 MB or so.
- I want to rename firefox, firefox.app and firefox.exe to something else. Also, I want to change (or remove) the default Firefox icon.
- On Mac, there is an extra step: I'd like to remove 32-bit code from the XUL library, using the ditto command. This will save more than ~60 MB according to my tests. This is extremely useful, as I don't want to have both 32-bit and 64-bit code in the same bundle.

Do these modifications require Mozilla's written permission? There is some information on this page: Mozilla Trademark Policy. However, I can't tell whether they require written permission or not.
Distribute a XUL Application using Firefox binary
firefox;mpl
I got this answer after contacting trademarks AT Mozilla dot com:

    If you are shipping a modified version of Firefox, these steps are required :-) You may not brand it as Firefox in any way. As long as you make that change, you can make whatever other changes you want. That's a freedom given to you by the open source licenses; you don't need our permission. You will, of course, have to abide by the open source licenses governing our code, for example by telling your users where to get the source code of any MPLed code and Modifications. See: https://www.mozilla.org/MPL/2.0/FAQ/
_softwareengineering.119507
So a traditional scrum board looks something like this:

Backlog    | Story   | Not started   | In progress | Done
           | Story 1 | Story1 tasks  |             |
           | Story 2 | Story2 tasks  |             |
Story ..   |         |               |             |
Story n    |         |               |             |
Epic x     |         |               |             |
Epic x+1   |         |               |             |

However, in general a story has many scenarios, and when working with BDD you want to write each scenario for a story as Given, When and Then. Also, the scenarios don't belong in the "not started", "in progress" or "done" columns, as a scenario is not a task. So you realize that scenarios should have their own column between "story" and "not started", since a scenario can have many tasks before it is considered done. But if you are going to build your tasks from scenarios, then why would you need the story on the scrum board in the first place? Maybe stories should be left in the backlog. Some people put scenarios on the back of each story. This is an ongoing debate in my team and I wanted to see if anyone has solved this differently. Cheers!
Where do you put scenarios on a scrum board?
bdd;scrum
null
_softwareengineering.305930
In many books and tutorials, I've heard the practice of memory management stressed, with the implication that mysterious and terrible things happen if I don't free memory after I'm done using it. I can't speak for other systems (although to me it's reasonable to assume that they adopt a similar practice), but at least on Windows, the kernel is basically guaranteed to clean up most resources (with the exception of an odd few) used by a program after program termination. This includes heap memory, among various other things. I understand why you would want to close a file after you're done using it in order to make it available to the user, or why you would want to disconnect a socket connected to a server in order to save bandwidth, but it seems silly to have to micromanage ALL the memory used by your program. Now, I agree that this question is broad, since how you should handle your memory depends on how much memory you need and when you need it, so I will narrow the scope of this question to this: if I need to use a piece of memory throughout the lifespan of my program, is it really necessary to free it right before program termination?

Edit: The question suggested as a duplicate was specific to the Unix family of operating systems. Its top answer even specified a tool specific to Linux (e.g. Valgrind). This question is meant to cover most normal non-embedded operating systems and why it is or isn't a good practice to free memory that is needed throughout the lifespan of a program.
If I need to use a piece of memory throughout the lifespan of my program, is it really necessary to free it right before program termination?
programming practices;memory usage
If I need to use a piece of memory throughout the lifespan of my program, is it really necessary to free it right before program termination?

It is not mandatory, but it can have benefits (as well as some drawbacks). If the program allocates memory once during its execution time and would otherwise never release it until the process ends, it may be a sensible approach not to release the memory manually and to rely on the OS. On every modern OS I know, this is safe; at the end of the process all allocated memory is reliably returned to the system. In some cases, not cleaning up the allocated memory explicitly may even be notably quicker than doing the clean-up. However, by releasing all the memory at the end of execution explicitly:

- during debugging / testing, mem leak detection tools won't show you false positives
- it might be much easier to move the code which uses the memory, together with allocation and deallocation, into a separate component and use it later in a different context where the usage time for the memory needs to be controlled by the user of the component
- the lifespan of programs can change. Maybe your program is a small command line utility today, with a typical lifetime of less than 10 minutes, and it allocates memory in portions of some kB every 10 seconds - so no need to free any allocated memory at all before the program ends. Later on the program is changed and gets extended usage as part of a server process with a lifetime of several weeks - so not freeing unused memory in between is not an option any more, otherwise your program starts eating up all available server memory over time. This means you will have to review the whole program and add deallocating code afterwards. If you are lucky, this is an easy task; if not, it may be so hard that chances are high you will miss a place. And when you are in that situation, you will wish you had added the free code to your program beforehand, at the time when you added the malloc code.

More generally, writing allocating and related deallocating code pairwise counts as a good habit among many programmers: by doing this always, you decrease the probability of forgetting the deallocation code in situations where the memory must be freed.
_codereview.25972
I am developing a C# Windows service that will always watch different folders/files and DB query results on different time intervals. There can be dozens of watchers, each watching a specific file, folder or DB query result; each sends emails to a specific email address if some predefined threshold is met, and then begins watching again.

The requirements that I'm trying to address are:

- Each watcher must do its task in a separate thread.
- If a watcher/task throws an exception, it must automatically be restarted after, say, 3 hours.

The strategy I am using:

- I create a List of IWatcher objects which holds many instances of DB, file and folder watchers.
- IWatcher is just an interface with some common properties/methods across all DB/file/folder watchers.
- I call a method BeginWatch which creates a separate task for each watcher in the watcher list and stores that task in a Dictionary. The key is set to the ID of the watcher.
- I surround the thread method's body with try/catch to catch any exception.
- If an exception occurs, I create a Timer object in the catch body, save the watcher's ID (whose work is stopped) in this timer, and schedule it with its interval set to a few hours. I remove the task object from the tasks dictionary.
- Upon the Elapsed event of this timer, I recreate another task and add it to the tasks dictionary.

Dictionary object for holding tasks against a watcher's ID:

private Dictionary<string, TaskDetails> _watcherThreads;

Here string is the DB/file/folder watcher's unique ID, and TaskDetails is a class that holds a Task and other information for that task that I need during program execution.

TaskDetails:

class TaskDetails
{
    // The Task object
    public Task WatcherTask { get; set; }

    // CancellationTokenSource reference for each task,
    // in case we need to cancel tasks individually
    public CancellationTokenSource WatcherCancellationToken { get; set; }

    // A Timer that will re-enable this specific task after a few hours
    // if an exception occurs
    public MyTimer DisablingTimer { get; set; }

    public TaskDetails(Task task, CancellationTokenSource cancellationToken)
    {
        this.WatcherTask = task;
        this.WatcherCancellationToken = cancellationToken;
    }
}

Next I create a list of watchers and then call the following method to create a task for each watcher:

public void BeginWatch()
{
    // this._watchers is the List<IWatcher> where all the watcher objects are stored
    if (this._watchers == null || this._watchers.Count == 0)
        throw new ArgumentNullException("Watchers not found");

    // call CreateWatcherThread for each watcher to create a list of threads
    this._watchers.ForEach(CreateWatcherThread);
}

I create the list of tasks with other info:

// I call this method and pass in any watcher which I want to run in a new thread.
// This method is called several times, creating a TASK for each IWatcher object
private void CreateWatcherThread(IWatcher watcher)
{
    IWatcher tmpWatcher = watcher.Copy();
    CancellationTokenSource cancellationToken = new CancellationTokenSource();

    // Create a task and run it
    Task _watcherTask = Task.Factory.StartNew(
        () => _createWatcherThread(tmpWatcher, cancellationToken),
        cancellationToken.Token,
        TaskCreationOptions.LongRunning,
        TaskScheduler.Default);

    // Add Thread, CancellationToken and IsDisabled = false to dictionary.
    // Save the key as WID, which will be unique. This will help us retrieve
    // the right TASK object later from the list
    this._watcherThreads.Add(
        tmpWatcher.WID,
        new TaskDetails(_watcherTask, cancellationToken)
    );
}

The thread method performs the operation and sets a timer to re-enable this watcher after some time if an exception occurs:

private void _createWatcherThread(IWatcher wat, CancellationTokenSource cancellationToken)
{
    IWatcher watcher = wat.Copy();
    bool hasWatchBegin = false;

    try
    {
        // run forever
        for (;;)
        {
            // dispose the watcher and stop this thread if a CANCEL token has been issued
            if (cancellationToken.IsCancellationRequested)
            {
                ((IDisposable)watcher).Dispose();
                break;
            }
            else if (!hasWatchBegin)
            {
                watcher.BeginWatch();
                hasWatchBegin = true;
            }
        }
    }
    catch (Exception ex)
    {
        // set a timer to reactivate this watcher after some hours.
        // MyTimer is an extended System.Timers.Timer class; I have only added a
        // WatcherID property to associate it with a specific watcher object
        MyTimer enableTaskTimer = new MyTimer();
        enableTaskTimer.Interval = AppSettings.DisabledWatcherDuration; // say 3 hours

        // store the watcher's ID so we can create another task later that resumes this watcher
        enableTaskTimer.WatcherID = watcher.WID;
        enableTaskTimer.Elapsed += DisablingTimer_Elapsed;
        enableTaskTimer.Start();

        // remove the thread from the existing list, as this task will be recreated and stored again
        this._watcherThreads.Remove(watcher.WID);

        // log exception
        SingletonLogger.Instance.WriteToLogs(ex.Message, LogSeverity.Error);
    }
}

void DisablingTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    MyTimer timer = sender as MyTimer;

    // get the watcher object by its WID. MyTimer.WatcherID has the ID of the watcher
    // that crashed; now it's time to recreate a task that will begin the watch again
    IWatcher wat = this._watchers.Where(w => w.WID == timer.WatcherID).SingleOrDefault();

    // dispose the timer, it is no longer needed
    timer.Stop();
    timer.Dispose();

    // recreate a new thread for the watcher
    if (wat != null)
        this.CreateWatcherThread(wat);
}

Am I doing this the right way?
File/folder watcher Windows service
c#;multithreading;timer;windows;task parallel library
null
_codereview.129073
I have implemented a metaclass in Python 3 that, apart from the usual instance constructor (i.e. __init__), enables you to define a class constructor (as a class method that I called __init_class__). The metaclass extends abc.ABCMeta so it supports abstract classes, i.e. the class constructor is not called if a class is abstract. I've written up some tests for it, they pass, and the metaclass also works in practice for me, but I'd like to know:

- Is this even a good approach for adding class-constructor functionality?
- Are there any problematic corner cases that I might run into with my code?
- Am I testing it correctly?
- Any comments about my code being (or not being) pythonic?

Implementation:

import abc
import inspect


class MyType(abc.ABCMeta):
    """Metaclass.

    It adds the extra functionality of an optional class constructor, which
    might be added to the class by adding a class method __init_class__(cls)
    to the class definition. The class constructor takes no arguments.
    """

    def __init__(cls, name, bases, nmspc):
        super().__init__(name, bases, nmspc)
        cls.__has_init_class__ = hasattr(cls, '__init_class__')
        if cls.__has_init_class__ and not inspect.isabstract(cls):
            cls.__init_class__()

Tests:

import unittest


class MyTypeTests(unittest.TestCase):
    """Unit tests for MyType metaclass."""

    def test_class_constructor(self):
        """Is the class constructor invoked?"""

        class WithoutInit(metaclass=MyType):  # pylint: disable=R0903
            """Dummy class."""
            initialized = False

        class AbstractWithoutInit(metaclass=MyType):  # pylint: disable=R0903
            """Dummy abstract class."""
            initialized = False

            @abc.abstractmethod
            def spam(self):
                """Dummy abstract method."""
                pass

        class WithInit(metaclass=MyType):  # pylint: disable=R0903
            """Dummy class with class constructor."""
            initialized = False

            @classmethod
            def __init_class__(cls):
                cls.initialized = True

        class AbstractWithInit(metaclass=MyType):  # pylint: disable=R0903
            """Dummy abstract class with class constructor."""
            initialized = False

            @abc.abstractmethod
            def spam(self):
                """Dummy abstract method."""
                pass

            @classmethod
            def __init_class__(cls):
                cls.initialized = True

        self.assertFalse(WithoutInit.initialized)
        self.assertFalse(AbstractWithoutInit.initialized)
        self.assertTrue(WithInit.initialized)
        self.assertFalse(AbstractWithInit.initialized)
Python 3 class constructor
python;object oriented;python 3.x;constructor
null
_softwareengineering.219385
As the title states: when implementing SOA, is it a concept intended for communication between systems over a network, or is it intended as a concept that operates within a language as a pattern?
Is SOA as a concept intended to function within code or between machines over a network?
design patterns;soa
Service Oriented Architecture is an architecture, so the answer is neither.

It's not a design pattern within a language because it governs decisions far, far outside of the program design - notably, how all your business data is organized into services, which has a close relationship with your organizational structure. Even some of the technical concepts like fire-and-forget messaging are generally language-agnostic.

And it's not specifically related to communication between systems over a network because you could implement an entire SOA in a single process if you wanted to. The preferred method of service interaction in an SOA is in-process, and data or messages should only cross process boundaries when you specifically need to scale out. Even then, SOA is concerned with the logical rather than physical deployment. If you have a billing service, the architecture says nothing about where that service is located, and parts of it may in fact be located in several different physical endpoints.

SOA lends itself well to distributed systems because of some of the other technical constraints it tends to impose, such as asynchrony and loose coupling. Distributed systems generally behave better when they treat the network as a network (i.e. don't depend on low latency/high bandwidth) and when components can all operate autonomously. But that's an outcome of SOA, not its goal.

An SOA is very simply the opposite of a canonical data model; in other words, each service is like a little dictatorship that guards its data ferociously and won't share anything with any other service except what it absolutely needs in order to function. You can implement that in any programming language, and with (almost) any physical infrastructure.
_softwareengineering.317139
I'm using Unity as my IoC container with C#, but I guess the question isn't really limited to Unity and C#, but applies to IoC in general. I try to follow the SOLID principles, which means that I have very few dependencies between two concrete classes. But when I need to create new instances of a model, what's the best way to do it? I usually use a factory to create my instances, but there are a few alternatives and I wonder which is better, and why?

Simple factory:

public class FooFactory : IFooFactory
{
    public IFoo CreateModel()
    {
        return new Foo(); // references a concrete class.
    }
}

Service-locator factory:

public class FooFactory : IFooFactory
{
    private readonly IUnityContainer _container;

    public FooFactory(IUnityContainer container)
    {
        _container = container;
    }

    public IFoo CreateModel()
    {
        return _container.Resolve<IFoo>(); // Service-locator anti-pattern?
    }
}

Func factory, with no dependencies on other classes:

public class FooFactory : IFooFactory
{
    private readonly Func<IFoo> _createFunc;

    public FooFactory(Func<IFoo> createFunc)
    {
        _createFunc = createFunc;
    }

    public IFoo CreateModel()
    {
        return _createFunc(); // Is this really better than service-locator?
    }
}

Which IFooFactory should I use, and why? Is there a better option? The examples above are on a more conceptual level, where I try to find a balance between SOLID, maintainable code and service locator. Here's an actual example:

public class ActionScopeFactory : IActionScopeFactory
{
    private readonly Func<Action, IActionScope> _createFunc;

    public ActionScopeFactory(Func<Action, IActionScope> createFunc)
    {
        _createFunc = createFunc;
    }

    public IActionScope CreateScope(Action action)
    {
        return _createFunc(action);
    }
}

public class ActionScope : IActionScope, IDisposable
{
    private readonly Action _action;

    public ActionScope(Action action)
    {
        _action = action;
    }

    public void Dispose()
    {
        _action();
    }
}

public class SomeManager
{
    public void DoStuff()
    {
        using(_actionFactory.CreateScope(() => AllDone()))
        {
            // Do stuff. And when done call AllDone().
            // Another way of actually writing try/finally.
        }
    }
}

Why do I use a factory at all? Because I sometimes need to create new models. There are various scenarios when this is necessary, e.g. in a mapper, when the mapper has a longer lifetime than the object it should map. Example of factory usage:

public class FooManager
{
    private IService _service;
    private IFooFactory _factory;

    public FooManager(IService service, IFooFactory factory)
    {
        _service = service;
        _factory = factory;
    }

    public void MarkTimestamp()
    {
        IFoo foo = _factory.CreateModel();
        foo.Time = DateTime.Now;
        foo.User = // current user
        _service.DoStuff(foo);
    }

    public void DoStuffInScope()
    {
        using(var foo = _factory.CreateModel())
        {
            // do stuff with foo...
        }
    }
}
Service-locator anti-pattern alternative
c#;inversion of control;service locator
null
_unix.47814
I search the terminal command history by pressing Ctrl+R, but what if I have:

This is an old command
This is an | less -S older command

I press Ctrl+R and then I type "this is an", and the old command comes up but not the older one. How can I search all the "this is an" commands? Is it possible to pipe all similar commands to grep or something? And if I set -o vi, how do I undo it?
Searching command history
bash;terminal;command history
To search for a command in the history, press Ctrl+R multiple times ;-) You can also grep through the history using:

history | grep YOUR_STRING
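A hedged worked example of the grep approach; the numbered entries shown are illustrative, and the last two lines answer the set -o vi aside from the question:

history | grep 'this is an'    # list every matching entry with its history number
!214                           # re-run entry number 214 via history expansion
set +o vi                      # undo "set -o vi"
set -o emacs                   # or switch back to the default emacs-style editing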
_cs.60599
In a description of OOP in my textbook, it is written that in a procedure-oriented program the program is organized around its code, while in object-oriented programming the program is organized around its data. What is the meaning of this statement? An explanation with an example would be of great help.
In object-oriented programming, the program is organized around its data
programming languages;object oriented
null
_reverseengineering.6306
Where can I find a working link to this tutorial? I have searched a lot on the net but all the links are broken. Could anyone upload it somewhere? Original link: http://www.alex-ionescu.com/vb.pdf
Where can I find "Visual Basic Image Internal Structure Format" by Alex Ionescu?
patch reversing;visual basic
http://web.archive.org/web/20071020232030/http://www.alex-ionescu.com/vb.pdf
_webapps.107974
I'm watching a playlist of 130 entries from start to finish, or so I thought: after watching the 46th entry I'm thrown back to the first, when I was expecting to see the 47th. When I tap "next" I go to the beginning of the playlist; when I tap the 47th entry directly, I go to the 47th entry. I don't understand why it works like this. I have been trying to figure out the purpose and usage of YouTube playlists for a long time, but it keeps eluding me with unexpected behavior. Using Safari on iOS 10. (Screenshots in the original: the playlist, before tapping next, and after tapping next.)
Why do YouTube playlists restart in the middle?
youtube;youtube playlist
null
_webapps.40600
Lately, I've been getting notifications alerting me when one of my friends is at a nearby location. Since I couldn't care any less, I'd like to disable these, but I don't know where that particular setting is.
How do I disable "so-and-so is nearby at whatever" notifications?
facebook
null
_codereview.151893
Here is a list where I would like to keep only the "windows" where there is a group of numbers higher than 3; but if anywhere inside such a group you find a smaller number, you keep it. Moreover, if there is an isolated high number (between zeros, for example), we need to delete it.

l = [3.5, 0, 0, 0.5, 4, 10, 20, 3, 20, 10, 2, 0, 0, 3.5, 0, 2, 18, 15, 2, 14, 2, 0]

And the expected result:

[4, 10, 20, 3, 20, 10, 18, 15, 2, 14]

This program does it, but I feel it is not very pythonic. Could you think of any other way?

l = [3.5, 0, 0, 0.5, 4, 10, 20, 3, 20, 10, 2, 0, 0, 3.5, 0, 2, 18, 15, 2, 14, 2, 0]
new_l = []
index = []
seuil = 3

for i, elt in enumerate(l):
    if elt > seuil:
        if (l[i-1] and l[i+1]) > 1:
            new_l.append(elt)
            index.append(i)
        else:
            pass
    else:
        try:
            if (l[i-1] and l[i+1]) > seuil:
                new_l.append(elt)
                index.append(i)
        except IndexError:
            pass

print index
print new_l
Pythonic way to select element in a list window
python
- Write functions; improvements can then range from faster code to the ability to reliably time or profile your code.
- You don't need to use else if you're just using pass in that block.
- Your code is WET, so it's not DRY. Instead of duplicating the code, you could assign to a variable in the if and else, to check against.
- You may want to get into the habit of using if __name__ == '__main__':. In short, if you import the file it won't run the code.

This can get you:

def rename_me(l, seuil):
    new_l = []
    index = []
    for i, elt in enumerate(l):
        bound = 1 if elt > seuil else seuil
        try:
            if (l[i-1] and l[i+1]) > bound:
                new_l.append(elt)
                index.append(i)
        except IndexError:
            pass
    return index, new_l

A couple more things:

- I think you have a bug: (l[i-1] and l[i+1]) > bound probably isn't doing what you think it is. It's only checking if l[i+1] is greater than bound, as 1 and 2 == 2, and 0 and 2 == 0, are both true. Instead you may want to use l[i-1] > bound < l[i+1].
- I don't see why you'd need both the indexes and the elements from the original list, so you could instead just return one or the other.
- You can change the function to be a generator function, as this reduces memory usage.
- You could use a modified itertools.pairwise to remove the IndexError:

from itertools import tee

def rename_me(l, seuil):
    a, b, c = tee(l, 3)
    next(b, None)
    next(c, None)
    next(c, None)
    for index, (i, j, k) in enumerate(zip(a, b, c), 1):
        if i > (1 if j > seuil else seuil) < k:
            yield index
_unix.33855
I'm working on software which connects to a real-time data server (using TCP), and I have some connections dropping. My guess is that the clients do not read the data coming from the server fast enough. Therefore I would like to monitor my TCP sockets. For this I found the ss tool. This tool allows you to see the state of every socket - here's an example line of the output of the command ss -inm 'src *:50000':

ESTAB 0 0 184.7.60.2:50000 184.92.35.104:1105 mem:(r0,w0,f0,t0) sack rto:204 rtt:1.875/0.75 ato:40

My question is: what does the memory part mean? Looking at the source code of the tool I found that the data comes from a kernel structure (sock in sock.h). More precisely, it comes from the fields:

r = sk->sk_rmem_alloc
w = sk->sk_wmem_queued;
f = sk->sk_forward_alloc;
t = sk->sk_wmem_alloc;

Does somebody know what they mean? My guesses are:

- rmem_alloc: size of the inbound buffer
- wmem_alloc: size of the outbound buffer
- sk_forward_alloc: ???
- sk->sk_wmem_queued: ???

Here are my buffer sizes:

net.ipv4.tcp_rmem = 4096 87380 174760
net.ipv4.tcp_wmem = 4096 16384 131072
net.ipv4.tcp_mem = 786432 1048576 1572864
net.core.rmem_default = 110592
net.core.wmem_default = 110592
net.core.rmem_max = 1048576
net.core.wmem_max = 131071
Kernel socket structure and TCP_DIAG
linux;tcp;socket
sk_forward_alloc is the forward-allocated memory, which is the total memory currently available in the socket's quota.

sk_wmem_queued is the amount of memory used by the socket send buffer queued in the transmit queue that has either not yet been sent out or not yet been acknowledged.

You can learn more about TCP memory management in chapter 9 of TCP/IP Architecture, Design and Implementation in Linux by Sameer Seth and M. Ajaykumar Venkatesulu.
_unix.59899
I have a number of users of a desktop machine, who each have a folder on a remote Samba server. I am trying to set up per-user mounting of these network folders following the instructions here: https://askubuntu.com/questions/67405/auto-mounting-network-shares-per-user

That article recommends adding the following line to sudoers:

user ALL= NOPASSWD: /bin/mount -t cifs -o cred=/home/user/.Music.cred //server/music /home/user/MyMusicFolder

That command works fine, but I need to mount the shares so that they are owned by a specific group. I have tried changing the line to read like this:

user ALL= NOPASSWD: /bin/mount -t cifs -o cred=/home/user/.Music.cred,gid=xxx //server/music /home/user/MyMusicFolder

But with the addition of the ,gid=xxx the sudoers file no longer parses correctly. I assume this is because the NOPASSWD command actually takes a comma-separated list of arguments, so I need to escape the comma in the command. I can do this using a backslash, like so:

user ALL= NOPASSWD: /bin/mount -t cifs -o cred=/home/user/.Music.cred\,gid=xxx //server/music /home/user/MyMusicFolder

Now the sudoers file will parse, and I can run the mount command with sudo without being asked for a password. Unfortunately, I now get this error when I try to run any other command:

Sorry, user is not allowed to execute '/bin/ls /home/' as root on ubuntu.

I don't get this error if I remove the \,gid=xxx from the line. So what is the right way to escape commas in the sudoers file?
Passwordless sudo of a command containing a comma
linux;ubuntu;sudo
From the sudoers(5) man page:

    Note that the following characters must be escaped with a '\' if they are used in command arguments: ',', ':', '=', '\'.
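When experimenting with escapes like the \, above, it helps to parse-check the file before relying on it; visudo has a check mode for exactly this:

sudo visudo -c    # syntax-check all sudoers files and report errors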
_unix.324145
I am trying to set up a Linux container using bridged networking. Here's how I set up my bridge: http://www.ericsbinaryworld.com/2016...he-connection/ And here's how I installed the container: http://www.ericsbinaryworld.com/2016...etting-up-lxc/

When I use lxc-attach -n lemmy to get into the container, I don't have internet access within the container. Did I forget an easy step? This is running in a KVM VM that is using macvtap, and the VM itself is able to access the net. Other relevant info/things I've done to try and debug the problem:

Host OS: Fedora 24. VM: CentOS 7, named Airship. Inside Airship, a container named Lemmy.

First round of debugging: I started the VM (Airship) and logged in as root. ping www.google.com works.

lxc-start -n lemmy -d
lxc-attach -n lemmy

Now I'm inside the container. ping 8.8.8.8 gets me:

connect: Network is unreachable

So I did an ip a and it looks like the interface isn't up. A check of systemctl status network.service showed it was in a failed state. When I tried systemctl start network.service it just sat there without seeming to finish.

Second round of debugging: from systemctl status network.service it looked like it was stalling on trying to get a DHCP address. So I edited the following file:

/etc/sysconfig/network-scripts/ifcfg-eth0

to have:

DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.1.36
PREFIX=24
GATEWAY=192.168.1.1
DNS1=192.168.1.7
DOMAIN=mushroomkingdom
HOSTNAME=
NM_CONTROLLED=no
TYPE=Ethernet
MTU=

So now it comes up and has an IP address, but I can't reach anyone local or on the internet. dmesg shows:

[ 3932.778454] virbr0: port 2(vethFXTSQ3) entered forwarding state
[ 4089.412588] virbr0: received packet on eth0 with own address as source address

It can ping itself and the host:

[root@lemmy ~]# ping 192.168.1.36
PING 192.168.1.36 (192.168.1.36) 56(84) bytes of data.
64 bytes from 192.168.1.36: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 192.168.1.36: icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from 192.168.1.36: icmp_seq=3 ttl=64 time=0.019 ms
64 bytes from 192.168.1.36: icmp_seq=4 ttl=64 time=0.031 ms
[root@lemmy ~]# ping 192.168.1.35
PING 192.168.1.35 (192.168.1.35) 56(84) bytes of data.
64 bytes from 192.168.1.35: icmp_seq=1 ttl=64 time=0.085 ms
64 bytes from 192.168.1.35: icmp_seq=2 ttl=64 time=0.047 ms

But if I try my local DNS:

[root@lemmy ~]# ping 192.168.1.7
PING 192.168.1.7 (192.168.1.7) 56(84) bytes of data.
From 192.168.1.36 icmp_seq=1 Destination Host Unreachable
From 192.168.1.36 icmp_seq=2 Destination Host Unreachable
From 192.168.1.36 icmp_seq=3 Destination Host Unreachable

Other things you might ask for:

[root@airship ~]# lxc-info -n lemmy
Name: lemmy
State: RUNNING
PID: 3802
IP: 192.168.1.36
CPU use: 0.18 seconds
BlkIO use: 92.50 KiB
Memory use: 1.11 MiB
KMem use: 0 bytes
Link: vethFXTSQ3
TX bytes: 3.24 KiB
RX bytes: 54.10 KiB
Total bytes: 57.34 KiB

and on the VM hosting the container:

[root@airship ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:3d:99:5c brd ff:ff:ff:ff:ff:ff
    inet 192.168.254.214/24 brd 192.168.254.255 scope global dynamic ens4
       valid_lft 2308sec preferred_lft 2308sec
    inet6 fe80::5054:ff:fe3d:995c/64 scope link
       valid_lft forever preferred_lft forever
3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
    link/ether 52:54:00:64:f5:67 brd ff:ff:ff:ff:ff:ff
4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 52:54:00:64:f5:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.35/24 brd 192.168.1.255 scope global virbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe64:f567/64 scope link
       valid_lft forever preferred_lft forever
8: vethFXTSQ3@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master virbr0 state UP qlen 1000
    link/ether fe:6f:c5:df:0e:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::fc6f:c5ff:fedf:ee1/64 scope link
       valid_lft forever preferred_lft forever

[root@airship ~]# brctl show
bridge name    bridge id          STP enabled    interfaces
virbr0         8000.52540064f567  no             eth0
                                                 vethFXTSQ3
LXC Container cannot access LAN OR Internet
networking;lxc
null
_unix.82876
I have two VirtualBox virtual machines: one with SuSE Enterprise 10 and one with Red Hat Enterprise 5. I want to connect through SSH from Red Hat to SuSE. Both are configured to use a bridged adapter with vnic0 as the name and Allow All for Promiscuous Mode. SuSE has IP 10.211.55.5 and Red Hat has IP 10.211.55.6. I can connect through SSH from SuSE to Red Hat but not vice versa. What could be the problem?

If I ping from Red Hat to SuSE and vice versa, everything is in order and the transmitted packets are received. If I traceroute from SuSE to Red Hat I get

1 10.211.55.6 (10.211.55.6)(H!) 0.509 ms (H!) 14.974 ms (H!) 0.877 ms

which seems in order, but when I traceroute from Red Hat to SuSE I get

traceroute to 10.211.55.5 (10.211.55.5), 30 hops max, 40 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
...

If I do ssh -l user 10.211.55.6 from the SuSE virtual machine everything connects OK, but if I do ssh -l user 10.211.55.5 from the Red Hat virtual machine I get Connection timed out. Both systems are connected to the internet and I can surf the net. What am I doing wrong? Why can't I SSH from Red Hat to the SuSE virtual machine?
Connect two VirtualBox virtual machines through SSH
networking;ssh;virtualbox;suse
null
_codereview.74067
I wish to redirect users to the login page if they attempt to visit a page which requires them to be logged in. After logging in, however, I want to redirect the user back to their original destination. I've written a redirect.php script which is to be included on all such pages:

<?php
require "session.php";
if(!$user){
    header("Location: login.php?dest=".urlencode($_SERVER["REQUEST_URI"]));
    die();
}
?>

Then on my login page I have the following:

<?php
$dest = "./";
if(isset($_GET["dest"])){
    $dest = $_GET["dest"];
}
?>

with the following JavaScript:

var URL = "<?php echo $dest; ?>";
//...
//upon successful login (via AJAX):
window.location.replace(URL);

Everything here works as intended, but where does this stand from a security standpoint? One vulnerability that comes to mind is something like

http://mysite.com/login.php?dest=http://phishingsite.com

How might I best prevent something like this? Would regex be suitable here? Are there any other security concerns with this type of thing? Perhaps a standard way of doing this? Or better yet, a method which does not use GET variables at all?
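One common mitigation, as a sketch (the specific check is an assumption, not the asker's code): accept only destinations that are site-local relative paths, which rejects absolute URLs like http://phishingsite.com as well as protocol-relative ones like //phishingsite.com:

<?php
$dest = "./";
if(isset($_GET["dest"])){
    $candidate = $_GET["dest"];
    if(preg_match('#^/(?!/)#', $candidate)){ // must start with a single "/"
        $dest = $candidate;
    }
}
?>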
PHP login redirect security
php;security;authentication;url
null
_unix.274865
I'm a complete newbie and I'm starting to learn the basics of shell scripts, so I apologize if this question is very simple for most users. I'm trying to display the text TRUE on the screen if a file is tested to have the readable bit set, using an if-then statement. Thank you most kindly.
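A minimal sketch of the kind of test described (assuming the filename arrives as the first positional parameter):

#!/bin/bash
# print TRUE if the file named by the first argument is readable
if [ -r "$1" ]; then
    echo "TRUE"
fi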
code to display the text TRUE
linux;shell;centos;scripting
null
_softwareengineering.291638
I am new to JUnit. I have a class with four methods, and I am creating a test class with test cases. Can a single test case exercise more than one method of the class under test, or should there be one test case per method? What is good practice?
Multiple methods in single test case
junit
null
_unix.73915
I have a quick question that I ran across while trying to install Linux Mint alongside Windows 8 on my computer. I'm pretty new to Linux stuff, so I'm sorry if this is pretty obvious. I'm mainly looking for confirmation that my partitioning scheme will work, but also what I should do with the bootloader. I have a hard drive and an SSD, but the hard drive is just being used for storage so I won't really mention it.

As it stands, my SSD already has Windows 8 installed on an NTFS partition, as well as a 10GB swap partition and an empty ext4 partition where I plan to install Mint. I have a few questions about this setup.

Will there be any problems mounting /home on an empty ext4 partition on my hard drive while mounting / on the SSD? I can't see why there would be, but I thought I might as well ask.

The Mint installer asks me which device to use for the boot loader installation. The choices on my SSD are:

/dev/sdb (entire SSD)
/dev/sdb1 (described as "Windows 8 (loader)" in the menu, which is the NTFS partition with the entire Windows 8 installation including the bootloader)
/dev/sdb6 (the ext4 partition where I'm installing Mint)

Right now, /dev/sdb1 is marked as my boot partition, which makes sense, seeing as that's where the Windows 8 bootloader is. If I choose to install the new bootloader there, would it overwrite the Windows 8 one and mess everything up? My understanding is that Mint installs GRUB, and when you boot into that it gives you the choice of going into Mint or Windows, and if you choose Windows it just jumps over to the Windows bootloader. With this in mind, I was thinking that I should just install the bootloader into /dev/sdb6, leaving the Windows partition completely alone, and then set /dev/sdb6 as my boot partition. Then when I boot it will go into GRUB, and if I choose the Windows 8 bootloader it will jump over to /dev/sdb1 and start Windows normally. I really don't understand this stuff that well though, so I thought it would definitely be a good idea to ask so I don't make Windows unbootable or something. I also don't understand the choice of putting the bootloader in /dev/sdb, since there is no unallocated space on the SSD or anything, so the idea of installing the bootloader across 3 partitions seems a little off to me.
Question About Bootloader Partition (Mint)
linux mint;partition;dual boot;boot loader
null
_cs.6846
Possible Duplicate: Complexity inversely proportional to $n$

I'm curious whether anyone has come up with a problem or method whose running time $t$ goes to 0 as $n \to \infty$. Are there any such cases in quantum computing?
Is there an example of an algorithm that has O(1/n)?
time complexity
As the complexity of an algorithm is a measure of the number of operations (in a sense to be defined in each context) needed to do some computation as a function of the size of some input, sub-constant complexity does not make any sense. With your example, $O(\frac1n)$, it means that for a sufficiently large input the algorithm does strictly less than one operation, which in terms of Turing machines means that the initial state is accepting, which means that the corresponding Turing machine does not output anything.

Edit: I had not seen the quantum computing reference, so my answer might not be complete, although I doubt it makes sense even in that context.
_ai.3301
I'm working on a project where I train a Q-learning agent to learn an optimal control policy for a water heater. I've set up a simulation which allows the agent to explore for one year. I then examine the results of the agent's performance exploiting its optimal policy for the following year. The agent can perform the following actions (available actions depend on the state of the environment):

Turn the electrical heating element on.
Turn the electrical heating element off.
Turn gas heating on.
Turn gas heating off.
Do nothing.

The goal of the agent is to reach the target temperature (50 deg C) when hot water is scheduled. The agent is rewarded for choosing actions which produce the lowest CO2 emissions (the CO2 emissions produced from electricity vary over time).

One of the issues I have noticed is that during the exploration phase, the agent tries a lot of weighted random actions which cause the water heater to overheat (>80 deg C). When the water heater overheats, it is not possible for the agent to perform further actions other than switching off heating and doing nothing. The agent is also punished for reaching the overheated tank state. The tank may remain in the overheated state for some time. It seems as if the tendency to overheat the tank during exploration is negatively impacting how the agent learns its policy, as it reduces the number of experiences in other states.

Is there a term for this kind of situation during exploration in reinforcement learning? During exploration, the agent chooses a softmax-weighted random action. Are there alternative ways of choosing actions that may still allow for exploration while not reaching the overheating state?
Agent exploration which leads to a negative state where actions are limited
machine learning;reinforcement learning
null
_unix.323446
I am having an issue where DHCP (I thought, as I read in other similar topics) is clearing the /etc/resolv.conf file on each boot. I am not sure how to deal with this, since the posts I have found (1, 2 and some others) are for Debian-based distros or others, but not Fedora.

This is the output of ifcfg-enp0s31f6, so it is definitely DHCP:

cat /etc/sysconfig/network-scripts/ifcfg-enp0s31f6
HWADDR=C8:5B:76:1A:8E:55
TYPE=Ethernet
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no
IPV6_AUTOCONF=no
IPV6_DEFROUTE=no
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=enp0s31f6
UUID=0af812a3-ac8e-32a0-887d-10884872d6c7
ONBOOT=yes
IPV6_PEERDNS=no
IPV6_PEERROUTES=no
BOOTPROTO=dhcp
PEERDNS=yes
PEERROUTES=yes

On the other side, I don't know if NetworkManager is doing something else around this.

Update: content of NetworkManager.conf (I have removed the comments since they are useless):

$ cat /etc/NetworkManager/NetworkManager.conf
[main]
#plugins=ifcfg-rh,ibft
dns=none
[logging]
#domains=ALL

Can I get some help with this? It's annoying to have to set up the file again on every reboot.

UPDATE 2

After a month I'm still having the same issue where the file gets deleted by something. Here are the steps I followed in order to make a fresh test:

Reboot the PC. After the PC restarts, open a terminal and try to ping Google servers, of course without success:

$ ping google.com
ping: google.com: Name or service not known

Check the network configuration, where all seems to be fine:

$ cat /etc/sysconfig/network-scripts/ifcfg-enp0s31f6
NAME=enp0s31f6
ONBOOT=yes
HWADDR=C8:5B:76:1A:8E:55
MACADDR=C8:5B:76:1A:8E:55
UUID=0af812a3-ac8e-32a0-887d-10884872d6c7
BOOTPROTO=static
PEERDNS=no
DNS1=8.8.8.8
DNS2=8.8.4.4
DNS3=192.168.1.10
NM_CONTROLLED=yes
IPADDR=192.168.1.66
NETMASK=255.255.255.0
BROADCAST=192.168.1.255
GATEWAY=192.168.1.1
TYPE=Ethernet
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=no

Restart the network service:

$ sudo service network restart
[sudo] password for <current_user>:
Restarting network (via systemctl):  [ OK ]

Try to ping Google servers again, with no success:

$ ping google.com
ping: google.com: Name or service not known

Check for the file /etc/resolv.conf:

$ cat /etc/resolv.conf
cat: /etc/resolv.conf: No such file or directory

The file doesn't exist anymore - and this is the problem: something is deleting it on every reboot.

Create the file and add the DNS entries:

$ sudo nano /etc/resolv.conf

Ping Google servers, this time with success:

$ ping google.com
PING google.com (216.58.192.110) 56(84) bytes of data.
64 bytes from mia07s35-in-f110.1e100.net (216.58.192.110): icmp_seq=1 ttl=57 time=3.87 ms

Any ideas what could be happening here?
File /etc/resolv.conf deleted on every reboot, why or what?
networking;fedora;networkmanager;dhcp;resolv.conf
null
_unix.162501
I need an overview of what data is kept in FreeBSD 9.3's Process Control Block and in its Thread Control Block. Where can I find that information?
FreeBSD Process and Thread Control Block
freebsd
null
_unix.328669
I'm trying to deactivate checksum offloading on OSX:

sudo sysctl -w net.link.ether.inet.apple_hwcksum_tx=0
--
sysctl: oid 'net.link.ether.inet.apple_hwcksum_tx' is read only

The command above seems not to work anymore. Cheers
How to disable tcp checksum offloading on OSX 10.11.6?
osx;tcp
null
_unix.189072
I have a list of roughly 100 entries to be deleted from a comma-delimited (CSV) file. They are already in another text file called tbd.txt. My first thought is to write a bash for loop around sed -i, but that seems horribly wasteful of disk I/O. Is there a better way to have sed parse the file of deletions internally? There is a similar problem here, but the solution doesn't seem scalable.
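For reference, a common non-sed approach (a sketch, assuming GNU grep and that tbd.txt holds one literal string per line; data.csv is a placeholder name):

grep -v -F -f tbd.txt data.csv > data.filtered.csv

Here -F treats the entries as fixed strings rather than regexes, -f reads them from tbd.txt, and -v keeps only the non-matching lines; add -x if the entries must match whole lines.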
Match and delete lines with ~100 strings
text processing;sed;csv
null
_cs.14308
I am new to C++ and just learned that "The declaration of a static data member in the member list of a class is not a definition. You must define the static member outside of the class declaration, in namespace scope." I am curious why this is. Are there particular design advantages to making the rules of the language this way? I am not asking "tell me what Stroustrup was thinking when he made C++" -- which I know is an unfair question on these forums. I am asking: if someone sat down to make a computer language, why might they require that static data members are defined outside of the class? What are the advantages? What are the disadvantages? What kinds of costs would this incur? Why would someone put this quirk in their language? (Also I'm curious, but I know it's not totally appropriate to ask here: how did this quirk end up in C++?)
If someone was designing a computer language, why would they require static members to be defined outside the class?
programming languages
Such a language design solution makes sense, because the class body is basically a declaration, which normally has to be in the header file. All definitions have to appear in cpp files, where class instances are actually used. If you allowed static members to be defined inside the class body, you would mix declarations with definitions. So I would require this to make the separation between declarations and definitions clear.

In terms of this separation, it is fine to initialise static const class members inside the class body. Basically, you are defining some constant, and it could be equivalently done with some #define directive.

It doesn't seem that this restriction is due to the limits of the language or of a compiler. Java doesn't have this restriction and allows definitions inside the body of a class for any static data member (not const only).

Update: C++11 relaxes the restriction and supports non-static member in-class initialisation. Static members still have to be defined outside.
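To make the separation concrete, a minimal sketch (file and member names are illustrative):

// widget.h -- the class body only declares the static member
struct Widget {
    static int count;            // declaration, no storage yet
    static const int limit = 10; // in-class init allowed for integral constants
};

// widget.cpp -- exactly one translation unit provides the definition
int Widget::count = 0;           // this is the object the linker resolves

(Since C++17, writing "inline static int count = 0;" inside the class body is also legal, which effectively lifts the restriction discussed here.)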
_webmaster.30705
Possible Duplicate: How to have a blogspot blog in my domain?

I have a blog on Blogger named www.myclipta.blogspot.com, which I update regularly. Then I bought a custom domain, myclipta.com. Now I want to redirect from the Blogger domain to my custom domain. I don't know how to do this. I heard that I need to set DNS name servers and a CNAME, but I haven't been able to do it. Can anyone guide me, please?
Redirecting from blogger to custom domain
domains;blogger;302 redirect
null
_hardwarecs.3932
I have set up a VM (web development server) on a USB drive, but am considering moving it to an SD card. The VM is currently around 25 GB, so with a 64 GB card I would probably be safe. The price is pretty irrelevant, as long as I can justify it to the management.

why?

I don't want the VM to be stored on the host's hardware. And the USB hard drive has two drawbacks:

It is an extra object in my bag and on the table (which might be in a cafe, train, waiting room); an SD card would be much more handy.
When the host (a Surface Book) goes into standby, it cuts the power to the USB HD; when it wakes up, the VM won't respond to anything but switching off or reboot.

consideration:

The card will not only be for putting a few files on it and updating them every now and then, but for actually working on it. The Surface Book supports UHS-I, as far as I could find out.

questions:

Should I choose a certain type of storage? (performance)
Should I prefer a certain manufacturer? (reliability, durability)

options:

half size SD - I found Transcend's JetDrive Lite, designed for MacBook Air/Pro.
never found a working microSD to SD adapter; I guess the contacts wear off fast when you plug/unplug the microSD? But I really like the looks of Bosvision's microSD Adapter.
usb stick (SanDisk Ultra Fit or something similar): probably 3rd choice, because the Surface has only two USB ports
SD card for heavy use
usb;virtual machines;sd card
null
_cs.13089
I just recently learnt about the existence of Hierarchical Temporal Memory (HTM). I have already read the main document (which seems rather easy to understand), but one red flag is that the document is neither peer-reviewed nor attempts to explain in detail why it should work. I tried to look around for independent sources, and found a few papers that compare its performance against others, but none explain why it performs well or poorly. I noticed some comments claiming that it is looked down on by mainstream experts, but I was unable to find any actual criticisms. So I would like to ask: what are the criticisms regarding the performance of HTM? Assume the following:

There is a huge amount of training data, enough for even months-long training sessions. Basically, any criticism regarding the size or length of training is not relevant.
Since HTM is meant to be generic, any domain-specific criticism should be related to a more fundamental problem.

Thank you for your help.
Some criticisms of Hierarchical Temporal Memory?
neural networks
null
_unix.218245
Considering a routine such as this one:

alpha() { echo a b c | tr ' ' '\n'; }

which outputs a stream, I would like to take the output stream, transform it, and paste it with the original output stream. If I use upcasing as a sample transformation, I can achieve what I want with:

$ mkfifo p1 p2
$ alpha | tee p1 >( tr a-z A-Z > p2) >/dev/null &
$ paste p1 p2
a A
b B
c C

My question is: is there a better way to do this, preferably one not involving named pipes?
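One way without named pipes, if alpha is cheap and deterministic enough to run twice (an assumption), is process substitution in bash or ksh:

paste <(alpha) <(alpha | tr a-z A-Z)

Each <(...) is an anonymous pipe managed by the shell, so there is no mkfifo cleanup; the trade-off is that alpha runs once per substitution.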
Joined pipelines
shell script;pipe;fifo;coprocesses
null
_unix.108145
I stumbled across a blog that mentioned the following command:

who mom likes

It appears to be equivalent to

who am i

The author warns never to enter the following into the command line (I suspect he is being facetious):

who mom hates

There is nothing documented about the mom command. What does it do?
Is `who mom likes` a real linux command?
who;whoami
Yes, it's a joke, included by the developers of the who command. See the man page for who.

excerpt

If FILE is not specified, use /var/run/utmp. /var/log/wtmp as FILE is common. If ARG1 ARG2 given, -m presumed: 'am i' or 'mom likes' are usual.

This U&L Q&A titled "What is a non-option argument?" explains some of the terminology from the man page, and my answer there also covers alternatives to who am i-style commands.

Details

There really isn't anything special about am I or am i. The who command is designed to return the same results for any 2 arguments. Actually it behaves as if you called it with its -m switch:

-m     only hostname and user associated with stdin

Examples

$ who -m
saml pts/1 2014-01-06 09:44 (:0)
$ who likes candy
saml pts/1 2014-01-06 09:44 (:0)
$ who eats cookies
saml pts/1 2014-01-06 09:44 (:0)
$ who blah blah
saml pts/1 2014-01-06 09:44 (:0)

Other implementations

If you take a look at The Heirloom Project, you can gain access to an older implementation of who. "The Heirloom Toolchest is a collection of standard Unix utilities. Highlights are: Derived from original Unix material released as Open Source by Caldera and Sun." The man page that comes with who in this distribution also has the same feature, except it's more obvious:

$ groff -Tascii -man who.1 | less
...
SYNOPSIS
    who [-abdHlmpqRrstTu] [utmp_file]
    who -q [-n x] [utmp_file]
    who [am i]
    who [am I]
...
... With the two-argument synopsis forms `who am i' and `who am I', who tells who you are logged in as. ...
_hardwarecs.6317
I need to urgently choose an external hard drive. I need it to run Windows 10 on my iMac, since virtual machines show too low performance in my case. I think I need an SSD, since its data transfer rate is faster than an HDD's. I know that if an SSD fails, it fails entirely, but I will not store important data on it (I need it for working with the HoloLens emulator and several other programs), and I hope that with careful treatment it will live longer.

So, I'm looking for:

SSD
500GB or more
$50-150 max

Here is a model I have found - https://www.amazon.com/gp/product/B016JREG84/ref=ox_sc_act_title_1?ie=UTF8&psc=1&smid=ATVPDKIKX0DER

What do you think about it? Would you recommend a different one? I need an answer urgently, and I would be very grateful for your help! Thank you very much in advance!
Urgently choosing SSD - need advice!
hard disk
While I can't judge the given SSD model, I can recommend another one: the Samsung 850 EVO 500GB SSD. Here's why it's a good SSD:

Price. It's 130USD on Amazon right now and thus just as expensive as the SSD you linked.
Speed. The 850 EVO will go to 400MB/s no problem. Anything beyond that will depend on your system but probably won't make noticeable differences in practice.
Durability / Warranty. Samsung gives you a 5-year / 150TBW warranty on this SSD, meaning Samsung guarantees that your SSD will still work perfectly fine if you use it for less than 5 years and write less than 150TB (300 complete writes over the drive, or 84GB / day) during that time-span. Chances are this SSD won't fail you even if you go beyond the 150TBW limit. In fact, even the less-durable predecessor with half the storage (for which Samsung only gives three years of warranty) survived 300TBW without major issues. It is thus expected that you can go at least up to 500 - 1000TBW without losing the 850 EVO. If you want even more, you need to put down the additional 90USD for an 850 PRO, which should survive a few PB worth of writes.
Brand. Samsung flash storage has a very good reputation.
Software. Samsung provides software for all its SSDs to optimize system performance (stuff like enabling AHCI). The software also tells you how much data has already been written to your SSD.

As a non-exhaustive personal opinion: I have used this very SSD for more than 1 year now.
_unix.68263
I am currently re-setting up the AppArmor profile for Firefox 19.0.2 on Ubuntu 12.04, and I am slightly confused. I must have had Firefox 7.0.1 the last time I used this, and if I run apparmor_status then regarding Firefox I get:

profiles in enforce mode:
/usr/lib/firefox/firefox{,*[^s][^h]}//browser_java
/usr/lib/firefox/firefox{,*[^s][^h]}//browser_openjdk
/usr/lib/firefox/firefox{,*[^s][^h]}//sanitized_helper

profiles in complain mode:
/usr/lib/firefox-7.0.1/firefox.sh
/usr/lib/firefox/firefox{,*[^s][^h]}
/usr/lib/firefox/firefox{,*[^s][^h]}//null-34
/usr/lib/firefox/firefox{,*[^s][^h]}//null-34//null-35

processes in complain mode:
/usr/lib/firefox/firefox{,*[^s][^h]} (3818)
/usr/lib/firefox/firefox{,*[^s][^h]} (17960)
/usr/lib/firefox/firefox{,*[^s][^h]} (21817)
/usr/lib/firefox/firefox{,*[^s][^h]}//null-34 (3819)
/usr/lib/firefox/firefox{,*[^s][^h]}//null-34//null-35 (3823)

Now, in the directory /etc/apparmor.d/ the profiles I have relating to Firefox are usr.bin.firefox and usr.lib.firefox-7.0.1.firefox.sh. Regarding the location of the Firefox executable itself on my system: /usr/bin/firefox is a symlink to /usr/lib/firefox/firefox.sh, i.e. there is no version number.

Are the profiles in enforce mode somehow sub-profiles that inherit from the parent profile and stay in enforce mode despite the parent being in complain mode? Why is the profile shown in the status in the form /usr/lib/firefox/firefox rather than /usr/lib/firefox-7.0.1/firefox?

Finally, I thought messages were supposed to go to /var/log/messages, yet this file does not exist for me, despite processes being left in complain mode for some time...
apparmor and Firefox
ubuntu;apparmor
null
_unix.64551
How do I set up an encrypted swap file (not partition) in Linux? Is it even possible? All the guides I've found talk about encrypted swap partitions, but I don't have a swap partition, and I'd rather not have to repartition my disk. I don't need suspend-to-disk support, so I'd like to use a random key on each boot. I'm already using a TrueCrypt file-hosted volume for my data, but I don't want to put my swap in that volume. I'm not set on using TrueCrypt for the swap file if there's a better solution. I'm using Arch Linux with the default kernel, if that matters.
How do I set up an encrypted swap file in Linux?
linux;arch linux;encryption;swap
Indeed, the page describes setting up a partition, but it's similar for a swapfile:

dd if=/dev/urandom of=swapfile.crypt bs=1M count=64
loop=$(losetup -f)
losetup ${loop} swapfile.crypt
cryptsetup open --type plain --key-file /dev/urandom ${loop} swapfile
mkswap /dev/mapper/swapfile
swapon /dev/mapper/swapfile

The result:

# swapon -s
Filename              Type       Size     Used  Priority
/dev/mapper/swap0     partition  4000176  0     -1
/dev/mapper/swap1     partition  2000084  0     -2
/dev/mapper/swapfile  partition  65528    0     -3

swap0 and swap1 are real partitions.
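If the swap file should come back automatically after a reboot, a sketch for systemd-based systems (the mapping name and cipher options are assumptions; systemd's crypttab accepts a regular file as the source and sets up the loop device itself):

# /etc/crypttab
swapfile  /path/to/swapfile.crypt  /dev/urandom  swap,cipher=aes-xts-plain64,size=256

# /etc/fstab
/dev/mapper/swapfile  none  swap  defaults  0  0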
_unix.347101
The given file looks like:

CHrIS  john    herzog   10001  Marketing
tim            johnson  10002  IT
ruth   bertha  Hendric  10003  HR
christ jason   hellan   10004  Marketing

My code:

readFile=$1
#error checking to see if the file exists and is not a directory
if [ ! -f $readFile ]
then
    #echo failed no param passed
    exit 1
else
    #reads in the file and stores the information into the variable var.
    while read -r var
    do
        #echo $var
        fName=$(echo $var | cut -f1 | awk '{print $1}')
        mName=$(echo $var | cut -f2 | awk '{print $2}' | tr "\t" "x")
        echo $mName
    done < $readFile
fi

How can I get the middle tab in line 2 with tim (it needs to be an X) johnson 10002 IT to change into an X?
How to replace a tab with a character in a file
text processing
Try this. Let's say the content is stored in a file named file:

cat file | sed -E 's/	/ x/'

would give

CHrIS john xherzog 10001 Marketing
tim x johnson 10002 IT
ruth xbertha Hendric 10003 HR
christ jason hellan 10004 Marketing

As to why the sed is written in the aforementioned way, refer to this.
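If only the second line's tab should change, a sed address keeps the rest untouched (a sketch, assuming GNU sed, which understands \t in the pattern):

sed '2s/\t/x/' file

This replaces the first tab on line 2 only; sed '2s/\t/x/2' would target the second tab on that line instead.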
_unix.307190
We want to replace the Vitesse switch on the MPC8308ERDB with a KS8999 switch, and connect eTSEC0 and eTSEC1 to two separate KS8999 switches. I want to know what files I should change in order for Linux to work in this new condition. These switches are fast Ethernet switches and connect to the board with MII interfaces (there is no MDIO interface on them). We changed the device tree as shown below and checked switch functionality by checking its other ports, but networking is not working.

unchanged device tree:

enet0: ethernet@24000 {
    cell-index = <0>;
    device_type = "network";
    model = "eTSEC";
    compatible = "gianfar";
    reg = <0x24000 0x1000>;
    local-mac-address = [ 00 00 00 00 00 00 ];
    interrupts = <32 0x8 33 0x8 34 0x8>;
    interrupt-parent = <&ipic>;
    tbi-handle = <&tbi0>;
    phy-handle = < &phy0 >;
    /* sleep = <&pmc 0xc0000000>; */
    fsl,magic-packet;
    fsl,lossless-flow-ctrl = <0>;
    ptimer-handle = < &ptp_timer >;
};

enet1: ethernet@25000 {
    cell-index = <1>;
    device_type = "network";
    model = "eTSEC";
    compatible = "gianfar";
    reg = <0x25000 0x1000>;
    local-mac-address = [ 00 00 00 00 00 00 ];
    interrupts = <35 0x8 36 0x8 37 0x8>;
    interrupt-parent = <&ipic>;
    /* tbi-handle = <&tbi1>; */
    /* phy-handle = < &phy1 >; */
    /* Vitesse 7385 isn't on the MDIO bus */
    fixed-link = <1 1 1000 0 0>;
    /* sleep = <&pmc 0x30000000>; */
    fsl,magic-packet;
    fsl,lossless-flow-ctrl = <0>;
    ptimer-handle = < &ptp_timer >;
    phy-connection-type = "rgmii-id";
};

changed device tree:

enet0: ethernet@24000 {
    cell-index = <0>;
    device_type = "network";
    model = "eTSEC";
    compatible = "gianfar";
    reg = <0x24000 0x1000>;
    local-mac-address = [ 00 00 00 00 00 00 ];
    interrupts = <32 0x8 33 0x8 34 0x8>;
    interrupt-parent = <&ipic>;
    fixed-link = <2 1 100 0 0>;
    /* sleep = <&pmc 0xc0000000>; */
    fsl,magic-packet;
    fsl,lossless-flow-ctrl = <0>;
    ptimer-handle = < &ptp_timer >;
    phy-connection-type = "mii";
};

enet1: ethernet@25000 {
    cell-index = <1>;
    device_type = "network";
    model = "eTSEC";
    compatible = "gianfar";
    reg = <0x25000 0x1000>;
    local-mac-address = [ 00 00 00 00 00 00 ];
    interrupts = <35 0x8 36 0x8 37 0x8>;
    interrupt-parent = <&ipic>;
    fixed-link = <1 1 100 0 0>;
    /* sleep = <&pmc 0x30000000>; */
    fsl,magic-packet;
    fsl,lossless-flow-ctrl = <0>;
    ptimer-handle = < &ptp_timer >;
    phy-connection-type = "mii";
};

On the reference board, one eTSEC is connected to a PHY and the other is connected to a gigabit Ethernet switch.
Modifying ethernet ports on device tree
embedded;network interface;device tree
null
_unix.303055
I have installed liveusb-creator using DNF on Fedora 24. These are the dependencies that were installed along with it:

liveusb-creator.noarch 3.95.2-1.fc24
python-cssselect.noarch 0.9.1-9.fc24
python-lxml.x86_64 3.4.4-4.fc24
python-pyquery.noarch 1.2.8-7.fc24
python-qt5.x86_64 5.6-4.fc24
python-qt5-rpm-macros.noarch 5.6-4.fc24
qt5-qtconnectivity.x86_64 5.6.1-2.fc24
qt5-qtenginio.x86_64 1:1.6.1-2.fc24
qt5-qtlocation.x86_64 5.6.1-2.fc24
qt5-qtmultimedia.x86_64 5.6.1-3.fc24
qt5-qtquickcontrols.x86_64 5.6.1-1.fc24
qt5-qtsensors.x86_64 5.6.1-2.fc24
qt5-qtserialport.x86_64 5.6.1-1.fc24
qt5-qttools-common.noarch 5.6.1-2.fc24
qt5-qttools-libs-clucene.x86_64 5.6.1-2.fc24
qt5-qttools-libs-designer.x86_64 5.6.1-2.fc24
qt5-qttools-libs-help.x86_64 5.6.1-2.fc24
qt5-qtwebchannel.x86_64 5.6.1-2.fc24
qt5-qtwebsockets.x86_64 5.6.1-2.fc24
sip.x86_64 4.18-2.fc24

Now I want to uninstall liveusb-creator again, but dnf remove liveusb-creator attempts to remove more packages than were installed (including Java, which I don't want to remove):

java-1.8.0-openjdk        x86_64  1:1.8.0.101-1.b14.fc24  @updates  496 k
java-1.8.0-openjdk-devel  x86_64  1:1.8.0.101-1.b14.fc24  @updates  40 M
liveusb-creator           noarch  3.95.2-1.fc24           @updates  2.1 M
python-cssselect          noarch  0.9.1-9.fc24            @fedora   301 k
python-lxml               x86_64  3.4.4-4.fc24            @fedora   3.0 M
python-pyquery            noarch  1.2.8-7.fc24            @fedora   171 k
python-qt5                x86_64  5.6-4.fc24              @updates  20 M
python-qt5-rpm-macros     noarch  5.6-4.fc24              @updates  137
qt5-qtconnectivity        x86_64  5.6.1-2.fc24            @updates  1.3 M
qt5-qtdeclarative         x86_64  5.6.1-5.fc24            @updates  14 M
qt5-qtenginio             x86_64  1:1.6.1-2.fc24          @updates  589 k
qt5-qtlocation            x86_64  5.6.1-2.fc24            @updates  2.7 M
qt5-qtmultimedia          x86_64  5.6.1-3.fc24            @updates  3.1 M
qt5-qtquickcontrols       x86_64  5.6.1-1.fc24            @updates  3.7 M
qt5-qtsensors             x86_64  5.6.1-2.fc24            @updates  801 k
qt5-qtserialport          x86_64  5.6.1-1.fc24            @updates  190 k
qt5-qttools-common        noarch  5.6.1-2.fc24            @updates  34 k
qt5-qttools-libs-clucene  x86_64  5.6.1-2.fc24            @updates  132 k
qt5-qttools-libs-designer x86_64  5.6.1-2.fc24            @updates  5.2 M
qt5-qttools-libs-help     x86_64  5.6.1-2.fc24            @updates  647 k
qt5-qtwebchannel          x86_64  5.6.1-2.fc24            @updates  227 k
qt5-qtwebsockets          x86_64  5.6.1-2.fc24            @updates  230 k
qt5-qtxmlpatterns         x86_64  5.6.1-1.fc24            @updates  4.1 M
sip                       x86_64  4.18-2.fc24             @updates  396 k
ttmkfdir                  x86_64  3.0.9-48.fc24           @fedora   107 k
xorg-x11-fonts-Type1      noarch  7.5-16.fc24             @fedora   863 k

Why are there more packages in the list, and how can I remove only the ones that were installed previously?
How to revert a `dnf install`?
fedora;package management;live usb;dnf
I don't know the answer to the first part of your question. If you have dnf history recording activated (I think it's on by default), you can use that to undo the installation:

sudo dnf history | head

will show the last few transactions, with an identifier on the left; find your installation, then

sudo dnf history info ${transaction}

(replacing ${transaction} as appropriate) will show the details of the installation, and

sudo dnf history undo ${transaction}

will undo it (if possible).
_webmaster.33719
I am developing an app using PHP and deploying it on Apache in the Amazon AWS environment. This app needs to be made available to customers from their own chosen domain names. How can I achieve this? For example:

www.customer1.com => /var/www/myapp.mydomain.com
www.customer2.com => /var/www/myapp.mydomain.com

I would like to do this similar to how bitly enables shortened URLs for custom domains: www.myshorturl.com is DNS-configured with a CNAME - cname.bitly.com. I'd appreciate it if someone could help me achieve this functionality. If any other details are required, please let me know and I shall update the question.
How can I point wildcard domains to a folder in Apache?
apache;dns;virtualhost;cname;bitly
There are several approaches to this.

If this server hosts nothing else:

Make sure you have only one VirtualHost and that it's FIRST in the configuration.
Check that you can access the site (destination) via the raw IP, and by the Amazon domain name they give you (in the control panel, it's some numbers and letters then amazon.com).

Once you have this, you only need to tell your customers to set their A-record to your server's IP. (Now be careful -- you need to make sure you have this IP for as long as you have customers.) Alternatively, set on YOUR DNS records [app.domain.com] --> [Amazon IP] and then tell your customers to make a CNAME. That way, if your IP changes, you can just change your CNAME and all the customers should be updated relatively quickly, automatically.

If you use this server for many sites (and they are name-based VirtualHosts):

On the VirtualHost that runs this application, set ServerName [your-domain] and ServerAlias [buy-another-static-IP], because you can actually make Apache listen on a static IP on a per-virtual-host basis. (Amazon distributes these very cheaply.) Make sure also in the config that Apache listens on ALL IP addresses, which would include the one you would buy/rent if you don't have it already.

The second option here is to use customers' domains (provided this is not automated and your customer base is small) and do ServerAlias www.customer1.com and so forth.
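A minimal sketch of such a catch-all name-based VirtualHost (paths and names are placeholders, not from the question):

<VirtualHost *:80>
    # first/only vhost: answers for whatever Host header a customer's domain sends
    ServerName myapp.mydomain.com
    ServerAlias *
    DocumentRoot /var/www/myapp.mydomain.com
</VirtualHost>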
_codereview.41010
I've implemented a common wrapper pattern I've seen for the .NET cache class using generics as follows:

private static T CacheGet<T>(Func<T> refreashFunction, [CallerMemberName]string keyName = null)
{
    if (HttpRuntime.Cache[keyName] == null)
        HttpRuntime.Cache.Insert(keyName, refreashFunction(), null,
            DateTime.UtcNow.AddSeconds(600), System.Web.Caching.Cache.NoSlidingExpiration);

    return (T)HttpRuntime.Cache[keyName];
}

It could then be called like so:

public static Dictionary<string, string> SomeCacheableProperty
{
    get
    {
        return CacheGet(() =>
        {
            Dictionary<string, string> returnVal = AlotOfWork();
            return returnVal;
        });
    }
}

However, the CacheGet method could be implemented using dynamic:

private static dynamic CacheGet(Func<object> refreashFunction, [CallerMemberName]string keyName = null)
{
    if (HttpRuntime.Cache[keyName] == null)
        HttpRuntime.Cache.Insert(keyName, refreashFunction(), null,
            DateTime.UtcNow.AddSeconds(600), System.Web.Caching.Cache.NoSlidingExpiration);

    return HttpRuntime.Cache[keyName];
}

The questions I have:

Is there a technically (or philosophically) superior preference between these two implementations?
Are these different at runtime?
If they are both left in, which one is being called in the getter method?
Cache wrapper - Generics vs Dynamic
c#;generics;cache
First of all, I think you misspelled refresh as refreash. Second, your usage can probably be simplified:

CacheGet(AlotOfWork);

Finally, you might want to check if refreshFunction returns null, and maybe log a warning then, as that function returning null would cause a cache miss every time.

Now to answer your specific questions.

1. I'll say it: the generic implementation is better. dynamic is a great trapdoor when you get really bogged down with generics or anonymous types, and there are neat things you can do with it (see Dapper), but it still has some gotchas. For example, I do not think your function with dynamic will work in most cases. HttpRuntime.Cache expects and returns object types, meaning all types are being downcast or boxed. Therefore, if your function returns a User object, what is stored is still an object and what is returned from the cache is downcast likewise. Therefore your user.Username property will not be available until you cast, even though it's dynamic.

2. Yes. The generic version - with some subtle yet real differences - will run as if it was written for the type you're filling <T> with. The dynamic version will just be a value and let the DLR figure out how to invoke members (which again, unless you're calling ToString() or GetHashCode(), will fail). dynamic will also be slower, as the runtime binding has to be done every time, though admittedly this is unlikely to be any sort of bottleneck.

3. Obviously I'm going to say always use the generic version in this case.
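A sketch of the null check mentioned above, keeping the asker's signature (the early-return behaviour is an assumption about what's wanted):

private static T CacheGet<T>(Func<T> refreshFunction, [CallerMemberName]string keyName = null)
{
    if (HttpRuntime.Cache[keyName] == null)
    {
        var value = refreshFunction();
        if (value == null)
            return default(T); // nothing sensible to cache; a good place to log a warning

        HttpRuntime.Cache.Insert(keyName, value, null,
            DateTime.UtcNow.AddSeconds(600), System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return (T)HttpRuntime.Cache[keyName];
}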
_unix.364395
I'm trying to send mail from my Ubuntu server (via Gmail). I have the Starter Cloud package from Scaleway. I installed sendmail on my Ubuntu server following this tutorial. In my Laravel application I have the following configuration (Mailtrap for testing):

MAIL_DRIVER=smtp
MAIL_PORT=2525
MAIL_HOST=smtp.mailtrap.io
MAIL_USERNAME=1f129791a7e29f
MAIL_FROM_NAME=My [email protected]
MAIL_PASSWORD=passwordmailtrap

But when I try to send the email I get the error from the title: "Failed to authenticate on SMTP server with username using 3 possible authenticators". What could be my problem here? Or should I follow this other tutorial and install/configure Postfix on my server? When I try this on my local environment (Homestead) it works without problems.
Send mails on Ubuntu 16.04 - Failed to authenticate on SMTP server with username using 3 possible authenticators
ubuntu;email;postfix;sendmail;smtp
null
_datascience.10550
I have a list of email subjects like:

<XYZ> commented on <ABC>
Weekly review for <Company>
Your account is ready

and I want to find patterns in them so I can group them. Is there a well-known algorithm I can use? Preferably one with wide language implementations or easy re-implementation. The algorithm should be unsupervised, and the number of different emails is not known.

Update: I think I can break this down into two problems:

Group subjects by the similar words they use, resulting in the following. Each group should be very distinct from the rest (they should be almost perfectly exclusive) and the algorithm should give a relatively small number of groups with a good length of common words.

[commented, on]
[weekly, review]
[your, account, is, ready]

Once grouped, it should be easy to find a state automaton that accepts only the group's subjects and thus eliminates the variable parts.

Then I can go back and check if there are any intersections and tweak the variables. Having said that, is it better to use a completely different approach, like neural nets maybe? I have zero experience with those, but if it makes more sense, I am open to learning.
How would you categorise email subjects to find similar emails?
text mining;algorithms
null
_cogsci.10917
Social support has several times been linked to psychological well-being and even physical health. Some studies have also shown that shyness correlates more strongly with introversion than with neuroticism. This implies that less social people should score lower on physical health and mental well-being. Note that I am not asking whether introversion causes emotional stress, but rather whether extraverts are better at coping with emotional stress because their extraversion gives better odds of social support. Is there any research on correlations between extraversion and physical and mental health?

References:
https://en.wikipedia.org/wiki/Social_support
Where in the Big5 does shyness belong?
Do introverted people have more emotional stress?
social psychology;personality;health psychology
null
_unix.187035
I have read some books which use dump and restore. They say that during a restore, you first restore the latest full backup, and then restore each incremental backup created after the full backup. But I am using rsync. If I am correct, it does incremental backups via hard links to the previous backup. So when restoring, can I just do one restore - copy the latest incremental backup - since this will in turn copy the previous backup it hard-links to? Can I also delete the backups older than the last one, since hard-linked files won't be deleted while they are still hard-linked? How can we write a script to automatically remove older backups, so that only the most recent 2 or 3 backups are kept? If the answers to the above two are yes, then backup and restore with rsync are simpler than dump and restore. Then why do we need dump and restore? Thanks.
Restore from incremental backup?
rsync;backup;restore;dump
null
_codereview.79276
pwgen is a nice password generator utility. When you run it, it fills the terminal with a bunch of random passwords, giving you many options to choose from and pick something you like, for example:

lvk3U7cKJYkl pLBJ007977Qx b9xhj8NWPfWQ
pMgUJBUuXwpG OAAqf6Y9TXqc fJOyxoGYCRSQ
bpbwp6f2MxEH fUYTJUqg0ZMB GjVVEQxuer0k
oqTEvV1LmdJu si47MkHNRpAw 3GKV8NdGMvwf

Although there are ports of pwgen on multiple systems, it's not so easy to find on Windows. So I put together a simple Python script that's more portable, as it can run on any system with Python. I added some extra features I often want:

Skip characters that may be ambiguous, such as l1ioO0Z2I
Avoid doubled characters (they slow down typing)

Here it goes:

#!/usr/bin/env python
from __future__ import print_function

import random
import string
import re
from argparse import ArgumentParser

terminal_width = 80
terminal_height = 25
default_length = 12
alphabet_default = string.ascii_letters + string.digits
alphabet_complex = alphabet_default + '`~!@#$%^&*()_+-={}[];:<>?,./'
alphabet_easy = re.sub(r'[l1ioO0Z2I]', '', alphabet_default)
double_letter = re.compile(r'(.)\1')

def randomstring(alphabet, length=16):
    return ''.join(random.choice(alphabet) for _ in range(length))

def has_double_letter(word):
    return double_letter.search(word) is not None

def easy_to_type_randomstring(alphabet, length=16):
    while True:
        word = randomstring(alphabet, length)
        if not has_double_letter(word):
            return word

def pwgen(alphabet, easy, length=16):
    for _ in range(terminal_height - 3):
        for _ in range(terminal_width // (length + 1)):
            if easy:
                print(easy_to_type_randomstring(alphabet, length), end=' ')
            else:
                print(randomstring(alphabet, length), end=' ')
        print()

def main():
    parser = ArgumentParser(description='Generate random passwords')
    parser.add_argument('-a', '--alphabet', help='override the default alphabet')
    parser.add_argument('--complex', action='store_true', default=False,
                        help='use a very complex default alphabet', dest='complex_')
    parser.add_argument('--easy', action='store_true', default=False,
                        help='use a simple default alphabet, without ambiguous or doubled characters')
    parser.add_argument('-l', '--length', type=int, default=default_length)
    args = parser.parse_args()

    alphabet = args.alphabet
    complex_ = args.complex_
    easy = args.easy
    length = args.length

    if alphabet is None:
        if complex_:
            alphabet = alphabet_complex
        elif easy:
            alphabet = alphabet_easy
        else:
            alphabet = alphabet_default
    elif len(alphabet) < length:
        length = len(alphabet)

    pwgen(alphabet, easy, length)

if __name__ == '__main__':
    main()

How would you improve this? I'm looking for comments about all aspects of this code. I know that the terminal_width = 80 and terminal_height = 25 variables don't really reflect what their names imply. It's not terribly important, and good enough for my purposes, but if there's a way to make the script detect the real terminal width and height without importing dependencies that reduce portability, that would be pretty awesome.
Gimme some random passwords
python;python 2.7;python 3.x
Mostly a matter of personal preference, but I'd define a variable in pwgen like:

get_string = easy_to_type_randomstring if easy else randomstring

to avoid duplicated logic. Then you can simplify your code by using join instead of multiple prints:

def pwgen(alphabet, easy, length=16):
    get_string = easy_to_type_randomstring if easy else randomstring
    for _ in range(terminal_height - 3):
        print(' '.join(get_string(alphabet, length)
                       for _ in range(terminal_width // (length + 1))))
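On the terminal-size wish at the end of the question: Python 3.3+ ships this in the standard library, so no extra dependency is needed (note that Python 2.7, which the script also targets, lacks it - this is a sketch for the Python 3 case):

import shutil

size = shutil.get_terminal_size(fallback=(80, 25))
terminal_width, terminal_height = size.columns, size.lines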
_softwareengineering.351881
I want to ask what architecture a Python web server might have that implements a web API calling C++ code as a CPython extension (the C++ uses only the standard library, except Boost.Python for the Python bindings).

More about the situation:

The C++ code (just one function, named cpp_func for example) runs for 1ms in most cases, but in very specific cases it can run until RAM is full, so it needs very good timeout management.
For security and performance reasons, I prefer not to allow everyone to execute cpp_func at any time, so it must have some DB for authentication.
The project has a small budget, and for now it uses only one CPU with 500Mb RAM (a Heroku free dyno).

Current state: for now I have a web-server implementation in aiohttp with a ProcessPoolExecutor wrapping cpp_func, but I have problems with good timeout checking, and it all seems hard. Maybe I'm doing something wrong? How should I implement this web server? (The architecture overall, and maybe libs/methods.)
Python web-api & C++ calculations
web api;c++11;async;python 3.x;multiprocessing
null
_unix.195030
I have the following script. When it runs, it prints the start and end times to the file result.txt. However, I also want to record the total runtime (end-start), but where I do echo runtime at the end, it just writes "runtime:" with nothing recorded next to it. Am I doing something incorrect?

#!/bin/bash
clear
echo "Test 001" > result.txt
echo "start time: $(date +%T)" >> result.txt
start=`date +%s`

#DO STUFF HERE

end=`date +%s`
echo "end time: $(date +%T)" >> result.txt
runtime=$((end-start))
echo "runtime: $(runtime)" >> result.txt
echo "- - -"
recording total runtime in bash script
bash;debian;shell script
First of all, you are reinventing the wheel. That's what the time command is for:

$ time script.sh

real    0m0.005s
user    0m0.000s
sys     0m0.004s

Then, you have a syntax error:

echo "runtime: $(runtime)" >> result.txt

The $(foo) syntax is command substitution; it will try to run foo. What you meant was

echo "runtime: $runtime" >> result.txt

By the way, you should always include the error messages you get in your question.
_unix.198958
I know how to gunzip a file to a selected location. But when it comes to utilizing all CPU power, many consider pigz instead of gzip. So, the question is: how do I unpigz (and untar) a *.tar.gz file to a specific directory?
unpigz (and untar) to a specific directory
tar;compression;gzip;multithreading
I found three solutions:

With GNU tar, using the awesome -I option:

tar -I pigz -xvf /path/to/archive.tar.gz -C /where/to/unpack/it/

With a lot of Linux piping (for those who prefer a more geeky look):

unpigz < /path/to/archive.tar.gz | tar -xvC /where/to/unpack/it/

More portable (to other tar implementations):

unpigz < /path/to/archive.tar.gz | (cd /where/to/unpack/it/ && tar xvf -)

(You can also replace tar xvf - with pax -r to make it POSIX-compliant, though not necessarily more portable on Linux-based systems.)

Credits go to @PSkocik for a proper direction, @Stéphane Chazelas for the 3rd variant, and to the author of this answer.
_webapps.23284
Lately I have been having an issue: whenever I close an open tab in Google Chrome which has a YouTube video open in it, the browser crashes. The browser just freezes and then a window pops up saying "Google Chrome has stopped responding". I've heard that sometimes Adobe Flash Player can mess things up, so I tried uninstalling it, but that didn't fix it. This only started happening a few days ago and I didn't change anything, except that now I am trying the beta version of Google Chrome to see if it is fixed.

General info:

Windows 7 64 Bit
Latest Google Chrome Beta version

If anyone knows the issue, your help would be greatly appreciated. (Caches and history were already cleared.)
Google Chrome crashes every time I close a tab with a YouTube video open
youtube;google chrome
Okay, I have done some research and found that this is a known issue; the Google team was able to reproduce it and is working on a fix. See this Help Forum article. In case you don't want to wait for the fix to land in the normal Google Chrome build, you can try Google Canary. It fixed the problem for me! Google Canary operates as a secondary browser, so you can have Google Chrome and Google Canary both installed and running at the same time; it will not overwrite the old one like Google Chrome Beta does.

Edit: It's awesome to see the YouTube/Chrome team working to fix the issue! The issue has now been fixed, according to another answer they made on the Help Forums: Crash Fixed Help Forum Link.
_cs.70493
A function $T: \mathbb{N} \rightarrow \mathbb{N}$ is time-constructible if there exists a Turing machine $M$ which computes $1^{T(n)}$ on input $1^{n}$ in $T(n)$ time. Let $T_1$ and $T_2$ be two time-constructible functions in accordance with the definition above. I'm unable to prove the following: $T_1(n)^{T_2(n)}$ is time-constructible.

My approach: I thought of constructing $T_1(n)^k$ from $T_1(n)^{k-1}$. Given $T_1(n)$ and $T_1(n)^{k-1}$, I know I can multiply them in time $T_1(n)^k$. Hence the net time to compute $T_1(n)^{T_2(n)}$ by my construction is $T_1(n) + T_1(n)^2 + \dots + T_1(n)^{T_2(n)} > T_1(n)^{T_2(n)}$.

To print $T_1(n)^{T_2(n)}$ in time $T_1(n)^{T_2(n)}$ I would need to print a 1 on the output tape at every time step, which intuitively sounds impossible to me.
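For what it's worth, the sum in the approach above is a geometric series, so the bound can be tightened: for $T_1(n) \ge 2$,
$$\sum_{k=1}^{T_2(n)} T_1(n)^k \;=\; \frac{T_1(n)\left(T_1(n)^{T_2(n)}-1\right)}{T_1(n)-1} \;\le\; 2\,T_1(n)^{T_2(n)},$$
i.e. the iterated-multiplication construction runs in $O(T_1(n)^{T_2(n)})$ time, not asymptotically more. Whether this settles the question depends on whether the definition in use demands exactly $T(n)$ steps or only $O(T(n))$ - a detail worth checking against the definition you were given.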
Proving that a time-constructible function raised to the power of a time-constructible function is time-constructible
turing machines;time complexity
null
_unix.337169
How do I pass a program's command-line parameters when using gksu? I have a program that takes parameters:

sudo myprog --datzload --maximize

But then I get an error "IBus error .... owner is not root". Searching tells me I should be using gksu, but then gksu takes the parameters meant for myprog as parameters for itself, says --datzload is not a command, and then shows the help page. I'm kinda in a loop here. So, how do I use gksu with myprog, or should I just continue to use sudo and ignore the IBus error?
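For reference, gksu traditionally takes the whole command line as a single argument, so quoting is the usual workaround (a sketch; myprog and its flags are the asker's):

gksu "myprog --datzload --maximize"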
Using gksu with command-line parameters
ibus;gksu
null
_softwareengineering.331682
I'm planning on using a message queue for communication between a game engine and game server. This should allow me to write both without direct dependencies on each other.

The example that I'll use in this question is a DisconnectMessage. A server can send a DisconnectMessage to the engine if, for example, the client has exceeded the timeout value and has not responded to a ping request. The engine can also send a DisconnectMessage to the server if, for example, a server operator issues a kick command for a player. In both of these cases, the player is saved to the game's player repository.

So what I have at the moment is a server, a game engine, and two message queues (one for incoming, one for outgoing). For now, what I'd like to have is one instance of a type-safe message handler for each message type. This is where the problem is for me, as I am unable to get a specific handler for a generic message. The code I have is something like:

public interface GameMessage {}

public interface GameMessageHandler<T extends GameMessage> {
    public void handle(T message);
}

public class DisconnectMessage implements GameMessage {
    // ...
}

public class DisconnectMessageHandler implements GameMessageHandler<DisconnectMessage> {
    @Override
    public void handle(DisconnectMessage message) {
        // ... something something
    }
}

public class GameEngine implements Runnable {
    public GameEngine(Queue<GameMessage> in, Queue<GameMessage> out) {
        // ...
    }

    @Override
    public void run() {
        if (!in.empty()) {
            GameMessage message = in.poll();
            handle(message);
        }
    }
}

The current method I have (that does not work) is as follows:

// in GameEngine
public <T extends GameMessage> void handle(T message) {
    GameMessageHandler<T> handler = getHandler(message.getClass());
    handler.handle(message);
}

public <M extends Message, H extends GameMessageHandler<M>> H getHandler(Class<M> messageClass) {
    // get handler somehow from a dictionary
}

However, Java's type system does not allow me to achieve this in this manner. Is there a way that I can get a concrete message handler from the base interface class? Or, perhaps a different question that could change the answer: is there a different/better way than using a message queue to prevent a circular dependency?
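A common workaround for the typing problem (a sketch, not the asker's final design) is to key the registry by the message's Class object and confine the unavoidable unchecked cast to one place; the register method keeps each pairing type-safe at insertion:

private final Map<Class<? extends GameMessage>, GameMessageHandler<? extends GameMessage>> handlers = new HashMap<>();

public <T extends GameMessage> void register(Class<T> type, GameMessageHandler<T> handler) {
    handlers.put(type, handler); // only a matching (type, handler) pair can be inserted
}

@SuppressWarnings("unchecked")
public <T extends GameMessage> void handle(T message) {
    // safe in practice: register() guaranteed the handler matches the message class
    GameMessageHandler<T> handler = (GameMessageHandler<T>) handlers.get(message.getClass());
    if (handler != null) {
        handler.handle(message);
    }
}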
Message queue between server and engine
java;design patterns;object oriented design
null
_unix.311347
Unlike other crontab-related questions on the site, the PATH and BASH are correctly configured. The entry to run the script is:

0 14 * * * /absolute/path/to/script.sh

We also tried to put in a non-bash command to see if it's executed:

43 14 * * * /bin/echo hallo > /var/log/cron_check.log

But this doesn't work either. The syslog shows that the command is being processed by crontab:

(root) CMD (/absolute/path/to/script.sh)

However, I noticed something odd in the output of ps aux | grep cron. I have no idea what this is about. I also tried the exact setup on different machines with the same OS (just different patch states) and there it works perfectly. And yes, the script does work from the command line.

I tried pstree -lp | grep cron as suggested in the comments:

|-cron(26186)---cron(26404)---cron(26405)

Then I tried lsof -p 26405:

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
cron 26405 root cwd DIR 8,2 4096 3407969 /var/spool/cron
cron 26405 root rtd DIR 8,2 4096 2 /
cron 26405 root txt REG 8,2 48752 2696872 /usr/sbin/cron
cron 26405 root mem REG 8,2 57723 1163275 /lib64/libcrypt-2.11.3.so
cron 26405 root mem REG 8,2 98147 1163332 /lib64/libresolv-2.11.3.so
cron 26405 root mem REG 8,2 60736 2722352 /usr/lib64/libnam.so.0.0.0
cron 26405 root mem REG 8,2 47576 2703634 /lib64/libnss_nam.so.0.0.0
cron 26405 root mem REG 8,2 6104 983053 /lib64/security/pam_deny.so
cron 26405 root mem REG 8,2 6192 983090 /lib64/security/pam_warn.so
cron 26405 root mem REG 8,2 10456 983088 /lib64/security/pam_umask.so
cron 26405 root mem REG 8,2 18832 983064 /lib64/security/pam_limits.so
cron 26405 root mem REG 8,2 10392 983067 /lib64/security/pam_loginuid.so
cron 26405 root mem REG 8,2 88752 2703486 /lib64/libz.so.1.2.7
cron 26405 root mem REG 8,2 39496 2692377 /usr/lib64/libcrack.so.2.8.0
cron 26405 root mem REG 8,2 23024 2736459 /lib64/security/pam_pwcheck.so
cron 26405 root mem REG 8,2 22896 2708673 /lib64/libxcrypt.so.2.0.0
cron 26405 root mem REG 8,2 52288 2736460 /lib64/security/pam_unix2.so
cron 26405 root mem REG 8,2 14552 983055 /lib64/security/pam_env.so
cron 26405 root mem REG 8,2 6208 983076 /lib64/security/pam_rootok.so
cron 26405 root mem REG 8,2 61646 1163319 /lib64/libnss_files-2.11.3.so
cron 26405 root mem REG 8,2 52516 1163325 /lib64/libnss_nis-2.11.3.so
cron 26405 root mem REG 8,2 108272 1163313 /lib64/libnsl-2.11.3.so
cron 26405 root mem REG 8,2 38708 1163315 /lib64/libnss_compat-2.11.3.so
cron 26405 root mem REG 8,2 19173 1163303 /lib64/libdl-2.11.3.so
cron 26405 root mem REG 8,2 100936 2703667 /lib64/libaudit.so.0.0.0
cron 26405 root mem REG 8,2 1775524 2703483 /lib64/libc-2.11.3.so
cron 26405 root mem REG 8,2 118080 2703498 /lib64/libselinux.so.1
cron 26405 root mem REG 8,2 14680 2703591 /lib64/libpam_misc.so.0.82.0
cron 26405 root mem REG 8,2 56048 2703589 /lib64/libpam.so.0.83.1
cron 26405 root mem REG 8,2 155179 2703510 /lib64/ld-2.11.3.so
cron 26405 root 0r FIFO 0,8 0t0 1624205722 pipe
cron 26405 root 1w FIFO 0,8 0t0 1624205723 pipe
cron 26405 root 2w FIFO 0,8 0t0 1624205723 pipe
cron 26405 root 3u unix 0xffff88000cf9eb40 0t0 1624201568 socket
Crontab doesn't execute bash
bash;cron
null
_unix.270675
I operated a Debian server for a while, permitting only publickey-based access. Unfortunately I lost the private key (reinstall without backup), and now I am not able to log into the system anymore. I am able to access the hard drive, so I can modify data on the hard disk. I already reverted the sshd_config file, but I still get this error when trying to log in via SSH:

Authentications that can continue: publickey

What do I have to enable in the sshd_config file in order to enable password-based access again?

sshd_config:

Port 234
Protocol 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
UsePrivilegeSeparation yes
KeyRegenerationInterval 3600
ServerKeyBits 768
SyslogFacility AUTH
LogLevel INFO
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
IgnoreRhosts yes
RhostsRSAAuthentication no
HostbasedAuthentication no
PermitEmptyPasswords yes
ChallengeResponseAuthentication no
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
Banner /etc/issue.net
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM no
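For reference, the directive missing from the config above is PasswordAuthentication (a sketch; OpenSSH's compiled-in default is yes, but setting it explicitly rules out overrides elsewhere):

PasswordAuthentication yes

followed by restarting the daemon, e.g. service ssh restart on Debian. Note the account must also have a valid, unlocked password for this to succeed.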
Revert ssh from publickey to password
ssh
null
_softwareengineering.293941
The title says it all, basically. Are there any deterministic compression algorithms - that is, an algorithm which, given identical input, will always produce identical output? As far as I know, all widely-used compression algorithms are adaptive and will vary their output based on whatever heuristic they happen to be using at the moment.
Any Deterministic Compression Algorithms out There?
algorithms
null
_unix.343591
When monitoring disk IO, most of the IO is attributed to jbd2, while the original process that caused the high IO is shown with a much lower IO percentage. Why? Here's iotop's example output (other processes with IO < 1% omitted):
Why is most of the disk IO attributed to jbd2 and not to the process that is actually doing the IO?
process;io;disk
jbd2 is a kernel thread that updates the filesystem journal.

Tracing filesystem or disk activity back to the process that caused it is difficult because the activities of many processes are combined together. For example, if two processes are reading from the same file at the same time, which process would the read be accounted against? If two processes write to the same directory and the directory is updated on disk only once (combining the two operations), which process would the write be accounted against?

In your case, it appears that most of the traffic consists of updates to the journal. This is traced to the journal updater, but there's no tracing between journal updates and the process(es) that caused the write operation(s) which required this journal update.
_codereview.167012
Let's say we have a class SomeInt which holds just one single final int value:

public class SomeInt
{
    private final int value;

    public SomeInt(int value)
    {
        this.value = value;
    }

    public int getValue()
    {
        return this.value;
    }
}

Option A - Overriding equals and hashCode:

public class SomeInt
{
    private final int value;

    public SomeInt(int value)
    {
        this.value = value;
    }

    public int getValue()
    {
        return this.value;
    }

    @Override
    public boolean equals(Object other)
    {
        if (!(other instanceof SomeInt)) {
            return false;
        }
        return ((SomeInt) other).hashCode() == hashCode();
    }

    @Override
    public int hashCode()
    {
        return this.value;
    }
}

Option B - Providing only one Instance per Value:

import java.util.Map;
import java.util.HashMap;
import java.lang.ref.WeakReference;

public class SomeInt
{
    private static final Map<Integer, WeakReference<SomeInt>> INSTANCES = new HashMap<>();

    private final int value;

    private SomeInt(int value)
    {
        this.value = value;
    }

    public synchronized static SomeInt of(int value)
    {
        WeakReference<SomeInt> weakRef = INSTANCES.get(value);
        SomeInt instance = null;
        if (weakRef != null) {
            // keep reference before asking isEnqueued to ensure
            // it's not getting garbage collected between the calls
            instance = weakRef.get();
        }
        if (weakRef == null || instance == null || weakRef.isEnqueued()) {
            instance = new SomeInt(value);
            INSTANCES.put(value, new WeakReference<>(instance));
        }
        return instance;
    }

    public int getValue()
    {
        return this.value;
    }
}

Pros of A:
- If the Object holds references to other complex types the implementation is much easier and will be easy to customize when something changes
- Can also be used with mutable Objects

Cons of A:
- There may be many Objects with the same information

Pros of B:
- There will always be only one Object holding this information
- == comparison possible since equality is guaranteed by the generating method

Cons of B:
- Can only be used with immutable Objects

Question: What is the better approach to ensure comparisons and occurrences in Maps and Collections don't fail?
Overriding equals and hashCode vs providing only a single instance for immutable objects
java
Your approach is flawed in both ways. I see some fundamental misunderstanding of equals, hashCode and the == operator.

To Option A

"Pros of A ... Can also be used with mutable Objects"

Overriding hashCode and equals on mutable objects will lead to inaccessible elements within hash-based data structures like HashMap or HashSet. Always make sure your objects are immutable when overriding these methods.

Furthermore your implementation of equals is semantically wrong, because you make equals dependent on hashCode only. Maybe you can pre-check the hash code to avoid a complex equals evaluation if the hash codes aren't equal, but equality itself must not rest on the hash code alone.

hashCode has a totally different purpose. It provides a value that is used in hash-based data structures to balance lookup tables, to increase lookup performance and minimize the binary search path.

You may say that Integer.hashCode(int i) always returns i. Yes, but then you depend on implementation details for a totally different semantic.

A correct implementation for your SomeInt:

public class SomeInt {

    private final int value;

    public SomeInt(int value) {
        this.value = value;
    }

    public int getValue() {
        return this.value;
    }

    @Override
    public boolean equals(Object object) {
        boolean equals = false;
        if (object instanceof SomeInt) {
            SomeInt that = (SomeInt) object;
            equals = this.value == that.value;
        }
        return equals;
    }

    @Override
    public int hashCode() {
        return this.value;
    }
}

To Option B

Yes, you can do so. But you achieve the same with a correct hashCode/equals implementation on immutable objects, with less overhead.

Logical path:
1. If you override equals you have to override hashCode.
2. hashCode is used within hash-based data structures to put objects into buckets.
3. If you change the value hashCode depends on AND you have put the object into a hash-based data structure before, it is very probable that you will never find this object again.
4. Therefore values that are used to generate the hashCode must not change if you want to use the objects within hash-based data structures.
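A quick, language-agnostic illustration of point 3 above (the "lost element" failure mode), sketched here in Python purely for brevity; the Box class is hypothetical, and the same thing happens with a Java HashSet:

    class Box:
        """A key whose hash depends on a mutable field -- exactly the bug described above."""
        def __init__(self, value):
            self.value = value

        def __eq__(self, other):
            return isinstance(other, Box) and self.value == other.value

        def __hash__(self):
            return hash(self.value)

    b = Box(1)
    s = {b}             # stored in the bucket for hash(1)
    b.value = 2         # mutate the field the hash depends on
    print(b in s)       # False: lookup now probes the bucket for hash(2)
    print(Box(1) in s)  # False too: right bucket, but the equality check no longer matches

The element is still in the set (it shows up when iterating), yet no lookup can reach it anymore.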
_computergraphics.4969
I'm in the process of working out how to pack all the information I need for a Physically Based Deferred Renderer into a G-Buffer without using an obscene amount of render targets. What I have so far is 4 three-part vectors:

- Albedo/Diffuse
- Normal
- Tangent
- Position

And 4 single components:

- Metallic
- Roughness
- Height
- Ambient Occlusion

A naive approach is to bundle one of the single components into the alpha (fourth) channel with one of the 3-part vectors, which is my current line of investigation. However, given that four 4-channel full precision floating point render targets isn't small, I understand it's common to use half precision and even smaller representations to be more memory conscious. What I'm asking is: which components can I safely cut precision down on without losing quality, and by how much?
How much precision do I need in my G-Buffer?
optimisation;deferred rendering
First of all, you don't need position in the G-buffer at all. The position of a pixel can be reconstructed from the depth buffer, knowing the camera setup and the pixel's screen-space xy position. So you can get rid of that whole buffer.

Also, you don't ordinarily need tangent vectors in the G-buffer either. They're only needed for converting normal maps from tangent space, and for parallax mapping; these would be done during the G-buffer fill pass (when you have tangents from the mesh you're rendering), and the G-buffer would only store normals in world or view space.

Material properties like colors, roughness, and metallic are usually just 8-bit values in the G-buffer, since they're sourced from 8-bit textures. Same for AO.

Height is also not needed in the G-buffer unless you're going to be doing some kind of multi-pass blending that depends on it, but if you do need it, 8 bits is probably enough for that too.

Normals can benefit from being stored as 16-bit values rather than 8-bit. Half-float is okay, but 16-bit fixed-point is even better, as it gives you more uniform precision across all orientations (half-float is more precise near the axes and loses some precision away from them). Moreover, you can cut them from 3 components down to 2 using octahedral mapping.

So, at the end of the day, a minimal G-buffer might look like:

- Material color + metallic: RGBA8
- Octahedral world-space normal + roughness + AO: RGBA16

and that's all! Only 12 bytes per pixel.

Alternatively, you could use an RG16 buffer for the normals, and move roughness + AO into a separate 8-bit buffer. That would give you some room to grow should you eventually need more G-buffer components of either 8-bit or 16-bit sizes.
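To make the octahedral mapping concrete, here is a minimal sketch of the encode/decode pair (Python/NumPy purely for illustration; in a real renderer this runs in shader code, and the function names here are made up):

    import numpy as np

    def oct_encode(n):
        """Map a unit normal (x, y, z) to two components in [-1, 1]."""
        n = n / np.abs(n).sum()               # project onto the octahedron (L1 normalize)
        x, y, z = n
        if z < 0:                             # fold the lower hemisphere over
            sx = 1.0 if x >= 0 else -1.0
            sy = 1.0 if y >= 0 else -1.0
            x, y = (1 - abs(y)) * sx, (1 - abs(x)) * sy
        return np.array([x, y])

    def oct_decode(e):
        """Inverse: two stored components back to a unit normal."""
        x, y = e
        z = 1 - abs(x) - abs(y)
        if z < 0:                             # unfold the lower hemisphere
            sx = 1.0 if x >= 0 else -1.0
            sy = 1.0 if y >= 0 else -1.0
            x, y = (1 - abs(y)) * sx, (1 - abs(x)) * sy
        v = np.array([x, y, z])
        return v / np.linalg.norm(v)

    n = np.array([0.3, -0.5, 0.8]); n /= np.linalg.norm(n)
    print(np.allclose(oct_decode(oct_encode(n)), n))   # True (up to float error)

The two encoded components are then quantized to the 16-bit fixed-point channels mentioned above.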
_webmaster.6425
Possible Duplicate: How to find web hosting that meets my requirements?

I'd like to be able to have a very lightweight Windows Service call some code on the Web Server at regular intervals. Do I have any options besides a Dedicated/Semi-Dedicated Server?
Are there any Shared Web Hosts that provide access to run Windows Services?
web hosting;looking for hosting;windows
null
_unix.346039
The Fuse packages that are available by default on CentOS 7.3 are a bit dated. The compilation process for Fuse 3 and s3fs should be pretty straightforward. Fuse compiles and installs fine:

mkdir ~/src && cd src
# Most recent version: https://github.com/libfuse/libfuse/releases
wget https://github.com/libfuse/libfuse/releases/download/fuse-3.0.0/fuse-3.0.0.tar.gz
tar xvf fuse-3.0.0.tar.gz && cd fuse-3.0.0
./configure --prefix=/usr
make
make install
export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64
ldconfig
modprobe fuse
pkg-config modversion fuse

No problems there... Things show up where they should, it seems:

$ ls /usr/lib:
libfuse3.a libfuse3.la libfuse3.so libfuse3.so.3 libfuse3.so.3.0.0 pkgconfig udev

$ ls /usr/local/lib/pkgconfig/:
fuse3.pc

$ which fusermount3:
/usr/bin/fusermount3

So I proceed to install s3fs:

cd ~/src
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure --prefix=/usr

And then every time, I hit this:

...
configure: error: Package requirements (fuse >= 2.8.4 libcurl >= 7.0 libxml-2.0 >= 2.6) were not met:

No package 'fuse' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables common_lib_checking_CFLAGS
and common_lib_checking_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

Any idea why s3fs is not finding Fuse properly?
s3fs refuses to compile on CentOS 7, why's it not finding Fuse?
centos;compiling;configure;fuse;s3fs
Version 1.8 of s3fs doesn't support fuse3. I learnt it the rather hard way.

I edited the s3fs configure script to replace fuse with fuse3 in the version check. The configure script went well after that. However, the s3fs compilation then fails with errors around incompatibility with the fuse functions used (I don't have the exact compilation error; I didn't save it).

I ended up installing fuse 2.9.x and the s3fs installation went well.
_webapps.2270
Any recommendations for web apps for sending bulk SMS to several mobile numbers?
Web app to send Bulk SMS
webapp rec;sms
Clickatell and Commzgate support bulk messages; both have easy-to-use web APIs as well. Both these services are quite expensive though. If most of the SMSes you are sending are local, you can set up an SMS gateway (Ozeki, Visualtron etc.), attach a GSM modem and send SMSes using a purchased SIM card. Remember to test using a prepaid SIM to avoid running up a large phone bill in case the automated system malfunctions.
_unix.38909
As we know, a very basic software engineering principle is loose coupling. But we also know that programs in UNIX-like OSs are extremely coupled. How can this be explained/justified? By "extremely coupled" I mean the large number of dependencies between programs: even when you want to install a simple application you have to consider a lot of dependencies (as you see in the package manager), and sometimes you are even unable to update a program because it would break some dependent programs. Indeed, stand-alone software is rare in the beautiful world of Linux (compared with other OSs).
How can the extreme coupling in UNIX-like OSs be acceptable?
application
null
_unix.355721
I am using KDE 5 with Yakuake as my drop-down terminal, on Kubuntu. I am now encountering a documented bug, which is that the separators between multiple terminals only show on a black background. Screenshots attached:

My diagnosis here is that the color profile configuration options for some reason don't affect the color of the terminal separator. (This used to be the case in KDE 4.) I've tried changing skins, editing my Shell profile, etc. It now seems to me that the color of the separator line must surely be hardcoded somewhere, and that I could change this color. Does anyone know where I can find this?
Configure Yakuake Terminal Separator Color?
terminal;kde;colors;debugging;kubuntu
null
_unix.115998
I have several columns of data. I always have the same number of rows (say 5). In the 2nd column, I want to multiply the first value by 5, then the second value by 4, the third value by 3, etc. I then want to sum these values, and divide by the sum of the values in the second column. How would I do this in sed and/or awk?

Example:

4 5 7 1 2 3
5 1 2 3 1 2
4 2 3 6 1 2
3 4 1 6 3 3
2 3 1 2 1 6

Answer: (5*5 + 4*1 + 3*2 + 2*4 + 1*3)/(5 + 1 + 2 + 4 + 3) = 3.067
How do I multiply and sum column data using awk and/or sed?
sed;awk
Replace 6 with total (number of lines + 1) if needed:

awk '{mult+=$2*(6-NR); sum+=$2;} END {print mult/sum;}' yourfile.txt

Displays: 3.06667
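If the line count isn't known in advance, the same weighted ratio can be computed by reading the column first; here is a minimal sketch of the equivalent logic in Python (the file name is a placeholder):

    def weighted_ratio(path, col=2):
        """Weight the first line by n, the next by n-1, ..., then divide by the column sum."""
        values = [float(line.split()[col - 1]) for line in open(path)]
        n = len(values)
        return sum(v * (n - i) for i, v in enumerate(values)) / sum(values)

    print(weighted_ratio('yourfile.txt'))   # 3.0666... for the sample data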
_unix.150916
So doing something like this in bash and most other shells won't work to create multiple subdirectories or files within subdirectories...

mkdir */test
touch */hello.txt

There are of course many ways of actually doing this; my preferred one is to use find when possible rather than a for loop, mainly for readability.

But my question is: why do the above not work?

From what I understand, it's because the full destination file/path does not exist, but surely that's a good thing if I'm trying to mkdir or touch. I've always just moved on and never really questioned it.

But does anyone have a decent explanation for this that will help me understand once and for all?
bash/shell pathname expansion for mkdir, touch etc?
shell;wildcards
null
_unix.148270
I'm running Ubuntu 14.04 on a machine that has a lot of hard drives plugged into it. These hard drives have partitions with old OSs which hold a lot of key data that I use often. The problem is, I have 2 partitions with the same name: Main Drive and Main Drive. To differentiate between them, Ubuntu renames one drive to Main Drive1, while keeping the other just Main Drive. The problem is, every time I restart Ubuntu, it chooses randomly which partition to rename. As a result, any bookmarks or directories that I have in those partitions do not work and have to be reconfigured every time I reboot.

Are there any solutions to this problem?
Two HDD partitions with the same name result in uncertain directories
ubuntu;partition;rename;reboot
null
_unix.35681
I just accidentally scrubbed all the partitions from the wrong disk.

/dev/sda is the boot disk, and /dev/sdb is a new disk I am trying to set up as a RAID mirror. I accidentally fat-fingered it, and wound up deleting the partition table on /dev/sda rather than /dev/sdb.

The system is still up and running, so it's running off a cached partition table somewhere.

Can I recover the partition table, or at least view it, so I can recreate the partitions exactly where they were?

fdisk /dev/sda -l yields no partitions.

Yeah, I feel clever.
Accidentally deleted the partitions on my boot disk. The system is still running. How can I recover?
linux;partition;data recovery
The kernel keeps the partition table in cache permanently (unless explicitly told to reload, and that can't be done if some of the partitions are in use). So you're safe until you reboot (or tell the kernel to operate on data that doesn't reflect the true disk contents; for example, if you've already activated mdraid, it might have written its metadata on the disk already).

If you have an up-to-date backup of your boot sector (the first 512 bytes), you can restore it (cat boot-sector-backup >/dev/sda; do check that the size of the file you're restoring is exactly 512 bytes). Your bootloader installation may have created a boot sector backup, but if it's been upgraded or you've repartitioned since then, it won't be up-to-date. Do not restore a backup that may be obsolete.

The kernel's information about the partitions is accessible through /sys/class/block/sda/sda*. In the directory for each partition (sda1, sda2, etc.):

- start contains the offset of the beginning of the partition, in 512-byte sectors.
- size contains the size of the partition, in 512-byte sectors (except for the extended partition).

If you have partitions numbered 5 or above, they are logical partitions (see What is the difference between extended partition and logical partition), contained inside an extended partition. There is a single extended partition (or none), and it is one of the partitions 1-4. The file size does not contain the size of the extended partition, so you first need to determine that; it must be large enough for all logical partitions to fit, and must not encompass any primary partitions (the other partitions numbered 1-4).

Run fdisk /dev/sda. Use u to switch the unit to sectors. Create the partitions (n) with the right offset and size (as the prompt says, put + before the number of sectors when it comes to the size), starting with the extended partition.

Use p to check that the partition table looks right. If some of these partitions are not Linux data partitions, use t to set their type (82 for Linux swap, c for a Windows FAT32 partition, 7 for a Windows NTFS partition). If you have a bootable DOS/Windows partition, set its bootable flag (a).

Double-check that the output looks good, then press w to commit the new table to disk.

Save the contents of /sys/class/block/sda/ in a tar archive on a USB stick. Then reboot from a removable medium. After rebooting, if the partition table you created is not correct, you risk massive data corruption. So from the removable medium, run fsck -n (don't forget the -n) to check the consistency of the filesystems on each partition. Don't use mount, which would only work if the offset was correct, and which could damage the disk (even in read-only mode, because it would write the journal) if the offset was correct but not the size.

If fsck finds no filesystem, you got the offset of a partition wrong. If it reports errors, chances are you got the size of the partition wrong. As long as you haven't written to the disk, you can still fix the partition table. When you have no partition from the disk mounted, pressing w in fdisk will make the kernel re-read the partition table. Once you have your partitions right, you should be able to reboot into your normal system and continue as usual.
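To save retyping the start/size values, a small script can dump them all at once; here is a minimal sketch (Python purely for convenience, assuming the damaged disk is sda and that you run it before rebooting, while the cached table is still alive):

    import glob, os

    # Read the kernel's cached partition geometry before it is lost on reboot.
    for part in sorted(glob.glob('/sys/class/block/sda/sda*')):
        with open(os.path.join(part, 'start')) as f:
            start = int(f.read())
        with open(os.path.join(part, 'size')) as f:
            size = int(f.read())
        print('%s: start=%d sectors, size=%d sectors' % (os.path.basename(part), start, size))

Redirect its output to the same USB stick as the tar archive, so the numbers survive the reboot in human-readable form.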
_unix.36068
I'm using Xfce 4.3 and the lightweight Thunar file manager. I often change file listings between Sort by Date Modified and Sort by Name. But it seems the only way to do this is by clicking the column header with the mouse, which slows down productivity.

Is there any way to activate these sort mechanisms using the keyboard? They are not listed in any of the menus.
Thunar file manager: Sort by column keyboard shortcut?
xfce;thunar
Try following the steps in this FAQ entry.
_webmaster.55151
Let's say I have a website with a frontend and a backend. The website allows users to upload data ranging from hundreds of MB to GB. How do you upload this data effectively?

The easiest way would be to upload it from the client/browser to the frontend server, then send it via an API or something to the backend server which has the data storage attached, where we'll save the data. This would, however, run really slowly, because the upload effectively happens twice.

Another way that occurred to me would be to send the data directly from JavaScript running inside the browser to the backend's API. This can be inappropriate when I don't want the backend/backend's API to be accessible to the public. It's also not good for the architecture (if you have a frontend and a backend, you probably don't want to communicate from the client to the backend).

So, do you have any ideas? Or is there some kind of general way to do this effectively? Does it involve CDNs? Or should I store the data in a database as base64 strings if it'd only be a few hundred MB per file?
Effective upload to backend?
php;javascript;content;data;uploading
null
_codereview.75089
We have been working on a mutation analysis tool for Haskell tests called MuCheck. It accepts any Haskell source file, and a function name to mutate, applies a defined set of mutation operators on it, and runs the specified test suite on it. The mutation of code is accomplished using the haskell-src-ext library, and SYB functions. I would like help on how to make this code better. I would really appreciate comments on improving the clarity, and also better ways of doing things.I have run it through hlint, and have accepted most of its recommendations.Our project along with unit tests and run instructions is here.The library entry-- | MuCheck base modulemodule Test.MuCheck (mucheck) whereimport System.Environment (getArgs, withArgs)import Control.Monad (void)import Test.MuCheck.MuOpimport Test.MuCheck.Configimport Test.MuCheck.Mutationimport Test.MuCheck.Operatorsimport Test.MuCheck.Utils.Commonimport Test.MuCheck.Utils.Printimport Test.MuCheck.Interpreter (mutantCheckSummary)import Test.MuCheck.TestAdapter-- | Perform mutation analysismucheck :: (Summarizable a, Show a) => ([String] -> [InterpreterOutput a] -> TSum) -> String -> FilePath -> String -> [String] -> IO ()mucheck resFn mFn file modulename args = do numMutants <- genMutants mFn file let muts = take numMutants $ genFileNames file void $ mutantCheckSummary resFn muts modulename args (./mucheck- ++ mFn ++ .log)Configuration Options-- | Configuration modulemodule Test.MuCheck.Config whereimport Test.MuCheck.MuOpimport Test.MuCheck.Operators (allOps)data GenerationMode = FirstOrderOnly | FirstAndHigherOrder deriving (Eq, Show)data Config = Config {-- | Mutation operators on operator or function replacement muOps :: [MuOp]-- | Mutate pattern matches for functions?-- , doNegateGuards :: Rational-- | Maximum number of mutants to generate. , maxNumMutants :: Int-- | Generation mode, can be traditional (firstOrder) and-- higher order (higher order is experimental) , genMode :: GenerationMode } deriving Show-- | The default configurationdefaultConfig :: ConfigdefaultConfig = Config {muOps = allOps , doMutatePatternMatches = 1.0 , doMutateValues = 1.0 , doNegateIfElse = 1.0 , doNegateGuards = 1.0 , maxNumMutants = 300 , genMode = FirstOrderOnly }Interpreter for mutants{-# LANGUAGE StandaloneDeriving, DeriveDataTypeable #-}-- | The entry point for mucheckmodule Test.MuCheck.Interpreter (mutantCheckSummary) whereimport qualified Language.Haskell.Interpreter as Iimport Control.Monad.Trans ( liftIO )import Data.Typeableimport Test.MuCheck.Utils.Print (showA, showAS, (./.))import Data.Either (partitionEithers, rights)import Data.List(groupBy, sortBy)import Data.Function (on)import Test.MuCheck.TestAdapter-- | Given the list of tests suites to check, run one test suite at a time on-- all mutants.mutantCheckSummary :: (Summarizable a, Show a) => ([String] -> [InterpreterOutput a] -> TSum) -> [String] -> String -> [String] -> FilePath -> IO ()mutantCheckSummary testSummaryFn mutantFiles topModule evalSrcLst logFile = do results <- mapM (runCodeOnMutants mutantFiles topModule) evalSrcLst let singleTestSummaries = zip evalSrcLst $ map (testSummaryFn mutantFiles) results tssum = multipleCheckSummary (isSuccess . 
snd) results -- print results to terminal putStrLn $ delim ++ Overall Results: putStrLn $ terminalSummary tssum putStrLn $ showAS $ map showBrief singleTestSummaries putStr delim -- print results to logfile appendFile logFile $ OVERALL RESULTS:\n ++ tssum_log tssum ++ showAS (map showDetail singleTestSummaries) return () where showDetail (method, msum) = delim ++ showBrief (method, msum) ++ \n ++ tsum_log msum showBrief (method, msum) = showAS [method, \tTotal number of mutants:\t ++ show (tsum_numMutants msum), \tFailed to Load:\t ++ (cpx tsum_loadError), \tNot Killed:\t ++ (cpx tsum_notKilled), \tKilled:\t ++ (cpx tsum_killed), \tOthers:\t ++ (cpx tsum_others), ] where cpx fn = show (fn msum) ++ ++ (fn msum) ./. (tsum_numMutants msum) terminalSummary tssum = showAS [ Total number of mutants:\t ++ show (tssum_numMutants tssum), Total number of alive mutants:\t ++ (cpx tssum_alive), Total number of load errors:\t ++ (cpx tssum_errors), ] where cpx fn = show (fn tssum) ++ ++ (fn tssum) ./. (tssum_numMutants tssum) delim = \n ++ replicate 25 '=' ++ \n-- | Run one test suite on all mutantsrunCodeOnMutants mutantFiles topModule evalStr = mapM (evalMyStr evalStr) mutantFiles where evalMyStr evalStr file = do putStrLn $ > ++ : ++ file ++ : ++ topModule ++ : ++ evalStr I.runInterpreter (evalMethod file topModule evalStr)-- | Given the filename, modulename, test to evaluate, evaluate, and return result as a pair.---- > t = I.runInterpreter (evalMethod-- > Examples/QuickCheckTest.hs-- > Examples.QuickCheckTest-- > quickCheckResult idEmp)evalMethod :: (I.MonadInterpreter m, Typeable t) => String -> String -> String -> m (String, t)evalMethod fileName topModule evalStr = do I.loadModules [fileName] I.setTopLevelModules [topModule] result <- I.interpret evalStr (I.as :: (Typeable a => IO a)) >>= liftIO return (fileName, result)-- | Datatype to hold results of the entire rundata TSSum = TSSum {tssum_numMutants::Int, tssum_alive::Int, tssum_errors::Int, tssum_log::String}-- | Summarize the entire runmultipleCheckSummary isSuccessFunction results -- we assume that checking each prop results in the same number of errorCases and executedCases | not (checkLength results) = error Output lengths differ for some properties. | otherwise = TSSum {tssum_numMutants = countMutants, tssum_alive = countAlive, tssum_errors= countErrors, tssum_log = logMsg} where executedCases = groupBy ((==) `on` fst) . sortBy (compare `on` fst) . rights $ concat results allSuccesses = [rs | rs <- executedCases, length rs == length results, all isSuccessFunction rs] countAlive = length allSuccesses countErrors = countMutants - length executedCases logMsg = showA allSuccesses checkLength results = and $ map ((==countMutants) . length) results ++ map ((==countExecutedCases) . length) executedCases countExecutedCases = length . head $ executedCases countMutants = length . 
head $ resultsMutation of code{-# LANGUAGE ImpredicativeTypes #-}-- | Mutation happens here.module Test.MuCheck.Mutation whereimport Language.Haskell.Exts(Literal(Int), Exp(App, Var, If), QName(UnQual), Stmt(Qualifier), Module(Module), ModuleName(..), Name(Ident, Symbol), Decl(FunBind, PatBind), Match, Pat(PVar), Match(Match), GuardedRhs(GuardedRhs), prettyPrint, fromParseResult, parseFileContents)import Data.Maybe (fromJust)import Data.Generics (GenericQ, mkQ, Data, Typeable, mkMp, listify)import Data.List(nub, (\\), permutations)import Control.Monad (liftM, zipWithM)import System.Randomimport Data.Time.Clock.POSIXimport Test.MuCheck.MuOpimport Test.MuCheck.Utils.Sybimport Test.MuCheck.Utils.Commonimport Test.MuCheck.Operatorsimport Test.MuCheck.ConfiggenMutants = genMutantsWith defaultConfiggenMutantsWith args funcname filename = liftM length $ do ast <- getASTFromFile filename g <- liftM (mkStdGen . round) getPOSIXTime let f = getFunc funcname ast ops, swapOps, valOps, ifElseNegOps, guardedBoolNegOps :: [MuOp] ops = relevantOps f (muOps args ++ valOps ++ ifElseNegOps ++ guardedBoolNegOps) swapOps = sampleF g (doMutatePatternMatches args) $ permMatches f ++ removeOnePMatch f valOps = sampleF g (doMutateValues args) $ selectIntOps f ifElseNegOps = sampleF g (doNegateIfElse args) $ selectIfElseBoolNegOps f guardedBoolNegOps = sampleF g (doNegateGuards args) $ selectGuardedBoolNegOps f patternMatchMutants, ifElseNegMutants, guardedNegMutants, operatorMutants, allMutants :: [Decl] allMutants = nub $ patternMatchMutants ++ operatorMutants patternMatchMutants = mutatesN swapOps f fstOrder ifElseNegMutants = mutatesN ifElseNegOps f fstOrder guardedNegMutants = mutatesN guardedBoolNegOps f fstOrder operatorMutants = case genMode args of FirstOrderOnly -> mutatesN ops f fstOrder _ -> mutates ops f getFunc fname ast = head $ listify (isFunctionD fname) ast programMutants ast = map (putDecls ast) $ mylst ast mylst ast = [myfn ast x | x <- take (maxNumMutants args) allMutants] myfn ast fn = replace (getFunc funcname ast,fn) (getDecls ast) case ops ++ swapOps of [] -> return [] -- putStrLn No applicable operator exists! _ -> zipWithM writeFile (genFileNames filename) $ map prettyPrint (programMutants ast) where fstOrder = 1 -- first order getASTFromFile filename = liftM parseModuleFromFile $ readFile filename-- | Mutating a function's code using a bunch of mutation operators-- (In all the three mutate functions, we assume working-- with functions declaration.)mutates :: [MuOp] -> Decl -> [Decl]mutates ops m = filter (/= m) $ concatMap (mutatesN ops m) [1..]-- the third argument specifies whether it's first order or higher ordermutatesN :: [MuOp] -> Decl -> Int -> [Decl]mutatesN ops m 1 = concat [mutate op m | op <- ops ]mutatesN ops m c = concat [mutatesN ops m 1 | m <- mutatesN ops m (c-1)]-- | Given a function, generate all mutants after applying applying -- op once (op might be applied at different places).E.g.:-- op = < ==> > and there are two instances of <mutate :: MuOp -> Decl -> [Decl]mutate op m = once (mkMp' op) m \\ [m]-- | is the parsed expression the function we are looking for?isFunctionD :: String -> Decl -> BoolisFunctionD n (FunBind (Match _ (Ident n') _ _ _ _ : _)) = n == n'isFunctionD n (FunBind (Match _ (Symbol n') _ _ _ _ : _)) = n == n'isFunctionD n (PatBind _ (PVar (Ident n')) _ _) = n == n'isFunctionD _ _ = False-- | Generate all operators for permutating pattern matches in-- a function. 
We don't deal with permutating guards and case for now.permMatches :: Decl -> [MuOp]permMatches d@(FunBind ms) = d ==>* map FunBind (permutations ms \\ [ms])permMatches _ = []-- | generates transformations that removes one pattern match from a function-- definition.removeOnePMatch :: Decl -> [MuOp]removeOnePMatch d@(FunBind [x]) = []removeOnePMatch d@(FunBind ms) = d ==>* map FunBind (removeOneElem ms \\ [ms])removeOnePMatch _ = []-- | generate sub-arrays with one less elementremoveOneElem :: Eq t => [t] -> [[t]]removeOneElem l = choose l (length l - 1)-- AST/module-related operations-- | Parse a module. Input is the content of the fileparseModuleFromFile :: String -> ModuleparseModuleFromFile inp = fromParseResult $ parseFileContents inpgetDecls :: Module -> [Decl]getDecls (Module _ _ _ _ _ _ decls) = declsputDecls :: Module -> [Decl] -> ModuleputDecls (Module a b c d e f _) decls = Module a b c d e f decls-- Define all operations on a valueselectValOps :: (Data a, Eq a, Typeable b, Mutable b, Eq b) => (b -> Bool) -> [b -> b] -> a -> [MuOp]selectValOps pred fs m = concatMap (\x -> x ==>* map (\f -> f x) fs) vals where vals = nub $ listify pred mselectValOps' :: (Data a, Eq a, Typeable b, Mutable b) => (b -> Bool) -> (b -> [b]) -> a -> [MuOp]selectValOps' pred f m = concatMap (\x -> x ==>* f x) vals where vals = listify pred mselectIntOps :: (Data a, Eq a) => a -> [MuOp]selectIntOps m = selectValOps isInt [ \(Int i) -> Int (i + 1), \(Int i) -> Int (i - 1), \(Int i) -> if abs i /= 1 then Int 0 else Int i, \(Int i) -> if abs (i-1) /= 1 then Int 1 else Int i] m where isInt (Int _) = True isInt _ = False-- | negating boolean in if/else statementsselectIfElseBoolNegOps :: (Data a, Eq a) => a -> [MuOp]selectIfElseBoolNegOps m = selectValOps isIf [\(If e1 e2 e3) -> If (App (Var (UnQual (Ident not))) e1) e2 e3] m where isIf If{} = True isIf _ = False-- | negating boolean in GuardsselectGuardedBoolNegOps :: (Data a, Eq a) => a -> [MuOp]selectGuardedBoolNegOps m = selectValOps' isGuardedRhs negateGuardedRhs m where isGuardedRhs GuardedRhs{} = True boolNegate e@(Qualifier (Var (UnQual (Ident otherwise)))) = [e] boolNegate (Qualifier exp) = [Qualifier (App (Var (UnQual (Ident not))) exp)] boolNegate x = [x] negateGuardedRhs (GuardedRhs srcLoc stmts exp) = [GuardedRhs srcLoc s exp | s <- once (mkMp boolNegate) stmts]Supporting routines for mutationmodule Test.MuCheck.MuOp (MuOp , Mutable(..) 
, (==>*) , (*==>*) , (~~>) , mkMp' , same ) whereimport Language.Haskell.Exts (Name, QName, QOp, Exp, Literal, GuardedRhs, Decl)import qualified Data.Generics as Gimport Control.Monad (MonadPlus, mzero)data MuOp = N (Name, Name) | QN (QName, QName) | QO (QOp, QOp) | E (Exp, Exp) | D (Decl, Decl) | L (Literal, Literal) | G (GuardedRhs, GuardedRhs) deriving Eq-- boilerplate code-- | The function `same` applies on a `MuOP` determining if transformation is-- between same values.same :: MuOp -> Boolsame (N (a,b)) = a == bsame (QN (a,b)) = a == bsame (E (a,b)) = a == bsame (D (a,b)) = a == bsame (L (a,b)) = a == bsame (G (a,b)) = a == bmkMp' (N (s,t)) = G.mkMp (s ~~> t)mkMp' (QN (s,t)) = G.mkMp (s ~~> t)mkMp' (QO (s,t)) = G.mkMp (s ~~> t)mkMp' (E (s,t)) = G.mkMp (s ~~> t)mkMp' (D (s,t)) = G.mkMp (s ~~> t)mkMp' (L (s,t)) = G.mkMp (s ~~> t)mkMp' (G (s,t)) = G.mkMp (s ~~> t)showM (s, t) = \n ++ show s ++ ==> ++ show tinstance Show MuOp where show (N a) = showM a show (QN a) = showM a show (QO a) = showM a show (E a) = showM a show (D a) = showM a show (L a) = showM a show (G a) = showM a-- end boilerplate code-- Mutation operation representing translation from one fn to another fn.class Mutable a where (==>) :: a -> a -> MuOp(==>*) :: Mutable a => a -> [a] -> [MuOp](==>*) x lst = map (\i -> x ==> i) lst(*==>*) :: Mutable a => [a] -> [a] -> [MuOp]xs *==>* ys = concatMap (==>* ys) xs-- we handle x ~~> x separately(~~>) :: (MonadPlus m, Eq a) => a -> a -> (a -> m a)x ~~> y = \z -> if z == x then return y else mzero-- instancesinstance Mutable Name where (==>) = (N .) . (,)instance Mutable QName where (==>) = (QN .) . (,)instance Mutable QOp where (==>) = (QO .) . (,)instance Mutable Exp where (==>) = (E .) . (,)instance Mutable Decl where (==>) = (D .) . (,)instance Mutable Literal where (==>) = (L .) . (,)instance Mutable GuardedRhs where (==>) = (G .) . (,)A few generic routines used in other parts-- | Common functions used by MuCheckmodule Test.MuCheck.Utils.Common whereimport System.FilePath (splitExtension)import System.Randomimport Data.Listimport Control.Applicative-- | The `choose` function generates subsets of a given sizechoose :: [a] -> Int -> [[a]]choose xs n = filter (\x -> length x == n) $ subsequences xs-- | The `coupling` function produces all possible pairings, and applies the-- given function to eachcoupling fn ops = [(fn o1 o2) | o1 <- ops, o2 <- ops, o1 /= o2]-- | The `genFileNames` function lazily generates filenames of mutantsgenFileNames :: String -> [String]genFileNames s = map newname [1..] where (name, ext) = splitExtension s newname i= name ++ _ ++ show i ++ ext-- | The `replace` function replaces first element in a list given old and new values as a pairreplace :: Eq a => (a,a) -> [a] -> [a]replace (o,n) lst = map replaceit lst where replaceit v | v == o = n | otherwise = v-- | The `sample` function takes a random generator and chooses a random sample-- subset of given size.sample :: (RandomGen g, Num n, Eq n) => g -> n -> [t] -> [t]sample g 0 xs = []sample g n xs = val : sample g' (n - 1) (remElt idx xs) where val = xs !! 
idx (idx,g') = randomR (0, length xs - 1) g-- | The `sampleF` function takes a random generator, and a fraction and-- returns subset of size given by fractionsampleF :: (RandomGen g, Num n) => g -> Rational -> [t] -> [t]sampleF g f xs = sample g l xs where l = round $ f * fromIntegral (length xs)-- | The `remElt` function removes element at index specified from a listremElt :: Int -> [a] -> [a]remElt idx xs = front ++ ack where (front,b:ack) = splitAt idx xs-- | The `swapElts` function swaps two elements in a list given their indicesswapElts :: Int -> Int -> [t] -> [t]swapElts i j ls = [get k x | (k, x) <- zip [0..length ls - 1] ls] where get k x | k == i = ls !! j | k == j = ls !! i | otherwise = x-- | The `genSwapped` generates a list of lists where each element has been-- swapped by anothergenSwapped :: [t] -> [[t]]genSwapped lst = map (\(x:y:_) -> swapElts x y lst) swaplst where swaplst = choose [0..length lst - 1] 2Common SYB routines{-# LANGUAGE RankNTypes #-}-- | SYB functionsmodule Test.MuCheck.Utils.Syb (relevantOps, once) whereimport Data.Generics (Data, GenericM, gmapMo)import Test.MuCheck.MuOp (mkMp', MuOp, same)import Control.Monad (MonadPlus, mplus)import Data.Maybe(isJust)-- | apply a mutating function on a piece of code one at a time-- like somewhere (from so)once :: MonadPlus m => GenericM m -> GenericM monce f x = f x `mplus` gmapMo (once f) x-- | The function `relevantOps` does two filters. For the first, it-- removes spurious transformations like Int 1 ~~> Int 1. Secondly, it-- tries to apply the transformation to the given program on some element -- if it does not succeed, then we discard that transformation.relevantOps :: (Data a, Eq a) => a -> [MuOp] -> [MuOp]relevantOps m oplst = filter (relevantOp m) $ filter (not . same) oplst -- check if an operator can be applied to a program where relevantOp m op = isJust $ once (mkMp' op) mCommon print routinesmodule Test.MuCheck.Utils.Print whereimport Debug.Traceimport Data.List(intercalate)-- | simple wrapper for adding a % at the end.n ./. t = ( ++ show (n * 100 `div` t) ++ %)-- | join lines togethershowAS :: [String] -> StringshowAS = intercalate \n-- | make lists into lines in text.showA :: Show a => [a] -> StringshowA = showAS . map showtt v = trace (> ++ (show v)) vMutation opertorsmodule Test.MuCheck.Operators (comparators, predNums, binAriths, arithLists, allOps) whereimport Test.MuCheck.MuOpimport Test.MuCheck.Utils.Commonimport Language.Haskell.Exts (Name(Symbol), Exp(Var), QName(UnQual), Name(Ident))-- | all available operatorsallOps = concat [comparators, predNums, binAriths, arithLists]-- | comparison operators [<, >, <=, >=, /=, ==]comparators = coupling (==>) $ map Symbol [<, >, <=, >=, /=, ==]-- | predicates [pred, id, succ]predNums = coupling (==>) $ map varfn [pred, id, succ]-- | binary arithmetic [+, -, *, /]binAriths = coupling (==>) $ map Symbol [+, -, *, /]-- | functions on lists [sum, product, maximum, minimum, head, last]arithLists = coupling (==>) $ map varfn [sum, product, maximum, minimum, head, last]-- utilitiesvarfn = Var . UnQual . 
IdentCommon routines for integrating test frameworksmodule Test.MuCheck.TestAdapter whereimport qualified Language.Haskell.Interpreter as Iimport Data.Typeabletype InterpreterOutput a = Either I.InterpreterError (String, a)data TSum = TSum {tsum_numMutants::Int, tsum_loadError::Int, tsum_notKilled::Int, tsum_killed::Int, tsum_others::Int, tsum_log::String}-- Class/Instance declarationtype MutantFilename = Stringclass Typeable s => Summarizable s where testSummary :: [MutantFilename] -> [InterpreterOutput s] -> TSum isSuccess :: s -> BoolUsing the above for QuickCheck integration (Different module)Adapter{-# LANGUAGE StandaloneDeriving, DeriveDataTypeable, TypeSynonymInstances #-}-- | Module for using quickcheck propertiesmodule Test.MuCheck.TestAdapter.QuickCheck whereimport qualified Test.QuickCheck.Test as Qcimport Test.MuCheck.TestAdapterimport Test.MuCheck.Utils.Print (showA, showAS)import Data.Typeableimport Data.List((\\))import Data.Either (partitionEithers)deriving instance Typeable Qc.Resulttype QuickCheckSummary = Qc.Result-- | Summarizable instance of `QuickCheck.Result`instance Summarizable QuickCheckSummary where testSummary mutantFiles results = TSum { tsum_numMutants = r, tsum_loadError = e, tsum_notKilled = s, tsum_killed = f, tsum_others = g, tsum_log = logMsg} where (errorCases, executedCases) = partitionEithers results [successCases, failureCases, gaveUpCases] = map (\c -> filter (c . snd) executedCases) [isSuccess, isFailure, isGaveUp] r = length results e = length errorCases [s,f,g] = map length [successCases, failureCases, gaveUpCases] errorFiles = mutantFiles \\ map fst executedCases logMsg = showAS [Details:, Loading error files:, showA errorFiles, Loading error messages:, showA errorCases, Successes:, showA successCases, Failure:, showA failureCases, Gaveups:, showA gaveUpCases] isFailure :: Qc.Result -> Bool isFailure Qc.Failure{} = True isFailure _ = False isGaveUp :: Qc.Result -> Bool isGaveUp Qc.GaveUp{} = True isGaveUp _ = False isSuccess = Qc.isSuccessMainmodule Main whereimport System.Environment (getArgs, withArgs)import Control.Monad (void)import Test.MuCheck (mucheck)import Test.MuCheck.TestAdapter.QuickCheckimport Test.MuCheck.TestAdapterimport Test.MuCheck.Utils.Printmain :: IO ()main = do val <- getArgs case val of (-h : _ ) -> help (fn : file : modulename : args) -> withArgs [] $ mucheck tsFn fn file modulename args _ -> error Need function file modulename [args]\n\tUse -h to get help where tsFn :: [MutantFilename] -> [InterpreterOutput QuickCheckSummary] -> TSum tsFn = testSummaryhelp :: IO ()help = putStrLn $ mucheck function file modulename [args]\n ++ showAS [E.g:, ./mucheck qsort Examples/QuickCheckTest.hs Examples.QuickCheckTest 'quickCheckResult idEmpProp' 'quickCheckResult revProp' 'quickCheckResult modelProp',]
Mucheck - a mutation analysis tool for Haskell programs
haskell;unit testing
null
_cs.42758
Where does deleted data go from a memory system? If it is not actually deleted, where is it stored? I always wonder about this: when we send something to a memory system it takes a long time (depending on the USB port), but when we delete it, it takes no more than the blink of an eye and it disappears. Where does it actually go? Is the deletion process the same on a memory device and on the internet, or different? Can we recover data from the internet?
Where does deleted data go from a memory system/the internet?
memory management;memory hardware;memory access;memory allocation;shared memory
null
_unix.1079
Diff is a great tool to display the changes between two files. But how do you display the similarities of two text files (while ignoring the differences)?

I.e. sample input:

a:
Foo Bar
X
Hello
World
42

b:
Foo Baz
Hello
World
23

Pseudo output (something like this):

@@ 2,3=
Hello
World

Just sorting both files and using comm is not enough, because in that case the line information is lost.
Output the common lines (similarities) of two text files (the opposite of diff)?
command line;shell;diff
How about using diff, even though you don't want a diff? Try this:

diff --unchanged-group-format='@@ %dn,%df %<' --old-group-format='' --new-group-format='' \
     --changed-group-format='' a.txt b.txt

Here is what I get with your sample data:

$ cat a.txt
Foo Bar
X
Hello
World
42
$ cat b.txt
Foo Baz
Hello
World
23
$ diff --unchanged-group-format='@@ %dn,%df%<' --old-group-format='' --new-group-format='' \
      --changed-group-format='' a.txt b.txt
@@ 2,3Hello
World
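If you'd rather have the positions in both files, the same idea can be scripted; here is a minimal sketch using Python's difflib (the @@ header here mimics the pseudo output from the question, printing the 1-based starting line in each file):

    from difflib import SequenceMatcher

    # Print runs of common lines, with their positions in file a and file b.
    a = open('a.txt').read().splitlines()
    b = open('b.txt').read().splitlines()

    for m in SequenceMatcher(None, a, b).get_matching_blocks():
        if m.size:                      # the final sentinel block has size 0
            print('@@ %d,%d' % (m.a + 1, m.b + 1))
            print('\n'.join(a[m.a:m.a + m.size]))

For the sample data this prints the Hello/World run with its line numbers, which is exactly the information comm throws away.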
_softwareengineering.241389
Should an interface only be used to specify certain behavior? Would it be wrong to use an interface to group logically related data?

To me it looks like we should not use an interface to group logically related data, as a structure seems a better fit. A class may be used, but the class name should indicate something like DTO so that the user gets the impression that the class does not have any behavior.

Please let me know if my assumption is correct. Also, are there any exceptions where an interface can be used to group logically related data?
Should an interface only be used for behavior and not to show logical data grouped together?
design;object oriented design;interfaces
null
_codereview.132464
Currently, I'm working on a gamification platform and I have the following rules for achieving a new level:

- Level 0: When user registers
- Level 1: When user confirms account
- Level 2: When user completes a quiz
- Level 3: When user completes a mission and publishes more than 5 comments in the blog
- Level 4: When user completes more than two missions and completes a campaign (campaign means donating money for something)
- Level 5: Raised at least $50 in campaigns and completed at least two campaigns

Now, I have the following database table where I'll keep track of each user action (e.g. complete_quiz, complete_mission, etc.):

Achievements
- id (int)
- event (varchar)
- amount (int) # used when event has specific value, e.g. money for complete_campaign event
- user_id (int)
- date_created (int)

When the user makes any action (e.g. complete_mission), I'll insert the action in the table above and will call the following method to check if the user has a new level unlocked:

public function checkIfAchievmentUnlocksNewLevel($userObj)
{
    $currentLevel = $userObj->level;
    $nextLevel = ++$currentLevel;
    $isLevelUnlocked = false;

    switch($nextLevel) {
        case 0:
        case 1:
            // Each registered user will have these levels by default.
            break;
        case 2:
            $completedQuiz = $this->db->select("SELECT " . $this->fieldsList . " FROM " . $this->table . " WHERE event = :event AND user_id = :user_id", array(':event' => 'complete_quiz', ':user_id' => $userObj->id));
            $isLevelUnlocked = count($completedQuiz) ? true : false;
            break;
        case 3:
            $completedMissions = $this->db->select("SELECT " . $this->fieldsList . " FROM " . $this->table . " WHERE event = :event AND user_id = :user_id", array(':event' => 'complete_mission', ':user_id' => $userObj->id));
            $wildwireComments = $this->db->select("SELECT " . $this->fieldsList . " FROM " . $this->table . " WHERE event = :event AND user_id = :user_id", array(':event' => 'wildwire_comment', ':user_id' => $userObj->id));
            $isLevelUnlocked = (count($completedMissions) && count($wildwireComments) >= 5) ? true : false;
            break;
        case 4:
            $completedMissions = $this->db->select("SELECT " . $this->fieldsList . " FROM " . $this->table . " WHERE event = :event AND user_id = :user_id", array(':event' => 'complete_mission', ':user_id' => $userObj->id));
            $completedCampaigns = $this->db->select("SELECT " . $this->fieldsList . " FROM " . $this->table . " WHERE event = :event AND user_id = :user_id", array(':event' => 'complete_campaign', ':user_id' => $userObj->id));
            $isLevelUnlocked = (count($completedMissions) >= 2 && count($completedCampaigns)) ? true : false;
            break;
        case 5:
            $campaignRaisedMoney = $this->db->select("SELECT SUM(amount) FROM " . $this->table . " WHERE event = :event AND user_id = :user_id", array(':event' => 'complete_campaign', ':user_id' => $userObj->id));
            $completedCampaigns = $this->db->select("SELECT " . $this->fieldsList . " FROM " . $this->table . " WHERE event = :event AND user_id = :user_id", array(':event' => 'complete_campaign', ':user_id' => $userObj->id));
            $isLevelUnlocked = (count($compaignRaisedMoney) >= 50.00 && count($completedCampaigns) >= 2) ? true : false;
            break;
    }

    if($isLevelUnlocked) {
        $userHelper->setLevel($nextLevel);
    }
}

This code is working now, but I want to refactor it and am looking for any suggestions on how to improve it.
Level system in a gamification platform
php;mysql
First, I am concerned about the underlying data model. I am not sure that trying to fit all different event types into the same table makes sense here, since the events are so different in nature (account validation vs. taking a quiz vs. making comments vs. completing missions vs. completing a campaign).

My guess is that you should have separate tables in the database for each of these types of events. One table for quiz results for all users, one table for all user comments, one table for storing mission information, etc.

Second, I am concerned with how you are hard-coding the level requirements into this section of code and storing the level on the user record. This is a very tight coupling of your leveling logic with the user object (presumably in database as well). What happens if you change your leveling criteria?

I would probably strive to store necessary properties or expose necessary methods with the user class to be able to pass a user object to an independent level-determination class where it is compared against levelling criteria. So to fill out a rough class skeleton, perhaps you are looking at something like this:

class User {
    public $id;
    public $name;
    // etc.

    public function __construct($id) {
        // set up object from database record
    }

    public function accountConfirmed() {
        // return true/false as to whether account is confirmed
    }

    public function getComments() {
        // get list of all comment objects
        // perhaps pass some parameter to allow for filtering
    }

    public function getQuizzes() {
        // get list of all quizzes, and perhaps have parameters for filtering
    }

    public function getMissions() {
        // get missions
    }

    public function getCampaigns() {
        // get campaigns
    }

    public function getCampaignTotals() {
        // get campaign totals
    }

    public function setLevel() {
        // place to set the level on the user
        // perhaps you update this to the database if changed
        // and you decide you do need to store level on the user record for ease of lookup
    }
}

class userLevelCalculator {
    // place to store user object that was passed to class
    protected $user;
    protected $levelCheckFunctions = array();

    // constructor receives user object
    public function __construct(User &$user) {
        $this->user = $user;
    }

    // logic to determine user level
    // this could probably be broken down into separate class methods if needed
    public function getUserLevel() {
        // check for level 1
        if(!$user->accountConfirmed()) {
            $user->setLevel(0);
            return 0;
        }

        // check for level 2
        $quizCount = $user->getQuizzes();
        if($quizCount < 1) {
            $user->setLevel(1);
            return 1;
        }

        // check for level 3
        $commentCount = $user->getComments();
        $missionCount = $user->getMissions();
        if($commentCount < 5 || $missionCount < 1) {
            $user->setLevel(2);
            return 2;
        }

        // check for level 4
        $campaignCount = $user->getCampaigns();
        if($missionCount < 2 || $campaignCount < 1) {
            $user->setLevel(3);
            return 3;
        }

        // check for level 5
        $campaignTotals = $user->getCampaignTotals();
        if($campaignCount < 2 || $campaignTotals < 50) {
            $user->setLevel(4);
            return 4;
        }

        $user->setLevel(5);
        return 5;
    }
}
_softwareengineering.161410
So when you call malloc or new [] from your C/C++ application, how does the CRT translate it into Windows API calls?
How are new[] and malloc implemented in Windows?
windows;memory;allocation;malloc
It depends on whether you are in debug or release mode. In release mode, as Pedro said, there is HeapAlloc/HeapFree (Windows API functions), while in debug mode (with Visual Studio) there is a hand-written version of free and malloc (to which new/delete are redirected) with thread locks and more exception detection, so that you can more easily detect when you have made mistakes with your heap pointers when running your code in debug mode. This is one more reason why code compiled in debug mode is slower. I think it is the only API which is simulated like this in debug mode.

Just step into it in the debugger to understand how it works; if I remember well, this debug version uses some trees of doubly linked lists.

And it is also with this kind of rewriting of malloc/free that memory consumption analysers work: there is a tool for that in Visual Studio which just uses another .dll as the implementation of the allocation functions, one which notifies another data analyser program each time they are called.
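To see those Win32 calls in isolation, here is a minimal sketch using Python's ctypes (purely illustrative; it bypasses the CRT and calls the same kernel32 heap functions that a release-mode malloc roughly boils down to):

    import ctypes
    from ctypes import wintypes

    k32 = ctypes.WinDLL('kernel32', use_last_error=True)
    k32.GetProcessHeap.restype = wintypes.HANDLE
    k32.HeapAlloc.restype = wintypes.LPVOID
    k32.HeapAlloc.argtypes = [wintypes.HANDLE, wintypes.DWORD, ctypes.c_size_t]
    k32.HeapFree.restype = wintypes.BOOL
    k32.HeapFree.argtypes = [wintypes.HANDLE, wintypes.DWORD, wintypes.LPVOID]

    HEAP_ZERO_MEMORY = 0x00000008

    heap = k32.GetProcessHeap()                      # the process default heap
    p = k32.HeapAlloc(heap, HEAP_ZERO_MEMORY, 1024)  # roughly what malloc(1024) ends up doing
    print(hex(p))
    k32.HeapFree(heap, 0, p)

Which heap the CRT actually allocates from varies by CRT version (older CRTs kept their own private heap), so treat the GetProcessHeap call above as an assumption, not a guarantee.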
_cstheory.16968
Can I evaluate a formula $(a_i + b_i) \cdot c_i$ if I have the encryption of $a_i, b_i, c_i$ respectively, using a homomorphic encryption scheme that supports multiplications and additions, supposing that each value is a small integer? I am aware of the BGN scheme that evaluates 2-DNF formulas, but in my scheme I want the other way around. Instead of $a_1 \cdot b_1 + a_2 \cdot b_2 + \ldots + a_n \cdot b_n$ I want to evaluate $(a_1 + b_1) \cdot c_1 + (a_2 + b_2) \cdot c_2 + \ldots + (a_n + b_n) \cdot c_n$.
How to homomorphically and efficiently evaluate $$(a_1 + b_1) \cdot c_1 + (a_2 + b_2) \cdot c_2 + \ldots + (a_n + b_n) \cdot c_n$$
cr.crypto security;homomorphic encryption
The original question seems to assume that the BGN scheme is the state-of-the-art for problems like this (correct me if I'm wrong :)), so for what it's worth:

The BGN scheme is a prototypical version of a somewhat homomorphic encryption scheme: the bilinear map gives you a single multiplication on any ciphertext still in the source group $\mathbb{G}$, and (of course) unbounded additions.

It's worth noting that the additively homomorphic property (i.e., the unbounded additions) comes from the algebraic representation of the ciphertexts. For similar reasons, El Gamal has multiplicatively homomorphic ciphertexts (unbounded multiplications, no additions), and Paillier has additively homomorphic ciphertexts (no multiplications, unbounded additions).

I think around a year after BGN, Craig Gentry came out with the first fully homomorphic encryption scheme (unbounded multiplications, unbounded additions).

The current state-of-the-art for FHE schemes is some combination of:

((Below are PDF links to ePrint.))

- Fully Homomorphic Encryption without Modulus Switching from Classical GapSVP
- Fully Homomorphic Encryption with Polylog Overhead
- Somewhat Practical Fully Homomorphic Encryption

Another thought:

In case the word "efficiently" in the title was intended to be interpreted as "not FHE", here's an independent observation of mine that might be useful for situations like you ran into just now, i.e. "I think I need more multiplications than the bilinear map in BGN allows, but I don't want to take the huge hit of FHE"...

If you implement a BGN-like scheme, but substitute a multilinear map for the bilinear map...

See: Candidate Multilinear Maps from Ideal Lattices

...then you should be allowed up to $\kappa$ multiplications, for multilinearity parameter $\kappa$.

The complexity of the GGH multilinear map depends on $\kappa$, but so long as $\kappa=O(1)$, I have a feeling there wouldn't be much difference between the resulting scheme and the BGN scheme in terms of concrete efficiency. (In fact, it's an interesting question on its own!)

In any case, suppose you ran into a situation where you needed... THREE multiplications per plaintext... Think about using a multilinear map.

A few more details for intuition about what multilinear maps are, if they're new to you:

BGN uses groups $(\mathbb{G}, \mathbb{G}_T)$ and a map $e: \mathbb{G}\times\mathbb{G}\rightarrow\mathbb{G}_T$.

IIRC, the GGH multilinear map with, say, $\kappa = 2$ can be seen as reducing (in the simplest case) to groups $(\mathbb{G}_0, \mathbb{G}_1, \mathbb{G}_2)$ and a set of bilinear maps, written generally as $e = \{e_i\}_{i\in [0,\dots,\kappa-1]}$ where for all $i$, $e_i : \mathbb{G}_i\times\mathbb{G}_i\rightarrow\mathbb{G}_{i+1}$.
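As a quick worked expansion (plain distributivity, stated only to connect the target formula to the multiplication counts above): the expression in the title rewrites as a sum of degree-2 monomials,

$$\sum_{i=1}^{n}(a_i + b_i)\,c_i \;=\; \sum_{i=1}^{n} a_i c_i + \sum_{i=1}^{n} b_i c_i,$$

so it has multiplicative depth 1: each monomial needs only one multiplication, surrounded by additions.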
_cs.14862
There are $n$ elements in a hash table of size $m \geq 2n$ which uses open addressing to avoid collisions. The hash function was chosen randomly among a set of uniform functions. A set $H$ of hash-functions $h:U\to\{0,\dots,m-1\}$ is called uniform, if for every tuple of different keys $x,y \in U$ the number of hash-functions $h \in H$ with $h(x) = h(y)$ is $\frac{|H|}{m}$ at most.Show that the propability that for $i = 1, 2, \dots,n$ the $i$-th insert operation needs more than $k$ attempts, is $2^{-k}$ at most.This is an assignment, which I got as homework. What I already worked out:The propability $p_1$ for a collision is 0 of course for an empty table.The propability $p_i$ for a collision after k attempts should be $\frac{i - 1}{2n}\cdot k$ assuming that the table is filled with $i-1$ elements to this point and the tables size is $2n$ as worst case.So I have$$p_i= \frac{i-1}{2n} \cdot k \leq 2^{-k},$$but I don't know where to go from here.The method of open hashing used here simply iterates over different hash-functions until a free place is found (for example $h(x) = (x \bmod j) \bmod n$ with increasing prime numbers for $j$.
Proof that probability that hashing with open addressing needs more than $k$ attempts is $2^{-k}$ at most
algorithm analysis;data structures;hash tables
null
_unix.368697
I created a custom Arch distro iso with Archiso and I wrote an installation script that prompts the user for input for install options. So far, the installation process is:

1. boot the Arch iso
2. execute the installation script with: $ ./install.sh
3. input when prompted

Your typical user-friendly installer boots right into the installer and gets going. I'd like to do that by having ./install.sh run automatically instead of being executed by the user, so that step #2 is eliminated.

If I understand correctly, the Arch iso gets the user to a terminal via a systemd service that calls /sbin/agetty. I think I either need to modify or replace that service to make it something that calls my script, but I'm not sure how to go about that, or if this is even close to the right approach.

What's the proper way to boot to an installer script on a distro live CD?
Automatically running an installation cli script in terminal on startup
shell script;arch linux;systemd;system installation;livecd
null
_unix.12063
For fun and laziness, I've got 20 entries in my GRUB2 menu. To get to the bottom one quickly, I tap the down-arrow a number of times during the GRUB loading screen. I can press the key 15 times (+/- 1, I don't remember); on the next press, GRUB beeps and the menu choice isn't affected.

Why would someone put the limit at 2^4 on a 64-bit processor? Is it even a GRUB problem, or is it caused by keyboard queuing?
Why can GRUB2 only remember 4 bits?
grub2;bios
Do you mean you press the key 15 times before GRUB has time to process the first press? If so, that's the BIOS buffering the key presses. The BIOS probably has a fixed-size buffer whose size hasn't changed in over 30 years. The classic BIOS keyboard buffer holds 16 two-byte slots, one of which is always left empty to tell a full buffer from an empty one, which would explain the 15-keystroke limit. (The API hasn't changed, the hardware has to some extent, but for the BIOS's sake it'll emulate older hardware, and there isn't any demand for fancier behavior, so BIOS writers don't bother.)
_softwareengineering.332869
I'm working on the design of a compiler for a language in which I have to use curly braces for two different purposes. I am currently stuck writing an unambiguous grammar, but I realized that some other languages have the same behaviour.

That is why I'm trying to understand how this part of the JS/ECMAScript grammar is built. More precisely, I would like to understand how the parser manages to correctly identify statement blocks (like function bodies) as opposed to object definitions, since they are both surrounded with curly braces and their content can be syntactically the same.

I have searched for the JS grammar on the web, but most documentation is very messy. I found this, which is very clear: http://hepunx.rl.ac.uk/~adye/jsspec11/llr.htm -- but the only rule using curly braces is the following:

CompoundStatement:
    { Statements }

Could anyone help me with this?
JavaScript / ECMAScript language grammar disambiguation
javascript;compiler;grammar
The grammar you linked to looks incomplete and buggy (it doesn't mention object literals). Let's talk about this one instead. Here, braces occur in four rules:

ObjectLiteral:
    { }
    { FieldList }

Block:
    { BlockStatements }

SwitchStatement:
    switch ParenthesizedExpression { }
    switch ParenthesizedExpression { CaseGroups LastCaseGroup }

NamedFunction:
    function Identifier FormalParametersAndBody
FormalParametersAndBody:
    ( FormalParameters ) { TopStatements }

The function and switch cases will never clash with the block and the object literal, because they are preceded by the function and switch keywords. As soon as the parser recognizes these keywords, all other alternative productions can be discarded.

However, there appears to be an ambiguity between blocks and object literals. How should the statement {} be parsed? It could be either an empty object literal or an empty block.

The grammar prevents this by tracking whether an expression is the initial expression of the statement, and provides different productions for each case:

PrimaryExpression(normal):
    SimpleExpression
    FunctionExpression
    ObjectLiteral
PrimaryExpression(initial):
    SimpleExpression

This disallows object literals in the first expression, and also function definitions: if a function occurs at the beginning of a statement, it is always a function definition statement, never a function expression.

This makes the JavaScript grammar unambiguous, which is important for LR grammars, since it avoids reduce/reduce conflicts.

For top-down parsers (e.g. hand-written recursive descent), we could leave the grammar ambiguous but assign priorities to the alternatives. E.g., in a Statement or TopStatement, the Block and FunctionDefinition alternatives would always have to be attempted before the ExpressionStatement alternative.

For generalized parsers that can parse all context-free grammars (not just LL or LR subsets), ambiguities do not hinder parsing. Depending on the parser, it either returns one arbitrary parse tree out of all valid parse trees (undesirable in practice) or iterates over all possible parse trees. An actually ambiguous grammar is not good for a programming language, since we usually want to write programs that cannot be misunderstood by the compiler.

However, there's an interesting class of unambiguous grammars that cannot be parsed by LR parsers and requires a generalized parser. Here, we have a reduce/reduce conflict between the A and B rules, but it is just a local ambiguity and is resolved later in the input:

Top      = X | Y
X        = A Balanced x
Y        = B Balanced y
A        = a b
B        = a b
Balanced = x | y | () | ( Balanced ) | ( Balanced Balanced )

Since the Balanced rule is a context-free grammar in itself, no amount of lookahead will resolve this. The input ab((y()))x has exactly one parse tree (it is unambiguous), but it is not parseable by common parsers. When designing a language, even local ambiguities should therefore be avoided.
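To see the initial-expression restriction in action, here is a small JavaScript illustration (my own example, pasteable into any engine; the comments describe the parse I'd expect from the grammar above):

```js
// At statement position the leading `{` starts a Block, so this is an
// empty block followed by the unary-plus expression `+1`; in a REPL
// it evaluates to 1, not to a string concatenation:
{} + 1

// Here `answer:` parses as a statement label inside a block and `42`
// as an expression statement -- no object is created:
{ answer: 42 }

// Parentheses force expression context, where PrimaryExpression(normal)
// applies, so the same tokens now yield an object literal:
({ answer: 42 })   // an object with property `answer`
```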