_softwareengineering.16323
After reading this post about an ideal programming language learning sequence, I am wondering what the answers would have been if the question had been performance-oriented instead of learning-oriented. Since there are many programming languages, I chose to ask the question about OO languages to keep it less subjective, but any thoughts or comparisons about non-OO languages are appreciated. If we omit programming effort, time, and costs: what is your ranking of the most powerful object oriented languages?
Object Oriented Programming Language performance ranking
programming languages;performance
There's an interesting slide in this Scale at Facebook presentation that shows relative performance of a few languages compared to C++:

C++ (1)
Java (2)
C# (3)
Erlang (6)
Python (21)
Perl (38)
PHP (~40)
Ruby (~70)
_unix.298480
I have /dev/sda mounted on /, as the root partition. Can I safely run badblocks in read-only mode on this device? Will it show false positives/negatives because it's mounted?
Can I safely run badblocks in read-only mode on a mounted drive?
filesystems;mount;hard disk;disk;badblocks
Read-only is just that - reading from the disk. It will pick up sector read errors but (obviously) not sector write errors. Categorically, it is safe to run on a device that is in use as a mounted filesystem. With respect to possible false positives: block IO is not managed, i.e. there are no reader/writer locks, so there is no interaction between badblocks and the filesystem layer.
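For reference, a read-only pass on the device from the question looks like this (-s shows progress and -v is verbose; both are standard badblocks options):

sudo badblocks -sv /dev/sda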
_unix.56678
I've recently been trying to write a script to automate checks for new versions of ports and software installed on my FreeBSD server. The script is added to root's crontab and fires daily. If I run it with sudo /path/to/script it works decently, sending mail with content to my email address. If it's run by cron I get an empty mail. I think the reason might be that during the update a window sometimes appears (from make config, I think) with compilation options, but I might be wrong. Here's the script:

#!/usr/local/bin/bash
# DIRECTORIES SETUP
script_path_dir=/tmp
working_dir=$script_path_dir/portsupgradescript
# FILES SETUP
mail_file=$working_dir/mail.txt
mail_address=MY_MAIL_ADDRESS
mail_subject="Daily update"
pm_out=portmaster_log.txt
pu_out=portupgrade_log.txt
# START
if [ ! -d $script_path_dir ]; then
  echo "Script base directory set does not exist. Creating..."
  mkdir $script_path_dir
else
  echo "Script base directory set exists. OK"
fi
if [ ! -d $working_dir ]; then
  echo "Script working directory set does not exist. Creating..."
  mkdir $working_dir
else
  echo "Script working directory set exists. OK"
fi
if [ $(ls -A $working_dir) ]; then
  echo "Script working directory is empty. OK"
else
  echo "Script working directory is not empty. Cleaning..."
  rm -rf $working_dir/*
fi
rm -rf $pm_out
rm -rf $pu_out
rm -rf $mail_file
/usr/sbin/portsnap fetch update && \
/usr/local/sbin/portmaster -L --index-only | egrep '(ew|ort) version|total install' > $pm_out
linecount=`wc -l $pm_out | awk {'print $1'}`
if [ $linecount != 0 ]; then
  echo "Master file log not empty. Concatenating..."
  cat $pm_out >> $mail_file
else
  echo "Master file log empty... ( x )"
fi
portupgrade -aqyP -l $pu_out
upg_linecount=`wc -l $pu_out`
if [ $upg_linecount != 0 ]; then
  echo "Upgrade file log not empty. Concatenating..."
  cat $pu_out >> $mail_file
else
  echo "Upgrade file log empty... ( x )"
fi
echo "Sending mail report..."
cat $mail_file | mail -s "$mail_subject" $mail_address

Is there any way to select defaults on the make config window so this wouldn't be a showstopper? Or should I maybe run this script sudoed from the user's cron, not root's?
Bash port auto upgrading script in cron doesn't work properly?
bash;freebsd;upgrade;email;bsd ports
Auto-upgrading from cron is kind of a bad idea. You should really read /usr/ports/UPDATING in case there's some sort of manual action that needs to be taken. I'm sure this probably won't be very popular, sorry, but it's true. There's a reason UPDATING exists. As far as your script goes, you can define BATCH=yes in /etc/make.conf and you won't be prompted for configuration. That doesn't mean your upgrades will go well, though.
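For reference, the whole change is one line in the ports configuration (BATCH is a standard ports knob; the comment is mine):

# /etc/make.conf
BATCH=yes   # accept the default options instead of opening the 'make config' dialog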
_unix.339353
I have a problem with backscatter. Spammers send emails to non-existent username @ existent domain hosted on my server. I am trying to abort the session instead of sending bounce messages back to forged sender addresses. I tried adding reject_unverified_recipient, but that doesn't seem to work. When I check mailq, I can see many stuck "user doesn't exist" bounce emails from MAILER_DAEMON to non-existent recipients.

Here is my postconf -n:

append_dot_mydomain = no
biff = no
broken_sasl_auth_clients = yes
config_directory = /etc/postfix
dovecot_destination_recipient_limit = 1
inet_interfaces = all
inet_protocols = ipv4
mailbox_size_limit = 0
message_size_limit = 102400000
milter_default_action = accept
milter_protocol = 2
mydestination = localhost
myhostname = domain.com
mynetworks = 127.0.0.0/8
non_smtpd_milters = inet:localhost:8891
readme_directory = no
recipient_delimiter = +
relay_domains =
relayhost =
resolve_numeric_domain = yes
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
smtpd_milters = inet:localhost:8891
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination, reject_unknown_recipient_domain, reject_unverified_recipient, permit_auth_destination
smtpd_sasl_auth_enable = yes
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous
smtpd_sasl_tls_security_options = noanonymous
smtpd_sasl_type = dovecot
smtpd_tls_CAfile = /etc/ssl/certs/domain.com.chain.crt
smtpd_tls_cert_file = /etc/ssl/certs/domain.com.crt
smtpd_tls_key_file = /etc/ssl/private/domain.com.key
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_use_tls = yes
virtual_alias_domains = mysql:/etc/postfix/sqlconf/virtual_alias_domains.cf
virtual_alias_maps = mysql:/etc/postfix/sqlconf/virtual_mailbox_maps.cf
virtual_mailbox_domains = mysql:/etc/postfix/sqlconf/mydestination.cf
virtual_transport = dovecot

This is the master.cf file:

smtp      inet  n  -  -  -      -  smtpd
  -o content_filter=spamassassin
  -o receive_override_options=no_header_body_checks,no_unknown_recipient_checks,no_milters
smtps     inet  n  -  -  -      -  smtpd
  -o content_filter=checkhook
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
dovecot   unix  -  n  n  -      -  pipe
  flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver -f ${sender} -d ${recipient}
pickup    unix  n  -  -  60     1  pickup
cleanup   unix  n  -  -  -      0  cleanup
qmgr      unix  n  -  n  300    1  qmgr
#qmgr     unix  n  -  n  300    1  oqmgr
tlsmgr    unix  -  -  -  1000?  1  tlsmgr
rewrite   unix  -  -  -  -      -  trivial-rewrite
bounce    unix  -  -  -  -      0  bounce
defer     unix  -  -  -  -      0  bounce
trace     unix  -  -  -  -      0  bounce
verify    unix  -  -  -  -      1  verify
flush     unix  n  -  -  1000?  0  flush
proxymap  unix  -  -  n  -      -  proxymap
proxywrite unix -  -  n  -      1  proxymap
smtp      unix  -  -  -  -      -  smtp
relay     unix  -  -  -  -      -  smtp
showq     unix  n  -  -  -      -  showq
error     unix  -  -  -  -      -  error
retry     unix  -  -  -  -      -  error
discard   unix  -  -  -  -      -  discard
local     unix  -  n  n  -      -  local
virtual   unix  -  n  n  -      -  virtual
lmtp      unix  -  -  -  -      -  lmtp
anvil     unix  -  -  -  -      1  anvil
scache    unix  -  -  -  -      1  scache
maildrop  unix  -  n  n  -      -  pipe
  flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
uucp      unix  -  n  n  -      -  pipe
  flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
ifmail    unix  -  n  n  -      -  pipe
  flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp     unix  -  n  n  -      -  pipe
  flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n -  2  pipe
  flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman   unix  -  n  n  -      -  pipe
  flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}
spamassassin unix - n  n  -     -  pipe
  user=spamfilter argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}
checkhook unix  -  n  n  -      -  pipe
  user=www-data argv=/etc/postfix/scripts/send ${sender} ${recipient}

Here are some logs that were produced when I tried to send to an invalid local recipient:

Jan 22 19:09:34 ip-12345 postfix/qmgr[19938]: CF96B20013B: from=<[email protected]>, size=249, nrcpt=1 (queue active)
Jan 22 19:09:35 ip-12345 postfix/pickup[19939]: 982D320013D: uid=5007 from=<[email protected]>
Jan 22 19:09:35 ip-12345 postfix/pipe[21485]: CF96B20013B: to=<[email protected]>, relay=spamassassin, delay=18, delays=16/0/0/1.2, dsn=2.0.0, status=sent (delivered via spamassassin service)
Jan 22 19:09:35 ip-12345 postfix/qmgr[19938]: CF96B20013B: removed
Jan 22 19:09:35 ip-12345 postfix/cleanup[21477]: 982D320013D: message-id=<[email protected]>
Jan 22 19:09:35 ip-12345 postfix/qmgr[19938]: 982D320013D: from=<[email protected]>, size=1333, nrcpt=1 (queue active)
Jan 22 19:09:35 ip-12345 dovecot: auth: Debug: master in: USER#0111#[email protected]#011service=lda
Jan 22 19:09:35 ip-12345 dovecot: auth-worker(14636): Debug: sql([email protected]): SELECT '/var/vmail/[email protected]' as home, 'vmail' as uid, 'vmail' as gid, concat('*:storage=', quota_kb) AS quota_rule, concat('*:messages=', quota_msg) AS quota_rule2 FROM users WHERE username = 'nonexistentx' AND domain = 'localdomain.com' and active=1
Jan 22 19:09:35 ip-12345 dovecot: auth-worker(14636): sql([email protected]): unknown user
Jan 22 19:09:35 ip-12345 dovecot: auth: Debug: userdb out: NOTFOUND#0111
Jan 22 19:09:35 ip-12345 postfix/pipe[21400]: 982D320013D: to=<[email protected]>, relay=dovecot, delay=0.07, delays=0.05/0/0/0.02, dsn=5.1.1, status=bounced (user unknown)
Jan 22 19:09:35 ip-12345 postfix/cleanup[21396]: A8B0720013C: message-id=<[email protected]>
Jan 22 19:09:35 ip-12345 postfix/bounce[21474]: 982D320013D: sender non-delivery notification: A8B0720013C
Jan 22 19:09:35 ip-12345 postfix/qmgr[19938]: A8B0720013C: from=<>, size=3394, nrcpt=1 (queue active)
Jan 22 19:09:35 ip-12345 postfix/qmgr[19938]: 982D320013D: removed
Jan 22 19:09:35 ip-12345 postfix/smtp[21496]: A8B0720013C: to=<[email protected]>, relay=none, delay=0.03, delays=0/0.01/0.02/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=sender.ocm type=A: Host not found)
Jan 22 19:09:35 ip-12345 postfix/qmgr[19938]: A8B0720013C: removed
Postfix reject unknown recipient
linux;postfix
null
_cs.72278
Let $a,b>0$ and suppose we have divided the domain $\Lambda:=[0,a]\times[0,b]$ into a grid of $n_x$ cells across and $n_y$ cells down. We enumerate the grid from bottom to top and left to right. Given $(x_0,y_0),(x,y)\in\Lambda$, I want to iterate over each grid cell through whose interior the line segment connecting $(x_0,y_0)$ and $(x,y)$ passes. Suppose we number the first cell $(0, 0)$ and the last cell $(n_x-1,n_y-1)$. How can we find an efficient algorithm that iterates over the desired cells?
Iterate over each cell of a grid through whose interior a given line segment passes
computational geometry;graphics
null
_webmaster.5932
We have an issue in our company right now where gay people are complaining that our forms are not gay-friendly. We have a wedding registry page where it says "groom" and "bride", but they want a different way of presenting it so that it would be acceptable to that demographic. Any good suggestions on how we handle this? (Hope no one gets offended; it's a legitimate question and we're actually dealing with this right now.) Here is a screenshot of the form
Modernizing traditional Marriage Application Web Form
asp.net;web development
There's no real standard or perfect way of doing it. Some options:

First spouse
Second spouse

or

Party A
Party B

or

Partner A
Partner B

or

Bride/Groom
Bride/Groom
_webmaster.43609
Could anyone tell me how to block access to a website for libwww-perl agents? I can only seem to find .htaccess solutions. Many thanks, john
block libwww-perl on windows server 2003
htaccess;iis
According to this site you can do blocking on IIS after installing the URL Rewrite Module: http://www.seomoz.org/ugc/blocking-bots-based-on-useragent. The specific pattern you would want to use is libwww-perl. Here is an almost identical question asked on Server Fault: https://serverfault.com/questions/408913/can-iis-block-specific-user-agent-from-requesting-a-page-web. It suggests installing Ionic's Isapi Rewrite Filter or Microsoft URLScan.
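With the URL Rewrite Module installed, a rule along the following lines should do it. This is a sketch based on the module's documented web.config schema; the rule name is made up and you should adapt the pattern as needed:

<system.webServer>
  <rewrite>
    <rules>
      <!-- Abort any request whose User-Agent contains libwww-perl (hypothetical rule name) -->
      <rule name="BlockLibwwwPerl" stopProcessing="true">
        <match url=".*" />
        <conditions>
          <add input="{HTTP_USER_AGENT}" pattern="libwww-perl" />
        </conditions>
        <action type="AbortRequest" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>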
_webapps.50605
I have a column of cells in a Google Spreadsheet with values such as: 512212323423532. What I would like to do is convert all of these into a hyperlink and keep the value as the link text: http://www.example.com/id/{value}, where {value} is the value of the cell. I know the format of a hyperlink in a Google spreadsheet, but I don't want to do this manually every time I put in a number. I want a simple process such that when I add a new row, the contents of this column are turned into a link with the value I input. I tried this:

=HYPERLINK(CONCATENATE("http://www.example.com/id/",A1);A1)

But I get:

error: Circular dependency detected
Making all cells in a column links from the values within the cell?
google spreadsheets;google apps script
I wasn't able to reproduce your results. As a matter of fact, it worked perfectly. What you tried to do is most probably the following: in A1 itself you typed in =HYPERLINK(CONCATENATE("http://www.example.com/id/",A1);A1), and this yields an error of course.

Update

If you really want to get the result in A1, then you need to use a script.

Code

// global
var ss = SpreadsheetApp.getActiveSpreadsheet();

function onOpen() {
  var menu = [{name: "create URL", functionName: "createURL"}];
  ss.addMenu("URL", menu);
}

function onEdit(e) {
  var activeRange = e.source.getActiveRange();
  if (activeRange.getColumn() == 1) {
    if (e.value != "") {
      activeRange.setValue('=HYPERLINK("http://www.example.com/id/' + e.value + '","' + e.value + '")');
    }
  }
}

function createURL() {
  var aCell = ss.getActiveCell(), value = aCell.getValue();
  aCell.setValue('=HYPERLINK("http://www.example.com/id/' + value + '","' + value + '")');
}

Explained: e.value retrieves the cell's value (only applicable to a single cell). setValue() writes the concatenated string into getActiveRange(). This is only executed when e.value contains something and the active range is in column A. I've created an extra menu option as well, to be able to access the script that way.

Example: I've created an example file for you: onEdit URL builder. Add this script via Tools > Script editor. Press the bug button and you can use the script.
_codereview.30235
The following working code basically swaps the content of two divs. The divs are created dynamically; the user checks which records they want to swap, and then clicks on a swap button that triggers a function to swap an inner div element. The part that swaps the divs seems really messy to me. I'm only a beginner/intermediate with JavaScript, so I would really appreciate some help with this.

//dynamically create the div elements
$('#SwapBedsFields').html(options);
for (var i = 1; i <= NumberOfBeds; i++) {
    var RowIndex = $.inArray(i.toString(), bedNumberColumn);
    var ptName = 'Empty';
    var UMRN = '';
    var UMRNstr = '';
    var pID = '';
    if (RowIndex != -1) {
        ptName = patientNameColumn[RowIndex];
        UMRN = patientUMRNColumn[RowIndex];
        pID = pIDColumn[RowIndex];
        UMRNstr = '(' + UMRN + ')';
    };
    options += '<div style="white-space:nowrap; height:25px; width:100%;" id="bed-topdiv-' + i + '"><input type="checkbox" style="vertical-align:middle;" name="bed-' + i + '"';
    options += 'id="bed-' + i + '" /><div style="display:inline; vertical-align:middle;">&nbsp;Bed ' + i + ' - </div><div id="bed-div-' + i + '" class="inner-bed-div-class" style="display:inline; vertical-align:middle;" bedNum="' + i + '" uMRN="' + UMRN + '">' + ptName + '&nbsp;' + UMRNstr + '</div></div>';
};

//***THIS IS THE BIT I FIND PARTICULARLY MESSY!***
//then in another function - when the user clicks a button to swap the div elements...
var selected = new Array();
var patientname1;
var patientname2;
var b1html;
var b2html;
var pt1UMRN;
var pt2UMRN;
$('#SwapBedsFields input:checked').each(function () {
    selected.push($(this).attr('name'));
});
//get the html of the two bed divs
patientname1 = $('#bed-div-' + selected[0].substring(4)).html();
patientname2 = $('#bed-div-' + selected[1].substring(4)).html();
pt1UMRN = $('#bed-div-' + selected[0].substring(4)).attr('umrn');
pt2UMRN = $('#bed-div-' + selected[1].substring(4)).attr('umrn');
//swap the elements around
$('#bed-div-' + selected[0].substring(4)).html(patientname2);
$('#bed-div-' + selected[1].substring(4)).html(patientname1);
//update the umrn attribute
$('#bed-div-' + selected[0].substring(4)).attr({ umrn: pt2UMRN });
$('#bed-div-' + selected[1].substring(4)).attr({ umrn: pt1UMRN });
//message the user
$('#bed-topdiv-' + selected[0].substring(4)).effect("highlight", {}, 2000);
$('#bed-topdiv-' + selected[1].substring(4)).effect("highlight", {}, 2000);
Swapping dynamic page div's around on button click
javascript;jquery
null
_unix.193422
locate gtags would find all the files named gtags. What if I only need executables; is there any way to do this?
How to find only executable files using 'locate'?
locate
Not easily. You can use

locate bash | while IFS= read -r line; do [[ -x $line ]] && echo "$line"; done

to find all executables where the name contains bash. This is faster than using find across the whole filesystem because only a few files need to be checked.

- locate bash does what it always does (lists all matches)
- | (pipe) takes the output from the first command (locate) and sends it to the second one (the rest of the line)
- the while ...; do ... done loop iterates over every line it receives from the pipe (from locate)
- read -r line reads one line of input and stores it in a variable called line (in our case, a path/file name)
- [[ -x $line ]] tests whether the file in $line is executable
- if it is, the && echo "$line" part prints it on your screen
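For comparison, the whole-filesystem approach the answer alludes to would look something like this with GNU find (the path and name pattern are illustrative):

find / -type f -name 'gtags*' -executable 2>/dev/null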
_codereview.106423
This is a sort-of-useless utility I wrote to learn my way around shell programming better.My concerns are:Is the code readable? Could it be more efficient, or just simpler? The logic ends up being a little complicated and I have a lot of nested if statements. Should I use 4 tabs instead of 2?Is the interface itself (the flags, arguments, etc) intuitive? Is the documentation clear?How easily this could be ported to Bash? As far as I remember, the only Zsh-specific feature I use here is zparseopts, but I've been casually using Zsh for a while and I could have forgotten some of the inconsistencies between shells.# shebang [-iv] [-t interpreter] [-I extension] [-J [extension]] [filename]## If no filename is given, print a shebang. If a filename is given, append a shebang to the file and print its contents. If a filename is given and the -I or -J options are specified, append a shebang to the file in place.## OPTIONS# -i# Interactive mode; ask for confirmation first.# -I# Modify filename in place (as in `sed -i`) using specified extension. Extension is mandatory.# -J# Same as -I, but extension is mandatory only if the string gsed cannot be found in the output of `which sed`. Overridden by -I.# -t# Specify an interpreter, e.g. `shebang -t zsh` produces #!/usr/bin/env zsh# -v # Print (to stderr) the interpreter being used.shebang () { local interpreter local inplace local inplace2 local verbose local interactive local input_shebang local shebang local continue local sed_command local gsed_avail zparseopts -D t:=interpreter I:=inplace J::=inplace2 v=verbose -i=interactive (( $? )) && return 1 if [[ -n $1 ]]; then input_shebang=$(sed -n '1 { /^#!/ p; }' $1) if (( $? )); then echo Unable to read $1. >&2 return 1 fi if [[ -n $input_shebang ]]; then echo $1 already has a shebang. >&2 return 1 fi fi if [[ (-z $interpreter) && (-z $1) ]]; then echo The -t option is mandatory if no argument is supplied. >&2 return 1 fi if [[ (-z $interpreter) && (-n $1) ]]; then interpreter=$(filename=$(basename $1); [[ $filename = *.* ]] && echo ${filename##*.} || echo '') # grab extension interpreter=${interpreter:#(* *|* | *)} # extensions with whitespace probably aren't legit interpreter=${interpreter:-sh} # assume sh if no extension and no -t option else interpreter=$interpreter[2] fi shebang=#!/usr/bin/env $interpreter if [[ -n $verbose ]]; then echo Using interpreter '$interpreter' >&2 fi if [[ -n $interactive ]]; then read -q continue?Shebang will be '$shebang'. Ok? y/n: [[ $continue == n ]] && return 1 fi if [[ -z $1 ]]; then echo $shebang else gsed_avail=$(command which -s gsed && echo 1) echo $gsed_avail if [[ -n $gsed_avail ]]; then sed_command=1 i\\$shebang\n if [[ -n $inplace ]]; then gsed ${inplace/-I/-i} $sed_command $1 elif [[ -n $inplace2 ]]; then gsed ${inplace2/-J/-i} $sed_command $1 else gsed $sed_command $1 fi else sed_command='1 i\REPLACEME\n' # need to use single quotes to preserve the line break sed_command=${sed_command/REPLACEME/$shebang} # work around the single quotes if [[ -n $inplace ]]; then sed ${inplace/-I/-i} $sed_command $1 elif [[ (-n $inplace2) && (-z ${inplace2#-J}) ]]; then echo '-J' was given without an argument, but 'gsed' is unavailable. Specify an explicit extension or use -I. elif [[ -n ${inplace2#-J} ]]; then sed ${inplace2/-J/-i} $sed_command $1 else sed $sed_command $1 fi fi fi}
Print a shebang line, or prepend it to a file
bash;shell;portability;zsh;ksh
null
_unix.304905
As far as I can tell GDM3 is incompatible with RealVNC, so I uninstalled it and installed LightDM. However, now I can't get anywhere when I try to connect to RealVNC. What I did before was run:

sudo -u localuser vncserver-virtual

...and it opened up a VNC server on port 5901 to which I could connect. I still can, but now it displays the message:

Xsession: unable to start X session --- no .xsession file, no .Xsession file, no session managers, no window managers, and no terminal emulators found; aborting.

I'm guessing I need to make changes to my /etc/vnc/xstartup.custom file to somehow get X to detect LightDM? Its current contents seem centered around GDM:

#!/bin/sh

[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey

if [ -f /usr/bin/gnome-session ]; then
  # Some gnome session types won't work with Xvnc, try to pick a sensible
  # default.
  for SESSION in ubuntu-2d 2d-gnome gnome-classic gnome-fallback; do
    if [ -f /usr/share/gnome-session/sessions/$SESSION.session ]; then
      DESKTOP_SESSION=$SESSION; export DESKTOP_SESSION
      GDMSESSION=$SESSION; export GDMSESSION
      STARTUP="/usr/bin/gnome-session --session=$SESSION"; export STARTUP
    fi
  done
fi

unset SESSIONBIN
XTERM_COMMAND="xterm -geometry 80x24+10+10 -ls"

if [ -x /etc/X11/Xsession ]; then
  SESSIONBIN=/etc/X11/Xsession
elif [ -x /etc/X11/xdm/Xsession ]; then
  SESSIONBIN=/etc/X11/xdm/Xsession
elif [ -x /etc/X11/xinit/Xsession ]; then
  SESSIONBIN=/etc/X11/xinit/Xsession
elif [ -x /etc/X11/gdm/Xsession ]; then
  SESSIONBIN="/etc/X11/gdm/Xsession gnome-session"
elif [ -x /etc/gdm/Xsession ]; then
  SESSIONBIN="/etc/gdm/Xsession gnome-session"
elif [ -x /etc/kde/kdm/Xsession ]; then
  SESSIONBIN=/etc/kde/kdm/Xsession
elif [ -x /usr/dt/bin/Xsession ]; then
  XSTATION=1 DTXSERVERLOCATION=local
  export XSTATION DTXSERVERLOCATION
  SESSIONBIN=/usr/dt/bin/Xsession
elif [ -x /usr/dt/bin/dtsession ]; then
  SESSIONBIN=/usr/dt/bin/dtsession
elif which twm > /dev/null 2>&1; then
  $XTERM_COMMAND &
  SESSIONBIN=twm
fi

if [ "x${SESSIONBIN}" = "x" ]; then
  echo "No session located; just starting a terminal"
  $XTERM_COMMAND
  echo "Terminal closed with return code $?"
else
  echo "Starting session: $SESSIONBIN"
  $SESSIONBIN
  echo "Session terminated with return code $?"
fi

vncserver-virtual -kill $DISPLAY

Any ideas what I need to change to get my RealVNC session to connect to LightDM?
Can't get RealVNC to work with LightDM
debian;x11;xorg;vnc
null
_unix.140001
Can I configure the Tor Browser which comes with Whonix to use a proxy, so that it becomes an extra hop after the Tor exit node? Connecting as follows:

Browser (workstation) > Tor (gateway) > exit node > proxy

effectively rendering it a proxy chain. In contrast to the Tor Browser Bundle, which connects to Tor via 127.0.0.1, under Whonix we have a different situation where we connect through the gateway to Tor, so a connection like the one I described might be possible. Is there any more recommended solution for proxy chaining under Whonix?
Possible to add proxy after TOR exit node?
proxy;tor;whonix;proxychains
null
_unix.301986
I restored add-ons, bookmarks, and favorites from another installation (Manjaro) by copying the contents of ~/.mozilla/firefox/??????.default into the correct folder of this installation.Now, I cannot search from the address bar or the search pane in firefox. Nor can I add a search provider from the firefox preferences window (below)
How do I re-enable search from the menu bar?
linux mint;firefox
I chose to reset Firefox as detailed in their tutorial. However, it did not fix the problem immediately. My add-ons (Xmarks and LastPass) were not re-enabled until I restarted Firefox one more time (the refresh was the first time). Then I discarded my Xmarks settings and re-imported them from a previous save. It was not clear to me whether re-importing my Xmarks settings or just restarting the browser for the second time was the solution.
_webapps.102509
Meetup.com is useful, but it never forgets anything, even when I tell it to.If I mark down an interest and then later remove it, or if I join a group and then leave it, or even if I just look at the group's page too often, Meetup will use that data over and over again to make totally useless suggestions for meetups that I will never actually go to, either because I don't want to go to meetups for that interest, or because I'm not eligible to join the group.Is there any way to stop Meetup.com from suggesting a specific group, or from using a specific interest wastefully to suggest groups?
How can I stop getting suggestions for a specific group?
meetup
null
_codereview.15510
Now this is something I've looked into, and while I have a working solution, I don't like it.

Background: Through our intranet website, we want to run a process that copies a file from another machine, unzips it, and then analyzes the content. As the file is large, this takes some time (usually around 5-6 minutes). Rather than have the user just hit a Button and pray that they get a success message in 5-6 minutes, we want to show the progress via updates to a TextBox.

What I've learned so far: This isn't as simple as putting everything into an UpdatePanel and updating it at various steps in the process. Seems like it would be, but it's not. I looked into threading as well, but I couldn't get it working. That is to say, I got the process to run on a separate thread, but while it was running, the interface wouldn't update. It would queue up everything and then display it all at once, once the process finished. As I'm still relatively new, the possibility that I was just doing something wrong is high.

What I have (which works, I guess...): Two .aspx pages, DatabaseChecker.aspx and Processing.aspx.

DatabaseChecker.aspx:

<form id="form1" runat="server">
    <asp:Button ID="btnExecute" runat="server" onclick="btnExecute_Click" style="height: 26px" Text="Execute" />
    <iframe src="Processing.aspx" name="sample" width="100%" height="700px" style="border-style: none; overflow: hidden;">
    </iframe>
</form>

Processing.aspx:

<meta http-equiv="refresh" content="1" />
<form id="form1" runat="server">
    <asp:TextBox ID="TextBox1" runat="server" Height="633px" TextMode="MultiLine" Width="504px"></asp:TextBox>
</form>

The .aspx portion is very simple. DatabaseChecker.aspx simply has a button to begin the process, and it has Processing embedded as an iframe. The result looks like this: The TextBox1 in Processing.aspx is where the progress update goes. Now let me just point out the dirty trick and the part I don't like right now: in Processing.aspx, there is a meta tag to refresh the page once per second.

How it works (summary): When the process starts, a Session variable called ["Running"] is set to true. When the process ends, Session["Running"] is set to false. Since Processing.aspx refreshes once per second, it saves the current contents of TextBox1.Text to another Session variable called ["TextBoxContent"], and then the Page_Load method for Processing.aspx fills the TextBox back up with the previous content and adds a period. So the output will begin simply looking like "Process Starting", but after 10 seconds it will look like "Process Starting.........." (one period per second).

How it works (details): The process begins in DatabaseChecker.aspx's Execute button:

protected void btnExecute_Click(object sender, EventArgs e)
{
    Session["TextBoxContent"] = "Copying .zip file from other machine...";
    Session["Running"] = true;
    Thread thread = new Thread(new ThreadStart(TheProcess));
    thread.IsBackground = true;
    thread.Start();
}

private void TheProcess()
{
    CopyFromOtherMachine();
    UnzipFiles();
    ConvertTextFilesToDataTables();
    //and so on and so forth
    Session["Running"] = false;
}

private void CopyFromOtherMachine()
{
    if (File.Exists(Path.Combine(FileRootDirectory, DesiredFileName)))
    {
        Session["TextBoxContent"] += "Previous .zip file already detected. Deleting...";
        File.Delete(Path.Combine(FileRootDirectory, DesiredFileName));
        Session["TextBoxContent"] += "OK!" + Environment.NewLine;
    }
    Session["TextBoxContent"] += "Copying .zip from other machine...";
    File.Copy(@"\\someothermachine\production\desiredfile.zip", Path.Combine(FileRootDirectory, DesiredFileName));
    Session["TextBoxContent"] += "OK!" + Environment.NewLine;
}

// UnzipFile()
// ConvertTextFilesToDataTables()
// and so on

And then we have Processing.aspx's Page_Load method, which is where we display our progress:

protected void Page_Load(object sender, EventArgs e)
{
    if (Session["TextBoxContent"] != null)
    {
        TextBox1.Text = Session["TextBoxContent"].ToString();
        if ((bool)Session["Running"] != false)
        {
            Session["TextBoxContent"] += ".";
        }
    }
}

What I want to improve: Basically, everything. This whole thing feels like a really makeshift house of cards. In particular, I don't like that the page has to update once per second, particularly because the Windows mouse icon flips between "loading" and "not loading" cursors very fast. If the user knows the system and knows what's going on, then yeah, big deal, we can just deal with it. But I think to the average user the behavior is jarring; it feels like something might be wrong. Hopefully I've made it clear what I'm trying to achieve overall, so I'm open to other ideas about how to go about it.
Making a page update based on the progress of a process
c#;asp.net
Have you considered using SignalR?As their homepage states, it's a library for ASP.NET developers that makes it incredibly simple to add real-time web functionality to your applications. What is real-time web functionality? It's the ability to have your server-side code push content to the connected clients as it happens, in real-time.
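A minimal sketch of how that could replace the Session-plus-refresh approach, using classic ASP.NET SignalR 2.x (the hub name, the Report helper, and the appendStatus client method are illustrative, not part of the answer):

using Microsoft.AspNet.SignalR;

// Hypothetical hub the browser connects to. Requires the SignalR NuGet package
// and an OWIN Startup class that calls app.MapSignalR().
public class ProgressHub : Hub { }

public static class Progress
{
    // Call this from the background thread instead of appending to Session
    public static void Report(string line)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
        hub.Clients.All.appendStatus(line); // appendStatus is defined in client-side JS
    }
}

On the page, a few lines of JavaScript subscribe to appendStatus and append each line to the text box, so the once-per-second meta refresh is no longer needed.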
_unix.67054
I've just bought new RAM and I'd like to benchmark it and compare it with my old RAM. How can I do that?
How to benchmark RAM memory with a Linux Distro?
ram;benchmark
The package hardinfo (http://sourceforge.net/projects/hardinfo.berlios/) is a pretty decent system benchmarker with a nice GUI. The simplest way to compare the two would be to benchmark one, save the results, and then compare them to your benchmarking of the other.

EDIT

Depending on your distro, you may already have hardinfo installed; for example, on Lubuntu it is called "System Profiler and Benchmark".
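If you would rather diff the results from a script, hardinfo can (assuming your build includes its report mode; check hardinfo --help) write the report to a file:

hardinfo -r > old-ram.txt   # repeat after swapping in the new RAM, then diff the two files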
_cs.65190
I'm just getting started with CS. In school I heard that modern microprocessors are not perfect. So what are the issues? Are those problems related to energy (power draw), time, or speed (clock speed)? Or is the design faulty? If you could design the perfect CPU, what would it look like?
What are the design issues with today's microprocessors?
computer architecture;cpu
There are inherent tradeoffs in targeting a given use. An implementation optimized for one workload, power budget, and cost (at a particular volume of sales) will necessarily be less than optimal for some other use.Binary compatibility places another constraint on optimization. An ISA which is a good/easy compiler target and which is not strongly tied to a particular set of implementation techniques and technologies will be less than optimal for a given implementation technology (i.e., some optimizations are hindered by the abstraction presented in the ISA) and will sacrifice some benefits in performance, energy-efficiency, etc. for these other goals.(Redesigning the ISA for every implementation is not a solution, even when using an intermediate-level software distribution format to provide compatibility. The cost and delay of ISA design (including developing an optimizing compiler for the software distribution format) constrains how extensively changes can be made to optimize for particular targets. Even microarchitectures are typically extensively reused because of development costs, including time to market and risk.)Time to market also constrains optimization. High performance designs targeting new manufacturing processes begin work years before the process characteristics are defined. (This also applies to application targets. Algorithms and use patterns change.) There is also a limited amount of design effort that can be reasonably applied. Even if the value of optimizations could justify the development costs, worse but available products can establish market momentum.
_bioinformatics.81
I have a FASTA file with 100+ sequences like this:

>Sequence1
GTGCCTATTGCTACTAAAA ...
>Sequence2
GCAATGCAAGGAAGTGATGGCGGAAATAGCGTTA...
...

I also have a text file like this:

Sequence1 40
Sequence2 30
...

I would like to simulate next-generation paired-end reads for all the sequences in my FASTA file. For Sequence1, I would like to simulate at 40x coverage. For Sequence2, I would like to simulate at 30x coverage. In other words, I want to control my sequence coverage for each sequence in my simulation.

Q: What is the simplest way to do that? Any software I should use? Bioconductor?
How to simulate NGS reads, controlling sequence coverage?
fasta;ngs;simulated data
I am not aware of any software that can do this directly, but I would split the FASTA file into one sequence per file, loop over them in bash, and invoke ART (or another sequence simulator) on each sequence.
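A sketch of that approach, assuming ART's art_illumina is on the PATH (the HS25 profile and the read-length/fragment-size values are placeholders to adapt; the file names are made up):

# split the multi-FASTA into one file per record
awk '/^>/{f=substr($1,2)".fa"} {print > f}' sequences.fa

# simulate paired-end reads per sequence at the coverage listed in coverage.txt
# (-f is ART's fold-coverage option; -p paired; -l read length; -m/-s fragment mean/sd)
while read -r seq cov; do
    art_illumina -ss HS25 -i "$seq.fa" -p -l 150 -m 200 -s 10 -f "$cov" -o "${seq}_reads"
done < coverage.txt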
_datascience.16961
I'm having trouble finding a good reward function for the pendulum problem. The function I'm using is $-x^2 - 0.25\,\text{xdot}^2$, which is the quadratic error from the top, with $x$ representing the current location of the pendulum and $\text{xdot}$ the angular velocity. It takes a lot of time with this function and sometimes it doesn't work. Does anyone have other suggestions? I've been looking on Google but didn't find anything I could use.
Reinforcement learning, pendulum python
reinforcement learning
You could use the same reward function that OpenAI's Inverted Pendulum is using:

$costs = -(\Delta_{2\pi}\theta)^2 - 0.1\,\dot{\theta}^2 - 0.001\,u^2$

where $(\Delta_{2\pi}\theta)$ is the difference between the current and desired angular position, computed modulo $2\pi$. The variable $u$ denotes the torque (the action of your RL agent). The optimum is to get as close to zero cost as possible. The idea here is that you have a control problem in which you can come up with a quadratic "energy" or cost function that tells you the cost of performing an action at EVERY single time step. In this paper (p. 33, section 5.2) you can find a detailed description. I have tested RL algorithms against this objective function and did not encounter any convergence problems in either MATLAB or Python. If you still have problems, let us know what kind of RL approach you implemented and how you encoded the location of the pendulum. Hope it helps!
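For reference, a small Python version of that cost, mirroring the way Gym's pendulum environment computes it (the function and variable names are just illustrative):

import numpy as np

def angle_normalize(theta):
    # map the angle into [-pi, pi) so that "upright" corresponds to 0
    return ((theta + np.pi) % (2 * np.pi)) - np.pi

def reward(theta, theta_dot, u):
    # negative cost: 0 is the best achievable value (upright, still, zero torque)
    return -(angle_normalize(theta) ** 2 + 0.1 * theta_dot ** 2 + 0.001 * u ** 2)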
_unix.10785
Possible Duplicate: Arch Linux not booting after system update

After running a system update in which the kernel was updated, when I try to boot Arch (in single user, not quiet), I get the message "bin/sh can't access tty" and I get dropped at a prompt, but I can't type. I am dual-booting with OS X, if that makes a difference.
Arch Linux not functional after kernel update
kernel;arch linux;boot;dual boot;tty
null
_webmaster.33945
After a recent upgrade to Parallels Plesk 11, we decided to start using their Web Presence Builder tool. However, every video (documentation and instructional) I have viewed shows that the link should be under Websites and Domains, or even on the home page. It is in neither location. I have verified it is both installed and up to date, under Server -> Updates and Upgrades. Any idea how I access the Web Presence Builder?
parallels plesk 11 missing web presence builder
plesk
For others' reference: the site builder link is hidden if your license does not support the Power Pack (GoDaddy's name for it; it may be named differently under other providers). You can view your license by going to Server -> License Management. Simply look for "websites by Web Presence Builder"; if the value is limited to 0, you need to talk to your provider. GoDaddy's upgrade was 6.99/mo.
_webmaster.59825
I want to configure a domain such that when a user goes to my website, the system identifies the user's IP country and automatically changes www.mysite.com/index.aspx to:

www.unitedstate.mysite.com (for US people)
www.france.mysite.com (for French people)
www.japan.mysite.com (for Japanese people)

I want to develop my website globally, changing the language and currency by identifying the country of the user visiting the website based on the IP address. Can I do it? I'm using IIS on a private ASP.net Windows Server.
How to implement country domains based on geo IP address in ASP.net?
subdomain;url rewriting;iis;users;geotargeting
null
_softwareengineering.352668
Is there any reason not to build JSON data that can be indexed by some key? For example, in the WhenIWork API below, using the user's id to quickly access the data? The reason I'm asking is that for a lot of uses on the client side you could easily index into the array and grab the data you need, versus looping through the JSON array looking for a specific id. But a lot of APIs do not do this (two examples below).

WhenIWork API -- Users

{
  "users": [
    {
      "id": 4364,
      "login_id": 2112,
      "first_name": "Goldie",
      "last_name": "Wilson"
    },
    {
      "id": 27384,
      "login_id": 2112,
      "email": "[email protected]",
      "first_name": "Jennifer",
      "last_name": "Parker"
    }
  ]
}

GitHub API -- Events

[
  {
    "type": "Event",
    "public": true,
    "payload": {},
    "repo": {
      "id": 3,
      "name": "octocat/Hello-World",
      "url": "https://api.github.com/repos/octocat/Hello-World"
    },
    "actor": {
      "id": 1,
      "login": "octocat",
      "gravatar_id": "",
      "avatar_url": "https://github.com/images/error/octocat_happy.gif",
      "url": "https://api.github.com/users/octocat"
    },
    "org": {
      "id": 1,
      "login": "github",
      "gravatar_id": "",
      "url": "https://api.github.com/orgs/github",
      "avatar_url": "https://github.com/images/error/octocat_happy.gif"
    },
    "created_at": "2011-09-06T17:26:27Z",
    "id": 12345
  }
]

In my specific case I have a users portion of the API which gives me data similar to the code above. Then by going to users/availability I can get all the availability for users. But right now I have to loop through all the data looking for specific IDs.

[
  [
    { "user_id": 41, "date": "2017-07-01", "status": "Unavailable" },
    { "user_id": 41, "date": "2017-07-02", "status": "Available" }
  ],
  [
    { "user_id": 47, "date": "2017-07-01", "status": "Available" },
    { "user_id": 47, "date": "2017-07-02", "status": "Leave/TDY" }
  ]
]
JSON data with a key/index for easy searching
php;json
I think the reason most APIs represent those data structures with arrays instead of objects is that arrays inherently support iteration, but objects don't. For example, to iterate over the keys in a JSON object in JavaScript, you must inspect the object's properties (but avoid certain properties) using Object.keys or something similar. Iteration is more natural if arrays are used, and an index can be built very simply by iterating over the array once:

arr.reduce((index, item) => Object.assign({}, index, {[item.id]: item}), {})

Another reason an API might use an array over an object is that it reflects the format in which data is fetched from persistence. In every database I've used, results are always represented as a collection (e.g. SQL query rows), not a map, so it's natural to carry that structure to the interface layer without changing it. Furthermore, collections work regardless of whether there is a unique key or not, but maps work best with a unique key.

Finally, arrays are an ordered type. If the order of results has meaning (like events in a timeline), then returning a map wouldn't be very useful, because the order of properties in JSON is not intended to be maintained.
_unix.343688
I decided to install Linux as I was fed up with getting notifications from Windows telling me to get my software licence. That's not the only reason I moved to Linux, but it's another topic. Anyway, I installed Kali Linux 2016.1 and I have issues with sound, which is very noisy and squeaky. Later I decided to upgrade to 2016.2, but that didn't solve the problem; the sound is still the same. How can I fix this problem?

My Linux distro: Kali Linux 2016.2, 32-bit, i386 arch.
My hardware: Acer laptop.
Sound driver: I guess it is PulseAudio.

The output of lsmod | grep snd is:

snd_hda_codec_hdmi     40960  1
snd_hda_codec_realtek  65536  1
snd_hda_codec_generic  65536  1 snd_hda_codec_realtek
snd_hda_intel          28672  3
snd_hda_codec          94208  4 snd_hda_intel,snd_hda_codec_hdmi,snd_hda_codec_generic,snd_hda_codec_realtek
snd_hda_core           57344  5 snd_hda_intel,snd_hda_codec,snd_hda_codec_hdmi,snd_hda_codec_generic,snd_hda_codec_realtek
snd_hwdep              16384  1 snd_hda_codec
snd_pcm                86016  4 snd_hda_intel,snd_hda_codec,snd_hda_core,snd_hda_codec_hdmi
snd_timer              28672  1 snd_pcm
snd                    57344 14 snd_hda_intel,snd_hwdep,snd_hda_codec,snd_timer,snd_hda_codec_hdmi,snd_hda_codec_generic,snd_hda_codec_realtek,snd_pcm
soundcore              16384  1 snd

and the output of lspci -v | grep -A7 -i audio is:

00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03)
	Subsystem: Acer Incorporated [ALI] 82801I (ICH9 Family) HD Audio Controller
	Flags: bus master, fast devsel, latency 0, IRQ 29
	Memory at 96700000 (64-bit, non-prefetchable) [size=16K]
	Capabilities:
	Kernel driver in use: snd_hda_intel
	Kernel modules: snd_hda_intel
00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) (prog-if 00 [Normal decode])
Kali Linux 2016.2 sound problem
drivers;kali linux;audio
null
_unix.318455
I have an Ubuntu Linux server machine. It boots up fine and gets its network configuration up and running perfectly. What I want to do is somehow grab the network configuration and save it, and then reload that exact same network configuration: specifically the IP address and netmask, the router/gateway, and any static routes. The reasons are obscure and probably not so relevant. Is there a way to do this? To grab an existing network config and re-run it?

UPDATE IN RESPONSE TO COMMENT:

OK, to explain why I have such a strange request: what I am doing is executing a new operating system from within an existing operating system. The new operating system needs to implement the exact same network setup, i.e. router/gateway and IP address/netmask. The network information is not available via DHCP or any other mechanism; it gets injected into the first operating system when it boots. That means I need to pass the networking information from the first OS into the second OS, at which point I need to instruct the second OS to configure itself with the network information that was passed in. That's the context, although I suspect that explaining it will confuse the issue.
How to store network configuration and reload?
networking
null
_cs.16160
I was wondering how to remove duplicate values from a linked list in $\mathcal{O}(n\lg n)$ time. I have an idea: using merge sort, when we compare elements to choose the smaller one, if they are equal we advance one pointer and keep just one element. Any alternatives?
How to purge a linked list in $\mathcal{O}(n\log n)$ time?
sorting;linked lists
Sort the linked list in $O(n \log n)$ time. Then go through each element of the list in order and remove it if it is the same as the previous one, which takes $O(n)$ time. The total complexity is $O(n \log n)$, which is what you are looking for.
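As a concrete sketch of the two steps (Python, with a plain list standing in for the linked list):

def purge(values):
    values.sort()                        # O(n log n)
    result = []
    for v in values:                     # single O(n) pass in sorted order
        if not result or result[-1] != v:
            result.append(v)
    return result

print(purge([3, 1, 2, 3, 1]))            # [1, 2, 3]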
_codereview.86048
I wrote a map reduce program which uses multi threads, bounded buffers, condition variables. It works perfectly for some types of inputs. In the program there are N mappers, R reducers, 1 merger. mappers get data from input files, put each string in files to corresponding buffer-X-Y. reducers read from those buffers, sort the strings, and put each sorted sequence to buffer-Y. merger merges given sequences and writes to an output file.How I run:make;valgrind --tool=memcheck --leak-check=yes ./program 1 5 file o 10File content:Take me down to the paradise city Where the grass is green and the girls are pretty Take me home (oh won't you please take me home) Take me down to the paradise city Where the grass is green and the girls are pretty Take me home (oh won't you please take me home)If you make that file content a a bb bb it gives different output.Is there any way I can improve it?#include <errno.h>#include <stdio.h>#include <stdlib.h>#include <pthread.h>#include <sys/types.h>#include <unistd.h>#include <string.h>#define FILE_NAME_SIZE 20#define WORD_LENGTH 286// a bit extra to carry \t(occurance)struct arg { int index; char *file_name;};struct listNode { char *data; struct listNode *next; int occurrence;};int N;int R;char output_file[WORD_LENGTH];int bufsize;/* there are n*r buffers between mappers-reducers. 3d array. r buffers between reducers-merger. should be 2d array but buffer[N] is allocated for second type of buffers. */char ****buffer;// buffer[i][j][0]=fill, buffer[i][j][1]=use, // buffer[i][j][2]=count, buffer[i][j][3]=to be inserted in totalint ***buffer_info;pthread_mutex_t **mutex;pthread_cond_t **c_fill;pthread_cond_t **c_empty;void put(int i, int j, char *value) { strcpy(buffer[i][j][(buffer_info[i][j][0])], value); buffer_info[i][j][0] = (buffer_info[i][j][0] + 1) % bufsize; buffer_info[i][j][2]++; printf(put into buffer[%d][%d] '%s'\n, i, j, value);}char* get(int i, int j) { char* tmp = buffer[i][j][(buffer_info[i][j][1])]; buffer_info[i][j][1] = (buffer_info[i][j][1] + 1) % bufsize; buffer_info[i][j][2]--; printf(-get '%s' from buffer[%d][%d]\n, tmp, i, j); return tmp;}void insert (struct listNode **ptr, char *value) { struct listNode *newPtr; int cmp; // find a place to instert node to LL while(*ptr){ // Comparision to detect & remove duplicates nodes cmp = strcmp(value, (*ptr)->data); // duplicate if(cmp == 0){ (*ptr)->occurrence++; return; } // the point where i need to add the node if(cmp < 0) break; ptr = &(*ptr)->next; } // now here *ptr points to the pointer that i want to change // it can be NULL, if we are at the end of the LL newPtr = malloc(sizeof *newPtr); if(!newPtr) return; newPtr->data = strdup(value); newPtr->occurrence = 1; if(newPtr->data == NULL){ free(newPtr); return; } // here we are connecting our brand new node to the LL newPtr->next = *ptr; *ptr = newPtr;}static void *merger(){ FILE *file; char temp[WORD_LENGTH]; char temp2[WORD_LENGTH+30]; int occurrence=0; int i; char *tok; // once we have the element that comes first we will put write it to file. file = fopen(output_file, w); if (file == NULL) { printf(Error opening file!\n); exit(1); } // first element (according to asc order) in each file will be here // we will use this array to find the next string to write to output char child_heads[R][WORD_LENGTH]; int child_heads2[R]; // when there is no string left in a temp, we will // describe it here. value 1 means we are done. 
int child_done[R]; for(i = 0; i<R; i++){ child_done[i] = 0; // printf(buffer_info[%d][%d][3]: %d\n, N, i, buffer_info[N][i][3]); if(buffer_info[N][i][3] != 0){ pthread_mutex_lock(&mutex[N][i]); while (buffer_info[N][i][2] == 0) pthread_cond_wait(&c_fill[N][i], &mutex[N][i]); strcpy(temp2, get(N, i)); buffer_info[N][i][3]--; pthread_cond_signal(&c_empty[N][i]); pthread_mutex_unlock(&mutex[N][i]); tok = strtok(temp2, \t); strcpy(temp, tok); while((tok = strtok(NULL, \t))) occurrence=atoi(tok); strcpy(child_heads[i], temp); child_heads2[i] = occurrence; } else { child_done[i] = 1; } // printf(%d.%d\n, i, child_done[i]); } int done; int min_i = -1; while(1){ min_i = -1; done = 1; // comparisons are not started, we assign the first available // item as the minimum for comparisons. if(min_i == -1){ for(i=0; i<R; i++){ if(child_done[i]==0){ min_i = i; break; } } } for(i=0; i<R; i++){ if(child_done[i]==0 && strcmp(child_heads[i],child_heads[min_i])<0){ min_i = i; } // if all data in all files are read stop the outer loop if(child_done[i] == 0){ done = 0; } } if(done == 1){ break; } // write the element into the file fprintf(file, %s\t%d\n, child_heads[min_i], child_heads2[min_i]); // so we used the string that comes first in heads, // now we'll update that element's place in heads array. if(buffer_info[N][min_i][3] != 0){ pthread_mutex_lock(&mutex[N][min_i]); while (buffer_info[N][min_i][2] == 0) pthread_cond_wait(&c_fill[N][min_i], &mutex[N][min_i]); strcpy(temp2, get(N, min_i)); buffer_info[N][min_i][3]--; pthread_cond_signal(&c_empty[N][min_i]); pthread_mutex_unlock(&mutex[N][min_i]); tok = strtok(temp2, \t); strcpy(temp, tok); while((tok = strtok(NULL, \t))) occurrence=atoi(tok); strcpy(child_heads[min_i], temp); child_heads2[min_i] = occurrence; } else { // there is no element coming from the temp child_done[min_i] = 1; } } // close temp files fclose(file); pthread_exit(NULL);}static void *reducer(void *arg){ int index=*((int*)arg); // printf(--- reducer %d here!\n, index); char temp[WORD_LENGTH]; char temp2[WORD_LENGTH+30]; strcpy(temp, ); strcpy(temp2, ); int j, k; struct listNode *head = NULL; // read from buffer for(j=0; j<N; j++){ for(k=0; k<buffer_info[j][index][3]; k++){ pthread_mutex_lock(&mutex[j][index]); while (buffer_info[j][index][2] == 0) pthread_cond_wait(&c_fill[j][index], &mutex[j][index]); strcpy(temp, get(j, index)); pthread_cond_signal(&c_empty[j][index]); pthread_mutex_unlock(&mutex[j][index]); insert(&head, temp); } } // this buffer must carry a sorted sequence. // when there is no items left to send, then merger must be notified // how to know when there is no items left to send? // her buffer[i][j] iin // write to buffer struct listNode *ptr = head; while (ptr){ sprintf(temp2, %s\t%d, ptr->data, ptr->occurrence); // write word to a buf. 
pthread_mutex_lock(&mutex[N][index]); while (buffer_info[N][index][2] == bufsize) pthread_cond_wait(&c_empty[N][index], &mutex[N][index]); // see buffer definition to understand why (N) is here put(N, index, temp2); buffer_info[N][index][3]++; pthread_cond_signal(&c_fill[N][index]); pthread_mutex_unlock(&mutex[N][index]); // printf(Reducer %d - %d.%s\t%d\n, index, i, ptr->data, ptr->occurrence); ptr = ptr->next; } // deallocations while (head){ ptr = head; head = head->next; free(ptr->data); free(ptr); } pthread_exit(NULL);}static void *mapper(void *arg_ptr){ // printf(mapper %s here!\n, ((struct arg *) arg_ptr)->file_name); int bytes; int j, i; i = ((struct arg *) arg_ptr)->index; // read input file char temp[WORD_LENGTH]; FILE *file; file = fopen(((struct arg *) arg_ptr)->file_name, r); if (file == NULL) { printf(Error opening file: %s\n, ((struct arg *) arg_ptr)->file_name); return NULL; } // scan the next %s from stream and put it to temp while(fscanf(file, %s, temp) > 0){ bytes = 0; int k; for(k=0; k<strlen(temp)+1; k++){ bytes += temp[k]; } j = bytes % R; // write word to a buf. pthread_mutex_lock(&mutex[i][j]); while (buffer_info[i][j][2] == bufsize) pthread_cond_wait(&c_empty[i][j], &mutex[i][j]); put(i, j, temp); buffer_info[i][j][3]++; pthread_cond_signal(&c_fill[i][j]); pthread_mutex_unlock(&mutex[i][j]); // good luck understanding :) } fclose(file); pthread_exit(NULL);}int main(int argc, char *argv[]) { printf(_______________________________________\n); int i, j, k, ret; // program inputs: <N> <R> <infile1> <infileN> <finalfile> <bufsize> N = atoi(argv[1]); // atoi = ascii to int R = atoi(argv[2]); char input_files[N][WORD_LENGTH]; for(i=0; i<N; i++){ strcpy(input_files[i], argv[3+i]); } strcpy(output_file, argv[3+N]); bufsize = atoi(argv[4+N]); if(bufsize>10000 || bufsize < 10 || N > 20 || N < 1 || R > 10 || R < 1){ printf(Input is out of range!\n); return 0; } // create buffer buffer = (char* ***)malloc((N+1) * sizeof(char* **)); for(i = 0; i < (N+1); i++){ buffer[i] = (char* **)malloc(R * sizeof(char* *)); for(j = 0; j < R; j++){ buffer[i][j] = (char* *)malloc(bufsize * sizeof(char*)); for(k = 0; k < bufsize; k++){ buffer[i][j][k] = (char*)malloc(WORD_LENGTH * sizeof(char)); } } } // buffer info. 
see decleration for explaination buffer_info = (int* **)malloc((N+1) * sizeof(int* *)); for(i = 0; i < (N+1); i++){ buffer_info[i] = (int* *)malloc(R * sizeof(int*)); for(j = 0; j < R; j++){ buffer_info[i][j] = (int* )malloc(4 * sizeof(int)); for(k = 0; k < 4; k++){ buffer_info[i][j][k] = 0; } } } // create mutex mutex = (pthread_mutex_t* *)malloc((N+1) * sizeof(pthread_mutex_t*)); for(i = 0; i < (N+1); i++){ mutex[i] = (pthread_mutex_t*)malloc(R * sizeof(pthread_mutex_t)); for(j = 0; j < R; j++){ mutex[i][j] = (pthread_mutex_t) PTHREAD_MUTEX_INITIALIZER; } } // create cond vars c_empty = (pthread_cond_t* *)malloc((N+1) * sizeof(pthread_cond_t*)); for(i = 0; i < (N+1); i++){ c_empty[i] = (pthread_cond_t*)malloc(R * sizeof(pthread_cond_t)); for(j = 0; j < R; j++){ c_empty[i][j] = (pthread_cond_t) PTHREAD_COND_INITIALIZER; } } c_fill = (pthread_cond_t* *)malloc((N+1) * sizeof(pthread_cond_t*)); for(i = 0; i < (N+1); i++){ c_fill[i] = (pthread_cond_t*)malloc(R * sizeof(pthread_cond_t)); for(j = 0; j < R; j++){ c_fill[i][j] = (pthread_cond_t) PTHREAD_COND_INITIALIZER; } } // create mapper threads pthread_t tids[N]; struct arg args[N]; for(i=0; i<N; i++){ args[i].index = i; args[i].file_name = input_files[i]; ret = pthread_create(&(tids[i]), NULL, &mapper, (void *) &args[i]); if (ret != 0) { return 0; } } for(i=0; i<N; i++){ ret = pthread_join(tids[i], NULL); if (ret != 0) { printf(thread join failed \n); return 0; } } // create reducer threads pthread_t tids_r[R]; int args_r[R]; for(i=0; i<R; i++){ // either pass the addresses of array elements, or allocate new memory // in each iteration and pass the address. otherwise, we have memory problems. args_r[i] = i; ret = pthread_create(&(tids_r[i]), NULL, &reducer, (void *) &args_r[i]); if (ret != 0) { printf(thread create failed \n); return 0; } } for(i=0; i<R; i++){ ret = pthread_join(tids_r[i], NULL); if (ret != 0) { printf(thread join failed \n); return 0; } } // create merger thread pthread_t tid; ret = pthread_create(&tid, NULL, &merger, NULL); if (ret != 0) { printf(thread create failed \n); return 0; } ret = pthread_join(tid, NULL); if (ret != 0) { printf(thread join failed \n); return 0; } // freeing for(i = 0; i < (N+1); i++){ for(j = 0; j < R; j++){ for(k = 0; k < bufsize; k++){ free(buffer[i][j][k]); } free(buffer[i][j]); free(buffer_info[i][j]); } free(buffer[i]); free(buffer_info[i]); free(mutex[i]); free(c_empty[i]); free(c_fill[i]); } free(buffer); free(buffer_info); free(mutex); free(c_empty); free(c_fill); return 0;}
Mergesort using map-reduce, multithreads, buffers and condition variables
c;multithreading;memory management;mergesort;mapreduce
null
_cs.6552
Given a set of coins with different denominations $c_1, \ldots, c_n$ and a value $v$, you want to find the least number of coins needed to represent the value $v$. E.g. for the coin set $\{1,5,10,20\}$ this gives 2 coins for the sum 6 and 6 coins for the sum 19. My main question is: when can a greedy strategy be used to solve this problem?

Bonus points: Is this statement plainly incorrect? (From: How to tell if greedy algorithm suffices for the minimum coin change problem?)

"However, this paper has a proof that if the greedy algorithm works for the first largest denom + second largest denom values, then it works for them all, and it suggests just using the greedy algorithm vs the optimal DP algorithm to check it. http://www.cs.cornell.edu/~kozen/papers/change.pdf"

P.S. Note that the answers in that thread are incredibly crummy; that is why I asked the question anew.
When can a greedy algorithm solve the coin change problem?
algorithms;combinatorics;greedy algorithms
A coin system is canonical if the number of coins given in change by the greedy algorithm is optimal for all amounts. The paper D. Pearson. A Polynomial-time Algorithm for the Change-Making Problem. Operations Research Letters, 33(3):231-234, 2005 offers an $O(n^3)$ algorithm for deciding whether a coin system is canonical, where $n$ is the number of different kinds of coins. From the abstract:

"We then derive a set of $O(n^2)$ possible values which must contain the smallest counterexample. Each can be tested with $O(n)$ arithmetic operations, giving us an $O(n^3)$ algorithm."

The paper is quite short. For a non-canonical coin system, there is an amount $c$ for which the greedy algorithm produces a suboptimal number of coins; $c$ is called a counterexample. A coin system is tight if its smallest counterexample is larger than the largest single coin. The paper Canonical Coin Systems for Change-Making Problems provides necessary and sufficient conditions for coin systems of up to five coins to be canonical, and an $O(n^2)$ algorithm for deciding whether a tight coin system of $n$ coins is canonical. There is also some discussion in this se.math question.
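To make the notion of a counterexample concrete, here is a small brute-force check in Python that compares greedy change against an optimal dynamic-programming count over a fixed range (this is just a tester, not Pearson's polynomial-time algorithm):

def greedy_count(coins, amount):
    # number of coins the greedy algorithm uses (assumes 1 is among the coins)
    n = 0
    for c in sorted(coins, reverse=True):
        n += amount // c
        amount %= c
    return n

def optimal_count(coins, amount):
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        best[a] = min((best[a - c] + 1 for c in coins if c <= a), default=INF)
    return best[amount]

coins = [1, 5, 10, 25]
bad = [a for a in range(1, 200) if greedy_count(coins, a) != optimal_count(coins, a)]
print(bad or "canonical up to 199")   # prints the counterexamples, if any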
_unix.365774
What exactly do all the fields mean in the output of the GNU time program invoked as /usr/bin/time -v pipeline?

Command being timed: "xmllint config/locations/test1.xml"
User time (seconds): 0.00
System time (seconds): 0.00
Percent of CPU this job got: 50%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 3160
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 139
Voluntary context switches: 1
Involuntary context switches: 2
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0

I've searched and searched but found no details on a lot of the fields, like "File system inputs" and "File system outputs".
GNU Time Field Explanation
time
null
_unix.293163
Running Ubuntu 12.04. I have a script that sets up the environment; it's run by /etc/bash.bashrc. (It may be set to run by other shell profile inits as well; I didn't actually set it up myself.)

When I press Ctrl+Alt+T to open a terminal, the script runs once. But if I SSH into my machine from another box, the /etc/bash.bashrc init script runs, but then it also gets run again, and I'm not sure why. Other users experience the same 'double' initialization. It's not necessarily a problem, but I would really like to isolate the issue for academic reasons.

I added an echo into /etc/bash.bashrc to let me know when it's executing. I see the echo the first time, but not the second time, leading me to believe something else is executing the script; I just can't figure out what it is. I checked ~/.profile, ~/.bashrc, and /etc/profile.

I should emphasize that this behavior only happens when a user SSHes into the machine. I know there is some difference between interactive/login/non-login shells, but I'm not quite clear on the matter yet...
Initialization script runs twice on SSH
bash;shell script;shell;profile
To figure out what's invoking that environment setup script, you need to add tracing commands in that script, not in the one place where you know it's used.

There's no portable way to report the stack of shell script inclusions. In bash, you can see that through the BASH_SOURCE variable. Dash keeps sourced scripts open while running them, so listing the open files should give you a good idea. Since bash is the default interactive shell and dash is the default scripting shell, this should cover most cases.

if [ -n "$BASH_SOURCE" ]; then
  eval 'echo "${BASH_SOURCE[@]}"'
else
  readlink /proc/$$/fd/[4-9] 2>/dev/null
fi

(${BASH_SOURCE[@]} is protected behind eval because it's a syntax error in sh.)

Note that .bashrc or /etc/bash.bashrc is the wrong place for environment variables. It's only run by interactive non-login instances of bash; in particular it isn't executed for SSH logins. If /etc/bash.bashrc is executed for an SSH login, then it means that another script (typically ~/.bash_profile) sources it. The right place for site-specific environment variables is a script in /etc/profile.d/ or extra entries in /etc/environment.

Regarding login shells and interactive shells, see "Difference between Login Shell and Non-Login Shell?"
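As a concrete illustration of the bash branch, placing the snippet at the top of the suspected environment script and then opening an SSH session might print something like the following (all file names here are hypothetical):

# first lines of the environment setup script, e.g. /opt/site/env.sh
if [ -n "$BASH_SOURCE" ]; then
  eval 'echo "env.sh sourced via: ${BASH_SOURCE[@]}"'
fi

# possible output on an SSH login:
#   env.sh sourced via: /opt/site/env.sh /etc/bash.bashrc /home/alice/.bash_profile
#   env.sh sourced via: /opt/site/env.sh /home/alice/.bashrc
# two lines with different parents would pinpoint the double execution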
_unix.312754
[EDIT #1 by OP: Turns out this question is quite well answered by exiftool creator/maintainer Phil Harvey in a duplicate thread on the ExifTool Forum.]

[EDIT #2 by OP: From the ExifTool FAQ: "ExifTool is not guaranteed to remove metadata completely from a file when attempting to delete all metadata. See 'Writer Limitations'."]

I'd like to search my old hard drives for photos that are not on my current backup drive. Formats include jpg, png, tif, etc., as well as various raw formats (different camera models and manufacturers).

I'm only interested in uniqueness of the image itself, and not uniqueness due to differences in, say, the values of EXIF tags, the presence/absence of a given EXIF tag itself, embedded thumbnails, etc.

Even though I don't expect to find any corruption/data-rot between different copies of otherwise identical images, I'd like to detect that as well, along with differences due to resizing and color changes.

[Edit #3 by OP: For clarification: a small percentage of false positives is tolerable (a file is concluded to be unique when it isn't), and false negatives are highly undesirable (a file is wrongly concluded to be a duplicate).]

My plan is to identify uniqueness based on md5sums after stripping any and all metadata. How can I strip the metadata? Will exiftool -all= <filename> suffice?
How to strip metadata from image files
file metadata;exif
null
_softwareengineering.225113
A few sprints ago I was assigned a task that was primarily research. I had to figure out how to get our product to interoperate with a very complex black box that we did not develop. I couldn't think of a way to estimate this work. Even if I got the ball rolling and knew the immediate problem I faced, I could not get a sense of how many other problems I'd have to solve after that. I could never tell if I was almost done or far from it. How am I supposed to estimate a backlog item like this?

I want to elaborate on the nature of this assignment. I knew what calls I had to make to interoperate with the black box. That was the easy part. But the API took a very, very complex object as a parameter. Calling the API would throw an error, and it was not easy to figure out what that error was trying to tell me. The black box wouldn't tell me all the problems wrong with my request; it would just tell me the first problem it found. This made it very difficult to know how much work I had left.
In scrum, how do you give an estimate for a backlog item that is primarily research?
scrum;estimation;product backlog
If you can't estimate (and in this scenario it sounds like there really is no way to know in advance how long any particular part of the process will take), then the next-best option is to time-box the effort:

- decide how much time you're willing to spend on it, whether you get anywhere or not
- once you get into it, you may have a better idea of how to estimate the remaining effort
_unix.342138
On AIX (and on Unix generally) the native tar command doesn't play well with some archives created with GNU tar. I want to use GNU tar on AIX.

In the .spec file I put this line:

%define tar /opt/freeware/bin/tar

How do I tell the rpm -bb command to use this tar and not the tar in /usr/bin?
spec file: I want to use my tar
tar;rpm;aix;rpm spec
null
_cs.16697
I'm looking for job sites in applied/interdisciplinary mathematics, more specifically, say, postdocs or higher positions in mathematics and medical imaging, or mathematics and computer vision. I'm aware of most of the popular job sites (mathjobs, euro math jobs, jobs.ac.uk, nordic math jobs, etc.), but most of the jobs there are of a 'pure' nature, with very few for applied/interdisciplinary work.

I'm trying to find a postdoctoral position in mathematical imaging problems, one that would use a significant amount of conformal/quasiconformal mappings, Riemann surfaces, differential geometry, etc. Looking into individual groups' webpages is too much work. But if there's a webpage containing all the information, that'll be much better! So, if you know any such website for the above (for Europe (preferable) and the US), I'd appreciate it if you could pass them on to me. Thanks!
Job sites for applied/interdisciplinary mathematics related to computer science?
computational geometry;image processing;computer vision
null
_unix.378350
I am using GNOME Shell 3.22.0 on NixOS, and trying to enable natural scrolling for my mouse's scroll wheel. Under Settings, there is a 'natural scrolling' option, as shown in this screenshot.

My mouse wheel scrolls in the same (non-natural) direction whether 'natural scrolling' here is switched on or off.

How can I enable natural scrolling? Do I need to report this to GNOME (or NixOS) somehow as a bug?
natural scrolling does not work in gnome
gnome;scrolling;nixos
null
_unix.185050
Loading before the login screen is fast, but it takes 30-50 seconds after logging in. Once it's fully booted, it has no other speed issues. Something is bottlenecking during the login process and I don't know what it is.

I'm using the stock ATI driver and I have dual displays. I tried the proprietary ATI driver with no success, so I reverted to the stock driver.

I've seen other people with similar issues but no solutions.
Linux Mint 17.1 slow after login
linux mint
From what I see on the net, it seems to be related to Cinnamon. If you try Xfce you will find that it is much faster. Give it a try and report back please.

Regards,
Frank
_unix.285111
I want to extract the logs between the current timestamp and 15 minutes before, and send an email to the people configured. I developed the script below, but it's not working properly; can someone help me? I have a log file containing this pattern:

[2016-05-24T00:58:04.508-04:00] [oim_server1] [TRACE:32] [] [oracle.iam.scheduler.impl.quartz] [tid: OIMQuartzScheduler_QuartzSchedulerThread] [userId: oiminternal] [ecid: 0000LI6NBsP4yk4LzUS4yW1NBABd000003,1:21904] [APP: oim#11.1.2.0.0] [SRC_CLASS: oracle.iam.scheduler.impl.quartz.QuartzJob] [SRC_METHOD: <init>] Constructor QuartzJob
[2016-05-24T00:58:04.508-04:00] [oim_server1] [TRACE:32] [] [oracle.iam.scheduler.impl.quartz] [tid: OIMQuartzScheduler_QuartzSchedulerThread] [userId: oiminternal] [ecid: 0000LI6NBsP4yk4LzUS4yW1NBABd000003,1:21904] [APP: oim#11.1.2.0.0] [SRC_CLASS: oracle.iam.scheduler.impl.quartz.QuartzJob] [SRC_METHOD: <init>] Constructor QuartzJob
[2016-05-24T00:58:04.513-04:00] [oim_server1] [TRACE:32] [] [oracle.iam.scheduler.impl.quartz] [tid: OIMQuartzScheduler_Worker-1] [userId: oiminternal] [ecid: 0000LI6NBsP4yk4LzUS4yW1NBABd000003,1:21908] [APP: oim#11.1.2.0.0] [SRC_CLASS: oracle.iam.scheduler.impl.quartz.QuartzTriggerListener] [SRC_METHOD: triggerFired] Trigger state 0
[2016-05-24T00:58:04.515-04:00] [oim_server1] [TRACE:32] [] [oracle.iam.scheduler.impl.quartz] [tid: OIMQuartzScheduler_Worker-1] [userId: oiminternal] [ecid: 0000LI6NBsP4yk4LzUS4yW1NBABd000003,1:21908] [APP: oim#11.1.2.0.0] [SRC_CLASS: oracle.iam.scheduler.impl.quartz.QuartzTriggerListener] [SRC_METHOD: triggerFired] Trigger state 0
[2016-05-24T00:58:04.516-04:00] [oim_server1] [TRACE:32] [] [oracle.iam.scheduler.impl.quartz] [tid: OIMQuartzScheduler_Worker-1] [userId: oiminternal] [ecid: 0000LI6NBsP4yk4LzUS4yW1NBABd000003,1:21908] [APP: oim#11.1.2.0.0] [SRC_CLASS: oracle.iam.scheduler.impl.quartz.QuartzTriggerListener] [SRC_METHOD: triggerFired] Trigger Listener QuartzTriggerListener.triggerFired(Trigger trigger, JobExecutionContext ctx)
[2016-05-24T01:00:04.513-04:00] [oim_server1] [WARNING] [] [oracle.iam.scheduler.vo] [tid: OIMQuartzScheduler_Worker-7] [userId: oiminternal] [ecid: 0000LI6NBsP4yk4LzUS4yW1NBABd000003,1:21956] [APP: oim#11.1.2.0.0] IAM-1020021 Unable to execute job : CmyAccess Flat File WD Candidate with Job History Id:1336814[[
org.identityconnectors.framework.common.exceptions.ConfigurationException: Directory does not contain normal files to read HR-76
    at org.identityconnectors.flatfile.utils.FlatFileUtil.assertValidFilesinDir(FlatFileUtil.java:230)
    at org.identityconnectors.flatfile.utils.FlatFileUtil.getDir(FlatFileUtil.java:176)
    at org.identityconnectors.flatfile.utils.FlatFileUtil.getFlatFileDir(FlatFileUtil.java:182)
    at org.identityconnectors.flatfile.FlatFileConnector.executeQuery(FlatFileConnector.java:134)
    at org.identityconnectors.flatfile.FlatFileConnector.executeQuery(FlatFileConnector.java:58)
    at org.identityconnectors.framework.impl.api.local.operations.SearchImpl.rawSearch(SearchImpl.java:105)
    at org.identityconnectors.framework.impl.api.local.operations.SearchImpl.search(SearchImpl.java:82)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.identityconnectors.framework.impl.api.local.operations.ConnectorAPIOperationRunnerProxy.invoke(ConnectorAPIOperationRunnerProxy.java:93)
    at com.sun.proxy.$Proxy735.search(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.identityconnectors.framework.impl.api.local.operations.ThreadClassLoaderManagerProxy.invoke(ThreadClassLoaderManagerProxy.java:107)
    at com.sun.proxy.$Proxy735.search(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.identityconnectors.framework.impl.api.BufferedResultsProxy$BufferedResultsHandler.run(BufferedResultsProxy.java:162)

The script I have written counts the errors found and stores the number in a file; if the error count increases, it extracts the logs and sends a mail. I can configure cron for this, but the script I have written is not working fine. Can someone help me to extract the logs between the current time and the last 15 minutes and generate a temp file?

LogDir=/data/app/Oracle/Middleware/user_projects/domains/oim_domain/servers/oim_server1/logs
[email protected]
SUBJECT="Failed"
MESSAGE="Scheduler upd"
[email protected]

NOW=$(date +%FT%T.000%-04:00)
T2=$(date --date='15 minutes ago' +%FT%T.000%-04:00)
OUT=/tmp/oim_server1-diagnostic_$(date +%F-%H-%M).log

find $LogDir -mmin -15 -name "oim_server1-diagnostic.log" > files.txt

count=0;
if [ -f lastCount ]; then
    count=$(cat lastCount)
fi

while read file
do
    echo "reading file \n $file"
    currentCount=$(grep -c 'Directory does not contain normal files to read HR-76' $file)
    if [ $currentCount -ne $count -a $currentCount -ne 0 ]; then
        echo "Error Found $currentCount"
        awk -v TSTART="[$T2]" -v TEND="[$NOW]" '$1>=TSTART && $1<=TEND' $LogDir/oim_server1-diagnostic.log > $OUT
        test -s $OUT && echo -e "$MESSAGE" | mailx -S smtp=$SMTP -a $OUT -r $SENDER -s "$SUBJECT" $EMAIL1
        rm -f $OUT
    fi
    echo $currentCount > lastCount
done < files.txt

This script is extracting the logs, but not in the appropriate format. I find the matching errors with grep -c 'Directory does not contain normal files to read HR-76' $file. I want to extract all log lines between two timestamps. Some lines may not carry a timestamp, but I want those lines too; in short, I want every line that falls between the two timestamps. This script gives me a log file containing only the timestamped lines, and all the other lines are missing. Any suggestions? Please note that the start or end timestamp may not be present on every line of the log, but I want every line between those two timestamps.

A sample of the log mentioned above:

[2016-05-24T01:00:04.513-04:00] [oim_server1] [WARNING] [] [oracle.iam.scheduler.vo] [tid: OIMQuartzScheduler_Worker-6] [userId: oiminternal] [ecid: 0000LIt5i3n4yk4LzU^AyW1NEPxf000002,1:23444] [APP: oim#11.1.2.0.0] IAM-1020021 Unable to execute job : CmyAccess Flat File WD Employee with Job History Id:46608[[
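One way to get every line between the two stamps, including continuation lines that carry no timestamp of their own, is to keep a flag in awk that is updated only on timestamped lines; untimestamped lines then inherit the state of the last stamped line. A minimal sketch of that idea (it assumes both boundary stamps use the same UTC offset, so plain string comparison of the bracketed ISO timestamps is enough):

awk -v TSTART="[$T2]" -v TEND="[$NOW]" '
    /^\[/ { inrange = ($1 >= TSTART && $1 <= TEND) }   # update state on stamped lines only
    inrange                                            # print while inside the window
' $LogDir/oim_server1-diagnostic.log > $OUT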
How to extract logs between the current time and the last 15 minutes
text processing;awk;sed;grep
null
_codereview.62924
This is sourced from the Stanford Coursera self-study DB class SQL quizzes.

Students at your hometown high school have decided to organize their social network using databases. So far, they have collected information about sixteen students in four grades, 9-12. Here's the schema:

Highschooler (ID, name, grade)
English: There is a high school student with unique ID and a given first name in a certain grade.

Friend (ID1, ID2)
English: The student with ID1 is friends with the student with ID2. Friendship is mutual, so if (123, 456) is in the Friend table, so is (456, 123).

Likes (ID1, ID2)
English: The student with ID1 likes the student with ID2. Liking someone is not necessarily mutual, so if (123, 456) is in the Likes table, there is no guarantee that (456, 123) is also present.

The DB is here.

Prompt: Find the number of students who are either friends with Cassandra or are friends of friends of Cassandra. Do not count Cassandra, even though technically she is a friend of a friend.

My answer (which works) is below, but I am wondering if there is a more succinct way of accomplishing the same results. Any feedback would be appreciated.

select count(*)-1 from (
    select id2
    from friend f, highschooler a
    where a.name='Cassandra'
    and a.id=f.id1
    union
    select id2
    from friend f, highschooler a
    where a.id=f.id1
    and f.id1 in (
        select id2
        from friend f, highschooler a
        where a.name='Cassandra'
        and a.id=f.id1
    )
)
Database of students in a social network
sql;mysql
Style

- Indentation would make your query more readable.
- Table names in MySQL may be case sensitive, depending on the underlying filesystem. Therefore, it is safest to write the query using identifiers with the same case as stated in the problem.
- SQL keywords are conventionally written in ALL CAPS, though some programmers object to that convention.
- Single-letter names for table aliases are cryptic. While I can somewhat understand friend f, I have a harder time accepting highschooler a.

Formulation

- UNION, short for UNION DISTINCT, automatically deduplicates the result set. If possible, prefer UNION ALL for efficiency. However, in this case, you do need deduplication for the correct answer.
- The two halves of the union seem repetitive.
- Subtracting one from the final count is weird and risky. I suggest filtering out Cassandra herself from the result set.

Here's how I would write it:

SELECT count(Friend.ID1)
  FROM Highschooler AS Cassandra, Friend
 WHERE Cassandra.name = 'Cassandra'
   AND ( Friend.ID2 = Cassandra.ID
      OR Friend.ID2 IN ( SELECT Friend.ID1
                           FROM Friend
                          WHERE Friend.ID2 = Cassandra.ID ) )
   AND Friend.ID1 <> Cassandra.ID;
_softwareengineering.136148
I've been working on a messaging system for two years now. The system was written by a team that is long gone and involves email and document processing. The basic process is:

1. Receive an email, parse it, save attachments to a Samba share. Notify the message processor. We have a robust Java application doing this very well.
2. Process the message; this involves getting user data from LDAP and calling file processing web services on different platforms. The system has a sort of re-delivery service that polls the database for failed or timed-out messages in different states and resends them.
3. The file processing parts are EJB web services that basically run command line utilities.

The problem lies in our orchestration solution, which is based on OpenESB (almost dead now). It has a bunch of BPELs calling each other and calling remote EJBs (EJB 3.0 on GlassFish v2). The biggest problem is that too much logic is done in the BPELs (say, all database updates are done in BPELs; there's no persistence layer), and they look like the most horrifying spaghetti I've ever seen. Not to mention they make NetBeans, which we use to edit them, run really slowly, to the point that some files are only editable in a simple text editor as XML. Moreover, the system has a number of nasty bugs that would require major refactoring to fix, and I dread even thinking about it. These messages are handled manually by our support staff, so there's almost no client impact, but I would like to have a real transactional system anyway.

Now, for the question itself. I'm willing to spend a lot of my own free time trying to reach two goals: building a replacement for the ESB and BPELs, and learning new, preferably trendy, technologies. I would like to keep the code in the file processing EJBs since they run just fine, although I'm thinking of getting rid of SOAP as their remote interface.

So I'm asking for any insight on which technology (or technologies) would let me create a robust messaging solution that would:

- Have a friendly persistence layer; there won't be anything really complex in the DBs, just message metadata.
- Take care of balancing and polling calls to the file processing web services.
- Not require dealing with tons of XML in different places in order to add a new interface method.
- Be scalable: run on a number of machines.
- Allow realtime monitoring, like viewing message queues, current load, statuses, etc.
- Hopefully let me learn something new, meaning not the J2EE stack. Basically I'm open to anything except BPEL and OpenESB.
Choosing the right technology for a messaging system
architecture;web services;java ee;enterprise architecture;messaging
I'm having a lot of fun with Erlang right now. It's a great language in and of itself, and the real fun is in the framework and runtime it sits on. The Erlang runtime is designed for distributed, concurrent applications (it was born at Ericsson for their network infrastructure). For integration with other systems, there is RabbitMQ, which is built in Erlang and has native APIs for most of the big languages. It has a built-in management interface that allows you to view messages and the like.

Addressing your requirements specifically:

- Erlang has a built-in database called Mnesia. For durable messaging, Rabbit leverages Mnesia (if an instance goes down, the queue won't go down with it) and you don't have to worry about it yourself; it just works.
- Out of the box, Rabbit uses round-robin pub-sub. If you have 3 listeners on what's called a fanout queue, they will be alternated between for each message.
- Rabbit has an API-driven configuration, with UIs available to manage it without changing programs.
- Check: Erlang was built with scalability in mind; it was originally designed to support Ericsson's network infrastructure. Combined with RabbitMQ, you can distribute your message processing across many nodes.
- Check: RabbitMQ has many built-in and 3rd-party plugins for monitoring.
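To make the round-robin delivery concrete, here is a minimal sketch using the Python client pika (1.x API assumed; the queue name is made up for the example):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()
ch.queue_declare(queue='work')

# publisher: the default exchange routes by queue name
for i in range(6):
    ch.basic_publish(exchange='', routing_key='work', body='task %d' % i)

# consumer (run this part in several processes): RabbitMQ round-robins
# deliveries between all consumers attached to the same queue
def handle(channel, method, properties, body):
    print(body)

ch.basic_consume(queue='work', on_message_callback=handle, auto_ack=True)
ch.start_consuming()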
_webmaster.22774
I'm looking for a JavaScript-based tool for displaying draggable, zoomable maps. It has to have the following features:

- Can add markers, labels, balloons (a la Google Maps)
- Must be scriptable after creation (so I can add or update content on the fly)
- Must work offline
- Must scale to the browser size (specifically, it must be mobile friendly)

Ideally, pure JavaScript or based on the jQuery framework. I'd rather not have to have competing frameworks on a single page.
Javascript for draggable, zoomable map with Google-Maps-like features
javascript;map
null
_unix.356786
I set up two Asterisk servers (on Fedora) in different networks. My goal is to make a call from a softphone (on windows lite with IP 192.168.20.3) to Asterisk server 2, which is in the other network (IP 192.168.10.2). But the problem is in the registration between the two Asterisk servers, which are behind NAT.

NAT IP for Asterisk Server 1: 100.100.100.100
NAT IP for Asterisk Server 2: 200.200.200.200

Architecture:

iax.conf on Asterisk server 1:

[general]
autokill=yes
externip=100.100.100.100
localnet=192.168.10.0/255.255.255.0
nat=yes

register => zone1:[email protected]

[zone2]
type=friend
host=200.200.200.200
trunk=yes
nat=yes
qualify=yes
secret=welcome
context=incoming_zone2
permit=0.0.0.0/0.0.0.0

iax.conf on Asterisk server 2:

[general]
externip=200.200.200.200
localnet=192.168.20.0/255.255.255.0
nat=yes
autokill=yes

register => zone1:[email protected]

[zone1]
type=friend
host=100.100.100.100
trunk=yes
nat=yes
qualify=yes
secret=welcome
context=incoming_zone1
permit=0.0.0.0/0.0.0.0

extensions.conf on Asterisk server 1:

[general]
autofallthrough=yes

[phones]
include => internal
include => remote

[internal]
exten => _5XXX,1,NoOp()
exten => _5XXX,n,Playback(hello-world)
exten => _5XXX,n,Dial(SIP/${EXTEN})
exten => _5XXX,n,Hangup()

[remote]
exten => _6XXX,1,NoOp()
exten => _6XXX,n,Playback(hello-world)
exten => _6XXX,n,Dial(IAX2/zone2/${EXTEN})
exten => _6XXX,n,Hangup()

[incoming_zone2]
include => internal

extensions.conf on Asterisk server 2:

[general]
autofallthrough=yes

[phones]
include => internal
include => remote

[internal]
exten => _6XXX,1,NoOp()
exten => _6XXX,n,Playback(hello-world)
exten => _6XXX,n,Dial(SIP/${EXTEN})
exten => _6XXX,n,Hangup()

[remote]
exten => _5XXX,1,NoOp()
exten => _5XXX,n,Playback(hello-world)
exten => _5XXX,n,Dial(IAX2/zone1/${EXTEN})
exten => _5XXX,n,Hangup()

[incoming_zone1]
include => internal

Registration state: Rejected

NOTES:
- PING between the two networks is OK.
- The firewall on the servers was turned off.
Asterisk and NAT: SIP and IAX registration failed on remote connection behind NAT
nat;asterisk;cisco;voip
null
_codereview.153336
Problem

Find 2nd degree connections (friends' friends) and output these 2nd degree connections ranked by the number of common friends (i.e. 1st degree connections) with you. (Example: if 2nd degree connection A has 10 common friends (1st degree connections) with you, but 2nd degree connection B has 8 common friends (1st degree connections) with you, then A should be ranked first.) Input is your connection graph represented by undirected graph nodes; output is a list of 2nd degree connections represented by graph nodes.

My current solution is to find all first degree connections, and then, for each first degree connection, try to find true (i.e. not overlapping with the first degree connections) second degree connections and build a frequency dictionary of common connections (between self and each 2nd degree connection). Finally, I sort the dictionary by frequency.

I'm wondering if there is any advice on a more efficient implementation (in terms of algorithmic time complexity), bugs in my code, and code style. I am thinking that using BFS could improve performance (in terms of algorithmic time complexity) for this problem; one possible simplification is sketched after the code below.

from collections import defaultdict

class Graph:
    def __init__(self):
        self.out_neighbour = defaultdict(list)

    def add_edge(self, from_node, to_node):
        self.out_neighbour[from_node].append(to_node)
        self.out_neighbour[to_node].append(from_node)

    def second_order_rank(self, from_node):
        first_order_set = set()
        for n in self.out_neighbour[from_node]:
            first_order_set.add(n)
        # key: node id, value: count of common first order connections
        second_order_rank = defaultdict(int)
        for first_order_node in self.out_neighbour[from_node]:
            for second_order_node in self.out_neighbour[first_order_node]:
                if second_order_node == from_node or second_order_node in first_order_set:
                    continue
                for second_order_neighbour in self.out_neighbour[second_order_node]:
                    if second_order_neighbour in first_order_set:
                        second_order_rank[second_order_node] += 1
        rank_order = []
        for n, c in second_order_rank.items():
            rank_order.append((c, n))
        rank_order = sorted(rank_order, reverse=True)
        result = []
        for r in rank_order:
            result.append(r[1])
        return result

if __name__ == "__main__":
    g = Graph()
    edges = [(1,2),(1,3),(2,3),(2,4),(2,5),(3,4),(3,6),(4,7),(4,10),(5,9),(6,8)]
    for e in edges:
        g.add_edge(e[0], e[1])
    print g.second_order_rank(1)
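A sketch of the simplification mentioned above, assuming a simple graph (no duplicate edges): the innermost loop is redundant, because for a second-degree node the number of common friends with the source equals the number of first-degree nodes adjacent to it, which the middle loop already enumerates exactly once each.

from collections import defaultdict

def second_degree_rank(adjacency, source):
    # hypothetical standalone variant of second_order_rank above
    first = set(adjacency[source])
    common = defaultdict(int)        # 2nd degree node -> number of common friends
    for friend in first:
        for candidate in adjacency[friend]:
            if candidate != source and candidate not in first:
                common[candidate] += 1   # 'friend' is one common connection
    ranked = sorted(((c, n) for n, c in common.items()), reverse=True)
    return [n for c, n in ranked]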
Second degree connection ranking
python;algorithm;python 2.7;graph
null
_webapps.44763
I'm trying to set up a form and spreadsheet database to track inventory on a daily basis at a remote location. The idea is that the person at the remote location can update our stock via a simple online form (using JotForm integrated with Google Spreadsheets), and then I take the raw data on sheet1 and make a nice-looking, organized front page with all the pertinent info plus some basic calculations (e.g. does this morning's opening stock match yesterday's closing level, etc.).

The best way I can figure to do this is to set up some basic call functions on the clean front page, but in order to make sure I'm using the latest data, I need to ensure that I'm always pointing to the most recent entry. The simplest way that I can think of to do this is to have the spreadsheet autosort every time a new entry is made, so that the last entry is always at the top. That way I can just write something like ='sheet1'!C2 in order to post the last entry in column 2, which should be the latest entry. As new entries are made, the front page should be updated with the latest numbers. (And also, I can know what I need to drive out in the morning before opening.)

I've tried using both an onEdit and an onOpen approach to the problem, but the onEdit only works when I go in and make manual changes to column D, which is annoying. When I use onOpen, it doesn't sort at all. I've read something about the possibility of sorting on change rather than on edit; will this work for me?

Here are the two sample scripts I've been working with that haven't produced what I wanted:

// LinkBack to this script:
// http://webapps.stackexchange.com/questions/7211/how-can-i-make-some-data-on-a-google-spreadsheet-auto-sorting/43036#43036

/**
 * Automatically sorts the 1st column (not the header row) Ascending.
 */
function onEdit(event){
  var sheet = event.source.getActiveSheet();
  var editedCell = sheet.getActiveCell();

  var columnToSortBy = 4;
  var tableRange = "A2:AQ149"; // What to sort.

  if(editedCell.getColumn() == columnToSortBy){
    var range = sheet.getRange(tableRange);
    range.sort( { column : columnToSortBy, ascending: false } );
  }
}

====================

function onOpen(event) {
  var sheet = SpreadsheetApp.getActiveSpreadsheet();

  var columnToSortBy = 4;
  var tableRange = "A2:AQ149";

  var range = sheet.getRange(tableRange);
  range.sort( { column : columnToSortBy, ascending: false } );
}
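For reference, a hedged sketch of the "sort on change" idea raised above, using an installable trigger (function names are made up; note that form submissions fire change/form-submit triggers, which a simple onEdit does not catch):

// run once by hand to install the trigger
function installSortTrigger() {
  var ss = SpreadsheetApp.getActive();
  ScriptApp.newTrigger('sortOnChange')
      .forSpreadsheet(ss)
      .onChange()
      .create();
}

function sortOnChange(e) {
  var sheet = SpreadsheetApp.getActive().getSheetByName('Sheet1');
  // newest entries float to the top, matching the ='sheet1'!C2 lookups
  sheet.getRange("A2:AQ149").sort({ column: 4, ascending: false });
}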
How to use Google Scripts w/ spreadsheet and jotform to auto sort so the newest entry is always listed on the top row?
google spreadsheets;google apps script;sorting
null
_unix.1641
I am trying to customize the color scheme of Kile/Kate. I could do it, except that I could not find any way to change the color of the side pane (the file list, etc.).

I prefer a dark background, and having a dark background in the editing space but a white background in the left pane is not good for my eyes.
How to change background color of side pane of Kate and Kile?
kde;colors;theme;kate;kile
null
_softwareengineering.135411
From http://www.microsoft.com/download/en/details.aspx?id=28942:

"ASP.NET MVC 4 also includes ASP.NET Web API, a framework for building and consuming HTTP services that can reach a broad range of clients including browsers, phones, and tablets. ASP.NET Web API is great for building services that follow the REST architectural style, plus it supports RPC patterns."

If ASP.NET MVC 4 supports RPC-style communication, what does that mean for WCF?

On what basis should we choose between WCF and ASP.NET MVC Web API's RPC mechanism?
If ASP.NET MVC 4 supports RPC style communication what does that mean for WCF?
.net;web services;asp.net mvc;wcf
Nothing. You're still free to use WCF where it is most suitable, or at your own discretion.

ASP.NET MVC has supported a RESTful communication style since its inception, and many people use it as a thin veneer for RESTful services. That doesn't automatically cause WCF to go obsolete, or make ASP.NET MVC the One Tool to Rule Them All. This is why carpenters and other craftsmen don't just have one type of hammer: they have several different types, each optimized for a particular type of hammering.

To help you decide which to use, listen to this Hanselman podcast:

This is not your father's WCF - All about the WebAPI with Glenn Block
"How does WCF fit into a world of Web 2.0 lightweight APIs? What's the WCF WebAPI and how does it compare to services in ASP.NET MVC?"

I haven't personally looked at it yet, but it wouldn't surprise me if the Web API you refer to in ASP.NET MVC 4 and the new WebAPI in WCF turn out to be the same thing. Phil Haack is probably using WCF to implement WebAPI internally in ASP.NET MVC 4, or they both resolve to the same internal mechanism.

See also: http://wcf.codeplex.com/wikipage?title=WCF%20HTTP
_webapps.99510
If a Facebook Group has, say, 1000 members, do they all get notifications of all posts to that group? (Assuming they have simply joined/favorited, not selected anything special, like see all).
Do all members of a Facebook Group get notifications of ALL posts?
facebook;facebook groups;facebook notifications
null
_softwareengineering.343687
I use a traditional MVC pattern for my web projects:

- Controllers: handle use-case scenario steps (can call business logic in models or services)
- Views: presentation
- Models: ...

I've hit a wall with the model that prevents me from properly embracing object orientation. I perhaps started from the wrong assumption that an entity should have only a single model.

For example, in my current web site project I have an 'image' entity. So I have a model to represent a row in the database 'image' table: I pass the fetched record to the model constructor, which nicely populates the corresponding class properties. I can then use the instance in my code. Nice.

The problem is the other way around, when I must receive data from a form for database storage. The raw data from the form needs quite some business processing, and involves coupling between data and logic for reuse, so a class is appropriate. The problem is that if I use the same model as above, the one for getting data FROM the database, things get really ugly, with lots of conditionals in the constructor.

Would one right thing to do be to simply create a separate model class for sending data TO the database, instead of staying stuck with a single model class for 'image'?

What's the good practice in such a case?
Several models for one entity
object oriented;mvc;domain model
Your problem is very common among applications which are not tiny. Sooner or later you might start needing different representations of the same entity, for whatever reason, be it performance or simply a better fit for your business logic layer. Having different models for a single entity is not a bad idea, if that is what solves your issue.

I usually implement a thick CUD (Create, Update, Delete) layer to properly validate my entities. This thick layer consists of validation of business rules, the object's properties, and relations; should an object pass the entire write layer, then I am pretty sure the model is valid. With that in mind, I then use a very thin layer for reads, which is very fast because it is stripped of all the transformations that would otherwise have been necessary.

To make applications as flexible as possible without implementing a lot of overhead, I like to atomize my read queries so that they return only the primary keys of entities, and then use another layer (you may call it ModelCreators) which has methods accepting ids from the atomized layer and constructing models from them.

In practice it may then look like this:

package Users.Devices.Atomized;

// pseudo-Java code
final class UsersDevicesQuery {

    public List<int> findDevicesForUser(final int userId) {
        // logic to filter out ids of user's devices
    }
}

package Users.Devices.ModelCreators;

final class WithPushTokenQuery {

    public List<WithPushToken> loadModels(final List<int> ids) {
        // logic to return models
    }
}

This approach has proved pretty good for me, because it keeps classes and methods very cohesive and yet very dynamic: if you decide to change your model structure, you can do so without touching the querying logic, and if you want to change the querying logic, you can do so without altering the classes used to construct models.
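Continuing the same pseudo-Java sketch, wiring the two layers together might look like this (the user id 42 and variable names are made up):

// fetch ids first, then hydrate only the models the caller needs
final List<int> deviceIds = new UsersDevicesQuery().findDevicesForUser(42);
final List<WithPushToken> devices = new WithPushTokenQuery().loadModels(deviceIds);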
_cs.61151
In "A simplified NP-complete MAXSAT problem", a reduction is given from Min Vertex Cover to MAX-2SAT by replacing each vertex $x_i$ by a single-variable clause, and each edge by a two-variable clause:
\begin{align} \Phi = \left(\bigwedge_{i=1}^n x_i\right) \wedge \left(\bigwedge_{\lbrace i,j\rbrace \in E} (\overline{x}_i \vee \overline{x}_j)\right) \end{align}
This basically makes sense to me, because the QUBO version of Vertex Cover is to maximize:
\begin{align} L = \sum_{i=1}^N x_i - \sum_{\lbrace i,j\rbrace \in E} x_i x_j \end{align}
and QUBO can be converted to MAX-2SAT quite simply.

However, I would like to know how the reverse transformation works. How do you go from MAX-2SAT to Vertex Cover?

I don't actually know if this is an unsolved problem or not, but I figure it shouldn't be, since they are both NP-complete. Would it be as simple (if tedious) as trying to force an arbitrary MAX-2SAT instance into the same form as $\Phi$? I don't know if that can be done, though.
Reducing MAX-2SAT to Vertex Cover?
complexity theory;reductions;satisfiability
null
_unix.166059
I have 2 interfaces, wlan0 and wlan1.

When I want to see the modes that my cards support, I run iw list and I see phy0 and phy2. How do I know which information corresponds to which card?

In other words, how can I know which wlan_x corresponds to which phy_x?
Output of `iw list`: phy_x corresponds to what interface?
command line;wifi;network interface;wlan
null
_unix.281880
[gala@arch ~]$ sudo !!
sudo hdparm -i /dev/sda

/dev/sda:

 Model=KINGSTON SHFS37A120G, FwRev=603ABBF0, SerialNo=50026B725B0A1515
 Config={ HardSect NotMFM HdSw>15uSec Fixed DTR>10Mbs RotSpdTol>.5% }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=4
 BuffType=unknown, BuffSize=unknown, MaxMultSect=1, MultSect=1
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=234441648
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes:  pio0 pio1 pio2 pio3 pio4
 DMA modes:  mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=yes: unknown setting WriteCache=enabled
 Drive conforms to: unknown: ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode

Where does hdparm read the Model field from? Somewhere in sysfs? Where exactly?
Get block device model name and manufacturer from pseudo-fs
hard disk;sysfs;hdparm
# strace hdparm -i /dev/sda
ioctl(3, HDIO_GET_IDENTITY, 0x7fffa930c320) = 0
brk(0)                                  = 0x1c42000
brk(0x1c63000)                          = 0x1c63000
write(1, "\n", 1)                       = 1
write(1, "Model=

So hdparm gets its information from the HDIO_GET_IDENTITY ioctl, not from sysfs. That doesn't mean that the information can't be accessed from sysfs, of course.

Next we can look up HDIO_GET_IDENTITY in the kernel source. LXR is convenient for that. The relevant hit shows a call to ata_get_identity. This function looks up the model in the device description at the offset ATA_ID_PROD.

Looking at where else ATA_ID_PROD is used, and with sysfs in mind, we find a hit in ide-sysfs.c, in a function called model_show. This function is referenced by the macro call just below, DEVICE_ATTR_RO(model), so if the ata driver is exposing the IDE interface, there's a file called model in the device's sysfs directory that contains this information.

If the ata driver is exposing the SCSI interface, tracing the kernel source is a lot more complicated, because the code uses different ways of extracting the information from the hardware. But as it turns out there is also a model field in the device's sysfs directory.

As for where the device's sysfs directory is, there are several ways to access it. The sysfs.txt file in the kernel documentation documents this, not very well. The simplest way to access it is via /sys/block, which contains an entry for each block device:

$ cat /sys/block/sda/device/model

There are a lot of symbolic links in /sys. The physical location of that directory depends on how the disk is connected to the system; for example it has the form /sys/devices/pci…/…/ata…/host…/target…/ for an ATA device with a SCSI interface that's connected to a PCI bus.
_codereview.171167
I am using Joomla and connecting to an MSSQL database to store the resulting set(s) in arrays. I am utilizing this syntax, but there must be a more efficient way of coding this.

<?php
    $option = array();
    $option['driver']   = 'mssql';
    $option['host']     = '555.555.55.5';
    $option['user']     = 'username';
    $option['password'] = 'password';
    $option['database'] = 'database';
    $option['prefix']   = '';

    $db = JDatabaseDriver::getInstance($option);

    $sql = $db->getQuery(true);
    $sql = "Select ranchstyle from information";
    $db->setQuery($sql);
    $rows = $db->loadRowList();

    $output = array();
    foreach ($rows as $row) {
        array_push($output, $row);
    }
    $data = json_encode($output[0]);

    $query2 = $db->getQuery(true);
    $query2 = "Select maestro from musicinfo";
    $db->setQuery($query2);
    $rows1 = $db->loadRowList();

    $output1 = array();
    foreach ($rows1 as $r) {
        array_push($output1, $r);
    }
    $data1 = json_encode($output1[0]);
?>
Populate two arrays with two different SQL queries
performance;php;php5
There are two possible answers to this question, a generalized one and a specific one.

To help with such questions in general, there is one programming concept which is often underestimated by PHP users. It is called user-defined functions. In a nutshell, you can write a piece of code once and then use it any number of times.

function getArrayFromSql($db, $sql)
{
    $db->setQuery($sql);
    $rows = $db->loadRowList();
    $output = array();
    foreach ($rows as $row) {
        array_push($output, $row);
    }
    return $output;
}

Here you created a function which you can use any number of times:

$sql = "Select ranchstyle from information";
$output = getArrayFromSql($db, $sql);
$data = json_encode($output[0]);

$sql = "Select maestro from musicinfo";
$output = getArrayFromSql($db, $sql);
$data1 = json_encode($output[0]);

Whereas for the specific answer we need to review your code more closely. It does unnecessary work all the time.

First, you are making a useless call to $db->getQuery(true);, as with the very next step the $sql variable gets overwritten.

Second, you actually have an array already, as $db->loadRowList() gives you a first-class array. But for some reason you are duplicating it into $data. So your code actually should be

$sql = "Select ranchstyle from information";
$db->setQuery($sql);
$rows = $db->loadRowList();
$data = json_encode($rows[0]);

$sql = "Select maestro from musicinfo";
$db->setQuery($sql);
$rows = $db->loadRowList();
$data1 = json_encode($rows[0]);

Third, given you are encoding only the first item of the array, you don't seem to need an array at all, so select and fetch one row only (note that on MSSQL the row limit is spelled TOP 1 rather than LIMIT):

$sql = "Select TOP 1 ranchstyle from information";
$db->setQuery($sql);
$row = $db->loadRow();
$data = json_encode($row);

$sql = "Select TOP 1 maestro from musicinfo";
$db->setQuery($sql);
$row = $db->loadRow();
$data1 = json_encode($row);
_datascience.6530
I would like to write an algorithm to convert unstructured texts (containing contest descriptions) to structured data with the following fields:

- contest start date (optional)
- contest end date
- main prize
- additional prizes (optional)

I have hundreds of text examples which could be used for model learning. How should I approach this task? Just in case this is important: the preferred language is Python. But I have never worked on such tasks before.
How to convert unstructured texts to structured data?
machine learning;python
null
_unix.237287
How do I find out the default permissions for a newly created text file? I've tried

ls -l filename.txt

but it keeps saying it cannot access the txt file, so I'm pretty sure I'm doing it wrong. I've gone over my notes and googled, but I can't find out how to look at its default permissions... any help would be appreciated.
Default text file permissions?
permissions;text
null
_unix.45462
I would like to get a list of all the wireless networks.

iwlist wlan0 scan | grep ESSID

This will only show me the wireless network I am currently connected to. When I run the command as root, it shows me all the available networks. If I run the command without sudo quickly after this, all the networks will show up, but after a while they are all gone except the network I am currently connected to.

Is there a way to get all the available networks when I am not root?
How to get list of available wireless networks without being root
linux;wifi;not root user
You could (or do?) probably use wpa_supplicant; using its ctrl_interface configuration key, you can allow non-root users (e.g. those in group wheel) access via wpa_cli (i.e. /sbin/wpa_cli scan_results [1]):

# allow frontend (e.g., wpa_cli) to be used by all users in 'wheel' group
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=wheel

There's also a command-line switch to wpa_supplicant,

-u  Enabled DBus control interface. If enabled, interface definitions may be omitted.

giving you a DBus interface and thus another possibility for non-root access (I think NetworkManager uses this interface).

[1] Once connected, this only shows the wireless LAN you are connected to... I don't know if this is any different with NetworkManager.
_cstheory.21529
"The halting problem for Turing machines is perhaps the canonical undecidable set. Nevertheless, we prove that there is an algorithm deciding almost all instances of it. The halting problem is therefore among the growing collection of those exhibiting the black hole phenomenon of complexity theory, by which the difficulty of an unfeasible or undecidable problem is confined to a very small region, a black hole, outside of which the problem is easy." [Joel David Hamkins and Alexei Miasnikov, The halting problem is decidable on a set of asymptotic probability one, 2005]

Can anyone provide references to other black holes in complexity theory, or another place where this or related concepts are discussed?
Problems with efficient solution except for a small fraction of inputs
cc.complexity theory;reference request;computability;undecidability
I'm not sure whether this is what you're looking, but the phase transition in random SAT is an example. Let $\rho$ be the ratio of number of clauses to number of variables. Then a random SAT instance with parameter $\rho$ is very likely to be satisfiable if $\rho$ is less than a fixed constant (near 4.2) and is very likely to be unsatisfiable if $\rho$ is a little bit more than this constant. The black hole is the phase transition.
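A quick way to see the transition empirically is to sample random 3-SAT instances at several clause/variable ratios and record how often a SAT solver succeeds. A rough Python sketch (assuming the third-party pycosat bindings; the sample sizes are kept small for illustration):

import random
import pycosat

def random_3sat(n_vars, n_clauses):
    # each clause: 3 distinct variables, each negated with probability 1/2
    return [[v * random.choice((1, -1))
             for v in random.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

n = 100
for rho in (3.0, 4.0, 4.2, 4.5, 5.0):
    # anything but the string "UNSAT" counts as satisfiable here
    sat = sum(pycosat.solve(random_3sat(n, int(rho * n))) != "UNSAT"
              for _ in range(50))
    print("rho=%.1f  satisfiable: %d/50" % (rho, sat))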
_codereview.171505
How can I optimize my code to retrieve the data faster? I'm currently using nested foreach loops to ping the API on 2 levels (for each KPI -> for each municipality).

The API holds KPI data for each municipality in Sweden. Because the API is built with a 5000-row limit per response, I have to iterate over requests. There are some 2000-3000 KPIs, for each KPI there are 290 municipalities, and each municipality holds data for one or several years for that KPI.

Link to the API documentation: https://github.com/Hypergene/kolada

Packages I need to use:

- httr: get the data from the API
- jsonlite: convert JSON to data frames
- foreach: foreach loops
- snow: run foreach loops in parallel

Some setup:

url <- "http://api.kolada.se"
path <- "/v2/municipality?title"   # path to the municipality metadata lookup table
path2 <- "/v2/kpi"                 # path to the KPI metadata lookup table

Function to ping the API with a KPI id and municipality id as input:

API_get <- function(kpi, municipality){
  as.data.frame(fromJSON(content(GET(url = url,
      path = str_c("/v2/data/kpi/", kpi, "/municipality/", municipality)), "text"))) %>%
    unnest() %>%
    select(kpi_id = values.kpi, municipality_id = values.municipality,
           period = values.period, gender, status, value) %>%
    type_convert()
}

An alternative function that tries to simplify things to speed up the process:

API_get2 <- function(kpi, municipality){
  as.data.frame(fromJSON(content(GET(url = url,
      path = str_c("/v2/data/kpi/", kpi, "/municipality/", municipality)), "text"))) %>%
    unnest()
}

API_get2 runs 1-1.5 seconds faster than API_get on 1 KPI and all of the 290 municipalities.

Here is the nested foreach loop that I run:

system.time(
  foreach(k = SKL_kpi_has_no_gender$kpi_id[1:1],
          .combine = rbind,
          .packages = c("tidyverse", "jsonlite", "httr", "stringr")) %:%
    foreach(m = as.character(SKL_municipality$municipality_id[1:290]),
            .combine = rbind,
            .packages = c("tidyverse", "jsonlite", "httr")) %dopar%
      API_get2(k, m)
)

   user  system elapsed
   0.20    0.03    3.17

I have tried running mapply, as the function I created has two inputs, but it seems to be slower; it runs on one of my CPU cores, so maybe that is why foreach has a speed advantage. I am mostly wondering if there is a way to make the speed scale better than linearly with the number of KPIs iterated over.
Nested foreach loop (parallel) API call to work around API response 5000 row limitation
r
null
_webapps.22596
Does anyone know if the complete list of skills used on LinkedIn.com can be exported or downloaded?

I'm building a volunteer website and this would be handy to have.
Complete list of Linkedin Skills
linkedin
null
_cs.40200
I am learning about neural networks and have a couple of things I don't understand.

Firstly, in competitive learning I understand that only the neuron with the strongest output is reinforced. That is done in a manner similar to:

w_{i*,j} = I(j) * h(i)

where w_{i*,j} is a weight of the 'winning' neuron i*, j indicates the input we are considering, I(j) is the value of that input, and h(i) is the sum of all weighted inputs. This is repeated for each connection leading to the winning neuron.

My question is... why? Why not simply, for example, increase the connection by an arbitrary amount? Or by another function? I have done quite some research, but still can't make sense of this.

Thanks!
Updating connections weights in neural networks
artificial intelligence;neural networks
null
_codereview.109942
In learning golang, I wrote a small CLI utility that will take paths as arguments and list out their MD5 hashes as hex strings. Included are two flags that alter functionality: --check | -c, which takes an additional file, hashes both, and returns whether or not they match (and exits with the proper return code), and --text | -t, which takes the path to a text file that is presumed to have exactly the contents of the hexified MD5 sum and checks it (as above).

The repository is here and the code is below:

package main

import (
    "bytes"
    "crypto/md5"
    "flag"
    "fmt"
    "io/ioutil"
    "log"
    "os"
    "strings"
)

var checkSum, textFile string

// build Flags
func init() {
    const (
        checkSumDefault = ""
        checkSumUsage   = "File to check against"
        textFileDefault = ""
        textFileUsage   = "File that has the md5 hash as its only content"
    )
    flag.StringVar(&checkSum, "check", checkSumDefault, checkSumUsage)
    flag.StringVar(&checkSum, "c", checkSumDefault, checkSumUsage)
    flag.StringVar(&textFile, "text", textFileDefault, textFileUsage)
    flag.StringVar(&textFile, "t", textFileDefault, textFileUsage)
}

func makeHash(fname string) [16]byte {
    data, err := ioutil.ReadFile(fname)
    if err != nil {
        panic(fmt.Sprintf("Failed to read file %s", fname))
    }
    return md5.Sum(data)
}

func main() {
    flag.Parse()
    var result int
    if checkSum != "" {
        // check mode -- against file
        checkSumHash := makeHash(checkSum)
        argHash := makeHash(flag.Arg(0))
        if checkSumHash == argHash {
            fmt.Print("They match!")
            result = 0
        } else {
            fmt.Print("No match")
            result = 1
        }
    } else if textFile != "" {
        // check mode -- against text
        checkSum, err := ioutil.ReadFile(textFile)
        if err != nil {
            log.Println(err)
            log.Fatalf("Can't read text file %s", textFile)
        }
        checkSum = bytes.TrimSpace(checkSum)
        argHash := makeHash(flag.Arg(0))
        if strings.EqualFold(fmt.Sprintf("%x", argHash), string(checkSum)) {
            fmt.Print("They match!")
            result = 0
        } else {
            fmt.Print("No match")
            result = 1
        }
    } else {
        // print mode
        for _, fname := range flag.Args() {
            hash := makeHash(fname)
            fmt.Printf("%x\n", hash)
        }
        result = 0
    }
    os.Exit(result)
}

I feel like I should refactor out the different functions and have each one return its success code. That's something I'll do when I have a little time free to poke at it.

The biggest question, however, is the best way to compare the [16]byte argHash to the []byte checkSum if --text is specified. Currently I'm using bytes.TrimSpace, casting to string, and comparing that with Sprintf("%x", argHash). Is there a better way? It seems silly to convert both to string rather than comparing bytes, but I don't know a better way to do it.
Comparing md5.Sum to text from file
go
To clearly state our problem/task: we want to compare 2 checksums, one supplied as text (the hexadecimal representation), the other being an array of the raw bytes.

Two forms exist: a hexadecimal representation and raw bytes. To compare them, we need the same representation. So there are 2 possible paths:

1. Convert the second to hex representation

Let's see your proposed solution. If we want to handle leading/trailing spaces, and if the text input may contain lowercased and uppercased hex digits, your solution is about as simple as it can be.

A variation would be to convert the text to lowercase, after which we can simply compare strings without strings.EqualFold(). The lowercase conversion can be done by calling strings.ToLower(), or, since we already have the input as []byte, by bytes.ToLower():

checkSum = bytes.ToLower(bytes.TrimSpace(checkSum))
argHash := makeHash(flag.Arg(0))
if fmt.Sprintf("%x", argHash) == string(checkSum) {
    // They match
} else {
    // They don't match
}

2. Convert the first to raw bytes

Compare slices ([]byte)

We may choose to convert the text checksum back to a []byte which holds the raw bytes of the checksum (NOT the bytes of the UTF-8 encoded hex representation). We can parse the hex representation simply with hex.DecodeString(). Or even better: since we have the text checksum as a []byte, we can use hex.Decode(), which takes its input as a []byte. As an extra gain, we don't even have to care about lower or upper case: hex.Decode() handles that for us.

And we can convert [16]byte to []byte by simply slicing it. Once we have 2 []byte, we can use bytes.Equal() to compare them (in Go, slices are not comparable, unlike arrays).

checkSum = bytes.TrimSpace(checkSum)
dst := make([]byte, 16)
if _, err := hex.Decode(dst, checkSum); err != nil {
    // Invalid input, not a hex string or not 16 bytes!
} else {
    argHash := makeHash(flag.Arg(0))
    if bytes.Equal(argHash[:], dst) {
        // They match
    } else {
        // They don't match
    }
}

Compare arrays ([16]byte)

As a variation of the previous solution, we can use arrays to do the comparison, as arrays are comparable. Since makeHash() already returns an array [16]byte, we only need to get the raw bytes of the text checksum into an array. The simplest and fastest way is to create an array [16]byte and pass hex.Decode() a slice that shares its backing array with our new array. We can obtain such a slice by simply slicing the array:

checkSum = bytes.TrimSpace(checkSum)
dst := [16]byte{}
if _, err := hex.Decode(dst[:], checkSum); err != nil {
    // Invalid input, not a hex string or not 16 bytes!
} else {
    if makeHash(flag.Arg(0)) == dst {
        // They match
    } else {
        // They don't match
    }
}

Compare manually (byte-by-byte)

We can also do the comparison manually; it's relatively easy and straightforward. But first, to do it manually, let's create a simple helper function which tells whether a hex digit (the text representation) equals the raw data:

func match(hex, raw byte) bool {
    if raw < 10 {
        return hex-'0' == raw
    }
    return hex-'a'+10 == raw || hex-'A'+10 == raw
}

And with this, the solution:

checkSum = bytes.TrimSpace(checkSum)
argHash := makeHash(flag.Arg(0))
if len(checkSum) != 2*len(argHash) {
    // Quick check: lengths differ, they don't match!
} else {
    equal := true
    for i, v := range argHash {
        if !match(checkSum[i*2], v>>4) || !match(checkSum[i*2+1], v&0x0f) {
            equal = false
            break
        }
    }
    // Now the variable equal tells whether they are equal
}
_unix.203138
I use iwconfig to show me some information about my current internet connection, like this:

$ iwconfig
wlp2s0    IEEE 802.11abg  ESSID:"eduroam"
          Mode:Managed  Frequency:2.462 GHz  Access Point: 06:0B:6B:2E:A7:80
          Bit Rate=6 Mb/s   Tx-Power=22 dBm
          Retry short limit:7   RTS thr:off   Fragment thr:off
          Power Management:off
          Link Quality=42/70  Signal level=-68 dBm
          Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
          Tx excessive retries:0  Invalid misc:19522   Missed beacon:0

lo        no wireless extensions.

In the past this information was updated each time I ran the command. Lately, the information remains the same from the beginning of the connection. The same applies to what I get via cat /proc/net/wireless. What may be the reason for this, and how can I fix it?
What are possible reasons why iwconfig is not updating connection stats?
networking;wifi;proc
The answer was given by an update of the Arch Linux kernel to 4.1.2. Now it works again, so it must have been something to do with the kernel version.
_unix.105994
I know how to make a heart with this sequence: Compose key, <, 3.

But how do you make a star?
How do you make a star symbol with the compose key in Linux?
x11;compose key
Take a look at your .XCompose file in your home directory. You probably have a line like:

<Multi_key> <asterisk> <asterisk> : "★" U2605 # BLACK STAR

If not, add that line, and you should be good to go with <Compose> * *
_unix.329665
This error occurs when trying to run multiple instances of an exe program through Wine:

X Error of failed request:  BadWindow (invalid Window parameter)
  Major opcode of failed request:  10 (X_UnmapWindow)
  Resource id in failed request:  0x400001
  Serial number of failed request:  114
  Current serial number in output stream:  114

I think X11 might be causing this error, but I don't know how to fix it.
X Error of failed request - headless machine
x11;wine;window;headless
null
_unix.275354
I have set up my SMTP server (on a Linux/Ubuntu 15.04 VPS rented at OVH) according to http://www.binarytides.com/install-postfix-dovecot-debian/ (so there is no traditional user on the box; hence it looks like a naive installation of procmail is not relevant).

Any clues about adding some spam filter using free software only (with an ideological preference for GPLv3+ or LGPLv3+ ones)?

Some additional wishes:

I would like a possible web interface (to unblock some emails filtered as spam), but I profoundly dislike PHP. My web server is nginx.
I probably am interested in also using spamoracle (or some Bayesian machine learning filter).
I am willing to code a tiny thing which could help.

I'm a bit afraid of doing something wrong. It is an active server for my family MX domain @starynkevitch.net and I am getting all my personal emails there, and it is also used by my family (about a dozen persons).
postfix&dovecot -- adding some spam filter
postfix;dovecot
null
_codereview.55892
I'm trying to calculate what time a certain time in a time zone is today, so I can schedule something to happen at that time in that time zone. I've got a table with what I have termed the Nominal Time, which is stored as a datetimeoffset with an arbitrary date, as the only parts I care about are the time and the time zone offset. So, the Nominal Time column has values along the lines of:

2014-07-01 10:00:00.0000000 +02:00
2014-07-01 10:00:00.0000000 -05:00
2014-07-01 10:00:00.0000000 -07:00
2014-07-01 10:00:00.0000000 +01:00

(In this case 10am is the time at which I want to schedule these events). From this, I want to get that time today, so these would become:

2014-07-02 10:00:00.0000000 +02:00
2014-07-02 10:00:00.0000000 -05:00
2014-07-02 10:00:00.0000000 -07:00
2014-07-02 10:00:00.0000000 +01:00

when run at the date of this writing (2014-07-02). I currently have SQL that does this, but I don't really like it:

With NominalTimes as (
    select Id, NominalTime, SYSDATETIMEOFFSET() Now
    from FaxQueue
    where status = 0
), CalcTimes as (
    select Id, NominalTime, Now,
        DATEPART(year, Now) NomYear, DATEPART(month, Now) NomMonth,
        DATEPART(day, Now) NomDay, DATEPART(hour, NominalTime) NomHour,
        DATEPART(minute, NominalTime) NomMinute, DATEPART(tzoffset, NominalTime) NomOffset
    from NominalTimes
)
select Id, NominalTime, Now,
    DATETIMEOFFSETFROMPARTS(NomYear, NomMonth, NomDay, NomHour, NomMinute, 0, 0,
        NomOffset/60, NomOffset%60, 0)
from CalcTimes

(Excuse the excessive CTEs; I'm trying to build this up bit by bit.) The end goal of this is to have a query that returns a list of rows where the nominal time happens within the next, say, hour (the actual window size isn't important). I will also note that from the function of the program, I do not need to worry about a time straddling a daylight saving time transition (the program is meant to run before DST happens in a time zone, and deliver a notification).

Is there a better way of doing these date calculations in SQL, or is this really about as good as it's going to get?
Calculating a time in a time zone from multiple dates in SQL
sql;datetime;sql server;t sql
First, let me restate your problem, to make sure I understand it correctly. You want to take the NominalTime column, which is of type datetimeoffset, and replace the date part with today's date, where today is defined according to the timezone in which the SQL Server is running. The time and timezone offset will remain unchanged, even across DST boundaries.

To roll one field of a datetime-like object forward or backward, use the DATEADD() function:

SELECT Id
     , NominalTime
     , SYSDATETIMEOFFSET() AS Now
     , DATEADD(day, DATEDIFF(day, NominalTime, SYSDATETIMEOFFSET()), NominalTime)
  FROM FaxQueue
 WHERE status = 0;

SQL Fiddle demonstration
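Building on that, the "due within the next hour" query the question ultimately asks for could look something like this (a sketch; the one-hour window is the illustrative size mentioned in the question):

SELECT Id
     , DATEADD(day, DATEDIFF(day, NominalTime, SYSDATETIMEOFFSET()), NominalTime) AS TodayAtNominalTime
  FROM FaxQueue
 WHERE status = 0
   AND DATEADD(day, DATEDIFF(day, NominalTime, SYSDATETIMEOFFSET()), NominalTime)
       BETWEEN SYSDATETIMEOFFSET()
           AND DATEADD(hour, 1, SYSDATETIMEOFFSET());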
_webapps.4644
I'm wondering if anyone knows of a Greasemonkey script or something for Firefox which will either filter, or indicate my notifications in Facebook, such that I can either:

See only notifications for items on which I have actually commented (as opposed to those I have simply 'liked')
Highlight notifications for items on which I have actually commented, so I can distinguish them from ones I have merely 'liked'

After a quick search in GM, I see ones to color code matching notifications for the same post, but I'd like to either filter out or dim out notifications for posts on which I haven't actually commented.

Other solutions besides Greasemonkey are also welcome.
Filter Facebook notifications?
facebook;facebook notifications
This issue has actually been taken care of, now that FB groups notifications by post.
_reverseengineering.11073
I just came across the MyNav Python scripts and would like to use some of their functionality on my IDA Pro database. I have IDAPython installed properly, but I cannot seem to get the plugins to show up in the menu. I have gone to File > Script and run MyNav.py, but all that happens is that the myexport.pyc and mybrowser.pyc files get generated where I have the source files.

I tried copying those into the plugins directory and restarting, but nothing shows up. The install documentation on the Google Code page is non-existent. Has anyone had any success installing this plugin package? I am using Python 2.7 with IDAPython 1.5.2, but have also tried with Python 2.6.
Using MyNav Python Scripts With IDA Pro 6.1 On Windows 7
ida;idapython;idapro plugins
null
_codereview.45717
I'm not much of a jQuery coder (more of a PHP coder), except for calling a few get values out of class and id, and Ajax calls. However, I need to make a checkout system work, and one thing I would like to do is to put up a loading/wait indicator while I send a form through Ajax. When it's finished, it would redirect the user.

So on click:

load the loading spinner
fade the background out
Ajax call
when Ajax is done, submit a form which indirectly redirects

Here is my code, followed by why I don't like it:

jQuery.fn.center = function () {
    this.css("position", "absolute");
    this.css("top", ( $(window).height() - this.height() ) / 2 + $(window).scrollTop() + "px");
    this.css("left", ( $(window).width() - this.width() ) / 2 + $(window).scrollLeft() + "px");
    return this;
}

$( ".buttonFinish" ).on('click', function(event) {
    $('body').fadeOut('slow');
    $('#loader').center();
    $('#loading').center();
    $('#loader').show();
    $('#loading').show();

    //Use the above function as:
    var request = $.ajax({
        url: "{{ URL::route('saveCart') }}",
        type: 'POST'
    });

    request.done(function(msg) {
        $('.order_number').val(msg);
        $('#summaryForm').submit();
        event.preventDefault();
    });

    return false;
});

Let's get a few questions out of the way:

{{ URL::route('') }} is Laravel syntax to call the routing name for a URL.
loader/loading are two divs that I have for the spinner; one is text, the other is an animation through CSS. (I'd rather CSS it than load a pic, but I am aware that a loader.gif can be small.)

My issue (and feeling) is that this can be better. I am using CSS with display:none to hide my two divs, and then I call fadeOut, show, show, and the center function twice in order to center my loading indicator. Is there a compact way to achieve all the 4 steps needed in a clean way?
Fading and Loader jQuery Improvement
javascript;jquery;html;css
Let's go through it one section at a time.

jQuery.fn.center = function () {
    this.css("position", "absolute");
    this.css("top", ( $(window).height() - this.height() ) / 2 + $(window).scrollTop() + "px");
    this.css("left", ( $(window).width() - this.width() ) / 2 + $(window).scrollLeft() + "px");
    return this;
}

You don't have to explicitly append 'px' to your numeric values. jQuery is smart enough to correctly format your CSS styles.

Since this is a jQuery plugin, you return this; at the end of the function. This is good, as it preserves chaining.

If the .center function is only used for the loading divs, consider removing it and instead styling the divs with plain CSS.

#loader {
    position: fixed;
    top: 50%;
    left: 50%;
    margin-top: -50px;   /* half of the height */
    margin-left: -100px; /* half of the width */
}

...

$( ".buttonFinish" ).on('click', function(event) {
    $('body').fadeOut('slow');
    $('#loader').center();
    $('#loading').center();
    $('#loader').show();
    $('#loading').show();

As mentioned by @Flambino, this particular code (as presented in the question) will fade out the entire body of the HTML document, which includes everything, even the loader elements. Perhaps the code that you're actually using is slightly different (is the selector .body or #body instead?).

It's best practice to enclose JavaScript that uses external dependencies inside an IIFE. By doing so, you can use the safe name of an external dependency while still being able to alias it to a more convenient identifier locally. In addition, you can freely create variables inside the scope of the IIFE without polluting the global namespace. Your example code is fairly short, but you likely have much more code on the page, and it is a good habit to have regardless. Here's one way to do it:

(function($) {
    $( ".buttonFinish" ).on('click', function(event) {
        ...
    });
})(jQuery);

You can select both elements at the same time, just like in CSS: $('#loader, #loading').

You can chain the .show() and .center() together: $('...').show().center().

    //Use the above function as:
    var request = $.ajax({
        url: "{{ URL::route('saveCart') }}",
        type: 'POST'
    });

    request.done(function(msg) {
        ...
    });

Since you're not using the request variable for anything other than .done(), you can skip declaring it and simply chain .done() after $.ajax():

$.ajax({
    ...
}).done(function(msg) {
    ...
});

As an overall note, the indentation of the code (as it is in the question) seems haphazard, making the control flow more difficult to see than it could be. Making matching braces have the same indentation (and, in general, maintaining a consistent style) will help make the code more readable and understandable.
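Putting those suggestions together, the click handler might end up looking something like this (a sketch; the selectors and the Blade route are taken from the question, and moving event.preventDefault() to the top is my own adjustment so the default action is cancelled before the Ajax round-trip finishes):

(function($) {
    $(".buttonFinish").on('click', function(event) {
        event.preventDefault();
        $('body').fadeOut('slow');
        $('#loader, #loading').show().center();

        $.ajax({
            url: "{{ URL::route('saveCart') }}",
            type: 'POST'
        }).done(function(msg) {
            $('.order_number').val(msg);
            $('#summaryForm').submit();
        });
    });
})(jQuery);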
_codereview.61221
For my first use of javascript I've made an app that takes an input verb from HTML in Japanese and outputs it into many conjugated (manipulated) forms. Essentially, it defines an alphabet, some initial arrays and functions for use, a function to initially check for values and interact with the HTML page, an object to pass different verb values to, and then a whole slew of self-invoking functions that essentially take the verb and pass it to the object, and then another function to interact with the page.

Right now I'm at a loss on how to rewrite it in modular JS. I have a couple of libraries defined in the HTML (bootstrap, jQuery, and a Japanese IME). Although I'm using self-invoking functions and have a general idea of what modules are, I have no idea what a practical way of splitting up the code into modules would be.

$(document).ready(function () {
    //hiragana table
    var hiragana = {
        a: [, , , , , , , , , , , , , , ],
        i: [, , , , , , , , , , , , , , ],
        u: [, , , , , , , , , , , , , , ],
        e: [, , , , , , , , , , , , , , ],
        o: [, , , , , , , , , , , , , , ],
        teOne: [, , ],
        teTwo: [, , ],
        change: function (input, initVowel, desiredVowel) {
            var x = hiragana[initVowel].indexOf(input);
            return hiragana[desiredVowel][x];
        }
    };

    var groupOneExceptions = [, , , , , , , ];
    var groupThree = [, ];
    var existence = [[, ], []];

    //check if in array
    function isInArray(array, search) {
        return array.indexOf(search) >= 0;
    }

    //add input to the page
    function printPage(id, value) {
        $("#" + id).replaceWith("<div id = " + id + ">" + value + "</span>");
    }

    //bind input to wanakana on page load
    var input = document.getElementById("input");
    wanakana.bind(input);

    //check radio buttons and enact changes on enter form
    $("input:radio[name=input-method]").change(function () {
        if ($(this).val() === "Hiragana") {
            //wanakana support
            wanakana.bind(input);
            $("#input").attr("placeholder", );
        }
        if ($(this).val() === "Romaji") {
            wanakana.unbind(input);
            $("#input").attr("placeholder", "taberu");
        }
    });

    //Click the button to get the form value.
    $("#submit").click(function () {
        var verb = {
            //put an if check here for masu? LATER
            group: ,
            u: $("#input").val(),
            end: ,
            endTwo: ,
            withoutEnd: ,
            i: ,
            te: ,
            preMasu: ,
            masu: ,
            ta: ,
            taEnd: ,
            nakatta: ,
            mashita: ,
            masendeshita: ,
            teEnd: ,
            nai: ,
            naiEnd: ,
            masen: ,
            ou: ,
            ouEnd: ,
            naidarou: ,
            eba: ,
            ebaEnd: ,
            nakereba: ,
            eru: ,
            eruEnd: ,
            erunai: ,
            seru: ,
            serunai: ,
            reru: ,
            rerunai:
        };

        var init = (function () {
            printPage("callout", ); //clear table
            for (prop in verb) {
                if (typeof verb[prop] === "string") {
                    printPage(prop, );
                }
            }
            //init verb.u for hiragana processing
            if (wanakana.isKana(verb.u) === false) {
                verb.u = wanakana.toHiragana(verb.u);
            }
            //do some initial slicing
            verb.end = verb.u.slice(-1);
            verb.endTwo = verb.u.slice(-2, -1);
            verb.withoutEnd = verb.u.slice(0, -1);

            if (isInArray(hiragana.u, verb.end) === false) {
                printPage("callout", "<div class=\"bs-callout bs-callout-danger\"> It doesn't look like " + verb.u + " is a valid Japanese verb in plain form. Try something that ends with an \"u\".</div>");
            }
            if (isInArray(existence[0], verb.u)) {
                printPage("callout", "<div class=\"bs-callout bs-callout-info\"> If you were referring to the existence construct " + verb.u + " (to be), refer here.</div>");
            } else if (isInArray(existence[1], verb.u)) {
                printPage("callout", "<div class=\"bs-callout bs-callout-info\"> If you were referring to the existence construct " + verb.u + " (is), refer here.</div>");
            }
        })();

        if (isInArray(hiragana.u, verb.end)) {
            verb.getGroup = (function () {
                if (isInArray(groupThree, verb.u)) {
                    verb.group = 3;
                } else if (verb.end ===  && (isInArray(hiragana.i, verb.endTwo) || isInArray(hiragana.e, verb.endTwo))) {
                    verb.group = 2;
                } else if (isInArray(hiragana.u, verb.end)) {
                    verb.group = 1;
                }
                if (isInArray(groupOneExceptions, verb.u)) {
                    verb.group = 1;
                }
            })();

            verb.getI = (function () {
                if (verb.group === 1) {
                    verb.preMasu = hiragana.change(verb.end, "u", "i");
                    verb.i = verb.u.slice(0, -1) + verb.preMasu;
                }
                if (verb.group === 2) {
                    verb.i = verb.u.slice(0, -1);
                }
                if (verb.group === 3) {
                    verb.i = hiragana.change(verb.withoutEnd, "u", "i");
                }
            })();

            verb.getTe = (function () {
                if (verb.group === 3 || verb.group === 2) {
                    verb.te = verb.i + ;
                }
                if (verb.group === 1) {
                    if (isInArray(hiragana.teOne, verb.preMasu)) {
                        verb.teEnd = ;
                    } else if (isInArray(hiragana.teTwo, verb.preMasu)) {
                        verb.teEnd = ;
                    } else if (verb.preMasu === ) {
                        verb.teEnd = ;
                    } else if (verb.preMasu === ) {
                        verb.teEnd = ;
                    } else if (verb.preMasu === ) {
                        verb.teEnd = ;
                    }
                    //exception
                    if (verb.u === ) {
                        verb.teEnd = ;
                    }
                    verb.te = verb.withoutEnd + verb.teEnd;
                }
            })();

            verb.getNai = (function () {
                if (verb.group === 3) {
                    if (verb.u === ) {
                        verb.nai = ;
                    }
                    if (verb.u === ) {
                        verb.nai = ;
                    }
                }
                if (verb.group === 2) {
                    verb.nai = verb.i + ;
                }
                if (verb.group === 1) {
                    if (verb.preMasu === ) {
                        verb.naiEnd = ;
                    } else {
                        verb.naiEnd = hiragana.change(verb.preMasu, "i", "a") + ;
                    }
                    verb.nai = verb.withoutEnd + verb.naiEnd;
                    if (verb.u === ) {
                        verb.nai = ;
                    }
                }
            })();

            verb.getMasu = (function () {
                verb.masu = verb.i + ;
            })();

            verb.getMasen = (function () {
                verb.masen = verb.i + ;
            })();

            verb.getTa = (function () {
                if (verb.group === 3 || verb.group === 2) {
                    verb.ta = verb.i + ;
                }
                if (verb.group === 1) {
                    verb.taEnd = verb.teEnd.slice(0, -1) + hiragana.change(verb.teEnd.slice(-1), "e", "a");
                    verb.ta = verb.withoutEnd + verb.taEnd;
                }
            })();

            verb.getNakatta = (function () {
                verb.nakatta = verb.nai.slice(0, -1) + ;
            })();

            verb.getMashita = (function () {
                verb.mashita = verb.i + ;
            })();

            verb.getMasendeshita = (function () {
                verb.masendeshita = verb.masen + ;
            })();

            verb.getOu = (function () {
                if (verb.group === 3) {
                    verb.ou = verb.nai.slice(0, -2) + ;
                }
                if (verb.group === 2) {
                    verb.ou = verb.i + ;
                }
                if (verb.group === 1) {
                    verb.ouEnd = hiragana.change(verb.preMasu, "i", "o") + ;
                    verb.ou = verb.withoutEnd + verb.ouEnd;
                }
            })();

            verb.getNaidarou = (function () {
                verb.naidarou = verb.nai + ;
            })();

            verb.getEba = (function () {
                if (verb.group === 3) {
                    verb.eba = verb.withoutEnd + ;
                }
                if (verb.group === 2) {
                    verb.eba = verb.i + ;
                }
                if (verb.group === 1) {
                    verb.ebaEnd = hiragana.change(verb.preMasu, "i", "e") + ;
                    verb.eba = verb.withoutEnd + verb.ebaEnd;
                }
            })();

            verb.getNakereba = (function () {
                verb.nakereba = verb.nai.slice(0, -1) + ;
            })();

            verb.getEru = (function () {
                if (verb.group === 3) {
                    if (verb.u === ) {
                        verb.eru = ;
                    }
                    if (verb.u === ) {
                        verb.eru = ;
                    }
                }
                if (verb.group === 2) {
                    verb.eru = verb.withoutEnd + ;
                }
                if (verb.group === 1) {
                    verb.eruEnd = hiragana.change(verb.preMasu, "i", "e") + ;
                    verb.eru = verb.withoutEnd + verb.eruEnd;
                }
            })();

            verb.getErunai = (function () {
                verb.erunai = verb.eru.slice(0, -1) + ;
            })();

            verb.getReru = (function () {
                if (verb.group === 3) {
                    if (verb.u === ) {
                        verb.reru = ;
                    }
                    if (verb.u === ) {
                        verb.reru = ;
                    }
                }
                if (verb.group === 2) {
                    verb.reru = verb.eru;
                }
                if (verb.group === 1) {
                    verb.reru = verb.nai.slice(0, -2) + ;
                }
            })();

            verb.getRerunai = (function () {
                verb.rerunai = verb.reru.slice(0, -1) + ;
            })();

            verb.getSeru = (function () {
                if (verb.group === 3) {
                    if (verb.u === ) {
                        verb.seru = ;
                    }
                    if (verb.u === ) {
                        verb.seru = ;
                    }
                }
                if (verb.group === 2) {
                    verb.seru = verb.withoutEnd + ;
                }
                if (verb.group === 1) {
                    verb.seru = verb.nai.slice(0, -2) + ;
                }
            })();

            verb.getserunai = (function () {
                verb.serunai = verb.seru.slice(0, -1) + ;
            })();

            verb.process = (function () {
                var prop = ;
                //goes through verb object, checks for romaji, prints page
                for (prop in verb) {
                    if (typeof verb[prop] === "string") {
                        if (wanakana.isKana($("#input").val()) === false) {
                            verb[prop] = wanakana.toRomaji(verb[prop]);
                        }
                        printPage(prop, verb[prop]);
                    } //string
                } //for in
            })();
        }
    });
});
Correctly rewriting Japanese verb conjugator in modular JS
javascript;jquery;require.js
Comments such as //check if in array before a function called isInArray and //hiragana table are completely pointless. It would be more important to describe what you are actually doing for everyone who doesn't know Japanese.

$("#" + id).replaceWith("<div id = " + id + ">" + value + "</span>");: Besides the mismatching tags, this looks like a very bad way to output something.

document.getElementById("input");: Don't hard code IDs. The next time you or someone else needs to re-use the code on a different page, they'll need to search and replace them all. Also, why don't you use jQuery here?

wanakana is not defined.

There are far too many magic strings.

In general you seem to be completely confused about the point of using self-invoking functions. You are not using them for any practical purpose, nor do they return anything, and yet you attempt to store their return value.

I can only suggest: Re-write everything in a straightforward procedural manner, breaking the functionality down into functions (procedures) no longer than, say, 10 lines (arbitrary number, just for the practice). Don't have the functions access any global variables - that is, let them work only with their arguments. The only exception would be accessing any global constants. Make sure you don't use any string literals, other than when defining constants.

Give all constants a name with CAPITAL_LETTERS, so they are recognizable as such.

Ignore input and output for now. For testing/development just start your program with a function call to the main function with the verb as its argument, and output to the console. E.g.: console.log( japaneseVerbConjugator("Some Japanese verb") );
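To make that concrete, here is a bare-bones skeleton of the shape being suggested (purely illustrative; the function bodies and constant contents are left out):

var GROUP_THREE = [/* the irregular verbs */];

function getGroup(verb) {
    // <= ~10 lines, works only on its arguments (plus global constants)
}

function getStem(verb, group) {
    // ...
}

function japaneseVerbConjugator(verb) {
    var group = getGroup(verb);
    return {
        group: group,
        masu: getStem(verb, group) /* + the masu suffix */
    };
}

console.log( japaneseVerbConjugator("taberu") );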
_unix.163838
Unfortunately it happens rather regularly that some I/O job or some other activity clogs all the CPU power on my rather old Linux machine and leaves me with a frozen desktop, where not even the mouse will move, for some time. Rather than trying to troubleshoot here with you, I was wondering:

Why does music continue to play while my PC is seemingly frozen (probably a process independent of the desktop), and why does it continue to loop the last second or so after that process seems to have frozen as well?

Would it not make more sense for the application in the background that produces the sound to stop as soon as its buffer is empty? How come the sound starts looping all of a sudden?

I've seen this behaviour on Windows as well, and even on my Android phone. For a specific example, I had a YouTube video open while creating a very large gzip archive, which brought the CPU down with I/O wait. After still being able to listen to the video for about 3 minutes after the desktop froze, unresponsive to mouse and keyboard, it then started to loop the last second, and about half a minute later crashed entirely.
Why does music still loop, when my PC freezes?
audio;freeze
null
_reverseengineering.14641
I have some binary files, each of them containing the instructions of a function (maybe a little more at the end). The beginning of the file is also the start point of the function. These files were extracted from an ELF file. The platform is arm64.

So, how do I load and analyze such a file using angr?

I uploaded a sample file here: xfrank.pythonanywhere.com/bin

The original target: Every function has a switch-case statement; the target is to get all the integers of the case expressions.

Example (C code):

void func1(int cmd)
{
    switch (cmd) {
    case 1:
        xxxx
        break;
    case 10:
        yyyy;
        break;
    }
}

Result: 1, 10
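A sketch of the kind of load I have in mind, using angr's CLE blob backend (the base address and entry point are arbitrary assumptions, since a raw dump carries none):

import angr

# Sketch: load a raw AArch64 code blob with the blob backend.
# base_addr/entry_point are assumptions, not taken from the file.
proj = angr.Project("bin", main_opts={
    "backend": "blob",
    "arch": "aarch64",
    "base_addr": 0x400000,
    "entry_point": 0x400000,
})

# Recover the CFG; jump-table resolution is what would expose the
# switch-case targets.
cfg = proj.analyses.CFGFast()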
In angr, how to Load and Analyze a binary file that only contain a function instructions
binary
null
_softwareengineering.325749
I am trying to learn WHEN NOT to use:

classes
member variables

HERE IS THE CODE

access_point_detection_classes.py

from scapy.all import *

class Handler :

    def __init__(self) :
        self.wap_list = []
        self.TempWAP = None

    def process_packet_capture(self, packet_capture) :
        for packet in packet_capture :
            if packet.haslayer(Dot11) :
                if packet.subtype == 8 :
                    bssid = packet.addr3
                    if packet.haslayer(Dot11Elt) :
                        p = packet[Dot11Elt]
                        while isinstance(p, Dot11Elt) :
                            if p.ID == 0 :
                                essid = p.info
                            if p.ID == 3 :
                                channel = ord(p.info)
                            p = p.payload
                        self.TempWAP = WirelessAccessPoint(bssid, essid, channel)
                        if self.check_for_duplicates() == False:
                            self.wap_list.append(self.TempWAP)

    def check_for_duplicates(self) :
        for w in self.wap_list :
            if self.TempWAP.bssid == w.bssid :
                return True
        return False

class WirelessAccessPoint :

    def __init__(self, bssid, essid, channel) :
        self.bssid = bssid
        self.essid = essid
        self.channel = channel

access_point_detection.py

from scapy.all import *
from access_point_detection_classes_module import *

def main() :
    H = Handler()
    packet_capture = sniff(count = 50)
    H.process_packet_capture(packet_capture)
    for w in H.wap_list :
        print w.bssid
        print w.essid
        print w.channel
        print '++++++++++++++++++++++++'
    # enable_managed_mode('wlan0')

if __name__ == "__main__" :
    main()

HERE ARE MY THOUGHTS

I am mostly sure I know WHEN to use a class and member variables. Look at my second class, WirelessAccessPoint. The perfect need for a class, in my opinion:

I need multiple objects of this type
Easier management of the data that comes along with this type

That's great, but look at my first class, Handler. To be honest, I mostly created it so I could practice OOP. It does come in handy when I need to access wap_list. However, I initialize it without passing parameters to the constructor. Is that defeating the purpose? It seems that I could create all the methods of this class with only the self parameter. For instance ( haha ) I could make packet_capture a member variable of Handler and have the method process_packet_capture() take only the parameter self.

At this point I feel like if I do not set some ground rules for myself I will try to make everything a class and everything a member variable, and I will never pass a parameter besides self ever again!
When NOT to use a class / member variable?
object oriented;python;object oriented design;class design;class
However, I initialize it without passing parameters for the constructor. Is that defeating the purpose?

No, absolutely not. Handler, maybe misnamed, but in a sense it represents a collection object, which is perfectly reasonable without constructor parameters. A collection object maintains some state for its duration, and many collections are created with an initially empty set.

However, what I don't necessarily like is the use of a member variable for the short-lived state of TempWAP. I think a local variable and some parameter passing would be better here, no? Maybe the example doesn't make it clear that TempWAP is otherwise used in your more complete implementation, or, as another alternative, you might actually have two classes here that are being conflated (one shorter lived and one longer lived), but based on what I can see in the question, I'd go for a local variable instead of a member variable here. The point is that the lifetime of the two member variables in Handler seems inconsistent. So, these members should be differentiated, either one using a local variable, or in another class (overkill, though, perhaps). To reiterate, when all your member variables in the same class have the same useful lifetime, you have a better class than otherwise.
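Concretely, the local-variable version might look something like this (a sketch based on the question's code, not a drop-in rewrite):

def process_packet_capture(self, packet_capture) :
    for packet in packet_capture :
        if packet.haslayer(Dot11) and packet.subtype == 8 and packet.haslayer(Dot11Elt) :
            bssid = packet.addr3
            ...  # parse essid/channel as before
            wap = WirelessAccessPoint(bssid, essid, channel)  # local, short-lived
            if not self.contains(wap.bssid) :
                self.wap_list.append(wap)

def contains(self, bssid) :
    # the duplicate check now takes the key as a parameter instead of
    # reading it from a temporary member variable
    return any(w.bssid == bssid for w in self.wap_list)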
_unix.353798
I was trying to create a wifi hotspot for some experiment, but I was unable to create an open hotspot. The created hotspot is always secured with WPA2. Is there any way to create an open hotspot without any password?
How to create an open Wifi hotspot with no password in Linux Mint?
linux mint;wifi;wifi hotspot
You can create an open AP using the create_ap tool.

Install create_ap:

git clone https://github.com/oblique/create_ap
cd create_ap
sudo make install

Start and enable the service:

sudo systemctl start create_ap
sudo systemctl enable create_ap

To create an open access point run the following command:

sudo create_ap wlan0 eth0 MyAccessPoint

To create an open access point from the same wifi interface (wlan0) run:

sudo create_ap wlan0 wlan0 MyAccessPoint

Edit

To solve the hostapd not found error, you should install hostapd:

sudo apt install hostapd
_webapps.27635
Are weekly email reports of website statistics from Google Analytics still available since the latest updates/redesign?

EDIT: and does anyone know of any plans to bring this back, if it has been discontinued?
Email Reports from Google Analytics
google analytics
This seems to be still working, and this is how I believe it is done:

1. Sign into your Google Analytics account
2. Click on Standard Reporting
3. Click on Email BETA
4. Enter your email address and Subject in their respective fields
5. Click on the , CSV drop down menu to choose one report format
6. Ensure Weekly is selected next to the Frequency label
7. Click on your day of the week
8. Type your message in the white pane
9. Click on the Send button
_unix.257845
Running a command in a screen and detaching is quite simple:

screen -S test -d -m echo "output of command that runs forever"

However, I would also like to pipe all the output to a file for logging; how can I run the following in a screen and detach?

echo "output of command that runs forever" &> output.log

Edit: Just to clarify, I need this for a script, so simply starting a screen, running the command by hand and detaching is not an option.
How to run a program in a screen, redirect all output to a file and detach
linux;pipe;gnu screen
screen -dmS workspace; screen -S workspace -X stuff $'ps aux > output-x\n'

I first create a detached session with the -d switch; I called my session workspace. I then send my command to the same session with -X stuff. I am using $'', but you could also use double quotes; you would then have to do a control-M instead of a \n, which I don't like, so I normally use the method I described above.

After this piece of code runs, you will find the output-x file with the list of processes, and also if you do a:

screen -ls

you will see the session has been detached.

Since you said you are going to be running a script, you might want to have your script search for a detached session (I am using workspace), and if it exists, send commands to that pre-existing session instead of making a new session every time screen -dmS sessionName is run. An example is below:

#!/bin/bash
if ! ( screen -ls | grep workspace > /dev/null); then
    screen -dmS workspace;
fi
screen -S workspace -X stuff $'ps aux > output-x\n'

I hope this helps.
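As an aside, for the one-shot case in the question there is also a variant that avoids stuff entirely (a sketch; the redirection happens inside the detached session because the whole pipeline is wrapped in bash -c):

screen -dmS test bash -c 'echo "output of command that runs forever" &> output.log'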
_softwareengineering.267612
From what I've read and implemented, a DTO is an object that holds a subset of values from a data model; in most cases these are immutable objects. What about the case where I need to pass either new values or changes back to the database?

Should I work directly with the data model/actual entity from my DAL in my presentation layer?

Or should I create a DTO that can be passed from the presentation layer to the business layer, then convert it to an entity and have it updated in the DB via an ORM call? Is that writing too much code? I'm assuming that this is needed if the presentation layer has no concept of the data model. If we are going with this approach, should I fetch the object again at the BLL layer before committing the change?
When is it appropriate to map a DTO back to its Entity counterpart
domain driven design;dto
If you have an entity as in a DDD-entity (not what is generally called an entity in frameworks such as entity framework) then the entity should protect its invariants. Thus using the entity as a DTO is most likely wrong. A DTO is unvalidated input data from the user and should be treated as such. An entity is validated data in one of the object types valid invariants. Thus you should have a conversion between a DTO and an entity that converts a validated DTO to a domain entity object. In many cases people skimp on these by using anemic domain models (http://www.martinfowler.com/bliki/AnemicDomainModel.html) where the entities generally contain no logic and then they put all the invariant protection logic etc in external code (and thus need to check invariants all over the place). This can work, but imho it leads to bugs. But it's something that happens pretty much everywhere - especially since a lot of frameworks encourage this by making it super easy to write code this way. This is imho too bad since it leads to the easiest/fastest way of coding not being the one that leads to the best results (if you follow DDD that is)...But it's also important to recognise that in many cases DDD might be overkill. If you are basically making a CRUD application (which in many cases you probably are) then DDD might be overkill for your domain. Of course this doesn't mean that you should throw away all the good ideas and just go into full-on do-anything mode, but in some cases not caring about certain parts of DDD can be the correct choice to make. The hard trick here is, of course, to identify if your domain is complex enough to warrant DDD or if it's easy enough to not bite yourself in the ass by foregoing certain parts of it. :)
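For illustration, the DTO-to-entity conversion described above might look like this (a Java sketch with invented names; the point is that the entity's factory is the only way in, and it enforces the invariants):

public final class CustomerDto {        // unvalidated user input
    public String name;
    public String email;
}

public final class Customer {           // domain entity, always valid
    private final String name;
    private final String email;

    private Customer(String name, String email) {
        this.name = name;
        this.email = email;
    }

    public static Customer fromDto(CustomerDto dto) {
        if (dto.name == null || dto.name.isEmpty())
            throw new IllegalArgumentException("name is required");
        if (dto.email == null || !dto.email.contains("@"))
            throw new IllegalArgumentException("email looks invalid");
        return new Customer(dto.name, dto.email);  // invariants hold from here on
    }
}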
_webmaster.91911
In my HTML page, the link is http://metfm.esy.es/index.php, and I use four external links to webcams. In my HTML page the links are normal, but when I test the page, the test reports the same links with query strings appended.

I want to remove all the query parameters of the links with .htaccess. I tried so many things from Internet examples, but they did not work.
Removing query string from external URL in my HTML page
parameters
null
_vi.7266
On Debian-based systems, there is a package named vim-addon-manager. My understanding is that it allows installing some plugins based on a repository of available plugins. To be able to install them, the plugins have to be packaged and pushed to the Debian repos.

I don't understand the point of this package, because it seems much less flexible than the other plugin managers, which allow installing any plugin from GitHub, a git repo or even a local folder, and which allow parallel installation, lazy-loading, etc...

At first I thought that the package was an old solution created before the other plugin managers and more or less deprecated, but its git repo seems to indicate that its development is still active.

So my questions are:

Are there differences other than the available plugins between vim-addon-manager and the other plugin managers? And if so, which differences?
Are the packages and the other managers meant for the same purpose, or are they complementary?
In which use cases is it more convenient to use the package instead of the other plugin managers?

Note that my question is inspired by this one, but here I am not asking how to use the package, but rather why someone would need it.
When should I use vim-addon-manager instead of a regular package manager?
plugin system;plugin managers
N.B., I'm one of the original authors of Debian's vim-addon-manager (which I'll refer to as dvam for the rest of this answer, to avoid confusion with Marc Weber's vam).dvam is intended solely to manage addons that are distributed in the form of Debian packages. There are people that prefer, for various reason, to use packaged software even for things like Vim addons, instead of getting the software directly from upstream.In the broader sense, yes dvam and more general tools like plug, vundle, etc. are meant for the same purpose -- providing a mechanism for enabling the use of certain addons in your Vim environment. They are targeting different use cases, though, and can be used to complement each other.dvam intends to give a user of a Debian-based system control over which packaged addons are enabled, both system-wide and for a specific user. That is, it tries to solve the use cases of a sysadmin installing and enabling a packaged addon in the system-wide config but allowing the user to disable it, as well as the reverse (enabling an addon that's disabled in the system-wide config).There are some warts in the way Debian's tool was initially designed (symlinking individual files rather than working on directory like pathogen does) which haven't fully been addressed yet. I've been dragging my feet on fixing that, but should revisit it to see if Vim's new 'packpath'/:packadd features help me with that at all.
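For concreteness, the Debian-side workflow looks roughly like this (a sketch; "taglist" is just an example of an addon shipped in a Debian package):

$ vim-addons status                    # list packaged addons and their state
$ vim-addons install taglist           # enable the addon for the current user
$ vim-addons disable taglist           # override a system-wide enablement
$ sudo vim-addons -w install taglist   # enable it system-wide

The per-user install/disable pair is exactly the sysadmin-versus-user override mechanism described above.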
_unix.335175
Assume you have some text file (e.g. a log file), you open it in a vim editor and hit the command:

:g/aaa

It will output a result in which you can move with the j and k keys, and when you move to the bottom a green sentence Press ENTER or type command to continue will appear.

I understand, somehow, that I can use some commands with the result, but I don't know how to find out what I can do with it. One action I'd like to do is to save the lines to a new file. Of course you could use a command

$ grep aaa file.txt > new_file.txt

but is it possible from the vim editor directly?
Copy grep output from vim editor
vim
It is possible to do this through a multi-step process. Within vim:

:redir > new_file.txt
:g/aaa
:redir END

See :help redir from within vim.

The :redir command can also append to an existing file by modifying the first command:

:redir >> new_file.txt
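A small refinement worth knowing (same :redir mechanism, just quieter): prefixing the :global command with :silent suppresses the Press ENTER prompts while the output is being captured:

:redir! > new_file.txt
:silent g/aaa
:redir END

(:redir! simply overwrites new_file.txt if it already exists.)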
_unix.70614
I should echo only names of files or directories with this construction:

ls -Al | while read string
do
    ...
done

ls -Al output:

drwxr-xr-x 12 s162103 studs 12 march 28 12:49 personal domain
drwxr-xr-x 2 s162103 studs 3 march 28 22:32 public_html
drwxr-xr-x 7 s162103 studs 8 march 28 13:59 WebApplication1

For example, if I try:

ls -Al | while read string
do
    echo "$string" | awk '{print $9}'
done

then it outputs only files and directories without spaces. If a file or directory has spaces, like personal domain, it will be only the word personal.

I need a very simple solution. Maybe there is a better solution than awk.
How to output only file names (with spaces) in ls -Al?
linux;command line;ls
null
_codereview.75494
I needed a way to cache a large number of objects with two different types of keys.

In my case:

A String key represents my object in a serialized form. This way if I get the same serialized object from an outside source, I can easily determine if it already exists and avoid re-parsing it (I just re-put() the existing entry to make it freshly cached).
The second key is an Integer index, which is used as a light ID of the object for inter-process communications.

I also wanted access to a value with both key types to be O(1), since I use both pretty often. I am using one LRU map with strong references to the values, and a second map with weak references. This way when an entry is removed from the first map, I expect the second map to release the value object as soon as the GC is invoked.

My only concern is that the second map still keeps the redundant keys after their values are released, which is bad in case of large keys. I thought of using a ReferenceQueue or a WeakHashMap somehow to delete those keys, but couldn't yet come up with a satisfying implementation. I added a Thread to deal with deleting the unused keys in the weak references map.

import java.lang.ref.WeakReference;
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.LinkedList;
import java.util.Map;
import java.util.Set;

/**
 * An LRU cache with double mapping - two keys of different types are used
 * for accessing each cached value.
 *
 * @author Eliyahu
 *
 * @param <K1> - first key type.
 * @param <K2> - second key type.
 * @param <V> - values type.
 */
public class DoubleMappedLRU<K1, K2, V> {

    private final Map<K1, V> strongRefMap;
    private final Map<K2, WeakReference<V>> weakRefMap;

    /**
     * Constructs a new DoubleMappedLRU with the specified capacity.
     *
     * @param capacity - the maximum number of values in the cache.
     */
    public DoubleMappedLRU(int capacity) {
        strongRefMap = createSyncedLRUMap(capacity);
        weakRefMap = new HashMap<K2, WeakReference<V>>();

        /**
         * This thread occasionally iterates the weakRefMap and removes keys with deleted values.
         */
        new Thread("clean deleted keys") {
            WeakReference<DoubleMappedLRU<K1, K2, V>> lruRef =
                new WeakReference<DoubleMappedLRU<K1, K2, V>>(DoubleMappedLRU.this);

            @Override
            public void run() {
                while (lruRef.get() != null) {
                    try {
                        Map<K2, WeakReference<V>> wm = lruRef.get().weakRefMap;
                        synchronized (DoubleMappedLRU.this) {
                            for (K2 key : wm.keySet()) {
                                if (wm.get(key) != null && wm.get(key).get() == null) {
                                    wm.remove(key);
                                }
                            }
                        }
                        Thread.sleep(1000);
                    } catch (InterruptedException iex) {
                        return;
                    } catch (Exception ex) {
                        ex.printStackTrace();
                    }
                }
            }
        }.start();
    }

    /**
     * Caches the specified value with the two specified keys in this double mapping.
     *
     * @param key1 - first key for the value
     * @param key2 - second key for the value
     * @param value - the value to cache
     */
    public synchronized void put(K1 key1, K2 key2, V value) {
        strongRefMap.put(key1, value);
        weakRefMap.put(key2, new WeakReference<V>(value));
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or null if this map contains no mapping for the key.
     *
     * @param key - the key whose associated value is to be returned
     *
     * @return the value to which the specified key is mapped,
     * or null if this map contains no mapping for the key
     */
    public synchronized V get1(K1 key) {
        return strongRefMap.get(key);
    }

    /**
     * Returns the value to which the specified key is mapped,
     * or null if this map contains no mapping for the key.
     *
     * @param key - the key whose associated value is to be returned
     *
     * @return the value to which the specified key is mapped,
     * or null if this map contains no mapping for the key
     */
    public synchronized V get2(K2 key) {
        WeakReference<V> ref = weakRefMap.get(key);
        if (ref != null) {
            return ref.get();
        }
        return null;
    }

    private static <K, V> Map<K, V> createLRUMap(final int maxEntries) {
        return new LinkedHashMap<K, V>(maxEntries+1, 0.75F, true) {
            private static final long serialVersionUID = -7654704024424510182L;

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    private static <K, V> Map<K, V> createSyncedLRUMap(final int maxEntries) {
        Map<K, V> cache = createLRUMap(maxEntries);
        return (Map<K, V>)Collections.synchronizedMap(cache);
    }

    /**
     * Returns an Iterable Collection of values, which is
     * a copied instance of the underlying mapped values.
     * <p>
     * This method returns only a copy of the values to prevent
     * outside operations on the inner map structure.
     */
    public Collection<V> valuesCopy() {
        return new LinkedList<V>(strongRefMap.values());
    }

    /**
     * Returns a Set of type K1 keys, which is
     * a copied instance of the underlying mapped keys.
     * <p>
     * This method returns only a copy of the keys to prevent
     * outside operations on the inner map structure.
     */
    public Set<K1> keySet1Copy() {
        return new HashSet<K1>(strongRefMap.keySet());
    }

    /**
     * Returns a Set of type K2 keys, which is
     * a copied instance of the underlying mapped keys.
     * <p>
     * This method returns only a copy of the keys to prevent
     * outside operations on the inner map structure.
     */
    public Set<K2> keySet2Copy() {
        return new HashSet<K2>(weakRefMap.keySet());
    }
}
A double mapped cache with WeakReferences for second key type
java;cache;hash table;weak references
null
_unix.276402
What happens with fonts in my Java apps? A SoapUI screen as an example (see the toolbox)... I have no idea where the problem is.

SYS: up-to-date Arch Linux.

$ archlinux-java status
Available Java environments:
java-7-openjdk
java-8-jdk (default)
java-8-jre/jre
java-8-openjdk
Font bug in Java apps on Arch Linux
arch linux;java;fonts;jdk
null
_softwareengineering.315274
I have a client that's requested a detailed Scope of Work/Statement of Work. Upon looking into it, it seems they want timelines, costs, features, the whole nine. In order to do a detailed SOW, one basically has to have the whole system planned out ahead of time. Yet, the customer is not happy with any development approach other than Agile. Seems to me, either:

A detailed SOW = Waterfall approach, or
A detailed SOW continuously needs to be updated when taking an Agile approach, in which case the detail of the SOW seems awfully pointless.

I'm not really a big shop, and it seems to me that putting together a detailed SOW (especially after explaining that estimates are difficult for all the reasons we know estimates are difficult) with timelines and costs, and further maintaining it through an iterative/revisit-often approach, seems like a whole lot of overhead. On the flip side, a much more general SOW with some grey area in the detail seems much more appropriate and easier to maintain; however, this isn't the impression I get when I read up on what an SOW should contain.

How do you balance a detailed SOW with an agile approach? And does it seem correct to say that a detailed SOW is veering toward a waterfall approach to things?

(I might note that I never work on fixed-bid pricing, all is hourly, just because...)
Detailed Scope of Work.. Waterfall?
client relations;contract;design by contract
Agile doesn't preclude you from having a plan/requirements upfront. It precludes you from assuming that those plans won't change.

It also doesn't preclude you from having a deadline. We all have deadlines. It does, however, give you an earlier sense than waterfall development of where you stand with regards to meeting that deadline with the desired requirements, since at the end of each sprint you have a shippable product and can gauge how far you are from the finish line.

Martin Fowler describes a way to deal with this that he calls Scope Limbering.

The key to this is this line:

... from the beginning we sought to put the relationship between our companies on a collaborative note rather than a confrontational note. The biggest problem with the fixed scope contract is it immediately pits the client and contractor on opposite sides where they are fighting each other about whether something is a change and who should pay for the change.

(emphasis mine)

What Fowler did when facing a situation almost exactly like yours was to build a buffer into the quote, then work very closely with the client (which you should be doing anyway) to demonstrate to them how much the requirements change as the process goes on.
_softwareengineering.98691
While I understand what the final keyword is used for in the context of classes and methods, as well as the intent of its use in regard to variables, the project I just started working on seems to have an excessive number of them, and I'm curious as to the logic behind it.

The following snippet of code is just a short example, as I don't see much point in the final keyword for the key and value variables:

private <K, V> Collection<V> getValuesForKeys(
        final Map<K, V> map, final Collection<K> keys) {
    final Collection<V> values = new ArrayList<V>(keys.size());

    for (final K key : keys) {
        final V value = map.get(key);
        if (value != null) {
            values.add(value);
        }
    }

    return values;
}

I have been doing a bit of reading on the usage through articles I have found via Google; however, does the pattern really do things such as help the compiler optimize the code?
Excessive use final keyword in Java
java;coding standards;final
There are many references suggesting a liberal use of final. The Java Language Specification even has a section on final variables. Various rules in static analysis tools also support this - PMD even has a number of rules to detect when final can be used. The pages that I linked to provide a number of points as to what final does and why you should use it liberally.

For me, the liberal use of final accomplishes two things in most code, and these are probably the things that drove the author of your code sample to use it:

It makes the intent of the code much more clear, and leads to self-documenting code. Using final prevents the value of a primitive from changing or a new object being made and overwriting an existing object. If there's no need to change the value of a variable and someone does, the IDE and/or compiler will provide a warning. The developer must either fix the problem or explicitly remove the final modifier from the variable. Either way, thought is necessary to ensure the intended outcome is achieved.

Depending on your code, it serves as a hint for the compiler to potentially enable optimizations. This has nothing to do with compile time, but with what the compiler can do during compilation. It's also not guaranteed to do anything. However, signaling the compiler that the value of this variable or the object referred to by this variable will never change could potentially allow for performance optimizations.

There are other advantages as well, related to concurrency. When applied at a class or method level, final has to do with ensuring what can be overridden or inherited. However, these are beyond the scope of your code sample. Again, the articles I linked to go far more in-depth into how you can apply final.
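A tiny example of the first point (a sketch, not from the question's codebase):

import java.util.List;

class FinalDemo {
    void process(final List<String> names) {
        final int max = names.size();
        // names = null;  // would not compile: names is final
        // max = 0;       // would not compile: max is final
        for (final String name : names) {
            System.out.println(name + " of " + max);
        }
    }
}

The commented-out assignments are exactly the kind of accidental reassignment that final turns into a compile-time error.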
_unix.352358
So I've been trying to set up a VPN in the Whonix workstation, but it is proving somewhat difficult for me. The specific VPN is NordVPN, and I was following their instructions for a command-line install, but it fails at step 4 as it cannot download the CA and config files.

I then downloaded them on another system and put them in the Whonix workstation. When trying to copy the files into /etc/openvpn I am unable to do so because I don't have root permissions. From memory I can use gksudo to operate the file manager as root and put the files where they need to be. gksudo isn't a recognised command, however, so that stopped me again.

I don't really want to wreck my system by tapping around in the command line too much; is there an easy way to put the files where they need to go? If I had the network manager in the Whonix workstation it would also be a lot easier, but again it is hard to get installed etc.
openvpn in whonix workstation? gksudo to move files as root?
linux;debian;networking;permissions;whonix
null
_unix.345488
I am a little lost with kwalletmanager5. I upgraded my system from OpenSuse LEAP 42.1 to OpenSuse LEAP 42.2, with KDE5, a couple of months ago. Since I updated my system I have problems with the kwalletmanager, or better, with the interplay of kwalletmanager5 and the KDE4 version of kwalletmanager.

> kwalletmanager --version
Qt: 4.8.6
KDE: 4.14.28
KDE Wallet Manager: 2.0

> kwalletmanager5 --version
kwalletmanager5 16.12.1

> kwalletd --version
Qt: 4.8.6
KDE: 4.14.28
KDE-Dienst für Passwortspeicher: 0.2

The main problem I encountered was that the new kwalletmanager5 does not accept a GPG key to encrypt the wallet. This is a problem since I used GPG in KDE4. Now, each time I open kwalletmanager5, I first have to enter the password for kwalletmanager5 and then for kwalletmanager4.

So I tried to uninstall the KDE4 version of kwalletmanager, which caused problems with some other applications... Somewhere I read that you need to have both versions installed so that KDE4 applications can use kwalletmanager and KDE5 applications kwalletmanager5. All in all, I have two questions:

Is there a way to unlock kwalletmanager with kwalletmanager5?
I would like to use GPG encryption with kwalletmanager5. Is there a way?

Consider me a guy that has no clue. Be pedantic in your responses please! :D I hope this is the correct place to ask the question...
How to unlock Kwalletmanager KDE4 with kwalletmanager5 KDE5?
kde;opensuse;kde5;kwallet
null
_unix.146724
I try to mount an SDHC card under GNU/Linux. Unlike what happens usually, /var/log/syslog doesn't mention sdb1, just:

Jul 26 16:07:53 xvii kernel: [  159.404842] scsi 6:0:0:0: Direct-Access     Singim   SD Card MMC/SD   1.4F PQ: 0 ANSI: 0 CCS
Jul 26 16:07:53 xvii kernel: [  159.405115] sd 6:0:0:0: Attached scsi generic sg2 type 0
Jul 26 16:08:01 xvii kernel: [  168.239600] sd 6:0:0:0: [sdb] Attached SCSI removable disk

Moreover, fdisk -l /dev/sdb outputs nothing. What should I do?

EDIT (2014-07-27): I could get at this SD card again, and it seems to be faulty. Yesterday, I was trying it via a USB card reader. Today, I've tried it directly by putting it in the SD slot of my laptop, and I got thousands of I/O errors:

Jul 27 11:56:35 xvii kernel: [ 8091.317234] mmc0: new high speed SDHC card at address 1234
Jul 27 11:56:35 xvii kernel: [ 8091.317477] mmcblk0: mmc0:1234 SA04G 3.68 GiB
Jul 27 11:56:35 xvii kernel: [ 8091.320119] mmc0: Got data interrupt 0x00200000 even though no data operation was in progress.
Jul 27 11:56:35 xvii kernel: [ 8091.322277] mmcblk0: error -84 transferring data, sector 0, nr 8, cmd response 0x900, card status 0xb00
Jul 27 11:56:35 xvii kernel: [ 8091.322289] mmcblk0: retrying using single block read
Jul 27 11:56:35 xvii kernel: [ 8091.324862] mmcblk0: error -84 transferring data, sector 0, nr 8, cmd response 0x900, card status 0x0
Jul 27 11:56:35 xvii kernel: [ 8091.324872] end_request: I/O error, dev mmcblk0, sector 0
Jul 27 11:56:35 xvii kernel: [ 8091.326398] mmcblk0: error -84 transferring data, sector 1, nr 7, cmd response 0x900, card status 0x0
Jul 27 11:56:35 xvii kernel: [ 8091.326405] end_request: I/O error, dev mmcblk0, sector 1
Jul 27 11:56:35 xvii kernel: [ 8091.329056] mmcblk0: error -84 transferring data, sector 2, nr 6, cmd response 0x900, card status 0x0
[...]

and gdisk -l didn't find any partition table, and lsblk output this about the card:

mmcblk0 179:0 0 3.7G 0 disk

A bit later I tried again, and the card was recognized:

Jul 27 12:08:00 xvii kernel: [ 8776.617712] mmc0: new high speed SDHC card at address 1234
Jul 27 12:08:00 xvii kernel: [ 8776.618117] mmcblk0: mmc0:1234 SA04G 3.68 GiB
Jul 27 12:08:00 xvii kernel: [ 8776.620324] mmcblk0: p1

and I could mount it:

/dev/mmcblk0p1 on /media/mmc type vfat (rw,nosuid,nodev,noexec,noatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=utf8,shortname=mixed,errors=remount-ro,user=vinc17)

gdisk -l /dev/mmcblk0 found only an MBR partition table, but the second partition table overlaps the last partition.
mounting a SD card without a partition
linux;mount;sd card
The link /dev/$disk points to the whole of a block device, but, on a partitioned disk without unallocated space, the only part which isn't also represented in /dev/$disk[num] is the first 2kb-4mb or so - $disk's partition table. It's just some information written to the raw device in a format that the firmware and/or OS can read. Different systems interpret it in different ways and for different reasons. I will cover three.

On BIOS systems this table is written in the MBR master boot record format so the firmware can figure out where to find the bootable executable. It reads the partition table because, in order to boot, BIOS reads in the first 512 bytes of the partition the table marks with the bootable flag and executes it. Those 512 bytes usually contain a bootloader (like grub or lilo on a lot of linux systems) that then chainloads another executable (such as the linux kernel) located on a partition formatted with a filesystem the loader understands.

On EFI systems and/or BIOS systems with newer kernels this partition table can be in the GPT GUID partition table format. EFI firmware understands the FAT filesystem and so it looks for the partition the table describes with the EFI system partition flag, mounts it as FAT, and attempts to execute the path stored in its Boot0000-{GUID} NVRAM variable. This is essentially the same task that BIOS bootloaders are designed to do, and, so long as the executable you wish to load can be interpreted by the firmware (such as most Linux kernels since v. 3.3), obviates their use. EFI firmware is a little more sophisticated.

After boot, if a partition table is present and the kernel understands it, /dev/${disk}1 is mapped to the 4mb+ offset and ends where the partition table says it does. Partitions really are just arbitrary logical dividers like:

start of disk | partition table | partition 1 | ... and so on | end of disk

Though I suppose it could also be:

s.o.d. | p.t. | --- unallocated raw space --- | partition 1 | ... | e.o.d.

It all depends on the layout you define in the partition table - which you can do with tools like fdisk for MBR formats or gdisk for GPT formats.

The firmware needs a partition table for the boot device, but the kernel needs one for any subdivided block device on which you wish it to recognize a filesystem. If a disk is partitioned, without the table the kernel would not locate superblocks in a disk scan. It reads the partition table and maps those offsets to links in /dev/$disk[num]. At the start of each partition it looks for the superblock. It's just a few kb of data (if that) that tells the kernel what type of filesystem it is. A robust filesystem will distribute backups of its superblock throughout its partition. If the partition does not contain a readable superblock which the kernel understands, the kernel will not recognize a filesystem there at all.

In any case, the point is you don't really need these tables on any disk that need not ever be interpreted by firmware - like on disks from which you don't boot (which is also the only workable GPT+BIOS case) - and on which you want only a single filesystem. /dev/$disk can be formatted in whole with any filesystem you like. You can mkfs.fat /dev/$disk all day if you want - and probably Windows will anyway, as it generally does for device types it marks with the removable flag.

In other words, it is entirely possible to put a filesystem superblock at the head of a disk rather than a partition table, in which case, provided the kernel understands the filesystem, you can:

mount /dev/$disk /path/to/mount/point

But if you want partitions and they are not already there then you need to create them - meaning write a table mapping their locations to the head of the disk - with tools like fdisk or gdisk as mentioned.

All of this together leads me to suggest that your problem is one of these three:

your disk has no partition table and no filesystem - it was recently wiped, never used, or is otherwise corrupt.

your disk's partition table is not recognized by your os kernel - BIOS and EFI are not the only firmware types. This is especially true in the mobile/embedded realm where an SDHC card could be especially useful, though many such devices use layers of less-sophisticated filesystems that blur the lines between a filesystem and a partition table.

your disk has no partition table and is formatted with a filesystem not recognized by your os kernel

After rereading your comment above I'm fairly certain it is the latter case. I recommend you get a manual on that tv, try to find out if you can get whatever filesystem it is using loaded as a kernel module in a desktop linux, and mount the disk there.
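To check which of the three cases applies, a short diagnostic/format sketch (/dev/sdb is the device from the question; the mkfs step is destructive, so only run it on a disk you can wipe):

blkid /dev/sdb       # prints a filesystem signature found on the raw device, if any
file -s /dev/sdb     # alternative: identify whatever sits at the head of the disk
mkfs.ext4 /dev/sdb   # DESTRUCTIVE: put a filesystem on the whole device, no table
mount /dev/sdb /mnt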
_cs.64873
According to the Wikipedia page on the Push-relabel maximum flow algorithm:

Subcubic $O(|V||E| \log\frac{|V|^2}{|E|})$ time complexity can be achieved using dynamic trees, although in practice it is less efficient.

What if $|E| = |V|^2$? Don't we then have this?

$$O(|V||E| \log\frac{|V|^2}{|V|^2})$$
$$O(|V||E| \log 1)$$
$$O(|V||E| \cdot 0)$$
$$O(0)$$

There's no way this can be right, so what am I misunderstanding?
Does the running-time of this push-relabel algorithm become zero if there are many edges?
graphs;algorithm analysis;runtime analysis;landau notation
First of all, we usually assume that there are no self-loops. As a consequence, $|E| \leq |V|(|V| - 1) < |V|^2$.

That aside, there is (probably) a misuse of Landau notation here. The given bound is (probably) not correct for all (families of) graphs. For instance, if $|E| = 0$ the same arithmetic issue occurs but the algorithm will certainly have non-zero running time.

The authors probably simplified away terms of the order of, say, $\Theta(|V|)$ or $\Theta(|E|)$. They do this because they are asymptotically dominated by the given term -- if all quantities are non-zero. This is a fundamental problem with using Landau notation with more than one variable without being rigorous about it.

So, even if the leading term would become zero (which can certainly happen and be correct!) lower-order terms would be non-zero and lead to useful bounds.

Example

Consider this very simple algorithm:

1 def algo(n, m)
2   x = 0
3   for i = 1 .. n
4     for j = 1 .. m
5       x += 1
6   return x

A standard analysis tells you immediately that the running-time is in $\Theta(nm)$ as that is how often the line x += 1 will be executed. Now, if n or m is zero, does the algorithm have zero running time? No!

A more precise analysis leads to the running-time being

$\qquad c_1 + c_2 n + c_3 m + c_4 nm$

with suitable constants $c_1$ (cost of lines 2 and 6, setup of line 3), $c_2$ (management of line 3, setup of line 4), $c_3$ (management of line 4) and $c_4$ (line 5).

Background

Landau notation only makes sense if the parameters go to infinity in suitable ways. We can not just insert finite values for some parameters and expect it to behave well.

Refer to A general definition of the O-notation for algorithm analysis by Kalle Rutanen et al. for details.
_unix.202082
So I read this wiki article on deduplication with btrfs. However, it doesn't describe the semantics that btrfs deduplication follows.

Assume you have a dozen files. They all contain identical data, but their user and group ownership and permissions (along with extended attributes, ACLs etc.) may differ.

Will the deduplication feature of btrfs allow me to cut the on-disk size down to approximately one twelfth of what it was before?

Hardlinks obviously won't work because their semantics imply shared metadata (ownership, permissions).

My kernel version is 3.16.
Deduplication semantics with btrfs - meta-data differs, file data identical
btrfs;deduplication
Deduplication works on a block level. If you have files with identical content but different metadata, then, assuming a fully deduplicated system, the contents will only be stored once. Even if the files are only partially identical, deduplication can save space. For example, if you had two-byte blocks and files containing

file1 = ABCD
file2 = AABAAB
file3 = AAB

then they would be stored in 5 blocks:

file1 = block1,block2
file2 = block3,block4,block1
file3 = block3,block5

If you have identical directories (i.e. directories containing files with the same names and the same inode numbers, e.g. as the result of cp -al or a similar file-level deduplicating incremental backup) then they too could be stored in the same blocks.
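For completeness, a sketch of how you might actually trigger out-of-band deduplication on such files (duperemove is one common userspace tool for this; the path is a placeholder, and only data extents end up shared - the differing metadata stays per-file):

# scan the tree, hash extents, and submit duplicate ranges
# to the kernel's deduplication ioctl
duperemove -d -r /mnt/btrfs/files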
_softwareengineering.274306
When writing non-member, free functions, can they be placed in the global namespace so long as the signature specifies a namespace-scoped object? For example, in the code below, is Example 2 acceptable design? Which is better, Example 1 or Example 2? Does your answer change if the function is an operator overload?

Research: I don't think this question was addressed when I took a C++ programming class (or it may have been addressed before I was ready to understand it). I tried searching for the answer with a few different permutations of the keywords but did not get any quality hits.

#include <iostream>
#include <string>
#include <sstream>

/* Example 1 */
namespace myapp
{
    namespace xyz
    {
        class Thing
        {
        public:
            Thing( int value ) : myValue( value ) {}
            void setValue( int value ) { myValue = value; }
            int getValue() const { return myValue; }
        private:
            int myValue;
        };

        std::string toString( const Thing& thing )
        {
            std::stringstream ss;
            ss << thing.getValue();
            return ss.str();
        }
    }
}

/* Example 2 */
namespace myapp
{
    namespace xyz
    {
        class AnotherThing
        {
        public:
            AnotherThing( int value ) : myValue( value ) {}
            void setValue( int value ) { myValue = value; }
            int getValue() const { return myValue; }
        private:
            int myValue;
        };
    }
}

std::string toString( const myapp::xyz::AnotherThing& thing )
{
    std::stringstream ss;
    ss << thing.getValue();
    return ss.str();
}

int main(int argc, const char * argv[])
{
    /* Example 1 */
    myapp::xyz::Thing t( 1 );
    std::cout << myapp::xyz::toString( t ) << std::endl;

    /* Example 2 */
    myapp::xyz::AnotherThing a( 2 );
    std::cout << toString( a ) << std::endl;

    return 0;
}
Free Standing Functions in Global Namespace
design;c++;programming practices;namespace
Boost uses a lot of free functions (free functions are a good thing). The free functions are kept close to the namespace that contains the objects or other related classes they operate on. Then the free functions are hoisted into outer namespaces as required (with using declarations).

This technique gives control over the scope of free functions. Utility free functions can be defined in a namespace and then not hoisted, treating them as private-like functions.

In this case, Example 1 is the more appropriate choice, but adding hoisting declarations in the outer namespaces will allow the functions to be referenced more easily.
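A minimal sketch of that layout (a condensed version of the question's Example 1; std::to_string stands in for the stringstream, and the using-declaration is the only addition):

#include <string>

namespace myapp
{
    namespace xyz
    {
        class Thing
        {
        public:
            explicit Thing( int value ) : myValue( value ) {}
            int getValue() const { return myValue; }
        private:
            int myValue;
        };

        // defined next to Thing, in the same namespace
        std::string toString( const Thing& thing )
        {
            return std::to_string( thing.getValue() );
        }
    }

    // hoist: make xyz::toString visible as myapp::toString
    using xyz::toString;
}

int main()
{
    myapp::xyz::Thing t( 1 );
    toString( t );          // found via argument-dependent lookup
    myapp::toString( t );   // found via the hoisting using-declaration
    return 0;
}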
_unix.243657
I'm modifying a bunch of initramfs archives from different Linux distros in which normally only one file is being changed.

I would like to automate the process without switching to the root user to extract all files inside the initramfs image and pack them again.

First I tried to generate a list of files for gen_init_cpio without extracting all contents of the initramfs archive, i.e. parsing the output of cpio -tvn initrd.img (like ls -l output) through a script which changes all permissions to octal and arranges the output into the format gen_init_cpio wants, like:

dir /dev 755 0 0
nod /dev/console 644 0 0 c 5 1
slink /bin/sh busybox 777 0 0
file /bin/busybox initramfs/busybox 755 0 0

This involves some replacements and the script may be hard for me to write, so I've found a better way and I'm asking how safe and portable it is:

In some distros we have an initramfs file with concatenated parts, and apparently the kernel parses the whole file, extracting all parts packed at a 1-byte boundary, so there is no need to pad each part to a multiple of 512 bytes. I thought this 'feature' could be useful for me to avoid recreating the archive when modifying files inside it. Indeed it works, at least for Debian and CloneZilla.

For example, if we have modified the /init file in the initrd.gz of Debian 8.2.0, we can append it to the initrd.gz image with:

$ echo ./init | cpio -H newc -o | gzip >> initrd.gz

so initrd.gz has two concatenated archives, the original and its modifications. Let's see the result of binwalk:

DECIMAL       HEXADECIMAL     DESCRIPTION
--------------------------------------------------------------------------------
0             0x0             gzip compressed data, maximum compression, has original file name: initrd, from Unix, last modified: Tue Sep 1 09:33:08 2015
6299939       0x602123        gzip compressed data, from Unix, last modified: Tue Nov 17 16:06:13 2015

It works perfectly. But is it reliable? What restrictions do we have when appending data to initramfs files? Is it safe to append without padding the original archive to a multiple of 512 bytes? From which kernel version is this feature supported?
Appending files to initramfs image - reliable?
kernel;boot;initramfs;cpio
It's very reliable and supported by all kernel versions that support initramfs, AFAIK. It's a feature of the cpio archives that initramfs images are made up of. cpio just keeps on extracting its input... we might know the file is two cpio archives one after the other, but cpio just sees it as a single input stream.

Debian advises exactly this method (appending another cpio to the initramfs) to add binary-blob firmware to their installer initramfs. For example:

https://wiki.debian.org/DebianInstaller/NetbootFirmware

Initramfs is essentially a concatenation of gzipped cpio archives which are extracted into a ramdisk and used as an early userspace by the Linux kernel. Debian Installer's initrd.gz is in fact a single gzipped cpio archive containing all the files the installer needs at boot time. By simply appending another gzipped cpio archive - containing the firmware files we are missing - we get the show on the road!
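As a rough sketch of that Debian recipe (the paths are illustrative; see the wiki page for the exact steps):

# collect the extra files (e.g. firmware blobs) in a staging directory
cd /tmp/extra
# pack them as a newc cpio, gzip, and append to the existing image;
# on extraction, entries from the appended archive overwrite earlier ones
find . | cpio -H newc -o | gzip >> /path/to/initrd.gz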
_codereview.125825
In an attempt to think up a more efficient string matching algorithm than the naïve \$O(n \cdot k)\$ one, I came up with a slightly modified approach.

In my algorithm (code in Java), I made a small modification to the \$O(n \cdot k)\$ approach, where every time a character match starting from i fails after k positions, we restart from position i+1. In this algorithm, however, if matching fails at position k, I resume comparison from k-1, because we know the characters up to k-1 are common.

Below is the implementation of this approach. I would really appreciate anyone taking a look at the code to:

1. Confirm that things work the way I explained. My tests seem to be working fine.
2. Tell me the complexity of this approach. Is it still worst case \$O(n \cdot k)\$?
3. Give an example of a worst-case input.

Code:

public class ModifiedSearch {

    private String text;

    public ModifiedSearch(String text) {
        this.text = text;
    }

    public int findPattern(String pattern) {
        int textLength = text.length();
        int patLength = pattern.length();
        int common = 0;
        boolean broken = false;
        for (int tIndex = 0; tIndex <= textLength - patLength; broken = false, tIndex++) {
            int k = 0;
            while (tIndex + common + k < textLength && common + k < patLength) {
                if (text.charAt(tIndex + common + k) != pattern.charAt(common + k)) {
                    broken = true;
                    common = (common + k - 1 < 0) ? 0 : common + k - 1;
                    break;
                }
                k++;
            }
            if (!broken && common + k == patLength) {
                return tIndex;
            }
        }
        return -1;
    }
}

I have tried this with a combination of inputs, large and small, but this does not seem to degrade to \$O(n \cdot k)\$. Any help greatly appreciated.
String Matching w/o repeating the comparisons for common part of the two strings
java;algorithm;strings
null
_softwareengineering.129999
My question is rather a design question. In my program I ended up with a data structure that looks something like this:

private ConcurrentHashMap<A, ConcurrentHashMap<B, ConcurrentHashMap<Integer, C>>> services =
    new ConcurrentHashMap<A, ConcurrentHashMap<B, ConcurrentHashMap<Integer, C>>>();

Is there a way to handle such a data structure more elegantly? Thanks!

Edit: A, B and C are business classes. An A instance can have (as an association) many Bs, and a B can have many Integer-to-C mappings.
Is there a way to handle nested Collections more elegantly?
java;design;data structures
Create a class Triple with fields for A, B and Integer, override hashCode() and equals(), and use Map<Triple, C> instead of Map<A, Map<B, Map<Integer, C>>>.

In this approach you put all elements in one map, with a larger possible range of keys.
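A minimal sketch of that composite key (A, B and C are the question's business classes, so this assumes they provide sensible equals()/hashCode(); Objects.hash is java.util):

import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class Triple {
    private final A a;
    private final B b;
    private final int index;

    Triple(A a, B b, int index) {
        this.a = a;
        this.b = b;
        this.index = index;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Triple)) return false;
        Triple t = (Triple) o;
        return index == t.index && a.equals(t.a) && b.equals(t.b);
    }

    @Override
    public int hashCode() {
        return Objects.hash(a, b, index);
    }
}

// one flat, thread-safe map instead of three nested ones:
ConcurrentMap<Triple, C> services = new ConcurrentHashMap<Triple, C>();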
_cogsci.9499
I'm trying to track down a source for an idea I saw in a documentary a while ago (sorry, no idea where!) It demonstrated how children learn differently at different ages, through showing them balancing an unevenly weighted bar (in other words, a bar that looks the same along its length, but is in fact heavier at one end).Loosely, the programme suggested that: young children (toddlers?) can get such a bar to balance intuitively, through trial-and-error; slightly older children (5?) try to balance the bar half way, and so say it won't balance; older children again, understand that the bar is not evenly weighted and so once again realise that experimentation is needed to find the balancing point.Am I completely imagining that I saw some research of this kind? Or does anybody know where I can find a link to such a piece of research?
Study showing how different ages of children learn about balancing objects
learning;cognitive development
I suspect you are thinking of Karmiloff-Smith's work. Here's an excerpt from an '88 article almost exactly matching the description.

Children were asked to balance a series of blocks on a narrow metal support. Some of the blocks had their weight evenly distributed and balanced at their geometric centre. Others had been drilled with lead in one end and, although they looked identical to the first type, they actually balanced way off centre. [...] Very briefly, it was shown that 4 and 5 year olds could do this task very easily. They simply picked up each block, moved it along the support until they felt the direction of imbalance, and corrected that by using proprioceptive feedback until the block balanced. By contrast, 6 to 7 year olds placed every block at its geometric centre and were thus incapable of balancing anything but the blocks where the weight was evenly distributed. Finally, 8 to 9 year olds were able to balance all the types of block, as had the youngest subjects.

I'm not super familiar with the author, but given the publication time and terminology, this seems like a theoretical predecessor to the more recent work on embodied cognition, dynamic systems and ecological psychology.

Karmiloff-Smith, A. (1988). The Child is a Theoretician, Not an Inductivist. Mind & Language, 3(3), 183-196.
_unix.10434
Short question: I have two dynamically generated tar archives (so they have different timestamps). How can I compare them, ignoring any differences in time?

Background: I am doing some backup, in which I use a script to generate the things that need to be backed up, put them into a directory, then tar the directory and keep several old versions. The backup script needs to run every 30 minutes to make sure we don't lose hours of work.

Now I realize that there are periods of time where the data doesn't change, so it doesn't make sense to store duplicates of the same thing over and over again. I would like to compare the archives before saving. My attempt was to run cmp newdata.tar.gz olddata.tar.gz and only store newdata.tar.gz if it contains new data. Apparently that didn't work, because of the differing timestamps.
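One direction I'm considering is making the archives reproducible in the first place, so a plain byte comparison works (a sketch, assuming GNU tar with --sort support and GNU gzip; file names match my script above):

# fixed member mtimes, stable ordering, and no timestamp in the gzip
# header - identical data then yields byte-identical archives
tar --sort=name --mtime='2000-01-01 00:00:00' -cf - datadir | gzip -n > newdata.tar.gz

if cmp -s newdata.tar.gz olddata.tar.gz; then
    rm newdata.tar.gz                    # identical content - drop the duplicate
else
    mv newdata.tar.gz olddata.tar.gz     # keep the new snapshot
fi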
Compare the contents of dynamically generated archives
backup;tar
null
_unix.3955
In my struggles to install ClearOS correctly, I now have another problem: after the first boot I keep getting stuck on "Determining IP information for wlan0...". It doesn't move forward at all. I've even waited for 20 minutes before continuing, but it didn't make a difference. Right now my computer is unbootable because of this.

Any way to fix this?
Stuck on Determining IP information for wlan0 at boot
linux;networking;boot
I'm not sure, but it sounds like a DHCP issue. Maybe the access point you want to connect to is not set up with DHCP, or is not available? For now, try pressing Ctrl-C when this message appears; sometimes that works (I don't know about ClearOS). If it works, let this service start in the background.
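If DHCP really is the culprit, one possible workaround is a static configuration. ClearOS is Red Hat-derived, so the interface config should live under /etc/sysconfig/network-scripts - verify the exact path on your system; the addresses below are placeholders:

# /etc/sysconfig/network-scripts/ifcfg-wlan0
DEVICE=wlan0
ONBOOT=yes
BOOTPROTO=static      # stop waiting on a DHCP server that never answers
IPADDR=192.168.1.50   # placeholder - pick a free address on your LAN
NETMASK=255.255.255.0
GATEWAY=192.168.1.1   # placeholder - your router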
_cs.59519
I am trying to compute a single-source shortest path in an interprocedural control flow graph (iCFG). That is a directed, unweighted, cyclic graph with edge labels. Some of these labels represent interprocedural calls and returns; these determine which edge must be traversed on return from a call.

Is there a shortest path algorithm adapted to this kind of graph?

Edit / Clarification: suppose that I am able to identify a pair of nodes of interest (n1, n2) in the graph using a plain, boring graph traversal. I want to obtain the shortest path between n1 and n2 that respects the constraints of the graph.
Single-source shortest path algorithm for graphs representing stacked behavior
algorithms;graphs;reference request;shortest path
null
_unix.184935
By mistake I have deleted the dpkg executable from the bin folder. Now whenever I try to install anything I get the following error message:

Sub-process /usr/bin/dpkg returned an error code (100)
Accidentally deleted dpkg executable
ubuntu;dpkg
null
_unix.127608
I know there are tools to inspect packets (e.g. Wireshark) or to simulate latency/packet loss (e.g. netem), but they all seem to require administrative permissions to inspect/modify packets.

I'm looking for something which could intercept packets for a single application and be usable by a standard (non-root) user. I'd like some kind of valgrind tool for the network, where you use it to wrap your application and it intercepts network requests to inspect/modify them.

My main use case is to enable students to use such tools on university computers in a simple way (i.e. without needing virtual machines or support from the administrators).

Do such tools exist? Otherwise, what prevents them from actually existing in the first place?

Note: tools such as setcap, which need to be configured by system administrators and may grant too many powers to the users (e.g. inspecting every packet, and not just the ones generated by their applications), aren't suitable.

Edit: OK, so after some research, I found out about the LD_PRELOAD trick, which could be used as a poor man's wrapper to intercept specific library functions (e.g. connect, sendto, etc.) and count/modify packets. Apparently the reasons why there are not so many non-root tools based on this are:

1. it's not useful for security purposes (it can be easily circumvented), and
2. most people doing useful things with the network already need root access anyway, so outside very specific applications tailored for learning, there is no real demand.

But this is technically feasible, in case any curious students are willing to develop such applications themselves.

Another technique would be to code a valgrind plug-in to deal with this and use valgrind to run the program. But it seems way overkill, so no sane people have done it.

Could someone more knowledgeable than me just confirm whether my previous statements are correct?
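To make the LD_PRELOAD idea concrete, here is a minimal interposer sketch (assuming glibc/Linux; it only logs connect() calls, but the same pattern would apply to sendto() and friends):

/* preload.c - build with: gcc -shared -fPIC -o preload.so preload.c -ldl
   run with:   LD_PRELOAD=./preload.so ./your_program */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/socket.h>

typedef int (*connect_fn)(int, const struct sockaddr *, socklen_t);

int connect(int fd, const struct sockaddr *addr, socklen_t len)
{
    /* look up the real connect() the application would have called */
    connect_fn real_connect = (connect_fn) dlsym(RTLD_NEXT, "connect");

    fprintf(stderr, "[preload] connect() on fd %d\n", fd);
    return real_connect(fd, addr, len);
}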
Non-administrative network-related tools
networking;not root user
null
_cs.77851
I have a DAG. I want to construct a Boolean formula $\varphi$ that represents all paths from a source node to a sink node.

In particular, I have a variable for each vertex. A path $v_1 \to v_2 \to \dots \to v_k$ is represented as the subformula $v_2 \lor \dots \lor v_{k-1}$ over its interior vertices. I want a formula $\varphi$ that is a conjunction of all of these subformulas (one subformula per path from a source node to a sink node).

How can I construct $\varphi$ in CNF form?

Input example:

    W
    ^        (R1 and R2 both reach W)
 R1   R2
   X X       (this signifies a connection to R1 and R2)
 b1   b2
 ^     ^
 |     |
 c1    c2

Desired output in CNF format:

(b1 or R1) and
(b1 or R2) and
(b2 or R1) and
(b2 or R2)

The output represents all paths from c1 or c2 to W.
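To illustrate the construction I'm after, here is a naive sketch (Python; the vertex names match the example above - and yes, the number of paths can blow up exponentially, which is part of why I'm asking):

# adjacency list for the example DAG above
graph = {
    "c1": ["b1"], "c2": ["b2"],
    "b1": ["R1", "R2"], "b2": ["R1", "R2"],
    "R1": ["W"], "R2": ["W"],
    "W": [],
}
sources, sink = ["c1", "c2"], "W"

def paths(node, target, prefix=()):
    """Yield every path from node to target as a tuple of vertices."""
    prefix = prefix + (node,)
    if node == target:
        yield prefix
    for succ in graph[node]:
        yield from paths(succ, target, prefix)

# one clause per path: the disjunction of its interior vertices
clauses = [p[1:-1] for s in sources for p in paths(s, sink)]
for clause in clauses:
    print("(" + " or ".join(clause) + ")")

Running this prints exactly the four clauses of the desired output.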
How to convert a graph into a Boolean formula that represents all paths from a source node to a sink node?
algorithms;graphs;graph theory;boolean algebra
null