id | question | title | tags | accepted_answer |
---|---|---|---|---|
_webmaster.47558 | Tags are causing duplicate content on my website and I want to make sure that I block them from being indexed. Is this the right syntax to use for blocking those Tags from being crawled? User-agent: * Disallow: /blog/tag | Robots.txt Blocking Blog Tags? | seo;robots.txt;duplicate content;canonical url;tags | null |
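A note on the syntax in the question: `Disallow` is prefix-based, so a minimal sketch of the stated goal might look like the block below (the `/blog/tag` path comes from the question; whether a trailing slash is wanted depends on the site's URL scheme). Keep in mind that robots.txt only blocks crawling; a page can still be indexed from external links, so a `noindex` robots meta tag on the tag pages themselves is the more direct fix for indexing.

```
User-agent: *
Disallow: /blog/tag/

# and, in the <head> of each tag page, to keep already-discovered URLs out of the index:
# <meta name="robots" content="noindex, follow">
```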
_codereview.49778 | Implement an iterator (generic) which skips an element if it is equal to the previous element. e.g.: AAABBCCCCD produces ABCD. Please suggest improvements. import java.util.Iterator;public class DeDupIterator<E> implements Iterator<E> {E next = null;Iterator<E> itr;public DeDupIterator(Iterator<E> iter) { itr = iter; next = itr.next();}@Overridepublic boolean hasNext() { if(itr.hasNext()) if (next != null) { return true; } return false;}@Overridepublic E next() { E item=null; while (itr.hasNext()) { item = (E) itr.next(); if (!item.equals(next)) { E temp = next; next = item; return temp; } } next = item; return next;}@Overridepublic void remove() { itr.remove();}} | A deduplicating iterator | java;iterator | public boolean hasNext() { if(itr.hasNext()) if (next != null) { return true; } return false;}First, there's inconsistent usage of braces for your block if statements. Second, if you are already keeping track of what is the next element to be returned by your de-dup iterator, wouldn't it be enough to just check against that? public boolean hasNext() { return next != null;}A suggestion regarding the remove() implementation: the Javadoc API suggests that it can be called only once per call to next(). Since your implementation of next() is already quite different, you may want to re-consider whether your implementation can be as simple as calling remove() on the underlying iterator. In your example, is it expected to be removing only one 'C' or all 'C's when remove() is called? |
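Pulling the answer's suggestions together, here is one sketch of how the whole class could look with the simplified hasNext(), a constructor that tolerates an empty source, and remove() dropped (since, as noted above, delegating it to the underlying iterator has unclear semantics once elements are skipped). This is one possible cleanup, not the only one:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

public class DeDupIterator<E> implements Iterator<E> {
    private final Iterator<E> itr;
    private E next; // look-ahead element; null means exhausted

    public DeDupIterator(Iterator<E> iter) {
        itr = iter;
        next = itr.hasNext() ? itr.next() : null; // don't assume a non-empty source
    }

    @Override
    public boolean hasNext() {
        return next != null; // the look-ahead is the single source of truth
    }

    @Override
    public E next() {
        E current = next;
        if (current == null) {
            throw new NoSuchElementException();
        }
        next = null;
        while (itr.hasNext()) {           // skip duplicates of the element being returned
            E item = itr.next();
            if (!item.equals(current)) {
                next = item;              // first distinct element becomes the new look-ahead
                break;
            }
        }
        return current;
    }

    @Override
    public void remove() {
        throw new UnsupportedOperationException(); // see the remove() caveat above
    }
}
```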
_unix.110299 | System information: Debian WheezyCPU: intel core i7 3770I initially only had en_US.UTF-8 as the default language. This morning, I changed the /etc/local.gen file and un-comment the zh_CN.UTF-8 and run locale-gen:# nano /etc/local.gen# locale-genAfter that, I reboot the system, then I cannot see the log in screen. It is a black screen without any word, any sign or anything on the screenThen, I log into the recovery mode, and check the locale, I saw thislocaleLANG=LC_CTYPE=POSIXLC_NUMERIC=POSIXLC_TIME=POSIXLC_COLLATE=POSIXLC_MONETARY=POSIXLC_MESSAGES=POSIXLC_PAPER=POSIXLC_NAME=POSIXLC_ADDRESS=POSIXLC_TELEPHONE=POSIXLC_MEASUREMENT=POSIXLC_IDENTIFICATION=POSIXLC_ALL=I used dpkg-reconfigure locales to set the locale to en_US.UTF-8 again (disable zh_CN.UTF-8), but the locale still stays in POSIX.dpkg-reconfigureI reinstalled the locale by using dpkg --reinstall install locales. It didn't help either.dpkg --reinstall install localesI think the default locale being set to POSIX might be the problem. Then I edit the .bashrc file (for both root and my account) and added # nano ~/.bashrcaddedexport LC_ALL= en_US.UTF-8export LANG = en_US.UTF-8export LANGUAGE = en_US.UTF-8Now I can see all locale setting being changed to en_US.UTF-8 but I still cannot see the log in page.I did some search, and guess might be related to this bug, which is an really old issue. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=330500I looked at it, but don't know how to use it.And this is highly possibly related to PAM.What else can I do?Update, I have exported the logs.In the auth.log, I saw the following,Jan 21 10:09:13 QLin gnome-keyring-daemon[3864]: couldn't allocate secure memory to keep passwords and or keys from being written to the disk......Jan 21 10:14:18 QLin polkitd(authority=local): Unregistered Authentication Agent for unix-session:/org/freedesktop/ConsoleKit/Session2 (system bus name :1.58, object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)All log files can be found at dropbox | Problem with log in after change locale, maybe relate to PAM | linux;debian;locale;pam | null |
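Two details in the question stand out: the file is /etc/locale.gen (not /etc/local.gen), and the lines added to ~/.bashrc have spaces around the equals sign (export LC_ALL= en_US.UTF-8 sets LC_ALL to empty, and export LANG = en_US.UTF-8 is a shell error, since = is not a valid variable name). Independent of the login-screen problem, a sketch of the usual Debian repair sequence, run as root, might be:

```sh
# make sure the wanted locale is uncommented in the real config file
sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
locale-gen                          # regenerate the compiled locales
update-locale LANG=en_US.UTF-8      # writes the system-wide /etc/default/locale
```

Setting /etc/default/locale this way also removes the need for any exports in ~/.bashrc.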
_computerscience.1892 | I've recently rebuilt the shaders for my program and it stopped working (black screen) on OS X (El Capitan), but it's OK on Linux on a GTX 660. I've tested it on other Apple hardware and it worked on OS X on an R9 395 (but super slow, because of double). So I suppose it's a problem with my Intel HD Graphics 5000. There are no shader compilation errors, and here is my shader code: https://github.com/Marqin/YuriaViewer/blob/4241e384da0f27d26cbf5557518db905a9d40039/vertex.glsl Keep in mind that this software worked on El Capitan with OpenGL 4.1 before the shader rewrite. Here are my glfw hints: glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); My program also checks for GL_ARB_gpu_shader_fp64 and it's available on my MacBook (MacBook Air, mid-2013). I've debugged it a little and it looks like i is always less than vis on OS X; that's why it's black. I've also made a simple test - I typed all the uniform values into the shader by hand and then it wasn't black, but I got some gibberish on screen. Then I changed every dvec3 and dvec2 to the float versions and it showed a nice fractal. So it looks like double is not working on OS X. But how can that be? It says that GL_ARB_gpu_shader_fp64 is available and it doesn't even complain when I request it in the vertex shader. How can I make it work on an OS X MacBook? | How to make double work in OpenGL 4.1 on OS X (Intel HD Graphics 5000)? | opengl;shader | null |
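If the driver's fp64 path is simply broken (as the hand-typed-uniforms test suggests), one workaround that stays within plain float is "double-single" arithmetic, where each value is carried as an unevaluated hi/lo pair of floats. Below is a minimal sketch of just the addition step, based on the classic TwoSum trick; renormalization and the other operators are omitted, and an optimizing GLSL compiler may reassociate these expressions, so treat this as illustrative rather than a drop-in fix:

```glsl
// each "ds" number is a vec2: x = high part, y = low part
vec2 ds_add(vec2 a, vec2 b) {
    float s = a.x + b.x;
    float v = s - a.x;
    float e = (a.x - (s - v)) + (b.x - v);   // rounding error of the high-part sum
    return vec2(s, e + a.y + b.y);           // fold the errors into the low word
}
```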
_softwareengineering.356223 | In an RDBMS, the goals of normalization are: freeing the database of modification anomalies, minimizing re-design when extending, and avoiding bias toward any particular access pattern. The first step involves avoiding redundancy, etc. As data is normalized and divided into multiple tables, join operations are required to satisfy queries placed by the application, and joins are costly. The relational model also suffers from the object-relational impedance mismatch, i.e., the gap between the real world (application access) and the relational world. In contrast, document databases (like MongoDB) do not have this mismatch. The single most important factor in designing a schema with a document database is matching the data access patterns of your application, so database design is fairly easy. They also support transactions now. The denormalization issue still carries over to document databases, unless you link data among collections and map/reduce on application access. Question: why have document-oriented databases not yet replaced relational databases? | Normalisation vs. join trade-off | database;mysql;mongodb;postgres;normalization | null |
_unix.234012 | I have a text file that contains a list of filenames (one filename per line). Now I would like to calculate the total size of all these files. I think I will have to do an ls -la on every line of the file and then accumulate the file sizes. I think that awk will be part of the solution, but that's just a guess. | Sum of the file sizes of a list of files | bash;files;awk | null |
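There is no need for ls here if GNU coreutils are available, since du can total a list directly. Two sketches, assuming the list is in list.txt with one plain filename per line and no embedded newlines (for very long lists, xargs may split the work into several du runs and print several totals; a NUL-separated list fed to du --files0-from=- avoids that):

```sh
# let du do the accumulation; -b counts apparent size in bytes, -c adds a grand-total line
xargs -d '\n' du -bc < list.txt | tail -n 1

# or, closer to the awk idea in the question:
while IFS= read -r f; do stat -c %s "$f"; done < list.txt | awk '{s += $1} END {print s}'
```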
_unix.209008 | ProblemI've searched internet like anything but couldn't find much about limiting upload.The solutions given are not limiting IP basis like this one but LAN as a whole. +-----++--------+ | S || User A |---+ W |+--------+ | I |+--------+ | T | +--------+ +----------+| User B |---+ C +-----| Router |--------| Internet |+--------+ | H | +--------+ +----------+ .... ... / ...+--------+ | H || User N |---+ U |+--------+ | B | +-----+UserA:172.16.10.2UserB:172.16.10.3 RouterPrivate:172.16.0.1UserC:172.16.10.4I want to limit only upload of 172.16.10.3 & 172.16.10.4 using tc htb and iptables What I've already triedaltered the script as per my requirementIF_INET=external# upload bandwidth limit for interfaceBW_MAX=2000# upload bandwidth limit for 172.16.16.11BW_CLIENT=900# first, clear previous settingstc qdisc del dev ${IF_INET} root# top-level htb queue discipline; send unclassified data into class 1:10tc qdisc add dev ${IF_INET} root handle 1: htb default 10# parent class (wrap everything in this class to allow bandwidth borrowing)tc class add dev externel parent 1: classid 1:1 htb \ rate ${BW_MAX}kbit ceil ${BW_MAX}kbit# two child classes## the default child classtc class add dev ${IF_INET} parent 1:1 \ classid 1:10 htb rate $((${BW_MAX} - ${BW_CLIENT}))kbit ceil ${BW_MAX}kbit# the child class for traffic from 172.16.16.11tc class add dev ${IF_INET} parent 1:1 \ classid 1:20 htb rate ${BW_CLIENT}kbit ceil ${BW_MAX}kbit# classify traffictc filter add dev ${IF_INET} parent 1:0 protocol ip prio 1 u32 \ match ip src 172.16.16.11/32 flowid 1:20but this will not work for limiting upload. So what's the solution? | Is it possible to throttle upload bandwidth per `IP` basis using `tc`, `htb` and `iptables` ? (Download limitation not required) | linux;networking;iptables;tc;qos | null |
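(Note in passing that the parent-class line in the script above says dev externel where every other command uses ${IF_INET}.) For the topology drawn in the question, limiting upload for 172.16.10.3 and 172.16.10.4 only, one sketch is a child htb class plus a u32 filter per host; the class IDs and rates below are assumptions. The usual caveat: if the router NATs before packets leave the external interface, the LAN source address is already rewritten there, so either shape on the LAN-facing interface in the other direction or mark packets with iptables -t mangle and match on the fwmark instead of the source IP.

```sh
tc class add dev "$IF_INET" parent 1:1 classid 1:20 htb rate 900kbit ceil 900kbit
tc class add dev "$IF_INET" parent 1:1 classid 1:30 htb rate 900kbit ceil 900kbit

tc filter add dev "$IF_INET" parent 1:0 protocol ip prio 1 u32 \
    match ip src 172.16.10.3/32 flowid 1:20
tc filter add dev "$IF_INET" parent 1:0 protocol ip prio 1 u32 \
    match ip src 172.16.10.4/32 flowid 1:30
```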
_codereview.167271 | TLDR: How can I check DeclaringType for null without causing a further cascade of problems in the function?I have a C# LINQ query that uses reflection to list all Areas, Controllers, and Actions in my MVC Project. It appears to work correctly. I have not encountered any actual errors in the output.Resharper, however, is declaring a possible null reference warning (see screenshot). It is not a false flag because there is always the possibility that Type MembershipInfo.DeclaringType could be null. I went with Resharper's suggestions of fixing the code (Check expression for null or Use Conditional Access). Neither approach helped.I have even tried splitting apart the function and running a for-each loop to detect and change null into an empty string, but because even that for-each loop depends on a possibly null variable, I still get the same Resharper warning.My workaround is to simply ignore the warning, but I was wondering if there is a way to check for null that doesn't cause even more cascading issues? If this were T-SQL I could easily place a case or where statement in the query; alas, I am unable to work that approach here. private static IEnumerable<RwsPagelistViewModel> ActionLevelPermissionList(){ var projectName = Assembly.GetExecutingAssembly().FullName.Split(',')[0]; var asm = Assembly.GetAssembly(typeof(MvcApplication)); var list = asm.GetTypes(). SelectMany(t => t.GetMethods(BindingFlags.Instance | BindingFlags.DeclaredOnly | BindingFlags.Public)) .Where(d => d.ReturnType.Name == ActionResult) .Select(n => new { Area = n.DeclaringType.Namespace.ToString() .Replace(projectName + ., ) .Replace(Areas., ) .Replace(.Controllers, ) .Replace(Controllers, ), Controller = n.DeclaringType != null ? n.DeclaringType.Name.Replace(Controller, ) : null, Action = n.Name, Attributes = string.Join(,, n.GetCustomAttributes().Select(a => a.GetType().Name.Replace(Attribute, ))), AuthorizedRole = AuthorizedRole(n.CustomAttributes) }) .Where(z => z.Attributes.Contains(RoleAuthorize)) .Select(y => new RwsPagelistViewModel { Area = y.Area, Controller = y.Controller, Action = y.Action, AuthorizedRole = y.AuthorizedRole }); return list;} | Addressing System.NullReference Resharper warning in function | c#;linq;reflection | null |
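(The string literals in the listing above lost their quotation marks somewhere along the way; ActionResult, Controller and the rest are strings in the original.) One way to address the warning honestly, rather than suppressing it, is to filter out the null DeclaringType/Namespace cases before the projection, so every later access is actually safe. A sketch of the reshaped query, abbreviated to the relevant part:

```csharp
var list = asm.GetTypes()
    .SelectMany(t => t.GetMethods(BindingFlags.Instance | BindingFlags.DeclaredOnly | BindingFlags.Public))
    .Where(m => m.ReturnType.Name == "ActionResult"
             && m.DeclaringType != null            // filter the null cases up front...
             && m.DeclaringType.Namespace != null)
    .Select(m => new
    {
        Area = m.DeclaringType.Namespace           // ...so the projection cannot dereference null
            .Replace(projectName + ".", "")
            .Replace("Areas.", "")
            .Replace(".Controllers", "")
            .Replace("Controllers", ""),
        Controller = m.DeclaringType.Name.Replace("Controller", ""),
        Action = m.Name
    });
```

ReSharper's flow analysis may still not see through the Where/Select boundary; in that case, capturing m.DeclaringType in an intermediate anonymous type (or a query-syntax let) gives it a non-null local to reason about.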
_unix.68426 | I was looking at the command to burn an ISO image to a DVD, but I couldn't get the name of the device. In /dev/ I could see cdrom, cdrw, dvd, dvdrw. I am using Debian. When I gave the command, I got the following output: dd if=debian-6.0.7-i386-DVD-1.iso of=/dev/dvdrw dd: opening `/dev/dvdrw': Read-only file system | How to burn an ISO image to DVD using the dd command | dvd;burning | You can't use dd this way (it might work for DVD-RAM though). What you are looking for is growisofs - (the main) part of dvd+rw-tools. growisofs -Z /dev/dvdrw=image.iso |
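For the exact image from the question, a minimal sketch (the -dvd-compat flag, which closes the disc for better compatibility with standalone players, is optional):

```sh
growisofs -dvd-compat -Z /dev/dvdrw=debian-6.0.7-i386-DVD-1.iso
```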
_cs.11759 | Without using pumping lemma, can we prove $A =\{ww \mid w \in \{0,1\}^* \}$ is non regular?Is $L= \{w \mid w \in \{0,1\}^* \}$ non regular? I'm thinking of using concatenation to prove the former isn't regular. If L is non regular then so is LL | Without using pumping lemma, can we determine if $A =\{ww \mid w \in \{0,1\}^* \}$ is non regular? | formal languages;regular languages | Your idea (while interesting) is unlikely to work for two reasons:How would you express $A$ as a concatenation of a language with itself?If $L$ is non-regular, it may still be the case that $LL$ is regular. See this post:Is $A$ regular if $A^{2}$ is regular?You could prove it using the Myhill-Nerode theorem - show that there are infinitely many equivalence classes. Or you could simply use a pumping argument without the lemma. That is, follow the proof of the lemma for this particular language. |
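To make the Myhill-Nerode route concrete, here is a sketch of the distinguishing-extension argument (assuming the usual statement of the theorem: a regular language has only finitely many equivalence classes). For $i \neq j$, the prefixes $0^i1$ and $0^j1$ are inequivalent: take $z = 0^i1$. Then $$0^i1 \cdot z = 0^i 1 0^i 1 \in A, \qquad 0^j1 \cdot z = 0^j 1 0^i 1 \notin A,$$ since a square $ww$ containing exactly two 1s must have $w = 0^p 1 0^q$, so $ww = 0^p 1 0^{q+p} 1 0^q$, which matches $0^j 1 0^i 1$ only if $q = 0$ and $p = j = i$. Hence $\{0^i1 : i \geq 0\}$ meets infinitely many equivalence classes, and $A$ is not regular.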
_unix.220226 | On Solaris I know disks are c0d0p0 or c0d0s0 for IDE, c1t1d1s0 for SCSI. I see there are links to PCI devices, for example: ls -lh /dev/dsk/c0d0s7 lrwxrwxrwx 1 root root 50 ago 1 22:53 /dev/dsk/c0d0s7 -> ../../devices/pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:h Does someone know what pci@0,0/pci-ide@1,1/ide@0/cmdk@0,0:h means? I think pci@0,0 is the PCI bus and pci-ide@1,1 is the first controller. Is ide@0 the master disk? And what is cmdk@0,0:h? | Reading the device tree for ide disks | solaris;device tree | cmdk@0,0:h is the driver instance for a disk. Per the Solaris documentation: The cmdk device driver is a common interface to various disk devices. The driver supports magnetic fixed disks and magnetic removable disks. |
_webmaster.23366 | I am working on optimizing a page's loading speed. Here are some analytics:Notice how the images, although only accounting for 65% of the total size (1.1MB), are by far the slowest loading assets: 96% of time. I'd like to know which are the recommended practices on optimizing loading speed, only taking images into account.Some of the techniques we are already applying:image compressionimages hosted on cookieless domain and CDNspriting everything that can be spritedhttp headers: keep alive and Expires to one year.Disclaimer: I have gone through the available documentation, I think by focusing on image loading optimization I am not creating a duplicate or a subjective question. | Best practices when loading images for improving page loading speed | images;optimization;page speed | null |
_unix.32215 | I recently installed Arch Linux on my notebook and it works fine. But there is still one small problem. According to the SLiM manual I can use exit to exit to the shell. Unlike expected, I don't return to the shell; there is just a black screen where I cannot input anything. Also ALT + F1, ALT + F2, ALT + F3 or ALT + F4 makes no difference. Do I need to configure anything? Any suggestions? | Slim exit as username ends in black screen | arch linux;slim | See the wiki page. If you still have problems, you'll need to capture the log and put it into a pastebin to show us: tail -n 50 /var/log/slim.log BTW, according to the Arch Linux wiki, SLiM is outdated and upstream development has ceased. |
_unix.223921 | I recently bought a Raspberry Pi, and have started playing around with it. After changing my MOTD, (to include colours), the colour codes are coming up as raw-text instead of executing.I am connected to my Raspberry Pi via SSH in a Mac Terminal. I also tried directly via the Raspberries command line. How do I allow colour?This may be a very simple or mediocre question, but I am new to the Unix/Linux community and are still learning.Below is a screenshot of the failed MOTD:The file I am editing is /etc/motd. I am editing it with nano.Code below:The programs included with the Debian GNU/Linux system are free software;the exact distribution terms for each program are described in theindividual files in /usr/share/doc/*/copyright.Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extentpermitted by applicable law.#!/bin/bashecho $(tput setaf 2) .~~. .~~. '. \ ' ' / .'$(tput setaf 1) .~ .~~~..~. : .~.'~'.~. : ~ ( ) ( ) ~( : '~'.~.'~' : ) ~ .~ ( ) ~. ~ ( : '~' : ) $(tput sgr0)Raspberry Pi$(tput setaf 1) '~ .~~~. ~' '~'$(tput sgr0) | No colour in MOTD | ssh;terminal;raspberry pi;colors;motd | Due to the fact that etc/motd is a plain text file, commands are not executed, but instead printed as so:#!/bin/bashecho $(tput setaf 2) .~~. .~~. '. \ ' ' / .'$(tput setaf 1) .~ .~~~..~. : .~.'~'.~. : ~ ( ) ( ) ~( : '~'.~.'~' : ) ~ .~ ( ) ~. ~ ( : '~' : ) $(tput sgr0)Raspberry Pi$(tput setaf 1) '~ .~~~. ~' '~'$(tput sgr0)Instead, create a new file called motd.sh inside /etc and input the MOTD there instead. This is now an executable script, but is not executed. So goto /etc/profile and add at the end of the file:bash /etc/motd.shThis will now execute the script upon connection and display colour. .~~. .~~. '. \ ' ' / .' .~ .~~~..~. : .~.'~'.~. : ~ ( ) ( ) ~( : '~'.~.'~' : ) ~ .~ ( ) ~. ~ ( : '~' : ) Raspberry Pi '~ .~~~. ~' '~' |
_scicomp.2246 | For a project I am working on (in hyperbolic PDEs) I would like to get some rough handle on the behavior by looking at some numerics. I am, however, not a very good programmer. Can you recommend some resources for learning how to effectively code finite difference schemes in Scientific Python (other languages with small learning curve also welcome)?To give you an idea of the audience (me) for this recommendation:I am a pure mathematician by training, and am somewhat familiar with the theoretical aspects of finite difference schemesWhat I need help with is how to make the computer compute what I want it to compute, especially in a way that I don't duplicate too much of the effort already put in by others (so as to not re-invent the wheel when a package is already available). (Another thing I would like to avoid is to stupidly code something by hand when there are established data structures fitting the purpose.)I have had some coding experience; but I have had none in Python (hence I don't mind if there are good resources for learning a different language [say, Octave for example]). Books, documentation would both be useful, as would collections of example code. | Recommendation for Finite Difference Method in Scientific Python | python;finite difference;reference request;hyperbolic pde | Here is a 97-line example of solving a simple multivariate PDE using finite difference methods, contributed by Prof. David Ketcheson, from the py4sci repository I maintain. For more complicated problems where you need to handle shocks or conservation in a finite-volume discretization, I recommend looking at pyclaw, a software package that I help develop.Pattern formation code Solves the pair of PDEs: u_t = D_1 \nabla^2 u + f(u,v) v_t = D_2 \nabla^2 v + g(u,v)import matplotlibmatplotlib.use('TkAgg')import numpy as npimport matplotlib.pyplot as pltfrom scipy.sparse import spdiags,linalg,eyefrom time import sleep#Parameter valuesDu=0.500; Dv=1;delta=0.0045; tau1=0.02; tau2=0.2; alpha=0.899; beta=-0.91; gamma=-alpha;#delta=0.0045; tau1=0.02; tau2=0.2; alpha=1.9; beta=-0.91; gamma=-alpha;#delta=0.0045; tau1=2.02; tau2=0.; alpha=2.0; beta=-0.91; gamma=-alpha;#delta=0.0021; tau1=3.5; tau2=0; alpha=0.899; beta=-0.91; gamma=-alpha;#delta=0.0045; tau1=0.02; tau2=0.2; alpha=1.9; beta=-0.85; gamma=-alpha;#delta=0.0001; tau1=0.02; tau2=0.2; alpha=0.899; beta=-0.91; gamma=-alpha;#delta=0.0005; tau1=2.02; tau2=0.; alpha=2.0; beta=-0.91; gamma=-alpha; nx=150;#Define the reaction functionsdef f(u,v): return alpha*u*(1-tau1*v**2) + v*(1-tau2*u);def g(u,v): return beta*v*(1+alpha*tau1/beta*u*v) + u*(gamma+tau2*v);def five_pt_laplacian(m,a,b): Construct a matrix that applies the 5-point laplacian discretization e=np.ones(m**2) e2=([0]+[1]*(m-1))*m h=(b-a)/(m+1) A=np.diag(-4*e,0)+np.diag(e2[1:],-1)+np.diag(e2[1:],1)+np.diag(e[m:],m)+np.diag(e[m:],-m) A/=h**2 return Adef five_pt_laplacian_sparse(m,a,b): Construct a sparse matrix that applies the 5-point laplacian discretization e=np.ones(m**2) e2=([1]*(m-1)+[0])*m e3=([0]+[1]*(m-1))*m h=(b-a)/(m+1) A=spdiags([-4*e,e2,e3,e,e],[0,-1,1,-m,m],m**2,m**2) A/=h**2 return A# Set up the grida=-1.; b=1.m=100; h=(b-a)/m; x = np.linspace(-1,1,m)y = np.linspace(-1,1,m)Y,X = np.meshgrid(y,x)# Initial datau=np.random.randn(m,m)/2.;v=np.random.randn(m,m)/2.;plt.hold(False)plt.pcolormesh(x,y,u)plt.colorbar; plt.axis('image'); plt.draw()u=u.reshape(-1)v=v.reshape(-1)A=five_pt_laplacian_sparse(m,-1.,1.);II=eye(m*m,m*m)t=0.dt=h/delta/5.;plt.ion()#Now step forward in timefor k in 
range(120): #Simple (1st-order) operator splitting: u = linalg.spsolve(II-dt*delta*Du*A,u) v = linalg.spsolve(II-dt*delta*Dv*A,v) unew=u+dt*f(u,v); v =v+dt*g(u,v); u=unew; t=t+dt; #Plot every 3rd frame if k/3==float(k)/3: U=u.reshape((m,m)) plt.pcolormesh(x,y,U) plt.colorbar plt.axis('image') plt.title(str(t)) plt.draw()plt.ioff() |
_unix.193589 | I am working on a project to monitor network devices with the help of SNMP, MRTG and RRDTool. As part of bandwidth monitoring, I can get the maximum used bandwidth per time resolution. Meanwhile, I need to maintain a history of the total data usage volume. I know this is possible with vnStat, but I don't know how to achieve it with SNMP. | Total data usage history with SNMP | monitoring;snmp;rrdtool | null |
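A sketch of the SNMP side: total traffic lives in the IF-MIB octet counters, so polling them periodically and storing the deltas (for example in an RRD, as MRTG already does) yields a usage-volume history. The host, community string and interface index 2 below are placeholders:

```sh
# 64-bit in/out byte counters for interface index 2 (wrap-safe, unlike the 32-bit ifInOctets)
snmpget -v2c -c public 192.0.2.1 IF-MIB::ifHCInOctets.2 IF-MIB::ifHCOutOctets.2
```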
_unix.360826 | I have a Lenovo Ideapad 14'' IBR-14'' with an Intel Celeron CPU N3060 @ 1.60GHz, a 32GB disk and 4GB RAM. I have booted a couple of times with a FreeBSD 11 install stick but it does not seem to detect the internal eMMC disk. Linux Mint is able to detect it after disabling Secure Boot in the UEFI BIOS, but FreeBSD 11.0 does not; in fact, the disk does not appear in dmesg. What to do? | FreeBSD 11 doesn't detect internal eMMC 32 GB disk | freebsd;hard disk;ssd | After several retries, and testing FreeBSD 12.0-CURRENT, I arrived at the conclusion that the internal eMMC disk is not supported by FreeBSD 11.0; upon booting with FreeBSD 12.0-CURRENT, dmesg already showed the internal disk: mmcsd0: 31GB <MMCHC DF4032 0.1 SN 3C4DE893 MFG 11/2016 by 69 0x0000> at mmc0 50.0MHz/8bit/65535-block mmcsd0boot0: 4MB partion 1 at mmcsd0 mmcsd0boot1: 4MB partion 2 at mmcsd0 mmcsd0rpmb: 4MB partion 3 at mmcsd0 |
_unix.58591 | I was trying to install Linux Mint from a live USB and made a stupid mistake. I created a master boot record and my HDD partition became unallocated. After rebooting from the live OS, I'm unable to get to the boot menu. http://www.linux-mag.com/s/i/articles/7875/Figure2.jpg Is there any way to recover all my data? Right now I can only boot to the live USB. | gparted partition master boot record corrupt | partition;gparted | First, to avoid messing up, you should back up an entire image of the disk (provided you have a bigger disk to store it). For this, several solutions are proposed on this question; last time I did it, I used dd. Once you are sure you can restore the image in case of a problem, you can use testdisk to redetect the partition table and fix it. This question for instance provides a solution. |
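For the "image the disk first" step, a sketch of the dd invocation (the device name and destination are assumptions; conv=noerror,sync keeps going past read errors, which matters on a damaged disk):

```sh
dd if=/dev/sda of=/mnt/backup/sda.img bs=4M conv=noerror,sync
```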
_scicomp.19310 | I'm trying to find the looped route with the lowest value of D/n, where D=Distance, and n=Number_of_points, for a cloud of 3D points. However there are a few conditions.Conditions:Each point must have one or more points in the route that are greater than $X$ distance away.Not every point needs to be passed through.I will try and explain what I mean:Imagine a system of delivery routes, whereby you get paid only if the delivery point is greater than $X$ meters away from the pickup point.So, if you travel point $A \rightarrow B$ where the distance is $X$, you get paid, but you have been paid once in $X$. If you have say 10 equidistant points on a circle with circumference $X$. The distance in a line between them is $X/10$. So you are stopping once every $x/10$ to pick up a new package. Now, once you travel half way around the circle to point 5, you can deliver the package from the starting point and get paid. This means once the route has been completed halfway, you are getting paid every $X/10$ instead. Much more efficient.Now here is the kicker, my data is not in a neat circle, but rather a cloud of 3D points.Which algorithm or set of algorithms could be used to determine the most efficient route between the number of points?Sorry if I horribly explained my problem, from what I understand the problem might be NP-complete but if an approximate solution can be calculated, that would work. | Finding most efficient route (distance/number of nodes) that uses nodes at least X amount away from another node | algorithms;approximation algorithms | null |
_unix.198121 | I wanted to know if it is possible to bond two IPsec tunnels. I've already done VPN bonding of SSH tunnels, but it would be better if I used IPsec. | Is it possible to make an IPsec VPN bonding? | vpn;ssh tunneling;tunneling;ipsec;bonding | null |
_unix.255190 | Using rm -rf LargeDirectory to delete a large directory can take a large amount of time to complete depending on the size of the directory. Is it possible to get a status update or somehow monitor the progress of this deletion to give a rough estimate as to where along in the process the command is? | Is it possible to determine the progress of an rm command? | rm | null |
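rm itself reports nothing, but a common trick is to make it verbose and count the output lines against a precomputed total. A sketch, assuming GNU rm and that pv is installed:

```sh
total=$(find LargeDirectory | wc -l)                    # files + directories to be removed
rm -rfv LargeDirectory | pv -l -s "$total" > /dev/null  # pv shows progress as a line counter
```

Alternatively, watching du -sh LargeDirectory from a second terminal gives a rough size-based estimate with no extra tools.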
_codereview.63713 | The following code is my first C program I have made for an university assignment about Interprocess Communication. This involved creating a parent process, the farmer, and then creating a certain number of forked processes, the workers, and we had to let the workers calculate something and use message queues to communicate it from the parent to the worker and vica versa. The computations were chosen to be Mandelbrot computations by the teacher. The goal of the assignment was to get familiar with the POSIX libraries.These were the functional requirements, in my own words:The program must use the settings from settings.hThe farmer starts NROF_WORKER worker processesThe message queues have a size of MQ_MAX_MESSAGES messagesThe names of the message queues had to include the student name and the student idThe worker queue needs to be as full as possibleThe farmer queue needs to be as empty as possibleBusy waiting is not allowedWe have to use a rsleep(10000) call to simulate some kind of random waiting, implementation given.Lastly, another implicit requirement is that we cannot use extra processes or threads, we are only allowed to use the 1 farmer process and the NROF_WORKER worker processes.The following implementations were given:The code that outputs the image on the screenThe Mandelbrot computationThe Makefile that was provided:##BINARIES = worker farmerCC = gccCFLAGS = -Wall -g -cLDLIBS = -lrt -lX11all: $(BINARIES)clean: rm -f *.o $(BINARIES)worker: worker.o farmer: farmer.o output.oworker.o: worker.c settings.h common.houtput.o: output.c output.h settings.hfarmer.o: farmer.c output.h settings.h common.hThis is the relevant code:settings.h#ifndef _SETTINGS_H_#define _SETTINGS_H_// remove the comments for the output you like: either graphical (X11) output// or storage in a BMP file (or both)#define WITH_X11//#define WITH_BMP// settings for interprocess communications// (note: be sure that /proc/sys/fs/mqueue/msg_max >= MQ_MAX_MESSAGES)#define NROF_WORKERS 64#define MQ_MAX_MESSAGES 64// settings for the fractal computations#define INFINITY 10.0#define MAX_ITER 512// settings for graphics#define X_PIXEL 880#define Y_PIXEL 660#define X_LOWERLEFT -2.0#define Y_LOWERLEFT -1.0#define STEP 0.003//#define X_LOWERLEFT -0.65//#define Y_LOWERLEFT -0.5//#define STEP 0.0001// lower left pixel (0,0) has coordinate// (X_LOWERLEFT, Y_LOWERLEFT)// upperright pixel (X_PIXEL-1,Y_PIXEL-1) has coordinate// (X_LOWERLEFT+((X_PIXEL-1)*STEP),Y_LOWERLEFT+((Y_PIXEL-1)*STEP))#endifoutput.h#ifndef _OUTPUT_H_#define _OUTPUT_H_extern void output_init (void);extern void output_draw_pixel (int x, int y, int color);extern void output_end (void);#endifsettings.h/* * * Contains definitions which are commonly used by the farmer and the workers * */#ifndef _COMMON_H_#define _COMMON_H_#include settings.h#define STUDENT_NAME FrankVanHeeswijktypedef struct{ int y;} MQ_FARMER_REQUEST_MESSAGE;typedef struct{ int y; int x_colors[X_PIXEL];} MQ_WORKER_RESULT_MESSAGE;#endiffarmer.c/* * */#include <stdio.h>#include <stdlib.h>#include <stdbool.h>#include <string.h>#include <sys/wait.h>#include <sys/types.h>#include <sys/stat.h>#include <errno.h> #include <unistd.h> // for execlp#include <mqueue.h> // for mq#include settings.h#include output.h#include common.hstatic char mq_farmer_request_name[80];static char mq_worker_result_name[80];static void fork_children(pid_t children_IDs[]){ int i; for (i = 0; i < NROF_WORKERS; i++) { pid_t processID = fork(); if (processID < 0) { perror(fork() failed: + processID); } else { if 
(processID == 0) { execlp(./worker, worker, mq_farmer_request_name, mq_worker_result_name, NULL); perror(execlp() failed); } children_IDs[i] = processID; } }}static void kill_children(pid_t children_IDs[]){ int i; for (i = 0; i < NROF_WORKERS; i++) { waitpid(children_IDs[i], NULL, 0); }}static void process_worker_result_message(MQ_WORKER_RESULT_MESSAGE worker_result_message){ int x; for (x = 0; x < X_PIXEL; x++) { output_draw_pixel(x, worker_result_message.y, worker_result_message.x_colors[x]); } }int main (int argc, char* argv[]){ if (argc != 1) { fprintf (stderr, %s: invalid arguments\n, argv[0]); } output_init (); //create message queues sprintf(mq_farmer_request_name, /mq_farmer_request_%s_%d, STUDENT_NAME, getpid()); sprintf(mq_worker_result_name, /mq_worker_result_%s_%d, STUDENT_NAME, getpid()); struct mq_attr attr; attr.mq_maxmsg = MQ_MAX_MESSAGES; attr.mq_msgsize = sizeof(MQ_FARMER_REQUEST_MESSAGE); mqd_t mq_farmer_request = mq_open(mq_farmer_request_name, O_WRONLY | O_CREAT | O_EXCL, 0600, &attr); if (mq_farmer_request < 0) { perror(error opening farmer request message queue in farmer); } attr.mq_maxmsg = MQ_MAX_MESSAGES; attr.mq_msgsize = sizeof(MQ_WORKER_RESULT_MESSAGE); mqd_t mq_worker_result = mq_open(mq_worker_result_name, O_RDONLY | O_CREAT | O_EXCL, 0600, &attr); if (mq_worker_result < 0) { perror(error opening worker result message queue in farmer); } //create children pid_t children_IDs[NROF_WORKERS]; fork_children(children_IDs); //send & receive data MQ_FARMER_REQUEST_MESSAGE farmer_request_message; MQ_WORKER_RESULT_MESSAGE worker_result_message; //keep farmer request queue as filled as possible, keep worker result queue as empty as possible int msg_max = Y_PIXEL; int msg_num_received = 0; int msg_num_sent = 0; while (msg_num_sent < msg_max && msg_num_received < msg_max) { //fill up farmer request queue to the max int get_farmer_request_attr_result = mq_getattr(mq_farmer_request, &attr); if (get_farmer_request_attr_result < 0) { perror(error getting farmer request attr in farmer); exit(EXIT_FAILURE); } while (attr.mq_curmsgs < attr.mq_maxmsg && msg_num_sent < msg_max) { //send message farmer_request_message.y = msg_num_sent; int sent = mq_send(mq_farmer_request, (char*)&farmer_request_message, sizeof(farmer_request_message), 0); if (sent < 0) { perror(error sending message in farmer); exit(EXIT_FAILURE); } msg_num_sent++; get_farmer_request_attr_result = mq_getattr(mq_farmer_request, &attr); if (get_farmer_request_attr_result < 0) { perror(error getting farmer request attr in farmer); exit(EXIT_FAILURE); } } //empty worker result queue int get_worker_result_attr_result = mq_getattr(mq_worker_result, &attr); if (get_worker_result_attr_result < 0) { perror(error getting worker result attr in farmer); exit(EXIT_FAILURE); } while (attr.mq_curmsgs > 0) { //take one message int received_bytes = mq_receive(mq_worker_result, (char*)&worker_result_message, sizeof(worker_result_message), NULL); if (received_bytes < 0) { perror(error receiving message in farmer); exit(EXIT_FAILURE); } msg_num_received++; process_worker_result_message(worker_result_message); //because we took one message, we can now send another one if (msg_num_sent < msg_max) { farmer_request_message.y = msg_num_sent; int sent = mq_send(mq_farmer_request, (char*)&farmer_request_message, sizeof(farmer_request_message), 0); if (sent < 0) { perror(error sending message in farmer); exit(EXIT_FAILURE); } msg_num_sent++; } get_worker_result_attr_result = mq_getattr(mq_worker_result, &attr); if 
(get_worker_result_attr_result < 0) { perror(error getting worker result attr in farmer); exit(EXIT_FAILURE); } } } //stop children int i; for (i = 0; i < NROF_WORKERS; i++) { farmer_request_message.y = -1; int sent = mq_send(mq_farmer_request, (char*)&farmer_request_message, sizeof(farmer_request_message), 0); if (sent < 0) { perror(error sending message in farmer); } } kill_children(children_IDs); //close message queues int closed_farmer = mq_close(mq_farmer_request); if (closed_farmer < 0) { perror(failed to close farmer request queue in farmer); } int closed_worker = mq_close(mq_worker_result); if (closed_worker < 0) { perror(failed to close worker result queue in farmer); } int unlink_farmer = mq_unlink(mq_farmer_request_name); if (unlink_farmer < 0) { perror(failed to unlink farmer request queue in farmer); } int unlink_worker = mq_unlink(mq_worker_result_name); if (unlink_worker < 0) { perror(failed to unlink worker result queue in farmer); } output_end(); return EXIT_SUCCESS;}worker.c/* * */#include <stdio.h>#include <stdlib.h>#include <stdbool.h>#include <string.h>#include <errno.h> // for perror()#include <unistd.h> // for getpid()#include <mqueue.h> // for mq-stuff#include <time.h> // for time()#include <complex.h>#include settings.h#include common.hstatic double complex_dist (complex a){ // distance of vector 'a' // (in fact the square of the distance is computed...) double re, im; re = __real__ a; im = __imag__ a; return ((re * re) + (im * im));}static int mandelbrot_point (double x, double y){ int k; complex z; complex c; z = x + y * I; // create a complex number 'z' from 'x' and 'y' c = z; for (k = 0; (k < MAX_ITER) && (complex_dist (z) < INFINITY); k++) { z = z * z + c; } // 2 // k >= MAX_ITER or | z | >= INFINITY return (k);}/* * rsleep(int t) * * The calling thread will be suspended for a random amount of time * between 0 and t microseconds * At the first call, the random generator is seeded with the current time */static void rsleep (int t){ static bool first_call = true; if (first_call == true) { srandom (time(NULL) % getpid()); first_call = false; } usleep (random () % t);}int main (int argc, char* argv[]){ //open message queues char* mq_farmer_request_name = argv[1]; char* mq_worker_result_name = argv[2]; mqd_t mq_farmer_request = mq_open(mq_farmer_request_name, O_RDONLY); if (mq_farmer_request < 0) { perror(error opening farmer request message queue in worker); } mqd_t mq_worker_result = mq_open(mq_worker_result_name, O_WRONLY); if (mq_worker_result < 0) { perror(error opening worker result message queue in worker); } //read messages MQ_FARMER_REQUEST_MESSAGE farmer_request_message; MQ_WORKER_RESULT_MESSAGE worker_result_message; while (true) { int received_bytes = mq_receive(mq_farmer_request, (char*)&farmer_request_message, sizeof(farmer_request_message), NULL); if (received_bytes < 0) { perror(error receiving message in worker); break; } if (farmer_request_message.y < 0) { break; } rsleep(10000); worker_result_message.y = farmer_request_message.y; int x; for (x = 0; x < X_PIXEL; x++) { double mx = (X_LOWERLEFT + x * STEP); double my = (Y_LOWERLEFT + worker_result_message.y * STEP); worker_result_message.x_colors[x] = mandelbrot_point(mx, my); } int sent = mq_send(mq_worker_result, (char*)&worker_result_message, sizeof(worker_result_message), 0); if (sent < 0) { perror(error sending message in worker); break; } } int closed_farmer = mq_close(mq_farmer_request); if (closed_farmer < 0) { perror(failed to close farmer request queue in worker); } int closed_worker = 
mq_close(mq_worker_result); if (closed_worker < 0) { perror(failed to close worker result queue in worker); } return EXIT_SUCCESS;}In order to help you understand the intended implementation in the farmer, I have attached this pseudo-code to understand how I want to achieve a maximally filled worker queue and a minimally filled farmer queue:while (not_all_messages_sent) { while (mq_farmer_request not full) { send request to mq_farmer_request; } while (mq_worker_result not empty) { receive result from mq_worker_result; process result; if (not_all_messages_sent) { send request to mq_farmer_request; } }}I'd like to have a review on all aspects of my code. I have a Java background so I am very used to putting everything that is reusable into methods and I hate hard-coding, but these seems less applicable in C. I wouldn't mind to have more methods in my code.You can view the full code and extract a working copy over at my Github repository. | Interprocess Communication in a Farmer-Worker setup | c;homework;queue;child process;posix | Overall this is some very nicely written C, well done. Some stuff I noted aside from @Morwenn's good points:Portability: <mqueue.h> is a POSIX C library. Unfortunately this restrains the platforms that you compile this for (I couldn't compile this on my Mac without some fiddling around). There are two ways you could fix this:Include the header with your package and use that to compile with. I'm not sure how portable that header is though. If it is portable then I would probably go with this option.Rewrite your code using <fcntl.h>, <signal.h>, <sys/types.h>, and <time.h>.Push your code: Looking at your Makefile, you don't have some things that I would consider a necessity. Going off of my own C Makefile boilerplate, Here is some stuff I would change:Add more CFLAGS:CFLAGS = -Werror -Wall -Wextra -pedantic-errors -Wformat=2 -Wno-import -Wimplicit -Wmain -Wchar-subscripts -Wsequence-point -Wmissing-braces -Wparentheses -Winit-self -Wswitch-enum -Wstrict-aliasing=2 -Wundef -Wshadow -Wpointer-arith -Wbad-function-cast -Wcast-qual -Wcast-align -Wwrite-strings -Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes -Wmissing-declarations -Wredundant-decls -Wnested-externs -Winline -Wdisabled-optimization -Wunused-macros -Wno-unusedThese are all the flags that I always use for all of my projects. Sometimes I use even more. I would recommend that you take a look at all of the warning options sometime and fine-tune them for your own needs.Specify your compiler version.CC = gcc-4.9Right now you are using whatever the OS has set for the default in its PATH variable. This could lead to some compatibility problems, since earlier versions of a compiler obviously won't have support for later standards. Which leads me to my next point...Standards: You are using what appears to be the -std=gnu89 as @Morwenn said. Please no. Those are 25 year old standards. Your code should be updated for more modern standards. At a minimum, I think you should be using -std=gnu99, but why not go the extra mile and use the -std=gnu11 standards?Stress-test, and then some: write some code to test that your code works in a variety of cases. Push your code to it's breaking point. Find out what makes it break and fix it. This is how you develop bullet-proof software. 
You may have to incorporate some fuzzing to fully test what your code can handle.Some other more minor stuff:Is there a reason that you need a whole struct for one value?typedef struct{ int y;} MQ_FARMER_REQUEST_MESSAGE;Could this not work in its place?typedef int MQ_FARMER_REQUEST_MESSAGE; // just needs shorthand typenameYour naming conventions aren't what I would consider normal. Structure names are usually written in PascalCase.Always declare your variables within your for loops.for (int i = 0; i < NROF_WORKERS; ++i)This is universally considered a good practice, and is the main reason (I think) that you should be using the C99 standards (since the C89 syntax doesn't support it).You return ((re * re) + (im * im)) from your complex_dist() function. I would use the pow() function from <math.h> (there are some other places where I would use this in your code as well). Why? Because the function handles errors in a way that simply multiplying two values together couldn't. Please note however, that this will result in slightly less efficient code.The variable names in your mandelbrot_point() could maybe be better. If you can't find more suitable names, then more documentation is definitely needed.You may want to consider using Doxygen to generate documentation your code. It's what I use and I helps me with commenting stuff a lot.From Code Complete, 2nd Edition, p. 761:Use only one data declaration per line.Its easier to modify declarations because each declaration is self-contained.[...]Its easier to find specific variables because you can scan a single column rather than reading each line. Its easier to find and fix syntax errors because the line number the compiler gives you has only one declaration on it.You don't have to return 0/EXIT_SUCCESS at the end of main(), just like you wouldn't bother putting return; at the end of a void-returning function. The C standard knows how frequently this is used, and lets you not bother.C99 & C11 5.1.2.2(3)...reaching the } that terminates the main() function returns a value of 0.Prefer snprintf() to sprintf().There might be more stuff I missed, but this is all I think I could review without actually compiling the code and running some tests. |
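To make the snprintf() point concrete, a sketch of how the two queue-name writes in farmer.c might look (the buffer sizes come from the existing declarations; the pid is cast because getpid() returns pid_t, not int):

```c
snprintf(mq_farmer_request_name, sizeof mq_farmer_request_name,
         "/mq_farmer_request_%s_%d", STUDENT_NAME, (int) getpid());
snprintf(mq_worker_result_name, sizeof mq_worker_result_name,
         "/mq_worker_result_%s_%d", STUDENT_NAME, (int) getpid());
```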
_reverseengineering.12782 | I have a memory dump of a VM running Windows server 2012 R2. The dump is of the entire RAM (4 GB).I want to extract as many features as possible from this dump. Mainly I want to extract all stacks of all threads running on the machine and exist in the memory. Alternatively, I want to extract call sequences of all threads.Are there any tools / tutorials / books etc. which can help me perform this task?I am familiar with both Volatility and Rekall, are there any specific plugins that can help me achieve my goals there? | Extracting threads' stack from Windows memory dump | digital forensics;callstack;thread;stack;process | I am Not Sure what you are looking for let me try i have a dump file of a vm too MEMORY.dmp from a vm that ran xp sp3 created using .crash from a kernel debugger attached to it i loaded it using windbg as below windbg -z memory.dmp now i thought i will count how many threads are running so i did some thing like this kd> r $t0 = 0; !for_each_thread r $t0= @$t0+1 ; ? @$t0Evaluate expression: 306 = 00000132now let me see the call stacks for all threads so i do kd> !for_each_thread .thread @#Thread ; k2it spits out Implicit thread is now 812915b8 # ChildEBP RetAddr 00 fc8d37b4 804dc0f7 nt!KiSwapContext+0x2e01 fc8d37c0 804e3b7d nt!KiSwapThread+0x46Implicit thread is now 8128eda8 # ChildEBP RetAddr 00 fc8e3d34 804dc0f7 nt!KiSwapContext+0x2e01 fc8e3d40 804e407e nt!KiSwapThread+0x46Implicit thread is now 8128eb30 # ChildEBP RetAddr xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxok instead of k2 i do k i get a full stack trace Implicit thread is now 810efda8 *** Stack trace for last set context - .thread/.cxr resets it # ChildEBP RetAddr 00 f8ad3c38 804dc0f7 nt!KiSwapContext+0x2e01 f8ad3c44 804dc143 nt!KiSwapThread+0x4602 f8ad3c6c bf802f52 nt!KeWaitForSingleObject+0x1c203 f8ad3ca8 bf801b2a win32k!xxxSleepThread+0x19204 f8ad3cec bf819e6c win32k!xxxRealInternalGetMessage+0x41805 f8ad3d4c 804de7ec win32k!NtUserGetMessage+0x2706 f8ad3d4c 7c90e4f4 nt!KiFastCallEntry+0xf807 0007fe24 7e4191be ntdll!KiFastSystemCallRet08 0007fe44 0100a740 USER32!NtUserGetMessage+0xc09 0007fe80 0100c216 wmiprvse!WindowsDispatch+0x310a 0007ff14 0100c314 wmiprvse!Process+0x2250b 0007ff1c 010247aa wmiprvse!WinMain+0x4e0c 0007ffc0 7c817067 wmiprvse!WinMainCRTStartup+0x1740d 0007fff0 00000000 kernel32!BaseProcessStart+0x23Implicit thread is now 8113b960 *** Stack trace for last set context - .thread/.cxr resets itXXXXXXXXXXXXXXXXhope your query is answered if not please explain what is it you mean by call sequences addressing the comment by Igor Skochinsky if the format of the file is raw as in lets say captured with matthieu suiches now defunct win32dd.exe one can use volatility's plugin raw2dmp and use the resulting windbg compatible dmpfile in windbg as abovevol25 -f foo.dmp --profile=Win7SP1x86 imageinfoVolatility Foundation Volatility Framework 2.5INFO : volatility.debug : Determining profile based on KDBG search... 
Suggested Profile(s) : Win7SP0x86, Win7SP1x86 AS Layer1 : IA32PagedMemoryPae (Kernel AS) AS Layer2 : FileAddressSpace (E:\vola\foo.dmp) PAE type : PAE DTB : 0x185000L KDBG : 0x82d32c28L Number of Processors : 1 Image Type (Service Pack) : 1 KPCR for CPU 0 : 0x82d33c00L KUSER_SHARED_DATA : 0xffdf0000L Image date and time : 2016-06-02 18:08:14 UTC+0000vol25 -f foo.dmp --profile=Win7SP1x86 raw2dmp --output-image=foowind.dmpVolatility Foundation Volatility Framework 2.5Writing data (5.00 MB chunks): |.....dumpchk.exe foowind.dmp Loading dump file foowind.dmpMicrosoft (R) Windows Debugger Version 10.0.10586.567 X86Copyright (c) Microsoft Corporation. All rights reserved.Loading Dump File [xxx\foowind.dmp]Kernel Complete Dump File: Full address space is availableComment: 'File was converted with Volatility'xxxxxxxxxxxxxxxxxxxxxxx*** ERROR: Module load completed but symbols could not be loaded for win32dd.exeCannot find frame 0x6c, previous scope unchanged*** ERROR: Module load completed but symbols could not be loaded for win32dd.sysProbably caused by : win32dd.exe ( win32dd!Unknown ) |
_softwareengineering.126959 | Periodically, I wonder about this:The short-circuit OR would always return the same value that the unshort-circuited OR operator would? I expect that the short-circuit OR would always evaluate more quickly. So, was the unshort-circuited OR operator included in the C# language for consistency? What have I missed? | Why is there both a short-circuit OR as well as unshort-circuited variation of that operator in C#? | c#;boolean;logic | Both operands were meant for different things, coming from C which didn't have a boolean type. The short-circuit version || only works with booleans, while the non-short circuit version | works with integral types, performing a bitwise or. It just happened to work as a non-short circuit logical operation for booleans which are represented by a single bit, being 0 or 1.http://en.wikibooks.org/wiki/C_Sharp_Programming/Operators#Logical |
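A minimal sketch of the observable difference: the bitwise use on integers is the operator's original job, and on bool operands | still evaluates both sides, which matters when the right-hand side has side effects or could throw.

```csharp
using System;

class OrDemo
{
    static bool Check(string label) { Console.WriteLine(label); return true; }

    static void Main()
    {
        bool a = Check("left") || Check("right"); // short-circuits: prints "left" only
        bool b = Check("left") |  Check("right"); // prints "left" then "right"
        int bits = 5 | 3;                         // bitwise OR on integers: 7
        Console.WriteLine($"{a} {b} {bits}");
    }
}
```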
_unix.367693 | I was messing around with fish and noticed this handy behavior: if I typed wget -<tab><tab><tab>, I was put into an interactive menu, and as I typed, it searched the descriptions of the arguments themselves. I tried this in zsh, and typing in this menu only seemed to bring me back to my interactive prompt. Is there a way to achieve similar functionality in zsh? | Fish-like argument completion search in ZSH | zsh;autocomplete | null |
_softwareengineering.210482 | I'm developing a WPF application whose core functionality involves creating a large object graph (often tens of thousands of entities) which the user can modify certain parts of, then save to a database. A graph can also be subsequently retrieved, modified and saved again. It's feasible that a user can create dozens of these graphs per day.The app also provides the user with different ways to search the entities in the database (predominantly text searches on various fields), presenting the search results to the user and allowing the user to select one, which will result in the relevant entity graph being retrieved in full and presented for the user to view and modify as above.Currently I'm using Entity Framework and SQL Express, but I'm uneasy about certain aspects of the architecture and design, and the client isn't keen on having to install SS. I've only recently come across the concept of NoSql databases, and sounds like they might be a good fit for what I'm doing, but I have a couple of questions.First, I'm assuming performance wouldn't be any worse than EF when reading or writing one of these object graphs?What about my app's search functionality? Will a NoSql db support this sort of thing, and what would the performance be like, bearing in mind the size and quantity of documents that I'm likely to have.What about lookup (reference) data? Would I duplicate such data in each NoSQL document, or would I keep it all in a single NoSql document and store their IDs in the main documents?Finally, any recommendations for a product? MongoDb and RavenDb seem to be the main OSS contenders for Windows. | Is a NoSQL database suitable for me? | nosql;mongodb;ravendb | null |
_cs.71411 | I'm taking Andrew Ng's machine learning course and week 5 covers the training of neural networks. The modified cost function for neural network training is derived from the logistic regression cost function, and is described as follows: $$\begin{gather*} J(\Theta) = - \frac{1}{m} \sum_{i=1}^m \sum_{k=1}^K \left[y^{(i)}_k \log ((h_\Theta (x^{(i)}))_k) + (1 - y^{(i)}_k)\log (1 - (h_\Theta(x^{(i)}))_k)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} ( \Theta_{j,i}^{(l)})^2\end{gather*}$$ Here, K is the number of output units, L is the total number of layers in the network, and ${s_l}$ is the number of units in layer l. I don't understand why the second half of the cost function, intended to prevent over-fitting by minimizing the values of the theta parameters, is summed across the entire network while the first half of the equation which actually determines theta is only for a specific layer. If each layer's cost function already includes a minimization parameter for that layer's theta values, why is it necessary to perform this minimization globally for every layer in the cost function J? As I understand it, in training a neural network you mainly treat each layer as completely separate from the next (and in fact, that is one of the major selling points of the NN approach, wherein solving L layers independently of one another lets you use a simple approach to obtain powerful results) - so why are we summing the second half of the equation over all layers? | Why does the neural network logistic regression cost function sum for all layers only for lambda? | machine learning;neural networks | The first term depends on the output of the entire network, i.e., on all the parameters for all layers, not just a single layer's parameters. This is an application of the empirical risk minimization framework for training a classifier, with regularization. The cost function $J(\Theta)$ has the form $$J(\Theta) = \text{empirical risk} + \text{regularization penalty}.$$ Denote the empirical risk by $R(\Theta)$; it depends on the parameters $\Theta$ (for all layers). Denote the regularization penalty by $P(\Theta)$; it depends on the parameters $\Theta$ (for all layers). The empirical risk $R(\Theta)$ is the sum of the loss for each instance in the training set, i.e., it has the form $$R(\Theta) = {1 \over m} \sum_{i=1}^m \ell(h_\Theta(x^{(i)}), y^{(i)}),$$ where $\ell$ is a loss function, $x^{(i)},y^{(i)}$ is the $i$th instance in the training set, and $h_\Theta(x)$ denotes the output of the classifier on input $x$ when the parameters are $\Theta$. Note that the output of the classifier depends on all the parameters for all the layers, so $R(\Theta)$ depends on all the parameters for all layers (not just a single layer). A comment on notation: We use $\Theta$ to denote all of the parameters, i.e., a single vector that concatenates the parameters for each layer. Here it appears $\Theta^{(l)}$ has been used to denote the parameters for the $l$th layer, so $\Theta = (\Theta^{(1)}, \dots, \Theta^{(L)})$. In your specific example, we use the cross-entropy loss for $\ell$ (this is also known as the logistic loss function, where there are only two classes). That's how we get the first term. Also the regularization penalty $P(\Theta)$ depends on all of the parameters, for all layers; it is a sum of penalties for each layer. If you work through the math, you'll see how we get something of the form you've shown in the question. As this answer hopefully makes clear, each term depends on the parameters for all the layers, not just a single layer. |
_codereview.100448 | I have tried to write a small, lightweight PHP session handling class that uses PHP's session_set_save_handler() function to override the default session handling functionality and uses a database to store the session data instead of the default file system. It checks for possible session hijacking attempts and renews the session periodically. I want to know how feasible my class is, and what improvements can be incorporated to make it more robust and secure.

sessionmanager.lib.php:

<?php
trait Singleton
{
    private static $_instance;

    public static function getInstance($config = array())
    {
        if (!(self::$_instance instanceof self)) {
            self::$_instance = new self($config);
        }
        return self::$_instance;
    }
}

/**
 * @category Sessionmanager
 * @version 1.0
 * @author Anirban Nath <[email protected]>
 */
class sessionmanager
{
    use Singleton;

    /**
     * [$_db PDO object holder]
     * @var [object]
     */
    private $_db;

    /**
     * [$_https Cookie secure flag holder]
     * @var [boolean]
     */
    private $_https;

    /**
     * [$_user_agent User Agent holder]
     * @var [string]
     */
    private $_user_agent;

    /**
     * [$_ip_address Client machine IP address]
     * @var [string]
     */
    private $_ip_address;

    /**
     * [$_expiry Session lifetime, default 2 hours]
     * @var integer
     */
    private $_expiry = 7200;

    /**
     * [$_session_cookie_ttl Session cookie lifetime, default 0 (clear the session cookies on browser close)]
     * @var integer
     */
    private $_session_cookie_ttl = 0;

    /**
     * [$_refresh_interval Refresh interval to regenerate the session ID, default 10 minutes]
     * @var integer
     */
    private $_refresh_interval = 600;

    /**
     * [$_table_name Table name for storing session information]
     * @var string
     */
    private $_table_name = "sessions";

    /**
     * [$_session_id Holder for session_id]
     * @var [string]
     */
    private $_session_id;

    /**
     * Secure session salt
     * @ClassConstant
     */
    const SECURE_SESSION = '--$ecure$ess10n--';

    /**
     * [__construct Passes configuration values to _setConfig, registers the session handlers and starts the session]
     * @param array $config [array of configuration params]
     */
    public function __construct(array $config)
    {
        session_set_save_handler(
            array($this, 'open'),
            array($this, 'close'),
            array($this, 'read'),
            array($this, 'write'),
            array($this, 'destroy'),
            array($this, 'gc')
        );

        $this->_setConfig($config);
        session_start();
    }

    /**
     * [_setConfig Sets up the configuration values passed in by __construct and creates the session storage MySQL table]
     * @param [Array] $config [Config params holder]
     */
    private function _setConfig($config)
    {
        $this->_db = $config['dbconnector'];
        $this->_expiry = (isset($config['expiry'])) ? $config['expiry'] : $this->_expiry;
        $this->_session_cookie_ttl = (isset($config['session_cookie_ttl'])) ? $config['session_cookie_ttl'] : $this->_session_cookie_ttl;
        $this->_https = (isset($_SERVER['HTTPS'])) ? TRUE : FALSE;
        $this->_refresh_interval = (isset($config['refresh_interval'])) ? $config['refresh_interval'] : $this->_refresh_interval;
        $this->_user_agent = isset($config['user_agent']) ? $config['user_agent'] : $_SERVER['HTTP_USER_AGENT'];
        $this->_ip_address = $this->_getRealIpAddr();

        ini_set('session.cookie_lifetime', $this->_session_cookie_ttl);
        ini_set('session.gc_maxlifetime', $this->_expiry);
        ini_set('session.cookie_httponly', 1);
        ini_set('session.entropy_file', '/dev/urandom');
        ini_set('session.hash_function', 'whirlpool');
        ini_set('session.use_only_cookies', 1);
        ini_set('session.cookie_secure', $this->_https);
        ini_set('session.entropy_length', 512);
        ini_set('session.use_trans_sid', false);

        $stmt_create = "CREATE TABLE IF NOT EXISTS {$this->_table_name} (
            `session_id` varchar(255) NOT NULL,
            `data` text,
            `user_agent` varchar(255) NOT NULL,
            `ip_address` varbinary(16) NOT NULL,
            `last_updated` int(11) NOT NULL,
            `fingerprint` varchar(255) NOT NULL,
            PRIMARY KEY (`session_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8";

        $this->_db->exec($stmt_create);
    }

    /**
     * [_getRealIpAddr Get the IP address of the user]
     * @return [string] [IP address of the client]
     */
    private function _getRealIpAddr()
    {
        if (!empty($_SERVER['HTTP_CLIENT_IP'])) {
            /* check IP from shared internet */
            $ip = $_SERVER['HTTP_CLIENT_IP'];
        } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
            /* check if the IP is passed from a proxy */
            $ip = $_SERVER['HTTP_X_FORWARDED_FOR'];
        } else {
            $ip = $_SERVER['REMOTE_ADDR'];
        }
        return $ip;
    }

    /**
     * [open The open callback works like a constructor in classes and is executed when the session is being opened.
     * It is the first callback function executed when the session is started automatically or manually with session_start().
     * Return value is TRUE for success, FALSE for failure.]
     * @param [string] $path [Path for saving the session file]
     * @param [string] $name [Session name]
     * @return [boolean]
     */
    public function open($path, $name)
    {
        return true;
    }

    /**
     * [close The close callback works like a destructor in classes and is executed after the session write callback has been called.
     * It is also invoked when session_write_close() is called. Return value should be TRUE for success, FALSE for failure.]
     * @return [boolean]
     */
    public function close()
    {
        /* calling gc() explicitly; it will clear all expired sessions */
        $this->gc();
        return true;
    }

    /**
     * [_refresh Whenever a new session ID is required we can call this method; sets a new session ID]
     */
    private function _refresh()
    {
        session_regenerate_id(true);
        $this->_session_id = session_id();
    }

    /**
     * [_needRenewal Method for checking if the session needs renewal, based on $_refresh_interval]
     * @param [int] $id [session_id]
     * @return [boolean]
     */
    private function _needRenewal($id)
    {
        $stmt = $this->_db->prepare("SELECT last_updated FROM {$this->_table_name} WHERE session_id = ?");
        $stmt->execute(array($id));
        $record = current($stmt->fetchAll());

        if ($record !== FALSE && count($record) > 0) {
            /* Checks if the session ID has exceeded its permitted lifespan. */
            if ((time() - $this->_refresh_interval) > $record['last_updated']) {
                /* Regenerates a new session ID */
                $this->_refresh();
                $sql = "UPDATE {$this->_table_name} SET session_id = :session_id, last_updated = :last_updated WHERE session_id = '$id'";
                $stmt = $this->_db->prepare($sql);
                $stmt->bindParam(':last_updated', $id, PDO::PARAM_INT);
                $stmt->bindParam(':session_id', $this->_session_id, PDO::PARAM_STR); // this is what will be returned by _refresh
                $stmt->execute();
                return true;
            }
        }
        return false;
    }

    /**
     * [_isExpired Method for checking if the current session is expired]
     * @param [array] $record [session info array passed in by read()]
     * @return boolean
     */
    private function _isExpired($record)
    {
        $ses_life = time() - $this->_expiry;
        $stmt = $this->_db->prepare("SELECT session_id FROM {$this->_table_name} WHERE last_updated < ? AND session_id = ?");
        $stmt->execute(array($ses_life, $record['session_id']));
        $record = current($stmt->fetchAll());

        if ($record)
            return true;
        else
            return false;
    }

    /**
     * [read The read callback must always return a session encoded (serialized) string,
     * or an empty string if there is no data to read.
     * This callback is called internally by PHP when the session starts or when session_start() is called.
     * Before this callback is invoked PHP will invoke the open callback.]
     * @param [string] $id [session_id]
     */
    public function read($id)
    {
        try {
            $stmt = $this->_db->prepare("SELECT session_id, fingerprint, data, user_agent, INET6_NTOA(ip_address), last_updated FROM {$this->_table_name} WHERE session_id = ?");
            $stmt->execute(array($id));
            $record = current($stmt->fetchAll());

            if (empty($record['session_id'])) {
                $this->_refresh();
                return '';
            } else {
                if ($this->_isSuspicious($record['fingerprint']) || $this->_isExpired($record)) {
                    $this->destroy($id);
                    throw new Exception('Possible Session Hijack attempt/Session expired/Some mismatch.');
                } else {
                    /* Need a renewal? */
                    if ($this->_needRenewal($id)) {
                        /* recursive call */
                        $this->read($this->_session_id);
                    }
                    return $record['data'];
                }
            }
        } catch (PDOException $e) {
            echo $e->getMessage();
            $this->_refresh();
            return '';
        } catch (Exception $e) {
            echo $e->getMessage();
            $this->_refresh();
            return '';
        }
    }

    /**
     * [_getFingerPrint Generates the session fingerprint: md5(USER AGENT + SECURE SESSION + IP ADDRESS)]
     * @return [string] [encrypted session info fingerprint]
     */
    private function _getFingerPrint()
    {
        return md5($this->_user_agent . self::SECURE_SESSION . $this->_ip_address);
    }

    /**
     * [_isSuspicious Checks for a possible session hijack attempt by comparing encrypted user-system-specific values against existing records]
     * @param [string] $fp [session fingerprint]
     * @return boolean
     */
    private function _isSuspicious($fp)
    {
        return ($fp != $this->_getFingerPrint()) ? True : False;
    }

    /**
     * [write The write callback is called when the session needs to be saved and closed.
     * This callback receives the current session ID and a serialized version of the $_SESSION superglobal.
     * The serialization method used internally by PHP is specified in the session.serialize_handler ini setting.
     * Here we are storing/updating the session data against the session ID]
     * @param [string] $id [session id]
     * @param [serialized data] $data [The serialized session data passed to this callback should be stored against the passed session ID]
     * @return [boolean]
     */
    public function write($id, $data)
    {
        try {
            $sql = "INSERT INTO {$this->_table_name} (session_id, user_agent, ip_address, last_updated, data, fingerprint)
                    VALUES (:session_id, :user_agent, INET6_ATON(:ip_address), :last_updated, :data, :fingerprint)
                    ON DUPLICATE KEY UPDATE data = VALUES(data), last_updated = VALUES(last_updated)";

            $time = time();
            $fp = $this->_getFingerPrint();

            $stmt = $this->_db->prepare($sql);
            $stmt->bindParam(':session_id', $id, PDO::PARAM_STR);
            $stmt->bindParam(':user_agent', $this->_user_agent, PDO::PARAM_STR);
            $stmt->bindParam(':ip_address', $this->_ip_address, PDO::PARAM_STR);
            $stmt->bindParam(':last_updated', $time, PDO::PARAM_INT);
            $stmt->bindParam(':data', $data, PDO::PARAM_STR);
            $stmt->bindParam(':fingerprint', $fp, PDO::PARAM_STR);
            $stmt->execute();
            return true;
        } catch (PDOException $e) {
            echo $e->getMessage();
            return false;
        } catch (Exception $e) {
            echo $e->getMessage();
            return false;
        }
    }

    /**
     * [destroy Deletes the current session ID from the database]
     * @param [string] $id [session_id]
     * @return [boolean]
     */
    public function destroy($id)
    {
        $stmt = $this->_db->prepare("DELETE FROM {$this->_table_name} WHERE session_id = ?");
        $session_res = $stmt->execute(array($id));
        if (!$session_res)
            return false;
        else
            return true;
    }

    /**
     * [gc The garbage collector callback is invoked internally by PHP periodically in order to purge old session data.
     * The frequency is controlled by session.gc_probability and session.gc_divisor.
     * The value of lifetime which is passed to this callback can be set in session.gc_maxlifetime.
     * Here we are calling this via close(), to delete all expired sessions from the DB.
     * Return value should be TRUE for success, FALSE for failure.]
     * @return [boolean]
     */
    public function gc()
    {
        $ses_life = time() - $this->_expiry;
        $stmt = $this->_db->prepare("DELETE FROM {$this->_table_name} WHERE last_updated < ?");
        $session_res = $stmt->execute(array($ses_life));
        return true;
    }

    /**
     * [__destruct register_shutdown_function(): the following prevents unexpected effects when using objects as save handlers.
     * Session data is usually stored after the script terminates without the need to call session_write_close(),
     * but as session data is locked to prevent concurrent writes, only one script may operate on a session at any time.
     * When using framesets together with sessions you will experience the frames loading one by one due to this locking.
     * You can reduce the time needed to load all the frames by ending the session as soon as all changes to session variables are done.]
     */
    public function __destruct()
    {
        register_shutdown_function('session_write_close');
    }
}

try {
    $pdo = new PDO('mysql:host=localhost;dbname=test', '', '');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
    $config['dbconnector'] = $pdo;
} catch (PDOException $e) {
    echo $e->getMessage();
    die;
}

$s = sessionmanager::getInstance($config);

Usage example, example.php:

<?php
error_reporting(E_ALL);
ini_set('display_errors', 1);
error_reporting(-1);

include('sessionmanager.lib.php');

$_SESSION['Motto'] = 'Lets Do it !';
var_dump($_SESSION);
exit; | Simple PHP session handler class (using MySQL for session data storage) | php;object oriented;security;session | null
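A note for readers comparing APIs: since PHP 5.4 the same six callbacks can be supplied by implementing SessionHandlerInterface and passing the object to session_set_save_handler() directly, which avoids the array-of-callbacks wiring above. A minimal sketch of that mechanism; the DbSessionHandler class name is an illustrative assumption and the table layout is simplified from the one in the question:

```php
<?php
// Minimal sketch of a DB-backed handler via SessionHandlerInterface (PHP >= 5.4).
class DbSessionHandler implements SessionHandlerInterface
{
    private $db;

    public function __construct(PDO $db) { $this->db = $db; }

    public function open($path, $name) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $stmt = $this->db->prepare("SELECT data FROM sessions WHERE session_id = ?");
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : '';
    }

    public function write($id, $data)
    {
        $stmt = $this->db->prepare(
            "INSERT INTO sessions (session_id, data, last_updated) VALUES (?, ?, ?)
             ON DUPLICATE KEY UPDATE data = VALUES(data), last_updated = VALUES(last_updated)");
        return $stmt->execute(array($id, $data, time()));
    }

    public function destroy($id)
    {
        return $this->db->prepare("DELETE FROM sessions WHERE session_id = ?")->execute(array($id));
    }

    public function gc($maxlifetime)
    {
        return $this->db->prepare("DELETE FROM sessions WHERE last_updated < ?")->execute(array(time() - $maxlifetime));
    }
}

// session_set_save_handler(new DbSessionHandler($pdo), true);
// session_start();
```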
_softwareengineering.231500 | I have been doing some research on accessing a DB from mobile devices. There are so many different ways of doing it, and multiple ways to make it secure. What is a good way to access a database from a mobile device securely? Considering my specific case: I have an existing WCF service which does everything I would need for the app, and I would like to re-use it. I have been working on using JSONP calls to the service and developing apps for iOS, Android and Windows Phone. The calls are secured by HTTPS and I would be implementing some authentication policy for the calls. But how secure would all this be? I have some really sensitive data to worry about. I am using HTML5 and JS for app development to keep it uniform and less difficult to manage across all platforms. How do I access the database and keep my data secure? | How do I access databases from a mobile device securely? | security;mobile;wcf | You shouldn't be accessing any external database from a mobile device at all. From the network's point of view, connections to a database should only be made within a local network where the connection is reliable and fast, which is something you will not get with mobile connections. From the design point of view, exposing a database directly is simply a bad idea: not only is it too low-level, it will also be a nightmare when users are running different versions of your app and trying to do different things to your database. You should provide RESTful web services with an appropriately high level of API abstraction for your mobile app instead.
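To make the recommended shape concrete on the client side, a minimal sketch in the question's own HTML5/JS setting: the app talks only to an HTTPS endpoint that enforces authentication, never to the database. The URL, route and token handling below are placeholder assumptions, not a real API:

```javascript
// The server validates the token, scopes the query to this user,
// and returns only the data this user is allowed to see.
function fetchOrders(authToken) {
  return fetch("https://api.example.com/v1/orders", {
    method: "GET",
    headers: {
      "Authorization": "Bearer " + authToken,
      "Accept": "application/json"
    }
  }).then(function (response) {
    if (!response.ok) { throw new Error("HTTP " + response.status); }
    return response.json();
  });
}
```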
_unix.282318 | I have a script that asks the user for the path to their desired directorydefault_path = '/path/to/desired/directory/'user_path = raw_input(Enter new directory [{0}]: .format(default_path)) or default_pathprint user_pathThe script has a default directory if no input is given. I want the script to remember the directory entered by the user. That is, the next time the script is run I want the default path to be the path input by the last user. Is this possible?Maybe more generally, is there a good reference for writing command-line tools in python? | How can I get a python script to remember user input? | python | If you are maintaining the program, then make it store the history. In Python, use the readline library and make it store a history file.If you don't want to modify the program, you can use a wrapper such as rlwrap.rlwrap -H ~/.myprogram.history myprogram |
_vi.11609 | Very often, I write markdown in Vim, and there will be paragraphs in those markdown files. To help my editing, I set up my Vim to wrap lines at 80 characters. It works nicely if I just keep typing, but the problem is that if I need to do some correction, it becomes very annoying. Demo (taken from the Wikipedia article on first-order logic):

The adjective first-order distinguishes first-order logic from higher-order
logic in which there are predicates having predicates or functions as arguments.
In first-order theories, predicates are often associated with sets. In
interpreted higher-order theories, predicates may be interpreted as sets of
sets.

So far so good. But when I revise the article, I might decide to add something in the middle, say:

The adjective first-order distinguishes first-order logic from higher-order
logic in which there are predicates having predicates or functions as arguments,
or in which one or both of predicate quantifiers or function quantifiers are permitted.
In first-order theories, predicates are often associated with sets. In
interpreted higher-order
theories, predicates may be interpreted as sets of sets.

Notice line 3 is the one I want to wrap. If I do it in Vim, I will need to manually join the lines and rewrap the whole paragraph. Anyone got an idea how to make Vim do it automatically? | automatically rewrap lines when writing markdown in VIM | formatting;filetype markdown;word processing | Even simpler: the a flag for 'formatoptions' enables automatic formatting of paragraphs whenever text is inserted or deleted. See :help fo-table for details on 'formatoptions' flags and :help autoformat.

:set formatoptions+=a

gq and gw will format the lines that the following motion moves over.

Formatting is done with one of three methods:
1. If 'formatexpr' is not empty the expression is evaluated. This can differ for each buffer.
2. If 'formatprg' is not empty an external program is used.
3. Otherwise formatting is done internally. In the third case the 'textwidth' option controls the length of each formatted line.

The difference between the two is that gq will leave the cursor on the first non-blank of the last formatted line, while gw will put the cursor back where it started.

You can easily manually rewrap the paragraph your cursor is currently in with gwap, or the entire document with gggwG, though that will move your cursor thanks to the leading gg.

With an autocommand, you can have formatting happen automatically. Here's an example that formats the current paragraph when leaving insert mode:

augroup myformatting
    autocmd!
    autocmd InsertLeave * normal gwap<CR>
augroup END

There are other autocommand triggers that you may find work better for you. You can explore the options under :help autocmd-events. The most relevant ones are probably under the various subheadings.
_cseducators.415 | When demoing code in class, I have tried at least three different methods of instruction:Write code live and have students type alongWrite code live and have students follow my logic while only I type and narrate my thinkingStart with a full working demo and analyze an already written programIn all circumstances students can access the programs on GitHub after class. Moving forward, I would like to develop a more consistent and effective approach. What instructional methods are most effective for teaching with sample programs? | Instructional Methods for In-class Code Demos | lesson ideas;curriculum design | null |
_scicomp.8752 | I have a bunch of polygons and a coarse uniform grid. I want to implement two different range queries, for a rectangle aligned with the uniform grid:Does the rectangle intersect with any polygon at all?Give me all the polygons which intersect the rectangle.A simple data-structure would be to have for each cell a list of pointers to the polygons that overlap the cell. This data-structure is appropriate for the first range query. It seems less appropriate for the second range query, because the same polygon may overlap multiple cells, and we don't want to report the same polygon multiple times. One solution would be to use a std::set<polygon_t*> container to collect the pointers to polygons that are delivered by a query. However, let's assume I mistrust std::set in the sense that it will do too many memory allocations even if I reuse the same container for multiple queries. (This mistrust may be unfounded, so I should definitively try this first.)I wonder whether the problem can be solved more efficiently, if I replace the polygons by their grid aligned bounding boxes. During a range query, the knowledge of the lower left point of the bounding box should be sufficient for determining whether the polygon corresponding to the box has already been reported. But even in this case, the complexity of a query returning $m$ polygons won't be $O(m)$, because some work is done for every cell intersecting the bounding box of the polygon. I wonder whether creating three additional lists with the polygons whose bounding box starts in this cell in x-direction, y-direction and x- and y-direction allowing to avoid this additional work would be a good idea. Instead of creating new lists, I could also try to just partition the existing list appropriately.In general, the coarse uniform grid based data-structures discussed above offer essentially constant lookup time, at the cost of potentially high memory consumption. What is the state of the art regarding such data-structures? Is there a way to avoid the potential memory bloat caused by elements with huge bounding boxes which overlap many grid cells? Also, I don't like the fact that only the simple data-structure can handle polygons directly, while the more efficient data-structures only handle the bounding boxes of the polygons. | Practical implementation of spatial binning for rectangular range queries | c++;data structures;spatial data | null |
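To make the bounding-box deduplication idea concrete: instead of collecting results in a std::set, each polygon can carry the id of the last query that reported it, so a rectangle query visits each candidate once and skips repeats in O(1) with no per-query allocation. A sketch under stated assumptions (the type names and cell indexing are illustrative, not a definitive implementation):

```cpp
#include <cstdint>
#include <vector>

struct Polygon {
    std::uint64_t last_query = 0;  // stamp of the last query that reported it
    // ... geometry ...
};

struct UniformGrid {
    int nx, ny;
    std::vector<std::vector<Polygon*>> cells;  // polygons overlapping each cell
    std::uint64_t query_counter = 0;

    // Report each polygon at most once for the cell range [x0..x1] x [y0..y1].
    void query(int x0, int y0, int x1, int y1, std::vector<Polygon*>& out) {
        ++query_counter;
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                for (Polygon* p : cells[y * nx + x])
                    if (p->last_query != query_counter) {
                        p->last_query = query_counter;  // mark as reported
                        out.push_back(p);
                    }
    }
};
```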
_datascience.10213 | I have a dataset with 10 columns and 158 rows, and I try to predict a target of 1 column with 158 rows. I used cross-validation and grid search with the ElasticNet algorithm. Before evaluating the model I also checked the Pearson correlation between the 10 columns used for training and the 1 column I try to predict. The correlations are not good, but when I evaluate the model the R^2 score is near 0.98. How can I be sure that this score is trustworthy? I didn't expect an R^2 like this; it is much higher than I expected. Thanks in advance. | How can I be sure about my R^2 score? | machine learning;data mining;scikit learn | null
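One quick sanity check for a suspiciously high score is to recompute it under shuffled K-fold cross-validation with all preprocessing inside a pipeline, so nothing is fit on the evaluation folds. A minimal scikit-learn sketch; the placeholder data, the alpha value and the scaling step are assumptions, not the asker's actual setup:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (158, 10) features, y: (158,) target -- placeholder random data here.
X = np.random.rand(158, 10)
y = np.random.rand(158)

# Scaling lives inside the pipeline so it is fit only on training folds.
model = make_pipeline(StandardScaler(), ElasticNet(alpha=0.1))

# Shuffling guards against ordered rows leaking structure into the folds.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
print(scores, scores.mean())
```

If the honest cross-validated R^2 collapses, the original 0.98 was likely an artifact of leakage or of evaluating on data the grid search had already seen.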
_softwareengineering.328225 | Can anyone explain why we need +1 byte for the sentinel value? As we know, 1 char = 1 byte, so if we declare an array such as char a[50], why can't I store 50 chars instead of 49? | Char array - why do we need +1 byte for sentinel value? | c++;array | Typically C++ users prefer to use std::string where possible. Your question stems from lower-level C programming and native types (which of course are still relevant in C++, but anyway).

Bottom line: when you pass C-strings around you're simply passing the address of the first byte of the string. A lot of commonly-used string manipulation functions have something internally like this:

char* c;
for (c = str; *c != 0; ++c) {
    // Do something
}

When you're starting out programming in C/C++ you may be tempted to say "I know this string is 50 bytes long". But do you really?

char myStr[50] = "My Name Is Assimilater";

It's pretty common to allocate more than the exact number of bytes you need. You wouldn't want printf to copy any garbage characters after "Assimilater" to the console, now would you?

C-string buffers also tend to get reused to store many strings over their lifetime; this isn't a problem so long as the buffer is big enough to handle any string you intend to throw at it. This, coupled with the fact that an algorithm may go off into myStr[51] and beyond if not given a sentinel to stop at, makes the presence of a sentinel value important.
_codereview.85874 | I wrote a small script to get the probability of a given text being spam. I downloaded some ham (good content) and spam (bad content) text from the internet. The spam corpus is a 1.8 MB txt file, and the ham corpus a 3.6 MB txt file.

My method is pretty naive: I count how many times each word appears in the ham and spam and calculate the ratio (ham_count / spam_count): the higher the better. I then average this ratio, getting a final number that should be interpreted as follows:

almost 0: almost surely spam
1: perfectly equally likely to be spam or ham
big number: almost surely ham

TOP_WORDS = 3

class String
    def remove_invalid
        self.encode("UTF-8", :invalid => :replace, :undef => :replace, :replace => "?")
    end
end

class Array
    def avg
        self.inject(:+) / self.length.to_f
    end
end

def is_spam?(text, ham, spam)
    # Small value means spam: 0 is almost surely spam
    # Big values mean good content
    words = text.split
    words.each  #+ 1 avoids division by zero error
        .map {|word| (ham.count(word) + 1) / (spam.count(word) + 1)}
        .avg
end

# Attribution to 'Enron' for the mail corpus
ham = File.read("good_mails.txt").remove_invalid.split
spam = File.read("bad_mails.txt").remove_invalid.split

messages = ["buy all now friend",
            "you must click here now",
            "I would like to hear from you",
            "what is your research about",
            "buy 100% discount",
            "national lottery: you won",
            "little money needed buy all now!",
            "I am intersted in your work, you could make some good money from it",
            "I would propose a financial affair",
            "new policy free just for you, no trouble",
            "apparently you misunderstood me yesterday"
           ]

puts messages.map{|txt| [txt, ": ", is_spam?(txt, ham, spam)].join} | Spam classifier | ruby;email | Let's kick off with some style stuff, since your algorithm looks good (i.e. no obvious room for improvement) though basic. I'm using this as my style guide; it seems to be fairly well accepted.

1. Avoid needless metaprogramming. You could easily have avg as its own function; ditto for remove_invalid. There's no particular reason to monkey-patch String and Array besides prettiness, and the code doesn't lose anything to a couple of extra parentheses.

2. Indents in Ruby are two spaces, not four.

3. Put spaces around { (except in string interpolation; see below). It makes your code much more readable. For example, this:

.map {|word| (ham.count(word) + 1) / (spam.count(word) + 1)}

becomes this:

.map { |word| (ham.count(word) + 1) / (spam.count(word) + 1) }

4. Since my attention has been drawn to that area like a velociraptor to impossible faint noise, why did you use words.each instead of just words in is_spam?? That's redundant. Array has a map function, and both Array#map and Enumerable#map return the same thing (assuming that all else is equal). All you have to do is delete the .each on that line. (You also have some trailing whitespace at the end of the comment on that line. Ew.)

5. What on Earth is this?

[txt, ": ", is_spam?(txt, ham, spam)].join

Just use string interpolation:

"#{ txt }: #{ is_spam?(txt, ham, spam) }"

Much simpler, at least to me.

6. Okay, right around this point I realized you're a Python fellow. It explains a lot[1], including this odd indentation:

messages = ["buy all now friend",
            "you must click here now",
            "I would like to hear from you",
            # etc.
           ]

Aside from what I pointed out in #2, there are three issues here (the style guide doesn't explicitly say any of them, AFAICT, but they're the standard from what I've seen): You didn't indent the innards; you should put one indent before each line until the ]. The first element should be on the line after the opening brace, not on the same line, and indented the same as everything else. The ending bracket should be indented as much as the opening one, not as much as the elements in the array.

7. You never use TOP_WORDS in your code. Since it's useless, delete it.

8. In Ruby, methods ending in ? tend to return booleans, not numbers, so is_spam? should return true or false, depending on whether it's spam or not. Alternatively, you could rename it to spam_factor, since that's what it seems to be calculating. See the code below for how I chose to solve that particular dilemma.

That's about all I see right now. With these changes (and some personal style changes which are purely opinion-based, so I didn't enumerate them), here's what your code looks like:

def remove_invalid(str)
  str.encode('UTF-8', :invalid => :replace, :undef => :replace, :replace => '?')
end

def avg(arr)
  arr.inject(:+) / arr.length.to_f
end

def spam_factor(text, ham, spam)
  # Small value means spam: 0 is almost surely spam
  # Big values mean good content
  words = text.split
  avg(words.map { |word| (ham.count(word) + 1) / (spam.count(word) + 1) }) # + 1 avoids division by zero error
end

def is_spam?(text, ham, spam)
  spam_factor(text, ham, spam) > 1
end

# Attribution to 'Enron' for the mail corpus
ham = remove_invalid(File.read('good_mails.txt')).split
spam = remove_invalid(File.read('bad_mails.txt')).split

messages = [
  'buy all now friend',
  'you must click here now',
  'I would like to hear from you',
  'what is your research about',
  'buy 100% discount',
  'national lottery: you won',
  'little money needed buy all now!',
  'I am intersted in your work, you could make some good money from it',
  'I would propose a financial affair',
  'new policy free just for you, no trouble',
  'apparently you misunderstood me yesterday'
]

puts messages.map { |txt| "#{txt}: #{is_spam?(txt, ham, spam)}" }

From what I can get of your algorithm without having a runnable version to play with, it looks good. About the only suggestion I could offer would be to make it linear rather than exponential (i.e. subtraction instead of division), to make it more intuitive to use and simpler to update with more features. That would change your function (if I've understood it right) to:

def spam_factor(text, ham, spam)
  # Small value means spam: 0 is almost surely spam
  # Big values mean good content
  words = text.split
  avg(words.map { |word| (ham.count(word) + 1) - (spam.count(word) + 1) }) # + 1 avoids division by zero error
end

def is_spam?(text, ham, spam)
  spam_factor(text, ham, spam) > 0
end

Notice that the sole difference is that I swapped / for -. The result is that the more positive the result, the more hammy it is; the more negative, the more spammy.

I've got a niggling idea for using a Hash rather than recalculating the amount every time, but I dunno if it'd work out. I'll have to fiddle around a bit.

[1]: #2, #5.
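For readers curious about that closing "niggling idea": precomputing the corpus word counts into Hashes turns each lookup into O(1) instead of rescanning the whole corpus per word. A sketch of one way it could look (my sketch, not the reviewer's tested code; it reuses the remove_invalid and avg helpers defined above):

```ruby
# Build word-frequency tables once; Hash.new(0) gives missing words a count of 0.
def frequencies(words)
  words.each_with_object(Hash.new(0)) { |word, counts| counts[word] += 1 }
end

ham_counts  = frequencies(remove_invalid(File.read('good_mails.txt')).split)
spam_counts = frequencies(remove_invalid(File.read('bad_mails.txt')).split)

def spam_factor(text, ham_counts, spam_counts)
  words = text.split
  avg(words.map { |word| (ham_counts[word] + 1) - (spam_counts[word] + 1) })
end
```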
_unix.308068 | Went through this post: Pass arguments to function exactly as-is. But I have a slightly different setup. I have 3 bash functions foo, bar, baz. They are set up as follows:

foo() {
    bar $1
}

bar() {
    var1=$1
    var2=$2
    echo $var1 test $var2
}

export ENV_VAR_1=1
export ENV_VAR_2="2 3"

foo "${ENV_VAR_1} ${ENV_VAR_2}"

I'd expect the output to be: 1 test 2 3. But the output is: 1 test 2. I get why this happened: bar was executed as follows: bar 1 2 3. My question is: how do I get it to execute bar 1 "2 3"?

Approaches I tried:

foo () { bar "$1"; }   # Out: 1 2 3 test. Makes sense since "1 2 3" is interpreted as a single argument. | pass arguments to function as is | bash | This provides a single string as an argument to foo:

foo "${ENV_VAR_1} ${ENV_VAR_2}"

Because $1 is not in quotes, the shell performs word splitting and, consequently, this provides three arguments to bar:

bar $1

Word splitting is done on any IFS characters in $1. The original source of those characters is not considered.

Simpler example. Let's define x as:

$ x="${ENV_VAR_1} ${ENV_VAR_2}"

Now, let's print $x:

$ printf "%s\n" "$x"
1 2 3

As you can see, $x is interpreted as one argument. By contrast, consider:

$ printf "%s\n" $x
1
2
3

In the above, word splitting is performed on $x, creating three arguments.

Shell strings have no notion of history. String x has no record of "2 3" being part of one string before x was assigned. String x just consists of 1, space, 2, space, and 3, and word splitting operates on the spaces.

Alternative: selecting your own IFS. This produces the output that you want:

$ foo() ( IFS=@; bar $1; )
$ foo "${ENV_VAR_1}@${ENV_VAR_2}"
1 test 2 3

In foo, we set IFS to @. Consequently, all subsequent word splitting is performed using @ as the word separator. So, when calling foo, we put a @ at any location at which we want word splitting.
_unix.251902 | I've set up an encrypted home directory for user piranha3:

root@raspberrypi:~# ecryptfs-verify -u piranha3 -h
INFO: [/home/piranha3/.ecryptfs] exists
INFO: [/home/piranha3/.ecryptfs/Private.sig] exists
INFO: [/home/piranha3/.ecryptfs/Private.sig] contains [2] signatures
INFO: [/home/piranha3/.ecryptfs/Private.mnt] exists
INFO: [/home/piranha3] is a directory
INFO: [/home/piranha3/.ecryptfs/auto-mount] Automount is set
INFO: Mount point [/home/piranha3] is the user's home
INFO: Ownership [piranha3] of mount point [/home/piranha3] is correct
INFO: Configuration valid

But after piranha3 logs out, the directory is not unmounted:

root@raspberrypi:~# mount | grep ecryptfs
/home/.ecryptfs/piranha3/.Private on /home/piranha3 type ecryptfs (rw,nosuid,nodev,relatime,ecryptfs_fnek_sig=729061d7fa17b3a4,ecryptfs_sig=eb5ec4d9c13e2d74,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs)

lsof output:

lsof: WARNING: can't stat() cifs file system /media/cifs
      Output information may be incomplete.
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.

System information:

root@raspberrypi:~# dpkg -l ecryptfs-utils
Deseado=desconocido(U)/Instalar/eliminaR/Purgar/retener(H)
| Estado=No/Inst/ficheros-Conf/desempaqUetado/medio-conF/medio-inst(H)/espera-disparo(W)/pendienTe-disparo
|/ Err?=(ninguno)/requiere-Reinst (Estado,Err: maysc.=malo)
||/ Nombre           Versin       Arquitectura      Descripcin
+++-========================-=================-=================-======================================================
ii  ecryptfs-utils   103-5         armhf             ecryptfs cryptographic filesystem (utilities)

root@raspberrypi:~# uname -a
Linux raspberrypi 4.1.13-v7+ #826 SMP PREEMPT Fri Nov 13 20:19:03 GMT 2015 armv7l GNU/Linux

And finally, about PAM:

root@raspberrypi:~# grep -r ecryptfs /etc/pam.d
/etc/pam.d/common-session:session optional pam_ecryptfs.so unwrap
/etc/pam.d/common-password:password optional pam_ecryptfs.so
/etc/pam.d/common-auth:auth optional pam_ecryptfs.so unwrap
/etc/pam.d/common-session-noninteractive:session optional pam_ecryptfs.so unwrap

Why is the /home/piranha3 directory not unmounted? | ecryptfs: auto-umount does not work | unmounting;ecryptfs | null
_unix.134181 | I have an ssh connection to a RHEL x86 machine, and I can't use yum and I don't have root rights. Is there a way to still install rtmpdump? By looking at this tutorial, it seems that installing rtmpdump on CentOS (as I understand, CentOS and RHEL are similar) is pretty easy, but only if you have yum? So what are my options? | Building rtmpdump on RHEL x86 without yum and no root rights | linux;centos;rhel;yum | null |
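With no answer recorded, one common route for this situation is building from source into the home directory, which needs neither yum nor root. A hedged sketch; the download URL, version and Makefile variables are assumptions to verify against the rtmpdump README, and the build needs the OpenSSL and zlib development headers to be available:

```sh
# Build rtmpdump from source and install under $HOME; no root needed.
cd ~
curl -LO http://rtmpdump.mplayerhq.hu/download/rtmpdump-2.3.tgz   # URL/version: verify
tar xzf rtmpdump-2.3.tgz && cd rtmpdump-2.3
make SYS=posix
make install prefix=$HOME/.local

# Make the binary and its shared libraries findable.
export PATH=$HOME/.local/bin:$PATH
export LD_LIBRARY_PATH=$HOME/.local/lib:$LD_LIBRARY_PATH
```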
_unix.123602 | I've always wondered this but never took the time to find out, so I'll do so now: how portable is the usage shown here of either /proc/$$/fd/$N or /dev/fd/$N? I understand POSIX guarantees /dev/null, /dev/tty, and /dev/console (though I only found that out the other day after reading the comments on this answer), but what about these others? As far as I can tell they're pretty common, but in what systems can I not expect to find them? Why not? Is it more likely to find one than the other? Will they always exhibit like attributes?

I tend to use these devices pretty extensively in all manner of ways, and I'd like to know if there's a chance I'd come up short just trying. Also, the above questions should be understood to be only what I think I'd like to know, but, since I obviously have to ask in the first place, I may not know best in this regard, and they should not be considered stringent requirements for an answer. Just clue me in if you can, please. | Portability of file descriptor links | process;proc;open files;file descriptors;portability | The /proc/PID/fd/NUM symlinks are quasi-universal on Linux, but they don't exist anywhere else (except on Cygwin which emulates them). /proc/PID/fd/NUM also exists on AIX and Solaris, but they aren't symlinks. Portably, to get information about open files, install lsof.

Unices with /proc/PID/fd

Linux: Under Linux, /proc/PID/fd/NUM is a slightly magic symbolic link to the file that the process with the ID PID has open on the file descriptor NUM. This link is magic in that, for example, it can be used to access the file even if the file is removed. The link will track the file through renames, too. /proc/self is a magic symbolic link which points to /proc/PID where PID is the process that accesses the link. This feature is present on virtually all Linux systems. It's provided by the driver for the proc filesystem, which is technically optional but used for so many things (including making ps work; it reads from /proc/PID) that it's almost never left out even on embedded systems.

Cygwin: Cygwin emulates Linux's /proc/PID/fd/NUM (for Cygwin processes) and /proc/self.

Solaris (since version 2.6), AIX: There are /proc/PID/fd entries for each file descriptor, but they appear as the same type as the opened file, so they provide no information about the path of the file. They do however report the same stat information as fstat would report to the process that has the file open, so it's possible to determine on which filesystem the file is located and its inode number. Directories appear as symbolic links, however they are magic symlinks which can only be followed, and readlink returns an empty string. Under AIX, the procfiles command displays some information about a process's open files. Under Solaris, the pfiles command displays some information about a process's open files. This does not include the path to the file (on Solaris, it does since Solaris 10, see below).

Solaris (since version 10): In addition to /proc/PID/fd/NUM, modern Solaris versions have /proc/PID/path/NUM which contains symbolic links similar to Linux's symlinks in /proc/PID/fd/NUM. The pfiles command shows information about a process's open files, including paths.

Plan 9: /proc/PID/fd is a text file which contains one record (line) per file descriptor opened by the process. The file name is not tracked there.

QNX: /proc/PID/ is a directory, but it doesn't contain any information about file descriptors.

Unices with /proc but no direct access to file descriptors

(Note: sometimes it's possible to obtain information about a process's open files by riffling through its memory image, which is accessible under /proc. I don't count that as direct access.)

Unices where /proc/PID is a file: The proc filesystem itself started out in UNIX 8th edition, but with a different structure, and went through Plan 9 and back to some unices. I think that all operating systems with a /proc have an entry for each PID, but on many systems, it's a regular file, not a directory. The following systems have a /proc/PID which needs to be read with ioctl: Solaris up to 2.5, OSF/1 (now known as Tru64), IRIX (?), SCO (?).

MINIX 3: MINIX 3 has a procfs server which provides several Linux-like components including /proc/PID/ directories. However this does not include /proc/PID/fd.

FreeBSD: FreeBSD has /proc/PID/ directories, but they do not provide information about open file descriptors. (There is however /proc/PID/file which is similar to Linux's /proc/PID/exe, giving access to the executable through a symbolic link.) FreeBSD's procfs is deprecated.

Unices without /proc: HP-UX, OpenBSD, NetBSD, Mac OSX.

File descriptor information through other channels

Fuser: The fuser command lists the processes that have a specified file open, or a file open on the specified mount point. This command is standard (available on all XSI-compliant systems, i.e. POSIX with the X/Open System Interface Extension). You can't go from a process to file names with this utility.

Lsof: Lsof stands for "list open files". It is a third-party tool, available (but usually not part of the default installation) for most unix variants. Obtaining information about open files is very system-dependent, as the analysis above might have made you suspect. The lsof maintainer has done the work of combining it all under a single interface. You can read the FAQ to see what kinds of difficulties lsof has to put up with. On most unices, obtaining information about the names of open files requires parsing kernel data structures. Quoting from FAQ 3.3 "Why doesn't lsof report full path names?":

Lsof can't obtain path name components from the kernel name caches of the following dialects: AIX.

Only the Linux kernel records full path names in the structures it maintains about open files; instead, most kernels convert path names to device and node number doublets and use them for subsequent file references once files have been opened.

If you need to parse information from lsof's output, be sure to use the -F mode (one field per line), preferably the -F0 mode (null-delimited fields). To get information about a specific file descriptor of a specific process, use the -a option with -p PID and -d NUM, e.g. lsof -a -p 123 -d 0 -F0n.

/dev/fd/NUM for file descriptors of the current process

Many unix variants provide a way for a process to access its open files via a file name: opening /dev/fd/NUM is equivalent to calling dup(NUM). These names are useful when a program wants a file name but you want to pass an already-open file (e.g. a pipe or socket); for example the shells that implement process substitution use them where available (using a temporary named pipe where /dev/fd is unavailable).

Where /dev/fd exists, there are also usually (always?) synonyms (sometimes symbolic links, sometimes hard links, sometimes magic files with equivalent properties): /dev/stdin = /dev/fd/0, /dev/stdout = /dev/fd/1, /dev/stderr = /dev/fd/2.

Under Linux, /dev/fd is a symbolic link to /proc/self/fd.

Under most unices (IRIX, OpenBSD, NetBSD, SCO, Solaris, ...), the entries in /dev/fd are character devices. They usually appear whether the file descriptor is open or not, and entries may not be available for file descriptors above a certain number.

Under FreeBSD and OSX, the fdescfs filesystem provides a dynamic /dev/fd directory which follows the open descriptors of the calling process. A static /dev/fd is available if fdescfs is not mounted.

Under OSF/1 (Tru64), /dev/fd is provided via fdfs.

There is no /dev/fd on AIX or HP-UX.
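For scripts that want to use these names without hardcoding per-OS knowledge, a cheap runtime feature test is one option; a small sketch:

```sh
# Sketch: prefer /proc/self/fd, fall back to /dev/fd, else give up.
if [ -d /proc/self/fd ]; then
    fd_dir=/proc/self/fd
elif [ -d /dev/fd ]; then
    fd_dir=/dev/fd
else
    echo "no fd filesystem available" >&2
    exit 1
fi
echo "using $fd_dir for fd access"
```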
_softwareengineering.22416 | I am being approached with a job for writing embedded C on micro controllers. At first I would have thought that embedding programming is too low on the software stack for me, but maybe I am thinking about it wrong.Normally I would have shrugged off an opportunity to write embedded code, as I don't consider myself an electrical engineer. Is this a bad assumption? Am I able to write interesting and useful software for embedded systems, or will I kick myself for dropping too low on the software stack?I went to school for computer science and really enjoyed writing a compiler, thinking about concurrent algorithms, designing data structures, and developing frameworks. However, I am currently employed as a web developer, which doesn't scream the interesting things I just described. (I currently deal with issues like: this check box needs to be 4 pixels to the left and this date is formatted wrong.)I appreciate everyone's input. I know I have to make the decision for myself, I just would like some clarification on what it means to be a embedded programmer, and if it fits what I find to be interesting. | Is embedded programming closer to electrical engineering or software development? | embedded systems | If you want to be good at working on embedded systems, then yes, you need to think like a EE some of the time. That is generally when you are writing code to interface with the various peripherals (serial busses like UART, SPI, I2C or USB), 8 and 16-bit timers, clock generators, and ADCs and DACs. Datasheets for microcontrollers often run into the hundreds of pages as they describe every bit of every register. It helps to be able to read a schematic so you can probe a board with an oscilloscope or logic analyzer.At other times, it is just writing software. But under tight constraints: often you won't have a formal OS or other framework, and you might have only a few KB of RAM, and maybe 64 KB of program memory. (These limits are assuming you are programming on smaller 8 or 16-bit micros; if you are working with embedded Linux on a 32-bit processor, you won't have the same memory constraints but you will still have to deal with any custom peripheral hardware that your Linux distro doesn't provide drivers for.)I have a background in both EE and CS so I enjoy both sides of the coin. I also do some web programming (mostly PHP), and desktop apps (C# and Delphi), but I've always enjoyed working on embedded projects the most. |
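To illustrate the "every bit of every register" flavour of work the answer describes, here is the kind of code it involves: a hedged C sketch with a made-up register address and bit layout, standing in for what a real part's datasheet would specify:

```c
#include <stdint.h>

/* Hypothetical memory-mapped UART registers; a real datasheet gives
   the actual addresses and bit positions. */
#define UART_STATUS  (*(volatile uint8_t *)0x4000u)
#define UART_DATA    (*(volatile uint8_t *)0x4001u)
#define TX_READY     (1u << 5)   /* made-up "transmit buffer empty" bit */

void uart_putc(char c) {
    while (!(UART_STATUS & TX_READY)) {
        /* busy-wait until the transmit buffer is free */
    }
    UART_DATA = (uint8_t)c;   /* writing the register starts transmission */
}
```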
_reverseengineering.2857 | I am trying to analyze an old malware sample in OllyDbg. It has instruction of the format CALL <JMP.&KERNEL32.SetUnhandledExceptionFilter>I am not an expert in Assembly. I know that CALL is used to call a sub-routine and JMP is used to jump to a particular address in the memory but what is the result of using CALL with JMP? Could anyone clarify on it? Even pointers to where I could find answers would be very helpful. Thanks. | Why is JMP used with CALL? | disassembly;malware;assembly | Seeing a call in the form CALL <JMP.&KERNEL32.SetUnhandledExceptionFilter> suggests that the binary was compiled with Visual C++'s /INCREMENTAL option, hence the table of jump thunks.... an incrementally linked executable (.exe) file or dynamic-link library (DLL):...May contain jump thunks to handle relocation of functions to new addresses.... |
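For readers who want to see the pattern itself, the disassembly of such a thunk typically looks something like this sketch (labels and layout invented for illustration): each CALL targets a local JMP stub that forwards through the import address table, so the linker only has to patch one place when the target moves:

```asm
; call site
call    j_SetUnhandledExceptionFilter        ; shown as CALL <JMP.&KERNEL32....>

; jump-thunk table emitted by the incremental linker
j_SetUnhandledExceptionFilter:
    jmp     dword ptr [__imp__SetUnhandledExceptionFilter]  ; indirect via the IAT
```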
_unix.385759 | I have an Xfce Debian system. The WiFi card on my laptop was not detected by the default network manager, but the wired connection worked perfectly. So I decided to install WICD and remove the default network manager, using the following:

apt-get install wicd
apt-get remove network-manager

Now, with WICD installed, nothing is working at all, neither the wired nor the wifi connection; it just shows a blank window (where the networks are supposed to be). | WICD shows nothing | debian;xfce;networkmanager;wicd | null
_unix.378431 | So I have installed Arch Linux on an HP Chromebook. I have tried everything to get the Audio on it to work. I only see 2 HDMI outputs when I run aplay -lI am running ArchLinux 2017.06.01 Kernel 4.11.9-1-ARCH.I have already installed linux-max98090 and it didn't help at all.Any ideas why it won't work? | Audio not working Arch Linux (CHROMEBOOK) | arch linux;audio;chrome book | Your link does not work, but if your Chromebook runs on Intel m5 (or any other Skylake) processor then you are out of luck, there is no Linux driver yet. |
_unix.23246 | Possible Duplicate: What is the difference between symbolic and hard links?

Today, my teacher talked about the differences between 'hard links' and 'soft links', but she never really explained why we need to use them, which basically made me curious about why there needs to be a hard link vs. soft link choice when it seems like they serve the same functionality. Aren't they both used as indirect links to a target file? I have never used the unix command line very much, so I guess I've never really been exposed to the commands in action.

Thanks :) | Hard links vs. Soft links: When would you want to use one over the other? | symlink;hard link | null
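One behavioural difference is easy to demonstrate in a throwaway directory: a hard link keeps the data alive after the original name is deleted, while a soft link is left dangling. A quick shell sketch:

```sh
echo "hello" > original.txt
ln  original.txt hard.txt     # hard link: a second name for the same inode
ln -s original.txt soft.txt   # soft link: a file that points at the *name*

rm original.txt
cat hard.txt    # still prints "hello": the data lives while any name remains
cat soft.txt    # fails: the symlink now points at a name that is gone
```

Soft links, on the other hand, can point across filesystems and at directories, which hard links generally cannot; that asymmetry is a large part of why both exist.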
_cstheory.3333 | Possible Duplicates:Do many-one reductions and Turing reductions define the same class NPCMany-one reductions vs. Turing reductions to define NPC Let $P,Q \subseteq \Sigma^*$ be languages such that $P$ and $Q$ are both in NP. Assume that $P$ is NP-complete under Karp reductions (there is a polynomial time many-one reduction from 3-SAT to $P$) and there is a Turing-reduction from $P$ to $Q$, i.e. there is a polynomial-time algorithm which decides $P$ when given oracle-access to $Q$. Intuitively this means that deciding $Q$ is as hard as deciding any problem in NP, while at the same time $Q$ is in NP, so Q cannot be ``harder'' than NP. Does this imply that $Q$ is NP-complete under Karp reductions, i.e. does this imply the existence of a polynomial-time many-one reduction from 3-SAT to Q? Any pointers are much appreciated. | Contained in NP and Turing-reduction from an NP-complete problem $\Rightarrow$ NP-complete under Karp reductions? | cc.complexity theory;open problem;reductions | null |
_webapps.43116 | I'm using the importrange() function in Google Sheets to reference data from 'spreadsheet1' and display it in 'spreadsheet2'. This works fine, but if I edit 'spreadsheet1' and then go to 'spreadsheet2', it's not automatically updated, nor does it update if I refresh the page. I've set my cache to be limited to 0 MB so it shouldn't be saving anything locally. Any ideas how I can get it to check back with its master spreadsheet more often and update its values accordingly? | Delay in updating referenced cells when using importrange in Google Sheets | google spreadsheets | null
_scicomp.7857 | In discussing smoothing filters, Numerical Recipes p. 772 says: "... irregularly sampled data, where the values $f_i$ are not uniformly spaced ... one can simply pretend that the data points are equally spaced ... a rough criterion: If the change in $f$ across the full width of the $N$ point window is less than $\sqrt{N/2}$ times the measurement noise on a single point, then the cheap method can be used."

Where does this come from? It looks odd: for $f(x) = \sin(2 \pi f x)$ on $[-N \dots N]$, a change in $f = 2 < \sqrt{N/2}\,\sigma$ would require bigger $N$ for smaller $\sigma$. Or have I misunderstood?

Also, does this work for smoothing data on non-uniformly spaced 2d squares, 3d cubes ...?

Added: https://gist.github.com/denis-bz/5957279 is a short Python program that tries to elucidate this, for quantization noise (sample at integers +- random-uniform(-1/2, 1/2)). | Error in treating non-uniformly spaced data as uniform | data analysis;signal processing | null
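For concreteness, the NR criterion as quoted can be checked numerically on any candidate window; a small Python sketch (this encodes my reading of the rule, so treat the exact form of the inequality as an assumption):

```python
import numpy as np

def cheap_smoothing_ok(f_window, sigma, n):
    """NR p. 772 rough criterion: treating irregular samples as uniform is OK
    if the change in f across the full N-point window is < sqrt(N/2) * noise."""
    delta_f = f_window.max() - f_window.min()   # change across the window
    return delta_f < np.sqrt(n / 2.0) * sigma

# Example: a slowly varying window of N = 16 samples, noise sigma = 0.1
x = np.linspace(0.0, 0.2, 16)
print(cheap_smoothing_ok(np.sin(2 * np.pi * x), 0.1, 16))
```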
_unix.227990 | I need to enable php-fpm in PHP 5.3.29 on CentOS 6. I get this error while running the ./configure script:

...
config.status: creating sapi/fpm/php-fpm.conf
config.status: creating sapi/fpm/init.d.php-fpm
config.status: creating sapi/fpm/php-fpm.service
config.status: creating sapi/fpm/php-fpm.8
config.status: creating sapi/fpm/status.html
config.status: error: cannot find input file: sapi/cgi/php-cgi.1.in

The ./configure command is:

./configure --prefix=/opt/php-5.3.29 \
--with-zlib-dir \
--with-freetype-dir \
--enable-mbstring \
--with-libxml-dir=/usr \
--enable-soap \
--enable-calendar \
--with-curl \
--with-mcrypt \
--with-zlib \
--with-gd \
--disable-rpath \
--enable-inline-optimization \
--with-bz2 \
--with-zlib \
--enable-sockets \
--enable-sysvsem \
--enable-sysvshm \
--enable-pcntl \
--enable-mbregex \
--with-mhash \
--enable-zip \
--with-pcre-regex \
--with-mysql \
--with-pdo-mysql \
--with-mysqli \
--with-png-dir=/usr \
--enable-gd-native-ttf \
--with-openssl \
--with-fpm-user=apache \
--with-fpm-group=apache \
--with-libdir=lib64 \
--enable-ftp \
--with-kerberos \
--with-gettext \
--with-gd \
--with-jpeg-dir=/usr/lib/ \
--enable-fpm | Can't compile php 5.3 on Centos 6 | centos;compiling;php | null
_webapps.104155 | For some reason, len (data.length) is 0 when there is only one row and jumps to 2 as soon as I add a row. I've checked to see if the function is counting one row above or below my intended range, and it is not. It counts the right number of 'Externals' as long as I have more than one row. I am unsure why this may be.

// Get data.length of the range passed in through the argument.
// Loop through the range based on the number of rows and count 'External'.
function count_external(data) {
    var len = data.length;
    var total_external = 0;
    var i = 0;
    for (i = 0; i < len; i++) {
        if (!data[i][0]) {
            total_external += 1;
        }
    }
    return total_external;
} | Why does this FOR Loop count 0 rows with one row passed in but counts 2 after adding a second row? | google spreadsheets;google apps script | null
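A likely explanation worth testing: when a custom function receives a single cell, Apps Script passes a scalar value rather than a 2-D array, so .length no longer counts rows. A common defensive pattern is to normalize the input first; a sketch (not verified against this exact sheet):

```javascript
function count_external(data) {
  // A single cell arrives as a scalar: wrap it back into a 2-D array.
  if (!Array.isArray(data)) {
    data = [[data]];
  }
  var total_external = 0;
  for (var i = 0; i < data.length; i++) {
    if (!data[i][0]) {
      total_external += 1;
    }
  }
  return total_external;
}
```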
_unix.353212 | Assuming that I have a file consisting only of characters and numbers (no special characters or punctuation marks), how can I swap two words that may or may not be right next to each other using Vim? I tried the following (swap all occurrences of Tom with Jerry), but it doesn't work, for obvious reasons:

:%s/Tom/Jerry/g
:%s/Jerry/Tom/g

Thank you! | Swap words in VIM without using any third party plugins | linux;vim;swap | Maybe you could use an intermediary variable, like this:

:%s/Tom/XX9G235a65/g
:%s/Jerry/Tom/g
:%s/XX9G235a65/Jerry/g
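Vim can also do the swap in a single pass, with no placeholder token, by using a Dict to map each match to its replacement; a sketch of the standard idiom:

```vim
:%s/\v(Tom|Jerry)/\={'Tom':'Jerry','Jerry':'Tom'}[submatch(0)]/g
```

The \= makes the replacement an expression, and submatch(0) is the text that was just matched, so each name is looked up in the dictionary and swapped for the other in one command.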
_softwareengineering.195308 | I'm working on a wishlist system, where users can add items to their various wishlists, and I plan to allow users to re-order the items later on. I am not really sure about the best way to go about storing this in a database while remaining fast and not turning in to a mess (this app will be used by a fairly large user base, so I don't want it to go down to clean up stuff).I initially tried a position column, but it seems like that would be quite inefficient having to change every other item's position value when you move them around.I have seen people using a self-reference to refer to the previous (or next) value, but again, it seems like you would have to update a whole lot of other items in the list.Another solution I've seen is using decimal numbers and just sticking items in the gaps between them, which seems like the best solution so far, but I'm sure there has to be a better way.I would say a typical list would contain up to about 20 or so items, and I will probably limit it to 50. The re-ordering would be using drag and drop and will probably be done in batches to prevent race conditions and such from the ajax requests. I'm using postgres (on heroku) if it matters.Does anyone have any ideas?Cheers for any help! | Storing a re-orderable list in a database | database;database design;mongodb;postgres | First, don't try to do anything clever with decimal numbers, because they'll spite you. REAL and DOUBLE PRECISION are inexact and may not properly represent what you put into them. NUMERIC is exact, but the right sequence of moves will run you out of precision and your implementation will break badly.Limiting moves to single ups and downs makes the whole operation very easy. For a list of sequentially-numbered items, you can move an item up by decrementing its position and incrementing the position number of whatever the previous decrement came up with. (In other words, item 5 would become 4 and what was item 4 becomes 5, effectively a swap as Morons described in his answer.) Moving it down would be the opposite. Index your table by whatever uniquely identifies a list and position and you can do it with two UPDATEs inside a transaction that will run very quickly. Unless your users are rearranging their lists at superhuman speeds, this isn't going to cause much of a load.Drag-and-drop moves (e.g., move item 6 to sit between items 9 and 10) are a little trickier and have to be done differently depending on whether the new position is above or below the old one. In the example above, you have to open up a hole by incrementing all positions greater than 9, updating item 6's position to be the new 10 and then decrementing the position of everything greater than 6 to fill in the vacated spot. With the same indexing I described before, this will be quick. You can actually make this go a bit faster than I described by minimizing the number of rows the transaction touches, but that's a microoptimization you don't need until you can prove there's a bottleneck.Either way, trying to outdo the database with a home-brewed, too-clever-by-half solution doesn't usually lead to success. Databases worth their salt have been carefully written to do these operations very, very quickly by people who are very, very good at it. |
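For the single up/down move the answer describes, the two UPDATEs inside a transaction might look like this sketch (the table and column names are illustrative assumptions):

```sql
-- Move the item currently at position 5 up one slot in list 42.
BEGIN;
UPDATE wishlist_items SET position = -1 WHERE list_id = 42 AND position = 5;
UPDATE wishlist_items SET position = 5  WHERE list_id = 42 AND position = 4;
UPDATE wishlist_items SET position = 4  WHERE list_id = 42 AND position = -1;
COMMIT;
```

Parking the moving row at a temporary position (-1) means the swap never violates a unique index on (list_id, position), which is a useful constraint to keep for data integrity.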
_webapps.14042 | I'd like to make an anonymous PayPal donation. Is it possible? If not, what if I make a PayPal payment using my credit card directly (i.e. not logged in with an account)? Does it show up as anonymous in that way? | Make a PayPal donation anonymously | paypal | null |
_unix.46809 | I know that I can use the -A and -B options of grep to get a lot of what I am looking for, however that is not quite what I want. I am looking to parse the httpd.conf file to search for a domain, then display everything between the VirtualHost tags for that domain. An example of the virtualhost is as follows. To search for a domain I run the following command:

less /usr/local/apache2/conf/httpd.conf | grep domain.tld

But that does not give me the full virtualhost, only the lines that contain the domain.

<VirtualHost 192.168.1.10:80>
    SSLEngine on
    SSLCACertificateFile /usr/share/ssl/certs/ca-bundle.crt
    SuexecUserGroup anzenketh wheel
    ServerName anzenketh.net
    ServerAlias www.anzenketh.net
    ServerAdmin [email protected]
    DocumentRoot /home/anzenketh/www/anzenketh.net
    ScriptAlias /cgi-bin/ /home/anzenketh/www/cgi-bin/
    <Directory /home/anzenketh/www/cgi-bin>
        AllowOverride None
        Options ExecCGI
        Order allow,deny
        Allow from all
    </Directory>
    CustomLog /var/log/httpd/anzenketh/anzenketh.net-access_log combined
    ErrorLog /var/log/httpd/anzenketh/anzenketh.net-error_log
</VirtualHost> | Display lines inbetween text with grep | text processing;grep | null
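With the answer field empty, here is one standard technique for the "everything between the tags" part: buffer each <VirtualHost> block in awk and print it only if the wanted domain appeared inside it. A sketch using the question's own domain and path:

```awk
# Usage: awk -v d="domain.tld" -f vhost.awk /usr/local/apache2/conf/httpd.conf
/<VirtualHost/    { buf = ""; keep = 0 }   # start a fresh block
                  { buf = buf $0 ORS }     # accumulate every line
$0 ~ d            { keep = 1 }             # remember whether the domain appeared
/<\/VirtualHost>/ { if (keep) printf "%s", buf }   # emit matching blocks whole
```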
_softwareengineering.39976 | I recently considered contributing to an open-core software (a product which core is released with an open-source license by the company that developed it, but it retained a paid license for more advanced functionality). But some questions occurred about the nature of working with an open-core project, and I would like the input of the community. My question is would you still be motivated to contribute if:most if not all the software was contributed by paid developers.you'd be benefiting the finances of one company (opposed to benefiting the community)the direction of the software would be determined by the company and not by you (or the community).I would like also for someone to confirm if those are myths or notimplementing the same features that are sold by the company won't be allowed.forking the project won't be allowed (it might depend on the open-source license).Thank you.ps: Could someone create an open-core tag? | Would you contribute to open-core software? | open source | The answers to your question will depend on the license the product is released under so it's impossible to give you a catch all answer but I think there are general principals you can look out for.If the core / open source components are released under an established open source license then the sort of restrictions you talk about won't exist. If it's under a variant license then they may be in place.Personally I'd only get involved in working on code that is covered by a recognised, unencumbered open source license. The company are getting free work done, the least they can do is not try and restrict how people use stuff they're being given for free.I'd also want to understand a bit more about who manages commits and how the overall direction is governed. You don't want to be working on something which ends up getting declined because the company who owns the product decides it's not compatible with their vision. That's not to say they shouldn't have a big say, just an overall direction and intent should be public.But I don't think there's anything wrong with the model if done in this way. You could make a case that Mac OS X is an example of this as it uses elements of the FreeBSD kernel which is obviously open source. |
_cs.71209 | I'm confused about how the bead-tiling algorithm for the Dubins' TSP from this article works: On the Dubins Traveling Salesperson Problems:novel approximation algorithmsKetan Savla, Emilio Frazzoli, Francesco Bullohttp://www-bcf.usc.edu/~ksavla/papers/KS-EF-FB:06h.pdf What does the algorithm return? Does it return this path that has been going over all (meta-)beads almost log2(n) times? Or only the final path obtained from the last recursive phase? a Dubins tour is constructed with the following properties: ... This process is iterated log2n times, ...Also, does this result path go back to the top left corner since it started there?If yes, why waste some length of our path to go back to the top, instead of going to the first node encountered to obtain our Dubins tour? ... it visits all (meta-bead) rows in sequence top-to-down, alternating between left-to-right and right-to-left passes, ...A minor issue is also the complexity stated. it states the computational complexity is of order n. I personally thought it to be so much more. The computational complexity of the RECURSIVE BEAD-TILING ALGORITHM is of order n.All in all, to express it in one simple question :What does the bead-tiling algorithm do? | Bead-tiling algorithm for Dubins' TSP | traveling salesman | null |
_unix.39278 | I have this bash script#!/bin/bashcat $@ | while read linedo for word in $line do echo $word | circling-the-square # here's where i need to add the if statement: #if the word contains one of the four [!?.,], #then also echo that punctuation mark donedonecircling-the-square is a Python script based on Norvig's spelling corrector.That script rids its input of punctuationdef words(text): return re.findall('[a-z]+', text.lower()) so I need bash to notice this. I guess sed or awk might be useful but I still don't know how to write that regex or put it in an if statement, so I'm asking this here.As is, passing the filealec@ROOROO:~/oddi-o/newton-fluxions$ cat 199 advertisement lately publijtid by the author, the british hemisphere, or a map of a new contrivance, proper for initiating young minds in the firft rudiments of geography, and the ufe of the globes.givesalec@ROOROO:~/oddi-o/newton-fluxions$ ./hmmb 199advertisementlatelypublijtidbytheauthorthebritishhemisphereoramapofanewcontrivanceproperforinitiatingyoungmindsinthefirstrudimentsofgeographyandthefewoftheglobes.Which isn't perfect but still useful. FYI, I have edited the file in question to contain only \w and the punctuation [!?.,]. The file does not contain characters like : or ;, so I just need it to echo these four punctuation marks if they are included as part of a word, viz.:alec@ROOROO:~/oddi-o/newton-fluxions/finforno$ ./hmmb 199advertisementlatelypublijtidbytheauthor,thebritishhemisphere,oramapofanewcontrivance,properforinitiatingyoungmindsinthefirftrudimentsofgeography,andtheufeoftheglobes. | How do I add an if statement (regarding punctuation in a word) to this bash script | bash;shell script;sed;awk | Use a regex as shown below. It finds words containing one or more of your specified punctuation marks and prints out the word and the first matching punctuation mark. You can extend it as you see fit.if [[ $word =~ ^.*([!?.,])+.*$ ]]then echo Found word: $word containing punctuation mark: ${BASH_REMATCH[1]}fi |
_unix.237851 | I am using the chef firewall cookbook in the following way:

firewall 'default'

firewall_rule 'ssh' do
  port 22
end

It configures firewalld internally by writing commands like this to /etc/sysconfig/firewalld-chef.rules:

firewall-cmd --direct --add-rule ipv4 filter INPUT 50 -p tcp -m tcp -m multiport --dports 22 -m comment --comment 'ssh' -j ACCEPT
firewall-cmd --direct --add-rule ipv6 filter INPUT 50 -p tcp -m tcp -m multiport --dports 22 -m comment --comment 'ssh' -j ACCEPT

However, after systemctl restart firewalld, these changes are not applied. I lack experience with Linux network configuration, so I don't know how to continue. Is there an easy way to make this file run when firewalld starts? Am I doing something wrong in my recipe? | How are chef firewall rules applied to firewalld in CentOS 7? | centos;firewalld;chef | The firewall cookbook is using /etc/sysconfig/firewalld-chef.rules to track the state of the firewall rules. It builds the list of commands it should run, and if the list is different from the contents of that file, it reapplies the whole file by clearing the ruleset and then running each command in the file. It doesn't necessarily run that file directly.

We had someone contribute the original logic in this recipe, so I couldn't tell you much more about the way they are using it, but running the :save action seems to be the intended way to ensure the rules are permanent. If you're trying to be sure the rules are applied on startup, try running the firewall resource's :save action after all of the firewall_rule(s) are applied.
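Following the maintainer's hint, the recipe could trigger the firewall resource's :save action once the rules have converged. A hedged Chef sketch; whether your cookbook version supports :save and this notification wiring is an assumption to verify against its documentation:

```ruby
firewall 'default' do
  action :install
end

firewall_rule 'ssh' do
  port 22
  # After this rule converges, ask the firewall resource to persist the ruleset
  # so it survives a firewalld restart.
  notifies :save, 'firewall[default]', :delayed
end
```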
_softwareengineering.118646 | A project I'm working on was hosted in an SVN repository at bettercodes.org. This was under another user's name, and that repository is now unavailable. The person responsible for the project was sent an export from Eclipse, and I had a more or less up-to-date copy of the project on my machine. I could see no other option than to commit my copy of the project to another repository. It is now possible that some of the most recent changes by another programmer are missing. Now, what options do I have to bring this project together? Is some of the necessary metadata saved in Eclipse, or are all the changes on the original repository? What tools could I use to check what is missing in terms of classes, config files and code? | What options are there if an SVN repository becomes unavailable? | svn;repository | What kind of project is this? Why is the repo unavailable? In the case of SVN, all the changes and diffs are in the server repository and not on the clients. You're going to somehow need to get access to the original repo if you want all the other data. Some of the distributed version control systems, like Git and Mercurial, store all the metadata on all machines using the repository. In the future, it might be a wise idea to use one of those. Or, use an SVN server under your direct control.
_webmaster.90084 | I have noticed a lot of spam referral traffic to my website in Google Analytics. I have tried to block it with filters, but that did not help. I know there is also a way to block it with .htaccess, but is there any other way to block it within Google Analytics itself? Looking forward to a good solution. Thanks in advance. | How to Block Spam Referral Traffic in Google Analytics? | google analytics;googlebot | null
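For reference, server-side referrer blocking in .htaccess usually looks something like this (the domain names are placeholders - substitute the actual spam referrers from your reports):

RewriteEngine On
# Placeholder domains - replace with the spam referrers you see
RewriteCond %{HTTP_REFERER} spam-example-one\.com [NC,OR]
RewriteCond %{HTTP_REFERER} spam-example-two\.com [NC]
RewriteRule .* - [F]

Note that this only affects bots that actually hit your server; "ghost" referrals injected directly into Google Analytics never touch your site, which is why Analytics-side filters (for example, a valid-hostname filter) are the usual complement to .htaccess rules.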
_softwareengineering.294828 | Trying to clearly state the semantics of a function call: in calling a function, are the arguments passed to the function the ones the calling code initially gives, or the ones the function receives? With code like below, the calling code in bar() calls foo() twice - the first time with int 2, then...

1) Function foo() is called with a double 3.1, and x has the converted value of int 3, or
2) The value double 3.1 is converted to int 3 and then function foo() is called, or
3) Something else.

In other words, is the conversion of double 3.1 to int 3 part of the call of a function (the conversion would not happen without the function call), or is it a preceding activity (considered part of the calling code) before the function call?

void foo(int x);

void bar() {
    foo(2);
    foo(3.1);
}

This query is primarily about C, yet language-agnostic thoughts are appreciated. A quick answer is not needed.

[Edit]
Note: This is not a question of how platforms create the binary/executable to implement the program code - just about the code/language.

[Edit 2]
@Erik Eidt's useful comment provided better words to use (at least for C) for this question. Perhaps a more succinct question would be, from the language perspective: are functions called with actual arguments or formal parameters? C11 3.3 "actual arguments" and 3.16 1 "formal parameter". | When is an object passed to a function? | programming languages;c;parameters;semantics | null
_unix.84742 | I have Fedora 19 running Gnome 3 Shell installed on my laptop, and in a virtual machine running inside VMware Player on my desktop. The VM has the proprietary VMware tools and is up to date as of today. On the laptop I am used to opening the menu by poking the corner with the mouse. However, I am unable to get it to work in the VM. I have tried putting the VM into fullscreen and also installing an extension for Gnome that allowed me to configure things like the hot corner threshold. However, they did not seem to have any effect. Is there something about the fact that I run it inside a VM that keeps me from being able to use the hot corner? | Can I use Gnome 3 Shell Activities hot corner in VMWare Player VM? | gnome3;gnome shell | null
_webmaster.28637 | My setup consists of:

- a XAMPP 1.7.7 installation, with Apache configured to listen on port 8080
- dyn.com to make the projects accessible from the web (8080 shows up as 'open' when checking with http://www.yougetsignal.com/tools/open-ports)
- various PHP/Ajax/etc. projects stored in folders inside xampp/htdocs. Ex: xampp/htdocs/crypt
- the xampp folder, phpMyAdmin and MySQL are password-protected

When trying to access any of the projects using dyn.dns URLs (http://mydomain.dyndns-ip.com/crypt), I get a web page with something like "Object Not Found - The requested URL '/crypt' was not found on the RomPager server." Adding the port number (http://mydomain.dyndns-ip.com:8080/crypt) results in "The connection has timed out" pages. However, all the projects work when loaded with a localhost URL (http://localhost:8080/crypt/).

Other issues:

- accessing the root dyn.com URL asks me to log in to my router
- when clicking the Admin button for Apache on the XAMPP control panel, it opens http://localhost/xampp/ - instead of http://localhost:8080/xampp/

I think this might have something to do with password-protecting the xampp folder, but there doesn't seem to be any automatic way of undoing the security changes, and I don't know how to do that manually. Any pointers in troubleshooting this issue would be appreciated. | xampp not working with DynDNS | apache;dns;localhost;xampp | null
_softwareengineering.126986 | Using C#, I have been doing multithreaded development for about 5 years, and consider myself quite proficient (I wrote my own lock-free queue and task parallel framework before Microsoft made TPF). However, I find it incredibly difficult to find information on practical multithreaded system design patterns anywhere. There are some good resources on low-level algorithms and collections, but not much on system design. So, to the question: does anyone know where this information can be found? | where can I find an overview of known multithreading systems architectures design patterns? | design patterns;multithreading;system | Parallel Programming with Microsoft .NET: Design Patterns for Decomposition and Coordination on Multicore Architectures. This is a book I recommend wholeheartedly. It is:

- New - published last year, which means you are not reading somewhat outdated practices.
- Short - about 200+ pages, dense with information. These days there is too much to read and too little time to read 1000+ page books.
- Easy to read - not only is it very well written, but it introduces hard-to-grasp concepts in a really simple-to-read way.
- Intended to teach - each chapter gives exercises to do. I know it is always beneficial to do these, but rarely do. This book gives very compelling and interesting tasks. Surprisingly, I did most of them and enjoyed doing them.
_unix.52400 | There are eBook readers for Android, there's Okular for KDE, and stuff like that, but what I want is an eBook (ePub format) reader for my normal Linux desktop. I know there's Calibre, which goes way beyond being just an eBook reader, and there's FBReader, which doesn't really work as of yet. Given that eBooks have been around for several years now, I'd assume more software would've sprung up by now. | Recommendation for an eBook reader for Gnome | gnome;software rec;epub;ebooks | A couple of others are Cool Reader and AZARDI. Lucidor was another, but development stopped and the website went down (although you can still find the debs, e.g. here). In my opinion AZARDI is the best of these. Update: Lucidor seems to be back in development and its website is back online.
_unix.364383 | I came across confusing variation in the understanding of what options and arguments are with regard to the syntax of commands. For instance, I encountered definitions like:

command -a -b -c d e f

Some differentiate between -a -b -c, calling them options or switches, and d e f, calling them arguments.

command -a -b -c d e f

Some, for instance a bash manual, call all of -a -b -c d e f arguments, and explain that all of them are accessible from a script as $1 $2 $3 $4 $5 $6 respectively.

command -a b=c

Some call -a an option, b an argument and c the value, but others mix them like in the first two points, in one variety calling all of -a b c arguments.

Those three versions are only examples of a plethora of different calling varieties; I do not even know how to list them all, but I noticed that for sure there is no fixed naming convention. Or at least, there is no standardised naming convention I know about, because I came across different random sources, and even among official Linux- and GNU-affiliated sites or manuals I could meet this inconsistency. Is there an unquestionable official naming scheme I can refer to? | Confusion about changing meaning of arguments and options, is there an official standard definition? | command line;gnu;standard | Adapted from the POSIX standard's Utility Argument Syntax section:

utility_name [-a] [-b] [-c option_argument] [-d|-e] [-f[option_argument]] [operand...]

The utility in the example is named utility_name. It is followed by options, option-arguments, and operands. The arguments that consist of - characters and single letters or digits, such as a, are known as options (or, historically, flags). Certain options are followed by an option-argument, as shown with [-c option_argument]. The arguments following the last options and option-arguments are named operands.

The standard also defines "argument" as: "In the shell command language, a parameter passed to a utility as the equivalent of a single string in the argv array created by one of the exec functions. An argument is one of the options, option-arguments, or operands following the command name."

All things after the utility_name on the command line are the utility's arguments, and they all show up in the positional parameters if it's a shell script. The terms option, option-argument, and operand are more specific names for these arguments on the command line. "Flag" and "switch" are common synonyms for "option".

In the case of

utility -a b=c

- -a and b=c are arguments,
- -a is an option if the utility recognises it as such (the ln utility has no -x option, so -x is not an option to ln, strictly speaking, and ln -x would trigger a diagnostic message),
- b=c is an option-argument if the -a option takes an argument, otherwise it's an operand,
- b and c are not options, not option-arguments, and not operands in themselves.

As you notice from my text above, working from the synopsis of a utility (as given by the manual of the utility) would have been easier than trying to decode a generic command typed on the command line. The manual will clearly state which options take option-arguments and which arguments are operands etc. To call c a "value" is IMHO perfectly ok. It's not something that is standardised, but very few would misunderstand you if you say "c is the value given to b". It would be clear from the context of the utility in question. For example:

$ awk -v var=d '...' data.in

Anyone who knows about awk would say that -v var=d means the awk variable var is assigned the value d on the command line.
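To see the terminology in action, a small illustrative POSIX sh sketch (not taken from the standard itself): -a and -b are options, -c takes an option-argument, and whatever remains after option parsing are the operands.

#!/bin/sh
while getopts "abc:" opt; do
  case $opt in
    a) echo "option -a set" ;;
    b) echo "option -b set" ;;
    c) echo "option -c with option-argument '$OPTARG'" ;;
  esac
done
shift $((OPTIND - 1))
echo "operands: $*"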
_unix.328110 | sync is one of the user accounts created by Debian itself. I'm wondering why Debian sets its login shell to /bin/sync instead of /bin/false. How does Debian use this user account? | Why does Debian set the login shell of user sync to /bin/sync? | debian;users;passwd | This is documented in /usr/share/doc/base-passwd/users-and-groups.txt.gz:

"sync: The shell of user sync is /bin/sync. Thus, if its password is set to something easy to guess (such as ""), anyone can sync the system at the console even if they have no account on the system."

This is really a historical artifact; I wouldn't expect a sync user to be set up in this way nowadays. In the past it would be useful to have such a user so that people with physical access to a console (e.g. in a server room or a lab full of workstations, as you'd find in universities) could reduce the risk of data loss when shutting down a system (to recover from a rogue process or simply to use the workstation, if it had been left locked by its previous user). Unix systems before Debian tended to have a sync user and a shutdown user with which you could actually shut a system down properly without knowing the root password. (On our Sun SPARCstations we'd just press Stop+A and type boot...) It's worth noting, as Peter Cordes mentioned, that other mechanisms are available on many systems to ensure safe shutdowns or reboots from a console without being able to authenticate as root: ACPI events triggered by pressing the power switch (which lead to a clean shutdown), or Ctrl+Alt+Del (which leads to a clean reboot). Alt+SysRq can be used as a last resort to sync, kill, unmount and reboot, but it's not a clean reboot. As mentioned by JdeBP, having a sync user is a very old idea, dating back at least to the early 1980s.
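For reference, the stock /etc/passwd entry for this account on Debian looks roughly like this (the exact UID/GID values are from memory and may differ between releases):

sync:x:4:65534:sync:/bin:/bin/sync

The last field is the login shell, so "logging in" as sync simply runs /bin/sync and exits.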
_codereview.136078 | Working from 'Programming in C' by Kochan, I'm on an exercise in the chapter 'Pointers'. This was the exercise:

"Write a function called insertEntry() to insert a new entry into a linked list. Have the procedure take as arguments a pointer to the list entry to be inserted (of type struct entry as defined in the chapter), and a pointer to an element after which the new entry is to be inserted."

I've been struggling through this book, but this exercise only took me a few minutes, so I'm concerned I'm missing the point. Can you please make some suggestions regarding whether I've gone wrong? It compiles and runs fine. Could I have done this better?

#include <stdio.h>

struct entry {
    int value;
    struct entry *next;
};

void insertEntry(struct entry *addOn, struct entry *element);

int main (void)
{
    struct entry n1, n2, n3, addOn;
    struct entry *list_pointer = &n1;

    n1.value = 100;
    n1.next = &n2;
    n2.value = 200;
    n2.next = &n3;
    n3.value = 300;
    n3.next = (struct entry *) 0;

    while (list_pointer != (struct entry *) 0) {
        printf("%i\n", list_pointer->value);
        list_pointer = list_pointer->next;
    }

    list_pointer = &n1;
    insertEntry(&addOn, &n3);

    while (list_pointer != (struct entry *) 0) {
        printf("%i\n", list_pointer->value);
        list_pointer = list_pointer->next;
    }

    return 0;
}

void insertEntry(struct entry *addOn, struct entry *element)
{
    element->next = addOn;
    addOn->value = 400;
    addOn->next = (struct entry *) 0;
} | Exercise to create an insertEntry function for a linked list | c;linked list;pointers | I'm not sure if your insertEntry function is correct. It seems to be hardcoded to add an entry at the end of the linked list; you want to be able to add an entry anywhere (except at the start of the list, which is the object of the next exercise in the book). Here's my solution to this exercise.

/* Exercise 10.2
   Write a function called insertEntry() to insert a new entry into a
   linked list. Have the procedure take as arguments a pointer to the
   list entry to be inserted (of type struct entry as defined in this
   chapter), and a pointer to an element in the list *after* which the
   new entry is to be inserted.
   note: inserts n2_5 after n2
*/
#include <stdio.h>

struct entry
{
    int value;
    struct entry *next;
};

void insertEntry (struct entry *new, struct entry *follow)
{
    new->next = follow->next;
    follow->next = new;
}

int main (void)
{
    void insertEntry (struct entry *new, struct entry *follow);
    struct entry n1, n2, n3, n4, n5, n2_5, *listPtr;

    n1.value = 100;
    n1.next = &n2;
    n2.value = 200;
    n2.next = &n3;
    n3.value = 300;
    n3.next = &n4;
    n4.value = 400;
    n4.next = &n5;
    n5.value = 500;
    n5.next = (struct entry *) 0;

    printf ("\nlinked list: ");
    listPtr = &n1;
    while ( listPtr != (struct entry *) 0 ) {
        printf ("%i ", listPtr->value);
        listPtr = listPtr->next;
    }
    printf ("\n");

    // insert new entry
    n2_5.value = 250;
    printf ("inserting new entry %i ...\n", n2_5.value);
    insertEntry (&n2_5, &n2);

    printf ("linked list: ");
    listPtr = &n1;
    while ( listPtr != (struct entry *) 0 ) {
        printf ("%i ", listPtr->value);
        listPtr = listPtr->next;
    }
    printf ("\n");

    return 0;
}
_codereview.73427 | Given the following definition of Cons:

data Cons a = Cons a (Cons a) | Empty deriving Show

I implemented a flatten function. It's meant to work exactly like concat :: [[a]] -> [a], but for Cons.

flatten :: Cons (Cons a) -> Cons a
flatten Empty = Empty
flatten (Cons (Empty) ys) = flatten ys
flatten (Cons (Cons x xs) ys) = Cons x (flatten (Cons xs ys))

Test data:

test1 :: Cons (Cons Int)
test1 = Cons (Cons 5 Empty) Empty

test2 :: Cons (Cons Int)
test2 = Cons (Cons 5 (Cons 10 Empty)) Empty

test3 :: Cons (Cons Int)
test3 = Cons (Cons 5 (Cons 10 (Cons 20 Empty))) test2

Some tests:

ghci> flatten test1
Cons 5 Empty
ghci> flatten test2
Cons 5 (Cons 10 Empty)
ghci> flatten test3
Cons 5 (Cons 10 (Cons 20 (Cons 5 (Cons 10 Empty)))) | Implement `flatten` for `Cons` | haskell | null
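One possible refactoring to consider, sketched (behaviour should match the original, though it is untested here): factor out an append for Cons and define flatten as repeated appending, mirroring how concat is foldr (++) [] for lists.

-- append joins two Cons lists end to end
append :: Cons a -> Cons a -> Cons a
append Empty ys       = ys
append (Cons x xs) ys = Cons x (append xs ys)

-- flatten' appends each inner list onto the flattened remainder
flatten' :: Cons (Cons a) -> Cons a
flatten' Empty         = Empty
flatten' (Cons xs xss) = append xs (flatten' xss)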
_unix.365846 | I am prototyping Apache Flume. My task is to transfer text file contents between two Ubuntu VMs I set up using VirtualBox. I have almost no knowledge of Flume, although I have been going through its documentation. With VirtualBox I was able to create an internal network, and the two VMs are successfully able to ping each other. I was also able to download and configure Flume as shown here: https://cwiki.apache.org/confluence/display/FLUME/Getting+Started - my point being that Flume is installed and works on both VMs (or so I think). This post gives me a slight idea: https://stackoverflow.com/questions/19112465/flume-data-transferring-to-server - so, do I implement exactly that flume.conf file on each VM? And do I need to write Java to do this? Also, I found a video; around the 18-minute mark the presenter does almost exactly what I want. However, I would like the input to be a text file. So, could you tell me step by step (I am a beginner) how I would go about doing this? Or point me to any useful tutorials. Thank you! | Apache Flume to transfer text file contents between two Ubuntu VMs | ubuntu;virtualbox;virtual machine | I solved this problem by using the sink type file_roll. I also had to identify the second VM's IP so the two could connect. I found the step-by-step instructions on how to do this at this link: http://thisandthat.io/blog/flume-part3/ - also, you do not need to write any Java to do this. However, as is standard with Flume, ensure your JAVA_HOME is declared properly in the flume-env.sh file. All other parts are handled by the .conf file created (as described in the above link).
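For anyone following along, a minimal receiving-agent configuration in that spirit could look like this (agent name, port and paths are placeholders, not taken from the original setup): an avro source listens on the network and a file_roll sink writes the received events to local files.

# agent "a1" on the receiving VM
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# listen for events sent from the other VM
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4545

# write incoming events to rolling files on disk
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /tmp/flume-out

# simple in-memory channel wiring source to sink
a1.channels.c1.type = memory
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1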
_webmaster.17793 | The below is a part of traceroute to my hosted server:

 9  ae-2-2.ebr2.dallas1.level3.net (4.69.132.106)  19.433 ms  19.599 ms  19.275 ms
10  ae-72-72.csw2.dallas1.level3.net (4.69.151.141)  19.496 ms ae-82-82.csw3.dallas1.level3.net (4.69.151.153)  19.630 ms ae-62-62.csw1.dallas1.level3.net (4.69.151.129)  19.518 ms
11  ae-3-80.edge4.dallas3.level3.net (4.69.145.141)  19.659 ms ae-2-70.edge4.dallas3.level3.net (4.69.145.77)  90.610 ms ae-4-90.edge4.dallas3.level3.net (4.69.145.205)  19.658 ms
12  the-planet.edge4.dallas3.level3.net (4.59.32.30)  19.905 ms  19.519 ms  19.688 ms
13  te9-2.dsr01.dllstx3.networklayer.com (70.87.253.14)  40.037 ms  24.063 ms te2-4.dsr02.dllstx3.networklayer.com (70.87.255.46)  28.605 ms
14  * * *
15  * * *
16  zyzzyva.site5.com (174.122.37.66)  20.414 ms  20.603 ms  20.467 ms

What's the meaning of lines 14 and 15? Information hidden? | What does an asterisk/star in traceroute mean? | dns;ip address | "If a packet is not acknowledged within the expected timeout, an asterisk is displayed." From http://en.wikipedia.org/wiki/Traceroute - however, zyzzyva.site5.com did eventually respond, which is why you have line 16.
_codereview.83225 | I've written a function that works; however, I'm sure there is a better way. I need to parse specific tags from an xml document (Microsoft Word docx document.xml). Here is the general structure of the xml in question:

//A ton of crap...
<w:tbl>
  <w:tr>
    <w:tc>
      <w:p>
        <w:r>
          <w:t>Data_I_want</w:t>
        </w:r>
      </w:p>
    </w:tc>
  </w:tr>
  <w:tr>
    <w:tc>
      <w:p>
        <w:r>
          <w:t>Data_I_want</w:t>
        </w:r>
      </w:p>
    </w:tc>
  </w:tr>
</w:tbl>
...// A ton more crap
// Same structure repeats and I need to grab that n number of times where n is unknown.
// Also the order of the data must be preserved within each parent tbl tag.

Here is an excerpt of my code:

def recurse_search_tables(self, xml_data):
    """Recursively traverse the xml child nodes.."""
    tag_base = r'{http://schemas.openxmlformats.org/wordprocessingml/2006/main}'
    for child in xml_data:
        if child.tag.replace(tag_base, '') == 'tbl':
            for c in child:
                if c.tag.replace(tag_base, '') == 'tr':
                    for tr in c:
                        if tr.tag.replace(tag_base, '') == 'tc':
                            for tc in tr:
                                if tc.tag.replace(tag_base, '') == 'p':
                                    for p in tc:
                                        if p.tag.replace(tag_base, '') == 'r':
                                            for r in p:
                                                if r.tag.replace(tag_base, '') == 't':
                                                    try:
                                                        self.decide_to_print(r.text.encode('UTF-8'))
                                                    except:
                                                        pass
                                                    finally:
                                                        self.recurse_search_tables(child)
                                                else:
                                                    self.recurse_search_tables(child)
                                        else:
                                            self.recurse_search_tables(child)
                                else:
                                    self.recurse_search_tables(child)
                        else:
                            self.recurse_search_tables(child)
                else:
                    self.recurse_search_tables(child)
        else:
            self.recurse_search_tables(child)

Calling that mess via:

tree = ET.parse('document.xml')
root = tree.getroot()
self.recurse_search_tables(root)

Now, like I said, this code works; however, it isn't the fastest (but it is suitable). How can I improve this? | Parsing specifically nested XML Tags | python;parsing;xml | Thanks to Ferada for the hint. Here is what I was able to replace that monster with. Execution time went from more than 590 seconds to less than 2 seconds. Note: all of the string replacements were another performance pain point; using a single string is much quicker here.

def recurse_search_tables(self, root):
    # Get individual tables and their children nodes.
    for table in root.iter(r'{http://schemas.openxmlformats.org/wordprocessingml/2006/main}tbl'):
        for t in table.iter(r'{http://schemas.openxmlformats.org/wordprocessingml/2006/main}t'):
            self.decide_to_print(t.text.encode('UTF-8'))
_unix.231286 | I have the following file structure:

cwd/
---dir1
   ---file_name1
   ---some_file_name
---dir2
   ---file_name1
   ---some_other_file_name
---some_file

I want to get a zip file such that when I unzip it, I get the cwd directory and NOT the pwd directory structure. So according to the man page, I need to use the -j flag. But when I use it in this case (on OS X), I get an error because the same file name exists in two different directories (file_name1 in the example):

zip error: Invalid command arguments (cannot repeat names in zip file).
first full name...
second full name...
this may be a result of using -j

According to this, it seems nothing can be done and this is just how zip -j works. How can I still achieve the requirement of compressing the zip without the default pwd file structure? (I cannot rename the files - there is a reason why I use zip via the shell from the beginning, etc...) Thanks, | Unix zip command junk-paths flag not working on same file names | bash;shell script;zip | null
_computergraphics.271 | Especially when rendering particle effects, the same object needs to be rendered several times with slightly modified properties. But these changes are often limited to properties like pose or textures, and are not in the geometry of the object itself. A different pose (translation, rotation) is merely a matrix multiplication with the vertices, and a different texture with an applied texture atlas is merely different texture coordinates. Nevertheless, each of the particles needs to be drawn, and I currently do that by calling glDrawElements for every particle, while setting uniforms appropriately. Until now this has been sufficient for rendering the scene in real time, but for more complex scenes, or simply for more particles emitted by the particle system, this could easily lead to dropping frame rates. So, is there a way to reduce the number of draw calls for very similar objects? | How to reduce the number of draw calls when rendering one object multiple times? | opengl;performance;real time;mobile;opengl es | There are many, many ways to draw things in OpenGL, so this is naturally confusing sometimes. The first method you describe, setting the shader parameters and issuing one draw call per object, is usually the most inefficient, due to the high API overhead. The second one, using instanced drawing, is a much smarter approach for objects with the same parameters. When it comes to particles, specifically, there are two other methods which I'm aware of and have tested. The first one, more traditional and easier to implement, is to generate a unique quadrilateral for each particle in the application code. Then use one of the several optimized buffer streaming paths of OpenGL to upload this data and issue a single draw call. This is the most straightforward method and provides good results. It will involve very few API calls if you can map the vertex/index buffers (glMapBuffer/MapBufferRange). Another method is to move the whole thing to a shader program, using Transform Feedback. This method is a little more complicated to get up and running, but you can find a lot of references on the subject, such as this tutorial. This one should be the optimal path, since it moves the whole simulation to the GPU. Those are some of the optimized ways of rendering particle effects. But OpenGL provides several other rendering paths that are better for different cases; one such is indirect draw, which isn't available on ES at the moment, but is probably one of the fastest drawing paths available on modern PC OpenGL. Transform Feedback also requires a geometry shader, so it is not available for current OpenGL-ES. For more on the optimized rendering paths of OpenGL, I recommend watching this very good presentation by NVidia: Approaching Zero Driver Overhead in OpenGL. At the end of the talk, they show a very interesting benchmark of several methods and how they compare with the Direct3D equivalents.
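To make the instanced path concrete, a minimal C sketch (desktop GL 3.3 or OpenGL ES 3.0; buffer creation and shader setup are omitted, and attribute index 3 is an arbitrary choice for the per-particle data):

/* One buffer holds the shared particle quad; instanceVBO holds one
   vec3 position per particle and advances once per instance. */
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(3);
glVertexAttribDivisor(3, 1);   /* attribute 3 steps per instance, not per vertex */

/* Draw every particle with a single call. */
glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0, particleCount);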
_cs.57093 | I'm interested in implementing an algorithm detailed on this Wiki page for finding the longest path of a DAG.The second part of the algorithm says the length of the longest path of a node with no outgoing edges should be zero, but my programming professor says this node should return 1 as its longest path length. (Perhaps he says this assuming each node's length is set to 0 and a recursive function that visits each node will add 1 to whichever child has the longest path.)I think that if someone can answer the question: what is the length of the longest path of a DAG containing a single node, then I will have an answer to my question. Though, I'm not sure if a single-node graph constitutes a DAG.As you can probably tell, I'm new to graphs, so any guidance is appreciated. | Longest path of a DAG containing a single node? | graphs | You have not misunderstood anything, there is simply a discrepancy of notation: sometimes the length of a path is measured as its number of edges, sometimes as its number of nodes, depending on context. In your implementation, it doesn't matter which convention you use, as long as you are consistent throughout your algorithm. Then if you use the second convention and your algorithm returns $3$ for a particular vertex, it means the longest path to that vertex has three nodes: $abv$. If you use the first convention (which is what this Wikipedia article uses, but not your professor), the algorithm should return $2$, because the longest path has two edges.If this is an assignment, and your professor uses the convention that paths are measured by their vertices, and says that you should too, then you should too. |
_webmaster.38945 | There is something I don't understand in Google Webmaster Tools (GWT) for my WordPress site. In the "Blocked URLs" menu, it mentions that my robots.txt has never been downloaded, but there are some blocked URLs. It's kind of weird and not logical. Am I missing something?

User-agent : *
Disallow: /*?
Disallow: /wp-login.php
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content
Allow: /wp-content/uploads
Disallow: */trackback
Disallow: /*/feed
Disallow: /*/comments
Disallow: /cgi-bin
Disallow: /*.php$
Disallow: /*.inc$
Disallow: /*.gz$
Disallow: /*.cgi$
Disallow: /author/*

I'm afraid my robots.txt doesn't block several URLs I want to block. Edit: (screenshot omitted) | Robots.txt never downloaded but some blocked URLs in GWT | google search console;robots.txt | The problem comes from the encoding of the robots.txt file. UTF-8 encoding is recommended by Google.
_computergraphics.2488 | I am going through an undocumented function that takes in three points and a Z-value and projects the three points onto the plane defined by that Z-value. I want to understand the mathematical theory behind it, and I need your help to decode the function:

SlicerSegment project2D(Point3& p0, Point3& p1, Point3& p2, int32_t z) const
{
    SlicerSegment seg;
    seg.start.X = p0.x + int64_t(p1.x - p0.x) * int64_t(z - p0.z) / int64_t(p1.z - p0.z);
    seg.start.Y = p0.y + int64_t(p1.y - p0.y) * int64_t(z - p0.z) / int64_t(p1.z - p0.z);
    seg.end.X = p0.x + int64_t(p2.x - p0.x) * int64_t(z - p0.z) / int64_t(p2.z - p0.z);
    seg.end.Y = p0.y + int64_t(p2.y - p0.y) * int64_t(z - p0.z) / int64_t(p2.z - p0.z);
    return seg;
} | 2D projection from some points | projections | null
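Reading the arithmetic directly from the function: each output coordinate is a linear interpolation along a triangle edge (p0 to p1 for the segment start, p0 to p2 for the segment end), evaluated where that edge crosses the plane of constant z. In LaTeX form:

\[
t = \frac{z - z_0}{z_1 - z_0}, \qquad
x = x_0 + t\,(x_1 - x_0), \qquad
y = y_0 + t\,(y_1 - y_0)
\]

The int64_t casts keep the intermediate product from overflowing 32-bit coordinates before the division.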
_codereview.24144 | I have a pipeline with loops of filters and one input filter. The rest are splitters, transforms and outputs. I would like my code to go over the filters and push them into a queue (order is important). However, if I have a loop, I only want to count the end of the loop once. Here is my working code. Please review it.

queue<string> CEngineInternal::GetFiltersListToRun()
{
    queue<string> filtersBFS;
    queue<string> filtersToRun;
    set<string> alreadyFoundFilters;

    // add input filters to the list
    for(BaseFilterMap::const_iterator it = m_Filters.begin(); it != m_Filters.end(); it++)
    {
        string FilterName = (*it).first;
        const CBaseFilter *filter = (*it).second;
        if(filter->IsInputFilter())
        {
            //push the input filter
            filtersBFS.push(FilterName);
            filtersToRun.push(FilterName);
            alreadyFoundFilters.insert(FilterName);
        }
    }

    while (!filtersBFS.empty())
    {
        string filterName = filtersBFS.front();
        filtersBFS.pop();

        //get the OutputPin connections for the next filter
        FilterConnection* filterConnection = m_Connections[filterName];
        ASSERT(filterConnection != NULL);
        PinConnectionVector& outPinConnections = filterConnection->GetOutputPinConnections();
        if (outPinConnections.size() != 0)
        {
            // find all the output pin connections
            for (size_t i = 0; i < outPinConnections.size(); ++i)
            {
                //add connected filters id's to the list
                const std::string& oFilterName = outPinConnections[i].ConnectedFilterName();
                if (oFilterName.empty())
                {
                    continue;
                }
                if (alreadyFoundFilters.find(oFilterName) != alreadyFoundFilters.end())
                {
                    continue;
                }
                else
                {
                    filtersBFS.push(oFilterName);
                    filtersToRun.push(oFilterName);
                    alreadyFoundFilters.insert(oFilterName);
                }
            }
        }
    }

    return filtersToRun;
} | BFS for creating a queue without repetitions and with loops in the graph | c++;queue;breadth first search | As far as I can tell, this code seems pretty easy to follow, especially with the comments. Here are several things that stood out to me:

- Since you haven't provided your own queue implementation, I assume you're using std::queue. If so, remove "using namespace std" and use std:: where necessary.

- This entire method should be const. You're modifying three local containers and returning one, while not modifying any class members. By making this const, data members will stay immutable (modifications will cause compiler errors) and the reader will be aware of this.

queue<string> CEngineInternal::GetFiltersListToRun() const

- filtersBFS is not a very descriptive name; we know this program is about BFS. It's also not the local queue that is returned, so it is unclear what this is used for within the function.

- Your functions start with a capital letter, which is not proper naming in C++. Only user-defined types should be named this way. Your variables (except for FilterName) are named correctly, however. Both variables and functions should start with a lowercase letter.

- For this line:

for(BaseFilterMap::const_iterator it = m_Filters.begin(); it != m_Filters.end(); it++)

It may be more readable to declare the iterator type before the loop. In addition, it may be better to use pre-increment since you're not dealing with basic types. This could also potentially avoid an extra copy.

BaseFilterMap::const_iterator it;
for (it = m_Filters.begin(); it != m_Filters.end(); ++it) {}

However, if you're using C++11, consider auto instead of your current iterator. The compiler will determine the correct type, and it's more readable. You may also keep it inside the loop statement.

for (auto it = m_Filters.begin(); it != m_Filters.end(); ++it) {}

Better yet, if you're using C++11, consider a range-based for-loop:

for (auto& it : m_Filters) {}

- This:

if (outPinConnections.size() != 0)

should instead use !empty():

if (!outPinConnections.empty())

- For here:

for (size_t i = 0; i < outPinConnections.size(); ++i) {}

If this is the size type returned by size(), make it std::size_t since this is C++. Otherwise, make sure you're using the correct size type to avoid potential loss-of-data warnings.

- These:

(*it).first;
(*it).second;

should use the more readable -> operator:

it->first;
it->second;

- You say: "If I have a loop, I only want to count the end of the loop once." But I don't see that anywhere in the code, plus there are multiple loops used here. If you're still receiving the correct output and no errors associated with them, then they should be okay.
_unix.258499 | Logged in as user1, I'd like to project to another unix user's env, let's say user2, without typing its password. By "project", I mean: change the current $HOME, and call user2's bash startup scripts (to get its prompt and global vars). What I've written is a little script switch_user_env.sh:

HOME=/usr/users/$1
cd $HOME
bash
. /etc/profile
if [ -f $HOME/.bash_profile ]; then
    . $HOME/.bash_profile
fi
if [ -f $HOME/.profile ]; then
    . $HOME/.profile
fi

I call it like the following: . ./switch_user_env.sh user2 - the current shell gets correctly changed to bash, but the startup scripts are not called. Can you help me to understand what is wrong? Thanks! [EDIT] switch_user_env.sh source code is available on GitHub: https://github.com/pierrefevrier/switch-user-env | Change unix user environment | linux | Because the remaining part of your script (everything after the bash call) will only be executed after you exit from bash (type the exit command or Ctrl+D). Calling bash in your script takes over control of further execution. Here is an updated version:

HOME=/usr/users/$1
cd $HOME
SOURCE_PROFILE=/etc/profile
if [ -f $HOME/.bash_profile ]; then
    SOURCE_PROFILE=$HOME/.bash_profile
fi
if [ -f $HOME/.profile ]; then
    SOURCE_PROFILE=$HOME/.profile
fi
bash --rcfile $SOURCE_PROFILE
_unix.211963 | Built a Yocto (Poky fido branch) Linux distro for Raspberry Pi 2 following this excellent tutorial, Part 1. Now trying to run Chromium.

- Added meta-browser to my bblayers.conf
- Added the chromium recipes to my .bb image file.

Was able to compile and build my image, but getting these errors when trying to run:

root@raspberrypi2:/usr/bin/chromium# ./chrome
[527:527:0624/195537:FATAL:browser_main_loop.cc(161)] Running without the SUID sandbox! See https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment for more information on developing with the sandbox on.
Aborted

--no-sandbox

root@raspberrypi2:/usr/bin/chromium# ./chrome --no-sandbox
[528:528:0624/195641:ERROR:browser_main_loop.cc(164)] Running without the SUID sandbox! See https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment for more information on developing with the sandbox on.
[528:528:0624/195641:ERROR:browser_main_loop.cc(210)] Gtk: cannot open display:
root@raspberrypi2:/usr/bin/chromium# [530:530:0624/195641:ERROR:image_metadata_extractor.cc(111)] Couldn't load libexif.
[530:530:0100/000000:ERROR:zygote_linux.cc(587)] write: Broken pipe
^C

DISPLAY=:0.0

root@raspberrypi2:/usr/bin/chromium# export DISPLAY=:0.0 && ./chrome --no-sandbox
[531:531:0624/195652:ERROR:browser_main_loop.cc(164)] Running without the SUID sandbox! See https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment for more information on developing with the sandbox on.
[531:531:0624/195652:ERROR:browser_main_loop.cc(210)] Gtk: cannot open display: :0.0
root@raspberrypi2:/usr/bin/chromium# [533:533:0624/195652:ERROR:image_metadata_extractor.cc(111)] Couldn't load libexif.
[533:533:0100/000000:ERROR:zygote_linux.cc(587)] write: Broken pipe
^C

--use-gl=egl (I'd be very interested to have it working with hardware acceleration)

root@raspberrypi2:/usr/bin/chromium# export DISPLAY=:0.0 && ./chrome --no-sandbox --use-gl=egl
[534:534:0624/195901:ERROR:browser_main_loop.cc(164)] Running without the SUID sandbox! See https://code.google.com/p/chromium/wiki/LinuxSUIDSandboxDevelopment for more information on developing with the sandbox on.
[534:534:0624/195901:ERROR:browser_main_loop.cc(210)] Gtk: cannot open display: :0.0
root@raspberrypi2:/usr/bin/chromium# [536:536:0624/195901:ERROR:image_metadata_extractor.cc(111)] Couldn't load libexif.
[536:536:0100/000000:ERROR:zygote_linux.cc(587)] write: Broken pipe
^C

Any pointers are welcome. | How do I properly add Chromium to my Yocto linux distribution | raspberry pi;chrome;arm;yocto | null
_cstheory.18134 | A new paper came out claiming quasi-polynomial algorithm for Discrete Logarithm: http://arxiv.org/abs/1306.4244 - if correct, does it mean we no longer have an exponential separation in complexity of a classical algorithm and its quantum version for the discrete logarithm problem? Does this have any implication for quantum complexity theory? | New algorithm for Discrete log and its implications for Quantum computing | cc.complexity theory | Well, one crucial observation is that the new algorithm apparently only works for groups of the form $Z_{p^k}$ where $p$ is small --- it doesn't give a speedup for groups of the form $Z_p$. The latter is the much more common setting that people talk about, both for cryptography and for Shor's algorithm, and the new algorithm doesn't threaten the quantum speedup there. On the other hand, yes, unless I'm mistaken it does make the speedup much smaller in the $Z_{p^k}$ case.
_unix.140420 | I've set this in my sysctl.conf:

net.netfilter.nf_conntrack_max=268435456

because the previous limit of 16,777,216 was reached very quickly. I've also decreased timeouts:

net.ipv4.tcp_fin_timeout = 30
net.netfilter.nf_conntrack_tcp_timeout_established=54000

I'm trying to do this:

./masscan 0.0.0.0/0 -p 80 --rate 500000 --exclude exclude.conf -oG output.txt

I want to increase the rate from 500,000 to 5,000,000. I have 100 Mbps. I'm trying to push the rate from 100,000 towards the upper limits. Nevertheless, my net.netfilter.nf_conntrack_count never grows beyond 1,609,909, and the remaining time stays at 73 hours 20 minutes whether the rate is 100k or 500k - it doesn't matter. Other settings:

net.core.rmem_default = 4194304
net.core.wmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
net.ipv4.tcp_rmem = 4096 65536 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304

It seems like something else needs to be increased; it looks like memory or some record limit is insufficient somewhere. How do I find the bottleneck? | masscan. how to configure Linux? | linux;networking;nmap | null
_codereview.51500 | The more I consider object-oriented JavaScript, the more I am confused. There are so many different ways and concepts, and I simply no longer know what fits best for my purposes. I like the constructor pattern, but I have the feeling that I've mixed different concepts:

Subclass.prototype = Object.create(Superclass.prototype);

function Subclass(name, value) {
    Superclass.call(this);
    this._name = name;
    this._value = value;
    this._init();
}

Object.defineProperties(Subclass.prototype, {
    'name': {
        get: function() {
            return this._name;
        }
    },
    'value': {
        get: function() {
            return this._value;
        },
        set: function(value) {
            this._value = value;
        }
    }
})

Subclass.prototype._init = function() {
    // some initialization
};

Subclass.prototype.compute = function() {
    var name = this.getName();
    var value = this.getName();
    // do something with name and value...
};

Is there a cleaner way? Should one use getters and setters for internal access (like I would do in other languages)?

Edit #1: Changed defining the properties on the constructor to defining them on the prototype (Object.defineProperties(Subclass, {...}) to Object.defineProperties(Subclass.prototype, {...})). | Simple JavaScript (sub) class [properties, getters, methods] | javascript;beginner | null
_softwareengineering.100429 | I work at a place that has a clear separation between designers and developers. We're a fairly new start-up, and we're trying to figure out what would be the best workflow for our team. We're a small company with a total of about 11 employees, three of whom are designers (they deal with the HTML, CSS, and some jQuery), and three of whom are developers primarily working with ASP.NET Web Forms. I am one of the developers; I don't have a knack for design, but I am comfortable enough with CSS and jQuery/JavaScript to understand the mechanics and whatnot. So, currently we wait for the designers to provide us with HTML structure cobbled together with server-side includes, JS, jQuery, CSS, etc. Typically we try to work around their stuff and begrudgingly try to keep their structure roughly the same, with the consequence of forcing us to have a lot of hacks in place to work with it. Yes, this means using their server-side includes mixed in with our .aspx pages, amongst other things that seem to go against the general ASP.NET Web Forms flow. There have been times when I have changed the includes into a user control and moved their common HTML out into master pages, etc. - all of which was consequently followed by a lot of barking from the design team. Their main complaint is that master pages and user controls make their maintenance of the site more difficult. I can see their point of view, and I wouldn't like it if people messed with my workflow either, which I guess is what I am doing to them. I have tried several times to approach them about moving to a site structure that complements the ASP.NET Web Forms environment we programmers are working with, but every time the subject of working with master pages comes up, the design team throws a near tantrum. Our current approach is simply forcing us to put together a lot of hacks and work-arounds so that it gels with their design, and frankly it's getting frustrating and just adding more unnecessary technical debt to our projects. My question is simply this: what workflow works best for you and your design team? Do your designers know how to use master pages, etc.? Is it unreasonable to have designers understand the mechanics of how master pages and user controls work, and in general how ASP.NET puts a page together? (I don't feel this is an unreasonable expectation of them...) So, what do you guys think? | What workflow do you use with asp.net web forms development and your design department | asp.net;workflows;webforms | null
_codereview.118966 | King Kohima problem: King Kohima has reserved a new exclusive street for his executive-class employees where they can build their homes. He has assigned you to plan that street. You have to decide on which plots along the street new buildings are allowed to be built. In order to do this, you first want to calculate the number of possible ways of assigning free plots to buildings while keeping in mind this restriction: no two consecutive plots can have buildings on them. This is done to ensure a sense of free space in the arena. The street is divided into M sections. Each section corresponds to 2 plots, one on each side of the street. Find the number of possible assignments.

Input: In the first line you're given M (M ≤ 1000).
Output: In the first and only line, output the result.

Example input: 3
Example output: 25

Example explanation: If we just look at one street side and mark X as a plot where building is allowed and Y as a free plot, we have: XYX, YXY, YYX, XYY, YYY. Since the same number exists on the other side, we have 5*5 = 25 combinations. For example, if the input is 3, then only five layouts (YYY, YYX, YXY, XYY, XYX) of the eight possible combinations (YYY, YYX, YXX, YXY, XYY, XYX, XXY, XXX) are valid. For both sides of the street, it's 5*5=25.

Please critique this solution.

def binary(i):
    r = str(0)
    while(i):
        x = str(i % 2)
        if(x == '1' and r[-1] == '1'):  # two consecutive X are not possible
            return None
        else:
            r = r + str(x)
        i = i / 2
    return r

m = 3  # input
l = []
for i in range(0, 2**m):
    l.append(binary(i))  # converted the number to binary
print(l)

c = 0  # count possible plots available
for i in l:
    if i:
        c = c + 1
print(c*c)  # both sides of the lane | Solution to King Kohima problem in Python | python;programming challenge;combinatorics;time limit exceeded | null
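Since the posted solution enumerates all 2^M bit strings, it cannot handle M up to 1000 (hence the time-limit tag). A possible O(M) alternative, sketched from the combinatorics in the problem statement: rows with no two adjacent X's follow a Fibonacci-style recurrence.

def count_assignments(m):
    # f(k): number of X/Y strings of length k with no two adjacent X's
    # f(1) = 2, f(2) = 3, f(k) = f(k-1) + f(k-2)
    a, b = 1, 2
    for _ in range(m - 1):
        a, b = b, a + b
    return b * b  # both sides of the street are independent

print(count_assignments(3))  # 25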
_unix.100764 | I have a tab-tabulated file. I would like to check if every line has the same number of tabs. As a first step, I'd like to print the number of tabs for each individual line. I've tried grep -o '\t' infile | wc -l, but my implementation of grep says grep: invalid option -- o. Is there another way? Nice to have: if possible, due to personal preference, I'd prefer to do this with utils (grep, cat, etc.), preferably not awk or bash scripting. | Count tabs per line in text file with utils | command line;text processing;grep;utilities;wc | null
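If a small shell loop is acceptable despite the "no scripting" preference, one awk-free possibility, sketched: delete everything except tabs on each line and count what remains.

while IFS= read -r line; do
  # IFS= and -r keep the tabs intact; tr -dc '\t' deletes all non-tab characters
  printf '%s' "$line" | tr -dc '\t' | wc -c
done < infile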
_softwareengineering.283779 | I want to consume part of the Steam WebAPI; it's a simple REST service, but it produces some complex JSON. I thought about using the Newtonsoft Json.NET library to generate my C# objects. What's the best way to get the needed classes to easily use the library? I thought about using http://json2csharp.com/ to generate the needed classes, but of course these don't follow any naming conventions. And as I save the data into a database anyway, I have to convert it to the EF classes regardless. I could write my own classes and use annotations, which would result in nicer code but a little more work. What's the best practice for creating the C# infrastructure for an external JSON service? | c# class generation/architecture for json rest service | c#;class design | I think you have two options here.

1. Best practice. Create objects which match the JSON returned by/sent to the API and serialize/deserialize these objects when you make calls or receive responses. Hide the JSON implementation from the consumer of this 'client library'. These objects could be created with a tool, or hand crafted. Either way you are probably going to have some custom code to deal with the edge cases of converting JSON to .NET datatypes and classes. Additionally, you'll need to spend a lot of time creating and maintaining these objects.

2. Quick 'n' dirty. Rather than serializing to objects, simply parse the JSON and query the values out of the responses that you need to use, with XPath or similar. This will yield a partial implementation of a client, but if you only want a subset of the features it may well be much faster to implement.
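A sketch of the first option with Json.NET attribute mapping (the class and the JSON field names here are illustrative placeholders, not a statement of the actual Steam WebAPI schema):

using Newtonsoft.Json;

// Maps lowercase JSON fields onto conventionally named C# properties.
public class PlayerSummary
{
    [JsonProperty("steamid")]
    public string SteamId { get; set; }

    [JsonProperty("personaname")]
    public string PersonaName { get; set; }
}

// Deserialize a response body (the string "json") into the typed object:
var summary = JsonConvert.DeserializeObject<PlayerSummary>(json);

This keeps the wire format confined to the attributes, so the rest of the code, including any mapping to EF entities, only ever sees idiomatic C# names.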
_webapps.99800 | How can I determine who, or how many members download files from my public Facebook group? I am the admin. | File download stats for a public Facebook group | facebook groups | null |
_cs.22905 | The question is simple: $\qquad \operatorname{DropMiddle}(L)=\{xy\in\Sigma^* \mid |x|=|y| \land \exists a\in\Sigma\colon xay\in L\}$. Prove that CFL's aren't closed under $\operatorname{DropMiddle}$.I should probably be looking for a counter example, but I'm coming up short. I know that the language $ww$ ($w$ is a word in some CFL) isn't a CFL, but I can't figure out if I'm on the right track at all. | Prove that context free languages aren't closed under DropMiddle | formal languages;context free;closure properties | First off, you are right about looking for a counter example. However, $ww$ is a dead end, since no matter what you add to the middle, you'll stiil have a language that is not context-free.Hint 1: If the middle character is in some way special in $L$, you can essentially modify a PDA for $L$ to nondeterministically guess the middle and the removed letter and it will accept $\operatorname{DropMiddle}(L)$. So you should try to look for a language, where the dropping makes the middle special.Hint 2: If the specialness of the middle is the only feature of $\operatorname{DropMiddle}(L)$, a PDA could just use its stack to determine the middle. So you need to force it to use the stack in a different way.Solution: $L = \{ aw_1aw_2aw_3a\ldots aw_na \mid w_i\in \{(,)\},~ w_1\ldots w_n \text{ is correctly parenthesized} \}$. The $w_i$ part forces any PDA for $L$ or $\operatorname{DropMiddle}(L)$ to use its stack to count open parens. Thus it is impossible to check if, the single missing $a$ after dropping is indeed missing from the exact middle of the word. Formally proving that $\operatorname{DropMiddle}(L)$ is not context-free should be doable with Ogden's Lemma.Choose a word $(^k)^k(^k)^k$ (plus the $a$s inbetween) and mark $k$ characters each left and right of the middle. |
_softwareengineering.226249 | If you borrow code from some source, it is probably best to cite it (like "adapted from [source]"). However, suppose you borrowed this function (example in C++):

void doWork()
{
    cout << "Doing work!\n" << endl;
}

and now you edited it to be like this:

void doWork()
{
    string name = "";
    cout << "Starting work...\n" << endl;
    cout << "Hello user. What is your name?" << endl;
    cin >> name;
    ...
}

Would you still have to say "adapted from [source]" or something like that? Or are you in the safe zone? Note: I am specifically wondering how this applies to copyrighted (and open source) code. | Citing Borrowed Code | copyright | If you are taking code from copyrighted or open source code, then you must conform to the licensing terms under which you obtained the code. If you do not have such a licensing agreement, then you are in violation of copyright law regardless of whether you cite the code or not. In the case of the GPL, in order to use the snippet of code, you must also release ALL of your code under the GPL and include the GPL license as part of your distributed product. You do not need to cite which part you borrowed. Commercial code typically does not require any form of acknowledgement, licensing or citation. Other open source licenses may vary in requirements, but as far as I know none require you to cite the individual parts of the code.
_unix.346882 | ip route list:

default via 172.20.10.1 dev eth0
10.22.0.0/24 dev tun0  proto kernel  scope link  src 10.22.0.254
172.20.10.0/24 dev eth0  proto kernel  scope link  src 172.20.10.4
172.22.1.0/24 via 10.22.0.1 dev tun0
172.22.2.0/24 via 10.22.0.2 dev tun0
172.22.3.0/24 via 10.22.0.3 dev tun0

172.20.x.x are Azure VMs. 172.22.x.x are remote clients connected over openvpn. VPN is 10.22.0.x. I can get from 172.22.0.0/16 to 172.20.11.5 (using nc and ping). I cannot get back from 172.20.11.5 to 172.22.0.0/16 (neither nc nor ping). What am I missing? | Routing over openvpn | routing;openvpn | null
_unix.168021 | free -m currently puts out something like this. I would like to know, using grep/awk, how we can get the total free amount, i.e. 9083. | Print total free memory from `free -m` | linux;grep;string | Using awk:

awk '/^-/ {print $4}' <(free -m)
9083

Be aware that in procps-ng 3.3.10, the output format changes, and this line will then look like:

awk '/Mem:/ {print $4}' <(free -m)
9083

The amount of available memory can be accessed in the newer procps-ng in a different field:

awk '/Mem:/ {print $7}' <(free -m)
_bioinformatics.282 | How can I manipulate a protein-interaction network graph from the STRING database using the STRINGdb Bioconductor package and R? I have downloaded the whole graph for Homo sapiens from STRING, which has about 20,000 proteins. How do I read the file using that package? How do I filter out things I don't need - supposing that I want to keep tumor data, as an example? | How to manipulate protein interaction network from String database in R? | r;bioconductor;database;networks | I think the easiest way is to download the graph using STRINGdb.

library(STRINGdb)
string_db <- STRINGdb$new(version=10, species=9606, score_threshold=400, input_directory="")
full.graph <- string_db$get_graph()

Now you can use igraph to manipulate the graph. Let's assume you want to take the 200 proteins with the highest degree, i.e. the number of edges they have.

library(igraph)
# see how many proteins you have
vcount(full.graph)

# find the top 200 proteins with the highest degree
top.degree.verticies <- names(tail(sort(degree(full.graph)), 200))

# extract the relevant subgraph
top.subgraph <- induced_subgraph(full.graph, top.degree.verticies)

# count the number of proteins in it
vcount(top.subgraph)

How to get disease-specific genes? There's no GO annotation for cancer or Alzheimer's disease; it is out of scope of the GO consortium. What you can do is either take the KEGG pathway annotation, or manually select a list of relevant GO terms, or acquire the list from one of the papers. For example, annotation term 05200 corresponds to the cancer KEGG pathway. You can easily retrieve proteins associated with the annotation:

cancer.pathway.proteins <- string_db$get_term_proteins('05200')$STRING_id

And then perform subgraphing as described above. Alternatively you can try to get an enrichment score for every gene given its neighbors (the way enrichment is shown on the string-db website). Then you can keep only those having top enrichment scores. Probably the get_ppi_enrichment_full or get_ppi_enrichment functions will help you to do that.
_webapps.88705 | When I go to entries and see the entire list of all my entries, there is an export button at the top. When I click the export button, nothing happens. I've tried with firefox and chrome. I can see the browser does a quick loading action when I click the button, but nothing attempts to download. I must be missing something simple. Any hints would be appreciated. | Cognito Forms: How do you export data? | cognito forms | There was a temporary issue affecting export this evening. This has been resolved. Just click export (in any modern browser) and it should work just fine now. |
_unix.213953 | I'm using infinality for my font rendering on my Arch machine. (And yes, I've installed the multilib packages.) My fonts are beautiful everywhere except in Wine, since anti-aliasing does not work out of the box. I've found a fix: I have to run

xrdb -query | grep -vE 'Xft\.(anti|hint|rgba)' | xrdb

in the terminal, and then anti-aliasing works. There are a few reasons I'm not satisfied with this solution:

- It's not permanent. I have to run this command every time I restart my PC.
- It's hacky. I have no idea what this is doing, and I'd like to understand what's going on.

If anyone can give me a solution that fixes anti-aliasing and meets at least some of my requirements, I would really appreciate it. | Wine anti aliasing doesn't work | fonts;wine | null
_webmaster.106511 | I have Funnel Visualisation set up with three steps and 1st one marked as required.Then there's Custom Report with Landing Page dimension and some metrics:Page Views as first metricFunnel Goal Completions as second metricFunnel Goal Conversion Rate as third metricWhile there's no conversions in Funnel Visualisation yet there are conversions in Custom Report.I was reading documentation and probably have some explanation - just want to confirm my idea of what is going on:Q1Funnel is being filtered by 1st required step and there was no conversion which included visit to required page first?Q2In Custom report meaning of the conversion is just user who entered in particular Landing Page and then visited the last step from the Funnel Goal (omitting any required steps as Funnel conditions apply only in Funnel)?Q3If there's required step in Funnel backfill never happens?Q4If the same user is able to go thorough the Funnel steps multiple times how many conversions it will score? I've read that only one per session but how can I know how much time it takes before user will be counted as new session? Even if it will be counted as new session it will probably have the same User ID and will be counted as returning user - will it score another conversion in Funnel Visualisation? | Funnel visualisation goal in custom report | google analytics | null |
_unix.168816 | After an update to php5-common this night, cron reports a sed error invalid option -- 'z'.This problem has also been reported on the Debian mailing list.What should I do, until the problem is resolved through another update, to fix this issue in the meantime?I fear that messing with the cron entry, that the package created, I could cause the next update to fail. | Cron reports error in php5-common job after update | debian;php | As @artfulrobot pointed out in his comment, 5.4.35-0+deb7u2 was released, which fixes the issue. After installing it, everything went back to normal. |
_webmaster.31851 | I recently updated our site taking it from a multi-page site to a single page site.The problem now is that when the site is searched in say Google, it displays the site as well as the indexed pages. So if a user clicks say our About page, it takes them to our now outdated material.I am hoping to get some guidance on how to properly handle this.I figure the first step is to now setup a robots.txt on our new index page to tell the engines not to crawl beyond index.php.But in the meantime, how do I handle the fact that when searching our site on Google we may still have users who try to click on sub-page links?Should I simply setup redirects while waiting for the engines to update?And if so, do I need to setup redirects on each page using PHP or is this something I would take care of on our sites control panel?I am not very familiar with redirects...Any help is appreciated! | How to correctly handle redirect after site facelift | php;redirects;sitemap | The best thing to do is to use 301 redirects in your .htaccess file. Essentially Google or any browser looking for a page (for this example, the About Us page) gets redirected to the page you want them to see. This is done instantly really, and requires no You are being redirected, please stand by messages.The code is:RewriteRule ^name-of-old-page/$ /name-of-new-page/ [R=301,L] |
_codereview.109565 | The following code gets video renditions from Brightcove using the media API. It then generates an ordered source list for the JW Player and selects the default rendition based on a pre-set bitrate.

function buildSourcesFromBrighcoveID(brightcoveID, onComplete) {
    var sources = [];
    var brightcoveAPI = 'http://api.brightcove.com/services/library';
    var parameters = {
        command: 'find_video_by_id',
        video_id: brightcoveID,
        video_fields: 'renditions',
        media_delivery: 'http',
        token: 'jskS1rEtQHy9exQKoc14IcMq8v5x2gCP6yaB7d0hraRtO__6HUuxMg..'
    };

    // Get renditions
    $.ajax({
        dataType: "jsonp",
        url: brightcoveAPI,
        data: parameters
    }).done(function(data) {
        var targetRate = 800000; // Bps
        var lowestDiff;
        var closestRate;

        // Sort them by encoding rate, higher rate first
        data.renditions.sort(function(a, b) {
            return parseFloat(b.encodingRate) - parseFloat(a.encodingRate);
        });

        // Get the closest encoding rate to target
        var currentRate = data.renditions[0].encodingRate;
        var lowestDiff = Math.abs(currentRate - targetRate);
        $.each(data.renditions, function(id, rendition) {
            var diff = Math.abs(rendition.encodingRate - targetRate);
            if (lowestDiff > diff) {
                lowestDiff = diff;
                closestRate = rendition.encodingRate;
            }
        });

        // Build sources for the player
        $.each(data.renditions, function(id, rendition) {
            var newItem = {
                file: rendition.url,
                label: Math.round(rendition.encodingRate / 1000) + ' Kbps'
            }
            // Set default rendition
            if (rendition.encodingRate == closestRate) {
                newItem["default"] = true;
            }
            sources.push(newItem);
        });

        if (onComplete) onComplete(sources);
    });
    return sources;
}

function loadVideoPlayer(sources) {
    var playerInstance = jwplayer("player");
    playerInstance.setup({
        sources: sources,
        width: 727,
        height: 455,
        autostart: true
    });
}

buildSourcesFromBrighcoveID('1520880903001', loadVideoPlayer);

Codepen

Questions:
1. How can I be sure buildSourcesFromBrighcoveID completed before doing anything else? I currently run the function that builds the player from within the first function to be sure the Ajax completed, but I would like them to be independent.
2. Could I build the sources array in a more efficient way? I am currently iterating the renditions 3 times (to sort, to search for the best rendition and to build the sources array). | JW Player + Brightcove Integration | javascript;async await;iteration | null
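For the first question, one hedged sketch (jQuery 1.8+ promise semantics assumed, and brightcoveAPI and parameters taken to be in scope as in the code above) is to return the request's promise instead of the half-built sources array, so the caller can chain on completion:

    // Sketch: let the caller wait on the jqXHR promise itself
    function buildSourcesFromBrighcoveID(brightcoveID) {
        return $.ajax({ dataType: "jsonp", url: brightcoveAPI, data: parameters })
            .then(function(data) {
                var sources = [];
                // ... sort renditions, pick the closest rate, fill sources as above ...
                return sources; // becomes the resolved value handed to the caller
            });
    }

    // Usage: loadVideoPlayer runs only after the Ajax call has completed
    buildSourcesFromBrighcoveID('1520880903001').then(loadVideoPlayer);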
_cogsci.6089 | Many similar questions here ask either about working or short-term memory, or about various tricks and techniques for remembering information efficiently. My question is: is it possible to improve the capacity for long-term remembering itself? So that learning or memorizing something would take less time or repetition, and the acquired knowledge or information would be stored for longer and with fewer distortions. The effect might not be permanent, but it should be prolonged and substantial. To illustrate my point, let's assume for argument's sake that learning poems has the effect of improving long-term memory. This would mean that, after regularly memorizing new poems for several weeks, I find that I'm able to memorize new poems in less time, and the poems stay remembered for longer. I'm also able to remember the content of non-fiction and fiction texts better, so that I can recall tricky facts and logical inferences which, before memorizing poems, I tended to forget. So, is there a method to improve the general efficiency of your long-term memory? In other words, the ability to retain more information for longer periods with less effort? | Is it possible to permanently improve long-term memory? | learning;memory;long term memory | null