Columns: id (string, lengths 5 to 27), question (string, lengths 19 to 69.9k), title (string, lengths 1 to 150), tags (string, lengths 1 to 118), accepted_answer (string, lengths 4 to 29.9k)
_webmaster.44675
I have a blog and I want to monetize it. I applied to Google AdSense, but they say that my blog has duplicate content. Is there any site like Google AdSense where I can get ads for my blog?
Find ads for blog
advertising;cpc ads
Here is an article that has 5 Google AdSense alternatives. It recommends: AdBrite, Bidvertiser, Chitika, Clicksore, and ClickZ.
_softwareengineering.224975
The problemC# project consisting of WCF services used by a Flex application.A customer may request a functionality change that requires me to alter code to work just for them. It could be a single line of code in a method or maybe a method acts in a totally different way for customer x.My IdeasUse branches for customers that have a customization. When a release is ready, merge to customer branches and try not to break / forget what their customization was for. We use SVN. I'm not a huge fan of this as the code base is very large.Use inversion of control, dependency injection, and MEF. Create an interface for the class/s that needs to be modified. Create a new class library project (that is, customerabc), add a new class that implements the class just created, override method/s as needed for customer changes. Add a MEF export. I then place this in a customization folder and point MEF there. If it finds a DLL file in the folder, it uses that instead of the export from the executing assembly.I like option 2.Pros :Easy - normal deployment install, then drop in their DLL file. Obvious - it might not be overly clear wether or not a customer is running code from their branch. With this option I could just look at the customization folder.Clean - only the files that need to be customized exist. There isn't any need for a full copy of trunk. d. It promotes better SOLID for future development and refactoring (this project has little OOP).Cons:It will be harder to manage changes in trunk out to the customer projects.It doesn't necessarily solve my problem with database changes or changes on the Flex side.If the change is a single line of code in a 500 line method, I don't see any other option than to override that method, copy paste the code to the customer override and make the one line change. This isn't good use of DRY to me, but is there a good way around this?OK, so better use of OOP and SOLID principles could mitigate some of this, but it also means that to implement a simple customer request, I have to do some major refactoring to the whole class... potentially many classes.What should I do?
Handling changes specific to a customer
version control;builds
Option #1 is reasonable, but it represents a big hassle.Option #2 should be the dictionary definition of overengineering.Option #3: consider this:Instead of multiple disjoint sets of requirements, (one for each client,) suppose that you have a single set of requirements, covering all customers, with the additional requirement that any given installation of your application should cater to a single customer, which is specified in configuration. True, this means that specific customer concerns will be scattered throughout the codebase, and that every customer will be receiving inactive features, possibly even unused database tables and columns, but do they need to know? And if they do need to know, do they care? Do they mind? Would they please mind their own business and let you do your job in whatever way is more productive for you?My car engine has some hooks on it. I have no use for these hooks, but they are there because they made it much easier to install the engine in the factory, and much safer for the workers there, so the presence of these hooks has actually lowered the end price that I paid for my car. So, I am perfectly fine with these otherwise useless hooks.
_unix.96656
Some backgroundA friend of mine was using in his office a NAS Buffalo-LS-WVL with two disks of 1TB each. It seems that the two disks were mounted as raid 1, but, as you will read, probably they have been not. The NAS gave some problems of extremely slowness and then suddenly didn't work anymore. I've been called in to rescue his data.Both disk have exactly the same partitioning: one physical and 6 logical, and in the 6th data lives in (aprox 80GB out of 0,95TB).Disk /dev/sdd seems to give hardware problems (slowliness, sector reading error, etc.), whereas /dev/sde is a well-physically functioning disk.The goal is to extract data that were contained in the NAS. If not all data, the most to be extracted, the better. These data are vital for the company of this friend of mine.What I have tried already1st attempt: Mounting disks aloneThis is the very first try, hoping it works, I've tried to get each disk and mount it alone, and I got this message:root@ubuntu:~# mount /dev/sdd6 /mnt/n-or-root@ubuntu:~# mount /dev/sde6 /mnt/nboth gave me the same message:mount: unknown filesystem type 'linux_raid_member'2nd attempt: Creating disk array RAID 1 and try to mount themOK, if I cannot mount them alone, then I need to create an array of disks. Let's suppose (the most logical) the original configuration was raid 1, and use one disk at a time:root@ubuntu:~# mdadm --create --run --level=1 --raid-devices=2 \ /dev/md/md-singolo-e6--create-missing /dev/sde6 missinggives:mdadm: /dev/sde6 appears to be part of a raid array: level=raid0 devices=2 ctime=Mon Sep 26 10:23:48 2011 mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90 mdadm: Defaulting to version 1.2 metadata mdadm: array /dev/md/md-singolo-e6--create-missing started.So, it seems that the original raid was in 0 mode and not 1 mode. Bad new, as a disk is giving sector problems. Anyway, I gave a try to mount the newly created RAID1 array (even if I know it's no-sense):root@ubuntu:~# mkdir /mnt/md-singolo-e6--create-missing root@ubuntu:~# mount /dev/md/md-singolo-e6--create-missing \ /mnt/md-singolo-a6--create-missing/gave:mount: /dev/md127: can't read superblockExactly the same result has been given for the other disk.3rd attempt: Creating disk array RAID 0 and try to mount themOK, as it has been stated that it was Raid0, let's go for it:root@ubuntu:~# mdadm --create --run --level=0 --raid-devices=2 \ /dev/md/md001hw /dev/sdd6 /dev/sde6 gives:mdadm: /dev/sdd6 appears to be part of a raid array:level=raid1devices=2ctime=Mon Oct 14 16:38:33 2013mdadm: /dev/sde6 appears to be part of a raid array:level=raid1devices=2ctime=Mon Oct 14 17:01:01 2013mdadm: Defaulting to version 1.2 metadatamdadm: array /dev/md/md001hw started.OK, once created I try to mount it:root@ubuntu:~# mount /dev/md/md001hw /mnt/nmount: you must specify the filesystem typeAt this point all ext2,3,4 specified with -t gave error.4th attempt: Creating disk images and work with themOK, as a disk has problem it is much better to work on a copy (dd) of the data partition, padded with 0 (sync) in case of block read error (error). 
I therefore created the two images:This one for the good disk (block of 4MB, to be faster):root@ubuntu:~# dd bs=4M if=/dev/sde6 of=/media/pietro/4TBexthdd/sde6-bs4M-noerror-sync.img conv=noerror,syncand this one for the disk with problems (minimum block size, to be safer)root@ubuntu:~# dd if=/dev/sde6 of=/media/pietro/4TBexthdd/sdd6-noerror-sync.img conv=noerror,syncOnce I got the two images I've tried to use them as RAID 0, with the command specified above. Nothing to do, the answer that came is that the images is not a block device and it does not create the array.5th attempt: going byte-a-byte to rescue some dataOK, if a proper mounting is not working, let's go to extract data trough byte-a-byte reading and header and footer info. I used *foremost*to do this job, both on each single disk: for disk 1:root@ubuntu:~# foremost -i /dev/sde6 -o /media/pietro/4TBexthdd/foremost_da_sde6/it creates sub-folders with file extensions, but no population at all in them. Whereas for disk 2 (the damaged one):root@ubuntu:~# foremost -i /dev/sdd6 -o /media/pietro/4TBexthdd/foremost_da_sdd6_disco2/neither the sub-folder structure is created by foremost.Same result when I tried foremost on RAID 0 array:root@ubuntu:~# foremost -i /dev/md/md001hw -o /media/pietro/4TBexthdd/foremost_da_raid_hw/Neither sub-folder structure has been created.Where I need some help / My QuestionsFirst and foremost question: how to rescue data? Does anyone has any hint I've not tried?Could anyone of you suggest anything different of what I've done?Other questions:I'm new to mdadm, did I do everything correctly? Was effectively the original array created on Sept 26th, 2011 in Raid 0 mode? Why I cannot use the partition images to create an array?AppendixThis is the output of dmesg in case of reading from the failing disk (/dev/sdd):[ 958.802966] sd 8:0:0:0: [sdd] Unhandled sense code[ 958.802976] sd 8:0:0:0: [sdd] [ 958.802980] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE[ 958.802984] sd 8:0:0:0: [sdd] [ 958.802987] Sense Key : Medium Error [current] [ 958.802994] sd 8:0:0:0: [sdd] [ 958.802999] Add. Sense: Unrecovered read error[ 958.803003] sd 8:0:0:0: [sdd] CDB: [ 958.803006] Read(10): 28 00 00 d5 c7 e0 00 00 f0 00[ 958.803021] end_request: critical target error, dev sdd, sector 14010336[ 958.803028] quiet_error: 36 callbacks suppressed[ 958.803032] Buffer I/O error on device sdd, logical block 1751292[ 958.803043] Buffer I/O error on device sdd, logical block 1751293[ 958.803048] Buffer I/O error on device sdd, logical block 1751294[ 958.803052] Buffer I/O error on device sdd, logical block 1751295[ 958.803057] Buffer I/O error on device sdd, logical block 1751296[ 958.803061] Buffer I/O error on device sdd, logical block 1751297[ 958.803065] Buffer I/O error on device sdd, logical block 1751298[ 958.803069] Buffer I/O error on device sdd, logical block 1751299[ 958.803074] Buffer I/O error on device sdd, logical block 1751300[ 958.803078] Buffer I/O error on device sdd, logical block 1751301[ 961.621228] sd 8:0:0:0: [sdd] Unhandled sense code[ 961.621236] sd 8:0:0:0: [sdd] [ 961.621238] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE[ 961.621241] sd 8:0:0:0: [sdd] [ 961.621243] Sense Key : Medium Error [current] [ 961.621248] sd 8:0:0:0: [sdd] [ 961.621251] Add. 
Sense: Unrecovered read error[ 961.621254] sd 8:0:0:0: [sdd] CDB: [ 961.621255] Read(10): 28 00 00 d5 c8 d0 00 00 10 00[ 961.621266] end_request: critical target error, dev sdd, sector 14010576[ 964.791077] sd 8:0:0:0: [sdd] Unhandled sense code[ 964.791084] sd 8:0:0:0: [sdd] [ 964.791087] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE[ 964.791090] sd 8:0:0:0: [sdd] [ 964.791092] Sense Key : Medium Error [current] [ 964.791096] sd 8:0:0:0: [sdd] [ 964.791099] Add. Sense: Unrecovered read error[ 964.791102] sd 8:0:0:0: [sdd] CDB: [ 964.791104] Read(10): 28 00 00 d5 c8 00 00 00 08 00[ 964.791114] end_request: critical target error, dev sdd, sector 14010368[ 964.791119] quiet_error: 22 callbacks suppressed[ 964.791122] Buffer I/O error on device sdd, logical block 1751296
recovering data from RAID and disk failure (Linux)
hard disk;data recovery;raid;mdadm;failure
I hate to be the bearer of bad news, but...Q: I'm new to mdadm, did I do everything correctly? A: No. In fact, you did just about everything in the most destructive way possible. You used --create to destroy the array metadata, instead of using --assemble which probably would have allowed you to read the data (at least, to the extent the disk is capable of doing so). In doing so, you have lost critical metadata (in particular, the disk order, data offset, and chunk size). In addition, --create may have scribbled array metadata on top of critical filesystem structures.Finally, in your step (3), I see that mdadm is complaining of RAID1 on both disksI'm hoping that's from you trying (2) on both disks, individually. I sincerely hope you didn't let RAID1 start trying to sync the disks (say, had you added both to the same RAID1 array).What to do nowIt seems like you've finally created images of the drives. You ought to have done this first, at least before trying anything beyond a basic --assemble. But anyway,If the image of the bad drive missed most/all sectors, determine if professional data recovery is worthwhile. Files (and filesystem metadata) are split across drives in RAID0, so you really need both to recover. Professional recovery will probably be able to read the drive.If the image is mostly OK, except for a few sectors, continue.Make a copy of the image files. Only work on the copies of the image files. I can not emphasize this enough, you will likely be destroying these copies several times, you need to be able to start over. And you don't want to have to image the disks again, especially since one is failing!To answer one of your other questions:Q: Why I cannot use the partition images to create an array?A: To assemble (or create) an array of image files, you need to use a loopback device. You attach an image to a loopback device using losetup. Read the manpage, but it'll be something along the lines of losetup --show -f /path/to/COPY-of-image. Now, you use mdadm on the loop devices (e.g., /dev/loop0).Determine the original array layoutYou need to find out all the mdadm options that were originally used to create the array (since you destroyed that metadata with --create earlier). You then get to run --create on the two loopback devices, with those options, exactly. You need to figure out the metadata version (-e), the RAID level (-l, appears to be 0), the chunk size (-c), number of devices (-n, should be 2) and the exact order of the devices.The easiest way to get this is going to be to get two new disks, put then in the NAS, and have the NAS create a new array on them. Preferably with the same NAS firmware version as originally used. IOW, repeat the initial set up. Then pull the disks out, and use mdadm -E on one of the members. Here is an example from a RAID10 array, so slightly different. I've omitted a bunch of lines to highlight the ones you need: Version : 1.0 # -e Raid Level : raid10 # -l Raid Devices : 4 # -n Chunk Size : 512K # -c Device Role : Active device 0 # gets you the device order Array State : AAAA ('A' == active, '.' == missing)NOTE: I'm going to assume you're using ext2/3/4 here; if not, use the appropriate utilities for the filesystem the NAS actually used.Attempt a create (on the loopback devices) with those options. See if e2fsck -n even recognizes it. If not, stop the array, and create it again with the devices in the other order. Try e2fsck -n again.If neither work, you should go back to the order you think is right, and try a backup superblock. 
The e2fsck manpage tells you what number to use; you almost certainly have a 4K blocksize. If none of the backup superblocks work, stop the array, and try the other disk order. If that doesn't work, you probably have the wrong --create options; start over with new copy of the images & try some different optionsI'd try different metadata versions first.Once you get e2fsck to run, see how badly damaged the filesystem is. If its completely trashed, that may mean you have the wrong chunk size (stop and re-create the array to try some more). Copy the data off.I suggest letting e2fsck try to fix the filesystem. This does risk destroying the filesystem, but, well, that's why you're working on copies! Then you can mount it, and copy the data off. Keep in mind that some of the data is likely corrupted, and that corruption may be hidden (e.g., a page of a document could have been replaced with NULLs).I can't get the original parameters from the NASThen you're in trouble. Your other option is to take guesses until one finally works, or to learn enough about the on-disk formats to figure it out using a hex editor. There may be a utility or two out there to help with this; I don't know.Alternatively, hire a data recovery firm.
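To make the loopback step concrete, the attempt might look roughly like the commands below; the image paths, metadata version, chunk size, and device order are placeholders to be replaced with the values recovered from the NAS:

losetup --show -f /recovery/copy-of-sdd6.img   # prints the loop device it attached, e.g. /dev/loop0
losetup --show -f /recovery/copy-of-sde6.img   # e.g. /dev/loop1
mdadm --create --run --metadata=1.2 --level=0 --chunk=512 --raid-devices=2 /dev/md/recovery /dev/loop0 /dev/loop1
e2fsck -n /dev/md/recovery   # read-only check; stop the array and re-create with other guesses if it recognizes nothing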
_unix.229234
Having a CSV file like this:HEADERfirst, column|second some random quotes column|third ol' columnFOOTERand looking for result like: HEADERfirst, column|second some random quotes column|third ol' columnin other words removing FOOTER, quotes in beginning, end and around |. So far this code works: sed '/FOOTER/d' csv > csv1 | #remove FOOTERsed 's/^\//' csv1 > csv2 | #remove quote at the beginningsed 's/\$//' csv2 > csv3 | #remove quote at the endsed 's/\|\/|/g' csv3 > csv4 #remove quotes around pipeAs you see the problem is it creates 4 extra files. Here is another solution, that has a goal not to create extra files and to do the same thing in a single script. It doesn't work very well. #!/bin/kshsed '/begin/, /end/ { /FOOTER/d s/^\// s/\$// s/\|\/|/g }' csv > csv4
Join multiple sed commands in one script for processing CSV file
sed;csv
First of all, as Michael showed, you can just combine all of these into a single command:sed '/^FOOTER/d; s/^\//; s/\$//; s/\|\/|/g' csv > csv1I think some sed implementations can't cope with that and might need: sed -e '/^FOOTER/d' -e 's/^\//' -e 's/\$//' -e 's/\|\/|/g' csv > csv1That said, it looks like your fields are defined by | and you just want to remove around the entire field, leaving those that are within the field. In that case, you could do:$ sed '/FOOTER/d; s/\(^\||\)/\1/g; s/\($\||\)/\1/g' csv HEADERfirst, column|second some random quotes column|third ol' columnOr, with GNU sed:sed -r '/FOOTER/d; s/(^|\|)/\1/g; s/($|\|)/\1/g' csv You could also use Perl:$ perl -F| -lane 'next if /FOOTER/; s/^|$// for @F; print @F' csv HEADERfirst, column|second some random quotes column|third ol' column
_webmaster.7712
Possible Duplicate: How to find web hosting that meets my requirements? I am looking for a cheap dedicated server. (I was earlier happy with a VPS, until I realized that the disk I/O is not at all reliable and depends on what your neighbours are up to at the moment.) I was browsing through http://www.lowenddedi.net/the-database and I don't understand the memory speed and NIC speed columns at all. What will their effect be? Do I need to worry about them? Also, can someone suggest a provider with the following criteria: 1) good & reliable network, 2) price <= $60/month.
Help selecting dedicated server with good disk I/O & network
server;looking for hosting;dedicated hosting
null
_unix.386168
I want to build an RPM package for my icon theme and upload it to www.gnome-look.org, along with deb and tar.gz versions. If I build it on Fedora, will users be able to use it on other systems like Red Hat? The package doesn't have any binaries, only a few shell scripts and SVG files.
Can an RPM package built on Fedora be used on other systems?
fedora;rpm
If I build it on Fedora, will users be able to use it on other systems like Red Hat? Yes, as long as you provide only non-binary content and use paths and RPM macros that are compatible with RHEL. If I remember correctly, %doc, for example, does not work in older versions of RPM.
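As a rough illustration (the spec and theme names here are invented), a content-only package is normally marked noarch, which is what lets one RPM built on Fedora install on any architecture and, paths permitting, on other RPM-based distributions:

BuildArch: noarch                 # in the hypothetical my-icon-theme.spec: no compiled code, one package fits all
rpmbuild -bb my-icon-theme.spec   # produces my-icon-theme-<version>-<release>.noarch.rpm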
_unix.296972
I am using FreeBSD 10.2 using ZFS on root as the file system (zroot01). I have an external hard disk with a ZFS file system from another FreeBSD 10.2 system (zroot02) that I want to temporarily mount, read only, so I can get some files off of it, then disconnect it afterward. I don't want the external ZFS system to clobber or replace my current file system, nor do I want the data on the external to be corrupted/altered either.To demonstrate what I'm trying to accomplish, if I was using UFS I'd do something like this:mount -t ufs -o ro /dev/ada0s2 /mnt/my-fun-mountpoint...where /dev/ada0s2 is the partition on my external drive and /mnt/my-fun-mountpoint is in the /mnt directory of my existing operating system.All of the searching and man page reading has not provided a crystal-clear method for doing so. What answers I did find ended up taking over my current file system and corrupting it beyond repair -- obviously not the result I'm looking for. I attempted this a while ago so I don't remember which commands I tried, unfortunately.Can you please provide some clear guidance on how to do this? Thank you in advance for your help.
How to mount external ZFS file system without clobbering/altering current or external filesystem
mount;zfs
Well, it really depends on how read-only you want the pool to be. And no, that's not a joke.First, a bit of terminology: in ZFS, you import a pool, and optionally mount the (any) file systems within it. You can import a pool without mounting any file systems by passing -N to zpool import and then later on mount any desired file systems using zfs mount. (This is a perfectly valid scenario if, for example, you want to access only a single file system out of many, or if you want to do something resembling an off-line scrub of the pool.)ZFS isn't a big fan of truly read-only access. For example, if ZFS detects an error that it is able to repair, I believe it will repair the error and write the repaired data to disk even if you imported the pool as read-only. My understanding is that, in ZFS parlace, read-only applies only to the user-visible state of the pool and its datasets. If, on the other hand, you make a binary copy of the disk to a file (or set of files), make those files truly read only, and try to import the pool from there, ZFS won't be able to import the pool at all no matter how hard you try. If you make the files writable, it will work fine. (I actually tried this just a few weeks ago, albeit using a zvol, and ZFS vehemently refused to import the pool. When I set the zvol to read/write instead of read-only, the pool imported fine.) Other file systems like (on Linux) ext4 and probably others handle this situation somewhat gracefully, but ZFS balks.If you are unlucky, and don't have ECC RAM installed in the system where you are importing the pool, then ZFS' attempting to correct any errors it encounters might actually make things worse, although opinions differ on whether this is actually a real risk in practice. Personally I am of the opinion that any data I care enough about to protect with ZFS and snapshots and storage-level redundancy and backups and whatnot deserves the protection offered by ECC RAM also, but many PCs don't have ECC RAM.So, you can import the pool in read-only mode, with a specific alternate root to keep it from stepping on anything else's toes, but you need to be aware that it isn't necessarily truly read-only in a forensic sense. (It will, however, ensure that you don't accidentally change anything in the pool.) To do a read-only import, assuming that the pool is named tank and that the device node(s) is/are available in /dev, you would use a command like:# zpool import tank -d /dev -o readonly=on -R /mnt/someplaceThis will look in /dev for anything holding a ZFS pool with the name tank, import it, temporarily setting the pool property readonly to on (which means that all user-initiated writes will be rejected) and temporarily setting its altroot property to /mnt/someplace. (These property values are temporary in the sense that they are not persisted to the disk(s) as current property values, so if you export and re-import the pool without them, the values will be back to normal. They might possibly be written to the pool history though, which once the pool is imported you can look at with zpool history tank if you are so inclined.) 
Once the pool is imported, you will see your files under /mnt/someplace and have normal, read-only access to them, including any snapshots that are already made on the datasets in the pool.Given your example, I suspect that you would use something along the lines of:# zpool import zroot02 -d /dev -o readonly=on -R /mnt/my-fun-mountpointWhen you are done, remember to cleanly export the pool:# zpool export tankor perhaps# zpool export zroot02That will unmount all file systems and other datasets within the pool, flush all buffers (to the extent that any need flushing in the first place), mark the pool as not imported on all constituent devices, and perform any other necessary housekeeping tasks to ensure that the pool can safely be moved to a different system and imported there later.
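Once the read-only import succeeds, confirming what got mounted and copying the data off might look like this (the destination path is a placeholder):

zfs list -r zroot02                                      # show the pool's datasets and mountpoints
rsync -a /mnt/my-fun-mountpoint/ /backup/from-zroot02/   # copy the files somewhere safe
zpool export zroot02                                     # cleanly detach the pool when finished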
_unix.280871
I have python3 installed on a work computer.Python 3.4.3 (default, May 3 2016, 09:46:33) [GCC 4.4.7 20120313 (Red Hat 4.4.7-16)] on linuxType help, copyright, credits or license for more information.The interactive editor is not working. I can't use emacs control sequences, for example. I just get ^A displayed instead of going to the beginning of my line.There's mention of the feature here:https://docs.python.org/3.4/tutorial/interactive.htmlIt says:Some versions of the Python interpreter support editing of the current input line and history substitution, similar to facilities found in the Korn shell and the GNU Bash shell. This is implemented using the GNU Readline library, which supports various styles of editing. The docs don't say anything about needing to enable this feature, which versions of the Python interpreter support editing, or if there is perhaps something in the build process, assuming Python3 was built from source, that made the GNU Readline library not work. And, I've googled a bunch to see how I might fix the problem with no luck.The odd thing is that there is Python 2 installed on the same machine and it supports interactive editing just fine. And, the Python 3 installed on my home machine works just fine too.
Python3 not coming up in interactive mode
python3;interactive
I was the tech working on the issue and found how to get interactive editing to work. The problem with going through yum is that, since the OS relies on so much Python, we can't update it through yum (company policy). I had to compile Python 3.4.3 from source. After it was compiled and installed, I had to add each package that was missing. This particular package was gnureadline (readline itself is deprecated). Here are the steps I took to build and install the package (for CentOS 6.7):

wget https://pypi.python.org/pypi/gnureadline/6.3.3
tar -xzvf gnureadline-6.3.3.tar.gz
cd gnureadline-6.3.3
python3 setup.py install

NOTE: Here I ran into the issue /usr/bin/ld: cannot find lncurses. Using /usr/bin/ld lncurses --verbose I found that the paths it was searching didn't have the libraries. I created a symlink and it worked. If you don't get the errors, skip to the last step.

ln -s /lib64/libncurses.so.5.7 /usr/lib64/libncurses.so
python3 setup.py install

After that I verified that I can use Ctrl-A and the arrow keys to move around on the line.
_unix.210881
Is it possible to get the title of a window (e.g., gnome-terminal) via a terminal command, without installing any outside packages such as xdotool, xprop or wmctrl?Much appreciated.
Get window name on Red Hat GNOME
linux;rhel;gnome;gnome terminal
null
_webapps.31594
This week, my Facebook account and those of all my friends seem to have had a huge breach of privacy. At the side of everybody's timeline is a strip with year/month navigation. Clicking on a year jumps to the top of that stream. Highlighted there is a box saying NNN friends posted on XYZ's timeline, which proceeds to list them. At first glance, this looks like wall posts, but digging into older years it looks like huge numbers of private messages are now showing up there as well. In fact, a series of warnings about this privacy breach is being forwarded, chain-letter style, in my friends' status updates. I've seen several news articles saying that these are not actually private messages but only wall posts. Looking through my own and my friends' walls, this seems preposterous; there are huge numbers of messages that would certainly have been private. Why did these private messages start showing up in my timeline, and how can I remove them? Is there a way to do it without also removing all normal wall posts from my timeline?
Why are private messages all over Facebook timelines?
facebook;privacy;facebook timeline
null
_softwareengineering.338140
Most answers I see online are You don't need a contract to consume RESTful services. But currently, consuming endpoints is one of the biggest time commitment issues in our .NET environment. Oh how easy it would be to consume a WADL.For example take this WADL. This is something that needs to be consumed.<resources base=http://domain/api/rest/> <resource path=AssignID> <method id=assignId name=POST> <request> <ns2:representation xmlns:ns2=http://wadl.dev.java.net/2009/02 xmlns= element=StudentObject mediaType=application/xml/> </request> <response> <ns2:representation xmlns:ns2=http://wadl.dev.java.net/2009/02 xmlns= element=StudentAssignmentResult mediaType=application/xml/> </response> </method> </resource>... two hundred more methods/resources</resources>And all I need to do is call a very simple method.StudentAssignmentResult stuResult = AssignID.assignId(Wadl.Post, StudentObject stuObj);If your endpoint needed something like thisapi/rest/AssignID/assignId/{name}/{ssn}This would just become a method parameter. StudentAssignmentResult stuResult2 = AssignId.assignId(Wadl.Get, UriString name, UriInt ssn);
Has Anyone Included Consuming WADL in .Net Yet?
api;api design
null
_unix.327920
I just installed a fresh server with CentOS 7 Minimal on it and configured my SSH, FTP, SMB... But then when I tried to create a physical volume on a disk, the CLI returned that none of the LVM (Logical Volume Manager) commands were found, so I tried to install an lvm package, but there wasn't any. I started googling my problem, but I couldn't find anyone with the same problem, a solution, or even any documentation on the absence of LVM in the minimal package of CentOS 7 1151. So my question is: how can I install the LVM commands so I can manage my storage on this server, which I had been and still am planning on using as my main storage server?
LVM absent Centos 7 Minimal
linux;centos;lvm;storage
null
_unix.74101
I have a system with an unrecoverable /usr partition. Terrified the drives are going bad, I've got it booted into a LiveCD environment, and I can't remember what the install architecture was, the most I have is it's CentOS 5.5.Because of the Live environment, none of the standard methods work such as uname or checking /proc.Here is the kernel that was used: vmlinuz-2.6.18-194.32.1.el5Is there anything I can scan the file for to figure out if the architecture is 32 or 64 bit?Or something else I can look at on the file system? Nothing in /usr will work because that partition is now dead.
Determining Linux architecture from files
linux;centos;cpu architecture
null
_softwareengineering.338180
I have a question about the object X.equals(Y).I use Sonar and it says that I have to move the string literal on the left side of this string comparison: !date.equals().So I did that: !().equals(date) but I don't really know if it is right or not.
Question about the Java objects' equals() method
java
null
_cs.76922
Let $\Phi$ be a k-CNF and $\Phi_{min}$ be a minimal CNF (one that contains smallest amount of literal occurences) that is equal to $\Phi$.Can $\Phi_{min}$ contain a clause of size $m > k$?What I have tried:Let's define the concept of partial assignment: asingnment that has free variables. Example: $x_1 = 0, x_2 = \{0\ ,1\}, x_3 = 1$. Here $x_2$ is a free variable.If $\Phi$ contains clause $C(p)$, then $\Phi(\overline p) =0$.Example: $\Phi = (x_1\lor x_2\lor x_3)\land (x_2\lor x_3\lor x_4)$. Here $\Phi(x_1=0,x_2=0,x_3=0)=0$.Going further, if $\Phi$ is k-CNF, it means that shortest unsatisfied partial assignment has length $l\leq k$.Also, formula already contains info about all unsatisfied partial assignments.P.1 and p.2 says, that we don't need to use partial assignment of length $l>k$ to express the formula.One more statement, $\Phi(p)=0\Rightarrow \Phi(p, x_i)=0$, where $x_i$ is fixed variable that is not in $p$.Here is where I got stuck: $\Phi_{min}$ contains smallest amount of shortest inverted partial assignments of formula $\Phi$. Let's say that each of partial assignments $p_1, p_2$ has length $l$, such that $\Phi(p_1) =0, \Phi(p_2)=0$. Can we change them to one longer partial assignment $p$?Restrictions are following: if you'll change or remove any variable in $p_1$ or $p_2$ (we'll call them $p'_1$ and $p'_2$ respectively), then $\Phi(p'_1) = 1$ and $\Phi(p'_2)=1$.Intuitively it seems that they can't be combined, but what about logic?
Can minimal CNF contain clause longer than initial CNF?
boolean algebra;normal forms
null
_codereview.32138
I wrote DSSudokuSolver - a sudoku solving algorithm a while back. Is there any possibility that this algorithm can be improved?Original Algorithm:CleanElements = function(comp_ary, Qsudoku){ for(i=0; i<9; i++){ for(j=0; j<9; j++){ /*if(Qsudoku[i][j] != ){ comp_ary[i][j]=[]; }*/ for(k=0; k<9; k++){ i_index = comp_ary[i][k].indexOf(Qsudoku[i][j]); if(i_index != -1){ comp_ary[i][k].splice(i_index, 1); } j_index = comp_ary[k][j].indexOf(Qsudoku[i][j]); if(j_index != -1){ comp_ary[k][j].splice(j_index, 1); } } if(i < 3){ i_min = 0; i_max = 2; } else if(i < 6){ i_min = 3; i_max = 5; } else{ i_min = 6; i_max = 8; } if(j < 3){ j_min = 0; j_max = 2; } else if(j < 6){ j_min = 3; j_max = 5; } else{ j_min = 6; j_max = 8; } for(i_box=i_min; i_box<=i_max; i_box++){ for(j_box=j_min; j_box<=j_max; j_box++){ index = comp_ary[i_box][j_box].indexOf(Qsudoku[i][j]); if(index != -1){ comp_ary[i_box][j_box].splice(index, 1); } } } } } return comp_ary;}FindElements = function(comp_ary, Qsudoku){ for(i=0; i<9; i++){ for(j=0; j<9; j++){ if(comp_ary[i][j].length == 1){ if (Qsudoku[i][j] == ){ Qsudoku[i][j] = comp_ary[i][j][0]; comp_ary[i][j] = []; } } } } return Qsudoku;}IsThereNullElement = function(Qsudoku){ for(i=0; i<9; i++){ for(j=0; j<9; j++){ if(Qsudoku[i][j] == ){ return false; } } } return true;}InitEmptyArray = function(){ empty_ary = Array(); for(i=0; i<9; i++){ empty_ary[i] = Array(); for(j=0; j<9; j++){ empty_ary[i][j] = Array(); for(k=0; k<9; k++){ empty_ary[i][j][k] = (k+1).toString(); } } } return empty_ary;}DSSolve = function(Qsudoku){ comp_ary = InitEmptyArray(); //Complementary Array window.comp_ary_old = comp_ary; IterationMax = 5000; while(true){ IterationMax -= 1; comp_ary = CleanElements(comp_ary, Qsudoku); console.log(comp_ary); if(window.comp_ary_old == comp_ary){ //implement this. } else{ window.comp_ary_old = comp_ary; } Qsudoku = FindElements(comp_ary, Qsudoku); //console.log(Qsudoku); if(IsThereNullElement(Qsudoku)){ return Qsudoku; } if(IterationMax == 0){ return null; } }}
DSSudokuSolver - A JavaScript Sudoku solving algorithm
javascript;optimization;algorithm;sudoku
It's not a huge improvement, just taking a stab at a few slight tweaks:var sudoku = { CleanElements:function(comp_ary, Qsudoku){ var i_factor, j_factor, i_min, i_max, i_index, j_index, index; for(var i=9; i--;){ i_factor = (3*Math.floor(i/3)); i_min = 6 - i_factor; i_max = 8 - i_factor; for(var j=9; j--;){ j_factor = (3*Math.floor(j/3)); j_min = 6 - j_factor; j_max = 8 - j_factor; for(var k=9; k--;){ i_index = comp_ary[i][k].indexOf(Qsudoku[i][j]); j_index = comp_ary[k][j].indexOf(Qsudoku[i][j]); if(i_index !== -1){ comp_ary[i][k].splice(i_index,1); } if(j_index !== -1){ comp_ary[k][j].splice(j_index,1); } } for(var i_box=i_max; i_box>=i_min; i_box--){ for(var j_box=j_max; j_box>=j_min; j_box--){ index = comp_ary[i_box][j_box].indexOf(Qsudoku[i][j]); if(index !== -1){ comp_ary[i_box][j_box].splice(index, 1); } } } } } return comp_ary; }, FindElements:function(comp_ary, Qsudoku){ for(var i=9; i--;){ for(var j=9; j--;){ if(comp_ary[i][j].length === 1){ // in case you were specifically checking that it was an empty string and not a null / undefined / etc, change to Qsudoku[i][j] === '' if (Qsudoku[i][j].length === 0){ Qsudoku[i][j] = comp_ary[i][j][0]; comp_ary[i][j] = []; } } } } return Qsudoku; }, IsThereNullElement:function(Qsudoku){ for(var i=9; i--;){ for(var j=9; j--;){ // same here, change to === '' if specifically needed if(Qsudoku[i][j].length === 0){ return false; } } } return true; }, InitEmptyArray:function(){ var empty_ary = Array(); for(var i=9; i--;){ empty_ary[i] = Array(); for(var j=9; j--;){ empty_ary[i][j] = Array(); for(var k=9; k--;){ empty_ary[i][j][k] = (k+1)+''; } } } return empty_ary; }, DSSolve:function(Qsudoku){ var self = this, comp_ary = self.InitEmptyArray(), Qsudoku; this.comp_ary_old = comp_ary; for(var i=5000; i--;){ comp_ary = self.CleanElements(comp_ary, Qsudoku); // console.log(comp_ary); if(sudoku.comp_ary_old === comp_ary){ // implement this. } else { sudoku.comp_ary_old = comp_ary; } Qsudoku = self.FindElements(comp_ary, Qsudoku); // console.log(Qsudoku); if(self.IsThereNullElement(Qsudoku)){ return Qsudoku; } if(i === 0){ return null; } } }};And then you call it with this (Qsudoku being the value you want to pass in):sudoku.DSSolve(Qsudoku);Quick breakdown of changes:changed all for loops and final while loop to decrement (faster in all browsers)changed == '' to .length === 0 (faster in all browsers)applied strict comparison === rather than implicit == (faster on certain browsers)changed multiple if/else if/else statements to applying Math.floor to compute reduction factorencapsulated all functions within object to allow for use of object comp_ary_old (instead of using window)added explicit var statement for variable declaration (prevents bubbling up to window)moved variables to top of respective function and assigned value at point where the fewest loops occur while retaining value integritychanged the .toString() function to the +'' trick (its a miniscule improvement, more of a squeeze every byte thing, so if you would rather stick with code clarity switch it back to .toString())I haven't tested this at all, so no benchmarks to show if it actually improves performance, but theoretically it should maintain your code operations while executing faster. Figured it was worth a shot, since no one else answered. Hope it helps!
_softwareengineering.141607
I'm working on a project (for college) in C++.The goal is to write a program that can more or less simulate a beamof particles flying trough the LHC synchrotron. Not wanting to rush into things, me and my team are thinking about how to implement this and I was wondering if there are general design patterns that are used to solve this kind of problem. The general approach we came up with so far is the following:there is a World that holds all objectsyou can add objects to this world such as Particle, Dipole and Quadrupoletime is cut up into discrete steps, and at each point in time, for each Particle the magnetic and electric forces that each object in the World generates are calculated and summed up (luckily electro-magnetism is linear).each Particle moves accordingly (using a simple estimation approach to solve the differential movement equations)save the Particle positionsrepeatThis seems a good approach but, for instance, it is hard to take into account symmetries that might be present (such as the magnetic field of each Quadrupole) and is this thus suboptimal. To take into account such symmetries as that of the Quadrupole field, it would be much easier to (also) make space discrete and somehow store form of the Quadrupole field somewhere. (Since 2532 or so Quadrupoles are stored this should lead to a massive gain of performance, not having to recalculate each Quadrupole field)So, are there any design patterns? Is the World-approach feasible or is it old-fashioned, bad programming? What about symmetry, how is that generally taken into acount?
Are there design patterns or generalised approaches for particle simulations?
c++;object oriented;design patterns
Check into the graphics community for particle physics simulations. There are many existing patterns (physics solvers and such for particle systems), and it is a well-understood concept for that community, and there are many implementations. You will save yourself a great deal of time.
_unix.15584
I'm just putting together a machine with eight 2TB disks. I will be using RAID 6 (12TB of usable capacity) on top of them, but I'm not sure whether I should put LVM on top of the RAID, or what filesystem to use. What filesystems can be resized when used inside LVM?
Choosing filesystem for a 16TB Raid
filesystems;lvm;raid
With lvm on top of a raid device you are flexible to create multiple virtual devices (and filesystems) on it. And you are flexible to change the size of those devices.If you are 100% sure that you don't need that and you only need one big filesystem, then you can directly create the filesystem on your raid device. One layer of indirection and complexity is removed in that case.To choose a filesystem, the most important points are:should be well tested and stableshould be mainstream enoughgood performance of courseThat means one is usually conservative when it comes to filesystems.Using these criteria you have basically 3 choices on Linux (as of 2011-06:ext3ext4xfsOn big devices I use xfs because a mkfs.xfs is way faster.All of these filesystems can be resized.Update:I did a small benchmark on a 3 TB device (using 4k blocksize in all filesystems):$ awk -F\; -f mkfs.awk mkfs FS SIZE(TB) TIME(S) RSS(MB) SPEEDUP SPACEUP ext3 1 217 37 1.00 1.00 ext3 2 478 74 1.00 1.00 ext3 3 829 111 1.00 1.00 ext4 1 139 37 1.55 1.00 ext4 2 298 74 1.60 1.00 ext4 3 515 111 1.61 1.00 xfs 1 5 2 43.23 17.01 xfs 2 9 2 51.43 33.49 xfs 3 15 2 54.73 50.05(The speed/mem-up is against ext3)(System: Debian 6.0 amd64, mkfs.ext 1.41.12, mkfs.xfs 3.1.4, WD SATA drive, hdparm -t about 120 MB/s buffered disk reads)That means mkfsing a ext[34] filesystem is up to 54 times slower than mkfsing a xfs one. Approximating this to a 12 TB creating a ext fs would really take about an hour (xfs only about a minute).
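If you do put LVM on top of the RAID device, a minimal sketch of the stack could look like this (the md device, volume group name, mountpoint, and sizes are placeholders):

pvcreate /dev/md0                  # turn the RAID device into an LVM physical volume
vgcreate bigvg /dev/md0            # one volume group on top of it
lvcreate -L 4T -n data bigvg       # carve out a 4 TB logical volume
mkfs.xfs /dev/bigvg/data           # create the filesystem on the logical volume
lvextend -L +1T /dev/bigvg/data    # growing later is a two-step operation...
xfs_growfs /srv/data               # ...xfs grows while mounted; ext3/4 would use resize2fs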
_codereview.21685
i created a database class from a good tutorial and wanted to put it up here so it would get in some search results. it took me about 2 days to find it. also i added a few custom functions to it.. here it is :P and if there is something that can be done better or more proficiently please feel free to let me know.config.php:// Database Constantsdefined('DB_HOST') ? NULL : define('DB_HOST', 'edit:host');defined('DB_USER') ? NULL : define('DB_USER', 'edit:user');defined('DB_PASS') ? NULL : define('DB_PASS', 'edit:pass');defined('DB_NAME') ? NULL : define('DB_NAME', 'edit:databasename');database.class.php:class Database {private $dbhost = DB_HOST;private $dbuser = DB_USER;private $dbpass = DB_PASS;private $dbname = DB_NAME;private $dbh;private $error;private $stmt;public function __construct() { // set DSN $dsn = 'mysql:host=' . $this->dbhost . ';dbname=' . $this->dbname; // set OPTIONS $options = array( PDO::ATTR_PERSISTENT => TRUE, PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION ); // Create a new PDO instance try { $this->dbh = new PDO($dsn, $this->dbuser, $this->dbpass, $options); } catch (PDOException $e) { $this->error = $e->getMessage(); }}public function query($query) { $this->stmt = $this->dbh->prepare($query);}public function selectQuery($table, $fields, $FieldToQuery, $value) { try { if ((gettype($fields) != 'array') || (gettype($value) != 'array')) { $fields = (array) $fields; $FieldToQuery = (array) $FieldToQuery; $value = (array) $value; } $holders = $FieldToQuery; for ($i = 0; $i < count($holders); $i++) { $holders[$i] = ':' . $holders[$i]; } $array = array_combine($holders, $value); $query = 'SELECT ' . implode(',', $fields) . ' FROM ' . $table . ' WHERE ' . implode(',',$FieldToQuery) . ' = ' . implode(',', $holders); $this->query($query); $this->bindArray($array); $rows = $this->resultset(); return $rows; } catch (PDOException $e) { $this->error = $e->getMessage(); }}public function insertQuery($table, $fields, $values) { try { if ((gettype($fields) != 'array') || (gettype($values) != 'array')) { $fields = (array) $fields; $values = (array) $values; } $holders = $fields; for ($i = 0; $i < count($holders); $i++) { $holders[$i] = ':' . $holders[$i]; } $array = array_combine($holders, $values); $query = 'INSERT INTO ' . $table . '(' . implode(',', $fields) . ') VALUES (' . implode(',', $holders) . 
')'; $this->query($query); $this->bindArray($array); $this->execute(); } catch (PDOException $e) { $this->error = $e->getMessage(); }}public function bindArray($array) { foreach ($array as $key => $value) { $this->bind($key, $value); }}public function bind($param, $value, $type = null) { if (is_null($type)) { switch (true) { case is_int($value): $type = PDO::PARAM_INT; break; case is_bool($value): $type = PDO::PARAM_BOOL; break; case is_null($value): $type = PDO::PARAM_STR; } } $this->stmt->bindValue($param, $value, $type);}public function execute() { $this->stmt->execute();}public function resultset() { $this->execute(); return $this->stmt->fetchAll(PDO::FETCH_ASSOC);}public function single() { $this->execute(); return $this->stmt->fetchAll(PDO::FETCH_ASSOC);}public function rowCount() { return $this->stmt->rowCount();}public function lastInsertId() { return $this->dbh->lastInsertId();}public function beginTransaction() { return $this->dbh->beginTransaction();}public function endTransaction() { return $this->dbh->commit();}public function cancelTransaction() { return $this->dbh->rollBack();}public function debugDumpParams() { return $this->stmt->debugDumpParams();}}here is the link http://culttt.com/2012/10/01/roll-your-own-pdo-php-class/
PHP PDO Custom class ple
php;mysql;classes;pdo
null
_webapps.53348
With the new (May-2013) version of Google Maps, is it possible to display results for several search queries on the same map--similar to how it worked in classic Google Maps? (see the green marks for McDonalds and red for Burger King on the screenshot below)
display results for multiple queries on new Google Maps: possible; how?
google maps
null
_datascience.22225
I'm using the topicmodels package for R to cluster a big set of short texts (between 10-75 words) into topics. After manually reviewing a few models it seems like there are 20 realtivly stable topics. However, what I find really weird is that they are all roughly the same size! Each topic catches around 5% of tokens and 5% of texts. In terms of the tokens, the smallest topic is 4.5% the largest 5.5%.Can anybody suggest if this a 'normal' behaviour? This is the code I'm using:ldafitted <- LDA(sentences.tm, k = K, method = Gibbs, control = list(alpha = 0.1, # default is 50/k which would be 2.5. a lower alpha value places more weight on having each document composed of only a few dominant topics delta = 0.1, # default 0.1 is suggested in Griffiths and Steyvers (2004). estimate.beta = TRUE, verbose = 50, # print every 50th draw to screen seed = 5926696, save = 0, # can save model every xth iteration iter = 5000, burnin = 500, thin = 5000, # every thin iteration is returned for iter iterations. Standard is same as iter best = TRUE)) #only the best draw is returnedIn short: My question is if there are circumstances under which it is reasonable that Latent Dirichlet allocation will cluster text in topics of equal size? Or is it something I should be worried if it happens?
Equally sized topics in Latent Dirichlet allocation
topic model;lda
null
_webmaster.86103
Based on the information below, I understand having a datePublished for the actual post on the WordPress blog index page, but a datePublished is also required for the actual blog. How does Google treat the datePublished at the LiveBlogPosting level (not the liveBlogUpdate level)? Is it the date that the blog was published? Is it the date of the most recent post that was published? Is it the date the blog was last modified? Or maybe something else?
Two datePublished for LiveBlogPosting
seo;blog;schema.org;rich snippets;dates
From Schema.orgs perspective:The datePublished property for the LiveBlogPosting gives the publication date of the blog post, i.e., when it was first published, typically saying something happened and then the live blogging begins.The datePublished property for each BlogPosting referenced with the liveBlogUpdate property gives the publication date of that update.Both have nothing to do with the date of the last modification. This can be given in the dateModified property: For the LiveBlogPosting you could use dateModified each time another update gets posted, but that would be redundant (as it would be the same date as the datePublished of the newest update). I would only use it for modifications that happen after the live blogging stopped.For the referenced BlogPosting items, you could use dateModified if the update, after it got published, gets modified (but thats probably rather uncommon, as typically a new update gets posted with a correction instead).From Googles perspective, it wouldnt make any sense not to follow Schema.orgs definitions. However, they dont seem to have a Rich Snippet (or similar product) that makes use of LiveBlogPosting (or do they?).
_codereview.137958
I have subclassed Qt QAbstractTableModel with QJsonDocument as data source which I have reimplemented the setData() method in:bool UeJsonPlacesTableModel::setData(const QModelIndex& index, const QVariant& value, int role){ if(role!=Qt::EditRole|| index.row()<0|| index.row()>=this->m_ueJsonData.isArray()?this->m_ueJsonData.array().size():this->m_ueJsonData.isObject()?this->m_ueJsonData.object().size():0|| index.column()<0|| index.column()>=this->m_ueJsonData.isArray()?this->m_ueJsonData.array().size():this->m_ueJsonData.isObject()?this->m_ueJsonData.object().size()>0?this->m_ueJsonData.object().size():0:0) { return false; } // if QVariantList dataList=this->m_ueJsonData.toVariant().toList(); QVariantMap dataVariantMap=this->m_ueJsonData.toVariant().toList().at(index.row()).toMap(); QVariantMap::const_iterator dataIterator=dataVariantMap.constBegin(); int dataIndex=0; QString keyName=QString(); QString dataValue=QString(); while(dataIterator!=dataVariantMap.constEnd()) { if(dataIndex==index.column()) { keyName=dataVariantMap.keys().at(dataIndex); } else { dataIterator++; dataIndex++; } // if } // while QVariantMap changedData; changedData.insert(keyName, value.toString()); dataList.replace(index.row(), changedData); this->m_ueJsonData=QJsonDocument::fromVariant(dataList); emit(dataChanged(index, index)); return true;} // setDataMy humble opinion is that the code is very ugly. Can someone show me guidelines for its optimisation?
Reimplemented QAbstractTableModel::setData
c++;mvc;json;qt
if(role!=Qt::EditRole|| index.row()<0|| index.row()>=this->m_ueJsonData.isArray()?this->m_ueJsonData.array().size():this->m_ueJsonData.isObject()?this->m_ueJsonData.object().size():0|| index.column()<0|| index.column()>=this->m_ueJsonData.isArray()?this->m_ueJsonData.array().size():this->m_ueJsonData.isObject()?this->m_ueJsonData.object().size()>0?this->m_ueJsonData.object().size():0:0){Stop. Regardless of what you're doing here, you should not be doing so many things in one condition. This looks like you need to write a new function instead.Talking about functions, they'd greatly improve the readability and maintainability of your code. Now you got everything in one big bool. Your while could use it's own function. The data changing, inserting and replacing could probably use a wrapper as well. If you give those functions meaningful names, the readability will increase big time!Imagine the following:You haven't touched this code in 6 months. Now you want to add a feature. To add this feature, you try to understand how this code works and why you wrote it like this. Since it has been a while, you can't rely on your memory. How long would it take you to grasp the workings of this function?Exactly. Way longer than necessary. Now imagine you share this code. Like you did here. How long does it take someone not familiar with your program to figure it out? Way longer than necessary.For the sake of your future self, improve the readability of the code by splitting things up.
_webapps.73473
I'd like to pass a query param to the form so that it pre-fills at least one of the fields. Is this possible when embedding on a website?
Cognito Forms: pre-fill field with query parameter
cognito forms
I am a developer for Cognito Forms.You can pre-fill select fields on your form by modifying the embed code that you place on your website. First off lets start with your normal embed code:<div class=cognito><script src=https://services.cognitoforms.com/include/required></script><script src=https://services.cognitoforms.com/session/script/iibc3e48-82t9-4642-b097-dp442bc9d123></script><script>Cognito.load(forms, { id: 1 });</script></div>We will be adding new code to the line that contains the script tag:<script>Cognito.load(forms, { id: 1 });</script>We will be pre-filling a name field, by targeting the field by label name, in this case Name:<script>Cognito.load(forms, { id: 1, entry: {Name: {First: John, Last: Smith } });</script>We are targeting the field Name and we are setting the value for First to John, and the value for Last to smith.Once added your new embed code will look like this:<div class=cognito> <script src=https://services.cognitoforms.com/include/required></script> <script src=https://services.cognitoforms.com/session/script/iibc3e48-82t9-4642-b097-dp442bc9d123></script> <script>Cognito.load(forms, { id: 1, entry: {Name: {First: John, Last: Smith } });</script> </div>The Name field is not the only field that can be pre-filled. You can use this method on any of our other fields, with the exception of the File Upload field.
_webapps.62606
I am using Gmail. I received an e-mail from A, forwarded it to B, and then replied to the A e-mail. Does A see my conversation with B about A's e-mail?
Does replying to an e-mail that was forwarded include the original sender in the conversation?
gmail
null
_cs.66166
As my title points out, I don't understand how to show that, in general, the diameter of an MST (minimum spanning tree) can be bigger than the diameter of $G$ by a factor of $\Omega(n)$ $(n := |V|, G = (V,E))$. And I don't understand how big $\Omega(n)$ is and why $\Omega$ is used. I made an example where the distance between two nodes in $G$ is smaller than the distance between the two nodes in the MST. Is it correct to write $D_{MST} = D_{G} \cdot \Omega(n)$? Or $\Omega(n) = D_{MST}/D_{G}$? ($D_{G} =$ diameter of graph $G$ and $D_{MST} =$ diameter of the MST.)
Show that the diameter of a MST is sometimes larger by a factor $\Omega(n)$ than the diameter of the graph $G$
graphs;graph theory;spanning trees
You say that you don't understand how to show that the diameter of a MST can be $\Omega(n)$ larger than the diameter of the underlying graph.However, it seems that what you don't understand is what it means that the diameter of a MST can be $\Omega(n)$ larger than the diameter of the underlying graph.It is impossible to prove a theorem if you don't understand its statement.Here is what you are asked to do. Given a weighted graph $G$, we will use the following notation:$n(G)$ is the number of vertices in $G$.$D(G)$ is the diameter of $G$.$D_{MST}(G)$ is the minimum diameter of a minimum spanning tree of $G$.Give a sequence of graphs $G_i$ such that $n(G_i) \to \infty$ and $D_{MST}(G) = \Omega(n(G)) \cdot D(G)$.It seems that you are not familiar with the notation $\Omega(\cdot)$. It is the counterpart of big O, and described in the answers to this question.Unpacking the big $\Omega$ notation, we can rephrase the question:Give a sequence of graphs $G_i$ such that $n(G_i) \to \infty$ and there exists a constant $C>0$ such that $D_{MST}(G_i) \geq C n(G_i) D(G_i)$.The question is somewhat ambiguous it's not clear if you need to provide an example for every $n$ or just for infinitely many $n$. I assumed the latter, but in fact in your case there is a sequence $G_i$ with $n(G_i) = i$, so you can try proving this stronger statement instead.
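For a concrete construction (my own illustration, not something given in the answer above): take the complete graph $K_n$ on vertices $v_1, \dots, v_n$, give each path edge $\{v_i, v_{i+1}\}$ weight $1$ and every other edge weight $2$. A spanning tree that uses $k$ edges of weight $2$ has total weight $n-1+k$, so the unique MST is the Hamiltonian path $v_1 v_2 \cdots v_n$, whose diameter is $n-1$. On the other hand, any two vertices are joined directly by an edge of weight at most $2$, so $D(G_n) \leq 2$. Hence $D_{MST}(G_n)/D(G_n) \geq (n-1)/2 = \Omega(n)$, which is exactly what the exercise asks for.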
_webmaster.57736
I'd like to insert an image on a webpage with an alt property. But the text length I'd like to use for this alt property is pretty long: about 200 words (slightly less than 1000 characters). Moreover this text has some line breaks.I have some doubts that such a long alt property won't be appreciated by search engines. So do I need to follow some guidelines regarding the length of an alt property? I have the same question regarding the title property.
Are there any length limitations for image alt or title property?
seo;html;alt attribute;title attribute
So do I need to follow some guidelines regarding the length of an alt property?There is no set maximum length for the value of alt tags, however, most advise for the benefit of sight impaired users to keep it under 125 characters - see this and this.I have the same question regarding the title property.In regards to title attributes, like with the alt text, there is no set maximum length. However, since this is used for tooltip text for an element, usability for the visually impaired should also be considered as the W3 covers here, suggesting that the 125 character size would similar apply. You may also find through browser compatibility testing, that the tooltip will get cutoff after a certain length, varying by browser.Since title property might also refer to title element, search engines like Google will truncate your title element, or even compose their own based on your content, if it's too long. As suggested here by MOZ, the goal should be to keep it to under 70 characters or less:Title Tag: Best Practices for LengthAim for title tags containing fewer than 70 characters. This is the limit Google displays in search results. Title tags longer than 70 characters may be truncated in the results, or search engines may choose to display different text from the document in place of the title tag. Recent experiments have shown that the number of characters displayed in the search results may also vary based onamong other thingsthe width in pixels of each letter. 70 characters is still a good general guideline for length, though.
_softwareengineering.301627
I have a few lists of items (which will likely, but not necessarily, have common elements). The lists are known to be sorted, but don't have a comparison function (they were manually put into the system in sorted order). I'd like to combine these lists into one list containing all the elements, in a reasonable order given the existing orderings.
Obviously there are edge cases and decisions to be made - I'm not too worried about the specifics; rather, I'd like a general approach to tackle the problem. Obviously, if the two lists have nothing in common, there isn't much to be done besides appending one to the other, but that won't happen much.
As an example:
l1 = ['Task A', 'Task B', 'Task C', 'Task D']
l2 = ['Task B', 'Task B2', 'Task D', 'Task G']
l3 = ['Task A', 'Task C', 'Task E', 'Task D']
result = ['Task A', 'Task B', 'Task B2', 'Task C', 'Task E', 'Task D', 'Task G']
Again, I realize there is no perfect solution, and the output is not mission critical; I just want it to be reasonably nicely ordered in most cases. Any help would be appreciated.
Merging Sorted Lists (with no comparison function)
algorithms;sorting
This is a well-defined problem with a deterministic solution. You can think of each list as forming part of a directed acyclic graph: each item is a node, and each list contributes an edge from every item to the item that follows it. Then, constructing a merged order is simply a matter of using one of the well-known algorithms to find a topological sorting. The more similarities you have between the lists, the fewer valid topological sorts you will have. If there are no elements in common, the topological sort will work for that case as well: it allows all possible ways to combine the three lists, and you just have to pick the one that suits you best aesthetically.
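To make this concrete, here is a small Python sketch of that approach (my own illustration, using the lists from the question and the standard-library graphlib module, available since Python 3.9). Note that the tie-breaking between items that are not constrained relative to each other is up to the algorithm, so the exact output can differ from the hand-ordered result in the question while still being a valid merge:
from collections import defaultdict
from graphlib import TopologicalSorter  # Python 3.9+

def merge_ordered_lists(*lists):
    # Each adjacent pair (earlier, later) in any input list becomes the
    # constraint "earlier must come before later" -- an edge in a DAG.
    predecessors = defaultdict(set)
    for lst in lists:
        for item in lst:
            predecessors[item]           # make sure every item is a node
        for earlier, later in zip(lst, lst[1:]):
            predecessors[later].add(earlier)
    # static_order() yields one valid topological order; if the input
    # orderings contradict each other, graphlib raises CycleError instead.
    return list(TopologicalSorter(predecessors).static_order())

l1 = ['Task A', 'Task B', 'Task C', 'Task D']
l2 = ['Task B', 'Task B2', 'Task D', 'Task G']
l3 = ['Task A', 'Task C', 'Task E', 'Task D']
print(merge_ordered_lists(l1, l2, l3))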
_unix.186
Could I get ZFS to work properly in Linux? Are there any caveats / limitations?
ZFS under Linux, does it work?
linux;filesystems;zfs
ZFS is not in the official Linux kernel, and never will be unless Oracle relicenses the code under something compatible with the GPL.
This incompatibility is disputed. The main arguments in favor of ZFS being allowed on Linux systems revolve around the so-called arm's length rule. That rule applies in this case only if ZFS is provided as a separate module from the kernel, the two communicate only through published APIs, and both code bases can function independently of each other. The claim then is that neither code base's license taints the other because neither is a derived work of the other; they are independent, but cooperate. Nevertheless, even under this interpretation, the ZFS modules must still be shipped separately from the Linux kernel, which is how we see it being provided today by Ubuntu.
Quite separately from the CDDL vs. GPL argument, NetApp claims to own patents on some technology used in ZFS. NetApp settled its lawsuit with Sun after the Oracle buyout, but that settlement doesn't protect any other Linux distributor (Red Hat, Ubuntu, SuSE...).
As I see it, these are your alternatives:
Use btrfs instead, as it has similar features to ZFS but doesn't have the GPL license conflict, and has been in the mainline kernel for testing since 2.6.29 (released in January 2009). The main problem with btrfs is that it has had a long history of problems with its RAID 5/6 functionality. These problems are being worked out, but each time one of them surfaces, it resets the stability clock. Another concern is that Red Hat have indicated that the next release of Red Hat Enterprise Linux will not include btrfs. One of the reasons Red Hat is taking that position is that they plan to offer similar functionality using a different technology stack they are calling Stratis. Therefore, another option is to wait for Stratis to appear, with 1.0 scheduled for the first half of 2018, presumably to coincide with Red Hat Enterprise Linux 8.
Use a different OS for your file server (FreeBSD, say) and use NFS to connect it to your Linux boxes.
Use ZFS on FUSE, a userspace implementation, which works neatly around the kernel licensing issue at the expense of a significant amount of performance.
Integrate ZFS on Linux after installing the OS. The license conflict makes distributing the combined system outside your organization legally questionable. I am not a lawyer, but my sense is that, patent issues aside, distributing ZFS on Linux is about as worrisome as distributing non-GPL binary drivers (such as those for certain video cards) with the system. If one of these bothers you, the other should, too.
Switch to Ubuntu, which has been shipping ZFS kernel modules with the OS since 16.04. Canonical believes that it is legally safe to distribute the ZFS kernel module with the OS itself. You would have to decide whether you trust Canonical's opinion; consider also that they may not be willing to indemnify you if a legal issue comes up. Beware that it is not currently possible to boot from ZFS with Ubuntu without a whole lot of manual hackery.
Incidentally, btrfs is also backed by Oracle, but was started years before the Sun acquisition. I don't believe the two will ever merge, or that one will be deprecated in favor of the other, due to the license conflict and patent issue. ZFS is too popular to go away, but there will continue to be demand for a ZFS alternative.
_unix.215500
I created a Virtual Machine with Virtualbox - the host system is Linux Mint Cinnamon 17.2, the guest - Windows 8.1 Pro. I enabled all acceleration features in the VM settings.To run the WP8 emulator one needs Hyper-V. But, to my surprise, the Windows guest claims that Hyper-V is not supported.Is it possible to use Hyper-V on a Windows guest?
VirtualBox, Hyper-V and a Linux host
virtualbox;hyper v
Yes, it is now possible to use Hyper-V in a Windows guest OS, but not with VirtualBox. This technology is referred to as nested virtualization.
You can vote up the feature request for VirtualBox here. Unfortunately, that request has been around for 6 years now, and the devs initially indicated that it would only be of limited usefulness. With more and more software relying on virtualization (Windows Mobile emulation, Android emulation, Vagrant, etc.), I would hope that it becomes a higher priority. It's still being actively commented on and requested as recently as 11/16/2015, but as of May 2015 the developers still have different priorities.
As of the Windows 10 Fall Update (and the Windows Server 2016 previews), Hyper-V is now capable of nesting a Hyper-V hypervisor:
Nested virtualization is running virtualization inside a virtualized environment. In other words, nesting allows you to run the Hyper-V server role inside a virtual machine.
(source). The technology is still very new and appears to still be in preview.
The open source Xen hypervisor also claims support for nested virtualization:
Nested virtualization is the ability to run a hypervisor inside of a virtual machine. The hypervisor that runs on the real hardware is called level 0 or L0; the hypervisor that runs as a guest on L0 is called level 1 or L1; a guest that runs on the L1 hypervisor is called level 2 or L2.
source: http://wiki.xenproject.org/wiki/Nested_Virtualization_in_Xen
VMware also has extensive support for multiple nesting scenarios in its commercial products:
Hyper-V requires hardware-assisted virtualization, so it can only be run under ESXi 5.0, Workstation 8, Player 4 or Fusion 4 (or later). Hyper-V performs relatively poorly as a guest hypervisor under ESXi 5.0, but it performs reasonably well under Workstation 8, Player 4 or Fusion 4 (or later). Under Workstation 9, Player 5 or Fusion 5, you should set the guest OS type to Hyper-V.
source: http://communities.vmware.com/docs/DOC-8970
_webmaster.68960
I have been looking at a number of shared hosting providers, and I used domaintools.com services to see how many other domains are hosted at a single IP address. I think HostGator had a high number of domains on one IP address (around 1,300) and DreamHost had a much smaller number - around 40 domains. Should I worry about a high number versus a low number, or is this inconsequential to the performance of the domain, all other things being equal?
The number of sites on a shared host
web hosting;ip address
Both answers are right to a point. I used to be a web host and a registered ISP. I was a presenter at the first ISPCon, known as ISPOne, for USRobotics, and represented well over 1 billion dollars in sales in just the first quarter. I have been out of the industry for quite a while, but not too much has changed except for some of the offerings and some of the technology. The newer technologies, however, are based upon older technologies that have been around for 30 years. I will limit my answer to shared hosting only. Here is what you need to know.
Shared Hosting:
Generally speaking, there are as many sites per computer as possible. I know that is a Duh! statement; however, not all hosts do this. Quality hosts gauge the performance of their servers and move sites around as needed, while others do not care one whit. The common trick in the industry is to put as many sites on a computer as possible and promise more disk space than they actually have. This is because it is rare that anyone uses all of the disk space (as well as CPU and memory) made available to them. Quality hosts will at least monitor performance and allocations and make changes dynamically.
As far as the computers go, they are often the cheapest generic clone computers that they can get away with. Quality hosts will use a name brand of course, but use a cheaper model, such as Dell over HP. Higher quality hosts will use clusters of computers and SAN technology to allocate resources. There are different cluster-based technologies and SAN technologies that allow a high number of computers and hard drives to be allocated within a dynamic space and appear as a single entity.
There are usually huge banks of computers, so it is not practical that all of them have a public IP address. IP addresses within the host's LAN are always private IP addresses. The public-facing IP addresses are on routers or specialized computers/hardware that manage traffic and bandwidth. The public-facing hardware will use NAT and/or a proxy to direct the traffic to the right computer. It is not uncommon for many thousands of standalone computers to be managed by software and sites moved around from computer to computer seamlessly. As well, it is not uncommon that cluster-based technology and SAN technology are used to host a huge number of sites.
Because of this, there is no correlation between the number of domains assigned to an IP address and performance.
Here is what is important: the host's reputation. Period.
It does not matter if a bank of standalone computers is used or large-scale clusters with a SAN. Obviously the latter is preferred within telecom production environments; however, the difference between the two is really minimal for hosting these days due to the options available. For the customer, there should be no difference. What is important is that the host cares about the customer and responds to issues prior to the customer calling. A heads-up monitor, an external monitor, an internal monitor (per machine), and a network monitor should be able to alert the host immediately when things go wrong or performance is suffering. It should be standard practice for the problem to be solved seamlessly, immediately, before the customer even notices. Fail-over of all stripes, hot spares, OTS (on the shelf) pre-configured hardware spares, spare-in-the-air hardware, snapshots and images, dynamic host allocation, and fast networks should make moving and recovery very easy and fast.
If standard practices are observed, the customer should never have a problem that they did not create, and fixing it should be a snap.
On a side note, the claim of 99.999% up-time is a statistical impossibility. It is BS, plain and simple. I worked as a consultant where 100% up-time was required with an SLA (service level agreement) and fees paid to the customer for anything, including a 1-second outage. These fees started at $10,000. In all the years, a fee never had to be paid. This is possible with hosting, but rarely done except at tier 1 providers. Otherwise, expect something within the 97%-98% (point something) range as being standard.
_cs.75927
Consider a while loop of the form $\texttt{while (C) \{S\}}$, with $\texttt{C}$ the condition and $\texttt{S}$ the body of the loop. Let $\texttt{I}$ and $\texttt{V}$ respectively be an invariant and a variant of this loop. The rule for total correctness of while loops is given in my textbook by:
If $\texttt{I} \Rightarrow \texttt{V} \geq 0$
and $[\texttt{I} \land \texttt{C} \land \texttt{V} = v_0] \,\texttt{S} \, [\texttt{I} \land \texttt{V} < v_0]$
then $[\texttt{I}] \, \texttt{while (C) \{S\}} \, [\texttt{I} \land \neg\texttt{C}]$.
From what I think I understand, in order for the loop to terminate, the variant $\texttt{V}$ must strictly decrease and must also be bounded below by zero. However, when I translate that mathematically, I obtain a proposition different from my textbook's: $$[\texttt{V} \geq 0 \land \texttt{V} = v_0] \, \texttt{S} \, [\texttt{V} \geq 0 \land \texttt{V} < v_0]$$
My question: are this last proposition and my textbook's rule saying the same thing about what needs to be proven in order for the loop to terminate? In other words, is $[\texttt{I} \land \texttt{C} \land \texttt{V} \geq 0 \land \texttt{V} = v_0] \,\texttt{S} \, [\texttt{I} \land \texttt{V} \geq 0 \land \texttt{V} < v_0]$ the same as $\texttt{I} \Rightarrow \texttt{V} \geq 0$ together with $[\texttt{I} \land \texttt{C} \land \texttt{V} = v_0] \,\texttt{S} \, [\texttt{I} \land \texttt{V} < v_0]$? Why or why not?
Hoare logic - total correctness of loops
hoare logic
They are equivalent, in the sense that every time you can apply the textbook rule you can also apply your own rule, and vice versa. The invariant for the two rules is similar, but not the same.
Converting a textbook rule instance into an instance of your rule
Suppose we have an application of your textbook rule, i.e., we have found some $\texttt{I}$ for which $\texttt{I} \Rightarrow \texttt{V} \geq 0$ together with $[\texttt{I} \land \texttt{C} \land \texttt{V} = v_0] \,\texttt{S} \, [\texttt{I} \land \texttt{V} < v_0]$.
Then, thanks to the implication above, we also have $\texttt{I} \iff \texttt{I} \land \texttt{V} \geq 0$. Using rule PrePost, we can rewrite the invariant into its equivalent, and we get an application of your rule: $[\texttt{I} \land \texttt{C} \land \texttt{V} \geq 0 \land \texttt{V} = v_0] \, \texttt{S} \, [\texttt{I} \land \texttt{V} \geq 0 \land \texttt{V} < v_0]$. Here, we use the same invariant as in the textbook rule.
Converting an instance of your rule into a textbook rule instance
Now, for the converse direction. Suppose we have found $\texttt{I}$ for your rule: $[\texttt{I} \land \texttt{C} \land \texttt{V} \geq 0 \land \texttt{V} = v_0] \, \texttt{S} \, [\texttt{I} \land \texttt{V} \geq 0 \land \texttt{V} < v_0]$.
Now, we can't assume $\texttt{I} \Rightarrow \texttt{V} \geq 0$, so we can't use $\texttt{I}$ for the textbook rule. However, we can use a new invariant $\texttt{I}' := \texttt{I} \land \texttt{V} \geq 0$. We trivially have $\texttt{I}' \Rightarrow \texttt{V} \geq 0$ by construction (*). Further, from the hypothesis $[\texttt{I} \land \texttt{C} \land \texttt{V} \geq 0 \land \texttt{V} = v_0] \, \texttt{S} \, [\texttt{I} \land \texttt{V} \geq 0 \land \texttt{V} < v_0]$ we can obtain (by PrePost) $[\texttt{I}' \land \texttt{C} \land \texttt{V} = v_0] \, \texttt{S} \, [\texttt{I}' \land \texttt{V} < v_0]$ (**).
Properties (*) and (**) are exactly what we need to apply the textbook rule.
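A tiny worked example (my own, not from the asker's textbook) may make the equivalence more tangible. Take the loop $\texttt{while (x > 0) \{ x := x - 1 \}}$ with variant $\texttt{V} \equiv x$. For the textbook rule, the invariant $\texttt{I} \equiv x \geq 0$ works: $x \geq 0 \Rightarrow x \geq 0$ holds trivially, and $[x \geq 0 \land x > 0 \land x = v_0] \; x := x - 1 \; [x \geq 0 \land x < v_0]$ holds because the body decreases $x$ by one and $x > 0$ beforehand guarantees $x \geq 0$ afterwards. For the asker's rule, the weaker invariant $\texttt{I} \equiv \mathrm{true}$ already works: $[\mathrm{true} \land x > 0 \land x \geq 0 \land x = v_0] \; x := x - 1 \; [\mathrm{true} \land x \geq 0 \land x < v_0]$ holds for the same reason, and the conversion above produces $\texttt{I}' \equiv \mathrm{true} \land x \geq 0$, which is exactly the textbook invariant.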
_softwareengineering.93594
I am working on ASP.NET Web Forms, and it already provides me with a ready-to-use AJAX solution in the form of the UpdatePanel. So should I invest my time in learning how AJAX really works?
Do I need to know how Ajax works since ASP.NET provides the UpdatePanel?
asp.net;ajax
Yes, because:
Update panel runs the entire page lifecycle on the server, while Page Methods or Web Services (AJAX calls) don't.
Update panel sends the entire ViewState back to the server even for small communications, like getting the current server date value, while manual AJAX is in your control and you can send (transfer) less data.
Update panel causes the entire page to be rendered on the server, but only returns the required section, while with AJAX you don't do such wasteful work.
Update panels get messy when they need to be coordinated with each other. In other words, there are many times when you need to make an AJAX call, but then on a successful response you don't want to change the DOM of that zone; rather, you want to manipulate somewhere else. For example, in an email client, when someone clicks on an unread email item, you send an AJAX request to the server to get the email body, but on the success callback you should also update the part of your screen where you announce the number of unread emails, and you should subtract one from that number. These coordinations really become tricky on the server side.
Microsoft's AJAX solutions (not the Ajax Control Toolkit, but the Update Panel, Update Progress, and Timer) were so unintuitive to the web world that Microsoft introduced MVC to keep its place in the market. With Microsoft Ajax, the second call stops the first unfinished call; many times you really need concurrent AJAX calls to the server.
Stop using Update Panel. You will use AJAX someday, whenever you want to work professionally. Thus, use it today.
_softwareengineering.199834
I want to understand the architecture of web apps that use subdomains. I don't think I'm phrasing this well, so let me explain. Many web apps, like tumblr or shopify, create a user's site on a subdomain. Say, for example, my tumblr account was johndoe; then you could find my tumblr blog at johndoe.tumblr.com. Can someone explain how this is implemented?
How do web apps create subdomains?
web development;web applications
Basically, you could either add a new CNAME record to your DNS server for each user (if your hosting/DNS provider gives you that ability), or use a wildcard DNS record and then use some rewrite rules to process the requests. You can read more about it in this older post on Stack Overflow.
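To illustrate the application side with a toy sketch of my own (hypothetical names; real services like tumblr or shopify do something more elaborate): once a wildcard DNS record points *.example.com at your server, every request carries the full host name in the Host header, and the app can use the left-most label to look up which user's site to render.
# Minimal WSGI sketch: route requests by subdomain (hypothetical data).
from wsgiref.simple_server import make_server

BASE_DOMAIN = 'example.com'
SITES = {'johndoe': "John Doe's blog", 'janedoe': "Jane Doe's blog"}

def app(environ, start_response):
    host = environ.get('HTTP_HOST', '').split(':')[0]      # e.g. "johndoe.example.com"
    if host.endswith(BASE_DOMAIN):
        subdomain = host[:-len(BASE_DOMAIN)].rstrip('.')    # -> "johndoe"
    else:
        subdomain = ''
    site = SITES.get(subdomain)
    if site is None:
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return [b'No such site']
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [site.encode('utf-8')]

if __name__ == '__main__':
    make_server('', 8000, app).serve_forever()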
_unix.57652
I am running Linux Mint 13. The problem is that whenever I connect or disconnect an external monitor to my laptop, it freezes. If the monitor is connected on boot, it works fine. Any ideas?
Output from inxi -SGxc 0:
System: Host: ****-VGN-NS140E Kernel: 3.2.0-23-generic x86_64 (64 bit, gcc: 4.6.3) Desktop: Xfce 4.10.0 (Gtk 2.24.10) Distro: Linux Mint 13 Maya
Graphics: Card: Intel Mobile 4 Series Chipset Integrated Graphics Controller bus-ID: 00:02.0 X.Org: 1.11.3 drivers: intel (unloaded: vesa,fbdev) Resolution: [email protected] GLX Renderer: Mesa DRI Mobile Intel GM45 Express Chipset GLX Version: 2.1 Mesa 8.0.4 Direct Rendering: Yes
Linux Mint 13 XFCE freezes when external monitor connected
linux;linux mint;dual monitor
A script, as mentioned in a comment to the question:
#!/bin/bash
xrandr \
 --output LVDS-1 \
 --auto \
 --dpi 145 \
 --left-of DVI-D-1 \
 --output DVI-D-1 \
 --primary \
 --auto \
 --dpi 96
sleep 1
killall -USR1 xfce4-panel
This makes the DVI-connected device the main display device and positions the laptop screen (LVDS) to the left of the DVI one. Names of the devices vary - check the output of xrandr -q for the names on your system. After the configuration settles down, xfce4-panel is signalled to reload itself - this is mostly to ensure that the workspace switcher updates its cached desktop sizes (without this it would only display miniatures for a single screen).
To disable the monitor you need something like:
#!/bin/bash
xrandr --output DVI-D-1 --off
sleep 1
killall -USR1 xfce4-panel
You might also want to check the Session and Startup entry in the Xfce Settings Manager for anything that would resemble an application trying to do this automagically, and possibly remove it (I can't remember whether this was a standalone service or whether it was part of the window manager).
_webmaster.99474
My website is an entertainment site with movie trailers and reviews. Its number of viewers is actually very low. I have set keywords that I found with the Google Keyword Planner, and I created a sitemap. Even after that, my site has not improved. Do I need to make any changes to increase my site's traffic and viewers?
How can I increase my site traffic?
web crawlers;sitemap;traffic;web traffic
null
_datascience.6200
Sometimes we come across datasets in which the classes are imbalanced. For example, class A may have 2000 instances while class B has only 200. How can we train a classifier for such datasets?
What are the basic approaches for balancing a dataset for machine learning?
machine learning;dataset
null
_softwareengineering.251250
If I write a C program and compile it to an .exe file, the .exe file contains raw machine instructions for the CPU (I think). If so, how is it possible for me to run the compiled file on any computer that runs a modern version of Windows? Each family of CPUs has a different instruction set, so how come any computer that runs the appropriate OS can understand the instructions in my .exe file, regardless of its physical CPU?
Also, on the download page of some application's website you often have a download for Windows, for Linux, and for Mac (often two downloads for each OS, for x86 (32-bit) and 64-bit computers). Why aren't there many more downloads, one for each family of CPUs?
Why do executables depend on the OS but not on the CPU?
low level;cpu;machine code
Executables do depend on both the OS and the CPU:
Instruction Set: The binary instructions in the executable are decoded by the CPU according to some instruction set. Most consumer CPUs support the x86 (32-bit) and/or AMD64 (64-bit) instruction sets. A program can be compiled for either of these instruction sets, but not both. There are extensions to these instruction sets; support for these can be queried at runtime. Such extensions offer SIMD support, for example. Optimizing compilers might try to take advantage of these extensions if they are present, but usually also offer a code path that works without any extensions.
Binary Format: The executable has to conform to a certain binary format, which allows the operating system to correctly load, initialize, and start the program. Windows mainly uses the Portable Executable format, while Linux uses ELF.
System APIs: The program may be using libraries, which have to be present on the executing system. If a program uses functions from Windows APIs, it can't be run on Linux. In the Unix world, the central operating system APIs have been standardized to POSIX: a program using only the POSIX functions will be able to run on any conformant Unix system, such as Mac OS X and Solaris.
So if two systems offer the same system APIs and libraries, run on the same instruction set, and use the same binary format, then a program compiled for one system will also run on the other.
However, there are ways to achieve more compatibility:
Systems running on the AMD64 instruction set will commonly also run x86 executables. The binary format indicates which mode to run in. Handling both 32-bit and 64-bit programs requires additional effort by the operating system.
Some binary formats allow a file to contain multiple versions of a program, compiled for different instruction sets. Such fat binaries were encouraged by Apple while they were transitioning from the PowerPC architecture to x86.
Some programs are not compiled to machine code, but to some intermediate representation. This is then translated on the fly to actual instructions, or might be interpreted. This makes a program independent of the specific architecture. Such a strategy was used on the UCSD p-System.
One operating system can support multiple binary formats. Windows is quite backwards compatible and still supports formats from the DOS era. On Linux, Wine allows the Windows formats to be loaded.
The APIs of one operating system can be reimplemented for another host OS. On Windows, Cygwin and the POSIX subsystem can be used to get a (mostly) POSIX-compliant environment. On Linux, Wine reimplements many of the Windows APIs.
Cross-platform libraries allow a program to be independent of the OS APIs. Many programming languages have standard libraries that try to achieve this, e.g. Java and C.
An emulator simulates a different system by parsing the foreign binary format, interpreting the instructions, and offering a reimplementation of all required APIs. Emulators are commonly used to run old Nintendo games on a modern PC.
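As a small illustration of the binary format point (a rough sketch of my own; real loaders parse far more than this), you can often tell which container format an executable uses just from its first few bytes:
# Identify an executable's container format from its magic bytes (sketch).
def executable_format(path):
    with open(path, 'rb') as f:
        head = f.read(4)
    if head == b'\x7fELF':
        return 'ELF (Linux and most other Unix-like systems)'
    if head[:2] == b'MZ':
        return 'MZ/PE (Windows .exe)'
    if head in (b'\xfe\xed\xfa\xce', b'\xfe\xed\xfa\xcf',
                b'\xcf\xfa\xed\xfe', b'\xca\xfe\xba\xbe'):
        return 'Mach-O or universal binary (macOS)'   # 0xcafebabe also marks Java class files
    return 'unknown'

print(executable_format('/bin/ls'))   # on a Linux box this should report ELF
The operating system's loader does essentially this check (plus a great deal more) before it agrees to run the file at all.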
_webmaster.2632
With all the open source fonts I have available, which I can download via the CSS font directive, what's the benefit of the TypeKit API? Are there drawbacks to it? How does it work technically? Are there certain ways of constructing my website that I should avoid?
What's the pro/cons of Adobe TypeKit API? Any best practices?
css;website design;cdn;typography;fonts
Annoyingly, online fonts are not supported by all browsers (Opera on the iPhone being a pet peeve). Google Fonts' pure CSS system seems to work better than the alternatives, but you can end up with a FOUC on many browsers.
The Google Font API was co-developed by Google and TypeKit, so I hope the following experience is similar enough:
the JavaScript library is quite light, but watch out for any serious increase in latency from the extra DNS lookups and HTTP connections that may be caused by cross-domain resources
using the JSAPI version is quite slow on an empty cache (as a side note, you also can't combine JS requests into one big file)
being able to declare the fonts needed separately from the JavaScript include allows for post-loading the library and really helps with (X)HTML template flexibility
the extra paint events triggered when the JavaScript library re-paints the entire screen by changing page-wide classes will mean your CSS has to be efficient (Google offer a guide on this)
whilst having to declare italics and different weights with some fonts decreases the download size, it adds an extra burden to either the designer or the programmer
custom fonts, used tastefully, look beautiful
_webmaster.11195
There are a lot of tools that can automatically submit my website to thousands of web catalogs. Is this a good practice to increase rankings? What are the pros and cons?
Automatic registration in web-catalogs and SEO
seo;tools
Pros:
Cons: These links are worthless, as the directories are almost certainly considered link farms (and if you ever link back to one of them, you risk being considered part of it and getting penalized or banned yourself). And even if they aren't, there are so many links on those pages that what little value is available is diluted to the point where there is no more value. And that's assuming they have any value at all, and the odds are they don't, since they will have little to no links pointing to them and they're almost certainly off topic. A lot of the sites they claim to submit to also no longer exist, so you're getting fewer linking opportunities than you think.
In short, this is a waste of time.
_softwareengineering.338381
I'm working on removing left recursion from a grammar, and of course started with the algorithm described on Wikipedia. As advertised, it unfortunately exploded into a much larger grammar that was harder to understand than the original. However, I casually noticed that I can avoid a lot of the explosion if I just replace rules like this:
A -> B | A c B
With this:
A -> B | B c A
In my particular grammar, c is very often some delimiter (e.g., ',') used to describe a non-empty delimited list. Is there a more general algorithm here that I can apply? That is, one that performs transforms like this without introducing new rules to the grammar?
Is there a name for this grammar transform (or a more general form of it)?
parsing;grammar
null
_unix.40909
I installed the package nfs-utils and tried it via:
# mount -t nfs server:/mnt /mnt
mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified
Ok, probably need to start that - via systemd - right?
# systemctl start nfs-lock.service
Job failed. See system journal and 'systemctl status' for details.
# journalctl
Jun 15 23:22:18 host rpc.statd[24339]: Version 1.2.6 starting
Jun 15 23:22:18 host rpc.statd[24339]: Opening /var/run/rpc.statd.pid failed: Permission denied
[..]
Jun 15 23:22:18 host systemd[1]: nfs-lock.service: control process exited, code=exited status=1
Jun 15 23:22:18 host systemd[1]: Unit nfs-lock.service entered failed state.
Looks like a SELinux related problem?
Jun 15 23:22:18 host setroubleshoot[3211]: analyze_avc() avc=scontext=system_u:system_r:rpcd_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 access=['unlink'] tclass=file tpath=rpc.statd.pid
Jun 15 23:22:18 host setroubleshoot[3211]: SELinux is preventing /usr/sbin/rpc.statd from unlink access on the file rpc.statd.pid.
Jun 15 23:22:18 host setroubleshoot[3211]: analyze_avc() avc=scontext=system_u:system_r:rpcd_t:s0 tcontext=unconfined_u:object_r:var_run_t:s0 access=['write'] tclass=file tpath=rpc.statd.pid
Jun 15 23:22:18 host setroubleshoot[3211]: SELinux is preventing /usr/sbin/rpc.statd from write access on the file rpc.statd.pid.
Ok - now the question is: what SELinux configuration or what file label do I have to change?
# systemctl status nfs-lock.service
nfs-lock.service - NFS file locking service.
 Loaded: loaded (/usr/lib/systemd/system/nfs-lock.service; enabled)
 Active: failed (Result: exit-code) since Fri, 15 Jun 2012 23:22:18 +0200; 13min ago
 Process: 24338 ExecStart=/sbin/rpc.statd $STATDARG (code=exited, status=1/FAILURE)
 Process: 24334 ExecStartPre=/usr/lib/nfs-utils/scripts/nfs-lock.preconfig (code=exited, status=0/SUCCESS)
 CGroup: name=systemd:/system/nfs-lock.service
Is a package missing - or am I using the wrong service?
How to mount NFS 3 volumes on Fedora 17?
fedora;nfs;selinux;systemd
Not sure if this will help, because I did not see any SELinux errors, but I'm posting what worked for me and the problems I encountered in the hope it helps.
After installing Fedora 17 I upgraded to the latest release but did not reboot. I did log out and back in because of the updates to several GNOME packages. (I did not notice that the update included an update of systemd as well.)
To mount my NFS shares I installed nfs-utils and tried to start the rpcbind service:
sudo systemctl start rpcbind.service
I received the following error:
Failed to issue method call: Unit var-run.mount failed to load: No such file or directory. See system logs and 'systemctl status var-run.mount' for details.
var-run.mount appears to have been removed recently; yum whatprovides shows that systemd-44-8.fc17 still had it. Several other NFS services threw the same error.
In my case simply rebooting helped, so you might want to update to the latest packages and reboot. (If someone knows a way to make systemd reread its config without rebooting, please let me know.)
_codereview.147067
I'm trying to write a small macro library in Racket that extends a few of the racket/match functions I use, by printing which clause was expanded.While this seems to be decent practice for learning macros in Racket, I still find them incredibly daunting. At some point, I wind up just trying everything with no real sense of direction until it works how I want, and that's partially how I built my macros up to their current state, as seen here:#lang racket(require (for-syntax racket/string racket/list racket/syntax syntax/parse syntax/parse/experimental/template syntax/parse/lib/function-header));; From answer to my question here:;; http://stackoverflow.com/a/38577121/1017523(begin-for-syntax (define (clauses->numbers stx) (range (length (syntax->list stx)))))(define-syntax (generate-debug-version stx) (syntax-case stx () [(_ function-name) (with-syntax ([new-name (format-id stx ~a/debug #'function-name)]) #'(define-syntax (new-name stx) (... (syntax-parse stx [(_ id [pattern value] ...) (with-syntax ([(n ... ) (clauses->numbers #'([pattern value] ... ))]) (syntax/loc stx (function-name id [pattern (begin (displayln (format Matched case ~a (add1 n))) value)] ...)))]))))]))(generate-debug-version match)(generate-debug-version match*)The small first macro, clauses->numbers, determines which clause is expanded in racket/match functionsThe main macro, generate-debug-version, is what I'm trying to simplify.This macro currently takes one function name (for example, match or match*), and generates a macro that defines a new function with a /debug suffix that prints the expanded clause number.Before asking this question, I was trying to add the ability to supply multiple function names to this macro using ellipses (making the last two lines (generate-debug-versions match match*)), but I can't figure out how in my macro's current state.Can someone give me advice on how to simplify/clarify my generate-debug-version macro, so I can ultimately rewrite it to accept multiple arguments?I'm aware of the simpler define-syntax-rule macro tool, but it didn't work in the circumstances I attempted.
Simplifying Macro-Generating Racket Macro
scheme;macros;racket
Starting at the top, you have a lot of unused imports. Fortunately, DrRacket can tell you which imports are unused, since it will highlight them in red when you hover your mouse over them. Additionally, if you click the Check Syntax button, it will color all the unused imports red. Using that, we can trim the import list down to the following set:(require (for-syntax racket/list racket/syntax syntax/parse))Next, lets take a look at the bulk of your code, generate-debug-version. First of all, its a bit odd that youre using syntax-case for the outer macro, but syntax-parse for the inner one. Just use syntax-parse everywhere; theres really no reason to ever use syntax-case in Racket.Furthermore, the syntax/parse/define module provides a nice define-syntax-parser abbreviated form, which helps to simplify things slightly and remove some redundancy:(require (for-syntax racket/list racket/syntax) syntax/parse/define)(define-syntax-parser generate-debug-version [(_ function-name) (with-syntax ([new-name (format-id this-syntax ~a/debug #'function-name)]) #'(define-syntax-parser new-name (... [(_ id [pattern value] ...) (with-syntax ([(n ... ) (clauses->numbers #'([pattern value] ... ))]) (syntax/loc this-syntax (function-name id [pattern (begin (displayln (format Matched case ~a (add1 n))) value)] ...)))])))])Note the replacement of explicit uses of stx with uses of this-syntax, which always refers to the current piece of syntax being matched within a syntax-parse form.Now with this in mind, we can start to refactor the bulk of the macro itself. There are a few things that could be improved, staring with the uses of with-syntax. Conveniently, syntax/parse permits #:with clauses after patterns themselves, which act sort of like wrapping the body with with-syntax, but they can use syntax-parse patterns, and the parser will backtrack if they fail. This lets us simplify the code even further:(define-syntax-parser generate-debug-version [(_ function-name) #:with new-name (format-id this-syntax ~a/debug #'function-name) #'(define-syntax-parser new-name (... [(_ id [pattern value] ...) #:with (n ...) (clauses->numbers #'([pattern value] ...)) (syntax/loc this-syntax (function-name id [pattern (begin (displayln (format Matched case ~a (add1 'n))) value)] ...))]))])However, theres actually an issue here, which is that the use of format-id really shouldnt use this-syntax at all. Instead, it should pull lexical context from #'function-name itself, since the provided identifier should control where the generated identifiers lexical context comes from. (For an example of where this can be important, try using the define-tracing-match* form from the end of this answer without this change, and youll see what the issue is.)(define-syntax-parser generate-debug-version [(_ function-name) #:with new-name (format-id #'function-name ~a/debug #'function-name) ; ... ])Another thing we can improve is that we can use the id syntax class for the function-name pattern, which will ensure that the generate-debug-version macro is actually provided an identifier, and will raise a syntax error if it isnt. Additionally, function-name really isnt a great name for that pattern, since match is not a function, it is a form. Lets fix that:(define-syntax-parser generate-debug-version [(_ form-id:id) #:with debug-id (format-id #'form-id ~a/debug #'form-id) #'(define-syntax-parser debug-id ; ... )])While were discussing names, generate-debug-version isnt a very good one. For one thing, it doesnt just generate a debug form, it defines one. 
For another, it specifically generates debug versions of match-like forms, nothing else, so that should probably be included in the name, too. I picked the name define-tracing-match, but you could pick a similar name if you wished.Okay, what now? Well, while clauses->numbers works, it could honestly be better. Its really nice that we can use syntax patterns to write such a declarative style of macro, but clauses->numbers isnt declarative at all, its completely procedural. To help fix that, we can write a splicing syntax class which will number the clauses for us, lifting out the procedural component into a separate piece. That looks like this:(begin-for-syntax (define-splicing-syntax-class numbered-clauses #:attributes [[pattern 1] [value 1] [n 1]] #:description #f [pattern {~seq [pattern value] ...} #:with [n ...] (range (length (attribute pattern)))]))You can read more about what syntax classes do and how they work in the extensive documentation, but the basic idea here is to extract out some procedural logic into a reusable pattern than syntax-parse can understand. The #:attributes and #:description options are optional here, but the former helps readability, and the latter helps with error messages.With that syntax class in place, we can further simplify the main macro by using it:(define-syntax-parser define-tracing-match [(_ form-id:id) #:with debug-id (format-id #'form-id ~a/debug #'form-id) #'(define-syntax-parser debug-id (... [(_ id clause:numbered-clauses) (syntax/loc this-syntax (form-id id [clause.pattern (begin (displayln (format Matched case ~a (add1 'clause.n))) clause.value)] ...))]))])This makes our main macro completely declarative (save for the small use of format-id, which is pretty harmless), and its pretty much entirely built out of patterns and templates. This is what the Racket macro system is so good at: it lets you pretty much just write what you mean, and you still get great error reporting for when things go wrong.With all these changes in place, heres what the final code looks like:#lang racket(require (for-syntax racket/list racket/syntax) syntax/parse/define)(begin-for-syntax (define-splicing-syntax-class numbered-clauses #:attributes [[pattern 1] [value 1] [n 1]] #:description #f [pattern {~seq [pattern value] ...} #:with [n ...] (range (length (attribute pattern)))]))(define-syntax-parser define-tracing-match [(_ form-id:id) #:with debug-id (format-id #'form-id ~a/debug #'form-id) #'(define-syntax-parser debug-id (... [(_ id clause:numbered-clauses) (syntax/loc this-syntax (form-id id [clause.pattern (begin (displayln (format Matched case ~a (add1 'clause.n))) clause.value)] ...))]))])A bit more concise, and hopefully a bit clearer, too.Now for a couple of extras. You mentioned that you wanted to write a generate-debug-versions macro. This wouldnt be too hard to do with the new version of generate-debug-version by using some well-placed ellipses, actually, but it would probably be even easier to just define a new generate-debug-versions macro (or, since we changed the name, a define-tracing-match* macro) that defers to the existing one.The easiest way to do this is probably to use the define-simple-macro form, also from syntax/parse/define, which is basically syntax-rules mixed with all the syntax-parse enhancements. The macro itself is trivial:(define-simple-macro (define-tracing-match* form-id:id ...) 
(begin (define-tracing-match form-id) ...))Now, its extremely easy to define both match/debug and match*/debug at the same time:(define-tracing-match* match match*)One final additional thing you can do is adjust the format-id call to copy source location information and syntax properties from the provided form-id, like this:#:with debug-id (format-id #'form-id ~a/debug #'form-id #:source #'form-id #:props #'form-id)By doing this, when you hover your cursor over a use of match/debug in DrRacket, it will draw an arrow to the identifier used with define-tracing-match to generate the debug version in the first place, which is useful.However, if you wanted to get even fancier, you could skip that step and use the 'sub-range-binders syntax property to convey even more fine-grained information to DrRacket. Specifically, you can adjust the outer macro for define-tracing-match to attach the property as follows:(define-syntax-parser define-tracing-match [(_ form-id:id) #:with debug-id (format-id #'form-id ~a/debug #'form-id) (syntax-property #'(define-syntax-parser debug-id (... [(use-id id clause:numbered-clauses) (syntax/loc this-syntax (form-id id [clause.pattern (begin (displayln (format Matched case ~a (add1 'clause.n))) clause.value)] ...))])) 'sub-range-binders (let ([id-len (string-length (symbol->string (syntax-e #'form-id)))]) (vector (syntax-local-introduce #'debug-id) 0 id-len 0.5 0.5 (syntax-local-introduce #'form-id) 0 id-len 0.5 0.5)))])This will make DrRacket draw an arrow for part of the match/debug identifier, the part that shares the same name as the provided one. The best way to illustrate what this does is with a screenshot:(This is the technique the built-in struct macro uses to get the special binding arrows for its field accessors.)Admittedly, however, this is pretty advanced macrology, and I probably wouldnt do something this fancy unless I was distributing it as part of a library, so no worries if you leave that part out.
_codereview.147950
Here I made a program in batch that detects all files with the .user extension. Then it allows the user to pick a username by entering the number associated with that username. The code is messy, so I will explain.@echo offsetlocal ENABLEEXTENSIONS ENABLEDELAYEDEXPANSION:login_menuclsset x=1set users=cd usersfor %%A in (*.user) do (echo !x!. %%~nAset users=%%~nA:!x!,!users!set /a x=!x!+1)echo.set /p ch=Select User: if %ch% == (goto login_menu)for %%B in (%users%) do ( for /f tokens=1,2 delims=: %%C in (%%B) do ( set userNumber=%%D set userN=%%C if !ch! == %%D goto password echo BDEV: %%B pause))echo.echo That user doesn't exist!pausegoto login_menu:passwordclscd usersfor /f tokens=1,2 delims=: %%E in (!userN!.user) do (set password=%%F)echo Enter your password, !userN!echo.set /p password1=Password: if %password1% == %password% goto menuecho.echo That password is invalid!pausegoto password:menuecho Hey! You're logged in as !userN!pauseThe variable x is going to be the number which the username will be associated with. The variable users Makes sort of a 'map' to usernames to the number associated with that usernameIn the first for loop, it gets all the files in the folder users. It echo's out all the usernames that the user can pick.The second for loop goes into the variable users and separates the usernames with their respective numbers. For example: if I have the usernames admin:4,steve:3,john:2,jane:1 it will separate them into admin:4 and steve:3 and john:2 and jane:1.The third for loop (which is in the second for loop) separates each username to number into separate variables. For example, if we have the username Collins with the number 3, it will put the username Collins in the userN variable and the number 3 into the variable userNumber.It then checks what number the user selected.The final for loop goes into the user file that the user has selected. So if the user selected admin it will go to the user file admin.user and find the password.If the password is invalid, it rejects access. If it correct it allows access.Is there any way to make it less like spaghetti code?If you need more explanation, I will be happy to provide more information.
Login System In Batch
authentication;batch
My main recommendation is that your variables have more descriptive names. While !x! may seem like a reasonable variable name for a counter that you're never going to use again, calling it something like !counter! makes your code easier for other people to maintain when there's a lot more of it.I also got rid of ENABLEEXTENSIONS because that's enabled by default, and I moved cd users above :login_menu because if somebody enters an invalid user number and you're already in the users folder, your code will try to go into the users folder that's inside of the users folder and since there isn't one, you'll get an error.To cut out that nested for loop, I stored the username map in an array. From there, you can determine whether or not a username is valid by if the variable exists.Finally, I replaced the !s that you echoed with ^^!s so that they would be escaped. Because you have delayed expansion enabled, the last bit of your code would be displayed as HeyuserN because batch would consider ! You're logged in as ! to be a variable (because that's a valid variable name in batch).If you could guarantee that you would never have more than ten users, you could build a string for a choice list to guarantee that the user would never enter an invalid user number, but I assumed this would be used by a large number of people.@echo offsetlocal enabledelayedexpansioncd users:login_menuset user_counter=1for %%A in (*.user) do ( echo !user_counter!. %%~nA set users[!user_counter!]=%%~nA set /a user_counter+=1)echo.set /p user_selection=Select user: if %user_selection%== goto login_menuif not defined users[%user_selection%] ( echo That user does not exist^^! pause goto login_menu)set user_name=!users[%user_selection%]!:enter_passwordfor /f tokens=1,2 delims=: %%E in (!user_name!.user) do set stored_password=%%Fecho Enter your password, !user_name!echo.set /p entered_password=Password: if %entered_password%==%stored_password% goto menuecho.echo That password is invalid^^!pausegoto enter_password:menuecho Hey^^! You're logged in as !user_name!^^!pause
_scicomp.1454
Note: the following post may include controversial opinions, so please note that they are only my opinions, and not intended to offend anyone.
I've been programming in some form or another since around 1999. I initially used R, and then later, around 2004, mostly switched to Python.
For many scientific applications, for example simulation, including such things as MCMC, both R and Python are too slow and need to be sped up. The usual way of doing so is by extending with C or C++. For both R and Python, this is what I did, using R's C API with C++, and the Boost Python library with Python. However, for various reasons, this combination is not the ideal solution. What is important in programming, particularly algorithms? Expressiveness and speed, which are of course related. The more expressive a language, the faster one can write in it.
1) As far as expressiveness goes, neither R nor Python is really ideal for writing scientific algorithms, in my opinion. They do not closely map to the underlying algorithm. However, they are both considerably better than C++.
2) I enjoy writing in Python, which is a pleasant language, though as noted above it is not ideal for algorithmic work. However, when one has to work with a Python/C++ combination because of speed issues, this mix becomes considerably less pleasant to work with. What usually happens is that I first write in Python, and once I have something that is working well, I often discover that it is too slow (for some subjective value of "too slow"). I then face the decision of whether to spend some unreasonable amount of time rewriting it in C++, or put up with the slowness. In hindsight I often feel I might have been better off putting up with the slowness, especially as the speedups obtained are unpredictable. Also, the Boost Python interface between the two is a significant maintenance headache, and having code in two very different languages glued together like this is just distracting. No criticism of Boost Python intended; it is as powerful an interface as one could imagine, and pretty much just works most of the time.
Now, in an ideal world, with unlimited time and resources, neither of these problems would be a major deal. However, in the scientific projects I have worked on, I've had the following experience. Whether or not I have collaborators on the project, I always seem to wind up doing the vast majority of the computing. In a total of 5 significant projects, I only had substantial participation from one person, on one project. That one person did more than pull his weight; he did as much as me or more. However, in all other cases, including projects with multiple collaborators, I've done (virtually) all the computational work. While I can say that I have not been blessed with the best collaborators (it seems to be a mixture of laziness and incompetence), it is not clear to me whether this state of affairs is likely to change in the future.
Computational scientific work is an enormous amount of effort, and if I can't change how my collaborators behave, I can change the way I work. The most important improvement would be to get things done more quickly, which brings me to the main consideration here: switching languages to something less orthodox may help. Based on past research, the most likely candidates, in order of likelihood, are Common Lisp and OCaml. I've been thinking about this for years, but recently have been thinking about it more seriously.
As far as I can tell, few people use either CL or OCaml for scientific computation. On searching this site, I found two references to CL (one was mine) and one to OCaml (mine). I've had a couple of encouraging contacts over the years with adventurous people working on the fringe. In 2008 I came across a book review of Peter Seibel's Practical Common Lisp (which I own), by Tamas K. Papp. This caught my attention, since it was one of the few mentions of scientific computing for Lisp that I had come across on the net. I wrote to Tamas, who immediately replied helpfully and encouragingly. To quote him:
My programming productivity probably increased tenfold with Lisp, but that took about a year to happen and I am still learning (I was doing quite well after 2 months though). So if you are working on something time-critical, then postpone the switch. You should consider asking folks on c.l.l, I am not the only one who knows about these things, others do scientific computing on Lisp.
He also has a blog and a GitHub page. Another person I briefly corresponded with (in December 2006) was Ira Kalet, who has used Common Lisp in the context of radiation oncology. Perhaps there are others who do scientific computing in Lisp, but I don't know of anyone.
The most common problem people cite with CL is the lack of libraries. This is a severe problem in general purpose computing, but may not be so much so in scientific computing, particularly for from-the-ground-up implementations of algorithms. Specifically, I can get by most of the time with a basic math library, including probability distribution functions, a multidimensional array library, and a basic set of containers, e.g. map, set, list, etc., as found in the C++ and Python standard libraries.
I know even less about OCaml than I do about CL, but threw that in as an alternative. It is supposedly very fast, has one free implementation by French researchers, and seems like the most viable of the ML family of languages for scientific computing.
To conclude, I'm wondering if others have experience with this, and what thoughts they have, if any. EDIT: I'm mostly interested in first-hand experience, in the context of the issues I've discussed above. E.g. if you used to use Python and C++ (or R and C++) and moved to a more obscure language, I'd be most interested in hearing about your experiences.
Using unconventional programming languages for scientific computation
languages
null
_unix.310893
After a photo shoot I have two folders: JPGs (files *.jpg) and RAWs (files *.CR2). Usually I take a look in the JPGs folder and delete the ones that I don't like. What I want is to create a bash script that does something like:
for file.cr2 in folder.getFiles
  if file.jpg is NOT in folder2.getfiles
    delete file.cr2
I have seen some examples with rsync, but only for files with the same extension, and I'm not very good at bash; I could do it in C, but I want to learn.
synchronize files with different extension in MacOS
bash;shell script;files;osx;rm
Try this:
for file in cr2files/*; do
  test=jpgfiles/$(basename ${file:: -3})jpg
  if [ ! -f $test ]; then
    echo $file
  fi
done
If it gets the results that you want, you can replace the line echo $file with rm $file. The line test=jpgfiles/$(basename ${file:: -3})jpg removes the path and extension from the file name and replaces them with jpgfiles/filename.jpg. Remember to change the path names cr2files and jpgfiles to the correct ones. You can use variables if you like, or pass the path names to your script as arguments.
Edit: Your space in Toshiba ser was messing up basename. Here's the solution:
#!/bin/bash
IFS=''
for file in $1/*; do
  test=$2/$(basename ${file:: -3})jpg
  if [ ! -f $test ]; then
    rm $file
  fi
done
Call it like this: ./removephotos.sh 'volumes/toshiba ser/raw' 'volumes/toshiba ser/jpg' (quote the paths because of the space). Take note that there is nothing between the single quotes in IFS='' on line #2.
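If you'd rather sidestep shell quoting issues entirely, here is an equivalent sketch in Python (the paths are hypothetical; it only prints what it would delete until you swap in the unlink call):
from pathlib import Path

raw_dir = Path('/Volumes/Toshiba ser/raw')   # adjust these two paths to yours
jpg_dir = Path('/Volumes/Toshiba ser/jpg')

jpg_stems = {p.stem.lower() for p in jpg_dir.glob('*.jpg')}
for raw in raw_dir.glob('*.CR2'):
    if raw.stem.lower() not in jpg_stems:
        print('would delete', raw)           # replace with raw.unlink() once you are happy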
_webapps.89198
How can I automatically number a list in Trello? For example, on my board I have a list named to do with 3 items in it: shopping, cleaning, haircut. Is it possible to have these items numbered automatically by Trello? And if I move one item into another list, can Trello renumber the items automatically according to the list the item is in?
Numbered list in Trello
trello
null
_cstheory.18670
Here is a nearest neighbor problem. Given reals $a_1, \ldots, a_n$ (very large $n$!), plus a target real $p$, find $a_i$ and $a_j$ whose SUM is closest to $p$. We allow reasonable pre-processing/indexing of $a_1, \ldots, a_n$ (up to $O(n \log n)$), but at query time (given $p$), the result should be returned very fast (e.g., in $O(\log n)$ time).
(Simpler example: if we only wanted the SINGLE $a_i$ that is closest to $p$, we would sort $a_1, \ldots, a_n$ offline, $O(n \log n)$, then do binary search at query time, $O(\log n)$.)
Solutions that don't work:
1) Sort $a_1, \ldots, a_n$ offline, then at query time, start from both ends & move two pointers inward (http://bit.ly/1eKHHDy). Not good, because of $O(n)$ query time.
2) Sort $a_1, \ldots, a_n$ offline, then at query time, take each $a_i$ and perform binary search for a buddy that helps it sum to something close to $p$. Not good, because of $O(n \log n)$ query time.
3) Sort all pairwise sums $a_i + a_j$ offline, then do binary search. Not good, because of $O(n^2)$ pre-processing.
Thanks!
P.S. Further generalizations needed for practice: (1) $a_1, \ldots, a_n$ and $p$ to be 50-dimensional vectors, (2) close to mean vector cosine distance, and (3) $k$-best closest pairs-that-sum, not just 1-best.
Select two numbers that sum to $p$, using sub-linear query time
ds.algorithms;cg.comp geom;ds.data structures
This is almost certainly impossible.
Suppose you could solve your problem with preprocessing time $P(n)$ and query time $Q(n)$. Then there is a simple algorithm to solve the 3SUM problem
Given a set of $n$ real numbers, do any three elements sum to zero?
in $P(n)+n\cdot Q(n)$ time. We pre-process all the numbers, then for each number $a_k$, we find the value of $a_i+a_j$ that is closest to $-a_k$; if it matches $-a_k$ exactly, we have found a solution to the 3SUM problem.
However, the fastest algorithm known for 3SUM runs in $O(n^2)$ time, and this algorithm is widely conjectured to be optimal. Moreover, there is a matching $\Omega(n^2)$ lower bound in a restricted but natural decision tree model of computation. For sets of integers, there are slightly subquadratic-time algorithms that play games with bits, but even in the integer RAM model, 3SUM is conjectured to require $\Omega(n^2/\text{polylog}\,n)$ time.
So assuming that conjecture is correct, your problem either requires (near-)quadratic preprocessing time or (near-)linear query time.
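In Python-flavoured pseudocode, the reduction looks like this (preprocess and query are hypothetical placeholders for the data structure the question asks for, not real library functions; the bookkeeping that keeps the three indices distinct is omitted, as in the prose above):
def solves_3sum(numbers, preprocess, query):
    index = preprocess(numbers)                  # cost P(n)
    for a_k in numbers:                          # n queries...
        i, j = query(index, -a_k)                # ...each costing Q(n)
        if numbers[i] + numbers[j] == -a_k:
            return (numbers[i], numbers[j], a_k)  # three elements summing to zero
    return None                                   # no such triple exists
So a truly fast pair-sum structure would break the conjectured 3SUM barrier.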
_unix.207319
I'm new to the world of the Raspberry 2 and Linux, and I have installed Chromium on the Raspberry. I did this because I thought it would be a good way to access my Google Chrome bookmarks (favourites). However, I'm having problems. When I log into Chromium in order to sync, I get the message:
The sync server is busy, please try again later.
I've tried a few hours later, the next day, etc. I suspect the sync server being busy is not the problem. Can anyone tell me how to fix this problem and help me sync my bookmarks?
The version of Chromium is 22.0.1229.94. I vaguely understand it's possible to get a later version; I'm new to Linux and would have to be told the explicit steps to do so. The Linux I'm running came with the Raspberry 2 and is some flavor of Debian (I'd report the version number if I knew where to look!).
Finally, I'm not wedded to Chromium. I just want a browser where I can see my Chrome bookmarks and (ideally) have them synced every time I add a bookmark to Chrome or to the browser on the Raspberry.
How to sync bookmarks (favourites) between Google Chrome and Chromium
chrome;synchronization;bookmarks
null
_cs.56487
Most of websites put some restrictions on how to use their services; the following paragraphs are taken from Terms of Service of such website:Only one account per computer is allowed to view ads. If more than one account in a single computer view ads, all of those accounts will be permanently suspended.User can only use a maximum of 3 distinct computers in a 10 days period to view ads. Any attempt to use more than that in that defined period will cause an account suspension.Question:No problem with such rules, they must be respected. But I want to know how can servers recognize a specific computer!Is there a computer fingerprint that would be transferred through HTTP connection and can identify each computer?
How Can a Network Server Identify a Specific Computer, Is There a Computer Fingerprint?
computer networks;security;protocols
null
_cs.48436
I know that we can prove closure of two regular languages under operations like union, intersection, concatenation etc. by constructing NFAs for them but how to do the same thing using regular expressions, specifically proving that reversal of a regular language is closed using regular expression?
How to prove closure property of regular languages using regular expressions?
formal languages;regular languages;proof techniques;regular expressions;closure properties
null
_softwareengineering.187963
I have heard of several situations of people using say, JavaScript or Python (or something), inside a program written in C#. When would using a language like JavaScript to do something in a C# program be better then just doing it in C#?
When would using a scripting language within a larger program be useful?
scripting
null
_unix.209807
I was trying to install the Perl module Future::Utils on my Ubuntu machine but didn't find the exact command. I tried this command but it didn't work:
sudo apt-get install libfuture-utils-perl
I get this result when I run this command:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package libfuture-perl
Can you help me resolve this issue?
Installing perl modules
ubuntu;apt;perl;libraries
null
_webapps.35270
I have an event on Facebook that a number of people are attending. I'd like to send a message to each of the attendees, and I'd like the message to start with 'Hi <name>!'. This is similar to the old concept of mail merge where you have a list of recipients whose data you fill in to a template message before sending it to them. Any ideas on how I could go about doing this? Any 3rd-party tools that you can use to script Facebook?
How can you 'mail merge' on Facebook?
email;facebook
Cannot be done natively through Facebook. Your only hope is to be able to export your Facebook contacts and run them through a mail merge locally. Or try to upload/import the contacts list to Gmail and use Google Docs as a way to send out personalised mass emails.
If you wanted to try your hand at the Gmail & Google Docs method, Create a Mail Merge with Gmail and Google Docs has some steps. They are roughly thus:
1. Create a new Google Docs Spreadsheet
2. Select Mail Merge from the menu and allow authorisation access to Gmail and/or Google Contacts
3. Create a new group on Google Contacts with the emails and details of those you want to send to
4. Click Mail Merge > Import Google Contacts and select the name of the new group. The details should import.
5. Compose the email as desired
6. Go to the Mail Merge menu and click Start Mail Merge. The sending will begin and update.
There's a little more nuance in the link mentioned, but listed here is the general direction of the steps you'll be taking.
_webapps.5953
Is there any way to see the list of top words searched for on Google? I'm looking for a range of somewhere like 100-500 words.The problem with Google Hot Trends/Searches and Google Zeitgeist is that it shows searches, not words. Here's an example of why the latter is more important to me. Say I want to find out if a lot of people are using search engines to find recipes. You would think that I could look at the term recipe and find this out. However, this is not the case because recipe will only show me the results for people searching for recipe as a single keyword (e.g. they are looking for recipe sites). I'm looking for how many people are actually looking for recipes with queries such as pumpkin pie recipe, scones recipe, etc.The problem with Google Trends is that it only shows you the popularity of a single term over time, or it allows you to compare it to other terms. This can't easily be used to for the scenario I presented above.If this is not possible, I'm also looking for a way to do the opposite, that is, given a word, I want to know how popular it is. For example I would ask how popular the word email is, and then it would say something like 86, indicating that email is the 86th most searched word on Google. The closest thing I found is this page which claims to list the top 500 search words, but it doesn't mention where it gets its data from or which search engine it is talking about.Note: if other popular search engines provide this feature, I would be interested in those as well.
Google top 100 words searched for
google search;analytics;search engine
null
_webapps.92219
My Facebook messages are entering on my email account as well, I tried to delete email address from my Facebook account but I am having a problem. How can I block Facebook messages from coming to my email account?
Facebook messages on my email account
facebook;facebook messages
null
_webmaster.5940
I'm using codeigniter + jquery on a linux server and i want to integrate in my website a photo/image editor (after that an user have uploaded an image, he must be able to edit it), i need just some simple tools, zoom in and zoom out, brightness and contrast. A good solution may be a java, flash, silverlight applet or something like that.Any idea?Tnx Claudio
Image editor Applet
html;images;jquery;flash;silverlight
Another awesome solution would be Pixlr. It has many photoshop-like features (and I mean MANY - for a web app, that is), and has a decent API. Check it out here.And yes, I would, too, stay away from java and silverlight tools. Pixlr would most probably be just fine for any web user.PS: I think it's better than Picnik (which, if I remember correctly, is now owned by Google).
_softwareengineering.342536
Let's say I have a forum application running, and I start building a separate web application with a Python web framework. Both applications use their own MySQL databases, which contain their own respective user account tables. What would be the best approach to keeping the users between the databases in sync? I want to make it so that I can login with the same information on both applications.
Keeping data between two databases in sync
database;web development;synchronization
null
_softwareengineering.333160
There's currently a huge growth in larger companies offering new languages and / or frameworks for us to use to create websites, apps or software and I'm interested to know for what reason do people think this is? There's always been a few big players (Java, C++, Perl Php, VB, C#, Ruby etc) but a chunk of those were created by enthusiasts whose reason for doing so is more obvious.These days all the big tech giants are pushing their own Languages and Frameworks hard in a fight for our usage (I group Languages and Frameworks together as the growth seems pretty similar and will more than certainly be for similar reasons). For example, Apple is pushing and developing Swift at an astonishing speed, Google's Go has had amazing growth, although C# has been around for some time Microsoft has finalised (I think!) the official release of .Net Core and that shows a big shift in their goals towards multi platform and open-sourcing.What do companies gain from this? Especially as the majority of them are now open-source, I'm interested to get an idea financially why they do it? With all the resources and man-time needed to create, fine tune and document them and offering, more often the not, the majority of the same features and performance of competitors - why do they seem so desperate for us as developers to jump ship and hop on board their boat/s?Is it quite simply for marketing reasons to maintain an outer view of technological advancement, making the company seem like a leader of tech or do they get something else for developers creating tools using their Languages / Frameworks?It's less obvious than it was in the days of Microsoft wanting you to develop .Net so that you would need Windows and use IIS to host and Visual Studio to develop - with .Net Core that's not the case. Apple working with IBM to make Swift an option for web programming again moves things away from locking you down like you used to have to do to use a Mac to create iOS apps.Compile to Javascript languages and Javascript frameworks have been one of the biggest growth areas - How can Facebook profit over Google for getting more developers using React, GraphQL etc instead of Angular 2? And vice-versa? Both are free, with free resources and no tie-in as far as deployment is concerned, e,g you don't need to use Google Cloud Platform for Angular apps. After listening to a number of Podcasts recently and reading plenty of articles I find it interesting how team members and fans of each are extolling the virtues of there Language / Framework above the others with subtle digs the other way - is this just like nerd evangelism or is there more to it? Even with something like creating a strongly typed version of Javascript, Typescript and Flow get pushed in equal measure and seems from the outside like a mini war between Facebook and Microsoft for gorwth and traction. I could go on with load of examples (eg Xamarin, React Native, Native etc) but have probably got the idea across - what's going on here?
Why is there such a fight for companies to produce new languages + frameworks
programming languages;frameworks;microsoft;google;apple
Companies are doing this in order to make some product or platform (on which they earn a profit) more attractive. For example Apple is developing Swift in order to make iOS development more attractive for developers. The hope is of course more iOS developers means more apps which in turn makes the platform more attractive for consumers, which means Apple will sell more iOS devices and earn more money. Pretty simple.In other cases the profit motive is more indirect. For example Sun developing Java. Sun had their own hardware, but Java was deliberately cross-platform. The reason was Windows at the time had a near-monopoly on desktop platforms which meant developers only developed software for Windows, which made other platforms like Suns less attractive for consumers. A vicious circle from the perspective of Sun. The hope was Java would encourage developers to write cross-platform software, thereby making the non-windows platforms more attractive for consumers..Net was developed as a counter-measure from Microsoft, since Java attracted many developers because it seemed more modern than Microsoft's offerings (VB and C++). Microsoft therefore tried to deliver a similar modern platform but still tied to the Windows platform.If you look into other platforms and frameworks you can see the same forces at work again an again.The change in the .Net strategy (from Windows-specific to cross-platform) is clearly because Windows have lost in the mobile platform market. .Net was created to solidify a virtual desktop monopoly, but now the goal is basically the opposite, to create a cross-platform environment to fight the lock-in by iOS and Android. So basically Microsoft has the role Sun had, and Apple has the role Microsoft had.Google and Facebook have pushed HTML and JavaScript heavily. This has primarily been to make the web a stronger platform and to make it more attractive compared to platform-specific development which advantage the platform-owners like Microsoft and Apple. Microsoft used to fight the web and sabotage web standards, but after they lost the battle for the mobile platforms they (not surprisingly) have become great proponent for the platform-independent web, and even work together with Google on Typescript.
_webapps.100328
I use Google Calendar and attend a few Meetups, and when I RSVP to a Meetup event, Gmail automatically makes an entry in my calendar for the Meetup from the notification email meetup sends. And keeps it updated, and that is really nice. But... I've noticed that people I share my calendar with, like my wife, kids, and a few friends, can't see those events. They think I'm free that evening, when I'm not. That's (really) bad, because I depend on Google Calendar to keep us all on the same page.
Google Calendar entries from Meetup marked Private; family can't see I'm going out
google calendar;meetup
null
_webapps.8662
When I use Google's OAuth system to login to a website (e.g. a StackExchange site), I see it use a string similar to the following: www.google.com/accounts/o8/id?id=oethionbmqnjbonthaenthiqb_tneohqjb2oeDoes this string contain any personal identifiable information? For example, is my email address encoded in this string? How about the file it links to? Basically I'm wondering if there is any way that someone can know my email address if they have this string.I realize that when you authorize a site, you send them your email address. That's not what I'm asking about. I'm only concerned about this URL specifically and what information, if any, can be deduced by it.
Does Google's OAuth URL embed my email address?
google account;oauth
The string itself doesn't contain personal information but it points to where information can be obtained.Fear not, Google will tell you what information will be passed (depends on what the target application requires) and ask you for your approval beforehand, each time you use your OAuth/OpenID on a new site, so if someone just have the link, won't learn your e-mail address unless you allow it (per site).Currently Google's OpenID system won't allow you to choose what info you want to be passed, it's just a yes to all or no to all, so if you really want to limit the passed information, better use an alternative, like myOpenID for example.
_unix.275189
The following perl script consume-10-lines-and-exit reads ten lines from stdin and prints them. After that it exits.#!/usr/bin/env perluse strict;use warnings FATAL => 'all';for (my $i = 0; $i < 10; $i++) { my $line = <STDIN>; print $line;}I'm trying to combine consume-10-lines-and-exit and cat in such a way that the first ten lines of input and consumed and printed by the first command and then the rest are consumed by cat.The following code few snippets all print 1 through 10 instead of 1 through 13 like I was expecting.printf '1 2 3 4 5 6 7 8 9 10 11 12 13' \ | tr ' ' '\n' | { perl consume-10-lines-and-exit; cat; }printf '1 2 3 4 5 6 7 8 9 10 11 12 13' \ | tr ' ' '\n' | ( perl consume-10-lines-and-exit; cat )printf '1 2 3 4 5 6 7 8 9 10 11 12 13' \ | tr ' ' '\n' | sh -c 'perl consume-10-lines-and-exit; cat'Is there a construction for sequencing commands so they will read input from stdin until they exit and then the next command will continue where the previous one left off?
How to make two commands consume input from stdin sequentially?
shell
Your problem is that perl is using buffered input, so it reads ahead beyond the lines you want to consume. Try this byte-by-byte version:
perl -e 'use strict;
use warnings FATAL => "all";
for (my $i = 0; $i < 10; $i++) {
  my $line;
  while (sysread(*STDIN, my $char, 1) == 1) {
    $line .= $char;
    last if $char eq "\n";
  }
  print $line;
}'
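For comparison only (my illustration, not part of the original answer), the same byte-at-a-time idea can be written in Python with unbuffered os.read, so that nothing beyond the ten lines is pulled out of the pipe:
#!/usr/bin/env python
import os, sys

# Read exactly 10 lines from stdin, one byte at a time, so no readahead
# buffering steals input from the next command in the pipeline.
for _ in range(10):
    line = b''
    while True:
        ch = os.read(0, 1)      # unbuffered read of a single byte from fd 0
        if not ch:              # EOF
            break
        line += ch
        if ch == b'\n':
            break
    sys.stdout.write(line.decode())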
_unix.9669
When configured accordingly (set header_cache=) mutt saves the mail headers in a cache file. That could be used to generate mail statistics. Does anybody know something about the file format? Are there any tools available to extract the information contained? (Besides strings, grep, awk and the like)
How can I generate email statistics from mutt header cache?
mutt
Short answer: it's entirely possible that the cache will not be comprehensive. If you delete mail and hcache later recomputes the header cache for that mailbox, your stats will not include mail from before the deletion.
If you don't have access to the mail logs for your server, do you have access to a filter mechanism, e.g. procmail? You could use that to generate an alternative log for analysis.
Otherwise, can you poll your mailbox with a program that can generate a log of mail received? Something like an offlineimap filter, or fetchmail/retchmail combined with some hashing and caching.
Longer answer: The cache file is a DBM-style database. Depending on the exact build options for your mutt, it could be one of QDBM, tokyo cabinet, gdbm or Berkeley DB (BDB); which all implement a variation of BDB's API.
I believe that it is unlikely you can reliably read the DB unless you use the right library implementation. ldd tells me my local mutt uses the tokyo cabinet implementation:
$ ldd /usr/bin/mutt
libtokyocabinet.so.8 => /usr/lib/libtokyocabinet.so.8 (0xb74f2000)
You would then need to write a program, using that library, to query the BDB stored within the cache file. There are bindings for Perl, Ruby, Lua, Java, and of course C.
It would appear that headers are stored as values in the DB, indexed by a CRC. From what I can tell, the CRC is derived from the path to a mailbox, which implies that the stored headers are the headers for all mail in that mailbox. So your program is essentially going to end up with a buffer containing all headers for all mail in a given mailbox. I don't think it will be much more useful than pulling the headers from all mail currently in your mailbox (and given the short answer above, not guaranteed to be more reliable).
_opensource.5480
I manage a project with a very long open source history - Zikula (and https://github.com/zikula/core).Zikula grew out of PHPNuke and PostNuke. Most of the oldest code is long gone, but some legacy remains. Early documentation states Zikula is free software released under the GPL license!. Most files within the project have a header which directs to a copy of the GPL or LGPL and there is a NOTICE which discusses the mixture of licenses within. Additionally, we depend heavily on Symfony which uses MIT and contributions since beginning on Github in March 2010 are tagged as MIT. We also have many vendors within the project of various licensing.We have a ticket requesting some clarity on the licensing and I am unable to answer. First, because of the information above and second because of my lack of understanding.I think I would prefer Zikula be licensed with a permissive license like MIT or LGPL (as I believe they are?). The GPL seems to be too restrictive for our case, but likely was the dominant option at the time (15 - 20 years ago?). From what I understand the GPL is infections and subjugates other licenses within the project.How can I straighten out this spaghetti? I'm a coder, not a lawyer. How do I audit the current code base and know what was contributed under what licensing and how do I ensure compliance in the future? How to I mix these licenses together or change and relicense as MIT/LGPL or similar?
How to Audit licensing of old project?
licensing;license recommendation;license compatibility;relicensing;multi licensing
This is going to be long, because changing the license from the GPL to a more permissive license is really complicated for a project that is big, old, and has many contributors.Which licenseAn open-source license should not be chosen because it is popular, but because it is aligned with your goals:Strong copyleft licenses like the GPL (and for web apps: the AGPL) try to maximize freedom for end users of any application using this code. This limits how other developers can use the code.Weaker copyleft licenses like the LGPL don't ensure end-user freedoms for the complete application, but only to those components subject to the license. This allows developers to incorporate such code into proprietary projects under certain conditions, but does not allow them to turn the code proprietary.Permissive licenses like the MIT license and Apache License 2.0 try to maximize freedoms for developers at the expense of end users. Developers can create and distribute modified versions without having to publish their source code. For new projects, the Apache License 2.0 should probably be strongly preferred since it includes a contributor patent grant.The license change proccessA license change is a social problem. You will need buy-in and agreement of your community. If the idea of a license change is received positively, you can:Stop accepting contributions unless the contributors explicitly agree to relicense their changes to the new license.Contact all copyright holders of all past contributions, and ask them to license their contributions to your project under the new license. You should keep a permanent record of their consent to this license change. Ideally you get a signed letter. In practice, I guess using GitHub issues would be OK. You can ping contributors in an issue with @example mentions.Note that the contributors don't always personally hold the copyright to their contributions, e.g. if the copyright belongs to their employer. You would then have to get permission from their employer at the time.Wait for the responses to roll in. This may take multiple months. Remember that contributors may be on a vacation, may have shifted their focus away from open-source contributions, or may be dead. You can try to follow up if they don't respond in a reasonable time frame.If everyone agreed, you can change all license headers and publish the project with the changed license.A note on licensing new contributions differently in a GPL projectI see your pull request template specifies the MIT license. This is a great step to give you maximum flexibility during this relicensing.But since your current license is GPL, any contributions are derivative of existing GPL code in the project and therefore also have to be GPL-licensed. Your contributors do not have the right to issue them under the new license unless you can give them the code under the new license, which requires that all previous contributions have been relicensed.These changes are therefore not MIT-licensed, but the contributors have given you the option of relicensing them later under the MIT license or a compatible license. If a contributor has given you this option for all their contributions, there is no need to contact them about the license change.Dealing with code where consent to license change could not be obtainedIf not everyone agreed to the license change, this becomes more complicated. Silence is not consent! 
If someone does not respond, you have to assume that they oppose the license change.If a contributor died, note that their copyright term extends for 70 years after death in most jurisdictions. You can try to contact the current copyright owners, most likely the deceased contributor's estate.If you have anonymous or pseudonymous contributions, relicensing is exceedingly difficult and I won't discuss that here.If a contributor only made very minor contributions that do not pass the threshold of originality, then these contributions are not subject to copyright and you do not need their permission to relicense the project including these licenses. Where this threshold is set depends on the case law in your jurisdiction. This threshold does not give you a right to use these contributions, but just a possible legal defence when accused of copyright violation in respect to these changes. I would be uncomfortable relying on this for anything more substantial than typo fixes.You can track the license status independently for each file or component. The possible statuses are:The file or component still includes GPL code.All past and present contributors agreed to license change, but the file or component directly or indirectly depends on GPL code.All past and present contributors agreed to license change, and the file or component has no dependencies on GPL code.Only in the last case can you update the file to display the copyright/license header for the new license. This might allow you to immediately relicense some components, if your project has a suitable architecture (a win for decoupling and inversion of control!). But while even one piece of GPL code is still present in the project, the project as whole remains subject to the GPL.For the remaining GPL files, you can try to rewrite them so that they no longer include GPL parts. This is quite tricky because the GPL is a copyleft license: although the other authors of the file agree to the license change, their contributions are derived from GPL code so they can only license their changes under the GPL, not under the new license. It is therefore not sufficient to just rewrite any lines touched by a contributor who didn't agree to the license change. You will have to rewrite the complete file or component from scratch, preferably as a clean-room implementation.As an additional difficulty, GPL code may have been copypasted within the project. Again, the pasted code and any code derived from the pasted code may not be relicensed until the original author has agreed to the license change. Auditing for this might be very difficult, unless the problematic contributions are fairly recent and comparatively minor.ConclusionDepending on your goals, resources, and community, this might be over fairly quickly, or be a long process that extends across multiple months or even years. And it could be the case that significant authors do not agree with the license change, thus requiring unreasonable effort to eliminate their contributions. In that case, you may want to accept that the project has been locked in to the GPL, and clarify your documentation to reflect this license.
_vi.4172
I am pretty new to Vim and I need some help with my work-flow developing in C++. Currently, I am using Sublime Text as my primary editor for the following work-flow: let's say I am working on a RingBuffer class, for which I have
ring_buffer.h
ring_buffer.cpp
unittest\ring_buffer_mock.h
unittest\ring_buffer_test.cpp and
unittest\ring_buffer.makefile.
Each makefile has its own set of dependencies. Sublime Text allows you to add a build system, which runs a shell command defined by the user in the project file, and parses the output. To test the Ring Buffer, I go to menu->tools->BuildSystem->RingBuffer. So here is the question:
Can you recommend a good work-flow in Vim?
Is it possible in a local .vimrc to specify a makefile, such as ring_buffer.makefile?
EDIT! First off, set makeprg=make\ -f\ ring_buffer.makefile works well, but it does not offer any automation. So I am going to ask a second question, which can be found here: Determining makefile based on source file name
Specify Makefile
vimrc
null
_webmaster.16248
According to the apache FilesMatch docs:The FilesMatch directive provides for access control by filenameBasically, I only want to set an expires header for resources that have a 10 digit cache buster id appended to the name. So, here is my attempt at such a thing in my httpd.conf<FilesMatch (jpg|jpeg|png|gif|js|css)\?\d{10}$> ExpiresActive On ExpiresDefault now plus 5 minutes</FilesMatch>And here is an example of a resource I want to match:http://localhost:3000/images/of/elvis/eating-a-bacon-sandwich.png?1306277384Now obviously my FilesMatch regexp is not matching so I am guessing 1 of 2 things is happening. Either my regexp is wonky or the '?1231231231' cache busting part of the file is not part of what apache considers part of the filename. Can anybody confirm and/or give me a way to cache only those resources that will not persist beyond the next deploy?
Apache FilesMatch regexp: Can it match by the cache buster 10 digit (rails generated) following the filename?
apache2;httpd.conf
null
_softwareengineering.158603
I was going through the source code of an open source framework, where I saw a variable payload mentioned many times. Any ideas what payload stands for?
What does the term Payload mean in programming
terminology;variables
The term 'payload' is used to distinguish between the 'interesting' information in a chunk of data or similar, and the overhead to support it. It is borrowed from transportation, where it refers to the part of the load that 'pays': for example, a tanker truck may carry 20 tons of oil, but the fully loaded vehicle weighs much more than that - there's the vehicle itself, the driver, fuel, the tank, etc. It costs money to move all these, but the customer only cares about (and pays for) the oil, hence, 'pay-load'.
In programming, the most common usage of the term is in the context of message protocols, to differentiate the protocol overhead from the actual data. Take, for example, a JSON web service response that might look like this (formatted for readability):
{
  "status": "OK",
  "data": {
    "message": "Hello, world!"
  }
}
In this example, the string "Hello, world!" is the payload, the part that the recipient is interested in; the rest, while vital information, is protocol overhead.
Another notable use of the term is in malware. Malicious software usually has two objectives: spreading itself, and performing some kind of modification on the target system (delete files, compromise system security, call home, etc.). The spreading part is the overhead, while the code that does the actual evil-doing is the payload.
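As a small added illustration (not from the original answer), code that consumes such a response typically peels away the envelope and keeps only the payload; in Python, for instance:
import json

raw = '{"status": "OK", "data": {"message": "Hello, world!"}}'

envelope = json.loads(raw)                 # whole response: payload + overhead
if envelope["status"] == "OK":             # protocol bookkeeping
    payload = envelope["data"]["message"]  # the part the recipient actually wants
    print(payload)                         # -> Hello, world!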
_webmaster.79668
I have to fix a website overloaded. The website doesn't load most times with a 503 error. I want to temporally stop it to everybody but me, so I want to deny all the IPs but mine.The server has several WordPress sites within the same hosting and domain. I have found the possibily of deny the access of the IPs I want (black listing). Unfortunatelly, I have not find how to white list the access to it. Is it possible?
CPanel: how to whitelist the access of the website?
cpanel;filtering
null
_scicomp.24187
Recall that a unit lower triangular matrix $L\in\mathbb{R}^{n\times n}$ is a lower triangular matrix with diagonal elements $e_i^{T}L e_i = \lambda_{ii} = 1$. An elementary unit lower triangular column form matrix, $L_i$, is an elementary unit lower triangular matrix in which all of the nonzero subdiagonal elements are contained in a single column. For example, for $n = 4$
$$L_1 = \begin{pmatrix}1 & 0 & 0 & 0\\\lambda_{21} & 1 & 0 & 0\\\lambda_{31} & 0 & 1 & 0\\\lambda_{41} & 0 & 0 & 1\\\end{pmatrix} \ \ \ L_2 = \begin{pmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & \lambda_{32} & 1 & 0\\0 & \lambda_{42} & 0 & 1\\\end{pmatrix} \ \ \ L_3 = \begin{pmatrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & \lambda_{43} & 1\\\end{pmatrix}$$
Our first task was to show that any unit lower triangular column form matrix, $L_i\in\mathbb{R}^{n\times n}$, can be written as the identity matrix plus an outer product of two vectors, i.e., $L_i = I + v_i w_i^{T}$ where $v_i\in\mathbb{R}^{n}$ and $w_i\in \mathbb{R}^n$.
solution - Since only the $i$-th column of $L_i$ differs from the identity matrix the outer product $v_i w_i^{T}$ must have the same structure. This implies that $w_i = e_i$ and it follows that $v_i$ is added to the $i$-th column of $I$ to define $L_i e_i$. Since only elements below the main diagonal element are different from $I$, it follows that $v_i$ has a lower structure to its potentially nonzero elements. This is often indicated in the notation by using $l_i$ instead of the generic $v_i$. The conditions on the vector are $$l_i^{T}e_j = \begin{cases}0 \ & 1\leq j \leq i\\\lambda_{ji} \ & i+1\leq j \leq n\end{cases}$$ and the expression is $L_i = I + l_i e_i^{T}$.
Now the question I have is the following:
i.) Suppose $L_i\in\mathbb{R}^{n\times n}$ and $L_j\in\mathbb{R}^{n\times n}$ are elementary unit lower triangular column form matrices with $1\leq i < j \leq n-1$. Consider the matrix product $B = L_i L_j$. Determine an efficient algorithm to compute the product and its computational and storage complexity.
ii.) Suppose $L_i\in\mathbb{R}^{n\times n}$ and $L_j\in\mathbb{R}^{n\times n}$ are elementary unit lower triangular column form matrices with $1\leq j \leq i \leq n-1$. Consider the matrix product $B = L_i L_j$. Determine an efficient algorithm to compute the product and its computational and storage complexity.
The only difference between (i) and (ii) is the inequalities, as you can see. I have been told that (i) requires no computation but I don't understand why. I am quite confused about these types of problems. Any suggestions are greatly appreciated.
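As a quick sanity check of that representation (added here for illustration, not part of the original question), take the $n=4$ matrix $L_1$ above: with $l_1 = (0, \lambda_{21}, \lambda_{31}, \lambda_{41})^T$ and $e_1 = (1,0,0,0)^T$,
$$l_1 e_1^{T} = \begin{pmatrix}0 & 0 & 0 & 0\\\lambda_{21} & 0 & 0 & 0\\\lambda_{31} & 0 & 0 & 0\\\lambda_{41} & 0 & 0 & 0\end{pmatrix},$$
so $I + l_1 e_1^{T}$ reproduces $L_1$ exactly.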
Efficient algorithm for a matrix product
linear algebra;algorithms;matrices;complexity
null
_softwareengineering.289884
I originally started writing a question on StackOverflow about a clever way to optimise keeping a version history of large text fields in a relational database table, possibly by using deltas instead of incurring the storage cost of a full copy of the changed text in an audit table on each update, which is regularly suggested as the simplest way to keep version history in a database.As I was writing it, I began to wonder, what exactly do I mean by incur the storage cost, really? I've read in a few places on the internet that the complete works of Shakespeare uncompressed comes to around 5Mb, so assuming that's true, 1TB could hold roughly 200,000 copies.That is a big book, with a lot of text in it. 200,000 is a lot of copies of that book. A 1TB spinning disk will not exactly break the bank these days, either.When we're talking about text in a database in 2015, is it wasted effort to think about compression, optimisation, or even deliberately minimising inputs, or is storage cheap enough now that I'm never going to have to care about hitting an upper limit, in practice, and I should instead optimise for app code and schema simplicity?
Is using up 'too much' storage space a practical concern when storing only text in a database in 2015?
database;storage;cost estimation
null
_softwareengineering.127007
I am seeking some ideas for how to build and install software with some parameters. These including target OS, target platform CPU details, debugging variant, etc.Some parts of the install are shared, such as documentation and many platform independent files, others are not, such as 64 and 32 bit libraries when these are separated and not together in a multi-arch library.On big networked platforms one often has multiple computers sharing some large server space, so there is actually cause to have even Windows and Unix binaries on the same disk.My product has already fixed an install philosophy of $INSTALL_ROOT/genericname/version/ so that multiple versions can coexist.The question is: how to manage the layout of all the other stuff?
File system layout for multiple build targets
configuration;builds;install
null
_hardwarecs.1885
I just built a new gaming computer. It's a skylake build with the i7 6700k, Asus Maximus viii hero, and some g.skill ram (32gb). I'm about to buy a new video card, and I've been looking over the options for a couple of weeks.I've decided that I want to spend between $650 and $700 and get a 980ti.This turned out to be a pretty large point of contention in the forums I've been looking through, so I figured I'd check here.In the benchmarks I've looked at, I've seen pretty good comparisons here and here between theAsus Strix OC, Gigabyte G1 Gaming, MSI Gaming 6G, EVGA SC.I've narrowed it down to the Asus Strix OC, MSI Gaming 6G, and adding in the Gigabyte Xtreme. Based on different reviews, I decided on the Strix at one point, but I keep seeing people complain about their heatsink not touching their processor.I've had a couple of bad experiences with Gigabyte products, so the xtreme makes me a little nervous.And the MSI Gaming clashes terribly with my mobo/case (also, it's the least powerful of the three cards chosen).Any advice on how to decide? I know they are all amazing cards, and there probably isn't a wrong choice, but this is my first REAL gaming rig, and don't want any regrets. I'm loving the cpu/mobo so far, and really want to stay happy. Feel free to throw any other card in that price range at me, and I'll look into it. Any advice is welcome.Side NoteThis really shouldn't be a side note, but I'll be using this computer for general use. I will do some amount of game development on it, but not much, and some general programming, but the video card is obviously so I can get max settings on any game I want for a while.Edit1In order to make this less opinion based, I've changed the actual question. I know I want the Strix because of how fantastic its performance in benchmarks is, but I'm worried about the past problems with their heatsink. Does anyone know if Asus has fixed the problem with the Strix cards? If no one knows, I think I'd rather go with the Gigabyte Xtreme, to avoid having to send a card back.
Has Asus fixed the heat sink problem with the Strix 980ti
gaming;graphics cards;game development
null
_codereview.2214
Is it good or bad practice (or, when is it appropriate) to use shorthand methods?Lets pretend we're using jQuery here, because it has a lot of them.:jQuery.ajax()jQuery.get()jQuery.getJSON()jQuery.getScript().load()jQuery.post()These are all really just shorthand forms for the ajax function, with the exception of load which combines another function later on.I can use $.ajax( dataType: 'script',) to do the same thing as getScript, but why?This to me seems insane, and overly complicates the API. Is this good, or bad, or what? Why isn't there a getCSS and a PostCSS and a postScript and a putScript too?
shorthand methods
javascript;jquery
Hm, good question. I guess I would say that the number one rule of programming is DRY: Don't Repeat Yourself. You could argue that any function is just shorthand for whatever it does. But whenever you find yourself typing the same thing over and over and over, even if that thing is only two or three lines, it's worth at least asking yourself, should I make this into a function?
Now the authors of jQuery don't know exactly what kind of code you're writing, they just have a general idea of what JavaScript code out there in the world looks like. So maybe they originally just had $.ajax() and then over time they realized, you know, 99% of the time, people are always doing either GETs or POSTs, it's rare that you do something like:
var method;
if (someComplicatedThing()) method = 'GET';
else method = 'POST';
$.ajax({method: method});
so why not shorten it up a little bit?
I agree it's possible to overdo it though, and have a zillion little functions that are all variants of the same idea, making the API seem overwhelming. So it's a question of finding the right balance, I guess.
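To see the same trade-off outside jQuery (an editorial illustration, not the answerer's), here is how thin shorthand wrappers tend to grow around one general-purpose function in Python; get_json and post play the role of $.getJSON() and $.post() over $.ajax():
import json
from urllib.request import Request, urlopen

def request(url, method="GET", data=None, headers=None):
    # General-purpose entry point, analogous to $.ajax().
    req = Request(url, data=data, headers=headers or {}, method=method)
    with urlopen(req) as resp:
        return resp.read()

def get_json(url):
    # Shorthand for the overwhelmingly common case, analogous to $.getJSON().
    return json.loads(request(url))

def post(url, data):
    # Shorthand for a simple POST, analogous to $.post(); data must be bytes.
    return request(url, method="POST", data=data)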
_unix.340770
If we only got disks from iSCSI (additionally multipath) and no local disks, how can we install a CentOS 7 on them?
Install CentOS 7 without any local disk?
centos;system installation;iscsi;san
null
_codereview.19654
I've written a script to generate DNA sequences and then count the appearance of each step to see if there is any long range correlation. My program runs really slow for a length-100000 sequence replicated 100 times. I have already run it for more than 100 hours without completion.
#!/usr/bin/env python
import sys, random
import os
import math

length = 10000
initial_p = {'a':0.25,'c':0.25,'t':0.25,'g':0.25}
tran_matrix = {'a': {'a':0.495,'c':0.113,'g':0.129,'t':0.263}, 'c': {'a':0.129,'c':0.063,'g':0.413,'t':0.395}, 't': {'a':0.213,'c':0.495,'g':0.263,'t':0.029}, 'g': {'a':0.263,'c':0.129,'g':0.295,'t':0.313}}

def fl():
    def seq():
        def choose(dist):
            r = random.random()
            sum = 0.0
            keys = dist.keys()
            for k in keys:
                sum += dist[k]
                if sum > r:
                    return k
            return keys[-1]
        c = choose(initial_p)
        sequence = ''
        for i in range(length):
            sequence += c
            c = choose(tran_matrix[c])
        return sequence
    sequence = seq()
    # This program takes a DNA sequence calculate the DNA walk score.
    #print sequence
    #print len
    u = 0
    ls = []
    for i in sequence:
        if i == 'a' :
            #print i
            u = u + 1
        if i == 'g' :
            #print i
            u = u + 1
        if i== 'c' :
            #print i
            u = u - 1
        if i== 't' :
            #print i
            u = u - 1
        #print u
        ls.append(u)
    #print ls
    l = 1
    f = []
    for l in xrange(1,(length/2)+1):
        lchange =1
        sumdeltay = 0
        sumsq = 0
        for i in range(1,length/2):
            deltay = ls[lchange + l ] - ls[lchange]
            lchange = lchange + 1
            sq = math.fabs(deltay*deltay)
            sumsq = sumsq + sq
            sumdeltay = sumdeltay + deltay
        f.append(math.sqrt(math.fabs((sumsq/length/2) - math.fabs((sumdeltay/length/2)*(sumdeltay/length/2)))))
        l = l + 1
    return f

def performTrial(tries):
    distLists = []
    for i in range(0, tries):
        fl()
        distLists.append(fl())
    return distLists

def main():
    tries = 10
    distLists = performTrial(tries)
    #print distLists
    #print distLists[0][0]
    averageList = []
    for i in range(0, length/2):
        total = 0
        for j in range(0, tries):
            total += distLists[j][i]
            #print distLists
        average = total/tries
        averageList.append(average)
        # print total
    return averageList

out_file = open('Markov1.result', 'w')
result = str(main())
out_file.write(result)
out_file.close()
Generating DNA sequences and looking for correlations
python;beginner;algorithm;bioinformatics
null
_scicomp.27197
So I have been investigating a problem to get a glider with control of its elevator to fly as far as possible from any given initial state. To keep this simple, we will view this in 2D space with the following differential equation:\begin{align}\dot{\boldsymbol{q}} = \dot{\begin{bmatrix} x \\y \\\theta \\\phi \\\dot{x}\\\dot{y}\\\dot{\theta}\end{bmatrix}} &= \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \\u \\-\left(f_w \sin\theta + f_e \sin\left(\theta + \phi\right)\right)m^{-1} \\\left(f_w \cos\theta + f_e \cos\left(\theta + \phi\right)\right)m^{-1} - g\\\left(f_e (l \cos\phi + l_e ) - f_w l_w\right) I^{-1}\end{bmatrix} \\\end{align}and the following relationships:\begin{align}f_w &= \rho S_w |\dot{\boldsymbol{x}}_w|^2 \sin\alpha_w \\f_e &= \rho S_e |\dot{\boldsymbol{x}}_e|^2 \sin\alpha_e \\\alpha_w &= \theta - \tan^{-1}\frac{\dot{y}_w}{\dot{x}_w} \\\alpha_e &= \theta + \phi - \tan^{-1}\frac{\dot{y}_e}{\dot{x}_e} \\\dot{x}_w &= \dot{x} + l_w \dot{\theta} \sin\theta \\\dot{y}_w &= \dot{y} - l_w \dot{\theta} \cos\theta \\\dot{x}_e &= \dot{x} + l\dot{\theta}\sin\theta + l_e\left(\dot{\theta} + \dot{\phi}\right)\sin\left(\theta + \phi\right)\\\dot{y}_e &= \dot{y} - l\dot{\theta}\cos\theta - l_e\left(\dot{\theta} + \dot{\phi}\right)\cos\left(\theta + \phi\right) \\\dot{\boldsymbol{x}}_w &= \dot{x}_w \hat{e}_x + \dot{y}_w \hat{e}_y \\\dot{\boldsymbol{x}}_e &= \dot{x}_e \hat{e}_x + \dot{y}_e \hat{e}_y \\\end{align}where $u$ is the control, essentially a choice for $\dot{\phi}$, $\phi$ is the relative elevator angle with respect to the pitch angle $\theta$, $x$ and $y$ are the horizontal and vertical positions, $\dot{x}$ and $\dot{y}$ are the horizontal and vertical speeds, $\dot{\theta}$ is the angular velocity of the glider, $f_w$ is the net aerodynamic force magnitude from the main wing, $f_e$ is the net aerodynamic force magnitude from the elevator wing, $g$ is gravitational acceleration constant, and the other constants are tied to glider physical traits.The values being used for the various constants are the following:\begin{align}m &= 0.05 \\g &= 9.81 \\\rho &= 1.292 \\S_w &= 0.1 \\S_e &= 0.025 \\I &= 6 \cdot 10^{-3} \\l &= 0.35 \\l_w &= -0.03 \\l_e &= 0.05\end{align}and the initial condition I primarily use to test is the following:\begin{align}\boldsymbol{q}_0 = \begin{bmatrix} x_0 \\y_0 \\\theta_0 \\\phi_0 \\\dot{x}_0\\\dot{y}_0\\\dot{\theta}_0\end{bmatrix} &= \begin{bmatrix} 0 \\ 2 \\ 0 \\0 \\6 \\0\\0\end{bmatrix} \\\end{align}I am experimenting with using Dynamic Programming to tackle this problem when $u$ is constrained such that $-1 \leq u \leq 1$. Since Dynamic Programming is memory intensive for large state spaces, I recognized that for an optimal distance controller, I don't actually need the first two states, $x$ and $y$. With this change, I defined $\boldsymbol{q} = \lbrack \theta, \phi, \dot{x}, \dot{y}, \dot{\theta} \rbrack^T$ along with the associated differential equations truncation. 
I also define the discrete dynamical system using the following:\begin{align}\boldsymbol{q}_{k+1} &= \boldsymbol{q}_k + \Delta t \dot{\boldsymbol{q}}\left(\boldsymbol{q}_k, u_k\right)\\&= f\left(\boldsymbol{q}_k,u_k\right)\end{align}To go along with this change, I made the overall optimization problem to maximize the following:\begin{align}V &= \sum_{i=1}^N \Delta t \dot{x}_i - \gamma \dot{y}^2_i\\\text{subject to}& \begin{matrix} -1 \leq u_k \leq 1 \\ \boldsymbol{q}_{k+1} = f\left(\boldsymbol{q}_k,u_k\right)\end{matrix}\end{align}because the cost function should approximate the value for $x$ at the end of a flight, which is obviously what I would have optimized if using the full system of equations.After doing some experiments, it seems the cost function chosen doesn't really work well for reasons I am unsure of. However, if I change the cost function to the following, it performs much better:\begin{align}V &= \sum_{i=1}^N \theta_i^2 + \gamma \dot{\theta}_i^2\end{align}for some $0 \leq \gamma \lt 1$. I chose this second cost function thinking one thing that might help a long flight is the glider remaining level instead of diving too soon and losing a lot of energy. It works decent, but I am still wondering why the first isn't doing too well.With all this said, is there any problems with the first optimization formulation that stand out?
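As an editorial illustration only (not part of the original question), the discretization and the first cost can be evaluated for a candidate control sequence with a short forward-Euler rollout; qdot below is assumed to implement the truncated 5-state dynamics $[\theta, \phi, \dot{x}, \dot{y}, \dot{\theta}]$ defined above:
import numpy as np

def rollout_cost(q0, controls, qdot, dt=0.01, gamma=0.1):
    # Forward-Euler rollout of q_{k+1} = q_k + dt * qdot(q_k, u_k),
    # accumulating V = sum_k dt*xdot_k - gamma*ydot_k^2.
    q = np.asarray(q0, dtype=float)         # q = [theta, phi, xdot, ydot, thetadot]
    V = 0.0
    for u in controls:
        q = q + dt * np.asarray(qdot(q, u), dtype=float)
        V += dt * q[2] - gamma * q[3] ** 2  # reward ground speed, penalise sink rate
    return V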
Optimal Control using Dynamic Programming - Optimizing for Furthest Distance
optimization;constrained optimization;optimal control;dynamic programming
null
_cs.71937
I was reading Chapter 14 of Papadimitriou's Computational Complexity book, about Oracle Machines. Papadimitriou defines, in definition 14.3 (pages 339-340), Oracle Turing Machines with oracle a language $A \subseteq \Sigma^*$:
The computation of $M^{?}$ with oracle access $A$ on input $x$ is denoted $M^A(x)$.
So far so good. In the next paragraph, he writes: If $\mathcal{C}$ is any deterministic or non-deterministic complexity class, we can define $\mathcal{C}^A$ to be the class of all languages decided by machines of the same sort and time bound as $\mathcal{C}$, that have oracle access to $A$.
My question is: Given any complexity classes $B$ and $C$, can we always define $B^C$?
This is motivated by a question I posted (and deleted) at TCS.stackexchange where, using Papadimitriou's notation, I defined $B^C$ for complexity classes $B$ and $C$, but I received criticism (besides the context of the question) that I am not allowed to do that because oracle is not an operation defined on languages and hence complexity classes. Does this contradict the extract from Papadimitriou's book (where he explicitly defines $B^C$ for a complexity class $B$)?
My understanding is that we can define $B^C$ for a complexity class (i.e., a set of languages) $B$ if $B$ can be defined by a Turing Machine model. If yes, why is it not explicit in Papadimitriou's book?
Precise definition of oracle classes $A^B$
complexity theory;computability;computation models;complexity classes;oracle machines
null
_softwareengineering.178727
Here's a situation that usually happens in some companies:Announce interesting product X.Promise a release date.Release on the promised released date, ready or not.Users discover and report defects.Send patch after patch after patch after patch after patch.My question is: Ummm, what could be the factors that would lead them to tolerate these undesirable practices? So, in the name of quality, what can be practically and realistically improved in those practicies?I can think of time constraints, user feedback, sponsors pressuring the company, lack of money.
Why would companies allow these practices?
project
null
_cs.29306
So most resources providing Sudoku puzzles assign a difficulty category to each puzzle, even some I've seen with 15 or more difficulty categories. But what is a good way to assign these difficulty categories? If enough human puzzle solvers were used, the average time for a human to complete a puzzle and the percentage of people who successfully solved the puzzle could be computed for the human sample, and difficulty categories assigned accordingly. But it seems like there should be predictable scenarios that keep appearing as various puzzles are being solved that affect the average human difficulty, which could be automatically detected as a computer solves the puzzle and then these patterns could be assembled into a predicted average difficulty for humans. Are there / what are good techniques to do this? Maybe machine learning with enough training data of human performance on sample puzzles?
Are there ways to automatically (no human testing) measure a $9 \times 9$ Sudoku puzzle's average hardness for a human to solve?
algorithms;machine learning;board games;sudoku
null
_cogsci.9097
Given the following hypothetical situation:An individual discovers that his girlfriend has cheated on him, but decides to continue to date her after she assures him that she will not do it again. Upon reading messages on her phone he becomes suspicious and begins to believe that she will cheat on him again. He simultaneously believes he should break up with his girlfriend but also doesn't want to break up with her despite having absolutely no support for anything but monogamy.Is that a good example of Cognitive Dissonance?
Cognitive Dissonance
cognitive psychology;cognitive dissonance
null
_unix.94456
Is it possible to make commands in crontab run with bash instead of sh? I know you can pass commands to bash with -c, but that's annoying and I never use sh anyway.
How to change cron shell (sh to bash)?
shell;cron
You should be able to set the environment variable prior to the cron job running:SHELL=/bin/bash5 0 * * * $HOME/bin/daily.job >> $HOME/tmp/out 2>&1
_scicomp.8193
Recently I have been using Umfpack with Intel MKL BLAS. To link the library to a program one has to link mkl_rt.lib / mkl_rt.so. However, there is no indication of which version of the library (sequential or parallel) is linked. Can anyone help? Thanks in advance.
How to tell which (sequential or parallel) version of Intel MKL is linked?
blas;intel mkl
I believe that MKL has the threaded parallel and serial functions in one unified library. You can try setting OMP_NUM_THREADS or MKL_NUM_THREADS to a range of values and see how the performance varies. Setting either to 1 will give you the serial behavior.
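A crude way to check that on a given machine (added illustration; it assumes your NumPy build is actually linked against MKL, which is not a given) is to time one large BLAS call under different thread settings:
import time
import numpy as np

n = 4000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.time()
np.dot(a, b)          # dgemm, dispatched to whatever BLAS NumPy is linked with
print("elapsed: %.2f s" % (time.time() - t0))
Running it as MKL_NUM_THREADS=1 python bench.py and again with MKL_NUM_THREADS=4 (or via OMP_NUM_THREADS) should show a clear difference if the threaded code path is in use; bench.py is just a hypothetical file name for this snippet.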
_codereview.141518
Would appreciate any suggestions or improvements, I'm sure there are many. Especially so on the way I've avoided multiple hits by the missiles; shifting the asteroid into a separate group to enable the animation to continue without registering more hits and hence addition to the score.
P.S. As it stands, my game has a ship at the bottom of the screen which can move horizontally but not vertically. It can fire missiles at a horizontally moving asteroid sprite at the top of the screen. If the asteroid is hit by a missile sprite, it triggers an explosion animation and the sprite is removed. A new asteroid sprite is then spawned.
Sprite:
class MyAsteroid(pygame.sprite.Sprite):
    """My sprite"""
    def __init__(self):
        super().__init__()
        self.height = 0
        self.width = 0
        self.vel_x = 5
        self.velocity_y = 0
        self.hit = False
        self.files = []
        self.images = []
        self.index = 0
        self.asteroid_explosion = pygame.mixer.Sound("explosion.wav")
        for i in range(1,10):
            file = "explosion0.bmp"
            new_file = file.replace("0", str(i))
            self.files.append(new_file)
        for file in self.files:
            self.images.append(load_image(file))

    def ast_hit(self):
        self.hit = True

    def update(self, missile_group, the_screen):
        super().update()
        self.X = self.X + self.vel_x
        if self.X < 0 or self.X > the_screen.width - self.width:
            self.vel_x = -(self.vel_x)
        if self.hit == True:
            self.vel_x = 0
            self.asteroid_explosion.play()
            self.image = self.images[self.index]
            self.index += 1
            if self.index >= len(self.images):
                missile_group.remove(self)

    def draw(self):
        super().draw()

    def load(self, image):
        self.image = image
        self.rect = self.image.get_rect()
        self.width, self.height = image.get_size()

    # X property
    def _get_x(self):
        return self.rect.x
    def _set_x(self, value):
        self.rect.x = value
    X = property(_get_x, _set_x)

    # Y property
    def _get_y(self):
        return self.rect.y
    def _set_y(self, value):
        self.rect.y = value
    Y = property(_get_y, _set_y)

    # position property
    def _get_pos(self):
        return self.rect.topleft
    def _set_pos(self, value):
        self.rect.topleft = value
    position = property(_get_pos, _set_pos)

Hit detection:
for asteroid in asteroid_group:
    asteroid_hit = False
    asteroid_hit = pygame.sprite.spritecollide(asteroid, missile_group,\
                                               True)
    if asteroid_hit:
        asteroid.ast_hit()
        asteroid_group.remove(asteroid)
        hit_group.add(asteroid)
        #asteroid.hit = True
##        for missile in asteroid_hit:
##            missile_group.remove(missile)
        score += 10
Avoiding multiple hits to Sprite by shifting Sprite to separate group
python;python 3.x;pygame
null
_softwareengineering.201579
I have a Java program that takes about an hour to run. While it is running, if I change the source code and recompile it, will this affect the above run?
Recompiling a java project while it is running
java;compiler;runtime
null
_vi.2071
Say I'm editing file foo. I want to copy/write what I have in the buffer to bar and change the buffer to be editing bar instead of foo. I can achieve this with:
:w bar
:e bar
But that has a few problems:
1. If bar is actually /usr/local/share/long/path/to/bar, I really don't want to type that in twice, even with tab completion.
2. It reloads the file, potentially messing with the settings/folds/etc. I had for that buffer.
3. The working directory is left the same.
1 is the biggest problem I'd like a solution to address; 2 would be really helpful, 3 is more of a nice to have. Is there a cleaner way to do this?
How can I copy the current file and start editing the copy instead of the current file?
save;multiple files;file operations
null
_cs.66144
I am using a sensing board able to detect magnetic signals between the board and a display.I have a set of objects that are represented (each of them) by a unique set of points (magnets) with a particular shape. For example: object #1 is made by three points that form an equilateral triangle with side length 1cm; object #2 is made by three points that form a right triangle with sides 3cm, 4cm, 5cm; object #3 is made by three aligned points with distance 2cm; and so on. I can have a multiplicity of objects with unique patterns.Now I have a list of points with the coordinates w.r.t. the Cartesian plane, and I need to match them referring to the patterns I got from the objects. I also know that every point must be matched, therefore I can minimize the overlapping errors. In practice, every point in the set can belong to maximum one object, and at the same time also it must belong to an object of the initial set.Any idea on how to do that in an efficient way?
Group points by given shapes
computational geometry
In the general case a problem like this is NP, however in the vast majority of real cases it should be easy.20 points make 1140 triangles so it shouldn't be hard to pick out the triangles most similar to your basic shapes (unless the shapes can be more complicated). A little ugly backtracking may be needed when the top scoring triangles overlap.Also, if most magnets move continuously, you can easily map old points to new points and old triangles to new triangles.What I'm talking about are fairly obvious methods. There may be smarter ways to do this, but you don't necessarily need them.
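A rough sketch of that idea in Python (mine, not the answerer's; the template side lengths and the greedy assignment are only illustrative):
from itertools import combinations
from math import dist

templates = {
    "object1": [1.0, 1.0, 1.0],   # equilateral triangle, side 1 cm
    "object2": [3.0, 4.0, 5.0],   # right triangle
    "object3": [2.0, 2.0, 4.0],   # three collinear points, 2 cm apart
}

def sides(p, q, r):
    return sorted([dist(p, q), dist(q, r), dist(p, r)])

def match_objects(points):
    # points: list of (x, y) tuples from the sensing board
    scored = []
    for tri in combinations(points, 3):
        s = sides(*tri)
        for name, tmpl in templates.items():
            err = sum(abs(a - b) for a, b in zip(s, tmpl))
            scored.append((err, name, tri))
    scored.sort(key=lambda t: t[0])
    used, result = set(), []
    for err, name, tri in scored:          # greedy assignment, no backtracking
        if not any(p in used for p in tri):
            result.append((name, tri, err))
            used.update(tri)
    return result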
_unix.307090
I created a boot USB drive with Arch Linux and successfully installed it, but didn't like it as much as I thought I would, so I then created a boot USB drive with Debian. The problem is that only the 16GB USB drive is visible in the partition menu during the Debian installation sequence and I would like to install it to the laptop's hard disk. I tried clearing the existing Arch Linux partitions from the disk by re-creating the Arch Linux boot USB and using the shell and parted to remove them, but this didn't make it visible to the Debian installer either. Any ideas?
Debian installer doesn't display hard disk in partition menu
debian;debian installer
null
_unix.301380
I am creating a script that will email errors/warnings from a log. I would like to have this sent every half hour, but I only want to send it if there is a new entry. How would I grep out only the last half hour of errors? The time stamp in the log is in the following format:
< Aug 1, 2016 2:15:29 PM MDT> < Error details.....>
The script so far is:
#!/bin/bash
cat /var/log/logfile.log | egrep -i "error|warning" | tee -a /tmp/log.tmp
"get only last 30 min of errors" | mail -s "Errors/Warning" [email protected]
Is it possible to convert the time stamps (Aug 1, 2016 2:15:29 PM MDT) to epoch time then compare it to the current epoch time, or is there a way with sed/awk/perl to get the last 30 minutes?
Log file grep entries from last 30 min
awk;sed;scripting;perl;date
Great ideas; the simplest is @MelBurslan's suggestion to diff the files.

#!/bin/bash
MAILTO=[email protected]
OFILE=/var/tmp/alerts.tmp
LOG30=/var/tmp/LOG30
LOGNOW=/var/tmp/LOGNOW
HOST=`hostname`

# setup file
if [ -f ${OFILE} ]; then
    cat /dev/null > ${OFILE}
else
    touch ${OFILE}
fi

cat /var/log/logfile.log | egrep -i "error|warning" | tee -a ${LOGNOW}

diff ${LOG30} ${LOGNOW} | tee -a ${OFILE}

if [ -f ${OFILE} ]; then
    echo "Errors" | cat - ${OFILE} > temp && mv temp ${OFILE}
    mailx -r [email protected] -s "Errors" ${MAILTO} < ${OFILE}
fi

rm ${LOG30}
mv ${LOGNOW} /var/tmp/LOG30
rm ${OFILE}
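For completeness, here is a rough sketch of the timestamp route raised in the question, in Python rather than awk/perl, purely for illustration. It assumes every relevant line begins with a stamp like < Aug 1, 2016 2:15:29 PM MDT> and that the machine running it is in the same timezone as the log, since the MDT token is simply discarded:

# Print error/warning lines whose leading timestamp is within the last
# 30 minutes of local time.
import re
from datetime import datetime, timedelta

STAMP = re.compile(r'<\s*(\w{3} \d{1,2}, \d{4} \d{1,2}:\d{2}:\d{2} [AP]M) \w+\s*>')
cutoff = datetime.now() - timedelta(minutes=30)

with open('/var/log/logfile.log') as log:
    for line in log:
        m = STAMP.match(line)
        if not m:
            continue
        when = datetime.strptime(m.group(1), '%b %d, %Y %I:%M:%S %p')
        if when >= cutoff and re.search(r'error|warning', line, re.I):
            print(line, end='')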
_vi.5647
As far as I know, all good scripts/programs start off with a shebang line as the first line:

#!/bin/bash
#!/usr/bin/env python
#!/usr/bin/perl

etc. Is it possible to pass that line to Vim in command mode to generically determine the program to use when executing the current file?

For example, a Perl file needs :! perl %, Python :! python %. If the shebang is already there in the file, is it possible to replace the specific program before the % character (the current file) with the shebang line/argument?

Of course, if the file is executable already, :! ./% works. The idea is to map a key so that, as you write code, simply hitting a shortcut key executes the file, mimicking an IDE. Once again, assume the file is not (yet) executable, since you just started writing it in Vim.
How to pass generic shebang line to shell
command line;external command
I'm not sure this is what you want, but you could try this mapping using the <F7> key:

nnoremap <F7> :<C-U>sil! exe '!' . matchstr(getline(1), '#!\zs.*') . ' ' . shellescape(expand('%:p'), 1) <Bar> redraw!<CR>

matchstr(getline(1), '#!\zs.*') extracts the text after the shebang.
shellescape(expand('%:p'), 1) expands the full path to the current file and protects characters that may have a special meaning for the shell; the second non-zero argument is useful to escape special items such as !, %, # which could be expanded by Vim on the command line.

If you want to see the output in the shell, you could remove :sil! and :redraw!:

nnoremap <F7> :<C-U>exe '!' matchstr(getline(1), '^#!\zs.*') shellescape(expand('%:p'), 1)<CR>
_softwareengineering.76087
I have come across this several times when selling a prepackaged solution. A customer buys the package, which clearly sets out that it can do XYZ, but the customer wanted it to do ABC. The customer then emails for support. I inform the customer that the product was never designed for the purpose they had in mind (integrating it with another product).

The customer asks for a refund as they cannot use the product. This is where I'm in two minds. First, the product is fully functioning and they have now obtained the source code (a PHP script). How am I to know they aren't going to use it anyway and still want a refund? Second, I do feel bad for the customer. If they're being honest, and most are, then they cannot use the product and therefore wasted the money in their eyes. But that wasn't my fault.

Up until now I've refunded the money if requested, but now I'm comparing what I do with how bigger companies deal with this kind of situation. What would they do? Maybe because they're bigger, they don't care about a few refunds every now and then, but to a one-man band like me, every sale is needed!

What is the best way to deal with this kind of situation?
Customer buys software for function it cannot do and then complains. How to resolve?
customer relations;sales
While I agree that in a service industry reputation is a key issue, one thing that nullifies it here is the unlikelihood of word-of-mouth sales, repeat customers, or any of the other hallmarks of a good reputation. If you're a one-man software seller, it's unlikely you're offering a ton of products, particularly if they're as complicated as this one likely is given some of the hints in your comments. While I agree with @George Stocker that the number of these requests points to a potential problem in how clearly your product's capabilities are stated, I also agree (though less aggressively towards customers) with his commenter @SLC that customers tend to be lazy about ascertaining product features.

My opinion (and personal practice for my own side projects) is this:

1. With clearly visible source code, there should be a key-activation mechanism within the software that allows operation for 30/60/90/whatever days. It doesn't have to be enterprise level, suitable for Microsoft or anything, but something that makes it very unattractive to try to get around. During the period, if the product is undesired, their money is refunded and the key no longer works at the end of it. If a refund is not requested, a new key is delivered and no refund is given from then on.

2. If someone is not smart enough to try before they buy, or throws money down on a product without verifying first that it will do what they need, then they deserve to be separated from their money. Make it clear on your website that services and products are offered without refund at all, or only within a certain amount of time, etc. If you use the method I mention in #1, mention that.

3. Research the return policies for software at major companies (software in the box). See if any of them might be compatible with your capabilities. Most won't accept refunds on opened software, or will refund a certain amount minus a restocking fee. When you ship the code, it is considered immediately opened software, and these policies may be helpful to you.

In all aspects of purchasing/selling I involve myself in, I operate under the phrase caveat emptor. It's the responsibility of the purchaser to make sure they know what they're buying. You're not out there smooth-talking these people into buying your software; it's being purchased through your website. They're not being taken for a ride, they're being frivolous with their money, and their carelessness will only end up costing you money in sales and time spent dealing with it.

On the other hand, if you are out there smooth-talking them out of their wallets, give their money back, ya crook.
_cs.6374
I have an n*m matrix updated in real time (i.e. about every 10 ms) with values between 0 and 1024, and I want to work out from that matrix a multitouch trackpad behaviour, which is:

- generate one or more points on the surface given the values in the matrix,
- make each of those points as big as its value.

For example, we can consider the following matrix (with a touch in the middle):

[ [ 12,   7, 12 ],
  [ 12, 129, 19 ],
  [ 12,  11, 22 ] ]

The goal is to mimic the behaviour of a common touchpad (like on every smartphone or laptop). I'm getting values from an evenly distributed matrix of capacitive sensors on a physical object, which are processed by a microcontroller into a matrix, and I want to get the coordinates and weight of one or several points. Of course, I don't expect to have more than 2 or 3 detected points, nor great precision with a matrix that small.

Here are a few example raw logs (9x9 matrix updates):

http://m0g.net/~guyzmo/touch_diag.log
http://m0g.net/~guyzmo/touch_double.log

Edit: thinking about the problem made me consider this idea: I should use some kind of interpolation to increase the resolution of the matrix, and in some way make the new values additive. For instance, imagine we have the following matrix:

[ [ 200, 200, 150 ],
  [ 150, 150,  80 ],
  [  80,  80,  40 ] ]

and we want to interpolate it into something that would look like this (I'm inventing the values, just to expose the idea):

[ [ 200, 400, 200, 175, 150 ],
  [ 175, 200, 175, 150, 125 ],
  [ 150, 170, 150, 125,  80 ],
  [ 100, 125, 100,  80,  60 ],
  [  80,  80,  80,  60,  40 ] ]

I've looked at interpolation algorithms, and it seems the closest to my needs is Hermite interpolation. But although I've read up on interpolation methods, I don't know how to apply one to a matrix.
How to correlate a matrix of values to get a coordinated point?
algorithms;matrices
I finally managed to get what I want. I used Python's scipy interpolate.RectBivariateSpline(x, y, z, kx, ky) to create a 60x60 matrix out of the 3x3 matrix. From that, I used the OpenCV libraries to detect blobs (mainly findContours() and fitEllipse()), so now I get a list of ellipses matching the several touches I have.
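Roughly, that pipeline could look like the sketch below. The grid size, the threshold value and the sample matrix are made-up illustrations, and the findContours() return signature shown is the OpenCV 4 one (OpenCV 3 returns an extra value first):

# Upsample the 3x3 sensor grid with a spline, threshold it, and fit an
# ellipse to each detected blob.
import numpy as np
import cv2
from scipy import interpolate

raw = np.array([[12,   7, 12],
                [12, 129, 19],
                [12,  11, 22]], dtype=float)

# kx=ky=2 because a cubic spline would need at least 4 samples per axis.
x = y = np.arange(3)
spline = interpolate.RectBivariateSpline(x, y, raw, kx=2, ky=2)
fine = spline(np.linspace(0, 2, 60), np.linspace(0, 2, 60))

# Scale to 0..255, threshold, and look for blobs.
img = cv2.normalize(fine, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
_, mask = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

for c in contours:
    if len(c) >= 5:                    # fitEllipse needs at least 5 points
        (cx, cy), (w, h), angle = cv2.fitEllipse(c)
        print("touch at", cx, cy, "size", w, h)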
_webapps.72725
A few months ago comments were loading automatically; now I have to refresh to see new ones. How do I turn auto-load back on?
How can real time comments be turned on?
facebook;facebook timeline
null
_cs.49787
Take the alphabet A = {0,1}. I need to build a regular expression for the language whose words contain at most as many occurrences of the substring 011 as of 110. I tried to figure out what the finite automaton would be, but I'm not too sure. I also tried to prove it isn't regular using the Myhill-Nerode theorem, but the problem is that the language readjusts itself:

110011 (1 110, 1 011)
011110 (1 110, 1 011)
011011 (2 011, 1 110)
110110 (1 011, 2 110)

Now I'm convinced it should be regular, but I don't know how to prove it.

Edit: should it be something similar to $(0^{+}11^{+}+11^{+}0^{+})^{*}110(0^{+}11^{+}+11^{+}0^{+})^{*} + \epsilon$?
Finding a regular expression for a language with more substrings of one type than another
regular languages;finite automata;regular expressions
null
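No accepted answer is recorded above. As a side note, one quick way to sanity-check a candidate expression such as the one in the edit is to brute-force all short strings; the sketch below is only that, and the re pattern is my own translation of the expression, which may not be the intended reading:

# Decide membership by counting overlapping occurrences of 011 and 110,
# and compare against a regex translation of the candidate expression.
import itertools
import re

candidate = re.compile(r'^((0+1{2,}|1{2,}0+)*110(0+1{2,}|1{2,}0+)*)?$')

def count(sub, s):
    # overlapping occurrences of sub in s
    return sum(1 for i in range(len(s) - len(sub) + 1) if s.startswith(sub, i))

def in_language(s):
    return count('011', s) <= count('110', s)

for n in range(8):
    for bits in itertools.product('01', repeat=n):
        s = ''.join(bits)
        if bool(candidate.match(s)) != in_language(s):
            print('mismatch on', repr(s))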
_webapps.87219
Essentially all I want to do is forward an email from Gmail to another account, but when I do, Gmail inserts the original sender's details and the email message at the top of the new message, and I don't want that to appear in the forwarded email. I know I can delete that information manually, but I want this to be an automated process, and I wondered if there is a script I could run that would strip out these details before the email is forwarded to the new email address(es). Is this possible?
Gmail Auto Forwarding
gmail;gmail filters
null
_unix.129551
Assume a user with uid=1000 and gids=1000,33,277 is allowed to create a file in the folder /files/. Is there any way I can prevent this user from allowing others to read the file (others meaning users who are not in at least one of the groups 1000, 33 or 277)?

Let the file created be /files/user1000.file; then the question can be made specific: is there a way to prevent this outcome of ls /files/user1000.file -al

-rw-rw-r-- 1 1000 1000 6 May 15 17:21 user1000.file

and have this instead:

-rw-rw---- 1 1000 1000 6 May 15 17:21 user1000.file

Maybe using umask? I know that there are things like setgid (drw-rws---), so I'm optimistic there might be a way. Yet I would imagine it is up to the user to decide to do a chmod o+rw user1000.file?
Is it possible to prevent created files from being world-readable?
permissions;files
null
_scicomp.26116
What would be the numerical method of choice to find minima in a non-smooth, non-convex, locally Lipschitz function $f: \mathbb{R}^n\rightarrow \mathbb{R}$. The function $f$ is mostly smooth but contains three-dimensional cusps of the following form:$$g: \mathbb{R}^3\rightarrow \mathbb{R}\\g(\boldsymbol{x})= -\exp(-{\lVert \boldsymbol x \rVert}_2)$$with $\lVert \cdot \rVert_2$ being the Euclidean Norm.
Optimization of non-smooth, non-convex, locally Lipschitz functions of type exp(-abs(x))
optimization;nonconvex
I think I found the right article myself:

Lewis, A.S. & Overton, M.L., Math. Program. (2013) 141: 135. doi:10.1007/s10107-012-0514-2
_unix.62535
I have a very simple ksh script, and at certain points I want to write to a log file. I use the following commands in two places:

print "Directory listing 1:\n" > ${LogFile}
ll >> ${LogFile}

(Note: the second time these commands are used, they print "Directory listing 2".)

My problem is that when I view the log file afterwards, only the second execution of these commands works! So there's no "Directory listing 1" and accompanying ll output.

I have tested and tested the script to ensure that there's nothing wrong with my logic. I've added print test commands just before each, so I know they get executed.

Is there something I've done wrong or am not realising?
Unsure about the behaviour of my script when writing to log file
shell;files;io redirection;ksh;output
Whenever you do a redirection with > (your first line), ${LogFile} is truncated to zero length and then written. If I understand right, you do the above twice, so the first output is overwritten by the second.

What you have to do is along these lines:

> ${LogFile}    # this just truncates the file if there was anything there; writes nothing
...
echo "First round" >> ${LogFile}
ls -l >> ${LogFile}
...
echo -e "\nSecond round" >> ${LogFile}
ls -l >> ${LogFile}
...
_webapps.14541
How do I group separate emails into the one conversation if they weren't already?For instance, I email X, the manager at a client, asking something. Then Z, her secretary, replies with the answer. Is there a way to group these emails into a single conversation? I don't want to use labels, I really want to group the messages.
Group different emails into one Gmail conversation
gmail
Forward one of the e-mails to yourself using the subject line from one of the other e-mails.

Unfortunately, that seems to be the only way, and the e-mails you forward won't be clearly indicated as having been sent by the real sender (nice header with their name in colour), but it will work to get everything in one place.

Given these two e-mails, as in your scenario:

E-mail 1:
  to: Person 1 (manager, in your scenario)
  from: You
  subject: Subject 1
  body: Body 1 (question, in your scenario)

E-mail 2:
  to: You
  from: Person 2 (secretary, in your scenario)
  subject: Subject 2
  body: Body 2 (answer, in your scenario)

...to get everything into one conversation, you send this e-mail:

E-mail 3 (a forward of Person 2's message, with the subject manually changed):
  to: You
  from: You
  subject: Fw: Subject 1 [you need to override "Fw: Subject 2" with "Fw: Subject 1"]
  body: Body 2 (answer, in your scenario)

(I say "forward" instead of "reply" here in order to bring any attachments into the conversation too!)
_softwareengineering.246793
You can find an endless list of blogs, articles and websites promoting the benefits of unit testing your source code. It's almost guaranteed that the developers who programmed the compilers for Java, C++, C# and other typed languages used unit testing to verify their work. So why then, despite its popularity, is testing absent from the syntax of these languages?

Microsoft introduced LINQ to C#, so why couldn't they also add testing?

I'm not looking to predict what those language changes would be, but to clarify why they are absent to begin with.

As an example: we know that you can write a for loop without the syntax of the for statement; you could use while or if/goto statements. Someone decided a for statement was more efficient and introduced it into a language. Why hasn't testing followed the same evolution of programming languages?
Why isn't testing a language a supported feature at the syntax level?
programming languages;unit testing;syntax
As with many things, unit testing is best supported at the library level, not the language level. In particular, C# has numerous unit-testing libraries available, as well as things that are native to the .NET Framework like Microsoft.VisualStudio.TestTools.UnitTesting.

Each unit-testing library has a somewhat different testing philosophy and syntax. All things being equal, more choices are better than fewer. If unit testing were baked into the language, you'd either be locked into the language designer's choices, or you'd be using... a library, and avoiding the language testing features altogether.

Examples

NUnit - General-purpose, idiomatically designed unit-testing framework that takes full advantage of C#'s language features.
Moq - Mocking framework that takes full advantage of lambda expressions and expression trees, without a record/playback metaphor.

There are many other choices. Libraries like Microsoft Fakes can create shims... mocks that don't require you to write your classes using interfaces or virtual methods.

LINQ isn't a language feature (despite its name)

LINQ is a library feature. We got a lot of new features in the C# language itself for free, like lambda expressions and extension methods, but the actual implementation of LINQ is in the .NET Framework. There's some syntactic sugar that was added to C# to make LINQ statements cleaner, but that sugar is not required to use LINQ.
_webapps.13007
PayPal collects my customers' addresses, but I need an easy way to integrate them into my CRM.
How can I auto-import contacts from PayPal into Highrise?
crm
Old answer here, but try using this PayPal to Highrise Zap. It'll let you create contacts in Highrise or add notes to existing contacts when they buy something from you in PayPal. Disclosure: I run Zapier, but this is what I'd use even if I didn't. :)