Fields per record: id, question, title, tags, accepted_answer
_unix.276954
The Porter's Handbook says in 5.12.1.3. Default Options that DOCS, NLS, and EXAMPLES are on by default for all ports. I want them off, so I have to uncheck them manually during make config-recursive for every port. How can I turn them off by default?
Turn DOCS, NLS, and EXAMPLES options off by default for all FreeBSD ports
freebsd;make;bsd ports
You can use make.conf. See an old announcement:

    The following variables can be used in make.conf to configure options.
    They are processed in the order listed below, i.e. later variables
    override the effects of previous variables. Options saved using the
    options dialog are processed right before OPTIONS_SET_FORCE.

    OPTIONS_SET               - List of options to enable for all ports.
    OPTIONS_UNSET             - List of options to disable for all ports.
    ${UNIQUENAME}_SET         - List of options to enable for a specific port.
    ${UNIQUENAME}_UNSET       - List of options to disable for a specific port.
    OPTIONS_SET_FORCE         - List of options to enable for all ports.
    OPTIONS_UNSET_FORCE       - List of options to disable for all ports.
    ${UNIQUENAME}_SET_FORCE   - List of options to enable for a specific port.
    ${UNIQUENAME}_UNSET_FORCE - List of options to disable for a specific port.

    To know the UNIQUENAME of a port you can run make -V UNIQUENAME in
    a port directory.

    An example configuration is given below.

    OPTIONS_SET=   NLS   # enable NLS for all ports unless configured
                         # otherwise using the options dialog
    OPTIONS_UNSET= DOCS  # aka NOPORTDOCS

    # configuration for xorg-server overriding the configuration from the
    # options dialog
    xorg-server_SET_FORCE=   AIGLX
    xorg-server_UNSET_FORCE= HAL SUID
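Applied to the question above, a minimal /etc/make.conf entry would look like this (a sketch using the OPTIONS_UNSET variable documented in the quoted announcement; the option names are the defaults mentioned in the Porter's Handbook):

    # /etc/make.conf
    # Disable these options by default for all ports; they can still be
    # re-enabled per port in the options dialog.
    OPTIONS_UNSET= DOCS NLS EXAMPLES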
_codereview.149567
I'd like to know if the following code is a good implementation of MergeSort? I tried some examples and the code was right, so I guess that the algorithm works correctly.

    public static int[] myMerge(int[] array, int[] array2) {
        int[] giveback = new int[array.length + array2.length];
        int i = 0;
        int j = 0;
        for (int x = 0; x < giveback.length; x++) {
            if (array[i] >= array2[j]) {
                giveback[x] = array2[j];
                j++;
            } else {
                giveback[x] = array[i];
                i++;
            }
            if (i == array.length) {
                x++;
                for (int c = j; c < array2.length; c++) {
                    giveback[x] = array2[c];
                    x++;
                }
                return giveback;
            }
            if (j == array2.length) {
                x++;
                for (int b = i; b < array.length; b++) {
                    giveback[x] = array[b];
                    x++;
                }
                return giveback;
            }
        }
        return giveback;
    }

    public static int[] myMergeSort(int[] array) {
        if (array.length <= 1) {
            return array;
        }
        if (array.length % 2 == 0) {
            int[] right = new int[array.length/2];
            int[] left = new int[array.length/2];
            int counter = 0;
            for (int i = 0; i < array.length/2; i++) {
                left[i] = array[i];
            }
            for (int j = array.length/2; j < array.length; j++) {
                right[counter] = array[j];
                counter++;
            }
            return myMerge(myMergeSort(right), myMergeSort(left));
        } else {
            int[] right = new int[array.length/2];
            int[] left = new int[array.length/2];
            int counter2 = 0;
            for (int i = 0; i < array.length/2 + 1; i++) {
                left[i] = array[i];
            }
            for (int j = array.length/2 + 1; j < array.length; j++) {
                right[counter2] = array[j];
                counter2++;
            }
            return myMerge(myMergeSort(right), myMergeSort(left));
        }
    }
Implementation of mergesort in Java
java;sorting;mergesort
null
_unix.319315
How could I use something like sed to split a file into two, so the file containing

    eric    shwartz
    david   snyder

where the 4 spaces between entries are actually tabs, becomes two files such as:

file1:

    eric
    david

file2:

    shwartz
    snyder

So it puts everything after the tab on each line into another file.
splitting a file with lines separated by tabs into two files
text processing;sed;awk
A solution could be:

    awk '{ print $1 > "file1"; print $2 > "file2" }' file
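If the delimiter really is a single tab, a cut-based sketch (an alternative, not part of the accepted answer) does the same job:

    cut -f1 file  > file1   # first tab-separated field
    cut -f2- file > file2   # everything after the first tab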
_unix.341696
On Linux, the load average is the average number of processes that are either runnable or waiting, averaged over the past 1, 5 and 15 minutes.

On OpenBSD (and possibly other BSDs, but neither the quote nor the context really says), the load average is the number of processes which have (wanted to) run at least once in the most recent 5-second window, with a degradation over time.

However, I was unable to locate information on how the load average is actually defined on FreeBSD. What is the exact meaning of the load average numbers on FreeBSD?
How is load average calculated on FreeBSD?
freebsd;load average
null
_unix.60078
Could you recommend a way to figure out which driver is being used for a USB device? Sort of a USB equivalent of the lspci -k command.
Find out which modules are associated with a usb device?
drivers;kernel modules
Finding the Kernel Driver(s)

The victim device:

    $ lsusb
    Bus 010 Device 002: ID 046d:c01e Logitech, Inc. MX518 Optical Mouse
    Bus 010 Device 003: ID 051d:0002 American Power Conversion Uninterruptible Power Supply

We're going to try to find out what driver is used for the APC UPS. Note that there are two answers to this question: the driver that the kernel would use, and the driver that is currently in use. Userspace can instruct the kernel to use a different driver (and in the case of my APC UPS, nut has).

Method 1: Using usbutils (easy)

The usbutils package (on Debian, at least) includes a script called usb-devices. If you run it, it outputs information about the devices on the system, including which driver is used:

    $ usb-devices
    T:  Bus=10 Lev=01 Prnt=01 Port=01 Cnt=02 Dev#=  3 Spd=1.5 MxCh= 0
    D:  Ver= 1.10 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs=  1
    P:  Vendor=051d ProdID=0002 Rev=01.06
    S:  Manufacturer=American Power Conversion
    S:  Product=Back-UPS RS 1500 FW:8.g9 .D USB FW:g9
    S:  SerialNumber=XXXXXXXXXXXX
    C:  #Ifs= 1 Cfg#= 1 Atr=a0 MxPwr=24mA
    I:  If#= 0 Alt= 0 #EPs= 1 Cls=03(HID  ) Sub=00 Prot=00 Driver=usbfs

Note that this lists the current driver, not the default one. There isn't a way to find the default one.

Method 2: Using debugfs (requires root)

If you have debugfs mounted, the kernel maintains a file in the same format as usb-devices prints out at /sys/kernel/debug/usb/devices; you can view it with less, etc. Note that debugfs interfaces are not stable, so different kernel versions may print in a different format, or be missing the file entirely.

Once again, this only shows the current driver, not the default.

Method 3: Using only basic utilities to read /sys directly (best for scripting or recovery)

You can get the information out of /sys, though it's more painful than lspci. These /sys interfaces should be reasonably stable, so if you're writing a shell script, this is probably how you want to do it.

Initially, lsusb seems to count devices from 1, /sys from 0. So 10-2 is a good guess for where to find the APC UPS lsusb gives as bus 10, device 3. Unfortunately, over time that mapping breaks down: sysfs re-uses numbers even when device numbers aren't. The devnum file's contents will match the device number given by lsusb, so you can do something like this:

    $ grep -l '^3$' /sys/bus/usb/devices/10-*/devnum   # the ^ and $ prevent also matching 13, 31, etc.
    /sys/bus/usb/devices/10-2/devnum

So, in this case, it's definitely 10-2.

    $ cd /sys/bus/usb/devices/10-2
    $ ls
    10-2:1.0            bDeviceClass      bMaxPower           descriptors  ep_00         maxchild   remove     urbnum
    authorized          bDeviceProtocol   bNumConfigurations  dev          idProduct     power      serial     version
    avoid_reset_quirk   bDeviceSubClass   bNumInterfaces      devnum       idVendor      product    speed
    bcdDevice           bmAttributes      busnum              devpath      ltm_capable   quirks     subsystem
    bConfigurationValue bMaxPacketSize0   configuration       driver       manufacturer  removable  uevent

We can be sure this is the right device by cat-ing a few of the files:

    $ cat idVendor idProduct manufacturer product
    051d
    0002
    American Power Conversion
    Back-UPS RS 1500 FW:8.g9 .D USB FW:g9

If you look in 10-2:1.0 (:1 is the configuration, .0 the interface; a single USB device can do multiple things, and have multiple drivers; lsusb -v will show these), there is a modalias file and a driver symlink:

    $ cat 10-2\:1.0/modalias
    usb:v051Dp0002d0106dc00dsc00dp00ic03isc00ip00in00
    $ readlink driver
    ../../../../../../bus/usb/drivers/usbfs

So, the current driver is usbfs.

You can find the default driver by asking modinfo about the modalias:

    $ /sbin/modinfo `cat 10-2\:1.0/modalias`
    filename:       /lib/modules/3.6-trunk-amd64/kernel/drivers/hid/usbhid/usbhid.ko
    license:        GPL
    description:    USB HID core driver
    author:         Jiri Kosina
    author:         Vojtech Pavlik
    author:         Andreas Gal
    alias:          usb:v*p*d*dc*dsc*dp*ic03isc*ip*in*
    depends:        hid,usbcore
    intree:         Y
    vermagic:       3.6-trunk-amd64 SMP mod_unload modversions
    parm:           mousepoll:Polling interval of mice (uint)
    parm:           ignoreled:Autosuspend with active leds (uint)
    parm:           quirks:Add/modify USB HID quirks by specifying quirks=vendorID:productID:quirks where vendorID, productID, and quirks are all in 0x-prefixed hex (array of charp)

So, the APC UPS defaults to the hid driver, which is indeed correct. And it's currently using usbfs, which is correct since nut's usbhid-ups is monitoring it.

What about userspace (usbfs) drivers?

When the driver is usbfs, it basically means a userspace (non-kernel) program is functioning as the driver. Finding which program it is requires root (unless the program is running as your user) and is fairly easy: whichever program has the device file open.

We know that our victim device is bus 10, device 3. So the device file is /dev/bus/usb/010/003 (at least on a modern Debian), and lsof provides the answer:

    # lsof /dev/bus/usb/010/003
    COMMAND    PID USER   FD   TYPE DEVICE   SIZE/OFF NODE NAME
    usbhid-up 4951  nut    4u   CHR 189,1154      0t0 8332 /dev/bus/usb/010/003

And indeed, it's usbhid-ups as expected (lsof truncated the command name to make the layout fit; if you need the full name, you can use ps 4951 to get it, or probably some lsof output formatting options).
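To tie Method 3 together, here is a small, hypothetical helper script (my own sketch, not part of the original answer) that takes the bus and device numbers printed by lsusb and prints the driver currently bound to each interface, using the same /sys paths and grep -l trick shown above:

    #!/bin/sh
    # Usage: ./usb-driver.sh <bus> <devnum>    e.g.  ./usb-driver.sh 10 3
    bus=$1; dev=$2
    # Find the sysfs device directory whose devnum matches.
    for devnum_file in $(grep -l "^${dev}\$" /sys/bus/usb/devices/"${bus}"-*/devnum 2>/dev/null); do
        dir=${devnum_file%/devnum}
        # Each interface directory (e.g. 10-2:1.0) may have a "driver" symlink.
        for intf in "$dir"/"${dir##*/}":*; do
            [ -e "$intf/driver" ] || continue
            printf '%s -> %s\n' "${intf##*/}" "$(basename "$(readlink "$intf/driver")")"
        done
    done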
_unix.278950
I am creating a crontab that compresses 15-minute clips from my security camera into one file (24 hours long) and then deletes the clips.

    avimerge -o /media/jmartin/Cams/video/Full_$(date +%F --date Yesterday) -i /media/jmartin/Cams/video/$(date +%F --date Yesterday)*   # Converts files from the past 24 hours into one .avi
    rm /media/jmartin/Cams/video/$(date +%F --date Yesterday)*   # Removes old clips that have already been compressed

My question is: what is the danger of using the $(date) substitution like this? Could something possibly happen where it deletes all files in /video/? What would you recommend as a safer alternative?

Example filenames (yes, those are spaces in the filenames):

    2016-04-25 00:00:01.avi
    2016-04-25 00:15:02.avi
    2016-04-25 00:30:02.avi
    2016-04-25 00:45:01.avi
Dangers of using rm command with variables
bash;cron;date;rm
Two things jump out:

- You have no checking for failure of the substitution.
- There is a race condition if the date changes between uses of the date command.

You could solve them both like this:

    #!/bin/bash

    # Exit if any command fails
    set -e

    dir='/media/jmartin/Cams/video'
    day=$(date +%F --date Yesterday)

    # Combine files from the past 24 hours into a single AVI file
    avimerge -o "$dir/Full_$day" -i "$dir/$day"*

    # Remove old clips that have already been compressed
    rm "$dir/$day"*
_unix.60181
Short question: should we always do a yum update --exclude=kernel-* to do an update in Fedora?

I was a bit surprised that when I got a new Fedora machine at work (it used to be Red Hat Enterprise Linux (RHEL)), the first time it started up, it asked me to do an update, and I naturally said yes, and for anything it asked, I just used the default answer (usually replying yes by pressing ENTER).

But it turned out that the kernel was somehow corrupted, having 3.6.9 and 3.6.10 components, and the machine would not boot up (it would cause a kernel panic), and I had to use the second option in the boot menu to boot up (the IT department told me it is the last problem-free version, like a checkpoint version). But even then, the machine was very slow, and my coworker told me later on that my kernel was running partly on an earlier version and partly with 3.6.9 or 3.6.10 components, and they might not be totally compatible, which is probably why it was so slow.

My coworker has known Fedora for quite a while, and he was able to fix it by doing a series of yum remove and yum downgrade commands, making the kernel, headers and components all go back to an earlier version (I think it was 3.3.4 but I will have to go back to work and check).

So is it true that for Fedora, we always have to abort the default update request, and do a

    yum update --exclude=kernel-*

to be safe that we won't get a kernel that is not yet stable or still in beta? This is somewhat counter-intuitive to me, as I know other update systems would usually use only the stable version, unless the user specifically types in a particular version which might be a beta version.

Is there actually a way to update only the stable components, or do we need to use yum update --exclude=kernel-* all the time to be safe?

(This was a bit surprising to me, as anybody at work using Fedora can be affected by this, and it could be hundreds or even thousands of people, so what is a more correct way of doing system updates?)
Will Fedora's yum update install a current or beta version of the kernel and cause problems?
fedora;yum;upgrade
null
_cogsci.480
Several apps and sites offer flashcard-based learning that repeats the cards you do poorly on over a period of time (the more inaccurate the answer, the closer to each other the repetitions are). One example of this is SuperMemo.

Is there rigorous empirical support for these types of flashcard systems working? Do they exploit a known function of memory?
Are spaced flashcards effective for learning?
learning;memory;mnemonic;encoding
There's lots of research out there on flash cards and they are a proven, effective study aid.

Flash cards work because of the forgetting curve: rehearsal and retrieval before you forget an item strengthens the memory before it decays, allowing one to optimize encoding into long-term memory (LTM).

The paper "Optimising learning using flashcards: Spacing is more effective than cramming" is a good summary of why spacing helps retention, highlighting why it's effective in education and should be emphasized over cramming, which current educational models (finals, tests) currently promote.

Spacing is effective for long-term memory, as opposed to cramming/massing, which is only effective (but it IS effective) in the short term, as shown by the above forgetting function.
_webmaster.39277
I'm using http://schema.org/MedicalClinic on the main page of my client's site. I've filled in as many properties as I can, including address, name, url, image url, geolocation, description, opening hours, specialty etc.

It's coming up OK when I test it with the Rich Snippets testing tool, but I don't know what benefits I can get from using it. I don't see anything on the search result pages on Google.

What should I expect from it? Would the image show up on the SERP? If so, wouldn't it be similar to what you would get with the Publisher meta tag and Google Plus, as that also allows you to get your Google Plus image appearing on the SERP?
Schema.org MedicalClinic
seo;google;bing;google plus
At the highest level, you should never expect anything, given there's no guarantee that microdata will even be used if discovered. Specific to your question, though, this is not one of the documented supported types, so barring some other specialized search engine, at the moment Google doesn't appear to even be looking for this flavor of microdata, much less giving you any benefits from it.
_webmaster.5165
I am trying to set up a custom search on my site and, just for testing, was checking out the search APIs. The trouble is that the APIs do give me valid results and the total number of items found etc.; however, when I try to go to the second page of the search they do not provide me with any data. Is it because I have to pay for the custom search engine?
Paging Google Custom Search via XML
google
null
_unix.237987
I am having an issue that I think is associated with X, xrandr and maybe the WM I am using. I am on:

- Debian 8 stable, updated
- Intel graphics
- i3wm, no DE
- lightDM (not sure if this is relevant, but at some point I thought it might be). I used GDM at the time of the crash, then tried to install lightdm. I don't know the intricacies of authentication of X by the DM.

Here is the scenario. I come home and connect my laptop to two monitors (VGA1 and HDMI1) and turn off LVDS1. For that I have a function in .bashrc:

    function duo {
        xrandr --output HDMI1 --right-of LVDS1
        xrandr --output LVDS1 --off
        # this is probably bad, but it still works thanks to xrandr
        xrandr --output HDMI1 --mode 1280x1024
        xrandr --output HDMI1 --right-of VGA1
        xrandr --output HDMI1 --rotate left
        xrandr --output VGA1 --mode 1280x1024
    }

The function is messy because I was experimenting and trying to break down how xrandr should change the layout. This works 100% of the time without issues.

When I want to disconnect and go back to laptop mode, I pull out both cables and press Super+Shift+F8, which in my i3wm is bound to xrandr --auto; this should disconnect VGA1 and HDMI1 since they are not plugged in anymore, and i3wm will move all workspaces to the single screen. Sometimes this works, but quite often the X server crashes and drops into the DM prompting for login. So I lose all of the applications I had open and possibly files (although I am OCD when it comes to saving).

Here is a syslog excerpt. It starts with a line printed by my script that's bound to Super+Shift+F8 in my i3wm config file. The reason for this shortcut is that I don't have a udev rule for VGA or HDMI. I had a rule that ran a script, but removed it. I can post it, but the post is already very big, and I don't want to clutter it. So when I unplug HDMI or VGA my LVDS goes black and this script should turn it on. I can also post Xorg.log from /var/log, but it does not seem to have anything useful (I will post it, but again, they are long; please let me know).

Now a complication: I have Gnome 3 installed that came with the Debian 8 install. When I use it and not i3wm, everything works and X does not crash! So I can plug in the two monitors, turn off LVDS and unplug hot and safely. It's not that I don't like Gnome, but I am very used to i3wm and a minimal, light setup (I use the same on my Arch desktop). The laptop is also old for Gnome 3. I'd rather not go into trying other DE's.

    #!/bin/bash
    # Super+Shift+F8 is bound to this script in WM
    function laptop() {
        xrandr --auto
        xrandr --output VGA1 --off
        xrandr --output HDMI1 --off
    }
    echo running laptop script
    laptop

    Oct 21 20:13:12 debianone /etc/gdm3/Xsession[8574]: running laptop script
    Oct 21 20:13:12 debianone /etc/gdm3/Xsession[8574]: xrandr: cannot find crtc for output LVDS1
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) intel(0): Allocated new frame buffer 1024x1280 stride 4096, tiled
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: X Error of failed request: BadMatch (invalid parameter attributes)
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: Major opcode of failed request: 140 (RANDR)
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: Minor opcode of failed request: 21 (RRSetCrtcConfig)
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: Serial number of failed request: 35
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: Current serial number in output stream: 35
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: i3: No usable outputs available.
    Oct 21 20:13:13 debianone org.gtk.vfs.Daemon[8621]: A connection to the bus can't be made
    Oct 21 20:13:13 debianone org.gtk.vfs.Daemon[8621]: g_dbus_connection_real_closed: Remote peer vanished with error: Underlying GIOStream returned 0 bytes on an async read (g-io-error-quark, 0). Exiting.
    Oct 21 20:13:13 debianone org.a11y.Bus[8621]: g_dbus_connection_real_closed: Remote peer vanished with error: Underlying GIOStream returned 0 bytes on an async read (g-io-error-quark, 0). Exiting.
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: [9400:9400:1021/201313:ERROR:chrome_browser_main_extra_parts_x11.cc(57)] X IO error received (X server probably went away)
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: [libi3] libi3/font.c Using Pango font DejaVu Sans Mono, size 8
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: [libi3] libi3/font.c X11 root window dictates 98.223565 DPI
    Oct 21 20:13:13 debianone org.a11y.atspi.Registry[8648]: XIO: fatal IO error 11 (Resource temporarily unavailable) on X server :0
    Oct 21 20:13:13 debianone org.a11y.atspi.Registry[8648]: after 1608 requests (1608 known processed) with 0 events remaining.
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: drracket: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: [9434:9434:1021/201313:ERROR:x11_util.cc(82)] X IO error received (X server probably went away)
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: Can't open display :0
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: Exiting due to signal.
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: XIO: fatal IO error 11 (Resource temporarily unavailable) on X server :0
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: after 2716 requests (2716 known processed) with 0 events remaining.
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[4989]: Process 8664 dead!
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[4989]: Warning: no target process found. Waiting for it...
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: Process 8664 dead!
    Oct 21 20:13:13 debianone /etc/gdm3/Xsession[8574]: Warning: no target process found. Waiting for it...
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: synaptics
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: AT Translated Set 2 keyboard: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: Asus WMI hotkeys: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: USB Camera: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: Microsoft Microsoft Nano Transceiver v1.0: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: Microsoft Microsoft Nano Transceiver v1.0: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: Microsoft Microsoft Nano Transceiver v1.0: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: Logitech USB Keyboard: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: Logitech USB Keyboard: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: Sleep Button: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: Video Bus: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) evdev: Power Button: Close
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (II) UnloadModule: evdev
    Oct 21 20:13:13 debianone gdm-Xorg-:0[8485]: (EE) Server terminated successfully (0). Closing log file.

I am a little desperate since this is a big problem. I've seen bug reports on similar issues in X, KDE, Debian and Ubuntu, and they show as fixed. I am definitely updated to the latest packages and still crashing. Do I need to backport a newer X? Or something else? Thanks for reading and for any help in advance.
Issues with X and xrandr on Debian
debian;xorg;crash;i3
It's likely that the rapid succession of xrandr requests is triggering a bug in the X server. I would suggest you do two things:

1. File a bug against the X server. It is not supposed to crash, no matter what you do (at worst, it should produce an error message).
2. Change your script so that it calls xrandr only once:

    xrandr --output LVDS1 --off --output VGA1 --mode 1280x1024 --output HDMI1 --mode 1280x1024 --rotate left --right-of VGA1

The point here is that you can pass multiple commands per output to xrandr, as well as multiple outputs. I would personally also set one of the outputs as the primary output (with --primary), but that's not critical.

EDIT: Looking at the log in a bit more detail, we see this:

    Oct 21 20:13:12 debianone /etc/gdm3/Xsession[8574]: xrandr: cannot find crtc for output LVDS1

A CRTC is a display controller chip: the actual component which transforms the frame buffer generated by the GPU into scanlines which are then sent out over whatever output is selected (VGA, DVI, HDMI, DisplayPort, yada yada); the abbreviation stands for Cathode Ray Tube Controller, although that terminology is obviously somewhat outdated. Most GPUs have fewer of those than they have outputs, and the number of CRTCs is usually the limiting factor that decides how many monitors a GPU can drive at the same time. Up to a few years ago, for most of Intel's mobile GPUs that number was two, although with the appearance of 4K screens (which require two CRTCs per monitor) most modern mobile GPUs now have three.

Since the system also talks about LVDS (which is an older standard now being replaced by embedded DisplayPort, or eDP), it's a pretty safe bet to assume you have two CRTCs.

What the error message that I quoted above means is that when you ask the X server to enable the LVDS panel, it looks for an available CRTC and doesn't find one. Things then seem to go horribly wrong. The solution to your problem would therefore be to ensure there is an available CRTC when you try to enable the internal panel, by disabling the external outputs before you enable the internal one, rather than afterwards, as you're trying to do now.
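For completeness, the laptop script from the question could be rewritten the same way (a sketch based on the advice above, reusing the output names from the question and untested on the actual hardware): disable the external outputs and enable the internal panel in a single request, so the server can reassign the CRTCs in one go.

    #!/bin/bash
    # Super+Shift+F8: back to laptop-only mode in one xrandr call
    xrandr --output VGA1 --off --output HDMI1 --off --output LVDS1 --auto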
_unix.186694
I would like to multiply a single column in a .txt file by a variable and then write to another .txt file. What am I missing from the awk line? Appreciate any help in advance.

    !/bin/bash
    FILES=/path/to/files
    for f in ${FILES}
    do
        echo $f
        wc -l $f
        B=10000000
        TOTALLINES=$(wc -l $f | cut -f1 -d' ')
        echo TOTALLINES: ${TOTALLINES}
        SCALINGFACTOR=$(echo 100000000 / $TOTALLINES | bc -l)
        echo scaling_factor: ${SCALINGFACTOR}
        awk '{printf($1"\t"$2"\t"$3"\t"$4 * ${SCALINGFACTOR})}' $f_prepped.txt > $f_normalized.txt
    done
How can I multiply a column by a variable and write out?
bash;shell;shell script
Inside awk you don't have direct access to shell variables; you need to pass them in as options, so change the awk command to something like:

    awk -v SF=$SCALINGFACTOR '{printf($1"\t"$2"\t"$3"\t"$4*SF)}'
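Putting that back into the loop from the question, the whole line might look like the sketch below. Note that $f_prepped.txt would be expanded by the shell as a variable named f_prepped followed by .txt, so braces (or a differently named variable) are needed; the file-name pattern here is just an assumption about the intended naming.

    # One tab-separated record per line, fourth column scaled by SF
    awk -v SF="$SCALINGFACTOR" '{ print $1 "\t" $2 "\t" $3 "\t" $4 * SF }' "${f}_prepped.txt" > "${f}_normalized.txt"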
_webapps.19014
This site, Intelligence Squared Australia - IQ2 - the Australian forum for live debate, publishes past debates in pages like:

- Past live debates: 2011 series Sydney
- Past live debates: 2011 series Melbourne

Not one of the pages has an RSS or Atom feed. The only way to be notified is through Twitter? I don't have a Twitter account anymore and I don't want to use it directly. And the Twitter feed has many unwanted posts.

Should I make a Yahoo Pipe for each page and subscribe to the generated feed? How else can I be notified?
How to get update notifications for the iq2oz site?
twitter;feeds;yahoo pipes
I just created the Yahoo Pipe, but it will work only for the 2011 debates.
_webmaster.20013
I was unfortunately the victim of a PHP exploit. Looking through my webserver logs, people are still attempting to reach the URL used in the phish. I want to redirect them to a site that will educate these people on what phishing is.

My question: Is there a (generic / vendor-neutral) phishing education website that you suggest I send them to with a 301 redirect? (I assume a 301 is the best option.)
Where should I redirect (removed) phishing pages
redirects;301 redirect;security
Yes! There is a good landing page to drop people on. It's here:

http://education.apwg.org/r/en/index.htm

It's designed to educate users in a helpful way.
_codereview.58943
So I needed very basic state management and notification for a small game-like thing I'm building. I decided to implement something like a finite state machine (but not quite: it doesn't transition upon events, but instead is told to transition). This is mostly used for changing visual behavior and animation or whatnot, so the most important ability of a client is to simply check which state it is in. I had a few objectives:

- There needs to be a finite list of states configured at creation, and I as the user need to be warned if I try to enter a state that doesn't exist
- States need to have parameters that hold auxiliary information. For example, a state called selected might have parameters selectedPort and selectionContext
- I need basic event handling, so that other parts of the application can be notified upon entering or leaving a state

As far as the implementation, the class holds an object of states, which themselves are objects that contain their own parameters.

Here's the class I threw together:

    function FSM(states) {
        this.states = states

        // Set the _fsmName of each state to its name in the states object
        // This makes it much easier to check if we're in a particular state
        for (var prop in this.states) {
            this.states[prop]._fsmName = prop
        }

        // Initialize to an arbitrary state, client can transition to
        // whatever their desired initial state is
        this.state = this.states[Object.keys(this.states)[0]]

        this.cbReg = {}
    }

    FSM.prototype.transition = function(next) {
        for (var prop in this.states) {
            if (prop == next) {
                // Handle onLeave subscribers
                if (this.state._fsmName in this.cbReg) {
                    this.cbReg[this.state._fsmName].forEach(function(val, ind, arr) {
                        if (val.when == "onLeave") {
                            val.callback()
                        }
                    })
                }

                this.state = this.states[prop]

                // Handle onEnter subscribers
                if (this.state._fsmName in this.cbReg) {
                    this.cbReg[this.state._fsmName].forEach(function(val, ind, arr) {
                        if (val.when == "onEnter") {
                            val.callback()
                        }
                    })
                }

                return
            }
        }
        throw "FSM State doesn't exist"
    }

    FSM.prototype.inState = function(st) {
        return (this.state._fsmName == st)
    }

    FSM.prototype.get = function(p) {
        // Try to get a state parameter, but throw if the
        // parameter doesn't exist
        if (p in this.state) {
            return this.state[p]
        } else {
            throw "FSM Invalid Parameter"
        }
    }

    FSM.prototype.set = function(p, v) {
        // Try to set a state parameter, but throw if the
        // parameter doesn't exist, instead of silently adding it
        if (p in this.state) {
            this.state[p] = v
        } else {
            throw "FSM Invalid Parameter"
        }
    }

    FSM.prototype.register = function(stateName, when, cb) {
        // Register a callback upon entering or leaving a state
        if (!(stateName in this.states)) {
            throw "FSM State doesn't exist"
        }
        if ((when != "onEnter") && (when != "onLeave")) {
            throw "FSM Invalid callback time specifier"
        }
        if (!(stateName in this.cbReg)) {
            this.cbReg[stateName] = []
        }
        this.cbReg[stateName].push({
            when: when,
            callback: cb
        })
    }

A client might use it like:

    this.SM = new FSM({
        normal: {},
        selected: {
            selectedPort: null
        }
    })

    this.SM.transition("normal")
    this.SM.register("selected", "onEnter", function() { ... })

    ...

    if (this.SM.inState("normal"))
        ...

    if (this.SM.inState("selected")) {
        var p = this.SM.get("selectedPort")
        ...
    }

A few things:

- Existing FSM libraries seemed much more focused on managing state in async situations, like callbacks from web requests and so forth. Are there any libraries that implement a simpler model, particularly with the existence of auxiliary state information?
- Any obvious problems with functionality or style in this code?
- Are there other design patterns that might accomplish my intent here?
Simple Javascript state management class
javascript;classes;state machine;state
null
_unix.356455
I have an Arch Linux install (running on a 3-year-old ASUS Zenbook UX31A) that works fine. But, when trying to fix some USB issues, I started poking around and I don't seem to have a boot loader installed - or at least I can't figure out which one I've got.

Because of all the warnings and concern in the Installation Guide around UEFI, I tried to follow the instructions about booting and partitions as well as I could and, like I said, the system boots and works fine.

According to my pacman logs, efibootmgr was installed at the time, and I have it to this day, but it's not listed as a boot loader in the Arch Wiki (because it's not a boot loader, apparently).

I ran the bootinfoscript and it said:

    => No boot loader is installed in the MBR of /dev/sda.

I don't fully understand what boot loaders are and everything they do, so I might be missing something obvious, but shouldn't I have one? If not, how can my laptop boot without it?
Is it possible to not have a boot loader?
boot;boot loader
Yes, it's possible to not have a boot loader in addition to the one in the computer's firmware (which is UEFI here). Well, that's not strictly true because in this case the Linux kernel functions as its own boot loader, if it is configured to include the EFI stub. This makes the kernel binary a valid EFI program which can be run directly from the UEFI firmware, thus closing the gap between the firmware present in the Flash ROM on the motherboard and the kernel image.Usually a boot manager like systemd-boot is used together with an EFI stub kernel. A boot manager functions as a chooser program with which you can choose between several kernel versions or boot some other operating system (Windows, for example.) A boot loader like GRUB usually also includes a chooser, but it differs from a boot manager in that it includes functionality to actually load software from disk to memory. A boot loader must typically first load itself in several stages, then locate the kernel on the disk, load it into a predefined location in RAM, and finally start the kernel.
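For illustration, an EFI-stub setup is usually registered with efibootmgr, the tool mentioned in the question; a sketch (the disk, partition, kernel path and root= value are placeholders, so adjust them before use) looks like this:

    # Create a UEFI boot entry that launches the kernel's EFI stub directly.
    # /dev/sda1 is assumed to be the EFI System Partition holding vmlinuz-linux.
    efibootmgr --create --disk /dev/sda --part 1 \
        --label "Arch Linux (EFISTUB)" \
        --loader /vmlinuz-linux \
        --unicode 'root=/dev/sda2 rw initrd=\initramfs-linux.img'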
_webapps.95745
How do we call a function once a Cognito Form loads? The following does not trigger the alert:

    Cognito.load("forms", { id: 1 }, function() {
        alert("done");
    });
Cognito Forms - Call a function once form loads
cognito forms;embed
null
_softwareengineering.269502
I am a self-taught programmer. I started programming about 1.5 years ago. Now I have started to have programming classes in school. We have had programming classes for half a year and will have another half year now.

In the classes we are learning to program in C++ (which is a language that I already knew how to use quite well before we started). I have not had any difficulties during this class, but there is one recurring problem that I have not been able to find a clear solution to.

The problem is like this (in pseudocode):

    do something
    if something failed:
        handle the error
        try something (line 1) again
    else:
        we are done!

Here is an example in C++. The code prompts the user to input a number and does so until the input is valid. It uses cin.fail() to check if the input is invalid. When cin.fail() is true I have to call cin.clear() and cin.ignore() to be able to continue to get input from the stream. I am aware that this code does not check for EOF; the programs we have written are not expected to do that.

Here is how I wrote the code in one of my assignments in school:

    for (;;) {
        cout << ": ";
        cin >> input;
        if (cin.fail()) {
            cin.clear();
            cin.ignore(512, '\n');
            continue;
        }
        break;
    }

My teacher said that I should not be using break and continue like this. He suggested that I should use a regular while or do ... while loop instead.

It seems to me that using break and continue is the simplest way to represent this kind of loop. I actually thought about it for quite a while but did not really come up with a clearer solution.

I think that he wanted me to do something like this:

    do {
        cout << ": ";
        cin >> input;
        bool fail = cin.fail();
        if (fail) {
            cin.clear();
            cin.ignore(512, '\n');
        }
    } while (fail);

To me this version seems a lot more complex, since now we also have a variable called fail to keep track of, and the check for input failure is done twice instead of just once.

I also figured that I can write the code like this (abusing short-circuit evaluation):

    do {
        cout << ": ";
        cin >> input;
    } while (cin.fail() && (cin.clear(), cin.ignore(512, '\n'), true));

This version works exactly like the other ones. It does not use break or continue, and the cin.fail() test is only done once. It does, however, not seem right to me to abuse the short-circuit evaluation rule like this, and I do not think my teacher would like it either.

This problem does not only apply to cin.fail() checking. I have used break and continue like this for many other cases that involve repeating a set of code until a condition is met, where something also has to be done if the condition is not met (like calling cin.clear() and cin.ignore(...) in the cin.fail() example).

I have kept using break and continue throughout the course and now my teacher has stopped complaining about it.

What are your opinions about this? Do you think my teacher is right? Do you know a better way to represent this kind of problem?
How to structure a loop that repeats until success and handles failures
c++;coding style;loops;io;imperative programming
I would write the if-statement slightly differently, so that it is taken when the input is successful:

    for (;;) {
        cout << ": ";
        if (cin >> input) break;
        cin.clear();
        cin.ignore(512, '\n');
    }

It's shorter as well. Which suggests a shorter way that might be liked by your teacher:

    cout << ": ";
    while (!(cin >> input)) {
        cin.clear();
        cin.ignore(512, '\n');
        cout << ": ";
    }
_softwareengineering.166048
In an application framework where the performance impact can be ignored (10-20 events per second at most), what is more maintainable and flexible to use as the preferred medium for communication between modules: events, or futures/promises/monads?

It's often said that events (pub/sub, mediator) allow loose coupling and thus a more maintainable app. My experience denies this: once you have more than 20 events, debugging becomes hard, and so does refactoring, because it is very hard to see who uses what, when and why.

Promises (I'm coding in JavaScript) are much uglier and dumber than events. But you can clearly see the connections between function calls, so the application logic becomes more straightforward. What I'm afraid of, though, is that promises will bring more hard coupling with them...

P.S.: the answer does not have to be based on JS; experience from other functional languages is very welcome.
Futures/Monads vs Events
architecture;maintainability;async;event programming;monad
Monads and events play quite nicely together; for example, have a look at .NET Rx. I think there should even be a JavaScript implementation. http://msdn.microsoft.com/en-us/data/gg577609.aspx
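As a small illustration of that idea in JavaScript (my own sketch, using the RxJS library rather than .NET Rx; the click handler is purely hypothetical), events become a first-class stream value you can pass around and transform, which is the monad-style composition referred to above:

    // RxJS (v6+): treat DOM clicks as a composable stream
    import { fromEvent } from "rxjs";
    import { filter, map } from "rxjs/operators";

    const clicks = fromEvent(document, "click");

    clicks
      .pipe(
        filter(ev => ev.shiftKey),                      // keep only shift-clicks
        map(ev => ({ x: ev.clientX, y: ev.clientY }))   // project to coordinates
      )
      .subscribe(pos => console.log("shift-click at", pos));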
_webmaster.71058
Can I place a robots.txt file inside a subdomain and have it only affect that subdomain and not the root domain?

For example: I have example.com and mysubdomain.example.com. example.com already has a robots.txt with its own directives. Can I disallow all bots on mysubdomain.example.com by placing a robots.txt inside its own folder, without it affecting my main domain example.com?
Can I place a robots.txt file inside a subdomain and have it only affect that subdomain and not the root domain?
seo;subdomain;robots.txt
Yes. Whatever the web root for the subdomain is where you would put a robots.txt for that subdomain's contents. It will not affect the root domain and the root domain's robots.txt will not affect the subdomain.
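For example, to keep all well-behaved crawlers out of the subdomain only, the file served at the subdomain's web root could be as simple as this (a minimal sketch):

    # http://mysubdomain.example.com/robots.txt
    User-agent: *
    Disallow: /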
_unix.72323
TL;DR: Can I pre-define a stow folder other than /usr/local with GNU Stow?

I do not have admin privileges on the machine I use for work, and I was told that I could use GNU Stow to manage my installations. The tool looks great, but everywhere in the documentation I read that stow uses /usr/local as the installation directory where it builds the symlink farm. Unfortunately this folder was already populated by root, and I do not have write privileges on anything under /usr/local.

There is a flag -t that I can use on the command line to specify the target directory, but since I will always be using the same one (I want my installations to consistently be under the same target directory), I was wondering if there is a way to set a default path of my choice.
Pre-specifying a default GNU Stow target-directory
software installation;symlink;stow
You can configure a default target via the .stowrc file; please see this section of the manual. If there is a compelling reason for needing to also set the default target directory via an environment variable, I can implement that for the next release too.
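For instance, a ~/.stowrc along these lines should do it (a sketch with placeholder paths; the file simply holds default command-line options, one per line):

    --dir=/home/youruser/stow
    --target=/home/youruser/local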
_codereview.82328
I am working with some small functions to test recursion. I am fairly new to Python and am wondering if there is a cleaner way to get the job done.

    def count(t, p):
        '''
        recursive function when passed a binary tree (it doesn't matter if it
        is a binary search tree) and a predicate as arguments; it returns a
        count of all the values in the tree for which the predicate returns True.
        '''
        if t == None or t.value == None:
            return 0
        elif p(t.value):
            return 1 + count(t.right, p) + count(t.left, p)
        else:
            return count(t.right, p) + count(t.left, p)

    def equal(ll1, ll2):
        '''
        recursive function when passed two linked lists; it returns whether or
        not the linked lists contain exactly the same values in the same order.
        '''
        if ll1 == None and ll2 == None:
            return True
        if (ll1 != None and ll2 == None) or\
           (ll2 != None and ll1 == None):
            return False
        elif ll1.value == ll2.value:
            return equal(ll1.next, ll2.next)
        else:
            return False

    def min_max(ll):
        '''
        a recursive function when passed a linked list; it returns a 2-tuple
        containing the minimum value followed by the maximum value. If the
        linked list is empty, return (None, None)
        '''
        if ll == None:
            return None, None
        maybe_min, maybe_max = min_max(ll.next)
        if maybe_min == None or ll.value < maybe_min:
            least = ll.value
        if maybe_min != None and ll.value > maybe_min:
            least = maybe_min
        if maybe_max == None or ll.value >= maybe_max:
            most = ll.value
        if maybe_max != None and ll.value < maybe_max:
            most = maybe_max
        return least, most
Using recursion to count nodes in a binary tree, test equality of linked lists, and find extrema of a linked list
python;recursion;python 3.x;tree;linked list
It is better to test for x is None rather than x == None.

Avoid using single-letter variable names: they may make sense to you, but not to anyone else.

I don't see any reason why a node should be automatically not counted if its value is None. Shouldn't it be up to the predicate to decide whether nodes with None as a value are counted or not?

You can eliminate a case by taking advantage of the fact that int(False) is 0 and int(True) is 1.

    def count(tree_node, predicate):
        """Counts the tree_node and its descendants whose value satisfies the predicate."""
        if tree_node is None:
            return 0
        else:
            return int(predicate(tree_node.value)) + \
                   count(tree_node.left, predicate) + \
                   count(tree_node.right, predicate)

Stylistically, it would be better to consistently use either one long if/elif chain or just ifs with early returns. I also suggest putting the recursive case at the end of the function. (ll1 != None and ll2 == None) or (ll2 != None and ll1 == None) can be simplified.

    def equal(ll1, ll2):
        """Recursively checks whether two linked lists contain the same values in the same order."""
        if ll1 is None and ll2 is None:
            return True
        if ll1 is None or ll2 is None:
            return False
        if ll1.value != ll2.value:
            return False
        return equal(ll1.next, ll2.next)

Assuming that the linked list contains no None data values, the logic can be simplified.

    def min_max(ll):
        """Returns a 2-tuple of the minimum and maximum values. If ll is empty, returns (None, None)."""
        if ll is None:
            return None, None
        if ll.next is None:
            return ll.value, ll.value
        least, greatest = min_max(ll.next)
        return min(ll.value, least), max(ll.value, greatest)
_unix.195518
I recently learned here how to open a program from the command line. My question is: how can I make a set of programs open simultaneously with one entered command?
how to open sets of programs simultaneously with command line
command line
You can run a program simply by typing its name (and Enter, of course). To run a program in the background, giving you control of your terminal again, you can append &.

So:

    gvim /etc/hosts     # Runs gvim and waits until it's finished

But:

    gvim /etc/hosts &   # Runs gvim and returns control to the terminal

Therefore, this can be used to start three programs, one after the other:

    kontact &
    rekonq &
    something_else &

If you can type fast enough they'll appear to start simultaneously. Or you can put all three commands on the same line, like this, so that the commands are executed only once you hit Enter:

    kontact & rekonq & something_else &
_unix.246032
Here is the part that generates the 10 random numbers.

    MAXCOUNT=10
    count=1
    while [ $count -le $MAXCOUNT ]; do
        number=$RANDOM
        let "count += 1"
    done

Now how do I output this to an array and then echo that array?
How to store 10 random numbers in an array then echo that array?
shell script;array
Are you using bash? In that case, try something like this:

    MAXCOUNT=10
    count=1
    while [ $count -le $MAXCOUNT ]; do
        number[$count]=$RANDOM
        let "count += 1"
    done
    echo ${number[*]}

You can also replace the last line with:

    echo ${number[@]}

Some documentation here: http://www.tutorialspoint.com/unix/unix-using-arrays.htm
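As a side note, a slightly more compact variant (my sketch, assuming a reasonably recent bash with the += array-append operator) avoids the manual counter entirely:

    numbers=()
    for ((i = 0; i < 10; i++)); do
        numbers+=("$RANDOM")   # append one random number per iteration
    done
    echo "${numbers[@]}"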
_unix.273228
Upon the request of some users, I decided to add all the intermediate steps and results to my initial post so that users can better walk me through a solution. This is added under the headline "Additions" below the question. Below "Additions", there is a section called "Resolution", where I have added the extra steps that I took in order to resolve this issue.

Question:

Today, as I was trying to continue running my code in the command-line shell, I noticed that none of the commands are actually recognized by the shell in Fedora 21 (kernel 4.1.13-100.fc21.i686 on an i686 (tty2)). I thought that if I restarted and rebooted the system, the issue would resolve itself. However, to my surprise I noticed that the system does not start up after login. I tried to do diagnostics by pressing CTRL+ALT+F2 when the screen goes black to see where it actually stops working. The last line where I saw a complete stop was:

    wait for plymouth boot screen to quit

Now, I have done lots of searching and research, but since I am a newbie in Linux systems, I don't feel comfortable touching my system without help. Would you mind letting me know how to fix such an issue when actually no command is accepted in the diagnostic mode in the shell, the shell saying the following?

    -bash: <...>: command not found

The only thing I can think of is some possible automatic update that I was not aware of, or messing up my .bashrc (which I can no longer see inside by using the following command):

    sudo gedit ~/.bashrc

Additions:

I was able to log in to my system only after entering the diagnostic mode by pressing Ctrl+Alt+F2 right after reboot and the login into the main startup which fails under normal conditions.

    Fedora release 21 (Twenty One)
    Kernel 4.1.13-100.fc21.i686+PAEdebug on an i686 (tty2)

In this mode, the login prompt appears:

    localhost login:

After entering my username, it says:

    Password:

After entering my password, it says:

    Last login: Wed Mar 30 15:33:54 on tty2
    [bbenjamin@localhost ~]$

It is here that none of the commands are recognized by the shell, no matter what. And the error message is usually:

    -bash: <...>: command not found

where <...> is basically any command.

The only time I was successful in getting most commands recognized by the shell was when I ran the following (as mentioned in the answer):

    PATH=/usr/bin:/usr/sbin

After which at least I could look for and see my files, folders and programs (since most commands are then recognized). However, I still need to log in normally so that I can make use of all the graphics and other features of Fedora, which is impossible in the diagnostic mode. To make this possible, in particular I need to open my .bashrc file and fix its issues permanently (assuming that I can have access to its original version somehow). To do this, I need to run commands like

    (sudo) gedit ~/.bashrc

However, I am receiving error messages like:

    Unable to init server: Could not connect: Connection refused
    (gedit:1397): Gtk-WARNING **: cannot open display:

or running commands like this one:

    ~/.bash_profile

which would yield the error message:

    bash: /home/bbenjamin/.bash_profile: Permission denied.

Now, learning from the answer, I am not supposed to run this latter command as the file is not executable. Instead I should run it in the following format:

    source ~/.bashrc

After which I don't know how to proceed.

However, I don't know why the former command (sudo) gedit ~/.bashrc is not working either. I remember that I always used to make slight changes in the .bashrc file depending on my needs. This time I don't know how I made changes in it that caused all the issues explained here. So now, my question is whether there is a command-line based method by which I can open .bashrc, look inside it and make the needed changes permanently, so that my system login appropriately leads me into its normal graphical mode where I can see and utilize all Fedora features.

Resolution

I learned that once I am in the diagnostic mode through Ctrl+Alt+F2 right after an unsuccessful login, I can temporarily fix the messed-up .bashrc by running the command PATH=/usr/bin:/usr/sbin. Then I could take a look inside my .bashrc file by running cat .bashrc. It was only then that I saw the contents of the file, in which I had added several paths. Since I had kept a record of my added lines at the bottom of the previous paths in chronological order, I knew that the problematic path was the very last one. Now, in order to fix the issue, I had to actually modify the file. This was achieved with the command nano .bashrc, after which a new page appeared in which I had the chance of commenting out the problematic line by adding # in front of it. At the end, I saved my changes and exited. The last step I had to take was to reboot the system with its newly modified .bashrc file through the command telinit 6, after which the login proved to be successful.
Why doesn't my bash terminal recognize any command in the shell?
linux;fedora;bashrc
It sounds like you put something in your ~/.bashrc which is causing PATH to be set in a way that doesn't include /usr/bin, which is where most programs actually live. If you run this:

    PATH=/usr/bin:/usr/sbin

most commands should start working, and then you can edit ~/.bashrc and fix whatever is resetting PATH there. (And actually you want to set PATH in ~/.bash_profile instead of ~/.bashrc; see How to correctly add a path to PATH?)

(Note, by the way, that no Fedora update would mess with this, as updates don't alter files in your home directory. Sometimes when you run updated software that software might update its own config files, but that doesn't apply to ~/.bashrc.)

On your edit: the gedit text editor only works in graphical mode. In text mode, you'll need a text-based editor. The easiest of these is probably nano. Install it with dnf install nano and then use nano instead of gedit. The actual editing functions will be a little different, but it's pretty simple.
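For example, a safe line to put in ~/.bash_profile (a sketch; $HOME/bin is just a placeholder directory) appends to the existing value instead of overwriting it, so /usr/bin never drops out of the search path:

    # ~/.bash_profile
    export PATH="$PATH:$HOME/bin"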
_unix.260538
I bought a Lenovo Ideapad 500S-14ISK and installed Debian 8. It's a fresh install, and no other OS is on this computer. I am having difficulties connecting to Wi-Fi, and days of googling to hunt for any hint have turned up nothing good.

Logs

I'll write down the outputs of the commands below.

sudo iwconfig

    eth0      no wireless extensions.
    lo        no wireless extensions.

lspci -nn

    00:00.0 Host bridge [0600]: Intel Corporation Device [8086:1904] (rev 08)
    00:02.0 VGA compatible controller [0300]: Intel Corporation Device [8086:1916] (rev 07)
    00:14.0 USB controller [0c03]: Intel Corporation Device [8086:9d2f] (rev 21)
    00:14.2 Signal processing controller [1180]: Intel Corporation Device [8086:9d31] (rev 21)
    00:15.0 Signal processing controller [1180]: Intel Corporation Device [8086:9d60] (rev 21)
    00:16.0 Communication controller [0780]: Intel Corporation Device [8086:9d3a] (rev 21)
    00:17.0 SATA controller [0106]: Intel Corporation Device [8086:9d03] (rev 21)
    00:1c.0 PCI bridge [0604]: Intel Corporation Device [8086:9d10] (rev f1)
    00:1c.4 PCI bridge [0604]: Intel Corporation Device [8086:9d14] (rev f1)
    00:1c.5 PCI bridge [0604]: Intel Corporation Device [8086:9d15] (rev f1)
    00:1f.0 ISA bridge [0601]: Intel Corporation Device [8086:9d48] (rev 21)
    00:1f.2 Memory controller [0580]: Intel Corporation Device [8086:9d21] (rev 21)
    00:1f.3 Audio device [0403]: Intel Corporation Device [8086:9d70] (rev 21)
    00:1f.4 SMBus [0c05]: Intel Corporation Device [8086:9d23] (rev 21)
    01:00.0 3D controller [0302]: NVIDIA Corporation Device [10de:1347] (rev a2)
    02:00.0 Network controller [0280]: Qualcomm Atheros Device [168c:0042] (rev 30)
    03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 15)

Network controller and Ethernet controller in lspci -v

    02:00.0 Network controller: Qualcomm Atheros Device 0042 (rev 30)
            Subsystem: Lenovo Device 4035
            Flags: bus master, fast devsel, latency 0, IRQ 11
            Memory at d4000000 (64-bit, non-prefetchable) [size=2M]
            Capabilities: <access denied>

    03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
            Subsystem: Lenovo Device 3835
            Flags: bus master, fast devsel, latency 0, IRQ 139
            I/O ports at c000 [size=256]
            Memory at d4204000 (64-bit, non-prefetchable) [size=4K]
            Memory at d4200000 (64-bit, non-prefetchable) [size=16K]
            Capabilities: <access denied>
            Kernel driver in use: r8169

sudo ifdown wlan0; sudo ifup wlan0

    ifdown: interface wlan0 not configured
    Internet Systems Consortium DHCP Client 4.3.1
    Copyright 2004-2014 Internet Systems Consortium.
    All rights reserved.
    For info, please visit https://www.isc.org/software/dhcp/

    Cannot find device wlan0
    Bind socket to interface: No such device
    If you think you have received this message due to a bug rather
    than a configuration issue please read the section on submitting
    bugs on either our web page at www.isc.org or in the README file
    before submitting a bug.  These pages explain the proper
    process and the information we find helpful for debugging.

    exiting.
    Failed to bring up wlan0.

lsmod

    Module                  Size  Used by
    bnep                   17431  2
    i915                  837175  0
    uvcvideo               79005  0
    videobuf2_vmalloc      12816  1 uvcvideo
    videobuf2_memops       12519  1 videobuf2_vmalloc
    videobuf2_core         47787  1 uvcvideo
    hid_generic            12393  0
    v4l2_common            12995  1 videobuf2_core
    ecb                    12737  1
    videodev              126451  3 uvcvideo,v4l2_common,videobuf2_core
    media                  18305  2 uvcvideo,videodev
    usbhid                 44460  0
    btusb                  29721  0
    bluetooth             374429  21 bnep,btusb
    6lowpan_iphc           16588  1 bluetooth
    joydev                 17063  0
    nfsd                  263032  2
    auth_rpcgss            51211  1 nfsd
    oid_registry           12419  1 auth_rpcgss
    nfs_acl                12511  1 nfsd
    nfs                   188136  0
    lockd                  83389  2 nfs,nfsd
    fscache                45542  1 nfs
    sunrpc                237402  6 nfs,nfsd,auth_rpcgss,lockd,nfs_acl
    ath10k_pci             41341  0
    ath10k_core           288619  1 ath10k_pci
    x86_pkg_temp_thermal   12951  0
    coretemp               12820  0
    ath                    26067  1 ath10k_core
    kvm                   388784  0
    mac80211              548031  1 ath10k_core
    nvidia               8491586  0
    crc32_pclmul           12915  0
    cfg80211              437217  3 ath,mac80211,ath10k_core
    snd_hda_codec_hdmi     45118  1
    snd_hda_codec_realtek  67127  0
    snd_hda_codec_generic  63181  2 snd_hda_codec_realtek
    snd_hda_intel          26327  4
    snd_hda_controller     26646  1 snd_hda_intel
    aesni_intel           151423  1
    aes_x86_64             16719  1 aesni_intel
    snd_hda_codec         104500  5 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_codec_generic,snd_hda_intel,snd_hda_controller
    lrw                    12757  1 aesni_intel
    compat                 22686  4 cfg80211,mac80211,ath10k_pci,ath10k_core
    snd_hwdep              13148  1 snd_hda_codec
    gf128mul               12970  1 lrw
    glue_helper            12695  1 aesni_intel
    snd_pcm                88662  4 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel,snd_hda_controller
    ablk_helper            12572  1 aesni_intel
    cryptd                 14516  2 aesni_intel,ablk_helper
    snd_timer              26614  1 snd_pcm
    snd                    65244  16 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_hda_codec_generic,snd_hda_codec,snd_hda_intel
    psmouse                99249  0
    soundcore              13026  2 snd,snd_hda_codec
    serio_raw              12849  0
    pcspkr                 12595  0
    shpchp                 31121  0
    ideapad_laptop         17447  0
    sparse_keymap          12818  1 ideapad_laptop
    rfkill                 18867  4 cfg80211,ideapad_laptop,bluetooth
    battery                13356  0
    ac                     12715  0
    acpi_cpufreq           17218  0
    acpi_pad               21165  0
    evdev                  17445  11
    processor              28221  5 acpi_cpufreq
    fuse                   83350  1
    parport_pc             26300  0
    ppdev                  16782  0
    lp                     17074  0
    parport                35749  3 lp,ppdev,parport_pc
    autofs4                35529  2
    ext4                  473802  2
    crc16                  12343  2 ext4,bluetooth
    mbcache                17171  1 ext4
    jbd2                   82522  1 ext4
    sg                     29973  0
    sd_mod                 44356  4
    crc_t10dif             12431  1 sd_mod
    crct10dif_generic      12581  0
    nouveau              1122508  0
    crct10dif_pclmul       13387  1
    crct10dif_common       12356  3 crct10dif_pclmul,crct10dif_generic,crc_t10dif
    crc32c_intel           21809  0
    mxm_wmi                12515  1 nouveau
    i2c_algo_bit           12751  2 i915,nouveau
    ttm                    77862  1 nouveau
    ahci                   33334  3
    libahci                27158  1 ahci
    xhci_hcd              152977  0
    drm_kms_helper         49210  2 i915,nouveau
    r8169                  68262  0
    drm                   249955  6 ttm,i915,drm_kms_helper,nvidia,nouveau
    mii                    12675  1 r8169
    libata                177508  2 ahci,libahci
    scsi_mod              191405  3 sg,libata,sd_mod
    usbcore               195427  4 btusb,uvcvideo,usbhid,xhci_hcd
    usb_common             12440  1 usbcore
    thermal                17559  0
    wmi                    17339  2 mxm_wmi,nouveau
    video                  18096  2 i915,nouveau
    thermal_sys            27642  4 video,thermal,processor,x86_pkg_temp_thermal
    i2c_hid                17410  0
    hid                   102264  3 i2c_hid,hid_generic,usbhid
    i2c_core               46012  9 drm,i915,i2c_hid,drm_kms_helper,i2c_algo_bit,nvidia,v4l2_common,nouveau,videodev
    button                 12944  2 i915,nouveau
I can't connect to Wi-Fi, no wlan0 device on iwconfig
debian;wifi
Installing the needed firmware and backports will enable Wi-Fi. These commands will work flawlessly on Debian 8 on a Lenovo Ideapad 500S-14ISK.
Install some basic tools first, if you don't have them yet:
sudo apt-get install vim git build-essential
Grab the firmware from GitHub and copy the files you need into the system folder:
# assuming that you use your Downloads folder to store the files needed.
cd ~/Downloads
git clone https://github.com/kvalo/ath10k-firmware.git
cd ath10k-firmware/QCA9377/hw1.0
sudo mkdir -p /lib/firmware/ath10k/QCA9377/hw1.0
sudo cp board.bin /lib/firmware/ath10k/QCA9377/hw1.0
sudo cp firmware-5.bin_WLAN.TF.1.0-00267-1 /lib/firmware/ath10k/QCA9377/hw1.0/firmware-5.bin
sudo modprobe -r ath10k_pci
(I don't think that last line changes anything by itself, but I ran it to make sure nothing stale stayed loaded.)
Download the backports, build and install them, then reboot:
cd ..  # getting back to the Downloads folder
wget https://www.kernel.org/pub/linux/kernel/projects/backports/2015/11/20/backports-20151120.tar.gz
tar -xf backports-20151120.tar.gz
cd backports-20151120
make defconfig-ath10k  # hopefully the make step goes flawlessly
make
sudo make install
sudo modprobe ath10k_pci
sudo reboot
(ath10k_pci starts running after the reboot, so the modprobe just before rebooting may not have been necessary.)
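Not part of the original recipe, but after the reboot the following standard checks confirm that the driver and firmware were actually picked up (the interface name on your system may differ):
dmesg | grep -i ath10k     # firmware load messages, with no error lines expected
ip link                    # a wireless interface such as wlan0 or wlp2s0 should now be listed
iwconfig                   # the new interface should report wireless extensions instead of none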
_unix.84258
Background:I use Debian Lenny on an embedded device uname -aLinux device 3.4.0 #83 Sun May 26 17:07:14 CEST 2013 armv4l GNU/LinuxI have a C code (say my_C_program) that calls a board specific binary file (via system(spictl someparameters) ) called spictl to use SPI interface user:~# ls -al /usr/local/bin/spictllrwxrwxrwx 1 root staff 24 Jun 9 2011 spiflashctl -> /initrd/sbin/spiflashctlif I run my code (my_C_program) from the command lineuser:~# /user/sbin/my_C_programthe spictl is executed without problem and outputs data from the SPI interface. Problem:I need the program to be run when the board is powered. Therefore, I add /user/sbin/my_C_program line before the exit 0 at/etc/rc.local. When the board is powered, the my_C_program is executed and spictl is executed but the SPI interface does not output any data.I tried to run the program via /etc/init.d/ script on this link. The script works fine and it executes the my_C_program, the program executes the spictl successfully (as system() return value says), but the SPI interface does not output any data!ls -l /usr/sbin/my_C_program-rwxrwxrwx 1 root root 61713 Jun 28 2013 /usr/sbin/my_C_programtop shows that the program is run as root PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1095 root 20 0 2524 1140 924 R 4.2 1.8 0:00.35 top 1033 root RT 0 35092 34m 1852 S 3.0 56.5 0:14.06 node Questionif I execute the my_C_program on terminal, the program calls the spictl (via system(spictl someparameters)) and it is executed without any problem. But if I run my_C_program through /etc/rc.local or /etc/init.d script, then spictl does not work as it supposed to be. I suspect that it has something to do with the root privilege. When I execute the program on the terminal (as root) all works fine. But I guess /etc/rc.local and /etc/init.d somehow runs the program with a lower privilege. I could not really solve the difference between executing a command from the terminal as root or executing the program via /etc/rc.local or /etc/init.d/ script. If you also think that it is a privilege problem, could you please explain how can I ensure that init.d script or rc.local would run the program with the highest privilege. Or what could be the issue?Please note that the problem/question is not SPI related.P.S. in order not to have unnecessary conversation like oh Debian lenny is too old, you should use wheezy etc., the board has armv4l processor and as the producer says it does not support wheezy because of some processor instructions.
why do I have two different results if I run a program through terminal(as root) or /etc/init.d(or /etc/rc.local)
debian;root;init script;privileges
Your C executable probably requires some environment variables to be set in order to function, for example $PATH or $LD_LIBRARY_PATH. There are also other variables such as $HOME which won't be set until a user has logged in. This last one might be necessary for your app to access config files and/or log files, for example.
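A minimal way to test that theory is to set the environment explicitly in the script before launching the program. This is only a sketch; the values are assumptions and should be adjusted to wherever spictl and its libraries actually live on the board:
# in /etc/rc.local (or the init.d script), before starting the program
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export HOME=/root
export LD_LIBRARY_PATH=/usr/local/lib
/usr/sbin/my_C_program &
Comparing the output of env in an interactive root shell with env captured from inside the script is also a quick way to see which variables differ between the two situations.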
_softwareengineering.27410
I'm an experienced developer with .NET, and understand core computing concepts well (OOP, design patterns, etc) but would like to also learn rails. Is there a book out there that's the de-facto standard for describing best practices, design methodologies, and other helpful information on Ruby on Rails? What about that book makes it special?
Is there a canonical book on Ruby on Rails?
books;ruby on rails
Agile Web Development with Rails will bring you up to speed at a relatively rapid pace.For learning the ins and outs of the Ruby language itself, I found Programming Ruby 1.9 helpful.Between those two books you'll know what you need to know to get going.If you are looking for free and online, Ruby on Rails Tutorial isn't bad at all.
_scicomp.19217
I am seeking recommendations on how to compute the Binder ratio in a numerically accurate way when doing Monte Carlo simulations on spin models. The Binder ratio is defined as:$$ B = \frac{\langle M^4\rangle}{\langle M^2\rangle^2}. $$Given a safe method to compute $M$ per Metropolis sweep, one can directly get $M^2$ and $M^4$. If we take $N$ samples of these values, we can then get the averages $\langle M\rangle$, $\langle M^2\rangle$ and $\langle M^4\rangle$, and from them the Binder ratio. But near the critical temperature the results are not very precise. Am I accumulating too much floating-point error in $\langle M^2\rangle$ and $\langle M^4\rangle$?An example of how the Binder ratio looks (from a very short simulation): a proper simulation generates a very smooth curve with little standard error, but the spike remains.Edit: It is a parallel tempering simulation, so it is possible that the unexpected Binder ratio is related to measuring too early, when the replicas are not yet properly thermalized.
Accurate way for computing a ratio coming from Monte Carlo simulation
numerical
null
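Not from the thread, but to make the accumulation step concrete, here is a minimal sketch (my own; measure_magnetization() is a hypothetical placeholder for the per-sweep measurement). Normalising M per spin keeps the powers of order one, and extended-precision accumulators remove most of the summation error, so a remaining spike is more likely statistical noise or insufficient thermalization than rounding:
// hypothetical placeholder: returns the total magnetization after one sweep
double measure_magnetization();

double binder_ratio(long n_samples, long n_spins)
{
    if (n_samples <= 0 || n_spins <= 0)
        return 0.0;
    long double sum_m2 = 0.0L, sum_m4 = 0.0L;   // extended-precision accumulators
    for (long i = 0; i < n_samples; ++i) {
        const long double m  = static_cast<long double>(measure_magnetization()) / n_spins;  // per-spin M, O(1)
        const long double m2 = m * m;
        sum_m2 += m2;
        sum_m4 += m2 * m2;
    }
    const long double avg_m2 = sum_m2 / n_samples;
    const long double avg_m4 = sum_m4 / n_samples;
    return static_cast<double>(avg_m4 / (avg_m2 * avg_m2));   // B = <M^4> / <M^2>^2
}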
_softwareengineering.270381
I want to create a Microservices application, in which every microservice is responsible for its own part of the front end. At the same time, I want to create the front end in AngularJS as a Single Page Application (SPA). When a new microservice gets deployed, the web front end would automatically pick up the new front end part and add it to the SPA. What would be the best way of realising this?This is what I came up with. Each microservice could be responsible for its own Angular module. Then when the customer navigates to the application, a server component (ASP.NET or JSP) could see which microservices are online and create an html page which includes the angular modules from those microservices. What the front end component can also do, is enable some microservices to some specific customers which have extended privileges, like admins or VIP customers. Of course, for this to work, I need a nice structured way for each microservice to take up a part of the screen, without 'knowing' what other microservices are on the screen. A simple solution would be to create a tab for each microservice. On the tab, the microservice in charge can put its functionality on the page. The front end component would be responsible for general stuff like (angular-)routing and look-and-feel.Is this the best way of realising this goal? Does anyone have experience with this?
Create an AngularJS front end for a Microservices application
angularjs;front end;microservices
null
_codereview.143138
For homework I had to do the following:Give a list of all the teachers that don't have a classroom assigned to them.It involves the following 2 tables:Code inside Groups table would be the classroom in this case and TeacherId matches the Id inside Teachers table.My solution to the problem is the following query:SELECT Id, FirstName, MiddleName, LastNameFROM TeachersWHERE Id NOT IN (SELECT TeacherId FROM Groups)It works perfectly, however, I wonder if there was a better solution using JOINs.Edit:I should mention there is a CONSTRAINT on the Groups table:ALTER TABLE GroupsADD CONSTRAINT [FK_Groups_Teachers]FOREIGN KEY (TeacherId)REFERENCES [Teachers] ([Id])
SQL query to select all teachers not in different table
sql
Your query using NOT IN is good, but you can also use LEFT JOIN and keep only the teachers who don't have Groups:SELECT t.Id, FirstName, MiddleName, LastNameFROM Teachers t LEFT JOIN Groups g ON t.Id=g.TeacherIdWHERE g.TeacherId IS NULL
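One caveat worth adding to both versions: NOT IN returns no rows at all if Groups.TeacherId can ever be NULL, so NOT EXISTS is often the safer equivalent. A sketch against the same tables:
SELECT t.Id, t.FirstName, t.MiddleName, t.LastName
FROM Teachers t
WHERE NOT EXISTS (SELECT 1 FROM Groups g WHERE g.TeacherId = t.Id)
With the foreign key shown in the question this behaves the same as the other two queries, but it keeps working if the column is later allowed to hold NULLs.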
_codereview.164
Is this code good enough, or is it stinky? using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.IO;namespace DotNetLegends{ public class LogParser { /// <summary> /// Returns a populated Game objects that has a list of players and other information. /// </summary> /// <param name=pathToLog>Path to the .log file.</param> /// <returns>A Game object.</returns> public Game Parse(string pathToLog) { Game game = new Game(); //On actual deployment of this code, I will use the pathToLog parameter. StreamReader reader = new StreamReader(@D:\Games\Riot Games\League of Legends\air\logs\LolClient.20110121.213758.log); var content = reader.ReadToEnd(); game.Id = GetGameID(content); game.Length = GetGameLength(content); game.Map = GetGameMap(content); game.MaximumPlayers = GetGameMaximumPlayers(content); game.Date = GetGameDate(content); return game; } internal string GetGameID(string content) { var location = content.IndexOf(gameId); var gameID = content.Substring(location + 8, 10); gameID = gameID.Trim(); return gameID; } internal string GetGameLength(string content) { var location = content.IndexOf(gameLength); var gamelength = content.Substring(location + 13, 6); gamelength = gamelength.Trim(); var time = Convert.ToInt32(gamelength) / 60; return time.ToString(); } internal string GetGameMap(string content) { var location = content.IndexOf(mapId); var gameMap = content.Substring(location + 8, 1); switch (gameMap) { case 2: return Summoner's Rift; default: return nul; } } internal string GetGameMaximumPlayers(string content) { var location = content.IndexOf(maxNumPlayers); var maxPlayers = content.Substring(location + 16, 2); maxPlayers = maxPlayers.Trim(); return maxPlayers; } internal string GetGameDate(string content) { var location = content.IndexOf(creationTime); var creationDate = content.Substring(location + 14, 34); creationDate = creationDate.Trim(); return creationDate; } }}
Parsing a file for a game
c#;game;parsing
You have a lot of undescriptive magic numbers and code repetition whilst retrieving the contents of a field. You could eliminate the repetition and make those numbers a little more meaningful by introducing a single method:
protected string GetFieldContent(string content, string field, int padding, int length)
{
    var location = content.IndexOf(field);
    padding += field.Length;
    var fieldVal = content.Substring(location + padding, length);
    fieldVal = fieldVal.Trim();
    return fieldVal;
}
Use it like so:
internal string GetGameMaximumPlayers(string content)
{
    var maxPlayers = GetFieldContent(content, "maxNumPlayers", 3, 2);
    return maxPlayers;
}
Something to note here is that the padding value has changed. You no longer need to include the length of the field name itself and can just describe the number of junk characters afterwards.
Padding length
Upon examining your code I noticed one peculiarity - the fields have inconsistent, magical padding lengths:
gameID padding: 2
gameLength padding: 3
mapId padding: 3
maxNumPlayers padding: 3
creationTime padding: 2
As a symptom of these being magic numbers, I have no idea why this is the case. This is one of the many reasons to avoid magic numbers like the plague: it's difficult to understand their meaning. I'll trust you to evaluate whether varying padding lengths are necessary, or whether you can just assume a constant padding for all fields.
If we can assume a constant padding amount for all fields then we can change the code a little further to make your life easier. There are two steps to this change.
First, give your LogParser class a private field:
private const int defaultPadding = 2;
Second, GetFieldContent can be refactored to produce this:
protected string GetFieldContent(string content, string field, int length)
{
    var location = content.IndexOf(field);
    var padding = defaultPadding + field.Length;
    var fieldVal = content.Substring(location + padding, length);
    fieldVal = fieldVal.Trim();
    return fieldVal;
}
Then getting the contents of a field becomes simpler:
var maxPlayers = GetFieldContent(content, "maxNumPlayers", 2);
_webapps.54013
I recently created a Facebook group. I'd like to invite a bunch of people to join the group instead of adding them without their permission. Is this possible?
Is it possible to quickly invite people to join a group rather than just adding them?
facebook;facebook groups
null
_unix.312104
Is there a way to configure dead keys in Linux to mimic the behavior they have in Windows?This document describes how it works in Windows:US-Int'l Keyboard Layout to Type Accented CharactersThe difference between Linux and Windows dead keys is highlighted in bold: When you press the APOSTROPHE (') key, QUOTATION MARK (") key, ACCENT GRAVE (`) key, TILDE (~) key, ACCENT CIRCUMFLEX key, or CARET (^) key, nothing appears on the screen until you press a second key. If you press one of the letters designated as eligible to receive an accent mark, the accented version of the letter appears. If you press an ineligible key, two separate characters appear.In Linux, when you press a second key that is not eligible to receive an accent, the first key pressed is lost. This reduces typing productivity a lot.
Make dead keys insert both characters if the combination is not recognized
x11;keyboard layout;xkb;dead keys
null
_softwareengineering.165379
I would like to know which documents (ISO?) I should follow when writing a functional specification. Or what should designers follow when creating the system design? I was told (by a college professor) that there has been progress in recent years, but not what that progress was. Thank you.EDIT: I am not talking about document content etc., but about standards for capturing requirements, for business analysis.
What norms/standards should I follow when writing a functional spec?
design;documentation;standards
I'm more of a CMMI fan, but that might be because I've gone through the pain of getting to level 3 -- on what was originally a research project. "If we knew what we were doing we wouldn't call it research." That's a bit counter to the concepts of any of those software quality / process improvement efforts. I've also been with organizations that became ISO 9001 certified.Both CMMI and ISO can be a bit (more than a bit!) burdensome. Getting certified at CMMI-DEV 3 is costly, in dollars and in time. Quality is not free. (At least that silly management mantra went out the door.) IMO, CMMI level 2 is a reasonable target for most organizations; CMMI 3 is where you start to need to be very sure the product is right. CMMI 4 and beyond: I wouldn't want to work there. The stuff I work on, if done wrong, could lead to hundred-million-dollar catastrophes. Research-project quality, or even CMMI 2, was not good enough. CMMI 4 was (thankfully) deemed too counterproductive.
_unix.284449
I usually use GNOME as a desktop for Debian and when I install a program like Terminator via apt I can immediately find it in the drop-down menu and drag a shortcut to it in the toolbar if I like.I decided to try Cinnamon with a Debian VM I created, however I'm unable to find any of the programs I've installed via apt in the menu.The programs are there, because I can run them from the command line. Launching them from the command line is not preferable though, both because it takes longer than clicking a shortcut and because it makes it such that I have to keep the original terminal running while using whatever program I launch from it.Is there either a way to make these programs automatically show up in the menu or can I find them in the filesystem somewhere and add a toolbar shortcut that way? Terminator is suitable for an example. I tried finding it with find and by looking in bin but I didn't see it.
Apt-get installed programs in Debian Cinnamon Desktop
debian;cinnamon
There's a bug here for that issue. Options are to:
Log out and back in again.
Press Alt+F2, then press r, then Enter, to restart Cinnamon.
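If an application still does not appear after that, it helps to check whether the package ships a menu entry at all, and to make a per-user copy of it that you can edit or pin to the panel. The commands below are only an illustration, using terminator from the question and the conventional Debian paths:
dpkg -L terminator | grep '\.desktop$'   # does the package install a menu entry?
which terminator                         # where the executable itself lives
cp /usr/share/applications/terminator.desktop ~/.local/share/applications/   # per-user copy Cinnamon will pick up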
_softwareengineering.318938
I have around 30 not-changing objects (the amount of them is final, no more can be added or removed). Each object has an id as well as some booleans describing what the object is and what it isn't. Now, each objects has a variable that must be changed at runtime. Most of these variables are just an integer, but some also have strings, lists, etc.Now I'm wondering how to implement this. My current attempt is an enum with the given objects, their properties and methods to change them (for the variable I chose just object as type, to store both integers and lists). It works, but it doesn't feel like the proper, OO-way to do this. What are the alternatives? The programming language is Java, if that matters.Here's my attempt (a bit more complicated than what I explained above):public enum StatusInfos { THING_1(id0, false ,false, false, NOT_UPGRADABLE), THING_1_WITH_HAT(id1, true, false, false, NOT_UPGRADABLE), ANOTHER_THING(id2, false, false, false, NOT_UPGRADABLE), GREEN_THING(id3, true, false, false, NOT_UPGRADABLE), TALKING_DUCK(id4, true, false, false, NOT_UPGRADABLE); private final String id; private final Boolean hasAdditionalValue; private Double value; private Double additionalValue; private boolean needsDouble; private boolean needsPerCent; private Integer upgradeCategory; private Object additionalValue; StatusInfos(String id, Boolean hasAdditionalValue, boolean needsDouble, boolean needsPerCent, Integer upgradeCategory){ this.id = tag; this.hasAdditionalValue = hasAdditionalValue; this.needsDouble = needsDouble; this.needsPerCent = needsPerCent; this.upgradeCategory = upgradeCategory; } public String id(){ return id; } public Double value(){ return value; } public void setValue(Double value){ this.value = value; } public boolean hasAdditionalValue(){ return hasAdditionalValue; } public Double additionalValue(){ return additionalValue; } public void setAdditionalValue(Double newAdditionalValue){ additionalValue = newAdditionalValue; } public boolean hasSpecialValue(){ return false; } public Object specialValue(){ return null; } public void setSpecialValue(Object newValue){ return; } public boolean needsNumbersAfterComma(){ return needsDouble; } public boolean needsPerCent(){ return needsPerCent; } public Integer getUpgradeCategory() { return upgradeCategory; } public Object getAdditionalValue(){ return additionalValue; } public void setAdditionalValue(Object additionalValue) this.additionalValue = additionalValue; }}
How to store many global variables?
java;programming practices
I'd define a class containing all of the common stuff (is the list of booleans the same among these objects?) and their getters and setters, then subclass depending on the type of the changeable item within, then put them all into a container optimized for how you look up these things (id, probably).This way, you have one global.
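A minimal sketch of that shape (my own naming, trimmed down to a couple of the fields from the question) could look like this:
import java.util.HashMap;
import java.util.Map;

// Common, unchanging description shared by all 30 objects.
abstract class StatusInfo {
    private final String id;
    private final boolean needsDouble;
    private final boolean needsPerCent;

    protected StatusInfo(String id, boolean needsDouble, boolean needsPerCent) {
        this.id = id;
        this.needsDouble = needsDouble;
        this.needsPerCent = needsPerCent;
    }

    public String getId() { return id; }
    public boolean needsDouble() { return needsDouble; }
    public boolean needsPerCent() { return needsPerCent; }
}

// One subclass per kind of runtime value, instead of a single Object field.
class NumericStatusInfo extends StatusInfo {
    private double value;

    NumericStatusInfo(String id, boolean needsDouble, boolean needsPerCent) {
        super(id, needsDouble, needsPerCent);
    }

    public double getValue() { return value; }
    public void setValue(double value) { this.value = value; }
}

// Lookup container that replaces the enum; keyed by id.
class StatusInfos {
    private final Map<String, StatusInfo> byId = new HashMap<>();

    public void register(StatusInfo info) { byId.put(info.getId(), info); }
    public StatusInfo get(String id) { return byId.get(id); }
}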
_webapps.33220
How can I copy cell values rather than references from one sheet to another? Right now using Filter, Query and ArrayFormula creates references which create empty cells and disallow sorting of the sheet. I need the copy to only carry through values and not references.
How can I copy cell values rather than references from one sheet to another in Google Sheets
google spreadsheets
null
_unix.353812
Hypothetical situation:What would be the long term effects of running sudo chmod -R 777 /? I know that it means that all users have full permissions on all files, but are there any other side-effects?
sudo chmod -R 777 /
chmod
null
_unix.321422
I have a little open source project that for various reasons I've tried to write in reasonably portable shell script. Its automated integration tests check that hostile characters in path expressions are treated properly, among other things.Users with /bin/sh provided by bash are seeing a failure in a test that I've simplified down to the following:echo A bug\\'s lifeecho A bug\\\\'s lifeOn bash, it produces this expected result:A bug\'s lifeA bug\\'s lifeWith dash, which I've developed against, it does this:A bug\'s lifeA bug\'s lifeI'd like to think that I haven't found a bug in dash, that I might be missing something instead. Is there a rational explanation for this?
Why does dash expand \\\\ differently to bash?
bash;shell script;quoting;echo;dash
In
echo "A bug\\'s life"
Because those are double quotes, and \ is special inside double quotes, the first \ is understood by the shell as escaping/quoting the second \. So an A bug\'s life argument is being passed to echo.
echo "A bug\'s life"
Would have achieved exactly the same: ' not being special inside double quotes, the \ is not removed, so it's the exact same argument that is passed to echo.
As explained at Why is printf better than echo?, there's a lot of variation between echo implementations.
In Unix-conformant implementations like dash's, \ is used to introduce escape sequences: \n for newline, \b for backspace, \0123 for octal sequences... and \\ for backslash itself.
Some (non-POSIX) ones require a -e option for that, or do it only when in conformance mode (like bash's when built with the right options, as for the sh of OS/X, or when called with SHELLOPTS=xpg_echo in the environment).
So in standard (Unix standard only; POSIX leaves the behaviour unspecified) echos,
echo '\\'
same as:
echo \\\\
outputs one backslash, while in bash when not in conformance mode:
echo '\\'
will output two backslashes.
Best is to avoid echo and use printf instead:
$ printf '%s\n' "A bug\'s life"
A bug\'s life
which works the same in this instance in all printf implementations.
_codereview.123867
I'm trying to find the shortest and best way to achieve the following:Given input integer $N, get the following output:n = 0, output = 0 n = 1, output = 0 n = 2, output = 10 n = 3, output = 100 n = 4, output = 1000 n = 5, output = 10000I do this with the following code, but there must be a better option to do this.<?php $n = 1; function getNumber($n) { if ($n === 0 OR $n === 1) { return 0; } else { return 1.str_repeat(0, $n -1); } } echo getNumber($n);?>
Get number from N
php;php5;integer
Maybe you want something short like this:
echo ($n <= 1) ? 0 : pow(10, $n - 1);
_unix.45302
Okay, first off, this is not a problem I am facing, but I would like to understand this better.If I wish to shut down / reboot my machine from the command line I need to call:$ sudo poweroff$ sudo rebootThat is, I need root privileges to make these ACPI calls. However, I start my DE (I use XFCE) without granting it root privileges: $ startxfce4 --with-ck-launchNow, I know that the --with-ck-launch parameter is what allows XFCE to shut down / reboot my system, but I do not understand how.What allows ConsoleKit to shut down without root privileges? How can it change the runlevel without super-user privileges? And since it is possible, how can I shut down my machine from the console without root privileges?
How do DE's call ACPI functions?
linux;not root user;shutdown;privileges;consolekit
You can communicate with ConsoleKit through dbus. For example, using the dbus-send tool, a few notable commands are:
Shutdown:
dbus-send --system --print-reply --dest=org.freedesktop.ConsoleKit /org/freedesktop/ConsoleKit/Manager org.freedesktop.ConsoleKit.Manager.Stop
Reboot:
dbus-send --system --print-reply --dest=org.freedesktop.ConsoleKit /org/freedesktop/ConsoleKit/Manager org.freedesktop.ConsoleKit.Manager.Restart
There are also commands for hibernate and suspend but I do not know what they are.
edit: Found the suspend command:
dbus-send --system --print-reply --dest=org.freedesktop.Hal /org/freedesktop/Hal/devices/computer org.freedesktop.Hal.Device.SystemPowerManagement.Suspend int32:0
On newer systems:
dbus-send --system --print-reply --dest=org.freedesktop.UPower /org/freedesktop/UPower org.freedesktop.UPower.Suspend
_softwareengineering.221339
What if, instead of looking at the C++ specification, you analyze the behavior (by inspecting the source code and testing with sample inputs) of existing C++ compilers and use your knowledge of C++ to create a new compiler? Is it a good way to implement programming languages? What are the possible disadvantages of this approach?
Implementing a programming language without the specification
reverse engineering
null
_unix.18628
Can anyone suggest a script that will take as input the name of one or more directories and a media size, and output lists of files for input to tar using -T (assuming no compression)?scdbackup/sdvdbackup sort of does this, but it's full of bloat that I don't need. So basically looking for something like this:./splitTars file1 file2 .... 2.0Twhere file can be a file or directory, and the last argument is the size of the media (e.g. 2TB). It should then output a file list for each tar archive and give a warning for files that are too big to fit on the media. If nothing like this exists, one way to do it would be to create the list of files using find, re-arrange them in increasing or decreasing size, then start cutting the list up into pieces.
Generating sets of files that fit on a given media size for tar -T
scripting;disk usage;tar
null
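A rough sketch of that last approach, not a finished tool: it assumes GNU find and awk, no newlines in file names, and simple greedy next-fit packing, with LIMIT being the media size in bytes:
LIMIT=$((2 * 1000 * 1000 * 1000 * 1000))   # e.g. 2 TB
find "$@" -type f -printf '%s\t%p\n' | sort -rn |
awk -v limit="$LIMIT" -F'\t' '
    $1 > limit { printf "WARNING: %s (%d bytes) does not fit on the media\n", $2, $1 > "/dev/stderr"; next }
    {
        if (out == "" || used + $1 > limit) {      # start a new list when the current one would overflow
            chunk++; used = 0
            out = sprintf("chunk-%03d.list", chunk)
        }
        used += $1
        print $2 > out
    }'
# then: tar -cf archive-001.tar -T chunk-001.list   (and so on, one tar per list)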
_unix.110479
On FreeBSD 8.3 I'm running script (as root):/usr/local/etc/rc.d/foo/foo.sh startcontent is typical:. /etc/rc.subrname=foorcvar=${name}_enableload_rc_config ${name}required_files=${foo_conf}in /etc/rc.d.local I have:foo_conf=/cf/foo/config/foorcAnd when I start it I get:/usr/local/etc/rc.d/foo.sh: WARNING: /cf/foo/config/foorc is not readable./usr/local/etc/rc.d/foo.sh: WARNING: failed precmd routine for fooBut when I run application (/usr/local/bin/foo) directly with -f parameter as /cf/foo/config/foorc application starts normal.Permission for foorc file: -rwxr-xr-x and for directory: drwxr-xr-x.Fragment in rc.subr looks like:check_required_before(){ for _f in $required_files; do if [! -r ${_f} ]; then warn ${_f} is not readable fi done}It is permission problem or what?
rc.subr can't access file?
permissions;freebsd;rc
null
_codereview.149669
This is improved code after I some issue in pointed by @Edward in the last question: C++ operator overloading for matrix operations This work assignment in operator overloading .I need to use operators *, [][], =, +, -, << on objects of type matrix for example add to matrix using this code: m=m+s.I already sent the code to my teacher but I still want your opinion so I can improve the next code.matrix.h #ifndef Matrix_h#define Matrix_h#include <iostream>class Matrix{ private: int rows; int cols; int **Mat; public: Matrix (const int &rows,const int &cols); Matrix(const Matrix &other); ~Matrix (); int* & operator[](const int &index) const ; void operator=(const Matrix &other ); Matrix operator -()const; Matrix operator -(const Matrix &other)const; Matrix operator +(const Matrix &other)const ; Matrix operator *(const Matrix &other)const; Matrix operator *(const int &num)const; int getMatrixRows(const Matrix &other){return other.rows;} int getMatrixCols(const Matrix &other){return other.cols;} friend Matrix operator *(const int & num,const Matrix &m) { return (m*num); } friend Matrix operator +(const int &num,const Matrix &t) { return (num+t); } friend std::ostream &operator<<(std::ostream &os, const Matrix &m) { for (int i=0; i < m.rows; ++i) { for (int j=0; j < m.cols; ++j) { os << m.Mat[i][j] << ; } os << '\n'; } return os;}};#endifmatrix.cpp#include Matrix.h#include <iostream>#include <cassert>Matrix::Matrix(const int &n_rows,const int &n_cols )//constructor of class Matrix{ rows=n_rows; cols=n_cols; Mat=new int* [cols]; assert(Mat); for(int i =0;i<rows;i++) { Mat[i]=new int[cols]; assert(Mat[i]); } for(int i=0;i<rows;i++) for(int j=0;j<cols;j++) Mat[i][j]=0; } Matrix::Matrix(const Matrix &other) //copy constructor{ cols=other.cols; rows=other.rows; Mat=new int* [other.rows]; assert(Mat); for(int i =0;i<other.rows;i++) { Mat[i]=new int[other.cols]; assert(Mat[i]); } for(int i=0;i<other.rows;i++) for(int j=0;j<other.cols;j++) Mat[i][j]=other[i][j];}int* & Matrix::operator [](const int &index) const // overloading operator []{ return Mat [index];}void Matrix::operator=(const Matrix &other ) // overloading operator ={ if(Mat !=other.Mat && cols==other.cols && rows==other.rows) { for(int i=0;i<rows;i++) for(int j=0;j<cols;j++) Mat[i][j]=other.Mat[i][j]; }} Matrix Matrix::operator-()const // overloading operator -{ Matrix temp(rows,cols); for(int i=0;i<rows;i++) for(int j=0;j<cols;j++) temp.Mat[i][j]=Mat[i][j]*-1; return temp;} Matrix Matrix::operator +(const Matrix &other)const //add 2 matrix{ Matrix temp(rows,cols); if (rows!=other.rows ||cols!=other.cols) { for(int i=0;i<rows;i++) for(int j=0;j<cols;j++) temp.Mat[i][j]=Mat[i][j]; return temp; } else { for(int i=0;i<rows;i++) for(int j=0;j<cols;j++) temp.Mat[i][j]+=other.Mat[i][j]+Mat[i][j]; } return temp; }Matrix Matrix::operator *(const Matrix &other)const //multiplay matrix on the right{ if (cols!=other.rows) { Matrix temp(cols,rows); for(int i=0;i<rows;i++) for(int j=0;j<cols;j++) temp.Mat[i][j]=Mat[i][j]; return temp; } else { Matrix temp(cols,other.rows); for(int i=0;i<rows;i++) for(int j=0;j<other.cols;j++) for(int k =0;k<cols;k++) temp[i][j]+=Mat[i][k]*other.Mat[i][j]; return temp; }}Matrix Matrix::operator *(const int &num)const //multiplay with number{ Matrix temp(rows,cols); for(int i=0;i<rows;i++) for(int j=0;j<cols;j++) temp.Mat[i][j]=Mat[i][j]*num; return temp; }Matrix Matrix::operator -(const Matrix &other)const //matrix subtraction { Matrix temp(rows,cols); if (rows!=other.rows ||cols!=other.cols) { for(int i=0;i<rows;i++) 
for(int j=0;j<cols;j++) temp.Mat[i][j]=Mat[i][j]; return temp; } else { for(int i=0;i<rows;i++) for(int j=0;j<cols;j++) temp.Mat[i][j]+=Mat[i][j]-other.Mat[i][j]; } return temp;}Matrix::~Matrix ()//destrucor { for(int i =0;i<rows;i++) delete [] Mat[i]; delete [] Mat;}main.cpp#include Matrix.h#include <iostream>int main(){ Matrix m(2, 2); m[0][0] = 2; m[1][1] = 2; std::cout << m << std::endl; m = m; const Matrix s = -m; std::cout << m << std::endl << s << std::endl; m = s+2 * -m * m * 2 - s; std::cout << m << std::endl << s << std::endl; std::cout << s[1][1] << std::endl; return 0 ; }I have been told to throw exceptions rather than asserts and to make your base class destructor virtual. What is the right way to do it? I never used exception before and not familiar with the concept of virtual destructor.Prefer a single allocation instead of doing multiple allocations in the constructor, it would be simpler to do only a single allocation. This is both faster and simpler@Edward wrote this, but is it possible to allocate 2 dimensional array with an allocation?Another thing I didn't understand is what to do when main is trying to use the function illegally for example add 2 matrix that not in the same size. I created a new object and gave him the same data as one then called the function and returned it. m=m+s in this example, if m and s are not in the same size I just returned new object with the values of m. Is it the right way?
C++ operator overloading for matrix operations - follow-up
c++;beginner;overloading
null
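On the throw exceptions rather than asserts part of the question, a minimal sketch (mine, not reviewed code) of how a size mismatch in operator+ could be reported instead of silently returning a copy of the left operand:
#include <stdexcept>

Matrix Matrix::operator+(const Matrix &other) const
{
    if (rows != other.rows || cols != other.cols)
        throw std::invalid_argument("Matrix::operator+: dimension mismatch");
    Matrix temp(rows, cols);
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            temp.Mat[i][j] = Mat[i][j] + other.Mat[i][j];
    return temp;
}
A caller would then wrap the arithmetic in a try/catch block. The virtual destructor advice only matters if Matrix is meant to be used as a base class, which it currently is not.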
_webmaster.24943
I have multilanguage website targeting by subdomain:http://en.site.com/ - Displays only contents in Englishhttp://de.site.com/ - Displays only contents in German...http://site.com/ - Displays contents in all languagesTitle/Description are also translated depending on subdomain. Main domain is in English.I would like to have different Title/Description on main domain http://site.com/ for users from different region:http://google.co.uk/ should display Title/Description in Englishhttp://google.de/ should display Title/Description in GermanMy question:For main domain http://site.com/ is it possible to return different Title/Description for Google bot from different regions? Or there is only one Google bot and targeting is only completed by search engine, not crawler?Thank you!
Different Title/Description for Google in different regions
google search;multilingual
null
_unix.19498
In this thread, yoda suggests the following solution for using colors in zsh:
#load colors
autoload colors && colors
for COLOR in RED GREEN YELLOW BLUE MAGENTA CYAN BLACK WHITE; do
    eval $COLOR='%{$fg_no_bold[${(L)COLOR}]%}' #wrap colours between %{ %} to avoid weird gaps in autocomplete
    eval BOLD_$COLOR='%{$fg_bold[${(L)COLOR}]%}'
done
eval RESET='$reset_color'
Correct me if I am wrong, but if I understand correctly, autoload colors && colors allows you to call colors by their name, while the rest of the script just wraps them in %{ %}.
This made me think about the following questions:
Is there a way to know what colors are loaded by calling autoload colors && colors? How do I know what colors are supported by my terminal?
Understanding colors in zsh
zsh;colors
The colors function records the names of colors and similar attributes (bold, underline and so on) in the associative array color. This array associates names with terminal attribute strings, which are numbers, e.g. 00 normal, 42 bg-green:
echo ${(o)color}
If you want to see how the array is built, look at the source of the function: which colors or less $^fpath/colors(N).
The colors function only defines names and escape strings (in the associative arrays fg and bg) for the 8 standard colors. Your terminal may have more. See this answer for how to explore what colors are available.
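To make both questions concrete, here is a small probe (my own, assuming an interactive zsh session; echotc reads the terminal's termcap entry, so it depends on how zsh was built):
autoload -U colors && colors
print -l ${(ko)color}   # every key in the color table: attribute and color names plus their numeric codes
print -l ${(ko)fg}      # just the foreground color names usable as $fg[name]
echotc Co               # how many colors the terminal itself claims to support
tput colors             # the same information via terminfo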
_unix.193714
I am aware of following thread and supposedly an answer to it. Except an answer is not an answer in generic sense. It tells what the problem was in one particular case, but not in general.My question is: is there a way to debug ordering cycles in a generic way? E.g.: is there a command which will describe the cycle and what links one unit to another?For example, I have following in journalctl -b (please disregard date, my system has no RTC to sync time with):Jan 01 00:00:07 host0 systemd[1]: Found ordering cycle on sysinit.target/startJan 01 00:00:07 host0 systemd[1]: Found dependency on local-fs.target/startJan 01 00:00:07 host0 systemd[1]: Found dependency on cvol.service/startJan 01 00:00:07 host0 systemd[1]: Found dependency on basic.target/startJan 01 00:00:07 host0 systemd[1]: Found dependency on sockets.target/startJan 01 00:00:07 host0 systemd[1]: Found dependency on dbus.socket/startJan 01 00:00:07 host0 systemd[1]: Found dependency on sysinit.target/startJan 01 00:00:07 host0 systemd[1]: Breaking ordering cycle by deleting job local-fs.target/startJan 01 00:00:07 host0 systemd[1]: Job local-fs.target/start deleted to break ordering cycle starting with sysinit.target/startwhere cvol.service (the one that got introduced, and which breaks the cycle) is:[Unit]Description=Mount Crypto VolumeAfter=boot.mountBefore=local-fs.target[Service]Type=oneshotRemainAfterExit=noExecStart=/usr/bin/cryptsetup open /dev/*** cvol --key-file /boot/***[Install]WantedBy=home.mountWantedBy=root.mountWantedBy=usr-local.mountAccording to journalctl, cvol.service wants basic.service, except that it doesn't, at least not obviously. Is there a command which would demonstrate where this link is derived from? And in general, is there a command, which would find the cycles and show where each link in the cycle originates?
generic methodology to debug ordering cycles in systemd
systemd
Is there a command which would demonstrate where this link is derived from?The closest you can do is systemctl show -p Requires,Wants,Requisite,BindsTo,PartOf,Before,After cvol.service, which will show the resulting (effective) dependency lists for a given unit.is there a command, which would find the cycles and show where each link in the cycle originates?To my knowledge, there is no such command. Actually systemd offers nothing to aid in debugging ordering cycles (sigh).According to journalctl, cvol.service wants basic.service, except that it doesn't, at least not obviously.First, the requirement dependencies (Wants=, Requires=, BindsTo= etc.) are independent of ordering dependencies (Before= and After=). What you see here is an ordering dependency cycle, i. e. it has nothing to do with Wants= etc.Second, there is a number of default dependencies created between units of certain types. They are controlled by DefaultDependencies= directive in the [Unit] section (which is enabled by default).In particular, unless this directive is explicitly disabled, any .service-type unit gets implicit Requires=basic.target and After=basic.target dependencies, which is exactly what you see. This is documented in systemd.service(5).
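A few partial workarounds that help in practice (not a real cycle explainer either; availability and exact behaviour depend on the systemd version, and the unit names below are simply the ones from this question):
systemd-analyze verify /etc/systemd/system/cvol.service    # re-parses the unit and reports problems, including ordering cycles it detects
systemd-analyze dot --order 'cvol.service' 'basic.target' 'sysinit.target' 'local-fs.target' | dot -Tsvg > order.svg
                                                           # graph only the Before=/After= edges between these units (needs graphviz)
systemctl list-dependencies --after cvol.service           # everything this unit is ordered after
systemctl list-dependencies --before cvol.service          # everything ordered after this unit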
_computergraphics.56
In GLSL, perspective correct interpolation of vertex attributes is the default setting - one can disable it for specific vertex attributes by using the noperspective qualifier. Other than in post-processing shaders, I've never seen the perspective correct interpolation disabled - are there any other use cases? Also, does it even make a difference, performance-wise?
When to disable perspective correct interpolation ( noperspective )
opengl;glsl;performance
Use cases are only limited by your imagination! noperspective means that the attribute is interpolated across the triangle as though the triangle was completely flat on the surface of the screen. You can do antialiased wireframe rendering with this: output a screen-space distance to the nearest edge as a noperspective varying and use that as coverage in the pixel shader.Or if you're doing non-photorealistic rendering and want a pattern in screen-space like halftoning, you can enable noperspective on your UVs used for texturing.Does it make a performance difference? Probably, but you probably won't notice (with the potential exception of less powerful graphics hardware). Most GPUs are composed of a series of pipeline stages that execute in parallel, and in some sense you only pay the cost for the most expensive stage. If rasterization is the most limiting part for you, then you may see a difference from the divisions that you're skipping per-pixel. I would guess that is most likely when rendering a shadow map or a depth prepass, but those also have the fewest attributes to interpolate.
_webmaster.78279
I have a site for creating random passwords: http://passwordcreator.orgI was very surprised to get a notice that the site has mobile problems because it is mobile friendly. Here are the things that Google is complaining about:Size content to viewportGoogle is complaining about the tables at the bottom of the page. Tabular data is nearly impossible to fit in the width of a mobile device.It is only those tables at the bottom of the page that are outside the viewport. They don't push the content at the top of the page out of the viewport. If a user gets that far down the page and is interested in that data, I don't have much choice but to allow them to scroll right.The only remedy that I see would be to hide those tables on smaller screens. That would make the mobile experience worse, not better. It would remove functionality from mobile.Click targets too close togetherGoogle is complaining that the passwords are too close together:The only reason that they are clickable is because clicking them selects the entire password. This makes them easier to copy and paste. They are close together, but it doesn't hurt mobile usability. Clicking doesn't take you away from content. It's easy enough to try again if you miss.Possible remedies would be:Make them not clickable on small screens (which would make the site less usable)Move them further apart (fewer will be visible which makes the site less usable)Prioritize visible contentYour page requires additional network round trips to render the above-the-fold content. For best performance, reduce the amount of HTML needed to render above-the-fold content.The entire HTML response was not sufficient to render the above-the-fold content. This usually indicates that additional resources, loaded after HTML parsing, were required to render above-the-fold content.This is one that the tool is just flat out wrong on as far as I can tell. There is only one network request. The site uses no images. All the CSS and JS is inline. There aren't even third party calls for ads or analytics:What can I do about this?Do I have to make my site worse for this Google algorithm if I want to retain mobile rankings? Are there any ways to mark items as not a problem in this case?
Google wrongly marks my mobile friendly site as not-friendly
google;mobile;penalty;googlebot mobile
null
_softwareengineering.290936
I started working on a website,for tracking and rating watched anime/manga/etc. and recommendations, and it should also have an API, for providing the info about series and other things.On similar sites, I have noticed that, to use an API, one typically needs a token/auth of sorts, and there are certain usage limits, even if it's for reading info publicly available on the site.But the problem is, you could circumvent all those limits by crawling the site directly.Even if the format is less convenient, once you have a parser in place there's no problem. Actually, if it uses clientside rendering, the info will already be sent in a convenient format.And on the other hand, this would also put more strain on the server, because the info may be spread out on multiple pages, needing more requests, and it would also send info not required by the client app.In the end, is there a point in restricting the API used for info that's available publicly on the site? Should there be an unrestricted, unauthed API for reading public info, in order to avoid needless blunder for both sides?Or should, instead, the site itself have request limits, like an API?
API with limits vs site crawling
api design;web api
Having a public API for data access from your site is about making the data available in a convenient, supported, well-defined and always-up-to-date manner. It is a way for a site owner to say 'here is data I collect and own, but I want you to be able to use it so I'm making it available. Oh, and I promise not to change the structure or do anything that might break your applications without communicating about it clearly'.Crawling has some technical limitations, some very important legal considerations AND is prone to breaking without any sort of notification from the owner of the data. Personally I would not hesitate to consume a public JSON API if that has data I need, but I'd be hard pressed to start writing a crawler/parser to get it off a website...
_unix.114407
When deploying my application under linux, where do I put my libraries, executable and the desktop entry file? And what about other files my program needs? For example background pictures, audio files etc.I heard that I put my executable file in the /usr/bin/ folder, my libraries in the /opt/<myapp>/lib/ folder and my desktop entry file in /usr/share/applications/ folder. Is that correct?But where is the general place for application resources?Is that everything I need to care of when deploying my application or are there other steps I am missing?
Deploying my application
directory structure
The Filesystem Hierarchy Standard specifies where to put files.If you're installing files outside of the package manager, always put them under /usr/local or under /opt. Never touch anything under /usr except via the package manager, except for things under /usr/local./usr/local/bin: executables intended to be executed by users (interactively or from scripts)/usr/local/lib: libraries available to many programs, not just yours/usr/local/lib/YOUR-PROGRAM-NAME: any other architecture-dependent files/usr/local/share/doc: documentation (except in man and info format)/usr/local/share/info: documentation in info format/usr/local/share/man/man*: man pages/usr/local/share/YOUR-PROGRAM-NAME: any other architecture-independent filesThese days, the separation of the share area which contains architecture-independent files isn't very important. It was devised back when hard disks were smaller and it was important to save space by not storing architecture-independent files twice in heterogeneous networks. You can skip this distinction if you like and put everything under lib/YOUR-PROGRAM-NAME.If you prefer to use /opt, put everything under /opt/YOUR-PROGRAM-NAME, and make symbolic links in /usr/local/bin (and /usr/local/share/man/man* and /usr/local/share/info if you provide documentation in man and info format) so that users can invoke your program.If you make deb or rpm packages, put files under /usr instead of /usr/local. Check each distribution's documentation for its particularities.
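As a concrete illustration of the /opt layout described above (the program name and file names are placeholders, not something mandated by the FHS):
sudo mkdir -p /opt/YOUR-PROGRAM-NAME/bin /opt/YOUR-PROGRAM-NAME/share
sudo cp build/your-program /opt/YOUR-PROGRAM-NAME/bin/
sudo cp -r backgrounds/ sounds/ /opt/YOUR-PROGRAM-NAME/share/
sudo ln -s /opt/YOUR-PROGRAM-NAME/bin/your-program /usr/local/bin/your-program
The symbolic link in /usr/local/bin is what makes the program invokable by users without touching anything managed by the package manager.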
_webapps.60297
I have found answers to this but they don't work. I want to invite my friends to like my band page.If I am on the band page and try to Build an audience, it only gives me options to do this via email.If I am on my personal page I see no option for Build an audience. I am obviously the admin to my own page I assume. What am I missing?
How do I invite a friend to like my page?
facebook;invite
null
_softwareengineering.323848
I am searching for a universal algorithm that shifts pitch and time while keeping the sample rate. (I am trying to program a sound generator (sine, triangle...) as an exercise)I just want to squeeze the samples of the sound together so it appears to be shorter and pitched higher.It's easy if you want to speed it up with a nice number like 2 (leave out every second sample) but what if you want to speed it up by 2.5?Can someone name an algorithm for this? (preferably in C++, but other languages are also fine)(I spent the last 30 minutes writing an algorithm that computes averages of decimals of a number (Like the average of every 2.5 numbers). Then I realized it's totally useless so please help)Does this approach make sense?: (Will be adding pseudocode shortly)N is the amount of the pitchDivide the index of every sample of the sound by N (2/2.8=0.71; 3/2.5=1.07)For every whole number, compute the distances to the next samples (0.29; 0.07)Value portions of distance (1-(0.29/0.36)=19%; 1-(0.07/0.36)=81%New number (New Sample = (19%*SampleA + 81%*SampleB)/2)Code:byte[] input;double f = factor;byte[] output = new byte[ceil(input / f)];output[0] = input[0]for(int i = 1; i < output.length) { output[i] = (input[floor(i*factor)] + input[ceil(i*factor)]) / 2;}
Pitch/Time Shifting of a PCM byte array
c++;algorithms
As a commenter said, look up 'Audio Resampling'. Also, maybe buy the book called 'The Art of Digital Audio' or similar. Basically, what you are doing is linear interpolation, which should work. But resampling at a much higher rate should allow you to pick samples (from the denser set) close to the sample points you need for, e.g., a 2.5x speedup. Analyzing the error from such heuristics is non-trivial; for that you need a good book.If you like maths, read this practical classic: https://www.amazon.com/Fourier-Transform-Its-Applications/dp/0073039381 This second book helps you understand what happens to the signal if you do linear interpolation between samples (it is an exercise...)
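To make the linear-interpolation idea concrete, here is a rough sketch (mine, not from the answer; it assumes 16-bit mono samples and omits the anti-aliasing filter a real resampler would add):
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Resample `in` by `factor` (> 1.0 means a shorter output, i.e. faster and higher pitched).
std::vector<int16_t> resampleLinear(const std::vector<int16_t>& in, double factor)
{
    std::vector<int16_t> out;
    if (in.size() < 2 || factor <= 0.0)
        return out;
    const std::size_t n = static_cast<std::size_t>(std::floor((in.size() - 1) / factor)) + 1;
    out.reserve(n);
    for (std::size_t i = 0; i < n; ++i) {
        const double pos = i * factor;                      // fractional position in the source
        const std::size_t k = static_cast<std::size_t>(pos);
        const double frac = pos - static_cast<double>(k);
        const double a = in[k];
        const double b = in[std::min(k + 1, in.size() - 1)];
        out.push_back(static_cast<int16_t>(std::lround(a + (b - a) * frac)));  // weighted blend of the two neighbours
    }
    return out;
}
For a non-integer factor like 2.5 this is exactly the distance-weighted averaging sketched in the question, just expressed per output sample.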
_softwareengineering.225478
In RFC 2617, HTTP Authentication: Basic and Digest Access Authentication, they always speak of a username and password for the authentication.Why should I choose a username as the identifier for a website? It is often hard to choose a username that doesn't already exist, i.e. one that is unique. But everyone getting an account somewhere has an email address, which is unique.Why does the RFC not speak of something abstract like a unique string identifier? That would make much more sense to me, since everyone could then log into his account with his email...When someone starts with security/authentication, the first thing he will think when reading RFC 2617 is that he cannot do basic auth, because what he wants is email + password. What am I missing here?
Why speaks basic http authentication always of a username
http;authentication
The problem here isn't so much the spec, as it is your interpretation.It seems there's a habit among computer professional to always expect verbally spoken and written language to have the same well-defined meaning as programming languages (i.e. if you use this word in a sentence, you must mean this).What's worse is that different people have different preconceptions of the exact and true meaning of certain words and that leads to all kinds of discussions, arguments and all-out battles. Here's an example of someone trying to understand a difference between fragile and brutal and you can read my ranty answer: https://softwareengineering.stackexchange.com/questions/131240/difference-between-brittle-and-fragile/131259#131259You have fallen into the same trap with this auth spec. They mention username and password but all it really means is that your credentials have two parts: public part that everyone knows about and can uniquely identify you by and a private secret part that only the user knows so he can get authenticated. But at the end, the public part (aka a username) can be just about anything you want, including someone's e-mail, or in some dumb cases, their social security# or in corporate case their unique identifier which puts then on the same level as all other resources.
_datascience.11752
I'm doing some AdaBoost with decision stumps, and when inducing a binary classifying decision stump, I'm finding that both leaf nodes have a positive value. Can this be the case? Is this possible?
Decision Stumps with same value leaf nodes
decision trees
What is the overall response rate? If it's low (even 15-20%) it may be difficult to find decision stumps that contain one leaf with > 50% response! You could consider oversampling or changing the cutoff probability, but I think that if you're using only 2-leaf trees, your model is bound to struggle.
_softwareengineering.328878
My apology if this question is already answered & accessible through search - not quite sure how to phrase this particular query. So, here's scenario & question:File a.py imports several common Python modules (pandas, numpy, etc.) and a file b.py will make use of classes created in a.py, as well as the modules which a.py imports.For sake of performance & clean code, which of these options should I do?1) create a class in b.py which imports not only the classes, but also the common modules, created in a.py?2) import only the newly-created 'a' classes and import the common modules anew in 'b'?3) create a _init__.py, setup.py or main.py for the project which in-aggregate imports all required modules across the files being created?
Python inheritance/import - parent file imported modules, should child import too or thru parent?
python;inheritance
null
_codereview.87044
After a discussion of which users were from Australia, I wrote my first SQL query to find out:SELECT u.DisplayName'Display Name', u.Reputation'Rep', u.Location'Location'FROM Users uWHERE u.Location LIKE '%Australia%'ORDER BY 'Rep' DESCAs always, please tell me the good, the bad, and the ugly.The query can be found here
Query to find users from Australia
sql;stackexchange
I don't like how you're specifying the column aliases. I expect a whitespace between the column and the alias.I like that you're not specifying the optional AS keyword though - I find it only adds clutter when it's there.Also I would have used [square brackets] instead of single quotes, and layout the field names on separate lines, like this:SELECT u.DisplayName [Display Name] ,u.Reputation [Rep] ,u.Location [Location]That way you can easily add, reorder, or comment-out a column if you need to.
_webmaster.86273
For those of you who read an earlier post of mine, you'll understand that my site has AdSense ads and no ads load in IE 7. In fact, I tested another website that uses ads in the same browser and even they don't show up.I checked my user access log for my website for there seems to be some people still using IE 6 along with other older web browsers but the numbers aren't in the majority.Now I see from https://support.google.com/adsense/answer/191268?hl=en that AdSense only wants to support IE 10 and up.In the other post I made, someone suggested I should try to encourage users to upgrade their browser but not in a forceful manner but I'm not sure if that's enough to convert a user to a browser in which they actually see the AdSense ads.The point here is that I want everyone who visits my website to see at least one ad per visit and so far, IE 7 is preventing that from happening. Installing IE 6 is a joke on XP so I couldn't test in that. Luckily on webpagetest.org a computer in Montreal running IE 8 showed the ads.I admit I do suck at advertising and I also don't want to hurt real users.So what would be the grand solution here?
Is posting a small upgrade your browser message enough to get users off IE 7 so that they see ads?
google adsense;browsers;cross browser;internet explorer 6;internet explorer 7
null
_webapps.104076
I have customers re-ordering meal delivery service weekly.I'd like for them to be able to re-order easily instead of retyping their information address etc every week. Is there any feature for returning customers, or a way to track if they are returning to avoid asking them certain questions? or not allowing them to use a specific promo code more than once?
Cognito Forms- Returning Customer
cognito forms
null
_softwareengineering.24798
After a burst of ranting about homework, the applicability of the classes I'm taking, and my computer science teacher, I have some questions concerning education for my career path as a developer.I taught myself everything I know about programming. I've been thinking lately about the advantages of teaching myself vs. learning from a teacher. I feel I might miss some things I would normally learn. I might learn some concepts wrong, or something. After all, computer science education has been around longer than I have.The problem is, I'm not learning anything new about computer science in my programming class. I put the Computer Programming 2 course on my schedule (it was labeled as Computer Science when I signed up for it). It turns out that it's just learning C++ and of course OOP. In fact, the CP1 students are in the same classroom at the same time as the CP2 students, except the CP1 course is a half-year class instead of a full year. I thought we'd do something like data structures (other than arrays), or something I'm less experienced in. The teacher had me change my schedule to CP1, but he lets me loose to work on projects with the CP2 students while he teaches the CP1 students.I usually end up helping the CP2 students with build errors and things like that after I'm done. I certainly don't mind helping them, but I'm not learning anything new, and I likely won't, seeing that AFAIK linked lists will be the most advanced concept taught (for the CP2 students; the class ends this semester for CP1, including me). Frankly, I'm not interested in copying source code from paper to screen, which is 90% of what the CP2 students do.I want to be learning something computer science related. I'd hate to sit around waiting for my options to catch up. I want to take classes not offered by my high school, like discrete mathematics, algorithms and data structures, or something like that. So my question is, where can I take classes which aren't offered by my high school? Can I take college classes? (Keep in mind there aren't any concurrent, AP, or distance-ed classes concerning computer science offered by my high school besides the CP1/CP2 class I'm in.) Do you know of any online classes I could take?Thanks,Danny ShieldsP.S.: I talk about my programming experience on my StackExchange profile if it helps.
As a programmer, what paths should I take concerning education?
self improvement;education;experience
The problem is, I'm not learning anything new about computer science in my programming class.Even in Germany, you don't get the CS stuff (data structures, algorithms, turing model, automatons, proofs, complexity, etc.) until grade 11. In fact, only few select schools offer CS at grade 11(-13). If you are out of luck, university (basically grade 14+ in our system) is the first institution you will get real CS from.Before that, what schools do is just applied practice. Internet technologies like HTML and Javascript to make your own web page. Programming with QBASIC to drive some LEDs hooked up to the LPT port. Stuff you can get done without CS theory.That's for where to take classes. But that is usually not the place where the most proficient programmers emerge from. The B.Sc.s have theory, but the programs they write usually suck because they lack familiarity with pretty much all languages. Even people that just moved from M.Sc. to PhD have the problem, less, but noticable.Few, if any, delve into what is essential for programming: selected paradigms, or perhaps called patterns. I am not talking about procedural VS OOP, or imperative vs declarative. Did any course ever talk about the Builder pattern? The Factory pattern? No? (I recommend http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612 )What programmers should at best have is experience, and a lot of that. That does not mean they should not also have an understanding about CS concepts (the more the better), but in contrast to university exams, it is not so strictly necessary to understand everything.And as I see it, the programmers being able to churn code has more practical (industrial) weight than a scientist being able to churn theory. (Conversely, a clever business that has to deal with internals of algorithms employs one of each type of person and utilizes twin programming, a.k.a. pair programming.)
_unix.155029
I have a report running in UNIX and creating a file named xxxx_ddmmyy_hhmm_zzzzzz.txt. I need to remove the zzzzzz from the file name. How do I do it?
How to remove a specific string from file name
files;rename
null
_webmaster.53095
I have an HTTP sniffer application written in C++. When I am getting the user agent field, I get something like:

Mozilla/5.0 (Windows NT 6.1; WOW655970812; fr=0ea2JbE8UfOtMxCAb.AWUSc_K0DPt0NpgvgWvZVDCZzug.BSHX4S.8d.FIo.AWUtAj4s; xs=1%3AFOApj11sTU3NXA%3A0%3A1378405752%3A6395; sub=128; p=129; act=1378934556224%2F153; presence=EM378935116EuserFA21655970812A2EstateFDsb2F1378934519166Et2F_5b_5dElm2FnullEuct2F1378934511240EtrFA2close_5fescA2Etwlocale=tr_TR; c_user=1655970812; fr=0ea2JbE8UfOtMxCAb.AWUSc_K0DPt0NpgvgWvZVDCZzug.BSHX4S.8d.FIo.AWUtAj4s; xs=1%3AFOApj11sTU3NXA%3A0%3A1378405752%3A6395; sub=128; p=129; act=1378934556224%2F153; presence=EM378935116EuserFA21655970812A2EstateFDsb2F1378934519166Et2F_5b_5dElm2FnullEuct2F1378934511240EtrFA2close_5fescA2EtwF4219503720EatF1378934899190EwmlFDfolderFA2inboxA2Ethread_5fidFA2user_3a626679213A2CG378935116367CEchFDp_5f1655970812F195CC

This doesn't happen often, but still often enough to cause problems in my customer's database. It seems clear that I need to sanity-check this field while reading it and after reading it. My question is whether the user agent information above is somehow meaningful. If not, what might be the cause of such a strange user agent field in the request header?
Long user agent field in HTTP header
http;user agent
This UA string definitely looks broken -- I can say that this gibberish looks similar to the content of the cookie header.

The most common reasons:

Fake UA -- either used on purpose by some bot/script ... or it's a programmer mistake (or a buggy library that was used) when the whole request body and headers were assembled manually;
Your app got it wrong somehow.
_webmaster.34053
We wish to host multiple apps across multiple servers. What we are looking for (ideally) is an existing solution which will work. For example, normally to do it we'd follow a route (for failover) like:

- The app is installed on one server along with a MySQL database
- The app is also installed on a second server. Rsync is used to mirror the files over to the second server and ensure consistency
- MySQL is installed with a Master->Slave setup
- We use a service such as DNS Made Easy which has DNS failover. If one server goes down it automatically routes traffic to the backup server

We have done the above a few times and generally it's fine. The issue I have here is that the above is for one app. What I would like to look at is how we can manage this for multiple apps, and whether there is a layer (such as VMware) that has complete mirroring built in at the OS level. For example, how do web hosts currently do it when they ensure that more than one machine is running a bunch of hosted websites? If you were running hosting and you had 200 clients on a server, you would want the same clients across 2 or more servers and want everything mirrored. Any advice would be much appreciated.
Mirroring of Apps across servers
server;mirror
null
_unix.3719
I've got this configuration:

WRouter -> (by wifi) Computer1

And I want to add another computer (computer2) connected by cable to computer1.

Is it possible then to configure computer1 to forward all packets from/to computer2, including DHCP packets? And if yes, how?
forwarding DHCP packets
linux;ubuntu;dhcp
null
_unix.91960
I'm getting the following output when the mount command is executed:

[root@]# mount
/dev/sda2 on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/sda3 on /home type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
gvfs-fuse-daemon on /root/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev)

I'm not able to understand the output of this command. Can anyone explain it?
Can anyone explain the output of mount?
linux;filesystems;mount;linux kernel
null
_unix.342688
There is a fundamental aspect of the way permissions work in Linux directories that I think I have not understood.

I have this folder I was trying to access from my local Apache server:

sudo chmod 777 /home/ut/programmes/Programmation/p5-linux/
sudo -u www-data ls /home/ut/programmes/Programmation/p5-linux/
ls: cannot read directory '/home/ut/programmes/Programmation/p5-linux/': Permission denied

Why is it not working, even though the permission is 777?

Moreover, by doing:

sudo chown ut:www-data /home/ut
sudo chmod 710 /home/ut

without changing anything about the permissions of /home/ut/programmes/Programmation/p5-linux/, this is what I get now:

sudo -u www-data ls /home/ut/programmes/Programmation/p5-linux/
icudtl.dat libffmpegsumo.so locales nw.pak p5 p5.png Projets

The only thing I did was change the group of a parent directory. Why does it work now?
Why does changing the group of my /home folder affect what happens in a subdirectory?
permissions;directory
null
_webapps.54798
I would like to find the number of Fridays in a specific month via a function in Google Spreadsheets. For example, for January 2014 the value would be 5 and for February 2014 the value would be 4.

How can I do that?
Count the number of Fridays in a specific month
google spreadsheets
This is how to do that with Google Apps Script.

Code

function specificDays(dayName, monthName, year) {
  // set names
  var monthNames = ["January", "February", "March", "April", "May", "June",
                    "July", "August", "September", "October", "November", "December"];
  var dayNames = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"];

  // change string to index of array
  var day = dayNames.indexOf(dayName);
  var month = monthNames.indexOf(monthName) + 1;

  // determine the number of days in month
  var daysinMonth = new Date(year, month, 0).getDate();

  // set counter
  var sumDays = 0;

  // iterate over the days and compare to day
  for (var i = 1; i <= daysinMonth; i++) {
    var checkDay = new Date(year, month - 1, parseInt(i)).getDay();
    if (day == checkDay) {
      sumDays++;
    }
  }

  // show amount of day names in month
  return sumDays;
}

Screenshot

Remarks

Add the script via Tools > Script editor in the menu. Save the script and you're good to go!

Example

I've created an example file for you: Amount of Day Names in Month
_unix.228741
I was wondering:

Is it possible to create a checksum of a directory (using something like md5sum)?
Is it possible to recursively create a checksum for each file inside the directory (and then print it out)?
Or both?

I'm using bash.
Get checksum of directory on bash
bash;files;directory;hashsum
md5sum won't take a directory as input; however,

tar cf - FOO | md5sum

will checksum it. If a file is changed anywhere within FOO, the checksum will change, but you won't have any hint of which file. The checksum will also change if any file metadata changes (permissions, timestamps, etc.).

You might consider using:

find FOO -type f -exec md5sum {} \; > FOO.md5

which will md5 every file individually and save the result in FOO.md5. This makes it easier to check which file has changed. This variant only depends on file content, not on metadata.
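If you later want to verify the tree against the stored list, GNU md5sum can check it for you. This is just a usage sketch; it assumes you run the check from the same directory you ran find from, so the relative paths in FOO.md5 still resolve:

# re-hash every file listed in FOO.md5 and print OK / FAILED per file
md5sum -c FOO.md5

# only report files whose checksum no longer matches
md5sum -c --quiet FOO.md5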
_softwareengineering.153359
These days many apps support asynchronous updates. For example, if you're looking at a list of widgets and you delete one of them then rather than wait for the roundtrip to the server, the app can hide the one you deleted, giving immediate feedback. The actual deletion on the server will happen in the background. This can be seen in web apps, desktop apps, iOS apps, etc.But what about when the background operation fails. How should you feed back to the user? Should you restore the UI to the pre-deletion state? What about when multiple background operations fail together?Does this behaviour/pattern have a name? Perhaps something based on the Command pattern?
Asynchronous update design/interaction patterns
design patterns;async;user interaction
null
_cseducators.3069
I currently coach our school's CyberPatriot team. (You can find information on the CyberPatriot competition here.) We are heading into our second year of competing. Last year we did pretty well for being a more casually organized team, but I'd love to bring in a stronger curriculum. There are helpful training modules provided for coaches, but for more advanced vulnerabilities, independent study and research is necessary for achieving at the highest level in the competition. In particular, the images students must secure encompass a wide range of operating systems. Last year had Windows 7, Windows 8.0, Windows 8.1, Windows 10, Windows Server 2008, and Ubuntu 14.04. Additionally, students used Packet Tracer to learn networking security through the Cisco Networking Academy.The bottom line is that there is a ton of potential information to cover, and there is a precise way to succeed at CyberPatriot. This page explaining how the competition works details the point system: basically you earn points for doing a precise task in securing the system and lose points for making it less secure.My question is this: what resources can I use to strengthen my students' ability to succeed in this competition? For context, they are doing it as an extracurricular activity, and any resource that is engaging for students to work though on their own is a bonus. They are highly-motivated, but since there is so much out there, it's hard to know where to begin since cybersecurity is not my background.
Curricular Support for a CyberPatriot Club
resource request;security;extracurricular club;cyberpatriot
null
_unix.139154
If packagehello does not match, the output is still displayed.

Aim: to see no output in situation 2.

Situation 1:

user@hostname ~]$ sudo yum list 'package*'
packagehello
packagehello
package2world
packagehello
package2world

Situation 2:

user@hostname ~]$ sudo yum list 'package*' | grep -E 'package1.*|package2.*'
package2world
package2world

How to show the output only if both words match using grep?
Show output only if both words match using grep
grep;scientific linux
Try this:

sudo yum list 'package*' | grep -E 'package1.*package2|package2.*package1'

or using multiple greps:

sudo yum list 'package*' | grep 'package1' | grep 'package2'
_codereview.77180
This is an RSpec test for pagination in a Rails project. I'm not sure if I should write the test in spec/requests or spec/controllers. And there must be a lot of things I could do better. Which part of the code should I refactor?

spec/requests/companies_spec.r

describe "Companies" do
  describe "GET /companies" do
    before(:all) { 50.times { FactoryGirl.create(:company) }}

    describe "index" do
      context "with 50 companies" do
        it "has not second page" do
          visit root_path
          expect(page).to have_no_xpath("//*[@class='pagination']//a[text()='2']")
        end
      end

      context "with 51 companies" do
        before{ FactoryGirl.create(:company) }
        it "has second page" do
          visit root_path
          find("//*[@class='pagination']//a[text()='2']").click
          expect(page.status_code).to eq(200)
        end
      end
    end
  end
end
RSpec test for pagination
ruby;pagination;rspec
There are many ways to test this. Mostly, though, it'd be nice to avoid having to create 50+ records, since it slows down your tests.

If you use a request spec, though, it's probably best to create 50+ records, since it's a high-level test, so you'll want to be close to the real usage scenario.

But you can cheat a little in other places. For instance, if you have the records-per-page number defined in a way that's configurable, you can set it to something lower in your pagination test (or you can set it globally for the test environment). For instance, if the per-page is set to 2, you only need to create 3 records to test pagination. That'll be a lot faster than creating 51 records.

If you're spec'ing the view itself, you can simply define the instance variables that'll trigger pagination links, and not bother with the actual records. Or you can use FactoryGirl.build_list to merely build the records and assign them to a view-accessible variable, without actually storing them in the database - again, faster.

You can also look into mocking and stubbing to avoid actually creating the records.

For your current code, you can do a couple of things, like:

describe "Companies" do
  describe "GET /companies", order: :defined do
    before(:all) { FactoryGirl.create_list :company, PER_PAGE }

    context "with few records" do
      it "does not paginate records" do
        visit "/companies"
        expect(page).to have_no_xpath("//*[@class='pagination']//a[text()='2']")
      end
    end

    context "with many records" do
      it "paginates records" do
        FactoryGirl.create :company
        visit "/companies"
        expect(page).to have_xpath("//*[@class='pagination']//a[text()='2']")
        find("//*[@class='pagination']//a[text()='2']").click
        expect(page.status_code).to eq(200)
      end
    end
  end
end

Changes I've made:

- Using FactoryGirl.create_list to create a number of records at once.
- Using a PER_PAGE constant, just in case it isn't 50. This could also be an ENV var, an instance variable, or simply hard-coded. But naming it helps document the code.
- Using order: :defined to force the examples to be run in the order they're defined. This avoids the specs randomly failing because the 2nd test has been run before the first one.
- I've changed the visit path to "/companies" because that's what the spec is about. You used visit root_path, which no doubt worked fine, but the spec is about visiting /companies, so I find it nicer to keep it consistent.

You might also want to check that the correct records actually show up on the page. I.e. attempt to find the name of the 51st company within the rendered page, when you've gone to the 2nd page's path.

Lastly, you may want to add some specs for how the system should behave if you go to, say, page 4, but there aren't enough records to show anything.

But again, I'd probably start with view/controller specs, before moving on to high-level request specs. Request specs are great because they test everything pretty close to actual usage. But that also makes them more complex, so the more you can check at a lower level, the better.
_codereview.127706
Ive been working on a Rampart inspired multiplayer game for a few weeks now, and it is finally in a playable state. The big thing left to do before going alpha was to add a tutorial to the game. I did a tutorial once before for my city building game, but I was very unhappy with how I coded it. I had a Tutorial class and a TutorialPhase enum, and every render loop the GameScreen checked to see whether the tutorial was enabled, and if so it told the Tutorial object to check for whether the conditions of the current state were achieved. Then the Tutorial would call methods of the GameScreen to display the next part of the tutorial.This time I wanted to separate everything tutorial related from the regular classes as much as possible. I created a TutorialGameScreen class that extends the GameScreen class. All of the conditional logic is inside that. There is still a TutorialPhase enum which contains the strings that will be displayed for each phase, as well as whether or not a click is required to advance. I override only the methods of the GameScreen class that will be necessary to check the conditions.I think that this way is much cleaner. I still need the regular GameScreen to check whether the tutorial is enabled (with a simple boolean) so that I can prevent the game timer from ticking down while the tutorial is running. Id love to hear opinions about my approach.You can try the game hereCastlepartsTutorialGameScreenpublic class TutorialGameScreen extends GameScreen { private Table tutorialTable; private LibGDXGame libGDXGame; private Table overlayTable; private Label tutLabel1; private Label tutLabel2; private Label tutLabel3; private TutorialPhase tutorialPhase; public TutorialGameScreen(LibGDXGame libGDXGame, GameType gameType, Difficulty difficulty) { super(libGDXGame, gameType, difficulty); this.libGDXGame = libGDXGame; this.buildTutorialUI(); this.tutorialPhase = TutorialPhase.START; this.tutorialMode = true; this.setLabelTextForPhase(); } private void buildTutorialUI() { this.tutorialTable = new Table(this.libGDXGame.skin); this.tutorialTable.setFillParent(true); this.libGDXGame.hudStage.addActor(this.tutorialTable); this.tutLabel1 = new Label(, this.libGDXGame.smallButtonFontStyle); this.tutorialTable.add(this.tutLabel1).padBottom(-10); this.tutorialTable.row(); this.tutLabel2 = new Label(, this.libGDXGame.smallButtonFontStyle); this.tutorialTable.add(this.tutLabel2).padTop(-10).padBottom(-10); this.tutorialTable.row(); this.tutLabel3 = new Label(, this.libGDXGame.smallButtonFontStyle); this.tutorialTable.add(this.tutLabel3).padTop(-10).padBottom(-10); this.addOverlayTable(); } private void setLabelTextForPhase() { this.tutLabel1.setText(this.tutorialPhase.tut1); this.tutLabel2.setText(this.tutorialPhase.tut2); this.tutLabel3.setText(this.tutorialPhase.tut3); } private void addOverlayTable() { this.overlayTable = new Table(this.libGDXGame.skin); this.overlayTable.setFillParent(true); //this.overlayTable.setDebug(true); Image image = new Image(this.libGDXGame.alpha); this.overlayTable.add(image).expand().fill(); this.overlayTable.addListener(new ClickListener() { @Override public void clicked(InputEvent event, float x, float y) { TutorialGameScreen.this.advanceTutorial(); } }); this.libGDXGame.hudStage.addActor(this.overlayTable); } private void advanceTutorial() { this.overlayTable.remove(); int phaseNumber = this.tutorialPhase.ordinal(); //if its the last phase //end the tutorial if (phaseNumber == TutorialPhase.values().length - 1) { this.tutLabel1.remove(); 
this.tutLabel2.remove(); this.tutLabel3.remove(); this.tutorialMode = false; return; } this.tutorialPhase = TutorialPhase.values()[(phaseNumber + 1)]; if (this.tutorialPhase == TutorialPhase.CREEP_MODE) { this.world.setGameType(GameType.CREEP); } if (this.tutorialPhase == TutorialPhase.SHOOT) { SequenceAction sequence = new SequenceAction(); sequence.addAction(Actions.scaleTo(2, 2, 0.5f, Interpolation.sine)); sequence.addAction(Actions.scaleTo(1, 1, 0.5f, Interpolation.sine)); this.playerCannonBox.addAction(Actions.forever(sequence)); } this.setLabelTextForPhase(); if (this.tutorialPhase.needsOverlay) { this.addOverlayTable(); } } @Override protected void cannonDragStopped(float x, float y) { super.cannonDragStopped(x, y); if (this.tutorialPhase == TutorialPhase.SHOOT) { this.advanceTutorial(); this.playerCannonBox.clearActions(); this.playerCannonBox.setScale(1); } } @Override public void cannonballHitTile(final Tile tile, Tile building) { super.cannonballHitTile(tile, building); if (building.getType() != TileType.NONE && this.tutorialPhase == TutorialPhase.SHOOT_WALLS && !this.world.isPlayerTile(tile.position)) { this.advanceTutorial(); } } @Override public void wallBuiltAtPoint(MapPoint point) { super.wallBuiltAtPoint(point); if (this.tutorialPhase == TutorialPhase.BUILD_WALLS && this.world.isPlayerTile(point)) { this.advanceTutorial(); } } @Override public void floorBuiltOnTile(Tile tile) { if (this.tutorialPhase == TutorialPhase.BUILD_FLOOR && this.world.isPlayerTile(tile.position)) { this.advanceTutorial(); } }}TutorialPhasepublic enum TutorialPhase { START(Welcome to Castleparts., Click to continue., , false), SHOOT(See the white square, around your cannon?, Click, drag, release to shoot., false), SHOOT_WALLS(Good shot!, Now try to hit your enemy., Aim for one of the walls., false), BUILD_WALLS(Nice! See the wall, buttons at the bottom?, Drag and drop to place one., false), BUILD_FLOOR(Very good., Fully enclose to build floors., Floors score points., false), INFO(Great! Have the most, floors at the end to win., And watch your energy!, true), CREEP_MODE(In creep mode, the, enemy floors spread out, continuously., true), ALL_DONE(Have fun!, And make sure to, try the multiplayer., true); public final String tut1; public final String tut2; public final String tut3; public final boolean needsOverlay; private TutorialPhase(String tut1, String tut2, String tut3, boolean needsOverlay) { this.tut1 = tut1; this.tut2 = tut2; this.tut3 = tut3; this.needsOverlay = needsOverlay; }}And here's a screenshot:
Game Tutorial in Java
java;object oriented;game;libgdx
SubclassingIn short, I definitely think you made the correct decision here to extend GameScreen. This is the classic Is-A vs. Has-A (ie. Composition vs. Inheritance). In this case, the TutorialGameScreen is certainly a GameScreen (has everything that a standard GameScreen would have), but with added functionality (eg. ability to not run the timer, display additional UI elements, etc.). TutorialPhaseInternationalizationAny time you are displaying text to a user, you should consider the needs of your audience - including languages! Take a look at ResourceBundle for how to do this. Also this tutorial. For an enum, I find it easy to just use the enum element name as the key for the resource file. Then on your getDisplayString() method (or whatever it is called) you can simply do:public enum TutorialPhase { // ... public String getDisplayString() { return ResourceBundle.getBundle(getClass().getName()).getString(name()); } // ...}Multiple linesAs a result of internationalization you'll want to avoid hardcoding the multiple lines of text. You can't be certain that a bit of text that fits in English will also fit in, say, German. Also, there's no guarantee that a users system font settings will match your own. That said, I would advice combining those three Strings into one. I realize this poses a bit of a technical challenge (How do I split a label onto multiple lines?), but I would argue that the responsibility of solving this problem does not belong to TutorialPhase. Instead, there may be Label implementations that allow for multilines, text-wrapping, etc.Public fieldsNow that we're down to one String, it should really be private. TutorialPhase can then expose a single method (eg. getDisplayText()) to access the String. Similarly, needsOverlay should be private and we can add a method (eg. isOverlayNeeded()) to access the boolean.Updated code suggestion (you can document this enum better):public enum TutorialPhase { /** * The START. */ START(false), /** * The SHOOT. */ SHOOT(false), /** * The SHOOT_WALLS. */ SHOOT_WALLS(false), /** * The BUILD_WALLS. */ BUILD_WALLS(false), /** * The BUILD_FLOOR. */ BUILD_FLOOR(false), /** * The INFO. */ INFO(true), /** * The CREEP_MODE. */ CREEP_MODE(true), /** * The ALL_DONE. */ ALL_DONE(true); private final boolean needsOverlay; private TutorialPhase(final boolean needsOverlay) { this.needsOverlay = needsOverlay; } /** * Returns the text to display during the phase. * @return The non-null, non-empty display text. */ public String getDisplayText() { return ResourceBundle.getBundle(getClass().getName()).getString(name()); } /** * Returns whether or not an overlay is needed for the phase. * @return {@code true} if an overlay is needed, otherwise {@code false}. */ public boolean isOverlayNeeded() { return needsOverlay; }}TutorialGameScreenStill related to your enum, but it happens in this class - relying on the order of enum elements is very brittle. What if you (or someone else) added a new element, or accidentally swapped them around? This would break the progression of your tutorial! Instead, you could do the following:private static final List<TutorialPhase> TUTORIAL_PHASES = Arrays.asList( TutorialPhase.START, TutorialPhase.SHOOT, TutorialPhase.SHOOT_WALLS, TutorialPhase.BUILD_WALLS, // ...etc...);private final Iterator<TutorialPhase> phaseIterator = TUTORIAL_PHASES.iterator();...and use the Iterator to progress through the phases you have defined.Misc.Overuse of 'this'. In your code, the only place that needs it is in the constructor. 
The general consensus is to use the 'this' keyword only when necessary, in order to maintain clean code.

Make your private fields final whenever possible. See: Use final liberally
_unix.351591
I am doing ssh to a machine and executing certain commands. My last command gives me a variable which I need for a script present locally. However, how do I access that variable after I log out from the machine?

Edit: Please assume that I have already logged in to the machine.
How to persist variable from remote shell
bash;ssh;remote
It depends on how you want to do it. Getting the output of a single command within an interactive session isn't that easy to do automatically. You could of course just copy and paste the output from the terminal. But you could also save the output to a file on the remote machine and then do something like var=$(ssh remote cat file.with.var), or run the command that generates the final output similarly: var=$(ssh remote somecommand). Or, if you want to do it directly from an interactive session, you could rig up an expect script to do it.
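A minimal sketch of that file-based approach, assuming the value lives in a shell variable called myvar on the remote side (remote and file.with.var are placeholders, as in the snippet above):

# on the remote machine, before logging out, save the value you need
echo "$myvar" > file.with.var

# later, on the local machine, pull it back for the local script
var=$(ssh remote cat file.with.var)
echo "the local script now sees: $var"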
_webmaster.16449
Among others, I run a humor site, developed in PHP. I have a custom search built where people search for everything under the sun.

I have noticed that it helps me to have a "Latest searches" section where I list the latest searches performed. As you can understand, I have a lot of searches that are naughty. Not because I have anything offensive on the site, but I guess it's what some people want to find. Most of the time what they actually find is jokes around the subject or some video of a stand-up comedian talking about the search item.

The question I have is: should I filter out these terms from my page? I am afraid that the site might get blocked by office firewalls and maybe banned by an advertising network (like AdSense) just because some stupid robot found such a term in the site's text.

One thing that you might ask is: if the site is family safe, how come all these people search for naughty terms? Two reasons:

1. Google. Many times people looking for something sexy linked to a strange object (e.g. sex and lion or dog or oak tree) end up at my site. Of course they don't find what they are looking for, so they hit search, which is pre-filled with the term that led them to the site. After that the term goes into latest searches.
2. Whether the first happens or a user just types in something naughty, this term's popularity is being reinforced by the fact that many people see it in latest searches, click on it and search for it again. So if someone searches for "funny hat", the term disappears from latest searches within minutes. If he types something more spicy, other people click on it and it remains in the latest searches for hours.

Once again I want to clarify that the site has ZERO nudity and it doesn't even have sexy videos. YouTube is way sexier than my site.

Thanks to anyone spending the time to share their knowledge.

Update: Although John Conde did give it a shot, I think the dilemma at hand still stands. Should I remove controversial terms from the text of a non-controversial site? I know for a fact that Google might ban a site that it thinks is porn related. Is the risk worth it? What would you do?
Finally, is it bad to mention words around sex on your page?
seo;keywords;search
I would think that this wouldn't affect your Adsense account since the actual content of your website is not sexual in nature. Additionally, having a word mentioned once or twice on a page, and sometimes not at all since newer searches can bump them off of the page, will have little impact on that page's relevancy for adult-related phrases.Having said that, I would recommend filtering out those words as they can only do you harm as you have suggested (filtering software blocking your site, etc). Since your site does not contain the kind of content implied by those search terms there is no reason to give those search terms prominent placement on your website.
_unix.232514
This:

set -x
rm -f p; mkfifo p
exec 99<>p

succeeds in bash but fails (99 not found) in dash. How do I get it to work in dash?
Dash exec redirection error
io redirection;dash
The POSIX standard, which was derived from the Bourne Shell and its descendant ksh88, explicitly mentions that this is not guaranteed to work. The reason is the shell syntax:

<>file

opens stdin for reading and writing, and:

[n]<>file

opens file descriptor n for reading and writing. n in this case is a single digit.

You used the number 99, and this is a two-digit number that is outside the range specified by POSIX. So the parser did not see it as a number related to the redirection operator, but as a separate argument, and that argument was taken as the file to execute by exec.

If you like portable scripts, follow the POSIX standard, and if you like to write fully portable scripts, make things 100% Bourne Shell compatible. As there does not seem to be a reason for using 99 in your case, I recommend using 9.
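A minimal sketch of the single-digit variant, which should behave the same way in dash, bash and other POSIX shells (the echo/read lines are only there to show the descriptor working):

rm -f p; mkfifo p
exec 9<>p          # single-digit descriptor, within the range POSIX guarantees
echo hello >&9     # write into the fifo through fd 9
read line <&9      # read it back through the same descriptor
echo "$line"
exec 9>&-          # close the descriptor when done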
_softwareengineering.214714
Current field-of-use restrictions defined in Oracle Binary Code License Agreement for the Java SE Platform Products prohibit its use in embedded systems.General Purpose Desktop Computers and Servers means computers, including desktop and laptop computers, or servers, used for general computing functions under end user control (such as but not specifically limited to email, general purpose Internet browsing, and office suite productivity tools). The use of Software in systems and solutions that provide dedicated functionality (other than as mentioned above) or designed for use in embedded or function-specific software applications... are excluded from this definition and not licensed under this Agreement.Do these restrictions also apply to OpenJDK and other possible implementations? Is the only way to use Java in such an environment to acquire a separate license from Oracle?
Is there any way around the field-of-use restrictions in Java?
java;licensing
No, these restrictions do not apply to OpenJDK. They are only for the Oracle-branded binary installation packages of the JDK and JRE (which I think still include some code that is not in OpenJDK).If you use OpenJDK, you are only bound by the OpenJDK's license, which is GPL+linking exception.
_unix.354314
I have an OpenVPN server running on my VPS. The clients behind that server are now able to surf the internet anonymously. Now I want to forward a port to a client. I already added these rules, without any success:

sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -p tcp --dport 28006 -j DNAT --to x.x.27.6
iptables -t nat -A PREROUTING -p udp --dport 28006 -j DNAT --to x.x.27.6

After that was not working, I tried additional SNAT rules like here: https://unix.stackexchange.com/a/55845/223140

I don't know what I should do now.
Forward port to OpenVPN Client
iptables;openvpn;port forwarding;nat
null
_unix.331950
I posted this question in the Security community, and I was advised to better post it here in the Unix one. In addition I had 3 questions and 1 remark:Q1: How do you evaluate that they do not work?When I try to access my NAS using its public address, the gateway flows the request to my FW (which is in the DMZ) and the flows goes to $NAS_IP (as expected), but conntrack gives me an [unreplied] tag an the return flow from $NAS_IP is going a unexpected hard IP, on top the port 1194 is reported Closed to any check from the WAN.Q2: Is there anything in iptables -L?Yes for me it looks OK this is where I see that I am beyond my limits.Q3: What threats do you plan to mitigate against?I intendto allow a connection requests from the WAN through this TW to the openvpn server located on $NAS_IP (one single udp port 1194)to drop any other (IP and/or port) connection from the WANto allow connection (all protocols) FW from anywhere in the LANR1: You added a lot of tags but did not explain how they pertain to iptables.It could well be because I am not smart enough to achieve with simple rules what I want to achieve.I must add that these 3 rules are part of a bigger set, but that are the only ones proceeding to a NAT.Here are the rules:iptables -t nat -A PREROUTING -d $INET_IP -p udp --dport 1194 -j DNAT --to $NAS_IP:1194iptables -A FORWARD -m state --state NEW,ESTABLISHED,RELATED -d $NAS_IP -j ACCEPTiptables -t nat -A POSTROUTING -p udp --dst $NAS_IP --dport 1194 -j SNAT --to-source $INET_IPQ: Is this firewall/router the network default route on your NAS? Or does your NAS have a direct route to the rest of the Internet? roaima [2016-12-21 15:24]A:Yes the NAS has also a route to the WAN directly via the Gateway as the other LAN members.Q: How do you determine from the Internet that port 1194 is closed?A: I see that the UDP 1194 is closed because I try to connect the VPN (openvpn) and it stay stuck after having solved the HOSTNAME (DNS) and attacking the hard IP (I see this from the WAN end and also by checking conntrack).Q: You're using UDP not TCP here.A: Yes, I have openvpn on udp (not tcp) but I can change it if this simplifies the case.Q: What are your table policies (output of iptables -S | grep -w P)?A: iptables -S | grep -w P output is: -P INPUT ACCEPT -c 8 688 -P FORWARD ACCEPT -c 0 0 -P OUTPUT ACCEPT -c 3 344Q: Do you have any DROP/REJECT rules in the FORWARD table before the rule you've shown in your question?A: Before the NAT rules I have indeed DROP rules: #deny all what is not udp on 1194 port as well as all tcp port iptables -A INPUT -p udp ! --source 192.168.1.0/24 --dport 1:1193 -j DROP iptables -A INPUT -p udp ! --source 192.168.1.0/24 --dport 1195:65535 -j DROP iptables -A INPUT -p tcp ! --source 192.168.1.0/24 --dport 1:1193 -j DROP iptables -A INPUT -p tcp ! --source 192.168.1.0/24 --dport 1195:65535 -j DROPBut it is too radical at that stage (apt-get from the FW is now blocked...)Q+: If so, please move your FORWARD/ACCEPT to the beginning of the ruleset.A+: I did it following your recommendation now it works!!! Thanks so much!!!!!!Q: Your third rule should rewrite to an internal IP address not what I assume is the external one.A: The third rule: -j SNAT --to-source $INET_IP relates to the FW IP address.Q: Why can traffic go from your NAS to the Gateway without going through the firewall? This makes little or no security sense.A: Maybe not a good choice. 
I explain my rationale (I am not fixed on it): the NAS registers itself with a domain server after each change of the public gateway address, and there are no ports open to external IP requests on the NAS.

@roaima: Thank you very much for your very effective recommendations, especially given my messy misuse of the comment feature. I took a bit of time to reshuffle (more properly, I hope) the question text, because I was banned from editing as a bad user (which I can understand).
My iptables rules don't seem to work; I do not understand what's wrong
iptables;firewall;port forwarding;nat
null
_softwareengineering.111178
My project manager, when providing requirements for specific tasks, does not care about the implementation details. Although he has a programming background and has some knowledge of the MVC framework, he does not consider the perspective of the developer.

For example, I was given a task to create a simple form in ASP.NET MVC. This form should be pluggable - that is, the customer should choose which fields do or do not exist and which fields are required. If this were a simple form with validation, I would easily be able to implement it using ASP.NET validations. However, the problem is not simple and requires design and architecture first. The time that I have to implement the solution, which is not well understood, is very restricted. Not having sufficient time will not let me come up with a solution that not only meets the requirements but also benefits myself and any future developers.

What should I do in this situation? Do you feel that requirements like those in the example can be expected of a single developer?
What should I do when my project manager does not care about implementation details?
design;project management;scheduling;task
null
_unix.273333
Below is my input file:PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:16:46.523.177, 2 PS Sensor Value = -5.501000 , Min = -5.583000 , Max = -5.319000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:16:46.523.210, 3 PS Sensor Value = 15.996000 , Min = 15.814000 , Max = 16.078000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:16:46.523.231, 4 PS Sensor Value = -16.505000 , Min = -16.587000 , Max = -16.323000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:16:46.523.263, 5 PS Sensor Value = 6.509000 , Min = 6.327000 , Max = 6.591000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:16:46.523.302, 6 PS Sensor Value = 4.002000 , Min = 3.820000 , Max = 4.084000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:17:46.481.557, 1 PS Sensor Value = 6.199000 , Min = 6.017000 , Max = 6.281000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:17:46.518.691, 2 PS Sensor Value = -5.503000 , Min = -5.585000 , Max = -5.321000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:17:46.523.156, 3 PS Sensor Value = 15.996000 , Min = 15.814000 , Max = 16.078000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:17:46.523.195, 4 PS Sensor Value = -16.505000 , Min = -16.587000 , Max = -16.323000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:17:46.523.221, 5 PS Sensor Value = 6.509000 , Min = 6.327000 , Max = 6.591000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:17:46.523.240, 6 PS Sensor Value = 4.002000 , Min = 3.820000 , Max = 4.084000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:18:46.480.644, 1 PS Sensor Value = 6.199000 , Min = 6.017000 , Max = 6.281000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:18:46.522.615, 2 PS Sensor Value = -5.501000 , Min = -5.583000 , Max = -5.319000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:18:46.522.729, 3 PS Sensor Value = 15.996000 , Min = 15.814000 , Max = 16.078000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:18:46.522.765, 4 PS Sensor Value = -16.505000 , Min = -16.587000 , Max = -16.323000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:18:46.522.788, 5 PS Sensor Value = 6.509000 , Min = 6.327000 , Max = 6.591000PA43410-2,1,3,/vobs/atlas-idc/src/utils/logger/IDCLogger/IDC.cpp,48,19:18:46.522.810, 6 PS Sensor Value = 4.002000 , Min = 3.820000 , Max = 4.084000I need to compare the PS Sensor Value, Min and Max value. (Min greater than PS Sensor greater than Max) I'm expecting the output like below and here I should have only the normal lines.What is normal lines? : If Min value greater than PS Sensor value && PS Sensor value is greater than Max value then the line is normal lines and that should removed. Expected Output: PA43410-2 PS Sensor Value = 6.509000 , Min = 6.327000 , Max = 6.591000PA43410-2 PS Sensor Value = 6.199000 , Min = 6.017000 , Max = 6.281000Eg: If Min < PS sensor value < Max, // Dont care the normal lines. Throw this line away. Else Pull this line to consolidated new file. //Only focus on abnormal lines.
How to compare the strings using < (less-than symbol)
sed;awk
null
_codereview.168774
I was given task to build a client server application, using any technology I want.The task was to build a database(doesn't have to be a real database, it can be mocked). the client side should support more than one user/I didn't use a real database I just created some shares and created a mechanism to update them from time to time using a random value.I initialized the database with 2 users, I don't need to add users or delete them, just show I can support more than one.Since I have a background in C# and WPF, I created 3 projects:1. WPF/MVVM client side2. common library3. WebAPI - server side, which includes the database.I would like you to please comment about the correctness of my implementation as if it was a code review for your team.OOP design, usage of client server, please don't take into account I did it with WPF.Assume you have half a day to work on the project and then submit.I would appreciate any comments or questions.1. WPF project/ MVVM - I used mvvm light tool kitMainWindow.xaml<Window x:Class=Client.MainWindow xmlns=http://schemas.microsoft.com/winfx/2006/xaml/presentation xmlns:x=http://schemas.microsoft.com/winfx/2006/xaml Title=MainWindow Height=350 Width=525> <Grid> <Grid.RowDefinitions> <RowDefinition Height=auto /> <RowDefinition Height=auto /> <RowDefinition Height=auto /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width=Auto></ColumnDefinition> <ColumnDefinition Width=Auto></ColumnDefinition> </Grid.ColumnDefinitions> <TextBlock Grid.Row=0 Grid.Column=0 Text=enter use name: MinWidth=75/> <TextBox Grid.Row=0 Grid.Column=1 MinWidth=75 Text={Binding UserName,Mode=TwoWay,UpdateSourceTrigger=PropertyChanged}/> <Button Grid.Row=1 Grid.Column=0 Content=Get All Shares Command={Binding GetAllSharesCommand,Mode=TwoWay}/> <Button Grid.Row=1 Grid.Column=1 Content=Get My Shares Command={Binding GetSharePerUserCommand,Mode=TwoWay}/> <DataGrid Grid.Row=2 ItemsSource={Binding Shares}> </DataGrid> </Grid></Window>HttpHandler.csnamespace Client{ public class HttpHandler { private HttpClient client; public HttpHandler() { client = new HttpClient(); client.BaseAddress = new Uri(http://localhost:18702/); client.DefaultRequestHeaders.Accept.Clear(); client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(application/json)); } public async Task<IEnumerable<Share>> GetallSharesAsync(string path) { IEnumerable<Share> shares = null; HttpResponseMessage response = await client.GetAsync(path); if (response.IsSuccessStatusCode) { shares = await response.Content.ReadAsAsync<IEnumerable<Share>>(); } return shares; } public async Task<IEnumerable<Share>> GetSharePerUserAsync(string path) { IEnumerable<Share> shares = null; HttpResponseMessage response = await client.GetAsync(path); if (response.IsSuccessStatusCode) { shares = await response.Content.ReadAsAsync<IEnumerable<Share>>(); } return shares; } public async Task<IDictionary<string, int>> GetAllUsersAsync(string path) { IDictionary<string, int> users2Id = null; HttpResponseMessage response = await client.GetAsync(path); if (response.IsSuccessStatusCode) { users2Id = await response.Content.ReadAsAsync<IDictionary<string, int>>(); } return users2Id; } }}ClientViewModel.csnamespace Client{ public class ClientViewModel : ViewModelBase { private ObservableCollection<Share> _shares; public ObservableCollection<Share> Shares { get { return _shares; } set { _shares = value; } } private string _userName; public string UserName { get { return _userName; } set { _userName = value; 
RaisePropertyChanged(UserName); GetAllSharesCommand.RaiseCanExecuteChanged(); } } private RelayCommand _getAllSharesCommand; public RelayCommand GetAllSharesCommand { get { return _getAllSharesCommand; } set { _getAllSharesCommand = value; } } private RelayCommand _GetSharesPerUserCommand; public RelayCommand GetSharePerUserCommand { get { return _GetSharesPerUserCommand; } set { _GetSharesPerUserCommand = value; } } private HttpHandler handler; private Dictionary<string, int> _userName2Id; public ClientViewModel() { GetAllSharesCommand = new RelayCommand(ExecuteGetAllShares, CanExecuteGetAllShares); GetSharePerUserCommand = new RelayCommand(ExecuteGetSharePerUserCommand, CanExecuteGetSharePerUserCommand); handler = new HttpHandler(); Shares = new ObservableCollection<Share>(); GetUsers(); } private async void GetUsers() { IDictionary<string, int> userNames2ID = await handler.GetAllUsersAsync(api/users); _userName2Id = new Dictionary<string, int>(userNames2ID); } private bool CanExecuteGetSharePerUserCommand() { return !String.IsNullOrEmpty(UserName); } private async void ExecuteGetSharePerUserCommand() { string temp = api/shares + / + _userName2Id[UserName]; try { IEnumerable<Share> tempShares = await handler.GetSharePerUserAsync(temp); Shares.Clear(); foreach (var item in tempShares) { Shares.Add(item); } } catch (Exception) { throw; } } public bool CanExecuteGetAllShares() { return !String.IsNullOrEmpty(UserName); } public async void ExecuteGetAllShares() { try { IEnumerable<Share> tempShares = await handler.GetallSharesAsync(api/shares); Shares.Clear(); foreach (var item in tempShares) { Shares.Add(item); } } catch (Exception) { throw; } } }}2.Common - project Share.csnamespace Common{ public class Share { public int Id { get; set; } public string Name { get; set; } public double Price { get; set; } }}3.Server - WebApi project(yea I know what a great name)WebApiConfig.csnamespace SharesApp{ public static class WebApiConfig { public static void Register(HttpConfiguration config) { config.MapHttpAttributeRoutes(); config.Routes.MapHttpRoute( name: DefaultApi, routeTemplate: api/{controller}/{id}, defaults: new { id = RouteParameter.Optional } ); } }}Controllers folderSharesController.csnamespace SharesApp.Controllers{ public class SharesController : ApiController { //this is a mock for a real database, i'm not sure where do I need to connect to the real DB private static IDataBase _dataBase; public SharesController() { if (_dataBase == null) { _dataBase = new SharesDataBase(); } } public IEnumerable<Share> GetAllShares() { try { return _dataBase.GetAllShares(); } catch (Exception) { throw; } } public IHttpActionResult GetUpdatedShares(int id) { IEnumerable<Share> share = null; try { share = _dataBase.GetShareById(id); } catch (Exception) { throw; } if (share == null) { return NotFound(); } return Ok(share); } }UsersController .csnamespace SharesApp.Controllers{ public class UsersController : ApiController { private Dictionary<string, int> _userName2Id; public UsersController() { _userName2Id = new Dictionary<string, int>(); _userName2Id.Add(user10, 1); _userName2Id.Add(user20, 2); } public IDictionary<string, int> GetAllUserNames() { return _userName2Id; } public string GetUserNameById(int id) { if (!_userName2Id.ContainsValue(id)) { return null; } return _userName2Id.FirstOrDefault(x => x.Value == id).Key; } }}Models folderIDataBase.csnamespace SharesApp.Models{ public interface IDataBase { IEnumerable<Share> GetAllShares(); IEnumerable<Share> GetShareById(int id); 
}}SharesDataBase.csnamespace SharesApp.Models{ public class SharesDataBase : IDataBase { //user name to list of shares names const string INTC = INTC; const string MSFT = MSFT; const string TEVA = TEVA; const string YAHOO = YAHOO; const string P500 = P500; private List<Share> _shares; private Random _random; private int _maximum = 100; private int _minimum = 1; public Dictionary<int, List<string>> User2Shares { get; set; } private Object thisLock = new Object(); public SharesDataBase() { _random = new Random(); User2Shares = new Dictionary<int, List<string>>(); //init the shares list _shares = new List<Share> { new Share { Id = 1, Name = INTC, Price = 1 }, new Share { Id = 2, Name = MSFT, Price = 3.75 }, new Share { Id = 3, Name = TEVA, Price = 16.99}, new Share { Id = 4, Name = YAHOO, Price = 11.0}, new Share { Id = 5, Name = P500, Price = 5.55}, }; //init the users User2Shares.Add(1, new List<string>() { INTC, MSFT, TEVA }); User2Shares.Add(2, new List<string>() { YAHOO, P500, TEVA }); Task.Run(()=>UpdateShares()); } private void UpdateShares() { while (true) { System.Threading.Thread.Sleep(1000);// wait for 1 sec lock (thisLock) { foreach (var item in _shares) { int tempRandom = _random.Next(1, 1000); if (tempRandom % 100 == 0) { item.Price = _random.NextDouble() * (_maximum - _minimum) + _minimum; } } } } } public IEnumerable<Share> GetAllShares() { return _shares; } public IEnumerable<Share> GetShareById(int id) { if (!User2Shares.ContainsKey(id)) { return null; } var listOfShares = User2Shares[id]; if (listOfShares.Count == 0) { //this userName doesn't have any shares return null; } List<Share> sharesList = new List<Share>(); foreach (var name in listOfShares) { var res = _shares.FirstOrDefault(x => x.Name == name); if (res != null) { sharesList.Add(res); } //share is missing from the server } return sharesList; } }
Stocks application using Web Api
c#;asp.net web api
null
_softwareengineering.287822
I am trying to build something like Manic Time - which is an application that tracks what the user is currently working on. It worked flawlessly on Windows, but doesn't support Linux.

It has mad features, but the core is basically just tracking what the current 'active' window is, its process, window title, etc.

I've been thinking about this problem for some time, and here's the Pythonic pseudo-code that I've come up with, but I'm not sure if this is the way to go.

# The script will probably run as a daemon
while True:
    # Get process, window title, etc.
    wnd_details = get_active_window_details()
    # Save the current timestamp and the details to a database (SQLite)
    insert_in_db(current_timestamp, wnd_details)
    # Wait for a second
    sleep(1000)

Will executing a write query per second affect performance?

An optimization might be to remember what the previous window details were and write to the database only when the window changes (the user has switched to another application), but that will add unnecessary complexity to the code.

Yet another thing to look into might be some sort of hooks or callbacks, so my Python code gets called whenever a window change occurs (like a new window is created or the active window is changed). I guess Windows had something similar to this, but I have no idea about Linux.
How to design a time tracking or activity monitoring application?
desktop application;application design;ubuntu
I have found an open source application that seems to be doing EXACTLY what I wanted. https://github.com/gurgeh/selfspy/
_unix.119507
These rules dont allow update the system (Debian wheezy on raspberry pi).Dont allow ping:ping: sendmsg: Operation not permitted. I am trying to install a home web server and this rules dont work properly. I want that the server allow my CMS (joomla) to update and allow the update of the system itself.What I need to change in this file to allow update joomla and debian? *mangle :PREROUTING ACCEPT [12:624] :INPUT ACCEPT [12:624] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [28:9440] :POSTROUTING ACCEPT [12:2128] COMMIT *nat :PREROUTING ACCEPT [0:0] :INPUT ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] COMMIT *filter :INPUT DROP [0:0] :FORWARD DROP [0:0] :OUTPUT DROP [0:0] :spooflist - [0:0] -A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 90 --hitcount 4 --name DEFAULT --rsource -j DROP -A INPUT -i eth0 -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name DEFAULT --rsource -A INPUT -j spooflist -A INPUT -i lo -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP -A INPUT -i eth0 -f -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,PSH,URG -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -m limit --limit 5/min --limit-burst 7 -j LOG --log-prefix NULL Packets -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags SYN,RST SYN,RST -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN FIN,SYN -m limit --limit 5/min --limit-burst 7 -j LOG --log-prefix XMAS Packets -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN FIN,SYN -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,ACK FIN -m limit --limit 5/min --limit-burst 7 -j LOG --log-prefix Fin Packets Scan -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,ACK FIN -j DROP -A INPUT -i eth0 -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,ACK,URG -j DROP -A INPUT -i eth0 -m pkttype --pkt-type broadcast -j LOG --log-prefix Broadcast -A INPUT -i eth0 -m pkttype --pkt-type broadcast -j DROP -A INPUT -i eth0 -m pkttype --pkt-type multicast -j LOG --log-prefix Multicast -A INPUT -i eth0 -m pkttype --pkt-type multicast -j DROP -A INPUT -i eth0 -m state --state INVALID -j LOG --log-prefix Invalid -A INPUT -i eth0 -m state --state INVALID -j DROP -A INPUT -d 192.168.0.17/32 -i eth0 -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT # ACEPTAR SALIDAS POR EL 80 Y POR EL 443 #-A INPUT -i eth0 -p tcp -m tcp --sport 80 -m state --state NEW,ESTABLISHED -j ACCEPT #-A INPUT -i eth0 -p tcp -m tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -i eth0 -p icmp -m icmp --icmp-type 8 -m state --state NEW,RELATED,ESTABLISHED -m limit --limit 30/sec -j ACCEPT -A INPUT -i eth0 -p udp -m udp --sport 53 -m state --state ESTABLISHED -j ACCEPT -A INPUT -i eth0 -p udp -m udp --sport 123 -m state --state ESTABLISHED -j ACCEPT -A INPUT -i eth0 -p tcp -m tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT -A INPUT -m limit --limit 5/min --limit-burst 7 -j LOG --log-prefix DEFAULT DROP -A INPUT -j DROP -A FORWARD -j spooflist -A OUTPUT -j spooflist -A OUTPUT -o lo -j ACCEPT -A OUTPUT -s 192.168.0.17/32 -o eth0 -p tcp -m tcp --sport 22 -j ACCEPT #-A OUTPUT -o eth0 
-p tcp -m tcp --dport 53 -m state --state NEW -j ACCEPT -A OUTPUT -o eth0 -p tcp -m tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT -A OUTPUT -o eth0 -p tcp -m tcp --dport 20 -m state --state ESTABLISHED -j ACCEPT -A OUTPUT -o eth0 -p tcp -m tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT # ACEPTAR SALIDAS POR EL 80 Y POR EL 443 #-A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT #-A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT -A OUTPUT -o eth0 -p tcp -m tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT -A OUTPUT -o eth0 -p icmp -m icmp --icmp-type 0 -m state --state RELATED,ESTABLISHED -j ACCEPT -A OUTPUT -o eth0 -p udp -m udp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT -A OUTPUT -o eth0 -p udp -m udp --dport 123 -m state --state NEW,ESTABLISHED -j ACCEPT -A OUTPUT -o eth0 -p tcp -m tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT COMMITThe iptables -L -n output:Chain INPUT (policy DROP)target prot opt source destinationDROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: UPDATE seconds: 90 hit_count: 4 name: DEFAULT side: source tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 state NEW recent: SET name: DEFAULT side: sourcespooflist all -- 0.0.0.0/0 0.0.0.0/0ACCEPT all -- 0.0.0.0/0 0.0.0.0/0DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags:! 0x17/0x02 state NEWDROP all -f 0.0.0.0/0 0.0.0.0/0DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x3F/0x29DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x3F/0x3FLOG tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x3F/0x00 limit: avg 5/min burst 7 LOG flags 0 level 4 prefix NULL Packets DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x3F/0x00DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x06/0x06LOG tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x03/0x03 limit: avg 5/min burst 7 LOG flags 0 level 4 prefix XMAS Packets DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x03/0x03LOG tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x11/0x01 limit: avg 5/min burst 7 LOG flags 0 level 4 prefix Fin Packets Scan DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x11/0x01DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcpflags: 0x3F/0x37LOG all -- 0.0.0.0/0 0.0.0.0/0 PKTTYPE = broadcast LOG flags 0 level 4 prefix Broadcast DROP all -- 0.0.0.0/0 0.0.0.0/0 PKTTYPE = broadcastLOG all -- 0.0.0.0/0 0.0.0.0/0 PKTTYPE = multicast LOG flags 0 level 4 prefix Multicast DROP all -- 0.0.0.0/0 0.0.0.0/0 PKTTYPE = multicastLOG all -- 0.0.0.0/0 0.0.0.0/0 state INVALID LOG flags 0 level 4 prefix Invalid DROP all -- 0.0.0.0/0 0.0.0.0/0 state INVALIDACCEPT tcp -- 0.0.0.0/0 192.168.0.17 tcp dpt:22ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 state NEW,ESTABLISHEDACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 state NEW,ESTABLISHEDACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmptype 8 state NEW,RELATED,ESTABLISHED limit: avg 30/sec burst 5ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:53 state ESTABLISHEDACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp spt:123 state ESTABLISHEDACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:25 state ESTABLISHEDLOG all -- 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 7 LOG flags 0 level 4 prefix DEFAULT DROP DROP all -- 0.0.0.0/0 0.0.0.0/0Chain FORWARD (policy DROP)target prot opt source destinationspooflist all -- 0.0.0.0/0 0.0.0.0/0Chain OUTPUT (policy DROP)target prot opt source destinationspooflist all -- 0.0.0.0/0 0.0.0.0/0ACCEPT all -- 0.0.0.0/0 0.0.0.0/0ACCEPT tcp -- 192.168.0.17 0.0.0.0/0 tcp spt:22ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:21 state NEW,ESTABLISHEDACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:20 state ESTABLISHEDACCEPT tcp -- 0.0.0.0/0 
0.0.0.0/0 tcp spt:80 state ESTABLISHEDACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp spt:443 state ESTABLISHEDACCEPT icmp -- 0.0.0.0/0 0.0.0.0/0 icmptype 0 state RELATED,ESTABLISHEDACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53 state NEW,ESTABLISHEDACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:123 state NEW,ESTABLISHEDACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 state NEW,ESTABLISHEDChain spooflist (3 references)target prot opt source destination
iptables rules don't allow the system to update
iptables;raspbian
You don't have any rules that allow outgoing pings (ICMP echo-request - type 8). You only have a single rule that allows ICMP echo reply.

Additionally, most of your --state matches are useless. With --state NEW,ESTABLISHED you might as well not use --state at all, as there are only two possible states, NEW and ESTABLISHED. Yes, there is RELATED, but a RELATED packet is also going to be either new or established.

Also, you can simplify this ruleset immensely by adding a single -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT at the top instead of having to allow the return traffic for every single port individually.
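As a rough sketch of that simplification, extended to the OUTPUT chain as well since your OUTPUT policy is also DROP (the interface name and the ports are only placeholders picked from your existing rules, not a complete policy):

    *filter
    :INPUT DROP [0:0]
    :FORWARD DROP [0:0]
    :OUTPUT DROP [0:0]
    # accept all return traffic once, instead of one ESTABLISHED rule per port
    -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    # allow outgoing pings (echo-request, type 8); replies come back via the INPUT rule above
    -A OUTPUT -o eth0 -p icmp -m icmp --icmp-type 8 -j ACCEPT
    # after that, only NEW connections need explicit rules, e.g. DNS plus HTTP/HTTPS for apt-get
    -A OUTPUT -o eth0 -p udp --dport 53 -m state --state NEW -j ACCEPT
    -A OUTPUT -o eth0 -p tcp --dport 80 -m state --state NEW -j ACCEPT
    -A OUTPUT -o eth0 -p tcp --dport 443 -m state --state NEW -j ACCEPT
    COMMIT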
_codereview.77843
For a quick summary: I've created this internal web application, and I've hit a point where I can really see the mess I've made. I need some help separating the logic, the view, and the data.More detail: Over the past few months, I've been doing all I can to learn more about JavaScript built web sites/applications. I've created the below code, and it seems as though any additions are just ruining the entire thing. This is the fourth time I've started from scratch on this, and I can't get a product I really like.Now, it works as it should, I just don't like the way it's built. I tried using Angular.js, but that was over-kill for this single page app (plus working with the routing was nightmarish). Now I've just created this mess of a main.js file, and it needs refactoring.It'd be great if we could avoid suggesting tools requiring node.js.index.html<!DOCTYPE html><html> <head> <!-- Basic Page Needs --> <meta charset=utf-8> <title>Pies</title> <meta name=description content=> <meta name=author content=> <!-- Mobile Specific Metas --> <meta name=viewport content=width=device-width, initial-scale=1> <!-- FONT --> <link href=//fonts.googleapis.com/css?family=Raleway:400,300,600 rel=stylesheet type=text/css> <!-- SCRIPTS --> <script src=//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.js></script> <script src=scripts/main.js></script> <!-- CSS --> <link rel=stylesheet href=css/normalize.css> <link rel=stylesheet href=css/skeleton.css> <link rel=stylesheet href=css/main.css> </head> <body> <div class=container> <header class=row> <div class=twelve columns> <h1>Pies</h1> </div> </header> <div class=row> <div class=six columns id=newOrderContainer> <h2>New Order?</h2> <span class=response success>Successfully created!</span> <span class=response fail>Something went wrong, try again later.</span> <form id=newOrderForm> <label for=customerName>Customer name</label> <input class=u-full-width type=text id=customerName> <label for=dueDate>Due date (optional)</label> <input type=date id=dueDate/> <div class=flavorSelector u-full-width></div> </form> </div> <div class=six columns> <h2>Payments</h2> </div> </div> <div class=row> <div class=twelve columns id=ordersContainer> <h2>Orders</h2> <span class=response success>Successfully paid!</span> <span class=response fail>Something went wrong, try again later.</span> <div id=ordersTableContainer> <table class=u-full-width> <thead> <tr> <th>Name</th> <th>Priority</th> <th>Flavor</th> <th>Payment</th> </tr> </thead> <tbody></tbody> </table> </div> </div> </div> <div class=row> <div class=u-full-width> <h2>Settings</h2> </div> </div> <div class=row> <div class=four columns id=newFlavorContainer> <h3>New Flavor</h3> <span class=response success>Successfully added!</span> <span class=response fail>Something went wrong, try again later.</span> <form id=newFlavorForm> <label for=newFlavor>Flavor</label> <input class=u-full-width type=text placeholder=Apple, pecan, etc. id=newFlavor> <div class=button style=display: block;> Add New Flavor </div> </form> </div> <div class=eight columns id=flavorEditContainer> <h3>Existing Flavors</h3> <div id=flavorsTableContainer> <table class=u-full-width> <thead> <tr> <th>Flavor</th> <th>Delete</th> </tr> </thead> <tbody></tbody> </table> </div> </div> </div> </div> </body></html>There's our main page. 
I've used the Skeleton CSS boilerplate.main.js$(document).ready(function() { $.get(scripts/appdata.json, function(data) { $(.flavorSelector).each(function() { for (flavor in data.flavors) { $(this).append('<div class=button style=width: 60%>' + flavor + '</div>'); } }); for (order in data.orders) { var details = data.orders[order]; if (details.paid === false) { $(#ordersTableContainer tbody).append(newOrderRow(order, details)); } } for (flavor in data.flavors) { $(#flavorsTableContainer tbody).append(newFlavorRow(flavor)); } $(#newOrderForm div.button).click(function() { var pdata = { action : newOrder, name : $(#customerName).val(), due : $(#dueDate).val(), flavor : $(this).text() }; $.post(scripts/server.php, pdata, function(data) { if (data != true) { $(#newOrderContainer > span.fail).show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } else { $(#customerName).val(); $(#dueDate).val(); fetchOrders(); $(#newOrderContainer > span.success).show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } }); }); $(.payment-form div.button).click(function() { var pdata = { action : payOrder, hash : $(this).parents(tr).data(hash), paid : $(this).siblings(input).val() }; $.post(scripts/server.php, pdata, function(data) { if (data != true) { $(#ordersContainer > span.fail).show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } else { $(tr[data-hash= + pdata[hash] + ]).fadeOut(600, function() { $(this).remove(); }).delay(1000); $(#ordersContainer > span.success).show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } }); }); }, 'json'); $(#newFlavorForm div.button).click(function() { var pdata = { action : newFlavor, flavor : $(#newFlavorForm input[type=text]).val() }; $.post(scripts/server.php, pdata, function(data) { if (data != true) { $(#newFlavorContainer > span.fail).show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } else { $(#newFlavorForm input[type=text]).val(); $(#newFlavorContainer > span.success).show().delay(5000).fadeOut(600, function() { $(this).hide(); }); } }); });});function newOrderRow(hash, data) { var name = data.name, flavor = data.flavor, paymentForm = <form class='payment-form'><input type='text'/><div class='button paid'>Paid</div></form>; var priority = <div class='priority' style='background: + getPriority(Math.floor(Date.now() / 1000), data.made, data.due) + '></div>; return <tr data-hash=' + hash + '><td> + name + </td><td> + priority + </td><td> + flavor + </td><td> + paymentForm + </td></tr>;}function newFlavorRow(flavor) { return <tr><td><input type='text' value=' + flavor + '/></td><td><span>Delete</span></td></tr>;}function getPriority(now, made, due) { var colors = [#A30E0E, #FF9401, #6FBF0D]; var marks = [172800, 64800, 0]; var elapsed = now - made; if (due == ) { for (var i = 0; i < marks.length; i++) { if (elapsed > marks[i]) { return colors[i]; } } } var until = due - now; var i = 0; colors.reverse(); for (var i = 0; i < marks.length; i++) { console.log(until, >, marks[i], i); if (until > marks[i]) { return colors[i]; } }}I know, it's bad. Everything is mixed together, and I don't know what to do! Suggestions for architectures would be great, but if you could help me fit this into some framework, then would be great too.Right now, I've got the data stored in a JSON file. 
I'd prefer not to have it in an RDBMS like MySQL, but I've never worked with anything else so I'm open to suggestions!

The data I have is looking like this:

    {
        "flavors": {
            "Berry": "",
            "Apple": "",
            "Pecan": ""
        },
        "orders": {
            "43d133ecaed389cf527c93117fc29969": {
                "name": "Customer1",
                "flavor": "Berry",
                "made": 1421471493,
                "due": 1421884800,
                "paid": false
            },
            "4bb7e6668a2a63d32a7487267128d406": {
                "name": "Customer2",
                "flavor": "Pecan",
                "made": 1421471572,
                "due": 1421884800,
                "paid": false
            }
        }
    }

I had some data being paired with flavors, but I got rid of that feature.

Is there any chance I can turn this project into something scalable, fast, and modern?
Order a delicious pie here
javascript;html;mvc
First of all I'd like to congratulate you on the HTML part, that one looks clean and takes almost all best practices into consideration. I say 'almost' since ... yeah well ... these days people argue you should put script tags before the </body> tag. This to avoid http stalling the reflow of the browser. Oh well, for simple applications leave it like that. If you want to scale up and add more libraries in the future, you might consider moving the scripts to the bottom.Next one then, the JavaScript part. If you say it works, well done!You say it looks ugly ... ? Do you also know why? Let me sum that up for you just to make sure we're on the same page:logic mixed with strings is a nono if you want to write beautiful code => configurable stringsthe templating is kinda hard-coded into the logic => templating systemthe use of globally defined functions => closurethe amount of iterations (not sure if I can reduce them, we'll see along the way) => best practices in DOM appendingno function describes what the main part is (this becomes essential once you scale up) => modular approachYou don't want to use some additional libraries for this piece of code? I totally agree! You sound like a good decision maker, now you need a little push in the right direction.So ... I've been spreading my logic here and there over stackoverflow/codereview and I believe it will help you too. Please read them as I'm not going to copy/paste the whole idea again. I will provide refactored code and the extra information I'm sure you can take that from some of those answers I've linked.I use Re-Sharper for JavaScript and I like it green (read: jshint valid) so let me tell you what goes wrong even though your code works:ln 24 & 45: Declaration hides parameter data from outer scopeln 32: Use of an implicitly declared global variable 'fetchOrders' (assuming this is a false positive)ln 95 102 104: Duplicate declarationln 102: Value assigned is not used in any execution pathln 110: Not all code paths return a value So this is how I do it. Take the time to compare the approach below with my previously posted answers. It's actually the same stuff over and over again. Once you get the hang of it, you'll notice the benefit of object literals and how to extend/configure YOUR OWN library.window.DeliciousPie = (function ($, project) { // 1. 
CONFIGURATION var cfg = { cache: { container: '[data-component=orderpie]', flavors: '.flavorSelector', flavorsTable: '#flavorsTableContainer tbody', flavorForm: '#newFlavorForm', flavorFormInputs: 'input[type=text]', flavorSuccess: '#newFlavorContainer > span.success', ordersTable: '#ordersTableContainer tbody', orderForm: '#newOrderForm', orderSuccess: '#newOrderContainer > span.success', orderFail: '#newOrderContainer > span.fail', dueDate: '#dueDate', customerName: '#customerName', paymentForm: '.payment-form', paymentSuccess: '#ordersContainer > span.success', paymentFail: '#ordersContainer > span.fail', formTarget: 'div.button' }, data: { hash: 'hash' }, events: { click: 'click' }, tpl: { flavor: '<div class=button style=width: 60%>{{flavor}}</div>', paymentForm: '<form class=payment-form><input type=text/><div class=button paid>Paid</div></form>', priority: '<div class=priority style=background: {{priority}}></div>', orderRow: '<tr data-hash={{hash}}><td>{{name}}</td><td>{{priority}}</td><td>{{flavor}}</td><td>{{paymentForm}}</td></tr>', flavorRow: '<tr><td><input type=text value={{flavor}}/></td><td><span>Delete</span></td></tr>' }, ajaxOptions: { get: { url: 'scripts/appdata.json', dataType: 'json' }, post: { flavor: { url: 'scripts/server.php', data: { action: 'newFlavor' } }, order: { url: 'scripts/server.php', data: { action: 'newOrder' } }, pay: { url: 'scripts/server.php', data: { action: 'payOrder' } } } }, priorityOptions: { colors: ['#A30E0E', '#FF9401', '#6FBF0D'], marks: [172800, 64800, 0] } }; // 2. ADDITIONAL FUNCTIONS /** * @description Render html template with json data * @see handlebars or mustache if you need more advanced functionality * @param {Object} obj * @param {String} template : html template with {{keys}} matching the object * @return {String} template : the template string replaced by key:value pairs from the object */ function renderTemplate(obj, template) { var tempKey, reg, key; for (key in obj) { if (obj.hasOwnProperty(key)) { tempKey = String({{ + key + }}); reg = new RegExp(tempKey, g); template = template.replace(reg, obj[key]); } } return template; } // 3. 
COMPONENT OBJECT project.OrderPie = { version: 0.1, init: function () { this.cacheItems(); if (this.container.length) { this.getData(); this.bindEvents(); } }, cacheItems: function () { var cache = cfg.cache; this.container = $(cache.container); this.flavors = $(cache.flavors); this.flavorsTable = $(cache.flavorsTable); this.flavorForm = $(cache.flavorForm); this.flavorFormInputs = this.flavorForm.find(cache.flavorFormInputs); this.flavorSuccess = $(cache.flavorSuccess); this.flavorFail = $(cache.flavorFail); this.ordersTable = $(cache.ordersTable); this.orderForm = $(cache.orderForm); this.dueDate = $(cache.dueDate); this.customerName = $(cache.customerName); this.orderSuccess = $(cache.orderSuccess); this.orderFail = $(cache.orderFail); this.paymentForm = $(cache.paymentForm); this.paymentSuccess = $(cache.paymentSuccess); this.paymentFail = $(cache.paymentFail); }, bindEvents: function () { var self = this, cache = cfg.cache, data = cfg.data, events = cfg.events, ajaxOptions = cfg.ajaxOptions; this.flavorForm.on(events.click, cache.formTarget, function () { var options = $.extend({}, ajaxOptions.post.flavor, { data: { flavor: self.flavorFormInputs.val() } }); $.ajax(options).done(function (flavorData) { if (flavorData) { self.flavorFormInputs.val(''); self.flavorSuccess.show().delay(5000).hide(600); } }).fail(function () { self.flavorFail.show().delay(5000).hide(600); }); }); this.orderForm.on(events.click, cache.formTarget, function () { var options = $.extend({}, ajaxOptions.post.order, { data: { name: self.customerName.val(), due: self.dueDate.val(), flavor: $(this).text() } }); $.ajax(options).done(function (orderData) { if (orderData) { self.customerName.val(''); self.dueDate.val(''); self.fetchOrders(); self.orderSuccess.show().delay(5000).hide(600); } }).fail(function () { self.orderFail.show().delay(5000).hide(600); }); }); this.paymentForm.on(events.click, cache.formTarget, function () { var options = $.extend({}, ajaxOptions.post.order, { data: { hash: $(this).closest('tr').data(data.hash), paid: $(this).siblings('input').val() } }); $.ajax(options).done(function (paymentData) { if (paymentData) { $('[data-hash=' + options.data.hash + ']').hide(600).delay(1000).remove(); self.paymentSuccess.show().delay(5000).hide(600); } }).fail(function () { self.paymentFail.show().delay(5000).hide(600); }); }); }, getData: function () { var self = this; $.ajax(cfg.ajaxOptions.get).done(function (data) { if (data.hasOwnProperty('flavors')) { self.setFlavors(data.flavors); } if (data.hasOwnProperty('orders')) { self.setOrders(data.orders); } }); }, setFlavors: function (dataFlavors) { var tpl = cfg.tpl.flavor, rows = [], arr = []; this.flavors.each(function () { for (var flavor in dataFlavors) { arr.push(renderTemplate(flavor, tpl)); rows.push(this.addFlavorRow(flavor)); } $(this).append(arr); this.flavorsTable.append(rows); }); }, setOrders: function (dataOrders) { var details, rows = []; for (var order in dataOrders) { if (dataOrders.hasOwnProperty(order)) { details = dataOrders[order]; if (!details.paid) { rows.push(this.addOrderRow(order, details)); } } } this.ordersTable.append(rows); }, addOrderRow: function (hash, data) { var tplVars = $.extend({}, data, { paymentform: cfg.tpl.paymentForm, priority: getPriority(Math.floor(+(new Date) / 1000), data.made, data.due), hash: hash }); return renderTemplate(tplVars, cfg.tpl.orderRow); }, addFlavorRow: function (flavor) { return renderTemplate({flavor: flavor}, cfg.tpl.flavorRow); }, getPriority: function (now, made, due) { var priorityOptions 
= cfg.priorityOptions, colors = priorityOptions.colors, marks = cfg.priorityOptions.marks, elapsed = now - made, until = due - now; if (!due) { for (var i = 0; i < marks.length; i++) { if (elapsed <= marks[i]) { continue; } return colors[i]; } } colors.reverse(); for (var j = 0; j < marks.length; j++) { console.log(until, >, marks[j], j); if (until <= marks[j]) { continue; } return colors[j]; } }, fetchOrders: function () { console.warn('not implemented function'); } }; // 4. GLOBALIZE NAMESPACE return project;}(window.jQuery, window.DeliciousPie || {}));Once this file is loaded, you can call DeliciousPie.OrderPie.init() on DOM ready and you're good to order some pie (or whatever it is :p)What you gain with this approach:configurable objectsextendable objects (multiple HTML classes, activated by JavaScript, with different config if needed)separation of concernsscalable/modular approachevent controlreflow optimizationmemory optimizationbetter readabilityan easy templating sytem for free (no additional libraries required ^^)RESPEC(t) from your colleaguesI can invent some more, but all in all, quality code1) OverheadWhen a lot of components/modules are loaded from one file and let's say you have a lot of pages ... the overhead you create for undetected modules/components:cfg variable => so try to keep strings in it and only extend cfg in a methodcacheItems() => depends on the speed of your selectors and sizzleinit() method checking the length of the containerSo scalability wise this performs very well. Remember JavaScript in itself is really fast. It's the DOM that slows down quite a lot. For that reason it could be interesting to split-up the cacheItems.2) TemplatingThe templating in my example is also not ideal. It's very basic but also puts HTML into JavaScript and then you can argue that separation of concerns doesn't apply to this approach. Hence the whole script idea seen in Handlebars/Mustache which covers that.However, I would only take that approach if logic in templating is required {{if}}{{else}}{{/if}}. For string replacement only, keep it simple. Extra logics for the templates while looping can be done inside a specific function as well (ex: addOrderRow, addFlavorRow). Besides, you can always leave a comment <!-- js rendering --> inside your HTML as well ... As suggested in the comments: you can create a hidden class or with a data- attribute and pick those chunks up.Some additional reads:JavaScript Module PatternIf I find some time I'll try to test this as well. Probably you'll need to add the flavor data back in there for the templating sytem. And a data-component=orderpie on the main wrapper to kick it in. I hope you are familiar with debugging tools. If not, at least I hope you'll learn a thing or 4. GL!
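To make the wiring at the end explicit: the boot code stays tiny once the markup gets a data-component=orderpie attribute on the main wrapper. That attribute is not in the original index.html, so treat it as an assumption:

    // in main.js, after the component definition above
    jQuery(function () {
        // init() caches the DOM nodes and bails out early unless an element matching
        // [data-component=orderpie] exists, so the same bundle can be loaded on any page
        window.DeliciousPie.OrderPie.init();
    });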
_webmaster.101296
The issue

Hey guys, I'm having some issues concerning my website (http://colegiojeffersontoday.com). Although the custom domain works upon first entering the website, for some reason the domain reverts to the default Heroku app domain, http://desolate-ocean-81838.herokuapp.com, whenever you click a link within the website itself.

Regarding the DNS settings

In terms of DNS settings, all I did was add the domains colegiojeffersontoday.com and news.colegiojeffersontoday.com via the Heroku CLI. In the 1and1 domain manager I edited the DNS by adding the subdomain news and applying a CNAME to it. The idea is that the root domain redirects to the subdomain, which has the CNAME linked to it. Apologies if that sounds confusing.

Thank you so much for your help.
Heroku app + 1and1 domains - Custom domain name switches back to default heroku app's domain on click within the web app
domains;dns;heroku;1and1
null
_unix.191646
I'd like to run a command at startup which pings a certain address every 10 minutes and writes the result to a file. I've now figured out how to do the pinging, the file writing and the 10-minute intervals:

    while true; do my-command-here; sleep 600; done

My question is: can I put this in /etc/init.d/rc.local, or should I be putting it in /etc/rc.local or somewhere else entirely? I'm specifically concerned because it's an infinite loop, so I'm not sure whether I can put it in one of these startup scripts.

Some help would be appreciated. I'm using Ubuntu 12.04.5.
Running an infinite loop on startup
startup;ping
This isn't really an infinite loop; it's a task that needs to run every ten minutes. As such, the task can go into the task scheduler, cron.

Run the command crontab -e and add this single line to the bottom of the file:

    */10 * * * * /path/to/my-command-here

Ensure that my-command-here is an executable script (chmod u+x my-command-here) and that its first line starts with #! and the name of the script interpreter (typically #!/bin/bash).

The five fields in the pattern */10 * * * * map to the minute (0-59), hour (0-23), day of month (1-31), month (1-12), and day of week (0-6, with 0 = Sunday).
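For completeness, a minimal sketch of what the script itself could look like; the address and the log path are placeholders, not taken from the question:

    #!/bin/bash
    # Append one timestamped ping result to a log file.
    # cron provides the every-10-minutes scheduling, so no loop is needed here.
    printf '%s ' "$(date '+%F %T')" >> /var/log/ping-monitor.log
    ping -c 1 example.com >> /var/log/ping-monitor.log 2>&1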
_cs.25914
Does the following recursive algorithm have a name? If so, what is it?

    procedure main():
        myFilter = new Filter( myPrime = 2 )   //first prime number
        print 2                                //since it would not otherwise be printed
        for each n in 3 to MAX:
            if myFilter.isPrime(n):
                print n

    object Filter:
        integer myPrime
        PrimeFilter nextFilter = NULL

        procedure isPrime(integer n):
            if n is multiple of myPrime:
                return FALSE
            else if nextFilter is not NULL:
                return nextFilter.isPrime(n)
            else
                nextFilter = new PrimeFilter(myPrime = n)
                return TRUE

Sample implementation in Java here.

This is similar to the Sieve of Eratosthenes, though after some discussion in the CS chat, we decided that it is subtly different.
What is the name of this prime number algorithm?
algorithms;reference request;primes
O'Neill [1] calls this the unfaithful sieve. It's much slower than the sieve of Eratosthenes.

For each prime $p$ you do work $\sim p/\log p$, and so the total number of divisions up to $x$ is roughly $x^2/(2\log^2 x)$ if you assume composites are free. (That's essentially true: they take at most $2\sqrt x/\log x$ divisions each, for a total of at most $2x^{3/2}/\log x$ divisions.)

Divisions take longer than unit time, so the total bit complexity is about $O(x^2\log\log x/\log x)$.

[1] Melissa E. O'Neill, The Genuine Sieve of Eratosthenes
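A quick sanity check of that division count: each prime $p$ is tested against every smaller prime before a new filter is created for it, i.e. about $\pi(p)$ divisions, so by the prime number theorem the total up to $x$ is about
$$\sum_{p \le x} \pi(p) \approx \sum_{p \le x} \frac{p}{\log p} \approx \int_2^x \frac{t}{\log^2 t}\,dt \sim \frac{x^2}{2\log^2 x},$$
which matches the estimate above.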
_codereview.124898
I am a high school math and science teacher. I am working with my students on the concepts of algorithms, unit conversions, and functions. In class, to work on all these concepts, we just worked out the process of how to calculate how old someone is in seconds. I thought this was a particularly interesting algorithm to feed to a computer, so I spent the next couple of days working on this in my spare time.I have not sanitized my user input or done any internal error checking, and the code assumes they were born at exactly 00:00:00 on their day of birth. Laziness on my party, I guess. I am most concerned with Is there a better way to calculate my days offset? This is either the number of days until their birthday this year or since their birthday this year. There is a specific function or three for this, as you can see below in the code.Additionally, are there any bits of particularly cringy (my term) code?I had to google how to get current date and time (c++ date time, first link). So I totally just stole that bit of code.Thank you very much. I tried to comment clearly, but occasionally I might have been less commenty than desirable./* * For use by absolutely anyone for absolutely any reason. *//* * File: main.cpp * Author: Wayman Bell III * * Created on March 31, 2016 */#include <iostream>#include <ctime>using namespace std;int getTheYear();int getTheMonth();int getTheDay();int getTheHour();int getTheMinute();int getTheSecond();int countLeapYears(int, int);int welcome();int calcAgeInSeconds(int, int, int);int calcOffset(int, int);int calcDaysRemainingThisYear(int, int);int calcDaysSinceBDay(int, int);int main(int argc, char** argv) { welcome(); return 0;}int getTheYear( ) //Retrieve current year{ // current date/time based on current system time_t now = time(0); tm *ltm = localtime(&now); return (1900+ltm->tm_year); // print various components of tm structure. /*cout << Year: << 1900 + ltm->tm_year << endl;*/}int getTheMonth() //Retrieve current month{ // current date/time based on current system time_t now = time(0); tm *ltm = localtime(&now); return (1+ltm->tm_mon); // print various components of tm structure. /*cout << Month: << 1 + ltm->tm_mon<< endl;*/}int getTheDay() //Retrieve current day of month{ // current date/time based on current system time_t now = time(0); tm *ltm = localtime(&now); return (ltm->tm_mday); // print various components of tm structure. /*cout << Day: << ltm->tm_mday << endl;*/}int getTheHour() //Retrieve current hour{ // current date/time based on current system time_t now = time(0); tm *ltm = localtime(&now); return (1+ltm->tm_hour); // print various components of tm structure. /*cout << Time: << 1 + ltm->tm_hour << :;*/}int getTheMinute() //Retrieve current minute{ // current date/time based on current system time_t now = time(0); tm *ltm = localtime(&now); return (1+ltm->tm_min); // print various components of tm structure. /*cout << 1 + ltm->tm_min << :;*/}int getTheSecond() //Retrieve current second{ // current date/time based on current system time_t now = time(0); tm *ltm = localtime(&now); return (1+ltm->tm_sec); // print various components of tm structure. /*cout << 1 + ltm->tm_sec << endl;*/}int welcome() //Get birthday input from user. Pass info to appropriate functions. 
Output age in seconds.{ std::cout << Welcome to the Age Calculator!\n; std::cout << January \t-- 1\t|\tFebruary \t-- 2\nMarch \t\t-- 3\t|\tApril \t\t-- 4\n; std::cout << May \t\t-- 5\t|\tJune \t\t-- 6\nJuly \t\t-- 7\t|\tAugust \t\t-- 8\n; std::cout << September \t-- 9\t|\tOctober \t-- 10\nNovember \t-- 11\t|\tDecember \t-- 12; std::cout << \n\nWhat month were you born in: ; int monthBorn=0; std::cin >> monthBorn; int dayOfMonthBorn=0; std::cout << Enter the day of the month you were born: ; std::cin >> dayOfMonthBorn; int yearBorn=0; std::cout << Enter the year you were born: ; std::cin >> yearBorn; std::cout << \nThank you. One moment.\nCalculating...\n\n; int ageInSeconds=0; ageInSeconds=calcAgeInSeconds(monthBorn,dayOfMonthBorn,yearBorn); std::cout << You are << ageInSeconds << seconds old! Congratulations!\n\n; std::cout << Enter \0\ to end. ; char endPrg = ' '; std::cin >> endPrg; return 0;}int calcAgeInSeconds(int month, int day, int year){ int curYear = getTheYear(); int curMon = getTheMonth(); int curDay = getTheDay(); int curHour = getTheHour(); int curMin = getTheMinute(); int curSec = getTheSecond(); int leapCount= countLeapYears(curYear, year); int yearsOld = curYear - year; //If person has not had their birthday yet, they are a year younger. if (curMon < month || (curMon == month && curDay < day)) yearsOld--; //If this is a leap year, but before leap day, then subtract one leap day. if (((curYear / 4) * 4 == curYear) && (curMon < 2 || (curMon == 2 && curDay <= 28))) leapCount--; //If person born on leap year, but after leap day, subtract one leap day. if (((year / 4) * 4 == year) && (month > 2)) leapCount--; int secondsOld = 0; int dayOffset = 0; dayOffset = calcOffset(month, day); //Account for days since birthday or until birthday secondsOld = yearsOld * 365 + leapCount + dayOffset; //Add up total number of days secondsOld = secondsOld * 24 + curHour; //Convert to hours and add today's hours. secondsOld = secondsOld * 60 + curMin; //Convert to minutes and add today's minutes. secondsOld = secondsOld * 60 + curSec; //Convert to seconds and add today's seconds. return secondsOld; }int calcOffset(int theirMon, int theirDay){ int curYear = getTheYear(); int curMon = getTheMonth(); int curDay = getTheDay(); int curHour = getTheHour(); int curMin = getTheMinute(); int curSec = getTheSecond(); int dayOff = 0; //If they have not yet had their birthday... if ((curMon < theirMon) || ((curMon == theirMon) && (curDay < theirDay))) dayOff = 365 - calcDaysRemainingThisYear(theirMon, theirDay); //If they have had their birthday... 
else if ((curMon == theirMon) && (curDay == theirDay)) dayOff = 0; else dayOff = calcDaysSinceBDay(theirMon, theirDay); return dayOff;}int calcDaysRemainingThisYear(int theirMon, int theirDay){ int curYear = getTheYear(); int curDay = getTheDay(); int curMon = getTheMonth(); int dayOffset = 0; while (curMon < theirMon) { if (curMon == 1) dayOffset += 31; else if (curMon == 2) { if ((curYear / 4) * 4 == curYear) dayOffset += 29; else dayOffset += 28; } else if (curMon == 3) dayOffset += 31; else if (curMon == 4) dayOffset += 30; else if (curMon == 5) dayOffset += 31; else if (curMon == 6) dayOffset += 30; else if (curMon == 7) dayOffset += 31; else if (curMon == 8) dayOffset += 31; else if (curMon == 9) dayOffset += 30; else if (curMon == 10) dayOffset += 31; else if (curMon == 11) dayOffset += 30; else if (curMon == 12) { dayOffset += 31; curMon = 0; } curMon ++; } dayOffset -= curDay; dayOffset += theirDay; return dayOffset;}int calcDaysSinceBDay(int theirMon, int theirDay){ int curYear = getTheYear(); int curDay = getTheDay(); int curMon = getTheMonth(); int dayOffset = 0; while (theirMon < curMon) { if (theirMon == 1) dayOffset += 31; else if (theirMon==2) { if (((curYear - 1) / 4) * 4 == (curYear - 1)) dayOffset += 29; else dayOffset += 28; } else if (theirMon == 3) dayOffset += 31; else if (theirMon == 4) dayOffset += 30; else if (theirMon == 5) dayOffset += 31; else if (theirMon == 6) dayOffset += 30; else if (theirMon == 7) dayOffset += 31; else if (theirMon == 8) dayOffset += 31; else if (theirMon == 9) dayOffset += 30; else if (theirMon == 10) dayOffset += 31; else if (theirMon == 11) dayOffset += 30; else if (theirMon == 12) { dayOffset += 31; theirMon = 0; } theirMon ++; } dayOffset -= theirDay; dayOffset += curDay; return dayOffset;}int countLeapYears(int curYear, int theirYear){ //Find nearest year divisible by 4 beginning at or prior to the start count, //then begin subtracting 4 from the start count until we get to the finish count int leapYears = 0; int modYears = 0; //Found a better way than the comment under this code. //No If statements required, just find the mod, take it out, calculate leaps. modYears = curYear % 4; curYear -= modYears; while (curYear >= theirYear) { curYear -= 4; leapYears++; } return leapYears; /* Found a better way than this. See above. if ((curYear / 4) * 4 == curYear) //This year is a leap year. { } else if (((curYear - 1) / 4) * 4 == curYear) //Last year was a leap year. { while ((curYear - 1) >= theirYear) { curYear -= 4; leapYears++; } } else if (((curYear - 2) / 4) * 4 == curYear) //Year before last was a leap year. { while ((curYear - 2) >= theirYear) { curYear -= 4; leapYears++; } } else //Next year is a leap year. { while ((curYear - 3) >= theirYear) { curYear -= 4; leapYears++; } } return leapYears;*/}
Calculate Age in Seconds
c++;algorithm
using namespace std;Don't do this, compare for example Why is using namespace std in C++ considered bad practice?. And actually your code does not rely onit, you can simply remove that line.As already mentioned in the comments, neither the argc/argv parametersnor the return statement is required in C++ (see e.g.What should main() return in C and C++? for an overview):int main() { welcome();}Your welcome() function does all the work (which makes the function name quite misleading).A better design would be to separate between input, calculation,and output:void askForBirthdate(int &dayOfBirth, int &monthOfBirth, int &yearOfBirth){ // ...}int main(){ int dayOfBirth, monthOfBirth, yearOfBirth; askForBirthdate(dayOfBirth, monthOfBirth, yearOfBirth); int ageInSeconds = calcAgeInSeconds(dayOfBirth, monthOfBirth, yearOfBirth); std::cout << You are << ageInSeconds << old.\n;}(This will be revised below.) Your age calculation is quite complicated and has errors. As pointed outin the other answers:The leap year algorithm is not correct.Daylight save time transitions are not considered.The current time is retrieved multiple times which can cause inconsistentresults.Instead of converting the current time to day/month/year/... and computingthe difference to the birth day/month/year, it would be easier to gothe other way around: Convert the birth date to a time value(seconds since Jan 1, 1970 UTC) and compute the difference to thecurrent time value (which is obtained by time(0).Converting day/month/year to a time value (according to the localtimezone) is easily done with mktime() (the counterpartto localtime()):time_t timeForDate(int day, int month, int year){ struct tm timeinfo = { 0 }; timeinfo.tm_mday = day; timeinfo.tm_mon = month - 1; timeinfo.tm_year = year - 1900; timeinfo.tm_isdst = -1; return mktime(&timeinfo);}The main program then becomesint main(){ int dayOfBirth, monthOfBirth, yearOfBirth; askForBirthdate(dayOfBirth, monthOfBirth, yearOfBirth); time_t birthTime = timeForDate(dayOfBirth, monthOfBirth, yearOfBirth); time_t nowTime = time(0); time_t ageInSeconds = nowTime - birthTime; std::cout << You are << ageInSeconds << seconds old.\n;}Some other miscelleaneous remarks:You can get rid of the function declarations if you define allfunctions before using them.Statements likeint ageInSeconds=0;ageInSeconds=calcAgeInSeconds(monthBorn,dayOfMonthBorn,yearBorn);can be combined toint ageInSeconds = calcAgeInSeconds(monthBorn, dayOfMonthBorn, yearBorn);And use more (horizontal) space!Check and fix all compiler warnings. For example, in calcOffset(),curYear and three more variables are computed but their valuesare never used.Always use curly braces { } with if-statements, even if the if orelse part consists only of a single statement. That helps to avoiderrors if the code is edited later.The long if/else if/else if/... statement in calcDaysRemainingThisYear() can be simplified by using a switchstatement.
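The askForBirthdate body is left unimplemented above; a minimal sketch with no input validation, reusing the prompts from the question's welcome() function and the same <iostream> include, could be:

    void askForBirthdate(int &dayOfBirth, int &monthOfBirth, int &yearOfBirth)
    {
        std::cout << "What month were you born in (1-12): ";
        std::cin >> monthOfBirth;
        std::cout << "Enter the day of the month you were born: ";
        std::cin >> dayOfBirth;
        std::cout << "Enter the year you were born: ";
        std::cin >> yearOfBirth;
    }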
_cs.11481
I've been trying for a while now to find a solution for the problem in the title: determining if a number is perfect using a Turing Machine. I only had one class on the TM and while I did get how it works, this particular algorithm is being really hard for me to develop.

The algorithm I'm trying to implement on the TM is basically this (in C; it returns true iff n is a perfect number):

    int isPerfect(int n) {
        int i = 1, sum = 0;
        while (n > i) {
            if (n % i == 0) {
                sum = sum + i;
            }
            i++;
        }
        return sum == n;
    }

The tough part for me right now is the while (n > i) loop and the n % i inside it.

Since I already have a program that does a % b, I was trying to build the TM graph around it, but I'm not sure it's the best idea, especially since b in this case changes on every iteration. The software I'm using to simulate the TM is called JFlap.

The algorithm in table or graph form would be perfect.
Algorithm to determine if a number is perfect on a Turing Machine
algorithms;turing machines;decision problem;integers
null
_unix.179078
I am looking at a DTS file which tries to specify different nodes, but interestingly I find a few nodes having a different style of nomenclature.

    / {
        model = "TI AM335x BeagleBone Black";
        compatible = "ti,am335x-bone-black", "ti,am335x-bone", "ti,am33xx";
    };

    &ldo3_reg {
        regulator-min-microvolt = <1800000>;
        regulator-max-microvolt = <1800000>;
        regulator-always-on;
    };

    &mmc1 {
        vmmc-supply = <&vmmcsd_fixed>;
    };

    &mmc2 {
        vmmc-supply = <&vmmcsd_fixed>;
        pinctrl-names = "default";
        pinctrl-0 = <&emmc_pins>;
        bus-width = <8>;
        status = "okay";
    };

    / {
        hdmi {
            compatible = "ti,tilcdc,slave";
            i2c = <&i2c0>;
            pinctrl-names = "default", "off";
            pinctrl-0 = <&nxp_hdmi_bonelt_pins>;
            pinctrl-1 = <&nxp_hdmi_bonelt_off_pins>;
            status = "okay";
        };
    };

What does it convey if a node has & as its prefix? What is the necessity of separating these nodes from the root node, when they could be placed inside the root node itself? Interestingly, the above example also has two root nodes; how is that possible?
Meaning of an ampersand prefix in a device tree
linux kernel;drivers;boot loader;arm;device tree
null
_codereview.127154
The following query is taking over 800ms to run, and returning 300 rows. When deployed to SQL Azure, it takes much longer on an affordable price tier.SELECT Tests.Id, Tests.Title, Tests.AuthorId, Tests.[Status], Users.FirstName + ' ' + Users.LastName AS AuthorName, (SELECT COUNT(1) FROM Results LEFT JOIN Users ON Results.UserId = Users.Id WHERE Results.TestId = Tests.Id AND Results.MarkedBy IS NULL AND Results.QuestionNumber >= 1 AND EXISTS ( (SELECT ClassName FROM UserClasses WHERE UserClasses.UserId = Users.Id) INTERSECT (SELECT ClassName FROM TestClasses WHERE TestClasses.TestId = Tests.Id) INTERSECT (SELECT ClassName FROM UserClasses WHERE UserId = @teacherId) ) ) AS UnmarkedCount, (CASE WHEN EXISTS (SELECT 1 FROM Results WHERE Results.TestId = Tests.Id) THEN CAST(1 AS BIT) ELSE CAST(0 AS BIT) END ) AS AnyResults, (SELECT Stuff((SELECT ',' + ClassName FROM ( (SELECT ClassName FROM TestClasses WHERE TestClasses.TestId = Tests.Id) INTERSECT (SELECT ClassName FROM UserClasses WHERE UserId = @teacherId) ) x FOR XML PATH ('')),1,1,'') ) AS ClassesFROM Tests INNER JOIN Users ON Tests.AuthorId = Users.IdWHERE Users.SchoolId = @schoolId AND Tests.Status <= 4An overview of the schema:Users include students and teachers.UserClasses matches many users to many class names.TestClasses matches many tests to many class names.Each test in Tests can have multiple Results - one per question per student.The query returns a list of tests, using subqueries to find:UnmarkedCount: How many unmarked results exist for this test, where the intersection of the following is not empty:The classes of the student of this resultThe test's classesThe teacher's classesAnyResults: Are there any results for this test?Classes: As a comma-separated list, which of the teacher's classes are assigned to this test?Note that if we remove the condition where three queries are intersected, the execution time is reduced to 150ms. However, this logic is required.How could this be improved?Further Details:Query Execution PlanThis is an extract from the query execution plan, where the heavy lifting seems to occur. I can't see anywhere suggesting indexes.Business logicThe procedure returns a list of all tests at a given school. For each test, it calculates:UnmarkedCount: How many results are not yet marked for students in classes which are both allocated to this test and taught by the current user?Classes: Which of the classes allocated to this test does the current user teach?
SQL query with nested subqueries
performance;sql;sql server
Let's just focus on this part, because that's where your performance goes:SELECT COUNT(1)FROM Results LEFT JOIN Users ON Results.UserId = Users.IdWHERE Results.TestId = Tests.Id AND Results.MarkedBy IS NULL AND Results.QuestionNumber >= 1 AND EXISTS ( (SELECT ClassName FROM UserClasses WHERE UserClasses.UserId = Users.Id) INTERSECT (SELECT ClassName FROM TestClasses WHERE TestClasses.TestId = Tests.Id) INTERSECT (SELECT ClassName FROM UserClasses WHERE UserId = @teacherId) )That pattern of EXISTS (... INTERSECT ...) is better written as a chain of INNER JOIN.The query optimizer of your database already did that internally as well, but it chose the wrong order for the join, resulting in overly large temporary result sets. Especially when joining UserClasses straight on Results without applying the more selective filter by @teacherId first.SELECT COUNT(1)FROM ResultsINNER JOIN TestClasses ON TestClasses.TestId = Tests.Id AND TestClasses.TestId = Results.TestIdINNER JOIN UserClasses AS TeacherClass ON TeacherClass.UserId = @teacherId AND TeacherClass.ClassName = TestClasses.ClassNameINNER JOIN UserClasses AS UserClass ON UserClass.UserId = Results.UserId AND UserClass.ClassName = TestClasses.ClassNameWHERE Results.MarkedBy IS NULL AND Results.QuestionNumber >= 1I reordered the JOIN clauses to ensure that the product remains as small as possible after each single step. Further, I eliminated the unnecessary join on the User schema.However, you don't actually need full INNER JOIN either. If your database system supports that, you can safely replace the 2nd and 3rd of the INNER JOIN with LEFT SEMI JOIN operators instead.So much for fixing the inner select. But as a matter of fact, now we don't even need to do it as a subquery any more, but can just handle if as a LEFT JOIN with COUNT and GROUP BY on the outmost query.Whether this actually gains any performance needs to be tested.There are also a couple of flaws in your database scheme:Take the UserClasses table schema. You are abusing it to describe both the roles of teacher and student for any given class, without distinguishing between these two. I suspect you coded the user role into the Users schema instead, but it would have been better to store different roles in different schemes.You are apparently storing class names as string literals in multiple schemes, this is an indirect violation of the 2NF, but even worse, it requires string comparisons to match the corresponding columns against each other. This should be refactored ASAP.There also appears to be a possible design flaw in Results. If the same test is reused by two different classes, and a pupil is enrolled into both, his test results are now shared between both classes. Test results should probably linked to a specific enrollment to a class, rather than just to the generic test. This also allows to simplify this query further, as the most expensive part of joining on UserClass for querying pupil enrollment is then obsolete.
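To make the 2NF point concrete, a hypothetical normalized layout could look like the following; the table and column names and types are illustrative guesses, not taken from your schema, and the existing ClassName columns would be migrated into foreign keys:

    -- class names stored exactly once; the link tables join on integer keys instead of strings
    CREATE TABLE Classes
    (
        Id        INT IDENTITY PRIMARY KEY,
        ClassName NVARCHAR(100) NOT NULL UNIQUE
    );

    CREATE TABLE UserClasses
    (
        UserId  INT NOT NULL,
        ClassId INT NOT NULL REFERENCES Classes (Id),
        PRIMARY KEY (UserId, ClassId)
    );

    CREATE TABLE TestClasses
    (
        TestId  INT NOT NULL,
        ClassId INT NOT NULL REFERENCES Classes (Id),
        PRIMARY KEY (TestId, ClassId)
    );

With that in place, the class comparisons in the query become cheap integer joins, and the duplicated-string problem described above disappears.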
_codereview.13507
I have a number of jQuery animation effects applied to certain elements on the page:

    jQuery("#bgd_balance").animate({
        backgroundPositionY: "0px",
        backgroundPositionX: "0px",
        'background-size': '100%'
    }, 800, "swing");
    jQuery(".balance_left_box").delay(2000).slideDown(200, "easeInCirc");
    jQuery(".balance_left_box p.first-line").delay(2400).slideDown(600, "easeInCirc");
    jQuery(".balance_left_box").delay(1000).animate({
        height: "270px",
        top: "64px"
    }, 100, "easeInCirc");

The problem I'm facing is that when I'm tweaking the delay of a certain element, I have to go through everything and adjust all the other delays accordingly.

Is it possible to have something like this instead (pseudocode)?

    queue.add(
        delay(2000),
        jQuery(".balance_left_box").slideDown(200, "easeInCirc"),
        delay(2000),
        jQuery(".balance_left_box p.first-line").slideDown(600, "easeInCirc"),
        delay(1000),
        jQuery(".balance_left_box").animate({ height: "270px", top: "64px" }, 100, "easeInCirc")
    ).run();

I know I can achieve this queuing by adding a callback function to each animate() call, but then the resulting code will be really bulky and hard to read, in my opinion.
Group animation events in jQuery
javascript;jquery;animation
The way I see it, you have 2 options; either use Deferreds, or store your delay in a variable:

    var delay = 0,
        $left_box = $(".balance_left_box");

    $("#bgd_balance").animate({
        backgroundPositionY: "0px",
        backgroundPositionX: "0px",
        'background-size': '100%'
    }, 800, "swing");
    $left_box.delay(delay += 2000).slideDown(200, "easeInCirc");
    $left_box.find("p.first-line").delay(delay += 2400).slideDown(600, "easeInCirc");
    $left_box.delay(delay += 1000).animate({
        height: "270px",
        top: "64px"
    }, 100, "easeInCirc");
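The Deferred route could look roughly like this; it is only a sketch, chaining each step off the previous animation's promise (jQuery 1.8+ then() chaining), so the offsets become relative automatically:

    var $left_box = $(".balance_left_box");

    $("#bgd_balance")
        .animate({
            backgroundPositionY: "0px",
            backgroundPositionX: "0px",
            'background-size': '100%'
        }, 800, "swing")
        .promise()
        .then(function () {
            // starts only after the background animation has finished
            return $left_box.delay(2000).slideDown(200, "easeInCirc").promise();
        })
        .then(function () {
            return $left_box.find("p.first-line").slideDown(600, "easeInCirc").promise();
        })
        .then(function () {
            return $left_box.animate({ height: "270px", top: "64px" }, 100, "easeInCirc").promise();
        });

Tweaking one duration or delay then shifts every later step automatically.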
_cs.65828
Definitions

An image filter is a matrix $m \in \mathbb{R}^{k_1 \times k_2 \times k_3}$ which gets applied to an image $I \in \mathbb{R}^{l_1 \times l_2 \times l_3}$ as a discrete convolution $$I'(n_1, n_2, n_3) = \sum_{i=0}^{k_1} \sum_{j=0}^{k_2} \sum_{k=0}^{k_3} I[n_1-i - \lfloor \frac{k_1}{2} \rfloor, n_2 - j - \lfloor \frac{k_2}{2} \rfloor, n_3 - k - \lfloor \frac{k_3}{2} \rfloor] \cdot m[i, j, k]$$

There are some well-known filters like Laplace filters, Prewitt filters, ... (see my interactive example). For example, for an RGB image $k_3 = 3$ and $k_1, k_2$ are width and height.

Question

Is there a metric to compare the similarity of image filters?

Context

Convolutional Neural Networks (CNNs) learn image filters. As they are randomly initialized, the filters they learn are different each time you train them. I am interested in quantifying those differences.

I could, of course, use any metric for elements of $\mathbb{R}^{k_1 \times k_2 \times k_3}$. However, consider the filters
$$\begin{align}
m_1 &= \begin{pmatrix}-1&0&1\\-1&0&1\\-1&0&1\end{pmatrix}\\
m_2 &= \begin{pmatrix}1&0&-1\\1&0&-1\\1&0&-1\end{pmatrix}\\
m_3 &= \begin{pmatrix}-0.9&0.1&1\\-0.9&0.1&1\\-0.9&0.1&1\end{pmatrix}\\
\end{align}$$

For the example image (image omitted), $m_1$ produces (result image omitted) and $m_2$ produces (result image omitted). You can see a difference, but much less than for the result of $m_3$ (result image omitted). This is probably not captured by most metrics.

Another idea was to apply the metrics to the processed images on a given dataset, but this would make the results depend on the dataset and be computationally very intensive.

(In case you want to try image filters yourself with Python: https://gist.github.com/MartinThoma/f51a1044c4abc6c7b81915ef96b7cfbd)
Is there a metric for the similarity of two image filters?
machine learning;computer vision;comparison
The ‘k-translation correlation’ is probably a good candidate for what you are looking for. It measures the maximum correlation between a pair of two filters $\mathbf{W_i}$ and $\mathbf{W_j}$ achieved by translating one filter up to k steps along any spatial dimension and then selecting the maximum thereof:$$\rho_k(\mathbf{W_i,W_j})=\max_{(x,y)\in \{-k,...,k\}^2\setminus(0,0)} \frac{\langle\mathbf{W_i}, T(\mathbf{W_j}, x,y)\rangle_f}{\left \| \mathbf{W_i}\right \|_2 \left \| \mathbf{W_j}\right \|_2}\,,$$where $T(\cdot, x,y)$ refers to the translation of its first operand by $(x,y)$ and $\langle\cdot,\cdot\rangle_f$ denotes the flattened inner product of the two filters (the second of which is translated). Note that both filters are reshaped to column vectors to perform the inner product. For more details refer to Doubly Convolutional Networks (Zhai, Cheng, Lu, Zhang, in Proceedings of 30th Conference on Neural Information Processing Systems (NIPS 2016)).
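A small NumPy sketch of how $\rho_k$ could be computed for two 2-D filters; zero padding at the filter borders is an assumption here, since the definition above does not fix a padding convention (for 3-D filters you would add the third axis analogously):

    import numpy as np

    def k_translation_correlation(w_i, w_j, k=1):
        """Maximum normalized correlation of w_j translated by up to k pixels (excluding (0,0))."""
        norm = np.linalg.norm(w_i) * np.linalg.norm(w_j)
        if norm == 0:
            return 0.0  # degenerate all-zero filter
        best = -np.inf
        for dx in range(-k, k + 1):
            for dy in range(-k, k + 1):
                if dx == 0 and dy == 0:
                    continue  # the (0, 0) translation is excluded by definition
                shifted = np.roll(w_j, (dx, dy), axis=(0, 1))
                # zero out the wrapped-around entries to emulate a true translation
                if dx > 0:
                    shifted[:dx, :] = 0
                elif dx < 0:
                    shifted[dx:, :] = 0
                if dy > 0:
                    shifted[:, :dy] = 0
                elif dy < 0:
                    shifted[:, dy:] = 0
                # flattened inner product divided by the norm product
                best = max(best, float(np.sum(w_i * shifted)) / norm)
        return best

Applied to the $m_1$, $m_2$, $m_3$ from the question, this gives a high score for the $m_1$/$m_3$ pair and a low one for $m_1$/$m_2$, which is the behaviour you were after.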
_codereview.138253
I've been working on refactoring a project of mine and need some help. I'm looking to apply some design principles and patterns to make the code cleaner and easier to maintain. It looks like I'm clearly violating the DRY principle as there seems to be quite a bit of repetition. Also, I think there is a design pattern or two that can be implemented, I'm just not sure which ones.The program implements HP's Blocked Recursive Image Composition (BRIC) algorithm . The algorithm creates and steps through a binary tree multiple times to assign various properties like size, coordinates, and aspect ratios to all of the nodes in the tree.I have a BinaryNode class set up that has references to its left child, right child, and parent as well as holds various properties like size, aspect ratio, and coordinates.My Collage class, where the actual collage is constructed from a list of images passed in to the constructor, is set up like so:class Collage{ private BinaryNode root = new BinaryNode(); private List<Image> images; private List<ImageDetails> imageInformation = new List<ImageDetails>(); private Size finalCollageSize; private int collageLength; private Orientation collageOrientation; private int borderWidth; private Color borderColor; private Random random; public Collage(List<String> imagePaths, int collageLength, Orientation collageOrientation, int borderWidth, Color borderColor) { this.images = convertPathsToImages(imagePaths); this.collageLength = collageLength; this.collageOrientation = collageOrientation; this.borderWidth = borderWidth; this.borderColor = borderColor; random = new Random(); SetCollageSplit(); } ...After the Collage object is constructed, I execute collage.CreateCollage(), which immediately hands off the majority of the algorithm to CreateCollageTree():public Bitmap CreateCollage(){ CreateCollageTree(); ...}private void CreateCollageTree(){ InitializeFullBinaryTree(); SetNodeSplits(); SetImagesToLeafNodes(); SetAspectRatios(); SetFinalCollageSize(); SetNewImageSizes(); SetImageCoordinates(); GetImageDetailsFromTree();}I feel a lot of repetition occurs within these methods. For example, take a look at SetNodeSplits(), SetImagesToLeafNodes(), SetAspectRatios(), SetNewImageSizes(), and SetImageCoordinates(). They all simply parse the tree and perform some action on either an inner node or leaf node. Lots of repetition going on here. I was thinking I could parse the tree only once and call the proper methods once I'm at the right node, but that would obviously violate the Single Responsibility Principle (SRP):/// <summary>/// Construct collage tree. It needs to be a full binary tree/// so add 2 nodes for every one image. Also subtact one from the image/// count because the root node has already been created./// </summary>private void InitializeFullBinaryTree(){ for (int i = 0; i < images.Count - 1; i++) { root.addNode(); root.addNode(); }} /// <summary>/// Assign inner nodes a 'Vertical' or 'Horizontal' split at random (50/50 chance)/// </summary>private void SetNodeSplits(){ var currentNode = root; var nodeQueue = new Queue<BinaryNode>(); while (currentNode != null) { if (currentNode.leftChild != null) { nodeQueue.Enqueue(currentNode.leftChild); nodeQueue.Enqueue(currentNode.rightChild); if (currentNode.assignedSplit == Split.None) { currentNode.assignedSplit = GetRandomSplit(); } } currentNode = nodeQueue.Count > 0 ? 
nodeQueue.Dequeue() : null; }}/// <summary>/// Assign images to all leaf nodes/// </summary>private void SetImagesToLeafNodes(){ var currentNode = root; var nodeQueue = new Queue<BinaryNode>(); var imageIndex = 0; while (currentNode != null) { if (currentNode.leftChild != null) { nodeQueue.Enqueue(currentNode.leftChild); nodeQueue.Enqueue(currentNode.rightChild); } else { // It's a leaf node, so assign an image to it. if (imageIndex < images.Count) { Image image = images[imageIndex]; currentNode.image = image; currentNode.aspectRatio = (float)image.Width / (float)image.Height; imageIndex++; } } currentNode = nodeQueue.Count > 0 ? nodeQueue.Dequeue() : null; }}/// <summary>/// Set aspect ratios of all nodes in the tree/// </summary>private void SetAspectRatios(){ var currentNode = root; var nodeQueue = new Queue<BinaryNode>(); while (currentNode != null) { if (currentNode.leftChild != null) { nodeQueue.Enqueue(currentNode.leftChild); nodeQueue.Enqueue(currentNode.rightChild); } currentNode.aspectRatio = CalculateAspectRatio(currentNode); currentNode = nodeQueue.Count > 0 ? nodeQueue.Dequeue() : null; }}/// <summary>/// Set image sizes for all nodes in the tre/// </summary>private void SetNewImageSizes(){ var currentNode = root; var nodeQueue = new Queue<BinaryNode>(); while (currentNode != null) { if (currentNode.leftChild != null) { nodeQueue.Enqueue(currentNode.leftChild); nodeQueue.Enqueue(currentNode.rightChild); } currentNode.size = CalculateNewImageSize(currentNode); currentNode = nodeQueue.Count > 0 ? nodeQueue.Dequeue() : null; }}/// <summary>/// Set coordinates for all nodes in the tree./// </summary>private void SetImageCoordinates(){ var currentNode = root; var nodeQueue = new Queue<BinaryNode>(); // breadth-first while (currentNode != null) { if (currentNode.leftChild != null) { nodeQueue.Enqueue(currentNode.leftChild); nodeQueue.Enqueue(currentNode.rightChild); } currentNode.coordinates = CalculateImageCoordinates(currentNode); currentNode = nodeQueue.Count > 0 ? nodeQueue.Dequeue() : null; }}Please help me properly reduce all of this repetition and construct a better design. I'm hoping this is enough information / context. If it's not, I can provide any other code necessary. Also, this is my first C# project, so if there are any conventions or idioms I've violated, please let me know. Thanks a bunch in advance! I truly appreciate any feedback given.
Automated collage tool
c#;object oriented;design patterns
I'll just concentrate on DRYing out your code. Create a general VisitTree method:public VisitTree(Action<BinaryNode> reviver){ if (reviver == null) { throw new ArgumentNullException(reviver); } var currentNode = root; var nodeQueue = new Queue<BinaryNode>(); while (currentNode != null) { if (currentNode.leftChild != null) { nodeQueue.Enqueue(currentNode.leftChild); nodeQueue.Enqueue(currentNode.rightChild); } reviver(currentNode); currentNode = nodeQueue.Count > 0 ? nodeQueue.Dequeue() : null; }}Then you just create an Action that does all the stuff you want on your tree - you keep SRP because that method (or lambda) should be broken up into different methods that do different things.e.g.VisitTree(node => { SetNodeSplit(node); SetImageToLeafNode(node); SetAspectRatio(node); SetFinalCollageSize(node); SetNewImageSize(node); SetImageCoordinate(node); });I'd also suggest adding a IsLeaf property to your BinaryNode class to make it clearer.
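The IsLeaf suggestion is a one-liner; a sketch, assuming the child references keep the lowercase names used in your question:

    public bool IsLeaf
    {
        get { return leftChild == null && rightChild == null; }
    }

Inside the Action you pass to VisitTree you can then branch on node.IsLeaf instead of checking node.leftChild != null by hand.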
_unix.285586
Below is my /etc/X11/xorg.conf.d/1-fbdev.conf:

    Section "Device"
        Identifier "LCD"
        Driver "fbdev"
        Option "fbdev" "/dev/fb0"
        Option "Rotate" "UD"
    EndSection

I would like to change the Rotate value in real time (without restarting X), e.g.:

    Option "Rotate" "CW"

How can I apply a change to the xf86-video-fbdev settings while X is running?
Can I change the xf86-video-fbdev rotation while X is running?
x11;configuration;video;framebuffer
null
_hardwarecs.4137
I am building a new server. The existing server has what I consider a very nice case (for my needs). It is 10 years old but in like-new condition. I want some advice on keeping that case and PSU (while replacing the motherboard, CPUs, storage, memory and almost everything else).

The existing case is an Intel 5U Server Chassis with dual 730 W RPS hot-swappable redundant power supplies, hot-swappable fan modules, hot-swappable drive bays and more. I believe the chassis is from the Intel SC5400 Family. It was originally sold as a Gateway E-9510T Server R1 [Part #WME876246]. It has the 5U rack conversion kit, and I have a place for it in the rack. Space is not a problem. This document appears to reference the same chassis I have (and it includes some diagrams so you can get an idea what it looks like):

Microsoft Word - TPS_SC5400_Rev_1 0_AJ2.doc - sc5400_tps_rev10.pdf
ftp://ftp.actina.pl/sterowniki/www/Dokumentacja/S5000PSL/sc5400_tps_rev10.pdf

The power supplies look like this (except 730 Watt): https://www.amazon.com/dp/B00QR19214

The potential problems with the case are:

- the fans are very loud, so I would have to replace them with something quiet
- half of the hot-swappable drive bays are for SCSI disks; I will be using SSDs for fast storage
- half of the hot-swappable drive bays are SATA, but not the latest SATA standard; I will be using multiple Seagate 8 TB Archive SATA III HDDs instead.

My biggest question is whether the 730 W RPS hot-swappable redundant power supplies are worth keeping. I will be using a Super Micro X10DAL motherboard with dual Xeon E5 26XX v3 CPUs. Will these older PSUs output clean enough power?

Also relevant: I have a complete spare case, which gives me 2 more PSUs in case I have a failure, as well as a spare of every other part I might need.

Bottom line: Should I try to reuse this Intel Server Chassis for my new build? Or is it just too outdated?

If I don't, I'll be using a standard PC case with a normal PSU (not a server chassis). Nothing will be hot-swappable and I won't have a redundant PSU. The case I'll go with is a Fractal Design Define R5 and the PSU will be an EVGA Supernova T2 1000W 80+ Titanium.
Server case upgrade?
server;case
null
_unix.379856
I am trying to write a systemd daemon that monitors a memory value retrieved from the Embedded Controller (EC) and then, depending on the value, enables or disables the touchpad/trackpoint. I have the C code that can retrieve the value from the EC, and I am working on turning that into the daemon, but I am unsure whether it is possible to enable/disable the xinput devices from the daemon.

The EC reading code is forked from the Y2P-PM project: https://gitlab.com/mikoff/Y2P-PM

From the terminal I can use xinput set-prop Device-Enabled <0,1> to enable or disable a device.

My question is: can this command be called from C, or is there an equivalent way to do the same thing in C?

The purpose of this daemon is to enable proper function of the ThinkPad Yoga 14s tablet/laptop modes, as I have found the EC value that changes when the machine switches between modes. Eventually I would like a module that could enable the 2-in-1 functionality on Linux for many models of machine.

One additional question: are there any detrimental effects that could stem from polling the EC?
Disabling an xinput device from a C systemd daemon
linux;systemd;c;daemon;xinput
null