id | question | title | tags | accepted_answer |
---|---|---|---|---|
_softwareengineering.280765 | There are three software projects: A, B and C. A is published to anyone and is licensed under the GPL. B extends A, is published too, but has no license information or is mistakenly licensed under the LGPL. Basically it violates the license of A by not being GPL. The source code of B is still available. C extends B. Can C be published under the GPL? The motivation would be: A is GPL, any derivative must be GPL too, so B is GPL and C can be GPL too. | Can GPL be implied to a derivative work? | licensing;open source;gpl | First off, B is in violation of the GPL on A. But that's not exactly your concern and is irrelevant to the question here (who knows, maybe B got an LGPL license from A on their code so that it may be released under the LGPL?). The question is: can you build a GPL piece of software based on LGPL code? The answer to this is simply yes. The LGPL is less restrictive than the GPL (thus why B is in violation of the license on A, unless other provisions were made), but it also allows the code to be brought back into a GPL project fairly easily. From the LGPL license: "Object Code Incorporating Material from Library Header Files. The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following: a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the object code with a copy of the GNU GPL and this license document." It's part of the license.
You can easily build GPL software based on LGPL code. There are some version differences that you'll have to pay attention to, to make sure that the code is licensed in the correct way, under the correct version of the GPL. In the event that there is no license information presented, you do not have the right to extend upon it. B should not have been distributed, as its contributions are not licensed under an open source license. This may have been an internal project that got published, or some other event. It is not presented under a license that is compatible with extending it under the GPL. Consider the situation that a company, using GPL software internally (acceptable, not a violation), mistakenly made their repo public. In this case, it is quite possible that project C is committing copyright infringement itself (on the material that B added, which is not licensed under the GPL, as it should not have been distributed in the first place). One cannot force a license on someone else's source. It is either in compliance with the license, or in violation of it. If it is in violation, then as spelled out in the license: "You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11)." A violation of the GPL does not mean that the material is under the GPL, but rather that it can't be distributed. |
_cogsci.17419 | Age is a risk factor for depression when we look at the entire lifespan. Does that hold true in a population older than 60? | Does risk of developing depression continue to increase after age 60? | clinical psychology;depression | null |
_unix.382993 | I have a program that writes a batch of text to stdout very quickly, but the output doesn't have a specific pattern that I can write an expect loop for. Example of stdout of my program:

[time:here] random text 1
[time:here] random text 2
[time:here] random text 3
[time:here] random text 4
[time:here] random text 5
[time:here] random text 6
[time:here] random text 7

Then it waits until I interact with it, and then it writes to stdout again with the same style of text:

[time:here] random text 8
[time:here] random text 9
[time:here] random text 10
[time:here] random text 11
[time:here] random text 12
[time:here] random text 13
[time:here] random text 14

The stdout is printed very fast, within milliseconds, and then there's a wait until I interact with it. Once I interact, stdout is written again very quickly, and it waits. This repeats until I close the program. Between the waits I want to send a command that writes to another file what time I interacted with the program (using echo or something similar). Is there any way I can target the wait and run the echo command after every time there's a wait for my stdout? For example, if there's a wait of more than 5 seconds, run the echo command, then wait for stdout to change again? | Run a command based on stdout frequency | bash;shell script;shell;expect | You could use a blocking read with a timeout to determine when output has stopped. For example, Bash's read supports a timeout parameter. The following script will write a single line if the output from STDIN stops for more than 2 seconds:

#!/bin/bash
while read -r firstline; do
    while read -r -t 2 line; do
        continue
    done
    echo "---- No data in 2 seconds ----"
done

The echo could be redirected to a different log, and the script can be modified to echo the data being read from standard in if desired. |
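The same burst-then-pause detection (a timed read on each line) can also be expressed in Go, which may be useful if the logger ends up as part of a larger program. This is a hedged sketch, not part of the original answer: the channel, line contents, and durations are all illustrative assumptions standing in for the real program's stdout.

```go
package main

import (
	"fmt"
	"time"
)

// watch prints a marker whenever no value arrives on ch for longer than gap,
// then blocks until the stream resumes. This mirrors the shell answer's
// nested `read -r -t 2` loop: a timed read detects the pause between bursts.
func watch(ch <-chan string, gap time.Duration) {
	for {
		select {
		case _, ok := <-ch:
			if !ok {
				return // stream closed
			}
			// a line arrived in time; keep draining the burst
		case <-time.After(gap):
			fmt.Println("---- gap detected ----")
			// block until the next burst begins (or the stream ends)
			if _, ok := <-ch; !ok {
				return
			}
		}
	}
}

func main() {
	// Simulated program output: two quick bursts separated by a pause.
	ch := make(chan string)
	go func() {
		for _, l := range []string{"a", "b", "c"} {
			ch <- l
		}
		time.Sleep(200 * time.Millisecond) // the pause between bursts
		for _, l := range []string{"d", "e"} {
			ch <- l
		}
		close(ch)
	}()
	watch(ch, 50*time.Millisecond)
}
```

The `select` with `time.After` plays the role of `read -t`: whichever happens first, a new line or the timeout, wins. With the timings above, exactly one marker is printed, for the single pause between the two bursts.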
_codereview.60917 | I wrote a small Perl module that deals with PHP-ish array structures, and I'm going to release it to CPAN. I would like it to be reviewed before the release, both the module naming and the code itself. Could you please give me improvements? Any suggestions are appreciated. https://github.com/ernix/p5-Object-Squash

use strict;
use warnings;

package Object::Squash;
# ABSTRACT: Remove numbered keys from a nested object
use parent 'Exporter';
use List::Util qw/max/;
use version; our $VERSION = version->declare("v0.0.1");

our @EXPORT_OK = qw(squash);

sub squash {
    my $obj = shift;
    return $obj unless ref $obj;
    $obj = _squash_hash($obj);
    $obj = _squash_array($obj);
    return $obj;
}

sub _squash_hash {
    my $obj = shift;
    return $obj unless ref $obj eq 'HASH';
    my @keys = keys %{$obj};
    if (grep {/\D/} @keys) {
        return +{
            map { $_ => squash($obj->{$_}) } @keys,
        };
    }
    my $max = max(@keys) || 0;
    my @ar;
    for my $i (0 .. $max) {
        push @ar, sub {
            return (undef) unless exists $obj->{$i};
            return squash($obj->{$i});
        }->();
    }
    return \@ar;
}

sub _squash_array {
    my $obj = shift;
    return $obj unless ref $obj eq 'ARRAY';
    return (undef) if @{$obj} == 0;
    $obj = squash($obj->[0]) if @{$obj} == 1;
    return $obj;
}

1;
__END__

=head1 NAME

Object::Squash - Remove numbered keys from a nested object

=head1 DESCRIPTION

This package provides the B<squash> subroutine to simplify hash/array structures. I sometimes want to walk through a data structure that consists only of a bunch of nested hashes, even if some of them should be treated as arrays or single values.
This module removes numbered keys from a hash.

=head1 SYNOPSIS

=head2 C<squash>

  use Object::Squash qw(squash);

  my $hash = squash(+{
      foo => +{
          '0' => 'nested',
          '1' => 'numbered',
          '2' => 'hash',
          '3' => 'structures',
      },
      bar => +{
          '0' => 'obviously a single value',
      },
  });

$hash now turns to:

  +{
      foo => [
          'nested',
          'numbered',
          'hash',
          'structures',
      ],
      bar => 'obviously a single value',
  };

=head1 AUTHOR

Shin Kojima <[email protected]>

=head1 LICENSE

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | Perl module to deal with numbered nested objects | perl | Just a few remarks about parts which poke the eye.

push @ar, sub {
    return (undef) unless exists $obj->{$i};
    return squash($obj->{$i});
}->();

introduces an unnecessary subroutine call/overhead, which can be replaced with a do {} block, or even simpler:

push @ar, exists $obj->{$i} ? squash($obj->{$i}) : undef;

This one is only a matter of taste, so

return $obj unless ref $obj eq 'HASH';

could also be written as

return $obj if ref $obj ne 'HASH';

Usually you don't have to explicitly return undef, as in list context you may end up with a list one element long, and that might not be what is intended. Instead, just return; or return (); will produce an empty list, or undef in scalar context. So

return (undef) if @{$obj} == 0;

could be better as

return () if @{$obj} == 0;

or perhaps,

@$obj or return; |
_webapps.104887 | I have two project spaces that have similarly-structured applications. I would like to copy the reports from my old project space over to my new project space so that I can adapt them for use in my new project. | Is it possible to copy CommCare reports from one project space to another? | commcare | null |
_unix.323439 | Yesterday, my xorg was working, but it was using integrated graphics. I tried to get nvidia working. Now, when I run startx, I get a screen on tty3 which shows the output of the last tty I was in. For example, I hit ctrl+alt+f1, type startx, and am moved to tty3 where I get a black screen with a solid cursor. If I go back to tty0 I can see the output of startx, which shows no errors (besides a minor gtk css warning) and a blinking cursor. When I go back to tty3, I see the same thing with a solid cursor.

Info:

uname -a:
Linux Hermes 4.8.7-1-ARCH #1 SMP PREEMPT Mon Nov 15 10:14:30 CET 2016 x86_64 GNU/Linux

.xinitrc:
if [ -d /etc/X11/xinit/xinitrc.d ] ; then
  for f in /etc/X11/xinit/xinitrc.d/?*.sh ; do
    [ -x "$f" ] && . "$f"
  done
  unset f
fi

conky &
exec startxfce4

/var/log/Xorg.0.log:
[ 1743.684] X.Org X Server 1.18.4Release Date: 2016-07-19[ 1743.684] X Protocol Version 11, Revision 0[ 1743.684] Build Operating System: Linux 4.5.4-1-ARCH x86_64 [ 1743.684] Current Operating System: Linux Hermes 4.8.7-1-ARCH #1 SMP PREEMPT Thu Nov 10 17:22:48 CET 2016 x86_64[ 1743.684] Kernel command line: \vmlinuz-linux ro root=UUID=1d746b96-3184-49ac-a204-0f9deda59c87 pci=nomsi initrd=\initramfs-linux.img[ 1743.684] Build Date: 19 July 2016 05:54:24PM[ 1743.684] [ 1743.684] Current version of pixman: 0.34.0[ 1743.684] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.[ 1743.684] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.[ 1743.684] (==) Log file: /var/log/Xorg.0.log, Time: Tue Nov 15 10:24:19 2016[ 1743.684] (==) Using config file: /etc/X11/xorg.conf[ 1743.684] (==) Using system config directory /usr/share/X11/xorg.conf.d[ 1743.684] (==) ServerLayout Layout0[ 1743.684] (**) |-->Screen Screen0 (0)[ 1743.684] (**) | |-->Monitor <default monitor>[ 1743.684] (==) No device specified for screen Screen0.
Using the first device section listed.[ 1743.684] (**) | |-->Device DiscreteNvidia[ 1743.684] (==) No monitor specified for screen Screen0. Using a default monitor configuration.[ 1743.684] (**) |-->Input Device Keyboard0[ 1743.684] (**) |-->Input Device Mouse0[ 1743.684] (==) Automatically adding devices[ 1743.684] (==) Automatically enabling devices[ 1743.684] (==) Automatically adding GPU devices[ 1743.684] (==) Max clients allowed: 256, resource mask: 0x1fffff[ 1743.684] (WW) The directory /usr/share/fonts/Type1/ does not exist.[ 1743.684] Entry deleted from font path.[ 1743.684] (==) FontPath set to: /usr/share/fonts/misc/, /usr/share/fonts/TTF/, /usr/share/fonts/OTF/, /usr/share/fonts/100dpi/, /usr/share/fonts/75dpi/[ 1743.684] (==) ModulePath set to /usr/lib/xorg/modules[ 1743.684] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled.[ 1743.684] (WW) Disabling Keyboard0[ 1743.684] (WW) Disabling Mouse0[ 1743.684] (II) Loader magic: 0x821d40[ 1743.684] (II) Module ABI versions:[ 1743.684] X.Org ANSI C Emulation: 0.4[ 1743.684] X.Org Video Driver: 20.0[ 1743.684] X.Org XInput driver : 22.1[ 1743.684] X.Org Server Extension : 9.0[ 1743.685] (--) using VT number 7[ 1743.685] (II) systemd-logind: logind integration requires -keeptty and -keeptty was not provided, disabling logind integration[ 1743.686] (II) xfree86: Adding drm device (/dev/dri/card1)[ 1743.686] (II) xfree86: Adding drm device (/dev/dri/card0)[ 1743.704] (--) PCI:*(0:0:2:0) 8086:191b:1462:115a rev 6, Mem @ 0xdd000000/16777216, 0xb0000000/268435456, I/O @ 0x0000f000/64, BIOS @ 0x????????/131072[ 1743.704] (--) PCI: (0:1:0:0) 10de:139b:1462:115a rev 162, Mem @ 0xde000000/16777216, 0xc0000000/268435456, 0xd0000000/33554432, I/O @ 0x0000e000/128, BIOS @ 0x????????/524288[ 1743.704] (WW) Open ACPI failed (/var/run/acpid.socket) (No such file or directory)[ 1743.704] (II) LoadModule: glx[ 1743.705] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so[ 1743.710] 
(II) Module glx: vendor=NVIDIA Corporation[ 1743.710] compiled for 4.0.2, module version = 1.0.0[ 1743.710] Module class: X.Org Server Extension[ 1743.710] (II) NVIDIA GLX Module 375.10 Fri Oct 14 10:01:22 PDT 2016[ 1743.710] (II) LoadModule: nvidia[ 1743.710] (II) Loading /usr/lib/xorg/modules/drivers/nvidia_drv.so[ 1743.710] (II) Module nvidia: vendor=NVIDIA Corporation[ 1743.710] compiled for 4.0.2, module version = 1.0.0[ 1743.710] Module class: X.Org Video Driver[ 1743.710] (II) NVIDIA dlloader X Driver 375.10 Fri Oct 14 09:38:17 PDT 2016[ 1743.710] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs[ 1743.739] (II) Loading sub module fb[ 1743.739] (II) LoadModule: fb[ 1743.739] (II) Loading /usr/lib/xorg/modules/libfb.so[ 1743.739] (II) Module fb: vendor=X.Org Foundation[ 1743.739] compiled for 1.18.4, module version = 1.0.0[ 1743.739] ABI class: X.Org ANSI C Emulation, version 0.4[ 1743.739] (II) Loading sub module wfb[ 1743.739] (II) LoadModule: wfb[ 1743.739] (II) Loading /usr/lib/xorg/modules/libwfb.so[ 1743.739] (II) Module wfb: vendor=X.Org Foundation[ 1743.739] compiled for 1.18.4, module version = 1.0.0[ 1743.739] ABI class: X.Org ANSI C Emulation, version 0.4[ 1743.739] (II) Loading sub module ramdac[ 1743.739] (II) LoadModule: ramdac[ 1743.739] (II) Module ramdac already built-in[ 1743.740] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32[ 1743.740] (==) NVIDIA(0): RGB weight 888[ 1743.740] (==) NVIDIA(0): Default visual is TrueColor[ 1743.740] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)[ 1743.740] (**) NVIDIA(0): Enabling 2D acceleration[ 1744.067] (II) NVIDIA(0): NVIDIA GPU GeForce GTX 960M (GM107-A) at PCI:1:0:0 (GPU-0)[ 1744.067] (--) NVIDIA(0): Memory: 2097152 kBytes[ 1744.067] (--) NVIDIA(0): VideoBIOS: 82.07.94.00.0e[ 1744.067] (II) NVIDIA(0): Detected PCI Express Link width: 16X[ 1744.067] (II) NVIDIA(0): Validated MetaModes:[ 1744.067] (II) NVIDIA(0): NULL[ 1744.067] (II) NVIDIA(0): Virtual screen size determined to be 640 
x 480[ 1744.067] (WW) NVIDIA(0): Unable to get display device for DPI computation.[ 1744.067] (==) NVIDIA(0): DPI set to (75, 75); computed from built-in default[ 1744.067] (--) Depth 24 pixmap format is 32 bpp[ 1744.068] (II) NVIDIA: Using 12288.00 MB of virtual memory for indirect memory[ 1744.068] (II) NVIDIA: access.[ 1744.071] (II) NVIDIA(0): ACPI: failed to connect to the ACPI event daemon; the daemon[ 1744.071] (II) NVIDIA(0): may not be running or the AcpidSocketPath X[ 1744.071] (II) NVIDIA(0): configuration option may not be set correctly. When the[ 1744.071] (II) NVIDIA(0): ACPI event daemon is available, the NVIDIA X driver will[ 1744.071] (II) NVIDIA(0): try to use it to receive ACPI event notifications. For[ 1744.071] (II) NVIDIA(0): details, please see the ConnectToAcpid and[ 1744.071] (II) NVIDIA(0): AcpidSocketPath X configuration options in Appendix B: X[ 1744.071] (II) NVIDIA(0): Config Options in the README.[ 1744.088] (II) NVIDIA(0): Built-in logo is bigger than the screen.[ 1744.088] (II) NVIDIA(0): Setting mode NULL[ 1744.092] (==) NVIDIA(0): Disabling shared memory pixmaps[ 1744.092] (==) NVIDIA(0): Backing store enabled[ 1744.092] (==) NVIDIA(0): Silken mouse enabled[ 1744.093] (==) NVIDIA(0): DPMS enabled[ 1744.093] (II) Loading sub module dri2[ 1744.093] (II) LoadModule: dri2[ 1744.093] (II) Module dri2 already built-in[ 1744.093] (II) NVIDIA(0): [DRI2] Setup complete[ 1744.093] (II) NVIDIA(0): [DRI2] VDPAU driver: nvidia[ 1744.093] (--) RandR disabled[ 1744.095] (II) Initializing extension GLX[ 1744.095] (II) Indirect GLX disabled.[ 1744.145] (II) config/udev: Adding input device Power Button (/dev/input/event4)[ 1744.145] (**) Power Button: Applying InputClass evdev keyboard catchall[ 1744.145] (**) Power Button: Applying InputClass libinput keyboard catchall[ 1744.145] (II) LoadModule: libinput[ 1744.145] (II) Loading /usr/lib/xorg/modules/input/libinput_drv.so[ 1744.146] (II) Module libinput: vendor=X.Org Foundation[ 1744.146] 
compiled for 1.18.4, module version = 0.22.0[ 1744.146] Module class: X.Org XInput Driver[ 1744.146] ABI class: X.Org XInput driver, version 22.1[ 1744.146] (II) Using input driver 'libinput' for 'Power Button'[ 1744.146] (**) Power Button: always reports core events[ 1744.146] (**) Option Device /dev/input/event4[ 1744.146] (**) Option _source server/udev[ 1744.146] (II) input device 'Power Button', /dev/input/event4 is tagged by udev as: Keyboard[ 1744.146] (II) input device 'Power Button', /dev/input/event4 is a keyboard[ 1744.163] (**) Option config_info udev:/sys/devices/LNXSYSTM:00/LNXPWRBN:00/input/input5/event4[ 1744.163] (II) XINPUT: Adding extended input device Power Button (type: KEYBOARD, id 6)[ 1744.163] (II) input device 'Power Button', /dev/input/event4 is tagged by udev as: Keyboard[ 1744.163] (II) input device 'Power Button', /dev/input/event4 is a keyboard[ 1744.163] (II) config/udev: Adding input device Video Bus (/dev/input/event7)[ 1744.163] (**) Video Bus: Applying InputClass evdev keyboard catchall[ 1744.163] (**) Video Bus: Applying InputClass libinput keyboard catchall[ 1744.163] (II) Using input driver 'libinput' for 'Video Bus'[ 1744.163] (**) Video Bus: always reports core events[ 1744.163] (**) Option Device /dev/input/event7[ 1744.163] (**) Option _source server/udev[ 1744.164] (II) input device 'Video Bus', /dev/input/event7 is tagged by udev as: Keyboard[ 1744.164] (II) input device 'Video Bus', /dev/input/event7 is a keyboard[ 1744.186] (**) Option config_info udev:/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input9/event7[ 1744.186] (II) XINPUT: Adding extended input device Video Bus (type: KEYBOARD, id 7)[ 1744.187] (II) input device 'Video Bus', /dev/input/event7 is tagged by udev as: Keyboard[ 1744.187] (II) input device 'Video Bus', /dev/input/event7 is a keyboard[ 1744.187] (II) config/udev: Adding input device Video Bus (/dev/input/event8)[ 1744.187] (**) Video Bus: Applying InputClass evdev keyboard 
catchall[ 1744.187] (**) Video Bus: Applying InputClass libinput keyboard catchall[ 1744.187] (II) Using input driver 'libinput' for 'Video Bus'[ 1744.187] (**) Video Bus: always reports core events[ 1744.187] (**) Option Device /dev/input/event8[ 1744.187] (**) Option _source server/udev[ 1744.188] (II) input device 'Video Bus', /dev/input/event8 is tagged by udev as: Keyboard[ 1744.188] (II) input device 'Video Bus', /dev/input/event8 is a keyboard[ 1744.203] (**) Option config_info udev:/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:12/LNXVIDEO:01/input/input10/event8[ 1744.203] (II) XINPUT: Adding extended input device Video Bus (type: KEYBOARD, id 8)[ 1744.204] (II) input device 'Video Bus', /dev/input/event8 is tagged by udev as: Keyboard[ 1744.204] (II) input device 'Video Bus', /dev/input/event8 is a keyboard[ 1744.205] (II) config/udev: Adding input device Lid Switch (/dev/input/event1)[ 1744.205] (II) No input driver specified, ignoring this device.[ 1744.205] (II) This device may have been added with another device file.[ 1744.205] (II) config/udev: Adding input device Power Button (/dev/input/event3)[ 1744.205] (**) Power Button: Applying InputClass evdev keyboard catchall[ 1744.205] (**) Power Button: Applying InputClass libinput keyboard catchall[ 1744.205] (II) Using input driver 'libinput' for 'Power Button'[ 1744.205] (**) Power Button: always reports core events[ 1744.205] (**) Option Device /dev/input/event3[ 1744.205] (**) Option _source server/udev[ 1744.206] (II) input device 'Power Button', /dev/input/event3 is tagged by udev as: Keyboard[ 1744.206] (II) input device 'Power Button', /dev/input/event3 is a keyboard[ 1744.223] (**) Option config_info udev:/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input4/event3[ 1744.223] (II) XINPUT: Adding extended input device Power Button (type: KEYBOARD, id 9)[ 1744.224] (II) input device 'Power Button', /dev/input/event3 is tagged by udev as: Keyboard[ 1744.224] (II) input device 'Power 
Button', /dev/input/event3 is a keyboard[ 1744.224] (II) config/udev: Adding input device Sleep Button (/dev/input/event2)[ 1744.225] (**) Sleep Button: Applying InputClass evdev keyboard catchall[ 1744.225] (**) Sleep Button: Applying InputClass libinput keyboard catchall[ 1744.225] (II) Using input driver 'libinput' for 'Sleep Button'[ 1744.225] (**) Sleep Button: always reports core events[ 1744.225] (**) Option Device /dev/input/event2[ 1744.225] (**) Option _source server/udev[ 1744.225] (II) input device 'Sleep Button', /dev/input/event2 is tagged by udev as: Keyboard[ 1744.225] (II) input device 'Sleep Button', /dev/input/event2 is a keyboard[ 1744.243] (**) Option config_info udev:/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input3/event2[ 1744.243] (II) XINPUT: Adding extended input device Sleep Button (type: KEYBOARD, id 10)[ 1744.244] (II) input device 'Sleep Button', /dev/input/event2 is tagged by udev as: Keyboard[ 1744.244] (II) input device 'Sleep Button', /dev/input/event2 is a keyboard[ 1744.246] (II) config/udev: Adding input device Logitech M325 (/dev/input/event9)[ 1744.246] (**) Logitech M325: Applying InputClass evdev pointer catchall[ 1744.246] (**) Logitech M325: Applying InputClass libinput pointer catchall[ 1744.246] (II) Using input driver 'libinput' for 'Logitech M325'[ 1744.246] (**) Logitech M325: always reports core events[ 1744.246] (**) Option Device /dev/input/event9[ 1744.246] (**) Option _source server/udev[ 1744.246] (II) input device 'Logitech M325', /dev/input/event9 is tagged by udev as: Mouse[ 1744.247] (II) Device 'Logitech M325' set to 600 DPI[ 1744.247] (II) input device 'Logitech M325', /dev/input/event9 is a pointer caps[ 1744.293] (**) Option config_info udev:/sys/devices/pci0000:00/0000:00:14.0/usb1/1-8/1-8:1.2/0003:046D:C52B.0004/0003:046D:400A.0005/input/input11/event9[ 1744.293] (II) XINPUT: Adding extended input device Logitech M325 (type: MOUSE, id 11)[ 1744.293] (**) Option AccelerationScheme none[ 
1744.293] (**) Logitech M325: (accel) selected scheme none/0[ 1744.293] (**) Logitech M325: (accel) acceleration factor: 2.000[ 1744.293] (**) Logitech M325: (accel) acceleration threshold: 4[ 1744.294] (II) input device 'Logitech M325', /dev/input/event9 is tagged by udev as: Mouse[ 1744.294] (II) Device 'Logitech M325' set to 600 DPI[ 1744.294] (II) input device 'Logitech M325', /dev/input/event9 is a pointer caps[ 1744.295] (II) config/udev: Adding input device Logitech M325 (/dev/input/mouse0)[ 1744.295] (II) No input driver specified, ignoring this device.[ 1744.295] (II) This device may have been added with another device file.[ 1744.296] (II) config/udev: Adding input device HDA Intel PCH Mic (/dev/input/event11)[ 1744.296] (II) No input driver specified, ignoring this device.[ 1744.296] (II) This device may have been added with another device file.[ 1744.296] (II) config/udev: Adding input device HDA Intel PCH Headphone (/dev/input/event12)[ 1744.296] (II) No input driver specified, ignoring this device.[ 1744.296] (II) This device may have been added with another device file.[ 1744.297] (II) config/udev: Adding input device HDA Intel PCH HDMI/DP,pcm=3 (/dev/input/event13)[ 1744.297] (II) No input driver specified, ignoring this device.[ 1744.297] (II) This device may have been added with another device file.[ 1744.297] (II) config/udev: Adding input device HDA Intel PCH HDMI/DP,pcm=7 (/dev/input/event14)[ 1744.297] (II) No input driver specified, ignoring this device.[ 1744.297] (II) This device may have been added with another device file.[ 1744.298] (II) config/udev: Adding input device HDA Intel PCH HDMI/DP,pcm=8 (/dev/input/event15)[ 1744.298] (II) No input driver specified, ignoring this device.[ 1744.298] (II) This device may have been added with another device file.[ 1744.299] (II) config/udev: Adding input device AT Translated Set 2 keyboard (/dev/input/event0)[ 1744.299] (**) AT Translated Set 2 keyboard: Applying InputClass evdev keyboard 
catchall[ 1744.299] (**) AT Translated Set 2 keyboard: Applying InputClass libinput keyboard catchall[ 1744.299] (II) Using input driver 'libinput' for 'AT Translated Set 2 keyboard'[ 1744.299] (**) AT Translated Set 2 keyboard: always reports core events[ 1744.299] (**) Option Device /dev/input/event0[ 1744.299] (**) Option _source server/udev[ 1744.299] (II) input device 'AT Translated Set 2 keyboard', /dev/input/event0 is tagged by udev as: Keyboard[ 1744.299] (II) input device 'AT Translated Set 2 keyboard', /dev/input/event0 is a keyboard[ 1744.343] (**) Option config_info udev:/sys/devices/platform/i8042/serio0/input/input0/event0[ 1744.343] (II) XINPUT: Adding extended input device AT Translated Set 2 keyboard (type: KEYBOARD, id 12)[ 1744.344] (II) input device 'AT Translated Set 2 keyboard', /dev/input/event0 is tagged by udev as: Keyboard[ 1744.344] (II) input device 'AT Translated Set 2 keyboard', /dev/input/event0 is a keyboard[ 1744.345] (II) config/udev: Adding input device SynPS/2 Synaptics TouchPad (/dev/input/event10)[ 1744.345] (**) SynPS/2 Synaptics TouchPad: Applying InputClass evdev touchpad catchall[ 1744.345] (**) SynPS/2 Synaptics TouchPad: Applying InputClass libinput touchpad catchall[ 1744.345] (**) SynPS/2 Synaptics TouchPad: Applying InputClass touchpad catchall[ 1744.345] (**) SynPS/2 Synaptics TouchPad: Applying InputClass Default clickpad buttons[ 1744.345] (II) LoadModule: synaptics[ 1744.345] (II) Loading /usr/lib/xorg/modules/input/synaptics_drv.so[ 1744.345] (II) Module synaptics: vendor=X.Org Foundation[ 1744.345] compiled for 1.18.3, module version = 1.8.99[ 1744.345] Module class: X.Org XInput Driver[ 1744.345] ABI class: X.Org XInput driver, version 22.1[ 1744.345] (II) Using input driver 'synaptics' for 'SynPS/2 Synaptics TouchPad'[ 1744.345] (**) SynPS/2 Synaptics TouchPad: always reports core events[ 1744.345] (**) Option Device /dev/input/event10[ 1744.383] (II) synaptics: SynPS/2 Synaptics TouchPad: ignoring touch events 
for semi-multitouch device[ 1744.383] (--) synaptics: SynPS/2 Synaptics TouchPad: x-axis range 1472 - 5706 (res 44)[ 1744.383] (--) synaptics: SynPS/2 Synaptics TouchPad: y-axis range 1408 - 4800 (res 65)[ 1744.383] (--) synaptics: SynPS/2 Synaptics TouchPad: pressure range 0 - 255[ 1744.383] (--) synaptics: SynPS/2 Synaptics TouchPad: finger width range 0 - 15[ 1744.383] (--) synaptics: SynPS/2 Synaptics TouchPad: buttons: left right double triple[ 1744.383] (--) synaptics: SynPS/2 Synaptics TouchPad: Vendor 0x2 Product 0x7[ 1744.383] (--) synaptics: SynPS/2 Synaptics TouchPad: touchpad found[ 1744.383] (**) SynPS/2 Synaptics TouchPad: always reports core events[ 1744.423] (**) Option config_info udev:/sys/devices/platform/i8042/serio1/input/input7/event10[ 1744.423] (II) XINPUT: Adding extended input device SynPS/2 Synaptics TouchPad (type: TOUCHPAD, id 13)[ 1744.423] (**) synaptics: SynPS/2 Synaptics TouchPad: (accel) MinSpeed is now constant deceleration 2.5[ 1744.423] (**) synaptics: SynPS/2 Synaptics TouchPad: (accel) MaxSpeed is now 1.75[ 1744.423] (**) synaptics: SynPS/2 Synaptics TouchPad: (accel) AccelFactor is now 0.037[ 1744.423] (**) SynPS/2 Synaptics TouchPad: (accel) keeping acceleration scheme 1[ 1744.423] (**) SynPS/2 Synaptics TouchPad: (accel) acceleration profile 1[ 1744.423] (**) SynPS/2 Synaptics TouchPad: (accel) acceleration factor: 2.000[ 1744.423] (**) SynPS/2 Synaptics TouchPad: (accel) acceleration threshold: 4[ 1744.423] (--) synaptics: SynPS/2 Synaptics TouchPad: touchpad found[ 1744.424] (II) config/udev: Adding input device SynPS/2 Synaptics TouchPad (/dev/input/mouse1)[ 1744.424] (**) SynPS/2 Synaptics TouchPad: Ignoring device from InputClass touchpad ignore duplicates[ 1744.425] (II) config/udev: Adding input device PC Speaker (/dev/input/event5)[ 1744.425] (II) No input driver specified, ignoring this device.[ 1744.425] (II) This device may have been added with another device file.[ 1744.426] (II) config/udev: Adding input device 
MSI WMI hotkeys (/dev/input/event6)[ 1744.426] (**) MSI WMI hotkeys: Applying InputClass evdev keyboard catchall[ 1744.426] (**) MSI WMI hotkeys: Applying InputClass libinput keyboard catchall[ 1744.426] (II) Using input driver 'libinput' for 'MSI WMI hotkeys'[ 1744.426] (**) MSI WMI hotkeys: always reports core events[ 1744.426] (**) Option Device /dev/input/event6[ 1744.426] (**) Option _source server/udev[ 1744.426] (II) input device 'MSI WMI hotkeys', /dev/input/event6 is tagged by udev as: Keyboard[ 1744.426] (II) input device 'MSI WMI hotkeys', /dev/input/event6 is a keyboard[ 1744.443] (**) Option config_info udev:/sys/devices/virtual/input/input8/event6[ 1744.443] (II) XINPUT: Adding extended input device MSI WMI hotkeys (type: KEYBOARD, id 14)[ 1744.444] (II) input device 'MSI WMI hotkeys', /dev/input/event6 is tagged by udev as: Keyboard[ 1744.444] (II) input device 'MSI WMI hotkeys', /dev/input/event6 is a keyboard[ 1744.708] (II) UnloadModule: libinput[ 1744.708] (II) UnloadModule: synaptics[ 1744.708] (II) UnloadModule: libinput[ 1744.708] (II) UnloadModule: libinput[ 1744.708] (II) UnloadModule: libinput[ 1744.708] (II) UnloadModule: libinput[ 1744.708] (II) UnloadModule: libinput[ 1744.708] (II) UnloadModule: libinput[ 1744.709] (II) UnloadModule: libinput[ 1744.736] (II) NVIDIA(GPU-0): Deleting GPU-0[ 1744.807] (II) Server terminated successfully (0). 
Closing log file.

/etc/X11/xorg.conf:

# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 375.10 (buildmeister@swio-display-x86-rhel47-09) Fri Oct 14 11:11:07 PDT 2016

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0"
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
EndSection

#Section "Files"
#EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
#    VendorName     "Unknown"
#    ModelName      "Unknown"
#    HorizSync       28.0 - 33.0
#    VertRefresh     43.0 - 72.0
#    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "DiscreteNvidia"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 960M"
    BusID          "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    SubSection     "Display"
        Depth       24
    EndSubSection
EndSection | xorg not working | arch linux;xorg | null |
_vi.11911 | I have set Vim's default colorscheme to Monokai, and when I exit Vim the Monokai colors are still on the screen until I clear it.How can Vim be set to run clear automatically on exit, or change the color scheme or at least the background color to what it was before it started? | How can Vim be configured to restore normal terminal color on exit? | colorscheme;terminal | How to Clear the Screen When Exiting VimWhen Vim quits, it sends the escape sequence defined by the t_te setting to the terminal in order to tell it what to do. This should be set automatically by Vim to do something sensible, but it looks like something's going wrong in your setup: the likely candidate seems to be that your terminal is configured incorrectly, but let's investigate:To clear the screen, we need to send the clear escape sequence to our terminal. We can find out what sequence is required by querying our terminfo database with the infocmp command. By running infocmp in my terminal (which happens, like yours, to be xterm-256color), I see the following entry:clear=\E[H\E[2JIn this output, the left hand side of the equation refers to a terminal capability, and the right-hand side to the escape sequence used to access it. In the displayed sequence, the \E refers to an Escape character.So, the escape sequence used by xterm to clear the screen is <Esc>[H<Esc>[2J. If we want Vim to clear the screen on exit, we need to configure its t_te setting to send this sequence.We do so with the following Vim command::set t_te=^[[H^[2JN.B. In the above, the two instances of ^[ are Vim's representation of the escape character that was displayed as \E above. You type them by pressing Ctrl+VEsc: not by typing a ^ followed by a [ character.After quitting Vim, the screen should now be cleared.What Might Be Going Wrong For YouThis may or may not actually work for you, however. As already mentioned, Vim should set up t_te sensibly already. 
There are several places where the problem could be occurring:Your t_te might be set incorrectly by your .vimrc or by a plugin. Setting t_te as above should fix this,You have indicated your $TERM is set to xterm-256color. If you are actually using a different terminal, the escape sequence described above may be wrong. The way to fix this is by setting $TERM to match your actual terminal.It's possible (although less likely) that your terminfo database contains the incorrect info for xterm-256color. You can test if this is the case by running the command tput clear in your terminal. This queries the terminfo database for the clear capability, which is output to the terminal, and should result in the terminal clearing the screen. If this does not do so, then your terminfo database is incorrect, and will need fixing. This is not really a Vim issue, so you may find more help elsewhere.Extra CreditI wrote earlier that Vim should set t_te to something sensible. But for me, this is not clearing the screen. So what does it actually set t_te to?The command :set t_te?, for me, outputs: t_te=^[[?1049l. By looking in the infocmp output, I can see that this corresponds to a capability of rmcup.It turns out that this, in conjunction with the related t_ti setting (which is set to the smcup capability) sets up Vim to use xterm's alternate screen buffer for rendering — when Vim quits, the terminal state is reset to display whatever was displaying before I ran Vim.Again, you can try out this switching of screen buffers outside of Vim by running a sequence of commands in your terminal:ls # just to get something onscreentput smcup # The ls output disappearstput rmcup # The ls output reappears |
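If t_te does need to be set by hand, it can be made permanent in the vimrc. A minimal sketch, assuming xterm-compatible sequences (the \<Esc> notation avoids typing literal escape characters; adjust the sequences for other terminals):

```vim
" Use the alternate screen buffer so exiting Vim restores the prior
" terminal contents. These are xterm's smcup/rmcup sequences, taken
" from the infocmp output discussed above.
if &term =~ 'xterm'
  let &t_ti = "\<Esc>[?1049h"   " on start: switch to the alternate screen
  let &t_te = "\<Esc>[?1049l"   " on exit: switch back, restoring the screen
endif
```

Using the rmcup sequence here (rather than the clear sequence) gives the restore-previous-contents behaviour described under Extra Credit.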
_codereview.87117 | I have written the following utility, as my first non-tutorial program in Go.The purpose of the utility isto connect to a torque/force sensor (aka load-cell) via UDP;to send an initialization command; andto record the resulting stream of measurements.Since I am planning further functionality (setting duty cycle of a brushless motor controller via serial), which will be logged to the same file, I have separated the file writing from the UDP packet logging via a channel.Since this is a utility that serves a single purpose, error handling is minimal.I am hoping for feedback regarding the general style and idiomaticy of the code and the folder structure (separating loadcell-related files into a separate sub-package, in the loadcell sub-directory).Specifically, I would appreciate your thoughts on:A possible bug in go version go1.4.2 darwin/amd64, in loadcell.go (see comment before the final for loop)The FromNetworkBytes function in loadCellPacket.go. I really wanted to make a class method, e.g. packet := loadCellPacket.FromNetworkBytes(b), but as far as I found, the Go options are either: packet := loadCellPacketFromNetworkBytes(b) (a simple function), or var packet loadCellBytes; packet.FromNetworkBytes(b). I opted for the later, since I can reuse the packet variable, but I would appreciate feedback.The code is hosted here, and for longevity of this question, I repeat the files below. I have left out import statements for brevity../main.gopackage mainimport ( ... 
github.com/mikehamer/ati-torque-force-logger/loadcell)func main() { // Parse flags loadCellAddress := flag.String(address, 192.168.1.200:49152, The address of the loadcell) logFileName := flag.String(logfile, fmt.Sprintf(%s_loadcell.log, time.Now().Format(2006-01-02_15-04-05)), The name of the logfile) flag.Parse() //open CSV log logfile, err := os.Create(*logFileName) if err != nil { log.Fatal(err) } defer logfile.Close() // Setup communication channels receivedMeasurements := make(chan loadcell.Measurement) //connect and stream from loadcell go loadcell.ReceiveLoadCellStream(*loadCellAddress, receivedMeasurements) //loop and write logs fmt.Println(Saving output to, logfile.Name()) logfile.WriteString(t, Fx, Fy, Fz, Tx, Ty, Tz\n) for { select { case measurement := <-receivedMeasurements: logfile.WriteString(measurement.String()) } }}./loadcell/loadcell.gopackage loadcell// NETWORK CONSTANTSvar loadCellStartStreamCommand = loadCellCommand{0x1234, 0x0002, 0} // the command to send to enable realtime stream// ReceiveLoadCellStream opens a network stream to the loadcell, sends a configuration packet and then relays received measurements back through the supplied channelfunc ReceiveLoadCellStream(loadCellAddress string, receivedPackets chan<- Measurement) error { // calculate loadcell address remoteAddr, err := net.ResolveUDPAddr(udp, loadCellAddress) if err != nil { log.Fatal(err) } //open connection to loadcell conn, err := net.DialUDP(udp, nil, remoteAddr) if err != nil { log.Fatal(err) } fmt.Println(UDP Server: Local, conn.LocalAddr(), -> Remote, conn.RemoteAddr()) defer conn.Close() // send the command instructing the loadcell to begin a realtime data stream conn.Write(loadCellStartStreamCommand.NetworkBytes()) // begin receiving packets from the network connection and sending them on the outgoing channel startTime := time.Now() buf := make([]byte, 36) //BUG? 
This causes ReadFromUDP to block = GOOD //var buf []byte // While this causes ReadFromUDP to continuously return 0,nil,nil for { var packet loadCellPacket n, remoteAddr, err := conn.ReadFromUDP(buf) switch { //packet of the correct size is received case uintptr(n) == unsafe.Sizeof(packet): if err := packet.FromNetworkBytes(buf); err != nil { log.Fatal(err) } //decode it from network stream receivedPackets <- packet.ParseMeasurement() //packet is received but with incorrect size case n != 0: log.Print(From, remoteAddr, got unexpected bytes, buf[:n]) //an error occurs case err != nil: log.Fatal(err) } }}./loadcell/loadCellPacket.gopackage loadcell// loadCellPacket is the packet as received over the networktype loadCellPacket struct { RdtSequence uint32 // RDT sequence number of this packet. FtSequence uint32 // The records internal sequence number Status uint32 // System status code // Force and torque readings use counts values Fx int32 // X-axis force Fy int32 // Y-axis force Fz int32 // Z-axis force Tx int32 // X-axis torque Ty int32 // Y-axis torque Tz int32 // Z-axis torque}// FromNetworkBytes parses a loadCellPacket from a network (BigEndian) bytestreamfunc (s *loadCellPacket) FromNetworkBytes(b []byte) error { var packet loadCellPacket buf := bytes.NewReader(b) if err := binary.Read(buf, binary.BigEndian, &packet); err != nil { return err } *s = packet return nil}// ParseMeasurement creates a Measurement from the loadCellPacketfunc (s *loadCellPacket) ParseMeasurement(rxTime float64) Measurement { return Measurement{ rxTime, float32(s.Fx) / 1e6, float32(s.Fy) / 1e6, float32(s.Fz) / 1e6, float32(s.Tx) / 1e6, float32(s.Ty) / 1e6, float32(s.Tz) / 1e6}}./loadcell/LoadCellMeasurement.gopackage loadcell// Measurement is a loadCellPacket that has been converted into a useable formtype Measurement struct { RxTime float64 // the receive time, since the beginning of the program Fx float32 // x-force in Newtons Fy float32 // y-force in Newtons Fz float32 // z-force in 
Newtons Tx float32 // x-torque in Newton-meters Ty float32 // y-torque in Newton-meters Tz float32 // z-torque in Newton-meters}// Bytes returns the Measurement as a LittleEndian-encoded byte slice, ready for serializationfunc (s *Measurement) Bytes() []byte { buf := new(bytes.Buffer) if err := binary.Write(buf, binary.LittleEndian, s); err != nil { log.Fatal(err) } return buf.Bytes()}// String returns the Measurement as a comma-separated string, ready for loggingfunc (s *Measurement) String() string { return fmt.Sprintf(%.6f, %v, %v, %v, %v, %v, %v\n, s.RxTime, s.Fx, s.Fy, s.Fz, s.Tx, s.Ty, s.Tz)}./loadcell/loadCellCommand.gopackage loadcell// loadCellCommand is a command packet sent to the loadcelltype loadCellCommand struct { header uint16 // = 0x1234 Required command uint16 // Command to execute sampleCount uint32 // Samples to output (0 = infinite)}// NetworkBytes returns the loadCellCommand as a BigEndian-encoded byte slice ready for network transmissionfunc (s *loadCellCommand) NetworkBytes() []byte { buf := new(bytes.Buffer) if err := binary.Write(buf, binary.BigEndian, s); err != nil { log.Fatal(err) } return buf.Bytes()} | Utility that decodes and logs UDP packets | csv;logging;go;server;udp | null |
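Review point 2 above — wanting packet := loadCellPacketFromNetworkBytes(b) — can be had with a plain function that returns the value plus an error, which is the usual Go constructor idiom. A minimal sketch, with a simplified two-field struct standing in for the real loadCellPacket:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// A simplified stand-in for loadCellPacket.
type packet struct {
	RdtSequence uint32
	FtSequence  uint32
}

// Constructor-style alternative to the FromNetworkBytes method:
// decode a BigEndian byte stream and return the value and an error,
// instead of mutating a receiver.
func packetFromNetworkBytes(b []byte) (packet, error) {
	var p packet
	err := binary.Read(bytes.NewReader(b), binary.BigEndian, &p)
	return p, err
}

func main() {
	p, err := packetFromNetworkBytes([]byte{0, 0, 0, 1, 0, 0, 0, 2})
	fmt.Println(p, err) // prints "{1 2} <nil>"
}
```

On the //BUG? comment in loadcell.go: ReadFromUDP can only fill the buffer it is given, so a nil slice (var buf []byte) leaves it nothing to read into — which matches the "continuously return 0, nil, nil" behaviour observed. Allocating make([]byte, 36) is the correct usage, not a workaround.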
_softwareengineering.296825 | I am doing some research as to how most hardware accelerated GUI libraries work. I am actually only care about the rendering backends of them here. I am trying to figure out what would be the best way to try and write my own as a sort of side project. I am trying to go for ultimate performance here instead of overly fancy features. I want to b able to draw primitives, text and animation.Some good libraries that I know of are Qt, Skia and Cairo (though I'm not sure what the sate of HWA is on it). I have also looked at NanoVG which is a small library that seems to have a decent following. I did not manage to achieve decent performance with NanoVG though...The one thing that struck me was that all these libraries seem to make use of the concept of painting where it seems that each primitive shape gets drawn from scratch over and over again. What I mean by that is that from the APIs, it does not appear as if though the shapes are created as objects on the GPU or whatever the terminology is and then left there to be rendered automatically. In other words, they are not left on the GPU's memory for being redrawn in some large loop. To elaborate, it seems that for each i.e. rectangle that needs to be drawn, a whole OpenGL state is set up just to render that rectangle and is then destroyed again. It does appear though as if these rendered shapes are then at least rendered at their final destinations, allowing the GPU to then compose the entire scene.The way I expected these libraries to work is by actually storing the entire scene on the GPU (excuse the horrible terminology). For instance, primitives would be triangulated and left in memory where after some intricate process would be used to have a main rendering loop for the scene. Furthermore there would then be mechanisms in place to update attributes or delete or add primitives. 
This is quite a vague description but I think that you get the idea.What I would like to ask now, is if there is any performance benefit to the painting approach as compared to the saved approach (again, no idea if there are proper names for these things...). Some intricate caching mechanisms of sorts perhaps? Or is this just much simpler to work with?I realize that the saved approach might use more memory on the GPU but are all the OpenGL calls needed for the painting approach not vastly expensive? I guess one might be able to compensate for this by caching the rendered shapes but does the GPU really provide one with so large a benefit when doing such a once-off (or not very regular) rasterization as compared to CPU, especially given the communication overhead? Also, does this communication overhead not pose serious problems for animations when drawing has to be done for every frame?I am quite certain that NanoVG does not have an internal caching mechanism and I would assume that this could be responsible for its rather lackluster performance. Qt on the other hand seems to have excellent performance so it must be doing something right. Google also seems to be able to put Skia to good use.PS. I am not a professional of any sorts and have only recently started to learn OpenGL.EDIT:Another possibility that I have thought of is that maybe the painting approach was deemed necessary purely because of the memory benefits? The reason I would think that is because all these libraries were of course started in a different era and also target embedded platforms meaning that GPU memory might be so scarce on the target(ed) platforms and that using as little as possible of it might be more important than performance. 
Again though, in a situation like this, I am not convinced that frame-by-frame GPU rasterization given the communication overhead will outperform a CPU, especially considering the probably low pixel count on platforms with such little memory.Furthermore I have read on http://blog.qt.io/blog/2010/01/06/qt-graphics-and-performance-opengl/ that Qt apparently blends together shader code from precoded segments at runtime before painting and then hopes for the OGL compiler to inline the code properly at runtime. This sound like even more OGL initialization overhead to me... | Is hardware accelerated GUI data kept on the GPU | gui;qt;opengl;gpu | Saving the whole window as a single object into GPU (it would be bunch of rectangles saved as VBO) and then rendering it in a single OpenGL draw call would be fast, but it has several disadvantages:The whole geometry would have to be rendered using single shader. Having separate shaders (for opaque copy, transparent copy, gradient, ...) is more useful.The whole geometry could use only from limited amount of textures. Even if you use atlases, you need lot of textures for GUI. (Pieces of the GUI theme, icons, fonts, ...)You have to rebuild and reload the whole object into GPU after every change.Every widget must be able to produce their piece of geometry which is harder to abstract than 2D painting.On some GPUs you can render 2D stuff (filling area with color, copying from picture to picture, ...) with 2D commands which is faster than using 3D pipeline.If you break it into several objects, then you eventually end up with single or few rectangles per object. It's easier and faster to render them without any stored objects.What GUI frameworks do is tracking which exact parts of the window changed and repainting only them. The old image is cached in the GPU. 
This approach can be used with various drawing backends, not only OpenGL/DirectX accelerated rendering.If you want to see an example of a GUI library that generates geometries which you can feed into OpenGL (or into a different 3D API), look at librocket. It can actually bake static geometries together and render them in a single draw call, but any element that will change often or needs to render with its own shader has to stay separate.
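The damage-tracking idea in the last paragraph — record which parts of the window changed and repaint only those — can be sketched in a few lines. A toy Python version (class and method names invented for illustration) that accumulates dirty rectangles into one bounding box handed to the painter once per frame:

```python
class DamageTracker:
    """Accumulates dirty rectangles; the painter repaints only their union."""

    def __init__(self):
        self._dirty = None  # (x0, y0, x1, y1) bounding box, or None if clean

    def mark_dirty(self, x, y, w, h):
        # Widgets call this when their contents change.
        box = (x, y, x + w, y + h)
        if self._dirty is None:
            self._dirty = box
        else:
            a, b = self._dirty, box
            self._dirty = (min(a[0], b[0]), min(a[1], b[1]),
                           max(a[2], b[2]), max(a[3], b[3]))

    def take_repaint_region(self):
        # Called once per frame: return the region to repaint (or None)
        # and reset the tracker. Everything else stays cached on the GPU.
        region, self._dirty = self._dirty, None
        return region
```

Real toolkits keep a list of rectangles rather than one bounding box, but the principle is the same: unchanged pixels are never re-rendered.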
_codereview.45686 | Below is a full working code example of code which is used to compute stats on two class vectors (partitions). There are two functions:pairwise_indication and to_clust_set. The first one is the function in question and the second is just for completeness, since it's apparently not so time consuming. pairwise_indication returns the indicators n11, n00, n10, n01 which are used to compute scores such as the RAND Index (they're named a, b, c, d on that site). The bottleneck probably are the loops (depth is 3) and the logic and look-up stuff inside of pairwise_indication. I already tried to improve the logical function in it by pulling forward the common cases. I was hoping it can be improved some further. What makes it interminable (if I get the profiling at the bottom of the post correct) are the loops and maybe the set look-ups using in, though I cannot imagine something faster.import itertools as itimport numpy as npdef pairwise_indication(part1, part2): r Computes the pairwise indicators. 
Parameters ---------- part1, part2 : array, (n) The two partition vectors Returns ------- n11, n00, n10, n01 : tuple (int) Tuple with the counts of incidences Examples -------- >>> from cluvap.calc import pairwise_indication >>> import numpy as np >>> p1 = np.array([1, 2, 3, 3, 1, 2]) >>> p2 = np.array([3, 2, 3, 3, 1, 2]) >>> pairwise_indication(p1, p2) (2, 10, 1, 2) n = len(part1) if n != len(part2): raise ValueError('Partition shapes do not match') # Create clustering as set A = to_clust_set(part1) B = to_clust_set(part2) observations = np.arange(n) n11, n00, n10, n01 = 0, 0, 0, 0 # Count incidences # function: P'QR'S v P'QRS v PQRS v PQR'S (' == not, v == or) # condensed: P('QR('S v S) v QR('S v S)) # P: obs1 in a # Q: obs2 in a # R: obs1 in b # S: obs2 in b for obs1, obs2 in it.combinations(observations, 2): for a in A: for b in B: if obs1 in a: if obs2 not in a: if obs1 in b: if obs2 not in b: n00 += 1 else: n01 += 1 else: if obs1 in b: if obs2 in b: n11 += 1 else: n10 += 1 return n11, n00, n10, n01def to_clust_set(part): Converts a partition to a set of clusters. Noise is considered a cluster. Parameters ---------- part : arrays, (n) Partitions. A partition ``p`` assigns the ``n``-th observation to the cluster ``p[n]``. 
Returns ------- cset : list of sets Examples -------- >>> from cluvap.calc import to_clust_set >>> import numpy as np >>> p = np.array([1, 2, 3, 3, 1, 2]) >>> to_clust_set(p) [set([0, 4]), set([1, 5]), set([2, 3])] clusters = set(part) obs = np.arange(len(part)) return [set(obs[part == C]) for C in clusters]if __name__ == __main__: from time import time p1 = np.array([1, 2, 3, 4, 5] * 20) p2 = np.array([2, 3, 4, 5, 1] * 20) t0 = time() pi = pairwise_indication(p1, p2) t = (time() - t0) * 1000 print 'pairwise_indication(p1, p2) in {:.2f} ms:\n'.format(t), piOutputpairwise_indication(p1, p2) in 19.00 ms:(950, 4000, 0, 0)line profiling>>> p1 = np.array([1, 2, 3, 4, 5] * 20)>>> p2 = np.array([2, 3, 4, 5, 1] * 20)>>> %lprun -f pairwise_indication pairwise_indication(p1, p2)Timer unit: 4.10918e-07 sFile: cluvap\calc.pyFunction: pairwise_indication at line 713Total time: 0.603277 sLine # Hits Time Per Hit % Time Line Contents============================================================== 713 def pairwise_indication(part1, part2):... 736 1 28 28.0 0.0 n = len(part1) 737 1 17 17.0 0.0 if n != len(part2): 738 raise ValueError('Partition shapes do not match') 739 # Create clustering as set 740 1 1455 1455.0 0.1 A = to_clust_set(part1) 741 1 1154 1154.0 0.1 B = to_clust_set(part2) 742 1 33 33.0 0.0 observations = np.arange(n) 743 1 14 14.0 0.0 n11, n00, n10, n01 = 0, 0, 0, 0 744 # Count incidences 745 # (can this be improved using some kind of indexing / logical compression?) 
746 4951 19884 4.0 1.4 for obs1, obs2 in it.combinations(observations, 2): 747 29700 115151 3.9 7.8 for a in A: 748 148500 576491 3.9 39.3 for b in B: 749 123750 505606 4.1 34.4 if obs1 in a: 750 24750 100784 4.1 6.9 if obs2 not in a: 751 20000 84631 4.2 5.8 if obs1 in b: 752 4000 16489 4.1 1.1 if obs2 not in b: 753 4000 17915 4.5 1.2 n00 += 1 754 else: 755 n01 += 1 756 else: 757 4750 20272 4.3 1.4 if obs1 in b: 758 950 4005 4.2 0.3 if obs2 in b: 759 950 4185 4.4 0.3 n11 += 1 760 else: 761 n10 += 1 762 763 1 5 5.0 0.0 return n11, n00, n10, n01 | Compute stats on two class vectors | python;algorithm;performance;combinatorics | If I understood what you want to do, this would be a more direct way to compute the same result. At least for the test cases provided, the result is indeed the same.def pairwise_indication(part1, part2): if len(part1) != len(part2): raise ValueError('Partition shapes do not match') n11, n00, n10, n01 = 0, 0, 0, 0 for (a1,a2), (b1,b2) in it.combinations(zip(part1, part2), 2): if a1 == b1: if a2 == b2: n11 += 1 else: n10 += 1 else: if a2 == b2: n01 += 1 else: n00 += 1 return n11, n00, n10, n01On my computer this is 13 times faster than yours.Here's a completely different approach based on the idea that you can partition the partitions with one another (using zip in Python) to obtain a finer partition. (There must be a word for that?) From the size of each subset you can directly calculate the number of pairs that can be formed. Add up the numbers for each partition and subtract the overlap. 
This is 70 times faster than yours with the given example, and more importantly, operates in linear time, so it scales well to larger data.from collections import Counterdef pairs(n): '''Calculate number of pairs that can be formed from n items''' return n * (n - 1) // 2def partition_pairs(partition): '''Calculate number of pairs in subsets of partition''' return sum(pairs(x) for x in Counter(partition).values())def pairwise_indication(part1, part2): n = len(part1) if n != len(part2): raise ValueError('Partition shapes do not match') n11 = partition_pairs(zip(part1, part2)) n10 = partition_pairs(part1) - n11 n01 = partition_pairs(part2) - n11 n00 = pairs(n) - n11 - n10 - n01 return n11, n00, n10, n01 |
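The linear-time version can be checked against the expected output from the question's docstring. A self-contained sketch (plain lists instead of numpy arrays, which this approach does not need):

```python
from collections import Counter

def pairs(n):
    # Number of unordered pairs that can be formed from n items.
    return n * (n - 1) // 2

def partition_pairs(partition):
    # Pairs that land inside the same subset of the partition.
    return sum(pairs(x) for x in Counter(partition).values())

def pairwise_indication(part1, part2):
    n = len(part1)
    # zip(part1, part2) is the common refinement of the two partitions,
    # so its within-subset pairs are exactly the n11 agreements.
    n11 = partition_pairs(zip(part1, part2))
    n10 = partition_pairs(part1) - n11
    n01 = partition_pairs(part2) - n11
    n00 = pairs(n) - n11 - n10 - n01
    return n11, n00, n10, n01

p1 = [1, 2, 3, 3, 1, 2]
p2 = [3, 2, 3, 3, 1, 2]
print(pairwise_indication(p1, p2))  # (2, 10, 1, 2)
```

This matches the docstring example from the original post, so the refactor preserves behaviour while replacing the triple loop with three Counter passes.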
_softwareengineering.117766 | I am a computer science student, and as a result, I was taught C++ as a better version of C with classes. I end up trying to reinvent the wheel whenever a solution to a complex problem is needed, only to find sometime after that, some language feature or some standard library routine could potentially have done that for me. I'm all comfortable with my char* and *(int*)(someVoidPointer) idioms, but recently, after making a (minor) contribution to an open-source project, I feel that is not how one's supposed to think when writing C++ code. It's much different than C is.Considering that I know objected-oriented programming fairly well, and I am okay with a steep learning curve, what would you suggest for me to get my mind on the C++ track when I'm coding C++? | How can I learn to write idiomatic C++? | c++ | Based on your comments you know the C++ syntax.You are not coding in C++ but what is often refereed to as C with classes.The C++ tag on stackoverflow is a good place to start, it includes a reading list and FAQ.The only real way to learn is to write code and get experienced user to comment. You can put your code here for review. A good exampleI'm all comfortable with my char* s Stop using them, switch to std::string.and (int)(someVoidPointer) idioms.Stop using them (apart from to interface with C code). Using the functor concept provides several advantages (included the idea of encapsulating state).But recently, after making a (minor) contribution to an OSS project, I feel that is not how you think in C++. It's much different, though C has its own place.Yes. C and C++ have diverged as languages. 
Though you can use practically the same syntax, what is considered good C code is generally not considered good C++ code (or vice versa).Some friends have suggested Accelerated C++, but again I know what types are, and what classes are and what overloading is.You have the very basics down.How can a (mutilated) C++ programmer, who happens to be sound with the OO concepts, write idiomatic programs in the language?With a lot of work :-)
_datascience.12707 | Consider I have one dependent variable to predict 'Attitude' which can take three values 'Positive/Negative/Neutral'.I have following independent variables or features- Age, Height, Gender, Income etc. I trying to predict Attitude using decision tree classifier.Attitude ~ Age + Height + Gender + Income (Decision Tree) I am getting >90% accuracy for the when tree depth is 15. As tree is dividing on continuous variables (i.e. Age, Income and Height) again and again to get leaf with pure classes. Is this problem of overfitting? Should I convert the continuous variables into categorical variables (like range classes)? | Should we convert independent continous variables (features) to categorical variable before using decision tree like classifier? | machine learning;classification;random forest;decision trees;preprocessing | There is no need to split continuous variables because the tree already does that automatically. The only way you can test for overfitting is by either using a holdout set or by doing cross validation. If you are overfitting, changing a continuous variable to a categorical variable likely won't make a difference. If you get the sense that you're overfitting, you should reduce the depth of your tree. |
_webmaster.16291 | I run a product recommendation engine and I'm hitting a few snags. I'm looking to see if anyone has any recommendations on what I should do to minimize these issues.Here's how the site works:Users come to the site and are presented with product recommendations based on some criteria. If a user knows of a product that is not in our system, they can add it by providing the product name and manufacturer. We take that information, and:Hit one API to gather all the product meta-data (and to validate the product spelling, etc). If the product is not in this first API, we do not allow it in our system. Use the information from step 1 to hit another API for pricing information (gathered from many places online).For the sake of discussion, assume that I am searching both APIs in the most efficient/successful manner possible.For the most part, this works very well. I'd say ~80% of our data is perfectly accurate, but there are a few issues:Sometimes the pricing API (Step 2) doesn't have any information for the product. The way the pricing API is built, it will always return something (theoretically, the closest possible match), and there's no guarantee that the product name is spelled exactly the same way in both APIs, so there's no automated way of knowing if it's the right product.When the pricing API finds the right product, occasionally it has outdated, or even invalid pricing data (e.g. if it screen-scraped the wrong price from a website).Since the site was fairly small at first, I was able to manually verify every product that was added to the website. However, the site has grown to the point where this is taking several hours per day, and is just not efficient use of my time.So, my question is:Aside from hiring someone (or getting an intern) to validate all the data manually, what would be the best system of letting my userbase self-manage the data. 
Specifically, how can I allow users to edit the data while minimizing the risk of someone ambushing my website, or accidentally setting the data incorrectly. | Best way to implement user-powered data validation | api;data;user input;user generated content | Make it a game where people get points for fixing the data (checking that the pricing APIs product is the same, isn't invalid etc.)People with low points don't get awarded them until a number of other users have fixed the data in the same way. As people get more points it needs fewer other people to crosscheck them, until people with very high points don't need checking at all. So it's a bit like reputation on this and the other stackexchange sites.You could reward those with points with tangible things, like discounts on products, early notifications of good deals and so on. You don't want to make those too large, or it's worth them gaming the system and making lots of money. |
_unix.92720 | The way I understand it, initramfs is responsible for loading the real root filesystem. Now, there are two places where we define that root. First we put an entry in /etc/fstab. Second, we put the device on the kernel boot commands e.g. root=/dev/sda1. Which one does initramfs use to determine where is the root filesystem? If it uses the root kernel parameter, why do we have an entry in /etc/fstab? The second option, (it reads /etc/fstab), is quite illogical because the /etc/fstab file is on the very root device that initramfs is trying to mount in the first place. Very confusing stuff. | Does initramfs use /etc/fstab? | boot;fstab;initramfs | As you stated, the purpose of initramfs is to get the real root filesystem mounted (it can do other things too, but this is the common task).Without an initramfs, the kernel will normally mount a partition up as read-only and then pass control over to /sbin/init. An initramfs just takes over this task from the kernel, usually when the root filesystem isn't a normal partition (mdraid, lvm, encrypted, etc).Now, aside from the background on initramfs, your /etc/fstab resides on your root filesystem. As such, when initramfs is launched, that root filesystem isn't there, and so it can't get to the fstab (chicken and egg problem).Instead we have to pass a parameter into the kernel boot arguments for the initramfs to use. Normally this is something like root=/dev/sdX. However it might also do something to automatically figure out where your root device is, and so there's no parameter at all. Since it's just software (generally a script), it can really do anything it wants for mounting the root device.Now, as stated earlier, the kernel will mount the real root as read-only. The initramfs should do exactly this. Once the initramfs is done, the system proceeds booting exactly as if there were no initramfs at all, and /sbin/init starts up. 
This init then starts all your normal boot scripts, and it's the job of one of these scripts to read /etc/fstab, remount the root filesystem read-write, and mount all your other filesystems.
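The chicken-and-egg point above is exactly why initramfs scripts parse root= from the kernel command line rather than reading /etc/fstab. A runnable sketch of that parsing step (a sample string stands in for /proc/cmdline so it runs anywhere; real initramfs scripts handle many more cases):

```shell
# The root filesystem is not mounted yet, so /etc/fstab is unreachable;
# the only place the root device can come from is the kernel command line.
cmdline="BOOT_IMAGE=/vmlinuz-linux root=/dev/sda1 rw quiet"   # stand-in for /proc/cmdline
root=""
for arg in $cmdline; do
    case "$arg" in
        root=*) root="${arg#root=}" ;;
    esac
done
echo "$root"    # prints /dev/sda1
```

A real initramfs /init would then mount "$root" read-only on a staging directory and switch_root into it before /sbin/init takes over.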
_scicomp.16396 | I have seen videos like this before in the past of things like 3D Mandlebulbs and similar fractal sets, but can anyone tell me what sorts of programs are actually used create these visualizations? How computationally expensive are these visualizations to make? Also, what sort of simulations are these? I mean, what sort of coordinate system, or mesh, are these sorts of structures realized on? I have most of my experience with computational simulations in FORTRAN, Python, and Matlab, and I just don't see how you would even begin to undertake creating the data to produce these sorts of visualizations in those languages. Thanks. | How-to: Epic visualizations of 3D fractals? | visualization | null |
_codereview.171705 | CodeIgniter has a query caching class that is initiated in the query function of the DB_driver class. Originally it was designed to store queries associated with a specific controller by using URI segments (segment 1 + segment 2). I found this a rather strange implementation if one is running both the frontend and backend on the same installation of CI. This meant backend files would be stored in the cache folder like admin+projects, whereas frontend cache files would be stored like projects.

There is a seemingly little-known variable (cache_autodel) that exists in the driver source code that allows any non-returning query (INSERT, UPDATE, DELETE, etc.) to trigger a deletion event of the cache files associated with the particular controller, e.g. admin+projects, but it doesn't delete those in the projects cache folder even though the database has changed, rendering the cached results obsolete. Thus you are left with having to either manually delete the cache files, or implement a routine in every INSERT, UPDATE, etc. to take care of deleting these files. Since the files can only be found by controller and not by another identifier, it is literally a mess to take care of.

What I did was the following:

Kept the cache file naming the same as CI's implementation, which involves md5-ing the SQL statement, but organized the files in subfolders not by controller, but by table name.
Implemented triggers after INSERT, UPDATE, DELETE for all my tables that write a master last_modified = NOW() to a database table with fields: table_id, table_name and last_modified.
For read caches: compared the master last_modified for a table against the creation/modification time of a cache file to determine if it needs to be deleted or not. If not, returned the cache file.

Code:

class CI_DB_Cache
{
    /**
     * CI Singleton
     *
     * @var object
     */
    public $CI;

    /**
     * Database object
     *
     * Allows passing of DB object so that multiple database connections
     * and returned DB objects can be supported.
     *
     * @var object
     */
    public $db;

    /**
     * Constructor
     *
     * @param object &$db
     * @return void
     */
    public function __construct(&$db)
    {
        // Assign the main CI object to $this->CI and load the file helper since we use it a lot
        $this->CI = & get_instance();
        $this->db = & $db;
        $this->CI->load->helper('file');
        $this->check_path();
    }

    /**
     * Set Cache Directory Path
     *
     * @param string $path Path to the cache directory
     * @return bool
     */
    public function check_path($path = '')
    {
        if ($path === '')
        {
            if ($this->db->cachedir === '')
            {
                return $this->db->cache_off();
            }
            $path = $this->db->cachedir;
        }

        // Add a trailing slash to the path if needed
        $path = realpath($path)
            ? rtrim(realpath($path), DIRECTORY_SEPARATOR) . DIRECTORY_SEPARATOR
            : rtrim($path, '/') . '/';

        if (!is_dir($path))
        {
            log_message('debug', 'DB cache path error: ' . $path);
            // If the path is wrong we'll turn off caching
            return $this->db->cache_off();
        }

        if (!is_really_writable($path))
        {
            log_message('debug', 'DB cache dir not writable: ' . $path);
            // If the path is not really writable we'll turn off caching
            return $this->db->cache_off();
        }

        $this->db->cachedir = $path;
        return TRUE;
    }

    /**
     * Gets table name from SQL statement
     *
     * @param string $sql SQL statement
     * @return string|null
     */
    private function get_table_name($sql)
    {
        $pattern = '/FROM `(.*?)`/';
        preg_match($pattern, $sql, $matches);
        return isset($matches[1]) ? $matches[1] : null;
    }

    /**
     * Returns the strtotime equivalent of the last_modified field
     * for a given table
     *
     * @param string $table
     * @return bool|int
     */
    private function get_last_modified($table)
    {
        $res = $this->db->simple_query("SELECT `last_modified` FROM `master_table_modified` WHERE `table_name` = '{$table}'");
        if ($res !== true && $res->num_rows !== 1)
        {
            return false;
        }
        return strtotime($res->fetch_row()[0]);
    }

    /**
     * Retrieve a cached query
     *
     * Cache sub-folder is the name of the table
     *
     * @param string $sql
     * @return string
     */
    public function read($sql)
    {
        $table = $this->get_table_name($sql);
        if (is_null($table))
        {
            return false;
        }

        $filepath = $this->db->cachedir . $table . DS . md5($sql);
        $table_last_modified = $this->get_last_modified($table);
        if ($table_last_modified === FALSE)
        {
            return false;
        }
        if (!is_file($filepath))
        {
            return false;
        }

        // check table last modified against file modified time
        if ($table_last_modified > filemtime($filepath))
        {
            @unlink($filepath);
            return false;
        }

        if (FALSE === ($cachedata = file_get_contents($filepath)))
        {
            return false;
        }
        return unserialize($cachedata);
    }

    // --------------------------------------------------------------------

    /**
     * Write a query to a cache file
     *
     * @param string $sql
     * @param object $object
     * @return bool
     */
    public function write($sql, $object)
    {
        $table = $this->get_table_name($sql);
        if (is_null($table))
        {
            return false;
        }

        $dir_path = $this->db->cachedir . $table . DS;
        $filename = md5($sql);
        if (!is_dir($dir_path) && !@mkdir($dir_path, 0750))
        {
            return FALSE;
        }
        if (write_file($dir_path . $filename, serialize($object)) === FALSE)
        {
            return FALSE;
        }
        chmod($dir_path . $filename, 0640);
        return TRUE;
    }

    // --------------------------------------------------------------------

    /**
     * Delete cache files within a particular directory
     *
     * @deprecated
     */
    public function delete($segment_one = '', $segment_two = '')
    {
        return;
    }

    // --------------------------------------------------------------------

    /**
     * Delete all existing cache files
     *
     * @return void
     */
    public function delete_all()
    {
        delete_files($this->db->cachedir, TRUE, TRUE);
    }
}

Notes:

Since the only statements worth caching in my app are SELECTs, it's rather easy to find the table name from an SQL statement using regex.
If one is solely using the query builder, you could easily assign the table name to a variable accessible in the driver class and then pass it to the cache class, but I sometimes just use the $this->db->query() function straight off.
MyISAM tables generate last modified by default, but InnoDB (what I'm using) only stores this in the database schema in MySQL 5.7+ (I'm on 5.6).

Results:

Seemingly the same load times as with just the regular cache class. Benefits of not creating additional files for overlapping queries in the same table. | CodeIgniter cache replacement | php;codeigniter | null |
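The read-path rule the class implements (serve a cached result only while the cache file is newer than the table's master last_modified row) is worth seeing in isolation. Below is a minimal, in-memory sketch of that decision; Python is used purely as neutral, runnable pseudocode, and the dict stand-ins for cache files and the master table are mine, not CodeIgniter API:

```python
import hashlib
import time

class QueryCache:
    """Toy model of the read/write decision in the PHP class above."""

    def __init__(self):
        self.files = {}           # (table, key) -> (mtime, payload), like cachedir/<table>/<md5>
        self.table_modified = {}  # table -> master last_modified timestamp

    def key(self, sql):
        # Same idea as CodeIgniter: the file name is the md5 of the SQL.
        return hashlib.md5(sql.encode()).hexdigest()

    def write(self, table, sql, payload):
        self.files[(table, self.key(sql))] = (time.time(), payload)

    def read(self, table, sql):
        entry = self.files.get((table, self.key(sql)))
        if entry is None:
            return None
        mtime, payload = entry
        # Invalidate when the table changed after the file was written.
        if self.table_modified.get(table, 0) > mtime:
            del self.files[(table, self.key(sql))]
            return None
        return payload

cache = QueryCache()
cache.write("projects", "SELECT * FROM `projects`", ["row1", "row2"])
print(cache.read("projects", "SELECT * FROM `projects`"))  # ['row1', 'row2'] (cache hit)
cache.table_modified["projects"] = time.time() + 1          # a trigger bumped last_modified
print(cache.read("projects", "SELECT * FROM `projects`"))  # None (stale entry evicted)
```

The same comparison maps back to the PHP: get_last_modified() supplies table_modified, and filemtime() supplies mtime.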
_cs.18312 | I've got 30 elements which have to be grouped into 10 ordered 3-tuples. There are several rules and constraints about the grouping. For example: element $A$ must not be in the same tuple as element $B$; element $C$ must not come right before element $A$; etc. I am searching for an approximation algorithm: we don't need to achieve the exact optimum, and it is OK for some rules to be left unsatisfied if that helps to fulfil more rules overall. Do you know of any algorithm or procedure that solves this problem or a similar one? I fear that to solve it optimally, you have to try out every possible solution -> $2^{30}$.

EDIT: Sorry for the bad explanation. I am trying to make it a bit clearer: I have 30 elements, for example $\{1,2,3,\ldots,30\}$. I need to group them into 3-tuples so that I get something like $(1,2,3)$, $(4,5,6)$, $\ldots$, $(28,29,30)$. There are several constraints. For example: 1 cannot precede 2 in an ordered tuple, so, for instance, $(1,2,3)$ is not a valid tuple; 5 must be together with 4. The constraints can be broken, and it is possible that there is no solution where all rules can be fulfilled. A solution is considered good if the number of broken rules is low. Hope that makes it clearer, and thanks for the help so far. | Algorithm for sorting with constraints | algorithms;sorting;randomized algorithms;greedy algorithms | Just to let anyone know who has a similar problem: I found a genetic algorithm to be a solution to it.

1. Create a population by creating multiple individuals. This is done by placing the elements at random positions in a vector.
2. Generate the fitness of the individuals by checking how many rules are broken. The fitness is reduced by 1 per broken rule.
3. Check whether the solution is acceptable (either fitness = 0 or the termination criterion is satisfied).
4. Do tournament selection with a suitable size (I chose 3) on the population -> take the tournament winner -> reproduce it, mutate it, or 1-point-crossover two winners, and add the result to the limbo (the offspring pool).
5. Repeat step 4 until the limbo reaches the population size.
6. Go to step 3.

Hope you get the idea of it. Thanks for the comments on the original question. If you have any questions, feel free to ask. |
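The recipe above is compact enough to sketch end-to-end. Here is a toy, runnable version in Python with 9 elements forming 3 ordered triples; the two rules, the mutation-only variation, and the population parameters are illustrative choices of mine, not the poster's exact setup:

```python
import random

ELEMENTS = list(range(9))  # toy size: 9 elements -> 3 ordered triples

def rule_adjacent(arr):
    # Broken when 0 comes right before 1 anywhere in the arrangement.
    return any(a == 0 and b == 1 for a, b in zip(arr, arr[1:]))

def rule_together(arr):
    # Broken unless 4 and 5 share a triple.
    triples = [arr[i:i + 3] for i in range(0, len(arr), 3)]
    return not any(4 in t and 5 in t for t in triples)

RULES = [rule_adjacent, rule_together]

def fitness(arr):
    # 0 is perfect; one point subtracted per broken rule.
    return -sum(rule(arr) for rule in RULES)

def tournament(pop, k=3):
    # Pick k random individuals, keep the fittest.
    return max(random.sample(pop, k), key=fitness)

def mutate(arr):
    # Swap two random positions.
    arr = arr[:]
    i, j = random.sample(range(len(arr)), 2)
    arr[i], arr[j] = arr[j], arr[i]
    return arr

def solve(pop_size=40, generations=300, seed=0):
    random.seed(seed)
    pop = [random.sample(ELEMENTS, len(ELEMENTS)) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        if fitness(best) == 0:  # acceptable: nothing broken
            break
        pop = [mutate(tournament(pop)) for _ in range(pop_size)]
        best = max(pop + [best], key=fitness)  # keep the best seen so far
    return best

best = solve()
print(best, fitness(best))  # fitness 0 means every rule is satisfied
```

Crossover is omitted here for brevity; for the full 30-element instance, the answer's 1-point crossover and a repair step for duplicated elements would be the natural additions.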
_unix.290331 | I'd like to attach some files in mutt's compose screen. I press a to attach. However, if I paste in a path with spaces, it eats the spaces up. Similarly, if I drag and drop a GUI icon into my terminal, it will similarly eat up the spaces. Invariably, I mess around a few times, then manually type out (with tab-complete) the entire path. How can I easily attach files from within mutt? | In mutt, how can I easily attach files which contain spaces in their name? | mutt | You can change the key bindings of the line editor prompt to make Space insert a space. By default, it invokes buffy-cycle, which cycles through completion possibilities or offers a completion menu. You can rebind this to another key, for example Alt+Space (I think mutt can't handle Ctrl+Space, which the terminal transmits as a null byte).

macro editor <space> "\Cv "
bind editor "\e " buffy-cycle

As far as I know, you can't have different key bindings for different kinds of prompts. You can change key bindings dynamically by calling bind in hooks, but I don't think there's a hook that runs at the right time.

Alternatively (or in addition), you can define a macro in the compose menu that attaches a file whose name is in the clipboard.

macro compose \Ca "<attach-file>`xsel -b | sed 's/ /$(printf \\026)&/g'`<enter>" |
_codereview.131879 | Recently, I have been indulging in a lot of Codility challenges to improve my coding performance. For each of these exercises, I always aim for simple solutions as opposed to complicated ones that arrive at the same answer. The question is:

Two positive integers N and M are given. Integer N represents the number of chocolates arranged in a circle, numbered from 0 to N - 1. You start to eat the chocolates. After eating a chocolate you leave only a wrapper. You begin with eating chocolate number 0. Then you omit the next M - 1 chocolates or wrappers on the circle, and eat the following one. More precisely, if you ate chocolate number X, then you will next eat the chocolate with number (X + M) modulo N (remainder of division). You stop eating when you encounter an empty wrapper. For example, given integers N = 10 and M = 4, you will eat the following chocolates: 0, 4, 8, 2, 6. The goal is to count the number of chocolates that you will eat, following the above rules.

Write a function:

class Solution { public int solution(int N, int M); }

that, given two positive integers N and M, returns the number of chocolates that you will eat. For example, given integers N = 10 and M = 4, the function should return 5, as explained above.

Assume that: N and M are integers within the range [1..1,000,000,000].
Complexity: expected worst-case time complexity is O(log(N+M)); expected worst-case space complexity is O(log(N+M)).

I am aware a similar question has been asked in Java (ChocolatesByNumbers), but my question is more directed to C#:

public static int PrintNChocolatesInaCircle(int N, int M)
{
    int counter = 1;
    int start = 0;
    int value;
    while ((start + M) % N != 0)
    {
        value = (start + M) % N;
        start = value;
        counter++;
    }
    return counter;
}

Codility scored my code 100% in terms of correctness, but in terms of performance it takes longer to process large elements, e.g. N = (3^9)(2^14), M = (2^14)(2^14), and going a bit higher the performance declines.
| ChocolatesByNumbers- Find the number of N chocolates in a circle | c#;programming challenge | I think to brute force this kind of question is the wrong path to take to begin with.Quoting the wikipedia on Brute Force Search : While a brute-force search is simple to implement, and will always find a solution if it exists, its cost is proportional to the number of candidate solutions which in many practical problems tends to grow very quickly as the size of the problem increases. Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than speed.As @WinstonEwert has pointed out, the number of chocolates that you can eat, is related to the least common multiplier. And, here is one fast way of computing it making use of the Euclidean Algorithm :static int gcf(int a, int b){ while (b != 0) { int temp = b; b = a % b; a = temp; } return a;}static int lcm(int a, int b){ return (a / gcf(a, b)) * b;}Credit to @AffluentOwl's answerHowever, LCM is not the final answer, but it is the number of chocolate that we care, we have to divide lcm by M. And, we can simplify all this :public static int PrintNChocolatesInaCircle(int N, int M){ // these already have a known answer if (M == 1) return N; if (M == N) return 1; int a = N, b = M; while (b != 0) { var temp = b; b = a % b; a = temp; } return N / a;}Lastly, make sure you respect the requirements : Write a function:class Solution { public int solution(int N, int M); } |
_softwareengineering.285832 | During creation of new Github repository I could choose license under which my project will be hosted on Github. I didn't do that because Github suggested only few licenses to choose (and WTFPL wasn't on the list). However, after repository was created I cannot find any option to indicate either WTFPL license or any other.Is it possible to setup license for my repo after it was created on GitHub? | How can I setup custom license for my github repository? | licensing;github | Absolutely. Create a new file called LICENSE and put your terms in there. For quick adding of the license you can use addalicense.com or manually push the file to GitHub using various tools. (or quickly via the GUI)License file names are normally; LICENSE, LICENSE.txt, LICENSE.md |
_cstheory.27280 | Most current cryptography methods depend on the difficulty of factoring numbers that are the product of two large prime numbers. As I understand it, that is difficult only as long as the method used to generate the large primes cannot be used as a shortcut to factoring the resulting composite number (and that factoring large numbers itself is difficult).It looks like mathematicians find better shortcuts from time to time, and encryption systems have to be upgraded periodically as a result. (There's also the possibility that quantum computing will eventually make factorization a much easier problem, but that's not going to catch anyone by surprise if the technology catches up with the theory.)Some other problems are proven to be difficult. Two examples that come to mind are variations on the knapsack problem, and the traveling salesman problem.I know that MerkleHellman has been broken, that NasakoMurakami remains secure, and that knapsack problems may be resistant to quantum computing. (Thanks, Wikipedia.) I found nothing about using the traveling salesman problem for cryptography.So, why do pairs of large primes seem to rule cryptography?Is it simply because the it is currently easy to generate pairs of large primes that are easy to multiply but difficult to factor?Is it because factoring pairs of large primes is proven to be difficult to a predictable degree that is good enough?Are pairs of large primes useful in a way other than difficulty, such as the property of working for both encryption and cryptographic signing?Is the problem of generating problem sets for each of the other problem types that are difficult enough for the cryptographic purpose itself too difficult to be practical?Are the properties of other problem types insufficiently studied to be trusted?Other. | Why does most cryptography depend on large prime number pairs, as opposed to other problems? | cr.crypto security;primes | null |
_cs.12405 | Is there any convex hull algorithm that can be extended to non-euclidean metric, such as the geodesic distance on the surface of a sphere? | Convex Hull on a Spherical Surface | algorithms;computational geometry | null |
_reverseengineering.12530 | Is there an easy to convert assembly (ARM) to C code? | How to convert assembly to C code | assembly;c;arm | null |
_webapps.86377 | Is there a way to populate a dropdown list differently depending on the values of another cell? For example, the dropdown in A2 contains Movies and Sports. I want the dropdown in A3 to have Horror and Romance in its dropdown selection if I select Movies in A2 and Baseball and Basketball if I select SportsHow can I achieve this? | Is there a way to dynamically populate a dropdown depending on the value of other cells? | google spreadsheets;data validation | You could make the second validation list an if/then statement dependent on the answer to the first one.=if(A1=Movies,Horror,Baseball)=if(A1=Movies,Romance,Basketball)An example is here. |
_unix.55604 | I'm trying to runhaxelib run nme setup linuxTo set up NME for the Linux target on my Debian box. Actually Linux MINT Debian Edition, but that shouldn't matter. However, I get the following output:E: Unable to locate package ia32-libs-multiarchCalled from ? line 1Called from InstallTool.hx line 579Called from setup/PlatformSetup.hx line 440Called from setup/PlatformSetup.hx line 474Called from setup/PlatformSetup.hx line 1410Called from helpers/ProcessHelper.hx line 133Called from helpers/ProcessHelper.hx line 169Uncaught exception - Error running: sudo apt-get install ia32-libs-multiarch gcc-multilib g++-multilib []I also tried to run:sudo apt-get install ia32-libs-multiarchBut I getE: Unable to locate package ia32-libs-multiarchIs this an Ubuntu specific package?Edit: I got this to work eventually, without installing that package or any other. Unfortunatly I don't remember what it was I did. If someone finds this and has the same problem give this question some attention and I will try again.. | nme for linux target setup fails on debian | linux;ubuntu;debian | null |
_unix.240136 | I have a file which has many random lines likeaaa bbbccc dddeee mark: 98 fffggg ggg jjjj iiijjj kkkkI want to use awk AND only gensub to match the number 98 above. So far I have this code below, I think it does not work cause I need to make gensub treat \n as any other character.cat file.txt | awk 'printf(gensub(/^.*mark: ([0-9]+).*$/,\\1,g))}'I need the output of the code above to be only 98. How do I do that?EDITeven when I use the s or m modifier it does not work as it should cause as far as I know the s modifier should make regex treat . as any character including \n. | gensub on multiple lines | text processing;awk | You seem to think that awk treats its input as a multiline string. It doesn't. When you run an awk script on a file, the script is applied to each line of the file separately. So, your gensub was run once per line. You can actually do what you want with awk but it really isn't the best tool for the job. As far as I can tell, you have a large file and only want to print a number that comes after mark: and whitespace. If so, all of these approaches are simpler than fooling around with gensub:Use grep with Perl Compatible Regular Expressions (-P)$ grep -oP 'mark:\s*\K\d+' file 98The -o makes grep only print the matching portion of the line. The \K is a PCRE construct which means ignore anything matched before this point.sed$ sed -n 's/.*mark:\s*\([0-9]\+\).*/\1/p' file98The -n suppresses normal output. The p at the end makes sed print only if the substitution was successful. The regex itself captures a string of numbers following mark: and 0 or more whitespace characters and replaces the whole line with what was captured. Perl$ perl -ne 'print if s/.*mark:\s*(\d+).*/$1/' file98The -n tells perl to read an input file line by line and apply the script given by -e. 
The script will print any lines where the substitution was successful.If you really, really want to use gensub, you could do something like:$ awk '/mark:/{print gensub(/.*mark:\s*([0-9]+).*/,\\1,g)}' file98Personally, I would do it this way in awk:$ awk '/mark:/{gsub(/[^0-9]/,);print}' file98Since you seemed to be trying to get awk to receive multiline input, this is how you can do that (assuming there are no NULL characters in your file):$ awk '{print(gensub(/^.*mark: ([0-9]+).*$/,\\1,g))}' RS='\0' file98The RS='\0' sets the input record separator (that's what defines a line for awk) to \0. Since there are no such characters in your file, this results in awk reading the whole thing at once. |
_codereview.59827 | I have been reading Clean Code and decided to start working problems on Codechef.com attempting to apply some of what I have learned.Do I seem to be on the right track or am I way off?The challenge is to find the number of trailing zeroes in the decimal form of N!, where 1N109.I am more concerned with the coding style than the way I solved the problem but any comments are appreciated.#include <iostream>#include <vector>int requestNumInts();bool validateInputSize(const int totalNumInts);std::vector<int>* createVector(int totalNumInputs);void loadVector(std::vector<int>* numbersEntered);void printNumTrailingZeros(const std::vector<int>* numbersEntered);int findTrailingZeros(int numberToCalculateZeros);int main() { std::ios::sync_with_stdio(false); std::vector<int>* numbers; numbers = createVector(requestNumInts()); loadVector(numbers); printNumTrailingZeros(numbers); delete numbers; return 0;}int requestNumInts() { int totalNumInts = 0; while(!validateInputSize(totalNumInts)) { std::cin >> totalNumInts; } return totalNumInts;}bool validateInputSize(const int totalNumInts) { const int MAX_NUMBER_OF_INPUTS = 1000000000; const int MIN_NUMBER_OF_INPUTS = 1; if(totalNumInts >= MIN_NUMBER_OF_INPUTS && totalNumInts <= MAX_NUMBER_OF_INPUTS) { return true; } else { return false; }}std::vector<int>* createVector(int totalNumInts) { std::vector<int>* numbers = new std::vector<int>(totalNumInts); return numbers;}void loadVector(std::vector<int>* numbersEntered) { for(unsigned int count = 0; count < numbersEntered->size(); ++count) { std::cin >> (*numbersEntered)[count]; }}void printNumTrailingZeros(const std::vector<int>* numbersEntered) { for(unsigned int count = 0; count < numbersEntered->size(); ++count) { std::cout << findTrailingZeros((*numbersEntered)[count]) << std::endl; }}int findTrailingZeros(int numberToCalculateZeros) { int totalTrailingZeros = 0; const int FACTOR = 5; while(numberToCalculateZeros >= FACTOR) { numberToCalculateZeros /= FACTOR; 
totalTrailingZeros += numberToCalculateZeros; } return totalTrailingZeros;} | Clean code attempt on codechef.com FCTRL | c++;beginner;programming challenge | Code Reviewint requestNumInts();bool validateInputSize(const int totalNumInts);std::vector<int>* createVector(int totalNumInputs);void loadVector(std::vector<int>* numbersEntered);void printNumTrailingZeros(const std::vector<int>* numbersEntered);int findTrailingZeros(int numberToCalculateZeros);Personally I like to align the function names (this makes it easier to read). This is purely personal. Some like it some don't.int requestNumInts();bool validateInputSize(const int totalNumInts);std::vector<int>* createVector(int totalNumInputs);void loadVector(std::vector<int>* numbersEntered);void printNumTrailingZeros(const std::vector<int>* numbersEntered);int findTrailingZeros(int numberToCalculateZeros);Now that I have lined it up two things sprint to mind.You are returning a vector by pointer (that's not good as there is no ownership semantics (who deletes it)). You should probably return by value. The optimizer will remove any copying and it prevents memory leaks. You can then pass the vector by reference to prevent copying in other situations.You seem to be writing C code. If you implements this inside an object then a lot of you parameters don't need to be passed they are part of the object that is being manipulated.Nice: std::ios::sync_with_stdio(false);Pointer. Boo. Bad. std::vector<int>* numbers;It is rare to see RAW pointers in C++ code. Pointers are usually wrapped inside smart pointers. But in this case you don't even need a pointer just use a normal std::vector as an object in place. std::vector<int> numbers = createVector(requestNumInts());Using a delete is risky. delete numbers;It is hard to tell if numbers was dynamically allocated! You actually have to go and look that up in the function createVector(). 
So if you change createVector() you also need to go through your code and find every place that calls createVector() to make sure they also use it correctly. Also its not exception safe. If an exception propagates through your code then you leak memory.The main() function is special. If you don't specify a return then the compiler generates a return 0; for you. If your code can do nothing else apart from exit successfully then leave the return 0; out to indicate that there are no failure states. If there are error exit states then return 0; is an indication that the reader of the code should look for exit failures attempts.Here:int requestNumInts() { int totalNumInts = 0; while(!validateInputSize(totalNumInts)) { std::cin >> totalNumInts; } return totalNumInts;}The first attempt will always fail. So why not use a do {} while() loop. This is designed for this situation. You always execute the code before doing the test.Avoid if conditions that return true/false.bool validateInputSize(const int totalNumInts) { const int MAX_NUMBER_OF_INPUTS = 1000000000; const int MIN_NUMBER_OF_INPUTS = 1; if(totalNumInts >= MIN_NUMBER_OF_INPUTS && totalNumInts <= MAX_NUMBER_OF_INPUTS) { return true; } else { return false; }}The above can be written as:Much more readable. return (totalNumInts >= MIN_NUMBER_OF_INPUTS) && (totalNumInts <= MAX_NUMBER_OF_INPUTS);Don't create the vector with new.std::vector<int>* createVector(int totalNumInts) { std::vector<int>* numbers = new std::vector<int>(totalNumInts); return numbers;}RVO and NRVO will remove the copy that happens when you return by value. Also with C++11 and move semantics this makes this even more efficient. 
So never do this.Also this whole function can be replaced with just a simple declaration in main.Much simpler and more readable.std::vector<int> numbers(requestNumInts());Good try:void loadVector(std::vector<int>* numbersEntered) { for(unsigned int count = 0; count < numbersEntered->size(); ++count) { std::cin >> (*numbersEntered)[count]; }}Couple of different ways to do this:// Use the new foreach keywordfor(auto& val: numbers){ std::cin >> val;}// Using iterators.for(auto loop = numbers.begin(); loop != numbers.end(); ++loop){ std::cint >> (*loop);}// Or we can use the old classic C++03 std::transformstd::transform(std::begin(numbers), std::end(numbers), std::istream_iterator<int>(std::cin), std::begin(numbers), [](int& /*val1*/, int& val2){ return val2;});Again nice effort with the printing.void printNumTrailingZeros(const std::vector<int>* numbersEntered) { for(unsigned int count = 0; count < numbersEntered->size(); ++count) { std::cout << findTrailingZeros((*numbersEntered)[count]) << std::endl; }}Again some other options:// Use the new foreach keywordfor(auto& val: numbers){ std::cout << findTrailingZeros(val) << \n;}// Using iteratorsfor(auto loop = numbers.begin(); loop != numbers.end(); ++loop){ std::cout << findTrailingZeros(*loop) << \n;}// Or we can use the old classic C++03 std::for_eachstd::for_each(std::begin(numbers), std::end(numbers), [](int val){ std::cout << findTrailingZeros(val) << \n;});Also note the use of \n rather than std::endl. The std::endl adds a \n to the stream but then also calls flush. This is hardly ever what you actually want to do. Let the stream flush itself it makes it much more efficient to do so. In your loops get used to use iterators to loop over containers. They are much more versatile and apply to all containers. 
Also you can pass them to functions very easily and they allow you to specify sub-ranges very trivially (event the foreach uses iterators underneath).How I would do it.First note that the output does not depend on previous values. You could cache them for speedy look-up but that seems overkill for such a simple algorithm. So there is no need to store the data in a vector. #include <iostream> int main() { std::ios::sync_with_stdio(false); int count; std::cin >> count; for(int loop=0; loop < count; ++count) { int value; std::cin >> value; std::cout << LeadingZero(value) << \n; } } |
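As a sanity check on the counting logic under review (repeatedly divide by 5 and accumulate, which is Legendre's formula for the exponent of 5 in N!; factors of 2 always outnumber the 5s), the same loop can be verified against an exact factorial for small N. A quick Python cross-check, separate from the C++ style discussion:

```python
from math import factorial

def trailing_zeros(n):
    # Same algorithm as findTrailingZeros above: count the factors
    # of 5 contributed by n! via repeated integer division.
    total = 0
    while n >= 5:
        n //= 5
        total += n
    return total

def trailing_zeros_brute(n):
    # Oracle: compute n! exactly and count the trailing zero digits.
    s = str(factorial(n))
    return len(s) - len(s.rstrip("0"))

for n in range(1, 200):
    assert trailing_zeros(n) == trailing_zeros_brute(n)
print(trailing_zeros(100))  # 24
```

The loop body runs O(log n) times, so it comfortably meets the problem's bound even at N = 10^9 (where the answer is 249999998).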
_vi.9740 | I set my mapleader to Space as :let mapleader = "\<Space>" and use it in command mode quite often. But when I hold it, it makes the cursor move forward, which is very annoying. QUESTION: Is there a way to unbind Space in normal mode? | Unbind Space in normal mode | key bindings;vimrc | null |
_webapps.98241 | I'm sure this is not as complicated as I'm trying to make it. What I'm trying to do is take a calculated cell value, add a letter to it, and reference it from another sheet.

So, if A1 is a value of 1, then I want the value of Sheet1!A1
If A1 is 92, I want the value of Sheet1!A92
If A1 is 0, I don't want anything.
If A1 is blank, I don't want anything. | How to reference a cell using a cell value? | google spreadsheets | null |
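The branching being described is a guarded lookup: empty for 0 or blank, otherwise row A1 of the other sheet. Sketched in Python with a dict standing in for column A of Sheet1 (the helper and its names are mine, purely to pin down the intended behaviour; in a spreadsheet the same shape is usually built with an IF wrapped around a dynamic reference):

```python
def resolve(a1, sheet1_col_a):
    """Value of Sheet1!A<a1>, or "" when a1 is 0 or blank."""
    if a1 in (None, "", 0):
        return ""
    return sheet1_col_a.get(a1, "")

col_a = {1: "first row", 92: "row ninety-two"}
print(resolve(1, col_a))     # first row
print(resolve(92, col_a))    # row ninety-two
print(resolve(0, col_a))     # prints an empty line
print(resolve(None, col_a))  # prints an empty line
```

The two guard cases (0 and blank) are handled before the lookup, which mirrors the order the question lists them in.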
_unix.10044 | I am trying to cron an rsync via ssh between two fileservers that are running SME Server 7.4 and Ubuntu 10.10, respectively. The first rsync worked just fine (for reasons that I do not know), but now, well... here's the output:[i@i-drive ~]$ rsync -avz -e ssh /home/e-smith/files/ibays/drive-i/files/Warehouse\ Pics/* fm-backup@[hostname removed]:~/img_all/sending incremental file listConverted Warehouse Pictures/rsync: failed to set times on /home/fm-backup/img_all/Converted Warehouse Pictures: Operation not permitted (1)Converted Warehouse Pictures/12903-13099/rsync: failed to set times on /home/fm-backup/img_all/Converted Warehouse Pictures/12903-13099: Operation not permitted (1)Converted Warehouse Pictures/12903-13099/13038/rsync: recv_generator: mkdir /home/fm-backup/img_all/Converted Warehouse Pictures/12903-13099/13038 failed: Permission denied (13)*** Skipping any contents from this failed directory ***Converted Warehouse Pictures/30500 - 30600/30677/rsync: failed to set times on /home/fm-backup/img_all/Converted Warehouse Pictures/30500 - 30600/30677: Operation not permitted (1)Converted Warehouse Pictures/30500 - 30600/30677/P1430928.JPGConverted Warehouse Pictures/30500 - 30600/30677/P1430929.JPGrsync: mkstemp /home/fm-backup/img_all/Converted Warehouse Pictures/30500 - 30600/30677/.P1430928.JPG.8SDmeO failed: Permission denied (13)rsync: mkstemp /home/fm-backup/img_all/Converted Warehouse Pictures/30500 - 30600/30677/.P1430929.JPG.qEfwpI failed: Permission denied (13)Converted Warehouse Pictures/30900 - 31000/rsync: recv_generator: mkdir /home/fm-backup/img_all/Converted Warehouse Pictures/30900 - 31000 failed: Permission denied (13)*** Skipping any contents from this failed directory ***Converted Warehouse Pictures/Tiff's Folder/rsync: failed to set times on /home/fm-backup/img_all/Converted Warehouse Pictures/Tiff's Folder: Operation not permitted (1)Converted Warehouse Pictures/Tiff's Folder/IMG_3474.JPGConverted Warehouse 
Pictures/Tiff's Folder/IMG_3475.JPGConverted Warehouse Pictures/Tiff's Folder/IMG_3476.JPGrsync: mkstemp /home/fm-backup/img_all/Converted Warehouse Pictures/Tiff's Folder/.IMG_3474.JPG.aBPfwH failed: Permission denied (13)rsync: mkstemp /home/fm-backup/img_all/Converted Warehouse Pictures/Tiff's Folder/.IMG_3475.JPG.4rQNSM failed: Permission denied (13)rsync: mkstemp /home/fm-backup/img_all/Converted Warehouse Pictures/Tiff's Folder/.IMG_3476.JPG.EpQJkY failed: Permission denied (13)Unconverted Warehouse Pictures/PANA 1430200/Thumbs.dbUnconverted Warehouse Pictures/PANA 1430300/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1430300: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1430300/Thumbs.dbUnconverted Warehouse Pictures/PANA 1430400/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1430400: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1430400/Thumbs.dbUnconverted Warehouse Pictures/PANA 1430500/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1430500: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1430500/Thumbs.dbUnconverted Warehouse Pictures/PANA 1430600/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1430600: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1430600/Thumbs.dbUnconverted Warehouse Pictures/PANA 1430700/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1430700: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1430700/Thumbs.dbUnconverted Warehouse Pictures/PANA 1430800/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1430800: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1430800/Thumbs.dbUnconverted Warehouse Pictures/PANA 1430900/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse 
Pictures/PANA 1430900: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1430900/Thumbs.dbUnconverted Warehouse Pictures/PANA 1440000/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1440000: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1440000/Thumbs.dbUnconverted Warehouse Pictures/PANA 1440100/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1440100: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1440100/Thumbs.dbUnconverted Warehouse Pictures/PANA 1440200/rsync: failed to set times on /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1440200: Operation not permitted (1)Unconverted Warehouse Pictures/PANA 1440200/Thumbs.dbrsync: mkstemp /home/fm-backup/img_all/Unconverted Warehouse Pictures/PANA 1430200/.Thumbs.db.IRLwfB failed: Permission denied (13)inflate returned -3 (0 bytes)rsync error: error in rsync protocol data stream (code 12) at token.c(546) [receiver=3.0.7]rsync: connection unexpectedly closed (8118 bytes received so far) [sender]rsync error: error in rsync protocol data stream (code 12) at io.c(601) [sender=3.0.7](Note: Trust me, I know that TRWTF is the terrible and horrible way that the directory is organized. That's another project for another day.)Neither account is root, and I don't want to have to make the accounts root for this to work. i@i-drive rsyncs just fine to my OSX fileserver, and from the same folder, even. The account on the OSX box isn't root, either. | Why do I get an rsync: failed to set times on ... : Operation not permitted (1) error on Ubuntu 10.10 with SME Server 7.4? | ubuntu;ssh;permissions;rsync;sme server | null |
_unix.280597 | How can someone print the entire time, i.e. for example 12:07:59:393 (HH:MM:SS:milliseconds), in milliseconds? I found a lot of posts saying how to print milliseconds along with HHMMSS, but I want to print the realtime clock in milliseconds. I'm using Linux. | Millisecond time in a shell script | shell;shell script;date | null |
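For the record format asked about: with GNU coreutils, date +%H:%M:%S:%3N prints exactly HH:MM:SS:milliseconds (%N is nanoseconds, and the 3 truncates to three digits). Where GNU date isn't available, the same formatting is easy from any scripting language; here is one way in Python, shown only as a portable fallback:

```python
from datetime import datetime

def now_hms_millis():
    t = datetime.now()
    # strftime has no millisecond code; %f is microseconds,
    # so derive the three millisecond digits from t.microsecond.
    return t.strftime("%H:%M:%S:") + f"{t.microsecond // 1000:03d}"

print(now_hms_millis())  # e.g. 12:07:59:393
```

The truncation (// 1000) rather than rounding matches what date's %3N does with the nanosecond field.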
_unix.322473 | CentOS 7.2, Multi Theft Auto 1.5.3 Linux server. This game server runs fine. I would like to know the best way to auto-start this game server for 24/7 operation on every server startup/reboot/crash. I would also like it to start in a screen session that starts detached/minimized but is re-attachable. I need an easy-to-understand, step-by-step example. The program to start is located here:
/home/gta3mta/multitheftauto_linux_x64-1.5.3/mta-server64 | CentOS 7 MTA:SA Game Server Auto Start on boot | centos;boot;startup | null |
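Since the question is tagged CentOS 7, a systemd unit is the natural mechanism for start-on-boot plus restart-on-crash. The sketch below is only an assumption-laden template: the user and path come from the question, but the unit name, the screen binary path, and every option choice are illustrative, not a tested configuration. It would be saved as /etc/systemd/system/mta-server.service and enabled with `systemctl enable mta-server`:

```ini
[Unit]
Description=MTA:SA game server (illustrative sketch)
After=network.target

[Service]
User=gta3mta
WorkingDirectory=/home/gta3mta/multitheftauto_linux_x64-1.5.3
# screen -D -m starts detached without forking, so systemd can supervise it;
# re-attach later with: screen -r mta
ExecStart=/usr/bin/screen -DmS mta ./mta-server64
Restart=always

[Install]
WantedBy=multi-user.target
```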
_unix.180021 | Researched:
http://www.thecave.info/export-proxy-username-password-linux/
https://stackoverflow.com/questions/5334110/text-based-ftp-client-settings-behind-a-proxy
http://www.cyberciti.biz/faq/linux-unix-set-proxy-environment-variable/
I have been reading up on reverse FTP: the concept of how it works as well as the configuration. However, none of it could fully satisfy my understanding of the scenario I was given. Most of the web sites only state the commands to forward any FTP request to a proxy server. What I want to learn is exactly how each individual node in the setup has to be configured (except the firewalls).
Scenario:
[Internal FTP Server] -> [DMZ Reverse FTP Server] -> [External Client Computers/Servers]
When a legitimate client computer/server on the external network makes an FTP request to the FTP server on the internal network, a reverse FTP server forwards the request between them. There will be two firewalls: one between the reverse FTP server and the external clients, and one between it and the internal network. A typical back-to-back firewall topology.
- The FTP server will be installed on the internal FTP server (Normal Mode)
- The internal FTP server has to be configured to forward and receive any FTP request to/from the reverse FTP server in the DMZ
- The reverse FTP server has to be configured as a proxy server that can receive and forward any FTP request
- External clients/servers should only be able to see the public IP address of the interface of the public-facing firewall
- Both servers, the FTP server and the reverse FTP server, are running Linux RHEL 6
What are the commands and steps I should take to perform such configuration on the FTP server as well as the reverse FTP server? Are there any security measures I should take note of? | How to setup reverse FTP in RHEL? | rhel;firewall;ftp;proxy;file server | null |
_cs.29945 | I've learned that one can represent natural numbers with lambda calculus like this:\begin{align*}c_0 &= \lambda s. \lambda z. z\\c_1 &= \lambda s. \lambda z. s~z\\c_2 &= \lambda s. \lambda z. s~(s~z)\\c_3 &= \lambda s. \lambda z. s~(s~(s~z))\\\end{align*}But could one also write\begin{align*}c'_0 &= \lambda z. \lambda s. z\\c'_1 &= \lambda z. \lambda s. s~z\\c'_2 &= \lambda z. \lambda s. s~(s~z)\\c'_3 &= \lambda z. \lambda s. s~(s~(s~z))\\\end{align*}?Why / why not? | Can the lambda functions in Church numbers be swapped? | lambda calculus;church numerals | null |
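A quick way to see what the swap means in practice is an illustrative Python sketch (not from the question): both conventions encode the same number, but each needs consumers that supply the arguments in the matching order.

```python
# Standard Church numerals: successor argument first, then zero.
c2 = lambda s: lambda z: s(s(z))
# Swapped convention: zero first, then successor.
c2_swapped = lambda z: lambda s: s(s(z))

# Decoders must agree with the encoding's argument order.
to_int = lambda c: c(lambda n: n + 1)(0)
to_int_swapped = lambda c: c(0)(lambda n: n + 1)

print(to_int(c2))                  # 2
print(to_int_swapped(c2_swapped))  # 2
```

Mixing the two, e.g. `to_int(c2_swapped)`, fails with a type error, which is the crux: the swapped numerals are a perfectly good encoding on their own, but they are not interchangeable with terms written for the standard one.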
_reverseengineering.2728 | I want to debug a DLL when it is called from an application. For example, when Firefox calls nss3.dll (NSS Builtin Trusted Root CAs) to check HTTPS certificates, I want to catch nss3.dll and debug all its transactions with a known debugger like OllyDbg or any other. How do I trace the threads created and debug them? | How to debug DLL imported from an application? | debuggers;debugging | In OllyDbg and Immunity Debugger, under Options -> Debugging Options -> Events you have an option "Break on new module". If this option is set, whenever a new DLL is loaded, Olly/ImmDbg will break and let you do your business. In WinDbg, follow Debug -> Event Filters; in the list you will find "Load module". On the side, set the options to Enabled and Handled, which will achieve the same result as above. If on the other hand you want to break on a specific function, you can check the DLL's exports, which list all the functions exported by the DLL. After the DLL is loaded and the debugger breaks as per the previously mentioned settings, you can then proceed to set breakpoints on individual functions. |
_unix.138045 | I can't mount my encrypted devices anymore. The error is:
device mapper: create ioctl failed device or resource busy
This error arises with two different programs used to access TrueCrypt-encrypted devices: TrueCrypt and tc-play. In this case, the usual recommendation is to remove the /dev/mapper/truecrypt* entries, or to look for processes that are blocking the device. However, there is no /dev/mapper/truecrypt* entry, and lsof returns nothing. One TrueCrypt device takes up a whole HDD; according to fdisk, that partition is formatted as HPFS/NTFS. Another TrueCrypt device is on a partition on /dev/sda; according to fdisk, that partition is Linux (ext3 or ext4, if I remember correctly). What could be causing the error? Software: Debian GNU/Linux 6 | GNU/Linux: device mapper: create ioctl failed device or resource busy | linux;device mapper;ioctl | null |
_softwareengineering.154707 | Our business analysts pushed hard to collect data through a spreadsheet. I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up.
A lot of times there will be a little table of items that I somehow have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of this, trying to write something that can compare strings and make a best guess, and then go through the effort of creating interfaces for a user to match the imported data to the destination.
I feel like if the business analysts were actually creating a data model, they would be forced to think about these relationships, and would appreciate the need for natural or business keys to be part of the spreadsheet so that the data can be imported smoothly. The closest they come to business analysis is a big flat list of fields, and that would be fine if it were like any other data dictionary and included data types and relationships, but it isn't. They are just a bunch of names, with no indication of what type of data they might hold, and it is up to me to guess. When I have pushed for more detail, they say that it is just busy work.
How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance.
They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response. | How to show or direct a business analyst to do data modelling? | requirements;systems analysis;data modeling | null |
_unix.153294 | I am examining the egress iptables log of my computer for yesterday and noticed the following:
IN= OUT=eth0 SRC=192.168.1.1 DST=69.46.36.10 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=12345 DF PROTO=TCP SPT=56789 DPT=4000 WINDOW=29200 RES=0x00 SYN URGP=0
Looking at the list of well-known TCP ports, the only entry for port 4000 is the Diablo II game, which I don't have installed on my computer. I tried but could not determine what other service could be connecting to that port. As this only happened yesterday, if I use netstat I could be staring at the screen the entire day to catch the service in action. Is there a better way to determine what program or user connected to this particular port? | How can I determine what program or user connected to a particular port after the fact? | iptables;tcp | You can use the audit system to log all connect() system calls.
sudo auditctl -a exit,always -F arch=b64 -S connect -k connectLog
sudo auditctl -a exit,always -F arch=b32 -S connect -k connectLog
Then you can search:
sudo ausearch -i -k connectLog -w --host 69.46.36.10
Which will show something like:
type=SOCKADDR msg=audit(02/09/14 12:31:57.966:60482) : saddr=inet host:69.46.36.10 serv:4000
type=SYSCALL msg=audit(02/09/14 12:31:57.966:60482) : arch=x86_64 syscall=connect success=no exit=-4(Interrupted system call) a0=0x3 a1=0x20384b0 a2=0x10 a3=0x7fffbf8c9540 items=0 ppid=21712 pid=25423 auid=stephane uid=stephane gid=stephane euid=stephane suid=stephane fsuid=stephane egid=stephane sgid=stephane fsgid=stephane tty=pts5 ses=4 comm=telnet exe=/usr/bin/telnet.netkit key=connect
BTW, I've seen that IP address being resolved from grm.feedjit.com and connection attempts being made to it on 400x ports by iPhones. |
_unix.228454 | I am a Linux admin beginner, and today I learned for the first time that Linux can do so-called IP aliasing. My question: how can one NIC have two IPs at the same time? In general, I think IP addresses must be resolved to MAC addresses using ARP. So my guess is that Linux can respond with the same MAC to ARP requests for two different IPs, but I'm not sure. Is this guess right? | mechanism of IP alias | linux;networking;ip | ARP requests are the question "Who has this address?" The fact that the same interface answers for a bunch of different addresses is no big deal. |
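A small sketch of the guess being confirmed (the `ip addr add` line is illustrative, needs root, and uses an invented address and interface name): however many addresses an interface carries, they are all backed by its one hardware address, which you can read straight from sysfs.

```shell
# Adding a second (alias) address to a NIC needs root, so it is shown as a comment:
#   ip addr add 192.168.1.20/24 dev eth0
# ARP replies for every address on an interface carry that interface's single MAC:
cat /sys/class/net/lo/address
# -> 00:00:00:00:00:00   (the loopback's all-zero MAC; a real NIC shows its own)
```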
_codereview.169131 | Are two points too close? I'm trying to do the math as efficiently as possible. deltaX + deltaY is always going to be greater than or equal to the actual distance. You might use this in game play: are players in range?
// test
bool distanceTooClose = DistanceTooClose(new System.Windows.Point(12, 12), new System.Windows.Point(0, 0), 17);
// end test

static bool DistanceTooClose(System.Windows.Point x, System.Windows.Point y, Double minDistance)
{
    double deltaX = Math.Abs(y.X - x.X);
    double deltaY = Math.Abs(y.Y - x.Y);
    if ((deltaX + deltaY) < minDistance)
    {
        return false;
    }
    double distanceSquared = deltaX * deltaX + deltaY * deltaY;
    //double distance = Math.Sqrt(distanceSquared);
    Double minDistanceSquared = minDistance * minDistance;
    return (distanceSquared <= minDistanceSquared);
} | Distance too close | c#;performance;.net;computational geometry | This is something you should run a benchmark on, to see whether the branch is worth avoiding the 3 multiplications. It is very likely that the branch will not be worth it, due to how branch prediction works. But that depends on what data you will be feeding into it and on how tight a loop you call it in. |
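For comparison, here is the branch-free version of the same squared-distance idea, sketched in Python rather than C# purely for illustration (the 12,12 vs. 0,0 with limit 17 case is the question's own test input):

```python
def distance_too_close(ax, ay, bx, by, min_distance):
    # Compare squared quantities: no sqrt call and no early-exit branch needed.
    dx = bx - ax
    dy = by - ay
    return dx * dx + dy * dy <= min_distance * min_distance

print(distance_too_close(12, 12, 0, 0, 17))  # True  (sqrt(288) ~ 16.97 <= 17)
print(distance_too_close(13, 12, 0, 0, 17))  # False (sqrt(313) ~ 17.69 >  17)
```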
_softwareengineering.306395 | I came from a highly functional and procedural background in programming, and never knew that a type is the same as an interface. As the Design Patterns book by the GoF says:
"A type is a name used to denote a particular interface. We speak of an object as having the type Window if it accepts all requests for the operations defined in the interface named Window. An object may have many types, and widely different objects can share a type." (p. 13)
The surprising thing is, I previously thought of a type as char (a character, or 1 byte), or int (a word, 4 or 8 bytes), or a pointer to character (a string in the C language). Maybe even a struct with x and y as the coordinates of a point, or an array, but I never thought of a type as being an interface.
So it looks like a Car object can be of the types Moveable and Soundable, and a Dog object can be of the types Moveable and Soundable, while a Circle object may be of the type Moveable only, until we decide that a Shape object also needs to give out a sound when a user clicks on it; once we let the Shape class implement the Soundable interface, a Circle object is also of the type Soundable.
I wonder when and how this happened. Was it first said by the GoF book when it was published in 1994, or is it an existing idea that came from long before? It actually sounds exactly the same as duck typing, but duck typing seems like a newer concept that began around 2003 in the Python and Ruby communities, not an idea that was around in 1994 or earlier. | How and when did it happen that a type is an interface? | interfaces;history;dynamic typing;data types;type | null |
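The "an object has a type if it accepts the interface's requests" idea is exactly what dynamically typed languages exploit. A tiny illustrative Python sketch (the class and method names here are invented, not from the book):

```python
class Car:
    def move(self):  return "drives"
    def sound(self): return "honk"

class Circle:
    def move(self):  return "rolls"   # Moveable, but (so far) not Soundable

def make_sound(obj):
    # "Soundable" is simply the set of objects that accept the sound() request.
    return obj.sound()

print(make_sound(Car()))  # honk
```

Calling `make_sound(Circle())` raises `AttributeError` until `Circle` grows a `sound` method; at that point it *is* of the type Soundable, with no declaration needed.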
_unix.111152 | I am trying to recursively download http://ccachicago.org, and am getting exactly one file, the root index.html, downloaded. I've looked at "Download recursively with wget" and started using the recommended -e robots=off, but it still behaves the same. How, with wget or some other tool, can I download a copy of the site? | Why is wget -r -e robots=off http://ccachicago.org not acting recursively? | wget;download | You are asking wget to do a recursive download of http://ccachicago.org, but this URL doesn't provide any direct content. Instead it is just a redirect to http://www.ccachicago.org (which you haven't told wget to fetch recursively). If you tell wget to download the correct URL, it will work:
wget -r -e robots=off http://www.... |
_datascience.6174 | I am working in RStudio on a server which has 250 GB of RAM, but it's taking too much time to handle a 2 GB data file. How should I speed up my work? | RStudio using 2.5% of 250 GB RAM: how to increase it? | r;rstudio | null |
_webapps.3788 | I was wondering if I can get read confirmation for messages sent through Gmail. Conversely, Gmail's behavior concerning confirmations from senders is unclear to me: does it automatically send read responses, or does it simply ignore them? My question concerns exclusively native Gmail features, without plugins or <img> tag hacks. And please: I do know that e-mail confirmations are a weak system, can be easily circumvented, and so on. Enough preaching can be found here. | Gmail read confirmation support | gmail | OK, so you are asking about both directions, not just sending. It does not matter whether you are sending or receiving as far as the web UI goes: you cannot do anything concerning read receipts in Gmail. If you send a message from Outlook with a read receipt, Gmail ignores it. |
_unix.97289 | It's all very confusing. There are different examples out there, e.g.:
<package-name>_<epoch>:<upstream-version>-<debian.version>-<architecture>.deb
(source: debian package file names)
Is section 5.6.12 "Version" of the Debian Policy Manual also related to the actual package filename, or only to the fields in the control file? The wiki topic about repository formats doesn't really say anything about conventions, and the same goes for the developers' best practices guide. Maybe I'm just looking for the wrong thing; please help me and tell me where to find the Debian package name conventions. I'm especially curious where to put the Debian codename. I want to do something like this:
<package-name>_<version>.<revision>-<debiancodename>_<architecture>.deb
where <debiancodename> is just squeeze or wheezy. | Debian package naming convention? | debian;package management;packaging | My understanding is that you want to distribute/deploy a package to multiple Debian-based distributions. In the Debian/Ubuntu world, you should not provide individual .deb files to download and install. Instead you should provide an APT repository (in the Fedora/Red Hat/CentOS world I would give the similar advice to provide a YUM repository). Not only does it solve the issue of how to name the deb file, but a repository is an effective way to provide newer versions of your package, including bug-fix and security updates. Creating an APT repository is beyond the purpose of this page/question; just search for "how to set up an apt repository".
Now back to your question: package naming convention. When you generate the package with dpkg-buildpackage, the package will be named in a standard way.
Quoting the dpkg-name manpage: "A full package name consists of package_version_architecture.package-type as specified in the control file of the package."
package_version_architecture.package-type
The Debian Policy is the right place to learn the syntax of the control files: name (for both source and binary packages), version, architecture, package-type. There is no provision to state the distribution, because that is not the way things go. If you need to compile the same version of a package for multiple distributions, you change the version field (in the debian/changelog and debian/control files). Some people use the distribution name in the version field, for example openssl:
0.9.8o-4squeeze14
1.0.1e-2+deb7u14
1.0.1k-1
If that's what you want to do, make sure to read the debian-policy about debian_revision in version. |
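The `package_version_architecture.deb` convention is mechanical enough to script. A throwaway sketch (the package name is made up; the version string is one of the answer's openssl examples) assembling the filename the standard way, with the distribution hint carried inside the version:

```shell
# Build the standard Debian file name from its three parts.
name=mytool
version='1.0.1e-2+deb7u14'   # distribution hint lives inside the version field
arch=amd64
deb="${name}_${version}_${arch}.deb"
echo "$deb"
# -> mytool_1.0.1e-2+deb7u14_amd64.deb
```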
_cs.60131 | I've been reading about hidden Markov models and stumbled upon A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition by Lawrence R. Rabiner (Proc. IEEE, 77(2):257–286, 1989; PDF). The following appears as equation 49 in the section about continuous observations in hidden Markov models:$$b_j(O) = \sum_{m=1}^M c_{jm}\mathfrak{N}[O, \mu_{jm}, U_{jm}], \quad 1\leq j\leq N\,.$$What I want to know is how to use this equation given an estimation of the transition, emission, initial probabilities and a given continuous observed sequence to train the hidden Markov model using the Baum–Welch algorithm.Also I haven't read the rest of the paper, since I already have a good amount of knowledge about discrete hidden Markov models. I'm just trying to learn how to use continuous time series data to train an HMM. | Continuous Observation Densities in HMM | machine learning;hidden markov models | null |
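For the scalar (one-dimensional) case, equation 49 is just a Gaussian mixture density evaluated at the observation. An illustrative Python sketch (variable names are mine, not Rabiner's) of computing $b_j(O)$ for one state $j$:

```python
import math

def gaussian_pdf(x, mu, var):
    # Univariate normal density N(x; mu, var).
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def emission_density(x, weights, means, variances):
    # b_j(O) = sum_m c_{jm} * N(O; mu_{jm}, U_{jm}), with the c_{jm} summing to 1.
    return sum(c * gaussian_pdf(x, mu, var)
               for c, mu, var in zip(weights, means, variances))

# Two-component mixture for one state:
print(emission_density(0.0, [0.5, 0.5], [0.0, 1.0], [1.0, 1.0]))  # ~0.3205
```

During Baum-Welch, these densities take the place of the discrete $b_j(O_t)$ table lookups in the forward-backward recursions; the re-estimation then updates the mixture weights, means, and covariances instead of a discrete emission matrix.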
_unix.370593 | I issued the reboot command accidentally. To prevent this issue, I need to require password confirmation for shutdown or reboot activities, even when logged in as root. Kindly suggest how to prevent this issue. Thanks in advance. | How to require a password for manual shutdown or reboot, even when logged in as root | passwd | null |
_codereview.92643 | I wrote a simple calculator with general operations. Please give me some advice, suggestions, and criticism about my code: code design, readability, mistakes.
Calculator.java
import javax.swing.*;

public class Calculator {
    public static void main(String[] args) {
        CalculatorView calculator = new CalculatorView();
        // Windows settings
        calculator.setTitle("Simple Calculator");
        calculator.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
    }
}
CalculatorEngine.java
public class CalculatorEngine {
    private enum Operator {
        ADD, SUBTRACT, MULTIPLY, DIVIDE
    }

    private double currentTotal;

    public String getTotalString() {
        return currentTotal % 1.0 == 0 ? Integer.toString((int) currentTotal) : String.valueOf(currentTotal);
    }

    public void equal(String number) {
        currentTotal = Double.parseDouble(number);
    }

    public void add(String number) {
        convertToDouble(number, Operator.ADD);
    }

    public void subtract(String number) {
        convertToDouble(number, Operator.SUBTRACT);
    }

    public void multiply(String number) {
        convertToDouble(number, Operator.MULTIPLY);
    }

    public void divide(String number) {
        convertToDouble(number, Operator.DIVIDE);
    }

    private void convertToDouble(String number, Operator operator) {
        double dblNumber = Double.parseDouble(number);
        switch (operator) {
            case ADD:
                add(dblNumber);
                break;
            case SUBTRACT:
                subtract(dblNumber);
                break;
            case MULTIPLY:
                multiply(dblNumber);
                break;
            case DIVIDE:
                divide(dblNumber);
                break;
            default:
                throw new AssertionError(operator.name());
        }
    }

    private void add(double number) {
        currentTotal += number % 1.0 == 0 ? (int) number : number;
    }

    private void subtract(double number) {
        currentTotal -= number % 1.0 == 0 ? (int) number : number;
    }

    private void multiply(double number) {
        currentTotal *= number % 1.0 == 0 ? (int) number : number;
    }

    private void divide(double number) {
        currentTotal /= number % 1.0 == 0 ? (int) number : number;
    }
}
CalculatorView.java
import javax.swing.*;
import java.awt.*;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class CalculatorView extends JFrame {
    // Declaring fields
    private JTextField display;
    private static final Font BOLD_FONT = new Font(Font.MONOSPACED, Font.BOLD, 20);
    // Variables for calculator's state
    private boolean startNumber = true; // expecting number, not operation
    private String prevOperation = "="; // previous operation
    private CalculatorEngine engine = new CalculatorEngine(); // Reference to CalculatorEngine

    public CalculatorView() {
        // Window settings
        Dimension size = new Dimension(320, 300);
        setPreferredSize(size);
        setResizable(false);

        // Display field
        display = new JTextField("0", 18);
        display.setFont(BOLD_FONT);
        display.setHorizontalAlignment(JTextField.RIGHT);

        // Operations panel 1
        ActionListener operationListener = new OperationListener();
        JPanel operationPanel1 = new JPanel();
        String[] operationPanelNames1 = new String[]{"+", "-", "*", "/"};
        operationPanel1.setLayout(new GridLayout(2, 2, 2, 2));
        for (String anOperationPanelNames1 : operationPanelNames1) {
            JButton b = new JButton(anOperationPanelNames1);
            operationPanel1.add(b);
            b.addActionListener(operationListener);
        }

        // Operations panel 2
        JPanel operationPanel2 = new JPanel();
        operationPanel2.setLayout(new GridLayout(1, 1, 2, 2));
        JButton clearButton = new JButton("C");
        clearButton.addActionListener(new ClearKeyListener());
        operationPanel2.add(clearButton);
        JButton equalButton = new JButton("=");
        equalButton.addActionListener(operationListener);
        operationPanel2.add(equalButton);

        // Buttons panel
        JPanel buttonPanel = new JPanel();
        ActionListener numberListener = new NumberKeyListener();
        String[] buttonPanelNames = new String[]{"7", "8", "9", "4", "5", "6", "1", "2", "3", " ", "0", " "};
        buttonPanel.setLayout(new GridLayout(4, 3, 2, 2));
        for (String buttonPanelName : buttonPanelNames) {
            JButton b = new JButton(buttonPanelName);
            if (buttonPanelName.equals(" ")) {
                b.setEnabled(false);
            }
            b.addActionListener(numberListener);
            buttonPanel.add(b);
        }

        // Main panel
        JPanel mainPanel = new JPanel();
        mainPanel.setLayout(new BorderLayout());
        mainPanel.add(display, BorderLayout.NORTH);
        mainPanel.add(operationPanel1, BorderLayout.EAST);
        mainPanel.add(operationPanel2, BorderLayout.SOUTH);
        mainPanel.add(buttonPanel, BorderLayout.CENTER);

        // Window build
        setContentPane(mainPanel);
        pack();
        setVisible(true);
    }

    private void actionClear() {
        startNumber = true;
        display.setText("0");
        prevOperation = "=";
        engine.equal("0");
    }

    class OperationListener implements ActionListener {
        @Override
        public void actionPerformed(ActionEvent e) {
            if (startNumber) {
                actionClear();
                display.setText("ERROR - wrong operation");
            } else {
                startNumber = true;
                try {
                    String displayText = display.getText();
                    switch (prevOperation) {
                        case "=":
                            engine.equal(displayText);
                            break;
                        case "+":
                            engine.add(displayText);
                            break;
                        case "-":
                            engine.subtract(displayText);
                            break;
                        case "/":
                            engine.divide(displayText);
                            break;
                        case "*":
                            engine.multiply(displayText);
                            break;
                    }
                    display.setText("" + engine.getTotalString());
                } catch (NumberFormatException ex) {
                    actionClear();
                }
                prevOperation = e.getActionCommand();
            }
        }
    }

    class NumberKeyListener implements ActionListener {
        @Override
        public void actionPerformed(ActionEvent e) {
            String digit = e.getActionCommand();
            if (startNumber) {
                display.setText(digit);
                startNumber = false;
            } else {
                display.setText(display.getText() + digit);
            }
        }
    }

    class ClearKeyListener implements ActionListener {
        @Override
        public void actionPerformed(ActionEvent e) {
            actionClear();
        }
    }
} | Simple calculator in Java using Swing and AWT | java;swing;calculator;awt | Usability issues
Some things don't work as I would expect:
- Pressing the equals button twice in a row, or after clearing, gives "ERROR - wrong operation"
- Pressing an operation after the equals button (to continue calculations) gives "ERROR - wrong operation"
It would be good to make the user interface a bit friendlier.
Separation of concerns
It's good that you separated the engine, the view, and the main class that just sets up and runs everything. But it would be good to go further. The calculations are performed by the engine, and controlled by an action listener implemented inside the view, using a switch. Instead of a switch, it would be better to abstract the calculation logic, for example using an Operator interface with an apply method. The Calculator class could configure CalculatorView with an arbitrary collection of Operator implementations. In that setup, CalculatorView will not be aware of any of the calculation logic; it will just know that each operation implements Operator and has an apply method to perform some calculation. That will be more flexible and extensible.
Naming
Many of the method and variable names are quite good, but there are some bad ones that stand out, for example in this code:
for (String anOperationPanelNames1 : operationPanelNames1) {
    JButton b = new JButton(anOperationPanelNames1);
    operationPanel1.add(b);
    b.addActionListener(operationListener);
}
anOperationPanelNames1 is the most terrible name in the code. b is not great either; spelling it out to button would make it a tad more readable, and it is not terribly long. There are operationPanel1 and operationPanel2, but they are quite different in nature: the first contains operators used in calculations, while the second is more about controlling the application, which is different from performing calculations. So instead of numbering variables, you could give them more meaningful names. |
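A sketch of the suggested abstraction, in my own minimal form: rather than a hand-written Operator interface, it reuses the JDK's DoubleBinaryOperator and a symbol-to-operation map, so the view only ever looks an operation up by its button label and applies it. The class name and structure here are illustrative, not the reviewer's exact design.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.DoubleBinaryOperator;

public class Operators {
    // Button label -> calculation; the view never needs a switch over these.
    private static final Map<String, DoubleBinaryOperator> OPS = new HashMap<>();
    static {
        OPS.put("+", (a, b) -> a + b);
        OPS.put("-", (a, b) -> a - b);
        OPS.put("*", (a, b) -> a * b);
        OPS.put("/", (a, b) -> a / b);
    }

    public static double apply(String symbol, double left, double right) {
        return OPS.get(symbol).applyAsDouble(left, right);
    }

    public static void main(String[] args) {
        System.out.println(apply("+", 2, 3));  // 5.0
        System.out.println(apply("/", 10, 4)); // 2.5
    }
}
```

Adding a new operation then means adding one map entry; no view code changes.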
_unix.378742 | I have a mail.log line, and using sed and pipes I can extract the subject, the sender, and the recipient of the mail:
echo "Jul 15 09:04:38 mail postfix/cleanup[36034]: 4A4E5600A5DE0: info: header Subject: The tittle of the message from localhost[127.0.0.1]; from=<sender01@mydomain> to=<recipient01@mydomain> proto=ESMTP helo=<mail.mydomain>" | sed -e 's/^.*Subject: //' -e 's/\]//' -e 's/from localhost//' -e 's/^.\];//' | sed -e 's/\[127.0.0.1; //' -e 's/proto=ESMTP helo=<mail.mydomain>//'
I get the output:
The tittle of the message from=<sender01@mydomain> to=<recipient01@mydomain>
My desired output is:
Jul 15 09:04:38 The tittle of the message from=<sender01@mydomain> to=<recipient01@mydomain>
How do I extract the date and add it to the output? | print the date of a mail.log line | sed;logs;date;output | Ugly, but put this at the beginning of the sed statement (\+ is how GNU sed spells "one or more" in basic regular expressions):
-e 's/^\([[:alpha:]]\+ [[:digit:]]\+ [[:digit:]]\+:[[:digit:]]\+:[[:digit:]]\+\).*Subject:\(.*\)/\1\2/'
Or, if you always know that ' mail postfix' will be in the text at that position, you can just use:
-e 's/^\(.*\) mail postfix.*Subject:\(.*\)/\1\2/'
Other variations are possible. The key is to capture the date, skip over the parts you don't care about, and again capture the remainder that you still need to process. To capture, surround with \( and \); to print what you've captured, use \n, where n is the position of a particular capture (first is 1, second is 2, etc.). And now that you know this, you can probably figure out how to eliminate all of the separate directives (-e), use multiple capture groups, and get it down to a single sed expression. |
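Putting the capture-group idea together into one runnable command against the question's sample line. The pattern below is my own single-expression variant, anchored on the literal ` mail postfix` as the answer suggests:

```shell
# Sample line from the question:
line='Jul 15 09:04:38 mail postfix/cleanup[36034]: 4A4E5600A5DE0: info: header Subject: The tittle of the message from localhost[127.0.0.1]; from=<sender01@mydomain> to=<recipient01@mydomain> proto=ESMTP helo=<mail.mydomain>'
# One sed expression: capture date, subject, sender, and recipient; drop the rest.
out=$(printf '%s\n' "$line" |
  sed 's/^\(.*\) mail postfix.*Subject: \(.*\) from localhost.*; \(from=<[^>]*>\) \(to=<[^>]*>\).*/\1 \2 \3 \4/')
echo "$out"
# -> Jul 15 09:04:38 The tittle of the message from=<sender01@mydomain> to=<recipient01@mydomain>
```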
_codereview.98888 |
- INTERRUPT HANDLER -
> READ FROM IN.1 THROUGH IN.4
> WRITE THE INPUT NUMBER WHEN THE VALUE GOES FROM 0 TO 1
> TWO INTERRUPTS WILL NEVER CHANGE IN THE SAME INPUT CYCLE
I was stuck on this one when I went to bed last night. But of course, like any good programmer, sleeping is when I get most of my work done. My primary goal here is a lower cycle count. If the above... erm, graph... is anything to go by, it looks like I could shave off quite a few cycles. Secondary would be reducing the instruction count (but not at the expense of the cycle count). I can imagine how to get it down to 7 nodes rather than 9, but I can't imagine that also makes the program more efficient (in terms of cycle count). So, how can I reduce the cycle count here?
ROW 1, COLUMN 1 (IN.1)
MOV UP, ACC
START: MOV 0, DOWN
JEZ CHECK
JNZ CONTINUE
CHECK: MOV UP, ACC
JEZ START
MOV 1, DOWN
CONTINUE: MOV UP, ACC
JMP START
ROW 1, COLUMN 2 (IN.2)
MOV UP, ACC
START: MOV 0, DOWN
JEZ CHECK
JNZ CONTINUE
CHECK: MOV UP, ACC
JEZ START
MOV 2, DOWN
CONTINUE: MOV UP, ACC
JMP START
ROW 1, COLUMN 3 (IN.3)
MOV UP, ACC
START: MOV 0, DOWN
JEZ CHECK
JNZ CONTINUE
CHECK: MOV UP, ACC
JEZ START
MOV 3, DOWN
CONTINUE: MOV UP, ACC
JMP START
ROW 1, COLUMN 4 (IN.4)
MOV UP, ACC
START: MOV 0, DOWN
JEZ CHECK
JNZ CONTINUE
CHECK: MOV UP, ACC
JEZ START
MOV 4, DOWN
CONTINUE: MOV UP, ACC
JMP START
ROW 2, COLUMN 1
MOV UP, RIGHT
ROW 2, COLUMN 2
ADD LEFT
ADD UP
MOV ACC, RIGHT
MOV 0, ACC
ROW 2, COLUMN 3
ADD LEFT
ADD UP
ADD RIGHT
MOV ACC, DOWN
MOV 0, ACC
ROW 2, COLUMN 4
MOV UP, LEFT
ROW 3, COLUMN 3 (OUT)
MOV ANY, DOWN | Assembling an Interrupt Handler | performance;assembly;tis 100 | null |
_webapps.9585 | I'm testing some mail sending software that I've written and I'm sending emails in text/plain format with an alternative format of text/html.How do I toggle between the text/plain and text/html views in a web based mail client? I'm using Yahoo, Hotmail, and Gmail for my testing so the method to toggle the views in any one of those clients will work for me.(At the moment all the clients show me the email in text/html format but I want to verify that the text/plain format is good but can't see it.) | Toggle text/plain and text/html in email client such as yahoo, gmail, or hotmail | email | In Gmail, I think your only option is to click the arrow on the top right of the message, and then choose Show Original.The message is most likely to be sent in MIME format, so you can scroll down past the headers and look for something like this: Content-Type: text/plain. In MIME format, there are unique strings (boundaries) between each message part, I believe that each email client chooses its own string to use. You can find out what is being used if you locate the following header:Content-Type: multipart/alternative; boundary=-----------=Sample_Msg_Part156165161321654In this case, the string -----------=Sample_Msg_Part156165161321654 is used to delimit the different message parts.Here's an example...Let's say that the message has the following content:From: [email protected]: [email protected]: TestMIME-Version: 1.0Content-Type: multipart/alternative; boundary=-----------=Sample_Msg_Part156165161321654-----------=Sample_Msg_Part156165161321654Content-Type: text/plainThis is a sample message. This is the text portion of the message.-----------=Sample_Msg_Part156165161321654Content-Type: text/htmlThis is a sample message. This is the <b>html</b> portionof the message.-----------=Sample_Msg_Part156165161321654... the plain text would look like this:This is a sample message. This is the text portion of the message.... 
_webapps.15961 | I want to see only my subscriptions on my YouTube home. Those recommendations are unnecessary pollution. | How to remove recommended items from YouTube home page? | youtube | There is a filter at the very top of the homepage:Switch to Subscriptions and you'll see only subscriptions... |
_reverseengineering.15345 | This happens a lot where when I am reversing a program in a disassembler or debugger, I run into something like this:push eax ; lParampush 1 ; wParampush 80h ; Msgpush ecx ; hWndcall esi ; SendMessageAIn order to effectively reverse this, I need to know what 80h is. The problem is that when compiled (preprocessed), all of the Windows constant macros obviously get turned into numbers so I no longer have the semantic meanings. I also cannot go and search for SendMessage 0x80 because there's no real context there either.The question is, what are some tips in figuring out a Microsoft Windows constant macro name when given only a function and a value like this? I was able to go to SendMessage on MSDN and then from there, look at the Msg parameter which lead me to the System-Defined Messages page. However, like many other MSDN pages, this one only defines the macros by description, rather than provides a table of which value each one corresponds to. This has actually been a regular issue that I've ran into in reversing Windows applications. Another solution I've discovered is to try and locate the .h file for the corresponding macros online and then search for the value there. But this situation is less than ideal because I have no idea if the information is accurate up-to-date, but many times I also do not even know which header file would contain the definition. | What are ways to find Windows constant macro definitions? | windows;api | null |
_webmaster.105050 | Perhaps I'm not getting it as I should, but as far as I understand it, CloudFlare should use geographically local servers, relative to the user, to deliver the content. But instead, I'm getting US servers all the time (I'm in Europe).
How do I fix this? | CloudFlare isn't working as CDN | cdn;cloudflare | You're likely using a geolocation service to determine the location of the IP address. This may not accurately tell you where the server for that IP is located - Cloudflare owns large IP blocks. These blocks will be registered to them somewhere in the USA and perhaps the servers for these IPs are even located there. However, if they move a B block in that range to Europe, it means a 1 digit difference in the IP changes the location completely. 104.16.0.0/12 for example is a huge range of IPs. That's over a million IPs (2^20 addresses) split into 16 B blocks (excuse my napkin math). The ISPs would be aware of an edge router's location, but IP block registration databases wouldn't.
Do a ping command and use response time and TTL to measure distance. TTL will tell you how many routers your ping has bounced through - even if response time doesn't waver much you'll be able to see that it's gone a greater distance. For further detail, a tracert command (Windows) will also reveal more about location by attempting to resolve and ping each individual router along the way. Done from different origin IPs, you'll also be able to see if your ISP is doing any redirecting for Cloudflare in order to shorten distance travelled.
Edit: Another answer has pointed out you can also use yourdomain.com/cdn-cgi/trace in order to get a debug output with the 'colo=' code indicating the location of the server being used. Example output:

fl=21g22
h=yourdomain.co.uk
ip=your.ip.address
ts=1497653403.144
visit_scheme=https
uag=Mozilla Compatible Agent
colo=LHR // Datacentre location
spdy=h2
http=h2
loc=GB |
_webmaster.52458 | I just discovered that Google have indexed the preview version of my site with a subdomain previewdns.com appended to my actual domain. I need to remove those URL's from the search index. How can I do so? | Removing preview DNS from Google Index | google index | null |
_cs.70016 | I am stroking an ellipse with Python scripts. However, due to the limited precision of floating-point numbers in representing irrational values, the outline of the ellipse will not be accurate and some other unnecessary points will be regarded as ellipse points. The blue points on the graph are the entry points; that is, these two points are substituted into the ellipse formula. Can anyone please enlighten me on how to get rid of those unnecessary points? | Excluding needless points when stroking ellipses | graphs;pattern recognition | null |
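One common way around this precision problem is to avoid testing candidate pixels against the implicit equation altogether and instead generate outline points from the parametric form x = a*cos(t), y = b*sin(t), which cannot produce stray points by construction. A quick sketch:

```python
import math

def ellipse_outline(a, b, n=360):
    """Generate n points on the ellipse x^2/a^2 + y^2/b^2 = 1 from the
    parametric form; every point lies on the curve by construction,
    up to floating-point rounding."""
    pts = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        pts.append((a * math.cos(t), b * math.sin(t)))
    return pts

points = ellipse_outline(5.0, 3.0)

# Residual of the implicit equation for each generated point: tiny,
# instead of the large errors that flag "unnecessary" pixels.
residuals = [x * x / 25.0 + y * y / 9.0 - 1.0 for x, y in points]
```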
_unix.258815 | I am trying to create directories and subdirectories as another user, from inside a shell script.
The problem is, I am running the script as root, so the directories are being created with root ownership.
I have a text file containing the names of the directories and subdirectories, and I am using this command to create them:

cat dirname.txt | xargs -L 1 mkdir

The file looks like this:

cet/mnt
cet/mnt/jkl
cet/mnj/lok

I tried sudo, but only the parent directory gets the desired user ownership. | Create Directories and Subdirectories as Another User | permissions;scripting | Try something like:

cat dirName.txt | xargs -L 1 sudo -u ubuntu mkdir -p |
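If the shell pipeline gets awkward, the same job is a few lines of Python; this sketch covers only the mkdir -p part (ownership still has to come from running it as the target user, e.g. via sudo -u, or from an os.chown call as root):

```python
import os
import tempfile

def make_dirs_from_file(listing_path, root="."):
    """Create every directory named in listing_path (one per line),
    parents included, like `xargs mkdir -p` would."""
    with open(listing_path) as fh:
        for line in fh:
            name = line.strip()
            if name:
                os.makedirs(os.path.join(root, name), exist_ok=True)

# Demo against a throwaway directory.
tmp = tempfile.mkdtemp()
listing = os.path.join(tmp, "dirname.txt")
with open(listing, "w") as fh:
    fh.write("cet/mnt\ncet/mnt/jkl\ncet/mnj/lok\n")
make_dirs_from_file(listing, root=tmp)
```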
_webmaster.18498 | I visit a good number of websites - around 100-200 - that don't offer RSS resources. I'm going to check through that list from time to time anyway, but I could avoid wasting time on sites that weren't updated since my last visit. Is there any tool that can give me quick details of any/major changes? | How to keep track of websites updates? | tracking | It's not the ideal solution (you would need a bot spidering the sites in question and looking for updates* - I am not aware of any free services which would provide this service, but there may be paid ones), but you could use Google Alerts to alert on an exact phrase which appears in the site template for each of the sites.
(* Disappointingly, Google's site: operator is not available within Google Alerts)

Example:
1. Sign in to your Google account (Gmail) and go to the Google Alerts page
2. Enter the exact phrase "Pro Webmasters Stack Exchange" (with quotation marks) in the textbox at the top of the page
3. Select:
   Type: Everything
   How often: As-it-happens (or your preference)
   Volume: All results
   Deliver to: Feed

This configuration would compile a feed (stored in Google Reader) with the results which Google picks up (though not necessarily every publicly-available change).

Edit: I have not tested it, but it appears as though creating the alerts as feed items (which, by the way, you will need to add to folders using the Manage Subscriptions dialog in Google Reader in order to see) will also create a publicly-accessible Atom feed which could be monitored from any other syndicated feed reader (which may be particularly useful if you are automating your solution - you could programmatically filter out results from domains other than those you wish to monitor). |
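The closing idea, programmatically filtering the resulting Atom feed down to your watched domains, needs nothing beyond the stdlib; a sketch (the feed snippet is invented for illustration):

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def filter_entries(feed_xml, watched_domains):
    """Return (title, link) for entries whose link host is watched."""
    root = ET.fromstring(feed_xml)
    kept = []
    for entry in root.iter(ATOM_NS + "entry"):
        link = entry.find(ATOM_NS + "link").get("href")
        title = entry.find(ATOM_NS + "title").text
        if urlparse(link).netloc in watched_domains:
            kept.append((title, link))
    return kept

SAMPLE = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>hit</title><link href="http://example.com/page"/></entry>
  <entry><title>noise</title><link href="http://other.net/page"/></entry>
</feed>"""

hits = filter_entries(SAMPLE, {"example.com"})
```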
_codereview.145860 | UPDATE: I have refactored the code into a Gist using @Dmitry's answer as a guide. The update is much simpler to grok, implements IDisposable, and is roughly thirty lines shorter.

I wrote this over the weekend for fun and am looking for critique. Style and readability comments are welcome but what I truly need to know is:

1. Does it function as advertised?
2. Are there any lingering bugs that I've missed?
3. Can you come up with a way to make it faster?

When I ask these of myself I get 1 = yes, 2 = no, and 3 = maaaaaybe. I'd like to add other features like skipping the header row, inferring data types, validating field counts, etc. but I'll be tackling that kind of thing via derivation or extension since such logic will be simpler to implement if based on an existing IEnumerable<IEnumerable<>> like this one.

FLAME ON;

Usage:

foreach (var row in DelimitedReader.Create(fileName)) {
    foreach (var field in row) {
        // do stuff
    }
}

Features:

- Accurate: RFC4180 Compliant
- Efficient: memory usage is (roughly) equal to the size of the largest row
- Fast: average throughput of ~25 megabytes per second
- Flexible: the default encoding and separator/escape characters can be user-defined
- Lightweight: single 160 line class with no external dependencies

Code:

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Text;

namespace ByteTerrace
{
    public class DelimitedReader : IEnumerable<IEnumerable<string>>
    {
        private const int DEFAULT_CHUNK_SIZE = 128;
        private const char DEFAULT_ESCAPE_CHAR = '"';
        private const char DEFAULT_SEPARATOR_CHAR = ',';

        private readonly char[] m_buffer;
        private readonly Encoding m_encoding;
        private readonly char m_escapeChar;
        private readonly string m_fileName;
        private readonly char m_separatorChar;

        public char[] Buffer { get { return m_buffer; } }
        public Encoding Encoding { get { return m_encoding; } }
        public char EscapeChar { get { return m_escapeChar; } }
        public string FileName { get { return m_fileName; } }
        public char SeparatorChar { get { return m_separatorChar; } }

        public DelimitedReader(string fileName, char separatorChar = DEFAULT_SEPARATOR_CHAR, char escapeChar = DEFAULT_ESCAPE_CHAR, Encoding encoding = null, int bufferSize = DEFAULT_CHUNK_SIZE) {
            m_buffer = new char[bufferSize];
            m_encoding = (encoding ?? Encoding.UTF8);
            m_escapeChar = escapeChar;
            m_fileName = fileName;
            m_separatorChar = separatorChar;
        }

        public IEnumerator<IEnumerable<string>> GetEnumerator() {
            return ReadFields().GetEnumerator();
        }
        IEnumerator IEnumerable.GetEnumerator() {
            return GetEnumerator();
        }
        IEnumerable<IEnumerable<string>> ReadFields() {
            return ReadFields(ReadAllChunks(FileName, Encoding, Buffer), SeparatorChar, EscapeChar);
        }

        public static DelimitedReader Create(string fileName, char separatorChar = DEFAULT_SEPARATOR_CHAR, char escapeChar = DEFAULT_ESCAPE_CHAR, Encoding encoding = null, int bufferSize = DEFAULT_CHUNK_SIZE) {
            return new DelimitedReader(fileName, separatorChar, escapeChar, encoding, bufferSize);
        }
        public static IEnumerable<char[]> ReadAllChunks(TextReader reader, char[] buffer) {
            var count = buffer.Length;
            var numBytesRead = 0;

            while ((numBytesRead = reader.ReadBlock(buffer, 0, count)) == count) {
                yield return buffer;
            }

            if (numBytesRead > 0) {
                Array.Resize(ref buffer, numBytesRead);
                yield return buffer;
            }
        }
        public static IEnumerable<char[]> ReadAllChunks(string fileName, Encoding encoding, char[] buffer) {
            return ReadAllChunks(new StreamReader(new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.Read, 4096, FileOptions.SequentialScan), encoding), buffer);
        }
        public static string ReadField(StringBuilder buffer, int offset, int position, char escapeChar) {
            if (buffer[offset] == escapeChar) {
                if (position - offset != 2) {
                    return buffer.ToString(offset + 1, position - offset - 3);
                }
                else {
                    return string.Empty;
                }
            }
            else {
                return buffer.ToString(offset, position - offset - 1);
            }
        }
        public static IEnumerable<IEnumerable<string>> ReadFields(IEnumerable<char[]> chunks, char separatorChar = DEFAULT_SEPARATOR_CHAR, char escapeChar = DEFAULT_ESCAPE_CHAR) {
            var buffer = new StringBuilder();
            var fields = new List<string>();
            var endOfBuffer = 0;
            var escaping = false;
            var offset = 0;
            var position = 0;
            var head0 = '\0';
            var head1 = head0;

            foreach (var chunk in chunks) {
                buffer.Append(chunk, 0, chunk.Length);
                endOfBuffer = buffer.Length;

                while (position < endOfBuffer) {
                    head1 = head0;

                    if ((head0 = buffer[position++]) == escapeChar) {
                        escaping = !escaping;

                        if ((head0 == escapeChar) && (head1 == escapeChar)) {
                            endOfBuffer--;
                            position--;
                            buffer.Remove(position, 1);
                        }
                    }

                    if (!escaping) {
                        if ((head0 == '\n') || (head0 == '\r')) {
                            if ((head1 != '\r') || (head0 == '\r')) {
                                fields.Add(ReadField(buffer, offset, position, escapeChar));
                                yield return fields;
                                buffer.Remove(0, position);
                                endOfBuffer = buffer.Length;
                                fields.Clear();
                                offset = 0;
                                position = 0;
                            }
                            else {
                                offset++;
                            }
                        }
                        else if (head0 == separatorChar) {
                            fields.Add(ReadField(buffer, offset, position, escapeChar));
                            offset = position;
                        }
                    }
                }
            }

            if (buffer.Length > 0) {
                fields.Add(buffer.ToString());
            }

            if (fields.Count > 0) {
                yield return fields;
            }
        }
    }
} | Delimited File Reader | c#;strings;parsing | I'd prefer to rely on the builtin functionality as much as possible. I want to believe that use of the builtin stuff makes my code more readable and probably faster.

So my proposal is:

public class DelimitedReader : IEnumerable<string[]>, IDisposable
{
    private readonly StreamReader reader;

    public DelimitedReader(string fileName, Encoding encoding = null)
        : this(new StreamReader(new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite), encoding ?? Encoding.UTF8, encoding == null))
    {
    }

    public DelimitedReader(StreamReader reader)
    {
        this.reader = reader;
    }

    public void Dispose()
    {
        reader.Dispose();
    }

    public char EscapeChar { get; set; } = '"';

    public char SeparatorChar { get; set; } = ',';

    private string[] ParseLine(string line)
    {
        List<string> fields = new List<string>();
        char[] charsToSeek = { EscapeChar, SeparatorChar };
        bool isEscaped = false;
        int prevPos = 0;
        while (prevPos < line.Length)
        {
            // If in the escaped mode, seek for the escape char only.
            // Otherwise, seek for the both chars.
            int nextPos = isEscaped
                ? line.IndexOf(EscapeChar, prevPos)
                : line.IndexOfAny(charsToSeek, prevPos);
            if (nextPos == -1)
            {
                // We reached the end of the line
                if (!isEscaped)
                {
                    // Add the rest of the line
                    fields.Add(line.Substring(prevPos, line.Length - prevPos).Trim());
                    break;
                }
                // If there is no closing escape char
                throw new InvalidDataException("The following line has invalid format: " + line);
            }
            char nextChar = line[nextPos];
            if (nextChar == EscapeChar)
            {
                // The next char is the escape char
                if (isEscaped)
                {
                    // If already in the escaped mode
                    fields.Add(line.Substring(prevPos, nextPos - prevPos)); // No Trim
                }
                isEscaped = !isEscaped; // Toggle mode
            }
            else
            {
                // The next char is the delimiter
                fields.Add(line.Substring(prevPos, nextPos - prevPos).Trim()); // Trim
            }
            prevPos = nextPos + 1;
        }
        return fields.ToArray();
    }

    public IEnumerator<string[]> GetEnumerator()
    {
        while (!reader.EndOfStream)
        {
            yield return ParseLine(reader.ReadLine());
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

In the class above I use the StreamReader.ReadLine method to read a file line by line, and the String.IndexOf/String.IndexOfAny methods to move within the line.

According to my test runs, this approach is a bit faster. |
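As a reference point for both implementations above, Python's stdlib csv module applies the same RFC 4180 rules they hand-roll (doubled quotes as escapes, separators and line breaks allowed inside quoted fields), which makes it a convenient oracle when writing tests for a reader like this:

```python
import csv
import io

# A deliberately awkward RFC 4180 sample: quoted separator, doubled
# quote, and an embedded line break inside one field.
raw = 'a,"b,1","say ""hi""","line1\nline2"\r\nnext,row,,\r\n'

rows = list(csv.reader(io.StringIO(raw)))
```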
_unix.352845 | I'm currently building a script that parses JSON files and makes some operations while parsing them.
Here's the beginning of my script:

for folder in `ls -d $path`
do
    mag=`basename $folder`
    for file in `ls $folder` # for each file in the specified folder
    do
        receivedCustomer=`jq .receivedCustomer $folder/$file`
        invoiceCustomer=`jq .invoiceCustomer $folder/$file`
        anotherCustomer=`jq .anotherCustomer $folder/$file`
        etc...

        if [ \( $view == \"healthCheck\" \) ]
        then
            healthCheckCpt=`expr $healthCheckCpt + 1`
        fi
        if [ \( $terminal == \"01\" \) ]
        then
            terminal01=`expr $terminal01 + 1`
        fi
        Etc...

Then some operations are a little bit more complex:

if [ $sameClientFacture -gt 0 ]
then
    sameClient=`echo $sameClientFacture $anotherCustomerTrue $rattachementClientBorne $factureSansClientRattache | awk '{printf "%.2f", $1/($2+$1+$3+$4)*100}'`
#    echo "sameClient :" $sameClient
fi
if [ $anotherCustomerTrue -gt 0 ]
then
    differentClient=`echo $sameClientFacture $anotherCustomerTrue $rattachementClientBorne $factureSansClientRattache | awk '{printf "%.2f", $2/($2+$1+$3+$4)*100}'`
#    echo "differentClient :" $differentClient
fi
Etc...

Finally I just construct another JSON file containing the results of the operations with an echo:

echo { x : value , y : value etc... } > out_file

I'm surely doing something wrong since I'm very new to shell programming. I'm parsing around 30 000 files at a time of a few KB each (from 50 to 150 lines per file); the last attempt took 2070 seconds. Compared to parsing in Java I find it very slow.
Do you guys have a clue how I may improve my script? | Slow / Poor performance JSON Parsing with JQ | scripting;performance;json;jq | null |
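For scale: each jq invocation above forks a new process, so 30 000 files times a dozen fields is hundreds of thousands of forks, which is where most of the 2070 seconds go. Parsing each file once inside a single process is the usual fix; a rough sketch with Python's json module, using the view/terminal fields and the awk-style percentage from the script above:

```python
import json

def tally(documents):
    """Count the fields of interest across already-parsed JSON docs,
    mirroring the expr counters in the shell script."""
    counts = {"healthCheck": 0, "terminal01": 0}
    for doc in documents:
        if doc.get("view") == "healthCheck":
            counts["healthCheck"] += 1
        if doc.get("terminal") == "01":
            counts["terminal01"] += 1
    return counts

def share(part, *others):
    """The awk percentage: part / (part + others) * 100, 2 decimals."""
    return round(100.0 * part / (part + sum(others)), 2)

# Inline stand-ins for two of the 30 000 files.
docs = [json.loads(s) for s in (
    '{"view": "healthCheck", "terminal": "01"}',
    '{"view": "other", "terminal": "01"}',
)]
counts = tally(docs)
```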
_datascience.2628 | Using SAS Studio (online, student version)...
I need to do a nested likelihood ratio test for a logistic regression. The entirety of the instructions is: "Perform a nested likelihood ratio test comparing your full model (all predictors included) to a reduced model of interest."
The two models I have are:

Proc Logistic Data=Project_C;
Model Dem (event='1') = VEP TIF Income NonCit Unemployed Swing;
Run;

and

Proc Logistic Data=Project_C;
Model Dem (Event='1') = VEP TIF Income / clodds=Wald clparm=Wald expb rsquare;
Run;

I honestly have no idea where to even start. Any suggestions would be appreciated. Thanks! | SAS Nested Likelihood Ratio Test for a Logistic Model | logistic regression | null |
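Whatever the SAS syntax ends up being, the test itself is simple arithmetic on the two fits: the statistic is the difference in -2 Log L between the reduced and the full model, compared against a chi-square with degrees of freedom equal to the number of dropped predictors (three here: NonCit, Unemployed, Swing). A sketch with invented fit statistics:

```python
# -2 Log L values as they would appear in each PROC LOGISTIC
# "Model Fit Statistics" table -- the numbers here are made up.
m2ll_full = 310.42     # all six predictors
m2ll_reduced = 318.90  # VEP TIF Income only

lr_statistic = m2ll_reduced - m2ll_full  # i.e. -2*(llf_reduced - llf_full)
df = 6 - 3  # predictors dropped going from full to reduced

# Compare against the chi-square critical value with df degrees of
# freedom (7.815 at alpha = 0.05 for df = 3): reject the reduced
# model if the statistic exceeds it.
CHI2_CRIT_05_DF3 = 7.815
reject_reduced = lr_statistic > CHI2_CRIT_05_DF3
```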
_softwareengineering.137947 | I recently read an article on the 37signals blog and I'm left wondering how it is that they get the cache key.
It's all well and good having a cache key that includes the object's timestamp (this means that when you update the object the cache will be invalidated); but how do you then use the cache key in a template without causing a DB hit for the very object that you are trying to fetch from the cache?
Specifically, how does this affect one-to-many relations where you are rendering a post's comments, for example.
Example in Django:

{% for comment in post.comments.all %}
    {% cache comment.pk comment.modified %}
        <p>{{ post.body }}</p>
    {% endcache %}
{% endfor %}

Is caching in Rails different from just requests to memcached, for example (I know that they convert your cache key to something different)? Do they also cache the cache key? | How does key-based caching work? | python;django;memcached | null |
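The point of key-based caching is that keys are never explicitly invalidated: updating the object changes its timestamp, hence the key, and the stale entry is simply never looked up again (memcached eventually evicts it). The DB hit the question worries about is real but cheap: one query for the parent list, not one per cached fragment. A toy sketch with a dict standing in for memcached:

```python
cache = {}    # stands in for memcached
renders = []  # records how often we really render

def fragment(comment_pk, comment_modified, body):
    """Return the rendered fragment, computing it only on a cache miss."""
    key = "comment/%s/%s" % (comment_pk, comment_modified)
    if key not in cache:
        renders.append(key)               # the expensive template render
        cache[key] = "<p>%s</p>" % body
    return cache[key]

html1 = fragment(1, "2012-03-01T10:00", "hello")
html2 = fragment(1, "2012-03-01T10:00", "hello")  # hit: no render
# Editing the comment bumps `modified`, which changes the key, so the
# stale entry is never consulted again.
html3 = fragment(1, "2012-03-02T09:00", "hello, edited")
```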
_webmaster.12911 | I have been wandering around some SEO sites this evening and have been seeing this term "linkbaiting". The word "bait" kind of makes it not sound so great.
What exactly is linkbaiting, and how does it differ from article marketing? From the gist of some of the descriptions I'm seeing, this is something fairly new, but it seems almost identical to article marketing. What am I missing? | What is linkbaiting? | seo;backlinks | It's not new. It means creating content that people want to link to. Infographics can be good link bait. The proper way to get links is to have something worth linking to. By creating something unique, or something very informative, you create link bait. Here is a good example of a very well written article that is link bait: http://www.seobook.com/economics-of-content-farms
More info: http://www.ericward.com/linkbait-services.html |
_cs.74296 | I'm not an instructor. I'm a student taking such a course, and I'm just trying to figure out what to do, because we're not given an input language. What kind of (input) language would you suggest for a university compiler project?
It's a project typical of the compiler course in CS programs. However, I've been confused about whether it would be easier to design a language for the project or to use some existing programming language.
It should be able to handle the following requirements:

- Readable (no binary noodles)
- Comments
- At least two different types of data (type errors must be captured at the latest during the run)
- Integrity technology
- Making choices (if statements)
- Repetition (loops, recursion, etc.)
- Parameterizable subroutines (functions, methods, etc.) that can use local variables

Additional features:

- Tables (multidimensional, 0.5 cr; one-dimensional, 0.25 cr)
- String input and output (0.25 cr)
- String interpolation or printf-style formatting by yourself (0.25 cr); complex printf formats (0.25 cr)
- Records and variants (0.5 cr)
- Generic (static) types (0.5 cr)
- Classes and late binding (0.5 cr)
- First-class functions (0.5 cr)
- Garbage collection (entirely self-made) (1 cr)
- Recursive pattern matching (garden Haskell) (0.5 cr)
- Lazy evaluation (1-2 cr depending on the implementation technique)

Additionally the implementation would include:

- Relatively effective interpreter without separate intermediate language: 0 cr
- Generation of intermediate language (e.g. own, JVM or LLVM): 1 cr
- Generation of a machine language (e.g. AMD64 or ARM) from its own intermediate language
- Naive register allocation: 1 cr
- Smart register allocation (e.g. graph coloring): 2 cr

Also, what existing programming languages would fit all of these? Does it have to be a functional language, or does even C implement all of these? | What kind of language would you suggest for an University compiler project? | compilers | null |
_cs.16853 | Recently there have been a few questions on teaching CS on both cs.se & tcs.se, and there are many high-rated related questions on the two sites on the topic. Thinking over the latest one made me realize that a lot of students get exposed to some aspects of STEM through the media (sometimes inaccurately), and one of the most powerful media outlets is movies. It seems that maybe, instead of rolling one's eyes or recoiling from their unrealism, these have some potential and can be used as a teaching tool (aka a "teachable moment") by taking them as a student experience to build on, as case studies for students to learn about certain concepts and how the concepts actually work vs. the screenwritten, Hollywood version, i.e. to address (possibly widespread?) misconceptions about the field and its essential aspects.
What are key or compelling movies introducing CS-type concepts, and what is accurate/inaccurate about the portrayal? [Or is it roughly correct?]
Related: "teaching high school TCS" (tcs.se); "what should I do with a bunch of 16/17yr olds to get them interested in CS" (cs.se) | computer science in the movies as an educational angle | education | I haven't seen it yet, but Travelling Salesman could be pretty interesting. |
_webapps.103453 | Facebook has a built-in capability to download everything you posted and liked: https://www.facebook.com/help/131112897028467/
However, it doesn't allow you to download the content you reposted. In the downloaded archive it is shown as:

Thursday, February 23, 2017 at 8:08am UTC+02
Michael Naumov shared Someone's post.

I am interested in downloading all such posts themselves.
I tried to use Activity Log - Your posts. It uses a load-on-demand approach, so in order to get all the content I have to constantly scroll down. So I ran this JavaScript:

setInterval(function() { window.scrollTo(0, document.body.scrollHeight); }, 100);

After waiting a while I could get the whole timeline loaded. However, this doesn't help much, because all the images I am looking for are rendered as thumbnails, and in order to get the full version of each one I have to click on it and then save it from the popup.
But before trying to implement that approach I decided to ask the community: is there a smarter way to achieve what I want? | Download all Facebook reposts images | facebook | null |
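Once the fully scrolled page is saved as HTML, pulling out the links is ordinary scraping. The sketch below is only illustrative: it assumes each thumbnail <img> is wrapped in an <a href> pointing at the full-size photo, which may well not match Facebook's real markup:

```python
from html.parser import HTMLParser

class ImageLinkCollector(HTMLParser):
    """Collect the href of every <a> that directly contains an <img>."""
    def __init__(self):
        super().__init__()
        self._href = None
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a":
            self._href = attrs.get("href")
        elif tag == "img" and self._href:
            self.links.append(self._href)

    def handle_endtag(self, tag):
        if tag == "a":
            self._href = None

# Invented markup standing in for a saved timeline page.
SAMPLE = ('<div><a href="/photo/full1.jpg"><img src="/t/1.jpg"></a>'
          '<a href="/page">text link</a>'
          '<a href="/photo/full2.jpg"><img src="/t/2.jpg"></a></div>')

collector = ImageLinkCollector()
collector.feed(SAMPLE)
```

The collected links could then be fed to a downloader, session cookies permitting.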
_unix.58588 | I am trying to bind X to do the following:

1. prompt the user whether the session should be killed
2. if y is entered, kill the session
3. after the session is killed, select another session (last, previous, or next session)

Some similar commands that aren't quite right:

Kill the session and close the terminal:

bind X confirm-before -p "Kill #S (y/n)?" kill-session

Prompt the user for the name of the session to kill and select the next session after the kill:

bind X command-prompt -p "kill:" "switch-client -n \; kill-session -t '%%'"

I haven't been able to find examples of similar commands. Here's a solution that doesn't work:

bind X confirm-before -p "Kill #S (y/n)?" "SESSION='#S' \; \
switch-client -n \; kill-session -t \"$SESSION\"" | Kill a tmux session and select another tmux session | tmux | I think this is close to what you want:

bind-key X confirm-before -p "Kill #S (y/n)?" "run-shell 'tmux switch-client -n \\\; kill-session -t \"\$(tmux display-message -p \"#S\")\"'"

Your #3 approach is along the right lines, but the problem is that confirm-before does not do status-left-style substitutions (e.g. #S) in its command string.
A caveat for the above binding is that since everything is done from run-shell, the commands are run outside the context of any particular client or session. It really only works because the default client (for switch-client) and default session (for #S in display-message -p) are the most recently active ones. This works out as you would expect as long as you only have a single active client (e.g. a single user that does not type into another tmux client until after the shell commands have finished running); it could fail dramatically if (e.g.) you trigger the binding in tmux client A, but new input is received by tmux client B before the shell started by run-shell has had a chance to run its commands.
This particular race condition seems like a nice motivation for providing client/session/window/pane information to run-shell commands. There is a TODO entry about getting if-shell and run-shell to support (optional?) status_replace() (i.e. status-left-style substitutions), though maybe a better choice would be format_expand(), which is kind of a newer super-set of status_replace (offers #{client_tty}, etc.). |
_unix.45415 | I have my webserver set up to send out email as a smarthost using postfix, and it does not allow any other machines on my network to send mail through it. I've been able to send email from my webserver to any address I like, and it still works like that.
But I want to change the fact that postfix refuses all clients on the local LAN. I want my desktop PC to be able to send out email through my webserver, but I can't get past these log messages:

Aug 13 21:58:01 localserver postfix/smtpd[21838]: connect from diablo[2001:980:1b7f:1:d568:1d76:bc9a:e356]
Aug 13 21:58:05 localserver postfix/smtpd[21838]: disconnect from diablo[2001:980:1b7f:1:d568:1d76:bc9a:e356]

I tried adding the IPv6 address to the mynetworks line in main.cf, but it doesn't solve the issue.

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no
append_dot_mydomain = no
readme_directory = no

# TLS parameters
smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
smtpd_use_tls = yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

myhostname = localserver.local
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = some.server.nl., localserver.local, localhost.local, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
home_mailbox = Maildir/
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/dovecot-auth
smtpd_sasl_authenticated_header = yes
smtpd_sasl_security_options = noanonymous
smtpd_sasl_local_domain = $myhostname
broken_sasl_auth_clients = yes
smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
smtpd_sender_restrictions = reject_unknown_sender_domain
mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -n -m ${EXTENSION}
smtp_use_tls = yes
smtpd_tls_received_header = yes
smtpd_tls_mandatory_protocols = SSLv3, TLSv1
smtpd_tls_mandatory_ciphers = medium
smtpd_tls_auth_only = yes
tls_random_source = dev:/dev/urandom

Hints/tips anyone? | postfix add single IPv6 address | postfix;ipv6 | null |
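Whatever value ends up in mynetworks, it is a plain CIDR membership test, and the stdlib ipaddress module lets you check which candidate entry would match the client address from the log before touching the config (the /64 prefix below is an assumption about the LAN):

```python
import ipaddress

# The client address as it appears in the smtpd log lines above.
client = ipaddress.ip_address("2001:980:1b7f:1:d568:1d76:bc9a:e356")

# Candidate mynetworks entries: loopback only, the LAN's assumed /64,
# and a neighbouring /64 for contrast.
loopback6 = ipaddress.ip_network("::1/128")
lan6 = ipaddress.ip_network("2001:980:1b7f:1::/64")
other6 = ipaddress.ip_network("2001:980:1b7f:2::/64")

matches = {str(net): client in net for net in (loopback6, lan6, other6)}
```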
_webmaster.101849 | A particular gaming platform allows clients to connect to game servers by entering a URL into their web browser. Upon entering the URL in the web browser and pressing enter, the application launches and joins the specified server. This URL takes the form of a specific application layer protocol, followed by an IP address and port. Ex. xyz://server-ip:portI wish to redirect a subdomain of my website to the address of my game server. If my domain is example.com, I'm looking for play.example.com to redirect to xyz://server-ip:port.Using a forward resource record did not work, nor did using a PHP header redirect, presumably because both are strictly HTTP redirects. Some ideas that I've had include using a <meta> tag, a .htaccess file, javascript, or another resource record, but I'm not familiar enough with any of them to know which, if any, are viable. | Cleanly redirecting a subdomain to an address with a different application layer protocol | redirects;dns | null |
_unix.311558 | I was wondering - how can we be sure that file data isn't being changed (from usermode thread) after the security_mmap_file() hook is called, but before the file is actually mapped. If the data could be changed this is a classic time-of-check-time-of-use attack.I assume there's some lock which I'm missing here...I know that before security_bprm_check() is called (from exec()), the file is write-locked by using deny_write_access() (in do_open_exec()), so that makes sense, but I can't see such a lock before security_mmap_file()Thanks! | LSM security_mmap_file lock question | linux;security;lsm | null |
_unix.58555 | I am consistently running out of inotify resources, leading to errors along the lines of:# tail -f /some/filestail: inotify resources exhaustedtail: inotify cannot be used, reverting to pollingThis eventually happens even if I grow the value of fs.inotify.max_user_watches. I suspect a locally installed Java application is consuming the resources, but I don't have the option of either fixing it or removing it.Is there a way to set a limit on the number of inotify watches that can be consumed by a process? | Can I limit the number of inotify watches available to a process or cgroup? | linux;limit;inotify | null |
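While stock kernels don't expose a per-process cap, finding the consumer is straightforward: every inotify instance appears under /proc/<pid>/fd as a symlink to anon_inode:inotify. A sketch of that census, pointed at a fabricated tree in the demo so it stays self-contained:

```python
import os
import tempfile

def inotify_instances(proc_root="/proc"):
    """Count inotify file descriptors per pid under proc_root."""
    counts = {}
    for pid in os.listdir(proc_root):
        if not pid.isdigit():
            continue
        fd_dir = os.path.join(proc_root, pid, "fd")
        n = 0
        try:
            for fd in os.listdir(fd_dir):
                if os.readlink(os.path.join(fd_dir, fd)) == "anon_inode:inotify":
                    n += 1
        except OSError:
            continue  # process vanished or no permission
        if n:
            counts[int(pid)] = n
    return counts

# Demo on a fabricated /proc lookalike: pid 123 holds two instances.
fake = tempfile.mkdtemp()
os.makedirs(os.path.join(fake, "123", "fd"))
os.symlink("anon_inode:inotify", os.path.join(fake, "123", "fd", "4"))
os.symlink("anon_inode:inotify", os.path.join(fake, "123", "fd", "5"))
os.symlink("/dev/null", os.path.join(fake, "123", "fd", "0"))
counts = inotify_instances(fake)
```

Run against the real /proc (as root), the biggest count usually points at the Java application suspected above.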
_softwareengineering.161670 | I've spent the last year becoming really comfortable with MySQL, but due to its increasing trendiness and my desire to homogenize my web apps with Heroku, I'd like to start using PostgreSQL for my web apps instead. There are resources out there for learning PostgreSQL, but I don't really want to have database concepts explained to me from scratch again, and I don't want to have to re-learn all the stuff that's pretty much the same.
What are the critical differences that I need to understand - syntactical and conceptual - between MySQL and PostgreSQL that will affect me on a day-to-day basis? | Making the switch from MySQL to PostgreSQL? | database;mysql;postgres | It depends somewhat on how you're using the database. If you're using an ORM, you might not notice any issues at all.
I switched an application to using PostgreSQL (for deployment to Heroku), but only after discovering situations where the SQL created by Rails worked fine on SQLite, but not on PostgreSQL. Invariably, the issues were caused when joins were querying the same column name on multiple tables. SQLite didn't care, but PostgreSQL wanted the relation name specified if it was in the 'where' clause.
Even though I've worked with both MySQL and PostgreSQL, I'm not sure of any fundamental conceptual differences between them. They're both fairly solid client-server databases, although PG seems to be generating a better reputation.
However, there are definitely some critical syntactical differences between MySQL and PostgreSQL. I found a decent guide to those here: http://en.wikibooks.org/wiki/Converting_MySQL_to_PostgreSQL |
_unix.307124 | What is the difference between running the following commands in a terminal?

Command 1:

for i in {1..3}; do ./script.sh >& log.$i & done

and command 2:

for i in {1..3}; do ./script.sh >& log.$i & done &

Running the first command shows three job IDs on the screen and I can type the next command at the terminal. The second command is a bit weird: it does not show any job IDs on screen, nor can I see them after running the jobs command. Where did the jobs go? Inside script.sh I have the following loop:

for k in 1; do
    ./tmp -arguments
done
echo hello

If I use command 1, I can see via htop that the ./tmp executable is running and echo hello has not yet been executed (not in the log file).
If I use command 2, I can see via htop that the ./tmp executable is running AND echo hello has ALREADY been executed (as seen in the log file).
Why would an & on the terminal change the behaviour of the for loop inside the shell script?
[GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)] | Ampersand after for loop on shell scripts | bash;shell script;job control | The first one

for i in {1..3}; do
    ./script.sh >& log.$i &
done

runs in the current shell. Each iteration of the loop runs the script.sh script as a job of the current shell, and so you can see them as such. The second one

for i in {1..3}; do
    ./script.sh >& log.$i &
done &

first starts a subshell that controls the loop. Then the 3 iterations create 3 subprocesses in that shell, while in your current shell you can only see 1 job, which is the whole command, not yet broken down into particular jobs. (You should see this 1 job, either as a Running one or as Done.)
The ./tmp executable should run the same way in both cases. If you see that echo hello has been performed, this means ./tmp had finished before. If it behaves abnormally, you should debug (and add the details to your question). Especially, make sure the starting conditions are the same at the time of its call in both cases. E.g. if there are checks for existing files, make sure in both cases they do/don't exist, etc. |
_codereview.92801 | I repeat here this answer on Stack Overflow.I first posted an answer with not finalized code, as a simple description of the solution I could think, without any test. But later I remained interested, so I worked to make it (hopefully) perfectly functional.To precisely define what it is meant to do, let me cite my previous answer:This is a classic dilemma for any CMS or blog, where the teaser should present the begin of an article: often the solution is either stripping text from its tags and cut at a precise count OR keep tags but cut approximately because the tags are counted too...So here the intent is to take an HTML element with any number of children, and any nesting level and return:the same element (i.e. keeping its tag and attributes)where resulting text content (i.e. visible as characters in the resulting page) is limited to a given countwhere resulting text is built from successive text nodes in their natural orderwhere encountered tags are keeped intact and at their natural placeHere is my actual solution:function cutKeepingTags(elem, reqCount) { var grabText = '', missCount = reqCount; $(elem).contents().each(function() { switch (this.nodeType) { case Node.TEXT_NODE: // Get node text, limited to missCount. grabText += this.data.substr(0,missCount); missCount -= Math.min(this.data.length, missCount); break; case Node.ELEMENT_NODE: // Explore current child: var childPart = cutKeepingTags(this, missCount); grabText += childPart.text; missCount -= childPart.count; break; } if (missCount == 0) { // We got text enough, stop looping. return false; } }); return { text: // Wrap text using current elem tag. elem.outerHTML.match(/^<[^>]+>/m)[0] + grabText + '</' + elem.localName + '>', count: reqCount - missCount };}And here is a working example. 
(I kept the HTML example posted by the previous question's OP) | Truncating text with jQuery but keep the HTML formatting | javascript;jquery;html;regex;dom | First of all, I would just like to say that this is a really good and useful function. From a Code Review standpoint, there are almost no errors in it that I know of. Here are a few things I found from examining it:

Keep spacing uniform
Line 8 doesn't have a space between parameters, where your other function calls do. This is most likely due to quick typing and not any major issue other than a cleanliness nitpick.

An invalid input for if
Lines 19-21 where you have an if like so:

if (missCount == 0) {
  // We got text enough, stop looping.
  return false;
}

You should never use == over ===, because something like '0' can compare equal to 0. This is because the double equals sign compares values after type coercion, whereas the triple equals sign tests for both exact value and type. So your final code should look like this for the if statement:

if (missCount === 0) {
  // We got text enough, stop looping.
  return false;
}

Optional - JSHint
If you use JSHint in JSFiddle, then you'll run into errors when trying to run this:

elem.outerHTML.match(/^<[^>]+>/m)[0] + grabText + '</' + elem.localName + '>',

If you're worried about that then you just have to write it all on one line. But for being short and concise, that might not be what you want. |
_unix.118289 | This sort of setup seems to be common in shopping malls and airports. In Western Canada Shaw provides such a service and calls it Shaw Open. I'm pretty sure other locales have similar services from providers such as T-Mobile, etc. From something such as a cell phone it's not very complicated to do. No authentication is necessary to connect to the wifi hotspot as it is open for public access. But my cell phone won't connect to websites or remote services via apps until I use my browser and sign in to a particular webpage provided by the ISP. My question simply stated is: How do I automate the authentication step from a device that doesn't typically have a traditional browser? I have, in my particular case, a Raspberry Pi configured with software that I want to use at trade shows etc. These locations have the same sort of open hotspots. The Raspi is meant to be self-contained. It just does its business and talks to a website. But this outbound connection is blocked by the ISP's open connection because I haven't completed, and can't complete, the browser part of the process. Assuming I have credentials to do this on a particular provider's network, how can I automate that part of the process without requiring me to open a terminal session to the Pi? What kind of technology is even used here, that I can search for? | How do I authenticate to a wireless provider's open network without using a browser? | wifi;wpa supplicant | null |
_softwareengineering.349485 | While trying to debug a weird issue where I knew an exception should have been thrown but was not, I found the following in the Java standard library's java.lang.ClassLoader class:

/**
 * Open for reading, a resource of the specified name from the search path
 * used to load classes. This method locates the resource through the
 * system class loader (see {@link #getSystemClassLoader()}).
 *
 * @param name
 *        The resource name
 *
 * @return An input stream for reading the resource, or <tt>null</tt>
 *         if the resource could not be found
 *
 * @since 1.1
 */
public static InputStream getSystemResourceAsStream(String name) {
    URL url = getSystemResource(name);
    try {
        return url != null ? url.openStream() : null;
    } catch (IOException e) {
        return null;
    }
}

What reason would there be for the decision that this exception should not be thrown but instead silently consumed? While discussing this with a coworker, a possible option was that perhaps this function predated IOException and the null return was added to maintain backwards compatibility; however, this method was added in Java 1.1, while IOException was added in 1.0. This is a file operation, so IOExceptions would not be out of place, so why would the makers of an exception-based language choose returning null over passing up a thrown exception? | Why does Java's getSystemResourceAsStream silently consume IOExceptions? | java;exceptions;standard library | The writers of the function get to define what "could not be found" includes. Here they include any IOExceptions thrown in the attempt as "could not be found". This simplifies the usage of this function, as the user only needs to check for null. A more modern library might have this return Optional<InputStream>. |
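The Optional-style return the answer suggests can be sketched as a thin wrapper over the JDK method. The class and helper names below are mine, not part of any library; the JDK behavior (swallow the IOException, return null) is kept, only the "not found" case becomes explicit in the type.

```java
import java.io.InputStream;
import java.util.Optional;

public class Resources {
    // Hypothetical helper: same swallow-the-IOException contract as the JDK,
    // but "not found" is an empty Optional instead of a bare null.
    static Optional<InputStream> systemResource(String name) {
        return Optional.ofNullable(ClassLoader.getSystemResourceAsStream(name));
    }

    public static void main(String[] args) {
        // A missing resource comes back as an empty Optional, so callers
        // cannot forget the null check.
        if (systemResource("no/such/resource.bin").isPresent()) {
            throw new AssertionError("unexpected resource");
        }
        System.out.println("missing resource handled");
    }
}
```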
_cs.66730 | This question is both general in nature and also specific to computer vision. If this is the wrong forum, apologies in advance and suggestions on where to post would be much appreciated.After a certain point, do the benefits of more data plateau for machine learning algorithms?For instance, let's say the goal is object recognition of a basketball. Is there a plateau, say, after training on 1M images of basketballs? Or is the plateau lower at like 100K images? Or is there no plateau at all?More concretely, can you detect basketballs with 99% accuracy after 100K samples, meaning the next 900K samples only nets an additional 1% in accuracy at most?How about for non-image domains such as speech recognition of all words related to the weather in the English language?It seems that if there is a plateau, it would hinge on the complexity of the domain. Assuming the data plateau exists, is there a principle for generalizing what the plateau is for a given domain (e.g., to recognize one type of object with no variations, you need about 100K images from every angle and under different lighting conditions)? | Machine learning: do the benefits of more data plateau after a certain point? | machine learning;computer vision | null |
_webapps.92830 | I want to share some tweets and retweets to Facebook. The Facebook connect app in Twitter shares every tweet to my Facebook account. I want to be able to control which tweet/retweet gets posted to Facebook and which doesn't. | How do I share individual tweets/retweets to Facebook? | facebook;twitter | null |
_cs.1669 | Let $A_P = (Q,\Sigma,\delta,0,\{m\})$ be the string matching automaton for pattern $P \in \Sigma^m$, that is

$Q = \{0,1,\dots,m\}$
$\delta(q,a) = \sigma_P(P_{0,q}\cdot a)$ for all $q\in Q$ and $a\in \Sigma$

with $\sigma_P(w)$ the length of the longest prefix of $P$ that is a suffix of $w$, that is

$\qquad \displaystyle \sigma_P(w) = \max \left\{k \in \mathbb{N}_0 \mid P_{0,k} \sqsupset w \right\}$.

Now, let $\pi$ be the prefix function from the Knuth-Morris-Pratt algorithm, that is

$\qquad \displaystyle \pi_P(q)= \max \{k \mid k < q \wedge P_{0,k} \sqsupset P_{0,q}\}$.

As it turns out, one can use $\pi_P$ to compute $\delta$ quickly; the central observation is:

Assume above notions and $a \in \Sigma$. For $q \in \{0,\dots,m\}$ with $q = m$ or $P_{q+1} \neq a$, it holds that

$\qquad \displaystyle \delta(q,a) = \delta(\pi_P(q),a)$

But how can I prove this? For reference, this is how you compute $\pi_P$:

m ← length[P]
π[0] ← 0
k ← 0
for q ← 1 to m − 1 do
    while k > 0 and P[k + 1] ≠ P[q] do
        k ← π[k]
    end while
    if P[k + 1] = P[q] then
        k ← k + 1
    end if
    π[q] ← k
end for
return π

| Connection between KMP prefix function and string matching automaton | algorithms;finite automata;strings;searching | First of all, note that by definition

$\delta(q,a) = \sigma_P(P_{0,q}\cdot a) =: s_1$ and
$\delta(\pi_P(q),a) = \sigma_P(P_{0,\pi_P(q)}\cdot a) =: s_2$.

Let us investigate $s_1$ and $s_2$ in a sketch: [source]

Now assume $s_2 > s_1$; this contradicts the maximal choice of $s_1$ directly. If we assume $s_1 > s_2$ we contradict the fact that both $s_2$ and $\pi_P(q)$ are chosen maximally, in particular because $\pi_P(q) \geq s_1 - 1$. As both cases lead to contradictions, $s_1=s_2$ holds, q.e.d.

As requested, a more elaborate version of the proof: Now we have to show $s_1=s_2$; we do this by showing that the opposite leads to contradictions.

Assume $s_2 > s_1$. 
Note that $P_{0,s_2} \sqsupset P_{0,q}\cdot a$ because $P_{0,s_2} \sqsupset P_{0,\pi_P(q)}\cdot a$ by definition of $s_2$ and $P_{0,\pi_P(q)} \sqsupset P_{0,q}$. Therefore, $P_{0,s_2}$ -- a prefix of $P$ and a suffix of $P_{0,q}\cdot a$ -- is longer than $P_{0,s_1}$, which is by definition of $s_1$ the longest prefix of $P$ that is a suffix of $P_{0,q}\cdot a$. This is a contradiction.

Before we continue with the other case, let us see that $\pi_P(q) \geq s_1 - 1$. Observe that because $P_{0,s_1} \sqsupset P_{0,q}\cdot a$, we have $P_{0,s_1-1} \sqsupset P_{0,q}$. Assuming that $\pi_P(q) < s_1 - 1$ immediately contradicts the maximal choice of $\pi_P(q)$ ($s_1 - 1$ is in the set $\pi_P(q)$ is chosen from).

Assume $s_1 > s_2$. We have just shown $|P_{0,\pi_P(q)}\cdot a| \geq s_1$, and remember that $P_{0,\pi_P(q)}\cdot a \sqsupset P_{0,q} \cdot a$. Therefore, $s_1 > s_2$ contradicts the maximal choice of $s_2$ ($s_1$ is in the set $s_2$ is chosen from).

As neither $s_1 > s_2$ nor $s_2 > s_1$ can hold, we have proven that $s_1 = s_2$, q.e.d. |
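The claimed identity is easy to sanity-check mechanically before proving it. A small Python sketch (helper names are mine; σ is computed by brute force, π by the standard prefix-function algorithm):

```python
def prefix_function(P):
    # pi[q] = pi_P(q): length of the longest proper prefix of P[0:q]
    # that is also a suffix of P[0:q]
    pi = [0] * (len(P) + 1)
    k = 0
    for q in range(2, len(P) + 1):
        while k > 0 and P[k] != P[q - 1]:
            k = pi[k]
        if P[k] == P[q - 1]:
            k += 1
        pi[q] = k
    return pi

def sigma(P, w):
    # sigma_P(w): length of the longest prefix of P that is a suffix of w
    return max(k for k in range(len(P) + 1) if w.endswith(P[:k]))

def delta(P, q, a):
    # delta(q, a) = sigma_P(P[0:q] . a)
    return sigma(P, P[:q] + a)

P = "ababaca"
pi = prefix_function(P)
for q in range(len(P) + 1):
    for a in "abc":
        if q == len(P) or P[q] != a:          # exactly the cases in the claim
            assert delta(P, q, a) == delta(P, pi[q], a)
```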
_unix.28436 | I'm trying to mount a disc created some time ago in Amazon EC2. This is what I see (line breaks added for the sake of readability):$ sudo file -s /dev/xvda4/dev/xvda4: x86 boot sector; partition 1: ID=0x83, starthead 1, startsector 63, 10474317 sectors, extended partition table (last)\011, code offset 0x0When I'm trying to mount it:$ sudo mount /dev/xvda4 /mnt/foomount: wrong fs type, bad option, bad superblock on /dev/xvda4, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or soHow can I mount this disc?Maybe this information will help:$ sudo fdisk -lu /dev/xvda4Disk /dev/xvda4: 5368 MB, 5368709120 bytes255 heads, 63 sectors/track, 652 cylinders, total 10485760 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x0952616d Device Boot Start End Blocks Id System/dev/xvda4p1 63 10474379 5237158+ 83 Linux | how to mount this disk? | filesystems;mount | null |
_softwareengineering.312480 | So right now I have a single thread to handle all the requests for the database. Let's say I have 400 requests per second for logins / logouts / other stuff, and 400 requests per second which are only related to items (move them, update them, remove them, etc).Obviously, the problem is that If I want to load an item from the database, but the database is currently processing a login request, then there's gonna be a delay. And I want it to be instant, that's why I wanted to create another thread, exclusive to process item requests, and the other thread to process logins/logouts, etc.Microsoft says this:1: Have multiple statement handles on a single connection handle, with a single thread for each statement handle.2: Have multiple connection handles, with a single statement handle and single thread for each connection handle.What are exactly the differences on both approaches? I obviously need to fetch data and insert/update in both threads at the same time.Will this 2 threads vs 1 approach speed up things?Both threads will work exclusive in different SQL tables (ie the thread for the items will only use ITEMS_TABLE, it will never use the LOGIN_TABLE and vice-versa)Currently I'm using the following functions (C++):SQLSetEnvAttr with SQL_OV_ODBC3SQLConnectSQLAllocHandleSQLBindParameterSQLExecDirect | ODBC 3 Multiple Statements vs Multiple Connections | sql;multithreading;windows;sql server | null |
_unix.136765 | In the video, there is one machine connected to another via SSH, where the second OpenVPN daemon is already started and configured with a user/password requested at login:http://youtu.be/tSNCE6j2zxMHow do I configure the OpenVPN daemon to get it started and configured with user/password automatically? What are the directories to move? What commands do I need? | Daemon OpenVPN configuration file with .conf? | ssh;openvpn | To me it looks like a simple login script that immediately connects to another box. Something like:~$ cat .bash_profilecleartelnet <the-other-box>Not sure why you mention OpenVPN. It doesn't look like OpenVPN is involved here... |
_codereview.8010 | My goal was to make a list dynamically list itself into two columns, no matter the length of the list. I know this is possible by just floating the li nodes, but I wanted to keep the li nodes in the same vertical order. I would really like to hear from people who know more about JavaScript and jQuery than I do if this code looks good or if there is a better or more concise method for implementing this sort of thing.Demo: http://jsfiddle.net/mkimitch/ZEL5x/HTML:<ul class=columned> <li>Australia</li> <li>Brazil</li> <li>Canada</li> <li>Chile</li> <li>China</li> <li>France</li> <li>India</li> <li>Italy</li> <li>Malaysia</li> <li>Norway</li> <li>Russia</li> <li>United Kingdom</li> <li>United States</li></ul>JavaScript:var colLength = $('.columned li').length;var colHeight = $('.columned').height();var liHeight = $('.columned li').height();if ($('.columned li').length % 2 != 0) { var half = (Math.round(colHeight / 2)) + liHeight / 2;} else { var half = Math.round(colHeight / 2);}var firstrow = Math.ceil(colLength / 2);var secondrow = firstrow + 1;$('.columned li:nth-child(-n+' + firstrow + ')').addClass('column1');$('.columned li:nth-child(n+' + secondrow + ')').addClass('column2');$('.columned li:nth-child(' + secondrow + ')').css('margin-top', -half); | Is there a better or more concise way to do this? | javascript;jquery;html | null |
_unix.96428 | I have an archive backup.tar that was created with a nonstandard program a long time ago. I no longer have access to the original program. The archive is not compressed (no gzip). When trying to extract the archive files, I receive an Unexpected EOF error. It's complicated, but I have reason to suspect that the only problem is with the checksum. I want to extract it and get the files out. Is there a way (perhaps using cpio or pax) to ignore or fix the tar checksum, and extract the files? | Tar ignore or fix checksum | tar;data recovery;checksum;cpio;pax | null |
_webmaster.108783 | I need to create my custom view for data from Google Analytics but I don't have any access to any data for API tests. Is there any place where can I find sample data for this response? | Example data from Google Analytics API response | google analytics;google api | null |
_vi.3370 | The normal command :sort can sort lines based on a column or virtual column (\%c or \%v); could a higher-level logical column be used as the sorting key? Using a regular expression looks a little complex for this scenario (the column is around the end of the line?) and it looks similar to what the sort utility does (sort -k), but sort with this functionality is unavailable on Windows. A Vim plug-in would also help. For example, I'd like to sort the 2 lines below according to the last comma-separated column. My real scenario has many more columns and string patterns. Specifying a column delimiter would simplify it a lot.

xxx,yyy,zzz,0x123
zzxz,xxxx,yyyy,0x121

| Sort based on comma separated words | regular expression;sort | Vim's sort allows you to either skip {pattern}, or only consider it (with the r flag). A regular expression for the last comma-delimited column is easy to formulate: Skip everything until and including the last comma in a line:

:sort /.*,/

For any other column, I would use the r flag, and skip N (here: 2) previous columns via \zs:

:sort r /\([^,]*,\)\{2}\zs[^,]*/ |
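Applied to the two sample lines, the first command should sort on the text after the last (greedily matched) comma, so I'd expect the buffer to end up like this (worth re-checking in your own Vim):

```vim
:sort /.*,/
" buffer afterwards, ordered by the final column (0x121 < 0x123):
" zzxz,xxxx,yyyy,0x121
" xxx,yyy,zzz,0x123
```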
_softwareengineering.348663 | So the scenario is we are going to develop a web app. It will primarily consist of:

API
Front End stuff (HTML/CSS/JS)

I am considering two approaches:

Package the API separately into a *.war and deploy it on the server. Write the front end stuff and call those APIs. (This is what my previous company followed, as we had APIs in Hibernate ORM over RESTEasy deployed on JBoss.)
Package all the API and UI stuff into a *.jar. This is what my current company is following. Spring Framework, JSP.

As I understand it, using the first approach we can have more modularisation. We can point our UI to whatever API URL, which can come in handy while testing, switching APIs and so on. Using the second approach we can maintain a higher level of integrity, as in making the UI code more secure and uniform, as we can reference it directly from the API code. Although the catch is that the code becomes a little unreadable and messy, though that could be just a personal opinion. What are the other reasons behind taking these kinds of approaches? Is my understanding correct? What approach is suitable in what scenarios? | Better approach to developing/deploying web apps | design patterns;web applications;api | null |
_codereview.111440 | I have a list of lists of 4 integers, each representing the length of one side of a tetragon in clockwise order (so each number is the length of the side of the right-hand-adjacent-side of the previous one). Integers can be negative or positive or 0. My idea is that if any of the sides are less than or equal to 0 then it's not a valid tetragon. Then if the lengths of all the sides are equal then it can form a square. Then if lengths of opposing sides are equal then it can form a rectangle, otherwise it's neither.

tgons=[[1,1,1,1],[1,2,1,2],[1,2,3,4],[-1,-2,-1,-2],[1,0,1,0]]
squares=0
rects = 0
neithers = 0
for gon in tgons:
    if any(n <= 0 for n in gon): #if any integers <= 0, it's invalid
        neithers+=1
    elif len(set(gon)) == 1: #if all integers are equal, it's a square
        squares+=1
    elif gon[0] == gon[2] and gon[1] == gon[3]: #if both pairs of opposing sides have equal length, it's a rectangle
        rects+=1
    else:
        neithers+=1
print squares,rects,neithers

This code apparently fails one out of a few test cases (on a certain website). I've thrown all the test cases I could think of at it and I've not been able to get it to fail so far. Is there really a test case that it fails? | Decide if 4 lengths form a square, rectangle or neither | python | Why do we need a set to find a square?
Why is a square not a subset of rectangle?
You can do the any check in the gon[0] == gon[2].
Why no filter on length of gon? Pentagons aren't squares...
gon is a poor name... do you mean polygon?

So I would do:

for shape in tgons:
    if shape[0] == shape[2] > 0 and shape[1] == shape[3] > 0 and len(shape) == 4:
        if shape[0] == shape[1]:
            squares += 1
        else:
            rects += 1
    else:
        neithers += 1

This can also be changed to allow all 2D rects/squares. 
(not lines.)

if shape[0] == shape[2] != 0 and shape[1] == shape[3] != 0 and len(shape) == 4:

Also follow PEP8 a bit more, as then you'll have easier-to-read code. You mostly need more spaces:
Before and after infix operators.
After but not before commas. |
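Wrapping the suggested logic in a function makes it easy to throw test cases at it (the function name is mine; the classification rules are the ones proposed above):

```python
def classify(shape):
    # Valid rectangle iff there are exactly four sides and opposing sides
    # match and are positive; all four sides equal makes it a square.
    if len(shape) == 4 and shape[0] == shape[2] > 0 and shape[1] == shape[3] > 0:
        return "square" if shape[0] == shape[1] else "rect"
    return "neither"

tgons = [[1, 1, 1, 1], [1, 2, 1, 2], [1, 2, 3, 4], [-1, -2, -1, -2], [1, 0, 1, 0]]
labels = [classify(g) for g in tgons]
print(labels.count("square"), labels.count("rect"), labels.count("neither"))  # 1 1 3
```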
_unix.152961 | I set my displays to sleep with xset dpms force off after locking the screen with kscreenlock. This is great, the displays wake up when the mouse is moved (by me, air, the cat). I'd like to set it in a way that only the keyboards can wake up the screens again.Is there some way to do this for the KDE screen locker or in general? | Prevent the mouse to wake up display | kde;input;screen lock | null |
_unix.353662 | Is it possible to make a USB port on one machine think that it is on another? I need to remotely access a machine on another network with mouse and kb, video not needed. A KVM switch won't work in this application as remote access can't be enabled on any machines. | Remote USB ports | usb | null |
_softwareengineering.221558 | I want to really open-source a project that started as a personal hobby, but I'm ignorant of licensing details. This project provides some libraries for a given $language, and some wrapper command-line utilities for those libraries. Those libraries, among other things, generate more code in the same language, code which can be reused by the caller. In my drafts, I always used an MIT-style license for the sake of brevity, but it's unreleased copies I'm reviewing right now to merge into a finally released version. I would like to use GPL-style policies for usage/contribution of my code (let's call it a framework). But. Could choosing GPLv2 (3?) for my project limit the things that a given user can do with the code generated or managed by my libraries/utilities? (i.e. commercial profit from such generated code without releasing their improvements/changes to my code) Is there something to consider (in plain language) when licensing code that generates code?

Update: To try to answer some comments:

What do you wish to accomplish with your license?
The best goals for the project's health (from the viewpoint of a community-based open-source project).
Who do you want to use your code?
Anybody.
What do you want to happen to changes?
To be back-ported to the project as much as possible.
What do you want to happen to generated code?
To not be affected by my licensing choice. It belongs to the user who generated it.
What about money - if the user makes money do you want that to affect the licensing?
If the user makes money from the generated code, great. If the user makes money modifying my code, I would like to force them to publish the changes.

I did a group-therapy with those answers. Now, choosing GPL (2? 3?), would I be OK or not?

Update 2: So they have to bundle/compile your library in with theirs when they deploy it, in order for it to run? Yes, the library needs to be installed previously, or bundled with the result (there is an option for this). 
| Licensing libraries (which do code generation, etc) and GPL boundaries | licensing;gpl | The main thing to consider when choosing the license for a code generator is how much of the code generator itself will wind up in its output. With most (all?) compilers, this is such a small portion that the compiler's license is not considered to apply to the compiler's output. This makes it possible to use GPL-licensed compilers to write closed-source software, because the end product is not considered to be derived from the compiler's source.

On the other hand, there are also tools like Bison, where a significant portion of the tool makes it into the output. As Bison is licensed under the GPL, this would normally mean that any software you generate with Bison would also be covered by the GPL, were it not that the Bison license has a special clause to allow its output to be used in (a limited set of) closed-source projects.

As you do not wish to restrict the uses of the generated code, but you do want modifications to the generator and libraries to be made public, your best choice seems to be to use the LGPL or a permissive license for the libraries (provided they can be linked dynamically to the user's project) and the GPL (optionally with a Bison-like exception) or a permissive license for the code generator. |
_unix.155281 | Suppose I mounted a disk in this way:mount /dev/sdb /mnt/tmpI have some files opened on this filesystem and don't want to unmount it. However I want to temporarily extract the device, then reattach it later. I want all reads and writes to this filesystem to be performed in cache only or be hung until I reattach the device.If I thought about temporarily detaching in advance, I would have used the device mapper:# ls -lh /dev/sdbbrw-rw---- 1 root floppy 8, 16 Sep 12 17:38 /dev/sdb# blockdev --getsize /dev/sdb2211840# dmsetup create sdb_detachable --table '0 2211840 linear 8:16 0'# mount /dev/mapper/sdb_detachable /mnt/tmp(start working with the filesystem)(suddenly need to detach the device)# dmsetup suspend sdb_detachable# dmsetup load sdb_detachable --table '0 2211840 error'# blockdev --flushbufs /dev/sdb(eject the device)(maybe even use the cached part of the filesystem)(reattach the device, now it appears as /dev/sdc)# ls -lh /dev/sdc && blockdev --getsize /dev/sdcbrw-rw---- 1 root floppy 8, 32 Sep 12 17:51 /dev/sdc2211840# dmsetup load sdb_detachable --table '0 2211840 linear 8:32 0'# dmsetup resume sdb_detachable(filesystem is usable again)(finished using it, now need to clean up)# umount /mnt/tmp/# dmsetup remove sdb_detachable# eject /dev/sdcHow can this be accomplished if the device is mounted directly? Can I steal it into the device mapper? | How do I temporarily extract a flash drive or HDD in Linux? | linux;usb drive;external hdd;device mapper;vfs | null |
_webapps.72794 | I'm trying to clean up some old Google Sheets, but some of them don't have a move to bin menu option:Other spreadsheets do have it:What's the difference? How can I delete spreadsheets that don't have the menu item? My current work around is to use the Google Drive desktop integration, and delete them there... | Why is there sometimes no Move to trash/Move to bin | google spreadsheets;google drive | null |
_codereview.51083 | I have read many times that controller code mustn't be too complicated, and so on. I was developing new features in my simple project. I added a function which allows users to access news in only one specified category. Now, if a user visits one of these URLs:

/news/common
/news/sport
/news/finance

only news from the specified category will be shown. I was thinking about how to do this through other actions, but realized that I can do it in the index action. I just need to check whether the user entered a category rather than an id (an id can contain only digits), which I've done.

Controller:

public function indexAction() {
    $objectManager = $this->getServiceLocator()->get('Doctrine\ORM\EntityManager');
    $options = array();
    $categoryUrl = (string)$this->params('category');
    if($categoryUrl) {
        // add category to the 'where'
        $category = $objectManager
            ->getRepository('\News\Entity\Category')
            ->findOneByUrl($categoryUrl);
        if(!$category) {
            return $this->redirect()->toRoute('news');
        }
        $options['category'] = $category->getId();
        $categoryName = $category->getName();
    }
    $news = $objectManager
        ->getRepository('\News\Entity\Item')
        ->findBy($options, array('created'=>'DESC'));
    $items = array();
    foreach ($news as $item) {
        $buffer = $item->getArrayCopy();
        $buffer['category'] = $item->getCategory()->getName();
        $buffer['user'] = $item->getUser()->getDisplayName();
        $items[] = $buffer;
    }
    $view = new ViewModel(array(
        'news' => $items,
        'categoryName' => $categoryName,
    ));
    return $view;
}

What am I doing here?

receive the category from the URL
if a category is specified, I add a clause to the $options array and set $categoryName to the category name
if a category is not specified, I don't do anything with $options (so it will be blank after this part) and don't set the flag (so it is not set, undefined)
get the news items (the function passes the $options array)
return $categoryName and the news array to the view

View:

<?
if($categoryName) {
    $title = $categoryName;
} else {
    $title = 'News list';
}
$this->headTitle($title);
?>
// html code, some conditions etc

There is an if condition. If $categoryName is specified, $title will have the same contents as $categoryName. If $categoryName is not specified, $title will be just 'News list'.

Questions

Is this the correct approach at all? Should I create new actions and handle this case in them?
Is it correct to set flags, as I did, send them to the view, handle them, etc.?
Is my controller fat now? How can I improve this code?

In addition, you can find the full code of the files on GitHub:

NewsController.php (controller)
index.phtml (view)

Note: Some words in the files are in Russian. | Does my controller code look good? | php;zend framework | On the surface the code in your question seems fine, but I can tell you from experience that once this door is cracked just a bit, it will continue to creak open wider over time.

I'm just checking if we have an ID or category name.
This just adds a few meta tags.
It's already 200 lines; 50 more won't matter.

The linked controller, however, is doing way too much work. It should be passing data off to a model class (not the entity manager directly), placing whatever the view needs into the ViewModel, and that's it. Controllers are glue code. As you have it, you'll need to copy all of this code and modify it slightly to expose the CRUD interface in another form.

For the code you posted, I would prefer to separate the actions so each handles one specific use case: all items, items matching a category, and one item (not in the code but you mentioned it). Create a regex route for the last two. 
The beauty of this is that you don't need to do all the conditional checks--the dispatcher does it for you.

// see miscellaneous tips below
public function init() {
    $this->objectManager = $this->getServiceLocator()
        ->get('Doctrine\ORM\EntityManager');
}

public function allAction() {
    return new ViewModel(array(
        'news' => $this->loadItems(),
        'categoryName' => null,
    ));
}

public function categoryAction() {
    $category = $this->objectManager
        ->getRepository('\News\Entity\Category')
        ->findOneByUrl((string) $this->params('category'));

    if (!$category) {
        return $this->redirect()->toRoute('news');
    }

    return new ViewModel(array(
        'news' => $this->loadItems(array('category' => $category->getId())),
        'categoryName' => $category->getName(),
    ));
}

private function loadItems($options = array()) {
    $news = $this->objectManager
        ->getRepository('\News\Entity\Item')
        ->findBy($options, array('created' => 'DESC'));

    $items = array();
    foreach ($news as $item) {
        $buffer = $item->getArrayCopy();
        $buffer['category'] = $item->getCategory()->getName();
        $buffer['user'] = $item->getUser()->getDisplayName();
        $items[] = $buffer;
    }

    return $items;
}

This is about the same length as the original, but it's far less complicated. Each action is easy to follow and clearly lays out what it requires.

Miscellaneous

And here are a few tips after looking at your linked controller and view code:

You are accessing the entity manager in every action (sometimes pulling it from the registry twice in the same method). Do this once by storing it in an instance property in init.
The index and list actions are nearly identical. Refactor these to extract the common code into a new private method. It looks like this applies to some of the other actions, e.g., converting a news item into an array for the view with its associated category and user names.
You can simplify the title-setting with the Elvis operator: $title = $categoryName ?: 'News list';
Every page should have an H1--even the all news index page.
A non-empty array is truthy in PHP. 
if(count($this->news) != 0): can be shortened to if($this->news):. If you want to be explicit, at least use if(!empty($this->news)):.
If $this->news will be an empty array instead of null or false when there are no items, you don't even need the if since looping over an empty array is a no-op.
You don't need the ; in <?=...?> since it's an expression instead of a statement. You also don't need it for one-line statements in <?php ... ?>, but we still use it to avoid bugs when someone adds a line. |
_cs.41873 | I'm reading through Pushdown Control-Flow Analysis of Higher-Order Programs, which presents a synthesis of the Abstracting Abstract Machines technique and pushdown automata to get static analysis which perfectly matches call and return sites. The paper presents a monovariant and two polyvariant forms of the system.I can't seem to get my head around what additional expressive power polyvariance (e.g. 1CFA) grants when return-flow merging is already eliminated in the monovariant case by the pushdown methods.Could someone provide an example program in which polyvariance helps? | What additional expressivity does polyvariance give in pushdown CFA? | programming languages;pushdown automata;functional programming | After playing with an implementation (found here), I believe I've figured it out.Although 0PDCFA technically entirely eliminates return-flow merging, this isn't a very strong result in the absence of any polyvariance: if the same function is called in two different contexts, the analysis must end up merging flows within the body of the function, since multiple values are bound to the same variable and the variable's abstract address is identified with its name. 1PDCFA provides a different set of addresses to each syntactically distinct call of each function (I might be either under- or over-selling here), eliminating much of the merging. |
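A concrete shape of the program the answer describes (illustrative only, not taken from the paper): one function, two call sites with different argument types.

```python
# Under a monovariant analysis there is a single abstract address per
# variable name, so after both calls the one address for x holds {int, str}:
# the flows merge inside identity even though calls and returns are matched
# precisely by the pushdown machinery. A polyvariant (e.g. 1CFA) analysis
# gives each call site its own address for x, keeping {int} and {str} apart.
def identity(x):
    return x

a = identity(1)      # call site 1: x bound to an int
b = identity("s")    # call site 2: x bound to a str
print(type(a).__name__, type(b).__name__)  # int str
```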
_unix.58319 | Now I'm on the oh-my-zsh, but I'm not sure that it is perfect choice. What is the key difference between grml zsh config (github repo) and oh-my-zsh config? In which case should I prefer grml or oh-my-zsh? | What is the key difference between grml zsh config and oh-my-zsh config | zsh;oh my zsh;grml | I am unable to give a detailed report of their differences but I can at least give a broad overview that may help to answer some basic questions and lead you to places where you can learn more.oh-my-zsh:Built-in plugin/theme systemAuto updater for core, plugins, and themesDefault behavior easily overridden or extendedWidely popular (which means an active community)grml-zsh:Very well documentedProvides many useful built-in aliases and functions (pdf)Default behavior overridden or extended with .zshrc.pre and .zshrc.local filesActively developed but not as popular as oh-my-zshBasically, the most apparent differences between the two are oh-my-zsh's plugin/theme system and auto-updater. However, these features can be added to grml-zsh with the use of antigen, which is a plugin manager for zsh inspired by oh-my-zsh.Antigen allows you to define which plugins and theme you wish to use and then downloads and includes them for you automatically. Ironically, though, most of the plugins and themes are pulled from oh-my-zsh's library which means in order for them to work antigen must first load the oh-my-zsh core. So, that approach leads to more or less recreating oh-my-zsh in a roundabout way. However, if you prefer grml's configuration to oh-my-zsh's then this is a valid option.Bottom line, I believe you just need to try both and see which one works best for you. 
You can switch back and forth by creating the following files: oh-my-zsh.zshrc (the default file installed by oh-my-zsh), grml.zshrc (the default grml zshrc), .zshrc.pre, and .zshrc.local.

Then if you want to use oh-my-zsh:

$ ln -s ~/oh-my-zsh.zshrc ~/.zshrc

Or, if you want to use grml:

$ ln -s ~/grml.zshrc ~/.zshrc

If you don't want to duplicate your customizations (meaning adding files to the custom directory for oh-my-zsh and modifying the pre and local files for grml), one option is to add your customizations to .zshrc.pre and .zshrc.local and then source them at the bottom of your oh-my-zsh.zshrc file like so:

source $HOME/.zshrc.pre
source $HOME/.zshrc.local

Also, if you decide to use antigen you can add it to your .zshrc.local file and then throw a conditional around it to make sure that oh-my-zsh doesn't run it, like so:

# if not using oh-my-zsh, then load plugins with antigen
# <https://github.com/zsh-users/antigen.git>
if [[ -z $ZSH ]]; then
    source $HOME/.dotfiles/zsh/antigen/antigen.zsh
    antigen-lib
    antigen-bundle vi-mode
    antigen-bundle zsh-users/zsh-syntax-highlighting
    antigen-bundle zsh-users/zsh-history-substring-search
    antigen-theme blinks
    antigen-apply
fi
_softwareengineering.250616 | I'm new to C# programming, and I was experimenting with the iterator concept in C#. Here, I'm trying to display all the terms in a list, and I'm trying different ways to obtain the results. In the code below, I'm using two classes, ListIterator and ImplementList.

In the ListIterator class I defined a HashSet, and I use an IEnumerator to walk over the values. Here GetEnumerator() returns the values in the list. GetEnumerator is implemented in the ImplementList class (the other class). Finally, the list is displayed on the console.

public class ListIterator
{
    public void DisplayList()
    {
        HashSet<int> myhashSet = new HashSet<int> { 30, 4, 27, 35, 96, 34 };
        IEnumerator<int> IE = myhashSet.GetEnumerator();
        while (IE.MoveNext())
        {
            int x = IE.Current;
            Console.Write("{0} ", x);
        }
        Console.WriteLine();
        Console.ReadKey();
    }
}

In the ImplementList class, GetEnumerator() is defined and it returns the list using yield return x.

public class ImplementList : IList<int>
{
    private List<int> Mylist = new List<int>();

    public ImplementList() { }

    public void Add(int item)
    {
        Mylist.Add(item);
    }

    public IEnumerator<int> GetEnumerator()
    {
        foreach (int x in Mylist)
            yield return x;
    }
}

Now, I want to rewrite GetEnumerator() without using yield return, and it should return all the values in the list. I tried using a for loop such as for(int x = 0; x < Mylist.Count; x++), but it doesn't return all the values in the list. Is it possible to get all the values in the list without using yield return in IEnumerator? | Implementing IEnumerator without using 'yield return' in c# | c#;object oriented | MoveNext() is a method that is called over and over, and every time it has to move to the next item.
This means you can't implement it by simply using a for loop (unless you use yield return, which is pretty much the reason why yield return exists); you need to rewrite it so that each MoveNext() call executes just part of that loop.

Specifically, it would look like this (declared as a nested class inside ImplementList, so that it can access the private Mylist field):

// requires: using System.Collections; using System.Collections.Generic;
class Enumerator : IEnumerator<int>
{
    private int i = -1;
    private ImplementList list;

    public Enumerator(ImplementList list)
    {
        this.list = list;
    }

    public bool MoveNext()
    {
        i++;
        return i < list.Mylist.Count;
    }

    public int Current
    {
        get { return list.Mylist[i]; }
    }

    object IEnumerator.Current
    {
        get { return Current; }
    }

    public void Dispose() {}

    public void Reset()
    {
        throw new NotSupportedException();
    }
}

This way, every call to MoveNext() performs the i++ and i < list.Mylist.Count parts of the for loop.
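To wire this up and use it (a sketch, not a complete compilable listing: the Enumerator class from above is assumed to be nested inside ImplementList, and the remaining IList<int> members are elided):

```csharp
public class ImplementList : IList<int>
{
    private List<int> Mylist = new List<int>();

    public void Add(int item) { Mylist.Add(item); }

    public IEnumerator<int> GetEnumerator()
    {
        return new Enumerator(this);   // no yield return involved
    }

    // class Enumerator : IEnumerator<int> { ... } nested here,
    // plus the other IList<int>/IEnumerable members
}

// usage: foreach calls GetEnumerator(), then MoveNext()/Current per element
var list = new ImplementList();
list.Add(1);
list.Add(2);
foreach (int x in list)
    Console.Write("{0} ", x);   // prints: 1 2
```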
_unix.319697 | Code:

#!/bin/bash
startTimes=$(seq 300 10 330)

for startTime in ${startTimes[@]}; do
    endTime=${startTime}+10
    echo ${endTime} > /tmp/111test    # Output literally: startTimes+10
done

echo Last endTime: ${endTime}

Output with bash -x ...:

++ seq 300 10 330
+ startTimes='300
310
320
330'
+ for startTime in '${startTimes[@]}'
+ endTime=300+10
+ echo 300+10
+ for startTime in '${startTimes[@]}'
+ endTime=310+10
+ echo 310+10
+ for startTime in '${startTimes[@]}'
+ endTime=320+10
+ echo 320+10
+ for startTime in '${startTimes[@]}'
+ endTime=330+10
+ echo 330+10
+ echo 'Last endTime: 330+10'
Last endTime: 330+10

Expected output:

310
320
330
340

OS: Debian 8.5
Linux kernel: 4.6 backports | Why does this Bash list expression and variable expansion fail? | bash;array | don_crissti's comment points out the two mistakes: startTimes must be declared as an array, and the addition needs arithmetic expansion with $(( )); ${var[@]} only behaves as intended when the seq output is stored as an array:

startTimes=( $(seq 300 10 330) )

for startTime in ${startTimes[@]}; do
    endTime=$(( ${startTime}+10 ))
    echo ${endTime} > /tmp/111test
done

echo Last endTime: ${endTime}
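The failure can be reproduced in two lines: a plain assignment never performs arithmetic, so ${startTime}+10 is just string concatenation, while $(( )) actually does the addition:

```shell
startTime=300

endTime=${startTime}+10          # plain assignment: the shell just concatenates strings
echo "$endTime"                  # prints: 300+10

endTime=$(( startTime + 10 ))    # arithmetic expansion
echo "$endTime"                  # prints: 310
```

The same applies inside the loop: using $(( )) there is what makes the script print 310 through 340.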
_unix.337724 | I'm trying to send mail from server A running SSMTP via a server B running Postfix. The Postfix server is running just fine and has been in production for a while without any problems. It runs Postfix with Dovecot.I can use my Gmail account to send mail from SSMTP and that works however I want to use my own Postfix server because I want more control over the entire mail process.In the next logs and code I have replaced my own public domain with example.com.Here is the error that SSMTP produces:root@N40L:/etc/ssmtp# echo test | mailx -vvv -s test [email protected][<-] 220 h******.stratoserver.net ESMTP Postfix (Debian/GNU)[->] EHLO example.com[<-] 250 DSN[->] AUTH LOGIN[<-] 535 5.7.8 Error: authentication failed: Invalid authentication mechanismsend-mail: Server didn't like our AUTH LOGIN (535 5.7.8 Error: authentication failed: Invalid authentication mechanism)I'm running Debian 8 on both machines.Here is my ssmtp.conf:[email protected]=example.com:465rewriteDomain=example.comhostname=example.comFromLineOverride=YESUseTLS=YESAuthUser=N40L@example.comAuthPass=correctpasswordI know SSMTP sometimes has trouble working with non-alphanumeric passwords so the password is a string of letters and numbers. 
I have verified it using Mutt and I'm certain it is the right password, the right username, the right port.Postfix main.cf:smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)biff = noappend_dot_mydomain = noreadme_directory = nosmtpd_tls_cert_file=/etc/letsencrypt/live/example.com/fullchain.pemsmtpd_tls_key_file=/etc/letsencrypt/live/example.com/privkey.pemsmtpd_use_tls=yessmtpd_tls_auth_only = yessmtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scachesmtp_tls_session_cache_database = btree:${data_directory}/smtp_scachesmtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destinationmyhostname = ********.stratoserver.netmyorigin = /etc/mailnamemydestination = localhost.stratoserver.net, localhostrelayhost =mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128mailbox_command = procmail -a $EXTENSIONmailbox_size_limit = 0recipient_delimiter = +inet_interfaces = allmessage_size_limit=20480000virtual_mailbox_domains = a.bunch.of names.here.and example.comvirtual_mailbox_base = /var/mail/vmailvirtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cfvirtual_gid_maps = static:5000virtual_uid_maps = static:5000virtual_minimum_uid = 5000virtual_alias_maps = mysql:/etc/postfix/mysql_virtual_alias_maps.cfvirtual_transport = lmtp:unix:private/dovecot-lmtpsmtpd_sasl_auth_enable = yessmtpd_sasl_type = dovecotsmtpd_sasl_path = private/authcontent_filter = scan:127.0.0.1:10026receive_override_options = no_address_mappingsThe LetsEncrypt certs show the correct name and a host of phones, both Android and iPhone, as well as a number of different mail clients and its webmail are all satisfied with it. 
I am positive the certs are in order.master.cf, though I'm not sure it is relevant:smtp inet n - - - - smtpd -v -o content_filter=spamassassinsubmission inet n - - - - smtpd -o syslog_name=postfix/submission -o smtpd_tls_security_level=encrypt -o smtpd_sasl_auth_enable=yes -o smtpd_client_restrictions=permit_sasl_authenticated,reject -o smtpd_relay_restrictions=permit_sasl_authenticated,rejectsmtps inet n - - - - smtpd -o syslog_name=postfix/smtps -o smtpd_tls_wrappermode=yes -o smtpd_sasl_auth_enable=yespickup unix n - - 60 1 pickupcleanup unix n - - - 0 cleanupqmgr unix n - n 300 1 qmgrtlsmgr unix - - - 1000? 1 tlsmgrrewrite unix - - - - - trivial-rewritebounce unix - - - - 0 bouncedefer unix - - - - 0 bouncetrace unix - - - - 0 bounceverify unix - - - - 1 verifyflush unix n - - 1000? 0 flushproxymap unix - - n - - proxymapproxywrite unix - - n - 1 proxymapsmtp unix - - - - - smtprelay unix - - - - - smtpshowq unix n - - - - showqerror unix - - - - - errorretry unix - - - - - errordiscard unix - - - - - discardlocal unix - n n - - localvirtual unix - n n - - virtuallmtp unix - - - - - lmtpanvil unix - - - - 1 anvilscache unix - - - - 1 scachemaildrop unix - n n - - pipe flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}uucp unix - n n - - pipe flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)ifmail unix - n n - - pipe flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)bsmtp unix - n n - - pipe flags=Fq. 
user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipientscalemail-backend unix - n n - 2 pipe flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}mailman unix - n n - - pipe flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py ${nexthop} ${user}spamassassin unix - n n - - pipe user=spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}scan unix - - n - 16 smtp -o smtp_send_xforward_command=yes127.0.0.1:10025 inet n - n - 16 smtpd -o content_filter= -o receive_override_options=no_unknown_recipient_checks,no_header_body_checks -o smtpd_helo_restrictions= -o smtpd_client_restrictions= -o smtpd_sender_restrictions= -o smtpd_recipient_restrictions=permit_mynetworks,reject -o mynetworks_style=host -o smtpd_authorized_xforward_hosts=127.0.0.0/8Relevant config parts in Dovecot:# 2.2.13: /etc/dovecot/dovecot.confauth_debug = yesauth_debug_passwords = yesauth_verbose = yesmail_debug = yesmail_plugins = quotamail_privileged_group = vmailmanagesieve_notify_capability = mailto}passdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql}protocols = imap lmtp sieveservice auth { unix_listener /var/spool/postfix/private/auth { group = postfix mode = 0666 user = postfix }}service imap-login { inet_listener imaps { port = 993 ssl = yes }}service lmtp { unix_listener /var/spool/postfix/private/dovecot-lmtp { group = postfix mode = 0666 user = postfix }}ssl = requiredssl_cert = </etc/letsencrypt/live/example.com/fullchain.pemssl_key = </etc/letsencrypt//live/example.com/privkey.pemuserdb { args = /etc/dovecot/dovecot-sql.conf.ext driver = sql}verbose_ssl = yesprotocol lmtp { mail_plugins = quota sieve postmaster_address = [email protected]}If I try to send mail from server A and it generates aforementioned error server B log this in /var/mail/mail.log:Jan 16 10:29:54 postfix/smtps/smtpd[13601]: warning: dict_nis_init: NIS domain name not set - NIS lookups disabledJan 16 10:29:54 
postfix/smtps/smtpd[13601]: connect from ******.upc-h.chello.nl[62.194.***.***]Jan 16 10:29:54 dovecot: auth: Debug: auth client connected (pid=0)Jan 16 10:29:54 postfix/smtps/smtpd[13601]: warning: ******.upc-h.chello.nl[62.194.***.***]: SASL LOGIN authentication failed: Invalid authentication mechanismJan 16 10:29:54 postfix/smtps/smtpd[13601]: lost connection after AUTH from ******.upc-h.chello.nl[62.194.***.***]Jan 16 10:29:54 postfix/smtps/smtpd[13601]: disconnect from ******.upc-h.chello.nl[62.194.***.***]Same if I add AuthMechanism=LOGIN or AuthMechanism=CRAM-MD5 (which according to SSMTP's man page are the only mechanisms available) to ssmtp.conf so I removed that again.Because the internet is very insistant on using Gmail with SSMTP I tried to humor it for a bit and tried UseSTARTTLS. This then happens on server A:send-mail: Cannot open example.com:465Can't send mail: sendmail process failed with error code 1...and this is logged on server B:Jan 16 10:46:01 postfix/smtps/smtpd[14047]: warning: dict_nis_init: NIS domain name not set - NIS lookups disabledJan 16 10:46:01 postfix/smtps/smtpd[14047]: connect from ******.upc-h.chello.nl[62.194.***.***]Jan 16 10:46:12 dovecot: imap-login: Debug: SSL: elliptic curve secp384r1 will be used for ECDH and ECDHE key exchangesJan 16 10:46:12 dovecot: imap-login: Debug: SSL: elliptic curve secp384r1 will be used for ECDH and ECDHE key exchangesJan 16 10:46:12 dovecot: auth: Debug: auth client connected (pid=14049)Jan 16 10:46:12 dovecot: auth: Debug: client in: AUTH#0111#011PLAIN#011service=imap#011secured#011session=***************AAAAAAAAAAB#011lip=::1#011rip=::1#011lport=143#011rport=60112#011resp=AG40MGxAd*****************QzE3MDE= (previous base64 data may contain sensitive data)Jan 16 10:46:12 dovecot: auth-worker(14017): Debug: sql([email protected],::1): query: SELECT email as username, pwd AS password FROM addresses WHERE email = '[email protected]'Jan 16 10:46:12 dovecot: auth: Debug: client passdb out: 
OK#0111#[email protected] 16 10:46:12 dovecot: auth: Debug: master in: REQUEST#011154140673#01114049#0111#0114d206d2a85468af9af75b8538aab7485#011session_pid=14050#011request_auth_tokenJan 16 10:46:12 dovecot: auth-worker(14017): Debug: sql([email protected],::1): SELECT 5000 AS uid, 5000 as gid, email, '/var/mail/vmail/example.com/n40l' AS home FROM addresses WHERE email = '[email protected]'Jan 16 10:46:12 dovecot: auth: Debug: master userdb out: USER#011154140673#[email protected]#011uid=5000#011gid=5000#[email protected]#011home=/var/mail/vmail/example.com/n40l#011auth_token=ff5b12*****************aedf315ac08eJan 16 10:46:12 dovecot: imap-login: Login: user=<[email protected]>, method=PLAIN, rip=::1, lip=::1, mpid=14050, secured, session=<0pDTDTNG0AAAAAAAAAAAAAAAAAAAAAAB>Jan 16 10:46:12 dovecot: imap: Debug: Loading modules from directory: /usr/lib/dovecot/modulesJan 16 10:46:12 dovecot: imap: Debug: Module loaded: /usr/lib/dovecot/modules/lib10_quota_plugin.soJan 16 10:46:12 dovecot: imap: Debug: Module loaded: /usr/lib/dovecot/modules/lib11_imap_quota_plugin.soJan 16 10:46:12 dovecot: imap: Debug: Added userdb setting: plugin/[email protected] 16 10:46:12 dovecot: imap([email protected]): Debug: Effective uid=5000, gid=5000, home=/var/mail/vmail/example.com/n40lJan 16 10:46:12 dovecot: imap([email protected]): Debug: Quota root: name=User quota backend=maildir args=Jan 16 10:46:12 dovecot: imap([email protected]): Debug: Quota rule: root=User quota mailbox=* bytes=10737418240 messages=0Jan 16 10:46:12 dovecot: imap([email protected]): Debug: Quota rule: root=User quota mailbox=Trash bytes=+104857600 messages=0Jan 16 10:46:12 dovecot: imap([email protected]): Debug: Quota grace: root=User quota bytes=536870912 (5%)Jan 16 10:46:12 dovecot: imap([email protected]): Debug: Namespace inbox: type=private, prefix=, sep=, inbox=yes, hidden=no, list=yes, subscriptions=yes location=maildir:/var/mail/vmail/example.com/n40lJan 16 10:46:12 dovecot: imap([email protected]): 
Debug: maildir++: root=/var/mail/vmail/example.com/n40l, index=, indexpvt=, control=, inbox=/var/mail/vmail/example.com/n40l, alt=Jan 16 10:46:12 dovecot: imap([email protected]): Disconnected: Logged out in=50 out=475I can log into server B's webmail without any trouble and send and receive mail for the address I'm using so the account itself is in order. I tried other accounts and they produce the same errors.I'm at a loss. SSMTP should be able to send mail through Postfix. Even with all debug and verbosity options on, I can't find the source of the problem. Any help is greatly appreciated. | Auth error when sending mail through Postfix with SSMTP | postfix;sendmail;mailx;ssmtp | null |
_unix.165754 | If there is a partition, e.g. /dev/sdb1, then how can I increase the partition (with fdisk?) if it was 10 GByte before and there is still room to grow it by another 10 GByte? In short: how can I increase the partition's size from 10 GByte to 20 GByte? Without data loss! So re-creating the partition is not a solution.

UPDATE: I thought there would be a command to move the partition's end to a new end, so yes, re-creating the partition is OK! :) The main thing is that the data on the partition should stay untouched, without any "copy it somewhere, then copy it back" step. :) | Increase a partition without data loss | fdisk;sles | null |