| id | question | title | tags | accepted_answer |
|---|---|---|---|---|
_unix.58825 | How can I assign the IP address of eth0 to an environment variable, say $ip, as easily as possible? Update: Distro is Ubuntu Server 12.04 LTS. | Assigning IP address to environment variable | linux;shell;networking;ip | null |
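For the unanswered row above: a minimal sketch of one way to do this on Ubuntu Server 12.04, assuming the iproute2 `ip` tool and that `eth0` carries a single IPv4 address (the awk parsing of the `inet` line is an assumption about the output format):

```sh
# Grab the first IPv4 address on eth0 and strip the /prefix
ip=$(ip -4 addr show eth0 | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}')
export ip
echo "$ip"
```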
_webmaster.3604 | On the occasion of a new website, I'm looking for a reliable and inexpensive company to host a big volume of movies and pictures. I need around 1 TB to start, knowing that it will grow each day. Do you have any company to advise? Thank you. | Host multimedia content | web hosting;storage | null |
_webapps.95868 | I heard that it is possible to convert my Facebook account to become a Facebook page. I created my Facebook account, and now I want to move the account to become a page so as to make it my official Facebook page. | How can I convert my Facebook profile to a Page? | facebook;facebook pages | null |
_unix.42611 | I was calibrating my touch screen, and saw that the best tool around was xinput_calibrator. So I used it. It has two options (one of which did not work), so I am here for the second. It says I should execute this command in a script that starts with your X session: xinput set-int-prop "3M 3M USB Touchscreen - EX II" "Evdev Axis Calibration" 32 14410 2146 14574 2115 So I tried ~/.xinitrc, ~/.xsession and ~/.xsessionrc, all of which did not exist. So I created them, and the exact content was this command. The first two files made my logins fail (after I log in, I fall back to the login screen). With the last file, the calibration was functional, but only after logging in... I need that command to run before the login dialog shows up. I thought of adding this command to the end of /etc/X11/xinit/xinitrc, with no result (nothing changed). Also, I tried to add it to the end of /etc/X11/Xsession.d/40x11-common_xsessionrc (after inspecting some of the files), but the result was exactly the same as adding it to ~/.xsessionrc. How can I make this command run before the login screen shows (is this before the window manager starts, or before the X session starts)? (I am running Kubuntu with the default window manager, if that matters) UPDATE: As I am using Kubuntu, my display manager is kdm. As the accepted answer suggests, I edited the file /etc/kde4/kdm/Xsetup, and as mentioned here I added the command before the command that is there by default. And it works like a charm :) | How can I run a script that starts before my login screen? | x11;startup;session | All the files you tried to change are read after you log in. Furthermore, ~/.xinitrc and ~/.xsession are the full set of commands that run in a session; ~/.xinitrc is read if you run xinit or startx from a text-mode prompt, and ~/.xsession is read if you run a custom session (the name may vary) from a graphical login prompt. You need to configure your display manager, the program that shows the login prompt. For kdm, the KDE display manager, add your command to /etc/kde4/kdm/Xsetup (or /etc/kde3/kdm/Xsetup for older versions) (that's the path on Debian, I haven't verified that Kubuntu uses the same path). For gdm (the Gnome display manager), add your command to /etc/gdm/Init/Default. For xdm (the traditional X display manager), add your command to /etc/X11/xdm/Xsetup. |
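As a concrete sketch of the fix described in the UPDATE (the quoting of the device and property names is an assumption, since the dump stripped the original quotes, and the rest of the file's contents vary per system):

```sh
# /etc/kde4/kdm/Xsetup -- kdm runs this as root before showing the greeter.
# Add the calibration line above whatever command the file already contains:
xinput set-int-prop "3M 3M USB Touchscreen - EX II" "Evdev Axis Calibration" 32 14410 2146 14574 2115
```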
_unix.237685 | I can't create two files with the same name in different cases in a folder. As an example, if the file names are like below: test.java and Test.java, a warning message appears saying The name Test with extension .java is already taken. Please choose a different name. In Linux we can do this; how can I do it in Mac OS X? | how can I create two files with same name with different case in mac osx | files;osx | You generally can't. Mac OS X is typically case-insensitive. It's actually a per-partition setting, AFAIK set when formatting the partition. The default is case-insensitive, and case-sensitive is known for breaking third-party apps. If you decide you need it, I suggest creating a case-sensitive disk image to store those files. (You might also like to know about another site in the Stack Exchange network, Ask Different, presuming you haven't already heard of them.) |
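A hedged sketch of the disk-image suggestion from the answer (size and names are illustrative):

```sh
# Create and mount a 1 GB case-sensitive image
hdiutil create -size 1g -fs "Case-sensitive HFS+" -volname CaseSensitive ~/case-sensitive.dmg
hdiutil attach ~/case-sensitive.dmg
# Both files can now coexist on the mounted volume
touch /Volumes/CaseSensitive/test.java /Volumes/CaseSensitive/Test.java
```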
_softwareengineering.264280 | The company I work for is evaluating some middleware solutions for governance, metering and security of web services. Currently, we're using an Enterprise Service Bus (ESB) for this purpose, but some cool guys in management decided they are going to deploy some API Management Middleware. I researched a bit about these API Management (aka API Gateway) solutions but couldn't find the difference between them and actual ESBs. I evaluated some white papers from Mule, WSO2, Oracle etc., but the features offered by both products seem to be almost the same. The question is: what can an API Management solution do that an ESB cannot, and vice versa? What value can be added to an IT infrastructure by replacing an ESB with an API Gateway? | Differences between API Gateways and ESBs? | integration;middleware | The reason you're getting the concepts jumbled up is that the vendors are selling them in a package. But they are definitely separate concepts. An API Gateway provides a central access point for managing, monitoring, and securing access to your publicly exposed web services. It would also allow you to consolidate services across disparate endpoints as if they were all coming from a single host. For example, let's say you had ten different service endpoints that were all part of a single suite of services. Rather than informing consumers of your service to use service1.yourcompany.com for one service and service2.yourcompany.com for another and so forth, you can instead have them all point to api.yourcompany.com/service1 or api.yourcompany.com/service2, and the gateway would be responsible for redirecting the requests to the appropriate endpoints. An ESB is an internal bus that allows applications and services to communicate with each other in an uncoupled fashion. All applications can hook into the bus and they can receive any message that interests them when published by another application. They can also publish their own messages that another application may listen for and respond to. The applications are not responsible for connecting with each other directly; they publish their messages to the bus and all interested parties listen and react. Logically the API Gateway is not a replacement for an ESB but rather an enhancement for a service-oriented architecture. |
_unix.195641 | I am using SUSE Linux 13.2 with an HP OfficeJet 6700. I want to duplex-print on DIN A4 paper (German industry norm, about the letter size in the US; to be exact, 210x297 millimetres). This is not possible without illegal margins. I am calling margins illegal if information that should get printed gets lost. I even blogged about it here: http://www.linuxintro.org/wiki/HP_OfficeJet_6700 The situation is as follows: printing a test page on DIN A4, non-duplex, with the hpijs driver works well (3x13 mm legal margin); printing a test page on DIN A4, non-duplex, with the hpcups driver works great (3x3 mm legal margin); printing from LibreOffice (set to paper size of DIN A4) with the hpcups driver: illegal margins; printing from LibreOffice (set to paper size of DIN A4) with the hpijs driver: no illegal margins; duplex print: illegal margins in every case listed above. There are the following paper size settings that seem plausible for my situation: A4 210x297mm; A4 Borderless 210x297mm; Index Card A4 210x297mm; A4 AutoDuplex 210x297mm; Custom. But none of them, not even Custom, makes the illegal margins go away as long as I choose duplex print (long edge). Do you have a solution, or maybe just a step for troubleshooting this further? What application would I use to print a real test page (as the cups test page obviously does not represent what applications send to the printer)? | how to duplex-print with a HP OfficeJet 6700 | printing;cups | null |
_hardwarecs.4207 | I am interested in internally integrating a CD-RW IDE unit (Plextor Plexwriter Premium 2) using the existing mainboard interfaces (USB 3, SATA 3, PCIe), mostly for burning CDs with AMQR, not for ripping. As far as I've read, using an IDE converter can be a tricky job and there are some variables including chip manufacturer, etc. Has anyone used one of the above solutions without experiencing transfer problems / buffer underruns / loss of quality? Thanks | CD-RW IDE to USB SATA PCIe interface | audio recording | In terms of getting standardized performance out of your IDE drive, the best advice I can give you is to add an IDE controller card to your PC so that you can get a direct IDE-to-bus connection going over the PCI or PCI-E interface. I've had bad luck getting SATA > IDE adapters to work and would not recommend them. The big problem here is that an IDE controller card costs about as much as, if not more than, a new SATA DVD-RW drive, which would also yield increased performance. It might make more sense simply to bite the bullet and upgrade the drive. |
_webapps.6838 | I know you can synchronize folders using online storage services, but do any such sites offer a synchronization tool? I would like to keep my folder identical to its cloud copy, on more than one computer. If I delete a file from computer A, it should be deleted both in the cloud and then on computer B. | Is there any folder synchronization service? | sync;storage;online storage;synchronization | Dropbox does exactly what you want. |
_unix.259070 | I was installing Prey through a .deb file downloaded from the official website when, all of a sudden, I realized that I have so many unnecessary installed packages on my Ubuntu laptop. This has been my sequence of actions: Tried to sudo dpkg -i prey.deb. Didn't work because of missing packages/conflicts: prey:i386 depends on sudo. prey:i386 depends on python. prey:i386 depends on python-gtk2. prey:i386 depends on scrot. prey:i386 depends on streamer. prey:i386 depends on mpg123. prey:i386 depends on dmidecode. prey:i386 depends on gksu. I then did a sudo apt-get update (all good) and a sudo apt-get upgrade (failed because the previous package installation was unsuccessful, I think). APT suggested doing apt-get -f install, so I did. All of a sudden I realized I have an incredibly long list of unnecessary packages that I did NOT have (yesterday, at least): aglfn asymptote asymptote-doc checkbox-ng checkbox-ng-service cm-super cm-super-minimal context context-modules fonts-cabin fonts-comfortaa fonts-dejavu-extra fonts-ebgaramond fonts-ebgaramond-extra fonts-font-awesome fonts-freefont-otf fonts-gfs-artemisia fonts-gfs-baskerville fonts-gfs-bodoni-classic fonts-gfs-complutum fonts-gfs-didot fonts-gfs-didot-classic fonts-gfs-gazis fonts-gfs-neohellenic fonts-gfs-olga fonts-gfs-porson fonts-gfs-solomos fonts-gfs-theokritos fonts-hosny-amiri fonts-inconsolata fonts-junicode fonts-lato fonts-linuxlibertine fonts-lobster fonts-lobstertwo fonts-oflb-asana-math fonts-roboto fonts-sil-gentium fonts-sil-gentium-basic fonts-sil-gentiumplus fonts-stix freeglut3 giblib1:i386 gstreamer0.10-alsa gstreamer0.10-plugins-good gstreamer0.10-x lcdf-typetools libasound2:i386 libatk1.0-0:i386 libaudit1:i386 libavahi-client3:i386 libavahi-common-data:i386 libavahi-common3:i386 libbz2-1.0:i386 libcairo2:i386 libcomerr2:i386 libcups2:i386 libdatrie1:i386 libdb5.3:i386 libdbus-1-3:i386 libdbus-glib-1-2:i386 libdv4:i386 libffi6:i386 libfontconfig1:i386 libfreetype6:i386 libftgl2 libgconf-2-4:i386 libgcrypt20:i386 libgdk-pixbuf2.0-0:i386 libgif4:i386 libglib2.0-0:i386 libgmp10:i386 libgnome-keyring0:i386 libgnutls-deb0-28:i386 libgpg-error0:i386 libgpm2:i386 libgraphite2-3:i386 libgsl0ldbl libgssapi-krb5-2:i386 libgtk2.0-0:i386 libharfbuzz0b:i386 libhogweed4:i386 libid3tag0:i386 libimlib2:i386 libintl-perl libjbig0:i386 libjpeg-turbo8:i386 libjpeg8:i386 libk5crypto3:i386 libkeyutils1:i386 libkrb5-3:i386 libkrb5support0:i386 libltdl7:i386 liblzma5:i386 libmpg123-0:i386 libncursesw5:i386 libnettle6:i386 libosmesa6 libp11-kit0:i386 libpam-modules:i386 libpam0g:i386 libpango-1.0-0:i386 libpangocairo-1.0-0:i386 libpangoft2-1.0-0:i386 libpcre3:i386 libpixman-1-0:i386 libpng12-0:i386 libpoppler-qt5-1 libprojectm2v5 libpython-stdlib:i386 libpython2.7-minimal:i386 libpython2.7-stdlib:i386 libpython3.5-minimal libpython3.5-stdlib libqca2-plugins libqca2v5 libqt5script5 libqxt-core0 libqxt-gui0 libreadline6:i386 libselinux1:i386 libsigsegv2 libsqlite3-0:i386 libssl1.0.0:i386 libstartup-notification0:i386 libsystemd0:i386 libtasn1-6:i386 libtext-unidecode-perl libthai0:i386 libtiff5:i386 libtinfo5:i386 libv4l-0:i386 libv4lconvert0:i386 libx11-xcb1:i386 libxcb-render0:i386 libxcb-shm0:i386 libxcb-util1:i386 libxcomposite1:i386 libxcursor1:i386 libxdamage1:i386 libxfixes3:i386 libxi6:i386 libxinerama1:i386 libxml-libxml-perl libxml-namespacesupport-perl libxml-sax-base-perl libxml-sax-expat-perl libxml-sax-perl libxrandr2:i386 libxrender1:i386 linux-image-4.2.0-16-generic linux-image-4.2.0-18-generic linux-image-4.2.0-19-generic linux-image-4.2.0-22-generic
linux-image-extra-4.2.0-16-generic linux-image-extra-4.2.0-18-generic linux-image-extra-4.2.0-19-generic linux-image-extra-4.2.0-22-generic linux-signed-image-4.2.0-18-generic linux-signed-image-4.2.0-19-generic linux-signed-image-4.2.0-22-generic m-tx mpg123:i386 musixtex pfb2t1c2pfb plainbox-secure-policy pmx python3-checkbox-ng python3-checkbox-support python3-jinja2 python3-plainbox python3-pyparsing python3-xlsxwriter python3.5 python3.5-minimal qml-module-qtquick-localstorage qtdeclarative5-localstorage-plugin scrot:i386 streamer:i386 sudo:i386 tex4ht tex4ht-common texinfo texlive-fonts-extra texlive-fonts-extra-doc texlive-formats-extra texlive-games texlive-generic-extra texlive-humanities texlive-humanities-doc texlive-lang-african texlive-lang-arabic texlive-lang-cyrillic texlive-lang-czechslovak texlive-lang-english texlive-lang-european texlive-lang-french texlive-lang-german texlive-lang-greek texlive-lang-indic texlive-lang-italian texlive-lang-polish texlive-lang-portuguese texlive-lang-spanish texlive-luatex texlive-math-extra texlive-music texlive-omega texlive-plain-extra texlive-publishers texlive-publishers-doc texlive-science-doc texlive-xetex ttf-adf-accanthis ttf-adf-gillius ttf-adf-universalis ttf-dejavu-core xawtv-plugins:i386 zlib1g:i386. Note that apart from this long list, apt also said that the following packages would be removed (sudo??): The following packages will be REMOVED: plainbox-provider-resource-generic prey:i386 sudo. So because of all that, I aborted apt-get -f install, just in case... Because I wasn't sure about the dpkg process, I undid the first command by executing dpkg --purge prey. At this point, I checked the list of unnecessary packages (apt-get -f install) again and it was reduced, but still long enough to make me cancel this command.
This is the list of packages that apt wants to uninstall because they are not necessary: aglfn asymptote asymptote-doc checkbox-ng checkbox-ng-service cm-super cm-super-minimal context context-modules fonts-cabin fonts-comfortaa fonts-dejavu-extra fonts-ebgaramond fonts-ebgaramond-extra fonts-font-awesome fonts-freefont-otf fonts-gfs-artemisia fonts-gfs-baskerville fonts-gfs-bodoni-classic fonts-gfs-complutum fonts-gfs-didot fonts-gfs-didot-classic fonts-gfs-gazis fonts-gfs-neohellenic fonts-gfs-olga fonts-gfs-porson fonts-gfs-solomos fonts-gfs-theokritos fonts-hosny-amiri fonts-inconsolata fonts-junicode fonts-lato fonts-linuxlibertine fonts-lobster fonts-lobstertwo fonts-oflb-asana-math fonts-roboto fonts-sil-gentium fonts-sil-gentium-basic fonts-sil-gentiumplus fonts-stix freeglut3 gstreamer0.10-alsa gstreamer0.10-plugins-good gstreamer0.10-x lcdf-typetools libftgl2 libgsl0ldbl libintl-perl libosmesa6 libpoppler-qt5-1 libprojectm2v5 libpython3.5-minimal libpython3.5-stdlib libqca2-plugins libqca2v5 libqt5script5 libqxt-core0 libqxt-gui0 libsigsegv2 libtext-unidecode-perl libxml-libxml-perl libxml-namespacesupport-perl libxml-sax-base-perl libxml-sax-expat-perl libxml-sax-perl linux-image-4.2.0-16-generic linux-image-4.2.0-18-generic linux-image-4.2.0-19-generic linux-image-4.2.0-22-generic linux-image-extra-4.2.0-16-generic linux-image-extra-4.2.0-18-generic linux-image-extra-4.2.0-19-generic linux-image-extra-4.2.0-22-generic linux-signed-image-4.2.0-18-generic linux-signed-image-4.2.0-19-generic linux-signed-image-4.2.0-22-generic m-tx musixtex pfb2t1c2pfb plainbox-provider-resource-generic plainbox-secure-policy pmx python3-checkbox-ng python3-checkbox-support python3-jinja2 python3-plainbox python3-pyparsing python3-xlsxwriter python3.5 python3.5-minimal qml-module-qtquick-localstorage qtdeclarative5-localstorage-plugin tex4ht tex4ht-common texinfo texlive-fonts-extra texlive-fonts-extra-doc texlive-formats-extra texlive-games texlive-generic-extra texlive-humanities texlive-humanities-doc texlive-lang-african texlive-lang-arabic texlive-lang-cyrillic texlive-lang-czechslovak texlive-lang-english texlive-lang-european texlive-lang-french texlive-lang-german texlive-lang-greek texlive-lang-indic texlive-lang-italian texlive-lang-polish texlive-lang-portuguese texlive-lang-spanish texlive-luatex texlive-math-extra texlive-music texlive-omega texlive-plain-extra texlive-publishers texlive-publishers-doc texlive-science-doc texlive-xetex ttf-adf-accanthis ttf-adf-gillius ttf-adf-universalis ttf-dejavu-core I recall having this list populated with some linux-signed-image... and others yesterday, but definitely didn't have all of them. In fact, some of these packages I know for sure are being used (e.g. texlive-*, fonts-*, ttf-*, python-*...). What might I have broken and how could I revert this? I suspect the error comes from step 3 but I'm not certain about it. UPDATE: Before even tinkering around with debfoster as suggested in the comments, I have checked some packages and I have noticed that: ubuntu-desktop is not installed (?!) -- and I'm NOT using KDE nor XFCE. | Why do I have so many unnecessary packages?
| apt;package management;dpkg;deb | There are a few routines, old wives' tales, for finding and then cleaning out unnecessary packages, in addition to the already suggested debfoster.

(first) but, why is that package installed?

A tool you will want to use while cleaning out packages is aptitude why pkg-name. From the aptitude man page: $ aptitude why kdepim i nautilus-data Recommends nautilus i A nautilus Recommends desktop-base (>= 0.2) i A desktop-base Suggests gnome | kde | xfce4 | wmaker p kde Depends kdepim (>= 4:3.4.3) This only prints out the strongest dependency chain, but will answer many questions quickly. There is also why-not, which is not so relevant to removing packages.

package removed, config files remaining

You can find packages that are no longer used by yourself but that still have configuration files and the like remaining. To do this, open a terminal and type dpkg-query -l '*' | grep ^rc | awk '{print $2}' | xargs > apt_rc_removeList.lst The list generated is of all the packages in the 'rc' state - removed but configuration files remaining. These leftover files you will now remove, but first look over the entries listed in the apt_rc_removeList.lst file, to check that you do want all of this cruft removed. Now type aptitude purge `cat apt_rc_removeList.lst` and all this cruft will be removed.

gtkorphan

Another application you can use to find leftover packages is gtkorphan. From gtkorphan's description in the apt system: GtkOrphan is a graphical tool which scans your Debian system, looking for orphaned libraries. It implements a GUI front-end to deborphan, but adds the package removal capability. A detailed documentation on the program can be found at: http://www.marzocca.net/linux/gtkorphan.html. You can use this to help clean out packages in other sections (other than 'libs') too.

mark uninteresting packages as dependencies: remove asap

In aptitude, in one sub-category of your Installed Packages, type l (the letter 'el') and then in the box that appears enter ?not(?automatic). This will now show only packages that are not dependencies of other packages. Now scroll over each of these, and on every package that does not interest you directly, hit the M key. This will not remove any packages, but mark each package as only here because, and while, something depends on it. Now go through the sections one by one. Most of the 'only as a dependency' packages will be in the libs section.

mark all packages matching 'pattern' as 'auto': remove asap

All of the '-dev' packages can be marked for removal-if-not-required by aptitude markauto ~i~n\-dev$

clean out an entire category

An entire category (CATEGORY_NAME) can be cleaned out with aptitude purge '~sCATEGORY_NAME ! ~exceptThisApp' |
_unix.20056 | I have a problem connecting to a host via ssh. It gives me this error: debianbox@debian:~$ ssh [email protected]
ssh_exchange_identification: Connection closed by remote host
The problem is that I used to connect to that machine before with the same account, but now, I don't know what happened, I just get this error. I tried to warn the admin, but he says that everything works fine. Can somebody tell me what the problem is? | Unable to connect to host via ssh | ssh | You mentioned in the comments that connecting from other addresses works, so most likely you have something like denyhosts running. Denyhosts detects failed SSH attempts and (if there are too many) blocks connections from that address. Check your /etc/hosts.deny file to see if your machine's IP address is in there, and remove it if so. You can add it to /etc/hosts.allow if you like, so it will always be able to connect even if Denyhosts blocks it again. (Adapted from several comments on the question) |
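To make the answer's suggestion concrete, a sketch of the checks on the server (the address 203.0.113.5 is a placeholder for your client's IP):

```sh
# Is the client's address blocked?
grep -n '203.0.113.5' /etc/hosts.deny
# Remove the blocking line, then (optionally) whitelist the address
sudo sed -i '/203.0.113.5/d' /etc/hosts.deny
echo 'sshd: 203.0.113.5' | sudo tee -a /etc/hosts.allow
```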
_unix.60142 | I am trying to boot the linux kernel (bzImage) in QEMU but have had issues. After asking on U&L I found out that my problem was that I was booting the kernel without a filesystem to boot from. So how can I create a VFS to boot the kernel image? | Creating a Virtual filesystem to boot linux | linux;filesystems;boot;linux kernel;qemu | null |
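The answer field above is empty; a minimal sketch of the usual approach, assuming a statically linked busybox is available (paths and the busybox location are assumptions):

```sh
# Build a tiny initramfs around busybox
mkdir -p initramfs/bin initramfs/proc initramfs/sys
cp /bin/busybox initramfs/bin/
cat > initramfs/init <<'EOF'
#!/bin/busybox sh
/bin/busybox --install -s /bin
mount -t proc none /proc
mount -t sysfs none /sys
exec sh
EOF
chmod +x initramfs/init
( cd initramfs && find . | cpio -o -H newc | gzip ) > initramfs.cpio.gz

# Boot the kernel against it
qemu-system-x86_64 -kernel bzImage -initrd initramfs.cpio.gz \
    -append "console=ttyS0 rdinit=/init" -nographic
```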
_softwareengineering.354669 | I'm looking to do the following for 1000s of items: 1) get time series data for an item from file 2) calculate mean and standard deviation 3) calculate final calculation using mean and standard deviation 4) add value to list. Currently the file is one huge CSV of all items and all dates, so there isn't much use multithreading that part; I will load it into memory. I would like to do the calculations via multithreading and am hoping to use the TPL (Task Parallel Library). I have a good idea of how to do this; for example, I could use a parallel for-each and do the following: 1) get time series data for a specific item 2) calculate mean 3) calculate standard deviation 4) calculate final calculation 5) add to thread-safe dictionary. Even though this is multithreaded, it is still very sequential within the thread itself, so I thought the following would be better: a queue for time series data retrieval, a queue for items to process for mean and standard deviation, a queue for items to process for the final calculation, and a couple of threads per queue picking up and working on each item. Basically, will there be much benefit in me having more control, or shall I just use the parallel for-each? | TPL parallel for each or custom queue for multithreaded data processing | c#;multithreading;message queue | null |
_webmaster.65521 | If I pick a hosting package that offers CDN support, then how do server stat tools work, since web pages are served by different servers from all over the world?Can I rely on them in that case? | Do server stat tools work with CDNs? | cdn;statistics;logging | Generally with any server-side language in play, your webpages are going to be created by the server directly and won't be duplicated by the CDN.Instead, a CDN is generally most effective when used on static assets like images, audio or video files, pdfs, word docs, etc. But most of all images, including the images in any page on the server. So you won't get the same stats simply for those static assets, but you'll get most page loads.In general, I wouldn't worry too much about inaccuracies due to CDN usage, the various stats packages generally have some wiggle room in terms of what they can actually pick up accurately anyway.And, of course, the benefits of a CDN are worth it if you get a speed boost out of it. |
_webmaster.26244 | Possible Duplicate: SEO one longer page vs. several targeted subpages? Is it better to have just one big page with nice descriptions of products, or just one small page with links to small pages that each have one description of one product? Of course you may think: what a dumb question! Of course the more pages you have the better! What I mean is: if the pages of your products are very small, and the main page is very small too, maybe Google will ignore it or flag it as useless (or whatever), whereas a nice and big page still has the potential to be properly indexed by Google. What do you think? | SEO: many small pages or only one big? | seo | Very good question. I see that you're presenting two extremes: huge amounts of information on one page, or almost no information on multiple pages. Obviously you have already acknowledged that almost no information will lead to a flag, whereas internal links can help. SEO is all about balancing your options with your needs. If you have a ton of products, but only a little bit of information regarding each, instead of going the simple way of having them all listed just once, include some beef on the homepage and add some featured products, sort by this or that, categorize, etc. In the interest of the soul of the question though, I would definitely suggest using just one page. It might not get high rankings, but at least you will avoid being flagged by a search engine. |
_cs.71845 | I have encountered a problem in class and tried solving it, but I ran into difficulties; I will include my ideas and the problems I faced. Assume $F$ is a PRF. 1. Denote $P_k(x) = F_k(x) \oplus F_k(1^n)$ for any $n \in N$, $k \in \{0,1\}^n$ and $x \in \{0,1\}^n$. Is $P$ necessarily a pseudorandom function? 2. Denote $P_k(x) = F_k(x) \oplus 1^n$ for any $n \in N$, $k \in \{0,1\}^n$ and $x \in \{0,1\}^n$. Is $P$ necessarily a pseudorandom function? My solution: I noticed that for every $k$ it happens that $P_k(1^n)=F_k(1^n) \oplus F_k(1^n)=0^n$, therefore I thought that $P_k$ isn't a PRF, and I constructed a distinguisher $D^o$, which leaves me with $|P(D^{P_k(.)}(1^n)=1)-P(D^{f(.)}(1^n)=1)|=1-v(n)$ where $v(n)$ is negligible. The question here is: do I need to exhibit $v(.)$, and which function is it? For the second part, I have assumed towards contradiction that $P_k$ is not a PRF, thus a distinguisher $D^o$ (with an oracle) can distinguish that it is not random in PPT time (please excuse my English), such that $|P(D^{P_k(.)}(1^n)=1)-P(D^{f(.)}(1^n)=1)|>\frac1{p(n)}$ where $p(n)$ is a polynomial. Now that I have this assumption and understanding, I created a new distinguisher that encloses $D$ and uses it in order to get my contradiction - that $F$ isn't a PRF if $P_k$ isn't. I have a bit of a problem with showing how the enclosing distinguisher will lead to a contradiction. I hope that I was correct with my method, and that this is indeed a solution to these two problems. | creating new PRF from existing one | cryptography;pseudo random generators | null |
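A worked sketch for part 1 (my own filling-in, not from the thread): let $D$ query its oracle once at $1^n$ and output 1 iff the reply is $0^n$. Then

```latex
P(D^{P_k(\cdot)}(1^n)=1) = 1,
\qquad
P(D^{f(\cdot)}(1^n)=1) = P_f\left(f(1^n)=0^n\right) = 2^{-n},
```

so the gap is exactly $1 - 2^{-n}$; the negligible function can be exhibited explicitly as $v(n) = 2^{-n}$, which answers the first question.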
_unix.180943 | I'm on a Mac but I think this is generally Unix-applicable. I'm in the process of learning shell scripting and there's something I seem to be missing. When I'm in the ordinary terminal, I can use scripting syntax like for loops and such in conjunction with commands to do stuff. But... bash opens an interpreter for shell scripting. Which is where I get confused, because isn't the terminal already an interpreter for shell scripting, as demonstrated by the fact that the scripting works when given to stdin? Bonus question: how is bash different from bash -i, which according to man starts an interactive session... isn't that what happens when you just enter bash on its own? Which, to my eye, is no different from being in the normal terminal in the first place... | Terminal vs bash? | bash;shell;terminal | When you launch a terminal it will always run some program inside it. That program will generally, by default, be your shell. On OS X, the default shell is Bash. In combination that means that when you launch Terminal you get a terminal emulator window with bash running inside it (by default). You can change the default shell to something else if you like, although OS X only ships with bash and tcsh. You can choose to launch a custom command in a new terminal with the open command: open -b com.apple.terminal somecommand. In that case, your shell isn't running in it, and when your custom command terminates that's the end of things. If you run bash inside your terminal that is already running bash, you get exactly that: one shell running another. You can exit the inner shell with Ctrl-D or exit and you'll drop back to the shell you started in. That can sometimes be useful if you want to test out configuration changes or customise your environment temporarily; when you exit the inner shell, the changes you made go away with it. You can nest them arbitrarily deeply. If you're not doing that, there's no real point in launching another one, but a command like bash some-script.sh will run just that script and then exit, which is often useful. The differences between interactive and non-interactive shells are a bit subtle and mostly deal with which configuration files are loaded, which error behaviours there are, and whether aliases and similar are enabled. The rough principle is that an interactive shell gives you the settings you'd want for sitting in front of it, while a non-interactive shell gives you what you'd want for a standalone script. All of the differences are documented explicitly in the Bash Reference Manual, and also in a dedicated question on this site. For the most part, you don't need to care. There's not often a reason to launch another shell, and when you do you'll have a specific purpose in mind and know what to do with it. |
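A quick way to see the interactive/non-interactive distinction the answer describes (illustrative; works in bash):

```sh
# $- lists the shell's option flags; interactive shells include "i"
case $- in
  *i*) echo "interactive" ;;
  *)   echo "non-interactive" ;;
esac
# Compare: typing `echo $-` at a prompt (contains i) vs `bash -c 'echo $-'` (does not)
```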
_unix.146015 | I have a bunch of redis service instances, and I would like to add a label to them in the output of the ps command.Currently I see:$ ps aux | grep redisroot <snipped> /usr/local/bin/redis-server *:6381 root <snipped> /usr/local/bin/redis-server *:6380 Is there a way to have an output like this:root <snipped> /usr/local/bin/redis-server *:6381 item cache # <== labelroot <snipped> /usr/local/bin/redis-server *:6380 page cache # <== labeli.e. adding a text label to easily identify what each of those instances is for.Is there a way to do this instead of having to make copies of the binary? | Adding a label to start-stop-daemon service in process list | ubuntu;start stop daemon | Assuming redis-server does not have built-in support for changing its own command name after startup (some programs, especially daemons, do have such support), there are a few things you can do:Use an alternate command name.Although the first argument in the command line (argv[0]) is normally the name of the binary used to invoke a command (either its full path name or its base name), it doesn't have to be. And if it isn't, then the application itself probably won't notice or care. But shells launch commands with argv[0] set following this convention so you have to launch it in a special way.To do this, you would probably want to modify the /etc/init.d script that launches this daemon.Make hard links to the binary and launch those. This is similar to your suggestion of copying the binary, but copies are unnecessary. If you use hard links, the binary will not occupy any additional disk space and the code (text) of the multiple instances will all share memory, which won't happen with copies. |
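Both approaches from the answer can be sketched as follows (config paths and labels are illustrative):

```sh
# Alternate argv[0] via bash's `exec -a`, e.g. inside a per-instance wrapper script;
# note that exec replaces the wrapper's shell process with the daemon
exec -a "redis-server: item cache" /usr/local/bin/redis-server /etc/redis/6381.conf

# Or: hard links -- no extra disk space, and the instances share the code pages
ln /usr/local/bin/redis-server /usr/local/bin/redis-item-cache
/usr/local/bin/redis-item-cache /etc/redis/6381.conf
```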
_cogsci.3482 | I've recently become aware that there's a whole field of quantified self - using various methodologies to collect data about human performance in an attempt to quantify how the human body/brain works. I'm sure a lot of the methodology used is not rigorous or scientific, but I'm interested nonetheless. Are there any third-party tools/mods/apps that monitor the performance of people playing video games? I'm looking at cameras that track people's eye motion, programs that run in the background (like RescueTime), potentially consumer-grade EEG data, or mods that keep logs of games played. Here are some examples: In some strategy games, like StarCraft, the concept of actions per minute - APM - is important, and the user is expected to manage dozens of different unit interactions and commands. It would be interesting to see a plot of APM vs game time vs time of day. In some shooter video games, especially multiplayer, the concept of kill-to-death ratio is important - how many times a person has killed before being killed. I would expect that there's some kind of correlation between cognitive performance and the ability to stay alive while accomplishing objectives. It would be interesting to see how this is related to the time of day or the person's vital signs. If there already have been studies done on the subject, I'm interested in who holds/has the data on | What Quantified Self research tools are there for measuring performance while playing videogames? | methodology;video games | null |
_softwareengineering.355484 | I'm creating my own social media website, and I'm faced with a dilemma. There are 2 ways I could design my classes and database, but I don't know which one is more correct. Users can make posts on the website by uploading images, videos, and plain text. At first I wanted to make a class for each post type, which inherits from a parent post class, then write all the post attributes of the post class with a db class to one database table called posts. However, during the process I realized that this could be done in another way as well: Instead of using inheritance, I would create a table for each type of post (image, video, text), then write each type of user post to its own separate table. Next I would create a 4th table that selects all the information from the other three tables and creates an object for each post. Then each post can be displayed on the website. This database would be in third normal form. If this was written in languages like Java or C# I would definitely use inheritance because of my knowledge of those languages, but since I'm still learning PHP, I'm not sure which design is better. | Database and class design for posting user data to a website | php;database design;inheritance | I prefer the second option in this scenario. It would result in more flexibility and ultimately more functionality for your social media website. I am not sure what exactly you are planning. But when I think of social media websites, I want to post text and attach videos and/or photos to that post. So it might make sense to allow both text and a photo to belong to a post. This is not possible, and won't ever be, if you go for the inheritance approach. In more general terms you might also consider the composition-over-inheritance principle. Although object-oriented languages seem to be centered around inheritance, and you often learn inheritance as one of the core features of OOP, it is often advisable to prefer composing objects instead of inheriting from them. Your second approach is exactly that: you compose the post with another (or several other) objects, which are text/photos/videos. |
_codereview.71459 | I'm using Caliburn Micro to create a WPF application. So what I want to do here is a typical Master/Detail situation. I'm displaying a list of Users and you can add/edit a User and save the changes back to the database. Everything works, but I'm just not sure if I'm creating the UserViewModels in the correct way. I lay awake at night yearning for a more elegant way. Anything else that I'm doing wrong, please let me know. It greatly helps me out and I appreciate every comment I get. Even if you must call me an idiot. Many thanks in advance! PS: I left a few implementation details out to keep it brief. But let me know if anything else would help. public class UserWorkspaceViewModel : Conductor<UserViewModel>.Collection.OneActive{ private IUnitOfWork _unitOfWork; private IUserViewModelFactory _userViewModelFactory; public UserWorkspaceViewModel(IUnitOfWork unitOfWork, IUserViewModelFactory userViewModelFactory) { _unitOfWork = unitOfWork; _userViewModelFactory = userViewModelFactory; LoadUsers(); } public void LoadUsers() { var users = _unitOfWork.Users.GetByDepartmentId(1); foreach (User user in users) { // Use the factory to create the ViewModel UserViewModel viewModel = _userViewModelFactory.CreateInstance(user); // Caliburn Micro specific: add user to the screen collection this.Items.Add(viewModel); } }} Here's the UserViewModelFactory implementation: public class UserViewModelFactory{ private IUnitOfWork _unitOfWork; public UserViewModelFactory(IUnitOfWork unitOfWork) { _unitOfWork = unitOfWork; } public UserViewModel CreateInstance(User user) { UserViewModel vm = new UserViewModel(_unitOfWork); // Use AutoMapper to map properties from user to VM Mapper.Map<User, UserViewModel>(user, vm); return vm; }} And finally here's the UserViewModel: public class UserViewModel : Screen{ private IUnitOfWork _unitOfWork; #region Properties // Properties like FirstName, LastName, etc. #endregion public UserViewModel(IUnitOfWork unitOfWork) { _unitOfWork = unitOfWork; } public void Save() { if (this.Id == 0) { User user = Mapper.Map<UserViewModel, User>(this); _unitOfWork.Users.Add(user); _unitOfWork.SaveChanges(); } if (this.Id > 0) { User user = _unitOfWork.Users.GetById(this.Id); Mapper.Map<UserViewModel, User>(this, user); _unitOfWork.SaveChanges(); } }} | Creating list ViewModels in the correct way | c#;dependency injection;wpf;mvvm | null |
_cs.47833 | How might one compute $4^{-1} \mod 17$? I know the answer is 13. I'm just not sure how to arrive at that number, and can't find any good explanations. Any help would be great. | Computing mod inverse? | modular arithmetic | In order to compute the inverse of $a$ modulo $n$, use the extended Euclidean algorithm to find the GCD of $a$ and $n$ (which should be 1), together with coefficients $x,y$ such that $ax + ny = 1$. The inverse of $a$ modulo $n$ is thus $x$. The extended Euclidean algorithm gives a constructive proof of Bézout's identity, which states that for all integers $a,b$ there exist integers $x,y$ such that $ax+by = \mathrm{gcd}(a,b)$. A different proof shows that the minimal positive value of $ax+by$ (over all $x,y$) is $\mathrm{gcd}(a,b)$. The extended Euclidean algorithm works in greater generality, for any Euclidean domain. An important example is the ring of polynomials over a field. |
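For the instance asked about, the computation spelled out (a worked example of the answer's method):

```latex
17 = 4 \cdot 4 + 1
\;\Rightarrow\; 1 = 17 - 4 \cdot 4
\;\Rightarrow\; 4 \cdot (-4) \equiv 1 \pmod{17}
\;\Rightarrow\; 4^{-1} \equiv -4 \equiv 13 \pmod{17}.
```

Check: $4 \cdot 13 = 52 = 3 \cdot 17 + 1$.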
_webapps.27994 | My yahoo email account was sending spams at about 6:40pm today. I immediately updated my password. I also checked Recent Login Activity, but all recorded locations and IP addresses from 4:47 PM yesterday till now are my own. I wonder how my account was possible to send spams while the recorded login acctivities are normal?Thanks!ADDED: header of one spam sent and saved in my Sent folderFrom Tim Thu Jun 14 15:42:07 2012X-YMail-OSG: ivy79oIVM1k8kPIPgi4nfJh2JPdWcnzc7If0UmOfBQtmnkB nEmfLnPHJReceived: from [187.41.82.250] by web162602.mail.bf1.yahoo.com via HTTP; Thu, 14 Jun 2012 15:42:07 PDTX-Mailer: YahooMailWebService/0.8.118.349524Message-ID: <1339713727.16968.BPMail_high_noncarrier@web162602.mail.bf1.yahoo.com>Date: Thu, 14 Jun 2012 15:42:07 -0700 (PDT)From: Tim <[email protected]>Subject: HITo: [email protected]: [email protected], [email protected], [email protected], MIME-Version: 1.0Content-Type: text/plain; charset=us-asciiContent-Length: 71Yahoo notice of failure to deliver the spam to some intended addressSorry, we were unable to deliver your message to the following address.<[email protected]>:Remote host said: 550 5.1.1 <[email protected]> User unknown; rejecting [RCPT_TO]--- Below this line is a copy of the message.Received: from [98.139.212.148] by nm21.bullet.mail.bf1.yahoo.com with NNFMP; 14 Jun 2012 22:42:08 -0000Received: from [98.139.212.214] by tm5.bullet.mail.bf1.yahoo.com with NNFMP; 14 Jun 2012 22:42:08 -0000Received: from [127.0.0.1] by omp1023.mail.bf1.yahoo.com with NNFMP; 14 Jun 2012 22:42:08 -0000X-Yahoo-Newman-Property: ymail-3X-Yahoo-Newman-Id: [email protected]: (qmail 91992 invoked by uid 60001); 14 Jun 2012 22:42:08 -0000DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024; t=1339713728; bh=3k5IzdOBwo7Jx0VjjcU11ALbzymfvrJ2SheLqHngG7s=; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=mk5ksTksAaA1u+2GJaaQoJaClM5AQeOmUn4A9e3xYyJVpER/mKvPB6e5NJlZ2WG1zhOvnrMUHGgqwxMMa7lf3K9tHzGxhbLddTxfM0udgCC2Ws4d7ebgACo2lT/92A9qGxxPIXQCSAEiK8/C7P5rQ6ZAOGOv5xMHuSMY3lUzs9Y=DomainKey-Signature:a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com; h=X-YMail-OSG:Received:X-Mailer:Message-ID:Date:From:Subject:To:MIME-Version:Content-Type; b=enSetbkOfQmTtzS221NeSMw+dVAbV6y4iFhhSye/tdOobEqExxBebaFrFsehnXbU10/kB00lr3EVDJFCcYoJT5Sp9a7bz1r9L3CezVCrqeolUUNSN4R9qjreJCxk3YxcTnm9f//PvAIPDsqadFmZyDXcT5FyUEfiwb0cyERbL90=;X-YMail-OSG: ivy79oIVM1k8kPIPgi4nfJh2JPdWcnzc7If0UmOfBQtmnkBnEmfLnPHJReceived: from [187.41.82.250] by web162602.mail.bf1.yahoo.com via HTTP; Thu, 14 Jun 2012 15:42:07 PDTX-Mailer: YahooMailWebService/0.8.118.349524Message-ID: <1339713727.16968.BPMail_high_noncarrier@web162602.mail.bf1.yahoo.com>Date: Thu, 14 Jun 2012 15:42:07 -0700 (PDT)From: Tim <[email protected]>Subject: HITo: [email protected]: 1.0Content-Type: text/plain; charset=us-ascii | Yahoo email account was used to send spams | spam prevention;yahoo mail | You can have a virus. The one that sends e-mails. If you have at least one spam e-mail try to track computers that had received it in Received header. If it tracks down to your computer - you are vulnerable.They can just use your e-mail when sending spam. Nothing you can do unless your mail provider (Yahoo) would use something like DKIM or SPF. |
_unix.285976 | I have a live distribution of Kali running on a usb with persistence. However, after installing updates and a few new software packages, the root drive is pretty much out of space.How do I go about resizing this? I've tried booting GParted on a separate live USB and extending the drive, however GParted puts a little yellow triangle to the left of the /dev/sdb1 partition and essentially locks it.I have also tried resizing the disk during runtime using resize2fs but to no avail. I have been at this for hours now and I'm at breaking point so if anyone could help me out i'd very much appreciate it. Below is a copy of my fdisk -l output:Disk /dev/sdb: 7.3 GiB, 7864320000 bytes, 15360000 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisklabel type: dosDisk identifier: 0x0a9a1b1aDevice Boot Start End Sectors Size Id Type/dev/sdb1 * 64 6324223 6324160 3G 17 Hidden HPFS/NTFS/dev/sdb2 6324224 6485375 161152 78.7M 1 FAT12/dev/sdb3 6486016 15359999 8873984 4.2G 83 LinuxDisk /dev/loop0: 2.8 GiB, 2969686016 bytes, 5800168 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytes | Expanding the Kali Root Partition | partition;kali linux;gparted | null |
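The answer field above is null; for what it's worth, the fdisk output shows /dev/sdb3 (the 4.2G Linux persistence partition) already ending at the disk's last sector (15359999 of 15360000), so there is no free space on this stick to grow into: the usual fix is a larger USB drive, or shrinking a preceding partition first. If free space were available after the partition, the standard offline sequence would be (a hedged sketch; device names are taken from the fdisk output above):

```sh
# Run from a separate live system; nothing on /dev/sdb may be mounted.
sudo parted /dev/sdb resizepart 3 100%   # grow partition 3 to the end of the disk
sudo e2fsck -f /dev/sdb3                 # check the ext filesystem first
sudo resize2fs /dev/sdb3                 # expand it to fill the partition
```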
_cstheory.33800 | I'm trying to understand the paper : Dependent Types without the Sugar by implementing an interpreter and type checker for the language. In doing so, I've seen that the unfold t as x -> u syntax for recursive definitions (syntax is defined in Section 2.1) binds a variable, but I don't see why that's needed. None of the examples in the paper actually use the variable binding -- they all use a shorthand form unfold t (meaning unfold t as x -> x).I do see that the type checking rule for it (from section 5) uses the variable binding, but I don't understand the implications of this. As far as I can tell unfold t as x -> u is entirely equivalent to let x = unfold t in u.Can someone provide an example of when the variable binding is helpful or necessary? Is there some term that type-checks with the long form of unfold but not with the short form and let? | PiSigma: why does 'unfold' bind a variable? | type theory;dependent type | I don't think there's any magic/necessary reason. IMO, it's written that way to make it more straightforwardly obvious that unfold is an analytic/checking elimination rule; just like why split is written the way it is rather than being written as first and second projections, and why case is written the way it is rather than being written like uneither in Haskell. Re the analytic bit, note how the other elimination rules are either synthetic/inferring (beta) or bidirectional (bang) |
_unix.4151 | Closest I can come is: useradd --home / -r --shell /sbin/nologin someuser But this creates an entry in /etc/passwd that looks something like this: someuser:x:100:100::/:/sbin/nologin I want that '/' gone, so that it looks like this: someuser:x:100:100:::/sbin/nologin Which is achievable through usermod: usermod -d '' someuser But I think this is a bit backwards. Any ideas? | How would you create a user with the HOME_DIR field in /etc/passwd completely blank? | security;users;etc | null |
_unix.114660 | I'm aware of wget -i as a way to download a list of URLs. The only trouble is that I need to pass some different POST data to each one, which works for single urls using wget --post-data= but not for lists.I'm open to any CLI downloader, or even something in JS or Python. I would however like to get either a progress bar for each download or a log file updated each time a dl finishes, or some other way of knowing when a dl finishes. | Download multiple URLs at once | wget;download | null |
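The answer field above is null; one plain-shell sketch that matches the ask (the urls.tsv file name and its one-URL-TAB-POST-data-per-line format are assumptions):

```bash
#!/usr/bin/env bash
# Each wget prints its own progress bar; downloads.log records completions.
while IFS=$'\t' read -r url data; do
    wget --post-data="$data" "$url" &&
        printf 'finished: %s\n' "$url" >> downloads.log
done < urls.tsv
```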
_unix.317920 | I am using Linux Mint 18 Xfce 64-bit.As you can see in the image below, for some reason left or right-snapping some windows like the Terminal, Emacs, etc. leaves these breaks in the middle and the bottom of the desktop (in the image you can see red behind these breaks - it's my desktop background). This does not happen for Thunar, Google Chrome or the majority of other programs. Any idea how to get rid of these breaks?A zoomed view: | Linux Mint Xfce window snapping bug | linux mint;xfce;window manager | null |
_webmaster.25519 | I have over 100 URLs in my website which I need to redirect: E.g. /page/how-to-bake-chocolate-cookies/ will redirect to /cookies/how-to-bake-chocolate-cookies/ /page/contact/ will redirect to /contact-us/ and so on... I want to make sure search engines will catch the new URLs without affecting my site's quality, ranking, etc... How can I best accomplish this, avoid users experiencing 404 errors, and make sure Google will correctly index my new pages? | Redirecting URLs when launching a new website - how to avoid dropping page rank | seo;search engines;htaccess;url rewriting;indexing | null |
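The answer field above is null; the conventional fix for the two examples given is one 301 (permanent) redirect per moved URL, which is also the standard way to signal the move to search engines so rankings transfer instead of 404ing. A sketch using the mod_alias syntax implied by the htaccess tag:

```sh
# Append one permanent redirect per moved URL to the site's .htaccess
cat >> .htaccess <<'EOF'
Redirect 301 /page/contact/ /contact-us/
Redirect 301 /page/how-to-bake-chocolate-cookies/ /cookies/how-to-bake-chocolate-cookies/
EOF
```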
_datascience.13213 | To be more specific, loss reserving models in actuarial science, such as the chain ladder method, can be expressed as GLMs. I have developed a predictive model using neural nets which takes into account some aspects of the insured (it is an individual risk model). Can the output of this model be safely used as an input to the insurance company's existing loss reserve model? | Are there pitfalls in using the output of machine learning model, such as a neural net as the input to a traditional GLM or similar? | predictive modeling | Theoretically there is no problem. I've seen tree models put in as predictors in logistic models. An NN as input into a GLM model makes sense. The ultimate decision should be made based on the predictability of the NN. You have to mind a few issues: model maintenance and deployment. The NN model would probably have more parameters than your vanilla GLM, so deployment might be more involved. You could leave it frozen and never update it, and let the GLM model use the NN score as long as it adds to the GLM prediction. Interpretability. There might be managers whose accountability is to review risk, and they may not be comfortable with an NN. In that case feeding the NN results into a GLM might make it more acceptable, much as credit scores are used in some risk models. |
_unix.104109 | I need to have a very fast disk for keeping cache. How can I do that in Linux? | How to create memory-based disk in linux? | linux;memory;disk | Thanks to @Mat:# mkdir -p /mnt/ram# mount -t ramfs -o size=20m ramfs /mnt/ram |
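One caveat worth adding to the accepted answer (general kernel behavior, not from the thread): ramfs ignores the size= option and can grow until RAM is exhausted, while tmpfs enforces the limit (and may swap). A bounded variant:

```sh
mkdir -p /mnt/ram
mount -t tmpfs -o size=20m tmpfs /mnt/ram
```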
_codereview.139141 | I want ls to print a message when run on an empty or full of dotfiles directory. Instead of:$ ls empty_dir/ dotfiles/$ I want:$ ls empty_dir/ dotfiles/empty_dir is emptydotfiles contains only hidden files$ If dir contains only dotfiles, and if the ls command is able to show them, the following should appear:$ ls dotfiles/dotfiles contains only hidden files$ ls dotfiles/ -A.hidden_dirI searched a solution for that behavior, but can't found anything, so end up to implement it myself, using zsh:# Wrapper around ls function, printing dedicated messages# if directory is empty/full of hidden files.# Parse parameters in order to call verbosels_onefile correctly.function verbosels() { # extract options and filepaths ls_options= ls_filepaths= for parameter in $@ do #echo PARAM: $parameter # it's an option if it's start with a dash if [[ $parameter =~ ^-.* ]]; then #echo OPTION ls_options=$ls_options $parameter else # it's a path: call ls on it #echo FILEPATH # get escaped version of given filename filepath=$(print -r -- ${(q)parameter}) #echo filepath: $filepath ls_filepaths=$ls_filepaths $filepath fi done # remove options surrounding spaces ls_options=$(echo $ls_options | tr -d [:space:]) isnotfirst= # set to true after the first iteration # call verbosels_onefile for each filepath for filepath in ${(z)ls_filepaths} do # print space only between ls calls if [ $isnotfirst ]; then echo # line jump else isnotfirst=1 fi #echo CMD: |verbosels_onefile $filepath $ls_options| verbosels_onefile $filepath $ls_options done}# Perform an ls call on only one directory/file, that must be in first parameter.# The other parameters remain untouched.# The ls call take into account any aliases on ls.# This function is called by the higher level verbosels function.function verbosels_onefile() { if [ $1 ]; then 1=$1 else 1=. fi if [ -d $1 ]; then # contains files (hidden included) if [ -n $(command ls -A $1) ]; then if [ -n $(command ls $1) ]; then ls $@ else # the directory is not empty, and contains only hidden files: # print message only if the ls command returns nothing # NOTE: run the ls command twice. Could be costly. if [ -n $(ls $@) ]; then ls $@ else echo $1 contains only hidden files 1>&2 fi fi else echo $1 is empty 1>&2 fi elif [ -e $1 ]; then ls $@ else echo $1 doesn't exists 1>&2 fi}Remarks:seems over-complicatednon-perfect handling of some ls options, notably -l in case of only-dotfiles directory (print total 0)if a directory contains only dot files, the ls command is runned twice. Could be costly if the directory contains a lot of dot files.The following is not valid, because ls formatting (colors, columns,) are not kept:ls_result=$(ls $@)if [ $ls_result ]; then echo $ls_resultelse echo $1 contains only hidden files 1>&2fiI'm using zsh, because it provides some easier treatment. A solution involving only generalist bash could be better because of the portability.I'm looking for any readability/efficiency improvement, eventually for modules/programs that already do the job. | `ls` indicates when directory is empty/full of dotfiles | bash;shell;wrapper;zsh | Portability issuesThe function collects the given options and when it finally calls ls,it puts the options at the end.Unfortunately this doesn't work on OSX,where the options must come before the filenames.Your remarksseems over-complicatedYes. 
Unfortunately, to get the behavior that you want, I don't think this can get much simpler.

non-perfect handling of some ls options, notably -l in case of only-dotfiles directory (print total 0)

Unfortunately, the only way to avoid that will only make the script even more complicated.

if a directory contains only dot files, the ls command is runned twice. Could be costly if the directory contains a lot of dot files.

Perhaps you forgot to count the 2 runs of command ls in the if-else chain. So in fact, for each file, the ls command is executed 4 times.

Filenames with spaces

This will not work when the filenames have spaces:

ls_filepaths=$ls_filepaths $filepath

You can make it work with filenames with spaces, and at the same time make it cleaner, by using arrays.

Remove options surrounding spaces

I'm not sure what's going on here:

# remove options surrounding spaces
ls_options=$(echo $ls_options | tr -d [:space:])

For example -l -a would become -l-a, which will not work.

Avoid negatives in variable names

It's generally not recommended to use negatives in variable names like isnotfirst, because it could lead to strange conditions like not isnotfirst, which is hard to read and confusing. I suggest renaming it to first and using ! $first in conditions for a negative meaning.

Terminology

# Wrapper around ls function, printing dedicated messages

ls is not a function, it's a command. For example, verbosels is a function.

Looping over $@

When looping over $@, you can omit the $@. So instead of this:

for parameter in $@; do

You can write simply:

for parameter; do

Pattern matching

Instead of matching by regular expressions like this:

if [[ $parameter =~ ^-.* ]]; then

It would be slightly simpler to use pattern matching like this:

if [[ $parameter == -* ]]; then

Setting a variable to empty

Instead of this:

ls_options=
ls_filepaths=

You can simplify as:

ls_options= ls_filepaths=

echo "" is the same as echo

You can replace echo "" with simply echo with no parameters. |
_unix.235869 | Consider the following union mount: mount -t overlay -o lowerdir=/.pre-foo/lower,upperdir=/.pre-foo/upper,workdir=/.pre-foo/work overlay /foo I would like to obfuscate that /.pre-foo to minimize the chance of some process modifying my underlying folders while the union is mounted. I could get that with the following recursive mount: mount -t overlay -o lowerdir=/foo/lower,upperdir=/foo/upper,workdir=/foo/work overlay /foo My question is: Is this safe? Is there any security and/or performance risk in mounting an overlay recursively? | OverlayFS: Is mounting /foo/lower:/foo/upper to /foo safe? | overlayfs;union mount | null |
_softwareengineering.264666 | Should there be a separate code coverage report for unit and integration tests, or one code coverage report for both? The thinking behind this is that code coverage allows us to make sure that our code has been covered by tests as far as possible (as much as a machine can know, anyway). Having a separate report makes it more convenient for us to know what has not been covered by unit tests, and what has not been covered by integration tests. But this way we cannot see the total coverage percentage. | Separate code coverage reports for unit and integration tests, or one report for both? | unit testing;code quality;integration tests;test coverage | Above all, you need to have and analyse combined (total) coverage. If you think of it, this is the most natural way to properly prioritize your risks and focus your test development effort. Combined coverage shows you what code is not covered by tests at all, i.e. is most risky and needs to be investigated first. Separate coverage reports won't help here, as these don't let you find out if the code is tested somewhere else or not tested at all. Separate coverage analysis can also be useful, but it would better be done after you're done with combined analysis, and would preferably also involve the results of analysing combined coverage. The purpose of separate coverage analysis differs from the combined one. Separate coverage analysis helps to improve the design of your test suite, as opposed to analysis of combined coverage, which is intended to decide on tests to be developed no matter what. Oh, this gap isn't covered just because we forgot to add that simple unit (integration) test into our unit (integration) suite; let's add it - separate coverage and analysis is most useful here, as the combined one could hide gaps that you would want to cover in a particular suite. From the above perspective, it is still desirable to also have the results of combined coverage analysis in order to analyse trickier cases. Think of it: with these results, your test development decisions could be more efficient due to having information about partner test suites. There's a gap here, but developing a unit (integration) test to cover it would be really cumbersome; what are our options? Let's check combined coverage... oh, it's already covered elsewhere; that is, covering it in our suite isn't critically important. |
_unix.234734 | In Ubuntu I used

sudo update-alternatives --config x-www-browser

to set the default internet browser manually. In Manjaro I get:

sudo: update-alternatives: command not found

I have set Firefox as the default in its settings and want it to stay so. After installing Chromium, the default browser is now Chromium, although I reconfirmed Firefox as the default, and in Chromium's settings it says: "Chromium cannot determine or set the default browser." How can I make Firefox the default browser? | Set the default browser, system-wide, on Manjaro | browser;manjaro | null
_unix.66138 | I have a script (let's name it parent.sh) that calls some other scripts based on input parameters. Finally, it calls the script child.sh. child.sh requests the user's input in case it finds that some files already exist:

Would you like to replace the configuration file with a new one? (Yes/No/Abort):

Now, what I want to do is to simulate the keystroke of Y/y inside the parent.sh script in order to always overwrite the files. I cannot use expect. How can I do that? | `expect`-like behaviour in bash script | bash | null
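Since the entry has no accepted answer, here is one hedged approach: if child.sh reads its confirmation from standard input (an assumption; scripts that read from /dev/tty will ignore this), the parent can simply feed the answer on stdin. A Python sketch of the same idea:

import subprocess

# Feed a "Yes" answer to child.sh on stdin. This works only if the
# child reads its confirmation from standard input (an assumption).
result = subprocess.run(
    ["./child.sh"],
    input="Yes\n",
    text=True,
    capture_output=True,
)
print(result.stdout)

The shell equivalents would be echo Yes | ./child.sh, or yes Yes | ./child.sh to answer every prompt.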
_unix.245471 | I started the imaging of an AF/512e HDD by first running a following command: ddrescue -n /dev/sdb2 drive_c.img mapfile.logUpon its completion I made a backup of mapfile.log and decided to run the splitting phase with direct disk access using the drive's physical sector size of 4K: ddrescue -d -b4096 -r3 /dev/sdb2 drive_c.img mapfile.logHad I chosen a 512 bytes sector-size would I have scraped more from the bad sectors?As I write this, the splitting stage has finished and the bad sectors are being retried for the second time. Naturally, almost all bad blocks in the mapfile are of n4K size. Will I be able to scrape more off of them if I run the same command but with a 512 b sector?Thoughts and ConfusionFirst of all, I am not even sure if the use direct disk access was appropriate.The info file for ddrescue calls for direct disk access switch whenthe positions and sizes in the log file are ALWAYS multiples of the sector sizewhich would mean that thekernel is caching the disc accesses and grouping them. So if my kernel had been grouping the requests, the smallest block in the mapfile should have been 8K or 16K. In my case, however, the mapfile contained plenty of 512 bytes blocks both unreadable and rescued after the first run had completed.During the second run the majority of the 512 b blocks were merged into 4K blocks. For example, a 512 b bad sector which was adjacent to the non-split block before the splitting phase got merged together with an adjacent bad sector. This seems fine to me. Probably, at the trimming phase a head on the hard drive wasn't able to read a 4K sector so it returned a 512 b bad sector to ddrescue. The trimming ended right there, and the block following the 512 b sector was marked as a non-split.What doesn't seem normal is having a 512 b bad sector like in this screenshot:How come a head is able to read a 4K sector but declare only a 1/8 of it unreadable? I was under impression that a physical sector is read atomically by a head? So if a part of it is bad, the whole sector is bad.This obviously raises a question -- is it possible to get data from a 4K partially bad sector by running ddrescue with or without direct access but with a 512 b sector size? Obviously something doesn't add up.BTW this is my first posted question so please excuse me if the format is not consistent with the forum or the question is too loaded. But that aside I would be grateful to get an input on any of the topics relevant to the main question i.e. Advanced Format, direct disk access, kernel caching etc. as everything I find is either too far from the case in point or clearly assumes expertise from the reader.Cheers! | Which sector size shall I choose to run ddrescue with direct access on an Advanced Format drive? | hard disk;data recovery;ddrescue | null |
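The question went unanswered; independent of the sector-size choice, it can help to quantify what is left in the mapfile. A small Python sketch (my own, not from the post) that tallies bad areas using the status letters from the ddrescue manual ('?' non-tried, '*' non-trimmed, '/' non-scraped, '-' bad sector, '+' finished):

def read_mapfile(path):
    # Parse a GNU ddrescue mapfile: after an initial current-position
    # line, each data line is "pos size status" with hex positions.
    blocks = []
    with open(path) as f:
        lines = [l.strip() for l in f
                 if l.strip() and not l.startswith("#")]
    for line in lines[1:]:          # skip the current-position line
        pos, size, status = line.split()[:3]
        blocks.append((int(pos, 0), int(size, 0), status))
    return blocks

blocks = read_mapfile("mapfile.log")
bad = [(p, s) for p, s, st in blocks if st == "-"]
small = [b for b in bad if b[1] < 4096]
print(f"{len(bad)} bad areas, {len(small)} smaller than one 4K sector")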
_unix.217195 | Every time I deploy a VPS server, I install a VNC server. But every time I have to do this:

yum groupinstall Desktop
yum install tigervnc-server
yum install vnc
yum install firefox
(etc.)

Can I write an automated .sh script/file (or something else) that I could run on every server to install the VNC server automatically? If so, how? | Writing an install script for CentOS | linux;shell;centos | null
_webmaster.85826 | I am on a shared host where I have basic CLI access. Inside public_html/ I host several addon domains, each in its own dir. Is it possible to move my main domain's document root inside another folder, to achieve the following structure?

public_html/
--maindomain_root/
--othersite1_root/
--othersite2_root/

instead of:

public_html/
--maindomain_file1
--maindomain_file2 (..etc)
--othersite1_root/
--othersite2_root/ | How to change document root of main domain in cPanel shared host? | apache;server;cpanel;shared hosting | Sorry, my question was actually wrong, because I had the misconception that all document roots must live inside public_html/. I just moved them outside, and now public_html/ is just the doc root for my main domain. So I avoided having scattered files, and every site is in its own dir.
_codereview.119979 | This program makes a call to an API (http://api.football-data.org/) and obtains data for fixtures of Chelsea FC for the next 100 days in JSON format. The JSON is parsed into a Java object and then displays match details in the console. I am looking for any possible improvements I could make to this program. Also, if there is a better way to parse JSON, please do mention it. import java.io.BufferedReader;import java.io.IOException;import java.io.InputStreamReader;import java.net.HttpURLConnection;import java.net.MalformedURLException;import java.net.URL;import java.util.ArrayList;import com.chelsea.fixtures.FixturesJsonParser;import com.chelsea.fixtures.FixturesJsonParser.FixtureDetails;import com.chelsea.fixtures.FixturesJsonParser.MatchResult;import com.google.gson.Gson;public class CfcFixture{ private static String getJson(String link){ HttpURLConnection conn = null; try { URL url = new URL(link); conn = (HttpURLConnection) url.openConnection(); conn.setRequestMethod(GET); conn.connect(); int status = conn.getResponseCode(); switch(status){ case 200: case 201: BufferedReader br = new BufferedReader(new InputStreamReader(conn.getInputStream())); StringBuilder sb = new StringBuilder(); String line; while((line = br.readLine())!=null){ sb.append(line + \n); } br.close(); return sb.toString(); } } catch(MalformedURLException e){ e.printStackTrace(); }catch (IOException e) { e.printStackTrace(); } finally{ if(conn != null){ conn.disconnect(); } } return null; } public static void main(String[] args){ // from api doc, CFC teamid is 61, // timeFrame is next 100 days represented by n100 String link = http://api.football-data.org/v1/teams/61/fixtures/?timeFrame=n100; String jsonFixturesData = getJson(link); Gson gson = new Gson(); FixturesJsonParser fixturesJP = gson.fromJson(jsonFixturesData, FixturesJsonParser.class); System.out.println(Displaying Chelsea Fc Fixtures From +fixturesJP.getTimeFrameStart() + to + fixturesJP.getTimeFrameEnd()); System.out.println(Total Number Matches: +fixturesJP.getNumOfGamesInTimeFrame() +\n); for(int i=0; i<fixturesJP.getNumOfGamesInTimeFrame(); i++){ System.out.println(Match +(i+1) + Details); System.out.println(fixturesJP.getFixtureDetails(i).getHomeTeamName()+ VS. 
+fixturesJP.getFixtureDetails(i).getAwayTeamName()); System.out.println(Match Time: +fixturesJP.getFixtureDetails(i).getMatchDate()); System.out.println(\n); } // To get goals by home team for particular Match i.e match result //System.out.println(fixturesJP.getFixtureDetails(1).getMatchResult().getHomeTeamGoals()); }}class FixturesJsonParser{ private String timeFrameStart; private String timeFrameEnd; private int count; private ArrayList<FixtureDetails> fixtures; protected String getTimeFrameStart(){ return timeFrameStart; } protected String getTimeFrameEnd(){ return timeFrameEnd; } protected int getNumOfGamesInTimeFrame(){ return count; } protected FixtureDetails getFixtureDetails(int matchNum){ return fixtures.get(matchNum); } class FixtureDetails{ private String date; private String homeTeamName; private String awayTeamName; private MatchResult result; protected String getMatchDate(){ return date; } protected String getHomeTeamName(){ return homeTeamName; } protected String getAwayTeamName(){ return awayTeamName; } protected MatchResult getMatchResult(){ return new MatchResult(); } } class MatchResult { private String goalsHomeTeam; private String goalsAwayTeam; protected String getHomeTeamGoals(){ return goalsHomeTeam; } }} The JSON obtained from API is formatted as below:HTTP/1.1 200 OKContent-Type application/json;charset=UTF-8X-Response-Control: minified...{ timeFrameStart: 2015-10-30, timeFrameEnd: 2015-11-12, count: 2, fixtures: [ { id: 149348, soccerseasonId: 405, date: 2015-11-03T19:45:00Z, matchday: 4, homeTeamName: Manchester United FC, homeTeamId: 66, awayTeamName: CSKA Moscow, awayTeamId: 751, result: { goalsHomeTeam: null, goalsAwayTeam: null } }, { id: 146976, soccerseasonId: 398, date: 2015-11-07T15:00:00Z, matchday: 12, homeTeamName: Manchester United FC, homeTeamId: 66, awayTeamName: West Bromwich Albion FC, awayTeamId: 74, result: { goalsHomeTeam: null, goalsAwayTeam: null } } ]} | Parse JSON data of upcoming fixtures of Chelsea FC | java;beginner;json;api | null |
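For comparison with the Java version being reviewed, the same fetch-and-print fits in a few lines of Python (same URL as the question; note the v1 endpoint may require an API token today, so treat this as a sketch):

import json
import urllib.request

# Fetch and print Chelsea's upcoming fixtures, mirroring the Java
# program above. The response layout (timeFrameStart, count,
# fixtures[...]) matches the sample JSON in the question.
URL = "http://api.football-data.org/v1/teams/61/fixtures/?timeFrame=n100"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

print(f"Fixtures from {data['timeFrameStart']} to {data['timeFrameEnd']}")
print(f"Total matches: {data['count']}")
for i, match in enumerate(data["fixtures"], start=1):
    print(f"Match {i}: {match['homeTeamName']} vs. {match['awayTeamName']}"
          f" at {match['date']}")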
_unix.294905 | I have a custom command define that gets word definitions from a dictionary. I wanted to make an autocomplete script to complete the words I want to define, using the list of words found in /usr/share/dict/words. This is what I have so far, in the autocomplete script /etc/bash_completion.d/define:

_define(){
    dict='/usr/share/dict/words'
    cur=${COMP_WORDS[COMP_CWORD]}
    regex=^$cur*
    words=$(grep $regex $dict)
    if [[ $cur != -* ]]
    then
        COMPREPLY=( $( compgen -W "$(echo $words)" $cur) )
    else
        COMPREPLY=()
    fi
    return 0
}
complete -F _define define

When I hit [tab][tab] I sometimes get a list of words beginning with the typed word; other times it just deletes the last character. For example, when I do define wall [tab][tab] I get define wal, but if I do define wal [tab][tab] I get a list of words. Why is this happening? | Autocomplete deletes last character from words sometimes | bash;autocomplete | null
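The prefix lookup itself is easy to test outside bash; this Python sketch (illustrative only, it does not reproduce the readline quirk being asked about) does the same ^prefix match against the dictionary, using binary search instead of grep:

import bisect

def prefix_matches(words, prefix):
    # words must be sorted; binary search finds the contiguous run
    # of entries sharing the prefix, like grep "^prefix" on the dict.
    lo = bisect.bisect_left(words, prefix)
    hi = bisect.bisect_right(words, prefix + "\uffff")
    return words[lo:hi]

with open("/usr/share/dict/words") as f:
    words = sorted(line.strip() for line in f)

print(prefix_matches(words, "wall")[:10])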
_cs.4794 | An instance of the SUBSET SUM problem (given $y$ and $A = \{x_1,...,x_n\}$ is there a non-empty subset of $A$ whose sum is $y$) can be represented on a one-tape Turing Machine with a list of comma separated numbers in binary format.If $\Sigma = \{0,1,\#\}$ a reasonable format could be:$( 1 \; (0|1)^* \; \#)^* \#$Where the first required argument is the value $y$ and $\#\#$ encodes the end of the input. For example: 1 0 0 # 1 0 # 1 # #^^^^^^^^ ^^^^ ^ y x1 x2Instance: y=4, A={2,1}I would like to enumerate the SUBSET SUM instances.Question: What is the (best) time complexity that can be achieved by a Turing Machine $TM_{Enum}$ that on input $m$ (which can be represented on the tape with a string of size $\log m + 1$) - outputs the $m$-th SUBSET SUM instance in the above format?EDIT:Yuval's answer is fine, this is only a longer explanation.Without loss of generality we set that $y > 0$ and $0 < x_1 \leq x_2 \leq ... \leq x_n$, $n \geq 0$And we can represent an instance of subset sum using this encoding:$y \# x_1\# d_2\# ...\# d_{n} \#\#$ where $d_i \geq 1, x_i = x_{i-1} + d_i - 1 \; , i \geq 2$Using a binary representation for $y,x_1, d_2, d_3, ...$ we have the following representation:$1 \; ((0|1)^* \# 1)^* \; \#\#$Equivalent to $1 \; (0|1|\#1)^* \; \#\#$. There is always a leading 1 and a trailing ## so we can consider only the $(0|1|\#1)^*$ part.So the decoder TM on input $m$ in binary format should:output the leading 1convert $m$ to base 3 mapping digit 2 to $\#1$when outputing the i-th intermediate $\#$ calculate $x_i = d_i + x_{i-1}-1$output the trailing $\#\#$No duplicate instances are generated. | Time complexity of an enumeration of SUBSET SUM instances | algorithms;formal languages;turing machines;enumeration | SUBSET-SUM instances can be encoded in base 3. We have codes for $0,1,\#$. Some codings are invalid, but in that case we can just immediately output $\#\#$ (or $\#$, if we have just written $\#$). Every SUBSET-SUM problem has infinitely many encodings, I hope that's not a problem.If the input has length $\ell$, then (assuming the tape alphabet has at least 4 symbols) we can do the conversion in time $O(\ell^2)$. I don't know whether this is the best time complexity achievable.Edit: Here is a better encoding. We still have only three input codes, $0,1,\#$. The output string always starts with $1$ and ends with $\#\#$. Further, $\#$ is output as $\#1$. Now each output string is generated once, though several output strings could correspond to the same instance.As an example, your instance is encoded by 00#0#. |
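The decoder described in the question's edit is short enough to sketch directly; here it is in Python (instance_from_index is a hypothetical name). On a Turing machine the dominant cost is the base conversion from the binary input, in line with the bounds discussed in the answer:

def instance_from_index(m):
    # Decode the m-th instance per the question's edit: write m in
    # base 3, map digits 0,1,2 to the codes "0","1","#1", then wrap
    # with the leading 1 and trailing ## of   1 (0|1|#1)* ##
    codes = ("0", "1", "#1")
    body = ""
    while m > 0:
        m, d = divmod(m, 3)
        body = codes[d] + body
    return "1" + body + "##"

for m in range(6):
    print(m, "->", instance_from_index(m))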
_softwareengineering.250259 | I want to release all my future FOSS projects under a public domain license such as CC0. This is in order to avoid the requirement for attribution present in most FOSS licenses, mainly when the code is compiled into a binary, but also when somebody decides to copy a small portion of my code into their own open source project.So suppose somebody copies a method from my code into their own, which they are planning to release under an MIT license. In order to release the code under MIT, they must claim copyright of the code; but can they claim copyright of the whole if a portion of the code was not their creation? Is that what it means to claim copyright of a derived work?It may seem strange that somebody could take a piece of public domain code, modify it, and apply a new license to the modified version that imposes further restrictions on its usage, such as the attribution requirement of MIT. But since their additional restrictions would apply only to the modified version and not the original version, this seems legitimate.My main concern is maximizing the ways in which people can utilize my code, without worrying about licensing requirements, and also to avoid a viral effect where any derivative works have to be themselves licensed under CC0. | Does CC0 allow sublicensing of derived works? | licensing;open source;copyright;creative commons | #include <ianal.h>Document style licenses are often a poor fit for source code. Of these, the ones that are a gift (and that is a word with legal implications) are the most problematic.Public domain, as seen in the US, is a gift. In particular, a gift may be revoked for any reason. This makes something that is in the public domain possibly treacherous for open source - when the gift is revoked, it suddenly has back its full copyright protections.There are also countries that don't recognize public domain at all. According to a creative commons survey countries such as Belgium, Denmark, France, Germany (and the list goes on and on) don't recognize public domain at all, or only recognize it after the copyright has expired. Of the countries that do recognize public domain, many allow the revocation of it if the copyright hasn't expired.There are anecdotes (I can't find it at the moment) of an open source project that was completely released under public domain that wasn't useable at all in those countries - and so for people there to use and modify the source, they had to get a commercial license (which cost $$$).So, on to CC0. The key to CC0 is that it has some wording in there to make it a license rather than a gift.To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer's Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the Waiver).Note the 'permanently' and 'irrevocably' words being used. CC0 cannot be revoked. 
It also goes on to state that the heirs cannot revoke it either (an issue with public domain).

Background reading: http://www.rosenlaw.com/lj16.htm

As to the core question of whether CC0 can be sublicensed: GNU.org lists licenses that are compatible with the GPL on Various Licenses and Comments about Them, and CC0 is listed there as a compatible license. They also list public domain as being compatible, though note the issues that I raised in the first part... I'm sure that GNU has good lawyers who will fight if something is revoked or there are issues in those countries (on the other hand, I don't, and would be very wary of accepting public domain donations to an open source project).

All that said, CC0 still strikes me as awkward, in that the license effectively differs from country to country, with different countries recognizing different parts and some parts being contrary to local law (I know that Creative Commons has done a great job to the best of their ability in crafting this). There are places where parts of the permissive license for CC0 would have difficulty (France's right to withdraw or reconsider; I am still not a lawyer, much less one versed in French law, writing in English).

The thing is, programmers who are going to care at all about the license already know how to deal with MIT, or BSD, or the GPL. Other licenses require some legal reading for the project as to whether you can include them or not. I'd also point to http://choosealicense.com/licenses/ - scrolling down to the bottom for the public domain ones, note that the Unlicense allows for sublicensing while CC0 does not.

So suppose somebody copies a method from my code into their own, which they are planning to release under an MIT license. In order to release the code under MIT, they must claim copyright of the code; but can they claim copyright of the whole if a portion of the code was not their creation? Is that what it means to claim copyright of a derived work?

Give http://www.rosenlaw.com/lj19.htm a read for background (note that this reading of derivative works is in conflict with the GPL's understanding - but since we're not talking about the GPL, the reading of it is much more common sense). There are several ways to address this. You could put the function in another library and point out that that library is a derivative work, licensed under whatever terms it was obtained (which is compatible with the rest of the project). The copyright of the rest of the project is not a derivative work (GPL questions of static and dynamic linking aside - they try to make the derivative work extend to as many things as possible).

For example, if code snippet foo was from a BSD-licensed work, you could stick foo in its own library. That library would be a derivative work of the BSD-licensed product and licensed under the BSD license. Then the rest of your code would be MIT-licensed (for example) and not a derivative work. And since the BSD and MIT licenses are compatible, that's the end of the story. I just used BSD as an example for your license, but really it could be anything, as long as it is compatible with the license you're using. And while CC0 is compatible with everything, there are other, better understood licenses for software that are similarly nearly universally compatible; the BSD 2-clause, MIT, or Apache 2.0 licenses are ones people understand. To get the widest use of your code, you should use a license that people know and understand. If you choose something that is less well understood, programmers may be less likely to use it because of the possible legal implications that they'd have to ask a lawyer about.
If you have to check if CC0 is sub licensable or not, you don't use it. If you see a BSD license, you know what you can do with it.You could always go with WTFPL.I am still not a lawyer. |
_unix.250827 | In my image folder I want only images, so I find all intruding video files using this command (found somewhere on Stack Exchange):

find folder/imageonly -type f -exec file -N -i -- {} + |sed -n 's!: video/[^:]*$!!p'

But what if I want a reverse search? For example, I want to find all non-video files in a video folder. P.S. I don't use extensions, so the file -N -i flags must be used. | Find file type reverse | find | Add an address range to your sed:

find folder/imageonly -type f -exec file -N -i -- {} + | sed -n '/: video\/[^:]*$/!s!: [[:alnum:]]*/[^:]*$!!p'

/: video\/[^:]*$/! tells it to run the commands on every line that doesn't match the pattern, which you used for matching videos.
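The same non-video filter, written out in Python for readability (my own sketch; it shells out to file(1) once per path, so it is slower than the batched -exec ... + in the accepted answer):

import subprocess
import sys
from pathlib import Path

# List every file under a directory whose MIME type is NOT video/*,
# using file(1) just like the question does (so no reliance on
# file name extensions). Directory is taken from the command line.
root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
for path in root.rglob("*"):
    if not path.is_file():
        continue
    mime = subprocess.run(
        ["file", "-b", "--mime-type", str(path)],
        capture_output=True, text=True,
    ).stdout.strip()
    if not mime.startswith("video/"):
        print(f"{path}: {mime}")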
_cs.1393 | I am given an exercise unfortunately I didn't succeed by myself.There is a set of rectangles $R_{1}..R_{n}$ and a rectangle $R_{0}$. Using plane sweeping algorithm determine if $R_{0}$ is completely covered by the set of $R_{1}..R_{n}$.For more details about the principle of sweep line algorithms see here.Let's start from the beginning. Initially we know sweep line algorithm as the algorithm for finding line segment intersectionswhich requires two data structures:a set $Q$ of event points (it stores endpoints of segments and intersections points)a status $T$ (dynamic structure for the set of segments the sweep line intersecting)The General Idea: assume that sweep line $l$ is a vertical line that starts approaching the set of rectangles from the left. Sort all $x$ coordinates of rectangles and store them in $Q$ in increasing order - should take $O(n\log n)$. Start from the first event point, for every point determine the set of rectangles that intersect at given $x$ coordinate, identify continuous segments of intersection rectangles and check if they cover $R_{0}$ completely at current $x$ coordinate. With $T$ as a binary tree it's gonna take $O(\log n)$. If any part of $R_{0}$ remains uncovered that $R_{0}$ is not completely covered.Details: The idea of segment intersection algorithm was that only adjacent segments intersect. Based on this fact we built status $T$ and maintained it throughout the algorithm. I tried to find a similar idea in this case and so far with no success, the only thing I can say is two rectangles intersect if their corresponding $x$ and $y$ coordinates overlap. The problem is how to build and maintain $T$, and what the complexity of building and maintain $T$ is. I assume that R trees can be very useful in this case, but as I found it's very difficult to determine the minimum bounding rectangle using R trees. Do you have any idea about how to solve this problem, and particularly how to build $T$? | Rectangle Coverage by Sweep Line | algorithms;computational geometry | Let's start with $n$ axis-aligned rectangles, since there is a kind of easy direct argument. We'll sweep a vertical line. The events are the endpoints of horizontal edges of the rectangles. As we sweep we maintain a set of intervals on the sweep line that are uncovered by $R_i$, $i\ge 1$:Add the vertical interval covered by the rectangle $R_i$ to the sweep line when we first encounter $R_i$Remove the vertical interval covered by the rectangle $R_i$ from the sweep line when it moves past $R_i$It's easy to do this with a binary tree so that updates take $O(\log n)$ time. (The problem is, essentially, 1-dimensional. You figure out if the endpoints are in an uncovered interval and extend/merge appropriately when adding and lengthen them when removing.)Then you just check that, in the span of $R_0$, none of the uncovered intervals ever intersect the vertical span of $R_0$. The whole thing is $O(n\log n)$ time an $O(n)$ space.For the general case, the obvious trick is not quite so fast. Use the standard sweep line algorithm to compute the whole planar subdivision induced by the rectangles. Clearly some disk-like set $F'$ of the faces covers $R_0$. By itself, this doesn't tell us enough, since what we are interested in is whether any of these faces is inside $R_0$ and outside the other rectangles. To do this, we modify the construction a little bit, so that when we add an edge, we tag one side with the identity of the rectangle it's inside. 
This adds $O(1)$ overhead, so the construction is $O(n^2\log n)$ time; with no assumptions on the rectangles, the output can be $\Omega(n^2)$ in size, so we are using that much space in the worst case, so the time is, existentially optimal though not output sensitive.Finally, $R_0$ is covered so long as none of the faces in $F'$ have only edges not tagged as being in one of the $R_i$. The point is that if an edge of $f$ is in $R_i$, then the whole of $f$ is as well. Imagine sweeping a line over $f$ orthogonally along this edge: it can only leave $R_i$ either outside of $f$ or $f$ is bounded by more than one edge of $R_i$.So the conclusion is that the special case is $O(n\log n)$ and the general one is $O(n^2\log n)$ at least, but I suspect it can be improved. |
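As a companion to the answer, here is a deliberately naive coverage checker (my own sketch, not the sweep line itself) that is handy as a correctness oracle when testing a faster implementation; rectangles are axis-aligned (x1, y1, x2, y2) tuples:

def fully_covered(r0, rects):
    # Coordinate-compress on the corners of R0 and the covering
    # rectangles, then test every elementary cell inside R0 against
    # the list. Polynomial but slow: a correctness oracle for the
    # O(n log n) sweep, not a replacement for it.
    xs = sorted({r0[0], r0[2], *[v for r in rects for v in (r[0], r[2])]})
    ys = sorted({r0[1], r0[3], *[v for r in rects for v in (r[1], r[3])]})
    for x1, x2 in zip(xs, xs[1:]):
        if x2 <= r0[0] or x1 >= r0[2]:
            continue
        for y1, y2 in zip(ys, ys[1:]):
            if y2 <= r0[1] or y1 >= r0[3]:
                continue
            if not any(r[0] <= x1 and x2 <= r[2] and
                       r[1] <= y1 and y2 <= r[3] for r in rects):
                return False
    return True

r0 = (0, 0, 4, 4)
print(fully_covered(r0, [(0, 0, 2, 4), (2, 0, 4, 4)]))   # True
print(fully_covered(r0, [(0, 0, 2, 4), (2, 1, 4, 4)]))   # False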
_unix.377913 | I use the following awk in order to remove duplicate lines from the /etc/fstab file on Linux. The problem is that it also removes the lines that start with #. How can I change the awk syntax in order to ignore lines starting with # in the file?

awk '!a[$0]++' /etc/fstab > /etc/fstab.new
cp /etc/fstab.new /etc/fstab | awk + remove duplicate lines but ignore lines that begin with # | awk;fstab | Tell AWK to accept lines starting with # as well as non-duplicate lines:

awk '/^#/ || !a[$0]++' /etc/fstab > /etc/fstab.new

If you want to avoid doing this if there are no duplicate lines (per your comments), you can use something like

if awk '!/^#/ && a[$0]++ { dup = 1 }; END { exit !dup }' /etc/fstab; then
    awk '/^#/ || !a[$0]++' /etc/fstab > /etc/fstab.new
    cp /etc/fstab.new /etc/fstab
fi

but that ends up doing the work twice, effectively.
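The same "keep comments, drop duplicates" rule in Python, for readers who prefer it (write to a copy first and run with suitable privileges, as with the awk version):

seen = set()
with open("/etc/fstab") as src, open("/etc/fstab.new", "w") as dst:
    for line in src:
        # Keep every comment line; drop repeats of everything else,
        # the same rule as  awk '/^#/ || !a[$0]++'
        if line.startswith("#") or line not in seen:
            dst.write(line)
            seen.add(line)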
_webapps.69125 | The shortcut to bring up the search interface in Trello is /, but a Trello user is reporting that the / key on his Swedish keyboard does not bring up search.I've seen this issue with other web apps and non-English-US keyboards as well. Is there a workaround for users with foreign language keyboards to be able to use these kinds of keyboard shortcuts? | Are there workarounds for when browser shortcut keys break for users with foreign language keyboards? | trello;keyboard shortcuts;localization | null |
_codereview.142470 | I'm parsing an XML file and searching for a child value. As I'm familiar with this XML file's structure, I know that the final value is a child of a child, and so I wrote the following code section:

/* Parse XML file and find device friendlyName */
pxUpnp = ezxml_parse_file(XML_FILE_PATH);
pxUpnpChild = ezxml_child(pxUpnp, "device");

/* Looking for friendlyName sub-child */
pxUpnpSubChild = ezxml_child(pxUpnpChild, "friendlyName");

This code works, but I wonder if there is a better or more elegant method to parse a 'child of a child' value using the ezxml library? Thank you all in advance! | Parse child value from XML using ezxml | c;xml;linux | null
_unix.363983 | I currently have chroot users whose home directories contain both an 'upload' directory and a 'download' directory. Originally the permissions on the upload directory were

chown user:sftpadmin upload
chmod 370 upload

and the permissions on the download directory were

chown user:sftpadmin download
chmod 570 download

The purpose of the sftpadmin group is that service accounts which are members of this group can place/retrieve files for the user in the respective directories. Now we have a request to allow the users the ability to delete files in the download directory after they are finished with them. However, the only option I can come up with to accomplish this is setting the permissions on the download dir to

chmod 770 download

However, this would grant the chroot'ed users the ability to write any file to this directory, which I would like to avoid. Is there any combination of permissions I can set that would allow them to read, download, and delete the files in the directory, without allowing them to write files to the download directory? It would look something like:

- Allow the user to remove (delete) a file.
- Will not allow the user to change the file.
- Will not allow the user to add a file to the directory. | Granting user ability to delete a file without giving them write permissions to the directory | files;permissions | Well, it depends. It's not possible with standard POSIX permissions, as deleting a file needs the same permission as adding one: write permission on the containing directory. If, however, your file system supports NFSv4 access control lists (e.g. ZFS), it is possible, as there exist the distinct control entries write-data (-> create files) and delete-child. You just have to set the allow delete-child entry on the directory for the particular user, but not the allow write-data entry (or instead: set deny write-data). See https://linux.die.net/man/5/nfs4_acl for a detailed description.
_codereview.92908 | I am looking for a way to merge two files with a list of distinct words, one word per line. I have to create a new txt file that would contain all the words of the first list and all the words from the second list. I don't have any other specifications. The order of words in the result doesn't matter.public class testMain { public static void main(String[] args) { File f1=new File(words.txt); File f2=new File(words1.txt); HashSet <String> hash1=new HashSet<String>(); HashSet <String> hash2=new HashSet<String>(); try{ Scanner s=new Scanner(f1); while(s.hasNextLine()){ hash1.add(s.nextLine()); } s=new Scanner(f2); while(s.hasNextLine()){ hash2.add(s.nextLine()); } } catch(FileNotFoundException e){} hash1.addAll(hash2); Object[]array =hash1.toArray(); File newFile=new File(mixOfLists.txt); try{ PrintWriter writer=new PrintWriter(newFile); for(int i=0; i<array.length; i++){ writer.println(array[i]); } writer.close(); } catch(FileNotFoundException e){ System.out.print(No Such File);} System.out.print(Done!); }} | The most efficient way to merge two lists in Java | java;file;hash table | Exception HandlingYour handling is not great.... this is a sign of poor forward planning: catch(FileNotFoundException e){}And this is a sign of something almost as bad: catch(FileNotFoundException e){ System.out.print(No Such File);} System.out.print(Done!); }The first time I read that, I got confused and thought the Done! println was part of the exception handling. You need to work on the indentation. Also, just printing No such file is not a very helpful exception handling.Style1-liner blocks are seemingly convenient but in the long term can have negative impacts on maintainability. You have a lot of them, and they make reading your code hard.Your code is also suffocating due to lack of whitespace. You need to put spaces around operators to help the code to breath.... yeah, that sounds alarmist, but it really helps. File newFile=new File(mixOfLists.txt); try{ PrintWriter writer=new PrintWriter(newFile); for(int i=0; i<array.length; i++){ writer.println(array[i]); }should be: File newFile = new File(mixOfLists.txt); try{ PrintWriter writer = new PrintWriter(newFile); for(int i = 0; i < array.length; i++) { writer.println(array[i]); } ....ResourcesYou should use try-with-resources for your IO sources and sinks. As things stand at the moment, you don't close the readers properly.AlgorithmYou're reading both files in to their own sets, and then merging the sets, and then outputting the result.A better solution would be to use the boolean return value from the add(...) method to determine whether the word has been seen before... consider: while(s.hasNextLine()){ String line = s.nextLine(); if(hash1.add(line)) { writer.println(line); } }The above code can be used for both the input files, and only writes out the word if the word has not been seen before.This way you have only one set, and you do the merge at the same time as the reading.Also, you should be using Java 8 streams..... hmmm... that would be nice. |
_unix.181887 | I am trying to install mail/pine-pgp-filters on my FreeBSD box, but I am running into a problem. I first tried to install it without having GPG installed, and it listed security/gpg1 as a dependency. I wanted gpg2 (security/gpg), and so I built and installed that. I then attempted to re-install pine-pgp-filters, but it still prompted me to install gpg1.I have confirmed that it is compatible with gpg2, and this segment of the Makefile should take care of which version to use: # We want to be version-agnostic here, but also record the right dependency# if the user installs the package and already has one or the other installed..if exists(${LOCALBASE}/bin/gpg2)BUILD_DEPENDS= gpg2:${PORTSDIR}/security/gnupgRUN_DEPENDS+= gpg2:${PORTSDIR}/security/gnupg.elseBUILD_DEPENDS= gpg:${PORTSDIR}/security/gnupg1RUN_DEPENDS+= gpg:${PORTSDIR}/security/gnupg1.endifSo, my question is: how do you make a port re-consider its dependencies? And if that isn't my problem, then what is?I am happy with solutions using ports directly, portmaster, pkg, whatever. | How to make a port recalculate dependencies | freebsd;configure;bsd ports | null |
_unix.302017 | I have a program which uses these tables, and I want to add some additional functionality to its logging without modifying the program.groups------id bigint not nullname character varying(100) not nullusers-----id bigint not nullname character varying(100) not nullusers_groups------------group_id bigint not nulluser_id bigint not nullI want to write into syslog6 a user123 added to group456 or user123 removed from group456 message every time a user added or removed from a group. My first idea was using PostgreSQL triggers. CREATE OR REPLACE FUNCTION process_ext_audit() RETURNS trigger AS $ext_audit$BEGIN IF (TG_OP = 'DELETE') THEN SELECT name into uname FROM users WHERE id = OLD.user_id; SELECT name into gname FROM groups WHERE id = OLD.group_id; -- write into local6: uname removed from gname ELSIF (TG_OP = 'INSERT') THEN SELECT name into uname FROM users WHERE id = NEW.user_id; SELECT name into gname FROM groups WHERE id = NEW.group_id; -- write into local6: uname added to gname END IF; RETURN NULL;END;$ext_audit$ LANGUAGE plpgsql;CREATE TRIGGER ext_auditAFTER INSERT OR DELETE ON users_groups FOR EACH ROW EXECUTE PROCEDURE process_ext_audit();Is my approach good? If yes, how can I write into syslog from this function?I use postgresql 9.2 with CentOS 7 which uses rsyslog. | How to log PostgreSQL table data changes into syslog? | centos;syslog;rsyslog;postgresql;sql | null |
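The question has no accepted answer; one possible route (an assumption on my part, not from the post) is a trigger in PL/Python rather than plpgsql, since Python's standard syslog module can write to the local6 facility directly. A sketch, requiring the plpythonu extension available in PostgreSQL 9.2 and appropriate privileges; the question's CREATE TRIGGER statement can point at this function unchanged:

-- Hypothetical sketch, not from the original post: log membership
-- changes to syslog facility local6 via PL/Python.
CREATE OR REPLACE FUNCTION process_ext_audit() RETURNS trigger AS $$
import syslog
syslog.openlog("pg_audit", 0, syslog.LOG_LOCAL6)
row = TD["new"] if TD["event"] == "INSERT" else TD["old"]
uname = plpy.execute("SELECT name FROM users WHERE id = %d" % row["user_id"])[0]["name"]
gname = plpy.execute("SELECT name FROM groups WHERE id = %d" % row["group_id"])[0]["name"]
verb = "added to" if TD["event"] == "INSERT" else "removed from"
syslog.syslog("%s %s %s" % (uname, verb, gname))
$$ LANGUAGE plpythonu;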
_unix.98538 | Can somebody help with adding a repo for SuSE Linux Enterprise Edition 10 SP3? First, I can't find any official repos. I found a few repos for openSUSE and a few others, but every time I try them I get this error (using YaST2):

Unable to create installation source 'http://download.opensuse.org/repositories/openSUSE%3a/Tools/SLE_10/'. Unknown source type for http://download.opensuse.org/repositories/openSUSE%3a/Tools/SLE_10/

I also get the same with other repos... | SLES 10 - repositories? | repository;sles | null
_unix.283438 | I recently installed ntp on Debian, and when I issue the command ntpdate ip_of_domain_controller it states: adjust time server. But the time is 3 hours earlier. Other equipment, like HP switches, is getting the normal time. Is this a bug on the Unix side? | ntpdate on debian | debian;ntpd | null
_scicomp.21911 | I'm coding the simplex method and observing that it easily falls into cycling,even if Bland's rule is used.It seems to me I have found the reason and I would like to check my understanding is correct.It seems the problem is in tiny detail of the choice of the pivot row.We choose the pivot row r by the codition B_r / A_{rc} is minimal over r among all NONnegative values B_r / A_{rc} It seems to me the correct way is to change from NONnegative to strictly positive.Is it correct ? At least it helps in my example and seems Okay from theory point of view (well, I am not sure I completely understand the theory).(In textbooks like Hamdy A. Taha and all internet pages they say about NONnegativity )Here is example of cycling with Bland's rule:simplexMatrix =1.0e+04 *-1.0999 -0.9000 -0.7000 0 0 0 0 0 -0.9000 0.0001 0.0001 0.0001 0.0001 0 0 0 0 0.0001 0.0010 0.0008 0.0006 0 0.0001 0 0 0 0.0008 0.0001 0 0 0 0 0.0001 0 0 0.0001 0 0.0001 0 0 0 0 0.0001 0 0.0001 0 0 0.0001 0 0 0 0 0.0001 0.0000simplexMatrix =1.0e+03 * 0 -0.2008 -0.4006 0 1.0999 0 0 0 -0.2008 0 0.0002 0.0004 0.0010 -0.0001 0 0 0 0.00020.0010 0.0008 0.0006 0 0.0001 0 0 0 0.0008 0 -0.0008 -0.0006 0 -0.0001 0.0010 0 0 0 0 0.0010 0 0 0 0 0.0010 0 0.0006 0 0 0.0010 0 0 0 0 0.0010 0.0004simplexMatrix =1.0e+03 * 0 0 -0.2500 0 1.1250 -0.2510 0 0 -0.2008 0 0 0.0003 0.0010 -0.0001 0.0002 0 0 0.00020.0010 0 0 0 0 0.0010 0 0 0.0008 0 0.0010 0.0007 0 0.0001 -0.0013 0 0 0 0 0 -0.0007 0 -0.0001 0.0013 0.0010 0 0.0006 0 0 0.0010 0 0 0 0 0.0010 0.0004simplexMatrix =1.0e+03 * 0 0.3333 0 0 1.1667 -0.6677 0 0 -0.2008 0 -0.0003 0 0.0010 -0.0002 0.0007 0 0 0.00020.0010 0 0 0 0 0.0010 0 0 0.0008 0 0.0013 0.0010 0 0.0002 -0.0017 0 0 0 0 0.0010 0 0 0 0 0.0010 0 0.0006 0 -0.0013 0 0 -0.0002 0.0017 0 0.0010 0.0004simplexMatrix =1.0e+03 * 0 -0.2008 -0.4006 0 1.0999 0 0 0 -0.2008 0 0.0002 0.0004 0.0010 -0.0001 0 0 0 0.00020.0010 0.0008 0.0006 0 0.0001 0 0 0 0.0008 0 -0.0008 -0.0006 0 -0.0001 0.0010 0 0 0 0 0.0010 0 0 0 0 0.0010 0 0.0006 0 0 0.0010 0 0 0 0 0.0010 0.0004Here is the same without Bland's rule simplexMatrix =1.0e+04 *-1.0999 -0.9000 -0.7000 0 0 0 0 0 -0.9000 0.0001 0.0001 0.0001 0.0001 0 0 0 0 0.0001 0.0010 0.0008 0.0006 0 0.0001 0 0 0 0.0008 0.0001 0 0 0 0 0.0001 0 0 0.0001 0 0.0001 0 0 0 0 0.0001 0 0.0001 0 0 0.0001 0 0 0 0 0.0001 0.0000simplexMatrix =1.0e+03 * 0 -0.2008 -0.4006 0 1.0999 0 0 0 -0.2008 0 0.0002 0.0004 0.0010 -0.0001 0 0 0 0.00020.0010 0.0008 0.0006 0 0.0001 0 0 0 0.0008 0 -0.0008 -0.0006 0 -0.0001 0.0010 0 0 0 0 0.0010 0 0 0 0 0.0010 0 0.0006 0 0 0.0010 0 0 0 0 0.0010 0.0004simplexMatrix =1.0e+03 * 0 0.3333 0 0 1.1667 -0.6677 0 0 -0.2008 0 -0.0003 0 0.0010 -0.0002 0.0007 0 0 0.00020.0010 0 0 0 0 0.0010 0 0 0.0008 0 0.0013 0.0010 0 0.0002 -0.0017 0 0 0 0 0.0010 0 0 0 0 0.0010 0 0.0006 0 -0.0013 0 0 -0.0002 0.0017 0 0.0010 0.0004simplexMatrix =1.0e+03 * 0 -0.2008 -0.4006 0 1.0999 0 0 0 -0.2008 0 0.0002 0.0004 0.0010 -0.0001 0 0 0 0.00020.0010 0.0008 0.0006 0 0.0001 0 0 0 0.0008 0 -0.0008 -0.0006 0 -0.0001 0.0010 0 0 0 0 0.0010 0 0 0 0 0.0010 0 0.0006 0 0 0.0010 0 0 0 0 0.0010 0.0004 | Simplex method - cycling and condition >= or > in choice of pivot row | optimization;linear programming | null |
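For what it's worth, the usual statement of the minimum-ratio test does require a strictly positive pivot-column entry: the "nonnegative" in textbooks refers to the ratio B_r / A_{rc} (with B_r >= 0), not to the denominator, and rows with A_{rc} <= 0 are simply ineligible. A minimal Python sketch of that test, with a small tolerance that matters for floating-point tableaus like the ones shown:

def ratio_test(B, col):
    # Standard minimum-ratio test: only rows whose pivot-column
    # entry is strictly positive are eligible; a zero or negative
    # entry never bounds the entering variable. B[r] is the RHS of
    # row r, col[r] the pivot-column entry.
    best_row, best_ratio = None, None
    for r, (b, a) in enumerate(zip(B, col)):
        if a > 1e-12:                  # strictly positive, with tolerance
            ratio = b / a
            if best_ratio is None or ratio < best_ratio:
                best_row, best_ratio = r, ratio
    return best_row                    # None means the LP is unbounded

print(ratio_test([4.0, 2.0, 3.0], [2.0, 0.0, -1.0]))  # -> 0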
_webapps.105288 | I have 5000 connections and Facebook told me that I need to create a page. Well, some connections are about acquaintances rather than people that I really want to keep in my Facebook profile, like family and close friends. Now, the only plan I had was to create a personal personal email and have the connections migrated there and keep everything else equal... but I don't see a way to achieve that either. I don't want my close contacts to become fans and lose the personal profile perks.Is there a way I can keep a personal account and a page for everything else? | How to upgrade my personal profile to a public figure, yet keep all my personal connections personal? | facebook;facebook pages | null |
_codereview.78289 | I had a problem to define propTypes for my React class. I ran into solution that doesn't feel right:let React = require('react')let CallerCard = require('caller-card')class CallerDetailsPanel { render() { return ( <div className='caller-details-panel'> <CallerCard person={this.props.callerData.owner} /> <CallerCard person={this.props.callerData.user} /> </div> ) }}CallerDetailsPanel.prototype.propTypes = { callerData: React.PropTypes.object}module.exports = React.createClass(CallerDetailsPanel.prototype)Is this correct approach or how propTypes should be defined? If I try to define them inside class, I get a Parse error from 6to5 / esprima on console. | ES6 classes and ReactJS: implementing propTypes | react.js;ecmascript 6 | null |
_unix.369776 | I'm trying to do something like this:sudo su <<EOFselect x in a b c; do echo Selected $x; break; doneEOFHowever, it terminates without accepting input.It works if I do this:sudo su -c 'select x in a b c; do echo $x; break; done'But it's nicer writing longer scripts with heredoc (yes I know it's nicer still to put them in a file).I'm sure this is answered in various places, but I can't seem to hit the right bash/heredoc/tty/stdin search term combo.Is there any way to achieve this? | Run heredoc script via su attached to the current tty | tty;su;stdin;here document | The problem is that select is trying to read from stdin, which is redirected to the here-doc. Since there's no response to the prompt there, it gets an error.The solution is to redirect input back to the terminal within the here-doc.sudo -s <<'EOF'select x in a b c; do echo Selected $x breakdone </dev/ttyEOFAlso, you need to put quotes around EOF to prevent variable expansion in the here-doc. Otherwise it expands $x in the original shell, not in the subshell that gets the value from select. |
_webapps.77940 | In Excel we can trace precedents and dependents. Can some one help me with something similar for Google Sheets?I have got the code for dependents from https://webapps.stackexchange.com/a/50149/88163. function onOpen() { var ss = SpreadsheetApp.getActiveSpreadsheet(); var menuEntries = [] menuEntries.push({name: Trace Dependents, functionName: traceDependents}); ss.addMenu(Detective, menuEntries);}function traceDependents(){ var dependents = [] var ss = SpreadsheetApp.getActiveSpreadsheet(); var currentCell = ss.getActiveCell(); var currentCellRef = currentCell.getA1Notation();; var range = ss.getDataRange(); var regex = new RegExp(\\b + currentCellRef + \\b) var formulas = range.getFormulas(); for (var i = 0; i < formulas.length; i++){ var row = formulas[i]; for (var j = 0; j < row.length; j++){ var cellFormula = row[j]; if (regex.test(cellFormula)){ dependents.push([i,j]); } } } var dependentRefs = []; for (var k = 0; k < dependents.length; k ++){ var rowNum = dependents[k][0] + 1; var colNum = dependents[k][1] + 1; var cell = range.getCell(rowNum, colNum); var cellRef = cell.getA1Notation(); dependentRefs.push(cellRef); } var output = Dependents: ; if(dependentRefs.length > 0){ output += dependentRefs.join(, ); } else { output += None; } currentCell.setNote(output);} | How can I trace precedents in Google Sheet | google spreadsheets;google apps script | null |
_webapps.84237 | Do you know of a way to download photos and videos back from the new http://photos.google.com?I was expecting at least some unofficial tools like Flickr downloader, but I couldn't find any.I'm wary of using the service before I find out how to get data back in case of problems. | How to export photos and albums from Google Photos? | google photos | Google Photos is one of the products included in Google Takeout. You can even select exactly which albums you want to download. It will take some time, but once the archive is ready you'll receive an email with a (private) link for you to download your data.Albums will get their own folders within the archive. Photos not in albums appear to get put in folders based on date. Also, if the archive is too large for a single zip file, it will be broken up in to smaller chunks. (For me, each file was 2GB.) |
_unix.52206 | I experience relatively often that the partition table of a USB stick or SD card is suddenly no longer recognized by the kernel while (g)parted and fdisk still see it, as do other systems. I can even instruct gparted to do a fsck on one of the partitions but it fails of course because the device files let's say /dev/sdbX don't exist.I'll attach the dmesg output:[ 8771.136129] usb 1-5: new high-speed USB device number 4 using ehci_hcd[ 8771.330322] Initializing USB Mass Storage driver...[ 8771.330766] scsi4 : usb-storage 1-5:1.0[ 8771.331108] usbcore: registered new interface driver usb-storage[ 8771.331118] USB Mass Storage support registered.[ 8772.329734] scsi 4:0:0:0: Direct-Access Generic STORAGE DEVICE 0207 PQ: 0 ANSI: 0[ 8772.334359] sd 4:0:0:0: Attached scsi generic sg1 type 0[ 8772.619619] sd 4:0:0:0: [sdb] 31586304 512-byte logical blocks: (16.1 GB/15.0 GiB)[ 8772.620955] sd 4:0:0:0: [sdb] Write Protect is off[ 8772.620971] sd 4:0:0:0: [sdb] Mode Sense: 0b 00 00 08[ 8772.622303] sd 4:0:0:0: [sdb] No Caching mode page present[ 8772.622317] sd 4:0:0:0: [sdb] Assuming drive cache: write through[ 8772.629970] sd 4:0:0:0: [sdb] No Caching mode page present[ 8772.629992] sd 4:0:0:0: [sdb] Assuming drive cache: write through[ 8775.030231] sd 4:0:0:0: [sdb] Unhandled sense code[ 8775.030240] sd 4:0:0:0: [sdb] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE[ 8775.030249] sd 4:0:0:0: [sdb] Sense Key : Medium Error [current] [ 8775.030259] sd 4:0:0:0: [sdb] Add. Sense: Data phase CRC error detected[ 8775.030271] sd 4:0:0:0: [sdb] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00[ 8775.030291] end_request: I/O error, dev sdb, sector 0[ 8775.030300] quiet_error: 30 callbacks suppressed[ 8775.030306] Buffer I/O error on device sdb, logical block 0[ 8775.033781] ldm_validate_partition_table(): Disk read failed.[ 8775.033813] Dev sdb: unable to read RDB block 0[ 8775.037147] sdb: unable to read partition table[ 8775.047170] sd 4:0:0:0: [sdb] No Caching mode page present[ 8775.047185] sd 4:0:0:0: [sdb] Assuming drive cache: write through[ 8775.047196] sd 4:0:0:0: [sdb] Attached SCSI removable diskHere, on the other hand, is what parted has to say about the same disk, at the same time:(parted) print Model: Generic STORAGE DEVICE (scsi)Disk /dev/sdb: 16.2GBSector size (logical/physical): 512B/512BPartition Table: msdosNumber Start End Size Type File system Flags 1 4194kB 62.9MB 58.7MB primary fat16 lba 2 62.9MB 16.2GB 16.1GB primary ext4It's not only parted, even the older fdisk has no trouble with that partition table:Command (m for help): pDisk /dev/sdb: 16.2 GB, 16172187648 bytes64 heads, 32 sectors/track, 15423 cylinders, total 31586304 sectorsUnits = sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 512 bytesI/O size (minimum/optimal): 512 bytes / 512 bytesDisk identifier: 0x000dbfc6 Device Boot Start End Blocks Id System/dev/sdb1 8192 122879 57344 c W95 FAT32 (LBA)/dev/sdb2 122880 31586303 15731712 83 LinuxI'm really clueless. It would be easy to say the partition table is corrupted but then why can gparted still read it without complaints (and there are none) or how can I reconstruct the partition table from what (g)parted miraculously found out? 
| Partition table not recognized by Linux kernel | linux;kernel;partition;fdisk;parted | For some reason your kernel fails to read the partition table:[ 8775.030291] end_request: I/O error, dev sdb, sector 0[ 8775.030300] quiet_error: 30 callbacks suppressed[ 8775.030306] Buffer I/O error on device sdb, logical block 0[ 8775.033781] ldm_validate_partition_table(): Disk read failed.Thus, it can't create devices for partitions as it did not read the partition table. Later when you try to see the partition table with parted or fdisk the IO is performed successfully. Try to use partprobe /dev/sdX when your kernel did not recognized the partitions at boot time.man partprobe:PARTPROBE(8) GNU Parted Manual PARTPROBE(8)NAME partprobe - inform the OS of partition table changesSYNOPSIS partprobe [-d] [-s] [devices...]DESCRIPTION This manual page documents briefly the partprobe command. partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table. |
_cs.60109 | Say you are given the following CFG $G$:

$$S \to S_1 \mid S_2 \\S_1 \to AbAS_1c \mid \epsilon \\S_2 \to BaBS_2c \mid \epsilon \\A \to Aa \mid \epsilon \\B \to Bb \mid \epsilon$$

What is $L(G)$? So far I've derived the following regular expressions: $ S \rightarrow (a^*ba^*)^*c \mid (b^*ab^*)^*c$, and I've come up with this $L(G)$: $L(G) = \{ (a^nba^n)^q c \mid (b^nab^n)^q c : n,q \geq 0 \}$. When you approach the second $S_1$, do you include the $c$ (as in, finish $S_1c$ and then go recursive)? | Finding Language of a CFG | formal languages;context free;formal grammars | To find the language of the grammar, you need to understand how recursion in production rules works. In solving A -> Aa | epsilon, you need to know that epsilon acts as a stopper for a recursive production, and you determine how many times the recursive production has been applied. One way to get an expression for A made up of terminals is to keep applying the production rule: A -> Aa -> Aaa -> Aaaa ... You see that a is produced recursively, and epsilon lets A stop after any step, so you can substitute A with a*, where the asterisk means n >= 0 repetitions with n an integer. The same idea applies to solving S1 and S2.

First, use the conclusion that A can be substituted with a*, since they are the same expression; then AbAS1c is the same as (a*)b(a*)(S1)(c). Keep applying the production rule again and again until you find the regularity:

S1 -> (a*)b(a*)S1c -> (a*)b(a*)(a*)b(a*)S1cc -> (a*)b(a*)(a*)b(a*)(a*)b(a*)S1ccc

(Note that a c is produced every time the rule is applied, which also answers your second question.)

You then see (a*)(a*) appearing, and you can substitute it with just a*, because they are exactly the same expression: let L be the number of repetitions of the left asterisk and R that of the right one. L >= 0 and R >= 0, both integers. Let S = L + R; the minimum value of S is 0 (both L and R can be zero), and since + is closed on the integers, S >= 0 and S is an integer, which matches the asterisk's definition exactly. Since L + R repetitions of a in (a*)(a*) equal the T repetitions of a in (a*), (a*) is exactly the same expression as (a*)(a*).

Then

S1 -> (a*)b(a*)S1c -> (a*)b(a*)b(a*)S1cc -> (a*)b(a*)b(a*)b(a*)S1ccc -> (a*)b(a*)b(a*)b(a*)b(a*)S1cccc ...

so the expression for S1 is ((a*)b)^n(a*)c^n, since ((a*)b) and c keep recurring every time you apply the production rule for S1; equivalently, you can write it as (a*)(b(a*))^nc^n, since (b(a*)) and c keep recurring. Doing the same for S2 gives ((b*)a)^n(b*)c^n, or (b*)(a(b*))^nc^n. Since S -> S1 | S2, you finally get

L(G) = { (a*)(b(a*))^nc^n | ((b*)a)^n(b*)c^n | epsilon : n >= 1, n an integer }
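The characterization in the answer is easy to machine-check: a word is in L(G) exactly when it is a prefix over {a, b} followed by c^n, where the prefix has exactly n b's (the S1 branch) or exactly n a's (the S2 branch), and the empty word when n = 0 (a's only ever appear around a b, so a nonempty prefix needs n >= 1). A small Python membership test under that reading:

def in_language(w):
    # Count the trailing c's (n), then check the remaining prefix
    # against the S1 / S2 branches of the derived language.
    n = len(w) - len(w.rstrip("c"))
    prefix = w[: len(w) - n]
    if "c" in prefix:
        return False
    if n == 0:
        return prefix == ""          # only epsilon when there are no c's
    s1 = prefix.count("b") == n and set(prefix) <= {"a", "b"}
    s2 = prefix.count("a") == n and set(prefix) <= {"a", "b"}
    return s1 or s2

for w in ["", "abac", "bc", "aabaacc", "ab"]:
    print(repr(w), in_language(w))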
_scicomp.25788 | I am having trouble implementing the 1-D central-upwind scheme as proposed by Kurganov and Petrova. In particular, the discretization of the nonconservative term $\bar{\mathbf{N}}_j^{(2)}$ (eq. 2.25) seems to be blowing my code up. If anyone is familiar with implementing such numerical schemes and is willing to help me out, I would appreciate it. My code is basically trying to reproduce figure 2.1 of this paper. I would be happy to share my code if you would like, or if someone has similar code I would also be grateful if you would share it.

References

Kurganov, Alexander, and Guergana Petrova. Central-upwind schemes for two-layer shallow water equations. SIAM Journal on Scientific Computing 31.3 (2009): 1742-1773. | Central-upwind scheme for two-layer shallow water equations | finite difference;numerical analysis;fluid dynamics | null
_softwareengineering.149464 | I have read in numerous places that when developing a product you need to take a different approach than when you are developing a project (think contract work). Some differences are:

1) There is no definitive user, but a user base.
2) You need to develop the minimal marketable feature set.
3) Scheduling needs to be watched: as there is often no fixed deadline, it is possible for the product to run overtime (or for scope creep).

I was wondering if there are people out there with experience in both, and if they could offer some input into any differences that they know of. Also, if anybody could provide any tips/good references for how to deal with the differences, I would be most appreciative. | Difference between planning a project vs. planning a product | project;product;difference | Some significant differences:

- A project is generally time-limited. A product has a lifespan that is usually not known at the time of development: your assumption should be that if it succeeds, you have an ongoing business.
- A project needs to deliver against specified deliverables. A product needs to offer a viable ongoing business. Guess which is harder :-)
- It's relatively easy to outsource a project. Outsourcing product development is usually a very bad idea (hint: you shouldn't outsource your core competency!)
- Since products are (usually) externally focused and intended to scale, quality tends to become more important as a success factor.

I think the differences listed in the question actually aren't intrinsic differences between products and projects. You could imagine launching a product with a full feature set, for example. A project may well have a broad and loosely defined user base. And both products and projects are likely to have issues with scheduling / scope creep :-)
_unix.367890 | I need to get the device names of all connected USB disks (ie sdd).I have 3 USB disks plugged in, and 2 SATA disks:$ find /sys/devices/ -name block /sys/devices/pci0000:00/0000:00:14.0/usb3/3-7/3-7:1.0/host5/target5:0:0/5:0:0:0/block/sys/devices/pci0000:00/0000:00:14.0/usb4/4-2/4-2:1.0/host6/target6:0:0/6:0:0:0/block/sys/devices/pci0000:00/0000:00:14.0/usb4/4-5/4-5:1.0/host4/target4:0:0/4:0:0:0/block/sys/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sys/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/blockI want to ignore the SATA disks, but I need to list all the USB disks.In the terminal, I can us ls and it will give me sdd:$ ls /sys/devices/pci0000:00/0000:00:14.0/usb3/3-7/3-7:1.0/host5/target5:0:0/5:0:0:0/blocksddBut I need to use this in a script. I need to iterate over all USB disks, and I don't know the exact path in advance, so I have to use wildcards (* or ?):for DISK in $(ls /sys/devices/pci0000:00/0000:00:14.0/usb?/*/*:1.0/host?/target?:0:0/?:0:0:0/block) ; doecho /dev/$DISKdonethe above only works if one USB disk is plugged in. If two or more disks are plugend in, I get sdd as well as the /sys path, which I don't want, ie:/dev//sys/devices/pci0000:00/0000:00:14.0/usb3/3-7/3-7:1.0/host5/target5:0:0/5:0:0:0/block:/dev/sdd/dev//sys/devices/pci0000:00/0000:00:14.0/usb4/4-2/4-2:1.0/host6/target6:0:0/6:0:0:0/block:/dev/sde/dev//sys/devices/pci0000:00/0000:00:14.0/usb4/4-5/4-5:1.0/host4/target4:0:0/4:0:0:0/block:/dev/sdchow can I iterate only over sdd sde sdc ?I am looking for a solution not using udev infrastructure, ie /dev/disk/by-path/ | Get the device name of connected USB disk | shell script;usb;path | You can do it with lsblk command.lsblk -l -o name,tran gives NAME TRANsda satasda1 sdb usbsdc usbsr0 sata-l stands for list format, so it's easier to parse. Otherwise, you would get a tree format like this:NAME TRANsda satasda1sdb usbsr0 sataSpecifying other flags will give you more information like FSTYPE, LABEL, UUID, MOUNTPOINT and many other, just run lsblk --help to see all options.You may want to use --paths --noheadings --scsi flags to have output printed like this:sata /dev/sdausb /dev/sdbusb /dev/sdcsata /dev/sr0and then grep over the input to filter out those lines with usb at the beginning of the line. |
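The answer's last suggestion translates to a few lines of Python if the device list is needed inside a script (flags as given in the answer: -S whole SCSI devices, -n no header, -p full paths):

import subprocess

# Collect the device paths of all USB disks by parsing the lsblk
# invocation suggested in the answer.
out = subprocess.run(
    ["lsblk", "-S", "-n", "-p", "-o", "NAME,TRAN"],
    capture_output=True, text=True, check=True,
).stdout
usb_disks = [line.split()[0]
             for line in out.splitlines()
             if line.split()[1:] == ["usb"]]
print(usb_disks)   # e.g. ['/dev/sdc', '/dev/sdd', '/dev/sde']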
_unix.9252 | I know that using the command:lsof -i TCP (or some variant of parameters with lsof) I can determine which process is bound to a particular port. This is useful say if I'm trying to start something that wants to bind to 8080 and some else is already using that port, but I don't know what.Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed. | Determining what process is bound to a port | networking;process;tcp;lsof | null |
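Since the question asks for a way that avoids lsof: ss -tlnp or netstat -tlnp give the same answer where available, and the raw data lives in /proc/net/tcp, which a script can read directly. A Python sketch (IPv4 only; check /proc/net/tcp6 for IPv6; reading other users' fd directories needs root):

import os

def pid_for_port(port):
    # Find the inode of the socket listening on `port` in
    # /proc/net/tcp (local address is hex "IP:PORT", state 0A is
    # TCP_LISTEN), then scan /proc/*/fd for "socket:[inode]".
    inode = None
    with open("/proc/net/tcp") as f:
        for line in list(f)[1:]:
            fields = line.split()
            if int(fields[1].split(":")[1], 16) == port and fields[3] == "0A":
                inode = fields[9]
    if inode is None:
        return None
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = "/proc/%s/fd" % pid
        try:
            for fd in os.listdir(fd_dir):
                if os.readlink(os.path.join(fd_dir, fd)) == "socket:[%s]" % inode:
                    return int(pid)
        except OSError:
            continue
    return None

print(pid_for_port(8080))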
_unix.198711 | I'm trying to use htop in tty1. However, some of the function keys don't appear to work as normal. F1 and F2 do nothing, and F3 seems to trigger setup (which should normally be triggered by F2). In addition, F4 and F5 don't work. Also, when I try and press Esc to get out of these screens, I have to press it twice.In a normal terminal (terminator), the function keys work fine. However, I have to press Esc twice here too, so perhaps that's a red herring.How can I use these function keys in tty1?EDITIn tty1, if I press Ctrl+v then F1 to F5, etc. I get the the following output:^[[[A^[[[B^[[[C^[[[D^[[[EIn terminator, I get^[OP^[OQ^[OR^[OS^[[15~The function keys above this are equivalent (e.g. ^[[17~ for F6).EDIT 2In response to Stphane Chazelas's comment.$TERM is the same in tty1 as in my normal, working terminal. It is xterm-256color.I am not using screen or tmux.I am using htop 1.0.3, although my first edit seems to point to it being an issue upstream of htop.Does infocmp -L1 | grep key_f match what those keys send for you?I'm not sure what you mean by match what those keys send for you, but I ran this command in both my normal terminal and tty1, and the output was identical, as below.key_f1=\EOP,key_f10=\E[21~,key_f11=\E[23~,key_f12=\E[24~,key_f13=\E[1;2P,key_f14=\E[1;2Q,key_f15=\E[1;2R,key_f16=\E[1;2S,key_f17=\E[15;2~,key_f18=\E[17;2~,key_f19=\E[18;2~,key_f2=\EOQ,key_f20=\E[19;2~,key_f21=\E[20;2~,key_f22=\E[21;2~,key_f23=\E[23;2~,key_f24=\E[24;2~,key_f25=\E[1;5P,key_f26=\E[1;5Q,key_f27=\E[1;5R,key_f28=\E[1;5S,key_f29=\E[15;5~,key_f3=\EOR,key_f30=\E[17;5~,key_f31=\E[18;5~,key_f32=\E[19;5~,key_f33=\E[20;5~,key_f34=\E[21;5~,key_f35=\E[23;5~,key_f36=\E[24;5~,key_f37=\E[1;6P,key_f38=\E[1;6Q,key_f39=\E[1;6R,key_f4=\EOS,key_f40=\E[1;6S,key_f41=\E[15;6~,key_f42=\E[17;6~,key_f43=\E[18;6~,key_f44=\E[19;6~,key_f45=\E[20;6~,key_f46=\E[21;6~,key_f47=\E[23;6~,key_f48=\E[24;6~,key_f49=\E[1;3P,key_f5=\E[15~,key_f50=\E[1;3Q,key_f51=\E[1;3R,key_f52=\E[1;3S,key_f53=\E[15;3~,key_f54=\E[17;3~,key_f55=\E[18;3~,key_f56=\E[19;3~,key_f57=\E[20;3~,key_f58=\E[21;3~,key_f59=\E[23;3~,key_f6=\E[17~,key_f60=\E[24;3~,key_f61=\E[1;4P,key_f62=\E[1;4Q,key_f63=\E[1;4R,key_f7=\E[18~,key_f8=\E[19~,key_f9=\E[20~, | How can I pass function keys to htop in a tty? | terminal;console;htop;terminfo | By setting:export TERM=xterm-256coloryou're telling htop (and every other visual terminal application that uses the termcap or terminfo database) that your terminal is a 256 colour xterm and not a Linux virtual console.htop will query the terminfo database to know what sequence of characters is sent upon F1, F2... but will get those for xterm.xterm sends different sequences than the Linux virtual console for those keys which you can verify by querying the terminfo database by hand with infocmp for instance:$infocmp -L1 xterm-256color | grep 'key_f[1-5]=' key_f1=\EOP, key_f2=\EOQ, key_f3=\EOR, key_f4=\EOS, key_f5=\E[15~,$infocmp -L1 linux | grep 'key_f[1-5]=' key_f1=\E[[A, key_f2=\E[[B, key_f3=\E[[C, key_f4=\E[[D, key_f5=\E[[E,So htop will not recognise \E[[A as a F1, it will expect \EOP for that.Here, you don't want to assign values to $TERM in ~/.bashrc. $TERM should be set by the terminal emulators (xterm, terminator) themselves, and by getty for Linux virtual consoles (should be linux there).If you're not happy with the value that a particular terminal emulator picks for $TERM, that's the configuration of that terminal emulators you should update. |
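To see exactly what strings an application like htop will look up for the current $TERM, the terminfo database can also be queried from Python's curses binding (capability names kf1 ... kf5 correspond to F1 ... F5), which mirrors the infocmp check in the answer:

import curses

curses.setupterm()          # consults $TERM, just as htop does
for i in range(1, 6):
    seq = curses.tigetstr("kf%d" % i)
    print("F%d -> %r" % (i, seq))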
_cs.194 | What is the time complexity of finding the diameter of a graph $G=(V,E)$? 1) ${O}(|V|^2)$ 2) ${O}(|V|^2+|V| \cdot |E|)$ 3) ${O}(|V|^2\cdot |E|)$ 4) ${O}(|V|\cdot |E|^2)$ The diameter of a graph $G$ is the maximum of the set of shortest path distances between all pairs of vertices in a graph. I have no idea what to do about it; I need a complete analysis of how to solve a problem like this. | The time complexity of finding the diameter of a graph | algorithms;time complexity;graph theory | Update: This solution is not correct. The solution is unfortunately only true (and straightforward) for trees! Finding the diameter of a tree does not even need this. Here is a counterexample for graphs (diameter is 4, the algorithm returns 3 if you pick this $v$). If the graph is directed this is rather complex; here is a paper claiming faster results in the dense case than using algorithms for all-pairs shortest paths. However my main point is about the case where the graph is not directed and has non-negative weights; I heard of a nice trick several times: 1) Pick a vertex $v$. 2) Find $u$ such that $d(v,u)$ is maximum. 3) Find $w$ such that $d(u,w)$ is maximum. 4) Return $d(u,w)$. Its complexity is the same as two successive breadth first searches, that is $O(|E|)$ if the graph is connected. It seemed folklore, but right now I'm still struggling to get a reference or to prove its correctness. I'll update when I achieve one of these goals. It seems so simple I post my answer right now; maybe someone will get it faster. If the graph is weighted, Wikipedia seems to say $O(|E|+|V|\log|V|)$ but I am only sure about $O(|E|\log|V|)$. If the graph is not connected you get $O(|V|+|E|)$ but you may have to add $O(|V|)$ to pick one element from each connected component. I'm not sure if this is necessary and, anyway, you may decide that the diameter is infinite in this case.
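A minimal Python sketch of the four-step trick above, over an adjacency-list dict; as the counterexample shows, it is exact on trees but only a lower bound on general graphs:

    from collections import deque

    def bfs_farthest(adj, src):
        # return (farthest vertex, its distance) from src -- one BFS, O(|V|+|E|)
        dist = {src: 0}
        queue, far = deque([src]), src
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    if dist[v] > dist[far]:
                        far = v
                    queue.append(v)
        return far, dist[far]

    def double_sweep(adj, v):
        u, _ = bfs_farthest(adj, v)   # step 2
        _, d = bfs_farthest(adj, u)   # step 3
        return d                      # exact for trees, a lower bound otherwise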
_softwareengineering.299071 | Months back I started working on a project using 'jbpm' by Red Hat. JBPM is a piece of open source software which maps business processes and allows for user input, in addition to other functions. To deploy and start working with jbpm we were to download a WAR file, and deploy it on an application server of our choice (we chose JBoss) along with configuration of the server. This allowed us to access a web interface into the service. Later we decided to build a web application (custom interface) which leveraged the remote api of JBPM, so we could use JBPM's functionality but pretty up and further customize our app. So physically we had the jbpm service hosted on one server as a WAR, and our own web application hosted on another server as an EAR. The problem now is that we'd like to deploy the two apps on the same server to avoid the need for cross-talk between the servers. For a while I was attempting to deploy the jbpm WAR within the EAR of the web app, effectively bundling them together, and suddenly I started running into a host of errors. This is where my question comes in: I'm pretty ignorant when it comes to application servers, and what I'm interested in understanding is why we were able to deploy the WAR by itself so easily onto our JBoss server, but started running into endless 'ClassNotFound' exceptions when we tried to insert it into an EAR. The obvious thing that comes to mind is that when deployed as a WAR it has direct access to the application server, and when inserting it into the EAR there is a layer between the two. So if that's what's going on, how do I solve this problem? If that's not what's going on, what might be causing the discrepancy between the two deployment strategies? My end-game in all of this is to understand if I'm able to use the EAR deployment strategy at all, but my knowledge base is so small that I'm having trouble approaching the problem. | WAR deployment on its own vs within EAR | java;deployment | null
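A likely culprit is Java EE's hierarchical class loading: jars in the EAR's lib/ directory are visible to every module, but a WAR's own WEB-INF/lib stays private to that WAR, and modules do not see each other's classes by default, so a WAR that ran fine standalone can start throwing ClassNotFoundException once its dependencies are expected elsewhere in the EAR. A hypothetical layout (module names invented) showing where things are visible:

    myapp.ear
    +-- META-INF/application.xml   (declares each module)
    +-- lib/shared-utils.jar       (visible to every module in the EAR)
    +-- myapp-ejb.jar
    +-- jbpm-console.war           (its WEB-INF/lib is private to the WAR)

Moving jars that both modules need into the EAR's lib/ directory is the usual first thing to try.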
_vi.7475 | When a variable on which a statusline depends changes, I see the result only in statuslines of active windows. Can inactive ones be made to see those changes too? | Auto-updating statuslines of inactive windows | statusline | null |
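One approach, as a sketch (g:my_flag is a made-up variable): Vim only repaints inactive statuslines on certain events, but :redrawstatus! forces the status line of every window to redraw, so it can be invoked right after the variable changes:

    " repaint every window's statusline after updating the variable
    let g:my_flag = 'busy'
    redrawstatus!

Wrapping the assignment and the redraw in one function keeps the two from drifting apart.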
_webapps.119 | I deleted my twitter account, and now I can't create a new one with the same email address! How do I get a new twitter account with the same email address? | How do I re-activate my twitter account using the same email address? | twitter;account management | I ran into this recently, and I thought this would be a good question to answer here. One way that worked for me was to create a new twitter account with a different email address, and then once that account was active I changed the email address on the account back to the old email address. Useful tip for me.
_codereview.55727 | Reverse a doubly linkedlist. Looking for code review, optimizations and best practices.public class ReverseDoublyLinkedList<T> implements Iterable<T> { private Node<T> first; private Node<T> last; private int size; public ReverseDoublyLinkedList() { }; public ReverseDoublyLinkedList(List<T> c) { for (T item : c) { add(item); } }; public void add (T t) { final Node<T> l = last; final Node<T> node = new Node<T>(null, t, null); last = node; if (first == null) { first = node; } else { l.right = node; node.left = l; } size++; } private static class Node<T> { Node<T> left; T item; Node<T> right; public Node(Node<T> left, T item, Node<T> right) { this.left = left; this.item = item; this.right = right; } } public void reverse() { if (first == null) { throw new IllegalArgumentException(The root cannot be null.); } Node<T> node = first; while (node != null) { Node<T> temp = node.left; node.left = node.right; node.right = temp; node = node.left; } Node<T> temp = last; last = first; first = temp; } @Override public Iterator<T> iterator() { return new ListItr(); } private class ListItr implements Iterator<T> { private int count; private Node<T> currentNode; public ListItr() { this.currentNode = first; } @Override public boolean hasNext() { return count < size; } @Override public T next() { if (!hasNext()) { throw new NoSuchElementException(); } T item = currentNode.item; currentNode = currentNode.right; count++; return item; } @Override public void remove() { currentNode.left.right = currentNode.right; currentNode.right.left = currentNode.left; } } public static void main(String[] args) { ReverseDoublyLinkedList<Integer> foo = new ReverseDoublyLinkedList<Integer>(); foo.add(10); foo.add(20); foo.add(30); foo.reverse(); Iterator<Integer> itr = foo.iterator(); while (itr.hasNext()) { System.out.println(itr.next()); } }} | Reverse a doubly linkedlist | java;linked list | null |
_unix.373545 | I have a bunch of files and need to verify their checksums. I have a text file that looks like: checksum <tab> filename <new line> Figured I'd use this as an exercise to improve my shell scripting. This is what I came up with and it did the trick; I'm just curious if there's a better way. I realize that it's not very flexible (such as assuming the file's format and that the algorithm is 256). But I tried to avoid cat and echo... :) Thanks! #!/bin/sh workingDir=/path/to/directory/ textFile=checksums.txt filePath=$workingDir$textFile while read a b; do shasumOutput=$(/usr/bin/shasum -a 256 $workingDir$b | /usr/bin/awk '{ print $1 }') if [ $a = $shasumOutput ]; then /usr/bin/printf "$b checksum matches: $a, $shasumOutput\n" else /usr/bin/printf "$b checksum doesn't match: $a, $shasumOutput\n" fi done < $filePath | Quick script to compare checksums | shell script | null
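For comparison: when the list is in the two-space format that shasum itself emits (rather than tab-separated), the whole loop can probably be replaced by the tool's built-in check mode:

    cd /path/to/directory/ && shasum -a 256 -c checksums.txt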
_codereview.159496 | I was trying to find any way of developing a string_id, a non-modifiable string holder that is O(1)-copyable and O(1)-comparable to be used as an id. The creator of the string_id is the one who knows the meaning of each string_id. So, if two different entities create two ids, those string_id identifiers represent different ids even if their string representations are equal. Each newly created, non-copied object creates a different id. So, two string_ids refer to the same string iff they are in the same copy-hierarchy (a tree of copied objects), and not because their values compare equal. An obvious way of implementing the string_id class could be holding a shared_ptr and comparing the saved address instead of the string contents, but, since the shared_ptr is thread-safe, it adds an extra overhead not needed in certain situations (for example, in my case, all of the string_id instances are going to be used in the gui/main-thread). So, I have implemented a light shared pointer as an auxiliary class for implementing the string_id class. It has the following characteristics: It has no default constructor. You always need to pass a unique_ptr holding the wanted object, to explicitly state the wanted value and that the instance is going to be non-shared (externally). It's not thread-safe, but it is exception-safe, and copying and comparison are O(1), as said before (all methods are). It doesn't use a reference counter, so as not to manage two dynamic objects. Instead, it uses a doubly-linked circular list of family members. When the first object is created, the next and previous pointers point to this (a => a). When an object b is created as a copy of a, b follows a (a => b => a). When an object is deleted, it is removed from the list. If I'm the last one of my hierarchy (if I point to myself), I delete the shared resource. It can be further improved with a deleter, or by implementing the move constructor/assignment to make it a bit faster (I trust the compiler though). The class: template<class resource_t>class family_member{ resource_t* p_raw_resource; mutable family_member const* p_previous; mutable family_member const* p_next;public: explicit family_member(std::unique_ptr<resource_t>&& resource) noexcept : p_raw_resource(resource.release()), p_previous(this), p_next(this) {} family_member(family_member const& sibling) noexcept : p_raw_resource(sibling.p_raw_resource), p_previous(std::addressof(sibling)), p_next(sibling.p_next) { p_next->p_previous = sibling.p_next = this; } family_member& operator=(family_member const& sibling) noexcept { p_raw_resource = sibling.p_raw_resource; p_previous = std::addressof(sibling); p_next = sibling.p_next; p_next->p_previous = sibling.p_next = this; return *this; } ~family_member() { if (p_next == this) // I'm the last one delete p_raw_resource; else { p_next->p_previous = p_previous; p_previous->p_next = p_next; } } resource_t& get() noexcept { return *p_raw_resource; } resource_t const& get() const noexcept { return *p_raw_resource; } bool same_family(family_member const& stranger) const { return p_raw_resource == stranger.p_raw_resource; }};To add support for non-C++14 users (no std::make_unique support), the following free function is provided: template<class resource_t, class... 
args_t>family_member<resource_t> make_family_member(args_t&& ...args){ return family_member<resource_t>(std::unique_ptr<resource_t> (new resource_t(std::forward<args_t>(args)...)) );}And the string_id class: class string_id{ family_member<std::string> str_id;public: explicit string_id(std::string const& id) : str_id(make_family_member<std::string>(id)) {} operator std::string const&() const { return str_id.get(); } operator char const*() const { return str_id.get().c_str(); } friend bool operator==(string_id const& a, string_id const& b) { return a.str_id.same_family(b.str_id); }};The question is: is that implementation memory-safe? Can it be seen as an anti-pattern? Should I go for other solutions to carry, at the same time, both the id and the string nature in the same object? And the most important question of all: is it worth it? | C++ string_id, an O(1)-copyable and O(1)-comparable non-modifiable string class | c++;strings;pointers | The copy constructor looks correct. family_member(family_member const& sibling) noexcept : p_raw_resource(sibling.p_raw_resource), p_previous(std::addressof(sibling)), p_next(sibling.p_next){ p_next->p_previous = sibling.p_next = this; } But the point of writing code in a high level language is to try and make it readable. Please don't chain assignments like that. It does not cost you anything to put each on its own line. { p_next->p_previous = this; sibling.p_next = this;} The assignment operator has a bug. family_member& operator=(family_member const& sibling) noexcept{ p_raw_resource = sibling.p_raw_resource; p_previous = std::addressof(sibling); p_next = sibling.p_next; p_next->p_previous = sibling.p_next = this; return *this;} You correctly add it to the new chain. But you did not remove it from the old chain. So the previous chain that it was in is now broken, as it links into the new chain via this. You have an issue with your get(). You have no way to tell if the class actually contains a valid pointer. It is perfectly legal to initialize this object with a nullptr (via an empty std::unique_ptr). Since you can't tell if the object contains a nullptr, every call to get() is a game of russian roulette; at some point you are going to invoke undefined behavior.
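Following the review, a possible repair for the assignment operator (a sketch in the original's conventions): unlink this from its current chain first, deleting the resource if it was the last member, and only then splice into the sibling's chain:

    family_member& operator=(family_member const& sibling) noexcept
    {
        if (this != std::addressof(sibling)) {
            if (p_next == this)                  // last member of the old chain
                delete p_raw_resource;
            else {                               // leave the old chain intact
                p_next->p_previous = p_previous;
                p_previous->p_next = p_next;
            }
            p_raw_resource = sibling.p_raw_resource;
            p_previous = std::addressof(sibling);
            p_next = sibling.p_next;
            p_next->p_previous = this;
            sibling.p_next = this;
        }
        return *this;
    }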
_unix.388399 | When I start bash or any other shell, it has no history. Do you have any idea what I can do about it? I'm trying to use the up arrow and it has no effect if I start a new shell on OpenBSD or Ubuntu xenial. | Enable history for shell | shell;ubuntu;command history;openbsd | null
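On OpenBSD the default interactive shell is ksh, which keeps no history and ignores the arrow keys until told otherwise. A sketch for ~/.profile (the file name and size are just conventional choices):

    export HISTFILE=$HOME/.ksh_history   # without HISTFILE, ksh stores nothing
    export HISTSIZE=5000
    set -o emacs                         # enables arrow-key history recall

On Ubuntu, first confirm the account's login shell really is bash (grep "^$USER" /etc/passwd); a bare /bin/sh prompt offers no arrow-key history either.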
_softwareengineering.201756 | I'm writing a game that has a lot of time-based aspects. I use time to help estimate player positions when the network stalls and packets aren't going through (and the time between packets being received and not). It's a pacman type game in the sense that a player picks a direction and can't stop moving, so that system makes sense (or at least I think it does). So I have two questions: 1) How do I sync the clocks of the games at the start, given that there is delay in the network? 2) Is it okay NOT to sync them and just assume that they are the same (my code is time-zone independent)? It's not a super competitive game where people would change their clocks to cheat, but still. The game is being programmed in Java and Python (parallel development as a project). | How to sync clocks over networking for game development? | java;python;game development;networking | null
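A common answer to question 1 is an NTP-style round-trip estimate: ask the server for its time, assume the reply took half the round trip, and keep the sample with the smallest round trip since it bounds the error best. A Python sketch (get_server_time() is a placeholder for one request/reply over your own protocol):

    import time

    def estimate_offset(get_server_time, samples=8):
        best = None
        for _ in range(samples):
            t0 = time.time()
            server = get_server_time()          # server's clock reading
            t1 = time.time()
            rtt = t1 - t0
            offset = server - (t0 + rtt / 2.0)  # server time minus our midpoint
            if best is None or rtt < best[0]:
                best = (rtt, offset)            # smallest RTT = tightest bound
        return best[1]                          # add to local time to approximate server time

This also bears on question 2: absolute clocks on two machines can differ by seconds or more, so exchanging an offset once at session start is the safer default.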
_codereview.138312 | I recently answered a question that had me wondering if I could simplify this below code without creating an extra function:for object in objects { if let type = object[type] where !type.isEmpty, let name = object[name] { print(pokemonTypeDefenseChart[type]) for weakness in pokemonTypeDefenseChart[type]! { if pokemonWeaknessChart[weakness] == nil { pokemonWeaknessChart[weakness] = [] } pokemonWeaknessChart[weakness]?.append(name) } } if let typeTwo = object[typeTwo] where !typeTwo.isEmpty, let name = object[name] { for weakness in pokemonTypeDefenseChart[typeTwo]! { if pokemonWeaknessChart[weakness] == nil { pokemonWeaknessChart[weakness] = [] } pokemonWeaknessChart[weakness]?.append(name) } }}Essentially the code loops through objects (list of pokemon) and is designed to add all of the pokemon weak to type (and optionally typeTwo).I ended up just refactoring the for statements inside the if let statements into a function, but I was wondering if it was possible to compress these without an extra function. The only difference is the type.I know I could simplify it a bit by using a map instead of for loops but I wanted to keep it more understandable for the question.Any ideas? | Registering Pokmon weaknesses | swift;pokemon | You could do something like:for x in [type, typeTwo] { if let type = object[x] where !type.isEmpty, let name = object[name], defenseChart = pokemonTypeDefenseChart[type] { /* ... */ }}Also, to simplify the rest of your code, you could extend Dictionary like this:extension Dictionary { subscript(key: Key, fallback fallback: Value) -> Value { get { return self[key] ?? fallback } set { self[key] = newValue } }}This allows you to replace this:for weakness in defenseChart { if pokemonWeaknessChart[weakness] == nil { pokemonWeaknessChart[weakness] = [] } pokemonWeaknessChart[weakness]?.append(name)}by this:for weakness in defenseChart { pokemonWeaknessChart[weakness, fallback: []].append(name)} |
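For what it's worth, the fallback subscript from the answer also reads nicely at other call sites, e.g.:

    var chart: [String: [String]] = [:]
    chart["fire", fallback: []].append("Charmander")   // no nil-check needed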
_cstheory.37514 | I am looking for the name and a reference for a $\Delta_2^P$-complete problem that looks like the followingInput:A collection of CNF formulas $\phi_i(x_1^i, x_2^i,\dots, x_m^i, z_1, z_2, \dots, z_{i-1})$ for $1 \leq i \leq n$ where the $x_j^i$ are free variables and the $z_i$ variables are bound, and the value of $z_i$ is true if $\phi_i$ is satisfiable and false if $\phi_i$ is unsatisfiable.Output: Whether $\phi_n$ is satisfiable.I looked at https://cs.stackexchange.com/questions/14251/which-problems-are-hard-for-pnp and at https://mathoverflow.net/questions/2218/characterize-pnp-a-k-a-delta-2p but couldn't find what I'm looking for. I believe I came across a paper mentioning a problem defined similarly to the one above a couple months ago, but I don't remember which paper was listing it. I am not 100% sure about my recollection of the problem definition and this is one of the reasons behind this reference request. | Reference request for a $\Delta_2^P$ satisfiability problem | cc.complexity theory;reference request;sat | null |
_unix.77315 | Given that you usually have to authenticate to the X Server by way of a magic cookie stored in the .xauthority file in the user's home directory: How does GDM (like most login processes, running as root, I would assume) connect to the X Server in order to draw the login display? Does it use any .xauthority files stored in the root user's home directory or does it bypass authentication altogether? | How does GDM authenticate to the X Server? | xorg;x11;gdm | On my system ps finds this: /usr/bin/Xorg -br :0 vt7 -nolisten tcp -auth /var/lib/xdm/authdir/authfiles/A:0-wEJjac The display manager starts X, passing the auth file to use as a parameter. It can use that file directly. Edit 1: It's KDM in my case, not GDM.
_unix.168650 | I want to use multiple proxy servers at the same time to speed up my downloads. Reverse proxy products such as haproxy and nginx can use multiple proxy servers, but only one proxy per session. Client---haproxy---proxy1/proxy2/proxy3-----Webserver But I want to balance one session. Imagine that I am downloading a big file. In normal conditions this download comes to the client through only one proxy. But I want to divide this download into 3 parts and utilize 3 proxies. | bonding multiple proxy connection | proxy;nginx;load balancing;haproxy | null
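With plain HTTP range requests this can be approximated without haproxy at all; a sketch using curl (the proxy addresses are placeholders, and the server must honour Range headers):

    url='http://example.com/big.iso'
    curl -x proxy1:3128 -r 0-99999999          -o part1 "$url" &
    curl -x proxy2:3128 -r 100000000-199999999 -o part2 "$url" &
    curl -x proxy3:3128 -r 200000000-          -o part3 "$url" &
    wait
    cat part1 part2 part3 > big.iso   # reassemble the three ranges in order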
_softwareengineering.236459 | I've been finding a lot of blog posts claiming JS encryption is unsafe; here's a couple of detailed ones: http://www.matasano.com/articles/javascript-cryptography/ http://rdist.root.org/2010/11/29/final-post-on-javascript-crypto/ My question is: if browsers truly are inherently unsafe then, by extension, entering any PCI-related info in the browser is unsafe - regardless of JS encryption, HTTPS, or any other security measures? Which would imply that malicious parties should be taking major advantage of this fact, right? Could someone provide specific examples where browser vulnerabilities were leveraged to steal massive amounts of PCI-info/PII (by massive I mean comparable to the amount that could be obtained by hacking into the hosting servers/DB)? Also, despite all those posts describing security flaws, there's a proliferation of payment services and JS crypto libraries - does that indicate that most companies/communities: are unaware of browser vulnerabilities? are simply disregarding the underlying issues and jumping on the bandwagon to make some dough? have weighed the (possibly low) likelihood of someone going through the trouble of exploiting browsers and decided it's still worthwhile to capture payments through browsers? EDIT Using SSL/TLS addresses some of the issues, but definitely not all. Here are a few notable issues that fall outside of the area that SSL/TLS can solve (quoted directly from the Matasano blog post): The prevalence of content-controlled code. We mean that pages are built from multiple requests, some of them conveying Javascript directly, and some of them influencing Javascript using DOM tag attributes (such as onmouseover). The malleability of the Javascript runtime. There is no reliable way for any piece of Javascript code to verify its execution environment. Javascript crypto code can't ask, am I really dealing with a random number generator, or with some facsimile of one provided by an attacker? And it certainly can't assert nobody is allowed to do anything with this crypto secret except in ways that I, the author, approve of. These are two properties that often are provided in other environments that use crypto, and they're impossible in Javascript. What else is the Javascript runtime lacking for crypto implementors? Two big ones are secure erase (Javascript is usually garbage collected, so secrets are lurking in memory potentially long after they're needed) and functions with known timing characteristics. Real crypto libraries are carefully studied and vetted to eliminate data-dependant code paths --- ensuring that one similarly-sized bucket of bits takes as long to process as any other --- because without that vetting, attackers can extract crypto keys from timing. Again, my main point is that, technically, once a credit card number (or some other important piece of info) is entered into a text field of a page there's a chance that it's been compromised - and at that, compromised more easily than if it were entered in a native application. | Browser security and payments | javascript;security;browser;encryption | null
_ai.111 | Obviously driverless cars aren't perfect, so imagine that the Google car (as an example) got into a difficult situation. Here are a few examples of unfortunate situations caused by a set of events: the car is heading toward a crowd of 10 people crossing the road, so it cannot stop in time, but it can avoid killing 10 people by hitting the wall (killing the passengers); avoiding killing the rider of the motorcycle, considering that the probability of survival is greater for the passenger of the car; killing an animal on the street in favour of a human being; changing lanes to crash into another car to avoid killing a dog. And here are a few dilemmas: Does the algorithm recognize the difference between a human being and an animal? Does the size of the human being or animal matter? Does it count how many passengers it has vs. people in the front? Does it know when babies/children are on board? Does it take the age into account (e.g. killing the elderly first)? How would an algorithm decide what it should do from the technical perspective? Is it aware of the above (counting the probability of kills), or not (killing people just to avoid its own destruction)? Related articles: Why Self-Driving Cars Must Be Programmed to Kill; How to Help Self-Driving Cars Make Ethical Decisions | How could self-driving cars make ethical decisions about who to kill? | algorithm;self driving;decision theory;ethics | The answer to a lot of those questions depends on how the device is programmed. A computer capable of driving around and recognizing where the road goes is likely to have the ability to visually distinguish a human from an animal, whether that be based on outline, image, or size. With sufficiently sharp image recognition, it might be able to count the number and kind of people in another vehicle. It could even use existing data on the likelihood of injury to people in different kinds of vehicles. Ultimately, people disagree on the ethical choices involved. Perhaps there could be ethics settings for the user/owner to configure, like consider life count only vs. younger lives are more valuable. I personally would think it's not terribly controversial that a machine should damage itself before harming a human, but people disagree on how important pet lives are. If explicit kill-this-first settings make people uneasy, the answers could be determined from a questionnaire given to the user.
_softwareengineering.304811 | I've never seen a programming language with conditional assignment targets, eg.:// If (x == y), then var1 will be set to 1, else var2 will be set to 1((x == y) ? var1 : var2) = 1The target of the assignment is determined conditionally at run-time, in this case based on whether x == y.It seems like it could be a handy syntax.Anyone know of any programming language which supports this?Or is there a theoretical reason it can't be done effectively? | Has any language ever supported a conditional assignment target? | programming languages;theory;conditions | This isn't really a theory question, but a practical one.C++ supports what you're asking about:[C++14: 5.16/4]: If the second and third operands are glvalues of the same value category and have the same type, the result is of that type and value category [..]For example:#include <iostream>int x = 3, y = 4;void foo(const bool b){ (b ? x : y) = 6;}int main(){ std::cout << x << ' ' << y << '\n'; // 3 4 foo(true); std::cout << x << ' ' << y << '\n'; // 6 4 foo(false); std::cout << x << ' ' << y << '\n'; // 6 6}(live demo)(This is basically the same as *ptr = val, since dereferencing produces an lvalue.)It's worth noting that C doesn't support it:#include <stdio.h>#include <stdbool.h>int x = 3, y = 4;void foo(const bool b){ (b ? x : y) = 6;}int main(){ printf(%d %d\n, x, y); // 3 4 foo(true); printf(%d %d\n, x, y); // 6 4 foo(false); printf(%d %d\n, x, y); // 6 6}// main.c: In function 'foo':// main.c:8:17: error: lvalue required as left operand of assignment// (b ? x : y) = 6; ^(live demo)… though it will allow you to simulate this technique, by applying my early observation regarding pointer dereferences:*(b ? &x : &y) = 6; |
_unix.147687 | I'm trying to use secretsdump.py, which relies on winregistry.py.The error I'm getting is:Traceback (most recent call last): File secretsdump.py, line 41, in <module> from impacket import version, smbconnection, winregistry, ntlmImportError: cannot import name winregistryI've used sys.path to see where python loads files from and copied winregistry.py to some of these directories, which made no difference. I'm using Kali Linux, a Debian based distribution.I've tried upgrading python, which had no effect.What else could I try to solve this problem? | Python failing to import winregistry | debian;python | from impacket import version, smbconnection, winregistry, ntlm in this case means from the package impacket, import the modules version, smbconnection, winregistry and ntlm.This means you need impacket, the package, not winregistry, a submodule of impacket, on the path. Try putting the whole package on the path somewhere, or just putting the impacket package right next to the secretsdumpy.py script.impacket can be found here.The python2 tutorial section on package imports here. |
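Alternatively, the package's location can be added to the path explicitly at the top of the script (a sketch; the directory name is hypothetical):

    import sys, os
    # assumption: the impacket source tree was unpacked next to this script
    sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "impacket-src"))
    from impacket import winregistry   # resolves the package, not a lone winregistry.py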
_unix.74061 | Most of my music is ripped FLACs with cue sheets. I listen to albums with cue sheets mainly because of last.fm scrobbling, but I want the same experience of gapless playback as when listening to the whole FLAC using VLC (I really like its simplicity). Is there any solution for this? | VLC - gapless cue support | audio;vlc | null
_unix.356459 | What are the possible ways to stream system audio to other devices? Mainly I'm looking for a way to stream audio from Kodi (running on Raspbian or LibreELEC, etc.) to a smartphone (iOS), to use the smartphone as a wireless headphone. I've found some ways around the internet, none of which worked for me; the main reference for my idea is here. Some other ways I've read about are PulseAudio network streaming, JACK, Icecast and SHOUTcast. So how can this be accomplished the right way? UPDATE: One way that worked is using pulseaudio-dlna with a UPnP media renderer such as BubbleUPnP, but it's not stable enough (audio loses sync). | Stream system audio to other devices over the network | audio;raspbian;alsa;pulseaudio;streaming | null
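For reference, the pulseaudio-dlna route from the update looks roughly like this (sink and input names vary per setup):

    pulseaudio-dlna                    # exposes discovered UPnP/DLNA renderers as PulseAudio sinks
    pactl list short sinks             # note the name of the new DLNA sink
    pactl list short sink-inputs       # find the id of the playing stream
    pactl move-sink-input <input-id> <dlna-sink-name>   # route the stream to the phone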
_softwareengineering.82619 | I have some experience regarding iPhone and Android development but I am now struggling to solve a new class of problem: apps that involve a client/server chatroom feature. That is, an app where people can exchange text over the internet, without having the app constantly poll content from the server. So that problem can't be solved with a normal php/mysql website; there must be some kind of application running on a server that is able to send messages from the server to the phone, rather than having the phone check for new messages every 10 seconds. So I'm looking for ways to solve the different problems here: What framework should I use on the two sides (phone / server)? It should be some kind of library that doesn't prevent me from writing paid apps. It should also be possible to have the same server for the iPhone and Android versions of the app. What server / hosting solution do I need, with what sort of features? I just have no experience regarding server applications that can handle and initiate multiple connections and are hosted on hardware that is always online. I tried to find resources online but couldn't so far; either the libraries had the wrong kind of license/language or I just didn't understand. Sometimes there were nice tutorials, but for different needs such as peer2peer chat over a local network. Same with the server and the hosting problem: not sure where to start really. I'm calling for help, and I promise I will complete this page with notes about the experience I will get. Obviously the ideal would be to find a tutorial I missed that includes client code, server code and a free scalable server. That being said, if I see something as good, it probably means that I have eaten the wrong kind of mushroom again. So, failing that, any pointer which might help me toward that quest would be greatly appreciated. | Iphone/Android app chatroom development what framework & hosting needs? | iphone;android;hosting;im | If you want to do Cross-Platform you might want to go for Urban Airship. This is a commercial service, but they do have a free plan (up to 1 million messages a month, $0.0010 per extra message). They also have an 'advanced package' for unlimited sending. Don't know the price (http://urbanairship.com/pricing). If you're only targeting Android I would go for Google's Cloud to Device Messaging Framework (C2DM). Yes, it's only supported for devices that run API level 2.2 and up, but it's the same technology -- and persistent connection with the phones -- that Google uses for its own apps which use push notifications. An Android alternative might be The Deacon Project. It is Open Source, still in beta (last code drop is from 2010; don't know if it is actively being developed any longer) but it supports older versions of Android. Good luck with the implementation!
_cstheory.22130 | Kurt Gödel's incompleteness theorems establish the inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. Homotopy Type Theory provides an alternative foundation for mathematics, a univalent foundation based on higher inductive types and the univalence axiom. The HoTT book explains that types are higher groupoids, functions are functors, type families are fibrations, etc. The recent article Formally Verified Mathematics in CACM by Jeremy Avigad and John Harrison discusses HoTT with respect to formally verified mathematics and automatic theorem proving. Do Gödel's incompleteness theorems apply to HoTT? And if they do, is homotopy type theory impaired by Gödel's incompleteness theorem (within the context of formally verified mathematics)? | Homotopy type theory and Gödel's incompleteness theorems | lo.logic;type systems;homotopy type theory | HoTT suffers from Gödel incompleteness, of course, since it has a computably enumerable language and rules of inference, and we can formalize arithmetic in it. The authors of the HoTT book were perfectly aware of its incompleteness. (In fact, this is quite obvious, especially when half of the authors are logicians of some sort.) But does incompleteness impair HoTT? No more than it does any other formal system, and I think the whole issue is a bit misguided. Let me try an analogy. Suppose you have a car which can't take you everywhere on the planet. For instance, it can't climb vertically up a wall. Is the car impaired? Of course, it can't get you to the top of the Empire State building. Is the car useless? Far from it, it can take you to many other interesting places. Not to mention that the Empire State building has elevators.
_unix.277317 | I created a Virtual Machine (using VirtualBox) with a 20 GB disk so I could use Ubuntu 64-bit on a Mac. I needed Ubuntu for running Xilinx ISE 14.7. I downloaded the file, but when I tried to install it said I needed 20362 MB of disk space and I only had 6499 MB. Then I tried to increase the disk size of the virtual machine, and I increased it a lot... But when I tried again the same message appeared: I needed 20362 MB and only had 6499 MB... I looked at the settings of my virtual machine and I have indeed increased the disk size... My Optical Drive also says [Empty] (I don't know if that's relevant)... What am I doing wrong? Do I need to create a Virtual Machine again? If so please let me know what size I need. Thanks | How can I get more space available? | size | null
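Note that growing the virtual disk only enlarges the container; the partition and filesystem inside the guest still have to be expanded (e.g. with GParted from a live image) before Ubuntu sees the space. The host-side resize, for reference (the path and size are examples):

    VBoxManage modifyhd "$HOME/VirtualBox VMs/ubuntu/ubuntu.vdi" --resize 40960   # size in MB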
_datascience.19647 | In the original paper, the author says that the annotations are the concatenation of the forward states and the backward states at each time step. In the tensorflow implementation (memory param), the memory field is said to be populated with the output (not hidden state) of an RNN encoder. What am I missing? | Bahdanau Attention | tensorflow;rnn | null
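These are usually the same tensors: for standard RNN cells the per-step output is the hidden state (for an LSTM, the h part), so concatenating a bidirectional encoder's outputs yields exactly the paper's annotations. A TF 1.x-era sketch:

    # outputs is a (forward, backward) pair of [batch, time, units] tensors
    outputs, _ = tf.nn.bidirectional_dynamic_rnn(fw_cell, bw_cell, inputs, dtype=tf.float32)
    memory = tf.concat(outputs, axis=-1)   # the annotations h_j from the paper
    attention = tf.contrib.seq2seq.BahdanauAttention(num_units, memory)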
_unix.273182 | I wanted to install the command locate, which is available via sudo apt-get installmlocate. However, I first ran sudo apt-get installlocate which seems to have installed something else.Typing the command locate <package> however seems to call upon mlocate.What is the package locate, and can (should) it be safely removed? | Difference between locate and mlocate | locate | The locate package is the implementation of locate from GNU findutils. The mlocate package is another implementation of the same concept called mlocate. They implement the same basic functionality: quick lookup of file names based on an index that's (typically) rebuilt every night. They differ in some of their functionality beyond basic usage. In particular, GNU locate builds an index of world-readable files only (unless you run it from your account), whereas mlocate builds an index of all files but only lets the calling user see files that it could access. This makes mlocate more useful in most circumstances, but unusable in some unusual installations where it isn't run by the system administrator (because mlocate has to be setuid root), and a security risk.Under Debian and derivatives, if you install both, locate will run the mlocate implementation, and you need to run locate.findutils to run the GNU implementation. This is managed through alternatives. If you have both installed, they'll both spend time rebuilding their respective index, but other than that they won't conflict with each other. |
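You can see (and change) which implementation the bare locate name resolves to on Debian-family systems:

    update-alternatives --display locate    # shows the current target
    sudo update-alternatives --config locate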
_codereview.149700 | Can someone please review my code? I'm saving a hotel rate to a database. I want to save the rate twice, one field will update with every new rate, and one field will contain the rate that was found for that date the first time it was captured. The code looks very clunky and I know it can be improved. def save_results(rates, session, hotel, govt): for item in rates: rate = Rate(**item) try: # check if already in database q = session.query(Rate).filter(Rate.hotel==hotel['object'], Rate.arrive==rate.arrive).first() # update inital_rate if that field is empty if q: if 'govt_rate' in item and q.govt_rate_initial is None: q.govt_rate_initial = rate.govt_rate elif 'commercial_rate' in item and q.commercial_rate_initial is None: q.commercial_rate_initial = rate.commercial_rate if q and govt is True: q.updated = datetime.utcnow() q.govt_rate = rate.govt_rate elif q and govt is False: q.updated = datetime.utcnow() q.commercial_rate = rate.commercial_rate else: if govt is True: rate.govt_rate_initial = rate.govt_rate elif govt is False: rate.commercial_rate_initial = rate.commercial_rate hotel['object'].rates.append(rate) session.commit() except: session.rollback() raiseFull code is below for reference. I would appreciate comments on any other portion as well!# modelsclass Location(Base): __tablename__ = 'locations' id = Column(Integer, primary_key=True) city = Column(String(50), nullable=False, unique=True) per_diem_rate = Column(Numeric(6, 2)) hotels = relationship('Hotel', back_populates='location')class Hotel(Base): __tablename__ = 'hotels' id = Column(Integer, primary_key=True) name = Column(String(100), nullable=False, unique=True) phone_number = Column(String(20)) parking_fee = Column(String(10)) location_id = Column(Integer, ForeignKey('locations.id'), nullable=False) location = relationship('Location', back_populates='hotels') rates = relationship('Rate', back_populates='hotel', order_by='Rate.arrive', lazy='joined')class Rate(Base): __tablename__ = 'rates' id = Column(Integer, primary_key=True) govt_rate = Column(Numeric(6, 2)) govt_rate_initial = Column(Numeric(6, 2)) commercial_rate = Column(Numeric(6, 2)) commercial_rate_initial = Column(Numeric(6, 2)) arrive = Column(Date, nullable=False) govt_link = Column(String(500)) commercial_link = Column(String(500)) updated = Column(DateTime, default=datetime.datetime.utcnow, nullable=False) hotel_id = Column(Integer, ForeignKey('hotels.id'), nullable=False) hotel = relationship('Hotel', back_populates='rates')def scrape_marriott(HOTELS_TO_SCRAPE): # create db session session = create_db_session() good = 0 bad = 0 # loop through list of hotels to scrape for item in HOTELS_TO_SCRAPE: try: # get or create a hotel linked to a location location = get_or_create(session, Location, city=item['city']) hotel = get_or_create(session, Hotel, name=item['name'], location=location) # create a hotel dictionary to pass to the other functions hotel = {'property_code': item['property_code'], 'object': hotel} # govt rates # get rates dictionary rates = get_rates(hotel, govt=True) # save to database save_results(rates, session, hotel, govt=True) time.sleep(randint(20, 30)) # commercial rates # get rates dictionary rates = get_rates(hotel, govt=False) # save to database save_results(rates, session, hotel, govt=False) # log result and increase 'good process' counter print(item['name'] + ' processed successfully') good += 1 # wait between 30 and 60 seconds before next loop time.sleep(randint(30, 60)) except (AttributeError, TypeError, 
ConnectionError) as e: # log exception print('Error occurred for ' + item['name'] + '. ' + e) email_message('Error occurred for ' + item['name'] + '. ' + e) bad += 1 continue print('{} processed, {} failed'.format(good, bad)) email_message('{} processed, {} failed'.format(good, bad)) session.close()def get_rates(hotel, govt): dates = build_dates() rates = [] # get rates for this month and next month for d in dates: soup = get_soup(d['arrive'], d['depart'], hotel, govt) rates += parse_rates(soup, govt) time.sleep(randint(2, 5)) # remove duplicates filtered = [] for i in range(0, len(rates)): if rates[i] not in rates[i + 1:]: filtered.append(rates[i]) rates = filtered return ratesdef get_soup(arrive, depart, hotel, govt): if govt is True: rateCode = 'GOV' else: rateCode = 'none' browser = RoboBrowser(parser='html.parser') browser.open('http://www.urlremoved?propertyCode=' + hotel['property_code']) time.sleep(1) form = browser.get_form(action='/reservation/availabilitySearch.mi?isSearch=false') form['fromDate'].value = arrive form['toDate'].value = depart form['flexibleDateSearch'] = 'true' form['clusterCode'] = rateCode # submit form browser.submit_form(form) return browserdef parse_rates(soup, govt): # get calendar links table = soup.find('table') urls = table.find_all('a', class_='t-no-decor') rates = [] # loop through urls and parse each query string for item in urls: if len(item["class"]) == 1: # strip newlines and tabs raw_url = item['href'].replace('\n', '').replace('\t', '').replace(' ', '') parsed_url = urlparse(raw_url) query = parse_qs(parsed_url.query) # convert date to datetime format res_date = query['fromDate'][0] res_date = datetime.strptime(res_date, '%m/%d/%y') if govt == True: # append data to rates list rates.append({ 'arrive': res_date, 'govt_rate': query['rate'][0], 'govt_link': 'https://marriott.com' + urlunparse(parsed_url) }) elif govt == False: # append data to rates list rates.append({ 'arrive': res_date, 'commercial_rate': query['rate'][0], 'commercial_link': 'https://marriott.com' + urlunparse(parsed_url) }) return ratesdef save_results(rates, session, hotel, govt): for item in rates: rate = Rate(**item) try: # check if already in database q = session.query(Rate).filter(Rate.hotel==hotel['object'], Rate.arrive==rate.arrive).first() # update initial_rate if that field is empty if q: if 'govt_rate' in item and q.govt_rate_initial is None: q.govt_rate_initial = rate.govt_rate elif 'commercial_rate' in item and q.commercial_rate_initial is None: q.commercial_rate_initial = rate.commercial_rate if q and govt is True: q.updated = datetime.utcnow() q.govt_rate = rate.govt_rate elif q and govt is False: q.updated = datetime.utcnow() q.commercial_rate = rate.commercial_rate else: if govt is True: rate.govt_rate_initial = rate.govt_rate elif govt is False: rate.commercial_rate_initial = rate.commercial_rate hotel['object'].rates.append(rate) session.commit() except: session.rollback() raise | Saving items to database w/ sqlalchemy | python;sqlalchemy | Without changing the rest of the data structure, your try clause can be shortened: try: q = session.query(Rate).filter(Rate.hotel==hotel['object'], Rate.arrive==rate.arrive).first() if govt is True: sector = "govt" else: sector = "commercial" if q: if 'govt_rate' in item: sector = "govt" elif 'commercial_rate' in item: sector = "commercial" if q[sector + "_rate_initial"] is None: q[sector + "_rate_initial"] = rate[sector + "_rate"] else: rate[sector + "_rate_initial"] = rate[sector + "_rate"] hotel['object'].rates.append(rate)(This assumes you want 
the govt argument to save_results to be overridden by existing data in the field in cases where they don't match.) Ideally, your fields should have a level for "sector". So instead of string concatenation as I've done, you would have: q[sector].rate_initial = rate[sector].rate
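One caveat with the sketch above: plain SQLAlchemy ORM instances don't support item access, so the string-built attribute names would go through getattr/setattr, e.g.:

    initial = sector + "_rate_initial"
    if getattr(q, initial) is None:
        setattr(q, initial, getattr(rate, sector + "_rate"))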
_cstheory.259 | There is a large literature on property testing -- the problem of making a small number of black box queries to a function f:{0,1}^n -> R to distinguish between two cases: 1) f is a member of some class of functions C2) f is epsilon-far from every function in class C. The range R of the function is sometimes boolean: R = {0,1}, but not always. Here, epsilon-far is generally taken to mean hamming distance: the fraction of points of f that would need to be changed in order to place f in class C. This is a natural metric if f has a boolean range, but seems less natural if the range is say real valued. My question: does there exist a strand of the property-testing literature that tests for closeness to some class C with respect to other metrics? | Property testing in other metrics? | reference request;lg.learning;metrics;property testing;black box | Yes, there is! I will give three examples:Given a set S and a multiplication table over S x S, consider the problem of determining if the input describes an abelian group or whether it is far from one. Friedl, Ivanyos, and Santha in STOC '05 showed that there is a property tester with query complexity polylog(|S|) when the distance measure is with respect to the edit distance of multiplication tables which allows addition and deletion of rows and columns from the multiplication table. The same problem was also considered in the Hamming distance model by Ergun, Kannan, Kumar, Rubinfeld and Viswanathan (JCSS '00) where they showed query complexity of O~(|S|^{3/2}).There is a large amount of work done on testing graph properties where the graphs are represented using adjacency lists and there is a bound on the degree of each vertex. In this case, the distance model is not exactly Hamming distance but rather how many edges can be added or deleted while preserving the degree bound.In the closely related study of testing properties of distributions, various notions of distance between distributions have been studied. In this model, the input is a probability distribution over some set and the algorithm gets access to it by sampling from the set according to the unknown distribution. The algorithm is then required to determine if the distribution satisfies some property or is far from it. Various notions of distance have been studied here, such as L_1, L_2, earthmover. Probability distributions over infinite domains have also been studied here (Adamaszek-Czumaj-Sohler, SODA '10). |
_softwareengineering.345385 | I am new to unit testing. I started working on unit tests using PHPUnit, but it seems to be taking too much time: if I take 3 hours to write a class, it takes me 7 hours to write the test cases for it. There are a few factors behind it. As I said, I am new to this stuff, so I have to do a lot of R&D. Sometimes I get confused about what to test. Mocking some functions takes time. There are lots of permutations and combinations in a big function, so it gets difficult and pretty time consuming to mock those functions. Any idea how I can write test cases faster? Any ideas for the actual code so it gets faster to write test cases? What are the best practices I should follow in my code below? <?php namespace Api\Core;use Api\Exceptions\APICoreException;use Api\Exceptions\APITransformationException;use Api\Exceptions\APIValidationException;use CrmValidation;use Component;use DB;use App\Traits\Api\SaveTrait;use App\Traits\Api\FileTrait;use App\Repositories\Contract\MigrationInterface;use App\Repositories\Contract\ClientFeedbackInterface;use Mockery\CountValidator\Exception;use Api\Libraries\ApiResponse;use App\Repositories\Contract\FileInterface;use App\Repositories\Contract\MasterInterface;use App\Traits\Api\ApiDataConversionTrait;use ClientFeedback;use MigrationMapping;use Migration;use ComponentDetail;use FavouriteEditorCore;/** * Class ClientFeedbackCore * * @package Api\Core */class ClientFeedbackCore{ use SaveTrait, FileTrait, ApiDataConversionTrait; /** * @var array */ private $request = []; /** * @var */ private $migrationFlag; /** * @var string */ private $table = 'client_feedback'; /** * @var MigrationInterface */ public $migrationRepo; /** * @var ClientFeedbackInterface */ public $clientFeedbackRepo; /** * @var MasterInterface */ public $masterRepo; /** * @var FileInterface */ public $fileRepo; /** * ClientFeedbackCore constructor. * * @param MigrationInterface $migrationInterface * @param ClientFeedbackInterface $clientFeedbackInterface * @param MasterInterface $masterInterface * @param FileInterface $fileInterface */ public function __construct( MigrationInterface $migrationInterface, ClientFeedbackInterface $clientFeedbackInterface, MasterInterface $masterInterface, FileInterface $fileInterface ) { $this->clientFeedbackRepo = $clientFeedbackInterface; $this->migrationRepo = $migrationInterface; $this->masterRepo = $masterInterface; $this->fileRepo = $fileInterface; } /** * @author pratik.joshi */ public function init() { $this->migrationFlag = getMigrationStatus($this->table); } /** * @param $request * @return array * @author pratik.joshi * @desc stores passed data into respective entities and then stores into migration tables. If any issue while insert/update exception is thrown. 
*/ public function store($request) { if ($request == null || empty($request)) { throw new APIValidationException(trans('messages.exception.validation',['reason'=> 'request param is not provided'])); } $clientFeedbackId = $migrationClientFeedbackId = $favouriteEditorId = null; $errorMsgWhileSave = null; $clientFeedback = []; $filesSaved = []; $categoryNamesForFiles = []; $operation = config('constants.op_type.INSERT'); $this->init(); if( keyExistsAndissetAndNotEmpty('id',$request) && CrmValidation::getRowCount($this->table, 'id', $request['id']) ) { $operation = config('constants.op_type.UPDATE'); } //Step 1: set up data based on the operation $this->request = $this->convertData($request,$operation); //Step 2: Save data into repo, Not using facade as we cant reuse it, every facade will repeat insert update function if ($operation == config('constants.op_type.INSERT')) { $clientFeedback = $this->insertOrUpdateData($this->request, $this->clientFeedbackRepo); } else if($operation == config('constants.op_type.UPDATE')) { $clientFeedback = $this->insertOrUpdateData($this->request, $this->clientFeedbackRepo,$this->request['id']); } if ( !keyExistsAndissetAndNotEmpty('client_feedback_id',$clientFeedback[ 'data' ]) ) { throw new APICoreException(trans('messages.exception.data_not_saved')); } //If no exception thrown, save id $clientFeedbackId = $clientFeedback[ 'data' ][ 'client_feedback_id' ]; //Step 3: prepare array for mig repo & save() if($this->migrationFlag && $operation == config('constants.op_type.INSERT')) { $this->saveMigrationDataElseThrowException($this->table, $clientFeedback[ 'data' ][ 'client_feedback_id' ], 'client_feedback', $this->request['name']); } //If no exception thrown, save id $paramsForFileSave = [ 'entity_id' => $clientFeedbackId, 'entity_type' => $this->clientFeedbackRepo->getModelName(), ]; //Step 4: Save datainto file, Save job feedback files with params : files array to save, migration data for files //The method prepareFileData will be called by passing multiple files, and some needed params for file which internally calls prepareData //$filePreparedData will be in format : $filePreparedData['field_cf_not_acceptable_four'][0] => whole file array(modified) $filePreparedData = $this->fileRepo->prepareFileData($this->request[ 'files' ], $this->masterRepo, $paramsForFileSave); $filesSaved = $this->fileRepo->filesInsertOrUpdate($filePreparedData); //If any file is not saved, it returns false, throw exception here if($filesSaved == false) { throw new APICoreException(trans('messages.exception.data_not_saved')); } //Step 5: Save data for file in migra repo. //For each file type and each file in it, loop, Check for insert data if(getMigrationStatus('file') && array_key_exists('insert',$filesSaved) && count($filesSaved['insert'])) { foreach ($filesSaved['insert'] as $singleFileSaved) { $fileId = $singleFileSaved['data']['file_id']; $wbTitle = $filesSaved['extra'][$fileId]; $this->saveMigrationDataElseThrowException('file', $singleFileSaved['data']['file_id'], 'files', $wbTitle); } } //We get created by or last modified by $createdOrLastModifiedBy = keyExistsAndissetAndNotEmpty('created_by',$this->request) ? 
$this->request['created_by'] : $this->request['last_modified_by']; //Calling FavouriteEditorCore as we want to save favorite or un-favorite editor $favouriteEditor = FavouriteEditorCore::core( $this->request[ 'component_id' ], $this->request[ 'rating' ], $this->request[ 'wb_user_id' ], $createdOrLastModifiedBy, $this->request[ 'same_editor_worker' ] ); if ( !issetAndNotEmpty($favouriteEditor[ 'data' ][ 'favourite_editor_id' ]) ) { throw new APICoreException(trans('messages.exception.data_not_saved')); } //If no exception thrown, save id $favouriteEditorId = $favouriteEditor[ 'data' ][ 'favourite_editor_id' ]; //repare array for mig repo & save() if(getMigrationStatus('favourite_editor') && $operation == 'insert') { $this->saveMigrationDataElseThrowException('favourite_editor', $favouriteEditor[ 'data' ][ 'favourite_editor_id' ], 'favourite_editor', null); } // Check if any error while saving $dataToSave = [ 'client_feedback_id' => $clientFeedbackId, 'files' => keyExistsAndissetAndNotEmpty('extra',$filesSaved) ? array_keys($filesSaved['extra']) : null, 'favourite_editor' => $favouriteEditorId ]; //@todo : return standard response // Return final response to the WB. return [ 'data' => $dataToSave, 'operation' => $operation, 'status' => ApiResponse::HTTP_OK, 'error_message' => isset($errorMsgWhileSave) ? $errorMsgWhileSave : null ]; } /** * @param $request * @param $operation * @return array * @author pratik.joshi */ public function convertData($request,$operation) { if( ($request == null || empty($request)) || ($operation == null || empty($operation)) ) { throw new APIValidationException(trans('messages.exception.validation',['reason'=> 'either request or operation param is not provided'])); } //If blank echo ' >> request';echo json_encode($request); echo ' >> operation';echo json_encode($operation); //Normal data conversion $return = $this->basicDataConversion($request, $this->table, $operation); echo ' >> return after basicDC';echo json_encode($return); //Custom data conversion $return[ 'client_code' ] = $request[ 'client_code' ]; $return[ 'component_id' ] = $request[ 'component_id' ]; if (isset( $request[ 'rating' ] ) ) { $return[ 'rating' ] = $request[ 'field_cf_rating_value' ] =$request[ 'rating' ]; } //Add client feedback process status, in insert default it to unread if($operation == config('constants.op_type.INSERT')) { $return[ 'processing_status' ] = config('constants.processing_status.UNREAD'); } else if($operation == config('constants.op_type.UPDATE')) { //@todo : lumen only picks config() in lumen only, explore on how to take it from laravel //if its set and its valid $processing_status_config = array_values(config('app_constants.client_feedback_processing_status')); // Get value from app constant if (isset( $request[ 'processing_status' ] ) && in_array($request['processing_status'],$processing_status_config)) { $return[ 'processing_status' ] = $request[ 'field_cf_status_value' ] = $request[ 'processing_status' ] ; } } //@todo : check for NO if (isset($request[ 'same_editor_worker' ])) { if($request[ 'same_editor_worker' ] == 'no') { $return[ 'wb_user_id' ] = null; } else { $return[ 'wb_user_id' ] = ComponentDetail::getLastWorkerId($request[ 'component_id' ]); } } //Get job title and prepend with CF $return[ 'name' ] = 'CF_'.Component::getComponentTitleById($request[ 'component_id' ]); //@todo check with EOS team for params $dataFieldValues = setDataValues(config('app_constants.data_fields.client_feedback'), $request); // unset which field we are storing in column $return[ 
'data' ] = json_encode($dataFieldValues); echo ' >> return '.__LINE__;echo json_encode($return); echo ' >> request & return '.__LINE__;echo json_encode(array_merge($request, $return)); return array_merge($request, $return); } /** * @param $crmTable * @param $crmId * @param $wbTable * @param $wbTitle * @return mixed * @throws APICoreException * @author pratik.joshi */ public function saveMigrationDataElseThrowException($crmTable, $crmId, $wbTable, $wbTitle) { $dataToSave = Migration::prepareData([ 'crm_table' => $crmTable, 'crm_id' => $crmId, 'whiteboard_table' => $wbTable, 'whiteboard_title' => $wbTitle ]); //Save into migration repo $migrationData = $this->insertOrUpdateData($dataToSave, $this->migrationRepo); if ( !keyExistsAndissetAndNotEmpty('migration_id',$migrationData[ 'data' ]) ) { throw new APICoreException(trans('messages.exception.data_not_saved')); } return $migrationData[ 'data' ]['migration_id']; }}//And test case<?phpuse Api\Core\ClientFeedbackCore;use App\Repositories\Contract\MigrationInterface;use App\Repositories\Contract\ClientFeedbackInterface;use App\Repositories\Contract\FileInterface;use App\Repositories\Contract\MasterInterface;class ClientFeedbackCoreTest extends TestCase{ public $mockClientFeedbackCore; public $requestForConvertData; public $returnBasicDataConversion; public $operation; public $convertedData; public $mockMigrationRepo; public $mockClientFeedbackRepo; public $mockMasterRepo; public $mockFileRepo; public $clientFeedbackCore; public $table; public $saveFailedData; public function setUp() { parent::setUp(); $this->requestForConvertData = [ 'client_code' => 'SHBI', 'component_id' => '4556', 'same_editor_worker' => 'yes', 'created_by' => '83767', 'rating' => 'not-acceptable', 'files' => [ 'field_cf_not_acceptable_four' => [ 0 => [ 'created_by' => '83767', 'status' => '1', 'filename' => 'manuscript_0115.docx', 'filepath' => 'sites/all/files/15-01-17/client_feedback/1484497552_manuscript_011512.docx', 'filemime' => 'application/vnd.openxmlformats-officedocument.wordprocessingml.document', 'filesize' => '116710', 'timestamp' => '1484497552', ], ], ], ]; $this->returnBasicDataConversion = [ 'crm_table' => 'client_feedback', 'active' => true, 'last_modified_date' => '2017-03-30 11:21:23', 'created_date' => '2017-03-30 11:21:23', 'created_by' => '83767', 'last_modified_by' => '83767', ]; $this->convertedData = [ 'client_code' => 'SHBI', 'component_id' => '4556', 'same_editor_worker' => 'yes', 'created_by' => '83767', 'rating' => 'not-acceptable', 'files' => [ 'field_cf_not_acceptable_four' => [ 0 => [ 'created_by' => '83767', 'status' => '1', 'filename' => 'manuscript_0115.docx', 'filepath' => 'sites/all/files/15-01-17/client_feedback/1484497552_manuscript_011512.docx', 'filemime' => 'application/vnd.openxmlformats-officedocument.wordprocessingml.document', 'filesize' => '116710', 'timestamp' => '1484497552', ], ], ], 'field_cf_rating_value' => 'not-acceptable', 'crm_table' => 'client_feedback', 'active' => true, 'last_modified_date' => '2017-03-30 11:21:23', 'created_date' => '2017-03-30 11:21:23', 'last_modified_by' => '83767', 'processing_status' => 'unread', 'wb_user_id' => 1131, 'name' => 'CF_SHBI350', 'data' => 
'{"field_cf_acceptable_one":null,"field_cf_acceptable_two":null,"field_cf_acceptable_four":null,"field_cf_outstanding_one":null,"field_cf_outstanding_two":null,"field_cf_acceptable_three":null,"field_cf_outstanding_three":null,"field_cf_not_acceptable_one":null,"field_cf_not_acceptable_two":null,"field_cf_not_acceptable_three":null,"field_cf_acceptable_same_editor":null,"field_cf_outstanding_same_editor":null}', ]; $this->table = 'client_feedback'; $this->saveFailedData = [ 'status' => 400, 'data' => null, 'operation' => 'insert', 'error_message' => 'data save failed error' ]; //Mocking start $this->mockMigrationRepo = Mockery::mock(MigrationInterface::class); $this->mockClientFeedbackRepo = Mockery::mock(ClientFeedbackInterface::class); $this->mockMasterRepo = Mockery::mock(MasterInterface::class); $this->mockFileRepo = Mockery::mock(FileInterface::class); //Set mock of the Core class $this->mockClientFeedbackCore = Mockery::mock(ClientFeedbackCore::class, [$this->mockMigrationRepo, $this->mockClientFeedbackRepo, $this->mockMasterRepo, $this->mockFileRepo])->makePartial(); //Set expectations $this->mockClientFeedbackRepo ->shouldReceive('getModelName')->andReturn($this->table); //For insert data $this->mockClientFeedbackCore->shouldReceive('convertData') ->with($this->requestForConvertData, 'insert') ->andReturn($this->convertedData); } public function tearDown() { // DO NOT DELETE Mockery::close(); parent::tearDown(); } /** * @test */ public function method_exists() { $methodsToCheck = [ 'init', 'store', 'convertData', ]; foreach ($methodsToCheck as $method) { $this->checkMethodExist($this->mockClientFeedbackCore, $method); } } /** * @test */ public function validate_convert_data_for_insert() { //Mock necessary methods $this->mockClientFeedbackCore->shouldReceive('basicDataConversion') ->with($this->requestForConvertData, 'client_feedback', 'insert') ->andReturn($this->returnBasicDataConversion); ComponentDetail::shouldReceive('getLastWorkerId') ->with($this->requestForConvertData[ 'component_id' ]) ->andReturn(1131); Component::shouldReceive('getComponentTitleById') ->with($this->requestForConvertData[ 'component_id' ]) ->andReturn('SHBI350'); $actual = $this->mockClientFeedbackCore->convertData($this->requestForConvertData, 'insert'); $this->assertEquals($this->convertedData, $actual); } /** * @test */ public function validate_convert_data_without_params() { $errorMessage = ''; try{ $this->mockClientFeedbackCore->convertData(null, null); } catch (Exception $e){ $errorMessage = $e->getMessage(); } $this->assertEquals('API Validation Error: Reason: either request or operation param is not provided', $errorMessage); } /** * @test */ public function validate_store_without_params() { $errorMessage = ''; try{ $this->mockClientFeedbackCore->store(null); } catch (Exception $e){ $errorMessage = $e->getMessage(); } $this->assertEquals('API Validation Error: Reason: request param is not provided', $errorMessage); } /** * @test */ public function validate_store_client_feedback_save_fail() { $errorMessage = '';/* $this->mockClientFeedbackCore->shouldReceive('convertData') ->with($this->requestForConvertData, 'insert') ->andReturn($this->convertedData);*/ //For insert, mock separately //@todo : with() attribute does not work here : ->with($this->convertedData,$this->mockClientFeedbackRepo) $this->mockClientFeedbackCore->shouldReceive('insertOrUpdateData') ->andReturn($this->saveFailedData); try { $this->mockClientFeedbackCore->store($this->convertedData); } catch (Exception $e) { $errorMessage = $e->getMessage(); }
$this->assertEquals('MigrationError: Data not saved', $errorMessage); } public function validate_store_migration_save_fail() { //saveMigrationDataElseThrowException $this->mockClientFeedbackCore->shouldReceive('saveMigrationDataElseThrowException') ->with('crmTable', 123, 'wbTable', 'wbTitle') ->andReturn($this->saveFailedData); try { $this->mockClientFeedbackCore->store($this->convertedData); } catch (Exception $e) { $errorMessage = $e->getMessage(); } $this->assertEquals('MigrationError: Data not saved', $errorMessage); } public function validate_store_file_save_fail() { } public function validate_store_favourite_editor_save_fail() { } public function validate_store_proper_save() { }} Please help: I am exceeding deadlines because writing these test cases takes so long. Thanks. Edit: @gnat, it's not a duplicate; I mean it's taking me too much time to write test cases. My question is not about how much time to spend, but about how to write test cases faster. | Writing unit test cases is taking time, any advice? | php;unit testing;tdd;mocking;phpunit | It is not unusual for writing good tests to take considerable time; strictly, you should be able to write your tests just from the specification and interface, which is what is done in some formal test environments, and still get 100% coverage. One big factor is whether the developers are used to writing testable code. In businesses like aviation there are normally specified limits on allowable code complexity (a McCabe complexity below 10 is the usual guideline, with a requirement to justify anything over that and a hard limit of 20 being what I am used to having to stick to). If you look at a section of code, there are some rules of thumb for how many tests it will take to fully test it: 1 for a simple run-through; then parameter checks: +1 for each value of an enumerated parameter (in some languages +1 for invalid enumerated values), +6 for each float parameter (0.0, >0.0, <0.0, +INF, -INF, NaN), +5 for each int parameter (0, >0, <0, MAXINT, -MAXINT), etc. Then for each decision: +1 for each case in a switch, +2 for each of <, >, ==, !=, and +3 for each of <=, >=. This adds up fast. Some test frameworks have the ability to produce stubs or mocks automatically, which gets you off to a good start. As you progress you will find that you need to do less research, you build up a library of mocks/stubs/snippets that you can reuse, and you become more familiar with the tools; you might even be able to educate the developers on how to write testable, maintainable code.
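To make the answer's counting heuristic concrete, here is a worked count for a small hypothetical function; the function and every name in it are invented purely for illustration and do not come from the question's code.

<?php
// Hypothetical function, used only to illustrate the rule-of-thumb test count.
function shippingCost(int $items, float $weight, string $tier): float
{
    switch ($tier) {                                  // 3 cases => +3 tests
        case 'standard': return $items * 1.0 + $weight * 0.5;
        case 'express':  return $items * 2.0 + $weight * 1.0;
        default:         return 0.0;
    }
}
// Rule-of-thumb count for this one function:
//   1  simple run-through
// + 5  for the int parameter   (0, >0, <0, MAXINT, -MAXINT)
// + 6  for the float parameter (0.0, >0.0, <0.0, +INF, -INF, NaN)
// + 3  for the switch cases
// = 15 test cases; this is why test suites grow much faster than the code under test.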
_codereview.111032 | The task is to fill an array with the numbers 0 through n, where n is given as input by the user. For example: if the user gives 10 as input, the generated array should be arr[0] = 0, arr[1] = 1, arr[2] = 2, arr[3] = 3, ..., arr[10] = 10. The code below works fine, but I am using a loop to assign these numbers to the array, which can cost a lot of run time if the user gives an input like 10^6. puts "enter the number of times you want to test" times = gets.chomp.to_i 1.upto times do |i| puts "enter the total number of elements in the array" no = gets.chomp.to_i puts "total number of elements are #{no + 1}" arr = [] sum = 0 0.upto no do |i| arr[i] = i end arr.each_index { |index| print "#{index} " } sum = arr.reduce(:+) puts "#{sum}" end So, how should I optimize this code for better performance? | Assigning numbers to an array without using a loop | performance;ruby;array | null
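Since this record carries no accepted answer, here is a minimal sketch of the usual optimization, added for illustration (assumes Ruby 1.9+): a range expands into the array without an explicit Ruby-level assignment loop, and the sum of 0..n has the closed form n*(n+1)/2, so no O(n) reduce is needed.

# Sketch: build the 0..n array and its sum without an assignment loop.
n = gets.chomp.to_i
arr = (0..n).to_a             # same contents as the original upto loop
puts arr.join(' ')
puts n * (n + 1) / 2          # Gauss's formula instead of arr.reduce(:+)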
_codereview.29680 | I need to refactor the following class: public class Message{ public Guid ID { get; set; } public string MessageIn { get; set; } public string MessageOut { get; set; } public int StatusCode { get; set; } //EDIT: could be changed during message lifecycle public bool IsDeletable { get { switch (this.StatusCode) { case 12: case 13: case 22: case 120: return true; default: return false; } } } public bool IsEditable { get { switch (this.StatusCode) { case 12: case 13: case 22: case 120: return true; default: return false; } } } public string Message { get { switch (this.StatusCode) { case 11: case 110: return this.MessageIn; case 12: case 13: case 22: case 120: return this.MessageOut; default: return string.Empty; } } }} I would like to remove these business rules (IsDeletable, IsEditable), and I would like to remove these switch statements at the same time. I am not sure if it's relevant, but I am mapping the entity to a database table through Entity Framework. EDIT: One more problem I have is that the fields MessageIn and MessageOut depend on StatusCode; one of them is always populated. I could create a new property, but the switch statement is still there: public string Message{ get { switch (this.StatusCode) { case 10: case 12: case 13: case : return this.MessageIn; default: return this.MessageOut; } } set { // switch again } } Best regards | How to refactor C# class to meet SOLID principles | c#;object oriented;design patterns | "I would like to remove these switch statements at the same time": [Flags] public enum StatusCode{ codeX = 1, codeY = 2, codeZ = 4, codeA = 8, editable = codeX | codeZ, deleteable = codeY | codeZ } Key points: use the Flags attribute; enum values are powers of 2, NOT multiples of 2; bit-wise AND and OR provide the magic! Edit, addressing the comments: public static class StatusCodes { private static Dictionary<StatusCode, int> values; private static Dictionary<int,StatusCode> keys; static StatusCodes() { values = new Dictionary<StatusCode, int> { {StatusCode.A, 10}, {StatusCode.B, 20} // and so on }; keys = new Dictionary<int, StatusCode> { {10, StatusCode.A}, {20, StatusCode.B} }; } public static int GetValue(StatusCode theStatusCodeKey) {} // don't forget error trapping! public static StatusCode GetKey(int theIntValue) {} // ditto public static bool Editable(StatusCode thisCode) { return (StatusCode.editable & thisCode) == thisCode; } public static bool Editable(int thisValue) { return Editable( GetKey(thisValue) ); }} The definition of the codes is all in one place - that's SOLID (D = DRY, don't repeat yourself). The definitions are in their own class - that's SOLID (S = single responsibility). editable and deleteable can be removed from Message - that's very SOLID (S = single responsibility). We know what all those integers mean - that's... well, just good programming. The StatusCodes are available anywhere and everywhere in the application - that's SOLID, an attribute of being DRY. I'm not sure what "status code values might not be fixed" in the comments means.
Surely the set of status codes is finite. Edit: define editability in StatusCodes; use the above to express that Message is editable; address the issue of exposing StatusCode.editable as if it were a valid code; adhere to Single Responsibility; adhere to the DRY principle... public static class StatusCodes{ private static Dictionary<StatusCode, int> values; private static Dictionary<int,StatusCode> keys; static StatusCodes() { values = new Dictionary<StatusCode, int> { {StatusCode.A, 10}, {StatusCode.B, 20}, {StatusCode.C, 30}, {StatusCode.D, 40} // and so on }; keys = new Dictionary<int, StatusCode> { {10, StatusCode.A}, {20, StatusCode.B}, {30, StatusCode.C}, {40, StatusCode.D} }; } [Flags] enum Fungability { // assumes the StatusCode values are distinct powers of two Editable = (int)StatusCode.A | (int)StatusCode.B, Deleteable = (int)StatusCode.B | (int)StatusCode.D } public static int GetValue( StatusCode theStatusCodeKey ) { int retVal; values.TryGetValue( theStatusCodeKey, out retVal ); return retVal; } // don't forget error trapping! public static StatusCode GetKey( int theIntValue ) { StatusCode retVal; keys.TryGetValue( theIntValue, out retVal ); return retVal; } // ditto public static bool Editable( StatusCode thisCode ) { return ( (StatusCode)Fungability.Editable & thisCode ) == thisCode; } public static bool Editable( int thisValue ) { return Editable( GetKey( thisValue ) ); } public static bool Deleteable( StatusCode thisCode ) { return ( (StatusCode)Fungability.Deleteable & thisCode ) == thisCode; } // referenced by Message below }public class Message{ public StatusCode myStatus; public Message( int statusCode = 20 ) { myStatus = StatusCodes.GetKey(statusCode); } public Message( StatusCode statusCode = StatusCode.A ) { myStatus = statusCode; } public bool Editable { get { return StatusCodes.Editable( myStatus ); } } public bool Deleteable { get { return StatusCodes.Deleteable( myStatus ); } }} Take away: structure data in an OO way; expose the data adhering to the Single Responsibility principle; you get DRY as a side effect; structure yields simplicity, coherence, clarity. Editable is implemented with only one line of code!
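A minimal usage sketch of the flags pattern above, added for illustration; it assumes the answer's hypothetical StatusCode members A-D are declared as distinct powers of two, which the bit masks require.

// Demo of the flags-based rules; Message is the answer's class.
[Flags]
public enum StatusCode { A = 1, B = 2, C = 4, D = 8 }

public static class FlagsDemo
{
    public static void Main()
    {
        var message = new Message(StatusCode.B);
        // B lies inside both masks: Editable = A|B and Deleteable = B|D.
        System.Console.WriteLine(message.Editable);   // True
        System.Console.WriteLine(message.Deleteable); // True
    }
}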
_cstheory.31788 | P/poly is the class of decision problems solvable by a family of polynomial-size Boolean circuits. It can alternatively be defined as a polynomial-time Turing machine that receives an advice string that is size polynomial in n and that is based solely on the size of n.mP/poly is the class of decision problems solvable by a family of polynomial-size monotone Boolean circuits, but is there a natural alternative definition of mP/poly in terms of a polynomial-time Turing machine? | What is an equivalent definition of mP/poly in terms of a Turing machine? | cc.complexity theory;complexity classes;circuit complexity;polynomial time;monotone | There is a notion of a monotone non-deterministic and, more generally, alternating Turing machine in the paper Monotone Complexity by Grigni and Sipser. Since polynomial time is the same as alternating logarithmic space, a machine characterization of uniform $\mathsf{mP}$ is the monotone alternating logspace Turing machine. Providing such a machine with polynomial advice will then give a machine definition of $\mathsf{mP/poly}$. |
_cstheory.32590 | I am wondering whether there is an efficient algorithm to compute the basis of the set of vertices of a polytope. Formally, INPUT: a polytope$$\Xi=\{(\vec{a}_1\vec{x}+\vec{b}_1, \cdots, \vec{a}_m\vec{x}+\vec{b}_m)\mid C\vec{x}\leq d\}$$and a subspace $span(E)$ where $E=\{e_1, \cdots, e_{\ell}\}$ is a given set of vectorsOUTPUT: a basis of the linear subspace spanned by$$V(\Xi)\setminus span(E),$$ where $V(\Xi)$ denotes the set of vertices of $\Xi$. (Note that here $\Xi$ is given as an affine mapping of a polytope, which might complicates the problem a little bit.)One can solve the problem in a straightforward approach, but I am asking for an ideally polynomial-time algorithm, or any evidence that this is not possible (e.g., NP-hardness). | Compute basis of vertex set of polytope | ds.algorithms;linear algebra;computational geometry;convex geometry | null |
_unix.74410 | how can I lop through buffers in quickfix list :copen and make some actions with it.Or any alternative way to put those files to args list and I can use argsdo. | Vim - loop through files in cope | vim | I've found a vim plugin that suits my need completely which is vim-qargs. The idea behind is almost the same with @Ingo. |
_cstheory.10462 | I'm wondering if anyone knows of a formalization (even limited) of any part of finite model theory in any of the major proof assistants. (I'm most familiar with Coq, but Isabelle, Agda, etc. would acceptable.)Especially of interest would be any of the results in descriptive complexity. | Proof assistant formalizations of Finite Model Theory | lo.logic;descriptive complexity;finite model theory;proof assistants | null |