Dataset schema: id (string, 5-27 chars), question (string, 19-69.9k chars), title (string, 1-150 chars), tags (string, 1-118 chars), accepted_answer (string, 4-29.9k chars, may be null).
_codereview.35247
I'm wondering what people think about Dependency Injection vs Service Locator patterns. Specifically I'm using Prism with MEF, and the MVVM pattern.

So I have a service which I export. I also have a class which doesn't export its class type, as I don't need that to be registered with MEF. This class is actually a View-Model which is assigned to a hierarchical data template inside a TreeView. However, the view-model needs to import an interface registered with MEF. Now I have three possible ways to do this:

1. Set an [Import] on the interface and use ComposeParts in the constructor to inject the dependency.
2. Use the ServiceLocator.Current pattern to get the instance of the service.
3. Pass the service interface in on the constructor, which is a kind of manual dependency injection.

Here is some example code to demonstrate.

    public interface IFooService
    {
    }

    [Export(typeof(IFooService))]
    [PartCreationPolicy(CreationPolicy.Shared)]
    class FooService : IFooService
    {
    }

Then the three possible implementations of the View-Model are:

    public class TreeItemVM_MefInjection
    {
        [Import]
        public IFooService FooService { get; set; }

        public TreeItemVM_MefInjection()
        {
            var catalog = new AssemblyCatalog(System.Reflection.Assembly.GetExecutingAssembly());
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);
        }
    }

    public class TreeItemVM_ServiceLocator
    {
        [Import]
        public IFooService FooService { get; set; }

        public TreeItemVM_ServiceLocator()
        {
            FooService = ServiceLocator.Current.GetInstance<IFooService>();
        }
    }

    public class TreeItemVM_ManualInjection
    {
        public IFooService FooService { get; set; }

        public TreeItemVM_ManualInjection(IFooService fooService)
        {
            FooService = fooService;
        }
    }

Each of the tree-view items actually has an ObservableCollection, as the tree is hierarchical. Each ViewModel can create its own children, based on the model data it uses (this is not shown in the above examples just to keep them simple).

So my issues with each of these are:

1. It seems a lot of code to write to get automatic injection, and I'm worried about the performance of creating a catalog and container temporarily. Is it OK to do it like this?
2. The service locator seems easier, but I've read that the service locator is an anti-pattern. Should I be concerned about this?
3. The last one will give better performance, as each instance just takes the IFooService interface on the constructor and can pass it down to its children when they are instantiated. However, if I want to add more dependencies later on I need to change the constructor, so maybe automatic injection or a service locator would be better.

So what would people say is the best method? Is there a defined best-practices method to follow? Or are all the methods valid, and it is up to the company coding standards to define the pattern to use?

Obviously the TreeItems are just normally instanced with new. I don't want these to be exported to MEF, as that is overkill and nothing outside of the class library needs to know about them. They are just normal classes that need to have MEF dependency injection or find MEF-registered interfaces.

Does anyone have an opinion on the most desirable solution and any gotchas I should be aware of? I'm sure there are other possible solutions to this as well. Any info would be appreciated.
Looking for advice - Dependency Injection over Service Locator in Mef
c#;dependency injection
null
_softwareengineering.344619
I have a running service whose logs are written in real time to a log aggregator (this includes exception logging at runtime). The service also collects stats on the data processing it performs and sends those to a dashboard. In order to collect stats on exceptions and make them available for analysis, should that be done:

1. At the level of the log aggregator, which is seeing the runtime exceptions?
2. At the level of my running service, since it is already collecting processing stats?

What are best practices?
Stats on exceptions: log aggregator or processing application
performance;exceptions;logging
null
_unix.197339
I have two files.

file1:

    Dave 734.838.9800
    Bob 313.123.4567
    Carol 248.344.5576
    Mary 313.449.1390
    Ted 248.496.2204
    Alice 616.556.4458

file2:

    Bob Tuesday
    Carol Monday
    Ted Sunday
    Alice Wednesday
    Dave Thursday
    Mary Saturday

I merged the two files. file3 should look like this:

    Name   On-Call    Phone
    Carol  MONDAY     248.344.5576
    Bob    TUESDAY    313.123.4567
    Alice  WEDNESDAY  616.556.4458
    Dave   THURSDAY   734.838.9800
    Nobody FRIDAY     634.296.3356
    Mary   SATURDAY   313.449.1390
    Ted    SUNDAY     248.496.2204

But I cannot get the weekdays to be in order. How do I go about doing that?
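One possible approach, as a hedged sketch (not from the original post; it assumes the merged file3 already holds Name/DAY/Phone columns): build a numeric sort key from the day name with awk, sort on it, then strip it again.

    # Hypothetical sketch: prefix each line with the day's index 1..7,
    # sort numerically on that index, then drop the index column.
    awk 'BEGIN {
           split("MONDAY TUESDAY WEDNESDAY THURSDAY FRIDAY SATURDAY SUNDAY", d, " ")
           for (i in d) idx[d[i]] = i
         }
         { print idx[toupper($2)], $0 }' file3 | sort -n | cut -d' ' -f2-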
how to sort by the day of the week?
shell;text processing;date;sort
null
_unix.366048
I am trying to install Xenix 386 and/or SCO V Unix in a VM for historical/research/reviving-old-times/curiosity purposes. I have already tried to download a couple of media installation images from here, and tried to boot them several times to install the OS, still without much success up until now.

I have already tried with VMware Fusion on OS X:

- selecting a 32-bit VM
- disabling sound cards and USB, to limit the potential interference of unknown hardware with those OSes
- giving it just a couple megabytes of RAM
- limiting the virtual disk to the known limit of < 250 MB
- testing IDE and SCSI disk emulation.

Both in Xenix and SCO V, the installation diskette (N1) seems to boot; however, either the hard disk is not recognised, or the installation hangs with the message:

    Setting up disk environment

What to do?
Xenix / SCO V running in contemporary machines as VMs
osx;virtual machine;sco;vmware fusion;xenix
I encountered a very interesting couple of articles about a bug, post1 and post2, in the installation/disk driver that explain why it did not run on many hardware platforms over the years. The link, besides explaining the bug, also points out that VirtualBox seems to emulate the behaviour and is able to boot those operating systems.

So I installed VirtualBox. While it did not recognise an emulated SCSI disk, it recognised an emulated IDE disk < 250 MB and got indeed into the installation phase:

    Setting up installation environment...
    %disk 0x1F0-01F7 14 - type=W0 unit=0 cyls=734 hds=16 secs=31
    Welcome to the SCO Unix installation.
    Installation media used will be Compact Disc (CD-ROM)
    Hit return to continue...

So I grabbed QEMU, popped N1 in and booted it up. Unfortunately, the system would hang almost immediately after. Some testing revealed that the same issue existed on Bochs. PCjs got a bit further, but kernel panicked nearly immediately. Somewhat surprising to me, though, was that VirtualBox not only booted, it got to the first step of the installer.

The OS is extremely picky about the hardware and BIOS and won't boot at all in many virtualizers. It also contains an interesting bug in the AT disk driver (called wd1010 in this XENIX kernel version) which causes the system to hang if the controller, or more likely an IDE disk, responds too fast to the Set Drive Parameters command.

P.S. There seem to be hints that people managed to hack/patch the bug out. There is no documentation about that, and the process would be specific to the hacked versions.
_softwareengineering.355357
I have to design an application where there are around 5K structured base text files (file.txt) with data and format as below. The primary key is OrgId + ItemId.

    OgId|^|ItemId|^|segmentId|^|Sequence|^|Action|!|
    4295877341|^|136|^|4|^|1|^|I|!|
    4295877346|^|136|^|4|^|1|^|I|!|
    4295877341|^|138|^|2|^|1|^|I|!|
    4295877341|^|141|^|4|^|1|^|I|!|
    4295877341|^|143|^|2|^|1|^|I|!|
    4295877341|^|145|^|14|^|1|^|I|!|

I have an incremental update file1.txt which will have the same primary key information with updated columns (the number of columns may differ from the base file format); if the primary key info is not found in the base file then it is treated as a new entry.

Format 1 for Insert:

    OgId|^|ItemId|^|segmentId
    5295877341|^|136|^|4|^|1|^|I|!|
    5295877341|^|141|^|2|^|1|^|I|!|

Format 2 for Update:

    OgId|^|ItemId|^|segmentId|^|Sequence
    4295877341|^|136|^|5|^|2

    OgId|^|ItemId|^|segmentId
    4295877346|^|136|^|2

Format 3 for Delete:

    OgId|^|ItemId|^|segmentId|^|Sequence
    4295877341|^|145|^|14|^|1

The final output is like this:

    OgId|^|ItemId|^|segmentId|^|Sequence|^|Action|!|
    5295877341|^|136|^|4|^|1|^|I|!|
    5295877341|^|141|^|2|^|1|^|I|!|
    4295877341|^|136|^|5|^|2|^|I|!|
    4295877346|^|136|^|2|^|1|^|I|!|
    4295877341|^|138|^|2|^|1|^|I|!|
    4295877341|^|141|^|4|^|1|^|I|!|
    4295877341|^|143|^|2|^|1|^|I|!|
    4295877341|^|145|^||^||^|I|!|

I want to use AWS or Hadoop/big data, but I cannot use HBase. The size of the base file varies from 5 KB to 50 GB, and the size of the incremental file varies from 10 MB to 2 GB. There is a catch: incremental insert/update/delete files have to be processed in the same order as they arrive.
Non HBase solution for huge data that has update and delete in sequential manner
nosql;big data;aws;hadoop
null
_webapps.19353
Up until last week, Google used to place a blue arrow on the first search result. Using the arrow keys, one could select different results. Finally, pressing enter would enter into the selected result. For the past few days, these keyboard shortcuts are no longer there. How do I re-enable them? I search on Google every 10 minutes and can't afford to waste 2 seconds each time pointing my mouse at the desired link.
Can't use enter key shortcut on Google search results anymore
google
This seems to have changed recently. From Google help:

    Shortcuts for navigating through results
    Enter then Tab will select the first result. See the little arrow appear next to the result you've highlighted. Press Enter to open the first webpage or use the up arrow and down arrow to select other results.

So basically, you enter your query, press Enter, and then use the Tab key to show the arrows.
_unix.136266
I have built a home-grown Linux distribution, and I can make the complete disk image file as a non-root user with one exception: installing the boot loader. I'm using syslinux (actually extlinux), and to install it I have to loop-back mount the boot partition, which requires root/sudo privileges. The commands are run from a makefile, and the variable names should clearly indicate what to replace them with.

    sudo losetup -o $(BOOT_FS_PARTITION_OFFSET) $(LOOP_DEVICE) $(IMAGE_FILE_NAME)
    sudo mount $(LOOP_DEVICE) $(LOOP_MOUNT_POINT)
    sudo $(EXTLINUX) -S $(DISK_SECTORS) -H $(DISK_HEADS) -i $(LOOP_MOUNT_POINT)
    sudo umount $(LOOP_MOUNT_POINT)
    sudo losetup -d $(LOOP_DEVICE)

Is there a way to write syslinux or extlinux to the disk image file without requiring root privileges?
How do I install syslinux/extlinux to a disk image file without requiring root privileges
system installation;not root user;syslinux;disk image
This is possible for syslinux:

    syslinux ~/floppy.ima

The syslinux installer contains enough magic to be run on an unmounted filesystem. (In fact, it is designed to do that.) The extlinux installer expects to be run on a mounted filesystem, though.

It is almost certainly possible to split off the extlinux installer into a part that copies the files (something like mtools for FAT, which is rare but appears to exist, although one could just integrate them directly with genext2fs), and a part that installs the boot sector (I might be able to cobble this together).

I did something like this for GRUB 2, which installs into the space between the MBR and the first partition, for Grml; this was actually easier to do because GRUB, unlike SYSLINUX, does not require as much from the boot sector. It basically depends on your broader requirements. If the above part about SYSLINUX does not help you, contact me, so we can work something out.
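For a FAT boot partition embedded in a full-disk image, a hedged sketch of the rootless route (not from the original answer; the 1 MiB partition offset is an assumption, and it assumes a syslinux build whose installer supports -i/--install and -t/--offset, plus mtools for file copies):

    # Install the boot code straight into the image; -t gives the byte
    # offset of the FAT partition inside disk.img, so no mount is needed.
    syslinux -i -t $((2048 * 512)) disk.img

    # Copy config and kernel into that partition with mtools, again
    # without mounting; image@@offset addresses the embedded filesystem.
    mcopy -i disk.img@@$((2048 * 512)) syslinux.cfg vmlinuz ::/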
_webmaster.7364
Sorry for my bad English. If renewing a cz.cc free domain is free, then what does renewing the domain after two years mean? (Why should we renew a free service when it is free?)
What does renewing a domain mean for a free service?
domains;free
null
_webmaster.24612
I'm building a website at the moment that has several complicated background images and repeats. The file sizes for each of the images are quite large (I've compressed them down as much as possible!), is there an online tool that I can use to measure the filesize of a page?
How can I measure the size of a webpage?
images;compression;file size
Download Firebug and install Google's Page Speed plugin and/or Yahoo!'s YSlow plugin; both of these will help you optimise around the background image. Also read Yahoo's Best Practices for Speeding Up Your Website.
_unix.64025
I'm trying to piece together the names of the people who contributed to BSD Unix, according to the contents of the SCCS logs. (This is the version control system used at the time.) A number of names appear in a list created by Jonathan Gray, but 72 are still missing. To keep this process organized, I will create a community wiki answer with the list of the unknown contributors. Please add the names beside each identifier.
Who are these BSD Unix contributors?
history;bsd
null
_webmaster.28137
I'm looking for something like Smush.it or PunyPNG that works offline, preferably via command line interface that does gifs, jpegs and pngs.Any suggestions?
Offline lossless image shrinking
looking for a script;images;compression
PNGCrush is the first that comes to mind. Trimage is a bit more comprehensive as far as toolset, and has a GUI also.
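Typical invocations, as a hedged sketch (flags from each tool's own documentation; jpegtran and gifsicle are additional tools the answer does not name, included to cover JPEG and GIF):

    # PNG: write a losslessly optimized copy, trying every strategy
    pngcrush -brute input.png output.png

    # JPEG: lossless Huffman-table optimization, stripping metadata
    jpegtran -optimize -copy none input.jpg > output.jpg

    # GIF: gifsicle at its highest optimization level
    gifsicle -O3 input.gif -o output.gif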
_cs.29230
How do we calculate the answer to the following recurrence?
$$T(n)=4T\left(\frac{\sqrt{n}}{3}\right)+ \log^2n\,.$$
Any nice solution would be highly appreciated.

My solution is to substitute $n=3^m$, giving
$$T(3^m)=4T\left(\frac{3^{m/2}}{3}\right)+\log^2 3^m\,,$$
so with $F(m)=T(3^m)$ we get
$$F(m)=4F\left(\frac{m}{2}-1\right)+m^2=O(m^2\log m)=O(\log^2 n\,\log\log n)\,.$$
Solve Recurrence Equation Problem
recurrence relation
null
_hardwarecs.6346
What motherboard from the Asus Z170 chipset line offers the largest set of features? I did some research and it looks like the Z170-A is the one, but I could be wrong. I'd like to have a motherboard that offers the most features even if I won't necessarily need them. I just like to have the most possible options.

Here are a few things I would hope to include, but don't let that affect your answer:

- USB 3.1, type A and C
- Possibility to do SLI in the future

Some things that I plan to do with the system are:

- Everyday usage
- Gaming
- Web development and running a WAMP server for development
- Media server using Plex
- Photoshop

I already plan on buying a GTX 1070 graphics card so I can play games on the highest graphics settings possible.
Asus motherboard in the $50-$150 range
gaming;motherboard
I would probably go with this one: ASUS Z170-E

These points jump out, at least at me:

- DDR4 memory overclocked to 3466 MHz (max compatible speed)
- Onboard USB 3.1 Gen 2 for 10 Gbit/s data transfer speeds
- Lightning-fast M.2 with PCIe 3.0 x4 interface
_unix.241636
Why does the following not output the hello line?

    watch bash -c 'echo hello'

As this one does?

    watch 'echo hello'

I expected echo to write to bash's output directly and this to be read by watch and formatted to the terminal. Does bash -c not use stdout?
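For what it's worth, a hedged sketch of the usual fix: watch joins its arguments with spaces and hands the result to sh -c, so the inner quotes are gone by the time the command runs and hello becomes $0 of the inner bash. Quoting the whole command keeps it intact:

    # watch bash -c 'echo hello'   -> inner bash runs `echo` with $0=hello
    # Quote the entire command so watch passes it through unchanged:
    watch "bash -c 'echo hello'"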
Watch not showing subshell output
watch;streams
null
_unix.110677
I know someone who'd really like to be able to type with only the left hand, so I had the idea of writing a layout which switches the sides of the keyboard when the caps lock key is pressed. For example, in the QWERTY layout, the qwerty keys would be remapped to uiop[].

I wrote the following xmodmaprc (caps lock line at the bottom):

    keycode 24 = q Q u U
    keycode 25 = w W i I
    keycode 26 = e E o O
    keycode 27 = r R p P
    keycode 28 = t T bracketleft braceleft
    keycode 29 = y Y bracketright braceright
    keycode 30 = u U q Q
    keycode 31 = i I w W
    keycode 32 = o O e E
    keycode 33 = p P r R
    keycode 34 = bracketleft braceleft t T
    keycode 35 = bracketright braceright y Y
    keycode 38 = a A j J
    keycode 39 = s S k K
    keycode 40 = d D l L
    keycode 41 = f F semicolon colon
    keycode 42 = g G apostrophe quotedbl
    keycode 43 = h H Return Return
    keycode 44 = j J a A
    keycode 45 = k K s S
    keycode 46 = l L d D
    keycode 47 = semicolon colon f F
    keycode 48 = apostrophe quotedbl g G
    keycode 36 = Return Return h H
    keycode 52 = z Z n N
    keycode 53 = x X m M
    keycode 54 = c C comma less
    keycode 55 = v V period greater
    keycode 56 = b B slash question
    keycode 57 = n N z Z
    keycode 58 = m M x X
    keycode 59 = comma less c C
    keycode 60 = period greater v V
    keycode 61 = slash question b B
    keysym Caps_Lock = Mode_switch

However, this only works when holding the Caps Lock key, and doesn't toggle the mode by tapping it. Am I missing something simple, or am I trying to solve this issue the wrong way?
Writing single-handed layouts for X
xorg;x11;keyboard;keyboard layout
null
_softwareengineering.36513
For example, the templates provided on the Open Source Initiative website for the 3-clause BSD License, and the MIT License both include an all-caps warranty disclaimer, though the rest of the license is written with normal capitalisation.Is there some genuine reason for this? Or is it just a tradition to make the warranty disclaimer harder to read?
Why is the warranty disclaimer section of a licence usually (always?) shouted?
licensing
Most legal jurisdictions in the US mandate that warranty information in a contract must be conspicuous. Since source code is plain text, I suppose it was decided at some point that the best way to make text conspicuous was to capitalize it, and the precedent stuck.

Simple answer: it's required by law.
_codereview.4865
This code is part of one of the methods. I'm pretty sure it is really bad programming. The third if statement is supposed to be called if all 4 variables were set. Is another method needed? How would you write this piece?

    int width = img.Width;
    int height = img.Height;
    int thumbWidth = 0, thumbHeight = 0;
    int preWidth = 0, preHeight = 0;

    //if Landscape
    if (width > height && width >= 471)
    {
        thumbWidth = 120;
        thumbHeight = ((120 * height) / width);
        preWidth = 471;
        preHeight = ((471 * height) / width);
    }
    //if portrait
    else if (height > width && height >= 353)
    {
        thumbHeight = 120;
        thumbWidth = ((120 * width) / height);
        preHeight = 353;
        preWidth = ((353 * width) / height);
    }

    //If values were set
    if (thumbWidth != 0 && thumbHeight != 0 && preWidth != 0 && preHeight != 0)
    {
    }
    else
    {
        //do other stuff
    }
Checking for minimum image dimensions
c#;image
The code is not terrible, but I can make a few observations that we can use to improve the code:

1. It will be useful to know elsewhere (presentation time, perhaps?) whether or not this is portrait or landscape. Let's build that now so we can keep the information somewhere useful.
2. The only way any of those values will ever be 0 (not set) is if height or width is zero, or the image is too small. That's probably an invalid state to begin with, and it should be the responsibility of the code that calls this to deal with it. So let's check for that up front.
3. I'm concerned about your use of magic numbers: 471, 353, and 120. I'd love to see those factored out to variables.

With that in mind, here's an idea:

    //there are better places to define these, but I'll leave them here for convenience in this example
    int minLandscapeWidth = 471, minPortraitHeight = 353, thumbLongSide = 120;

    int width = img.Width;
    int height = img.Height;

    //you may not need these checks, depending what your img object is and how you use it
    if (width <= 0)
        throw new InvalidOperationException("img.Width should be greater than zero");
    if (height <= 0)
        throw new InvalidOperationException("img.Height should be greater than zero");

    bool landscape = (width > height);
    if ((landscape && width < minLandscapeWidth) || (!landscape && height < minPortraitHeight))
        throw new InvalidOperationException("the image is too small");

    int thumbWidth, thumbHeight, preWidth, preHeight;
    if (landscape)
    {
        thumbWidth = thumbLongSide;
        thumbHeight = ((thumbLongSide * height) / width);
        preWidth = minLandscapeWidth;
        preHeight = ((minLandscapeWidth * height) / width);
    }
    else
    {
        thumbHeight = thumbLongSide;
        thumbWidth = ((thumbLongSide * width) / height);
        preHeight = minPortraitHeight;
        preWidth = ((minPortraitHeight * width) / height);
    }
    //values are now set
_codereview.62856
This code for my blog checks to see if tags are in the params hash. If they are, then only posts that are tagged will be paginated. Otherwise, all of the posts are paginated.

    class PostsController < ApplicationController
      def index
        if params[:tag]
          @posts = Post.tagged_with(params[:tag]).paginate(page: params[:page])
        else
          @posts = Post.all.paginate(page: params[:page])
        end
      end
    end

I feel like this checking of params shouldn't be the concern of the controller, but of some other model like PostParameterChecker. How do you feel about this? Where does this code actually belong?
Rails controller method that conditionally filters results
ruby;ruby on rails;mvc;controller
I'd say that this does belong in the controller. It's the controller's job to handle requests (including params) and prepare the view, which is what your code does.

That said, I wouldn't be opposed to tokland's suggestion of moving the logic to the scope. But personally I would rather choose whether or not to call a scope method at all, than let the scope handle nils by basically doing nothing. It's largely a matter of opinion, though. (You could even move the logic in the other direction, so to speak, and add a /posts/tagged route and a wholly separate action.)

Anyway, you can tweak the action a bit:

    def index
      @posts = params[:tag] ? Post.tagged_with(params[:tag]) : Post.all
      @posts = @posts.paginate(page: params[:page])
    end

or, avoiding the variable re-assignment (which is also avoidable by just using two different variables, of course):

    def index
      @posts = if params[:tag]
        Post.tagged_with(params[:tag])
      else
        Post.all
      end.paginate(page: params[:page])
    end

It could still be a ternary, of course.

In general though, "skinny controller, fat model" is a good rule of thumb. But for things like this I think the controller still has a role to play.
_datascience.948
I have found a number of libraries and tools for data science in Scala. I would like to know which ones have more adoption, which are gaining adoption at a faster pace, and to what extent this is the case. Basically, which one should I bet on (if any at this point)?

Some of the tools I've found are (in no particular order):

- Scalding
- Breeze
- Spark
- Saddle
- H2O
- Spire
- Mahout
- Hadoop
- MongoDB

If I need to be more specific to make the question answerable: I'm not particularly interested in clusters and Big Data at this moment, but I'm interested in sizable data (up to 100 GB) for information integration and predictive analytics.
Any clear winner for Data Science in Scala?
tools
null
_softwareengineering.326207
MyBase is forcing implementation of method f() in all children. This can be achieved either by using abc.ABCMeta to make f() an abstract method:

    import abc

    class MyBase(metaclass=abc.ABCMeta):
        @abc.abstractmethod
        def f(self, x):
            pass

    class Child1(MyBase):
        def f(self, x):
            print(x)

    class Child2(MyBase):
        pass

    Child1().f(4)  # prints 4
    Child2().f(4)  # TypeError: Can't instantiate abstract class Child2 with abstract methods f
    MyBase()       # TypeError: Can't instantiate abstract class MyBase with abstract methods f

or alternatively, by raising NotImplementedError:

    class MyBase():
        def f(self, x):
            raise NotImplementedError('Abstract method not implemented.')

    class Child1(MyBase):
        def f(self, x):
            print(x)

    class Child2(MyBase):
        pass

    Child1().f(4)  # prints 4
    Child2().f(4)  # raises implementation error
    MyBase()       # does NOT raise error

Using an abstract class instead of raising NotImplementedError disallows, for example, accidental instantiation of MyBase(). Are there any other benefits (or drawbacks) of using an abstract class over NotImplementedError?
Using NotImplementedError instead of abstract classes
python;abstract class;python 3.x
null
_softwareengineering.275806
We have a moderately sized Grails web application using GORM/Hibernate over PostgreSQL and GSPs serving HTML, and also a few REST APIs. We are standardising on Scala, and would like to migrate this application to Play or Spray, with Slick to access the existing database. Nimble is currently used for authentication/authorisation and user/role/etc. management.

What are the approaches we can take in order to do the migration step by step, avoiding a big-bang migration? They are both JVM languages; is there a way to avoid treating them as separate web apps running on separate ports at arm's length?
Migrating a Grails application to Scala Play/Spray
scala;grails;playframework
null
_softwareengineering.162937
My company is currently in a rebranding process, and the brand names have been used in the sources' package names. These names are only visible to the developers who maintain the code, so nobody from project management is really interested in changing them, considering also that it would imply recompiling several old components.

What factors do I need to consider when deciding on a change like that? I don't know if I should worry about legal issues or not, and if so, how to address this with project management.

More background details: I have all the sources and dependencies, but since the company rebranding, other development areas have adopted some of the code that needs package renaming, so I cannot take the decision by myself without making everyone else's code crash with my core components, and I cannot change other areas' code without the permission of those areas' users. So yes, my concern is more political than technical. I am going to try to coordinate the involved IT areas to make the change anyway, since it seems to be the best approach. Unfortunately, in my company there's no continuous-integration build server, so we build our code manually on demand, and to get something to production I have to justify the change (even just the package renaming) to QA with a user requirement and some other bureaucratic documentation; that's why I was hesitating about the change in the first place.
What do I need to learn to decide on rename/recompile source package names because of company rebranding?
project management;refactoring;legal;packages
This is usually a political/marketing question as much as a technical one. I have been involved in mergers where changing all references to the old name was mandated.

Assuming it is just a technical question: are you missing buildable source for any of the components, or are there similar significant technical risks? Will making the change break backwards compatibility when backwards compatibility is important? If the answer to either of these questions is yes, clearly avoid the change.

Otherwise, I would recommend making the change. Changing it is only going to get harder as more development is done, and otherwise you will need to explain it to every new developer on the project.
_unix.102233
What is, technically, the difference between a process that was started in the foreground and manually put into the background, and a daemon? Do they have different properties?
Difference between process in background and daemon
process;daemon;background process
You can take a look at the definition of a daemon, which tells you what the properties of a daemon are. The biggest ones are:

- No controlling terminal: STDIN, STDOUT, STDERR associated with the starting terminal are redirected.
- The parent process is set to init.
- A daemon is a process group leader.
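A minimal sketch of how those properties are established in code, the classic double-fork (illustrative, not from the original answer):

    import os
    import sys

    def daemonize():
        # First fork: the parent exits, so the child is re-parented to init.
        if os.fork() > 0:
            sys.exit(0)
        # New session: drop the controlling terminal, become a group leader.
        os.setsid()
        # Second fork: ensure we can never re-acquire a controlling terminal.
        if os.fork() > 0:
            sys.exit(0)
        # Point the standard streams away from the old terminal.
        devnull = os.open(os.devnull, os.O_RDWR)
        for fd in (0, 1, 2):
            os.dup2(devnull, fd)

A backgrounded shell job, by contrast, skips all of this: it keeps its controlling terminal and its parent shell, which is why it can be killed by a hangup when the terminal closes.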
_codereview.79095
I needed a simple expiring in-memory cache module for a project I'm working on and I've come up with the following.

My requirements for the cache module are:

- Be able to expire objects after a certain period of time
- Use what we have in the standard library
- Keep it simple

So far I've got this expiring in-memory cache module:

    import logging
    import threading
    from time import time
    from collections import OrderedDict

    __all__ = ['CachedObject', 'CacheInventory', 'CacheException']


    class CacheException(Exception):
        """Generic cache exception"""
        pass


    class CachedObject(object):
        def __init__(self, name, obj, ttl):
            """
            Initializes a new cached object

            Args:
                name (str): Human readable name for the cached entry
                obj (type): Object to be cached
                ttl  (int): The TTL in seconds for the cached object
            """
            self.hits = 0
            self.name = name
            self.obj = obj
            self.ttl = ttl
            self.timestamp = time()


    class CacheInventory(object):
        """Inventory for cached objects"""

        def __init__(self, maxsize=0, housekeeping=0):
            """
            Initializes a new cache inventory

            Args:
                maxsize      (int): Upperbound limit on the number of items
                                    that will be stored in the cache inventory
                housekeeping (int): Time in minutes to perform periodic
                                    cache housekeeping
            """
            if maxsize < 0:
                raise CacheException('Cache inventory size cannot be negative')

            if housekeeping < 0:
                raise CacheException('Cache housekeeping period cannot be negative')

            self._cache = OrderedDict()
            self.maxsize = maxsize
            self.housekeeping = housekeeping * 60.0
            self.lock = threading.RLock()

            if self.housekeeping > 0:
                threading.Timer(self.housekeeping, self.housekeeper).start()

        def __len__(self):
            with self.lock:
                return len(self._cache)

        def __contains__(self, key):
            with self.lock:
                if key not in self._cache:
                    return False

                item = self._cache[key]
                if self._has_expired(item):
                    return False
                return True

        def _has_expired(self, item):
            """
            Checks if a cached item has expired and removes it if needed

            If the upperbound limit has been reached then the last item
            is being removed from the inventory.

            Args:
                item (CachedObject): A cached object to lookup
            """
            with self.lock:
                if time() > item.timestamp + item.ttl:
                    logging.debug(
                        'Object %s has expired and will be removed from cache [hits %d]',
                        item.name,
                        item.hits
                    )
                    self._cache.pop(item.name)
                    return True
                return False

        def add(self, obj):
            """
            Add an item to the cache inventory

            Args:
                obj (CachedObject): A CachedObject instance to be added

            Raises:
                CacheException
            """
            if not isinstance(obj, CachedObject):
                raise Exception('Need a CachedObject instance to add in the cache')

            with self.lock:
                if self.maxsize > 0 and len(self._cache) == self.maxsize:
                    popped = self._cache.popitem(last=False)
                    logging.debug('Cache maxsize reached, removing %s [hits %d]', popped.name, popped.hits)

                logging.debug('Caching object %s [ttl: %d seconds]', obj.name, obj.ttl)
                self._cache[obj.name] = obj

        def get(self, key):
            """
            Retrieve an object from the cache inventory

            Args:
                key (str): Name of the cache item to retrieve

            Returns:
                The cached object if found, None otherwise
            """
            with self.lock:
                if key not in self._cache:
                    return None

                item = self._cache[key]
                if self._has_expired(item):
                    return None

                item.hits += 1
                logging.debug(
                    'Returning object %s from cache [hits %d]',
                    item.name,
                    item.hits
                )
                return item.obj

        def housekeeper(self):
            """Remove expired entries from the cache on a regular basis"""
            with self.lock:
                expired = 0
                logging.info(
                    'Starting cache housekeeper [%d items in cache]',
                    len(self._cache)
                )

                for name, item in self._cache.items():
                    if self._has_expired(item):
                        expired += 1

                logging.info(
                    'Cache housekeeper completed [%d removed from cache]',
                    expired
                )

            if self.housekeeping > 0:
                threading.Timer(self.housekeeping, self.housekeeper).start()

Here's an example usage of the caching module:

    >>> from __future__ import print_function
    >>> from __future__ import absolute_import
    >>> from . import CachedObject
    >>> from . import CacheInventory
    >>> cache = CacheInventory(housekeeping=60)  # housekeeper will run every 60 minutes
    >>> obj = {'key1': 'value1', 'key2': 'value2'}
    >>> cached_obj = CachedObject(name='mydictionary', obj=obj, ttl=60)  # object will expire in 60 seconds
    >>> cache.add(obj=cached_obj)
    >>> print(cache.get('mydictionary'))
    {'key2': 'value2', 'key1': 'value1'}
    >>> # 60 seconds later -> the object has expired already...
    >>> print(cache.get('mydictionary'))
    None

I'm currently using this caching module for storing VMware vSphere managed objects, and the code can also be found on GitHub: https://github.com/dnaeon/py-vconnector/blob/master/src/vconnector/cache.py

Any thoughts, remarks or suggestions about the design and implementation of this caching module?
Expiring in-memory cache module
python;python 2.7;python 3.x
In this loop you're not using name:

    for name, item in self._cache.items():
        if self._has_expired(item):
            expired += 1

If you only need the values, then iterate over just the values:

    for item in self._cache.values():
        if self._has_expired(item):
            expired += 1

You don't need an if statement here:

    if self._has_expired(item):
        return False
    return True

You can simplify by using the negated boolean expression directly:

    return not self._has_expired(item)

This expression can be simplified:

    if self.maxsize > 0 and len(self._cache) == self.maxsize:

Using chained comparison:

    if 0 < self.maxsize == len(self._cache):
_unix.340593
I'm trying to execute a command repeatedly on every LOL file in a directory and have the output share the base name. My first thought is:

    find . -type f -iname '*.lol' -exec command {} {}.out \;

I know this will result in a lot of lol.out files, but I can rename those in a second step. The problem I'm having is that the command is failing on every file, although I can type it in manually with success. I would like to debug my metacommand, but I don't know how to see the command that is actually being executed. Is there a way to get find to generate the list of commands it intends to execute?
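One common trick, as a hedged sketch (command stands in for the real program, as in the question): prefix the command with echo so find prints each invocation it would make instead of running it:

    find . -type f -iname '*.lol' -exec echo command {} {}.out \;

This shows exactly how {} and {}.out were expanded for each file, which usually reveals the quoting or expansion problem.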
Preview the command formed by find -exec
find
null
_unix.313116
Environment:

- host with Windows 10, KiTTY, VcXsrv
- guest with CentOS 6, twinkle

Situation: on Windows I have started VcXsrv and KiTTY with X forwarding and connected to CentOS 6. When I start twinkle in KiTTY, I see this [screenshot]. Note: when I start for example xterm, it looks normal.

Question: do you have any suggestion how to solve this situation?
twinkle: chars are shown as rectangles
fonts;qt;vcxsrv
null
_hardwarecs.7541
I am looking into getting a new laptop. I will mostly be using it for internet and Word, but might also be using games/programs such as Star Wars: The Old Republic, Lord of the Rings Online, and Maple. I am also hoping it will last me 5 years.

The laptop currently at the top of my list is the MSI GT62VR DOMINATOR PRO-239, which has two available models, (1) and (2). The only difference appears to be that the first has 32 GB RAM at 2133 MHz while the second has 32 GB RAM at 2400 MHz, but the second one costs about $45 more. Is the model with the faster RAM worth the extra money?
Buying a new laptop: 32 GB RAM at 2133 MHz or 2400 MHz for $45 more?
laptop;memory
null
_unix.248118
I see the following in ~/.bashrc:

    if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
        debian_chroot=$(cat /etc/debian_chroot)
    fi

which means: if the variable is not set, and the file exists and is readable, then set the file's content to the variable. Am I supposed to write something to that file while preparing to chroot? If yes, then I'll have to remove that file at the end of the chroot job! Any explanation or suggestions will be appreciated.
How can I use debian_chroot in bashrc to identify the chroot env?
bash;debian;chroot;bashrc
This variable is just for building the default PS1 shell prompt further down:

    PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '

So it is not essential to create the file, although it can be nice having the prompt identify where you are.

As you can see, -r tests for a file and whether the user can read it, and if it exists, debian_chroot gets its content. So create /etc/debian_chroot inside the chroot with the wording you want. (Inside: do not do it at the true root, as that won't be inside the chroot.)

So if your chroot is at /mnt, the file you would need to modify is /mnt/etc/debian_chroot (and not /etc/debian_chroot).
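Concretely, a hedged one-line sketch (the /mnt mount point and the label text are assumptions):

    # Give the chroot a label; prompts inside will then show (my-chroot)
    echo my-chroot | sudo tee /mnt/etc/debian_chroot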
_unix.204037
I have a Python program which I need to run every minute from 11 PM (EDT) to 6 AM (EDT). How can I schedule a cron job to do this?

    * 23-6 * * 1-5 python my_program.py

Will this work? Or do I have to write two separate cron jobs for this?
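For reference, a hedged sketch: many cron implementations do not accept hour ranges that wrap past midnight, so the usual workaround is to list the hours explicitly (check your cron's man page before relying on this). Note also that an overnight window spans two calendar days, which complicates a weekday restriction like 1-5:

    # every minute from 23:00 through 05:59, every day
    * 23,0-5 * * * python my_program.py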
Cron job to run every minute from 11PM to 6AM
cron
null
_reverseengineering.16183
I'm going to do reverse engineering, so I have extracted a .bin file from a flash chip and used Binwalk to analyze it. But binwalk just shows me some zlib compression format without any size, as shown in the image. It doesn't show me anything about a bootloader or kernel image. When I use binwalk -I *.bin -y LZMA it shows me an LZMA format just with properties value 0x6D, dictionary size 0x00 and Uncompressed Size: 0x00, while I know the kernel was compressed with the LZMA compression format.

Could you please guide me: why does binwalk show me zlib with no size, and why doesn't it show me anything about the bootloader and kernel?

Thanks
binwalk show zlib format without any size
binary analysis;firmware;tools
null
_unix.53288
I was fiddling around with the parted command on a loopback disk and tried to create some partitions using a GPT partition table, but I keep getting Error: Unable to satisfy all constraints on the partition. when trying to create a logical partition.

    $ sudo parted /dev/loop0
    (parted) mktable gpt
    (parted) mkpart primary 1MiB 201MiB
    (parted) mkpart extended 201MiB -0MiB
    (parted) unit MiB print
    Model: Loopback device (loop)
    Disk /dev/loop0: 102400MiB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt

    Number  Start    End        Size       File system  Name      Flags
     1      1.00MiB  201MiB     200MiB                  primary
     2      201MiB   102400MiB  102199MiB               extended

    (parted) mkpart logical 202MiB 1024MiB
    Error: Unable to satisfy all constraints on the partition.

Recreating the same partitions using an msdos partition table doesn't give such an error, though. So any idea what's wrong?

    % sudo parted /dev/loop0
    GNU Parted 2.3
    Using /dev/loop0
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) mktable msdos
    (parted) mkpart primary 1MiB 201MiB
    (parted) mkpart extended 201MiB -0MiB
    (parted) mkpart logical 202MiB 1024MiB
    (parted) unit MiB print
    Model: Loopback device (loop)
    Disk /dev/loop0: 102400MiB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos

    Number  Start    End        Size       Type      File system  Flags
     1      1.00MiB  201MiB     200MiB     primary
     2      201MiB   102400MiB  102199MiB  extended               lba
     5      202MiB   1024MiB    822MiB     logical
Unable to create logical partition with Parted
partition;gpt;parted;disk
The extended and logical partitions make sense only with an msdos partition table. Their only purpose is to allow you to have more than 4 partitions. With GPT, there are only 'primary' partitions, and their number is usually limited to 128 (however, in theory there is no upper limit implied by the disklabel format). Note that on GPT none of the partitions may overlap (compare to msdos, where the extended partition is expected to overlap with all contained logical partitions, obviously).

The next thing about GPT is that partitions can have names, and here comes the confusion: the mkpart command has different semantics depending on whether you use a GPT or msdos partition table. With an msdos partition table, the second argument to mkpart is the partition type (primary/logical/extended), whereas with GPT, the second argument is the partition name. In your case it is 'primary' resp. 'extended' resp. 'logical'. So parted created two GPT partitions, the first named 'primary' and the second named 'extended'. The third partition which you tried to create (the 'logical' one) would overlap with 'extended', so parted refuses to do it.

In short, extended and logical partitions do not make sense on GPT. Just create as many 'normal' partitions as you like and give them proper names.
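A hedged sketch of what that looks like in parted (the names here are just arbitrary labels):

    (parted) mktable gpt
    (parted) mkpart boot 1MiB 201MiB
    (parted) mkpart root 201MiB 1024MiB
    (parted) mkpart data 1024MiB -0MiB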
_unix.269643
A question concerning the time synchronization between the host and the guest System.I am using Windows 7 as my host OS and CentOS 7 is installed as VM in an Oracle VirtualBox environment without network access. I am searching for a solution which allows the VM to get the correct time after a reboot or a snapshot. The challenge is, that I would collect the time from the host system without installation of additional tools. Do you have an idea?
Time sync in VM between Windows as host and CentOS as guest without network
linux;virtualbox;clock
If your guest is CentOS Linux then you need to install the DKMS (Dynamic Kernel Module Support) package:

    # yum install dkms
    # yum install virtualbox-guest-additions

For reference you can check this.

Without the virtualbox-guest-additions tool: disable the ntp service:

    chkconfig ntpd off

For a Windows host, go to

    C:\Documents and Settings\.VirtualBox\Machines\

and edit the xml file. (Create a backup of this file first.) Add the line:

    VBoxInternal/Devices/VMMDev/0/Config/GetHostTimeDisabled 0
_unix.332568
It doesn't really specify what to set as the path of the BIOS files and HDD image(s). I've looked on YouTube and elsewhere. No luck. Here's what I have (it's probably incorrect). Here are the relevant lines:

    romimage: file=BIOS-bochs-latest
    vgaromimage: file=VGABIOS-lgpl-latest
    ata0-master: type=disk, path=c.img
    ata0-slave: type=disk, path=d.img
    boot: c
BOCHS - what to set for BIOS images and HDD image?
emulation
null
_codereview.19263
Based on the answer to my question on Stack Overflow, I have ended up with the following code:

    public class ColumnDataBuilder<T>
    {
        public abstract class MyListViewColumnData
        {
            public string Name { get; protected set; }
            public int Width { get; protected set; }
            public ColumnType Type { get; protected set; }

            public delegate TOUT FormatData<out TOUT>(T dataIn);

            protected abstract dynamic GetData(T dataRow);

            public string GetDataString(T dataRow)
            {
                dynamic data = GetData(dataRow);
                switch (Type)
                {
                    case ColumnType.String:
                    case ColumnType.Integer:
                    case ColumnType.Decimal:
                        return data.ToString();
                    case ColumnType.Date:
                        return data.ToShortDateString();
                    case ColumnType.Currency:
                        return data.ToString("c");
                        break;
                    case ColumnType.Boolean:
                        var b = (bool)data;
                        if (b)
                            return "Y";
                        else
                            return "N";
                    default:
                        throw new ArgumentOutOfRangeException();
                }
            }
        }

        public class MyListViewColumnData<TOUT> : MyListViewColumnData
        {
            public MyListViewColumnData(string name, int width, ColumnType type, FormatData<TOUT> dataFormater)
            {
                DataFormatter = x => dataFormater(x); // Per https://stackoverflow.com/a/1906850/298754
                Type = type;
                Width = width;
                Name = name;
            }

            public Func<T, TOUT> DataFormatter { get; protected set; }

            protected override dynamic GetData(T dataRow)
            {
                return DataFormatter(dataRow);
            }
        }
    }

This is called from a factory method (in ColumnDataBuilder) as:

    public MyListViewColumnData Create<TOUT>(string name, int width, ColumnType type, MyListViewColumnData.FormatData<TOUT> dataFormater)
    {
        return new MyListViewColumnData<TOUT>(name, width, type, dataFormater);
    }

    public MyListViewColumnData Create(string name, int width, MyListViewColumnData.FormatData<DateTime> dataFormater)
    {
        return new MyListViewColumnData<DateTime>(name, width, ColumnType.Date, dataFormater);
    }
    ...

That, in turn, is called from my code as:

    builder.Create("Date", 40, x => x.createdDate);

and

    private ListViewItem CreateListViewItem<TDATA>(IEnumerable<ColumnDataBuilder<TDATA>.MyListViewColumnData> columns, TDATA rowData)
    {
        var item = new ListViewItem();
        foreach (var col in columns)
        {
            item.SubItems.Add(col.GetDataString(rowData));
        }
        item.SubItems.RemoveAt(0); // We generate an extra SubItem for some reason.
        return item;
    }

How can I refactor this so that I'm not using dynamic, but still preserve the syntax as it currently exists in the code?
Refactoring to avoid the use of dynamic
c#
I don't think you need the MyListViewColumnData base class there; I would replace it with an interface and move the GetDataString implementation to MyListViewColumnData<TOut>. And you don't need dynamic here, just use object instead (yes, it will use boxing for most cases except strings, but it's more efficient than dynamic).

    public class ColumnDataBuilder<T>
    {
        public interface IMyListViewColumnData
        {
            string Name { get; }
            int Width { get; }
            ColumnType Type { get; }
            string GetDataString(T dataRow);
        }

        public delegate TOut FormatData<out TOut>(T dataIn);

        public class MyListViewColumnData<TOut> : IMyListViewColumnData
        {
            public string Name { get; private set; }
            public int Width { get; private set; }
            public ColumnType Type { get; private set; }

            private readonly FormatData<TOut> _dataFormatter;

            public MyListViewColumnData(string name, int width, ColumnType type, FormatData<TOut> dataFormater)
            {
                _dataFormatter = dataFormater;
                Type = type;
                Width = width;
                Name = name;
            }

            public string GetDataString(T dataRow)
            {
                object data = _dataFormatter(dataRow);
                switch (Type)
                {
                    case ColumnType.String:
                    case ColumnType.Integer:
                    case ColumnType.Decimal:
                        return data.ToString();
                    case ColumnType.Date:
                        return ((DateTime)data).ToShortDateString();
                    case ColumnType.Currency:
                        return ((decimal)data).ToString("c");
                    case ColumnType.Boolean:
                        return (bool)data ? "Y" : "N";
                    default:
                        throw new ArgumentOutOfRangeException();
                }
            }
        }

        public IMyListViewColumnData Create<TOut>(string name, int width, ColumnType type, FormatData<TOut> dataFormater)
        {
            return new MyListViewColumnData<TOut>(name, width, type, dataFormater);
        }

        public IMyListViewColumnData Create(string name, int width, FormatData<DateTime> dataFormater)
        {
            return new MyListViewColumnData<DateTime>(name, width, ColumnType.Date, dataFormater);
        }
    }

    public enum ColumnType
    {
        String,
        Integer,
        Decimal,
        Date,
        Currency,
        Boolean
    }

Update

In comments it was asked if you can extract an interface for ColumnDataBuilder. Of course you can :), and the easiest way would be to use the Extract Interface refactoring from ReSharper :). If you still don't use it you'll have to do that manually (move the IMyListViewColumnData and FormatData<TOut> declarations out of ColumnDataBuilder<T> first):

    public interface IColumnDataBuilder<Tin>
    {
        IMyListViewColumnData Create<TOut>(string name, int width, ColumnType type, FormatData<TOut> dataFormater);
        IMyListViewColumnData Create(string name, int width, FormatData<DateTime> dataFormater);
    }
_unix.40810
I am running the stock Xfce spin of Fedora 17 in a VirtualBox virtual machine, and just installed the CDM display manager via yum. I modified the /etc/cdmrc file to start Xfce, and added the following to /etc/sysconfig/desktop:

    DISPLAYMANAGER=/usr/bin/cdm

However, upon reboot the process hangs right after the Fedora logo appears, with this:

    Cannot open font file True

Can anyone help me diagnose and troubleshoot this problem? Thank you very much!
Fedora 17 boot hangs after changing to CDM
fedora;boot;display manager
null
_cs.33171
I know that IS (is there an independent set of size at least $k$?) on planar cubic graphs is NP-complete, and IS on triangle-free graphs is also NP-complete. But what about IS on triangle-free planar cubic graphs? Is it still an NP-complete problem, or are there polynomial-time solutions? Any ideas or references are appreciated; thank you in advance!
Complexity of Independent Set on Triangle-Free Planar Cubic Graphs
complexity theory;graphs;time complexity;np complete
null
_unix.322098
I have a couple of machines running in an isolated environment. They can be accessed via a bastion machine which has a public IP address. I'm currently trying to automate the distribution of docker images created on a local machine to machines in the isolated environment.

Currently I have the following command:

    docker save test/myapp | gzip | pv | ssh ubuntu@bastion cat > remote

This command copies a file to the bastion machine. The problem is that I don't want anything to be saved on the bastion machine's drive. I want to write a script on the bastion machine that delivers the image to all machines in the isolated environment that don't have a public IP address.

I think that I should have some kind of script on the bastion machine that would take input from a pipe. The script should make an ssh connection to each machine and run a docker load image command. This would be easy to do with docker-machine, but I can't use it because it requires an Internet connection. Any ideas?

In short: I want to deliver a docker image from a local machine to multiple servers via the bastion server.

I am pretty new at this kind of scripting, so I'm sorry if my question is trivial, but thus far I haven't been able to solve it.
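A hedged sketch of one way to do it (the host addresses are placeholders; it assumes OpenSSH 7.3+ for the -J jump-host option and docker installed on each target): stream the image from the local machine straight through the bastion, once per host, so nothing ever lands on the bastion's disk.

    for host in 10.0.0.11 10.0.0.12; do
        # The bastion only forwards the connection; docker load reads the
        # tar stream from stdin on the target.
        docker save test/myapp | gzip | ssh -J ubuntu@bastion "$host" 'gunzip | docker load'
    done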
Delivering docker image to multiple servers without docker machine
ssh;docker
null
_cs.44335
I got this question in a past test that I'm trying to solve, but I don't have the solutions to check myself:

Given a set of $n$ segments $[a_i, b_i]$ where $i = 1, \ldots, n$ and $a_i < b_i$, write an algorithm which finds a segment such that the number of segments $[a_l, b_l]$ before it ($b_l < a_i$) is equal to the number of segments $[a_r, b_r]$ after it ($b_i < a_r$). The algorithm will return its index if found, else null. The algorithm should work in $O(n\log n)$ in the worst case.

My solution is:

1. Run heapsort by $a_i$ (runs in $O(n\log n)$).
2. Run bucket sort by $b_i$, where each bucket is an $a_i$ (runs in $O(n)$).
3. Loop over each member $X$ in reverse order and, using binary search on the rest of the set, find the segment $Y$ whose $b$ is equal to or closest below $X$'s $a$; write in $Y$ the distance of $X$ from the end of the list (the number of segments which are right of $Y$) and write in $X$ the index of $Y$ (the number of segments which are left of $X$). (Runs in $O(n\log n)$.)
4. Loop over each member, looking for an element with left_count equal to right_count (not zero), and return it (runs in $O(n)$).
5. If nothing is found, return null.

So finally the algorithm works in $2n\log n + 2n$, which is $O(n\log n)$.

Am I right? Is there a better solution?
Finding a segment which has equal number of segments before and after it
algorithms;time complexity;search algorithms;sorting
Here is an algorithm that achieves $O(n\log n)$ complexity without any elaborate data structures, just a simple sort and a couple of loops.

1. Sort all the $\{a_i,b_i\}$ together. Call the resulting sequence $(x_1,\ldots,x_{2n})$.
2. Set $n_a\gets0$, $n_b\gets0$.
3. Loop for $n_b$: for $t$ going from $1$ to $2n$ do:
   - If $x_t=b_i$ for some $i$, increment $n_b$.
   - Else if $x_t=a_j$ for some $j$, set $L_j\gets n_b$.
4. Loop for $n_a$: for $t$ going from $2n$ down to $1$ do:
   - If $x_t=a_i$ for some $i$, increment $n_a$.
   - Else if $x_t=b_j$ for some $j$, set $R_j\gets n_a$.
5. Now for each interval $[a_i,b_i]$, $L_i$ and $R_i$ contain the number of intervals to its left and to its right, respectively.
6. Final loop: for $i$ going from $1$ to $n$: if $L_i=R_i$, return $i$.
7. Found nothing: return null.

You might need to make some additional checks to take care of cases where $a_i=b_j$ for $i\not=j$. Note that equality checks of the form $x_t=b_i$ can be done by saving the sorting indices. In other words, if you sort an array $u$ into another array $v$, you can save indices $\pi_t$ such that $u_t=v_{\pi_t}$.
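A hedged Python sketch of those steps (illustrative; it assumes all endpoint values are distinct, per the caveat about $a_i=b_j$ above):

    def balanced_segment(segments):
        # One sorted pass over all 2n endpoints, each tagged with its
        # segment index and whether it is a left (a) or right (b) endpoint.
        events = [(a, 'a', i) for i, (a, b) in enumerate(segments)]
        events += [(b, 'b', i) for i, (a, b) in enumerate(segments)]
        events.sort()                     # the single O(n log n) step

        n = len(segments)
        L, R = [0] * n, [0] * n

        nb = 0                            # forward pass: b's seen so far
        for _, kind, i in events:
            if kind == 'b':
                nb += 1
            else:
                L[i] = nb                 # segments entirely left of i

        na = 0                            # backward pass: a's seen so far
        for _, kind, i in reversed(events):
            if kind == 'a':
                na += 1
            else:
                R[i] = na                 # segments entirely right of i

        for i in range(n):
            if L[i] == R[i]:
                return i
        return None

For example, balanced_segment([(1, 2), (3, 4), (5, 6)]) returns 1, the middle segment.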
_unix.286616
I'm wondering how I could monitor spinlocks. At my client, we have CPU soft lockup failures, for which, if I understand well, a spinlock is a likely cause. Different teams use that server for predictive modeling using R, Python and SAS, meaning we often have many unsupervised processes running in parallel, possibly with multiprocessing libraries. Monitoring the number of spinlocks or, even better, which processes used them, might help in validating or invalidating them as a cause for our frequent failures (5 during the last 3 weeks). Is there any way to monitor them? If not, how could we know what is causing those soft lockups?
how to monitor spinlocks
linux;cpu;crash;spinlock
null
_unix.204711
I am having a weird problem while trying to edit the tags of my music library: I modify them with EasyTAG and there seems to be no problem (VLC recognizes the changes I make), but I then transfer the music to my audio player and it does not recognize the changes. Note that this issue does not happen with all of my records, but only a small portion (less than 1%). What could be wrong and how can I fix it?
Why can't I edit the tags of some mp3 files?
audio;mp3;tagging
null
_codereview.171242
I am in the process of changing jobs, so I would like to get an idea of what I do wrong in order to improve. For this, I have created a small node module. It's very simple; it calculates linear motion (distance, velocity, time). I am interested in knowing what I do wrong, whether it's some crucial mistake or something secondary such as the way I format my comments.

    /**
     * Checks whether two values are valid to be operated with
     * @param {Float} operandA
     * @param {Float} operandB
     * @return {Boolean}
     */
    const areValuesValid = (operandA, operandB) => {
      if (isNaN(operandA) || isNaN(operandB)) return false;
      if (operandA === null || operandB === null) return false;
      return true;
    }

    /**
     * Rounds a value to a max of two decimals
     * @param {Float} val
     * @return {Float}
     */
    const round = (val) => {
      return Math.round(val * 100) / 100;
    }

    /**
     * Calculates the time in relation to the velocity and the distance
     * @param {Float} velocity
     * @param {Float} distance
     * @return {Float}
     */
    const calculateTime = (velocity, distance) => {
      if (areValuesValid(velocity, distance) === false) return 0;
      if (parseFloat(velocity) === 0) return 0;
      return round(distance / velocity);
    }

    /**
     * Calculates the velocity in relation to the time and the distance
     * @param {Float} time
     * @param {Float} distance
     * @return {Float}
     */
    const calculateVelocity = (time, distance) => {
      if (areValuesValid(time, distance) === false) return 0;
      if (parseFloat(time) === 0) return 0;
      return round(distance / time);
    }

    /**
     * Calculates the distance in relation to the velocity and the time
     * @param {Float} velocity
     * @param {Float} time
     * @return {Float}
     */
    const calculateDistance = (velocity, time) => {
      if (areValuesValid(velocity, time) === false) return 0;
      return round(velocity * time);
    }

    module.exports = {
      areValuesValid,
      calculateTime,
      calculateVelocity,
      calculateDistance
    }

If anybody is willing to go the extra mile, I have created a repository with this code, which also includes some tests: GitHub. I would like to know if I go about the tests the right way, if the module structure makes sense, and whatever else is criticizable.
NodeJS module to calculate linear motion
javascript;unit testing;node.js;modules;physics
Nice documentation. This code is clear and easy to read. Here are some basic nits:

You should probably follow standard JSDoc conventions. Don't do this:

    /**
    *
    */

Do this instead:

    /**
     *
     */

If you're nitpicky about the @param, Float technically doesn't exist, since all JavaScript numbers are floats. Use Number instead.

See this SO post about rounding to 2 decimal places. Using it is completely up to you; there's nothing wrong with the way you're doing it. (This suggestion is slower, but looks cool I guess.)

    const round = val => {
      return +val.toFixed(2);
    };

You know the constraints of your areValuesValid() function; there's no need to === false.

    const calculateTime = (velocity, distance) => {
      if (areValuesValid(velocity, distance) && parseFloat(velocity) !== 0) {
        return round(distance / velocity);
      }
      return 0;
    }

Also, why the need to do parseFloat(velocity)? If you know all your arguments will be numbers, this is unnecessary. If you don't know the data type of your arguments, you should probably enforce that they are numbers using typeof.

Happy coding!
_unix.129330
I'm trying to recreate the Blackbox Gray theme for Openbox. However, the size of the grips is smaller in Openbox compared to Blackbox. Is there a setting somewhere that controls the size of the grips?

Openbox: [screenshot]
Blackbox: [screenshot]
Adjusting the width of the grip for an Openbox theme
openbox
null
_unix.306877
I installed Kali Linux 2016.1 x86 onto my Dell Latitude 2120 via a USB drive. However, when I boot my computer, it shows the screen GNU GRUB version 2.02 beta2-33. I can select Kali GNU/Linux or Advanced options, but when I launch Kali GNU/Linux, I get some errors:

    /dev/sda contains a file system with errors, check forced.
    ...
    /dev/sda1: UNEXPECTED INCONSISTENCY. RUN fsck MANUALLY.
    ...
    /bin/sh: can't access tty: job control turned off.

All those errors then just sit there in the terminal, but where it would normally say root@kali# or pull up some kind of GUI, it just says (initramfs) followed by a flashing _ indicating that I can enter a command. I've run fsck, and all it says is fsck from util-linux 2.27.1 and waits for me to enter another command. I've tried startx, x, and gdm3, but they all just spit out /bin/sh: [command I entered]: not found.
Kali won't load gui or command line
ubuntu;kali linux;grub;gnu
null
_cs.75312
Say we had two agents and we want them both to traverse a map concurrently. Their goal is to collectively visit a collection of certain points on the map. If there was just one agent, it would be simple enough to just implement BFS or A* and get a good solution. But considering there are two, how can we divide the points to be visited among the two agents in such a way that no steps (or minimal steps) are wasted in visiting all the points?

Edit/Clarification:

- The points do not need to be visited in a particular order.
- If there was only one agent, I could get a possible solution with a BFS or A* where the goal state is all the points having been visited.

Edit 2: Pictures

An example of the problem might be as seen below. Agents start at the top left of the maze and the goal is a state space where all the red dots have been visited. [figure]

A possible path that might be returned by BFS with a single agent: [figure]

What I would like to achieve with multiple agents: [figure]
Dividing a set of goals among two search agents
graphs;search algorithms;search trees;traveling salesman
null
_webmaster.101410
We've recently made some changes so that our site can be included as an iframe widget on other websites.Can we expect this to give us any SEO boost?I can only find information about the effect (or lack of effect) that you get from including someone else's iframe in your own site (this question for example)
SEO effects of being embedded on other websites
seo;iframe
Google does not like links generated from widgets and will penalize sites that use them.

However, some widgets add links to a site that a webmaster did not editorially place and contain anchor text that the webmaster does not control. Because these links are not naturally placed, they're considered a violation of Google Webmaster Guidelines.

So, if by placing this widget on other sites you somehow generate incoming links, you run the risk of being penalized by Google. If this does not generate links to your site through the widget, it really won't make a difference either way. To summarize, if your site is just an iframe with no "provided by <link to your site>" attribution, you should be okay. But since the content is in an iframe, I wouldn't expect any kind of SEO boost.
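As an illustration, the risky pattern is the attribution link that typically ships with embed snippets, not the iframe itself (the markup here is hypothetical):

<!-- The iframe alone passes no link equity to your site. -->
<iframe src="https://example.com/widget"></iframe>
<!-- A bundled, keyword-rich backlink like this is what the guidelines
     object to; if you must include one, nofollow it. -->
<p>Provided by <a href="https://example.com/" rel="nofollow">Example Widgets</a></p>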
_codereview.169324
I'm very new to Python, and far away from writing my own scripts. For my work with lilypond I needed a script that parses a text file, searches for a string and inserts another text file before the match. I have been searching quite a lot for this kind of script and I did not find any. I ended up combining the snippets I found on here and other sites and came up with this script, which is working:

#!/usr/bin/env python
# usage:
# $ python thisfile.py text.txt searchstring insert.txt

import sys

f2 = open(sys.argv[3])
data = f2.read()
f2.close()

with open(sys.argv[1], "r+") as f1:
    a = [x.rstrip() for x in f1]
    index = 0
    for item in a:
        if item.startswith(sys.argv[2]):
            a.insert(index, data)
            break
        index += 1
    f1.seek(0)
    f1.truncate()
    for line in a:
        f1.write(line + "\n")

I also got a very detailed answer on Stack Overflow, telling what is actually going on in the code; before that I was far away from understanding any detail.

What I got out of it so far is the following problem: if anything went wrong while reading in the data from f1 or f2, f1.truncate() would have deleted the original content of f1, and with the script then unable to (re)write the appropriate content, that content would get lost. A much more secure way would be using some kind of temporary file, or at least moving the original content of f1 there before calling truncate().

I would be glad for any comments on this problem, and any others if there are.
Python script searching for string in textfile and inserting another textfile before match
python;parsing
The problem is a general one in data processing, one that programmers have to think about all the time! When changing a file $F$ from $A$ to $B$ it's tempting to implement it like this:

1. Read $A$ from $F$.
2. Compute $B$ from $A$.
3. Delete $F$.
4. Write $B$ to $F$.

But we need to consider the possibility that something will go wrong. Maybe the user will type control-C on the keyboard and interrupt the program? Maybe there will be a power cut? Maybe the disk will not have enough room for $B$? If any of these things happened after step 3 and before step 4, then you would be left in a situation where $F$ contains neither $A$ nor $B$. So you have lost your data and can't get it back.

This is why we try to design systems so that operations are atomic: either they succeed completely or they fail completely. In this case we would use the following procedure:

1. Read $A$ from $F$.
2. Compute $B$ from $A$.
3. Write $B$ to a temporary location $G$.
4. Replace $F$ with $G$.

This works because operating systems (usually!) give us an atomic implementation of step 4. In Python we can use os.rename, where you can see that the documentation says:

If successful, the renaming will be an atomic operation

In this design, if something goes wrong before step 4, the file $F$ still contains $A$, and so we haven't lost our data, and so we have a chance to fix the problem and try again.

So in this case, I'd write something like this (but this is not tested, so don't use it blindly!):

import os
import sys
import tempfile

# usage:
# $ python thisfile.py text.txt searchstring insert.txt

text_file, searchstring, insert_file = sys.argv[1:]

with tempfile.NamedTemporaryFile('w', delete=False) as temp:
    with open(text_file) as f1:
        inserted = False   # Have we inserted insert_file yet?
        for f1_line in f1:
            if not inserted and f1_line.startswith(searchstring):
                with open(insert_file) as f2:
                    for f2_line in f2:
                        temp.write(f2_line)
                inserted = True
            temp.write(f1_line)

os.rename(temp.name, text_file)

Here I've used the library function tempfile.NamedTemporaryFile to choose somewhere to put the temporary file.

Update

So here's another reason why it's a good idea to make operations atomic: you might have made a programming error! The code I wrote above works correctly on my operating system (macOS), but as it says in the os.rename documentation:

The operation may fail on some Unix flavors if src and dst are on different filesystems

So I'm guessing that you're on some kind of Linux system. On these systems you've got to ensure that the temporary file $G$ is on the same filesystem as $F$, and the only reliable way to do that is to put it in the same directory as $F$:

import os
import sys
import tempfile

# usage:
# $ python thisfile.py text.txt searchstring insert.txt

text_file, searchstring, insert_file = sys.argv[1:]

# Directory and name of text_file.
dirname, basename = os.path.split(text_file)

# Create temporary file in same directory as text_file.
with tempfile.NamedTemporaryFile('w', dir=dirname, prefix=basename, delete=False) as temp:
    with open(text_file) as f1:
        inserted = False   # Have we inserted insert_file yet?
        for f1_line in f1:
            if not inserted and f1_line.startswith(searchstring):
                with open(insert_file) as f2:
                    for f2_line in f2:
                        temp.write(f2_line)
                inserted = True
            temp.write(f1_line)

os.rename(temp.name, text_file)

Writing reliable code that works on different platforms is not easy!
_codereview.114536
This is a script that must send an email for each new article published on a specific website. Any suggestions or improvements?

TO_EMAIL="[email protected]"
SENDER_EMAIL="[email protected]"
RSS_SITE="example.com/feed.xml"
CHECK_INTERVAL=10

while [ 1 ]; do
    LINK_ARTICLE=$(rsstail -i 1 -u $RSS_SITE -l -n 0 -1 | grep -oP "Link:+ \K.*")
    TITLE_ARTICLE=$(rsstail -i 1 -u $RSS_SITE -n 0 -1 | grep -oP "Title:+ \K.*")

    if [ "$LINK_ARTICLE" != "" ] && [ "$TITLE_ARTICLE" != "" ]; then
        echo "New article published on the site. TITLE: $TITLE_ARTICLE - LINK: $LINK_ARTICLE" | EMAIL=$SENDER_EMAIL mutt -s "Nuovo Articolo BDO" $TO_EMAIL
        echo "New article published on the site. TITLE: $TITLE_ARTICLE - LINK: $LINK_ARTICLE"
    fi

    sleep $CHECK_INTERVAL
done
Notification script | from RSS to Email | Bash
bash;linux
I see a number of things that may help you improve your code.

Use a shebang line

The shebang is the line at the beginning of a shell script that tells which program to use. In this case, you probably want this:

#! /usr/bin/env bash

See this question for details.

Consider using cron instead of sleep

If this is something you want to run automatically, consider running it as a cron job instead of using sleep within the script.

Include some comments

The program requires rsstail, mutt and sleep, which is a requirement that should be documented in a comment.

Be cautious about handing variables to programs

The mutt program, like many Linux programs, has a -- option which specifies that no further options are on the command line. This prevents the contents of $TO_EMAIL in a line like the following from being misinterpreted as a command-line option.

mutt -s "$TITLE" -- $TO_EMAIL <<< "$BODYTEXT"

Combine strings

The echo is used twice with an identical string. An alternative approach is:

TITLE="Nuovo Articolo BDO"
BODYTEXT="New article published on the site. TITLE: $TITLE_ARTICLE - LINK: $LINK_ARTICLE"
mutt -s "$TITLE" -- $TO_EMAIL <<< "$BODYTEXT"
echo "$BODYTEXT"

Avoid creating extraneous variables

Instead of creating SENDER_EMAIL, you could just specify EMAIL, and then the reassignment of the latter variable before mutt is called would not be necessary.

Consider writing a portable script

By sticking closely with POSIX and avoiding bashisms, your code could run on many different kinds of systems, including recent versions of Ubuntu which don't use bash.
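For example, a crontab entry along these lines (the script path is illustrative) runs the check every ten minutes and lets you drop the while loop and sleep entirely:

*/10 * * * * /path/to/rss-notify.sh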
_webmaster.78669
I have a website in a shared hosting environment. Recently I found out that I can load other websites' contents using my own domain through URLs like mysite.com/~othersite/. This has resulted in Google indexing a malicious phishing website through my domain and sent me warning emails about it.Tech support say this is normal behavior and if it bothers me I should upgrade to a VPS. They confirmed that I cannot correct this in my own .htaccess file or by other means as this happens at a higher level.My question: Is this the usual, best-practice configuration for shared hosting environments or is the hosting company incompetent (or deliberately creating inconvenience to motivate upgrading)?Am I requesting something overly technically complicated when I say that content from website X should under no circumstances be returned when the request is addressed with the domain of website Y? Is this an unrealistic expectation in a shared environment?
Other websites' content accessible through own domain in shared hosting?
shared hosting
null
_unix.320821
I have a shell script that I created to change the next EFI boot entry and then execute a reboot. If I execute it in a terminal window it works fine, but if I execute it using an icon in KDE it reboots, yet does not change the next EFI boot entry. I have tried setting the icon to run as root, but that didn't make a difference.

Here is the script:

#!/bin/bash
kdialog --title "Reboot to Windows Prompt" --yesno "Are you sure you want to reboot to Windows?";
if [ $? = 0 ]; then
    sudo efibootmgr -n 0
    reboot
else
    kdialog --msgbox "Reboot aborted by user"
fi

Someone even suggested having a pause between the efibootmgr call and the reboot, but that didn't work either.
Shell script works different in KDE vs Terminal
shell;sudo;kde;privileges;terminal emulator
Not sure if it's what you're looking for, but have you considered launching a terminal and executing your script from an icon?

Right click the icon > Icon Settings > Application > Command:

konsole -e /path/to/your/script.sh

Or, if you need the window to stay open for some reason, use -noclose
_webmaster.3
I've noticed that Chrome and Firefox take different amounts of time to render certain things. In general, Chrome has been faster. What should I know about both of them (and IE8/9, too, I guess) when constructing a Javascript/jQuery app?
What are the differences between Firefox's Javascript engine and Chrome's V8?
google chrome;javascript;firefox;jquery
Actually, SpiderMonkey (FF) and V8 (Chrome) are very similar in the core JavaScript engine API, in that both try to be standards compliant. The main difference is that SpiderMonkey tends to add some nice extras to its API if the developers feel they are needed. All of this is found at the Mozilla Development Center (MDC) for JavaScript, and is well documented where it is not a standard. On a side note, I personally search the MDC as my primary source for the JavaScript API.

This story is entirely different for IE. While most of the core API such as Math and String is the same, IE differs greatly when it comes to the document object and any manipulation therein. I would agree with balexandre and say that jQuery does a very good job of taking care of that mess for you.

The last thing I will mention: while each engine will process the JavaScript code differently (some faster, some slower, etc.), this can mostly be considered a black box, and all you should need to worry about are the differences in the APIs.
_codereview.87316
String.prototype.replaceAll = function(find, replace) { if (typeof find == 'string') return this.split(find).join(replace); var t = this, i, j; while (typeof(i = find.shift()) == 'string' && typeof(j = replace.shift()) == 'string') t = t.replaceAll(i || '', j || ''); return t;};function html(input, replaceQuoteOff) { if (replaceQuoteOff) return input.toString().replaceAll(['&', '<'], ['&amp;', '&lt;']); return input.toString().replaceAll(['&', '<', ''], ['&amp;', '&lt;', '&quot;']);}function warning(message) { console.log(message);}function spanMarkdown(input) { input = html(input); while (input.match(/\^([\w\^]+)/)) input = input.replace(/\^([\w\^]+)/, '<sup>$1</sup>'); return input .replaceAll('\u0001', '^') .replace(/\[(.+?)\|(.+?)\]/g, '<abbr title=$2>$1</abbr>') .replaceAll('\u0002', '[') .replace(/\[\[(\d+)\](.*?)\]/g, '<sup class=reference title=$2>[$1]</sup>') .replace(/!\[([^\]]+)]\((https?:\/\/[^\s(\\]+\.[^\s\\]+)\)/g, '<img alt=$1 src=$2 />') .replace(/^(https?:\/\/([^\s(\\]+\.[^\s\\]+\.(svg|png|tiff|jpg|jpeg)(\?[^\s\\\/]*)?))/g, '<img src=$1 />') .replace(/\[([^\]]+)]\((https?:\/\/[^\s(\\]+\.[^\s\\]+)\)/g, '$1'.link('$2')) .replace(/([^;[\\])(https?:\/\/([^\s(\\]+\.[^\s\\]+\.(svg|png|tiff|jpg|jpeg)(\?[^\s\\\/]*)?))/g, '$1<img src=$2 />') .replace(/([^;[\\])(https?:\/\/([^\s(\\]+\.[^\s\\]+))/g, '$1' + '$3'.link('$2')) .replace(/^(https?:\/\/([^\s(\\]+\.[^\s\\]+))/g, '$2'.link('$1'));}function inlineMarkdown(input) { var output = '', span = '', current = [], tags = { '`': 'code', '``': 'samp', '*': 'em', '**': 'strong', '_': 'i', '': 's', '+++': 'ins', '---': 'del', '[c]': 'cite', '[m]': 'mark', '[u]': 'u', '[v]': 'var', '::': 'kbd', '': 'q' }, stags = { sup: { start: '^(', end: ')^' }, sub: { start: 'v(', end: ')v' }, small: { start: '[sm]', end: '[/sm]' } }; outer: for (var i = 0; i < input.length; i++) { if (['code', 'samp'].indexOf(current[current.length - 1]) == -1) { if (input[i] == '\\') span += input[++i].replace('^', '\u0001').replace('[', '\u0002'); else { for (var l = 3; l > 0; l--) { if (tags[input.substr(i, l)]) { output += spanMarkdown(span); span = ''; if (current[current.length - 1] == tags[input.substr(i, l)]) output += '</' + current.pop() + '>'; else { if (current.indexOf(tags[input.substr(i, l)]) != -1) warning('Illegal nesting of ' + input.substr(i, l) + ''); output += '<' + tags[input.substr(i, l)] + '>'; current.push(tags[input.substr(i, l)]); } i += l - 1; continue outer; } } for (var j in stags) { for (var l = 5; l > 0; l--) { if (stags[j].start == input.substr(i, l)) { output += spanMarkdown(span) + '<' + j + '>'; span = ''; current.push(stags[j].end); i += l - 1; continue outer; } else if (stags[j].end == input.substr(i, l)) { if (current[current.length - 1] == stags[j].end) { output += spanMarkdown(span) + '</' + j + '>'; span = ''; current.pop(); i += l - 1; continue outer; } else warning('Illegal close tag ' + stags[j].end + ' found'); } } } span += input[i]; } } else if (current[current.length - 1] == 'code' && input[i] == '`') { current.pop(); output += '</code>'; } else if (current[current.length - 1] == 'samp' && input.substr(i, 2) == '``') { current.pop(); output += '</samp>'; i++; } else output += html(input[i]); } output += spanMarkdown(span); if (current.length) warning('Unclosed tags. <' + current.join('>, <') + '>'); for (var i = current.length - 1; i >= 0; i--) output += '</' + current[i] + '>'; return output;}This only parses inline markdown and converts it to HTML (on both node.js and client-side). 
It doesn't conform to commonmark or any other specification. This is related to:

- Markdown to HTML, which is a blob of regexps
- Markdown to HTML, again, which has an interesting (confusing) split/map nesting (and didn't work with XHTML)

This one basically goes thru character by character, doing things based on the current state of the machine, similar to the (block) markdown function (that goes line by line) in the second question above ^.

replaceAll() is used everywhere in my app, so it's not going to change, and I don't think fiddling with String.prototype is wrong.

html() does an HTML escape. It doesn't escape everything and doesn't work for all cases, but I'm happy enough.

warning() is just a function that collects whatever complaints inlineMarkdown has. This is just a console.log for testing, but I display the warnings to the user when using it client-side.

spanMarkdown() deals with linkifying and simple inline-markdown things that can be done with regex; it's easy to add stuff like oneboxing here.

inlineMarkdown() parses teh markdownz! (and depends on the other functions). tags contains simple tags, which have equivalent start and end markdown sequences and cannot be nested within themselves, while stags contains special tags which have different start and end tags. When looking for tags, it goes thru a loop testing to see if substrings of each length match, which looks messy.

I don't know whether I should make non-parsed tags (code and samp) a dedicated expandable block so I can add any more without special-casing them.

This parser is also pretty picky, so I've also got a (client-side) function to complain when a user enters markdown that doesn't make sense:

HTMLTextAreaElement.prototype.mdValidate = function(correct) {
    var i = mdWarnings.length;
    markdown(this.value);
    var preverr = this.previousSibling && this.previousSibling.classList.contains('md-err') ?
this.previousSibling : null, err = mdWarnings[i]; this.lastErrored = err && correct; if (err && (correct || preverr || this.value.substr(0, this.selectionEnd || Infinity).match(/\s$/))) { if (preverr) { if (preverr.firstChild.nodeValue == err) { if (this.lastErrored && err && correct) { var input = this.value, output = '', span = '', current = [], tags = { '`': 'code', '``': 'samp', '*': 'em', '**': 'strong', '_': 'i', '': 's', '+++': 'ins', '---': 'del', '[c]': 'cite', '[m]': 'mark', '[u]': 'u', '[v]': 'var', '::': 'kbd', '': 'q' }, stags = { sup: { start: '^(', end: ')^' }, sub: { start: 'v(', end: ')v' }, small: { start: '[sm]', end: '[/sm]' } }; outer: for (var i = 0; i < input.length; i++) { if (['code', 'samp'].indexOf(current[current.length - 1]) == -1) { if (input[i] == '\\') span += input[++i]; else { for (var l = 4; l >= 0; l--) { if (tags[input.substr(i, l)]) { output += span; span = ''; if (['code', 'samp'].indexOf(tags[input.substr(i, l)]) == -1) output += '\\' + input.substr(i, l); else if (current[current.length - 1] == tags[input.substr(i, l)]) { current.pop(); output += '\\' + input.substr(i, l); } else { output += '\\' + input.substr(i, l); current.push(tags[input.substr(i, l)]); } i += l - 1; continue outer; } } for (var j in stags) { for (var l = 5; l >= 0; l--) { if (stags[j].start == input.substr(i, l)) { output += span + '\\' + input.substr(i, l); span = ''; i += l - 1; continue outer; } else if (stags[j].end == input.substr(i, l)) { if (current[current.length - 1] == stags[j].end) { output += span + '\\' + input.substr(i, l); span = ''; i += l - 1; continue outer; } } } } span += input[i]; } } else if (current[current.length - 1] == 'code' && input[i] == '`') { current.pop(); output += '`'; } else if (current[current.length - 1] == 'samp' && input.substr(i, 2) == '``') { current.pop(); output += '``'; i++; } else output += input[i]; } output += span; if (current[current.length - 1] == 'code' && input[i] == '`') { output += '`'; } else if (current[current.length - 1] == 'samp' && input.substr(i, 2) == '``') { output += '``'; } this.value = output; return true; } return err; } preverr.parentNode.removeChild(preverr); } var span = document.createElement('span'); span.classList.add('md-err'); span.appendChild(document.createTextNode(err)); this.parentNode.insertBefore(span, this); } else if (preverr) preverr.parentNode.removeChild(preverr); return err;};function mdValidateBody() { setTimeout(function(e) { e.mdValidate(); }, 0, document.activeElement);}Basically, when a form is submitted, <textarea>s with markdown are scanned for errors withif (mytextarea.mdValidate(true)) ohnoes.stopFormSubmission()If they exist the function will display an error message. If they submit again, it will automatically escape the markdown in an attempt to correct (the boolean argument) the error, using some of the same code from inlineMarkdown.What can I do to improve these functions?
A character-by-character inline markdown parser
javascript;html;parsing;reinventing the wheel;markdown
null
_datascience.8660
I've just read Jeff Hinton's paper on transforming autoencoders:

Hinton, Krizhevsky and Wang: Transforming Auto-encoders. In Artificial Neural Networks and Machine Learning, 2011.

and would quite like to play around with something like this. But having read it, I couldn't get enough detail from the paper on how I might actually implement it.

Does anyone know how the mapping between input pixels and capsules should work? What exactly should be happening in the recognition units? How should it be trained? Is it just standard backprop between every connection?

Even better would be a link to some source code for this or something similar.
Transforming AutoEncoders
neural network;autoencoder
null
_unix.349176
I'm working with some CSV files generated by YouTube (so I cannot change the source structure). In the CSV file, some records span multiple lines. A hypothetical example with many other columns omitted for brevity is as follows:video_id, upload_time, title, policyoHg5SJYRHA0, 2007/05/15, RickRoll'D, Monetize in all countries except: CU, IR, KP, SD, SYTrack in countries: CU, IR, KPBlock in countries: SD, SYdQw4w9WgXcQ, 2009/10/24, Rick Astley - Never Gonna Give You Up, Monetize in all countries except: CU, IR, KP, SD, SYTrack in countries: CU, IR, KP, SD, SYA typical file contains hundreds of thousands of records if not millions of records (one file is 29.57GB in size), which is too big to process in one go, so I would like to split them up into smaller chunks for processing on separate machines. I've previously used split with -l on other report files and that works great when there is no newline in cells. In this case, if the split happens on a bad line (e.g.: line 4 of the example), then I have broken records in two files. Short of parsing the CSV file and then rebuilding it into multiple files, is there an effective way to split CSVs like this?
Splitting CSVs with multiline cells
csv;newlines;split;text formatting
You're going to want to parse the CSV file to re-emit it in smaller chunks the way you want it. During this operation, maybe you even want to re-emit it in a different, more rigorous, well-defined format (like, oh, I don't know, JSON).

Your input file is in quite an unusual format. Python's csv module, for one, can't parse it, because it's got a multi-character delimiter: ", " (comma followed by a space) instead of the more common ",". Otherwise you'd be able to trivially parse and re-emit the file with 5 lines of Python.

You'll have to find another parser that works, or write a small one. First, try to find out what the specifics of the format you've got on your hands are, like what the quoting rules are (e.g. what happens when a quoted field contains the quote character).
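If it comes to writing one, a minimal sketch could lean on the observation that every record in these reports starts with a YouTube video id. The assumption below, that an id is 11 characters of [A-Za-z0-9_-] followed by ", ", is something you would want to verify against your actual files first:

import re
import sys

# Assumed: a new record starts with an 11-character video id followed by ", ".
RECORD_START = re.compile(r'^[A-Za-z0-9_-]{11}, ')
CHUNK_SIZE = 100000  # records per output file

def split_csv(path):
    count, part, out = 0, 0, None
    with open(path) as f:
        header = f.readline()
        for line in f:
            if RECORD_START.match(line):
                count += 1
                if count % CHUNK_SIZE == 1:  # time to start a new chunk
                    if out:
                        out.close()
                    part += 1
                    out = open('%s.part%03d' % (path, part), 'w')
                    out.write(header)
            if out:  # continuation lines stay with their record's chunk
                out.write(line)
    if out:
        out.close()

if __name__ == '__main__':
    split_csv(sys.argv[1])

Because the file is only ever read line by line, this keeps memory use constant even on the 29.57GB file.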
_unix.374776
I'm working through Unix For Poets, and trying to make a file containing all words/tokens in the Bible. However, when using tr as suggested, the output includes empty lines. See example below:

> tr -sc 'A-Za-z' '[\12*]' < bible.txt > bible.words
> sed 5q bible.words

The
Project
Gutenberg
EBook

(the first of the five lines printed is empty)

I have read through the man page for tr, without any luck. Any help with understanding why they're included would be much appreciated.

EDIT:

First example:

Line from bible.txt:

1:1 Paul, a servant of Jesus Christ, called to be an apostle,

Command which reproduces the unexpected result:

> echo '1:1 Paul, a servant of Jesus Christ, called to be an apostle,' | tr -sc 'A-Za-z' '[\12*]'

Paul
a
servant
of
Jesus
Christ
called
to
be
an
apostle

(again, the output begins with an empty line)

Expected output: the same, but without the leading empty line.

Second example:

Line from bible.txt:

The Project Gutenberg EBook of The King James Bible

Command with the same unexpected result:

> echo 'The Project Gutenberg EBook of The King James Bible ' | tr -sc 'A-Za-z' '[\12*]'

The
Project
Gutenberg
EBook
of
The
King
James
Bible

Expected output: the same, but without the leading empty line.

Note: it's the prefix empty line I don't understand.
Why does tr -sc 'A-Za-z' '[\12*]' includes empty line?
tr
You need to understand the tr options at work here to know what's going on.

-c => complement the first character set. That means any chars not found in the first set are the ones selected. In your case, 'A-Za-z' implies that any non-alphabetic (a space, a digit, a newline, a control char) will be chosen.

-s => multiple consecutive chosen chars are squeezed into one.

The second set is the chars that are to be mapped into. \12 is the octal ASCII code for a newline.

That means all alphabetic characters (both upper and lower case) are left untouched, whilst every run of non-alphabetics is turned into a single newline. Take this input line:

$#%! This is StarWars R2 D2 robot @work.

The leading run "$#%! " is itself a run of non-alphabetics, so it too becomes a newline, and that is exactly where your prefix empty line comes from. The output is:

(empty line)
This
is
StarWars
R
D
robot
work

All the alphabetics are untouched, while each run of one or more non-alphabetics is turned into a newline.
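If you just want the words file without that leading blank, one option (assuming a POSIX sed is available) is to filter empty lines out afterwards:

tr -sc 'A-Za-z' '[\12*]' < bible.txt | sed '/^$/d' > bible.words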
_cs.59665
In his 1973 paper On the notion of a random sequence, Levin states (without proving) a characterization of Martin-Löf randomness by writing:

Theorem 3. A sequence $\alpha$ is random w.r.t. the distribution $P$ in the Martin-Löf sense if and only if the probability ratio $P(\alpha_n)/R(\alpha_n)$ is bounded below

where $R$ is the universal semicomputable (semi)measure. I feel a bit confused about how this ratio could fail to be bounded below: being the ratio of two positive quantities, it should always be bounded below by $0$.
Martin-Löf randomness characterization
randomness;kolmogorov complexity
The ratio should be bounded below by a positive constant. Equivalently, its infimum should be positive.
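In symbols: $\alpha$ is Martin-Löf random w.r.t. $P$ if and only if there is a constant $c > 0$ with $P(\alpha_n)/R(\alpha_n) \ge c$ for all $n$. Each individual ratio is indeed positive, so the bound $0$ is trivial; the content of the theorem is that the infimum of the ratios over all $n$ is not $0$.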
_cs.72166
Is there a term for programming languages that read like written sentences? I'm thinking of languages like Python, where you can almost read the code aloud as a sentence, as opposed to C++ which is really arcane.For example, in Python if 'pizza' not in animals is very clear when read aloud.Seems like maybe there's a formal term for this?
Term for programming languages that read like sentences?
terminology;programming languages
I know of no standard term for that aspect of PL. Maybe you can use "human-readable syntax" or "human-friendly syntax". In PL theory, we (unapologetically) tend to disregard syntactic issues (e.g., look at LISP), and focus more on language features / semantics / types and more math-y stuff. I mean: if you asked me what the main differences between C++ and Python are, I would probably spend a long time before mentioning some syntactic difference.

For practical applications, of course, having a more human-centric design than the one offered by pure theory is important. A clean and easy syntax is certainly quite convenient to read and write.

Note that, if a PL pushes this principle to extremes, and employs a syntax which is very close to natural language, it could possibly harm productivity. This is because natural languages can be quite ambiguous, and when programming you really need to be rigorous. Trying to oversimplify a PL by completely removing the math-y aspects of PL is probably not a good idea.

COBOL is arguably more human-readable than Python, but it's hardly better: x = y+z is simpler than ADD Y TO Z GIVING X.
_cs.6385
Introduction: I recently learned that a multi-tape Turing machine $\text{TM}_k$ is no more powerful than a single-tape Turing machine $\text{TM}$. The proof that $\text{TM}_k \equiv \text{TM}$ is based on the idea that a $\text{TM}$ can simulate a $\text{TM}_k$ by using a unique character to separate the respective areas of each of the $k$ tapes.

Given this idea, how would we prove that a process taking $t(n)$ time on a $\text{TM}_k$ can be simulated by a 2-tape Turing machine $\text{TM}_2$ in $O(t(n) \log t(n))$ time?
Multitape Turing machines against single tape Turing machines
time complexity;turing machines;simulation;tape complexity
Look at the original paper by Hennie and Stearns:F.C. Hennie and R.E. Stearns, Two-Tape Simulation of Multitape Turing Machines, Journal of the ACM (JACM), Volume 13 Issue 4, Oct. 1966, pp. 533-546The construction is a little bit elaborate, but not extremely difficult to understand. The basic idea is to store and keep the current symbol of each tape in the same fixed column (home column) and simulate a shift of the $k$ tapes using a series of partially filled buffers of increasing length placed on the two tapes on the left and right side of the home column in order to avoid a full shift (that would require $O(t(n)^2)$ steps). If you need further help, modify the question and ask about the points in the paper that are not clear.
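As a rough guide to where the $O(t(n) \log t(n))$ bound comes from in that construction: the buffers at distance about $2^i$ from the home column only need to be touched once every roughly $2^i$ simulated steps, at a cost proportional to their size, so each of the $O(\log t(n))$ levels contributes $O(1)$ amortized work per simulated step.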
_softwareengineering.224798
I am a big fan of agile development and used XP on a very successful project a few years ago. I loved everything about it: the iterative development approach, writing code around a test, pair programming, having a customer on site to run things by. It was a highly productive work environment and I never felt like I was under pressure.

However, the last few places I have worked use/used Scrum. I know it's the poster child for agile development these days, but I'm not 100% convinced it is agile. Below are the two main reasons why it just doesn't feel agile to me.

Project Managers Love It

Project managers, who by their very nature are obsessed with timelines, all seem to love Scrum. In my experience they seem to use the Sprint Backlog as a means to track time requirements and keep a record of how much time was spent on a given task. Instead of using a whiteboard they all use an Excel sheet, which each developer is required to fill out, religiously.

In my opinion this is way too much documentation/time tracking for an agile process. Why would I waste time estimating how long a task is going to take me when I can just get on with the task itself? Similarly, why would I waste time documenting how long a task took when I can move on to the next task at hand?

Standup Meetings

The standup meetings in the previous place I worked were a nightmare. Every day we had to explain what we had done yesterday and what we were going to do that day. If we went over on our time estimate for a task, the project manager would kick up a stink and reference the Sprint Backlog as a means of showing how incompetent you are for not adhering to the timeline.

Now I understand the need for communication, but surely the tone of daily meetings should be lighthearted and focus on knowledge sharing. I don't think it should turn into a "where's your homework?" style charade. Also, surely the whole point of agile is that timelines change; they shouldn't be set in stone.

Conclusion

The idea of agile is to make the software better by making the developers' lives easier. Therefore, in my opinion, any agile process used by a team should be developer-led. I don't think having a project manager use a process they have labeled agile to track a project has anything to do with agile development.

Thoughts, anyone?
Does anyone else feel Scrum isn't agile?
agile;scrum
Yes. Even one of the fathers of agile doesn't agree that Scrum is really agile: youtube.com/watch?v=hG4LH6P8Syk (a link posted in a comment by Euphoric)

I think this link from one of the comments above really says it all. It's worth a watch: Uncle Bob gives a brief history of Scrum and basically says Scrum is not an Agile development process, because Scrum has evolved over time to become a management process. The reasons behind this appear to be that it was project managers, and not developers, who were taking the Scrum courses.
_codereview.169418
I was trying to optimize the Radix Sort code, because I never found a code that was simple and easy to understand yet wasn't any slower. I have seen codes on web and in some books that implement arbitrary radices such as 10 and others and also do modulo operation rather than bit-shifting. Those codes however have always been slower that their comparison based counterparts in the same language.Since Radix Sort runs in \$O(n)\$ time, I built up my version of Radix Sort which is coded below in C. I choose C language because of speed, however please correct me if I'm going wrong. The code also works for negative numbers too.I have optimized the code as far as I could go, and maybe I might have missed some more optimization techniques.Any ideas with which I can increase the execution speed ?Motivation for optimization:http://codercorner.com/RadixSortRevisited.htmhttp://stereopsis.com/radix.htmlI was unable to implement all the optimizations in the articles, as it was beyond my skills and understanding mostly and somewhat lack of sufficient time. Other techniques not included in them or out of the box would definitely help a lot.This is the pointer optimized version, long on my system is 32 bits.long* Radix_Sort(long *A, size_t N, long *Temp){ long Z1[256] ; long Z2[256] ; long Z3[256] ; long Z4[256] ; long T = 0 ; while(T != 256) { *(Z1+T) = 0 ; *(Z2+T) = 0 ; *(Z3+T) = 0 ; *(Z4+T) = 0 ; ++T; } size_t Jump, Jump2, Jump3, Jump4; // Sort-circuit set-up Jump = *A & 255 ; Z1[Jump] = 1; Jump2 = (*A >> 8) & 255 ; Z2[Jump2] = 1; Jump3 = (*A >> 16) & 255 ; Z3[Jump3] = 1; Jump4 = (*A >> 24) & 255 ; Z4[Jump4] = 1; // Histograms creation long *swp = A + N; long *i = A + 1; for( ; i != swp ; ++i) { ++Z1[*i & 255]; ++Z2[(*i >> 8) & 255]; ++Z3[(*i >> 16) & 255]; ++Z4[(*i >> 24) & 255]; } // 1st LSB byte sort if( Z1[Jump] == N ); else { swp = Z1+256 ; for( i = Z1+1 ; i != swp ; ++i ) { *i = *(i-1) + *i; } swp = A-1; for( i = A+N-1 ; i != swp ; --i ) { *(--Z1[*i & 255] + Temp) = *i; } swp = A; A = Temp; Temp = swp; } // 2nd LSB byte sort if( Z2[Jump2] == N ); else { swp = Z2+256 ; for( i = Z2+1 ; i != swp ; ++i ) { *i = *(i-1) + *i; } swp = A-1; for( i = A+N-1 ; i != swp ; --i ) { *(--Z2[(*i >> 8) & 255] + Temp) = *i; } swp = A; A = Temp; Temp = swp; } // 3rd LSB byte sort if( Z3[Jump3] == N ); else { swp = Z3 + 256 ; for( i = Z3+1 ; i != swp ; ++i ) { *i = *(i-1) + *i; } swp = A-1; for( i = A+N-1 ; i != swp ; --i ) { *(--Z3[(*i >> 16) & 255] + Temp) = *i; } swp = A; A = Temp; Temp = swp; } // 4th LSB byte sort and negative numbers sort if( Z4[Jump4] == N ); else { swp = Z4 + 256 ; for( i = Z4+129 ; i != swp ; ++i ) { *i = *(i-1) + *i; } *Z4 = *Z4 + *(Z4+255) ; swp = Z4 + 128 ; for( i = Z4+1 ; i != swp ; ++i ) { *i = *(i-1) + *i; } swp = A - 1; for( i = A+N-1 ; i != swp ; --i ) { *(--Z4[(*i >> 24) & 255] + Temp) = *i; } return Temp; } return A;}
Radix Sort speed improvement
performance;c;radix sort
null
_unix.339451
So I've edited gnome-shell.css to change the top bar text color to blue and the background color to white, but it doesn't do it: https://postimg.org/image/8lioi30yf/ (screenshot of gnome-shell.css)

I restarted the shell and restarted the PC. What am I doing wrong?

I am using Fedora 25.
Editing gnome-shell.css doesn't change the appearance
shell;fedora;gnome
null
_cs.70787
j = 1;
while (j <= n/2) {
    i = 1;
    while (i <= j) {
        cout << j << i;
        i++;
    }
    j++;
}
What is the time complexity (T(n)) and the order?
algorithms;algorithm analysis;time complexity
null
_unix.211640
For some reasons I have to use old distro Fedora12, and yum in its default configuration is unable to locate URLs for packages.% yum search gccLoaded plugins: refresh-packagekitError: Cannot retrieve repository metadata (repomd.xml) for repository: fedora/Please verify its path and try againYUM repos configuration at /etc/yum.repos.d/fedora.repo has the following:#baseurl=http://download.fedoraproject.org/pub/fedora/linux/releases/$releasever/Everything/$basearch/os/mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=fedora-$releasever&arch=$basearchThis means that the above mentions site links are no longer valid, don't exist. Are there some mirrors still keeping packages for old distros? In this situation, what URL should I provide to make it work?
Fedora12, yum can't find repositories
linux;fedora;package management;yum
I'm on fedora 20 with the same /etc/yum.repos.d/fedora.repo as you and yum canfind fedora 12 version files. Eg:$ sudo yum --releasever=12 --installroot=/tmp/ list available '*gcc*'(1/2): updates/12/x86_64/primary_db | 6.3 MB 00:54 (2/2): fedora/12/x86_64/primary_db | 12 MB 01:49 Determining fastest mirrors * fedora: ftp-stud.hs-esslingen.de * updates: ftp-stud.hs-esslingen.deAvailable Packagesgcc.x86_64 4.4.4-10.fc12 updatesWhat googling seems to suggest is that your certificates are not uptodate.You should try a yum clean all, temporarily replace https with httpin the .repo file, and do yum reinstall ca-certificates.
_softwareengineering.177223
I'm new to Java and Eclipse. One of my most recent discoveries was how Eclipse comes shipped with its own java compiler (ejc) for doing incremental builds. Eclipse seems to by default output incrementally built class files to the projRoot/bin folder.I've noticed too that many projects come with ant files to build the project that uses the java compiler built into the system for doing the production builds.Coming from a Windows/Visual Studio world where Visual Studio is invoking the compiler for both production and debugging, I'm used to the IDE having a more intimate relationship with the command-line compiler. I'm used to the project being the make file. So my mental model is a little off. Is whats produced by Eclipse ever used in production? Or is it typically only used to support Eclipse's features (ie its intellisense/incremental building/etc)? Is it typical that for the final release build of a project, that ant, maven, or another tool is used to do the full build from the command line?Mostly I'm looking for the general convention in the Eclipse/Java community. I realize that there may be some outliers out there who DO use ecj in production, but is this generally frowned upon? Or is this normal/accepted practice?
Is the output of Eclipse's incremental java compiler used in production? Or is it simply to support Eclipse's features?
java;compiler;eclipse
It would be normal to have a separate build process (e.g. with something like Maven) that does not use the Eclipse compiler which is responsible for producing the final deployable artifacts. For example, in all of my Eclipse Java projects I use:The built-in Eclipse compiler for quick testing (JUnit), debugging and local executionMaven to do the real builds (full test suite, building deployment artifacts etc.). Here Maven is using the version of the Java compiler that comes with my local JDK.TravisCI (via GitHub) to do continous integration testing (which also uses Maven, but on remote machines)You could in theory use the compiled Eclipse class files in production if you want - nothing to stop you packaging these up yourself and deploying them. But this would be a strange thing to do, since it would take a bit of effort and lose you the benefits of having a proper build setup.P.S. if you want to get good that this stuff then I strongly suggest you invest in learning Maven. It's a steep learning curve but really worth it in the long run.
_unix.149548
A read() & write() loop would probably be as good as what I'm looking for, but nevertheless is anything like that around or is it impossible because of an obstacle I didn't envisage ? I'm curious
Is there a system call to bind a file descriptor directly into another?
pipe;file copy;socket;system calls;file descriptors
null
_cogsci.894
The introduction of the new edition of the Diagnostic and Statistical Manual for Mental Disorders (DSM-5) is on the horizon. With it are coming some new, evidence-based diagnoses for dissociative disorders [1], which include conditions such as dissociative fugue (which will now be classified as dissociative amnesia) and dissociative identity disorder.Fugue states have always been fascinating to me, Wikipedia states that they area rare psychiatric disorder characterized by reversible amnesia for personal identity, including the memories, personality and other identifying characteristics of individuality...Fugues are usually precipitated by a stressful episode, and upon recovery there may be amnesia for the original stressor.Post-traumatic stress disorder is also precipitated by a stressful (and potentially life threatening) episode, and causes anxiety and autonomic hyperexcitablity.I've never seen anything written about the level of stressor that can induce a fugue state, but based on their common etiology, I would assume that PTSD and fugue states are related psychologically and neurologically.To what extent is this true, are these two seemingly different end products of the same initial event? Is there an anatomical substrate that is common to both? Does changing the status of dissociative fugue -> dissociative amnesia mean they should be considered in isolation?[1] Spiegel, D., Loewenstein, R. J., et al. (2011), Dissociative disorders in DSM-5. Depress. Anxiety, 28: E17E45. doi: 10.1002/da.20923
What is the relationship between post-traumatic stress disorder and fugue states?
abnormal psychology;psychiatry;ptsd
Dissociative Disorders are really fascinating to me as well. Fugue states/episodes as well as dissociative identity disorder (multiple personality disorder) in particular.PTSD must be differentiated from disorders that can exhibit phenomenological similarities, such as borderline personality disorder and dissociative disorders (including dissociative amnesia). I include borderline personality disorder because it is [particularly] difficult to distinguish from PTSD, and the two can coexist or even be causally related!A stressor or traumatic event is the causative factor in the development of PTSD, by definition. As people respond to events as being traumatic differently, the stressor alone is not sufficient enough to cause the disorder. In this case, the presence of intense fear or horror is necessary. The clinical features of PTSD include avoidance and emotional numbing, among other things, and patients may present dissociative states, which is the focus of this topic.Patients with dissociative disorders do not usually have the degree of avoidance behavior, autonomic hyperarousal, or history of trauma that patients with PTSD report.An essential feature of dissociative amnesia is an inability to recall important personal information, usually of a traumatic or stressful nature. As in this (as well as borderline personality disorder and PTSD) disorder, many patients have histories of prior abuse or trauma.Symptoms found only in dissociative amnesia include features of recurrent blackouts, fugue states/episodes, fluctuations in skills, habits, and knowledge. Patients presenting PTSD and/or with borderline personality disorder usually don't present these symptoms. The common factor/symptom is the possible presence of dissociative episodes/states, though they may be precipitated by differing presentation of stressors.Patients with borderline personality disorder can present transient or short-lived dissociative (or even psychotic) episodes that are almost circumscribed, fleeting, or doubtful.Sorry, now I worry I may not have properly answered your question(s). Liken to how different stressors or traumatic events can be experienced in varying degrees in different people, different people can cope or respond differently to them as well. PTSD (like hypervigilance), borderline personality disorder (intense black-and-white thinking, emotional lability), and dissociative disorders (amnesia, blackouts, etc.).ReferencesDSM-V Post-Traumatic Stress DisorderDSM-V Dissociative AmnesiaKaplan and Sadock's Synopsis of Psychiatry
_webmaster.30318
One of my clients web pages has a error in the theme which calls a redundant .css file that does not exist. The page however, looks fine. Thus the client is not willing to have the error fixed. What reasons can be given for fixing the error when it does not cause any issues with the page display.
Is a 404 on a non displaying files a problem
404;error
Http calls have the biggest single impact on front end performance, reducing them should be a priority for every webmaster (even just that one little one that's 404'ing).From Yahoo's performance guidelines:-80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.If you can't fix it, at least put a blank CSS file in the right place so you don't waste time with 404's for it.
_unix.282460
I successfully installed pysnmp with yoaurt S pysnmp, but when I try to execture a scipt with Python 2.7 which has import pysnmp, I get $ python2.7 test_script.txt.py Traceback (most recent call last): File test_script.txt.py, line 85, in <module> import pysnmpImportError: No module named pysnmpany idea what is going wrong?This might be too much, distracting oinformation, but my Arch Linux is in a VM and the company has severe restrictions on internet access from the VM. Both pacman and pip errored, but yaourt was successfulpackages (1) pysnmp-4.3.1-1Total Installed Size: 2.50 MiBNet Upgrade Size: 0.00 MiB:: Proceed with installation? [Y/n] (1/1) checking keys in keyring [##########################] 100%(1/1) checking package integrity [##########################] 100%(1/1) loading package files [##########################] 100%(1/1) checking for file conflicts [##########################] 100%(1/1) checking available disk space [##########################] 100%(1/1) reinstalling pysnmp [##########################] 100%
Python can't import pysmp on Arch Linux
arch linux;software installation
null
_softwareengineering.227915
I've created a ZF2 view helper PageTitle (extending Zend\View\Helper\AbstractHelper). As the name of the helper suggests , it is responsible for rendering the page title for each action view script.So within each action view script (not layout) I would call : echo $this->pageTitle('My Page Title', 'some-icon-class-name');This would render the required HTML output <div class=page-title> <h1><span class=some-icon-class-name></span> My Page Title</h1> </div>The plugin also contains another view helper (button) within it; its job is to render zero or more form button elements.My trouble comes when I need to pass these page specific buttons/links. My current solution is to provide an array of 'buttons' as a third parameter.$this->pageTitle('My Page Title', 'some-icon-class-name', array( 'button label 1' => array( 'attributes' => array( 'class' => array('some-class-name', 'another-class'), 'data-foo' => 'bar', ), )));(Above is just an example of what I have tried to demonstrate the issue. This is in fact numerous buttons, each with several 'attributes' each) I think the above approach is messy, certainly within the view. It also defeats the purpose of even having a view helper as I will need to provided this config again each time I need to reuse the page header (some pages may use other views as children).Some solutions I have considered.Create a page title service factory, per pageEach factory will create a new PageTitle plugin and inject the 'button' config into it. This is then registered ti the view plugin manager under a unique plugin name. So, as an example, I would change the call in my view to:echo $this->mySpecificPageTitle(); // no args as pre-constructedThe downside here is that I then need to create allot of factories (just so I can provide very slightly varied arguments).Provided a 'service' name to the plugin (this is similar to the navigation view helper) then then helper calls this service and it returns the required configuration.For example:echo $this->pageTitle('My Page Title', 'some-icon-class-name', 'MyButtonService');So within the helper:$buttonConfig = $serviceManager->get('MyButtonService');This however again means that I would need to create a factory for each 'MyButtonService' I need.
Zend 'Page Tile' view helper
php;zend framework
I would migrate the plugin code out of the __invoke method and instead return an instance of your view helper object. From there provide methods for adding buttons which store the data temporarily in an internal data store (array). From there you then just need a render method which uses all the stored data to build the output. This way you don't have an ever increase list of parameters and can provide easier to call methods which have a specific purpose.Sample Usage:echo $this->pageTitle()->setTitle('My Title')->setIcon('some-icon')->addButton('Test Btn')->addButton('Another Button', array('class' => 'some-class'))->render();All that is required for this is an __invoke that simple does return $this and the methods above saving their inputs to internal class parameters. Your existing code can be dropped in the render method and slightly re-factored and you should be set.
_unix.39405
I'm trying to build a LFS using version 7.1. I've followed all of the steps up to 5.3 and now I'm stuck because I can't change to $LFS/sources - I get the message:bash: cd: /mnt/lfs/sources: Permission deniedI'm logged in, in a new terminal, as lfs. The directory permissions (as seen from /mnt/lfs by root) are:drwx------ 6 leo leo 4096 May 26 18:02 .drwxr-xr-x 3 root root 4096 May 21 20:43 ..drwx------ 2 root root 16384 May 21 20:24 lost+founddrwxr-xr-x 2 leo leo 4096 May 26 18:00 patchesdrwxrwxrwt 2 lfs root 4096 May 26 17:53 sourcesdrwxr-xr-x 2 lfs root 4096 May 26 18:02 toolsThe mount spec for the partition is:/dev/sdb3 on /mnt/lfs type ext3 (rw)I'm far from new to UNIX and LINUX and this is really annoying me. I know it's something blindingly obvious but I just can't see it.I have restarted the machine, sourced the lfs profile (source ~/.bash_profile) but just can't seem to find the one thing I'm missing. The host system is Debian if that helps.
LFS can't cd to lfs/source - permission denied
linux;permissions;lfs
null
_webmaster.102693
If you look at the console while clicking around this demo website, you'll se the loading times. It's damn fast (I get about 20ms per page from Europe).What is it making it so fast? Is it just websocket? The site's author claims it's a proprietary technology, but it sounds a bit like smoke and mirrors...Thanks for any hints!
What technology is making this website so fast?
performance;page speed
null
_cs.47129
Whats the intuition behind multiplying the factor $\log n$Master Method Case 2 (CLRS Section 4.5)If $f(n) = \theta(n^{\log_b a})$, then $T(n)= \theta(n^{\log_b a} \log n)$In generalized form sometime it can be written asIf $f(n) = \theta(n^{\log_b a} log^k n)$ with $k = 0$, then $T(n) = \theta(n^{\log_b a} \log^{k+1} n)$
Understanding Master Method's Case 2
master theorem
Suppose that $T(n) = aT(n/b) + n^{\log_b a}$ and $T(1) = 1$. Now consider some $n$ which is a power of $b$, say $n = b^k$. The formula gives$$\begin{align*}T(n) &= aT(n/b) + n^{\log_b a} \\ &=a^2T(n/b^2) + a(n/b)^{\log_b a} + n^{\log_b a} \\ &=a^3T(n/b^3) + a^2(n/b^2)^{\log_b a} + a(n/b)^{\log_b a} + n^{\log_b a} \\ &=\cdots \\ &=a^kT(n/b^k) + a^{k-1}(n/b^{k-1})^{\log_b a} + \cdots + n^{\log_b a} \\ &=a^k + a^{k-1}(n/b^{k-1})^{\log_b a} + \cdots + n^{\log_b a} \\ &=a^k(n/b^k)^{\log_b a} + a^{k-1}(n/b^{k-1})^{\log_b a} + \cdots + n^{\log_b a}.\end{align*}$$There are $k+1 = \log_b n + 1$ terms in the final formula. Miraculously, all of them are equal to $n^{\log_b a}$! (Work that out on your own.)
_unix.321796
As the title implies I am having a bit of a hardware configuration issue.I am using a MadCatz R.A.T. 7 gaming mouse, have been using this same mouse since it was first made by Saitek. Basically the issue is that soon after signing in, the mouse buttons simply cease to function. There is a fix listed for linux, where you need to make a config file for the mouse in X, which I have used successfully in ubuntu 16.04, but which has not worked in Debian 8.Anyone have any suggestions?
Not able to get MadCatz R.A.T. 7 to work in Debian 8
debian
null
_codereview.48473
I need comments on the below code:Thread.new {EM.run do IpamAgent::Amqp.run end}module IpamAgent class Amqp class << self def run begin $connection = AMQP.connect(RMQ_CONFIGURATIONS) $connection.on_tcp_connection_loss do |conn, settings| Rails.logger.info <<<<<<<<<<<<<<<< [network failure] Trying to reconnect...>>>>>>>>>>>>>>>>>>>>>>>> conn.reconnect(false, 2) end Rails.logger.info <<<<<<<<<<<<<<<<<<<<<<<<AMQP listening>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> worker = IpamAgent::MessageHandler.new worker.start Rails.logger.info <<<<<<<<<<<<<<<<<<<<<<<<Message handler started>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> rescue Exception => e Rails.logger.info <<<<<<<<<<<<<<<<<<<<<<<<Message handler Exception>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Rails.logger.info [error] Could not handle event of type #{e.inspect} Rails.logger.info <<<<<<<<<<<<<<<<<<<<<<<<Message handler Exception>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> end end end endend module IpamAgent class MessageHandler attr_accessor :ns_exchange, :sc_exchange, :ns_queue, :sc_queue, :service_location, :service_version, :service_name, :message_sequence_id, :presto_exchange, :presto_queue def initialize # Profile the code @ns_exchange = CONFIGURATIONS[ns_exchange] @sc_exchange = CONFIGURATIONS[sc_exchange] @presto_exchange = CONFIGURATIONS[presto_exchange] @ns_queue = CONFIGURATIONS[ns_queue] @sc_queue = CONFIGURATIONS[sc_queue] @presto_queue = CONFIGURATIONS[presto_queue] end # Create the channels, exchanges and queues def start ch1 = AMQP::Channel.new($connection) ch2 = AMQP::Channel.new($connection) ch3 = AMQP::Channel.new($connection) @ns_x = ch1.direct(ns_exchange, :durable => true) @ns_queue = ch1.queue(ns_queue, :auto_delete => false) @ns_queue.bind(@ns_x, :routing_key => @ns_queue.name).subscribe(:ack => true, &method(:handle_ns_message)) @sc_x = ch2.topic(sc_exchange, :durable => true) @sc_queue = ch2.queue(sc_queue, :auto_delete => false) @sc_queue.bind(@sc_x, :routing_key => #).subscribe(:ack => true, &method(:handle_sc_message)) @presto_x = ch3.direct(presto_exchange, :durable => true) @presto_queue = ch3.queue(presto_queue, :auto_delete => false) @presto_queue.bind(@presto_x, :routing_key => @presto_queue.name).subscribe(:ack => true, &method(:handle_presto_message)) end # Handle the messages from Network service component def handle_ns_message(headers, payload) message_headers = JSON.parse(headers.to_json)[headers] payload = eval(payload) headers.ack Rails.logger.info >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>MESSAGE FROM NS<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< Rails.logger.info message_headers Rails.logger.info payload Rails.logger.info >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>MESSAGE FROM NS<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< tenant_detail = IpamAgent::TenantDetail.where(service_instance: payload[orgId]).first if(payload && payload.keys.include?(:responseCode)) new_tenant_detail = IpamAgent::TenantDetail.create(message: ({header => message_headers, payload => payload}), status: waiting_for_sc, service_instance: payload[orgId]) if tenant_detail && tenant_detail.service_group publish_sgid_to_presto(tenant_detail) else get_sgid_from_sc(new_tenant_detail) end else Rails.logger.info(Payload: #{payload}, routing key is #{message_headers}) end end # Retrieve the Service Group ID from Service controller def get_sgid_from_sc(tenant_detail) message = tenant_detail.get_sc_message Rails.logger.info(>>>>>>>>>>>>>>>>>>>>PUBLISHING TO SC<<<<<<<<<<<<<<<<<<<<<<<) Rails.logger.info(message) Rails.logger.info(>>>>>>>>>>>>>>>>>>>>PUBLISHING TO SC<<<<<<<<<<<<<<<<<<<<<<<) @sc_x.publish(message.last, 
:routing_key => @sc_queue.name, :headers => message.first, :mandatory => true) end # Handle the messages from Service controller def handle_sc_message(headers, payload) message_headers = JSON.parse(headers.to_json) headers.ack payload = eval(payload) Rails.logger.info(>>>>>>>>>>>>>>>>>>>>MESSAGE FROM SC<<<<<<<<<<<<<<<<<<<<<<<) Rails.logger.info(message_headers) Rails.logger.info(payload) Rails.logger.info(>>>>>>>>>>>>>>>>>>>>MESSAGE FROM SC<<<<<<<<<<<<<<<<<<<<<<<) if(payload && payload[serviceInstanceGroupId]) tenant_detail = IpamAgent::TenantDetail.find_or_save(payload) publish_sgid_to_presto(tenant_detail) end end # Shovel the NS request to PMP with Service Group ID def publish_sgid_to_presto(tenant_detail) tenant_details = TenantDetail.where(service_instance: tenant_detail.service_instance, status: waiting_for_sc) tenant_details.each do |sc| sc.update(status: success) message = sc.get_pmp_message Rails.logger.info(>>>>>>>>>>>>>>>>>>>>PUBLISHING TO PRESTO<<<<<<<<<<<<<<<<<<<<<<<) Rails.logger.info(message.first) Rails.logger.info(message.last) Rails.logger.info(>>>>>>>>>>>>>>>>>>>>PUBLISHING TO PRESTO<<<<<<<<<<<<<<<<<<<<<<<) @presto_x.publish(message.last, :routing_key => @presto_queue.name, :headers => message.first, :mandatory => true) end end # Receive the message from PMP presto and publsih it to Network Service def handle_presto_message(headers, payload) message_headers = JSON.parse(headers.to_json) payload = eval(payload) Rails.logger.info(>>>>>>>>>>>>>>>>>>>>MESSAGE FROM PRESTO<<<<<<<<<<<<<<<<<<<<<<<) Rails.logger.info(headers.to_json) Rails.logger.info(payload) Rails.logger.info(>>>>>>>>>>>>>>>>>>>>MESSAGE FROM PRESTO<<<<<<<<<<<<<<<<<<<<<<<) headers.ack @ns_x.publish(payload, :routing_key =>@ns_queue.name, :headers => headers, :mandatory => true) if (message_headers && payload) end end end
Implementation on AMQP in Ruby
ruby;ruby on rails
null
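No accepted answer was recorded for this one, but one point is worth flagging for readers: eval(payload) executes whatever text arrives on the queue, which is dangerous if the broker is ever reachable by untrusted producers. If the payload is actually JSON — an assumption the surrounding code does not confirm — parsing it is the safer route. A minimal sketch:

require 'json'

# instead of: payload = eval(payload)
payload = JSON.parse(payload)   # raises JSON::ParserError on malformed input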
_cs.28901
In Computational Complexity: A Modern Approach, one claim says that if $f$ is computable in time $T(n)$ by a bidirectional TM $M$ (one whose tapes are infinite in both directions), then it is computable in time $4T(n)$ by a unidirectional TM $\tilde{M}$. How is the constant $4$ worked out? As far as I can tell, apart from the special handling when the head crosses the point where the tape is folded, one transition of $M$ corresponds to one transition of $\tilde{M}$, so where does the constant $4$ come from?
How to simulate a bidirectional TM on a regular one with time factor four?
turing machines;simulation;computation models
null
_cstheory.2890
I am looking for references for the following problem, which I feel must have been studied before. I have n items and I want to rank them. I randomise once at the beginning of the process, and then for each pair of items I have an x% chance of getting the right ordering, let us say independently. I then use these comparison results to rank the items. I would like to know how good/bad the ranking can be given unbounded computation, and also any methods for finding a good ranking in reasonable time. Let us also say that there is a true total ordering under the hood. I am aware of some of the literature on binary sorting with errors, but the papers I found, at least, seem to answer a different set of questions.
Ranking with errors
reference request;randomness;sorting
If I understand your question correctly, it is answered in Braverman and Mossel's "Sorting with Noisy Information", http://arxiv.org/PS_cache/arxiv/pdf/0910/0910.1191v1.pdf (see also the conference version, titled "Noisy Sorting without Resampling", IIRC).
_webmaster.84149
My domain is hosted on GoDaddy (that's where I purchased it from). I have a Windows Exchange server set up as CNAME data. My website is designed and hosted on another server (not GoDaddy). How do I tell my domain to go to this server just for website traffic? For example, when I changed the DNS settings on my domain to this web hosting company's DNS settings, it broke the link to my Windows Exchange server. How do I tell the domain to go to this web hosting server ONLY for website requests?
GoDaddy how to direct web hosting to another server without affecting Windows Exchange Server
web hosting;dns;godaddy
null
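No accepted answer was recorded. For readers, the usual approach is to change only the web-facing A records (for @ and www) at the current DNS provider rather than switching nameservers, so the existing Exchange CNAME and MX records stay untouched. A sketch of what the zone might look like — every name and IP here is a placeholder, not taken from the question:

@    IN A     203.0.113.10              ; web host's IP: website traffic only
www  IN A     203.0.113.10
mail IN CNAME exchange.example.net.     ; existing Exchange CNAME, left unchanged
@    IN MX 10 mail.example.com.         ; mail keeps routing to Exchange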
_unix.92505
My yum command used to work fine, but now, when I try to use it, it gives me an error:

file:///home/user/repo/repodata/repomd.xml: [Errno 14] Could not open/read file:///home/user/repo/repodata/repomd.xml

I don't know what to do; please help me.
error using yum in centos 6
centos;yum
You have one or more files in /etc/yum.repos.d/ that point to file:///home/user/repo as a basepath. Remove or correct those files and you should be okay.
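To make that concrete, here is one way to locate and neutralise the stale definition — a sketch, where local.repo is a hypothetical file name:

# find which repo file references the stale local path
grep -rl 'file:///home/user/repo' /etc/yum.repos.d/

# disable that repo (or simply delete the file)
sudo sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/local.repo

# then clear yum's cached metadata
yum clean all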
_cs.51962
I am reviewing some old papers for a final tomorrow, and there is a question that I'm not sure about. If a language A is Turing-recognizable and undecidable, what can be said of the Turing machine that recognizes the complement of A? To my understanding, this Turing machine's accept states are all those that were reject states in the first Turing machine. Also, seeing as the language is undecidable, this Turing machine will not halt for some strings that are not in the language. Can someone please tell me if my understanding is correct or not, and if not, give me some clarification?
Undecidable language and Turing Machines
computability;turing machines;undecidability;semi decidability
null
_codereview.61141
I'm just getting into Clojure, and I wanted to make sure I am writing code the Clojure way. The challenge I took on is Zeckendorf numbers (fairly trivial).

(defn fibs-until
  ([n]
   (vec
    (if (< n 3)
      (range 1 (inc n))
      (concat [1 2] (fibs-until n 1 2)))))
  ([n a b]
   (let [fib (+ a b)]
     (if (> fib n)
       []
       (cons fib (fibs-until n b fib))))))

(defn zeckendorf
  ([n]
   (if (= 0 n)
     "0"
     (zeckendorf n (reverse (fibs-until n)))))
  ([n fibs]
   (if (= fibs [])
     ""
     (let [head (first fibs)
           tail (rest fibs)]
       (if (or (> head n) (= 0 n))
         (str "0" (zeckendorf n tail))
         (str "1" (zeckendorf (- n head) tail)))))))

A few review questions I have:

1. Is there a more Clojure way to do this?
2. I noticed that I am repeating the same (overloading-esque) pattern of defining functions that have two arities, and calling out to the second in the first. Is there a better way to do this?
3. I wanted to use [[head & tail] fibs] rather than [head (first fibs) tail (rest fibs)] but I get a stack overflow. Why is this?
Zeckendorf numbers the Clojure way
beginner;algorithm;clojure;fibonacci sequence
Starting with your last question first, I'm surprised that you don't always get a stack overflow. Clojure does not support tail-call elimination, meaning that purely recursive functions (like both your fibs-until and zeckendorf) are likely to blow up the stack.

There are various reasons why this isn't an issue in practice, first and foremost of which are Clojure's lazy sequences. Elements of a lazy sequence aren't generated until they're used. Among other things, this means that Clojure can easily handle infinite sequences. So a more idiomatic way to implement Fibonacci would be to return an infinite lazy sequence from which you can simply take however many elements you want. There are lots of ways to implement this, one of which is to simply wrap a recursive implementation with lazy-seq:

user=> (defn fib [a b]
  #_=>   (lazy-seq
  #_=>    (cons a (fib b (+ a b)))))
#'user/fib
user=> (take 10 (fib 1 1))
(1 1 2 3 5 8 13 21 34 55)
user=> (take 20 (fib 1 1))
(1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765)

Or, closer to your fibs-until function:

user=> (take-while #(< % 100) (fib 1 1))
(1 1 2 3 5 8 13 21 34 55 89)

There are lots of other ways to implement Fibonacci lazily in Clojure.

I'll leave a lazy version of zeckendorf as an exercise ;-)
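Since the lazy zeckendorf is left as an exercise, here is one possible shape it could take — an editorial sketch, not the answerer's code. It reuses the fib from above, seeded with 1 2 so the Fibonacci numbers are distinct, matching the question's fibs-until:

(defn zeckendorf [n]
  ;; Fibonacci numbers up to n, largest first
  (let [fibs (reverse (take-while #(<= % n) (fib 1 2)))]
    (if (zero? n)
      "0"
      ;; walk the descending fibs, greedily subtracting and emitting digits
      (first
       (reduce (fn [[digits remaining] f]
                 (if (<= f remaining)
                   [(str digits "1") (- remaining f)]
                   [(str digits "0") remaining]))
               ["" n]
               fibs)))))

(zeckendorf 100) ;=> "1000010100"

Because take-while does the truncation, the second arity and its recursion disappear entirely.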
_unix.294569
I am working on a linux server remotely. Are there any command that can allow me to figure out the IP address of this linux server, so that I can ftp some files to this server.
check ip address of a linux server to upload the file
linux;networking;ip;ftp
if the remote machine is directly connected to the internet:hostname -I|cut -f1otherwise, one of these:wget -qO - http://whatsmyip.me/wget -qO - http://ipinfo.io/ipwget -qO - http://ipecho.net/plain; echoIn all cases, must be run on the remote machine.
_cs.30176
Or in other words, find all $v \in V$ such that there exists a path $\forall w \in V$ $v \rightarrow w$ or $w \rightarrow v$. This is for a directed acyclic graph. I need to find an $O(|E| + |V|)$ algorithm for this.I can see how to identify if a given vertex meets these traits (perform a BFS starting at that vertex, then do another BFS on the reverse of that graph and see if every vertex was visited in those BFSes). The obvious solution would be to run this on every vertex of the graph, but that will end up being $O(|E||V| + |V|^{2})$.I've considered identifying strongly connected components, but that doesn't seem like the right approach, since a SCC requires that $v$ and $w$ are mutually reachable, whereas this homework question requires that $v$ and $w$ are only reachable one way.Advice?
Finding vertices for which there either exists a path to all other vertices or other vertices have a path to them
algorithms;graph theory
We can assume that the DAG is connected, since otherwise the solution is trivial. Consider some topological ordering of the vertices. A vertex $v$ satisfies your condition if (1) $v$ can be reached from all vertices preceding $v$ in the ordering and (2) all vertices following $v$ in the ordering are reachable from $v$. We can check conditions (1) and (2) separately.To check condition (2), traverse the topological ordering in order, and for each vertex encountered, remove the vertex. Since this is a topological ordering, when a vertex is removed, it is a source, that is, it has no incoming edges. Condition (2) is satisfied for the vertex iff it is the unique source at that point. Condition (1) can be checked similarly by traversing the topological ordering in reverse.
_unix.386791
I'm running Arch Linux with Windows 7 dual booted. I've left a hefty chunk of my drive for windows, though I don't use it much anymore. I also need more space for my root partition.So my plan is to wipe windows, increase the size of my root partition, and then create another partition left over for tertiary use.I need to know if online resizing through GParted would allow me to increase my / partition size without any hiccups: So if I wipe my windows partition and try to resize root, will I run into any trouble?PS: I've done it once on a test machine, where I had a bunch of space after my partition (unlike the case below), but I did it on a whim since there weren't any consequences.I need to know whether it may break my system.Thanks for the help!
Runtime root partition resizing using GParted
arch linux;partition
null
_unix.58362
I have an odd problem with umask.My current setting is:$ umask0022$ umask -Su=rwx,g=rx,o=rxThis only works for files though and not directories:$ touch abc$ ll abc0 -rw-rw-rw- 1 user1 group1 0 Dec 12 11:39 abc$ mkdir def$ ll -d def8.0K drwxrwxrwx+ 2 user1 group1 4.0K Dec 12 2012 defCan anybody suggest why umask is not working for the directory? Any help is appreciated!This is a new Centos linux system.Edit: thanks for the comments. As some have pointed out, this doesn't work for files either.Extra information: This problem only seems to occur on the home directory which is mounted over NFS, and not on local directories. Could NFS be causing the problem somehow?
umask is not working for directories
linux;permissions;centos;nfs;umask
null
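No accepted answer was recorded, but one editorial caution for readers: route parameters arrive URL-decoded, so a request like /en/..%2F..%2Fsomething could, in principle, steer that string concatenation outside views/. Whether this app is actually exploitable has not been verified here; a whitelist guard is cheap insurance either way (the pattern below is an assumption — adjust it to the real page names):

// sketch: reject any path segment that isn't a plain name
var SAFE = /^[a-z0-9_-]+$/i;

app.get('/:language/:page?', function (req, res, next) {
  var language = req.params.language;
  var page = req.params.page || '';
  if (!SAFE.test(language) || !SAFE.test(page)) {
    return res.status(400).send('Bad request');
  }
  // ... existing fs.access + res.render logic ...
});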
_unix.14651
What is the best way to retrieve the current time zones of a number of countries, on a daily basis? (that would take into account DST changes, of course)ReliablyIf possible the Linux way (i.e. either using internal resources, or a Linux website API)(I'm on Ubuntu 10.04)
Retrieve countries timezones
time;api
If you just want the timezone, then timezones are stored in /usr/share/zoneinfo.If you want to be able to retrieve the current time for a number of different cities or countries, then you can pull them from the Date and Time Gateway.
_unix.374901
I'm doing some work on offloads, and I'm looking at some of the networking offloads (LRO, TSO, GRO and GSO).All of them can be set on/off by using (example for setting LRO on):ethtool -K <interface_name> lro onLRO and TSO have counters that can be viewed by using (again, LRO example):ethtool -S <interface_name> | grep lroBut I can't find anywhere a way to check counters on GRO and GSO.Any idea if these exists, and if so how can I view them?
Does linux have counters for GRO/GSO?
networking
null
_codereview.167134
I am doing a website for an association and I have never done the server side before. Here I made a very simple routing system:app.set('port',(process.env.PORT || 5000));app.use(express.static(__dirname + '/public' ));app.use(express.static(__dirname + '/views' ));app.get('/',function(req,res,next) { res.redirect('/en/insa');}).get('/:language/:page?',function(req, res, next){ var path = __dirname+'/views/'+req.params.language+'/'+req.params.page+'.ejs'; fs.access(path,function(err) { if (err) { res.status(404); console.error('404 : /' + req.params.page); res.render('404.ejs', { page: req.params.page }); return; } res.charset = utf-8; res.render(path ,{language: req.params.language}); });}).listen(app.get('port'),function() { console.log('Server is running, server is listening on port ',app.get('port'));})I just redirect automatically the client if he hits the main www.site.com/ page and then I just check for the language and page in the URL like www.site.com/en/description for instance. The fs.access function checks if the EJS (that's a template generator for HTML pages) page actually exists in the folder before rendering it.Is there anything that could go wrong with this code, or something that could be done better?
Node JS routing system
javascript;node.js;url routing;i18n
null
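No accepted answer was recorded. Since the set of valid strings is known in advance, one baseline worth trying before anything heavier is nearest-valid-string matching by edit similarity — a hedged Python sketch using only the standard library (the cutoff of 0.6 is an arbitrary starting point to tune, and VALID stands in for the real known-good strings):

import difflib

VALID = ["alpha", "bravo", "charlie"]   # stand-ins for the known-good strings

def correct(s, cutoff=0.6):
    """Map a possibly-garbled string to the closest valid one, or None."""
    match = difflib.get_close_matches(s, VALID, n=1, cutoff=cutoff)
    return match[0] if match else None  # None = too garbled, flag for review

The known valid combinations could then be enforced as a second pass over the per-string corrections.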
_unix.370060
I am a very new user to Flume, please treat me as an absolute noob. I am having a minor issue configuring Flume for a particular use case and was hoping you could assist. Note that I am not using HDFS, which is why this question is different from others you may have seen on forums.I have two Virtual Machines (VMs) connected to each other through an internal network on Oracle Virtual Box. My goal is to have one VM watch a particular directory that will only ever have one file in it. When the file is changed, I wish for Flume to only send only the new lines/data. I want the other VM to receive this data and update/concatenate the data to a single file in a particular directory on it.So far, I have this process very close to working. Whenever changes are made in VM1, they are updated on VM2. However, the entire file on VM1 is sent to VM2 every time, not the new lines. For example, if I wrote Test1 and then a while later underneath wrote Test2 to the file on VM1, on VM2 the output would be:Test1Test1Test2What I want to see is: Test1 Test2I am not sure how to implement this, and am sending this email after thoroughly examining the Flume user guide documentation and most relevant articles on stackoverflow/stackexchange. For your reference, below are the current configurations(they are working in the manner I mentioned above).I realize another solution would be to keep the configuration on VM1 and overwrite the file on VM2 everytime new contents are detected. However, I am also unsure how to implement this.Any assistance you could provide is greatly appreciated!
Apache Flume - send only new file contents
virtual machine;synchronization;apache flume
null
_webapps.69990
My husband and I have the same Facebook friend. When that friend likes a post, my husband sees the liked post but I do not. Why is this? Can't I see the same friend's liked post?
Can't see my friend's likes, but another friend can
facebook;friends
null
_datascience.22080
I am fairly new into Data Science but encoutered it before. The following problem troubles me and i hope you guys can point me in the right direction. The input are some strings where some carry the same information others not. An unknow number of these strings are crooked* to a warrying degree. From only one letter off to complete garbage. On the output side are the corrected strings from the input. The catch is that there are only certain, already known, combinations of valid strings possible. In a naive approach i chained some fuzzy searches and already got some promising results. Now i don't know where to start or if there are similar problems already solved.* (are we still allowed to say this?)
Mapping a set of corrupted strings to the correct ones
machine learning;beginner
null
_codereview.85396
The code below takes a String and encrypts it using AES/CBC/PKCS5PADDING as transformation. I am learning as I go and I have a few questions about my code.Is SecureRandom ok for generating my KEY and my IV? What's up with all these exceptions?Is my code creating any vulnerabilities in the encryption process? (mistakes maybe?) Am I seeding SecureRandom properly? I'm hopping to incorporate this into a larger project or build on this. Any suggestions for making the code easier to work with multiple classes? import java.io.UnsupportedEncodingException;import java.security.InvalidAlgorithmParameterException;import java.security.InvalidKeyException;import java.security.NoSuchAlgorithmException;import java.security.SecureRandom;import javax.crypto.BadPaddingException;import javax.crypto.Cipher;import javax.crypto.IllegalBlockSizeException;import javax.crypto.KeyGenerator;import javax.crypto.NoSuchPaddingException;import javax.crypto.SecretKey;import javax.crypto.spec.IvParameterSpec;public class AESCrypt { private SecureRandom r = new SecureRandom(); private Cipher c; private IvParameterSpec IV; private SecretKey s_KEY; // Constructor public AESCrypt() throws NoSuchAlgorithmException, NoSuchPaddingException { this.c = Cipher.getInstance(AES/CBC/PKCS5PADDING); this.IV = generateIV(); this.s_KEY = generateKEY(); } // COnvert the String to bytes..Should I be using UTF-8? I dont think it // messes with the encryption and this way any pc can read it ? // Initialize the cipher // Encrypt the String of bytes // Return encrypted bytes protected byte[] encrypt(String strToEncrypt) throws InvalidKeyException, InvalidAlgorithmParameterException, IllegalBlockSizeException, BadPaddingException, UnsupportedEncodingException { byte[] byteToEncrypt = strToEncrypt.getBytes(UTF-8); this.c.init(Cipher.ENCRYPT_MODE, this.s_KEY, this.IV, this.r); byte[] encryptedBytes = this.c.doFinal(byteToEncrypt); return encryptedBytes; } // Initialize the cipher in DECRYPT_MODE // Decrypt and store as byte[] // Convert to plainText and return protected String decrypt(byte[] byteToDecrypt) throws InvalidKeyException, InvalidAlgorithmParameterException, IllegalBlockSizeException, BadPaddingException { this.c.init(Cipher.DECRYPT_MODE, this.s_KEY, this.IV); byte[] plainByte = this.c.doFinal(byteToDecrypt); String plainText = new String(plainByte); return plainText; } // Create the IV. // Create a Secure Random Number Generator and an empty 16byte array. Fill // the array. // Returns IV private IvParameterSpec generateIV() { byte[] newSeed = r.generateSeed(16); r.setSeed(newSeed); byte[] byteIV = new byte[16]; r.nextBytes(byteIV); IV = new IvParameterSpec(byteIV); return IV; } // Create a KeyGenerator that takes in 'AES' as parameter // Create a SecureRandom Object and use it to initialize the // KeyGenerator // keyGen.init(256, sRandom); Initialize KeyGenerator with parameters // 256bits AES private SecretKey generateKEY() throws NoSuchAlgorithmException { // byte[] bytKey = AES_KEY.getBytes(); // Converts the Cipher Key to // Byte format // Should I use SHA-2 to get a random key or is this better? 
byte[] newSeed = r.generateSeed(32); r.setSeed(newSeed); KeyGenerator keyGen = KeyGenerator.getInstance(AES); // A // KEyGenerator // object, SecureRandom sRandom = r.getInstanceStrong(); // A SecureRandom object // used to init the // keyGenerator keyGen.init(256, sRandom); // Initialize RAndom Number Generator s_KEY = keyGen.generateKey(); return s_KEY; } public String byteArrayToString(byte[] s) { String string = new String(s); return string; } // Get Methods for all class variables public Cipher getCipher() { return c; } public IvParameterSpec getIV() { return IV; } public SecretKey getSecretKey() { return s_KEY; }}
Encrypting a string using AES/CBC/PKCS5PADDING
java;security;cryptography
Is SecureRandom ok for generating my KEY and my IV? I would generally advise that you don't specify your own SecureRandom for the key generator, unless you have a specific reason to do so. By default, it will select the highest priority implementation it finds amongst the installed providers. Also, if your code is used with a hardware security module (HSM) in the future, it will either completely ignore your request or it will even throw an exception to tell you that you mustn't try to specify an alternative source of randomness.Using it to generate an IV value is fine.What's up with all these exceptions?Yeah, irritating isn't it? The security APIs are peppered with checked exceptions. Fortunately, many of them extend GeneralSecurityException, so you can just throw that if you have no intention of acting upon the individual exceptions.As in all code, throw exceptions that are appropriate to the abstraction of your API layer.Is my code creating any vulnerabilities in the encryption process? (mistakes maybe?) No, it generally looks fine. You should specify UTF-8 when converting your plaintext bytes to a string, but that's about it.Obviously you'll need to store your IV along with your ciphertext when you eventually use this in anger.Am I seeding SecureRandom properly?There's not really any need to seed a SecureRandom object. Many implementations of SecureRandom ignore the seeds they are supplied. Just create it using:SecureRandom random = new SecureRandom();You are currently using SecureRandom::generateSeed() which is actually intended for seeding other PRNGs. There's no need to use it to re-seed your existing SecureRandom instance. Just use the basic no-arg constructor as I suggest above.
_codereview.84775
I've put together an algorithm for an assignment. I've done my best to try and keep it to a professional and readable standard. I'm posting it here so that I can get some feedback and suggestions on what it's like and whether I could improve the algorithm in some way.

/*****
Algorithm Comments -
Name: Shivan Kamal
Purpose: To create a C program which takes 3 integers and produces a result
         that shows which triangle (if valid) they have chosen.
*****/

Variable Declaration: sideA = integer
Variable Declaration: sideB = integer
Variable Declaration: sideC = integer
Character Declaration = ch

PRINT -- "Lets explore triangles! Please insert a value for side A of your triangle \n"
PRINT -- "Ranging from 1-15cm"
PRINT -- "Now insert a value for side B of your triangle ranging from 1-15cm.\n"
INPUT -- sideB
PRINT -- "And finally, insert a value for side C of your triangle ranging from 1-15cm.\n"
INPUT -- sideC

IF (sideA || sideB || sideC <= 0)
    PRINT "You cannot have a triangle with any side having a value of 0.\n"
ELSE IF (sideA || sideB || sideC > 15) THEN
    PRINT "Please insert a value between 1cm - 15cm only."
ELSE IF (sideA AND sideB == sideC OR sideB AND sideC == sideA OR sideC AND sideA == sideC) THEN
    PRINT "Your input creates a valid EQUILATERAL triangle.\n"
ELSE IF (sideA == sideB OR sideB == sideC OR sideC == sideA)
    PRINT "Your input creates a valid SCALENE triangle.\n"
ELSE IF
    PRINT "Your input creates a valid ISOSCELES triangle.\n"
ELSE
    PRINT "You have inserted invalid range of values, as a result your triangle is Invalid.\n"
    PRINT "Please restart the program by running it again and insert valid values in order to check what triangle you would get. Goodbye."
ENDIF
ENDIF
ENDIF
ENDIF
ENDIF

END PROGRAM

You may notice I did a char declaration but didn't use it in the rest of the algorithm. That is because I'm trying to expand on the algorithm and its testing conditions. One of the things I want to do is insert a loop condition where, unless the user inserts a valid integer, the user will be prompted to insert a valid integer; once it's inserted, the program continues on. This happens on 3 occasions at the beginning.

Also, I have constrained the integers to a range of specific numbers. How would I be able to improve upon the algorithm so that a user can insert any number, with the condition that one side of a triangle cannot be longer than the other two sides combined, otherwise the triangle is invalid? If a user inserts, for example:

sideA as 10
sideB as 20

then sideC cannot be more than 30. Likewise the condition where sideA cannot be more than sideB and sideC combined, as well as sideC and sideA not being higher than sideB.

In my code I've currently got it set to make sure a user can only insert a valid integer, and thus it will not allow a user to insert any character except an integer. At the end of the algorithm, and also in the code, I've got an else-if statement in case the user inserts anything invalid. But since I am trying to make it error-proof from the beginning, what would I have to add so that I can remove the final else-if statement?

And finally, I want to add a section to the algorithm where, after the user has gotten an answer, the user is asked to either try again or simply exit.

The code is pasted below if you wish to compile. Check it out and make some suggestions based on what I'm trying to make the program do.

#include <stdio.h>

int main()
{
    /*** Declaring triangle variable sides ****/
    int sideA;
    int sideB;
    int sideC;
    char ch;

    printf("Lets explore triangles! Please insert a value for side 'A' of your triangle.\n");
    printf("Ranging from 1-15cm.\n");
    while (scanf("%d", &sideA) != 1)
    {
        printf("You inserted an incorrect value. Please insert a number ranging between 1-15cm, and try again.\n");
        while ((ch = getchar()) != '\n');
    }
    printf("Now insert a value for side 'B' of your triangle ranging from 1-15cm.\n");
    while (scanf("%d", &sideB) != 1)
    {
        printf("You inserted an incorrect value. Please insert a number ranging from 1-15cm, and try again.\n");
        while ((ch = getchar()) != '\n');
    }
    printf("And finally, insert a value for side 'C' of your triangle ranging from 1-15cm.\n");
    while (scanf("%d", &sideC) != 1)
    {
        printf("You inserted an incorrect value. Please insert a number ranging from 1-15cm, and try again.\n");
        while ((ch = getchar()) != '\n');
    }

    /*** List of conditions based on user input to identify if the triangle
         is valid and if so, what type of triangle they get ***/
    if (sideA <= 0 || sideB <= 0 || sideC <= 0)
    {
        printf("You cannot have a triangle with any side having a value of 0.\n");
    }
    else if (sideA > 15 || sideB > 15 || sideC > 15)
    {
        printf("Please insert a value between 1cm-15cm only\n.");
    }
    else if ((sideA == sideC && sideB == sideC) || (sideB == sideA && sideC == sideA) || (sideC == sideB && sideA == sideB))
    {   /*** Code to determine EQUILATERAL TRIANGLE ***/
        printf("Your input creates a valid EQUILATERAL triangle.\n");
    }
    else if ((sideA == sideB) || (sideB == sideC) || (sideC == sideA))
    {   /*** Code to determine ISOSCELES TRIANGLE ***/
        printf("Your input creates a valid ISOSCELES triangle.\n");
    }
    else if ((sideA != sideB) && (sideB != sideC))
    {   /*** Code to determine SCALENE triangle ***/
        printf("Your input creates a valid SCALENE triangle.\n");
    }
    else
    {
        printf("You have inserted invalid range of values, as a result your triangle is invalid.\n");
        printf("Please restart the program by closing it and opening it again to retry\n.");
        printf("Goodbye.\n");
    }
    return (0);
}

NOTE: You may find there are some things in the code that are not in the algorithm. This is purely because I'm a little unsure how I should write them so that the algorithm makes sense to a programmer trying to write this program.

NOTE: Please bear in mind that I am aware the code lacks modularization and has a lot of else-if statements. I wrote the code specifically this way because I was advised to do so. It was part of my instructions.
Generating a triangle from integers
algorithm;c
null
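No accepted answer was recorded, but the triangle-inequality extension the asker describes is small enough to sketch here (editorial example; the function name is illustrative). Using strict inequalities also rejects degenerate, zero-area triangles:

/* valid iff all sides are positive and each side is shorter
   than the sum of the other two */
int is_valid_triangle(int a, int b, int c)
{
    return a > 0 && b > 0 && c > 0
        && a + b > c
        && b + c > a
        && a + c > b;
}

With a check like this performed up front, the final else branch becomes unreachable and can be dropped, which also answers the error-proofing question.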
_codereview.22738
So I am working on a LMS project and I have a User class that will handle everything about the user, such as registration, login, showing the list of courses that they are subscribed to, etc.

User.class.php

<?php

class User {

    protected $_firstName;
    protected $_lastName;
    protected $_email;
    protected $_username;
    protected $_password;
    protected $_createdOn;
    protected $_userLevel;
    protected $_salt = '}$YY lGC6&wib=w{dpqgzXv>{)A3w)5@mi`/Q7HK|/GwZ6)K<4I~Ey-bQ';

    public function getFirstName() {
        return $this->_firstName;
    }

    public function setFirstName($value) {
        $this->_firstName = $value;
        if (empty($value)) {
            setError('firstName', 'Enter your first name.');
        } else if (strlen($value) < 2) {
            setError('firstName', 'The name you provided is too short.');
        } else if (!ctype_alpha(str_replace(array('-',' '), '', $value))) {
            setError('firstName', 'The name you provided can only contain letters.');
        }
    }

    public function getLastName() {
        return $this->_lastName;
    }

    public function setLastName($value) {
        $this->_lastName = $value;
        if (empty($value)) {
            setError('lastName', 'Enter your last name.');
        } else if (strlen($value) < 2) {
            setError('lastName', 'The name you provided is too short.');
        } else if (!ctype_alpha(str_replace(array('-',' '), '', $value))) {
            setError('lastName', 'The name you provided can only contain letters.');
        }
    }

    public function getEmail() {
        return $this->_email;
    }

    public function setEmail($value) {
        $this->_email = $value;
        $pattern = '!^.{1,}@.{2,}$!i';
        if (empty($value)) {
            setError('email', 'Enter your email.');
        } else if (substr_count($value, '@') != 1 and !preg_match($pattern, $value)) {
            setError('email', 'The email you provided is not valid.');
        }
    }

    public function getUsername() {
        return $this->_username;
    }

    public function setUsername($value) {
        $this->_username = strtolower($value);
        if (empty($value)) {
            setError('username', 'Enter your username.');
        } else if (strlen($value) < 6) {
            setError('username', 'The username you provided must have at least 6 characters.');
        } else if (!ctype_alnum(str_replace('_', '', $value))) {
            setError('username', 'The username you provided can only contain letters, numbers, and underscores.');
        }
    }

    public function getPassword() {
        return $this->_password;
    }

    public function setPassword($value) {
        $this->_password = $value;
        if (empty($value)) {
            setError('password', 'Enter a password.');
        } else if (strlen($value) < 6) {
            setError('password', 'The password you provided must have at least 6 characters.');
        }
    }

    public function setConfirmPassword($value) {
        if (empty($value)) {
            setError('confirmPassword', 'Re-enter your password again.');
        } else if ($this->_password != $value) {
            setError('confirmPassword', 'This does not match your password.');
        }
    }

    private function _encrypt($value) {
        return sha1(md5($this->_salt.md5($value)));
    }

    public function register() {
        if (!hasErrors()) {
            try {
                $core = Core::getInstance();
                $sth = $core->dbh->prepare(<<<SQL
INSERT IGNORE INTO `users`
SET `first_name` = :first_name,
    `last_name` = :last_name,
    `email` = LOWER(:email),
    `username` = LOWER(:username),
    `password` = :password,
    `created_on` = NOW()
SQL
                );
                $sth->bindValue(':first_name', propercase($this->_firstName), PDO::PARAM_STR);
                $sth->bindValue(':last_name', propercase($this->_lastName), PDO::PARAM_STR);
                $sth->bindValue(':email', $this->_email, PDO::PARAM_STR);
                $sth->bindValue(':username', $this->_username, PDO::PARAM_STR);
                $sth->bindValue(':password', $this->_encrypt($this->_password), PDO::PARAM_STR);
                $sth->execute();
            } catch (Exception $e) {
                // print $e->getMessage();
            }
        }
    }

    public function login() {
        if (!hasErrors()) {
            try {
                $core = Core::getInstance();
                $sth = $core->dbh->prepare(<<<SQL
SELECT * FROM `users`
WHERE `username` = :username
LIMIT 1
SQL
                );
                $sth->bindValue(':username', $this->_username, PDO::PARAM_STR);
                $sth->execute();
                $row = $sth->fetch();
                $sth->closeCursor();
                if ($row and $row->password == $this->_encrypt($this->_password)) {
                    $_SESSION['uid'] = $row->id;
                    $_SESSION['user'] = $row->username;
                    $_SESSION['pass'] = $row->password;
                    $_SESSION['level'] = $row->user_level;
                }
            } catch (Exception $e) {
                // print $e->getMessage();
            }
        }
    }

    private function _destroySession() {
        session_unset();
        session_destroy();
    }

    public function logout() {
        $this->_destroySession();
        redirect('index.php');
    }

    public function check() {
        if (isset($_SESSION['pass'])) {
            try {
                $core = Core::getInstance();
                $sth = $core->dbh->prepare(<<<SQL
SELECT * FROM `users`
WHERE `username` = :username
LIMIT 1
SQL
                );
                $sth->bindValue(':username', $_SESSION['user'], PDO::PARAM_STR);
                $sth->execute();
                $row = $sth->fetch();
                $sth->closeCursor();
                if (!$row or $row->password != $_SESSION['pass']) {
                    $this->logout();
                }
            } catch (Exception $e) {
                // print $e->getMessage();
            }
        }
    }

    public function isLoggedIn() {
        return (isset($_SESSION['pass']));
    }

    public function getCourseSubscriptions() {
        $rows = array();
        try {
            $core = Core::getInstance();
            $sth = $core->dbh->prepare(<<<SQL
SELECT c.`id` AS id, c.`code` AS code, c.`name` AS name,
       IF(`u_id` IS NULL, 0, 1) AS subscribed
FROM `courses` c
LEFT JOIN `course_subscriptions` s ON c.id = s.`c_id` AND s.`u_id` = :u_id
ORDER BY `code`
SQL
            );
            $sth->bindValue(':u_id', $_SESSION['uid'], PDO::PARAM_INT);
            $sth->execute();
            $rows = $sth->fetchAll();
            $sth->closeCursor();
        } catch (PDOException $e) {
            // print $e->getMessage();
        }
        return $rows;
    }

    public function addCourseSubscription($c_id) {
        try {
            $core = Core::getInstance();
            $sth = $core->dbh->prepare(<<<SQL
INSERT IGNORE INTO `course_subscriptions` (c_id, u_id)
VALUES (:c_id, :u_id)
SQL
            );
            $sth->bindValue(':c_id', $c_id, PDO::PARAM_INT);
            $sth->bindValue(':u_id', $_SESSION['uid'], PDO::PARAM_INT);
            $sth->execute();
            return true;
        } catch (PDOException $e) {
            // print $e->getMessage();
        }
        return false;
    }
}

Everything is working fine, but I feel stuck on how to use this object properly with a session. Below is an example of how I am currently using my class.

course_catalog.php

$userObj = new User();
$userObj->check();
$list = $userObj->getCourseSubscriptions();

Any suggestions on how I can improve any of this?
How to properly make a user class with a session
php;object oriented;session
First of all, I would split the User class into 2 or 3 classes: User, Authentication and Registration. The setter errors could easily become Exceptions, which you could collect in your Registration class. At the end you will have a smaller User object you can attach to the session. Please also check http://www.phpbuilder.com/columns/validating_php_user_sessions.php3 for some hints regarding session validation.
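A skeleton of that split, as an editorial sketch (the class and method names are illustrative, not prescribed by the answer):

<?php

class User          { /* identity + profile data only; small enough for the session */ }

class Authenticator {
    public function login($username, $password) {
        /* ... query + password check; returns a User or null ... */
    }
}

class Registrar {
    public function register(array $fields) {
        /* ... setters throw, exceptions are collected here ... */
    }
}

// usage sketch
$user = (new Authenticator($pdo))->login($_POST['user'], $_POST['pass']);
if ($user !== null) {
    $_SESSION['user'] = serialize($user);
}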
_webmaster.68489
I have a dynamically constructed page that automatically pulls specific page content and applies it to the meta description. There are <br> tags that end up in the meta description. Should I write the extra code that will remove them and store a cleaned-up version, or is it OK/safe/bad for SEO to leave the tags there?
<br> in meta description, OK or not?
seo;meta tags;meta description
I don't think there is any harm in doing this from an SEO perspective as this tag is not used as a ranking factor anymore. As far as Google using it to display the snippet for your pages in their search results, they can choose to simply ignore the <br> tag or choose a different snippet to display such as your ODP description (if it exists) or a snippet of text from your page's content.Having said that, if you have the ability to remove those tags you should do so. If you suspect it may be problematic, and it serves no purpose in your meta tag (and it doesn't) you should remove it.
_softwareengineering.165720
I'm starting to work on a project that I intend to release as open source via GitHub. What are the advantages of putting the code on GitHub from the outset, as opposed to waiting until the project is in a working state before publishing? If it matters, this particular project is a C# app/service, and I have only a free GitHub account (so I can't make it private and then pull back the covers later).
what are the advantages and disadvantages of putting code for an unfinished project on github
open source;github
The quicker you make your code publicly available, the quicker you can gain feedback and people to help you. If your intention is to make the project open source from the beginning, then I would recommend starting your project out as public by default.Github is full of small and unfinished projects so your project should fit right in. The more details you put in the readme file the better as it will help other developers/consumers get up to speed on your project quickly.At the very least, your private projects should be under some sort of version control. If you don't want to pay for a service, then I'd recommend using Dropbox to back up your private local repositories. This way you have file backup and version control on your project which will save you from hours of pain in the future.
_scicomp.13028
I'm looking at FEM discretizations of
$$u_i - \Delta u_i = f$$
for $u_1, u_2$ on subdomains $\Omega_1, \Omega_2$ with interface $\Gamma$. A Neumann-Neumann transmission condition can be formulated by solving for a flux $\lambda$ on $\Gamma$ such that $n\cdot \nabla u_1 = \lambda = -n\cdot \nabla u_2$. One dual variational form involves formulating the problem using a Lagrange multiplier unknown $\lambda$, such that
$$\begin{align*}
a_1(u_1,v_1) + a_2(u_2,v_2) + \int_{\Gamma} \lambda [[v]] &= (f,v_1)+(f,v_2)\\
\int_{\Gamma} \mu [[u]] &= 0
\end{align*}$$
and we can eliminate $u_1, u_2$ to solve only in terms of $\lambda$.

Is there a way to do this for other transmission conditions? I can repeat the same process for Robin interfaces and add $\alpha u_i$ to the interface BCs. Redefining a new Lagrange multiplier $\tilde{\lambda}$ gives
$$\begin{align*}
-\alpha u_1 - n\cdot\nabla u_1 &= \tilde{\lambda} = \lambda - \alpha u_1\\
\alpha u_2 - n\cdot\nabla u_2 &= \tilde{\lambda} = -(\lambda - \alpha u_2)
\end{align*}$$
but the second BC is no longer coercive (the boundary term contributes a negative $\int_{\Gamma}\alpha u_2 v_2$ to $a_2(u_2,v_2)$). Is there a way to do a stable Lagrange multiplier formulation for the Robin-Robin case for elliptic problems?

(I'd also be grateful for a reference to previous work if I've overlooked the answer to this in the literature.)
Domain decomposition w/Lagrange multipliers
domain decomposition
null
_cstheory.1008
So I have an issue I'm facing in regards to clustering with live, continuously streaming data. Since I have an ever-growing data set, I'm not sure what the best way is to run efficient and effective clustering. I've come up with a few possible solutions, including:

Setting a limit on how many data points to allow, so that whenever the limit is reached and another data point comes in, the oldest point is removed. Essentially, this would suggest that older data isn't relevant enough to us anymore to care what we're losing by throwing it out.

Once there is enough data to make a good clustering, consider this the setup, and as new points come in, rather than re-clustering all the data, just figure out which cluster center the new point is closest to and add it to that. The benefit here is that you could avoid having to re-cluster on every new point, and you wouldn't have to store all the other points, just the cluster centers, considering this clustering "good enough". The downside is that re-running the algorithm with all data points from the beginning may be more accurate.

While those are some potential solutions I brainstormed, I'd like to know if there are any better-known techniques to face this problem. I figure sites like Google had to deal with it somehow (and I'm hoping that "add more RAM, servers and processors" or "continually expand your data centers" aren't the only answers available).
Continuous Clustering
ds.algorithms;clustering;online algorithms;data streams
It sounds like you're looking for online algorithms for clustering. I suggest searching for "online clustering" on Google Scholar. Maybe the following links will prove useful (at least as a starting point):

Guha et al.: Clustering Data Streams: Theory and Practice
Beringer and Hüllermeier: Online clustering of data streams
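The question's own second idea — assign each arriving point to the nearest center and update that center — is essentially sequential (online) k-means. A hedged Python sketch, editorial rather than taken from the cited papers; the shrinking step size 1/count makes each center the running mean of the points assigned to it:

import math

class OnlineKMeans:
    def __init__(self, initial_centers):
        self.centers = [list(c) for c in initial_centers]
        self.counts = [1] * len(self.centers)

    def add(self, point):
        # find the nearest center ...
        i = min(range(len(self.centers)),
                key=lambda j: math.dist(point, self.centers[j]))
        # ... and nudge it toward the new point; old points need not be stored
        self.counts[i] += 1
        eta = 1.0 / self.counts[i]
        self.centers[i] = [c + eta * (p - c)
                           for c, p in zip(self.centers[i], point)]
        return i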
_unix.47434
I am keen to know the difference between curl and wget. Both are used to get files and documents, but what is the key difference between them? Why are there two different programs?
What is the difference between curl and wget?
utilities;wget;curl;download
The main differences are:

wget's major strong side compared to curl is its ability to download recursively.
wget is command line only; there's no lib or anything, whereas curl is powered by libcurl, which it also exposes as a library.
curl supports FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, POP3, IMAP, SMTP, RTMP and RTSP. wget supports HTTP, HTTPS and FTP.
curl builds and runs on more platforms than wget.
wget is part of the GNU project and all copyrights are assigned to the FSF. The curl project is entirely stand-alone and independent, with no parent organization at all.
curl offers upload and sending capabilities. wget only offers plain HTTP POST support.

You can see more details at the following link: curl vs Wget
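Two quick illustrations of that division of labour (the URLs and file names below are placeholders):

# recursive mirroring is wget territory
wget --recursive --level=2 --no-parent http://example.com/docs/

# uploads and protocol breadth are curl territory
curl -T backup.tar.gz ftp://example.com/incoming/    # FTP upload
curl -X POST -d 'name=duck' http://example.com/api   # HTTP POST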
_codereview.118473
I recently reinstated the unit tests in Rubberduck. Previously, our parser was a synchronous parser, with everything running in sequence, and we could just request a parse result. Now, however, it runs asynchronously and we can only request parses. As a result, I have to somehow perform a blocking call to the parser, which I did with a semaphore. First, the semaphore blocks the code from continuing to execute, then the event handler that gets called when the parser state changes releases it (or, if the code parses remarkably fast, the semaphore has a slot available and waiting for the method to take).

Below are a subset of the tests for my Introduce Parameter refactoring. While I am especially looking for feedback on how I handle the blocking parser and the way I set up the tests in general, all feedback is welcome.

[TestClass]
public class IntroduceParameterTests
{
    private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(0, 1);

    void State_StateChanged(object sender, ParserStateEventArgs e)
    {
        if (e.State == ParserState.Ready)
        {
            _semaphore.Release();
        }
    }

    [TestMethod]
    public void IntroduceParameterRefactoring_NoParamsInList_Sub()
    {
        //Input
        const string inputCode =
@"Private Sub Foo()
    Dim bar As Boolean
End Sub";
        var selection = new Selection(2, 10, 2, 13); //startLine, startCol, endLine, endCol

        //Expectation
        const string expectedCode =
@"Private Sub Foo(ByVal bar As Boolean)
End Sub";

        //Arrange
        var builder = new MockVbeBuilder();
        VBComponent component;
        var vbe = builder.BuildFromSingleStandardModule(inputCode, out component);
        var project = vbe.Object.VBProjects.Item(0);
        var module = project.VBComponents.Item(0).CodeModule;
        var codePaneFactory = new CodePaneWrapperFactory();
        var mockHost = new Mock<IHostApplication>();
        mockHost.SetupAllProperties();

        var parser = new RubberduckParser(vbe.Object, new RubberduckParserState());
        parser.State.StateChanged += State_StateChanged;
        parser.State.OnParseRequested();
        _semaphore.Wait();
        parser.State.StateChanged -= State_StateChanged;

        var qualifiedSelection = new QualifiedSelection(new QualifiedModuleName(component), selection);

        //Act
        var refactoring = new IntroduceParameter(parser.State,
            new ActiveCodePaneEditor(vbe.Object, codePaneFactory), null);
        refactoring.Refactor(qualifiedSelection);

        //Assert
        Assert.AreEqual(expectedCode, module.Lines());
    }

    [TestMethod]
    public void IntroduceParameterRefactoring_ImplementsInterface_MultipleInterfaceImplementations()
    {
        //Input
        const string inputCode1 =
@"Sub fizz(ByVal boo As Boolean)
End Sub";
        const string inputCode2 =
@"Implements IClass1

Sub IClass1_fizz(ByVal boo As Boolean)
    Dim fizz As Date
End Sub";
        const string inputCode3 =
@"Implements IClass1

Sub IClass1_fizz(ByVal boo As Boolean)
End Sub";
        var selection = new Selection(4, 10, 4, 14); //startLine, startCol, endLine, endCol

        //Expectation
        const string expectedCode1 =
@"Sub fizz(ByVal boo As Boolean, ByVal fizz As Date)
End Sub";
        const string expectedCode2 =
@"Implements IClass1

Sub IClass1_fizz(ByVal boo As Boolean, ByVal fizz As Date)
End Sub";
        const string expectedCode3 =
@"Implements IClass1

Sub IClass1_fizz(ByVal boo As Boolean, ByVal fizz As Date)
End Sub";

        //Arrange
        var builder = new MockVbeBuilder();
        var project = builder.ProjectBuilder("TestProject1", vbext_ProjectProtection.vbext_pp_none)
            .AddComponent("IClass1", vbext_ComponentType.vbext_ct_ClassModule, inputCode1)
            .AddComponent("Class1", vbext_ComponentType.vbext_ct_ClassModule, inputCode2)
            .AddComponent("Class2", vbext_ComponentType.vbext_ct_ClassModule, inputCode3)
            .Build();
        var vbe = builder.AddProject(project).Build();

        var component = project.Object.VBComponents.Item(1);
        vbe.Setup(v => v.ActiveCodePane).Returns(component.CodeModule.CodePane);
        var codePaneFactory = new CodePaneWrapperFactory();
        var mockHost = new Mock<IHostApplication>();
        mockHost.SetupAllProperties();

        var parser = new RubberduckParser(vbe.Object, new RubberduckParserState());
        parser.State.StateChanged += State_StateChanged;
        parser.State.OnParseRequested();
        _semaphore.Wait();
        parser.State.StateChanged -= State_StateChanged;

        var qualifiedSelection = new QualifiedSelection(new QualifiedModuleName(component), selection);

        var module1 = project.Object.VBComponents.Item(0).CodeModule;
        var module2 = project.Object.VBComponents.Item(1).CodeModule;
        var module3 = project.Object.VBComponents.Item(2).CodeModule;

        var messageBox = new Mock<IMessageBox>();
        messageBox.Setup(m => m.Show(It.IsAny<string>(), It.IsAny<string>(),
                It.IsAny<MessageBoxButtons>(), It.IsAny<MessageBoxIcon>()))
            .Returns(DialogResult.OK);

        //Act
        var refactoring = new IntroduceParameter(parser.State,
            new ActiveCodePaneEditor(vbe.Object, codePaneFactory), messageBox.Object);
        refactoring.Refactor(qualifiedSelection);

        //Assert
        Assert.AreEqual(expectedCode1, module1.Lines());
        Assert.AreEqual(expectedCode2, module2.Lines());
        Assert.AreEqual(expectedCode3, module3.Lines());
    }

    [TestMethod]
    public void IntroduceParameterRefactoring_PassInTarget_Nonvariable()
    {
        //Input
        const string inputCode =
@"Private Sub Foo()
    Dim bar As Boolean
End Sub";

        //Arrange
        var builder = new MockVbeBuilder();
        VBComponent component;
        var vbe = builder.BuildFromSingleStandardModule(inputCode, out component);
        var project = vbe.Object.VBProjects.Item(0);
        var module = project.VBComponents.Item(0).CodeModule;
        var codePaneFactory = new CodePaneWrapperFactory();
        var mockHost = new Mock<IHostApplication>();
        mockHost.SetupAllProperties();

        var parser = new RubberduckParser(vbe.Object, new RubberduckParserState());
        parser.State.StateChanged += State_StateChanged;
        parser.State.OnParseRequested();
        _semaphore.Wait();
        parser.State.StateChanged -= State_StateChanged;

        var messageBox = new Mock<IMessageBox>();
        messageBox.Setup(m => m.Show(It.IsAny<string>(), It.IsAny<string>(),
                It.IsAny<MessageBoxButtons>(), It.IsAny<MessageBoxIcon>()))
            .Returns(DialogResult.OK);

        //Act
        var refactoring = new IntroduceParameter(parser.State,
            new ActiveCodePaneEditor(vbe.Object, codePaneFactory), messageBox.Object);

        //Assert
        try
        {
            refactoring.Refactor(parser.State.AllUserDeclarations.First(
                d => d.DeclarationType != DeclarationType.Variable));
            messageBox.Verify(m => m.Show(It.IsAny<string>(), It.IsAny<string>(),
                It.IsAny<MessageBoxButtons>(), It.IsAny<MessageBoxIcon>()), Times.Once);
        }
        catch (ArgumentException e)
        {
            Assert.AreEqual("Invalid declaration type", e.Message);
            Assert.AreEqual(inputCode, module.Lines());
            return;
        }
        Assert.Fail();
    }
}
Unit Testing the Duck
c#;unit testing;meta programming;rubberduck
I don't believe a Semaphore is needed in the first place. Remove it, register an event handler, and continue your act and assert phases inside it. Something like this:

parser.State.StateChanged += (o, e) =>
{
    var qualifiedSelection = new QualifiedSelection(new QualifiedModuleName(component), selection);
    var refactoring = new IntroduceParameter(parser.State,
        new ActiveCodePaneEditor(vbe.Object, codePaneFactory), null);
    refactoring.Refactor(qualifiedSelection);
    Assert.AreEqual(expectedCode, module.Lines());
};

After creating a quick scenario that I believe mimics your use case, it seems to work just fine: https://gist.github.com/Vannevelj/5d0e348fd1424492ff8f

parser.State.OnParseRequested();

I'm not feeling comfortable with this name for a public member, since it's not of the [verb][action] form -- RequestParse() might be more appropriate.

var selection = new Selection(4, 10, 4, 14); //startLine, startCol, endLine, endCol

Either use named arguments or don't use them -- don't put them in comments.

Why do you group 3 scenarios in one test? Either use a parameterized test or extract common logic and keep it separated. Nobody wants to sift through multiple test cases when one of them fails.

Avoid try-catch in a unit test -- that means you're doing it the other way around. Using [ExpectedException(typeof(ArgumentException), "Invalid declaration type")] you've got most of it covered already, though I can see why you also want to compare whether the code has changed.

Don't use an empty Assert.Fail(), pass in a message.

try
{
    refactoring.Refactor(...);
    messageBox.Verify(...);
}
catch (ArgumentException) { }

Assert.Fail();

Given this setup, does it make sense to do the mock.Verify() call? If refactoring.Refactor() throws the exception, the mock.Verify() call will never be evaluated.
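For completeness, the attribute-based form the answer refers to looks roughly like this — an editorial sketch with the arrange phase elided, mirroring MSTest's ExpectedException usage:

[TestMethod]
[ExpectedException(typeof(ArgumentException), "Invalid declaration type")]
public void IntroduceParameterRefactoring_PassInTarget_Nonvariable()
{
    // ... arrange exactly as in the original test ...
    refactoring.Refactor(
        parser.State.AllUserDeclarations.First(
            d => d.DeclarationType != DeclarationType.Variable));
    // no Assert.Fail() needed: MSTest fails the test if nothing throws
}

The trade-off, as the answer notes, is that the "code unchanged" assertion has no natural home in this form.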
_webmaster.1575
Does putting an unrelated site in a domain's subfolder (to avoid buying a new domain name) affect the domain's search engine ranking, even if there are no links between them? For example, is it a bad practice? Will placing a cooking site at programming.com/cooking affect the search engine ranking of programming.com?
Do sites in subfolders affect the ranking of the main domain?
seo;subdirectory
I don't have any data for this, but I personally wouldn't do it. Search engines are spending a lot of effort trying to understand what your site is about in order to determine whether to return it for various queries. Diluting the focus of your site could be risky. In my opinion, the < $20/year is well worth avoiding wasting all of your hard work.
_softwareengineering.70223
Unless it is needed to differentiate between a variable and a field with the same name, I never put this. in front of a field or any member access in C#. I see this as no different to the m_ prefix that used to be common in C++, and think that if you really need to specify that it's a member, your class is too big. However, there are a number of people in my office who strongly disagree. What is considered current best practice regarding this.?

EDIT: To clarify, I never use m_ and only use this. when absolutely necessary.
What is regarded as current best practice for the this keyword in front of fields and methods in C#?
c#;coding style
null