id | question | title | tags | accepted_answer |
---|---|---|---|---|
_cstheory.7340 | I want to write a simulator for a quantum computing model that I am working on, and I was wondering what would be the correct library / implementation strategy to implement quantum cluster states. Specifically, I want to compute a cluster state of a specific topological quantum computation. I am investigating algorithms using this model and I would like to have a toy implementation for a presentation. The specific algorithm would work similarly to this: (1) encode the given knot algorithm into a set of braids and create a cluster state of those braids; (2) perform measurement on those braids; (3) return an element of $Z_{n} \in \{1,0\}$; (4) perform a classical operation on the knot; (5) perform another operation until the knot is in a desired state. See http://arxiv.org/pdf/1101.4722.pdf for a very similar model. | Computational Library to compute Quantum Cluster States | ds.algorithms;reference request;quantum computing;implementation;topology | As Peter mentions in the comments, it seems impossible to give an authoritative answer without knowing what exactly you are planning on doing with them. That said, there are at least a few places I can point you which may be of some use. Firstly, Pauli measurements on cluster states can be efficiently simulated on a classical computer. This is a direct result of the Gottesman-Knill theorem (see this paper by Gottesman and this follow-up paper by Gottesman and Aaronson), which states that Clifford group circuits can be efficiently evaluated via the stabilizer formalism. So it may be that stabilizers are the way you want to go. However, if you want to be a little less general and restrict yourself to graph states (a general name for cluster states on general graphs), then there are two papers, by Hein, Eisert and Briegel and by Schlingemann, which describe how Pauli measurements performed on a graph state result in states which are locally equivalent to graph states, and provide rules for these transformations. Thus it is quite possible to work with graphs as your data structure, as long as you do not intend to leave the Clifford group. Finally, Ross Duncan and Lucas Dixon have taken a category-theoretic approach for automated reasoning about graphs and have produced some nice proof-of-concept software using this approach (see here). Also, I would point out that Raussendorf, Harrington and Goyal have previously looked at implementing topological computations via measurements on cluster states (they use it to achieve fault tolerance in cluster states in a very beautiful way), and so you might be interested in their work (see here and here). These papers give an explicit scheme for encoding braids in a cluster state. UPDATE: I notice you have just added the fourth point. The Raussendorf-Harrington-Goyal papers I linked to above do provide a very nice way of doing topological quantum computing via cluster states, which allows classical operations on the knots to be done within the Clifford group, and hence the stabilizer and graph-transformation approaches I previously mentioned can be used to efficiently simulate these operations. |
_unix.387138 | I want to modify my grub.cfg to select the font dynamically based on the screen resolution. I have a 4K display on my laptop, but often boot with a 1080 external monitor, and the font that works on the 4K display is huge if using the external screen. I do not want to force a lower resolution. I can mostly determine the current video mode based on the output of the 'videoinfo' command, but I don't see how to get the output of that command into a variable so that I can parse it with the 'regexp' command. | Grub2: Set font dynamically based on video resolution? | grub2 | null |
_unix.141299 | My laptop is an Aspire E1-431. When I installed Linux Mint 15 and 17, I found that the NTFS drives cannot be mounted: Error mounting /dev/sda3 at /media/kutti/BE6C20D66C208B6B: Command-line `mount -t ntfs -o uhelper=udisks2,nodev,nosuid,uid=1000,gid=1000,dmask=0077,fmask=0177 /dev/sda3 /media/kutti/BE6C20D66C208B6B' exited with non-zero exit status 14: The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount '/dev/sda3': Operation not permitted. The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option. Here I have enclosed my terminal output. After editing /etc/fstab and entering the command sudo mount -a, I get the following message: [mntent]: line 13 in /etc/fstab is bad [mntent]: line 14 in /etc/fstab is bad | Mounting NTFS Drives in Linux Mint | linux;ntfs | null |
_softwareengineering.300223 | In light of the recent OBJ_obj2txt vulnerability in LibreSSL (which was found during the OpenSMTPD audit, and does not affect OpenSSL), it came to my attention that the memory leak issue likely resulted from some earlier code refactoring, where the block scoped variable char *bndec was moved out to be function scoped instead.I know first-hand that there is this great resistance to block scoped variables within old-school projects like OpenBSD, but what other justification would there be to move the char *bndec declaration?More broadly, when was block scoping introduced for variables in C? All I could find is that it was already part of C89. Is that where it started, or was it also part of an earlier spec? | When was block scope for variables introduced to C, and why is it still frowned upon? | c;scope;memory usage | null |
_webapps.28020 | Is there a way to integrate music files from my Google Drive with Google Music? If not, are there any plans to add this functionality? I upload a lot of my files to Google Drive and it would be convenient to be able to play my music files from there, or even import them from there into Google Music. | Is there a way to integrate my Google Drive with Google Music? | google drive;google music | null |
_unix.237193 | I've been looking everywhere and didn't find the answer, so I'm looking up to you! A program at my job outputs many (10-50) *.pc files with this folder structure: RESULTS/MODEL_Y0/Positioning_1.pc RESULTS/MODEL_Y0/SK312_2SK_Y0_2012.pc RESULTS/MODEL_Y100/Positioning_2.pc RESULTS/MODEL_Y100/SK312_2SK_Y100_2012.pc RESULTS/MODEL_Y250/Positioning_45.pc RESULTS/MODEL_Y250/SK312_2SK_-Y575_2012.pc Each .pc file contains absolute paths starting on line xxx (not always 101, but it's always the second occurrence of the word INCLU), like: INCLU / Positioning_1.pc INCLU / /ST/statika/AGP-Pedestrian_Ansa-Meta/SK312_2SK_xxx/SK312_SERIE.inc INCLU / /ST/statika/AGP-Pedestrian_Ansa-Meta/SK312_2SK_xxx/SK312_xPL_impactor.inc INCLU / /ST/statika/AGP-Pedestrian_Ansa-Meta/SK312_2SK_xxx/SK312_materials.inc I need to change these lines from absolute paths to relative ones, like: INCLU / Positioning_1.pc INCLU / ../../SK312_SERIE.inc INCLU / ../../SK312_xPL_impactor.inc INCLU / ../../SK312_materials.inc I have done this by writing a script placed above the RESULTS folder (part of the script): grep -rl "${SEARCH}" --include \*.pc ./ | xargs sed -i "s#${SEARCH}#${REPLACE}#g" where $SEARCH = /ST/statika/AGP-Pedestrian_Ansa-Meta/SK312_2SK_xxx/ and $REPLACE = ../../ BUT here is the problem. When operating from a path longer than 81 characters, the program outputs the .pc files in the same folder-structure pattern, but inside the .pc file the absolute paths are split onto a new line by a - at character position 81: INCLU / Positioning_1.pc INCLU / /ST/statika/uziv/JVERNER/PROJEKTY/Ansa/AGP-Pedestrian_Ansa-Meta/SK312_2S-K_xxx/SK312_SERIE.inc INCLU / /ST/statika/uziv/JVERNER/PROJEKTY/Ansa/AGP-Pedestrian_Ansa-Meta/SK312_2S-K_xxx/SK312_xPL_impactor.inc INCLU / /ST/statika/uziv/JVERNER/PROJEKTY/Ansa/AGP-Pedestrian_Ansa-Meta/SK312_2S-K_xxx/SK312_materials.inc where $SEARCH = /ST/statika/uziv/JVERNER/PROJEKTY/Ansa/AGP-Pedestrian_Ansa-Meta/SK312_2SK_xxx/ Here's the problem: my script doesn't match the variable $SEARCH. An even bigger problem is that the path could be longer than 180 characters, so there would be a three-line path with two - dividers. I can't work out how to write a script that would work with these multi-line paths, so that the path would shorten to ../../SK312_*.inc as before with the short one-line absolute path. | How to rewrite multiline path into one-line relative path | shell script | If your program is cutting the lines, you will need to join them before running your sed. For example: grep -rl "${SEARCH}" --include \*.pc ./ | xargs sed -i "s/-$//; s/-\n//; s#${SEARCH}#${REPLACE}#g" Or grep -rl "${SEARCH}" --include \*.pc ./ | xargs perl -i -pe "s/-\n//; s#$SEARCH#$REPLACE#g" Alternatively, you could use find instead of grep: find -type f -name '*.pc' -exec perl -i -pe "s/-\n//; s#$SEARCH#$REPLACE#g" {} + All of the above approaches will recurse into subdirectories. |
_reverseengineering.2897 | Assuming that I have a binary file with code for an unknown CPU, can I somehow detect the CPU architecture? I know that it depends mostly on the compiler, but I think that for most CPU architectures there should be a lot of CALL/RETN/JMP/PUSH/POP opcodes (statistically more than others). Or maybe should I search for some patterns in code specific to a CPU (instead of opcode occurrence)? | Tool or data for analysis of binary code to detect CPU architecture | binary analysis | When you have a hammer, all the problems look like nails... I've studied something called Normalized Compression Distance - NCD - some time ago, and I'd give it a try if I had a problem similar to yours. I'd make a database of examples. I would take 20 programs for each architecture you want to know, with variable sizes, and save them. When confronted with a program that I wanted to know which architecture it is, I'd compute its NCD against all my examples. I'd pick the best (smallest) NCD and would then verify if it was a real match (let's say, by trying to run it on the discovered architecture). Update: I've always done it by hand, when it comes to NCD. How I did it: you have 20 files for SPARC and you call them A01, A02, A03, and so on. Your x86 files: B01, B02, etc. You get the unknown file and call it XX. Choose your preferred compression tool (I used Gzip, but see remarks at the end of this answer). Calculate NCD for the first pair: NCD(XX, A01) = ( Z(XX+A01) - min(Z(XX), Z(A01)) ) / max(Z(XX), Z(A01)). Z(something) means that you compress the something with Gzip and get the file size after compression. For example, 8763 bytes, so Z(something) = 8763. XX + A01 means that you concatenate things: you append the A01 file to the end of the XX file. In Linux, you could do a cat XX A01 > XXA01. min() and max(): you calculate the compressed sizes of XX and A01, and use the minimum and maximum that you get. So you'll have an NCD value: it'll lie between 0 and 1, and use as many decimal places as you can, because sometimes the difference is in the 7th or 8th digit. It'll be like comparing 0.999999887 to 0.999999524. You'll do that for every file, so you'll have 20 NCD results for SPARC, 20 for x86... Get the smallest NCD of all. Let's say that the B07 file gave you the smallest NCD. So, probably, the unknown file is x86. Tips: your unknown and your test files must have a similar size. When you compare a file with bigger or smaller ones, NCD won't do its magic. So, if you'll be testing files of 5 to 10k, I'd get test files of 2.5k, 5k, 7.5k, 10k, 12.5k... In my Master's degree I got better results always using the smallest NCD value. The second best method was to do some voting: get the 5 smallest NCD results, and see which architecture got more votes. Ex.: the smallest NCDs were A03, A05, B02, B06, B07 -> B got 3 votes, so I'd say it's x86... Compressors based on the Zip construction have a limitation of 32kB: the way they compress things, they just consider 32kB at a time. If your XX + A01 is bigger than this, Gzip, Zip, etc. won't give you good results. So, for files that are bigger than 15 or 16kB, I'd choose another compressor: PPMD, Bzip... |
_codereview.136639 | Wikipedia has an example of a decorator pattern here:https://en.wikipedia.org/wiki/Decorator_pattern#Second_example_.28coffee_making_scenario.29I was trying to solve this using functional style using Java 8 just to compare the Oop style and functional style of solving the same problem.The solution I came up:1.CoffeeDecorator.javapublic class CoffeeDecorator {public static Coffee getCoffee(Coffee basicCoffee, Function<Coffee, Coffee>... coffeeIngredients) { Function<Coffee, Coffee> chainOfFunctions = Stream.of(coffeeIngredients) .reduce(Function.identity(),Function::andThen); return chainOfFunctions.apply(basicCoffee);}public static void main(String args[]) { Coffee simpleCoffee = new SimpleCoffee(); printInfo(simpleCoffee); Coffee coffeeWithMilk = CoffeeDecorator.getCoffee(simpleCoffee, CoffeeIngredient::withMilk); printInfo(coffeeWithMilk); Coffee coffeeWithWSprinkle = CoffeeDecorator.getCoffee(coffeeWithMilk,CoffeeIngredient::withSprinkles); printInfo(coffeeWithWSprinkle);}public static void printInfo(Coffee c) { System.out.println(Cost: + c.getCost() + ; Ingredients: + c.getIngredients());}}2.CoffeeIngredient.javapublic class CoffeeIngredient { public static Coffee withMilk(Coffee coffee) { return new Coffee() { @Override public double getCost() { return coffee.getCost() + 0.5; } @Override public String getIngredients() { return coffee.getIngredients() + , Milk; } };}public static Coffee withSprinkles(Coffee coffee) { return new Coffee() { @Override public double getCost() { return coffee.getCost() + 0.2; } @Override public String getIngredients() { return coffee.getIngredients() + , Sprinkles; } };}}Now, I am not so convinced with the solution in the CoffeeIngredient. If we had a single responsibility in the Coffee interface, getCost(), using the functional style and applying the decorator pattern seems a lot better and cleaner. It would basically boil down to a Function ,we would not need the abstract class, separate decorators and can just chain the functions.But in the coffee example, with 2 behaviors of the cost and description on the Coffee interface, I am not so convinced that this is a significant value addition as we are creating an anonymous class,overriding the 2 methods.I am not looking at a performance perspective but rather looking at it from a functional vs oop style of solving the problem.If we were to restrict our solution to functional design/style using Java 8, then :Questions:1) Is this functional style of solution acceptable ?2) If not, is there a better way to solve it using Java 8 functional style rather than creating the anonymous classes which seem to implement Coffee interface ? | Decorator pattern using Java 8 | java;design patterns;functional programming | null |
_cogsci.15962 | I often end up having loads of things which I am learning. One of the patterns which I have observed over time is that I often end up leaving things halfway, for months, and then I have to restart doing the basics again. I feel that if only I had taken a few steps more, I could have gained much more from my revisions. I have a couple of questions: (1) Is this good practice? (2) Is there any cognitive reason behind this? (3) How does one overcome/control this tendency to start off something new before finishing the older ones? | What are the techniques to get over the learning plateau? | psychology;self discipline | null |
_softwareengineering.202870 | I have around 10000+ strings and have to identify and group all the strings which look similar (I base the similarity on the number of common words between any two given strings). The more common words, the more similar the strings would be. For instance: (1) "How to make another layer from an existing layer" (2) "Unable to edit data on the network drive" (3) "Existing layers in the desktop" (4) "Assistance with network drive". In this case, strings 1 and 3 are similar, with common words "Existing" and "Layer", and 2 and 4 are similar, with common words "Network Drive" (eliminating stop words). The steps I'm following are: (1) Iterate through the data set. (2) Do a row-by-row comparison. (3) Find the common words between the strings. (4) Form a cluster where the number of common words is greater than or equal to 2 (eliminating stop words). (5) If the number of common words < 2, put the string in a new cluster. (6) Assign the rows either to the existing clusters or form a new one, depending upon the common words. (7) Continue until all the strings are processed. I am implementing the project in C#, and have got to step 3. However, I'm not sure how to proceed with the clustering. I have researched a lot about string clustering but could not find any solution that fits my problem. Your inputs would be highly appreciated. | Clustering Strings on the basis of Common Substrings | c#;sql;strings;cluster;data mining | null |
_codereview.166199 | Background: I have already come around to the SOLID principles, and am applying them in everything that I create. Now I am reading a lot of articles about TDD and BDD, aiming to begin applying those concepts in new projects. I decided to use xUnit and Moq as my helpers, because of simplicity and LINQ to Mocks. Like I try to do with every new concept that I learn, I am trying to come up with a pattern to follow, so things become more familiar to work with. The code: The following code is my first attempt to get into it (mainly based on two articles in the comments). The class being tested is not currently implemented; the point here is to come up with a good approach to implement BDD using xUnit with the Visual Studio runner. To get at that, I tried to use inheritance to pass context to each test case of the same SUT, using names that give a clear intent of each case. Also, I tried to separate the action from the assertion and context setup. The intent of the future implemented class is to generate specific flavors of some IQuery (like IFrom, ICount, IName), which represents pieces of an SQL query, dynamically built in some other class, beyond the scope of this. The IInstructionFactory dependency is the abstraction of a class that generates those IQuery-derived interfaces and is implemented in another assembly (one for MySql, another for SQL Server). The IExpressionTranslator dependency is the abstraction of a class that takes some lambda expression and converts it to ICondition (which in turn is a derivation of IQuery too). Base context class: public abstract class ContextSpecification { protected ContextSpecification() { Context(); BecauseOf(); } protected virtual void BecauseOf() { } protected virtual void Context() { } protected virtual void Cleanup() { } } My first try at BDD unit testing: public class describe_CommonQueryCreator : ContextSpecification { private CommonQueryCreator queryCreator; private Mock<IInstructionFactory> factoryMock; private Mock<IExpressionTranslator> translatorMock; protected override void Context() { SetupExpressionTranslator(); SetupInstructionFactory(); queryCreator = new CommonQueryCreator(factoryMock.Object, translatorMock.Object); } protected virtual void SetupExpressionTranslator() => translatorMock = new Mock<IExpressionTranslator>(); protected virtual void SetupInstructionFactory() => factoryMock = new Mock<IInstructionFactory>(); public class when_creating_count_query : describe_CommonQueryCreator { private ICount resultQuery; public class given_correct_input : when_creating_count_query { private readonly string tableName = "MimasTest"; private readonly Expression<Func<bool>> predicate = () => true; private Views.ICondition mockedCondition; protected override void SetupExpressionTranslator() { base.SetupExpressionTranslator(); mockedCondition = new Mock<Views.ICondition>().Object; translatorMock.Setup(translator => translator.TranslateToCondition(predicate)).Returns(mockedCondition); } protected override void SetupInstructionFactory() { base.SetupInstructionFactory(); IName mockedName = new Mock<IName>().Object; INameList mockedTables = new Mock<INameList>().Object; IFrom mockedFrom = new Mock<IFrom>().Object; factoryMock.Setup(factory => factory.CreateName(tableName)).Returns(mockedName); factoryMock.Setup(factory => factory.CreateNameList(It.Is<IEnumerable<IName>>(valueList => valueList != null && valueList.Count() == 1 && valueList.First() == mockedName))).Returns(mockedTables); factoryMock.Setup(factory => factory.CreateFrom(mockedTables, It.Is<IEnumerable<IJoin>>(joinList => joinList != null && !joinList.Any()))).Returns(mockedFrom); factoryMock.Setup(factory => factory.CreateCount(mockedFrom, It.Is<IEnumerable<ICondition>>(conditionList => conditionList != null && conditionList.Count() == 1 && conditionList.First() == mockedCondition))); } protected override void BecauseOf() => resultQuery = queryCreator.CreateCountQuery(tableName, predicate); [Fact] public void it_should_return_not_null() => Assert.NotNull(resultQuery); [Fact] public void it_should_call_InstructionFactory_CreateCount_with_correct_values() => factoryMock.VerifyAll(); } public class given_null_tableName_argument : when_creating_count_query { [Fact] public void it_should_throw_ArgumentNullException() => Assert.Throws<ArgumentNullException>("tableName", () => queryCreator.CreateCountQuery(null, () => true)); } public class given_empty_tableName_argument : when_creating_count_query { [Fact] public void it_should_throw_ArgumentException() => Assert.Throws<ArgumentException>("tableName", () => queryCreator.CreateCountQuery(string.Empty, () => true)); } public class given_null_predicate_argument : when_creating_count_query { [Fact] public void it_should_throw_ArgumentNullException() => Assert.Throws<ArgumentNullException>("predicate", () => queryCreator.CreateCountQuery("MimasTest", null)); } } } My future-implemented SUT class: internal sealed class CommonQueryCreator { public CommonQueryCreator(IInstructionFactory instructionFactory, IExpressionTranslator expressionTranslator) { InstructionFactory = instructionFactory; ExpressionTranslator = expressionTranslator; } public IInstructionFactory InstructionFactory { get; } public IExpressionTranslator ExpressionTranslator { get; } public ICount CreateCountQuery(string tableName, Expression<Func<bool>> predicate) { throw new NotImplementedException(); } } This is how it looks in Test Explorer (screenshot omitted). I think my Moq expectation setup is correct; as far as I can see, everything should work correctly. My questions are: (1) Is this a right way of doing BDD? (2) Normally I use Pascal casing for method names, but I have read (and understand the arguments) that underscores read better in BDD naming conventions; is this the right way? (3) In this pattern, when asserting exceptions I can't use a BecauseOf() override, because it is called in the base class constructor. So I called the SUT function inside Assert.Throws(); I couldn't come up with a workaround. Any ideas? (4) Structurally, is there something that could be better? | Generate pieces of an SQL query | c#;unit testing;moq;bdd | public abstract class ContextSpecification { protected ContextSpecification() { Context(); BecauseOf(); } protected virtual void BecauseOf() { } protected virtual void Context() { } protected virtual void Cleanup() { } } This class does a few things wrong: (1) The constructor seems to call methods that go beyond a normal initialization. (2) The constructor calls virtual methods. This might not work as you expect it to: see "Virtual member call in a constructor" ("By having a virtual call in an object's constructor you are introducing the possibility that inheriting objects will execute code before they have been fully initialized."). (3) The three virtual methods have no implementation. They should be abstract. |
_unix.381091 | I would like to re-run the cap command if it failed, through a shell script with parameters. For example, the first command executes successfully but the second command fails; when I pass the parameter rerun, the script should start executing again from the second command and continue with the rest of the commands. ssh -q $username@$server << EOF set -e cd $CT_PATH && cap -q -s instance=$instance mode=quiet diagnostics:all cap production deploy cap sales-demo deploy exit 1 EOF | Restart a script if it failed part of the way through | shell script;shell | null |
_scicomp.4720 | I want to perform $k$-nearest neighbor search in a multidimensional space, but not using, for example, the $L_2$ distance. I want the user to specify some similar-pair examples and then perform a search using this information. What algorithm can I use for this? | $k$-Nearest Neighbor Search using examples | statistics;high dimensional;nearest neighbors | null |
_unix.309008 | I'm running Ubuntu GNOME 16.04.1 on my HP Pavilion ab048tx, which has an Elantech touchpad. I've tried various dkms fixes available on the internet (including psmouse-elantech-x551c and psmouse-elantech-v7), but nothing seems to get multi-touch into action. Basic functions work (move, click, tap and right-click). Any idea what to do? My (partial) output for cat /proc/bus/input/devices is as follows: I: Bus=0011 Vendor=0002 Product=0001 Version=0000 N: Name=PS/2 Elantech Touchpad P: Phys=isa0060/serio1/input0 S: Sysfs=/devices/platform/i8042/serio1/input/input5 U: Uniq= H: Handlers=mouse0 event6 B: PROP=1 B: EV=7 B: KEY=70000 0 0 0 0 B: REL=3 For dmesg | grep elantech, it is: [ 2.123958] psmouse serio1: elantech: unknown hardware version, aborting... [ 2.429095] input: PS/2 Elantech Touchpad as /devices/platform/i8042/serio1/input/input5 [ 2506.145724] psmouse serio1: elantech: unknown hardware version, aborting... [ 2506.449970] input: PS/2 Elantech Touchpad as /devices/platform/i8042/serio1/input/input20 For synclient -l: Couldn't find synaptics properties. No synaptics driver loaded? Relevant output from Xorg.0.log: [ 28.346] (II) config/udev: Adding input device PS/2 Elantech Touchpad (/dev/input/event6) [ 28.346] (**) PS/2 Elantech Touchpad: Applying InputClass evdev pointer catchall [ 28.347] (II) systemd-logind: got fd for /dev/input/event6 13:70 fd 38 paused 0 [ 28.347] (II) Using input driver 'evdev' for 'PS/2 Elantech Touchpad' [ 28.347] (**) PS/2 Elantech Touchpad: always reports core events [ 28.347] (**) evdev: PS/2 Elantech Touchpad: Device: /dev/input/event6 [ 28.347] (--) evdev: PS/2 Elantech Touchpad: Vendor 0x2 Product 0x1 [ 28.347] (--) evdev: PS/2 Elantech Touchpad: Found 3 mouse buttons [ 28.347] (--) evdev: PS/2 Elantech Touchpad: Found relative axes [ 28.347] (--) evdev: PS/2 Elantech Touchpad: Found x and y relative axes [ 28.347] (II) evdev: PS/2 Elantech Touchpad: Configuring as mouse [ 28.347] (**) evdev: PS/2 Elantech Touchpad: YAxisMapping: buttons 4 and 5 [ 28.347] (**) evdev: PS/2 Elantech Touchpad: EmulateWheelButton: 4, EmulateWheelInertia: 10, EmulateWheelTimeout: 200 [ 28.347] (**) Option config_info udev:/sys/devices/platform/i8042/serio1/input/input5/event6 [ 28.347] (II) XINPUT: Adding extended input device PS/2 Elantech Touchpad (type: MOUSE, id 13) [ 28.347] (II) evdev: PS/2 Elantech Touchpad: initialized for relative axes. [ 28.347] (**) PS/2 Elantech Touchpad: (accel) keeping acceleration scheme 1 [ 28.347] (**) PS/2 Elantech Touchpad: (accel) acceleration profile 0 [ 28.347] (**) PS/2 Elantech Touchpad: (accel) acceleration factor: 2.000 [ 28.347] (**) PS/2 Elantech Touchpad: (accel) acceleration threshold: 4 [ 28.347] (II) config/udev: Adding input device PS/2 Elantech Touchpad (/dev/input/mouse0) [ 28.347] (II) No input driver specified, ignoring this device. [ 28.347] (II) This device may have been added with another device file. I tried using modprobe psmouse proto=imps, but then it is detected as PS/2 Generic Mouse and still nothing. evdev is currently handling the touchpad, and I also tried using libinput, but it doesn't work. If I try to force the synaptics driver using /usr/share/X11/xorg.conf.d, the touchpad completely stops working. EDIT: My device has a touchscreen (detected as Radiyum). Please ask for more if needed! | New Elantech touchpad lacks multitouch (latest kernel) | kernel;drivers;touchpad | null |
_unix.153540 | I have just created a new raid 5 array using 3 4TB drives (aiming for 8TB of space) on an ubuntu system. While I had a few issues getting started, I believe I have set it up correctly, and I have created an ext4 filesystem on it as a single partition using the whole array. When I look at it in gparted, though, it reports Size: 7.28 TiB (this is correct - I know the difference between TB and TiB), Used: 117 GiB, Unused: 7.16 TiB. If I run sudo df -h I get Filesystem Size Used Avail Use% Mounted on /dev/md0 7.2T 51M 6.8T 1% /home/brad/raid which is a different size again. The available is 400G less than the size, but the used is only 51M here! My question is: is this the expected output at this point in time, or is this an indication that something has gone awry? If it is expected, then what is using the space that is reported in gparted as used? In case anyone wants to see it, here is the output from cat /proc/mdstat md0 : active raid5 sdb1[0] sdd1[3] sdc1[1] 7813772288 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_] [=>...................] recovery = 7.2% (283321088/3906886144) finish=2181.8min speed=27679K/sec unused devices: <none> and from sudo fdisk -l Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000ac78f Device Boot Start End Blocks Id System /dev/sda1 * 2048 472330239 236164096 83 Linux /dev/sda2 472332286 488396799 8032257 5 Extended /dev/sda5 472332288 488396799 8032256 82 Linux swap / Solaris WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes 255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdb1 1 4294967295 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes 255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 1 4294967295 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes 255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdd1 1 4294967295 2147483647+ ee GPT Partition 1 does not start on physical sector boundary. WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sde1 1 1953525167 976762583+ ee GPT Disk /dev/md0: 8001.3 GB, 8001302822912 bytes 2 heads, 4 sectors/track, 1953443072 cylinders, total 15627544576 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Hmmmm, the bit at the end about /dev/md0 not containing a valid partition table is interesting. | Why is so much of my new ext4 filesystem already marked as used? | linux;raid;mdadm | The issue that /dev/md0 doesn't have a partition table is not relevant to your problem. You plainly stated that you created the filesystem on the raw device ("I have created an ext4 filesystem on it as a single partition using the whole array"), so it makes sense that you have no partition table, as you did not partition the space. It's not an issue, but be sure that you never write a partition table to that partition, as you will clobber the beginning of the filesystem and lose access to data. As to your other partitions, I see fdisk complaining that the disks have GPT, and advising you to use gdisk instead of fdisk. It is hard to tell anything from the fdisk output. Now to your primary question: where is the space? /dev/md0 7.2T 51M 6.8T 1% /home/brad/raid Where did your ~400 GB go? They went into the filesystem overhead. The ext4 filesystem preallocates all of the metadata it needs to store to allocate every inode on the system, and additionally, on a volume that big, you'll have a lot of copies of the fs superblock, and a large number of blocks will be allocated to the filesystem journal. There is nothing to fix or change here, and the size of the filesystem metadata will not grow over time in an ext[234] filesystem. Your only real option, if you don't like that amount of filesystem overhead, is to tune your inode size or use a different filesystem. |
_softwareengineering.278195 | I will use Java as an example, but the question pops up in my mind with any language / framework / stack / pattern / ... For instance in Python: should I just use a dict(), or should I subclass it to make my intentions clear? Or is that violating the duck-typing principle of Python? Or ... Imagine a simple Echo Server running on port X. The Server consists of: (1) a Server, which accepts clients and hands them to client handlers; (2) ClientHandlers, who handle connected clients (doh!); (3) the main server, which has a list of registered client handlers and passes the received request to the client handlers in this list, so they can handle or ignore it (not a good design, but it's simple). We may now have a ClientHandler who echoes back the received text capitalized, one that only echoes half of it, one that ... Now consider these two versions: interface GenericServer { void start(); void stop(); /* ... other server related stuff, such as set backlog. */ void registerHandler(java.util.Observer o); } It looks like duck typing to me, but in Java! It's okay to pass in any object of type java.util.Observer as a handler. As far as GenericServer is concerned, it should be able to observe. We could also do: interface SpecializedServer { /* Marker interface to make everything more clear. */ interface ClientHandler extends java.util.Observer { } void start(); void stop(); /* ... other server related stuff, such as set backlog. */ void registerHandler(ClientHandler ch); } Now it uses a Marker Interface to say "Attention! The passed-in observer MUST be a ClientHandler! It must know about clients." But too much use of marker interfaces may be a sign of bad design too. Which version of these two servers is the right way to do it? What is the right thing to do? | For specialized code, use custom interfaces and types or available generic ones? | object oriented design | null |
_codereview.100432 | I was hoping someone would have a look at how I retrieve the questions from the db, parse the JSON and process the results - possibly advise how I could improve efficiency by streamlining my code. I feel that the way I have processed the results is rather cumbersome!I have a MongoDB that is accessed by custom server code. The server deals with matchmaking, rooms, lobbies etc. for multi-player game. The MongoDB is on same space and holds all the questions for the mobile phone quiz game.This is my first attempt at such a project and although I'm competent in Java and my JSON and Mongo skill are novice.My result pulls back every questionEntry element in the documents for a particular TV show that has a metaTag array element which match the search string.JSON sampleThe query: // Query our collection documents metaTag elements for a matching string// @SuppressWarnings(deprecation)public void queryMetaTags(String query){ // Query to search all documents in current collection List<String> continentList = Arrays.asList(new String[]{query}); DBObject matchFields = new BasicDBObject(season.questions.questionEntry.metaTags, new BasicDBObject($in, continentList)); DBObject groupFields = new BasicDBObject( _id, $_id).append(questions, new BasicDBObject($push,$season.questions)); //DBObject unwindshow = new BasicDBObject($unwind,$show); DBObject unwindsea = new BasicDBObject($unwind, $season); DBObject unwindepi = new BasicDBObject($unwind, $season.questions); DBObject match = new BasicDBObject($match, matchFields); DBObject group = new BasicDBObject($group, groupFields); ArrayList<DBObject> pipeline = new ArrayList<>(); pipeline.add(unwindsea); pipeline.add(unwindepi); pipeline.add(match); pipeline.add(group); @SuppressWarnings(deprecation) AggregationOutput output = mongoColl.aggregate(pipeline); //CommandResult output = (CommandResult) //mongoColl.aggregate(pipeline,new BasicDBObject(explain,true)); //mongoColl.explainAggregate(unwindsea,unwindepi,match,group); String jsonString = null; JSONObject jsonObject = null; jsonResultsArray = null; ourResultsArray = new ArrayList<JSONObject>(); // Loop for each document in our collection for (DBObject result : output.results()) { try { // Parse our results so we can add them to an ArrayList jsonString = JSON.serialize(result); jsonObject = new JSONObject(jsonString); jsonResultsArray = jsonObject.getJSONArray(questions); // Put each of our returned questionEntry elements into an ArrayList for (int i = 0; i < jsonResultsArray.length(); i++) { //System.out.println(jsonResultsArray element ( + i + ): + jsonResultsArray.getJSONObject(i).toString()); ourResultsArray.add(jsonResultsArray.getJSONObject(i)); } } catch (JSONException e1) { e1.printStackTrace(); } } }Each game match consists of 10 questions for this topic, so I pull out a random 10 from the results with: public void pullOut10Questions(){ // Array to hold 10 random numbers between 0 and our results total ArrayList<Integer> ourRandomNumbersList = generate10RandomNumbersInRange(ourResultsArray.size()); // Array to hold our 10 random questions from our results ourQuestionsArray = new ArrayList<JSONObject>(); // Loop through each of our results in array for (int i = 0; i < ourResultsArray.size(); i++) { // Loop through our array holding our 10 random numbers for(int j = 0; j < ourRandomNumbersList.size(); j++) { // If our results array index equals one of our 10 random numbers if(ourRandomNumbersList.get(j) == i) { // Then add that result to our final questionElement array 
ourQuestionsArray.add(ourResultsArray.get(i)); //try { // Remove later it's for print test to console<--------------------------- // System.out.println(Our QuestionEntry from mongo: + ourResultsArray.get(i).getString(questionEntry)); //} catch (JSONException e) { // e.printStackTrace(); //} } } }} // Return 10 random numbers in rangepublic ArrayList<Integer> generate10RandomNumbersInRange(int range){ Random rand = new Random(); int e; int i; int g = 10; // Store random numbers is HashSet HashSet<Integer> randomNumbers = new HashSet<Integer>(); for (i = 0; i < g; i++) { e = rand.nextInt(range); randomNumbers.add(e); // Keep adding numbers until we reach 10 if (randomNumbers.size() <= 10) { if (randomNumbers.size() == 10) g = 10; g++; randomNumbers.add(e); } } // Return our random numbers as an ArrayList ArrayList<Integer> al = new ArrayList<Integer>(); Iterator<Integer> iter = randomNumbers.iterator(); while(iter.hasNext()) { al.add(iter.next()); } return al;}I then throw these results at my pojo's so I can manage them easilly: public void pojoOurQuestions(){ // Copy our questions from ourQuestionsArray into our pojo's // Loop for each JSONObject in ourQuestionsArray for(int i = 0; i < ourQuestionsArray.size(); i++) { try { JSONObject jsonObject = ourQuestionsArray.get(i); String s = jsonObject.getString(questionEntry); // Need to put our question entries into our Pojo's questionEntry = new ObjectMapper().readValue(s, QuestionEntry.class); //System.out.println(Our questionEntry from pojo: + questionEntry.toString()); } catch (JSONException e) { e.printStackTrace(); } catch (JsonParseException e) { e.printStackTrace(); } catch (JsonMappingException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } }}I've not used JSON with Java before, the process of having to parse my results back and forth seems a untidy to me. Maybe I have got things wrong? | Game server in Java querying MongoDB for JSON | java;parsing;mongodb;quiz | null |
_cs.14517 | I feel the notion that there are countably many Turing machines is wrong. Suppose there is a Turing machine whose input alphabet is {0}. If we replace the input alphabet {0} with {a} and replace every occurrence of 0 with a in the transition table, then we get another Turing machine. Obviously, these two machines are different because they recognize different languages, but using any one reasonable encoding scheme, they could be encoded into the same string. So claiming Turing machines are countable only by enumerating their encodings is wrong, because actually there isn't a bijection between Turing machines and their encodings. Is my opinion right? | Are Turing machines really countable? | turing machines | null |
_codereview.93281 | I want to create a production-ready producer/consumer that could help me avoid thread synchronization hell. Is this thread-safe? The main issue is to be safe with exceptions that can arrive. public class AsynchSimpleProducer<T> : IDisposable { private readonly Action<Exception> error; private readonly BlockingCollection<T> blockingCollection = new BlockingCollection<T>(50); public event Func<T, Task> NewItem; public AsynchSimpleProducer(Action<Exception> error) { this.error = error; Task.Factory.StartNew(() => { /* this loop ends only when blockingCollection.CompleteAdding is called */ Parallel.ForEach(blockingCollection.GetConsumingPartitioner(), SendItem); }); } private void SendItem(T item) { try { if (NewItem != null) NewItem(item).Wait(); } catch (Exception ex) { error(ex); } } public void Send(T newValue) { blockingCollection.Add(newValue); } public void Dispose() { blockingCollection.CompleteAdding(); } } Example of usage: simpleProducer = new AsynchSimpleProducer<int>(Error); simpleProducer.NewItem += simpleProducer_NewItem; /* this part can be in a task */ var next = random.Next(0, 1000); simpleProducer.Send(next); | Event-based producer/consumer in C# | c#;producer consumer | null |
_unix.286353 | Users vi and rust share the group rust and want to use some file in a shared manner. rust$ ls -l myfile -rw-rw-r-- 1 vi rust 0 May 30 03:48 myfile rust$ stat myfile | grep Gid Access: (0664/-rw-rw-r--) Uid: ( 1000/ vi) Gid: ( 1057/ rust) rust$ id uid=1048(rust) gid=1057(rust) groups=1057(rust),... rust$ cat myfile rust$ touch myfile touch: cannot touch myfile: Permission denied rust$ dd of=myfile dd: failed to open myfile: Permission denied vi$ id uid=1000(vi) gid=1000(vi) groups=1000(vi),{many unrelated groups skipped},1057(rust),{many unrelated groups skipped} vi$ touch myfile vi$ Only the vi user has write access to the file, despite g+w. root# chown rust myfile rust$ ls -l myfile -rw-rw-r-- 1 rust rust 0 May 30 03:51 myfile vi$ touch myfile rust$ chmod g-w myfile vi$ touch myfile touch: cannot touch myfile: Permission denied vi can or can't write to rust's file depending on the g+w bit, as expected. Why does the group-writable bit work only in one direction? The file remains unavailable even in a+w mode, although a third user can write to the file with a+w... getfacl myfile returns Invalid argument. The file is on a local reiserfs. id vi and id rust match the id in the respective users' shells, up to the order of unrelated groups. One more experiment: vi$ chmod a+w myfile vi$ stat myfile File: myfile Size: 0 Blocks: 0 IO Block: 4096 regular empty file Device: fb02h/64258d Inode: 12618147 Links: 1 Access: (0666/-rw-rw-rw-) Uid: ( 1000/ vi) Gid: ( 1057/ rust) Access: 2016-05-30 18:49:20.000000000 +0300 Modify: 2016-05-30 20:48:23.000000000 +0300 Change: 2016-05-30 20:48:23.000000000 +0300 Birth: - root# dived -J -u rust -g rust -- id uid=1048(rust) gid=1057(rust) groups=1057(rust) root# dived -J -u rust -g rust -- dd of=/home/vi/home/rust/myfile dd: failed to open /home/vi/home/rust/myfile: Permission denied root# dived -J -u rust -g 99999 -- id uid=1048(rust) gid=99999 groups=99999 root# dived -J -u rust -g 99999 -- dd of=/home/vi/home/rust/myfile sfdasafd 0+1 records in 0+1 records out 9 bytes (9 B) copied, 1.14971 s, 0.0 kB/s A mystery. Can grsecurity patches be a problem? Next experiment: root# stat /home/vi/home/rust/myfile File: /home/vi/home/rust/myfile Size: 0 Blocks: 0 IO Block: 4096 regular empty file Device: fb02h/64258d Inode: 13848412 Links: 1 Access: (0664/-rw-rw-r--) Uid: (99997/ UNKNOWN) Gid: (99998/ UNKNOWN) Access: 2016-05-31 00:39:24.000000000 +0300 Modify: 2016-05-31 00:39:24.000000000 +0300 Change: 2016-05-31 00:39:24.000000000 +0300 Birth: - root# getfacl /home/vi/home/rust/myfile getfacl: /home/vi/home/rust/myfile: Invalid argument root# for i in {0..1099}; do if dived -J -u $i -g 99998 -- touch /home/vi/home/rust/myfile 2> /dev/null; then echo $i; fi; done 0 1000 root# root# root# mount -o remount,noacl /home root# root# for i in {0..1099}; do if dived -J -u $i -g 99998 -- touch /home/vi/home/rust/myfile 2> /dev/null; then echo $i; fi; done | head 0 1 2 3 4 5 6 7 8 9 (and so on, basically it works) root# mount -o remount,acl /home root# root# for i in {0..1099}; do if dived -J -u $i -g 99998 -- touch /home/vi/home/rust/myfile 2> /dev/null; then echo $i; fi; done | head 0 1000 root# Looks like getfacl (or its kernel part) is the problem. ACLs are in effect, but are not manageable. | group member unable to write to a group-writable file with reiserfs and extended ACLs | linux;files;permissions;acl;reiserfs | null |
_unix.207255 | I've had a hard disk failure on this pool. I replaced the disk; I don't have any hard errors, and I cannot put it back online, as notified: :~# zpool status data pool: data state: UNAVAIL status: One or more devices could not be opened. There are insufficient replicas for the pool to continue functioning. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/ZFS-8000-3C scan: none requested config: NAME STATE READ WRITE CKSUM data UNAVAIL 0 0 0 insufficient replicas raidz2-0 ONLINE 0 0 0 c2t2d0 ONLINE 0 0 0 c2t3d0 ONLINE 0 0 0 c2t4d0 ONLINE 0 0 0 c2t15d0 ONLINE 0 0 0 c2t6d0 ONLINE 0 0 0 c2t7d0 ONLINE 0 0 0 c2t8d0 ONLINE 0 0 0 c2t9d0 ONLINE 0 0 0 c2t10d0 ONLINE 0 0 0 14132293493917319721 UNAVAIL 0 0 0 was /dev/dsk/c2t11d0s0 c2t12d0 ONLINE 0 0 0 c2t13d0 ONLINE 0 0 0 c2t14d0 ONLINE 0 0 0 I tried this command: :~# zpool online -e data c2t5d0 cannot open 'data': pool is unavailable Why is the zpool data still unavailable? c2t0d0 and c2t1d0 are reserved for the system and are in a ZFS mirror. And I would like to know the meaning of this line: 14132293493917319721 UNAVAIL 0 0 0 was /dev/dsk/c2t11d0s0 because in my mind it should look like this: c2t11d0 UNAVAIL 0 0 0 Thanks. | zpool online doesn't work | solaris;zfs | null |
_unix.179347 | I read a few similar questions/posts and tried the solutions; still stuck. My scenario is simple: an external ext4 drive was powered off (via cat) while operating, and failed to mount on boot. As I dug deeper, it got darker: lars@whorus:~$ sudo mount -t hfsplus /dev/sdc3 /media/lars/external mount: wrong fs type, bad option, bad superblock on /dev/sdc3, lars@whorus:~$ sudo fsck -fr /dev/sdc3 fsck from util-linux 2.20.1 ** /dev/sdc3 ** Checking HFS Plus volume. Invalid B-tree node size(4, 0) ** Volume check failed. lars@whorus:~$ sudo mke2fs -n /dev/sdc3 mke2fs 1.42.9 (4-Feb-2014) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=0 blocks, Stripe width=0 blocks 122085376 inodes, 488337654 blocks 24416882 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=4294967296 14903 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 lars@whorus:~$ sudo e2fsck -b 32768 /dev/sdc3 e2fsck 1.42.9 (4-Feb-2014) e2fsck: Bad magic number in super-block while trying to open /dev/sdc3 lars@whorus:~$ testdisk Command line: TestDisk TestDisk 6.14, Data Recovery Utility, July 2013 Christophe GRENIER <[email protected]> http://www.cgsecurity.org OS: Linux, kernel 3.13.0-33-generic (#58-Ubuntu SMP Tue Jul 29 16:45:05 UTC 2014) x86_64 Compiler: GCC 4.8 Compilation date: 2013-10-29T01:29:29 ext2fs lib: 1.42.9, ntfs lib: libntfs-3g, reiserfs lib: none, ewf lib: none /dev/sda: LBA, HPA, LBA48, DCO support /dev/sda: size 1953523055 sectors /dev/sda: user_max 1953523055 sectors /dev/sda: native_max 1953525168 sectors /dev/sda: dco 1953525168 sectors /dev/sdb: LBA, HPA, LBA48, DCO support /dev/sdb: size 321670847 sectors /dev/sdb: user_max 321670847 sectors /dev/sdb: native_max 321672960 sectors /dev/sdb: dco 321672960 sectors Warning: can't get size for Disk /dev/mapper/control - 0 B - 1 sectors, sector size=512 Hard disk list Disk /dev/sda - 1000 GB / 931 GiB - CHS 121601 255 63, sector size=512 - WDC WD10EZEX-00KUWA0, S/N:WD-WMC1S7063930, FW:15.01H15 Disk /dev/sdb - 164 GB / 153 GiB - CHS 20023 255 63, sector size=512 - HDT722516DLA380, S/N:VDB71BTCCZ4KEC, FW:V43OA80A Disk /dev/sdc - 2000 GB / 1862 GiB - CHS 243197 255 63, sector size=512 - WD My Book 111D, FW:1049 Partition table type (auto): Mac Disk /dev/sdc - 2000 GB / 1862 GiB - WD My Book 111D Partition table type: Mac Interface Advanced HFS+ magic value at 16/82/3 1 P partition_map 1 63 63 2 P Free 64 262207 262144 3 P HFS 262208 3906963439 3906701232 HFS+ blocksize=4096, 2000 GB / 1862 GiB 4 P Free 3906963440 3906963455 16 HFS_HFSP_boot_sector 3 P HFS 262208 3906963439 3906701232 HFS+ blocksize=4096, 2000 GB / 1862 GiB HFS+ magic value at 16/82/3 HFS+ magic value at 16/82/3 Volume header HFS+ OK Backup volume header HFS+ OK Sectors are identical. Superblock Backup superblock 0000 482b0004 80000900 H+...... 482b0004 80000900 H+...... 0008 6673636b 00003a38 fsck..:8 6673636b 00003a38 fsck..:8 0010 ca703bfa d0dcbf86 .p;..... ca703bfa d0dcbf86 .p;..... 0018 00000000 ca709e6a .....p.j 00000000 ca709e6a .....p.j 0020 00007269 00000a94 ..ri.... 00007269 00000a94 ..ri.... 0028 00001000 1d1b70f6 ......p. 00001000 1d1b70f6 ......p. 0030 07a1ba60 125a6e5c ...`.Zn\ 07a1ba60 125a6e5c ...`.Zn\ 0038 00010000 00010000 ........ 00010000 00010000 ........ 0040 00026064 00000000 ..`d.... 00026064 00000000 ..`d.... 0048 00000000 00000083 ........ 00000000 00000083 ........ 0050 00000000 00000000 ........ 00000000 00000000 ........ 0058 00000000 00000000 ........ 00000000 00000000 ........ 0060 00000000 00000000 ........ 00000000 00000000 ........ 0068 109ee824 e4771b69 ...$.w.i 109ee824 e4771b69 ...$.w.i 0070 00000000 03a37000 ......p. 00000000 03a37000 ......p. 0078 00000000 00003a37 ......:7 00000000 00003a37 ......:7 0080 00000001 00003a37 ......:7 00000001 00003a37 ......:7 0088 00000000 00000000 ........ 00000000 00000000 ........ 0090 00000000 00000000 ........ 00000000 00000000 ........ 0098 00000000 00000000 ........ 00000000 00000000 ........ 00A0 00000000 00000000 ........ 00000000 00000000 ........ 00A8 00000000 00000000 ........ 00000000 00000000 ........ 00B0 00000000 00000000 ........ 00000000 00000000 ........ 00B8 00000000 00000000 ........ 00000000 00000000 ........ 00C0 00000000 01c00000 ........ 00000000 01c00000 ........ 00C8 00000000 00001c00 ........ 00000000 00001c00 ........ 00D0 0000d239 00000e00 ...9.... 0000d239 00000e00 ...9.... 00D8 08b60936 00000e00 ...6.... 08b60936 00000e00 ...6.... 00E0 00000000 00000000 ........ 00000000 00000000 ........ 00E8 00000000 00000000 ........ 00000000 00000000 ........ 00F0 00000000 00000000 ........ 00000000 00000000 ........ 00F8 00000000 00000000 ........ 00000000 00000000 ........ 0100 00000000 00000000 ........ 00000000 00000000 ........ 0108 00000000 00000000 ........ 00000000 00000000 ........ 0110 00000000 05000000 ........ 00000000 05000000 ........ 0118 05000000 00005000 ......P. 05000000 00005000 ......P. 0120 125a6e5c 00005000 .Zn\..P. 125a6e5c 00005000 .Zn\..P. 0128 00000000 00000000 ........ 00000000 00000000 ........ 0130 00000000 00000000 ........ 00000000 00000000 ........ 0138 00000000 00000000 ........ 00000000 00000000 ........ 0140 00000000 00000000 ........ 00000000 00000000 ........ 0148 00000000 00000000 ........ 00000000 00000000 ........ 0150 00000000 00000000 ........ 00000000 00000000 ........ 0158 00000000 00000000 ........ 00000000 00000000 ........ 0160 00000000 05000000 ........ 00000000 05000000 ........ 0168 00000000 00005000 ......P. 00000000 00005000 ......P. 0170 00080dd5 00000024 .......$ 00080dd5 00000024 .......$ 0178 00d26916 000000dc ..i..... 00d26916 000000dc ..i..... 0180 00d271ba 00000066 ..q....f 00d271ba 00000066 ..q....f 0188 0146e7eb 0000004a .F.....J 0146e7eb 0000004a .F.....J 0190 0146dd38 00000034 .F.8...4 0146dd38 00000034 .F.8...4 0198 015f75aa 00004e1c ._u...N. 015f75aa 00004e1c ._u...N. 01A0 00000000 00000000 ........ 00000000 00000000 ........ 01A8 00000000 00000000 ........ 00000000 00000000 ........ 01B0 00000000 00000000 ........ 00000000 00000000 ........ 01B8 00000000 00000000 ........ 00000000 00000000 ........ 01C0 00000000 00000000 ........ 00000000 00000000 ........ 01C8 00000000 00000000 ........ 00000000 00000000 ........ 01D0 00000000 00000000 ........ 00000000 00000000 ........ 01D8 00000000 00000000 ........ 00000000 00000000 ........ 01E0 00000000 00000000 ........ 00000000 00000000 ........ 01E8 00000000 00000000 ........ 00000000 00000000 ........ 01F0 00000000 00000000 ........ 00000000 00000000 ........ 01F8 00000000 00000000 ........ 00000000 00000000 ........ HFS_HFSP_boot_sector 3 P HFS 262208 3906963439 3906701232 HFS+ blocksize=4096, 2000 GB / 1862 GiB HFS+ magic value at 16/82/3 HFS+ magic value at 16/82/3 Volume header HFS+ OK Backup volume header HFS+ OK Sectors are identical. HFS_HFSP_boot_sector 3 P HFS 262208 3906963439 3906701232 HFS+ blocksize=4096, 2000 GB / 1862 GiB HFS+ magic value at 16/82/3 HFS+ magic value at 16/82/3 Volume header HFS+ OK Backup volume header HFS+ OK Sectors are identical. New options : Dump : No Align partition: Yes Expert mode : No TestDisk exited normally. I tried using all the listed backup blocks with e2fsck, but they all came back the same as invalid. I tried restoring the backup one via testdisk; still the same. I am hoping to restore the drive without a low-level dd-image type solution, as I don't have 2TB of storage available for the image :( On the plus side, this isn't the system volume, so it's easy to attempt to mount/unmount without other issues. All help is appreciated; I've got about 20 tabs of forum posts open that tend to end in sad stories. | Recovering from bad superblock on external drive | data recovery;ext4;superblock | null |
_webmaster.28462 | We're currently implementing a voucher system on our site which will allow our users to obtain a 25+% discount on certain products, provided they donate 10% of the purchase price to charity. We will offer the ability to share the discounts via social media in return for larger discounts to the sharer for each person who clicks through the link and buys an item. I understand that social links have SEO benefits, but this appears to be based on lots of people sharing the same link. If our voucher users share a unique link, i.e. http://ourdomain.com/sipsfesdf rather than a fixed link http://ourdomain.com/product-name, will we still receive the same benefits? Should we instead share something like http://ourdomain.com/product-name/sipsfesdf? Thanks in advance. | Sharing unique links on social media vs SEO | seo | If I understand you right, rel=canonical may be your friend. Specifically, I assume all those ourdomain.com/asdfghjkl links point to a page that is (almost) identical to the standard product page at ourdomain.com/product-name. If so, you should mark them as being the same by including a tag like: <link rel="canonical" href="http://ourdomain.com/product-name" /> in the head section of the page. That way, search engines will treat links pointing to the shared links (almost) as if they had pointed directly to the main product page, and will only list that page in their result pages. Another possibility would be to have the shared links do an HTTP 301 redirect to the product page after recording that the visitor came in through the shared link. (This is e.g. how the StackExchange software used on this site works: if you click the share buttons next to a question, or the word "link" below any post, you get a short link that contains your user ID. When someone follows that link, the software records it and then redirects them to the normal URL of the page.) For search engines, this has almost the same effect; the difference is that rel=canonical links are only parsed by search engines, while 301 redirects affect browsers too. Generally, I'd consider 301 redirects more user-friendly for purposes like this, but both do have some advantages. For more information, see e.g. this page from Google's Webmaster Tools help. As for ourdomain.com/product-name/sipsfesdf vs. ourdomain.com/sipsfesdf, I doubt there's any SEO difference, at least as long as you use either rel=canonical or 301 redirects. From a user-experience viewpoint, the longer links are more informative, but also take up more space in a short message, which could make people more reluctant to share them. I'd suggest allowing both, and deciding which to generate based on the medium (e.g. short links for Twitter, longer for Facebook since it parses them out anyway). Or, for a generic "copy this link and share it" interface, you could present both and let the user choose. |
_webapps.73080 | Google Sheets has a revision history accessible from File / See revision history. However, it doesn't appear to easily allow you to see when a particular part of the sheet (e.g. a cell) was changed? Is there an easy way to do this, short of clicking through every revision and seeing when that cell changes?(Incidentally, I would consider this analogous to features such as git blame from the git RCS). | Is there a quick way to see when a cell in a Google Sheets was last edited? | google spreadsheets;version control | null |
_webapps.86658 | I have this code:

function function1(event){
    var timezone = "GMT-8";
    var timestamp_format = "hh:mm:ss a";
    var updateColName = "Agent";
    var timeStampColName = "Start Time";
    var sheet = event.source.getSheetByName("Active Review");
    var actRng = event.source.getActiveRange();
    var editColumn = actRng.getColumn();
    var index = actRng.getRowIndex();
    var headers = sheet.getRange(1, 1, 1, sheet.getLastColumn()).getValues();
    var dateCol = headers[0].indexOf(timeStampColName);
    var cell = sheet.getRange(index, dateCol + 1);
    var date = Utilities.formatDate(new Date(), timezone, timestamp_format);
    cell.setValue(date);
}

function function2(event){
    var timezone = "GMT-8";
    var timestamp_format = "hh:mm:ss a";
    var updateColName = "Additional Notes";
    var timeStampColName = "End Time";
    var sheet = event.source.getSheetByName("Active Review");
    var actRng = event.source.getActiveRange();
    var editColumn = actRng.getColumn();
    var index = actRng.getRowIndex();
    var headers = sheet.getRange(1, 1, 1, sheet.getLastColumn()).getValues();
    var dateCol = headers[0].indexOf(timeStampColName);
    var cell = sheet.getRange(index, dateCol + 1);
    var date = Utilities.formatDate(new Date(), timezone, timestamp_format);
    cell.setValue(date);
}

The script only works if I have only function 1 or function 2. How can I merge these two events into one? | Run two timestamps for time in/out in one Google Sheets | google spreadsheets;google apps script | null
_unix.298597 | I'm trying to write an alias to get the IP of a docker container. The command is the following:

docker inspect redis | grep IPAddress | awk 'NR==3{ print $2 }' | sed 's/[^"]*"\([^"]*\)".*/\1/'

If I launch it from the command line it works properly. Then I inserted it into bash_aliases:

alias redis-ip="docker inspect redis | grep IPAddress | awk 'NR==3{ print $2 }' | sed 's/[^\"]*\"\([^\"]*\)\".*/\1/'"

But when I launch redis-ip I get this error:

sed: -e expression #1, char 19: invalid reference \1 on `s' command's RHS

Can anyone tell me what the error is about? | `sed` regexp error | linux;sed;docker | Do use a shell function for this rather than an alias:

function redis-ip {
  docker inspect redis | grep IPAddress | awk 'NR == 3 { print $2 }' | sed 's/[^"]*"\([^"]*\)".*/\1/'
}

Whether the sed does what you want or not, I don't know, as I don't know what the docker command outputs.
_unix.26006 | When you set up a new Ubuntu or OS X installation, a user is generally created for you. On OS X it is whatever username you pick. On Ubuntu (the server version) usually the ubuntu user is created.

The way I understand it, there is also a root user, which you can access via something like sudo su - root, entering the password of the ubuntu user or the user you created, which is part of the administrators group. Once you switch to root I think you can use the passwd command and change root's password.

But what was root's password before that? Does it exist? Is it a random string of numbers and letters? How does the system deal with that? | Is there a root password on OS X and Ubuntu? | ubuntu;osx;password;root | I can answer only for Ubuntu.

In Ubuntu the root user has a locked password. From the passwd man page:

 -l, --lock
     Lock the password of the named account. This option disables a
     password by changing it to a value which matches no possible
     encrypted value (it adds a '!' at the beginning of the password).

You can see the ! in /etc/shadow.

A user with a locked account cannot change its password, but root can, without prior entering of the old password.
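A quick way to check this for yourself (the example output line is illustrative; the exact fields vary between installs):

sudo grep '^root:' /etc/shadow
# e.g.  root:!:15630:0:99999:7:::
#            ^ the '!' means no password can ever match, i.e. locked

# The same mechanism works for any account:
sudo passwd -l someuser   # lock: prepend '!' to the stored hash
sudo passwd -u someuser   # unlock: remove it again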
_codereview.123188 | I am working on an app in which I need to get a destination list from a server and set it in an autocomplete text view. I am using a text watcher, and on its onTextChanged event I am fetching destinations data using an async task. My problem is that I am not sure whether I am doing it the right way. Please review my code below.

FragmentHotels.java

public class FragmentHotels extends Fragment implements TextWatcher {

    final Handler mHandler = new Handler();

    @Override
    public void onAttach(Activity activity) {
        super.onAttach(activity);
    }

    public FragmentHotels() {
        // Required empty public constructor
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        // Inflate the layout for this fragment
        View rootView = inflater.inflate(R.layout.fragment_hotels, container, false);
        etDestination = (AutoCompleteTextView) rootView.findViewById(R.id.et_destination);
        etDestination.addTextChangedListener(this);
        return rootView;
    }

    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) {
    }

    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {
        Log.i(TAG, "TEXT CHANGED TO: " + s.toString());
        if (!TextUtils.isEmpty(s.toString())) {
            getDestinationsAsync(s.toString());
        }
    }

    @Override
    public void afterTextChanged(Editable s) {
    }

    private void getDestinationsAsync(final String term) {
        new AsyncTask<Void, Void, Void>() {
            final String t = term;
            final Map<String, String> postParams = new HashMap<>();

            @Override
            protected Void doInBackground(Void... params) {
                Log.i(TAG, "do in background in getDestinationAsync");
                postParams.put("term", t);
                new PostHandler(TAG, 2, 2000).doPostRequest("http://www.bhagwatiholidays.com/admin/webservice/destination_name.php", postParams, new PostHandler.ResponseCallback() {
                    @Override
                    public void response(int status, String response) {
                        Log.i(TAG, "GOT RESPONSE SUCCESSFULLY");
                        try {
                            Log.i(TAG, "PARSING JSON");
                            JSONArray array = new JSONArray(response);
                            Log.i(TAG, "JSON ARRAY SIZE: " + array.length());
                            final String[] destinations = new String[array.length()];
                            for (int i = 0; i < array.length(); i++) {
                                Log.i(TAG, "LABEL: " + array.getJSONObject(i).getString("label"));
                                destinations[i] = array.getJSONObject(i).getString("label");
                            }
                            Log.i(TAG, "SETTING ADAPTER NOW");
                            mHandler.post(new Runnable() {
                                @Override
                                public void run() {
                                    FragmentHotels.this.etDestination.setAdapter(new ArrayAdapter<String>(FragmentHotels.this.getActivity(), android.R.layout.simple_list_item_1, destinations));
                                }
                            });
                        } catch (JSONException e) {
                            e.printStackTrace();
                        }
                    }
                });
                return null;
            }
        }.execute();
    }
} | Loading autocomplete text view adapter dynamically using async task in Android | java;android;json | null
_webmaster.67940 | I want to check whether I can find certain words on a web page. For example, I want to check if a web page contains the word 'abc'. I know that some pages contain meta tags with name="keywords" and name="description", but not every page contains these tags, so I'm also searching in <p> tags. But where else should I search for matching words to determine the topic of the page? | How to find keywords in HTML code? | html;keywords;meta keywords | There are several things you can do. You can view page source code and look for the following:

title tag
description meta-tag
h1, h2, h3... tags
first 1 or 2 paragraphs

You will want to discount any stop words or common words of course. While the description meta-tag has little or no value for SEO, it does offer great clues to keywords and page topic. You can always create a spread-sheet to keep track of pages and keywords of course.

Another option, and one that I prefer, is to use a keyword density analyzer. Keyword density for SEO is largely a myth; however, these tools offer great clues to the hidden potential of any page and clearly pin-point the topic keywords that any page will perform well for. This may be the best option for you.

My favorite now requires an account: http://www.ranks.nl/ This may be a good idea if you are going to use a tool often. Today, I recommend: http://tools.seobook.com/general/keyword-density/ This is a free tool and not quite as detailed as ranks.nl but may be good enough for your purposes. I use both from time to time to check that I am on track when I write a new page.

Another option is to use an SEO analysis tool. I like best for this: SEO PowerSuite from http://www.link-assistant.com/ This is an expensive option and I only recommend this if you need to use the tool a lot for competitive analysis or checking your own work on a larger site.

If this is too much for you, then perhaps Screaming Frog can help: http://www.screamingfrog.co.uk/seo-spider/ It can be free up to a limit (500 pages), then you have to purchase a license. People seem to like this tool and I have tried it. It requires the option to spider your site directly. I suspect you can target single pages too. Please note, there are more features than listed on this initial page so dig deeper if this looks like a good option for you. I can recommend this tool from my experience.
_webmaster.100479 | Let's assume my website is mywebsite.com. I need to block the website for all countries except India, but we need to handle the block gracefully, i.e. by showing a page saying that right now the site is not providing services in their country.

Page on my website visible in India: mywebsite.com/category.php

When someone outside India opens the website they should see the following URL: mywebsite.com/world/category.php

Kindly note that Google Search should always show the URLs without "world" in them. Following are the solutions I have in mind:

Scenario: Someone tries to open mywebsite.com/category.php from the US. The code will check the IP location and the user will be redirected to mywebsite.com/world/category.php

Solution 1: Add noindex and nofollow tags on mywebsite.com/world/category.php so that Google does not index this page, and use a 302 redirect. This page will be served to everyone outside India.

Solution 2: Add a 302 redirect from mywebsite.com/category.php to mywebsite.com/world/category.php and also add a canonical on mywebsite.com/world/category.php pointing to mywebsite.com/category.php

The problem with this approach is a loop for Googlebot: first we do a redirect, and then we put a canonical on the redirect target pointing back to the page that redirected. Sounds wrong to me, but I am not sure.

Note: This question is about SEO strategy; I want your suggestions on my SEO strategy. I do not want a technical solution for the redirection via .htaccess or for blocking non-India traffic by IP. | How can I block my website in other countries? | seo;301 redirect;web development;canonical url;ecommerce | null
_unix.270552 | Is there any difference between (# comments taken from documentation)

command > filename    # Docs: Redirect stdout to a file.

and

command 1> filename   # Docs: Redirect stdout to file "filename".

| Difference between 1> and > | bash;shell;files;io redirection | From the Bash manual's section on Redirection (emphasis mine):

Redirection of output causes the file whose name results from the expansion of word to be opened for writing on file descriptor n, or the standard output (file descriptor 1) if n is not specified. If the file does not exist it is created; if it does exist it is truncated to zero size.

So, there is no difference between >foo and 1>foo.
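For a quick demonstration in any Bourne-like shell (file names here are arbitrary):

echo hello > out.txt     # implicit file descriptor 1
echo hello 1> out.txt    # exactly the same effect

# The explicit number only starts to matter for other descriptors:
ls /nonexistent 2> err.txt   # redirect stderr instead of stdout

Both out.txt files end up byte-for-byte identical.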
_cs.69140 | Is it possible to use Rice's theorem to prove that the emptiness problem is undecidable? By the emptiness problem I mean the question whether a certain machine doesn't accept any input. If you can prove it using Rice's theorem, can you also prove the acceptance problem undecidable (whether a certain machine will accept a certain string)? | Rice theorem to prove Emptiness problem | computability;undecidability;decision problem | null
_datascience.19408 | I'm new to machine learning. I have implemented a simple SVR, and I have noticed that there is a strong error reduction when I normalize or scale both the input features and the output. I would like to know:

Are there any drawbacks to normalizing/scaling the output?
Could the output value be affected by this transformation?
How can I retrieve the original output value? | Relation between output normalization and error value, performing regression | regression;svm;normalization | null
_unix.88628 | I have a command that processes data slowly. The command processes lines from a file and writes the results to the output file data.txt:

my_command > data.txt

The issue I have is that I'd like to examine output lines in the data.txt file as they are processed. The problem is that no output appears in my output file until the OS decides to dump data to the output file, which happens every few hours. Is there any way I can force data to be flushed to the file more frequently? | Flush data to file frequently for long running command? | pipe | One option is to unbuffer your command's stdout using stdbuf from GNU Coreutils.

I doubt I would be able to explain the technicalities behind it any better than the author does here
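For example, a minimal sketch (assuming my_command buffers through the C stdio layer, which is what stdbuf can influence):

stdbuf -oL my_command > data.txt   # -oL = line-buffered stdout

# then, in another terminal, watch lines arrive as they are produced:
tail -f data.txt

Note that stdbuf only helps for programs that use stdio buffering; a program that does its own internal buffering is unaffected.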
_unix.93550 | Ideally, I'm trying to use my laptop and a 3G phone as a WiFi router and redirect forwarded HTTP (but not HTTPS) traffic to Privoxy, which then forwards the traffic via an SSH tunnel to a ziproxy VPS.

For the sake of simplicity, Privoxy is currently set to defaults, i.e. it is not forwarding to another proxy, with the exception that accept-intercepted-requests is set to 1. Also, sysctl net.ipv4.ip_forward=1.

The following iptables commands work for local traffic but are ignored for FORWARD traffic, i.e. users connected by WiFi are not filtered by Privoxy but the local user is; I want the opposite behaviour.

iptables -t nat -A POSTROUTING -o ${INTERNET_IFACE} -j MASQUERADE
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner privoxy -j ACCEPT
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 8118
iptables -A FORWARD -i ${WIFI_IFACE} -j ACCEPT

How do I force forwarded HTTP traffic to go through Privoxy? | iptables redirect FORWARD http traffic to privoxy port | iptables;privoxy | The reason it doesn't work is because you can only modify packets in certain ways at certain parts of the netfilter stack. Modifying the destination on the way out is too late. You need to modify it on the way in.

iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8118

I recommend researching the various tables that make up the netfilter stack.
_codereview.82485 | I have a component that I want to render with certain styling based upon the props that it will receive. I'm currently defaulting the prop types, and then creating the styling at render as such:

getDefaultProps: function(){
    return{
        size: "small",
        shape: "rounded"
    };
},

statics : {
    function getShape(shape){
        if(shape === "rounded"){
            return "59px";
        }else{
            return "2px";
        }
    },
    function getSize(size){
        if(size === "large"){
            return "136px";
        }else if(size === "medium"){
            return "68px";
        }else{
            return "34px";
        }
    }
},

render: function(){
    var borderpx = Avatar.getShape(this.props.shape);
    var imgpx = Avatar.getSize(this.props.size);
    return(
        <img src={this.props.img} style={{"border-radius": borderpx, height: imgpx}} />
    );
}

What I'm trying to figure out:

Is this the correct way to dynamically render the style in React?
Is there anything in my code that I could do better? | Changing Component Style | javascript;react.js | null
_softwareengineering.205901 | There are some design guidelines about testable code in The Art of Unit Testing. The first one is "Make methods virtual by default". I'm curious to know your opinion about the non-virtual-by-default behavior in C#. I've read about Hejlsberg's opinions, but I think one of the most important reasons could be that it may lead us to the composition-over-inheritance principle.

Could composition over inheritance be one of those reasons which make non-virtual-by-default preferred over virtual-by-default?

UPDATE

Regarding this subject, please consider a test-driven point of view, where we want to write testable code. While we are encouraged to make all members virtual by default (in the mentioned book), we can follow composition over inheritance and keep going non-virtual-by-default. Isn't that better? | Does non-virtual-by-default lead us to composition-over-inheritance? | c#;design patterns;object oriented;unit testing;inheritance | null
_datascience.12964 | From the TensorFlow code (Tensorflow RnnCell):

num_units: int, The number of units in the LSTM cell.

I can't understand what this means. What are the units of the LSTM cell? The input, output and forget gates? Does this mean the number of units in the recurrent projection layer for a deep LSTM? Then why is this called the "number of units in the LSTM cell"? What is an LSTM cell, and what is the difference vs. an LSTM block? What is the minimal LSTM unit, if not the cell? | What is the meaning of The number of units in the LSTM cell? | neural network;tensorflow;rnn | As the helpful comments in that function say,

The definition of cell in this package differs from the definition used in the literature. In the literature, cell refers to an object with a single scalar output. The definition in this package refers to a horizontal array of such units.

In essence, the layer will contain multiple parallel LSTM units, structurally identical but each eventually learning to remember some different thing.
_unix.156478 | I am trying to emulate an environment that has centos5 and tomcat6 (for some reason), which is a problem because there are no RPMs for tomcat6 compatible with centos5 available to me. I do have the source for tomcat6 and I can build it from source.

However, I have a number of RPMs which I would like to use that are dependent on tomcat6. I know they will run on centos5 and should work if installed. However, they won't install, because even if I did install tomcat6 the RPMs would think it was not installed, since no tomcat6 RPM was installed, only the code.

Is there a way to express to RPM/yum that I have built the program from source and it should move on and do the install anyway? I know I can just force the install, but is there a more elegant approach? | make RPM recognize dependency built from source | yum;rpm | So you have unsatisfied dependencies that you need to stub out.

Make a dummy package that supplies the missing Provides: is the general approach.

(Aside: RPM5 (this isn't you) also permits Requires: to be stubbed out using a configuration file /etc/rpm/sysinfo/Providename.)
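For example, a minimal stub spec could look like this (the name, version and RPMS path are placeholders; adjust them to whatever your real tomcat6 build provides, and note the default build directory differs between RPM versions):

cat > tomcat6-stub.spec <<'EOF'
Name:           tomcat6
Version:        6.0.0
Release:        0.stub
Summary:        Stub that marks a source-built tomcat6 as installed
License:        ASL 2.0
BuildArch:      noarch
Provides:       tomcat6 = %{version}-%{release}

%description
Empty package whose only purpose is to satisfy RPM dependencies
on tomcat6 for software installed outside of RPM.

%files
EOF

rpmbuild -bb tomcat6-stub.spec
rpm -ivh ~/rpmbuild/RPMS/noarch/tomcat6-stub-6.0.0-0.stub.noarch.rpm

After that, packages with Requires: tomcat6 should install normally.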
_softwareengineering.160675 | What do you call classes without methods?

For example,

class A
{
    public string something;
    public int a;
}

Above is a class without any methods. Does this type of class have a special name? | What do you call classes without methods? | programming languages;naming;data structures;class | Most of the time: An anti-pattern.

Why? Because it facilitates procedural programming with "Operator" classes and data structures. You separate data and behaviour, which isn't exactly good OOP.

Often times: A DTO (Data Transfer Object)

Read-only data structures meant to exchange data, derived from a business/domain object.

Sometimes: Just a data structure.

Well, sometimes you just gotta have those structures to hold data that is plain and simple and has no operations on it. But then I wouldn't use public fields but accessors (getters and setters).
_codereview.70054 | How idiomatic is my code?

var Clock = function(hour, minute) {
    this.hour = hour || 0;
    this.minute = minute || 0;

    this.plus = function(minutes) {
        computed_minutes = this.minute + minutes
        if (computed_minutes > 60) {
            this.minute = computed_minutes % 60
            this.hour += computed_minutes / 60
        } else {
            this.minute = computed_minutes
        }
        if (this.hour >= 24) {
            this.hour = this.hour - 24
        }
        this.hour = Math.round(this.hour)
        this.minute = Math.round(this.minute)
        return this;
    },

    this.minus = function(minutes) {
        computed_minutes = this.minute - minutes
        if (computed_minutes < 0) {
            this.minute = 60 + computed_minutes % 60
            this.hour -= (1 + Math.abs(computed_minutes / 60))
        } else {
            this.minute = computed_minutes;
        }
        if (this.hour < 0) {
            this.hour = 24 + this.hour
        }
        this.hour = Math.round(this.hour)
        this.minute = Math.round(this.minute)
        return this;
    },

    this.equals = function(other) {
        return this.hour == other.hour && this.minute == other.minute;
    }
}

Clock.at = function(hour, minute) {
    c = new Clock(hour, minute);
    return c;
}

Clock.prototype.toString = function() {
    function format(n) {
        return n >= 10 ? n : "0" + n;
    }
    return format(this.hour) + ":" + format(this.minute)
}

module.exports = Clock; | Implement a Clock in JavaScript | javascript;datetime | Some issues:

Every instance of Clock gets its own instance of plus and minus rather than attaching these functions to the prototype.

Clock.at sets a global variable c.

Clock.prototype.toString creates the format function each time it is called; you can move this out of the function and into a different scope so it is only created once.

plus and minus do a lot of checking that could be moved into a single set function, and then this can also be called from the constructor (so that the same checks and rounding are also done in the constructor).

24 and 60 appear as magic constants. In this context it is easy to understand what they are, but it would be better to assign them to named constants (HOURS_PER_DAY and MINUTES_PER_HOUR) that identify why those magic numbers are being used.

My suggestions:

module.exports = (function(){
    var Clock = function( hours, minutes ) {
        this.set( hours, minutes )
    }

    Clock.at = function(hours, minutes) {
        return new Clock(hours, minutes);
    }

    Clock.HOURS_PER_DAY = 24;
    Clock.MINUTES_PER_HOUR = 60;

    Clock.prototype.set = function( hours, minutes ){
        var hrs = Math.round( hours || 0 );
        var mns = Math.round( minutes || 0 );
        this.hour = ( hrs + Math.floor( mns / Clock.MINUTES_PER_HOUR ) ) % Clock.HOURS_PER_DAY;
        if ( this.hour < 0 ) {
            this.hour += Clock.HOURS_PER_DAY;
        }
        this.minute = mns % Clock.MINUTES_PER_HOUR;
        if ( this.minute < 0 ) {
            this.minute += Clock.MINUTES_PER_HOUR;
        }
        return this;
    }

    Clock.prototype.plus = function(minutes) {
        return this.set( this.hour, this.minute + minutes )
    },

    Clock.prototype.minus = function(minutes) {
        return this.set( this.hour, this.minute - minutes );
    },

    Clock.prototype.equals = function(other) {
        return this.hour == other.hour && this.minute == other.minute;
    }

    function format(n) {
        return n >= 10 ? n : "0" + n;
    }

    Clock.prototype.toString = function() {
        return format(this.hour) + ":" + format(this.minute)
    }

    return Clock;
})();
_unix.10362 | In ps xf

26395 pts/78   Ss     0:00  \_ bash
27016 pts/78   Sl+    0:04  |   \_ unicorn_rails master -c config/unicorn.rb
27042 pts/78   Sl+    0:00  |       \_ unicorn_rails worker[0] -c config/unicorn.rb

In htop, it shows up like:

[htop screenshot showing many more unicorn_rails entries]

Why does htop show more processes than ps? | Why does `htop` show more processes than `ps` | process;ps;top;htop;thread | By default, htop lists each thread of a process separately, while ps doesn't. To turn off the display of threads, press H, or use the Setup / Display options menu, "Hide userland threads". This puts the following line in your ~/.htoprc or ~/.config/htop/htoprc (you can alternatively put it there manually):

hide_userland_threads=1

(Also hide_kernel_threads=1, toggled by pressing K, but it's 1 by default.)

Another useful option is "Display threads in a different color" in the same menu (highlight_threads=1 in .htoprc), which causes threads to be shown in a different color (green in the default theme).

In the first line of the htop display, there's a line like Tasks: 377, 842 thr, 161 kthr; 2 running. This shows the total number of processes, userland threads, kernel threads, and threads in a runnable state. The numbers don't change when you filter the display, but the indications thr and kthr disappear when you turn off the inclusion of user/kernel threads respectively.

When you see multiple processes that have all characteristics in common except the PID and CPU-related fields (NIce value, CPU%, TIME+, ...), it's highly likely that they're threads in the same process.
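You can see the same discrepancy with ps alone; by default it prints one line per process, but it can also print one line per thread:

ps -e  | wc -l    # processes only
ps -eL | wc -l    # one line per thread (LWP); matches htop's larger count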
_cs.76171 | Continuing from this answer: https://cs.stackexchange.com/a/56072/43035

I don't understand how it's possible to map many transition functions $\delta_1,...,\delta_n$ of a NDTM into just two transition functions $\delta_0',\delta_1'$. How will conflicts be handled?

For example:

$\delta_1(q_1, a) = (q_2, b, R)\\ \delta_2(q_1, a) = (q_3, c, L)\\ \delta_3(q_1, a) = (q_4, d, R)$

How can you map $\delta_3$? | Mapping many transition functions into two transition functions | complexity theory;turing machines;nondeterminism | Suppose that you have three options $o_1,o_2,o_3$.

You first guess whether to apply $o_1$ or not; if not, you stay in place.
If you didn't apply $o_1$, you guess whether to apply $o_2$ or $o_3$.

Here is how to implement it using transitions, using your example:
$$\begin{align*}
&\delta_1(q_1,a) = (q_2,b,R) \\
&\delta_2(q_1,a) = (q_s,a,R) \\
&\delta_1(q_s,\sigma) = (q_t,\sigma,L) \\
&\delta_2(q_s,\sigma) = (q_t,\sigma,L) \\
&\delta_1(q_t,a) = (q_3,c,L) \\
&\delta_2(q_t,a) = (q_4,d,R)
\end{align*}$$
Here $q_s,q_t$ are new states, and $\sigma$ is any tape symbol.
_cstheory.38147 | Supposing $P^{\#P}\subseteq BPP$, the polynomial hierarchy collapses. Does the counting hierarchy collapse as well?

Irrespective of whether $P^{\#P}\subseteq BPP$, are there any collapse results for the counting hierarchy that imply collapse results for the polynomial hierarchy, and vice versa? | Where is the counting hierarchy if polynomial hierarchy collapses? | counting complexity;polynomial hierarchy | null
_unix.367888 | I have this ~/.ssh/config to connect to a distant server (very distant, another continent) through a firewall:

Host ras
    HostName ras.cse.ust.hk
    User farshidhss
    ForwardAgent yes
    ForwardX11 no
    LogLevel DEBUG3
    ServerAliveInterval 10
    ServerAliveCountMax 3

Host farshid
    HostName 10.89.226.143
    User luca
    ForwardX11 no
    LogLevel DEBUG3
    ServerAliveInterval 10
    ServerAliveCountMax 3
    ProxyCommand ssh -o 'ForwardAgent yes' -o 'ForwardX11 no' ras 'ssh-add && nc %h %p'

It's weird because 25% of the time I can connect successfully, but most of the time I get these messages:

luca@jarvis:~$ ssh farshid
debug1: Executing proxy command: exec ssh -o 'ForwardAgent yes' -o 'ForwardX11 no' ras 'ssh-add && nc 10.89.226.143 22'
debug1: permanently_drop_suid: 1000
debug1: identity file /home/luca/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /home/luca/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/luca/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/luca/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/luca/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/luca/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/luca/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/luca/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.1
debug2: resolving "ras.cse.ust.hk" port 22
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to ras.cse.ust.hk [143.89.40.101] port 22.
debug1: connect to address 143.89.40.101 port 22: Connection timed out
ssh: connect to host ras.cse.ust.hk port 22: Connection timed out
ssh_exchange_identification: Connection closed by remote host

Why, and how can I solve this? | SSH Connection timed out for server via firewall? | ssh;firewall | null
_webapps.102535 | I'm trying to add a photo to Google Maps which I have taken with my digital camera, which has no built-in GPS. Google Maps didn't show any option to add a new photo, so I checked Google Maps Help to learn how to do it, but in the section "Add photos from Your contributions" it says:

Under the "Contribute" tab, click "Add your photos to Maps". You might not see this option if you haven't taken any photos with your phone or we can't find a location for your photos.

Is there any other way I can add non-geotagged photos to Google Maps? | How to add Non Geo Tagged Photo to Google Maps? | google maps;photos | null
_unix.245780 | But they give instructions like

cd downloaded_program
./configure
make install

This creates the ELF that is needed, and probably some .so files.

Why not put those inside a zip file for download, like with Windows apps? Is there any reason why they need to be compiled by the user? | Why are programs not distributed in compiled format? | software installation;package management;make;source;elf | null
_unix.317822 | I am trying to go through individual emails and retrieve the host name. Each email has a To: section with an email address, e.g. [email protected]. I'm trying to retrieve just aol.com.

Eg:

To: [email protected] (abc123)
To: [email protected],hk (Jim)
To: [email protected]\ (Jim)

Expected output:

aol.com
yahoo.com,hk
yahoo.com\ | retrieving host name from email address | text processing;regular expression;email | null
_webmaster.14940 | I have the following code:

p {
    font-family: Helvetica, Arial, sans-serif;
    font-weight: 100;
}

It works on Mac OS X in Safari and Firefox, but the font-weight doesn't work on Windows in any browser. Why? How can I solve that? | font-weight on Windows | css;windows | null
_unix.97967 | It's a fresh install of Sabayon Linux. I installed mysql (equo install dev-db/mysql) and configured it (emerge --config ...), but it doesn't start using the /etc/init.d script:

# /etc/init.d/mysql start
 * WARNING: mysql is already starting
# /etc/init.d/mysql status
 * You are attempting to run an openrc service on a
 * system which openrc did not boot.
 * You may be inside a chroot or you may have used
 * another initialization system to boot this system.
 * In this situation, you will get unpredictable results!
 * If you really want to do this, issue the following command:
 * touch /run/openrc/softlevel
# /etc/init.d/mysql stop
 * ERROR: mysql stopped by something else

Touching /run/openrc/softlevel causes even more errors. Googling doesn't advise much.

I remember the recent OpenRC migration on my Gentoo box, but there I'm still using init.d scripts. Has anything else changed that I didn't notice? | Sabayon - mysql (and other services) won't start | gentoo;init script;init;sabayon;openrc | Some services are run by a process manager such as upstart, systemd, OpenRC (your case), SysV and so on. If you run ps ax | egrep -i mysql you'll find out whether mysql is running. Use the following documentation: OpenRC doc
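To confirm which init system actually booted the box (which is what the openrc warning is complaining about), you can check what PID 1 is:

ps -p 1 -o comm=

If that prints systemd, start the service through it instead of the init.d script (the unit name may be mysql, mysqld or mariadb depending on the packaging):

systemctl start mysqld
systemctl enable mysqld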
_webapps.9707 | I would like to do a search query and filter out all results from www.foo.com. How do I do that? | How to exclude a domain from Google search? | google search | This may help (from this site: http://www.greghughes.net/rant/HowToExcludeADomainFromYourGoogleSearchResults.aspx):

Note the minus sign that precedes the site: search operator in this case. That's how we tell Google to exclude the site/domain specified. So there you have it. Want to exclude a domain from your search term? Just specify the domain with -site: and you're all set.

But what if you don't want to specify the domain to exclude every time by hand? In that case, set up a Google Custom Search Engine (http://www.google.com/coop/cse/) and specify during setup that you want your custom search engine to include results from the entire Internet. Then, after your search engine has been created, go to the Control Panel, choose the Sites tab, and from there you can specify as many domains as you like to exclude from every search. You'll get a custom search engine that you can tweak to your heart's content.
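For example, to search for "jaguar" while dropping every result from www.foo.com:

jaguar -site:www.foo.com

The same -site: term can be repeated to exclude several domains at once.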
_unix.244717 | This is still my first time setting up a DomU, with Dom0 being Arch Linux and the DomU as well. I recently figured out that I would need LVM for my setup, as I want at least two partitions (root + swap). My current problem is that I don't know what my LVM setup should be; this is what I have so far:

$ sudo xl create /etc/xen/ArkOS-dev_PV.cfg
Parsing config from /etc/xen/ArkOS-dev_PV.cfg
libxl: error: libxl_device.c:283:libxl__device_disk_set_backend: Disk vdev=sda1 failed to stat: vm_volumes/root.ArkOS_Dev: No such file or directory
libxl: info: libxl.c:1691:devices_destroy_cb: forked pid 529 for destroy of domain 3

My DomU boot configuration file:

$ cat /etc/xen/ArkOS-dev_PV.cfg
name = 'ArkOS_Dev'
kernel = "/mnt/arch/boot/x86_64/vmlinuz"
ramdisk = "/mnt/arch/boot/x86_64/archiso.img"
extra = "archisobasedir=arch archisolabel=ARCH_201511"
memory = 512
disk = [ "phy:vm_volumes/root.ArkOS_Dev,sda1,w", "phy:vm_volumes/swap.ArkOS_Dev,sda2,w", "file:/home/xen/ISO/archlinux-2015.11.01-dual.iso,xvdb:cdrom,r" ]
vif = [ 'mac=00:16:3e:49:2b:a1,bridge=xenbr0' ]
root = "/dev/sda1 ro"

$ lsblk -f
NAME                           FSTYPE      LABEL UUID                                   MOUNTPOINT
sda
|-sda1                         vfat              FF2C-B8A3                              /boot
|-sda2                         btrfs             b3f4f40f-a8a1-4438-a187-dc02f2104340   /
|-sda3                         LVM2_member       HiIS0n-cJ24-mdr5-aUVc-sacn-Hpvx-xM2qd2
| |-vm_volumes-root.ArkOS_Dev
| `-vm_volumes-swap.ArkOS_Dev
`-sda4                         swap              f90e6e95-5f00-4138-aa76-13feb4bce985   [SWAP]

$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vm_volumes/root.ArkOS_Dev
  LV Name                root.ArkOS_Dev
  VG Name                vm_volumes
  LV UUID                tRjJex-aNJg-8gJL-16lD-c1uo-cgfI-1qQEF1
  LV Write Access        read/write
  LV Creation host, time hypervisor, 2015-11-21 19:33:14 +0100
  LV Status              available
  # open                 0
  LV Size                87.29 GiB
  Current LE             22346
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

  --- Logical volume ---
  LV Path                /dev/vm_volumes/swap.ArkOS_Dev
  LV Name                swap.ArkOS_Dev
  VG Name                vm_volumes
  LV UUID                t2OeL1-DDvf-vZLP-dxmh-NDbb-tcqb-zqNfGZ
  LV Write Access        read/write
  LV Creation host, time hypervisor, 2015-11-21 19:33:21 +0100
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

| xl create problem with Arch Linux, Xen, DomU LVM | arch linux;lvm;xen | Solved by this:

name = 'ArkOS_Dev'
kernel = "/mnt/arch/boot/x86_64/vmlinuz"
ramdisk = "/mnt/arch/boot/x86_64/archiso.img"
extra = "archisobasedir=arch archisolabel=ARCH_201511"
memory = 2048
vcpus = 3
disk = [ "format=raw, vdev=xvda, access=rw, target=/dev/vm_volumes/root.ArkOS_Dev", "format=raw, vdev=xvdb, access=rw, target=/dev/vm_volumes/swap.ArkOS_Dev", "format=raw, vdev=xvdc, access=ro, devtype=cdrom, target=/home/xen/ISO/archlinux-2015.11.01-dual.iso" ]
vif = [ 'mac=00:16:3e:49:2b:a1,bridge=xenbr0' ]
root = "/dev/xvda rw"

Then after installing the DomU, with this:

name = 'ArkOS_Dev'
bootloader = "pygrub"
memory = 2048
vcpus = 3
disk = [ "format=raw, vdev=xvda, access=rw, target=/dev/vm_volumes/root.ArkOS_Dev", "format=raw, vdev=xvdb, access=rw, target=/dev/vm_volumes/swap.ArkOS_Dev" ]
vif = [ 'mac=00:16:3e:49:2b:a1,bridge=xenbr0' ]
root = "/dev/xvda rw"
_codereview.101630 | I have written this Java code for a data structure which includes 3 stacks to support four operations in \$O(1)\$: push(int x), pop(), min() and max().

Instead of pushing a new max and min on every push, I tried to optimize the code in this way to use less space.

import java.util.Stack;

public class MyDS {

    Stack<Integer> s;
    Stack<Integer> minStack;
    Stack<Integer> maxStack;

    public MyDS(){
        s = new Stack<Integer>();
        minStack = new Stack<Integer>();
        maxStack = new Stack<Integer>();
    }

    // Push Method
    public void push(int k){
        if(minStack.isEmpty()){
            minStack.push(k);
        }else if(k <= minStack.peek()){
            minStack.push(k);
        }
        if(maxStack.isEmpty()){
            maxStack.push(k);
        }else if(k >= maxStack.peek()){
            maxStack.push(k);
        }
        s.push(k);
    }

    // Pop Method
    public void pop(){
        int popped;
        if(!s.isEmpty()){
            popped = s.pop();
        }else{
            popped = -1;
        }
        if(popped == min()){
            minStack.pop();
        }
        if(popped == max()){
            maxStack.pop();
        }
    }

    // Min Method
    public int min(){
        if(!minStack.isEmpty()){
            return minStack.peek();
        }else{
            return Integer.MIN_VALUE;
        }
    }

    // Max Method
    public int max(){
        if(!maxStack.isEmpty()){
            return maxStack.peek();
        }else{
            return Integer.MAX_VALUE;
        }
    }
}

This is my earlier version of DS:

import java.util.Stack;

public class DS {

    static Stack<Integer> stack;
    static Stack<Integer> minStack;
    static Stack<Integer> maxStack;

    public DS(){
        stack = new Stack<Integer>();
        minStack = new Stack<Integer>();
        maxStack = new Stack<Integer>();
    }

    // Push Method
    public void push(int k){
        stack.push(k);
        if(!minStack.isEmpty()){
            minStack.push(Math.min(k, minStack.peek()));
        }else{
            minStack.push(k);
        }
        if(!maxStack.isEmpty()){
            maxStack.push(Math.max(k, maxStack.peek()));
        }else{
            maxStack.push(k);
        }
    }

    // Pop Method
    public void pop(){
        if(!stack.isEmpty() && !minStack.isEmpty() && !maxStack.isEmpty()){
            stack.pop();
            minStack.pop();
            maxStack.pop();
        }
    }

    // Find Min
    public int findMin(){
        if(!minStack.isEmpty()){
            return minStack.peek();
        }
        return Integer.MIN_VALUE;
    }

    // Find Max
    public int findMax(){
        if(!maxStack.isEmpty()){
            return maxStack.peek();
        }
        return Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        DS ds = new DS();
        System.out.println("Push 7, 6, 5: ");
        ds.push(7);
        ds.push(6);
        ds.push(5);
        System.out.println("S1: " + stack);
        System.out.println("S2: " + minStack);
        System.out.println("S3: " + maxStack);
        System.out.println("Min till now: " + ds.findMin());
        System.out.println("Max till now: " + ds.findMax());
        System.out.println("Push 4, 3: ");
        ds.push(4);
        ds.push(3);
        System.out.println(stack);
        System.out.println(minStack);
        System.out.println(maxStack);
        System.out.println("Min till now: " + ds.findMin());
        System.out.println("Max till now: " + ds.findMax());
        System.out.println("1 pop(): ");
        ds.pop();
        System.out.println("Min till now: " + ds.findMin());
        System.out.println("Max till now: " + ds.findMax());
        System.out.println("1 pop(): ");
        ds.pop();
        System.out.println("Min till now: " + ds.findMin());
        System.out.println("Max till now: " + ds.findMax());
    }
} | A data structure with push(int x), pop(), min() and max() in O(1) | java;stack | Your MyDS class has the right idea, in general.

Special values like -1, Integer.MIN_VALUE, and Integer.MAX_VALUE make me suspicious. All of those special values denote what I consider to be error cases. Using special values that might also be valid data is a dangerous habit that can lead to bugs. Instead of those special numbers, it would be better to throw exceptions, probably NoSuchElementException.

You should also offer a size() and/or an isEmpty() method so that users of your data structure can proactively avoid encountering the exception.

The three instance variables should be private. The default access is rarely appropriate.

java.util.Stack is to be avoided, due to unfortunate historical design decisions (inappropriately extending java.util.Vector, and being thread-safe by default). The documentation recommends ArrayDeque instead.

Of the four operations in MyDS, I think pop() could use the most work. It's weird that pop() doesn't return a value. The -1 is entirely avoidable: if the main stack is empty, the min and max stacks should surely be empty too.

public int pop() {
    if (s.isEmpty()) {
        throw new NoSuchElementException();
    }
    int popped = s.pop();
    if (popped == min()) {
        minStack.pop();
    }
    if (popped == max()) {
        maxStack.pop();
    }
    return popped;
}
_softwareengineering.343976 | Are there useful programs that don't take inputs such as:

A user's keyboard input;
an interrupt from a clock;
data from another server, etc.

A program that computed/printed out predefined data could be turned into a file, right? | Is an inputless program redundant? | programming practices | null
_unix.218034 | I have a Debian 7 VPS setup. I just enabled SSH key authentication and disabled password authentication, but the disabling did not work.

When I attempt to SSH into my VPS, it prompts me for my SSH key password, which then works fine. BUT if I hit cancel, it gives me an "Agent admitted failure to sign" error and then prompts me for the current user account's password. I enter it and it logs me in with my account password, even though it's disabled... Does anyone have any idea why it allows me to log in with password access? Thank you.

I am connecting with a 4096-bit key.

Here is my sshd_config:

Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes

# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 768

# Logging
SyslogFacility AUTH
LogLevel INFO

# Authentication:
LoginGraceTime 120
PermitRootLogin no
StrictModes yes

RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile     %h/.ssh/authorized_keys

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes

# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication no

# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes

X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no

#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes

X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no

#MaxStartups 10:30:60
#Banner /etc/issue.net

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of PermitRootLogin without-password.
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes | Disabling ssh password authentication does not work on my debian VPS | debian;ssh;authentication;vps | null
_codereview.71493 | There are only three scenarios for when the sign-up validations fail, so is there a better way of representing them rather than having 4 scenarios? I don't want to create a model folder, so please don't suggest that. Is there a better way of representing this RSpec code on Rails using Capybara?

feature 'Login' do
  before do
    FactoryGirl.create(:user)
  end

  scenario "success login", js: true do
    # set_speed(:slow)
    visit root_path
    click_link 'Login'
    fill_in 'email', :with => '[email protected]'
    fill_in 'password', :with => 'password'
    click_button 'Login'
    expect(page).to have_content('Logged in successfully')
  end

  scenario "failed login", js: true do
    # set_speed(:slow)
    visit root_path
    click_link 'Login'
    fill_in 'email', :with => '[email protected]'
    fill_in 'password', :with => 'something failed'
    click_button 'Login'
    expect(page).to have_content('Invalid login/password combination')
  end
end

feature "Sign Up" do
  scenario "success sign up", js: true do
    visit root_path
    click_link 'Login'
    click_link 'Sign Up'
    fill_in 'user[email]', :with => '[email protected]'
    fill_in 'user[password]', :with => 'password'
    fill_in 'user[password_confirmation]', :with => 'password'
    click_button 'Create User'
    expect(page).to have_content('User successfully added.')
  end

  scenario "failed sign up/Wrong email format", js: true do
    visit root_path
    click_link 'Login'
    click_link 'Sign Up'
    fill_in 'user[email]', :with => 'signup.example.com'
    fill_in 'user[password]', :with => 'password'
    fill_in 'user[password_confirmation]', :with => 'password'
    click_button 'Create User'
    expect(page).to have_content('is invalid')
  end

  scenario "failed sign up/Short Email address", js: true do
    visit root_path
    click_link 'Login'
    click_link 'Sign Up'
    fill_in 'user[email]', :with => 'sign'
    fill_in 'user[password]', :with => 'password'
    fill_in 'user[password_confirmation]', :with => 'password'
    click_button 'Create User'
    expect(page).to have_content('is too short (minimum is 5 characters)')
  end

  scenario "failed sign up/Long Email address", js: true do
    visit root_path
    click_link 'Login'
    click_link 'Sign Up'
    fill_in 'user[email]', :with => '[email protected]'
    fill_in 'user[password]', :with => 'password'
    fill_in 'user[password_confirmation]', :with => 'password'
    click_button 'Create User'
    expect(page).to have_content('is too long (maximum is 50 characters)')
  end
end | RSpec/Capybara tests | ruby;rspec | Instead of creating 3 long signup features for 3 different email cases, you could do something like:

describe "email is in wrong format" do
  let(:user) { FactoryGirl.create(:user) }
  before { user.email = "something.with.wrongformat" }
  it { should_not be_valid }
end

describe "too long email" do
  let(:user) { FactoryGirl.create(:user) }
  before { user.email = ("a" * 60) + "@example.com" }
  it { should_not be_valid }
end

It'll be the same as the signup process, because in both you deal with user creation.

See also @tokland's answer. I think it could be better not to repeat the pattern of:

it "......." do
  expect(page).to ......
end

but just add subject { page } after the before block at the top. It allows you to write just:

describe "....." do
  before { visit root_path }
  it { should have_content('Desired content') }
end
_softwareengineering.349775 | It is a widely held position that checked exceptions as implemented in Java are a bad idea. If you mark a method as throwing, calling code has to either catch the exception, or be marked as throwing, too. For this reason, it is said that exception specifications are contagious. Consequently, they are being removed from C++ (with the exception of noexcept).

I wonder if you could implement a different kind of checked exceptions. Instead of "Caller must catch this", they would mean "I will only ever throw this".

The calling scope will not have to be changed at all. It helps me as a writer of the called function to understand what I will possibly throw, if I decide to add an annotation. It would also allow the possible exceptions to be shown during code completion. I could imagine special fatal exceptions that will always be allowed, like OutOfMemoryException, or Python's KeyboardInterrupt.

For example (pseudocode):

// simple case (could actually be inferred)
string lookupString(string key) throws only KeyError {
    return m_map[key];
}

// complex failing example
string readFromFile(string filename) throws IndexError {
    File f = File.Open(filename);
    return f.readline();
}
// -> Compilation error:
// File.Open may cause IOError, but readFromFile guarantees to only throw IndexError
// (optional:)
// readFromFile suggests it will throw IndexError,
// but no operation in it may possibly throw IndexError.

In case you give no specification, I would suggest allowing any exception (throw Throwable). I imagine adding this feature to an existing language, and this would be the only backwards-compatible option. For a new language, you could think about a different default.

To deal with legacy code (in an external library), there could be a way to tell the compiler that a certain function or block of code can only ever throw certain exceptions. Conceptually a bit like unsafe in C#:

I swear throws only ParseError {
    return JSON.parse(json);
}

I am not aware of any language that implements this weaker kind of checked exceptions. It seems to me they would have a lot of benefits, but without the drawbacks of Java's checked exceptions. Are there any reasons why this idea wouldn't work? Has any language successfully implemented this, or tried and failed?

(Note, please do not read this as a question looking for a language recommendation and then close it. This is a question about language design; I would like to understand the benefits and drawbacks of this approach better. Possible answers I could imagine would be: "Yes, this has been attempted in language XY, but doesn't work very well because of interplay with generics." or "No, this has never been implemented, but it is a great idea. Because of <language-theoretic argument>, this can be implemented in a sound type system. See this work of Foobar for more information.") | Different kind of checked exceptions - Guarantee to only throw X | language design;exceptions | null
_cs.23295 | I came across the following problem in an exam. We choose a permutation of $n$ elements $[1,n]$ uniformly at random. Now a variable MIN holds the minimum value seen so far, and it is defined to be $\infty$ initially. During our inspection, if we see a value smaller than MIN, then MIN is updated to the new value. For example, if we consider the permutation $$5\ 9\ 4\ 2\ 6\ 8\ 0\ 3\ 1\ 7$$ MIN is updated 4 times, as $5,4,2,0$. What, then, is the expected number of times MIN is updated?

I tried to find the number of permutations for which MIN is updated $i$ times, so that I can find the value via $\sum_{i=1}^{n}iN(i)$, where $N(i)$ is the number of permutations for which MIN is updated $i$ times. But for $i\geq2$, $N(i)$ gets very complicated, and I am unable to find the total sum. | Expected number of updates of minimum | algorithm analysis;runtime analysis;search algorithms | The trick is to use linearity of expectation. Let $E_k$ be the event that the $k$th input is a left-to-right minimum (i.e., requires an update), and let $X_k$ be an indicator variable for $E_k$, that is, $X_k$ is $1$ if $E_k$ happens and $0$ otherwise. Let $U = X_1 + \cdots + X_n$ be the number of updates. The expected number of updates is
$$ \mathbb{E}[U] = \sum_{k=1}^n \mathbb{E}[X_k] = \sum_{k=1}^n \Pr[E_k]. $$
It remains to compute $\Pr[E_k]$. We can construct a random permutation $\pi$ of $[n] = \{1,\ldots,n\}$ in the following way: take a random permutation of $[n]$, and randomly permute the first $k$ elements. This shows that the probability that $\pi(k) = \min(\pi(1),\ldots,\pi(k))$ is exactly $1/k$, and so $\Pr[E_k] = 1/k$. All in all, we get
$$ \mathbb{E}[U] = \sum_{k=1}^n \Pr[E_k] = \sum_{k=1}^n \frac{1}{k} = H_n, $$
the $n$th Harmonic number. It is well-known that $H_n = \ln n + \gamma + O(1/n)$ (Wikipedia contains the entire asymptotic expansion).

We can also compute the variance in this way:
$$\begin{align*}
\mathbb{E}[U^2] &= \sum_{k=1}^n \mathbb{E}[X_k^2] + 2\sum_{k=1}^{n-1} \sum_{\ell=k+1}^n \mathbb{E}[X_k X_\ell] \\ &=
\sum_{k=1}^n \Pr[E_k] + 2\sum_{k=1}^{n-1} \sum_{\ell=k+1}^n \Pr[E_k \land E_\ell],
\end{align*}$$
where $\land$ is logical AND. We already know that $\Pr[E_k] = 1/k$. In order to compute $\Pr[E_k \land E_\ell]$ (where $k < \ell$), we follow the same route as before. With probability $1/\ell$, $\pi(\ell)$ is a left-to-right minimum. Given that, the probability that $\pi(k)$ is a left-to-right minimum is $1/k$. Therefore $\Pr[E_k \land E_\ell] = 1/(k\ell)$, and so
$$\begin{align*}
2\sum_{k=1}^{n-1} \sum_{\ell=k+1}^n \Pr[E_k \land E_\ell] &=
2\sum_{k=1}^{n-1} \sum_{\ell=k+1}^n \frac{1}{k\ell} \\ &=
\left(\sum_{k=1}^n \frac{1}{k}\right)^2 - \sum_{k=1}^n \frac{1}{k^2} \\ &=
H_n^2 - \sum_{k=1}^n \frac{1}{k^2}.
\end{align*}$$
Therefore
$$\begin{align*}
\mathbb{E}[U^2] &= H_n + H_n^2 - \sum_{k=1}^n \frac{1}{k^2}, \\
\mathbb{V}[U] &= H_n - \sum_{k=1}^n \frac{1}{k^2} = \ln n + \gamma - \frac{\pi^2}{6} + O\left(\frac{1}{n}\right).
\end{align*}$$
We can compute all other moments in a similar way using (essentially) the inclusion-exclusion principle and the formula
$$\mathbb{E}[U^d] = \sum_{i_1,\ldots,i_d=1}^n \prod_{i \in \{i_1,\ldots,i_d\}} \frac{1}{i}.$$
If we are careful enough then we can probably establish the asymptotic normality of $U$.
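If you want to sanity-check $H_n$ empirically, here is a small shell one-off (using GNU shuf; the estimate for $n=10$ should land near $H_{10}\approx 2.929$):

for i in $(seq 5000); do
  shuf -i 1-10 | awk 'NR == 1 || $1 < min { min = $1; updates++ } END { print updates }'
done | awk '{ sum += $1 } END { print sum / NR }'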
_webapps.3697 | If I install a service such as Docs, Calendar, or Wave in my Google Apps account I get the ability to change the URL of the service from the stock-standard https://www.google.com/[service]/hosted/[my domain] to something more meaningful.As a result, my calendar service is at http://calendar.[mydomain], documents is at http://docs.[mydomain], etc.However if I install a service from the Google Apps marketplace, I don't get the option to change the URL.Is there any way I can do this? | Can I change the URL of a Google Apps service installed from the Google Apps marketplace? | google apps;url | Third-party apps are usually hosted off-site, so its really up to the app provider to allow that or not.If you have a web server at your site or a shared hosting provider or similar, you could set up a simple redirection yourself. |
_unix.217518 | I have lots of clients that need to check whether a port is open on a remote server. I use the nc command to do this job; however, it always gives a DNS lookup failure, even though I can successfully find the DNS record using dig or nslookup. Does anyone know the reason? Thanks!

[root@client ~]# nc -vzw5 d1.myserver.com 443
d1.myserver.com: forward host lookup failed: Unknown host : No such file or directory

[root@ndc-nz1-1 ~]# nslookup d1.myserver.com
Server:         192.168.1.155
Address:        192.168.1.155#53

Name:   d1.myserver.com
Address: 192.168.2.25

[root@client ~]# dig d1.myserver.com

; <<>> DiG 9.2.4 <<>> d1.myserver.com
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11270
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;d1.myserver.com.               IN      A

;; ANSWER SECTION:
d1.myserver.com.        7200    IN      A       192.168.2.25

;; Query time: 1 msec
;; SERVER: 192.168.1.155#53(192.168.1.155)
;; WHEN: Tue Jul 21 21:13:21 2015
;; MSG SIZE  rcvd: 67

File info on nsswitch.conf:

[root@client ~]# cat /etc/nsswitch.conf | grep hosts
hosts:      files dns
[root@client ~]# ll /etc/nsswitch.conf
-rw-r--r-- 1 root root 1658 Apr  1  2008 /etc/nsswitch.conf

File info on resolv.conf:

[root@client ~]# cat /etc/resolv.conf
nameserver 192.168.1.155
options timeout:1

Trying other commands:

[root@client ~]# ping d1.myserver.com
ping: unknown host d1.myserver.com
[root@client ~]# wget d1.myserver.com
--01:47:57--  http://d1.myserver.com/
           => `index.html'
Resolving d1.myserver.com... 192.168.2.25
Connecting to d1.myserver.com|192.168.2.25|:80... connected. | nc command can't lookup DNS name | dns | Look at the contents of /etc/nsswitch.conf. You probably have not configured the system to use DNS to resolve host names. nslookup and dig don't bother looking to see if the system is configured to use DNS to resolve hostnames. They use DNS regardless. (Though if you don't specify a server, they will use /etc/resolv.conf to find a DNS server to use.)

You want to see DNS in the hosts line, something like hosts: files dns
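One handy way to test the difference is getent, which resolves through the nsswitch.conf machinery just like nc, ping and wget do, whereas dig/nslookup always go straight to DNS:

getent hosts d1.myserver.com

If getent fails while dig succeeds, the problem is in the NSS configuration rather than in DNS itself.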
_codereview.111480 | When my go-to Japanese transcription site went down for a while, I decided to write my own. My application converts between Romaji, Hiragana and Katakana; however, unlike any other converter I've seen, this one does a three-way conversion: there are three text boxes, and typing in one will update the content of the other two.

There's a working version here.

I'd like any feedback to focus on the big picture; that is, how I implemented this conversion. If anything else in my JS could be improved, though, don't hesitate to point that out as well.

How Japanese works

I figured I'd quickly introduce anyone who isn't familiar with Japanese to its writing systems. Keep in mind this is massively simplified, not least because I'm a beginner myself.

A Japanese character represents one syllable, which can either be a vowel, or a consonant followed by a vowel.
There are a few exceptions to how these combinations are transcribed: s + i is shi, t + i is chi, t + u is tsu, h + u is fu.
The only consonant that can appear without a vowel is n.
There are two different alphabets: Hiragana and Katakana. They both encode the same syllables, and they're virtually equivalent, they just use different-looking characters: hiragana are mostly round, katakana are blockier.
There's also Romaji, which is just representations of hiragana and katakana in the latin alphabet.

Example: the syllable me is written as め in hiragana and as メ in katakana. amerika in romaji is あめりか in hiragana and アメリカ in katakana.

A small tsu (っ or ッ) doubles the consonant that comes after it.
A small ya, yu or yo after a syllable ending in i combines the sounds (ki + small ya is kya).
゛ or ゜ in the top right corner modify the consonant sound.
A ー doubles the vowel sound that comes before it. In Romaji, long vowels can also be written with a dash on top: ā is the same as aa.

Example: The Japanese word for presentation is happyōkai in Romaji, はっぴょうかい in hiragana and ハッピョウカイ in katakana.

How the converter works

I regard a string as split into tokens, which is just the name I've given to a character plus modifiers (like small tsu or small ya). My conversion table is an array of objects, each representing a token and holding three strings for its representation in romaji, hiragana and katakana respectively.

The main function, convert, is given a text and the name of the writing system it's in. It loops through the text, cutting off the longest token it can find from the start of the string, and building the resulting strings from the contents of the token. It returns an object holding three strings, each representing the text in romaji, hiragana and katakana respectively.

That way, feeding in a string in one writing system simultaneously converts it into the other two.

When converting from romaji, spaces and the ' character are purposefully ignored. This is so that you can split syllables in words like bon'yari, to keep the syllable from being interpreted as nya.

The problem with this system is that it depends on the order of the tokens in the conversion table. If n came before a, it would first cut off n and then a, never recognizing na.
I've found no way to work around this without using a different data structure entirely.Converter.jsfunction Converter() { this.text = this.from = this.result = null; this.conversionTable = getConversionTable();}Converter.prototype.convert = function (text, from) { this.text = text.toLowerCase(); this.from = from; this.result = { romajiText: '', hiraganaText: '', katakanaText: '' }; this._preprocess(); while (this.text !== '') { var token = this._getToken(); this.result.romajiText += token.romaji; this.result.hiraganaText += token.hiragana; this.result.katakanaText += token.katakana; this.text = this.text.substr(token.strLength); } this._postprocess();};Converter.prototype.getResult = function() { return this.result;};Converter.prototype._preprocess = function () { this.text = this.text .replace(/ā/gi, 'aa') .replace(/ū/gi, 'uu') .replace(/ē/gi, 'ee') .replace(/ō/gi, 'ou');};Converter.prototype._getToken = function () { var newToken = {}; if (this._shouldIgnoreChar(this.text[0])) { newToken.romaji = newToken.hiragana = newToken.katakana = ''; newToken.strLength = 1; return newToken; } for (var i = 0; i < this.conversionTable.length; i++) { var token = this.conversionTable[i]; if (this.text.startsWith(token[this.from])) { newToken = token; newToken.strLength = token[this.from].length; return newToken; } } newToken.romaji = newToken.hiragana = newToken.katakana = this.text[0]; newToken.strLength = 1; return newToken;};Converter.prototype._shouldIgnoreChar = function (char) { return char === ' ' || char === '\'';};Converter.prototype._postprocess = function () { this.result.romajiText = this.result.romajiText .replace(/([aiueo])ー/gi, '$1$1') .replace(/aa/gi, 'ā') .replace(/uu/gi, 'ū') .replace(/ee/gi, 'ē') .replace(/ou/gi, 'ō') .replace(/oo/gi, 'ō');};conversionTable.jsfunction getConversionTable() { return [ {romaji: 'kkya', hiragana: '', katakana: ''}, {romaji: 'kkyu', hiragana: '', katakana: ''}, {romaji: 'kkyo', hiragana: '', katakana: ''}, {romaji: 'ssha', hiragana: '', katakana: ''}, {romaji: 'sshu', hiragana: '', katakana: ''}, {romaji: 'ssho', hiragana: '', katakana: ''}, {romaji: 'ccha', hiragana: '', katakana: ''}, {romaji: 'cchu', hiragana: '', katakana: ''}, {romaji: 'ccho', hiragana: '', katakana: ''}, {romaji: 'hhya', hiragana: '', katakana: ''}, {romaji: 'hhyu', hiragana: '', katakana: ''}, {romaji: 'hhyo', hiragana: '', katakana: ''}, {romaji: 'mmya', hiragana: '', katakana: ''}, {romaji: 'mmyu', hiragana: '', katakana: ''}, {romaji: 'mmyo', hiragana: '', katakana: ''}, {romaji: 'rrya', hiragana: '', katakana: ''}, {romaji: 'rryu', hiragana: '', katakana: ''}, {romaji: 'rryo', hiragana: '', katakana: ''}, {romaji: 'ggya', hiragana: '', katakana: ''}, {romaji: 'ggyu', hiragana: '', katakana: ''}, {romaji: 'ggyo', hiragana: '', katakana: ''}, {romaji: 'jja', hiragana: '', katakana: ''}, {romaji: 'jju', hiragana: '', katakana: ''}, {romaji: 'jjo', hiragana: '', katakana: ''}, {romaji: 'bbya', hiragana: '', katakana: ''}, {romaji: 'bbyu', hiragana: '', katakana: ''}, {romaji: 'bbyo', hiragana: '', katakana: ''}, {romaji: 'ppya', hiragana: '', katakana: ''}, {romaji: 'ppyu', hiragana: '', katakana: ''}, {romaji: 'ppyo', hiragana: '', katakana: ''}, {romaji: 'yye', hiragana: '', katakana: ''}, {romaji: 'wwi', hiragana: '', katakana: ''}, {romaji: 'wwe', hiragana: '', katakana: ''}, {romaji: 'wwo', hiragana: '', katakana: ''}, {romaji: 'vva', hiragana: '', katakana: ''}, {romaji: 'vvi', hiragana: '', katakana: ''}, {romaji: 'vve', hiragana: '', katakana: ''}, {romaji: 'vvo', hiragana: '',
katakana: ''}, {romaji: 'ssi', hiragana: '', katakana: ''}, {romaji: 'zzi', hiragana: '', katakana: ''}, {romaji: 'sshe', hiragana: '', katakana: ''}, {romaji: 'jje', hiragana: '', katakana: ''}, {romaji: 'tti', hiragana: '', katakana: ''}, {romaji: 'ttu', hiragana: '', katakana: ''}, {romaji: 'ddi', hiragana: '', katakana: ''}, {romaji: 'ddu', hiragana: '', katakana: ''}, {romaji: 'ttsa', hiragana: '', katakana: ''}, {romaji: 'ttsi', hiragana: '', katakana: ''}, {romaji: 'ttse', hiragana: '', katakana: ''}, {romaji: 'ttso', hiragana: '', katakana: ''}, {romaji: 'ffa', hiragana: '', katakana: ''}, {romaji: 'ffi', hiragana: '', katakana: ''}, {romaji: 'ffe', hiragana: '', katakana: ''}, {romaji: 'ffo', hiragana: '', katakana: ''}, {romaji: 'ffyu', hiragana: '', katakana: ''}, {romaji: 'hhye', hiragana: '', katakana: ''}, {romaji: 'kya', hiragana: '', katakana: ''}, {romaji: 'kyu', hiragana: '', katakana: ''}, {romaji: 'kyo', hiragana: '', katakana: ''}, {romaji: 'sha', hiragana: '', katakana: ''}, {romaji: 'shu', hiragana: '', katakana: ''}, {romaji: 'sho', hiragana: '', katakana: ''}, {romaji: 'cha', hiragana: '', katakana: ''}, {romaji: 'chu', hiragana: '', katakana: ''}, {romaji: 'cho', hiragana: '', katakana: ''}, {romaji: 'nya', hiragana: '', katakana: ''}, {romaji: 'nyu', hiragana: '', katakana: ''}, {romaji: 'nyo', hiragana: '', katakana: ''}, {romaji: 'hya', hiragana: '', katakana: ''}, {romaji: 'hyu', hiragana: '', katakana: ''}, {romaji: 'hyo', hiragana: '', katakana: ''}, {romaji: 'mya', hiragana: '', katakana: ''}, {romaji: 'myu', hiragana: '', katakana: ''}, {romaji: 'myo', hiragana: '', katakana: ''}, {romaji: 'rya', hiragana: '', katakana: ''}, {romaji: 'ryu', hiragana: '', katakana: ''}, {romaji: 'ryo', hiragana: '', katakana: ''}, {romaji: 'gya', hiragana: '', katakana: ''}, {romaji: 'gyu', hiragana: '', katakana: ''}, {romaji: 'gyo', hiragana: '', katakana: ''}, {romaji: 'ja', hiragana: '', katakana: ''}, {romaji: 'ju', hiragana: '', katakana: ''}, {romaji: 'jo', hiragana: '', katakana: ''}, {romaji: 'bya', hiragana: '', katakana: ''}, {romaji: 'byu', hiragana: '', katakana: ''}, {romaji: 'byo', hiragana: '', katakana: ''}, {romaji: 'pya', hiragana: '', katakana: ''}, {romaji: 'pyu', hiragana: '', katakana: ''}, {romaji: 'pyo', hiragana: '', katakana: ''}, {romaji: 'ye', hiragana: '', katakana: ''}, {romaji: 'wi', hiragana: '', katakana: ''}, {romaji: 'we', hiragana: '', katakana: ''}, {romaji: 'wo', hiragana: '', katakana: ''}, {romaji: 'va', hiragana: '', katakana: ''}, {romaji: 'vi', hiragana: '', katakana: ''}, {romaji: 've', hiragana: '', katakana: ''}, {romaji: 'vo', hiragana: '', katakana: ''}, {romaji: 'si', hiragana: '', katakana: ''}, {romaji: 'zi', hiragana: '', katakana: ''}, {romaji: 'she', hiragana: '', katakana: ''}, {romaji: 'je', hiragana: '', katakana: ''}, {romaji: 'ti', hiragana: '', katakana: ''}, {romaji: 'tu', hiragana: '', katakana: ''}, {romaji: 'di', hiragana: '', katakana: ''}, {romaji: 'du', hiragana: '', katakana: ''}, {romaji: 'tsa', hiragana: '', katakana: ''}, {romaji: 'tsi', hiragana: '', katakana: ''}, {romaji: 'tse', hiragana: '', katakana: ''}, {romaji: 'tso', hiragana: '', katakana: ''}, {romaji: 'fa', hiragana: '', katakana: ''}, {romaji: 'fi', hiragana: '', katakana: ''}, {romaji: 'fe', hiragana: '', katakana: ''}, {romaji: 'fo', hiragana: '', katakana: ''}, {romaji: 'fyu', hiragana: '', katakana: ''}, {romaji: 'hye', hiragana: '', katakana: ''}, {romaji: 'kka', hiragana: '', katakana: ''}, {romaji: 'kki', hiragana: '', katakana: ''}, 
{romaji: 'kku', hiragana: '', katakana: ''}, {romaji: 'kke', hiragana: '', katakana: ''}, {romaji: 'kko', hiragana: '', katakana: ''}, {romaji: 'ssa', hiragana: '', katakana: ''}, {romaji: 'sshi', hiragana: '', katakana: ''}, {romaji: 'ssu', hiragana: '', katakana: ''}, {romaji: 'sse', hiragana: '', katakana: ''}, {romaji: 'sso', hiragana: '', katakana: ''}, {romaji: 'tta', hiragana: '', katakana: ''}, {romaji: 'cchi', hiragana: '', katakana: ''}, {romaji: 'ttsu', hiragana: '', katakana: ''}, {romaji: 'tte', hiragana: '', katakana: ''}, {romaji: 'tto', hiragana: '', katakana: ''}, {romaji: 'hha', hiragana: '', katakana: ''}, {romaji: 'hhi', hiragana: '', katakana: ''}, {romaji: 'ffu', hiragana: '', katakana: ''}, {romaji: 'hhe', hiragana: '', katakana: ''}, {romaji: 'hho', hiragana: '', katakana: ''}, {romaji: 'mma', hiragana: '', katakana: ''}, {romaji: 'mmi', hiragana: '', katakana: ''}, {romaji: 'mmu', hiragana: '', katakana: ''}, {romaji: 'mme', hiragana: '', katakana: ''}, {romaji: 'mmo', hiragana: '', katakana: ''}, {romaji: 'yya', hiragana: '', katakana: ''}, {romaji: 'yyu', hiragana: '', katakana: ''}, {romaji: 'yyo', hiragana: '', katakana: ''}, {romaji: 'rra', hiragana: '', katakana: ''}, {romaji: 'rri', hiragana: '', katakana: ''}, {romaji: 'rru', hiragana: '', katakana: ''}, {romaji: 'rre', hiragana: '', katakana: ''}, {romaji: 'rro', hiragana: '', katakana: ''}, {romaji: 'wwa', hiragana: '', katakana: ''}, {romaji: 'wwi', hiragana: '', katakana: ''}, {romaji: 'wwe', hiragana: '', katakana: ''}, {romaji: 'wwo', hiragana: '', katakana: ''}, {romaji: 'gga', hiragana: '', katakana: ''}, {romaji: 'ggi', hiragana: '', katakana: ''}, {romaji: 'ggu', hiragana: '', katakana: ''}, {romaji: 'gge', hiragana: '', katakana: ''}, {romaji: 'ggo', hiragana: '', katakana: ''}, {romaji: 'zza', hiragana: '', katakana: ''}, {romaji: 'jji', hiragana: '', katakana: ''}, {romaji: 'zzu', hiragana: '', katakana: ''}, {romaji: 'zze', hiragana: '', katakana: ''}, {romaji: 'zzo', hiragana: '', katakana: ''}, {romaji: 'dda', hiragana: '', katakana: ''}, {romaji: 'jji', hiragana: '', katakana: ''}, {romaji: 'ddzu', hiragana: '', katakana: ''}, {romaji: 'dde', hiragana: '', katakana: ''}, {romaji: 'ddo', hiragana: '', katakana: ''}, {romaji: 'bba', hiragana: '', katakana: ''}, {romaji: 'bbi', hiragana: '', katakana: ''}, {romaji: 'bbu', hiragana: '', katakana: ''}, {romaji: 'bbe', hiragana: '', katakana: ''}, {romaji: 'bbo', hiragana: '', katakana: ''}, {romaji: 'ppa', hiragana: '', katakana: ''}, {romaji: 'ppi', hiragana: '', katakana: ''}, {romaji: 'ppu', hiragana: '', katakana: ''}, {romaji: 'ppe', hiragana: '', katakana: ''}, {romaji: 'ppo', hiragana: '', katakana: ''}, {romaji: 'vvu', hiragana: '', katakana: ''}, {romaji: 'a', hiragana: '', katakana: ''}, {romaji: 'i', hiragana: '', katakana: ''}, {romaji: 'u', hiragana: '', katakana: ''}, {romaji: 'e', hiragana: '', katakana: ''}, {romaji: 'o', hiragana: '', katakana: ''}, {romaji: 'ka', hiragana: '', katakana: ''}, {romaji: 'ki', hiragana: '', katakana: ''}, {romaji: 'ku', hiragana: '', katakana: ''}, {romaji: 'ke', hiragana: '', katakana: ''}, {romaji: 'ko', hiragana: '', katakana: ''}, {romaji: 'sa', hiragana: '', katakana: ''}, {romaji: 'shi', hiragana: '', katakana: ''}, {romaji: 'su', hiragana: '', katakana: ''}, {romaji: 'se', hiragana: '', katakana: ''}, {romaji: 'so', hiragana: '', katakana: ''}, {romaji: 'ta', hiragana: '', katakana: ''}, {romaji: 'chi', hiragana: '', katakana: ''}, {romaji: 'tsu', hiragana: '', katakana: ''}, {romaji: 'te', 
hiragana: '', katakana: ''}, {romaji: 'to', hiragana: '', katakana: ''}, {romaji: 'na', hiragana: '', katakana: ''}, {romaji: 'ni', hiragana: '', katakana: ''}, {romaji: 'nu', hiragana: '', katakana: ''}, {romaji: 'ne', hiragana: '', katakana: ''}, {romaji: 'no', hiragana: '', katakana: ''}, {romaji: 'ha', hiragana: '', katakana: ''}, {romaji: 'hi', hiragana: '', katakana: ''}, {romaji: 'fu', hiragana: '', katakana: ''}, {romaji: 'he', hiragana: '', katakana: ''}, {romaji: 'ho', hiragana: '', katakana: ''}, {romaji: 'ma', hiragana: '', katakana: ''}, {romaji: 'mi', hiragana: '', katakana: ''}, {romaji: 'mu', hiragana: '', katakana: ''}, {romaji: 'me', hiragana: '', katakana: ''}, {romaji: 'mo', hiragana: '', katakana: ''}, {romaji: 'ya', hiragana: '', katakana: ''}, {romaji: 'yu', hiragana: '', katakana: ''}, {romaji: 'yo', hiragana: '', katakana: ''}, {romaji: 'ra', hiragana: '', katakana: ''}, {romaji: 'ri', hiragana: '', katakana: ''}, {romaji: 'ru', hiragana: '', katakana: ''}, {romaji: 're', hiragana: '', katakana: ''}, {romaji: 'ro', hiragana: '', katakana: ''}, {romaji: 'wa', hiragana: '', katakana: ''}, {romaji: 'wi', hiragana: '', katakana: ''}, {romaji: 'we', hiragana: '', katakana: ''}, {romaji: 'wo', hiragana: '', katakana: ''}, {romaji: 'n', hiragana: '', katakana: ''}, {romaji: 'ga', hiragana: '', katakana: ''}, {romaji: 'gi', hiragana: '', katakana: ''}, {romaji: 'gu', hiragana: '', katakana: ''}, {romaji: 'ge', hiragana: '', katakana: ''}, {romaji: 'go', hiragana: '', katakana: ''}, {romaji: 'za', hiragana: '', katakana: ''}, {romaji: 'ji', hiragana: '', katakana: ''}, {romaji: 'zu', hiragana: '', katakana: ''}, {romaji: 'ze', hiragana: '', katakana: ''}, {romaji: 'zo', hiragana: '', katakana: ''}, {romaji: 'da', hiragana: '', katakana: ''}, {romaji: 'ji', hiragana: '', katakana: ''}, {romaji: 'dzu', hiragana: '', katakana: ''}, {romaji: 'de', hiragana: '', katakana: ''}, {romaji: 'do', hiragana: '', katakana: ''}, {romaji: 'ba', hiragana: '', katakana: ''}, {romaji: 'bi', hiragana: '', katakana: ''}, {romaji: 'bu', hiragana: '', katakana: ''}, {romaji: 'be', hiragana: '', katakana: ''}, {romaji: 'bo', hiragana: '', katakana: ''}, {romaji: 'pa', hiragana: '', katakana: ''}, {romaji: 'pi', hiragana: '', katakana: ''}, {romaji: 'pu', hiragana: '', katakana: ''}, {romaji: 'pe', hiragana: '', katakana: ''}, {romaji: 'po', hiragana: '', katakana: ''}, {romaji: 'vu', hiragana: '', katakana: ''}, {romaji: ',', hiragana: '', katakana: ''}, {romaji: '.', hiragana: '', katakana: ''} ];}sampleUsage.html<meta charset=UTF-8 /><textarea id=romaji></textarea><textarea id=hiragana></textarea><textarea id=katakana></textarea><script type=text/javascript src=startswith.js></script> <!-- https://github.com/mathiasbynens/String.prototype.startsWith/blob/master/startswith.js --><script type=text/javascript src=conversionTable.js></script><script type=text/javascript src=Converter.js></script><script type=text/javascript> var romajiInput = document.getElementById('romaji'); var hiraganaInput = document.getElementById('hiragana'); var katakanaInput = document.getElementById('katakana'); var converter = new Converter(); romajiInput.onkeyup = hiraganaInput.onkeyup = katakanaInput.onkeyup = function () { var from = this.id; converter.convert(this.value, from); var conversionResult = converter.getResult(); if (this !== romajiInput) { romajiInput.value = conversionResult.romajiText; } if (this !== hiraganaInput) { hiraganaInput.value = conversionResult.hiraganaText; } if (this !== katakanaInput) { 
katakanaInput.value = conversionResult.katakanaText; } };</script> | Three-way conversion between Japanese writing systems | javascript;natural language processing | This is so cool! I may finally realize the dream of learning Japanese :DAnyways, back to your code. var romajiInput = document.getElementById('romaji');var hiraganaInput = document.getElementById('hiragana');var katakanaInput = document.getElementById('katakana');var converter = new Converter();romajiInput.onkeyup = hiraganaInput.onkeyup = katakanaInput.onkeyup = function () { var from = this.id; converter.convert(this.value, from); var conversionResult = converter.getResult(); if (this !== romajiInput) { romajiInput.value = conversionResult.romajiText; } if (this !== hiraganaInput) { hiraganaInput.value = conversionResult.hiraganaText; } if (this !== katakanaInput) { katakanaInput.value = conversionResult.katakanaText; }};This is cool, nothing wrong about it. But if you're considering it, try using a framework that supports basic two-way binding. That way, you don't have to deal with syncing the DOM with your data. Here's an example using Ractive.jsvar JapaneseConversionWidget = Ractive.extend({ // If you have the luxury of ES6, you can use template strings template: ` <textarea value="{{ hiragana }}" on-change="fromHiragana(hiragana)"></textarea> <textarea value="{{ katakana }}" on-change="fromKatakana(katakana)"></textarea> <textarea value="{{ romaji }}" on-change="fromRomaji(romaji)"></textarea> `, // This autobinds to the DOM data: { hiragana: '', katakana: '', romaji: '', }, // Assuming convert returns an object like {hiragana: '', katakana: '', romaji: ''} // Now all I'm doing is `set`. The library does everything else for me. fromHiragana: function(text){ this.set(convertFromHiragana(text)) }, fromKatakana: function(text){ this.set(convertFromKatakana(text)) }, fromRomaji: function(text){ this.set(convertFromRomaji(text)) }});new JapaneseConversionWidget({ el: document.body, append: true});Another thing is that it's better if you split your convert into more distinct operations. In the sample framework code shown above, I explicitly created functions for conversion from Hiragana, Katakana and Romaji. This prevents your convert function from becoming bloated, especially when you add dialect-specific parsing routines.As for your converter, I don't think you really need to use prototypes for it although there's nothing wrong with doing so either. It's just that you're not doing inheritance, and the same feat can be done with just a series of transformation functions.Now usually I'd do things in a functional way (not really a follower of the paradigm, but know enough to get the benefits). I suggest you create your functions transparently. That means given the same input, the function should always give the same output, regardless of what's happening on the outside, specifically the implicit mutations of properties on this.// convertwhile (this.text !== '') { var token = this._getToken(); this.result.romajiText += token.romaji; this.result.hiraganaText += token.hiragana; this.result.katakanaText += token.katakana; this.text = this.text.substr(token.strLength);}The one problem I see is the use of a loop. It gives me the scares, and the fear that this will be an infinite loop eventually. What I would suggest is to have a function that accepts a string, and returns an array of tokens instead.
That way, you have a finite set to operate on, one that is easily handled by array methods like map, reduce etc.function tokenizeRomaji(text){ return text.split('').reduce(function(syllables, character){ // Logic to group individual characters into syllables. // For Romaji, you can add Romaji-specific routines }, [])}function mapTokensToCharacters(tokens){ return tokens.map(function(token){ return //Convert token into another dialect });}What I suggest is doing something like this:function convertFromHiragana(text){ var lowerCasedText = text.toLowerCase(); var preprocessedText = preprocess(lowerCasedText); // Instead of running a while loop when getting tokens, why not create an array of // tokens instead, then hand it off to individual translators? This also // makes the tokenizer dialect-specific. This means that even if your table // is shared, dialect-specific quirks can be worked around. var tokenizedText = tokenizeHiragana(preprocessedText); // Explicitly separating translators. Since we come from Hiragana, we don't // translate Hiragana. var katakanaTranslation = convertToKatakana(tokenizedText); var romajiTranslation = convertToRomaji(tokenizedText); // Return as object. Note that we explicitly postProcess Romaji instead of // blindly calling postProcess and making it an implicit Romaji-only operation. return { hiragana: text, katakana: katakanaTranslation, romaji: postProcess(romajiTranslation) }}Sure, there's a lot of typing here, and more explicitness of code. However, we know that tokenizeHiragana does nothing but tokenize a Hiragana string into an array of tokens. We know we come from Hiragana, thus avoid Hiragana conversion. We know that the convert* functions can operate independently. With the above approach, you can have dialect-specific tokenizers. For instance, your Romaji tokenizer can look ahead to see if there's a vowel after the current token and merge it, or look behind to see if a vowel is preceded by an n and merge it.I wouldn't worry about repetitive code or being DRY at the moment. I'd worry more about making functions independent, and about making sure that bugs in one operation don't affect another operation. You can start trimming off code at a later stage once you have the entire thing running perfectly and have tests. One cause of regression is refactoring without tests. |
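One way to remove the order-dependence the question describes, without changing the table format, is a greedy longest-match tokenizer: sort the candidates by romaji length once, then always take the longest token that matches at the current position. A minimal sketch (the function name is illustrative), assuming the question's conversionTable entry shape:
function makeRomajiTokenizer(table) {
    // Longest romaji first, so 'na' always beats 'n' no matter how
    // the table happens to be ordered.
    var sorted = table.slice().sort(function (a, b) {
        return b.romaji.length - a.romaji.length;
    });
    return function tokenize(text) {
        var tokens = [];
        var i = 0;
        while (i < text.length) {
            var match = null;
            for (var k = 0; k < sorted.length; k++) {
                // startsWith with a position argument (ES6; the question
                // already loads a startsWith polyfill)
                if (text.startsWith(sorted[k].romaji, i)) { match = sorted[k]; break; }
            }
            if (match) {
                tokens.push(match);
                i += match.romaji.length;
            } else {
                i += 1; // unknown character: skip it (or pass it through)
            }
        }
        return tokens;
    };
}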
_unix.246835 | I'm stuck in a tricky problem. Context: Yesterday I made a cloning script for my raspberry pi that dd's a whole running pi filesystem onto a local SD card on a computer connected through ethernet. It was late, I was starving and tired: I dd'ed the pi onto... /dev/sda, my computer's main fs, instead of /dev/sdb (the SD card). Because everything was running in memory I didn't notice the error before I rebooted this morning... :cry:So, instead of my 600GB+ filesystem (which was partitioned with LVM, without encryption, running Debian Jessie, but I can't remember the initial partition scheme) I now get a 4GB raspberry pi filesystem which doesn't even boot, since it's ARM. :cry::cry::cry:(I am on amd64 btw)My current partition scheme looks like: SCSI1 (0,0,0) (sda) 640.1GB ATA Hitachi HTS54756n 1 primary 64.0MB fat16n 2 primary 4.0GB ext4pri/log 636.1GB free spaceor, in lsblk fashion: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 4G 0 disk sda1 8:1 0 64M 0 part /bootsda2 8:2 0 4G 0 part /As far as I remember, before it was something like: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTsda 8:0 0 640.1G 0 disk sda1 8:1 0 2?M 0 part /bootsda2 8:2 0 ?K 0 part sda5 8:5 0 ?G 0 part mycomputer--vg-root 254:0 0 ?G 0 lvm / mycomputer--vg-swap_1 254:1 0 ?G 0 lvm [SWAP] mycomputer--vg-var 254:2 0 ?G 0 lvm /var mycomputer--vg-tmp 254:3 0 ?M 0 lvm /tmp mycomputer--vg-home 254:4 0 ?G 0 lvm /homeEverything I need to get back was on /dev/sda5.AFAIK, what the dd command did yesterday was: write all the bits onto the first 4GB, and turn the rest (636.1GB) into free space.If I'm right, then my data (located on that remaining part) has to still be somewhere. Somewhere in the nowhere of free space. Goal: I would like to retrieve my things.Thanks to my stupidity I may be able to learn something about forensics. But for now I am at ground zero. I am currently downloading Caine Linux live, but I'm not sure what I should do. Is there a way to dump every bit located in free space and analyze it?Or is there a way to recreate an LVM filesystem over the free space WITHOUT formatting anything? (I don't think so...)For now, I am trying to learn the basics from Basic Steps in Forensic Analysis of Unix Systems.Thanks for any help. | Retrieve data from free space on a 600GB disk after it was overwritten by dd with 4GB of data | data recovery | null
_softwareengineering.306974 | It's like this: I want to call .moveToBefore(Node) on a Node object and have the node relocate to before the node passed in.The problem arises if the node passed in is the head node. The List object will still refer to the old head, whereas the old head will actually follow the new head further down the chain.I guess this could be solved easily if the nodes held a reference to the List object. So I want to know if there are any disadvantages if the node objects in a Linked List implementation held a reference to their parent List object. | Is it odd if Nodes in a LinkedList held references to the List object? | data structures;list;reference;linked list | The potential problem with the Node class knowing about and using the List class directly is that you create a circular dependency between Node and List.Circular dependencies can be acceptable if they're carefully contained to ensure you don't accidentally end up making all your classes circularly dependent on each other. This particular example is probably very easy to contain and unlikely to infect the rest of your classes, so I wouldn't rule it out as a potentially valid design. But it could still cause problems when maintaining the List itself, since in principle it means you can never change anything on List without checking that you aren't breaking Node in the process. For instance, what if you want to implement the splice operation for your List class? If your Nodes all contain references to the List they're in, then you have to update all of these references, which means slightly more complicated code, and the splice would end up taking O(n) instead of O(1) time. And, if you weren't consciously aware of the circular dependency, you might not have even realized you had to update those references (just imagine the kinds of bugs that would lead to).For that reason, I would default to List.moveBefore(node1, node2) unless I had some compelling reason to put that method on the Node class instead. But if you do have such a reason, it's okay as long as you keep in mind that you can no longer make any changes to List or Node without checking both classes' implementations for things that might break.
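To make the recommended shape concrete, here is a minimal doubly-linked-list sketch (names are hypothetical, not from the question) where moveBefore lives on the List, so the head fix-up happens in exactly one place and the nodes never need a back-reference:
class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class List:
    def __init__(self):
        self.head = None

    def move_before(self, node, target):
        """Detach node and re-insert it just before target. Because the
        List owns the head pointer, the target-is-head special case is
        handled here, not inside Node."""
        if node is target:
            return
        # detach node from its current position
        if node.prev: node.prev.next = node.next
        else: self.head = node.next
        if node.next: node.next.prev = node.prev
        # splice it in before target
        node.prev = target.prev
        node.next = target
        if target.prev: target.prev.next = node
        else: self.head = node
        target.prev = node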
_unix.309949 | How do I list files whose last read access is older than 6 months? And then, how do I delete them?My filesystem seems to be mounted with:/dev/sda3 on /home type ext4 (rw,relatime,data=ordered) | List (and then delete) files whose last read access is older than 6 months | files;filesystems;timestamps;atime | null
_unix.274801 | I am writing a (very simple) userland IP network stack. For this purpose, I need to bypass the OS network stack and obtain the raw Ethernet frames. The tap interface sounds like the way to go, but it does not seem to work for me. I created a bridge interface between the wlan and tap interfaces, but only a few super-weird UDP packets seem to appear there (tcpdump -i tap0 -e -vv says so), even though the real wlan interface carries lots of other packets (again, tcpdump confirms this).Am I doing something wrong? Is there any other (better) way to go about the userspace network stack solution? | Userspace Network Stack | kernel;ip;network interface;tcpdump | null
_softwareengineering.299497 | I am writing a compiler, for which I devised a rather classic architecture: it's composed of sequential passes piped together, starting with a lexer and a parser, continuing with a macro processor, then a semantic analysis/type checker pass, and finally an intermediate code generator (and maybe an IR optimizer that will come later).My current approach is the following. The parser builds an AST, where each type of AST node inherits from an AST base class. I plan for virtual functions on the AST nodes to implement the functionality of subsequent passes. To provide a simplified example:macroExpand() finds, evaluates and substitutes all the macros in the AST, recursively;typeCheck() performs type checking, type inference and general semantic analysis/error checking on the now-desugared tree, completing each node with type annotations (which are implemented as member variables);codeGen(), finally, generates some kind of IR from the annotated AST.However, I'm afraid that having all this functionality in one single class violates the single-responsibility principle.I reckon that macro expansion especially does not fit in: I was thinking about integrating semantic analysis and code generation into just one set of functions instead of separating them, simply because I don't think I need to traverse the entire tree twice, and I would have to look at the types at codegen time anyway, even if I've pre-inferred and pre-checked them previously.But even with this structural change, there are still at least two completely different sets of methods on my AST classes. I don't yet see any particular reason why this in itself would be bad in my specific case, but I'm pretty sure the single responsibility principle was discovered for a good reason.One way to remedy (?) this issue would be to use the visitor pattern and write separate visitor class hierarchies for the AST for each purpose (macro expansion, semantic analysis and/or code generation). But I really don't feel like doing so. (In all honesty, I really dislike the idea.) So far it only seems to introduce unnecessary complexity and burden (by means of forcing me to maintain parallel class hierarchies).Currently, I'm writing this compiler in C++, but I'm pretty sure that if I were using a language that permitted after-the-fact modification and augmentation of classes (e.g. Objective-C categories), I would surely make use of this feature of the language and I would just decorate my base AST classes with the necessary set of methods, independently of the core interface and implementation of said classes.I could sort of simulate that in C++ by putting all the method declarations in one header file, but writing the implementation of each category of functions in different implementation files. This, however, contradicts the usual "one class, one file" practice.To sum up, my question is: is my current approach of giving two or three different responsibilities to one class really bad?If so, are any of my suggested fixes considered good practice? If not, can you suggest something better? | Do I violate the Single Responsibility Principle with my multi-purpose AST Class? | object oriented;architecture;single responsibility | You're bumping up against a classic problem in programming language theory, the expression problem.
It exposes a weakness of both classic object-oriented design (it's hard to add operations to a data structure with multiple subtypes) and algebraic data types (it's hard to add new type cases to an ADT when there are multiple operations defined on it).There are various solutions; the visitor pattern is certainly a common one, but in my opinion object-oriented pattern matching is probably the nicest - there's an implementation for C++ here.
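For what it's worth, in modern C++ you can get the operations-as-free-functions split without parallel visitor hierarchies by encoding the AST as a sum type. A minimal sketch using std::variant/std::visit (C++17; the node names are illustrative, not the question's real classes):
#include <string>
#include <variant>

// A closed set of node kinds instead of an inheritance hierarchy.
struct IntLit { long value; };
struct StrLit { std::string value; };
using Ast = std::variant<IntLit, StrLit>;

// One pass = one overload set; typeCheck() or macroExpand() would be
// further overload sets in their own files, never touching the nodes.
std::string codegenNode(const IntLit& n) { return std::to_string(n.value); }
std::string codegenNode(const StrLit& n) { return "\"" + n.value + "\""; }

std::string codegen(const Ast& ast) {
    return std::visit([](const auto& n) { return codegenNode(n); }, ast);
}
The trade-off is exactly the one the expression problem predicts: adding a new pass becomes trivial, but adding a new node kind now touches every pass.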
_unix.337551 | I have a csv file with 9 columns, where each group of three columns has a different number of rows. The file looks like1 6.2 0.5 1 0.08 0.5 1 0.001 0.12 5.2 0.6 2 0.01 1.3 2 0.008 0.83 4.3 0.7 3 0.002 0.324 2.0 0.7 4 0.2 0.355 13.1 1.3 5 0.54 4.326 1.02 1.67I would like to replace any empty field by zero using a bash script. The outcome that I would like to produce should look like1 6.2 0.5 1 0.08 0.5 1 0.001 0.12 5.2 0.6 2 0.01 1.3 2 0.008 0.83 4.3 0.7 3 0.002 0.32 3 0 04 2.0 0.7 4 0.2 0.35 4 0 05 13.1 1.3 5 0.54 4.32 5 0 06 1.02 1.67 6 0 0 6 0 0 | Replacing empty fields by zero in a csv file | shell script;text processing;awk;sed | Solution hard-coded for the number of columns and assuming only the latter columns are the ones possibly empty:awk 'BEGIN { OFS=FS="\t" } { $1=NR; if(!$2)$2=0; if(!$3)$3=0; $4=NR; if(!$5)$5=0; if(!$6)$6=0; $7=NR; if(!$8)$8=0; if(!$9)$9=0; print }' /path/to/your.csv
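A more general variant (not hard-coding the nine columns) walks every field and zeroes the empty ones, under the same tab-separated assumption:
awk 'BEGIN { OFS = FS = "\t" } { for (i = 1; i <= NF; i++) if ($i == "") $i = 0; print }' /path/to/your.csv
Note that this only fills fields that exist on the line; if short rows simply end early (no trailing tabs), the missing trailing columns are not created. Assigning NF = 9 first would create them as empty fields in awks that support raising NF (POSIX allows it; gawk does).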
_softwareengineering.349000 | Is it a good idea to use the same naming convention (e.g. camelCase) for both the front end (e.g. JavaScript) and the back end (e.g. PHP)? | Variable Naming in JavaScript and PHP | php;javascript | It depends.If you are the only developer, I'd say you should use the same naming convention on both.But if there is any chance that someone else will look at your code, you should follow the naming conventions of each language separately.
_codereview.136423 | I am getting all the answers correct. But the solution is still not accepted, as only 4/5 test cases pass. I have not posted the whole problem statement but the problem is similar to this.I want to know if there are any more optimizations possible.import sysclass Queue(object): input_array = [] def __init__(self, input_array=None): if not input_array: self.input_array = [] else: self.input_array = input_array def enqueue(self, element): self.input_array.append(element) def dequeue(self): return self.input_array.pop(0) def first(self): return self.input_array[0] def last(self): return self.input_array[-1] def size(self): return len(self.input_array) def get_queue(self): return self.input_array def get_queue_after_first(self): return self.input_array[1:] def __str__(self): return "Current Queue: {0}".format(self.input_array)def answer(document, searchTerms): no_of_search_terms = 0 count = dict() for searchTerm in searchTerms: if searchTerm in count: count[searchTerm] += 1 else: no_of_search_terms += 1 count.update({searchTerm: 1}) q = Queue() len_q = Queue() smallest_snippet_size = sys.maxint offsets = tuple() tokens = document.split() for position, token in enumerate(tokens, start=1): if count.get(token, 0): q.enqueue(token) len_q.enqueue(position) while q.first() in q.get_queue_after_first(): q.dequeue() len_q.dequeue() current_block_len = len_q.last() - len_q.first() + 1 if (q.size() >= no_of_search_terms) and (current_block_len < smallest_snippet_size): smallest_snippet_size = current_block_len offsets = (len_q.first() - 1, len_q.last()) return " ".join(tokens[offsets[0]: offsets[1]])if __name__ == '__main__': assert (answer("world there hello hello where world", ["hello", "world"]) == 'world there hello') assert (answer("many google employees can program", ["google", "program"]) == 'google employees can program') assert (answer("some tesla cars can autopilot", ["tesla", "autopilot"]) == 'tesla cars can autopilot') assert (answer("a b c d a", ["c", "d", "a"]) == 'c d a') assert (answer("the cats run very fast in the rain", ["cats", "run", "rain"]) == 'cats run very fast in the rain') assert (answer("the cats run very fast in the rain run cats", ["cats", "run", "rain"]) == 'rain run cats') assert (answer("hello", ["hello"]) == 'hello') | Google Foobar Challenge: Spy Snippets in Python | python;programming challenge;python 2.7;interview questions | Why use two Queues, when you could just queue a tuple (or even better, a collections.namedtuple)? The only place which might prevent this is here:while q.first() in q.get_queue_after_first():But this can be written as:while any(q.first() == el[0] for el in q.get_queue_after_first()):in is already O(n), so this should not even have worse runtime (also, any uses short-circuit evaluation).Whenever you have to do a list.pop(0) you probably want collections.deque, which does deque.popleft in O(1) instead of O(n) for a list.Actually I don't see a point in having the Queue class at all. All its functions are single line and it is very well known and pythonic that you get the first element with l[0] and the last with l[-1]. Also, I second the use of collections.Counter.
In addition, collections.Counter will never have a count of zero for a key (unless modified to be so, of course), so if count.get(token, 0): is more readable as if token in count:PEP8 recommends using lower_case for variable names, so I would rename searchTerms to search_terms.Resulting code (note that a deque does not support slicing, so itertools.islice stands in for queue[1:]):import sysfrom collections import namedtuple, deque, Counterfrom itertools import isliceItem = namedtuple("Item", "token position")def answer(document, search_terms): count = Counter(search_terms) no_of_search_terms = len(count) queue = deque() smallest_snippet_size = sys.maxint offsets = tuple() tokens = document.split() for position, token in enumerate(tokens, start=1): if token in count: queue.append(Item(token, position)) while any(queue[0].token == el.token for el in islice(queue, 1, len(queue))): queue.popleft() current_block_len = queue[-1].position - queue[0].position + 1 if (len(queue) >= no_of_search_terms) and (current_block_len < smallest_snippet_size): smallest_snippet_size = current_block_len offsets = (queue[0].position - 1, queue[-1].position) return " ".join(tokens[offsets[0]: offsets[1]]) |
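A quick sanity check that the refactor preserves behaviour is to rerun a couple of the question's own test cases against it:
# Same expected outputs as the original assertions:
assert answer("the cats run very fast in the rain run cats", ["cats", "run", "rain"]) == 'rain run cats'
assert answer("world there hello hello where world", ["hello", "world"]) == 'world there hello'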
_unix.101226 | I need to rebuild the Centos-6 / elrepo 3.10.19 kernel from source.Background: the GVision touch screen drivers are incompatible with kernels > 3.8 and require source code patches to add code to avoid conflicts with their touchscreen drivers. My first step is to build an unmodified driver from source that works before I try to apply the GVision patches.When I build the kernel as noted below, the kernel fails to boot properly with (hand typed!):Kernel panic - not syncing: Attempted to kill init! exitcode=0x000000100<some register dumps>dump_stackpanicremote_function+0x38/0x40find_new_reaper+0x512/0x160forget_original_parent+0x34/0x250perf_cgroup_switch+0x160/0x160exit_notify+0x16/0x120do_exit+0x1b4/0x400do_group_exit+0x3e/0xb0SyS_exit_group+0x3e/0xb0sysenter_do_call+0x12/0x28drm_kms_helper: panic occurred, switching back to text consoleHere is how I built the kernel, guided by https://fedoraproject.org/wiki/BuildingUpstreamKernelGet config file elrepo used:- First, get the config files that were used to build the elrepo kernel- - wget http://elrepo.org/linux/kernel/el6/SRPMS/kernel-lt-3.10.19-1.el6.elrepo.nosrc.rpm- - rpm -i kernel-lt-3.10.19-1.el6.elrepo.nosrc.rpmThe key thing that you want from here is rpmbuild/SOURCES/config-3.10.19-i686Next, get the kernel source- wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.10.19.tar.xzChange perms on /usr/src/kernels- chmod o+w /usr/src/kernelsThen, as non-root- cd /usr/src/kernels- tar xJf ~/linux-3.10.19.tar.xz- cd linux-3.10.19- copy the config file from the rpmbuild/SOURCES/config-3.10.19-i686 to ./.config- edit the Makefile to make a unique kernel name with an extension in the variable EXTRAVERSION- make bzImage && make modulesAs root- make modules_install- make installThis all completes cleanlyIn /boot, the original and newly built vmlinuz and System.map are the same file size (but different md5sum) and the newly built initramfs is much smaller.drwxr-xr-x 3 root root 1024 Nov 11 18:23 boot-rw-r--r-- 1 root root 142933 Nov 12 23:22 config-3.10.19-1.el6.elrepo.i686drwxr-xr-x 3 root root 1024 Aug 5 2011 efidrwxr-xr-x 2 root root 1024 Nov 14 20:07 grub-rw-r--r-- 1 root root 16589977 Nov 14 14:16 initramfs-3.10.19-1.el6.elrepo.i686.img-rw-r--r-- 1 root root 4645843 Nov 14 20:07 initramfs-3.10.19-MDV1.imgdrwx------ 2 root root 12288 Aug 5 2011 lost+found-rw-r--r-- 1 root root 254858 Nov 12 23:23 symvers-3.10.19-1.el6.elrepo.i686.gzlrwxrwxrwx 1 root root 29 Nov 14 20:06 System.map -> /boot/System.map-3.10.19-MDV1-rw-r--r-- 1 root root 2342208 Nov 12 23:22 System.map-3.10.19-1.el6.elrepo.i686-rw-r--r-- 1 root root 2342208 Nov 14 20:06 System.map-3.10.19-MDV1lrwxrwxrwx 1 root root 26 Nov 14 20:06 vmlinuz -> /boot/vmlinuz-3.10.19-MDV1-rwxr-xr-x 1 root root 4868224 Nov 12 23:22 vmlinuz-3.10.19-1.el6.elrepo.i686-rw-r--r-- 1 root root 4868224 Nov 14 20:06 vmlinuz-3.10.19-MDV1What step am I missing?==== Solved ====The key problem here was that the initramfs I generated was missing lots of material that was in the original elrepo-distributed initramfs. As @terdon pointed out, something is missing in there that is obviously essential to a successful boot.I don't know why the initramfs created by make install didn't work -- I didn't dig into that. 
To recreate the initramfs:cd /bootdracut -f initramfs-3.10.19.el6.elrepo.i686-MDV1.img 3.10.19.el6.elrepo.i686-MDV1With the new initramfs, this kernel boots cleanly.While digging into this I found that the config file in /boot was exactly the same as I had pulled from the elrepo archive so the wget for the elrepo config file can be eliminated.With a clean process to build from source, I was able to apply the source code patches to make the GVision touch screen work. The GVision instructions are a bit confusing, and in some places incorrect, and I've provided feedback to the vendor to update their documentation. | Kernel panic - not syncing after building Centos-6/elrepo 3.10.19 kernel from source | linux;kernel;compiling;source | null |
_unix.102108 | I have a problemIf [[ * ]]thencontinueelseexit 1fiI want to test that the argument to my switch (for example -d 3) is a valid positive decimal integer number (a sequence of one or more of any of the ASCII characters from 0 to 9). After -d there can only be a number in [0,infinity). Everything else is bad. I do not know what to put instead of *.Can you help me? The argument after -d is at position $2. | Test if number from range <0,infinity) | bash;exit;test | With any Bourne-like shell (that is, going as far back as the 70s):case "$2" in "" | *[!0-9]*) echo >&2 "not OK"; exit 1;; *) echo "OK";;esac
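Wired into actual option parsing, that same test might look like this (a sketch using getopts; the variable name depth is illustrative):
while getopts d: opt; do
  case $opt in
    d)
      case "$OPTARG" in
        # Empty, or containing any non-digit: reject.
        "" | *[!0-9]*) echo >&2 "-d needs a non-negative decimal integer"; exit 1;;
        *) depth=$OPTARG;;
      esac;;
    *) exit 1;;
  esac
done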
_unix.282859 | That's all - just wondering if there's another way to find out the permissions for a file without doing an ls -l to see the string of values there. As far as I know, there's no "show" option in chmod. | Is there a way to show the permissions for a file without using `ls` | linux;permissions;ls;aix;chmod | Besides stat (Linux-specific), there are tools which allow you to do this as a side effect. The tar program, for example, can do this:tar cf - filename | tar tvf -For example$ tar cf - foo |tar tvf - -rwxr-xr-x 1021/1021 18 Jan 13 21:40 2016 fooUsing the special - like that is reasonably portable (it works with AIX, HPUX, Solaris, Linux and FreeBSD).The term "reasonably portable" applies to both availability and identical format.There are a few comments about stat versus portability. Here is output from GNU coreutils stat:$ stat foo File: `foo' Size: 0 Blocks: 0 IO Block: 4096 regular empty fileDevice: 801h/2049d Inode: 784564 Links: 1Access: (0755/-rwxr-xr-x) Uid: ( 1001/ tom) Gid: ( 100/ users)Access: 2016-05-12 19:03:54.773503477 -0400Modify: 2016-05-12 19:03:54.773503477 -0400Change: 2016-05-12 19:03:54.773503477 -0400 Birth: -and output from BSD stat (OSX):$ stat foo16777221 61893362 -rwxr-xr-x 1 tom wheel 0 0 May 12 19:03:54 2016 May 12 19:03:54 2016 May 12 19:04:59 2016 May 12 19:03:54 2016 4096 0 0 fooAnd here is an example output from AIX istat (looks different to me):$ istat fooInode 3166649 on device 32768/13 FileProtection: rwxr-xr-x Owner: 1021(dickey) Group: 1021(dickey)Link count: 1 Length 18 bytesLast updated: Wed Jan 13 21:40:30 UTC 2016Last modified: Wed Jan 13 21:40:30 UTC 2016Last accessed: Wed Jan 13 21:40:20 UTC 2016 | Is there a way to show the permissions for a file without using `ls` | linux;permissions;ls;aix;chmod | null
_unix.283944 | I usually use command1 | command2 | command3 a lot in Linux, but most of them deal with finite content.When I tried this with an infinite stream, cat | sed '' | sed '' (which hopefully simulates an infinite stream), it didn't work until I terminated it with Ctrl-D. I can solve the problem by using cat | sed -e '' -e '', but I would like to know why the first one doesn't work. cat | cat | cat works just fine. Is it something to do with sed, and if so, what is the problem?I tried to think about this problem, and the only difference I found was that when I am using cat I hit the Enter key, which does something special that is not happening in the first sed '' above.Can anyone let me know how to make pipes work seamlessly with infinite streams? | How do pipes and infinite streams work? | linux;pipe;streams | The pipes connect the output of the left command to the input of the right command. This has nothing to do with the length of the stream. However, each command in the pipeline still has its own buffering rules. If a command doesn't flush its output buffer, you won't see that output at the end of the pipeline until it does.
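For example, with GNU tools you can simply turn the buffering off, which makes the original pipeline stream line by line:
# GNU sed: -u (--unbuffered) flushes output after every line
cat | sed -u '' | sed -u ''
# Generic alternative: force line-buffered stdout on any command
# (stdbuf is part of GNU coreutils)
cat | stdbuf -oL sed '' | stdbuf -oL sed ''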
_codereview.115228 | This is my submission for the Prime Generator on SPOJ and it was accepted. Are there any improvements/changes I can make?Input:The input begins with the number $t$ of test cases in a single line ($t \le 10$). In each of the next t lines there are two numbers $m$ and $n$ ($1 \le m \le n \le 1000000000$, $n-m \le 100000$) separated by a space.Output:For every test case print all prime numbers $p$ such that $m \le p \le n$, one number per line, test cases separated by an empty line.Example:Input:21 103 5Output:235735#include <iostream>#include <cmath>#include <vector>using std::vector;using std::cout;using std::cin;bool isPrime(int n) { if (n == 1) return false; if (n == 2) return true; // invariant int root = int(ceil(sqrt(n))); // check up to ceil of square root n for (int i = 2; i <= root; ++i){ if (n % i == 0) return false; // not prime } return true;}int main() { int lines; int toPush; vector<int> inputs; // get inputs cin >> lines; int tempA = 0; while (tempA < 2*lines) { cin >> toPush; inputs.push_back(toPush); ++tempA; } auto i = inputs.begin(); while (i < inputs.end()){ int m = *i; int n = *(i + 1); while (m <= n){ if (isPrime(m)){ int value = m; cout << value << "\n"; } ++m; } cout << "\n"; std::advance(i, 2); }} | Returns all primes p between m <= p <= n | c++;algorithm;c++11;programming challenge;primes | Here are a few things that may help you improve your program.Improve your algorithmRight now, within the isPrime routine, the loop begins at 2 and does a test division of every number from $2$ to $\sqrt{n}$. However, we already know that other than 2, all prime numbers are odd. You can approximately double the speed of this algorithm by writing it like this instead:bool isPrime(int n) { if (n == 1) return false; if (n == 2) return true; if (n % 2 == 0) return false; int root = int(ceil(sqrt(n))); for (int i = 3; i <= root; i+=2){ if (n % i == 0) return false; } return true;}Prefer for to while where appropriateThe input section of the code has these lines:int tempA = 0;while (tempA < 2*lines) { cin >> toPush; inputs.push_back(toPush); ++tempA;}It seems to me that this would be more clear as a for loop:for(int tempA = 2*lines; tempA; --tempA) { std::cin >> toPush; inputs.push_back(toPush);}The same is true for the main loop in the program. It could be written this way:for (auto i = inputs.begin(); i != inputs.end(); ++i) { for (int m = *i++, n=*i; m <= n; ++m) { if (isPrime(m)){ std::cout << m << "\n"; } } std::cout << '\n';}Store std::pairs instead of intsAfter the first number, the program's input consists of pairs. It might make more sense to store them in a std::vector<std::pair<int, int>>. The input routine would then look like this:for(int tempA = 2*lines; tempA; --tempA) { std::pair<int, int> toPush; std::cin >> toPush.first >> toPush.second; inputs.push_back(toPush);}The main loop is then considerably simplified by the use of the pair and a range-for outer loop:for (const auto &lim : inputs) { for (int m = lim.first; m <= lim.second; ++m) { if (isPrime(m)){ std::cout << m << "\n"; } } std::cout << '\n';}Use a better algorithmThe code works as it is, but could be still further improved. Since the inputs are all read at the beginning, you could choose the largest upper bound and run a sieve of Eratosthenes to derive all primes up to that number. Then printing for any range would be simply a matter of lookup.
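A minimal sketch of that idea (my addition, not the original answer's code): sieve once up to the largest n among the queries, then each range becomes a lookup. Note that at SPOJ's full limit of 10^9 this simple version would use too much memory, and a segmented sieve would be used instead:
#include <vector>
// Classic sieve of Eratosthenes: result[i] is true iff i is prime.
// Run it once with limit = the largest upper bound among all queries.
std::vector<bool> sieve(int limit) {
    std::vector<bool> isPrime(limit + 1, true);
    if (limit >= 0) isPrime[0] = false;
    if (limit >= 1) isPrime[1] = false;
    for (long long i = 2; i * i <= limit; ++i)
        if (isPrime[i])
            for (long long j = i * i; j <= limit; j += i)
                isPrime[j] = false;
    return isPrime;
}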
_unix.125665 | I like customizing my PS1 prompt and including the current directory.I also have several other items such as time, user, git branch and the like.However one problem is that when my current directory is many layers 'deep', such as /home/durrantmm/Dropnot/webs/rails_apps/linker/app/views, there is too much text. (btw I carriage return at the end within my PS1 prompt setting anyway, so my actual $ is back on the left; that is not the issue here).So I have a solution for that, to use this for the location part:LOCATION='\033[01;34m\]`pwd | sed "s#\(/[^/]\+/[^/]\+/[^/]\+/\).*\(/[^/]\+/[^/]\+\)/\?#\1_\2#g"`'not pretty, but it does the job, and I then combine it with the other stuff (not shown, not needed here) and I get first 3 levels _ last two levels for the directories.Unfortunately though, on my Mac the sed part isn't working correctly and I get a broken prompt (the screenshot was actually from my Linux machine; I faked it to show what it looks like on my Mac, in case you are wondering).How can I get the 3_2 format for the current directory on my Mac? | How can I make my sed command work on OSX as well as Ubuntu | sed | The \+ and \? parts of your sed command are GNU extensions - POSIX-compatible sed cannot use these shorthand operators at all. Instead you can use \{1,\} and \{0,1\}. Try this:LOCATION='\033[01;34m\]`pwd | sed "s#\(/[^/]\{1,\}/[^/]\{1,\}/[^/]\{1,\}/\).*\(/[^/]\{1,\}/[^/]\{1,\}\)/\{0,1\}#\1_\2#g"`'For more information on this see http://pubs.opengroup.org/onlinepubs/009696699/utilities/sed.html and http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html.Alternatively, you can enable extended regular expressions on OSX sed using -E (note that in ERE the groups are written with plain parentheses). With this you could do:LOCATION='\033[01;34m\]`pwd | sed -E "s#(/[^/]+/[^/]+/[^/]+/).*(/[^/]+/[^/]+)/?#\1_\2#g"`'However, this won't work on GNU sed (it uses -r for this). Using a POSIX solution will give better portability.
_webmaster.24182 | People from Stack Overflow have been working closely with the Google team to help them make the Panda algorithm more efficient, so I guess they've learned a lot from the Google team.Thus they may have designed very clever friendly URLs to maximize page rank.I've seen, from time to time, very long URLs (can't find where) on Stack Overflow, but after a certain number of characters there were only numbers: as if, past this length, search engines will ignore the rest, so they just put numbers.I've done a lot of work on my framework to make very friendly URLs, and my website can come up with URLs like:http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/paramedical/It's very long and I'm wondering whether the previous URL will be mixed up with, say, this one:http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/art/ | Friendly URLs: is there a max length for search engines? | url;seo;best practices | Friendly for whom, exactly? For visitors these URLs aren't very friendly.I'd advise you to keep it a lot shorter, around 6 keywords at the max.Sources:http://support.google.com/webmasters/bin/answer.py?hl=en&answer=76329http://www.seomoz.org/blog/11-best-practices-for-urls
_unix.352722 | Current situation: I run this line: Xxxcommand | mail -s 'test on' [email protected]. Then I will have output like the one below in my email:Name Files(n) Space Calculation Adam 12345 12345 space/files(n) Becky 45689 8987 Maryanne 5598 7895I've got some calculations to make, so a few extra columns are needed.It's messy, so I want the columns aligned by my perl script, plus an extra column added, calculated from the values of the other columns.My end goal: run my perl script on Linux, have it take the output from the command, then format it nicely. (The data would be different every time, so I want to set this up just once, not edit the data manually every time I get the output, like copying and pasting the data into perl and formatting it one by one.)If everything works out, when I run my perl script on Linux (the one that formats the output from the command), I will receive an email with the nicely formatted output. (I'm sorry, but I don't know how to format the styling here to make it aligned with the column headers.) Name Files(n) Space Calculation Adam 12345 12345 space/files(n) Becky 45689 8987 Maryanne 5598 7895 So my main question is: how do I write a perl script that can manipulate the output?Should I put the output into a text file on Linux first? Because I just can't see how I can write a perl script that formats or adds columns to the output before it is sent out as mail.Any advice would be appreciated. Thank you. | perl script to manipulate output (generated from a command line) | text processing;io redirection;perl;text formatting | null
_cs.70178 | For an array A = [a1, a2, a3, a4] of distinct numbers, I have built a heap using a binary decision tree, with both the Incremental and the In-place method.Incremental method: (decision-tree figure omitted)In-place method: (decision-tree figure omitted)Is there a way to build a heap using a decision tree for inputs of size 4 that uses fewer decisions in the worst case, compared to the above methods?Notes on heaps: https://drive.google.com/open?id=0By6GDPYLwp2cY3lfbEVWNHlrSlE Incremental method - page 36, In-place method - page 48. | How to build a heap better than Incremental and In-place method using decision tree? | binary trees;heaps;heap sort | null
_cstheory.12041 | Say you have 3 algorithms, A, B and C. You want to present a comparison of A, B and C based on:Precompute overhead (say tree building etc. that occurs prior to runtime)Runtime speed (say query response time)The problem is, however, that A, B and C each have their own unique set of parameters that will dictate the quality of the results. A has 3 parameters x, xx and xxxB has 1 parameter yC has 2 parameters z and zzIf A has its parameter x increased from 10 to 50, say, then A's precompute overhead doubles, and A's runtime speed is halved.If B has its parameter y increased from 0 to 0.001, say, its precompute time stays the same but its runtime speed becomes 1/3 of its former speed.So each algorithm has its own set of quirks and behaves differently, depending on how you set each algorithm's parameters.But we are interested in comparing A, B and C based on the two categories (precompute and runtime) listed above!How can you compare algorithms A, B and C? | Making algorithm comparison when algorithms use parameters | ds.algorithms | null
_scicomp.20411 | I need to find an equation for the upper bound of $\max \mathbf{w}^T\mathbf{x}_i, \; i=1, \dots, N$,where $\mathbf{w}$ and $\mathbf{x}_i$ are two vectors.I need to find a function $f$ for which the following inequality holds:$\max \mathbf{w}^T\mathbf{x}_i \leq \mathbf{w}^T \mathbf{z}$where $\mathbf{z} = f(\mathbf{x}_i),\; i=1, \dots, N$.E.g. let $\mathbf{x}_1 = \begin{pmatrix}x_{11}\\x_{12}\\x_{13}\end{pmatrix}, \; \dots, \mathbf{x}_N = \begin{pmatrix}x_{N1}\\x_{N2}\\x_{N3}\end{pmatrix}$$\mathbf{z} = \begin{pmatrix}f(x_{11}, \dots, x_{N1})\\f(x_{12}, \dots, x_{N2})\\f(x_{13}, \dots, x_{N3})\end{pmatrix}$For example, $f$ can be a $\max$ or $\min$ function.All the values of $\mathbf{x}_i, \; i=1, \dots, N$ are known. But $\mathbf{w}$ is unknown.Is it possible to have $f$ as a function only of $\mathbf{x}_i$?Example:$\mathbf{x}_1 = \begin{pmatrix}-10\\1\\3\end{pmatrix}, \; \mathbf{x}_2 = \begin{pmatrix}5\\-3\\-5\end{pmatrix} \implies \mathbf{z} = \max\mathbf{x}_i = \begin{pmatrix}5\\1\\3\end{pmatrix}$ | What is the upper bound of $\max \mathbf{w}^T\mathbf{x}_i$? | optimization;numerical analysis;constrained optimization;nonlinear programming | As the OP is aware, when $\mathbf{w}$ is nonnegative, an upper bound of the required type can be obtained by taking $\mathbf{z}$ to be the componentwise maximum of the various $\mathbf{x}_i$.However a simple example shows that no choice of $\mathbf{z}$ is possible when $\mathbf{w}$ is allowed to have a negative entry. Consider the vectors:$$ \mathbf{x}_1 = (1,0,0)^T \; \text{ and } \; \mathbf{x}_2 = (-1,0,0)^T $$Then whatever choice of $\mathbf{z} = (z_1,z_2,z_3)^T$ is made, there exists $\mathbf{w} = (w_1,w_2,w_3)^T$ for which the inequality $\max \mathbf{w}^T\mathbf{x}_i \leq \mathbf{w}^T \mathbf{z}$ fails.Specifically, if $z_1 \le 0$, the choice $\mathbf{w} = (1,0,0)^T$ yields $\max \mathbf{w}^T \mathbf{x}_i = 1$, and $\mathbf{w}^T \mathbf{z} = z_1 \not \ge 1$.On the other hand, if $z_1 \ge 0$, the choice $\mathbf{w} = (-1,0,0)^T$ yields $\max \mathbf{w}^T \mathbf{x}_i = 1$, and $\mathbf{w}^T \mathbf{z} = -z_1 \not \ge 1$.Therefore no choice of $\mathbf{z}$ is satisfactory for all $\mathbf{w}$.
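For completeness, the nonnegative case mentioned in the first paragraph follows in one line: with $z_j = \max_k x_{kj}$,
$$\mathbf{w}^T\mathbf{x}_i = \sum_j w_j x_{ij} \le \sum_j w_j \max_k x_{kj} = \mathbf{w}^T\mathbf{z} \quad \text{for every } i, \text{ since each } w_j \ge 0.$$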
_unix.365421 | Let's say I have a list of nested directories that looks like this:./x1/mf/dir1./x1/mf/dir2./x1/mf/file1./x2/mf/dir3./x2/mf/file2...I want to remove all the subdirectories of every mf directory. Meaning dir1, dir2, dir3 in the previous example.I know that find . -type d -name mfwill return a list of all the directories called mf. And ls -d */ returns all the subdirectories in the current directory. So I tried find . -type d -name mf -exec ls -d /* {} \;to list the desired directories, but it would actually print the directories inside /. I was planning to pipe the resulting list to xargs rm -r to do the removal afterwards. | Using find to look for a directory and remove its subdirectories | find;directory;rm | Setting up test directories and files:$ mkdir -p x{1..3}/mf/dir{1..3}$ touch x{1..3}/mf/file{1..3}$ tree.|-- x1| `-- mf| |-- dir1| |-- dir2| |-- dir3| |-- file1| |-- file2| `-- file3|-- x2| `-- mf| |-- dir1| |-- dir2| |-- dir3| |-- file1| |-- file2| `-- file3`-- x3 `-- mf |-- dir1 |-- dir2 |-- dir3 |-- file1 |-- file2 `-- file3Then find all directories that have mf in their path and delete them. The -depth does a depth-first traversal, so that find doesn't try to enter directories that it has already deleted. We also print the names of all directories that are deleted.$ find . -depth -type d -path "*/mf/*" -print -exec rm -rf {} +./x1/mf/dir1./x1/mf/dir2./x1/mf/dir3./x2/mf/dir1./x2/mf/dir2./x2/mf/dir3./x3/mf/dir1./x3/mf/dir2./x3/mf/dir3Now:$ tree.|-- x1| `-- mf| |-- file1| |-- file2| `-- file3|-- x2| `-- mf| |-- file1| |-- file2| `-- file3`-- x3 `-- mf |-- file1 |-- file2 `-- file3
_datascience.18817 | I am working on a classification problem and I found that my data has a lot of outliers, which has resulted in a reduction of my recognition rate. I have tried rescaling and normalization techniques like min-max, Box-Cox and even log transformation. I am considering eliminating outliers based on box plots, but I am afraid I might be eliminating useful features/data required to define the model.Are there any suggestions on how to deal with such cases?Also, further analysis of the data revealed that it consists of features belonging to different processes, like web applications and apps. I segregated the data based on the processes, and I do see that the large variation between processes resulted in different accuracies, ranging from 60-95%.Any tips on how to deal with such cases? In the end I want my classifier to classify irrespective of the process type. So, given my current issue, does this imply that the features I defined are not good enough, or is there something else I can do? | Classification affected by a lot of outliers in features? How do you deal with outliers? | multiclass classification;outlier;normalization | null
_codereview.54382 | I have a search function on my website. In addition, elsewhere I have a place where users can submit categories for their posts. I want the categories to have an uppercase first letter and the rest lowercase. I know there are far more complex ways to secure a user-input string, but do you think that, as a fairly basic function, this is relatively secure?

function purify_string($string){
    $script_tags = "/\<script +(.+)<\/script>+/i";
    $string = strip_tags(addslashes($string));
    $string = preg_replace($script_tags, '', $string);
    $string = trim($string);
    $string = strtolower($string);
    $string = ucfirst($string);
    return $string;
} | String sanitisation function | php;beginner;strings;security | Instead of trying to "purify" the category for the user, which may or may not have unexpected results, you should simply reject everything that does not conform to your standard, and let the user fix it:

function isValidCategoryName($string) {
    return preg_match("/^[A-Z][a-z0-9 ]+$/", $string);
}

If it doesn't match, prompt the user with an error message and tell them to try again. Your job isn't to try to extract a good category from a bad one.
At most you could just correct the case with ucfirst if you want.
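The same validate-rather-than-repair idea expressed in Python, for comparison (my sketch, not part of the review):

import re

CATEGORY_RE = re.compile(r"[A-Z][a-z0-9 ]+")  # same rule as the PHP answer

def is_valid_category_name(name: str) -> bool:
    return CATEGORY_RE.fullmatch(name) is not None

assert is_valid_category_name("Cooking tips")
assert not is_valid_category_name("cooking tips")  # reject and ask the user to fix it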
_cs.54628 | I am collecting material for a MOOC about speech technology. My aim is that students also have examples to try, rather than just watching the lecture and some complementary YouTube videos. So the idea was that they could call up some spoken dialogue systems or something like that. I have quite a few phone numbers for spoken dialogue systems in German (e.g. +498003504030), but as the course is going to be taught in English, I am searching for examples in the English language.
Note: examples that will just redirect you to a customer service representative are not of interest. I am looking for good and bad examples.
If you are aware of phone numbers for some spoken dialogue systems that you like or hate, let me know.
If you know a website that lists some numbers to call, even better.
So far I have had no luck with Google, and I hope I have chosen the right community on Stack Overflow for this. If not, please accept my apologies in advance. | Examples for speech recognition systems and spoken dialogue systems | machine learning;natural language processing;speech recognition | null
_vi.12499 | I am trying to search backwards (using ?) for the beginning of a word. I don't mean a word in the text-object sense; I mean a series of lower-case letters optionally beginning with a capital letter. The optional capital letter is stymying me. Here are some examples showing where I want to jump to, assuming the cursor starts at the end of the line:

quick brown fox
^     ^     ^
QuickBrownFox
^    ^    ^
Quick Brown Fox
^     ^     ^
quick_brown_fox
^     ^     ^

The closest I've come up with is ?\(\U\u\)\|\L?e+ but that doesn't work with the capital letters. I assume this is because \L matches before \U\u does. How can I solve this? | Regexp for beginning of word? | regular expression | This simple search works for all the examples you gave:

?\a\l\+

It matches any alphabetic character (upper or lower case), followed by any non-zero number of lower-case characters. Some examples of cases you didn't mention, where I'm a little unclear what you want it to do:

match a lower case
MatchACapital

If you want it to match the single-letter word a/A in both of these, then use a * instead of the \+:

?\a\l*
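The vim pattern \a\l\+ corresponds to the regular expression [A-Za-z][a-z]+; a quick Python check of the behaviour the answer describes (my sketch, not part of the answer):

import re

word_start = re.compile(r"[A-Za-z][a-z]+")  # vim's \a\l\+

for text in ("quick brown fox", "QuickBrownFox", "Quick Brown Fox", "quick_brown_fox"):
    starts = [m.start() for m in word_start.finditer(text)]
    print(text, "->", starts)

# Each list gives the offsets vim would jump to when searching backwards,
# matching the caret positions in the question.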
_unix.298437 | Is there an easy way to split, for example, an Apache vhost file with multiple vhosts into one vhost per file? Or something else that allows operating on one vhost at a time to get grep output. A solution in bash is preferred. | Split file by pattern | bash;shell script;text processing;split | null
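There is no accepted answer here. In the shell, csplit can split a file on a pattern (e.g. csplit -z vhosts.conf '/<VirtualHost/' '{*}'); below is a Python sketch under the assumption that each vhost is delimited by <VirtualHost ...> and </VirtualHost> lines (the input file name is hypothetical):

import re

blocks = []
current = None
with open("httpd-vhosts.conf") as fh:   # hypothetical input file name
    for line in fh:
        if re.match(r"\s*<VirtualHost\b", line):
            current = []                 # start collecting a new vhost block
        if current is not None:
            current.append(line)
        if re.match(r"\s*</VirtualHost>", line):
            blocks.append(current)       # block finished
            current = None

for i, block in enumerate(blocks, 1):
    with open(f"vhost-{i:03d}.conf", "w") as out:
        out.writelines(block)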
_codereview.23625 | I'd like to know if I'm doing profile configuration in the wrong place or in the wrong way. I'm following the Onion Architecture, so that restricts the direction of my dependencies towards the center.

Core
My domain model and AutoMapper facade:

namespace Core.Domain
{
    public class MyModel
    {
        // model stuff
    }
}

namespace Core.Services
{
    public interface IMapper
    {
        object Map(object source, Type sourceType, Type destinationType);
    }
}

Infrastructure
AutoMapper facade implementation:

namespace Infrastructure.Mapping
{
    public class Mapper : IMapper
    {
        private readonly IMappingEngine _mappingEngine;

        public Mapper(IMappingEngine mappingEngine)
        {
            _mappingEngine = mappingEngine;
        }

        public object Map(object source, Type sourceType, Type destinationType)
        {
            return _mappingEngine.Map(source, sourceType, destinationType);
        }
    }
}

UI
This is my controller and view model. I'm using AutoMapper via a filter, following this example.

namespace UI.Controllers
{
    public class HomeController : Controller
    {
        [AutoMap(typeof(MyModel), typeof(MyViewModel))]
        public ActionResult Index()
        {
            var myItem = _myRepository.GetById(0);
            return View(myItem);
        }
    }
}

namespace UI.ViewModels
{
    public class MyViewModel
    {
        // view stuff
    }
}

Dependency Resolution
This is where I have my doubts:

namespace DependencyResolution
{
    public class MappingModule : NinjectModule
    {
        public override void Load()
        {
            Mapper.Initialize(cfg => cfg.AddProfile(new MyProfile()));
            Bind<IMappingEngine>().ToMethod(ctx => Mapper.Engine);
            Bind<IMapper>().To<Mapping.Mapper>();

            Kernel.BindFilter<AutoMapFilter>(FilterScope.Controller, 0)
                .WhenActionMethodHas<AutoMapAttribute>()
                .WithConstructorArgumentFromActionAttribute<AutoMapAttribute>("sourceType", att => att.SourceType)
                .WithConstructorArgumentFromActionAttribute<AutoMapAttribute>("destType", att => att.DestType);
        }
    }

    public class MyProfile : Profile
    {
        protected override void Configure()
        {
            Mapper.CreateMap<MyModel, MyViewModel>().ForMember(...);
        }
    }
}

Questions
Is the way I bind to AutoMapper wrong? Is this the wrong place for the profile (keep in mind the dependency restriction)?
In an ideal world I would have placed Mapper.CreateMap<MyModel, MyViewModel>().ForMember(...) in Global.asax, but how do I expose CreateMap without referencing AutoMapper?
Is there anything else you have noticed? | Injecting AutoMapper profiles | c#;dependency injection;asp.net mvc 4 | What is the purpose of the IMapper interface and Mapper class? It looks to me like they are just wrapping the IMappingEngine interface and MappingEngine class. While this is a good technique when you have a third-party class that doesn't have an interface, I think it is overkill here. Why don't you just use IMappingEngine where you need that functionality?
If you are going to keep your Mapper class, I would rename it; having two Mapper classes is confusing.
As for where the configuration lives, I don't have a problem with doing it this way. All the wire-up is done in one place, and it's easy to find and add to as needed.
_cstheory.8893 | This is about how effectively we can express an algorithm at hand. I need this for my undergraduate teaching. I understand there is no such thing as a standard way of writing pseudocode; different authors follow different conventions. It would be helpful if people here pointed out the conventions they follow and consider the best. Is there any book that deals with this in good detail? | Good practices for writing algorithms | ds.algorithms;soft question;advice request;writing | Writing pseudocode is like writing code: It's not particularly important which standard you follow, as long as you (and the people you write with) actually follow some standard. But for the record, here's the idiosyncratic standard I use in my lecture notes, research papers, and upcoming book.

Use standard imperative syntax for control flow and memory access: if, while, for, return, array[index], function(arguments). Spell out "else if".
But use $field(record)$ instead of record.field or record->field.

Use standard mathematical notation for math: write $xy$ instead of x*y, $a\bmod b$ instead of a%b, $s\le t$ instead of s <= t, $\lnot p$ instead of !p, $\sqrt{x}$ instead of sqrt(x), $\pi$ instead of PI, $\infty$ instead of MAX_INT, etc.
But use $x\gets y$ for assignment, to avoid the == problem.
But avoid notation (and pseudocode!) entirely if English is clearer.
Symmetrically, avoid English if notation is clearer!

Minimize syntactic sugar: indicate block structure by consistent indentation (à la Python). Omit sugary keywords like begin/end or do/od or fi. Omit line numbers. Do not emphasize keywords like "for" or "while" or "if" by setting them in a different typeface or style. Ever. Just don't.
But typeset algorithm names and constants in \textsc{Small Caps}, variable names in italic, and literal strings in sans serif.
But add a small amount of vertical breathing space (\\[0.5ex]) between meaningful code chunks.

Don't specify unimportant details. If it doesn't matter what order you visit the vertices, just say "for all vertices".

For example, here is a recursive formulation of Borůvka's minimum spanning tree algorithm. I've previously defined $G / L$ as the graph obtained from $G$ by contracting all edges in the set $L$, and Flatten as a subroutine that removes loops and parallel edges.

I use my own lightweight "algorithm" LaTeX environment to typeset pseudocode. (It's just a tabbing environment inside an \fbox.) Here's my source code for Borůvka's algorithm:

\begin{algorithm}
  \textul{$\textsc{Borůvka}(G)$:}\+\\
      if $G$ has no edges\+\\
          return $\varnothing$\-\\[0.5ex]
      $L \gets \varnothing$\\
      for each vertex $v$ of $G$\+\\
          add the lightest edge incident to $v$ to $L$\-\\[0.5ex]
      return $L \cup \textsc{Borůvka}(\textsc{Flatten}(G / L))$
\end{algorithm}
_unix.39087 | I saw a tutorial about redirecting client requests on a specific port to a VM inside the server using iptables. Is there a way to redirect client requests for foo.com to a VM using only iptables, or should I go for a Squid proxy server? | Is there a way to redirect requests from foo.com to a VM on that server? | networking;iptables | null
_webmaster.55146 | I am working in Joomla! 1.5.9 and trying to change the page titles in the menu browser (I think that's what it's called). I have only been able to find basic page info for those specific pages in the section manager (the green folder) within the editor/admin site. However, none of the title changes I make to those pages are reflected on the actual website itself. I have been able to add a section (just out of curiosity), but it is not visible on the actual site either, nor can I figure out how to delete any of the pages/sections. I don't know where to find the options for editing page titles. | Joomla! 1.5.9 changing page titles on menu | joomla;title;menu;titles | null
_unix.99275 | This is a follow-up question to my earlier problem. In short, I was experiencing massive I/O drops together with not-so-violent disk grinding, and with help I found out that the big caches were the problem. Now I'm trying to find a solution.
I'm running a daily-updated 32-bit Debian testing on a computer with 16GB RAM, a 120GB SSD and two 1TB HDDs. The SSD stores the read-heavy parts of the distribution, and one of the 1TB disks stores /var, /tmp, /media and other write-heavy parts, including a part of my home folder. The second HDD is pure file storage. Everything is ext4-formatted.
The distribution's nominal RAM usage is 1 to 1.5GB. The remaining RAM is converted to cache at the kernel's discretion. After the cache grows beyond a certain point, I/O performance drops massively (from ~90MB/sec to ~5MB/sec). Dropping the caches solves the problem, but only temporarily.
What can be the reason? While I'm a developer and a former cluster admin, caching is beyond my knowledge at this point. | Too large cache causes disks to grind, I/O to drop | cache;storage | null
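No accepted answer in the thread; when diagnosing something like this, a common first experiment (my suggestion, not from the thread) is to log cache growth and dirty-page backlog while reproducing the slowdown, since a large Dirty/Writeback backlog points at writeback tuning (vm.dirty_ratio, vm.dirty_background_ratio) rather than the cache as such:

import time

def meminfo(fields=("Cached", "Dirty", "Writeback")):
    out = {}
    with open("/proc/meminfo") as fh:
        for line in fh:
            key, value = line.split(":", 1)
            if key in fields:
                out[key] = int(value.split()[0])  # value in kB
    return out

# Sample every 5 seconds while reproducing the I/O drop.
while True:
    print(time.strftime("%H:%M:%S"), meminfo())
    time.sleep(5)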
_codereview.40417 | This takes a width specified by the user and prints a diamond of that width. It uses only three for loops, but could I reduce that further? Is there a more elegant solution?

import java.util.Scanner;

public class Diamond {
    static boolean cont = true;

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        while (cont) {
            System.out.print("Width: ");
            int width = input.nextInt();
            int lines = width;
            System.out.println();
            for (int line = 0; line < lines; line++) {
                for (int spaces = 0; spaces < Math.abs(line - (lines / 2)); spaces++) {
                    System.out.print(" ");
                }
                for (int marks = 0; marks < width - 2 * (Math.abs(line - (lines / 2))); marks++) {
                    System.out.print("x");
                }
                System.out.println();
            }
            System.out.println();
        }
    }
} | Print an ASCII diamond | java;console | null
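There is no accepted answer; for comparison, here is a loop-free sketch in Python (my own, using string repetition and centring) that shows how much of the index arithmetic can be avoided:

def diamond(width: int) -> str:
    half = width // 2
    # Row i has width - 2*|i - half| marks, exactly as in the Java version.
    rows = ["x" * (width - 2 * abs(i - half)) for i in range(width)]
    return "\n".join(row.center(width).rstrip() for row in rows)

print(diamond(7))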
_webmaster.5812 | I have a JavaScript-heavy site (it really couldn't be coded in any sensible way with progressive enhancement), and I am using Google's advice for making AJAX websites crawlable (i.e. use ?_escaped_fragment_ in place of '#!'). My question is this: how closely does my flat HTML need to match the AJAX-created HTML for users? I imagine Google must be checking some of the AJAX content, as otherwise this would be an easy way to do cloaking. I don't want to cloak in any way, but it is difficult to produce on the server side the exact HTML source that the AJAX generates. Would a rough approximation be good enough? Does anyone have experience doing this? | When creating HTML for _escaped_fragment_ AJAX pages, how correct does it have to be? | seo;ajax | null
_unix.225024 | [root@localhost ~]# fdisk -l

Disk /dev/xvdb: 2147.5 GB, 2147483648000 bytes
255 heads, 63 sectors/track, 261083 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00050ec0

    Device Boot      Start         End      Blocks   Id  System
/dev/xvdb1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/xvdb2              64      261084  2096638976   8e  Linux LVM

Disk /dev/xvda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000effaf

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/xvda2              64         653     4729856   8e  Linux LVM
Partition 2 does not end on cylinder boundary.

Disk /dev/mapper/VolGroup-lv_swap: 16.8 GB, 16844324864 bytes
255 heads, 63 sectors/track, 2047 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup00-lv_root: 4303 MB, 4303355904 bytes
255 heads, 63 sectors/track, 523 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup00-lv_swap: 536 MB, 536870912 bytes
255 heads, 63 sectors/track, 65 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_home: 2076.4 GB, 2076423749632 bytes
255 heads, 63 sectors/track, 252444 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Mounted drives:

[root@localhost ~]# df
Filesystem                     1K-blocks   Used Available Use% Mounted on
/dev/mapper/VolGroup00-lv_root   4005360 780156   3015080  21% /
tmpfs                            1475396      0   1475396   0% /dev/shm
/dev/xvda1                        487652  52811    409241  12% /boot

(Screenshot of XenServer attached drives omitted.)

How can I attach this 2TB hard drive at /mnt? I thought mount -t ext2 /dev/xvdb /mnt would work, but everything I try fails.

/etc/fstab:

#
# /etc/fstab
# Created by anaconda on Sun Aug 23 19:29:26 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup00-lv_root  /      ext4  defaults  1 1
UUID=b9b08863-9a52-432a-b904-61a3144ce709  /boot  ext4  defaults  1 2
/dev/mapper/VolGroup-lv_swap    swap   swap  defaults  0 0
/dev/mapper/VolGroup00-lv_swap  swap   swap  defaults  0 0
tmpfs     /dev/shm  tmpfs   defaults        0 0
devpts    /dev/pts  devpts  gid=5,mode=620  0 0
sysfs     /sys      sysfs   defaults        0 0
proc      /proc     proc    defaults        0 0

Mounted filesystems:

[root@localhost ~]# mount
/dev/mapper/VolGroup00-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/xvda1 on /boot type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

UUIDs:

[root@localhost ~]# ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx. 1 root root 10 Aug 23 21:37 31de66c0-cd8f-4f75-9bf6-7194dc43dafd -> ../../dm-4
lrwxrwxrwx. 1 root root 10 Aug 23 21:37 35a57414-2271-4db2-9f9a-09cf36c19044 -> ../../dm-0
lrwxrwxrwx. 1 root root 11 Aug 23 21:37 5a99aec9-7270-44ce-9df7-32ff4d70a75b -> ../../xvdb1
lrwxrwxrwx. 1 root root 10 Aug 23 21:37 986339d2-abd7-453f-8422-e8c1acb9368a -> ../../dm-2
lrwxrwxrwx. 1 root root 11 Aug 23 21:37 b9b08863-9a52-432a-b904-61a3144ce709 -> ../../xvda1
lrwxrwxrwx. 1 root root 10 Aug 23 21:37 c065a889-055d-44f2-bae4-0ac73d86d493 -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Aug 23 21:37 e85cb70e-6403-4b06-a481-a1512140a491 -> ../../dm-3 | Problem mounting disk | filesystems;mount | null
_unix.318864 | I've got a command to find big files in a particular folder, but for some reason it won't work in certain situations and I get an "Argument list too long" error. How do I fix this command so it works every time?

jbsmith:/tmp$ sudo du -hsx * | sort -rh | head -10
-bash: /usr/bin/sudo: Argument list too long | "Argument list too long" when using du | shell;disk usage;sort | null
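There is no accepted answer; the failure happens because the shell expands * into more argument bytes than the kernel's exec limit allows, before sudo ever runs. In the shell, find /tmp -mindepth 1 -maxdepth 1 -exec du -hsx {} + sidesteps the limit by batching arguments. A Python sketch of the same top-10 report (my own, not from the thread):

import os
import sys

def tree_size(path):
    """Total size in bytes of a file, or of a directory tree."""
    if not os.path.isdir(path):
        return os.lstat(path).st_size
    total = 0
    for root, dirs, files in os.walk(path, onerror=lambda e: None):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass  # skip files that vanish or are unreadable
    return total

top = sys.argv[1] if len(sys.argv) > 1 else "."
sizes = [(tree_size(os.path.join(top, e)), e) for e in os.listdir(top)]
for size, name in sorted(sizes, reverse=True)[:10]:
    print(f"{size / 2**20:10.1f} MiB  {name}")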
_codereview.134858 | What do you think about this implementation of a two-float comparison functor, considering how the tolerance is introduced?

class Less
{
private:
    float m_tolerance;

public:
    Less(const float tolerance)
        : m_tolerance(tolerance)
    {
    }

    bool operator()(const float f1, const float f2) const
    {
        const bool tooCloseToCompareSmaller = (std::abs(f2 - f1) < m_tolerance);
        const bool isSmaller = (f1 < f2);
        return !tooCloseToCompareSmaller && isSmaller;
    }

    ~Less()
    {
    }
}; | Functor to compare two floats with tolerance | c++;c++11;floating point;overloading | A few remarks:

Seems appropriate to add a default value of 0.0f in the constructor:

Less(const float tolerance = 0.0f)
    : m_tolerance(tolerance)
{
}

This class could most certainly be used as a base class. For example:

class Less1 : public Less
{
public:
    Less1() : Less(1.0f)
    {
        ...
    }
    ...
};

So you may as well make it suitable to serve as such:
Declare virtual ~Less().
Change private to protected where needed.

The operation f1 < f2 is most likely less expensive than std::abs(f2 - f1), so you may as well check it first:

bool operator()(const float f1, const float f2) const
{
    return f1 < f2 && std::abs(f2 - f1) >= m_tolerance;
}

You should add an assertion that the value of m_tolerance is non-negative. Alternatively, you could use std::abs(m_tolerance), but it seems a bit hacky.
_webapps.33327 | I'm trying to delete an active domain from a Google Apps account so that I can create a new Apps account with that domain (long story: I was using one Apps account as a stand-in while waiting for an organization to get its act together). I keep getting an error telling me that I still have e-mail addresses or aliases on that domain. But I've checked: I have renamed all e-mail addresses that were created with it and deleted the aliases. I've checked them each individually, twice. Is there any way to display e-mail addresses by their aliases? Or something else that could be preventing me from deleting the domain? | How to delete an active domain from a Google Apps account? | google apps | null
_unix.293703 | I want to convert a particular Verilog bus into individual split form using a sed or awk command.

Input:

module test ( temp_bus[3:0], temp_B[1:0] )
input [3:0] temp_bus;
output [1:0] temp_B;
endmodule

Output:

module test ( temp_bus[3], temp_bus[2], temp_bus[1], temp_bus[0], temp_B[1], temp_B[0])
input temp_bus[3], temp_bus[2], temp_bus[1], temp_bus[0];
output temp_B[1], temp_B[0];
endmodule

Edit 1: a case with multiple declarations:

module test ( temp_bus[3:0], temp_B[1:0] , temp_C[1:0] )
input [3:0] temp_bus;
output [1:0] temp_B , temp_c;
endmodule

The result must contain: output temp_B[1], temp_B[0], temp_C[1], temp_C[0];
cas has given the best solution below. | sed to split Verilog bus into individual ports | text processing;awk;sed | Here's one way to do it in perl:

(This revised version will handle both of your sample inputs. It also looks like a semi-colon inside [] doesn't confuse the markdown syntax highlighting.)

#! /usr/bin/perl

use strict;

sub expand {
    my ($name, $start, $stop) = @_;
    my $step = ($start < $stop ? 1 : -1);
    my @names = ();
    my $i = $start;
    while ($i != $stop + $step) {
        push @names, "$name\[$i\]";
        $i += $step;
    }
    return @names;
};

while (<>) {
    chomp;
    s/([(),;])/ $1/g;  # add a space before any commas, semi-colons, and
                       # parentheses, so they get split into separate fields.

    my @l = ();        # array to hold the output line as it's being built

    my @line = split ' ';  # split input line into fields, with 1-or-more
                           # whitespace characters (spaces or tabs) between
                           # each field.

    my $f = 0;         # field counter
    while ($f < @line) {
        if ($line[$f] =~ m/module/io) {
            push @l, $line[$f++];
            while ($f < @line) {
                if ($line[$f] =~ m/^(.*)\[(\d+):(\d+)\]$/o) {
                    # expand [n:n] on module line
                    push @l, join(", ", expand($1, $2, $3));
                } else {
                    push @l, $line[$f];
                };
                $f++;
            };
        } elsif ($line[$f] =~ m/^(?:input|output)$/io) {
            # use sprintf() to indent first field to 10 chars wide.
            $line[$f] = sprintf("%10s", $line[$f]);
            push @l, $line[$f++];
            my @exp = ();
            while ($f < @line) {
                if ($line[$f] =~ m/^\[(\d+):(\d+)\]$/o) {
                    # extract and store [n:n] on input or output lines
                    @exp = ($1, $2);
                } elsif ($line[$f] =~ m/^\w+$/io) {
                    # expand word with [n:n] on input or output lines
                    push @l, join(", ", expand($line[$f], @exp));
                } else {
                    push @l, $line[$f];
                };
                $f++;
            };
        } else {
            # just append everything else to the output @l array
            push @l, $line[$f];
        };
        $f++;
    }
    print join(" ", @l), "\n";
}

Output:

$ ./jigar.pl ./jigar.txt
module test ( temp_bus[3], temp_bus[2], temp_bus[1], temp_bus[0] , temp_B[1], temp_B[0] )
     input temp_bus[3], temp_bus[2], temp_bus[1], temp_bus[0] ;
    output temp_B[1], temp_B[0] ;
endmodule

Output from your second sample:

$ ./jigar2.pl jigar2.txt
module test ( temp_bus[3], temp_bus[2], temp_bus[1], temp_bus[0] , temp_B[1], temp_B[0] , temp_C[1], temp_C[0] )
     input temp_bus[3], temp_bus[2], temp_bus[1], temp_bus[0] ;
    output temp_B[1], temp_B[0] , temp_c[1], temp_c[0] ;
endmodule
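A compact Python alternative to the perl script (my sketch, not part of the answer; it only handles the [n:m] bus syntax shown in the samples):

import re
import sys

def expand(match):
    name, hi, lo = match.group(1), int(match.group(2)), int(match.group(3))
    step = 1 if hi < lo else -1
    return ", ".join(f"{name}[{i}]" for i in range(hi, lo + step, step))

bus = re.compile(r"(\w+)\[(\d+):(\d+)\]")

for line in sys.stdin:
    # For "input [3:0] a, b;" style lines, first move the range onto
    # each identifier, then expand every name[h:l] in the line.
    m = re.match(r"(\s*)(input|output)\s+\[(\d+):(\d+)\]\s+(.*)", line)
    if m:
        indent, kind, hi, lo, rest = m.groups()
        names = [n.strip() for n in rest.rstrip(";\n").split(",")]
        rest = ", ".join(f"{n}[{hi}:{lo}]" for n in names)
        line = f"{indent}{kind} {rest};\n"
    sys.stdout.write(bus.sub(expand, line))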
_vi.10832 | I have this in my .vimrc:

function HeaderTpl(fchar, boxchar, width)
    let sfile = expand("%:p")
    return a:fchar . " " . repeat(a:boxchar, a:width) . "\n"
        \ . a:fchar . " " . sfile . "\n"
        \ . a:fchar . "\n"
        \ . a:fchar . " " . strftime("%FT%T %z") . "\n"
        \ . a:fchar . " " . repeat(a:boxchar, a:width) . "\n"
endfunction

imap <silent> ### <C-R>=HeaderTpl('#', '-', 71)<CR>
imap <silent> /// <C-R>=HeaderTpl('//', '-', 70)<CR>

This works almost perfectly: if I type ### in a blank file, I get this header:

# -----------------------------------------------------------------------
# /path/to/current/file.conf
#
# 2017-01-03T20:02:50 +0100
# -----------------------------------------------------------------------

But if the filetype is defined (for something like vim python_script.py), then I get something like this:

# -----------------------------------------------------------------------
# # /path/to/current/file.conf
# #
# # 2017-01-03T20:02:50 +0100
# # -----------------------------------------------------------------------
#

I think the additional # is caused by autoindent or smartindent. My question: how do I prevent this additional # from being inserted? One of my ideas was to temporarily switch off autoindent and smartindent, but how do I do that?

$ vim --version
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Nov 24 2016 16:44:48)
Included patches: 1-1689
Extra patches: 8.0.0056 | How can I insert text from a function without triggering autoinsertion of comments? | vimscript | null
_cs.55909 | We know that for problems in NP, a yes instance has a short certificate, and for coNP, a no instance has a short certificate. Is there a short-certificate analogy for the higher levels of the polynomial hierarchy? | Short certificate analogy of PH? | complexity theory;complexity classes | The correct analogy for higher levels of the polynomial hierarchy is that of a game between two players, the $\exists$ player and the $\forall$ player. The $\exists$ player wants to prove that the input is in the language, and the $\forall$ player wants to disprove it.
For a language $L \in \Sigma_k^P$, the game is as follows. On input $x$, the $\exists$ and $\forall$ players alternate in presenting strings $y_1,y_2,\ldots,y_k$ (the $\exists$ player starts). After $k$ rounds, the referee runs a polynomial-time procedure $f$ (set in advance) on $x,y_1,\ldots,y_k$, and declares $\exists$ to be the winner if $f$ returns YES, and $\forall$ to be the winner if $f$ returns NO. Since $L \in \Sigma_k^P$, when $x \in L$, $\exists$ has a winning strategy, and when $x \notin L$, $\forall$ has a winning strategy.
For a language $L \in \Pi_k^P$, the only difference is that $\forall$ starts.
When $k = 1$, there is no communication, and so we can talk in terms of witnesses. For larger $k$, we can still use the witness terminology, but with different semantics. Consider again $\Sigma_k^P$, for $k = 2r$ or $k = 2r+1$. The witness of $\exists$ consists of $r$ functions $Y_1,Y_2,\ldots,Y_r$ which are used to generate the strings produced by $\exists$ as follows: $y_1 = Y_1(x)$; $y_3 = Y_2(x,y_1,y_2)$ (or $y_3 = Y_2(x,y_2)$, since $y_1$ is known); and so on. You can think of these functions as implicit witnesses, since $\exists$ doesn't reveal the entire functions $Y_2,\ldots,Y_r$, but only the relevant values.
For $x \in L$, the property that these witnesses satisfy is that for any reply by the $\forall$ player, the referee declares $\exists$ to be the winner. For $x \notin L$, the $\forall$ player has a similar kind of witness. Thus a witness here is really the same as a winning strategy.
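For reference, the standard quantifier characterization that underlies this game (the textbook definition, stated here for convenience; it is not part of the original answer):

$$
x \in L \;\Longleftrightarrow\; \exists y_1\, \forall y_2\, \exists y_3 \cdots Q_k y_k :\; f(x, y_1, \ldots, y_k) = \text{YES}, \qquad |y_i| \le p(|x|) \text{ for all } i,
$$

where $f$ is a polynomial-time predicate, $p$ is a polynomial, and $Q_k = \exists$ for odd $k$ and $Q_k = \forall$ for even $k$; for $\Pi_k^P$ the quantifier sequence starts with $\forall$ instead. The certificate for $k = 1$ is the string $y_1$; for larger $k$ the "certificate" is the winning strategy described in the answer.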