id | question | title | tags | accepted_answer |
---|---|---|---|---|
_unix.338413 | I'm attempting to set up my first guest set up under KVM and am having trouble getting the network working. The host machine vm_host_box has a static IP of 4.4.4.185.The guest I am setting up needs to have a static IP of 4.4.4.200. I currently have zero connectivity to anything (including gateway) from the guest VM. Note that the guest VM is a minimal install and I'm not able to install any additional packages due to not having network for yum. Here's what I can see from the host: [root@vm_host_box ~]# cat /etc/sysconfig/network-scripts/ifcfg-em1 TYPE=EthernetBOOTPROTO=noneDEFROUTE=yesIPV4_FAILURE_FATAL=noNAME=em1UUID=some-string-hereDEVICE=em1ONBOOT=yesDNS1=4.4.10.1DOMAIN=vmhostbox.my.domainIPV6INIT=noIPADDR=4.4.4.185PREFIX=23GATEWAY=4.4.4.2list of other ifcfg-* files: [root@vm_host_box network-scripts]# ls ifcfg-*ifcfg-em1 ifcfg-em2 ifcfg-em3 ifcfg-em4 ifcfg-loip addr details: [root@vm_host_box ~]# ip addr show em12: em1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 14:18:77:69:b6:9a brd ff:ff:ff:ff:ff:ff inet 4.4.4.185/23 brd 170.140.203.255 scope global em1 valid_lft forever preferred_lft forever[root@vm_host_box ~]# ip addr show virbr06: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000 link/ether 52:54:00:11:3f:7e brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0 valid_lft forever preferred_lft foreverbrctl shows a bridge, but there is an interface listed and I'm not sure how it was added. [root@vm_host_box ~]# brctl showbridge name bridge id STP enabled interfacesvirbr0 8000.525400113f7e yes virbr0-nicHere's the virsh details for the default network. I'm unclear about the range of IPs. Will virsh be creating a new subnet, or am I telling it what IP's to look out for? 
[root@vm_host_box ~]# virsh net-list --all Name State Autostart Persistent---------------------------------------------------------- default active yes yes[root@vm_host_box ~]# virsh net-edit default<network> <name>default</name> <uuid>0b06b7c0-708f-4925-a8f6-84587bf96575</uuid> <forward mode='nat'/> <bridge name='virbr0' stp='on' delay='0'/> <mac address='52:54:00:11:3f:7e'/> <ip address='192.168.122.1' netmask='255.255.255.0'> <dhcp> <range start='192.168.122.2' end='192.168.122.254'/> </dhcp> </ip></network>Here's what I am showing under KVM's Connection Details screens: andHere is the config after starting the Guest (labeled rhel7)Any pointers would be greatly appreciated. | No network connectivity in KVM guest | rhel;kvm | null |
_unix.70665 | I'm not sure if sudo supports this. I want to execute a command only if it is configured as NOPASSWD; that is, if the command can be run without a password, run it; otherwise, quit directly. So far the only option seems to be listing all available commands with sudo -l and parsing the output manually, which looks dirty. Are there any alternatives? | Don't execute a command with sudo unless it's configured as password-free | sudo | Not very pretty, but short of parsing sudo -l as you're doing I don't know an alternative: simply run your command with the -n switch:The -n (non-interactive) option prevents sudo from prompting the user for a password. If a password is required for the command to run, sudo will display an error message and exit.Example:me$ sudo -l User me may run the following commands on this host: (ALL) NOPASSWD: /usr/bin/vim (ALL) /usr/bin/nanome$ sudo -n nano /etc/hostssudo: a password is requiredme$ echo $?1me$ sudo -n vim /etc/hosts# :qme$ echo $?0The problem with this is that you can't tell directly from the exit code if sudo failed because it required a password, or if the command you ran failed. So you'll need to parse the output. (Fortunately the error message is static, so that shouldn't be too hard. Beware of localization though.) |
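To build on the accepted answer's caveat about exit codes, here is a rough Python sketch (illustrative only, not part of the original answer) that wraps sudo -n and tells the two failure modes apart by looking for sudo's error message; the exact string is an assumption that holds for the default English locale, as the answer itself warns.

```python
import subprocess

# Assumed error string, taken from the answer's example output
# ("sudo: a password is required"); valid for the English locale only.
PASSWORD_ERROR = "a password is required"

def needs_password(returncode, stderr):
    """Decide, from sudo -n's exit status and stderr, whether the
    failure was caused by a password requirement rather than by the
    wrapped command itself."""
    return returncode != 0 and PASSWORD_ERROR in stderr

def run_if_passwordless(cmd):
    """Run cmd under 'sudo -n'; return the CompletedProcess, or None
    if sudo refused because a password would have been required."""
    result = subprocess.run(
        ["sudo", "-n"] + cmd,
        capture_output=True, text=True,
    )
    if needs_password(result.returncode, result.stderr):
        return None
    return result
```

With a NOPASSWD entry, `run_if_passwordless(["vim", "/etc/hosts"])` returns the completed process; without one it returns None instead of prompting.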
_softwareengineering.161264 | I am taking a networking class(data communication) this coming semester. In many courses in my computer engineering degree, the books lack mathematical content. I am not a math wizard, but I think mathematics has the potential to give a much deeper understanding of different concepts, but it requires more work. Is it a good idea to focus on information theory(found a cool book by a researcher from Bell Labs) and use a more high level book like Computer networking, A top down approach as supplementary text? | How related is coding/information theory to computer networking? | networking;information | null |
_unix.80042 | I'm running Ubuntu 12.04 LTS on my notebook, and I've got a Motorola Razr which I updated to Android 4.1.2. Before updating, I connected my smartphone to my hotspot wifi very quickly, but after the update it no longer works. Android 4.1.2 can't see the hotspot created by Ubuntu; it sees all other wifi networks and can connect to them, but not mine. What could be wrong? | Ubuntu 12.04 hotspot wifi network not visible to Android 4.1.2 | ubuntu;wifi;android | null |
_webmaster.10116 | This is a recent problem I've been having.My site can be accessed from almost everywhere else except from my home IP, where I do most of my editing/updating, etc. I've tested my connection from my school's network, a friend's connection from out of state (multiple states), and through a tethered connection with my friend's Android. It works in all those cases, both viewing, accessing the cPanel, and using FTP.Here's the problem that happens to me when I try to view it from my home IP:The page times out in Firefox, IE, and Chrome.Using the cmd, I ran tracert and ping, both as failed attempts. Log here.downforeveryoneorjustme.com says my site is up. So do the other site checkers.I can't access my cPanel or FTP accounts.I can't access the host site. (I use perfectz.info for hosting, and I can't access their site either.)System settings:No firewall enabled.Ports are seemingly properly forwarded. (e.g. The ports are open in the router settings, and are open everywhere else.)I have an email forwarder set up from the cPanel that works just fine. (i.e. I can receive emails sent to that address.If any other information is needed, I'll do my best to provide it.UPDATE@ilhan: I use two things:1) The site cPanel from in-browser.2) Dreamweaver CS5 FTP.@Matthias: I tested both, and it passes the dual stack with a 10/10. What should I do then? | Cannot access personal website from home IP. More details inside | ip address;address | null |
_webmaster.93615 | I am using <meta name=robots content=noindex> for few pages that are not supposed to be displayed in any search results.I know how important is the meta-description tag for SEO.Is it still required for pages that are not indexed? | Is the meta-description tag required when meta-robots is set to noindex? | seo;noindex;meta description | null |
_softwareengineering.50554 | Today at work one of my colleagues reviewed my code and suggested that I remove a set-only property and use a method instead. As we were both busy with other things, he told me to look at the Property Design section of the Framework Design Guidelines book. In it, the author simply advises avoiding properties with the setter having broader accessibility than the getter. Now I'm wondering: why is it not recommended to have a set-only property? Can anyone clarify, please? | Why is it not recommended to have a set-only property? | design;code quality;code reviews;code smell | null |
_unix.368944 | Recently I noticed we have 3 options to set environment variables:export envVar1=1setenv envVar2=2env envVAr3=3If there are other ways, please enlighten us.When should I prefer one over the other? Please suggest guidelines.As for shell compatibility, which is the most expansive (covers more shell dialects)?I already noticed this answer but I wish to expand the question with env and usage preference guidelines. | What is the difference between env, setenv, export and when to use? | bash;shell;environment variables | null |
_unix.354691 | I have a logfile with 2 distinct events (among others) that I need to capture.Each event generates a separate, dedicated line in the logfile with this format:timestamp - PID - process - event-type - event-detailsI don't care much about anything but the event-details column of the file, and the data I'm expecting to receive there, looks like this:Example 1: { values:{ SPEED:7.0 } }Example 2: { values:{ CADENCE:41 } }I've been trying to write a shell script that would only read the last line of the logfile every time, and depending on the contents of the event-details column, redirect the resulting SPEED or CADENCE data to a specific text file (when I say resulting SPEED/CADENCE data I mean the integer after the SPEED: expression for example).So far I was able to redirect the results to two different files, but:I have to tail the logfile twice in order for the script to work and......as a result of that, I have the feeling that the second file is not being updated at the same rate as the first one...as if, for some reason, I was missing some of the CADENCE events due to the order in which the script was written.I tried using the sleep function, and also tried to tail more than one line at a time to try to mitigate the lack of CADENCE update with no luck.I just keep missing CADENCE events from time to time.A note on the logfile behavior: Looking at the log, there are 3 events that appear most of the time, and they are always logged in the same order of appearance (CADENCE, SPEED and OTHER), and from time to time there is a 4th event. 
I just wanted to clarify that the missing CADENCE events have nothing to do with that 4th event appearance.This is a summarized version of the script that I have currently running:#!/bin/bashwhile :do tail -1 logfile.txt | grep -oP '(?<=SPEED:)[0-9]+' > spd.txt tail -1 logfile.txt | grep -oP '(?<=CADENCE:)[0-9]+' > cad.txtdone=======UPDATE:=======This is the complete log line and output expected:Example of line 1:Input (from logfile.txt):03-16 21:05:28.641 2797-2842/process:Service D/WEBSOCKET: receiving: { values:{ Speed MPH:3.1, Speed KPH:4.9, Miles:0.551, Kilometers:0.886 } }Output (sent to spd.txt):4.9Example of line 2:Input (from logfile.txt):03-16 21:05:29.309 2797-2842/process:Service D/WEBSOCKET: receiving: { values:{ RPM:27 } }Output: (sent to cad.txt):27 | Redirecting GREP output to different text files depending on capture content | grep;io redirection;tail | Got it Kamaraj, thanks for the help! It was key to find the right answer:This is the script that worked for me:tail -1 logfile.txt | awk -F\:\ '{for (c=1;c<=NF;c++) {if ($c ~ /Speed KPH/) {print $(c+1)+0 > spd.txt} {if ($c ~ /RPM/) {print $(c+1)+0 > cad.txt}}}}'Had to go back to tail -1 because tail -f was affected by buffering and it would also leave a trail in the txt file with all the values captured. I was only expecting a single line with the most current result in the output text files.Thanks and best regards! |
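For comparison, the accepted answer's field-scanning awk can be mirrored with one regular expression per metric. A Python sketch (illustrative only; the 'Speed KPH' and 'RPM' labels and the spd.txt/cad.txt file names are taken from the log excerpts above):

```python
import re

# One pattern per metric; labels and destination file names come
# from the log excerpts in the question.
PATTERNS = {
    "spd.txt": re.compile(r"Speed KPH:([0-9.]+)"),
    "cad.txt": re.compile(r"RPM:([0-9]+)"),
}

def route(line):
    """Return {filename: captured value} for every metric present
    in a single log line."""
    out = {}
    for fname, pat in PATTERNS.items():
        m = pat.search(line)
        if m:
            out[fname] = m.group(1)
    return out
```

A tail loop over the logfile would call route() per line and write each captured value to the file named by the dictionary key, giving both outputs from a single pass instead of two tail invocations.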
_unix.188438 | How can I use 7z compression with the PPMd algorithm? It produces better compression than 7z's default. | How to use PPMd with 7z under Linux? | linux;7z | null |
_codereview.105978 | I've just set up a bit of code to try and make things neater, but all I've really done, is add a whole bunch of nested if statements:public void Copy(Copy copyType){ FrmProjectChooser frm = null; if (copyType.HasFlag(Copy.Single)) { if (copyType.HasFlag(Copy.Estimations) && copyType.HasFlag(Copy.SubItems)) { // Single item, with sub items and estimations List<WB> items = new List<WB> { (WB)treeWBS.GetDataRecordByNode(treeWBS.FocusedNode) }; items.AddRange(_WBSManager.GetAllChildren(items[0].idWBS)); List<CTREstimation> estimations = new List<CTREstimation>(); foreach (WB wbs in items) estimations.AddRange(CTREstimationsManager.Instance.GetEstimationsForWBS(wbs.idWBS)); frm = new FrmProjectChooser(items, estimations); } else if (copyType.HasFlag(Copy.Estimations)) { // Single item, with estimations. WB wbsItem = treeWBS.GetDataRecordByNode(treeWBS.FocusedNode) as WB; if (wbsItem != null) frm = new FrmProjectChooser(new List<WB> { wbsItem }, CTREstimationsManager.Instance.GetEstimationsForWBS(wbsItem.idWBS)); } else if (copyType.HasFlag(Copy.SubItems)) { // Single item, with sub items List<WB> items = new List<WB> { (WB)treeWBS.GetDataRecordByNode(treeWBS.FocusedNode) }; items.AddRange(_WBSManager.GetAllChildren(items[0].idWBS)); frm = new FrmProjectChooser(items); } else { // Single item WB wbsItem = treeWBS.GetDataRecordByNode(treeWBS.FocusedNode) as WB; if (wbsItem != null) frm = new FrmProjectChooser(new List<WB> {wbsItem}); } } else if (copyType.HasFlag(Copy.All)) { if (copyType.HasFlag(Copy.Estimations)) { // All items, with estimations. 
List<WB> wbsItems = _WBSManager.GetWBSForProject(CurrentProject.idProjects); List<CTREstimation> estimations = new List<CTREstimation>(); foreach (WB wbs in wbsItems) estimations.AddRange(CTREstimationsManager.Instance.GetEstimationsForWBS(wbs.idWBS)); frm = new FrmProjectChooser(wbsItems, estimations); } else { // All items frm = new FrmProjectChooser(_WBSManager.GetWBSForProject(CurrentProject.idProjects)); } } if (frm != null) frm.ShowDialog();}[Flags]private enum Copy{ None = 0, Single = 1, // Straight copy of single item SubItems = 1 << 1, // Include any children in the copy Estimations = 1 << 2, // Include estimations in the copy All = 1 << 3 // Copy all items}The code will pull Items from a TreeList. Sub Items are of the same type as Items, but Estimations are a different type. I don't think it's very neat. I've tried googling similar titles to this question but I can't really find anything that helps. Is there anything I can do to make this neater? | Handling Enum flags for a copy operation | c#;enum | This answer is based on what @Matt said. I thought the idea was very good and it could be pushed a step further.Using @Matt's idea, your code wouldn't contain nested ifs, but you still chain ifs. Of course you could use a switch, but you still have the same problem. Lots of code in the same method, and we don't like crowded methods.The solution? (Well.. my solution)A dictionary!Now, I don't know what your doing inside your ifs, which is sad, because I can't help you much further. 
But it would look like this : //Right now, I assume your method is in an object, so I put the dictionary static, and it will be populated in the static ctor of the object.private static readonly Dictionary<Copy,Action> _mapCopy = new Dictionary<Copy,Action>();static YourObjectWhatever(){ _mapCopy.Add(Copy.All | Copy.Estimations, CopyAllEstimations); _mapCopy.Add(Copy.Single | Copy.Estimations, CopySingleEstimations); //etc...}private static void CopyAllEstimations(){ //Do stuff..}private static void CopySingleEstimations(){ //Do stuff..}public void Copy(Copy copyType){ Action action = null; if(!_mapCopy.TryGetValue(copyType, out action)) throw new ArgumentException(The copy type doesn't have an associated action); action();}Now, your logic is separated in multiple methods, which is better but not perfect. If you want to push this even farther (for testability for example), you'd need to implement an interface, let's say ICopyAction (I can't tell more because there's a little lack of context). So now we'd have : public interface ICopyAction{ void Copy(/*I'm guessing you'd have parameters*/);}public class CopyAllEstimations : ICopyAction{ public void Copy() { //do stuff }}//etc...And the usage ://Same assumptionprivate static readonly Dictionary<Copy,ICopyAction> _mapCopy = new Dictionary<Copy,ICopyAction>();static YourObjectWhatever(){ _mapCopy.Add(Copy.All | Copy.Estimations, new CopyAllEstimations()); //etc...}public void Copy(Copy copyType){ ICopyAction action = null; if(!_mapCopy.TryGetValue(copyType, out action)) throw new ArgumentException(The copy type doesn't have an associated action); action.Copy();}This method works as well if you want to have parameters to your method, but I didn't this explanation because I don't know if you need it.. |
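The reviewer's Dictionary<Copy, ICopyAction> dispatch translates to most languages. Here is an illustrative Python sketch of the same idea (the handler names and return values are placeholders, not part of the original code):

```python
from enum import Flag, auto

class Copy(Flag):
    NONE = 0
    SINGLE = auto()
    SUB_ITEMS = auto()
    ESTIMATIONS = auto()
    ALL = auto()

# Placeholder handlers standing in for the real copy logic.
def copy_single(): return "single"
def copy_single_estimations(): return "single+estimations"
def copy_all_estimations(): return "all+estimations"

# Map each supported flag combination to a handler, exactly as the
# reviewer's C# Dictionary<Copy, Action> does.
HANDLERS = {
    Copy.SINGLE: copy_single,
    Copy.SINGLE | Copy.ESTIMATIONS: copy_single_estimations,
    Copy.ALL | Copy.ESTIMATIONS: copy_all_estimations,
}

def dispatch(copy_type):
    """Look up and run the handler for a flag combination; unmapped
    combinations fail fast, mirroring the ArgumentException."""
    try:
        handler = HANDLERS[copy_type]
    except KeyError:
        raise ValueError("no action registered for %s" % copy_type)
    return handler()
```

The payoff is the same in both languages: the branching logic collapses into data, and each copy variant lives in its own small, testable unit.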
_datascience.8451 | In performing ALS and getting an item matrix of latent features, what would be the best method for inferring the possible meaning of each latent factor in the item space? And as a corollary, are the any methods for successfully visualizing this data to gain some intuition or understanding? | Visualizing Latent Features | machine learning;recommender system;visualization | null |
_cs.43089 | Consider the following language: $$ L = \{ \langle M \rangle \ |\ M \text { accepts } w \text { whenever it accepts } w^R \}$$I am trying to understand the following proof that this language $L$ is undecidable.The proof proceeds by contradiction, by reducing to $L$ the language $A_{TM}=\{\langle M,w \rangle\mid M\text{ accepts }w\}$ known to be undecidable. It goes as follows:Suppose that $L$ is decidable, then there's a TM $M_L$, that decides $L$Thus we can build $M_{ATM}$ to decide $A_{TM}$ as follows:Let $\langle M, w \rangle$ be an input for $M_{ATM}$.Construct a machine, $M_1$ which on input $x$: Simulate $M$ on $w$; if $M$ rejects $w$, $reject$. If $M$ accepts $w$, $accept$ if $x=01$, $reject$ otherwise.Simulate $M_L$ on input $\langle M_1 \rangle$.$accept$ if $M_L$ rejects. $reject$ otherwise.Since $A_{TM}$ is not decidable, we have a contradiction, which implies that $L$ cannot be decidable.However there is a point of the proof I do not understand.My question is: What happens if $M$ gets into a loop on some $w$? | Reduction and decidability | formal languages;turing machines;reductions;proof techniques;undecidability | In many problems like this, it's often a bad idea to base your construction on both acceptance and rejection. Try rewriting your $M_1$ in this formM1(x) = simulate M on w if M accepts w if x = 01 acceptNow the important thing is that If $M$ accepts $w$ (i.e., if $\langle M, w\rangle\in A_{TM}$) then $L(M_1)=\{01\}$.If $M$ fails to accept $w$, either by rejecting or by running forever, then $M_1$ will accept nothing, so $L(M_1)=\varnothing$Now if $L$ were decidable with decider $M_L$, then you can use the decider and $M_1$ to build a decider, $M_A$, for $A_{TM}$ as you suggested:MA(<M>, w) = Construct M1 as above if ML accepts <M1> reject else if ML rejects <M1> acceptNow we have If $M$ accepts $w$, then $L(M_1)=\{01\}$ so $M_L$ will reject $M_1$ and hence $M_A$ will accept $\langle M, w\rangle$. 
If $M$ doesn't accept $w$, then $L(M_1)=\varnothing$ so $M_L$ will accept $M_1$ and hence $M_A$ will reject $\langle M, w\rangle$.In short, $M_A$ is indeed a decider for $A_{TM}$, an undecidable language, meaning that $L$ must not be decidable. |
_unix.199727 | I have a very specific and odd problem to solve. I'm working as a research assistant and I've been producing a ton of figures. In one directory, I dump .pngs to view casually (limited space here) and in the another, I dump .ps and .pdf files to use in latex. It's all automated with matlab. In the .png folder, I've periodically deleted many files I deemed not useful, but the other one is a mess. How can I tell unix to go through the .ps directory, and for each file, search the .png directory for filenames that match, and then, if they don't match, move the file to a different directory (that I will most likely later delete)?Are there any commands that could be useful here?Unfortunately I'm a complete shell scripting noob. | Shell Scripting: Deleting or moving files from one directory that match filenames from another directory | shell;scripting;directory;filenames | null |
_computergraphics.2486 | I have been studying hardware corporation GPU profilers in recent days (Qualcomm, PowerVR, Intel). I've noticed that these tools seem to give more low-level details than the GPU profilers I have used in the past -- XCode's OpenGL ES frame capture and apitrace -- which only listed which OpenGL calls were made and what the state of current resources are.How do I get started if I want to make a low-level tool that displays things like sampler cache misses and shader assembler code? | How to get started writing a low-level GPU profiler? | gpu | For basic GPU timing data, you can use D3D timestamp queries or the equivalent OpenGL timer queries.Any low-level hardware data like cache misses is going to be extremely vendor-specific. Each GPU vendor has its own custom API or extension for giving access to low-level performance data on its hardware. The APIs vary in how they work, and they don't necessarily all expose the same details. The available data may also vary between different chip models within the same vendor, so you probably need to know a bit about how the hardware works to make sense of it.Here are links to the relevant APIs for most of the main GPU vendors.AMD: GPUPerfAPI; see also AMD_performance_monitor Intel: Performance Counter Monitor (note: it's not clear to me whether this includes access to GPU counters, or only CPU ones); see also INTEL_performance_queryNVIDIA: PerfKitPowerVR: PVRScopeQualcomm: QCOM_performance_monitor_global_mode |
_webapps.95232 | I'm having a problem with my Twitter account, it was running smoothly since 2012 but I started to notice (from over a year by now) that my tweets are not showing in the search results as well as the hashtags search not under the (Top tweets, All tweets) when I search for them from another account, however I can see them when I use my own account. To clarify this more MY ACCOUNT IS ACTING AS IF IT WAS PRIVATE, and it's not. I tried to change passwords, deactivation & reactivation but nothing worked & yes I contacted the support multiple times with no reply. So if anyone can help I appreciate it because I don't want to lose my 15k followers by creating a new account. | Tweets don't show up in search | twitter;tweet | null |
_unix.238262 | I'm using Bash 4.I'm trying out a few experiments with Bash.I want to dynamically assign an array to the value of a variable. If you read the code below, it will be easier to understand.myFunc =RESULT_PREF 'bar' 'baz'function myFunc() { local params parser params $@}function parser() { if [ '=' = ${2::1} ]; then local name=${1} shift 2 local param_array=( $@ ) # In PHP, I'd do: # $$name = $param_array # The ideal outcome: # $params should contain the values 'bar' and 'baz'. fi}How do I do what I've mentioned in the comments of my code?I can't seem to do this using eval. Even if I could, I've read so many bad things about eval.Any advice? | Dynamically assign array to value of a variable. Eval? | bash;shell script | myFunc(){ parse $1 && #parse() is a test eval shift #$1 is a valid, eval-safe name local name$1 ${1#=}='($@)' #$1 is expanded before shift}parse() case ${1#=} in ($1||[0-9]*|*[!_[:alnum:]]*) ! : #return false for all invalid names esacIt does also work without eval given another execution step or two:myFunc(){ local -a name$1[@] ${1#=}='(${@:2})' && [ -z ${1%=*} ] && printf %s\\n ${!name}}In that case I just allow the local builtin to do all of the validation and assignment at once. Basically, if the local command can assign ${1#=}=(${@#$1}) successfully and ${1%=*} is null then you know the assignment has taken place successfully. 
Any syntax error - such as bad shell names - will automatically fail and return with error during the local assignment, and the simple validation [ test ] which follows is all that is necessary to ensure that you don't accidentally do local name=morename=(${@#$1}).The natural upshot to this is that when a bad name is passed in as =$1 the shell will automatically print out a meaningful error message to stderr for you and do all of the error handling automatically with no fuss.Like this:myFunc =goodname 1 2 3 'and some ;more' &&myFunc =bad-name 1 2 3 'and some ;more'123and some ;morebash: local: `bad-name=(${@:2})': not a valid identifierNote that doing functionname(){list;} is almost definitely not what you mean to do. If you intend to localize the function's traps and similar you need functionname{list;}. If you intend to share all state but your locally defined variables with the current shell then functionname(){list;} or name()list are equivalent because the function keyword is ignored by the shell (except that some shells which implement it tend to parse the following list incorrectly if it is not contained in curly braces) when name is followed by (). |
_webapps.105871 | When I have multiple reminders in one day in Google Calendar, they are grouped in a very unhelpful way:Is there a way to ungroup the reminders list so that I can see all the reminders concurrently? | Show all reminders in Google Calendar, without grouping | google calendar reminders | null |
_unix.45961 | Recently started trying to use Vim as my full-time text editor, since I spend a lot of time SSH'ed anyway. Recently installed NERDTree so I can quickly swap between files in a project.Easy question that I can't seem to find on Google (perhaps not using the right terminology) - how do I easily switch focus between the two buffers when using NERDTree? Meaning how can I go from browsing the directory on the left to browsing the file on the right easily?Thanks. | Switching between split buffers in vim | vim;buffer | null |
_codereview.106034 | Three days ago I wrote about a Java Dice Roller I wrote. I've now added a GUI to that program. Here it is:DiceRollerGUI.java:package com.egroegnosbig.dicerollergui;import java.awt.*;import java.awt.event.*;import javax.swing.*;public class DiceRollerGUI { static JFrame frameOne = new JFrame(Dice Roller); public static void main(String[] args) { frameOne.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); DiceGUI GUI = new DiceGUI(); frameOne.add(GUI); Button b = new Button(Roll); b.addActionListener(new ButtonAction()); frameOne.add(b); frameOne.setLayout(new GridLayout(1, 2)); frameOne.setSize(400, 250); frameOne.setResizable(false); frameOne.setVisible(true); }}class ButtonAction implements ActionListener { @Override public void actionPerformed(ActionEvent e) { DiceRollerGUI.frameOne.setVisible(false); JFrame frameTwo = new JFrame(Dice Roller); frameTwo.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frameTwo.setSize(400, 250); frameTwo.setResizable(false); ResultGUI resultGUI = new ResultGUI(); frameTwo.add(resultGUI); frameTwo.setVisible(true); }}DiceGUI.java:package com.egroegnosbig.dicerollergui;import java.awt.*;import javax.swing.*;public class DiceGUI extends JPanel { @Override public void paintComponent(Graphics g) { super.paintComponent(g); this.setBackground(Color.WHITE); g.drawString(Dice Roller, 70, 20); }}ResultGUI.java:package com.egroegnosbig.dicerollergui;import java.awt.*;import javax.swing.*;public class ResultGUI extends JPanel { @Override public void paintComponent(Graphics g) { super.paintComponent(g); this.setBackground(Color.WHITE); Dice dice = new Dice(6); int resultInt = dice.roll(); StringBuilder sb = new StringBuilder(); sb.append(); sb.append(resultInt); String result = sb.toString(); g.drawString(The dice rolled a, 150, 125); g.drawString(result, 243, 125); }}Dice.java:package com.egroegnosbig.dicerollergui;import java.util.Random;public class Dice { private final Random rand; private final int faces; public Dice(int 
faces) { this.rand = new Random(); this.faces = faces; } public int roll() { return rand.nextInt(faces) + 1; }}Working on better class names... | Java Dice Roller with GUI | java;random;swing;dice;awt | Rather than swapping out the entire JFrame for another on the button click, you should simply be updating the contents. This is all very overwrought. I think all you really need is a panel with a button and a label. Click the button, and display the results in the label.Related to the above, rather than overriding paint, you should be using a layout and adding subcomponents.To get you started:public class DicePanel extends JPanel { private final Dice dice; private JButton rollButton; private JLabel displayLabel; public DicePanel(Dice dice) { this.dice = dice; rollButton = new JButton(Roll); displayLabel = new JLabel(); rollButton.addActionListener(e -> displayLabel.setText(You rolled a: + dice.roll()) ); // or if you're not using Java 8, you can do the more verbose thing. // not specifying a layout defaults to a flow layout. Set a layout via: // setLayout(new BorderLayout()); // or whatever add(rollButton); add(displayLabel); }}Your program should just create a Dice, create a DicePanel with that, and stick it in a JFrame and show it. Then play around with layouts to get something you like. |
_webmaster.303 | The goal is to make embedded youtube videos appear in video search results as part of my website.I searched very extensively how to do this; I found through google some articles that claim that it's possible, but they are old and that method no longer works. Youtube seems to have intentionally changed their website to prevent other people from including their videos in external sitemaps.To create the sitemap I would need two things:The URL of YouTube's flash player. This should be easy enough.This is the hard part: the URL of the .flv of the video. For http://www.youtube.com/watch?v=cE88ZYstEHc it is http://www.youtube.com/get_video?video_id=cE88ZYstEHc&t=vjVQa1PpcFMQCaCarYkjDrCDJyqOQ_cXrG5ulMRoDY8= . The t which is the tricky part. How can I obtain it?Also, am I doing something unethical? If I manage to do this will google and/or youtube be annoyed at me? The YouTube videos are mine. | Is there a way to add embedded youtube videos to my website sitemap? | seo;google;sitemap;video;youtube | According to the Protocol site: Sitemap file location:Note that this means that all URLs listed in the Sitemap must use the same protocol (http, in this example) and reside on the same host as the Sitemap. For instance, if the Sitemap is located at http://www.example.com/sitemap.xml, it can't include URLs from http://subdomain.example.com.URLs that are not considered valid are dropped from further consideration. It is strongly recommended that you place your Sitemap at the root directory of your web server. For example, if your web server is at example.com, then your Sitemap index file would be at http://example.com/sitemap.xml. 
In certain cases, you may need to produce different Sitemaps for different paths (e.g., if security permissions in your organization compartmentalize write access to different directories).Also: Sitemaps & Cross SubmitsTo submit Sitemaps for multiple hosts from a single host, you need to prove ownership of the host(s) for which URLs are being submitted in a Sitemap.Any YouTube urls you add to your sitemap will be considered invalid. |
_unix.280796 | I am trying to connect from my Gentoo to RHEL server. Both have mosh installed, however I get this error:petanb@localhost ~/Documents $ mosh root@server mosh-server needs a UTF-8 native locale to run.Unfortunately, the local environment ([no charset variables]) specifiesthe character set US-ASCII,The client-supplied environment ([no charset variables]) specifiesthe character set US-ASCII.LANG=LC_CTYPE=POSIXLC_NUMERIC=POSIXLC_TIME=POSIXLC_COLLATE=POSIXLC_MONETARY=POSIXLC_MESSAGES=POSIXLC_PAPER=POSIXLC_NAME=POSIXLC_ADDRESS=POSIXLC_TELEPHONE=POSIXLC_MEASUREMENT=POSIXLC_IDENTIFICATION=POSIXLC_ALL=Connection to server closed./usr/bin/mosh: Did not find mosh server startup message.On RHEL I have following locales:# localeLANG=en_US.UTF-8LC_CTYPE=en_US.UTF-8LC_NUMERIC=en_US.UTF-8LC_TIME=en_US.UTF-8LC_COLLATE=en_US.UTF-8LC_MONETARY=en_US.UTF-8LC_MESSAGES=en_US.UTF-8LC_PAPER=en_US.UTF-8LC_NAME=en_US.UTF-8LC_ADDRESS=en_US.UTF-8LC_TELEPHONE=en_US.UTF-8LC_MEASUREMENT=en_US.UTF-8LC_IDENTIFICATION=en_US.UTF-8LC_ALL=How can I fix this?UPDATE: The problem seem to be on Gentoo side, connecting to debian server produces same error, connecting using other distros works.UPDATE2: I fixed it by addingLANG=en_US.UTF-8export LANGinto ~/.bashrc | mosh-server needs a UTF-8 native locale to run | ssh;gentoo;mosh | null |
_softwareengineering.293602 | As a simple demonstration of the efficiency of Haskell style, I thoughtlessly ran the following:

take 100 [(a, b, c) | a <- [1..], b <- [1..], c <- [1..], a^2 + b^2 == c^2]

This should be a way of deriving the first 100 Pythagorean triples, with duplicates. In practice, however, it never halts, because the algorithm itself defies lazy evaluation. To think about it in terms of actual implementation, the following should be something similar to how the list comprehension is actually evaluated, in an imperative style:

results = []
for (a = 0; a < ∞; a++) {
    for (b = 0; b < ∞; b++) {
        for (c = 0; c < ∞; c++) {
            if (a^2 + b^2 == c^2) {
                results[] = [a, b, c]
            }
        }
    }
}

When written like this, it becomes obvious that the function can never yield results, because infinite time will be spent testing whether 1^2 + 1^2 == c^2, as only the innermost for loop will advance, and a and b will remain '1'. The common solution in this particular case is to constrain the values of the smallest two variables to that of the largest:

take 100 [(a, b, c) | c <- [1..], a <- [1..c], b <- [1..c], a^2 + b^2 == c^2]

However, this seems like an obvious oversight for the implementors of the language. When you think about it, any list comprehension with more than one infinite source of search space will never halt, for the same reason, except some will yield useful results when (1, 1, x) is useful. There are questions discussing this problem, but most discuss specific cases, rather than the problem overall. Why isn't fixing this within the language with a different iteration pattern trivial? | Why don't multi-infinite list comprehensions work with lazy evaluation? | functional programming;haskell;complexity | null
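As an illustrative aside (my sketch, not part of the original post), the constrained comprehension's iteration order can be mirrored with plain nested loops in Python; bounding a and b by c is exactly what guarantees the search always makes progress toward the next triple:

```python
from itertools import count

def triples(limit):
    # Mirror of the constrained Haskell comprehension
    # [(a,b,c) | c <- [1..], a <- [1..c], b <- [1..c], a^2 + b^2 == c^2]:
    # only the outer c is unbounded, so each c is fully explored in
    # finite time and the loop can move on.
    out = []
    for c in count(1):
        for a in range(1, c + 1):
            for b in range(1, c + 1):
                if a * a + b * b == c * c:
                    out.append((a, b, c))
                    if len(out) == limit:
                        return out

print(triples(4))  # [(3, 4, 5), (4, 3, 5), (6, 8, 10), (8, 6, 10)]
```

The unconstrained version corresponds to making all three loops unbounded, which never leaves the innermost loop — the same non-termination the question describes.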
_codereview.7593 | This is a subtle question about design in which I want to find the most elegant and appropriate solution. I have an enumeration representing a French card deck (see code below). With it I need to do certain things, such as displaying the elements in a table. I'll want to display the elements starting with ACE (the highest) down to deuce. The problem is that I don't want my UI part of the code to know which one is the highest and use a descending iterator. Should I have a method in my enumeration, something like EnumSet descending()? I don't want to initialise the constants with integers that indicate their values, since in poker the cards don't have a numerical value per se, just an order. Probably it's just fine if users of this enum are supposed to know that an ACE has the highest value and display values accordingly to what they want, but I'm just wondering if there is a better solution.

public enum Rank {
    DEUCE ("2"), THREE ("3"), FOUR ("4"), FIVE ("5"), SIX ("6"), SEVEN ("7"),
    EIGHT ("8"), NINE ("9"), TEN ("10"), JACK ("J"), QUEEN ("Q"), KING ("K"), ACE ("A");

    private String symbol;

    Rank (String symbol) {
        this.symbol = symbol;
    }

    /**
     * Returns a string representation of this rank which is the full name
     * where the first letter is capitalised and the rest are lowercase
     */
    @Override
    public String toString () {
        // only capitalize the first letter
        String s = super.toString();
        return s.substring (0, 1).toUpperCase() + s.substring(1).toLowerCase();
    }

    public String getSymbol () {
        return this.symbol;
    }
} | Should I design my enumeration in some way that indicates what the highest value is? | java;playing cards;enum | null
_webapps.89998 | The only emoji command I can find is https://api.slack.com/methods/emoji.list

Is there a way to programmatically create custom emoji? Or is the only way via the manual process - https://get.slack.help/hc/en-us/articles/206870177-Creating-custom-emoji | Can a Slackbot create emoji? | slack;bot | null
_webmaster.74347 | I want to publish some case studies from my customers on my website. Each case study will have real numbers about website traffic. I will also publish a website screenshot. Do you think it is enough to get permission by e-mail? | Is it enough to get permission by e-mail to show customer case study on website? | legal;permissions | null
_scicomp.4714 | I have a symmetric positive semidefinite covariance matrix $A$, which is approximately computed as the output of a quadratic regression. I then need to invert $A$, but often it is close to singular. I've reduced the problem by using scaling. That is, I create a diagonal matrix $D$, with elements $D_{ii} = 1/\sqrt{A_{ii}}$. Then

$A^{-1} = D(DAD)^{-1}D$

where $DAD$ has a lower condition number than $A$. Unfortunately, in some iterations this is not enough. The size of $A$ is quite small, say at most $50 \times 50$. I need the inverse of $A$ because I have to use it in a long calculation, where terms such as $x^TA^{-1}x$ and $A^{-1}B$ appear many times. Also: $A^{-1}$ represents a covariance matrix, so it has to be symmetric and positive definite. Is there some better way to make $A$ invertible? | Imposing invertibility on a Matrix | linear algebra;matrices;condition number;numerical | null
_unix.226951 | I created a tar archive using the following command:

tar -zcvf archive-name.tar.gz directory-name

After this operation, where is that tar.gz located? | What is the default path of newly created tar archive? | linux;tar | The file is created in the place where you executed the command. You can see your current location using pwd. However, you can also pass a path instead of just archive-name:

tar -zcvf /my/absolute/path/archive-name.tar.gz directory-name

The file will be located in /my/absolute/path. You can also use a relative path, if the directory is there:

tar -zcvf relative/path/to/pwd/archive-name.tar.gz directory-name
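As a side note (an illustration I'm adding, not part of the answer), the same rule — a relative archive name resolves against the current working directory — can be demonstrated with Python's tarfile module:

```python
import os
import tarfile
import tempfile

def archive_created_in_cwd():
    # Create a throwaway working directory, cd into it, and build the
    # archive with a relative name -- just like running
    # `tar -zcvf archive-name.tar.gz directory-name` from that directory.
    old_cwd = os.getcwd()
    with tempfile.TemporaryDirectory() as tmp:
        os.chdir(tmp)
        try:
            os.mkdir("directory-name")
            with tarfile.open("archive-name.tar.gz", "w:gz") as tar:
                tar.add("directory-name")
            # The archive lands in the directory we were in when we ran it.
            return os.path.exists(os.path.join(tmp, "archive-name.tar.gz"))
        finally:
            os.chdir(old_cwd)

print(archive_created_in_cwd())  # True
```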
_codereview.112968 | I just started to learn programming, going through a lot of different tutorials, trying out different programming languages, and I stumble on the same sort of questions all over the place. I give you a simple egg timer in bash I use for cooking.

#!/bin/bash

### variables
min=$1
sec=$2
message=$3

### run
timer=$(($min * 60 + $sec))
for i in $(seq 0 $timer)
do
    remain=$(( $timer - $i ))
    echo -ne "$(($remain / 60)):$(($remain % 60)) \r"
    sleep 1
done
echo -ne "\r"
echo $message
afplay ~/alarm.mp3

This egg timer is very simple and therefore usually reliable. I use it for cooking. Yet, when I imagine doing it on a grander scale, there would be so many questions that are negligible in such a small application. But I am not sure if they would stay negligible.

Is it a good idea to keep the additional variable timer in there? It helps readability, but wouldn't that hold another variable in my RAM? Shouldn't I better forget about readability and directly calculate it in the echo command? The same goes for the variables set for min, sec and message. Those variables are basically doubled for readability alone. I could directly use them in the script.

Using sleep to count down time. As far as I know, things like sleep, wait and pause can be off when the CPU is under heavy usage. Something probably to consider for scripts on my old Raspberry, which only has one core. Shouldn't I first calculate the time of the alarm and then use my system time to calculate the time left? Should it do that in every iteration of the loop, which has to come at least every second, or should I only do it every minute or hour?

If I check it with system time, should I rather put the checks in interlapping loops than add an if check? Feels like an interlapping loop would take less processing power than using if checks every time it runs.

While we're on the question of waiting: how do I find good timings for programs? I mean, let's assume my program needs to check something regularly. Like, a file on the computer. How do I find a good timing for it? Every .5 seconds would feel pretty responsive. Yet, am I clogging my CPU with unnecessary checks? Is there some rule of thumb, like "check 10 times more often for things in your RAM instead of things on your HD"?

Like I said, those programs won't suffer very much from such things, but I wonder right now if I build on a wrong premise when trying to learn to code without considering writing effective code, and are there any materials that you can point me to that answer these sorts of questions and hopefully a lot more questions I haven't thought of yet? | Egg timer in Bash | performance;beginner;bash;timer | Your main questions

"Is it a good idea to keep the additional variable timer in there? It helps readability, but wouldn't that hold another variable in my RAM? Shouldn't I better forget about readability and directly calculate it in the echo command? The same goes for the variables set for min, sec and message. Those variables are basically doubled for readability alone. I could directly use them in the script."

Readability is extremely important. Write programs for people, not for computers. The computer has no problem reading a program that's written ugly. But it's not the computer that might have to search for bugs or implement the next feature; that's going to be a human. Code is read far more than it's written. As far as RAM goes, the cost of an additional variable is negligible; you can safely ignore it.

"Using sleep to count down time. As far as I know, things like sleep, wait and pause can be off when the CPU is under heavy usage. Something probably to consider for scripts on my old Raspberry, which only has one core. Shouldn't I first calculate the time of the alarm and then use my system time to calculate the time left? Should it do that in every iteration of the loop, which has to come at least every second, or should I only do it every minute or hour?"

Yes, it will be more accurate to calculate the target end time, and in every iteration recalculate the remaining time to display. It's up to you how you pace the loop. Every second seems fine, with a sleep 1 like it is now.

"If I check it with system time, should I rather put the checks in interlapping loops than add an if check? Feels like an interlapping loop would take less processing power than using if checks every time it runs."

I don't really get what an interlapping loop is, but in any case, you only need one loop, something like this (see the complete implementation at the bottom):

target_time=...
while :; do
    current_time=$(date +%s)
    ((current_time >= target_time)) && break
    print_remaining_time current_time target_time
done

"While we're on the question of waiting: how do I find good timings for programs? I mean, let's assume my program needs to check something regularly. Like, a file on the computer. How do I find a good timing for it? Every .5 seconds would feel pretty responsive. Yet, am I clogging my CPU with unnecessary checks? Is there some rule of thumb, like check 10 times more often for things in your RAM instead of things on your HD?"

That's a bit too broad to answer. It can depend on your hardware and specific circumstances and requirements. And looping is not always the right thing to do. In the example you gave, waiting for a file, it's better to look for a way to listen for filesystem events, rather than a loop with a sleep.

Code review

Bash arithmetic can help you simplify a lot. For example, instead of this:

timer=$(($min * 60 + $sec))

You can write this:

((timer = min * 60 + sec))

That is, you can drop all the $ and use comfortable spacing around operators.

You did not indent the body of your for loop. This is not easy to read. It would be better this way:

for i in $(seq 0 $timer)
do
    remain=$(( $timer - $i ))
    echo -ne "$(($remain / 60)):$(($remain % 60)) \r"
    sleep 1
done

seq is not standard, and therefore not recommended. And you can easily achieve the same thing using for:

for ((i = 0; i < timer; ++i)); do

You did not validate your input. If the script is called without parameters, the behavior will be odd. It would be more friendly to display a help message to tell the user that something's wrong.

Suggested implementation

Applying some of the above suggestions (also borrowing a bit from the answer of @200_success), a cleaner, safer, more readable implementation:

#!/bin/bash

if test $# -lt 3; then
    echo "usage: $0 minutes seconds message"
    exit 1
fi

min=$1
sec=$2
message=$3

((target_time = $(date +%s) + min * 60 + sec))

while :; do
    ((current_time = $(date +%s)))
    ((current_time >= target_time)) && break
    ((remain_sec_total = target_time - current_time))
    ((remain_min = remain_sec_total / 60))
    ((remain_sec = remain_sec_total % 60))
    printf '%4d:%02d \r' $remain_min $remain_sec
    sleep 1
done

printf '%-7s\n' "$message"
_softwareengineering.318567 | I have three classes: User, Conversation and Message.

Message properties:

User sender;
// Some more

Conversation properties:

List<Message> messages;
List<User> participants;
// Some more

I want to show in the program the creator of the conversation. I wonder if it is bad programming to add a creator property to the Conversation class, so I could access it easily and the code would become more readable:

List<Message> messages;
List<User> participants;
User creator
{
    get { return participants[0]; }
}

// Use of the property:
conversationObj.creator;
// Instead of:
conversation.participants[0];

Is it considered data duplication because I could get it directly from the participants property? Or is data duplication wrong only when designing a database? Thanks. | Is data duplication bad in programming (in contrast to database designing)? | c#;object oriented | It isn't duplication of data in the sense of denormalization; all it is, is an extra accessor method. Duplication of data in the sense of denormalization or caching (another way to describe denormalization) would involve an instance field that captures participants[0]. This particular implementation, having an explicit getter and no setter, is not provided with a backing field (automatic by the language or manual by you); the getter code is executed each time. Therefore it is not duplication of data. I would equate arguments for denormalization in the database to caching in code: it is commonly done for performance, but as @Jörg says, it makes the system more error prone now and also for future maintenance. Further, caching / denormalization have storage costs and computational costs as well. So, like all optimization, caching / denormalization should be applied when we have measurable tests that demonstrate good understanding of the problem and that the solution results in improvement.
_unix.323666 | We have 300+ AIX servers. We have created a local ID for a user on all those servers. Now the user has to log in manually on all servers to check whether his credentials are working. So is there a way/script to identify whether the user is able to log in with the given user ID and password? I tried Google but did not get anything. Could you please help with this requirement? | Identify whether user is able to login with the given user ID and password | shell script;login;aix | null
_webapps.81174 | Is there any way to enlarge the label area in Google Spreadsheets? E.g. see the red rectangle. I would like to increase the label area so that the label text doesn't get cropped. | Enlarge label area in Google Spreadsheets | google spreadsheets;google spreadsheet charts | null
_codereview.79041 | We have a system with a non-standard database solution. All trips to the DB are rather expensive. We cannot use Entity Framework. Currently our lazy loading is on an entity-by-entity basis. So if I have a Customer and access their Orders object, it only loads the Orders for that customer. Something like so:

//DAL
public List<Entities.Customer> GetCustomersByIds(IEnumerable<int> ids)
{
    var customers = db.GetCustomersByIds(ids);
    foreach(var customer in customers)
    {
        customer.Orders = new ResetLazy<List<Entities.Order>>(() =>
            db.GetOrdersByCustomerId(customer.Id));
    }
    return customers;
}

ResetLazy taken from here: https://stackoverflow.com/a/6255398/102526

Ideally, if we had a collection of Customers and accessed an Orders collection on one of the customers, it would load all the Orders for all the Customers. (If not in a collection, it would just load its own orders.)

//DAL
public List<Entities.Customer> GetCustomersByIds(IEnumerable<int> ids)
{
    var customers = db.GetCustomersByIds(ids);
    foreach(var customer in customers)
    {
        customer.Orders = new ResetLazy<List<Entities.Order>>(() =>
            GetOrdersByCustomersLazy(customers, customer.Id));
    }
    return customers;
}

protected List<Entities.Order> GetOrdersByCustomersLazy(
    List<Entities.Customer> customers, int customerId)
{
    var orders = db.GetOrdersByCustomerIds(
        customers.Select(customer => customer.Id).AsEnumerable());
    foreach(var customer in customers)
    {
        customer.Orders = new ResetLazy<List<Entities.Order>>(() =>
        {
            customer.Orders = new ResetLazy<List<Entities.Order>>(() =>
                GetOrdersByCustomersLazy(customers, customer.Id));
            return orders.Where(order => order.CustomerId == customer.Id).ToList();
        });
    }
    return customers.FirstOrDefault(customer => customer.Id == customerId).Orders.Value;
}

This seems to work well, but it is too complex and not generic. | Lazy Load for multiple entities at a time | c#;database;lazy | null
_webmaster.105094 | If I am connected to a server, is it OK to just shut down FileZilla? Does FileZilla disconnect in a proper way? | Closing ftp-connection in filezilla | ftp | null
_unix.362092 | I have a first-gen Intel i5-760 with a default clock speed of 2.8 GHz, but I can set it to 3.3 in the BIOS and it runs fine with low temps. I have an Asus P7P55D-E LX motherboard. But if I set Intel SpeedStep to enabled in the BIOS, then I cannot get my CPU to go above 2.8, whereas if SpeedStep is disabled, it is locked at 3.3 or whatever I overclocked it to. I cannot change the frequency in the OS with the governor when overclocking. SpeedStep works normally when not overclocked, with the governor profiles conservative, ondemand, performance, powersave and schedutil, from 1.2 to 2.8 GHz. This is a problem because I do not want to run my CPU at overclocked speeds all the time, as I assume this will likely wear it out or is generally not good for the life of the electronics, but I do not want to have to reboot every time I need some extra processing power. I have tried everything in this post: "Can't use userspace cpufreq governor and set cpu frequency" to try and force a greater frequency than 2.8, with SpeedStep enabled or not, but with no luck. I have done the following, as stated in the post linked above:

disable the current driver: add intel_pstate=disable to your kernel boot line
boot, then load the userspace module: modprobe cpufreq_userspace
set the governor: cpupower frequency-set --governor userspace
set the frequency: cpupower --cpu all frequency-set --freq 800MHz

Also, setting /sys/module/processor/parameters/ignore_ppc to either 1 or 0 makes no difference. Does anyone know a fix for this?

EDIT: A solution to "Set Clock Frequency on a Frequency Scaling Disabled Kernel" would solve my problem, but it's over two months old and second in the list under the tag cpu-frequency. I guess the other question to ask is: will running my CPU at an overclocked frequency permanently, even when idle for hours, have a downside, provided temps are low? Will this wear out my processor? Other than saving the planet by reducing power, does disabling SpeedStep have any drawbacks? | Intel SpeedStep does not honour max CPU frequency | cpu frequency | null
_unix.122238 | I'm trying to upgrade to a newer version (that has a bug fix) than my current 1.6. I am on Ubuntu and recently upgraded to Ubuntu 13.04. Ideally I want to use tmux version 1.8 or even 1.9. I've downloaded newer versions but can't get them working. I downloaded 1.9a, but when I try to run it, it just hangs. I tried this download: http://sourceforge.net/p/tmux/tmux-code/ci/master/tree/README#l26 and did the

$ sh autogen.sh
$ ./configure && make

but I get

$ ./tmux
$ protocol version mismatch (client 8, server 6)

I tried to download and use a 1.8.4 version, but the download didn't seem to have files I could use. | protocol version mismatch (client 8, server 6) when trying to upgrade | tmux | This basically tells you that you already have an (old) tmux server running and the new tmux can't connect to it because they don't understand each other anymore. Exit all your existing tmux sessions and start a fresh one using the new version, and everything should be fine. |
_softwareengineering.254903 | My program's aim is to determine the correct time zone offset (GMT+k) as a series of numbers is fed to it, e.g.: -4+111-4-4. So, here -4 (the mode of the series) is the correct value. But this requires storing the series and then analyzing it, and I do not have enough storage (RAM and FLASH) for this. The running average can be calculated easily by storing the list size and the sum. Is there a similar approach for finding the mode? | Is there a clever way to calculate a mode of an online series without storing the series? | algorithms | null
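One illustrative approach (my sketch — this question has no accepted answer in the source): because GMT offsets come from a small, known integer range, a fixed array of counters gives a running mode in constant memory, analogous to keeping just a sum and a count for a running average. The range bounds and the sample series below are assumptions for the demo:

```python
class OffsetMode:
    # Time-zone offsets fall in a small known range (assumed GMT-12..GMT+14
    # here), so a fixed counter array is O(1) memory regardless of how many
    # samples are fed in.
    LOW, HIGH = -12, 14

    def __init__(self):
        self.counts = [0] * (self.HIGH - self.LOW + 1)

    def add(self, offset):
        self.counts[offset - self.LOW] += 1

    def mode(self):
        # Index of the largest counter, mapped back to an offset.
        best = max(range(len(self.counts)), key=lambda i: self.counts[i])
        return best + self.LOW

m = OffsetMode()
for sample in (-4, 1, 1, -4, -4):  # hypothetical input series
    m.add(sample)
print(m.mode())  # -4
```

When the domain is not bounded, streaming techniques such as the Boyer–Moore majority vote (which needs the mode to be a strict majority) are the usual constant-memory fallback.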
_scicomp.11580 | [question reposted from https://math.stackexchange.com/questions/786612/solving-a-linearly-constrained-sparse-linear-least-squares-problem]

Given the system of equations

$Ax=b$, subject to $Cx\le d$

where $A$ is an $n\times m$ matrix (with $n>m$) and is very large and sparse. As an example, $A$ can have $3126250\times 2740$ elements. Each row of $A$ has only 4 or 5 non-zero numbers, which can only be 1 or -1. I am on Matlab and I've been using LSQR, but I need the inequality constraints to impose monotonicity on $x$. Can you please advise on any solvers to do this with linear constraints? Is there any implementation in Matlab or C for this? | solving a linearly-constrained sparse linear least-squares problem | optimization;least squares;constraints;linear system | If you have access to the MATLAB optimization toolbox, then this can easily be done using the quadprog() function. You'd start by writing the objective in quadratic form as

$\| Ax - b \|_{2}^{2} = x^{T}(A^{T}A)x-2(A^{T}b)^{T}x+b^{T}b$

then multiply to get $P=A^{T}A$ and $q=-2A^{T}b$. Then your objective is

$f(x)=x^{T}Px+q^{T}x+b^{T}b$

and ready to feed into quadprog(). The $P$ matrix is only 2740 by 2740, so this isn't a very large problem from the point of view of quadprog(). I'm sure there are some free QP solvers for MATLAB if you don't have a copy of the optimization toolbox. Also note that you may want to reformulate the problem in terms of variables $z_{i}$, where

$x_{1}=z_{1}$
$x_{2}=z_{1}+z_{2}$
$\ldots$

Then you can replace your inequality constraints $Cx \leq d$ with $z \geq 0$.
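To illustrate the answer's change of variables outside MATLAB (my own sketch, not from the answer): with $x = Lz$ for a lower-triangular matrix of ones, requiring $z \geq 0$ forces $x$ to be nondecreasing (and, in this simplified form where $z_1 \geq 0$ too, nonnegative). The resulting nonnegative least-squares problem can then be handled by a basic projected gradient loop. The step size, iteration count, and dense NumPy formulation are illustrative assumptions — not a production method for a 3-million-row sparse system:

```python
import numpy as np

def monotone_lstsq(A, b, iters=5000):
    # Reparametrize x = L z with L lower-triangular ones, so that
    # x_1 = z_1, x_2 = z_1 + z_2, ...; with all z_i >= 0 (a simplifying
    # assumption) x is automatically nondecreasing.
    n = A.shape[1]
    L = np.tril(np.ones((n, n)))
    M = A @ L
    # Projected gradient descent on ||M z - b||^2 over z >= 0.
    step = 1.0 / np.linalg.norm(M, 2) ** 2  # 1 / Lipschitz constant of the gradient
    z = np.zeros(n)
    for _ in range(iters):
        grad = M.T @ (M @ z - b)
        z = np.maximum(z - step * grad, 0.0)  # project onto the feasible set
    return L @ z

# Sanity check: when the unconstrained solution is already monotone,
# the constrained solve should recover it.
A = np.eye(4)
b = np.array([1.0, 2.0, 3.0, 4.0])
x = monotone_lstsq(A, b)
print(np.all(np.diff(x) >= -1e-9), np.allclose(x, b, atol=1e-4))  # True True
```

For the sizes in the question, a sparse-aware QP or NNLS solver would be the practical choice; this sketch only demonstrates the reparametrization trick itself.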
_softwareengineering.181772 | I am trying to find the best way to validate a mobile number within a country. Currently my understanding is: users can enter whatever format they want in mobile numbers, and it's a waste of time and energy to validate them against a set of regular expressions. My application is not a critical one like a banking application, and if the user enters an invalid mobile number, it is at his own risk to get updates (like activate account / do something with the application). So I think the best way is to check the mobile number length and whether all characters are digits. I want to know the best way forward, and is there any good resource (non-scattered) to get all mobile number length validations based on a country code? | Mobile number validation | mobile;validation;logic | null
_unix.248750 | I have been trying to install Fedora 23 on a second drive inside my computer. The other system drive boots Windows 7. I used Fedora in the past (in a similar setup) and never had any issue installing it. This time, though, nothing seems to be working, and since I'm a Linux noob, I can't find a solution. The error message I get as soon as I reach the installation window is as follows: "There is a problem with your existing storage configuration: failed to scan disk sda." Also: "For some reason, we were unable to locate a disklabel on a disk that the kernel is reporting partitions on. It is unclear what the exact problem is." The drive I've reserved for Fedora is a 300 GB GPT drive. It currently is not allocated. I tried to install the OS with an install DVD and an install USB key. I formatted the drive to NTFS and then left it in its current unallocated state. The Fedora error message suggests the following solution: "There is a shell available for use which you can access by pressing ctrl-alt-F1 and then ctrl-b 2." I've done that to reach what I believe is the Anaconda shell. I then played around with basic fdisk commands (list disks mainly), but not being keen on coding I didn't pursue it too far. Any help would be very much appreciated, thanks. | Fedora 23 install woes (storage configuration + scanning disk sda failure) | linux;fedora;system installation | null
_unix.287752 | $ uname -a
Linux mypcname 3.16.0-4-686-pae #1 SMP Debian 3.16.7-ckt25-2 (2016-04-08) i686 GNU/Linux

I want to check that this ^^ Linux kernel on my PC has not been maliciously tampered with. I have no reason to suspect that it has been, but I would like to check anyway. My thinking is that comparing my kernel software with the same version that has been made publicly available by the original developers of the software should constitute a good enough check for my purposes. If you see any glaring errors in this way of thinking then please let me know. I know other malicious programs besides the kernel can run on a computer, but for the purposes of this question I am just interested in the kernel. Please note that the checks need not be done by the kernel as it runs. There is nothing preventing me from turning off my PC and booting another kernel or OS, or even taking out the hard drive and plugging it into another PC to run the checks on. But really I just want to do a quick check, so I will probably just do all checks using the existing kernel. Perfect? No. Good enough for my purposes? Certainly. I get the hashes of the kernel like so:

$ apt-cache show linux-image-3.16.0-4-686-pae
Package: linux-image-3.16.0-4-686-pae
Source: linux
Version: 3.16.7-ckt25-2
Installed-Size: 118358
Maintainer: Debian Kernel Team <[email protected]>
Architecture: i386
Provides: linux-modules-3.16.0-4-686-pae
Depends: kmod | module-init-tools, linux-base (>= 3~), debconf (>= 0.5) | debconf-2.0, initramfs-tools (>= 0.110~) | linux-initramfs-tool
Pre-Depends: debconf | debconf-2.0
Recommends: firmware-linux-free (>= 3~), irqbalance, libc6-i686
Suggests: linux-doc-3.16, debian-kernel-handbook, grub-pc | extlinux
Breaks: at (<< 3.1.12-1+squeeze1), initramfs-tools (<< 0.110~)
Description-en: Linux 3.16 for modern PCs
 The Linux kernel 3.16 and modules for use on PCs with one or more processors supporting PAE.
 .
 This kernel requires PAE (Physical Address Extension). This feature is supported by the Intel Pentium Pro/II/III/4/4M/D, Xeon, Core and Atom; AMD Geode NX, Athlon (K7), Duron, Opteron, Sempron, Turion or Phenom; Transmeta Efficeon; VIA C7; and some other processors.
 .
 This kernel also runs on a Xen hypervisor. It supports both privileged (dom0) and unprivileged (domU) operation.
Description-md5: b2c3f405aab9f0fe07863b318891f277
Homepage: https://www.kernel.org/
Section: kernel
Priority: optional
Filename: pool/main/l/linux/linux-image-3.16.0-4-686-pae_3.16.7-ckt25-2_i386.deb
Size: 33408936
MD5sum: ce730b36742b837e3990889f2d897b60
SHA1: 6f0816a4f4a2a24e7b74e9fa903dde778d825e63
SHA256: 63a59e3a09afa720ce1c9b71bb33176e943e59aa90a9e3d92100b1d3b98cd1c6

So I have a few questions:

1. Are these the hashes for the pool/main/l/linux/linux-image-3.16.0-4-686-pae_3.16.7-ckt25-2_i386.deb file?
2. Where can I find these hashes online to see if mine are correct?
3. Assuming the answer to (1) is 'yes', how can I check that the kernel that is actually running on my PC is the same as would be installed by this .deb file?

I have faced opposition to this question in the comments... I didn't post this question looking for a debate. The checks I ask for in this question are not intended to be a panacea (there is no such thing when it comes to computer security). If you have a problem with this question then please consider that the developers of the Linux kernel itself sign their software with PGP keys and have suggested:

"All kernel releases are cryptographically signed using OpenPGP-compliant signatures. Everyone is strongly encouraged to verify the integrity of downloaded kernel releases by verifying the corresponding signatures."

(source: https://www.kernel.org/signature.html)

I understand that PGP signatures perform a different function to hashing a file to check for modifications, but there are similarities from a security standpoint - especially when it comes to trusting the data output by a compromised system. Hopefully this analogy will prevent readers of this question from dismissing it offhand, as this mainly seems to have been the case so far. | check if the Linux kernel my PC runs has been maliciously modified | debian;apt;linux kernel;hashsum | null
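One concrete building block for such a check (my sketch, not an accepted answer — and note it only verifies an on-disk .deb, and a hash computed on a possibly compromised system is only as trustworthy as that system) is comparing the SHA256 field from apt-cache show against a locally computed digest:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so even a ~33 MB kernel .deb needs
    # little memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

# The value printed by `apt-cache show` above:
expected = "63a59e3a09afa720ce1c9b71bb33176e943e59aa90a9e3d92100b1d3b98cd1c6"

# Hypothetical usage -- the package file would first have to be fetched,
# e.g. with `apt-get download linux-image-3.16.0-4-686-pae`:
# print(sha256_of("linux-image-3.16.0-4-686-pae_3.16.7-ckt25-2_i386.deb") == expected)
```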
_softwareengineering.245696 | I have a problem visualising how the gap is closed between coarse-grained, n-tier boundary, high level, automated acceptance testing and lower level, task/sub-task scope Unit Testing.My motivation is to be able to take any unit test in my system and be able to follow backwards, through coverage, which scenario(s) make use of that unit test.I am familiar with the ideas of BDD: An extension of TDD and writing scenarios at the high level solution design level in the Gerkin format, and driving these scenarios using tools such as JBehave and Cucumber.Equally, I am familiar with the humble unit test at the utility or task level, and using xUnit to test through the various paths of the function.Where I begin to come unstuck, then, is when I try to imagine these two disciplines applied together in an enterprise setting.Perhaps I am going wrong because you tend to use one approach or the other?In an attempt to articulate my thoughts so far, let's assert that I am right to believe that both are used.In an agile world, user stories are broken down into tasks and sub-tasks, and in an enterprise setting, where the problems are large and the number of developers per story are many, those tasks and sub-tasks are distributed amongst a team of people with specialist problem solving skills.Some developers, for example, may specialise in mid-tier Development, develop a controller layer and continue to work downwards. 
Others may specialise in UI development and work from a controller layer upwards.Developers may therefore share implementation of a single string of tasks which, when glued back together again, will implement the story.In this case, it seems to me that cycles of TDD at the task and sub-task level is still necessary, to ensure that the efforts of these developers collaborate correctly within one layer and from one layer to the next (mocking collaborators).Working in this way implies a certain amount of high level solution design up-front, so contracts between layers are agreed and respected. Perhaps this is articulated as a sequence diagram and perhaps this emerges during the task breakdown session for the story.Yet, on the other hand, we have the value of defining the BDD acceptance tests at a higher level to configure that all of the blocks work together as expected - a form of super Integration Test, I suppose.I suppose the answer I am looking for here, aside of the simpler yes/no to the question of both, is an explanation/worked example of how I and a team of developers start out with a single n-tier scenario and end up with an implementation in an enterprise setting, where we've been able to break the tasks up into layers and share them amongst ourselves and work in parallel to turn the BDD scenario test from red to green, all the while ensuring that the the code we write is only written because the scenario calls for it, and at a fine grained level the code we have written is appropriately unit tested following cycles of TDD at the task and sub-task level as we go.If, of course that is correct? | In an enterprise setting, does one apply BDD principles alongside of, or instead of, traditional unit testing? | agile;unit testing;bdd | I think that there is a common misconception as to what BDD means, that BDD means that we are now writing our tests using tools like Cucumber, SpecFlow, etc instead of traditional unit tests. That is not the case. 
BDD is more a way of thinking that moves our focus in the tests from the technical aspects to the more business-oriented aspects. Also see this P.SE answer: https://softwareengineering.stackexchange.com/a/135246

A few tools were born out of the BDD world, e.g. Cucumber and RSpec. Cucumber is not intended as a testing tool - it is intended as a collaboration tool. It allows programmers to collaborate with the business in order to write business specifications. But you shouldn't use Cucumber because you are in an enterprise setting. You should use Cucumber because you can get the customer involved in the software development process. If you do not have the customer strongly involved, you should think twice before using a tool like Cucumber. Also read this blog post from Aslak Hellesøy (creator of Cucumber): https://cucumber.pro/blog/2014/03/03/the-worlds-most-misunderstood-collaboration-tool.html

RSpec, on the other hand, is a unit test tool, but with a strong focus on describing the requirements for your code, e.g. by using strings as your test case names instead of function names. For example:

describe "Login application service" do
  context "user has given 3 incorrect passwords already" do
    it "rejects the user when a correct password attempt is made" do
      ...
    end
  end
end

This is a test for a specific component in the system, but the focus of the tests is the business requirements that that particular component helps fulfill. If you use these two types of tool together, you get a process like this: Just picked from a random image search for "BDD Cycle". You can exchange SpecFlow with Cucumber, JBehave, TickSpec, or whatever. And you can replace MSpec with RSpec, or even traditional xUnit-type frameworks. If you are operating in a regulated industry, e.g.
medical or finance, you get the benefit that your software is both validated and verified if you use both. Collaboration tools help to make sure that your software fulfills business requirements, because the business collaborated in writing them*. Therefore your software has been validated. Unit tests make sure that each component works correctly; therefore your software has been verified. So in your case I would suggest: continue to write unit tests. Strive to focus on business requirements rather than implementation details. If, and only if, you can get the customer involved, write acceptance tests in a Cucumber style.

* This requires that the specification actually describes the business process, and not user interfaces - a mistake that many people new to Cucumber make. Also see this site for more: http://www.elabs.se/blog/15-you-re-cuking-it-wrong |
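The answer's remark that even "traditional xUnit type frameworks" can carry this behaviour-focused style can be sketched in plain Java as well. Everything below is invented for illustration (there is no LoginService in the original posts); the business rule mirrors the RSpec scenario above, and the test method name, not a string, carries the requirement:

```java
// Hypothetical component mirroring the RSpec scenario above: after 3
// incorrect passwords the account locks, so even a correct attempt is
// rejected. The class name and locking rule are assumptions, not taken
// from either post.
class LoginService {
    private static final int MAX_FAILURES = 3;
    private final String password;
    private int failures = 0;

    LoginService(String password) {
        this.password = password;
    }

    boolean attempt(String candidate) {
        if (failures >= MAX_FAILURES) return false;   // account is locked
        if (password.equals(candidate)) return true;  // correct and not locked
        failures++;                                   // record the failure
        return false;
    }

    // The method name states the business requirement, xUnit-style.
    static void rejectsCorrectPasswordAfterThreeIncorrectAttempts() {
        LoginService service = new LoginService("secret");
        for (int i = 0; i < 3; i++) {
            if (service.attempt("wrong")) throw new AssertionError();
        }
        if (service.attempt("secret")) throw new AssertionError();
    }

    public static void main(String[] args) {
        rejectsCorrectPasswordAfterThreeIncorrectAttempts();
        System.out.println("spec passed");
    }
}
```

The focus stays on the requirement (rejecting a correct password after three failures), not on implementation details, which is the shift of perspective the answer describes.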
_codereview.29090 | I was making an algorithm for a task I found in a book. It says that there is a sorted array that was rotated so it looks like 4567123, and it was proposed to use a binary search modification. Below is my solution in Java. The thing is, it makes me cringe a bit that I can't process the cases for array sizes 2 and 3 generically (for size 1 it's obvious that it'll require a specific case). Not that I need a solution that'll handle the size-2 and size-3 cases generically; I'd just like to know whether it is a problem. Also, I addressed the case where the array wasn't rotated at all, and I'm not sure that I should have done this if the preconditions are clearly specified. By the way, binary search doesn't check that its input is sorted. So what I need is not so much a code review itself, but advice: are my hardcoded cases and support for a non-rotated array good or bad? I myself believe the cases are fine, as the algorithm compares 2 values, and thus requires at least 2 values to be compared; it also searches for the peak and drop. But I'll probably try to modify it so it won't cover the non-rotated case and will work generically for arrays of size 2 and 3.

//file SwapedArraySearch.java
public class SwapedArraySearch {

    public static int search(int[] arr) {
        switch (arr.length) {
        case 1:
            return arr[0];
        case 2:
            return arr[0] > arr[1] ? arr[1] : arr[0];
        case 3:
            return arr[0] > arr[1] ? (arr[1] > arr[2] ? arr[2] : arr[1])
                                   : (arr[0] > arr[2] ? arr[2] : arr[0]);
        }
        int x = arr[0];
        int n = arr.length / 2;
        int prevn = arr[n] > x ? arr.length - 1 : 1;
        int t;
        while (prevn != n) {
            if (x < arr[n]) {
                if (arr[n] > arr[n + 1])
                    return arr[n + 1];
                t = n;
                n = n + (prevn - n) / 2;
                prevn = t;
            } else {
                if (arr[n - 1] > arr[n])
                    return arr[n];
                t = n;
                n = prevn + (n - prevn) / 2;
                prevn = t;
            }
        }
        return arr[0] > arr[1] ? arr[arr.length - 1] : arr[0];
    }

    public static void main(String...
args) {
        int[] arr = new int[args.length];
        for (int i = 0; i < args.length; i++)
            arr[i] = Integer.valueOf(args[i]);
        System.out.println(search(arr));
    }
}

EDIT: the task is to find the lowest element.

EDIT2: after much more thinking I boiled my search function down to this:

x = arr[arr.length - 1]
a = 0
b = arr.length
while b - a > 2
    if x > arr[(a+b)/2]
        b = (a+b)/2
    else
        a = (a+b)/2
return arr[(a+b)/2]

The key point of the simplification was to understand that the middle element can always be calculated as (a+b)/2. | Binary search modification for swapped array | java;algorithm | General Advice

Your methods are difficult to read because you chose to use predominantly single-character variable names. Multi-character, descriptive variable names are much easier to associate with a value, making the code orders of magnitude easier to read & maintain. Also, when implementing a binary search it is typical to use hi, lo & mid as variable names.

Whenever you nest ternary statements, this should be an immediate red flag that you are not writing clear & maintainable code. As soon as you write a nested ternary statement I would advise you to pause, consider what functionality you are trying to achieve, and consider different ways to express that functionality. I'm not going to say that nested ternary statements are always a sign that you're doing something wrong, but they are always a sign you should stop and think about what you're doing.

Algorithmic Advice

I am assuming the algorithm is locating the largest element in a semi-sorted array that has two (and only two) distinctly sorted sequences, with a pivot index containing the largest value in the array. The three conditions in your switch case are generally doing the same thing. Each case is returning the largest element in the array, but can't use the loop because the array is too small.
We can cover these conditions in the final return.

You should also check for general error cases, such as null or the empty set (new int[0]) being passed into the function.

Since checking whether the current value of the binary search is the pivot index takes a fair amount of logic to do correctly, I would create a method isPivotIndex().

Putting all that together, your solution could look something like this.

Java Code: (ideone example link)

int findLargestValueInSemiSortedArray(int arr[]) {
    if (arr == null || arr.length == 0)
        throw new IllegalArgumentException();

    int hi = arr.length; /* exclusive upper bound */
    int low = 0;         /* inclusive lower bound */
    int mid;

    while (low < hi) {
        mid = low / 2 + hi / 2;
        if (isPivotIndex(arr, mid))
            return arr[mid];
        if (arr[hi - 1] < arr[mid])
            low = mid + 1;
        else
            hi = mid;
    }

    /* Array was actually sorted, either ascending or descending */
    /* Note that this return covers the corner cases of arr.length <= 3 */
    return max(arr[0], arr[arr.length - 1]);
}

int max(final int a, final int b) {
    return (a > b) ? a : b;
}

boolean isPivotIndex(final int[] arr, final int index) {
    int b = arr[index];
    int a = (index > 0) ? arr[index - 1] : Integer.MAX_VALUE;
    int c = (index < arr.length - 1) ? arr[index + 1] : Integer.MAX_VALUE;
    return a <= b && b > c;
}

The above is much more readable, has fewer lines of code, and is easier to check for correctness. It immediately throws an exception on invalid input. It uses the isPivotIndex() method to isolate complex logic & maintain a single level of abstraction. It uses a clever final return to handle corner cases.

Let's consider how the final return covers the corner cases in your original switch statement.

Case 1: (arr.length == 1) max(arr[0], arr[arr.length-1]) will compare the same values and return arr[0].

Case 2: (arr.length == 2) max(arr[0], arr[arr.length-1]) will evaluate exactly the same as your ternary statement.

Case 3: (arr.length == 3) max(arr[0], arr[arr.length-1]) will return the larger of arr[0] and arr[2].
If arr[1] happened to be the max value, it would have been the pivot index in the binary search loop and the method would have already returned. |
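The question's EDIT says the real task is the lowest element. As a hedged companion to the review, here is a textbook variant (this is neither the answer's code above nor the OP's): binary-search for the index of the minimum by comparing the midpoint against the current right end, then take the element cyclically before it to get the largest. The harness cross-checks both against the known answers for every rotation of 1..n:

```java
class RotatedMinCheck {

    // Canonical binary search for the index of the minimum in a rotated
    // ascending array with distinct elements (inclusive bounds).
    static int minIndex(int[] arr) {
        if (arr == null || arr.length == 0) throw new IllegalArgumentException();
        int lo = 0, hi = arr.length - 1;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;          // avoids overflow, unlike (lo + hi) / 2
            if (arr[mid] > arr[hi]) lo = mid + 1;  // the drop is right of mid
            else hi = mid;                         // the minimum is at mid or left of it
        }
        return lo;
    }

    // The largest element sits immediately before the smallest, cyclically.
    static int largest(int[] arr) {
        int n = arr.length;
        return arr[(minIndex(arr) - 1 + n) % n];
    }

    public static void main(String[] args) {
        // Exhaustively check every rotation of 1..n.
        for (int n = 1; n <= 12; n++) {
            for (int shift = 0; shift < n; shift++) {
                int[] arr = new int[n];
                for (int i = 0; i < n; i++) arr[i] = (i + shift) % n + 1;
                if (arr[minIndex(arr)] != 1 || largest(arr) != n)
                    throw new AssertionError("n=" + n + " shift=" + shift);
            }
        }
        System.out.println("all rotations OK");
    }
}
```

Exhaustively checking all rotations like this is a cheap way to gain confidence in any variant of this search before worrying about style.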
_unix.14378 | What criteria distinguish various distributions of Linux, such as Debian, Ubuntu, Fedora, OpenSUSE? In other words, given a release of a Linux OS, what features mean it is classified into one distribution and not another? I heard that different distributions are grouped differently, for example Debian-based, Gentoo-based, RPM-based, Slackware-based. I was wondering what criteria are used for the grouping. Within a distribution, what distinguishes different releases? For example, within Ubuntu, Ubuntu 10.04 and 10.10. As far as the concepts of release and distribution are concerned, is Windows 7 more of a counterpart of the Ubuntu distribution or of Ubuntu 10.10? Is the Windows NT family more of a counterpart of Ubuntu or of Debian-based Linux OSes? Thanks and regards! | Classification of Linux distributions | linux;distros | From the Linux distributions Wikipedia entry: "A Linux distribution is a member of the family of Unix-like operating systems built on top of the Linux kernel." Such distributions (often called "distros" for short) are operating systems including a large collection of software applications such as word processors, spreadsheets, media players, and database applications. What distinguishes them is the hardware they support, packaging, kernel patches, what set and versions of applications they ship, their documentation, install methods, etc. Other classifications are whether they are more oriented towards end users or servers. Some of the distributions (Debian, Gentoo, Fedora and others) are used as a starting point for other distributions (Ubuntu is derived from Debian, for instance). That means that the creators of, for instance, Sabayon Linux used a Gentoo distribution to start their development effort, and keep track of Gentoo's evolution to some extent. You can look at the Distrowatch search page for examples of this kind. RPM-based distributions are a different classification: RPM is a package management system, not a distribution.
Some distributions use it (RedHat and SUSE come to mind) directly or via one of its frontends. Others use different systems (pacman for Arch, portage for Gentoo). The package management system is one of the important differences between distributions.

Regarding versions, there are no strict criteria. The distribution developers/managers decide what versions/patches/new software they want to include in a new version, polish it, and when it's ready, they ship it. There isn't a consistent versioning scheme across distributions.

For your last question I'm not sure I understand, but you could say that Windows NT, 2000, XP, 2003/Vista, 2008 and Windows 7 are versions of the Windows distribution. And they are all in the Windows NT family of Windows releases. So if you want to draw a parallel with Linux distributions, yes, each Windows release is closer to a version of a Linux distribution. And the Windows NT lineage is equivalent to the RedHat or SUSE lineage, for instance. (One of the similarities of these lineages is that there is usually a major revision of the kernel between Windows releases, and that's also the case for a lot of Linux distros.) |
_softwareengineering.3317 | What's the difference in this terminology? Is one considered more professional than the other? | What's the difference between a developer and a programmer? | skills | null |
_cs.43682 | Is there an efficient algorithm to visit/enumerate all unique connected subgraphs of a labelled graph? E.g., when the graph is a path, $v_1v_2\dots v_N$, there are $N(N+1)/2$ unique connected subgraphs: for any $1\leq i\leq j\leq N$, the subgraph $v_i\dots v_j$. For graphs with denser edge structure, the number of unique connected subgraphs can be as high as $2^{|V|}$ (for cliques). Fortunately, the graphs I'm interested in have a lot of articulation points, so the total number of unique connected subsets should be polynomial in the size of the input. For those who are interested, I'm modeling a problem in protein mass spectrometry. Starting from a protein (a chain of amino acids, with occasional long-distance bonds due to disulphide bonds), I'd like to generate a database of all possible sub-species that may result from breaks in peptide and/or disulphide bonds. | Enumerating connected subgraphs | algorithms;graphs;enumeration | null
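The path-graph count is easy to check by brute force: enumerate every non-empty vertex subset as a bitmask and keep those whose induced subgraph is connected. This sketch is exponential, so it only works for small graphs; it is a way to validate counts, not the efficient algorithm being asked for:

```java
import java.util.ArrayDeque;

class ConnectedSubgraphCount {

    // Count non-empty vertex subsets that induce a connected subgraph.
    // adj[i][j] == true iff vertices i and j are adjacent; n should stay small.
    static int countConnectedSubsets(boolean[][] adj) {
        int n = adj.length, count = 0;
        for (int mask = 1; mask < (1 << n); mask++)
            if (isConnected(adj, mask)) count++;
        return count;
    }

    // BFS restricted to the vertices selected by 'mask'.
    static boolean isConnected(boolean[][] adj, int mask) {
        int n = adj.length;
        int start = Integer.numberOfTrailingZeros(mask);
        int seen = 1 << start;
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        queue.add(start);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            for (int u = 0; u < n; u++)
                if (adj[v][u] && (mask & (1 << u)) != 0 && (seen & (1 << u)) == 0) {
                    seen |= 1 << u;
                    queue.add(u);
                }
        }
        return seen == mask;  // every selected vertex was reached
    }

    public static void main(String[] args) {
        // A path v1..vN has N(N+1)/2 connected subgraphs: pick 1 <= i <= j <= N
        // (single vertices count as connected subgraphs here).
        for (int n = 1; n <= 8; n++) {
            boolean[][] path = new boolean[n][n];
            for (int i = 0; i + 1 < n; i++) path[i][i + 1] = path[i + 1][i] = true;
            if (countConnectedSubsets(path) != n * (n + 1) / 2)
                throw new AssertionError("n=" + n);
        }
        System.out.println("path counts match N(N+1)/2");
    }
}
```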
_softwareengineering.299934 | I'm using JavaScript, but the question can be generalized to all languages. In a nutshell - I'm checking whether a browser connecting to my site is a microwave and catering for that accordingly. What would be the best way to structure my code? "Best" way meaning most readable, most maintainable (insert your own metric here)...

Option 1.0

var iammicrowave = /(microwave)/.test(navigator.userAgent);
if (iammicrowave) {
  var settings = { blah : 42 };
  magicFunction(settings);
} else {
  magicFunction();
}

Option 1.1

var iammicrowave = /(microwave)/.test(navigator.userAgent);
if (iammicrowave) {
  magicFunction({ blah : 42 });
} else {
  magicFunction();
}

Option 2.0

var iammicrowave = /(microwave)/.test(navigator.userAgent);
var settings;
if (iammicrowave) {
  settings = { blah : 42 };
}
magicFunction(settings);

Option 2.1

var iammicrowave = /(microwave)/.test(navigator.userAgent);
var settings = iammicrowave ? { blah : 42 } : undefined;
magicFunction(settings);

Option 2.1.1

var iammicrowave = /(microwave)/.test(navigator.userAgent);
var settings = iammicrowave ? { blah : 42 } : {};
magicFunction(settings);

Option 2.2

var iammicrowave = /(microwave)/.test(navigator.userAgent);
magicFunction(iammicrowave ? { blah : 42 } : {});

Option 2.3

magicFunction(/(microwave)/.test(navigator.userAgent) ? { blah : 42 } : {});

Many thanks! | What is the most readable way of passing arguments to the function? | javascript;coding style;code reviews;source code;clean code | null
_webmaster.21803 | I am running some speed tests on a blog, and I always get complaints about unused CSS. But this is not CSS that I never use; it is just not used on that particular page. Now I work in a structured way, but there still has to be some CSS in the file that will not be used, because you need it on another page. I do not think that using different CSS files on different pages is the way to go; I think you are much better off just creating one big file that can be cached. Now, is there an elegant way of dealing with this, or do you just stick with it? | How to go about unused CSS issues | css;optimization | Your assertion that you are better off with one bigger CSS file is correct. It will likely be only a few KB when gzipped, and should be cached, so not a huge overhead. There are a few things worth checking though.

If some CSS is only ever used on one page, it may be better in that case to put the CSS on the page, in some style tags. (Note: this can make things difficult to maintain, especially when you later decide to use a similar style elsewhere.)

If you take your most popular pages (for example the pages making up 50%+ of your page views) and find that only a very small amount of your CSS is being used on those pages, it may be faster for users to split it into two CSS files. Now, new users visiting your most popular pages have much less to download. On other pages there is one extra HTTP request, but that's not a huge deal.

Make sure your CSS is well optimized. Avoid descendant selectors where possible. If the right-hand side of a selector is too generic then it can slow down rendering time. For example .class div {} would be a little slow because the browser has to check every <div> element on the page, then look up the DOM tree to the very top to find (or not) an element with the class.
_cstheory.18658 | The task is to prove that (0+1)* and 0*(1.0*)* are equivalent.

1. http://rubular.com/r/K9Hp9tU6px
2. http://rubular.com/r/N8VpoEcch4

EDIT: Forgot that + was ambiguous here! I want to prove that the second expression accepts all binary strings, without constructing the equivalent DFAs manually. Induction comes to mind, but I am probably missing something crucial. Can any of you recommend a few good methods that I can use here? Also, relevant identities are welcome. As a generalization of my question here, does a generic pattern for proving "a language is accepted by R1 if and only if it is accepted by R2" exist? | Can I show algebraically that this regular expression accepts all binary strings? | automata theory;regular expressions | The identity $(x + y)^* = x^*(yx^*)^*$ is a classical identity of regular expressions, but it is a nontrivial problem to find a complete set of identities for regular expressions. An infinite complete set was proposed by John Conway and this conjecture was ultimately proved by D. Krob.
_unix.353115 | I have a file in the following format with millions of rowsKABC XXX 111 222KDEF XXX 123 456KGHI XXX 567 890KABC XXX 124 267KDEF XXX 190 478KGHI XXX 095 609KABC XXX 001 902KDEF XXX 013 986KGHI XXX 792 001etcThere are many more rows but this is just for simplicity. How can I have just the unique identifiers printed? For exampleKABCKDEFKGHI | using grep to count unique Identifiers with word boundary | text processing;grep | cut -d' ' -f1 /path/to/file | sort -uorawk '! data[$1] { print $1; data[$1]=seen }' /path/to/file |
_unix.7312 | This script uses sed to change all to new stuff. How would one change just the after the yyy: using sed or anything else?cat >sample.txt <<EOFxxx:yyy:}EOFsed --expression='s//new stuff/' sample.txt | Locate postion then make a change using sed | sed | I might not understand your question. If you want to replace ONLY the value after 'yyy' then use the previous answer. If you want to replace ANY values after 'yyy', try this one-liner: sed --expression='/yyy/,$ s//new stuff/g' sample.txtHaven't tested it :D... |
_unix.160231 | Consider a directory with typical Microsoft Windows filenames:New Document.txtFoo.docFoo - Copy.docI'd like to do something on each file such as:for sendfile in $(find ${POLLDIR} -type f -mmin +1 -print0)do echo \${sendfile}\ ls -l ${sendfile} and-so-on if success_above then mv ${sendfile} ${donedir}/. fidonePlease note that I don't want to just run 1 command with ${sendfile} as the argument. I need the loop to do error checking and other things (like moving ${sendfile} on success and logging on failure).What is a correct construct to escape/quote the filenames from find so I can use them in a for look like in the ls command above? If possible, I'd like to avoid storing the filenames in a temp file one by one.I don't think that find -printf '%p\n' as suggested by triplee in the comments to question [ How do I use find when the filename contains spaces? ] would work in a for foo in $(...) do construct.I think replacing illegal chars with ? would work for me in this case, but it would be very ugly. The for loop ultimately processes files in ${POLLDIR} and then moves them when done, so the chance of Foo bar.txt colliding with Foo-bar.txt is 0 (-ish).My best attempt so far is:for sendfile in $(find ${POLLDIR} -type f -mmin +1 -print | tr ' ' '?')do ...doneAny cleaner suggestions? | shell for loop with find with filenames containing spaces | shell script;find;filenames;whitespace | Use find ... -print0 | while IFS= read -d '' construct:find ${POLLDIR} -type f -mmin +1 -print0 | while IFS= read -r -d '' -r sendfile; do echo ${sendfile} ls -l ${sendfile} and-so-on if success_above then mv ${sendfile} ${donedir}/. fidoneThe -d '' sets the end of line character to \0 which is what separates each file found by find -print0 and the IFS= is needed to also work with files that contain newlines. The -r ensures that backslashes do not escape characters (so that \t for example matches an actual backslash followed by a t and not a tab). |
_unix.331213 | I've been having trouble booting with Fedora 25 kernel versions greater than 4.5.5-300. I have already tried modifying the SELinux policy as suggested in answers to similar questions, but with no success. Output of journalctl -xb -p3:

-- Logs begin at dom 2016-10-09 16:14:16 CDT, end at dom 2016-12-18 06:30:04 CST. --
dic 18 05:59:10 localhost.localdomain kernel: kvm: disabled by bios
dic 18 05:59:11 localhost.localdomain kernel: Support for cores revisions 0x17 and 0x18 disabled by module param allhwsupport=0. Try b43.allhwsupport=
dic 18 05:59:20 localhost.localdomain avahi-daemon[916]: chroot.c: open() failed: No such file or directory
dic 18 05:59:42 localhost.localdomain kernel: brcmsmac bcma0:1: brcms_ops_bss_info_changed: qos enabled: false (implement)
dic 18 05:59:42 localhost.localdomain kernel: brcmsmac bcma0:1: brcms_ops_config: change power-save mode: false (implement)
dic 18 05:59:46 localhost.localdomain kernel: brcmsmac bcma0:1: brcmsmac: brcms_ops_bss_info_changed: associated
dic 18 05:59:46 localhost.localdomain kernel: brcmsmac bcma0:1: brcms_ops_bss_info_changed: qos enabled: true (implement)
dic 18 05:59:53 localhost.localdomain kernel: brcmsmac bcma0:1: brcms_ops_bss_info_changed: arp filtering: 1 addresses (implement)
dic 18 06:00:20 localhost.localdomain spice-vdagent[1323]: Cannot access vdagent virtio channel /dev/virtio-ports/com.redhat.spice.0
dic 18 06:03:05 localhost.localdomain kernel: [drm:radeon_cs_ioctl [radeon]] *ERROR* Invalid command stream !
dic 18 06:03:18 localhost.localdomain spice-vdagent[1706]: Cannot access vdagent virtio channel /dev/virtio-ports/com.redhat.spice.0
dic 18 06:03:35 localhost.localdomain pulseaudio[1606]: [pulseaudio] bluez5-util.c: GetManagedObjects() failed: org.freedesktop.DBus.Error.NoReply: Did not receive a reply.
Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

Output of sestatus:

SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 30

Any ideas on how to fix this? | fedora 25 won't boot after update | fedora;boot | null
_codereview.93901 | Go is a board game (it looks like 5-in-a-row and plays like chess) and I tried to program it in Java. Rules: Two players take turns placing stones on a board. The goal is to capture territory. Stones can be captured (removed from the game) if they have no unoccupied adjacent tiles horizontally/vertically. Adjacent stones (horizontally/vertically) of one color are combined into chains, which share unoccupied tiles. If you're interested, here's more. My code: I implemented the core rules and basic input/output. My goal is to make the game scalable so I can add functionality without breaking everything. My concerns: (main concern) I don't think it's expandable (enough). I had to redo (nearly) everything from scratch because it was a complete mess. It works, but I think that if I add more game mechanisms everything will break. What can I do to avoid this? Is Main an acceptable name for a class? Or should it be Game/App or something completely different? What about my comments?

Main

package go;

import java.awt.BorderLayout;
import java.awt.Color;
import javax.swing.BorderFactory;
import javax.swing.JFrame;
import javax.swing.JPanel;

/**
 * Builds UI and starts the game.
 *
 */
public class Main {

    public static final String TITLE = "";
    public static final int BORDER_SIZE = 25;

    public static void main(String[] args) {
        new Main().init();
    }

    private void init() {
        JFrame f = new JFrame();
        f.setTitle(TITLE);
        JPanel container = new JPanel();
        container.setBackground(Color.GRAY);
        container.setLayout(new BorderLayout());
        f.add(container);
        container.setBorder(BorderFactory.createEmptyBorder(BORDER_SIZE, BORDER_SIZE, BORDER_SIZE, BORDER_SIZE));
        GameBoard board = new GameBoard();
        container.add(board);
        f.pack();
        f.setResizable(false);
        f.setLocationByPlatform(true);
        f.setVisible(true);
    }
}

GameBoard

package go;

import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Point;
import java.awt.RenderingHints;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JPanel;

/**
 * Provides I/O.
 */
public class GameBoard extends JPanel {

    private static final long serialVersionUID = -494530433694385328L;

    /**
     * Number of rows/columns.
     */
    public static final int SIZE = 9;

    /**
     * Number of tiles in row/column. (Size - 1)
     */
    public static final int N_OF_TILES = SIZE - 1;

    public static final int TILE_SIZE = 40;
    public static final int BORDER_SIZE = TILE_SIZE;

    /**
     * Black/white player/stone
     */
    public enum State {
        BLACK, WHITE
    }

    private State current_player;
    private Grid grid;
    private Point lastMove;

    public GameBoard() {
        this.setBackground(Color.ORANGE);
        grid = new Grid(SIZE);
        // Black always starts
        current_player = State.BLACK;
        this.addMouseListener(new MouseAdapter() {
            @Override
            public void mouseReleased(MouseEvent e) {
                // Converts to float for float division and then rounds to
                // provide nearest intersection.
                int row = Math.round((float) (e.getY() - BORDER_SIZE) / TILE_SIZE);
                int col = Math.round((float) (e.getX() - BORDER_SIZE) / TILE_SIZE);
                // DEBUG INFO
                // System.out.println(String.format("y: %d, x: %d", row, col));
                // Check whether it's valid
                if (row >= SIZE || col >= SIZE || row < 0 || col < 0) {
                    return;
                }
                if (grid.isOccupied(row, col)) {
                    return;
                }
                grid.addStone(row, col, current_player);
                lastMove = new Point(col, row);
                // Switch current player
                if (current_player == State.BLACK) {
                    current_player = State.WHITE;
                } else {
                    current_player = State.BLACK;
                }
                repaint();
            }
        });
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        g2.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
        g2.setColor(Color.BLACK);
        // Draw rows.
        for (int i = 0; i < SIZE; i++) {
            g2.drawLine(BORDER_SIZE, i * TILE_SIZE + BORDER_SIZE,
                    TILE_SIZE * N_OF_TILES + BORDER_SIZE, i * TILE_SIZE + BORDER_SIZE);
        }
        // Draw columns.
        for (int i = 0; i < SIZE; i++) {
            g2.drawLine(i * TILE_SIZE + BORDER_SIZE, BORDER_SIZE,
                    i * TILE_SIZE + BORDER_SIZE, TILE_SIZE * N_OF_TILES + BORDER_SIZE);
        }
        // Iterate over intersections
        for (int row = 0; row < SIZE; row++) {
            for (int col = 0; col < SIZE; col++) {
                State state = grid.getState(row, col);
                if (state != null) {
                    if (state == State.BLACK) {
                        g2.setColor(Color.BLACK);
                    } else {
                        g2.setColor(Color.WHITE);
                    }
                    g2.fillOval(col * TILE_SIZE + BORDER_SIZE - TILE_SIZE / 2,
                            row * TILE_SIZE + BORDER_SIZE - TILE_SIZE / 2, TILE_SIZE, TILE_SIZE);
                }
            }
        }
        // Highlight last move
        if (lastMove != null) {
            g2.setColor(Color.RED);
            g2.drawOval(lastMove.x * TILE_SIZE + BORDER_SIZE - TILE_SIZE / 2,
                    lastMove.y * TILE_SIZE + BORDER_SIZE - TILE_SIZE / 2, TILE_SIZE, TILE_SIZE);
        }
    }

    @Override
    public Dimension getPreferredSize() {
        return new Dimension(N_OF_TILES * TILE_SIZE + BORDER_SIZE * 2,
                N_OF_TILES * TILE_SIZE + BORDER_SIZE * 2);
    }
}

Grid

package go;

import go.GameBoard.State;

/**
 * Provides game logic.
 *
 */
public class Grid {

    private final int SIZE;

    /**
     * [row][column]
     */
    private Stone[][] stones;

    public Grid(int size) {
        SIZE = size;
        stones = new Stone[SIZE][SIZE];
    }

    /**
     * Adds Stone to Grid.
     *
     * @param row
     * @param col
     * @param state
     */
    public void addStone(int row, int col, State state) {
        Stone newStone = new Stone(row, col, state);
        stones[row][col] = newStone;
        // Check neighbors
        Stone[] neighbors = new Stone[4];
        // Don't check outside the board
        if (row > 0) {
            neighbors[0] = stones[row - 1][col];
        }
        if (row < SIZE - 1) {
            neighbors[1] = stones[row + 1][col];
        }
        if (col > 1) {
            neighbors[2] = stones[row][col - 1];
        }
        if (col < SIZE - 1) {
            neighbors[3] = stones[row][col + 1];
        }
        // Prepare Chain for this new Stone
        Chain finalChain = new Chain(newStone.state);
        for (Stone neighbor : neighbors) {
            // Do nothing if no adjacent Stone
            if (neighbor == null) {
                continue;
            }
            newStone.liberties--;
            neighbor.liberties--;
            // If it's a different color than newStone, check it
            if (neighbor.state != newStone.state) {
                checkStone(neighbor);
                continue;
            }
            if (neighbor.chain != null) {
                finalChain.join(neighbor.chain);
            }
        }
        finalChain.addStone(newStone);
    }

    /**
     * Check liberties of Stone
     *
     * @param stone
     */
    public void checkStone(Stone stone) {
        // Every Stone is part of a Chain so we check total liberties
        if (stone.chain.getLiberties() == 0) {
            for (Stone s : stone.chain.stones) {
                s.chain = null;
                stones[s.row][s.col] = null;
            }
        }
    }

    /**
     * Returns true if given position is occupied by any stone
     *
     * @param row
     * @param col
     * @return true if given position is occupied
     */
    public boolean isOccupied(int row, int col) {
        return stones[row][col] != null;
    }

    /**
     * Returns State (black/white) of given position or null if it's unoccupied.
     * Needs valid row and column.
     *
     * @param row
     * @param col
     * @return
     */
    public State getState(int row, int col) {
        Stone stone = stones[row][col];
        if (stone == null) {
            return null;
        } else {
            // System.out.println("getState != null");
            return stone.state;
        }
    }
}

Chain

package go;

import go.GameBoard.State;
import java.util.ArrayList;

/**
 * A collection of adjacent Stone(s).
 */
public class Chain {

    public ArrayList<Stone> stones;
    public State state;

    public Chain(State state) {
        stones = new ArrayList<>();
    }

    public int getLiberties() {
        int total = 0;
        for (Stone stone : stones) {
            total += stone.liberties;
        }
        return total;
    }

    public void addStone(Stone stone) {
        stone.chain = this;
        stones.add(stone);
    }

    public void join(Chain chain) {
        for (Stone stone : chain.stones) {
            addStone(stone);
        }
    }
}

Stone

package go;

import go.GameBoard.State;

/**
 * Basic game element.
 */
public class Stone {

    public Chain chain;
    public State state;
    public int liberties;
    // Row and col are needed to remove (set to null) this Stone from Grid
    public int row;
    public int col;

    public Stone(int row, int col, State state) {
        chain = null;
        this.state = state;
        liberties = 4;
        this.row = row;
        this.col = col;
    }
} | Go (board game) in Java | java;object oriented;game | You don't cleanly separate different concerns:

GameBoard contains both game logic (e.g. who is next to move, preventing playing on an occupied point, etc.) and UI logic.

Converting between graphical coordinates and in-game coordinates should be done using a single function for each direction, and not interleaved with the drawing or click logic.

Finding neighbours should be its own function. It should never return null elements in the returned collection, but rather a collection containing fewer Points if the center Point is at the edge of the board.

Some of your code uses the GameBoard.SIZE constant. Other code uses the Grid.SIZE instance field.
GameBoard.SIZE should either be eliminated or only used a single time, when passing it to the constructor of the board.

Some other issues:

State is a rather vague name; how about StoneColor?

Ko doesn't get handled. I'd recommend having a collection/set of points-in-ko on the game state. This has two advantages over a single nullable Point: you don't need to handle null as a special case, and it generalizes nicely to super-ko, where multiple points can be in ko at the same time.

Suicide doesn't get handled either. Depending on how you implemented ko, explicitly handling single-stone suicide might not be necessary, since it results in an unchanged board. Multiple-stone suicide is illegal under most rules, but allowed under others. If you want to support multiple rule sets, you should describe them in a Rules object (including scoring, ko and suicide).

An alternative design

This is based on my experience writing a go program in C#. It focuses on clean design, sacrificing some performance. But features that need extreme performance, mainly bots, need specialized data structures anyway, so I don't see that as a problem.
Chains are only necessary to determine which stones will be captured; you can compute them on the fly for the neighbours of the point you're playing on. To compute chains, start at a point and recursively add all neighbouring points of the same colour, eliminating duplicates.
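A minimal Python sketch of that on-the-fly flood fill (hypothetical board representation: a dict mapping (row, col) tuples to a colour string; all names here are illustrative, not from the original Java):

```python
def neighbours(point, size):
    # Orthogonally adjacent points that lie on a size-by-size board.
    r, c = point
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(nr, nc) for nr, nc in candidates if 0 <= nr < size and 0 <= nc < size]

def chain_at(board, start, size):
    # Flood fill: collect all stones connected to `start` with its colour.
    colour = board[start]
    seen = {start}
    todo = [start]
    while todo:
        point = todo.pop()
        for n in neighbours(point, size):
            if n not in seen and board.get(n) == colour:
                seen.add(n)
                todo.append(n)
    return seen

def liberties(board, chain, size):
    # Empty points adjacent to any stone of the chain; the set
    # eliminates duplicates automatically.
    return {n for point in chain for n in neighbours(point, size) if n not in board}
```

chain_at walks same-coloured neighbours from a starting stone; liberties collects the empty points adjacent to the whole chain, so a chain is captured exactly when this set becomes empty.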
_webmaster.11356 | Possible Duplicate: How to find web hosting that meets my requirements?

I live in Denmark day-to-day and need to find a web host that can keep up at the world level. If anyone is familiar with hosts that can meet the requirements below, please let me know.

Overall
Clustered Web
24/7 Expert Helpdesk and Server Monitoring
Linux Operating System
Unlimited Subdomains
50 MySQL Databases
FTP and FTPS Access
Fast connectivity to any destination, worldwide (stable/low DNS, TTFB and similar)

Management
Full DNS Management
Web Mail Access
phpMyAdmin
Cronjobs

Mail
IMAP/POP3/SMTP
Mail Auto Responders
Catch-All Mailbox
Microsoft Exchange Enabled

Apache
Python and Perl CGI Support
Secure Server (SSL)
mod_rewrite
Full .htaccess Support

PHP
PHP v4.4 & 5.2
ImageMagick and GD
cURL
RTF, POWERPOINT, EXCEL, WORD and PDF parser
Zip Utility
xhprof | Clustered Web host | web hosting;looking for hosting | Media Temple (http://mediatemple.net/) or Rackspace (http://www.rackspace.com/) should have a solution for you. Of course, everything depends on how much you would like to pay.
_webmaster.71032 | Forgive me if this has been asked; all I found was "SEO: h1 with text vs. h1 with bg image and hidden text", which wasn't the same question.

Will search engines pick up and index (in their image searches) pictures that are inserted into websites using:

background: url(/images/filename.ext) no-repeat;

And should I include that image in my XML sitemap, or will web crawlers think it's lying because they may not find said image in the markup?

I've been implementing Schema as well. Should I use Schema itemprop='image' inside the div that I'm putting the image in as background? | How does background: url(image.ext) work from an SEO standpoint? | seo;css;sitemap;images | Google does not treat CSS content the same as that on the page

Generally Google will only attempt to index content that is actually embedded within the page content, associated with an appropriate tag such as <img>. You can, however, attempt to force Google's hand by adding the path of the background image into an image sitemap.

Some Schema markups require more than just the standard itemprop

Valid Schema involves marking up actual content, not content used by a template. Using the itemprop image will require a marked-up URL, which in this case you don't have.

CSS backgrounds are not considered page content

You're attempting something that shouldn't be done, for various reasons. If the content is valuable to your users, then the correct usage would be to include the content within the designed elements, not via a background url. General UI elements are useless for indexing in Google's image search. If you want the image to be indexed without forcing Google's hand, there are several ways it can be done. A messy example would be to use position:fixed, among many other ways this can be done... see my mess-around fiddle as an example.
_softwareengineering.291355 | I'm reading a guide for back-propagation of a neural net, and it gives the error function as:

Next work out the error for neuron B. The error is "What you want - What you actually get", in other words:

ErrorB = OutputB (1 - OutputB)(TargetB - OutputB)

But then, underneath that it says:

The Output(1 - Output) term is necessary in the equation because of the Sigmoid Function; if we were only using a threshold neuron it would just be (Target - Output).

I am in fact using the sigmoid function for my net, but I'd like to write things to be as general as possible. I thought of trying something like this:

public List<Double> calcError(List<Double> expectedOutput, UnaryOperator<Double> derivative) {
    List<Double> actualOutput = outputLayer.getActivations();
    List<Double> errors = new ArrayList<Double>();
    for (int i = 0; i < actualOutput.size(); i++) {
        errors.add(
            derivative.apply(actualOutput.get(i)) * (expectedOutput.get(i) - actualOutput.get(i))
        );
    }
    return errors;
}

Where I allow the caller to pass in the derivative (I think that's what it's called), so the error can be calculated for any activation function.

The problem is, I haven't learned calculus yet, and don't even know if this makes sense. Can I have the derivative passed in like this and still have it give accurate results? Or will the error calculation change depending on the activation function/derivative used?

I wasn't sure if this should be put in the Math SE, as it contains code. | Is it appropriate to pass in a derivative to calculate the error of a Neural Net? | java;calculus | Yes you can pass in the derivative, however, it must be the derivative of your activation function. N.b.
the network will fail if the derivative isn't actually the derivative of the activation function.

The options you have are:

statically define your activation function and its derivative
statically define a list of activation functions that can be selected (and the derivative selected automatically)
parametrize the activation function and its derivative as a pair
parametrize the activation function, and then use some method to calculate the derivative automatically (n.b. some neural network libraries use this approach)

Since rolling your own neural network code really only makes sense as a learning exercise, any of the methods are appropriate. A practical library would want to prevent users from easily misconfiguring the network.

N.b. I assume a 'threshold' neuron, mentioned in your quote, refers to a neuron with a rectified linear activation function. The derivative in this case is 1, unless the output is 0, in which case it is 0. So leaving out the derivative is the same as using it, in that particular case.
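A minimal sketch of the "parametrize the activation function and its derivative as a pair" option, in Python rather than the question's Java (all names are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative_from_output(out):
    # The logistic sigmoid's derivative can be written using the output
    # alone: f'(x) = f(x) * (1 - f(x)), which is the Output(1 - Output) term.
    return out * (1.0 - out)

def relu_derivative_from_output(out):
    # Rectified linear unit: slope 1 on positive outputs, 0 otherwise.
    return 1.0 if out > 0 else 0.0

def calc_errors(expected, actual, derivative):
    # Generic Error = f'(output) * (target - output); the caller supplies
    # the derivative that matches its activation function.
    return [derivative(a) * (e - a) for e, a in zip(expected, actual)]
```

With the rectified-linear derivative the multiplier is 1 on positive outputs, so the error reduces to (target - output), matching the quoted remark about threshold neurons.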
_webmaster.10659 | I'm looking for a free lightweight mobile webmail. Something like squirrelmail but for mobile phones.roberto | free lightweight mobile webmail | email;mobile;looking for a script;webmail | null |
_softwareengineering.262557 | I need an algorithm that distributes identical items across a list, maximizing the distance between occurrences.

E.g. I have a list of 15 items:

{a,b,c,c,c,d,e,f,f,f,g,h,h,i,j}

The algorithm should reorder these in such a way that all the duplicates are spread as uniformly as possible. The mentioned list should result in something like this:

{c,f,a,h,b,c,d,e,c,f,g,c,h,i,j,f}

Preferably I'd like pseudocode, and even better would be T-SQL (since that is the platform it needs to run on). It needs to process hundreds of these lists in one go. I also tested a proposed method called 'weighted shuffle' but this will still allow two of the same items in the list to appear next to each other even when this is not needed. | An algorithm that spreads similar items across a list | algorithms;sql;vb.net;visual basic;tsql | First, make sure there is a solution according to your requirements (that means, there is no single letter which occurs more than ceil(n/2) times, where n is the total number of elements).

Then I suggest you try the following:

start with a random shuffle or weighted shuffle
afterwards, for each remaining pair of similar neighbours, pick one of the items, pick another randomly chosen item among those with different neighbours, and switch their places
repeat the last step until all pairs are removed

This approach will just make sure you get no neighboured pairs, but it does not maximize the possible distances between similar letters. If you want to achieve the latter (which is not clear from your question), I suggest you introduce a score function for your list, for example like this:

Score(list) := Sum over pairs (a,b) of 1/(abs(a-b) - 0.999)

where the sum goes over all pairs (a,b) of positions of equal letters. The -0.999 in the denominator makes sure the whole expression will become very big when there are 2 equal neighbours. Now, you can apply random swaps to your list and try to minimize the score function, for example by hill climbing or simulated annealing.
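For comparison, a deterministic construction (my substitution, not the answer's randomised swaps): the standard trick of placing values by descending frequency into even positions first, then odd positions, which succeeds exactly when no value occurs more than ceil(n/2) times:

```python
from collections import Counter

def spread_items(items):
    # Fill even slots first with the most frequent values, then odd slots.
    # Succeeds exactly when no value occurs more than ceil(n/2) times.
    n = len(items)
    counts = Counter(items)
    if max(counts.values()) > (n + 1) // 2:
        raise ValueError("no adjacency-free arrangement exists")
    # All copies of a value are contiguous in `ordered`, so they land on
    # slots that are two apart in the output, never adjacent.
    ordered = [value for value, count in counts.most_common() for _ in range(count)]
    slots = list(range(0, n, 2)) + list(range(1, n, 2))
    result = [None] * n
    for slot, value in zip(slots, ordered):
        result[slot] = value
    return result
```

This guarantees no two equal neighbours but, like the swap repair, makes no attempt to maximise distances; for that, the score-minimisation idea still applies.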
_codereview.146207 | In short, the input is an ordered list of numbers, and we have to sum the numbers from index a to b quickly. For example, assume array[] = {1, 2, 3}. Please sum from array[0] to array[2] (1 + 2 + 3).

Because the input array can have as many as 200,000 elements, I used a Fenwick tree to save time, but the time limit is still exceeded on UVa. Here is the problem on UVa. Which part can I improve?

I built:

lowbit(): return the low bit for the Fenwick tree
create(): create the Fenwick tree
update(): update the Fenwick tree
sum(): return the sum from the Fenwick tree

Update: added Arrays.fill(FT, 0); to initialize the Fenwick tree; create() now uses update(); added array[x] = y; to update the actual array.

import java.util.*;

public class Main {
    static int[] FT = new int[200001];    /* store fenwick tree */
    static int[] array = new int[200001]; /* store input data */
    static int N;                         /* size of input data */

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int testCase = 0; /* not important, just for UVa output format */
        int x, y;
        while ((N = in.nextInt()) > 0) {
            Arrays.fill(FT, 0); /* Initialize Fenwick tree */
            for (int i = 1; i <= N; i++) { /* Initialize input array */
                array[i] = in.nextInt();
            }
            create();
            System.out.printf("Case %d:\n", ++testCase);
            String act;
            while (!(act = in.next()).equals("END")) { /* receive action: print sum, update data or END */
                if (act.equals("M")) { /* to print sum from x to y */
                    x = in.nextInt();
                    y = in.nextInt();
                    System.out.println(sum(y) - sum(x - 1));
                } else { /* to update array[x] to y, also the fenwick tree */
                    x = in.nextInt();
                    y = in.nextInt();
                    update(x, y - array[x]);
                    array[x] = y;
                }
            }
        }
    }

    public static void create() {
        for (int i = 1; i <= N; i++) {
            update(i, array[i]);
        }
    }

    public static void update(int i, int delta) {
        for (int j = i; j <= N; j += lowbit(j)) {
            FT[j] += delta;
        }
    }

    public static int sum(int k) {
        int ans = 0;
        for (int i = k; i > 0; i -= lowbit(i)) {
            ans += FT[i];
        }
        return ans;
    }

    public static int lowbit(int k) {
        return -k & k;
    }
}
| Summing integers in stream using Fenwick tree | java;programming challenge;time limit exceeded | Incorrect creation of fenwick tree

Your function create() is both incorrect and slow. You had the correct function update() but you never used it. Instead you tried to create the fenwick tree yourself but used an \$O(n^2)\$ algorithm to do so. The correct way would have been just:

public static void create() {
    for (int i = 1; i <= N; i++) {
        update(i, array[i]);
    }
}

This would have created the fenwick tree in \$O(n \log n)\$ time.

Bug

Another problem you have is when you update the array. You do correctly update the fenwick tree, but you forgot to update the actual array. In other words, after this line:

update(x, y - array[x]);

you need this line:

array[x] = y;
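For reference, the same corrected structure as a hypothetical Python port (1-based indices, as in the Java):

```python
def lowbit(k):
    return k & -k

class Fenwick:
    def __init__(self, values):
        # Build in O(n log n) by calling update once per element,
        # exactly what the corrected create() does.
        self.n = len(values)
        self.tree = [0] * (self.n + 1)
        for i, v in enumerate(values, start=1):
            self.update(i, v)

    def update(self, i, delta):
        while i <= self.n:
            self.tree[i] += delta
            i += lowbit(i)

    def prefix_sum(self, k):
        total = 0
        while k > 0:
            total += self.tree[k]
            k -= lowbit(k)
        return total

    def range_sum(self, lo, hi):
        # Inclusive 1-based range: equivalent to sum(hi) - sum(lo - 1).
        return self.prefix_sum(hi) - self.prefix_sum(lo - 1)
```

A point update passes the delta (new value minus old value), mirroring update(x, y - array[x]) followed by array[x] = y in the Java.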
_unix.22348 | I need to find all links in a PDF file, along with the page they're on and their X/Y position. Is there any tool or combination of tools I can use to do that? | Find links and their positions in a PDF | command line;software rec;pdf | null |
_softwareengineering.331021 | We have a huge project that desperately needs to be broken apart into multiple databases and applications. I can think of 2 possible approaches to this problem:

1. Use a REST endpoint on each service to fetch/update data. Services that are dependent on each other simply call each other's REST endpoints to fetch/update data. (This makes the most sense to me, and it seems to have a good separation of concerns in terms of what data each service is supposed to own.)

2. Use queues to pass entity-updated messages between services and replicate data that was updated between all the services that need the data.

My lead really, really wants to go with the 2nd option because he thinks it will be more cost effective and resilient: using this approach, we don't need to scale up all the dependent services of a particular microservice because data is replicated, and we can also fetch necessary data even if a dependent microservice is down (again because data is replicated). This kind of makes sense to me; however, my gut is telling me that this is a bad idea because replication bugs make me nervous, and it seems complicated to implement properly (lots of possible ways data could get out of sync, and it doesn't seem trivial to manually re-sync all data that's necessary to be used in multiple services). I've never worked with such an architecture as mentioned in #2, so I could be completely off base here, which is why I'm posting here and asking for advice.

What would you all recommend? #1 or #2? Something else entirely?

Thanks in advance! | Which approach should I use to split a monolithic application into several microservices? | architecture;web services;microservices | The preferred approach will depend on several factors.
At my company, we use both approaches (using Hermes rather than a queue for asynchronous communication) since they both have their advantages in certain situations.

The main factor of interest is what requirements you have on data freshness for a certain pair of services. Basically, asynchronous communication (such as using a queue) has the benefit of causing less coupling between services and potentially allowing better throughput, since you can schedule data transfer at a convenient time, repeat failed transmissions as many times as you like, and so on. The downside is that things happen asynchronously, so there is always some lag involved as one service sees stale data before it gets an update from the other. There may also be considerable disk and CPU resources required if you want to duplicate a large database in each service.

Also, take into account that in a distributed system, things as simple as making a copy of data in another place get tricky. If you have one instance of application A and one instance of application B, and everything happens synchronously, you can make and keep in sync a true copy of all data from service A to service B. But when you have N instances of A talking to M instances of B, and, to make matters worse, communication is asynchronous, it's really hard to keep an up-to-date copy. For example, instance A1 may update a document, then instance A2 also updates the same document, and now you have two update events racing to reach instances B1 and B2, which may receive and apply them in any order. It's a big topic, but the takeaway is that complexity grows a lot.

So, here are a few concrete examples of when using each of the approaches may be the better choice:

You have an authorization service with information about users and their access rights. Here, REST is preferred: you want any change (such as removing a user's permissions) to be active immediately. For security reasons you also don't want to make any copies of users' personal information or credentials.

A service displays advertising for certain products your web shop is selling. Each product's ad contains a name, a short description and a link URL. Assuming the number of advertised products is not huge, making a copy via asynchronous communication may be preferable. Products are not updated very often, and if they are, adding a delay of a few minutes before the change is visible in the ads is acceptable. Your ad server may index ads in some special format tuned for performance of ad serving, and the services are loosely coupled: your ad service is not increasing the load on your main product database as much as if it were using REST requests.

Your shop's listing pages show the products you sell. Here, you probably want to query the product database service directly via REST. When you edit a product, you want the change to be visible on your shop's page as soon as possible. The whole product database is probably quite big, so duplicating it in the frontend service would be a large hardware cost.

As you see, there are good use cases for both approaches, and in a large system you probably have to settle on using a hybrid approach of using #1 or #2 depending on the situation.
_softwareengineering.267469 | The tutorials I find on setting up OS X for web developer programming (installing Xcode, Ruby, JS, CocoaPods, SQL, C, asm, etc.) leave out whether it should be done from an admin account or a standard (managed) user.

Some methods I've seen used: set up the dev environment from an admin account, then change the admin to a managed user. Or install the dev environment from a managed user, using sudo or otherwise editing settings to get things to install.

What's the consensus for using Terminal from a managed and parental-controlled account? What's the correct procedural way to set up a fresh 10.10 install? Thanks. | Developer account in OSX, admin or managed user with or without sudo access? | programming practices;security | null
_reverseengineering.6299 | I disassembled my own code written in C to PowerPC assembly, and I can't understand why crclr occurs before the call to the printf function.

C code

int main()
{
    int a, b, c;
    a = 10;
    b = 2;
    c = a * b;
    printf("%d", c);
    return 0;
}

PowerPC assembly code

stwu    r1, -0x10(r1)
mflr    r0
stw     r0, 0x14(r1)
lis     r3, unk_38@ha
addi    r3, r3, unk_38@l
li      r4, 0x14
crclr   4*cr1+eq
bl      printf
li      r3, 0
lwz     r0, 0x14(r1)
mtlr    r0
addi    r1, r1, 0x10
blr

unk_38: .byte 0x25 # %
        .byte 0x64 # d

Could anyone please tell me why? Thanks in advance :) | Why does the PowerPC compiler emit a crclr instruction before calling a function? | disassembly;powerpc | null
_unix.372541 | I have the following in a script:

yes >/dev/null &
pid=$!
echo $pid
sleep 2
kill -INT $pid
sleep 2
ps aux | grep yes

When I run it, the output shows that yes is still running by the end of the script. However, if I run the commands interactively then the process terminates successfully, as in the following:

> yes >/dev/null &
[1] 9967
> kill -INT 9967
> ps aux | grep yes
sean ... 0:00 grep yes

Why does SIGINT terminate the process in the interactive instance but not in the scripted instance?

EDIT

Here's some supplementary information that may help to diagnose the issue. I wrote the following Go program to simulate the above script.

package main

import (
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    yes := exec.Command("yes")
    if err := yes.Start(); err != nil {
        die("%v", err)
    }
    time.Sleep(time.Second * 2)
    kill := exec.Command("kill", "-INT", fmt.Sprintf("%d", yes.Process.Pid))
    if err := kill.Run(); err != nil {
        die("%v", err)
    }
    time.Sleep(time.Second * 2)
    out, err := exec.Command("bash", "-c", "ps aux | grep yes").CombinedOutput()
    if err != nil {
        die("%v", err)
    }
    fmt.Println(string(out))
}

func die(msg string, args ...interface{}) {
    fmt.Fprintf(os.Stderr, msg+"\n", args...)
    os.Exit(1)
}

I built it as main, and running ./main in a script, and running ./main and ./main & interactively, give the same, following, output:

sean ... 0:01 [yes] <defunct>
sean ... 0:00 bash -c ps aux | grep yes
sean ... 0:00 grep yes

However, running ./main & in a script gives the following:

sean ... 0:03 yes
sean ... 0:00 bash -c ps aux | grep yes
sean ... 0:00 grep yes

This makes me believe that the difference has less to do with Bash's own job control, though I'm running all of this in a Bash shell. | Why doesn't SIGINT work on a background process in a script? | shell script;process;background process;signals;interrupt | What shell is used is a concern, as different shells handle job control differently (and job control is complicated; job.c in bash presently weighs in at 3,300 lines of C according to cloc).
pdksh 5.2.14 versus bash 3.2 on Mac OS X 10.11, for instance, show:

$ cat code
pkill yes
yes >/dev/null &
pid=$!
echo $pid
sleep 2
kill -INT $pid
sleep 2
pgrep yes
$ bash code
38643
38643
$ ksh code
38650
$

Also relevant here is that yes performs no signal handling, so it inherits whatever there is to be inherited from the parent shell process; if by contrast we do perform signal handling

$ cat sighandlingcode
perl -e '$SIG{INT} = sub { die "ouch\n" }; sleep 5' &
pid=$!
sleep 2
kill -INT $pid
$ bash sighandlingcode
ouch
$ ksh sighandlingcode
ouch
$

the SIGINT is triggered regardless of the parent shell, as perl here, unlike yes, has changed the signal handling. There are system calls relevant to signal handling which can be observed with things like DTrace, or here strace on Linux:

-bash-4.2$ cat code
pkill yes
yes >/dev/null &
pid=$!
echo $pid
sleep 2
kill -INT $pid
sleep 2
pgrep yes
pkill yes
-bash-4.2$ rm foo*; strace -o foo -ff bash code
21899
21899
code: line 9: 21899 Terminated              yes > /dev/null
-bash-4.2$

We find that the yes process ends up with SIGINT ignored:

-bash-4.2$ egrep 'exec.*yes' foo.21*
foo.21898:execve("/usr/bin/pkill", ["pkill", "yes"], [/* 24 vars */]) = 0
foo.21899:execve("/usr/bin/yes", ["yes"], [/* 24 vars */]) = 0
foo.21903:execve("/usr/bin/pgrep", ["pgrep", "yes"], [/* 24 vars */]) = 0
foo.21904:execve("/usr/bin/pkill", ["pkill", "yes"], [/* 24 vars */]) = 0
-bash-4.2$ grep INT foo.21899
rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, 8) = 0
rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, 8) = 0
rt_sigaction(SIGINT, {SIG_IGN, [], SA_RESTORER, 0x7f18ebee0250}, {SIG_DFL, [], SA_RESTORER, 0x7f18ebee0250}, 8) = 0
--- SIGINT {si_signo=SIGINT, si_code=SI_USER, si_pid=21897, si_uid=1000} ---
-bash-4.2$

Repeat this test with the perl code and one should see that SIGINT is not ignored, or also that under pdksh there is no ignore being set as there is in bash.
With monitor mode turned on, like it is in interactive mode in bash, yes is killed.

-bash-4.2$ cat monitorcode
#!/bin/bash
set -m
pkill yes
yes >/dev/null &
pid=$!
echo $pid
sleep 2
kill -INT $pid
sleep 2
pgrep yes
pkill yes
-bash-4.2$ ./monitorcode
22117
[1]+  Interrupt               yes > /dev/null
-bash-4.2$
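The disposition-inheritance mechanism behind this can also be demonstrated from Python on a POSIX system (a hypothetical demo, not from the answer): an ignored signal stays ignored across exec, so a child started with SIGINT set to SIG_IGN shrugs the signal off, while one with the default disposition dies.

```python
import signal
import subprocess
import time

def start_sleeper(ignore_sigint):
    # preexec_fn runs in the child between fork and exec, the same window
    # in which a non-interactive shell sets SIGINT to SIG_IGN for
    # background jobs; ignored dispositions survive exec, handlers do not.
    pre = None
    if ignore_sigint:
        pre = lambda: signal.signal(signal.SIGINT, signal.SIG_IGN)
    return subprocess.Popen(["sleep", "30"], preexec_fn=pre)

def survives_sigint(ignore_sigint):
    proc = start_sleeper(ignore_sigint)
    time.sleep(0.2)                      # let the child reach exec
    proc.send_signal(signal.SIGINT)
    time.sleep(0.2)                      # let the signal be delivered
    alive = proc.poll() is None
    if alive:
        proc.kill()
    proc.wait()
    return alive
```

This mirrors the strace output above: the shell's rt_sigaction(SIGINT, {SIG_IGN, ...}) before exec is what makes the background yes immune.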
_unix.331576 | I am on a CentOS guest OS in VirtualBox. I need to connect to this server through SSH using its public IP. More importantly, I need to connect to this server from within the guest itself.

I have tried port forwarding with NAT, as some forum suggested, but I think I am lost there. The SSH output is just:

[root@localhost ~]# ssh [email protected]
ssh: connect to host xx.xxx.xxx.xxx port 22: Connection refused

I am able to connect to the server through the host machine. | SSH to a server from inside a VM | ssh;networking;virtualbox | null
_cs.13698 | I have a problem \$\Pi_1\$ that I want to show is NP-hard. I know that I must find an NP-hard problem \$\Pi_2\$ and a polynomial-time reduction \$f()\$ from instances of \$\Pi_2\$ to \$\Pi_1\$ such that \$I_2\$ is a Yes-instance of \$\Pi_2\$ iff \$I_1=f(I_2)\$ is a Yes-instance of \$\Pi_1\$.

What if I find a (constant-sized) family of reductions \$f_i()\$ such that \$I_2\$ is a Yes-instance of \$\Pi_2\$ iff at least one \$f_i(I_2)\$ is a Yes-instance of \$\Pi_1\$? Is this enough? Is there a way of translating this into the classical definition? How do I formalize this?

I know that in the second situation I can say that I can't solve \$\Pi_1\$ in polynomial time unless P=NP, but I'm not sure that is equivalent to saying that \$\Pi_1\$ is NP-hard. | NP-Hardness reduction | complexity theory;reductions;np hard | null
_cs.38374 | What is the best book to gain an introductory understanding of approximation algorithms? I'm looking for something along the lines of the Sedgewick, that has examples written in a well-known language and not pseudocode. | Reference for approximation algorithms | algorithms;approximation | null
_unix.71911 | First off, I'm new to Linux. I have created a new user called ginny with the -M option. Now I'm trying to assign a home directory with usermod -d /link/to/directory ginny, but it doesn't assign a home directory to user ginny.

% su - ginny returns an error:

su: warning: cannot change directory to /abc: No such file or directory
-bash-3.2$

The pwd command returns /root for user ginny (maybe because I haven't created her a home directory), but is there any switch that can now allow me to assign a home directory to user ginny? | adding home directory after -M option | centos;useradd | The user's home directory doesn't exist. usermod changes the home directory field in /etc/passwd but doesn't create the directory. You need to create it manually.

cp -a /etc/skel /link/to/directory   # or mkdir /link/to/directory to create an empty home directory
chown ginny:ginnygroup /link/to/directory   # where ginnygroup is ginny's primary group
chmod 755 /link/to/directory   # or 711 or 700 or 751 or 750 as desired
_codereview.127679 | I needed a deque whose maxlen can be set again after it has been initialized. So, since new-style classes in Python are treated as types, I decided to create a new class borrowing from deque to accomplish this. My approach is to have an internal deque as an attribute, and when maxlen is set, it replaces the internal deque with a new one, initialized to the old contents and the new maxlen.

I tried subclassing deque, but it's too crude, creating a deque that's useless on top with the useful one internally. So I chose to simply have all of the wrapper class's attributes (except special ones) point to the internal deque's. However, due to how new-style classes work, the special attributes have to be handled manually. Since all I want is to override maxlen's setter, all of this seems inelegant, and I'm wondering if there's a cleaner way of accomplishing this.

From what I've read here, it seems I could subclass this class in __new__ to skip overriding the special attributes, but that seems even more hairy than what I already wrote.

This is the result, stripped of extraneous comments (complete code here if you want something runnable, with tests):

# -*- coding: utf-8 -*-
from __future__ import print_function
from collections import deque


class ResizableDeque(object):

    def __init__(self, *args, **kwargs):
        self.internal = deque(*args, **kwargs)
        skip_list = ['maxlen'] + [attr for attr in dir(deque)
                                  if attr.startswith('__') and attr.endswith('__')]
        for attr in dir(deque):
            if attr not in skip_list:
                setattr(self, attr, getattr(self.internal, attr))

    @property
    def maxlen(self):
        return self.internal.maxlen

    @maxlen.setter
    def maxlen(self, value):
        templist = list(self.internal)
        self.internal = deque(templist, value)

    def __str__(self):
        return self.internal.__str__()

    def __repr__(self):
        return self.internal.__repr__()

    def __getitem__(self, value):
        return self.internal.__getitem__(value)

    def __setitem__(self, index, value):
        return self.internal.__setitem__(index, value)

    # these have not been tested
    def __copy__(self):
        return self.internal.__copy__()

    def __delitem__(self, index):
        return self.internal.__delitem__(index)

    def __iadd__(self, other):
        return self.internal.__iadd__(other)

    def __len__(self):
        return self.internal.__len__()

    # not sure if overriding __sizeof__ is wise this way
    def __sizeof__(self):
        return self.__sizeof__() + self.internal.__sizeof__()

    # pretty sure this is ok
    def __format__(self, spec):
        return self.internal.__format__(spec) | Transparent wrapper class for data structure in Python | python;object oriented | null
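The core idea, replacing the internal deque to change maxlen, can be reduced to a much smaller illustrative sketch (dropping the attribute copying; the names here are mine, not the poster's):

```python
from collections import deque

class MiniResizableDeque:
    def __init__(self, iterable=(), maxlen=None):
        self._d = deque(iterable, maxlen)

    @property
    def maxlen(self):
        return self._d.maxlen

    @maxlen.setter
    def maxlen(self, value):
        # Rebuild: deque(iterable, maxlen) keeps the *last* maxlen items,
        # so shrinking discards from the left end.
        self._d = deque(self._d, value)

    def append(self, x):
        self._d.append(x)

    def __iter__(self):
        return iter(self._d)

    def __len__(self):
        return len(self._d)
```

This makes the setter's behaviour explicit: shrinking maxlen drops the oldest elements, exactly as initializing a fresh deque with the old contents would.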
_codereview.166176 | Okay, this is my method:

public function edit(Request $request, $ent, $room, $obj)
{
    $input = $request->except(['_token']);

    Enterprise::where('bedrijfsnaam', $ent)->first()
        ->rooms()->where('name', $room)->first()
        ->objects()->where('name', $obj)->first()
        ->update($input);

    return redirect('/enterprise/'.$ent.'/room/'.$room);
}

As you can see, at Enterprise::where() I have a really long relation chain, but it doesn't feel right to have one that long. Is it okay to have one this large, or is there a better way to do it? | Laravel - super long relation, it doesn't feel right | php;laravel;eloquent | I suggest you figure out a way to view the execution history for all of the SQL statements across your application. This can be an absolute god-send when trying to untangle or optimize your queries.

I'd recommend using https://github.com/barryvdh/laravel-debugbar . It has a lot of other features as well but presents everything very neatly. There are ways to view the history with custom event listeners and such (https://stackoverflow.com/a/27753889/3224736) but the extra effort for debugbar would be worth it, IMO.

As for your specific query, I believe it'd compile into four separate queries: three for each time you call first() and one for the update.

Eloquent does allow defining distant relationships with properties like hasManyThrough (https://laravel.com/docs/5.2/eloquent-relationships#has-many-through) but they can get as messy as manually querying the relationships. I'd give it a try, at least.

The other option is to use joins for the three selects. Something like

Enterprise::where('bedrijfsnaam', $ent)
    ->join('rooms', function ($join) {
        $join->on('enterprise.id', '=', 'rooms.enterprise_id')
             ->where('rooms.name', '=', $room);
    })
    ->join('objects', function ($join) {
        $join->on('rooms.id', '=', 'objects.room_id')
             ->where('objects.name', '=', $obj);
    })->first();

(That's just me guessing at your schema, though.
You can modify that as needed).

Using one big join would compile into one query, meaning one round trip to the database. Just be sure you have proper indexes on your columns, or the query could get bogged down.

Or you can investigate eager loading and lazy eager loading your relationships. That would at least reduce the apparent complexity of your query. But those are most useful if you anticipate iterating the relationships, rather than just selecting one of them. They wouldn't likely do much to speed up your current query.

As complete tangents, I'd recommend investigating two items:

Make sure your models define fillable arrays. Without them, passing the entire $input into it, even if you try to manually prune it, can lead to nasty security/integrity holes.

See if you can use a route or action instead of a raw redirect. Laravel is built to use MVC through-and-through, so you might as well learn how to use it early on, rather than patch all your manual stuff later.
_unix.175012 | recently found subj on FreeBSD 10.x out of the box (with KDE4):[user@fbsd10] /home/user% suPassword:su: Sorry[user@fbsd10] /home/user% sudo cshPassword:Sorry, user user is not allowed to execute '/bin/csh' as root on fbsd10.[user@fbsd10] /home/user% pkexec csh==== AUTHENTICATING FOR org.freedesktop.policykit.exec ===Authentication is needed to run `/bin/csh' as the super userAuthenticating as: userPassword: [entered users password]==== AUTHENTICATION COMPLETE ===[root@fbsd10] ~# iduid=0(root) gid=0(wheel) groups=0(wheel),5(operator)This file looks like the cause of such behavior:/usr/local/share/polkit-1/actions/org.freedesktop.policykit.policyAm I doing something wrong? | KDE4 PolicyKit backdoor(misconfiguration) (tested on FBSD10x,PCBSD, probably will work on linux) | freebsd;kde;policykit;vulnerability | null |
_webmaster.101095 | My web application is running on a Node server on Azure. I need to implement the If-Modified-Since header on this server. Could you please guide me on how to do this?

I tried implementing it by setting the header in the server.js file, but apparently it is not working. | How do I implement If-Modified-Since header on a node server? | http headers;cache control;node js | null
_unix.165820 | I am writing a long-running program in the Linux environment that will be calculating entries in a large table. Every time it comes to the end of the row, it outputs the calculated values into a plaintext file.To avoid having to continually reopen the file and append to it, I am considering just opening the file once and holding it open for the duration of the program. I know that there is a limit on the maximum number of file descriptors that can be open at once, but is there a time limit on a single file descriptor being held for an extended period of time?Note: The process I am running could potentially take a month or more to complete. | Maximum time a file descriptor can be held | files | null |
_unix.239274 | I don't have control over the installation of debian, it's a pre-built debian 7 image provisioned by the VPS provider. It consumes about 6.5GB of disk space 'out of the box.'Do you think it's possible to get this install down below 500MB of disk space? It's on an OpenVZ host. Very few services are needed (pretty much SSH only).There is a discussion about removing components from debian, but it isn't clear what the net change in disk space will be: https://wiki.debian.org/ReduceDebianThe VPS provider also has CentOS and Ubuntu images. I haven't tried them; I'd assume their disk space utilization is similar.It's a grandfathered, very cheap VPS plan. As such, attempting to reduce the OS's consumed disk space for my application might be worthwhile (instead of buying a more expensive tier with more storage).Thank you for your insight. | What's the minimal disk usage of stripped-down Debian 7 installation on a VPS? | debian;disk usage;vps | null |
_unix.359759 | I know this is a common problem for new users in Debian [I'm using Jessie]. I have tried following the steps on the Debian wiki here to install Flash after adding the nonfree repos with no luck. Here is my sources files, which, I'll admit, is a bit of a mess:#deb cdrom:[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170116-23:46]/ jessie main #deb cdrom:[Debian GNU/Linux 8 _Jessie_ - Official Snapshot amd64 LIVE/INSTALL Binary 20170116-23:46]/ jessie main deb http://ftp.uk.debian.org/debian/ jessie main contrib non-freedeb-src http://ftp.uk.debian.org/debian/ jessie main contrib non-free#deb http://security.debian.org/ jessie/updates main #deb-src http://security.debian.org/ jessie/updates main # jessie-updates, previously known as 'volatile'deb http://ftp.uk.debian.org/debian/ jessie-updates main deb-src http://ftp.uk.debian.org/debian/ jessie-updates main deb http://ftp.debian.org/debian/ jessie main non-free contrib deb-src http://ftp.debian.org/debian/ jessie main non-free contrib deb http://ftp.debian.org/debian/ jessie-updates main contrib non-free deb-src http://ftp.debian.org/debian/ jessie-updates main contrib non-free deb http://security.debian.org/ jessie/updates main contrib non-freedeb-src http://security.debian.org/ jessie/updates main contrib non-free deb http://deb.opera.com/opera-stable/ stable non-free #deb http://httpredir.debian.org/debian/ jessie main #deb-src http://httpredir.debian.org/debian/ jessie main #deb http://httpredir.debian.org/debian/ jessie-updates main #deb-src http://httpredir.debian.org/debian/ jessie-updates main #deb http://security.debian.org/ jessie/updates main #deb-src http://security.debian.org/ jessie/updates main deb http://httpredir.debian.org/debian/ jessie main contrib non-free deb-src http://httpredir.debian.org/debian/ jessie main contrib non-free deb http://httpredir.debian.org/debian/ jessie-updates main contrib non-free deb-src http://httpredir.debian.org/debian/ 
jessie-updates main contrib non-free deb http://ppa.launchpad.net/no1wantdthisname/ppa/ubuntu trusty maindeb-src http://ppa.launchpad.net/no1wantdthisname/ppa/ubuntu trusty maindeb http://http.debian.net/debian jessie-backports main contribNow I tried installing pepperflash from the guide here. The output of apt-cache policy flashplugin-nonfree givesflashplugin-nonfree: Installed: 1:3.6.1+deb8u1 Candidate: 1:3.6.1+deb8u1 Version table: *** 1:3.6.1+deb8u1 0 500 http://ftp.uk.debian.org/debian/ jessie/contrib amd64 Packages 500 http://ftp.debian.org/debian/ jessie/contrib amd64 Packages 500 http://httpredir.debian.org/debian/ jessie/contrib amd64 Packages 100 /var/lib/dpkg/statusBut still nothing works. I am determined to sort this out but maybe am looking in the wrong places? | Cannot install Flash player after adding nonfree repos | debian | null |
_softwareengineering.179247 | I have been having some fun lately exploring the development of language parsers in the context of how they fit into the Chomsky hierarchy. What is a good real-world (i.e. not theoretical) example of a context-sensitive grammar? | What is a real-world use case of a Chomsky Type-1 (context-sensitive) grammar? | language design;parsing;grammar | Good question. Although, as mentioned in the comments, very many programming languages are context-sensitive, that context-sensitivity is often not resolved in the parsing phase but in later phases -- that is, a superset of the language is parsed using a context-free grammar, and some of those parse trees are later filtered out. However, that does not mean that those languages aren't context-sensitive, so here are some examples. Haskell allows you to define functions that are used as operators, and also to define the precedence and associativity of those operators. In other words, you can't build the correct parse tree for an operator expression like a @@ b @@ c ## d ## e unless you've already parsed the precedence/associativity declarations for @@ and ##: infixr 8 @@ and infixr 6 ##. A second example is Bencode, a data language that prefixes content with its length: <length>:<contents>. The issue with this format is that it's pretty much impossible to parse without something context-sensitive, because the only way to figure out the field sizes is by ... parsing the string. A third example is XML, assuming arbitrary tag names are allowed: opening tag names must have matching close tags: <hi> <bye> the closing tag has to match bye </bye> </hi> <!-- has to match hi --> |
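The Bencode point in the answer above can be made concrete with a short sketch (Python for illustration; this decodes only the byte-string form, not the full format). The parser cannot know where a field ends until it has read and interpreted the length prefix — exactly the context-sensitivity being described.

```python
def bdecode_string(data: str) -> tuple[str, str]:
    """Decode one Bencode byte-string field "<length>:<contents>".

    Returns (contents, remaining_input). The field boundary depends on
    the *value* of the length prefix, so no fixed context-free rule can
    delimit it.
    """
    length_text, rest = data.split(":", 1)
    length = int(length_text)
    return rest[:length], rest[length:]

first, rest = bdecode_string("4:spam3:egg")
print(first, rest)  # spam 3:egg
```

Note how `"4:spam3:egg"` only splits correctly because the decoder consumed and interpreted the `4` before reading the contents.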
_unix.108363 | I have a program that calls libcurl, and libcurl calls libgssapi_krb5. If I want to debug calls to libcurl, then ltrace works. But now I want to debug calls to libgssapi_krb5, and ltrace my_program does not output anything. | How to trace calls made from one library to another library? | debugging;ltrace | null |
_unix.208702 | I want to monitor changes made to a folder that is mounted via SSHFS. I have tried iwatch, but it does not notify when a new file is created. Below is the syntax I am using with iwatch: iwatch -e create /mnt/mme01/ Any idea why this is not working and how it can be achieved? | Monitor folders mounted via SSHFS | filesystems;mount;sshfs | null |
_unix.232651 | I've got a file with a list of words, each delimited by tabs. I'm trying to use grep to search for two of the words, but I can't figure out how to include the tab in the search string. I've tried: grep -i -e "word1 \tword2" along with several variations, but I still can't figure it out. Any help? | How to grep for two words with a tab in between them | bash;grep | null |
_unix.8854 | I'm migrating to a new webserver which has SELinux set up (running Centos 5.5). I've got it set up so that it can execute CGI scripts with no problem, but some of the older Perl based scripts are failing to connect to remote webservices (RSS feeds and the like).Running: grep perl /var/log/audit/audit.log gives:type=SYSCALL msg=audit(1299612513.302:7650): arch=40000003 syscall=102 success=no exit=-13 a0=3 a1=bfb3eb90 a2=57c86c a3=10 items=0 ppid=22342 pid=22558 auid=0 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=235 comm=index.cgi exe=/usr/bin/perl subj=root:system_r:httpd_sys_script_t:s0 key=(null)As my crash course in SELinux goes, it looks like it is actively refusing the outbound connection, but how do I configure it to allow for CGI scripts to make outbound requests? | How do I configure SELinux to allow outbound connections from a CGI script? | centos;selinux | You probably need to enable the httpd_can_network_connect SELinux boolean:Run as root:# setsebool -P httpd_can_network_connect 1 |
_cstheory.22206 | Consider the following set.$S_n =\{ (x,y) ~|~x \in \mathbb{Z}_+~\wedge~ y \in \{0,1\}^n ~\wedge~x=\sum_{i=0}^{n-1} 2^i y_i \}$$S_n$ is a collection of pairs $(x,y)$, where $x$ is an integer between 0 and $2^n-1$ and $y$ is its binary representation. I'm interested in the convex hull of $S_n$; you can call it $P_n$. Has $P_n$ been studied before? Does it admit a compact extended formulation? | What is known about this binary representation polytope? | linear programming;integer programming;polytope | I think $S_n$ can be written in terms of inequalities in the obvious way. Let$$Q_n = \{(x, y): x = \sum_{i = 0}^{n-1}{2^i y_i}, \forall i: 0 \leq y_i \leq 1\}.$$I claim that $Q_n = S_n$. First, obviously all $(x, y) \in S_n$ are also in $Q_n$, so $S_n \subseteq Q_n$. Second, fix a point $(x^*, y^*) \in Q_n$. Consider the probability distribution over $\{0, 1\}^n$ induced by picking $y_i = 1$ with probability $y^*_i$, independently for each $i$. If $y$ is sampled from that distribution, then, by linearity of expectation,$$\mathbb{E}\ x = \mathbb{E} \sum_{i = 0}^{n-1}{2^i y_i} = \sum_{i = 0}^{n-1}{2^i y^*_i} = x^*.$$Therefore $(x^*, y^*)$ is in the convex hull of $S_n$, which proves $Q_n \subseteq S_n$.BTW, this didn't really use anything special about $S_n$. Whenever you have the convex hull of the set $\{(x, y): y \in S, x = Ay\}$, and the convex hull of $S$ has a concise extended formulation, the same thing will work. The main point is that $x$ is a linear function of $y$. Here we used the fact that the cube $[0, 1]^n$ has a very easy formulation in terms of inequalities. |
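The product-distribution argument in the answer above can be verified numerically for small $n$ (an illustrative sketch; function names are my own): given any fractional $y^*\in[0,1]^n$ with $x^*=\sum_i 2^i y^*_i$, weighting each vertex of $S_n$ by the probability of drawing its $y$ under independent Bernoulli($y^*_i$) coordinates reproduces $(x^*, y^*)$ exactly, exhibiting the point as a convex combination of vertices.

```python
from itertools import product

def point(y):
    # A vertex (x, y) of S_n, with x the integer value of the bit string y.
    return (sum(2**i * yi for i, yi in enumerate(y)),) + tuple(y)

def in_hull_via_product_distribution(ystar):
    n = len(ystar)
    target = point(ystar)          # (x*, y*) with x* = sum 2^i y*_i
    mean = [0.0] * (n + 1)
    total = 0.0
    for y in product([0, 1], repeat=n):
        w = 1.0                    # P[y] under independent Bernoulli(y*_i)
        for yi, pi in zip(y, ystar):
            w *= pi if yi else 1 - pi
        total += w
        for k, v in enumerate(point(y)):
            mean[k] += w * v
    # The weights form a probability distribution and average to (x*, y*).
    return abs(total - 1) < 1e-9 and all(
        abs(m - t) < 1e-9 for m, t in zip(mean, target))

print(in_hull_via_product_distribution((0.3, 0.7, 0.5)))  # True
```

This is just a finite check of the expectation computation in the proof, not a substitute for it.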
_codereview.82188 | I am new to assembly and have made a simple addition program to sum two integers read from the keyboard. The program outputs correctly, but I want to know if there is a way to streamline my code. It seems a bit cumbersome for such a simple program and I may have instructions that are unnecessary. # Author: Evan Bechtol# Description: This program prompts the user to enter 2 integers and computes their sum.#---------------------------------------------------------------------------------------# .dataA: .word # Store the number 4 as an integer in var1 # $t0 is usedB: .word # Store the number 2 as an integer in var2 # $t1 is usedS: .word # Store the sum of A and B # $t2 is usedPrompt1: .asciiz Please enter first number: Prompt2: .asciiz Please enter second number: Result: .asciiz The sum of A and B is: .textmain: #--------------------------------------------------------# #Display first prompt li $v0, 4 # Load instruction print string la $a0, Prompt1 # Load prompt into $a0 syscall #Read first integer li $v0, 5 # Read 1st integer la $t0, A # $t0 = A syscall #Store first integer into memory move $t0, $v0 # Move contents in $v0 to $t0 sw $t0, A # A = value at $t0 #--------------------------------------------------------# #Display second prompt li $v0, 4 # Load instruction print string la $a0, Prompt2 # Load prompt into $a0 syscall #Read second integer li $v0, 5 # Read 1st integer la $t1, B # $t0 = A syscall #Store second integer into memory move $t1, $v0 # Move contents in $v0 to $t0 sw $t1, B # A = value at $t0 #--------------------------------------------------------# #Add the two variables la $t2, S # $t2 = S add $t2, $t0, $t1 # $t2 = $t0 + $t1 sw $t2, S # S = value at $t2 #Display the Result prompt la $a0, Result # Loads Output label to be printed li $v0, 4 # Sysycall to print string syscall #Display the sum lw $a0, S # $a0 = value at S li $v0, 1 # Syscall to print integer syscall #Exit the program li $v0, 10 # Load exit code to $v0 syscall | MIPS 
assembly addition program | beginner;assembly | The comments are misleading:#Read second integerli $v0, 5 # Read 1st integerla $t1, B # $t0 = Aumm... are we reading second or 1st? Bottomline is, do not overcomment the code.syscall 5 leaves a value in $v0. The contents of $t0 (or $t1) is irrelevant during the syscall. Set them up when you need them:li $v0, 5syscallla $t0, Amove $t0, $v0You store data to memory just to load them back. This is very anti-assembly. Generally you want to use registers as much as possible, and avoid memory as much as possible:li $v0, 5syscallmove $t0, $v0...li $v0, 5syscall# At this moment you have first integer in $t0, and the second in $v0.# Just add them together. No memory access is necessary.Consult your documentation on which registers are guaranteed to survive a syscall (I suspect, all of them besides $v0).Nothing to simplify reading and printing. |
_codereview.91663 | Swift's SequenceType is a useful means of generating a sequence of values, and it makes it particularly useful iterate over these values.I don't really have much experience with these SequenceType types, so I wanted to implement my own for some practice and learning. What better sequence to take a look at than a Fizz Buzz sequence, right?I wanted to make this Fizz Buzz a little special though. I wanted the user to define any sort of rules and add as many tests as they wanted. We just pair each test with a word, pass an array of these test-word pairs, and let the sequence do all the work.So, to start out, I create custom types for the Test and the test-word Pair:typealias FizzBuzzRule = (Int) -> Booltypealias FizzBuzzPair = (test: FizzBuzzRule, word: String)So using a normal FizzBuzz example, we'd create the ordinary Fizz and Buzz tests like this:let fizzTest = { (i: Int) -> Bool in return i % 3 == 0}let buzzTest = { (i: Int) -> Bool in return i % 5 == 0}let fizzPair: FizzBuzzPair = (fizzTest, Fizz)let buzzPair: FizzBuzzPair = (buzzTest, Buzz)let pairs = [fizzPair, buzzPair]But of course, we can create any sort of rules we want. These are just examples, and as we see the rest of the code, we'll see how using these example rules will produce the standard FizzBuzz problem results.The next step is writing a function to apply the rules and produce the required output. For that, I wrote the fizzBuzzify function:func fizzBuzzify(value: Int, fizzBuzzPairs: [FizzBuzzPair]) -> String { var retnValue: String? = nil for pair in fizzBuzzPairs { if pair.test(value) { retnValue = (retnValue ?? ) + pair.word } } return retnValue ?? 
String(value)}So now, we can pass any value and any array of Test-Word pairs, and build our FizzBuzz-type string simply using this function.Already, we could do something like this:for x in 1...100 { println(fizzBuzzify(value, pairs))}But, I wanted to go one step further and improve this into a sequence which generates the values for us, so I needed to create FizzBuzzSequence as a SequenceType:struct FizzBuzzSequence: SequenceType { let startValue: Int let endValue: Int let pairs: [FizzBuzzPair] init(start: Int = 1, end: Int = 100, pairs: [FizzBuzzPair]) { self.startValue = start self.endValue = end self.pairs = pairs } init(start: Int = 1, end: Int = 100, pairs: FizzBuzzPair...) { self.init(start: start, end: end, pairs: pairs) } func generate() -> GeneratorOf<String> { var value: Int = self.startValue return GeneratorOf<String> { return (value <= self.endValue) ? fizzBuzzify(value++, self.pairs) : nil } }}And now, that we've put it all together, it can be used as simply as:for fizzBuzzValue in FizzBuzzSequence(start: 1, end: 100, pairs: pairs) { println(fizzBuzzValue)}And assuming pairs is the same array of FizzBuzzPair that we set up earlier, this will have the exact same results as any other FizzBuzz program you'd expect to see.But we can now start at any value, end at any value, and set up any rules we want.I'm looking for general comments on Swiftiness of this code, as well as double checking efficiency of the program in general. Am I even using the SequenceType how it's intended to be used?For clarify, below is the full set of code to be reviewed put together (it was split up by commentary above):typealias FizzBuzzRule = (Int) -> Booltypealias FizzBuzzPair = (test: FizzBuzzRule, word: String)func fizzBuzzify(value: Int, fizzBuzzPairs: [FizzBuzzPair]) -> String { var retnValue: String? = nil for pair in fizzBuzzPairs { if pair.test(value) { retnValue = (retnValue ?? ) + pair.word } } return retnValue ?? 
String(value)}struct FizzBuzzSequence: SequenceType { let startValue: Int let endValue: Int let pairs: [FizzBuzzPair] init(start: Int = 1, end: Int = 100, pairs: [FizzBuzzPair]) { self.startValue = start self.endValue = end self.pairs = pairs } init(start: Int = 1, end: Int = 100, pairs: FizzBuzzPair...) { self.init(start: start, end: end, pairs: pairs) } func generate() -> GeneratorOf<String> { var value: Int = self.startValue return GeneratorOf<String> { return (value <= self.endValue) ? fizzBuzzify(value++, self.pairs) : nil } }} | Ultimate FizzBuzz | swift;fizzbuzz;generator | null |
_unix.32061 | I'm trying to: (1) launch several ssh sessions (processes) through a Python script; (2) communicate with the sessions by sending them commands via STDIN (even though they aren't open in my current terminal). I've got the session-spawning part down. I'm just unable to grab the process and send it stuff. I should mention this is a realm I've only recently started delving into, so I'm definitely missing theory. Explanation: in depth, what I'm trying to do is launch ssh WITHOUT a terminal (I'm already doing this with Python, so that isn't the issue) in the background. My problem arises when I actually want to communicate with the background process. How can I send data to a background process' STDIN? | How can I send data to the STDIN of a background process? | ssh;process;io redirection | It would be easier to use a named pipe to communicate with the processes than to try to modify the FD whilst it's open. Set the named pipe as the process' standard input and write to it as required. |
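A runnable sketch of the named-pipe approach from the answer above (Python, POSIX only; `cat` stands in for the real background process such as ssh, and the paths are illustrative): create a FIFO, start the background process with the FIFO as its stdin, then write to the FIFO whenever you want to feed it input.

```python
import os
import subprocess
import tempfile

fifo = os.path.join(tempfile.mkdtemp(), "ctl.fifo")
os.mkfifo(fifo)

# Open read+write first so the later open() calls don't block waiting
# for a peer on the other end of the FIFO.
ctl = os.open(fifo, os.O_RDWR)

# "cat" stands in for the long-running background process.
rd = os.open(fifo, os.O_RDONLY)
proc = subprocess.Popen(["cat"], stdin=rd, stdout=subprocess.PIPE)
os.close(rd)  # the child keeps its own copy of the read end

os.write(ctl, b"echo hello\n")  # this reaches the process's stdin
os.close(ctl)                   # closing the last writer sends EOF
out, _ = proc.communicate()
print(out.decode(), end="")
```

In the real setup you would keep `ctl` open for the life of the session and write each command as needed, rather than closing it immediately.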
_webmaster.81544 | I have a platform which uses a responsive design. The structure of the website is like this: Domain/#!/Product_type/Product_IDHere, I am trying to create the alias URL (Using an earlier version of Drupal) where I replace <Product ID> with <Product Name>. This is not possible, since the URL structure in the responsive page and the earlier Drupal version doesn't match. One fix which my team suggested is creating an alias URL in this format. Domain/#!/Product_type%2bProduct_ID where %2b is the unicode version of the /. I want to know if structuring the URL with a %2b will affect the SEO of the page in any way. | If the string '%2b' exists in the URL alias, does it affect SEO of the page? | seo;url | null |
_opensource.5399 | I want to put a BSD license on some code. I use part of some code that doesn't have license information; it belongs to other people whom I don't know. It was given to me by other private parties, but the code itself doesn't have a copyright claim. I want to put a BSD license on my part of the work. How can I apply my BSD license? It is the 2-clause BSD. | Derivative-work BSD license when the original work's license is not known | licensing;bsd | null |
_unix.30987 | In Windows command line (powershell and cmd), when you press Esc key while on a line, whatever you have typed at the prompt is removed.I found that pressing Esc key at bash prompt does nothing. Pressing Esc and then backspace deletes a word, but this has to be done for each word.I am learning Bash incrementally and sometimes type something stupid in the middle of the line and feel that it is better to type from scratch again. To do this, pressing backspace is the only way I found until now. What do you do?I am aware of the clear command and Ctrl-L shortcut, but I am not talking about clearing the entire terminal. Just the line. | Windows shell Escape key (delete whole line) equivalent in Bash | bash;command line | You want kill-whole-line, but this is not bound by default in bash. backward-kill-line (CtrlX Backspace) and unix-line-discard (CtrlU) both erase from the current point to the beginning of the line, so just go to the end of the line and use either. |
_unix.149386 | Suppose I have the following two iptables entries: iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE and iptables -A INPUT -s 192.168.1.0/24 -j DROP. So I have the POSTROUTING and INPUT chains. Then I can get the rule list with: iptables -L -t nat -n --line-numbers -t filter My result is: Chain INPUT (policy ACCEPT) num target prot opt source destination 1 DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:70 2 DROP all -- 10.10.10.0/24 0.0.0.0/0 My question is, how can I find out which rule belongs to POSTROUTING and which belongs to INPUT? | How to find out which chain an iptables rule belongs to when listing rules | iptables;firewall | null |
_unix.231917 | I installed Kali 2.0 on an Oracle VirtualBox. Everything was going fine until I tried to run the command apt-get -y install dkms This gave me the following error message: Package dkms is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source. E: Package 'dkms' has no installation candidate Here's the output I get when I run apt-cache policy. Why am I getting this error message and how do I fix it? I'm a newcomer to both Linux and Kali, so this is the guide I was following. I hit this error at around 9:45 in the video. | E: Package 'dkms' has no installation candidate | linux;kali linux | null |
_unix.358471 | I am trying to use mv to rename files. Some of the names of the files to be renamed contain apostrophes (or single quotes). And the file names are to be passed to mv with variables. But I cannot get that to work.When I give the file names to mv directly, it does work, like that:mv Artificial intelligence/Markoff_Rosenberg__China's_intelligent_weaponry_gets_smarter.pdf Artificial intelligence/Markoff_Rosenberg__Chinas_intelligent_weaponry_gets_smarter_(r1205).pdfBut when I use variables, it does not work:orig=Artificial intelligence/Markoff_Rosenberg__China's_intelligent_weaponry_gets_smarter.pdfnew=Artificial intelligence/Markoff_Rosenberg__Chinas_intelligent_weaponry_gets_smarter_(r1205).pdfmv $orig $newI receive the following error message:mv: cannot stat 'Artificial intelligence/Markoff_Rosenberg__China'\''s_intelligent_weaponry_gets_smarter.pdf': No such file or directoryWhy is that? Why is there an extra \'' in the file name in the error message? And what is the solution to the problem?Thanks in advance for your help! | bash: mv file with apostrophe in file name | bash | null |
_unix.267844 | I received an file encrypted with the public key I generated but I can't get it to decrypt.Steps:gpg key-gen default optionsgpg --export -a <email> > pub.keysent the pub.keyreceived the encrypted filecat <file> | gpgThe error:$ cat cred.gpg | gpggpg: key 71980D35: secret key without public key - skippedgpg: encrypted with RSA key, ID 0D54A10Agpg: decryption failed: secret key not availableHowever, the secret key DOES exist in my keyring and the public key i generate from it matches the fingerprint of the pub.key i sent to my coworker.$ gpg --list-secret-keys /home/jcope/.gnupg/secring.gpg------------------------------sec 2048R/71980D35 2016-03-04uid me <email>ssb 2048R/0D54A10A 2016-03-04Checking the fingerprint $ gpg --with-fingerprint pub.key pub 2048R/AF0A97C5 2016-03-04 me <email> Key fingerprint = 17A4 63BF 5A7D D3B2 C10F 15C0 EDD6 4D8A AF0A 97C5 sub 2048R/1103CA7C 2016-03-04$ gpg --fingerprint | grep 17a4 -i Key fingerprint = 17A4 63BF 5A7D D3B2 C10F 15C0 EDD6 4D8A AF0A 97C5I'm a gpg newby and at a loss for why this isn't working. It seems like the most standard operation. | gpg: secret key not available when sec & pub key are in keyring | encryption;gpg | Note the error message: it doesn't say that the secret key is missing (it isn't), it says the public key is missing.gpg: key 71980D35:secret key without public key- skippedIn RSA, some numbers (d, p, q, u) are private and others (n, e) are public. Only the 2 public numbers are required for encryption and signature verification while all 6 numbers are required in order to decrypt and sign. So for the latter operations, you actually need both the secret and public keys.Did the public key get deleted from the pubring by accident?You can try re-importing the public key. Since the public key is the one that is distributed widely, it should be easy to re-obtain a copy of it. |
_codereview.67330 | Both exercises have a common pattern of filter by a transformed list, then untransform the result. See skip and localMaxima.-- exercise 1skips :: [a] -> [[a]]skips xs = map (\n -> skip n xs) [1..(length xs)]skip :: Integral n => n -> [a] -> [a]skip n xs = map snd $ filter (\x -> (fst x) `mod` n == 0) (zip [1..] xs)--exercise 2isLocalMaximum :: Integral a => (a,a,a) -> BoolisLocalMaximum (a,b,c) = b > a && b > csliding3 :: [a] -> [(a,a,a)]sliding3 xs@(a:b:c:_) = (a,b,c) : sliding3 (tail xs)sliding3 _ = []localMaxima :: Integral a => [a] -> [a]localMaxima xs = map proj2 $ filter isLocalMaximum (sliding3 xs) where proj2 (_,b,_) = b-- *Main> filter isLocalMaximum (sliding3 [1,5,2,6,3])-- [(1,5,2),(2,6,3)]My instincts say that I could implement both of these something like this:localMaxima' :: Integral a => [a] -> [a]localMaxima' xs = filterBy isLocalMaximum sliding3 xsif only I could implement filterByfilterBy :: (b -> Bool) -> ([a] -> [b]) -> [a] -> [a]filterBy p f as = as' where indexedAs = zipWith (,) [0..] as indexedBs = zipWith (,) [0..] (f as) indexedBs' = filter p indexedBs -- doesn't typecheck; how can we teach p about the tuples? indexes = map fst indexedBs as' = map (\i -> snd (indexedAs !! i)) indexesIt's also slower than just writing out a fold. Is this all a bad idea? I've always considered fold a low level recursion operator and always try to structure in terms of higher level map and filter but maybe I am misunderstanding.My Haskell level is: understand LYAH but not written much code.This is a homework to CIS 194 (2013 version) (though I am not taking the class, I am working through the material on my own) | Filter by a transformed list, then untransform the result | haskell;homework | null |
_softwareengineering.193821 | We have a URL in the following format/instance/{instanceType}/{instanceId}You can call it with the standard HTTP methods: POST, GET, DELETE, PUT. However, there are a few more actions that we take on it such as Save as draft or CurateWe thought we could just use custom HTTP methods like: DRAFT, VALIDATE, CURATEI think this is acceptable since the standards sayThe set of common methods for HTTP/1.1 is defined below. Although this set can be expanded, additional methods cannot be assumed to share the same semantics for separately extended clients and servers.And tools like WebDav create some of their own extensions.Are there problems someone has run into with custom methods? I'm thinking of proxy servers and firewalls but any other areas of concern are welcome. Should I stay on the safe side and just have a URL parameter like action=validate|curate|draft? | Are there any problems with implementing custom HTTP methods? | rest;http | One of the fundamental constraints of HTTP and the central design feature of REST is a uniform interface provided by (among other things) a small, fixed set of methods that apply universally to all resources. The uniform interface constraint has a number of upsides and downsides. I'm quoting from Fielding liberally here.A uniform interface:is simpler.decouples implementations from the services that they provide.allows a layered architecture, including things like HTTP load balancers (nginx) and caches (varnish).On the other hand, a uniform interface:degrades efficiency, because information is transferred in a standardized form rather than one which is specific to an application's needs.The tradeoffs are designed for the common case of the Web and have allowed a large ecosystem to be built which provides solutions to many of the common problems in web architectures. Adhering to a uniform interface will allow your system to benefit from this ecosystem while breaking it will make it that difficult. 
You might want to use a load balancer like nginx but now you can only use a load balancer that understands DRAFT and CURATE. You might want to use an HTTP cache layer like Varnish but now you can only use an HTTP cache layer that understands DRAFT and CURATE. You might want to ask someone for help troubleshooting a server failure but no one else knows the semantics for a CURATE request. It may be difficult to change your preferred client or server libraries to understand and correctly implement the new methods. And so on.The correct* way to represent this is as a state transformation on the resource (or related resources). You don't DRAFT a post, you transform its draft state to true or you create a draft resource that contains the changes and links to previous draft versions. You don't CURATE a post, you transform its curated state to true or create a curation resource that links the post with the user that curated it.* Correct in that it most closely follows the REST architectural principles. |
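To illustrate the closing suggestion in the answer above, here is a hypothetical sketch (resource names and fields invented) of treating "draft" and "curate" as state transformations applied with a standard method such as PATCH on /instance/{type}/{id}, instead of inventing DRAFT and CURATE verbs:

```python
# In-memory stand-in for the resource store; a real service would use
# its database and an HTTP framework's request handler.
posts = {42: {"body": "...", "draft": False, "curated": False}}

# The state fields a client is allowed to transform via PATCH.
ALLOWED_TRANSITIONS = {"draft", "curated"}

def patch_post(post_id, changes):
    """Apply a PATCH-style state transformation to a post resource."""
    if not set(changes) <= ALLOWED_TRANSITIONS:
        raise ValueError("unknown state transition")
    posts[post_id].update(changes)
    return posts[post_id]

print(patch_post(42, {"draft": True})["draft"])  # True
```

The wire format stays within the uniform interface: a proxy or cache sees an ordinary PATCH, and only the resource representation carries the application-specific meaning.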
_softwareengineering.313391 | When you use an IoC container, like so:var svc = IoC.Resolve<IShippingService>();How does the IoC Container choose which implementation of IShippingService to instantiate?Further, if I am calling the above to replace the equivalent code:var svc = new ShippingService(new ProductLocator(), new PricingService(), new InventoryService(), new TrackingRepository(new ConfigProvider()), new Logger(new EmailLogger(new ConfigProvider())));I assume that ProductLocator, PricingService, etc. are called out in the constructor parameters as Interfaces, not concrete classes. How does the IoC container know which implementations of IProductLocator, IPricingService, etc. to instantiate?Is the IoC Container smart enough to use the same ConfigProvider for both dependencies (if that is the requirement)? | How does a Dependency Injection/IOC Container know which implementation to use? | dependency injection;inversion of control;ioc containers | Because you pass configure to it which tells it how to do it. Typically this happens during application startup.In the simplest case you could have a lot of lines like:iocConfig.Bind<IShippingService>().To<ShippingService>();There are different ways to define such a configuration, such as conventions (e.g. a concrete class matching the name of the interface apart from the I prefix), attributes or config files, each with their own advantages and disadvantages.When testing you could create the stubs manually, or use a different configuration.Most containers also support a way for the target site to influence the dependency resolution, for example by adding an attribute. Personally I haven't needed that so far.Reuse of instances is determined by scope. Typically you have one scope where nothing gets reused (transient), per-request scope, per-thread scope, singleton scope, etc. You typically specify the scope as part of the configuration. |
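The binding configuration described in the answer above can be sketched as a toy container (illustrative only — real containers such as Ninject or Autofac also do constructor injection, conventions, and more scopes): resolution is driven entirely by the bindings registered at startup, and the scope decides whether an instance like the shared ConfigProvider is reused.

```python
class Container:
    """A minimal IoC container: a map from abstraction to implementation."""

    def __init__(self):
        self._bindings = {}    # abstraction -> (implementation, scope)
        self._singletons = {}

    def bind(self, abstraction, implementation, scope="transient"):
        self._bindings[abstraction] = (implementation, scope)

    def resolve(self, abstraction):
        implementation, scope = self._bindings[abstraction]
        if scope == "singleton":
            # Reuse one instance for every resolve of this abstraction.
            if abstraction not in self._singletons:
                self._singletons[abstraction] = implementation()
            return self._singletons[abstraction]
        return implementation()   # transient: a fresh instance each time

class IShippingService: pass
class ShippingService(IShippingService): pass
class ConfigProvider: pass

c = Container()
c.bind(IShippingService, ShippingService)
c.bind(ConfigProvider, ConfigProvider, scope="singleton")

print(type(c.resolve(IShippingService)).__name__)              # ShippingService
print(c.resolve(ConfigProvider) is c.resolve(ConfigProvider))  # True
```

So the container "knows" which implementation to use only because the startup configuration told it, and it reuses the same ConfigProvider only because that binding was given singleton scope.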
_softwareengineering.142661 | I'm building a multi-process application and I need to save session IDs. Each session ID is 32-bit, and of course it can't be used twice in its lifetime. I'm currently using a DB that saves all the IDs in a table (int key, char used(1) // 1 is used, 0 is not), and to allocate a key for a session I do the following: 1. lock table; 2. get one unused key; 3. update its used field to "used"; 4. unlock. After the session is finished, the process frees the key with: 1. lock table; 2. update its used field to "not used"; 3. unlock. I'm really wondering whether this is a good/fast implementation, and please note it's a multi-process application. | Shared memory between multiple processes | c;multithreading | Did you try memcached? It's fast, it has atomic updates, and it does not have all the rest of the database overhead. Not as fast as raw shared memory, but session registration is probably not your bottleneck anyway. |
_softwareengineering.237681 | Related to this question, I want to know if there is a concise way to eliminate null values from code in general, or if there is not. E.g. imagine a class that represents a user with a birthday attribute which can be set but does not have to be, which then means it's null. In Scala and similar languages one could use the Option type to describe this, and I see why this is better than just having a regular null value as one would code it in Java. However, I wonder if this is the right concept or if we can do better, in this case and even in general. I came up with the idea that we could replace the user's birthday (and other optional or possibly unknown/unset attributes) with a list of possible optional attributes. So we could have something like ourUser.getOptionalAttributes() // could return a list [Date Birthday, Float Height, Color Eyecolor], or just [Color Eyecolor], or even an empty list. However, using a regular list would mean that we could store multiple birthdays, e.g., so we would need a way to tell the compiler that the list may only contain one Birthday type. Am I totally on the wrong track here, and are option types the (philosophically) correct way to express optional attributes? Also, are there additional concepts for handling the logic of optional attributes other than nullable types and option-like types? | Alternatives to null values and option-like types | java;scala;concepts;null | What is a type? A type is metadata that can be used, at compile time or run time, to describe the shape of our data: trait A { val b: X; def c(y: Y): Z } So we know in our program that whenever we see an a: A, we know that a has the shape of A, so we can call a.b safely. So what shape does null have? null has no data. null does not have a shape. String s = null is allowed because null has no shape, so it can fit anywhere. MikeFHay above says that an empty collection can be thought of as a null object.
This is wrong: an empty collection has exactly the same shape as any other collection. If a field can be null, then we can no longer rely on a.b being safe. The shape of the data in this field is not truly A, only potentially A; the other shape it can take is null. That is, the shape of this field is a union of A and null. Most languages allow such unions to be implemented as a tagged union. Scala's Option type is a tagged union of None | Some(a: A). Likewise, Ceylon adds a ? postfix for types that may be null, as in String? s = null.
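The tagged union the answer describes can be sketched in Python as well (the `Some`/`Nothing` classes and the `describe` function are hypothetical illustrations; Python has no built-in Option type). The point is that the tag forces callers to handle both shapes explicitly, whereas a raw null silently fits anywhere — and that an empty list, unlike null, still has the full shape of a list:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar, Union

A = TypeVar("A")

@dataclass
class Some(Generic[A]):
    value: A

@dataclass
class Nothing:
    pass

# Tagged union, analogous to Scala's None | Some(a: A)
Option = Union[Some[A], Nothing]

def describe(height: "Option[float]") -> str:
    # Both shapes must be handled explicitly before touching .value
    if isinstance(height, Some):
        return f"height is {height.value}"
    return "height unknown"

print(describe(Some(1.80)))   # height is 1.8
print(describe(Nothing()))    # height unknown

# An empty collection has the same shape as any other collection:
assert [].__class__ is [1, 2].__class__
```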
_cs.27998 | Let's assume we are given a two-dimensional cellular automaton with an initial configuration where all cells are in a quiescent state, except for one square of cells. Let $n$ be the number of cells in that square. We want to synchronize all cells in the square, so we basically want to solve the firing squad synchronization problem. Synchronizing all cells in $\Theta(\sqrt{n})$ is rather easy: all cells at the left and right border of the square serve as generals, and we synchronize each row separately with any standard 1-dimensional FSSP algorithm. A row of $k$ cells with generals at both ends can be synchronized in $\Theta(k)$ steps (in our case: $k=\sqrt{n}$). What I want to do is synchronize the whole square in $\Theta(\sqrt{n} \log n)$, so I need to slow down the synchronization process by a factor of $\log n$. But I have no idea how to do so. | How to synchronize a 2d cellular automaton in $\Theta(\sqrt{n} \log n)$ steps | synchronization;cellular automata;algorithm design | A naive solution could be that each general first launches a binary counter in its row, counts the current time up to $k\log(k)$ steps, and then runs the FSSP. To do this you just need to detect the value of $k$ and compute $k\log(k)$ in fewer than $k\log(k)$ steps: sending a signal to the other end and back while incrementing a binary counter gives you $k$ written in binary in roughly $2k+\log(k)$ steps, so you also know $\log(k)$ (the length of the counter). Within the $\Theta(k\log(k))$ steps left you have much more time than necessary to compute the product $k\times \log(k)$, and you're done. I'm pretty sure that (1) there are more elegant solutions, and (2) there is a general acceleration/slowdown theorem that could also be used here.
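The timing in this answer can be sanity-checked numerically (the `schedule_ok` helper is hypothetical, and the constants are only illustrative): the measurement phase — a signal bounced off the far end while a binary counter ticks — takes about $2k + \log_2 k$ steps, and it must complete before the target firing time of roughly $k\log_2 k$ steps.

```python
import math

def schedule_ok(k):
    """Check that measuring k (about 2k + log2(k) steps) finishes
    before the intended Theta(k log k) firing deadline for a row
    of length k. Illustrative constants only."""
    log_k = max(1, math.ceil(math.log2(k)))
    measure = 2 * k + log_k   # bounce signal + read off counter length
    deadline = k * log_k      # target firing time
    return deadline >= measure

# There is slack for every k >= 8 (smaller k can be special-cased):
assert all(schedule_ok(k) for k in range(8, 10_000))
```

The growing gap between $k\log k$ and $2k + \log k$ is exactly the room the answer uses to compute the product $k \times \log(k)$ before the deadline.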
_webmaster.58672 | I bought a certificate only for name_domain.com, under the configuration for IIS 8 (on Windows Server 2012). I have 301 redirects from www.name_domain.com to https://name_domain.com. If I open www.name_domain.com in a browser, there's a certificate error. Should I buy a new certificate, or is there another solution to the problem? | Problems with SSL certificate in IIS 8 | redirects;security certificate;iis8 | null |
_cstheory.18727 | Is there a (reasonable) way to sample a uniformly random boolean function $f:\{0,1\}^n \to \{0,1\}$ whose degree as a real polynomial is at most $d$? EDIT: Nisan and Szegedy have shown that a function of degree $d$ depends on at most $d2^d$ coordinates, so we may assume that $n \leq d2^d$. The problems as I see them are the following: 1) On one hand, if we pick a random boolean function on $d2^d$ coordinates, then its degree will be close to $d2^d$, much higher than $d$. 2) On the other hand, if we pick each coefficient of degree at most $d$ at random, then the function will not be boolean. So the question is: is there a way to sample a low-degree boolean function that avoids these two problems? | Random functions of low degree as a real polynomial | randomness;boolean functions;bounded degree | null |
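For very small $n$ the naive approach — rejection sampling against an exact degree computation — is at least implementable, which makes problem 1) concrete. A sketch (the helper names are hypothetical): the coefficient of the monomial for a subset $S$ in the multilinear expansion of $f$ is obtained by Möbius inversion over sub-assignments, $c_S = \sum_{T \subseteq S} (-1)^{|S|-|T|} f(T)$, and the real degree is the largest $|S|$ with $c_S \neq 0$.

```python
import random

def real_degree(f_vals, n):
    """Degree of f:{0,1}^n -> {0,1} as a multilinear real polynomial.
    f_vals[x] is f on the input whose bits are those of the integer x."""
    deg = 0
    for s in range(2 ** n):        # subset S encoded as a bitmask
        c, t = 0, s
        while True:                # enumerate all submasks T of S
            # sign (-1)^{|S|-|T|} = (-1)^{popcount(s ^ t)} since T ⊆ S
            c += (-1) ** bin(s ^ t).count("1") * f_vals[t]
            if t == 0:
                break
            t = (t - 1) & s
        if c != 0:
            deg = max(deg, bin(s).count("1"))
    return deg

def sample_low_degree(n, d, rng=random):
    """Rejection-sample a uniform f with deg(f) <= d. Only feasible for
    tiny n: expected tries grow like 2^(2^n) / #{functions of deg <= d},
    which is exactly problem 1) in the question."""
    while True:
        f = [rng.randint(0, 1) for _ in range(2 ** n)]
        if real_degree(f, n) <= d:
            return f

random.seed(0)
f = sample_low_degree(2, 1)        # a constant, a dictator, or its negation
assert real_degree(f, 2) <= 1
```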