id
stringlengths
5
27
question
stringlengths
19
69.9k
title
stringlengths
1
150
tags
stringlengths
1
118
accepted_answer
stringlengths
4
29.9k
_datascience.15589
I have a table in R. It just has two columns and many rows. Each element is a string that contains some characters and some numbers. I need number part of the element. How can I have number part?For example: INTERACTOR_A INTERACTOR_B1 ce7380 ce60582 ce7380 ce138123 ce7382 ce73824 ce7382 ce52555 ce7382 ce11036 ce7388 ce5237 ce7388 ce8534Thanks
Remove part of string in R
dataset;bioinformatics
null
_softwareengineering.206593
A db index is analogous to a table of contents. This helps me understand db index in an easy way. My question is are tehre any real world analogies for a clustered index?
Real world analogy for a clustered index
oracle;indexing
null
_softwareengineering.329571
I have recently started a project where I need to make extensive use of videos and books. And all of them are needed to be stored locally. I want to be able to search all these books and get access to them as quickly as possible.I don't want users of my application to have access to the content (books and videos) outside my app that would allow them to copy the files or view them in another software. In short, I don't want the files to be stored openly in the file system, I want to obscure them or somehow restrict access to the content so it cannot be accessed outside my application.This is like an encyclopedia project, where I have copyrighted materials.I am using Java as the main language for this project.
How to prevent copy of my Java application's local multimedia content?
java;encryption;copy protection
You probably want to encrypt the content (or at least obfuscate it). You then have to trust the encryption and the procedures related to it.But advanced users could bypass your Java code and access to the encrypted form.BTW, with enough efforts (including forcing the encryption), your mechanism could be bypassed. Remember that security through obscurity is a fallacy.Storing data outside of files (e.g. in some database) does not hide it at all. An advanced user (like a motivated enough me) would find your database and could query it outside of your application. It could change a Java class loader to modify or at least trace your application's behavior, or it could even patch the JVM running your thing (or use some different, e.g. academic JVM, to run your thing). It could e.g. trace the system calls done by your app (on Linux, I'm using strace(1) on most foreign binary software I might have to install). Read perhaps Operating Systems: Three Easy Pieces.Also, a given content (video or book) could be legally available on the consumer's computer outside of your app. Do you require that content to be duplicated (wasting resources on the consumer's computer, assumed to legally belonging to him)?At last, as a consumer, I would never buy or use your software, because I don't trust DRM. Information wants to be free. See https://www.defectivebydesign.org/ (which also have technical arguments related to your issue that you should know, even if you disagree with the opinions there).Read about trusted computing base.Explain to your client that someone (outside of US law juridiction, perhaps some Chinese, Russian, French developer or hacker, more generally outside of legal reach from your client ....) will eventually reverse engineer any software trick you'll implement, and publish his understanding of your tricks on some website or forum. It is just a matter of time. Read about libdvdcss as a past example etc...I want to obscure them or somehow restrict access to the content so it cannot be accessed outside my application.You won't technically be able to fully restrict (that is, make impossible any) access to content on another computer (on which the OS, the JVM, the hardware etc... could be hacked or compromised or improved or patched); you just could make that difficult (using encryption or obfuscation techniques). So you have a cost-effectiveness tradeoff: how much work & resource can your client afford to make it very difficult? Or is barely difficult enough?
_softwareengineering.325241
I am wondering what is the difference between DDS and AUTOSAR. As I know, both of them are communication middlewares. AUTOSAR WAS originally proposed by a group of car manufacturer that's why I guess should be much more suitable for intra-vehicle communications. DDS was proposed for military and critical-mission use cases first. But I really want to know should DDS be an alternative for AUTOSAR? If not, why?Please correct me, if I am wrong and add more to this.
What is the difference between AUTOSAR and DDS?
middleware
null
_unix.332874
How can I discard multiple messages at once using rsyslog?This doesn't work and I'm unable to find a working example:# Do not log any keepass and chmomium messagesif $programname == 'chromium.desktop' then /dev/nullif $programname == 'keepass2.desktop' then /dev/null& stopThe solution below works, but shows an error when checking with rsyslogd -N1 -f <config_file>rsyslogd: version 8.4.2, config validation run (level 1), master config /etc/rsyslog.d/00-discard.confrsyslogd: CONFIG ERROR: there are no active actions configured. Inputs will run, but no output whatsoever is created. [try http://www.rsyslog.com/e/2103 ]rsyslogd: run failed with error -2103 (see rsyslog.h or try http://www.rsyslog.com/e/2103 to learn what that number means)
rsyslog: discard multiple messages
rsyslog
The line & stop means repeat the previous selector, and do action stop which stops further processing of the selected message. So you would need to put it after each if ... selecting line. However, since your action is to write to /dev/null, you may as well make the first line do what you want, i.e.if $programname == 'chromium.desktop' then stopif $programname == 'keepass2.desktop' then stop
_cogsci.4597
At the siege of Masada, a group of heavily outnumbered Jewish soldiers elected to commit suicide en masse, rather than to be captured by the besieging Romans, who would probably have committed them to a tortuous death by crucifixion. Such an action is highly unusual, almost unique in the annals of history.During the Spartacus slave revolt, some 6,000 rebels were captured and crucified. Why did they choose this over death in battle? (As rebelling slaves, they were outlaws, not foreign prisoners who could expect to be allowed to live.) And why might this be true of others in similar straits (e.g. partisans captured by Nazis in World War II)?
Why do soldiers seldom fight to the death even if they are going to be killed anyway?
decision making
Why does prey, when caught in the jaws of an animal whose bite and build is powerful enough to carry them, cease to struggle? I remember an old quote from psychology, recounting first-hand, a lion attack. I cannot find it now. The narrator said that once the lion had him in its mouth, it shook him (physically) and thus disoriented him. He said that he felt no pain, and he said that it must have been because of adrenaline. I forget how, I think a local intervened, but he survived to describe the experience.Why does a fish, when laid on ground, cease to struggle? Yet waggle when touched and escape when released? Morale?Why do bucks, when vying for supremacy by engaging in contests of strength, not aim to kill their opponent? This case is slightly different, because the foe is conspecific. While humans understand the concept of mutually-assured destruction, animals seem to understand this too, and that it's evolutionarily folly when fighting one's own species.Sorry for an incomprehensive answer, but I hope my insight spurs the ultimate answer.
_reverseengineering.8307
When clicking a GUI, there's underlying code somewhere getting executed. Is it possible to capture this puppet-stringing so that it can be manually called on demand (without clicking the GUI)?For example: In Internet Explorer's dev-console, there's a button for Clear domain cookies, however it's only accessible with the dev console pulled up and via button-click. Would it be possible to catch this underlying function getting called, and then call it on my own via programmatic puppet-stringing?
Possible to capture/replay GUI functions by puppet-stringing?
ida;windows
null
_unix.166210
Often, the bottleneck of my laptop is the disk. When doing some disk-intensive computations, automatically-started background processes like updatedb, find /something etc. kick in, making things even worse. They are set to be nice, but it doesn't help since CPU is not the problem, I/O is.The question: what to do to alleviate the problem (short of killing them manually), and more generally, is there a mechanism like nice, but taking I/O into account?Even more generally, how to improve I/O responsiveness of a Linux (Ubuntu 14.04) system? At present, when one app is maxing out disk usage, the system is veeeery slow to respond - for example, it takes forever to open a web page in Firefox (even though the system is not swapping; it get worse when it is swapping). Swappiness is set to 0, it it matters.
Disk is a bottleneck. Background processes make things worse. How to improve responsiveness?
performance;disk;nice
null
_cstheory.12307
A string has $2^n$ subsequences, but they are usually not all distinct. What is the complexity of finding the maximum frequency of any subsequence?For example, the string subsequence contains 7 copies of the subsequence sue and this is the maximum.Sample brute-force code at http://ideone.com/UIp3tAre there related structural theorems? Both of these turn out to be false:the longest of the maximum-frequency subsequences is uniquethe maximum frequency of any length-$k$ subsequence is unimodal in $k$Possibly related links:Counting # distinct subsequences $\in \mathbf{P}$ http://11011110.livejournal.com/254164.htmlRelated contest problem for multiple sources http://www.spoj.pl/problems/CSUBSEQS/Related paper http://dx.doi.org/10.1016/j.tcs.2008.08.035Edit 10 days later: thanks for taking a look! I had wondered if this would make a nice polynomial-time solvable programming contest problem. I guess not, but I hope to think about it again later.
Commonest Subsequence
ds.algorithms;string search
null
_cstheory.4757
I'm studying certain graph editing problems and I'd like to determine the complexity of this problem:Input: Balanced bipartite graph $G(A \bigcup B, E)$, $|A|=|B|=n$, integer $k$Problem: Is there $r$ edit operations that transform the input graph into balanced bipartite $n/2$-regular graph ($r \leq k$).An edit operation can be an addition of one edge or a removal of one edge (between sets $A$ and $B$). Has anyone seen this problem in the literature? Is there a polynomial time algorithm or is it $NP$-complete?My main interest is in the case where $k \leq cn$ for some constant $c \gt 0$.EDIT: One way to look at the problem is to find the minimum number of edit operations that transform an input of balanced bipartite graph $G(A \bigcup B, E)$ into balanced bipartite $n/2$-regular graph $G(A \bigcup B, E^')$. Notice that $n$ must be even integer.
Complexity of transforming a balanced bipartite graph into regular graph?
ds.algorithms;reference request;graph algorithms
null
_unix.375444
I have a handful of libvirt/kvm/qemu virtual hosts, which are working quite well, including live migration of VMs from one host to another. However, there are occasional problems with live migrations turning into offline migrations - the VM is moved to the new host, but does a fresh boot on the destination.I'm assuming that this is due to errors during the transfer of the VM state and it is often (though not always) accompanied by errors in syslog on either the source or destination host. Based on this, I've tried adding --abort-on-error to my virsh migration commands, but it does not appear to have had any effect.My complete virsh command for online migration is:virsh migrate --live --tunneled --persistent --undefinesource --p2p --abort-on-error [VM name] qemu+ssh://[user]@[destination host]/systemIs there anything else I can do to cause virsh to abort migration if it can't be done live, rather than falling back to an offline migration?
Can virsh be prevented from falling back to offline migration?
libvirtd;virsh
null
_cstheory.12529
The best known algorithm for computing the exact edit distance between two strings is I believe an algorithm by Masek and Paterson that runs in time $O(n^2/\log^2 n)$, for binary alphabets. Is there any algorithm that possibly by taking advantage of larger alphabet sizes (and potentially the possibility of few matches to explore) can run in time that might be strictly better than the above bound for large (i.e non-constant sized) alphabets ? Or is there some easy reason why this would be as hard as the case for a binary alphabet ?
Edit distance algorithms that depend on alphabet
ds.algorithms;edit distance
null
_unix.335004
Is there a program to monitor all ressource utilzation at once for a personal computer : CPU, memory, hard drives ?
Monitor all ressources use?
arch linux;monitoring
Take a look to Conky. Also searching for CPU widgets for ArchLinux will give you a lot of results for what you want.Additionally, if you want to know all that information via CLI, you can have it quickly with:echo CPU: && mpstat && echo && echo MEMORY: && free && echo && echo DISK USAGE: && df -h
_unix.105893
I have a large bibtex file with many entries where each entry has the general structure @ARTICLE{AuthorYear,item = {...},item = {...},item = {...},etc}(in some cases ARTICLE might be a different word e.g. BOOK)What I would like to do is write a simple script (preferably just a shell script) to extract entries with given AuthorYear and put those in a new .bib file. I can imagine that I can recognize the first sentence of an entry by AuthorYear and the last by the single closing } and perhaps use sed to extract the entry, but I don't really know how to do this exactly. Can someone tell me how I would achieve this? It should probably be something likesed -n /AuthorYear/,/\}/p file.bibBut that stops due to the closing } in the first item of the entry thus giving this output:@ARTICLE{AuthorYear,item = {...},So I need to recognize whether the } is the only character at a line and only have 'sed' stop reading when this is the case.
Script to extract selected entries from a bibtex file
shell script;text processing;sed
The following Python script does the desired filtering.#!/usr/bin/pythonimport re# Bibliography entries to retrieve# Multiple pattern compilation from: http://stackoverflow.com/a/11693340/147021pattern_strings = ['Author2010', 'Author2012',]pattern_string = '|'.join(pattern_strings)patterns = re.compile(pattern_string)with open('bibliography.bib', 'r') as bib_file: keep_printing = False for line in bib_file: if patterns.findall(line): # Beginning of an entry keep_printing = True if line.strip() == '}': if keep_printing: print line # End of an entry -- should be the one which began earlier keep_printing = False if keep_printing: # The intermediate lines print line,Personally, I prefer moving to a scripting language when the filtering logic becomes complex. That, perhaps, has an advantage on the readability factor at least.
_reverseengineering.6863
I am developing a DLL for the purpose of injecting it into a running process for a game. I've found the memory addresses to some key functions(via Immunity Debugger) and I am trying to call those functions from within' my injected DLL.So far whenever I inject my DLL and press the hotkey combination of ALT+T, the game client stops responding and crashes. At one particular instance it showed a debug error saying: The process was not able to resume execution because the ESP value was changed, or something similar.Do I have to alter the ESP value before and after I call the process function from within' my DLL? If so, how would I do this properly.Here is the source code of my DLL:// Warband_Chat.cpp : Defines the exported functions for the DLL application.#include stdafx.h#include <windows.h> // Include the functions we are going to use like Sleep and hInstance etc...#include <fstream> // Allows us to work with files on the hard drive.#include <iostream>#define MAX_BUFFER_SIZE 300 // Maximum chat message size: 300 characters.#define ThreadMake(x) CreateThread(NULL,NULL,(LPTHREAD_START_ROUTINE)&x,NULL,NULL,NULL); // Makes creating threads easy, it just requires 1 parameter(the function).using namespace std;// Define process(Warband) function based on its parameters and its location in memory.typedef void(__cdecl* ChatFunc)(char*);ChatFunc Chat = (ChatFunc)0x00450C60;wchar_t *convertCharArrayToLPCWSTR(const char* charArray)/* Converts a char array to a LCPWSTR string. */{ wchar_t* wString=new wchar_t[4096]; MultiByteToWideChar(CP_ACP, 0, charArray, -1, wString, 4096); return wString;}int getkey(char x) // A function I made to get 1 key and automatically check ALT(vk_menu,0x12){ if(GetAsyncKeyState(VK_MENU)&0x8000 && GetAsyncKeyState(x)&0x8000)//Check if we are pressing ALT and what ever is inside x { return 1; // if we are then return true. } return 0; // If the condition is not met then return false}void main() // the main function{ while(1) // the main loop { if(getkey('T')) // If we are pressing ALT + T then do { ifstream file(chat.txt); if (!file.is_open()) { MessageBox(NULL, LFailed to open chat.txt. Make sure its on your root Mount & Blade: Warband folder., LFailed, MB_OK); } else { char buffer[MAX_BUFFER_SIZE]; file.getline(buffer, MAX_BUFFER_SIZE-1); Chat(buffer); // Call chat function LPCWSTR newbuffer = convertCharArrayToLPCWSTR(buffer); MessageBox(NULL, newbuffer, LSuccess, MB_OK); // Post a message if we injected. // the L before the messages is just to tell MSVS that those are LPCTSTR characters. } file.close(); Sleep(20); // Sleep so we don't lag } Sleep(20); // no lag. }}extern C // DLL Hook{ __declspec(dllexport) BOOL __stdcall DllMain(HINSTANCE hInst,DWORD reason,LPVOID lpv) { if (reason == DLL_PROCESS_ATTACH) { DisableThreadLibraryCalls(hInst); ThreadMake(main); // Creates a new thread on the process. } return true; }}
Cannot call function (properly) in C++
c++;dll;immunity debugger;dll injection
null
_codereview.135569
Here's a simple school assignment I did:Problem 1: Write a program that asks the user for a positive integer no greater than 15. The program should then display a square on the screen using the character X. The number entered by the user will be the length of the side of the square. For example, if the user enters 5, the program should display the following:XXXXX XXXXX XXXXX XXXXX XXXXXProblem 2: Imagine that you and a number of friends go to a restaurant and when you ask for the bill you want to split the amount and the tip between all. Write a functiondouble CalculateAmountPerPerson(double TotalBill, double TipPercentage, int NumFriends)that takes the total bill amount, tip percentage (e.g., 15.0 for a 15% tip), and the number of friends as inputs and returns the total bill amount as its output.Write a main function that asks the user for the total amount of the bill and the size of his/her party (i.e., number of friends) and prints out the total amount that each person should pay for tip percentages of 10%, 12.5%, 15%, 17.5%, 20%, 22.5%, 25%, 27.5%, and 30%. Your main function should use a loop and invoke the CalculateAmountPerPerson function at each iteration.My code:#include <iostream>#include <iomanip>void menuPrompt();short getMenuSelection();void program1();void squareLengthPrompt();void displaySquare(int, char);void program2();void billPrompt();void numPeoplePrompt();double CalculateAmountPerPerson(double, double, int);const short PROGRAM_1 = 1;const short PROGRAM_2 = 2;const short EXIT = 0;int main(int argc, char *arv[]) { while(true) { char menu = getMenuSelection(); switch (menu) { case EXIT: exit(EXIT_SUCCESS); case PROGRAM_1: program1(); break; case PROGRAM_2: program2(); break; default: std::cout << That program doesn\'t exist. << std::endl; break; } } return 0;}void menuPrompt() { std::cout << Menu:\n << \t1. Program 1\n << \t2. Program 2\n << \t0. Exit\n << Select your program: << std::flush;}short getMenuSelection() { short selection = 0; while (selection != PROGRAM_1 && selection != PROGRAM_2) { menuPrompt(); std::cin >> selection; } return selection;}void program1(){ const char SQUARE_CHARATER = 'X'; short squareLength = 0; while (squareLength > 15 || squareLength < 1) { squareLengthPrompt(); std::cin >> squareLength; } displaySquare(squareLength,SQUARE_CHARATER);}void squareLengthPrompt() { std::cout << Enter the length of the side of the square (Between 1 and 15): << std::flush;}void displaySquare(int side, char character) { for(int i = 0; i < side ; ++i) { for (int j = 0; j < side; ++j) { std::cout << character; } std::cout << std::endl; }}void program2() { const float TIP_PERCENTAGES[] = {.10, .125, .15, .175, .20, .225, .25, .275, .30}; double totalBill = 0; int totalPeople = 0; double amountPerPerson; while (totalBill <= 0) { billPrompt(); std::cin >> totalBill; } while (totalPeople <= 0) { numPeoplePrompt(); std::cin >> totalPeople; } for(auto tipPercent: TIP_PERCENTAGES){ amountPerPerson = CalculateAmountPerPerson(totalBill,tipPercent, totalPeople); std::cout << With the tip percentage of << std::fixed << std::setprecision(2) << tipPercent*100 << %, each person pays << amountPerPerson << from a $ << totalBill << bill. << std::endl; }}double CalculateAmountPerPerson(double TotalBill, double TipPercentage, int NumFriends) { return (TotalBill*(1+TipPercentage))/NumFriends;}void billPrompt() { std::cout << Enter the total of your bill (must be greater than 0): << std::flush;}void numPeoplePrompt() { std::cout << Enter the number people that are splitting the bill (must be greater than 0): << std::flush;}I primarily just want to know if the code is self documenting and if I should include comments.
Homework to display a square and calculate tips
c++;homework;calculator;ascii art
The code is not self-documenting, because the problem description is non-trivial and involves what you'd call business logic. That is, you're not trying to perform some technical operation (like a cache, or a data structure, or a parser), but you're following rules defined by someone else. Each of those rules has to be programmed in, of course, but without describing WHY they have been put in, you'll always need the problem description along with the code to make sense of the code.Imagine you had posted your question without the problem description. Would we have been able to guess what the goal of your assignment was?Personally, yes, I think so. This is because if I were to run your program, it asks clear questions and prints a clear result. It does require a non-trivial time investment, though.You get Enter the length of the side of the square (Between 1 and 15): As output on the screen, you enter a number, you get a square. Program 1 will print a square.And seeing something likefor(auto tipPercent: TIP_PERCENTAGES){ amountPerPerson = CalculateAmountPerPerson(totalBill,tipPercent, totalPeople); std::cout << With the tip percentage of << std::fixed << std::setprecision(2) << tipPercent*100 << %, each person pays << amountPerPerson << from a $ << totalBill << bill. << std::endl;}In the code tells me that program 2 is for splitting up a bill.In that sense, the code is self documenting. We don't need the problem description. We can see what the code does, because we can execute it.What comments are for, then, are not for explaining what the code does. That understanding can already be achieved by, well, reading and executing the code.Comments have 2 main uses, in my opinion: First, to explain the why of the code (why does the code do what it does). Second, to help speed along the understanding the code. Basically, rather than making me read and execute the entire program, spending lots of time, you simply put the purpose of a part of code in a comment, and I can read what the code does via condensed comments. Like reading a recipe instead of watching someone actually cook something.The act of making code self-documenting, then, is to put these comments into active code. Putting the why into code is hard; the only places you can possibly do this is in error messages - number of people must be greater than 0, cannot split bill between 0 or negative people - stuff like that explains why there is a totalPeople <= 0 check. I don't recommend going out of your way to do that; comments are for the programmer and output is for the user.Putting the how comments into code is a lot easier. You can use function names for this.Compare: amountPerPerson = CalculateAmountPerPerson(totalBill,tipPercent, totalPeople);and s = calc(sum, pct, num);One is clear to understand, the other could mean anything. Yet it can even make sense after we give it a comment... double s; //share per person s = calc(sum, pct, num); //calculate share per person using sum costs, tip percentage and number of peopleSo you've, in essence, already done this importing of comments.There are still a few improvements to be made.For instance...void program2() { const float TIP_PERCENTAGES[] = {.10, .125, .15, .175, .20, .225, .25, .275, .30}; double totalBill = 0; int totalPeople = 0; double amountPerPerson; while (totalBill <= 0) { billPrompt(); std::cin >> totalBill; } while (totalPeople <= 0) { numPeoplePrompt(); std::cin >> totalPeople; } for(auto tipPercent: TIP_PERCENTAGES){ amountPerPerson = CalculateAmountPerPerson(totalBill,tipPercent, totalPeople); std::cout << With the tip percentage of << std::fixed << std::setprecision(2) << tipPercent*100 << %, each person pays << amountPerPerson << from a $ << totalBill << bill. << std::endl; }}program2 as a whole is hard to understand. You have to carefully read what it does to see what it does. Had you instead renamed the function to runBillSplitterProgram, we'd have gotten a hint of the meaning already.Internally, you have tried splitting certain sections up, but you're only done this for the long strings.We can do slightly better by not separating based on code length, but on functionality://in runBillSplitterProgramdouble amountPerPerson;double totalBill = askForTotalBill();int totalPeople = askForTotalPeople();//as separate functions double askUserForTotalBill() { double totalBill = 0; while (totalBill <= 0) { std::cout << Enter the total of your bill (must be greater than 0): << std::flush; std::cin >> totalBill; } return totalBill;}int askUserForTotalPeople() { int totalPeople = 0; while (totalPeople <= 0) { std::cout << Enter the number people that are splitting the bill (must be greater than 0): << std::flush; std::cin >> totalPeople; } return totalPeople;}It's a shame one is a double and the other is an integer, or you'd been able to merge both into some sort of askUserForValue function, keeping billPrompt as a function which calls askUserForValue with a lengthy string.Technical commentary:while (!(1 <= squareLength && squareLength <= 15))reads better as a range check.Overall, you've done a pretty good job. You should add a program header describing the exercise.
_unix.25811
Is there any way to list the tunnels that SSH clients connected to my OpenSSH server have set up?I can use e.g. lsof -i to show connections that are being actively tunnelled, but I'd like to be able to list tunnels that the clients have set up but may not currently be in use.(It's just struck me that this may be an entirely client-side thing, i.e. the server only knows the client is set up to tunnel a port when something tries to connect through the tunnel, in which case the answer will be you can't - but I'll take that as an answer if so.)(Background: I'm running a MineCraft server on a machine that won't be able to do much else while it's running. If I can monitor when users have tunnels set up, I can run up the MC server on demand.)
List ports tunnelled on OpenSSH server
openssh;ssh tunneling;tunneling
Well yes it is client sided.Plus there isn't any configuration in the traditional sense. You create a tunnel by specifying the correct parameters when connecting to a server.Sure you can store it in .bashrc, .ssh/config, or some other place for re-usability, but in general it is purely on-demand.
_webapps.31554
When I first browse to trello.com there is no request for login or password or any screening of any kind. The result is that anyone sitting at my PC can click on the Trello icon/shortcut and be presented with the full list of all boards I have created, ie., Trello assumes that the PC operator is always me.How do I stop access to the main Trello screen listing the boards I have created?Example: if someone operates my PC and browses to yahoo.com or even my.yahoo.com they are not presented with all my activity. In fact they would not even know if I had a Yahoo account.How do I force a logon to my home page on Trello?
Security/permission at the highest level?
trello
null
_softwareengineering.349852
So basically I have an object with a few properties in it:public MyObject{public string Name {get;set;}public bool Complete {get; set;}}List<MyObject> myList = new List<MyObject>();myList.Add(new MyObject(Page1, true));myList.Add(new MyObject(Page2, false));myList.Add(new MyObject(Page3, false));Imagine a webpage. Load the list of pages above from the database and then render a link to each one.The pages must be completed in order.So if a page is not complete then the next page cannot be edited.So in this case page 1 is already complete.Page2 can be edited/completed.Page3 cannot be edited until Page2 has been completed.I'm trying to decide the best way to implement - I have 3 options I am trying to decide between:1) Change database query - The current query contains the name and the complete flag. I was thinking to just add a new flag which will pull the value from the previous row, like so:SELECT Name, Complete, LAG(Complete) OVER (ORDER BY 1) as EditableFROM @MyPages ORDER BY 1But that is bordering on business logic in the database. I'm told that is bad.2) Change the webpage View (MVC page)Simplest way. When looping through the list, check the previous row.But I'm also told business logic in the view is bad.3) Some sort of new view model. But since the property is dependant on the previous object in a list I'm trying to find a 'nice' way of doing it.Best I managed so far...public class MyViewModel{ public List<MyObject> _list; public bool IsEditable(int index) { if (index > 0) { if (this._list[index-1].Complete) return true; return false; } else return true; }}All 3 of those will work.But recently I've been making more of an effort to do things 'properly' rather than just hacking together the quickest or the first thing i think of.-Or am I better scrapping the whole object and starting from scratch?I already have the list of pages taken from the database which was fine when we wanted to show everything and treat them equally. Now it's all screwed up..
Object property depend on previous objects in a list. How best to go about this?
c#;design;web development
null
_softwareengineering.273644
I'm using Apache Wicket for developing web apps, I have developed a few for the last year and it has been great; today I was looking at a few pages and most of them look like this:public class MyPage extends MyBasePage{ public MyPage() { constructUI(); } private void constructUI() { //first build all the models.. IModel aModel = new SomeModel(new SimeObject()); //then build forms, links and buttons like this Form myForm = new Form(myForm, aModel) { //notice that the submit logic is not implemented here private void onSubmit() { myForm_submit(); } } } private void myForm_submit() { //handle the form submision here, validation, service calling etc.. }}I usually handle the form submission and link/button clicking on separate methods in order to keep the constructUI method as short as possible, most pages have 1 or 2 grids, and 2-3 forms for different actions, for my specific needs and use cases this practice gives me these benefits:The UI can be constructed/deconstructed in different ways (to refactor the markup for example) without having to copy/paste/move-around the whole submit handling logicHaving *_submit, *_click methods allows to easily navigate the class using ctrl+o (on Eclipse) to filter and find methodsSo my question is, if there exists some software pattern that would allow to separate the constructUI logic into a different class, or something similar, because some pages are component heavy (even using custom components) and constructUI gets too large.
Pattern for separating UI code from logic in Wicket
java;design patterns;web;ui
null
_datascience.2308
I am looking for a thesis to complete my master M2, I will work on a topic in the big data's field (creation big data applications), using hadoop/mapReduce and Ecosystem ( visualisation, analysis ...), Please suggest some topics or project that would make for a good masters thesis subject.I add that I have bases in data warehouses, databases, data mining, good skills in programming, system administration and cryptography ... Thanks
Masters thesis topics in big data
bigdata;apache hadoop;research
Since it's a master's thesis, how about writing something regarding decision trees, and their upgrades: boosting and Random Forests? And then integrate that with Map/Reduce, together with showing how to scale a Random Forest on Hadoop using M/R?
_webmaster.18038
So I like to post a lot on forums. And often times, I'll link images. I usually use imgur as the image provider. But this thought just came into my head. Would it be a good idea to have the image on a page of site, thereby increasing my site ranking (right now, I'm the only person who's ever been on my site hah). So basically, instead of linking http://i.imgur.com/veCBW.p ng, I would link mysite.com/pagethatincludestheimage . And inside it would just contain img src of the image. It would basically appear exactly the same.Is this a decent idea? Is there any other way it may help my site? Btw, I also use amazon s3, so hotlinking will not be an issue.
Will doing this improve somewhat improve my site ranking (SEO)?
seo;images
null
_codereview.35212
There are some blocks in my code.To make it easier to read, I often append comment after the endsuch as end # end upto end # fileopenBecause sometimes the indentation still not easy for read.Is there any better practice?And my indentation is 2 spaces, is it OK for most Rubier ?require fakerreal_sn = 2124100000File.open(lib/tasks/tw_books.txt, r).each do |line| # author = Faker::Lorem.words(2..5).join(' ').gsub('-','').gsub('\s+','\s') 1.upto(1000).each do |x| location = Faker::Lorem.words(5) book_name, author, publisher, isbn, comment = line.strip.split(|||) ap(line) isbn = isbn[/\d+/] real_sn+=1 bk = Book.new(:sn => real_sn,:name => book_name, :isbn=>isbn, :price =>Random.rand(200..5000), :location=>location, :category=>[,,,].sample, :author => author, :sale_type => [:fix_priced, :normal, :promotion].sample, :publisher => publisher, :release_date => rand(10.years).ago, :comment => comment ) if bk.save() if (real_sn%100)==0 puts book_name end Sunspot.commit else puts real_sn puts bk.errors.full_messages end end # end uptoend # fileopen
How to make the end of loop more readable in Ruby
ruby
Compare your code to how I'd write it:require fakerreal_sn = 2124100000File.open(lib/tasks/tw_books.txt, r).each do |line|# author = Faker::Lorem.words(2..5).join(' ').gsub('-','').gsub('\s+','\s') 1.upto(1000).each do |x| location = Faker::Lorem.words(5) book_name, author, publisher, isbn, comment = line.strip.split(|||) ap(line) isbn = isbn[/\d+/] real_sn += 1 bk = Book.new( :author => author, :category =>[,,,].sample, :comment => comment, :isbn => isbn, :location =>location, :name => book_name, :price => Random.rand(200..5000), :publisher => publisher, :release_date => rand(10.years).ago, :sale_type => [ :fix_priced, :normal, :promotion ].sample, :sn => real_sn, ) if bk.save() puts book_name if ((real_sn % 100) == 0) Sunspot.commit else puts real_sn puts bk.errors.full_messages end endendPart of writing code is making it readable and maintainable. That means use indentation, vertical alignment, whitespace between operators, vertical whitespace to make changes in logic more obvious, etc. I sort hash keys alphabetically, such as the parameters for Book.new, especially when there's a lot of them. This makes it a lot easier to see if something is duplicated or missing. Since it's a hash it doesn't matter what order they're in as far as Ruby is concerned; Again this is for maintenance later on.The editor you use can help you immensely with this. I use gvim and Sublime Text, both of which allow me to easily reformat/reindent code I'm working on, and I take advantage of that often. It's a good first step when you have code from a foreign source that makes your eyes bug out. Reindent it, fix long, awkward sections, like your list of hash entries for Book.new, and the code will become more understandable.Also, your editor needs to have the ability to jump between matching delimiters like (), [], {} and do/end. gvim can do that plus jump through if/else/end plus rescue blocks. That ability to navigate REALLY helps keep your code flow clear.
_webmaster.91155
I have a brand new Blog-type website that will target a specific niche market. I plan to link affiliate products. At the moment I basically have a skeleton website as I get my colors, logos, ect. set up. There is no real content on my website yet as I am still writing my first blog posts. Should I make my website private until I have enough content for it to be worthwhile for a user to visit? While my website sits online with no content, is my SEO ranking being affected? I read from one answer on this site that the first few months are very important. If I should make it private, what is the best way to do this? Is there a right time to publish a website? Perhaps I am being too paranoid as Google and the like probably do not even know I exist yet.
Should I make my website private until I have a lot of content?
blog;ranking;google ranking
Generally I tell people not to worry too much about things that in the end do not matter. However, I do want to say that it is far better that a site that is reasonably formed and populated show up on the scene than one that is scant. Your site does not have to be huge. Just enough for a search engine to offer search users. Some say that is about 50 posts. I agree, but say why not offer more if you can?Even if you deploy your site with little in it, at the rate you can write content, search will not matter and you will not actually do any harm. Meaning, that as you develop your content, having little content becomes less of an issue as you go along and any effect of a smaller site disappears over time. By the time you reach a decent number of posts, any negative effect has long disappeared.I just prefer giving search engines and users something to chew on. I say do not sweat it unless you want to.
_unix.62379
The mount command in linux requires -t nfs4 in order to mount version 4 NFS shares, so I need to know beforehand which version it is.
Before mounting an NFS share, can the NFS client know if it's NFS v3 or v4?
nfs
Per: NFS version 3 and 4 with TCP/IP protocols, you could enter either of these commands:rpcinfo -p <hostname> |grep nfsrpcinfo -s <hostname> |grep nfs Note: All flavours of the command appear to support the -p argument, while the Solaris and GNU linux variants also support the -s variant.You could include some logic, based around the enquiry, into a shell script that instantiates a variable that could be pluged into a mount command e.g.nfsHost=11.22.33.44ARRAY=`rpcinfo -p $nfsHost |grep nfs |sed -e s/ [\s ]*/ /g -e s/^ // |cut -f2 -d `Ver=0for i in $ARRAY ; do if [ $i -gt $Ver ] ; then Ver=$i;fi;doneif [ $Ver -gt 0 ]then echo Host: $nfsHost supports NFS version $Ver; mount -o vers=$Ver...........fi
_codereview.197
I'm looking into Administration Elevation and I've come up with a solution that seems like it's perfectly sane, but I'm still in the dark about the professional methods to accomplish this.Is there a better way to do this or is this fine? using System;using System.Diagnostics;using System.Security.Principal;using System.Windows.Forms;namespace MyVendor.Installation{ static class Program { [STAThread] static void Main() { Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); if (!IsRunAsAdmin()) { Elevate(); Application.Exit(); } else { try { Installer InstallerForm = new Installer(); Application.Run(InstallerForm); } catch (Exception e) { //Display Exception message! Logging.Log.Error(Unrecoverable exception:, e); Application.Exit(); } } } internal static bool IsRunAsAdmin() { var Principle = new WindowsPrincipal(WindowsIdentity.GetCurrent()); return Principle.IsInRole(WindowsBuiltInRole.Administrator); } private static bool Elevate() { var SelfProc = new ProcessStartInfo { UseShellExecute = true, WorkingDirectory = Environment.CurrentDirectory, FileName = Application.ExecutablePath, Verb = runas }; try { Process.Start(SelfProc); return true; } catch { Logging.Log.Error(Unable to elevate!); return false; } } }}
Administration Elevation
c#;authorization
You can create a manifest file and set the app to require administrative privileges. This will trigger the UAC user prompt with the dimmed screen when your application is run without requiring any code on your part.See MSDN for the gory details:This file can be created by using any text editor. The application manifest file should have the same name as the target executable file with a .manifest extension.<?xml version=1.0 encoding=UTF-8 standalone=yes?><assembly xmlns=urn:schemas-microsoft-com:asm.v1 manifestVersion=1.0> <assemblyIdentity version=1.0.0.0 processorArchitecture=X86 name=<your exec name minus extension> type=win32/> <description>Description of your application</description> <!-- Identify the application security requirements. --> <trustInfo xmlns=urn:schemas-microsoft-com:asm.v2> <security> <requestedPrivileges> <requestedExecutionLevel level=requireAdministrator uiAccess=false/> </requestedPrivileges> </security> </trustInfo></assembly>
_unix.353538
I have a remote server (OpenStack infrastructure) which sometimes I need to schedule a reboot at midnight.So far I did like this:$ sudo -s$ at 23:59at> reboot nowat> ^dHowever this seems not to be safe... as it happened a few times that the server got into a state that it was not reachable anymore via ssh (and all other services were not working). Some kind of limbo, especially when scheduling a reboot after having installed system upgrades (i.e. kernel etc). All I can do in this cases is to hardly reboot it manually via the OpenStack interface.Is there a safer way to schedule one-time only reboots on such servers?Thanks!
Safely schedule on time reboot Ubuntu server
ubuntu;scheduling;reboot
null
_webmaster.82265
I have a site which has two sub domains. One is forum and another is listing site. The main site has privacy policy. Do I need to have separate privacy policy and term & conditions page for each sub-domain or every thing has to be added on root domain.
Privacy Policy and Terms of Use for subdomains in USA
subdomain;legal;privacy;terms of use;privacy policy
null
_unix.332422
Quite interested in the size of the kernel ring buffer, how much information it can hold, and what data types?
How to find out a linux kernel ring buffer size?
linux;kernel;linux kernel
Regarding the size, it's recorded in your kernel's config file. For example, on Amazon EC2 here, it's 256 KiB.# grep CONFIG_LOG_BUF_SHIFT /boot/config-`uname -r`CONFIG_LOG_BUF_SHIFT=18# perl -e 'printf %d KiB\n,(1<<18)/1024'256 KiB#Referenced in /kernel/printk/printk.c#define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)More information in /kernel/trace/ring_buffer.cNote that if you've passed a kernel boot param log_buf_len=N (check using cat /proc/cmdline) then that overrides the value in the config file.
_unix.211488
I am a newbie for Linux. I have CentOS 7. I am running a very simple script from BASH. I executed the chmod command and then ran the script file from a terminal. I got an error saying if command not found and that it is a syntax error. Can you please help me resolving this?#!/bin/bashclearecho Enter a number:read numberif[$num -eq 10]thenecho the number is 10elif[$num -lt 10]thenecho that number is less then 10elif[$num -gt 10]thenecho this is greater than 10elseecho the number is between 10 and 20fiOUTPUT:Enter a number:100./arya.sh: line 6: if[ -eq 10]: command not found./arya.sh: line 7: syntax error near unexpected token `then'./arya.sh: line 7: `then'
Terminal does not call 'then', reports a syntax error and command not found
linux;bash;shell script;centos
You need to put a least one space between [ / ] and anything next to them:if [ $num -eq 10 ][ is actually a bash command, an alias for the test command that evaluates your expressions, and the closing ] has to stand on its own so it won't be evaluated as part of the expression.
_unix.62290
I need to insert a hidden HTML input tag into any form tag within a bunch of HTML files. I assume this is possible with sed, but need help forming the command.My idea is to search for any instance of <formand if found, insert a line below it that contains:<input type=hidden name=csrf_token value=$csrf_token /> What's the best way to tackle this? I'm close withsed -e '/<form/a\<input type=hidden name=csrf_token value=$csrf_token/>'
Help inserting a new line of text after matching a line of text (sed)?
text processing;sed;awk
Got it. Here is how it's done:find . -name \*.html | xargs sed -i '/<form/a\<input type=hidden name=csrf_token value=$csrf_token />'
_codereview.41041
I am creating an application in Java that runs at scheduled intervals and it transfer files from one server to another server.For SFTP I'm using jSch, and my server and file details came from Database.My code is working fine but its performance is not good because I'm using too many loops in my code.Is there any way to increase performance of my code?public class FileTransferThread implements Runnable {public static final Logger log = Logger.getLogger(FileTransferThread.class.getName());private Session hibernateSession_source;private Session hibernateSession_destination;private List<nr_rec_backup_rule> ruleObjList = new ArrayList<>();private Map<String, List<String>> filesMap = new HashMap<>();private int i = 1;@Overridepublic void run() { try { hibernateSession_destination = HibernateUtilReports.INSTANCE.getSession(); // Getting Active rules from (nr_rec_backup_rule) Criteria ruleCriteria = hibernateSession_destination.createCriteria(nr_rec_backup_rule.class); ruleCriteria.add(Restrictions.eq(status, active)); List list = ruleCriteria.list(); for (Object object : list) { nr_rec_backup_rule ruleObj = (nr_rec_backup_rule) object; ruleObjList.add(ruleObj); } System.out.println(List of Rule Objs : + ruleObjList); getTargetServerAuthentication(); } catch (Exception e) { log.error(SQL ERROR ======== , e); } finally { hibernateSession_destination.flush(); hibernateSession_destination.close(); hibernateSession_source.flush(); hibernateSession_source.close(); }}private void getTargetServerAuthentication() throws Exception { if (ruleObjList.size() > 0) { JSch jsch = new JSch(); hibernateSession_source = HibernateUtilSpice.INSTANCE.getSession(); for (nr_rec_backup_rule ruleObj : ruleObjList) { //getting authentication details for backupserver from table contaque_servers String backupHost = ruleObj.getBackupserver(); Criteria crit = hibernateSession_source.createCriteria(contaque_servers.class); crit.add(Restrictions.eq(server_ip, backupHost)); ProjectionList pList = Projections.projectionList(); pList.add(Projections.property(machineUser)); pList.add(Projections.property(machinePassword)); pList.add(Projections.property(machinePort)); crit.setProjection(pList); Object uniqueResult = crit.uniqueResult(); if (uniqueResult != null) { Object[] serverDetails = (Object[]) uniqueResult; String backupUser = (String) serverDetails[0]; String backupPassword = (String) serverDetails[1]; int backupPort = (int) serverDetails[2]; //creating connection to backup server com.jcraft.jsch.Session sessionTarget = null; ChannelSftp channelTarget = null; try { sessionTarget = jsch.getSession(backupUser, backupHost, backupPort); sessionTarget.setPassword(backupPassword); sessionTarget.setConfig(StrictHostKeyChecking, no); sessionTarget.connect(); channelTarget = (ChannelSftp) sessionTarget.openChannel(sftp); channelTarget.connect(); System.out.println(Target Channel Connected); //Getting fileName from table contaque_recording_log using campName and Dispositions String[] split = ruleObj.getDispositions().split(, ); Criteria criteria = hibernateSession_source.createCriteria(contaque_recording_log.class); criteria.add(Restrictions.eq(campName, ruleObj.getCampname())); criteria.add(Restrictions.in(disposition, Arrays.asList(split))); criteria.setProjection(Projections.property(fileName)); List list = criteria.list(); for (Iterator it = list.iterator(); it.hasNext();) { String completeFileAddress = (String) (it.next()); if (completeFileAddress != null) { int index = completeFileAddress.indexOf(/); String serverIP = completeFileAddress.substring(0, index); String filePath = completeFileAddress.substring(index, completeFileAddress.length()) + .WAV; if (filesMap.containsKey(serverIP)) { List<String> sourceList = filesMap.get(serverIP); sourceList.add(filePath); } else { List<String> sourceList = new ArrayList<String>(); sourceList.add(filePath); filesMap.put(serverIP, sourceList); } } } //getting authentication details for source-server from table contaque_servers if (filesMap.size() > 0) { for (Map.Entry<String, List<String>> entry : filesMap.entrySet()) { String sourceHost = entry.getKey(); List<String> fileList = entry.getValue(); Criteria srcCriteria = hibernateSession_source.createCriteria(contaque_servers.class); srcCriteria.add(Restrictions.eq(server_ip, sourceHost)); ProjectionList pList1 = Projections.projectionList(); pList1.add(Projections.property(machineUser)); pList1.add(Projections.property(machinePassword)); pList1.add(Projections.property(machinePort)); srcCriteria.setProjection(pList1); Object uniqueResult1 = srcCriteria.uniqueResult(); if (uniqueResult1 != null) { Object[] srcServer = (Object[]) uniqueResult1; String srcUser = (String) srcServer[0]; String srcPassword = (String) srcServer[1]; int srcPort = (int) srcServer[2]; //creating connection to source server com.jcraft.jsch.Session sessionSRC = jsch.getSession(srcUser, sourceHost, srcPort); sessionSRC.setPassword(srcPassword); sessionSRC.setConfig(StrictHostKeyChecking, no); sessionSRC.connect(); ChannelSftp channelSRC = (ChannelSftp) sessionSRC.openChannel(sftp); channelSRC.connect(); System.out.println(Source Channel Connected); try { fileTransfer(channelSRC, channelTarget, ruleObj, fileList); } finally { channelSRC.exit(); channelSRC.disconnect(); sessionSRC.disconnect(); } } else { log.error(IN ELSE ======== Source server dosen't exists in table 'contaque_servers'); } } } } catch (JSchException e) { log.error(Error Occured ======== Connection not estabilished, e); } finally { if (channelTarget != null && sessionTarget != null) { log.error(exiting channel and session); channelTarget.exit(); channelTarget.disconnect(); sessionTarget.disconnect(); } else { log.error(Error Occured ======== Connection not estabilished); } } } } }}private void fileTransfer(ChannelSftp channelSRC, ChannelSftp channelTarget, nr_rec_backup_rule ruleObj, List<String> fileList) { for (String filePath : fileList) { System.out.println(i === + i++); int fileNameStartIndex = filePath.lastIndexOf(/) + 1; String fileName = filePath.substring(fileNameStartIndex); System.out.println(File Name : + fileName); System.out.println(File Path: + filePath); System.out.println(Backup Path : + ruleObj.getBackupdir() + fileName); try { InputStream get = channelSRC.get(filePath); channelTarget.put(get, ruleObj.getBackupdir() + fileName); } catch (SftpException e) { log.error(Error Occured ======== File or Directory dosen't exists === + filePath); } }}}
Creating a thread for file transfer
java;optimization;performance;multithreading
At face value it appears that there can be only one place where the major bottleneck is: the actual file transfer. Your code does the following:builds up a bunch of source files to copycreates a 'target' destination for the file copygoes through each sourcefor each source, it 'downloads' the files one at a timeas it downloads each file, it uploads it to the target.While this whole task may be running in a separate thread, it is by no means multi-threaded.The probable bottleneck here is the amount of CPU time required to decrypt the data from the source, and re-encrypt it to the destination.It is likely, also, that close behind the CPU bottleneck (perhaps even in front of it) is the network transfer speeds you can get in a single socket connection.I would suggest four things to do, and possibly a combination of them:try to set up a system where you can sfp direct from the source to the destination without needing to process the file in between. You have ssh access to them both, so it should not be that hard to create a script on the source, and run that script with some parameters that copies the file to the destination.Use BlowFish encryption algorithm for the transfer. It is rumoured that blowfish is faster than the other algorithms, and, by the sounds of it, it should be fine for your use case.Wrap the InputStream you get from jsch in a BufferedInputStreamspread the load of the decrypt/encrypt on multiple threads.The most effective option will be 1, but the most fun to write will be 4....Something like:create a method that takes the details required to copy a single file....public Boolean copyFile(Session source, Session target, String sourcefile, String targetfile) throws IOException { // connect to the source ..... // connect to the target ..... // get a BufferedInputStream on the source ..... // copy the stream to the target ..... return Boolean.TRUE; // success.}instead of populating a filesMap Map, do something like:ExecutorService threadpool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());List<Future<Boolean>> transfers = new ArrayList<>();.... final Session source = ......; final Session target = ......; final String sourcefile = ....; final String targetfile = ....; transfers.add(threadpool.submit(new Callable<Boolean>() { public Boolean call() throws IOException { return copyFile(source, target, sourcefile, targetfile); } });....// all copy actions are submitted now... so we wait for the threadpool.threadpool.shutdown(); // orderly shutdown, all tasks are completed.for (Future<Boolean> fut : transfers) { try { fut.get(); } catch (Exception ioe) { LOGGER.warn(Unable to transfer file: + ioe.getMessage(), ioe); }}// all copies have been attempted, in parallel.
_codereview.5745
This is primarily a container for quicksort and mergesort:#include c_arclib.cpptemplate <class T> class dynamic_array { private: T* array; T* scratch; public: int size; dynamic_array(int sizein) { size=sizein; array = new T[size](); } void print_array() { for (int i = 0; i < size; i++) cout << array[i] << endl; } void merge_recurse(int left, int right) { if(right == left + 1) { return; } else { int i = 0; int length = right - left; int midpoint_distance = length/2; int l = left, r = left + midpoint_distance; merge_recurse(left, left + midpoint_distance); merge_recurse(left + midpoint_distance, right); for(i = 0; i < length; i++) { if((l < (left + midpoint_distance)) && (r == right || array[l] > array[r])) { scratch[i] = array[l]; l++; } else { scratch[i] = array[r]; r++; } } for(i = left; i < right; i++) { array[i] = scratch[i - left]; } } } int merge_sort() { scratch = new T[size](); if(scratch != NULL) { merge_recurse(0, size); return 1; } else { return 0; } } void quick_recurse(int left, int right) { int l = left, r = right, tmp; int pivot = array[(left + right) / 2]; while (l <= r) { while (array[l] < pivot)l++; while (array[r] > pivot)r--; if (l <= r) { tmp = array[l]; array[l] = array[r]; array[r] = tmp; l++; r--; } } if (left < r)quick_recurse(left, r); if (l < right)quick_recurse(l, right); } void quick_sort() { quick_recurse(0,size); } void rand_to_array() { srand(time(NULL)); int* k; for (k = array; k != array + size; ++k) { *k=rand(); } } };int main() { dynamic_array<int> d1(10); cout << d1.size; d1.print_array(); d1.rand_to_array(); d1.print_array(); d1.merge_sort(); d1.print_array(); }
Dynamic array container
c++;array
My first comment is its named badly.dynamic_array implies that I can use [] operator on it and get a value out.You have owned RAW pointers in your structure.private: T* array; T* scratch;First this means you need to look up RAII to make sure these members are correctly deleted.Second you you need to look up the rule of three (or 5 in C++11) to make sure they are copied correctly.You have owned RAW pointers in your structure. This means you need to correctly manage the object as a resource. This means constructions/destruction/copy (creation and assignment) need to be taken care of correctly.Either do this manually or use a standard container that will do it for you. I suggest a standard container.void print_array() { for (int i = 0; i < size; i++) cout << array[i] << endl; }If you are going to write print_array at least write it so that it can use alternative stream (not just std::cout). Then write the output operator.std::ostream& operator<<(std::ostream& stream, dynamic_array const& data){ data.print_array(stream); // After you fix print_array return stream;}Also note that a method that access data but does not modify the state of the object should be marked const. So the signature should be: void print_array() constAre the following members really part of the array?void merge_recurse(int left, int right)int merge_sort()void quick_recurse(int left, int right)OK. Lets assume they are for now.Then void merge_recurse(int left, int right) should be a private member. There should be no reason to call this from externally. scratch = new T[size](); if(scratch != NULL)scratch will Never be NULL.I think merge (in merge_recurse) is easier to write than you are making it: int index = 0; int l = left; int r = midpoint; while((l < midpoint) && (r < right)) { scratch[index++] = (array[l] > array[r])) ? array[l++] : array[r++]; } // One of the two ranges is empty. // copy the other into the destination. while(l < midpoint) { scratch[index++] = array[l++]; } while(r < right) { scratch[index++] = array[r++]; }You should only call srand() once in an application:void rand_to_array() { srand(time(NULL));By putting srand() inside the structure you are opening it up to be called multiple times. Call it once just after main() then don't call it again.When you can use the standard tools: tmp = array[l]; array[l] = array[r]; array[r] = tmp;Can be replaced with: std::swap(array[l], array[r]);I am relatively sure these two are wrong: while (l <= r) if (l <= r) They should be: while (l < r) if (l < r)
_unix.186403
Given a string and a block of strings e.g.String:Use three words.Block:This is the first string of another block of strings.This is the second string of another block of strings.This is the third string of another block of strings.Now I want to join/weave the string and the block word by line such that the new block looks like this:This is the first string of another block of strings.UseThis is the second string of another block of strings.threeThis is the third string of another block of strings.words.What I do so far is:'<,'>s/\s/\r\r\rwhere '<,'>s is a range spanning the string Use three words.. This will give me each word of the string on a new line:Usethreewords.Then I use Ctrl+v to select the block, copy it and paste it such that I get:This is the first string of another block of strings.Use This is the second string of another block of strings.three This is the third string of another block of strings.words.And the I manually bring it into the shape I need with a lot if v, w and x usage.How can I do this more efficiently with simple copy and paste instructions in vim?
How to weave the words of a string into a block of strings in vim?
text processing;vim;editors
You can drop-ship text from the cut buffer with swap-pasting -- pasting into a selection swaps, so dwVP line-deletes everything but the deleted word.Start withUse three words.This is the first string of another block of strings.This is the second string of another block of strings.This is the third string of another block of strings.and do :normal ggdd three-word line in the cut buffer, cursor on first This line:normal pdwVPo<ESC>j dwVP is cut a word and exchange-paste it back for the rest of the line:normal pdwVPo<ESC>j do it again:normal pdwVPo<ESC>j againFor just three I wouldn't qq that but ggddqqpdwVPo<ESC>jq@q@@ is shorter.
_unix.284598
There is a lot of solution here to execute a script at shutdown/reboot, but I want my script to only execute at shutdown.I've tried to put my script in /usr/lib/systemd/systemd-shutdown, and check the $1 parameter, as seen here, but it doesn't work.Any ideas ?system : archlinux with gnome-shell$systemctl --version systemd 229+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN
Systemd : How to execute script at shutdown only (not at reboot)
linux;arch linux;systemd;shutdown
I've finally found how to do that.It's a bit hackish thought, but it works.I've used some part of this thread : https://stackoverflow.com/questions/25166085/how-can-a-systemd-controlled-service-distinguish-between-shutdown-and-rebootand this thread : How to run a script with systemd right before shutdown?I've created this service /etc/systemd/system/shutdown_screen.service[Unit]Description=runs only upon shutdownConflicts=reboot.targetAfter=network.target[Service]Type=oneshotExecStart=/bin/trueExecStop=/bin/bash /usr/local/bin/shutdown_screenRemainAfterExit=yes[Install]WantedBy=multi-user.targetWhich will be executed at shudown/reboot/halt/whatever.(don't forget to enable it) And in my script /usr/local/bin/shutdown_screenI put the following :#!/bin/bash# send a shutdown message only at shutdown (not at reboot) /usr/bin/systemctl list-jobs | egrep -q 'reboot.target.*start' || echo shutdown | nc 192.168.0.180 4243 -w 1Which will send a shutdown message to my arduino, whom will shutdown my screen.
_unix.58896
I would like to copy my settings from my desktop to my laptop. I am running KDE on Arch. I am not sure what to do with ~/.config, ~/.local, and ~/.kde4 since they have subdirectories with names that match my desktop hostname. If I naively copy everything, I get all sorts of errors/warning when logging in and trying to open my email/calendar/akonadi.
How to copy settings from one machine to another?
kde;home;migration
This is a really, really lame (non-)feature of KDE. ~/.config and ~./local actually do not have anything to do with it -- they are XDG standard filesystem hierarchy things used by various independent applications, not KDE.After you install, get out of X (so KDE is not running) and try copying just your old ~/.kde/share/config in, then restart X.If you have a hard time stopping X because of XDM and system services, you could try doing it in a VT while KDE is still loaded, just do not go back to X from the VT -- kill it on the command line to force a re-start (or just plain halt and reboot).
_unix.174764
What meaning does Xserver access control have, when the Xserver is started with tcp disabled:/usr/bin/X11/X -nolisten tcpAFAIU, Xserver can be used to allow remote network connection. But, if it is used only locally, is the access control meaningless?Do these access permissions only have meaning, when the Xserver is listening on public IP interface, i.e. 0.0.0.0, as seen with netstat -lptun ?Further, when I run xhost, I see following output:$ xhostaccess control enabled, only authorized clients can connectWhere do these settings come from? (I have not configured anything). Is there some config file in /etc that contains access control permissions?Is there any security issue, when I run Xephyr on top of my Xserver as a a different user? Is this secure?Xephyr -screen 1920x1054 :1 &DISPLAY=:1 su - nobody -c 'startlxde'
what does Xserver access control mean
security;x11;xorg;x server;access control
null
_webmaster.26774
Possible Duplicate:How to find web hosting that meets my requirements? I'm searching for hosting for the back-end and web client of an application that uses node.js and mongodb on the back-end and PHP on the web client.There are many options for hosting node:HerokuNodesterJoyentFor PHP nearly all hosting options are able of rendering PHP, for mongodb the node hostings allow databases as well.The project will have low usage, would it be better to use a VPS hosting where I can install all the software needed (PHP, Node and Mongo) like AmazonsEC2 micro instances?Are there any good alternatives to Amazon EC2?
Specific hosting or virtual machine?
web hosting;php;looking for hosting;amazon ec2;node js
null
_webmaster.12079
I have a websites related to cricket and advertising. How can I apply for google adsense?
How to get a new Adsense account?
google adsense
null
_codereview.121286
This is a jQuery function that controls a <div> to expand up or expand down. I am trying to simplify and optimize these lines of codes.ScenerioBy default, it does not have any classes in <div class=title-wrapper>.When I clicked on <div class=title-wrapper>, it adds .expand-up into this <div> if there .expand-up class. When I clicked on <div class=title-wrapper> it should remove .expand-up from <div> and only remove if there is .expand-down.$(document).ready(function() { $('.title-wrapper').click(function() { $(this).parent().toggleClass('active').delay('1500').promise().done(function() { var filterSearch = $(this).children('.title-wrapper'); // Check if have expand-up classes if (filterSearch.hasClass('expand-up')) { filterSearch.removeClass('expand-up'); filterSearch.addClass('expand-down'); } else { // Remove expand down if the class is existed if (filterSearch.hasClass('expand-down')) { filterSearch.removeClass('expand-down'); } filterSearch.addClass('expand-up'); } }); });});body { font-size: 62.5%; font-family: 'Roboto', sans-serif; color: #FFF;}.filter-search { background-color: #a78464;}.filter-search .wrapper { width: 100%; text-align: center; height: 10em; transition: height linear 1s;}.filter-search .wrapper .title-wrapper { display: inline-block;}.filter-search .wrapper .title-wrapper h2 { font-size: 3em; margin-bottom: 0;}.filter-search .wrapper .title-wrapper span { font-size: 1.4em; display: block; text-transform: uppercase; margin-bottom: 2em;}.filter-search .wrapper.active { height: 15em; transition: height linear 1s;}.filter-search .expand-up { -webkit-animation: moveUp ease-in 1; animation: moveUp ease-in 1; animation-fill-mode: forwards; animation-duration: .5s;}.filter-search .expand-down,.filter-search .expand-up { -webkit-animation-fill-mode: forwards; -webkit-animation-duration: .5s;}.filter-search .expand-down { -webkit-animation: moveDown ease-out 1; animation: moveDown ease-out 1; animation-fill-mode: forwards; animation-duration: .5s;}@-webkit-keyframes moveUp { 0% { margin-top: 0; } to { margin-top: -1em; } ;}@keyframes moveUp { 0% { margin-top: 0; } to { margin-top: -1em; } ;}@-webkit-keyframes moveDown { 0% { margin-top: -1em; } to { margin-top: 0; } ;}@keyframes moveDown { 0% { margin-top: -1em; } to { margin-top: 0; } ;}<link href='https://fonts.googleapis.com/css?family=Open+Sans:400,300,300italic,400italic,600,600italic,700,700italic,800,800italic' rel='stylesheet' type='text/css'><link href=https://fonts.googleapis.com/icon?family=Material+Icons rel=stylesheet><script src=https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js></script><!-- Coding Start Here --><section id=filter-search class=filter-search> <div class=wrapper> <i class=material-icons>search</i> <div class=title-wrapper> <h2>Filter Search</h2> <span>Click to expand</span> </div> </div></section>
Expanding up or down
javascript;performance;jquery
It will be at least as reliable to test whether the parent element hasClass('active'), as that class will be toggled synchronously in response to a click.You can also benefit, syntactically, from chaining .removeClass() and .addClass() (or reducing to add/remove a single class).$(document).ready(function() { $('.title-wrapper').click(function(e) { e.preventDefault(); var $filterSearch = $(this), $wrapper = $(this).parent().toggleClass('active'); $wrapper.stop().delay(1500).promise().done(function() { if ($wrapper.hasClass('active')) { $filterSearch.removeClass('expand-down').addClass('expand-up'); } else { $filterSearch.removeClass('expand-up').addClass('expand-down'); } }); }).addClass('expand-down');});body { font-size: 62.5%; font-family: 'Roboto', sans-serif; color: #FFF;}.filter-search { background-color: #a78464;}.filter-search .wrapper { width: 100%; text-align: center; height: 10em; transition: height linear 1s;}.filter-search .wrapper .title-wrapper { display: inline-block;}.filter-search .wrapper .title-wrapper h2 { font-size: 3em; margin-bottom: 0;}.filter-search .wrapper .title-wrapper span { font-size: 1.4em; display: block; text-transform: uppercase; margin-bottom: 2em;}.filter-search .wrapper.active { height: 15em; transition: height linear 1s;}.filter-search .expand-up { -webkit-animation: moveUp ease-in 1; animation: moveUp ease-in 1; animation-fill-mode: forwards; animation-duration: .5s;}.filter-search .expand-down,.filter-search .expand-up { -webkit-animation-fill-mode: forwards; -webkit-animation-duration: .5s;}.filter-search .expand-down { -webkit-animation: moveDown ease-out 1; animation: moveDown ease-out 1; animation-fill-mode: forwards; animation-duration: .5s;}@-webkit-keyframes moveUp { 0% { margin-top: 0; } to { margin-top: -1em; } ;}@keyframes moveUp { 0% { margin-top: 0; } to { margin-top: -1em; } ;}@-webkit-keyframes moveDown { 0% { margin-top: -1em; } to { margin-top: 0; } ;}@keyframes moveDown { 0% { margin-top: -1em; } to { margin-top: 0; } ;}<link href='https://fonts.googleapis.com/css?family=Open+Sans:400,300,300italic,400italic,600,600italic,700,700italic,800,800italic' rel='stylesheet' type='text/css'><link href=https://fonts.googleapis.com/icon?family=Material+Icons rel=stylesheet><script src=https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js></script><!-- Coding Start Here --><section id=filter-search class=filter-search> <div class=wrapper> <i class=material-icons>search</i> <div class=title-wrapper> <h2>Filter Search</h2> <span>Click to expand</span> </div> </div></section>Notes: By testing the synchronously toggled element, you will find this solution not to get confused by rapid multiple clicks. .stop() prevents the accumulation of delay. The title-wrapper div needs to be initialised with expand-down, otherwise there's a strange glitch after first click.
_webmaster.19804
I have one domain, abc.com and for some reson I want to Install one application which require tomcat service. Current hosting is for php and apache only.Is it possible to host java.abc.com to another hosting and how it can be done?
Can we host subdomains to diffrent hosting provider?
web hosting;domains
null
_cs.66020
For my science fair project, I implemented an optimization to Python's sort routine. The idea is to move the safety checks that have to be carried out during each comparison, e.g. type checks and character-width checks, outside of the sort loop and just get them all done in one pass. An optimized comparison function is then selected from a portfolio based on the results of the checks. So, for example, if the checks determine that all the objects are of the same type, the selected comparison function can skip the usually-required are the object types compatible check. Etc.I have to write this up as a paper, and am currently working on a literature review. Are there any papers describing similar techniques in other dynamic languages/generally?
Reference request: optimizing procedures on lists in dynamic languages by performing safety checks in advance
reference request;type checking;program optimization;interpreters
I'm not aware of anything exactly like this, but there are some things that are arguably related.For specifically sorting this is related to the Schwartzian transform, though with a very different goal. In the Schwartzian transform, you run through the input applying an expensive function and pairing the input and output together, then sorting on the output. This is in contrast to performing that expensive function on each operation. In your case, your expensive function would be the type checks and the dynamic dispatches. A bit differently you would be checking a property for the whole list as well and then choosing which comparison operation to use based on that.In a totally different vein, there's a general technique called polymorphic inline caching (pioneered by the Self team and covered, among many other things, in Craig Chamber's thesis) and more generally adaptive optimization that is used in some virtual machines. Polymorphic inline caching solves the problem that if we do a dynamic dispatch, then we are jumping to some completely unknown code, and thus we can't inline it and optimize it and the current function. The solution is simple: just do an if to test if we are in some specific case, and if so, we can inline that code, else we do the dynamic dispatch. The problem is there is an unbounded, unknown number of possible cases. This isn't a problem, though, for a Just-In-Time (JIT) compiler which can just do this for the cases actually seen at runtime.This doesn't solve your problem since dynamic dispatch is based on the runtime class of an object, not on some arbitrary predicate like all the elements of this array have the same type. This is where adaptive optimization comes in and things like tracing JIT compilers. It's quite conceivable that unrolling a loop a few times or inlining a couple levels of recursion can lead to many type checks being eliminated with simple constant propagation style optimizations, and possibly entirely eliminated by more sophisticated optimizations in some cases. Nevertheless, it will often not do the same thing as you are suggesting and would need to see a trace first for each use of the sort function. On the other hand, if it knows all the elements are numbers, say, from earlier code, it can eliminate checking entirely.
_unix.317218
How can I keep something always compiling on a spare machine?As it's just for looks, the more complex looking the better. I don't care what it is, just so long as it doesn't require input on my part, and it repeats forever.I'll be using some flavor of Ubuntu.Thanks in advance!
How can I keep something, anything, compiling forever?
compiling
Are you just looking for something looking busy? Don't care about any productive output? Check out hollywood. There is a link here talking about it, and spotting it in the wild.
_webapps.36679
Basically, I am adding a Google Hangout button to my Contact page. The idea is for my visitors to be able to press the button and for it to initiate a Google Hangout with me. Is this possible?Is there anything that makes this possible?Thank you!
Adding a Google Hangout button to my site. When visitors click it, is there anyway for it to initiate a hangout with me?
google;google hangouts
After more research, doesn't look like this feature is available yet.http://productforums.google.com/forum/#!topic/google-plus-discuss/-cJeQltplhE
_unix.109243
I am trying to type this comand but for some reason it takes the first and not the second version.I have entered both the host names in the host file, FYI.This command works (hostname zq13c1):mkcifsmnt -f /aix_bk5 -d AIX -h zq13c1 -c 'aix_user' -p 'Cognizant123' -u 214 -g 204 -t rw This command does not work (hostname zq13c1_bk):mkcifsmnt -f /aix_bk5 -d AIX -h zq13c1_bk -c 'aix_user' -p 'Cognizant123' -u 214 -g 204 -t rw
AIX cifs hostname with underscore
aix;command;hostname;cifs
Quoting from this wiki article:The Internet standards (Request for Comments) for protocols mandate that component hostname labels may contain only the ASCII letters 'a' through 'z' (in a case-insensitive manner), the digits '0' through '9', and the hyphen ('-'). The original specification of hostnames in RFC 952, mandated that labels could not start with a digit or with a hyphen, and must not end with a hyphen. However, a subsequent specification (RFC 1123) permitted hostname labels to start with digits. No other symbols, punctuation characters, or white space are permitted.The underscore _ is not a valid character in a hostname.
_webmaster.37698
I am building a social media site that is similar is structure to twitter and facebook.com where unauthenticated users who go to https://mysite.com will see a login + sign-up page, and authenticated users who go to https://mysite.com will see their timeline.My question is, what is the best practice (using Google Analytics) for tracking these two different types of users who are viewing completely different content but are visiting the same URL.I tried searching the Google Analytics docs but couldn't find what they suggested for this scenario. Perhaps I just don't know what keywords to search for.
Tracking logged in vs. non-logged in users in Google Analytics
google analytics;javascript
I finally found it in the Google Analytics Docs:Use session-level custom variables to distinguish different visitor experiences across sessions.For example, if your website offers users the ability to login, you can use a custom variable scoped to the session level for user login status. In that way, you can segment visits by those from logged in members versus anonymous visitors._gaq.push(['_setCustomVar', 1, // This custom var is set to slot #1. Required parameter. 'User Type', // The name of the custom variable. Required parameter. 'Member', // Sets the value of User Type to Member or Visitor depending on status. Required parameter. 2 // Sets the scope to session-level. Optional parameter.]);
_unix.348848
I want to print output between two pattern and first pattern should be second time matching in file.Example - test.txtstart onetext_1 endstart twotext_2end start threetext_3endHere first pattern is start and second pattern is end. Pattern start should be second time pattern matching in file.Then output should be start twotext_2end
Print between two patterns only when the first pattern occurs for the second time
text processing;awk;sed
null
_unix.203449
#!/bin/bashsearch_string=\/sbin\/iptables -A INPUT -p tcp --dport 12443 -j ACCEPT;delimeters=$(cat /root/firewall/firewall.txt);sed -i s/$search_string/$delimeters$search_string/ /root/result.txtI want to add the contents of the /root/firewall/firewall.txt into /root/result.txt file before a line which is saved in search_string variable.If /root/firewall/firewall.txt contains one line above script works. But if the firewall.txt contains multiple lines, script breaks as:sed: -e expression #1, char 64: unterminated `s' commandI think, new line characters causing the problem but I could not properly backslash it.search_string=\/sbin\/iptables -A INPUT -p tcp --dport 12443 -j ACCEPT;delimeters=$(cat /root/firewall/firewall.txt);replaced= $delimeters | sed -r 's/\\n/\\\\n/g'sed -i s/$search_string/$replaced$search_string/ /root/result.txt How can I fix this issue?
Adding File Text With Sed
sed
(This overlaps somewhat with some of the other answers.)Im somewhat confused. You say,I want to add the contents of the firewall.txt file into the result.txt file before a line which is saved in search_string variable.OK, first of all, if they arent an essential part of the question,full pathnames (/root/) clutter your question and add no value.Just use simple filenames. After all,I hope youre debugging this in a local directory, not running as root.Secondly, what is useful is giving some example data.Not the 2000 lines that you actually have in your actual files,but a handful of lines.(Again: you are debugging this on test files, arent you?)For example, say result.txt containsOne, two,/sbin/iptables -A INPUT -p tcp --dport 12443 -j ACCEPTBuckle my shoe.and firewall.txt containsHumpty Dumpty sat on a wall.You say,If firewall.txt contains [only] one line, the script#!/bin/bashsearch_string=\/sbin\/iptables -A INPUT -p tcp --dport 12443 -j ACCEPT;delimeters=$(cat /root/firewall/firewall.txt);sed -i s/$search_string/$delimeters$search_string/ /root/result.txtworks.(By the way, you dont need the semicolons at the ends of the lines,and the word delimiter is spelled with the word limit in the middle.)Well, the above produces this result:One, two,Humpty Dumpty sat on a wall./sbin/iptables -A INPUT -p tcp --dport 12443 -j ACCEPTBuckle my shoe.Is that really what you want?Because thats not what most people think when you say,add [text] into [a] file before a line,especially when you start talking about the text to be insertedbeing more than one line,and especially since you said the linebreaks should be still therethe same way as in the firewall.txt.Ill assume that you really wantOne, two,Humpty Dumpty sat on a wall./sbin/iptables -A INPUT -p tcp --dport 12443 -j ACCEPTBuckle my shoe.If you want the last line of the firewall.txt fileconcatenated with the /sbin/iptables line,please explain more precisely what you want.You say,search_string=\/sbin\/iptables -A INPUT -p tcp --dport 12443 -j ACCEPT;delimeters=$(cat /root/firewall/firewall.txt);replaced= $delimeters | sed -r 's/\\n/\\\\n/g'sed -i s/$search_string/$replaced$search_string/ /root/result.txtWell, thats nonsense; the third line responds-bash: Humpty Dumpty sat on a wall.: command not foundPerhaps you meantreplaced=$(echo $delimeters | sed -r 's/\\n/\\\\n/g')?OK, even if you had saidreplaced=$(echo $delimeters | sed -r 's/\\n/\\\\n/g')it wouldnt have done any good, because sed normally worksa line at a time, and it isnt going to see newlines as characters in lines(even if its input is coming from a shell variablethat has a multi-line value;thats no different from a file with multiple lines).What does work (eliminating a useless use of cat) isreplaced=$(sed 's/$/\\/' firewall.txt)sed s/$search_string/$replaced$search_string/ result.txtusing sed 's/$/\\/' to add a backslashat the end of every line in firewall.txt.You need to type a newline (Enter) after $replacedbecause the last newline gets stripped offwhen you do the replaced=$() command substitution.And, just for simplicitys sake,if you want to leave the /sbin/iptables command untouched,you might want to consider changing the final command tosed s/$search_string/$replaced&/ result.txtusing & in the replacement string to sayinsert here the text that was found by the search string (regex).While the s command will allow you to insert entire lines,it wasnt really meant for that. There are a, i, and c commandsfor inserting one or more entire lines from the command string.But, since your insertion text is coming from a file,it makes the most sense to look at the r (read) command.As a first cut,sed /$search_string/r firewall.txt result.txtwill do almost what you want. Almost.Unfortunately, it will inject the contents of the firewall.txt fileafter the /sbin/iptables line.I was able to find a workaround(to get the contents of firewall.txt before the /sbin/iptables line),but its grievously complicated:sed -n -e /$search_string/{s/^/+/; h; r firewall.txtn} -e 'x; s/^+//p; s/.*//; x; p' result.txtApparently sed wont recognize ; as a delimiter to end a filename,so, when we say rfirewall.txt, we must type Enter.Here we go:-n: Dont write output except as commanded by p or r commands./$search_string/{}: For each line matching $search_string(/sbin/iptables ), do the following:s/^/+/: Insert a + at the beginning of the line(creating +/sbin/iptables ).This flags the line as being a match for the search string.(Ill get back to that.)h: Copy the pattern space (+/sbin/iptables ) to the hold space.r firewall.txt: Read the firewall.txt file(and write its contents).n: Stop processing this line and read the next one.Then, for every other line (those not matching $search_string),do the following:x: Exchange the contents of the hold and pattern spaces.I.e., the line we just read (the one not matching $search_string)goes into the hold space,and we copy into the pattern space the previously held line(which might be +/sbin/iptables and might be blank).s/^+//p: If the line is a saved match of the search string(i.e., it is a flagged line containing +/sbin/iptables ),strip off the + and print the rest.Otherwise, print nothing.s/.*//: Wipe out the line (replace everything with nothing).I would have liked to do d (delete) here,but that terminates processing of the current line.x: Exchange the contents of the hold and pattern spaces again.Move the blank line from the pattern space into the hold space,and retrieve the line from result.txt that we just stashed there.And finally p: Print the line from result.txt.In short,When we find a line matching $search_string (i.e., /sbin/iptables ),we save it in the hold space (without printing it),and read (and print) the firewall.txt file.For every other (i.e., non-matching) line,we pull the saved line (if any) out of the hold space and print it,and then print the current line.Argh! This fails if /sbin/iptables occurs on the last line,because it gets saved in the hold space,but theres no subsequent non-matching line to trigger its extraction.So lets just make sure that /sbin/iptables never occurs on the last line,by adding a dummy line at the end, and then strategically removing it.echo >> result.txtsed -n -e /$search_string/{s/^/+/; h; r firewall.txtn} -e 'x; s/^+//p; $d; s/.*//; x; p' result.txtThe $d causes the last line to be deleted.(We could use $q and get the same effect.)This does work if there are multiple iptables lines.But, yes, it is getting to be something of a kludge.I guess thesed s/$search_string/$replacednewline&/answer isnt looking so bad now.
_codereview.168730
I want to perform a binary search on a continuous unimodal function (x)=y, where x and y are real numbers. I'm not looking up values in an array, and so I don't have clean integer inputs that I'm stepping along.My original attempt at a function (written in JavaScript) has the problem that when the input value you are looking for happens to sit on a boundary chosen by the algorithm, the algorithm continues to run until the numeric precision is exhausted:/** * y: the target output value to find a matching input * minX: the smallest input value * maxX: the largest input value * : a function that returns a value given an X * : the threshold to compare outputs versus the target (default:0) */function binarySearch(y, minX, maxX, , ) { if (===undefined) =0; let m=minX, n=maxX, k, v, ; while (m<=n) { k = (n+m)/2; v = (k); = y-v; if (Math.abs()<=) return k; if (>0) m = k; else n = k; } if (Math.abs(y-(m))<=) return m; if (Math.abs(y-(n))<=) return n;}With the above, the call binarySearch( 0, 0, 10, n=>n ) will run 1077 iterations until m=0 and n=5e-324 before (m+n)/2 is finally so close to 0 that even with =0 the JavaScript interpreter cannot tell the difference.A hack I was going to use is to provide a minimum step function that modifies each boundary by a fixed amount (similar to array index +/- 1). This forces the boundary to move faster, but also requires an >0 in case the boundary overshoots the value. It feels gross:// step: a minimum amount to move each boundary each timefunction binarySearch(y, minX, maxX, , , step) { if (===undefined) =0; if (step===undefined) step=0; let m=minX, n=maxX, k, v, ; while (m<=n) { k = (n+m)/2; v = (k); = y-v; if (Math.abs()<=) return k; if (>0) m = k+step; n = k-step; } if (Math.abs(y-(m))<=) return m; if (Math.abs(y-(n))<=) return n;}A ~clean fix is to check the boundaries on every pass by moving the final two if statements into the while loop. This calls () three times as often each pass, and so seems inelegant./** * y: the target output value to find a matching input * minX: the smallest input value * maxX: the largest input value * : a function that returns a value given an X * : the threshold to compare outputs versus the target (default:0) */function binarySearch(y, minX, maxX, , ) { if (===undefined) =0; let m=minX, n=maxX, k, v, ; while (m<=n) { k = (n+m)/2; v = (k); = y-v; if (Math.abs(y-(m))<=) return m; if (Math.abs(y-(n))<=) return n; if (Math.abs()<=) return k; if (>0) m = k; else n = k; }}It smells to me like there ought to be an elegant solution, some sort of fencepost I'm not thinking of, that fixes this efficiently and elegantly.
Binary search on real space
javascript;binary search
null
_webmaster.18399
I see from WMT that my site has 4 (404) crawl errors, each Linked from 2 separate pages. As this is a small directory listings site it would be difficult to delete the missing URLs from the db each time it happens. Does Google penalise me for this in any way?
Crawl Errors (404) Showing Up in WMT
google search console;googlebot
null
_unix.171190
I've installed Red Hat 6.5 and see that Ctrl + V does not work. It just prints ^V in the console instead of pasting from the clipboard. What can be wrong? How can I enable pasting using Ctrl + V?
How enable ctrl + v paste in redhat?
terminal
null
_scicomp.8232
I'm new to opencl but I have some experience using HLSL. In HLSL multiple passes are used when you need to finish a computation before moving on to the next step.I would like to know how this sort of thing is done in opencl.I am writing an image filter as belowfloat4 Convolution(__read_only image2d_t srcImg, int2 point, float * kern){ const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_LINEAR; int maskSize = 1; float4 sum = (float4)(0.0f,0.0f,0.0f,0.0f); for (int i = -maskSize; i <= maskSize; i++) { for(int j = -maskSize; j <= maskSize; j++) { int2 delta = (int2)(i+maskSize,j+maskSize); int2 pos = (int2)(i,j); sum += kern[(delta.y*3) + delta.x] * convert_float4(read_imageui(srcImg, smp, point + pos)); } } return sum;}__kernel void imagingTest(__read_only image2d_t srcImg, __write_only image2d_t dstImg){ float k = 30.0L; float delta_t = 0.14285714285714285714285714285714L; // 1/7 float hN[9]; hN[0] = 0; hN[1] = 1; hN[2] = 0; hN[3] = 0; hN[4] =-1; hN[5] = 0; hN[6] = 0; hN[7] = 0; hN[8] = 0; float hS[9]; hS[0] = 0; hS[1] = 0; hS[2] = 0; hS[3] = 0; hS[4] =-1; hS[5] = 0; hS[6] = 0; hS[7] = 1; hS[8] = 0; float hE[9]; hE[0] = 0; hE[1] = 0; hE[2] = 0; hE[3] = 0; hE[4] =-1; hE[5] = 1; hE[6] = 0; hE[7] = 0; hE[8] = 0; float hW[9]; hW[0] = 0; hW[1] = 0; hW[2] = 0; hW[3] = 1; hW[4] =-1; hW[5] = 0; hW[6] = 0; hW[7] = 0; hW[8] = 0; float hNE[9]; hNE[0] = 0; hNE[1] = 0; hNE[2] = 1; hNE[3] = 0; hNE[4] =-1; hNE[5] = 0; hNE[6] = 0; hNE[7] = 0; hNE[8] = 0; float hSE[9]; hSE[0] = 0; hSE[1] = 0; hSE[2] = 0; hSE[3] = 0; hSE[4] =-1; hSE[5] = 0; hSE[6] = 0; hSE[7] = 0; hSE[8] = 1; float hSW[9]; hSW[0] = 0; hSW[1] = 0; hSW[2] = 0; hSW[3] = 0; hSW[4] =-1; hSW[5] = 0; hSW[6] = 1; hSW[7] = 0; hSW[8] = 0; float hNW[9]; hNW[0] = 1; hNW[1] = 0; hNW[2] = 0; hNW[3] = 0; hNW[4] =-1; hNW[5] = 0; hNW[6] = 0; hNW[7] = 0; hNW[8] = 0; const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_LINEAR; int2 coord = (int2)(get_global_id(0), get_global_id(1)); uint4 bgra = read_imageui(srcImg, smp, coord); float4 nablaN = Convolution(srcImg, coord, hN); float4 nablaS = Convolution(srcImg, coord, hS); float4 nablaE = Convolution(srcImg, coord, hE); float4 nablaW = Convolution(srcImg, coord, hW); float4 nablaNE = Convolution(srcImg, coord, hNE); float4 nablaNW = Convolution(srcImg, coord, hNW); float4 nablaSE = Convolution(srcImg, coord, hSE); float4 nablaSW = Convolution(srcImg, coord, hSW); float4 cN = exp(-(nablaN /k) * (nablaN /k)); float4 cS = exp(-(nablaS /k) * (nablaS /k)); float4 cW = exp(-(nablaW /k) * (nablaW /k)); float4 cE = exp(-(nablaE /k) * (nablaE /k)); float4 cNE = exp(-(nablaNE/k) * (nablaNE/k)); float4 cSE = exp(-(nablaSE/k) * (nablaSE/k)); float4 cSW = exp(-(nablaSW/k) * (nablaSW/k)); float4 cNW = exp(-(nablaNW/k) * (nablaNW/k)); float4 sum = 0.5 * (nablaNE * cNE) + (nablaSE * cSE) + (nablaSW * cSW) + (nablaNW * cNW); sum += (nablaN * cN) + (nablaS * cS) + (nablaW * cW) + (nablaE * cE); sum *= delta_t; bgra.x = bgra.y = bgra.z = convert_int(sum.x); bgra.w = 255; write_imageui(dstImg, coord, bgra);}This performs one pass of anisotropic diffusion, I would like to be able to apply this process multiple times. How do I do this?EDITHere's the C# codeusing System;using System.Collections;using System.Collections.Generic;using System.Drawing;using System.Drawing.Imaging;using System.IO;using System.Runtime.InteropServices;using Emgu.CV;using Emgu.Util;using Emgu;using Emgu.CV.Structure;using OpenCL.Net;namespace HLSLTest{ public class Computations { private Cl.Context _context; private Cl.Device _device; private Cl.Kernel kernel; private void CheckErr(Cl.ErrorCode err, string name) { if (err != Cl.ErrorCode.Success) { Console.WriteLine(ERROR: + name + ( + err.ToString() + )); } } private void ContextNotify(string errInfo, byte[] data, IntPtr cb, IntPtr userData) { Console.WriteLine(OpenCL Notification: + errInfo); } public void Setup() { Cl.ErrorCode error; Cl.Platform[] platforms = Cl.GetPlatformIDs(out error); List<Cl.Device> devicesList = new List<Cl.Device>(); CheckErr(error, Cl.GetPlatformIDs); foreach (Cl.Platform platform in platforms) { string platformName = Cl.GetPlatformInfo(platform, Cl.PlatformInfo.Name, out error).ToString(); Console.WriteLine(Platform: + platformName); CheckErr(error, Cl.GetPlatformInfo); //We will be looking only for GPU devices foreach (Cl.Device device in Cl.GetDeviceIDs(platform, Cl.DeviceType.Gpu, out error)) { CheckErr(error, Cl.GetDeviceIDs); Console.WriteLine(Device: + device.ToString()); devicesList.Add(device); } } if (devicesList.Count <= 0) { Console.WriteLine(No devices found.); return; } _device = devicesList[0]; if (Cl.GetDeviceInfo(_device, Cl.DeviceInfo.ImageSupport, out error).CastTo<Cl.Bool>() == Cl.Bool.False) { Console.WriteLine(No image support.); return; } _context = Cl.CreateContext(null, 1, new[] { _device }, ContextNotify, IntPtr.Zero, out error); //Second parameter is amount of devices CheckErr(error, Cl.CreateContext); //Load and compile kernel source code. string programPath = Environment.CurrentDirectory + /../../../ImagingTest.cl; //The path to the source file may vary if (!System.IO.File.Exists(programPath)) { Console.WriteLine(Program doesn't exist at path + programPath); return; } string programSource = System.IO.File.ReadAllText(programPath); using (Cl.Program program = Cl.CreateProgramWithSource(_context, 1, new[] { programSource }, null, out error)) { CheckErr(error, Cl.CreateProgramWithSource); //Compile kernel source error = Cl.BuildProgram(program, 1, new[] { _device }, string.Empty, null, IntPtr.Zero); CheckErr(error, Cl.BuildProgram); //Check for any compilation errors if ( Cl.GetProgramBuildInfo ( program, _device, Cl.ProgramBuildInfo.Status, out error ).CastTo<Cl.BuildStatus>() != Cl.BuildStatus.Success ) { CheckErr(error, Cl.GetProgramBuildInfo); Console.WriteLine(Cl.GetProgramBuildInfo != Success); Console.WriteLine(Cl.GetProgramBuildInfo(program, _device, Cl.ProgramBuildInfo.Log, out error)); return; } //Create the required kernel (entry function) kernel = Cl.CreateKernel(program, imagingTest, out error); CheckErr(error, Cl.CreateKernel); } } public void ImagingTest(Image<Gray, Single> InputImage, out Image<Gray, Single> outputImage) { Cl.ErrorCode error; int intPtrSize = 0; intPtrSize = Marshal.SizeOf(typeof(IntPtr)); //Image's RGBA data converted to an unmanaged[] array byte[] inputByteArray; //OpenCL memory buffer that will keep our image's byte[] data. Cl.Mem inputImage2DBuffer; Cl.ImageFormat clImageFormat = new Cl.ImageFormat(Cl.ChannelOrder.RGBA, Cl.ChannelType.Unsigned_Int8); int inputImgWidth, inputImgHeight; int inputImgBytesSize; int inputImgStride; inputImgWidth = InputImage.Width; inputImgHeight = InputImage.Height; System.Drawing.Bitmap bmpImage = InputImage.ToBitmap(); //Get raw pixel data of the bitmap //The format should match the format of clImageFormat BitmapData bitmapData = bmpImage.LockBits ( new Rectangle(0, 0, bmpImage.Width, bmpImage.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb ); inputImgStride = bitmapData.Stride; inputImgBytesSize = bitmapData.Stride * bitmapData.Height; //Copy the raw bitmap data to an unmanaged byte[] array inputByteArray = new byte[inputImgBytesSize]; Marshal.Copy(bitmapData.Scan0, inputByteArray, 0, inputImgBytesSize); //Allocate OpenCL image memory buffer inputImage2DBuffer = Cl.CreateImage2D ( _context, Cl.MemFlags.CopyHostPtr | Cl.MemFlags.ReadOnly, clImageFormat, (IntPtr)bitmapData.Width, (IntPtr)bitmapData.Height, (IntPtr)0, inputByteArray, out error ); CheckErr(error, Cl.CreateImage2D input); //Unmanaged output image's raw RGBA byte[] array byte[] outputByteArray = new byte[inputImgBytesSize]; //Allocate OpenCL image memory buffer Cl.Mem outputImage2DBuffer = Cl.CreateImage2D ( _context, Cl.MemFlags.CopyHostPtr | Cl.MemFlags.WriteOnly, clImageFormat, (IntPtr)inputImgWidth, (IntPtr)inputImgHeight, (IntPtr)0, outputByteArray, out error ); CheckErr(error, Cl.CreateImage2D output); //Pass the memory buffers to our kernel function error = Cl.SetKernelArg(kernel, 0, (IntPtr)intPtrSize, inputImage2DBuffer); error |= Cl.SetKernelArg(kernel, 1, (IntPtr)intPtrSize, outputImage2DBuffer); CheckErr(error, Cl.SetKernelArg); //Create a command queue, where all of the commands for execution will be added Cl.CommandQueue cmdQueue = Cl.CreateCommandQueue(_context, _device, (Cl.CommandQueueProperties)0, out error); CheckErr(error, Cl.CreateCommandQueue); Cl.Event clevent; //Copy input image from the host to the GPU. IntPtr[] originPtr = new IntPtr[] { (IntPtr)0, (IntPtr)0, (IntPtr)0 }; //x, y, z IntPtr[] regionPtr = new IntPtr[] { (IntPtr)inputImgWidth, (IntPtr)inputImgHeight, (IntPtr)1 }; //x, y, z IntPtr[] workGroupSizePtr = new IntPtr[] { (IntPtr)inputImgWidth, (IntPtr)inputImgHeight, (IntPtr)1 }; error = Cl.EnqueueWriteImage(cmdQueue, inputImage2DBuffer, Cl.Bool.True, originPtr, regionPtr, (IntPtr)0, (IntPtr)0, inputByteArray, 0, null, out clevent); CheckErr(error, Cl.EnqueueWriteImage); //Execute our kernel (OpenCL code) error = Cl.EnqueueNDRangeKernel(cmdQueue, kernel, 2, null, workGroupSizePtr, null, 0, null, out clevent); CheckErr(error, Cl.EnqueueNDRangeKernel); //Wait for completion of all calculations on the GPU. error = Cl.Finish(cmdQueue); CheckErr(error, Cl.Finish); //Read the processed image from GPU to raw RGBA data byte[] array error = Cl.EnqueueReadImage ( cmdQueue, outputImage2DBuffer, Cl.Bool.True, originPtr, regionPtr, (IntPtr)0, (IntPtr)0, outputByteArray, 0, null, out clevent ); CheckErr(error, Cl.clEnqueueReadImage); //Clean up memory Cl.ReleaseKernel(kernel); Cl.ReleaseCommandQueue(cmdQueue); Cl.ReleaseMemObject(inputImage2DBuffer); Cl.ReleaseMemObject(outputImage2DBuffer); //Get a pointer to our unmanaged output byte[] array GCHandle pinnedOutputArray = GCHandle.Alloc(outputByteArray, GCHandleType.Pinned); IntPtr outputBmpPointer = pinnedOutputArray.AddrOfPinnedObject(); //Create a new bitmap with processed data and save it to a file. Bitmap outputBitmap = new Bitmap(inputImgWidth, inputImgHeight, inputImgStride, PixelFormat.Format32bppArgb, outputBmpPointer); outputImage = new Image<Gray, Single>(outputBitmap); //outputBitmap.Save(outputImagePath, System.Drawing.Imaging.ImageFormat.Png); pinnedOutputArray.Free(); } }}
How to use multiple passes in OpenCL?
opencl
OpenCL uses barriersYou need to store the results of the first pass in a variable then callbarrier(CLK_LOCAL_MEM_FENCE);Once all the threads have reached the barrier, the next section of code can be executed. This is to enforce data dependencies.
_datascience.14226
If we have MLP then we can easily compute the gradient for each parameters, by computing the gradient recursively begin with the last layer of the network, but suppose I have neural network that consist of different type of layer for instance Input->convolution layer->ReLu->max pooling->fully connected layer->siftmax layer, how do I compute the gradient for each parameters ?
How to train neural network that has different kind of layers
machine learning;deep learning;gradient descent
The different layers you describe can all have gradients calculated using the same back propagation equations as for a simpler MLP. It is still the same recursive process, but it is altered by the parameters of each layer in turn. There are some details worth noting:If you want to understand the correct formula to use, you will need to study the equations of back propagation using the chain rule (note I have picked one example worked through, there are plenty to choose from - including some notes I made myself for a now defunct software project).When feed-forward values overlap (e.g. convolutional) or are selected (e.g. dropout, max pooling), then the combinations are usually logically simple and easy to understand:For overlapped and combined weights, such as with convolution, then gradients simply add. When you back propagate the the gradients from each feature pixel in a higher layer, they add into the gradients for the shared weights in the kernel, and also add into the gradients for the feature map pixels in the layer below (in each case before starting calculation, you might create an all-zero matrix to sum up the final gradients into).For a selection mechanisms, such as the max pooling layer, you only backprop the gradient to the selected output neuron in the previous layer. The others do not affect the output, so by definition increasing or decreasing their value has no effect - they have a gradient of 0 for the example being calculated.In the case of a feed-forward network, each layer's processing is independent from the next, so you only have a complex rule to follow if you have a complex layer. You can write the back propagation equations down so that they relate gradients in one layer to the already-calculated gradients in the layer above (and ultimately to the loss function evaluated in the output layer). It doesn't directly matter what the activation function was in the output layer after you backpropagate the gradient from it - at that point the only difference is numeric, the equations relating deeper layer gradients to each other do not depend on the output at all.Finally, if you want to just use a neural network library, you don't need to worry much about this, it is usually just done for you. All the standard activation functions and layer architectures are covered by existing code. It is only when creating your own implementations from scratch, or when making use of unusual functions or structure, that you might need to go as far as deriving the values directly.
_codereview.90645
The following code comes from a simple Brainfuck interpreter I'm working on to learn the Rust language. Error handling is omitted for simplicity. It is tested on rustc version 1.1.0-dev (af522079a 2015-05-14) (built 2015-05-14).The code implements just the input and output routines for the interpreter. The read_cell method reads a single byte from the Read object input and stores it in the memory cell at pointer. write_cell writes the byte a the memory cell at pointer to the Write object output.use std::io::Read;use std::io::Write;pub struct Machine<In, Out> { pub memory: Vec<u8>, pub pointer: usize, input: In, output: Out,}impl<In: Read, Out: Write> Machine<In, Out> { pub fn new(input: In, output: Out) -> Machine<In, Out> { Machine { memory: vec![0; 2], pointer: 0, input: input, output: output } } pub fn read_cell(&mut self) { self.input.read(&mut self.memory[self.pointer..self.pointer+1]).unwrap(); } pub fn write_cell(&mut self) { self.output.write(&self.memory[self.pointer..self.pointer+1]).unwrap(); }}#[cfg(test)]mod test { use super::*; #[test] fn test_write_cell() { let input = .as_bytes(); let mut output = vec![]; { let mut machine = Machine::new(input, &mut output); machine.memory[machine.pointer] = 1; machine.write_cell(); } assert_eq!(vec![1], output); }}There are two main points in the code that are concerning me:I cannot find in the standard library API an obvious method to read or write a single byte. There is a better/clearer way to implement the read_cell and write_cell methods?In the test function test_write_cell I had to declare machine inside a nested scope. If it is declared in the same scope of output I get this error at the assert_eq! line:cannot borrow `output` as immutable because it is also borrowed as mutableCan I write the same test without having to nest scopes? Is so, how?Any other comment, hint or advice will be very welcome!
Input/output for a simple Brainfuck interpreter
beginner;rust
a better/clearer way to implement the read_cell and write_cell methods?What you have seems pretty good to me. Rust's IO methods rely on having a buffer to read into. In many cases, you might see a single byte read with something likelet mut buf = [0; 1]; reader.read(&mut buf).unwrap();In your case, you already have a buffer, so you might as well read directly into it.Can I write the same test without having to nest scopes?Nope. Let's comment out the nested braces and look at the error messages:error: cannot borrow `output` as immutable because it is also borrowed as mutablematch ( & ( $ left ) , & ( $ right ) ) { ^~~~~~~~~~~note: in expansion of assert_eq!We are attempting to immutably borrow something that's already borrowed. The compiler is kind enough to show us where the borrow occurs:note: previous borrow of `output` occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `output` until the borrow ends let mut machine = Machine::new(input, &mut output); ^~~~~~And where it ends:note: previous borrow ends herefn test_write_cell() {...}^When there is an outstanding mutable borrow, Rust only that borrow to exist. This removes whole classes of bugs around spooky action at a distance as well as potential data races. The extra braces provide a lifetime that the borrow can live during; once the block is exited, the borrow is no longer needed.The only solution I know of is to use what I like to call an exploder. If you are familiar with constructors and destructors, then an exploder is like a destructor that returns values. There's a common pattern in the standard library with methods called into_inner. Here's how it could look for your code:impl<In: Read, Out: Write> Machine<In, Out> { pub fn into_inner(self) -> (In, Out) { (self.input, self.output) }}fn test_write_cell() { let input = .as_bytes(); let output = vec![]; let mut machine = Machine::new(input, output); machine.memory[machine.pointer] = 1; machine.write_cell(); let (_, output) = machine.into_inner(); assert_eq!(vec![1], output);}
_computergraphics.4078
I have a bunch of planes each with their own texture in a grid. Currently I am rendering these as separate planes, each with their own texture, although I could use a single plane with multiple faces.Each color is a texture.I have a polygon mesh with arbitrary shape that is parallel to these planes:This shape could be completely contained within one of the planes, or larger.I would like to texture the polygon with the overlapping textures of the planes:How do I accomplish this clipping of the textures in three js / WebGl?I am also open to any other WebGL solutions.A few ideas I had:Subdivide the polygon into faces that correspond with the overlapping planes. Then texture these faces using UV coords. I know I can get this to work, but it seems like too complicated of a solution.Apply multiple textures to the polygon and use UV coordinates to distribute them. -- Im not sure this is possible without subdividing?Any other ideas? Can this be accomplished with blending modes?
How to clip multiple tiled textures to polygon in Webgl / opengl
rendering;webgl;clipping;masking
null
_webmaster.86776
I have a client with very little text on his site, and I would like to make the most out of his services page. The only information on this particular page are the names of the services. ie:PaintingRemodelingCountertopsEtc.I would like to turn these (they're either p or li tags right now) into header tags so Search Engines understand that these are very important to the site, but without supporting content, I imagine this will be viewed as keyword stuffing. I'm thinking maybe I should make the more prominent services h2s and the not as prominent ones h3s/h4s? Would this approach improve or hinder my SEO?On other pages, he does have images with alt tags reiterating his key services
How To Setup A Services List Using Header Tags, But Without Keyword Stuffing?
seo;heading;keyword stuffing
The type of tag used when adding a piece of text to a page does not in and of itself affect the SERP ranking for the page in question. You should always use HTML tags for the correct purposes for situations where users are using an assisted device such as a screen reader which depends on the correct usage of HTML tags to work right.In this instance you may be better served to list each service, and then beneath each service add a short couple of sentences which describe the service in question and make the service name a link to the service page details.The whole point of SEO is to improve the quality of the page for the end user, and by doing things that will improve the end user's value derived from the site you will naturally be improving the sites quality for SEO and SERP ranking as well.
_webapps.70255
Does YouTube allow users to have multiple accounts? Do they specifically allow/forbid this practice?
Is it allowed to have multiple accounts on YouTube?
youtube;user accounts
As youtube accounts are today google accounts you are more than welcome to have separate accounts for different stuff if you like. The legality however would depend on some factors:Are you in a country that would restrict this in its laws and Google must abide by these laws?Does the Terms of Service from Google allow you to have multiple accounts.The #1 is impossible to answer as that may require us to know the jurisdiction you are under and so on, you better research this if needed yourself.The #2 is pretty clear. Yes, you are allowed to have multiple google accounts under the Google Terms of Service (http://www.google.com/intl/en/policies/terms/) and therefore also tied to different Youtube accounts.The terms of service says that the restriction is that you are not allowed to create multiple accounts for SpoofingSpammingScamming other usersEt cetera. Have a read through Google Terms of Service, it's not that long and it is actually rather clear.Here is proof from google support https://support.google.com/accounts/answer/179235?hl=en
_softwareengineering.189073
My team and I took over a medium sized codebase over a year ago when the previous tech lead left the company. Originating from the lack of man power I fear we favored pragmatic solutions over best practices a little too much.Now, I have to deal with a constant decline in code quality and some kind of organic growth of day-to-day processes. I regret that when asked for code conventions a year ago I basically gave common sense as the only rule. Soon I had programmers using different syntactic styles and failing to see the difficulties this induces in a merge process. Another example is my push for database migration scripts. I tried to incorporate Flyway into our process but after only a week I was quickly overruled by my boss. Even despite me warning them about the upcoming mandatory use of database migration scripts and providing them with as many clues, hints and tools to mitigate the problem of not starting applications because of missing or failing migrations they decided that it would be best do complain to my boss about them not being able to do their work. I forcefully disabled Flyway again and we now live with migration steps in arbitrary named SQL files on a network share that you have to remember to apply to the respective database at the right time. One problem in our process was that we never did formal code reviews. So a lot of hacks went under the radar and into the code base without someone noticing on time. Nowadays I tend to read checkins of my team mates when there is time (that's not often the case) but there is no automatic process to prevent unwanted changes. It is up to me to go to the developer in question and try to ease them into acknowledging why their code is bad. I thought to introduce lint-like tools like FindBugs and Checkstyle but I fear I would face the same psychological problems like I did with the database migrations. After all I would make their jobs harder for them and I can understand why this might lead to misunderstandings.So my question is: How can I go about improving our process and our code quality in an environment where getting the job done is valued much higher than doing it right?
How to deal with too much pragmatism in the project?
development process;code quality;teamwork;technical debt
Let me guess...now you have a team of developers mired in daily production support because the stream of issues coming in is endless and no one gets to work on more strategic things?I'll be curious to see what kinds of answers you get, but without being too pessimistic here I think you're in for a heck of an uphill battle, mainly because my first bit of advice would be to get management on your side, and these quotes make me think that's going to be difficult:I was quickly overruled by my bossandan environment where getting the job done is valued much higher than doing it rightI like to refer to this as the decision-consequence gap. The people making the decisions do not have to suffer the consequences of the decision. If the decision is bad, the fallout is often someone else's problem. If the decision is good, or was bad and the hard work of others made it look good, then of course the decision maker takes credit. I'm not busting on you here, these problems are a sadly systemic issue I see in most corporate IT shops and the corporate world at large. It's what causes them to narrowly put delivery date before quality in every project. And of course they'll do that because they are judged by their superiors on date first; as none of them have to live with the software's deficiencies, quality doesn't really matter to them. Ideal SolutionAnyway, the happy path scenario is that you get management to agree to:Halt new feature development while you make a focused effort to repair and refactor major deficiencies in the code. While they're at it, they empower you to release anyone from the team that isn't smart enough to recognize the value of best practices and doesn't want to change their hack-style of coding. Getting rid of crappy developers is probably even more valuable to the team than the refactoring itself; good developers will find ways to refactor as they go, naturally improving things over time.Management in this scenario recognizes that the leaks in the boat must be repaired or much time and money will be lost in paying for a team that is stuck churning through tactical issues (user hand-holding, data fixes, rote tasks, and other assorted diaper changes) rather than strategic (new features to enable user efficiency, improve quality of customer-facing output, and make possible new streams of revenue). Pragmatic SolutionLook, we know the above theoretical ideal is unlikely to happen based on what you've already told us. That leaves you and your team to find ways to handle this on your own. Establish the standards you need. You mentioned it in your post; get them published and get the team to understand the value and begin implementing for all future work. If you can keep the issues load stable even as the code grows because the new features don't add more problems, then that's a small but laudable victory against tough circumstances.Refactoring is likely one of your best approaches. Every time you have to go in for a fix, take advantage of the opportunity to also clean up that script and improve it some. If you can't get management to support a directed repair effort then evolve the code slowly; you might have to engage some overtime but some improvement is better than none. The problem here is that it works for individual scripts, but less well for system-wide changes that need to happen simultaneously in large numbers. But you may be able to blend in some larger changes if you do get some new feature request; you must be tactful though. I once got my hand slapped by a project manager that thought I was actually greenlighting unapproved work (I was not, I was implementing changes that would fix a part of the system and in turn enable the new requested feature and was thus a material part of the project).Make staffing changes? I don't want to speak out of turn because I don't know the full story behind your team, but it sounds like they are quite comfortable with sloppy quick fixes and don't appreciate best practices like code review. If you've identified people that are creating more problems than they solve, perhaps they can be put somewhere else. This is a tough one because these days management doesn't like firing people for incompetence because that's politically incorrect; they only fire people for hurting someone's feelings. But I mention this because it is a factor; poor developers compromise the rest of the team's efforts. You will waste many hours trying to convince them to improve, and any work they manage to get into production will create more support.Really Pragmatic SolutionA lot of the folks on the stackexchange sites are smart people that are also impatient and are quick to say, Just leave, it's obvious your shop isn't going to get any better.I'd agree with them if you really feel it's not going to get better. But obviously you have to be careful because at least in my experience, you have a chance of ending up somewhere that has the same problems. Also, this site's sidebar lists several similar questions you could look into for advice.
_softwareengineering.318928
This is a bit difficult to describe, but I'll do my best.In Python, I can use string.startswith(tuple) to test for multiple matches. But startswith only returns a boolean answer, whether or not it found a match. It is equivalent to any(string.startswith(substring) for substring in inputTuple). I am looking for a way to return the rest of the string. So, for example:>>> fruits = ['apple', 'orange', 'pear']>>> words1 = 'orange, quagga, etc.'>>> words2 = 'giraffe, apple, etc.'>>> magicFunc(words1, fruits)', quagga, etc.'>>> magicFunc(words2, fruits)False(I'm also okay with the function returning the first matching string, or a list of matching strings, or anything that would enable me to determine where to cut off the string.)Right now I have this:remainingString(bigString, searchStrings): for sub in searchStrings: if bigString.startswith(sub): return bigString.partition(sub)[0]Ick. Is there anything better?
Most Pythonic way to remove first match of potential leading strings?
python;strings;string matching
There's no easy way to get the info from .startswith, but you can construct a regular expression that gives you that info.An example:import reprefixes = (foo, moo!)# Add a ^ before each prefix to force a match at the beginning of a string;# escape() to allow regex-reserved characters like * be used in prefixes.regex_text = ( + |.join(^ + re.escape(x) for x in prefixes) + )match = re.search(regex_text, foobar)print match.end()
_webmaster.29568
Possible Duplicate:HTML validation: is it worth it? How important is to be W3C complaint. Currently I have 27 errors reported by W3C validator. Is that ok or I need to reduce the error count?
How important is it to be W3C complaint?
seo;wordpress;page speed
null
_webmaster.92888
Suppose I want to register a domain name but the .com TLD is unavailable and is being used by someone, would a .net domain be able to compete in terms of SEO?For example, if stackexchange.com is unavailable and I get hold of its .net domain would I be able to compete against the main site?Also if the keyword stackexchange has an average monthly search volume of 12,000 searches per month, would the domain keyword help me ride on the wave of the existing competitor site using the .net TLD?
Using .net to compete against .com
seo;top level domains
There is no advantage to choosing a .net over .com or any name over another in terms of domain names. But for best results, keep the following in mind:Make sure that you choose a name that does not break any law or corporations interests, for example, don't name your domain as burgerkingisbad.com or alldrugsarelegal.com, etc.Also, to help make your site indexed, make sure your domain name contains the title of your site or at least refers to it in some way. For example, if you are running a car lot online and you want to indicate they're all antiques, then you might want a domain like jacksantiqueautomobiles.com or even johnsantiquecars.net.Having something like donaldtrumpsfreshfruit.com on an automobile site just would not make sense at all unless you were showing cars with fruit loaded in them and you have more fruit than cars on your site. The point is, try to make the domain name as close to the subject or company name as possible.
_unix.219527
I installed Fedora 22 a few days ago and noticed that at times it didn't shut down. The monitor received no signal and goes black. But the color in the power button is still on and the fans of the CPU are still running.I found a similar question here (Fedora not shutting down) but there was no clear answer.I ran journalctl as root and here's the last part of the outcome before I turned the computer off myself by holding the power button. Any ideas why this is happening? Jul 30 21:53:43 localhost.localdomain systemd-logind[720]: System is powering down.Jul 30 21:53:44 localhost.localdomain gnome-session[1920]: gnome-session[1920]: WARNING: Lost name on bus: org.gnome.SessionManagerJul 30 21:53:44 localhost.localdomain gnome-session[1920]: WARNING: Lost name on bus: org.gnome.SessionManagerJul 30 21:53:44 localhost.localdomain systemd[1]: Stopped Session 1 of user gglasses.Jul 30 21:53:44 localhost.localdomain systemd[1]: Stopping Session 1 of user gglasses.Jul 30 21:53:44 localhost.localdomain systemd[1]: Stopping Restore /run/initramfs on shutdown...Jul 30 21:53:44 localhost.localdomain audit[2009]: <audit-1701> auid=1000 uid=1000 gid=1000 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:cJul 30 21:53:44 localhost.localdomain polkitd[738]: Unregistered Authentication Agent for unix-session:1 (system bus name :1.54, object path /org/freJul 30 21:53:44 localhost.localdomain systemd[1]: Stopping Daemon for power management...Jul 30 21:53:44 localhost.localdomain systemd[1]: Stopped target Sound Card.Jul 30 21:53:44 localhost.localdomain systemd[1]: Stopping Sound Card.Jul 30 21:53:44 localhost.localdomain systemd[1]: Deactivating swap /dev/mapper/fedora-swap...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Manage Sound Card State (restore and store)...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Manage, Install and Generate Color Profiles...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Disk Manager...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping LVM2 PV scan on device 8:1...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopped target Graphical Interface.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Graphical Interface.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopped target Multi-User System.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Multi-User System.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Avahi mDNS/DNS-SD Stack...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping MariaDB 10.0 database server...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Job spooling tools...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping The Apache HTTP Server...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping ABRT kernel log watcher...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Command Scheduler...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Virtualization daemon...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping NTP client/server...Jul 30 21:53:45 localhost.localdomain chronyd[688]: chronyd exitingJul 30 21:53:45 localhost.localdomain systemd[1]: Stopping SYSV: Late init script for live image....Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping CUPS Scheduler...Jul 30 21:53:45 localhost.localdomain systemd[1]: Removed slice system-getty.slice.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping system-getty.slice.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping PackageKit Daemon...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping User Manager for UID 1000...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopped target Login Prompts.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Login Prompts.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Machine Check Exception Logging Daemon...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Install ABRT coredump hook...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping User Manager for UID 42...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Bluetooth service...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping GNOME Display Manager...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopped target Timers.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Timers.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopped Daily Cleanup of Temporary Directories.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Daily Cleanup of Temporary Directories.Jul 30 21:53:45 localhost.localdomain systemd[1]: Started Store Sound Card State.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Accounts Service...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping RealtimeKit Scheduling Policy Service...Jul 30 21:53:45 localhost.localdomain systemd[1]: Unmounting RPC Pipe File System...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Authorization Manager...Jul 30 21:53:45 localhost.localdomain audit[1524]: <audit-1701> auid=4294967295 uid=42 gid=42 ses=4294967295 subj=system_u:system_r:xdm_t:s0-s0:c0.c1Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopped Session c1 of user gdm.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Session c1 of user gdm.Jul 30 21:53:45 localhost.localdomain bluetoothd[2098]: TerminatingJul 30 21:53:46 localhost.localdomain bluetoothd[2098]: Stopping SDP serverJul 30 21:53:46 localhost.localdomain bluetoothd[2098]: ExitJul 30 21:53:47 localhost.localdomain dbus[696]: [system] Activating via systemd: service name='org.freedesktop.Accounts' unit='accounts-daemon.serviJul 30 21:53:48 localhost.localdomain avahi-daemon[695]: Got SIGTERM, quitting.Jul 30 21:53:48 localhost.localdomain avahi-daemon[695]: Leaving mDNS multicast group on interface eno1.IPv4 with address 192.168.1.5.Jul 30 21:53:48 localhost.localdomain avahi-daemon[695]: avahi-daemon 0.6.31 exiting.Jul 30 21:53:49 localhost.localdomain NetworkManager[802]: <warn> error requesting auth for org.freedesktop.NetworkManager.settings.modify.hostname:Jul 30 21:53:49 localhost.localdomain NetworkManager[802]: <warn> error requesting auth for org.freedesktop.NetworkManager.settings.modify.own: (0) Jul 30 21:53:49 localhost.localdomain NetworkManager[802]: <warn> error requesting auth for org.freedesktop.NetworkManager.settings.modify.system: (Jul 30 21:53:49 localhost.localdomain NetworkManager[802]: <warn> error requesting auth for org.freedesktop.NetworkManager.wifi.share.open: (0) AuthJul 30 21:53:49 localhost.localdomain NetworkManager[802]: <warn> error requesting auth for org.freedesktop.NetworkManager.wifi.share.protected: (0)Jul 30 21:53:56 localhost.localdomain dbus[696]: [system] Activation via systemd failed for unit 'accounts-daemon.service': Refusing activation, D-BuJul 30 21:53:56 localhost.localdomain alsactl[683]: alsactl daemon stoppedJul 30 21:53:56 localhost.localdomain NetworkManager[802]: <warn> error requesting auth for org.freedesktop.NetworkManager.network-control: (0) AuthJul 30 21:53:45 localhost.localdomain systemd[1]: Stopping Login Service...Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopped Authorization Manager.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopped Daemon for power management.Jul 30 21:53:45 localhost.localdomain systemd[1]: Stopped PackageKit Daemon.Jul 30 21:53:49 localhost.localdomain systemd[1751]: Stopped target Default.Jul 30 21:53:49 localhost.localdomain systemd[1751]: Stopping Default.Jul 30 21:53:49 localhost.localdomain systemd[1751]: Stopped target Basic System.Jul 30 21:53:49 localhost.localdomain systemd[1751]: Stopping Basic System.Jul 30 21:53:49 localhost.localdomain systemd[1751]: Stopped target Sockets.Jul 30 21:53:49 localhost.localdomain systemd[1751]: Stopping Sockets.Jul 30 21:53:49 localhost.localdomain systemd[1751]: Stopped target Timers.Jul 30 21:53:56 localhost.localdomain systemd[1751]: Stopping Timers.Jul 30 21:53:56 localhost.localdomain systemd[1256]: Reached target Shutdown.Jul 30 21:53:56 localhost.localdomain systemd[1751]: Reached target Shutdown.Jul 30 21:53:56 localhost.localdomain systemd[1256]: Starting Shutdown.Jul 30 21:53:56 localhost.localdomain systemd[1751]: Starting Shutdown.Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopped target Default.Jul 30 21:53:57 localhost.localdomain systemd[1751]: Starting Exit the Session...Jul 30 21:53:57 localhost.localdomain systemd[1]: Starting Show Plymouth Power Off Screen...Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopping Default.Jul 30 21:53:57 localhost.localdomain systemd[1751]: Stopped target Paths.Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopped target Basic System.Jul 30 21:53:57 localhost.localdomain systemd[1751]: Stopping Paths.Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopping Basic System.Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopped target Sockets.Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopping Sockets.Jul 30 21:53:57 localhost.localdomain systemd[1751]: Received SIGRTMIN+24 from PID 11141 (kill).Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopped target Timers.Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopping Timers.Jul 30 21:53:57 localhost.localdomain systemd[1256]: Starting Exit the Session...Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopped target Paths.Jul 30 21:53:57 localhost.localdomain systemd[1256]: Stopping Paths.Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopped CUPS Scheduler.Jul 30 21:53:57 localhost.localdomain systemd[1]: Unit firewalld.service entered failed state.Jul 30 21:53:57 localhost.localdomain systemd[1]: firewalld.service failed. Jul 30 21:53:49 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=crJul 30 21:53:49 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=udJul 30 21:53:49 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=coJul 30 21:53:49 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=blJul 30 21:53:49 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=ht Jul 30 21:53:49 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=abJul 30 21:53:51 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=liJul 30 21:53:56 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=alJul 30 21:53:56 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=fiJul 30 21:53:56 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=dbJul 30 21:53:57 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=gdJul 30 21:53:56 localhost.localdomain NetworkManager[802]: <warn> disconnected by the system bus.Jul 30 21:53:57 localhost.localdomain systemd[1754]: pam_unix(systemd-user:session): session closed for user gglassesJul 30 21:53:57 localhost.localdomain systemd[1256]: Received SIGRTMIN+24 from PID 11153 (kill).Jul 30 21:53:57 localhost.localdomain systemd[1280]: pam_unix(systemd-user:session): session closed for user gdmJul 30 21:53:57 localhost.localdomain systemd[1]: Stopped User Manager for UID 42.Jul 30 21:53:57 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=usJul 30 21:53:57 localhost.localdomain systemd[1]: Removed slice user-42.slice.Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopping user-42.slice.Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopping Permit User Sessions...Jul 30 21:53:57 localhost.localdomain gdm[876]: Tried to look up non-existent conversation gdm-launch-environmentJul 30 21:53:57 localhost.localdomain gdm[876]: Freeing conversation 'gdm-launch-environment' with active jobJul 30 21:53:57 localhost.localdomain gdm[876]: Freeing conversation 'gdm-password' with active jobJul 30 21:53:57 localhost.localdomain gdm[876]: Failed to contact accountsservice: Error calling StartServiceByName for org.freedesktop.Accounts: GDBJul 30 21:53:57 localhost.localdomain gdm[876]: Child process -1385 was already dead.Jul 30 21:53:57 localhost.localdomain gdm[876]: GLib: g_hash_table_find: assertion 'version == hash_table->version' failedJul 30 21:53:57 localhost.localdomain org.fedoraproject.Setroubleshootd[696]: Exception KeyError: KeyError(140594674702080,) in <module 'threading' fJul 30 21:53:57 localhost.localdomain NetworkManager[802]: g_dbus_connection_real_closed: Remote peer vanished with error: Underlying GIOStream returJul 30 21:53:57 localhost.localdomain abrtd[725]: The name 'org.freedesktop.problems.daemon' has been lost, please check if other service owning the Jul 30 21:53:57 localhost.localdomain systemd[1]: abrtd.service: main process exited, code=exited, status=1/FAILUREJul 30 21:53:57 localhost.localdomain systemd[1]: Stopped ABRT Automated Bug Reporting Tool.Jul 30 21:53:57 localhost.localdomain systemd[1]: Unit abrtd.service entered failed state.Jul 30 21:53:57 localhost.localdomain systemd[1]: abrtd.service failed.Jul 30 21:53:57 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=abJul 30 21:53:57 localhost.localdomain systemd[1]: Stopping LSB: Init script for live image....Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopped LSB: Init script for live image..Jul 30 21:53:57 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=liJul 30 21:53:57 localhost.localdomain systemd[1]: Stopped Permit User Sessions.Jul 30 21:53:57 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=syJul 30 21:53:57 localhost.localdomain systemd[1]: Stopped target User and Group Name Lookups.Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopping User and Group Name Lookups.Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopped target Remote File Systems.Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopping Remote File Systems.Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopped target Remote File Systems (Pre).Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopping Remote File Systems (Pre).Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopped target NFS client services.Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopping NFS client services.Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopping GSSAPI Proxy Daemon...Jul 30 21:53:57 localhost.localdomain systemd[1]: Stopping Logout off all iSCSI sessions on shutdown...Jul 30 21:53:57 localhost.localdomain iscsiadm[11167]: iscsiadm: No matching sessions foundJul 30 21:53:57 localhost.localdomain systemd[1]: Stopped Logout off all iSCSI sessions on shutdown.Jul 30 21:53:57 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=isJul 30 21:53:57 localhost.localdomain systemd[1]: Stopped WPA Supplicant daemon.Jul 30 21:53:57 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=wpJul 30 21:53:57 localhost.localdomain systemd[1]: Stopped GSSAPI Proxy Daemon.Jul 30 21:53:57 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=gsJul 30 21:53:59 localhost.localdomain NetworkManager[802]: <info> Could not connect to the system bus; only the private D-Bus socket will be availabJul 30 21:54:00 localhost.localdomain systemd[1]: Deactivated swap /dev/fedora/swap.Jul 30 21:54:00 localhost.localdomain systemd[1]: Deactivated swap /dev/disk/by-uuid/fba815ca-5c6d-4669-a933-2b4e6909afdb.Jul 30 21:54:00 localhost.localdomain systemd[1]: Deactivated swap /dev/disk/by-id/dm-uuid-LVM-urmOqowfmjzlClU7g4S7DynV15ytZJ2wQnrOcQP4Z4C1HEGYRk6sPwJul 30 21:54:00 localhost.localdomain systemd[1]: Deactivated swap /dev/disk/by-id/dm-name-fedora-swap.Jul 30 21:54:00 localhost.localdomain systemd[1]: Deactivated swap /dev/dm-0.Jul 30 21:54:00 localhost.localdomain systemd[1]: Deactivated swap /dev/mapper/fedora-swap.Jul 30 21:54:00 localhost.localdomain mysqld_safe[1080]: 150730 21:54:00 mysqld_safe mysqld from pid file /var/run/mariadb/mariadb.pid endedJul 30 21:54:00 localhost.localdomain systemd[1]: Stopped MariaDB 10.0 database server.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped target Network.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Network.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Network Manager...Jul 30 21:54:00 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=maJul 30 21:54:00 localhost.localdomain NetworkManager[802]: <info> caught signal 15, shutting down normally.Jul 30 21:54:00 localhost.localdomain NetworkManager[802]: <info> exiting (success)Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped Network Manager.Jul 30 21:54:00 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=NeJul 30 21:54:00 localhost.localdomain systemd[1]: Stopped target Basic System.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Basic System.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped dnf makecache timer.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping dnf makecache timer.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped target Sockets.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Sockets.Jul 30 21:54:00 localhost.localdomain systemd[1]: Closed Open-iSCSI iscsiuio Socket.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Open-iSCSI iscsiuio Socket.Jul 30 21:54:00 localhost.localdomain systemd[1]: Closed D-Bus System Message Bus Socket.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping D-Bus System Message Bus Socket.Jul 30 21:54:00 localhost.localdomain systemd[1]: Closed Avahi mDNS/DNS-SD Stack Activation Socket.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Avahi mDNS/DNS-SD Stack Activation Socket.Jul 30 21:54:00 localhost.localdomain systemd[1]: Closed Open-iSCSI iscsid Socket.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Open-iSCSI iscsid Socket.Jul 30 21:54:00 localhost.localdomain systemd[1]: Closed CUPS Scheduler.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping CUPS Scheduler.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped Forward Password Requests to Plymouth Directory Watch.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Forward Password Requests to Plymouth Directory Watch.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped target Slices.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Slices.Jul 30 21:54:00 localhost.localdomain systemd[1]: Removed slice User and Session Slice.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping User and Session Slice.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped target Paths.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Paths.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped Forward Password Requests to Wall Directory Watch.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Forward Password Requests to Wall Directory Watch.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped CUPS Scheduler.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping CUPS Scheduler.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped target System Initialization.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping System Initialization.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped Apply Kernel Variables.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Apply Kernel Variables...Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped target Swap.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Swap.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped target Encrypted Volumes.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Encrypted Volumes.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped Setup Virtual Console.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Setup Virtual Console...Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Load/Save Random Seed...Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Security Auditing Service...Jul 30 21:54:00 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sylines 233633-233668Jul 30 21:54:00 localhost.localdomain auditd[674]: The audit daemon is exiting.Jul 30 21:54:01 localhost.localdomain kernel: audit_printk_skb: 24 callbacks suppressedJul 30 21:54:01 localhost.localdomain kernel: audit: type=1305 audit(1438314840.965:1580): audit_pid=0 old=674 auid=4294967295 ses=4294967295 subj=syJul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314840.967:1581): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314840.969:1582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314840.969:1583): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314840.999:1584): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:00 localhost.localdomain systemd[1]: Stopped Security Auditing Service.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped Create Volatile Files and Directories.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Create Volatile Files and Directories...Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped Import network configuration from initramfs.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Import network configuration from initramfs...Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopped target Local File Systems.Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Local File Systems.Jul 30 21:54:00 localhost.localdomain systemd[1]: Unmounting Configuration File System...Jul 30 21:54:00 localhost.localdomain systemd[1]: Unmounting /run/user/42...Jul 30 21:54:00 localhost.localdomain systemd[1]: Unmounting /run/user/1000/gvfs...Jul 30 21:54:00 localhost.localdomain systemd[1]: Stopping Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...Jul 30 21:54:01 localhost.localdomain systemd[1]: Unmounted Configuration File System.Jul 30 21:54:01 localhost.localdomain systemd[1]: Unmounted /run/user/42.Jul 30 21:54:01 localhost.localdomain systemd[1]: Unmounted /run/user/1000/gvfs.Jul 30 21:54:00 localhost.localdomain audit: <audit-1305> audit_pid=0 old=674 auid=4294967295 ses=4294967295 subj=system_u:system_r:auditd_t:s0 res=1Jul 30 21:54:00 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=auJul 30 21:54:00 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=syJul 30 21:54:00 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=feJul 30 21:54:00 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=syJul 30 21:54:01 localhost.localdomain systemd[1]: Stopped Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.Jul 30 21:54:01 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvJul 30 21:54:01 localhost.localdomain systemd[1]: Unmounted Temporary Directory.Jul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314841.011:1585): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:01 localhost.localdomain lvm[11213]: 2 logical volume(s) in volume group fedora unmonitoredJul 30 21:54:01 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvJul 30 21:54:01 localhost.localdomain systemd[1]: Unmounted Temporary Directory.Jul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314841.011:1585): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:01 localhost.localdomain lvm[11213]: 2 logical volume(s) in volume group fedora unmonitoredJul 30 21:54:01 localhost.localdomain systemd[1]: Stopping LVM2 metadata daemon...Jul 30 21:54:01 localhost.localdomain systemd[1]: Unmounting /run/user/1000...Jul 30 21:54:01 localhost.localdomain systemd[1]: Stopped Configure read-only root support.Jul 30 21:54:01 localhost.localdomain systemd[1]: Stopping Configure read-only root support...Jul 30 21:54:01 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=feJul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314841.024:1586): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:01 localhost.localdomain systemd[1]: Stopped LVM2 metadata daemon.Jul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314841.026:1587): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:01 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=lvJul 30 21:54:01 localhost.localdomain systemd[1]: Unmounted /run/user/1000.Jul 30 21:54:01 localhost.localdomain systemd[1]: Reached target Unmount All Filesystems.Jul 30 21:54:01 localhost.localdomain systemd[1]: Starting Unmount All Filesystems.Jul 30 21:54:01 localhost.localdomain systemd[1]: Stopped target Local File Systems (Pre).Jul 30 21:54:01 localhost.localdomain systemd[1]: Stopping Local File Systems (Pre).Jul 30 21:54:01 localhost.localdomain systemd[1]: Stopped Remount Root and Kernel File Systems.Jul 30 21:54:01 localhost.localdomain systemd[1]: Stopping Remount Root and Kernel File Systems...Jul 30 21:54:01 localhost.localdomain systemd[1]: Stopped Create Static Device Nodes in /dev.Jul 30 21:54:01 localhost.localdomain systemd[1]: Stopping Create Static Device Nodes in /dev...Jul 30 21:54:01 localhost.localdomain systemd[1]: Reached target Shutdown.Jul 30 21:54:01 localhost.localdomain systemd[1]: Starting Shutdown.Jul 30 21:54:01 localhost.localdomain systemd[1]: Reached target Final Step.Jul 30 21:54:01 localhost.localdomain systemd[1]: Starting Final Step.Jul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314841.041:1588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:01 localhost.localdomain kernel: audit: type=1131 audit(1438314841.041:1589): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:sJul 30 21:54:01 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=syJul 30 21:54:01 localhost.localdomain audit[1]: <audit-1131> pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sy-- Reboot --
Fedora 22 does not shut down
fedora
null
_unix.31250
/var/www$ wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gzThis results in: --2012-02-08 21:20:17-- http://ftp.drupal.org/files/projects/drupal-7.0.tar.gzResolving ftp.drupal.org... 64.50.233.100, 64.50.236.52Connecting to ftp.drupal.org|64.50.233.100|:80... connected.HTTP request sent, awaiting response... 200 OKLength: 2728271 (2.6M) [application/x-gzip]drupal-7.0.tar.gz: Permission deniedCannot write to `drupal-7.0.tar.gz' (Permission denied).eyedea@eyedea-ER912AA-ABA-SR1810NX-NA620:/var/www$ ^Ceyedea@eyedea-ER912AA-ABA-SR1810NX-NA620:/var/www$ wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gz--2012-02-08 21:46:34-- http://ftp.drupal.org/files/projects/drupal-7.0.tar.gzResolving ftp.drupal.org... 64.50.236.52, 64.50.233.100Connecting to ftp.drupal.org|64.50.236.52|:80... connected.HTTP request sent, awaiting response... 200 OKLength: 2728271 (2.6M) [application/x-gzip]drupal-7.0.tar.gz: Permission deniedCannot write to `drupal-7.0.tar.gz' (Permission denied).I checked the permissions of /var/www and i can't change them. What's going on here?
Permission Denied when downloading Drupal
linux;permissions
It's totally normal. your /var/www directory belongs to root user and root group with those rights drwxr-xr-x.It's far more better to have /var/www belonging to root, because it will forbid possible security flaws in apache or php to write and change source code on this server. What you can do about that : Make your wget with root rights. For instance :$ sudo wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gzor$ su -c wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gzDownload it from your $HOME and untar it afterwards$ cd ~; wget http://ftp.drupal.org/files/projects/drupal-7.0.tar.gzIgnore those security recommendations and change rights of /var/www$ sudo chown `id -u`:`id -g` /var/wwwEDIT : If you have broken your /var/www tree with a chmod -R 777 /var/www/* and haven't burn in hell, you can thank god and quickly execute those commands before he comes for you :$ sudo find /var/www -type d -exec chmod 755 {} \;$ sudo find /var/www -type f -exec chmod 644 {} \;
_unix.339434
I recently install kali linux 2016.2 (64 bit) in my computer. Not vmware nor virture box. i installed it as dual boot with window. i connected to the wifi just fine (using right-up-corner button) before i tried to set it static ip. After doing some stuff now it can not see any network around.ifconfig does show wlan0 and it it up too but when i type:ip link show wlan0it said that the state of interface is DOWN and ifup wlan0 or ip link set wlan0 updoes not change anythingand the weird thing is i still able to scan wifi network around using terminaliw scan wlan0while the list of network (corner button) still blankI am sure that the wifi router is working normally.Any help?
`Kali linux 2016.2` wireless interface is up but not network to select
wifi;kali linux
null
_unix.44592
Since last few days, I'm facing some problem. It's about response to 100 users. Situation goes like this. On one end there are 100 user, each user has unique ID. User is supposed to submit one file to server/mail. At receivers end, someone has to check whether submitted file is in particular format or not. I want to automate to this process, starting from server/mailbox where some script should download submitted file, check it and store result in some output file. Can we setup some server on unix machine?can we use some mailbox to which people can send files in the form of an email?Please suggest some solution.
Automate checking user submitted files
email
null
_cs.13274
I know that for executing a program, it should be copied to RAM. But the problem is whole of it may not be copied always. Since the size of the RAM is limited, there is mechanism called virtual memory. If the addressed thing is not in memory, a page fault occurs and the data is copied to the RAM. My question is who keeps track of which data is in the RAM and not in the RAM?
How a program is copied to RAM from harddisk
operating systems;memory management;virtual memory;memory access
The operating system (with help from the CPU) keeps a page table, which is a mapping of each virtual page for to the physical page it is mapped to. The page table also includes a bit for whether a particular page is currently mapped. For every load and store instruction the hardware walks the page table (or at least a cached portion of it.) If the virtual page is currently mapped to a physical page, the hardware figures this out and returns the right data.If the virtual page is currently unmapped then the operating system receives an interrupt. At this point it looks into the memory map for the process. This is a list of ranges of virtual memory, the permissions that should be applied to that range, and the file (if any) on disk that stores the data from that range when it is not in RAM. Typically for the main executable there will be a text segment, data segment and sometimes a read-only data segment (for constants), as well as a bss (zero initialized data) segment, a stack, and the heap managed by malloc. There may be (usually are) additional text and data segments for each shared object (shared library) that the program needs to load.You can use mmap() to tell a Posix operating system to create a new region in the memory map, the permissions for that region, and which file to use to back that region. On Linux, for an existing process you can get a listing of its currently mapped regions using the pmap command.
_webapps.105789
I was creating an accounting log, so right now I'm using: =SUM(FILTER(Transactions!C:C,Transactions!B:B>=D23,Transactions!B:B<=E23,Transactions!E:E=Summary!C65),FILTER(Transactions!C:C,Transactions!B:B>=D23,Transactions!B:B<=E23,Transactions!E:E=Summary!C66),FILTER(Transactions!C:C,Transactions!B:B>=D23,Transactions!B:B<=E23,Transactions!E:E=Summary!C67),FILTER(Transactions!C:C,Transactions!B:B>=D23,Transactions!B:B<=E23,Transactions!E:E=Summary!C68))The first two criteria in the filter function checks if the range is within a specific date range, and the final criterion checks if the type of transaction is relevant. But it's not letting me compare multiple values so I have to keep repeating the filter function.Posting the same formula for more clarity=SUM(FILTER(Transactions!C:C,Transactions!B:B>=D23,Transactions!B:B<=E23,Transactions!E:E=Nuts),FILTER(Transactions!C:C,Transactions!B:B>=D23,Transactions!B:B<=E23,Transactions!E:E=Bolts),FILTER(Transactions!C:C,Transactions!B:B>=D23,Transactions!B:B<=E23,Transactions!E:E=Screws),FILTER(Transactions!C:C,Transactions!B:B>=D23,Transactions!B:B<=E23,Transactions!E:E=Clips))So is there a way for me to just compare if the transaction done is Nuts, Bolts, Screws or Clips in one go without having to repeat the filter function?
Avoid repeating using the filter function
google spreadsheets
null
_ai.3072
I'm trying to get a gauge on just how big the programs and databases are these automata. I understand that this is a changing number, particularly in regard to Machine Learning.Q: How large was Deep Blue when it beat Gary Kasparov?Q: How big was AlphaGo when it beat Lee Sedol?
What are the (general) sizes of AlphaGo and Deep Blue?
ai design
null
_unix.239199
I'm trying to run a minecraft server on my vServer.Everything works properly when I start the server like this:java -Xmx1G -Xms1G -jar minecraft_server.jar noguiBut I want the server to run in a screen with a command like this:screen -d -m -S mc-server java -Xmx1G -Xms1G -jar minecraft_server.jar noguiBut everytime I try to start it this way there is just a new screen for a few seconds and then it disapears again. Is something wrong in my command or is there another way to see what happend in this scree before it closed itself?Or does screen need some special permissions? I use an separate user and all the serverfiles are in the users homedirectory ...
Starting a minecraft server using screen doesnt work properly
debian;ssh;permissions;gnu screen;minecraft
null
_webapps.26912
Is it possible to enable code folding in jsFiddle? I've found that long Javascript files can become unwieldy without code folding.(I'm referring to the feature in Geany or the Eclipse IDE that makes it possible to collapse text that is surrounded by curly braces.)
Code folding in jsFiddle
javascript;code;jsfiddle
null
_unix.39866
I am loading Linux (Debian Lenny) on VirtualBox but there is apparently something wrong with the GRUB. When I start the system, a grub menu appears:Then I run the following commands:root (hd0,0) kernel /vmlinuz root=/dev/hda1 ro quiet initrd /initrd.img bootAfter the system boots, how should I continue to repair the grub file?Any advice would be appreciated!
How to repair the grub on debian
debian;grub
null
_unix.328817
Consider the following Makefile.all: yesIf I run make and suspend using Ctrl-Z, and then start screen or tmux, followed by an attempt to reptyr, I get the following error.$ reptyr 5328[-] Process 5329 (yes) shares 5328's process group. Unable to attach.(This most commonly means that 5328 has suprocesses).Unable to attach to pid 5328: Invalid argumentIt is certainly true that make has subprocesses, but is there a way to reptyr anyways, either using this tool or another tool?
Is there a way to reptyr a make process or any process with subprocesses?
tty;pty;reptyr
null
_unix.265293
I have two Audio CDs to prepare for my upcoming English test. I can play the first CD by execute vlc cdda:// in konsole (I use Arch Linux with KDE). I also note that the Audio CD appears in the Devices panel in Dolphin. Unfortunately, for the second CD, nothing appears in Dolphin and I also can't play this CD with vlc.I run cd-info /dev/cdrom with the the second CD inside and getcd-info version 0.93 x86_64-unknown-linux-gnuCopyright (c) 2003-2005, 2007-2008, 2011-2013 R. BernsteinThis is free software; see the source for copying conditions.There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR APARTICULAR PURPOSE.CD location : /dev/cdromCD driver name: GNU/Linux access mode: IOCTLVendor : SlimtypeModel : DVD A DS8A5SH Revision : XAA2Hardware : CD-ROM or DVDCan eject : YesCan close tray : YesCan disable manual eject : YesCan select juke-box disc : NoCan set drive speed : NoCan read multiple sessions (e.g. PhotoCD) : YesCan hard reset device : YesReading.... Can read Mode 2 Form 1 : Yes Can read Mode 2 Form 2 : Yes Can read (S)VCD (i.e. Mode 2 Form 1/2) : Yes Can read C2 Errors : Yes Can read IRSC : Yes Can read Media Channel Number (or UPC) : Yes Can play audio : Yes Can read CD-DA : Yes Can read CD-R : Yes Can read CD-RW : Yes Can read DVD-ROM : YesWriting.... Can write CD-RW : Yes Can write DVD-R : Yes Can write DVD-RAM : Yes Can write DVD-RW : No Can write DVD+RW : No__________________________________Disc mode is listed as: Error in getting information++ WARN: error in ioctl CDROMREADTOCHDR: No medium foundcd-info: Can't get first track number. I give up.I installed libdvdread, libdvdcss, libdvdnav and tried with vlc dvd:///dev/sr0 but konsole returned errors. Can anyone help me to play the CD?
Can not play Audio CD in Arch Linux
arch linux;vlc;dvd;audio cd
Well, here's your error:++ WARN: error in ioctl CDROMREADTOCHDR: No medium foundIt seems your medium is either non-redbook compliant, is faulty or damaged, or your drive is faulty (seems less likely considering the other CD works).If your CD works on another audio player, it may be that it contains Digital Restrictions Management technology, which you don't have the required technology to interact with.
_cogsci.9280
Generally, we think of humans as having a (relatively) advanced level of consciousness, but we don't think of simple molecules as having any sort of mental capacity at all. So where in between does the phenomenon of consciousness arise?Update: I have chosen G.Tononi's definition: the quantity of consciousness corresponds to the amount of integrated information generated by a complex of elements
What is the simplest entity that would be considered conscious?
consciousness
null
_codereview.133910
The following code I am using to lookup the description based on a Partition Type in either a string, an integer or hex value. By calling parttype(parttype) in the below code.Is there a more pythonic way for this?__PARTTYPE_TO_DESCRIPTION__ = { 00: Empty, 01: DOS 12-bit FAT, 02: XENIX root, 03: XENIX /usr, 04: DOS 3.0+ 16-bit FAT (up to 32M), 05: DOS 3.3+ Extended Partition, 06: DOS 3.31+ 16-bit FAT (over 32M), 07: Windows NTFS | OS/2 IFS | exFAT | Advanced Unix | QNX2.x pre-1988, 08: AIX boot | OS/2 (v1.0-1.3 only) | SplitDrive | Commodore DOS | DELL partition spanning multiple drives | QNX 1.x and 2.x ('qny'), 09: AIX data | Coherent filesystem | QNX 1.x and 2.x ('qnz'), 0a: OS/2 Boot Manager | Coherent swap partition | OPUS, 0b: WIN95 OSR2 FAT32, 0c: WIN95 OSR2 FAT32, LBA-mapped, 0d: SILICON SAFE, 0e: WIN95: DOS 16-bit FAT, LBA-mapped, 0f: WIN95: Extended partition, LBA-mapped, 10: OPUS (? - not certain - ?), 11: Hidden DOS 12-bit FAT | Leading Edge DOS 3.x logically sectored FAT, 12: Configuration/diagnostics partition, 14: Hidden DOS 16-bit FAT <32M | AST DOS with logically sectored FAT, 16: Hidden DOS 16-bit FAT >=32M, 17: Hidden IFS, 18: AST SmartSleep Partition, 19: Claimed for Willowtech Photon coS, 1b: Hidden WIN95 OSR2 FAT32, 1c: Hidden WIN95 OSR2 FAT32, LBA-mapped, 1e: Hidden WIN95 16-bit FAT, LBA-mapped, 20: Rumoured to be used by Willowsoft Overture File System, 21: Reserved for: HP Volume Expansion, SpeedStor variant | Claimed for FSo2 (Oxygen File System), 22: Claimed for Oxygen Extended Partition Table, 23: Reserved - unknown, 24: NEC DOS 3.x, 26: Reserved - unknown, 27: PQservice | Windows RE hidden partition | MirOS partition | RouterBOOT kernel partition, 2a: AtheOS File System (AFS), 2b: SyllableSecure (SylStor), 31: Reserved - unknown, 32: NOS, 33: Reserved - unknown, 34: Reserved - unknown, 35: JFS on OS/2 or eCS , 36: Reserved - unknown, 38: THEOS ver 3.2 2gb partition, 39: Plan 9 partition | THEOS ver 4 spanned partition, 3a: THEOS ver 4 4gb partition, 3b: THEOS ver 4 extended partition, 3c: PartitionMagic recovery partition, 3d: Hidden NetWare, 40: Venix 80286 | PICK | Linux/MINIX, 41: Personal RISC Boot | PPC PReP (Power PC Reference Platform) Boot, 42: Windows dynamic extended partition | Linux swap | SFS (Secure Filesystem), 43: Linux native, 44: GoBack partition, 45: Boot-US boot manager | Priam | EUMEL/Elan , 46: EUMEL/Elan, 47: EUMEL/Elan, 48: EUMEL/Elan, 4a: Mark Aitchison's ALFS/THIN lightweight filesystem for DOS | AdaOS Aquila (Withdrawn), 4c: Oberon partition, 4d: QNX4.x, 4e: QNX4.x 2nd part, 4f: QNX4.x 3rd part | Oberon partition, 50: OnTrack Disk Manager (older versions) RO | Lynx RTOS | Native Oberon (alt), 51: OnTrack Disk Manager RW (DM6 Aux1) | Novell, 52: CP/M | Microport SysV/AT, 53: Disk Manager 6.0 Aux3, 54: Disk Manager 6.0 Dynamic Drive Overlay (DDO), 55: EZ-Drive, 56: Golden Bow VFeature Partitioned Volume | DM converted to EZ-BIOS | AT&T MS-DOS 3.x logically sectored FAT, 57: DrivePro | VNDI Partition, 5c: Priam EDisk, 61: SpeedStor, 63: Unix System V (SCO, ISC Unix, UnixWare, ...), Mach, GNU Hurd, 64: PC-ARMOUR protected partition | Novell Netware 286, 2.xx, 65: Novell Netware 386, 3.xx or 4.xx, 66: Novell Netware SMS Partition, 67: Novell, 68: Novell, 69: Novell Netware 5+, Novell Netware NSS Partition, 70: DiskSecure Multi-Boot, 71: Reserved - unknown, 72: V7/x86, 73: Reserved - unknown, 74: Scramdisk partition | Reserved - unknown, 75: IBM PC/IX, 76: Reserved - unknown, 77: M2FS/M2CS partition | VNDI Partition, 78: XOSL FS, 7e: Claimed for F.I.X., 7f: Proposed for the Alt-OS-Development Partition Standard, 80: MINIX until 1.4a, 81: MINIX since 1.4b, early Linux | Mitac disk manager, 82: Linux swap | Solaris x86 | Prime, 83: Linux native partition, 84: OS/2 hidden C: drive | Hibernation partition, 85: Linux extended partition, 86: Old Linux RAID partition superblock | FAT16 volume set, 87: NTFS volume set, 88: Linux plaintext partition table, 8a: Linux Kernel Partition (used by AiR-BOOT), 8b: Legacy Fault Tolerant FAT32 volume, 8c: Legacy Fault Tolerant FAT32 volume using BIOS extd INT 13h, 8d: Free FDISK 0.96+ hidden Primary DOS FAT12 partitition, 8e: Linux Logical Volume Manager partition, 90: Free FDISK 0.96+ hidden Primary DOS FAT16 partitition, 91: Free FDISK 0.96+ hidden DOS extended partitition, 92: Free FDISK 0.96+ hidden Primary DOS large FAT16 partitition, 93: Hidden Linux native partition | Amoeba, 94: Amoeba bad block table, 95: MIT EXOPC native partitions, 96: CHRP ISO-9660 filesystem, 97: Free FDISK 0.96+ hidden Primary DOS FAT32 partitition, 98: Free FDISK 0.96+ hidden Primary DOS FAT32 partitition (LBA) | Datalight ROM-DOS Super-Boot Partition, 99: DCE376 logical drive, 9a: Free FDISK 0.96+ hidden Primary DOS FAT16 partitition (LBA), 9b: Free FDISK 0.96+ hidden DOS extended partitition (LBA), 9e: ForthOS partition, 9f: BSD/OS, a0: Laptop hibernation partition, a1: Laptop hibernation partition | HP Volume Expansion (SpeedStor variant), a3: HP Volume Expansion (SpeedStor variant), a4: HP Volume Expansion (SpeedStor variant), a5: BSD/386, 386BSD, NetBSD, FreeBSD, a6: OpenBSD | HP Volume Expansion (SpeedStor variant), a7: NeXTStep, a8: Mac OS-X, a9: NetBSD, aa: Olivetti Fat 12 1.44MB Service Partition, ab: Mac OS-X Boot partition | GO! partition, ad: RISC OS ADFS, ae: ShagOS filesystem, af: MacOS X HFS | ShagOS swap partition, b0: BootStar Dummy, b1: HP Volume Expansion (SpeedStor variant) | QNX Neutrino Power-Safe filesystem, b2: QNX Neutrino Power-Safe filesystem, b3: HP Volume Expansion (SpeedStor variant) | QNX Neutrino Power-Safe filesystem, b4: HP Volume Expansion (SpeedStor variant), b6: HP Volume Expansion (SpeedStor variant) | Corrupted Windows NT mirror set (master), FAT16 file system, b7: Corrupted Windows NT mirror set (master), NTFS file system | BSDI BSD/386 filesystem, b8: BSDI BSD/386 swap partition, bb: Boot Wizard hidden, bc: Acronis backup partition, bd: BonnyDOS/286, be: Solaris 8 boot partition, bf: New Solaris x86 partition, c0: CTOS | REAL/32 secure small partition | NTFT Partition | DR-DOS/Novell DOS secured partition, c1: DRDOS/secured (FAT-12), c2: Hidden Linux, c3: Hidden Linux swap, c4: DRDOS/secured (FAT-16, < 32M), c5: DRDOS/secured (extended), c6: DRDOS/secured (FAT-16, >= 32M) | Windows NT corrupted FAT16 volume/stripe set, c7: Windows NT corrupted NTFS volume/stripe set | Syrinx boot, c8: Reserved for DR-DOS 8.0+, c9: Reserved for DR-DOS 8.0+, ca: Reserved for DR-DOS 8.0+, cb: DR-DOS 7.04+ secured FAT32 (CHS), cc: DR-DOS 7.04+ secured FAT32 (LBA), cd: CTOS Memdump, ce: DR-DOS 7.04+ FAT16X (LBA), cf: DR-DOS 7.04+ secured EXT DOS (LBA), d0: REAL/32 secure big partition | Multiuser DOS secured partition, d1: Old Multiuser DOS secured FAT12, d4: Old Multiuser DOS secured FAT16 <32M, d5: Old Multiuser DOS secured extended partition, d6: Old Multiuser DOS secured FAT16 >=32M, d8: CP/M-86, da: Non-FS Data | Powercopy Backup, db: Digital Research CP/M, Concurrent CP/M, Concurrent DOS | CTOS (Convergent Technologies OS -Unisys) | KDG Telemetry SCPU boot, dd: Hidden CTOS Memdump, de: Dell PowerEdge Server utilities (FAT fs), df: DG/UX virtual disk manager partition | BootIt EMBRM, e0: Reserved by STMicroelectronics for a filesystem called ST AVFS, e1: DOS access or SpeedStor 12-bit FAT extended partition, e3: DOS R/O | SpeedStor, e4: SpeedStor 16-bit FAT extended partition < 1024 cyl., e5: Tandy MSDOS with logically sectored FAT, e6: Storage Dimensions SpeedStor, e8: LUKS, eb: BeOS BFS, ec: SkyOS SkyFS, ed: plans to use this for an OS called Sprytix, ee: Indication that this legacy MBR is followed by an EFI header, ef: Partition that contains an EFI file system, f0: Linux/PA-RISC boot loader, f1: Storage Dimensions SpeedStor, f2: DOS 3.3+ secondary partition, f3: Storage Dimensions SpeedStor, f4: SpeedStor large partition | Prologue single-volume partition, f5: Prologue multi-volume partition, f6: Storage Dimensions SpeedStor, f7: DDRdrive Solid State File System, f9: pCache, fa: Bochs, fb: VMware File System partition, fc: VMware Swap partition, fd: Linux raid partition with autodetect using persistent superblock, fe: SpeedStor > 1024 cyl. | LANstep | IBM PS/2 IML (Initial Microcode Load) partition, located at the end of the disk. | Windows NT Disk Administrator hidden partition | Linux Logical Volume Manager partition (old), ff: Xenix Bad Block Table}def parttype_2_description(parttype): try: returns the Partition Type Description based on a two character (hex) string Partition type return __PARTTYPE_TO_DESCRIPTION__[parttype.lower()] except KeyError: return 'Unknown partition type: ' + parttype.lower()def parttype_int_2_description(parttype): returns the Partition Type Description based on an integer partition type return parttype_2_description(str(hex(parttype))[2:].rjust(2, '0'))def parttype_hex_2_description(parttype): returns the Partition Type Descriptoin based on a hex partition type return parttype_2_description(str(parttype)[2:].rjust(2, '0'))def ishex(value): if not str(value)[:2] == '0x': return False try: hexval = int(value, 16) return True except: return Falsedef isint(value): return isinstance(value, int)def isstr(value): return isinstance(value, str)def parttype(parttype): returns the partition type descriptor based on a string, int or hex partition type if ishex(parttype): return parttype_hex_2_description(parttype) if isint(parttype): return parttype_int_2_description(parttype) if isstr(parttype): return parttype_2_description(parttype) returndef main(): print('do not run this interactively') print('import and call the parttype() function') returnif __name__ == '__main__': main ()
Disk Partition type lookup table
python;python 3.x
The dictionaryThe naming rules in PEP 8 state:__double_leading_and_trailing_underscore__: magic objects or attributes that live in user-controlled namespaces. E.g. __init__, __import__ or __file__. Never invent such names; only use them as documented.If your intention is to simply indicate that the dictionary is private, use a _single_leading_underscore. Coupled with the convention to use ALL_CAPS for constants, I would name it _PARTTYPE_TO_DESCRIPTION.The keys in the dictionary represent numbers, right? Then why not write them as numbers? It is easier to normalize strings into integers than to format integers as strings, since there are a multitude ways to write 15 (e.g. 0f, 0F, 0x0f, 0x0F).The lookup functionsI'm not a fan of the parttype__2_description() naming. The 2 looks like it's supposed to be some version number.Instead of three lookup functions, why not offer one function that just does the right thing depending on the argument value?I don't think that you should return 'Unknown partition type: 13' as if it were a valid result. You could either raise an exception, or let the caller specify the fallback value. When composing the exception string, don't mess with the input (.lower()) it's confusing.The parttype_2_description docstring is botched. It needs to be the very first thing inside the function.Suggested solutionI would write one function that handles all the cases, and include a docstring with doctests to thoroughly describe how to use it._PARTTYPE_TO_DESCRIPTION = { 0x00: Empty, 0x01: DOS 12-bit FAT, 0x02: XENIX root, 0x03: XENIX /usr, 0x04: DOS 3.0+ 16-bit FAT (up to 32M), 0x05: DOS 3.3+ Extended Partition, 0x06: DOS 3.31+ 16-bit FAT (over 32M), 0x07: Windows NTFS | OS/2 IFS | exFAT | Advanced Unix | QNX2.x pre-1988, 0x08: AIX boot | OS/2 (v1.0-1.3 only) | SplitDrive | Commodore DOS | DELL partition spanning multiple drives | QNX 1.x and 2.x ('qny'), 0x09: AIX data | Coherent filesystem | QNX 1.x and 2.x ('qnz'), 0x0A: OS/2 Boot Manager | Coherent swap partition | OPUS, 0x0B: WIN95 OSR2 FAT32, 0x0C: WIN95 OSR2 FAT32, LBA-mapped, 0x0D: SILICON SAFE, 0x0E: WIN95: DOS 16-bit FAT, LBA-mapped, 0x0F: WIN95: Extended partition, LBA-mapped, 0x10: OPUS (? - not certain - ?), 0xFF: Xenix Bad Block Table,}def partition_description(type, unknown_description=None): Return the Partition Type Description for the partition type, given either as an integer or as a hex string. >>> partition_description(15) 'WIN95: Extended partition, LBA-mapped' >>> partition_description(0x0f) 'WIN95: Extended partition, LBA-mapped' >>> partition_description('0x0f') 'WIN95: Extended partition, LBA-mapped' >>> partition_description('0x0F') 'WIN95: Extended partition, LBA-mapped' >>> partition_description('0F') 'WIN95: Extended partition, LBA-mapped' >>> partition_description('0f') 'WIN95: Extended partition, LBA-mapped' If unknown_description is also given, then it will be returned if there is no such partition type. >>> partition_description(0x13, 'Bogus partition!') 'Bogus partition!' If unknown_description is None or is omitted, then ValueError will be raised for unrecognized partition types. >>> partition_description('0x13') Traceback (most recent call last): ... ValueError: Unknown partition type: 0x13 type_num = type if isinstance(type, int) else int(type, base=16) description = _PARTTYPE_TO_DESCRIPTION.get(type_num, unknown_description) if description is None: raise ValueError('Unknown partition type: ' + str(type)) return description
_reverseengineering.11037
I have an assignment for reverse engineering a binary. The function I'm up to takes a string input and reads one character at a time. It is as follows (push/pop registers removed): 8048b6e: mov $0x8049ee9,%esi 8048b73: movzbl (%esi),%edx ; (%esi) = 0x654A6167 8048b76: test %dl,%dl 8048b78: je 8048bb2 8048b7a: mov 0x8(%ebp),%ebx 8048b7d: mov $0x16,%edi 8048b82: movzbl (%ebx),%eax 8048b85: sub $0x61,%eax 8048b88: cmp $0x19,%al 8048b8a: ja 8048b97 8048b8c: mov %edi,%ecx 8048b8e: sub %al,%cl 8048b90: mov %ecx,%eax 8048b92: jns 8048b97 8048b94: add $0x1a,%eax 8048b97: add $0x61,%eax 8048b9a: cmp %al,%dl 8048b9c: je 8048ba3 8048b9e: call 8048e18 8048ba3: add $0x1,%esi 8048ba6: movzbl (%esi),%edx 8048ba9: test %dl,%dl 8048bab: je 8048bb2 8048bad: add $0x1,%ebx 8048bb0: jmp 8048b82 I'm having a little trouble understanding the logic of one part (8048b85 onwards) so I converted it to Ceax = *ebx; // movzbl (%ebx),%eaxeax -= 97; // sub $0x61,%eax// cmp $0x19,%al// ja 8048b97 <phase_3+0x32>if((unsigned)(eax & 0xFF) < 25){ ecx = edi; // mov %edi,%ecx int cl = (eax & 0xFF) - (ecx & 0xFF); // sub %al,%cl ecx &= cl; eax = ecx; // mov %ecx,%eax if(cl >= -127 && cl < 128) // jns 8048b97 { eax += 0x1A; // add $0x1a,%eax }}eax += 97; // add $0x61,%eaxif((eax & 0xFF) != (edx & 0xFF)) // cmp %al,%dl{ trigger_bomb(); // call 8048e18 <trigger_bomb>}I'm not sure if what I converted to is correct. The first value being compared is 0x67 which is g in ascii which wont set the flag for ja as 0x19 > 0x67 - 0x61. If I try 0x67 - 0x1A as the input, since it's unsigned comparison it will never be < 25 as it will overflow back to 236. I thought then I would need to use a negative number so that if it overflows, it would go 0x67 but since the input is ascii I'm not sure that it is possible to input a negative value. So my question is where am I going wrong in my logic? I'm not looking to be given the answer since I will need to figure out the other 3 values, but what I'm trying just doesn't seem to be correct. Any pointers/advice would be greatly appreciated.Thankyou
Assembly - Binary Bomb Confusion
assembly;x86;binary
null
_webmaster.27754
Possible Duplicate:Services to monitor and report if a web site goes down? I'm basically looking for a desktop-based software which can monitor my company's website and the web application's online availability. I know there are few online applications like Uptime Robot which does the same work but I have been asked to find a desktop based software which can monitor running in system tray and notify any down-time. A free software would be great.Any help would be appreciated. Thanks!
Desktop Software to monitor online status of web site and web-based application
analytics;monitoring
null
_unix.57044
I have recently bought a Dell XPS Touch. I'm dual booting Windows 7 with Fedora 16 (Verne). Right out of the box, Fedora reports 1 hour 26 minutes of battery life at full charge, while Windows reports a whopping 4 hours!! Why is this happening? Am I missing some acpi module or something?A friend suggested to me that this could be due to the fact that I'm using nouveau instead of the proprietary nvidia driver. Does that sound reasonable?UpdateI am now on Debian Wheezy and still the issue persists. Removed Fedora tag.
Dell battery performs worse under Linux
battery
Dell XPS seems to have an Nvidia hybrid (Optimus) graphics card. With correct driver setup, only the low-powered intel card is used, if you run more demanding applications, there's an automatic switch to the other card.By default, this is not supported (to my knowledge) in linux systems, and this is why the power consumption is so high: it uses the full power all the time. There's a project called bumblebee, that adds support for such hybrids, so you can switch them on and off manually.Bumblebee ProjectOn my dell (not an XPS), this worked wonderfully and got me up to the expected five hours battery time.
_unix.358770
I have a Btrfs raid1 with 3 disks on a Ubuntu 16.04. However, it seems only 2 disks are being used instead of all 3. How should I fix this?root@one:~# btrfs fi shLabel: none uuid: 3880b9fa-0824-4ffe-8f61-893a104f3567 Total devices 3 FS bytes used 54.77GiB devid 1 size 2.73TiB used 56.03GiB path /dev/sda2 devid 2 size 2.73TiB used 56.03GiB path /dev/sdc2 devid 3 size 2.59TiB used 0.00B path /dev/sdb3I have tried running a conversion filter but still the /dev/sdb3 is not being used. root@one:~# btrfs balance start -dconvert=raid1 -mconvert=raid1 /top/raid/Done, had to relocate 112 out of 112 chunksroot@one:~# btrfs fi df /top/raid/Data, RAID1: total=55.00GiB, used=54.40GiBSystem, RAID1: total=32.00MiB, used=16.00KiBMetadata, RAID1: total=1.00GiB, used=373.06MiBGlobalReserve, single: total=128.00MiB, used=0.00BAt first, there's only 1 disk during Ubuntu server installation. Then I added a disk and converted to raid1. Then I added a thrid disk /dev/sdb3 and tried to balance again. The third disk is not being used.root@one:~# btrfs --versionbtrfs-progs v4.4I can mount /dev/sdb3 just fine.root@one:~# mount /dev/sdb3 /mntroot@one:~# ll /mnttotal 16drwxr-xr-x 1 root root 74 Apr 13 09:37 ./drwxr-xr-x 1 root root 200 Apr 12 21:19 ../drwxr-xr-x 1 root root 200 Apr 12 21:19 @/drwxr-xr-x 1 root root 152 Apr 12 15:31 @home/drwxrwx--t 1 root root 36 Apr 13 09:38 @samba/root@one:~# btr fi shLabel: none uuid: 3880b9fa-0824-4ffe-8f61-893a104f3567 Total devices 3 FS bytes used 54.82GiB devid 1 size 2.73TiB used 56.03GiB path /dev/sda2 devid 2 size 2.73TiB used 56.03GiB path /dev/sdc2 devid 3 size 2.59TiB used 0.00B path /dev/sdb3
btrfs raid1 not using all disks?
ubuntu;btrfs;raid1
EDIT:NOTE: The btrfs FAQ states the following, as commented by @jeff-schaller (emphasis mine):btrfs supports RAID-0, RAID-1, and RAID-10. As of Linux 3.9, btrfs also supports RAID-5 and RAID-6 although that code is still experimental.btrfs combines all the devices into a storage pool first, and then duplicates the chunks as file data is created. RAID-1 is defined currently as 2 copies of all the data on different devices. This differs from MD-RAID and dmraid, in that those make exactly n copies for n devices. In a btrfs RAID-1 on three 1 TB devices we get 1.5 TB of usable data. Because each block is only copied to 2 devices, writing a given block only requires exactly 2 devices to be written to; reading can be made from only one.RAID-0 is similarly defined, with the stripe split across as many devices as possible. 3 1 TB devices yield 3 TB usable space, but offers no redundancy at all.RAID-10 is built on top of these definitions. Every stripe is split across to exactly 2 RAID-1 sets and those RAID-1 sets are written to exactly 2 devices (hence 4 devices minimum). A btrfs RAID-10 volume with 6 1 TB devices will yield 3 TB usable space with 2 copies of all data. I do not have large enough drives on hand to test this at the moment, but my speculation is simply that, since you have relatively large drives, btrfs simply chose to write the data to the first two drives thus far. I would expect that to change in the future as more data is written to the drives.In case you are interested in my tests with smaller drives:I installed Ubuntu Server 16.04 LTS in a VM with a single SATA drive, installed the OS on a single btrfs partition.Then I added another SATA drive, partitioned it, ran btrfs device add /dev/sdb1 /, and then balanced it while converting to raid1 with btrfs balance start -dconvert=raid1 -mconvert=raid1 /I repeated for device /dev/sdc1. The result for me is the same - I have a btrfs spanning three drives. I also fallocated a 2GiB file, and it was indeed accessible from all three disks. My btrfs fi sh shows the following:Label: none uuid: cdfe192c-36da-4a3c-bc1a-74137abbb190 Total devices 3 FS bytes used 3.07GiB devid 1 size 10.00GiB used 5.25GiB path /dev/sda1 devid 2 size 10.00GiB used 5.03GiB path /dev/sdb1 devid 3 size 8.00GiB used 2.28GiB path /dev/sdc1How did you call mkfs.btrfs? What is your btrfs-progs version? # btrfs --versionbtrfs-progs v4.4I cannot reproduce your situation. What happens if you try to mount /dev/sdb3?If you have a virtual machine or a spare disk to play with partitioning, create 3 partitions and try the following.I created an Ubuntu 16.04 VM and partitioned /dev/vda into three partitions of 2GiB each.# mkfs.btrfs -d raid1 -m raid1 /dev/vda{1..3}Label: (null)UUID: 0d6278f7-8830-4a73-a72f-0069cc560aafNode size: 16384Sector size: 4096Filesystem size: 6.00GiBBlock group profiles: Data: RAID1 315.12MiB Metadata: RAID1 315.12MiB System: RAID1 12.00MiBSSD detected: noIncompat features: extref, skinny-metadataNumber of devices: 3Devices: ID SIZE PATH 1 2.00GiB /dev/vda1 2 2.00GiB /dev/vda2 3 2.00GiB /dev/vda3# btrfs fi shLabel: none uuid: 0d6278f7-8830-4a73-a72f-0069cc560aaf Total devices 3 FS bytes used 112.00KiB devid 1 size 2.00GiB used 614.25MiB path /dev/vda1 devid 2 size 2.00GiB used 315.12MiB path /dev/vda2 devid 3 size 2.00GiB used 315.12MiB path /dev/vda3Try mounting /dev/vda1, writing a file to it, then mounting /dev/vda2 or /dev/vda3 instead and checking if the file is there (It definitely should be).PS: I first tried this on Arch with btrfs-progs version 4.10.2 with the same results, but thought that probably Ubuntu 16.04 ships with an older version that might behave differently. Turns out it ships with v4.4, but it seems to behave the same in regards to filesystem creation and mirroring etc.
_unix.343374
I have a server at home, with internal IP 192.168.1.100. I use a dynamic DNS service so that I can reach it at http://foo.dynu.com when I am out. When I have my laptop at home, I know that I could directly connect to the server by adding the following line to /etc/hosts.192.168.1.100 foo.dynu.comHowever, is there a way to automatically apply this redirect only when I'm on my home network? (I usually connect via a particular wifi connection, although I occasionally connect via ethernet. If this complicates matters, then I'm happy to only set it for the wifi connection.) I use Network Manager.Also, I connect to the internet via a VPN, so presumably any configuration on my (OpenWRT) router is unlikely to work.
Can I map an internal IP address to a domain name, when on a particular network?
networking;dns;ip;vpn;hosts
As per @garethTheRed's suggestion in the comments, I created a Network Manager Dispatcher hook.Create the following file at /etc/NetworkManager/dispatcher.d/99_foo.dynu.com.sh. This progresses when a new network connection is detected (i.e. ethernet or wifi). It then identifies my home network in two ways: the BSSID/SSID and the static IP that my router assigns me. (At the moment it doesn't work when I connect via ethernet, since that's relatively rare.) It then appends the mapping to the hosts file if we are in the home network; if not, then it removes this line.#!/bin/sh# Map domain name to internal IP when connected to home network (via wifi)# Partially inspired by http://sysadminsjourney.com/content/2008/12/18/use-networkmanager-launch-scripts-based-network-location/WIFI_ID_TEST='Connected to 11:11:11:11:11:11 (on wlp3s0) SSID: WifiName'LOCAL_IP_TEST='192.168.1.90'MAPPING='192.168.1.100 foo.dynu.com'HOSTS_PATH=/etc/hostsIF=$1STATUS=$2# Either wifi or ethernet goes upif [ $STATUS = 'up' ] && { [ $IF = 'wlp3s0' ] || [ $IF = 'enp10s0' ]; }; then # BSSID and my static IP, i.e. home network if [ $(iw dev wlp3s0 link | head -n 2) = $WIFI_ID_TEST ] && [ -n $(ip addr show wlp3s0 to ${LOCAL_IP_TEST}) ]; then grep -qx $MAPPING $HOSTS_PATH || echo $MAPPING >> $HOSTS_PATH else ESC_MAPPING=^$(<<<$MAPPING sed 's/\./\\./g')$ sed -i /${ESC_MAPPING}/d $HOSTS_PATH fifi
_codereview.117560
I have made a program to take x number of die and rolls them y number of times, then stores the data into an array so that I may output a CSV file. Everything works as intended, but I am having trouble figuring out how to increase the number of die to anything substantial. Right now I am using a switch, but linearly adding code like this seems inefficient, not to mention it will crash with amounts larger than 4 die. Is there some shortcut for adding variable number of switch statements? Any other methods would work as well, I am just not clever enough to come up with any as of yet.import javax.swing.JOptionPane;public class histogram {public static void main(String[] M83cluster) { // # of die String N = JOptionPane.showInputDialog(How many dice would you like to roll?); int numofDie = Integer.parseInt(N); // # of rolls String M = JOptionPane.showInputDialog(how many times would you like to roll?); int numofRolls = Integer.parseInt(M); int maxValue = numofDie*6; int[] taco = new int[maxValue]; // for every die there will be at most 6 values. // rolls the die and obtains a value. for (int i=0;i<numofRolls; i++) { int oneTotalRoll = 0; for (int k=0;k<numofDie; k++) { oneTotalRoll += (int)(1+6*Math.random()); } //int oneTotalRoll = (int) (valueofDice * numofDie); System.out.println(ROLL: + oneTotalRoll); // for each roll, increment taco[] array. switch (oneTotalRoll) { case 4: taco[0] += 1; break; case 5: taco[1] += 1; break; case 6: taco[2] += 1; break; case 7: taco[3] += 1; break; case 8: taco[4] += 1; break; case 9: taco[5] += 1; break; case 10: taco[6] += 1; break; case 11: taco[7] += 1; break; case 12: taco[8] += 1; break; case 13: taco[9] += 1; break; case 14: taco[10] += 1; break; case 15: taco[11] += 1; break; case 16: taco[12] += 1; break; case 17: taco[13] += 1; break; case 18: taco[14] += 1; break; case 19: taco[15] += 1; break; case 20: taco[16] += 1; break; case 21: taco[17] += 1; break; case 22: taco[18] += 1; break; case 23: taco[19] += 1; break; case 24: taco[20] += 1; break; case 25: taco[21] += 1; break; } } System.out.println(-------); String gorgon = null; // prints outcome for (int g=0; g<maxValue ; g++) { String gigabolt = (taco[g] + ,); gorgon += gigabolt; // System.out.print(gigabolt); } if (gorgon.endsWith(,)) gorgon = gorgon.substring(4, gorgon.length() - 1); System.out.print(gorgon);}}
Java rolling dice + csv array output
java;beginner;dice
null
_datascience.14668
I have a basic understanding of javascript, and know hardly any other programming language. Am I bound to face some issues in the field of neural networks and machine learning because of this? Should I learn something else for the sake of avoiding some inherent weaknesses of the language? I am most worried about the capacity of javascript to handle data, rather than its possibilities regarding the textual implementation of the algorithms per se...Thank you
Limits of Javascript on the implementation of AI algorithms
machine learning;neural network;javascript
null
_webmaster.72428
I'm trying to host a blog on Blogger with a custom domain.It's all good except the annoyance of many workplaces blocking the Blogger website using web-filters, including mine.This is causing the blog not to render/function properly.I can see in the Google Chrome developer tools that even though it's on my custom domain, it ask for a lot of resources from blogger.com, e.g., https ://www.blogger.com/static/v1/widgets/1535467126-widget_css_2_bundle.css https ://www.blogger.com/dyn-css/authorization.css?targetBlogID=12418145&zx=54164c77-4da2-4fc7-b3b6-5cf274c9c3b9https ://www.blogger.com/static/v1/widgets/2885176887-widgets.jsI'd like to replace these with custom links. Having said that, I don't see any reference to these in my template file.
Blogger's embedded CSS and JavaScript
blogger;javascript;css
null
_unix.156434
I'm looking for a way to change the hashing scheme on my Debian-based OS from sha512 to pbkdf2.Searching the internet hasn't helped much. The closest I've got is this question: Enable Blowfish-based Hash Support for CryptHowever, as pbkdf2 is not Blowfish-based, I'm back to square one.
Change password hashing to PBKDF2 on Debian-based OS
debian;security;password
null
_webapps.14839
I have both Facebook and Twitter accounts. I also use Linkedin.If I want a tweet to appear in my LinkedIn profile, I add the hashtag #in.Is there a way to do something similar with feeding Facebook from Twitter, so that not every tweet gets copied?
When connecting Twitter and Facebook, can posts be filtered?
facebook;twitter;filter;linkedin
You can use an application like Selective Tweets or TweetPo.st so your tweets ending with hashtag #fb will be automatically imported to Facebook, while all others will be ignored.
_unix.66008
Is there an application that will set an audible alarm when the battery goes low on a centos 6 IBM 430 machine?Details of the problem here:I have set it hibernate when battery is low. However, it does not hibernate possibly because most of the time I am working in a full screen windows VM that prevents it from going to hibernate when I try to do it manually.Since I am not able to see the battery levels (I am working in a full screen windows VM), the nmachine switches off when battery goes low. So I lose all my data and in one instance, the VMs (I am running windows and ubuntu) got corrupted.
any utility on centos 6 to sound an alarm when power is low
centos;power management;hibernate
A quick google throws this up:Check your battery status from the command lineA suggestion might be to parse out the percentage remaining of the battery charge, and play a sound when it falls to a certain low water mark/threshold. You could then run this from cron/at every few minutes or so. Very rudimentary but...cheerssc.
_unix.386335
I can't drag and drop files to floppy disk. I execute the following command which didn't help:sudo adduser eric floppy groupsudo adduser eric rootI can not drag and drop. The error I get is:only root can do thatHow can I solve it?
drag and drop from desktop to floppy disk doesn't work
files;permissions;root;floppy
null
_webmaster.12794
I have deluxe linux shared hosting with godaddy, and it supports perl. but i dont know how to setup and use perl scripts.Also i have no folder named CGI or CGI-BIN or anything like that.As movable type requires me to put my content in the CGI folder, i cant. creating it also doesn't work.So can you guide me on how to install movable type on godaddy. Also i dont want to spend money and install it from the godaddy hosting connection.And please remember as i'm on a shared hosting, i might not be able to edit my apache server configuration file.Thanks.
Movable Type on GoDaddy
godaddy;movabletype
null
_cogsci.8661
I am using nilearn and nipy package for python processing FMRI data. When computing mask, it says: Compute and write the mask of an image based on the grey level This is based on an heuristic proposed by T.Nichols: find the least dense point of the histogram, between fractions m and M of the total image histogram.and This is based on an heuristic proposed by T.Nichols: find the least dense point of the histogram, between fractions lower_cutoff and upper_cutoff of the total image histogram.In both masking functions of nilearn and nipy. Who is T.Nichols? I wasn't able to google him/her out.here are the links to the functions: http://nipy.org/nipy/stable/api/generated/nipy.labs.mask.htmlhttps://nilearn.github.io/building_blocks/generated/nilearn.masking.compute_epi_mask.html
Reference request for creator of heuristic for processing fMRI data (T.Nichols)
reference request;fmri
I know very little about fmri, but as @strongbad points out, surely it is Professor Thomas Nichols at Warwick.I'm not sure what the authoritative reference is, but Luo and Nichols (2003) might be worth a look. They state:We construct a histogram based on all non-tail data (10th to 90th percentile) and use the location of the minimum bin as the antimode estimate. Luo, W. L., & Nichols, T. E. (2003). Diagnosis and exploration of massively univariate neuroimaging models. NeuroImage, 19(3), 1014-1032. http://www-personal.umich.edu/~nichols/Docs/fMRIvis.pdf
_webmaster.22495
Possible Duplicate:Google analytics - drop in traffic Bit of a general question here. We are in the process of converting a number of our clients from older web sites to new ones. The problem we are getting, and sorry for being so general here, is we are getting a sharp decline in traffic as reported on Google Analytics. It's not a gradual decline, it seems to hit almost as soon as the new site goes live. I've just got a few questions to see if there is something we are doing wrong:a) We are using the same analytics accounts going from old to new site. Is this a bad idea?b) The actual analytics code is integrated into the pages using a server-side include. IS this a bad idea?c) We structure our sites differently to our old site. IE. The old sites would pretty must have all the web pages in the root directory, and hyperlinks would be linked to the page files:EG. <a href=somepage.aspx>Link</a>Our new sites now have a directory structure that pretty much reflects the navigation structure, and hyper links link to the pages directory instead of the actual page:EG. <a href=/new-items/shoes/>New shoes</a>Is this a bad idea. I'm really searching for a needle in a haystack here. Would appriciate any help or advice as to why we are getting such a sharp and sudden drop in traffic.
Google analytics - drop in traffic
asp.net;google;google analytics;traffic;web traffic
null
_unix.178555
I'm adding multiple new system calls to kernel.I want to test my custom kernel by making an bootable ISO out of it trying to boot on another machine.As a part of making this bootable ISO, I got hold of the Ubuntu 14.04 bootable ISO and replaced the vmlinuz.efi in the Ubuntu14.04ISO/casper with the bzImage produced after the kernel build.This ISO didn't boot successfully. I guess I need to make a new inrd too and found commands like mkisofs but it requires to have the custom kernel installed on my machine, which I can't do as it's a common build server.Questions:What all files in the ISO have to changed to make it boot my custom kernel.
Change the kernel in downloaded Ubuntu Image
ubuntu;linux kernel;compiling;iso;initrd
null
_softwareengineering.345637
I have a binary file that I want to parse. The file is broken up into records that are 1024 bytes each. The high level steps needed are:Read 1024 bytes at a time from the file.Parse each 1024-byte record (chunk) and place the parsed data into a map or struct.Return the parsed data to the user and any error(s).Due to I/O constraints, I don't think it makes sense to attempt concurrent reads from the file. However, I see no reason why the 1024-byte records can't be parsed using goroutines so that multiple 1024-byte records are being parsed concurrently. I'm new to Go, so I wanted to see if this makes sense or if there is a better (faster) way:A main function opens the file and reads 1024 bytes at a time into byte arrays (records).The records are passed to a function that parses the data into a map or struct. The parser function would be called as a goroutine on each record.The parsed maps/structs are appended to a slice via a channel. I would preallocate the underlying array managed by the slice as the file size (in bytes) divided by 1024 as this should be the exact number of elements (assuming no errors).This appears to be a producer with multiple consumers (at least the way I'm thinking about it). I am aware of an example of this pattern in Go, but I'm not sure if this changes when reading contiguously from a file (it seems concurrent reads would slow things down, so only one producer, but many consumers parsing could speed things up—but I need to make sure I don't run out of memory also).
Concurrently parsing records in a binary file in Go
concurrency;parsing;io;golang
null
_webapps.60020
The downloaded Facebook archive has a messages.htm file in it with my Facebook messages. But I know for sure that a lot of them are missing. My top conversation with about 65k messages has exactly 10000 messages in the messages.htm file. At the end of this conversation it says:<div class=warning>We were unable to download the remainder of this conversation.</div>In the past you had the ability to download an extended archive of your Facebook data but that doesn't seem to exist anymore. Is that right?So how can I get the messages.htm file containing all my Facebook messages?
Downloaded Facebook archive doesn't contain all messages
facebook;export
null
_unix.366385
I am trying to transcode an audio of video. I have a live stream(video-h264, audio-mp2). I need to convert an audio to aac codec and stream it. I don't want to waste a lot of resource for video processing. How can I do it with ffmpeg? (I have already tried with copy option.)
How to transcode audio of video with ffmpeg?
ffmpeg
null
_unix.336104
I've recently moved to a place with public wifi (so I don't have access to the router or their DHCP config), and am running into issues connecting with my Arch laptop.I've tried using both NetworkManager and netctl to connect, but both fail at getting a DHCP lease. It should be noted that every other device (Android and iOS phones, Windows and macOS laptops) do so without problems.How do I go about debugging this? Am I missing a package, or am I connecting wrong?NetworkManagerI use nmcli to connect:$ nmcli dev wifi* SSID MODE CHAN RATE SIGNAL BARS SECURITY ssidOfWifi Infra 1 54 Mbit/s 52 __ WPA2 ssidOfWifi Infra 13 54 Mbit/s 34 __ WPA2 ssidOfWifi Infra 13 54 Mbit/s 22 ___ WPA2 $ nmcli dev wifi connect ssidOfWifi password passwordToWifiError: Connection activation failed: (5) IP configuration could not be reserved (no available address, timeout, etc.).$ systemctl status NetworkManager...Jan 09 17:49:43 home NetworkManager[5621]: <info> [1483980583.9385] device (wlp2s0): Activation: (wifi) Stage 2 of 5 (Device Configure) successful. Connected to wireless network 'ssidOfWifi'.Jan 09 17:49:43 home NetworkManager[5621]: <info> [1483980583.9386] device (wlp2s0): state change: config -> ip-config (reason 'none') [50 70 0]Jan 09 17:49:43 home NetworkManager[5621]: <info> [1483980583.9390] dhcp4 (wlp2s0): activation: beginning transaction (timeout in 45 seconds)Jan 09 17:50:29 home NetworkManager[5621]: <info> [1483980629.0055] dhcp4 (wlp2s0): state changed unknown -> timeoutJan 09 17:50:29 home NetworkManager[5621]: <info> [1483980629.0214] dhcp4 (wlp2s0): canceled DHCP transactionJan 09 17:50:29 home NetworkManager[5621]: <info> [1483980629.0215] dhcp4 (wlp2s0): state changed timeout -> doneJan 09 17:50:29 home NetworkManager[5621]: <info> [1483980629.0220] device (wlp2s0): state change: ip-config -> failed (reason 'ip-config-unavailable') [70 120 5]Jan 09 17:50:29 home NetworkManager[5621]: <info> [1483980629.0223] manager: NetworkManager state is now DISCONNECTEDJan 09 17:50:29 home NetworkManager[5621]: <warn> [1483980629.0233] device (wlp2s0): Activation: failed for connection 'ssidOfWifi'Jan 09 17:50:29 home NetworkManager[5621]: <info> [1483980629.0319] device (wlp2s0): state change: failed -> disconnected (reason 'none') [120 30 0]Jan 09 17:50:29 home NetworkManager[5621]: <info> [1483980629.0421] device (wlp2s0): set-hw-addr: set MAC address to AA:BB:CC:DD:EE:FF (scanning)Jan 09 17:50:29 home NetworkManager[5621]: <warn> [1483980629.0453] sup-iface[0x1d5ec00,wlp2s0]: connection disconnected (reason -3)Jan 09 17:50:29 home NetworkManager[5621]: <info> [1483980629.0454] device (wlp2s0): supplicant interface state: completed -> disconnectedNetctlI use wifi-menu -o to connect. This shows only one ssidOfWifi, unlike nmcli which shows one for each accesspoint.$ sudo wifi-menu -oJob for netctl@wlp2s0\x2dssidOfWifi.service failed because the control process exited with error code.See systemctl status netctl@wlp2s0\\x2dssidOfNetwork.service and journalctl -xe for details.$ journalctl -xe...Jan 09 23:10:34 home dhcpcd[14402]: wlp2s0: soliciting a DHCP leaseJan 09 23:11:03 home dhcpcd[14402]: timed outJan 09 23:11:03 home dhcpcd[14402]: dhcpcd exitedJan 09 23:11:03 home network[14363]: DHCP IPv4 lease attempt failed on interface 'wlp2s0'Jan 09 23:11:03 home kernel: wlp2s0: deauthenticating from AA:BB:CC:DD:EE:FF by local choice (Reason: 3=DEAUTH_LEAVING)Jan 09 23:11:03 home network[14363]: Failed to bring the network up for profile 'wlp2s0-ssidOfWifi'Jan 09 23:11:03 home systemd[1]: netctl@wlp2s0\x2dssidOfWifi.service: Main process exited, code=exited, status=1/FAILURE
How to debug DHCP timeout on WiFi connect
wifi;networkmanager;dhcp;netctl
null
_softwareengineering.286457
I recently start to work on an application that drive different measurement device.Before the user start a measure, she sets the parameters of it.Actually, considering all measurements type there are 50+ parameters.The difficulty here is that every settings depends on others for:Being availableList of available valuesso on..Moreover, some measurement settings depend on previous measure results and settings.To make it short : we have a lot of stuff that are interconnected.The actual pattern is to validate everything as soon as a values is changed. It cost a lot (in time) and we are going to add a lot of more parameters : it will break.We try to implement a pattern where we use ObservableValues and where all parameters register on all values it depend on.It became hard when the parameters depend on an other reference measure. If the reference change, we have to stop listening on the previous reference and start to listen on the new reference.Etc...An other issue is that when we work on our pattern and we had more capabilities (like serialization), or when we had some helper class (like factory), we build big files with 50+ parameters or functions.Is there any other good pattern or library to do it ?
What are the most used pattern to manage a lot of interconnected parameters?
design;design patterns;code quality;code reuse
I would suggest:divide and conquer the problemuse components and keep each component focused on its main purposedecouple you system by using an event aggregator (EA)use interfaces where an EA doesnt make senseI would try to split the big problem in smaller problems and then try to solve them by its own. The smaller problems can be solved easier by specialized components, if they are simple enough. Each component should keep its focus on its main purpose.To keep a loose coupling around components I would use an event aggregator (EA) (main purpose: notify listeners). (e.g.: Reactive Extensions, Caliburn Micro, or the Prism EventAggregator, if in .NET)I would use simple parameter classes to keep values (main purpose: handle values).I would group parameters in a some kind of tree class (main purpose: provide parameters).The tree has to be build or updated. For this I would use one tree-builder component (main purpose: build/update tree).The tree-builder needs to be notified when to react -> EA notificationSome consumer of the tree needs to be notified after the tree has changed -> EA notification...If the components are not coupled tightly, then the system can be scaled or changed when necessary. I.e.: If the building process is far too complex, then it is possible to use multiple tree-builder classes instead one. To achive this, an additional component - a tree-builder-factory - would be added. It would react on tree build/update events by providing the adequate tree-builder.
_cs.33633
How would I prove that the regular expressions RS and SR where R = (0 + 1) and S = (0 + 1)* are equivalent? The '+' sign represents union of two regular expressions and two expressions RS are concatenated.From what I know, either you must prove using existing rules about equivalence of regular expressions (example: commutativity or associativity of union) or you must prove that L(RS) is a subset of L(SR) AND L(SR) is a subset of L(RS) where L is the language denoted by the regular expression in the parentheses. However, I am struggling to come up with a proof. Could someone give me a hint?
Proof of equivalence of regular expressions (0 + 1)(0 + 1)* and (0 + 1)*(0 + 1)
proof techniques;regular expressions
Let's go with your second approach. First note that $R^n$, for any nonnegative integer $n$, is in $L(S)$.If the string $w$ is in the first language, $L(RS)$, then we know that $w$ can be written as the concatenation of some string in $R$ and some string in $S$. This concatenation is denoted $(0 + 1)(0 + 1)^n$ (for some nonnegative integer $n$), which equals $RR^n$ which equals $R^{n+1}$, which equals $R^nR$, which we know is in the language $L(SR)$ (since $S=R^*$).The reverse direction is more or less the same thing.Edit:Another interesting way of proving this is to prove $L(RS) = L(S)$ and $L(SR) = L(S)$. To prove $L(RS) = L(S)$, take any string in $L(RS)$. It can be written as $RR^n$ for some nonnegative integer $n$. Now this equals $R^{n+1}$, which is of course in $L(S)$. So we have that $L(RS)\subseteq L(S)$. Now take any string in $L(S)$. It can be written as $R^m$ for some nonnegative integer $m$. This is equivalent to $RR^{m-1}$, which is in $L(RS)$, so we know that $L(S)\subseteq L(RS)$.Therefore we have $L(S)=L(RS)$.We can prove $L(S)=L(SR)$ in a very similar way. Therefore $L(RS)=L(SR)$.The actual difficulty and intuition behind the proof is fairly trivial, but putting the proof into words is harder. The proof amounts to proving some kind of exponent law for the Kleene star. We don't have an actual exponent law for the Kleene star, so the key part of the proof is simply isolating a single arbitrary string in the language, and then we have the finite integer exponents and we can just use the normal exponent laws.
_unix.20452
I lost my connection while I was logged via SSH into my university server. Classic.Now I can't log in since the session appears to be still running and I get the error Too many logins for 'myuser'. (Only 1 login for each user is allowed)Is there a way to recover the session not having another access to the server (I can't reach any sysadmin until monday) - or the only way is just to wait for the session to time out? Typically how long should I wait? More than an hour has already passed.
How do I recover/kill an SSH session after losing connection?
ssh;networking
You could try running something like ssh -n remuser remhost kill -HUP -1. This would not create a login, so it might bypass the 1 login/user limitation.If this does not work, then you might have to find someone who does have access, then run su remuser with your password from that person's account. Then you'd be able to run kill -HUP -1.
_unix.261302
I am getting this output from rpm -Kv on one of my experimental packages:clime@coprbox ~/v4tests $ rpm -Kv signed10.rpmsigned10.rpm: Header V3 RSA/SHA1 Signature, key ID f67e1676: NOKEY Header SHA1 digest: OK (6289e7d8d0a73be107945df48cefb762a5036eb1) V3 RSA/SHA1 Signature, key ID f67e1676: BAD MD5 digest: OK (3c8cafddad94a1e75adf52c59203cd3a)Now, there are two lines that mention signature: Header V3 RSA/SHA1 Signature, key ID f67e1676: NOKEY V3 RSA/SHA1 Signature, key ID f67e1676: BADWhat is the meaning of the first line and what is the meaning of the second?
analyzing rpm -Kv output
rpm;signature
null
_softwareengineering.322396
For example, I have a clan and a character. There's a character that is the leader. To give the clan a specific feature, some money from the character is required.I don't want to have too much tight coupling. Right now I have a member in the clan class like this:bool clan::give_rank(character* chr, int rank){ if (!is_leader(chr.id()) || !chr->has_money(500)) return false; this->rank_ = rank; chr->take_money(500); return true;}Is this tight coupling? or maybe I should have a secondary class like a clan_mgr that connects both classes? bool clan_mngr::give_rank(character* chr, int rank){ clan* myclan = chr->get_clan(); if (!myclan || !myclan->is_leader(chr.id()) || !chr->has_money(500)) return false; myclan->rank_ = rank; chr->take_money(500); return true;}// Or maybe this one, which looks even worse imo:bool character::give_rank_to_clan(int rank){ clan* myclan = chr->get_clan(); if (!myclan || !myclan->is_leader(id()) || !has_money(500)) return false; myclan->rank_ = rank; TakeMoney(500); return true;}
is this a good design
design;c++;object oriented design
null