_unix.163466
I am using the following command line to analyse data: unpackdcm -scr ${in} -targ ${out}. This command prints status and progress information about the job to the screen. In order to save that status I did the following: unpackdcm -scr ${in} -targ ${out} > stat.txt. But it did not work! Kindly, what is wrong?
saving the output of command line in a text file?
shell;scripting;io redirection
The > sign represents an I/O redirection. With > stat.txt you redirect the standard output (stdout) of the application to the file stat.txt. Because it is redirected, you will not see any output in the shell. If you want the output in the current shell AND in the file, pipe the output into tee: your_command | tee stat.txt. Or use your_command | tee -a stat.txt to append to the file. Your application may also produce some errors. Those mostly appear on standard error (see standard streams). To redirect that stream, use the following syntax: your_command 2> error.log
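A quick self-contained illustration of the three redirections described above, using echo and ls as stand-ins for the real command:

```shell
# stdout to a file only (nothing appears in the shell):
echo "status: 50% done" > stat.txt
# stdout to the terminal AND appended to the file:
echo "status: done" | tee -a stat.txt
# stderr to its own file (ls on a missing path writes to stderr):
ls /no/such/path 2> error.log || true
```

After this runs, stat.txt holds both status lines, while error.log captures only the error text.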
_codereview.65028
I've written some code that allows Unity3D's inspector to display fields that conform to an interface. Unity has some quirks about its inspector, so as a preface they are listed here:If you add a [Serializable] attribute to a class, Unity's Inspector will attempt to show all public fields inside that class.Any class extending MonoBehaviour automatically has the [Serializable] attribute.Unity's inspector will attempt to display any private field with the [SerializeField] attribute.Unity's inspector will not attempt to display generic types or interfaces, with the exception of List<T>, which is hard-coded.Unity's inspector will not attempt to display properties. A common workaround is to have a private backing field for your property with [SerializeField] attached. Setters won't be called on the value set in the inspector. It's typically only set pre-compilation time, although a developer can modify values in the inspector during runtime. Currently it is acceptable to me to only take the initial value, although if anybody has a simple and efficient way to update successfully, I'd be happy to hear it.Unity has a PropertyDrawer class you can extend to control how a type is displayed in the inspector. 
The PropertyDrawer for an interface or generic type will be ignored.The codeUnityInterfaceHelper.cs[Serializable]public class UnityInterfaceHelperBase{ [Tooltip("The component that is of the type required.")] [SerializeField] public Component target;}[Serializable]public class UnityInterfaceHelper<TInterface> where TInterface : class{ public TInterface TargetAsInterface { get { if (targetAsInterface == null) { targetAsInterface = target as TInterface; } return targetAsInterface; } set { if (targetAsInterface != value) { targetAsInterface = value; if (value as Component != null) { target = targetAsInterface as Component; } } } } [Tooltip("The component that is of the type required.")] [SerializeField] private Component target; private TInterface targetAsInterface; public static implicit operator UnityInterfaceHelper<TInterface>(UnityInterfaceHelperBase b) { return new UnityInterfaceHelper<TInterface>() { target = b.target }; } public static implicit operator UnityInterfaceHelperBase(UnityInterfaceHelper<TInterface> b) { return new UnityInterfaceHelperBase() { target = b.target }; }}UnityInterfaceHelperPropertyDrawer.cs[CustomPropertyDrawer(typeof(UnityInterfaceHelperBase), true)]public class UnityInterfaceHelperPropertyDrawer : PropertyDrawer{ public override void OnGUI(Rect position, SerializedProperty property, GUIContent label) { label = EditorGUI.BeginProperty(position, label, property); position = EditorGUI.PrefixLabel(position, GUIUtility.GetControlID(FocusType.Passive), label); EditorGUI.PropertyField(position, property.FindPropertyRelative("target"), GUIContent.none); EditorGUI.EndProperty(); }}Typical usage[SerializeField]private UnityInterfaceHelperBase itemComparable;public IComparable ItemComparable{ get { return ((UnityInterfaceHelper<IComparable>)itemComparable).TargetAsInterface; } set { ((UnityInterfaceHelper<IComparable>)itemComparable).TargetAsInterface = value; }}public void CompareItems(){ if(ItemComparable.CompareTo("Hello") == 0) { Debug.Log("Hello world!"); }}I'm looking to reduce the amount of code I have to repeat for every property; it's already down quite a way, but I'm hoping to make it as simple as possible. Any other comments on the code are welcome too.
Inspector interface serializer
c#;gui;serialization;unity3d
Possible bug: if you get the value, targetAsInterface is initialized. One can then set the value to something that is not a Component, like null. If you then get the value again, you'll get the old value of target recast to TInterface. Seems to me that this violates what get and set are supposed to do.
_unix.348181
If I am in /1/2/3 I want the prompt to show: user /3: If I am in / I want the prompt to show: user /: This does not work (when in /1/2/3, it shows no slash): PS1="\u \W: " This does not work (when in /, it shows a double slash): PS1="\u /\W: " What should I do?
Always show slash before directory name in prompt
bash;prompt
You could always use:PS1='${USER=$(LOGNAME)} /${PWD##*/}: '(which would also work in most other Bourne-like shells).
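A sketch of why this works, using an illustrative variable in place of PWD: the parameter expansion ${PWD##*/} strips everything up to the last slash and expands to empty when PWD is /, so the literal slash written into the PS1 covers both cases:

```shell
# Hypothetical stand-in for PWD to show the expansion on its own:
dir=/1/2/3
echo "/${dir##*/}"   # the ##*/ removes the longest */ prefix, leaving "3"
dir=/
echo "/${dir##*/}"   # the whole "/" is removed, leaving the literal slash only
```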
_softwareengineering.164709
So, how exactly do software patches work? If there is a certain bug in the source code of a program, how is this source code changed when one installs a patch? After the patch has been installed, how is the program 'automatically' rebuilt?
How do software patches and updates work?
software patches
null
_softwareengineering.167093
We're working on error handling in an application. We try to have fairly good automated test coverage. One big problem, though, is that we don't really know of a way to test some of our error handling. For instance, we need to test that whenever there is an uncaught exception, a message is sent to our server with exception information. The big problem with this is that we strive to never have an uncaught exception (and instead have descriptive error messages). So, how do we test something that we never want to actually happen?
What is the best way to go about testing that we handle failures appropriately?
.net;testing;exceptions;error handling;test automation
null
_softwareengineering.332220
In the context of MVC I sometimes find myself creating a Factory and injecting the factory with a Repository.While it is certainly possible to use the Repository as a layer inside the Factory, I wonder if it is an anti-pattern to do so, i.e. is it advisable to keep the Factory Repository-free?ExampleFor example, my QuoteFactory class is tasked with: creating the Quote class, maintaining Repository methods & generally being Repository-awareCode$repository = new QuoteRepository(111);$factory = new QuoteFactory($repository); $quote = $factory->loadQuote();class QuoteRepository{ function __construct($id) { $this->id = $id; $this->em = DoctrineConnector::getEntityManager(); } function getLineItems() { $query = $this->em ->createQuery('SELECT s FROM... where id = :id') ->setParameter('id', $this->id); return $query->getResult(); }}class QuoteFactory{ function __construct(QuoteRepository $repository) { $this->repository = $repository; } function loadQuote() { $quote = new Quote(); $quote->setLines($this->repository->getLineItems()); return $quote; }}class Quote{ function setLines(array $lines) { $this->lines = $lines; }}
Am I breaking SRP when I inject Factory pattern with Repository layer?
php;mvc;repository;anti patterns;factory method
While in general injecting something is not bad and does not automatically result in breaking the SRP (nor does it in your case - you have one class which only fetches data and another constructing an object from it), you have a different problem: a wrong understanding of layering and abstraction. The repository layer is the one to bind data to your domain models; you should not need another layer to do that. Not to mention your solution is overengineered. Simply construct the quote directly in the repository; unless you have a really good reason for doing otherwise, there is no need for a factory. From your comment below: What do you mean by binding data to domain models? It sounds too abstract and I can't find a way to understand it. Can you give an example? You have pure data which somehow finds its way into your system. The ways may include: SOAP API, REST API, database, contents from a read file, ... This data is just data and nothing else; it contains no rules. You then have your business core (the fun parts of the application, where the rules are), your domain. The problem is your domain does not understand the pure data. In order to understand the pure data, the data shall be transformed to be represented by business objects, domain models. Also overall you are saying, effectively merge Factory and Repository together (into Repository) but keep Quote as a separate concept, or should Quote since it contains data be bound to Repository as well? I am not saying you shall merge the factory and repository together; I am saying you should remove the factory completely and instantiate a Quote object directly in the getLineItems method. And since we're already talking about it, it might be wise to rename the method to something better, such as getQuoteWithLineItemsById. Also, the Quote shall have no direct ties to the repository. Why? 
Repository acts as a gateway to your system - as I have already mentioned - by taking pure data and transforming it to objects. The proposed design:class QuoteRepository{ /** * @var \Doctrine\ORM\EntityManager */ private $entityManager; public function __construct(\Doctrine\ORM\EntityManager $entityManager) { $this->entityManager = $entityManager; } /** * @param string $quoteId * @return Quote */ public function getQuoteWithLineItemsById($quoteId) { $query = $this->entityManager ->createQuery('SELECT s FROM... where id = :id') ->setParameter('id', $quoteId); $quote = new Quote(); $quote->setLines($query->getResult()); return $quote; }}class Quote{ private $lines = []; public function setLines(array $lines) { $this->lines = $lines; }}$quoteRepository = new QuoteRepository(DoctrineConnector::getEntityManager());$quote = $quoteRepository->getQuoteWithLineItemsById('1');Your repository is now responsible for transforming the data retrieved from the database to a domain model, the Quote. It knows nothing about business logic; the business logic shall be within the domain model. Besides transforming raw data to business-understandable entities, the repository layer also exists for another reason: transforming business-understandable entities back to raw data for persistence purposes. So in effect, I can end up with a single QuoteRepository class that will contain my data, have business domain functionality and read/write to/from the database? No. You will end up with the repository layer only responsible for reading/writing from/to the database and transforming the data either one way or another. Then you will have another layer (the domain) which is persistence ignorant, knows absolutely nothing about SQL, is pure PHP (most likely classes), and contains all your business rules - e.g. a username must not be empty or longer than 32 characters. When dealing with business operations your domain models ensure your business rules are preserved. 
If a domain model exists and is sent to a repository to be saved, the repository no longer cares about the state of the domain model, because it simply trusts that the domain model is in a valid state. The responsibility of the repository is to save the model, not to check its state.
_unix.74961
Is it a bad idea to do usermod -l login-name old-name to change my username while leaving my home directory name intact? A few years ago my university changed my username, but since it didn't affect anything, I did not change my local username. Now in order to use our centralized printers, the local username must match the university username (or so they claim). The reason I don't want to change my home directory is twofold. I think there are a number of scripts that have my username hardcoded into them. I think a change in my home directory name might throw my backup and revision control systems into a state of chaos.
Changing username but not home directory
users
There are no purely technical reasons. It might create some confusion in cases where the USER environment variable is consulted while either $HOME, getpwuid(getuid()) or something similar should have been used. By the way, you can even have multiple usernames assigned to the same UID - locally this is achieved by duplicating the appropriate lines in /etc/passwd and /etc/shadow and updating /etc/group accordingly. The ownership of files doesn't change (usually the first username found in /etc/passwd for the corresponding UID is displayed) and you can use any of the usernames you decide.
_unix.286580
When I restart my Red Hat machine, the screen only shows the snapshot page, without the login page. But I can use SSH to log in and start VNC. VNC then also works well, with a normal X screen. Does any expert know how to fix this? Thank you in advance.
Red Hat restart not show correct login page?
linux;rhel
Thanks to @Ankidaemon's help, I finally found that the issue was caused by an upgrade of OpenSSL which was missing 2 important libraries, one being libssl.so.10 and the other libcrypto.so.10. I then checked my /usr/lib64 and found different versions of the above-mentioned libraries. The solution is to soft-link them:# ln -s /usr/lib64/libssl.so.1.0.0 libssl.so.10# ln -s /usr/lib64/libcrypto.so.1.0.0 libcrypto.so.10Also pasting one reference link (in Chinese): http://www.heminjie.com/system/linux/1766.html
_codereview.37536
I need to display a list of category links, like so: Category 1, Category 2, Category 3. I've got it working already, but the code seems pretty repetitive and a bit of a mess, so I was wondering if there was a better way of doing it. This is what I've got so far:for (var i = 0; i < data['categories'].length; i++) { var comma = document.createTextNode(', '); var link = document.createElement('a'); link.setAttribute('href', '#journal__category--' + data['categories'][i]['url_title'] ); link.setAttribute('class', 'js--page__link'); link.innerHTML = data['categories'][i]['category_name']; document.getElementById('js--journal__categories').appendChild(link); if( (i + 1) != data['categories'].length ) { document.getElementById('js--journal__categories').appendChild(comma); }}
Better way to display list of categories
javascript
Here's how I might try to clean things up:refer to object properties without bracket notation where possiblee.g. data.categories.lengthcreate a loop variable for the current category, which allows you to reference it nicely without data.categories[i]for (var i = 0; i < data.categories.length; i++) { var category = data.categories[i];}don't look up the container on every iteration, move that outside the loop:var container = document.getElementById('js--journal__categories');for (...loop...) { ...create your node... container.appendChild(link);}same for creating the comma node, create it outside the loop (note that a single text node can only live in one place in the DOM, so clone it each time you append it).you don't need to wrap i + 1 in parentheses, and this is just my preference, but you should be consistent with spaces. You could also use < instead of != which is more consistent with the for loop. (These are fairly minor nits)if (i + 1 < data.categories.length)Why are you using double dashes and underscores in your element IDs? Comments are helpful to explain what your for loop and if statement is doing, since it's probably easier/quicker to understand the comment than figure out the for loop.Maybe I'm getting overkill at this point, but a way to write really descriptive, easy to read code is to break it into functions whose name makes sense:function createCategoryElement(name, url_title) { ...create your anchor element here... ...set its attributes and content, etc... 
return element;}I ended up with something like this:var data = { categories: [ {name: 'one', url_title: 'oneUrl'}, {name: 'two', url_title: 'twoUrl'} ],};var container = document.getElementById('container');var comma = document.createTextNode(', ');function createCategoryElement(name, url) { var urlBase = '#journal-category-'; var cssClass = 'js-page-link'; var el = document.createElement('a'); el.setAttribute('href', urlBase + url); el.setAttribute('class', cssClass); el.innerHTML = name; return el;}// Create HTML elements for each category and append them to the DOM.for (var i = 0; i < data.categories.length; i++) { var category = data.categories[i]; var categoryElement = createCategoryElement(category.name, category.url_title); container.appendChild(categoryElement); // Join categories with a comma (clone the node: appending the same // text node twice would just move it, leaving only the last comma) if (i + 1 < data.categories.length) { container.appendChild(comma.cloneNode()); }} for (...loop over categories...) { ... var categoryElement = createCategoryElement(category.name, category.url_title); container.appendChild(categoryElement); if (i + 1 < data.categories.length) { container.appendChild(comma.cloneNode()); } }
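As a hypothetical alternative (not part of the answer above): building the markup as a single string with map/join sidesteps the comma bookkeeping entirely. Note that category names would need escaping if they can contain HTML:

```javascript
var categories = [
  {name: 'one', url_title: 'oneUrl'},
  {name: 'two', url_title: 'twoUrl'}
];

// Build one anchor tag's markup for a category object.
function categoryLink(c) {
  return '<a href="#journal-category-' + c.url_title +
         '" class="js-page-link">' + c.name + '</a>';
}

// join(', ') inserts the separator only between items, never after the last.
var markup = categories.map(categoryLink).join(', ');
```

In a browser this would then be assigned once, e.g. container.innerHTML = markup;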
_scicomp.10094
There are two general approaches to representing solutions in the discontinuous Galerkin method: nodal and modal.Modal: Solutions are represented by sums of modal coefficients multiplied by a set of polynomials, e.g. $u(x,t) = \sum_{i=1}^N u_i(t) \phi_i(x)$ where the $\phi_i$ are usually orthogonal polynomials, e.g. Legendre. One advantage of this is that the orthogonal polynomials generate a diagonal mass matrix.Nodal: Cells are comprised of multiple nodes on which the solution is defined. Reconstruction of the cell is then based on fitting an interpolating polynomial, e.g. $u(x,t) = \sum_{i=1}^N u_i(t) l_i(x)$ where $l_i$ is a Lagrange polynomial. One advantage of this is that you can position your nodes at quadrature points and quickly evaluate integrals.In the context of a large-scale, complex ($10^6$-$10^9$ DOFs) 3D mixed structured/unstructured parallel application with goals of flexibility, clarity of implementation, and efficiency, what are the comparative advantages and disadvantages of each method?I'm sure there's good literature already out there, so if someone could point me to something that'd be great as well.
Discontinuous Galerkin: Nodal vs Modal advantages and disadvantages
fluid dynamics;discontinuous galerkin
The tradeoffs below apply equally to DG and to spectral elements (or $p$-version finite elements).Changing the order of an element, as in $p$-adaptivity, is simpler for modal bases because the existing basis functions do not change. This is generally not relevant to performance, but some people like it anyway. Modal bases can also be filtered directly for some anti-aliasing techniques, but that is also not a performance bottleneck. Modal bases can also be chosen to expose sparsity within an element for special operators (usually the Laplacian and mass matrices). This does not apply to variable coefficient or non-affine elements, and the savings are not huge for the modest order typically used in 3D.Nodal bases simplify the definition of element continuity, simplify implementation of boundary conditions, contact, and the like, are easier to plot, and lead to better $h$-ellipticity in discretized operators (thus allowing use of less expensive smoothers/preconditioners). It is also simpler to define concepts that are used by solvers, such as rigid body modes (just use nodal coordinates), and to define certain grid transfer operators such as arise in multigrid methods. Embedded discretizations are also readily available for preconditioning, without needing a change of basis. Nodal discretizations can efficiently use collocated quadrature (as with spectral element methods), and the corresponding under-integration can be good for energy conservation. Inter-element coupling for first-order equations is sparser for nodal bases, though otherwise-modal bases are often modified to obtain the same sparsity.
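To make the representational difference from the question concrete, here is a toy sketch (plain Python, all names illustrative): the same quadratic expressed once with modal Legendre coefficients and once with nodal point values. The two expansions evaluate identically; they differ only in what the degrees of freedom mean.

```python
# Legendre polynomials P0, P1, P2 on [-1, 1]
P = [lambda x: 1.0, lambda x: x, lambda x: 1.5 * x**2 - 0.5]

# Target: u(x) = 3x^2 + x = 1*P0 + 1*P1 + 2*P2  (since 2*P2 = 3x^2 - 1)
u = lambda x: 3 * x**2 + x
modal = [1.0, 1.0, 2.0]          # modal dofs: expansion coefficients

nodes = [-1.0, 0.0, 1.0]
nodal = [u(xn) for xn in nodes]  # nodal dofs: point values at the nodes

def lagrange(i, x):
    # Lagrange cardinal polynomial l_i(x) for the node set above
    prod = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            prod *= (x - xj) / (nodes[i] - xj)
    return prod

def eval_modal(x):
    return sum(c * p(x) for c, p in zip(modal, P))

def eval_nodal(x):
    return sum(nodal[i] * lagrange(i, x) for i in range(len(nodes)))
```

Both eval_modal and eval_nodal reproduce u(x) exactly at every point; going between the two bases is just a (well-conditioned, for modest order) linear change of coefficients.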
_unix.150543
I read that exaile, banshee (banshee-extension-liveradio), and VLC previously had tools allowing users to browse from a vast selection of Shoutcast streams for listening. I found some articles suggesting that these features may no longer be available, due to changes with Shoutcast.Are there any media players that are updated to work with Shoutcast, not just allowing playing of the audio, but browsing of the stations?
Is there any media player that can browse Shoutcast streams?
streaming;music;music player
null
_codereview.144112
What do you think about this?#include <alloca.h>#include <setjmp.h>#include <cassert>#include <functional>template <::std::size_t N = 4096>class coroutine{ jmp_buf env_in_; jmp_buf env_out_; bool running_{}; char stack_[N];public: coroutine() = default; auto running() const noexcept { return running_; } template <typename F, typename ...A> void run(F&& f, A&& ...a) { if (setjmp(env_in_)) { return; } // else do nothing auto top(reinterpret_cast<char*>(&top)); alloca(top - (stack_ + N)); running_ = true; [this, f = ::std::forward<F>(f)](A&& ...a) __attribute__ ((noinline)) { f(::std::ref(*this), ::std::forward<A>(a)...); running_ = false; yield(); }(::std::forward<A>(a)...); } void yield() noexcept { if (setjmp(env_out_)) { return; } else { longjmp(env_in_, 1); } } void resume() { assert(running_); if (setjmp(env_in_)) { return; } else { longjmp(env_out_, 1); } }};Usage:#include <iostream>#include "coroutine.hpp"int main(){ coroutine<> c; c.run([](decltype(c)& c) { for (int i{}; i != 3; ++i) { ::std::cout << i << ::std::endl; c.yield(); } } ); while (c.running()) { c.resume(); } return 0;}EDITChangeauto top(reinterpret_cast<char*>(&top));tochar* top;top = reinterpret_cast<char*>(&top);Also check out an updated version of the code.
Small coroutine class
c++;c++14
null
_codereview.11818
I am trying to create a simple factory pattern demo in PHP. I am not sure if my codes are the best practice. It seems that I have some duplicated codes but I am not sure how to improve it. Basically, I want to create 3 types of accounts (basic, premium and vip). Please advise. Thanks a lot.abstract classabstract class User { function __construct() { $this->db= new Database('mysql','localhost','mvcdb','root',''); } abstract function checkUser(); function showAccountCredit(){ return $this->credits; } function getUserName(){ return $this->username; }}I have 3 different user account types:Basic Accountclass BasicUser extends User { function __construct($username) { parent::__construct(); $this->username=$username; $this->credit='10'; $this->accountType='Basic Account'; $data=$this->checkUser(); if(!empty($data)){ echo 'The username: '.$this->username.' already exists<br>'; return false; } $array=array('username'=>$this->username, 'password'=>'password','credit'=> $this->credit,'accountType'=>$this->accountType); $this->db->insert('user',$array); } function checkUser(){ $array=array(':username'=>$this->username); $results=$this->db->select('SELECT * FROM USER WHERE username=:username',$array); if(!empty($results)){ $this->credit=$results[0]['credit']; $this->accountType=$results[0]['accountType']; } return $results; } function showAccountCredit() { echo 'Username: '.$this->username.'<br>'; echo 'Account Credit: '.$this->credit.'<br>'; echo 'Account Type: '.$this->accountType; }}Premium Accountclass PremiumUser extends User { function __construct($username) { parent::__construct(); $this->username=$username; $this->credit='100'; $this->accountType='Premium Account'; $data=$this->checkUser(); if(!empty($data)){ echo 'The username: '.$this->username.' 
already exists<br>'; return false; } $array=array('username'=>$this->username, 'password'=>'password','credit'=> $this->credit,'accountType'=>$this->accountType); $this->db->insert('user',$array); } function checkUser(){ $array=array(':username'=>$this->username); $results=$this->db->select('SELECT * FROM USER WHERE username=:username',$array); if(!empty($results)){ $this->credit=$results[0]['credit']; $this->accountType=$results[0]['accountType']; } return $results; } function showAccountCredit() { echo 'Username: '.$this->username.'<br>'; echo 'Account Credit: '.$this->credit.'<br>'; echo 'Account Type: '.$this->accountType.'<br>'; }}VIP account:class VipUser extends User { function __construct($username) { parent::__construct(); $this->username=$username; $this->credit='1000'; $this->accountType='VIP Account'; $data=$this->checkUser(); if(!empty($data)){ echo 'The username: '.$this->username.' already exists<br>'; return false; } $array=array('username'=>$this->username, 'password'=>'password','credit'=> $this->credit,'accountType'=>$this->accountType); $this->db->insert('user',$array); } function checkUser(){ $array=array(':username'=>$this->username); $results=$this->db->select('SELECT * FROM USER WHERE username=:username',$array); if(!empty($results)){ $this->credit=$results[0]['credit']; $this->accountType=$results[0]['accountType']; } return $results; } function showAccountCredit() { echo 'Username: '.$this->username.'<br>'; echo 'Account Credit: '.$this->credit.'<br>'; echo 'Account Type: '.$this->accountType; }}UserFactory classclass UserFactory { static function create($username,$accountType){ $accountType = strtolower($accountType); switch($accountType){ case 'basic': return new BasicUser($username); case 'premium':return new PremiumUser($username); case 'vip': return new VipUser($username); default :return new BasicUser($username); } }index.php$user1= UserFactory::create('Jerry', 'Vip');$user1->showAccountCredit();$user2= UserFactory::create('Bob', 
'Basic');$user2->showAccountCredit();$user3= UserFactory::create('Betty', 'premium');$user3->showAccountCredit();
Simple Factory Pattern Demo
php;object oriented
null
_codereview.158082
The problem:std::vector and other containers have two functions for accessing / modifying their content: operator[] and at().at() is meant for debugging, to catch out-of-bounds bugs. However, at() is also quite a lot slower than operator[]. Both facts above call for a way to switch between the use of at() and operator[] based on whether it is a debug build or a production build, especially if performance matters.However, I can see no such easy way. Sadly, standard containers don't offer a flag or anything similar that would allow this.I tried to write a simple wrapper that would switch between at() and operator[] based on whether or not NDEBUG is defined, in line with how the assert macro works.Note: This is a different solution to the same problem as stated here: Allowing switching between operator[] and at() based on NDEBUG That solves the issue of 2d arrays. But I wanted to post two solutions, because I feared you could frown upon such a macro :(The code:// AOO = At Or Operator[]#ifndef NDEBUG #define AOO(INDEX) .at(INDEX)#else #define AOO(INDEX) [INDEX]#endifThe use:void testNormalUse(){ std::vector<int> vec(100); vec AOO(50) = 5; std::cout << vec AOO(50) << std::endl;}void testRequireConst(){ const std::vector<int> vec(100, 5); std::cout << vec AOO(50) << std::endl;}void testRvalueRef(){ std::vector<int>vec(100, 5); std::cout << std::move(vec) AOO(50) << std::endl;}void test2d(){ std::vector<std::vector<int>> vec(100, std::vector<int>(100, 5)); std::cout << vec AOO(50) AOO(50) << std::endl;}int main() { testNormalUse(); testRequireConst(); testRvalueRef(); test2d();}IdeoneOf course I know this is weird syntax. But the non-macro solution brings up pain with multidimensional arrays, a problem the macro version solves.If only macros allowed punctuation, then we could make it look operator-like. But alas, they don't.
Allowing switching between operator[] and at() based on NDEBUG macro version
c++;collections;macros
null
_cs.65822
I'm a rank amateur in the area of pseudo-random number generation. I've recently found out that certain generators are better than others (e.g. mt19937 vs rand in C++) and learned what modulo bias is.My RequestI'm looking for an introductory book on pseudo-random number generation. Does one exist?My RequirementsThe book must be understandable by someone with the following mathematics background:calculus discrete math (combinatorics, logic and proofs, set theory, mathematical induction, functions and relations, inclusion-exclusion, generating functions, recurrence relations, graphs/graph algorithms)linear algebra (vectors, matrix algebra, eigenvalues, transformations, diagonalization)introductory numerical analysis (computer arithmetic and errors, root-finding algorithms, computational techniques for matrices, numerical integration and differentiation)I would prefer a book that does not require knowledge of any specific programming language:Would like algorithms to be presented in a pseudocode style (e.g. Introduction to Algorithms by Cormen)If the book isn't language neutral, I know the following languages: Python, Java, C++, C, Ruby.I'm looking for a book that is accessible to a fairly inexperienced CS undergraduate:I have a basic understanding of stacks, linked lists, trees, heaps, hash tables and graphsI'm comfortable with basic programming conceptsThe book should cover pseudorandom number generation at an introductory level:I'm not looking for a complete encyclopedic treatment of every research paper ever published in the area, but enough content to gain an entry-level understanding of the area that you'd expect someone to learn in a first undergraduate course on pseudo-random number generation. For example if someone asked you for a book on introductory calculus you'd probably recommend a book that covers limits, differentiation, related rates, approximation of derivatives, L'Hopital's Rule and some basic continuous optimization. 
I'm looking for something similar in the area of PRNGs. It's hard for me to specify exactly what I'm looking for because I know next to nothing about the area, but try and think of what you'd expect a complete amateur in the area to be able to reasonably learn in a semester.What I've TriedI'm looking at Chapter 3 of Donald Knuth's The Art of Computer Programming, Volume 2. The book seems quite old and uses some kind of assembly that I don't understand. If this is the authoritative reference, I'll find my way around these issues, but other books would be nice.
Introductory Book on Pseudo-Random Number Generation
books;pseudo random generators
null
_codereview.39627
This Scala code snippet is supposed to encode a SHA1 hash in base 62.Can you find any issues? I'm asking since I might not be able to change the algorithm and, for example, fix issues in the future.I'd like to be able to also implement it in JavaScript in the future.def mdSha1() = java.security.MessageDigest.getInstance("SHA-1") // not thread safedef hashSha1Base62DontPad(text: String): String = { val bytes = mdSha1().digest(text.getBytes("UTF-8")) val bigint = new java.math.BigInteger(1, bytes) val result = encodeInBase62(bigint) result}private val Base62Alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"def encodeInBase62(number: BigInt): String = { // Base 62 is > 5 but < 6 bits per char. var result = new StringBuilder(capacity = number.bitCount / 5 + 1) var left = number do { val remainder = (left % 62).toInt left /= 62 val char = Base62Alphabet.charAt(remainder) result += char } while (left > 0) result.toString}
Is this base 62 encoding algorithm okay?
scala;converting
It is 'standard' to have 0-9 at the beginning of the 'alphabet' for numbers....I would have suggested that you use the native functionality in BigInteger to convert the value to a String in any given radix, but unfortunately, it does not support more than radix 36. Still, you should follow that standard and start with 0-9 instead of ending with it.I would also suggest two things:you should convert the alphabet to an array immediately:private val Base62Alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz".toCharArray()you should not have the magic number '62' in your code, but should base it off the array size:private val Radix = Base62Alphabet.lengthyour code then becomes:val remainder = (left % Radix).toIntleft /= RadixFinally, I don't like that you have variable-length results from the conversion. You should ensure that all hashes of data with the same length have the same length of output... left-padding with 0 as needed. It is possible for something to hash to 0x0000000000 (hex).... which will give you a 0-value number.bitCount.
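The reviewer's suggestions (digits-first alphabet, radix taken from the alphabet's length, fixed-width zero-padding) can be sketched in Python; the names here are illustrative, not from the post:

```python
# Digits-first alphabet, as is conventional for positional number systems.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
RADIX = len(ALPHABET)  # no magic 62 in the conversion itself

def encode_base62(number: int, width: int = 0) -> str:
    """Encode a non-negative integer in base 62, left-padded to `width`."""
    digits = []
    while True:
        number, rem = divmod(number, RADIX)
        digits.append(ALPHABET[rem])
        if number == 0:
            break
    # Digits come out least-significant first; reverse, then pad with '0'.
    return "".join(reversed(digits)).rjust(width, ALPHABET[0])

print(encode_base62(0, width=4))  # -> 0000
print(encode_base62(61))          # -> z
print(encode_base62(62))          # -> 10
```

Padding to a fixed width (as the reviewer recommends for fixed-size hashes) keeps an input that hashes to zero from collapsing to a single-character result.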
_cs.30540
For load-balancing purposes I need to approximately predict the rendering time my graphics software (in this specific case Autodesk Maya) needs for a task, given an input file (.ma/.mb) and rendering parameters. An intuitive way to do it is to render several frames from different parts of a scene and extrapolate the results. Is there a better way to do it? If not, is it enough to render several separate frames, or should I take short series of frames at different moments of the scene?
Is there a way to predict rendering times?
heuristics;graphics
null
_unix.43629
Possible Duplicate: Why does this compound command report errors when copying directories?

If one executes the following two commands on one line, as follows,

rm -rf dir ; cp -r dir2 dir

it may complain that

cp: cannot create directory `dir/subdir': File exists

but if these two commands are executed on two separate lines, no errors are thrown. I am just wondering what the difference is, and more importantly, how to execute two commands on one line with exactly the same effect as on two lines... PS: what is in dir or dir2 is huge, typically 4 GB.
Why does this compound command report errors when copying directories?
bash;command line;command
null
_cs.53614
In my current application I am trying to determine the best way to implement a simple scripting language. This language would allow a user to create a script that controls a machine with various inputs and outputs. I have defined most of the commands and created a nice editor for the language, though I am now stuck trying to figure out how to verify, parse, and compile it. I currently treat the commands as const Strings and plan to parse the input script text with regexes and these strings. Most of the commands have the syntax

COMMAND Arg1 Arg2 Arg3 etc.

The types and numbers of arguments are currently not defined anywhere in my code, only in my head. I need a design pattern that would allow me to take a line from the script and determine if it is valid: check the command against some list, and check its arguments against the match in that list. Are there any known design patterns for situations like this, or for scripting languages in general? I feel like I need a class Command that holds the command string and information about its arguments, plus the translation from string to action (performing the actual real-life action the command describes). Then if I encounter the string representing a command, I can look up the Command class instance, pass in the line, and get some result if it is valid. Though I feel like I would end up (in my case) with quite a few Command subclasses. Any ideas or recommendations?
Design patterns for simple text based scripting language?
reference request;programming languages;software engineering;design patterns
The last time somebody asked this question, I replied: why not just use an existing language?

What you have right now is an API. You can provide a library in which your commands are function calls, or method calls, or whatever suits you. This saves you from having to design and implement a language, and it saves your users from having to learn yet another little special-purpose language. They still need to learn your API, of course. I'd say this is the standard design pattern for exposing APIs: as libraries for one or more existing languages.

If you really want to create a new language, you need to specify and implement its syntax and semantics. For dealing with syntax, using a parser generator is definitely the standard approach. Any language whose programs don't just consist of simple lists of statements will have nested structures, and most will allow syntactic recursion (i.e. among the various types of language constructs, some can appear within themselves arbitrarily often). For instance, you may wish to allow a command invocation as an argument to another command invocation, or you may want to have a construct to express iteration or choice that can be arbitrarily nested. In that case, you don't want to parse and process nested constructs with regular expressions; you want grammars, and parser generators are the standard way to work with grammars.

As to the surface syntax, the basic way in which the source code is chopped up into meaningful tokens of your language, I think the most important general lesson is: keep it general, easily readable and context-independent. Make the structure of your programs easy to understand for humans and computers. Examples not to follow are such atrocities as Makefiles or /bin/sh, in which the correct application of whitespace and quotes is advanced witchcraft.

As to the semantics, one of the things to decide is what sort of programming paradigm to support.
Are all commands of a program executed strictly in sequence without anything else ever interfering? Then, a standard, strictly sequential imperative programming language may be a good option. Can commands set concurrent events in motion, or can multiple programs be running at once? Then, a fundamentally concurrent language may be more appropriate.Another thing to decide is to what extent you want your language to scale. If scripts can grow large and can be invoked by other scripts, you probably want to provide mechanisms for preventing different pieces of code from biting each other in the leg (e.g. scopes for names, local variables). But this is probably a later concern.
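As an illustrative sketch of the table-driven validation idea the asker described (a single list that knows each command's argument count and types), written in Python for brevity — all command names and argument types below are hypothetical, not from the original post:

```python
# One data structure holds the argument specification for every command, so a
# single validator can check any "COMMAND Arg1 Arg2 ..." line. This avoids
# needing a subclass per command; the table entries are made-up examples.

COMMANDS = {
    # command name -> list of argument converters (one per expected argument)
    "MOVE": [int, int],        # MOVE x y
    "WAIT": [float],           # WAIT seconds
    "SET":  [str, int],        # SET name value
}

def parse_line(line):
    """Return (command, parsed_args), or raise ValueError if the line is invalid."""
    parts = line.split()
    if not parts:
        raise ValueError("empty line")
    name, args = parts[0], parts[1:]
    spec = COMMANDS.get(name)
    if spec is None:
        raise ValueError(f"unknown command: {name}")
    if len(args) != len(spec):
        raise ValueError(f"{name} expects {len(spec)} argument(s), got {len(args)}")
    try:
        return name, [convert(a) for convert, a in zip(spec, args)]
    except ValueError:
        raise ValueError(f"bad argument types for {name}: {args!r}")
```

Dispatching then becomes a lookup of the validated command name against a map of handler functions, instead of one class per command.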
_cstheory.31368
Consider the problem $\max_{x \in P} \|x\|_2$, where $\|\cdot\|_2$ is the Euclidean 2-norm and $P \subseteq \mathbb{R}_{\geq 0}^n$ is a polytope in the positive orthant of $\mathbb{R}^n$. Is this problem computationally hard?
Complexity of max problem
cc.complexity theory;complexity classes;convex optimization
null
_softwareengineering.113176
Introduction

An Adapter normally wraps another object so that it can be used in an interface it wasn't designed for, e.g., when you want to use

interface Node {
    Node parent();
    Iterable<Node> children();
}

together with

class TreeModel {
    private Node root;

    // example method (stupid)
    Node grandparent(Node node) {
        return node.parent().parent();
    }
}

and you're given a class like

class File {
    File getParent() {...}
    File[] listFiles() {...}
}

you need to write some FileToNodeAdapter.

Unfortunately, it means that you need to wrap each single object, and you also need both a way to get from FileToNodeAdapter to File (which is trivial, since it's embedded), but also from File to FileToNodeAdapter, which leads either to creating a new object each time or to using some Map, which must be either globally accessible or referenced in each FileToNodeAdapter.

The Pattern

Replace the interface Node by

interface NodeWorker<T> {
    T parentOf(T node);
    Iterable<T> childrenOf(T node);
}

and modify the TreeModel like

class TreeModel<T> {
    private NodeWorker<T> nodeWorker;
    private T root;

    // example method (stupid)
    T grandparent(T node) {
        return nodeWorker.parentOf(nodeWorker.parentOf(node));
    }
    ...
}

Does this pattern have a name? Are there any disadvantages, besides the fact that it is a little bit more verbose and only applicable when you are in charge of the TreeModel code?
What is the name for this variation to Adapter Pattern?
design patterns
null
_codereview.17750
So this is an app I'm working on, and I'd like to get some feedback on it. I'm leaving some key parts out, as I don't want everyone everywhere to have access to the complete code. The main pieces that I would like you guys to look at are still there though.

// Does a remote AJAX request that scrapes the URL and parses it
function doAjax(url) {
    $.getJSON(URL, function (data) {
        if (data.results[0]) {
            $("#content").html(""); // Reset
            var number = $(filterData(data.results[0])).find("#gtv_leftcolumn table:gt(1)");
            for (var i = 0; i < number.length; i++) {
                var name = $(filterData(data.results[0])).find("#gtv_leftcolumn table:gt(1) .maintext p:eq(" + i + ")").text();
                var type = $(filterData(data.results[0])).find("#gtv_leftcolumn table:gt(1) .trafficbriefs:nth-child(even) p:eq(" + i + ")").text();
                // Redacted
            }
            if (doAjax) {
                // Redacted
                if (number.length === 0) { // Checks to see if there are any elements on the page, and if 0, runs this
                    // Redacted
                } else {
                    checkFavorite();
                    var mySearch = $('input#id_search').quicksearch('#content .row', {
                        clearSearch: '#clearsearch'
                    });
                    mySearch.cache();
                    console.log("Loaded " + number.length);
                    console.log("Cached");
                }
            }
        } else {
            console.log("error");
        }
    });
}

function filterData(data) {
    data = data.replace(/<?\/body[^>]*>/g, '');
    data = data.replace(/[\r|\n]+/g, '');
    data = data.replace(/<--[\S\s]*?-->/g, '');
    data = data.replace(/<noscript[^>]*>[\S\s]*?<\/noscript>/g, '');
    data = data.replace(/<script[^>]*>[\S\s]*?<\/script>/g, '');
    data = data.replace(/<script.*\/>/, '');
    data = data.replace(/<img[^>]*>/g, '');
    return data;
}

// On Load
doAjax("http://www.codekraken.com/testing/snowday/wgrz.html");
$("#info").click(showInfo);
$(".info").click(closeInfo);
$("#reload").click(reload);
$("#clearsearch").click(clearSearchBox);
$(".clear").click(clearFavorite);
setFavorite();

// You can clear the favorite item you set in setFavorite()
function clearFavorite() {
    localStorage.removeItem("favorite");
    localStorage.removeItem("favorite-status");
    $(".star-inside").removeClass("favorite");
    $(".clear span").text("");
}

// Clear search box
function clearSearchBox() {
    $("#id_search").val("");
    $('#id_search').trigger('keyup');
}

// Show info box
function showInfo() {
    // Redacted
}

// Close info box
function closeInfo() {
    // Redacted
}

// Reload AJAX request
function reload() {
    closeInfo();
    doAjax(URL);
}

// Set favorite item. This enables you to swipe on any .row element, and once it
// swipes, it sets the row you swipe on as the favorite. Swiping again
// unfavorites it. I mainly want help on this, as far as cleaning it up.
function setFavorite() {
    var threshold = { x: 30, y: 10 };
    var originalCoord = { x: 0, y: 0 };
    var finalCoord = { x: 0, y: 0 };

    function touchMove() {
        console.log(event.targetTouches);
        finalCoord.x = event.targetTouches[0].pageX;
        changeX = originalCoord.x - finalCoord.x;
        var changeY = originalCoord.y - finalCoord.y;
        if (changeY < threshold.y && changeY > (threshold.y * -1)) {
            changeX = originalCoord.x - finalCoord.x;
            if (changeX > threshold.x) {
                window.removeEventListener('touchmove', touchMove, false);
                $(document).off("touchmove", ".row");
                if ($(event.target).attr("class") === "row-inside") {
                    var element = $(event.target);
                }
                if ($(event.target).attr("class") === "row-l") {
                    var element = $(event.target).parent();
                }
                if ($(event.target).attr("class") === "row-r") {
                    var element = $(event.target).parent();
                }
                var text = $(element).find(".row-l").text();
                var favstatus = $(element).find(".row-r").text();
                var thisStar = $(element).parent().find(".star-inside");
                $(element).css("margin-left", "-75px");
                if ($(thisStar).hasClass("favorite")) {
                    $(".clear span").text("");
                    $(thisStar).removeClass("favorite");
                    localStorage.removeItem("favorite");
                    localStorage.removeItem("favorite-status");
                } else {
                    $(".clear span").text("\"" + text + "\"");
                    localStorage.setItem("favorite", text);
                    localStorage.setItem("favorite-status", favstatus);
                    $(".star-inside").not(thisStar).removeClass("favorite");
                    $(thisStar).addClass("favorite");
                }
                setTimeout(function () {
                    $(element).css("margin-left", "0px");
                }, 500);
                setTimeout(function () {
                    $(document).on("touchmove", ".row", function () {
                        touchMove();
                    });
                }, 800);
            }
        }
    }

    function touchStart() {
        originalCoord.x = event.targetTouches[0].pageX;
        finalCoord.x = originalCoord.x;
    }

    $(document).on("touchmove", ".row", function () {
        touchMove();
    });
    $(document).on("touchstart", ".row", function () {
        touchStart();
    });
}

// Check favorite set in setFavorite()
function checkFavorite() {
    if (localStorage.getItem("favorite") !== null) {
        var name = localStorage.getItem("favorite");
        var favstatus = localStorage.getItem("favorite-status");
        var favstatusSplit = favstatus.substr(2);
        var favstatusLower = favstatusSplit.toLowerCase();
        var string = $(".row-l").text().toLowerCase();
        var re = new RegExp(name.toLowerCase(), 'g');
        var test = string.match(re);
        $(".row-l:contains(" + name + ")").parent().parent().find(".star-inside").addClass("favorite");
        if (test !== null) {
            $(".fav_school_inside").text(name + " - " + favstatusLower + "!");
            $(".clear span").text("\"" + name + "\"");
            setTimeout(function () {
                $(".fav_school").addClass("top");
            }, 1000);
            setTimeout(function () {
                $(".fav_school").removeClass("top");
            }, 6000);
        }
    }
}
Javascript app review
javascript;jquery
Alright, figure I'll go ahead and say this right off the bat: I am no JS/jQuery guru. However, there were a few things that I saw that you might want to take a look at. This is by no means complete, but hopefully it will help.

You are violating the Arrow Anti-Pattern. This means your code is too heavily indented and should be refactored to remove some of that indentation. For example, the first if statement could be reversed and return early. This would then make an else statement unnecessary, thus reducing a whole level of indentation from your entire function.

if( ! data.results[ 0 ] ) {
    console.log( 'error' );
    return;
}
//rest of code...

You can use empty() instead of explicitly clearing a field. This makes it a bit more obvious what you are trying to do without needing comments everywhere trying to explain it.

$( '#content' ).empty();

Caching your selectors avoids having to compile them again and again, saving you processing power.

var $filterData = $( filterData( data.results[ 0 ] ) );
var number = $filterData.find( '#gtv_leftcolumn table:gt(1)' );

I don't know if this next suggestion will actually work, but it seems like it should. If it is possible to .find() from a .find(), in other words chaining finds, then I would try reusing your find results.

var name = number.find( ".maintext p:eq(" + i + ")" ).text();

I think you can get away with just loosely querying the length instead of explicitly. But that might just be preference.

if( ! number.length ) {

I don't like your filterData() function at all. First off, it seems like you should be able to refactor it to avoid redefining the same variable again and again. Second, I'm just not sure I see the point. To me it looks like this is just removing unwanted tags from something. No idea what, but it seems like you should be able to use a selector to choose what part of that document you want without having to regex it. Because I don't know exactly what you are trying to do here I can't make a good suggestion, sorry.

The following is violating the Don't Repeat Yourself (DRY) Principle. It should be refactored to avoid this. There may be other places that also violate this principle, but this is the first that popped out at me. The first thing that came to mind was to use a switch, but it quickly became apparent that once you abstract the first instance it really only had one other instance. This means you could have gotten away with a single if statement from the beginning. Another problem, not immediately obvious, is that element is not always defined. Because you only define it in if statements, there is a possibility that it may not be assigned, and your script does not gracefully fail to compensate. Either declare a default value, or gracefully return to show an error has occurred. I went with the former.

//original code
if ($(event.target).attr("class") === "row-inside") {
    var element = $(event.target);
}
if ($(event.target).attr("class") === "row-l") {
    var element = $(event.target).parent();
}
if ($(event.target).attr("class") === "row-r") {
    var element = $(event.target).parent();
}

//refactored with default
//now with multiple uses
var element = $( event.target );
if( element.attr( 'class' ) !== 'row-inside' ) {
    element = element.parent();
}

I think it's typically accepted that styling should only be done in CSS. I believe a better way would be to add and/or remove a class to get this same effect.

$(element).css("margin-left", "-75px");

Part of what I believe tucaz was saying was to do something like this:

$(document).on( {
    'touchmove' : function () {
        touchMove();
    },
    'touchstart' : function () {
        touchStart();
    }
}, '.row' );

There's more to his answer, I'm sure, this is just the only part I'm sure of. The above creates an event object that can easily be added to. It also has the added benefit of grouping related functionality. Something you might want to consider is adding namespaces to your events. They already seem to have a pseudo-namespace, but you might want to explicitly declare it. This helps differentiate your custom events from normal ones, while also allowing you to keep track of their purpose.

'touch:move' : function() {

This last section is going to be a general blanket statement. You have too few functions for too much functionality. This violates the Single Responsibility Principle. In other words, functions should only do one thing; everything else should be delegated to another function. The functions you do have are bulky and difficult to follow because of this. Adding more functions should help you to make your code more legible and will make it easier to extend this later. This is also key in ensuring your code does not violate DRY.

While I'm sure there are things I missed, I hope some of this helps.
_cs.56195
Given an adjacency matrix, what is an algorithm/pseudo-code to convert a directed graph to an undirected graph without adding additional vertices (it does not have to be reversible)? Similar question here.
How to Convert a Directed Graph to an Undirected Graph (Adjacency Matrix)
algorithms;graphs;adjacency matrix
Note that for an undirected graph, the adjacency matrix is symmetric, i.e. A[i,j] == A[j,i]. From this, we can see that we can simply compute the new value of A[i,j] = A[j,i] depending on whether A[i,j] or A[j,i] is set. Assuming the graph is unweighted, we can do:

for i from 0 to n-1
    for j from 0 to i
        if A[i,j] == 1 OR A[j,i] == 1
            A[i,j] = A[j,i] = 1
        else
            A[i,j] = A[j,i] = 0

Note that we only have to consider 1 + 2 + 3 + ... + n-1 entries since the resultant adjacency matrix is symmetric.

If we have a weighted graph, we now have the problem of which edge weight to take as the new undirected graph edge weight. For example, if w(2,5) = 5 but w(5,2) = 10, the resultant edge weight is ambiguous. However, this is enough for you to figure out what else you need from here.
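The pseudocode above can be put in runnable form; here is a Python sketch (the example matrix is mine, for illustration):

```python
# Symmetrize an n-by-n 0/1 adjacency matrix in place: keep an undirected edge
# {i, j} if either directed edge (i, j) or (j, i) was present.

def to_undirected(A):
    n = len(A)
    for i in range(n):
        for j in range(i + 1):  # lower triangle (incl. diagonal), mirrored upward
            v = 1 if (A[i][j] == 1 or A[j][i] == 1) else 0
            A[i][j] = A[j][i] = v
    return A

# Directed graph with edges 0 -> 1 and 2 -> 1:
directed = [
    [0, 1, 0],
    [0, 0, 0],
    [0, 1, 0],
]
undirected = to_undirected(directed)
```

For the weighted case, the `1 if ... else 0` line is where you would plug in whichever rule resolves the w(i,j) vs. w(j,i) ambiguity (e.g. min, max, or sum).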
_softwareengineering.275779
I recently came across this phrase in the Perl documentation: "extirpated as a potential munition", taken from the sentence:

Creates a digest string exactly like the crypt(3) function in the C library (assuming that you actually have a version there that has not been extirpated as a potential munition).

Is this not a very odd phrase to use in regard to a function becoming defunct, or does it have some special meaning within cryptography?
What is the origin of the phrase extirpated as a potential munition
documentation;perl;cryptography;etymology
Before the rise of personal computing, the most significant use of cryptography was to keep war plans secret. Keep in mind that programmable computing as we know it essentially began with Turing's work breaking German codes in World War II, and the age of personal computing was beginning just as the Cold War was drawing to a close--not that anyone knew it at the time!If you had an algorithm that could encrypt data such that the government couldn't read it, and it fell into the hands of the Russians, they could use this encryption to safely coordinate first-strike plans against the West, potentially setting up a nuclear attack. This was a very real fear back in the day, and even after the Soviet Union fell, neither the nuclear weapons nor America's enemies magically went away.Therefore, the government classified strong encryption technology as munitions, and exporting it was regulated under the same rules as military-grade weaponry. This lasted up until the dawning of the World Wide Web and the rise of e-commerce, which required strong encryption to foil eavesdropping and fraud, and enough of the Internet made enough of a stink about it that the rules were changed, in the name of economic progress.
_cs.53536
Early note: This is not homework. I simply regularly create ideas in an attempt to teach myself a language. For what it's worth, I'll be using JavaScript for this.

That said, I haven't been able to come up with a solution/algorithm other than brute force. I've searched for various puzzle algorithms, but their assumptions about puzzle shapes don't fit my simpler criteria, so they quickly become overly complicated. I'm calling this a rectangle puzzle rather than a jigsaw puzzle, since there is no final picture or restrictions on the overall placement of the pieces.

There are only 4 possible piece shapes (row x column): 1x2, 2x1, 2x2, 4x1
These are the only pieces (i.e. they may not be rotated or overlap)
The play area is 8x4
The number of pieces will be undetermined at start
Pieces may exist multiple times or not at all
The play area does not need to be completely filled
The only requirement regarding placement is they must fit in the play area
There is no one solution

End solution: What I would like to output is the various arrangements that exist. In other words, given a set of pieces, I'd like to know in what ways they may populate the game area. Given the pieces available, among other constraints, there will be a finite set of solutions. For the visually minded, please see the images.

Play Area
Possible Pieces

Generating a 2D array and fitting the pieces via brute force, as said, is possible, but I'd prefer to know whether this is already a known game of sorts and an algorithm exists. I'm not a math lover, so I'm not familiar with any possible solutions that may exist in that space, but I wouldn't be against investigating if something makes sense.
Algorithm for solving rectangle puzzle
algorithms;arrays
null
_webmaster.81516
I have two virtual hosts, and have site1.com and site2.com pointing to my server. All works fine.

<VirtualHost *:80>
    ServerName site1.com
    DocumentRoot /var/www/site1/html
    <Directory /var/www/site1/html>
        allow from all
        Options +Indexes
    </Directory>
</VirtualHost>

<VirtualHost *:80>
    ServerName site2.com
    DocumentRoot /var/www/site2/html
    <Directory /var/www/site2/html>
        allow from all
        Options +Indexes
    </Directory>
</VirtualHost>

I also have site3.com pointing to my server, but haven't set up a virtual host for site3.com. I have found that site3.com will resolve to site1.com (my first virtual host). After reviewing http://httpd.apache.org/docs/current/vhosts/name-based.html, it appears that this is by design:

If no matching ServerName or ServerAlias is found in the set of virtual hosts containing the most specific matching IP address and port combination, then the first listed virtual host that matches that will be used.

Is it possible to prevent this behavior, and require a valid ServerName?
Require valid ServerName in httpd.conf
domains;url;apache;dns
The first virtual host directive is the default catch-all. Any unrecognized host names get handled by whichever virtual host directive comes first. The solution to your problem is to create a default virtual host directive that prints out an error. I have one that I use. I've previously posted it in this answer:

404 Not Found -- Hostname Not Recognized
This server is not configured to serve documents for foo.example.com

Then I create specific virtual hosts for each of my sites that serve the correct content when the host name is correct.

Here is my default virtual host configuration that uses 404.pl to handle all requests:

<VirtualHost *:80>
    Servername localhost.localdomain
    DocumentRoot /var/www/default
    <Directory /var/www/default/>
        Require all granted
        Options +ExecCGI
        AddHandler cgi-script .pl
        RewriteEngine on
        RewriteCond $1 !-f
        RewriteRule ^(.*)$ 404.pl
        AllowOverride None
    </Directory>
</VirtualHost>

And here is the 404.pl script that prints out the hostname not recognized message as well as does redirects for domain names that are almost correct but not canonical:

#!/usr/bin/perl
use strict;

# Put the host names you actually use in here to enable redirects
# The left side should be the main domain name and the right should include the TLD
# This enables redirects for alternate TLDs.
my $hostnameredirects = {
    'example' => 'example.com',
    'foo' => 'foo.example.com',
};

my $hostname = `hostname --fqdn`;
chomp $hostname;

my $server = $ENV{'SERVER_NAME'};
$server = "" if (!$server);
$server =~ s/[^\-\_\.A-Za-z0-9]//g;
$server = lc($server);

my $uri = $ENV{'REQUEST_URI'};
$uri = "" if (!$uri);
$uri =~ s/[ \r\n]+//g;
$uri = "/$uri" if ($uri !~ /^\//);

&serverNameRedirect();
&noVirtualHostError();
&show404();

sub serverNameRedirect(){
    my $domain = &removeTld($server);
    while ($domain){
        if ($hostnameredirects->{$domain}){
            &redirect('http://'.$hostnameredirects->{$domain}.$uri);
        }
        $domain =~ s/^[^\.]*[\.]?//g;
    }
}

sub removeTld(){
    my ($domain) = @_;
    $domain =~ s/\.(([^\.]+)|((([A-Za-z]{2})|com|org|net)\.[A-Za-z]{2}))$//g;
    return $domain;
}

sub redirect(){
    my ($redirect) = @_;
    my $eRedirect = &escapeHTML($redirect);
    print "Status: 301 Moved Permanently\n";
    print "Location: $redirect\n";
    print "Content-type: text/html\n";
    print "\n";
    print "<html><body><p>Moved permanently: <a href=\"$eRedirect\">$eRedirect</a></p></body></html>\n";
    exit;
}

sub show404(){
    my $eServer = &escapeHTML($server);
    &errorPage(
        '404 Not Found',
        '404 Not Found -- Hostname Not Recognized',
        "This server is not configured to serve documents for '$eServer'"
    );
}

sub noVirtualHostError(){
    if ($server !~ /^\d+\.\d+\.\d+\.\d+$/){
        return;
    }
    &errorPage(
        '400 Bad request',
        '400 Bad Request -- No Hostname Sent',
        "This server only accepts requests with a domain name, not requests for an ip address such as $server"
    );
}

sub errorPage(){
    my ($status, $title, $message) = @_;
    print STDERR "$title\n";
    print STDERR "$message\n";
    print "Status: $status\n";
    print "Content-type: text/html\n";
    print "\n";
    print "<html>\n";
    print "<head>\n";
    print "<title>$title</title>\n";
    print "</head>\n";
    print "<body>\n";
    print "<h1>$title</h1>\n";
    print "ERROR: $message\n";
    print "</body>\n";
    print "</html>\n";
    exit;
}

# Convert <, >, & and " to their HTML equivalents.
sub escapeHTML {
    my $value = $_[0];
    $value =~ s/\&/\&amp;/g;
    $value =~ s/</\&lt;/g;
    $value =~ s/>/\&gt;/g;
    $value =~ s/"/\&quot;/g;
    return $value;
}
_codereview.116288
I'm creating many builders where I use inheritance. My resources have many properties and are stored in a Repository. In rare cases I need to update them. When I update, I need to keep the same objectId (I track updates on the repository). I use the same Builder/Updater as it's easier for my users (there are more than 10 concrete builders). The code with only one builder (and properties removed):

public interface IBuilder<T, P>
{
    P Build();
    P Update();
}

public abstract class BuilderBase<T, P> : IBuilder<T, P>
    where T : BuilderBase<T, P>
    where P : class, new()
{
    protected P Result;
    protected P Source;
    protected T This;

    protected BuilderBase()
    {
        Result = new P();
        This = (T) this;
    }

    protected BuilderBase(P source) : this()
    {
        Source = source;
    }

    public P Build()
    {
        var result = Result;
        Result = null;
        return result;
    }

    public abstract P Update();
}

public abstract class ResourceBuilder<T, P> : BuilderBase<T, P>
    where T : ResourceBuilder<T, P>
    where P : ResourceBase, new()
{
    protected ResourceBuilder() : base() {}

    protected ResourceBuilder(P resource) : base(resource)
    {
        Result.Name = Source.Name;
    }

    public new P Build()
    {
        var result = Result;
        return base.Build();
    }

    public override P Update()
    {
        if (Source != null)
            Result.ResourceId = Source.ResourceId;
        return Build();
    }

    public T Name(string name)
    {
        Result.Name = name;
        return This;
    }
}

public abstract class ColorResourceBuilder<T, P> : ResourceBuilder<T, P>
    where T : ColorResourceBuilder<T, P>
    where P : ColorResource, new()
{
    public T Color(Color color)
    {
        Result.Color = color;
        return This;
    }

    protected ColorResourceBuilder(P colorResource) : base(colorResource)
    {
        Result.Color = Source.Color;
    }

    protected ColorResourceBuilder() : base() { }
}

public class ColorResourceBuilder : ColorResourceBuilder<ColorResourceBuilder, ColorResource>
{
    public ColorResourceBuilder(ColorResource colorResource) : base(colorResource) { }
    public ColorResourceBuilder() { }
}

The main problem with this approach is for the Update():

var colorResource2 = new ColorResourceBuilder(colorResource1).Name("Blue").Update();

colorResource2 != colorResource1 => two different objects have the same Id...

Hypothetical improvement:

var colorResource2 = new ColorResourceBuilder(ref colorResource1).Name("Blue").Update();
var colorResource2 = new ColorResourceBuilder(ref colorResource1).Name("Blue").Build();

It could be OK for the Update(), but weird with Build(), as ref suggests an update.

Another hypothetical improvement:

var colorResource2 = new ColorResourceBuilder().Name("Blue").Update(ref colorResource1);
var colorResource2 = new ColorResourceBuilder(colorResource1).Name("Blue").Build();

This will incur another usage-inconsistency problem. I'm looking for any advice to make it cleaner and more natural, or any other pattern which could do the job.
Bloch's Builder Pattern / Updater
c#;object oriented;design patterns
null
_unix.226020
I have Elementary OS and there is a problem when a custom keyboard layout should be added, because currently the switchboard-plug-keyboard uses hardcoded layouts from data/layouts.txt and does not scan /usr/share/X11/xkb/rules/evdev.xml for new layouts, so you can't add your custom keyboard layout via the UI, because it is not shown there. I manually added my layout to layouts.txt, because the format of this file is simple and easy to understand.

data/layouts.txt (the last line is the one I added):

#Czech:cz
Czech (UCW layout, accented letters only):ucw
Czech (US Dvorak with CZ UCW support):dvorak-ucw
Czech (qwerty):qwerty
Czech (qwerty, extended Backslash):qwerty_bksl
Czech (with <\|> key):bksl
Czech (programming):kblayout

And now I am able to add my layout via the UI, but it does not work when activated. This is what gsettings get org.gnome.desktop.input-sources sources returns:

[('xkb', 'cz'), ('xkb', 'us'), ('xkb', 'cz+kblayout')]

In the part of /usr/share/X11/xkb/rules/evdev.xml below, you can see it should be properly configured, but the custom keyboard map is not working:

<layout>
  <configItem>
    <name>cz</name>
    <shortDescription>cs</shortDescription>
    <description>Czech</description>
    <languageList>
      <iso639Id>cze</iso639Id>
    </languageList>
  </configItem>
  <variantList>
    <variant>
      <configItem>
        <name>bksl</name>
        <description>Czech (with &lt;\|&gt; key)</description>
      </configItem>
    </variant>
    <variant>
      <configItem>
        <name>qwerty</name>
        <description>Czech (qwerty)</description>
      </configItem>
    </variant>
    <variant>
      <configItem>
        <name>qwerty_bksl</name>
        <description>Czech (qwerty, extended Backslash)</description>
      </configItem>
    </variant>
    <variant>
      <configItem>
        <name>ucw</name>
        <description>Czech (UCW layout, accented letters only)</description>
      </configItem>
    </variant>
    <variant>
      <configItem>
        <name>dvorak-ucw</name>
        <description>Czech (US Dvorak with CZ UCW support)</description>
      </configItem>
    </variant>
    <variant>
      <configItem>
        <name>kblayout</name>
        <description>Czech (programming)</description>
      </configItem>
    </variant>
  </variantList>
</layout>

Interestingly, when I use setxkbmap kblayout it works, and even when I have kblayout activated via the UI, in the keyboard layout chart I see the keyboard map is properly set; but when I press some key, a wrong character is given.
Gnome - input source not working
x11;gnome;keyboard
null
_cogsci.1287
How do autistic savants (or other people with these abilities) compute expressions like $81^{100}$ in 2.5 minutes? Which algorithms do they use? Are they efficient ones, or do they just have a lot of memory? Can non-autistic savants use these algorithms to perform such computations just as quickly?
How are autistic savants able to perform certain mathematical computations so quickly?
autism;problem solving;savant syndrome
null
_webapps.25693
How can I change all values of column A (currently empty) to be 5? In Excel, I could do Paste Special > Multiply. Is it possible in Google Docs?
Change all values of a column at once in Google Docs
google drive;google apps;google spreadsheets
I am confused by your explanation of how you do it in Excel (paste special, then multiply): this will multiply all the values in the column by the value you are trying to paste. In your example you said that column A is empty, so that would leave you with a column of zeros.

So here are my approaches:

If you have the value in the top row, then Ctrl-D will copy the value down the rest of the selected cells.

If the value you want is on the clipboard, then select the cells you want to paste into and then Ctrl-V, or use paste special - paste values only.
_unix.252602
My problem here is that the parameter $0 gives the same result as ${0##*/}, and that happens after converting the x-shellscript to an x-executable using the SHC program!

OS: Debian 8.2 (jessie)
SHC version: 3.8.7
Command used: shc -f script.bash

The compiled script.x resides in an extra bin path (not known by sudo).

Note: I created a hello-world program that prints the parameter $0, and it always gives me the basename!

My script file contains:

#!/bin/bash
((!EUID)) || exec sudo $0
# shellcode ...

When I execute it, I get this:

sudo: scriptName: command not found

After checking, I found that the parameter $0 is the same as ${0##*/} or $(basename $0) inside an x-executable! How do I deal with that without putting an absolute path inside the script? Or is there something I should know when compiling a shell script to an x-executable using SHC?
Why does the parameter $0 give me only the basename instead of full path after converting x-shellscript to x-executable?
bash;shell script
Why SHC?First of all, why are you using SHC, given your hobbyist motivations? Here is an excerpt from their own description:Upon execution, the compiled binary will decrypt and execute the code with the shell -c option. Unfortunatelly, it will not give you any speed improvement as a real C program would.The compiled binary will still be dependent on the shell specified in the first line of the shell code (i.e. #!/bin/sh), thus shc does not create completely independent binaries.SHC's main purpose is to protect your shell scripts from modification or inspection.My opinion (I'll keep it sort of brief): Even if your motivation is the stated main purpose of preventing modifications, a mildly determined person can still recover (and, hence, modify) the original script! SHC essentially provides security by obscurity, which is an oft-derided strategy when used as a primary means of security. If this doesn't sound helpful, then I'd recommend ditching SHC and simply using shell scripts as the vast majority of others do. If you need real security for your shell scripts, I'd suggest asking a specific question about that, without SHC or compilers.The specific $0 problemI downloaded SHC 3.8.9 from this page just to try this out.I was not able to reproduce the problem, on Ubuntu 14.04 LTS.#!/bin/bashecho Hello, world. My name is \`$0'Test run$ ./shc -f my_test.bash$ ~/path/to/my_test.bash.xHello, world. My name is `~/path/to/my_test.bash.x'So, clearly, our systems differ. I can try to update this answer if you post more details about your operating system, version of SHC, and specific shell script and shc commandline you used to compile it.Why do you need the path?Why does your script need to know its path? Are you storing files with the script? Does it default to operate on the directory it was run in if the user doesn't specify? 
Whether these things are good ideas or not is a matter of opinion, but knowing your motivations might be helpful in narrowing down this answer.How to fetch the current directoryThe pwd command fetches the current directory. In many cases, you can assemble the full path of your script (assuming it was run with an explicit relative path) with something like:realname=`pwd`/$0... which would result in a value like:/path/to/script/./yourscript.bash.xThe extra ./ just means current directory, so while it might be cosmetically unfortunate, it will not negatively affect the result.If the script is simply in your $PATH, then you would need to use which instead of pwd, but we're already getting beyond the scope of the original question, here, so I'll conclude with a simple mention that determining pathnames can be a tricky process, especially if you need to do so in a secure manner, so that would best be left for another question with a specific problem statement along these lines.
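To make the last point concrete, here is a minimal sketch (my own addition, not from the original answer) of absolutizing whatever is in $0, so a re-exec such as exec sudo "$self" works regardless of sudo's PATH. It covers the three invocation styles a script can see:

```shell
#!/bin/sh
# resolve_self: print an absolute form of the way a script was invoked ($0).
# Cases: already absolute; relative path like ./script.x; bare name on PATH.
resolve_self() {
    case "$1" in
        /*)  printf '%s\n' "$1" ;;               # already absolute
        */*) printf '%s/%s\n' "$(pwd)" "$1" ;;   # relative: prefix current dir
        *)   command -v "$1" || printf '%s\n' "$1" ;;  # bare name: search PATH
    esac
}

resolve_self "./script.x"
```

In the failing SHC script this would become `exec sudo "$(resolve_self "$0")"` — still a workaround for SHC stripping the directory part, not a fix for SHC itself, and it inherits the usual caveats about paths containing the working directory.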
_unix.380001
I have a simple script which displays current memory usage, disk usage, and CPU load within a terminal. Here is the code so you can see what I mean:

free -m | awk 'NR==2{printf "| Memory Usage: %s/%sMB (%.2f%%) |\n", $3,$2,$3*100/$2 }'
df -h | awk '$NF=="/"{printf "| Disk Usage: %d/%dGB (%s) |\n", $3,$2,$5}'
top -bn1 | grep load | awk '{printf "| CPU Load: %.2f |\n", $(NF-2)}'

The problem is that the script runs that code only once. I need it to keep re-running every second, automatically, without my having to reload the script. So basically I just want some sort of continuous loop while the script can do other stuff. The reason I want it to refresh every second is that it only shows the CPU usage once, right when you run the script, but CPU load changes frequently, so I need it to keep displaying current data.
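Since no answer is attached to this question, here is the usual pattern (my own suggestion, not part of the original post): wrap the commands in a loop with a one-second sleep, or simply run the existing script under watch -n 1 ./script.sh. The demo below uses a bounded counter and a date stand-in so it terminates; in the real script you would use while true and the three free/df/top lines in place of the stand-in.

```shell
#!/bin/sh
# Refresh-loop pattern. MAX_ITER only bounds this demo so it terminates;
# replace the counter with `while true; do ...; done` for an endless refresh,
# and replace the `date` line with the free/df/top pipeline from the question
# (optionally preceded by `clear` to redraw in place).
i=0
MAX_ITER=2
while [ "$i" -lt "$MAX_ITER" ]; do
    date '+refresh at %T'   # stand-in for the status commands
    sleep 1
    i=$((i + 1))
done
```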
How to keep code running within a script
bash;shell script;scripting
null
_unix.222604
I have a LiveCD started as a default user. How to login as root?Here it is said that there are boot parameters. How and when to set them?
How to login as superuser\administrator in Scientific Linux 6 LiveCD?
root;not root user;livecd;scientific linux
Have you tried just using su?Most of the time the default user on a livecd has passwordless sudo, and can also su passwordlessly to any other user.
_softwareengineering.185193
I am building a java desktop application where I am attempting to implement MVC. The GUI interface has multiple views (think pop-up windows) from the main view. Each view has its associated model that it receives updates from. I am running into scenarios where two views need to access a particular data field from one model. This requires that one model knows about the another. What are my options to solve this? The data field is a table that is populated in one of the views and is then subsequently accessed in the main splash view. I have tried passing the second model into the first's constructor from the controller, however I feel this is sloppy. Should a model listen to another model (Observer pattern)? Perhaps my question is too subjective....
MVC - sharing multiple models
java;mvc
null
_unix.52344
How can we write a shell script that becomes root? That is, I do not want to input the password at the prompt; it should be supplied from within the script itself. I tried to make this work but failed. Is it possible? If yes, please explain.
Shell script to become root
shell;shell script;password;root;su
You can write a script using the expect tool.

In Red Hat, the expect package comes by default, but in Ubuntu you need to install it separately. You can check this by using these commands:

$ rpm -qa | grep expect     (Red Hat)
$ dpkg -l expect            (Ubuntu)

The following script will do your work:

#!/usr/bin/expect
spawn su -
expect "Password: "
send "password\r"
interact
_scicomp.20043
I am interested in a reference in the literature that discusses the performance of dense linear algebra (BLAS routines) versus sparse linear algebra (sparse BLAS routines). In particular, I would like to know for which combinations of matrix size and density the sparse routines outperform the dense ones. I am mainly interested in a shared-memory computer. Does anyone know of a reference?

Thanks,
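No answer was recorded for this one, so as a starting point here is a toy experiment (my own sketch, standard library only, not a real BLAS/sparse-BLAS benchmark): it compares a dense matrix-vector product with a CSR-style sparse one on the same random matrix. Varying the density shows roughly where the crossover sits for this naive implementation; tuned libraries will move the crossover, but the mechanism (n² touched entries versus nnz touched entries) is the same.

```python
import random
import time

def dense_matvec(a, x):
    """Dense mat-vec: touches all n*n entries."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in a]

def csr_matvec(data, indices, indptr, x, n):
    """CSR mat-vec: touches only the stored nonzeros."""
    y = [0.0] * n
    for i in range(n):
        s = 0.0
        for k in range(indptr[i], indptr[i + 1]):
            s += data[k] * x[indices[k]]
        y[i] = s
    return y

def random_problem(n, density, seed=0):
    """Build the same random matrix in both dense and CSR form."""
    rng = random.Random(seed)
    a = [[0.0] * n for _ in range(n)]
    data, indices, indptr = [], [], [0]
    for i in range(n):
        for j in range(n):
            if rng.random() < density:
                v = rng.random()
                a[i][j] = v
                data.append(v)
                indices.append(j)
        indptr.append(len(data))
    return a, (data, indices, indptr)

n, density = 200, 0.05
a, (data, indices, indptr) = random_problem(n, density)
x = [1.0] * n
t0 = time.perf_counter(); y_dense = dense_matvec(a, x); t_dense = time.perf_counter() - t0
t0 = time.perf_counter(); y_sparse = csr_matvec(data, indices, indptr, x, n); t_sparse = time.perf_counter() - t0
print("density %.2f: dense %.5fs, sparse %.5fs" % (density, t_dense, t_sparse))
```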
Sparse Linear Algebra vs Dense Linear Algebra
linear algebra;sparse;blas
null
_unix.212670
I am using the Debian GNU/Linux 7.8 (wheezy) operating system at my institute. Yesterday I used ssh to start a MATLAB program, which takes hours to run, on a different machine from mine. This morning when I came in, I could not log in to my machine, since there was no login window; it was just showing a background with the Debian logo.

What should I do? This happens to me on various occasions. What I normally do is shut down and restart.
No login window display on Debian
debian;login;troubleshooting
null
_scicomp.14477
There is a lot of talk about exascale computing these days, and whether we will be able to reach that goal by 2018, 2019, or whenever. I have what is probably a naive question: what are the issues with doing it now?

Specifically, today we have the AMD Radeon 295x2. It has a computing power of 11.5 TFLOPS. Combining a hundred thousand of them would give us 1.15 EFLOPS.

The power consumption of each card is slightly under 500 W, so the total consumption of all of them would be 50 MW (there would probably be some more for cooling etc.). I'm only guessing, but let's say all the other stuff (cooling and whatever else) takes 20 MW. At an electricity price of 60 $/MWh, that would amount to slightly more than 35 million $ per year.

The price of a single graphics card is 1500 $, which means the hardware would cost 150 million $. Let's say infrastructure costs another 50 million $.

Compare this to the current fastest supercomputer, Tianhe-2. It cost 390 million $ to make, uses 17.6 MW (24 MW with cooling), and has a processing power of 33.86 PFLOPS.

So:

Tianhe-2:                   390 million $, 24 MW, 33.86 PFLOPS
AMD Radeon 295x2 x 100000:  200 million $, 70 MW,  1.15 EFLOPS

So for the cost of the Tianhe-2, you could build a computer that is more than 30 times faster and have its running costs covered for more than 5 years. I guess that after 5 years a supercomputer is mostly obsolete anyway, so you would build another one :)

What am I missing here?

Is there a difference between the floating-point operations done by today's supercomputers and these GPUs?

Is the problem with AMD not being able to produce/supply 100,000 units of the 295x2?

Are there some other practical concerns, like the inability to connect 100,000 units into something that would be useful, or the inability to cool them properly?

Would AMD Radeons be for some reason unstable or unreliable?
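For what it's worth, the question's back-of-envelope arithmetic does check out. Here it is reproduced as a short script (all prices and wattages are the question's assumptions, not measured figures):

```python
cards = 100000
pflops = cards * 11.5 / 1000.0   # 11.5 TFLOPS per card -> total PFLOPS
power_mw = cards * 500 / 1e6     # ~500 W per card -> MW (cards only)
hw_cost_m = cards * 1500 / 1e6   # $1500 per card -> million $
# 60 $/MWh at 70 MW total draw (50 MW cards + 20 MW guessed overhead), one year:
energy_m_per_year = 60 * 70 * 24 * 365 / 1e6
print(pflops, "PFLOPS,", power_mw, "MW,", hw_cost_m, "M$ hardware,",
      round(energy_m_per_year, 1), "M$/year electricity")
```

The electricity figure comes out at about 36.8 million $ per year, matching the "slightly more than 35 million" in the text.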
Exascale computer today
gpu;exascale
What am I missing here?

Most of the broader issues with your proposal are covered in What are the current obstacles to reaching exascale computing?.

I think the cost and power analysis you've done is a lower bound at best: you've calculated the cost it would take to buy 100,000 GPUs, and you can't run anything on a GPU that isn't plugged into anything. Operating systems are typically run on CPUs, not GPUs, so for every node in your system, in addition to one (or more) GPU accelerators, you'll need a mainboard with a CPU and some RAM. Furthermore, you've mentioned nothing about interconnects, on-node storage, or storage for your entire cluster. All of these things cost money and power, and that's not even counting other necessary components (e.g., cases/racks, cooling fans, heat exchangers for a water-cooled system).

Is there a difference between the floating point operations that are done by today's supercomputers and these GPUs?

As far as I can tell, the main difference between a CPU and a GPU is that GPUs are generally built to execute blocks of the same operation on different data across a core, and branching has poor performance. Beyond that, there's really no high-level difference. Some of today's supercomputers use GPUs (for instance, Titan), so I don't think there's a whole lot of difference until you start looking at low-level details.

Is the problem with AMD not being able to produce/supply 100,000 units of 295x2?

I doubt that.

Are there some other practical concerns, like inability to connect 100,000 units into something that would be useful, or inability to cool them properly?

Connecting the units isn't an issue. Cooling probably isn't an issue, if you can find the power, and water (if necessary), but it would be expensive.
The main practical concern would be reliability (see below).Would AMD Radeons be for some reason unstable or unreliable?The main problem is that with so many components, all of them have to be extremely reliable to avoid nodes going down during a computation that uses the whole machine (that is, to avoid hard errors).Soft errors (flipping a bit, for instance) also become a concern at very large scale; for instance, the lead in the solder used to attach components to the motherboards will occasionally emit a small amount of radiation that can flip a bit in memory. Sometimes, bit flips will affect an algorithm, sometimes they won't. Recovering from soft errors is an area of active research.
_codereview.118866
In order to solve this challenge:

Challenge Description

By starting at the top of the triangle and moving to adjacent numbers on the row below, the maximum total from top to bottom is 27.

   5
  9 6
 4 6 8
0 7 1 5

5 + 9 + 6 + 7 = 27

Input sample

Your program should accept as its first argument a path to a filename. An input example is the following:

5
9 6
4 6 8
0 7 1 5

You may also check the full input file which will be used for your code evaluation.

Output sample

The correct output is the maximum sum for the triangle. So for the given example the correct answer would be

27

I came up with the following code:

static int GetMaxSum(int[][] numbers)
{
    int firstCandidate = 0;
    int secondCandidate = 0;
    int max = 0;

    for (int i = 1; i < numbers.Length; i++)
    {
        for (int j = 0; j < numbers[i].Length; j++)
        {
            firstCandidate = 0;
            secondCandidate = 0;

            if (j - 1 >= 0)
            {
                firstCandidate = numbers[i][j] + numbers[i - 1][j - 1];
            }

            if (j < numbers[i - 1].Length)
            {
                secondCandidate = numbers[i][j] + numbers[i - 1][j];
            }

            numbers[i][j] = firstCandidate > secondCandidate ? firstCandidate : secondCandidate;
        }
    }

    int lastIndex = numbers.Length - 1;
    var lastLine = numbers[lastIndex];
    for (int i = 0; i < lastLine.Length; i++)
    {
        if (lastLine[i] > max)
        {
            max = lastLine[i];
        }
    }

    return max;
}

static int[] ParseLine(string line)
{
    string[] numbers = line.Trim().Split(' ');
    int numbersLength = numbers.Length;
    var result = new int[numbersLength];
    for (int i = 0; i < numbersLength; i++)
    {
        result[i] = int.Parse(numbers[i]);
    }
    return result;
}

static void Main(string[] args)
{
    var list = new List<int[]>();
    int max = 0;
    using (StreamReader reader = File.OpenText(args[0]))
    {
        while (!reader.EndOfStream)
        {
            string line = reader.ReadLine();
            if (null == line)
            {
                continue;
            }
            list.Add(ParseLine(line));
        }
    }
    max = GetMaxSum(list.ToArray());
    Console.WriteLine(max);
}

and wanted some feedback on it. The basic idea is:

1. Read the whole file
2. Transform the content into a lower-triangular-matrix structure
3. Calculate the max sums directly in the matrix (since it is only used internally, it can be thrown away once it has been processed)

This was the most efficient algorithm, with regard to space and time, that I could think of.

Anything you can find (readability problems, efficiency problems, design problems, etc.) is fine.
Pass Triangle challenge
c#;programming challenge;dynamic programming
The GetMaxSum() function would be clearer if it were not so monolithic. The general outline of the program is: update the numbers for each input row based on the previous intermediate results, and fetch the maximum at the end. The program structure should reflect that. Furthermore, you can avoid storing the entire triangle in memory by working as you encounter each row.

The firstCandidate / secondCandidate comparison would be better written using Math.Max().

Your ParseLine() can be simplified using Array.ConvertAll(). Since it's a one-liner, perhaps you don't need to write it as its own method at all.

The epilogue can be simplified using LINQ.

using System;
using System.IO;
using System.Linq;

public class Triangle
{
    private int[] Row = new int[0];

    public void AppendRow(string numbers)
    {
        int[] oldRow = Row;
        // http://stackoverflow.com/a/1297250
        Row = Array.ConvertAll(numbers.Trim().Split(' '), int.Parse);
        for (int j = 0; j < Row.Length; j++)
        {
            Row[j] += Math.Max
            (
                (j > 0) ? oldRow[j - 1] : 0,
                (j < oldRow.Length) ? oldRow[j] : 0
            );
        }
    }

    public int GetMaxSum()
    {
        return Row.Max();
    }

    public static void Main(string[] args)
    {
        Triangle triangle = new Triangle();
        using (StreamReader reader = File.OpenText(args[0]))
        {
            while (!reader.EndOfStream)
            {
                triangle.AppendRow(reader.ReadLine());
            }
        }
        Console.WriteLine(triangle.GetMaxSum());
    }
}
_unix.335113
When I attempt to set key bindings in Gnome that use the Alt key, Gnome also includes the Meta key. For instance when trying to set a keybinding to Ctrl-Alt-E and pressing only those keys, Gnome shows Ctrl-Alt-Meta-E. This causes no direct loss of functionality or anything as it seems the Alt key is sending both the Alt and the Meta signals. How can I get rid of this extra Meta signal?Also I have tried this with both the left and right Alt keys.
Get Rid of Meta Key Gnome
gnome;keyboard shortcuts
null
_codereview.13592
I want to insert parameters dynamically into statements in SQLite. I currently write the following:

tx.executeSql('SELECT Data from Table Where something = "' + anyvariable + '"', [], successFn, errorCB);

But I suspect there is a better (cleaner) way to do it. Any ideas?
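No answer was recorded here, so for completeness: executeSql in the Web SQL / PhoneGap-style API accepts ? placeholders in the SQL string and the corresponding values in the second (array) argument, which is both cleaner than string concatenation and safe against SQL injection. A sketch using the names from the question (the wrapper function is my own):

```javascript
// Pass the value via the arguments array instead of concatenating it into
// the SQL string; the database driver handles quoting and escaping.
function selectBySomething(tx, anyvariable, successFn, errorCB) {
  tx.executeSql(
    'SELECT Data FROM Table WHERE something = ?',
    [anyvariable],   // bound positionally to the ? placeholders
    successFn,
    errorCB
  );
}
```

Multiple placeholders work the same way: one array element per ?, in order.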
Insert dynamic parameters to sqlite database statements
javascript;jquery;sqlite
null
_codereview.135047
The task at hand

I have previously asked about optimizing Project Euler 1; see Generalized Project Euler 1: A sledgehammer to crack a nut. In this question I want to explore the next problem: Project Euler 2: Even Fibonacci numbers.

Even Fibonacci numbers

Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
$$ 1,\,2,\,3,\,5,\,8,\,13,\,21,\,34,\,55,\,89, \ldots $$
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.

My attempt

As in the standard literature, I define the Fibonacci sequence as
$$ F_n = F_{n-1} + F_{n-2}$$
with initial values \$F_0 = 0\$, \$F_1 = 1\$. It can be shown that every even Fibonacci number is of the form \$F_{3k}\$, where \$k\$ is some positive integer. Let \$E_n = F_{3n}\$ denote the \$n\$th even Fibonacci number. We then have the recurrence
$$ E_n = 4 E_{n-1} + E_{n-2}$$
with initial values \$E_0 = 0\$ and \$E_1 = 2\$. The sum of the first \$n\$ even Fibonacci numbers has a closed form and can be written as
$$\begin{align*} F_0 + F_3 + \cdots + F_{3(n-1)} + F_{3n} & = \frac{F_{3n+2}-1}{2} \\ E_0 + E_1 + \cdots + E_{n-1} + E_{n} & = \frac{E_{n} + E_{n+1}-2}{4} \end{align*}$$
using the notation defined earlier. The two expressions above are equal and differ only in notation. Next we can find the index of the largest even Fibonacci number not exceeding \$M\$:
$$ F_{3n} = \text{round}\left[\frac{\phi^{3n}}{\sqrt{5}}\right] = M \quad \Rightarrow \quad n = \frac{1}{6} \log_\phi 5 + \frac{1}{3} \log_\phi M\,,$$
where \$\phi\$ is the golden ratio. So \$n\$ rounded down gives us how many terms we need to iterate over.

The struggle

So the problem boils down to either computing \$F_{3n+2}\$ fast, or computing \$E_n + E_{n+1}\$ fast. I found the following paper: A fast algorithm for computing large Fibonacci numbers - Daisuke Takahashi.
This explains a method to compute the \$n\$th Fibonacci number in logarithmic time. Using this I was able to write a quick and fast solution to the problem. An alternative is to use matrix multiplication to find the \$n\$th term; however, as my benchmarks show, this is slightly slower than Daisuke Takahashi's optimized algorithm. A third option is to use the relationship between the Lucas numbers and the Fibonacci numbers to generate the \$n\$th term quickly. I will use that idea, but with the more general Lucas sequences instead:
\begin{align}U_0(P,Q)&=0, \\U_1(P,Q)&=1, \\U_n(P,Q)&=P\cdot U_{n-1}(P,Q)-Q\cdot U_{n-2}(P,Q) \mbox{ for }n>1,\end{align}
and
\begin{align}V_0(P,Q)&=2, \\V_1(P,Q)&=P, \\V_n(P,Q)&=P\cdot V_{n-1}(P,Q)-Q\cdot V_{n-2}(P,Q) \mbox{ for } n>1.\end{align}
For example, the Fibonacci sequence can be represented as \$F_n = U_n(1,-1)\$. The idea is to introduce a second sequence \$V_n(P, Q)\$ alongside \$U_n(P, Q)\$ so that we can compute \$U_n\$ faster than with the standard recursion \$U_n = P U_{n-1} - Q U_{n-2}\$.
First, we can double the subscript from \$k\$ to \$2k\$ in one step using the recurrence relations
\begin{align*} U_{2k} & = U_k\cdot V_k \\ V_{2k} & = V_k^2-2Q^k .\end{align*}
Next, we can increase the subscript by \$1\$ using the recurrences
\begin{align*} U_{2k+1} & = (P\cdot U_{2k} + V_{2k})/2 \\ V_{2k+1} & = (D\cdot U_{2k} + P\cdot V_{2k})/2,\end{align*}
where \$D = P^2 - 4Q\$. Using this square-and-add algorithm, we can evaluate \$F_{44}\$ with the following computations:
\begin{align*} F_1, \, F_2, \, F_4, \, F_5, \, F_{10}, \, F_{11}, \, F_{22}, \, F_{44}\end{align*}
The benefit of using the generalized Lucas sequences instead of the Lucas numbers is that we can now iterate over \$E_n = 2\cdot U_{n}(4,-1)\$ instead of the slower \$F_{3n+2} = U_{3n+2}(1,-1)\$.

Implementation and question

Using the theory above I have implemented three functions to solve this problem:

1. Daisuke Takahashi's algorithm, finding \$F_{3n+2}\$
2. Lucas sequences, finding \$F_{3n+2}\$
3. Lucas sequences, finding \$E_n\$ as \$2 \cdot U_n(4,-1)\$, and then using this to find the next term \$E_{n+1}\$

I have done all these implementations in the code below, and I do have some questions. Right now I do not have time to include benchmarks, but there is something strange with the running times of the three implementations. Why is Takahashi's algorithm faster than both my implementations? I can fully understand why it beats finding \$F_{3n+2}\$ using the generalized Lucas sequence. However, finding \$E_n\$ should be quite a bit quicker; why is it not?

I am not happy with how I have implemented the generalized Lucas sequence. Is there a faster way to find \$E_n\$ and \$E_{n+1}\$ than with the generalized Lucas sequences?

Please no comments on the Takahashi implementation.
I included it to benchmark my use of the generalized Lucas sequences.

Code

from math import log

PHI = (1 + 5**0.5)/float(2)
LOG_5 = log(5, PHI)/float(6)

def largest_even_fib_under_n(limit):
    '''
    The nth even fibonacci number can be written as
        F_{3n} = round( phi^3k / sqrt 5 )
    This function solves the equation F_{3n} = limit for n.
    '''
    return int(LOG_5 + log(limit, PHI)/float(3))

def lucas_sequence(p, q, n):
    '''
    Calculates the n'th term of the lucas sequences U_n and V_n.
    https://en.wikipedia.org/wiki/Lucas_sequence#Algebraic_relations
        U_0 = 0
        U_1 = 1
        U_n = P * U_(n-1) - Q * U_(n-2)
    and
        V_0 = 2
        V_1 = P
        V_n = P * V_(n-1) - Q * V_(n-2)
    Uses the doubling relations to compute it in log n time.
    https://en.wikipedia.org/wiki/Lucas_pseudoprime#Implementing_a_Lucas_probable_prime_test
        U_2k = U_k * V_k
        V_2k = V_k^2 - 2*Q^k
    and
        U_(2k+1) = (P * U_2k + V_2k)/2
        V_(2k+1) = (D * U_2k + P * V_2k)/2
    If the index is odd, we can make it even using the last relation,
    and then square.
    '''
    d = p * p - 4 * q
    un, vn, qn = 1, p, q
    u = 0 if n % 2 == 0 else 1
    v = 2 if n % 2 == 0 else p
    k = 1 if n % 2 == 0 else q
    n = n // 2
    while n > 0:
        u2 = un * vn
        v2 = vn * vn - 2 * qn
        q2 = qn * qn
        n2 = n // 2
        if n % 2 == 1:
            uu = (u * v2 + u2 * v)/2
            vv = (v * v2 + d * u * u2)/2
            u, v, k = uu, vv, k * q2
        un, vn, qn, n = u2, v2, q2, n2
    return u, v

def sum_fibonacci_with_lucas(limit):
    n = largest_even_fib_under_n(limit)
    U = lucas_sequence(1, -1, 3*n+2)
    return (U[0] - 1)/2

def sum_even_fibonacci_with_lucas(limit):
    n = largest_even_fib_under_n(limit)
    p = 4
    q = -1
    u0, v0 = lucas_sequence(p, q, n)
    u0 *= 2
    u1 = (p*u0 + 2*v0)/2
    return (u0+u1-2)/4

def fib_takahashi(n):
    '''
    A fast algorithm for computing large Fibonacci numbers
    by Daisuke Takahashi
    http://www.math.tamu.edu/~snpolloc/math491_691/Takahashi00.pdf
    '''
    if n == 0:
        return n
    F, L, sign, exp = 1, 1, -2, int(log(n, 2))
    mask = 2**exp
    for i in xrange(exp - 1):
        mask = mask >> 1
        F2 = F**2
        FL2 = (F + L)**2
        F = ((FL2 - 6*F2) >> 1) - sign
        if n & mask:
            temp = (FL2 >> 2) + F2
            L = temp + (F << 1)
            F = temp
        else:
            L = 5*F2 + sign
        sign = -2 if n & mask else 2
    if n & (mask >> 1) == 0:
        return F * L
    else:
        return ((F + L) >> 1) * L - (sign >> 1)

def sum_fibonacci_takahashi(limit):
    n = largest_even_fib_under_n(limit)
    return (fib_takahashi(3*n+2) - 1)/2

if __name__ == '__main__':
    limit = 4*10**6
    print sum_fibonacci_with_lucas(limit)
    print sum_even_fibonacci_with_lucas(limit)
    print sum_fibonacci_takahashi(limit)

    import timeit
    times = 100
    t1 = timeit.timeit("sum_fibonacci_with_lucas(4*10**(10**6))",
                       setup="from __main__ import sum_fibonacci_with_lucas",
                       number=times)/float(times)
    t2 = timeit.timeit("sum_even_fibonacci_with_lucas(4*10**(10**6))",
                       setup="from __main__ import sum_even_fibonacci_with_lucas",
                       number=times)/float(times)
    t3 = timeit.timeit("sum_fibonacci_takahashi(4*10**(10**6))",
                       setup="from __main__ import sum_fibonacci_takahashi",
                       number=times)/float(times)

    print '''
sum using lucas 3n+2 used {:10.7f} ms
sum using lucas n    used {:10.7f} ms
sum using takahashi  used {:10.7f} ms
takahashi was {:10.7f} times faster than lucas n
takahashi was {:10.7f} times faster than lucas 3n+2
lucas n was {:10.7f} times faster than lucas 3n+2
'''.format(1000*t1, 1000*t2, 1000*t3, t1/float(t3), t2/float(t3), t1/float(t2))
Generalized Project Euler 2: A sledgehammer to crack a nut
python;performance;python 2.7;mathematics
null
_unix.151851
I have my BBB configured to use a static IP address via the following in the file /etc/network/interfaces:

allow-hotplug eth0
iface eth0 inet static
    address 192.168.0.1
    netmask 255.255.255.0
    network 192.168.0.0

This seems to work OK on boot, but when the Ethernet cable is unplugged and then plugged back in, I lose the IP address. Any ideas what's going on here?

Another weird symptom: if I boot the BBB with the network cable plugged in but the switch it's connected to powered off, I get my static IP. But when I turn the switch on, I get a DHCP-assigned address, even though I have a static IP address configured.

One last thing: if I run ifdown eth0, the interface is gone when I run ifconfig. If I wait a few seconds and re-run ifconfig, it reappears, without an IP address. (Before I disabled IPv6, I used to get an IPv4 DHCP address in this case... weird.) When that happens, I get a message like this in /var/log/messages:

Apr 23 20:32:06 beaglebone kernel: [  737.170172] libphy: 4a101000.mdio:00 - Link is Up - 100/Full
Apr 23 20:32:06 beaglebone kernel: [  737.170304] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

Here's my uname -a:

root@beaglebone:/etc# uname -a
Linux beaglebone 3.8.13-bone47 #1 SMP Fri Apr 11 01:36:09 UTC 2014 armv7l GNU/Linux

Any ideas what's going on here?
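One experiment worth trying (my guess, not a confirmed fix): on some BeagleBone images a separate network manager or DHCP client (e.g. connman) can reconfigure eth0 on hotplug events, which would explain a DHCP address appearing after a cable replug despite the static configuration. Pinning the interface with auto instead of allow-hotplug in /etc/network/interfaces is a common first step:

```
auto eth0
iface eth0 inet static
    address 192.168.0.1
    netmask 255.255.255.0
    network 192.168.0.0
```

If a second DHCP client is indeed running, it would also need to be disabled for the static address to stick across link changes.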
Static IP addressing issue in Ubuntu on BeagleBoneBlack Rev C
ubuntu;networking;ip
null
_webmaster.30346
I have a series of utilities to generate Google sitemaps for my whole site. These files are massive, and slow to build. We want to start telling Google these pages are mobile-crawl-able too, by adding them to mobile sitemaps, but the documentation is unclear if I need to specify physically different files for my mobile URLs than for my normal ones.If this is my current sitemap:<?xml version=1.0 encoding=UTF-8 ?> <urlset xmlns=http://www.sitemaps.org/schemas/sitemap/0.9> <url> <loc>http://mobile.example.com/article100.html</loc> </url></urlset>Can I simply change it to:<?xml version=1.0 encoding=UTF-8 ?> <urlset xmlns=http://www.sitemaps.org/schemas/sitemap/0.9 xmlns:mobile=http://www.google.com/schemas/sitemap-mobile/1.0> <url> <loc>http://mobile.example.com/article100.html</loc> <mobile:mobile/> </url></urlset>Or do I need to create new files with the additional markup, alongside my existing files?
Updating Google sitemap for mobile
google;sitemap;mobile
null
_codereview.41568
I'm starting to learn Clojure, and would like feedback on some code I wrote to manage database migrations. Any recommendations to make it more robust, efficient, idiomatic, elegant, etc... are welcome!(ns myapp.models.migrations (:require [clojure.java.jdbc :as sql] [myapp.models.database :as db]));;;; Manages database migrations.;;;;;;;; Usage:;;;;;;;; user=> (migrate!) ; migrate to the latest version;;;; user=> (migrate! 20140208) ; migrate to a specific version(let [db-spec db/spec] ;; WARNING: Only works with PostgreSQL! ;; ;; TODO: Can this be made generic to all databases? Look into using the ;; JDBC database metadata to determine if a table exists. (defn table-exists? [table-name] (-> (sql/query db-spec [select count(*) from pg_tables where tablename = ? table-name]) first :count pos?)) ;;; The migrations to apply ;;; ;;; The order in which migrations are apply is determined by the :version property. ;;; Each migration must have :apply and :remove functions so we can migrate up or down. (def migration-0 {:version 0 :description Starting point. Does nothing, but allows us to remove all other migrations if we want to. :apply (fn [] nil) :remove (fn [] nil)}) (def migration-20140208 {:version 20140208 :description Create the articles table. :apply (fn [] (when (not (table-exists? articles)) (sql/db-do-commands db-spec (sql/create-table-ddl :articles [:title varchar(32)] [:content text])))) :remove (fn [] (when (table-exists? articles) (sql/db-do-commands db-spec (sql/drop-table-ddl :articles))))}) (def db-migrations [ migration-0 migration-20140208 ]) ;;; Forms for processing the migrations. (defn create-migrations-table! [] (when (not (table-exists? migrations)) (sql/db-do-commands db-spec (sql/create-table-ddl :migrations [:version :int])))) (defn drop-migrations-table! [] (when (table-exists? migrations) (sql/db-do-commands db-spec (sql/drop-table-ddl :migrations)))) (defn migration-recorded? [migration] (create-migrations-table!) 
(-> (sql/query db-spec [select count(*) from migrations where version = ? (:version migration)]) first :count pos?)) (defn record-migration! [migration] (create-migrations-table!) (when (not (migration-recorded? migration)) (sql/insert! db-spec :migrations {:version (:version migration)}))) (defn erase-migration! [migration] (create-migrations-table!) (when (migration-recorded? migration) (sql/delete! db-spec :migrations [version = ? (:version migration)]))) (defn migrate-up! [to-version] (let [filtered-migrations (sort-by :version (filter #(<= (:version %) to-version) db-migrations))] (doseq [m filtered-migrations] (when (not (migration-recorded? m)) ((:apply m)) (record-migration! m))))) (defn migrate-down! [to-version] (let [filtered-migrations (reverse (sort-by :version (filter #(> (:version %) to-version) db-migrations)))] (doseq [m filtered-migrations] (when (migration-recorded? m) ((:remove m)) (erase-migration! m))))) (defn migrate! ([] (let [last-migration (last (sort-by :version db-migrations))] (when last-migration (migrate! (:version last-migration))))) ([to-version] (let [version (or to-version 0) migration-exists (not (nil? (some #(= (:version %) version) db-migrations))) already-applied (migration-recorded? {:version version})] (cond (not migration-exists) (println (format migration %s was not found version)) already-applied (migrate-down! version) :else (migrate-up! version))))))
Database Migrations
sql;clojure
Honestly, I think this code looks great! Kudos -- this looks especially good for a beginner to Clojure. I have just a few minor improvements:

(defn create-migrations-table! []
  (when-not (table-exists? "migrations")
    (sql/db-do-commands db-spec
      (sql/create-table-ddl :migrations [:version :int]))))

Use (when-not x) instead of (when (not x)) -- it'll save you a couple parentheses :)

(defn record-migration! [migration]
  (create-migrations-table!)
  (when-not (migration-recorded? migration)
    (sql/insert! db-spec :migrations {:version (:version migration)})))

(same thing with when-not)

(defn migrate-up! [to-version]
  (let [filtered-migrations (sort-by :version
                                     (filter #(<= (:version %) to-version) db-migrations))]
    (doseq [m filtered-migrations]
      (when-not (migration-recorded? m)
        ((:apply m))
        (record-migration! m)))))

(another opportunity to use when-not)

(defn migrate!
  ([]
   (when-let [last-migration (last (sort-by :version db-migrations))]
     (migrate! (:version last-migration))))
  ...)

Anytime you have a statement of the form (let [x (something)] (when x (do-something))), you can simplify it to (when-let [x (something)] (do-something)).

At the end, I would consider calling migration-exists migration-exists?, since it represents a boolean value.

The only other thing that stood out for me is your inclusion of (create-migrations-table!) in a few of the other functions as the first line... this seems like kind of a work-around, and might potentially cause problems from a functional programming perspective. You might consider taking the (when-not (table-exists? "migrations") ...) out of the function definition for create-migrations-table! and including it as a check in the other 3 functions, like this:

(defn create-migrations-table! []
  (sql/db-do-commands db-spec
    (sql/create-table-ddl :migrations [:version :int])))

(defn record-migration! [migration]
  (when-not (table-exists? "migrations")
    (create-migrations-table!))
  (when-not (migration-recorded? migration)
    (sql/insert! db-spec :migrations {:version (:version migration)})))

This way seems more intuitive to me -- the create-migrations-table! ought to assume that there isn't already one in existence, and you would expect not to use it unless you're checking (table-exists? "migrations") as a condition. On the other hand, this is wordier, so you may prefer to leave it the way it is for the sake of simplicity.
_webmaster.13369
The major do(s) and don't(s) to ensure a smooth launch. Any tips ranging from A to Z would be very helpful. I hope this question isn't too broad, but I'm pretty sure that this community has had its fair share of experiences.
Site launch do(s) and don't(s)
ecommerce;launch
Not quite a do/don't list, but take a look at Launchlist - it's a checklist of important items worth taking care of.
_unix.23955
How can I permanently change the ownership (or at least the group) of an LVM volume? I figured that I have to use udev, but I don't know what the rule should look like. Let's say I want to change the ownership of LVM/disk to user/group virtualbox; how would I do that?
Permanently changing the ownership (or group) of LVM volume
permissions;lvm;udev
On Debian (and hopefully your distro as well) all the LVM metadata is already loaded into udev (by some of the rules in /lib/udev/rules.d). So you can use a rules file like this:

$ cat /etc/udev/rules.d/92-local-oracle-permissions.rules
ENV{DM_VG_NAME}=="vgRandom" ENV{DM_LV_NAME}=="ora_users_*" OWNER="oracle"
ENV{DM_VG_NAME}=="vgRandom" ENV{DM_LV_NAME}=="ora_undo_*" OWNER="oracle"
ENV{DM_VG_NAME}=="vgSeq" ENV{DM_LV_NAME}=="ora_redo_*" OWNER="oracle"

You can use udevadm to find out what kinds of things you can base your udev rules on. All the E: lines can be found in ENV in udev, e.g., the E: DM_LV_NAME=ora_data line matched by one of the above rules:

# udevadm info --query=all --name /dev/dm-2
P: /devices/virtual/block/dm-2
N: dm-2
L: -100
S: block/253:2
S: mapper/vgRandom-ora_data
S: disk/by-id/dm-name-vgRandom-ora_data
S: disk/by-id/dm-uuid-LVM-d6wXWIzc7xWJkx3Tx3o4Q9huEG1ajakYr0SLSl5as3C6RoydA66sgNHxBZdpem89
S: disk/by-uuid/787651c2-e4c7-40e2-b0fc-1a3978098dce
S: vgRandom/ora_data
E: UDEV_LOG=3
E: DEVPATH=/devices/virtual/block/dm-2
E: MAJOR=253
E: MINOR=2
E: DEVNAME=/dev/dm-2
E: DEVTYPE=disk
E: SUBSYSTEM=block
E: DM_UDEV_PRIMARY_SOURCE_FLAG=1
E: DM_NAME=vgRandom-ora_data
E: DM_UUID=LVM-d6wXWIzc7xWJkx3Tx3o4Q9huEG1ajakYr0SLSl5as3C6RoydA66sgNHxBZdpem89
E: DM_SUSPENDED=0
E: DM_UDEV_RULES=1
E: DM_VG_NAME=vgRandom
E: DM_LV_NAME=ora_data
E: DEVLINKS=/dev/block/253:2 /dev/mapper/vgRandom-ora_data /dev/disk/by-id/dm-name-vgRandom-ora_data /dev/disk/by-id/dm-uuid-LVM-d6wXWIzc7xWJkx3Tx3o4Q9huEG1ajakYr0SLSl5as3C6RoydA66sgNHxBZdpem89 /dev/disk/by-uuid/787651c2-e4c7-40e2-b0fc-1a3978098dce /dev/vgRandom/ora_data
E: ID_FS_UUID=787651c2-e4c7-40e2-b0fc-1a3978098dce
E: ID_FS_UUID_ENC=787651c2-e4c7-40e2-b0fc-1a3978098dce
E: ID_FS_VERSION=1.0
E: ID_FS_TYPE=ext4
E: ID_FS_USAGE=filesystem
E: FSTAB_NAME=/dev/mapper/vgRandom-ora_data
E: FSTAB_DIR=/opt/oracle/oracle/oradata
E: FSTAB_TYPE=ext4
E: FSTAB_OPTS=noatime
E: FSTAB_FREQ=0
E: FSTAB_PASSNO=3

Also, you can match on sysfs attributes, in either ATTR (device only) or ATTRS (parents too). You can see all the attributes like this:

# udevadm info --attribute-walk --name /dev/dm-2

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

  looking at device '/devices/virtual/block/dm-2':
    KERNEL=="dm-2"
    SUBSYSTEM=="block"
    DRIVER==""
    ATTR{range}=="1"
    ATTR{ext_range}=="1"
    ATTR{removable}=="0"
    ATTR{ro}=="0"
    ATTR{size}=="41943040"
    ATTR{alignment_offset}=="0"
    ATTR{discard_alignment}=="0"
    ATTR{capability}=="10"
    ATTR{stat}=="36383695 0 4435621936 124776016 29447978 0 3984603551 342671312 0 191751864 467456484"
    ATTR{inflight}=="0 0"

Though that matching is more useful for non-virtual devices (e.g., you'll get a lot of output if you try it on /dev/sda1).
_unix.109734
I have been trying to get xsel --delete to work to ultimately implement a script to move selected material from one editor to another.According to the description:Delete the contents of the selection------------------------------------ xsel --deleteWill cause the program in which text is selected to delete that text. Thisreally works, you can try it on xedit to remotely delete text in the editorwindow.But this does not seem to work (emacs, gedit and also tried xedit). According to the man page:-d, --delete Request that the current selection be deleted. This not only clears the selection, but also requests to the program in which the selection resides that the selected contents be deleted. Overrides all input options.But selected text is not even cleared. Clearing with --clear worked in gedit, but not in emacs.In all cases I checked if the selected text was known with xsel -p -o first.How can I get xsel --delete to work? Does this depend on the application listening to xsel?
xsel not deleting selected text
xsel
null
_softwareengineering.190699
In Java, the String object is both immutable and also a pointer (aka reference type). I'm sure there are other types/objects which are both immutable and pointers as well, and that this extends further than just Java. I cannot think of where this would be desirable, possibly because in my own experience I've only used either reference types or immutable primitive types. Can someone please give me a situation where it would be useful, so I can get an appreciation for the meaning/value of the concept?
When and why would we use immutable pointers?
java;pointers;immutability
An immutable reference type behaves similarly to a value type. If String were not immutable, something like this could happen:

String a = "abc";
String b = a;
a.ReplaceCharAt(1, 'X'); // this is not possible, if the type is immutable
// b is now "aXc", which might be counter-intuitive

Immutability prevents this from happening: Whenever I assign a string to a variable, I can be sure that this string won't change until I reassign the variable. On the other hand, we still get the benefits of reference types for String:

- In the example above, "abc" would only be stored once in memory.
- Strings can be variable in size.

The latter one is probably the main reason why Strings are reference types in Java and .NET. Often, value types are stored on the stack. To put something on the stack, the compiler needs to know its size beforehand, which is kind of difficult for strings of varying size.
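The same reference-plus-immutability behaviour can be sketched in Python, whose str is likewise an immutable reference type (illustration only; Java's String behaves analogously, and the variable names mirror the answer's example):

```python
# Python's str, like Java's String, is an immutable reference type.
a = "abc"
b = a                  # b now refers to the very same string object as a
same_object = a is b   # no copy was made: reference semantics

# There is no way to mutate the string in place; "replacing" a
# character produces a brand-new string object instead.
a = a[:1] + "X" + a[2:]

result = (same_object, a, b)
# b still refers to the original "abc": immutability makes the
# earlier sharing invisible to the caller.
```

Because no operation can change the shared object, the aliasing between `a` and `b` never leaks out, which is exactly why an immutable reference type "behaves similarly to a value type".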
_softwareengineering.288298
I'm writing a Gateway class that wraps access to a web service which provides information about a player's item inventory in a video game. This web service returns a variety of information, however I do not need to use all of it.The only thing I need to know from the web service is if a given game item is present in the player's inventory. This logic would be simple to implement: I would iterate over the player's items, trying to find a match. However, herein lies my primary concern: is this too much work for a Gateway to do? Would it be better practice to have the Gateway merely extract the list of items, and then handle the checking elsewhere? This would seem silly to me, because my program really only cares about the existence of a particular item and I don't really need a list of every item.
Is this too much work for a Gateway to do?
design patterns;domain driven design
In general, a gateway class should translate from the interface you have to the interface you want. If the interface you want is a simple presence check, by all means write it that way.I would just caution you as the requirements change and the application grows, to not be afraid to reevaluate that decision. If your gateway class starts looking too large, or like it's forcing too many abstraction layers together, split it into one that retrieves the list and one that filters/searches the list. Even if you have only one class, these should at least be separate functions.
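A minimal sketch of the two interface options discussed above (the class names and the web-service client interface here are hypothetical, purely for illustration):

```python
class InventoryGateway:
    """Wraps a web-service client, translating its interface into the one we want."""

    def __init__(self, client):
        # `client` is any object with a fetch_inventory(player_id) method
        # returning a list of item names (hypothetical interface).
        self._client = client

    def get_items(self, player_id):
        """Option 1: expose the raw list of items."""
        return list(self._client.fetch_inventory(player_id))

    def has_item(self, player_id, item):
        """Option 2: the interface the application actually wants --
        a simple presence check."""
        return item in self._client.fetch_inventory(player_id)


class FakeClient:
    # Stand-in for the real web-service client, for demonstration.
    def fetch_inventory(self, player_id):
        return ["sword", "shield"]


gateway = InventoryGateway(FakeClient())
found = gateway.has_item(42, "shield")
missing = gateway.has_item(42, "wand")
```

If requirements later grow, has_item can be split off from (or layered on top of) a list-retrieving gateway without changing its callers.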
_cs.79711
I posted a similar question here, however I have another question regarding Venn diagrams and logic circuits...In this problem:$$(A+B)(B+C)$$Wouldn't the Venn diagram look something like this?Because it simplifies to $B + AC$? And you can either have the $B$ region, the $AC$ region, or both?However, apparently it's supposed to look something like this:Which I don't understand... Because by the logic that you have $AB$ and $BC$, couldn't you justify saying that you can also shade in $A$, or shade in $C$ (by themselves)?OR, does saying $A$ mean you shade in EVERYTHING that contains $A$? And only don't shade in the intersection if you had something that said $A * \overline{B}$?
Someone explain the Venn diagram for the logic equation (A+B)(B+C)
logic;circuits;karnaugh map
The misunderstanding here is the same as in your previous question. If $B$ is true, the formula is true whatever the values of $A$ and $C$ are. This means that the whole of the $B$ circle needs to be shaded. Shading just the part at the top corresponds to saying The formula is true if $B$ is true and $A$ and $C$ are both false. That is certainly true, but it's not the whole story.For example, your proposed shading says that the formula is false if $A$ is true, $B$ is true and $C$ is false. But the formula is true in that case! You must shade every region of the diagram that makes the formula true. In this case, that means every region in the $B$ circle, and also the intersection of the $A$ and $C$ circles. (Corresponding to, The formula is true if $B$ is true, or if $A$ and $C$ are true, or both.)
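The claim can be checked mechanically by comparing truth tables for $(A+B)(B+C)$ and $B + AC$ over all eight assignments (a quick sketch):

```python
from itertools import product

# Enumerate all 8 assignments of A, B, C and confirm the original
# expression equals its simplified form B + AC.
for A, B, C in product([False, True], repeat=3):
    original = (A or B) and (B or C)
    simplified = B or (A and C)
    assert original == simplified

# The regions to shade are exactly the assignments making the formula
# true: the whole B circle, plus the A-and-C intersection.
true_regions = [(A, B, C)
                for A, B, C in product([False, True], repeat=3)
                if (A or B) and (B or C)]
```

Note that (A=True, B=True, C=False) appears among the true assignments, which is precisely the region the proposed shading wrongly left blank.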
_unix.317234
I have a server where a partition (/var) switched to read-only. So I tried to reproduce this problem on another server with the following command:

mount -o remount,ro /var/ -f

When I check our application log on that same partition I remounted read-only, I see entries recently added:

tail -f /var/log/httpd/*

CentOS 6.7
Apache: 2.2.15
uname -r: 2.6.32-573.7.1.el6.x86_64
Why doesn't mount respect the ro option?
mount;ext4;readonly
null
_softwareengineering.353222
Below are the common phases applied when mapping real-world data (requirements) to a DBMS-specific schema. Conceptual models (ER/EER/CODASYL/Hierarchical) bridge the gap in that mapping.

In the RDBMS world, the object-relational impedance mismatch is the reason these phases are applied in database design. Using an RDBMS reduces the agility of the application, because once the database is designed, it takes a lot of effort to modify the DB schema again based on new requirements.

But with a document database (say, MongoDB), object mapping is intuitive, which minimizes the impedance mismatch issue. So embedding and referencing of entities in JSON would suffice for modeling real-world data (requirements) in a document database. Conceptual models (ER/EER/...) do not look like a necessity for modeling a document database.

When modeling data for a document database (say, MongoDB), are these phases still valid?
Data modeling for NoSQL document database
database;database design;mysql;data modeling;document databases
Modeling data for a document database (say, MongoDB), are these phases still valid?

Yes, they are. And it is much more important to do them right before the first release. The reason is that in a document database you never change the structure of old documents. That means that every client accessing the database must be able to work with all versions of the document structure present in your database. This is why changes in the document structure should be well thought out.

Changes in the document should be well thought out, but for that, do we need the phase of creating a conceptual schema and then moving to a DB-specific schema? - overexchange

The point is that you don't know whether a document-based database meets your requirements before the conceptual schema is finished.
_unix.14710
Is it possible to find the source code for the ATA_PIIX driver outside the Linux kernel? I have to install an older version of Linux (SuSE 10) on a brand new laptop and it's failing to see the drive. I'd like to get a newer version of the driver to build for use in the automated build process.
ATA_PIIX Driver Source Code
linux;drivers;suse;sata
null
_softwareengineering.262250
A friend of mine without programming knowledge asked me this question and I found it interesting. I think it is not possible, because it would require a really advanced artificial intelligence capable of analyzing the text of a problem, thinking about a solution, and programming it. Just thinking about a machine being able to program a simple calculator seems pretty advanced to me.

But maybe I'm wrong, and I would like to know what you think about it: whether you are aware of any articles/research on the subject, whether it already exists, or whether the possibility exists of selecting a specification and getting the machine to program itself to that spec.
Is it imaginable to teach a machine how to program itself to a defined specification?
artificial intelligence;machine learning
null
_webmaster.85044
I have my own VPS running Debian Linux and serving several Web sites. I have no trouble managing it on my own, as I am a self-taught Linux expert. However, I am afraid that the data will be lost in case I die. As such, I am thinking of leaving my bank account and my house to a non-commercial organization which could preserve my sites.

I host at Digital Ocean. They don't provide managed hosting, but in the case of my death managed hosting would be necessary. Can managed hosting be ordered separately from the VPS hosting itself? Who provides managed hosting for hosts (such as Digital Ocean) which themselves do not offer it? Will ordering managed hosting separately from the hosting itself increase the price much (and by how much)?

I know that moving from one hosting provider to another while restoring all server functionality is difficult (I have done it twice). Maybe there is an easy way to move from one VPS to another (I don't know of one). What can I do in this situation?
Ordering managing Web hosting separately from the hosting itself
web hosting;vps;cloud hosting;administration
null
_unix.337435
I have been trying to insert a file as the first line of another with the following sed command, without much success. Each time, the file is inserted after line 1. Is there a switch that will insert it before line 1?

sed -i '1r file1.txt' file2.txt

Thanks in advance.
SED - Insert file at top of another
sed
The sed command r will place the contents of the specified file in the output before reading the next line of input. Unfortunately you can't specify 0 as the address for this command, so there's no way to insert the contents of a file before the first line of input (without poking around with the hold space).You could, however, just use plain old cat. It is, after all, the command for concatenating files:$ cat file1.txt file2.txt >out && mv out file2.txtTo be sure you're writing to a temporary file that does not already exist, one may use the mktemp utility:$ tmp=$(mktemp) && cat file1.txt file2.txt >$tmp && mv $tmp file2.txtThis is slightly awkward on the command line, but a good precaution in any script that needs to write to a temporary file.
_unix.98210
I'm using a Raspberry Pi to monitor a self-service fridge. When the door opens, a shell script is executed to capture pictures until the door closes again. The problem, however, is the speed of the ARM processor: capturing in full resolution using fswebcam takes 2-4 seconds, which is too long. Our idea to solve this is to split the process:

1. Capture raw images and save them to the memory card.
2. After the door is closed, process the raw data. This is not time-critical.

Now, my questions are: Is this possible/wise? Which program should I use?
Capture raw image from V4L (webcam) device
linux;raspberry pi;images;camera;v4l
null
_unix.26869
I tried to understand the usage of xargs and did the following experiment:

ls | xargs | touch

I want to refresh the dates of the files and directories in the current directory. It is a bit silly, since I could use a simpler form to achieve the same effect. In my mind, xargs reads from STDIN and turns it into arguments for the other command (/bin/echo by default if no command is specified). Am I misunderstanding something? The command failed and I am wondering why.
Why did using xargs fail in this case?
command line;pipe;xargs
It needs to be like this:ls | xargs touchThe xargs command runs the touch command with a number of strings read from stdin. In your case, stdin for xargs is the output end of the pipe from ls.The way you had the command:ls | xargs | touchxargs had no command to run against the strings (filenames) it would read from stdin. In that case, xargs simply prints each file name, and touch gets the list of file names on its standard input. But touch doesn't read from its standard input, and since you didn't give it any arguments, it should have printed an error message like:touch: missing file operandTry `touch --help' for more information.(which you should have mentioned in your question).
_datascience.10544
I want to use R for implementing a fuzzy inference system. There are 4 input variables and one output. Each rule depends on all input variables, and based on their membership the output class is decided. Can anyone suggest a good library and source examples to begin with, preferably a library with good visualisation support? Should I go with the 'frbs' R package?
Fuzzy Inference system in R
machine learning;r;fuzzy logic
You can use the frbs, sets, and fugeR packages for fuzzy logic model building.
_softwareengineering.355082
Aside from the probable dozens of bugs you can spot in the code below, I'd really like to know what most people would consider testing in this code. I have 8 similar exported functions, so I believe it is those that I should be testing.

The problem is, they all call the private function sendTokBoxMessage, so I can't test that function is being called. Moving it to its own module seems overkill, as this is already a small contained module and adding complexity for the sake of testing feels wrong to me.

That being the case, I thought I should test that the http request that sendTokBoxMessage makes happens. But this calls another private function, getSessionIdForRoom, so then I need to stub the roomStoreModel. I have to do the same for various other dependencies (moment, apiKey, apiSecret) and then use a mock http framework to spy on the request.

Do you think it is good practice to set up all the above, or is the opinion that if the module is small and simple enough, setting up tests is more effort than it's worth?

Thanks.

'use strict';

let request = require('request');
let moment = require('moment');
let jwt = require('jwt-simple');
let logger = require('../business/log_business');
let OpenTokConnection = require('../controllers/opentokConnection');
let ot = OpenTokConnection.ot;
let apiKey = OpenTokConnection.apiKey;
let apiSecret = OpenTokConnection.apiSecret;
let roomStoreModel = require('../model/roomstore_model')(ot, apiKey, apiSecret);

var getSessionIdForRoom = function(roomName) {
  return roomStoreModel.getSessionIdForRoom(roomName);
};

var createWebToken = function() {
  let now = moment().unix();
  let expires = now + 180;
  let claims = {'iss': apiKey, 'ist': 'project', 'iat': now, 'exp': expires, 'jti': 'jwt_corp'};
  let token = jwt.encode(claims, OpenTokConnection.apiSecret);
  return token;
};

var sendTokBoxMessage = function(roomName, endpoint, data) {
  getSessionIdForRoom(roomName).then(function(sessionId) {
    //Get the web token
    var webToken = createWebToken();
    //Set the headers
    var headers = {
      'X-OPENTOK-AUTH': webToken,
      'Content-Type': 'application/json'
    };
    // Configure the request
    var options = {
      url: 'https://api.opentok.com/v2/project/' + apiKey + '/session/' + sessionId + '/signal',
      method: 'POST',
      headers: headers,
      json: {'type': endpoint, 'data': JSON.stringify(data)},
      forever: true
    };
    // Start the request
    request(options, function(error, response) {
      if (error || response.statusCode !== 204) {
        logger.log('couldnt send tokbox message to endpoint, probably no one left to send to: ' + endpoint, roomName);
      }
    });
  });
};

exports.sendLogMessage = function(roomName, endPoint, data) {
  sendTokBoxMessage(roomName, endPoint, data);
};

exports.sendMessageToEveryone = function(roomName, endPoint, data) {
  sendTokBoxMessage(roomName, endPoint, data);
  logger.log('Sending message to everyone: ' + endPoint, 1, roomName);
};

exports.sendRestartRoomMessage = function(roomName) {
  sendTokBoxMessage(roomName, 'restartRoom', {'restart': 'restart'});
  logger.log('Sending restart room to everyone', 1, roomName);
};

exports.sendUserUpdateMessage = function(roomName, user, action) {
  var data = {'action': action, 'user': user};
  sendTokBoxMessage(roomName, 'userUpdated', data);
  logger.log('Sending userUpdated room to everyone: ' + user.uid, 1, roomName);
};

exports.sendRoomStateUpdateMessage = function(roomName, roomState) {
  sendTokBoxMessage(roomName, 'roomStateUpdate', roomState);
  logger.log('Sending roomState update to everyone: ' + JSON.stringify(roomState), 1, roomName);
};

exports.sendChatMessage = function(roomName, message) {
  sendTokBoxMessage(roomName, 'chatMessage', message);
  logger.log('Sending chat message: ' + JSON.stringify(message), 1, roomName);
};

exports.sendZipDxConnectedMessage = function(roomName) {
  sendTokBoxMessage(roomName, 'zipdxConnected', {'null': 'null'});
  logger.log('Sending ZipDx connected message', 1, roomName);
};

exports.sendZipDxDisconnectedMessage = function(roomName) {
  sendTokBoxMessage(roomName, 'zipdxDisconnected', {'null': 'null'});
  logger.log('Sending ZipDx disconnected message', 1, roomName);
};
What to test in this module?
javascript;unit testing;integration tests
null
_webapps.69798
I'm making an expense spreadsheet to keep track of all my expenses. I have made a dropdown list with different categories (Rent, Food, Others, etc.); these are put into C2 to CXX, and in the next cells, D2 to DXX, I put the expense amount. I would now like to have summary cells that tell me how much I have spent in each category (adding all the expenses from one category together). How do I go about doing this?
Collect data from dropdown list
google spreadsheets
I would recommend a pivot table but you might use something like: =SUMIF(C:C,J2,D:D) copied down, where J2 is the first entry in the table driving the drop-downs.
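SUMIF's behaviour here (add up the D-column amounts whose C-column category matches the chosen value) can be sketched outside the spreadsheet like this (illustration only; the rows and categories are made up to mirror the question's layout):

```python
# Rows mirror columns C (category) and D (amount) from the question.
expenses = [
    ("Rent", 800),
    ("Food", 55),
    ("Food", 20),
    ("Others", 10),
]

def sumif(rows, category):
    """Same logic as =SUMIF(C:C, category, D:D): sum amounts whose
    category cell equals the criterion."""
    return sum(amount for cat, amount in rows if cat == category)

food_total = sumif(expenses, "Food")
```

Each summary cell in the sheet is one such conditional sum, with the criterion taken from the cell driving the dropdown.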
_codereview.47600
I've been trying to optimize this piece of code:

void detect_optimized(int width, int height, int threshold)
{
    int x, y, z;
    int tmp;

    for (y = 1; y < width-1; y++)
        for (x = 1; x < height-1; x++)
            for (z = 0; z < 3; z++) {
                tmp = mask_product(mask, a, x, y, z);
                if (tmp > 255)
                    tmp = 255;
                if (tmp < threshold)
                    tmp = 0;
                c[x][y][z] = 255 - tmp;
            }
    return;
}

So far I've tried blocking and a few other things, but I can't seem to get it to run any faster. Blocking resulted in:

for (yy = 1; yy < height-1; yy += 4) {
    for (xx = 1; xx < width-1; xx += 4) {
        for (y = yy; y < 4+yy; y++) {
            for (x = xx; x < 4+xx; x++) {
                for (z = 0; z < 3; z++) {
                    tmp = mask_product(mask, a, x, y, z);
                    if (tmp > 255)
                        tmp = 255;
                    if (tmp < threshold)
                        tmp = 0;
                    c[x][y][z] = 255 - tmp;
                }
            }
        }
    }
}

Which ran at the same speed as the original program. Any suggestions would be great.

mask_product cannot be changed, but here is its code:

int mask_product(int m[3][3], byte bitmap[MAX_ROW][MAX_COL][NUM_COLORS], int x, int y, int z)
{
    int tmp[9];
    int i, sum;

    // ADDED THIS LINE (sum = 0) TO FIX THE BUG
    sum = 0;

    tmp[0] = m[0][0]*bitmap[x-1][y-1][z];
    tmp[1] = m[1][0]*bitmap[x][y-1][z];
    tmp[2] = m[2][0]*bitmap[x+1][y-1][z];
    tmp[3] = m[0][1]*bitmap[x-1][y][z];
    tmp[4] = m[1][1]*bitmap[x][y][z];
    tmp[5] = m[2][1]*bitmap[x+1][y][z];
    tmp[6] = m[0][2]*bitmap[x-1][y+1][z];
    tmp[7] = m[1][2]*bitmap[x][y+1][z];
    tmp[8] = m[2][2]*bitmap[x+1][y+1][z];

    for (i = 0; i < 9; i++)
        sum = sum + tmp[i];

    return sum;
}
Detect optimized
optimization;c;image
Do not expect much:

void detect_optimized(int width, int height, int threshold)
{
    int x, y, z;
    int tmp;
    int widthM1 = width-1;
    int heightM1 = height-1;

    for (y = 1; y < widthM1; y++) {
        for (x = 1; x < heightM1; x++) {
            for (z = 0; z < 3; z++) {
                tmp = mask_product(mask, a, x, y, z);
                if (tmp > 255)
                    c[x][y][z] = 0;
                else if (tmp < threshold)
                    c[x][y][z] = 255;
                else
                    c[x][y][z] = 255 ^ tmp; // in this case xor is the same as -
            }
        }
    }
    return;
}

You can also unroll the z-loop by copying the inner body 2 more times.

If you can manage to change the mask_product function:

int mask_product(int m[3][3], byte bitmap[MAX_ROW][MAX_COL][NUM_COLORS], int x, int y, int z)
{
    int xp1 = x+1;
    int xm1 = x-1;
    int yp1 = y+1;
    int ym1 = y-1;

    return m[0][0]*bitmap[xm1][ym1][z]
         + m[1][0]*bitmap[x][ym1][z]
         + m[2][0]*bitmap[xp1][ym1][z]
         + m[0][1]*bitmap[xm1][y][z]
         + m[1][1]*bitmap[x][y][z]
         + m[2][1]*bitmap[xp1][y][z]
         + m[0][2]*bitmap[xm1][yp1][z]
         + m[1][2]*bitmap[x][yp1][z]
         + m[2][2]*bitmap[xp1][yp1][z];
}

Also have a look at whether you can make your compiler inline the mask_product method.
_codereview.139006
I have here a simple webapp for displaying different D3 demonstrations:

The idea is - the user can select one of many modules in the control bar at the top, and the corresponding controls for that module will be displayed. The image generated by each module will be displayed in the main area below the control bar.

Essentially - I want to be able to create each module independently, each with their own set of controls and drawing function.

To achieve this, I've created an ng-template script for each module's controller, and a corresponding controller and service for each module. I then dynamically compile the template when the module is selected - using the technique outlined here.

HTML and Templates

<body ng-app="myapp" ng-controller="myctrl">
  <div id="header" class="flex-right">
    <div class="flex-down">
      <input type="button" class="btn btn-info" ng-repeat="m in modules"
             ng-click="switchModule(m)" value="{{m.name}}"></input>
    </div>
    <div class="flex-right">
      <dynamic-template template="currentModule.template"/>
    </div>
  </div>
  <div id="main">
  </div>

  <script type="text/ng-template" id="SpaceInvaders.html">
    <div ng-controller="SpaceInvadersCtrl" class="flex-right">
      <div class="slider-container">
        <slider ng-model="speed" min="1" max="10" value="5"
                on-stop-slide="update()" orientation="vertical"/>
      </div>
    </div>
  </script>

  <script type="text/ng-template" id="Waves.html">
    <button class="btn">foo</button>
    <button class="btn">bar</button>
    waves (more controls here).
    <!--more controller stuff here-->
  </script>
</body>

Main Controller

(function(){
  "use strict";
  var app = angular.module("myapp", ['ui.bootstrap', 'ui.bootstrap-slider']);

  app.controller("myctrl", function($scope){
    $scope.modules = [{
      name: "Space Invaders",
      template: 'SpaceInvaders.html'
    }, {
      name: "Waves",
      template: 'Waves.html'
    }];

    $scope.switchModule = function(m){
      $scope.$broadcast("dj:moduleswitch");
      $scope.currentModule = m;
    };

    $scope.currentModule = $scope.modules[0];
  });
}());

Dynamic Template directive

(function() {
  "use strict";
  angular.module("myapp").directive('dynamicTemplate', function($compile, $templateCache, $timeout) {
    var linker = function(scope, element, attrs) {
      scope.$on("dj:moduleswitch", function(){
        //timeout to make the template compile
        //after variables have updated
        $timeout(function(){
          element.html($templateCache.get(scope.template));
          $compile(element.contents())(scope);
        });
      });

      element.html($templateCache.get(scope.template));
      $compile(element.contents())(scope);
    };

    return {
      restrict: "E",
      link: linker,
      scope: {
        template: '='
      }
    };
  });
}());

Example module service and controller

(function() {
  "use strict";
  angular.module("myapp").service("SpaceInvaders", function() {
    this.execute = function(speed) {
      d3.select("#main").selectAll("*").remove();
      //snip for brevity -
      //but all logic pertaining to actually drawing the thing
    };
  })
  .controller("SpaceInvadersCtrl", function($scope, SpaceInvaders){
    $scope.speed = 5;
    $scope.update = function(){
      SpaceInvaders.execute($scope.speed);
    };
  });
}());

My Questions

The main thing I'm concerned about is the technique I've used for the solution - declaring a template and passing it to the directive for runtime compile. The implementation of the directive seems a little messy - with that $broadcast required from the main controller, and the $timeout required to get the compilation to occur after the selected module has changed. How do I name broadcasts? In my main controller, I need to know the names of the corresponding templates for each module. In each of my d3 modules, they'll all need to know the id of the drawing area - #main.
Dynamically displaying different controllers
javascript;angular.js;d3.js
null
_cs.29858
My question is regarding time-sharing and multi-tasking systems. A time-sharing operating system assigns a time slot to each task, while a multi-tasking OS runs various jobs in parallel. But as I understand it, on a single-processor system, time-sharing is the only way to achieve multi-tasking (I don't know if I am correct in this premise). So, are time-sharing and multi-tasking systems the same or different? And if they are different, what are the key differences (particularly on a single-processor/single-core system with no hyper-threading support, etc.)? Thanks
Are Time-sharing and multi-tasking operating systems same or different
operating systems;os kernel
Just as multiprogramming allows the processor to handle multiple batch jobs at a time, multiprogramming can also be used to handle multiple interactive jobs. In this latter case, the technique is referred to as time sharing, because processor time is shared among multiple users. In a time-sharing system, multiple users simultaneously access the system through terminals, with the OS interleaving the execution of each user program in a short burst or quantum of computation.

William Stallings, Operating Systems: Internals and Design Principles, 7th Edition

You should keep in mind that the principal objective of a time-sharing system is not to maximize processor use, but to minimize response time. I think it would be correct, however, to assert that on a single-processor system, multi-tasking is achieved through time sharing. Remember that a processor can execute only one task at any given time.
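The interleaving idea can be illustrated with a tiny round-robin simulation (a sketch only; real schedulers are preemptive and far more involved): a single "processor" runs one task at a time, giving each a fixed quantum in turn, so the tasks appear to run concurrently.

```python
from collections import deque

def round_robin(tasks, quantum=2):
    """Simulate time sharing on a single processor.

    `tasks` maps a task name to its remaining units of work; the one
    processor runs each task for at most `quantum` units per turn.
    Returns the execution trace, showing the interleaving.
    """
    queue = deque(tasks.items())
    trace = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        trace.append((name, ran))        # the CPU runs only this task now
        if remaining - ran > 0:          # unfinished: go to the back of the queue
            queue.append((name, remaining - ran))
    return trace

trace = round_robin({"editor": 3, "compiler": 5}, quantum=2)
# The tasks appear to run "at the same time", even though the processor
# only ever executes one of them at any given instant.
```

Shrinking the quantum makes the interleaving finer, which improves perceived response time at the cost of more switching, matching the point that time sharing optimizes for response time rather than raw processor use.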
_codereview.36251
I have been implementing a delegate class with C++11 that supports class member functions. I am interested in knowing whether there are any potential problems with this implementation. My main concern is the private Key struct of the Delegate class. Can I always rely on the key being correct in all situations? Meaning, can I always identify a certain function of a certain object with it?

In there I am storing 3 things:

- hash of the member function pointer
- pointer to the object
- std::function wrapping the actual function

The hash is calculated from the function pointer with the getHash() function. I originally used only that as the key, but I ran into a problem with inherited classes: I got the same hash for the inherited function with 2 different classes inheriting from the same base class. For that reason, I also store the pointer to the actual object; using these 2 as the key, I am able to identify a certain entry in the list of stored keys. I'd be really interested to know any potential issues with this implementation. This compiles with VS2013, and I am hoping it will also work with Clang, but I haven't tried it out yet.

template <typename...Params>
class Delegate {
private:
    typedef std::function<void(Params...)> FunctionType;

    struct Key {
        size_t m_funcHash;
        void* m_object;
        FunctionType m_func;

        Key(void* object, size_t funcHash, FunctionType func)
            : m_object(object), m_funcHash(funcHash), m_func(func) {}

        bool operator==(const Key& other) const {
            return other.m_object == m_object && other.m_funcHash == m_funcHash;
        }
    };

    std::vector<Key> m_functions;

public:
    template <typename Class>
    void connect(Class& obj, void (Class::*func)(Params...)) {
        std::function<void(Class, Params...)> f = func;
        FunctionType fun = [&obj, f](Params... params) { f(obj, params...); };
        size_t hash = getHash(func);
        m_functions.push_back(Key(&obj, hash, fun));
    }

    template <typename Class>
    size_t getHash(void (Class::*func)(Params...)) {
        const char* ptrptr = static_cast<const char*>(static_cast<const void*>(&func));
        int size = sizeof(func);
        std::string str_rep(ptrptr, size);
        std::hash<std::string> strHasher;
        return strHasher(str_rep);
    }

    template <typename Class>
    void disconnect(Class& obj, void (Class::*func)(Params...)) {
        size_t hash = getHash(func);
        for (unsigned int i = 0; i < m_functions.size(); ++i) {
            auto key = m_functions[i];
            if (key.m_funcHash == hash && key.m_object == &obj) {
                m_functions.erase(m_functions.begin() + i);
                --i;
            }
        }
    }

    template <typename ... Args>
    void operator()(Args... args) {
        for (auto& key : m_functions) {
            key.m_func(args...);
        }
    }
};

class BaseClass {
public:
    virtual void print(const std::string& str) = 0;
};

class A : public BaseClass {
public:
    void print(const std::string& str) {
        std::cout << "Class A : Print [ " << str << " ]\n";
    }
};

class B : public BaseClass {
public:
    void print(const std::string& str) {
        std::cout << "Class B : Print [ " << str << " ]\n";
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    A a;
    B b;

    Delegate<const std::string&> delegate;
    delegate.connect(a, &A::print);
    delegate.connect(b, &B::print);
    delegate("hello");
    delegate("world");

    delegate.disconnect(a, &A::print);
    delegate("bye world");
    delegate.disconnect(b, &B::print);
    delegate("nobody there."); // should not print anything

    std::cin.ignore();
    return 0;
}
C++ delegate implementation with member functions
c++;c++11;delegates;pointers
Fatal bugs:

A big issue is the fact that you take a copy of the object in the function in this line:

    std::function<void(Class, Params...)> f = func;

This needs to be Class& at the very least. Otherwise, invocations of handlers would operate on copies of the intended objects. See demonstration here: Live On Coliru with debug information showing copies are being made.

Conceptual issues:

- you are using a hash for equality. This is a conceptual flaw: hashes can have collisions. Hashes are used to organize large domains into a smaller set of buckets. This could really be considered a fatal bug, as disconnect might disconnect unrelated handlers
- you're wrapping your member-function pointers in a std::function. Twice. This is gonna be very bad for anything performance-sensitive
- you are abusing std::function for generic callables. Instead, use perfect storage on user-supplied callables (why bother whether it's std::mem_fun_ref_t, a bind-expression, a lambda or indeed a function pointer?). This way
  - you don't incur the overhead of type erasure and virtual dispatch that comes with std::function, unless your situation requires it
  - you don't have to bother with the annoying details that you have to, now
  - you don't need to treat member functions differently to begin with (std::bind to the rescue)
  - you don't need to rely on the implementation-defined representation of pointers-to-member-functions
- you're using implementation-defined behaviour to get a hash of a pointer-to-member-function (see previous bullet)
- your less-than-generic solution works for pointers-to-member-functions only. This you already knew. But, you may not have realized, it doesn't work for const or volatile qualified member functions.
- your Delegate class tries to guess the identity of registered handlers. In fact, the registering party should be responsible for tracking exactly which registration it manages. A cookie-based design would be most suitable here, and certainly takes all the implementation-defined behaviour out of the equation (as well as immediately making it clear what happens when the same handler gets registered multiple times). See the more drastic rewrite below.

Style, Efficiency

- Your Key class isn't the Key. It's the collection Entry (Key + Function).
- Your Key defines operator==. An object that's merely equatable necessitates linear search to do a lookup. Instead, define a weak total ordering (using operator<) so you can use tree-like data structures. See below for a sample that uses std::tie() to quickly implement both relational operators (the first version).
- Also, you need to use Key::operator== instead of just comparing m_object and m_funcHash again (encapsulation)
- Your Key stores the object, but the object is also part of FunctionType implicitly, because you capture it by reference. You should probably remove the redundancy.
- Your disconnect lookup is ... not very effective:
  - linear search? really?
  - no early out? (it's unclear how you really want repeatedly registered callbacks to be handled, but I'd say

        del.connect(some_handler);
        del.connect(some_handler);
        del.disconnect(some_handler); // disconnect 1 of the two
        del.disconnect(some_handler); // disconnect the other

    would be the principle of least surprise)
  - your i-- invokes unsigned integer wraparound. Now, this is not undefined behaviour, but it's bad style IMO. You could use

        for (unsigned int i = 0; i < m_functions.size(); ) {
            auto key = m_functions[i];
            if (key.m_funcHash == hash && key.m_object == &obj) {
                m_functions.erase(m_functions.begin() + i);
            } else {
                ++i;
            }
        }

    instead.
  - the loop crucially depends on m_functions.size() being evaluated each iteration. This is inefficient and error-prone

Guideline: Whenever you see a loop that became non-trivial and obscure like this, it's a sure sign you need to use an (existing) algorithm

Fixing just these elements: just use a better data structure, like

    struct Key {
        void* m_object;
        size_t m_funcHash;
        // no ctor: let this just be an aggregate

        bool operator==(const Key& other) const { return key() == other.key(); }
        bool operator< (const Key& other) const { return key() <  other.key(); }

    private:
        // trick to make it trivial to consistently implement relational operators:
        std::tuple<void* const&, size_t const&> key() const {
            return std::tie(m_object, m_funcHash);
        }
    };

    std::multimap<Key, FunctionType> m_functions;

The whole disconnect function becomes trivial:

    template <typename Class>
    void disconnect(Class& obj, void (Class::*func)(Params...)) {
        m_functions.erase(Key { &obj, getHash(func) });
    }

Fixed version(s)

Here's a version that uses the multimap approach (fixing just the style/efficiency issues): Live On Coliru

A more drastic rewrite

would look like this: Live On Coliru

This version alters the design so most (if not all) of the problems are sidestepped. It uses boost's stable_vector to do a cookie-based design. This moves the burden of tracking handler identities onto the registering parties (where it belongs after all).

    #include <functional>
    #include <iostream>
    #include <boost/container/stable_vector.hpp>

    template <typename... Params>
    class Delegate {
    private:
        typedef std::function<void(Params...)> Handler;
        typedef boost::container::stable_vector<Handler> Vector;
        Vector m_handlers;

    public:
        typedef typename Vector::const_iterator cookie;

        cookie connect(Handler&& func) {
            m_handlers.push_back(std::move(func));
            return m_handlers.begin() + m_handlers.size() - 1;
        }

        template <typename... BindArgs, typename Sfinae =
            typename std::enable_if<(sizeof...(BindArgs) > 1), void>::type>
        cookie connect(BindArgs&&... args) {
            return connect(Handler(std::bind(std::forward<BindArgs>(args)...)));
        }

        void disconnect(cookie which) {
            m_handlers.erase(which);
        }

        template <typename... Args>
        void operator()(Args... args) {
            for (auto const& handler : m_handlers)
                handler(args...);
        }
    };

    //////////////////////////////////// demonstration
    #define CV volatile const

    struct BaseClass {
        virtual void print(const std::string& str) CV = 0;
    };

    struct A : BaseClass {
        A() {}
        void print(const std::string& str) CV {
            std::cout << "Class A : Print [" << str << "]\n";
        }
    };

    struct B : BaseClass {
        B() {}
        void print(const std::string& str) CV {
            std::cout << "Class B : Print [" << str << "]\n";
        }
    };

    using namespace std::placeholders;

    int main() {
        Delegate<const std::string&> delegate;

        A CV a;
        B CV b;

        auto acookie = delegate.connect(&A::print, std::ref(a), _1);
        auto bcookie = delegate.connect(&B::print, std::ref(b), _1);

        delegate("hello");
        delegate("world");

        delegate.disconnect(acookie);
        delegate("bye world");

        delegate.disconnect(bcookie);
        delegate("nobody there."); // should not print anything
    }
_cs.45813
We have a Fibonacci heap with $N$ unique elements, and we want to make $O(N)$ order statistic queries (e.g., what is the element number 7 in this collection if it was sorted). Moreover, we know that the order statistic queries will be made in order (e.g., for $O(N)$ queries: what are elements number $1,3,5,\ldots,N$).

How can we do it in $O(N)$ time? We can't change the data structure in any way.

My thoughts:

1) The naive way is just to do delete-min $N$ times, but that would take $O(N \log N)$, so it's not good enough.

2) If we could somehow use increase-key (if it's a min-heap, or decrease-key if it's a max-heap), we would be able to find the min in constant time and increase its key so it's larger than anything else in the heap; we could avoid delete-min and its amortized bound of $O(\log n)$ entirely. But we are not allowed to modify the data structure.

EDIT: Sorry, I didn't make it clear: the heap doesn't have trees waiting to be melded. Otherwise it's not possible, since the heap could have $N$ degree-0 trees and we can't sort $N$ items in $O(N)$ time.
How Can we make $O(N)$ order statistic queries with Fibonacci heap in $O(N)$?
data structures
Well, you can build a Fibonacci heap in $O(N)$ time. Assuming you could also get the proposed order statistics in $O(N)$ time for the index set $\{1,2,3,\ldots,N\}$, we could then sort the stored set in linear time. You probably know that (comparison-based) sorting takes $\Omega(N \log N)$ time. Since you have no information about the universe of your elements, you cannot use tricks like bucket sort or sorting on the word RAM to get a linear-time sorting algorithm. So you can't solve your problem in $O(N)$ time.
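The reduction can be made concrete in a few lines of Python (a sketch using the standard library's binary heap heapq as a stand-in for a Fibonacci heap; the function name is illustrative):

```python
import heapq

def order_statistics_in_order(items):
    """Answer the ordered queries k = 1..N by repeated delete-min.

    heapify is O(N), but each pop costs O(log N). A hypothetical O(N)
    bound for all N queries together would therefore sort N comparable
    items in linear time, contradicting the Omega(N log N) lower bound.
    """
    h = list(items)
    heapq.heapify(h)                # building the heap is O(N)
    return [heapq.heappop(h) for _ in range(len(h))]

print(order_statistics_in_order([7, 3, 9, 1]))  # [1, 3, 7, 9]
```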
_scicomp.27722
I can't seem to find a good algorithm for the one-to-exactly-two assignment problem. Good algorithms are known for the classical assignment problem, where N tasks need to be assigned to M agents in a one-to-one correspondence.

In my case of the one-to-exactly-two assignment problem, I have N tasks and M agents. However, each task can only be solved if two agents are assigned to it. As in the classical assignment problem, the goal is to minimize the cost, given by a cost matrix $C_{ij}$. Here assigning task $i$ to agent $j$ costs an amount $C_{ij}$.

Any ideas how this can be solved efficiently?

I have already considered the review by Pentico, D., 'Assignment Problems: A Golden Anniversary Survey', but could not find my problem there.
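One reduction that seems natural here (a sketch, not an efficient algorithm; it assumes M = 2N, and surplus agents would need dummy tasks to absorb them): duplicate each task into two copies, which turns the problem into an ordinary one-to-one assignment solvable by, e.g., the Hungarian method. A brute-force checker of that formulation for tiny instances:

```python
from itertools import permutations

def one_to_two_bruteforce(C):
    """Exhaustive reference solution for tiny instances.

    C[i][j] = cost of assigning task i to agent j; assumes 2*len(C) agents.
    Returns (min_cost, {task: sorted pair of agents}).
    """
    n_tasks, n_agents = len(C), len(C[0])
    assert n_agents == 2 * n_tasks
    slots = [t for t in range(n_tasks) for _ in range(2)]  # each task duplicated
    best_cost, best_map = float("inf"), None
    for perm in permutations(range(n_agents)):            # agent filling each slot
        cost = sum(C[slots[k]][perm[k]] for k in range(n_agents))
        if cost < best_cost:
            best_cost = cost
            best_map = {t: sorted(perm[2 * t:2 * t + 2]) for t in range(n_tasks)}
    return best_cost, best_map
```

Any polynomial-time solver for the duplicated one-to-one instance (e.g. the Hungarian algorithm) would then give the same optimum without the factorial search.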
Algorithms for the one-to-two assignment problem
algorithms
null
_scicomp.83
I have several challenging non-convex global optimization problems to solve. Currently I use MATLAB's Optimization Toolbox (specifically, fmincon() with algorithm='sqp'), which is quite effective. However, most of my code is in Python, and I'd love to do the optimization in Python as well. Is there an NLP solver with Python bindings that can compete with fmincon()? It must:

- be able to handle nonlinear equality and inequality constraints
- not require the user to provide a Jacobian

It's okay if it doesn't guarantee a global optimum (fmincon() does not). I'm looking for something that robustly converges to a local optimum even for challenging problems, even if it's slightly slower than fmincon().

I have tried several of the solvers available through OpenOpt and found them to be inferior to MATLAB's fmincon/sqp.

Just for emphasis: I already have a tractable formulation and a good solver. My goal is merely to change languages in order to have a more streamlined workflow.

Geoff points out that some characteristics of the problem may be relevant. They are:

- 10-400 decision variables
- 4-100 polynomial equality constraints (polynomial degree ranges from 1 to about 8)
- a number of rational inequality constraints equal to about twice the number of decision variables
- the objective function is one of the decision variables
- the Jacobian of the equality constraints is dense, as is the Jacobian of the inequality constraints
Is there a high quality nonlinear programming solver for Python?
python;optimization;nonlinear programming
fmincon(), as you mentioned, employs several strategies that are well-known in nonlinear optimization that attempt to find a local minimum without much regard for whether the global optimum has been found. If you're okay with this, then I think you have phrased the question correctly (nonlinear optimization).The best package I'm aware of for general nonlinear optimization is IPOPT[1]. Apparently Matthew Xu maintains a set of Python bindings to IPOPT, so this might be somewhere to start.[1]: Andreas Wachter is a personal friend, so I may be a bit biased.
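As a lighter-weight alternative to full IPOPT bindings, scipy's SLSQP method (a sequential quadratic programming implementation, broadly similar in spirit to fmincon's 'sqp') also handles nonlinear equality and inequality constraints and estimates derivatives by finite differences when no Jacobian is supplied. A minimal sketch on a toy problem (it may not match fmincon's robustness on hard non-convex instances):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize x0^2 + x1^2
#   subject to  x0 + x1 = 1   (equality constraint)
#          and  x0 >= 0.2     (inequality, written as fun(x) >= 0)
res = minimize(
    fun=lambda x: x[0] ** 2 + x[1] ** 2,
    x0=np.array([2.0, 0.0]),
    method="SLSQP",                      # no Jacobian required
    constraints=[
        {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0},
        {"type": "ineq", "fun": lambda x: x[0] - 0.2},
    ],
)
print(res.success, res.x)   # the analytic optimum is x = (0.5, 0.5)
```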
_webapps.76598
I know that I should take ownership of any personal repositories (and gists, etc) they have that might contain company data before removal. However, what's the impact on repositories belonging to an organization that they were contributing to?Does the user removal affect history or data integrity in any way that I need to be aware of? In other words, is it safe to remove users or will it make bad things happen to all our important work? :-)
What precautions to take on removing users from Github Enterprise?
github
null
_unix.157283
I'm trying to increase the size of my boot partition because Arch keeps complaining about it. I have GParted on a USB flash drive. When I restart, there's no option to boot from the flash drive rather than the harddrive. How do I tell Arch to boot from the USB instead?If you need further info, just ask and I will update.
How can I boot GParted in Arch Linux?
arch linux;boot;gparted
null
_softwareengineering.210225
I come from a background where I was taught that once a transaction reaches some state (FINISHED, PRINTED, etc.) it should no longer be open for modification, even to admin users.

But here I am, fixing their human-input errors barbarically by deleting a row in the database and/or changing the current item's state back to its previous one. At some point they even make a mistake entering a transaction date and only realize it somewhere later. And it's really annoying.

Should software, especially enterprise software, give users all the freedom they request? If so, what good is a transaction state?

Update: Here, by transaction I mean a Transaction object, which usually consists of a reference to a Master object and its own attributes, not a database transaction.
Should a persisted and 'finished'-by-state transaction be editable by any user?
business rules
Your users obviously need a feature (changing or removing of an item after it has reached a certain transaction state) which your application doesn't provide. It seems like when designing the application you thought that there would never be a reason to do so, but the task you are currently performing proves that there is. Otherwise you wouldn't have to do it.Nobody is perfect. Input errors are human, as is not noticing them until it is too late. Every business process and the software system which represents it must keep that in mind. When revision requirements dictate that a document must not be invisibly changed after a certain point in the business process was reached, there should be a way to make the correction transparently. This can happen by using a visible change history of an item or by setting its status to VOID or CANCELED and creating a new item. The latter also allows to define a proper process for these situations. Example: When the status of an invoice changes from PRINTED to CANCELED, automatically print an apology letter telling the customer to ignore the previous invoice.
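The cancel-and-reissue process described above can be sketched in a few lines (illustrative names, not from any particular system): a finished record is never edited in place; a correction marks it CANCELED and creates a fresh record, keeping the audit trail intact.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)          # frozen: the record itself is immutable
class Transaction:
    number: int
    amount: float
    status: str = "OPEN"

def correct(txn, new_amount, next_number):
    """Cancel a finished transaction and issue a replacement."""
    if txn.status != "PRINTED":
        raise ValueError("only finished transactions go through cancellation")
    cancelled = replace(txn, status="CANCELED")   # new object, old one untouched
    reissued = Transaction(number=next_number, amount=new_amount)
    return cancelled, reissued

original = Transaction(number=1001, amount=99.0, status="PRINTED")
cancelled, reissued = correct(original, new_amount=89.0, next_number=1002)
print(cancelled.status, reissued.status)  # CANCELED OPEN
```

A follow-up action (like the apology letter in the example) can then be triggered off the OPEN→CANCELED transition.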
_webapps.57958
I wish there was a way to mark some of MY photos lying across/in different Google+ albums as my favourites!Or say, in Picasa Web Albums, I want to find a way to mark some photos (which may be present in different albums of mine), and then later move/copy all those marked photos in one album.I already know a work around to create spare email address and then tag all those photos with that tag. This work around is not very practical. Moreover, I won't be able to view all those tagged photos together at one place with one command, will I?
Way to mark some photos 'Favourites' in Google+?
google plus;google plus photos
null
_webmaster.2598
I am getting dozens of 404 errors on my site that are requests for gif's with apparently random names, like 4273uaqa.gif and 5pwowlag.gif.I see that most of them are coming from one user. I assume something is happening in the background on her machine without her knowledge -- a malware thing on the client.Have you seen this behavior before, and do you know what sort of malware might cause it?Would love to advise my customer that s/he has an issue. I'd also like to stop getting these 404 reports.(reposted from main Stack Overflow)
Are you aware of any client-side malware that sends lots of junk requests for .gifs?
404
null
_unix.312274
I originally installed the MegaRaid Controller on Windows. I switched to Kubuntu and now I'm having great difficulties getting the raid setup recognized.I don't really need the raid (Just the controller since the cards are SAS).Additionally, if I use megasasctl, I get the return:No LSI MegaRAID SAS cards found. You may try megactl instead.And the same thing if I use megactl.I tried following several tutorials online, which use the tool directly from Avagotech/LSI, without any luck getting it installed.At this point I really don't care about the raid, I just need the disk space.
LSI MegaRaid Controller - Switched OS's
linux;raid;kubuntu
null
_unix.16048
I'm using Microsoft Virtual PC 2007 to run a Linux virtual machine, but after some time I can't log in to it -- not from the instance on the local machine, nor via ssh into the machine. Both give the same response: the login process doesn't finish; the image shows where it stops. This only happens when the VM has been running for some days; if I reboot, it works fine again for some time before it locks up like this again.

Does anyone have an idea of why this is happening?

Edit: Here are the files.

ls -l /etc/pam.d/:

    -rw-r--r-- 1 root root 1208 2011-01-20 09:17 common-account
    -rw-r--r-- 1 root root 1260 2011-01-20 09:17 common-auth
    -rw-r--r-- 1 root root 1509 2011-01-20 09:17 common-password
    -rw-r--r-- 1 root root 1201 2011-01-20 09:17 common-session
    -rw-r--r-- 1 root root  182 2009-04-17 09:53 atd
    -rw-r--r-- 1 root root  384 2009-04-04 07:42 chfn
    -rw-r--r-- 1 root root  581 2009-04-04 07:42 chsh
    -rw-r--r-- 1 root root 3592 2009-04-04 07:42 login
    -rw-r--r-- 1 root root   92 2009-04-04 07:42 passwd
    -rw-r--r-- 1 root root 2305 2009-04-04 07:42 su
    -rw-r--r-- 1 root root   69 2009-03-27 17:18 samba
    -rw-r--r-- 1 root root  520 2009-03-21 10:28 other
    -rw-r--r-- 1 root root  168 2009-02-20 18:24 ppp
    -rw-r--r-- 1 root root  119 2009-02-17 04:22 sudo
    -rw-r--r-- 1 root root 1272 2009-01-28 21:58 sshd
    -rw-r--r-- 1 root root  289 2008-11-12 16:47 cron

/etc/pam.d/login (comments removed):

    auth     optional   pam_faildelay.so delay=3000000
    auth     [success=ok ignore=ignore user_unknown=ignore default=die] pam_securetty.so
    auth     requisite  pam_nologin.so
    session  required   pam_selinux.so close
    session  required   pam_env.so readenv=1
    session  required   pam_env.so readenv=1 envfile=/etc/default/locale
    @include common-auth
    auth     optional   pam_group.so
    session  required   pam_limits.so
    session  optional   pam_lastlog.so
    session  optional   pam_motd.so
    session  optional   pam_mail.so standard
    @include common-account
    @include common-session
    @include common-password
    session  required   pam_selinux.so open

/etc/pam.d/common-account:

    account [success=1 new_authtok_reqd=done default=ignore] pam_unix.so
    account requisite pam_deny.so
    account required  pam_permit.so

/etc/pam.d/common-auth:

    auth [success=1 default=ignore] pam_unix.so nullok_secure
    auth requisite pam_deny.so
    auth required  pam_permit.so
    auth optional  pam_smbpass.so migrate

/etc/pam.d/common-session:

    session [default=1] pam_permit.so
    session requisite   pam_deny.so
    session required    pam_permit.so
    session required    pam_unix.so
    session optional    pam_ck_connector.so nox11

auth.log (15:36 is probably when I tried to ssh to the VM, got the fault, and did a reboot):

    Jun 30 14:17:01 us CRON[1435]: pam_unix(cron:session): session opened for user root by (uid=0)
    Jun 30 14:17:01 us CRON[1435]: pam_unix(cron:session): session closed for user root
    Jun 30 14:20:01 us CRON[1458]: pam_unix(cron:session): session opened for user root by (uid=0)
    Jun 30 14:20:01 us CRON[1458]: pam_unix(cron:session): session closed for user root
    Jul  4 15:36:35 us sshd[1945]: Server listening on 0.0.0.0 port 22.
    Jul  4 15:36:35 us sshd[1945]: Server listening on :: port 22.
    Jul  4 15:36:37 us sshd[1945]: Received signal 15; terminating.
    Jul  4 15:36:37 us sshd[2052]: Server listening on 0.0.0.0 port 22.
    Jul  4 15:36:37 us sshd[2052]: Server listening on :: port 22.
    Jul  4 15:39:01 us CRON[2478]: pam_unix(cron:session): session opened for user root by (uid=0)
    Jul  4 15:39:01 us CRON[2478]: pam_unix(cron:session): session closed for user root
    Jul  4 15:40:02 us CRON[2565]: pam_unix(cron:session): session opened for user root by (uid=0)
    Jul  4 15:40:08 us CRON[2565]: pam_unix(cron:session): session closed for user root
    Jul  4 15:42:29 us login[2451]: pam_unix(login:session): session opened for user glennwiz by LOGIN(uid=0)
    Jul  4 15:50:01 us CRON[2677]: pam_unix(cron:session): session opened for user root by (uid=0)
    Jul  4 15:50:02 us CRON[2677]: pam_unix(cron:session): session closed for user root
    Jul  4 16:00:01 us CRON[2754]: pam_unix(cron:session): session opened for user root by (uid=0)
    Jul  4 16:00:01 us CRON[2756]: pam_unix(cron:session): session opened for user root by (uid=0)
    Jul  4 16:00:02 us CRON[2756]: pam_unix(cron:session): session closed for user root
    Jul  4 16:00:02 us CRON[2754]: pam_unix(cron:session): session closed for user root
    Jul  4 16:03:37 us sshd[2799]: Accepted password for glennwiz from 148.140.26.150 port 51330 ssh2
    Jul  4 16:03:37 us sshd[2799]: pam_unix(sshd:session): session opened for user glennwiz by (uid=0)
Virtual machine hangs when trying to log in after being idle for some time
ubuntu;virtual machine
null
_datascience.18752
I have fairly short time series data. The data set has a number of systems $s_{1}, s_{2}, s_{3},\ldots,s_{n}$. For each $s_{i}$ we have recorded the number of failures on each day. So far, we have recorded 30 days. I would like to know whether I can use an LSTM to predict the number of system failures for the next day. How about using Vector Autoregression? Any starting pointers and code references would be useful. Thank you.
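For comparison, a minimal per-system autoregressive baseline is only a few lines of plain Python (a sketch; with ~30 observations per system, a simple AR(1) fit is a sensible benchmark before reaching for an LSTM):

```python
def fit_ar1(series):
    """Least-squares slope phi for the model x_t ~ phi * x_(t-1)."""
    num = sum(cur * prev for cur, prev in zip(series[1:], series[:-1]))
    den = sum(prev * prev for prev in series[:-1])
    return num / den

def predict_next(series):
    """One-step-ahead forecast for a single system's daily failure counts."""
    return fit_ar1(series) * series[-1]

failures = [1, 2, 4, 8]          # toy daily failure counts for one system
print(predict_next(failures))    # 16.0
```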
time series prediction using LSTM
machine learning;deep learning;time series;tensorflow
null
_unix.103230
I have a Lenovo IdeaPad Yoga 13 with Ubuntu 13.10 Installed. The device has a Toggle TouchPad button on the keyboard (F5). The keyboard's F* buttons are reversed (so to get F5, I need to press Fn + F5, and F5 is actually the toggle key).I've found out that the button is actually read by the keyboard (rather than the TouchPad like certain devices), which is at /dev/input/event3. So using sudo input-events 3 I was able to figure out that the button sends the scan code 190:Output of sudo lsinput:/dev/input/event3 bustype : BUS_I8042 vendor : 0x1 product : 0x1 version : 43907 name : AT Translated Set 2 keyboard phys : isa0060/serio0/input0 bits ev : EV_SYN EV_KEY EV_MSC EV_LED EV_REPOutput of sudo input-events 3:23:13:03.849392: EV_MSC MSC_SCAN 19023:13:03.849392: EV_SYN code=0 value=023:13:03.855413: EV_MSC MSC_SCAN 19023:13:03.855413: EV_SYN code=0 value=0No other programs (such as xev) seem to be able to read it except for input-events. Is there any way to map this button to make it toggle the TouchPad on my laptop? If so, how can I do so?
Capturing key input from events device and mapping it (toggle TouchPad key is unmapped)
kernel;drivers;input;events
As it turns out the kernel did pick it up, but kept complaining that it's not recognised.

For anyone else having this issue, or who wants to map a key that's not read by the OS, read on.

Open a terminal and run dmesg | grep -A 1 -i setkeycodes. This will give you multiple entries like this:

    [    9.307463] atkbd serio0: Unknown key pressed (translated set 2, code 0xbe on isa0060/serio0).
    [    9.307476] atkbd serio0: Use 'setkeycodes e03e <keycode>' to make it known.

What we are interested in is the hexadecimal value after setkeycodes, in this case e03e. If you have multiple of these, you can run tail -f /var/log/kern.log. Once you've done so, you can tap the button you're looking for, and this will give you the same line as above; again, we only need the hexadecimal value. Make a note of this.

Now run xmodmap -pke | less and find the appropriate mapping. In my case, I needed to map this to toggle my touchpad, which means I was interested in the following line:

    keycode 199 = XF86TouchpadToggle NoSymbol XF86TouchpadToggle

If you can't find whatever you're interested in, read @Gilles answer too, as you can define custom mappings as well (if the kernel reads it, you won't need to add it to xorg.conf.d), then read on.

Now I ran the following command: sudo setkeycodes [hexadecimal] [keycode], so in my case that became:

    sudo setkeycodes e03e 199

Now you can use the following line to test if it worked and/or you have the correct mapping:

    xev | grep -A2 --line-buffered '^KeyRelease' | sed -n '/keycode /s/^.*keycode \([0-9]*\).* (.*, \(.*\)).*$/\1 \2/p'

When you run this command, you need to focus on the newly opened window (xev) and check the console output. In my case it read as follows:

    207 NoSymbol

This was obviously wrong, as I requested keycode 199, which is mapped to XF86TouchpadToggle. I checked xmodmap -pke again and noticed that keycode 207 is actually mapped to NoSymbol, and that there was an offset difference of 8. So I tried the setkeycodes command again, but with the key mapped to keycode 191:

    sudo setkeycodes e03e 191

This worked perfectly.

EDIT -- the solution I provided to make this work on startup does not actually work. I will figure this out tomorrow and update this answer. For now I suppose you can run this on startup manually.
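The two lookups above can be scripted (a Python sketch; the hint line is the one quoted from dmesg, and the 8 offset is the X-versus-kernel keycode difference observed above):

```python
import re

# Pull the scancode out of the kernel's "Unknown key" hint line.
hint = "atkbd serio0: Use 'setkeycodes e03e <keycode>' to make it known."
scancode = re.search(r"setkeycodes ([0-9a-f]+)", hint).group(1)

# X keycodes (from `xmodmap -pke`) are offset by 8 from kernel keycodes.
x_keycode = 199                 # XF86TouchpadToggle in xmodmap output
kernel_keycode = x_keycode - 8

print(scancode, kernel_keycode)                       # e03e 191
print(f"sudo setkeycodes {scancode} {kernel_keycode}")
```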
_unix.231941
Not that I need to run aplay as root, but the question came to me and I wondered why it shouldn't work.

    # aplay /home/bibek/apert.wav
    XDG_RUNTIME_DIR (/run/user/1000) is not owned by us (uid 0), but by uid 1000! (This could e g happen if you try to connect to a non-root PulseAudio as a root user, over the native protocol. Don't do that.)
    ALSA lib pcm_dmix.c:1024:(snd_pcm_dmix_open) unable to open slave
    aplay: main:722: audio open error: No such file or directory

I can see that it is giving me a reasonable amount of detail, but I still don't understand.
Can't run aplay as root!
shell;root;alsa
null
_unix.235565
I used to log in to a remote machine (I have root on this machine) using a key. Both my local machine and the remote machine run Fedora 23. For the last few days I can't log in to this machine using the key; it asks for a password instead. Here is the ssh -vvv output:

    ssh -vvv aveta
    OpenSSH_7.1p1, OpenSSL 1.0.2d-fips 9 Jul 2015
    debug1: Reading configuration data /home/rudra/.ssh/config
    debug1: /home/rudra/.ssh/config line 4: Applying options for aveta
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: /etc/ssh/ssh_config line 56: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to phy-aveta.physics.uu.se [130.238.194.143] port 22.
    debug1: Connection established.
    debug1: identity file /home/rudra/.ssh/id_rsa type 1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/rudra/.ssh/id_rsa-cert type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/rudra/.ssh/id_dsa type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/rudra/.ssh/id_dsa-cert type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/rudra/.ssh/id_ecdsa type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/rudra/.ssh/id_ecdsa-cert type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/rudra/.ssh/id_ed25519 type -1
    debug1: key_load_public: No such file or directory
    debug1: identity file /home/rudra/.ssh/id_ed25519-cert type -1
    debug1: Enabling compatibility mode for protocol 2.0
    debug1: Local version string SSH-2.0-OpenSSH_7.1
    debug1: Remote protocol version 2.0, remote software version OpenSSH_7.1
    debug1: match: OpenSSH_7.1 pat OpenSSH* compat 0x04000000
    debug2: fd 3 setting O_NONBLOCK
    debug1: Authenticating to phy-aveta.physics.uu.se:22 as 'rudra'
    debug3: hostkeys_foreach: reading file /home/rudra/.ssh/known_hosts
    debug3: record_hostkey: found key type ECDSA in file /home/rudra/.ssh/known_hosts:16
    debug3: load_hostkeys: loaded 1 keys from
phy-aveta.physics.uu.sedebug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],ssh-ed25519,ssh-rsadebug2: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: 
kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1debug2: kex_parse_kexinit: ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519debug2: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug1: kex: server->client [email protected] <implicit> nonedebug1: kex: client->server [email protected] <implicit> nonedebug1: kex: [email protected] need=64 dh_need=64debug1: kex: [email protected] need=64 dh_need=64debug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ecdsa-sha2-nistp256 SHA256:F34tt6QLRDt6Qm45eHOFhYGS5DSxYrThhR2lbBHNXesdebug3: hostkeys_foreach: reading file /home/rudra/.ssh/known_hostsdebug3: record_hostkey: found key type ECDSA in file /home/rudra/.ssh/known_hosts:16debug3: load_hostkeys: loaded 1 keys from phy-aveta.physics.uu.sedebug3: hostkeys_foreach: reading file /home/rudra/.ssh/known_hostsdebug3: record_hostkey: found key type ECDSA in file /home/rudra/.ssh/known_hosts:14debug3: load_hostkeys: loaded 1 keys from 130.238.194.143debug1: Host 'phy-aveta.physics.uu.se' is known and matches the ECDSA host key.debug1: Found key in /home/rudra/.ssh/known_hosts:16debug2: set_newkeys: mode 1debug1: 
SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug2: set_newkeys: mode 0debug1: SSH2_MSG_NEWKEYS receiveddebug1: Roaming not allowed by serverdebug1: SSH2_MSG_SERVICE_REQUEST sentdebug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT receiveddebug2: key: /home/rudra/.ssh/id_rsa (0x562e17c87070),debug2: key: /home/rudra/.ssh/id_dsa ((nil)),debug2: key: /home/rudra/.ssh/id_ecdsa ((nil)),debug2: key: /home/rudra/.ssh/id_ed25519 ((nil)),debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,passworddebug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic,passworddebug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,passworddebug3: authmethod_lookup gssapi-keyexdebug3: remaining preferred: gssapi-with-mic,publickey,keyboard-interactive,passworddebug3: authmethod_is_enabled gssapi-keyexdebug1: Next authentication method: gssapi-keyexdebug1: No valid Key exchange contextdebug2: we did not send a packet, disable methoddebug3: authmethod_lookup gssapi-with-micdebug3: remaining preferred: publickey,keyboard-interactive,passworddebug3: authmethod_is_enabled gssapi-with-micdebug1: Next authentication method: gssapi-with-micdebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Unspecified GSS failure. Minor code may provide more informationdebug1: Unspecified GSS failure. 
Minor code may provide more informationNo Kerberos credentials availabledebug2: we did not send a packet, disable methoddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: Next authentication method: publickeydebug1: Offering RSA public key: /home/rudra/.ssh/id_rsadebug3: send_pubkey_testdebug2: we sent a publickey packet, wait for replydebug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,passworddebug1: Trying private key: /home/rudra/.ssh/id_dsadebug3: no such identity: /home/rudra/.ssh/id_dsa: No such file or directorydebug1: Trying private key: /home/rudra/.ssh/id_ecdsadebug3: no such identity: /home/rudra/.ssh/id_ecdsa: No such file or directorydebug1: Trying private key: /home/rudra/.ssh/id_ed25519debug3: no such identity: /home/rudra/.ssh/id_ed25519: No such file or directorydebug2: we did not send a packet, disable methoddebug3: authmethod_lookup passworddebug3: remaining preferred: ,passworddebug3: authmethod_is_enabled passworddebug1: Next authentication method: [email protected]'s password:

I have deleted .ssh, .config and .cache from the remote and redone ssh-copy-id, without any success.

To troubleshoot, I have created another user and done ssh-copy-id, and that one is working fine.
ssh -vvv for that working machine is:ssh -vvv [email protected] OpenSSH_7.1p1, OpenSSL 1.0.2d-fips 9 Jul 2015debug1: Reading configuration data /home/rudra/.ssh/configdebug1: Reading configuration data /etc/ssh/ssh_configdebug1: /etc/ssh/ssh_config line 56: Applying options for *debug2: ssh_connect: needpriv 0debug1: Connecting to phy-aveta.physics.uu.se [130.238.194.143] port 22.debug1: Connection established.debug1: identity file /home/rudra/.ssh/id_rsa type 1debug1: key_load_public: No such file or directorydebug1: identity file /home/rudra/.ssh/id_rsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/rudra/.ssh/id_dsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/rudra/.ssh/id_dsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/rudra/.ssh/id_ecdsa type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/rudra/.ssh/id_ecdsa-cert type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/rudra/.ssh/id_ed25519 type -1debug1: key_load_public: No such file or directorydebug1: identity file /home/rudra/.ssh/id_ed25519-cert type -1debug1: Enabling compatibility mode for protocol 2.0debug1: Local version string SSH-2.0-OpenSSH_7.1debug1: Remote protocol version 2.0, remote software version OpenSSH_7.1debug1: match: OpenSSH_7.1 pat OpenSSH* compat 0x04000000debug2: fd 3 setting O_NONBLOCKdebug1: Authenticating to phy-aveta.physics.uu.se:22 as 'rudra2'debug3: hostkeys_foreach: reading file /home/rudra/.ssh/known_hostsdebug3: record_hostkey: found key type ECDSA in file /home/rudra/.ssh/known_hosts:16debug3: load_hostkeys: loaded 1 keys from phy-aveta.physics.uu.sedebug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521debug1: SSH2_MSG_KEXINIT sentdebug1: SSH2_MSG_KEXINIT receiveddebug2: 
kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],ssh-ed25519,ssh-rsadebug2: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1,[email protected],[email protected],[email protected],[email protected],hmac-md5,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96debug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: none,[email protected],zlibdebug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1debug2: kex_parse_kexinit: ssh-rsa,ecdsa-sha2-nistp256,ssh-ed25519debug2: kex_parse_kexinit: 
[email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected]: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1debug2: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit: none,[email protected]: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug1: kex: server->client [email protected] <implicit> nonedebug1: kex: client->server [email protected] <implicit> nonedebug1: kex: [email protected] need=64 dh_need=64debug1: kex: [email protected] need=64 dh_need=64debug1: expecting SSH2_MSG_KEX_ECDH_REPLYdebug1: Server host key: ecdsa-sha2-nistp256 SHA256:F34tt6QLRDt6Qm45eHOFhYGS5DSxYrThhR2lbBHNXesdebug3: hostkeys_foreach: reading file /home/rudra/.ssh/known_hostsdebug3: record_hostkey: found key type ECDSA in file /home/rudra/.ssh/known_hosts:16debug3: load_hostkeys: loaded 1 keys from phy-aveta.physics.uu.sedebug3: hostkeys_foreach: reading file /home/rudra/.ssh/known_hostsdebug3: record_hostkey: found key type ECDSA in file /home/rudra/.ssh/known_hosts:14debug3: load_hostkeys: loaded 1 keys from 130.238.194.143debug1: Host 'phy-aveta.physics.uu.se' is known and matches the ECDSA host key.debug1: Found key in /home/rudra/.ssh/known_hosts:16debug2: set_newkeys: mode 1debug1: SSH2_MSG_NEWKEYS sentdebug1: expecting SSH2_MSG_NEWKEYSdebug2: set_newkeys: mode 0debug1: SSH2_MSG_NEWKEYS receiveddebug1: Roaming not allowed by serverdebug1: SSH2_MSG_SERVICE_REQUEST sentdebug2: service_accept: ssh-userauthdebug1: SSH2_MSG_SERVICE_ACCEPT 
receiveddebug2: key: /home/rudra/.ssh/id_rsa (0x55c98f7eb080),debug2: key: /home/rudra/.ssh/id_dsa ((nil)),debug2: key: /home/rudra/.ssh/id_ecdsa ((nil)),debug2: key: /home/rudra/.ssh/id_ed25519 ((nil)),debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,passworddebug3: start over, passed a different list publickey,gssapi-keyex,gssapi-with-mic,passworddebug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,passworddebug3: authmethod_lookup gssapi-keyexdebug3: remaining preferred: gssapi-with-mic,publickey,keyboard-interactive,passworddebug3: authmethod_is_enabled gssapi-keyexdebug1: Next authentication method: gssapi-keyexdebug1: No valid Key exchange contextdebug2: we did not send a packet, disable methoddebug3: authmethod_lookup gssapi-with-micdebug3: remaining preferred: publickey,keyboard-interactive,passworddebug3: authmethod_is_enabled gssapi-with-micdebug1: Next authentication method: gssapi-with-micdebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Unspecified GSS failure. Minor code may provide more informationNo Kerberos credentials availabledebug1: Unspecified GSS failure. Minor code may provide more informationdebug1: Unspecified GSS failure. 
Minor code may provide more informationNo Kerberos credentials availabledebug2: we did not send a packet, disable methoddebug3: authmethod_lookup publickeydebug3: remaining preferred: keyboard-interactive,passworddebug3: authmethod_is_enabled publickeydebug1: Next authentication method: publickeydebug1: Offering RSA public key: /home/rudra/.ssh/id_rsadebug3: send_pubkey_testdebug2: we sent a publickey packet, wait for replydebug1: Server accepts key: pkalg ssh-rsa blen 279debug2: input_userauth_pk_ok: fp SHA256:xT3VPQUunB3Nv/Pmi6C6Sroc0fa9SlKcQ4d0eF2vxzIdebug3: sign_and_send_pubkey: RSA SHA256:xT3VPQUunB3Nv/Pmi6C6Sroc0fa9SlKcQ4d0eF2vxzIdebug1: Authentication succeeded (publickey).Authenticated to phy-aveta.physics.uu.se ([130.238.194.143]:22).debug1: channel 0: new [client-session]debug3: ssh_session2_open: channel_new: 0debug2: channel 0: send opendebug1: Requesting [email protected]: Entering interactive session.debug1: client_input_global_request: rtype [email protected] want_reply 0debug2: callback startdebug2: fd 3 setting TCP_NODELAYdebug3: ssh_packet_set_tos: set IP_TOS 0x10debug2: client_session2_setup: id 0debug2: channel 0: request pty-req confirm 1debug1: Sending environment.debug3: Ignored env XDG_VTNRdebug3: Ignored env XDG_SESSION_IDdebug3: Ignored env HOSTNAMEdebug3: Ignored env SHELLdebug3: Ignored env TERMdebug3: Ignored env XDG_MENU_PREFIXdebug3: Ignored env VTE_VERSIONdebug3: Ignored env HISTSIZEdebug3: Ignored env XCRYSDEN_SCRATCHdebug3: Ignored env WINDOWIDdebug3: Ignored env QTDIRdebug3: Ignored env QTINCdebug3: Ignored env QT_GRAPHICSSYSTEM_CHECKEDdebug3: Ignored env XCRYSDEN_TOPDIRdebug3: Ignored env USERdebug3: Ignored env LS_COLORSdebug3: Ignored env DESKTOP_AUTOSTART_IDdebug3: Ignored env SSH_AUTH_SOCKdebug3: Ignored env SESSION_MANAGERdebug3: Ignored env PATHdebug3: Ignored env MAILdebug3: Ignored env DESKTOP_SESSIONdebug3: Ignored env QT_IM_MODULEdebug3: Ignored env XDG_SESSION_TYPEdebug3: Ignored env PWDdebug1: Sending env XMODIFIERS = 
@im=ibusdebug2: channel 0: request env confirm 0debug1: Sending env LANG = en_GB.UTF-8debug2: channel 0: request env confirm 0debug3: Ignored env MODULEPATHdebug3: Ignored env GDM_LANGdebug3: Ignored env LOADEDMODULESdebug3: Ignored env GDMSESSIONdebug3: Ignored env SSH_ASKPASSdebug3: Ignored env HISTCONTROLdebug3: Ignored env HOMEdebug3: Ignored env XDG_SEATdebug3: Ignored env SHLVLdebug3: Ignored env GNOME_DESKTOP_SESSION_IDdebug3: Ignored env XBANDPATHdebug3: Ignored env XDG_SESSION_DESKTOPdebug3: Ignored env LOGNAMEdebug3: Ignored env QTLIBdebug3: Ignored env DBUS_SESSION_BUS_ADDRESSdebug3: Ignored env MODULESHOMEdebug3: Ignored env LESSOPENdebug3: Ignored env WINDOWPATHdebug3: Ignored env XDG_RUNTIME_DIRdebug3: Ignored env DISPLAYdebug3: Ignored env XDG_CURRENT_DESKTOPdebug3: Ignored env XAUTHORITYdebug3: Ignored env BASH_FUNC_module()debug3: Ignored env BASH_FUNC_scl()debug3: Ignored env _debug2: channel 0: request shell confirm 1debug2: callback donedebug2: channel 0: open confirm rwindow 0 rmax 32768debug2: channel_input_status_confirm: type 99 id 0debug2: PTY allocation request accepted on channel 0debug2: channel 0: rcvd adjust 2097152debug2: channel_input_status_confirm: type 99 id 0debug2: shell request accepted on channel 0Last login: Mon Oct 12 12:33:15 2015 from 130.238.194.90[rudra2@phy-aveta ~]$ debug2: client_check_window_change: changeddebug2: channel 0: request window-change confirm 0

I am clueless why, with the same local-remote combination, one is working and the other is not. Kindly help.

EDIT: .ssh/config for the failed one is:

Host aveta
 User rudra
 Hostname phy-aveta.physics.uu.se
 ForwardX11 yes

There is no .ssh/config entry for rudra2, i.e. the one that worked.

Answer to Paul: on my local machine I have:

tree .ssh/
.ssh/
 authorized_keys
 config
 environment
 id_rsa
 id_rsa.pub
 known_hosts

In both remote users, I only have authorized_keys.
And both are identical:

[root@phy-aveta rudra2]# diff .ssh/authorized_keys /home/rudra/.ssh/authorized_keys
[root@phy-aveta rudra2]#

Edit 2: Without using .ssh/config

I have commented out the part for the first user in .ssh/config. http://ur1.ca/nzndx is the ssh -vvv output for that. It is still asking for a password.

Edit: Permissions

The first user... the failed one:

# ls -al /home/rudra/ | grep .ssh
drwx------. 2 rudra rudra 4096 Oct 12 14:16 .ssh
$ ls -alF .ssh/
total 12K
drwx------. 2 rudra rudra 4.0K Oct 12 14:16 ./
drwxrwxr-x. 36 rudra rudra 4.0K Oct 12 14:30 ../
-rw-------. 1 rudra rudra 394 Oct 10 12:01 authorized_keys

For the 2nd user:

# ls -al /home/rudra2/ | grep .ssh
drwx------. 2 rudra2 rudra2 4096 Oct 12 14:16 .ssh
$ ls -alF .ssh/
total 12
drwx------. 2 rudra2 rudra2 4096 Oct 12 14:16 ./
drwx------. 4 rudra2 rudra2 4096 Oct 12 14:14 ../
-rw-------. 1 rudra2 rudra2 394 Oct 11 09:57 authorized_keys
Can't login to a remote machine with key
ssh;sshd;openssh
There it is. Group has write access to ~rudra:

$ ls -alF .ssh/
total 12K
drwx------. 2 rudra rudra 4.0K Oct 12 14:16 ./
drwxrwxr-x. 36 rudra rudra 4.0K Oct 12 14:30 ../
-rw-------. 1 rudra rudra 394 Oct 10 12:01 authorized_keys

Thus, sshd refuses to trust the files in ~rudra, and does not use ~rudra/.ssh/authorized_keys, even though its permissions are correct.

chmod g-w ~rudra ought to fix it.
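As an illustration of why the key was silently ignored, here is a simplified model of the rule sshd's StrictModes setting enforces: the home directory (and ~/.ssh) must not be writable by group or others. This is a sketch, not sshd's actual code, and real sshd also checks ownership and the whole path chain; the temporary directory stands in for /home/rudra.

```python
import os
import stat
import tempfile

def strict_modes_ok(path):
    """Rough model of sshd's StrictModes rule: refuse to trust a
    directory that is group- or other-writable."""
    mode = os.stat(path).st_mode
    return not (mode & (stat.S_IWGRP | stat.S_IWOTH))

home = tempfile.mkdtemp()      # stand-in for /home/rudra
os.chmod(home, 0o775)          # drwxrwxr-x, as in the listing above
print(strict_modes_ok(home))   # False -> authorized_keys is ignored
os.chmod(home, 0o755)          # effect of `chmod g-w ~rudra`
print(strict_modes_ok(home))   # True
```

With StrictModes on (the default), a failed check does not produce an error on the client side; sshd simply falls through to the next authentication method, which is why the symptom is a password prompt rather than a refusal.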
_unix.145301
I borrowed a WiFi modem from a friend - a Buffalo Airstation G54. I don't know the installation procedure; I just would like to set a password for the WiFi, since right now it is an open connection. I don't have an installation CD, but I have found a manual on the web. I am on Linux - Ubuntu. How can I set a WiFi password on this device?

EDIT #1:

~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
 inet 127.0.0.1/8 scope host lo
 inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
 link/ether 70:5a:b6:3d:3f:bf brd ff:ff:ff:ff:ff:ff
 inet 192.168.1.28/24 brd 192.168.1.255 scope global eth0
 inet6 fe80::725a:b6ff:fe3d:3fbf/64 scope link
 valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000
 link/ether f0:7b:cb:10:42:e2 brd ff:ff:ff:ff:ff:ff

Trying some of these addresses, the page is still not loaded.

EDIT #2:

Since I don't know the right terminology, the following picture is representative of the connection that I am trying to set up:

NOTE: The Buffalo router would be the wireless router in the diagram above.

However, the Buffalo router doesn't seem to me to have a WAN port, and the picture from the OP seems to support this: there is no detached port, and all the ports have the same shape.

I have tried:

:~$ sudo nmap -sP 192.168.1.0/24
Starting Nmap 5.21 ( http://nmap.org ) at 2014-09-23 15:47 JST
Nmap scan report for 192.168.1.1
Host is up (0.0034s latency).
MAC Address: 00:60:B9:E3:04:E4 (NEC Infrontia)
Nmap scan report for 192.168.1.11
Host is up (0.088s latency).
MAC Address: E0:C9:7A:A2:E9:95 (Unknown)
Nmap scan report for 192.168.1.15
Host is up (0.074s latency).
MAC Address: BC:3B:AF:98:F5:F3 (Unknown)
Nmap scan report for 192.168.1.18
Host is up (0.096s latency).
MAC Address: 0C:30:21:2E:C9:56 (Unknown)
Nmap scan report for 192.168.1.28
Host is up.
Nmap done: 256 IP addresses (5 hosts up) scanned in 5.44 seconds

I looked in the browser again for these 5 IPs, but the pages are not loaded. Could it be a problem with the modem, instead of the WiFi router?
How to set WiFi password on modem - buffalo airstation g54
ubuntu;wifi;modem
null
_cstheory.31623
1. It's known that a polynomial-time approximation algorithm that satisfies a 7/8+ε fraction of the clauses of 3MaxSAT implies P=NP.
2. It's also experimentally known that 3SAT has its most difficult known cases when the clause-to-variable ratio is approximately 4.2.

Q1. Is there a similar result for 3MaxSAT, where the problem of satisfying a 7/8+ε fraction of clauses becomes difficult on average when the 3MaxSAT instances have some specific format (like 3SAT in 2.)?
Q2. Is there a benchmark set available for 3MaxSAT approximation algorithms for such cases?
Q3. Is there a similar approximation-hardness result for 3SAT (like 3MaxSAT in 1.) if we are told that the given 3SAT instance is satisfiable? Is the clause-to-variable ratio alone sufficient to characterise the hardest cases for such approximation algorithms too?
Difficult On Average Cases for 3MaxSAT and 3SAT Approximation Algorithm
cc.complexity theory;approximation algorithms
null
_webapps.88714
I have a few groups created in Google Contacts. For example, one group is by location, such as "NYC", and another group is by association, such as "friends". I want to email the people who are in both groups: both "NYC" AND "friends". Right now, if I type "NYC" into the To line, all my NYC contacts come up, and if I type "friends", all my friends from around the country show up. I want to email the intersection of the two. How do I do that without creating a third list?
Gmail emailing the overlapping group
gmail;google contacts
null
_cs.1367
Last year, I was reading a fantastic paper on Quantum Mechanics for Kindergarten. It was not an easy paper. Now, I wonder how to explain quicksort in the simplest words possible. How can I prove (or at least handwave) that the average complexity is $O(n \log n)$, and what the best and the worst cases are, to a kindergarten class? Or at least in primary school?
Quicksort explained to kids
algorithms;education;algorithm analysis;didactics;sorting
At its core, Quicksort is this:

1. Take the first item.
2. Move everything less than that first item to the left of it, everything greater to the right (assuming ascending order).
3. Recurse on each side.

I think every 4-year-old on the planet could do 1 and 2. The recursion might take a little bit more explanation, but shouldn't be that hard for them:

- Repeat on the left side, ignoring the right for now (but remember where the middle was).
- Keep repeating with the left sides until you get to nothing. Now go back to the last right side you ignored, and repeat the process there.
- Once you run out of right and left sides, you're done.

As for the complexity, the worst case should be fairly easy. Just consider an already-sorted array:

1 2 3 4
2 3 4
3 4
4

Fairly easy to see (and prove) that it's $\frac{1}{2}n^2$.

I'm not familiar with the average-case proof, so I can't really make a suggestion for that. You could say that in an unsorted array of length $n$ the probability of picking the smallest or largest item is $\frac{2}{n}$, so...?
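For the grown-ups following along, the three steps above translate almost directly into code. This is an illustrative sketch (not from the answer itself), using the first item as the pivot exactly as described; note it trades the in-place partitioning of textbook quicksort for clarity:

```python
def quicksort(items):
    """Step 1: first element is the pivot. Step 2: smaller values go
    left, the rest go right. Step 3: sort each side the same way."""
    if len(items) <= 1:                        # 'run out of sides': done
        return items
    pivot, rest = items[0], items[1:]
    left = [x for x in rest if x < pivot]      # everything less -> left
    right = [x for x in rest if x >= pivot]    # everything else -> right
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))    # [1, 1, 2, 3, 4, 5, 6, 9]
```

Feeding it an already-sorted list reproduces the worst case sketched above: every `left` is empty and every `right` is only one element shorter than its input, giving the $\frac{1}{2}n^2$ behaviour.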
_softwareengineering.136826
Usually in a domain model you'll have objects, and those objects will have properties that are mutable and properties that are immutable - for instance, an instance id/name will be immutable, while some other properties (especially properties that depend on relationships with other objects) can vary over the lifetime of an object.

So, in the usual scenario where multiple clients are accessing the same data store, your ORM tool will usually model columns as properties and relationships as lists, with the mutual understanding that any mutable property might be outdated in the client - that is, they only represent caches of the values of the properties. However, in many circumstances one wants to do an operation that requires the actual state of the mutable properties, otherwise the operation might fail or produce data corruption; in those cases you'll attempt a lock on the row in question, and if the row has already changed then you cancel or abort. In any case, the operation is done off the client application.

At some point I have to ask a question (oddly enough, it's turning out to be surprisingly hard!), so I guess the question would be: is there any scenario where it's useful to keep mutable properties cached in domain objects for anything other than informational purposes? After all, if there are relationships that we care for - say we want to traverse the neighbours of some object - we'll want to do that with their real (not cached) states, which would involve a database transaction, or some other means to do the process atomically (maybe map-reduce?).
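One common way to make the "abort if the row has already changed" step concrete is optimistic locking with a version column. A minimal sketch follows; the table, column, and function names are invented for illustration and are not tied to any particular ORM:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 'old', 0)")

def update_if_unchanged(conn, item_id, new_name, cached_version):
    # The UPDATE only matches if nobody bumped the version since the
    # client cached the row; otherwise rowcount is 0 and the caller aborts.
    cur = conn.execute(
        "UPDATE items SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, item_id, cached_version))
    return cur.rowcount == 1

print(update_if_unchanged(conn, 1, "new", 0))    # True: the cache was fresh
print(update_if_unchanged(conn, 1, "newer", 0))  # False: the cache was stale
```

The cached properties in the domain object then serve one purpose beyond display: they carry the version (or the original values) needed to detect a conflict at write time, without holding a pessimistic lock in between.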
Is there any scenario where it's useful to keep mutable properties cached in domain objects for anything else than informational purposes?
orm;domain model;immutability
null
_webapps.65997
For example, suppose that as of now, I want to search for the name "Alex Matthew Jacob" in my custom search engine. If I search without quotation marks, I get a multitude of unhelpful pages about people named Alex, Matthew, and Jacob who happen to be far more popular than Alex Matthew Jacob. However, if I do use quotation marks, I end up having to search the same name several times with different word orders, because of the various formats of the name on the sites of my custom search engine. This then begs the original question: is there a way to enter a Google search so that all terms entered must be consecutive, but not ordered? Or, if not, is there any sort of workaround?
Is there a way to enter a Google search so that all terms entered must be consecutive, but not ordered?
google search
null
_codereview.29962
I am working on creating an interface gem that works with the Jira-Ruby gem.I have several questions and would like a general review for both my code and my attempt at creating a gem (I've never written one before). Any advice on where to go next would also be appreciated. File structure:jira_interface lib jira_interface config.rb version.rb app_interface.rb jira_interface.rb jira_interface.gemspec #...etc files, I used bundle's gem command to set upMy main module (jira_interface.rb) looks like this:require jira_interface/versionrequire jirarequire jira_interface/configrequire jira_interface/app_interfacemodule JiraInterface def get_jira_client @client = Jira::Client.new(USERINFORMATION) end def get_project(project_name) @client.Project.find(project_name) endendAnd my app_interface.rb looks like this:class AppInterface < JiraInterface before_filter :get_jira_client def create_issue(issue_desc, project) issue = @client.Issue.build issue.save({fields=>{summary=> issue_desc, project=> {id => project.id}, issuetype=> {id=> 1}}}) end def update_issue(issue_id) issue = @client.Issue.find(issue_id) comment = issue.comments.build comment.save({body=> This happened again at #{Date.time}}) end def get_issues(issue_desc, project_name) project = get_project(project_name) if project.issues.detect { |issue| issue.fields['summary'].include?(issue_desc) } update_issue(issue.id) else create_issue(issue_desc, project) end endendThe goal is pretty simple: call get_issues() from the rails app when an error happens. If the error is already documented, post a comment saying that it happened again. If it's not, then create a new issue. To the best of my knowledge, this should work because I've been following the examples found online. However, I've been having a heck of a time just testing it. If anyone could suggest a good way of testing this, that would be much appreciated.
Creating a Ruby gem that talks to another gem
ruby;ruby on rails;file structure
I suggest you take a close look at how the gem you are using is tested. Those tests include mocks for all responses of the API (I assume) and is a good example for testing gems in general.Notice:helper methods in the support folder including shared examplesmock data to simulate responses from the source APIcustom matchers to make code better readable and reduce repetition of code (DRY-principle)Those to me are the three most important you should embrace for now.
_unix.268593
$ rm Think\ Python:\ How\ to\ Think\ Like\ a\ Computer\ Scientist\ 2014.pdf rm: cannot remove Think Python: How to Think Like a Computer: No such file or directory$ Scientist 2014.pdf: command not found$ rm Think*$In the first rm, I use autocompletion in bash to specify a file with a newline character in its filename, and it doesn't work because of the newline character.In the second rm, I use file expansion to avoid explicitly to specify the new line character. Why can file expansion avoid the problem? Doesn't file expansion expand to the full filename which contains the new line character too?
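As an aside (not part of the question), the effect is reproducible outside the shell: expansion happens after the command line has been parsed into words, so the newline inside the matched filename never has to pass through the parser, and the full name, newline included, is delivered as a single argument. A small Python sketch using the glob module, which performs the same kind of pattern expansion; the filename here is invented:

```python
import glob
import os
import tempfile

d = tempfile.mkdtemp()
name = os.path.join(d, "Think Python:\nHow to Think 2014.pdf")
open(name, "w").close()

# The pattern is one word; the match it produces is the literal
# filename, newline and all, as a single string.
matches = glob.glob(os.path.join(d, "Think*"))
print(matches == [name])   # True
os.remove(matches[0])      # removal works, just like `rm Think*`
print(os.listdir(d))       # []
```

In contrast, typing the name out (even with backslash escapes produced by tab completion) requires the newline to survive the parsing stage, where it is treated as a command terminator.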
Why can file expansion work for filenames with a newline character?
bash
null
_codereview.160191
In my earth science class at school we had a lab where we went around the campus measuring the temperature at different place and marking it on a map. Afterword we were told to color the map using different colors for different temperatures like this:~Isothermic mapAs you can see there is never a jump that skips one of the temperature colors, if you have a data point of 30 near one of 32 you have to show whatever color 31 is between them. I asked the teacher if I could write some code where I can input the points and generate the map instead and they said yes. I wrote this in P5.js:var dataPoints = [];var num = 0;var tempSlider;var num2 = 0;var num3 = 0;var temps = [];var load = false;var text;function setup() { createCanvas(500, 500); background(255); tempSlider = createSlider(32, 50, 40); tempSlider.position(20,20);}function draw() { clear(); stroke(0); fill(0); line(0,0,width-1,0); line(0,height-1,width-1,height-1); line(0,0,0,height); line(width-1,0,width-1,height-1); textSize(20); text(tempSlider.value(),0,20); if(load==false){ for (var i = 0; i < dataPoints.length; i++) { dataPoints[i].display(); } }else{ drawPixels(); }}function mousePressed () { if(load == false){ if(mouseX>170 || mouseY>55){ dataPoints[num] = { x: mouseX, y: mouseY, temp: tempSlider.value(), display: function() { stroke(0+(((this.temp-32)/18))*255, 0, 255-(((this.temp-32)/18))*255); fill(0+(((this.temp-32)/18))*255, 0, 255-(((this.temp-32)/18))*255); ellipse(this.x, this.y, 5, 5); } } num++; } }}function keyPressed () { if(keyCode == ENTER){ updateTemp(); load = true; }}function updateTemp (){ for(var y = 0; y < height; y++){ for(var x = 0; x < width; x++){ var posTemps = []; for(var j = 0; j < dataPoints.length; j++){ var distance = sqrt((x-dataPoints[j].x**2)+(y-dataPoints[j].y**2)); if(distance <=100){ var impact = (100 - distance)/100; posTemps[num2] = impact*dataPoints[j].temp; num2++; } } for(var i = 0; i<posTemps.length; i++){ temps[num3]+= posTemps[i]; } temps[num3] = 
temps[num3]/posTemps.length; num3++; } }}function drawPixels (){ for(var i = 0; i<temps.length; i++){ stroke(0+(((temps[i]-32)/18))*255, 0, 255-(((temps[i]-32)/18))*255); fill(0+(((temps[i]-32)/18))*255, 0, 255-(((temps[i]-32)/18))*255); point(i%width,(i-(i%width))%height); }}The problem with this code is all those for loops, when I try running the program it works and I can input the points but the moment I press enter the tab either crashes or very slowly draws the first line of pixels then freezes.If you want to run the codeSo if anyone knows some other way of drawing the pixels that I can use that might be faster that would be great. Also other problems you find in the code would also be helpful to know about.
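Two hedged observations on the code above, offered as an aside rather than a full review. First, in `sqrt((x-dataPoints[j].x**2)+(y-dataPoints[j].y**2))` the `**` operator binds tighter than `-`, so this computes `x - (px**2)` rather than the intended `(x - px)**2`; the distances are likely wrong before performance even matters. Second, the accumulator indices `num2`/`num3` are never reset, and `temps[num3] += ...` starts from an undefined slot. A plain-Python sketch of the per-pixel weighting with those issues avoided (function name and the weighted-average scheme are my own choices, not from the question):

```python
from math import sqrt

def temperature_grid(width, height, points, radius=100):
    """points: list of (x, y, temp). Returns a row-major grid where each
    pixel averages nearby temperatures, weighted by (radius - d) / radius."""
    grid = []
    for y in range(height):
        for x in range(width):
            total = weight_sum = 0.0
            for px, py, t in points:
                d = sqrt((x - px) ** 2 + (y - py) ** 2)  # parenthesised!
                if d <= radius:
                    w = (radius - d) / radius
                    total += w * t
                    weight_sum += w
            grid.append(total / weight_sum if weight_sum else None)
    return grid

g = temperature_grid(4, 4, [(0, 0, 40.0)], radius=100)
print(g[0])   # 40.0 -- a pixel sitting on the data point gets its temperature
```

Note this divides by the sum of weights rather than by the number of nearby points; dividing a sum of already-scaled temperatures by the point count (as the original does) drags every pixel toward zero. For speed, the same idea applies in JavaScript: compute the grid once into an array and draw it, instead of recomputing inside `draw()`.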
P5 Isothermic Map Generator From Data Points
javascript;time limit exceeded;data visualization;processing.js
null
_codereview.19500
I currently have a method in my repository that will run SQL and map the reader to objects:protected IEnumerable<T> Query<T, TFactory>(string sql, List<IDbDataParameter> parameters = null) where TFactory : IFactory<T>{ List<T> list = new List<T>(); var factory = Activator.CreateInstance<TFactory>(); var connection = (Db == null) ? Connection : Db.Connection; using (var manager = new DbCommandManager(connection, sql)) { if (parameters != null) foreach (var parameter in parameters) manager.AddParameter(parameter); using (var reader = manager.GetReader()) { while(reader.Read()) list.Add(factory.CreateTFromReader(reader)); } } return list;}To use the following you simply do:public IEnumerable<Code> GetCodeWType(string CodeType, string Code){ var sql = @Select CODE_TYPE, CODE, DESCRIPTION From Codes Where (CODE_TYPE = :1) AND (CODE = :2); var parameters = new List<IDbDataParameter>(); parameters.Add(DbFactory.GetParameter(:1, CodeType, DbType.String)); parameters.Add(DbFactory.GetParameter(:2, Code, DbType.String)); return this.Query<Code, CodeFactory>(sql, parameters);}I would love to hear thoughts and feedback on how I can improve.
Running SQL and mapping the reader to objects
c#;asp.net
It looks OK to me, so just a couple of suggestions.

First: given that it is a protected method and not a private one, it can be called by derived classes, which may well live in a different assembly (this is true only if the class is public). Therefore, you shouldn't use optional parameters: they force you to recompile all callers when you change them. Instead, use an overloaded method as follows:

protected List<T> Query<T, TFactory>(string query) where TFactory : IFactory<T>
{
    return Query<T, TFactory>(query, Enumerable.Empty<IDbDataParameter>());
}

protected List<T> Query<T, TFactory>(string query, IEnumerable<IDbDataParameter> parameters) where TFactory : IFactory<T>
{

It also helps you to remove the checking for nulls.

Second: if you return an IEnumerable, the callers will have fewer features. The rule is that you should require the most generic type and return the most specific implementation, so if you have a list you could return a List<>, an IList<> or a Collection. Then callers have more options.

My attempt is this one:

protected List<T> Query<T, TFactory>(string query) where TFactory : IFactory<T>
{
    return Query<T, TFactory>(query, Enumerable.Empty<IDbDataParameter>());
}

protected List<T> Query<T, TFactory>(string query, IEnumerable<IDbDataParameter> parameters) where TFactory : IFactory<T>
{
    var collection = new List<T>();
    var factory = Activator.CreateInstance<TFactory>();
    var connection = (Db == null) ? Connection : Db.Connection; // this line looks odd to me

    using (var manager = new DbCommandManager(connection, query))
    {
        foreach (var parameter in parameters)
            manager.AddParameter(parameter);

        using (var reader = manager.GetReader())
        {
            while (reader.Read())
                collection.Add(factory.CreateTFromReader(reader));
        }
    }

    return collection;
}

Update:

One more thing about optional parameters: when you call the Query method with one argument, you are really passing two - the query and a null in 'parameters'. You should never pass nulls (or at least you should try not to), because if you pass nulls then you need to check for null values. One more thing about the protected modifier: private methods can trust that their parameters won't be null, but protected methods cannot. This is because private members will be called by other methods in the same class, while protected methods can be called by methods in other classes.
_webmaster.57683
I have a site:

https://www.example.com

Also hosted on this site is a language version of that site, which has its own view in Google Analytics:

https://www.example.com/de-de/

My question is: when setting up a goal on the second site, what is the correct goal path to use? Do I need to include the full URL, the /de-de/ part, or simply the file path after this?
Correct URL path for multi-site domain tracking in Google Analytics
google analytics;goal tracking;event tracking
null
_codereview.147935
When reading from the Console, it's very easy to get a complete line: you simply call Console.ReadLine. When reading from a TcpClient there isn't an equivalent function. You can either read a byte at a time, or read a block (up to a maximum size), in which case an arbitrary amount of data will be returned depending on how the network behaved.

In order to simplify line extraction I've written a LineBuffer class. It has an Append method that allows new blocks of data to be added to the buffer. Whenever a complete line is received, the action supplied via the constructor is called.

The LineBuffer class:

```csharp
using System;
using System.Text;

namespace MudCore.Connection
{
    public class LineBuffer
    {
        private readonly Action<string> _onLineFound;
        private readonly StringBuilder _currentLine;

        public LineBuffer(Action<string> onLineFound)
        {
            _onLineFound = onLineFound;
            _currentLine = new StringBuilder();
        }

        public void Append(string input)
        {
            if (input == null) return;

            while (input.Contains("\n"))
            {
                var indexOfNewLine = input.IndexOf('\n');
                var left = input.Substring(0, indexOfNewLine);
                _currentLine.Append(left);
                var line = _currentLine.Replace("\r", "").ToString();
                _currentLine.Clear();

                if (indexOfNewLine != input.Length - 1)
                {
                    input = input.Substring(indexOfNewLine + 1);
                }
                else
                {
                    input = string.Empty;
                }

                _onLineFound.Invoke(line);
            }

            if (!string.IsNullOrEmpty(input))
            {
                _currentLine.Append(input);
            }
        }
    }
}
```

Some unit tests:

```csharp
using System;
using System.Collections.Generic;
using System.Collections;
using NUnit.Framework;
using MudCore.Connection;
using MudCoreTests.Helpers;

namespace MudCoreTests.Connection
{
    [TestFixture]
    public class LineBufferTests
    {
        [Test]
        public void AppendingEmptyStringDoesNothing()
        {
            int callCount = 0;
            LineBuffer buffer = new LineBuffer((extractedLine) => { callCount++; });

            buffer.Append("");

            Assert.AreEqual(0, callCount);
        }

        [Test]
        public void AppendingNullStringDoesNothing()
        {
            int callCount = 0;
            LineBuffer buffer = new LineBuffer((extractedLine) => { callCount++; });

            buffer.Append(null);

            Assert.AreEqual(0, callCount);
        }

        [TestCase("\r\n")]
        [TestCase("\n")]
        public void SingleLineIsExtractedMinusEndOfLine(string endOfLine)
        {
            int callCount = 0;
            string foundLine = String.Empty;
            string lineToAppend = "This is a line";
            LineBuffer buffer = new LineBuffer((extractedLine) =>
            {
                foundLine = extractedLine;
                callCount++;
            });

            buffer.Append(lineToAppend + endOfLine);

            Assert.AreEqual(1, callCount);
            Assert.AreEqual(lineToAppend, foundLine);
        }

        [TestCaseSource("ReceivedBufferTestCases")]
        public void MultipleLinesAreIdentifiedFromMultipleAppends(Queue<string> receivedData, Queue<string> expectedLines, string scenarioName)
        {
            var expectedCount = expectedLines.Count;
            var callCount = 0;
            LineBuffer buffer = new LineBuffer((extractedLine) =>
            {
                var expectedLine = expectedLines.Dequeue();
                Assert.AreEqual(expectedLine, extractedLine, $"Expected: '{expectedLine}' but got '{extractedLine}' during scenario {scenarioName}");
                callCount++;
            });

            while (receivedData.Count > 0)
            {
                buffer.Append(receivedData.Dequeue());
            }

            Assert.AreEqual(expectedCount, callCount, $"Incorrect number of lines extracted, expected {expectedCount}, but was {callCount} during scenario {scenarioName}");
        }

        public static IEnumerable ReceivedBufferTestCases
        {
            get
            {
                yield return new TestCaseData(
                    new Queue<string> { "One\n", "Two\n", "Three\n" },
                    new Queue<string> { "One", "Two", "Three" },
                    "Simple Complete Lines");
                yield return new TestCaseData(
                    new Queue<string> { "One\r\n", "Two\r\n", "Three\r\n" },
                    new Queue<string> { "One", "Two", "Three" },
                    "Simple Complete Lines with \\r\\n");
                yield return new TestCaseData(
                    new Queue<string> { "On", "e\n", "Two\n", "Three\n" },
                    new Queue<string> { "One", "Two", "Three" },
                    "Line split across two buffers");
                yield return new TestCaseData(
                    new Queue<string> { "One\r", "\nT", "wo\n", "Three\n" },
                    new Queue<string> { "One", "Two", "Three" },
                    "Line split cr/lf across two buffers");
                yield return new TestCaseData(
                    new Queue<string> { "One\r\nTwo\nThree\n" },
                    new Queue<string> { "One", "Two", "Three" },
                    "All data from one buffer");
            }
        }
    }
}
```

In the unit tests, I've made use of the collection initializer. Since Queue<T> doesn't support this, I've also created an extension method to make the tests easier to write.

```csharp
using System.Collections.Generic;

namespace MudCoreTests.Helpers
{
    public static class QueueExtensions
    {
        static public void Add<T>(this Queue<T> q, T item)
        {
            q.Enqueue(item);
        }
    }
}
```

Any feedback is welcome. Is there any built-in functionality that does something similar that I haven't come across yet? Is the code readable? Is the extension method a bad idea?

If you need more context for where the class fits, the project is currently used here.
Extracting complete lines from a data stream
c#;unit testing;extension methods
There are some alternatives that are built in. For example, in the simplest form, combine a NetworkStream with a StreamReader:

```csharp
using (var netStream = new NetworkStream(tcpClient.Client))
using (var reader = new StreamReader(netStream))
{
    var line = reader.ReadLine();
}
```

Which is unbuffered; if you want to add buffering in, just use a BufferedStream in the middle:

```csharp
using (var netStream = new NetworkStream(tcpClient.Client))
using (var bufferStream = new BufferedStream(netStream))
using (var reader = new StreamReader(bufferStream))
{
    var line = reader.ReadLine();
}
```

These are pretty high performance because they operate at a lower level. Ideally you'd want to ditch the TcpClient and go direct with a Socket for best performance, but TcpClient.Client gives direct access to the underlying socket.
_cs.43328
Last fall I went on a tour of the Blue Waters supercomputer at the University of Illinois. I asked whether anyone ever used the entire computer. I was told that it was always working on multiple projects. That made me wonder about the usefulness of supercomputers. Perhaps Blue Waters is unusual in that it has to be shared by industry and the university - I don't know. I assume there's some overhead in managing the processors and memory of a single supercomputer. Would it be more cost effective to build smaller computers? Can anyone help me to understand the value of supercomputers? Or is it that sometimes they are dedicated to single projects?
purpose of supercomputers
computer architecture
A typical job on Blue Waters is using about 10% of the machine and consumes a total of 75 node hours. Blue Waters has about 27500 nodes, so that means some of those 75 node hour jobs are running in just a couple of minutes. That allows scientists to use the machine somewhat interactively. (You can see the moving averages here: http://xdmod.ncsa.illinois.edu/#tg_usage:group_by_Jobs_none)

Supercomputers are just large collections of smaller computers. The main reason we collect them together in one place is that we can share the cost most efficiently that way. You are trying to create a computer that can do a lot of work, and for which the total cost of ownership (the total cost of the computer, the power, and the maintenance) is minimized over the lifetime of the computer.

There are several factors involved in the total cost of ownership. The cost of the equipment is one. To minimize the cost of ownership you want the equipment to be doing useful work as large a percentage of the time as possible (ideally 100% of the time; realistically somewhat less, like 95%, would be considered good), until the equipment burns out or becomes obsolete. In contrast, the computer in your laptop or your phone is probably actually in use less than 10% of the time you own it (you are asleep 33% of the time, you are eating and relaxing about half the time you are awake, and even when you are using the computer, the processor is idle most of the time).

The second is the cost of power. There are several parts to this: the first is the cost of the power itself. Part of that cost is consumed in transporting the power from the power plant to the computer. Part of it is lost in the computer's power supply (which is just converting AC power into DC power). A larger AC->DC converter can usually be made more efficient. Additionally, computers turn useful electric power into waste heat, so you also need to pay to remove the heat.
Again, larger air conditioners can usually be made more efficient than multiple small air conditioners.

The third is the cost of maintenance. By putting together a bunch of computers and designing them so that when one goes down the rest keep running, you can amortize the cost of maintenance staff over a much larger number of computer nodes than you could if the nodes were all different and placed in different buildings (or cities).

The details: Blue Waters has 288 cabinets. Each cabinet has 96 nodes. Each node is a pretty normal high-end computer. Most nodes have 2 AMD Opteron 6276 processors running at 2.3GHz, and 64GByte of DRAM. About 1/6 of the nodes instead have a single AMD Opteron 6276, an NVidia K20 GPU, and 38GByte of DRAM. If you wanted, you could buy something similar to a node for about $3000 or $4000 and put it in your living room to play video games. Blue Waters has about 27648 nodes. https://bluewaters.ncsa.illinois.edu/hardware-summary

Each node probably consumes a bit more than 500 Watts, and turns that power into heat. If you had a node in your living room to play video games it wouldn't be a particularly big deal. It would consume some electricity from the wall socket and generate about as much heat as a small personal space heater. In the winter that would be kind of nice and cozy. In the summer you'd have to run your air conditioner more frequently to keep your house comfortable. If you had it running at full power all day every day, your electricity bill would go up considerably, perhaps double what you are consuming now.

But when you put 27648 of them together it consumes about 15 Megawatts, and generates a correspondingly large amount of heat. The true engineering marvel of Blue Waters, like any large data center, is the building itself. It's an enormous refrigerated box. The Blue Waters building is particularly interesting because it is fantastically efficient. About 85% of the power going into the building is actually used to run the nodes.
I believe I read somewhere (can't find it at the moment) that only 15% is lost in power conversion and removing waste heat. That's a lot better than what you'd get from the 500 Watt gaming computer in your living room. You'd probably need a 750 Watt power supply and another couple hundred Watts to run the air conditioner.

TL;DR: Let's put it all together. By putting together thousands of smaller computers and spreading the usage among many, many people, we keep those computers running most of the time, sharing the resources in a very efficient way. It costs a lot of money to give people computers that sit idle most of the time. The best way to save money on computation is to have people share the computers so the computers are busy most of the time.

Blue Waters is much more than just the computers inside it. It is specially designed to be as power efficient as possible. Part of that involves putting it near power plants to reduce power losses in power transmission lines. Here's a satellite picture of the part of Champaign IL containing Blue Waters to demonstrate:
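The power figures above can be sanity-checked with a quick back-of-the-envelope calculation (the node count and ~500 W per node are the numbers quoted above; the 85% figure is the building efficiency mentioned, so total draw is node power divided by 0.85):

```javascript
// Back-of-the-envelope check of the Blue Waters power figures quoted above.
const nodes = 27648;              // node count stated above
const wattsPerNode = 500;         // approximate per-node draw stated above
const facilityEfficiency = 0.85;  // fraction of building power reaching the nodes

const nodePowerMW = (nodes * wattsPerNode) / 1e6; // megawatts consumed by nodes
const totalDrawMW = nodePowerMW / facilityEfficiency; // total building draw

console.log(nodePowerMW.toFixed(1)); // 13.8 — close to the "about 15 MW" figure
console.log(totalDrawMW.toFixed(1)); // 16.3
```

The small gap between 13.8 MW and the quoted "about 15 Megawatts" is consistent with "a bit more than 500 Watts" per node.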
_webmaster.25899
I have more than one (4 to be exact) domains and only one server host with same ip. How I can run all my domains from same server. All websites are wordpress sites.
How to manage multiple domains on same server
domains;apache;htaccess
null
_softwareengineering.107637
I am sure there's a name for this anti-pattern somewhere; however, I am not familiar enough with the anti-pattern literature to know it.

Consider the following scenario: or0 is a member function in a class. For better or worse, it's heavily dependent on class member variables. Programmer A comes along and needs functionality like or0, but rather than calling or0, Programmer A copies and renames the entire class. I'm guessing that she doesn't call or0 because, as I say, it's heavily dependent on member variables for its functionality. Or maybe she's a junior programmer and doesn't know how to call it from other code. So now we've got or0 and c0 (c for copy). I can't completely fault Programmer A for this approach--we all get under tight deadlines and we hack code to get work done.

Several programmers maintain or0, so it's now version orN. c0 is now version cN. Unfortunately most of the programmers that maintained the class containing or0 seemed to be completely unaware of c0--which is one of the strongest arguments I can think of for the wisdom of the DRY principle. And there may also have been independent maintenance of the code in c0. Either way it appears that or0 and c0 were maintained independently of each other. And, joy and happiness, an error is occurring in cN that does not occur in orN.

So I have a few questions:

1.) Is there a name for this anti-pattern? I've seen this happen so often I'd find it hard to believe this is not a named anti-pattern.

2.) I can see a few alternatives:

a.) Fix orN to take parameters that specify the values of all the member variables it needs. Then modify cN to call orN with all of the needed parameters passed in.

b.) Try to manually port fixes from orN to cN. (Mind you, I don't want to do this, but it is a realistic possibility.)

c.) Recopy orN to cN--again, yuck, but I list it for the sake of completeness.

d.) Try to figure out where cN is broken and then repair it independently of orN.

Alternative a seems like the best fix in the long term, but I doubt the customer will let me implement it. Never time or money to fix things right but always time and money to repair the same problem 40 or 50 times, right?

Can anyone suggest other approaches I may not have considered? If you were in my place, which approach would you take? If there are other questions and answers here along these lines, please post links to them. I don't mind removing this question if it's a dupe, but my searching hasn't turned up anything that addresses this question yet.

EDIT: Thanks everyone for all the thoughtful responses. I asked about a name for the anti-pattern so I could research it further on my own. I'm surprised this particular bad coding practice doesn't seem to have a canonical name.
Violation of the DRY Principle
terminology;anti patterns;dry
It is just called duplicate code - I don't know of any fancier name for this. The long-term consequences are as you described, and worse.

Of course, eliminating the duplication is the ideal option, if only possible. It may take a lot of time (in a recent case in our legacy project, I had several methods duplicated across more than 20 subclasses in a class hierarchy, many of which had evolutionarily grown their own slight differences/extensions over the years. It took me about 1.5 years, through successive unit test writing and refactoring, to get rid of all the duplications. Perseverance was worth it, though).

In such a case, you may still need one or more of the other options as temporary fixes, even if you decide to start moving towards eliminating the duplication. However, which of those is better depends on a lot of factors, and without more context we are just guessing.

Lots of small improvements can make a big difference in the long run. You don't necessarily need the customer's explicit approval for these either - a little refactoring every time you touch said class to fix a bug or implement a feature can go a long way over time. Just include some extra time for refactoring in your task estimates. It is just like standard maintenance, to keep the software healthy in the long run.
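The question's alternative (a) - parameterizing the shared routine so the copy can call the original instead of duplicating it - is usually the cleanest exit from this situation. A minimal sketch of the before/after shape (the names and numbers here are hypothetical, not from the asker's code base):

```javascript
// After the refactoring: one shared routine that takes the state it needs
// as parameters, so fixes made here reach every caller.
function computeTotal(rate, quantity, discount) {
  return rate * quantity * (1 - discount);
}

// The original class and its copy both delegate instead of each carrying
// their own drifting implementation.
class Original {
  constructor() { this.rate = 10; this.quantity = 3; this.discount = 0.5; }
  total() { return computeTotal(this.rate, this.quantity, this.discount); }
}

class Copy {
  constructor() { this.rate = 7; this.quantity = 2; this.discount = 0; }
  total() { return computeTotal(this.rate, this.quantity, this.discount); }
}

console.log(new Original().total()); // 15
console.log(new Copy().total());     // 14
```

A bug fixed in computeTotal now cannot diverge between the two classes, which is exactly the property the duplicated version lost.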
_softwareengineering.275999
I'm currently working on a project that I came into several years after it was built. The code is mostly procedural, with a few objects that act more like buckets of functions than anything else. I want to start fixing it up by consolidating the database access and external API calls into domain models.

I have the general idea down, but I don't know how to handle getting lists of my data when I have multiple filters. Methods like $HelpRequestMapper->getById($id) work fine, but what do I do when the user wants to apply multiple filters?

Should I have methods for each possible case? I.e., $HelpRequestMapper->getByCompanyAndUserAssignedToAndQueueAndStatusAndPriorityAndSearchString($company_id, $user_id, $queue_id, $status, $priority, $search_string). This seems like it would get unwieldy very fast and not be much better than what we have now.

Should I pass in an array/object and build a query from it, using something like $HelpRequestMapper->getMultiple($array_of_options) that does some magical query-building stuff to get the data I want?

Should I do something else?

Thanks
designing domain model that can handle large number of data filters?
design patterns;object oriented design
It depends a bit on the implementation of the filter, but you could look into the Decorator Pattern and the Strategy Pattern and see if they are of use to you.

You could solve this by creating an abstract Filter class with a filter method that takes a list of objects and returns a list of objects. Each type of filter (CompanyFilter, UserAssignedToFilter, QueueFilter, ...) would be a separate class that extends Filter. When creating the filter you would pass the filter criteria in the constructor, and the implementation of the filter method would take the objects passed to it and only return the ones matching the filter.

You could then have a RequestQuery that you can push filters onto, which fetches the data, runs the data through all the filters, and returns the result.

Quick demo in C# (the filters take and return IEnumerable<MyObject> so that they compose and the whole chain type-checks):

```csharp
using System.Collections.Generic;
using System.Linq;

public class MyObject
{
    public string Company { get; set; }
    public string UserAssignedTo { get; set; }
}

public abstract class Filter
{
    public abstract IEnumerable<MyObject> Apply(IEnumerable<MyObject> list);
}

public class CompanyFilter : Filter
{
    private readonly string _companyToFilter;

    public CompanyFilter(string companyToFilter)
    {
        _companyToFilter = companyToFilter;
    }

    public override IEnumerable<MyObject> Apply(IEnumerable<MyObject> list)
    {
        return list.Where(x => x.Company == _companyToFilter);
    }
}

public class UserAssignedToFilter : Filter
{
    private readonly string _userToFilter;

    public UserAssignedToFilter(string userToFilter)
    {
        _userToFilter = userToFilter;
    }

    public override IEnumerable<MyObject> Apply(IEnumerable<MyObject> list)
    {
        return list.Where(x => x.UserAssignedTo == _userToFilter);
    }
}

public class RequestQuery
{
    public IEnumerable<MyObject> FindData(IList<Filter> filtersToApply)
    {
        IEnumerable<MyObject> data = null; // fetch data somewhere

        foreach (var filter in filtersToApply)
        {
            data = filter.Apply(data);
        }

        return data;
    }
}
```
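To show how the strategy objects compose at the call site, here is the same idea sketched in JavaScript (a conceptual translation of the C# demo, with hypothetical sample data - not code from the project under review):

```javascript
// Each filter is a small strategy object exposing apply(list).
const companyFilter = (company) => ({
  apply: (list) => list.filter((x) => x.company === company),
});

const userAssignedToFilter = (user) => ({
  apply: (list) => list.filter((x) => x.userAssignedTo === user),
});

// The query runs the data through every filter in turn; only the
// filters the user actually selected are pushed onto the list.
function findData(data, filters) {
  return filters.reduce((acc, f) => f.apply(acc), data);
}

const requests = [
  { company: 'acme',   userAssignedTo: 'ann' },
  { company: 'acme',   userAssignedTo: 'bob' },
  { company: 'globex', userAssignedTo: 'ann' },
];

const result = findData(requests, [companyFilter('acme'), userAssignedToFilter('ann')]);
console.log(result); // [ { company: 'acme', userAssignedTo: 'ann' } ]
```

This sidesteps the getByXAndYAndZ method explosion: each new criterion is one new filter class, not a new method per combination.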
_cogsci.4416
This question is the third in a series, after: 1. Improved Typing as a result of slight movement 2. Neural Processes of Inducing FlowBackground:Pseudo-random, 'swaying', motion appears to help induce a flow state, in that it 'captures' the movements of an activity and entrains them to an underlying rhythm of activity. Time between activity-related movements is reduced, and error-recovery time is also reduced. (again, this is from first-hand observation, not published results).The following is the relevant explanatory section of the 2nd question's answer:Seen from this perspective, cyclical movements are the norm for almost any animal, whereas short-duration, single-use movements like typing or playing the piano are rather unusual. It could be the case that if the motor cortex (or even the basic movements encoded in the spinal cord) is inherently tuned to modeling cyclic movements, then adding some continuous motion could help the motor cortex capture the intended typing or playing movements as part of the larger, continuous movement.The last question I have is if the frequency of the periodic movements, which in general follow a 'figure-8' track, could be related to the frequency of other cycles in the body? For example, could the frequency of motion reflect the active frequency of the brain, e.g. EEG waves? Or could it reflect a different neural/physiological cycle or state?Naturally this is a tricky question: on one hand, the activity itself serves to entrain the neural system to a certain periodicity. On the other hand, if the rhythmic movement activity is allowed to 'float' in frequency organically, does its characteristics reflect anything about the initial state of the neural system?Final question(s):Does the frequency of movement give clues to other periodic cycles, such as EEG waves?Does said movement give any other insight into the brain/body state?
Derived knowledge from periodicity of harmonic motion?
motor;neurology
null
_cs.43536
I need to select a given number of nodes from a weighted directed graph such that the nodes selected are the closest to a given starting node. This seems like a common problem to need to solve, but I haven't found much material on it. Does anyone know if this particular problem has a name, or any published material (formal or informal) related to it?Note, I am not asking for a solution to the problem itself. I am interested in finding material related to the problem. For very large graphs, the canonical Dijkstra algorithm will not work because you must enqueue every node first. I believe a modified version of the Uniform Cost Search will do what I want, since it does not enqueue all the nodes first. Things get more interesting in this case, because you don't have full knowledge of all shortest paths. I'm interested in things like how do you know when you've found all the shortest paths for the closest nodes, without traversing the entire graph in the process. I think this is possible using an admissible heuristic as in the A* search. These are the kinds of details I want to be able to read up on.
Select the n closest nodes from a starting node in a weighted directed graph
algorithms;graphs;search algorithms
Question 1. "For very large graphs, the canonical Dijkstra algorithm will not work because you must enqueue every node first."

Yes, you are right about the disadvantage of the canonical Dijkstra's algorithm. However, it can be easily adapted (wiki): the algorithm can start with a priority queue that contains only one item, and insert new items as they are discovered (instead of doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise insert it). See the paper for details (including experiments) on implementing Dijkstra's algorithm with such a modified priority queue.

Question 2. "I'm interested in things like how do you know when you've found all the shortest paths for the closest nodes, without traversing the entire graph in the process."

Dijkstra's algorithm finds the shortest paths from a starting node in order of increasing cost. So, the first $n$ vertices removed (delete-min) from the priority queue are the $n$ closest nodes from the starting node that you are looking for.
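The lazy-insertion variant described above can be sketched as follows: keep popping the cheapest entry, skip stale duplicates, and stop as soon as $n$ distinct nodes have been settled (a minimal sketch - the sorted-array "queue" stands in for a real binary heap, which you would want for large graphs):

```javascript
// n closest nodes from `source` in a weighted directed graph.
// graph: { node: [[neighbor, weight], ...] }
function nClosest(graph, source, n) {
  const dist = new Map([[source, 0]]);
  const settled = [];
  const done = new Set();
  const queue = [[0, source]]; // [distance, node]; lazy inserts, no decrease-key

  while (queue.length > 0 && settled.length < n) {
    queue.sort((a, b) => a[0] - b[0]); // stand-in for heap delete-min
    const [d, u] = queue.shift();
    if (done.has(u)) continue;         // stale entry from a lazy insert; skip
    done.add(u);
    settled.push([u, d]);              // u is the next-closest node
    for (const [v, w] of graph[u] || []) {
      const nd = d + w;
      if (!dist.has(v) || nd < dist.get(v)) {
        dist.set(v, nd);
        queue.push([nd, v]);           // insert again instead of decrease-key
      }
    }
  }
  return settled; // [[node, distance], ...] in increasing distance order
}

const g = {
  s: [['a', 1], ['b', 4]],
  a: [['b', 2], ['c', 6]],
  b: [['c', 3]],
  c: [],
};
console.log(nClosest(g, 's', 3)); // [['s', 0], ['a', 1], ['b', 3]]
```

Note that node c is never settled: the search stops after the third delete-min, which is exactly the early-termination property the question asks about.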
_codereview.102325
I have a piece of code where I have string pairs (e.g. 'a' and 'AA'), where both have their own unique ids. The id for the first object is configured by the user, and the second id is received at some point during runtime.

Example code: http://repl.it/BE5N

Is there a smarter way to load/save the second ids from/to a file? It feels somewhat unnecessary to loop through all configs every time a new id is found (even if it doesn't happen that often). And if I added completely new pair-configs during runtime, how should I add them to the user's config file while still keeping it human readable?

Initial setup:

```javascript
var fs = require('fs');

// config pair for first and second
function config(first, second, first_id) {
    this.first = first;
    this.second = second;
    // use name as an id for first if none is given
    this.first_id = first_id ? first_id : first;
}

var lookup = function (type, name, arr) {
    return arr.filter(function (obj) {
        return obj[type] === name;
    })[0];
};

// initial config file (few lines out of many)
// user doesn't know second ids
configs = [
    new config('a', 'AA', 'a_id'),
    new config('b', 'BB'),
    new config('c', 'C'),
    new config('d', 'D', 'd_id'),
    new config('e', 'EEE')
];

// read ids for seconds from file if it exists
// read json from file
input_json = {"AA": 123, "C": 321, "EEE": 456};

for (var i = 0; i < configs.length; ++i) {
    var key = configs[i].second;
    if (key in input_json) {
        configs[i].second_id = input_json[key];
    }
}
```

While the program is running and new ids are found:

```javascript
// add new id for 'BB'
BB_config = lookup('second', 'BB', configs);
BB_config.second_id = 654;

// write edited json to file
var output_json = {};
for (var i = 0; i < configs.length; ++i) {
    if (configs[i].second_id) {
        output_json[configs[i].second] = configs[i].second_id;
    }
}
console.log(JSON.stringify(output_json));
```
Saving and loading parts of configuration
javascript;node.js;json;file
null
_unix.96113
Is Gparted Live unable to convert a primary partition to a logical partition?i.e. to make an extended partition and make that primary partition a logical partition within the newly created extended partition? (or to move a primary partition to an existing extended partition)What GUI alternative is there to Gparted Live that can?I don't see any option here e.g. in the partition menu, for converting a primary to a logical.
Live boot GUI convert a primary partition to a logical partition?
partition;gparted
null