id | question | title | tags | accepted_answer |
---|---|---|---|---|
_unix.241480 | Assuming Linux, would it be possible to implement a TCP/IP stack in the user context (vs. kernel)? How would you do it? What would be pros and cons of such an implementation, as compared to the conventional implementations where the stack resides in the kernel? | TCP/IP stack in the user context vs. kernel Linux | networking | null |
_codereview.112277 | I am building a Rails marketplace application using TDD. I would like to get advice on the way in which I have built the User associated Profile associations and the way in which these are then handled in the controllers.User modelclass User < ActiveRecord::Base has_one :profile, dependent: :destroy has_many :listings, dependent: :destroy has_many :watches, dependent: :destroy has_many :watched_listings, -> { uniq }, :through => :watches, dependent: :destroy # Include default devise modules. Others available are: acts_as_messageable def mailboxer_email(object) email end def self.search(search) where("email ILIKE ?", "%#{search}%") end after_create :build_profileendProfile modelclass Profile < ActiveRecord::Base belongs_to :user has_attached_file :avatar, :styles => { :medium => "300x300>", :thumb => "100x100>" }, :default_url => "/images/:style/missing.png" validates_attachment_content_type :avatar, :content_type => /\Aimage\/.*\Z/endUsers controllerclass UsersController < ApplicationController def index @users = User.all if params[:search] @users = User.search(params[:search]).order("created_at DESC") else @users = User.all.order('created_at DESC') end end def show @user = User.find(params[:id]) end def watchlist @watched_listings = current_user.watched_listings.all endendProfiles controllerclass ProfilesController < ApplicationController before_filter :authenticate_user!, :only [:edit, :update] before_filter :correct_user, :only [:edit, :update] def show @profile = Profile.find_by(user_id: params[:user_id]) end def edit @profile = Profile.find_by user_id: current_user.id end def update @profile = Profile.find_by user_id: current_user.id if @profile.update(profile_params) flash[:notices] = ["Your profile was successfully updated"] render 'show' else flash[:notices] = ["Your profile could not be updated"] render 'edit' end end private def profile_params params.require(:profile).permit(:city, :country, :avatar) end def correct_user @profile = 
Profile.find_by(user_id: params[:user_id]) redirect_to(root_path) unless current_user?(@profile) endendI have also defined:Profiles helpermodule ProfilesHelper def current_user?(user) user == current_user endendWithin my Application Controller:class ApplicationController < ActionController::Base protect_from_forgery with: :exception def after_sign_in_path_for(resource) edit_user_profile_path(resource) end def after_sign_up_path_for(resource) edit_user_profile_path(resource) endendI'm very keen to get any feedback on:If the use of the after_create :build_profile is appropriate. Does this violate SRP principles by making the user model responsible for the creation of a profile?The best way to validate the correct user in the profile controller - to enforce whether a user can edit a profile or not. Mainly whether it is required that I be finding profiles by Profile.find_by(user_id: params[:user_id]) rather than by a Profile ID.Any other improvements to the code!I have RSpec and Capybara tests to back this here. | Managing users and profiles in Rails | beginner;ruby;ruby on rails;active record;helper | Here's my thoughts.Profile ModelI'm not sure that the separate profile is pulling enough weight to justify its existence. It seems that a lot of this code could be avoided by collapsing it into the User model. I appreciate that the User and Profile can be two separate entities but it looks like it'll be much easier to keep them in one model until they grow too large to be viable, or there comes a time when a Profile has a separate lifecycle from a User.Which then avoids your question around SRP altogether :)But if we were to keep them separate, I don't think it would be a practical violation of SRP because your application likely depends on a profile always existing for a user. So having asserted that User should always have a Profile, you are then faced with the choice of leaving the creation in the hands of the controller, or the model. 
I favour leaving it in the hands of the model because the model is responsible for modelling the relationships between different business objects.Finding the Profile and Validating AccessYou could replace the find methods with a before_action that uses the user's relationship with the profile like so:before_action :get_profile, only: [:edit, :update]def get_profile @profile = current_user.profileendThis will both reduce the amount of code required to select the profile, and ensure that the user can only access their profile to edit/update it.However, at that point it doesn't make sense to have the profile edit and update actions available with a user ID. They're essentially now singular resources and should be handled accordingly in your routes.You ask whether find by user_id or not is a good choice. It depends on whether the person accessing that route will be accessing it from the context of its User, or the Profile owner.current_user?I don't believe this code will work. You are passing it only Profile objects, and comparing it to the current_user method which returns a User object (or null if not logged in, if memory serves). This should always return false. You either need to call profile.user before passing it to the method, or have that method make the call. |
_codereview.82906 | I haven't really programmed in C++ for about a year, and realised that I should get back into it, and tried my abilities out by remaking the STD vector class. However, my C++ is a bit rusty at the moment, and was wondering if I have made many mistakes in my implementation.# ifndef VECTOR_H# define VECTOR_H# include <memory>namespace test { template <class T, class A> class vector; template <class A> class vector_alloc_types { public: typedef typename A::value_type value_type; typedef typename A::size_type size_type; typedef typename A::reference reference; typedef typename A::const_reference const_reference; typedef typename A::pointer iterator; typedef typename A::const_pointer const_iterator; typedef typename A::const_pointer const_pointer; typedef A allocator_type; }; template <class T, class A> class vector_base { friend class vector<T, A>; public: typedef typename A::pointer pointer; vector_base() { vm_begin = pointer(); value_end = pointer(); memory_end = pointer(); } private: pointer vm_begin, value_end, memory_end; }; template <class T, class A = std::allocator<T> > class vector : public vector_alloc_types<A>, public vector_base<T, A> { public: typedef vector<T, A> my_T; typedef vector_base<T, A> my_base; vector() : my_base() { } vector(my_T const &rhs) { if (allocate(rhs.size())) { try { this->value_end = std::uninitialized_copy(rhs.vm_begin, rhs.value_end, this->vm_begin); } catch (...) { kill(); throw; } } } vector(pointer first, pointer last) : my_base() { if (allocate(std::distance(first, last))) { try { this->value_end = std::uninitialized_copy(first, last, this->vm_begin); } catch (...) { kill(); throw; } } } template<size_type sz> vector(T arr[sz]) { if (allocate(sz)) { try { this->value_end = std::uninitialized_copy_n(arr, sz, this->vm_begin); } catch (...) { kill(); throw; } } } vector(size_type sz) { if (allocate(sz)) { try { this->value_end = std::uninitialized_fill_n(this->vm_begin, sz, T()); } catch (...) 
{ kill(); throw; } } } vector(size_type sz, T const &val) { if (allocate(sz)) { try { this->value_end = std::uninitialized_fill_n(this->vm_begin, sz, val); } catch (...) { kill(); throw; } } } ~vector() { kill(); } void operator=(my_T &rhs) { if (this != &rhs) { assign(rhs); } } template<unsigned sz> void operator=(T (&arr)[sz]) { assign(&arr[0], &arr[sz]); } void clear() { wipe(this->vm_begin, this->value_end); } iterator begin() { return (this->vm_begin); } const_iterator cbegin() const { return ((const_iterator)this->vm_begin); } iterator end() { return (this->value_end); } const_iterator cend() const { return ((const_iterator)this->value_end); } void swap(my_T &rhs) { std::swap(this->vm_begin, rhs.vm_begin); std::swap(this->value_end, rhs.value_end); std::swap(this->memory_end, rhs.memory_end); } void shrink_to_fit() { if (has_spare_capacity()) { my_T tmp(*this); swap(tmp); } } void erase(iterator iter) { pointer tmp; if (iter == (this->value_end - 1)) { pop_back(); } else if (iter == this->vm_begin) { pop_front(); } else if (iterator_in_range(iter)) { tmp = allocator_type().allocate(size() - 1); pointer tmp2 = std::uninitialized_copy(this->vm_begin, iter, tmp); tmp2 = std::uninitialized_copy((iter + 1), this->value_end, tmp2); assign(tmp, tmp2); } } void erase(iterator first, iterator last) { if (last == (this->value_end - 1)) { while (last-- != first) { pop_back(); } pop_back(); } else if (first == this->vm_begin) { while (first++ != last) { pop_front(); } pop_front(); } else if (iterator_in_range(first) && iterator_in_range(last)) { pointer tmp = allocator_type().allocate(size() - std::distance(first, last)); pointer tmp2 = std::uninitialized_copy(this->vm_begin, first, tmp); tmp2 = std::uninitialized_copy((last + 1), this->value_end, tmp2); assign(tmp, tmp2); } } void insert(iterator iter, T const &val) { T v1 = val; if (iter == this->vm_begin) { push_front(v1); } else if (iter == (this->value_end - 1)) { push_back(v1); } else if (iterator_in_range(iter)) { 
allocator_type alloc; pointer tmp = alloc.allocate(realloc_size(size() + 1)); pointer tmp2 = std::uninitialized_copy(this->vm_begin, iter, tmp); alloc.construct(tmp2++, v1); tmp2 = std::uninitialized_copy(iter, this->value_end, tmp2); assign(tmp, tmp2); } } void insert(iterator iter, int count, T const &val) { T v1 = val; if (iter == this->vm_begin) { while (count--) { push_front(v1); } } else if (iter == (this->value_end - 1)) { while (count--) { push_back(v1); } } else if (iterator_in_range(iter)) { allocator_type alloc; pointer tmp = alloc.allocate(realloc_size(size() + 1)); pointer tmp2 = std::uninitialized_copy(this->vm_begin, iter, tmp); while (count--) { alloc.construct(tmp2++, v1); } tmp2 = std::uninitialized_copy(iter, this->value_end, tmp2); assign(tmp, tmp2); } } void insert(iterator iter, iterator first, iterator last) { if (iter == this->vm_begin) { while (first != last) { push_front(*(first++)); } } else if (iter == (this->value_end - 1)) { while (first != last) { push_back(*(first++)); } } else if (iterator_in_range(iter)) { allocator_type alloc; pointer tmp = alloc.allocate(realloc_size(size() + 1)); pointer tmp2 = std::uninitialized_copy(this->vm_begin, iter, tmp); while (first != last) { alloc.construct(tmp2++, *(first++)); } tmp2 = std::uninitialized_copy(iter, this->value_end, tmp2); assign(tmp, tmp2); } } void push_back(T const &val) { allocator_type alloc; T v1 = val; if (has_spare_capacity()) { this->value_end++; alloc.construct(this->value_end - 1, v1); } else if (reallocate(realloc_size(size() + 1))) { this->value_end++; alloc.construct(this->value_end - 1, v1); } } void pop_back() { pointer tmp = allocator_type().allocate(size() - 1); pointer tmp2 = std::uninitialized_copy(this->vm_begin, this->value_end - 1, tmp); assign(tmp, tmp2); } void push_front(T const &val) { allocator_type alloc; T v1 = val; vector tmp; tmp.push_back(v1); for (iterator it = this->vm_begin; it != this->value_end; ++it) { tmp.push_back(*it); } swap(tmp); } void 
pop_front() { pointer tmp = allocator_type().allocate(size() - 1); pointer tmp2 = std::uninitialized_copy(this->vm_begin + 1, this->value_end, tmp); assign(tmp, tmp2); } size_type size() const { return (this->value_end - this->vm_begin); } reference operator[](size_type pos) { return (*(this->vm_begin + pos)); } const_reference operator[](size_type pos) const { return (*(this->vm_begin + pos)); } private: bool allocate(size_type sz) { allocator_type alloc; if (alloc.max_size() > sz) { try { this->vm_begin = alloc.allocate(sz); this->value_end = this->vm_begin; this->memory_end = this->vm_begin + sz; } catch (...) { kill(); throw; } return (true); } return (false); } bool reallocate(size_type sz) { allocator_type alloc; if (alloc.max_size() > sz) { try { pointer nbegin, nvend, nmend; nbegin = alloc.allocate(sz); nmend = nbegin + sz; nvend = std::uninitialized_copy(this->vm_begin, this->value_end, nbegin); this->vm_begin = nbegin; this->value_end = nvend; this->memory_end = nmend; } catch (...) { kill(); throw; } return (true); } return (false); } size_type realloc_size(size_type sz) const { return ((allocator_type().max_size() > (sz * 1.5)) ? 
(sz * 1.5) : sz); } bool iterator_in_range(const_iterator iter) const { return (iter >= this->vm_begin && iter < this->value_end); } void assign(my_T &rhs) { assign(rhs.vm_begin, rhs.value_end); } void assign(pointer first, pointer last) { size_type sz = std::distance(first, last); if (sz > capacity()) { if (reallocate(realloc_size(sz))) { wipe(this->vm_begin, this->value_end); this->value_end = std::uninitialized_copy(first, last, this->vm_begin); } } else { wipe(this->vm_begin, this->value_end); this->value_end = std::uninitialized_copy(first, last, this->vm_begin); } } void wipe(pointer ptr) { allocator_type().destroy(ptr); } void wipe(pointer first, pointer last) { allocator_type alloc; while (first != last) { alloc.destroy(first++); } } void wipe(pointer ptr, size_type dist) { allocator_type alloc; while (dist-- != 0) { alloc.destroy(ptr); } } void kill() { if (this->vm_begin != pointer()) { wipe(this->vm_begin, this->value_end); allocator_type().deallocate(this->vm_begin, capacity()); } } inline size_type capacity() const { return (this->memory_end - this->vm_begin); } inline size_type spare_capacity() const { return (this->memory_end - this->value_end); } bool has_spare_capacity() const { return (spare_capacity() > 0); } }; }# endif | C++ vector implementation errors | c++;reinventing the wheel;vectors | null |
_codereview.123905 | I'm interested in receiving some feedback regarding an Event system that I wrote. Both in style and implementation, but also in overall design and the design decisions that I made.It is intended to be used as part of a game (hobby project of mine) and is intended to be used on a single thread. While receivers will always receive events on the service's owning thread, I may add the ability for other threads to send events in the future. It is intended to have very low overhead in regards to execution time and scale well.Here it is:EventService.h#pragma once#include "Core\ITimerService.h"#include "Events\EventListener.h"#include "Helpers\NonCopyable.h"#include <thread>#include <chrono>#include <functional>#include <typeindex>#include <unordered_map>#include <memory>namespace quasar{ // The EventService provides a service for objects to // register an interest in receiving events, which are arbitrarily typed // data, and to send those events. Events are guaranteed to be // executed at a known point of execution but in an undefined order. // // It is currently not thread safe. 
// class EventService : public NonCopyable { public: template <typename EventT> using OnEventSignature = void(const EventT& evnt); // Constructor // Initializes the EventService with the calling thread as the owning thread // param timerService The timer service to use for timed events EventService(ITimerService* timerService); // Constructor // Initializes the EventService // param timerService The timer service to use for timed events // param owningThread The thread that owns this event service EventService(ITimerService* timerService, const std::thread::id& owningThread); // Queues up an event for execution // param evnt the event to queue up template <typename EventT> void send(EventT&& evnt); // Queues up an event for execution after a delay // param evnt the event to queue up // param delay the time delay until the event should be // executed, guaranteed to wait at least // for the delay and execute on the very // next executeEventQueue call afterwards template <typename EventT> void send(EventT&& evnt, std::chrono::duration<std::chrono::steady_clock> delay); // Registers a Listener interested in handling events // tparam EventT the exact type of event that the listener // is interested in. // tparam ListenerT the (exact) type of listener to register, // will tie the ListenerT's // void onEvent(const EventT&) method as the // trigger function. // param owner the listener, the lifetime of the registration // will be tied to the lifetime of the listener template <typename EventT, typename ListenerT> void registerListener(ListenerT& owner); // Registers a Listener interested in handling events // tparam EventT the exact type of event that the listener // is interested in. 
// param owner the listener, the lifetime of the registration // will be tied to the lifetime of the listener // param onEventFunction the function to invoke when the expected // event is executed template <typename EventT> void registerListener(EventListener& owner, const std::function<OnEventSignature<EventT>>& onEventFunction); // Informs the service that a listener is no longer // interested in receiving events // param handle the handle of the registration // to unregister // returns true if the listener was removed, false if it was // not registered bool unregisterListener(const RegisteredEventListenerHandle& handle); // Executes all events in the event queue, // as well as all delayed events whose time // to execute has come void executeEventQueue(); private: class IEventTypeNode; template <typename EventT> class EventNode; template <typename EventT> EventNode<EventT>& getEventNode(); ITimerService* timerService; std::thread::id owningThread; std::unordered_map<std::type_index, std::unique_ptr<IEventTypeNode> > listenerMap; uint32 lastListenerId; };}#include "Events\EventService.hpp"EventService.hpp#pragma once#include "Events\EventService.h"#include <queue>#include <utility>namespace quasar{ class EventService::IEventTypeNode { public: virtual ~IEventTypeNode() = default; template <typename EventT> EventNode<EventT>& getAsEventTypeNode() {#ifdef _DEBUG auto ptr = dynamic_cast<EventNode<EventT>*>(this); assert(ptr != nullptr); return *ptr;#else return *static_cast<EventNode<EventT>*>(this);#endif // _DEBUG } virtual void removeListenerRegistration(uint32 id) = 0; virtual void executeEventQueue(const std::chrono::time_point<std::chrono::steady_clock> currentTime) = 0; }; template <typename EventT> class EventService::EventNode : public EventService::IEventTypeNode { public: using OnEventFuncT = std::function<EventService::OnEventSignature<EventT>>; void queueUp(EventT&& evnt) { this->eventQueue.push_back(evnt); } void queueUp(EventT&& evnt, 
std::chrono::time_point<std::chrono::steady_clock> triggerPoint) { this->timedEventQueue.push(std::make_pair(std::forward(evnt), triggerPoint)); } void addListenerRegistration(uint32 id, const OnEventFuncT& onEventFunc) { assert(this->onEventFunctions.find(id) == this->onEventFunctions.end()); this->onEventFunctions[id] = onEventFunc; } void addListenerRegistration(uint32 id, OnEventFuncT&& onEventFunc) { assert(this->onEventFunctions.find(id) == this->onEventFunctions.end()); this->onEventFunctions[id] = onEventFunc; } void removeListenerRegistration(uint32 id) override { assert(this->onEventFunctions.find(id) != this->onEventFunctions.end()); this->onEventFunctions.erase(this->onEventFunctions.find(id)); } void executeEventQueue(const std::chrono::time_point<std::chrono::steady_clock> currentTime) override { for (const auto& function : this->onEventFunctions) { for (const EventT& evnt : this->eventQueue) { function.second(evnt); } } this->eventQueue.clear(); while (!this->timedEventQueue.empty()) { const TimedEventT& timedEvent = this->timedEventQueue.top(); if (currentTime >= timedEvent.second) { for (const auto& function : this->onEventFunctions) { function.second(timedEvent.first); } this->timedEventQueue.pop(); } else { break; } } } private: using TimedEventT = std::pair<EventT, std::chrono::time_point<std::chrono::steady_clock>>; struct IsScheduledEarlier { bool operator()(const TimedEventT& left, const TimedEventT& right) const { return left.second < right.second; } }; std::vector<EventT> eventQueue; std::priority_queue<TimedEventT, std::vector<TimedEventT>, IsScheduledEarlier> timedEventQueue; std::unordered_map<uint32, OnEventFuncT> onEventFunctions; }; template <typename EventT> EventService::EventNode<EventT>& EventService::getEventNode() { std::type_index index(typeid(EventT)); auto found = this->listenerMap.find(index); if (found == this->listenerMap.end()) { auto newNode = std::make_unique<EventNode<EventT>>(); this->listenerMap[index] = 
std::move(newNode); found = this->listenerMap.find(index); } return found->second->getAsEventTypeNode<EventT>(); } template <typename EventT> void EventService::send(EventT&& evnt) { assert(std::this_thread::get_id() == this->owningThread); this->getEventNode<EventT>().queueUp(std::forward<EventT>(evnt)); } template <typename EventT> void EventService::send(EventT&& evnt, std::chrono::duration<std::chrono::steady_clock> delay) { assert(std::this_thread::get_id() == this->owningThread); this->getEventNode<EventT>().queueUp(std::forward<EventT>(evnt), this->timerService->timestamp() + delay); } template <typename EventT, typename ListenerT> void EventService::registerListener(ListenerT& owner) { using OnEventMemberType = void(ListenerT::*)(const EventT&); this->registerListener<EventT>(owner, std::bind(static_cast<OnEventMemberType>(&ListenerT::onEvent), &owner, std::placeholders::_1)); } template <typename EventT> void EventService::registerListener(EventListener& owner, const std::function<OnEventSignature<EventT>>& onEventFunction) { assert(std::this_thread::get_id() == this->owningThread); uint32 id = lastListenerId++; this->getEventNode<EventT>().addListenerRegistration(id, onEventFunction); owner.addRegistrationHandle(RegisteredEventListenerHandle(typeid(EventT), this, id)); }}EventService.cpp#include Events\EventService.hnamespace quasar{ EventService::EventService(ITimerService* timerService) : EventService(timerService, std::this_thread::get_id()) { } EventService::EventService(ITimerService* timerService, const std::thread::id& owningThread) : timerService(timerService) , owningThread(owningThread) , lastListenerId(1) { } bool EventService::unregisterListener(const RegisteredEventListenerHandle& handle) { auto found = this->listenerMap.find(handle.typeIndex); if (found != this->listenerMap.end()) { found->second->removeListenerRegistration(handle.id); return true; } return false; } void EventService::executeEventQueue() { auto timestamp = 
this->timerService->timestamp(); for (const auto& node : this->listenerMap) { node.second->executeEventQueue(timestamp); } }}EventListener.h#pragma once#include "Core\NumTypes.h"#include "Helpers/NonCopyable.h"#include <vector>#include <typeindex>namespace quasar{ class EventService; // Handle that identifies a unique event listener registration struct RegisteredEventListenerHandle { RegisteredEventListenerHandle(const std::type_index& typeIndex, EventService* eventService, uint32 id); std::type_index typeIndex; EventService* eventService; uint32 id; }; // // Base class for objects capable of listening for events // Handles lifetime of listener registration // class EventListener : public NonCopyable { public: virtual ~EventListener(); // Ties the lifetime of a listener registration to this listener // param handle the handle of the registration void addRegistrationHandle(const RegisteredEventListenerHandle& handle); private: std::vector<RegisteredEventListenerHandle> handles; };}EventListener.cpp#include "Events\EventListener.h"#include "Events\EventService.h"namespace quasar{ RegisteredEventListenerHandle::RegisteredEventListenerHandle(const std::type_index& typeIndex, EventService* eventService, uint32 id) : typeIndex(typeIndex) , eventService(eventService) , id(id) { } EventListener::~EventListener() { for (auto& handle : this->handles) { handle.eventService->unregisterListener(handle); } } void EventListener::addRegistrationHandle(const RegisteredEventListenerHandle& handle) { this->handles.push_back(handle); }} | Event Service in c++ | c++;event handling | null |
_unix.57590 | I'm trying to append the current date to the end of a file name like this:TheFile.log.2012-02-11Here is what I have so far:set today = 'date +%Y'mkdir -p The_Logs &find . -name The_Logs -atime -1 -type d -exec mv \{} The_Logs_+$today \; &However all I get is the name of the file, and it appends nothing. How do I append a current date to a filename? | Appending a current date from a variable to a filename | bash;shell;rename;date | More than likely it is your use of set. That will assign 'today', '=' and the output of the date program to positional parameters (aka command-line arguments). Unless you want to just use C shell (which, since you are tagging this as bash, is likely not the case), you will want to use:today=`date +%Y-%m-%d.%H:%M:%S` # or whatever pattern you desireNotice the lack of spaces around the equal sign.You also do not want to use & at the end of your statements, which causes the shell to not wait for the command to finish. Especially when one relies on the next. The find command could fail because it is started before the mkdir. |
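A runnable sketch of the accepted answer's fix, reusing the question's (hypothetical) directory and file names; it assigns the date via command substitution, drops the trailing `&`, and renames the file directly with `mv` as a simplification of the question's `find` invocation:

```shell
# No spaces around '=' and command substitution instead of csh-style 'set';
# no trailing '&', so mkdir finishes before the file is created and renamed.
today=$(date +%Y-%m-%d)
mkdir -p The_Logs
touch The_Logs/TheFile.log
mv The_Logs/TheFile.log "The_Logs/TheFile.log.$today"
ls The_Logs
```

Running this leaves a single file named like `TheFile.log.2012-02-11` inside `The_Logs`.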
_unix.309786 | When I paste into my terminal session the shell immediately executes the command without me pressing the enter key.I really don't know how to disable that behaviour. I'm using the preinstalled terminal on MacOS Yosemite. | Disable default Copy&Paste behaviour in Bash | bash;terminal;osx | null |
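The row above records no accepted answer; one mitigation I'm aware of (an assumption on my part, not taken from this dataset) is readline's bracketed-paste mode, which makes bash insert pasted text literally instead of executing embedded newlines (bash 4.4+ / readline 7.0+). The snippet writes the config line to a demo file rather than the real `~/.inputrc`:

```shell
# Write the readline setting to a demo file instead of the real ~/.inputrc.
# With this line in ~/.inputrc, pasted text is inserted as-is and nothing
# runs until Enter is pressed.
printf 'set enable-bracketed-paste on\n' > demo_inputrc
cat demo_inputrc
```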
_vi.9001 | I would like to be able to search google from within any vim file. A nice command might be :goo while in normal mode. Then I type what I want to search and bam it opens my default browser with the search. How would I do this? | How do I search google from vim? | external command;bash | You have a couple of options here:Using a plugin:vim-ggsearchvim-quicklinkOr, if you prefer a lightweight solution, you can try the following:function! GoogleSearch() let searchterm = getreg("g") silent! exec "silent! !firefox \"http://google.com/search?q=" . searchterm . "\" &"endfunctionvnoremap <F6> "gy<Esc>:call GoogleSearch()<CR>(source)Using the vim-shell plugin you can rewrite this to:function! GoogleSearch() let searchterm = getreg("g") Open http://google.com/search?q= . searchterm . \ &endfunctionvnoremap <F6> "gy<Esc>:call GoogleSearch()<CR>You can also have a look at those links:http://vim.wikia.com/wiki/Search_the_web_for_text_selected_in_Vimhttps://www.reddit.com/r/vim/comments/37ou4p/help_me_search_google_from_vim/http://vim.wikia.com/wiki/Internet_search_for_the_current_wordAnd I highly recommend this video by Drew Niel. |
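As a standalone illustration of the URL the vim function builds, here is the same construction outside vim; the search term is made up, and the `tr`-based space replacement is a minimal stand-in for real percent-encoding:

```shell
# Replace spaces with '+' and append the term to the Google search URL.
term="search google from vim"
encoded=$(printf '%s' "$term" | tr ' ' '+')
echo "http://google.com/search?q=$encoded"
# prints http://google.com/search?q=search+google+from+vim
```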
_webapps.24612 | When using VEVO it seems impossible to register without using Facebook. No other options are given (even with Incognito)Is there a path to a regular form registration? I just want to save a playlist. | How to sign up to VEVO without using Facebook | vevo | This is possible now (2016) by going to http://www.vevo.com/signup.You can't do that anymore.We heard that music video site Vevo was planning a major site redesign, and those news changes are rolling out today just as planned. The first major difference you'll notice is that the only way to sign up for an account is with Facebook, and existing users must now log in with Facebook as well.Source. |
_codereview.129146 | I'm looking for optimizations and/or improvements. The code's flagship method read takes qty,item in the form of an array of arrays.pluralizer.read([[2,'orange'],[3,'peach'],[5,'cherry']])returns string2 oranges, 3 peaches, and 5 cherriesGitHub//Revealing Module Pattern (Public & Private) w Public Namespace 'pluralizer'var pluralizer = (function() { var pub = {}; var r = 'pluralizer.js error'; var expectedArrayOfArrays = {name:r, message:'Invalid argument. Expected array of arrays'}; //creates Array.isArray() if it's not natively available if (!Array.isArray) { Array.isArray = function(arg) { return Object.prototype.toString.call(arg) === '[object Array]'; }; } if (!String.prototype.endsWith) { String.prototype.endsWith = function(searchString, position) { var subjectString = this.toString(); if (typeof position !== 'number' || !isFinite(position) || Math.floor(position) !== position || position > subjectString.length) { position = subjectString.length; } position -= searchString.length; var lastIndex = subjectString.indexOf(searchString, position); return lastIndex !== -1 && lastIndex === position; }; } var irregular = [['child','children'], ['die','dice'], ['foot','feet'], ['goose','geese'], ['louse','lice'], ['man','men'], ['mouse','mice'], ['ox','oxen'], ['person','people'], ['that','those'], ['this','these'], ['tooth','teeth'], ['woman','women']]; var xExceptions = [['axis','axes'], ['ox','oxen']]; var fExceptions = [['belief','beliefs'], ['chef','chefs'], ['chief','chiefs'], ['dwarf','dwarfs'], ['grief','griefs'], ['gulf','gulfs'], ['handkerchief','handkerchiefs'], ['kerchief','kerchiefs'], ['mischief','mischiefs'], ['muff','muffs'], ['oaf','oafs'], ['proof','proofs'], ['roof','roofs'], ['safe','safes'], ['turf','turfs']]; var feExceptions = [[' safe','safes']]; var oExceptions = [['albino','albinos'], ['armadillo','armadillos'], ['auto','autos'], ['cameo','cameos'], ['cello','cellos'], ['combo','combos'], ['duo','duos'], ['ego','egos'], 
['folio','folios'], ['halo','halos'], ['inferno','infernos'], ['lasso','lassos'], ['memento','mementos'], ['memo','memos'], ['piano','pianos'], ['photo','photos'], ['portfolio','portfolios'], ['pro','pros'], ['silo','silos'], ['solo','solos'], ['stereo','stereos'], ['studio','studios'], ['taco','tacos'], ['tattoo','tattoos'], ['tuxedo','tuxedos'], ['typo','typos'], ['veto','vetoes'], ['video','videos'], ['yo','yos'], ['zoo','zoos']]; var usExceptions = [['abacus','abacuses'], ['crocus','crocuses'], ['genus','genera'], ['octopus','octopuses'], ['rhombus','rhombuses'], ['walrus','walruses']]; var umExceptions = [['album','albums'], ['stadium','stadiums']]; var aExceptions = [['agenda','agendas'], ['alfalfa','alfalfas'], ['aurora','auroras'], ['banana','bananas'], ['barracuda','barracudas'], ['cornea','corneas'], ['nova','novas'], ['phobia','phobias']]; var onExceptions = [['balloon','balloons'], ['carton','cartons']]; var exExceptions = [['annex','annexes'], ['complex','complexes'], ['duplex','duplexes'], ['hex','hexes'], ['index','indices']]; var unchanging = ['advice', 'aircraft', 'bison', 'corn', 'deer', 'equipment', 'evidence', 'fish', 'gold', 'information', 'jewelry', 'kin', 'legislation', 'luck', 'luggage', 'moose', 'music', 'offspring', 'sheep', 'silver', 'swine', 'trousers', 'trout', 'wheat']; var onlyPlurals = ['barracks', 'bellows', 'cattle', 'congratulations', 'deer', 'dregs', 'eyeglasses', 'gallows', 'headquarters', 'mathematics', 'means', 'measles', 'mumps', 'news', 'oats', 'pants', 'pliers', 'pajamas', 'scissors', 'series', 'shears', 'shorts', 'species', 'tongs', 'tweezers', 'vespers']; var doc = document; doc.addEventListener(DOMContentLoaded, function(event) { }); pub.help = Pluralizer.js returns 2 public methods - read and format. Pluralizer.read expects an array of arrays, each with quantity and item name, e.g. pluralizer.read([[2,'orange'],[3,'peach'],[5,'cherry']]) returns string '2 oranges, 3 peaches, and 5 cherries.'. 
Pluralizer.format expects an array with quantity and item name, e.g., pluralizer.format([3,'couch']) returns array '[3, 'couches']' pub.read = function (arr) { if(isArrayOfArrays(arr)){ var count = arr.length; var str = ''; var temp = []; switch (count) { //if arr has 1 item is 1 apple (no and no commas) case 1: temp[0] = pluralizer.format(arr[0]); str = temp[0][0] + ' ' + temp[0][1]; break; //if arr has 2 items it's 1 apple and 2 oranges (no commas but an and) case 2: temp[0] = pluralizer.format(arr[0]); temp[1] = pluralizer.format(arr[1]); str = temp[0][0] + ' ' + temp[0][1] + ' and ' + temp[1][0] + ' ' + temp[1][1]; break; //if arr has 3 items or more it's 1 apple, 2 oranges, and 3 cherries (the last item has an 'and ' put before it) default: // for each item in array output format it and concatentate it to a string var arrayLength = arr.length; for (var i = 0; i < arrayLength; i++) { temp = pluralizer.format(arr[i]); //if this is 2nd last item append with ', and ' if (i === arrayLength - 2){ str += temp[0] + ' ' + temp[1] + ', and '; } //if this is last item append with '.' 
else if (i === arrayLength - 1){ str += temp[0] + ' ' + temp[1] + '.'; } else { str += temp[0] + ' ' + temp[1] + ', '; } } } return str; } else { throw expectedArrayOfArrays; } } pub.format = function (arr) { //if qty is greater than 1 we need to add s, es, or ies var qty = arr[0]; var str = arr[1]; if (qty > 1){ //Word ends in s, x, ch, z, or sh if (str.endsWith('s') || str.endsWith('x') || str.endsWith('ch') || str.endsWith('sh') || str.endsWith('z')){ //look for exceptions first xExceptions for (var i = 0; i < xExceptions.length; i++) { if(str === xExceptions[i][0]){ return [qty,xExceptions[i][1]]; } } //str = str.substring(0, str.length - 1); str = str + 'es'; return [qty,str]; } // Ending in 'y' else if (str.endsWith('y')){ var s = str.substring(0, str.length - 1); // preceded by a vowel if (s.endsWith('a') || s.endsWith('e') || s.endsWith('i') || s.endsWith('o') || s.endsWith('u')){ str = str + 's'; return [qty,str]; } else { //drop the y and add ies str = s + 'ies'; return [qty,str]; } } //Ends with 'ff' or 'ffe' else if (str.endsWith('ff') || str.endsWith('ffe')){ str = str + 's'; return [qty,str]; } //Ends with 'f' (but not 'ff') else if (str.endsWith('f')){ //look for exceptions first fExceptions for (var i = 0; i < fExceptions.length; i++) { if(str === fExceptions[i][0]){ return [qty,fExceptions[i][1]]; } } //Change the 'f' to 'ves' var s = str.substring(0, str.length - 1); str = s + 'ves'; return [qty,str]; } //Ends with 'fe' (but not ffe') else if (str.endsWith('fe')){ //look for exceptions first feExceptions for (var i = 0; i < feExceptions.length; i++) { if(str === feExceptions[i][0]){ return [qty,feExceptions[i][1]]; } } //Change the 'fe' to 'ves' var s = str.substring(0, str.length - 2); str = s + 'ves'; return [qty,str]; } //Ends with 'o' else if (str.endsWith('o')){ //look for exceptions first oExceptions for (var i = 0; i < oExceptions.length; i++) { if(str === oExceptions[i][0]){ return [qty,oExceptions[i][1]]; } } //Add 'es' str = s + 'es'; 
return [qty,str]; } //Ends with 'is' else if (str.endsWith('is')){ //Change final 'is' to 'es' var s = str.substring(0, str.length - 2); str = s + 'es'; return [qty,str]; } //Ends with 'us' else if (str.endsWith('us')){ //look for exceptions first oExceptions for (var i = 0; i < usExceptions.length; i++) { if(str === usExceptions[i][0]){ return [qty,usExceptions[i][1]]; } } //Change final 'us' to 'i' var s = str.substring(0, str.length - 2); str = s + 'i'; return [qty,str]; } //Ends with 'um' else if (str.endsWith('um')){ //look for exceptions first oExceptions for (var i = 0; i < umExceptions.length; i++) { if(str === umExceptions[i][0]){ return [qty,umExceptions[i][1]]; } } //Change final 'um' to 'a' var s = str.substring(0, str.length - 2); str = s + 'a'; return [qty,str]; } //Ends with 'a' but not 'ia' else if (str.endsWith('a')){ //not ending is 'ia' if (str.endsWith('ia')){ str = str + 's'; return [qty,str]; } //look for exceptions first aExceptions for (var i = 0; i < aExceptions.length; i++) { if(str === aExceptions[i][0]){ return [qty,aExceptions[i][1]]; } } //Change final 'a' to 'ae' var s = str.substring(0, str.length - 2); str = s + 'a'; return [qty,str]; } //Ends with 'on' Change final 'on' to 'a' else if (str.endsWith('on')){ //look for exceptions first onExceptions for (var i = 0; i < onExceptions.length; i++) { if(str === onExceptions[i][0]){ return [qty,onExceptions[i][1]]; } } //Change final 'um' to 'a' var s = str.substring(0, str.length - 2); str = s + 'a'; return [qty,str]; } //Ends with 'ex' else if (str.endsWith('ex')){ //look for exceptions first onExceptions for (var i = 0; i < exExceptions.length; i++) { if(str === exExceptions[i][0]){ return [qty,exExceptions[i][1]]; } } //Change final 'ex' to 'ices' var s = str.substring(0, str.length - 2); str = s + 'ices'; return [qty,str]; } else { //check unchanging for (var i = 0; i < unchanging.length; i++) { if(str === unchanging[i]){ return [qty,str]; } } //check onlyPlurals for (var i = 0; i < 
onlyPlurals.length; i++) { if(str === onlyPlurals[i]){ return [qty,str]; } } //check irregular for (var i = 0; i < irregular.length; i++) { if(str === irregular[i][0]){ return [qty,irregular[i][1]]; } } str = str + 's'; return [qty,str]; } } else { return [qty,str]; } } function isArrayOfArrays(arr){ if(Array.isArray(arr)){ var result = true; for (var i = 0; i < arr.length; i++) { if(!Array.isArray(arr[i])){ result = false; //throw expectedArrayOfArrays; } } if(result){ return true; } else { //throw expectedArrayOfArrays; return false; } } else { return false; } } //API return pub;}()); | pluralizer.js - return plural version of item if qty > 1 | javascript | null |
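The ending rules that the question's format method hard-codes can be illustrated compactly. The following is a hypothetical Python sketch of just two of those rules (sibilant endings take es, consonant + y becomes ies); it is not the question's JavaScript and it ignores the exception lists:

```python
def pluralize(word: str) -> str:
    # Sibilant endings (s, x, z, ch, sh) take "es": peach -> peaches.
    if word.endswith(("s", "x", "z", "ch", "sh")):
        return word + "es"
    # A final "y" preceded by a consonant becomes "ies": cherry -> cherries,
    # while a vowel before the "y" just takes "s": toy -> toys.
    if word.endswith("y") and len(word) > 1 and word[-2] not in "aeiou":
        return word[:-1] + "ies"
    # Default rule: append "s".
    return word + "s"
```

A real implementation would, as the question's code does, consult the irregular and unchanging word lists before falling through to these ending rules.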
_codereview.140448 | I am a college student and started my first week of C++ and we were given an assignment to convert a Java program that calculates the interest on a series of loans, given the amount of the principal, the annual interest rate, and the number of days, in a sentinel loop into C++.Here is what we were given:import java.util.Scanner;public class ex311{public static void main(String[] args){ double principle, rate, interest; int days; Scanner sc = new Scanner(System.in); System.out.print(Enter principle (-1 to end): ); principle = sc.nextDouble(); while (principle != -1) { System.out.print(Enter annual interest rate (as a decimal): ); rate = sc.nextDouble(); System.out.print(Enter number of days: ); days = sc.nextInt(); interest = principle * rate / 365 * days; System.out.printf(Interest is %.2f\n, interest); System.out.print(\nEnter principle (-1 to end): ); principle = sc.nextDouble(); }}}This is what I have for the C++ converted code:#include <iostream>using namespace std;void main(){ double principle, rate, interest;int days;cout << Enter principle (-1 to end);cin >> principle;while (principle != -1){ cout << Enter annual interest rate(as a decimal); cin >> rate; cout << Enter number of days; cin >> days; interest = principle * rate / 365 * days; cout << Interest is << interest; cout << Enter principle (-1 to end); cin >> principle;}}I would like to know if there is a better way to go about this, as in making my code more efficient. I am aware that I should not be using void main, but this is how we're being taught for the time being. | Calculating the interest on a series of loans | c++;beginner;finance | null
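Both versions above implement the same daily pro-rated simple-interest formula. As a language-neutral check of the arithmetic, here is a small Python sketch (the function name is mine, not part of the assignment):

```python
def simple_interest(principal: float, annual_rate: float, days: int) -> float:
    # Same formula as the Java and C++ loops: principal * rate / 365 * days.
    return principal * annual_rate / 365 * days
```

At 10% annual interest, a principal of 1000 held for a full 365 days accrues 100 in interest, which is a quick sanity check for either port.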
_codereview.87301 | I've written a script to automate the entry of laboratory instrument data into an Excel spreadsheet using pandas and win32com.I've got the script functioning correctly, but it is painfully slow. In an attempt to profile the code, my acfmp_ToExcel function seems to be the culprit. I've pasted the profiling data for this function at the bottom. Is there any way to get this code running faster? It takes anywhere from 20-30 seconds each time I run it.What the function does is take a list of queries (strings within a column of a dataframe, df_acfmp), then using those queries pull data from the other dataframe columns and put those values into an Excel spreadsheet at specific locations.The function is essentially one bundle of code (if any():) repeated 3 times within the main for loop.def acfmp_ToExcel(queries):order_list = {'one':['_410-', '_510-'], 'two':['_420-', '_530-'], 'three': ['_430-', '_590-']}queerz = Series(queries)fronts = queerz[queerz.str.endswith(_F)]fronts_plus = queerz[queerz.str.endswith(F+7)]backs = queerz[queerz.str.endswith(_B)]for each_queer in queerz: if any(q in each_queer for q in order_list['one']): locale_front = np.where(df_acfmp['Name'].str.contains(fronts.iloc[0]+'$')) positions_front = locale_front[0] fnd_f = 'F + 0 mm' x = xsheet1.Range('b1:b1000').Find(fnd_f) x_two = xsheet1.Range('b1:b1000').FindNext(x) x_three = xsheet1.Range('b1:b1000').FindNext(x_two) x_four = xsheet1.Range('b1:b1000').FindNext(x_three) x_five = xsheet1.Range('b1:b1000').FindNext(x_four) x_six = xsheet1.Range('b1:b1000').FindNext(x_five) x_seven = xsheet1.Range('b1:b1000').FindNext(x_six) front_queer = fronts_plus.iloc(0) locale_fronts_plus = np.where(df_acfmp['Name'].str.contains(front_queer, regex = False)) positions_fronts_plus = locale_fronts_plus[0] fnd_p = 'F + 7 mm' y_ = xsheet1.Range('b1:b1000').Find(fnd_p) y_two = xsheet1.Range('b1:b1000').FindNext(y_) y_three = xsheet1.Range('b1:b1000').FindNext(y_two) y_four = 
xsheet1.Range('b1:b1000').FindNext(y_three) y_five = xsheet1.Range('b1:b1000').FindNext(y_four) y_six = xsheet1.Range('b1:b1000').FindNext(y_five) try: y_seven = xsheet1.Range('b1:b1000').FindNext(y_six) except: pass locale_backs = np.where(df_acfmp['Name'].str.contains(backs.iloc[0])) positions_backs = locale_backs[0] fnd_b = 'Back' z_ = xsheet1.Range('b1:b1000').find(fnd_b) z_two = xsheet1.Range('b1:b1000').FindNext(z_) z_three = xsheet1.Range('b1:b1000').FindNext(z_two) z_four = xsheet1.Range('b1:b1000').FindNext(z_three) z_five = xsheet1.Range('b1:b1000').FindNext(z_four) z_six = xsheet1.Range('b1:b1000').FindNext(z_five) try: z_seven = xsheet1.Range('b1:b1000').FindNext(z_six) except: pass if 1 in df_acfmp['Stage_Number']: for nums in range(5): x_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 2 in df_acfmp['Stage_Number']: for nums in range(5): x_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 3 in df_acfmp['Stage_Number']: for nums in range(5): x_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 4 in df_acfmp['Stage_Number']: for nums in range(5): x_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if any(r in each_queer for r in order_list['two']): locale_front = np.where(df_acfmp['Name'].str.contains(fronts.iloc[1] + '$')) positions_front = locale_front[0] fnd_f = 
'F + 0 mm' x = xsheet2.Range('b1:b1000').Find(fnd_f) x_two = xsheet2.Range('b1:b1000').FindNext(x) x_three = xsheet2.Range('b1:b1000').FindNext(x_two) x_four = xsheet2.Range('b1:b1000').FindNext(x_three) x_five = xsheet2.Range('b1:b1000').FindNext(x_four) x_six = xsheet2.Range('b1:b1000').FindNext(x_five) x_seven = xsheet2.Range('b1:b1000').FindNext(x_six) front_queer = fronts_plus.iloc(1) locale_fronts_plus = np.where(df_acfmp['Name'].str.contains(front_queer, regex = False)) positions_fronts_plus = locale_fronts_plus[0] fnd_p = 'F + 7 mm' y_ = xsheet2.Range('b1:b1000').Find(fnd_p) y_two = xsheet2.Range('b1:b1000').FindNext(y_) y_three = xsheet2.Range('b1:b1000').FindNext(y_two) y_four = xsheet2.Range('b1:b1000').FindNext(y_three) y_five = xsheet2.Range('b1:b1000').FindNext(y_four) y_six = xsheet2.Range('b1:b1000').FindNext(y_five) try: y_seven = xsheet2.Range('b1:b1000').FindNext(y_six) except: pass locale_backs = np.where(df_acfmp['Name'].str.contains(backs.iloc[1])) positions_backs = locale_backs[0] fnd_b = 'Back' z_ = xsheet2.Range('b1:b1000').find(fnd_b) z_two = xsheet2.Range('b1:b1000').FindNext(z_) z_three = xsheet2.Range('b1:b1000').FindNext(z_two) z_four = xsheet2.Range('b1:b1000').FindNext(z_three) z_five = xsheet2.Range('b1:b1000').FindNext(z_four) z_six = xsheet2.Range('b1:b1000').FindNext(z_five) try: z_seven = xsheet2.Range('b1:b1000').FindNext(z_six) except: pass if 1 in df_acfmp['Stage_Number'].values: for nums in range(5): x_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 2 in df_acfmp['Stage_Number'].values: for nums in range(5): x_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 3 in 
df_acfmp['Stage_Number'].values: for nums in range(5): x_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 4 in df_acfmp['Stage_Number'].values: for nums in range(5): x_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if any(s in each_queer for s in order_list['three']): #query_front = fronts.ix[1, 'filter'] + '$' locale_front = np.where(df_acfmp['Name'].str.contains(fronts.iloc[2] + '$')) positions_front = locale_front[0] fnd_f = 'F + 0 mm' x = xsheet3.Range('b1:b1000').Find(fnd_f) x_two = xsheet3.Range('b1:b1000').FindNext(x) x_three = xsheet3.Range('b1:b1000').FindNext(x_two) x_four = xsheet3.Range('b1:b1000').FindNext(x_three) x_five = xsheet3.Range('b1:b1000').FindNext(x_four) x_six = xsheet3.Range('b1:b1000').FindNext(x_five) x_seven = xsheet3.Range('b1:b1000').FindNext(x_six) front_queer = fronts_plus.iloc(2) locale_fronts_plus = np.where(df_acfmp['Name'].str.contains(front_queer, regex = False)) positions_fronts_plus = locale_fronts_plus[0] fnd_p = 'F + 7 mm' y_ = xsheet3.Range('b1:b1000').Find(fnd_p) y_two = xsheet3.Range('b1:b1000').FindNext(y_) y_three = xsheet3.Range('b1:b1000').FindNext(y_two) y_four = xsheet3.Range('b1:b1000').FindNext(y_three) y_five = xsheet3.Range('b1:b1000').FindNext(y_four) y_six = xsheet3.Range('b1:b1000').FindNext(y_five) try: y_seven = xsheet1.Range('b1:b1000').FindNext(y_six) except: pass locale_backs = np.where(df_acfmp['Name'].str.contains(backs.iloc[2])) positions_backs = locale_backs[0] fnd_b = 'Back' z_ = xsheet3.Range('b1:b1000').find(fnd_b) z_two = xsheet3.Range('b1:b1000').FindNext(z_) z_three = xsheet3.Range('b1:b1000').FindNext(z_two) z_four = 
xsheet3.Range('b1:b1000').FindNext(z_three) z_five = xsheet3.Range('b1:b1000').FindNext(z_four) z_six = xsheet3.Range('b1:b1000').FindNext(z_five) try: z_seven = xsheet3.Range('b1:b1000').FindNext(z_six) except: pass if 1 in df_acfmp['Stage_Number'].values: for nums in range(5): x_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 2 in df_acfmp['Stage_Number'].values: for nums in range(5): x_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 3 in df_acfmp['Stage_Number'].values: for nums in range(5): x_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 4 in df_acfmp['Stage_Number'].values: for nums in range(5): x_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums]Some profiling data that has led me to believe this is the culprit function. I am very new to profiling code so I'm not entirely sure what this is telling me. ncalls tottime percall cumtime percall filename:lineno(function) 3 2.138 0.713 24.834 8.278 grab_enter.py:688(acfmp_ToExcel)Function called... 
ncalls tottime cumtimegrab_enter.py:688(acfmp_ToExcel) -> 180 0.002 0.626 <COMObject <unknown>>:1(Range) 1 0.000 0.004 <COMObject Range>:1(FindNext) 189 0.003 0.007 C:\Python27\lib\site-packages\pandas\core\frame.py:1757(__getitem__) 36 0.000 0.001 C:\Python27\lib\site-packages\pandas\core\generic.py:686(__contains__) 891 0.003 0.005 C:\Python27\lib\site-packages\pandas\core\generic.py:1030(_indexer) 6 0.000 0.000 C:\Python27\lib\site-packages\pandas\core\generic.py:1932(__getattr__) 6 0.000 0.000 C:\Python27\lib\site-packages\pandas\core\generic.py:1949(__setattr__) 27 0.000 0.001 C:\Python27\lib\site-packages\pandas\core\indexing.py:49(__call__) 864 0.004 0.528 C:\Python27\lib\site-packages\pandas\core\indexing.py:1198(__getitem__) 3 0.000 0.001 C:\Python27\lib\site-packages\pandas\core\series.py:114(__init__) 72 0.000 0.001 C:\Python27\lib\site-packages\pandas\core\series.py:296(values) 9 0.000 0.003 C:\Python27\lib\site-packages\pandas\core\series.py:507(__getitem__) 3 0.000 0.000 C:\Python27\lib\site-packages\pandas\core\series.py:1011(__iter__) 6 0.000 0.000 C:\Python27\lib\site-packages\pandas\core\series.py:2454(str) 9 0.000 0.002 C:\Python27\lib\site-packages\pandas\core\strings.py:879(wrapper3) 81 0.001 0.025 C:\Python27\lib\site-packages\pandas\core\strings.py:963(contains) 810 0.010 3.217 C:\Python27\lib\site-packages\win32com\client\dynamic.py:184(__call__) 1944 0.053 9.291 C:\Python27\lib\site-packages\win32com\client\dynamic.py:444(__getattr__) 810 0.029 5.190 C:\Python27\lib\site-packages\win32com\client\dynamic.py:524(__setattr__) 9 0.000 0.000 grab_enter.py:697(<genexpr>) 9 0.000 0.000 grab_enter.py:765(<genexpr>) 9 0.000 0.000 grab_enter.py:834(<genexpr>) 81 0.000 0.000 {any} 81 0.001 0.003 {numpy.core.multiarray.where} 54 0.000 0.000 {range} | Excel Laboratory Data Entry from Python 2.7 | python;performance;excel;pandas | Do not bare excepttry: z_seven = xsheet1.Range('b1:b1000').FindNext(z_six)except: passShould be avoided as any kind of error 
will be caught and silently ignored, instead use:try: z_seven = xsheet1.Range('b1:b1000').FindNext(z_six)except TheExceptionIExpect: passRemove the massive code duplication if 1 in df_acfmp['Stage_Number']: for nums in range(5): x_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 2 in df_acfmp['Stage_Number']: for nums in range(5): x_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_five.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 3 in df_acfmp['Stage_Number']: for nums in range(5): x_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_six.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums] if 4 in df_acfmp['Stage_Number']: for nums in range(5): x_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_seven.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums]becomes:def contains_any(items, lst): return any(i in lst for i in items)if contains_any([1,2,3,4], df_acfmp['Stage_Number']): for nums in range(5): x_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_front[0], nums] y_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_fronts_plus[0], nums] z_four.Offset(1, nums+2).Value = df_acfmp.iloc[positions_backs[0], nums]
_unix.2857 | If I'm logged in to a system via SSH, is there a way to copy a file back to my local system without firing up another terminal or screen session and doing scp or something similar or without doing SSH from the remote system back to the local system? | SSH easily copy file to local system | ssh;file copy | Master connectionIt's easiest if you plan in advance.Open a master connection the first time. For subsequent connections, route slave connections through the existing master connection. In your ~/.ssh/config, set up connection sharing to happen automatically:ControlMaster autoControlPath ~/.ssh/control:%h:%p:%rIf you start an ssh session to the same (user, port, machine) as an existing connection, the second session will be tunneled over the first. Establishing the second connection requires no new authentication and is very fast.So while you have your active connection, you can quickly:copy a file with scp or rsync;mount a remote filesystem with sshfs.ForwardingOn an existing connection, you can establish a reverse ssh tunnel. On the ssh command line, create a remote forwarding by passing -R 22042:localhost:22 where 22042 is a randomly chosen number that's different from any other port number on the remote machine. Then ssh -p 22042 localhost on the remote machine connects you back to the source machine; you can use scp -P 22042 foo localhost: to copy files.You can automate this further with RemoteForward 22042 localhost:22. The problem with this is that if you connect to the same computer with multiple instances of ssh, or if someone else is using the port, you don't get the forwarding.If you haven't enabled a remote forwarding from the start, you can do it on an existing ssh session. Type Enter ~C Enter -R 22042:localhost:22 Enter.See Escape characters in the manual for more information.There is also some interesting information in this Server Fault thread.Copy-pasteIf the file is small, you can type it out and copy-paste from the terminal output. 
If the file contains non-printable characters, use an encoding such as base64.remote.example.net$ base64 <myfile(copy the output)local.example.net$ base64 -d >myfile(copy the output)Ctrl+DMore conveniently, if you have X forwarding active, copy the file on the remote machine and paste it locally. You can pipe data in and out of xclip or xsel. If you want to preserve the file name and metadata, copy-paste an archive.remote.example.net$ tar -czf - myfile | xsellocal.example.net$ xsel | tar -xzf - |
_webmaster.108195 | I'd like to know if you think it would be worth it to compete, starting from scratch, with a relatively big competitor (160k backlinks from 2.4k domains, citation flow 47, trust flow 43, 1.2k class C IPs) whose niche (how-tos and technology) is something I have sufficient knowledge of. I am particularly interested because that domain exploits a niche language I am fluent in, and technology is of course one of the big players in search engine traffic and affiliate sales. Until now I have managed a very small niche domain in my spare time (genre-specific music), which, being so specific, could only attract a modest number of visitors even when ranking high on Google. On the other hand, I have been closely watching the domain I'd like to compete with for years (it has around 8k how-tos with good SEO but average or below-average, error-ridden or outdated content taken from various unquoted sources, something I could easily create myself in a few months' worth of work, or even sooner with the help of a few collaborators; the domain itself is run by just a couple of people), and it consistently ranks its how-tos and comparisons in top positions in its niche, yielding most of its revenue from header bidding. The template they use is nothing overly complicated; it just has the main optimizations one would use to decrease load time and avoid triggering Google with too many banners. My main doubts are whether such a domain, having larger economic resources than me, could eventually sink mine (with techniques like pointing plenty of spammy backlinks at it, or something like that) once it finds out I am competing and getting closer than they'd like, or whether, since their texts already have quite good SEO, their strongest protection is that I'd have to sacrifice too many keywords in order to create my content without my reworkings seeming close enough to theirs to allow a lawsuit. Thank you, Frank | Is it worth it to compete with a big competitor starting from
scratch? | seo | null |
_softwareengineering.224393 | I'm building a repository for a large CRM schema that has a high number of relations between entities.Some of the entities are referenced by almost all entities, e.g. Person and Company.Where I have an aggregate root such as Order, that addresses Order Header, Order Lines etc. In the case where the Order is from a new Customer, I need to insert a Company and a Person record at the same time... so far so good.The difficulty is that this is the situation with very many other aggregate roots.I don't want to have a private method insertPerson(IPerson) in every single repository implementation.I have a Customer repository that already has public InsertCustomer(ICustomer) method, which persists a Company and a person record. But my reading indicates that repositories shouldn't depend on each other.If this is a case where it is okay for repositories to depend on each other, what is the best way to go about it? Should I inject the Customer Repository into the Order repository constuctor, pass it as a parameter to the methods that need it, or never commit this evil and duplicate the identical code in all repositories, in case those repositories need to specialise that code for their specific contexts? | How to avoid duplication of code related to shared entities in the repository pattern? | design patterns;domain driven design;repository | This sounds like an area that Service Layer might come in handy. You could inject the repositories in the service layer and that way each repo won't depend on the others, and the service can coordinate all of the inserts for a given operation across aggregates.Details of your implementation might also guide you depending on the extent to which you're relying on an ORM and need to take into account the atomicity of the repository operations. Without knowing more about that this advice may be less than useful. |
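A minimal in-memory sketch of that Service Layer idea (all class and method names here are hypothetical, and the lists stand in for real persistence): each repository stays independent, and the service coordinates the cross-aggregate insert for a new customer's order:

```python
class PersonRepository:
    def __init__(self):
        self.rows = []

    def insert(self, person):
        self.rows.append(person)


class CompanyRepository:
    def __init__(self):
        self.rows = []

    def insert(self, company):
        self.rows.append(company)


class OrderRepository:
    def __init__(self):
        self.rows = []

    def insert(self, order):
        self.rows.append(order)


class OrderService:
    """Coordinates inserts across aggregates so no repository
    needs to depend on another repository."""

    def __init__(self, people, companies, orders):
        self.people = people
        self.companies = companies
        self.orders = orders

    def place_order_for_new_customer(self, person, company, order):
        # One operation, three aggregates, zero repo-to-repo coupling.
        self.people.insert(person)
        self.companies.insert(company)
        self.orders.insert(order)
```

In a real system the service would also own the unit-of-work / transaction boundary so the three inserts commit atomically.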
_codereview.143848 | I am interested in learning a more succinct (or better performing) way of writing the following working code. I just figured it out but it is pretty messy. This program takes a file full of daily financial transactions for x number of months (for example), and appends each line to the appropriate month file to create separate monthly reports. I put the open month files in a list. If the filestream is not yet created, I create it, add it to the list, then append the data. If it is already created/open, I append the data to it.using System;using System.Collections.Generic;using System.IO;using System.Linq;using System.Text;namespace SplitFiles{ class Program { private static FileStream fs; private static List<FileStream> lf; private static StringBuilder sb; private static string[] header; private static string _file; private static string _fileName; private static string _startDir; private static string _newFile; public static string NFile { get { return _newFile; } set { _newFile = Path.Combine(_startDir, ${value}_{_fileName}); } } static byte[] ReturnBytes(string s) { return Encoding.ASCII.GetBytes(s); } static void CeateFile(string monyr) { NFile = monyr; fs = new FileStream(NFile, FileMode.Create); for(var i = 0; i < 3; i++) { var b = ReturnBytes(${header[i]}{Environment.NewLine}); fs.Write(b, 0, b.Length); } lf.Add(fs); } static void Main(string[] args) { sb = new StringBuilder(); header = new string[4]; _startDir = @D:\ProgramData\MonthlyReports; _file = @D:\ProgramData\test.txt; _fileName = Path.GetFileName(_file); lf = new List<FileStream>(); var my = new string[2]; DateTime dt; // if directory doesnt exist, create it if (!Directory.Exists(_startDir)) Directory.CreateDirectory(_startDir); using (var sr = new StreamReader(File.OpenRead(_file))) { var i = 0; while (!sr.EndOfStream) { sb.Append(sr.ReadLine()); // first 3 lines of file contain: // -------- etc // field1 | field2 | field3 | DATEFIELD | field5 | etc. | etc. 
// -------- etc // last line: ------------------------ etc // Build header array (first 3 lines = header, last line = footer) if (i < 3 || sb.ToString().Contains(-------------)) { header[i] = sb.ToString(); i++; sb.Clear(); continue; } // split line by bar | delimiter var pop = sb.ToString().Split('|'); if(DateTime.TryParse(pop[4], out dt)) { my[0] = dt.Month.ToString(); my[1] = dt.Year.ToString(); fs = lf.FirstOrDefault(a => a.Name.IndexOf(${my[0]}_{my[1]}) != -1); // if fs is null, create filestream and set fs = new filestream if (fs == null) CeateFile(${my[0]}_{my[1]}); var b = ReturnBytes(${sb.ToString()}{Environment.NewLine}); fs.Write(b, 0, b.Length); } sb.Clear(); } } var finalLine = ReturnBytes(${header[3]}{Environment.NewLine}); lf.ForEach(a =>{ a.Write(finalLine, 0, finalLine.Length); a.Close(); }); sb.Clear(); sb = null; fs.Close(); } }} | Creating monthly files from an annual file | c#;performance;linq;memory optimization | Problems I see: Having all methods static will involve some problems you can avoid: the code is harder to test, and static methods on a class holding state will lead to problems if multi-threading is introduced. Creating filestreams which are kept open can lead to problems because, in the case of an exception, the streams aren't properly disposed. Inconsistent naming of variables: either use underscore-prefixed names or don't use them. Mixing styles will lead to harder-to-read and therefore harder-to-maintain code. Prefixing static variables with an underscore is IMO not correct. The names of most variables don't tell the reader of the code anything about their purpose. Always make the names of things as descriptive as possible. E.g. private static List<FileStream> lf; would be better named fileStreams.
If you or Sam the maintainer come back in 2 months to this code because you either have a bug or you need to add a feature you would have a hard time to figure out what all the variables and parameters (like string monyr) represent hence your task will take a lot more time.If the file isn't that big (not megabytes) simply using File.ReadAllLines() would be better, because it would remove the need to convert the bytes to text because that is done by that said method and it will shorten your code. If the file is big, you should consider to use a TextReader instead of a StreamReader because the TextReader is doing the converting stuff under the hood. If you are using comments you should use them only if it isn't obvious why the code does something in a specific way. Something like // if directory doesnt exist, create it if (!Directory.Exists(_startDir)) Directory.CreateDirectory(_startDir); is stating the obvious and in addition could be just replaced by a simple Directory.CreateDirectory(_startDir); because under the hood the CreateDirectory() is checking if the directory exists before it tries to create it. |
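The disposal concern about the list of open FileStreams is language-independent. As a hedged sketch of the same split-by-month task in Python (names are mine, not from the question), contextlib.ExitStack guarantees every handle is closed even if a write raises part-way through:

```python
import os
import tempfile
from contextlib import ExitStack

def split_by_month(rows, out_dir):
    """Append each (month, line) pair to that month's file.

    Every opened handle is registered on the ExitStack, so an
    exception mid-loop still closes all files opened so far.
    """
    handles = {}
    with ExitStack() as stack:
        for month, line in rows:
            if month not in handles:
                path = os.path.join(out_dir, month + ".txt")
                handles[month] = stack.enter_context(open(path, "w"))
            handles[month].write(line + "\n")
```

tempfile is imported only so the function can be tried out against a scratch directory.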
_softwareengineering.290391 | Background: Support and Sprint are the test branches for bugs and tasks. Each bug gets a new branch from master, which is merged into Support; when tested good, a pull request is made between the Support branch and master. Each task gets a new branch from master, which is merged into Sprint; when tested good, a pull request is made between the Sprint branch and master. This allows any given bug fix to go live at a moment's notice when tested good, and allows any given task to go live as and when it's ready. It allows any part of a task to be tested in isolation, and any bug to be tested in isolation. Problem: If Task 123 and Task 234 both change method DoSomethingToX, this creates a conflict in Sprint which must be resolved. This will break one or both, or neither of the tasks. The fix will be made as part of the merge (because it's resolving a conflict), so will be committed to Sprint, not Task 123/234. I do not want to merge Sprint back into a task, because that will then merge all other In Progress tasks into that task, and could potentially put those task-parts live. How would I better manage these conflicts? Is this what cherry-pick is for? Is there a way to go with this type of architecture and avoid these conflicts? Is there a set of coding standards that would help avoid these conflicts? (Support and Sprint are only there to give a branch to create deployment builds from for testing; this is already pretty finely ingrained into the entire process and is unlikely to be changed) | Avoid branch conflicts/race conditions with task branches | git;branching;release management | null
_unix.42973 | OS X 10.6.8: if I use Bash Process Substitution as 'root', it just doesn't work. Is it supposed to be so? Why? Note: here's what I mean... <(list) mysql -D robottinosino < <(echo 'select robot from tino_sino;') /* a contrived example, admittedly, as you could swap the echo and mysql using a simple pipe... I could not think of a better one off the top of my head */ EDIT: I am logging on as root like so: sudo su - (incidentally, is there a better way if I want to stay logged on?) I am not on Bash so my question is really stupid and the comment below caught the problem instantly! :( echo $0 yields -sh :( I guess this question could just be deleted at this point or metamorphosed into: how do I properly log in as 'root' using bash? (perhaps editing /private/etc/passwd? that does not seem to work. or... sudo bash -l?) | Bash Process Substitution does not work as 'root' on OS X | bash;shell;osx;root | If you want to change the shell, run chsh -s /bin/bash. If you want to run the shell once while logged in as root, just run bash or /bin/bash. Output of chsh after changing root's shell: # Changing user information for root. # Use passwd to change the password. ## # Open Directory: /Local/Default ## Login: root Uid [#]: 0 Gid [# or name]: 0 Generated uid: FFFFEEEE-DDDD-CCCC-BBBB-AAAA00000000 Home directory: /var/root Shell: /bin/bash Full Name: System Administrator Office Location: Office Phone: Home Phone:
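The failure in that question comes from the shell, not from being root: process substitution is a bash/zsh/ksh feature, not POSIX sh syntax, which is why it breaks under the -sh login shell that sudo su - gave root. A quick sanity check (my example, assuming bash is installed at /bin/bash):

```shell
# Process substitution works once the command actually runs under bash.
bash -c 'cat <(echo "process substitution works")'

# To get a root shell that understands it without permanently changing
# root's shell, the question's own suggestion works:
#   sudo bash -l
```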
_unix.87001 | This might seem like a duplicate post, well yes it is, but I have a different problem compared with the duplicate version of it.My value for imaqhwinfo gives me: InstalledAdaptors: {'dcam' 'linuxvideo'} MATLABVersion: '7.14 (R2012a)' ToolboxName: 'Image Acquisition Toolbox' ToolboxVersion: '4.3 (R2012a)'The value for imaqhwinfo('linuxvideo',1) gives me:DefaultFormat: 'YUYV_640x480' DeviceFileSupported: 0 DeviceName: '1.3M WebCam' DeviceID: 1 VideoInputConstructor: 'videoinput('linuxvideo', 1)'VideoDeviceConstructor: 'imaq.VideoDevice('linuxvideo', 1)' SupportedFormats: {1x7 cell}So, after that I gave the following to the Matlab terminal:vid = videoinput('linuxvideo', 1);set(vid, 'ReturnedColorSpace', 'RGB');However, after inputting the following line:img = getsnapshot(vid);I get the following error:Warning: Unable to set the selected source. Perhaps the device is in use. Error using imaqdevice/getsnapshot (line 62)Could not connect to the image acquisition device. Device may be in use.I posted this question to Matlab central and am waiting for a reply.I'm using ArchLinux(64 bit) & Matlab(2012a) ( 64 bit). Webcam apps such as Cheese are running okay. I can see my face. I also have Skype, though I haven't configured it yet.TL;DRCan anybody help me fix this issue? It would be a great help, because if I cannot, I'll have to re-install Windows 7 for just a little bit of a school assignment, and that's time consuming. Plus I don't want to go back to Windows right now.P.S: lsusb gives me:Bus 002 Device 005: ID 148e:099a EVATRONIX SA Bus 002 Device 002: ID 8087:0020 Intel Corp. Integrated Rate Matching HubBus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hubBus 001 Device 003: ID 064e:a219 Suyin Corp. 1.3M WebCam (notebook emachines E730, Acer sub-brand)Bus 001 Device 002: ID 8087:0020 Intel Corp. 
Integrated Rate Matching HubBus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub | Connection error for linux webcam driver for matlab | arch linux;drivers;configuration;camera;matlab | null |
_softwareengineering.187648 | Every example neural network for image recognition I've read about produces a simple yes or no answer. One exit node corresponds to Yes, this is a human face, and one corresponds to No, this is not a human face. I understand that this is likely for simplicity of explanation, but I'm wondering how such a neural network could be programmed to give a more specific output. For example, let's say I was classifying animals. Instead of it saying Animal or Not an animal, I would want responses like Dog, Fish, Bird, Snake, etc., with one final exit node being Not an animal/I don't recognize this. I'm sure this must be possible, but I'm having trouble understanding how. It seems like due to the training algorithm of backpropagation of error, as you train up one exit node (i.e., This is a dog) and the weights of the neurons are changed, then the ideal state for another exit node that you previously trained (i.e., This is a bird) will begin to deviate, and vice versa. So training the network to recognize one category would sabotage any training done for another category, thus limiting us to a simple Yes or No design. Does this make such a recognizer impossible? Or am I misunderstanding the algorithm? 
The only two things I can think of are that:Either we could train one neural network for each thing we want classified and somehow use those to construct a greater, super-network (so for example, a network for dog, a network for bird, etc., which we somehow add together to create the super-network for animals); or,Create some kind of ridiculously complicated training methodology which would require incredibly advanced mathematics and would somehow produce an ideal neuron-weight-state for all possible outputs (in other words, insert math magic here).(Side note 1: I am specifically looking at multilayer perceptrons as a kind of neural network.)(Side note 2: For the first bulleted possible solution, having each specific neural network and iterating through them until we receive a Yes response is not good enough. I know this could be done fairly easily, but that is simple functional programming rather than machine learning. I want to know if it's possible to have one neural network to feed the information to and receive the appropriate response.) | Can a neural network provide more than yes or no answers? | machine learning;artificial intelligence | To answer just your title, yes. Neural nets can give non-boolean answers. For example, neural nets have been used to predict stock market values, which is a numeric answer and thus more than just yes/no. Neural nets are also used in handwriting recognition, in which the output can be one of a whole range of characters - the whole alphabet, the numbers, and punctuation.To focus more on your example - recognising animals - I'd say it's possible. It's mostly an extension of the handwriting recognition example; you're recognising features of a shape and comparing them to ideal shapes to see which matches. The issues are technical, rather than theoretical. Handwriting, when run through recognition software, is usually mapped down to a set of lines and curves - nice and simple. 
Animal faces are harder to recognise, so you'd need image processing logic to extract features like eyes, nose, mouth, rough skull outline etc. Still, you only asked if it's possible, not how, so the answer is yes.Your best bet is probably to take a look at things like Adaptive Resonance Theory. The general principle is that the sensory input (in this case, metrics on the relative size, shape and spacing of the various facial features) is compared to a prototype or template which defines that class of thing. If the difference between the sensory input and the remembered template is below a certain threshold (as defined by a vigilance parameter), then the object being observed is assumed to be a member of the group represented by the template; if no match can be found then the system declares it to be a previously unseen type. The nice thing about this sort of net is that when it recognises that an object is, say, a horse, it can learn more about recognising horses so that it can tell the difference between, say, a standing horse and a sleeping horse, but when it sees something new, it can start learning about the new thing until it can say I don't know what this is, but I know it's the same thing as this other thing I saw previously.EDIT:(In the interest of full disclosure: I'm still researching this myself for a project, so my knowledge is still incomplete and possibly a little off in places.)how does this tie in with backpropogation setting weights for one output node ruining the weights for another, previously-trained node?From what I've read so far, the ART paradigm is slightly different; it's split into two sections - one that learns the inputs, and one that learns the outputs for them. This means that when it comes across an input set that doesn't match, an uncommitted neuron is activated and adjusted to match the input, so that that neuron will trigger a match next time. The neurons in this layer are only for recognition. 
Once this layer finds a match, the inputs are handed to the layer beneath, which is the one that calculates the response. For your situation, this layer would likely be very simple. The system I'm looking at is learning to drive. This is actually two types of learning; one is learning to drive in a variety of situations, and the other is learning to recognise the situation. For example, you have to learn how to drive on a slippery road, but you also have to learn to feel when the road you're driving on is slippery.This idea of learning new inputs without ruining previously-learned behaviours is known as the stability/plasticity dilemma. A net needs to be stable enough to keep learned behaviour, but plastic enough that it can be taught new things when circumstances change. This is exactly what ART nets are intended to solve. |
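The multi-class idea in this answer (one output node per class rather than a single yes/no node) can be sketched in a few lines. This is my toy illustration with made-up random weights, not the ART model described above:

```python
import math
import random

# A toy sketch (my illustration): one output node per class, with a softmax
# turning the raw scores into probabilities, so the network answers
# "dog"/"fish"/... instead of a single yes/no.
random.seed(0)
classes = ["dog", "fish", "bird", "snake", "not an animal"]
features = [random.gauss(0, 1) for _ in range(10)]          # pretend image features
weights = {c: [random.gauss(0, 1) for _ in range(10)] for c in classes}

logits = {c: sum(w * x for w, x in zip(weights[c], features)) for c in classes}
peak = max(logits.values())
exps = {c: math.exp(v - peak) for c, v in logits.items()}   # numerically stable softmax
total = sum(exps.values())
probs = {c: exps[c] / total for c in classes}

print(max(probs, key=probs.get))   # the class with the highest probability
```

Training with backpropagation then pushes the correct class's probability up and the others down jointly, instead of training each yes/no node in isolation.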
_webmaster.27311 | There are quite a few one-page-only websites that are made to perform a simple/single task. Examples of such websites include:http://dummyimage.com/ -- generates a dummy imagehttp://www.lipsum.com/ -- generates lipsum texthttp://ajaxload.info/ -- generates animated ajax loading imageshttp://www.generatedata.com/ -- generates dummy data for sqlhttp://jsbeautifier.org/ -- formats your javascripthttp://jsonlint.com/ -- validates JSONhttp://yui.2clics.net/ -- online YUI compressorhttp://www.colorpicker.com/ -- online color pickerVery few of these websites show an advertisements. Now I have a few ideas of my own and I was wondering if these websites have some way of earning income to keep them up and running.Should I expect to earn some income if I set up a few websites such as these?Should I setup AdSense on my (planned) website? My ideas are not absolutely unique but rare. | Can I expect some income from utility websites? | google adsense;revenue | null |
_webmaster.78807 | When I am talking about third party trademarks on my website:Am I supposed to use the appropriate registered symbols or is this not required?I am not talking about our own trademarks but trademarks of products we sell on our website. | Trademark Symbol Required in Product Description? | legal | It probably is, from a purely legal standpoint actually required. Our company was actually sued about it several years ago. We were selling Brand's products on our website and selling them as Brand Product. The problem arose when we started ranking higher for Brand Product than the actual Brand's website. This caused Brand to get grumpy, and brought us to court, even though we were buying the product from them 100% legitimately. When all was said and done and the lawyers got their pound of flesh, we were required to put ® next to any use of their trademark.That said, no reasonable company would give you a hard time about this as long as you're not abusing or doing anything damaging with their trademarked term. |
_unix.211533 | I have a website which is hosted in local Server (CentOS 5.x) (Hostname: xxx.yyy.local, IP : 192.168.5.25). I can browse the site typing the server IP address in the browser from my Local Network e.g: http://192.168.5.25/supportHow can I get an alias for http://192.168.5.25/support Eg: http://mycompany/support ?Note this is required for Internal network only. I don't want to access this site outside of my network. | Alias name for ip address | dns;webserver;hostname;apache httpd | null |
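Since the question above has no accepted answer, one low-maintenance sketch (mine): add a hosts-file entry on each client machine. For a whole office, an internal DNS A record pointing mycompany at 192.168.5.25 (or a dnsmasq entry on the router) does the same job centrally.

```
# /etc/hosts on each client (Windows: C:\Windows\System32\drivers\etc\hosts)
192.168.5.25    mycompany
```

After this, http://mycompany/support reaches the same server as http://192.168.5.25/support; since this is per-client resolution, nothing is exposed outside the network.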
_unix.361144 | Every hour the server creates a new log file in the format of syslog_all.yyyy-mm-dd-hh and archives the file from the previous hour.What I need is a way to grep through the current and yet-to-be-created log files for a certain string without having to re-start the command every hour just because the filename has changed.Currently I do:tail -f syslog_all.2017-04-25-09 | egrep -i --line-buffered string1 | egrep -i (.*first.*|.*second.*|.*third.*) | Grep in new log files as they are created | grep;logs;filenames | null |
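The question above has no accepted answer; one possible sketch (mine, using the grep pipeline from the question itself): compute the current hour's filename and re-attach tail whenever the hour rolls over, instead of restarting the command by hand.

```shell
# Sketch (my suggestion): follow whichever hourly file is current and
# switch automatically when the hour changes.
current_log() { printf 'syslog_all.%s' "$(date +%Y-%m-%d-%H)"; }

follow_logs() {
  while :; do
    f=$(current_log)
    tail -F "$f" &           # -F keeps retrying until the new file appears
    pid=$!
    while [ "$(current_log)" = "$f" ]; do sleep 10; done
    kill "$pid" 2>/dev/null  # hour changed: stop and re-attach to the new file
  done | grep -i --line-buffered string1 | grep -iE 'first|second|third'
}

echo "$(current_log)"        # the file the loop would follow right now
```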
_webapps.26066 | As to answer a question on SuperUser, I was making a sample spreadsheet on google docs.Now I made it first in Excel 2010 on my computer to copy/paste it later on.When I tried to copy a formula that contained the Match formula, it didn't work (it works in excel though).Now I was wondering is the google spreadsheet limited in formulas, or does it work with its own set of formulas and can't I compare it to the Office Excel one? | Is there a limit on functions in Google Spreadsheets? | google spreadsheets | Google Docs has size limits. There can be 40,000 cells containing formulas.You can compare the Excel functions with this Google Spreadsheet function list. While there are some common functions, Google Spreadsheet does have its own set of formulasGoogle spreadsheets also have complexity limits. Every time a cell is updated, any cell that references it will also be recalculated. If formulas become too complex or take too long to calculate, the spreadsheet will timeout during calculation. |
_webmaster.56825 | I have a Google Apps Standard account for a seldom used domain. The domain is mine indefinitely (the Registrar offers this to locally registered charities - I just have to maintain a basic webpage), but the group it was intended for never actually formed.The group may still organise someday, and I would like to keep the account active. I'd like to know how often I need to sign in to the admin console to keep the free account active. I don't sign in very often, maybe once a year or so.Will this expire someday, and what are the minimum requirements to keep it active? | Do Google Apps Standard (free) accounts expire after a period of inactivity? | google apps | They do not (or rather, Google has not expired them yet). I have Google Apps accounts from the very first beta that are not actively used but I can still log in to the Administrator Panel and control the account. I have gone years in between logins, so there does not appear to be a time limit.No one knows what Google will do in the future though. |
_webmaster.28599 | I have an php application that uses an MVC framework.Lets say it lives here http://domain.com on the web and here /srv/www/domain.com on my server.I want http://domain.com/blog to use a wordpress install from here /srv/www/blog and all the root web traffic to go to my MVC app (as shown above)How can this be done?I am using linux and apache. | Use different document root for subfolder of URL | apache;linux | null |
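A hypothetical Apache sketch for the question above (my addition; there is no accepted answer): keep the MVC app's vhost as-is and map /blog onto the separate directory with Alias. Directive names assume Apache 2.4; on 2.2 the access control would be Order allow,deny / Allow from all instead.

```apache
# In the domain.com vhost (DocumentRoot stays /srv/www/domain.com):
Alias /blog /srv/www/blog
<Directory /srv/www/blog>
    AllowOverride All      # let WordPress's .htaccess rewrite rules work
    Require all granted    # Apache 2.4 syntax
</Directory>
```

Because Alias is resolved before the DocumentRoot mapping, requests to /blog never reach the MVC front controller, while everything else still does.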
_webmaster.79213 | My page has a set up as below. As you can see it follows a simple structure but there are a number of elements of body text related to the various links.These text elements appear on the page using a JavaScript hover event as the trigger.The downside of this is all that text is diluting the relevant/focused content of the page. <h1>My Page Title</h1><p>My main body text optimised for SEO etc....</p><div id=sub-elements> <a href=/1.html>Link to Element 1</a> <p id=element-1-text>This text is hidden but displays when I hover over the Element 1 link.</p> <a href=/2.html>Link to Element 2</a> <p id=element-2-text>This text is hidden but displays when I hover over the Element 2 link.</p> <a href=/3.html>Link to Element 3</a> <p id=element-3-text>This text is hidden but displays when I hover over the Element 3 link.</p></div>I want the links to the sub-pages 1.html, 2.html and 3.html to be picked up by Google but I don't want their respective <p> tags to be treated as the main content of the page.What would be the best practice in this scenario? I'm wondering if it would be a good idea to wrap said <p> tags in <aside> tags on the chance that Google will recognise this and treat the text not directly applicable to the main body of the page. | Is the aside element recognised by Google? | seo;google search;html5 | I mentioned this in comments earlier, but I think it is a reasonable solution and a better way to mark up the content...Instead of having the p element (containing the tooltip) hardcoded in the HTML following the anchor, simply include this text in the anchors title attribute. This attribute is, after all, intended for this purpose... to provide the user additional information about the link and is naturally shown as a simple tooltip. The text is taken out of the main content of the page and is unlikely to influence searches for the page itself. 
It is intrinsically associated with the anchor, is more accessible and works for non-JS users. For example: <a href="/1.html" title="Additional help text for this link">Link to Element 1</a> If you need extra styling then construct the tooltip (ie. create the p element) in the onmouseover event (not when the page loads) that copies the text from the title attribute (progressive enhancement). Since you are presumably using the onmouseover event anyway (to show the existing p element), this shouldn't be too much of a change to the code. Just to answer your initial question... I don't think an aside element would be appropriate here (semantically). It is still part of the page content so would still be picked up by Google, with respect to the current page, whether it understood it or not.
_webmaster.10293 | My aim is to simply be informative about where a link is pointing to search engines. I have some content that is listed by name and then I have a Permalink button. Would it be blackhat SEO to add some hidden text within the anchor that describes where the permalink is pointing? My content is like so: News Item 1 Permalink (<a href="/my-news-item-1"><hidden>News Item 1</hidden> Permalink</a>) Teaser text.. The news title of the block already links to the article, but I think it would be of benefit to users to provide an explicit permalink button. | Is it considered blackhat SEO to have hidden text within links? | seo;links;blackhat;hidden text | I'd prefer to do a <a href="" title="whichever the text"></a>. Anyway, that seems to be a premade Dokuwiki tag? Which actually adds a javascript that initially sets a display:none on the block. If that is the case I think it would not harm SEO.
_webapps.87410 | When I perform searches I frequently receive a lot of results of photos in my searches. I would like to exclude these from my searches, how can I do this? Is there a cheatsheet of sorts that shows the various search options that exist for Google Drive? | How to exclude Google Photos from Google Drive searches? | google drive;google photos;google drive search | Answer: Add the following to your searches in Google Drive to exclude photos and other images: -type:image Remarks: The references include the cheat sheet of Google Drive search operators. It appears in the section More search options > Advanced search in Drive. References: Search for your files - Google Drive Help
_webmaster.100626 | Will my website get penalized if I include my meta descriptions into the main body copy?For example my meta description is Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. If I include the exact same text into the main body copy, how will Google react to that? | Including meta descriptions content into the main body copy | seo;content;meta description | According to Google itself, this is what they have to say about the meta tag description, here.The description attribute within the tag is a good way to provide a concise, human-readable summary of each pages content. Google will sometimes use the meta description of a page in search results snippets, if we think it gives users a more accurate description than would be possible purely from the on-page content. Accurate meta descriptions can help improve your clickthrough;This means your description is used as an indication to Google of what is inside the page and also as the text used in the snippets, providing information to the user that might lead him to click in your page, if it appears in the Search Engine results page.When developing a website, make sure you write for people, not bots. My advice is to don't care if the description is the same as the content as long as it's the appropriate text for your users. |
_datascience.11163 | I am using kernel regression to build a prediction model. For the same, I am using np package. It is working fine, but I observed in multiple runs on the same data, it produces different results. Why it results in diverse outputs on the same data? Is there any way to select the best run of the model? Here is the minimal R code:library(np) bw.all = npregbw(formula=power ~ temperature + prevday1 + prevday2 + prev_instant1 + prev_instant2 + prev_2_hour, regtype=ll,bwmethod=cv.aic, data=new_tr_dat) model.np <- npreg(bws=bw.all) summary(model.np) I am using following data for my experiments: power temperature prevday1 prevday2 prev_instant1 prev_instant2 prev_2_hour1 220.59680 38 NA NA 648.3621 1392.2186 848.72992 584.06867 38 220.59680 NA 1012.6853 250.1150 434.71293 206.39849 40 584.06867 220.59680 169.9380 105.5796 127.72944 177.05559 39 206.39849 584.06867 167.6312 229.3927 249.98715 165.71996 41 177.05559 206.39849 214.8291 248.5378 247.02626 184.02724 44 165.71996 177.05559 256.9970 314.3742 485.51847 187.70557 43 184.02724 165.71996 125.6160 213.9993 174.08308 916.78484 43 187.70557 184.02724 668.2840 217.3451 423.82859 185.98017 42 916.78484 187.70557 295.7329 331.6580 1227.029310 490.42294 42 185.98017 916.78484 241.6590 249.0523 255.311011 703.92694 39 490.42294 185.98017 806.5259 1515.1619 1140.441512 2038.91747 37 703.92694 490.42294 232.5541 582.5105 632.711813 208.66049 26 2038.91747 703.92694 210.5353 217.5053 221.393814 281.89860 37 208.66049 2038.91747 796.4336 256.4664 603.078115 425.72868 32 281.89860 208.66049 250.6069 187.1751 260.057316 86.77193 36 425.72868 281.89860 174.1249 179.6437 164.435917 218.06322 39 86.77193 425.72868 223.6548 316.2230 322.853618 258.89159 43 218.06322 86.77193 233.4561 372.5123 256.858819 1436.19980 40 258.89159 218.06322 1266.2630 1387.2287 791.705620 261.68520 42 1436.19980 258.89159 278.3378 230.5614 262.008421 225.34517 44 261.68520 1436.19980 211.3332 147.6705 196.832822 852.68835 44 225.34517 
261.68520 1271.5826 1233.7158 991.783523 1729.79826 44 852.68835 225.34517 945.6528 298.0929 412.219924 464.58053 43 1729.79826 852.68835 182.6507 184.3031 203.539525 902.30950 45 464.58053 1729.79826 308.1398 1743.3495 642.456326 428.18792 45 902.30950 464.58053 205.1806 697.9208 1434.542527 1508.74739 43 428.18792 902.30950 1371.0550 2165.7173 1918.523628 355.01704 42 1508.74739 428.18792 1750.3907 1740.4654 1022.505629 3248.62618 43 355.01704 1508.74739 686.8528 360.0539 660.637830 1949.63937 44 3248.62618 355.01704 258.4627 217.2683 232.381831 725.25368 40 1949.63937 3248.62618 1406.3282 1714.6412 1375.282432 261.31252 32 725.25368 1949.63937 553.0443 275.6697 409.9598 | Kernel regression results in diverse outputs | r;predictive modeling;regression | You have a very small number of observations (32?) and a non-trivial number of predictors. It is known that the cross-validation function possesses multiple local minima/maxima. If you increase the number of multistarts to, say, nmulti=100 (add this option to your call to npregbw()), you ought to see that the same results occur on each invocation of the optimization process. Note you can do this all in the call to npreg() and skip the bandwidth call for convenience (npreg() will call npregbw() automatically but accept the arguments intended for npregbw()). Also, you will get the same results if you restart R each time you run the routine (seeds are set automatically to ensure this).model <- npreg(power ~ temperature + prevday1 + prevday2 + prev_instant1 + prev_instant2 + prev_2_hour, regtype=ll, bwmethod=cv.aic, nmulti=100, data=new_tr_dat)To see whether things are stable with respect to the number of multistarts, look at the value of the cross-validation function and summary provided. 
Also, you can look at partial regression plots along with resampled variability bounds via plot(): summary(model$bws); plot(model, common.scale=FALSE, plot.errors.method="bootstrap"). Also, I note that your predictor 'temperature' is discrete, so you might consider using + ordered(temperature) (i.e. use a discrete support kernel). Doing so reveals that there is little signal in this model, but the same holds for a simple parametric model (adjusted r-squared is negative): model.lm <- lm(model$bws$formula); summary(model.lm). Hope this helps!
_unix.210844 | How can we change the GECOS field for a user if we have hmcsuperadmin rights? | How to change GECOS field on HMC? | ibm hmc | chhmcusr -i "name=foobaruser,description=PROOOBA" (PROOOBA is the new GECOS field, foobaruser is the username)
_opensource.2416 | I am currently developing an online service that allows to share and download music from different source on the internet, this is a free service without subscription or registration, I found a perfect flag icons I will like to integrate into the website countries section, but it comes with Creative Commons license: Attribution-NonCommercial-NoDerivs 3.0 Unported.I'm seriously confused on whether to use the flags because I am going to be running advertisements from Google and some other places to help with the financing of the web hosting and the Terms are:You must give appropriate creditI really don't want to attribute any link on that page that has the logoYou may not use the material for commercial purposes.Does running of advert which might earn me some cash means I can't use it?Could you please provide me with advice, because I would not like to be in trouble with my work. | Using CC BY-NC-ND images on website with ads | attribution;commercial;non commercial;cc by nc nd | null |
_cogsci.4335 | There are obvious consequences that prevent people from behaving anti-socially or criminally. However there are many behaviours that are within the bounds of social norms, yet there seems to be some invisible force preventing people from letting go of inhibitions and acting spontaneously. What stops a person from doing something when they have the urge to do something spontaneous and random? I was in the process of answering another question when it was deleted; so decided to write my own question | What causes behavioural inhibition? | cognitive psychology;social psychology;developmental psychology;reinforcement learning | null |
_unix.197385 | I have a very simple bash script build.sh which defines - but doesn't invoke - a collection of functions, e.g.#! /bin/bashcreate_iptables_log() { # do stuff}apply_iptables_rules() { # do stuff}The script is then sourced source build.sh and the functions are intended to be run from the command prompt.How can I get a list of the functions that the script has defined?I am currently grepping the file, e.g.:grep -v '^#' build.sh | grep functionbut I wondered if there was a bash way to list the functions that are present in the bash environment. | Listing functions defined in a sourced script? | bash | The command typeset -f lists the function definitions. (It is supported at least by bash and ksh.) Use awk if you want to post-process the data, e.g. to extract only the function names. |
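To complement typeset -f, bash can also print just the function names rather than the full definitions (my examples, bash-specific, using function names in the style of the question's build.sh):

```shell
# List functions after sourcing-style definitions; run under bash because
# typeset -f and compgen are not POSIX sh features.
bash -c '
  create_iptables_log() { :; }
  apply_iptables_rules() { :; }
  echo "--- full definitions ---"; typeset -f
  echo "--- names only ---";       compgen -A function
'
```

Inside an interactive bash session, declare -F gives the same name-only list (one "declare -f name" line per function).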
_unix.231074 | I rm'd a file and now I see:$ ltotal 64-rw-rw-r-- 1 502 17229 Sep 17 16:42 page_object_methods.rbdrwxrwxr-x 7 502 238 Sep 18 18:41 ../-rw-rw-r-- 1 502 18437 Sep 18 18:41 new_page_object_methods.rb-rw-r--r-- 1 502 16384 Sep 18 18:42 .nfs0000000000b869e300000001drwxrwxr-x 5 502 170 Sep 21 13:48 ./13:48:11 *vagrant* ubuntu-14 selenium_rspec_conversionand if I try to remove it...$ rm .nfs0000000000b869e300000001rm: cannot remove .nfs0000000000b869e300000001: Device or resource busyWhat does this indicate? What should I do | removed a vagrant file and now I see .nfs0000000000b869e300000001? | files;nfs;rm;vagrant | A file can be deleted while it's open by a process. When this happens, the directory entry is deleted, but the file itself (the inode and the content) remain behind; the file is only really deleted when it has no more links and it is not open by any process. NFS is a stateless protocol: operations can be performed independently of previous operations. It's even possible for the server to reboot, and once it comes back online, the clients will continue accessing the files as before. In order for this to work, files have to be designated by their names, not by a handle obtained by opening the file (which the server would forget when it reboots). Put the two together: what happens when a file is opened by a client, and deleted? The file needs to keep having a name, so that the client that has it open can still access it. But when a file is deleted, it is expected that no file by that name exists afterwards. So NFS servers turn the deletion of an open file into a renaming: the file is renamed to .nfs (.nfs followed by a string of letters and digits). You can't delete these files (if you try, all that happens is that a new .nfs appears with a different suffix). They will eventually go away when the client that has the file open closes it. (If the client disappears before closing the file, it may take a while until the server notices.)
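The delete-while-open semantics the answer describes can be seen on any local filesystem (my demo; on NFS, the rm step is exactly where the server would create a .nfsXXXX placeholder instead):

```shell
# Demo: deleting a file that is still open. The directory entry disappears,
# but the data stays readable until the last file descriptor is closed.
tmp=$(mktemp)
echo "still readable" > "$tmp"
exec 3<"$tmp"    # keep the file open on fd 3
rm "$tmp"        # directory entry gone, inode lives on
cat <&3          # prints: still readable
exec 3<&-        # last reference closed; now the data is really freed
```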
_cs.12497 | Consider this small picture of a sunflower, and its histogram: What would the Fourier transform of the first picture look like? Is there any relationship between the histogram and the Fourier transform? | What is the difference between the Fourier Transform of an image and an image histogram? | image processing;graphics;fourier transform | null
_cs.64485 | Recently, I am reading the book [1]. I am trying to solve the following problem:1.3 Proving Euler's claim. Euler didn't actually prove that having vertices with even degree is sufficient for a connected graph to be Eulerian--he simply stated that it is obvious. This lack of rigor was common among 18th century mathematicians. The first real proof was given by Carl Hierholzer more than 100 years later. To reconstruct it, first show that if every vertex has even degree, we can cover the graph with a set of cycles such that every edge appears exactly once. Then consider combining cycles with moves like those in Figure 1.8. The following is my attempt to solve the problem:Let $G$ be a connected graph and every vertex of $G$ has even degree. Let $N_V$ be the number of vertices in $G$. Let $d_i$ be the degree of the $i$th vertex for $i = 1, ..., N_V$. Then$$ d_i = 2 n_i \tag{1} $$for some positive integer $n_i$, $i = 1, ..., N_V$. Therefore, by walking on the edges of $G$, we can walk to and leave the $i$th vertex for $n_i$ times, with each edge being walked on exactly once. Then I don't know how to continue...I also go to the Internet and find Carl Hierholzer's paper [2]. However, it is written neither in English nor Chinese (my mother language), so I can't read it.Note: It is not my homework. I am just interested in solving this problem.References[1] C. Moore and S. Mertens, The Nature of Computation, Oxford University Press, 2015.[2] C. Hierholzer. Ueber die Mglichkeit, einen Linienzug ohne Wiederholung und ohne Unterbrechung zu umfahren. Mathematische Annalen, 6:30-32, 1873. | Prove: A connected graph contains an Eulerian cycle iff every vertex has even degree | graph theory | null |
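One standard way to continue past where the question's attempt stalls (a sketch I am adding; the question has no accepted answer): turn the even-degree observation into a cycle cover, then splice the cycles together.

```latex
% Sketch of the missing step (my addition): from even degrees to a cycle cover.
% Start at any $v_0$ and greedily walk along unused edges. If the walk enters a
% vertex $u \ne v_0$, it has so far used an odd number of edges at $u$; since
% $\deg(u)$ is even, an unused edge out of $u$ remains, so the walk can only
% get stuck at $v_0$, closing a cycle $C_1$. Every vertex of $G - E(C_1)$
% still has even degree, so induction on $|E|$ covers $E(G)$ with
% edge-disjoint cycles $C_1,\dots,C_m$. Two cycles sharing a vertex $w$ splice
% into one:
(w, a_1, \dots, a_p, w)\ \text{and}\ (w, b_1, \dots, b_q, w)
\;\longrightarrow\;
(w, a_1, \dots, a_p, w, b_1, \dots, b_q, w).
% Since $G$ is connected, as long as more than one cycle remains, some cycle
% shares a vertex with the union of the others, so repeated splicing merges
% all $m$ cycles into a single Eulerian circuit.
```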
_cstheory.8523 | I just found the following sentence from the #P wiki page:Jerrum, Valiant, and Vazirani showed that every #P-complete problem either has an FPRAS, or is essentially impossible to approximate; if there is any polynomial-time algorithm which consistently produces an approximation of a #P-complete problem which is within a polynomial ratio in the size of the input of the exact answer, then that algorithm can be used to construct an FPRAS.[3]http://en.wikipedia.org/wiki/Sharp-P-completethe referece [3] is Mark R. Jerrum; Leslie G. Valiant; Vijay V. Vazirani (1986). Random Generation of Combinatorial Structures from a Uniform Distribution. Theoretical Computer Science (Elsevier) 32: 169188.I took a quick look at [3]. But it seems to me that the results of [3] do not contain anything similar to what is in the wiki page. Is there a mistake in the wiki page? Thanks. | FPRAS for #P-complete problems | ds.algorithms;counting complexity;randomized algorithms | The claim is not hard to see for specific problems though proving it for all #P-complete problems may require some more formalism. Suppose for some #P-complete problem one can obtain a $p(n)$-approximation. Given an instance $I$ make a new instance $I$ which contains $k$ copies of $I$. The number of solutions to $I$ is $a^k$ where $a$ is the number of solutions to $I$. Thus, choosing $k$ sufficiently large, even a polynomial-ratio approximation to $I'$ can be used to approximate $a$ pretty well. |
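The "choosing k sufficiently large" step in this answer can be made quantitative (my elaboration of the same argument, not an addition to it):

```latex
% Let $I'$ consist of $k$ copies of $I$, so $I'$ has $a^k$ solutions and size
% $N = k\,|I|$. Suppose the approximator returns $\hat{A}$ with
\frac{a^k}{p(N)} \;\le\; \hat{A} \;\le\; p(N)\,a^k .
% Taking $k$-th roots,
p(N)^{-1/k}\, a \;\le\; \hat{A}^{1/k} \;\le\; p(N)^{1/k}\, a ,
% and since $p$ is a polynomial, $p(N)^{1/k} = e^{\ln p(k|I|)/k} \le
% 1+\varepsilon$ once $k \ge c\,\varepsilon^{-1}\log(|I|/\varepsilon)$ for a
% suitable constant $c$ -- a choice of $k$ that keeps $|I'|$ polynomial in
% $|I|$ and $1/\varepsilon$, as the FPRAS definition requires.
```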
_codereview.94875 | I've been writing a program that accepts an integer as input and displays the number Pi rounded to that number. The only issue I see is the fact that the Math.Round method can only round up to 15 places and there's no try-catch for that ArgumentOutOfRange exception. I'm also not sure how safe it is letting your flow of execution rely on a try-catch statement.

class Program
{
    public static int counter = 0;

    static void Main(string[] args)
    {
        Console.WriteLine("Welcome to the Pi-Rounder! Find Pi to up to 15 decimal places!");
        Console.WriteLine("Please enter the number of decimal places you'd like to round to.");
        int roundTo;
        do
        {
            string digitAsString = Console.ReadLine();
            roundTo = ConversionLoop(digitAsString);
        } while (roundTo == 0 && counter != 5);

        if (counter == 5)
        {
            throw new FormatException();
        }
        else
        {
            double piRounded = Math.Round(Math.PI, roundTo);
            Console.WriteLine(piRounded);
            Console.ReadLine();
        }
    }

    static int ConversionLoop(string digitString)
    {
        try
        {
            int digit = Convert.ToInt32(digitString);
            return digit;
        }
        catch (FormatException)
        {
            counter++;
            Console.WriteLine("That was not a valid number. Please try again.");
            return 0;
        }
    }
} | Get an integer as input and display Pi rounded to that amount of decimal places | c#;error handling;formatting;floating point | There are several issues with this piece of code:

do
{
    string digitAsString = Console.ReadLine();
    roundTo = ConversionLoop(digitAsString);
} while (roundTo == 0 && counter != 5);

if (counter == 5)
{
    throw new FormatException();
}
else
{
    double piRounded = Math.Round(Math.PI, roundTo);
    Console.WriteLine(piRounded);
    Console.ReadLine();
}

Problems:

ConversionLoop is a meaningless name. The function parses a string to an integer, so a better name would be toInt.
The handling of invalid input and incrementing the counter are not visible here. At first look I didn't see how the counter can advance, and it seemed you don't tell the user about invalid results.
I had to look at the ConversionLoop to find out, but it was not logical to do so. The responsibility of getting valid input should not be split between two methods; it would be clearer to handle it in one place, and have all the elements of the logic easily visible.

If the user fails to enter valid input 5 times, the code throws new FormatException():

FormatException is not appropriate for this. The problem is not invalid format, but failure to enter valid input within a reasonable number of retries. It's a different kind of error, and should be captured by a different exception class.
Creating an exception without a text message explaining the problem makes debugging difficult.
After throwing an exception in the if branch, the program exits from the method, so you can simplify the else.

A bug

I think you have a bug: if the user enters 0 as input, the ConversionLoop method doesn't print an error and returns 0 normally, but the program will still wait for another try. Without a message, this will be confusing to the user. I doubt you intended it this way.

Suggested implementation

With the above suggestions, the code becomes:

class UserInputException : Exception
{
    public UserInputException(string message) : base(message) { }
}

public static int MAX_TRIES = 5;

static void Main(string[] args)
{
    Console.WriteLine("Welcome to the Pi-Rounder! Find Pi to up to 15 decimal places!");
    int roundTo = ReadIntegerInput();
    double piRounded = Math.Round(Math.PI, roundTo);
    Console.WriteLine(piRounded);
    Console.ReadLine();
}

static int ReadIntegerInput()
{
    Console.WriteLine("Please enter the number of decimal places you'd like to round to.");
    int counter = 0;
    while (true)
    {
        string digitAsString = Console.ReadLine();
        try
        {
            return Convert.ToInt32(digitAsString);
        }
        catch (FormatException)
        {
            if (++counter == MAX_TRIES)
            {
                throw new UserInputException("Too many invalid inputs. Goodbye.");
            }
            Console.WriteLine("That was not a valid number. Please try again.");
        }
    }
}
_datascience.15005 | I am working with a dataframe in R that is formatted like this sample:Countries <- c('USA','USA','Australia','Australia')Type <- c('a','b','a','b')X2014 <- c(10, -20, 30, -40)X2015 <- c(20, -40, 50, -10)X2016 <- c(15, -10, 10, -100)X2017 <- c(5, -5, 5, -10)df_sample <- data.frame(Countries, Type, X2014, X2015, X2016, X2017)The dataframe looks like this: Countries Type X2014 X2015 X2016 X20171 USA a 10 20 15 52 USA b -20 -40 -10 -53 Australia a 30 50 10 54 Australia b -40 -10 -100 -10I want to be able to create columns of year values for each type by each country, yielding something that looks like this: Countries Year a b 1 USA X2014 10 -20 2 USA X2015 20 -40 3 USA X2016 15 -10 4 USA X2017 5 -5 ...With recast I get this:recast(df_sample, Countries ~ Type) Countries a b1 Australia 4 42 USA 4 4With dcast I get this:dcast(df_sample, Countries ~ Type) Countries a b1 Australia 5 -102 USA 5 -5The dataset I'm working with has 44 years of data, so I'd like to be able to indicate all columns of yearly data without having to enter each column id manually into a cast formula. What is the difference between dcast and recast (i.e. what situations might they be best suited to), and is it possible to shape my data with them? | What is the difference between dcast and recast in R? 
| r;data cleaning | See ?reshape2::recast: The function conveniently wraps melting and (d)casting a data frame into a single step.library(reshape2)recast(df_sample, Countries+variable~Type, id.var=1:2)# Countries variable a b# 1 Australia X2014 30 -40# 2 Australia X2015 50 -10# 3 Australia X2016 10 -100# 4 Australia X2017 5 -10# 5 USA X2014 10 -20# 6 USA X2015 20 -40# 7 USA X2016 15 -10# 8 USA X2017 5 -5So, it's just a shortcut for these two steps:(tmp <- melt(df_sample, id.vars=1:2))# Countries Type variable value# 1 USA a X2014 10# 2 USA b X2014 -20# 3 Australia a X2014 30# 4 Australia b X2014 -40# 5 USA a X2015 20# ...dcast(tmp, Countries+variable~Type)# Countries variable a b# 1 Australia X2014 30 -40# 2 Australia X2015 50 -10# 3 Australia X2016 10 -100# 4 Australia X2017 5 -10# 5 USA X2014 10 -20# 6 USA X2015 20 -40# 7 USA X2016 15 -10# 8 USA X2017 5 -5 |
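For readers coming from Python, the same melt-then-cast pipeline can be mirrored with plain dictionaries (pandas' melt and pivot_table would be the direct analogues, but the sketch below sticks to the standard library; the data is the sample frame from the question).

```python
# The sample data frame from the question, one dict per row
rows = [
    {"Countries": "USA", "Type": "a", "X2014": 10, "X2015": 20, "X2016": 15, "X2017": 5},
    {"Countries": "USA", "Type": "b", "X2014": -20, "X2015": -40, "X2016": -10, "X2017": -5},
    {"Countries": "Australia", "Type": "a", "X2014": 30, "X2015": 50, "X2016": 10, "X2017": 5},
    {"Countries": "Australia", "Type": "b", "X2014": -40, "X2015": -10, "X2016": -100, "X2017": -10},
]
years = ["X2014", "X2015", "X2016", "X2017"]

# Step 1 (melt): one record per (country, type, year, value)
long_form = [(r["Countries"], r["Type"], y, r[y]) for r in rows for y in years]

# Step 2 (dcast with Countries+variable ~ Type): spread Type into columns
wide = {}
for country, typ, year, value in long_form:
    wide.setdefault((country, year), {})[typ] = value

print(wide[("USA", "X2014")])   # {'a': 10, 'b': -20}
```

Because the melt step enumerates the year columns programmatically, adding 44 years of data changes nothing but the `years` list, which is the same convenience the `id.var` argument buys in `recast`.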
_webapps.28560 | While trying to combine two Gmail filters, I noticed something strange: the filter from:[email protected] alone returns 180 results, the filter from:[email protected] alone returns 69 results, but when combined as from:[email protected] OR from:[email protected], the filter returns only 120 results. Why? | Why does using boolean operators return fewer results in Gmail filters? | gmail;gmail filters | null
_webapps.3586 | I know there is a plugin for Firefox but is there any other way to create a signature in Gmail that includes an image? | How to add an image to my Gmail signature? | gmail;google apps | Google recently introduced their rich text signatures feature for Gmail users (including Apps users): Official Gmail Blog.From your inbox screen, click on Settings in the upper-right corner. Scroll down to the signature box, which is now a rich text editor just like the regular email editor. You should be able to add images, colorful text, and all kinds of things that are totally unnecessary for a wholly-textual experience like email. |
_softwareengineering.83149 | I have recently started my journey to learn programming, and got myself a book on Objective-C. The thing is, though: I get stuck quite often trying to figure out how to solve the different exercises. I am quite new, currently on chapter 5, and trying to figure out how to do the different exercises. I get stuck and can't solve an exercise, so I look up the solution on the official forum and try to understand how they solved it. Then I keep thinking that the author's intention must be for me to be able to do the following exercises, so I get a little worried about not being able to do all the exercises. So I was wondering: is it bad learning behaviour to look up the solution online and try to understand the method behind the solution, or should I keep sticking with that method and learn it somehow sooner or later? What did you do when you were in the same learning process as me? | Should I be looking up the answers to programming exercises? | learning;self improvement | null
_unix.87792 | I searched on Google, but found nothing useful. I use SUSE now; how can I install LiS on my computer? I'm hoping for a download link. | How can I install STREAMS in Linux? | linux;streams;fast streams | From the wikipedia article on STREAMS:

excerpt

The Linux kernel does not include STREAMS functionality. The kernel developers consider it technically inadequate, and the compatibility layers in Linux for other operating systems convert STREAMS operations into sockets as early as possible. [14]
LiS (Linux STREAMS) adds STREAMS functionality on Linux. [15][16]
OpenSS7 offers Fast STREAMS on Linux. [17]

OpenSS7

If you're on a Red Hat based distro, OpenSS7 provides RPMs for STREAMS so it should be trivial to at least install it.

LiS

I also came across this URL, titled: Introduction to LiS. This page seems to be what you're looking for. It includes links to download LiS as well as installation instructions.

Downloads LiS
Installation Instructions

Seems pretty straightforward to install it. You'll need tools such as gcc, autotest, make, etc. installed. Depending on your distro these should be easy enough to get. The steps to install it:

$ cd /usr/src/LiS-2.16 (Or wherever you installed the files)
$ make
$ make install

Looks like these steps assume you're root when doing the installation. I highly suggest you read the Installation of LiS guide. It covers installation and configuration of the software.

32-bit vs. 64-bit

As of version 2.19.2 there isn't any support for 64-bit. So just something to be aware of.
https://www.dialogic.com/den/forums/p/9246/34706.aspx

UPDATE #1

In digging more into the gcom.com website it appears that they've discontinued support for LiS.

excerpt

The LiS-2.18 version described by this documentation is the final version of LiS to be published on the Gcom FTP site. It is possible that others in the LiS community may organize a maintenance method for this package.
To be apprised of developments in this area subscribe to the LiS discussion group and watch for announcements. Gcom no longer supports LiS for use with anything other than Gcom products. Please consult your software/hardware vendor for LiS support. If you are interested in complete protocol solutions for Linux, please contact [email protected].

Further digging led to this URL which has LiS 2.19.0. I was able to download it successfully and the tarball appears to be intact.

NOTE: The above URL was ferreted out from this IBM technote, titled: Where to get LiS (Linux Streams).

Linux Fast-STREAMS project?

I found this note on the openss7 site, on a page titled: Linux STREAMS (LiS) Installation and Reference Manual.

excerpt

Note: The original LiS package from GCOM is no longer actively maintained by either GCOM or the OpenSS7 Project: use the OpenSS7 Linux Fast-STREAMS package http://www.openss7.org/STREAMS.html instead.

Of course, the URL above is broken, but I was able to find this Fast-STREAMS project page on the openss7 project page here, titled: Linux Fast-STREAMS. Continuing my expedition on the openss7 website I found this page, titled: Linux Fast-STREAMS (streams) Release. This page included both links to the deprecated LiS project as well as the new project Fast-STREAMS, which they appear to be just calling streams. This link to the latest version, 0.9.2.4 of streams, includes tarballs, source RPMS, and binary RPMS. This page seems to be what you're looking for; though the packages are provided for CentOS 5.2, they might be rebuilt for CentOS 6.x.
_codereview.68861 | Is there such a thing as "they have too little in common to use a superclass"? In my case, doing iOS programming, in a project, every screen has the same background. So I created a superclass which only has:

.h

@property (nonatomic) IBOutlet UIImageView *backgroundImageView;

.m

- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *backgroundImage = ...;
    [self.backgroundImageView setImage:backgroundImage];
}

Is this too little to use a superclass for? If I didn't add it here, I'd have about 10 view controllers where I would have to do this. (Obviously code duplication.) | Setting a common background image on a dozen screens | objective c;inheritance | The short answer is: no, this is not too little code to extract into a superclass. If you intend for this bit of code to always be the same throughout every use of this class and its subclasses, then it's absolutely appropriate to put the code in the superclass. The primary point here is that if we ever decide to change something about this code, we can change it in one place and have the change applied across all instances.

With that said, however, I want to point out some problems.

As a start, we have no guarantee that this code will actually run. If our subclass implements viewDidLoad and fails to call to super, our code doesn't run. There's still no way to make this guarantee, but there is a way to make a reminder for ourselves.

As this StackOverflow answer outlines, the NS_REQUIRES_SUPER directive will throw a warning when subclasses implement the method without calling the super method. So we should add the following to our .h file:

- (void)viewDidLoad NS_REQUIRES_SUPER;

Now, a subclass of this class which implements viewDidLoad will throw a compiler warning until [super viewDidLoad]; is added to its viewDidLoad implementation.

There are still problems, however. For starters, as a rule, I don't like IBOutlets in my header file. There's absolutely no reason for these to be public.
But in this particular case, we shouldn't have an IBOutlet at all. Using an image view set up in interface builder and requiring it be hooked up to this outlet will supremely complicate things.

First, not only do we have to remember to call to super, which I already addressed, but we also have to remember to add the image view. And we have to remember to hook the image view up as an outlet. Trouble is, though, we can't (or shouldn't) hook it up to our current class. We should open this, the superclass, in assistant editor and hook it up to that. Problem is, assistant editor won't automatically load superclasses in the assistant window... so that's kind of a pain.

I'm a huge proponent of using interface builder. It does a lot to vastly simplify our written code, but in this case, it doesn't suffice for producing simplified results.

We should remove this outlet altogether and not worry in the slightest about relying on the subclass setting up the image view in interface builder and hooking it up properly. The whole point of subclassing is minimizing redundancy, and so far, we're only halfway there.

(Moreover, what guarantee do we have that every subclass of this class will even use interface builder at all? View controllers don't require a corresponding interface builder representation--the view can be set up entirely programmatically.)

We need to change our viewDidLoad code to manually create an image view with an image and load it as the background.

The number one problem we'll run into in manually creating and adding the image view to the view controller is getting it appropriately as the background. We can't be sure whether the subclass will do its setup first and then call super, or call super and then do its setup (in case you're wondering: in viewDidLoad, it's appropriate to call to super FIRST, then add your implementation).
Moreover, the view controller will almost certainly have views added on it in interface builder if the developer is using interface builder. So, we need to add the view as the bottom-most view in the view controller's view's subviews. This might be a good implementation:

- (void)viewDidLoad {
    [super viewDidLoad];

    // Set up image view
    UIImage *backgroundImage = [UIImage imageNamed:@"background"];
    UIImageView *backgroundView = [[UIImageView alloc] initWithImage:backgroundImage];
    [backgroundView setTranslatesAutoresizingMaskIntoConstraints:NO];
    // aspect fill may be preferred
    backgroundView.contentMode = UIViewContentModeScaleAspectFit;

    // Add background view as back-most view
    [self.view insertSubview:backgroundView atIndex:0];

    // Set up auto layout constraints so this view controller is applicable
    // on any device in any rotation
    NSDictionary *views = NSDictionaryOfVariableBindings(backgroundView);
    NSArray *verticalConstraints = [NSLayoutConstraint constraintsWithVisualFormat:@"V:|-0-[backgroundView]-0-|"
                                                                           options:0
                                                                           metrics:nil
                                                                             views:views];
    NSArray *horizontalConstraints = [NSLayoutConstraint constraintsWithVisualFormat:@"H:|-0-[backgroundView]-0-|"
                                                                             options:0
                                                                             metrics:nil
                                                                               views:views];
    [self.view addConstraints:verticalConstraints];
    [self.view addConstraints:horizontalConstraints];
}

I'm not particularly a fan of setting up UI in code (especially auto layout). I much, much prefer interface builder. But in some cases, it is absolutely necessary to get it right.

This auto layout code may seem bulky and unnecessary if your current app is just for iPads and just for a single orientation. But as soon as you want to subclass this view controller into an app that rotates or is made for iPhones, or is a universal app, you'll be glad to have this auto layout code.

As one final comment, I might make a UIImage property in the .h file so that the background image could be changed.
_cstheory.25213 | small world graphs (eg Watts-Strogatz model & others) and scale free graphs are a relatively recently discovered graph type via mainly empirical analysis of large real-world graphs (eg via Big Data techniques/ datamining etc). they have since been found to be quite ubiquitous/ longstanding in many diverse graphs related to nature and human constructions (eg biology/genes, social networks, man-made networks eg WWW/ internet/ telecommunication/ electrical grids, airport connectivity, etc). in contrast expander graphs are far older and were invented mainly as a theoretical device in math/(T)CS however have since found very broad/ widespread/ key application. am looking for eg refs/ surveys/ overviews on their interrelation.what are the relations between the following (eg is there any overlap for some parameters)small world networksscale free graphsexpander graphs | any relation/ overlap between small world graphs, scale free graphs, and expander graphs? | reference request;graph theory;big picture;application of theory | There are lots of overlaps between small world and scale-free, but I think much less so between those two and expanders.The terms small world and scale-free are often used informally, but formal definitions are often along the lines of:Small-world means short average (or maximum) path length (typically $O(\log n)$, with $n$ vertices) and highly clustered (meaning that for any vertex $v$, the fraction of pairs of neighbors of $v$ which are adjacent is high)Scale-free is often taken to mean that the degree distribution follows some variant of a power-law (e.g. power-law with cutoff, etc.), but more generally/informally is used to mean that the degree distribution is long-tailed, in contrast to, say, Erdos-Renyi random graphs which have a degree distribution that is exponentially concentrated around its mean.Expander, of course, has a formal definition that is almost 100% standardized. 
Expanders by their nature have logarithmic diameter, similar to small-world networks. Beyond that, however, as far as I know there is little overlap between the concepts. In practice, expanders are often bounded-degree or even regular of bounded degree, whereas real-world graphs typically have a long-tailed degree distribution, with a small (but surprisingly large - e.g. not exponentially small) number of high-degree hubs (as they are usually called). Furthermore, scale-free graphs (almost by definition, depending on your definition) have a large number of vertices of very low degree - in most real-world graphs there are a large number of vertices of degree 1 or 2. This makes real-world graphs very unlike expanders, in that it is very easy to disconnect real-world graphs by removal of targeted edges, whereas expanders are by their nature highly connected. (Real-world graphs often also have the property that removal of random edges is very bad at disconnecting them.)I'm sure there is something to be said about the use of spectral techniques for things like community detection, etc. in real-world networks, and from that viewpoint there may or may not be a little more overlap with expanders, but I'm not an expert in that area (maybe we can get Mark Newman to join cstheory.SE to comment...) |
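The "highly clustered" half of the small-world definition above is easy to verify on the beta = 0 Watts-Strogatz substrate, a ring lattice. The sketch below (plain Python; n = 20 is an arbitrary choice) computes the local clustering coefficient straight from the definition in the answer; for a ring where each vertex is joined to its 2 nearest neighbors on each side, every vertex's coefficient comes out to exactly 1/2, far above what a sparse Erdos-Renyi graph would give.

```python
def ring_lattice(n, k=2):
    """Each vertex i is adjacent to its k nearest neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[i].add((i - d) % n)
    return adj

def clustering(adj, v):
    """Fraction of pairs of v's neighbors that are themselves adjacent."""
    nbrs = list(adj[v])
    pairs = [(a, b) for i, a in enumerate(nbrs) for b in nbrs[i + 1:]]
    closed = sum(1 for a, b in pairs if b in adj[a])
    return closed / len(pairs)

adj = ring_lattice(20, k=2)
coeffs = [clustering(adj, v) for v in adj]
print(sum(coeffs) / len(coeffs))   # 0.5, identically at every vertex
```

What the lattice lacks is the other half of the definition: its diameter grows linearly in n, and it is the sprinkling of random rewired shortcuts in the Watts-Strogatz model that brings the path lengths down to logarithmic while leaving the clustering high.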
_softwareengineering.333980 | I'm new to compiled languages. I'm learning C. I'm used to coding in Python. I was wondering if there was any equivalent, or replacement method, in compiled languages for functions able to create a function and return it. In Python one can write:

def genAdder(p):
    def adder(n):
        return n + p
    return adder

addFive = genAdder(5)
print(addFive(7)) # prints 12

Is it possible to do such a thing in C, or C++, or any other compiled language? If yes, does it involve code generation during execution? If no, are there replacements for situations where this is useful? (It can be used for performance purposes, to avoid re-doing computations.) | Function creating function, compiled languages equivalent | c# | Yes, many compiled languages support higher order functions. No, they rarely if ever do runtime code generation. In C, "function pointer" is the appropriate search term. C++ also supports function pointers, though there are a number of alternative approaches (and libraries) to support similar behavior. They generally aren't as clean as other languages' support for this sort of thing, due to their history. Java and C# especially handle this better. And of course actual functional languages have great support for this, and are often compiled. As for using p in the inner definition, that is a closure. They are well studied, and well known. When I used C++, boost::bind supplied a mechanism to do something similar. In these languages, you generally need to make a new object/class to hold the stored variable and the sub-function.
_unix.44932 | Is there a fast way (keyboard shortcut) to open a terminal emulator (in my case urxvt) in the same directory as the file in the current emacs buffer? | Open terminal from emacs | emacs;keyboard shortcuts | The combination M-! allows you to launch shell commands. You could use it to launch a separate urxvt:

M-! urxvt RET

I just tried it with xterm (I don't have urxvt) and it did open in the same directory as the file in the buffer. If you want to define a shortcut, add something similar to your init file:

(global-set-key (kbd "C-c s") (kbd "M-! urxvt RET"))

In my case I bound the shortcut to: Ctrl+C - S.
_scicomp.25927 | I'm trying to make a CFD model where I can place a source and a sink anywhere in a grid and get the fluid flow rate across each cell boundary between those locations. I'm starting simple with a 3x3 grid and solving continuity for each grid element, but that leaves me a few equations short (9 equations, 12 unknowns). In general I'd like to be able to raise that to a 64x64 grid with multiple flow inputs. Is there a way to constrain the system such that I can solve it with continuity equations or is this more difficult than I had initially thought?Below is a simple diagram of my 3x3 grid (o = source, x = drain):-------|o| | |-------| | | |-------| | |x|-------within the above grid is 9 cells and 12 cell boundaries. How do I fully constrain the system to solve for the flow?edit: doing some reading on CFD, it would seem that storing the velocity for the CENTRE of each cell is advantageous and the boundary velocities would be calculated based on the centre values for surrounding cells... Although I still don't entirely know how my equations would look with that. | Simple methods for solving 2D steady incompressible flow? | fluid dynamics;constraints;incompressible | null |
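One standard way to close the system sketched in the question's edit is to store a single pressure-like potential at each cell centre, define each face flux as the difference of the two adjacent potentials, and let continuity per cell give exactly one equation per unknown (nine equations, nine unknowns on the 3x3 grid, up to an additive constant that one anchored cell removes). The sketch below is a minimal plain-Python Gauss-Seidel solve of that discretisation for the question's layout (source top-left, sink bottom-right, solid walls elsewhere); it is an illustrative toy, not a full CFD method.

```python
N, SWEEPS = 3, 2000
source = {(0, 0): 1.0, (2, 2): -1.0}   # net injection per cell; must sum to zero

phi = [[0.0] * N for _ in range(N)]    # one potential unknown per cell centre

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < N and 0 <= j + dj < N:
            yield i + di, j + dj       # missing neighbors act as no-flux walls

for _ in range(SWEEPS):                # Gauss-Seidel sweeps
    for i in range(N):
        for j in range(N):
            if (i, j) == (0, 0):
                continue               # anchor one cell: removes the free additive constant
            nbrs = list(neighbors(i, j))
            # continuity for this cell: sum of outgoing face fluxes = local source
            phi[i][j] = (sum(phi[a][b] for a, b in nbrs)
                         + source.get((i, j), 0.0)) / len(nbrs)

# The flux across the face from (i, j) to (a, b) is phi[i][j] - phi[a][b];
# check that continuity holds in every cell:
for i in range(N):
    for j in range(N):
        div = sum(phi[i][j] - phi[a][b] for a, b in neighbors(i, j))
        print((i, j), round(div, 6))   # +1 at the source, -1 at the sink, else 0
```

The same loop structure works unchanged on a 64x64 grid with several sources and sinks, as long as the source terms sum to zero (otherwise no steady incompressible solution exists).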
_cs.79634 | This question is referring to the Pumping Lemma for CFLs, namely: If $L$ is a CFL, there is a pumping length $p$ such that any string $z \in L$ of length $\geq p$ can be written as $z = uxwyv$, where $|xy| \geq 1$, $|xwy| \leq p$, and $\forall i \geq 0, ux^iwy^iv \in L$. Let's say we have a language consisting of $0$s and $1$s and we pick $x \in 0^*$ and $y \in 0^* \Rightarrow x = 0^k$ and $y = 0^l$, where $k+l \geq 1$. Does this mean that $k$ can be $0$ if $l \geq 1$? Does that make sense? (Why did we choose a $k$-length $x$ in the first place, then, if it's going to end up being null?) PS: I know I don't get to pick the decomposition. I'm talking about a particular case. | Interpreting the way we choose partitions in the pumping lemma for CFLs | formal languages;context free;pumping lemma | null
_webmaster.44721 | We want to increase our rankings.We have relevant quality text content ... However we need the page to visually appear less texty.We are therefore considering a design with very little text immediately visible. Learn more icons will offer this additional content as popovers or tooltips.The content will be in divs on the page.Will this content be considered as content for an improved page rank? | Does text in tooltips (or hide/show divs) count for positive SEO? | seo;pagerank | null |
_softwareengineering.338538 | Irony includes two phases. In the first phase it creates a parse tree. After that, it is optional to create an AST. What are the differences between the parse tree and the AST? What is the reason to implement it that way? | DotNet Irony Understanding | c# | Parse Trees are also sometimes referred to as Concrete Syntax Trees to distinguish them from Abstract Syntax Trees, which maybe already tells you what they are all about. Basically, a parse tree is still dependent on the actual concrete syntax used in the source code. E.g. if a language has two ways of defining a function that are semantically equivalent, then the parse tree might still tell you which of the ways was used. The parse tree might also still contain artifacts of the specific parser that was used, e.g. whether the parser supports left-recursion or doesn't, etc. The AST, OTOH, should ideally be independent of any particular concrete syntax that was used in the source code and of the particular parser that was used. In theory, an AST should be abstract enough that it can even serve as an interface between the parser and the rest of the system; IOW, in theory, I should be able to swap in a different parser which generates the same AST without the rest of the system noticing. (In practice, that is seldom possible, though.)
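The distinction is easy to see on a toy expression. Below is a plain-Python illustration (not Irony-specific; the grammar and tuple encoding are arbitrary choices) of the two trees for 2+3*4: the parse tree keeps the grammar's nonterminals and literal tokens, while the AST keeps only operators and operands.

```python
# Concrete parse tree for "2+3*4" under a toy grammar:
#   Expr -> Expr '+' Term ; Term -> Term '*' Factor ; Factor -> NUMBER
parse_tree = ("Expr",
              ("Expr", ("Term", ("Factor", "2"))),
              "+",
              ("Term", ("Term", ("Factor", "3")), "*", ("Factor", "4")))

def to_ast(node):
    """Collapse grammar bookkeeping: keep only operators and numbers."""
    if isinstance(node, str):
        return int(node) if node.isdigit() else node
    children = [to_ast(c) for c in node[1:]]   # drop the nonterminal label
    return children[0] if len(children) == 1 else (children[1], children[0], children[2])

def evaluate(ast):
    if isinstance(ast, int):
        return ast
    op, left, right = ast
    return evaluate(left) + evaluate(right) if op == "+" else evaluate(left) * evaluate(right)

ast = to_ast(parse_tree)
print(ast)            # ('+', 2, ('*', 3, 4))
print(evaluate(ast))  # 14
```

Note that a second grammar with different chain rules could yield a different parse tree for the same input, yet collapse to the identical AST, which is why later compiler phases prefer to consume the AST.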
_codereview.25640 | So for practice with Javascript, I wrote this lightweight widget factory which is similar in functionality (okay, that's a bit of a stretch) to jQuery's UI Widget factory. I was hoping I could get some pointers as far as idiom usage and also if there are any glaring problems with the below code. var _CreateWidget = function(namespace, implementation) { _Widget = { _create: function() {}, _destroy: function() {}, _set: function() {}, _get: function() {}, options: {}, context: undefined, namespace: namespace, apply: function(element) { this.context = element; element[namespace] = this; this._create(); } }; for (var item in implementation) { if (implementation.hasOwnProperty(item)) { _Widget[item] = implementation[item]; } } return _Widget;}Widget = function(namespace, implementation) { return function(element) { var instance = _CreateWidget(namespace, implementation); instance.apply(element); return instance };}An example widget would be defined as follows://Example widgetPage = Widget(page, { _create: function() { //initialization code here }, _destroy: function() { }, _set: function() { }, _get: function() { }, request: function() { //custom function }, options: { url: '' }});And the widget would be applied to an element like so:myElement = document.getElementById(myUniqueId);appliedPageReference = Page(myElement);myElement.page.request(); //One way to call the custom request() function;appliedPageReference.request(); //Another way to call the custom request() function; | Roll your own widget factory | javascript;jquery ui | Although DOM elements can be used like normal JavaScript objects, I'd avoid attaching JavaScript other than handlers. The problem is when they form circular references. In browser garbage collectors, they won't collect garbage if something still references them. 
If you accidentally form circular references, this will lead to memory leaks (unfreed memory)jQuery avoids this by creating objects in an internal cache and assigns an ID to the element. That way, you are assigning a primitive to the element, not an object. This is how jQuery collects and manages event handlers, data attributes, and others.So a general tip is: What is from JavaScript, stays in JavaScript. (And not cross over to the DOM)In jQuery's case, the functions are not actually attached to the element. That's the purpose of the jQuery object. In a gist, a jQuery object is just an array-like object (let's call it pseudo-array from this point on), that contains DOM elements. The prototype of the jQuery object is where the functions live. These functions operate on each value in the collection.Executing a function in a jQuery object is not like this:someElement.doSomething();But rather, something like this:collectionOfStuff.forEach(function(DOMElement,i,arr){ //do something for each in the collection});Widget in your code is some function that manufactures widget templates which are used to bind to your elements. Rather than having Widget return the template, why not make Widget your namespace. You can then create a function that attaches to the namespace. 
It would be synonymous to doing jQuery.fn.extend(function(){...}).

//define widget
Widget.defineWidget('Page', function(){
  //local stuff, aka private
  var privateVar = 'foo';
  function privateFn(){...}

  //anything attached to `this` is public
  //we will execute this function, providing an object as `this`
  this.publicVar = 'bar';
  this.publicFn = function(){...};
});

//access widget
var reference = Widget.Page(bindingTarget);

defineWidget defines a creator function on the namespace that uses the widget definition to build instances, something like:

Widget.defineWidget = function(name, fn){
  //store
  widgetCache[name] = fn;
  //attach to namespace
  Widget[name] = function(bindingTarget){
    //BaseClass could be some constructor with prototype containing
    //all functions that widgets should have
    var instance = new BaseClass();
    //run the instance through the definition to attach the internals,
    //then return the instance itself (not fn's return value)
    fn.call(instance, bindingTarget);
    return instance;
  };
};
_webapps.14525 | I like to store my PDF files in the DropBox cloud. I like to access these PDFs from my PCs, laptop & Android phone. When I open a PDF it always starts on page 1, no matter what page I was on previously. Is there a way to have DropBox remember the page I was on? | PDF with DropBox, remember the Last Page? | dropbox;pdf;online storage | null
_cs.6410 | Consider the recurrence $\qquad\displaystyle T(n) = \sqrt{n} \cdot T\bigl(\sqrt{n}\bigr) + c\,n$ for $n \gt 2$ with some positive constant $c$, and $T(2) = 1$.I know the Master theorem for solving recurrences, but I'm not sure as to how we could solve this relation using it. How do you approach the square root parameter? | Solving a recurrence relation with n as parameter | asymptotics;recurrence relation;master theorem | We will use Raphael's suggestion and unfold the recurrence. In the following, all logarithms are base 2. We get$$\begin{align*}T(n) &= n^{1/2} T(n^{1/2}) + cn \\&= n^{3/4} T(n^{1/4}) + n^{1/2} c n^{1/2} + cn\\&= n^{7/8} T(n^{1/8}) + n^{3/4} c n^{1/4} + 2cn\\&= n^{15/16} T(n^{1/16}) + n^{7/8} c n^{1/8} + 3cn \\& \ldots \\&= \frac{n}{2} T(2) + c n \beta(n) \end{align*}.$$where $\beta(n)$ is how many times you have to take the square root to start with n, and reach 2. It turns out that $\beta(n) = \log \log n$. How can you see that? Consider:$$\begin{align*}n &= 2^{\log n}\\n^{1/2} &= 2^{\frac{1}{2} \log n} \\n^{1/4} &= 2^{\frac{1}{4} \log n} \\\ldots\end{align*}$$So the number of times you need to take the square root in order to reach 2 is the solution to $\frac{1}{2^t} \log n \approx 1$, which is $\log \log n$. So the solution to the recursion is $c n \log \log n + \frac{1}{2}n$. To make this absolutely rigorous, we should use the substitution method and be very careful about how things get rounded off. When I have time, I will try to add this calculation to my answer. |
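The closed form can be sanity-checked numerically: for n of the special shape $2^{2^t}$ the recurrence can be evaluated exactly, and (taking c = 1) it matches $c\,n \log \log n + \frac{1}{2}n$ term for term. A small Python check, assuming integer n of that shape:

```python
import math

def T(n, c=1):
    """Evaluate the recurrence exactly for n of the form 2**(2**t)."""
    if n == 2:
        return 1
    r = math.isqrt(n)
    assert r * r == n, "n must be of the form 2**(2**t)"
    return r * T(r, c) + c * n

for t in range(1, 5):
    n = 2 ** (2 ** t)
    closed_form = n * math.log2(math.log2(n)) + n / 2
    print(n, T(n), closed_form)   # the two values agree for every n of this shape
```

Here $\log_2 \log_2 n = t$ exactly, so e.g. T(65536) = 65536 * 4 + 32768 = 294912, matching the unfolding above.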
_unix.31154 | I'm a C/C++ professional programmer who makes lots of spelling mistakes in comments. I want to configure vim such that the spell-checker only looks for misspelled words within comments. If necessary I'm willing to add special symbols around the comment that vim can look for to know where to check, such as: int main(){ /*<--C_S This is comment line in main function .. C_S-->*/ }If the plugin can work without the C_S symbols that'd be even better. I want the spell-checker to highlight any spelling mistakes it finds within comments. Does this already exist? Or is it easy to write myself? | Spell check comments in vim | vim;spell checking | null |
_unix.369170 | I have a USB device and I'm trying to create it in a way that it has 2 partitions: one for a live Linux disc and the other for document storage. I created the partitions using gparted and set a boot flag on the one I want to use as the live disc. Now I have a USB like this:

Disk /dev/sdc: 14.6 GiB, 15623782400 bytes, 30515200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc3072e18

Device Boot Start End Sectors Size Id Type
/dev/sdc1 8439808 30515199 22075392 10.5G 83 Linux
/dev/sdc2 * 51200 8439807 8388608 4G b W95 FAT32

I then used dd to flash an Ubuntu ISO to /dev/sdc2:

sudo dd if=/dev/shm/ubuntu-17.04-desktop-amd64.iso of=/dev/sdc2 bs=4M

When the disc is flashed onto the USB drive, I try to boot from my laptop and it shows Operating system not found. When I try to use qemu/kvm, it shows a kernel panic like this:

How would I be able to do this properly? | Use a partitioned live usb | partition;live usb;livecd | You received the Operating system not found error because by writing the ISO to a disk partition rather than the disk as a whole, you inadvertently did not write a boot loader to the disk's MBR gap. And... apparently the PC doesn't care about the boot flag. I see two possible solutions, but I must say, I'm really just pulling this out of my [censored].

Partition the disk after dd'ing the ISO

The best part of this solution is that you'll know whether it's feasible real quick.

1. dd the ISO to the entire USB disk.
2. Check the USB disk for partitions using a partitioning tool. If you see partitions, you can probably add one for your encrypted volume.

Add a bootloader to chainload into the partition

The idea here is to add a boot loader to the USB disk's MBR gap and have it chainload whatever boot loader is in the partition. Chainloading basically delegates the boot loader's functionality to another bootloader. 
I'll direct you to Gentoo's documentation on the topic, considering it's quite thorough.

Other

If the above fail, you can try building your own Ubuntu ISO, adjusting how it boots.
_softwareengineering.342388 | I am building a series of web apps connected to a single point of authentication. Basically, a user tries to access a site, if not authenticated they are redirected to the central auth system's login page. Once they successfully login, they are redirected to their app. From then on, if they access any other app they would automatically be signed on.A couple additional details: 1) the apps will all be running under the same domain, so I can use domain cookies, which makes things easier; 2) users can be given access to some apps and not others, so that needs to be taken into account; 3) user needs to be able to retrieve permissions specific to each app.I have implemented something, but am not 100% happy with it. Right now, this is what I have: 1) web app checks for existence of a session (specific to the app) and a cookie that's a JWT token that was sent from the centralized auth system; 2) if cookie doesn't exist, I redirect to the login page on the auth system; 3) once user logs in, they are redirected to their app passing in a JWT token; 4) the app verifies the token via a REST API call to the auth system (making this REST API calls relies on a separate access token), if it's valid, then the JWT token gets saved as a cookie and a session is initiated with the user logged in; 5) if the app session expires, it checks if the cookie exists and if it does then app does the same as step #4, verifies the token and reinitiates the session; 6) on logout, the system just deletes the cookie, ensuring user is logged out of all apps; 7) if the token expires, the app uses the expired token to request a new one, where the token signature and other claims are validated before issuing a new one, the only thing that doesn't get validated is the expiration claim.To clarify, the existence of a session specific to the app is used so that you don't have to keep making REST API calls constantly to verify the token. 
But given that the token was verified once, would it be safe to just use that cookie as the indicator that there's a valid session?One thing that I'm unsure about is that my token needs to have something that indicates what app it is for because other REST API calls can be made using the token to get some resources that are app specific. But if I obtain a token for app1 and then log into app2, app2 will be relying on the cookie generated by app2. So seems like I'd want to have two tokens, one that can be stored as a domain cookie to indicate the user is authenticated, and another one that would actually be app specific and can be used to make REST API calls for other app-specific resources.Am I over-complicating this, or does my line of thinking match what others are seeing/doing out there? Or is there a more elegant way of doing this? I've thought of implementing something like Open ID, but seems a bit like overkill for our needs. I want this to be as simple as can be so that I can document the process and other developers teams can develop apps that plug into the auth system without needing too much assistance. | Implementation of single authentication point | authentication | null |
_codereview.164119 | The goal is to create a Singleton and pass it a parameter that is required for the construction and initialization of the class, then preventing any changes to be made to the passed parameter (just like a readonly field being set by an argument passed to a constructor).For instance:SocketsHostsDatabasesRepositories(Any instance that requires at least one argument in order to construct)I am having a tough time coming to terms with this design, and I am quite certain that there is a pitfall or a loose-end to this implementation of a Singleton combined with a Builder Pattern, to mimic readonly fields set by constructor arguments.Example ImplementationIn this example, I am trying to get a Singleton of Host, where I would like the enum EnvironmentTypes to be treated like a readonly field usually found in classes that have parameters passed into the constructor.EnvironmentTypes Enumpublic enum EnvironmentTypes{ Production, Staging, Development}IHost Interfacepublic interface IHost{ string Name { get; set; }}Host Classpublic sealed class Host : IHost{ #region Singleton private static readonly Lazy<Host> _instance = new Lazy<Host>(() => new Host()); public static Host Instance { get { return _instance.Value; } } #endregion private static bool _isInstantiated; private static EnvironmentTypes _environment; private string _name; internal static EnvironmentTypes Environment { get { return _environment; } internal set { if (_isInstantiated) throw new InvalidOperationException(nameof(_environment) + cannot be set once an instance is created.); _environment = value; } } public string Name { get { return _name; } set { _name = value; } } static Host() { _isInstantiated = false; _environment = EnvironmentTypes.Production; } private Host() { _isInstantiated = true; _name = My Server; }}HostBuilder Classpublic sealed class HostBuilder{ private readonly EnvironmentTypes _environment; private string _name; public HostBuilder(EnvironmentTypes environment) { 
_environment = environment; } public HostBuilder SetName(string name) { _name = name; return this; } public IHost Build() { Host.Environment = _environment; Host host = Host.Instance; host.Name = _name; return host; }}Implementationclass Foo{ void UsingTheBuilder() { // probably over-kill HostBuilder builder = new HostBuilder(EnvironmentTypes.Development) .SetName(Bingo); IHost host = builder.Build(); //host.Environment is not available, great! host.Name = Renamed Server; // works as expected. } void ManualConfiguration() { Host.Environment = EnvironmentTypes.Development; Host host = Host.Instance; host.Name = Bingo; Host.Environment = EnvironmentTypes.Staging; // throws! Hoped to prevent // the developer from doing this. }}Random Notes: It would be great if I could restrict access from getting to the static properties of Host, so that I can totally avoid anyone trying to set the Host.Environment static property and throwing an exception -- note how the HostBuilder shields that from happening as it is a readonly field. | Singleton with readonly parameters | c# | Disclaimer: I am biased towards singletons. I think it is an anti-pattern, that has no place in modern C#.First, here is a great article on how singletons become a disaster when you try to unit test a code, that heavily relies on them. Your case is even more complex, because you also have to initialize additional parameters. And you can't change those. So you can't test Host with different environments unless you try to bypass your own exception with reflection.I would just register non-static Host class as singleton inside IoC container, and be done with it. 
It will solve all your problems:

1. Parameters of Host are no longer exposed.
2. The container guarantees that there is going to be a single instance of Host.
3. Host is exposed as a service (IHost) and not as an implementation (Host).
4. You can mock IHost in unit tests.
5. You can easily unit-test the Host implementation with whatever parameters you want, because it now has a public constructor and can be re-created as often as your tests require.
6. Classes that depend on IHost will now require it as a dependency, instead of secretly accessing it via a global static property.
_unix.91370 | I am running Linux Slackware 14.0. I wanted to allow su only to the members of the wheel group, so I modified the permissions of the /bin/su and /usr/bin/sudo files to this:

bash-4.2# ls -la /bin/su
-rws--s--- 1 root wheel 59930 Sep 14 2012 ./su
bash-4.2# ls -la ./sudo
-rws--s--- 1 root wheel 107220 Jun 29 2012 ./sudo

Now when I am a member of the wheel group and run su, it prompts for a password, and I enter it. No errors are shown, but it doesn't switch me to root. Probably I set some permissions wrong? | Can't run su after changing permissions to su file | permissions;sudo;su | You could try

chown root.wheel /bin/su
chmod o-x /bin/su

so su will belong to the wheel group and the others won't be able to run it. It seems to me that chown should solve your problem, setting all the permissions properly, since you previously only set up the execution rights.
_unix.370361 | If I have a page open in Lynx, how can I download the source (HTML) of it? | Use Lynx to download the source of the page I'm currently on | lynx | View the source code by pressing the \ key (thanks to this article)Then press the P key, and then select Save to a local file. |
_codereview.27254 | I have two class:InputForm.javapublic class InputForm { private String brandCode; private String caution; public String getBrandCode() { return brandCode; } public void setBrandCode(String brandCode) { this.brandCode = brandCode; } public String getCaution() { return caution; } public void setCaution(String caution) { this.caution = caution; }}CopyForm.javapublic class CopyForm { private boolean brandCodeChecked; private boolean cautionChecked; public boolean isBrandCodeChecked() { return brandCodeChecked; } public void setBrandCodeChecked(boolean brandCodeChecked) { this.brandCodeChecked = brandCodeChecked; } public boolean isCautionChecked() { return cautionChecked; } public void setCautionChecked(boolean cautionChecked) { this.cautionChecked = cautionChecked; }}I want to copy values from an InputForm to another if its corresponding property in CopyForm is true.This is what I do:if(copyForm.isBrandCodeChecked()) { inputForm.setBrandCode(otherInputForm.getBrandCode());}if(copyForm.isCautionChecked()) { inputForm.setCaution(otherInputForm.getCaution());}The problem is I have many many properties. Writing many if statements seems ugly and bad programming practice.How to solve it? (I know reflection is not a good choice so I don't think about it) | Copy object properties without using many if statements | java;classes;form | null |
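The question is Java, but the shape of the usual fix is language-independent: replace the per-field if statements with a table mapping each checked flag to the field it gates, then loop. A Python sketch of that idea (dicts stand in for the Java beans; field names are taken from the post):

```python
# One rule per field instead of one `if` per field.
COPY_RULES = {
    "brandCodeChecked": "brandCode",
    "cautionChecked": "caution",
}

def copy_checked(copy_form, source, target):
    """Copy source[field] into target[field] for every rule whose flag is set."""
    for flag, field in COPY_RULES.items():
        if copy_form.get(flag):
            target[field] = source[field]
    return target
```

Adding a 26th field is then one new table entry, not another if block. In Java the same table can hold a getter/setter pair (e.g. method references) per field, which avoids reflection entirely.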
_softwareengineering.154733 | In one of the latest WTF moves, my boss decided that adding a Person To Blame field to our bug tracking template will increase accountability (although we already have a way of tying bugs to features/stories). My arguments that this will decrease morale, increase finger-pointing and would not account for missing/misunderstood features reported as bug have gone unheard.What are some other strong arguments against this practice that I can use? Is there any writing on this topic that I can share with the team and the boss? | My boss decided to add a person to blame field to every bug report. How can I convince him that it's a bad idea? | teamwork;bug report | Tell them this is only an amateurish name for the Root Cause field used by professionals (when issue tracker does not have dedicated field, one can use comments for that).Search the web for something like software bug root cause analysis, there are plenty of resources to justify this reasoning 1, 2, 3, 4, .......a root cause for a defect is not always a single developer (which is the main point of this field)...That's exactly why root cause is professional while person to blame is amateurish. Personal accountability is great, but there are cases when it simply lays outside of the dev team.Tell your boss when there is a single developer to blame, root cause field will definitely cover that (coding mistake made by Bob in commit 1234, missed by Jim in review 567). 
The point of using the term root cause is to cover cases like that, along with cases that go out of the scope of the dev team.For example, if the bug has been caused by faulty hardware (with the person to blame being someone outside of the team who purchased and tested it), the root cause field allows for covering that, while single developer to blame would simply break the issue tracking flow.The same applies to other bugs caused by someone outside of the dev team - tester errors, requirements change, and management decisions. Say, if management decides to skip investing in disaster recovery hardware, blaming a single developer for an electricity outage in the datacenter would just not make sense. |
_unix.34063 | I am trying to show all instances of a particular message from the syslog in chronological order by doing something like the following:grep squiggle /var/log/messages*Unfortunately the glob pattern matches the currently active file first. eg./var/log/messages/var/log/messages-20120220/var/log/messages-20120227/var/log/messages-20120305/var/log/messages-20120312This means that recent messages show up first followed by the historical messages in chronological order.Is it possible to adjust the glob pattern behaviour somehow to make the empty match (ie. just messages) show up at the end of the list?If not, what would be a good way to address this problem? | Is it possible to change the order of a glob? | bash;wildcards | I don't know of a way to change the globbing order, but there's an easy workaround for your case:grep squiggle /var/log/messages-* /var/log/messagesi.e. don't match the messages files in your glob pattern, and add it to the end of grep's argument list. |
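The answer's ordering can also be expressed as a sort key if you'd rather not hard-code the file list: dated rotations first (YYYYMMDD suffixes sort lexicographically, which is chronological here), the current file last. A sketch in Python:

```python
def chronological(paths, current="/var/log/messages"):
    # False sorts before True, so every dated rotation precedes `current`.
    return sorted(paths, key=lambda p: (p == current, p))

logs = [
    "/var/log/messages",
    "/var/log/messages-20120312",
    "/var/log/messages-20120220",
    "/var/log/messages-20120227",
]
# -> messages-20120220, messages-20120227, messages-20120312, messages
```

This mirrors the shell workaround `grep squiggle /var/log/messages-* /var/log/messages` without relying on argument order.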
_cs.33828 | Let $\mathcal{B} = \{v_1,v_2,\ldots,v_k\} \subset \mathbb{R}^n$ be linearly independent vectors. Recall that the integer lattice of $\mathcal{B}$ is the set $L(\mathcal{B})$ of all linear combinations of elements of $\mathcal{B}$ using only integers as coefficients. That is $$L(\mathcal{B}) = \{ \sum_{i=1}^k c_i v_i \mid c_i \in \mathbb{Z}\}.$$The closest vector problem asks us to find a nonzero vector $v \in L(\mathcal{B})$ such that $||v||$ is minimized. It is apparently well known that this problem is NP-complete, though I was not able to find a reduction from any of the well-known NP-complete problems.The first proof of this claim seems to be in P. van Emde Boas, Another NP-complete problem and the complexity of computing short vectors in a lattice, but I cannot find a copy of this paper.Can someone give a polynomial reduction of some well-known NP-complete problem to the closest vector problem? | NP completeness of closest vector problem | complexity theory;reference request;np complete;reductions | As far as I know, it is not known that the shortest vector problem is NP-hard for any $L^p$ norm other than $L^\infty$. It is known that the shortest vector problem is NP-hard under randomized reductions for all $L^p$, a result first proved by Ajtai. See for example Micciancio's paper and the results he references. Since then better inapproximability results have been obtained, but as far as I can tell nobody could prove an unconditional NP-hardness result.
_codereview.101427 | I am finding the order of sorting of an array. The code works but can it be made better especially the return values of the function findSortOrder.#include <stdio.h>#include <stdlib.h>// Returns 0 for unsorted, 1 for sorted in increasing order, 2 for sorted in decreasing orderint findSortOrder(int array[], int len){ // holds the sorting order of the subarray array[0...i] // 0 represents no information about sorting order // 1 represents sorting in increasing order // 2 represents sorting in decreasing order int order = 0; int i; for (i = 1; i < len; i++) { if (order == 0) { if (array[i] > array[i - 1]) { order = 1; } else if (array[i] < array[i - 1]) { order = 2; } } else if (order == 1) { if (array[i] < array[i - 1]) { return 0; } } else { if (array[i] > array[i - 1]) { return 0; } } } if (order == 0 || order == 1) { return 1; } else { return 2; }}int main(){ printf(Enter length of the array: ); int len; scanf(%d, &len); int* input = malloc(len * sizeof(*input)); int i; for (i = 0; i < len; i++) { scanf(%d, &input[i]); } int order = findSortOrder(input, len); switch (order) { case 0: printf(Unsorted\n); break; case 1: printf(Sorted in increasing order\n); break; case 2: printf(Sorted in decreasing order\n); break; } free(input); return 0;}Edit:I will be using this function to merge two sorted arrays in their sorting order. So I think if no. of elements is 1 or all are equal then sort order could be returned as increasing. | Finding the order of sorting of an array | c;sorting | The cleanest would be an enum and a switch (current) with three cases:In each case you can then check if you stay in the current_order or if you switch to another or if you can return 0/1/-1.Simplified:current=NONE;for() a=... b= .. 
switch(current) case NONE: if (a>b) current=DEC; else if (a<b) current=INC; break; case INC: if (a>b) return NONE; break; case DEC: if (a<b) return NONE; break;return current;I don't know if it's the right term, but I think that is a finite state machine. You could read about those to get more information; it's always good to use for parsing stuff.From the comments:What should be returned if the array is one element long?Good point. I would introduce another return value MIXED:current=NONE;for() a=... b= .. switch(current) case NONE: if (a>b) current=DEC; else if (a<b) current=INC; break; case INC: if (a>b) return MIXED; break; case DEC: if (a<b) return MIXED; break;return current; |
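The pseudocode above, made runnable (a Python transcription, not the poster's C): equal neighbours carry no information, the first strict inequality picks a direction, and any contradiction ends the scan.

```python
NONE, INC, DEC, MIXED = "none", "increasing", "decreasing", "mixed"

def sort_order(xs):
    state = NONE                      # no information yet
    for a, b in zip(xs, xs[1:]):
        if state == NONE:
            if a < b:
                state = INC
            elif a > b:
                state = DEC
        elif state == INC and a > b:  # contradiction: not increasing
            return MIXED
        elif state == DEC and a < b:  # contradiction: not decreasing
            return MIXED
    return state  # NONE here means fewer than two elements, or all equal
```

Per the question's edit, a caller merging two sorted arrays can simply treat a NONE result (single element or all equal) as increasing.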
_softwareengineering.247125 | Context: I'm working on an HTML 5 game without persisted state. Every time you refresh the page, you start at the beginning. People are requesting that they can start where they left off if they leave the page. I plan to implement this as Local Storage. The thing that makes apprehensive about this is a new kind of bug I'll have to consider: If you come back and there's a newer code, it may not be able to deserialize the storage.As an example, the state when serialized may have looked like this:{ foo: bar}But when the player tries the game next month, that field doesn't even exist anymore and has been broken up into two fields with a completely different type. This is bound to happen, and I think I know how to fix this: I'll put a version number in the data, and when I release version y of the app, I'll have to write code to migrate from version x state to version y state. If I release version z of the app, I'll have to write code to migrate from version y to version z. I'll probably call this logic after deserialization.But that's not what this question is about (although if there's a flaw in my approach, I'd want to hear it). What I want to know is: How do I automate something (possibly a test) that tells me when I need to write a migration script and when I need to change the data's version number?I feel automation is the key here, because things will go very wrong if I make a mistake and remembering to change a version number / write a migration script will be the last thing on my mind when I just finished that shiny new feature I want others to see. An interesting (but perhaps irrelevant) realization: All these issues go away if I store the data on my server in a database with a schema. That lets me confidently know that all the data is in the right state (if my schema is good). The problem with persisting locally is I lose control of the data and when it gets migrated. This question is similar but different to this one. 
The difference is I'm concerned primarily with automation and s/he's not and we're using different languages (seeing how the accepted answer is a Java library, that makes it irrelevant to me). | What's a good way to make sure that locally serialized data can be deserialized in newer code? | javascript;automation;persistence;serialization | null |
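A minimal sketch of the version-number-plus-migration-chain plan described in the question (the field names — gold, coins, level — are illustrative, not from the post): each migration upgrades exactly one version step, so any old save replays every step in order.

```python
CURRENT_VERSION = 3

def _v1_to_v2(state):
    state = dict(state)
    state["coins"] = state.pop("gold", 0)   # hypothetical: v2 renamed gold -> coins
    state["version"] = 2
    return state

def _v2_to_v3(state):
    state = dict(state)
    state["checkpoint"] = {"level": state.pop("level", 1)}  # hypothetical: v3 nested the field
    state["version"] = 3
    return state

MIGRATIONS = {1: _v1_to_v2, 2: _v2_to_v3}

def load(raw):
    state = dict(raw)
    state.setdefault("version", 1)          # pre-versioning saves count as v1
    while state["version"] < CURRENT_VERSION:
        state = MIGRATIONS[state["version"]](state)
    return state
```

The automation concern then has a natural answer: keep one fixture per historical version in the test suite and assert that load() brings each of them to CURRENT_VERSION with the expected shape. The moment you change the state's shape and bump the version without writing the matching migration, that test fails.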
_webmaster.38723 | I've been looking to use unicode for more iconography, and I haven't been able to find any appropriate unicode for a view icon. None of the eye-related unicode that I've found works on the web. Does anyone know of any unicode icons that would be appropriate for a view icon? | Usable unicode for a view icon? | icon;unicode | null |
_webmaster.79685 | I have an issue with the SEO of my website. Before, I was on top of Google with Icecom, but I migrated my website to WordPress. All my URLs have changed, and the body too...

What I did for my new SEO:

1) Redirect 301
2) Changing address using Google Webmaster Tool

But I don't see my website on Google anymore. What am I supposed to do?

Regards,

Edit #1
Edit #2 (Index state and exploration from Google) | SEO issue when migrating to wordpress changing urls and file names etc | seo;google;url rewriting;migration;filenames | When migrating from HTML to WordPress, the main thing to keep in mind is the permalink structure. By default, HTML pages have the extension .html, while WordPress URLs have no extensions (you can activate them, though). Google treats www.website/page.html and www.website/page as two different URLs.

Generally there are two options:

1) Change the permalink structure of all pages (there are plugins for that).
2) Redirect using 301.

Since you have already implemented 301 redirects, you should know that using a 301 redirect can lead to losing 15 percent of your link juice. Many sites quote Matt Cutts, Google's head of Web spam, as having made that statement.

Edited

How to avoid negative SEO because of an offline website? When Googlebot crawls your website while it is offline, instead of returning an HTTP result code 404 (Not Found) or showing an error page with the status code 200 (OK) when a page is requested, it's better to return a 503 HTTP result code (Service Unavailable), which tells search engine crawlers that the downtime is temporary. Moreover, it allows webmasters to provide visitors and bots with an estimated time when the site will be up and running again.
_unix.262028 | When I was under Ubuntu 14.04, I had in the title bar a menu that allowed me to choose between high performance or energy saving for the processor. I want to know what app it was; I can't remember... Thermald? Conky? Perf? Something else? And by the way, can it run under Kubuntu or Lubuntu? | perfs manager in Ubuntu 14.04 | ubuntu | null
_unix.273273 | How can I create a text file called example text file (with spaces)? Write a few sentences to store in this file, then make a copy of the file with a different name. | Create a text file with spaces and sentences stored inside and make a copy of text file | terminal | null
_datascience.8460 | What stable Python library can I use to implement Hidden Markov Models? I need it to be reasonably well documented, because I've never really used this model before.Alternatively, is there a more direct approach to performing a time-series analysis on a data-set using HMM? | Python library to implement Hidden Markov Models | python;time series;markov process | The ghmm library might be the one which you are looking for. As it is said in their website:It is used for implementing efficient data structures and algorithms for basic and extended HMMs with discrete and continuous emissions. It comes with Python wrappers which provide a much nicer interface and added functionality.It also has a nice documentation and a step-by-step tutorial for getting your feet wet. |
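If you want to see what such a library computes before committing to one, the core of HMM inference — the forward algorithm — fits in a few lines of plain Python (a didactic sketch; no particular library's API is implied):

```python
def forward(obs, start_p, trans_p, emit_p):
    """P(observation sequence | HMM), by the forward algorithm.

    start_p[i]    -- probability of starting in state i
    trans_p[i][j] -- probability of moving from state i to state j
    emit_p[i][o]  -- probability of emitting symbol o while in state i
    """
    states = range(len(start_p))
    # initialise with the first observation
    alpha = [start_p[i] * emit_p[i][obs[0]] for i in states]
    # propagate: new alpha[j] sums over all predecessor states
    for o in obs[1:]:
        alpha = [emit_p[j][o] * sum(alpha[i] * trans_p[i][j] for i in states)
                 for j in states]
    return sum(alpha)
```

For real time-series work the library versions add log-space/scaling to avoid underflow on long sequences, plus Viterbi decoding and Baum-Welch training — which is exactly what you'd want the documented package for.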
_codereview.8421 | I have just started reading through Learn you a Haskell I got up to list comprehension and started to play around in GHCi, my aim was to make a times table function that takes a number n and an upper limit upperLimit and return a nested list of all the 'tables' up to n for example> timesTable 2 12[[1,2..12],[2,4..24]]the actual function/list comprehension I came up with is> let timesTable n upperLimit = [[(n-y) * x | x <- [1..upperLimit]] | y <- reverse [0..(n-1)]]Any feedback on the above would be greatly appreciated as this is the first time I have really used a functional language, so if there is a better way or something I have missed please let me know. | Multiplication table using a list comprehension | beginner;haskell | Your function could be simplified a little, and I find it helpful to define functions using declarations, since type signatures are really helpful (although admittedly your example is simple enough that it doesn't matter):timesTable :: Int -> Int -> [[Int]]timesTable n u = [[y * x | x <- [1 .. u]] | y <- [1 .. n]]The key thing I noticed was that you were using n-y: it should be obvious that this part of the expression becomes the following values in each iteration of y: [n-(n-1), n-(n-2), ... n-0], which is just [1 .. n]. |
_webmaster.104987 | Can anyone tell me how Google decides which image to use when it shows an image with the search results (as is the case with mobile search, sometimes).If you look at these mobile search results......and take the AliExpress.com result as an example, by inspecting the HTML you can see that the image used by Google seems to have no special declaration, etc. It is just the first jpg that appears in that page's HTML - so I was thinking maybe that's why it's the image that is used in the Google search results...However, when you look at these mobile search results......and consider the Backpack Billboards result, the HTML reveals that the image being used is actually the seventh image declared in the HTML.So can anyone shed any light on how Google determine which image to be displayed? I would like to know so I can then have control over which image is displayed for my site. | How to change which image from website is shown in Google search result? | seo | null |
_unix.314545 | I was wondering how I would get this to sort alphabetically but in reverse. For example (z-a): cut -d: -f1 /etc/passwd | sort | How would I sort this in reverse alphabetical order? | sort;cut | null
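The usual shell answer here is sort's -r flag: `cut -d: -f1 /etc/passwd | sort -r`. For comparison, the same pipeline as a Python function over the file's lines (note that GNU sort's exact order depends on the locale, while Python compares code points):

```python
def usernames_desc(passwd_lines):
    # cut -d: -f1  ->  take the first colon-separated field of each line
    names = [line.split(":", 1)[0] for line in passwd_lines if line.strip()]
    # sort -r      ->  descending, i.e. z-a
    return sorted(names, reverse=True)
```

In the shell, `sort -r` alone does the job because cut already reduced each line to a single field.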
_unix.260378 | I have to find how many times the word shell is used in a file. I used grep shell test.txt | wc -w in order to count how many times that word has been used, but the result comes out 4 instead of 3. The file content is:this is a test filefor shell_Ashell_Bshsheland shell_Cscript project | The wc -w command outputs incorrect answer | grep;wc | The wc command is counting the words in the output from grep, which includes for:> grep shell test.txtfor shell_Ashell_Bshell_CSo there really are 4 words.If you only want to count the number of lines that contain a particular word in a file, you can use the -c option of grep, e.g.,grep -c shell test.txtNeither of those actually count words, but could match other things which include that string. Most implementations of grep (GNU grep, modern BSDs as well as AIX, HPUX, Solaris) provide a -w option for words, however that is not in POSIX. They also recognize a regular expression, e.g.,grep -e '\<shell\>' test.txtwhich corresponds to the -w option. Again, that is not in POSIX. Solaris does document this, while AIX and HPUX describe -w without mentioning the regular expression. These all appear to be consistent, treating a word as a sequence of alphanumerics plus underscore.You could use a POSIX regular expression with grep to match words (separated by blanks, etc), but your example has none which are just shell: they all have some other character touching the matches. Alternatively, if you care only about alphanumerics (and no underscore) and do not mind matching substrings, you could dotr -c '[[:alnum:]]' '\n' test.txt |grep -c shellThe -o option suggested is non-POSIX, and since OP did not limit the question to Linux or BSDs, is not what I would recommend. In either case, it does not match words, but strings (which was OP's expectation).For reference:grepwc |
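The counts discussed above, reproduced in Python (a cross-check, not part of the original answer; the line breaks in the sample are my reconstruction of the question's file):

```python
import re

text = """this is a test file
for shell_A
shell_B
sh
shel
and shell_C
script project
"""

# every occurrence of the substring, like `grep -o shell file | wc -l`
substring_hits = len(re.findall("shell", text))

# lines containing it, like `grep -c shell file`
matching_lines = sum("shell" in line for line in text.splitlines())

# whole words only: underscore counts as a word character, so shell_A
# does NOT match -- the same behaviour as grep -w / \<shell\>
word_hits = len(re.findall(r"\bshell\b", text))
```

This makes the answer's point concrete: the substring count (3) is what the poster wanted, while the whole-word count is 0 here because every occurrence is glued to an underscore suffix.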
_cstheory.30709 | Consider a denotational semantics from simply-typed $\lambda$-calculus into dependent type theory. Is that actually a (trivial) term transformation into that dependent type theory? After all, type theory has a syntax.In fact, even set theory has a syntax*! So how do we distinguish a denotational semantics from a compositional term transformation?Now, let's generalize to less trivial program transformations say, transformation to continuation-passing style (or store-passing style, environment passing style, ...). You can show the same idea through a non-standard semantics (here, a continuation-passing semantics) or a term transformation into a continuation-passing term, and they're distinguished by a binding-time shift. Again, isn't the non-standard semantics also a term transformation?This is a concrete confusion which I've observed at least twice:In my work (on incremental computation) I've used a non-standard denotational semantics into type theory (a change-passing semantics). After a presentation of that, Gabriel Scherer remarked (kindly) that for him, that was a term transformation into a dependently typed language.F-ing modules preempts this confusion they defend their presentation of the syntax of semantic objects.Semantic signatures. The syntax of semantic signatures is given in Figure 9. (And no, this is not an oxymoron, for in our setting the semantic objects we are using to model modules are merely pieces of F syntax.) [Emphasis added.]*Apparently, some (non-formalists) claim that set theory is not just syntax, but something ontologically different. I'll ignore this subtle philosophical issue; the only reference I know on it is Raymond Turner's Understanding Programming Languages. | Distinguishing semantics vs syntactic techniques and the syntax of your semantic domains | lo.logic;pl.programming languages;big picture | In general semantics is a mapping $[\![ {-} ]\!]$ of syntax to mathematical objects of some sort. 
The objects may be syntactic in nature, in which case it is perhaps better to speak of a translation.Some kinds of semantics are clearly not syntax. For example, if you interpret the simply typed $\lambda$-calculus into set theory, then it is not reasonable to claim that this is just a translation of one kind of syntax into another. For instance, in the set-theoretic semantics any set may act as a type, even a non-definable one, and even some definable types will have uncountable cardinality (consider $\mathtt{nat} \to \mathtt{nat}$) it would be absurd to claim that these are just syntax.One should not confuse semantics with how we express the semantic function. Of course, anything you express in mathematics is just a bunch of expressions. But it's been a long time since we learned the difference between a word and its meaning. If you are going to defend the position that it's all just formalism all the way down then we have a different issue to discuss, namely: are you a formalist? |
_cs.43149 | Problem
I am trying to come up with an algorithm that will dynamically throttle a client's number of outstanding requests based on the response times of completed requests. Response times are unpredictable and can be anywhere between 7 and 90 seconds. The response times are greatly affected by the total number of outstanding requests: if the server is flooded, it bogs down and response times grow very long.

My specific scenario
My client application has several HTTP requests it needs to send to a server (actually 3 servers that are load balanced and use a round-robin distribution; the number of servers and the code running there are out of my control). Each request is a flat object containing 25 parameters. The server uses the parameters to do several lookups, which may or may not trigger other lookups and calculations; this is where the server processing time is variable. The server returns a list of corresponding results, usually between 0 and approximately 25 in length.

What I have tried
I created a rolling average class that keeps the response times of the last X responses and can give the average at any point. Then, when each response is received, I compare its response time to the average. If the response time is equal to or less than the average, I increase the number of allowed outstanding requests. If the current response time is greater than the average, I decrease the allowed outstanding requests. This approach worked somewhat in scaling up, but I had cases where the average just grew over time and allowed more and more outstanding requests, which brought everything to a crawl. I have an implementation question on Stack Overflow if you want more details on how I am doing this in code, minus the rolling average part: https://stackoverflow.com/q/30263716/1168353

I have been using a mocked-out version of the server that just sleeps a thread before returning a response.
I pick a bestWaitSeconds that is the fastest time it can return, and a MaxRequestsBeforeDegradation that determines the maximum number of requests before times increase; anything below that returns bestWaitSeconds. The degradation formula looks like the one below and only applies when currentRequests is more than MaxRequestsBeforeDegradation:

secondsToWait = (currentRequests - MaxRequestsBeforeDegradation) * (r.NextDouble() + .5) * bestWaitSeconds;

The random factor is just a way to apply the unknown return time stated in the problem. Times do go well above 90 seconds when the server is overloaded. This formula isn't exactly what the server does, but I think it conveys the idea. So basically the algorithm needs to get as close to MaxRequestsBeforeDegradation as possible. In the real case, MaxRequestsBeforeDegradation would change over long stretches of time, so the algorithm also needs to adapt and explore up and down to continually know where the best number of outstanding requests is. Hope this helps.

Summary
How can I dynamically find the optimal number of outstanding requests allowed at a given moment to get the greatest throughput, given only a history of response times? | Algorithm for Dynamic Client Side Throttling | algorithms;computer networks;communication protocols | null
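One concrete way to avoid the drifting-average failure described in the question is an additive-increase/multiplicative-decrease (AIMD) rule, the feedback scheme TCP congestion control uses, with the comparison made against a recent-best baseline instead of a rolling mean. A hedged Python sketch; the class name, thresholds, and window sizes are illustrative, not tuned:

```python
from collections import deque

class AimdThrottle:
    """Adjust the allowed number of outstanding requests from observed latencies.

    Additive increase / multiplicative decrease: +1 on a fast response,
    halve on a slow one. Comparing against a baseline of the best recent
    times (rather than a rolling mean) avoids the drift described above,
    where a growing average keeps justifying ever-larger windows.
    """

    def __init__(self, min_limit=1, max_limit=100, history=50, slack=1.5):
        self.limit = min_limit
        self.min_limit = min_limit
        self.max_limit = max_limit
        self.slack = slack                  # how far above baseline counts as "slow"
        self.times = deque(maxlen=history)  # recent response times, in seconds

    def record(self, seconds):
        self.times.append(seconds)
        baseline = min(self.times)          # best recent time, not the mean
        if seconds <= baseline * self.slack:
            self.limit = min(self.limit + 1, self.max_limit)   # additive increase
        else:
            self.limit = max(self.limit // 2, self.min_limit)  # multiplicative decrease
        return self.limit
```

Because the baseline is a minimum over a bounded window, it still adapts if the server genuinely gets slower over time, but a burst of degraded responses halves the window instead of inflating an average.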
_reverseengineering.1984 | I want to develop a web-based API in Rails for some LockState wifi programmable thermostats, so that other applications can control these thermostats. But I cannot find any resources about how these thermostats are accessible from the Internet and how to connect to them. Is there any documentation available? If not, how would I find out how to interact with the thermostats? | API for LockState Wireless Internet Thermostat | interoperability;api;embedded | null
_codereview.69359 | In our .NET tests we use NSubstitute & ExpectedObjects. Testing object expectations involves hand-crafting large anonymous objects, and when new properties are added we need to go back to these anonymous objects and update them. Below I am attempting to get a fluent Object to DTO builder, which fails when a property is missed. Here is the implementation:

//CustomerCreatedEvent has all below properties
var exp1 = @event.ToDto<CustomerCreatedEvent, CustomerDetail>(
    x => x.AggregateId.As(CustomerId),
    x => x.Email,     // comment this out and test FAILS -> CustomerDetail.Email required
    x => x.FirstName,
    x => x.Surname);

var exp2 = new {
    CustomerId = @event.AggregateId,
    @event.Email, // comment this out and test still passes
    @event.FirstName,
    @event.Surname};

exp1.ToExpectedObject().ShouldMatch(actual);
exp2.ToExpectedObject().ShouldMatch(actual);

I have 2 questions:
1. Is my code just adding 'noise'?
2. Is the implementation code below sound?

public static TResult ToDto<TSource, TResult>(this TSource obj, params Expression<Func<TSource, object>>[] items)
    where TSource : class
{
    var eo = new ExpandoObject();
    var props = eo as IDictionary<string, object>;
    foreach (var item in items)
    {
        var member = item.Body as MemberExpression;
        var unary = item.Body as UnaryExpression;
        var body = member ?? (unary != null ? unary.Operand as MemberExpression : null);
        if (member != null && body.Member is PropertyInfo)
        {
            var property = body.Member as PropertyInfo;
            props[property.Name] = obj.GetType()
                .GetProperty(property.Name)
                .GetValue(obj, null);
        }
        else
        {
            var property = unary.Operand as MemberExpression;
            if (property != null)
            {
                props[property.Member.Name] = obj.GetType()
                    .GetProperty(property.Member.Name)
                    .GetValue(obj, null);
            }
            else
            {
                var compiled = item.Compile();
                var output = (KeyValuePair<string, object>)compiled.Invoke(obj);
                props[output.Key] = obj.GetType()
                    .GetProperty(output.Value.ToString())
                    .GetValue(obj, null);
            }
        }
    }

    TResult result = Activator.CreateInstance<TResult>();
    foreach (var item in props)
    {
        result.GetType().GetProperty(item.Key).SetValue(result, item.Value, null);
    }
    return result;
} | Object to object mapping verification - is this extension method useful? | c#;object oriented;extension methods | null
_cogsci.10252 | I would have liked to know what thoughts people get from various images, so that I could trigger targeted thought patterns with images. An easy way to do this would be if I could find an online service that lets strangers tag my images with their own keywords. Especially if I could somehow count votes for each keyword/tag.Does there exist any kind of service like this?Or does anyone know of a similar approach I could use? | Is there a way to have strangers tag my images? | measurement;methodology;experimental psychology;internet | Amazon Mechanical Turk is perfect for something like this. In fact, tagging content is one of their default project types, so you should be able to just load in your images and use their template for tagging.If you haven't seen it before, MTurk is basically a labor marketplace for very small tasks. It works best for paying people a few cents to complete very short tasks, but it is frequently used to recruit subjects for longer behavioral studies as well. |
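Whichever service collects the tags (the answer suggests Mechanical Turk), aggregating them into per-keyword vote counts is simple. A small Python sketch; the input shape is an assumption, since MTurk results would first be parsed out of its CSV/XML result files:

```python
from collections import Counter

def tally_tags(submissions):
    """Count tag votes per image from per-worker tag lists.

    submissions: iterable of (image_id, [tags...]) pairs, one per worker.
    Returns {image_id: Counter of tag -> votes}, with tags normalized
    to lowercase so "Beach" and "beach" pool their votes.
    """
    votes = {}
    for image_id, tags in submissions:
        counter = votes.setdefault(image_id, Counter())
        counter.update(tag.strip().lower() for tag in tags)
    return votes

results = tally_tags([
    ("img1", ["Beach", "sunset"]),
    ("img1", ["beach", "ocean"]),
    ("img2", ["cat"]),
])
print(results["img1"].most_common(1))  # → [('beach', 2)]
```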
_unix.40647 | Let me start off by saying this is a Mac Terminal I'm using. Not Linux, but I assumed I would get the best answers here as it has to do with Unix and the command line, not really anything about Mac itself. Anyway, here's the problem. In an attempt to be extremely lazy, I tried to write a function in my ~/.bashrc that would let me move into a homework folder, create a folder with today's date, move into said folder, and open vim with the given filename... all in one go. It looked something like...

export DATE=$( date +%d-%b )
function hw() {
    cd ~/Java/Programs/HW
    mkcd $DATE
    vim $*
}

mkcd is a function that makes the folder and moves into it at the same time. This is what my function looks like now, and it works just fine. However, in one of my many attempts to make this work I made a really, really stupid error and ended up with some kind of infinite loop with my mkcd part... still not sure how I managed this, and I've since deleted that code. Well, what happened when I did this is quite obvious... I now have a folder named 27-Jan that has infinitely many folders named 27-Jan inside of it. (Like I said, really stupid.) To make it stop putting me deeper and deeper, I hit ^c and voila, it stopped... I changed back to my ~/ folder and did a quick sudo rm 27-Jan/. To my amazement (and worry) that didn't work. I tried a few more things to get rid of it but nothing did anything. So, being clever like I am... I moved it to .Trash and stopped worrying about it. Since then I have emptied my trash a few times and never really noticed, but that bloody folder won't go away! It's taking up zero bytes on my hard disk but it's still there with all its little subfolders.

What I've tried:
sudo rm 27-Jan/
sudo rm -r 27-Jan/

The second one said override rwxr-xr-x caldwell/staff for 27-Jan/(many times repeated)/27-Jan? To which I've responded y and yes and even si (in case it spoke Spanish)... every time it says No such file or directory and repeats the previous question. Has anyone ever seen anything like this? And do you know what I might be able to do to make it just go away? | Infinitely Nested Directories | directory;function;failure | Try rm -rf to avoid the prompting.
-f, --force    ignore non-existent files, never prompt
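As a cross-check on the accepted rm -rf answer, the same non-interactive recursive delete can be done from Python with shutil.rmtree, which never prompts. An illustrative sketch that builds a small stand-in for the runaway 27-Jan tree and then removes it:

```python
import os
import shutil
import tempfile

# Build a deeply nested stand-in for the runaway "27-Jan/27-Jan/..." tree.
root = tempfile.mkdtemp()
path = os.path.join(root, *(["27-Jan"] * 50))
os.makedirs(path)

# Remove the whole tree in one call -- the equivalent of `rm -rf 27-Jan`.
shutil.rmtree(os.path.join(root, "27-Jan"))
print(os.listdir(root))  # → []

os.rmdir(root)  # clean up the scratch directory itself
```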
_unix.343344 | My system has 2 physical 4 TB HDDs, partially md-mirrored, and a very fast 512GB M.2 SSD that stores the root filesystems and caches key larger filesystems on the disks. One particular filesystem stores VMware Workstation virtual machine disk files. These files can be very large (10-70GB). The most common VM I boot is a Windows 10 image with a 78GB base image and another 6GB snapshot file. I'm looking for LVM cache tunable parameters that would allow this filesystem, and these files in particular, to perform better.

For comparison, the same M.2 SSD also has a real Win 10 image on it, and booting that image straight takes about 8 seconds from Grub selection to Windows login screen. By comparison, from VMware boot selection to login is about 28 seconds; not a lot better than if caching was turned off (though I haven't done that test recently, so I don't have a quotable number). The Win 10 VM's total directory is 82GB, and here are some specifics of my LVM setup (focus on the vmCache at the end):

lvs -a -o+devices
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
games cache Cwi-aoC--- 200.00g [gamesDataCache] 11.37 16.05 0.00 games_corig(0)
[gamesDataCache] cache Cwi---C--- 10.00g 11.37 16.05 0.00 gamesDataCache_cdata(0)
[gamesDataCache_cdata] cache Cwi-ao---- 10.00g /dev/nvme0n1p6(23015)
[gamesDataCache_cmeta] cache ewi-ao---- 12.00m /dev/nvme0n1p6(23012)
[games_corig] cache owi-aoC--- 200.00g /dev/md126(0)
home cache Cwi-aoC--- 300.00g [homeDataCache] 100.00 16.05 0.01 home_corig(0)
[homeDataCache] cache Cwi---C--- 10.00g 100.00 16.05 0.01 homeDataCache_cdata(0)
[homeDataCache_cdata] cache Cwi-ao---- 10.00g /dev/nvme0n1p6(3)
[homeDataCache_cmeta] cache ewi-ao---- 12.00m /dev/nvme0n1p6(0)
[home_corig] cache owi-aoC--- 300.00g /dev/md127(128000)
[lvol0_pmspare] cache ewi------- 79.90g /dev/md127(204800)
vm cache Cwi-aoC--- 500.00g [vmCache] 100.00 19.01 0.00 vm_corig(0)
[vmCache] cache Cwi---C--- 79.80g 100.00 19.01 0.00 vmCache_cdata(0)
[vmCache_cdata] cache Cwi-ao---- 79.80g /dev/nvme0n1p6(2563)
[vmCache_cmeta] cache ewi-ao---- 80.00m /dev/nvme0n1p6(22992)
[vm_corig] cache owi-aoC--- 500.00g /dev/md127(0)
root0 fedora -wi-ao---- 39.00g /dev/nvme0n1p5(1)

The cache size is almost 80GB, and this Win 10 image is the only VM I boot, so I'd hope it would be able to cache pretty much the entire image. The data usage is 100%, yet performance is far below what I hoped for. I can provide more detailed LVM configs on request, but assume most values are default right now. Any suggestions?

Thanks,
Brian | Tuning LVM cache for very large files | lvm;cache | null
_unix.234079 | I need to add an iptables rule from inside a C program on Linux. How should I do it? Do I need root privileges, or can I just grant some capabilities? I tried granting CAP_NET_RAW+iep and using popen(), system() and execve() to run iptables, but it doesn't work. It obviously works when I use sudo, but I would prefer not to grant root privileges. Thank you. | Can I add an iptables rule from inside a C program on Linux with capabilities only, or do I necessarily need root? | linux;iptables;root;c;capabilities | null
_softwareengineering.344343 | I want to set up basic permissions for a website. It has a basic roles system where users are part of certain groups and get all the permissions allowed to that group. However, I need more than just a basic Role, Permission, and Role_Permission setup because I want to allow for specific exceptions - certain users may have access to extra permissions (even if not granted to any of their roles) and certain users may be denied specific permissions even if their role has access to them. Basically, I want it to be completely flexible and customizable but still abstracted and reusable.

This is the basic design of the tables:

Permission
    PermissionID
    PermissionName

Role
    RoleID
    RoleName

Role_Permission
    PermissionID
    RoleID

User_Role
    UserID
    RoleID

Now I need a way to allow override so a specific user can be allowed/denied a specific permission. Should I create 2 separate tables, one for denied permissions and one for allowed? Or should I create a single User_Permission table? If so, should it have 2 separate flags for allow/deny or a single field? I'm thinking of having something like this:

User_Permission (or should it be called PermissionException? PermissionOverride?)
    UserID
    PermissionID
    Allow (bit flag)
    Deny (bit flag)

(I then plan to write a SQL stored procedure hasPermission(UserID, PermissionID) that would be called to determine whether this user has permission to perform an action.)

Is this a good design? Is there any way it can be improved upon? What is the general standard used for implementing this common design pattern? | What is the best way to structure my web permissions tables? | sql server;database design;web applications | null
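The lookup that the proposed hasPermission(UserID, PermissionID) procedure would perform can be prototyped with set operations, which makes the precedence rule explicit (explicit deny beats explicit allow, which beats an inherited role grant). A hedged Python sketch; the in-memory data shapes are illustrative stand-ins for the tables:

```python
def has_permission(user_id, permission_id, user_roles, role_permissions, user_overrides):
    """Resolve effective permission: user deny > user allow > role grant.

    user_roles:       {user_id: {role_id, ...}}             (User_Role)
    role_permissions: {role_id: {permission_id, ...}}       (Role_Permission)
    user_overrides:   {(user_id, permission_id): "allow" | "deny"}
    """
    override = user_overrides.get((user_id, permission_id))
    if override == "deny":
        return False          # explicit deny always wins
    if override == "allow":
        return True           # explicit allow beats a missing role grant
    # Fall back to the union of permissions from the user's roles.
    return any(
        permission_id in role_permissions.get(role, set())
        for role in user_roles.get(user_id, set())
    )

roles = {"alice": {"editor"}}
perms = {"editor": {"edit_page"}}
overrides = {("alice", "delete_page"): "allow", ("alice", "edit_page"): "deny"}
print(has_permission("alice", "edit_page", roles, perms, overrides))    # → False: denied despite role
print(has_permission("alice", "delete_page", roles, perms, overrides))  # → True: granted despite no role
```

Modelling the override as a single three-state value (allow/deny/absent) mirrors this logic directly and makes the contradictory allow-and-deny row unrepresentable, which argues for one User_Permission table with a single flag column rather than two bit columns or two tables.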
_vi.10581 | Sometimes I misspell the name of a file. So let's say I have a file called ThisIsAFileName and I start typing ThisS... The moment I misspell the filename (and there are no hits whatsoever), CTRL-P becomes incredibly slow. It displays each next letter at a speed of about 1 character every 5 seconds. So if I accidentally type 6 extra characters, I am waiting half a minute for CTRL-P to finish displaying these characters before I can undo this. Is this something that happens regularly? Any idea how to fix this? | CTRL-P very slow when files are not found | plugin ctrlp | null
_cs.77103 | I came across a set of articles back from the 2000s that stated Intel was planning a 10GHz CPU by 2011. Obviously, this didn't pan out. But how well are Moore's law and its cousins holding up in terms of:

- Transistors per \$1000
- Floating point operations per \$1000
- The performance of the world's most powerful supercomputers
- The computing power available to the average consumer
- FLOPS per watt of power

And which of these are likely to break down in the next few years? | How well is Moore's law (and its cousins) holding up? | parallel computing | null
_webapps.12383 | Is it possible to see personal Facebook usage statistics from facebook.com or some other web service? By statistics I mean, for example, the total number of likes, posts, logins, etc. (perhaps broken down by time period). | Personal Facebook usage statistics | facebook;statistics | I think that the best option is Wolfram Alpha. Check it out at http://www.wolframalpha.com/input/?i=facebook%20report :)
_unix.360742 | I want to install the driver for a Ralink MT7601U wireless adapter on CentOS 7, but the build fails with an error. I updated the system with yum update and rebooted. When I try to build the driver, I get the error output below.

# make
make -C tools
make[1]: Entering directory `/qomDB/LinuxSTA_wifi/tools'
gcc -g bin2h.c -o bin2h
make[1]: Leaving directory `/qomDB/LinuxSTA_wifi/tools'
/qomDB/LinuxSTA_wifi/tools/bin2h
cp -f os/linux/Makefile.6 /qomDB/LinuxSTA_wifi/os/linux/Makefile
make -C /lib/modules/3.10.0-514.16.1.el7.x86_64/build SUBDIRS=/qomDB/LinuxSTA_wifi/os/linux modules
make[1]: Entering directory `/usr/src/kernels/3.10.0-514.16.1.el7.x86_64'
  CC [M]  /qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_profile.o
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_profile.c: In function announce_802_3_packet:
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_profile.c:331:16: warning: unused variable pAd [-Wunused-variable] RTMP_ADAPTER *pAd = (RTMP_ADAPTER *)pAdSrc; ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_profile.c: In function STA_MonPktSend:
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_profile.c:399:9: warning: format %d expects argument of type int, but argument 3 has type long unsigned int [-Wformat=] DBGPRINT(RT_DEBUG_ERROR, (%s : Size is too large!
(%d)\n, __FUNCTION__, pRxBlk->DataSize + sizeof(wlan_ng_prism2_header))); ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../sta/assoc.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../sta/auth.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../sta/auth_rsp.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../sta/sync.o/qomDB/LinuxSTA_wifi/os/linux/../../sta/sync.c: In function PeerBeacon:/qomDB/LinuxSTA_wifi/os/linux/../../sta/sync.c:2181:12: warning: passing argument 8 of StaAddMacTableEntry from incompatible pointer typ e [enabled by default] ie_list->CapabilityInfo) == FALSE) ^In file included from /qomDB/LinuxSTA_wifi/include/rt_config.h:59:0, from /qomDB/LinuxSTA_wifi/os/linux/../../sta/sync.c:28:/qomDB/LinuxSTA_wifi/include/rtmp.h:7892:9: note: expected struct IE_LISTS * but argument is of type struct BCN_IE_LIST * BOOLEAN StaAddMacTableEntry( ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../sta/sanity.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../sta/rtmp_data.o/qomDB/LinuxSTA_wifi/os/linux/../../sta/rtmp_data.c: In function STAHandleRxDataFrame:/qomDB/LinuxSTA_wifi/os/linux/../../sta/rtmp_data.c:523:4: warning: passing argument 2 of MacTableLookup from incompatible pointer type [enabled by default] pEntry = MacTableLookup(pAd, &pHeader->Addr2); ^In file included from /qomDB/LinuxSTA_wifi/include/rt_config.h:59:0, from /qomDB/LinuxSTA_wifi/os/linux/../../sta/rtmp_data.c:28:/qomDB/LinuxSTA_wifi/include/rtmp.h:8429:18: note: expected UCHAR * but argument is of type UCHAR (*)[6] MAC_TABLE_ENTRY *MacTableLookup(RTMP_ADAPTER *pAd, UCHAR *pAddr); ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../sta/connect.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../sta/wpa.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../sta/sta_cfg.o/qomDB/LinuxSTA_wifi/os/linux/../../sta/sta_cfg.c: In function RTMPIoctlRF:/qomDB/LinuxSTA_wifi/os/linux/../../sta/sta_cfg.c:5306:7: warning: format %X expects argument of type unsigned int, but argument 5 has type LONG [-Wformat=] sprintf(msg+strlen(msg), BANK%d_R%02d:%02X , bank_Id, 
rfId, rfValue); ^/qomDB/LinuxSTA_wifi/os/linux/../../sta/sta_cfg.c:5359:3: warning: passing argument 2 of RtmpDrvAllRFPrint from incompatible pointer typ e [enabled by default] RtmpDrvAllRFPrint(NULL, msg, strlen(msg)); ^In file included from /qomDB/LinuxSTA_wifi/include/rt_config.h:64:0, from /qomDB/LinuxSTA_wifi/os/linux/../../sta/sta_cfg.c:28:/qomDB/LinuxSTA_wifi/include/rt_os_util.h:668:6: note: expected UINT32 * but argument is of type PSTRING VOID RtmpDrvAllRFPrint( ^/qomDB/LinuxSTA_wifi/os/linux/../../sta/sta_cfg.c:5209:22: warning: unused variable rf_bank [-Wunused-variable] UCHAR regRF = 0, rf_bank = 0; ^/qomDB/LinuxSTA_wifi/os/linux/../../sta/sta_cfg.c: In function RtmpIoctl_rt_ioctl_siwgenie:/qomDB/LinuxSTA_wifi/os/linux/../../sta/sta_cfg.c:7610:13: warning: assignment from incompatible pointer type [enabled by default] eid_ptr = pAd->StaCfg.pWpaAssocIe; ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/crypt_md5.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/crypt_sha2.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/crypt_hmac.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/crypt_aes.o/qomDB/LinuxSTA_wifi/os/linux/../../common/crypt_aes.c: In function AES_Key_Wrap:/qomDB/LinuxSTA_wifi/os/linux/../../common/crypt_aes.c:1459:6: warning: format %d expects argument of type int, but argument 2 has type long unsigned int [-Wformat=] DBGPRINT(RT_DEBUG_ERROR, (AES_Key_Wrap: allocate %d bytes memory failure.\n, sizeof(UINT8)*PlainTextLength)); ^/qomDB/LinuxSTA_wifi/os/linux/../../common/crypt_aes.c: In function AES_Key_Unwrap:/qomDB/LinuxSTA_wifi/os/linux/../../common/crypt_aes.c:1554:6: warning: format %d expects argument of type int, but argument 2 has type long unsigned int [-Wformat=] DBGPRINT(RT_DEBUG_ERROR, (AES_Key_Unwrap: allocate %d bytes memory failure.\n, sizeof(UINT8)*PlainLength)); ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/crypt_arc4.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/mlme.oIn file included from 
/qomDB/LinuxSTA_wifi/include/rtmp_os.h:44:0, from /qomDB/LinuxSTA_wifi/include/rtmp_comm.h:75, from /qomDB/LinuxSTA_wifi/include/rt_config.h:33, from /qomDB/LinuxSTA_wifi/os/linux/../../common/mlme.c:28:/qomDB/LinuxSTA_wifi/os/linux/../../common/mlme.c: In function MlmeResetRalinkCounters:/qomDB/LinuxSTA_wifi/os/linux/../../common/mlme.c:544:7: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] (UINT32)&pAd->RalinkCounters.OneSecEnd - ^/qomDB/LinuxSTA_wifi/include/os/rt_linux.h:473:76: note: in definition of macro NdisZeroMemory #define NdisZeroMemory(Destination, Length) memset(Destination, 0, Length) ^/qomDB/LinuxSTA_wifi/os/linux/../../common/mlme.c:545:7: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast] (UINT32)&pAd->RalinkCounters.OneSecStart); ^/qomDB/LinuxSTA_wifi/include/os/rt_linux.h:473:76: note: in definition of macro NdisZeroMemory #define NdisZeroMemory(Destination, Length) memset(Destination, 0, Length) ^/qomDB/LinuxSTA_wifi/os/linux/../../common/mlme.c: In function AsicRxAntEvalTimeout:/qomDB/LinuxSTA_wifi/os/linux/../../common/mlme.c:5201:45: warning: unused variable rssi_diff [-Wunused-variable] CHAR larger = -127, rssi0, rssi1, rssi2, rssi_diff; ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_wep.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/action.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_data.o/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_data.c: In function CmdRspEventCallbackHandle:/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_data.c:2509:8: warning: unused variable Ret [-Wunused-variable] INT32 Ret; ^/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_data.c: In function StopDmaTx:/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_data.c:2684:8: warning: unused variable IdleNums [-Wunused-variable] UINT8 IdleNums = 0; ^/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_data.c:2682:20: warning: unused variable UsbCfg [-Wunused-variable] USB_DMA_CFG_STRUC UsbCfg; ^ CC 
[M] /qomDB/LinuxSTA_wifi/os/linux/../../common/rtmp_init.o/qomDB/LinuxSTA_wifi/os/linux/../../common/rtmp_init.c: In function NICInitAsicFromEEPROM:/qomDB/LinuxSTA_wifi/os/linux/../../common/rtmp_init.c:981:9: warning: unused variable i [-Wunused-variable] USHORT i; ^/qomDB/LinuxSTA_wifi/os/linux/../../common/rtmp_init.c: In function NICInitializeAdapter:/qomDB/LinuxSTA_wifi/os/linux/../../common/rtmp_init.c:1292:22: warning: unused variable GloCfg [-Wunused-variable] WPDMA_GLO_CFG_STRUC GloCfg; ^/qomDB/LinuxSTA_wifi/os/linux/../../common/rtmp_init.c: In function NICInitializeAsic:/qomDB/LinuxSTA_wifi/os/linux/../../common/rtmp_init.c:1367:11: warning: unused variable KeyIdx [-Wunused-variable] USHORT KeyIdx; ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/rtmp_init_inf.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_tkip.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_aes.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_sync.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/eeprom.o/qomDB/LinuxSTA_wifi/os/linux/../../common/eeprom.c: In function RtmpChipOpsEepromHook:/qomDB/LinuxSTA_wifi/os/linux/../../common/eeprom.c:34:9: warning: unused variable e2p_csr [-Wunused-variable] UINT32 e2p_csr; ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_sanity.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_info.o/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_info.c: In function Set_DebugFunc_Proc:/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_info.c:1084:2: warning: format %x expects argument of type unsigned int, but argument 2 has type const char * [-Wformat=] DBGPRINT_S(RT_DEBUG_TRACE, (Set RTDebugFunc = 0x%x\n,__FUNCTION__, RTDebugFunc)); ^/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_info.c:1084:2: warning: too many arguments for format [-Wformat-extra-args]/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_info.c: In function set_rf:/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_info.c:5730:3: warning: format %x expects argument 
of type unsigned int *, but argument 5 has type UCHAR * [-Wformat=] rv = sscanf(arg, %d-%d-%x, &(bank_id), &(rf_id), &(rf_val)); ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_cfg.o/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_cfg.c: In function wmode_valid_and_correct:/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_cfg.c:279:8: warning: unused variable mode [-Wunused-variable] UCHAR mode = *wmode; ^/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_cfg.c: At top level:/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_cfg.c:264:16: warning: wmode_valid defined but not used [-Wunused-function] static BOOLEAN wmode_valid(RTMP_ADAPTER *pAd, enum WIFI_MODE wmode) ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_wpa.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_radar.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/spectrum.o/qomDB/LinuxSTA_wifi/os/linux/../../common/spectrum.c: In function PeerMeasureReportAction:/qomDB/LinuxSTA_wifi/os/linux/../../common/spectrum.c:1972:3: warning: format %d expects argument of type int, but argument 3 has type long unsigned int [-Wformat=] DBGPRINT(RT_DEBUG_ERROR, (%s unable to alloc memory for measure report buffer (size=%d).\n, __FUNCTION__, sizeof(MEASURE_RPI_REPORT))); ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/rtmp_timer.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/rt_channel.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_profile.o/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_profile.c: In function rtmp_read_multest_from_file:/qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_profile.c:2671:23: warning: unused variable pWdsEntry [-Wunused-variable] PRT_802_11_WDS_ENTRY pWdsEntry; ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_asic.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/scan.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/cmm_cmd.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/uapsd.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/ps.o CC [M] 
/qomDB/LinuxSTA_wifi/os/linux/../../rate_ctrl/ra_ctrl.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../rate_ctrl/alg_legacy.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../rate_ctrl/alg_ags.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../chips/rtmp_chip.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/txpower.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../mac/rtmp_mac.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../mgmt/mgmt_hw.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../mgmt/mgmt_entrytb.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../phy/rtmp_phy.o/qomDB/LinuxSTA_wifi/os/linux/../../phy/rtmp_phy.c: In function NICInitBBP:/qomDB/LinuxSTA_wifi/os/linux/../../phy/rtmp_phy.c:61:8: warning: unused variable R0 [-Wunused-variable] UCHAR R0 = 0xff; ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../phy/rlt_phy.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../phy/rlt_rf.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/ba_action.oIn file included from /qomDB/LinuxSTA_wifi/include/rtmp_os.h:44:0, from /qomDB/LinuxSTA_wifi/include/rtmp_comm.h:75, from /qomDB/LinuxSTA_wifi/include/rt_config.h:33, from /qomDB/LinuxSTA_wifi/os/linux/../../common/ba_action.c:30:/qomDB/LinuxSTA_wifi/os/linux/../../common/ba_action.c: In function convert_reordering_packet_to_preAMSDU_or_802_3_packet:/qomDB/LinuxSTA_wifi/include/os/rt_linux.h:886:34: warning: assignment makes integer from pointer without a cast [enabled by default] ((RTPKT_TO_OSPKT(_pkt))->tail) = (PUCHAR)((_start) + (_len)) ^/qomDB/LinuxSTA_wifi/include/os/rt_linux.h:929:2: note: in expansion of macro SET_OS_PKT_DATATAIL SET_OS_PKT_DATATAIL(__pRxPkt, __pData, __DataSize); \ ^/qomDB/LinuxSTA_wifi/os/linux/../../common/ba_action.c:1574:2: note: in expansion of macro RTMP_OS_PKT_INIT RTMP_OS_PKT_INIT(pRxBlk->pRxPacket, ^ CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../mgmt/mgmt_ht.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../common/rt_os_util.o CC [M] /qomDB/LinuxSTA_wifi/os/linux/../../os/linux/sta_ioctl.o CC [M] 
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.o
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c: In function 'RtmpOsUsDelay':
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:179:8: warning: unused variable 'i' [-Wunused-variable]
  ULONG i;
        ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c: In function 'duplicate_pkt':
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:497:3: warning: passing argument 1 of 'memmove' makes pointer from integer without a cast [enabled by default]
   NdisMoveMemory(skb->tail, pHeader802_3, HdrLen);
   ^
In file included from ./arch/x86/include/asm/string.h:4:0,
                 from include/linux/string.h:18,
                 from include/linux/bitmap.h:8,
                 from include/linux/cpumask.h:11,
                 from ./arch/x86/include/asm/cpumask.h:4,
                 from ./arch/x86/include/asm/msr.h:10,
                 from ./arch/x86/include/asm/processor.h:20,
                 from ./arch/x86/include/asm/thread_info.h:22,
                 from include/linux/thread_info.h:54,
                 from include/linux/preempt.h:9,
                 from include/linux/spinlock.h:50,
                 from include/linux/seqlock.h:35,
                 from include/linux/time.h:5,
                 from include/linux/stat.h:18,
                 from include/linux/module.h:10,
                 from /qomDB/LinuxSTA_wifi/include/os/rt_linux.h:31,
                 from /qomDB/LinuxSTA_wifi/include/rtmp_os.h:44,
                 from /qomDB/LinuxSTA_wifi/include/rtmp_comm.h:75,
                 from /qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:32:
./arch/x86/include/asm/string_64.h:58:7: note: expected 'void *' but argument is of type 'sk_buff_data_t'
 void *memmove(void *dest, const void *src, size_t count);
       ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:499:3: warning: passing argument 1 of 'memmove' makes pointer from integer without a cast [enabled by default]
   NdisMoveMemory(skb->tail, pData, DataSize);
   ^
In file included from ./arch/x86/include/asm/string.h:4:0,
                 from include/linux/string.h:18,
                 from include/linux/bitmap.h:8,
                 from include/linux/cpumask.h:11,
                 from ./arch/x86/include/asm/cpumask.h:4,
                 from ./arch/x86/include/asm/msr.h:10,
                 from ./arch/x86/include/asm/processor.h:20,
                 from ./arch/x86/include/asm/thread_info.h:22,
                 from include/linux/thread_info.h:54,
                 from include/linux/preempt.h:9,
                 from include/linux/spinlock.h:50,
                 from include/linux/seqlock.h:35,
                 from include/linux/time.h:5,
                 from include/linux/stat.h:18,
                 from include/linux/module.h:10,
                 from /qomDB/LinuxSTA_wifi/include/os/rt_linux.h:31,
                 from /qomDB/LinuxSTA_wifi/include/rtmp_os.h:44,
                 from /qomDB/LinuxSTA_wifi/include/rtmp_comm.h:75,
                 from /qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:32:
./arch/x86/include/asm/string_64.h:58:7: note: expected 'void *' but argument is of type 'sk_buff_data_t'
 void *memmove(void *dest, const void *src, size_t count);
       ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c: In function 'ClonePacket':
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:650:20: warning: assignment makes integer from pointer without a cast [enabled by default]
  pClonedPkt->tail = pClonedPkt->data + pClonedPkt->len;
                   ^
In file included from /qomDB/LinuxSTA_wifi/include/rtmp_os.h:44:0,
                 from /qomDB/LinuxSTA_wifi/include/rtmp_comm.h:75,
                 from /qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:32:
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c: In function 'RtmpOsPktInit':
/qomDB/LinuxSTA_wifi/include/os/rt_linux.h:886:34: warning: assignment makes integer from pointer without a cast [enabled by default]
  ((RTPKT_TO_OSPKT(_pkt))->tail) = (PUCHAR)((_start) + (_len))
                                 ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:669:2: note: in expansion of macro 'SET_OS_PKT_DATATAIL'
  SET_OS_PKT_DATATAIL(pRxPkt, pData, DataSize);
  ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c: In function 'wlan_802_11_to_802_3_packet':
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:695:15: warning: assignment makes integer from pointer without a cast [enabled by default]
  pOSPkt->tail = pOSPkt->data + pOSPkt->len;
               ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c: In function '__RtmpOSFSInfoChange':
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:1121:20: error: incompatible types when assigning to type 'int' from type 'kuid_t'
  pOSFSInfo->fsuid = current_fsuid();
                   ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:1122:20: error: incompatible types when assigning to type 'int' from type 'kgid_t'
  pOSFSInfo->fsgid = current_fsgid();
                   ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c: In function 'RtmpDrvAllRFPrint':
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:2052:4: warning: passing argument 2 of 'file_w->f_op->write' from incompatible pointer type [enabled by default]
    file_w->f_op->write(file_w, pBuf, BufLen, &file_w->f_pos);
    ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:2052:4: note: expected 'const char *' but argument is of type 'UINT32 *'
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:2037:22: warning: unused variable 'macValue' [-Wunused-variable]
  UINT32 macAddr = 0, macValue = 0;
                      ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:2037:9: warning: unused variable 'macAddr' [-Wunused-variable]
  UINT32 macAddr = 0, macValue = 0;
         ^
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c: In function 'RtmpOSIRQRelease':
/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.c:2173:21: warning: unused variable 'net_dev' [-Wunused-variable]
  struct net_device *net_dev = (struct net_device *)pNetDev;
                     ^
make[2]: *** [/qomDB/LinuxSTA_wifi/os/linux/../../os/linux/rt_linux.o] Error 1
make[1]: *** [_module_/qomDB/LinuxSTA_wifi/os/linux] Error 2
make[1]: Leaving directory `/usr/src/kernels/3.10.0-514.16.1.el7.x86_64'
make: *** [LINUX] Error 2

I really don't know how I can solve it. Please help me. | Problem installing a Ralink MT7601U wireless adapter on CentOS 7 | centos;wifi | null
_datascience.20252 | I have a short time series of daily counts (6 days = 6 counts: 12, 15, 69, 35, 97, 107). This looks like a rapid increase from the initial count of 12.

What are some statistical techniques to detect such large increases in a short time period? 'Rapid increase' and 'short time period' are rather loose terms, but I am looking for techniques where these can be 'parameters' that I can provide. For example, the short period could be 5 days, and a 'rapid increase' could be a 20% increase day-to-day, or a 90% increase over 5 days... something like that.

Suggestions are appreciated. | Detect rapid increase in time series | time series;anomaly detection | null
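One of the parameterized rules the question sketches (flag any day that is more than X% above the previous day) is simple enough to prototype with standard tools. The following is a hedged sketch, not an established statistical method; the 100% threshold and the hard-coded sample series are assumptions chosen for illustration:

```shell
#!/bin/sh
# Hedged sketch: flag days whose count rose more than THRESHOLD percent
# over the previous day. THRESHOLD=100 and the sample series are
# illustrative assumptions, not values taken from the question.
THRESHOLD=100

flagged=$(printf '%s\n' 12 15 69 35 97 107 | awk -v t="$THRESHOLD" '
    NR > 1 && prev > 0 && ($1 - prev) / prev * 100 > t {
        printf "day %d: %d -> %d\n", NR, prev, $1   # day-over-day spike
    }
    { prev = $1 }
')

echo "$flagged"
```

On the sample series this flags day 3 (15 -> 69, a 360% jump) and day 5 (35 -> 97, roughly 177%); a rolling-window variant (e.g. 90% over 5 days) would keep the last N values instead of a single prev.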
_unix.376135 | I have a Chromebook (which is crap for trying to run any file) and I want to find a free online Linux shell that I can use to run some of my programs. I don't need much data, just something that has python3, pip3, and git. I'm not looking to put down any money, because I'm purely a hobbyist when it comes to programming. | Free Linux shells? | shell;ssh | null
_webapps.85580 | How can I manage the list of people who show up on my calendar on their birthdays? I also want each birthday reminder to show the age the person is turning that day. Should I be using a different calendar? | Only show important birthdays on Google Calendar | google calendar | Google Calendar simply shows the birthdays of your Google Contacts. There is no way to restrict which people are displayed except by removing the birthday data from your Contacts. (Of course, there's nothing stopping you from creating a calendar with just the birthdays of the important people you want to see. That's what I used to do before Calendar started reading from Contacts.) Google also does not have an option to show, alongside a birthday entry on your calendar, the age someone is going to be. You'll need to find another calendar to do that (if one exists). Finding such an app is beyond the ken of this site, however.
_unix.90211 | I want to run a Linux OS on my Nintendo DS Lite. I tried DSLinux, but it left the device with white, non-responsive screens! I went on the IRC channel and got the advice to "learn C and solve the DSLinux problem"!!! So the question here is how I can peacefully get an operating system that I can run on my DS Lite. The following steps produced the error:

1. Got the file dslinux.tgz from http://www.dslinux.org/builds/.
2. Unzipped the file and moved its folder onto an SD card.
3. Inserted the SD card into the R4SDHC adapter card and turned on the DS.
4. Went to its folder and chose the Linux logo. Then both screens went white. | A Linux Distribution for Nintendo DS Lite | linux;dsl | null
_unix.299752 | I'm creating a custom syntax highlighter for a markup-like language that I created to accomplish some natural language manipulation stuff, so I thought I'd get started with the simple keywords, seeing as I don't need to come up with regexes for them.

I've defined the following in the XML:

<language id="foo" _name="Foo" version="2.0" _section="Source">
  <metadata>
    <property name="mimetypes">text/x-c;text/x-csrc;image/x-xpixmap</property>
    <property name="globs">*.foo</property>
  </metadata>
  <styles>
    <style id="operator" _name="Operator" map-to="def:keyword" />
    <style id="member" _name="Member" map-to="def:type" />
  </styles>
  <definitions>
    <context id="members" style-ref="member">
      <keyword>ref</keyword>
      <keyword>alt</keyword>
      <keyword>pos</keyword>
      <keyword>num</keyword>
    </context>
    <context id="operators" style-ref="operator">
      <keyword>#</keyword>
      <keyword>$</keyword>
      <keyword>@</keyword>
      <keyword>[</keyword>
      <keyword>]</keyword>
      <keyword>:</keyword>
      <keyword>=</keyword>
      <keyword>:?</keyword>
      <keyword>&</keyword>
    </context>
    <!--Main context-->
    <context id="opal" class="no-spell-check">
      <include>
        <context ref="members" />
        <context ref="operators" />
      </include>
    </context>
  </definitions>
</language>

The syntax does highlight, but only under some odd conditions. Specifically, any keyword in the "operators" context must be both preceded by and followed by a non-keyword character, lest it fail to highlight.

Screencap

Should I be using a regex for these, or did I just botch the XML? Also, why do the "members" keywords highlight without issue? | Custom syntax highlighting misbehaving in gedit | xml;gedit;syntax highlighting | null
_softwareengineering.241191 | Coming from C++ originally, and seeing lots of Java programmers doing the same, we brought namespaces to JavaScript. See Google's Closure Library as an example, where they have a main namespace, goog, and under that many more namespaces like goog.async and goog.graphics.

But now, having learned the AMD style of requiring modules, it seems like namespaces are kind of pointless in JavaScript. Not only pointless but even arguably an anti-pattern. What is AMD? It's a way of defining and including modules that removes all direct dependencies. Effectively you do this:

// some/module.js
define([
    'name/of/needed/module',
    'name/of/someother/needed/module',
  ], function(
    RefToNeededModule,
    RefToSomeOtherNeededModule) {
  ...code...
  return object or function
});

This format lets the AMD support code know that this module needs name/of/needed/module.js and name/of/someother/needed/module.js loaded. The AMD code can load all the modules and then, assuming no circular dependencies, call the define function on each module in the correct order, record the object/function returned by each module as it calls them, and then call any other modules' define function with references to those modules.

This seems to remove any need for namespaces. In your own code you can call the reference to any other module anything you want. For example, if you had two string libraries, even if they define similar functions, as long as they follow the AMD pattern you can easily use both in the same module. No need for namespaces to solve that.

It also means there are no hard-coded dependencies. For example, in Google's Closure any module could directly reference another module with something like var value = goog.math.someMathFunc(otherValue), and if you're unlucky it will magically work, whereas with AMD style you'd have to explicitly include the math library, since otherwise the module wouldn't have a reference to it (there are no globals with AMD).

On top of that, dependency injection for testing becomes easy. None of the code in an AMD module references things by namespace, so there are no hard-coded namespace paths, and you can easily mock classes at testing time.

Is there any other point to namespaces, or is that something that C++/Java programmers brought to JavaScript that arguably doesn't really belong? | With AMD style modules in JavaScript is there any benefit to namespaces? | javascript;modules;namespace | null
_codereview.69779 | I use this Bash script to rename specified files to all lowercase letters:

#!/bin/bash

usage() {
    test $# = 0 || echo "$@"
    echo "Usage: $0 [OPTION]... FILE..."
    echo
    echo "Rename files to all lowercase letters."
    echo
    echo "  -n, --dry-run    Dry run, show what would happen"
    echo
    echo "  -h, --help       Print this help"
    echo
    exit 1
}

args=
dryrun=off
while [ $# != 0 ]; do
    case $1 in
        -h|--help) usage ;;
        -n|--dry-run) dryrun=on ;;
        --) shift; while [ $# != 0 ]; do args="$args \"$1\""; shift; done; break ;;
        -?*) usage "Unknown option: $1" ;;
        *) args="$args \"$1\"" ;;
    esac
    shift
done

eval set -- $args

test $# -gt 0 || usage

for path; do
    test -e "$path" || continue
    origfile=$(basename "$path")
    newfile=$(tr '[:upper:]' '[:lower:]' <<< "$origfile")
    origdir=$(dirname "$path")
    origpath=$origdir/$origfile
    newpath=$origdir/$newfile
    test "$origpath" != "$newpath" || continue
    echo "$path -> $newpath"
    test $dryrun = on || mv -i -- "$path" "$newpath"
done

Is there a better way I missed? Anything to improve?

(This script, along with many other utility scripts I use, is on GitHub.) | Rename files to all lowercase letters | bash;file system | null
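One concrete simplification worth considering (a hedged sketch, not a drop-in replacement for the reviewed script) is to drop the eval/args round-trip entirely: after option parsing, the remaining positional parameters can be looped over directly with "$@", which already preserves spaces and quoting:

```shell
#!/bin/sh
# Hedged sketch: the core rename loop over "$@" with no eval.
# dryrun is forced on here so the sketch never actually moves files;
# in the real script it would come from option parsing.
dryrun=on

rename_lower() {
    out=
    for path in "$@"; do
        dir=$(dirname -- "$path")
        base=$(basename -- "$path")
        lower=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
        [ "$base" = "$lower" ] && continue          # already lowercase
        out="$out$path -> $dir/$lower
"
        [ "$dryrun" = on ] || mv -i -- "$path" "$dir/$lower"
    done
    printf '%s' "$out"
}

rename_lower "My File.TXT" "already_lower.txt"
```

Names with spaces survive intact because "$@" expands to one word per argument, so no quoting gymnastics or eval set -- are needed.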
_vi.6378 | Imagine I have the following text:

some random stuff
* asdf
* foo
* bar
some other random stuff

I want to replace the asterisk bullets with numbers, like so:

some random stuff
1. asdf
2. foo
3. bar
some other random stuff

How can this be done in Vim? | Replace a series of asterisk bullet points with a numbered list | substitute;filetype markdown;count;markup;range | You could try the following command:

:let c=0 | g/^* /let c+=1 | s//\=c.'. '

First it initializes the variable c (let c=0), then it executes the global command g, which looks for the pattern ^* (a beginning of line, followed by an asterisk and a space).

Whenever a line containing this pattern is found, the global command executes the command:

let c+=1 | s//\=c.'. '

It increments the variable c (let c+=1), then (|) it substitutes (s) the previously searched pattern (//) with the evaluation of an expression (\=): the contents of variable c concatenated (.) with the string '. '.

If you don't want to modify all the lines in your buffer, but only a specific paragraph, you can pass a range to the global command. For example, to modify only the lines whose numbers are between 5 and 10:

:let c=0 | 5,10g/^* /let c+=1 | s//\=c.'. '

If you have a file containing several similar lists which you want to convert, for example something like this:

some random stuff              some random stuff
* foo                          1. foo
* bar                          2. bar
* baz                          3. baz
some other random stuff        some other random stuff
                          ==>
some random stuff              some random stuff
* foo                          1. foo
* bar                          2. bar
* baz                          3. baz
* qux                          4. qux
some other random stuff        some other random stuff

you can do it with the following command:

:let [c,d]=[0,0] | g/^* /let [c,d]=[line('.')==d+1 ? c+1 : 1, line('.')] | s//\=c.'. '

It's just a variant of the previous command, which resets the variable c when you switch to another list. To detect whether you are in another list, the variable d is used to store the number of the last line where a substitution was made.

The global command compares the current line number (line('.')) with d+1. If they are the same, it means we are in the same list as before, so c is incremented (c+1); otherwise it means we are in a different list, so c is reset (1).

Inside a function, the command let [c,d]=[line('.')==d+1 ? c+1 : 1, line('.')] could be rewritten like this:

let c = line('.') == d+1 ? c+1 : 1
let d = line('.')

Or like this:

if line('.') == d+1
    let c = c+1
else
    let c = 1
endif
let d = line('.')

To save some keystrokes, you could also define the custom command :NumberedLists, which accepts a range whose default value is 1,$ (-range=%):

command! -range=% NumberedLists let [c,d]=[0,0] | <line1>,<line2>g/^* /let [c,d]=[line('.')==d+1 ? c+1 : 1, line('.')] | s//\=c.'. '

When :NumberedLists is executed, <line1> and <line2> will be automatically replaced with the range you used. So, to convert all the lists in the buffer, you would type:

:NumberedLists

Only the lists between lines 10 and 20:

:10,20NumberedLists

Only the visual selection:

:'<,'>NumberedLists

For more information, see:

:help :range
:help :global
:help :substitute
:help sub-replace-expression
:help list-identity (section "list unpack")
:help expr1
:help :command
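For comparison outside Vim (a hedged sketch, not part of the accepted answer), the same counter logic (increment while on a '* ' line, reset on anything else) maps directly onto an awk filter:

```shell
#!/bin/sh
# Hedged sketch: renumber '* ' bullets, restarting at 1 for each list,
# mirroring the reset-when-leaving-a-list behavior of the Vim command.
input='some random stuff
* foo
* bar
some other stuff
* baz'

result=$(printf '%s\n' "$input" | awk '
    /^\* / { c++; sub(/^\* /, c ". "); print; next }  # bullet line: number it
    { c = 0; print }                                  # non-list line: reset
')

echo "$result"
```

Because awk initializes c to zero, the first bullet of every list becomes "1." without any explicit setup.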
_unix.59417 | I compiled a short bash one-liner to focus a running application or launch it if it isn't running:#!/bin/bash#intellilaunch.shwmctrl -a $1 || $1 & disownexit 1The command exits perfectly fine when run directly from the command line::~$ wmctrl -a firefox || firefox & disown[1] 32505A quick check with the system monitor shows that only firefox is running. However, when I launch firefox via the script (./intellilaunch.sh firefox) it spawns a persisting new process called intellilaunch.sh firefox, which only exits after closing firefox.What am I doing wrong?Edit:I modified my script according to michas' suggestion:#!/bin/bashprogram=$(basename $1)if ! wmctrl -a $program; then $1&fiNot a one-liner anymore but it works perfectly fine now! | Why won't my bash script exit after execution? | bash;exit | I cannot reproduce this behavior on my system. From your description is sounds like there is a process not properly set to background.Try to run as bash -x intellilaunch.sh xclock, this should show, what is going on.Also || binds stronger than &, therefore you send the whole pipe in background. Maybe an explicit if would be a good idea.Your wmctrl -a firefox || firefox & disown ; exit 1is interpreted as( wmctrl -a firefox || firefox ) & disown ; exit 1whereas you probably meantwmctrl -a firefox || ( firefox & disown ) ; exit 1 Beause of that, bash will start two jobs one with wmctl and firefox - and another one with disown and exit. As the background job needs a short time to launch it will probably start the commands slighly later, that is why the output of bash -x seems to be in the wrong order. |
_unix.105988 | I'm running on a minimal Ubuntu server 12.04.3 install and I installed a d-link DWA-160 usb wifi adapter as per the instructions shown in this page.After successfully connecting using these instructions (I basically ping google to confirm that I'm connected), I try to run apt-get update but end up getting what appears to be a kernel panic every time I do. The connection does not seem stable during the update process. For instance, as I'm typing this I've tried again and it seems stuck at :9% [4 Release 3,980B/49.6 kB 8%][Waiting for headers][Waiting for headers]I usually get a kernel panic shortly thereafter. I'll try to provide info as needed. | Kernel panic on apt-get upgrade with DWA-160 | networking;apt;kernel panic | Given the symptoms (crashes when there's a lot of network traffic, and you happen to be using a custom network driver), it's a bug in the network driver.From the page you link:DWA 160 is also know to freeze under heavy network load. When this happens, the only solution is to unplug and replug the key. Till date this bug has not been corrected.Because of all that, this wifi key is not, at this time, a very good deal for Linux users.Report a bug to the providers of the driver. This isn't something that can be worked around, other than not using the driver or using a fixed version of the driver. |