_unix.199028
I'm developing a compute cluster and I'm trying to determine the maximum amount of RAM I can give to a single process. On a machine with 16GB of RAM, the answer is that I can allocate (and touch) 15680MB before the out-of-memory (OOM) killer is invoked (overcommit_memory=2, overcommit_ratio=0). That might seem halfway reasonable on a regular Linux distro, but I've compiled my own minimal kernel (v3.16), and my test app is running as /sbin/init - i.e. there is absolutely nothing else in memory. If I simulate a machine with just 512MB of RAM for comparison, I can allocate 468MB, an overhead of just 44MB, which seems very reasonable. So why is there so much more kernel memory on the 16GB machine? Reading the kernel docs, it would seem that some percentage of memory is reserved for the kernel for safety, which can be tuned via /proc/sys/vm/. But I've found that changing admin_reserve_kbytes and user_reserve_kbytes makes absolutely no difference in when the OOM killer is invoked. Is there something else that grows proportionally to the amount of RAM installed?
What proportion of memory does the Linux kernel use of the installed RAM?
linux kernel;memory
A fairly old (2006) paper discusses some of the kernel overhead that scales with memory size: http://halobates.de/memorywaste.pdf

From it:
- page struct (keeps track of page allocation): 1.37%
- page tables: 0.5%

Meanwhile, my 16GB test machine shows a total overhead of 4.3%, so around 2% is still unaccounted for. Thus, the question is still not completely answered, and the answer is based on a very old kernel. But this is the closest by far that I've come to answering the question.
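As a rough cross-check of those numbers, the struct page bookkeeping can be estimated directly. A minimal sketch, assuming 4 KiB pages and a 64-byte struct page (both vary with kernel version and configuration):

ram_bytes = 16 * 2**30      # 16 GiB machine
page_size = 4096            # 4 KiB pages, typical on x86-64
struct_page = 64            # bytes per struct page; config-dependent
overhead = (ram_bytes // page_size) * struct_page
print("%.0f MiB, %.2f%% of RAM" % (overhead / 2**20, 100.0 * overhead / ram_bytes))
# -> 256 MiB, 1.56% of RAM, in the ballpark of the paper's 1.37% figure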
_webapps.19227
How can I import a large list of tasks (.csv or .txt) into Trello and convert them into cards that are part of a list
How can I import a large list of tasks into Trello
trello;import;tasks
null
_codereview.143286
This is my first piece of network programming code, written for a client who has the following requirements:

- The server has to run 24x7x365 for multiple clients at the same time (concurrency).
- Their client (software running on the client machine) has some bugs which make it crash sometimes.
- When the software crashes, the server should be informed and the connection should be terminated, freeing any IDs associated with each connected client.

As of now, they are not able to identify whether the client-side software has really crashed or is still running. Once a client registers successfully with the server, they have no way to track its execution state at later stages. So I gave it some thought and managed to use threads to satisfy their requirement partially (the code is still under development). But is this really a fool-proof solution? Will this code be able to handle hundreds of clients and know about their statuses? I tried with around 10-20 client terminals and it worked perfectly. Can anyone review my code and let me know of errors or any needed modifications?

Here's my server code:

#include <stdio.h>
#include <string.h>      //strlen
#include <stdlib.h>      //strlen
#include <sys/socket.h>
#include <arpa/inet.h>   //inet_addr
#include <unistd.h>      //write
#include <pthread.h>     //for threading , link with lpthread

//the thread function
void *connection_handler(void *);

int main(int argc , char *argv[])
{
    int socket_desc , client_sock , c , *new_sock;
    struct sockaddr_in server , client;

    //Create socket
    socket_desc = socket(AF_INET , SOCK_STREAM , 0);
    if (socket_desc == -1)
    {
        printf("Could not create socket");
    }
    puts("Socket created");

    //Prepare the sockaddr_in structure
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = INADDR_ANY;
    server.sin_port = htons( 8888 );

    //Bind
    if( bind(socket_desc,(struct sockaddr *)&server , sizeof(server)) < 0)
    {
        //print the error message
        perror("bind failed. Error");
        return 1;
    }
    puts("bind done");

    //Listen
    listen(socket_desc , 3);

    //Accept an incoming connection
    puts("Waiting for incoming connections...");
    c = sizeof(struct sockaddr_in);

    //Accept an incoming connection
    /*puts("Waiting for incoming connections...");
    c = sizeof(struct sockaddr_in);*/
    while( (client_sock = accept(socket_desc, (struct sockaddr *)&client, (socklen_t*)&c)) )
    {
        puts("Connection accepted");

        pthread_t sniffer_thread;
        new_sock = malloc(1);
        *new_sock = client_sock;

        if( pthread_create( &sniffer_thread , NULL , connection_handler , (void*) new_sock) < 0)
        {
            perror("could not create thread");
            return 1;
        }

        //Now join the thread , so that we dont terminate before the thread
        //pthread_join( sniffer_thread , NULL); // was commented before
        puts("Handler assigned");
    }

    if (client_sock < 0)
    {
        perror("accept failed");
        return 1;
    }

    return 0;
}

/*
 * This will handle connection for each client
 * */
void *connection_handler(void *socket_desc)
{
    //Get the socket descriptor
    int sock = *(int*)socket_desc;
    int read_size;
    char *message , client_message[2000];

    //Receive a message from client
    while( (read_size = recv(sock , client_message , 2000 , 0)) > 0 )
    {
        //Send the message back to client
        write(sock , client_message , strlen(client_message));
        printf("%s\n",client_message);
    }

    if(read_size == 0)
    {
        puts("Client disconnected");
        fflush(stdout);
    }
    else if(read_size == -1)
    {
        perror("recv failed");
    }

    //Free the socket pointer
    free(socket_desc);
    close(sock);
    pthread_exit(NULL);
    return;
}

Here's my client:

#include <stdio.h>       //printf
#include <string.h>      //strlen
#include <sys/socket.h>  //socket
#include <arpa/inet.h>   //inet_addr

int main(int argc , char *argv[])
{
    int sock;
    struct sockaddr_in server;
    char message[1000] , server_reply[2000];

    //Create socket
    sock = socket(AF_INET , SOCK_STREAM , 0);
    if (sock == -1)
    {
        printf("Could not create socket");
    }
    puts("Socket created");

    server.sin_addr.s_addr = inet_addr("127.0.0.1");
    server.sin_family = AF_INET;
    server.sin_port = htons( 8888 );

    //Connect to remote server
    if (connect(sock , (struct sockaddr *)&server , sizeof(server)) < 0)
    {
        perror("connect failed. Error");
        return 1;
    }
    puts("Connected\n");

    //keep communicating with server
    while(1)
    {
        printf(" Type 'q' to end the session:\n");
        printf("Now type your message:");
        scanf("%s" , message);
        if( message[0] == 'q')
        {
            close(sock);
            return 0;
        }
        else{
            //Send some data
            if( send(sock , message , strlen(message) , 0) < 0)
            {
                puts("Send failed");
                return 1;
            }

            //Receive a reply from the server
            if( recv(sock , server_reply , 2000 , 0) < 0)
            {
                puts("recv failed");
                break;
            }
            puts("Server reply : ");
            puts(server_reply);
        }
    }

    close(sock);
    return 0;
}
Multithreaded Client/Server communication
c;socket;server;client;pthreads
null
_codereview.29963
I have some problems in my View.ascx.cs file because I keep reusing code and modifying it for each situation. For example, the pagination code is duplicated, with small variations, in several places. I just want my code to be simpler.

using System;
using DotNetNuke.Security;
using DotNetNuke.Services.Exceptions;
using DotNetNuke.Entities.Modules;
using DotNetNuke.Entities.Modules.Actions;
using DotNetNuke.Services.Localization;
using Christoc.Modules.ResourcesFilter.BOL;
using System.Data;
using System.Web.UI.WebControls;
using System.Collections.Generic;
using System.Web.Security;
using System.Web;
using DotNetNuke.Security.Permissions;
using System.Collections.Specialized;
using System.Web.UI;
using Christoc.Modules.ResourceModule.App_Code.BOL;

namespace Christoc.Modules.ResourcesFilter
{
    /// -----------------------------------------------------------------------------
    /// <summary>
    /// The View class displays the content
    ///
    /// Typically your view control would be used to display content or functionality in your module.
    public partial class View : ResourcesFilterModuleBase, IActionable
    {
        public ScriptManager sm;

        protected override void OnInit(EventArgs e)
        {
            sm = ScriptManager.GetCurrent(this.Page);
            sm.EnableHistory = true;
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            try
            {
                this.ModuleConfiguration.ModuleTitle = "";
                this.ModuleConfiguration.InheritViewPermissions = true;
                if (!IsPostBack)
                {
                    BindResourcesRepeater();
                    GetQueryString(HttpContext.Current.Request.Url);
                }
            }
            catch (Exception exc) //Module failed to load
            {
                Exceptions.ProcessModuleLoadException(this, exc);
            }
        }

        private void BindBookmarks()
        {
            DataSet ds = new DataSet();
            Guid userID = Guid.Parse(Membership.GetUser(HttpContext.Current.User.Identity.Name).ProviderUserKey.ToString());
            ds = Bookmark.Get_all_Bookmarks_for_user(userID);
        }

        private void GetQueryString(Uri tempUri)
        {
            if (HttpUtility.ParseQueryString(tempUri.Query).Get("IntrestFocus") != null)
            {
                ddlTopics.SelectedValue = HttpUtility.ParseQueryString(tempUri.Query).Get("IntrestFocus");
            }
            else if (HttpUtility.ParseQueryString(tempUri.Query).Get("Skills") != null)
            {
                ddlSkills.SelectedValue = HttpUtility.ParseQueryString(tempUri.Query).Get("Skills");
            }
            else if (HttpUtility.ParseQueryString(tempUri.Query).Get("Type") != null)
            {
                ddlTypes.SelectedValue = HttpUtility.ParseQueryString(tempUri.Query).Get("Type");
            }
        }

        public ModuleActionCollection ModuleActions
        {
            get
            {
                var actions = new ModuleActionCollection
                {
                    { GetNextActionID(), Localization.GetString("EditModule", LocalResourceFile), "", "", "", EditUrl(), false, SecurityAccessLevel.Edit, true, false }
                };
                return actions;
            }
        }

        private void BindResourcesRepeater()
        {
            string tag = Request.QueryString["tag"];
            if (String.IsNullOrEmpty(tag))
            {
                //Guid userID = Guid.Parse(Membership.GetUser(HttpContext.Current.User.Identity.Name).ProviderUserKey.ToString());
                DataSet ds = new DataSet();
                int selectedTopicID = Convert.ToInt32(ddlTopics.SelectedValue);
                int selectedSkillID = Convert.ToInt32(ddlSkills.SelectedValue);
                int selectedTypeID = Convert.ToInt32(ddlTypes.SelectedValue);
                string keyword = txtbKeyword.Text.Trim();
                int sortBy = Convert.ToInt32(ddlSortBy.SelectedValue);
                ds = Resource.Search_Resource(selectedTopicID, selectedSkillID, selectedTypeID, keyword, sortBy);
                PagedDataSource objPds = new PagedDataSource();
                objPds.DataSource = ds.Tables[0].DefaultView;
                objPds.AllowPaging = true;
                objPds.PageSize = 8;
                int curpage;
                if (ViewState["Page"] != null) { curpage = Convert.ToInt32(ViewState["Page"]); }
                else { ViewState["Page"] = 1; curpage = 1; }
                // Set the current index
                objPds.CurrentPageIndex = curpage - 1;
                rp_resList.DataSource = objPds;
                rp_resList.DataBind();
                if (objPds.IsFirstPage) { lnkPrev.Visible = false; lnkNext.Visible = true; }
                else if (!objPds.IsFirstPage && !objPds.IsLastPage) { lnkPrev.Visible = true; lnkNext.Visible = true; }
                else if (objPds.IsLastPage) { lnkNext.Visible = false; lnkPrev.Visible = true; }
                int numberOfItems = ds.Tables[0].Rows.Count;
                lbl_totalResult.Text = GetCurrentVisibleItemsText(numberOfItems, objPds.PageSize, objPds.CurrentPageIndex);
            }
            else
            {
                DataSet ds = new DataSet();
                int selectedTopicID = Convert.ToInt32(ddlTopics.SelectedValue);
                int selectedSkillID = Convert.ToInt32(ddlSkills.SelectedValue);
                int selectedTypeID = Convert.ToInt32(ddlTypes.SelectedValue);
                txtbKeyword.Text = tag;
                int sortBy = Convert.ToInt32(ddlSortBy.SelectedValue);
                ds = Resource.Search_Resource(selectedTopicID, selectedSkillID, selectedTypeID, tag, sortBy);
                PagedDataSource objPds = new PagedDataSource();
                objPds.DataSource = ds.Tables[0].DefaultView;
                objPds.AllowPaging = true;
                objPds.PageSize = 8;
                int curpage;
                if (ViewState["Page"] != null) { curpage = Convert.ToInt32(ViewState["Page"]); }
                else { ViewState["Page"] = 1; curpage = 1; }
                // Set the current index
                objPds.CurrentPageIndex = curpage - 1;
                rp_resList.DataSource = objPds;
                rp_resList.DataBind();
                if (objPds.IsFirstPage && objPds.Count < 8) { lnkPrev.Visible = false; lnkNext.Visible = false; }
                else if (objPds.IsFirstPage) { lnkPrev.Visible = false; lnkNext.Visible = true; }
                else if (!objPds.IsFirstPage && !objPds.IsLastPage) { lnkPrev.Visible = true; lnkNext.Visible = true; }
                else if (objPds.IsLastPage) { lnkNext.Visible = false; lnkPrev.Visible = true; }
                int numberOfItems = ds.Tables[0].Rows.Count;
                lbl_totalResult.Text = GetCurrentVisibleItemsText(numberOfItems, objPds.PageSize, objPds.CurrentPageIndex);
            }
        }

        private string GetCurrentVisibleItemsText(int numberOfItems, int pageSize, int currentPageIndex)
        {
            int startVisibleItems = currentPageIndex * pageSize + 1;
            int endVisibleItems = Math.Min((currentPageIndex + 1) * pageSize, numberOfItems);
            return string.Format("{0}-{1} of {2} resources", startVisibleItems, endVisibleItems, numberOfItems);
        }

        protected void rp_resList_ItemDataBound(object sender, System.Web.UI.WebControls.RepeaterItemEventArgs e)
        {
            DataSet ds_bookmarkUser = null;
            DataSet ds_LikesUser = null;
            if (HttpContext.Current.User.Identity.IsAuthenticated)
            {
                ds_bookmarkUser = Bookmark.Get_all_Bookmarks_for_user(Guid.Parse(Membership.GetUser(HttpContext.Current.User.Identity.Name).ProviderUserKey.ToString()));
                ds_LikesUser = Like.Get_all_Likes_for_user(Guid.Parse(Membership.GetUser(HttpContext.Current.User.Identity.Name).ProviderUserKey.ToString()));
            }
            if (e.Item.ItemType == ListItemType.Item || e.Item.ItemType == ListItemType.AlternatingItem)
            {
                Repeater childRepeater2 = (Repeater)e.Item.FindControl("rp_tagsTopics");
                if (childRepeater2 != null)
                {
                    //get the HiddenField from parent Repeater rp_resList
                    HiddenField hf_resID = (HiddenField)e.Item.FindControl("hf_resID");
                    ImageButton imgBtn_bookmark = (ImageButton)e.Item.FindControl("imgBtn_bookmark");
                    LinkButton lb_like = (LinkButton)e.Item.FindControl("lb_like");
                    HyperLink hl_download = (HyperLink)e.Item.FindControl("hl_download");
                    LinkButton lnkBtnTags = (LinkButton)e.Item.FindControl("lnkBtnTags");
                    imgBtn_bookmark.Visible = HttpContext.Current.User.Identity.IsAuthenticated;
                    lb_like.Visible = HttpContext.Current.User.Identity.IsAuthenticated;
                    hl_download.Visible = HttpContext.Current.User.Identity.IsAuthenticated;
                    //if user is Authenticated
                    int resID = Convert.ToInt32(hf_resID.Value);
                    //bind bookmark
                    if (ds_bookmarkUser != null)
                    {
                        foreach (DataRow row in ds_bookmarkUser.Tables[0].Rows)
                        {
                            if (resID == Convert.ToInt32(row["resourceID"]))
                            {
                                imgBtn_bookmark.ImageUrl = "~/DesktopModules/ResourcesFilter/img/favorite-star-yellow.png";
                                break;
                            }
                        }
                    }
                    //bind likes
                    if (ds_LikesUser != null)
                    {
                        foreach (DataRow row in ds_LikesUser.Tables[0].Rows)
                        {
                            if (resID == Convert.ToInt32(row["resourceID"]))
                            {
                                lb_like.Text = "Liked";
                                break;
                            }
                        }
                    }
                    string[] strArrTgas = Resource.Get_Tags_For_Resource(resID);
                    if (strArrTgas[0] == "")
                    {
                        childRepeater2.Visible = false;
                    }
                    else
                        childRepeater2.DataSource = strArrTgas;
                    childRepeater2.DataBind();
                }
            }
        }

        protected void btnSearch_Click(object sender, EventArgs e)
        {
            ViewState["Page"] = 1;
            DataSet ds = new DataSet();
            int selectedTopicID = Convert.ToInt32(ddlTopics.SelectedValue);
            int selectedSkillID = Convert.ToInt32(ddlSkills.SelectedValue);
            int selectedTypeID = Convert.ToInt32(ddlTypes.SelectedValue);
            string keyword = txtbKeyword.Text.Trim();
            int sortBy = Convert.ToInt32(ddlSortBy.SelectedValue);
            ds = Resource.Search_Resource(selectedTopicID, selectedSkillID, selectedTypeID, keyword, sortBy);
            PagedDataSource objPds = new PagedDataSource();
            objPds.DataSource = ds.Tables[0].DefaultView;
            objPds.AllowPaging = true;
            objPds.PageSize = 8;
            int curpage;
            if (ViewState["Page"] != null) { curpage = Convert.ToInt32(ViewState["Page"]); }
            else { ViewState["Page"] = 1; curpage = 1; }
            // Set the current index
            objPds.CurrentPageIndex = curpage - 1;
            rp_resList.DataSource = objPds;
            rp_resList.DataBind();
            //hide next & prev links
            if (objPds.IsFirstPage && objPds.Count < 8) { lnkPrev.Visible = false; lnkNext.Visible = false; }
            else if (objPds.IsFirstPage && objPds.Count > 8) { lnkPrev.Visible = false; lnkNext.Visible = true; }
            else if (!objPds.IsFirstPage && !objPds.IsLastPage) { lnkPrev.Visible = true; lnkNext.Visible = true; }
            else if (objPds.IsLastPage) { lnkNext.Visible = false; lnkPrev.Visible = true; }
            //ViewState["Page"] = 1;
            int numberOfItems = ds.Tables[0].Rows.Count;
            lbl_totalResult.Text = GetCurrentVisibleItemsText(numberOfItems, objPds.PageSize, objPds.CurrentPageIndex);
        }

        protected void btnReset_Click(object sender, EventArgs e)
        {
            ViewState["Page"] = 1;
            ddlSkills.SelectedValue = "0";
            ddlTopics.SelectedValue = "0";
            ddlTypes.SelectedValue = "0";
            ddlSortBy.SelectedValue = "1";
            txtbKeyword.Text = "";
            lbl_totalResult.Text = "";
            BindResourcesRepeater();
        }

        protected void LinkButton1_Click(object sender, EventArgs e)
        {
            LinkButton lnkBtnTags = (LinkButton)sender;
            Response.Redirect("~/WebsofWonder.aspx?tag=" + lnkBtnTags.Text);
        }

        protected void lnkPrev_Click(object sender, EventArgs e)
        {
            if (Convert.ToInt32(ViewState["Page"]) > 1)
            {
                // Set ViewState variable to the previous page
                ViewState["Page"] = Convert.ToInt32(ViewState["Page"]) - 1;
                // reload the control
            }
            sm.AddHistoryPoint("Currentpage", ViewState["Page"].ToString());
            BindResourcesRepeater();
        }

        protected void lnkNext_Click(object sender, EventArgs e)
        {
            // Set ViewState variable to the next page
            ViewState["Page"] = Convert.ToInt32(ViewState["Page"]) + 1;
            sm.AddHistoryPoint("Currentpage", ViewState["Page"].ToString());
            // reload the control
            BindResourcesRepeater();
        }

        protected void rp_resList_ItemCommand(object source, RepeaterCommandEventArgs e)
        {
            //get userID
            Guid userID = Guid.Parse(Membership.GetUser(HttpContext.Current.User.Identity.Name).ProviderUserKey.ToString());
            if (e.CommandName == "bookmark_res")
            {
                //get resource ID from HiddenField
                HiddenField hf = (HiddenField)e.Item.FindControl("hf_resID");
                ImageButton ib = (ImageButton)e.Item.FindControl("imgBtn_bookmark");
                int resID = Convert.ToInt32(hf.Value);
                //try to convert this block to a function
                Bookmark bm = new Bookmark();
                bm.UserID = Guid.Parse(Membership.GetUser(HttpContext.Current.User.Identity.Name).ProviderUserKey.ToString());
                bm.Resource.ResourceID = resID;
                //get bookmarkID to remove it from user bookmark and Group
                int bookmarkID = Bookmark.Insert(bm);
                if (bookmarkID == -1)
                {
                    bool confirmDelete = Bookmark.Delete_User_bookmark(resID, userID);
                    ib.ImageUrl = "~/DesktopModules/ResourcesFilter/img/favorite-star.png";
                }
                else
                {
                    ib.ImageUrl = "~/DesktopModules/ResourcesFilter/img/favorite-star-yellow.png";
                }
            }
            if (e.CommandName == "lb_like_Click")
            {
                //get resID
                HiddenField hf = (HiddenField)e.Item.FindControl("hf_resID");
                LinkButton lb_like = (LinkButton)e.Item.FindControl("lb_like");
                Like lke = new Like();
                lke.ResoursceID = Convert.ToInt32(hf.Value);
                lke.UserID = userID;
                int likeID = Like.Insert(lke);
                if (likeID == -1)
                {
                    bool confirmDelete = Like.Delete_User_Like(lke);
                    lb_like.Text = "Like";
                }
                else
                {
                    lb_like.Text = "Liked";
                }
            }
        }
    }
}
Pagination of repeater
c#;asp.net;pagination
null
_cogsci.9688
There are plenty of people who feel sad, sometimes even depressed, when it rains or storms. This effect is often featured in writing and song, where the weather is often a sign of something sinister ("It was a dark and stormy night" is a very clichéd beginning for darker stories). I'm curious as to why that is. I do understand that there are certain activities that can't be done when it rains, but there are rainy-day activities that take their place. What is it about the weather that makes people feel down?
Why does weather often affect mood?
mood
The answer is that we don't really know (for non-clinical populations). The relationship between weather and mood isn't straightforward, and the impact of weather on mood may be very small (on average). There have been some great experience-sampling studies that have looked at this. For example, in a sample of 1233, Denissen, Butalid, Penke, and van Aken (2008) found that:

- Daily weather had no significant effects on positive affect
- Higher temperature was associated with higher negative affect
- Lower wind power and lower sunlight were associated with higher negative affect
- Lower sunlight was associated with greater tiredness
- Being outdoors moderated the impact of some weather variables on affect
- The impact of weather on mood varied significantly across individuals
- The overall effect of weather on mood was very small

Kööts, Realo, and Allik (2011) found that:

- Higher temperature was associated with increased positive and negative affect
- More sunlight was associated with increased positive affect (but not with negative affect)
- Temperature and luminance were inversely related to feelings of fatigue
- Being outdoors interacted with weather variables to predict negative and positive affect
- Neuroticism may account for some individuals' greater reactivity to weather

Keller et al. (2005) found that:

- The interaction between more time spent outdoors and higher temperature and barometric pressure predicted greater positive affect. Interestingly: "Among participants who spent more than 30 min outside, higher temperature and pressure were associated with higher moods, but among those who spent 30 min or less outside, this relationship was reversed."

Conclusion: As stated, the effects of weather on mood aren't linear. Sunlight can boost serotonin levels, which may explain why spending time outdoors improves mood (even when controlling for physical activity). This is corroborated by studies using light therapy for SAD or other forms of depression. And, more obviously, we feel fatigued/tired as sunlight lessens. But overall, these effects aren't very big, and they vary quite a bit across individuals (indicating the effects of moderating variables like neuroticism).
_unix.43933
I'm running Debian Wheezy using gdm3 as the display manager and xfce4 as the desktop environment. Every so often when I log in, the desktop environment starts OK, but all windows are missing the title bar and are positioned at location (0,0). Usually logging out and back in, or restarting gdm3 and logging in, fixes it, but today it didn't. I switched to 'GNOME classic', which works fine, indicating it's probably a setting in xfce4 that has got corrupted. Short of just zapping every file in .config/ and .cache/ which refers to xfce, is there a simple fix/edit?
xfce4 loses title bars on all windows
debian;xfce;gdm3
Xfce's window manager crashed for some reason, and the simple fix is to run xfwm4 in a terminal. If you can't do it in X (because, e.g., the panels/menus are unavailable and Alt+F2 doesn't work), switch to a virtual console (e.g. Ctrl+Alt+F2), log in, and type DISPLAY=:0 xfwm4 --daemon. (Use Ctrl+Alt+F7 to switch back to X.)
_codereview.82558
I am generating a searchable and sortable data table using Bootstrap. All works fine. I am looking for the best approach to this. Can anyone please suggest improvements or a better way, if you find anything?

jsfiddle

var tableMaker = function (columNames, parent, data) {
    var table = $('<table />', {
            id: "example",
            "class": "table table-striped table-bordered"
        })
        .append($('<thead><tr></tr></thead>'))
        .append($('<tbody></tbody>'))
        .append($('<tfoot><tr></tr></tfoot>'));
    var thead = columNames.map(function (item) {
        return "<th>" + item + "</th>";
    });
    var tbody = data.map(function (item, index) {
        var tr = $('<tr />');
        columNames.map(function (label) {
            var label = label.replace(' ', '');
            var labelData = item[label];
            tr.append('<td>' + labelData + '</td>')
        });
        return tr;
    });
    table
        .find('thead tr')
        .append(thead).end()
        .find('tfoot tr')
        .append(thead).end()
        .find('tbody')
        .append(tbody)
    parent.append(table); //appending to parent
    table.dataTable(); //starting tabular data works
};

var initTable = function () {
    var columNames = ['Organization Name', 'ZipCode', 'Telephone' ];
    var data = $.getJSON('https://tcs.firebaseio.com/d/DocPageDetails/d/Organizations.json');
    data.done(function (data) {
        tableMaker(columNames, $('#container'), data);
    });
}
initTable();
Bootstrap Data Table generation
javascript;jquery;twitter bootstrap;jquery datatables
null
_codereview.166236
In my solution I've implemented transaction-like methods of uploading post media. For example, I'm adding a new post with a photo. I want to have 700x700, 200x200, 70x70, and 10x10 px sizes in my storage and the original in the device album. And when all images are uploaded, then I save the new post info to the database. But this code is not beautiful at all. How can I improve it?

// MARK: - Christmas tree
// Like transaction :)
// Bad view actually
static func uploadVideoForPost(with videoURL: URL, for postForUpdate: PostItem, screenShot: UIImage, completion: @escaping (_ hasFinished: Bool, _ postWithRefs: PostItem?) -> Void) {
    var post = postForUpdate // we will insert refs to media to this object
    // upload screenshots
    upload(screenShot, for: post, withSize: 700.0) { (hasFinishedSuccessfully, url) in
        if hasFinishedSuccessfully {
            post.mediaRef700 = url
            upload(screenShot, for: post, withSize: 200.0) { (hasFinishedSuccessfully, url) in
                if hasFinishedSuccessfully {
                    post.mediaRef200 = url
                    upload(screenShot, for: post, withSize: 70.0) { (hasFinishedSuccessfully, url) in
                        if hasFinishedSuccessfully {
                            post.mediaRef70 = url
                            upload(screenShot, for: post, withSize: 10.0) { (hasFinishedSuccessfully, url) in
                                if hasFinishedSuccessfully {
                                    post.mediaRef10 = url
                                    // upload video
                                    upload(with: videoURL, for: post) { (hasFinishedSuccessfully, url) in
                                        if hasFinishedSuccessfully {
                                            post.videoRef = url
                                            let path = videoURL.path
                                            if UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(path) {
                                                UISaveVideoAtPathToSavedPhotosAlbum(path, nil, nil, nil)
                                            }
                                            completion(true, post) // after this func i will save post data to db in other func.
                                        } else {
                                            completion(false, nil)
                                        }
                                    }
                                } else {
                                    completion(false, nil)
                                }
                            }
                        } else {
                            completion(false, nil)
                        }
                    }
                } else {
                    completion(false, nil)
                }
            }
        } else {
            completion(false, nil)
        }
    }
}

The upload func is just an upload function with a completion handler.
Uploading photos of various sizes
error handling;swift;ios;network file transfer;firebase
You can chain those async methods using Promises. So the code will look like this:

fetchUsers().then({ users in
    return users[0]
}).then({ firstUser in
    return fetchFollowers(of: firstUser)
}).then({ followers in
    self.followers = followers
}).catch({ error in
    displayError(error)
})

Take a look at these projects:

https://github.com/khanlou/Promise
https://github.com/mxcl/PromiseKit
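For illustration, the nested upload cascade from the question could then flatten into a linear chain along these lines. This is only a sketch: it assumes hypothetical promise-returning wrappers uploadP(_:for:withSize:) and uploadVideoP(with:for:) around the existing upload functions, which neither library provides out of the box:

uploadP(screenShot, for: post, withSize: 700).then({ url in
    post.mediaRef700 = url
    return uploadP(screenShot, for: post, withSize: 200)
}).then({ url in
    post.mediaRef200 = url
    return uploadP(screenShot, for: post, withSize: 70)
}).then({ url in
    post.mediaRef70 = url
    return uploadP(screenShot, for: post, withSize: 10)
}).then({ url in
    post.mediaRef10 = url
    return uploadVideoP(with: videoURL, for: post)
}).then({ url in
    post.videoRef = url
    completion(true, post)
}).catch({ error in
    completion(false, nil)
})

Every failure path collapses into the single catch block, which is the main win over the pyramid of completion handlers.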
_unix.182580
I'm trying to update the bash shell on my Mac OS Mavericks.

$ brew install bash
$ which -a bash
/bin/bash
/usr/local/bin/bash
$ which bash
/bin/bash
$ chsh -s /usr/local/bin/bash
$ which bash
/bin/bash

In Terminal's preferences: Shells open with -> Command (complete path): /usr/local/bin/bash. But still, I cannot switch to the brew-installed bash shell. What can I do?
Cannot change bash shell in Mac OS X
bash;shell;osx;homebrew
null
_softwareengineering.9313
Thought I'd post this to the best community of programmers I know. David Johnston, the new Governor General, has the digital world confused. Just what is the meaning of that 33-character-long string of ones and zeros that is emblazoned across the bottom of his fresh new Coat of Arms? According to the GG's website, "The wavy band inscribed with zeros and ones represents a flow of information, digital communication and modern media."

The binary is this:

110010111001001010100100111010011

It's not ASCII; is it just random?

Original article: http://www.cbc.ca/politics/insidepolitics/2010/10/the-new-ggs-binary-banner-whats-it-mean.html

I'll accept the correct (if it can be solved) answer or, failing that, the highest-voted answer.
Can you decipher this binary
binary
null
_codereview.108981
I have written a small library for saving data in local storage. I am still a learner and I am not sure whether my code is OK for a production-level application.

function AppStorage (aName) {
    "use strict";
    var prefix = aName;
    var saveItem = function (key, value) {
        if (!key || !value) {
            console.error("Missing parameters \"key\" and \"value\"");
            return false;
        }
        if (window.localStorage && window['localStorage']) {
            try {
                localStorage.setItem(prefix + '_' + key, JSON.stringify(value));
            } catch (e) {
                return false;
            }
        } else {
            return false;
        }
    }
    var getItem = function (key) {
        if (!key) {
            console.error("Missing parameter \"key\"");
            return false;
        }
        if (window.localStorage && window['localStorage']) {
            try {
                localStorage.getItem(prefix + '_' + key);
            } catch (e) {
                return false;
            }
        } else {
            return false;
        }
    }
    var removeItem = function (key) {
        if (!key) {
            console.error("Missing parameter \"key\"");
            return false;
        }
        if (window.localStorage && window['localStorage']) {
            try {
                return localStorage.removeItem(prefix + '_' + key)
            } catch (e) {
                return false;
            }
        } else {
            console.log("Browser doen not support HTML5 Web Storage");
        }
    }
    return {
        set: function (key, value) {
            return saveItem(key, value);
        },
        get: function () {
            return getItem(key, item);
        },
        remove: function () {
        }
    }
}

var as = new AppStorage ('MyApp');
Saving data in local storage
javascript;beginner;design patterns
Structure: Instead of having a function encompass everything, consider using a prototype constructor, or, if you can use ECMAScript 2015, use the class structure. By using the prototype structure your code would look something like the following:

var AppStorage = function(name){
    this.prefix = name;
}

AppStorage.prototype.saveItem = function(key, value){}
AppStorage.prototype.getItem = function(key){}
AppStorage.prototype.removeItem = function(key){}

var AS = new AppStorage('myName');
AS.getItem('thing');

Which removes the need for blocks like the following:

return {
    set: function (key, value) {
        return saveItem(key, value);
    },
    get: function () {
        return getItem(key, item);
    },
    remove: function () {
    }

Things to improve upon:

- console.error: instead of that, use throw new Error().
- There's a consistent use of extraneous empty lines; remove them. They're pointless and waste space.
- When you return the aliases for the functions, the remove function is left empty. If that is the desired case, then it should simply be removed; otherwise, you forgot to link the remove function.
- window.localStorage && window['localStorage']: uh, that's identical; you can access object properties via the . method or the [''] method.
- prefix + '_' + key: it would be easier/better to just append the underscore when you define the prefix initially.
- removeItem is the only function that doesn't just return false if localStorage does not exist. Consider incorporating the error message in every function, or just check for localStorage on initialisation.

With those changes in mind, here's your updated code:

var AppStorage = function(name) {
    if (!(this instanceof AppStorage)) {
        throw new Error("AppStorage must be invoked with new!");
    }
    if (!window.localStorage){
        throw new Error("Browser does not support HTML5 Web Storage");
    }
    this.prefix = name + '_';
}

AppStorage.prototype.save = function(key, value) {
    if (!key || !value) {
        throw new Error("Missing parameters \"key\" and \"value\"");
    }
    try {
        localStorage.setItem(this.prefix + key, JSON.stringify(value));
    } catch (e) {
        return false;
    }
};

AppStorage.prototype.get = function(key) {
    if (!key) {
        throw new Error("Missing parameter \"key\"");
    }
    try {
        localStorage.getItem(this.prefix + key);
    } catch (e) {
        return false;
    }
};

AppStorage.prototype.remove = function(key) {
    if (!key) {
        throw new Error("Missing parameter \"key\"");
    }
    try {
        return localStorage.removeItem(this.prefix + key);
    } catch (e) {
        return false;
    }
};

var as = new AppStorage('MyApp');
_unix.270537
I have a value stored in a variable that I want to use as part of a wildcard, like this:

set extension (somecommand)
cp *$extension ~/filezone

But the * prevents the variable from being dereferenced. How can I use the value stored in a variable as part of a wildcard?
How can I mix variables and wildcards in fish?
wildcards;variable substitution;fish
null
_unix.50782
This question follows on from "Unable to mount /home/ partition after reinstalling grub after reinstalling windows 7", where the diagnosis was that installing Windows 7 deleted my /home partition, lovingly called /dev/sda3. Since almost nothing has been done with this computer since the incident, we can expect that the content of the partition is still intact and that it is only unusable for the moment. The mission is to try to rescue the files that were inside this partition by restoring it to its original ext4 format. Does anyone know how to proceed?
how to restore a logical partition to its original ext4 format
partition;data recovery;ext4;restore
Right off the bat, make a dd disk image of the drive, and work with that instead of the drive itself. That lets you experiment.

dd if=/dev/sda3 bs=1M > sda3.img

Beyond that I'm not sure. I'd hit Google. Might look at it later.

Edit: http://www.cgsecurity.org/wiki/TestDisk looks promising.
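A sketch of that workflow, keeping the original image pristine (TestDisk's menus vary by version, so the steps after launch are approximate):

# work on a copy so experiments can't make things worse
cp sda3.img sda3-work.img

# point TestDisk at the image file instead of the disk
testdisk sda3-work.img
# then: Proceed -> select the partition table type -> Analyse / Advanced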
_softwareengineering.138822
We have priority and severity fields in our bug tracking system. We define severity as how a bug impacts the user, and priority as how it impacts the product. My question is about how to categorize a code improvement task in severity and priority. Suppose the improvement does not change any behavior but makes the code better. We anticipate a long-term maintenance improvement overall, but it's hard to quantify. When we use our definitions of priority and severity, a code improvement gets the lowest values for both, unless you introduce some hard-to-predict long-term benefits into the picture. Therefore it implies that code improvement is a frivolous task and should never be attempted. However, I believe it's crucial to constantly improve and refactor the code, because:

- Software development is itself a continuous learning process, and without improving the code you can't get better at it.
- A team should be proud of their code.
- Future maintenance will take less time, and over the long run the savings will be significant.

Or do you think such tasks should never be created, and such improvements performed only on demand, when associated with a bug? Even if it's associated with a bug, wouldn't that be a discussion point in a code review, e.g. "why did you make this drastic change in structure?"
How to determine the priority and severity of a code improvement?
project management;refactoring;issue tracking
Typically I do not like to view code improvement activities as a separate assignable task, because code improvement itself never directly brings you closer to completing user stories or requirements. This is why code improvement tasks will always be such low priority that they never get assigned. I see code improvement as a constant, something that every developer should be doing as naturally as typing on a keyboard. I factor it into my estimates for any task. If my task involves touching a class or some source code that hasn't been looked at in a long while, then I will assume that some janitorial work is probably in order and increase my estimate accordingly. Best case scenario, I finish the task early and can use the remaining time to improve the code or even the design. Worst case scenario, the task takes much longer than expected, but I have that extra time as a buffer.
_softwareengineering.235838
From what I understand, SVMs take a discrete number of x and y values to learn from, and when given new x values, map them to one y value (category). Is it possible to use SVMs, or something similar, to instead map x values to probabilities of y values? Let me give you an example: say your x values are arrays of two integers, x = [[1,1],[1,0],[0,1],[0,0]], and you have two categories, a and b, such that y = [a,a,b,b]; i.e. [1,1] and [1,0] map to a, while [0,1] and [0,0] map to b. Given an x value of [1,0.9], the SVM would probably predict the y value to be the category a; given another x value [1,0.89], the SVM would probably still predict the y value to be part of the a category.

This is what I am looking for: given x and y values as specified above, the predict function I am looking for would return an array of tuples, one per category in y, of the form (category, probability x was in category). For example, with the case above, the output would look something like this: [(a,.93),(b,.07)]

My application of this would be somewhat like a fuzzy logic system, using pseudocode:

if x is almost certainly in category a: do something
if x is likely to be in category a: do something else

Does a system like this already have a name? If not, how would I go about implementing something like this? I'm currently using scikit-learn in Python, so if there's something like this I could do with that library, that would be best.
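For what it's worth, scikit-learn's SVC can produce exactly this kind of output via Platt scaling. A sketch using the toy data above (with only four training points the calibrated numbers will be unreliable, but it shows the API):

from sklearn.svm import SVC

X = [[1, 1], [1, 0], [0, 1], [0, 0]]
y = ['a', 'a', 'b', 'b']

# probability=True turns on Platt scaling, which maps the SVM's
# raw decision values to per-class probabilities
clf = SVC(probability=True).fit(X, y)

probs = clf.predict_proba([[1, 0.9]])[0]
print(list(zip(clf.classes_, probs)))  # e.g. [('a', 0.9...), ('b', 0.0...)]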
Is it possible to get probabilities from a support vector machine?
python;machine learning;statistics
null
_softwareengineering.307509
A couple of colleagues and I are having a discussion regarding the following case: in an OrderStatus table we keep track of all the statuses an order goes through over time, including Pending, Available, Returned, etc. There's a timestamp field in the table to record when each status was added. However, there are two statuses that may be added at exactly the same time, one after the other, in which case both may have exactly the same timestamp (including milliseconds). In another part of the code, we need to check which status an order was last set to, and since we order by the timestamp we can easily get the wrong record from the database. The table has an IDENTITY(1,1) primary key, and my colleagues think we should order using that column, since it will always necessarily give you the latest record added to the table. I feel that an IDENTITY column should be devoid of any meaning and thus should not be used for any sorting, and I would rather do a Thread.Sleep(1) to make sure the timestamps are different so we can keep sorting by that field. What do you think is the best practice in this case?
Best practice for getting last record inserted in DB
c#;database;programming practices;sorting
Use numeric status values where a higher value represents a later state. Add that column to your index in descending order. Then pull back the result ordered by timestamp and status, both descending. This way the entry with the farthest progression is always returned.
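A sketch of the idea in T-SQL; the table and column names here (StatusValue in particular) are assumptions, not the original schema:

-- StatusValue is numeric: higher value = further along in the workflow,
-- so it breaks ties between rows sharing the same timestamp
SELECT TOP 1 *
FROM OrderStatus
WHERE OrderId = @OrderId
ORDER BY [Timestamp] DESC, StatusValue DESC;

-- supporting index with both sort columns descending
CREATE INDEX IX_OrderStatus_Latest
    ON OrderStatus (OrderId, [Timestamp] DESC, StatusValue DESC);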
_unix.136506
Here's a nice teaser for a shell scripting guru:

- Take a directory with several text files. There can be a few, up to ~1000.
- All files contain an identifier on a given line (always the same line).
- Identify which files have an identifier that is NOT UNIQUE, i.e. duplicated in other file(s) in the directory.
- Output or save the list of duplicates.

This is needed for a routine admin 'clean-up' of system-generated files that SHOULD be unique, but through user error may not be.
Shell script to search files for identical text entries
shell;text processing;duplicate
null
_datascience.9713
I'm using a Random Forest algorithm to construct a classification model, and I HAVE to check the accuracy of my RF model on the training sample. But as you can see in these answers:

https://stats.stackexchange.com/a/112052/90446
https://stats.stackexchange.com/a/66546/90446

you can't evaluate the accuracy on the training samples like this:

predict(model, data=train)

I'm not comfortable with the idea of using OOB to get the accuracy of the training sample, because the OOB samples were not used to build the model; how could this be right? I don't know what I should do to predict the fit on the training sample and get its accuracy. Is it possible, and does it make any sense? When I check the AUC of the prediction on my training sample I get something near 0.98, but the AUC on the test sample is about 0.7. Is this due to the limitations of prediction on the training sample, or due to overfitting?
Prediction in the training sample with randomforest in r
machine learning;r;predictive modeling;random forest;accuracy
null
_softwareengineering.324024
Sometimes we meet a situation where we should iterate (or map) over a collection, applying the same procedure (function) to all elements except the first one. The simplest example is finding the max element of a collection, but it is not a good one: the operation there is idempotent. A more realistic example is dumping a table in portions: on the first request we create a file with a header for the columns, then populate it. What is the common approach (or most idiomatic way) to implementing this pattern? I'm interested in both imperative and functional programming paradigms. A few variants of possible implementations (C-like pseudocode):

first (imperative)

funcFirst(collection[0]); //duplicate code if funcFirst is something1 + func() + something2
for (int i = 1; i < collection.size(); i++) {
    func(collection[i]);
}

second (imperative)

for (int i = 0; i < collection.size(); i++) {
    if (i == 0) //too many extra checks
        funcFirst(collection[0]);
    else
        func(collection[i]);
}

third (imperative)

for (int i = 0; i < collection.size(); i++) {
    if (i == 0) //too many extra checks
        initSomething(collection[0]); //double read of collection[0]
    func(collection[i]);
}

first (functional)

funcFirst(collection.first()) //duplicate code if funcFirst is something1 + func() + something2
collection.drop(1).map( func )

second (functional)

collection.iterateWithIndex.map (if index == 0 funcFirst else func)

third (functional)

collection.fold (init(collection.first), func) //double read of collection[0]
What is the most idiomatic way to iterate collection with different action for first element?
functional programming;collections;iterator;map;imperative programming
I would try something like this in Scala for mapping the first element differently:

def apply[T,X](f: T=>X, g: T=>X, l: Seq[T]): Seq[X] = l match {
  case Nil => Nil
  case head :: tail => f(head) +: apply(g,g,tail)
}

> apply((x: Int)=> x*x, (x: Int)=> -x, List(5,4,3,2,1))
res1: Seq[Int] = List(25, -4, -3, -2, -1)

The idea is to do nothing if the list is empty. Otherwise, one replaces the head of the list by a function of this head: head becomes f(head), and the rest of the list is mapped using the other function g, whatever the element is on the head of the tail of the list. As @jk noted, the recursive call to apply could be replaced by a map. However, my approach is a little bit more flexible (it was not required for answering the original question), since one can alternate the functions to apply (f for odd elements, g for even elements, for instance), or one can decide which function should be applied to the next element (if any) depending on an arbitrary predicate on the current element.
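For reference, the map-based variant mentioned above might look like this. A sketch: it trades the recursion, and with it the alternating/predicate flexibility, for brevity, and it uses the +: extractor so it also matches non-List Seqs:

def applyMap[T, X](f: T => X, g: T => X, l: Seq[T]): Seq[X] = l match {
  case Seq()        => Seq.empty
  case head +: tail => f(head) +: tail.map(g)
}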
_reverseengineering.6128
I have been fuzzing Adobe Reader for a while now. One of the issues that I face is running multiple instances of the same application. If I were able to run multiple instances of the same application, I would be utilizing my CPU cycles efficiently. But the issue is that applications like Adobe Reader do not allow multiple instances. Is there any way by which I can run multiple instances of an application that does not support them, and fuzz those instances efficiently? Actually, I'm trying to find ways by which I can achieve my goals. One idea is to hook functions. Please share your views and opinions.
Run multiple instances of same application - Adobe Reader
fuzzing;vulnerability analysis;multi process
null
_codereview.84793
I needed several classes with methods performing network requests which should be executed asynchronously (with callbacks). To get rid of repetition, I added a mixin and a helper class:

module Asyncable
  def async(&callback)
    AsyncHelper.new(self, &callback)
  end
end

class AsyncHelper
  attr_reader :base, :callback, :errback

  def initialize(base, &callback)
    @base = base
    @callback = callback
    @errback = nil
  end

  def onerror!(&errback)
    @errback = errback
    self
  end

  def method_missing(sym, *args)
    method = @base.method(sym)
    Thread.new do
      begin
        res = method.call(*args)
      rescue => e
        if @errback.nil?
          raise
        else
          @errback.call(e)
        end
      else
        unless @callback.nil?
          @callback.call(res)
        end
        res
      end
    end
  end
end

Now all methods in classes can be implemented as synchronous, and then used in these ways:

# assuming base is an instance of a class including Asyncable
# sequential execution of a method:
value = base.method(params)
f(value) # process the value

# same method executed asynchronously, without callback:
thread = base.async.method(params)
# ... other operations while the method is executed in the background ...
value = thread.value # blocks until completed and raises errors, if any
f(value)

# same method executed asynchronously, with callback:
thread = base.async { |value| f(value) }.method(params)
# ... other operations while the method is executed in the background ...
thread.join # blocks until completed and raises errors, if any

# using errback to handle errors:
thread = base.async { |value| f(value) }.onerror! { |e| puts e }.method(params)
# ... other operations while the method is executed in the background ...
thread.join # only blocks, no errors raised here

This works perfectly for me, but I am a bit concerned by the fact that such an approach isn't widely used (I haven't seen it anywhere), while the use-case for it may be common enough. Is it a good way to approach the task, and how can it be improved?
Asynchronous execution
ruby;multithreading;asynchronous
null
_unix.164710
I have this udev rule that launches a script when the battery level is 5% or below:

$ cat /etc/udev/rules.d/90-lowbat.rules:

SUBSYSTEM=="power_supply", ATTR{status}=="Discharging", ATTR{capacity}=="5", RUN+="/opt/bin/battery-low.sh"
SUBSYSTEM=="power_supply", ATTR{status}=="Discharging", ATTR{capacity}=="4", RUN+="/opt/bin/battery-low.sh"
SUBSYSTEM=="power_supply", ATTR{status}=="Discharging", ATTR{capacity}=="3", RUN+="/opt/bin/battery-low.sh"
SUBSYSTEM=="power_supply", ATTR{status}=="Discharging", ATTR{capacity}=="2", RUN+="/opt/bin/battery-low.sh"
SUBSYSTEM=="power_supply", ATTR{status}=="Discharging", ATTR{capacity}=="1", RUN+="/opt/bin/battery-low.sh"
SUBSYSTEM=="power_supply", ATTR{status}=="Discharging", ATTR{capacity}=="0", RUN+="/opt/bin/battery-low.sh"

This is the script:

$ cat /opt/bin/battery-low.sh:

#!/bin/bash
# Critical battery level (acpi reports it at 5%)
CRITICAL=5
battery_level=`acpi -b | grep -o [0-9]*% | sed s/%//`
if [ ! $battery_level ]
then
    exit
fi
if [ $battery_level -le $CRITICAL ]
then
    if acpi -a | grep 'off-line'
    then
        # First warning
        sudo -u andreas DISPLAY=:0.0 notify-send -u critical "GIMME POWER ... will shut down in 60 sec"
        sleep 60s
        if acpi -a | grep 'off-line'
        then
            # Second warning
            sudo -u andreas DISPLAY=:0.0 notify-send -u critical "... shutting down"
            sleep 2s
            # This is the path to systemctl in Debian
            /bin/systemctl hibernate
        fi
    fi
fi

When the power is at 5% or below, the script is executed and I get the first warning. However, the last part of the script isn't executed. With the machine still unplugged I don't get the second warning, and the computer doesn't hibernate. Loads of things could of course be wrong with the script, but the funny thing is that if I run the script with sudo ./battery-low.sh (when the battery level is 5% or below and the machine is unplugged), everything works: I get the two warnings and the computer hibernates after approximately 62 sec. Does udev launch the script in a way that is different from when I launch the script manually? If yes, how so?
Udev running hibernation script
shell script;udev;hibernate
null
_cs.21913
Recently I was studying the removal of useless symbols in productions, as given in Hopcroft and Ullman. The grammar goes as follows:

S -> aAa | aBC
A -> aS | bD
B -> aBa | b
C -> abb | DD
D -> aDa

In the explanation that follows, we obviously eliminate D, but the removal of C still baffles me: D is non-generating, but C is both generating and reachable. So why delete C?

The resultant grammar is shown as:

S -> aAa
A -> aS
B -> aBa | b

Here are links to photos of the pages in the book, just in case:
Page 240: http://tinypic.com/r/17wcw4/8
Page 241: http://tinypic.com/r/29w1i82/8
Simplification of CFG
context free;formal grammars
null
_unix.101347
For LUKS devices I know that hashes are stored somehow in the partition header (I don't really know what this means). But I don't know how to print the hash value in this case. For example in a standard unix system the user password hashes are stored in /etc/shadow. If I want to see a hash of a password I can just open this file and see it. So, how can I extract the hash value of a LUKS device?
How can I extract the hash value of a LUKS device?
luks;dm crypt
null
_webapps.30074
I have a private Trello board that I want to make invisible to the public, but visible to those that have the link to it. Is this possible? If so, how would I do it?
Make Trello board private, but visible to those with link
trello;links;permissions
null
_hardwarecs.2771
I am about to purchase a Samsung 950 Pro PCIe SSD. I have an MSI 970A motherboard. I am wondering if I can insert the 950 Pro stick directly into one of the x16 slots. I read online that PCIe slots are backwards compatible (so an x4 device should work in an x16 slot). Is this possible? Do I need a converter? Also, I don't have an M.2 slot on my motherboard, and I noticed I have PCIe 2.0 slots, not 3.0 slots. Will my SSD still work?
Can I use my Samsung 950 pro PCIe SSD directly into my x16 slot?
ssd;pcie
The linked drive may use a PCIe interface, but it is not a PCIe form-factor card. It is an M.2 form-factor device and requires an M.2 slot. Without some sort of adapter, it will not work in your M.2-lacking motherboard. The M.2 form factor is designed to connect to multiple interfaces (SATA and PCIe) over the same physical connector. Depending on the capabilities of the connector and device, they're keyed differently to prevent you from attempting to connect incompatible devices (a PCIe-only device in a SATA-only M.2 port, for instance). The adapter you linked should do the job. As far as PCIe 2.0 vs. PCIe 3.0 goes, PCIe 3.0 devices are backwards compatible; they'll just run slower. As far as lane count goes, you can use a card that demands fewer lanes (x4 vs. x16) in a slot with more lanes; the other pins aren't on the card, and therefore do nothing. Going the other direction requires paying more attention: some cards need the bandwidth of all the lanes they're spec'd for, and some boards have things like x8 electrical slots in x16 mechanical slots for security/fit purposes (there are missing pins on the board end).
_webmaster.56855
I am working on an AJAX site and have successfully implemented Google's AJAX crawling recommendation by creating _escaped_fragment_ versions of each page for it to index. Thus each page has 2 URLs:

pretty: example.com#!blog
ugly: example.com?_escaped_fragment_=blog

However, I have noticed in my analytics that some users are arriving on the site via the ugly URL, and I am looking for a clean way to redirect them to the pretty URL without impacting Google's ability to index the site. I have considered using a 301 redirect in the header, but fear that Googlebot might try to follow it and end up in an endless loop. I have also considered using a JavaScript redirect that Googlebot wouldn't execute, but fear that Google may interpret this as cloaking and penalize the website. Is there a good, clean, acceptable way to redirect real users away from the ugly URL if for some reason or another they end up arriving at the site that way?
Best way to redirect users back to the pretty URL who land on the _escaped_fragment_ one?
seo;redirects;google search;ajax
null
_webmaster.92835
I'm wondering if, at the server level, I can disable the ability for scripts to override error_reporting and ini_set('display_errors', 1). Edit: I know how to disable errors using .htaccess and on a script-by-script basis. What I am trying to do is disable the ability to override error settings entirely, so our developers actually fix their issues instead of hiding behind error_reporting(0); ini_set('display_errors', 0);
Is it possible to entirely disable the ability to set error_reporting?
php;error reporting
You should be able to use the disable_functions feature built into PHP. That will prevent them from even calling ini_set or error_reporting in the first place. Add to php.ini:

disable_functions = error_reporting,ini_set,shell_exec,etc...
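As a quick sanity check that the lockdown took effect (the exact warning text varies across PHP versions):

$ php -r 'ini_set("display_errors", "0");'
Warning: ini_set() has been disabled for security reasons in Command line code on line 1

Note that disable_functions can only be set in php.ini itself, not per-script, which is exactly what keeps developers from working around it.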
_unix.333957
Is it true to conclude that when using a bash here-doc like bash << HEREDOC, then always and without exception, shebang lines like #!/bin/bash -x are redundant? If I had to bet, I would bet that yes, they are redundant, and could serve only to organize the information, like a sign telling new users "the following set of commands was originally executed from a traditional script; don't run them on one line with double ampersands or another similar way". I wonder what the experts say: is the bash shebang line really totally redundant when using a here-doc?
Bash here-documents and shebang lines
shell script;here document;shebang
Yes, in this case.

Summary: The hash-bang line (also called shebang and other things) is only ever needed if it's in an executable script that is run without explicitly specifying its interpreter (just as when executing a binary executable file).

In the case of your code, you're explicitly running the script within the here-document with bash already. There's no need to add the hash-bang, as it will be treated as a comment.

Since the here-document script seems to want to be executed with the -x option set (judging from #!/bin/bash -x), you will have to use set -x inside the here-document to get the same effect as if running the here-document as its own script, again, because the hash-bang is treated as a comment.

The hash-bang line is used when executing an executable text file. The line tells the system what interpreter to use to execute it, and optionally allows you to specify one argument to that interpreter (-x in your case).

You do need the hash-bang there if you're writing the here-document to a file that will later be used as a script. For example:

cat >myscript.sh <<'END_OF_SCRIPT'
#!/bin/bash
# contents of script
# goes here
END_OF_SCRIPT

Such a file also has to be made executable (chmod +x myscript.sh). But again, if you were to explicitly execute that script with bash, for example through

$ bash ./myscript.sh

or

$ bash -x ./myscript.sh

(or equivalently from within another script), then no hash-bang is needed, and the script would not have to be made executable.

It all comes down to how you would want to execute the script.

See also the Wikipedia entry for Shebang.
_webmaster.96568
I'm wondering how it's possible to get Google to show stats below the meta description like in the image below. I've scanned the source code of the site for any sign of structured data and didn't see anything. I also put the site into Google's Structured Data Testing Tool, and it didn't find anything either. The only thing I see is an HTML table in the source code that contains this information. Given that, from what I understand, tables are best used sparingly these days, I find it surprising that Google would provide a benefit to such a structure.
How Can I Get Stats Showing in Google Search Results Meta Description?
seo
null
_datascience.20198
I have a set of documents as given in the example below.

doc1 = {'Science': 0.7, 'History': 0.05, 'Politics': 0.15, 'Sports': 0.1}
doc2 = {'Science': 0.3, 'History': 0.5, 'Politics': 0.1, 'Sports': 0.1}

I want to cluster the documents and identify the most prominent document within each cluster. E.g., cluster 1 = {doc1, doc4, doc5, doc8}, and I want to get the most prominent document that represents this cluster (e.g., doc8), or to identify the main theme of the cluster. Please let me know a suitable approach to achieve this :)
Cluster documents and identify the prominent document in the cluster?
machine learning;data mining;clustering
A very simple approach would be to find some kind of centroid for each cluster (e.g. averaging the distributions of the documents belonging to each cluster respectively) and then calculate the cosine distance of each document within the cluster from the corresponding centroid. The document with the shortest distance will be the closest to the centroid, hence the most representative. Continuing from the previous example:

import pandas as pd
import numpy as np
from sklearn.metrics import pairwise_distances
from scipy.spatial.distance import cosine
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Initialize some documents
doc1 = {'Science':0.8, 'History':0.05, 'Politics':0.15, 'Sports':0.1}
doc2 = {'News':0.2, 'Art':0.8, 'Politics':0.1, 'Sports':0.1}
doc3 = {'Science':0.8, 'History':0.1, 'Politics':0.05, 'News':0.1}
doc4 = {'Science':0.1, 'Weather':0.2, 'Art':0.7, 'Sports':0.1}
collection = [doc1, doc2, doc3, doc4]
df = pd.DataFrame(collection)
# Fill missing values with zeros
df.fillna(0, inplace=True)

# Get Feature Vectors
feature_matrix = df.as_matrix()

# Fit DBSCAN
db = DBSCAN(min_samples=1, metric='precomputed').fit(pairwise_distances(feature_matrix, metric='cosine'))
labels = db.labels_
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)

# Find the representatives
representatives = {}
for label in set(labels):
    # Find indices of documents belonging to the same cluster
    ind = np.argwhere(labels==label).reshape(-1,)
    # Select these specific documents
    cluster_samples = feature_matrix[ind,:]
    # Calculate their centroid as an average
    centroid = np.average(cluster_samples, axis=0)
    # Find the distance of each document from the centroid
    distances = [cosine(sample_doc, centroid) for sample_doc in cluster_samples]
    # Keep the document closest to the centroid as the representative
    representatives[label] = cluster_samples[np.argsort(distances),:][0]

for label, doc in representatives.iteritems():
    print("Label : %d -- Representative : %s" % (label, str(doc)))
_datascience.15163
In neural networks and older classification methods, we usually construct an objective function to achieve dimensionality reduction. But Deep Belief Networks (DBN) with Restricted Boltzmann Machines (RBM) learn the data structure through unsupervised learning. How do they achieve dimensionality reduction without knowing the ground truth and without constructing an objective function?
How is dimensionality reduction achieved in Deep Belief Networks with Restricted Boltzmann Machines?
deep learning;dimensionality reduction;rbm
null
_unix.148534
I'm using the GNU date command to parse arbitrary natural language dates. For example, to detect if a certain epoch time stamp is from last week, I can do:

if [ "$timestamp" -lt "$(date +%s -d 'last sunday')" ]; then ...

Now when I'm trying to port my script to FreeBSD, how do I achieve the same functionality? man date didn't show any promise, unless I missed something obvious.
Parsing arbitrary natural language dates under BSD
freebsd;date
There are a number of commands on FreeBSD that use the same API as GNU date to input natural language dates from the user. I've just found one that can be tricked into converting such a date into Unix epoch time:

/usr/sbin/fifolog_reader -B 'last sunday' /dev/null 2>&1 |
  sed -n 's/^From[[:blank:]]*\([0-9]*\).*/\1/p'

(note that at least on FreeBSD 9.1-RELEASE-p2, where I tried it, it only seems to work reliably if you're in a UTC timezone, and the date specifications it recognises are not necessarily the same as those recognised by GNU date).

Note that some shells have that capability built in.

ksh93:

if (( timestamp < $(printf '%(%s)T' 'last sunday') )); then

zsh:

autoload calendar_scandate
calendar_scandate 'last sun'
if (( timestamp < REPLY )); then
...

Or you could use perl and the Date::Manip module if installed:

last_sun=$(perl -MDate::Manip -e 'print UnixDate("last sunday", "%s")')
if [ "$timestamp" -lt "$last_sun" ]; then
...

Or:

if perl -MDate::Manip -e 'exit 1 unless $ARGV[0] < UnixDate("last sunday", "%s")' "$timestamp"; then
...

If the aim is to check file timestamps, then note that FreeBSD find supports:

find file -prune -newermt 'last sunday'

In this very case, if you want the time of the beginning of this week (weeks starting on Sunday), you can do:

week_start=$(($(date '+%s - 86400*%w - 3600*%H - 60*%M - %S')))

That should work on both GNU and FreeBSD (or any system where %s is supported). In timezones with summer time, it will be off by an hour around the switch from/to summer time.
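If Python happens to be installed (it is not part of the FreeBSD base system, so this is an assumption), the same epoch value can be computed there; a minimal sketch reproducing GNU date's "last sunday" semantics (the most recent Sunday strictly before today, at local midnight):

```sh
last_sun=$(python -c '
import datetime, time
d = datetime.date.today()
d -= datetime.timedelta(days=d.weekday() + 1)   # Monday=0 ... Sunday=6
print(int(time.mktime(d.timetuple())))
')
if [ "$timestamp" -lt "$last_sun" ]; then ...
```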
_unix.80399
I have a file with a KEYWORD on line number n. How can I print all lines starting from line n+1 until the end? For example, here I would like to print only lines DDD and EEE:

AAA
BBB
CCC
KEYWORD
DDD
EEE
cat all lines from file which are after KEYWORD
linux;bash;cat
You can do this with sed:

sed '1,/^KEYWORD$/d'

This will delete (omit) all lines from the beginning of the stream up to and including the KEYWORD line.
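For comparison, an awk sketch of the same idea (assuming, as in the sed version, that KEYWORD occupies the whole line):

```sh
awk 'flag; /^KEYWORD$/ { flag = 1 }' file
```

The bare flag pattern only starts printing once the KEYWORD line has set it, so KEYWORD itself is omitted.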
_unix.227031
I am using haskell-vim-now, which has many features I like when editing Haskell. However, it also sets a colorscheme I do not like. In my ~/.vimrc.local I have already reset to the default colorscheme. However, the statusline still has some funny colors (screenshot omitted).

What are the names of the fields that are not white-on-black, and how can I reset them? I already tried

hi StatusLine guibg=black guifg=white ctermbg=black ctermfg=white
hi link StatusLine StatusLineNC
hi link StatusLine ModeMsg
hi link StatusLine MoreMsg

But that does not help.
Change colors of statusline in haskell-vim-now
vim;vimrc;haskell
null
_codereview.162279
I need some advice about my code.

The following code will create a new object the first time, and also force other threads to wait for the creation. When the object is created, we return the cached value, and when the value is about to expire, we renew it without blocking other threads; the reference is replaced by the new value.

I'm not sure about the SemaphoreSlim & ManualResetEvent parts, maybe somebody can simplify this code.

Thanks.

public class UpdatableLazy<T>
{
    public class Container<T>
    {
        public readonly T Value;
        internal Container(T value)
        {
            Value = value;
        }
    }

    private readonly Func<Task<T>> _updateFunc;
    private readonly Func<T, bool> _isRenewNeeded;
    private volatile Container<T> _container;
    private volatile bool _isUpdating;
    private readonly SemaphoreSlim _ss = new SemaphoreSlim(1, 1);

    public UpdatableLazy(Func<Task<T>> updateFunc, Func<T, bool> isRenewNeeded)
    {
        _updateFunc = updateFunc;
        _isRenewNeeded = isRenewNeeded;
    }

    private readonly ManualResetEventSlim _mre = new ManualResetEventSlim();

    public async Task<T> GetValueAsync()
    {
        //The value doesn't exist yet
        if (_container == null)
        {
            //If a thread is already creating the value, we will wait for the value
            if (!await _ss.WaitAsync(0).ConfigureAwait(false))
            {
                _mre.Wait();
            }

            //After the release of the ManualResetEvent, the value should be available
            if (_container != null)
            {
                return _container.Value;
            }

            try
            {
                //Let's create the value
                _container = new Container<T>(await _updateFunc().ConfigureAwait(false));
                return _container.Value;
            }
            finally
            {
                //We tell awaiting threads that the value is available
                _mre.Set();
                _ss.Release();
            }
        }

        //If the value is not updating and we need to update, we replace the old value by a new one
        if (!_isUpdating && _isRenewNeeded(_container.Value))
        {
            _isUpdating = true;
            try
            {
                _container = new Container<T>(await _updateFunc().ConfigureAwait(false));
            }
            finally
            {
                _isUpdating = false;
            }
        }
        return _container.Value;
    }
}

How I tested it:

class SomeObject
{
    public string Value = "Hello";
    public DateTime Expire = DateTime.UtcNow.AddSeconds(5);
}

class Program
{
    private static int _count;

    static void Main(string[] args)
    {
        var iteration = 10000;
        var cachedObj = new UpdatableLazy<SomeObject>(Create, o => DateTime.UtcNow > o.Expire);

        Parallel.For(0, iteration, new ParallelOptions { MaxDegreeOfParallelism = 2048 }, async i =>
        {
            await cachedObj.GetValueAsync().ConfigureAwait(false);
            Interlocked.Increment(ref _count);
        });

        if (iteration != _count)
            throw new Exception("_count and iteration should be equal");

        Console.WriteLine("Done");
        Console.ReadKey();
    }

    static async Task<SomeObject> Create()
    {
        await Task.Delay(1000).ConfigureAwait(false);
        return new SomeObject();
    }
}
Caching a value just once and updating it without blocking other threads
c#;multithreading
null
_cs.2690
Consider the set of graphs in which the maximum degree of the vertices is a constant number $\Delta$ independent of the number of vertices. Is the vertex coloring problem (that is, color the vertices with minimum number of colors such that no pair of adjacent nodes have the same color) on this set still NP-hard? Why?
Vertex coloring with an upper bound on the degree of the nodes
algorithms;complexity theory;graph theory;graphs;np complete
Yes, it's still NP-hard. 3-COL is still NP-complete for planar degree 4 graphs [1].

[1] Dailey, D. P. (1980), "Uniqueness of colorability and colorability of planar 4-regular graphs are NP-complete", Discrete Mathematics 30 (3): 289–293, DOI:10.1016/0012-365X(80)90236-8
_scicomp.6750
Consider 2 mathematical problems:
$$f_1(x) = a - x \\ f_2(x) = e^x - 1$$
The condition number for a function is defined as follows:
$$k(f) = \left| x \cdot \frac{f'}{f} \right|$$
Let's analyze conditioning first:
$$k(f_1) = \left| \frac{x}{x - a} \right|,$$
which means that $f_1$ is ill-conditioned near $x = a$;
$$k(f_2) = \left| \frac{x \cdot e^x}{e^x - 1} \right|,$$
which is undefined at $x = 0$, so let's use L'Hospital's rule:
$$k(f_2) = \left| \frac{e^x + x \cdot e^x}{e^x} \right|,$$
which means that $f_2$ is well-conditioned everywhere (including the proximity of $x = 0$).

Now let's analyze the stability of these 2 algorithms (if we were to implement them on the computer directly):
$$\frac{(a - x) \cdot (1 + \epsilon_1) - (a - x)}{a - x} = \epsilon_1,$$
where $\epsilon_1 \leq \epsilon_m$, which means that no (numerical) amplification of errors occurs and the algorithm is stable;
$$\frac{(e^x \cdot (1 + \epsilon_1) - 1) \cdot (1 + \epsilon_2) - (e^x - 1)}{e^x - 1} \approx \{\epsilon_1 \cdot \epsilon_2 \to 0\} \approx \frac{e^x \cdot \epsilon_1 + (e^x - 1) \cdot \epsilon_2}{e^x - 1} = \epsilon_2 + \epsilon_1 \cdot \frac{e^x}{e^x - 1},$$
where $\epsilon_1, \epsilon_2 \leq \epsilon_m$, which means that the algorithm is unstable near $x = 0$.

Although everything is all right from the mathematical point of view, i.e. if we obey the formulas and raw theory when obtaining such results, I begin to doubt the validity of these results when I try to add some logic and reasoning behind them.

First of all, as far as I understand, when we study the conditioning of a mathematical problem we think of it in exact arithmetic (i.e. we do not think about computers, rounding, floating-point arithmetic, etc.). Therefore, if I forget for a moment about the result obtained by analyzing $k(f_1)$ and just look at the simple mathematical problem $f_1(x) = a - x$, then I merely don't see how on earth it could be ill-conditioned near $x = a$. What is the physical reasoning behind it? What kind of bad thing can happen in exact arithmetic near $x = a$?

My curriculum pointed out cancellation error as an explanation. What kind of cancellation error? From my point of view, there is no such thing as cancellation error in exact arithmetic...

So, my reasoning against it would be straightforward: since ill-conditioning implies that the output changes drastically when the input changes slightly, and $f_1$ is clearly linear (moreover with coefficient $1$), any slight change of $x$ (regardless of whether it is near $a$ or not) will always result in a quantitatively equal change of $y = f_1(x)$ (i.e. $\Delta x \equiv \Delta y$). Therefore, I insist that $f_1$ cannot be ill-conditioned, either at $x = a$ or anywhere else.

Are there any flaws in my reasoning? Please clarify this for me, as I'm actually stuck on it.

Secondly, why does it turn out that $f_2$ is well-conditioned while it looks the same as $f_1$? I mean, if I follow the same logic as for $f_1$ (i.e. that it's ill-conditioned at $x = a$ because of cancellation, as my curriculum states), then I could say the same here (ill-conditioned at $x = 0$) just by looking at the definition of $f_2$! However, mathematics shows us that this is not true, but rather that the direct implementation of the $f_2$ evaluation on a computer would result in an unstable algorithm. Due to what? I guess now it's cancellation error, because we are in the floating-point world now. But why?

And still the question is why these two seemingly similar problems are actually so different?
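For concreteness, here is a quick numerical sketch of the instability I am describing (math.expm1 is the standard library's stable alternative, shown only for comparison):

```python
import math

x = 1e-12
naive = math.exp(x) - 1.0    # subtracts two nearly equal numbers: cancellation
stable = math.expm1(x)       # evaluates e**x - 1 without forming e**x first

print(naive)                 # loses roughly four significant digits for this x
print(stable)                # accurate to full double precision
```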
I'd really appreciate an exhaustive breakdown of these problems as I feel that I'm missing something very basic and it might prevent my understanding of more challenging phenomena.
Problem Condition and Algorithm Stability
numerics;algorithms;stability;error estimation;condition number
null
_cs.47677
First of all, this question relates to an issue that affects several areas of science, but since StackExchange doesn't have a meta-science section I'll make it specific to computer science, which is pertinent because the solution to the bigger problem may actually come from computer science.

In the context of the news that Springer and IEEE published more than 120 nonsense papers, my question is as follows:

What rigorous set of methods can we apply to the process of publishing scientific papers so that we can quickly verify the reproducibility of the experiments?

We already have systems like Turnitin that are highly efficient at detecting plagiarism, yet I don't know of any system that can score a piece of work on its scientific soundness.

Is there any ongoing work related to this? I found out about Semantic Publishing whilst composing this question, but I have no idea what other approaches, if any, are being actively worked on.
Scientific soundness of computer science papers
machine learning;semantics;natural language processing
This is a complex issue that affects all areas of science but has been getting higher visibility as the mainstream media has reported some cases in headlines. An answer seems to be better review systems. However, one might argue that nonsense papers are not necessarily a failure of the peer review system. All peer review systems must be human to some degree, and all human systems are fallible/subject to failure. Any peer review system will have both false positives and false negatives, in the sense of papers that were accepted but shouldn't have been in 20/20 hindsight, and papers that were rejected but were of acceptable quality.

There is some increasing awareness/sociological study of peer review systems. Cyberspace can in fact aid the process in some ways, by increasing reviewers, increasing visibility of reviews, adding rating systems, etc., and it can have downsides, such as the computational ease of creating fake submissions, increased AI capabilities to fool humans, etc.

An example of a CS-specific peer review (meta-)analysis can be found in the recent NIPS experiment, where peer reviewers were split into two groups, the same papers were given to each, and the overlap in acceptance/rejection decisions was measured. Not surprisingly to many, the results had quite high variance. Researchers overcome false negatives by resubmitting papers to other conferences. Unfortunately this NIPS experiment never seemed to be documented except across a lot of CS blogs, and there is already some link rot of key links. It was announced informally at the conference, and many insiders blogged on it, including those participating. A full documentation might be considered airing dirty laundry.

The NIPS Experiment
NIPS peer review experiment / many links
_unix.244504
I wanted to get 16 bytes from a binary file, starting from the 5th byte, with no separation of bytes or words by spaces.

hexdump will do what I want; it's just the offset column that is disturbing the output:

$ hexdump -s5 -n16 -xe '/1 "%01X"' binfile

od does the task fine as well, and can even be told to suppress the offset column, although I had to use sed to get rid of the spacing:

$ od -An -tx1 -j5 -N16 binfile | sed 's/ //g'

I am sure it will work without sed, but as I have had many issues with od in hex mode related to endianness (swapped bytes), this is not as easy as it looks.

For instance, changing -tx1 to anything higher will swap the bytes and mess up the 128-bit value that I want. -tx1 is fine, but unlike hexdump I haven't found a way to get rid of the spaces while keeping the byte order as-is at the same time.
hexdump: How to suppress offset column in hex mode
hexdump
null
_webmaster.107611
I know that a 301 redirect is for permanent redirection and a 302 for temporary redirects. I want to redirect author pages to BuddyPress profile pages in WordPress.

I am not sure that it will be permanent. I might cancel the redirection after one month, two months, or maybe after one year, or maybe the redirection will be permanent. It depends on my future plans as well as what my website readers suggest.

So which redirect should I use when I am not sure whether it is temporary or permanent?
What redirect should I use when I am not sure whether it is temporary or permanent?
seo;redirects;301 redirect;302 redirect
As you already know, 301 is a permanent redirect and 302 a temporary redirect.

Now comparing the two: a 301 is in most cases important for indexing in search engines, as their crawlers take this into account and transfer PageRank when a 301 is used.

However, if your redirection is for a short period of time, or you plan to reverse it in the future, the main issue with a 301 redirect is that the browser will cache the redirection even if you disable the redirection at the server level. It is always better to use a 302 redirect if you are enabling the redirection for a short period of time.

With that being said, you want to experiment for some time to see whether you like the results of the redirection or not. In that case I would suggest going for a temporary redirect for the experimental time period (a couple of days) and then changing it to a permanent redirect if you are happy with the results.
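Whichever you choose, it is worth verifying which status code you are actually serving; a quick check with curl (the URL is a placeholder):

```sh
curl -sI https://example.com/author/jane | head -n 1
# HTTP/1.1 302 Found              -> temporary
# HTTP/1.1 301 Moved Permanently  -> permanent (and cached hard by browsers)
```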
_unix.55842
The client I'm working for deploys their application to a directory they created at the root level. While acknowledging it is a matter of local preference, I am unsure if this is in accordance with the generally accepted standard for how to distribute an application's components across a Unix-like system. E.g. normally, I would put the binaries for my deployed app in /opt, conf files in /etc, logs under /var, etc. But then again, this is the primary application the server is used for, so it isn't exactly equivalent to untarring some 3rd party software into /opt.

Still, personally, I would prefer not to add any dirs at the root level, i.e. I would keep that level sort of sacred out of respect for the original Unix file hierarchy. I wanted to ask whether this is common and what are some ramifications of this approach.
Convention for primary app deployment dir structure on a Unix-like app server
directory structure
null
_softwareengineering.84062
I'm new to web development and I'm a bit confused about the different languages and technologies on the web. I understand the basics are HTML, JavaScript, and CSS. Then there's jQuery, ASP.NET, HTML5. I'm confused about where I should use each technology and which ones I should use.

For example, here is a video of a WPF application that I built: WPF app demo

The app is essentially for students, teaching some lessons. The student can choose a lesson, and listen and see images. The student can also test himself. As you can see, the app has some animation and styling.

If I were to attempt building this application for the web, where should I start and what should I use? HTML5 (Canvas?), jQuery (jQueryUI?), ASP.NET?

I would really appreciate it if you can help me. Thanks!
Starting Web Development and interactive experiences
jquery;asp.net;wpf;web development
Quick overview:

HTML is your markup language. This is how you render the basic content (I believe it's similar to XAML). This is where you define the content.

CSS is your styling language. Here you define what styles to apply to elements. This is where you define the look and feel of your application.

JavaScript is the client-side scripting language. This is where you manipulate HTML in the browser. It's used for events, animations and general UI wizardry.

As for ASP.NET, it's just a server-side implementation. There are many (Django, RoR, PHP, node.js, etc.).

As for jQuery, it's a library for JavaScript that gives you free cross-browser support, and jQuery UI is a UI library written for jQuery that makes it easier to place UI elements on your page.

HTML5 is a buzzword for the latest HTML-based techniques. There are many of these and they are varied. The modernizr download page shows a nice overview of HTML5 features.

HTML5 Canvas is a way to place a grid on the screen you can render to through pixel manipulation.
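To make the division of labour concrete, a tiny made-up sketch (it assumes jQuery is already loaded on the page):

```html
<!-- HTML: the content -->
<button id="play">Play lesson</button>

<!-- CSS: the look and feel -->
<style>
  #play { background: steelblue; color: white; }
</style>

<!-- JavaScript (with jQuery): the behaviour -->
<script>
  $('#play').on('click', function () {
    alert('Starting lesson...');
  });
</script>
```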
_reverseengineering.3990
When I try to load an exe into IDA Pro it shows an error:

The input file has unaligned section pointers

What does this mean?
The input file has unaligned section pointers
ida
null
_webmaster.108849
What's the name of this Google Images feature that displays categories of products related to a website, e.g. boohoo?

I've seen relatively big fashion retail websites that didn't have this feature and some smaller ones that had it, so what really needs to be done to make this work for your website?

I know it works for other types of websites; I'm just using fashion retail websites as an example.

EDIT: To make it clear, I'm referring to the tags at the top that filter the content once you click on them. E.g. when I click on the tag "sweater", it will only show images of sweaters from the website boohoo.
What's the name of the Google Images search feature that show tags at the top that apply filters?
seo;google image search
The name of the feature is Google Images Badges. For it to be able to filter your product images, you need to add special markup to the pages containing the products.

More information at https://developers.google.com/search/docs/data-types/products.
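For illustration, a minimal JSON-LD sketch of such Product markup (all values here are placeholders; the linked documentation lists the exact required properties):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org/",
  "@type": "Product",
  "name": "Knitted sweater",
  "image": "https://example.com/img/sweater.jpg",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "GBP",
    "price": "20.00"
  }
}
</script>
```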
_codereview.147621
I'm trying to make a game with inheritance & interfaces and I'm wondering if my system could work without ECS. (I explain at the end why I'm not using components.)

To make it easier to understand, there are three core class families in the code:

Engine related classes: Graphics, Inputs, Audio, Physics, ... Using E~~~ as a prefix for each class.

Interfaces: I'm using them like properties; they don't contain data, they're contracts, like C# interfaces (and Java's, I believe?): Transformable, Destroyable, ... Using I~~~ as a prefix for each class.

Concrete classes: They contain data and implement the interfaces. They could be Player, Bullet, etc. Using C~~~ as a prefix for each class.

Plus a class named Engine that contains every E~~~ class, so it makes it easy to access Engine elements.

Engine-related code:

class EInput {
public:
    enum { keyCount = 64 };
    char state[keyCount];
    char previousState[keyCount];
};

class EGraphics {
public:
    int width, height;
};

class EPhysics {
public:
    float gravity;
};

class EEngine {
public:
    EInput* input;
    EGraphics* graphics;
    EPhysics* physics;

    EEngine() {
        input = new EInput();
        graphics = new EGraphics();
        physics = new EPhysics();
    }

    ~EEngine() {
        delete input;
        delete graphics;
        delete physics;
    }
};

Interface-related code:

class ITransformable {
public:
    virtual float getX() = 0;
    virtual float getY() = 0;
    virtual void setX(float x) = 0;
    virtual void setY(float y) = 0;
    virtual ~ITransformable() {}
};

class IDestroyable {
public:
    virtual float getHp() = 0;
    virtual void setHp(float hp) = 0;
    virtual void hurt(float amount) = 0;
    virtual void kill() = 0;
    virtual bool isAlive() = 0;
    virtual ~IDestroyable() {}
};

class IEngine {
public:
    virtual void provideEngine(EEngine* engine) = 0;
    virtual ~IEngine() {}
};

class ITeam {
public:
    virtual int getTeam() = 0;
    virtual void setTeam(int team) = 0;
    virtual ~ITeam() {}
};

Concrete classes-related code:

class CCharacter : public IEngine, public ITransformable, public IDestroyable, public ITeam {
private:
    /* IEngine */
    EEngine* engine;
    /* ITransformable */
    float x, y;
    /* IDestroyable */
    float hp;
    /* ITeam */
    int team;

public:
    CCharacter() : engine(nullptr), x(0.f), y(0.f), hp(0.f) {}

    /* IEngine */
    void provideEngine(EEngine* engine) override { this->engine = engine; }

    /* ITransformable */
    virtual float getX() override { return x; }
    virtual float getY() override { return y; }
    virtual void setX(float x) override { this->x = x; }
    virtual void setY(float y) override { this->y = y; }

    /* IDestroyable */
    virtual float getHp() override { return hp; }
    virtual void setHp(float hp) override { this->hp = hp; }
    virtual void hurt(float amount) override { this->hp -= amount; }
    virtual void kill() override { this->hp = 0.f; }
    virtual bool isAlive() override { return this->hp > 0.f; }

    /* ITeam */
    virtual int getTeam() override { return this->team; }
    virtual void setTeam(int team) override { this->team = team; }

    virtual void update() = 0;
    virtual void draw() = 0;
};

class CPlayer : public CCharacter {
public:
    CPlayer() {
        setHp(100.f);
        //...
    }

    void update() override {}
    void draw() override {}
};

enum { T_NONE, T_ALLY, T_ENEMY };

class CAIPlayer : public CCharacter {
private:
    int team;

public:
    CAIPlayer() : team(T_NONE) {}

    void update() override {}
    void draw() override {}
};

And a game would be similar to this, removing the game loop:

int main() {
    EEngine engine;

    //Initialize some values
    {
        EGraphics& g = *engine.graphics;
        g.width = 512;
        g.height = 288;

        EPhysics& p = *engine.physics;
        p.gravity = 0.4f;
    }

    //Create our characters, a factory would be used, this is just to show the idea
    //Our hero
    CPlayer sam;
    sam.provideEngine(&engine);

    //Two friendlies
    CAIPlayer lambert;
    lambert.provideEngine(&engine);
    lambert.setTeam(T_ALLY);

    CAIPlayer wilkes;
    wilkes.provideEngine(&engine);
    wilkes.setTeam(T_ALLY);
    wilkes.kill();

    //The bad guy
    CAIPlayer nikoladze;
    nikoladze.provideEngine(&engine);
    nikoladze.setTeam(T_ENEMY);
}

Is this code clean? What are the limits of such a system? I mean, I don't see any duplicated behaviour.

I read some really nice articles and questions on the net, including Stack Overflow, about entity-component-systems and how amazing they are, but it seems (to me) that it is overdone by people; the code is a big mess with templates everywhere and yet there are no results on the screen.

I know that this code might be weak compared to ECS, but at which point? I'm trying to keep it simple.
Simple game engine layout
c++;game
I had a few thoughts whilst reading through your code:

Naming

I don't mind the I prefix for interfaces; I've never really liked C as a prefix for classes, although I can live with it. Having another prefix of E for engine-specific classes feels wrong/confusing. EEngine is a concrete class that does stuff, so why isn't it ECEngine / CEEngine, for example?

Interfaces

Some of your interfaces don't really feel like interfaces. For example, IEngine has one method, provideEngine. This isn't what I think of when I think of an Engine interface. It could be an IEngineUser interface, but it still feels a bit wrong. I'd expect IEngine to define operations that were then implemented in EEngine.

Initialisation

You're declaring variables and calling methods on them at the same time:

CAIPlayer lambert;
lambert.provideEngine(&engine);

This seems error prone. It also flags up a possible design error. Does it make sense for a CAIPlayer to exist without an engine / without a team? If not, then I would expect them to be passed into the constructor, rather than passed in immediately after every construction. It seems like you've done it this way because of the interfaces you've declared, which specify class dependencies rather than what a class can do...

Public fields

You're passing an instance of EEngine into every class; it has public pointers to other engine classes that have public fields. This feels like way too much exposure of your engine's implementation. I would hide this information behind getter methods. Then either pass the initial values into the constructor or provide setter methods. Whilst they may be simple pass-throughs initially, it will be easier in the future if you want to make changes such as supporting different gravity zones. Does it really make sense that at the moment any CCharacter can change the gravity in the game?
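To sketch the constructor-injection idea from the last two sections (this reuses the question's class names and is only one possible shape):

```cpp
class CCharacter /* : interfaces as before */ {
public:
    // A character can no longer exist without an engine or a team.
    CCharacter(EEngine* engine, int team)
        : engine(engine), x(0.f), y(0.f), hp(0.f), team(team) {}
    virtual ~CCharacter() {}
    // ... the rest as in the question ...
private:
    EEngine* engine;
    float x, y;
    float hp;
    int team;
};

class CAIPlayer : public CCharacter {
public:
    CAIPlayer(EEngine* engine, int team) : CCharacter(engine, team) {}
    // ...
};

// Usage: there is no separate provideEngine()/setTeam() call to forget.
// CAIPlayer lambert(&engine, T_ALLY);
```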
_codereview.86538
I have a short snippet of code that simply computes a weighted average for surrounding elements in a square matrix. The actual implementation that I'm working on is not an average (it is a more complex equation), but I am using this one to figure out how to handle various performance hindrances.

A few notes about my system:

Windows 7, x64
Visual Studio 2010, with the Intel C++ Compiler
Profiling it using Intel VTune Amplifier
Running in Release, with optimizations on (the Intel compiler doesn't specify which level, but from comparisons I did I think it's O2 or O3); the timing values I am mentioning I got from running it here with -O2
DIM=512 and ITERATIONS=1000

Complete code is provided at the end, but for the next few explanations I will just focus on this loop of interest (it's the only loop, so...). I started off with a pretty straightforward, basic implementation:

for (int iter = 0; iter < ITERATIONS; iter++)
{
    for (int x = 1; x < DIM-1; x++) // avoid boundary cases for this example
    {
        for (int y = 1; y < DIM-1; y++)
        {
            f0 = d_matrix[x][y];
            f1 = d_matrix[x-1][y];
            f2 = d_matrix[x+1][y];
            f3 = d_matrix[x][y-1];
            f4 = d_matrix[x][y+1];
            d_res_matrix[x][y] = f0*0.6 + f1*0.1 + f2*0.1 + f3*0.1 + f4*0.1;
        }
    }
    for (int x = 0; x < DIM; x++)
    {
        for (int y = 0; y < (DIM); y++)
        {
            d_matrix[x][y] = d_res_matrix[x][y];
        }
    }
}

This attempt took ~1.9s to execute. VTune suggested that I had problems with 4K Aliasing (read-before-write memory situations), specifically on the lines of the y-loop and the memory writes that precede them (in both loops). It also identified back-end bound, core-bound problems. I figured that the 4K aliasing problems might be causing the core-bound issues as well. To address this, I decided to get rid of the constant need to fetch x and y and rewrote the code using pointers:

for (int iter = 0; iter < ITERATIONS; iter++)
{
    for (int x = 1; x < DIM-1; x++) // avoid boundary cases for this example
    {
        pf3 = &d_matrix[x][0];
        pf0 = &d_matrix[x][1];
        pf1 = &d_matrix[x-1][1];
        pf2 = &d_matrix[x+1][1];
        pf4 = &d_matrix[x][2];
        save_write_loc = &d_res_matrix[x][0];
        for (int y = 1; y < DIM-1; y++)
        {
            f0 = *pf0; pf0++;
            f1 = *pf1; pf1++;
            f2 = *pf2; pf2++;
            f3 = *pf3; pf3++;
            f4 = *pf4; pf4++;
            *save_write_loc++ = f0*0.6+f1*0.1+f2*0.1+f3*0.1+f4*0.1;
        }
    }
    for (int x = 0; x < DIM; x++)
    {
        s_m = &d_matrix[x][0];
        s_r_m = &d_res_matrix[x][0];
        for (int y = 0; y < (DIM); y++)
        {
            *s_m = *s_r_m;
            s_m++;
            s_r_m++;
        }
    }
}

With this the execution time became ~1.3s. I ran VTune again. It once again complained about 4K Aliasing, but now it was about all the pointer dereferencing lines (such as f1 = *pf1). I figured it was probably caused by the incrementing of the pointer I'm doing right after. It also suggested avoiding storing intermediate values, so I collapsed the loop into two lines as below.
I also unrolled the 2nd loop 8 times to become:

for (int iter = 0; iter < ITERATIONS; iter++)
{
    for (int x = 1; x < DIM-1; x++) // avoid boundary cases for this example
    {
        pf3 = &d_matrix[x][0];
        pf0 = &d_matrix[x][1];
        pf1 = &d_matrix[x-1][1];
        pf2 = &d_matrix[x+1][1];
        pf4 = &d_matrix[x][2];
        save_write_loc = &d_res_matrix[x][0];
        for (int y = 1; y < DIM-1; y++)
        {
            *save_write_loc++ = (*pf0)*0.6 + (*pf1)*0.1 + (*pf2)*0.1 + (*pf3)*0.1 + (*pf4)*0.1;
            pf0++; pf1++; pf2++; pf3++; pf4++;
        }
    }
    for (int x = 0; x < DIM; x++)
    {
        s_m = &d_matrix[x][0];
        s_r_m = &d_res_matrix[x][0];
        for (int y = 0; y < (DIM/8); y=y+8)
        {
            *s_m = *s_r_m; s_m++; s_r_m++;
            *s_m = *s_r_m; s_m++; s_r_m++;
            *s_m = *s_r_m; s_m++; s_r_m++;
            *s_m = *s_r_m; s_m++; s_r_m++;
            *s_m = *s_r_m; s_m++; s_r_m++;
            *s_m = *s_r_m; s_m++; s_r_m++;
            *s_m = *s_r_m; s_m++; s_r_m++;
            *s_m = *s_r_m; s_m++; s_r_m++;
        }
    }
}

This cut the execution time further to ~0.7s. (I also tried unrolling it 4 and 16 times, but those were slower, so I settled on 8.)

I am now stuck. I don't really know what else I can do to make it go any faster (if anything, but I'm sure there is something). VTune is still complaining about 4K Aliasing in the *save_write_loc++ = ... line. Maybe that is caused by the pointer increments happening right after, since it is such a tight loop? That same line is still triggering back-end bound, core-bound port utilization problems. Since there is so much going on (multiplications, additions, fetches, stores), I don't really know which part is causing the problem exactly. The complete code that can be compiled is here. I'm thinking of having a 1D array instead of a 2D matrix. In that case, the locations will be next to each other, and perhaps they can be cached more efficiently. I am going to try this and report back. But I would appreciate any sort of suggestions on how to make this code faster.
Compute weighted average around a point in a matrix quickly
c++;performance;c++11;matrix
Since I do not know what the actual implementation will be I can only comment on the code as it is presented. From the algorithm it appears that you are applying a filter multiple times in order to monitor some propagation effect - as opposed to applying the filter multiple times to stabilize time profiling. From here on out I will assume that running ITERATIONS times is part of the core algorithm.General C++ conceptsInitializing memoryAre you sure that it is legal to memset double types to zero to obtain logical \$0.0\$? Me neither. Use std::fill instead or better yet if you have C++03 or newer use value initialization so that you can initialize the arrays to zero with double *array = new double[n]().If T is an array type, each element of the array is value-initialized ... double f = double(); // non-class value-init, value is 0.0Use std::vector so that you do not have to do the memory allocation yourself.Just be sure to use the -O2 or higher compiler flag alongside or else vector is slower than raw arrays (in my experience).BugsYour optimized routine has a couple major bugs. I always recommend checking the output between original code and optimized code should you need to perform optimization.Shift bugsave_write_loc = &d_res_matrix[x][0];You are writing output values from column 1 to column 0. This would be fine if you ran the algorithm once but since you are running this algorithm ITERATIONS times, it results in shifting the output out multiple times. In general, on iteration \$i\$ you are writing the \$i^{th}\$ column of the original d_matrix to column 0 of d_res_matrix. To fix this simply keep the matrices aligned:save_write_loc = &d_res_matrix[x][1];Skipped output indices bugfor (int y = 0; y < (DIM/8); y=y+8)When you unrolled the loop you both divided the bounds by \$8\$ and increased the step to \$8\$. You only wanted to do one or the other. You should prefer changing the bounds since DIM/8 can be computed at compile time:for (int y = 0; y < (DIM/8); y++)The code runs much slower with these bug fixes.Quick OptimizationThe algorithm cannot be performed in-place so you are using d_res_matrix to store the output of applying your filter to d_matrix. But then you want the output to be placed back in d_matrix so you perform a deep copy.However a swap of the pointers would suffice so you could use this instead:std::swap(d_matrix, d_res_matrix);This has a major impact on the performance of your original code, but as you will see we can do better.Problematic OptimizationFlattening the 2D array to 1D sounds like a good idea until you factor in the cost of post-processing. When the array is processed as 1D the border pixels change values and, in fact, exhibit border-wrap effects.We often fix problems like this by wrapping the matrix inside a false border - i.e. we add a \$1\times1\$ border all around the matrix. Unfortunately this would not work here. Instead, you can fix this by applying the filter and then going back and zeroing out border pixels each iteration.More OptimizationsUse the -O3 flag before attempting generic optimizations.Readability often goes out the window when applying optimizations to source code. The built-in compiler flags can be used instead. As you will see below, applying the -O3 flag gives me comparable results to unrolling loops, using pointer-based indexing, and flattening the array to 1D.Consider a generic matrix element \$\textrm{matrix}[a][b]\$. 
Let's determine how many times \$\textrm{matrix}[a][b]\$ is multiplied by \$0.1\$.\$\textrm{matrix}[a][b]\$ has four 4-connected neighbors: $$\textrm{matrix}[a-1][b],\enspace \textrm{matrix}[a+1][b],\enspace \textrm{matrix}[a][b-1],\enspace \textrm{matrix}[a][b+1]$$Each of these neighbors multiplies \$\textrm{matrix}[a][b]\$ by \$0.1\$. Therefore, \$\textrm{matrix}[a][b]\$ is multiplied by \$0.1\$ four times. Instead we could cache this computation once and reuse it when we need it. This requires \$\mathcal{O}(n)\$ space, however, since the overall algorithm cannot be done in-place, we can use some space we already were going to need.(Technically the original algorithm only needs one row of extra space and the cached multiply algorithm only needs three rows of extra space.)The real optimization here is parallelization. If we had \$ \mathrm{DIM} \cdot \mathrm{DIM}\$ processors then each processor could be charged with applying the filter to its pixel in any order we choose (or rather don't choose). Hence there is zero serial code per iteration. Of course we would not actually want that many processors/threads due to communication costs. If you are interested in running this algorithm in parallel you could use OpenMP, MPI, boost::thread, etc.Code and TimingsTest machine - CentOS 6.5, libstdc++-4.4.7, -O3, Intel Core i7-2670QM (2.20 GHz)Your original code fixed, std::swap applied, unrolled interior loop, other minor changesRuntime: 1.54 seconds#include <time.h>#include <stdio.h>#include <cstdlib>#include <stdlib.h>#include <vector>#include <cstring>#define DIM 512#define ITERATIONS 1000#define START_TIMING_ND clock_t t2; t2=clock();#define STOP_TIMING_ND {long int final_nd=clock()-t2; printf(NEW/DELETE took %li ticks (%f seconds) \n, final_nd, ((float)final_nd)/CLOCKS_PER_SEC);}int main(void){ // new/delete double ** d_matrix, ** d_res_matrix; d_res_matrix = new double * [DIM]; d_matrix = new double * [DIM]; for (int i = 0; i < DIM; i++) { d_matrix[i] = new double [DIM](); d_res_matrix[i] = new double[DIM](); } d_matrix[20][45] = 1; // start somewhere // vector calculations double * save_write_loc; double * pf0, *pf1, *pf2, *pf3, *pf4; double * s_m, * s_r_m; START_TIMING_ND; for (int iter = 0; iter < ITERATIONS; iter++) { for (int x = 1; x < DIM-1; x++) // avoid boundary cases for this example { pf3 = &d_matrix[x][0]; pf0 = &d_matrix[x][1]; pf1 = &d_matrix[x-1][1]; pf2 = &d_matrix[x+1][1]; pf4 = &d_matrix[x][2]; save_write_loc = &d_res_matrix[x][1]; for (int y = 0; y < (DIM-2)/2; y++) { *save_write_loc++ = (*pf0++)*0.6 + (*pf1++)*0.1 + (*pf2++)*0.1 + (*pf3++)*0.1 + (*pf4++)*0.1; *save_write_loc++ = (*pf0++)*0.6 + (*pf1++)*0.1 + (*pf2++)*0.1 + (*pf3++)*0.1 + (*pf4++)*0.1; } } std::swap(d_matrix, d_res_matrix); } STOP_TIMING_ND; for(int i = 0; i < DIM; i++) { for(int j = 0; j < DIM; j++) { //printf(%lf , d_matrix[i][j]); } //printf(\n); } // delete dynamic stuff for (int i = 0; i < DIM; i++) { delete [] d_matrix[i]; delete [] d_res_matrix[i]; } delete [] d_matrix; delete [] d_res_matrix; return 0;}\$\$Cached multiplication codeRuntime: 1.10 seconds#include <time.h>#include <stdio.h>#include <cstdlib>#include <stdlib.h>#include <vector>#include <cstring>#define DIM 512#define ITERATIONS 1000#define START_TIMING_ND clock_t t2; t2=clock();#define STOP_TIMING_ND {long int final_nd=clock()-t2; printf(NEW/DELETE took %li ticks (%f seconds) \n, final_nd, ((float)final_nd)/CLOCKS_PER_SEC);}int main(void){ // new/delete double ** d_matrix, ** cmatrix; cmatrix = new double * [DIM]; d_matrix = new double 
* [DIM]; for (int i = 0; i < DIM; i++) { d_matrix[i] = new double [DIM](); cmatrix[i] = new double[DIM](); } d_matrix[20][45] = 1; // start somewhere // vector calculations double * save_write_loc; double * pf0, *pf1, *pf2, *pf3, *pf4; START_TIMING_ND; for (int iter = 0; iter < ITERATIONS; iter++) { // Store 0.1 * d_matrix for(int i = 0; i < DIM; i++) { for(int j = 0; j < DIM; j++) { cmatrix[i][j] = 0.1 * d_matrix[i][j]; } } for(int i = 1; i < DIM-1; i++) { for(int j = 1; j < DIM-1; j++) { d_matrix[i][j] = 0.6 * d_matrix[i][j] + cmatrix[i-1][j] + cmatrix[i+1][j] + cmatrix[i][j-1] + cmatrix[i][j+1]; } } } STOP_TIMING_ND; for(int i = 0; i < DIM; i++) { for(int j = 0; j < DIM; j++) { //printf(%lf , d_matrix[i][j]); } //printf(\n); } // delete dynamic stuff for (int i = 0; i < DIM; i++) { delete [] d_matrix[i]; delete [] cmatrix[i]; } delete [] d_matrix; delete [] cmatrix; return 0;}\$\$Cached multiplication, flattened std::vector, iterator indexingRuntime: 1.08 seconds (the unrolled code is slightly slower for me)#include <time.h>#include <cstdlib>#include <vector>#include <iterator>#include <stdio.h>#define DIM 512#define ITERATIONS 1000#define START_TIMING_ND clock_t t2; t2=clock();#define STOP_TIMING_ND {long int final_nd=clock()-t2; printf(NEW/DELETE took %li ticks (%f seconds) \n, final_nd, ((float)final_nd)/CLOCKS_PER_SEC);}int main(void){ std::vector<double> cmatrix(DIM*DIM, 0.0); std::vector<double> d_matrix (DIM*DIM, 0.0); d_matrix[20*DIM+45] = 1; // start somewhere // vector calculations START_TIMING_ND; for (int iter = 0; iter < ITERATIONS; iter++) { std::vector<double>::iterator dit = d_matrix.begin(); std::vector<double>::iterator cit = cmatrix.begin(); // Store 0.1 * d_matrix; while(dit != d_matrix.end()) { *cit++ = 0.1 * *dit++; } std::vector<double>::iterator pf0 = d_matrix.begin() + DIM+1; // [1][1] std::vector<double>::iterator pf1 = cmatrix.begin() + DIM; // [1][0] std::vector<double>::iterator pf2 = cmatrix.begin() + DIM+2; // [1][2] std::vector<double>::iterator pf3 = cmatrix.begin() + 1; // [0][1] std::vector<double>::iterator pf4 = cmatrix.begin() + 2*DIM+1; // [2][1] std::vector<double>::iterator stop = d_matrix.begin() + DIM*DIM - DIM - 1; // [DIM-2][DIM-1] while(pf0 != stop) { *pf0 = (*pf0++)*0.6 + (*pf1++) + (*pf2++) + (*pf3++) + (*pf4++); } dit = d_matrix.begin(); std::vector<double>::iterator deit = d_matrix.begin() + DIM - 1; // [0][DIM-1] // Post-processing - Zero out the border pixels while(dit != d_matrix.end()) { *dit = 0.0; *deit = 0.0; dit = deit+1; deit += DIM; } } STOP_TIMING_ND; for(int i = 0; i < DIM*DIM; i++) { //printf(%lf , d_matrix[i]); if(not ((i+1) % DIM)) { //printf(\n); } } return 0;}See alsoSeparable filterThe filter in this algorithm is:\$ f = \begin{bmatrix} 0 && 0.1 && 0 \\ 0.1 && 0.6 && 0.1 \\ 0 && 0.1 && 0\end{bmatrix}\$It is not separable, however, the cached multiplication above is even better than a separable filter. If a separable filter is large enough then applying the filter as two 1D filters is several times faster than applying it as a 2D filter.
_softwareengineering.271442
I'm building a website with MVC 5 and Entity Framework 6, implementing the Unit of Work & Repository patterns, and - for flexibility and performance - would like to utilize Entity Framework's snapshot change tracking rather than proxy change tracking. The drawback of this feature is that navigation property fix-up is not done until DetectChanges is called by DbContext.Adding an item to a collection...order.LineItems.Add(orderLineItem);The inverse association will not be updated until DetectChanges is called...lineItemOrder = orderLineItem.Order; // lineItemOrder == nulldbcontext.DetectChanges();lineItemOrder = orderLineItem.Order; // lineItemOrder == orderThis navigation property fix-up behavior seems too specific to Entity Framework for the level of abstraction I'd like to see. Does this behavior break principles of Domain Driven Design, or can DetectChanges be considered a business transaction?
Does snapshot change tracking break DDD principles?
c#;mvc;domain driven design;entity framework
It is clearly not a business transaction, as one would expect orderLineItem.Order to be readily populated without needing to call a separate method. With this approach, not only would it be hard to debug, but the domain model does not work as one would expect it to work.

When designing a domain model, one should try to keep the domain model clean of supporting frameworks such as persistence.

Personally, I would not use

order.LineItems.Add(orderLineItem);

when designing a domain model. Instead I would use

order.AddLineItem(orderLineItem);

and make order.LineItems read-only.

If it is possible to encapsulate the dbcontext.DetectChanges() inside the AddLineItem method, it would be a better domain model, even though it's not the optimal design.
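A minimal C# sketch of that encapsulation (the class and member names are assumptions based on the question's snippets); here the aggregate fixes up the inverse association itself instead of relying on DetectChanges:

```csharp
using System.Collections.Generic;

public class Order
{
    private readonly List<OrderLineItem> _lineItems = new List<OrderLineItem>();

    // Read-only view, so callers cannot bypass AddLineItem.
    public IReadOnlyCollection<OrderLineItem> LineItems
    {
        get { return _lineItems.AsReadOnly(); }
    }

    public void AddLineItem(OrderLineItem item)
    {
        _lineItems.Add(item);
        item.Order = this; // fix up the inverse association here,
                           // instead of waiting for DetectChanges
    }
}
```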
_cs.55433
In parallel computing, I know the speedup equation is $$ \frac{1}{ s + \frac{1-s}{p} } $$

But what is meant by superlinear speedup? Is it something theoretical? Could you explain it with equations?
What is meant by superlinear speedup? Is it possible to have superlinear speedup in practice?
parallel computing;multi tasking
With an equation: not really.

Superlinear speedup means exceeding the naively calculated speedup, even after taking into account the communication process (which is fading, but is still the bottleneck).

For example, you have a serial algorithm that takes $1t$ to execute. You have $1024$ cores, so the naive speedup is $1024x$, or it takes $t/1024$, but it should be calculated as in your equation, taking into account memory transfer, slight modifications to the algorithm, and parallelisation time.

So the speedup should be lower than 1024x, but sometimes it happens that the speedup is bigger; then we call it superlinear.

Where does it come from?

From a vast number of places: cache usage (what fits into registers, main memory or mass storage, where very often more processing units give overall more registers per subtask), memory hit patterns, simply a better (or a slightly different) algorithm, or flaws in the serial code.

For example, a random process that searches a space for a result is now divided into $1024$ searchers covering more of the space at once, so finding a solution faster is more probable.

There are byproducts (if you swap elements like in bubble sort and switch to a GPU, it swaps all pairs at once, while the serial version only swaps up to a point).

On a distributed system communication is even more costly, so programs are changed to make memory usage local (which also changes memory access and divides the problem differently than in the sequential application).

And most importantly, the sequential program is not ideally the same as the parallel version (different technology, environment, algorithm, etc.), so it is hard to compare them.

Excerpt from Introduction to Parallel Computing, Second Edition, by Ananth Grama, 2003:

Theoretically speedup can never exceed the number of processing elements $p$. If the best sequential algorithm takes $T_s$ units of time to solve a given problem on a single processing element, then a speedup of $p$ can be obtained on $p$ processing elements if none of them spends more than time $T_s/p$. A speedup greater than $p$ is possible only if each processing element spends less than time $T_s/p$ solving the problem. In this case, a single processing element could emulate the $p$ processing elements and solve the problem in fewer than $T_s$ units of time. This is a contradiction because speedup, by definition, is computed with respect to the best sequential algorithm.

So the name superlinear in this context comes from the definition of speedup.
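To make the contrast concrete, the equation from the question is easy to evaluate; a quick Python sketch (the 5% serial fraction is a made-up example):

```python
def speedup(serial_fraction, processors):
    """Speedup predicted by the equation in the question (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

print(speedup(0.05, 1024))   # ~19.6x: far below the naive 1024x
print(speedup(0.0, 1024))    # 1024.0: the linear limit
# Any measured speedup above this bound (e.g. from cache effects) is superlinear.
```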
_webmaster.103120
I'm tracking a client website with both Google Analytics and New Relic. There's a bit of a discrepancy between the browser usage stats, though (I think).

Google Analytics reports Chrome as the top used browser, followed by IE with almost half the number of sessions. New Relic doesn't have a directly comparable metric, but it reports 1.6 times higher throughput for IE than for Chrome (screenshots omitted).

I understand that the number of sessions and pages per minute by browser are not directly comparable: sessions are recorded only after the GA script is fully loaded, and throughput includes requests by bots or other non-user-accountable hits.

But besides that, could there be another explanation for such high IE throughput? If the throughput for IE is so high, it should be due either to a high number of sessions or to high pages/session, but both of these are lower compared to Chrome (according to GA).
New Relic and Google Analytics browser stat discrepancy
google analytics;browsers
null
_cs.26027
I am studying the related question https://stackoverflow.com/questions/13500560/number-of-ways-to-create-an-avl-tree-with-n-nodes-and-l-leaf-node but it's not so general.

In fact, we want to know: with N keys, how many different AVL trees can we make? We know that with N=1 key there is 1 AVL tree, and with N=2 keys we have 2 different AVL trees, but in general, can we find a recurrence formula? For example for N=4, N=5 and so on.
Number of Different AVL Tree
algorithms;graph theory;data structures;trees;binary trees
null
_webmaster.96937
I am running a Zimbra virtual machine on Ubuntu 14.04.4 LTS. I am getting a few emails returned to me with this message:

<[email protected]>: host mxia.craigslist.org[208.82.237.85] refused to talk to me: 554 [1F14D9E7-73BD-489F-AC79-4E640DB27254@mxi6a] coldmail.co.xx.xx.xx.in-addr.arpa [xx.xx.xx.xx] Please setup matching DNS and rDNS records: http://www.craigslist.org/about/help/rdns_failure

I run this on a virtual machine running on a dedicated server that I rent. My host assures me that the rDNS record is set up correctly for my domain.

When I check on http://mxtoolbox.com/, this is the result I get (screenshot omitted).

Is this something I need to correct on my virtual machine setup, or is it something I need to get my host to correct?
Mail server rdns setup not right
email;dns
null
_webapps.20803
Google Instant has gotten to a level of speed that makes it extremely usable. Now that I like it, I'd like to turn off autocomplete, which seems to get in the way of the Instant results more than enhance the search experience.

Does anyone know a way to do this? I've found a few userscripts (see below), but they don't seem to get the job done.

Disable Google Autocomplete
Disable Google Autocomplete & Preview
On Google search, is there a way to disable autocomplete but keep Instant?
google search
null
_codereview.27517
In my app development, I need to fetch data from the backend and render it into an HTML page. I've written the below code for this, but I think it's very lengthy. Can anyone give me suggestions on making this better?

Function:

var JsonHandler = function(url) {
    return $.getJSON(url);
};

(function ($) {
    var manupulate = function(data) {
        var header = {},
            locales = {},
            userLocales = {};

        $.each(data, function(key, val) {
            if (key === "name") {
                header[key] = val;
            } else if (key === "username") {
                header[key] = val;
            } else if (key === "allLocales") {
                locales = val;
            } else if (key === "userLocales") {
                userLocales[key] = val;
            }
        });

        var headerHtml = Handlebars.compile($("#header-template").html());
        $("header").html(headerHtml(header));

        var formHtml = Handlebars.compile($("#locale-template").html());
        $("form").append(formHtml(locales));

        $.each(userLocales["userLocales"], function(key, val) {
            $(":checkbox", "form").each(function(i, e) {
                if ($(e).val() === val.name) {
                    $(this).prop("checked", true);
                }
            });
        });

        var status = [];
        var checkBox = $("form").find(":checkbox");

        $(checkBox).each(function(i, el) {
            status.push($(el).prop("checked"));
            $(el).click(function() {
                var clickedIndex = checkBox.index($(this));
                var prevStatus = status[clickedIndex];
                if (prevStatus !== $(this).prop("checked")) {
                    console.log("You made the change on " + $(this).val());
                }
            });
        });
    };

    Handlebars.registerHelper("showHr", function(index_count, block) {
        if (parseInt(index_count) % 20 === 0) {
            return block.fn(this);
        }
    });

    var path = "js/data.json";
    new JsonHandler(path).done(function(data) {
        manupulate(data);
    });
})(jQuery);

JSON data:
}, { name: Bengali }, { name: Bulgarian }, { name: Catalan }, { name: Chinese }, { name: Chinese (Hong Kong) }, { name: Chinese (Macau) }, { name: Chinese (Simplified) }, { name: Chinese (Traditional) }, { name: Croatian }, { name: Czech }, { name: Danish }, { name: Dutch }, { name: Dutch (Belgium) }, { name: Dutch (Netherlands) }, { name: English (Armenia) }, { name: English (Australia) }, { name: English (Bahrain) }, { name: English (Botswana) }, { name: English (Canada) }, { name: English (Egypt) }, { name: English (Guinea-Bissau) }, { name: English (Hong Kong) }, { name: English (India) }, { name: English (Indonesia) }, { name: English (Ireland) }, { name: English (Israel) }, { name: English (Jordan) }, { name: English (Kenya) }, { name: English (Kuwait) }, { name: English (Latin America) }, { name: English (Macau) }, { name: English (Macedonia) }, { name: English (Malaysia) }, { name: English (Malta) }, { name: English (Moldova) }, { name: English (Montenegro) }, { name: English (Mozambique) }, { name: English (New Zealand) }, { name: English (Nigeria) }, { name: English (Oman) }, { name: English (Other Asia) }, { name: English (Philippines) }, { name: English (Qatar) }, { name: English (Saudi Arabia) }, { name: English (Singapore) }, { name: English (South Africa) }, { name: English (Thailand) }, { name: English (US) }, { name: English (Uganda) }, { name: English (United Arab Emirates) }, { name: English (United Kingdom) }, { name: English (Vietnam) }, { name: Estonian }, { name: Finnish }, { name: French }, { name: French (Africa) }, { name: French (Belgium) }, { name: French (Cameroon) }, { name: French (Canada) }, { name: French (Central African Republic) }, { name: French (Cote d'Ivoire) }, { name: French (Equatorial Guinea) }, { name: French (France) }, { name: French (Guinea) }, { name: French (Luxembourg) }, { name: French (Madagascar) }, { name: French (Mali) }, { name: French (Mauritius) }, { name: French (Morocco) }, { name: French (Niger) }, { name: French (Senegal) }, { name: French (Switzerland) }, { name: French (Tunisia) }, { name: German }, { name: German (Austria) }, { name: German (Germany) }, { name: German (Liechtenstein) }, { name: German (Luxembourg) }, { name: German (Switzerland) }, { name: Greek }, { name: Greek (Cyprus) }, { name: Hebrew }, { name: Hindi }, { name: Hungarian }, { name: Icelandic }, { name: Indonesian }, { name: Italian }, { name: Italian (Italy) }, { name: Italian (Switzerland) }, { name: Japanese }, { name: Korean }, { name: Latvian }, { name: Lithuanian }, { name: Macedonian }, { name: Malay (Malaysia) }, { name: Maltese }, { name: Montenegrin }, { name: Norwegian }, { name: Pashto }, { name: Polish }, { name: Portuguese (Brazil) }, { name: Portuguese (Portugal) }, { name: Romanian }, { name: Russian }, { name: Serbian (Cyrillic) }, { name: Serbian (Latin) }, { name: Slovak }, { name: Slovenian }, { name: Spanish (Latin America) }, { name: Spanish (Mexico) }, { name: Spanish (Spain) }, { name: Spanish (US) }, { name: Swedish }, { name: Tajik }, { name: Thai }, { name: Turkish }, { name: Ukrainian }, { name: Vietnamese }], userLocales: [{ name: English (US) }, { name: Swedish } ], message: }HTML:<!doctype html><html lang=en><head> <meta charset=UTF-8> <title>IT White Admin</title> <link rel=stylesheet href=styles/reset.min.css> <link rel=stylesheet href=styles/admin.css></head><body> <!-- wrapper starts --> <section id=wrapper> <header> </header> <!-- main contents goes here --> <section> <div class=locales> <form> <legend>Select 
Locales</legend> </form> </div> </section> <!-- end of main contents --> </section> <!-- wrapper ends --> <script id=header-template type=text/x-handlebars-template> <span>Welcome {{name}}</span> <div class=loginInfo> <a href=#>{{username}}</a> | <a href=#>Logout</a> </div> </script> <script id=locale-template type=text/x-handlebars-template> {{#each this}} {{#showHr @index}} <hr/> {{/showHr}} <label><input value={{name}} type=checkbox /> {{name}} </label> {{/each}} </script> <script type=text/javascript src=js/jquery-1.9.1.min.js></script> <script type=text/javascript src=js/handlebars.js></script> <script type=text/javascript src=js/admin.js></script></body></html>
Data with Handlebars templating function
javascript;jquery;json;template
null
_softwareengineering.80244
I recently suggested a method of chaining be implemented for a certain class in a certain project so the readability of the code could be improved. I got a "fluent interfaces should not be implemented just for convenience, but for semantics" answer and had my suggestion shot down. I answered that I was not suggesting a fluent interface but method chaining itself (both can be confused with each other, read at bottom) to improve readability and coding comfort; the suggestion was shot down again.

Anyhow, this got me thinking that maybe I could be committing a bad practice by always returning this in methods that are not supposed to return anything (e.g. setters).

My question is: can applying the previous convention be regarded as bad practice or abuse? Why? I don't think there are any performance drawbacks, or are there?
Are there any actual drawbacks to self-referential method chaining?
architecture;coding standards;readability;method chaining
No.

As Kent Beck points out, code is read far more often than it is written.

If method chaining makes your code more readable, then use method chaining.
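For illustration, a minimal C# sketch of the convention being discussed (all names made up); each setter returns this so calls can be chained:

```csharp
public class Widget
{
    private string _name;
    private int _size;

    public Widget SetName(string name) { _name = name; return this; }
    public Widget SetSize(int size) { _size = size; return this; }
}

// Reads as one expression instead of three statements:
var widget = new Widget().SetName("frame").SetSize(42);
```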
_codereview.106240
See the the previous iteration.Now I refactored the code a bit so that I do not need to pass like 6 parameters to helper methods. Also, I did a small tweak that allows the sort to arrive to the state where the amount of runs is a power of two, which solves the orphan issue: suppose your data has \$1048577 = 2^{20} + 1\$ runs. Now normally you would sort the first \$2^{20}\$ runs, after which you would have to perform another large merge over the entire range in order to merge the last orphan run.Also, I ignored the generics and pass to my sort any object that implements Comparable. Now, what do you think?NaturalMergesort.java:package net.coderodde.util.sorting;import java.util.Arrays;/** * This class implements natural merge sort for {@code Comparable} objects. * * @author Rodion rodde Efremov * @version 1.61 (Oct 1, 2015) */public final class NaturalMergesort { private Object[] source; private Object[] target; private int sourceOffset; private int targetOffset; private UnsafeIntQueue queue; private NaturalMergesort(Object[] array, int fromIndex, int toIndex) { if (toIndex - fromIndex < 2) { // Nothing to sort. return; } this.queue = buildRunSizeQueue(array, fromIndex, toIndex); Object[] buffer = Arrays.copyOfRange(array, fromIndex, toIndex); int mergePasses = getPassAmount(queue.size()); if ((mergePasses & 1) == 1) { // Odd amount of passes over the entire range; set the buffer array // as source so that the sorted shit ends up in the original array. source = buffer; target = array; sourceOffset = 0; targetOffset = fromIndex; } else { // Arrange the stuff such that after the last merge pass all shit is // in the argument array. source = array; target = buffer; sourceOffset = fromIndex; targetOffset = 0; } sort(); } private void sort() { // The amount of runs in current merge pass that were not processed yet. int runsLeft = queue.size(); // The amount of elements processed from beginnig of the ranges. int offset = 0; // While there are runs to merge, do: while (queue.size() > 1) { if (runsLeft == 3) { // We handle this special case in order to get fast to the state // where the amount of remaining runs is a power of two. We do // this for the following reason: Suppose you have 1048577 = // 1048576 + 1 = 2^(20) + 1 elements in the requested range. // Now the algorithm would sort the first 2^(20) element AND // will have to do one more merge pass just for putting the last // orphan element to its correct position. int leftRunLength = queue.dequeue(); int middleRunLength = queue.dequeue(); int rightRunLength = queue.dequeue(); merge(offset, leftRunLength, middleRunLength, rightRunLength); queue.enqueue(leftRunLength + middleRunLength + rightRunLength); int itmp = sourceOffset; sourceOffset = targetOffset; targetOffset = itmp; Object[] tmp = source; source = target; target = tmp; runsLeft = queue.size(); offset = 0; continue; } int leftRunLength = queue.dequeue(); int rightRunLength = queue.dequeue(); merge(offset, leftRunLength, rightRunLength); // Bounce the run we obtained by merging the two runs to the tail. queue.enqueue(leftRunLength + rightRunLength); offset += leftRunLength + rightRunLength; runsLeft -= 2; if (runsLeft == 0) { // Swap array offsets. int itmp = sourceOffset; sourceOffset = targetOffset; targetOffset = itmp; // Swap array roles. Object[] tmp = source; source = target; target = tmp; // Go to the beginning of the array. runsLeft = queue.size(); offset = 0; } } } /** * Sorts the entire input array. * * @param array the array to sort. 
*/ public static void sort(Object[] array) { sort(array, 0, array.length); } /** * Sorts a specific range in the input array. * * @param array the array holding the target range. * @param fromIndex the starting, inclusive index of the range to sort. * @param toIndex the ending, exclusive index of the range to sort. */ public static void sort(Object[] array, int fromIndex, int toIndex) { new NaturalMergesort(array, fromIndex, toIndex).sort(); } /** * Reverses the range <code>array[fromIndex ... toIndex - 1]</code>. Used * for making descending runs ascending. * * @param array the array holding the desired range. * @param fromIndex the least index of the range to reverse. * @param toIndex the index one past the greatest index of the range. */ public static void reverseRun(Object[] array, int fromIndex, int toIndex) { for(int l = fromIndex, r = toIndex - 1; l < r; ++l, --r) { Object tmp = array[l]; array[l] = array[r]; array[r] = tmp; } } /** * This method implements a 3-way merge operation. * * @param offset the amount of elements to skip from the beginning * of the ranges. * @param leftRunLength the length of the left run. * @param middleRunLength the length of the middle run. * @param rightRunLength the length of the right run. */ private void merge(int offset, int leftRunLength, int middleRunLength, int rightRunLength) { int left = sourceOffset + offset; int middle = left + leftRunLength; int right = middle + middleRunLength; int leftBound = middle; int middleBound = right; int rightBound = right + rightRunLength; int placementOffset = targetOffset + offset; while (left < leftBound && middle < middleBound && right < rightBound) { Comparable cLeft = (Comparable) source[left]; Comparable cMiddle = (Comparable) source[middle]; Comparable cRight = (Comparable) source[right]; if (cRight.compareTo(cMiddle) < 0) { // Here, cRight < cMiddle if (cRight.compareTo(cLeft) < 0) { target[placementOffset++] = cRight; ++right; } else { target[placementOffset++] = cLeft; ++left; } } else { // Here, cMiddle <= cRight. if (cLeft.compareTo(cMiddle) <= 0) { target[placementOffset++] = cLeft; ++left; } else { target[placementOffset++] = cMiddle; ++middle; } } } while (left < leftBound && middle < middleBound) { Comparable cLeft = (Comparable) source[left]; Comparable cMiddle = (Comparable) source[middle]; target[placementOffset++] = cMiddle.compareTo(cLeft) < 0 ? source[middle++] : source[left++] ; } while (left < leftBound && right < rightBound) { Comparable cLeft = (Comparable) source[left]; Comparable cRight = (Comparable) source[right]; target[placementOffset++] = cRight.compareTo(cLeft) < 0 ? source[right++] : source[left++]; } while (middle < middleBound && right < rightBound) { Comparable cMiddle = (Comparable) source[middle]; Comparable cRight = (Comparable) source[right]; target[placementOffset++] = cMiddle.compareTo(cRight) < 0 ? source[middle++] : source[right++]; } System.arraycopy(source, left, target, placementOffset, leftBound - left); System.arraycopy(source, middle, target, placementOffset, middleBound - middle); System.arraycopy(source, right, target, placementOffset, rightBound - right); } /** * This method implements the merging routine. * * @param offset the amount of elements to skip from the beginning * of each array. * @param leftRunLength the length of the left run. * @param rightRunLength the length of the right run. 
*/ private void merge(int offset, int leftRunLength, int rightRunLength) { int left = sourceOffset + offset; int right = left + leftRunLength; int leftBound = right; int rightBound = right + rightRunLength; int placementOffset = targetOffset + offset; while (left < leftBound && right < rightBound) { target[placementOffset++] = ((Comparable) source[right]).compareTo(source[left]) < 0 ? source[right++] : source[left++]; } System.arraycopy(source, left, target, placementOffset, leftBound - left); System.arraycopy(source, right, target, placementOffset, rightBound - right); } /** * This class method returns the amount of merge passes over the input range * needed to sort {@code runAmount} runs. */ private static int getPassAmount(int runAmount) { return 32 - Integer.numberOfLeadingZeros(runAmount / 2); } /** * Scans the runs over the range {@code array[fromIndex .. toIndex - 1]} and * returns a {@link UnsafeIntQueue} containing the sizes of scanned runs in * the same order as they appear in the input range. * * @param array the array containing the desired range. * @param fromIndex the starting, inclusive index of the range to scan. * @param toIndex the ending, exclusive index of the range to scan. * * @return a {@code UnsafeIntQueue} describing the lengths of the runs in * the input range. */ static UnsafeIntQueue buildRunSizeQueue(Object[] array, int fromIndex, int toIndex) { UnsafeIntQueue queue = new UnsafeIntQueue(((toIndex - fromIndex) >>> 1) + 1); int head; int left = fromIndex; int right = left + 1; int last = toIndex - 1; while (left < last) { head = left; // Decide the direction of the next run. if (((Comparable) array[left++]).compareTo(array[right++]) <= 0) { // Scan an ascending run. while (left < last && ((Comparable) array[left]) .compareTo(array[right]) <= 0) { ++left; ++right; } queue.enqueue(left - head + 1); } else { // Scan a strictly descending run. while (left < last && ((Comparable) array[left]) .compareTo(array[right]) > 0) { ++left; ++right; } queue.enqueue(left - head + 1); // We reverse a strictly descending run as to minimize the // the amount of runs scanned in total. Strictness is required. reverseRun(array, head, right); } ++left; ++right; } // A special case: the very last element may be left without buddies // so make it (the only) 1-element run. if (left == last) { queue.enqueue(1); } return queue; } /** * This is the implementation class for an array-based queue of integers. It * sacrifices under- and overflow checks as to squeeze a little bit more of * efficiency and thus is an ad-hoc data structure hidden from the client * programmers. * * @author Rodion Efremov * @version 2014.12.01 */ private static final class UnsafeIntQueue { /** * The minimum capacity of this queue. */ private static final int MINIMUM_CAPACITY = 256; /** * Stores the actual elements. */ private final int[] storage; /** * Points to the element that will be dequeued next. */ private int head; /** * Points to the array component to which the next element will be * inserted. */ private int tail; /** * Caches the amount of elements stored. */ private int size; /** * Used for faster head/tail updates. */ private final int mask; /** * Creates an empty integer queue with capacity of the least power of * two no less than original capacity value. */ UnsafeIntQueue(int capacity) { capacity = fixCapacity(capacity); this.mask = capacity - 1; this.storage = new int[capacity]; } /** * Appends an integer to the tail of this queue. * * @param num the integer to append. 
*/ void enqueue(int num) { storage[tail & mask] = num; tail = (tail + 1) & mask; ++size; } /** * Pops from the head of this queue an integer. * * @return the integer at the head of this queue. */ int dequeue() { int ret = storage[head]; head = (head + 1) & mask; --size; return ret; } /** * Returns the amount of values stored in this queue. */ int size() { return size; } /** * This routine is responsible for computing an integer that is a power * of two no less than {@code capacity}. */ private static int fixCapacity(int capacity) { capacity = Math.max(capacity, MINIMUM_CAPACITY); int ret = 1; while (ret < capacity) { ret <<= 1; } return ret; } }}Demo.java:import java.util.Arrays;import java.util.Random;import net.coderodde.util.sorting.NaturalMergesort;public class Demo { private static final int ARRAY_LENGTH = 2000000; private static final int FROM_INDEX = 5; private static final int TO_INDEX = ARRAY_LENGTH - 6; static int getPasses(int runs) { return 32 - Integer.numberOfLeadingZeros(runs / 2); } public static void main(final String... args) { long seed = System.currentTimeMillis(); System.out.println(Seed: + seed); System.out.println(); System.out.println(-- Random data demo --); Random rnd = new Random(seed); Integer[] array1 = getRandomIntegerArray(ARRAY_LENGTH, -10000, 10000, rnd); Integer[] array2 = array1.clone(); System.out.print(My natural merge sort: ); long ta1 = System.currentTimeMillis(); NaturalMergesort.sort(array2, FROM_INDEX, TO_INDEX); long tb1 = System.currentTimeMillis(); System.out.println((tb1 - ta1) + ms.); System.out.print(java.util.Arrays.sort(): ); long ta2 = System.currentTimeMillis(); java.util.Arrays.sort(array1, FROM_INDEX, TO_INDEX); long tb2 = System.currentTimeMillis(); System.out.println((tb2 - ta2) + ms.); System.out.println(Sorted arrays equal: + Arrays.equals(array1, array2)); System.out.println(); //// System.out.println(-- Presorted data demo --); array1 = getPresortedIntegerArray(ARRAY_LENGTH); array2 = array1.clone(); System.out.print(My natural merge sort: ); ta1 = System.currentTimeMillis(); NaturalMergesort.sort(array2, FROM_INDEX, TO_INDEX); tb1 = System.currentTimeMillis(); System.out.println((tb1 - ta1) + ms.); System.out.print(java.util.Arrays.sort(): ); ta2 = System.currentTimeMillis(); java.util.Arrays.sort(array1, FROM_INDEX, TO_INDEX); tb2 = System.currentTimeMillis(); System.out.println((tb2 - ta2) + ms.); System.out.println(Sorted arrays equal: + Arrays.equals(array1, array2)); } private static Integer[] getRandomIntegerArray(final int size, final int min, final int max, final Random rnd) { final Integer[] array = new Integer[size]; for (int i = 0; i < size; ++i) { array[i] = rnd.nextInt(max - min + 1) + min; } return array; } private static Integer[] getPresortedIntegerArray(final int size) { final Integer[] array = new Integer[size]; for (int i = 0; i < size; ++i) { array[i] = i % (size / 8); } for (int i = 0, j = size - 1; i < j; ++i, --j) { final Integer tmp = array[i]; array[i] = array[j]; array[j] = tmp; } return array; }}
Natural merge sort in Java - follow-up 2
java;algorithm;sorting;mergesort
null
_webmaster.56057
I've recently changed my URLs to lower case. I'm also suffering from an approximately 15% traffic drop from search engines, which started about a week after my change. Can this be related?
This is what I've done. Until now, all page URLs looked like this: http://mydomain.com/folder/Page, and all internal links in the site also pointed to http://mydomain.com/folder/Page2.
All of these have now been changed to http://mydomain.com/folder/page and http://mydomain.com/folder/page2.
The page that still exists at http://mydomain.com/folder/Page now has this added in its HTML <head>: <link rel=canonical href=/folder/page />
Have I done anything wrong? Would you recommend I put it back the way it was?
Can a change of all URLs to lower case with canonical from others cause traffic loss?
seo;google;url;case sensitive
null
_unix.181037
I have Debian jessie and Xfce. When I push the power button, my system is suspended. When I press any key on my keyboard, the system wakes up and I get the login prompt of XScreenSaver (the screen lock is active). Now if I push the power button, the system does not suspend; I must log in, and only then can I suspend the system with the power button. Is there a way to suspend the system from the XScreenSaver login screen?
Suspend button does not work when xscreensaver is active on xfce Debian
xfce;suspend;screen lock;xscreensaver
null
_cs.50869
I am currently trying to learn how the Delta Rule works in a neural network. So far I completely understand the concept of the Delta Rule, but the derivation doesn't make sense. My question is: how is the Delta Rule derived, and what is the explanation for the algebra? Currently I am writing equations to try to understand. They are as follows:
Desired = (Input * WeightD)
Actual = (Input * WeightA)
Error = (Input * WeightD) - (Input * WeightA)
Error = Input (WeightD - WeightA)
However, as stated Here, the weight update is understood to be Error * Input, whereas I derive it to be Error / Input. I fully understand the concept of correcting the weights in a neural network; however, I do not understand how the algebra and derivation of the Delta Rule work to re-adjust the weights to correctly represent the output. I will explain my current understanding to help you see why I may be confused. Because the Input is constant while you attempt to find a correct weight, you can factor it out. This leads me to the equation that the Error is equal to the Input times the difference between the correct and incorrect weights. This is also very clear. However, I do not understand where they go from here to get the equation to correct the weights. How is the required net change derived from this? How does the algebra behind simply Input * Error work? Thank you.
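Edit: for concreteness, here is a minimal sketch of the rule as it is usually stated, for a single linear unit (the learning rate eta is an assumed extra parameter):
# Delta rule: w <- w + eta * (desired - actual) * input
def delta_rule_step(w, x, desired, eta=0.1):
    actual = w * x               # output with the current weight
    error = desired - actual     # equals x * (WeightD - WeightA), as derived above
    return w + eta * error * x   # note: error TIMES input, not divided by it

w = 0.0
for _ in range(50):
    w = delta_rule_step(w, x=2.0, desired=6.0)
print(round(w, 3))               # approaches 3.0, since desired = 3 * input

The justification I keep finding is that the rule performs gradient descent on the squared error E = (1/2)(desired - w*input)^2, whose derivative with respect to w is -(desired - w*input)*input; the downhill step is therefore proportional to Error * Input. Dividing by the Input would instead solve for the weight exactly in one step, which is not what gradient descent does.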
How is the Delta Rule derived in neural networks and what is the explanation for the algebra?
algorithms;neural networks
null
_codereview.137869
This code wraps the user endpoint and media endpoint of Instagram API. Any best practices/ styles or glaring bugs you can see? I wrote some unit tests for each class but did not include for readability Base class: import unirestfrom oauth2 import OAuth2APIclass Client(OAuth2API): host = http://api.instagram.com base_path = /v1 header_default = {Accept: application/json} ACCESS_TOKEN_ONLY = [access_token]def __init__(self, **kwargs): super(Client, self).__init__(**kwargs)def build_path(self, endpoint): return self.host + self.base_path + endpointdef build_params(self, params): return paramsdef build_oauth_params(self, params): if params == self.ACCESS_TOKEN_ONLY: return {access_token: self.access_token} else: raise NotImplementedError(ouath params {} not implemented.format(params))def parse_request(self, endpoint, accepted_oauth_params, accepted_params): path = self.build_path(endpoint) params = self.build_params(accepted_params) params.update(self.build_oauth_params(accepted_oauth_params)) return path, paramsdef get_request(self, endpoint, accepted_oauth_params, accepted_params): path, params = self.parse_request(endpoint, accepted_oauth_params, accepted_params) return unirest.get(path, headers=self.header_default, params=params)User endpoint class:from client import Clientclass User(Client):endpoint_base = /usersdef __init__(self, **kwargs): super(User, self).__init__(**kwargs)def self(self): oauth_params = self.ACCESS_TOKEN_ONLY params = {} endpoint = self.endpoint_base + /self response = self.get_request(endpoint, oauth_params, params) return responsedef self_recent_media(self, count=None, min_id=None, max_id=None): oauth_params = self.ACCESS_TOKEN_ONLY params = {} if count: params.update({count: count}) if min_id: params.update({mind_id: min_id}) if max_id: params.update({max_id: max_id}) endpoint = self.endpoint_base + /self/media/recent response = self.get_request(endpoint, oauth_params, params) return responsedef user_id(self, user_id): oauth_params = self.ACCESS_TOKEN_ONLY params = {} endpoint = self.endpoint_base + / + str(user_id) response = self.get_request(endpoint, oauth_params, params) return responsedef user_recent_media(self, user_id, count=None, min_id=None, max_id=None): passdef self_liked(self, count=None, max_like_id=None): oauth_params = self.ACCESS_TOKEN_ONLY params = {} if count: params.update({count: count}) if max_like_id: params.update({max_like_id: max_like_id}) endpoint = self.endpoint_base + /self/media/liked response = self.get_request(endpoint, oauth_params, params) return responsedef search(self, query, count=None): oauth_params = self.ACCESS_TOKEN_ONLY params = {q: query} if count: params.update({count: count}) endpoint = self.endpoint_base + /search response = self.get_request(endpoint, oauth_params, params) return responseMedia endpoint classfrom client import Clientclass Media(Client):endpoint_base = /mediadef __init__(self, **kwargs): super(Media, self).__init__(**kwargs)def media_id(self, media_id): oauth_params = self.ACCESS_TOKEN_ONLY params = {} endpoint = self.endpoint_base + / + str(media_id) response = self.get_request(endpoint, oauth_params, params) return responsedef media_shortcode(self, shortcode): oauth_params = self.ACCESS_TOKEN_ONLY params = {} endpoint = self.endpoint_base + /shortcode/ + str(shortcode) response = self.get_request(endpoint, oauth_params, params) return responsedef media_search(self, latitude=None, longitude=None, distance=1000): oauth_params = self.ACCESS_TOKEN_ONLY params = {} if latitude and longitude: 
params.update({lat: latitude, lng: longitude}) if distance != 1000: params.update({distance: distance}) endpoint = self.endpoint_base + /search response = self.get_request(endpoint, oauth_params, params) return response
Python wrapper for Instagram API
python;rest;wrapper;instagram
params.update({count: count}) seems overwrought. What's wrong with just params[count] = count?
Also, code like:
endpoint = self.endpoint_base + / + str(media_id)
always makes me wonder if I'm getting the number of slashes correct, so instead I do:
import posixpath
endpoint = posixpath.join(self.endpoint_base, str(media_id))
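For example (my own demonstration, not from the original review), posixpath.join normalizes the separator regardless of whether the base path ends in a slash:
import posixpath
print(posixpath.join("/media", "123"))   # /media/123
print(posixpath.join("/media/", "123"))  # /media/123 - no doubled slash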
_cstheory.653
Complexity theory uses a large number of unproven conjectures. There are several hardness conjectures in David Johnson's NP-Completeness Column 25. What are the other major conjectures not mentioned in that article? Has there been progress towards proving any of these conjectures? Which conjecture do you think would require completely different techniques from the currently known ones?
Major conjectures used to prove complexity lower bounds?
cc.complexity theory;lower bounds
null
_softwareengineering.315642
Is it good or bad to duplicate data between tests and real code? For example, suppose I have a Python class FooSaver that saves files with particular names to a given directory:class FooSaver(object): def __init__(self, out_dir): self.out_dir = out_dir def _save_foo_named(self, type_, name): to_save = None if type_ == FOOTYPE_A: to_save = make_footype_a() elif type == FOOTYPE_B: to_save = make_footype_b() # etc, repeated with open(self.out_dir + name, w) as f: f.write(str(to_save)) def save_type_a(self): self._save_foo_named(a, a.foo_file) def save_type_b(self): self._save_foo_named(b, b.foo_file)Now in my test I'd like to ensure that all of these files were created, so I want to say something like this:foo = FooSaver(/tmp/special_name)foo.save_type_a()foo.save_type_b()self.assertTrue(os.path.isfile(/tmp/special_name/a.foo_file))self.assertTrue(os.path.isfile(/tmp/special_name/b.foo_file))Although this duplicates the filenames in two places, I think it's good: it forces me to write down exactly what I expect to come out the other end, it adds a layer of protection against typos, and generally makes me feel confident that things are working exactly as I expect. I know that if I change a.foo_file to type_a.foo_file in the future I'm going to have to do some search-and-replace in my tests, but I don't think that's too big of a deal. I'd rather have some false positives if I forget to update the test in exchange for making sure that my understanding of the code and the tests are in sync.A coworker thinks this duplication is bad, and recommended that I refactor both sides to something like this:class FooSaver(object): A_FILENAME = a.foo_file B_FILENAME = b.foo_file # as before... def save_type_a(self): self._save_foo_named(a, self.A_FILENAME) def save_type_b(self): self._save_foo_named(b, self.B_FILENAME)and in the test:self.assertTrue(os.path.isfile(/tmp/special_name/ + FooSaver.A_FILENAME))self.assertTrue(os.path.isfile(/tmp/special_name/ + FooSaver.B_FILENAME))I don't like this because it doesn't make me confident that the code is doing what I expected --- I've just duplicated the out_dir + name step on both the production side and the test side. It won't uncover an error in my understanding of how + works on strings, and it won't catch typos.On the other hand, it's clearly less brittle than writing out those strings twice, and it seems a little wrong to me to duplicate data across two files like that.Is there a clear precedent here? Is it okay to duplicate constants across tests and production code, or is it too brittle?
Duplicating constants between tests and production code?
unit testing;coding style;clean code
I think it depends on what you're trying to test, which goes to what the contract of the class is. If the contract of the class is exactly that FooSaver generates a.foo_file and b.foo_file in a particular location, then you should test that directly, i.e. duplicate the constants in the tests. If, however, the contract of the class is that it generates two files into a temporary area, the names of which are easily changed, especially at runtime, then you must test more generically, probably using constants factored out of the tests. So you should be arguing with your coworker about the true nature and contract of the class from a higher-level domain design perspective. If you can't agree, then I would say that this is an issue of the understanding and abstraction level of the class itself, rather than of testing it. It is also reasonable to find the class's contract changing during refactoring, for example, for its abstraction level to be raised over time. At first, it may be about the two specific files in a particular temp location, but over time, you may find additional abstraction is warranted. At such a time, change the tests to keep them in sync with the contract of the class. There is no need to over-build the class's contract right away just because you're testing it (YAGNI). When a class's contract is not well defined, testing it can make us question the nature of the class, but so would using it. I would say that you shouldn't upgrade the contract of the class just because you are testing it; you should upgrade the contract of the class for other reasons, such as it being a weak abstraction for the domain, and if not, then test it as it is.
_ai.3814
So, Deepmind is pushing for a human level Starcraft bot and Open AI just created a human level 1vs1 Dota bot. Unfortunately, I've no clue what that signifies because I've never played Starcraft nor Dota nor do I have more than a fleeting acquaintance with similar games. My question is what the difference between Starcraft and Dota is from a AI perspective and what scientific significance the respective super human bots would have.
What's the difference between Starcraft and Dota from an AI perspective?
game ai;deepmind
null
_unix.161698
I have a fresh VPS that I just bought, for playing only, so there is no risk of any kind involved. I noticed that due to my slow connection I have to wait seconds to finish writing commands and opening/closing files. So, I would like to know if I can connect to my VPS without ssh. PuTTY gives me an option to connect as raw, but upon choosing it I cannot log in to my VPS.
How to connect to a VPS without SSH.
ssh;centos;vps
You could try mosh.It is optimized for slow and buggy network connections and does some neat tricks to allow comfortable working with such a connection, i.e. it displays characters you type on the fly and not after the server sends them back to you.Opening files will sadly still be slow, but commands are better.
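A typical invocation, assuming mosh is installed on both your machine and the VPS, is simply mosh user@your-vps-host; it uses SSH for the initial handshake and then switches to its own UDP-based protocol, which is what makes typing feel responsive.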
_cs.61150
I receive a list of real numbers ( float ) between $0$ and $1$. The list has length $N+1$ and I need to find two numbers on the list which are $\le \frac{1}{N}$ apart. Here is an example test case:[ 0. , 0.31662479, 0.63324958, 0.94987437, 0.26649916, 0.58312395, 0.89974874, 0.21637353, 0.53299832, 0.84962311, 0.1662479 , 0.48287269, 0.79949748, 0.11612227, 0.43274706, 0.74937186, 0.06599665]One possibility I came up with is to multiply by $N=17$ and to take the integer part:[ 0, 5, 10, 16, 4, 9, 15, 3, 9, 14, 2, 8, 13, 1, 7, 12, 1]Then I have to find the location of two numbers in this list which are identical.I would appreciate any solution to this problem with or without the intermediate multiplication step. Or does this problem have a particular name in the field of algorithms?
Find the duplicates in a list of floating point numbers
sorting;integers;floating point;lists
Sorting
The simplest algorithm is to sort your floats, then compare adjacent entries. This will let you find all pairs that are $\le \frac1N$ apart in $O(N \lg N)$ time.
Hashing
It's also possible to come up with a linear-time algorithm, i.e., to check whether there is any nearby pair of floats and, if so, find at least one such pair, in $O(N)$ time.
Let $M=\lfloor N/2 \rfloor$. Build a list of integers by replacing each float $x$ with $\lfloor Mx \rfloor$, then find all duplicates in the resulting list of integers. Finding duplicates in a list of integers can be done in expected linear time by hashing and looking for a pair with the same hash. For each duplicate integer, check whether it corresponds to a pair of floats that differ by $\le \frac1N$.
Then, do it again: build a second list of integers by replacing each float $x$ with $\lfloor Mx + 0.5 \rfloor$, then find all duplicates in this list and check each to see whether it corresponds to a pair of floats that differ by $\le \frac1N$.
It's possible to show that if there exists a pair of floats that differ by $\le \frac1N$, then this procedure will find at least one such pair; and if there is no such pair, then this procedure will discover that fact. Moreover, with a suitable choice of hash function, this can be made to run in expected linear time. In fact, if you replace the hashing with counting sort, this runs in deterministic linear time.
In practice
In practice, my advice is to simply sort and then compare adjacent entries. It will be easier to implement, less fiddly (you don't have to deal with corner cases), obviously correct, and probably more than fast enough in practice.
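A minimal sketch of the sorting approach in Python (the function name and test data are mine):
def find_close_pair(xs, N):
    # After sorting, any closest pair must be adjacent.
    ys = sorted(xs)
    for a, b in zip(ys, ys[1:]):
        if b - a <= 1.0 / N:
            return (a, b)
    return None

data = [0.0, 0.31662479, 0.63324958, 0.26649916, 0.58312395, 0.89974874]
print(find_close_pair(data, len(data) - 1))  # (0.26649916, 0.31662479)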
_webapps.76244
I just replaced my phone due to water damage and I have forgotten my Gmail and Facebook passwords. How do I retrieve them?
Forgot passwords due to water damage
gmail;facebook
null
_webmaster.99011
The client only has cPanel access. It's a basic LAMP, shared-server environment. I've checked cPanel's Backups and Cron Jobs sections, and there's nothing. Would anyone know what could be creating these?
.tar.gz backups appearing in root directory of shared hosting client. Where are they coming from?
web hosting;shared hosting;backups
null
_webapps.28853
I have cells with dates in this format: DD/MM/YYYY 00:00:00, and I want to add 3 hours because I'm in a different GMT zone. Is it possible to use dates in a formula? Something like 30/12/2012 22:15:00 should become 31/12/2012 01:15:00.
Google Spreadsheets: Add 3 hours to Date value (DD/MM/YYYY 00:00:00)
google spreadsheets;date
All date/time values in spreadsheets are internally handled as floating point values. To add 3 hours to a date/time just add (3/24) to the original date/time.=F3+(3/24)This also works in MS Excel.
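Applied to the example from the question: 30/12/2012 22:15:00 is stored as a serial number in which 1.0 equals one day, so adding 3/24 = 0.125 yields 31/12/2012 01:15:00 once the cell is formatted as a date/time.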
_codereview.29355
I think many know the problem of working with multiple projects: private projects, company projects, probably even projects for multiple companies.From day-one, I've always searched for better ways to handle all those projects on the file system level. I think I figured out a nice basic directory structure that works for me and my workflow:~/code/$company_name/$project_name/[$application_name]But, even with auto-completion and a $CODE var that pointed to ~/code, I had to type a lot more than I wanted when I had to switch between some projects.So, for the sake of laziness (I also heard some smart guys call it efficiency, but nah, I'm just lazy) and to learn a bit more about my shell, I decided to write a function that will do some of the work I'm not willing to do.These were the functional requirements I defined for that function:quickly cd into a project by given company and project namepossibility to add an optional application name for projects with multiple applicationspossibility to pass a flag to create new project directoriesSince I'm still a junior developer (1-year work experience, no degree, autodidact) and don't have much experience with shell-scripting, I'm pretty sure there is a lot to improve in my solution.And since I'm always on the search for improvements, I decided to show my code to those of whom might be more experienced in this than me.function pcd () { local create=false local target_dir=$HOME/code local argument_count=$# local root local app_name local project_name # When first param is -c, set create flag to true and remove the param if [[ $1 == -c ]]; then create=true (( argument_count=argument_count-1 )) shift fi # the root (private, company1, company2, etc...) # I really would like to make that optional too, but # that would require root=$1 # $PROJECT_ALIASES is a associative array to map some # project aliases to their actual names, it's set globally # since it's used in other scripts to and maintained through # it's own script project_name=$PROJECT_ALIASES[$2] # if the passed project string wasn't an alias, # assume it was the raw project name if [[ ! -n $project_name ]]; then project_name=$2 fi # the app folder (for multi application projects, optional) app_name=$3 # validate required params if [[ ! (( -n $root && -n $project_name )) ]]; then echo [ERROR] - Please pass at least a root dir and a project name >&2 return 1 fi target_dir=$target_dir/$root/$project_name if [[ -n $app_name ]]; then target_dir=$target_dir/$app_name fi # When the create flag is true (-c), create target directory if [[ $create == true ]]; then mkdir -p $target_dir fi # Finally, cd to the targeted directory cd $target_dir}
A z-shell function to quickly cd into projects
beginner;shell;zsh
null
_webmaster.674
I make valid (X)HTML documents that are also semantically correct. I try to use all tags as they were intended and this has yielded good results as far as my placement in Google and other search engines.What I've been doing for the last few years is using Google Adsense as a sort of barometer to ascertain how well Google understands my content. Normally, I place one Google ad at the bottom of the page and wait for it to change. If the ad reflects the topic of my site and the text on any given page, I assume that I've done my job well and just remove the ad.I'm wondering, does anyone else use this strategy .. and has it also worked for you? I realize that my ranking depends on many factors, but making sure the crawler could understand my pages seemed like the first battle to win.Am I just throwing salt over my shoulder by doing this?
Using Adsense as a SEO tool - does the adsense bot have the same understanding as the normal web crawler?
seo;google adsense
My experience of Adsense ads is that they can change quite a bit over the course of time. Also, sometimes there are simply no good ads to match the content of a certain page, or Google will latch onto a certain word on a page and there are so many ads on that topic that it ignores what the page is really about. I think a better barometer for doing the same thing would be to search for the page, then look at what pops up when you search for related pages.
_webapps.33961
I need to download the sound from Google's doodle. I saw this post on Superuser that explains how to download a doodle for offline usage and I tried it, but the sounds are not included in the download. So, does anyone know how to download the sounds from Google's doodle?
How to download Google doodle sounds?
google;download;offline
null
_unix.362637
What does this mean in a shell script?
| sed 's/ /':'/' | sed 's/ /-/' > file.list
What does this sed command mean?
shell script;sed
Assuming the context issome-command | sed 's/ /':'/' | sed 's/ /-/' > file.listLet's break it apart piece by piece. Suppose for example that some-command is echo 'test of the command'.Then sed 's/ /':'/' replaces the first space by :.test of the command test:of the commandAfter that, sed 's/ /-/' replaces the new first space by -test:of the command test:of-the commandThis transformation is applied on each line of the output of some-command.As mentioned by @Philippos in the comments, it is unclear why : is unquoted here. It would be better assome-command | sed 's/ /:/' | sed 's/ /-/' > file.listBut sed is not restricted to a single replacement per instance. So even better issome-command | sed 's/ /:/; s/ /-/' > file.list
_codereview.96840
I am building an Android application and there are five Activity classes or if you're familiar with the MVC pattern they would usually be the Controller classes. Specifically a User will enter one of these 5 Activity classes (by navigating throughout the app) and sometimes they might upload a photo. Now the code for uploading a photo follows a very similar pattern. Please note all this code is repeated 5 times in all 5 classes (YUCK).Global Variables:/*Tracking */private static final int TAKE_PHOTO_REQUEST = 1;private static final int GET_FROM_GALLERY = 2;private Uri mUri;private String mCurrentPhotoPath;private File mFile;private TypedFile mTypedFile; // For RetrofitUser hits Photo Upload Button, and a AlertDialog pops up:private void showFileOptions() { new AlertDialog.Builder(this) .setItems(R.array.uploadOptions, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { switch (which) { case 0: dispatchTakePicture(); break; case 1: dispatchUploadFromGallery(); break; } } }) .setNegativeButton(android.R.string.cancel, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }) .show();}dispatchTakePicture:/*Take picture from your camera */private void dispatchTakePicture() { Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE); // Make sure that there is a camera activity to handle the intent if (intent.resolveActivity(getPackageManager()) != null) { // Create the File where the mTypedFile would go File picFile = null; try { picFile = createImageFile(); mFile = picFile; } catch (IOException e) { e.printStackTrace(); Toast.makeText(this, e.getMessage(), Toast.LENGTH_LONG).show(); } // Continue only if the file was successfully created if (picFile != null) { intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(picFile)); startActivityForResult(intent, TAKE_PHOTO_REQUEST); } }}dispatchUploadFromGallery:/*Take a mTypedFile from your gallery */private void dispatchUploadFromGallery() { // Launch gallery intent startActivityForResult(new Intent(Intent.ACTION_PICK, MediaStore .Images.Media.INTERNAL_CONTENT_URI), GET_FROM_GALLERY);}Please note that startActivityForResult gets called in both of these methods. Next up is the createImageFile() method if the user wants to take a picture from the Camera API:private File createImageFile() throws IOException { // Create the Image File name String timeStamp = new SimpleDateFormat(yyyyMMdd_HHmmss, Locale.getDefault()).format(new Date()); String imageFileName = JPEG_ + timeStamp + _; File storageDir = Environment .getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES); File image = File.createTempFile( imageFileName, // Prefix .jpg, // Suffix storageDir // Directory ); // Save the file, path for ACTION_VIEW intents mCurrentPhotoPath = file: + image.getAbsolutePath(); mUri = Uri.fromFile(image); return image;}Now finally our startActivityForResult(...) 
method:@Overrideprotected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == TAKE_PHOTO_REQUEST && resultCode == RESULT_OK) { startUploadProgress(); showContainer(); mTypedFile = new TypedFile(image/*, mFile); RotatePictureHelper.rotatePicture(mFile, ExampleActivity.this, mAttachment); // Helper class to rotate pictures mBus.post(new LoadUploadFileEvent(mTypedFile)); } else if (requestCode == GET_FROM_GALLERY && resultCode == RESULT_OK) { startUploadProgress(); showContainer(); mUri = data.getData(); mTypedFile = UriHelper.handleUri(mUri, this); // Helper class to handle bitmap manipulation mFile = mTypedFile.file(); mBus.post(new LoadUploadFileEvent(mTypedFile)); } else if (resultCode != Activity.RESULT_CANCELED) { Toast.makeText(this, R.string.generalError, Toast.LENGTH_LONG).show(); }}Note that I have already created helper classes to handle bitmap manipulation and picture rotation issues. STILL, this is very ugly ugly code, and to have this repeated in 5 classes.I have a few ideas in mind right now:Create a service and pass in needed variables to that service to handle this.Moving AlertDialog options to a helper class and call different AlertDialogs based on instanceOf whatever Activity is calling it.Should I create a parent Activity class that has these methods and then extend the 5 Activity child classes and call these methods?
Camera intents and file manipulation
java;android;image
null
_unix.383596
I would like to make a pull request to upgrade the rpm formula of Homebrew, and I'm looking for the latest stable code base for rpm. I found two projects: http://rpm.org and http://rpm5.org
The current rpm formula of Homebrew is based on version 5.4.15 from rpm5.org, published on 24 Aug 2014. On rpm.org, there is a version 4.13.0.1 published on 16 Feb 2017.
What are the differences between the versions of rpm published on rpm5.org and those published on rpm.org?
Which one provides the latest stable code base for rpm?
What are the differences between rpm.org and rpm5.org?
rpm;homebrew
null
_codereview.55124
I want to create a table that looks like this:--------------------------------------------------| | USD |CAD |--------------------------------------------------|Smallest Donation | 100 |250 |--------------------------------------------------|Largest Donation | 9200 |7600 |--------------------------------------------------| |--------------------------------------------------|Total Donation | 12500 |11000 |--------------------------------------------------using the values from this object:perCurrency: {USD:{0:100, 1:200, 2:9200, 3:1500, 4:1500}, PHP:{0:250, 1:7600, 2:150, 3:3000}}I have this code and it actually works, though I believe there's an easier and shorter way to this using only one loop.var numOfCurrency = Object.keys(perCurrency).length + 1;var donation_table = '';donation_table += '<table id=donation_table class=table table-condensed>';donation_table += '<tr><td style=font-weight:bold; width:160px>&nbsp;</td>';$.each(perCurrency, function(index, value){ donation_table += '<td width=150px>'+index+'</td>';});donation_table += '</tr><tr><td style=font-weight:bold; width:160px>Smallest Donation</td>';$.each(perCurrency, function(index, value){ var lowest = Infinity; $.each(value, function(k, v){ if (v < lowest) lowest = v; }); donation_table += '<td>'+lowest+'</td>';});donation_table += '</tr><tr><td style=font-weight:bold; width:160px>Largest Donation</td>';$.each(perCurrency, function(index, value){ var highest = 0; $.each(value, function(k, v){ if (v > highest) highest = v; }); donation_table += '<td>'+highest+'</td>';});donation_table += '</tr><tr><td colspan='+numOfCurrency+'>&nbsp;</td></tr><tr><td style=font-weight:bold; width:160px>Total Donation</td>';$.each(perCurrency, function(index, value){ var total = 0; $.each(value, function(k, v){ total = total + v; }); donation_table += '<td>'+total+'</td>';});donation_table += '</tr></table>';$(#giving_capacity).html(donation_table);I am trying to put this table in a div with an id #giving_capacity.
Dynamically create a table with values from an object
javascript;jquery
null
_datascience.8850
I occasionally train neural nets for my research, and they usually take quite a long time to run (especially when I'm working on my laptop). I'm looking for a way to build the model on any computer, send it up to a server for training, and have it return the graphs/accuracies/weights etc. I know there are paid solutions for this, but I'm looking for a distributed solution I can run myself. I have a server set up at home which is about to get a CPU and GPU upgrade. I'd like to be able to set it up so that when I'm working on the LAN, or when I'm working remotely on my laptop, I can send code to the server and have it train the model and return the results to me (or save the results if the sending machine is switched off). Are there any existing solutions to accomplish something like this? I'm not tied to any specific library, but would prefer to stick with Python if possible.
Python distributed machine learning
machine learning;python;neural network;distributed
null
_scicomp.20993
I'm using the method of manufactured solutions (MMS) to verify a CFD code (transient energy equations). The equations are discretised with an implicit scheme and allow a large time step without any problem. The concern is that when I add the source term of the manufactured solution, I only get convergence with small time steps. I don't know if there is any restriction on the time step when using the MMS in a transient problem.
ps: my manufactured solution is smooth and contains trigonometric and exponential functions.
thanks in advance :)
Transient manufactured solution
fluid dynamics
null
_softwareengineering.188030
Consider a module that is responsible for parsing files of any given type. I am thinking of using the strategy pattern to tackle this problem, as I have already explained over here. Please refer to the linked post before proceeding with this question. Consider class B, which needs the contents of the product.xml file. This class will need to instantiate the appropriate concrete implementer of the Parser interface to parse the XML file. I can delegate the instantiation of the appropriate concrete implementer to a Factory, such that class B has-a Factory. However, class B will then depend on a Factory for instantiating the concrete implementer. This means that the constructor or a setter method in class B will need to be passed the Factory. Therefore, the Factory and class B, which needs to parse a file, will be tightly coupled with each other. I understand that I may be completely wrong about whatever I have explained so far. I would like to know whether I can use dependency injection in a scenario where the dependency to be injected is a Factory, and what the right way to implement this would be, so that I can take advantage of techniques such as mocking the Factory in my unit tests.
How to use Dependency Injection in conjunction with the Factory pattern
java;design patterns;dependency injection
The right way to do this is to depend on an interface, and then inject an implementation of that interface into Class B. An interface is just about the thinnest thing that you can depend on -- I liken it to trying to grab a wisp of smoke. Code has to couple to something or else it won't do anything, but coupling to an interface is about as decoupled as you can get, yet interfaces can provide all the functionality that you could want. So have Class B's constructor take an interface to the class it needs, and have the factory produce that class as an implementer of the interface. Don't depend on the factory, depend on the interface, and have the factory provide an implementation of that interface. So yes, you will be using dependency injection, but there's nothing wrong with that. Dependency injection -- particularly simple constructor injection -- should be the normal way of doing things. Simply put off new-ing things as far back in your app (and as close to the first line of main) as possible, and hide the new calls in a class designed specifically for creating things. Bottom line: Don't be hesitant to inject dependencies. That should be the normal way of doing things.
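A sketch of the shape this takes (illustrative names, written in Python for brevity rather than Java):
from abc import ABC, abstractmethod

class Parser(ABC):                        # the thin interface Class B depends on
    @abstractmethod
    def parse(self, path): ...

class XmlParser(Parser):
    def parse(self, path):
        return "parsed %s as XML" % path  # stand-in for real parsing logic

class ParserFactory:                      # knows the concrete types; lives near main()
    def create_for(self, path):
        return XmlParser()                # e.g. dispatch on file extension here

class B:
    def __init__(self, parser):           # B couples only to the interface
        self._parser = parser
    def load_products(self):
        return self._parser.parse("product.xml")

b = B(ParserFactory().create_for("product.xml"))  # wiring at the composition root
print(b.load_products())

In a unit test, B can then be handed a hand-written fake Parser directly; no factory (and no mocking framework) is strictly required.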
_codereview.122377
I'm trying to find out what is the right approach to return DB values from a class. This is what I came up with: class Category { public $result; public $errors; public function __construct( $pdo) { $this->pdo = $pdo; } public function getCategoryNames () { $sql = SELECT category_id, category_name FROM category; try { $query = $this->pdo->prepare( $sql ); if( $query->execute( ) ) { if( $query->rowCount() ) { $this->result = $query->fetchAll(); return true; } return false; } return false; } catch ( PDOException $pe ) { trigger_error( 'SQL Error' . $pe->getMessage() ); } } public function getUserDetails($userid) { $sql = SELECT * FROM users WHERE user_id = :user_id; try { $query = $this->pdo->prepare( $sql ); if( $query->execute( array( 'user_id' => $userid ) ) ) { if( $query->rowCount() ) { $this->result = $query->fetch(PDO::FETCH_ASSOC); return true; } return false; } return false; } catch ( PDOException $pe ) { trigger_error( 'SQL Error' . $pe->getMessage() ); } }$category = new Category($pdo);$category->getCategoryNames();print_r($result);$category->getUserDetails('1');print_r($result);Is this approach good? Or what is the proper way to do this?
Database query inside a method of a class and return data
php;object oriented;mysql;pdo
Let me first say that any answer you get here is going to be biased by each person's personal preference. What you're doing isn't dangerous or breaking any rules, so it's 'fine' as far as that's concerned. With that in mind, I'll give you my opinion on a few things to consider, and you can decide from there. Returning true and false to me is a bit old-school and doesn't take advantage of PHP's flexible variable type handling. I typically return FALSE if there's an error, and return the results if not. The calling code can then just do if ($result !== FALSE) and you're good to go. The other thing to consider is the format of your return data. One method is to dump it into some predefined objects. This is a very clean approach that makes code reliable and easy to read, but it isn't always that easy to implement. I would only do this if you're going to be creating a large and complex project. If you've got a simple project, then returning arrays is the fastest and easiest way to go. It's a little less reliable if your database is likely to change, so watch out for that. I typically massage my arrays in the method prior to returning them. Since in your example you're breaking out specific operations, you can do custom data formatting in each. This is encouraged, since it will make your calling code much easier to create, read, and maintain. Lastly, I wanted to talk about your error handling. What I prefer to do is return FALSE on error only, and return data every other time. You've chosen to return FALSE if no results are found, which makes a bit of sense, as it will simplify the calling code. The change I would suggest is to remove your trigger_error call and instead set the error message on a class variable that can be retrieved by your calling code. You should then return FALSE after the catch. This will allow your code to fail more gracefully and also give you more control in your calling code as to how to handle the error (think MVC). Of course, I am making some assumptions as to what trigger_error does. I see you have an $errors variable, so perhaps you're already on that track. If you are, then I recommend always returning FALSE from the method after an error, and adding hasErrors and getErrors methods to your class.
_computergraphics.4221
How can I make an object give the effect that it is giving out light when it isn't? Basically I want to make an object glow, for example neon lights. Also, area lights in my engine work properly, but to make them look more realistic I wanted to give the effect of a haze or glow rather than a flat white rectangle/disk. This photo basically shows what I want to do perfectly.
Bloom in DirectX
opengl;lighting;directx11
This effect is called light bloom. Its algorithm is usually a variation of the following:
1. Render your scene (preferably in high dynamic range) to a texture.
2. Make a thresholding pass to another texture, i.e. pixels whose brightness is below a certain (configurable) threshold are turned down to black.
3. Downsample and blur the thresholded pixels. Usually, this is done in several octaves, i.e. rendering a Gaussian blur with a small kernel to progressively smaller render targets: from full resolution to half resolution, from half to quarter, etc.
4. Composite the downsampled octaves back onto the scene image.
Both the number of octaves used and the Gaussian blur kernel size affect the end result in terms of visual quality and performance, so you may need to make trade-offs. In other words, the glow effect that you seek is usually simply the original scene image, thresholded and blurred, superimposed back onto the scene image.
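As a rough illustration of steps 2-4, here is a toy sketch in Python/NumPy rather than shader code (the threshold, octave count, and blend strength are arbitrary choices of mine):
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def bloom(img, threshold=1.0, octaves=3, strength=0.6):
    bright = np.where(img > threshold, img, 0.0)    # step 2: thresholding pass
    glow = np.zeros_like(img)
    for i in range(1, octaves + 1):                 # step 3: blur at shrinking resolutions
        small = gaussian_filter(zoom(bright, 0.5 ** i, order=1), sigma=2.0)
        glow += zoom(small, 2.0 ** i, order=1)[:img.shape[0], :img.shape[1]]
    return img + strength * glow                    # step 4: composite back onto the scene

hdr = np.zeros((64, 64))
hdr[30:34, 30:34] = 4.0                             # one bright emitter in an HDR buffer
print(bloom(hdr).max())                             # the emitter is now surrounded by a halo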
_unix.286591
I've created some custom key bindings for bash vi mode. They trigger while I'm in insert mode, and I want them instead to trigger when I'm in normal mode. I'm using vi mode
set -o vi
in a terminal emulator on Ubuntu 14.04 server. So far I have remapped:
^ Move to start of line
$ Move to end of line
To the following:
<space>a Move to start of line
<space>; Move to end of line
Using the bash built-in command bind, by editing .bashrc as follows:
bind -a:beginning-of-line
bind -;:end-of-line
These key bindings work - but they only trigger when I'm in insert mode. How do I get them to fire only when I'm in normal mode, not in insert mode?
tags: bash vi mode, bash vi mode remap keys, vi mode normal mode
trigger vi mode key binding in normal mode only
vim;keyboard shortcuts;bashrc;readline;vi mode
null
_datascience.3728
I was wondering if anyone knew which piece of software is being used in this video? It is an image recognition system that makes the training process very simple.http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn#t-775098The example is with car images, though the video should start at the right spot.
What software is being used in this image recognition system?
classification
I'm pretty sure that the software you're referring to is some kind of internal research project software developed by Enlitic (http://www.enlitic.com), where Jeremy Howard works as founder and CEO. By internal research project software I mean either proof-of-concept software or prototype software.
_reverseengineering.9265
Recently I came across strange (in my opinion) behavior of radare2. I had been reading the Artificial truth blog post about Hacking bits with this crackme. In the article Julien used Intel syntax, but I chose AT&T. So I started disassembling the crackme:
$ r2 ./crackme.03.32
I set the syntax to intel and the block size to 10 bytes, seeked to the needed address, and printed the disassembly:
[0x00010020]> e asm.syntax = intel
[0x00010020]> b 10
[0x00010020]> s 0x0010112
[0x00010112]> pd
The output was:
0x00010112 80f2ac xor dl, 0xac
0x00010115 eb02 jmp 0x10119
But when I changed the syntax to AT&T:
[0x00010112]> e asm.syntax = att
[0x00010112]> pd
I received this:
0x00010112 80f2ac xorb $-0x54, %dl
0x00010115 eb02 jmp 0x10119
In the source code of the crackme we can find that the value of the argument is 0xac (xor dl, 0xac).
So, the actual questions:
Why does 80 f2 ac translate to the same opcode, but with different arguments, in AT&T and Intel syntax?
Why did 0xac become -0x54?
$ r2 -version
radare2 0.10.0-git 8247 @ linux-little-x86-64 git.0.9.9-148-gd5f2661
commit: d5f2661cbe1a32bc26490bd7a1864ef45907aaea build: 2015-06-26
AT&T XOR argument at radare2
disassemblers;crackme
It was a signed vs. unsigned question. Both listings decode the same instruction; the two syntax printers just render the 8-bit immediate differently: Intel prints it as the unsigned byte 0xac, while the AT&T printer shows it as a signed byte, and 0xac interpreted as a signed 8-bit value is -0x54 (172 - 256 = -84). The way to change the signedness is by negating the value, i.e. two's complement: NOT all bits of the number and increment it by 1:
>>> 256 - (~(-0x54)+1)
172
>>> hex(172)
'0xac'
>>>
_codereview.117494
I have this ugly function, and I feel that the entire strncpy should just be an strcpy:void PackData(char*& cursor, const std::string& data) { *(reinterpret_cast<int*>(cursor)) = static_cast<short>(data.length() + 1); cursor += sizeof(int); // copy the text to the buffer ::strncpy(cursor, data.c_str(), data.size()); cursor += (data.length() * sizeof(char)); *(reinterpret_cast<char*>(cursor)) = 0; cursor += sizeof(char);}cursor is guaranteed to have enough room to contain all the data copied. And data only contains a '\0' character at termination.I want to update this function to use strcpy, and to remove some of the ugly. Here's what I have:void PackData(char*& cursor, const std::string& data) { const int size = data.size() + 1; std::copy_n(cursor, sizeof(int), reinterpret_cast<char*>(&size)); cursor += sizeof(int); strcpy(cursor, data.c_str()); cursor += size;}My code works fine, but I wanted to ask if anyone sees any misbehavior that I may have missed?
strncpy To strcpy Equivalence
c++;strings;casting
The new version of your function does not have any actual mistakes I could spot. It is good that you have replaced the assignment to *(reinterpret_cast<int*>(cursor)) with a call to std::copy_n because the old version (presumably) violated strict aliasing rules and therefore invoked undefined behavior. (Also think about alignment.) It also seemed wrong in your old function that you casted data.length() + 1 to short instead of int. I have a few soft remarks, though.
Avoid output parameters
Taking a pointer by reference and then modifying it does not appear pretty to me. Instead, I'd take the pointer by value and return the number of bytes by which it was advanced.
std::size_t PackData(char * cursor, const std::string& data);
It looks better and allows for more flexible use.
Choose the appropriate library function
The string manipulation functions from the C standard library all operate on NUL-terminated strings. That is, std::strcpy has to look at every byte of the string to determine whether it is the end of the string. You've mentioned that your string does not contain NUL bytes today. But what if it does in the future? Will you remember to update the function? And even if you are sure that NUL bytes will never be an issue, it is still less efficient than it could be. Since the std::string already knows its own length, you have a couple of options available.
std::memcpy if you like doing things the C way
std::copy if you like doing things the STL way
std::string::copy if you like doing things the OO way
Note that none of these functions will NUL-terminate the copied string, so you'll have to add the NUL byte yourself. Not hard to do.
cursor[size] = '\0';
Using std::memcpy could also help you with storing the length. Since its arguments are void pointers, there is no need to cast anything.
std::memcpy(cursor, &size, sizeof(size));
looks cleaner to me than
std::copy_n(cursor, sizeof(int), reinterpret_cast<char*>(&size));
Don't repeat yourself
Your code mentions the type of size three times: once in the declaration, which is fine, and then two more times in sizeof expressions. By using sizeof(size) instead of sizeof(int), you can eliminate this redundancy. If you ever change the data type, you only need to do so in one place, and you can't get it inconsistent.
Verify your assumptions
The cast
const int size = data.size() + 1;
looks rather safe. But are you sure? Integer overflow is a nasty source of bugs and a potential entry door for crackers. I would recommend two things.
First, make the fact that you're narrowing explicit by spelling it out.
const auto size = static_cast<int>(data.size() + 1);
Note that I can use auto now in order to follow the advice from the previous section.
Second, add defensive code (before the cast) to verify your assumptions.
assert(INT_MAX > data.size());
This is safe if your function is only called with trusted input. If attackers are a concern, assert is the wrong tool because it may be compiled out, and you don't want your debug builds to be safe only to open security holes in your release builds. Therefore, an explicit check like
if (INT_MAX <= data.size()) throw std::invalid_argument {data too large};
would be preferred in this case. This code is safe as long as sizeof(int) <= sizeof(std::string::size_type), which I believe is true on almost any implementation. However, it might still trigger a compiler warning about comparing a signed and an unsigned integer. Actually, size could be unsigned anyway.
Alternatively, you could use a helper function checked_cast<int>(data.size() + 1).
I believe that the argument cannot overflow because a std::string always has to account for the possibility to NUL terminate its buffer.Watch out for data leaksI don't know about the environment in which your function is called but is the buffer that cursor points to aligned? If so, watch out for unused padding bytes after the end of the string and the next thing that is written into the buffer. They might leak information you'd rather not have anybody see.
_unix.329526
I have a server with about 6TB of media files on a single 8TB WD HDD. Before I ask my question, I should probably provide some background. These files were on BTRFS for a few weeks, but after an unrelated hardware issue and subsequent OS rebuild I accidentally trashed the disk and had to restore the files from a backup, so I decided to use that as an opportunity to try out ZFS instead. The main reason I want to use ZFS is its ability to maintain data integrity. Before I moved to BTRFS (and now ZFS) I had these files on ext4, and after a drive developed a bitrot issue, a bunch of files got silently corrupted. So, after reloading the data onto ZFS, things were OK for a few weeks, until this morning I noticed that the disk was being flogged relentlessly. After a bit of poking around I found that it was being scrubbed by ZFS at the blindingly fast rate of 586K/s. At that rate it will never complete! Now, part of this process is me getting more familiar with ZFS, so if I am misunderstanding something here please let me know, but I believe that the scrub is needed for data integrity purposes because the whole dataset is stored on a single physical disk. If this is correct, will the flogging problem be solved if I buy more disks and use some form of raidz? If so, what would be the best way to solve this problem?
1) Buy 2 x 4TB disks and use non-redundant striping? (cheapest)
2) Buy 3 x 4TB disks and use a redundant stripe? (more expensive)
3) Buy a second 8TB disk and mirror it? (most expensive)
Bear in mind I don't really require the redundancy of options 2 & 3 (availability); I am more interested in maintaining the data (integrity) without the disk constantly flogging doing scrubs.
System details:
intel i3 6100T
16GB RAM
8TB WD Red
Ubuntu 16.04 (on a separate SSD)
zfs compression and dedup are turned off (they were turned on at first but I have since turned them off)
Thanks for reading
Large media storage & ZFS or BTRFS
linux;ubuntu;zfs
null
_scicomp.21423
In FEM classes, it's usually taken for granted that the stiffness matrix is positive definite, but I just can't understand why. Could anyone give some explanation?

For instance, we can consider the Poisson problem:
$$ -\nabla^2 u = f,$$
whose stiffness matrix is:
$$K_{ij} = \int_\Omega\nabla\varphi_i\cdot\nabla\varphi_j\, d\Omega,$$
which is symmetric and positive definite. Symmetry is an obvious property, but the positive definiteness is not so explicit to me.
In FEM, why is the stiffness matrix positive definite?
finite element;matrix;stiffness
The property follows from the property of the corresponding (weak form of the) partial differential equation; this is one of the advantages of finite element methods compared to, e.g., finite difference methods.

To see that, first recall that the finite element method starts from the weak form of the Poisson equation (I'm assuming Dirichlet boundary conditions here): Find $u\in H^1_0(\Omega)$ such that
$$ a(u,v):= \int_\Omega \nabla u\cdot \nabla v \,dx = \int_\Omega fv\,dx \qquad\text{for all }v\in H^1_0(\Omega).$$
The important property here is that
$$ a(v,v) = \|\nabla v\|_{L^2}^2 \geq c \|v\|_{H^1}^2 \qquad\text{for all }v\in H^1_0(\Omega). \tag{1}$$
(This follows from Poincaré's inequality.)

Now the classical finite element approach is to replace the infinite-dimensional space $H^1_0(\Omega)$ by a finite-dimensional subspace $V_h\subset H^1_0(\Omega)$ and find $u_h\in V_h$ such that
$$ a(u_h,v_h):= \int_\Omega \nabla u_h\cdot \nabla v_h \,dx = \int_\Omega fv_h\,dx \qquad\text{for all }v_h\in V_h.\tag{2}$$
The important property here is that you are using the same $a$ and a subspace $V_h\subset H^1_0(\Omega)$ (a conforming discretization); that means that you still have
$$ a(v_h,v_h) \geq c \|v_h\|_{H^1}^2 >0 \qquad\text{for all }v_h\in V_h,\ v_h\neq 0. \tag{3}$$

Now for the last step: To transform the variational form into a system of linear equations, you pick a basis $\{\varphi_1,\dots,\varphi_N\}$ of $V_h$, write $u_h =\sum_{i=1}^N u_i\varphi_i$, and insert $v_h=\varphi_j$, $1\leq j\leq N$, into $(2)$. The stiffness matrix $K$ then has the entries $K_{ij}=a(\varphi_i,\varphi_j)$ (which coincides with what you wrote). Now take an arbitrary nonzero vector $\vec v=(v_1,\dots,v_N)^T\in \mathbb{R}^N$ and set $v_h:=\sum_{i=1}^Nv_i \varphi_i\in V_h$. Then we have by $(3)$ and the bilinearity of $a$ (i.e., you can move scalars and sums into both arguments)
$$ \vec v^T K \vec v = \sum_{i=1}^N\sum_{j=1}^N v_iK_{ij} v_j =\sum_{i=1}^N\sum_{j=1}^N a(v_i\varphi_i,v_j\varphi_j) = a(v_h,v_h) >0.$$
Since $\vec v$ was an arbitrary nonzero vector, this implies that $K$ is positive definite.

TL;DR: The stiffness matrix is positive definite because it comes from a conforming discretization of a (self-adjoint) elliptic partial differential equation.
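To make this concrete, here is the smallest standard example (my addition, a textbook computation, not part of the argument above): for the 1D Poisson problem on $(0,1)$ with piecewise linear hat functions on a uniform mesh of width $h$, the stiffness matrix is
$$ K = \frac{1}{h}\begin{pmatrix} 2 & -1 & & \\ -1 & 2 & \ddots & \\ & \ddots & \ddots & -1 \\ & & -1 & 2 \end{pmatrix}, $$
and for any $\vec v$ (with the boundary convention $v_0 = v_{N+1} = 0$)
$$ \vec v^T K \vec v = \frac{1}{h}\sum_{i=0}^{N}(v_{i+1}-v_i)^2 \geq 0, $$
with equality only if all the differences vanish, i.e. only if $\vec v = 0$; this is exactly the discrete analogue of $a(v_h,v_h)=\|\nabla v_h\|_{L^2}^2$.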
_softwareengineering.46272
I'm working as a web developer and I want to be able to determine whether I'm efficient. Does this include how long it takes to accomplish tasks such as:

Server-side code for the site logic, in one language or several (php, asp, asp.net)
Client-side code like javascript with jquery for ajax, menus and other interactivity
Page layout, html, css (color, fonts (but I have no artistic sense!))
The needs of the site and how it will work (planning)

How can I judge how long it will take to complete a website? The site has a CMS for adding and editing news, products, and articles on the experience of the company. Also, they can edit team work, add recreational activities and a logo gallery with compressed psd download, and send messages to cpanel and to email. You are starting from scratch except for jQuery and PHPMailer.

How can I estimate how long the job will take, and how can I calculate the required time to finish any new project? I'm sorry for the many scattered questions, but this is my first attempt and I want to benefit from the great experience of those who have it.
How can I estimate how long a project will take?
web development;experience;requirements
There is a lot to be said for detailed requirements. Everyone hates creating requirements documents, but they are a very necessary evil. That being said, I've managed a lot of software projects over the years and I have a few methods that I've found make it much easier to estimate.

Personally I can't say enough about Microsoft Project. There are free tools with similar capabilities, but MS Project is by far and away my favorite. Regardless of what project management tool you choose, these methodologies should still apply.

Create a list of high-level tasks (CMS, site layout, custom coding, etc).
Begin to add sub tasks and groups of sub, sub, sub tasks from the top level.

Ultimately what you're looking for here is to understand everything that's involved. You won't get everything; you'll inevitably miss something, but that's not the point of the exercise. As you go through listing every task that needs to be done (put down things like "Research X", "Test X", etc) you'll discover tasks you never thought about. Think of everything that has to be done, from planning to building to testing to migrating to the customer. Once you have all the tasks down you can start to estimate the time necessary for each item. Your times are an educated guess; make sure you pad them with 20-40% (or more) more time than you think it will take.

The project management tool you use should have a concept of "Predecessors" or similar. This will allow you to link the tasks and indicate which tasks require other tasks to be completed first. Now that you have tasks, time estimates and predecessors, your project plan can start to estimate a timeline for you.

Project management essentially has two primary concepts: either A, the project deadline should dictate the timeline, or B, the project tasks should dictate the timeline. I am VERY much in the B camp. Many MBA types and bean counters will try to tell you when the project is "Due". They will also look at your plan and say "if we put 5 developers on task X it will get done in 1/5 the time". These theories are flatly unusable in the software development world. While there are some cases where a similar concept can be employed, it's generally a recipe for disaster. Imagine 5 people trying to modify the same file simultaneously; they will walk all over each other, and even the most advanced source code management tools will fall far short.

OK, so you have an estimate now. Yes it's rough, no it's not complete, and yes it will change (go back and add more time padding now). You're probably also looking at the end date and thinking to yourself, "the client / boss is going to go nuts when they see how long it will take". This is where you pause and take a deep breath. Not only have you thought thoroughly through what this project will take, but you now have documented detail about WHY it will take this long. If they want to dispute time, they have to go task by task to cut out time. I've found in 95% of cases they won't have any interest in doing this. You will also (in their minds) clearly understand what needs to be done, and be seen as an expert in doing it, since you have a detailed plan showing what it will take.

Notes: Make sure you put in tasks with estimates in hours where you can. It's hard to dispute that something will take 8 or 10 hours; if you put 1 day they start trying to negotiate. There will be tasks that take weeks and months; just put them as such and be prepared to explain why. If you can, break those tasks into smaller sub tasks in hours / days.

Hope that helps!

Daniel.....
_softwareengineering.131983
I've been asked to evaluate what appears to be a substantial legacy codebase, as a precursor to taking a contract maintaining that codebase. This isn't the first time I've been in this situation. In the present instance, the code is for a reasonably high-profile and fairly high-load multiplayer gaming site, supporting at least several thousand players online at once. As many such sites are, this one is a mix of front- and back-end technologies.

The site structure as seen from the inside out is a mess. There are folders suffixed _OLD and _DELETE lying all over the place. Many of the folders appear to serve no purpose, or have very cryptic names. There could be any number of old, unused scripts lying around even in legitimate-looking folders. Not only that, but there are undoubtedly many defunct code sections even in otherwise-operational scripts (a far less pressing concern).

This is a handover from the incumbent maintainers, back to the original developers/maintainers of the site. As is understandably typical in these sorts of scenarios, the incumbent wants nothing to do with the handover other than what is contractually and legally required of them to push it off to the newly-elected maintainer. So extracting information on the existing site structure out of the incumbent is simply out of the question.

The only approach that comes to mind to get into the codebase is to start at the site root and slowly but surely navigate through linked scripts... and there are likely hundreds in use, and hundreds more that are not. Given that a substantial portion of the site is in Flash, this is even less straightforward since, particularly in older Flash applications, links to other scripts may be embedded in binaries (.FLAs) rather than in text files (.AS/ActionScript).

So I am wondering if anyone has better suggestions as to how to approach evaluating the codebase as a whole for maintainability. It would be wonderful if there were some way to look at a graph of access frequency to files on the webserver's OS (to which I have access), as this might offer some insight into which files are most critical, even though it wouldn't be able to eliminate those files that are never used (since some files could be used just once a year).
In a legacy codebase, how do I quickly find out what is being used and what isn't?
freelancing;tools;websites;source code;flash
Since what you're being asked to do is provide input for your client to write an appropriate proposal to the other client (owner-of-the-nightmare-code) for any work on that code, I'm going to go out on a limb and say that you're not going to be doing any thorough testing or refactoring or anything along those lines at this point. You probably have a very short time to get a rough estimate. My answer is based on my experience in the same situation, and so if my interpretation is incorrect, just disregard everything that follows.

Use a spidering tool to get a sense of what pages are there, and what is inbound. Even a basic linkchecker tool -- not a "specific spider for auditing purposes" tool -- will be useful in this regard.

Make a basic audit/inventory spreadsheet. This could be as simple as a list of files and their last-modified time, organized by directory. This will help you get a sense of scope, and when you get to directories like _OLD and _DELETE you can make a big note that a) your evaluation is based on stuff not in those directories, and b) the presence of those directories and the potential for cruft/hidden nightmares attests to deeper issues that should be accounted for in your client's bid, in some way. You don't have to spend a gazillion years enumerating the possible issues in _OLD or _DELETE; the info will feed into the eventual bid.

Given you are reviewing what sounds like an entirely web-based app, even standard log analyzer tools are going to be your friend. You will be able to add to the spreadsheet some sense of "this is in the top 10 of accessed scripts" or some such. Even if the scripts are embedded in Flash files and therefore not spiderable, there's a high probability they are accessed via POST or GET, and will show up in the server logs (see the sketch after this answer's list of steps). If you know you have 10 highly accessed scripts, not 100 (or vice versa), this will give you a good idea of how maintenance work will likely go.

Even in a complicated site, what I outlined above is something you could do in a day or a day and a half. Since the answer you're going to give to your client is something like "this is going to be a tremendous pain in the butt, and here are some reasons why you'll just be putting lipstick on a pig, so you should bid accordingly" or "any reasonable person would bid not to maintain but to start over, so you should bid accordingly" or even "this isn't that bad, but it will be a consistent stream of work over any given timeframe, so bid accordingly", the point is that they're going to be making the bid, and thus you do not need to be as precise as you would be if you were being hired directly to do a full content and architecture audit.
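For the log analyzer step, even a quick one-liner over the access logs will populate that "top 10" column. A sketch, assuming an Apache combined-format log (field 7 is the request path; adjust the path and field number for your server's log format):

awk '{ print $7 }' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20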
_webapps.101138
My public pastes are also not being crawled by Google. I actually want these pastes to be easily found. They're not code, and they're not personal information like emails or links or anything like that. The pastes are still there, but they act like unlisted pastes (you have to have the link). When I make the posts they do show up as new public pastes in the ticker on Pastebin's site, but they never seem to get indexed. Why?
Public pastes not showing up in searches on Pastebin
search;pastebin
null
_softwareengineering.196898
I'm not so hot on the maths for this, but from what I understand...

A graph $g$ exists with $v$ vertices and a set of edges: $g = (V, E)$. A spanning tree for this is an acyclic subgraph in which all the vertices are present and the edges are a subset of the graph's edges, with the condition that every vertex is still connected.

Apparently the MST should have $n-1$ edges, where $n$ is the number of vertices. How can this be proven?

Sources:
http://youtu.be/zFbq8vOZ_0k?t=25m1s
http://www.gtkesh.com/minimum-spanning-tree/
How can you prove an acyclic graph has n-1 edges?
algorithms;graph;trees
Proof by induction:

Every acyclic graph can be represented as a tree, provided all the nodes are connected. So let's think about trees. You've got one root node. Let's look at the simplest case, in which the tree has only one branch, so it's a simple linked list.

If there are two nodes, there's one edge between them. Add one node to the end of the linked list, and there are three nodes and two edges, and so on.

Now if we take the linked list and add another node to one of the nodes in the middle, we have a true tree. And again, we're adding one node along with one edge. Add another one, and it's the same.

No matter how many nodes you add, or where you add them, as long as it remains an acyclic, fully connected tree, there will always be N-1 edges for N nodes.
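Stated a little more formally (my write-up of the standard argument, not part of the answer above):

Claim: every tree on $n \geq 1$ vertices has exactly $n-1$ edges.

Base case: a tree with $n = 1$ vertex has $0 = n - 1$ edges.

Inductive step: assume every tree on $n$ vertices has $n-1$ edges, and let $T$ be a tree on $n+1$ vertices. $T$ has a leaf $v$ (the endpoint of any maximal path must have degree 1, since otherwise the path could be extended or a cycle would form). Removing $v$ and its single incident edge leaves a tree on $n$ vertices, which by hypothesis has $n-1$ edges; restoring $v$ adds back one vertex and one edge, so $T$ has $n$ edges. $\blacksquare$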
_cstheory.5151
Consider the following online problem:

For $\sigma$ and $k$ fixed, given a string of symbols from alphabet $[1..\sigma]$, presented one by one, guess a set $S$ of $k$ symbols such that the next symbol belongs to $S$, in space independent of the length of the past string.

This problem is similar to online cache management in the sense that failing the guess is a miss, while guessing correctly is a catch. It is much more general than cache management in the sense that symbols can be added to (the cache) $S$ without having generated a miss, just based on correlations previously learned (e.g. $b$ always follows $a$). It presents the same problem of analysis (competitive analysis and the like) as other online problems, in the sense that the worst case is intractable while some good performance can be expected on practical instances.

It could have applications to caching with pre-fetching. I thought of it while thinking about the dream specs of a diary application on a smart phone (i.e. reduced screen and costly typing), which, given the past submissions of the user (went to bed, woke up, ate eggs, took the bus, etc...), must suggest on the screen $k$ activities to be logged next, a menu for others and a field to enter a new one (using the menu is a miss; writing a new one is unavoidable for any solution). In this case one should consider the space taken by the algorithm: remembering the whole sequence of previous queries is not an option, only a finite, lossy (ditching outliers) summary of it. One could give additional information to the guesser, such as the time of day, but I do not know how to fit it nicely into the theoretical model.

Has such a problem or a variant already been studied? Under which name? Using which model (Dorrigiv and Lopez-Ortiz's cooperative analysis comes to mind)?
Has this online problem been studied before ?
ds.algorithms;reference request;lg.learning;terminology;online algorithms
While I'm still not entirely sure what you're looking for, you might try reading Avrim Blum's talk on Online Learning and Prediction, and that might help you focus on areas of interest.
_codereview.114551
I'm trying to build a game where a client gets to play against a server in a game of Tic Tac Toe. I've built the game and followed a somewhat structured design of how the game is supposed to happen, with the client and the server exchanging messages, but they seem to lose synchronisation at the beginning of the game. The server sends a "GOO" message, which is supposed to start the game, but it seems the client never receives it. The server doesn't even care about that and just goes on, reaching a point where both ends expect a message in return, but if one of them sends a message, the run fails because of the error detection I've put in place.

Here are both codes; I would be really glad if you could help me out, as I'm really new to all the socket programming stuff.

Here is the server side:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/wait.h>
#include <signal.h>

#define MYPORT 5555 // The port users connect on
#define BACKLOG 10  // The number of accepted connections

int sockfd, new_fd; // listen on sock_fd, new connection on new_fd
struct sockaddr_in my_addr;    // my address information
struct sockaddr_in their_addr; // connector's address information
unsigned int sin_size;

//-------------------------------- The Server ----------------------------------

int setup_listener()
{
    printf("\nsetup_listener\n");

    // Socket we will listen on.
    sockfd = socket(PF_INET, SOCK_STREAM, 0);
    if (sockfd < 0) {
        perror("Erreur a l'ouverture du socket 'listener'.");
        exit(EXIT_FAILURE);
    }

    // Zero out the server struct's memory.
    memset(&my_addr, 0, sizeof(my_addr));

    // Set up the server info
    my_addr.sin_family = AF_INET;
    my_addr.sin_addr.s_addr = INADDR_ANY;
    my_addr.sin_port = htons(MYPORT);

    // Bind the server info to the socket.
    if (bind(sockfd, (struct sockaddr *) &my_addr, sizeof(my_addr)) < 0) {
        perror("Erreur au binding du socket 'listener'.");
        exit(EXIT_FAILURE);
    }
}

void get_client()
{
    printf("\nget_client\n");

    if (listen(sockfd, BACKLOG) < 0) {
        perror("Erreur lors du listen.");
        exit(EXIT_FAILURE);
    }

    sin_size = sizeof(struct sockaddr_in);

    // Zero out the client struct's memory.
    memset(&their_addr, 0, sin_size);

    new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
    if (new_fd < 0) {
        perror("Erreur lors de l'accept.");
        exit(EXIT_FAILURE);
    }

    printf("Serveur: connection recue du client %s\n", inet_ntoa(their_addr.sin_addr));
}

//---------------------------------- The Game ------------------------------------

void write_client_int(int msg)
{
    printf("\nwrite_client_int\n");
    int n = write(new_fd, &msg, sizeof(int));
    if (n < 0) {
        perror("Erreur lors de l'ecriture d'entier.");
        exit(EXIT_FAILURE);
    }
}

void write_client_msg(char * msg)
{
    printf("\nwrite_client_msg\n");
    int n = write(new_fd, msg, strlen(msg));
    if (n < 0) {
        perror("Erreur lors de la transmission de message");
        exit(EXIT_FAILURE);
    }
}

int recv_int()
{
    printf("\nrecv_int\n");
    int msg = 0;
    int n = read(new_fd, &msg, sizeof(int));
    if (n < 0 || n != sizeof(int)) {
        perror("Erreur lors de la reception de message(int).");
        exit(EXIT_FAILURE);
    }
    return msg;
}

int get_player_move()
{
    printf("\nget_player_move\n");
    write_client_msg("TRC"); // Ask what the player wants to do.
    return recv_int();
}

int check_move(char board[3][3], int move)
{
    printf("\ncheck_move\n");
    if ((move == 9) || (board[move/3][move%3] == ' '))
        return 1;
    else
        return 0;
}

void update_board(char board[3][3], int move)
{
    printf("\nupdate_board\n");
    board[move/3][move%3] = 'X';
}

void send_update(int move)
{
    printf("\nsend_update\n");
    write_client_msg("MAJ"); // Send the result after a piece is placed.
    write_client_int(move);
}

int check_board(char board[3][3], int last_move)
{
    printf("\ncheck_board\n");
    int row = last_move/3;
    int col = last_move%3;

    if ( board[row][0] == board[row][1] && board[row][1] == board[row][2] ) {
        // Victory on a row.
        if (board[row][0] == 'X')
            return 1;
        else if (board[row][0] == 'O')
            return 2;
    }
    else if ( board[0][col] == board[1][col] && board[1][col] == board[2][col] ) {
        // Victory on a column.
        if (board[0][col] == 'X')
            return 1;
        else if (board[0][col] == 'O')
            return 2;
    }
    else if ( (last_move == 0 || last_move == 4 || last_move == 8) &&
              (board[1][1] == board[0][0] && board[1][1] == board[2][2]) ) {
        // Victory on the descending diagonal.
        if (board[1][1] == 'X')
            return 1;
        else if (board[1][1] == 'O')
            return 2;
    }
    else if ( (last_move == 2 || last_move == 4 || last_move == 6) &&
              (board[1][1] == board[0][2] && board[1][1] == board[2][0]) ) {
        // Victory on the ascending diagonal.
        if (board[1][1] == 'X')
            return 1;
        else if (board[1][1] == 'O')
            return 2;
    }

    return 0;
}

void server_play(char board[3][3])
{
    printf("\nserver_play\n");
    int random_move;
    do {
        random_move = rand()%8;
        if (check_move(board, random_move) == 1) {
            update_board(board, random_move);
        }
    } while (!check_move(board, random_move));
}

void Play_game()
{
    printf("\nPlay_game\n");
    printf("Writing message to client GOO. 0");

    char board[3][3] = { {' ', ' ', ' '}, {' ', ' ', ' '}, {' ', ' ', ' '} };
    int game_over = 0;
    int turns = 0;

    printf("Writing message to client GOO. 1");
    write_client_msg("GOO");
    printf("Writing message to client GOO. 2");

    while (game_over == 0) {
        int valid_move = 0;
        int move = 0;
        while (!valid_move) {
            move = get_player_move();
            if (move < 0) {
                perror("Erreur lors de la reception du placement indique.");
                exit(EXIT_FAILURE);
            }
            valid_move = check_move(board, move);
        }

        update_board(board, move);
        game_over = check_board(board, move);
        send_update(move);

        if (game_over == 0)
            server_play(board);
        send_update(move);
        game_over = check_board(board, move);

        if (game_over == 1) {
            write_client_msg("WIN");
        }
        else if (game_over == 2) {
            write_client_msg("LOS");
        }
        else if (turns == 8) {
            write_client_msg("EGL");
            game_over = 3;
        }

        ++turns;
    }

    close(sockfd);
}

int main(int argc, char** argv) {
    printf("\nmain\n");
    setup_listener();
    while (1) {
        get_client(); // REQUESTS INT FOR SOME REASON
        Play_game();
    }
    return (EXIT_SUCCESS);
}

And here is the client side:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <netdb.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define MYPORT 5555;
#define MY_BUFFER_SIZE 10;

int sockfd;
struct sockaddr_in their_addr;
struct hostent *server;

// Reads a message from the server socket.
void recv_msg(char * msg)
{
    printf("\nrecv_msg\n");
    // All messages are 3 bytes.
    memset(msg, 0, 4);
    int n = read(sockfd, msg, 3);
    if (n < 0 || n != 3) {
        perror("Erreur lors de la lecture de message sur le socket serv.");
        exit(EXIT_FAILURE);
    }
}

// Reads an integer from the server socket.
int recv_int()
{
    printf("\nrecv_int\n");
    int msg = 0;
    int n = read(sockfd, &msg, sizeof(int));
    if (n < 0 || n != sizeof(int)) {
        perror("Erreur lors de la lecture d'entier sur le socket serv.");
        exit(EXIT_FAILURE);
    }
    return msg;
}

// Writes an integer to the server socket.
void write_server_int(int msg)
{
    printf("\nwrite_server_int\n");
    int n = write(sockfd, &msg, sizeof(int));
    if (n < 0) {
        perror("Erreur lors de l'ecriture d'entier sur le socket serv.");
        exit(EXIT_FAILURE);
    }
}

int connect_to_server(char * hostname)
{
    printf("\nconnect_to_server\n");

    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd < 0) {
        perror("Erreur lors de l'ouverture du socket du serveur.");
        exit(EXIT_FAILURE);
    }

    server = gethostbyname(hostname);
    if (server == NULL) {
        perror("Erreur lors de la recherche du serveur (hostname)");
        exit(EXIT_FAILURE);
    }

    memset(&their_addr, 0, sizeof(their_addr));
    their_addr.sin_family = AF_INET;
    their_addr.sin_addr = *((struct in_addr*)server->h_addr);
    //PORT
    their_addr.sin_port = htons(5555);

    if (connect(sockfd, (struct sockaddr *) &their_addr, sizeof(their_addr)) < 0) {
        perror("Erreur lors de la connection au serveur.");
        exit(EXIT_FAILURE);
    }
}

void draw_board(char board[3][3])
{
    printf("\ndraw_board\n");
    printf(" %c | %c | %c \n", board[0][0], board[0][1], board[0][2]);
    printf("___________\n");
    printf(" %c | %c | %c \n", board[1][0], board[1][1], board[1][2]);
    printf("___________\n");
    printf(" %c | %c | %c \n", board[2][0], board[2][1], board[2][2]);
}

void take_turn()
{
    printf("\ntake_turn\n");
    //BUFFER_SIZE
    char buffer[10];

    while (1) {
        printf("Choisissez un nombre de 0 a 8 pour jouer. Sur 0 commence la 1ere ligne, sur 3 la 2eme, sur 6 la 3eme.");
        fgets(buffer, 10, stdin); // Read the user's input
        // - '0' = -48 to cancel out the first 48 chars of the ASCII table
        // Because stdin takes the input as char -> ASCII and move is an int
        int move = buffer[0] - '0';
        if (move <= 8 && move >= 0) {
            write_server_int(move);
            break;
        }
        else
            printf("\n");
        printf("Mauvais input. Veuillez suivre les indications.");
    }
}

void get_update(char board[3][3])
{
    printf("get_update");
    // Receives the server's move.
    int move = recv_int(sockfd);
    // Update the board.
    board[move/3][move%3] = 'O';
}

int main(int argc, char** argv) {
    printf("main");

    if (argc != 2) {
        perror("Veuillez specifier le nom de la machine distante en argument.");
        exit(EXIT_FAILURE);
    }

    sockfd = connect_to_server(argv[1]);
    printf("Waiting for server message to start.");

    char msg[4];
    char board[3][3] = { {' ', ' ', ' '}, {' ', ' ', ' '}, {' ', ' ', ' '} };

    draw_board(board);

    // THE ERROR HAPPENS HERE PLZ HELP
    printf("Waiting for server message to start.");
    do {
        recv_msg(msg);
    } while (strcmp(msg, "GOO"));
    printf(msg);

    draw_board(board);

    while(1) {
        recv_msg(msg);
        if (!strcmp(msg, "TRC")) {
            printf("A vous de jouer: ");
            take_turn(sockfd);
        }
        else if (!strcmp(msg, "MAJ")) {
            get_update(board);
            draw_board(board);
        }
        else if (!strcmp(msg, "WIN")) { /* Winner. */
            printf("Victoire! Vous avez battu la machine.");
            break;
        }
        else if (!strcmp(msg, "LOS")) { /* Loser. */
            printf("Vict... presque! On apprend beaucoup quand on perd <3");
            break;
        }
        else if (!strcmp(msg, "EGL")) { /* Game is a draw. */
            printf("Egalite. Bah au moins vous avez pas perdu! Excellent.");
            break;
        }
        printf("Revenez un de ces jours! \nRegardez comme je m'amuse...");
        close(sockfd);
        return 0;
    }

    return (EXIT_SUCCESS);
}

And here are both outputs when I try to run them:

Server:

main
setup_listener
get_client
Serveur: connection recue du client 192.168.209.1
Play_game
Writing message to client GOO. 0
Writing message to client GOO. 1
write_client_msg
Writing message to client GOO. 2
get_player_move
write_client_msg
recv_int

RUN TERMINATED (exit value 1, total time: 2m 3s)

Client:

E:\University\BA2\Algorithmique\OXO_Client>gcc client.c -o client
E:\University\BA2\Algorithmique\OXO_Client>client Reinstall
main
connect_to_server
Waiting for server message to start.
draw_board
   |   |
___________
   |   |
___________
   |   |
Waiting for server message to start.
recv_msg
Erreur lors de la lecture de message sur le socket serv.: No error

E:\University\BA2\Algorithmique\OXO_Client>

The client is run through the terminal and receives the computer's name as an argument. As you can see, the server just sends its "GOO" message and doesn't care whether the client receives it or what it does with it.
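One thing I suspect, but haven't confirmed: TCP is a byte stream, so two 3-byte messages written back-to-back ("GOO" then "TRC") can arrive in a single segment, and a single read() may also legitimately return fewer bytes than requested. A minimal read-exactly-N helper, as a sketch of the usual framing fix:

#include <unistd.h>

/* Reads exactly len bytes from fd, looping over short reads.
   Returns 0 on success, -1 on error or premature EOF. */
static int read_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0)     /* error, or EOF before len bytes arrived */
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

With fixed-size framing like this on both sides (and matching write loops), the "n != 3" check would stop misfiring on legitimate short reads.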
Server / client desynchronisation of messages
c;socket;server;client
null
_unix.178400
I am running Red Hat 7 in a VM using virt-manager (qemu), and it seems that it uses the Cirrus driver by default:

00:02.0 VGA compatible controller [0300]: Cirrus Logic GD 5446 [1013:00b8] (prog-if 00 [VGA controller])
    Subsystem: XenSource, Inc. Device [5853:0001]
    Physical Slot: 2
    Flags: bus master, fast devsel, latency 0
    Memory at f0000000 (32-bit, prefetchable) [size=32M]
    Memory at f3000000 (32-bit, non-prefetchable) [size=4K]
    Expansion ROM at <unassigned> [disabled]

I would like to know how I can replace it with VESA. Any suggestions?
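(The knobs I've found so far, though I haven't verified they solve this: QEMU's -vga switch, or the libvirt video model, where "std"/"vga" selects the Bochs VBE standard VGA, which is VESA-compatible.)

# Plain QEMU: use standard (VESA-compatible) VGA instead of Cirrus
qemu-system-x86_64 -vga std ...

# virt-manager / libvirt: edit the guest XML (virsh edit <domain>)
# and change the video model:
#   <video>
#     <model type='vga'/>
#   </video>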
How to enable Vesa driver in qemu
xorg;drivers;qemu
null
_unix.218552
I'm a new user of CentOS, and I want to create a new user on my system and let it access only one directory. First I created a group named test. Then:

useradd -g test -d /home/disk/disk1/testDir testuser

disk1 is a real disk which is mounted on the disk1 folder. Now I can see the testDir folder, and its ll output is:

drwx------ 2 testuser test 4096 Jul 27 14:48 testDir

After I set the password and log in as testuser via putty, it says:

Could not chdir to home directory /home/disk/disk1/testDir: Permission denied

The folder exists and is owned by testuser. I do not understand why I got "permission denied".
Could not chdir to home directory when create and login a new user?
permissions
When you cannot access a directory on which you have the right permissions, the first thing you should check is your access rights on the parent directories:

ls -ld /home/disk

and:

ls -ld /home/disk/disk1

You need at least execute permission on each of those directories to access their children.
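A sketch of the usual fix, assuming the parent directories turn out to lack the execute bit for other users (granting access to just the group via chgrp/chmod g+x would be a tighter alternative):

# Let everyone traverse (but not list) the parent directories
chmod o+x /home/disk /home/disk/disk1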
_unix.236746
I use gpg-agent, sometimes with no X display or over ssh, so my config file contains:

pinentry-program /usr/bin/pinentry-curses

This way, the gpg passphrase is requested in curses. That said, in some graphical scripts I wish to use the GTK pinentry instead. How can I call gpg and temporarily use a different pinentry?
Change pinentry program temporarily with gpg-agent
gpg;gpg agent
null
_cogsci.7826
I've dealt with a few people having a narcissistic personality trait / disorder. For me it's interesting to see the defense pattern they adopt through rationalization to defend their self-image and to avoid changing their beliefs (confirmation bias). The pattern seems to be almost identical across the various subjects.

My question is: could it be helpful for these subjects to read a book which tries to change their beliefs? At the least, could it be used to speed up the therapy?

The pattern could be described as follows:

The subject feels ashamed, weak or bad in some area. He develops the belief that he is superior to others in another area (strong, very rational).
The subject feels a strong need to defend this superiority. Any attempt to change his belief is perceived as an aggression.
The strategy of defense includes: use of cognitive distortions to confirm his belief, discrediting the other idea, personal offence or provocation, getting annoyed and emotional, focusing on everything but his error / responsibility / distorted belief.
The subject usually raises the level of the argument in an emotional way. He appears completely stubborn or uses verbal strategies to obstruct the interlocutor. He appeals to any formal argumentative error.
The subject claims to be unique and uses typical excuses connected to relativism, like: "why should I change?". But actually he uses relativism only when it comes in useful to justify his own belief. Otherwise his beliefs appear very absolute and dichotomic.

Is it correct to suppose that millions of narcissists share the same defensive pattern? Has it already been studied?
Can reading [self-help] books aid in therapeutic treatment of the narcissistic rationalization pattern?
clinical psychology
null
_webmaster.47906
I'm considering purchasing an unmanaged VPS with a Ubuntu 12.04 server to host smaller sites that aren't very important yet. My plan would be to install webadmin and basically administrate the hosting end that way. My question is: What am I getting myself into? I'm fairly comfortable with Ubuntu. I know enough to 'sudo apt-get update && sudo apt-get upgrade.'What else would I have to worry about aside from the usual keeping my scripts (mainly WordPress) up to date?
Unmanaged Ubuntu Server - What would I be getting myself into?
vps;ubuntu
What else would I have to worry about

Everything, short of pretty much the data center being on fire. That's the "unmanaged" part:

OS/software update? You need to install it.
Bad performance? They installed Ubuntu; that doesn't mean they did any optimizing at all.
Apache/MySQL/etc. crashed and your site's unreachable? You need to restart it.
Security problem? You figure it out and fix it.
Data center burned down? They'll probably replace the server, but you better have your own backups of the data.

In reality, you need to read the host's terms for a final answer on this, but generally speaking, it's going to be as above. They give you a server with maybe completely default installations of a few things like Apache, but then you manage it from there.
_codereview.1771
I have been trying to wrap my head around MVVM for the last week or more and am still struggling a bit. I have watched Jason Dolinger's MVVM video and gone through Reed Copsey's lessons, and I still find myself wondering if I am doing this right... I found both sources very interesting, yet a bit different in their approach. If anyone has any other links I would be interested, as I would really like to learn this.

What is the best practice for the model to alert the viewmodel that something has happened? As you will see in the code below, I created a very simple clock application. I am using an event in my model but am not sure if this is the best way to handle this. The output of the program is as expected; however, I'm more interested in whether I'm actually using the pattern correctly. Any thoughts, comments, etc. would be appreciated.

My model

using System;
using System.Threading;

namespace Clock
{
    public class ClockModel
    {
        private const int TIMER_INTERVAL = 50;

        private DateTime _time;

        public event Action<DateTime> TimeArrived;

        public ClockModel()
        {
            Thread thread = new Thread(new ThreadStart(GenerateTimes));
            thread.IsBackground = true;
            thread.Priority = ThreadPriority.Normal;
            thread.Start();
        }

        public DateTime DateTime
        {
            get { return _time; }
            set
            {
                this._time = value;
                if (TimeArrived != null)
                {
                    TimeArrived(DateTime);
                }
            }
        }

        private void GenerateTimes()
        {
            while (true)
            {
                DateTime = DateTime.Now;
                Thread.Sleep(TIMER_INTERVAL);
            }
        }
    }
}

My View

<Window x:Class="Clock.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:ViewModels="clr-namespace:Clock"
        Title="MainWindow" Height="75" Width="375">
    <Window.DataContext>
        <ViewModels:ClockViewModel />
    </Window.DataContext>
    <StackPanel Background="Black">
        <TextBlock Text="{Binding Path=DateTime}"
                   Foreground="White" Background="Black"
                   FontSize="30" TextAlignment="Center" />
    </StackPanel>
</Window>

My View Model

using System;
using System.ComponentModel;

namespace Clock
{
    public class ClockViewModel : INotifyPropertyChanged
    {
        private DateTime _time;
        private ClockModel clock;

        public ClockViewModel()
        {
            clock = new ClockModel();
            clock.TimeArrived += new Action<DateTime>(clock_TimeArrived);
        }

        private void clock_TimeArrived(DateTime time)
        {
            DateTime = time;
        }

        public DateTime DateTime
        {
            get { return _time; }
            set
            {
                _time = value;
                this.RaisePropertyChanged("DateTime");
            }
        }

        /// <summary>
        /// Occurs when a property value changes.
        /// </summary>
        public event PropertyChangedEventHandler PropertyChanged;

        /// <summary>
        /// Raises the property changed event.
        /// </summary>
        /// <param name="property">Name of the property.</param>
        private void RaisePropertyChanged(string property)
        {
            if (PropertyChanged != null)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(property));
            }
        }
    }
}
Simple clock view model
c#;datetime;wpf;mvvm
It seems your implementation is correct. You also touch on a common discussion in MVVM:

In MVVM, should the ViewModel or the Model implement INotifyPropertyChanged?

One could argue you could let the model implement INotifyPropertyChanged. I'm not experienced enough with MVVM to answer this with a pro/contra argumentation. The main intent, however, is implemented, and the separation is there either way.
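For comparison, the alternative alluded to above would look roughly like this. This is a sketch, not a recommendation: the custom TimeArrived event is replaced by the standard interface, so the ViewModel can subscribe to the model the same way the View subscribes to the ViewModel.

using System;
using System.ComponentModel;

namespace Clock
{
    // Model variant that raises INotifyPropertyChanged instead of a custom event.
    public class ClockModel : INotifyPropertyChanged
    {
        private DateTime _time;

        public event PropertyChangedEventHandler PropertyChanged;

        public DateTime DateTime
        {
            get { return _time; }
            set
            {
                _time = value;
                var handler = PropertyChanged; // copy for thread safety
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs("DateTime"));
            }
        }
    }
}

The ViewModel would then subscribe via clock.PropertyChanged += ... and forward the change, or simply expose the model's property directly.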
_computergraphics.23
I have a point cloud that is being rendered to the screen. Each point has its position and color, as well as an ID.

I was asked to render the IDs for each point to a texture, so I created an FBO and attached two textures, one for color and one for depth. I created the necessary VAO and VBO for this off-screen rendering and uploaded, for each point, its position and ID. Once the rendering to the FBO is done, I read the pixels of the color texture with glReadPixels() to see what the values are, but they seem to be all cleared out, i.e., the value they have is the same as glClearColor().

Is there a way I can debug what is being rendered to the color texture of my FBO? Any tips you may provide are very welcome.
How can I debug what is being rendered to a Frame Buffer Object in OpenGL?
opengl;debugging
Generally, to see what is being rendered in the various steps of your pipeline, I'd suggest using a frame analysis tool. These usually provide you with a view of the content of each buffer for each API call, and this can help you in your situation.

A very good one is RenderDoc, which is both completely free and open source, and is also actively supported. Another one is Intel GPA which, according to its webpage, unfortunately supports only up to OGL 3.3 Core. Just for the sake of adding one more: I used to use gDEBugger, but it has been a long time since its last update. It has evolved into AMD CodeXL, which unfortunately I have never used.
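Short of a frame debugger, a quick in-code sanity check often catches the classic causes (an incomplete FBO, or reading back from the wrong framebuffer). A sketch, assuming a GL 3.x context with the color texture on attachment 0; fbo, width, height and pixels stand in for your own objects:

/* After setting up the FBO */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    fprintf(stderr, "FBO incomplete: 0x%x\n", status);
}

/* ... draw the point cloud ... */

/* Make sure glReadPixels reads the FBO's color attachment,
   not the default framebuffer */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);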
_unix.363695
Related questions on this site that unfortunately don't match my use case:

Compare two audio files
How to use sox or ffmpeg to detect silence intervals in a long audio file and replace them by zeros (aka suppress background noise)?

The DSP Stack Exchange site has nothing of value to offer: "Measuring how non-noisy a sound signal is" has meaningless answers.

Problem: I download files from the Internet (YouTube, Vimeo, etc.), or transcode stuff from my FM radio. I would like to be able to calculate (from the command line) the following:

(ideally) a pair of numbers: average Signal-to-Noise Ratio (SNR) and its variance for the recording as a whole
(not so ideally) a sequence of SNR values for e.g. each time slice of the recording (a second, a tenth, etc.)
(would work at a pinch) a spectrogram for each time slice of the recording

...using only GNU/FOSS command line tools - FFmpeg and anything I can get in source tarball form.

P.S. Can I do it with sox?
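(Not a full solution, but a starting point I've been poking at: ffmpeg's astats filter reports per-window RMS and peak levels, from which a crude level-based noise estimate could be derived. This assumes a build with the filter compiled in:)

# Whole-file statistics (RMS level, peak level, etc.)
ffmpeg -i input.wav -af astats -f null -

# Per-second statistics, printed as frame metadata
ffmpeg -i input.wav -af asetnsamples=44100,astats=metadata=1:reset=1 -f null -

sox's stat/stats effects print similar figures (sox input.wav -n stats).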
How do I calculate noise level of an audio file with ffmpeg?
audio;ffmpeg;sox
null