_cs.38397
I am trying to calculate all pure strategy Nash equilibria in an m×n game. This requires checking all pure strategy pairs (m·n pairs). Suppose player 1 has m strategies. The algorithm should start with the first pair (1,1) and compare (1,1) with (2,1), (3,1), ..., (m,1). The same goes for every strategy; e.g., (3,4) is compared with (1,4), (2,4), (4,4), ..., (m,4). In summary: is it possible to write a for loop that searches while excluding the current index i? Thank you.
Is it possible to do an excluding search with a for loop in Java?
search algorithms;game theory;java
First of all, if you want to check each combination of pure strategies, you will have two nested for loops:

    int a, b;
    for (b = 0; b < n; b++) {
        for (a = 0; a < m; a++) {
            compare(a, b);
        }
    }

Inside compare(a, b), strategy a is then compared to each alternative I could exchange a for, under the assumption that my opponent will use b:

    int compare(int a, int b) {
        for (int x = 0; x < m; x++) {
            if (x == a) continue; // This will exclude the check (a, b) vs. (a, b)
            System.out.println("Comparing (" + a + ", " + b + ") and (" + x + ", " + b + ")");
        }
        return 0;
    }

However, my knowledge of game theory is a little bit rusty. Aren't you trying to determine the best answer in pure strategies to each pure strategy of your opponent? In fact, this comparison would have a complexity of $\mathcal{O}(m^2\cdot n)$.

In order to determine the best answers with only a complexity of $\mathcal{O}(m\cdot n)$, you can gradually search for the maximum payoff while traversing each possible combination of strategies:

    int best_answers[n];
    int a, b;
    int max_payoff;
    for (b = 0; b < n; b++) {
        best_answers[b] = 0;
        max_payoff = 0; // If negative payoffs are possible, adjust this.
        for (a = 0; a < m; a++) {
            if (payoff[a][b] > max_payoff) {
                max_payoff = payoff[a][b];
                best_answers[b] = a;
            }
        }
    }

Here, I assume that int payoff[m][n] is a 2-dimensional array that encodes the payoff matrix. After running this algorithm, best_answers[b] contains the best answer strategy for strategy $0 \leq b < n$ of the opponent.
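For completeness, the best-answer search described in this answer can be packaged as a small self-contained program (a sketch: the class and method names and the 3×2 payoff matrix are made up for illustration; the maximum is seeded from row 0 so negative payoffs work without adjustment):

```java
public class BestAnswers {
    // For each opponent strategy b, find the row a maximizing payoff[a][b].
    static int[] bestAnswers(int[][] payoff) {
        int m = payoff.length;     // strategies of player 1
        int n = payoff[0].length;  // strategies of player 2
        int[] best = new int[n];
        for (int b = 0; b < n; b++) {
            int maxPayoff = payoff[0][b]; // seed from row 0: handles negative payoffs
            best[b] = 0;
            for (int a = 1; a < m; a++) {
                if (payoff[a][b] > maxPayoff) {
                    maxPayoff = payoff[a][b];
                    best[b] = a;
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] payoff = { {3, 0}, {1, 5}, {2, 2} }; // illustrative 3x2 matrix
        System.out.println(java.util.Arrays.toString(bestAnswers(payoff))); // prints [0, 1]
    }
}
```

To get the pure-strategy Nash equilibria, you would compute the analogous best answers for player 2's payoff matrix and keep the strategy pairs that are best answers for both players simultaneously.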
_webmaster.51818
Well, I have some crawlers that scrape news from different news agency websites and auto-publish it to my own website. The thing is, my pages get indexed in Google, but even if I search for the exact same words, they don't show up; only the source website does. I'd like to know what I can do to make Google show my pages in the SERP too. Should I link back to the source website? Or is it basically not useful to crawl another website and publish its content as your own?
News-like pages get indexed, but do not show up in SERP
seo;duplicate content;web crawlers
It is not useful to crawl another website and publish its content as your own. Google refers to that practice as scraping. Here is an article about it: "Google Penalty: Why You Should Not Copy Content", which is summarized:

    Many people believe that setting up a website loaded with copied content is an easy way to make lots of money via AdSense and advertising. They are mistaken.
    ...
    Google, the almighty search engine, heavily penalizes websites that scrape content. Other search engines like Bing, Yahoo, Baidu etc. also impose similar penalties on misbehaving websites. Such penalty will push your website down the search results and that will make it difficult (or almost impossible) for users to reach your website.

In your case, it appears that Google is crawling your site, but not indexing your content because it indexes the content elsewhere.
_softwareengineering.1007
Tester and blogger Lanette Creamer recently posted this question on Twitter:

    If you are a professional software developer who works with testers, think of the best testers you know. What traits do they have in common?

I thought it would make an excellent question for here. My thoughts are:

- They want to remove ambiguity from requirements even if it means asking awkward questions.
- They create new features by seeing the way software should work, rather than just how it's documented.
- They demonstrate honesty and integrity and encourage but not demand it from those around them. In other words, they model behavior.

What are the traits of the best testers you've worked with?
What traits do the best testers you've worked with have in common?
team;testers
Here are a few that I'd add:

- Smart - These people come across as rather bright or deep thinkers. Boundary cases seem to come quickly to them. They may ask the "What about...?" questions a lot.
- Attention to detail - Listing reproduction steps, stating the difference between expected and actual results, etc. Thorough in their work.
- Self-motivated - The better testers I know seem to drive themselves to be thorough and go, go, go! "Gets things done" would be another way to state this, to my mind.
- Analytical - Arguing over priority or severity with calm, rational arguments. Understanding which bugs are going to get fixed ASAP and which are too cosmetic, e.g. a bad color choice.
- Tenacity - They stick to their interpretation unless a project manager, business analyst, or someone with the power changes the requirements to overrule them. "Not a push-over" is another way to put this.
_codereview.36687
What can be improved in my code?

    using (var oracleConnection = new OracleConnection(ConnectionString))
    {
        using (var oracleCommand = oracleConnection.CreateCommand())
        {
            oracleConnection.Open();
            oracleCommand.CommandText = "SELECT * FROM table_sample Table_Sample WHERE table_sample.id > 1000";
            using (var reader = oracleCommand.ExecuteReader())
            {
                while (reader.Read())
                {
                    int id = (int) reader.GetValue(0);
                }
            }
        }
    }

The thing I see as most dangerous is the ugly string statement for the database query. I don't want to use Entity Framework for Oracle, because it is not as up to date as the one for MS SQL Server, and some Oracle features are not supported.
Connecting to Oracle using ODP.NET
c#;sql;oracle
You're using using blocks to dispose your disposables, which is excellent. However, these blocks increase the nesting of your code; since there's nothing between using (var oracleConnection = new OracleConnection(ConnectionString)) and using (var oracleCommand = oracleConnection.CreateCommand()), you could drop the curly braces and stack them, like this:

    using (var oracleConnection = new OracleConnection(ConnectionString))
    using (var oracleCommand = oracleConnection.CreateCommand())
    {
        ...
    }

Within that scope, you're reassigning the variable id at each row that gets read; ultimately the value of id will be that of the last row that was read. I doubt this is the intended behavior.

As for the string query, I agree it's dangerous - I prefer (by far) to use an object-relational mapper such as Entity Framework (which, as @svick has mentioned, has an Oracle provider), but I've never used it for anything other than SQL Server. I believe you could look into NHibernate as well, or shop around - something like ".NET ORM for Oracle" should find you some interesting links :)
_webapps.43358
When I'm about to work on Project Foo, I want to open all documents and spreadsheets related to Project Foo with one click of a link. Is it possible to set up such a link? (Does Google Drive have a concept of a saveable workspace?)
Is there a way to save and open workspaces in Google Drive?
google drive
    Does Google Drive have a concept of a saveable workspace?

No. (Not to say they wouldn't add something like that in the future.) Of course, if you are syncing Drive files with your hard drive, you could always use your OS to open multiple files using your file explorer (or equivalent).
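The OS-level approach can be sketched as a tiny script (assumptions: the project folder path is a placeholder, and the opener defaults to xdg-open on Linux; on macOS it would be open):

```shell
# Open every regular file in a synced project folder with the default app.
# The OPEN variable can be overridden, e.g. OPEN=open (macOS) or OPEN=echo
# for a dry run that only lists what would be opened.
open_project() {
    dir=$1
    for f in "$dir"/*; do
        [ -f "$f" ] && "${OPEN:-xdg-open}" "$f"
    done
    return 0
}
```

A desktop shortcut or alias pointing at something like `open_project "$HOME/Google Drive/Project Foo"` gives the one-click behaviour the asker wants, provided the files are synced locally.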
_unix.205276
I have a broken disk from which I need to copy a 60 GB file. From time to time the disk resets and I can't finish the copy. I would like to try copying partial slices and then putting them all together. How can I do this?
How can I partially copy a file from a broken disk?
hard disk;file copy
null
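The slice-and-reassemble approach the asker describes can be sketched with dd (a sketch: the paths and the 64 MiB default slice size are placeholder assumptions; on a genuinely failing disk, GNU ddrescue is usually a better fit, since it retries bad regions and keeps a map of what has been recovered):

```shell
# Copy SRC to DST in fixed-size slices; safe to re-run or resume at a slice.
copy_in_slices() {
    # usage: copy_in_slices SRC DST [START_SLICE]
    src=$1; dst=$2; start=${3:-0}
    bs=${SLICE_BS:-$((64 * 1024 * 1024))}   # slice size in bytes (64 MiB default)
    size=$(stat -c %s "$src")               # total file size in bytes
    slices=$(( (size + bs - 1) / bs ))      # number of slices, rounded up
    i=$start
    while [ "$i" -lt "$slices" ]; do
        # skip/seek position both files at slice i; conv=notrunc keeps
        # previously copied slices intact, so restarting is safe
        dd if="$src" of="$dst" bs="$bs" skip="$i" seek="$i" count=1 conv=notrunc 2>/dev/null || return 1
        i=$((i + 1))
    done
}
```

Something like `copy_in_slices /mnt/broken/bigfile ./bigfile.copy` copies slice by slice; after a disk reset, re-run it with the index of the first failed slice as the third argument. Once every slice has been copied, `cmp` against the source (if still readable) can verify the result.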
_webmaster.43664
We are strongly considering moving significant, relatively static portions of our existing Drupal site (yinyanghouse.com) out of Drupal and into a subdomain built with a static site generator like Jekyll. This section would be something like 900 pages of largely static content with no need for comments, etc. (although keeping AdSense would be important) - something like resources.yinyanghouse.com. Then we are more than likely going to move the dynamic/community portions of our site into WordPress for easier upgrade paths and (in our opinion) better adherence to APIs between versions than Drupal.

My question, then, is: will there be any significant ramifications of moving a page such as http://www.yinyanghouse.com/acupuncturepoints/lv3 to resources.yinyanghouse.com/acupuncturepoints/lv3? Is there anything to watch out for, or are 301s and fixing all our internal links enough? Any experiences with Jekyll and larger sites? What about hosting that many pages on GitHub Pages vs. locally with Nginx?

These pages are crucial to our rankings overall and we really don't want to lose that, but moving them to a static site generator will help us greatly with maintenance and hosting costs.
SEO implications of moving a section of site to a subdomain - Drupal to Jekyll/Wordpress migration
seo;wordpress;migration;jekyll
null
_unix.207859
I can't seem to get my wifi working and I feel i've exhausted google's search capabilities. Here is the output of lspci for the device02:00.0 Network controller [0280]: Broadcom Corporation BCM4352 802.11ac Wireless Network Adapter [14e4:43b1] (rev 03) Subsystem: AzureWave Device [1a3b:2123] Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 10 Region 0: Memory at f7e00000 (64-bit, non-prefetchable) [size=32K] Region 2: Memory at f7c00000 (64-bit, non-prefetchable) [size=2M] Capabilities: [48] Power Management version 3 Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=2 PME- Capabilities: [58] MSI: Enable- Count=1/1 Maskable- 64bit+ Address: 0000000000000000 Data: 0000 Capabilities: [68] Vendor Specific Information: Len=44 <?> Capabilities: [ac] Express (v2) Endpoint, MSI 00 DevCap: MaxPayload 256 bytes, PhantFunc 0, Latency L0s <4us, L1 unlimited ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset- DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported- RlxdOrd- ExtTag- PhantFunc- AuxPwr+ NoSnoop+ MaxPayload 128 bytes, MaxReadReq 512 bytes DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend- LnkCap: Port #0, Speed 5GT/s, Width x1, ASPM L0s L1, Latency L0 <2us, L1 <32us ClockPM+ Surprise- LLActRep- BwNot- LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+ ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt- LnkSta: Speed 5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt- DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-, Selectable De-emphasis: -6dB Transmit Margin: Normal Operating Range, 
EnterModifiedCompliance- ComplianceSOS- Compliance De-emphasis: -6dB LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1- EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest- Capabilities: [100 v1] Advanced Error Reporting UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol- UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol- CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr- CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+ AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn- Capabilities: [13c v1] Device Serial Number 24-0a-00-ff-ff-00-00-01 Capabilities: [150 v1] Power Budgeting <?> Capabilities: [160 v1] Virtual Channel Caps: LPEVC=0 RefClk=100ns PATEntryBits=1 Arb: Fixed- WRR32- WRR64- WRR128- Ctrl: ArbSelect=Fixed Status: InProgress- VC0: Caps: PATOffset=00 MaxTimeSlots=1 RejSnoopTrans- Arb: Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256- Ctrl: Enable+ ID=0 ArbSelect=Fixed TC/VC=01 Status: NegoPending- InProgress- Capabilities: [1b0 v1] Latency Tolerance Reporting Max snoop latency: 71680ns Max no snoop latency: 71680ns Capabilities: [220 v1] #15As you can see, there is no kernel driver associated with it, and I have no idea how to get one associated with it. I'm using debian 7.8. 
I know that the correct kernel driver for it is the wl module, which I have installed but for some reason it doesn't associate with that network card.here is modprobe debug output, not sure if it helpsroot@void:~# modprobe -vvv wllibkmod: DEBUG ../libkmod/libkmod-module.c:519 kmod_module_new_from_lookup: input alias=wl, normalized=wllibkmod: DEBUG ../libkmod/libkmod-module.c:525 kmod_module_new_from_lookup: lookup modules.dep wllibkmod: DEBUG ../libkmod/libkmod.c:542 kmod_search_moddep: use mmaped index 'modules.dep' modname=wllibkmod: DEBUG ../libkmod/libkmod.c:390 kmod_pool_get_module: get module name='wl' found=(nil)libkmod: DEBUG ../libkmod/libkmod.c:398 kmod_pool_add_module: add 0x7fbb30aef4a0 key='wl'libkmod: DEBUG ../libkmod/libkmod.c:390 kmod_pool_get_module: get module name='lib80211' found=(nil)libkmod: DEBUG ../libkmod/libkmod.c:390 kmod_pool_get_module: get module name='lib80211' found=(nil)libkmod: DEBUG ../libkmod/libkmod.c:398 kmod_pool_add_module: add 0x7fbb30aef600 key='lib80211'libkmod: DEBUG ../libkmod/libkmod-module.c:178 kmod_module_parse_depline: add dep: /lib/modules/3.2.0-4-amd64/kernel/net/wireless/lib80211.kolibkmod: DEBUG ../libkmod/libkmod.c:390 kmod_pool_get_module: get module name='cfg80211' found=(nil)libkmod: DEBUG ../libkmod/libkmod.c:390 kmod_pool_get_module: get module name='cfg80211' found=(nil)libkmod: DEBUG ../libkmod/libkmod.c:398 kmod_pool_add_module: add 0x7fbb30af2e20 key='cfg80211'libkmod: DEBUG ../libkmod/libkmod-module.c:178 kmod_module_parse_depline: add dep: /lib/modules/3.2.0-4-amd64/kernel/net/wireless/cfg80211.kolibkmod: DEBUG ../libkmod/libkmod.c:390 kmod_pool_get_module: get module name='rfkill' found=(nil)libkmod: DEBUG ../libkmod/libkmod.c:390 kmod_pool_get_module: get module name='rfkill' found=(nil)libkmod: DEBUG ../libkmod/libkmod.c:398 kmod_pool_add_module: add 0x7fbb30af2f80 key='rfkill'libkmod: DEBUG ../libkmod/libkmod-module.c:178 kmod_module_parse_depline: add dep: 
/lib/modules/3.2.0-4-amd64/kernel/net/rfkill/rfkill.kolibkmod: DEBUG ../libkmod/libkmod-module.c:184 kmod_module_parse_depline: 3 dependencies for wllibkmod: DEBUG ../libkmod/libkmod-module.c:546 kmod_module_new_from_lookup: lookup wl=0, list=0x7fbb30aef5a0libkmod: DEBUG ../libkmod/libkmod-module.c:435 kmod_module_unref: kmod_module 0x7fbb30aef4a0 releasedlibkmod: DEBUG ../libkmod/libkmod.c:406 kmod_pool_del_module: del 0x7fbb30aef4a0 key='wl'libkmod: DEBUG ../libkmod/libkmod-module.c:435 kmod_module_unref: kmod_module 0x7fbb30af2f80 releasedlibkmod: DEBUG ../libkmod/libkmod.c:406 kmod_pool_del_module: del 0x7fbb30af2f80 key='rfkill'libkmod: DEBUG ../libkmod/libkmod-module.c:435 kmod_module_unref: kmod_module 0x7fbb30af2e20 releasedlibkmod: DEBUG ../libkmod/libkmod.c:406 kmod_pool_del_module: del 0x7fbb30af2e20 key='cfg80211'libkmod: DEBUG ../libkmod/libkmod-module.c:435 kmod_module_unref: kmod_module 0x7fbb30aef600 releasedlibkmod: DEBUG ../libkmod/libkmod.c:406 kmod_pool_del_module: del 0x7fbb30aef600 key='lib80211'libkmod: INFO ../libkmod/libkmod.c:319 kmod_unref: context 0x7fbb30aef220 releasedIs there something like /lib/modules/3.2.0-4-amd64/modules.alias that I'm supposed to update?EDIT TO ADD:root@void:~# lsmod | grep -i wlwl 2552134 0 iwlwifi 166761 0 mac80211 192806 1 iwlwifilib80211 12941 1 wlcfg80211 137243 3 mac80211,iwlwifi,wloot@void:~# dmesg | grep -i net[ 0.003674] Initializing cgroup subsys net_cls[ 0.921992] NET: Registered protocol family 16[ 1.075822] NET: Registered protocol family 2[ 1.078897] NET: Registered protocol family 1[ 1.237519] audit: initializing netlink socket (disabled)[ 1.407312] NET: Registered protocol family 10[ 1.407547] NET: Registered protocol family 17[ 1.408255] Initializing network drop monitor service[ 1.426190] e1000e: Intel(R) PRO/1000 Network Driver - 2.3.2-k[ 1.608316] e1000e 0000:00:19.0: eth0: Intel(R) PRO/1000 Network Connection[ 3.220683] NET: Registered protocol family 31[ 10.365781] ip_tables: (C) 2000-2006 
Netfilter Core Team[ 10.390360] ip6_tables: (C) 2000-2006 Netfilter Core Team[ 10.541140] FS-Cache: Netfs 'nfs' registered for caching[ 11.474762] ADDRCONF(NETDEV_UP): eth0: link is not ready[ 13.015205] ADDRCONF(NETDEV_CHANGE): eth0: link becomes readyroot@void:~# dmesg | grep -i wl[ 9.846040] wl: module license 'MIXED/Proprietary' taints kernel.root@void:~# dmesg | grep cfg80211[ 9.835597] cfg80211: Calling CRDA to update world regulatory domainroot@void:~# dmesg | grep -i addr[ 0.000000] ACPI: Local APIC address 0xfee00000[ 0.000000] ACPI: Local APIC address 0xfee00000[ 0.000000] ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])[ 0.000000] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23[ 11.474762] ADDRCONF(NETDEV_UP): eth0: link is not ready[ 13.015205] ADDRCONF(NETDEV_CHANGE): eth0: link becomes readypaul@void:~$ cat /etc/modprobe.d/broadcom-sta-dkms.conf # wl module from Broadcom conflicts with the following modules:blacklist b44blacklist b43legacyblacklist b43blacklist brcm80211blacklist brcmsmacblacklist ssb
wlan0 device missing
debian;networking;wifi
null
_softwareengineering.342232
I was reading the Wikipedia article on the Actor model and came across this line:

    ...bridging the chasm between local and nonlocal concurrency.

What is this non-local concurrency, and how is it different from local concurrency?
What is the difference between local and non-local concurrency?
concurrency
null
_codereview.87515
I am using RestTemplate as my HTTP client to execute a URL, and the server returns a JSON string as the response. The customer calls this library by passing a DataKey object which has a userId in it. Earlier I was using AsyncRestTemplate, which is part of Spring 4, but my company does not support Spring 4 in their parent pom, so I am going back to Spring 3 for now.

Using the given userId, I find out which machines I can hit to get the data and store those machines in a LinkedList, so that I can try them sequentially.

After that, I check whether the first hostname is in the block list. If it is not, I build a URL with the first hostname in the list and execute it; if the response is successful, I return the response. But if that first hostname is in the block list, I try the next hostname in the list and build the URL from it; in other words, I first find a hostname that is not in the block list before making the URL.

Now, say we selected the first hostname that was not in the block list, executed the URL, and the server was down or not responding. Then I execute the next hostname in the list and keep doing this until I get a successful response, again skipping any hosts in the block list as above.

If all the servers are down or in the block list, I simply log and return an error that the service is unavailable.

I am making a library in which I will have synchronous (getSyncData) and asynchronous (getAsyncData) methods:

- getSyncData() - waits until I have a result, returns the result.
- getAsyncData() - returns a Future immediately, which can be processed after other things are done, if needed.

Below is my DataClient class, which will be called by the customer; they will pass a DataKey object to either getSyncData or getAsyncData depending on what they want to call.
In general, some customers will call the getSyncData method and some might call getAsyncData.

    public class DataClient implements Client {

        private RestTemplate restTemplate = new RestTemplate();
        private ExecutorService service = Executors.newFixedThreadPool(15);

        @Override
        public DataResponse getSyncData(DataKey key) {
            DataResponse response = null;
            Future<DataResponse> responseFuture = null;
            try {
                responseFuture = getAsyncData(key);
                response = responseFuture.get(key.getTimeout(), key.getUnitOfTime());
            } catch (TimeoutException ex) {
                response = new DataResponse(DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR);
                responseFuture.cancel(true); // terminating the tasks that have got timed out
            } catch (Exception ex) {
                response = new DataResponse(DataErrorEnum.ERROR_CLIENT, DataStatusEnum.ERROR);
            }
            return response;
        }

        @Override
        public Future<DataResponse> getAsyncData(DataKey key) {
            DataFetcherTask task = new DataFetcherTask(key, restTemplate);
            Future<DataResponse> future = service.submit(task);
            return future;
        }
    }

DataFetcherTask class:

    public class DataFetcherTask implements Callable<DataResponse> {

        private DataKey key;
        private RestTemplate restTemplate;

        public DataFetcherTask(DataKey key, RestTemplate restTemplate) {
            this.key = checkNotNull(key);
            this.restTemplate = checkNotNull(restTemplate);
        }

        // can we simplify this?
        // I tried thinking a lot but am not sure how to split this up so it follows SRP
        @Override
        public DataResponse call() {
            DataResponse dataResponse = null;
            ResponseEntity<String> response = null;
            MappingsHolder mappings = ShardMapping.getMappings(key.getTypeOfFlow());
            // given a userId, find all the hostnames;
            // the list can contain one, four, or six hostnames as well
            LinkedList<String> hostnames = mappings.getListOfHostnames(key.getUserId());
            for (String hostname : hostnames) {
                // if the hostname is null or in the local block list, skip sending a request to this host
                if (StringUtils.isEmpty(hostname) || ShardMapping.isBlockHost(hostname)) {
                    continue;
                }
                try {
                    String url = generateURL(hostname);
                    response = restTemplate.exchange(url, HttpMethod.GET, key.getEntity(), String.class);
                    // if the status code is NO_CONTENT, then send that as well,
                    // otherwise send OK
                    if (response.getStatusCode() == HttpStatus.NO_CONTENT) {
                        dataResponse = new DataResponse(response.getBody(), DataErrorEnum.NO_CONTENT, DataStatusEnum.SUCCESS);
                    } else {
                        dataResponse = new DataResponse(response.getBody(), DataErrorEnum.OK, DataStatusEnum.SUCCESS);
                    }
                    break;
                    // the catch blocks below look duplicated
                } catch (HttpClientErrorException ex) {
                    HttpStatusCodeException httpException = (HttpStatusCodeException) ex;
                    DataErrorEnum error = DataErrorEnum.getErrorEnumByException(httpException);
                    String errorMessage = httpException.getResponseBodyAsString();
                    dataResponse = new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
                    return dataResponse;
                } catch (HttpServerErrorException ex) {
                    HttpStatusCodeException httpException = (HttpStatusCodeException) ex;
                    DataErrorEnum error = DataErrorEnum.getErrorEnumByException(httpException);
                    String errorMessage = httpException.getResponseBodyAsString();
                    dataResponse = new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
                    return dataResponse;
                } catch (RestClientException ex) {
                    // if we get here, some of the servers are down, so add the host to the block list
                    ShardMapping.blockHost(hostname);
                }
            }
            // if hostnames is empty, send a different ERROR enum code
            if (CollectionUtils.isEmpty(hostnames)) {
                dataResponse = new DataResponse(null, DataErrorEnum.PERT_ERROR, DataStatusEnum.ERROR);
            } else if (response == null) {
                // either all the servers are down or all of them were in the block list
                dataResponse = new DataResponse(null, DataErrorEnum.SERVICE_UNAVAILABLE, DataStatusEnum.ERROR);
            }
            return dataResponse;
        }
    }

My block list keeps getting updated from another background thread every minute. If any server is down and not responding, I need to block that server using ShardMapping.blockHost(hostname), and to check whether a server is in the block list I use ShardMapping.isBlockHost(hostname).

I am returning SERVICE_UNAVAILABLE if the servers are down or in the block list, on the basis of the response == null check; I am not sure whether that is the right approach.

I want to know whether I am following the Single Responsibility Principle properly here and whether the above code can be simplified.
Receiving a JSON string response from a URL
java;performance;multithreading;http;guava
I would consider creating an AsyncClient and a SyncClient interface/classes instead of one. I guess users of the current Client class would use only one of the two methods. Furthermore, getSyncData does not use any of the fields of DataClient currently.

Async version:

    public class AsyncDataClient {

        private RestTemplate restTemplate = new RestTemplate();
        private ExecutorService service = Executors.newFixedThreadPool(15);

        public Future<DataResponse> getAsyncData(DataKey key) {
            DataFetcherTask task = new DataFetcherTask(key, restTemplate);
            Future<DataResponse> future = service.submit(task);
            return future;
        }
    }

Sync version:

    public class SyncDataClient {

        private AsyncDataClient asyncDataClient;

        public SyncDataClient(final AsyncDataClient asyncDataClient) {
            this.asyncDataClient = checkNotNull(asyncDataClient, "asyncDataClient cannot be null");
        }

        public DataResponse getSyncData(DataKey key) {
            ...
            responseFuture = asyncDataClient.getAsyncData(key);
            ...
        }
    }

See also: Interface segregation principle

In getSyncData the response variable is used for multiple purposes. It could store a valid response and error responses too. I would use separate variables for these purposes for better readability and smaller variable scope:

    public DataResponse getSyncData(DataKey key) {
        Future<DataResponse> responseFuture = null;
        try {
            responseFuture = asyncDataClient.getAsyncData(key);
            final DataResponse response = responseFuture.get(key.getTimeout(), key.getUnitOfTime());
            return response;
        } catch (TimeoutException ex) {
            final DataResponse response = new DataResponse(DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR);
            responseFuture.cancel(true); // terminating the tasks that have got timed out
            return response;
        } catch (Exception ex) {
            return new DataResponse(DataErrorEnum.ERROR_CLIENT, DataStatusEnum.ERROR);
        }
    }

It also makes it easier to figure out that the only occasion when this method returns null is the try block, when the future returns null.

See also: Effective Java, Second Edition, Item 45: Minimize the scope of local variables

In this catch block you lose the cause of the error:

    } catch (Exception ex) {
        response = new DataResponse(DataErrorEnum.ERROR_CLIENT, DataStatusEnum.ERROR);
    }

At least a debug-level log message would be helpful for an operator here. It could save you lots of debugging time.

DataKey contains at least these methods:

- getTimeout()
- getUnitOfTime()
- getTypeOfFlow()
- getUserId()
- getEntity()

It reminds me of a DataRequest object instead; consider renaming. For me, DataKey is closer to a cache key class or something like that. Furthermore, getEntity still smells even in a request class. It might be a third parameter of getAsyncData and the constructor of DataFetcherTask as well.

The hostnames reference could be a simple List<String> instead of

    LinkedList<String> hostnames = ...

As far as I see, the code does not use any LinkedList-specific methods.

See: Effective Java, 2nd edition, Item 52: Refer to objects by their interfaces

This:

    if (CollectionUtils.isEmpty(hostnames)) {

could be changed to

    if (hostnames.isEmpty()) {

CollectionUtils checks nulls as well, but if it's null you get a NullPointerException in the for loop earlier.

Instead of StringUtils.isEmpty(hostname) I usually prefer StringUtils.isBlank, which handles whitespace-only strings too.

I don't know how complex your generateURL method is, but I would consider moving it to a UrlGenerator class. I would also call it generateUrl for slightly better readability. From Effective Java, 2nd edition, Item 56: Adhere to generally accepted naming conventions:

    While uppercase may be more common, a strong argument can be made in favor of capitalizing only the first letter: even if multiple acronyms occur back-to-back, you can still tell where one word starts and the next word ends. Which class name would you rather see, HTTPURL or HttpUrl?

You could move this part to the beginning of your method as a guard clause:

    // if hostnames is empty, send a different ERROR enum code
    if (hostnames.isEmpty()) {
        return new DataResponse(null, DataErrorEnum.PERT_ERROR, DataStatusEnum.ERROR);
    }

For example:

    List<String> hostnames = mappings.getListOfHostnames(key.getUserId());
    // if hostnames is empty, send a different ERROR enum code
    if (hostnames.isEmpty()) {
        return new DataResponse(null, DataErrorEnum.PERT_ERROR, DataStatusEnum.ERROR);
    }

Instead of the break statement in the loop you could return immediately:

    if (response.getStatusCode() == HttpStatus.NO_CONTENT) {
        return new DataResponse(response.getBody(), DataErrorEnum.NO_CONTENT, DataStatusEnum.SUCCESS);
    } else {
        return new DataResponse(response.getBody(), DataErrorEnum.OK, DataStatusEnum.SUCCESS);
    }

It also helps to make the scope of the dataResponse variable smaller. Actually, you don't need it at all, and you could get rid of the null comparison at the end of the method too:

    @Override
    public DataResponse call() {
        ...
        List<String> hostnames = mappings.getListOfHostnames(key.getUserId());
        if (hostnames.isEmpty()) {
            return new DataResponse(null, DataErrorEnum.PERT_ERROR, DataStatusEnum.ERROR);
        }
        for (String hostname : hostnames) {
            ...
            try {
                ...
                ResponseEntity<String> response = restTemplate.exchange(url, HttpMethod.GET, key.getEntity(), String.class);
                ...
                if (response.getStatusCode() == HttpStatus.NO_CONTENT) {
                    return new DataResponse(response.getBody(), DataErrorEnum.NO_CONTENT, DataStatusEnum.SUCCESS);
                } else {
                    return new DataResponse(response.getBody(), DataErrorEnum.OK, DataStatusEnum.SUCCESS);
                }
            } catch (HttpClientErrorException ex) {
                ...
                return new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
            } catch (HttpServerErrorException ex) {
                ...
                return new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
            } catch (RestClientException ex) {
                // if we get here, some of the servers are down, so add the host to the block list
                ShardMapping.blockHost(hostname);
            }
        }
        // either all the servers are down or all of them were in the block list
        return new DataResponse(null, DataErrorEnum.SERVICE_UNAVAILABLE, DataStatusEnum.ERROR);
    }

Also note the scope change of response.

I don't see why you need this casting:

    } catch (HttpClientErrorException ex) {
        HttpStatusCodeException httpException = (HttpStatusCodeException) ex;

and this one:

    } catch (HttpServerErrorException ex) {
        HttpStatusCodeException httpException = (HttpStatusCodeException) ex;

Since HttpStatusCodeException is a superclass of both HttpClientErrorException and HttpServerErrorException, the following is the same:

    } catch (HttpClientErrorException ex) {
        HttpStatusCodeException httpException = ex;
        ...
    } catch (HttpServerErrorException ex) {
        HttpStatusCodeException httpException = ex;

Furthermore, HttpStatusCodeException has only these two subclasses in Spring, and the bodies of both catch clauses are the same, so you could simply catch only HttpStatusCodeException here:

    } catch (HttpStatusCodeException httpException) {
        DataErrorEnum error = DataErrorEnum.getErrorEnumByException(httpException);
        String errorMessage = httpException.getResponseBodyAsString();
        return new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
    } catch (RestClientException ex) {
        // if we get here, some of the servers are down, so add the host to the block list
        ShardMapping.blockHost(hostname);
    }

Keep in mind that anyone can create a new subclass of HttpStatusCodeException, so that might not be what you want.

If you're using Java 7 or later, you could use multi-catch with one catch block:

    } catch (HttpClientErrorException | HttpServerErrorException ex) {

Another solution is extracting the common code into a method:

    private DataResponse createErrorResponse(HttpStatusCodeException httpException) {
        DataErrorEnum error = DataErrorEnum.getErrorEnumByException(httpException);
        String errorMessage = httpException.getResponseBodyAsString();
        return new DataResponse(errorMessage, error, DataStatusEnum.ERROR);
    }

Usage:

    } catch (HttpClientErrorException ex) {
        return createErrorResponse(ex);
    } catch (HttpServerErrorException ex) {
        return createErrorResponse(ex);

responseFuture.cancel(true) is in a catch block. I have not checked, but I would try to move it into a finally block. If another (non-timeout) exception happens, you won't use the future anyway.

getUnitOfTime does not suggest any connection with the getTimeout method. I would rename it to getTimeoutUnit.
_unix.227931
I am trying to list all the ffmpeg processes that are currently running on a Debian machine (Ubuntu 15). I use the following command:

ps aux | grep 'ffmpeg'

If only one ffmpeg process is running, I still get two results: one for the actual process, and one for the grep that is looking for ffmpeg in the process list.

max 21599 13.2 3.0 503848 92288 ? Rl 01:39 1:18 ffmpeg -f video4linux2 -i /dev/video0 -f mpeg1video -b:v 800k -r 30 http://127.0.0.1:8082/oops/1024/640/ -nostdin -nostats -loglevel fatal
max 23789 0.0 0.0 13688 2172 pts/3 S+ 01:49 0:00 grep --color=auto ffmpeg

How can I modify my command so that the grep result, which is actually my own request, is omitted from the output?
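A common trick (an illustration added here, not part of the original question) is to wrap one character of the pattern in a bracket expression, so the grep process's own command line no longer matches the pattern it searches for:

```shell
# The pattern [f]fmpeg still matches the literal text "ffmpeg", but it does
# NOT match the grep command line itself, which contains "[f]fmpeg":
ps aux | grep '[f]fmpeg' || true   # '|| true': grep exits 1 when nothing matches

# The same effect, shown on canned input -- only the real process line survives:
printf '%s\n' 'ffmpeg -i /dev/video0' 'grep [f]fmpeg' | grep '[f]fmpeg'
```

Where procps is available, `pgrep -a ffmpeg` sidesteps the problem entirely, since pgrep never lists itself.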
List process by name excluding grep
linux;grep;process;ps
null
_unix.305643
When I query the status of the NTP daemon with ntpdc -c sysinfo I get the following output:system peer: 0.0.0.0system peer mode: unspecleap indicator: 11stratum: 16precision: -20root distance: 0.00000 sroot dispersion: 12.77106 sreference ID: [73.78.73.84]reference time: 00000000.00000000 Thu, Feb 7 2036 7:28:16.000system flags: auth monitor ntp kernel statsjitter: 0.000000 sstability: 0.000 ppmbroadcastdelay: 0.000000 sauthdelay: 0.000000 sThis indicates that the NTP sync failed. However the system time is accurate within 1 second precision. When I ran my system without network connection for the same period as I did now the system time would deviate ~10s.This behavior suggests that the system has another way of syncing the time. I realized that there is also systemd-timesyncd.service (with configuration file at /etc/systemd/timesyncd.conf) and timedatectl status gives me the correct time: Local time: Thu 2016-08-25 10:55:23 CEST Universal time: Thu 2016-08-25 08:55:23 UTC RTC time: Thu 2016-08-25 08:55:22 Time zone: Europe/Berlin (CEST, +0200) NTP enabled: yesNTP synchronized: yes RTC in local TZ: no DST active: yes Last DST change: DST began at Sun 2016-03-27 01:59:59 CET Sun 2016-03-27 03:00:00 CEST Next DST change: DST ends (the clock jumps one hour backwards) at Sun 2016-10-30 02:59:59 CEST Sun 2016-10-30 02:00:00 CETSo my question is what is the difference between the two mechanisms? Is one of them deprecated? Can they be used in parallel? Which one should I trust when I want to query the NTP sync status?(Note that I have a different system (in a different network) for which both methods indicate success and yield the correct time.)
ntpd vs. systemd-timesyncd - How to achieve reliable NTP syncing?
ntp;ntpd
null
_softwareengineering.94709
I'm considering using a Java stored procedure as a very small shim to allow UDP communication from a PL/SQL package. Oracle does not provide a UTL_UDP to match its UTL_TCP. There is a 3rd party XUTL_UDP that uses Java, but it's closed source (meaning I can't see how it's implemented, not that I don't want to use closed source).An important distinction between PL/SQL and Java stored procedures with regards to networking: PL/SQL sockets are closed when dbms_session.reset_package is called, but Java sockets are not. So if you want to keep a socket open to avoid the tear-down/reconnect costs, you can't do it in sessions that are using reset_package (like mod_plsql or mod_owa HTTP requests).I haven't used Java stored procedures in a production capacity in Oracle before. This is a very large, heavily-used database, and this particular shim would be heavily used as well (it serves as a UDP bridge between a PL/SQL RFC 5424 syslog client and the local rsyslog daemon).Am I opening myself up for woe and horror, or are Java stored procedures stable and robust enough for usage in 10g? I'm wondering about issues with the embedded JVM, the jit, garbage collection, or other things that might impact a heavily used database.
Java stored procedures in Oracle, a good idea?
java;oracle;stored procedures
null
_codereview.79946
I have a small function that, when passed a str that names a file containing a program, returns a 2-tuple with the number of non-empty lines in that program, and the sum of the lengths of all those lines. Here is my current functioning code:

def code_metric(file_name):
    line_count = char_count = 0
    with open(file_name) as fin:
        stripped = (line.rstrip() for line in fin)
        for line_count, line in enumerate(filter(None, stripped), 1):
            char_count += len(line)
    return line_count, char_count

Is there a way to implement this function using functionals such as map, filter, and reduce, and small lambdas to pass to these functionals? I could make it work conventionally, but I am having some issues with using functionals. Any help would be great.
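One possible functional-style rewrite (a sketch, not the only way): map does the stripping, filter(None, ...) drops empty lines, and a single reduce folds both counts at once, so the file is still read only once.

```python
from functools import reduce

def code_metric(file_name):
    # Fold (line_count, char_count) over the non-empty, right-stripped lines.
    with open(file_name) as fin:
        return reduce(
            lambda acc, line: (acc[0] + 1, acc[1] + len(line)),
            filter(None, map(str.rstrip, fin)),
            (0, 0),
        )
```

The lambda carries both counters in one tuple accumulator, which avoids iterating over the lines twice.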
Counting Lines and Sum of Lines
python;python 3.x;functional programming
null
_reverseengineering.9431
I am trying to change the name of the DLL file I had specified for my application before compiling it. When I change the DLL file name I get the error message like: System.IO.FileNotFoundException: Could not load file or assembly 'MyDLLName'Is there any way to change the DLL I had specified through .NET Reflector + Reflexil add-in?
How to change the DLL name in the already compiled application?
disassembly;dll;patch reversing;reassembly;.net
null
_unix.379281
I have a large directory, with too many files for just ls. My idea: use something along the lines of:

find . -name "*" -exec wc -c < {} \; | sort | tail -n 1

Problem: the shell is interpreting it as (find . -name "*" -exec wc -c) < ({} \;) | ... I need the redirection on the < {}, to avoid displaying the filename into sort.

I've also tried

find . -name "*" -exec cat {} + | wc -c

However, this seems to be interpreted as: (find . -name "*" -exec cat {}) | (wc -c) -- so it gives me the size of all files combined.

There is also a variant using du -- however, since the biggest files could be just a few bytes apart, this just displays along the lines of a million files of 500-KB size -- again, too many for ls.
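For what it's worth, one sketch of an approach (my own illustration, not from an answer): let wc -c print size-and-name pairs, drop the "total" lines that wc emits when -exec ... + passes it several files at once, and keep the numerically largest:

```shell
# Largest regular file under the current directory, printed as "<size> <path>".
# wc -c prints a "total" line per batch when -exec ... + passes many files,
# so that line is filtered out before the numeric sort.
find . -type f -exec wc -c {} + | grep -v ' total$' | sort -n | tail -n 1
```

With GNU find, `find . -type f -printf '%s %p\n' | sort -n | tail -n 1` is simpler, but -printf is not available in the BSD find that ships with OS X (this question is tagged osx).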
How to find the biggest filesize in a large directory
find;osx;size
null
_unix.71807
I'm noticing very strange behavior from the mv command that may (or may not) be related to the open system call. We're running RedHat v5. There are two separate storage devices, one mounted to /diskTo and the other /diskFrom (for this example).

In normal operations, we are moving (mv'ing) hundreds, if not low thousands, of files from /diskFrom to /diskTo. The majority of the files move fine. However, out of, say, 1000 files, we have 1-5 that fail. The failure is a permission denied error. When we check the file destination, a file exists, but the inode contents are garbage. For example, the timestamp is junk (1969, but varies), and the permissions are 0. So, I figured we should run strace on the mv commands and capture the output of the failures. Here's what I found:

munmap(0x2b0328770000, 4096) = 0
geteuid() = 31169
ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff90de9500) = -1 ENOTTY (Inappropriate ioctl for device)
stat("/diskTo/foo.dat", 0x7fff90de95d0) = -1 ENOENT (No such file or directory)
lstat("/diskFrom/bar.dat", {st_mode=S_IFREG|0444, st_size=234632119, ...}) = 0
lstat("/diskTo/foo.dat", 0x7fff90de9370) = -1 ENOENT (No such file or directory)
rename("/diskFrom/bar.dat", "/diskTo/foo.dat") = -1 EXDEV (Invalid cross-device link)
unlink("/diskTo/foo.dat") = -1 ENOENT (No such file or directory)
open("/diskFrom/bar.dat", O_RDONLY|O_NOFOLLOW) = 3
fstat(3, {st_mode=S_IFREG|0444, st_size=234632119, ...}) = 0
open("/diskTo/foo.dat", O_WRONLY|O_CREAT|O_EXCL, 0400) = -1 EEXIST (File exists)
write(2, "mv: ", 4) = 4
write(2, "cannot create regular file `/diskTo"..., 76) = 76
write(2, ": File exists", 13) = 13
write(2, "\n", 1) = 1

As you can see, the unlink is called, which returns -1, which shows the file doesn't exist. Then mv tries to open the file and receives an EEXIST error. But the file can't possibly exist! I'm not showing this here, but the script that is creating this test case is using unique numbers to build directories - so it's very unlikely (if not impossible) that the file truly existed.
Not to mention, the unlink proves the file didn't exist.

Could this be an issue with how open is creating the inode contents? I'm not sure where to look at this point. Maybe looking into mv more, or the open system call?
Strange behavior in mv command - maybe open sys call issue?
fedora;filesystems;rhel;rename;open files
null
_unix.284410
Is there any way of displaying memory usage from U-Boot? I mean operating memory - like SDRAM - not MMC. Ideally from the shell, but I would also be happy with a C command, as I compile U-Boot myself.
U-Boot shell - show memory usage (like Linux free command)
memory;u boot
null
_unix.122115
I've installed Debian on an old netbook of mine which I intend to turn into a thin client for a personal thingie while also having it ready to play a video or some music when needed.After swimming against the current for hours, working around known bugs and getting wireless to work, I've finally come to a point where I can't find any more help on Google. (I've hardly used Linux) I wish to boot into a terminal (like pressing Control + Alt + F1) by default, but also have the GUI (Gnome, in my case) loading in the background if possible (for quick access with Control + Alt + F7).How do I do this?
Boot to a terminal by default but still have GUI loading in Debian?
debian
So, by empirical evidence, the GUI won't even start up properly unless I'm switched to it (VT7) during the loading, so I tried something else. I used this to get tty1 and tty2 to autologin into my account. Then I set up the file ~/.bash_profile with this code:

#!/bin/sh
if [ "$(tty)" = "/dev/tty1" ]
then
    echo "boot script yada yada on tty1"
else
    echo "Type 'startx' to get your GUI!"
fi

Then I instructed the other potential users of my netbook to press Control + Alt + F2 and Control + Alt + F7 to get around. (Taped sticky note...) This is the best solution I've found so far.

If anyone posts a better (full) solution, it'll get all the cookies.
_cs.41785
What is the intuition behind the Max-Flow Min-Cut Theorem? I know that the Min-Cut is the dual of Max-Flow when formulated as a linear program, but the result seems artificial to me.
Max-Flow Min-Cut Theorem Intuition
graph theory;network flow;intuition
So you have a flow through the network. If you want the maximal flow, your network should not have any bottlenecks. And if you partition the network into two parts, where the source and the sink are in different partitions, you won't be able to push more through the network than this cut allows - i.e. the sum of the capacities of the edges crossing it.

Now the minimum cut is the worst bottleneck in the network. So it corresponds to the max flow.
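To make the intuition concrete, here is a small self-contained sketch (my own illustration with a made-up 4-node graph, not part of the original answer): Edmonds-Karp computes the max flow, and a brute-force loop over all source/sink partitions computes the min cut. On the toy graph both come out equal, as the theorem promises.

```python
from collections import deque
from itertools import combinations

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left: done
            return total
        # Collect the path and push the bottleneck amount along it.
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += aug
            flow[v][u] -= aug        # residual capacity on the reverse edge
        total += aug

def min_cut_value(cap, s, t):
    """Brute force over all s/t partitions (fine for toy graphs)."""
    n = len(cap)
    others = [v for v in range(n) if v not in (s, t)]
    best = float('inf')
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            side = {s} | set(extra)  # the partition containing the source
            val = sum(cap[u][v] for u in side for v in range(n) if v not in side)
            best = min(best, val)
    return best
```

On a graph with edges s->1 (3), s->2 (2), 1->2 (1), 1->3 (2), 2->3 (3), both functions return 5: the cut {s} already has capacity 3 + 2 = 5, and no partition does better.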
_webapps.108910
I'm trying to connect to an Oracle database and encountered the following error: "Connection URL uses an unsupported JDBC protocol". Is this error due to a firewall? Please help.
jdbc connection issue in google apps script
google apps script
null
_cs.5010
I'm having trouble figuring out how to determine if two finite automata are the same apart from renumbered states. More specifically, here's an example: It's easy to generate a regular expression by hand and see that both FA produce: b + (ab*a(a+b)); though their states are renumbered, they are identical.

What I'm trying to do is figure out a way to check if the two automata are the same apart from state renumbering, without generating a regular expression. Since the states are just renumbered, I'm thinking it has something to do with permutations of the states (1 2 3 4), but I am not seeing how to determine if they are equivalent. I'm thinking it has something to do with input like this, and relating it to the 24 permutations of the states:

Left       Right
1 a 2      4 a 3
1 b 4      4 b 1
2 a 3      3 a 2
2 b 2      3 b 3
3 a 4      2 a 1
3 b 4      2 b 1

I'm more so trying to figure out the algorithm to renumber the states. Any ideas or help is greatly appreciated!
Finding an isomorphism between finite automata
automata;finite automata;graph isomorphism
I can identify a permutation on the set of states $\{1,2,3,4\}$ that turns the left-hand diagram into the right-hand one. In other words, if you renumber the states on the left-hand diagram, you get the right-hand diagram. Thus the two automata are identical up to renumbering.
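Since the state sets here are tiny, the permutation can also be found mechanically. A brute-force sketch (hypothetical code of mine, using the transition table from the question): try each of the 4! = 24 renamings and keep the one under which every transition of the left table maps onto the corresponding transition of the right table. Note it only checks the transition structure; a full DFA isomorphism check would also have to match start and accepting states.

```python
from itertools import permutations

def find_isomorphism(delta1, delta2, states, alphabet):
    """Search for a renaming pi of states turning delta1 into delta2.

    delta1 and delta2 map (state, symbol) -> state; a missing key means
    the diagram has no transition for that pair.
    """
    for image in permutations(states):
        pi = dict(zip(states, image))
        ok = True
        for s in states:
            for a in alphabet:
                if ((s, a) in delta1) != ((pi[s], a) in delta2):
                    ok = False   # one side defines the move, the other doesn't
                elif (s, a) in delta1 and pi[delta1[(s, a)]] != delta2[(pi[s], a)]:
                    ok = False   # the moves disagree under the renaming
        if ok:
            return pi
    return None
```

On the question's table this finds the renaming 1->4, 2->3, 3->2, 4->1, matching the answer. For larger automata one would minimize both DFAs first and compare the canonical forms instead of brute-forcing permutations.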
_codereview.138389
I've made my own std::vector, but it's the first time that I work with templates and new/delete. The code works, but surely there are a lot of things that are wrong. Can you read the code and tell me if I have coded it the right way? (main is a test.)

#ifndef __STDVECTOR__
#define __STDVECTOR__

#include <iostream>

using namespace std;

template <typename T> class StdVector{
    private:
        T *buffer;
        unsigned int capacity;
    public:
        //Constructor.
        StdVector(){
            capacity=0;
            buffer=new T[capacity];
        }

        //Copy constructor.
        StdVector(const StdVector &asv){
            int i;

            capacity=asv.getCapacity();
            buffer=new T[asv.getCapacity()];
            for (i=0; i<capacity; i++){
                buffer[i]=asv[i];
            }
        }

        //Destructor.
        ~StdVector(){
            delete []buffer;
        }

        void push_back(T obj){
            StdVector oldSV(*this);
            int i;

            capacity++;
            delete []buffer;
            buffer=new T[capacity];
            for (i=0; i<oldSV.getCapacity(); i++){
                buffer[i]=oldSV[i];
            }
            buffer[i]=obj;
        };

        T getBuffer() const{
            if (capacity==0){
                throw exception();
            }
            return *buffer;
        };

        T &operator[](int index) const{
            if (index>=capacity){
                //Out of range.
                throw exception();
            }
            else{
                return buffer[index];
            }
        }

        StdVector &operator=(const StdVector &obj){
            capacity=obj.getCapacity();
            delete []buffer;
            buffer=new T[capacity];
            buffer=obj.getBuffer();
            return *this;
        }

        unsigned int getCapacity() const{
            return capacity;
        };
};
#endif

int main(){
    try{
        StdVector<int> test;
        StdVector<string> test2;
        unsigned int i;

        test.push_back(5);
        test.push_back(4);
        test.push_back(3);
        test.push_back(2);
        test.push_back(1);
        test.push_back(0);
        test.push_back(-1);
        test.push_back(-2);
        test.push_back(-3);
        test.push_back(-4);
        test.push_back(-5);
        for (i=0; i<test.getCapacity(); i++){
            cout << test[i] << endl;
        }
        test2.push_back("Hello");
        test2.push_back(" ");
        test2.push_back("World");
        test2.push_back(".");
        cout << "---------------" << endl;
        for (i=0; i<test2.getCapacity(); i++){
            cout << test2[i];
        }
        cout << endl;
    }
    catch(...){
        cout << "Exception." << endl;
    }
    return 0;
}
My own std::vector
c++;template;vectors
Don't use double underscore.

#ifndef __STDVECTOR__
#define __STDVECTOR__

Identifiers with double underscores are reserved for the implementation.
see: What are the rules about using an underscore in a C++ identifier?

Stop using

using namespace std;

This kind of thing can break so much code. Putting it in your header file will get you banned from open source projects, as you corrupt the global namespace of any compilation unit that includes your header. Even in your own source files it is a bad idea, as it potentially introduces hard-to-spot errors.
see: Why is using namespace std in C++ considered bad practice?

Don't build objects that have not been added.

Your current design is flawed.

template <typename T> class StdVector{
    T *buffer;
    unsigned int capacity;
    ...
    buffer=new T[capacity];

Because you only have one size value (a capacity), your code becomes tremendously inefficient (as we see when we get to push_back). You should have two sizes.

The capacity: the amount of space allocated for objects in your vector.
The size: the number of objects in the vector.

Without both these sizes they have to be the same value. This means you cannot pre-allocate space, and each time you add or remove elements you must resize the vector, and this includes both allocating new space and copying all the elements into the newly allocated space.

I go into a lot of detail in my article: Vector - Resource Management Allocation

Prefer to use the initializer list.

StdVector(){
    capacity=0;
    buffer=new T[capacity];
}

This is totally fine and works. But it is a bad habit. If you change the types of the members, you are potentially making your class inefficient, as the members are constructed before the body of the constructor is entered. You then modify them in the body.

StdVector()
    : buffer(new T[0])
    , capacity(0)
{}

Rule of three

Your assignment operator is way down in your code. I initially thought you were violating the rule of three.
Put your assignment operator close to the constructors.

Assignment operator not exception safe.

This assignment operator is the classic first attempt.

StdVector &operator=(const StdVector &obj){
    capacity=obj.getCapacity();
    delete []buffer;
    buffer=new T[capacity];
    buffer=obj.getBuffer();
    return *this;
}

But it is prone to leaking and leaving the object in an inconsistent state if there are exceptions (in the constructor of T). You should use the copy and swap idiom.

StdVector &operator=(StdVector tmp) // notice the pass by value
{                                   // this creates a copy.
    tmp.swap(*this);
    return *this;
}
void swap(StdVector& other) noexcept
{
    using std::swap;
    swap(capacity, other.capacity);
    swap(buffer, other.buffer);
}

I cover this in a lot of detail in: Vector - Resource Management Copy Swap

Push back inefficient

You are making a copy of the whole vector each time you add an element. But you are doing it twice, making it even more inefficient than the inefficiency already imposed by your design.

void push_back(T obj){
    StdVector oldSV(*this);
    int i;
    capacity++;
    delete []buffer;
    buffer=new T[capacity];
    for (i=0; i<oldSV.getCapacity(); i++){
        buffer[i]=oldSV[i];
    }
    buffer[i]=obj;
};

You can get rid of one copy like this:

void push_back(T obj){
    unsigned int newCapacity = capacity + 1;
    T* newBuffer = new T[newCapacity];
    for (int i = 0; i < capacity; ++i){
        newBuffer[i] = buffer[i];
    }
    std::swap(capacity, newCapacity);
    std::swap(buffer, newBuffer);
    delete [] newBuffer;
};

Still not very good. But better than the original.

For loops incrementing iterators.

Prefer to declare the loop variable inline. Prefer to use prefix increment (not all iterators are as efficient as int). Like this:

for (int i = 0; i < capacity; ++i){
    newBuffer[i] = buffer[i];
}

Badly named method.

This does not get the buffer.

T getBuffer() const{
    if (capacity==0){
        throw exception();
    }
    return *buffer;
};

It returns a copy of the first element in the vector.

Efficiency.

Normally in C++ the operator[] does unchecked access to the elements.
Because there is no need to pay for the check in the method if the calling code already does the check.

T &operator[](int index) const{
    if (index>=capacity){
        //Out of range.
        throw exception();
    }
    else{
        return buffer[index];
    }
}

To give us checked accesses we usually implement the function T& at(int index). This provides checked access to the vector for situations where the calling code does not check.

for(int loop = 0; loop < v.size(); ++loop){
    v[loop] = stuff(); // no need to check `loop` is in bounds.
                       // we know it is in bounds because of the context.
}

int index;
std::cin >> index;
std::cout << v.at(index) << "\n"; // here we want checked access.

I would write:

T& operator[](int index) {return buffer[index];}
T& at(int index)         {checkIndex(index); return buffer[index];}

Const correctness

T &operator[](int index) const

This function is not const correct. You promise not to mutate the object by marking the function as const, but then return a reference that is not const, thus allowing the object to be mutated.

void bla(StdVector<int> const& data) {
    data[5] = 8; // You just mutated a const object.
}

You should define two versions of this operator: one for const and one for non-const usage.

T const& operator[](int index) const;
T&       operator[](int index);
_softwareengineering.2410
I am referring to explaining to the non-programmer what programming is. I made sure to search for similar questions before creating this one, but the few ones I did find seemed to dodge the question, and I specifically would like to see some metaphors or analogies. I personally find it easier to explain something technical to someone through the use of metaphors or analogies.The reason I'm interested in this is because many people encounter the work of a programmer on a daily basis, but if you ask the average person what a programmer is or does, they don't really know. This leads to certain situations of misunderstanding (ex. [...] but I thought you were good with computers!)I really would like to find the best one out there. I would like to be able to easily explain to someone what my career choice is about. Of course, at least the general idea.I personally don't have a solid one, but I have long thought about it and I have usually gravitated towards the 'language' metaphor, where we happen to know a language that computers understand, and therefore we are able to tell computers what to do, or teach them, to solve our problems.For example:Imagine that in an alternate reality, humanoid robots with artificial intelligence exist, and some people are able to communicate with them through a common language, which is a variation of English. These people who can communicate with the robots are able to teach them how to solve certain problems or do certain tasks, like doing our chores.Well, although robots like that don't exist yet, programmers of our time are like those people, but instead of communicating with the robots, they communicate with computers. 
Programmers teach the computers how to perform certain tasks or solve certain problems by means of software which they create by using this common language.Programmers and this common language are what give us things like email, websites, video games, word processors, smart phones (to put it simply), and many other things which we use on a daily basis.I don't mean to put programming on the throne or anything, it's just the best metaphor I could come up with.I'm sure someone will find some issue with this one, it's probably a bit contrived, but then again that's why I'm asking this question.
What's a good Programming Metaphor?
programming languages
null
_unix.197990
Is there a Linux version of http://www.linuxliveusb.com/ for example?A GUI way to creating a bootable USB stick.
Bootable USB creator for Linux
linux
null
_softwareengineering.352896
I see and work with a lot of software, written by a fairly large group of people. Lots of times, I see integer type declarations that are wrong. Two examples I see most often: creating a regular signed integer when there can be no negative numbers, and declaring the size of the integer as a full 32-bit word when much smaller would do the trick. I wonder if the second has to do with compiler word alignment to the nearest 32 bits, but I'm not sure if this is true in most cases.

When you create a number, do you usually create it with the size in mind, or just create whatever is the default int?

edit - Voted to reopen, as I don't think the answers adequately deal with languages that aren't C/C++, and the duplicates are all C/C++-based. They fail to address strongly typed languages such as Ada, where there cannot be bugs due to mismatched types... it will either not compile, or, if it can't be caught at compile time, will throw an exception. I purposely left out naming C/C++ specifically, because other languages treat different integer types much differently, even though most of the answers seem to be based around how C/C++ compilers act.
Why are so many of the numbers I see signed when they shouldn't be?
programming practices
Do you see the same thing? Yes, the overwhelming majority of declared whole numbers are int.Why?Native ints are the size your processor does math with*. Making them smaller doesn't gain you any performance (in the general case). Making them larger means they maybe (depending on your processor) can't be worked on atomically, leading to potential concurrency bugs. 2 billion and change is big enough to ignore overflow issues for most scenarios. Smaller types mean more work to address them, and lots more work if you guess wrong and you need to refactor to a bigger type.It's a pain to deal with conversion when you've got all kinds of numeric types. Libraries use ints. Clients use ints. Servers use ints. Interoperability becomes more challenging, because serialization often assumes ints - if your contracts are mismatched, suddenly there are subtle bugs that crop up when they serialize an int and you deserialize a uint.In short, there's not a lot to gain, and some non-trivial downsides. And frankly, I'd rather spend my time thinking about the real problems when I'm coding - not what type of number to use.*- these days, most personal computers are 64 bit capable, but mobile devices are dicier.
_datascience.17168
I have a dataset of Key Performance Indicator (KPI) and for each KPI I have a current level of achivement and 2 targets : Target1 and Target2.How can I automatically generate one graphic for each achievement as in the file attached here.As I have many KPIs I would like to generate my graphics in a batch process
How to generate bulk graphics using R
r;dataset;visualization;data;excel
There are multiple ways you can achieve this. I guess the easiest would be to create a function that saves a bar chart for your KPI to a file:

save_KPI_plot <- function(fn, kpi_data) {
  png(paste0(fn, ".png"))
  # Plot code
  dev.off()
}

You can call the function either as follows,

save_KPI_plot("kpi_1", kpi_1_data)

or store your file names in one list and your data in another and loop over both:

kpi_fns <- list("filename_1", "filename_2")
kpi_data <- list(kpi_1_data, kpi_2_data)
for (i in seq_along(kpi_fns)) {
  save_KPI_plot(kpi_fns[[i]], kpi_data[[i]])
}

If you prefer to have a different image format, you can change png() to bmp() or jpeg().
_unix.78443
How do you set a shorter timeout value for read/write I/O errors on MacOSX?We tend to get a few semicrashed disks and want to rsync the contents to a secure location, but when the file subsystem hits a block with an error it tends to freeze up all the processes pertaining to that file system. We use MacSO X so that we can read HFS+.I recon this is something you have to change on kernel level or even firmware (so asking at Ask Different is wasted).The alternative is to have an rsync exclude file and each time I hit a bad block I add the filename of the failed file to the excludes, pull the plug (literally) on the broken disk to provoke a flush and reconnect, but that is more than likely to destroy the disk even further.
Shorter timeout on I/O errors MacOS X
osx;io;timeout;darwin
null
_webmaster.34801
I would like to know if there is a way to change default px unit after you hover an element in dev tools, into em, percent, in, pt, pc. Most important for me would be possibility to see em values of an element. Thank you in advance for you answers, I am aware of calculators, or doing that myself. Having such a feature would speed up my work.
Google Chrome developer tools metric units
plugin;google chrome;development;measure
null
_webmaster.77282
I have some pages on my website that are intended for other users to embed those pages on their own webpages using iframes. My iframed pages have a Google Analytics tracking code and each time they are embedded on someone elses page, it counts as a normal page view, per Google's recommendation.However, in some cases on my own website I need to add the same iframed pages, mostly as an example for other users so they know how to properly add the iframes to their own websites.So this results in an inflated page view count in GA because my webpage gets a page view and then my iframe I add also gets a page view, resulting in 2 page views for a single page.How can I avoid this problem?
Avoid inflated pageviews when iframing your own webpage?
google analytics;iframe;page views
null
_unix.339379
I've got a raspberry pi 3 at home running raspbian, and I would like to setup up an openvpn router, such that I can connect to my provider windscribe by choosing my raspberry pi as the gateway. It is connected by ethernet to my main router, which I do not want to run openvpn on, as it would affect all machines on my network, which would affect gaming. Setting up openvpn or even running it on boot isn't my problem, it's setting up routing that's been affecting me, as iptables-persistent will not install as netfilter-persistent isn't configured yet, and I do not understand how to fix it.
Setting up openvpn router on raspberry pi
routing;raspbian;openvpn
null
_unix.348042
My raw log file is similar to a production log; I have tweaked this:

Block f1
PCO Blockf1
tray:school SAM :XP X/Y DUPL KEY Z/Z
Block f2
PCO Blockf2
tray:school SAM :XP D/D DUPL KEY D/D
Block f3
PCO Blockf3
tray:school SAM :AP X/Y DUPL KEY Z/Z
-----cont.. more than 800 records

The expected result with the applied filters is as follows:

condition1: If "SAM :XP" is found with Z/Z above X/Y (FYI... Z/Z above row contains X/Y) then print like this:

Block f1
PCO Blockf1
tray:school SAM :XP X/Y DUPL KEY Z/Z

condition2: If "SAM :XP" is found with D/D above D/D (FYI... D/D above row contains D/D) then print:

Block f2
PCO Blockf2
tray:school SAM :XP D/D DUPL KEY D/D

Like these, it has to traverse more than 800 records and print the output to junk.txt.

NOTE: Rows may decrease or increase. Here "Block" is treated as the start and end; in-between there is "PCO BlockXX", which the script should not consider - ignore that block.

Thanks. What I tried so far:

awk 'BEGIN{RS="Block\n"; ORS=RS} $0=="" || /KEY:ZZ/ && /XY/ {print}' raw.txt >> junk.txt

I am using HP-UX.
Print entire Block if multiple strings are found inside Start ---End
text processing;awk;hp ux
Almost a standard task for sed:

sed '
  /^Block/! D
  :1
  N
  $!{
    /\n\s*KEY/! b1
  }
  \%SAM.*D/D.*D/D\|SAM.*X/Y.*Z/Z%! d
' log

produces

Block f1
PCO Blockf1
tray:school SAM :XP X/Y DUPL KEY Z/Z
Block f2
PCO Blockf2
tray:school SAM :XP D/D DUPL KEY D/D
Block f3
PCO Blockf3
tray:school SAM :AP X/Y DUPL KEY Z/Z
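Since the question asked for awk and mentions HP-UX, here is an alternative sketch in plain, old-style awk (my own addition, not part of the original answer): accumulate each block starting at its "Block" header and print the accumulated block when its SAM line satisfies either condition.

```shell
# Sample data (same shape as the question's log):
cat > raw.txt <<'EOF'
Block f1
PCO Blockf1
tray:school SAM :XP X/Y DUPL KEY Z/Z
Block f2
PCO Blockf2
tray:school SAM :XP D/D DUPL KEY D/D
Block f3
PCO Blockf3
tray:school SAM :AP X/Y DUPL KEY Z/Z
EOF

awk '
/^Block/ { blk = $0; next }       # a header starts a new block ("PCO Blockf1" does not match ^Block)
         { blk = blk "\n" $0 }    # accumulate the block body
/SAM :XP.*X\/Y.*KEY Z\/Z/ || /SAM :XP.*D\/D.*KEY D\/D/ { print blk }
' raw.txt > junk.txt

cat junk.txt
```

The ^ anchor is what keeps the "PCO BlockXX" lines from being mistaken for block headers, and anchoring on "SAM :XP" drops the f3 block, whose line says "SAM :AP".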
_unix.60772
I installed Arch a couple of days ago. Just realized the date/time were off by a day and one hour.I changed it using timedatectl set-time. Then used hwclock --systohc to set the hardware clock. After that I was not able to enter some sites like Gmail because of https certificate errors. I tried changing the time back but it did not work.I rebooted and then had problems because the partitions had mounted on a different time so I used fsck /dev/sda on my partitions and I was able to boot up. Right now the clock is not a problem but I really need to check my mail. I had to use Facebook to log in to stackexchange cringe.Help?This is what Gmail's error page say:The server's security certificate is not yet valid! You attempted to reach gmail.com, but the server presented a certificate that is not yet valid. No information is available to indicate whether that certificate can be trusted. Chromium cannot reliably guarantee that you are communicating with gmail.com and not an attacker. Your computer's clock is currently set to Tuesday, January 10, 2012 12:14:47 PM. Does that look right? If not, you should correct your system's clock and then refresh this page.You cannot proceed because the website operator has requested heightened security for this domain.
I messed up my system clock in Arch Linux
linux;arch linux;date;time
I used the ntp solution in this article. Updated against a time server.I was getting an error at first. You have to stop ntp before using a time server. If it can't find a server you have to specify it, in my case I used: sudo ntpdate 0.us.pool.ntp.org. That did it.
_unix.17319
hd and od are both dump viewers of binary content. Can hd be used wherever od is and vice versa?
Can hd and od replace each other?
utilities;binary;od
hd is a synonym for hexdump -C on FreeBSD and on some Linux distributions. hexdump is from the BSD days; od is from the dawn of time. Only od is standardized by POSIX. The Single UNIX rationale discusses why od was chosen in preference to hd or xd.These commands do very similar things: display a textual representation of a binary file, using octal, decimal or hexadecimal notation. There's no fundamental difference between the two.They have many options to control the output format, and some formats can only be achieved with one or the other command. In particular, to see a glance of what's in a binary file, I like hd's output format, with a column on the right showing printable characters literally; od can't do that.$ od /bin/sh | head -n 2 # od default: octal, 2-byte words0000000 042577 043114 000402 000001 000000 000000 000000 0000000000020 000002 000076 000001 000000 170020 000101 000000 000000$ od -Ax -t x1 /bin/sh | head -n 2 # od showing bytes in hexadecimal000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00000010 02 00 3e 00 01 00 00 00 10 f0 41 00 00 00 00 00$ hd /bin/sh | head -n 2 # hd default output: nice00000000 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 |.ELF............|00000010 02 00 3e 00 01 00 00 00 10 f0 41 00 00 00 00 00 |..>.......A.....|
_cs.37562
It is common to define $P$-completeness with respect to logspace many-one reductions. I am looking for a complexity class $C$ such that if $C=P$ then all problems in $P$ are $P$-complete under many-one $C$-reductions. What is the weakest many-one reduction (computable in class $C$) for which the class of P-complete problems remains unchanged?Note that $C$ is contained in $P$.
Weakest reduction for P-completeness
complexity theory
null
_webmaster.67930
Cloudflare lets you serve your site over SSL without having to purchase and install a security certificate, a product they call Flexible SSL. (They act as a proxy and serve your site over SSL from their servers, while the connection from your server to theirs remains unencrypted.) They currently offer Flexible SSL for free. With Google's announcement that HTTPS is now a ranking signal, I'm considering switching several sites to Cloudflare, buying a Pro account, and turning on their Flexible SSL option, because it seems like the easiest way to serve several sites over HTTPS without having to purchase and manage multiple certificates. Is there any downside to Cloudflare's Flexible SSL? I'm comfortable using Cloudflare as a proxy; I'm more interested in two factors: The experience for end-users. (e.g. Will visitors see security warnings?) The level of security offered. (Enough for a simple blog, but not for an online shop because they'd pass credit card data from their server to yours unencrypted?)
Are there any disadvantages to Cloudflares Flexible SSL?
seo;security;https;cloudflare
Flexible SSL is NOT fully secure. CloudFlare's Flexible SSL provides encryption from the user to CloudFlare's servers, but not from their servers to the website server. This avoids the hassle of installing (and renewing) a certificate on your web server, but does mean traffic gets sent in plain text over the 2nd half of the journey. The benefits of this setup are: Easy to get started, no need to install certificates on your web server and deal with the periodic renewals. Provides protection from eavesdropping on insecure WiFi connections (internet cafes) and others on your local network or at the ISP level. Users will see a green padlock in their browser and should not receive any security warnings. The inherent problems are: Traffic from CloudFlare to your server is not encrypted, meaning wholesale ISPs, trunk providers, and the NSA can still read all requests in plain text. The traffic is subject to man-in-the-middle (MITM) attacks where another server can impersonate your server and receive its traffic (although this issue also applies to the Full SSL setting; you'll need Strict mode to avoid this). Because of the above, it provides a misleading and false sense of security to your web site visitors (but that's a rant not appropriate for this venue). Comparison of the SSL settings: Not encrypting traffic between a proxy and backend server is common when the traffic is sent over a private, secured network. But in this case, you are routing traffic over the public internet. CloudFlare recommends that you also install a certificate on your web server for true end-to-end encryption, and even provides free certificates via their dashboard for doing so (if you don't want to install a self-signed certificate). From the discussion on the CloudFlare Blog: Actually, we'll be providing a free certificate that's pinned to the domain that you can install on your server for end-to-end crypto. Whether Full or Flexible SSL is used, your users should not see a pop-up or other warnings.
_cstheory.36280
I am looking at the following solved exercise: I haven't really understood the part of the reduction where we construct, for each number $a_i$, a package with measurements $(\frac{4}{A}a_i, 5, 3)$. Why do we consider these measurements?
Could you explain to me the reduction?
computability;reductions;np complete
null
_unix.326106
I created a RAID1 withmdadm --create /dev/mdX --level=mirror --raid-devices=2 /dev/sdb /dev/sdcThen watched the first sync on /proc/mdstat. It says [UU]. So far so good.sd[bc] were supposed to have been shreded, but I did not check before, figuring that all contents were going to be overwritten anyway.I proceeded to create a volume group on that device, and then created an ext4 FS in a fresh logical volume.Wanting to mount via UUID, I dumped them all with blkid. Already visually the RAID1 array looked off.blkid (only relevant lines shown):/dev/mdX: UUID=... TYPE=LVM2_member/dev/sdb: UUID=... UUID_SUB=... LABEL=...:0 TYPE=linux_raid_member/dev/sdc1: PARTUUID=0xd25946fbI was expecting 2 linux_raid_members, and whats with /dev/sdc1? I again check:# cat /proc/mdstat (shortened)Personalities : [raid1] mdX : active raid1 sdb[0] sdc[1] 976631488 blocks super 1.2 [2/2] [UU] bitmap: 2/8 pages [8KB], 65536KB chunk# cat /proc/partitions (shortened)major minor #blocks name 8 32 976762584 sdc 8 33 976759808 sdc1 8 16 976762584 sdb 9 0 976631488 md0# fdisk -l /dev/sd[bc]Disk /dev/sdb: (empty, as expected, both disk geoms identical, also expected)Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectorsUnits: sectors of 1 * 512 = 512 bytesSector size (logical/physical): 512 bytes / 4096 bytesI/O size (minimum/optimal): 4096 bytes / 4096 bytesDisklabel type: dosDisk identifier: 0xd25946fbDevice Boot Start End Sectors Size Id Type/dev/sdc1 2048 1953521663 1953519616 931.5G 7 HPFS/NTFS/exFATAgain sdc1.So it looks like sdc was never shredded. But shouldn't all previous meta-data/partition info be overwritten by mdadm --create? Thinking it may be cached info, I run partprobe. No change. I try reboot, no change. 
So, it looks like there still is a partition table on the drive. I have a few ideas, and I decide to post this to SE. So, while writing this post, I wanted to post a more precise blkid command, so I executed blkid /dev/sd[bc]{,1} /dev/mdX, and pasted it into this post:/dev/sdb: UUID=... UUID_SUB=... LABEL=...:0 TYPE=linux_raid_member/dev/sdc: UUID=... UUID_SUB=... LABEL=...:0 TYPE=linux_raid_member/dev/sdc1: PARTUUID=d25946fb-01/dev/mdX: UUID=... TYPE=LVM2_member In the preview for this post I saw that it was too regular, and, lo and behold, spotted the second RAID member! Doubting my sanity, I executed blkid without parameters again. sdc, the second RAID member, is not shown. At this point my problem seems to boil down to: How do I get rid of the partition table (safely), and will I then get my second raid-member in blkid w/o parameters? What other problems could arise if I just leave this be? At this point it looks like my RAID1 is operational, but is it? How would I best test that? The ideas I had built up up to and including now are: Bulldoze online: dd if=/dev/zero bs=512 count=1 of=/dev/sdc and run partprobe and blkid afterwards. But won't that trip up mdadm or anything else somehow? Fail the member, disconnect (logically), bulldoze the first few MiBs off-line, reconnect (logically), re-synchronize. I'd rather not. Find out through SE about the standard way of dealing with leftover meta-data only found after array creation. Without U.SE I would probably have tried 1) and then 2), being fairly sure that 2) would work, but be the least elegant and most lengthy way. The data on that md is not vital, and in the absence of answers I will try 1) then 2). I will post results. But I'll still be interested in knowing why sdc does not get shown as a raid-member with blkid, while it does get shown with blkid /dev/sd[bc], whereas sdb is shown in both cases.
RAID1 `mdX` looks O.K. on `/proc/mdstat`, but `blkid` and `fdisk -l` of the members report whacky/faulty values
mdadm;software raid;raid1;partition table;libblkid
null
_scicomp.7633
I have a set of users who have won a game: ('jim', 12), ('james', 54), ('john', 76), ('dave', 22), ('garry', 34), ('stuart', 16). I want to award them a share of points based on their position in the game on a sliding scale. The 'global pot' is $100 and I would like the winner to get the most, with other users getting less based on a sliding scale relative to their position. Would anyone know of a 'scoring' algorithm to do this...? Similar to ones used in poker, but with a fixed pot.
algorithm to assign points to winning users
algorithms
You could easily use their fraction of the total points as the fraction of the pot to win. I.e., divide each score by the total of all the scores then multiply that by $100.
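To make this concrete, here is a small Python sketch (my own illustration, using the scores from the question) that splits a $100 pot proportionally to each player's share of the total points:

```python
def split_pot(scores, pot=100.0):
    """Give each player the fraction of the pot equal to their share of total points."""
    total = sum(points for _, points in scores)
    return {name: pot * points / total for name, points in scores}

scores = [('jim', 12), ('james', 54), ('john', 76),
          ('dave', 22), ('garry', 34), ('stuart', 16)]
shares = split_pot(scores)
for name, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${share:.2f}")
```

The payouts always sum to the pot, and the highest scorer (here john, with 76 of 214 total points) automatically gets the largest share.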
_unix.282998
I have Raspberry Pi 2 which runs on Raspbian OS and External HDD is connected and mounted to it. It is connected to local server. I have a laptop and I want to boot Ubuntu from External HDD which is connected to Raspberry Pi. Is this possible. If yes, then provide me resources and links.
How can I boot ubuntu from external Hard disk drive which is connected to Raspberry Pi
networking;boot;raspberry pi
null
_unix.329832
I'm setting up rsync to transfer files from ServerA to ServerB, and need to preserve timestamps and permissions. The key here is the files are owned by a different account than the one performing the file transfer. rsync transfers files using the example below: rsync -a /colorschemes/ [email protected]:/colorschemes/ --deleteThe -a flag yields the following types of errors: rsync: failed to set times on /colorschemes/946/ex: Operation not permitted (1)rsync: failed to set permissions on /colorschemes/946/ex/blue.pdf: Operation not permitted (1)On the remote system, the acoder account has a similar error when attempting to manually set permissions on a file: [acoder@bu ~]$ chown apache:codingteam /colorschemes/946chown: changing ownership of /colorschemes/946: Operation not permittedThis works OK, though: [acoder@bu ~]$ sudo chown apache:codingteam /colorschemes/946Is there a way to make the remote rsync use sudo?
forcing sudo on remote rsync server
permissions;sudo;rsync
To use sudo with rsync on the remote machine, you can call it with --rsync-path=sudo rsync, but be aware of the requiretty requirement; you can get around it by removing Defaults requiretty from the sudoers file. If you want to change the permissions of anything you don't own, you have to use sudo unless you are root. Alternatively, you could set the setuid bit on chmod and chown so that anyone can run them as root, but that would be horrible.
_webmaster.100552
It took me a while but I finally found out how to use filters to block spam from my analytics. I currently have the following filter pattern applied to 'campaign source': semalt\.com|buttons-for-website\.com|rank-checker\.online|monetizationking\.net|site-auditor\.online|topbestlisted\.com|site-speed-check\.site|site-speed-checker\.site|scanner-elena\.top|scanner-mary\.top|scanner-irvin\.top|scanner-jack\.top I've just noticed a couple of new spam bots that have recently shown up under my Referrals section, but now it won't let me add any more to the filter pattern due to the character limit. What is the best solution to get around it? Do I just have to set up an additional filter?
Google Analytics character limit on Filter Pattern field
google analytics;analytics;spam;spam prevention;google analytics spam
null
_softwareengineering.116352
At the company where I work, we support two versions of the software we develop. One version is available for customers, and one version the developers are developing new functionality in. The version available for customers is also changed by developers, to fix the bugs our customers have found. So, for example, we have a 4.1 version available for customers, and we are developing 4.2. As soon as we release 4.2, 4.1 gets closed, and we start developing on 4.3. Currently we have two trunks, one for each version that is open for development. Every time a bug is fixed in the released version, we have to merge it into the new version too. This is extra work. In addition, we would like to work in advance, and have a version already finished 'on the shelf', and already start on a new version. Which would mean if we fix a bug in the released version, we would have to merge it into three trunks! Is there a better way of structuring this, and possibly eliminating the duplicate merges? Are we doing something completely wrong? Thanks in advance.
How to manage two major versions using SVN?
version control;svn
The usual terminology here would be that you have one trunk (the code leading to 4.3) and two branches (4.1 and 4.2), even though the branch for 4.1 will be closed according to your description when 4.2 is released (at the same time that the new 4.2 branch is created).You can avoid the three-end situation when you do your advance work in a feature branch that is not intended to be bug free. That is, you'd have three open ends:trunk (will be 4.2 some day)branches/4.1branches/conquer-the-world (a feature planned for 4.3)and only merge bugfixes from 4.1 into 4.2. When conquer-the-world is finished (some point after the release of 4.2, I presume), merge it into trunk.
_unix.211727
I would like to know because I am going to use a PowerPC Mac G5 for making my distribution. I also need to know whether my remaster will keep the powerpc architecture, and how to change it to 32- or 64-bit if possible.
Will a remastered debian for powerpc still be in the powerpc architecture?
debian;powerpc
null
_unix.92301
I need to deploy an ASP.NET MVC4 (MVC3 at least) application on Centos 6 server.I've installed Mono 3.2.1, XSP4, mod_mono (for use with apache web server) and succesfully ran the test application that goes with mono. I used a config tool to create a config for app directory and deployed an empty ASP.NET WebPages project created in VS2012 on .Net 2.0 - it ran ok. But I need to run an .net 4.5 or at least 4.0 application, so I've set the MonoServerPath to mod-mono-server4 instead of mod-mono-server2 in the config, but now i'm getting a Service Temporarily Unavailable error while trying to access the asp.net project directory (even empty).What should I check for?Update: I checked the apache log and here what it shows:mod-mono-server4Exception caught during reading the configuration file:System.MissingMethodException: Method not found: 'System.Configuration.IConfigurationSectionHandler.Create'. at System.Configuration.ClientConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection (System.String configKey) [0x00000] in <filename unknown>:0 at System.Configuration.ConfigurationManager.GetSection (System.String sectionName) [0x00000] in <filename unknown>:0 at System.Configuration.ConfigurationManager.get_AppSettings () [0x00000] in <filename unknown>:0 at Mono.WebServer.Apache.Server.get_AppSettings () [0x00001] in /usr/src/xsp-2.10.2/src/Mono.WebServer.Apache/main.cs:208 at Mono.WebServer.Apache.Server+ApplicationSettings..ctor () [0x0002a] in /usr/src/xsp-2.10.2/src/Mono.WebServer.Apache/main.cs:63 mod-mono-server4Listening on: /tmp/mod_mono_server_UnrealRoot directory: /var/www/html/UnrealError: An exception was thrown by the type initializer for System.Net.Sockets.Socketmod-mono-server4Exception caught during reading the configuration file:System.MissingMethodException: Method not found: 'System.Configuration.IConfigurationSectionHandler.Create'. 
at System.Configuration.ClientConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection (System.String configKey) [0x00000] in <filename unknown>:0 at System.Configuration.ConfigurationManager.GetSection (System.String sectionName) [0x00000] in <filename unknown>:0 at System.Configuration.ConfigurationManager.get_AppSettings () [0x00000] in <filename unknown>:0 at Mono.WebServer.Apache.Server.get_AppSettings () [0x00001] in /usr/src/xsp-2.10.2/src/Mono.WebServer.Apache/main.cs:208 at Mono.WebServer.Apache.Server+ApplicationSettings..ctor () [0x0002a] in /usr/src/xsp-2.10.2/src/Mono.WebServer.Apache/main.cs:63 mod-mono-server4Listening on: /tmp/mod_mono_server_UnrealRoot directory: /var/www/html/UnrealError: An exception was thrown by the type initializer for System.Net.Sockets.Socketmod-mono-server4Exception caught during reading the configuration file:System.MissingMethodException: Method not found: 'System.Configuration.IConfigurationSectionHandler.Create'. at System.Configuration.ClientConfigurationSystem.System.Configuration.Internal.IInternalConfigSystem.GetSection (System.String configKey) [0x00000] in <filename unknown>:0 at System.Configuration.ConfigurationManager.GetSection (System.String sectionName) [0x00000] in <filename unknown>:0 at System.Configuration.ConfigurationManager.get_AppSettings () [0x00000] in <filename unknown>:0 at Mono.WebServer.Apache.Server.get_AppSettings () [0x00001] in /usr/src/xsp-2.10.2/src/Mono.WebServer.Apache/main.cs:208 at Mono.WebServer.Apache.Server+ApplicationSettings..ctor () [0x0002a] in /usr/src/xsp-2.10.2/src/Mono.WebServer.Apache/main.cs:63 mod-mono-server4Listening on: /tmp/mod_mono_server_UnrealRoot directory: /var/www/html/UnrealError: An exception was thrown by the type initializer for System.Net.Sockets.Socket[Wed Sep 25 08:45:13 2013] [error] Failed to connect to mod-mono-server after several attempts to spawn the process.Update 2Well, after some googling i've found the solution..To fix 
it you have to copy mod-mono-server4.exe from /opt/mono/lib/mono/4.0 (or wherever you've installed it) to /opt/mono/lib/mono/4.5 and then edit mod-mono-server4 in /opt/mono/bin from exec /opt/mono/bin/mono $MONO_OPTIONS /opt/mono/lib/mono/4.0/mod-mono-server4.exe $@ to exec /opt/mono/bin/mono $MONO_OPTIONS /opt/mono/lib/mono/4.5/mod-mono-server4.exe $@
mod-mono-server 4 is not working, while 2 does
linux;webserver;mono;.net;asp.net
null
_codereview.44115
I thought I would try and write a solution to the Wolf, Goat and Cabbage problem in Java 8 to try and get to grips with lambdas.I am looking for any feedback you might provide. The feedback I am looking for is mainly on code structure and where I could make more, or more simple, use of new Java 8 features.The basic idea of the code is to try and encapsulate the behaviour of the elements of the problem in an OO fashion and process them using lambdas.I started with an enum Member to encapsulate the players of the game. It should be self explanatory.enum Member { FARMER, WOLF, CABBAGE { @Override public boolean isSafe(final Set<Member> others) { return others.contains(FARMER) || !others.contains(GOAT); } }, GOAT { @Override public boolean isSafe(final Set<Member> others) { return others.contains(FARMER) || !others.contains(WOLF); } }; public boolean isSafe(final Set<Member> others) { return true; }}Next is the class Bank, this encapsulates a river bank:import static com.google.common.base.Preconditions.checkState;public final class Bank { public static Bank all() { return new Bank(EnumSet.allOf(Member.class)); } public static Bank none() { return new Bank(ImmutableSet.of()); } private final ImmutableSet<Member> members; public Bank(final Set<Member> members) { this.members = ImmutableSet.copyOf(members); } public Bank accept(final Member member) { checkState(!members.contains(member) && !members.contains(Member.FARMER)); final Set<Member> ms = Sets.newHashSet(members); ms.add(member); ms.add(Member.FARMER); return new Bank(ms); } public Bank evict(final Member member) { checkState(members.contains(member) && members.contains(Member.FARMER)); final Set<Member> ms = Sets.newHashSet(members); ms.remove(member); ms.remove(Member.FARMER); return new Bank(ms); } public boolean farmerIsHere() { return members.contains(Member.FARMER); } public boolean hasAllMembers() { return equals(all()); } public boolean isEmpty() { return members.isEmpty(); } public boolean isFeasible() {
return members.stream().allMatch((m) -> m.isSafe(members)); } public Stream<Member> stream() { return members.stream(); } @Override public int hashCode() { int hash = 7; hash = 97 * hash + Objects.hashCode(this.members); return hash; } @Override public boolean equals(Object obj) { if (obj == null) { return false; } if (getClass() != obj.getClass()) { return false; } final Bank other = (Bank) obj; return Objects.equals(this.members, other.members); }}This class allows the transferal of items from bank to bank according to the rules of the puzzle.The next class encapsulates the state of play at any given time:final class State { private final Bank leftBank; private final Bank rightBank; public State(final Bank leftBank, final Bank rightBank) { this.leftBank = leftBank; this.rightBank = rightBank; } public Bank leftBank() { return leftBank; } public Bank rightBank() { return rightBank; } public boolean isInitialState() { return leftBank.hasAllMembers() && rightBank.isEmpty(); } public boolean isSolution() { return rightBank.hasAllMembers() && leftBank.isEmpty(); } public boolean isFeasible() { return leftBank.isFeasible() && rightBank.isFeasible(); } public State moveToRight(final Member member) { return new State(leftBank.evict(member), rightBank.accept(member)); } public State moveToLeft(final Member member) { return new State(leftBank.accept(member), rightBank.evict(member)); } @Override public int hashCode() { int hash = 7; hash = 97 * hash + Objects.hashCode(this.leftBank); hash = 97 * hash + Objects.hashCode(this.rightBank); return hash; } @Override public boolean equals(Object obj) { if (obj == null) { return false; } if (getClass() != obj.getClass()) { return false; } final State other = (State) obj; if (!Objects.equals(this.leftBank, other.leftBank)) { return false; } if (!Objects.equals(this.rightBank, other.rightBank)) { return false; } return true; }}This has various methods for determining whether that state is a solution etc.Next I have the interface 
Action and class ActionImpl, this stores the graph of the solution - so that the path to the solution can be determined:public interface Action<T> { Action<T> previous(); T data(); Collection<Action<T>> children(); void children(Collection<Action<T>> children);}public class ActionImpl<T> implements Action<T> { private final Action<T> previous; private final T data; private Collection<Action<T>> children; public ActionImpl(final Action<T> previous, final T data) { this.previous = previous; this.data = data; } @Override public Action<T> previous() { return previous; } @Override public T data() { return data; } @Override public Collection<Action<T>> children() { return children; } @Override public void children(Collection<Action<T>> children) { this.children = children; } @Override public int hashCode() { int hash = 5; hash = 43 * hash + Objects.hashCode(this.data); return hash; } @Override public boolean equals(Object obj) { if (obj == null) { return false; } if (getClass() != obj.getClass()) { return false; } final ActionImpl<?> other = (ActionImpl<?>) obj; return Objects.equals(this.data, other.data); }}Now for the meat of the puzzle, this is the class that solves the puzzle:public class App { public static void main(final String[] args) throws Exception { final State initalState = new State(Bank.all(), Bank.none()); final Action<State> finalState = calculateGraph(new ActionImpl<>(null, initalState)); final List<State> solution = ImmutableList.copyOf(getSoltutionPath(finalState)); final ListIterator<State> solIter = solution.listIterator(solution.size()); while (solIter.hasPrevious()) { System.out.println(solIter.previous()); } } private static Action<State> calculateGraph(final Action<State> parent) { final Collection<Action<State>> states = calculateChildren(parent); parent.children(states); return states.stream().filter((n) -> n.data().isSolution()).findFirst().orElse(parent); } private static Collection<Action<State>> calculateChildren(final Action<State> 
parent) { final State s = parent.data(); if (s.leftBank().farmerIsHere()) { return process(parent, calculateMoves(s.leftBank(), (m) -> s.moveToRight(m))); } if (s.rightBank().farmerIsHere()) { return process(parent, calculateMoves(s.rightBank(), (m) -> s.moveToLeft(m))); } throw new IllegalStateException(We seem to have lost the farmer.); } private static Collection<Action<State>> process(final Action<State> parent, final Collection<State> children) { final Set<State> path = getSoltutionPath(parent); return children.stream(). filter((s) -> !path.contains(s)). map((s) -> calculateGraph(new ActionImpl<>(parent, s))). collect(Collectors.toSet()); } private static Set<State> calculateMoves(final Bank bank, final Function<Member, State> mover) { return bank.stream().map(mover).filter(State::isFeasible).collect(Collectors.toSet()); } private static Set<State> getSoltutionPath(Action<State> leaf) { final ImmutableSet.Builder<State> lb = ImmutableSet.builder(); while (leaf != null) { lb.add(leaf.data()); leaf = leaf.previous(); } return lb.build(); }}The idea is to recursively walk the graph of feasible moves and bubble the target state up through the recursion. 
The solution can then be determined by walking back up the parent nodes in the solution graph.For completeness the output of running the code is:State(leftBank=Bank(members=[FARMER, WOLF, CABBAGE, GOAT]), rightBank=Bank(members=[]))State(leftBank=Bank(members=[CABBAGE, WOLF]), rightBank=Bank(members=[GOAT, FARMER]))State(leftBank=Bank(members=[CABBAGE, WOLF, FARMER]), rightBank=Bank(members=[GOAT]))State(leftBank=Bank(members=[WOLF]), rightBank=Bank(members=[GOAT, CABBAGE, FARMER]))State(leftBank=Bank(members=[GOAT, WOLF, FARMER]), rightBank=Bank(members=[CABBAGE]))State(leftBank=Bank(members=[GOAT]), rightBank=Bank(members=[CABBAGE, WOLF, FARMER]))State(leftBank=Bank(members=[GOAT, FARMER]), rightBank=Bank(members=[CABBAGE, WOLF]))State(leftBank=Bank(members=[]), rightBank=Bank(members=[GOAT, CABBAGE, WOLF, FARMER]))
Wolves, Goats and Cabbages in Java
java
null
_webapps.41885
I was quite early with registering my Gmail account and I got a nice and clean [email protected] address. Apparently all mails end up on my account when people make simple mistakes in typing the intended e-mail address. So I receive several legitimate mails each month from people that did not intend to mail me; people that I don't know and have no desire to know.If it were snail mail, I'd write Undeliverable or Wrong address over the envelope and put it back in the outgoing mail box. Now I'm looking for a way to do the same for e-mail. How can I show the sender of an e-mail that the e-mail address is wrong without revealing my identity?Any solution involving a Gmail feature, third-party application, or manipulation of the SMTP protocol is acceptable (I can program it if it doesn't exist already), as long as it works from Windows.I used to reply along the lines of: You sent this e-mail to the wrong address. You sent it to [email protected] but you probably meant [email protected]. Kind regards, Marilyn Baker, revealing that my e-mail address is valid and who I was, and then they'd reply: Hey you have the same surname. You guys must surely be distantly related. Bla bla bla... John Baker is my husband, let's meet up... Bla bla. Anna Baker-Field. All names and e-mail addresses are fake.
Reply e-mail with 'Undelivered mail' message?
email;gmail
null
_cs.10572
How is the perfect shuffle a better interconnection scheme for parallel processing? For example, if we consider the problem of sum reduction, I want to understand how this scheme is useful when implementing the reduction in parallel, for example on a GPU.
Perfect shuffle in parallel processing
computer architecture;parallel computing
The perfect shuffle alone has never been used as an interconnection network; it always included exchange links, in order to allow for a worst-case $O(\lg n)$ routing algorithm. Algorithms for reduction, broadcast, parallel prefix, transpose etc are given in Yosi Ben-Asher , David Egozi , Assaf Schuster, SIMD Algorithms for 2-D Arrays in Shuffle Networks.Stone was the first researcher to provide algorithms for the perfect shuffle network, including algorithms for the FFT, matrix transpose etc.This network has been used in several parallel machines, but mostly in multi-stage shuffle-exchange networks such as the Omega network (which uses the perfect shuffle between stages). An example is the IBM SP3 machine.Note that these are message-passing algorithms, whilst you refer to a GPU, which is instead a shared-memory device.
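As a concrete illustration (mine, not from the referenced papers): on $n = 2^k$ elements the perfect shuffle interleaves the two halves of the sequence, and the element at index $i$ lands at the index given by a one-bit left rotation of $i$'s $k$-bit representation. This bit-rotation view is why $\lg n$ shuffle-exchange rounds suffice for operations such as reduction. A Python sketch:

```python
def perfect_shuffle(xs):
    """Interleave the two halves: [a0..a3, b0..b3] -> [a0, b0, a1, b1, ...]."""
    n = len(xs)
    assert n % 2 == 0
    half = n // 2
    out = []
    for a, b in zip(xs[:half], xs[half:]):
        out.extend([a, b])
    return out

def rotate_left(i, k):
    """One-bit left rotation of i within k bits (the shuffle's index map)."""
    return ((i << 1) | (i >> (k - 1))) & ((1 << k) - 1)

xs = list(range(8))
shuffled = perfect_shuffle(xs)
print(shuffled)  # [0, 4, 1, 5, 2, 6, 3, 7]

# The element originally at index i ends up at index rotate_left(i, 3):
assert all(shuffled[rotate_left(i, 3)] == xs[i] for i in range(8))
```

The final assertion checks the bit-rotation characterization against the direct interleaving for n = 8.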
_unix.280434
I would like to install PostgreSQL and PostGIS on my Ubuntu 14.04 virtual private server that is hosted in a remote datacenter.How can I enable remote access on them?
How can I install PostgreSQL and PostGIS on Ubuntu 14.04 and enable remote access over the internet?
remote;postgresql
null
_softwareengineering.314077
I am wondering about best practices here.MVC (Model - View - Controller) patterns involve separating components of your program that model the data, manipulate those models, and display those results to the user (usually through the UI) in some way.What about a function that takes the model data and inserts it into a database? For example I have an object called a GameBoard, and I also want the ability to insert the state of this board into the SQL database for storage / historical purposes. I have a class that holds all my query functions.But where would I call these functions from? Would this sort of functionality make the most sense to make it as a method of GameBoard? Or should it be part of the controller classes?For example, I've got a GameBoard class and an SQLDatasource/SQLHelper class (which I call the models). The SQL classes have methods that take care of the queries and such. In Android, there are also Activity classes where all the events take place (I call these the controllers). The view takes place via code that binds the Activity to some XML. That being said, I normally instantiate the GameBoards in the Activity classes, and right now I also call the query functions from these same classes that accept a GameBoard as an argument.
Using MVC style, where is the best place to put SQL functionality?
java;design patterns;object oriented;programming practices;mvc
A typical (but simplified) MVC architecture looks like this:Database <--> Logic Layer <--> Controller <--> ViewYour Logic Layer contains the functions that you would call to perform your game-related activities. The purpose of the Logic Layer is to translate your game-related functions into database operations.
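To sketch this layering concretely (my illustration, in Python rather than the asker's Java/Android, with hypothetical names): the controller never writes SQL itself; it hands the GameBoard model to a logic-layer repository, which owns all database access.

```python
import sqlite3

class GameBoard:
    """Model: just state, no persistence logic."""
    def __init__(self, state: str):
        self.state = state

class GameBoardRepository:
    """Logic layer: translates model objects into database operations."""
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS boards (state TEXT)")

    def save(self, board: GameBoard) -> None:
        self.conn.execute("INSERT INTO boards (state) VALUES (?)", (board.state,))
        self.conn.commit()

    def history(self) -> list:
        return [row[0] for row in self.conn.execute("SELECT state FROM boards")]

class GameController:
    """Controller: reacts to events and delegates persistence to the logic layer."""
    def __init__(self, repo: GameBoardRepository):
        self.repo = repo

    def on_move_made(self, board: GameBoard) -> None:
        # ... update the view here ...
        self.repo.save(board)  # persistence stays behind the logic layer

repo = GameBoardRepository(sqlite3.connect(":memory:"))
controller = GameController(repo)
controller.on_move_made(GameBoard("X.O|.X.|..O"))
print(repo.history())  # ['X.O|.X.|..O']
```

The point of the split is that the controller (an Activity, in Android terms) can be tested or changed without touching any SQL, and the repository can swap databases without touching the controller.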
_cs.77473
Suppose that $F_2$ denotes the field with $2$ elements. We are given $m$ vectors $\{x_1, \ldots, x_m\}$ in $F_2^d$ which are a basis for a subspace $W$. Suppose we have a vector $v \in F_2^d$, and we want to find a $w \in W$ that is closest to $v$ in the Hamming metric.What is the complexity of this problem, as a function of $md$?1) It's clear that there is an algorithm taking exponentially many steps. $W$ has $2^m$ elements, each of which takes $d$ bits to describe. So we just search through them.2) It's unclear to me if this problem is even in $NP$. I can verify in polynomial time whether one vector in $W$ is closer to $v$ than another vector in $W$, but I do not know how to verify if one is the closest (or minimizes the distance). I suspect that it is impossible to do. Can anyone outline or refer me to a proof that this problem is not in $NP$?3) If it is in $NP$, then is it $NP$-hard?I guess this is well known, and presumably already discussed on this webpage. I poked around for a bit and couldn't find it, however. A reference would be welcome.
What is the complexity of Hamming nearest neighbor to a subspace ...?
complexity theory;time complexity;coding theory
Your problem is known as the nearest codeword problem, and it is NP-hard to approximate. See for example lecture notes of Madhu Sudan. The way to make this problem an NP-problem is to ask whether the distance is at most a given distance. Regarding algorithms, I suggest taking a look at a paper of Alon, Panigrahy and Yekhanin, Deterministic Approximation Algorithms for the Nearest Codeword Problem.
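For completeness, the exponential search from part (1) of the question is easy to write down; this Python sketch (mine) enumerates all $2^m$ codewords of the span and keeps the one closest to $v$ in Hamming distance:

```python
from itertools import product

def nearest_codeword(basis, v):
    """Brute-force nearest codeword over F_2: try all 2^m linear combinations.

    basis: list of m vectors, each a tuple of d bits; v: tuple of d bits.
    Returns (closest codeword, Hamming distance); O(2^m * m * d) time.
    """
    d = len(v)
    best, best_dist = None, d + 1
    for coeffs in product((0, 1), repeat=len(basis)):
        w = [0] * d
        for c, x in zip(coeffs, basis):
            if c:
                w = [a ^ b for a, b in zip(w, x)]  # add basis vector mod 2
        dist = sum(a != b for a, b in zip(w, v))
        if dist < best_dist:
            best, best_dist = tuple(w), dist
    return best, best_dist

# Example: W = span{1110, 0101}; v = 1100 is closest to codeword 1110.
basis = [(1, 1, 1, 0), (0, 1, 0, 1)]
print(nearest_codeword(basis, (1, 1, 0, 0)))  # ((1, 1, 1, 0), 1)
```

The NP-hardness results mean no algorithm fundamentally better than this enumeration (up to the known approximation trade-offs) is expected.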
_softwareengineering.96521
I've read in multiple answers that switch/case avoids unnecessary comparisons, but I never learned this in college, and I'm a little stumped on how the program would figure out which case to jump to without doing a comparison. Obviously, in the case of int switchVar=3; switch (switchVar) { case 0: ... case 1: ... case 2: ... case 3: ... case n: ... }, this would be pretty easy, as it could just create an array of pointers that point to the beginning of each case's code block, and it would simply do something along the lines of instructionPointer = switchJumpTable[switchVar];. However, this breaks down if you were to do a switch/case on a string, e.g. char switchVar[]="North"; instructionPointer = jumpTable[switchVar]; where trying to access the "North" index of an array would cause an error (or if the compiler allowed this behind the scenes, I still don't see how it would avoid comparisons when converting the char array into an integer in order to access the array.) I can think of one way to get around unnecessary comparisons, but it wouldn't be terribly efficient, so I'm sorta curious as to how this is actually done, as I can't imagine that compilers are using the method that I have in mind.
How is switch/case handled as to avoid comparisons to the case values?
optimization;switch statement
The answer varies enormously by the individual compiler, but there are a few strategies that could be used.The usual answer is a jump table. The case variable is looked up in a table containing all of the allowed values, and the program jumps to the address the table specifies.Of course, that's a fine strategy on the flat address models used in older CPUs, but the cost of an indirect branch on the deep pipelines of modern CPUs is often an order of magnitude greater than a simple conditional branch (which is in turn more expensive than no branch). Indirect branching usually breaks the branch prediction logic on most CPUs that have it, and so the instructions after the branch cannot be prefetched until the lookup instruction has actually completed. A regular conditional branch can prefetch one side of the branch and have a decent chance of 'guessing right'. And so this optimization is rarely taken on compilers that target those CPUs, and instead the case statement is compiled as a tree of nested conditional branches.
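The jump-table idea — one indexed (or hashed) lookup instead of a chain of comparisons — can be sketched at a high level with a dispatch table. This Python dict models the lookup semantics, including the string case via hashing; it is an illustration of the idea, not what a C compiler actually emits.

```python
def go_north(): return "moving north"
def go_south(): return "moving south"
def go_east():  return "moving east"

# A dispatch table: one hashed lookup replaces a chain of comparisons.
dispatch = {"North": go_north, "South": go_south, "East": go_east}

def handle(direction):
    # .get() plays the role of a "default:" case
    action = dispatch.get(direction)
    return action() if action else "unknown direction"

print(handle("North"))  # → moving north
print(handle("West"))   # → unknown direction
```

For a dense integer case set, the table can literally be an array indexed by the case value, which is the classic jump table; for strings, hashing (or a comparison tree over the characters) stands in for the index computation.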
_codereview.92920
I picked up a programming game, TIS-100. Programming manual can be found on Steam as well, but I have described the relevant syntax in my question.Basically, you're dealing with some old machine that uses its own variant of assembly. It's a gamification of learning assembly, almost.The machine consists of nodes. Each node has pipes (UP LEFT DOWN RIGHT) that it can write and read from. When a pipe is read from, the value in the pipe disappears. It also has two registers, ACC and BAK. BAK can only be accessed with the SAV and SWP opcodes. Each node has its own code that it executes. This code is limited to 15 lines of 18 characters per node.I've already solved this level. The goal is counting sequences:- SEQUENCE COUNTER -> SEQUENCES ARE ZERO-TERMINATED> READ A SEQUENCE FROM IN> WRITE THE SUM TO OUT.S> WRITE THE LENGTH TO OUT.LShort syntax run down (based on commands I used):MOV <src> <dest> //moves from source to destination. Blocks if source is a pipe and doesn't have a value available. Blocks if dest is a pipe and already has a value.ADD value //adds to ACC<label>: //defines a label (for jumps)JMP <label> //jumps execution to labelJEZ <label> //jumps to label if ACC = 0. 
JumpifEqualsZero//there's also JumpifNotZero (JNZ), JumpifGreaterZero (JGZ), JumpifLesserZero (JLZ).SAV //MOV ACC BAKSWP //switches values of ACC and BAKThe game describes it like this:My program is a bit unwieldy to post in full, so I'll limit it to the three main nodes.This is the node responsible for counting the sequence length:S: MOV LEFT ACCJEZ ESWPADD 1SAVJMP SE: SWPMOV ACC DOWNMOV 0 ACCSAVThis is the node responsible for summing sequences:S: MOV UP ACCJEZ EMOV ACC LEFTSWPADD LEFTSAVJMP SE: SWPMOV ACC DOWNMOV 0 ACCAnd this is the node I abuse as temporary storage:MOV RIGHT ACCMOV ACC RIGHTWhat I don't like about my code is that it doesn't read very well.The sequence counter goes like so:STARTread value to ACCif ACC is 0, then GOTO ENDswap ACC and BAKadd 1 to ACCwrite ACC to BAKGOTO STARTENDswap ACC and BAKwrite ACC as outputset ACC to 0write ACC to BAKThere's duplication in here, where no matter if you are in the then or the (implicit) else case of the JEZ, you first swap ACC and BAK. Additionally, I'm abusing swap set write for altering BAK, but maybe that's the shortest way there is.For the sequence summer, I don't like how I'm abusing a separate node just for temporary storage.It works without errors.For reference, these are histograms showing the scores of people on levels, with mine highlighted via arrow:
Counting Sequence Length in TIS-100
assembly;tis 100
Unfortunately my hard drive containing my savegames crashed yesterday, so I cannot look up my solution, but I can say that the things you consider ugly are because of the limitations of that old computer system. I am abusing nodes as temporary storage all the time and my solution pretty much looked the same (if I remember correctly). As the histograms show you are not that bad (especially on cycles).We can optimize your solution in terms of # of instructions and maybe cycles as well, though. I am focussing on optimizing the existing algorithms, instead of creating a new program. I could not test my optimized solution, because I don't have Steam on this box, but they should work none-the-less:Your temporary storage node can be simplified to:MOV RIGHT RIGHTThis is a valid instruction and saves you one instruction.I only save my ACC just before it would get overridden. This removes some duplicate code (namely at least one SAV in your counter) and simplifies control flow. My rewritten algorithm is one instruction less than yours:L:SAV # Save counterMOV LEFT ACC # Next itemJEZ EZ # End of sequenceSWP # Restore counterADD 1 # IncrementJMP L # LoopEZ: # End of sequenceSWP # Restore counterMOV ACC DOWN # OutputSUB ACC # Clear ACC BAK is not needed in your summing node. You only need the value from your temporary storage node and the new value. Some reordering of instructions allows you to directly add the saved sum to the new value. I again saved the ACC in the very last moment:L:MOV ACC LEFT # Save sum MOV UP ACC # Next itemJEZ EZ # End of sequenceADD LEFT # Add sum JMP L # LoopEZ: # End of sequenceMOV LEFT DOWN # Output sum SUB ACC # Clear ACCGetting rid of BAK saved us 3 instructions on this one!Conclusion: Absolutely abuse the instruction set (e.g. my first bullet point) and possibilities of the game (storing values in the pipes) to achieve good scores in the different metrics. Don't try to optimize for everything, you have got three saves per level!
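As a cross-check of what the three nodes compute together, the level's input/output behaviour — the sum and the length of each zero-terminated sequence — can be modelled in a few lines of Python. This models the level's specification, not the TIS-100 instruction set.

```python
def sequence_counter(stream):
    """For each zero-terminated sequence in `stream`, emit (sum, length)."""
    out, total, length = [], 0, 0
    for value in stream:
        if value == 0:              # terminator, like the JEZ to the E label
            out.append((total, length))
            total = length = 0      # reset, like MOV 0 ACC / SAV
        else:
            total += value          # the summing node's ADD LEFT
            length += 1             # the counting node's ADD 1
    return out

print(sequence_counter([3, 5, 0, 7, 0, 0]))  # → [(8, 2), (7, 1), (0, 0)]
```

Comparing a node program against a reference model like this is a quick way to convince yourself a rewrite (such as the optimized versions above) still meets the spec.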
_vi.11883
I am confused about what is going on in this example and why spell checking is not working; any help is appreciated.While editing a markdown file with spell checking enabled the following happens:* This line is spellchecked - This line is also spellchecked - This line is not spellcheckedI also noticed that this doesn't work either:* This line is really long but spellchecked continuation of text after linewrap from the previous line, this line is not spellcheckedIndentation is 1 tab character per level.Google has turned up nothing, but since the - is not bold on the third bullet as the rest are, I assume this is some kind of markdown spec violation, though I am not sure which. Is there a way to make this work?
markdown spell checking with triple nested bullet point
spell checking;filetype markdown
null
_webmaster.34400
How can I automatically save a copy of a site each week, with an option to browse the saved copies? I need a kind of WebArchive, but on my local computer.
Saving website copy and browsing saved copies
webserver;backups
null
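No particular tool is implied by the question; one common low-tech approach is wget's mirroring mode driven by cron, with one dated directory per week so each saved copy stays browsable offline. This is a hedged sketch: paths and the URL are placeholders, and the wget invocation is shown commented out since it needs network access.

```shell
# Weekly snapshot: add to crontab (crontab -e), e.g. run Sundays at 03:00:
#   0 3 * * 0  "$HOME/bin/snapshot-site.sh"

# snapshot-site.sh -- one dated, browsable copy per run (URL is a placeholder)
DEST="${ARCHIVE_DIR:-$HOME/site-archive}/$(date +%Y-%m-%d)"
mkdir -p "$DEST"
# --mirror re-crawls the site, --convert-links makes the saved copy browsable
# offline, --page-requisites pulls CSS/images, --adjust-extension adds .html:
# wget --mirror --convert-links --page-requisites --adjust-extension \
#      --directory-prefix="$DEST" https://example.com/
echo "snapshot directory: $DEST"
```

Browsing old copies is then just opening the dated directory for that week in a browser.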
_webmaster.68869
I'm sure this is an easy thing to do but I can't seem to find the answer!I'm trying to redirect https://carddav.example.com/MYUSERNAME to https://carddav.example.com/remote.php/subdomain/addressbooks/MYUSERNAME/contacts where MYUSERNAME can be anything.The current RewriteRule I have attempted looks like:RewriteRule ^/(.*)$ https://carddav.example.com/remote.php/carddav/addressbooks/$1/contacts/ [R=301]
Redirect Everything After slash (/) to another directory
htaccess;301 redirect;mod rewrite;apache2
You can do this with a single RewriteRule. The trick here is to only check for valid username characters, not everything (ie. .* - I wouldn't have thought your usernames could literally be anything?). This would also be more efficient since not every request will match and be processed.For example, assuming your usernames can only consist of upper/lowercase letters and numbers then:RewriteRule ^([a-zA-Z0-9]+)$ /remote.php/carddav/addressbooks/$1/contacts [R=301]This is very similar to your initial attempt. Note that the RewriteRule pattern in per-directory .htaccess files does not begin with a slash, however, if this rule was used in your server config then it would!This also naturally avoids the rewrite loop since /remote.php/carddav... will not match as a valid username (specifically . and / would fail to match).You could also limit the length of the username, to say between 4 and 20 characters... ^([a-zA-Z0-9]{4,20})$.If you needed to match any character except a slash (a slash would surely break your destination URL?) then you could use a pattern like: ^([^/]+)$
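The character-class patterns above are easy to sanity-check outside Apache; a quick sketch using Python's re (the character-class and quantifier syntax is the same for these patterns).

```python
import re

# The length-limited pattern from the answer: 4-20 alphanumerics, anchored.
username = re.compile(r'^([a-zA-Z0-9]{4,20})$')

assert username.match("alice42")
assert not username.match("abc")             # too short
assert not username.match("a" * 21)          # too long
# '.' and '/' are rejected, which is what prevents the rewrite loop:
assert not username.match("remote.php/carddav")
print("all pattern checks passed")
```

Testing the pattern this way before deploying it in .htaccess avoids a round of trial-and-error against the live server.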
_webapps.100872
So I am hoping that one of you excellent minds will be able to assist me. I have about a dozen tables in a new form that are working perfectly. Tables may contain positive or negative adjustments to static numbers. I need a field to total all of the fields that contain negative adjustments, and another for the positive adjustments.I imagine some sort of if-then statement will do this... but I am not sure where to start. Something like this maybe... I dunno: =if ChangeAmount <= 0 then ADD. So grateful for any assistance you can give me.
Cognito Forms: If then calculations based on value
cognito forms
null
_cs.9875
Given an $n \times n$ matrix $\mathbf{A}$. Let the inverse matrix of $\mathbf{A}$ be $\mathbf{A}^{-1}$ (that is, $\mathbf{A}\mathbf{A}^{-1} = \mathbf{I}$). Assume that one element in $\mathbf{A}$ is changed (let's say $a _{ij}$ to $a' _{ij}$). The objective is to find $\mathbf{A}^{-1}$ after this change. Is there a method to find this objective that is more efficient than re-calculating the inverse matrix from scratch.
Computing inverse matrix when an element changes
algorithms;numerical analysis;online algorithms
The Sherman-Morrison formula could help:$$ (A + uv^T)^{-1} = A^{-1} - \frac{A^{-1} uv^T A^{-1}}{1 + v^T A^{-1} u}. $$Let $u = (a'_{ij}-a_{ij}) e_i$ and $v = e_j$, where $e_i$ is the standard basis column vector. You can check that if the updated matrix is $A'$ then$$ A^{\prime -1} = A^{-1} - \frac{(a'_{ij}-a_{ij})\,(A^{-1})_{\downarrow i}\,(A^{-1})_{j\rightarrow}}{1 + (a'_{ij}-a_{ij})(A^{-1})_{ji}},$$where $(A^{-1})_{\downarrow i}$ is the $i$-th column of $A^{-1}$ and $(A^{-1})_{j\rightarrow}$ is its $j$-th row. This is an $O(n^2)$ update instead of recomputing the inverse from scratch.
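A quick numerical sanity check of the rank-one update in pure Python, on a 2×2 example: after changing entry $(i,j)$ by $\delta$, the updated inverse uses column $i$ and row $j$ of $A^{-1}$, with $1 + \delta (A^{-1})_{ji}$ in the denominator.

```python
def inv2(M):
    # inverse of a 2x2 matrix by the adjugate formula
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def sm_update(Ainv, i, j, delta):
    """Sherman-Morrison update of A^{-1} after A[i][j] += delta."""
    n = len(Ainv)
    col_i = [Ainv[r][i] for r in range(n)]   # i-th column of A^{-1}
    row_j = Ainv[j]                          # j-th row of A^{-1}
    denom = 1 + delta * Ainv[j][i]
    return [[Ainv[r][c] - delta * col_i[r] * row_j[c] / denom
             for c in range(n)] for r in range(n)]

A = [[4.0, 7.0], [2.0, 6.0]]
Ainv = inv2(A)
updated = sm_update(Ainv, 0, 1, 2.0)          # change a_01 from 7 to 9
direct = inv2([[4.0, 9.0], [2.0, 6.0]])       # recompute from scratch
assert all(abs(updated[r][c] - direct[r][c]) < 1e-12
           for r in range(2) for c in range(2))
print("update matches direct inverse")
```

The same `sm_update` works for any $n$, given $A^{-1}$; only the toy `inv2` is 2×2-specific.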
_codereview.142355
It occurred to me that if SHA2 can be used to derive keys from passwords, then it might as well be good enough to generate random data that can be XORed with a plaintext to encrypt, and the other way around.These are my assumptions: (1) SHA-512 produces a random-looking output that is impossible to guess; (2) the result of SHA-512 can be fed back into it, appended to a 256-bit key, and produce an output with the same quality as given for a random input.If those assumptions hold, then it should provide privacy. It's much faster than AES-256-CBC.At first I thought there must be something I'm missing, but no one has pointed out an attack that can be carried out against this construction. So what I would like to know is how secure this algorithm is and specific ways to break it.Here's the code:mad.h#ifndef _MAD_H_#define _MAD_H_typedef struct { unsigned char state[64]; unsigned char key[32];} MadCtx;void mad_ctx_init(MadCtx* mad, unsigned char const* key, unsigned char const* iv);void mad_encrypt(MadCtx* mad, unsigned char const* in, unsigned int in_size, unsigned char* out); void mad_decrypt(MadCtx* mad, unsigned char const* in, unsigned int in_size, unsigned char* out);#endifmad.c#include "mad.h"#include <stdint.h>#include <assert.h>#include <string.h>#include <openssl/sha.h>// Privatestatic void _xor64(uint64_t* dest, uint64_t const* a, uint64_t* b){ for(int i = 0; i < 8; ++i) *dest++ = *a++ ^ *b++;}// Publicvoid mad_ctx_init(MadCtx* mad, unsigned char const* key, unsigned char const* iv){ memcpy(mad->state, iv, 64); memcpy(mad->key, key, 32);}void mad_encrypt(MadCtx* mad, unsigned char const* in, unsigned int in_size, unsigned char* out){ assert(0 == in_size % 64); int n = in_size >> 6; // in_size / 64 while(n){ uint64_t x[8]; SHA512((unsigned char const*)mad, 96, (unsigned char*)x); _xor64((uint64_t*)out, (uint64_t const*)in, x); memcpy(mad->state, out, 64); in += 64; out += 64; --n; }}void mad_decrypt(MadCtx* mad, unsigned char const* in, unsigned int in_size, unsigned char* out){ assert(0 == in_size % 64); int n = in_size >> 6; // in_size / 64 while(n){ uint64_t x[8]; SHA512((unsigned char const*)mad, 96, (unsigned char*)x); memcpy(mad->state, in, 64); _xor64((uint64_t*)out, (uint64_t const*)in, x); in += 64; out += 64; --n; }}and some test code#include "mad.h"#include <stdio.h>#include <stdlib.h>#include <string.h>#include <assert.h>void phex(void const* data, size_t size){ char const* table = "0123456789abcdef"; unsigned char const* in = data; while(size--){ int c; c = table[*in >> 4]; putchar(c); c = table[*in & 0xf]; putchar(c); ++in; }}void read_or_die(FILE* file, void* dest, size_t size){ if(size != fread(dest, 1, size, file)){ perror("fread()"); exit(EXIT_FAILURE); }}int main(int argc, char* argv[]){ FILE* urandom = fopen("/dev/urandom", "r"); unsigned char iv[64]; unsigned char key[32]; unsigned char plain[128]; unsigned char cipher[128]; unsigned char decrypted[128]; read_or_die(urandom, iv, 64); read_or_die(urandom, key, 32); memset(plain, 0xdd, 128); puts("plain text is:"); for(int i = 0; i != 128; i += 32){ phex(plain + i, 32); putchar('\n'); } putchar('\n'); MadCtx ctx; mad_ctx_init(&ctx, key, iv); mad_encrypt(&ctx, plain, 128, cipher); puts("cipher text is:"); for(int i = 0; i != 128; i += 32){ phex(cipher + i, 32); putchar('\n'); } putchar('\n'); assert(0 != memcmp(plain, cipher, 128)); mad_ctx_init(&ctx, key, iv); mad_decrypt(&ctx, cipher, 128, decrypted); puts("decrypted text is:"); for(int i = 0; i != 128; i += 32){ phex(decrypted + i, 32); putchar('\n'); } putchar('\n'); assert(0 == memcmp(plain, decrypted, 128));}
Using SHA-512 for encryption/decryption
algorithm;c;cryptography
The above encrypt/decrypt functions can be summarized as follows:void mad_encrypt(...){ while(n) { SHA512(mad, 96, x); XOR(out, in, x); memcpy(mad->state, out, 64); ... }}void mad_decrypt(...){ while(n) { SHA512(mad, 96, x); XOR(out, in, x); memcpy(mad->state, in, 64); ... }}This is basically the CFB mode. Note that the two functions are almost identical, except a small difference in memcpy. AES in CFB mode is done in much the same way, except of course it uses AES block cipher instead of SHA512(...) function above. Also AES uses blocks of 16 bytes, so AES has to run 4 rounds to catch up with a single SHA512 round. Overall, AES is faster. To compare the performance you can simply compare 4 rounds of AES to 1 round of SHA512. Watch out for compiler optimizations which may skew the result. Many tests have already been done for this, it shows AES is faster.AES also uses key expansion which makes it more secure. Key expansion is relatively slow but it's done only once per file/data. This may skew the performance test depending on how the test is done.typedef struct { unsigned char state[64]; unsigned char key[32];} MadCtx;For improvement, don't let the key linger on in MadCtx::key. 
For example, you can use SHA512 to combine the key with IV (or state as you call it) so it becomes hidden during the operation, then you can drop key out of the structure.Example:unsigned char mad_IV[64];void mad_init(const unsigned char *key, const unsigned char* iv){ unsigned char buf[96]; memcpy(buf, iv, 64); memcpy(buf + 64, key, 32); SHA512(buf, 96, mad_IV);}void mad_crypt(char const* in, int in_size, char* out, int encrypt){ assert(0 == in_size % 64); int n = in_size >> 6; // in_size / 64 while (n) { SHA512(mad_IV, 64, mad_IV); _xor64((uint64_t*)out, (uint64_t*)in, (uint64_t*)mad_IV); if (encrypt) memcpy(mad_IV, out, 64); else memcpy(mad_IV, in, 64); in += 64; out += 64; --n; }}int main(){ unsigned char iv[64] = { 0 }; unsigned char key[32] = { 0 }; memcpy(iv, "iv", 2); memcpy(key, "key", 3); int size = 640; char *plaintext = malloc(size); char *decrypted = malloc(size); char *encrypted = malloc(size); memset(plaintext, 0, size); memset(encrypted, 0, size); memset(decrypted, 0, size); strcpy_s(plaintext, size, "plainxxxxxx."); mad_init(key, iv); mad_crypt(plaintext, size, encrypted, 1); mad_init(key, iv); mad_crypt(encrypted, size, decrypted, 0); phex(encrypted, 64); printf("\n\n"); printf("plaintext: %s\n", plaintext); printf("decrypted: %s\n", decrypted); putchar('\n'); return 0;}
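The CFB structure described above can also be modelled outside C; a short Python sketch using hashlib that mirrors the original C code's keystream-and-feedback shape (state||key hashed per block, ciphertext fed back). This is a toy for studying the construction, not production cryptography.

```python
import hashlib

BLOCK = 64  # SHA-512 output size in bytes

def _keystream_block(state, key):
    # mirrors SHA512 over the 96-byte struct: state[64] followed by key[32]
    return hashlib.sha512(state + key).digest()

def cfb_encrypt(key, iv, plaintext):
    assert len(plaintext) % BLOCK == 0
    state, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        ks = _keystream_block(state, key)
        block = bytes(a ^ b for a, b in zip(plaintext[i:i+BLOCK], ks))
        out += block
        state = block          # feedback: the previous *ciphertext* block
    return out

def cfb_decrypt(key, iv, ciphertext):
    assert len(ciphertext) % BLOCK == 0
    state, out = iv, b""
    for i in range(0, len(ciphertext), BLOCK):
        ks = _keystream_block(state, key)
        block = ciphertext[i:i+BLOCK]
        out += bytes(a ^ b for a, b in zip(block, ks))
        state = block          # feedback uses the ciphertext, as in the C code
    return out

key, iv = b"k" * 32, b"i" * 64
msg = b"\xdd" * 128
ct = cfb_encrypt(key, iv, msg)
assert ct != msg
assert cfb_decrypt(key, iv, ct) == msg
print("round trip ok")
```

Having a reference model like this makes it easy to test variants (e.g. hiding the key as suggested above) against the original for compatibility.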
_unix.377600
I followed the instructions at this email thread, and placed services.xserver.xkbOptions = "grp:alt_space_toggle, ctrl:swapcaps"; in my /etc/nixos/configuration.nix file, but even after rebuilding with $ nixos-rebuild switch, and rebooting with nixos-rebuild boot and reboot, my caps lock key is not remapped.How do I map Caps Lock to Ctrl in NixOS?
In nixos, how to remap caps lock to control?
nixos
null
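A hedged configuration sketch: the option name in the question looks right for X sessions, but a commonly suggested variant is the ctrl:nocaps XKB option, which turns Caps Lock into an extra Ctrl instead of swapping the two keys. xkbOptions only affects X, and a re-login is typically needed after the rebuild. Option names below are taken from the question, not verified against a specific NixOS release.

```nix
# Sketch for /etc/nixos/configuration.nix (names assumed from the question).
# ctrl:nocaps makes Caps Lock act as an extra Ctrl key.
services.xserver.xkbOptions = "ctrl:nocaps";

# Note: the virtual console keymap is configured separately from X,
# which is a common reason a remap appears not to take on the console.
```

If the swap behaviour is really wanted, ctrl:swapcaps (as in the question) is the XKB option for that; the point of this sketch is mainly that the change applies per X session, not immediately system-wide.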
_codereview.155513
I want to fill an array of complex numbers using a uniform generator. I thought up the next code. (Complex is a simple fixed-point complex data type.)std::generate(inputData.begin(), inputData.end(), []()-> Complex { static std::default_random_engine generator; static std::normal_distribution<double> distribution(0.0, 0.5); // mean = 0.0, stddev = 0.5 return Complex(distribution(generator), distribution(generator));});So I though up using static variables within the lambda expression. Is that inefficient? Alternatively I would create them outside of the lambda, and put them on the capture list. But I don't need them outside of the lambda, so this seems cleaner to me.
Fill a vector with uniformly distributed random complex numbers
c++;lambda;static
null
_codereview.129647
In this example, I am only looking for the first duplicate - I'm curious whether this loop pattern (take from the top, evaluate, put on the bottom) has a standard form:function findOneDupe(input){ var b = input.split(/\s|\n/).map(Number); b.shift(); for (var i=0;i<b.length;i++){ var e = b.shift(); if (b.indexOf(e) == -1) return e; b[b.length] = e; } return -1;}
Take from top, evaluate, put on bottom
javascript
null
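The take-from-the-top, evaluate, put-on-the-bottom shape maps naturally onto a double-ended queue. A Python sketch of the same loop (note that, despite its name, the original function returns the first value that has no other copy):

```python
from collections import deque

def first_with_no_other_copy(values):
    """Rotate-and-test: pop from the front, look for another copy in the
    rest, push to the back -- the same shape as the question's loop."""
    q = deque(values)
    for _ in range(len(q)):
        e = q.popleft()
        if e not in q:        # no other copy anywhere in the queue
            return e
        q.append(e)           # put it on the bottom and keep going
    return -1

print(first_with_no_other_copy([2, 2, 3, 2, 4, 3]))  # → 4
```

Bounding the loop by the original length (instead of re-reading the mutating length, as the JavaScript does) makes the rotation terminate cleanly even when every value is duplicated.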
_webapps.100756
Let's say in Sheet 2 I have a column (A) of words, and a column (B) of hex color codes. In Sheet 1, wherever a word from Sheet 2 appears alone in a cell, that cell should get the background color from Sheet 2.EX. Sheet 2A | BCat | #FF2223Dog | #114589Bat | #123456I've tried using custom functions, but custom functions don't give you permissions to set the background color of cells.Using conditional formatting seems awkward, as I might have thousands of words in Sheet 2.Is there a way to accomplish this?
How can I change the background color of a cell based on the cell contents using a color lookup sheet?
google spreadsheets;conditional formatting
null
_cstheory.5961
In the paper The Random Oracle Hypothesis Is False, the authors (Chang, Chor, Goldreich, Hartmanis, Håstad, Ranjan, and Rohatgi) discuss the implications of the random-oracle hypothesis. They argue that we know very little about separations between complexity classes, and most results involve either using reasonable assumptions, or the random-oracle hypothesis. The most important and widely believed assumption is that PH does not collapse. In their words:In one approach, we assume as a working hypothesis that PH has infinitely many levels. Thus, any assumption which would imply that PH is finite is deemed incorrect. For example, Karp and Lipton showed that if NP $\subseteq$ P/poly, then PH collapses to $\Sigma^P_2$. So, we believe that SAT does not have polynomial sized circuits. Similarly, we believe that the Turing-complete and many-one complete sets for NP are not sparse, because Mahaney showed that these conditions would collapse PH. One can even show that for any $k \geq 0$, $P^{\mathrm{SAT}[k]} = P^{\mathrm{SAT}[k+1]}$ implies that PH is finite. Hence, we believe that $P^{\mathrm{SAT}[k]} \ne P^{\mathrm{SAT}[k+1]}$ for all $k \geq 0$. Thus, if the polynomial hierarchy is indeed infinite, we can describe many aspects of the computational complexity of NP.Apart from the assumption about PH not collapsing, there have been many other complexity assumptions. For instance:
It would be great if the following conventions were adhered to (whenever possible):The assumption itself;The first paper in which the assumption is made;Interesting results in which the assumption is used;If the assumption has ever been refuted / proved, or whether its plausibility has ever been discussed.This post is meant to be a community wiki; if an assumption is already cited, please edit the post and add new information rather than making a new post.Edit (10/31/2011): Some cryptographic assumptions and information about them are listed in the following websites:Wiki of Cryptographic Primitives and Hard Problems in Cryptography.Helger Lipmaa's Cryptographic assumptions and hard problems.
An Anthology of Complexity Assumptions
cc.complexity theory;complexity classes;big list;complexity assumptions
null
_datascience.9483
I am a newbie to XGBoost so pardon my ignorance. Here is the python code: import pandas as pdimport xgboost as xgbdf = pd.DataFrame({'x':[1,2,3], 'y':[10,20,30]})X_train = df.drop('y',axis=1)Y_train = df['y']T_train_xgb = xgb.DMatrix(X_train, Y_train)params = {"objective": "reg:linear"}gbm = xgb.train(dtrain=T_train_xgb,params=params)Y_pred = gbm.predict(xgb.DMatrix(pd.DataFrame({'x':[4,5]})))print Y_predOutput is :[ 24.126194 24.126194]As you can see the input data is simply a straight line. So the output I expect is [40,50]. What am I doing wrong here?
XGBoost Linear Regression output incorrect
python;linear regression;xgboost
It seems that XGBoost uses regression trees as base learners by default. XGBoost (or gradient boosting in general) works by combining multiple of these base learners. Regression trees cannot extrapolate the patterns in the training data, so any input above 3 or below 1 will not be predicted correctly in your case. Your model is trained to predict outputs for inputs in the interval [1,3]; an input higher than 3 will be given the same output as 3, and an input less than 1 will be given the same output as 1.Additionally, regression trees do not really see your data as a straight line, as they are non-parametric models, which means they can theoretically fit any shape that is more complicated than a straight line. Roughly, a regression tree works by assigning your new input data to some of the training data points it has seen during training, and producing the output based on that. This is in contrast to parametric regressors (like linear regression) which actually look for the best parameters of a hyperplane (a straight line in your case) to fit your data.
Linear regression does see your data as a straight line with a slope and an intercept.You can change the base learner of your XGBoost model to a GLM (generalized linear model) by adding booster:gblinear to your model params :import pandas as pdimport xgboost as xgbdf = pd.DataFrame({'x':[1,2,3], 'y':[10,20,30]})X_train = df.drop('y',axis=1)Y_train = df['y']T_train_xgb = xgb.DMatrix(X_train, Y_train)params = {objective: reg:linear, booster:gblinear}gbm = xgb.train(dtrain=T_train_xgb,params=params)Y_pred = gbm.predict(xgb.DMatrix(pd.DataFrame({'x':[4,5]})))print Y_predIn general, to debug why your XGBoost model is behaving in a particular way, see the model parameters :gbm.get_dump()If your base learner is linear model, the get_dump output is :['bias:\n4.49469\nweight:\n7.85942\n']In your code above, since you tree base learners, the output will be :['0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=2.85\n\t\t4:leaf=5.85\n\t2:leaf=8.85\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=1.995\n\t\t4:leaf=4.095\n\t2:leaf=6.195\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=1.3965\n\t\t4:leaf=2.8665\n\t2:leaf=4.3365\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.97755\n\t\t4:leaf=2.00655\n\t2:leaf=3.03555\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.684285\n\t\t4:leaf=1.40458\n\t2:leaf=2.12489\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.478999\n\t\t4:leaf=0.983209\n\t2:leaf=1.48742\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.3353\n\t\t4:leaf=0.688247\n\t2:leaf=1.04119\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.23471\n\t\t4:leaf=0.481773\n\t2:leaf=0.728836\n', '0:[x<3] yes=1,no=2,missing=1\n\t1:[x<2] yes=3,no=4,missing=3\n\t\t3:leaf=0.164297\n\t\t4:leaf=0.337241\n\t2:leaf=0.510185\n', '0:[x<2] 
yes=1,no=2,missing=1\n\t1:leaf=0.115008\n\t2:[x<3] yes=3,no=4,missing=3\n\t\t3:leaf=0.236069\n\t\t4:leaf=0.357129\n']Tip : I actually prefer to use xgb.XGBRegressor or xgb.XGBClassifier classes, since they follow the sci-kit learn API. And because sci-kit learn has so many machine learning algorithm implementations, using XGB as an additional library does not disturb my workflow only when I use the sci-kit interface of XGBoost.
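The clamping behaviour described above can be reproduced without xgboost at all. A sketch contrasting a piecewise-constant (tree-style) predictor with an ordinary least-squares line on the question's three points — both functions are illustrative stand-ins, not XGBoost internals:

```python
# Training data from the question
xs, ys = [1, 2, 3], [10, 20, 30]

def tree_like_predict(x):
    """Piecewise-constant predictor: returns the output of the nearest
    training region, so it cannot extrapolate beyond [1, 3]."""
    nearest = min(xs, key=lambda t: abs(t - x))
    return ys[xs.index(nearest)]

def linear_fit_predict(x):
    # ordinary least squares for y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys)) / \
        sum((xi - mx) ** 2 for xi in xs)
    b = my - a * mx
    return a * x + b

print(tree_like_predict(4), tree_like_predict(5))    # → 30 30 (clamped)
print(linear_fit_predict(4), linear_fit_predict(5))  # → 40.0 50.0
```

The constant 30 for any input above 3 is the same flat-outside-the-training-range effect the question observed (XGBoost's exact value, 24.1, also reflects its default learning rate and regularization, but the inability to extrapolate is the tree part).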
_codereview.131558
Following yesterday's advice on my Customer class, I have now created a new class which is for looking up descriptions from a MySQL database. I have a Lookup table which contains type, name, value, and descriptions.Here's a basic example of type = customer and name = stateCustomer StatesNew CustomerPending ApprovalApproved / ActiveDeletedIn this example if I pass in the parameters customer, state, and 2, the method will return Pending Approval.pdo.inc.php<?php// Version 0.1// Last updated 08 Jun 2016define('DB_CONFIG_HOST', 'localhost');define('DB_CONFIG_DB', 'dev');define('DB_CONFIG_USER', 'dev');define('DB_CONFIG_PW', 'dev');$dsn = 'mysql:host=' . DB_CONFIG_HOST . ';dbname=' . DB_CONFIG_DB . ';';define('DB_CONFIG_DSN', $dsn);try{ $pdo = new PDO(DB_CONFIG_DSN, DB_CONFIG_USER, DB_CONFIG_PW);}catch (PDOException $ex){ error_log('Connection failed: ' . $ex->getMessage()); die();}?>lookup.class.php<?php// Version 0.1// Last updated 09 Jun 2016class Lookup{ function __construct($pdo) { $this->pdo = $pdo; } function getLookup($lookup_type, $lookup_name, $lookup_value) { $valid_types = array("customer"); $valid_names = array("state"); if (!in_array($lookup_type, $valid_types)) { throw new InvalidArgumentException('Lookup type is not valid'); } if (!in_array($lookup_name, $valid_names)) { throw new InvalidArgumentException('Lookup name is not valid'); } if (empty($lookup_value) || !is_int($lookup_value) || $lookup_value < 0) { throw new InvalidArgumentException('Lookup value is not a valid integer'); } $query = "SELECT lookup_description FROM Lookup WHERE lookup_type = :lookup_type AND lookup_name = :lookup_name AND lookup_value = :lookup_value LIMIT 1"; try { $stmt = $this->pdo->prepare($query); $stmt->bindParam(':lookup_type', $lookup_type); $stmt->bindParam(':lookup_name', $lookup_name); $stmt->bindParam(':lookup_value', $lookup_value); $stmt->execute(); if ($stmt->rowCount() === 0) { return false; } else { return $stmt->fetchColumn(); } } catch (PDOException $ex) {
error_log('Something went wrong in getLookup ' . $ex->getMessage()); return null; } }}?>test.php<?php// Version 0.1// Last updated 09 Jun 2016include_once('pdo.inc.php');include_once('lookup.class.php');try{ $l = new Lookup($pdo);}catch (Exception $ex){ echo $ex->getMessage(); die();}try{ echo $l->getLookup("customer", "state", 1); // Returns "New Customer" echo $l->getLookup("customer", "state", 2); // Returns "Pending Approval"}catch (Exception $ex){ echo $ex->getMessage(); error_log('Customer name lookup failed with: ' . $ex->getMessage());}?>I'm not sure I like the idea of having fixed arrays for looking up valid parameter values. I was thinking of getting a distinct list from MySQL, but to do that I might as well just pass the query without validation and let it return null. Any thoughts?
Lookup class to get descriptions of values
php;php5;pdo
Having provided an answer to your previous review, I have a mixed bag of reactions towards this latest version.The good:You seem to have embraced the concept of data validation (though conspicuously missing in your constructor to Lookup class).Your usage of try-catch seems appropriate (with one possible exception I will note below).I think you are handling the PDO dependency in the class in a much better manner now, by storing the connection on the object.Possible points of concern:First and foremost it now more unclear to me what sort of approach you are trying to take to this class definition. Before you had a customer class, now you have a lookup class. What are you really trying to do? Are you trying to establish a factory pattern whereby you can query a set of records and return appropriate object representations? If so, why no object class now. Since I don't know your full use case here, I will not get into what the interface design may look like for a factory and will instead still with a single class implementation, as I see nothing in your question that would suggest you are actually trying to work with collections of customers (but rather single customer instances).To this end, I will begin to talk in terms of creating an object relational mapping class that allows you to instantiate a single customer object based on a provided ID and have access to whatever properties on the customer object that may be appropriate. You may want that customer class to look like this:<?php// Version 0.1// Last updated 09 Jun 2016class Customer{ protected $customerId; protected $firstName; protected $lastName; protected $state; // any other properties from DB table you want to capture on this object. public abstract function getCustomerById(PDO $pdo, $customerId) { if(!self::validateCustomerIdFormat($customerId) { throw new InvalidArgumentException( '$customer_id is not a valid integer' . ' Value provided: ' . 
var_export($customerId, true) ); } $query = SELECT customer_id AS customerId, first_name AS firstName, last_name AS lastName, state Lookup /* any other properties along with alias to property name */ FROM customer WHERE customer_id = :customer_id LIMIT 1; // limit not needed if customer_id is unique try { $stmt = $pdo->prepare($query); $stmt->bindParam(':customer_id', $customerId); $stmt->execute(); // return saturated instance of this class // no need for row count check here // as this method returns false if there are no records // remaining in result set return $stmt->fetchObject(__CLASS__); } catch (PDOException $ex) { // note here that I have decided just to rethrow the exception // rather then returning null as in previous example // this is because if there is a problem with underlying PDO object // there is nothing really this class can do (a terminal exception) error_log('Something went wrong in getCustomerById' . $ex->getMessage()); throw $ex; } } // constructor has been made private // to force use of abstract method to instantiate class private function __construct($customer_id) { } // have added a public abstract validation function around customer ID // the allows single place to configure validation rules // and can be used outside object context for validating customer ID // format anywhere in the application public abstract function validateCustomerIdFormat($customerId) { if (empty($customerId) || !is_int($customerId) || $customerId < 0) { return false; } return true; }}?>In this case, your calling code might look like:include_once('pdo.inc.php');include_once('customer.class.php');try{ $customer1 = Customer::getCustomerById($pdo, 1); $customer2 = Customer::getCustomerById($pdo, 2);}catch (Exception $ex){ echo $ex->getMessage(); error_log('Customer ID lookup failed with: ' . 
$ex->getMessage());}// conditional needed here are Customer::getCustomerById can return false// if no match found// here we simply echo out the customer's state informationif($customer1) { echo $customer1->state;}if($customer2) { echo $customer2->state;}The one minor quibble I have about your try-catch usage is that you might consider splitting the instantiation of each customer object into separate try-catch block depending on your need to granularly perform different catch block activities on each. I didn't show that in my example, because I know your test.php is just a proof of concept. But in the real world, if you needed to instantiate two customer objects, you might want to handle cases where either one fail independently.
_reverseengineering.12767
I'm wondering if anyone knows how I would go about finding references to a buffer in memory. The scenario is that I have found the buffer that the program receives from a server. The buffer is encoded, so I'm trying to find the routine that is going to decode it. I know it's possible, since people have done it in the past, and I'm just trying to re-create it to learn.

Anyway, I tried placing a hardware breakpoint on the buffer, but it only gets hit 3-4 times, and none of those hits really copy the buffer or modify it in any way. So I'm wondering: am I not finding all references to the buffer? And how would one go about this in general?
Tracing what references a buffer in memory using Olly?
ollydbg;memory;tracing
null
_unix.385729
In Linux Mint 18.2, the default path selector looks like this:I want it to look like this:So I opened up dconf-editor and changed org.gtk.settings.filechooser location-mode from path-bar to filename-entry. Unfortunately, this didn't have any effect, and when I opened up dconf-editor later, it had reverted back to path-bar. So then I used gsettings to do it myself:

gsettings set org.gtk.Settings.FileChooser location-mode filename-entry

I tried this both with and without sudo. After running the command, gsettings get said that it did indeed take effect and was now set to filename-entry. Excited, I opened up gedit to test this, but was dismayed to find it was still the path bar and not the filename entry. Perplexingly, after closing the file-chooser in gedit, gsettings get now showed that location-mode had reverted to path-bar.

After trying this a few times, I determined that closing the file-chooser causes the setting to revert.

How can I get the file-chooser to use the filename entry instead of the path bar?
Linux Mint file-chooser: show filename-entry instead of path-bar?
linux mint;gsettings
null
_unix.244183
Here is my example snippet:

text="Var 1 is ${one}, Var 2 is ${two}, Var 3 is ${three}"
for (( i=0 ; i<1 ; i++ ))
do
    one="one"
    two="two"
    three="three"
    echo "${text}"
done

returns

Var 1 is , Var 2 is , Var 3 is

and if I change the code to this, it works as expected:

text="Var 1 is ${one}, Var 2 is ${two}, Var 3 is ${three}"
for (( i=0 ; i<1 ; i++ ))
do
    one="one"
    two="two"
    three="three"
    echo "Var 1 is ${one}, Var 2 is ${two}, Var 3 is ${three}"
done
Global variable referencing variables in a for loop is not set correctly
bash;shell script
This happens because at the moment you set the text variable, all the other variables are empty, so they default to the empty string "".

Try setting text with single quotes (') instead of double quotes (") so bash doesn't evaluate your expression at assignment time. Then if you do

$ text='echo var1 = $one var2 = $two'
$ one=hi
$ two=bye

then eval "$text" will return var1 = hi var2 = bye
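To make the deferred-expansion idea from the answer concrete, here is a self-contained sketch (the variable names are illustrative, not from the question):

```shell
# Store the template with single quotes so ${one}/${two} are NOT expanded yet.
template='Var 1 is ${one}, Var 2 is ${two}'

one="one"
two="two"

# eval triggers a second round of expansion, now that the variables are set.
eval "echo \"$template\""
```

This prints `Var 1 is one, Var 2 is two`, because the single quotes preserved the literal `${one}`/`${two}` until eval expanded them.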
_codereview.5241
I wrote this palindrome extractor. And even though it works, and I can solve the challenge with it, it feels very Java-like. I was wondering what adjustments I could make in order for it to be more functional.

import collection.mutable._

object Level1 {

  def palindrome(input: String) = input.reverse == input

  def extractPalindromes(input: String) = {
    var counter = 0
    val palindromes = new ListBuffer[String]()
    while (counter < input.length) {
      var localCounter = 2
      while ((counter + localCounter) < input.length) {
        //println("counter:" + counter + " localCounter:" + localCounter)
        val tempString = input.substring(counter, input.length - localCounter)
        if (palindrome(tempString)) palindromes += tempString
        localCounter += 1
      }
      counter += 1
    }
    palindromes
  }

  def main(args: Array[String]) = {
    val input = "I like racecars that go fast"
    extractPalindromes(input).filter(_.length > 4).foreach(println)
  }
}
More functional way of writing this palindrome extractor?
scala;palindrome
It can be as simple as:

scala> val str = "I like racecars that go fast"
str: java.lang.String = I like racecars that go fast

scala> for { i <- 2 to str.size; s <- str.sliding(i) if s == s.reverse } yield s
res5: scala.collection.immutable.IndexedSeq[String] = Vector(cec, aceca, racecar)
_cogsci.13016
While reading, I tend to subvocalize what I have read. I find this disturbing, but since many people do this, there must be a reason why the brain does it. So, what is subvocalization good for?
What is subvocalization good for?
cognitive psychology
It should be pointed out that subvocalisation while reading is not necessarily of the same cause or purpose as subvocalisation while doing other activities. Many people are able (or accustomed) to comprehending written language only by converting it to spoken language mentally. This allows the interpretation process to share the same brain regions, as opposed to using one modality for reading and another for listening. Naturally some people are more prone to logical (read: serialised) thinking while others are more prone to visual (read: parallel) thinking. Those individuals prone to logical thinking are more likely to subvocalise while reading since this allows the reading process to correspond well with one's usual way of thinking. The reverse should be true for those accustomed to visual thinking.The act of subvocalising while doing non-reading activities is probably similar in that it may allow the thinking to be of one's usual modality (logical). Taking it further, however, subvocalisation is a way of putting thoughts into words. It is common knowledge that when you put an understanding into your own words you are more likely to develop a lasting memory. This is presumably because explaining something in words requires that the ideas be organised in a coherent fashion, as opposed to possessing merely a superficial glimpse of awareness. To think about it another way, putting thoughts into words creates durable mental objects that can then be stored away into memory.
_unix.352734
I wrote a compression command in Ubuntu. However, the zip file produced also contains the folder path leading to the target files, in the form of folders. I only need the target files alone in the zip file. This is the command I am currently using:

zip -9pr /mnt/test/Raimi/temp/Testing.zip /home/tect/Loco/*txt

where /mnt/test/Raimi/temp is the destination folder, Testing.zip is the output I intend to produce, and /home/tect/Loco is where the original files are located. Please point out any fault in my command if you find one. Thank you in advance.
Create zip file without folder path
filenames;zip
null
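One likely fix, sketched under the assumption that Info-ZIP's zip is in use (its -j flag "junks" the stored paths); the /tmp directories below are stand-ins for the question's paths:

```shell
# Stand-in source and destination directories for the demo.
mkdir -p /tmp/zipdemo/src /tmp/zipdemo/dest
printf 'hello\n' > /tmp/zipdemo/src/a.txt
printf 'world\n' > /tmp/zipdemo/src/b.txt

# -j ("junk paths") stores only the bare file names, not the leading folders.
zip -9j /tmp/zipdemo/dest/Testing.zip /tmp/zipdemo/src/*txt

# The listing shows a.txt and b.txt with no src/ prefix.
unzip -l /tmp/zipdemo/dest/Testing.zip
```

An equivalent approach is to cd into the source directory first, so zip only ever sees relative names.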
_unix.307696
How can I increase the display time of the pane numbers shown with ctrl-b q? When I have lots of panes, there is sometimes not enough time to key in the one I want to switch to.
How to increase tmux pane numbers display time `ctrl-b q`
tmux
You can set it in an existing session with ctrl-b : then set display-panes-time 2000 (for 2 seconds, for example). To persist it, put the command into your ~/.tmux.conf:

set -g display-panes-time 2000

This is documented in the tmux manpage (man tmux) under OPTIONS:

display-panes-time time
    Set the time in milliseconds for which the indicators shown by the display-panes command appear.
_unix.345804
When vmsplice(2) is used with SPLICE_F_GIFT, I promise that my process won't modify the underlying page(s) I gift. The normal workflow, I'm informed, is:

/* pseudo code, don't kill me */
void* page = memmap();
vmsplice(page, SPLICE_F_GIFT);
free(page);

But this requires me to invalidate my TLB every time I gift a page, which nicely negates any performance gain I get from not copying the data.

So how can I know when the kernel is done with my page? Then I could simply not free the page, right?

I'm assuming, for a use case like vmsplice -> pipe -> splice -> tcp socket, that I would wait for a response, at which point the kernel will have flushed its send buffer and my page will be mine again?
When can I modify a page that was GIFT'd to vmsplice?
linux;pipe;memory;ram
null
_unix.186752
The issue I encounter

When working in Android Studio, Eclipse or even command-line Gradle, the Java software often freezes (usually after I update my system or change Java). For Android Studio and Eclipse, if I move to another desktop and come back, it becomes a gray window and the interface never comes back, even after hours. I suppose it is a Java issue. It does not always happen: I usually don't have any problem for weeks until it appears again. I don't understand what makes it stop: when it happens, I try to reboot my computer and change my Java JDK version, but it does not change anything. Then one day, I boot my computer and the problem has disappeared - for the next few weeks.

What I can observe

One CPU always stays at 100%.
I cannot make a thread dump of Android Studio (as described here): it freezes as well.
If I run a big C++ compilation while Android Studio/Eclipse/Gradle is freezing (i.e. a compilation that takes all of my CPUs), then it stops freezing and I can continue my work until the next time (but it happens extremely often).

What I tried

I tried another window manager: I could reproduce the bug on XMonad and Fluxbox.
I tried to export _JAVA_AWT_WM_NONREPARENTING=1 in /etc/profile.d/jre.sh.
I tried to switch between java-7-jdk, java-7-openjdk, java-8-jdk, java-8-openjdk.
I tried to run wmname LG3D.
I tried to run pkill -e adb, as advised in the comments.
I tried to jmap <pid> on the <pid> of Android Studio, but I get a DebuggerException: Can't attach to the process.
I tried to jcmd <pid> GC.run on the <pid> of Android Studio, but I get a DebuggerException: Can't attach to the process and Unable to open socket file: target process not responding or HotSpot VM not loaded.
I tried to remove my .gradle directory.
I tried to Invalidate and Restart Android Studio (but the problem does not look to be unique to Android Studio, since I experienced it with Eclipse, too).

My configuration

I am on Arch Linux (but a similar issue has been reported on Linux Mint) with Awesome WM (I experience the same with XMonad and Fluxbox). As far as I remember, it has always been happening on this machine (I changed in October 2014). Before this, it was working on Debian (but with Awesome WM as well). I have updated Android Studio many times (from around 0.8 to the latest version).

What could be happening? Or how can I figure it out?

Related problems

I have recently found this post talking about a similar problem. I tried what he advises (i.e. I tried export LD_ASSUME_KERNEL=2.4.1; android-studio), but then Android Studio does not start at all. Is it possible that I also have a problem with NPTL?
Java process freezes until I use 100% CPU
arch linux;java;eclipse;intellij
I never found the answer to this question, but this problem hasn't occurred in months (maybe a year?). I guess something fixed it, somehow. I will therefore close the question now.
_unix.153518
I have to cross-compile openswan for an OMAP4 board, and GMP is a prerequisite. First I tried it on a 64-bit OS, but it gave me this error:

configure: error: Oops, mp_limb_t is 64 bits, but the assembler code in this configuration expects 32 bits.

Then I shifted to Ubuntu 12.04 32-bit, and GMP v6.0.0 got compiled after a few trials. Even after having the ARCH, TOOLCHAIN and CROSS_COMPILER variables in .bashrc, I had to export the following:

export ARCH=arm
export PATH=/home/harsh32bit/Work/Projects/BSQ_VVDN/BISQUARE/gcc-SourceryCodeBenchLite-arm/bin/:$PATH
export CROSS_COMPILE=arm-none-linux-gnueabi-

Then the following commands were run:

./configure --build=i686-pc-linux-gnu --host=arm-none-linux-gnueabi --prefix=/home/harsh32bit/Work/Projects/BSQ_VVDN/BISQUARE/gcc-SourceryCodeBenchLite-arm/
make clean
make
make install

Then I soft-linked the GMP library into the toolchain:

~/Work/Projects/BSQ_VVDN/BISQUARE/gcc-SourceryCodeBenchLite-arm/lib/gcc/arm-none-linux-gnueabi/4.7.3 # ln -s ~/Work/Projects/BSQ_VVDN/packages/gmp-6.0.0/.libs/libgmp.so libgmp.so

GMP compiled successfully, although make check reported that all tests failed:

9 of 9 tests failed.

Now when I try to cross-compile Openswan-2.6.41, after making changes in CROSSCOMPILE.sh, and do make programs, I get this error:

In file included from /home/harsh32bit/Work/Projects/BSQ_VVDN/packages/openswan-2.6.41/include/certs.h:24:0, from /home/harsh32bit/Work/Projects/BSQ_VVDN/packages/openswan-2.6.41/lib/libopenswan/id.c:42: /home/harsh32bit/Work/Projects/BSQ_VVDN/packages/openswan-2.6.41/include/secrets.h:20:41: fatal error: gmp.h: No such file or directory
compilation terminated

I have gone to the TI E2E site for this and sniffed the internet for pointers over the last 4 weeks, but I couldn't figure it out. If anyone has any clue about cross-compiling openswan and GMP, please advise me.
Cross Compile GMP and Openswan for ARM
ubuntu;arm;cross compilation
null
_codereview.105744
I would use the same RecyclerView.Adapter with two or more different fragments. Every fragment uses a different view item layout, so I must use a different RecyclerView.ViewHolder for binding the data.

For the implementation I have created one RecyclerView.ViewHolder that binds the two different view layouts with a switch, but I think there might be something better for this kind of situation.

public class TFViewHolder extends RecyclerView.ViewHolder {

    public final static int LAYOUT_ONE = 1;
    public final static int LAYOUT_TWO = 2;

    public Integer mId;
    public ImageView mThumb;
    public TextView mName;
    public TextView mTitle;
    public TextView mDescription;
    public final View mView;

    public TFViewHolder(View itemView, int layoutType) {
        super(itemView);
        mView = itemView;
        switch (layoutType) {
            case LAYOUT_ONE:
                mThumb = (ImageView) itemView.findViewById(R.id.thumb);
                mName = (TextView) itemView.findViewById(R.id.name);
                break;
            case LAYOUT_TWO:
                mTitle = (TextView) itemView.findViewById(R.id.title);
                mDescription = (TextView) itemView.findViewById(R.id.description);
                break;
        }
        itemView.setTag(itemView);
    }
}

The RecyclerView.Adapter:

public class TFRecyclerViewAdapter extends RecyclerView.Adapter<TFViewHolder> {

    protected Context context;
    protected List items;
    private int layout;
    private int layoutType;

    public TFRecyclerViewAdapter(Context context) {
        this.context = context;
    }

    public TFRecyclerViewAdapter(Context context, int layout, int layoutType) {
        this(context);
        this.layout = layout;
        this.layoutType = layoutType;
    }

    public void setItems(List items) {
        this.items = items;
    }

    @Override
    public void onBindViewHolder(final TFViewHolder holder, final int position) {
        holder.mId = position;
    }

    @Override
    public TFViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        LayoutInflater inflater = LayoutInflater.from(parent.getContext());
        View itemView = inflater.inflate(layout, parent, false);
        return new TFViewHolder(itemView, layoutType);
    }

    @Override
    public int getItemCount() {
        int l = 0;
        if (items != null) {
            l = items.size();
        }
        return l;
    }
}

Two different adapters that extend the TFRecyclerViewAdapter above:

public class TFNewsRecyclerViewAdapter extends TFRecyclerViewAdapter {

    public TFNewsRecyclerViewAdapter(Context context, int layout) {
        super(context, layout, TFViewHolder.LAYOUT_NEWS);
    }

    @Override
    public void onBindViewHolder(TFViewHolder holder, int position) {
        final News item = (News) items.get(position);
        holder.mTitle.setText(item.getTitle());
        holder.mDescription.setText(item.getDescription());
        super.onBindViewHolder(holder, position);
    }
}

and

public class TFTeamsRecyclerViewAdapter extends TFRecyclerViewAdapter {

    public TFTeamsRecyclerViewAdapter(Context context, int layout) {
        super(context, layout, TFViewHolder.LAYOUT_TEAMS);
    }

    @Override
    public void onBindViewHolder(TFViewHolder holder, int position) {
        final Team team = (Team) items.get(position);
        Glide.with(holder.mThumb.getContext())
             .load(team.getThumb())
             .fitCenter()
             .crossFade()
             .into(holder.mThumb);
        holder.mName.setText(team.getName());
        super.onBindViewHolder(holder, position);
    }
}
Using the same RecyclerView.Adapter with a different ViewHolder
java;design patterns;android
null
_unix.271140
Ideally I'd like a command like this:

rm --only-if-symlink link-to-file

because I have burned myself too many times by accidentally deleting the file instead of the symlink pointing to the file. This can be especially bad when sudo is involved. Now of course I do an ls -al to make sure it's really a symlink and such, but that's vulnerable to operator error (similarly named file, typo, etc.) and race conditions (if somebody wanted me to delete a file for some reason). Is there some way to check whether a file is a symlink, and delete it only if it is, in one command?
Remove file, but only if it's a symlink
bash;command line;rm
$ rm_if_link(){ [ ! -L "$1" ] || rm -v "$1"; }

#test
$ touch nonlink; ln -s link
$ rm_if_link nonlink
$ rm_if_link link
removed 'link'
_webapps.72748
The error message says: You need to figure out how to change the encoding scheme from Windows Hebrew to UTF or vice versa.

How do I fix Hebrew encoding in Gmail for emails I receive, as per the example below:

~~~~~~~~~~~~~ Message 4 of 23 ~~~~~~~~~~~~~ From:Time: Wed, 7 Jan 2015 20:58:51 +0200 Subject: . ' , . . , 8. , With great gratitude. To hakadosh barichhu. And with much joy in our heart we ask u to join us in the engagement celebration of David and Natalie Thursday. night 8pm in the Young Israel of ginot. Shomron. Come with joy,this is a personal invitation. To 1 and all Reply-to-sender: mailto:subject %20 %20 %20Reply-to-list: mailto:subject %20 %20 %20
Hebrew encoded mail does not display correctly
email;gmail
null
_cs.72538
I was reading about how to find the lowest common ancestor in a DAG. A DAG can have scenarios where an LCA query yields multiple solutions, and I feel the accepted answer explains that pretty well. However, one of the answers also mentions a paper noting that, because of the scenario above, for DAGs there may be cases where you would want to find the lowest SINGLE common ancestor.

From the limited abstract linked in the answer above, the paper says this:

We derive a new generalization of lowest common ancestors (LCAs) in dags, called the lowest single common ancestor (LSCA). We show how to preprocess a static dag in linear time such that subsequent LSCA-queries can be answered in constant time. The size is linear in the number of nodes. We also consider a fuzzy variant of LSCA that allows to compute a node that is only an LSCA of a given percentage of the query nodes. The space and construction time of our scheme for fuzzy LSCAs is linear, whereas the query time has a sub-logarithmic slow-down. This fuzzy algorithm is also applicable to LCAs in trees, with the same complexities.

Clarification: I will look more into this to confirm this is what LSCA is, but given the picture below:

The LCA of this picture for nodes 8 and 9 is straightforward (it would be 6), but for nodes 3 and 4, the LCA could yield either 1 or 2 because they are at the same level and are both common ancestors. In this case, perhaps it makes more sense to find the LSCA, which would be 0 since it's the single ancestor of the two.

Specifically I would like to know: how does finding the LSCA of a DAG affect time complexity compared to finding the LCA (which could yield multiple solutions), and what are the best methods to achieve it?
Lowest single common ancestor in a Directed Acyclic Graph?
algorithms;trees;dag
null
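As context for why an LCA query on a DAG can be ambiguous while an LSCA is unique, the naive all-common-ancestors computation can be sketched in Python (an illustrative sketch only; the DAG literal below is a made-up example in the spirit of the question's 3/4 case, not the question's actual figure):

```python
def ancestors(parents, node):
    """All ancestors of `node` (itself included), via DFS over parent edges."""
    seen = set()
    stack = [node]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(parents.get(n, ()))
    return seen

def common_ancestors(parents, u, v):
    """Intersection of the two ancestor sets; O(V + E) per query."""
    return ancestors(parents, u) & ancestors(parents, v)

# Hypothetical DAG: node 0 is the root, and nodes 3 and 4 each have BOTH
# 1 and 2 as parents. The "lowest" elements of the common-ancestor set
# {0, 1, 2} are 1 AND 2 (neither is an ancestor of the other), so the LCA
# query has two answers, while the LSCA is the unique node 0.
parents = {1: [0], 2: [0], 3: [1, 2], 4: [1, 2]}

print(sorted(common_ancestors(parents, 3, 4)))  # [0, 1, 2]
```

This also shows where the linear-time LSCA preprocessing in the cited abstract pays off: it avoids building these per-query ancestor sets entirely.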
_webapps.74243
I found this Chrome extension in the Chrome Web Store: https://chrome.google.com/webstore/detail/profile-visitors-for-face/ihjbpjahiibmjdlcgodcnmpelpmilamk?hl=en

It promises that one can see who visits one's Facebook profile. Does it work? And will others see that I use it?
Does this Chrome extension of reporting who visits your Facebook profile work?
facebook
null
_unix.218896
I have a file separated by pipe (|), and I want to print 0758000 in column 7 when column 6 contains the letter "I", and 0800000 in column 7 when column 6 contains the letter "A". I cannot find how to do it!!!

Example - original file:

cat file1.txt
Z89|EEE333333|100001|JANMC84|19990101|I|1800040
Z89|EEE444444|200001|JANMC84|19990101|I|1800040
Z89|EEE222222|300001|JANMC84|19990101|A|1800040
Z89|EEE555555|700001|JANMC84|19990101|A|1800040

The result should be:

Z89|EEE333333|100001|JANMC84|19990101|I|0758000
Z89|EEE444444|200001|JANMC84|19990101|I|0758000
Z89|EEE222222|300001|JANMC84|19990101|A|0800000
Z89|EEE555555|700001|JANMC84|19990101|A|0800000
How to replace value for a given condition in specific column of file
text processing;sed;awk;replace;text formatting
You can do it with awk like

awk -F\| 'BEGIN {OFS=FS} $6 == "A" {$7 = "0800000"} $6 == "I" {$7 = "0758000"}; 1' file1.txt

This will have awk split fields based on |, then set the output field separator to also be | when we write the lines back out. Then if the sixth field, $6, is "A", replace the seventh field with a particular value, and a different value if it's an "I". Then print the line out at the end, with our changes if we made any.
_softwareengineering.329415
I am new to the Java 8 time package, and am trying to better understand it and make sure that I am making good use of it.Is there a specific reason that LocalDateTime's truncatedTo(TemporalUnit) does not support ChronoUnit values past Days.I think I put together a successful implementation of such a method, but had concerns with integer division and floating point arithmetic anomalies. Is this something I should be concerned with?Also, I understand them making LocalDateTime and LocalDate both immutable Objects meaning that LocalDateTime cannot extend LocalDate, but I don't see them sharing a common interface for all the date components of LocalDateTime, making it hard to understand why they did this. The only common interfaces I can see is Temporal, TemporalAccessor, and TemporalAdjuster.If they shared a common interface for the date and time components, then you could easily write code to the interface that can use both LocalDate and LocalDateTime instances for things that modify the date without caring about the time part of the instance during runtime. Is there a good reason for them not doing this, or am I missing something here?Here is my implementation of the aforementioned way to truncate LocalDateTime beyond DAYS:public static LocalDateTime truncateDate(LocalDateTime date, ChronoUnit unit) { LocalDateTime truncatedDate = null; // LocalDateTime only supports truncatedTo(TemporalUnit) up to ChronoUnit.DAYS. switch (unit) { case NANOS: case MICROS: case MILLIS: case SECONDS: case MINUTES: case HOURS: case HALF_DAYS: case DAYS: truncatedDate = date.truncatedTo(unit); System.out.println(date = ' + String.valueOf(date) + ', unit = ' + String.valueOf(unit) + ', truncatedDate = ' + String.valueOf(truncatedDate) + '.); return truncatedDate; // break; default: // else; we can't use LocalDateTime.truncatedTo(TemporalUnit) past ChronoUnit.DAYS, so lets truncate up to DAYS and continue from there. 
truncatedDate = date.truncatedTo(ChronoUnit.DAYS); break; } int year = 0; switch (unit) { case WEEKS: truncatedDate = truncatedDate.plus(DayOfWeek.MONDAY.getValue()-truncatedDate.getDayOfWeek().getValue(), ChronoUnit.DAYS); // subtract days to the last Monday. System.out.println(date = ' + String.valueOf(date) + ', unit = ' + String.valueOf(unit) + ', truncatedDate = ' + String.valueOf(truncatedDate) + '.); return truncatedDate; // break; case MONTHS: truncatedDate = truncatedDate.with(TemporalAdjusters.firstDayOfMonth()); System.out.println(date = ' + String.valueOf(date) + ', unit = ' + String.valueOf(unit) + ', truncatedDate = ' + String.valueOf(truncatedDate) + '.); return truncatedDate; // break; case YEARS: truncatedDate = truncatedDate.with(TemporalAdjusters.firstDayOfYear()); System.out.println(date = ' + String.valueOf(date) + ', unit = ' + String.valueOf(unit) + ', truncatedDate = ' + String.valueOf(truncatedDate) + '.); return truncatedDate; // break; case DECADES: truncatedDate = truncatedDate.with(TemporalAdjusters.firstDayOfYear()); year = truncatedDate.getYear(); int decadeYear = (year/10)*10; // int division rounds down, same as trunc(year/10)*10. truncatedDate = truncatedDate.plus(decadeYear-year, ChronoUnit.YEARS); System.out.println(date = ' + String.valueOf(date) + ', unit = ' + String.valueOf(unit) + ', truncatedDate = ' + String.valueOf(truncatedDate) + '.); return truncatedDate; // break; case CENTURIES: truncatedDate = truncatedDate.with(TemporalAdjusters.firstDayOfYear()); year = truncatedDate.getYear(); int centuryYear = (year/100)*100; // int division rounds down, same as trunc(year/100)*100. 
truncatedDate = truncatedDate.plus(centuryYear-year, ChronoUnit.YEARS); System.out.println(date = ' + String.valueOf(date) + ', unit = ' + String.valueOf(unit) + ', truncatedDate = ' + String.valueOf(truncatedDate) + '.); return truncatedDate; // break; case MILLENNIA: truncatedDate = truncatedDate.with(TemporalAdjusters.firstDayOfYear()); year = truncatedDate.getYear(); int millenniumYear = (year/1000)*1000; // int division rounds down, same as trunc(year/1000)*1000. truncatedDate = truncatedDate.plus(millenniumYear-year, ChronoUnit.YEARS); System.out.println(date = ' + String.valueOf(date) + ', unit = ' + String.valueOf(unit) + ', truncatedDate = ' + String.valueOf(truncatedDate) + '.); return truncatedDate; // break; default: // ChronoUnit.ERA || ChronoUnit.FOREVER: throw new UnsupportedTemporalTypeException(Unable to truncate to unit = ' + String.valueOf(unit) + ', not well supported!); } // all switch paths return or throw an exception.}With the following output from a main test program:$ java Maindate = '2016-08-26T10:51:58.828'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Nanos'.date = '2016-08-26T10:51:58.828', unit = 'Nanos', truncatedDate = '2016-08-26T10:51:58.828'. => result = '2016-08-26T10:51:58.828'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Micros'.date = '2016-08-26T10:51:58.828', unit = 'Micros', truncatedDate = '2016-08-26T10:51:58.828'. => result = '2016-08-26T10:51:58.828'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Millis'.date = '2016-08-26T10:51:58.828', unit = 'Millis', truncatedDate = '2016-08-26T10:51:58.828'. => result = '2016-08-26T10:51:58.828'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Seconds'.date = '2016-08-26T10:51:58.828', unit = 'Seconds', truncatedDate = '2016-08-26T10:51:58'. => result = '2016-08-26T10:51:58'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Minutes'.date = '2016-08-26T10:51:58.828', unit = 'Minutes', truncatedDate = '2016-08-26T10:51'. 
=> result = '2016-08-26T10:51'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Hours'.date = '2016-08-26T10:51:58.828', unit = 'Hours', truncatedDate = '2016-08-26T10:00'. => result = '2016-08-26T10:00'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'HalfDays'.date = '2016-08-26T10:51:58.828', unit = 'HalfDays', truncatedDate = '2016-08-26T00:00'. => result = '2016-08-26T00:00'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Days'.date = '2016-08-26T10:51:58.828', unit = 'Days', truncatedDate = '2016-08-26T00:00'. => result = '2016-08-26T00:00'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Weeks'.date = '2016-08-26T10:51:58.828', unit = 'Weeks', truncatedDate = '2016-08-22T00:00'. => result = '2016-08-22T00:00'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Months'.date = '2016-08-26T10:51:58.828', unit = 'Months', truncatedDate = '2016-08-01T00:00'. => result = '2016-08-01T00:00'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Years'.date = '2016-08-26T10:51:58.828', unit = 'Years', truncatedDate = '2016-01-01T00:00'. => result = '2016-01-01T00:00'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Decades'.date = '2016-08-26T10:51:58.828', unit = 'Decades', truncatedDate = '2010-01-01T00:00'. => result = '2010-01-01T00:00'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Centuries'.date = '2016-08-26T10:51:58.828', unit = 'Centuries', truncatedDate = '2000-01-01T00:00'. => result = '2000-01-01T00:00'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Millennia'.date = '2016-08-26T10:51:58.828', unit = 'Millennia', truncatedDate = '2000-01-01T00:00'. => result = '2000-01-01T00:00'.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Eras'. => exception = 'UnsupportedTemporalTypeException' => {Unable to truncate to unit = 'Eras', not well supported!}.Truncating date = '2016-08-26T10:51:58.828' by unit = 'Forever'. 
=> exception = 'UnsupportedTemporalTypeException' => {Unable to truncate to unit = 'Forever', not well supported!}.It appears to all be working as expected, I was just wondering the implementation isn't recommended and why the default library doesn't support it if it is?Thanks!
Java 8 time - LocalDateTime vs LocalDate and truncatedTo limitation handling
java;java8
null
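The cut-off at DAYS that the question works around is specified behavior: per the java.time Javadoc, truncatedTo only accepts units whose duration divides a standard day without remainder, which is why WEEKS and larger units throw. A minimal sketch (class name is illustrative):

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.time.temporal.UnsupportedTemporalTypeException;

public class TruncateLimit {
    public static void main(String[] args) {
        LocalDateTime dt = LocalDateTime.of(2016, 8, 26, 10, 51, 58);

        // HOURS divides a standard day evenly, so truncation is supported.
        System.out.println(dt.truncatedTo(ChronoUnit.HOURS)); // 2016-08-26T10:00

        // MONTHS is longer than a day (and variable-length), so it is rejected,
        // which is exactly why the question falls back to TemporalAdjusters.
        try {
            dt.truncatedTo(ChronoUnit.MONTHS);
            System.out.println("no exception");
        } catch (UnsupportedTemporalTypeException e) {
            System.out.println("MONTHS rejected");
        }
    }
}
```

So the TemporalAdjusters-based approach in the question is not working around a bug; it fills in a case truncatedTo deliberately does not define.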
_reverseengineering.1897
When reversing smart cards, side-channel attacks are known to be quite effective against the hardware. But what is it, and can it be used in software reverse engineering, and how?
What is SCARE (Side-Channel Attacks Reverse-Engineering)?
hardware;physical attacks;smartcards
A 'side-channel attack' is any technique that uses unintended and/or indirect information channels to reach its goal. It was first defined in smart-card cryptography to describe attacks that use unintentional information leaks from the embedded chip on the card, leaks that can be used to retrieve keys and data. For example, it may rely on monitoring:

Execution Time (timing attack): to distinguish which operations have been performed and guess, for example, which branch of the code has been selected (and, thus, the value of the test).

Power Consumption (power monitoring attack): to distinguish precisely which sequence of instructions has been performed and to recompose the values of the variables. Note that there exist several analysis techniques using the same input but slightly different ways of analyzing it. For example: Simple Power Analysis (SPA), Differential Power Analysis (DPA), High-Order Differential Power Analysis (HO-DPA), template attacks, ...

Electromagnetic Radiation (electromagnetic attacks): closely related to power consumption, but can also provide information not found in power consumption, especially on RFID or NFC chips.

If you're more interested in learning how to leverage this information, I'd suggest starting by reading Power Analysis Attacks. Don't get 'scared' away by the fact that the book is about smart cards. Most of the information also applies 1-to-1 to 'normal' (SoC) embedded devices.

Forgot to mention there's an open-source platform called OpenSCA and some open-source hardware called FOBOS (Flexible Open-source BOard for Side-channel), for which I can't seem to find a proper link from home.

Application to Software Reverse-engineering

Speaking now about the application of side-channel attacks to software reverse engineering: it is more or less any attack that relies on using unintended or indirect information leakage.
The best recent example is this post from Jonathan Salwan describing how he guessed the password of a crackme just by counting the number of instructions executed on various inputs with Pin.

More broadly, this technique has long been used in software reverse engineering without being named, and it could have improved many analyses. The basic idea is that if a piece of software is too obscure to understand quickly, we can treat it as a black box and use a side-channel technique to guess the enclosed data through guided trial and error.

The list of side channels available in software reverse engineering is much longer than the one we have in hardware, because it encloses the previous list and adds some new channels such as (non-exhaustive list):

Instruction Count: allows identifying different behaviors depending on the input.

Read/Write Count: same as above, with more possibilities to identify patterns because it also includes instruction reads.

Raised Interrupt Count: depending on what type of interrupt is raised, when and how, you might identify different behaviors and be able to determine the right path to your goal.

Accessed Instruction Addresses: allows rebuilding the parts of the program that are active at a precise moment.

Accessed Memory Addresses: allows rebuilding data patterns or complex data structures stored or accessed in memory (e.g. in the heap).

This list is far from exhaustive, but basically tools such as the Valgrind VM or others can be used to perform such analysis and quickly deduce information about the behavior of a given program, thus speeding up the reverse engineering.

Obfuscation and Possible Counter-measures

Trying to build software that is resistant to such attacks borrows a lot from the smart-card industry. But not only.
Here are a few tricks I could think of (far from a complete list of all that can be found).

Armoring Program Branches

The instruction count is extremely efficient for detecting which branch has been taken in code like this:

if (value) ret = foo();
else ret = bar();

with foo() and bar() having different instruction counts. This can be defeated by executing both foo() and bar() regardless of value and deciding afterward what the value of ret should be:

tmp_foo = foo();
tmp_bar = bar();
if (value) ret = tmp_foo;
else ret = tmp_bar;

This technique renders your program much more difficult to guess from a side-channel attack, but also much less efficient. One has to find a proper trade-off.

Countering Timing Attacks

Timing attacks are extremely easy to perform and difficult to work around, because sleep() is not an option (it is too easy to detect in the code and, anyway, you cannot assume a specific processor speed). The programmer has to identify the execution time of each branch of his program and balance each branch with extra useless operations of the same computational cost as those of the other branches. The point is to render each branch indistinguishable from the others based on execution time alone.

Threading Madness

Another way to dilute the side-channel is to massively multi-thread your program. Imagine that each branch of your program is executed in a separate thread, and one variable tells in which thread the program really is (if possible in a cryptic manner). Then side-channel analysis will be much more difficult to perform.

Conclusion and Further Research

Side-channel attacks have been widely under-estimated in software reverse-engineering; they can drastically speed up the reversing of many programs. At the same time, obfuscation techniques exist and have to be developed specifically targeting software reverse-engineering. So don't be surprised if you see more and more novelties related to this field.
_unix.338113
I'm trying to move away from running cron-scheduled jobs as root, so the thought process is to create a system account with no login (/dev/null home, /sbin/nologin shell) to run each cron job we need run. I'm just curious how to give these accounts the proper permissions to operate where they need to without changing the ownership of normal files and folders that are typically restricted to root.

For instance, say I want this system account to write log files of what it's doing to /var/log. However, /var/log/ is owned by root and is set to 755. This process won't be able to create log files there without running as root, correct?

Am I correct in assuming that using Linux kernel capabilities is the best way to do this?
Assigning Privileges to System Accounts
linux;permissions;administration
One way you can achieve this is to put the logs inside a sub-folder under /var/log and then set the permissions on that sub-folder. Another way is to log to syslog with logger and use a filter to redirect the logs to a specific file, e.g.:

# /etc/rsyslog.d/10-myrules.conf
if $programname == ["script1", "script2"] then {
    action(type="omfile" file="/var/log/myscripts/sys.log")
    stop
}

And you should probably also set up a logrotate rule while you're at it.
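For the first approach, the sub-folder only needs the right owner and mode. A sketch (using a temporary directory instead of /var/log so it runs without root; the account name "cronjob" is a hypothetical placeholder):

```shell
# Create a dedicated log directory the unprivileged account can write to.
# In real use this would be /var/log/myscripts; a temp dir is used here
# so the sketch runs unprivileged.
LOGDIR="$(mktemp -d)/myscripts"
mkdir -p "$LOGDIR"
chmod 750 "$LOGDIR"
# chown cronjob:cronjob "$LOGDIR"   # requires root; "cronjob" is hypothetical
stat -c '%a' "$LOGDIR"              # prints the mode: 750
```

With the directory owned by the job's account, the cron job can create and rotate its own files there without touching anything else under /var/log.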
_webapps.24525
Is it possible to add time management in Trello? For example, if we work on a project I want to be able to do monthly reports to see how much time was used/spent on a certain project. How can I set this up in Trello?
Add time management for each project in Trello
trello
null
_unix.264397
I learned that inside double quotes the shell treats everything as literal characters except $, `, and \. But in a command like find -type f -name "*.jpg", the *.jpg is inside double quotes. That means we want * and . treated as plain characters, so the find command should list only regular files literally named *.jpg, with no pathname expansion performed.

If we want pathname expansion, I would think we have to type find -type f -name *.jpg (without double quoting). But the result is the same. Why use double quoting in this command?
Confusion about double quoting
shell;find;quoting;wildcards
There is a subtlety to how wildcard expansion works. Change to a directory which contains no .jpg files and type

echo *.jpg

and it will output

*.jpg

In particular, the string *.jpg is left unmodified. If, however, you change to a directory containing .jpg files, for example two files image1.jpg and image2.jpg, then the echo *.jpg command will now output

image1.jpg image2.jpg

because the shell expands *.jpg before echo runs.

If you type

find . -name *.jpg

and there are no .jpg files in the directory you are in when you type this, then find will receive the arguments ., -name and *.jpg. If, however, you type this command in a directory containing .jpg files, say image1.jpg and image2.jpg, then find will receive the arguments ., -name, image1.jpg and image2.jpg, so will in effect run the command

find . -name image1.jpg image2.jpg

and find will complain. What can be really confusing if you omit the quotes is if there is a single .jpg file (say image1.jpg). Then the wildcard expansion will result in

find . -name image1.jpg

and the find command will find all files whose basename is image1.jpg, silently searching for the wrong name.

Aside: this does lead to a useful bash idiom for seeing if any files match a given pattern:

if [ "$(echo *.jpg)" = "*.jpg" ]; then
  # *.jpg has no matches
else
  # *.jpg has matches
fi

though be warned that this will not work if there is a file called '*.jpg' in the current directory. To be more watertight, you can do

if [ "$(echo *.jpg)" = "*.jpg" ] && [ ! -e "*.jpg" ]; then
  # *.jpg has no matches
else
  # *.jpg has matches
fi

(While not directly relevant to the question, I added this since it illustrates some of the aspects of how wildcard expansion works.)
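The two cases above can be reproduced in a scratch directory (the filenames image1.jpg and image2.jpg are just for illustration):

```shell
# Show the shell's glob expansion with and without double quotes.
dir="$(mktemp -d)"
cd "$dir"
touch image1.jpg image2.jpg
echo *.jpg      # expanded by the shell before echo runs: image1.jpg image2.jpg
echo "*.jpg"    # quoted, so passed to echo literally: *.jpg
```

With find, the same difference decides whether find itself receives the pattern (quoted) or only the already-expanded filenames (unquoted).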
_webapps.57039
I'd like to hide posts written by a specific author in LinkedIn Pulse.
How can I hide posts written by a specific author in LinkedIn Pulse?
linkedin
null
_codereview.37448
Even though it's the first time I'm writing something this big, it feels like I know C# quite well (it is very similar to Java after all). It's been nice to learn LINQ as well and I am very impressed by the feature (which is just like Streams in Java 8), and perhaps I have overused it here (if it's possible to do that).

Class summary

SudokuFactory: Contains static methods to create some Sudoku variations.
SudokuBoard: Contains a collection of SudokuRule and of SudokuTile.
SudokuRule: Whether it's a box, a line, a row, or something entirely different doesn't matter. Contains a collection of SudokuTile whose values must be unique.
SudokuTile: Each tile in the puzzle. Can be blocked (like a hole in the puzzle), remembers its possibleValues, and also contains a value (0 is used for tiles without a value).
SudokuProgress: Used to know what the progress of a solving step was.
Program: Main starting point. Contains tests for seven different Sudokus. All have been verified to be solved correctly.

Since this is the first time I'm using C# and LINQ, please tell me anything. All suggestions welcome. Except for the fact that the method box should be called Box. I'd be especially interested in cases where I could simplify some of the LINQ usage (trust me, there is a lot). I hope you are able to follow all the LINQ queries. I have tried to put some short comments where needed to explain what is happening. If you want an explanation for some part, post a comment and I will explain.

As usual, I have a tendency to make the challenge into something super-flexible with support for a whole lot of more or less unnecessary things.
Some of the possible puzzles that this code can solve are:

A hard classic 9x9 Sudoku with 3x3 boxes that requires more advanced techniques (or in my case, more or less brute force by trial and error)
Nonomino
Hyper Sudoku
Samurai Sudoku
A classic Sudoku of any size with any number of boxes and size of boxes (only completely tested on 9x9 with 3x3 boxes and 4x4 with 2x2 boxes, but any sizes should be possible)

These puzzles (shown as images in the original post) are tested and solved in the code below. One known issue with the implementation: if you input an empty puzzle, it would work for years to find all the possible combinations for it.

SudokuProgress
public enum SudokuProgress { FAILED, NO_PROGRESS, PROGRESS }

SudokuTile
public class SudokuTile{ internal static SudokuProgress CombineSolvedState(SudokuProgress a, SudokuProgress b) { if (a == SudokuProgress.FAILED) return a; if (a == SudokuProgress.NO_PROGRESS) return b; if (a == SudokuProgress.PROGRESS) return b == SudokuProgress.FAILED ? b : a; throw new InvalidOperationException(Invalid value for a); } public const int CLEARED = 0; private int _maxValue; private int _value; private int _x; private int _y; private ISet<int> possibleValues; private bool _blocked; public SudokuTile(int x, int y, int maxValue) { _x = x; _y = y; _blocked = false; _maxValue = maxValue; possibleValues = new HashSet<int>(); _value = 0; } public int Value { get { return _value; } set { if (value > _maxValue) throw new ArgumentOutOfRangeException(SudokuTile Value cannot be greater than + _maxValue.ToString() + . Was + value); if (value < CLEARED) throw new ArgumentOutOfRangeException(SudokuTile Value cannot be zero or smaller. Was + value); _value = value; } } public bool HasValue { get { return Value != CLEARED; } } public string ToStringSimple() { return Value.ToString(); } public override string ToString() { return String.Format(Value {0} at pos {1}, {2}.
, Value, _x, _y, possibleValues.Count); } internal void ResetPossibles() { possibleValues.Clear(); foreach (int i in Enumerable.Range(1, _maxValue)) { if (!HasValue || Value == i) possibleValues.Add(i); } } public void Block() { _blocked = true; } internal void Fix(int value, string reason) { Console.WriteLine(Fixing {0} on pos {1}, {2}: {3}, value, _x, _y, reason); Value = value; ResetPossibles(); } internal SudokuProgress RemovePossibles(IEnumerable<int> existingNumbers) { if (_blocked) return SudokuProgress.NO_PROGRESS; // Takes the current possible values and removes the ones existing in `existingNumbers` possibleValues = new HashSet<int>(possibleValues.Where(x => !existingNumbers.Contains(x))); SudokuProgress result = SudokuProgress.NO_PROGRESS; if (possibleValues.Count == 1) { Fix(possibleValues.First(), Only one possibility); result = SudokuProgress.PROGRESS; } if (possibleValues.Count == 0) return SudokuProgress.FAILED; return result; } public bool IsValuePossible(int i) { return possibleValues.Contains(i); } public int X { get { return _x; } } public int Y { get { return _y; } } public bool IsBlocked { get { return _blocked; } } // A blocked field can not contain a value -- used for creating 'holes' in the map public int PossibleCount { get { return IsBlocked ? 
1 : possibleValues.Count; } }}SudokuRulepublic class SudokuRule : IEnumerable<SudokuTile>{ internal SudokuRule(IEnumerable<SudokuTile> tiles, string description) { _tiles = new HashSet<SudokuTile>(tiles); _description = description; } private ISet<SudokuTile> _tiles; private string _description; public bool CheckValid() { var filtered = _tiles.Where(tile => tile.HasValue); var groupedByValue = filtered.GroupBy(tile => tile.Value); return groupedByValue.All(group => group.Count() == 1); } public bool CheckComplete() { return _tiles.All(tile => tile.HasValue) && CheckValid(); } internal SudokuProgress RemovePossibles() { // Tiles that has a number already IEnumerable<SudokuTile> withNumber = _tiles.Where(tile => tile.HasValue); // Tiles without a number IEnumerable<SudokuTile> withoutNumber = _tiles.Where(tile => !tile.HasValue); // The existing numbers in this rule IEnumerable<int> existingNumbers = new HashSet<int>(withNumber.Select(tile => tile.Value).Distinct().ToList()); SudokuProgress result = SudokuProgress.NO_PROGRESS; foreach (SudokuTile tile in withoutNumber) result = SudokuTile.CombineSolvedState(result, tile.RemovePossibles(existingNumbers)); return result; } internal SudokuProgress CheckForOnlyOnePossibility() { // Check if there is only one number within this rule that can have a specific value IList<int> existingNumbers = _tiles.Select(tile => tile.Value).Distinct().ToList(); SudokuProgress result = SudokuProgress.NO_PROGRESS; foreach (int value in Enumerable.Range(1, _tiles.Count)) { if (existingNumbers.Contains(value)) // this rule already has the value, skip checking for it continue; var possibles = _tiles.Where(tile => !tile.HasValue && tile.IsValuePossible(value)).ToList(); if (possibles.Count == 0) return SudokuProgress.FAILED; if (possibles.Count == 1) { possibles.First().Fix(value, Only possible in rule + ToString()); result = SudokuProgress.PROGRESS; } } return result; } internal SudokuProgress Solve() { // If both are null, return null 
(indicating no change). If one is null, return the other. Else return result1 && result2 SudokuProgress result1 = RemovePossibles(); SudokuProgress result2 = CheckForOnlyOnePossibility(); return SudokuTile.CombineSolvedState(result1, result2); } public override string ToString() { return _description; } public IEnumerator<SudokuTile> GetEnumerator() { return _tiles.GetEnumerator(); } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return GetEnumerator(); } public string Description { get { return _description; } }}SudokuBoard:public class SudokuBoard{ public SudokuBoard(SudokuBoard copy) { _maxValue = copy._maxValue; tiles = new SudokuTile[copy.Width, copy.Height]; CreateTiles(); // Copy the tile values foreach (var pos in SudokuFactory.box(Width, Height)) { tiles[pos.Item1, pos.Item2] = new SudokuTile(pos.Item1, pos.Item2, _maxValue); tiles[pos.Item1, pos.Item2].Value = copy.tiles[pos.Item1, pos.Item2].Value; } // Copy the rules foreach (SudokuRule rule in copy.rules) { var ruleTiles = new HashSet<SudokuTile>(); foreach (SudokuTile tile in rule) { ruleTiles.Add(tiles[tile.X, tile.Y]); } rules.Add(new SudokuRule(ruleTiles, rule.Description)); } } public SudokuBoard(int width, int height, int maxValue) { _maxValue = maxValue; tiles = new SudokuTile[width, height]; CreateTiles(); if (_maxValue == width || _maxValue == height) // If maxValue is not width or height, then adding line rules would be stupid SetupLineRules(); } public SudokuBoard(int width, int height) : this(width, height, Math.Max(width, height)) {} private int _maxValue; private void CreateTiles() { foreach (var pos in SudokuFactory.box(tiles.GetLength(0), tiles.GetLength(1))) { tiles[pos.Item1, pos.Item2] = new SudokuTile(pos.Item1, pos.Item2, _maxValue); } } private void SetupLineRules() { // Create rules for rows and columns for (int x = 0; x < Width; x++) { IEnumerable<SudokuTile> row = GetCol(x); rules.Add(new SudokuRule(row, Row + x.ToString())); } for (int y = 0; y 
< Height; y++) { IEnumerable<SudokuTile> col = GetRow(y); rules.Add(new SudokuRule(col, Col + y.ToString())); } } internal IEnumerable<SudokuTile> TileBox(int startX, int startY, int sizeX, int sizeY) { return from pos in SudokuFactory.box(sizeX, sizeY) select tiles[startX + pos.Item1, startY + pos.Item2]; } private IEnumerable<SudokuTile> GetRow(int row) { for (int i = 0; i < tiles.GetLength(0); i++) { yield return tiles[i, row]; } } private IEnumerable<SudokuTile> GetCol(int col) { for (int i = 0; i < tiles.GetLength(1); i++) { yield return tiles[col, i]; } } private ISet<SudokuRule> rules = new HashSet<SudokuRule>(); private SudokuTile[,] tiles; public int Width { get { return tiles.GetLength(0); } } public int Height { get { return tiles.GetLength(1); } } public void CreateRule(string description, params SudokuTile[] tiles) { rules.Add(new SudokuRule(tiles, description)); } public void CreateRule(string description, IEnumerable<SudokuTile> tiles) { rules.Add(new SudokuRule(tiles, description)); } public bool CheckValid() { return rules.All(rule => rule.CheckValid()); } public IEnumerable<SudokuBoard> Solve() { ResetSolutions(); SudokuProgress simplify = SudokuProgress.PROGRESS; while (simplify == SudokuProgress.PROGRESS) simplify = Simplify(); if (simplify == SudokuProgress.FAILED) yield break; // Find one of the values with the least number of alternatives, but that still has at least 2 alternatives var query = from rule in rules from tile in rule where tile.PossibleCount > 1 orderby tile.PossibleCount ascending select tile; SudokuTile chosen = query.FirstOrDefault(); if (chosen == null) { // The board has been completed, we're done! 
yield return this; yield break; } Console.WriteLine(SudokuTile: + chosen.ToString()); foreach (var value in Enumerable.Range(1, _maxValue)) { // Iterate through all the valid possibles on the chosen square and pick a number for it if (!chosen.IsValuePossible(value)) continue; var copy = new SudokuBoard(this); copy.Tile(chosen.X, chosen.Y).Fix(value, Trial and error); foreach (var innerSolution in copy.Solve()) yield return innerSolution; } yield break; } public void Output() { for (int y = 0; y < tiles.GetLength(1); y++) { for (int x = 0; x < tiles.GetLength(0); x++) { Console.Write(tiles[x, y].ToStringSimple()); } Console.WriteLine(); } } public SudokuTile Tile(int x, int y) { return tiles[x, y]; } private int _rowAddIndex; public void AddRow(string s) { // Method for initializing a board from string for (int i = 0; i < s.Length; i++) { var tile = tiles[i, _rowAddIndex]; if (s[i] == '/') { tile.Block(); continue; } int value = s[i] == '.' ? 0 : (int)Char.GetNumericValue(s[i]); tile.Value = value; } _rowAddIndex++; } internal void ResetSolutions() { foreach (SudokuTile tile in tiles) tile.ResetPossibles(); } internal SudokuProgress Simplify() { SudokuProgress result = SudokuProgress.NO_PROGRESS; bool valid = CheckValid(); if (!valid) return SudokuProgress.FAILED; foreach (SudokuRule rule in rules) result = SudokuTile.CombineSolvedState(result, rule.Solve()); return result; } internal void AddBoxesCount(int boxesX, int boxesY) { int sizeX = Width / boxesX; int sizeY = Height / boxesY; var boxes = SudokuFactory.box(sizeX, sizeY); foreach (var pos in boxes) { IEnumerable<SudokuTile> boxTiles = TileBox(pos.Item1 * sizeX, pos.Item2 * sizeY, sizeX, sizeY); CreateRule(Box at ( + pos.Item1.ToString() + , + pos.Item2.ToString() + ), boxTiles); } } internal void OutputRules() { foreach (var rule in rules) { Console.WriteLine(String.Join(,, rule) + - + rule.ToString()); } }}SudokuFactory:public class SudokuFactory{ private const int DefaultSize = 9; private const int 
SamuraiAreas = 7; private const int BoxSize = 3; private const int HyperMargin = 1; public static IEnumerable<Tuple<int, int>> box(int sizeX, int sizeY) { foreach (int x in Enumerable.Range(0, sizeX)) { foreach (int y in Enumerable.Range(0, sizeY)) { yield return new Tuple<int, int>(x, y); } } } public static SudokuBoard Samurai() { SudokuBoard board = new SudokuBoard(SamuraiAreas*BoxSize, SamuraiAreas*BoxSize, DefaultSize); // Removed the empty areas where there are no tiles var queriesForBlocked = new List<IEnumerable<SudokuTile>>(); queriesForBlocked.Add(from pos in box(BoxSize, BoxSize*2) select board.Tile(pos.Item1 + DefaultSize, pos.Item2 )); queriesForBlocked.Add(from pos in box(BoxSize, BoxSize*2) select board.Tile(pos.Item1 + DefaultSize, pos.Item2 + DefaultSize * 2 - BoxSize)); queriesForBlocked.Add(from pos in box(BoxSize*2, BoxSize) select board.Tile(pos.Item1 , pos.Item2 + DefaultSize)); queriesForBlocked.Add(from pos in box(BoxSize*2, BoxSize) select board.Tile(pos.Item1 + DefaultSize * 2 - BoxSize, pos.Item2 + DefaultSize)); foreach (var query in queriesForBlocked) { foreach (var tile in query) tile.Block(); } // Select the tiles in the 3 x 3 area (area.X, area.Y) and create rules for them foreach (var area in box(SamuraiAreas, SamuraiAreas)) { var tilesInArea = from pos in box(BoxSize, BoxSize) select board.Tile(area.Item1 * BoxSize + pos.Item1, area.Item2 * BoxSize + pos.Item2); if (tilesInArea.First().IsBlocked) continue; board.CreateRule(Area + area.Item1.ToString() + , + area.Item2.ToString(), tilesInArea); } // Select all rows and create columns for them var cols = from pos in box(board.Width, 1) select new { X = pos.Item1, Y = pos.Item2 }; var rows = from pos in box(1, board.Height) select new { X = pos.Item1, Y = pos.Item2 }; foreach (var posSet in Enumerable.Range(0, board.Width)) { board.CreateRule(Column Upper + posSet, from pos in box(1, DefaultSize) select board.Tile(posSet, pos.Item2)); board.CreateRule(Column Lower + posSet, from pos 
in box(1, DefaultSize) select board.Tile(posSet, pos.Item2 + DefaultSize + BoxSize)); board.CreateRule(Row Left + posSet, from pos in box(DefaultSize, 1) select board.Tile(pos.Item1, posSet)); board.CreateRule(Row Right + posSet, from pos in box(DefaultSize, 1) select board.Tile(pos.Item1 + DefaultSize + BoxSize, posSet)); if (posSet >= BoxSize*2 && posSet < BoxSize*2 + DefaultSize) { // Create rules for the middle sudoku board.CreateRule(Column Middle + posSet, from pos in box(1, 9) select board.Tile(posSet, pos.Item2 + BoxSize*2)); board.CreateRule(Row Middle + posSet, from pos in box(9, 1) select board.Tile(pos.Item1 + BoxSize*2, posSet)); } } return board; } public static SudokuBoard SizeAndBoxes(int width, int height, int boxCountX, int boxCountY) { SudokuBoard board = new SudokuBoard(width, height); board.AddBoxesCount(boxCountX, boxCountY); return board; } public static SudokuBoard ClassicWith3x3Boxes() { return SizeAndBoxes(DefaultSize, DefaultSize, DefaultSize / BoxSize, DefaultSize / BoxSize); } public static SudokuBoard ClassicWith3x3BoxesAndHyperRegions() { SudokuBoard board = ClassicWith3x3Boxes(); const int HyperSecond = HyperMargin + BoxSize + HyperMargin; // Create the four extra hyper regions board.CreateRule(HyperA, from pos in box(3, 3) select board.Tile(pos.Item1 + HyperMargin, pos.Item2 + HyperMargin)); board.CreateRule(HyperB, from pos in box(3, 3) select board.Tile(pos.Item1 + HyperSecond, pos.Item2 + HyperMargin)); board.CreateRule(HyperC, from pos in box(3, 3) select board.Tile(pos.Item1 + HyperMargin, pos.Item2 + HyperSecond)); board.CreateRule(HyperD, from pos in box(3, 3) select board.Tile(pos.Item1 + HyperSecond, pos.Item2 + HyperSecond)); return board; } public static SudokuBoard ClassicWithSpecialBoxes(string[] areas) { int sizeX = areas[0].Length; int sizeY = areas.Length; SudokuBoard board = new SudokuBoard(sizeX, sizeY); var joinedString = String.Join(, areas); var grouped = joinedString.Distinct(); // Loop through all the unique 
characters foreach (var ch in grouped) { // Select the rule tiles based on the index of the character var ruleTiles = from i in Enumerable.Range(0, joinedString.Length) where joinedString[i] == ch // filter out any non-matching characters select board.Tile(i % sizeX, i / sizeY); board.CreateRule(Area + ch.ToString(), ruleTiles); } return board; }}Program:static class Program{ [STAThread] static void Main() { SolveFail(); SolveClassic(); SolveSmall(); SolveExtraZones(); SolveHyper(); SolveSamurai(); SolveIncompleteClassic(); } private static void SolveFail() { SudokuBoard board = SudokuFactory.SizeAndBoxes(4, 4, 2, 2); board.AddRow(0003); board.AddRow(0204); // the 2 must be a 1 on this row to be solvable board.AddRow(1000); board.AddRow(4000); CompleteSolve(board); } private static void SolveExtraZones() { // http://en.wikipedia.org/wiki/File:Oceans_Hypersudoku18_Puzzle.svg SudokuBoard board = SudokuFactory.ClassicWith3x3BoxesAndHyperRegions(); board.AddRow(.......1.); board.AddRow(..2....34); board.AddRow(....51...); board.AddRow(.....65..); board.AddRow(.7.3...8.); board.AddRow(..3......); board.AddRow(....8....); board.AddRow(58....9..); board.AddRow(69.......); CompleteSolve(board); } private static void SolveSmall() { SudokuBoard board = SudokuFactory.SizeAndBoxes(4, 4, 2, 2); board.AddRow(0003); board.AddRow(0004); board.AddRow(1000); board.AddRow(4000); CompleteSolve(board); } private static void SolveHyper() { // http://en.wikipedia.org/wiki/File:A_nonomino_sudoku.svg string[] areas = new string[]{ 111233333, 111222333, 144442223, 114555522, 444456666, 775555688, 977766668, 999777888, 999997888 }; SudokuBoard board = SudokuFactory.ClassicWithSpecialBoxes(areas); board.AddRow(3.......4); board.AddRow(..2.6.1..); board.AddRow(.1.9.8.2.); board.AddRow(..5...6..); board.AddRow(.2.....1.); board.AddRow(..9...8..); board.AddRow(.8.3.4.6.); board.AddRow(..4.1.9..); board.AddRow(5.......7); CompleteSolve(board); } private static void SolveSamurai() { // 
http://www.freesamuraisudoku.com/1001HardSamuraiSudokus.aspx?puzzle=42 SudokuBoard board = SudokuFactory.Samurai(); board.AddRow(6..8..9..///.....38..); board.AddRow(...79....///89..2.3..); board.AddRow(..2..64.5///...1...7.); board.AddRow(.57.1.2..///..5....3.); board.AddRow(.....731.///.1.3..2..); board.AddRow(...3...9.///.7..429.5); board.AddRow(4..5..1...5....5.....); board.AddRow(8.1...7...8.2..768...); board.AddRow(.......8.23...4...6..); board.AddRow(//////.12.4..9.//////); board.AddRow(//////......82.//////); board.AddRow(//////.6.....1.//////); board.AddRow(.4...1....76...36..9.); board.AddRow(2.....9..8..5.34...81); board.AddRow(.5.873......9.8..23..); board.AddRow(...2....9///.25.4....); board.AddRow(..3.64...///31.8.....); board.AddRow(..75.8.12///...6.14..); board.AddRow(.......2.///.31...9..); board.AddRow(..17.....///..7......); board.AddRow(.7.6...84///8...7..5.); CompleteSolve(board); } private static void SolveClassic() { var board = SudokuFactory.ClassicWith3x3Boxes(); board.AddRow(...84...9); board.AddRow(..1.....5); board.AddRow(8...2146.); board.AddRow(7.8....9.); board.AddRow(.........); board.AddRow(.5....3.1); board.AddRow(.2491...7); board.AddRow(9.....5..); board.AddRow(3...84...); CompleteSolve(board); } private static void SolveIncompleteClassic() { var board = SudokuFactory.ClassicWith3x3Boxes(); board.AddRow(...84...9); board.AddRow(..1.....5); board.AddRow(8...2.46.); // Removed a 1 on this line board.AddRow(7.8....9.); board.AddRow(.........); board.AddRow(.5....3.1); board.AddRow(.2491...7); board.AddRow(9.....5..); board.AddRow(3...84...); CompleteSolve(board); } private static void CompleteSolve(SudokuBoard board) { Console.WriteLine(Rules:); board.OutputRules(); Console.WriteLine(Board:); board.Output(); var solutions = board.Solve().ToList(); Console.WriteLine(Base Board Progress:); board.Output(); Console.WriteLine(--); Console.WriteLine(--); Console.WriteLine(All + solutions.Count + solutions:); var i = 1; foreach (var 
solution in solutions) { Console.WriteLine(----------------); Console.WriteLine(Solution + i++.ToString() + / + solutions.Count + :); solution.Output(); } }}
SudokuSharp Solver with advanced features
c#;linq;community challenge;sudoku
Impressive. I mean it.Couple observations:Your enums...public enum SudokuProgress { FAILED, NO_PROGRESS, PROGRESS }Should be:public enum SudokuProgress { Failed, NoProgress, Progress }When the first thing you see is this:public class SudokuBoard{ public SudokuBoard(SudokuBoard copy) { _maxValue = copy._maxValue; tiles = new SudokuTile[copy.Width, copy.Height]; CreateTiles();you wonder where _maxValue and tiles come from, and why _maxValue (whose naming convention is that of a private field) can be accessed like that - I would expose it as a get-only property. Accessing private fields from another object doesn't seem instinctively right to me.Speaking of the devil:private int _maxValue;This line belongs just above the constructor that's using it (it's 30-some lines below its first usage).This box method which should be named Box (actually box is a bad name because it's the name of a CIL instruction that your C# compiles to), is returning a not-so-pretty Tuple<T1,T2> - The framework has a type called Point which has X and Y properties; if that's not appropriate, I don't know what is. Side note, Point is a value type, so there's no boxing actually going on if you use it over a Tuple, which is a reference type (incurs boxing). Bottom line, use a Point and call that method something else:public static IEnumerable<Point> Box(int sizeX, int sizeY){ foreach (int x in Enumerable.Range(0, sizeX)) { foreach (int y in Enumerable.Range(0, sizeY)) { yield return new Point(x, y); } }}You want to abuse LINQ? 
How about taking this:

private SudokuTile[,] tiles;
private void CreateTiles()
{
    foreach (var pos in SudokuFactory.box(tiles.GetLength(0), tiles.GetLength(1)))
    {
        tiles[pos.Item1, pos.Item2] = new SudokuTile(pos.Item1, pos.Item2, _maxValue);
    }
}

And turning it into that:

private Dictionary<Point, SudokuTile> tiles;
private void CreateTiles(int width, int height) // a Dictionary has no GetLength, so the dimensions are passed in
{
    tiles = SudokuFactory
        .Box(width, height)
        .Select(p => new KeyValuePair<Point, SudokuTile>(p, new SudokuTile(p.X, p.Y, _maxValue)))
        .ToDictionary(kvp => kvp.Key, kvp => kvp.Value);
}

It takes the IEnumerable<Point> returned by the modified Box method, selects each point into the Key of a KeyValuePair and a new SudokuTile as the value, and then ToDictionary turns the enumerable into a dictionary, which gets assigned to tiles. (C#: 1, Java: 0) Lines of code: 1.

In SudokuRule, your private fields can be marked as readonly.

This is only a partial review, I'll write more after I've implemented my own solution - I purposely haven't looked at your puzzle-resolution code :)

Overall looks quite good (except for all that static stuff that doesn't need to be, but that's dependency-injection me talking; it doesn't make it any worse C#, but testing might be more enjoyable with non-static dependencies). It's great that you gave C# a bit of lovin' this week. I know Visual Studio isn't Eclipse, but I can assure you that VS with ReSharper would have made it a similar experience (and could have shown you some LINQ tricks!), at least in terms of code inspections (R# makes VS actually better than Eclipse...
but I'm biased, and drifting, so I'll keep it at that!)...I like how your Solve() method yield returns all found solutions.That said, if your entire project is compiled into 1 single assembly (.exe/.dll), your usage of the internal access modifier is equivalent to public - internal basically means assembly scope, so an internal type or method cannot be accessed from another assembly; if there's no other assembly, everything in the project can see it, so I don't see a point for internal here.Not much left to say, except perhaps method IsValuePossible might be better off as IsPossibleValue, but that's mere nitpicking. Very neat, I'm jealous.One last thing - this piece of list-initialization code:var queriesForBlocked = new List<IEnumerable<SudokuTile>>();queriesForBlocked.Add(from pos in box(BoxSize, BoxSize*2) select board.Tile(pos.Item1 + DefaultSize, pos.Item2 ));queriesForBlocked.Add(from pos in box(BoxSize, BoxSize*2) select board.Tile(pos.Item1 + DefaultSize, pos.Item2 + DefaultSize * 2 - BoxSize));queriesForBlocked.Add(from pos in box(BoxSize*2, BoxSize) select board.Tile(pos.Item1 , pos.Item2 + DefaultSize));queriesForBlocked.Add(from pos in box(BoxSize*2, BoxSize) select board.Tile(pos.Item1 + DefaultSize * 2 - BoxSize, pos.Item2 + DefaultSize));Could use a collection initializer and be written like this:var queriesForBlocked = new List<IEnumerable<SudokuTile>> { { box(BoxSize, BoxSize*2).Select(pos => board.Tile(pos.Item1 + DefaultSize, pos.Item2)) }, { box(BoxSize, BoxSize*2).Select(pos => board.Tile(pos.Item1 + DefaultSize, pos.Item2 + DefaultSize * 2 - BoxSize)) }, { box(BoxSize*2, BoxSize).Select(pos => board.Tile(pos.Item1, pos.Item2 + DefaultSize)) }, { box(BoxSize*2, BoxSize).Select(pos => board.Tile(pos.Item1 + DefaultSize * 2 - BoxSize, pos.Item2 + DefaultSize)) } };Each item in the collection initializer actually calls the .Add method anyway, so it's completely equivalent. Except it's 1 single instruction now.
_softwareengineering.84908
So much of development is now done online to take advantage of interconnectivity and shared resources. In a prolonged Internet outage, how can one cope with the lack of that connection? Are there ways to replicate or work around the innumerable benefits the Internet adds to development?
How can I program effectively during an Internet outage?
internet
There was programming long before the internet. We had books, we had periodicals and we met in real life maybe more often than we do today.

First question would be: What does 'prolonged' mean? Two or three days? If you feel paranoid about such a situation, you can secure the core of your working environment by having local copies of the most important websites you need as reference and manuals. In addition you could make sure to download some tools, plugins and other material you may use some day.

If you don't have such backups, it's just a question of organizing your work. There is always some minor stuff I wanted to do all the time, like adding a few more tests, finding a less important bug or extending the manual of my application to cover the latest features I added. Or simply adding another new feature for which I don't need material from the web as reference.

But a week? And how large do you think the affected area would be? Your company? A city? A country? The whole world? If the internet went down for the whole of my country (Germany) for more than a few days, I guess we would have other problems (civil war?) than to worry that much about details of our work. Though the civil wars in Northern Africa have shown that it still is an important piece of the infrastructure, and in some cases I know that for some people work went on while there was fighting in the next city.

What about the phone lines? Would they still work? If yes, you can fall back to using a modem (as back in the 90s) to stay in some contact with customers, to send them updates or exchange some email.
Though where would you get a modem soon enough, especially if everybody wants to buy one? If we had to expect this to last a very long time, we would need to restructure the complete infrastructure around whatever still works.

Assuming this would happen to your company only (which seems more reasonable), maybe because of some prolonged construction work in its building, then you should prepare with backups of important material. In addition I would allow every developer to take an hour or two off each day to go to the next internet cafe. Or ask a friendly company in the neighborhood if we may be allowed to use their resources, maybe rent some office space in a nearby building, where our workers can get email and stay in contact. Prepare customers in advance, so they know that responses to email may take longer than usual. Buy a big load of smartphones, so people can still have minimum access to important material.

Actually something like this happened to a company I worked for. It was the launch day of my very first website, and thanks to a construction site nearby, the main cable for our office building was cut that very morning. Launch was to be at 12:00, with no messing about that time, since it had been advertised for months by the customer. We took our laptops and went to Amsterdam Central Station (a five-minute walk) and launched it from there (adding better coffee and breakfast than we had in our office). This worked well enough for a few hours. Though if we had had bugs in the code (luckily not) it could have become difficult to fix them.

In the same company we had, for several weeks, an internet cable from our neighbor's office running into our room, since they were waiting for their slow provider to fix their connection.
_unix.216105
I want to force a fsck run on my root filesystem on my Gentoo system running systemd. I've tried:

adding an empty forcefsck file at the root of the filesystem which I want to check, e.g. touch /forcefsck
adding fsck.mode=force as a kernel boot parameter

Nothing worked so far. What's the right approach?
How to force a file system check on the next boot of the root file system on Gentoo running systemd?
systemd;gentoo;fsck
null
_softwareengineering.208154
I watched Stuart Sierra's talk Thinking In Data and took one of the ideas from it as a design principle in this game I'm making. The difference is he's working in Clojure and I'm working in JavaScript. I see some major differences between our languages in that:

Clojure is idiomatically functional programming
Most state is immutable

I took the idea from the slide Everything is a Map (from 11 minutes, 6 seconds to > 29 minutes in). Some things he says are:

Whenever you see a function that takes 2-3 arguments, you can make a case for turning it into a map and just passing a map in. There are a lot of advantages to that:

You don't have to worry about argument order
You don't have to worry about any additional information. If there are extra keys, that's not really our concern. They just flow through, they don't interfere.
You don't have to define a schema

As opposed to passing in an Object, there's no data hiding. But he makes the case that data hiding can cause problems and is overrated:

Performance
Ease of implementation
As soon as you communicate over the network or across processes, you have to have both sides agree on the data representation anyway. That's extra work you can skip if you just work on data.

Most relevant to my question, at 29 minutes in: Make your functions composable. Here's the code sample he uses to explain the concept:

;; Bad
(defn complex-process []
  (let [a (get-component @global-state)
        b (subprocess-one a)
        c (subprocess-two a b)
        d (subprocess-three a b c)]
    (reset! global-state d)))

;; Good
(defn complex-process [state]
  (-> state
      subprocess-one
      subprocess-two
      subprocess-three))

I understand the majority of programmers aren't familiar with Clojure, so I'll rewrite this in imperative style:

;; Good
def complex-process(State state)
  state = subprocess-one(state)
  state = subprocess-two(state)
  state = subprocess-three(state)
  return state

Here are the advantages:

Easy to test
Easy to look at those functions in isolation
Easy to comment out one line of this and see what the outcome is by removing a single step
Each subprocess could add more information onto the state. If subprocess one needs to communicate something to subprocess three, it's as simple as adding a key/value. No boilerplate to extract the data you need out of the state just so that you can save it back in. Just pass in the whole state and let the subprocess assign what it needs.

Now, back to my situation: I took this lesson and applied it to my game. That is, almost all of my high-level functions take and return a gameState object. This object contains all the data of the game, e.g. a list of badGuys, a list of menus, the loot on the ground, etc. Here's an example of my update function:

update(gameState)
  ...
  gameState = handleUnitCollision(gameState)
  ...
  gameState = handleLoot(gameState)
  ...

What I'm here to ask about is: have I created some abomination that perverts an idea that is only practical in a functional programming language? JavaScript isn't idiomatically functional (though it can be written that way) and it's really challenging to write immutable data structures. One thing that concerns me is that he assumes each of those subprocesses is pure. Why does that assumption need to be made? It's rare that any of my functions are pure. Do these ideas fall apart if you don't have immutable data?
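For what it's worth, the shape of the pattern is small enough to show runnably. A minimal sketch (Python here for brevity; the subprocess names are placeholders, not real game code):

```python
def subprocess_one(state):
    # Each step takes the whole state and returns a new one instead of mutating it.
    return {**state, "a": 1}

def subprocess_two(state):
    return {**state, "b": state["a"] + 1}

def subprocess_three(state):
    # Reads keys added by earlier steps -- no extra plumbing required.
    return {**state, "c": state["a"] + state["b"]}

def complex_process(state):
    # Thread one state map through all the subprocesses in order.
    for step in (subprocess_one, subprocess_two, subprocess_three):
        state = step(state)
    return state

print(complex_process({}))  # {'a': 1, 'b': 2, 'c': 3}
```

Commenting out any one `step` line removes exactly one stage, which is the "easy to comment out one line" advantage above.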
Honestly, I've been working on this code for months and it's been great. I feel like I'm getting all the advantages he's claimed. My code is super easy for me to reason about. But I'm a one-man team, so I have the curse of knowledge.

Update

I've been coding 6+ months with this pattern. Usually by this time I forget what I've done and that's where "did I write this in a clean way?" comes into play. If I haven't, I'd really struggle. So far, I'm not struggling at all.

I understand how another set of eyes would be necessary to validate its maintainability. All I can say is I care about maintainability first and foremost. I'm always the loudest evangelist for clean code no matter where I work.

I want to reply directly to those that already have a bad personal experience with this way of coding. I didn't know it then, but I think we're really talking about two different ways of writing code. The way I've done it appears to be more structured than what others have experienced. When someone has a bad personal experience with "Everything is a map" they talk about how hard it is to maintain because:

You never know the structure of the map that the function requires
Any function can mutate the input in ways you'd never expect. You have to look all over the code base to find out how a particular key got into the map or why it disappeared.

For those with such an experience, perhaps the code base was "Everything takes 1 of N types of maps." Mine is "Everything takes 1 of 1 type of map." If you know the structure of that 1 type, you know the structure of everything. Of course, that structure usually grows over time. That's why...

There's one place to look for the reference implementation (i.e. the schema). This reference implementation is code the game uses, so it can't get out of date. As for the second point, I don't add/remove keys to the map outside of the reference implementation, I just mutate what's already there.
I also have a large suite of automated tests. If this architecture eventually collapses under its own weight, I'll add a second update. Otherwise, assume everything is going well :)
Everything is a Map, am I doing this right?
design;design patterns;architecture;functional programming
null
_softwareengineering.294908
How would you relate the indexes of an array to an enumerator without leaving the chance of mismatch? Example:

public enum difficulties
{
    easy,
    medium,
    hard
}

public List<Lobby> easyLobbies = new List<Lobby>();
public List<Lobby> mediumLobbies = new List<Lobby>();
public List<Lobby> hardLobbies = new List<Lobby>();
public List<Lobby>[] lobbiesArray;

public ClassConstructor()
{
    // Index order should match enumerator
    lobbiesArray = new List<Lobby>[] { easyLobbies, mediumLobbies, hardLobbies };
}

List<Lobby> lobbies = lobbiesArray[(int)difficulties.hard];

Because this enumerator and array are seemingly unlinked, it is not obvious that lobbiesArray should follow any order. What is a better way to approach this?
Relating an array of objects to an enumerator
c#
You are using a wrong data structure. In your case, you may use a dictionary where the keys are the values from the enum, and the values are the actual lists:

var lobbies = new Dictionary<Difficulty, List<Lobby>>
{
    { Difficulty.Easy, easyLobbies },
    { Difficulty.Medium, mediumLobbies },
    { Difficulty.Hard, hardLobbies },
};

var currentLobbies = lobbies[Difficulty.Hard];

A few notes:

An array is mostly always a wrong data structure in C#. Don't use it, unless you are perfectly certain that you need the specific characteristics of an array.

Unless your team has a well-established style convention (and the inconsistencies in your code make me think that there are none), stick with the standards. This means enum Difficulties, with a capital D. The members of an enum start with a capital too. You can use StyleCop to check for other violations (like the lack of a new line before the opening curly bracket).

Since your enum doesn't contain flags, its name should be Difficulty, not Difficulties. When you use plural, it means that you can use multiple values at once. More on flags here.

lobbiesArray is a wrong name. You shouldn't have types in the names of the variables. Visual Studio makes it very easy to determine the type of a given variable, so you don't need Hungarian notation or similar constructs.
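The same idea carries over to other languages. As a quick illustration (a Python sketch; the type and lobby names are made up), a mapping keyed by the enum itself never depends on declaration order:

```python
from enum import Enum

class Difficulty(Enum):
    EASY = "easy"
    MEDIUM = "medium"
    HARD = "hard"

# One list per difficulty; no parallel array whose index order must match the enum.
lobbies = {d: [] for d in Difficulty}

lobbies[Difficulty.HARD].append("lobby-1")
print(lobbies[Difficulty.HARD])  # ['lobby-1']
```

Reordering the enum members changes nothing here, which is exactly the mismatch the question worries about.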
_codereview.48438
I want to implement the following function:// Return true if and only if 's' is numeric including// leading positive/negative sign, decimal point.bool isnumeric( const char * s );It is somewhat similar to strtol() but I don't need to return the number.My approach is to count various things unless I can bail out:bool isnumeric( char const * str ) { if( !str ) { return false; } int signs = 0; int decimals = 0; int digits = 0; int digitsAfterDecimal = 0; for( char const * p = str; *p; ++p ) { if( (*p == '+') || (*p == '-') ) { if( (decimals > 0) || (digits > 0) ) { return false; } signs += 1; if( signs == 2 ) { return false; } } else if( *p == '.' ) { decimals += 1; if( decimals == 2 ) { return false; } } else if( ! isdigit( *p ) ) { return false; } else { digits += 1; if( decimals > 0 ) { digitsAfterDecimal += 1; } } } return (decimals > 0) ? ((digits > 0) && (digitsAfterDecimal > 0)) : (digits > 0) ;}I also have the following tests:void test_isnumeric() { assert( isnumeric( 42 ) ); assert( isnumeric( 42.0 ) ); assert( isnumeric( 42.56 ) ); assert( isnumeric( +42 ) ); assert( isnumeric( .42 ) ); assert( isnumeric( +.42 ) ); assert( ! isnumeric( 42. ) ); assert( ! isnumeric( ++42 ) ); assert( ! isnumeric( +. ) ); assert( ! isnumeric( 4+ ) );}int main( void ) { test_isnumeric();}To make it easy to clone and modify, the full code is available here.Please comment on design, structuring, test coverage etc. Mentioning failing tests are most welcome.
Test if string is numeric
c;parsing;unit testing;validation;fixed point
null
_webapps.41035
Google allows me to make the file private, but I am not sure that making it private has any effect.
Can I have private files in public folders?
sharing;google drive
I created a public folder. When I attempt to create a new document in that folder, Google Drive tells me: "Do you want to create the element in a shared folder? The created element will have the same sharing settings as the selected folder."

This suggests that I can't have a private document in a shared folder. But what if I created the document first, then made the folder public?

So I made the folder private again, and created a document in it. Then I reverted the folder to public. Inspecting the document's sharing settings, it is now "Anyone with the link". Then I changed the document's sharing setting to "Private".

Opening an incognito window in Chrome and pasting in the link to the document gives me the Google login form.

I also created a new document in the folder, without altering its sharing settings. This is viewable in the incognito window.

Visiting the folder in the incognito window gives me a list containing only the public document. When logged in, I see both documents.

So yes, it does seem you can have a private document in a public folder. The folder's setting merely acts as a default for documents in that folder with no explicit setting.
_unix.375946
I am trying to prevent a script from being invoked remotely. Users need to log in via SSH first and then run the script:

ssh remote-server script.sh should fail

ssh remote-server, then after login:

./script.sh should work

I can change the ownership and apply chmod, but then the remote user can use ssh remote-server sudo -u newuser ./script.sh. I am using an AWS EC2 instance.
Prevent script file to invoke remotely using SSH
ssh
null
_webmaster.50690
I'm using WooCommerce and have selected the shop page as homepage in WordPress settings. Therefore http://www.example.com and http://www.example.com/shop are one and the same page. In this case, what should I do? Use a canonical element in the header or a 301 redirect? I have read several things about both, but I'm unable to judge which is best for this situation. I'm here to seek some advice from SEO experts.
SEO advice for WordPress WooCommerce
seo;wordpress;301 redirect;duplicate content;canonical url
If you want the shop webpage as the homepage and the two webpages have the same content, you should apply a 301 redirect from http://www.example.com to http://www.example.com/shop. You need to do this to avoid duplicate content.

Moreover, I think it's useless to show two webpages with the same content to your visitors; it's a little bit confusing. That's why you should apply a 301 redirect instead of rel=canonical.
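If the site runs on Apache with mod_rewrite enabled (the usual WordPress setup), the redirect can be a one-line rule in the site's .htaccess — a sketch only, placed before WordPress's own rewrite rules; adjust the path to your actual shop URL:

```apache
# 301-redirect only the bare homepage to /shop (illustrative example)
RewriteEngine On
RewriteRule ^$ /shop/ [R=301,L]
```

Note that `^$` matches only the empty path (the homepage), so the rest of the site is unaffected; a blanket `Redirect 301 /` would prefix-match every URL.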
_unix.159344
I am trying to better understand the network setup in my machine.

Host Machine Setup

I have a wireless interface (wlan0) on my host machine which has the IP address 192.168.1.9. The default gateway of this host is the router which goes to the outside world through my ISP, whose IP address is 192.168.1.1. The route -n command on my host machine returns the output:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 wlan0
169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 wlan0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 wlan0
192.168.1.160   0.0.0.0         255.255.255.224 U     0      0        0 virbr2

Guest Machine Setup

Now, I set up a guest OS in KVM as below. The KVM guest is in a sub-network with the details 192.168.1.160/27. The DHCP start is 192.168.1.176 and the DHCP end is 192.168.1.190. I also ran the below command for my KVM configuration to work:

arp -i wlan0 -Ds 192.168.1.9 wlan0 pub

From the guest OS, I see that my IP address is 192.168.1.179. The route -n command in the guest machine returns the output:

Kernel IP routing table
Destination     Gateway         Genmask
0.0.0.0         192.168.1.161   0.0.0.0
192.168.1.160   0.0.0.0         255.255.255.224

How can I make the guest OS interact with the outside world?

EDIT

This is the output of virsh net-list --all:

ramesh@ramesh-pc:~$ virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 arpbr0               inactive   yes           yes
 default              active     yes           yes
 proxyArp             active     yes           yes
Set up the guest network in KVM to interact with the outside world (google.com)
networking
I would like to thank user slm for guiding me in the right direction in setting up the guest network in KVM. I will add the screenshots to the answer so that it will be more informative. I assume the virt-manager package is installed and also that the host machine is set up with the necessary packages for KVM to work.

Preparing the Network For Guest to Host Interaction

The main step in KVM is setting up the network. If the machine is not available on the network, then it serves no purpose, be it physical or virtual.

Type virt-manager in the terminal. The console would show up as below.

Click on Edit -> Connection Details and a new screen would pop up as below.

Click on the Virtual Networks tab and from there click on the + button to add a new network for the KVM guests.

Click on Forward and we would be presented with the below screen. The IPv4 address range we choose here is completely up to us, and we could adjust this step to suit our actual needs.

After we click on Forward in the above screen, we would be presented with the below screen. In this step, it basically shows the address space available to us.

In this step, choose forwarding to a physical network and select the host's network interface, which will let the guests interact with the outside world.

After the above step, we are almost done and we would be presented with the below screen, which is kind of a review of all the details we chose so far.

Adding this new device to our Guest OS

From the initial screen of virt-manager, click on Open and we will be presented with a screen as below.

From the above screen, click on the i to open up another screen as below.

Click on Add Hardware and select Network. In the Network tab, select the host device as our newly created network from the previous step and click on Finish as shown in the below screen.

Testing in the guest OS

Now, inside the guest OS, make sure that you are able to ping the host machine and an outside network such as Google.
If the ping succeeds, then we have successfully set up our network in the guest OS.

References

The reference material used to set up the guest network
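For reference, the GUI steps above roughly correspond to defining a forwarded network on the command line with virsh (virsh net-define guestnat.xml, then virsh net-start guestnat and virsh net-autostart guestnat). The name, bridge, and address range below are illustrative examples only, not the exact values from the screenshots:

```xml
<!-- Illustrative libvirt network definition; adjust name, bridge and addresses. -->
<network>
  <name>guestnat</name>
  <forward mode="nat"/>
  <bridge name="virbr10" stp="on" delay="0"/>
  <ip address="192.168.100.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.100.128" end="192.168.100.254"/>
    </dhcp>
  </ip>
</network>
```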
_unix.39205
I have embedded FreeNAS, version 8.2 beta. Now I would like to install OpenVPN, of course persisting between updates. I have managed to get the program itself to persist by using:

mount -wu /
pkg_add -r openvpn
mount -ro /

Also I got the openvpn_enable=YES line to persist by editing /conf/base/etc/rc.conf. My problem is how to persist the /usr/local/etc/openvpn folder, with my openvpn.conf configuration and key files therein. How can I persist those?
How to persist a file in /usr/local in embedded FreeNAS
openvpn;freenas
null
_unix.22235
I have a lot of PDF files (3308) on which I must apply four steps:

1) I have to convert all of them to jpg. I found this little script on the web with ImageMagick: batch converting pdf to jpg. I want to do this, but keeping my files' names the same as before: foo.pdf => foo.jpg. Also I would like all images to be saved in a folder named scenes.
2) Then I have to resize to 612x792.
3) Then I have to create thumbnails in 255x330.
4) Finally I have to rename them. Indeed, I have a CSV file which contains their names and new names.

Here is an example of some lines. Each line corresponds to the actual file name, a comma, then the new name. There are 3308 rows, one per file:

current_name,new_name
foo,bar
PS130_1060,55-large

As you can see, the extensions are not shown because both files are in jpg format. I am completely lost; I do not know whether to use 4 scripts or if it is possible in one script. I often work in PHP, but I wanted to do a bit of bash for a change, though it's a bit over my head. Can you help me?
Convert pdf to jpg keeping the same name; resize & create thumbs then rename?
bash;image manipulation;imagemagick
You can start like this:

for i in "$@"; do
    dst=${i%pdf}jpg
    convert "$i" -resize 612x792 "$dst"
    convert "$i" -resize 255x330 "${i%.pdf}_thumb.jpg"
done

And call it like

$ bash my_script.sh *.pdf

For renaming you can use another script. I don't understand your example .csv file. Does it contain 3 lines for 3 files? Ok, assuming this is the case, you can rename the files with the following command line:

$ awk -F, '{ system("echo mv \"" $1 "\" \"" $2 "\"") }' myrename.csv

Awk executes the echo ... command for each line; $1 is the value of the first field of a line and $2 is the value of the 2nd field. The quoting \" is needed in case a filename contains spaces. -F, tells awk to use a comma as field separator. Once you have tested this command, you can remove the echo to do the real renaming of files. You can add -n to mv to avoid accidental overwrites of existing files.
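As an alternative to the awk one-liner, the renaming step can also be scripted in Python — a sketch, assuming the CSV has the current_name,new_name header shown in the question and the files carry a .jpg extension:

```python
import csv
import os

def rename_from_csv(csv_path, directory=".", ext=".jpg", dry_run=True):
    """Rename files listed in a 'current_name,new_name' CSV (header included)."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            src = os.path.join(directory, row["current_name"] + ext)
            dst = os.path.join(directory, row["new_name"] + ext)
            if dry_run:
                print("would rename", src, "->", dst)
            elif os.path.exists(src) and not os.path.exists(dst):
                # Only rename when the source exists and the target is free,
                # to avoid accidental overwrites (same idea as mv -n).
                os.rename(src, dst)
```

Run rename_from_csv("myrename.csv") first to preview, then pass dry_run=False once the output looks right.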