Q: How do I write raw pixel data and save it as a bitmap?

My problem is this: I want to create a plain white 24-bit bitmap, but every time I write the BMP file I get black/white stripes instead. I don't understand why. Maybe I am skipping some bytes? If you want more information on the code, just ask.

Setup code:

```cpp
void setup_settings( )
{
    // image information
    pic.infoHeader.biSize = sizeof(BMP_InfoHeader);
    pic.infoHeader.biBitCount = 24;
    pic.infoHeader.biWidth = WIDTH;   // width in pixels
    pic.infoHeader.biHeight = HEIGH;  // height in pixels
    pic.infoHeader.biPlanes = 1;
    pic.infoHeader.biCompression = 0;
    pic.infoHeader.biSizeImage = WIDTH * HEIGH * (pic.infoHeader.biBitCount/8);
    pic.infoHeader.biXPelsPerMeter = 0;
    pic.infoHeader.biYPelsPerMeter = 0;
    pic.infoHeader.biClrUsed = 0;
    pic.infoHeader.biClrInportant = 0;

    pic.fileHeader.bfType[0] = 'B';
    pic.fileHeader.bfType[1] = 'M';
    pic.fileHeader.bfReservered1 = pic.fileHeader.bfReservered2 = 0;
    pic.fileHeader.bfOffBits = sizeof(BMP_FileHeader) + pic.infoHeader.biSize;
}
```

The definition of my SaveBitmapFile function is:

```cpp
int SaveBitmapFile(const std::string filename, bit24* image)
{
    // save the file
    std::ofstream writer(filename.c_str(), std::ofstream::binary);
    if(!writer.is_open()){
        printf("Error: While Writing\n");
        return -1;
    }
    writer.write(reinterpret_cast<char *>(&pic.fileHeader), sizeof(BMP_FileHeader));
    writer.write(reinterpret_cast<char *>(&pic.infoHeader), sizeof(BMP_InfoHeader));
    writer.write(reinterpret_cast<char *>(&image[0]), pic.infoHeader.biSizeImage);
    writer.close();
    return 0;
}
```

My structures:

```cpp
#pragma pack(1)
typedef struct{
    uint32_t value : 24;
}bit24;
#pragma pack(0)

// the image
#pragma pack(1)
typedef struct{
    unsigned int Width;
    unsigned int Heigh;
    bit24* RGB;
}Image;
#pragma pack(0)

typedef struct {
    BMP_FileHeader fileHeader;
    BMP_InfoHeader infoHeader;
    Image data;
}BMP_Data;
```

My main source code (pic is of type BMP_Data — sorry if this is really messy):
```cpp
int main( int argc, char *argv[] )
{
    setup_settings();
    pic.data.Heigh = pic.infoHeader.biHeight;
    pic.data.Width = pic.infoHeader.biWidth;

    int bytesPerRGB = (pic.infoHeader.biBitCount/8);
    // padded bytes?
    int paddedBytes = ( pic.data.Width * bytesPerRGB) % 4;
    printf("PaddedBytes: %d\n", paddedBytes);

    pic.data.RGB = new bit24[ pic.data.Heigh * pic.data.Width * bytesPerRGB];
    uint8_t r,g,b;
    r = 0xFF;
    g = 0xFF;
    b = 0xFF;
    /*
    for( unsigned int y = 0; y < pic.data.Heigh; y++)
        for( unsigned int x = 0; x < pic.data.Width; x++)
        {
            pic.data.RGB[x + (y*pic.data.Width )].value = ( (b << 0) | (g << 8) | (r << 16) );
        }
    */
    for( unsigned int i = 0; i < pic.data.Heigh * pic.data.Width * bytesPerRGB; i+=3){
        pic.data.RGB[i].value = ( (b << 0) | (g << 8) | (r << 16) );
    }

    SaveBitmapFile(FileName, pic.data.RGB);
    delete [] pic.data.RGB;
    return 0;
}
```

A: Okay, I have found the problem. It worked after I changed bit24 from

```cpp
#pragma pack(1)
typedef struct{
    uint32_t value : 24;
}bit24;
#pragma pack(0)
```

to

```cpp
typedef struct{
    uint8_t blue;
    uint8_t green;
    uint8_t red;
}RGB_Data;
```

and made a small change in main:

```cpp
for( unsigned int i = 0; i < pic.data.Heigh * pic.data.Width * bytesPerRGB; i++){
    pic.data.RGB[i].blue  = b;
    pic.data.RGB[i].green = g;
    pic.data.RGB[i].red   = r;
}
```

It worked like a charm. Thank you for your help! (In hindsight the stripes make sense: bit24 was already one 3-byte element per pixel, but the original loop stepped the index by 3 as if it were a raw byte array, so only every third pixel was written.)
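For readers who want the file layout spelled out, here is a minimal sketch in a different language (Python's standard library only) that builds a white 24-bit BMP in memory. It applies the per-row padding the question computes but never uses; the header fields follow the standard BITMAPFILEHEADER/BITMAPINFOHEADER layouts, and the function name is my own.

```python
import struct

def make_white_bmp(width, height):
    """Build a bottom-up, 24-bit, all-white BMP as bytes."""
    row = bytes([0xFF, 0xFF, 0xFF] * width)      # B, G, R per pixel
    pad = (4 - len(row) % 4) % 4                 # each row is padded to a 4-byte boundary
    pixel_data = (row + b"\x00" * pad) * height
    # BITMAPFILEHEADER (14 bytes): type, file size, two reserved words, pixel offset (54)
    file_header = struct.pack("<2sIHHI", b"BM", 54 + len(pixel_data), 0, 0, 54)
    # BITMAPINFOHEADER (40 bytes): size, width, height, planes, bpp, compression,
    # image size, x/y pixels-per-meter, colors used, important colors
    info_header = struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,
                              0, len(pixel_data), 0, 0, 0, 0)
    return file_header + info_header + pixel_data
```

A 2×2 image therefore occupies the 54 header bytes plus 8 bytes per padded row (6 bytes of pixels, 2 of padding).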
{ "pile_set_name": "StackExchange" }
Q: How to use sIFR or Facelift with ASP.NET?

Has anyone used sIFR or Facelift (FLIR) with ASP.NET? I noticed that the scripts included with FLIR are all PHP pages. I looked around, but it seems there isn't a good image-replacement solution for ASP.NET.

A: sIFR is a client-side technique that leverages JavaScript and Flash, so it is pretty much independent of which server-side language you use. For some examples of how to implement it, see How to use.
Q: Beginner XPath expression error in C#

I am trying to use the following XPath expressions:

```csharp
nodeList = root.SelectNodes("/moviedb/movie[genres="Thriller"]");
nodeList = root.SelectNodes("/moviedb/movie[contains(genres, "+ ElementValue +")]");
```

where ElementValue is a user input value. For the first line I get errors like:

Error 4: ) expected
Error 6: Invalid expression term ')'

while the second expression returns 0 results. Before using these expressions in C#, I tested them online and they worked. My XML looks something like this:

```xml
<moviedb>
  <movie>
    <imdbid>tt2226321</imdbid>
    <genres>Thriller</genres>
    <languages>English</languages>
    <country>USA</country>
    <rating>8</rating>
    <runtime>155</runtime>
    <title>The Dark Knight</title>
    <year>2014</year>
  </movie>
  <movie>
    <imdbid>tt1959490</imdbid>
    <genres>Action,Adventure,Drama</genres>
    <languages>English</languages>
    <country>USA</country>
    <rating>6.5</rating>
    <runtime>138</runtime>
    <title>Noah</title>
    <year>2014</year>
  </movie>
</moviedb>
```

Thanks

A: When dealing with quotes in strings you need to take special steps, since any bare quote makes the compiler think you are ending or starting a string. Either use the \" escape sequence to show the compiler that you mean a literal quote mark, or prefix the string with the @ symbol and use two double quotes (""):

```csharp
nodeList = root.SelectNodes(@"/moviedb/movie[genres=""Thriller""]");
```

or

```csharp
nodeList = root.SelectNodes("/moviedb/movie[genres=\"Thriller\"]");
```

Note that the second expression has a related problem: ElementValue is concatenated into the XPath without quotes around it, so it is parsed as an element name rather than a string literal. Wrap the interpolated value in quotes as well, e.g. `"/moviedb/movie[contains(genres, '" + ElementValue + "')]"`.
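The same quoting concern comes up in any host language. As an illustration, here is the equivalent query using Python's standard library, where single quotes inside a double-quoted string avoid escaping entirely (data abbreviated from the question's XML):

```python
import xml.etree.ElementTree as ET

doc = """
<moviedb>
  <movie><genres>Thriller</genres><title>The Dark Knight</title></movie>
  <movie><genres>Action,Adventure,Drama</genres><title>Noah</title></movie>
</moviedb>
"""

root = ET.fromstring(doc)
# Single quotes inside the double-quoted path string: no escaping needed.
thrillers = root.findall(".//movie[genres='Thriller']")
titles = [m.findtext("title") for m in thrillers]
```

ElementTree only supports a subset of XPath, but exact-match child predicates like the one above are part of it.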
Q: Sitecore Workbox: only display the latest version of an item

I have customized the Workbox by overriding it. By default the Workbox displays all versions of items in a particular workflow state; I need only the last version to appear. I played around with the DisplayStates(IWorkflow workflow, XmlControl placeholder) method, but no luck. How can I do this?

A: You need to override the DisplayStates() method and filter the DataUri[] items array:

```csharp
List<DataUri> filteredUriList = new List<DataUri>();
DataUri[] items = this.GetItems(state, workflow);
// offset and num come from the surrounding method in the original implementation
for (int index = offset; index < num; ++index)
{
    Item obj = Sitecore.Context.ContentDatabase.Items[items[index]];
    if (obj != null && obj.Versions.IsLatestVersion())
        filteredUriList.Add(items[index]);
}
items = filteredUriList.ToArray();
```
Q: QuorumChain consensus vs Raft consensus vs Istanbul consensus in Quorum

I am reading about Quorum, an Ethereum-based distributed ledger protocol with transaction/contract privacy and new consensus mechanisms. I have read about QuorumChain consensus and Raft consensus from this document. According to this answer and this link, the Istanbul consensus mechanism is implemented in Quorum. What are the advantages and disadvantages of each consensus mechanism used in Quorum?

A: I'm not an expert on the Quorum world, but as I've read, the main difference between the three mechanisms you mention is their degree of Byzantine fault tolerance (BFT). BFT is defined as:

Byzantine fault tolerance (BFT) is the dependability of a fault-tolerant computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. In a "Byzantine failure", a component such as a server can inconsistently appear both failed and functioning to failure-detection systems, presenting different symptoms to different observers. It is difficult for the other components to declare it failed and shut it out of the network, because they first need to reach a consensus regarding which component has failed. The term is derived from the Byzantine Generals' Problem, where actors must agree on a concerted strategy to avoid catastrophic system failure, but some of the actors are unreliable. Byzantine fault tolerance has also been referred to as interactive consistency, source congruency, error avalanche, the Byzantine agreement problem, the Byzantine generals problem, and Byzantine failure.

Raft: Raft's initial description (by Diego Ongaro and John Ousterhout) is not Byzantine fault-tolerant. Imagine a node that votes twice in a given term, or votes for another node whose log is not as up-to-date as its own, and that node becomes leader. Such behaviour could cause split-brain (two nodes each believing themselves to be leader) or inconsistencies in the log.

In scenarios like permissioned blockchains, where nodes are held by different companies, it is important to have some BFT properties in order to be sure that everyone is behaving correctly. That is why Istanbul was born.

Istanbul: Istanbul implements protection against some types of Byzantine behaviour. With F being the number of Byzantine nodes on the network, Istanbul is based on a commitment consensus in which each node waits for 2F + 1 commits from different validators with the same result before inserting the block into the blockchain. There is a good slide deck explaining how Istanbul works here: https://es.slideshare.net/YuTeLin1/istanbul-bft

I can't say anything about QuorumChain because I've read almost nothing about it. Hope it helps!
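To make the 2F + 1 commit rule concrete: in a BFT network of N = 3F + 1 validators, F is the largest number of faulty nodes that can be tolerated. A small illustrative helper (not Quorum's actual code):

```python
def istanbul_quorum(n_validators):
    """Commits required before a block is accepted, per the 2F + 1 rule."""
    f = (n_validators - 1) // 3   # max Byzantine nodes tolerated when N = 3F + 1
    return 2 * f + 1
```

So a 4-validator network tolerates one faulty node and needs 3 matching commits, while 7 validators tolerate two faulty nodes and need 5.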
Q: Get URL variables from a frame and pass them to code-behind

First off, yes, I know I shouldn't be using frames, but I don't have a choice. It's an old system that's caused me nothing but headaches, but the network engineers love it and demand that this is where their information and pages have to go. I'm currently using the .NET 4.0 framework, C#, and, though I doubt it matters for this question, SQL Server 2008 R2.

The problem as it stands right now: I need a way to determine whether the primary or standby hardware is selected, so I can properly set the radio button and initial information on page load based on which page is loaded. The website my page is used on is third party, which I do not have access to modify, so I can't just tack URL variables onto that page or change settings. The URL has variables, but they're generated statically elsewhere on the website and are only visible inside the frame in which my page resides.

I've never actually used frames, so I'm at a bit of a loss. Worse, because of the way this is set up and tested, I'm not actually sure how to set breakpoints in the code to see where it's failing. I couldn't think of another way around this, but I would be more than happy with a solution that doesn't involve this frame-y nonsense. So far I've been looking at these for guidance, but haven't had much success: "sharing variables between urls and frames", MSDN's .NET 4.0 page on frames, a post on how to get URL variables out of frames, and "loading pages in IFrame dynamically from the codebehind".

For the time being, I've been asked to make sure the page as it stands does not break, which is why this is being checked instead of just done. It's currently in two places on that site: one without frames and URL variables (which the admins want to delete), and the new home with URL variables and frames. For now the first one can't break, which is why you'll see a bit of strange checking and the ?? operator.

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        System.Web.UI.HtmlControls.HtmlGenericControl orionIFrame =
            (System.Web.UI.HtmlControls.HtmlGenericControl)this.FindControl("pcmaframe");
        if (orionIFrame != null)
        {
            string frameURL = orionIFrame.Attributes["src"].ToString() ?? "";
            Uri frameURI = new Uri(frameURL);
            NameValueCollection queryVars = HttpUtility.ParseQueryString(frameURI.Query);
            // If this is in Orion, we want to change the canceller to standby if it's 97, not 96
            if (queryVars["NetObject"] == "N:97" || queryVars["NetObject"] == "N%3a97")
            {
                SelectCanceller.SelectedValue = "Standby";
                primaryStandby = false;
            }
        }
        // Do some other stuff to generate page data
    }
}
```

Right now, the code that generates the frame looks like this (where [url] replaces the actual URL and [mypage] replaces the actual file name I've used):

```
NodeID - ${NodeID}<br>
Node Name - ${NodeName}
<iframe id="pcmaframe" src="[url]/[mypage].aspx?NetObject=N:" + ${NodeID} width = 1000 height = 1500>
</iframe>
```

At the moment there is no bad behavior; it simply fails to switch. Both pages display the primary, regardless of the URL variables (the primary being N:96 and the standby N:97). The reason I check is that I'd like it to display something in the event that it fails, so it defaults to the primary hardware.

So, wonderful Stack Overflow people, can you answer any of my three questions?

1. How can I troubleshoot a frame on a separate website, without adding output to the page, when I have no way to insert breakpoints?
2. What can I do instead of using the URL variables and messing with these frames?
3. What logic am I missing or screwing up in my code that's causing the frame to not recognize the URL variable?

UPDATE: Well, so far I've determined that the frame is null. Not sure if this is because this.FindControl is not being properly cast, or it's due to the way the website uses frames, or any number of other things...

A: After being allowed to add some debugging output to the page, I was able to find a workaround. What I believe is happening, based on some testing and these articles:

FindControl() return null
Better way to find control in ASP.NET
http://msdn.microsoft.com/en-us/library/txxbk90b%28v=vs.90%29.aspx
http://forums.asp.net/t/1097333.aspx
http://msdn.microsoft.com/en-us/library/system.web.ui.page.previouspage.aspx

is that the website where my page is used has the frame at a higher level than my ASP code has access to without a lot of technical voodoo. Since the frame wasn't being returned, I started testing and found that the calling frame was actually reported as [URL]/[MyPage].aspx?NetObject=N:97 — the previous, or calling, page. This was true under a variety of circumstances, which meant it was semi-safe to use Request.UrlReferrer (note that UrlReferrer itself must be null-checked before calling ToString(), since ?? applied after ToString() would still throw):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        string frameURL = (Request.UrlReferrer != null) ? Request.UrlReferrer.ToString() : "NO DATA";
        if (frameURL != "NO DATA")
        {
            Uri frameURI = new Uri(frameURL);
            NameValueCollection queryVars = HttpUtility.ParseQueryString(frameURI.Query);
            // If this is in Orion, we want to change the canceller to standby if it's 97, not 96
            if (queryVars["NetObject"] == "N:97" || queryVars["NetObject"] == "N%3a97")
            {
                SelectCanceller.SelectedValue = "Standby";
                primaryStandby = false;
            }
        }
    }
}
```
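The referrer-parsing logic itself is simple enough to sketch outside of ASP.NET. Here is an illustrative version using Python's standard library (the URL shape and the N:96/N:97 convention are taken from the question; the function name is my own):

```python
from urllib.parse import urlparse, parse_qs

def is_standby(referrer_url):
    """True when the NetObject query variable names the standby node (N:97)."""
    params = parse_qs(urlparse(referrer_url).query)
    # parse_qs percent-decodes values, so N%3a97 also comes back as "N:97"
    return params.get("NetObject", [""])[0] == "N:97"
```

Because the query parser percent-decodes for you, there is no need to compare against both the raw and the encoded spelling, as the C# code above does.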
Q: Spring: send email from Gmail

I am following this link for sending email via the Gmail SMTP server. My problem is: why should I hardcode the sender and receiver in the bean?

```xml
<bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <property name="host" value="smtp.gmail.com" />
    <property name="port" value="587" />
    <property name="username" value="username" />
    <property name="password" value="password" />
    <property name="javaMailProperties">
        <props>
            <prop key="mail.smtp.auth">true</prop>
            <prop key="mail.smtp.starttls.enable">true</prop>
        </props>
    </property>
</bean>

<bean id="mailMail" class="com.mkyong.common.MailMail">
    <property name="mailSender" ref="mailSender" />
    <property name="simpleMailMessage" ref="customeMailMessage" />
</bean>

<bean id="customeMailMessage" class="org.springframework.mail.SimpleMailMessage">
    <property name="from" value="[email protected]" />
    <property name="to" value="[email protected]" />
    <property name="subject" value="Testing Subject" />
    <property name="text">
        <value>
            <![CDATA[
                Dear %s,
                Mail Content : %s
            ]]>
        </value>
    </property>
</bean>
```

A: If you test with a Gmail account, you need to enable the "Access for less secure apps" option here: https://www.google.com/settings/security/lesssecureapps — otherwise you may get an authentication error.

A: You can avoid hardcoding the email properties by placing them in an external properties file, say email.properties. If you enable the context namespace within your configuration file, Spring will load the properties file and allow properties within it to be used via the expression language.

email.properties:

```properties
email.host=smtp.gmail.com
email.port=587
email.username=username
email.password=password
```

Configuration file:

```xml
<!-- Spring loads the properties file, which can then be used to resolve EL expressions -->
<context:property-placeholder location="classpath:META-INF/email.properties"/>

<bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <property name="host" value="${email.host}" />
    <property name="port" value="${email.port}" />
    <property name="username" value="${email.username}" />
    <property name="password" value="${email.password}" />
    <property name="javaMailProperties">
        <props>
            <prop key="mail.smtp.auth">true</prop>
            <prop key="mail.smtp.starttls.enable">true</prop>
        </props>
    </property>
</bean>
```
Q: Finding the number of years in a compound interest formula

My question is: suppose that I have $\$2,500$ in an investment account and I want this to grow to $\$5,000$. Approximately how long would it take if my account earns $3.5\%$ compounded annually?

$$ FV=PV(1+i)^n \\ 5000=2500(1+0.035)^n \\ 5000=2500(1.035)^n $$

I need help with this question; it's one of my business-mathematics exercises. I'm confused about how to isolate $n$.

A: The general technique when $n$ is in the exponent is to take logarithms and use the rule $\log(x^n)=n \log(x)$:

\begin{align*} 5000 &=2500(1.035)^n \\ 5000/2500 &=(1.035)^n \\ \log(5000/2500) &= \log((1.035)^n) \\ \log(5000/2500) &=n \log(1.035) \\ n &=\frac{\log(5000/2500)}{\log(1.035)} \approx 20.15 \end{align*}

A quick-and-dirty way to estimate the number of years to double your money is the rule of 72 (http://en.wikipedia.org/wiki/Rule_of_72), which says it will take about $72/3.5 \approx 20.57$ years.

So, in general, if you need to calculate the number of years, $y$, it takes for an initial value, $P$, to accumulate its interest to $F$, where the interest rate is $i$ in % per annum, your formula is:

$$y = \frac{\log(F/P)}{\log(1 + (i / 100))}$$
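The closed-form answer is easy to sanity-check numerically; a short Python sketch of the general formula and the rule-of-72 estimate:

```python
import math

def years_to_grow(pv, fv, rate_percent):
    """Solve fv = pv * (1 + r)^n for n, with the rate given in percent per annum."""
    return math.log(fv / pv) / math.log(1 + rate_percent / 100)

exact = years_to_grow(2500, 5000, 3.5)   # ~20.15 years
estimate = 72 / 3.5                      # rule of 72: ~20.57 years
```

The two agree to within about half a year, which is typical of the rule-of-72 approximation at low rates.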
Q: Starting an Intent correctly: difference between this vs MainActivity.this

What is the difference between starting a new Intent from some MainActivity (for example) using

```java
Intent intent = new Intent(this, SecondActivity.class);
```

vs

```java
Intent intent = new Intent(MainActivity.this, SecondActivity.class);
```

A: There is no difference in how the Intent works, but we use the two statements in different situations. To start a new activity we use an Intent:

```java
Intent intent = new Intent(Context packageContext, Class<?> cls);
```

For packageContext we have to pass a Context, which is why we pass this as the context of the current activity. But if we do the same from an anonymous class, such as an anonymous OnClickListener, this refers to the instance of that anonymous class. In that case we use MainActivity.this, which is the context of the MainActivity class.
Q: event.screenX and external JavaScript

I'm currently trying to retrieve the coordinates of the cursor using:

```js
function getCursor(event) {
    var x = event.screenX;
    var y = event.screenY;
}
```

I know you can pass this event by using something similar to

```html
<div onmousemove="getCursor(event)"></div>
```

However, I am currently trying to use a purely external JavaScript file and am attaching events like so:

```js
element.onmousemove = getCursor;
// or
element.onmousemove = function() { getCursor(parameters); };
```

How would I be able to reference the event that these functions are attached to? Thank you in advance for any help! (Note: I do not use jQuery or any other JavaScript library.)

Update: never mind, I figured it out. You actually don't have to pass the parameter at all: element.onmousemove = getCursor works great, as long as you have function getCursor(event) {}. Sorry about that.

Double update: read Sime's comment/answer.

A: Here you go:

```js
element.onmousemove = function(e) {
    e = e || window.event;
    getCursor(e);
};
```

The event object is passed as the first argument into the event handler function automatically. The exception is Internet Explorer 8 and below; e = e || window.event; makes it work in those versions of IE too.

Live demo: http://jsfiddle.net/simevidas/DteK8/
Q: WizardHandler.wizard().goTo

I'm using https://github.com/mgonto/angular-wizard to create an Angular wizard whose steps can be selected from route params: .../step/1, .../step/2, etc. So I've created this controller:

```js
.controller('createOrganizer', ['$scope', 'WizardHandler', '$routeParams',
    function($scope, WizardHandler, $routeParams) {
        //$scope.data = {};
        $step = $routeParams.step;
        WizardHandler.wizard().goTo($step);
}])
```

The proper linking and routing were created correctly in app.js and index.html, but when I open the URLs I get this:

TypeError: Cannot call method 'goTo' of undefined

Is this the way to pre-select an angular-wizard step using URL parameters?

Update: I tried something like this:

```js
.controller('createOrganizer', ['$scope', 'WizardHandler', '$routeParams',
    function($scope, WizardHandler, $routeParams) {
        $step = $routeParams.step;
        $scope.$watch(WizardHandler.wizard(), function(step) {
            WizardHandler.wizard().goTo($step);
        });
}])
```

The idea is to use $watch to tell me when WizardHandler.wizard() is instantiated, so I can then call the goTo method. With this controller I'm getting this error:

TypeError: Cannot set property 'selected' of undefined

I'm not sure I am using $watch correctly. I even tested the step variable and it is fine, showing the same value as the URL.

Solved:

```js
// Important: $routeParams returns a string, and the wizardHandler takes either an
// integer step number or a string step name, so parse the number to prevent confusion.
var step = parseInt($routeParams.step);

$scope.$watch(
    function() { return WizardHandler.wizard(); },
    function(wizard) {
        if (wizard) wizard.goTo(step);
    });
```

I added an init(step) function that handles the initial values I need and also prevents errors caused by URLs like .../step/SOMERANDOMSTRING. Thanks to GregL for your help!
A: From reading through the source code quickly, my first guess is that you have used a <wizard> element with a name attribute specified, but have not passed that same name to WizardHandler.wizard(). In the code, if you don't specify a name argument to WizardHandler.wizard(), it uses the default name, which is the name used by a <wizard> with no name attribute. Since you are not getting back the wizard you intend when you call WizardHandler.wizard(), it resolves to undefined and calling goTo() fails with the error you got.

At the very least, separate getting the wizard from the .goTo() call, and add a check to make sure you got a valid wizard:

```js
.controller('createOrganizer', ['$scope', 'WizardHandler', '$routeParams',
    function($scope, WizardHandler, $routeParams) {
        //$scope.data = {};
        $step = $routeParams.step;
        var wizard = WizardHandler.wizard();
        if (wizard) wizard.goTo($step);
}]);
```

There should probably be a var keyword before that $step assignment, too; by convention, only Angular core things (or jQuery selection variables) should start with a $.

EDIT: You could also use a $watch() to get notified when the wizard is loaded. However, there is no guarantee that a $digest() cycle runs immediately after the wizard is loaded, so you would be relying on the assumption that an $apply/$digest cycle runs sometime after the wizard is correctly added to the WizardHandler's cache. You would do it like so:

```js
.controller('createOrganizer', ['$scope', 'WizardHandler', '$routeParams',
    function($scope, WizardHandler, $routeParams) {
        //$scope.data = {};
        $step = $routeParams.step;
        $scope.$watch(function() {
            return WizardHandler.wizard();
        }, function(wizard) {
            if (wizard) wizard.goTo($step);
        });
}]);
```
Q: Spawn different types of objects

I'm trying to create a script where I can spawn different types of objects.

A: From your description and comments, it sounds like you want to guarantee that between speiclaCratePercentageMin and speiclaCratePercentageMax percent of the crates are special crates, while the rest are normal crates. If so, all you need to do is figure out how many crates that percentage is of the total, spawn that many first, then fill the rest in with normal crates.

```csharp
using UnityEngine;
using System.Collections.Generic;
using UnityEngine.UI;

public class spawnmanager : MonoBehaviour
{
    public int noOfobjects = 6;
    public Transform[] spawnPoints;
    public GameObject normalCrate;
    public GameObject specialCrate;
    public float speiclaCratePercentageMin;
    public float speiclaCratePercentageMax;

    void Awake()
    {
    }

    // Use this for initialization
    void Start()
    {
        spawner();
    }

    void spawner()
    {
        List<Transform> availablePoints = new List<Transform>(spawnPoints);

        // Figure out how many special crates we need (cast back to int, since
        // Random.Range with float bounds returns a float).
        int numberOfSpecialCrates = (int)(noOfobjects * Random.Range(this.speiclaCratePercentageMin, this.speiclaCratePercentageMax));

        // The i < spawnPoints.Length check prevents errors when noOfobjects is
        // bigger than the number of available spawn points.
        for (int i = 0; i < noOfobjects && i < spawnPoints.Length; i++)
        {
            int spawnPointIndex = Random.Range(0, availablePoints.Count);

            // As long as i is lower than numberOfSpecialCrates we spawn a special crate.
            if (i < numberOfSpecialCrates)
            {
                Instantiate(specialCrate, availablePoints[spawnPointIndex].position, Quaternion.identity);
            }
            else
            {
                Instantiate(normalCrate, availablePoints[spawnPointIndex].position, Quaternion.identity);
            }
            availablePoints.RemoveAt(spawnPointIndex);
        }
    }
}
```
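The selection logic is independent of Unity and can be sketched on its own. Here is an illustrative Python version (function and parameter names are my own) that picks how many specials to spawn and then shuffles the kinds, so the specials are not always the first few spawns:

```python
import random

def plan_crates(n_points, pct_min, pct_max, rng=random):
    """Return a shuffled list of crate kinds with a random share of specials."""
    n_special = round(n_points * rng.uniform(pct_min, pct_max))
    kinds = ["special"] * n_special + ["normal"] * (n_points - n_special)
    rng.shuffle(kinds)   # spread the specials across the spawn order
    return kinds
```

The C# above achieves the same spread differently: it keeps the specials first in the loop but draws a random spawn point for each crate.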
Q: Is this でも or で+も?

This is similar to a previous question of mine. Also from Kanji in Context, we have the example sentence (with no other context):

東京{とうきょう}でも雪{ゆき}は降{ふ}りますが、たいていは大{たい}して積{つ}もりません。

Is it possible to determine whether the でも here is でも ("even in Tokyo, it snows") or で (location of action) + も ("it also snows in Tokyo")?

A: Formal vs. informal. I am going to say that:

1) In informal speech, both interpretations are almost equally natural.

2) In formal speech, however, it would be considerably more appropriate to interpret the 「でも」 as being 「で + も」 (location + "also").

Here is my reasoning. If one said 「東京でも雪は降る」 in formal speech to mean "It snows even in Tokyo.", then one would have to wonder where the location marker is, because 「でも」 is all taken to express "even". In formal speech, one would instead need to use 「東京でさえも」 or 「東京ででも」 to express both "even" and "in". Admittedly, though, the latter would rarely be heard in real life, as it is a mouthful. 「Place Name + で + でも」 is very often contracted to 「Place Name + でも」 in informal speech, which is the main reason that, in informal speech, 「東京でも雪は降る」 can naturally be interpreted to mean both "It snows in Tokyo, too." and "It snows even in Tokyo."
Q: Grails: log stacktrace to stdout

When I launch my Grails application, I get the following error:

java.io.FileNotFoundException: stacktrace.log (Permission denied)

I know this can be solved by chowning some files/directories or by changing the file the logs go to, but I don't want that: I just want stack traces to be logged to stdout. The documentation states:

"For example, if you prefer full stack traces to go to the console, add this entry: error stdout: "StackTrace""

However, it also states:

"This won't stop Grails from attempting to create the stacktrace.log file — it just redirects where stack traces are written."

And later:

"Or, if you don't want the 'stacktrace' appender at all, configure it as a 'null' appender:"

```groovy
log4j = {
    appenders {
        'null' name: "stacktrace"
    }
}
```

I combined the two and got the following configuration:

```groovy
// log4j configuration
environments {
    production {
        log4j = {
            appenders {
                console name:'stdout', layout:pattern(conversionPattern: '%c{2} %m%n')
                // Don't use stacktrace.log
                'null' name: "stacktrace"
            }
        }
    }
}

log4j = {
    // print the stacktrace to stdout
    error stdout:"StackTrace"
}
```

Unfortunately, this doesn't work:

INFO: Deploying web application archive MyBackend.war
Sep 12, 2012 4:46:11 PM org.apache.catalina.core.StandardContext start
SEVERE: Error listenerStart
Sep 12, 2012 4:46:11 PM org.apache.catalina.core.StandardContext start
SEVERE: Context [/MyBackend2] startup failed due to previous errors

Admittedly, it no longer attempts to write stacktrace.log, so the Permission denied error isn't thrown anymore, but I have no clue why the app won't start, because the only thing it logs is "Error listenerStart". Can anyone please help me configure my app to log stack traces to stdout?

A: Grails bug report: http://jira.grails.org/browse/GRAILS-2730 (contains some workarounds).

If you want stack traces on stdout:

```groovy
log4j = {
    appenders {
        console name:'stacktrace'
        ...
    }
    ...
}
```

To disable stacktrace.log:

```groovy
log4j = {
    appenders {
        'null' name:'stacktrace'
        ...
    }
    ...
}
```

Stack traces to an application-specific log file in the Tomcat logs directory:

```groovy
log4j = {
    appenders {
        rollingFile name:'stacktrace',
            maxFileSize:"5MB",
            maxBackupIndex: 10,
            file:"${System.getProperty('catalina.home')}/logs/${appName}_stacktrace.log",
            'append':true,
            threshold:org.apache.log4j.Level.ALL
        ...
    }
    ...
}
```

Kudos to this blog post: http://haxx.sinequanon.net/2008/09/grails-stacktracelog/
Q: Getting specified node values from an XML document

I have a problem going through an XML document (with C#) and getting all the necessary values. I successfully go through all specified XmlNodeLists in the document and successfully get all XmlNode values inside, but I also have to get some values outside of this XmlNodeList. For example:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<Element xsi:schemaLocation="http://localhost/AML/CaseInvestigationMangement/Moduli/XmlImportControls/xsdBorrow.xsd xsd2009027_kor21.xsd"
         Kod="370"
         xmlns="http://localhost/AML/CaseInvestigationMangement/Moduli/XmlImportControls/xsdBorrow.xsd"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <ANode>
    <BNode>
      <CNode>
        <Example>
          <Name>John</Name>
          <NO>001</NO>
        </Example>
      </CNode>
    </BNode>
    <ID>1234</ID>
    <Date>2011-10-01</Date>
  </ANode>
  <ANode>
    <BNode>
      <CNode>
        <Example>
          <Name>Mike</Name>
          <NO>002</NO>
        </Example>
      </CNode>
    </BNode>
    <ID>5678</ID>
    <Date>2011-03-31</Date>
  </ANode>
</Element>
```

This is the code that gets the values of Name and NO in every ANode found in the XML document:

```csharp
XmlDocument xml = new XmlDocument();
xml.LoadXml(myXmlString); // myXmlString holds the XML file as a string
// copying the XML to a string:
string myXmlString = xmldoc.OuterXml.ToString();

XmlNodeList xnList = xml.SelectNodes("/Element[@*]/ANode/BNode/CNode");
foreach (XmlNode xn in xnList)
{
    XmlNode example = xn.SelectSingleNode("Example");
    if (example != null)
    {
        string na = example["Name"].InnerText;
        string no = example["NO"].InnerText;
    }
}
```

Now I have a problem getting the values of ID and Date.
A: Just like you select the CNodes, you can select the ANodes and read their direct children. Note that SelectSingleNode("ANode") would only ever return the first ANode, so select all of them and work per node:

```csharp
XmlNodeList aNodes = xml.SelectNodes("/Element[@*]/ANode");
foreach (XmlNode anode in aNodes)
{
    string id = anode["ID"].InnerText;
    string date = anode["Date"].InnerText;

    XmlNodeList cNodes = anode.SelectNodes("BNode/CNode");
    foreach (XmlNode node in cNodes)
    {
        XmlNode example = node.SelectSingleNode("Example");
        if (example != null)
        {
            string na = example["Name"].InnerText;
            string no = example["NO"].InnerText;
        }
    }
}
```
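For comparison, the same per-ANode traversal is compact in Python's standard library. This is an illustrative sketch only: the namespace declarations are omitted for brevity, since the question's default namespace would otherwise have to be mapped into every path.

```python
import xml.etree.ElementTree as ET

doc = """
<Element>
  <ANode>
    <BNode><CNode><Example><Name>John</Name><NO>001</NO></Example></CNode></BNode>
    <ID>1234</ID><Date>2011-10-01</Date>
  </ANode>
  <ANode>
    <BNode><CNode><Example><Name>Mike</Name><NO>002</NO></Example></CNode></BNode>
    <ID>5678</ID><Date>2011-03-31</Date>
  </ANode>
</Element>
"""

root = ET.fromstring(doc)
records = []
for anode in root.findall("ANode"):
    # Direct children (ID, Date) and nested values are read relative to each ANode.
    records.append({
        "id": anode.findtext("ID"),
        "date": anode.findtext("Date"),
        "name": anode.findtext("BNode/CNode/Example/Name"),
        "no": anode.findtext("BNode/CNode/Example/NO"),
    })
```

The key idea is the same as in the C# answer: iterate the repeating parent element and resolve every path relative to it.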
Q: C++ string.compare doesn't accept space characters

First, I want to apologize for my bad title. Now the problem: I'm trying to compare two strings in C++. I have tried string.compare and ==, and neither worked. Here is the code:

```cpp
if(game_type == "AI vs AI"){
    std::cout << "You choosed AI vs AI\n";
    aiVsAI(range);
}
else{
    std::cerr << "Error";
}
```

and with string.compare:

```cpp
if(game_type.compare("AI vs AI") == 0){
    std::cout << "You choosed AI vs AI\n";
    aiVsAI(range);
}
else{
    std::cerr << "Error";
}
```

If I enter AIvsAI as input, the program works correctly, but if I enter AI vs AI (with spaces), the program prints "Error". I tried using \x20 instead of a space, but that didn't work either. Any ideas why this is happening?

A: It appears that you are using a statement similar to

```cpp
std::cin >> game_type;
```

to obtain the user input. The problem is that the >> operator only extracts the first word from the line the user types, which makes game_type contain only "AI" when you type "AI vs AI". (As a side note, if you were to use std::cin >> blah on the next line, then blah would contain "vs", because that typed input had not been consumed yet.)

To fix this, you can use std::getline:

```cpp
std::getline(std::cin, game_type);
```

This gets everything the user types on the line (up to but not including the Enter keypress) and puts it in game_type. This is almost always the right way to get user input for an interactive program.
Q: What is a "De-clicked" aperture control ring? The Samyang 85mm T1.5 Cine Lens features a "De-clicked" aperture control ring. What exactly is it? A: It means the aperture ring does not have detents or, if it does, there are no audible clicks. This is intended for video shooting, as it does not make any noise when turning the aperture ring. Depending on how the lens is designed, the aperture may be stepped or not. If it is not stepped, it usually says "continuous aperture" or something similar.
Q: Having trouble with aligning an iframe in HTML5

I have made a website and integrated Google Maps into it. Now I am having trouble centering Google Maps on the web page. Here is my code:

    <iframe width="640" height="480" align="middle" frameborder="0" scrolling="no" marginheight="0" marginwidth="100" margin-top:100px; margin-left:140px; src="https://maps.google.ca/?ie=UTF8&amp;t=h&amp;ll=49.894634,-97.119141&amp;spn=27.26845,56.25&amp;z=4&amp;output=embed"></iframe>

(Note that margin-top:100px; margin-left:140px; are CSS declarations sitting directly in the tag; they are not valid HTML attributes and belong in a style attribute or a stylesheet.)

A: Why not use margin:auto for the left and right of the iframe, or of the div it's in?
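The answer's suggestion, sketched concretely (the src is the asker's URL; the 100px top margin mirrors the intent of the stray declarations): an iframe is an inline element, so auto side margins only center it once it is a block element.

```html
<!-- Sketch of one common fix: make the iframe a block element and center it
     with automatic horizontal margins instead of the invalid attributes. -->
<iframe width="640" height="480" frameborder="0" scrolling="no"
        style="display:block; margin:100px auto 0;"
        src="https://maps.google.ca/?ie=UTF8&amp;t=h&amp;ll=49.894634,-97.119141&amp;spn=27.26845,56.25&amp;z=4&amp;output=embed"></iframe>
```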
Q: Which preposition should be used for websites?

Which preposition should we use when we're talking about websites? For example:

- Register ON StackExchange / Register AT StackExchange
- Trade ON Forex / Trade AT Forex
- Answer questions ON Quora / Answer questions AT Quora

Which one is correct? Thanks

A: Files are said to reside on a hard drive or on a certain machine. Since I know that a website is made of computer files, I think of a website as being on a machine, and so any part of the website is also on that machine. A website's "address" is a "Uniform Resource Locator" or URL. These terms connote the idea of a website as a "place". In this case the file would be at the website location.

So, if you think of a website as it is actually constructed (files stored on hard drives), you will probably say on. If you think in the location metaphor, you will probably say at.

Link to an answer for a related ELU question
Q: Will pen drive information be the same on both Windows and Mac? Greetings to all. I have a pen drive made by Transcend. Will the product and vendor ID of this pen drive be the same on both Windows and Mac operating systems, or will they be different? Thank you. A: That kind of information is stored in the drive's firmware. Different OSes might respond differently to that information, but the information itself is constant. The information doesn't care which kind of computer or OS it's being transmitted to.
Q: In Season 8, why are demons immune to holy water? In Season 8, Episode 21, Kevin is trapped in Crowley's alternate-dimension version of Garth's houseboat. Kevin hears a knock, goes to open the door of the houseboat, and sprays "Dean" (who is a demon in disguise) in the face with a water gun for not using the "secret knock." Then "Sam" jumps out and Kevin sprays him with the water gun as well. One can only assume Kevin has holy water inside the water gun; otherwise spraying them with plain water would make no sense. So why are the "Sam" and "Dean" demonic impostors immune to holy water? I've considered that Crowley is running the alternate dimension, but if Kevin has a rosary, water, and a prayer... voilà: holy water. A: Kevin thinks he has holy water but he really doesn't. Everything in the pocket dimension was put there by Crowley, and he's too smart to put real holy water in there and then send in his demons.
Q: Finding Duplicate Array Elements

I've been struggling to create a function that finds all the indices of duplicate elements in an unsorted multi-dimensional array (in this case a 5x5 array) and then, using the indices found, changes the parallel elements in a score array. Duplicates should only be found within columns, not across the other columns of the array.

Here is what I've done so far, with research online. The main problem with this code is that it finds all the duplicate elements but not the originals. For example, if the array holds the elements {{"a","a","a"},{"b","b","b"},{"a","c","a"}}, then it should change the parallel score array to {{0,1,0},{1,1,1},{0,1,0}}. But instead it only recognizes the top and bottom rows' duplicates.

Code:

    public static void findDuplicates(String a[][])
    {
        System.out.println("*Duplicates*");
        Set set = new HashSet();
        for(int j = 0; j<a.length; j++)
        {
            for(int i=0; i < a[0].length; i++)
            {
                if(!set.contains(a[i][j]))
                {
                    set.add(a[i][j]);
                }
                else
                {
                    System.out.println("Duplicate string found at index " + i + "," + j);
                    scores[i][j] -= scores[i][j];
                }
            }
            set = new HashSet();
        }
    }

I know my explanation is a bit complicated, but hopefully it is understandable enough. Thanks, Jake.

A: Your logic is incorrect. Your outer loop is j and your inner loop is i, but you're doing:

    set.add(a[i][j]);

It should be the other way around:

    set.add(a[j][i]);

Technically you could get an out-of-bounds exception if the array isn't NxN. But you can state that as a precondition.
For some reason you're also setting to 0 with:

    scores[i][j] -= scores[i][j];

Why not just:

    scores[i][j] = 0;

But to find duplicates within columns:

    public static void findDuplicates(String a[][]) {
        for (int col=0; col<a[0].length; col++) {
            Map<String, Integer> values = new HashMap<String, Integer>();
            for (int row=0; row<a.length; row++) {
                Integer current = values.put(a[row][col], row);
                if (current != null) {
                    scores[row][col] = 0;
                    scores[current][col] = 0;
                }
            }
        }
    }

How does this work?

- I've renamed the loop variables to row and col. There's no reason to use i and j when row and col are far more descriptive.
- Like you, I assume the input array is correct as a precondition. It can be NxM (rather than just NxN), however.
- I use a Map to store the index of each value. Map.put() returns the old value if the key is already in the Map. If that's the case, you've found a duplicate.
- The current (row,col) and (current,col) are set to 0. (Why subtract the score from itself rather than simply setting it to 0?)
- If the value "a" is found 3+ times in a column then scores[current][col] will be set to 0 more than once, which is unnecessary but not harmful and makes for simpler code.
- I've declared the Map using generics. This is useful and advisable. It says the Map has String keys and Integer values, which saves some casting. It also uses auto-boxing and auto-unboxing to convert an int (the loop variable) to and from the wrapper class Integer.
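The Map.put() behaviour this answer relies on can be seen in isolation. A minimal demo (the class name is made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// put() returns the previous value mapped to the key, or null if the key
// was absent -- which is exactly how the answer detects duplicates.
public class PutReturnDemo {

    static Integer[] probe() {
        Map<String, Integer> values = new HashMap<>();
        Integer first = values.put("a", 0);   // "a" not seen before -> null
        Integer second = values.put("a", 2);  // "a" was stored at row 0 -> 0
        return new Integer[] { first, second };
    }

    public static void main(String[] args) {
        Integer[] r = probe();
        System.out.println(r[0]);  // null
        System.out.println(r[1]);  // 0
    }
}
```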
Q: Understanding the definition of a vector space

Here a vector space is defined as: if $S$ is any non-empty set, then $V = \{f : S \to F\}$ (where $F$ denotes a field) is a vector space over the field $F$ with the usual operations of "addition" and "multiplication by a scalar" of functions.

What I don't get: is the vector space the set of functions which assign values of $S$ to a field? And "field" is a generic description here, which I also didn't get. We also say that the complex numbers are a vector space over $\mathbb{R}$. How come? Complex vectors don't exist in $\mathbb{R}$. We also say that the complex numbers are a vector space over $\mathbb{C}$, the complex numbers themselves. I get that, but how come it is also a vector space over $\mathbb{R}$?

A: A vector space is defined as a quadruple $(\mathbf{V},\mathbb{K},\oplus,\odot)$ where $\mathbf{V}$ is a set of elements called vectors, $\mathbb{K}$ is a field $(\mathbb{K},+,\cdot)$ (and we say that the vector space is a space over $\mathbb{K}$), $\oplus$ is a binary operation (called sum) on $\mathbf{V}$ such that $(\mathbf{V},\oplus)$ is a commutative group, and $\odot:\mathbb{K}\times\mathbf{V} \rightarrow \mathbf{V}$ is a scalar multiplication such that, $\forall a,b \in \mathbb{K}$ and $\forall \mathbf{u,v} \in \mathbf{V}$, we have:

$$ a\odot(b\odot\mathbf{v})=(a\cdot b)\odot\mathbf{v} $$
$$ 1\odot\mathbf{v}=\mathbf{v} $$
$$ a \odot (\mathbf{u}\oplus\mathbf{v})=a \odot\mathbf{u}\oplus a\odot \mathbf{v} $$
$$ (a+b)\odot \mathbf{v}=a\odot \mathbf{v}\oplus b\odot \mathbf{v} $$

Note that $(\oplus, \odot)$ are, in general, different from the operations $(+,\cdot)$ in $\mathbb{K}$.

Now look at your definition: if $S$ is any non-empty set, then $V = \{f : S\to F\}$ (where $F$ denotes a field) is a vector space over the field $F$ with the usual operations of "addition" and "multiplication by a scalar" of functions. This defines: the set $\mathbf{V}$ as the set of functions from $S$ to a field $F$.
The field $\mathbb{K}$ is the same field $F$, and the two operations are:

$$ \oplus : \mathbf{V}\times \mathbf{V} \to \mathbf{V} \quad (f\oplus g)(x)=f(x)+g(x) \quad \forall x \in S. $$

that is: the sum of two functions is the function whose value is the sum of the values of the two functions.

$$ \odot:\mathbb{K}\times\mathbf{V} \rightarrow \mathbf{V} \qquad (c\odot f)(x)=c\cdot f(x) $$

that is: the product of a function $f$ with a scalar is the function whose values are the values of the function multiplied by that scalar.

It is not difficult to prove that with these definitions all the axioms are satisfied and we have a vector space. The other examples in your question are similar. E.g., it is simple to prove that any field $\mathbb{F}$ is a vector space over itself, simply defining $\mathbf{V}=\mathbb{F}$, $\mathbb{K}=\mathbb{F}$, $\oplus=+$ and $\odot=\cdot$.

Finally, note that $\mathbb{C}$ is a vector space (of dimension 2) over $\mathbb{R}$ because a complex number $x+iy$ can be identified with the pair of real numbers $(x,y) \in \mathbb{R}^2$, and $\mathbb{R}^2$ is a vector space over $\mathbb{R}$ with the usual operations.
Q: adjustsFontSizeToFitWidth doesn't properly work

I'm developing an app for iOS > 5.0 and using Xcode 4.6.2. To explain my problem: I have a UIScrollView which contains only a bunch of UILabels. The text I'm going to display in these UILabels is dynamic, but each UILabel's frame is constant, so if the current text doesn't fit in the frame width, I need to scale the font size down. So I found the adjustsFontSizeToFitWidth property of UILabel. Here is the explanation I took directly from the UILabel class reference:

A Boolean value indicating whether the font size should be reduced in order to fit the title string into the label's bounding rectangle.

Yes, I thought that's exactly what I was looking for, since the text I'm going to display in the UILabel is always one line. I'm aware that this property should be used with the numberOfLines property set to 1. Here is my code to make this happen:

    for (NSString *currentImage in self.imagesNames) {
        UILabel *lbl = [[UILabel alloc]initWithFrame:CGRectMake(imageNumber*resultTypeImageWidth, 10, 75, 35)];
        [lbl setFont:[UIFont fontWithName:@"GothamRounded-Bold" size:25]];
        [lbl setNumberOfLines:1];
        [lbl setText:currentImage];
        [lbl setBackgroundColor:[UIColor clearColor]];
        [lbl setTextColor:[UIColor colorWithHue:0.07 saturation:1 brightness:0.49 alpha:1.0]];
        [lbl adjustsFontSizeToFitWidth];
        [lbl setTextAlignment:NSTextAlignmentCenter];
        imageNumber++;
        [self.resultTypeScrollView addSubview:lbl];
    }

imageNumber is an int I'm using to place these UILabels in the appropriate place in my UIScrollView, which is named resultTypeScrollView. resultTypeImageWidth is a constant I've defined and set to 100 to give some space between the UILabels.

So, my problem is that if the text doesn't fit in the label's frame, it gets truncated. I expected that if the text doesn't fit the frame, the font size would scale down to fit it. At least, that's what I understand from the UILabel class reference. Apparently I'm missing something, but what?
So far: as I'm using a custom font, I suspected the custom font could be the problem, so I changed to one of the system fonts; it had no effect. I tried setting NSLineBreakMode to NSLineBreakByClipping to prevent the text from getting truncated. After that, the text doesn't get truncated, but the characters that got truncated before are simply missing. I've also tried minimumScaleFactor with different values to see if it has any effect, but no effect at all.

A: This should just work without changing many defaults. The posted code doesn't set adjustsFontSizeToFitWidth, it only gets it. Setting it would look like this:

    lbl.adjustsFontSizeToFitWidth = YES;
Q: Arduino serial port gives bad data?

I'm trying to receive data from my Arduino in a web page (Socket.IO). I'll explain the code below.

Arduino:

    int temperatureC = (voltage - 0.5) * 100;
    Serial.print(temperatureC - 2);
    Serial.print(" ");

This converts the voltage to a temperature. When I open the serial monitor I can see the output how I wanted it:

    228 28 28 28 28 29 28

But I created a SerialPort in Node and its output is kind of strange. I receive data this way:

    serialPort.on("open", function () {
      console.log('open');
      io.sockets.on('connection', function (socket) {
        serialPort.on('data', function(data) {
          console.log('data received: ' + data);
          socket.emit('temps', { temp: data });
        });
      });
    });

But the output is:

    data received: 28
    debug - websocket writing 5:::{"name":"temps","args":[{"temp":50}]}
    data received:
    debug - websocket writing 5:::{"name":"temps","args":[{"temp":32}]}
    data received: 2
    debug - websocket writing 5:::{"name":"temps","args":[{"temp":50}]}
    data received: 8
    debug - websocket writing 5:::{"name":"temps","args":[{"temp":56}]}
    data received: 28
    debug - websocket writing 5:::{"name":"temps","args":[{"temp":50}]}
    data received: 28
    debug - websocket writing 5:::{"name":"temps","args":[{"temp":50}]}
    data received:

As you can see, the output is something like: 28 2 8 2 8 28. It looks like it's breaking my ints/strings all the time.

A: Make sure your baud rate is set; 9600 is safest.

    var sp = new SerialPort(comPort, {
      parser: serialport.parsers.readline("\r"),
      baudrate: 9600,
    });

    sp.on('data', function (arduinoData) {
      // data example
      var arduinoString = arduinoData.toString();
    });

I don't use the io.socket routines; you can look at my Git for a working example with Arduino and Node code.
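The underlying cause is worth spelling out: serial 'data' events deliver arbitrary byte chunks, not complete readings, so "28 " can arrive as "2" then "8 ". A small sketch of the buffering that a readline/delimiter parser performs for you (function names here are illustrative):

```javascript
// Accumulate chunks in a buffer and only emit complete, delimiter-terminated
// readings. This is, in spirit, what serialport's readline parser does.
function makeLineAssembler(delimiter, onReading) {
  let buffer = '';
  return function onChunk(chunk) {
    buffer += chunk;
    let idx;
    while ((idx = buffer.indexOf(delimiter)) !== -1) {
      onReading(buffer.slice(0, idx));
      buffer = buffer.slice(idx + delimiter.length);
    }
  };
}

// Simulated chunk boundaries like the ones in the question's log.
const readings = [];
const feed = makeLineAssembler(' ', r => readings.push(r));
['28 ', '', '2', '8 ', '2', '8 '].forEach(feed);
console.log(readings); // [ '28', '28', '28' ]
```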
Q: Why can't it find an object in a class (Python)?

    class TCPHandler(socketserver.BaseRequestHandler):
        def setup(self):
            self.packer = MessagePacker.MessagePacker()
            self.parser = MessageParser.MessageParser()

        def handle(self):
            self.setup()
            while True:
                pass #blabla...

    class FroggerServer(threading.Thread):
        def init(self,ip,nickname):
            self.serverIp = ip
            self.serverPort = 10000
            self.nickname = nickname
            self.tcpHandler = TCPHandler
            tuple = (self.serverIp,self.serverPort)
            self.__serverSocket = socketserver.TCPServer(tuple,self.tcpHandler)

        def run(self):
            self.__serverSocket.serve_forever()

I used a socket and then declared TCPHandler. The problem is that 'packer' and 'parser' can't be found in TCPHandler. For example:

    server = FroggerServer()
    server.init(ip,nickname)
    server.start()
    server.parser.putMessage(43) #oops. object parser does not exist in server object

I thought the problem was in

    self.tcpHandler = TCPHandler

If I don't use this approach, how can I access objects of TCPHandler from outside this class?

A: You can do something like this:

    class TCPHandler(socketserver.BaseRequestHandler):
        def setup(self):
            self.packer = MessagePacker.MessagePacker()
            self.parser = MessageParser.MessageParser()

        def handle(self):
            self.setup()
            # your code

        def getPacker(self):
            return self.packer

        def getParser(self):
            return self.parser

    class FroggerServer(threading.Thread):
        def init(self,ip,nickname):
            # your code

        def run(self):
            self.__serverSocket.serve_forever()

        def getTCPHandler(self):
            return self.tcpHandler

Then you can do:

    server = FroggerServer()
    server.init(ip,nickname)
    server.start()
    server.getTCPHandler().getParser().putMessage(43)

Alternative approach: you can do the following as well.

    class TCPHandler(socketserver.BaseRequestHandler):
        def __init__(self):
            self.packer = MessagePacker.MessagePacker()
            self.parser = MessageParser.MessageParser()

        def handle(self):
            while True:
                pass #blabla...
    class FroggerServer(threading.Thread):
        def __init__(self, ip, nickname):
            self.serverIp = ip
            self.serverPort = 10000
            self.nickname = nickname
            self.tcpHandler = TCPHandler()
            tuple = (self.serverIp, self.serverPort)
            self.__serverSocket = socketserver.TCPServer(tuple, self.tcpHandler)

        def run(self):
            self.__serverSocket.serve_forever()

Now create your server object as follows. You can then access the variable parser of the TCPHandler class:

    server = FroggerServer(ip, nickname)
    server.start()
    server.tcpHandler.parser.putMessage(43)

Why will the above work? Suppose you have the following class:

    class Example(object):
        def doSomething(self):
            self.othervariable = 'instance variable'

    >> foo = Example()

Here we created an instance of Example; however, if we try to access othervariable we will get an error:

    >> foo.othervariable
    AttributeError: 'Example' object has no attribute 'othervariable'

Since othervariable is assigned inside doSomething, and we haven't called it yet, it does not exist:

    >> foo.doSomething()
    >> foo.othervariable
    'instance variable'

Please note, __init__ is a special method that automatically gets invoked whenever class instantiation happens:

    class Example(object):
        def __init__(self):
            self.othervariable = 'instance variable'

    >> foo = Example()
    >> foo.othervariable
    'instance variable'

Reference: https://stackoverflow.com/a/16680307/5352399
Q: How to install MIMEDefang on Debian?

I'm trying to install MIMEDefang on my Debian Stretch but it doesn't work out of the box and I can't find any guides that work. After apt install mimedefang I added the following to /etc/postfix/main.cf:

    smtpd_milters = unix:/var/spool/MIMEDefang/mimedefang.sock
    milter_default_action = accept

I reloaded Postfix, and service mimedefang status says it is active and running. I even tried copying /etc/mimedefang-filter to /etc/mail/mimedefang-filter and made it executable... But still I just get this in /var/log/mail.log:

    postfix/smtpd[29832]: warning: connect to Milter service unix:/var/spool/MIMEDefang/mimedefang.sock: No such file or directory

The file /var/spool/MIMEDefang/mimedefang.sock exists, though. sendmail was already installed from before. How do I install and activate this thing?

A: With the kind help of Benoît Panizzon on the MIMEDefang mailing list I found out that I need to use inet instead of unix as the listening socket, so that it listens on a TCP port on the specified local or remote host.

So the setup procedure for MIMEDefang on Debian/Ubuntu is:

1. Install MIMEDefang: apt install mimedefang
2. In /etc/default/mimedefang set (use another port if you want): SOCKET=inet:33333
3. Add to /etc/postfix/main.cf:

       smtpd_milters = inet:localhost:33333
       milter_default_action = accept

4. Copy /etc/mimedefang-filter to /etc/mail/mimedefang-filter, and modify it according to your needs (no need to make it executable).
5. Reload the Postfix and MIMEDefang services: sudo systemctl reload postfix mimedefang

MIMEDefang must be reloaded every time you change mimedefang-filter.

By the way, this presentation gives a good understanding of MIMEDefang: https://www.mimedefang.org/static/mimedefang-lisa04.pdf

The actual Perl script being run is located at /usr/bin/mimedefang.pl, which then includes mimedefang-filter. I also wanted to be able to write my filtering logic in PHP instead of Perl, so I came up with the following solution.
Add this to the end of e.g. the filter_begin function in mimedefang-filter:

    %passToPhp = ("Sender", $Sender, "Recipients", \@Recipients, "Subject", $Subject,
                  "RelayAddr", $RelayAddr, "RelayHostname", $RelayHostname, "Helo", $Helo,
                  "QueueID", $QueueID, "MessageID", $MessageID);
    my $cmd = "/path/to/your/email-filter.php";
    $cmd .= " " . encode_base64(encode_json(\%passToPhp), '');
    my $phpOutput = `$cmd`;
    md_syslog('info', "PHP filter output: $phpOutput"); #causes entry in /var/log/mail.log
    if ($phpOutput eq "bounce") {
        action_bounce("We dont want this particular message.");
    } elsif ($phpOutput eq "discard") {
        action_discard();
    }

Then use the following code in email-filter.php to get you started:

    #!/usr/bin/php
    <?php
    // runs as user "defang". This file must have execute permissions.

    // Get the variables from mimedefang that we passed along, plus headers,
    // raw message, and extra information from mimedefang
    $arguments = ($argv[1] ? base64_decode($argv[1]) : null);
    if ($arguments) $arguments = json_decode($arguments, true);

    $headers = file_get_contents('HEADERS');
    $raw_message = file_get_contents('INPUTMSG');
    $commands = file_get_contents('COMMANDS');

    // Get all the MIME parts into an array
    $mimeparts = [];
    chdir('./Work');
    foreach (glob('*') as $mimepart_file) {
        if (is_dir($mimepart_file)) continue;
        $mimeparts[$mimepart_file] = file_get_contents($mimepart_file);
    }

    // Do all your logic here...
    if ($someLogic == 'spam') {
        echo 'discard';
        // echo 'bounce'; //use this line if you want a bounce message sent back
        // to the sender (but you probably don't want that for spam)
    }

Monitor /var/log/mail.log to ensure everything works as it should.
Q: Why would a 'public event EventHandler cccc' be null?

Why would a 'public event EventHandler cccc' be null? I have a class:

    public class Builder
    {
        public event EventHandler StartedWorking;

        public Builder()
        {
            // Constructor does some stuff
        }

        public void Start()
        {
            StartedWorking(this, eventargobject); //StartedWorking is null --
        }
    }

This seems straightforward and something I do all the time. Am I missing something obvious, or is there something that could cause this?

EDIT: Does this mean that if I fire an event that is not subscribed to in a client class, I have to check that it is not null?

EDIT-2: I guess I'd never had events that weren't subscribed to, and hence never ran into this. You learn something new every day. Sorry about the seemingly stupid question....

A: The event handler will be null unless somebody has subscribed to the event. As soon as a delegate is subscribed to the event, it will no longer be null. This is why it's always suggested to use the following form for raising events:

    public void Start()
    {
        var handler = this.StartedWorking;
        if (handler != null)
        {
            handler(this, eventArgObject);
        }
    }

This protects you from a null exception if there have been no subscribers.
Q: How to fetch more than 100 records from Azure Cosmos DB using a query

I want to fetch more than 100 records from Azure Cosmos DB using a SELECT query. I am writing a stored procedure and using a SELECT query to fetch the records:

    SELECT * FROM activities a

I am getting only 100 records even though there are more than 500. I am able to get all records using the settings configuration provided by Azure, but I want to perform the same operation using a query or stored procedure. How can I do that? Please suggest the changes that need to be made.

A: The default value of the FeedOptions pageSize property for queryDocuments is 100, which might be the cause of the issue. Please try setting the value to -1. The following stored procedure works fine on my side; please refer to it:

    function getall(){
        var context = getContext();
        var response = context.getResponse();
        var collection = context.getCollection();
        var collectionLink = collection.getSelfLink();

        var filterQuery = 'SELECT * FROM c';

        collection.queryDocuments(collectionLink, filterQuery, {pageSize:-1 },
            function(err, documents) {
                response.setBody(response.getBody() + JSON.stringify(documents));
            }
        );
    }
Q: Do redundant ndb.Model.put_async() calls end up being sent only once to the datastore?

I have an NDB model that exposes a few instance methods to manipulate its state. In some request handlers, I need to call several of these instance methods. In order to prevent calling put() more than once on the same entity, the pattern I've used so far is similar to this:

    class Foo(ndb.Model):
        prop_a = ndb.StringProperty()
        prop_b = ndb.StringProperty()
        prop_c = ndb.StringProperty()

        def some_method_1(self):
            self.prop_a = "The result of some computation"
            return True

        def some_method_2(self):
            if some_condition:
                self.prop_b = "Some new value"
                return True
            return False

        def some_method_3(self):
            if some_condition:
                self.prop_b = "Some new value"
                return True
            if some_other_condition:
                self.prop_b = "Some new value"
                self.prop_c = "Some new value"
                return True
            return False

    def manipulate_foo(f):
        updated = False
        updated = f.some_method_1() or updated
        updated = f.some_method_2() or updated
        updated = f.some_method_3() or updated
        if updated:
            f.put()

Basically, each method that can potentially update the entity returns a bool to indicate whether the entity has been updated and therefore needs to be saved. When calling these methods in sequence, I make sure to call put() if any of them returned True.

However, this pattern can be complex to implement in situations where other subroutines are involved. In that case, I need to make the updated boolean value returned from subroutines bubble up to the top-level methods.

I am now in the process of optimizing a lot of my request handlers, trying to limit as much as possible the waterfalls reported by Appstats, using as many async APIs as I can and converting a lot of methods to tasklets. This effort led me to read the NDB Async documentation, which mentions that NDB implements an autobatcher that combines multiple requests into a single RPC call to the datastore.
I understand that this applies to requests involving different keys, but does it also apply to redundant calls on the same entity? In other words, my question is: could the above code pattern be replaced by this one?

    class FooAsync(ndb.Model):
        prop_a = ndb.StringProperty()
        prop_b = ndb.StringProperty()
        prop_c = ndb.StringProperty()

        @ndb.tasklet
        def some_method_1(self):
            self.prop_a = "The result of some computation"
            yield self.put_async()

        @ndb.tasklet
        def some_method_2(self):
            if some_condition:
                self.prop_b = "Some new value"
                yield self.put_async()

        @ndb.tasklet
        def some_method_3(self):
            if some_condition:
                self.prop_b = "Some new value"
                yield self.put_async()
            elif some_other_condition:
                self.prop_b = "Some new value"
                self.prop_c = "Some new value"
                yield self.put_async()

    @ndb.tasklet
    def manipulate_foo(f):
        yield f.some_method_1()
        yield f.some_method_2()
        yield f.some_method_3()

Would all calls to put_async() be combined into a single put call on the entity? If yes, are there any caveats to this approach vs. sticking with manually checking an updated return value and calling put once at the end of the call sequence?
A: Well, I bit the bullet and tested these 3 scenarios in a test GAE application with Appstats enabled to look at what RPC calls were being made:

    class Foo(ndb.Model):
        prop_a = ndb.DateTimeProperty()
        prop_b = ndb.StringProperty()
        prop_c = ndb.IntegerProperty()

    class ThreePutsHandler(webapp2.RequestHandler):
        def post(self):
            foo = Foo.get_or_insert('singleton')
            foo.prop_a = datetime.utcnow()
            foo.put()
            foo.prop_b = str(foo.prop_a)
            foo.put()
            foo.prop_c = foo.prop_a.microsecond
            foo.put()

    class ThreePutsAsyncHandler(webapp2.RequestHandler):
        @ndb.toplevel
        def post(self):
            foo = Foo.get_or_insert('singleton')
            foo.prop_a = datetime.utcnow()
            foo.put_async()
            foo.prop_b = str(foo.prop_a)
            foo.put_async()
            foo.prop_c = foo.prop_a.microsecond
            foo.put_async()

    class ThreePutsTaskletHandler(webapp2.RequestHandler):
        @ndb.tasklet
        def update_a(self, foo):
            foo.prop_a = datetime.utcnow()
            yield foo.put_async()

        @ndb.tasklet
        def update_b(self, foo):
            foo.prop_b = str(foo.prop_a)
            yield foo.put_async()

        @ndb.tasklet
        def update_c(self, foo):
            foo.prop_c = foo.prop_a.microsecond
            yield foo.put_async()

        @ndb.toplevel
        def post(self):
            foo = Foo.get_or_insert('singleton')
            self.update_a(foo)
            self.update_b(foo)
            self.update_c(foo)

    app = webapp2.WSGIApplication([
        ('/ndb-batching/3-puts', ThreePutsHandler),
        ('/ndb-batching/3-puts-async', ThreePutsAsyncHandler),
        ('/ndb-batching/3-puts-tasklet', ThreePutsTaskletHandler),
    ], debug=True)

The first one, ThreePutsHandler, obviously ends up calling Put 3 times. However, the 2 other tests that call put_async() end up with a single call to Put.

So the answer to my question is: yes, redundant ndb.Model.put_async() calls are batched by NDB's autobatching feature and end up as a single datastore_v3.Put call. And it does not matter whether those put_async() calls are made within a tasklet or not.
A note about the number of datastore write ops observed in the test results: as Shay pointed out in the comments, there are 4 writes per modified indexed property value plus 1 write for the entity. So in the first test (3 sequential puts), we observe (4+1) * 3 = 15 write ops. In the 2 other tests (async), we observe (4*3) + 1 = 13 write ops.

So the bottom line is that having NDB batch multiple put_async calls for the same entity saves us a lot of latency by making a single call to the datastore, and saves us a few write ops by writing the entity only once.
Q: Representation of context-free grammar relations

The production rules of a context-free grammar are formalised as pairs, just a set of relations:

    (α,β) ∈ R

where α is a non-terminal and β is either a terminal or a non-terminal. Thus S → A could be written as (S,A) ∈ R.

But when parsing tagged natural-language trees for probabilistic CFGs, many of the rules are of the form:

    NP → NNP POS

that is, the right-hand side is not always a single terminal or non-terminal. Is there a way of formalising these production rules? I can't see the relation method working... unless they were perhaps more like (NP → NNP) → POS. Or is it that these are not the exact production rules?

A: A context-free grammar is defined by a four-tuple (V, T, P, S):

- V: a set of non-terminal symbols
- T: a set of terminal symbols, disjoint from V
- P: a set of productions, each of which is a mapping v → ω where v ∈ V and ω ∈ (V ∪ T)*
- S: an element of V, the start symbol

Technically, you could derive V and T from P. However, everyone does roughly as above (with some variation of names, and occasionally using V and V ∪ T as primitives instead of V and T).

The important point is that the right-hand side of a production is not "a terminal or a non-terminal" but rather "an element of (V ∪ T)*", i.e. any string over the union. If you couldn't expand a non-terminal into more than one symbol, your language would only consist of single-element strings.
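This generalisation is straightforward to mirror in code: make the second element of each pair a sequence of symbols rather than a single symbol. A small illustrative sketch (the toy grammar and names here are made up, not taken from any treebank):

```python
# A production maps a non-terminal to a *sequence* of symbols, so the natural
# representation is a pair (lhs, rhs_tuple) rather than (lhs, single_symbol).
V = {"S", "NP", "VP", "NNP", "POS"}   # non-terminals
T = {"'s", "John"}                    # terminals

P = {
    ("NP", ("NNP", "POS")),   # NP -> NNP POS : rhs is a 2-tuple
    ("NNP", ("John",)),       # rhs can also be a single terminal...
    ("POS", ("'s",)),
    ("S", ("NP",)),           # ...or a single non-terminal
}

# Every rhs is an element of (V u T)*, i.e. a tuple over the union:
assert all(set(rhs) <= (V | T) for _, rhs in P)

for lhs, rhs in sorted(P):
    print(lhs, "->", " ".join(rhs))
```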
Q: How to convert Perl data into PDF?

I need to convert Perl data into PDF. For that I have installed CPAN on my UNIX system. Now I need to install PDF::API2, so please give me the UNIX command to install PDF::API2.

A: I think you need this command:

    sudo apt-get install libpdf-api2-perl

or:

    sudo perl -MCPAN -e "install PDF::API2"

For more reference check this page.
Q: Sending JSON to a PHP file that is located on the server

I don't really understand why my code is not working. I am simply taking values from a form and I want to insert them into a database using JSON and AJAX. Is there anything that I am doing wrong?

    $(document).ready(function() {
        $("#insert").click(function() {
            var email = $("#email").val();
            var password = $("#password").val();
            var name = $("#name").val();
            var bio = $("#bio").val();

            var postData = {"email":email,"password":password,"name":name,"bio":bio};

            $.ajax({
                type: "POST",
                dataType: "json",
                url: "http://**************/php-code/insert.php",
                data: {myData:postData},
                crossDomain: true,
                cache: false,
                beforeSend: function() {
                    $("#insert").val('Connecting...');
                },
                success: function(data) {
                    if (data == "success") {
                        alert("inserted");
                        $("#insert").val('submit');
                    }
                    else if (data == "error") {
                        alert("error");
                    }
                }
            });
            return false;
        });
    });

And the PHP file that sits on the server:

    include "db.php";

    if(isset($_POST['myData'])) {
        $email=$_POST['email'];
        $password=$_POST['password'];
        $name=$_POST['name'];
        $bio=$_POST['bio'];

        $q=mysqli_query($con,"INSERT INTO 'user' ('email','password','name', 'bio') VALUES ('$email','$password','$name','$bio')");

        if($q)
            echo "success";
        else
            echo "error";
    }

A: Your data does not need to be sent wrapped in 'myData'. It is already in a key-value-pair format and is ready to send. Just replace

    data: {myData:postData},

with

    data: postData,

And in your PHP code, remove the check for myData:

    if(isset($_POST['myData'])) {

and only check the fields you want to get (email, password, etc.).

As stated in the comment, your code is highly vulnerable and should use prepared statements!
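For that last point, a hedged sketch of the same INSERT using mysqli's procedural prepared-statement API, so user input never touches the SQL string. It assumes $con is the mysqli connection from db.php; note also that MySQL identifiers are quoted with backticks, not the single quotes used in the question's query.

```php
<?php
// Sketch only: bind the four form values as string parameters instead of
// interpolating them into the SQL, preventing SQL injection.
$stmt = mysqli_prepare(
    $con,
    "INSERT INTO `user` (`email`, `password`, `name`, `bio`) VALUES (?, ?, ?, ?)"
);
mysqli_stmt_bind_param($stmt, "ssss", $email, $password, $name, $bio);

if (mysqli_stmt_execute($stmt))
    echo "success";
else
    echo "error";
```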
Q: How to work out the complexity of the game 2048?

Edit: This question is not a duplicate of What is the optimal algorithm for the game 2048? That question asks 'what is the best way to win the game?' This question asks 'how can we work out the complexity of the game?' They are completely different questions. I'm not interested in which steps are required to move towards a 'win' state - I'm interested in finding out whether the total number of possible steps can be calculated.

I've been reading this question about the game 2048 which discusses strategies for creating an algorithm that will perform well playing the game. The accepted answer mentions that:

the game is a discrete state space, perfect information, turn-based game like chess

which got me thinking about its complexity. For deterministic games like chess, it's possible (in theory) to work out all the possible moves that lead to a win state and work backwards, selecting the best moves that keep leading towards that outcome. I know this leads to a large number of possible moves (something in the range of the number of atoms in the universe).. but is 2048 more or less complex?

Pseudocode:

for the current arrangement of tiles
    - work out the possible moves
    - work out what the board will look like if the program adds a 2 to the board
    - work out what the board will look like if the program adds a 4 to the board
    - move on to working out the possible moves for the new state

At this point I'm thinking I will be here a while waiting on this to run... So my question is - how would I begin to write this algorithm - what strategy is best for calculating the complexity of the game? The big difference I see between 2048 and chess is that the program can select randomly between 2 and 4 when adding new tiles - which seems to add a massive number of additional possible moves. Ultimately I'd like the program to output a single figure showing the number of possible permutations in the game. Is this possible?!
A: Let's determine how many possible board configurations there are. Each tile can be either empty, or contain a 2, 4, 8, ..., 512 or 1024 tile. That's 12 possibilities per tile. There are 16 tiles, so we get 16^12 = 2^48 possible board states - and this most likely includes a few unreachable ones.

Assuming we could store all of these in memory, we could work backwards from all board states that would generate a 2048 tile in the next move, doing a constant amount of work to link reachable board states to each other, which should give us a probabilistic best move for each state.

To store all boards in memory, let's say we'd need 4 bits per tile, i.e. 64 bits = 8 bytes per board state. 2^48 board states would then require 8 * 2^48 = 2251799813685248 bytes = 2048 TB (not to mention added overhead to keep track of the best boards). That's a bit beyond what a desktop computer these days has, although it might be possible to cleverly limit the number of boards required at any given time as to get down to something that will fit on, say, a 3 TB hard drive, or perhaps even in RAM.

For reference, chess has an upper bound of 2^155 possible positions.

If we were to actually calculate, from the start, every possible move (in a breadth-first-search-like manner), we'd get a massive number. This isn't the exact number, but rather a rough estimate of the upper bound. Let's make a few assumptions (which definitely aren't always true, but, for the sake of simplicity):

- There are always 15 open squares
- You always have 4 moves (left, right, up, down)
- Once the total sum of all tiles on the board reaches 2048, it will take the minimum number of combinations to get a single 2048 (so, if placing a 2 makes the sum 2048, the combinations will be 2 -> 4 -> 8 -> 16 -> ... -> 2048, i.e. taking 10 moves)
- A 2 will always get placed, never a 4 - the algorithm won't assume this, but, for the sake of calculating the upper bound, we will.
- We won't consider the fact that there may be duplicate boards generated during this process.

To reach 2048, there needs to be 2048 / 2 = 1024 tiles placed. You start with 2 randomly placed tiles, then repeatedly make a move and another tile gets placed, so there's about 1022 'turns' (a turn consisting of making a move and a tile getting placed) until we get a sum of 2048, then there's another 10 turns to get a 2048 tile. In each turn, we have 4 moves, and there can be one of two tiles placed in one of 15 positions (30 possibilities), so that's 4 * 30 = 120 possibilities. This would, in total, give us 120^1032 possible states. If we instead assume a 4 will always get placed, we get 120^519 states. Calculating the exact number will likely involve working our way through all these states, which won't really be viable.
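The arithmetic in this answer is easy to reproduce; the short Python sketch below recomputes the storage bound and the turn-count estimate (the 16^12 state bound and the 120-possibilities-per-turn figure are taken from the answer itself):

```python
# Recompute the ballpark bounds: 16^12 = 2^48 board states at 8 bytes
# each, and 120 possibilities per turn over roughly 1022 + 10 = 1032 turns.
states = 16 ** 12
assert states == 2 ** 48

bytes_needed = 8 * states        # 8 bytes (64 bits) per board state
print(bytes_needed)              # 2251799813685248
print(bytes_needed // 2 ** 40)   # 2048 (TB, in binary terabytes)

turns = (2048 // 2 - 2) + 10     # ~1022 turns to reach sum 2048, then 10 merges
per_turn = 4 * 30                # 4 moves x (2 tile values * 15 open cells)
print(turns, per_turn)           # 1032 120
upper_bound = per_turn ** turns  # the 120^1032 upper bound from the answer
```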
Q: Infowindow Button jQuery Event I am having trouble executing a jQuery function that is called when someone clicks Edit, Share or Delete on the infoWindow div. var markers = []; for(i=0; i<array.length; ++i) { var marker = new google.maps.Marker({ position: {lat: parseFloat(array[i]['latitude']), lng: parseFloat(array[i]['longitude'])}, map: map }); var id = array[i]['id']; var edit = 'edit', share = 'share', del = 'delete'; var cString = '<div style="margin: auto; text-align: center; font-family: Tahoma, Geneva, sans-serif;"><strong>Location Name: </strong>' + array[i]['title'] + '<br><strong>Location Description: </strong>' + array[i]['description'] + '<br><br><br><div class="btn-group"><button type="button" class="btn btn-primary '+edit+'" id="' + id + '">Edit</button><button type="button" class="btn btn-primary '+share+'" id="' + id + '">Share</button><button type="button" class="btn btn-primary '+del+'" id="' + id + '">Delete</button></div>'; contentString.push(cString); google.maps.event.addListener(marker, 'click', (function(marker, i) { return function() { infoWindow.setContent(contentString[i]); infoWindow.open(map, marker); } })(marker, i)); // this is the function $('button').click(function() { console.log('clicked'); }); markers.push(marker); } It doesn't display clicked for buttons assigned to infoWindow but does for other buttons like signout, view profile etc. Array is a JSON array that has has the structure: [ { id:"1" description:"I am Loving It! ↵McArabia Combo Meal: 520 Rs/-" latitude:"25.28919345" longitude:"67.11113134" title:"McDonalds" type:"favourite" },//.... //...... ] How can i fix this? A: You are adding those buttons dynamically after the page has loaded. You need to attach the click event on buttons using .on() function. $(document).on( "click", "button", function() { console.log('clicked'); }); And dont add this event binding inside for loop. Put this in document ready. 
This is just for basic info, follow this link to read more about on() and how to use proper selector/container. A: Further @anu comment: That's because the infoWindow added to the DOM only when in the function infoWindow.open(map, marker); so when you bind the click to the buttons, the infoWindow's button not included. And Live example: $(document).on('click', '.info-button', function(){ alert('button clicked'); }); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <!DOCTYPE html> <html> <head> <meta name="viewport" content="initial-scale=1.0, user-scalable=no"> <meta charset="utf-8"> <title>Info windows</title> <style> /* Always set the map height explicitly to define the size of the div * element that contains the map. */ #map { height: 100%; } /* Optional: Makes the sample page fill the window. */ html, body { height: 100%; margin: 0; padding: 0; } </style> </head> <body> <div id="map"></div> <script> // This example displays a marker at the center of Australia. // When the user clicks the marker, an info window opens. function initMap() { var uluru = {lat: -25.363, lng: 131.044}; var map = new google.maps.Map(document.getElementById('map'), { zoom: 4, center: uluru }); var contentString = '<div id="content">'+ '<button class="info-button">Click on it</button>' + '</div>'; var infowindow = new google.maps.InfoWindow({ content: contentString }); var marker = new google.maps.Marker({ position: uluru, map: map, title: 'Uluru (Ayers Rock)' }); marker.addListener('click', function() { infowindow.open(map, marker); }); } </script> <script async defer src="https://maps.googleapis.com/maps/api/js?&callback=initMap"> </script> </body> </html>
Q: JSLink: Return field value as plain text

This might be really very simple. I have a list view with multiple rich text fields, and I want to remove the HTML formatting from those fields in a specific view and return them as plain text. I would like to do it without using RegEx. I am sure there must be a simpler way. Any suggestions?

A: It was simpler than I thought. Implemented it with JSLink:

(function () {
    // Create an object with the context information about the field whose render output we want to change
    var viewContext = {};
    viewContext.Templates = {};
    viewContext.Templates.Fields = {
        "Field": { "View": renderPlainText }
    };
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(viewContext);
})();

My render function looks like below:

function renderPlainText(ctx) {
    var value = ctx.CurrentItem[ctx.CurrentFieldSchema.Name];
    value = value.replace(/<(?:.|\n)*?>/gm, '');
    return "<div>" + value + "</div>";
}
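The tag-stripping regex in renderPlainText does not depend on SharePoint at all, so it can be tried standalone; this is a minimal sketch of what it does to a rich-text value (the sample markup is made up):

```javascript
// Standalone version of the tag-stripping step from renderPlainText.
// The (?:.|\n) alternation is there because '.' does not match newlines,
// so multi-line rich text is handled too.
function stripHtml(value) {
    return value.replace(/<(?:.|\n)*?>/gm, '');
}

const rich = '<div class="ms-rte"><strong>Hello</strong> <em>world</em></div>';
console.log(stripHtml(rich)); // -> "Hello world"
```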
Q: Retrieve word from chapter

I have a list of about 200,000 words, each with about 10 numerical features. The 200,000 words are split into size-500 chunks by some header (e.g., "Chapter x"). I want to write a script that does nothing except prompt the user to input a string, find that string among the 200,000 words (the word may be in multiple chunks, but in each chunk it will only be found once), and return the names of the chapters under which the string falls, sorted by the value of that word's numerical feature within each chapter.

E.g., suppose that the word "twelve" appears in 3 chapters and I want to sort it by feature 1, which has the value 50, 30, 2 in chapters 10, 14, and 9 respectively. I want the output:

Chapter 10, 50
Chapter 14, 30
Chapter 9, 2

Before I even start writing a script, I want to make sure that this is a reasonable task for Python. In other words, will the execution time be in seconds, or in minutes? If I instead had 500,000 words, would it still be feasible? I don't want to keep the user waiting.

A: Ballpark time: Let's say all your words are 10 8-bit string characters (so, 80 bits each), and you need to compare your input string to all 200,000 of them. That's about 16 million bit-comparison operations. If your processor is running the code at 1 GHz, you will finish in 0.016 seconds. Even if I've underestimated the number of operations this task will take by a factor of 100, it will only take about 1 second to execute.
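Rather than scanning all 200,000 words per query, an index built once makes each lookup effectively instant; here is a rough Python sketch of the structure the question describes (the data layout is assumed):

```python
# Build a word -> [(chapter, feature_value), ...] index once; each query
# is then a dictionary lookup plus a tiny sort, far below one second
# even for 500,000 words.
from collections import defaultdict

index = defaultdict(list)

def add_word(word, chapter, feature_value):
    index[word].append((chapter, feature_value))

def lookup(word):
    """Chapters containing `word`, sorted by feature value, descending."""
    return sorted(index.get(word, []), key=lambda pair: pair[1], reverse=True)

add_word("twelve", "Chapter 10", 50)
add_word("twelve", "Chapter 9", 2)
add_word("twelve", "Chapter 14", 30)

for chapter, value in lookup("twelve"):
    print(f"{chapter}, {value}")
# Chapter 10, 50
# Chapter 14, 30
# Chapter 9, 2
```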
Q: Cypher query gives unnecessary relationships

I am trying to display only one relevant relationship in the Cypher query web browser, but it displays all the relationships between the nodes. I am running the following query:

MATCH (emp:Employee)-[e:EMPLOYED {dateendrole:"Current"}]->(c:Company {companyname:"xyza"})
MATCH (emp)-[ea:EDU_ASSOCIATED]->(ec:Company)
MATCH (another_emp:Employee)-[ea1:EDU_ASSOCIATED {overlapyearstart:ea.overlapyearstart, overlapyearend:ea.overlapyearend}]->(:Company {comp_id:ec.companyId})
RETURN emp, e, c, ea, ec, another_emp, ea1
LIMIT 1;

My intention in the above query is to find associated employees in another company, where an employee currently employed at one company was or has been employed at another. For example, find associated employees in some company where an employee has worked before in that company and is currently working in the xyza company. Here, Employee and Company are the nodes. There is an associated relationship which contains their overlap years as properties of the relationship, e.g.:

(emp)-[:Associated {overlapyearstart:x, overlapyearend:y}]->(company)

If the employee has worked with another employee at some company, then the overlap years will be the same. The above query gives the following output in the web interface of Neo4j. In the image, "Mr" (nodes don't display proper names) is the employee. "United States" is the current employer. "Unknown" is a company he/she worked at in the past, and "Doctor" is the employee associated with "Mr" at the "Unknown" company. I have two questions:

From "Doctor" to "Unknown", why does it display all the relationships? How can I show only one relevant relationship? Currently it shows every "Doctor" to "Unknown" relationship. How can I do the same as above for "Mr" to "Unknown"?

A: I believe that these relationships are only present in the Graph visualization mode of Neo4j Browser. If you change your visualization mode to "Text", "Table" or "Code" these relationships will not be shown.
That is, the Graph visualization mode is trying to "complete" the graph for you. To achieve the desired result you should go to the "Graph Visualization" section of the Neo4j Browser settings and uncheck the option "Connect result nodes", as shown in the image below:
Q: Can Bootsfaces be used with Richfaces? I have a web application with the following: JSF 2.2.6 Java 1.7 Tomcat 8 Richfaces 4.5.7 Omnifaces 2.2 I would like to enhance the look and feel of the web app to use Bootsfaces. I have added Bootsfaces jar to my project (via .ivy) and rebuilt. Before even updating the first web page to use Bootsfaces in the project I wanted to see if there were any conflicts after adding the Bootsfaces jar. It seems there is. I am getting the following errors shown in Firebug: ReferenceError: jsf is not defined TypeError: RichFaces.ui is undefined Does anyone know whether Bootsfaces works with Richfaces and if so could they kindly provide a link with migration steps to follow? A: Currently, we, the BootsFaces team, do not support RichFaces. We strive for compatibility with PrimeFaces, OmniFaces, AngularFaces and - if possible - ButterFaces. Neither RichFaces nor ICEFaces are on our list. However, if someone were to join our team in order to support RichFaces, they'd certainly be welcome!
Q: Dojo events: getting it to work with dynamically added DOM elements

I have a method of a class as follows:

add_file: function(name, id, is_new) {
    // HTML: <div class="icon mime zip">name.zip <a>x</a></div>
    var components = name.split('.');
    var extension = components[components.length - 1];
    this.container.innerHTML += "<div id='" + id + "' class='icon mime " + extension + "'>" + name + " <a id='remove-" + id + "' href='#remove'>x</a></div>";
    // Add event to a tag
    dojo.connect(dojo.byId('remove-' + id), 'onclick', function(ev) {
        // here i am
    });
},

All is working well until I run this method more than once. The first time, the event is registered correctly, and clicking the 'x' will run the "here i am" function. However, once I add more than one node (and yes, the ID is different), the event is registered on the last node but removed from any previous ones. In effect I have this:

<div id="field[photos]-filelist">
    <div id="file1" class="icon mime jpg">file1.jpg <a id="remove-file1" href="#remove">x</a></div>
    <div id="file2" class="icon mime jpg">file2.jpg <a id="remove-file2" href="#remove">x</a></div>
</div>

...and the remove link only works for the last node (remove-file2 in this case).

A: The problem is that you are using innerHTML +=. That is going to take the existing HTML, convert it to plain markup, and then completely create new nodes from the markup. In the process, all of the nodes with events get replaced with nodes that look exactly the same but are not connected to anything. The correct way to do this is to use dojo.place(newNodeOrHTML, refNode, positionString):

var myNewHTML = "<div id='" + id + "' class='icon mime " + extension + "'>" + name + " <a id='remove-" + id + "' href='#remove'>x</a></div>";

// This won't work, as it breaks all the connections between nodes and events
this.container.innerHTML += myNewHTML;

// This will work because it uses proper DOM manipulation techniques
dojo.place(myNewHTML, this.container, 'last');
Q: Can symplectic blow up increase symplectic capacities? Let $N$ be a symplectic submanifold of $M$. Symplectic blow up of $M$ along $N$ is an operation replacing a tubular neighborhood of $N$ with the projectivization of that neighborhood. So it decreases the volume. I have a question on the change of symplectic capacities. A symplectic capacity $c$ is a function from the set of symplectic manifolds to $[0, \infty]$ satisfying $c(M_1) \leq c(M_2)$ if we can embed $M_1$ into $M_2$ symplectically, $c(M, k\omega) = |k| c(M, \omega)$ for $k \neq 0$, and $c(B^{2n}(r)) = c (B^2(r) \times \mathbb{R}^{2n-2}) = \pi r^2$, where $B^{2n}(r)$ is a $2n$-dimensional ball of radius $r$. Symplectic capacities may not change after symplectic blow ups. But it seems to me that it is impossible that symplectic blow ups increase symplectic capacities. I couldn't prove this. Can symplectic blow up increase symplectic capacities? A: The answer is yes. Let $c$ be the Gromov width except we put $c(M)=\infty$ if $M$ admits an embedding of $B^{2n}(r)$ with $0$ blown up for some $r$. Using that Gromov width is a capacity it is easy to check 1-3 above, and blowing up $B^{2n}(1)$ at 0 changes this capacity from 1 to $\infty$.
Q: Any popular c++ code static check tools recommended? There are several new C++ guys working in our team, so there is too much ugly code every day! I hate those functions taking read-only strings or STL containers as parameters, but without const reference!!! I'm going crazy!!! Is there any static code checker that can find this ugly code? I need a tool that can be used in our makefile.

A: Yeah, it's unlikely that "bad code" can be prevented with automated tools. For myself, and I'm also doing this at my workplace, I've always turned on as many warnings as possible (usually by enabling a high level of warnings and only turning off the 'obviously dumb' warnings; g++ being the only exception since it doesn't have an option to turn on everything, so I do -Wall, -Wextra and a whole bunch of other -W flags, and occasionally go through the manual to see whether new warnings have been added). I also compile with -Werror or /WX. Unfortunately, while Linux and Windows headers seem to be rather clean by now, I get stupid warnings about things like bad casts or incorrectly used macros from Boost headers. 3rd-party libraries are often badly written with regard to warnings. As for static analysis tools, I did try cppcheck and clang (both of which are free, which is why I tried them). Wasn't thrilled about either of them; I'm still planning to add some support for one or both to my build software, but it has rather low priority. One of the two (don't remember which one) actually found SOMETHING: an unnecessary assignment, which any decent optimizer will remove anyway. I don't think that I'm such a perfect 0-bugs developer, so I'm blaming the tools. Still, I did remove that assignment :-) If I'm not mistaken, the commercial Visual Studio versions have code analysis as well (at home I'm more of an Express guy, and I'm stuck with MacOS development at work); maybe that one is better. Or one of the other commercial tools; they have to offer SOMETHING for their money, after all.
There are still some additional free tools that I haven't tried yet; I have no idea how complete the http://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis#C.2FC.2B.2B list is, but I hope to eventually try all the free tools that can handle C++. For your problem in particular, Wikipedia describes "cpplint" as "cpplint implements what Google considers to be "best practices" in C++ coding". I have no idea what that means, but the Wikipedia page has a link to a "Google C++ Style Guide" PDF. Or you could just try it and see what it complains about :-) Also, I probably wouldn't want to add such tools to the Makefile (unless you meant to imply that people still have to invoke "make check" to actually run it). Adding it to the source code repository to check new commits before allowing them is probably too time consuming (code analysis is pretty much "compiling with many extras", so it takes a good deal of time), but you could automatically run it every now and then.
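If a Makefile hook is what you're after, a hypothetical target along these lines would run cppcheck over the tree and fail the build on findings (the tool choice, flags and the src/ layout are assumptions, not something from the question):

```make
# Hypothetical "make lint" target -- assumes cppcheck is installed and
# sources live under src/. --error-exitcode=1 makes findings fail the build.
SRC := $(wildcard src/*.cpp)

.PHONY: lint
lint:
	cppcheck --enable=warning,style,performance --error-exitcode=1 $(SRC)
```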
Q: Over time, would I exert less effort mining a large asteroid or collecting small asteroids, assuming both have similar compositions? Suppose there are two asteroids, one very large (larger than a football field) and one very small (roughly the size of a refrigerator). Assume they have similar compositions and are at equal distances from my current location. My hypothetical goal is to begin collecting these smaller, "refrigerator-sized" chunks of material -- by any means -- and use them for some purpose that is unrelated to the question. Would I be better off going to the large asteroid and setting up a mining operation, or locating and collecting the asteroid that is already the size I'm interested in? What if instead of a single chunk, I began identifying, locating, and retrieving smaller asteroids that are already near the size I am interested in? Would the effort I am putting out depend on the asteroid's composition, and would the hypothetical mining operation cost less effort over time as compared to hunting down the appropriate chunks? Some more specific parameters: For the first part of the question, assume I've identified the first two asteroids in different parts of the asteroid belt, but they're approximately the same distance from my location. Let's call it 1 AU. I would like to return any contents to an arbitrary point in open space, not under great influence of any nearby bodies. The vehicle carrying the payload should be capable of either coming to a stop at this location, or slowing enough for another, larger device to capture it. I'm not so much concerned about the details of this procedure for the question, more the actual retrieval of the material instead of the delivery. Primary candidates for retrieval would be M-type asteroids. They are desirable for identification and retrieval due to their moderate brightness and metal content. A: Your best bet is mining the smallest single rock that provides as much material as you need. 
Suppose you mine a bunch of little rocks--you have to expend Δv to move between each rock--energy that you don't expend when you're mining a single rock. On the other hand, the bigger the rock, the more Δv you will expend lifting your cargo against it. Note that the means by which you lift your cargo matters here; you very well might be better off with a rock big enough to mount some sort of throwing system. Throwing your stuff home from Ceres is going to take a lot less work than bringing it home by rocket from your football field.

A: This is more complicated than just how far away, or the size of the asteroid. If the material is on the surface and easily harvestable, then the smaller asteroids are going to be more efficient to mine. But if you need to extract large amounts of ore to get any usable amount of material, then a larger asteroid, where you can set up something to mine and at least pre-process if not fully process the ore, is going to be more efficient. On small asteroids you would need to move the asteroid to someplace where it can be harvested for the ore. The first asteroids likely to be targeted for mining will be larger and have multiple mineral targets. Since every operation is going to require a certain base effort, you will be able to leverage that base infrastructure to build out for multiple mining targets. Smaller targets will require some medium to mine the ore, load the ore into carriers, and transport the ore to some place to be refined.
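The "more Δv against a bigger rock" point is easy to put numbers on: at fixed density, escape velocity grows linearly with asteroid radius. The density and sizes below are illustrative assumptions, not figures from the question:

```python
# Escape velocity v = sqrt(2GM/r) with M = (4/3)*pi*r^3*rho, so v grows
# linearly with radius r at fixed density rho.
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
RHO = 3000.0      # assumed rocky/metallic density, kg/m^3

def escape_velocity(radius_m):
    mass = (4.0 / 3.0) * math.pi * radius_m ** 3 * RHO
    return math.sqrt(2 * G * mass / radius_m)

fridge = escape_velocity(1.0)    # refrigerator-sized rock
field = escape_velocity(50.0)    # roughly football-field-sized rock
print(f"{fridge:.5f} m/s vs {field:.5f} m/s")
print(field / fridge)            # ratio tracks the radius ratio, ~50
```

Both values are tiny in absolute terms (millimetres per second), which is why, for rocks this small, the Δv of hopping between targets rather than lifting off them dominates the cost of the many-small-rocks strategy.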
Q: Install the node-osc module in npm, in nodejs

I'm trying to install the node-osc package in Node.js. I run npm install node-osc and get this. I tried installing the dependencies on their own:

npm install python
npm install node-gyp

This did not work.. any ideas about what went wrong?

Update: I have set the environmental variable like this:

Variable name: PYTHON
Variable value: c:\Python33\

Now I got rid of the "python not found" thingy and I get this.

Update: Now I installed the CORRECT version of Python (2.7).. 3.3 is not supported by node-gyp, and I get this.

Update: Turns out I had to install some other stuff to get it working on a 64-bit machine; this guide was life-saving: https://github.com/TooTallNate/node-gyp#installation

A: The error means you're missing Python from your executables, which node-gyp requires to build some modules. That means it either isn't installed, or you haven't set the PATH variable for Python. To fix this, just install Python. The installation guide states you will need version 2.6/2.7 of Python (not 3.x).
Q: Store rating as an integer - android

In my Android app, I have to ask the user to rate it (star icons, 4 stars). If a user rates 2 stars, I need to convert that to the value 2 (int type) and store it at the back-end (Salesforce). And while displaying the summary, I need to get the int value and display it as 1, 2, 3 or 4 stars. How do I approach this task? Any help with code or a suggestion is appreciated. Thanks.

A: You can use the RatingBar widget in Android to display a star-rating bar to the user, and then use the getRating() method to read the rating the user chose (it returns a float, which you can round to an int). Note that getNumStars() returns the total number of stars the bar displays, not the user's rating. Details here: https://developer.android.com/reference/android/widget/RatingBar.html
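As a plain-Java sketch of the round trip (RatingBar itself is an Android widget, so only the float-to-int conversion is modelled here; the class and method names are made up):

```java
// Models converting a RatingBar-style float rating to a storable int,
// clamped to the bar's range, e.g. before sending it to the back-end.
public class RatingCodec {

    /** Round a float rating (e.g. from getRating()) to an int in [0, maxStars]. */
    public static int toStored(float rating, int maxStars) {
        int rounded = Math.round(rating);
        return Math.max(0, Math.min(maxStars, rounded));
    }

    public static void main(String[] args) {
        System.out.println(toStored(2.0f, 4)); // 2
        System.out.println(toStored(3.6f, 4)); // 4
        System.out.println(toStored(9.0f, 4)); // clamped to 4
    }
}
```

Displaying the summary is then just a matter of handing the stored int back to the bar via setRating().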
Q: doxygen - how to multi-line a file documentation enum?

I'm using doxygen to document a C++ enum like this:

/** Members */
enum {
    MEMBER_ONE, /*!< This is member one */
    MEMBER_TWO  /*!< This is member two */
} members;

The documentation looks good, but the problem is that the code with the hyperlinks to the documentation (I'm talking about the section in the File Documentation, like the following) doesn't have any newlines among the members of the enum source code:

File Documentation
file1.h
enum { MEMBER_ONE, MEMBER_TWO };

Is there any way to force doxygen to respect the newlines, or to insert them?

A: Doxygen reformats the enum's values, but you can control how many elements will appear on a line via the ENUM_VALUES_PER_LINE configuration option. So you could set it to 1 to get one item per line, as in the original source code.
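In Doxyfile terms, the setting the answer refers to is a one-liner (assuming you want exactly one value per line):

```text
# Doxyfile fragment
ENUM_VALUES_PER_LINE = 1
```

Setting it to 0 instead puts all values on a single line.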
Q: tikzcd diagram within an array I am trying do make this looking better: namely: curly brackets enclosing everything, tikzcd diagram vertically centered compared to the two other lines. Any suggestion? \documentclass[letter, 11pt]{article} \usepackage{multirow} \usepackage{amsmath} \usepackage{tikz-cd} \tikzset{ commutative diagrams/.cd, arrow style=tikz, diagrams={>={Computer Modern Rightarrow[length=5pt,width=5pt]}}, } \begin{document} \begin{align*} M := \left\{\begin{array}{c|c} f:A \to B & \multirow{2}{*}{ \begin{tikzcd}[ampersand replacement=\&,column sep=1em] X \times Y \ar[r, "m"] \ar[d, "r"']\& Z \times W \ar[r, "n"] \& V\\ X \times Y \ar[r, "m"] \& Z \times \ar[r, "n"]W \& V \ar[u, "h"'] \end{tikzcd} } \\ g: A \times X \to Y & \\ \end{array}\right\} \end{align*} \end{document} A: an alternative, simple solution: \documentclass[letter, 11pt]{article} \usepackage{amsmath} \usepackage{tikz-cd} \tikzset{ commutative diagrams/.cd, arrow style=tikz, diagrams={>={Computer Modern Rightarrow[length=5pt,width=5pt]}}, } \begin{document} \[ M := \left\{\begin{array}{c|c} \begin{gathered} f:A \to B \\ g: A \times X \to Y \end{gathered} & \begin{tikzcd}[ampersand replacement=\&] X \times Y \ar[r, "m"] \ar[d, "r"']\& Z \times W \ar[r, "n"] \& V\\ X \times Y \ar[r, "m"] \& Z \times \ar[r, "n"]W \& V \ar[u, "h"'] \end{tikzcd} \end{array}\right\} \] \end{document} addendum: some off-topic remarks: for determining arrows style you can instead of \tikzset use (shorter) \tikzcdset, for example: \tikzcdset{arrow style=tikz, diagrams={>=Straight Barb} % I liked such arrows :-) } in your case you not need ampersand replacement=\& arrows is better -- due to consistency of code -- to write after node content, i.e.: instead Z \times \ar[r, "n"]W is better Z\times W \ar[r, "n"], regardless that resulting diagram is the same \documentclass[margin=3mm, varwidth]{standalone} \usepackage{amsmath} \usepackage{tikz-cd} \tikzcdset{arrow style=tikz, diagrams={>=Straight Barb} } 
\begin{document} \[ M := \left\{\begin{array}{c|c} \begin{gathered} f:A \to B \\ g: A \times X \to Y \end{gathered} & \begin{tikzcd}%[sep=large] % i like bigger diagram :-) X\times Y \ar[r, "m"] \ar[d, "r"'] & Z\times W \ar[r, "n"] & V\\ X\times Y \ar[r, "m"] & Z\times W \ar[r, "n"] & V \ar[u, "h"'] \end{tikzcd} \end{array}\right\} \] \end{document} gives: A: I think you should not use \left and \right. Also, letter is not defined yet. Use letterpaper. \documentclass[letterpaper,11pt]{article} \usepackage{mathtools} \usepackage{tikz-cd} \tikzset{ commutative diagrams/.cd, arrow style=tikz, diagrams={>={Computer Modern Rightarrow[length=5pt,width=5pt]}}, } \makeatletter \newcommand{\vast}{\bBigg@{4}} \makeatother \begin{document} \[ M\coloneqq\vast\{ \begin{array}{c} f:A\to B\\ g:A\times X\to Y \end{array}\vast|\begin{tikzcd}[ampersand replacement=\&,column sep=1em] X \times Y \ar[r, "m"] \ar[d, "r"']\& Z \times W \ar[r, "n"] \& V\\ X \times Y \ar[r, "m"] \& Z \times \ar[r, "n"]W \& V \ar[u, "h"'] \end{tikzcd} \vast\} \] \end{document} The arrow tips are not consistent at all... However, as that is your intention, I keep it.
Q: Get List Of Connected Joysticks using VBScript

I'm looking for a way to get the list of connected joysticks via VBScript, just like in the picture below (the order of them is highly important):

A: Here is the code to do that... hard to find on the net...

strComputer = "."
Set objWMIService = GetObject("winmgmts:" _
    & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set colItems = objWMIService.ExecQuery("Select * from Win32_PnPEntity")
For Each objItem in colItems
    Wscript.Echo "Class GUID: " & objItem.ClassGuid
    Wscript.Echo "Description: " & objItem.Description
    Wscript.Echo "Device ID: " & objItem.DeviceID
    Wscript.Echo "Manufacturer: " & objItem.Manufacturer
    Wscript.Echo "Name: " & objItem.Name
    Wscript.Echo "PNP Device ID: " & objItem.PNPDeviceID
    Wscript.Echo "Service: " & objItem.Service
Next

Hope this helps
Q: Connecting multiple grounds As I'm relatively new to the electronics world, I was wondering if you can connect multiple ground terminals (corresponding to several different voltage outputs) to the same ground. Or do they each need their own? In my case I have 5 lines, one 12V, one 3.3V, two 5V and one variable from a 12V in using a voltage regulator that already has its own ground. So I'm guessing that each would need its own ground, but it is never bad to ask. A: If you have two separated circuits the voltages of the first don't mean anything to the second and vice versa. If you want to combine the circuits you'll have to connect a reference on one circuit with a reference on the other one. In 99 % of cases you'll choose the resp. grounds for this, because that's what ground is for: a reference against which all the rest is measured. If there's a 3 V level in a circuit, it will be referenced to ground, unless specified otherwise. So by connecting the ground of a 5 V circuit to the ground of a 12 V circuit the 5 V becomes meaningful for that circuit as well: it will also be 5 V, or 7 V less than the 12 V. A well designed circuit must have a reliable ground, which means that the 0 V at one point should be as close as possible to that 0 V at any other point of the ground net. Zero difference is not always possible if you're working with high currents, but the difference should be as low as possible.
Q: Run unit tests only on Windows I have a class that makes native Windows API calls through JNA. How can I write JUnit tests that will execute on a Windows development machine but will be ignored on a Unix build server? I can easily get the host OS using System.getProperty("os.name") I can write guard blocks in my tests: @Test public void testSomeWindowsAPICall() throws Exception { if (isWindows()) { // do tests... } } This extra boiler plate code is not ideal. Alternatively I have created a JUnit rule that only runs the test method on Windows: public class WindowsOnlyRule implements TestRule { @Override public Statement apply(final Statement base, final Description description) { return new Statement() { @Override public void evaluate() throws Throwable { if (isWindows()) { base.evaluate(); } } }; } private boolean isWindows() { return System.getProperty("os.name").startsWith("Windows"); } } And this can be enforced by adding this annotated field to my test class: @Rule public WindowsOnlyRule runTestOnlyOnWindows = new WindowsOnlyRule(); Both these mechanisms are deficient in my opinion in that on a Unix machine they will silently pass. It would be nicer if they could be marked somehow at execution time with something similar to @Ignore Does anybody have an alternative suggestion? A: Have you looked into assumptions? In the before method you can do this: @Before public void windowsOnly() { org.junit.Assume.assumeTrue(isWindows()); } Documentation: http://junit.sourceforge.net/javadoc/org/junit/Assume.html A: In Junit5, There are options for configuring or run the test for specific Operating System. @EnabledOnOs({ LINUX, MAC }) void onLinuxOrMac() { } @DisabledOnOs(WINDOWS) void notOnWindows() { // ... } A: Have you looked at JUnit assumptions ? useful for stating assumptions about the conditions in which a test is meaningful. A failed assumption does not mean the code is broken, but that the test provides no useful information. 
The default JUnit runner treats tests with failing assumptions as ignored (which seems to meet your criteria for ignoring these tests).
Q: Python - eval and show size of data read from stdin I have a Python script that reads lines from stdin such as:

# Read nmon data from stdin
data = sys.stdin.readlines()

# Number of lines read
nbr_lines = len(data)

# Show current time and number of lines
msg = now + " Reading NMON data: " + str(nbr_lines) + " lines"
print (msg)

I would like to evaluate and show the total amount of data in bytes that has been read from stdin; is that possible? Thank you for your help!

A: The total amount of bytes depends on the encoding of your input. For an 8-bit encoding (e.g. ASCII), or if you just need to know the number of characters:

bytes_total = len(''.join(data))
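Since the byte count depends on the text encoding, a more robust approach than counting characters is to encode each line back to bytes and sum the lengths. A small sketch (not from the original thread; UTF-8 is assumed here as the encoding):

```python
def stdin_stats(lines, encoding="utf-8"):
    """Return (line count, character count, byte count) for a list of lines."""
    nbr_lines = len(lines)
    nbr_chars = sum(len(line) for line in lines)
    # The character count equals the byte count only for 8-bit encodings like
    # ASCII; for multi-byte encodings such as UTF-8, encode and measure.
    nbr_bytes = sum(len(line.encode(encoding)) for line in lines)
    return nbr_lines, nbr_chars, nbr_bytes

# In the script from the question this would be used as:
#   data = sys.stdin.readlines()
#   lines, chars, size = stdin_stats(data)
#   print(now + " Reading NMON data: %d lines, %d bytes" % (lines, size))
```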
Q: Getting value of a "object HTMLInputElement" and function error I have checkboxes on a page and I get the checked ones then I loop through them; var checkeds = $('#accTypes input:checked'); var values = ""; for (var i = 0; i < checkeds.length; i++) { console.log(checkeds[i]); var cval = checkeds[i].val(); values = values + "," + cval; } I recognized that the row below causes error. checkeds[i].val() When I print the checkeds[i] variable in chrome console, I see; <input type=​"checkbox" name=​"accom-check" id=​"checkboxP12" value=​"12">​ I wanted to get the value of checkeds[i] variable. How can I do this? A: A jQuery collection is an array-like object containing native DOM nodes. When you access it as checkeds[1] you get the native DOM node, not the jQuery version, so it doesn't have a val() method. Either use the native value var cval = checkeds[i].value; or use eq() to get the jQuery object var cval = checkeds.eq(i).val(); As a sidenote, you could do the same thing with a map var values = $('#accTypes input:checked').map(function() { return this.value; }).get().join(',');
Q: How to apply a dict in python to a string as opposed to a single letter I am trying to output the alphabetical values of a user entered string, I have created a dict and this process works, but only with one letter. If I try entering more than one letter, it returns a KeyError: (string I entered) If I try creating a list of the string so it becomes ['e', 'x', 'a', 'm', 'p', 'l', 'e'] and I get a TypeError: unhashable type: 'list' I cannot use the chr and ord functions (I know how to but they aren't applicable in this situation) and I have tried using the map function once I've turned it to a list but only got strange results. I've also tried turning the list into a tuple but that produces the same error. Here is my code: import string step = 1 values = dict() for index, letter in enumerate(string.ascii_lowercase): values[letter] = index + 1 keyw=input("Enter your keyword for encryption") keylist=list(keyw) print(values[keylist]) Alt version without the list: import string step=1 values=dict() for index, letter in enumerate(string.ascii_lowercase): values[letter] = index + 1 keyw=input("Enter your keyword for encryption") print(values[keyw]) A: You need to loop through all the letters and map each one individually: mapped = [values[letter] for letter in keyw] print(mapped) This uses a list comprehension to build the list of integers: >>> [values[letter] for letter in 'example'] [5, 24, 1, 13, 16, 12, 5] The map() function would do the same thing, essentially, but returns an iterator; you need to loop over that object to see the results: >>> for result in map(values.get, 'example'): ... print(result) 5 24 1 13 16 12 5 Note that you can build your values dictionary in one line; enumerate() takes a second argument, the start value (which defaults to 0); using a dict comprehension to reverse the value-key tuple would give you: values = {letter: index for index, letter in enumerate(string.ascii_lowercase, 1)}
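Putting the answer's pieces together, the whole script collapses to a few lines; a sketch (the raw_input prompt is dropped so the functions can be exercised directly):

```python
import string

# Build the letter -> position table in one step, starting enumerate() at 1.
values = {letter: index for index, letter in enumerate(string.ascii_lowercase, 1)}

def keyword_values(keyw):
    """Map each letter of the keyword individually, instead of the whole string."""
    return [values[letter] for letter in keyw]
```

so keyword_values('example') yields the per-letter numbers instead of raising a KeyError on the whole string.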
Q: Remove part between two patterns in bash Let's say I have a huge file with this: (Ano_gla|EOG091B00FI:0.21327484,Tri_cas|EOG091B00FI:0.14561670,((Tri_bro|EOG091B00FI:0.00523450,Tri_jap|EOG091B00FI:0.01261030)1.00 0000:0.26780267,(((((Orm_nit|EOG091B00FI:0.00243200,Orm_pom|EOG091B00FI:0.00914980)1.000000:0.08747204,(((((Meg_dor|EOG091B00FI:0.0 0953580,Meg_sti|EOG091B00FI:0.02205870)1.000000:0.09005934,(Cer_mar|EOG091B00FI:0.00429740,Cer_sol|EOG091B00FI:0.02112877)1.000000: 0.07852307)0.937000:0.01510878,(((Cec_fun|EOG091B00FI:0.04067119,(Tri_sar|EOG091B00FI:0.00462004,(Nas_gir|EOG091B00FI:0.00126111,Na s_lon|EOG091B00FI:0.00087461)0.877000:0.00251191)0.995000:0.01752929)1.000000:0.04366313,(Tri_bra|EOG091B00FI:0.00461186,Tri_pre|EO G091B00FI:0.01023626)1.000000:0.44067486)0.000000:0.01008020,(Ana_pse|EOG091B00FI:0.07264534)) And I'm looking for a bash method in order to remove the part between the | and : and get: (Ano_gla:0.21327484,Tri_cas:0.14561670,((Tri_bro:0.00523450,Tri_jap:0.01261030)1.00 0000:0.26780267,(((((Orm_nit:0.00243200,Orm_pom:0.00914980)1.000000:0.08747204,(((((Meg_dor:0.0 0953580,Meg_sti:0.02205870)1.000000:0.09005934,(Cer_mar:0.00429740,Cer_sol:0.02112877)1.000000: 0.07852307)0.937000:0.01510878,(((Cec_fun:0.04067119,(Tri_sar:0.00462004,(Nas_gir:0.00126111,Na s_lon:0.00087461)0.877000:0.00251191)0.995000:0.01752929)1.000000:0.04366313,(Tri_bra:0.00461186,Tri_pre:0.01023626)1.000000:0.44067486)0.000000:0.01008020,(Ana_pse:0.07264534 I tried: sed -e 's/\(|\).*\(:\)/\1\2/g' myfile but it does not work. A: sed ':a;$!{N;ba};s/|[^:]*//g' myfile Explained: :a # Label to jump to $! { # On every line but the last one N # Append next line to pattern space ba # Jump to label } s/|[^:]*//g # Remove every pipe up to (and excluding) the next colon This slurps the complete file into the pattern space and then does one global substitution. Notice that this leaves the closing )) of the input file in place, unlike your expected output. 
For seds other than GNU sed, the command has to be pulled apart a bit so that the label is separate: sed -e ':a' -e '$!{N;ba;}' -e 's/|[^:]*//g' myfile
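If sed is not a hard requirement, the same substitution is simpler in a language that can read the whole file at once, since no label/loop slurping trick is needed. A sketch in Python (an assumption of this edit, not part of the original answer):

```python
import re

def strip_pipe_ids(text):
    # Remove every '|' together with the characters that follow it, up to
    # (but excluding) the next ':' -- the same as sed's s/|[^:]*//g.
    return re.sub(r"\|[^:]*", "", text)
```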
Q: Online index rebuild snapshot location and impact of filling index drive My understanding of a rebuild index online operation is that the index will have a snapshot taken and the rebuild is started on the snapshot index. My question is where does the snapshot index reside? Our databases have data, log and index file drives. My understanding is the snapshot should reside on the index file drive. Is this correct? In addition what happens if the index drive runs out of space? Anyone had experience of this? It wont be an issue but I would like to know! Version Info: SQL Server 2005 9.00.4266.00(x64) EE A: My understanding of a rebuild index online operation is that the index will have a snapshot taken and the rebuild is started on the snapshot index. Incorrect. An unfortunate overload of the term 'snapshot'... A snapshot read of the index is used, which means row-versioning see How Online Index Operations Work: A snapshot of the table is defined. That is, row versioning is used to provide transaction-level read consistency. With this correction, the rest of the question is moot. Row-versioning does not create a copy of the data until data is modified, and then the copy resides in tempdb. In other words, as you continue to modify the original index while OIB is running the row-versioning will have to preserve the pre-update image of the updated rows in tempdb. @Shanky is right about SORT_IN_TEMPDB, but that refers to the index builder, a different stage, unrelated to the original 'snapshot', and an option that is independent of the 'online' nature of the OIB. Obviously during the OIB you will slowly build up a copy of the data (the new index). This must be in the same location as the original index (including filegroups for partitions etc) as it has to be a valid replacement of the original index when the OIB is done.
Q: How do I input strings in Linux terminal that points to file path using subprocess.call command? I'm using Ubuntu and have been trying to do a simple automation that requires me to input the [name of website] and the [file path] onto a list of command lines. I'm using subprocess and call function. I tried something simpler first using the "ls" command. from subprocess import call text = raw_input("> ") ("ls", "%s") % (text) These returned as "buffsize must be an integer". I tried to found out what it was and apparently I had to pass the command as a list. So I tried doing it on the main thing im trying to code. from subprocess import call file_path = raw_input("> ") site_name = raw_input("> ") call("thug", -FZM -W "%s" -n "%s") % (site_name, file_path) These passed as an invalid syntax on the first "%s". Can anyone point me to the correct direction? A: You cannot use % on a tuple. ("ls", "%s") % text # Broken You probably mean ("ls", "%s" % text) But just "%s" % string is obviously going to return simply string, so there is no need to use formatting here. ("ls", text) This still does nothing useful; did you forget the call? You also cannot have unquoted strings in the argument to call. call("thug", -FZM -W "%s" -n "%s") % (site_name, file_path) # broken needs to have -FZM and -W quoted, and again, if you use format strings, the formatting needs to happen adjacent to the format string. call(["thug", "-FZM", "-W", site_name, "-n", file_path]) Notice also how the first argument to call() is either a proper list, or a long single string (in which case you need shell=True, which you want to avoid if you can). If you are writing new scripts, you most definitely should be thinking seriously about targetting Python 3 (in which case you want to pivot to subprocess.run() and input() instead of raw_input() too). Python 2 is already past its originally announced end-of-life date, though it was pushed back a few years because Py3k adoption was still slow a few years ago. 
It no longer is, and shouldn't be -- you want to be in Py3, that's where the future is.
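Putting the corrections together: build the argument vector as a list (one element per flag and per value) and hand it to call(). A sketch — the thug flags are copied from the question and not verified here:

```python
import subprocess

def build_thug_command(site_name, file_path):
    # Every flag and every value is its own list element; no shell quoting
    # and no % formatting is needed when call() receives a list.
    return ["thug", "-FZM", "-W", site_name, "-n", file_path]

# subprocess.call(build_thug_command(site_name, file_path))
```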
Q: VたにはVた VS VるにはVた As I read in this question What is the meaning of 「読むには読んだ」?, the form VるにはVた is used and correct. However, when speaking with some Japanese friends I noticed that they are more likely to use 読んだには読んだ instead of 読むには読んだ. Is that a spoken-language usage? By the way, they perfectly understand what I mean if I use 読むには読んだ.

A: 読んだには読んだ and 読むには読んだ feel exactly the same to me. I personally feel the latter form is a bit more common, but they're both perfectly natural. Note that the first verb never takes ます when you are speaking politely.

見るには見た。
見たには見た。
見るには見ました。
見たには見ました。
[×] 見ますには見ました。
[×] 見ましたには見ました。

Regarding formality, this には sounds a bit stiff. When talking with friends, 読むには読んだ is okay, but more casual wordings such as "読みはした" and "読むっちゃ読んだ" tend to be preferred.
Q: change subviews according to its superview frame I have a custom UIView class

CustomView.m

- (instancetype)initWithString:(NSString *)aString {
    self = [super init];
    if (!self) return nil;
    [self setAutoresizesSubviews:YES];
    UILabel *label = [[UILabel alloc] init];
    label.frame = ({
        CGRect frame = self.frame;
        frame.origin.x = CGRectGetMinX(frame) + 50;
        frame.origin.y = CGRectGetMinY(frame) + 20;
        frame.size.width = CGRectGetWidth(frame) - 100;
        frame.size.height = CGRectGetHeight(frame) - 40;
        frame;
    });
    [label setText:aString];
    [self addSubview:label];
    return self;
}

ViewController.m

-(void)addCustomView {
    CustomView *custom = [[CustomView alloc] initWithString:@"abc"];
    custom.frame = ({
        CGRect frame = self.view.frame;
        frame.origin.y = CGRectGetHeight(frame) - 100;
        frame.size.height = 100;
        frame;
    });
    [self.view addSubview:custom];
}

Here's the problem: I set the frame of my CustomView after alloc/init-ing it, which means its frame is CGRectZero before I change it. Therefore, the frame of the UILabel is zero, too. How can I change the frame of my subviews after their superview's frame has changed, as above? Thanks.

A: You simply need to implement -layoutSubviews in your CustomView, and do your subview layout/frame calculation stuff in there. -layoutSubviews will get called when your CustomView's frame changes.
Edit for further clarification: You could add a property for the label, or use a tag, so that you can access it it -layoutSubviews, something like this; - (void)layoutSubviews { self.label.frame = ({ CGRect frame = self.frame; frame.origin.x = CGRectGetMinX(frame) + 50; frame.origin.y = CGRectGetMinY(frame) + 20; frame.size.width = CGRectGetWidth(frame) - 100; frame.size.height = CGRectGetHeight(frame) - 40; frame; }); } Edit 2: Of course, what you probably want to do as well is change your initialiser to include the frame, like so: - (instancetype)initWithFrame:(CGRect)frame string:(NSString *)string { self = [super initWithFrame:frame]; if (self) { // Do some stuff } return self; } That way, your view's frame is set when you first create the label. Implementing -layoutSubviews is still a good idea though, for any future frame changes/orientation changes etc.
Q: the value of y is assigned how I am looking at this function:

function foo(x) {
    var tmp = 3;
    return function (y) {
        alert(x + y + (++tmp));
    }
}

var bar = foo(2); // bar is now a closure.
bar(10);

When I run it, the variables get the following values: x = 2, y = 10, tmp = 3. Now I see that in foo(2), x is passed as 2, so it's understandable that x gets the value of 2. But then bar(10) is assigning a value of 10 to y. How's that? I am confused about how the receiving function knows that 10 is the value for y assigned by bar(10).

A: foo(2) returns an anonymous function which accepts one parameter (y). As you're setting bar to be the return value of foo(2), bar becomes a reference to that anonymous function. So, when you call bar(10) you're calling the anonymous function foo returns, and so 10 is being set to the parameter y.
Q: Best way to convert only JSX in TSX and maintaining TS I have a bunch of TSX components written in Inferno (similar to React/Preact). I'm in need of just .ts versions with the JSX aspects converted. The environment I'm using it in only supports TypeScript and the Inferno JSX transformer is only written for Babel. I believe I can do this with Babel but not sure which flags to add. Here's an example of my a script: import { Component, linkEvent } from 'inferno'; import './index.scss'; interface HeaderProps { name: string, address: string } export default class Header extends Component { render(props:HeaderProps) { return ( <header class="wrap"> <img class="logo" src="logo.svg" /> <h1>{ props.name }</h1> <img src="arrow.svg" /> </header> ); } } After I compile this script any of the TS such as the interface should remain, however the JSX should be converted into createVNode() functions. The babel plug-in to do this is: https://github.com/infernojs/babel-plugin-inferno Here's my current .babelrc: { "compact": false, "presets": [ [ "@babel/preset-env", { "loose": true, "targets": { "browsers": ["ie >= 11", "safari > 10"] } } ], [ "@babel/typescript", { "isTSX": true, "allExtensions": true } ] ], "plugins": [ ["babel-plugin-inferno", { "imports": true }], "@babel/plugin-transform-runtime", [ "@babel/plugin-proposal-class-properties", { "loose": true } ] ] } I'm including @babel/typescript in the rc file because it needs to be able to read TS without complaining about syntax. However, the output should be retained. If this is not the best approach, can you make a suggestion on a more effective way of converting this? ps. I can't use the TS JSX transformer, it's not compatible with Inferno. 
Here's my tsconfig: { "compilerOptions": { "pretty": true, "target": "es5", "module": "esnext", "allowSyntheticDefaultImports": true, "preserveConstEnums": true, "sourceMap": true, "moduleResolution": "node", "lib": ["es2017", "dom"], "types": [ "inferno" ], "jsx": "preserve", "noUnusedLocals": true, "baseUrl": "./src", "noEmit": true, "skipLibCheck": true, "noUnusedParameters": true, "noImplicitReturns": true, "noFallthroughCasesInSwitch": true, }, "include": [ "src/**/*", "node_modules/inferno/dist/index.d.ts" ] } A: This is the .babelrc you need: { "plugins": [ ["babel-plugin-inferno", { "imports": true }], ["@babel/plugin-syntax-typescript", { "isTSX": true }], ] } Note, don't use tsc, use babel only. Unfold to see test result: // ============== input ============== const x: number = 1; enum Enum { one, two } interface Foobar { key: string; } const bar = () => <div>bar</div> const zoo = () => <><span>yolo</span></> // ============== output ============== import { createVNode, createFragment } from "inferno"; const x: number = 1; enum Enum { one, two, } interface Foobar { key: string; } const bar = () => createVNode(1, "div", null, "bar", 16); const zoo = () => createFragment([createVNode(1, "span", null, "yolo", 16)], 4);
Q: Efficient way to search in a list of objects in getView of Adapter Android I have a List of custom objects. These objects hold reservation information for the past 2 years (one for each day). It is a really big list with about 730 (365+365) items. I also have a grid view with day cells (like a calendar) and I want to draw different things in each day if they meet certain conditions. The problem is that for each cell in getView I have to loop through this large list.

@Override
public View getView(int position, View convertView, ViewGroup parent) {
    ...
    String date = dateList.get(position).getDate();
    for(Reservation item: reallyBigList){
        if(item.getDate().equals(date)){
            ...
            break;
        }
    }
    ...
}

This approach makes my list very laggy. I am looking for a more efficient way to accomplish this. One solution I can think of is to split this large list. But I want to know if there is any other way.

A: You can have a Map based on some unique attribute. Let's say you have date in this case.
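The suggestion in the answer — index the reservations by date once, then do an O(1) lookup per cell instead of an O(n) scan — is language-independent. A sketch of the idea in Python (the question itself is Java/Android, so this is only an illustration):

```python
def index_by_date(reservations):
    """Build the date -> reservation map once, outside getView()."""
    by_date = {}
    for item in reservations:
        by_date[item["date"]] = item  # assumes one reservation per day
    return by_date

# Inside getView(position), the per-cell work then becomes a single lookup:
#   item = by_date.get(dateList[position].getDate())
```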
Q: Does a Roman Catholic priest have to intellectually understand a confession in order to grant absolution? The concept of confessing a sin, seams to imply that the person has to subjectively believe they have sinned—otherwise how could they confess? According to Roman Catholic doctrine, does the priest have to understand the nature of the sin, or merely understand that the penitent believes they have sinned? Can a priest hear a confession in a foreign language? I ask this because this seems like the limit case for understanding what the person is saying in the confession booth. What if the sin seems like gibberish to the priest? A: There are several aspects to be considered here. The first situation is the general obligation of confessing grave sins. This is addressed in Canon Law, No. 960: "Individual and integral confession and absolution constitute the only ordinary means by which a member of the faithful conscious of grave sin is reconciled with God and the Church. Only physical or moral impossibility excuses from confession of this type; in such a case reconciliation can be obtained by other means." The lack of a common language between penitent and confessor would enter into the category of a "physical or moral impossibility" which would excuse either the obligation of confession or its integrity, and allow for reconciliation to be obtained by other means. In the present case we would be dealing with the confessor making a prudential judgment that the penitent is excused in virtue of a physical and moral impossibility and presuming the latter's sincerity in manifesting those sins confessed in his native language. Thus in this particular situation the sacrament would be valid. However, canon law does foresee the possibility of confessing by using an interpreter, although the penitent may not be obliged to do so. 
To wit: "Canon 990: No one is prohibited from confessing through an interpreter as long as abuses and scandals are avoided and without prejudice to the prescript of can. 983, §2." Canon 983, §2, requires absolute secrecy on the part of the interpreter analogous to the priest's sacramental seal: "The interpreter, if there is one, and all others who in any way have knowledge of sins from confession are also obliged to observe secrecy." The violation of the secrecy of confession by an interpreter may be punished by the imposition of a canonical penalty not excluding excommunication (see Canon 1388, §2). An interpreter need have only a sufficient command of the two languages involved and requires no official certificates of competence. You can read more here.
Q: MS Word (2007) - increased file size after removing content MS Word (2007 in my case, but I had that experience also with 2010, didn't use 2013 yet) surprises me with the file size it uses - I have a standard .docx of 96 kB, after changing one character (a 7 to a 6) and saving again, it had 101 kB. I had in mind that Word sometimes saves additional information, so I searched a bit and found that in the Office button menu (the round button in the upper left corner) there is Prepare and then Inspect Document. I chose to have the Properties removed and also Header and Footers. Then, after saving the file size was 104 kB. So, what is MS Word doing when saving documents after small changes or deleting content, that file size can increase afterwards. And how to get rid of this behaviour. A: Word file sizes can increase if there's "dross" in the file: sometimes, a document becomes damaged and left-overs accumulate. If the damage is not critical, Word will work around it, but the "bad" information often remains in the file. Under some circumstances, Word encounters the problem every time it saves, which will cause file size to increase. It can help to save the document to another file format, such as RTF, HTML or an earlier version of Word, then opening that file in Word. Another thing you can try is to copy/paste the content to a new document WITHOUT any section breaks and WITHOUT the last paragraph mark (because "dross" often accumulates in the non-visible section information). But these attempts should always be done on a COPY of the document because information can get lost in the dual conversion process.
Q: Is it possible for a sinatra app to use 2 databases? We have an API in Sinatra that serves both a staging environment and a production environment. The API should talk to the staging database if the request comes from a staging server. It should talk to the production database if the request comes from a production server. All apps are deployed on Heroku. We can use env['HTTP_HOST'] to find out whether the request is coming from staging or production, and then set the db_url. However, the problem is the ActiveRecord init code that runs to connect to the db:

db = URI.parse db_url
ActiveRecord::Base.establish_connection(
  :adapter  => db.scheme == 'postgres' ? 'postgresql' : db.scheme,
  :host     => db.host,
  :port     => db.port,
  :username => db.user,
  :password => db.password,
  :database => db.path[1..-1],
  :encoding => 'utf8'
)

Does it make sense to run this code before each request? That would probably be slow... Another solution is to run two instances of the API. But then we need to deploy the same code twice... Is there a better way to do this?

A: Standard practice and common sense says that you should keep your production app separate from your staging app. I'm not sure what you have against deploying two different apps, but that's the only way to ensure problems in staging don't trip up your production app.
Q: VS 2012 Publish Website dialog box I am attempting to publish a website using VS 2012 within the company, however, when I select "Publish Website" from the solution explorer, I get this dialog box What I am looking for, is this dialog box: Any ideas how to go about retrieving the desired dialog box? A: Applying VS 2012 Update 4 resolved the issue.
Q: How to open a new file from the command line with Inkscape I can't find how to open a new svg document with Inkscape, simply from the terminal. If the document specified as argument (or via -f) does not exist, there is just an error saying it doesn't exist, and then it opens an unsaved new document. I tried using the verb FileSaveAs like this for example: inkscape --verb FileSaveAs mynewfile.svg but FileSaveAs does not take arguments, it just opens the graphical window for this action. I might be persnickety, but I would find it more convenient to be able to create a new file directly from the command line instead of having to launch this window and click to the right directory... A: To my surprise, there seems to be no option in Inkscape to produce a new file from cli! How to create the option? As always, if it doesn't exist, it can be made: Open Inkscape, create a new file drawing.svg Save this file anywhere Copy the code below into an empty file, save it as newinkscape (no extension) in ~/bin. Create the directory if it doesn't exist yet. #!/bin/bash sample="/path/to/drawing.svg" dr=$1 cp "$sample" "$dr" inkscape "$dr" Make the script executable Replace in the line: sample="/path/to/drawing.svg" The path by the path to your sample file. Log out and back in, now: newinkscape /path/to/newfile.svg will open a new empty Inkscape file, saved in the location you used in the command.
Q: Showing that $38^n+31$ is prime I was reading a question in one of the previous pages, and in searching for a proof I stumbled across what seems like a contradiction. All I want is for someone to provide the missing link in my argument.

The question Find the least $n$ for which $38^n+31$ is prime.

My attempt at a proof If $38^n+31$ is composite, then there exists at least one prime $p$ such that $p|38^n+31$. Now $\gcd(p,38)=1$; otherwise, $d=\gcd(p,38)=2$ or $19$ and $d|31$, a contradiction. Hence, by Fermat's Little Theorem, $38^{p-1} \equiv 1 \pmod p$, and for all positive integers $r$, $38^{r(p-1)} \equiv 1 \pmod p$. Hence, $38^{r(p-1)}+31 \equiv 32 \pmod p$, but $38^{r(p-1)}+31 \equiv 0 \pmod p$, because it's composite. It follows that $32 \equiv 0 \pmod p$, i.e. $p|32$, a contradiction; hence the above expression cannot be composite (but inputting real values for $n$ shows that it is indeed composite).

A: It does not follow that $38^{rp-1}\equiv 1\pmod p$. In fact $38^{rp-1}\equiv -1\pmod p$. And how are you trying to get from $38^{rp-1}+31$ to $38^n+31$ anyway?
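The asker's closing remark — that plugging in actual values of $n$ yields composites — is easy to verify. A quick sketch (not part of the original post) that hunts for a small prime factor by trial division; note that every odd $n$ gives a multiple of $3$, since $38 \equiv 2 \pmod 3$ and so $38^n + 31 \equiv 2 + 1 \equiv 0 \pmod 3$ for odd $n$:

```python
def small_factor(m, bound=10000):
    """Return a prime factor of m not exceeding bound, or None if none is found."""
    if m % 2 == 0:
        return 2
    f = 3
    while f <= bound and f * f <= m:
        if m % f == 0:
            return f
        f += 2  # an even f > 2 cannot be the smallest factor of an odd m
    return None
```

For $n = 1, \dots, 7$ this finds the factors 3, 5, 3, 7, 3, 5, 3 — none of these values of $38^n+31$ is prime.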
Q: no return for file_get_contents()? Why am I not getting a return from file_get_contents($myurl), but I do get output from wget?

updated: I used curl to get the return header and result, and discovered that the response is in the header instead of the body:

HTTP/1.1 200 Document follows
Transfer-Encoding: chunked
Content-type: text/plain
Server: XMS (724Solutions HTA XSAM_30_M2_B020 20070803.172831)
Date: Fri, 21 May 2010 10:48:31 GMT
Accept-Ranges: bytes

HTTP/1.1 404 ChargedPartyNotAvailable
Transfer-Encoding: chunked
Content-type: text/plain
Server: XMS (724Solutions HTA XSAM_30_M2_B020 20070803.172831)
Date: Fri, 21 May 2010 10:34:13 GMT
Accept-Ranges: bytes

How can I extract out only "200 Document follows" and "404 ChargedPartyNotAvailable"?

A: Do you have allow_url_fopen enabled in your php configuration? However I would expect that you would get a warning generated if not - do you have errors/warnings displayed? You could add temporarily at the top of your script:

error_reporting(E_ALL);
ini_set('display_errors', true);

and then you might see why file_get_contents() doesn't work.

Edit You might just be able to use get_headers() if you're only interested in headers.

A: You can use the PHP Curl package to retrieve the content of a URL:

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $myUrl);
$result = curl_exec($ch);
curl_close($ch);

Or if you want to use only file_get_contents, check if you configured PHP.ini correctly: http://www.php.net/manual/en/filesystem.configuration.php#ini.allow-url-fopen

As mentioned here (http://php.net/manual/en/function.file-get-contents.php): A URL can be used as a filename with this function if the fopen wrappers have been enabled. See fopen() for more details on how to specify the filename.
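The last part of the question — pulling just "200 Document follows" and "404 ChargedPartyNotAvailable" out of that header dump — is plain pattern matching on the status lines. A sketch in Python (PHP's preg_match_all with the same pattern would do the equivalent):

```python
import re

def status_lines(headers):
    """Extract 'code reason' from every HTTP status line in a header dump."""
    return re.findall(r"^HTTP/\d\.\d (\d{3} .+?)\r?$", headers, re.MULTILINE)
```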
Q: How to have global footer in separate views using angular-ui-router v0.2.5 (allows modules) Sorry if the title of this is confusing. I'm converting a template I purchased into an angular.js app. I want to use different modules to organize the app. I'm also using version 0.2.5 of angular-ui-router which allows routing with separate modules. All is well except the template I'm using looks like this: <div>Global Nav Bar</div> <div>Content that changes with different states right below Nav Bar</div> <div class="wrapsContentAndPushesToBottom"> <div>Content that changes with different states at bottom of page thanks to parent div's class</div> <div>Global Footer also on bottom of page due to parent div's class</div> </div> I'm having a hard time getting that global footer to work because of that parent wrapping div. Can someone help me get this to work? UPDATE: I can't get suggested ng-include to work with my plunkr example: http://plnkr.co/edit/dgNkHX I also can't it working using a named view for the footer: http://plnkr.co/edit/BO8NDO A: I think you're looking for ng-include. http://docs.angularjs.org/api/ng.directive:ngInclude That will enable you to extract that global footer out to a separate file and just include it in your template. <div ng-include src="'globalFooter.tpl.html'"></div>
Q: How to use system environment variable as part of @PropertySource value? I want to launch my program with java ... -Denv=prod ... and have

@PropertySource("classpath:/settings/$idontknowwhat$/database.properties")

read the properties file /settings/prod/database.properties. I have tried using #{systemProperties['env']} but it is not resolved, with exception: Could not open ServletContext resource ['classpath:/settings/#{systemProperties['env']}/database.properties]

A: Found it, I can simply use @PropertySource("classpath:/settings/${env}/database.properties")
Q: Create fake $route inside data I'm trying to use a test-suite and inside my component, I'm using Vue-Router, thus I have $route object inside data() of my component. For my test suite, I want to fake $route so I can access a fake value inside a component while testing rather than setting up Vue-Router. When I try to use data() { return { $route: { fullPath: '/' }, test: 'test' } } I can't access $route using this.$route however I can access this.test. I think the $ sign is causing this. Is there a way to fake $route and be able to access it as this.$route? A: See https://vuejs.org/v2/cookbook/adding-instance-properties.html For example: Vue.prototype.$route = { fullPath: '/' } var app = new Vue({ el: '#app', data: { message: 'Hello Vue!' } }) <script src="https://cdn.jsdelivr.net/npm/vue/dist/vue.js"></script> <div id="app"> {{message}} Faker Full Path: {{$route.fullPath}} </div>
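What the answer relies on is ordinary prototype lookup: a property assigned to the constructor's prototype is visible on every instance, which is why this.$route works while a data() entry named $route does not (Vue does not proxy data keys that start with $ or _ onto the instance). A framework-free sketch of the mechanism, using a stand-in constructor rather than Vue itself:

```javascript
// Stand-in for a Vue component constructor (Vue itself not required here)
function Component() {}

// Same idea as Vue.prototype.$route = {...}: every instance now sees it
Component.prototype.$route = { fullPath: '/' };

const vm = new Component();
console.log(vm.$route.fullPath); // '/'
```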
Q: What does the dashed bounds mean when plotting a contour plot with R GAM? At the moment I'm trying to interpret the green and red dashed lines in a contour plot when visualizing a generalized additive model (GAM) with R. These two lines seem to be something like confidence bands, but I'm not sure how to interpret these dashed lines in a contour plot. Does anybody have experiences with contour plots using R, specifically when fitting GAM? A: I'm guessing that you mean the red and green contours in the last example figure produced by library(mgcv) example(plot.gam) which looks likes this: The generalized additive model produces a fitted surface defined by the black contours. The help file (from ?plot.gam) says: ...surfaces at +1 and -1 standard errors are contoured and overlayed on the contour plot for the estimate. You have an estimated SE at each position (x1,x2); adding one SE to the fitted surface, at each point (x1,x2), gives you another surface, which is depicted using the green dotted contours. Subtracting one SE from the fitted surface gives you another surface, which is depicted using the red dashed curves.
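The three surfaces can be made concrete with a toy grid: at every (x1, x2) position the model reports a fitted value and a standard error, and the plot simply contours fit, fit + se, and fit - se as three separate surfaces. (The numbers below are illustrative only, not taken from the mgcv example.)

```python
# One fitted value and one standard error per grid point (x1, x2)
fit = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 1.5, (1, 1): 2.5}
se  = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

upper = {k: fit[k] + se[k] for k in fit}  # contoured with green dotted lines
lower = {k: fit[k] - se[k] for k in fit}  # contoured with red dashed lines

print(upper[(0, 1)])  # 2.3
```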
Q: sendRequest removes passed function
Chrome removes the function that I'm trying to pass through sendRequest.

function sendQuery() {
  var currentQuery = document.getElementById("queries").value;
  var request = {
    option: "random value",
    command: function() { alert("fire!"); }
  };
  chrome.tabs.getSelected(null, function(tab) {
    chrome.tabs.sendRequest(tab.id, request)
  });
}

As you can see, request contains the function command, but when I 'dump' the request that was received by the content script, all I get is this:

request Object
  option: "random value"
  __proto__: Object

I need to pass the command as well, not just the option. Thanks in advance for helping me to do so.

Edit: Edited according to Pointy's suggestion, but the problem remains.

A: The second parameter of chrome.tabs.sendRequest is JSON-serialized for transportation. The one and only way to pass a function is via the third parameter. This function is received as the third parameter by the chrome.extension.onRequest event listener:

function sendQuery() {
  var currentQuery = document.getElementById("queries").value;
  var request = { option: "random value" };
  var command = function() { alert("fire!"); };
  chrome.tabs.getSelected(null, function(tab) {
    chrome.tabs.sendRequest(tab.id, request, command);
  });
}
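The "why" can be demonstrated in a few lines of plain JavaScript: a JSON round-trip, which is what the message passing does to the payload, has no representation for functions and silently drops them.

```javascript
const request = {
  option: 'random value',
  command: function () { /* would alert('fire!') in the page */ }
};

// What message passing effectively does to the payload
const transported = JSON.parse(JSON.stringify(request));

console.log('option' in transported);  // true
console.log('command' in transported); // false -- silently dropped
```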
Q: What is the fastest way to copy my array?
I'm doing some Wave file handling and I have them read from disk into an array of bytes. I want to quickly copy portions from this byte array into another buffer for intermediate processing. Currently I use something like this:

float[] fin;
byte[] buf;
//fill buf code omitted
for(int i=offset; i < size; i++){
    fin[i-offset] = (float) buf[i];
}

I feel that this is a slow method, because there is as much computation going on in the for loop conditional and increment as there is over in the actual body. If there were a block copy available in C#, or some other way I could implement a block copy, that would be great. Maybe it isn't too slow, but it sure looks like a lot of work to move some data over. Here "size" is between 2^10 and 2^14. I am then handing "fin" off to an FFT library, so this is by no means the slowest part of the code; maybe I'm barking up the wrong tree.

EDIT UPDATE: I realize that micro-optimizations are not where someone should spend their time, and I realize that profiling is a better way to achieve speedups overall, but I know that this code is in a 'hot path' and must be completed in under a third of a second on varying end user architectures to minimize our hardware system requirements. Even though I know that the following FFT code will be much more time consuming, I am looking for speedups where I can get them. Array.Copy sure looks nice, I didn't know about that before, and I consider this Q&A a success already!

A: There is also:

Array.Copy
Array.CopyTo

but whether these will be faster will require profiling. But be warned about focusing on micro-optimisations to the extent that you miss the big picture; on modern PCs the effect of multi-level memory caching is likely to be greater than one approach or another to the copy.

Edit: Quick check in Reflector: both of the above methods boil down to a common native implementation (good). Note the docs for Array.Copy cover valid type conversions; a value -> value widening conversion like byte to float should be OK.

A: Have a look at Array.Copy, it should be faster.

A: Since you are converting from byte to float you are not going to get any significant speedup. No Array.Copy or variation of memcpy can cope with that. The only possible gain would be to 'poke' the byte value into a float. I don't know enough (about the implementation of float) to know if it will work and I honestly don't want to know either.
Q: PostgreSQL partial unique index and upsert I'm trying to do an upsert to a table that has partial unique indexes create table test ( p text not null, q text, r text, txt text, unique(p,q,r) ); create unique index test_p_idx on test(p) where q is null and r is null; create unique index test_pq_idx on test(p, q) where r IS NULL; create unique index test_pr_idx on test(p, r) where q is NULL; In plain terms, p is not null and only one of q or r can be null. Duplicate inserts throw constraint violations as expected insert into test(p,q,r,txt) values ('p',null,null,'a'); -- violates test_p_idx insert into test(p,q,r,txt) values ('p','q',null,'b'); -- violates test_pq_idx insert into test(p,q,r,txt) values ('p',null, 'r','c'); -- violates test_pr_idx However, when I'm trying to use the unique constraint for an upsert insert into test as u (p,q,r,txt) values ('p',null,'r','d') on conflict (p, q, r) do update set txt = excluded.txt it still throws the constraint violation ERROR: duplicate key value violates unique constraint "test_pr_idx" DETAIL: Key (p, r)=(p, r) already exists. But I'd expect the on conflict clause to catch it and do the update. What am I doing wrong? Should I be using an index_predicate? index_predicate Used to allow inference of partial unique indexes. Any indexes that satisfy the predicate (which need not actually be partial indexes) can be inferred. Follows CREATE INDEX format. https://www.postgresql.org/docs/9.5/static/sql-insert.html A: I don't think it's possible to use multiple partial indexes as a conflict target. You should try to achieve the desired behaviour using a single index. 
The only way I can see is to use a unique index on expressions: drop table if exists test; create table test ( p text not null, q text, r text, txt text ); create unique index test_unique_idx on test (p, coalesce(q, ''), coalesce(r, '')); Now all three tests (executed twice) violate the same index: insert into test(p,q,r,txt) values ('p',null,null,'a'); -- violates test_unique_idx insert into test(p,q,r,txt) values ('p','q',null,'b'); -- violates test_unique_idx insert into test(p,q,r,txt) values ('p',null, 'r','c'); -- violates test_unique_idx In the insert command you should pass the expressions used in the index definition: insert into test as u (p,q,r,txt) values ('p',null,'r','d') on conflict (p, coalesce(q, ''), coalesce(r, '')) do update set txt = excluded.txt;
Q: How To Get Value From another Activity? I want to Set value in a Resigter Model. I want to create a SignUp Activity in four Step. I want to know how to set value in Register Model. And I have to Get that value from anywhere. Here is my code All Values are placed in one Activity. And I want to make Four Step public void UploadData(final String link) { Response = ""; try { HttpResponse response; Log.d("pre_link", "pre_link = " + link); final HttpClient httpclient = new DefaultHttpClient(); final HttpPost httppost = new HttpPost(link); /*httppost.addHeader("Authorization", "Basic " + Base64.encodeToString(("username" + ":" + "password").getBytes(), Base64.NO_WRAP));*/ MultipartEntity mpEntity = new MultipartEntity( HttpMultipartMode.BROWSER_COMPATIBLE); String FullName = fullName.getText().toString(); String UserName = userName.getText().toString(); String DateOfBirth = dob.getText().toString(); String Age = age.getText().toString(); String Sex = gender.getText().toString(); String InterestedIn = interestIn.getText().toString(); String ToMeet = "both";//toMeet.getText().toString(); String Email = email.getText().toString(); String Password = pwd.getText().toString(); String Lat = String.valueOf(latitude); String Long = String.valueOf(longitude); mpEntity.addPart("fullName", new StringBody(FullName)); mpEntity.addPart("userName", new StringBody(UserName)); mpEntity.addPart("dob", new StringBody(DateOfBirth)); mpEntity.addPart("age", new StringBody(Age)); mpEntity.addPart("gender", new StringBody(Sex)); mpEntity.addPart("interestIn", new StringBody(InterestedIn)); mpEntity.addPart("toMeet", new StringBody(ToMeet)); mpEntity.addPart("email", new StringBody(Email)); mpEntity.addPart("pwd", new StringBody(Password)); mpEntity.addPart("latitude", new StringBody(Lat)); mpEntity.addPart("longitude", new StringBody(Long)); if (bab1 != null) { mpEntity.addPart("uploaded_file", bab1); } httppost.setEntity(mpEntity); createCancelProgressDialog("Uploading Image", "Please 
wait...", "Cancel"); new Thread() { public void run() { try { HttpResponse response; Message msg = new Message(); msg.what = 1; try { response = httpclient.execute(httppost); HttpEntity resEntity = response.getEntity(); if (resEntity != null) { Response = EntityUtils.toString(resEntity) .trim(); Log.d("Response", "Response = " + Response); Message msg2 = new Message(); msg2.what = 1; UpdateHandler.sendMessage(msg2); } if (resEntity != null) { resEntity.consumeContent(); } } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } } catch (Exception e) { Log.e("tag", e.getMessage()); } } }.start(); } catch (UnsupportedEncodingException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } } public Handler UpdateHandler = new Handler() { public void handleMessage(Message msg) { switch (msg.what) { case 1: try { cancelDialog.dismiss(); cancelDialog.hide(); Log.d("Response", "Response = " + Response); Toast.makeText(SignUp.this,"you are Success", Toast.LENGTH_SHORT).show(); RegisterModel register =new RegisterModel(); //register.setfullName(); Intent i = new Intent(getApplicationContext(),SignupSuccessfully.class); // i.putExtra("pwd",pwsd); startActivity(i); finish(); //flag=1; //String read_data = ReadDataFromAppCache(MainActivity.this, "file_name"); //StoreDataToAppCache(MainActivity.this, "file data", "file_name"); } catch (Exception e) { // TODO: handle exception } super.handleMessage(msg); } } }; ProgressDialog cancelDialog = null; private void createCancelProgressDialog(String title, String message, String buttonText) { cancelDialog = new ProgressDialog(SignUp.this); cancelDialog.setTitle(title); cancelDialog.setMessage(message); cancelDialog.setCanceledOnTouchOutside(false); // cancelDialog2.setIcon(R.drawable.icon); /*cancelDialog.setButton(buttonText, new DialogInterface.OnClickListener() { public void onClick(DialogInterface dialog, int which) { cancelDialog.dismiss(); cancelDialog.hide(); return; } 
});*/ cancelDialog.show(); } public Bitmap setBitmap(String _path) { BitmapFactory.Options options = new BitmapFactory.Options(); options.inTempStorage = new byte[16*1024]; options.inPurgeable = true; //options.inJustDecodeBounds = true; Bitmap bitmap = null; ExifInterface exif; try { bitmap = BitmapFactory.decodeFile(selectedImagePath, options); exif = new ExifInterface(_path); int exifOrientation = exif .getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL); int rotate = 0; switch (exifOrientation) { case ExifInterface.ORIENTATION_ROTATE_90: rotate = 90; break; case ExifInterface.ORIENTATION_ROTATE_180: rotate = 180; break; case ExifInterface.ORIENTATION_ROTATE_270: rotate = 270; break; } //Log.d("image_rotation", "image_rotation = " + rotate); if (rotate != 0) { int w = bitmap.getWidth(); int h = bitmap.getHeight(); // Setting pre rotate Matrix mtx = new Matrix(); mtx.preRotate(rotate); // Rotating Bitmap & convert to ARGB_8888, required by tess bitmap = Bitmap.createBitmap(bitmap, 0, 0, w, h, mtx, false); bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true); } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } return bitmap; } public String ReadDataFromAppCache(Context context, String file_name) { String output = ""; Log.d("file name", "file name = " + file_name); try { int ch; File f = new File(context.getFilesDir() + "/" + file_name); //Log.d("file path", "" + f.getAbsolutePath()); StringBuffer strContent = new StringBuffer(""); FileInputStream fin = null; try { fin = new FileInputStream(f); while ((ch = fin.read()) != -1) strContent.append((char) ch); fin.close(); } catch (FileNotFoundException e) { //Log.d("File " + f.getAbsolutePath(), " could not be found on filesystem"); output = "null"; return output; } catch (IOException ioe) { //Log.d("Exception while reading the file", "Exception while reading the file" + ioe); } try { output = URLDecoder.decode(strContent.toString(), "UTF-8"); } catch 
(UnsupportedEncodingException uee) { } //Log.d("This is xml", "This is xml" + strContent); //output = strContent.toString(); } catch (Exception e) { // TODO: handle exception } return output; } public void StoreDataToAppCache(Context con, String fileData, String file_name) { try { String encodedValue = ""; try { encodedValue = URLEncoder.encode(fileData, "UTF-8"); } catch (UnsupportedEncodingException uee) { } //encodedValue = sBody; //Log.d("store text", "store_text = " + encodedValue); File f = new File(con.getFilesDir() + "/" + file_name); FileWriter writer = new FileWriter(f); writer.append(encodedValue); writer.flush(); writer.close(); Log.d("save complete", "save complete"); } catch (IOException e) { e.printStackTrace(); } } } Can you Please tell me how to set these Value in Registers, then I will create Four Step Of Signup. A: You can do this using one of the following methods Use Shared Preferences to save your data and you can access it from anywhere in the application (recommended). If you have more data to be stored, I recommend using database. You can pass the values from one activity to another using intent.putExtra() method. But you will have to do this for all new Activities Another simple method is to make your variables public static and access the data using static reference from any where in your project. (not recommended). Hope this will help you. :)
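A plain-Java sketch of the static-holder idea (method 4 in the list above) applied to the RegisterModel, just to make the flow across signup steps concrete. No Android classes are used, and the field names are illustrative, not the asker's actual RegisterModel API. As noted above this approach is not recommended on Android, since the process can be killed between steps; SharedPreferences is usually the safer choice.

```java
// A shared model that each signup step writes into and later steps read from.
class RegisterModel {
    private static final RegisterModel INSTANCE = new RegisterModel();
    static RegisterModel get() { return INSTANCE; }

    String fullName;  // illustrative fields only
    String email;
}

public class Main {
    public static void main(String[] args) {
        RegisterModel.get().fullName = "Jon";          // written in step 1 (SignUp)
        RegisterModel.get().email = "jon@example.com"; // written in step 2

        // Any later step/activity reads the very same instance
        System.out.println(RegisterModel.get().fullName); // Jon
    }
}
```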
Q: How to get wasm-ld to honor the wasm-import-module attribute
Fixed: I set the DLL storage class of the function I was importing to dllimport, and that allowed wasm-ld to emit the correct import namespace.

I am building a compiler with the LLVMSharp* library, and it emits LLVM .bc module files targeted to wasm32-unknown-unknown. I am trying to import functions into it from the WASI interface by tagging those function values with the { "wasm-import-module"="wasi_unstable" } attribute. (This should be equivalent to what clang does with __attribute__((import_module(<module_name>))); see here). However, when I pass the resulting .bc files to wasm-ld (the Windows 64-bit 9.0.0 installed version), the resulting .wasm module still imports those functions from "env", which doesn't work. Is there some option to pass to wasm-ld to get it to handle wasm-import-module correctly, or do I need to go another route?

*Specifically, I'm using LLVMSharp 5.0.0, which is the latest stable version. It's possible that LLVMSharp 8.0.0 may support building .wasm modules, but there isn't a release NuGet for it, and the beta NuGet has some problems that prevent me from upgrading. That's why I'm going the wasm-ld route.

A: wasm-ld should support this attribute. The first thing to check is your object file. You can use llvm-readobj --syms to dump the symbols in your object file. You should see ImportModule: foo on your symbol, where foo is the module name you specified in your attribute. It looks like the support for this landed in wasm-ld in: https://reviews.llvm.org/D45796 I believe this change landed just before llvm 8.0, so you will need llvm 8.0 or above.
Q: Managing local forks of Maven dependencies So I have a dependency, actually two dependencies to which I'd like to make changes either right now like fixing JBSEAM-3424 or potentially in the future. The coding is not an issue - I'm capable of making the change - and I'm not seeking to fork the community project, just to have a local version as recommended by Will Hartung to get some work done. My concern is that issues of process will come up and bite me further down the line. So SO what can I do to ensure I manage this properly. What best practices are there? Some more specific sub-questions: Should I change the artifact names? How choose group artifact and version names? Should I import the whole source tree or be selective? What if I can't get the build system working in full - should I scale it down or try to keep it close to the original? A: Should I change the artifact names? How choose group artifact and version names? Keep the groupId and artifactId of the module(s) you change the same, but use a qualifier on the version to ensure that it is obvious it is a non-standard version, for example 1.0.0-simon. This is pretty common practice. Should I import the whole source tree or be selective? Update based on your comment: Personally I'd only add the artifacts I've changed to my local source repository. If you change another artifact later then add it to your SCM then. What if I can't get the build system working in full Worry about that when it happens. If the project is built with Maven it should be straightforward for you to build only the artifacts you need. If it uses an uber-ant build which you can't get working with your changes, then consider paring the build down.
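To make the version-qualifier advice concrete, a hypothetical pom.xml fragment — the coordinates here are made up for illustration; keep whatever groupId/artifactId the upstream module really uses:

```xml
<dependency>
  <groupId>org.jboss.seam</groupId>   <!-- unchanged from upstream -->
  <artifactId>jboss-seam</artifactId> <!-- unchanged from upstream -->
  <version>2.1.2-simon</version>      <!-- qualifier marks your local fork -->
</dependency>
```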
Q: Remove and Re-Add Object in matplotlib | Toggle Object appearance matplotlib Using iPython and matplotlib, I want to be able to add an annotation (or any object), remove it from the graph, and then re-add it. Essentially I want to toggle the appearance of the object in the graph. Here is how I am adding and removing this object. The object still exists after the remove(). But I can't figure out how to make it reappear in the graph. an = ax.annotate('TEST', xy=(x, y), xytext=(x + 15, y), arrowprops=dict(facecolor='#404040')) draw() an.remove() A: You want set_visible (doc) an = gca().annotate('TEST', xy=(.1, .1), xytext=(.1 + 15,.1), arrowprops=dict(facecolor='#404040')) gca().set_xlim([0, 30]) draw() plt.pause(5) an.set_visible(False) draw() plt.pause(5) an.set_visible(True) draw()
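A round-trip check of the toggle, runnable headlessly with the Agg backend (assumes matplotlib is installed). The key point is that set_visible(False) leaves the artist attached to the axes, so it can be shown again without re-creating it:

```python
import matplotlib
matplotlib.use("Agg")  # no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
an = ax.annotate("TEST", xy=(0.1, 0.1), xytext=(0.6, 0.6),
                 arrowprops=dict(facecolor="#404040"))

an.set_visible(False)      # hide: artist stays attached to the axes
hidden = an.get_visible()

an.set_visible(True)       # show again -- no need to re-create it
shown = an.get_visible()

print(hidden, shown)
```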
Q: cacheGroups discards my groups and only creates first two groups I want to split my bundle into chunks according to below scheme. However, Only first 2 chunk group is considered and my editor group inserted to my application main js (app) instead of separate chunk. Expected Result: "/packs/js/runtime~app.js", "/packs/js/vendors.chunk.js", "/packs/js/app-commons.chunk.js", "/packs/js/editors.chunk.js", // Editors are in this. "/packs/js/app.chunk.js" Actual Result: "/packs/js/runtime~app.js", "/packs/js/vendors.chunk.js", "/packs/js/app-commons.chunk.js", "/packs/js/app.chunk.js" // instead, editors inserted to this... It seems after app-commons, it just discard the rest. But, If I remove app_commons then editors get created as a chunk. It seems after second group it just doesn't respect on my rules. Code: splitChunks(config => Object.assign({}, config, { optimization: { splitChunks: { cacheGroups: { commons: { test(mod /* , chunk */) { if (!mod.context.includes('node_modules')) { return false } if ( ['editor', 'draft-js', 'highlight'].some(str => mod.context.includes(str), ) ) { return false } return true }, name: 'vendors', chunks: 'all', reuseExistingChunk: true, }, app_commons: { test(mod /* , chunk */) { if (!mod.context.includes('node_modules')) { return false } if (['draft-js', 'highlight'].some(str => mod.context.includes(str))) { return true } return false }, name: 'app-commons', chunks: 'all', reuseExistingChunk: true, }, editor: { test(mod /* , chunk */) { if (!mod.context.includes('node_modules')) { return false } if (['editor'].some(str => mod.context.includes(str))) { return true } return false }, name: 'editors', chunks: 'all', reuseExistingChunk: true, }, }, }, }, }), ) A: Please add enforce: true to your editor configuration.
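For context, enforce: true tells webpack to ignore the default splitChunks thresholds (minSize, minChunks, maxAsyncRequests, maxInitialRequests) for that cache group, which is typically why a small group like editors was being folded into the main app chunk. A sketch of the amended group, using the same test function as in the question:

```javascript
editor: {
  test(mod /* , chunk */) {
    if (!mod.context.includes('node_modules')) {
      return false
    }
    return ['editor'].some(str => mod.context.includes(str))
  },
  name: 'editors',
  chunks: 'all',
  reuseExistingChunk: true,
  enforce: true, // bypass minSize/minChunks so the chunk is always emitted
},
```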
Q: What are the state-of-the-art algorithms for planar object recognition?
I have read about SIFT, SURF, Fern, BRIEF and even evolutionary algorithms, but I am not sure which of those algorithms is best, so I need your help. Of course I know each algorithm has its own advantages, so here are the criteria to classify by:

Which is fastest in the training/recognition phases?
Which consumes the least memory at runtime?
Which can be implemented to detect 3D objects?

Thank you, and sorry about my bad English. In my case, I want to implement an application on a smartphone to recognize a known object.

A: Your question is a bit complicated. There are no methods that are optimal for all cases, only methods that suit certain very specific cases. If you decide to use local descriptors in your method, I advise you to get started with SIFT/SURF, which are the most popular descriptors but are not very efficient (slow) and require a lot of memory. After that, you can try to replace them with binary descriptors (e.g. BRIEF, ORB, BRISK, FREAK), which are much more efficient and require less storage. But as I said before, it all depends on what you want to implement and what the requirements of your application are.
Q: Use MarkItUp as editor and not the default Does anyone know if it is possible to use this editor as my WordPress editor and not the default. If so, how? A: Yes, It is possible ... there is a good article on DigWP - which points out one warning that MarkDown is not reversible (ie articles written in Markdown get saved in Markdown so if you ever turn off MarkDown, then you're left with goo on your screen). There is a MarkitUp plugin on the WordPress.org ... While the project has a warning about its age, I think you could find it still works. Have a search on WordPress.org there are other markdown plugins And a note from me about compatibility - you may find it easier to just use a MarkDown editor on your computer / tablet / whatever and then just export / cut n paste the finished product into your WordPress post or publish with a blog editor tool like MarsEdit.
Q: Enter text from text box into combo box (VB.NET)
I have two forms, A and B. On form A the user selects a country code from a combobox, which is then saved to the DB. On form B a textbox shows the country code that was saved to the database earlier. I want to change the country code on form B when edit is selected. How the change works: 1. the textbox is first hidden 2. a combobox with all the country codes is shown, with its selected value equal to the hidden textbox value.

I have tried putting the info into the combobox like the textboxes, straight from the database, e.g.:

cbCountryCode.Text = CStr(dataTable.Rows(0).Item(2))

but this does not work. I also need to keep the country codes in the combobox, as the user will need to change the country code if it's wrong. Is there a way of doing this? I have a workaround where I don't let the user update the info if the combobox is blank, but I want the country code to be already there so that the user does not have to select a country code again if it is not wrong. Any help with this problem would be greatly appreciated.

EDIT: dataTable.Rows(0).Item(2) holds a country code, for example, Ireland (+353), United Kingdom (+44) or U.S.A. (1). This is the code I have for calling the information from the database:

sqlVisitorDetails = "SELECT * FROM visitorDetails WHERE idNumber=@idNumber"
sqlCon.Open()
sqlCmd = New SqlCommand(sqlVisitorDetails, sqlCon)
sqlCmd.Parameters.AddWithValue("@idNumber", txtIdNumber.Text)
dtVisitorDetails = loadDtVisitorDetails()
txtFirstName.Text = CStr(dtVisitorDetails.Rows(0).Item(1))
txtLastName.Text = CStr(dtVisitorDetails.Rows(0).Item(2))
txtContactNumber.Text = CStr(dtVisitorDetails.Rows(0).Item(3))
txtCountryCode.Text = CStr(dtVisitorDetails.Rows(0).Item(4))
txtAddress.Text = CStr(dtVisitorDetails.Rows(0).Item(5))

The country code (e.g. 'Ireland (+353)') is stored in dtVisitorDetails.Rows(0).Item(4) and this is put into the text box txtCountryCode. When edit is clicked on the form, the text box txtCountryCode is hidden and the combobox cbCountryCode is made visible (before edit is clicked, txtCountryCode is shown and cbCountryCode is hidden). I then want the country code (in this case 'Ireland (+353)') to be shown in the cbCountryCode combo box. At the moment, when the combobox is shown it is blank and the user has to choose a country code again, even if it's right. I hope this makes things clearer.

A: From the best I can understand from your question:

cbCountryCode.Text = CStr(dataTable.Rows(0).Item(2))

will not work if DropDownStyle in the properties is set to DropDownList; change it to DropDown instead (if it isn't already). And as to this: "I also need to keep the country codes in the combobox as the user will need to change the country code if it's wrong" - you have to bind the data to the ComboBox to make that work.

EDIT: If possible, use the column name instead of an index when getting data from a datatable. Since you're selecting all records from your database, you may not know when the index will change (when a column is added to or deleted from the DB):

cbCountryCode.Text = CStr(dataTable.Rows(0).Item("CountryCodeColumn"))
Q: Does the Fencing Master feat stack with the 5th-level extra attack?
Do the extra attack from the Fencing Master feat and the Extra Attack 5th-level class feature stack? Do I get to attack 3 times in one turn when I have both of these?

A: Yes, you can get 3 attacks, but with a -5 penalty to each. Rarely worth it.

"Once on your turn when you use your action to make a melee attack with a finesse weapon, you can make one additional attack with that weapon, but all of the attacks that are part of the action take a –5 penalty to the attack roll."

At 5th level as a paladin you can make 2 melee attacks as an action. With this feat (wielding the right weapon), you can decide to make one additional attack, but in this case you take a -5 penalty to all three attacks. Doing so is rarely good for you; it usually only pays off if you have advantage.*

The main attraction of Fencing Master from an optimization standpoint is the ability to parry an attack with a reaction. Connecting to your other question regarding an empty hand, even with this feat you benefit from a shield in your other hand.

*The mathematics of when it is better to attack 3 times at -5:

2 * H * D < 3 * (H - 0.25) * D
2 * H < 3 * H - 0.75
2 * H + 0.75 < 3 * H
0.75 < H

where H is your hit chance and D is your damage. The 0.25 is the -5 converted from a roll penalty to a hit chance. As you can see, at a hit chance of exactly 0.75 (hitting on a 6 or better) the two choices are equal, and only above that does making 3 attacks come out ahead.
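A quick numeric check of that breakeven (computed per point of average damage D, so D cancels out of the comparison):

```python
def expected_damage(attacks, hit_chance, penalty=0.0):
    # Expected hits per turn, with the hit chance clamped to a valid probability
    p = max(0.0, min(1.0, hit_chance - penalty))
    return attacks * p

# Two normal attacks vs three attacks at -5 (a flat -0.25 to hit chance)
assert expected_damage(3, 0.70, penalty=0.25) < expected_damage(2, 0.70)   # worse
assert expected_damage(3, 0.75, penalty=0.25) == expected_damage(2, 0.75)  # break-even
assert expected_damage(3, 0.80, penalty=0.25) > expected_damage(2, 0.80)   # better
```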
Q: What type of protocol and API connection is used for Authorize.Net?
Authorize.Net will disable the older protocols TLS 1.0 and TLS 1.1, which are highly vulnerable to security breaches. They will be disabled by Authorize.Net on February 28, 2018. What type of protocol and API connection does CiviCRM use?

A: CiviCRM will communicate with Authorize.Net using whatever versions of TLS are enabled on your server. To see if your server is using TLS 1.2, go to https://www.ssllabs.com/ssltest and put in your URL; in the Configuration section of the report it will list the TLS versions supported.
Q: El Capitan: Your iCloud session has expired
After upgrading to 10.11, every time I run iTunes I get this prompt. Entering my login and password fixes it for about 20 minutes, then it comes back. How can this be fixed? What I have tried: resetting NVRAM.

A: I found advice for this on Reddit: log out of your iTunes Store account and log back in.
Q: Renaming an uploaded file in CodeIgniter
Using CodeIgniter, I am trying to modify the name of an uploaded file to camelCase by removing any spaces and capitalizing subsequent words. I am pretty sure that I can rename the file using the second parameter of move_uploaded_file, but I don't even know where to look to figure out how to modify the name to camelCase. Thanks in advance! Jon

A: Check out CI's upload library: http://www.codeigniter.com/user_guide/libraries/file_uploading.html

Let's first take a look at how to do a simple file upload without changing the filename:

$config['upload_path'] = './uploads/';
$config['allowed_types'] = 'jpg|jpeg|gif|png';
$this->upload->initialize($config);

if ( ! $this->upload->do_upload())
{
    $error = $this->upload->display_errors();
}
else
{
    $file_data = $this->upload->data();
}

It's that simple and it works quite well. Now, let's take a look at the meat of your problem. First we need to get the file name from the $_FILES array:

$file_name = $_FILES['file_var_name']['name'];

Then we can split the string on the '_' delimiter like this (using explode() rather than the old split(), which was deprecated in PHP 5.3 and removed in PHP 7):

$file_name_pieces = explode('_', $file_name);

Then we'll have to iterate over the list and build a new string where every piece except the first is capitalized:

$new_file_name = '';
$count = 1;
foreach($file_name_pieces as $piece)
{
    if ($count !== 1)
    {
        $piece = ucfirst($piece);
    }
    $new_file_name .= $piece;
    $count++;
}

Now that we have the new filename, we can revisit what we did above. Basically, you do everything the same except you add this $config param:

$config['file_name'] = $new_file_name;

And that should do it! By default, CI has the overwrite $config param set to FALSE, so if there are any conflicts, it will append a number to the end of your filename. For the full list of parameters, see the link at the top of this post.

A: Or use CI's inflector helper, whose camelize() function produces camelCase directly:

$this->load->helper('inflector');
$file_name = camelize($_FILES['file_var_name']['name']);
$config['file_name'] = $file_name;

That should work too.
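Outside CodeIgniter, plain PHP string functions can do the space-to-camelCase conversion the question asks for in one expression. A hedged sketch (the helper name is made up here):

```php
<?php
// Hypothetical helper: "my file name.jpg" -> "myFileName.jpg"
function camel_case_filename($name) {
    // Capitalize each space-separated word, drop the spaces,
    // then lowercase the very first character.
    return lcfirst(str_replace(' ', '', ucwords($name)));
}
```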
Q: Mobile Programming Basics
I'm willing to start mobile programming... I want to know where to start, which language to use, and where to find good tutorials. Should I work on the Android platform, the iPhone platform or the Windows platform?

A: It largely depends on your current developer skills and experience. If you've already written some C#/VB.NET code you'll be VERY comfortable developing for Windows Phone. Go download the free WinPhone dev tools from Microsoft and then download and read Charles Petzold's free eBook "Programming Windows Phone 7". If you're a seasoned C and/or Java developer, you'll probably prefer developing for Android. The Linux Foundation has recently published some tutorials too. If you're a seasoned C/C++ developer and are willing to spend the time and effort to learn Objective-C, then iOS might be right for you. If you've not done much coding at all, then you've a learning curve ahead of you. Out of the three, I would argue that WinPhone is possibly the easiest to get into, followed by Android and then iOS. The other factor to consider is your market. iOS and Android are currently FAR more popular than WinPhone because they've been around for longer. However, the WinPhone app marketplace is growing very rapidly and its rate of growth is likely to increase as more and more developers with existing experience of writing .NET code realize how much fun developing for WinPhone is :)
Q: Scala test dependent methods used to calculate vals are executed only once I am new to Scala, and I'm trying to figure out the best way to test the following process. I have a class that gets a list of numbers as a constructor parameter. The class supports various operations on the list, and some operations may depend on the output of other operations. But every operation should only perform its calculation on demand, and at most once. No calculations should be done in the constructor.
Example class definition:
- InputList: List[Int]
- x: returns a list with the squares of all elements in InputList
- y: returns the sum of all elements in x
- z: returns the square root of y
As for the class implementation, I think I was able to come up with a fitting solution, but now I can't figure out how I can test that the dependent tree of operations is executed only once.
Class Implementation Approach #1:
class Operations(nums: List[Int]) {
  lazy val x: List[Int] = nums.map(n => n*n)
  lazy val y: Int = x.sum
  lazy val z: Double = scala.math.sqrt(y)
}
This was my first approach, which I'm confident will do the job, but I could not figure out how to properly test it, so I decided to add some helper methods to confirm they are being called just once.
Class Implementation Approach #2:
class Ops(nums: List[Int]) {
  def square(numbers: List[Int]): List[Int] = {
    println("calling square function")
    numbers.map(n => n*n)
  }
  def sum(numbers: List[Int]): Int = {
    println("calling sum method")
    numbers.sum
  }
  def sqrt(num: Int): Double = {
    println("calling sqrt method")
    scala.math.sqrt(num)
  }
  lazy val x: List[Int] = square(nums)
  lazy val y: Int = sum(x)
  lazy val z: Double = sqrt(y)
}
I can now confirm each dependent method is called just once, whenever necessary. Now how can I write tests for these processes? I've seen a few posts about mockito and looked at the documentation but was not able to find what I was looking for. I looked at the following:
Shows how to test whether a function is called once, but then how do I test whether the other dependent functions were called? http://www.scalatest.org/user_guide/testing_with_mock_objects#mockito
Mockito: How to verify a method was called only once with exact parameters, ignoring calls to other methods?
Seems promising but I can't figure out the syntax: https://github.com/mockito/mockito-scala
Example tests I'd like to perform:
val listoperations: Ops = new Ops(List(2, 4, 4))
listoperations.y // confirms 36 is returned, confirms square and sum methods were called just once
listoperations.x // confirms List(4,16,16) and confirms square method was not called
listoperations.z // confirms 6 is returned and sqrt method called once and square and sum methods were not called.
A: Ok, let's leave the premature-optimisation argument for another time. Mocks are meant to be used to stub/verify interactions with dependencies of your code (aka other classes), not to check its internals, so in order to achieve what you want you'd need something like this:
class Ops {
  def square(numbers: List[Int]): List[Int] = numbers.map(n => n*n)
  def sum(numbers: List[Int]): Int = numbers.sum
  def sqrt(num: Int): Double = scala.math.sqrt(num)
}
class Operations(nums: List[Int])(implicit ops: Ops) {
  lazy val x: List[Int] = ops.square(nums)
  lazy val y: Int = ops.sum(x)
  lazy val z: Double = ops.sqrt(y)
}
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec
import org.mockito.{ArgumentMatchersSugar, IdiomaticMockito}
class IdiomaticMockitoTest extends AnyWordSpec with Matchers with IdiomaticMockito with ArgumentMatchersSugar {
  "operations" should {
    "be memoised" in {
      implicit val opsMock = spy(new Ops)
      val testObj = new Operations(List(2, 4, 4))
      testObj.x shouldBe List(4, 16, 16)
      testObj.y shouldBe 36
      testObj.y shouldBe 36 // call it again just for the sake of the argument
      testObj.z shouldBe 6  // sqrt(36)
      testObj.z shouldBe 6  // sqrt(36), call it again just for the sake of the argument
      opsMock.sum(*) wasCalled once
      opsMock.sqrt(*) wasCalled once
    }
  }
}
Hope it makes sense. You mentioned you're new to Scala, so I didn't wanna go too crazy with implicits; this is a very basic example in which the API of your original Operations class stays the same, but it extracts the heavy lifting out to a third party that can be mocked, so you can verify the interactions.
Q: Two invertible matrices Let $A,B$ be two $n\times n$ invertible matrices with complex entries. Also, let $\alpha, \beta \in \mathbb{C}$ with $|\alpha| \neq |\beta|$ such that $\alpha AB+\beta BA=I_n$. Prove that $\det(AB-BA)=0$. I tried to manipulate the given equation in order two get $(AB-BA)$ as a factor somewhere, but didn't manage to get anything useful. I also thought of using $A^{-1}$ and $B^{-1}$ somewhere, but I only got messier relations. A: I leave my first answer below. Here is a much easier one: As below, we may assume $AB + \gamma BA = I$, where $|\gamma|\neq 1$ and $\gamma\neq 0$. Put $\lambda_0 = (1+\gamma)^{-1}$. Then $$ AB-\lambda_0 = 1 - \lambda_0 - \gamma BA = -\gamma\left(BA - \frac{1-\lambda_0}{\gamma}\right) = -\gamma(BA-\lambda_0). $$ Hence, as $AB$ and $BA$ have the same eigenvalues, $$ \sigma(BA-\lambda_0) = \sigma(AB-\lambda_0) = -\gamma\cdot\sigma(BA-\lambda_0).$$ Thus, multiplication by $(-\gamma)$ leaves the finite set $\sigma(BA-\lambda_0)$ invariant. But, as $|\gamma|\neq 1$, this is only possible if $\sigma(BA-\lambda_0) = \{0\}$. Hence, $BA-\lambda_0$ is nilpotent. And as $$ AB-BA = I - \gamma BA - BA = I - \lambda_0^{-1}BA = -\lambda_0^{-1}(BA - \lambda_0), $$ the same holds for $AB-BA$. In particular, $AB-BA$ is not invertible, i.e., $\det(AB-BA)=0$. The statement is clear for $\alpha = 0$. Hence, let $\alpha\neq 0$. In this case, with $A' = \alpha A$ we have $A'B + \frac{\beta}{\alpha}BA' = I$. Hence, we may assume that $AB + \gamma BA = I$ with $|\gamma|\neq 1$ and $\gamma\neq 0$. Let $x$ be an eigenvector of $AB$ with respect to the eigenvalue $\lambda$. Then $$ \lambda x + \gamma BAx = x, $$ that is, $$ BAx = \frac{1-\lambda}\gamma x. $$ But we know that $AB$ and $BA$ have exactly the same eigenvalues (even the same Jordan structures) as they are both invertible. Hence, the function $f(z) = \tfrac{1-z}\gamma$ is a selfmap of the set of eigenvalues. 
Therefore, there is an eigenvalue $\lambda$ such that $f^n(\lambda) = \lambda$ for some $n$, where $f^n = f\circ\ldots\circ f$ ($n$ times). One can easily prove by induction that $$ f^n(z) = \frac{1 - (-\gamma^{-1})^n}{1+\gamma} + (-\gamma^{-1})^nz $$ and then (since $(-\gamma^{-1})^n\neq 1$) that the only fixed point of each $f^n$ is $z = (1+\gamma)^{-1}$. Thus, $\lambda = (1+\gamma)^{-1}$. In particular, $f(\lambda) = \lambda$. But then, with the eigenvector $x$ from above, we have $$ (BA-AB)x = BAx - ABx = f(\lambda)x - \lambda x = 0. $$ Hence, the matrix $BA-AB$ is not invertible, meaning that $\det(BA-AB) = 0$.
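The argument can also be sanity-checked numerically. The following sketch (not part of the original answers; the construction BA = lambda_0*I + N with N nilpotent is read off from the first proof) builds an explicit 2x2 pair over the rationals with AB + gamma*BA = I for gamma = 2, then confirms det(AB - BA) = 0:

```python
from fractions import Fraction as F

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

g = F(2)           # gamma, with |gamma| != 1
l0 = 1 / (1 + g)   # lambda_0 = (1 + gamma)^(-1)

# Pick B so that B^{-1} N B = -gamma N for N = [[0,1],[0,0]],
# then set A = B^{-1} (lambda_0 I + N); this forces AB + gamma BA = I.
B = [[F(1), F(0)], [F(0), -g]]
Binv = [[F(1), F(0)], [F(0), F(-1) / g]]
A = matmul(Binv, [[l0, F(1)], [F(0), l0]])

AB, BA = matmul(A, B), matmul(B, A)
lhs = [[AB[i][j] + g * BA[i][j] for j in range(2)] for i in range(2)]
print(lhs)  # the identity matrix, as required
print(det([[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]))  # 0
```

Here AB - BA comes out as a nonzero nilpotent matrix, exactly as the proof predicts.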
Q: How to load/modify/save an entity object in different contexts? With the Entity Framework (EF) I want to load an object from my database, modify it and save it back. However, loading and saving happens in different contexts and I modify it by adding another object to a collection property of the object. Consider the following code based on the famous blog/posts example from MSDN: Blog blog; using (BloggingContext db = new BloggingContext()) { blog = db.Blogs.Include("Posts").Single(); } // No one else knows the `post` object directly. { Post post = new Post {Blog = blog, Title = "Title", Content = "Content"}; blog.Posts.Add(post); } using (BloggingContext db = new BloggingContext()) { // No idea what I have to do before saving... // Can't do anything with `post` here, since this part will not know this // object directly. //db.Blogs.Attach(blog); // throws an InvalidOperationException db.SaveChanges(); } In my database I have 1 Blog object with 100 Posts. As you can see, I want to add a new Post to this Blog. Unfortunately, doing db.Blogs.Attach(blog); before saving, throws an InvalidOperationException saying: "A referential integrity constraint violation occurred: The property values that define the referential constraints are not consistent between principal and dependent objects in the relationship." What do I have to do to let the EF update this blog? UPDATE: I think what I was trying to achieve (decoupling the database update of an entity from the modifications and its related child entities) is not possible. Instead, I consider the opposite direction more feasible now: decoupling the update/creation of a child entity from its parent entity. 
This can be done the following way: Blog blog; using (BloggingContext db = new BloggingContext()) { blog = db.Blogs.Single(); } Post post = new Post {BlogId = blog.BlogId, Title = "Title", Content = "..."}; using (BloggingContext db = new BloggingContext()) { db.Posts.Add(post); db.SaveChanges(); } A: You have to attach the entity to the context and then change tracking should kick in and save changes will do the rest. For reference: MSDN Attach Entities to Context Or try adding it explicitly and set the relationship needed information directly and not through the navigation property like so: Blog blog; using (BloggingContext db = new BloggingContext()) { blog = db.Blogs.Include("Posts").Single(); Post post = new Post {Blog = blog, Title = "Title", Content = "Content"}; post.blogId = blog.Id; db.Posts.Add(post); db.SaveChanges(); }
Q: Probability of exactly $2$ aces within first $5$ cards? Every person gets $5$ cards from a deck of cards ($52$). What is the probability that the first $5$ cards will contain exactly $2$ aces? I have tried to calculate it by $\frac{5}{52} \times \frac{5}{47} = \frac{25}{2444}$. I know my answer is incorrect, but I dont know how I should approach this. A: You need to think about the number of ways you can get two aces, and divide this by the total number of hands you can get. Firstly, there are $4 \choose 2$ different ace combinations that you can get. And, given that two cards in your hand are aces, there are $48 \choose 3$ different combinations for the remaining $3$ cards in your hand (note we remove all 4 aces to get 48 remaining cards, since you can only have 2 aces). This gives the total number of ways to get 2 aces as $ 4 \choose 2$ $\times$ $48 \choose 3 $ Get this number in your calculator and divide it by the total number of possible hands, $52 \choose 5$ to get the answer. Recall that $n \choose x $$= \frac{n!}{x! (n-x)!}$ and $n! = n\times (n-1) \times ... \times 2 \times 1$
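The arithmetic in the accepted answer is easy to check directly in Python (math.comb requires Python 3.8 or newer):

```python
from math import comb

# P(exactly 2 aces in a 5-card hand from a 52-card deck)
favourable = comb(4, 2) * comb(48, 3)   # choose the 2 aces, then the 3 non-aces
total = comb(52, 5)                     # all possible 5-card hands
p = favourable / total
print(favourable, total, round(p, 4))   # 103776 2598960 0.0399
```

So the probability is 103776/2598960, roughly a 4% chance.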
Q: python's print function not exactly an ordinary function? Environment: python 2.x If print is a built-in function, why does it not behave like other functions ? What is so special about print ? -----------start session-------------- >>> ord 'a' Exception : invalid syntax >>> ord('a') 97 >>> print 'a' a >>> print('a') a >>> ord <built-in function ord> >>> print -----------finish session-------------- A: print in Python versions below 3, is not a function. There's a separate print statement which is part of the language grammar. print is not an identifier. It's a keyword. A: The short answer is that in Python 2, print is not a function but a statement. In all versions of Python, almost everything is an object. All objects have a type. We can discover an object's type by applying the type function to the object. Using the interpreter we can see that the builtin functions sum and ord are exactly that in Python's type system: >>> type(sum) <type 'builtin_function_or_method'> >>> type(ord) <type 'builtin_function_or_method'> But the following expression is not even valid Python: >>> type(print) SyntaxError: invalid syntax This is because the name print itself is a keyword, like if or return. Keywords are not objects. The more complete answer is that print can be either a statement or a function depending on the context. In Python 3, print is no longer a statement but a function. In Python 2, you can replace the print statement in a module with the equivalent of Python 3's print function by including this statement at the top of the module: from __future__ import print_function This special import is available only in Python 2.6 and above. Refer to the documentation links in my answer for a more complete explanation.
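For contrast, here is the Python 3 behaviour, which Python 2.6+ can opt into with the __future__ import described in the answer; print becomes an ordinary built-in function with the same type as ord:

```python
from __future__ import print_function  # a no-op on Python 3

# With print as a function, it can be inspected like any other object:
print(type(ord))    # <class 'builtin_function_or_method'>
print(type(print))  # <class 'builtin_function_or_method'>
```

In Python 2 without that import, the second line would be a SyntaxError, because print is a keyword there, not a name.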
Q: How to import/sync comments from Disqus into my drupal database? After reading the module's homepage, I understand that there's an easy way to get Disqus comments imported/exported from your Drupal database in D7. To import/sync: You can do a one-time import at Comments->Disqus Import. Specify the timestamp you want to import from. You can have comments automatically import from Disqus on an interval basis. Turn this on in the import settings at Site Config -> Disqus -> Import. Turning on syncing will query Disqus for any comments that came in since the last import. But how do I actually do that? I don't have any "Disqus Import" tab or link under "admin/content/comment", and under "/admin/config/services/disqus" I don't have any "Import" link or tab. Maybe I'm missing something? Thanks for your help!! Rosamunda
A: Reading under the heading "Disqus Migrate Sub-module" on the module page, the import functionality (the Disqus Migrate module) seems to exist only for the Drupal 6 version.
Q: How to automatically close Bootstrap 3 modal after time period I'm struggling to automatically close Bootstrap modals after a set time period. Here's the js code I'm using to close the modal in 4 seconds: setTimeout(function() { $('#myModal').modal('hide'); }, 4000); Two basic problems: (A) When the html page (that contains the modals) loads, the modal Timeout seems to run before the modal is even displayed. The modal is set to display after clicking on a link in the page. If the link is not clicked immediately when the page loads, the modal will only appear briefly and then close immediately because essentially the Timeout period started when the html page loaded, not when the modal was displayed. (B) If the user clicks on the link to launch the modal a second time (or 3rd time, 4th time, etc.), the modal displays properly but does NOT close after the time period. It just stays open until the user manually closes it. So...the two questions are: (1) How do I get the modal Timeout period to wait until the modal is displayed before running the clock. (2) How do I get the modal to display a second and third time with the proper Timeout function still working? (The answer(s) proposed at this link below looked promising, but aren't working for me. Maybe they don't work on Bootstrap 3? How to automatically close the bootstrap modal dialog after a minute ) This code below looked very promising, but didn't work even after changing 'shown' to 'shown.bs.modal'. Or maybe I'm placing this code in the wrong place? var myModal = $('#myModal').on('shown', function () { clearTimeout(myModal.data('hideInteval')) var id = setTimeout(function(){ myModal.modal('hide'); }); myModal.data('hideInteval', id); }) Many thanks for any suggestions! 
A: I'm not quite sure about your html, so I did a complete example:
html:
<a data-toggle="modal" href="#myModal" class="btn btn-primary">Open Modal</a>
<div id="myModal" class="modal fade">
  <div class="modal-dialog">
    <div class="modal-content">
      <div class="modal-header">
        <button type="button" class="close" data-dismiss="modal" aria-hidden="true">x</button>
        <h4>Header</h4>
      </div>
      <div class="modal-body">
        Modal Content
      </div>
    </div>
  </div>
</div>
js:
$(function(){
  $('#myModal').on('show.bs.modal', function(){
    var myModal = $(this);
    clearTimeout(myModal.data('hideInterval'));
    myModal.data('hideInterval', setTimeout(function(){
      myModal.modal('hide');
    }, 3000));
  });
});
The main differences with your code: I set a time for the timeout (3000), and I set the myModal variable inside the callback.
A: I guess it depends on how you display your modal, but you could set the timeout in the event listener. Here is a JSFiddle to demonstrate how you can achieve it. Basically you add the timeout in the function that will be executed when the event happens.
// Select the element you want to click and add a click event
$("#your-link").click(function(){
    // This function will be executed when you click the element
    // show the element you want to show
    $("#the-modal").show();
    // Set a timeout to hide the element again
    setTimeout(function(){
        $("#the-modal").hide();
    }, 3000);
});
If the event you listen for is a click on a link, you may have to prevent the default action too by using event.preventDefault(). You can find more info on that here. I hope this helps.
Q: Relay fragment variables Link to the GitHub issue: https://github.com/facebook/relay/issues/1218 We have encountered a strange behaviour in Relay; I will try to explain it the best I can. We have a "main" Relay container that fetches the data for the corresponding store and also includes a fragment from the Ticket container. The Ticket container renders a custom table that supports filtering and sorting. So you can see that in the StoreForm component the StoreTicketList container is imported and all required props are passed, like the Store fragment. The problem occurs when you try to filter the StoreTicketList, i.e. set the filter or sort Relay variables. You will get this error: Warning: RelayContainer: component TicketList was rendered with variables that differ from the variables used to fetch fragment Store. The fragment was fetched with variables {"first":5,"after":null,"last":null,"before":null,"sort":null,"filter":null}, but rendered with variables {"first":5,"after":null,"last":null,"before":null,"sort":null,"filter":{"authorAccount":{"email":{"__e":"[email protected]"}}}}. This can indicate one of two possibilities: - The parent set the correct variables in the query - TicketList.getFragment('Store', {...}) - but did not pass the same variables when rendering the component. Be sure to tell the component what variables to use by passing them as props: <TicketList ... first={...} after={...} last={...} before={...} sort={...} filter={...} />. - You are intentionally passing fake data to this component, in which case ignore this warning. But those filter/sort variables are on StoreTicketList and they aren't passed down from parent to child container, i.e. from the Store container to the StoreTicketList container. export class StoreForm extends React.Component { constructor(props) { super(props); const { Viewer: { store } } = props; this.state = { number: store && store.number !== null ?
store.number : '', }; } handleInsert = (model) => { console.log('Form mutation model : ', model); }; render() { const { Viewer, relay: { variables: { update } } } = this.props; return ( <div> <Form> <FormTitle title='Store Info' /> <FormBody> <TextField required fullWidth name='number' value={this.state.number} floatingLabelText='Number' /> <StoreTicketList Store={this.props.Viewer.store} /> </FormBody> </Form> </div> ); } } StoreForm container (main container): export default Relay.createContainer(StoreForm, { initialVariables: { id: null, update: false }, prepareVariables({ id = null }) { return { id, update: (id !== null) }; }, fragments: { Viewer: (variables) => Relay.QL` fragment on Viewer { store(id: $id) @include(if: $update) { id, number ${StoreTIcketList.getFragment('Store')} } } ` } }); Ticket container: export const StoreTicketList = Relay.createContainer(TicketList, { initialVariables: { first: 5, after: null, last: null, before: null, sort: null, filter: null }, fragments: { Store: () => Relay.QL` fragment on Store { ticketConnection(first: $first, after: $after, last: $last, before: $before, sort: $sort, filter: $filter) { count, pageInfo { hasNextPage, hasPreviousPage, startCursor, endCursor }, edges{ node{ created, subject } } } } ` } }); We have built our own Connection Table HOC component that renders table for each container. In this component there are also sort and filter function that are using this.props.relay.setVariables(). So the StoreListTicket is rendering as an ConnectionTable and it passes down the relay prop object, and if user clicks on a table colum header, component is generating an array of sort objects. 
function connectionTableHOC(ComposedComponent) { class EnhanceTable extends React.Component { constructor(props) { super(props); } sortHandler = (sortArray) => { const { relay, relay: { variables } } = this.props; relay.setVariables({ first: variables.first || variables.last, after: null, last: null, before: null, sort: sortArray }); }; filterHandler = (filterObj) => { const { relay, relay: { variables } } = this.props; relay.setVariables({ first: variables.first || variables.last, after: null, last: null, before: null, filter: filterObj }); }; render() { return <ComposedComponent {...this.props} />; } } A: It turns out you need to do two things: First, pass the props into the component, as described by Joe Savona. I'm using react-relay-router, so for me that was a matter of adding this line <Route path="interviews"> <IndexRoute component={InterviewsList} queries={ViewerQuery} /> <Route path=":id" component={InterviewSession} queries={NodeViewerQuery} render={({ props }) => props ? <InterviewSession {...props} /> : <Loading />}/> // <--- this line </Route> Second, you must inject the variable's values into the getFragment function call, like so: fragments: { Viewer: (variables) => Relay.QL` fragment on Viewer { store(id: $id) @include(if: $update) { id, number ${StoreTIcketList.getFragment('Store', {... variables})} // <---- this thing! } } ` } Note that if you're using getFragment inside of your root query, variables will be argument number two: const NodeViewerQuery = { node: (component, variables) => Relay.QL`query { // <---- extra "component" argument node(id: $id) { ${component.getFragment('node', {...variables})} } }`, (This answer crossposted from https://github.com/facebook/relay/issues/1218)
Q: Test for presence/absence of hidden field with jest <div className="errorMsg" hidden={props.error === true ? false : true}> Error message text </div> I have this div that is hidden if props.Error is false, and displayed if it's true. I am trying to test that the text does/doesn't appear depending on prop value. Since I'm using shallow render, the test expect(wrapper.find('.errorMsg').length).toEqual(1); is always going to pass whether hidden is true or not. I'm using shallow render because that's necessary for my other tests, and so far I've tried: expect(wrapper.find('.garmentOriginErrorMsg').length).toEqual(0); expect(wrapper.find('.errorMsg')).toHaveProperty('props', 'hidden: true') expect(wrapper.find('.errorMsg').displayed()).toBeFalsy() expect(wrapper.find('.errorMsg').hasStyle('display', 'none')).toBe(true) Is this possible with shallow rendering, or is my only option to use mount? A: This should work with shallow: expect(wrapper.find('.errorMsg').props().hidden).toBe(true); Also, props.error === true ? false : true can be written simply as !props.error
Q: public definition of GetEnumerator in asp.net mvc missing? Should I manually create a definition for GetEnumerator? Seems like it should know... I get the following error: foreach statement cannot operate on variables of type 'MvcAppNorthwind.Models.Product' because 'MvcAppNorthwind.Models.Product' does not contain a public definition for 'GetEnumerator' Line 9: <h2><%: ViewData["Message"] %></h2> Line 10: <ul> Line 11: <% foreach (MvcAppNorthwind.Models.Product p in ViewData.Model) { %> Line 12: <li><%= p.ProductName %></li> Line 13: <% } %> In my controller I have this code: NorthwindDataContext db = new NorthwindDataContext(); ViewData["Message"] = "Welcome to ASP.NET MVC!"; var products = from p in db.Products select p; return View(products); I changed the declaration in my view like this and now it works: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<IEnumerable<MvcAppNorthwind.Models.Product>>" %> But if you want to display or use data from several models in the same view? How do you do it then? A: Change the type of "products" from var to IEnumerable<MvcAppNorthwind.Models.Product> and make sure your cast reflects the same. In answer to your last question, you could assign objects to a dictionary item in ViewData OR better yet you could create a View Model that contains all of the data that you need for the view. That way you have better separation of concerns by having a model that is specific for your view.
Q: ERROR 1 (HY000): Can't create/write to file 'C:\Outfile.txt' (Errcode: 13 - Permission denied) All of a sudden, I cannot write to a file from MySQL. I am using Windows 10 and MySQL version 5.7.18-log. My query runs fine, but when I add INTO OUTFILE 'C:/Outfile.txt' it returns ERROR 1 (HY000): Can't create/write to file 'C:\Outfile.txt' (Errcode: 13 - Permission denied). I have edited the my.ini file at C:\ProgramData\MySQL\MySQL Server 5.7\. It originally had secure-file-priv="", which, as I understand it, should mean I have permission to write anywhere. I changed my.ini to secure-file-priv="C:\" but I still get the same error. Is there somewhere else that I can change permissions, or something that I am missing?
A: Errcode 13 is an operating-system "permission denied", not a MySQL privilege problem. On Windows 10, the account that runs the MySQL service is normally not allowed to create files in the root of C:\, so the write fails regardless of the secure-file-priv setting. Point the query at a directory the service account can write to, or grant that account write permission on the target folder via the folder's Properties -> Security tab.
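As a sketch of a working setup (the directory name here is a placeholder, not from the question): point secure-file-priv at a dedicated directory the MySQL service account can write to, restart the service, and write the outfile there. Note that in MySQL option files the backslash acts as an escape character, so forward slashes (or doubled backslashes) are the safer way to spell Windows paths:

```ini
[mysqld]
# Allow SELECT ... INTO OUTFILE only inside this directory.
# Create it first and give the MySQL service account write access to it.
secure-file-priv="C:/mysql-files"
```

The query then becomes SELECT ... INTO OUTFILE 'C:/mysql-files/Outfile.txt'.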
Q: Ansible 2.3.0: Unable to evaluate date using to_datetime() I'm trying to capture a date in string format and parse it to an actual date in Ansible 2.3.0. Here's a snippet from my playbook: vars: date_of_birth: "{{ bdate_YYYYMMDD }}|to_datetime('%Y%d%m')" tasks: - name: 2) Print date debug: msg="Birth date as discovered is {{ date_of_birth }}" Command: ansible-playbook ansible_playbook.yml -i inventory -k -v --extra-vars "bdate_YYYYMMDD=20181203" This is the output: TASK [2) Print date] ************************************************************************************************************************************************************************************************* ok: [****hostname****] => { "changed": false, "msg": "Birth date as discovered is 20181203|to_datetime('%Y%d%m')" } Looking to determine why the date doesn't get evaluated and stored in variable date_of_birth. A: You should use filters inside Jinja2 expressions (i.e., part opened with {{ and closed with }}): date_of_birth: "{{ bdate_YYYYMMDD | to_datetime('%Y%d%m') }}" Otherwise they are interpreted just as string as in your example.
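For reference, Ansible's to_datetime filter is a thin wrapper around Python's datetime.strptime, so the result of the corrected expression can be previewed in plain Python. Note what the %Y%d%m format from the playbook actually does with the sample value: it reads 20181203 as year 2018, day 12, month 03:

```python
from datetime import datetime

# Same format string as in the playbook: year, then DAY, then MONTH.
dob = datetime.strptime("20181203", "%Y%d%m")
print(dob.date())  # 2018-03-12
```

If the extra-vars value is really year-month-day, the format should be %Y%m%d instead.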