Q: Find a constant in a density function $ f(x) = \begin{cases} k\sqrt{x}, & 0<x<1 \\ 0, & \text{elsewhere} \end{cases}$ I know that $E[X] = \frac{2k}{5}$ and $Var[X] = \frac{2k}{7}$ Then what can I do to find $k$ with this little information? A: Updated to match the corrected version of the question: You must have $$\int_{-\infty}^\infty f(x)~dx=1$$ in order for $f$ to be a probability density function. In this case $$\int_{-\infty}^\infty f(x)~dx=\int_0^1 k\sqrt x~dx=k\int_0^1 x^{1/2}~dx\;,$$ so you need only solve the equation $$k\int_0^1 x^{1/2}~dx=1$$ for $k$.
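For completeness, the remaining integral works out as follows (this evaluation is standard calculus, added here for illustration rather than taken from the original answer): $$k\int_0^1 x^{1/2}~dx=k\left[\tfrac{2}{3}x^{3/2}\right]_0^1=\tfrac{2k}{3}=1\quad\Longrightarrow\quad k=\tfrac{3}{2}.$$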
{ "pile_set_name": "StackExchange" }
Q: How to know when a HID USB/Bluetooth device is connected in Cocoa? How can I get a simple call back when a HID device, or at last, any USB/Bluetooth device gets connected/disconnected? I made a simple app that shows the connected joysticks and the pressed buttons/axis for mac in a pretty way. Since I am not very familiar with cocoa yet, I made the UI using a webview, and used the SDL Joystick library. Everything is working nice, the only problem is that the user needs to scan for new joysticks manually if he/she connects/disconnects something while the program is running. With a callback, I I can just call the Scan function. I don't want to handle the device or do something fancy, just know when there is something new happening... Thanks. A: Take a look at IOServiceAddMatchingNotification() and related functions. I've only worked with it in the context of serial ports (which are in fact USB to serial adapters, though that doesn't matter), but it should be applicable to any IOKit accessible device. I'm not sure about Bluetooth, but it should at least work for USB devices. Here's a snippet of code I use: IONotificationPortRef notificationPort = IONotificationPortCreate(kIOMasterPortDefault); CFRunLoopAddSource(CFRunLoopGetCurrent(), IONotificationPortGetRunLoopSource(notificationPort), kCFRunLoopDefaultMode); CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOSerialBSDServiceValue); CFRetain(matchingDict); // Need to use it twice and IOServiceAddMatchingNotification() consumes a reference CFDictionaryAddValue(matchingDict, CFSTR(kIOSerialBSDTypeKey), CFSTR(kIOSerialBSDRS232Type)); io_iterator_t portIterator = 0; // Register for notifications when a serial port is added to the system kern_return_t result = IOServiceAddMatchingNotification(notificationPort, kIOPublishNotification, matchingDictort, SerialDeviceWasAddedFunction, self, &portIterator); io_object_t d; // Run out the iterator or notifications won't start (you can also use it to iterate the available devices). while ((d = IOIteratorNext(iterator))) { IOObjectRelease(d); } // Also register for removal notifications IONotificationPortRef terminationNotificationPort = IONotificationPortCreate(kIOMasterPortDefault); CFRunLoopAddSource(CFRunLoopGetCurrent(), IONotificationPortGetRunLoopSource(terminationNotificationPort), kCFRunLoopDefaultMode); result = IOServiceAddMatchingNotification(terminationNotificationPort, kIOTerminatedNotification, matchingDict, SerialPortWasRemovedFunction, self, // refCon/contextInfo &portIterator); io_object_t d; // Run out the iterator or notifications won't start (you can also use it to iterate the available devices). while ((d = IOIteratorNext(iterator))) { IOObjectRelease(d); } My SerialPortDeviceWasAddedFunction() and SerialPortWasRemovedFunction() are called when a serial port becomes available on the system or is removed, respectively. Relevant documentation is here, particularly under the heading Getting Notifications of Device Arrival and Departure. A: Use IOHIDManager to get the notifications.
{ "pile_set_name": "StackExchange" }
Q: How can I calculate avg of data from django form and store in variable for later use? In the readerpage function, in my views.py, I am trying to calculate the avg of the two variables: readability_rating and actionability_rating, and store the result in avg_rating def readerpage(request, content_id): content = get_object_or_404(Content, pk=content_id) form = ReviewForm(request.POST) if form.is_valid(): review = form.save(commit=False) review.content = content readability_rating = form.cleaned_data['readability_rating'] readability = form.cleaned_data['readability'] actionability_rating = form.cleaned_data['actionability_rating'] actionability = form.cleaned_data['actionability'] general_comments = form.cleaned_data['general_comments'] review.avg_rating = (float(readability_rating) + float(actionability_rating)) / 2 review.save() return redirect('home') args = {'content': content, 'form': form} return render(request, 'content/readerpage.html', args) The problem is that with this setup the two variables are still ChoiceFields - as such the above setup gives me the error: float() argument must be a string or a number, not 'ChoiceField' I’ve tried converting them to floats without any luck. I also attempted using the TypedChoiceField with coerce=float, still with no luck I’m not sure whether the best place to calculate this is in my function, my form, or my model? models.py: class Review(models.Model): content = models.ForeignKey(Content, null=True, on_delete=models.CASCADE) readability = models.CharField(null=True, max_length=500) readability_rating = models.IntegerField(null=True) actionability = models.CharField(null=True, max_length=500) actionability_rating = models.IntegerField(null=True) general_comments = models.CharField(null=True, max_length=500) avg_rating = models.FloatField(null=True) def _str_(self): return self.title forms.py: class ReviewForm(forms.ModelForm): readability = forms.CharField(widget=forms.Textarea) readability_rating = forms.ChoiceField( choices=[(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]) actionability = forms.CharField(widget=forms.Textarea) actionability_rating = forms.ChoiceField( choices=[(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]) general_comments = forms.CharField(widget=forms.Textarea) class Meta: model = Review fields = ['readability', 'readability_rating', 'actionability', 'actionability_rating', 'general_comments'] Thanks for reading this. A: The variables are ChoiceFields because you are declaring them as ChoiceFields in view function. Shouldn't you just fetch the values from your cleaned_data? readability_rating = form.cleaned_data['readability_rating'] And to the second part of your question: Why not add it as a @property to your model?
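A minimal sketch of the cleaned_data approach the answer points at (field and model names are taken from the question; using TypedChoiceField with coerce=int is one assumed way to get integers rather than strings back, and it only helps if the values are read from form.cleaned_data rather than from the field objects themselves):

from django import forms

class ReviewForm(forms.ModelForm):
    readability_rating = forms.TypedChoiceField(
        coerce=int, choices=[(i, i) for i in range(1, 6)])
    actionability_rating = forms.TypedChoiceField(
        coerce=int, choices=[(i, i) for i in range(1, 6)])
    # ...remaining fields and Meta unchanged from the question...

# in the view, after form.is_valid():
readability_rating = form.cleaned_data['readability_rating']      # already an int
actionability_rating = form.cleaned_data['actionability_rating']  # already an int
review.avg_rating = (readability_rating + actionability_rating) / 2

Since Django 2.1 runs on Python 3, the division here already produces a float suitable for the FloatField.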
{ "pile_set_name": "StackExchange" }
Q: Can I reuse my Office 2010 licence key when upgrading my OS to 8? As some others have been doing, I have been procrastinating upgrading my operating system. A lot. I'm still on XP (Don't laugh, I can't see spending the money), and since MS is pulling the plug on patches after thirteen years, it's definitely time to upgrade, especially with the $40 deal. Here's the problem: I have Office 2010 installed, and I don't want to waste another key. Will it transfer my key/deactivate it? [Related] This doesn't cover upgrading, I know I can call Microsoft, but does the upgrade tool automatically transfer/deactivate my licence? I bought it in a three licence pack, so will I have two licences or one remaining when I install it on Windows 8? (I only have it on one computer now, but I want to use it on two more computers.) Please request clarifications in the comments; I will clarify anything ASAP. I tried to make this as easy to read as possible, but there may be something needing to be edited. A: If you keep using it on the same machine there shouldn't be a problem at all.
{ "pile_set_name": "StackExchange" }
Q: Could sweet and hot peppers come from the same plant? One of the veggies that my Keplerians grow is peppers. I have heard of some sweet peppers cross pollinating with hot peppers making the sweet peppers hotter and the hot peppers sweeter. However that involves a minimum of 2 plants. 1 per type of pepper. So I was wondering if it is plausible for a single plant to produce both sweet and hot peppers. I think it would be in 1 of 2 ways. Either the plant produces 2 types of flowers, 1 for sweet peppers and 1 for hot peppers and those different flowers attract different pollinators. Or once the plant reaches a certain stage it will transition from 1 type to another. Some might produce hot peppers first and then start producing sweet peppers and others might do just the opposite. This could still involve 2 types of flowers but only one type being produced at any given point. Is it plausible that a single plant could produce both hot and sweet peppers? A: Already happens First off, lets be clear that sweet and hot peppers are the same species of plant. Bell peppers, jalaepeno and cayenne are all Capsicum annuum. Second, I can tell you from gardening jalapenos that you can get divergent results from the same plant based on growing conditions. In Virginia, you can get the first peppers from a plant in June, and continue to get a harvest until October or later, depending on the earliest frost. If you overwater the plant, no only will you get bigger peppers, you will get much weaker ones. We had such a rainy summer last year that my last peppers were not noticably hotter than bell peppers. But if you have a dry summer, and water (heavily) only once a week, you can get very spicy peppers. So you can vary pepper-heat level over time for your plant. Finally, there are plants that produce two different fruits; they are called grafted chimeras. An example is the Bizzaria of Florence, is a citron and sour orange tree grafted together so that it produces the both fruit. A: In Viticulture, there is a concept of 'late harvest' grapes. The idea is that you leave the grapes on the vine after they've ripened so that they dry out a bit. That concentrates the sweetness in the remaining juice. I actually like an 'early harvest' wine (when I drink it), because it's less sweet and makes it a refreshing summer beverage with less alcohol because there the sugar actually makes up a smaller percentage of the juice. So; imagine a pepper with a standard amount of Capsaicin which makes the pepper hot (we call peppers capsicums in Australia, but I'll run with it to keep language consistent) but they contain a high water content. Early harvest - higher heat by comparison to sugars thanks to higher water content. Late Harvest - sweetness overwhelms the Capsaicin and the pepper tastes sweeter. Ideally, in this environment you want the Capsaicin to degrade in the 'fruit' over time; start out high, but leave the pepper over time with the water. I'm not sure how you would do that but that would mean that you have a plant that produces both, the only difference is harvest time. A: Shoots from various fruit trees of the same species can be grafted onto the root-stock of a single plant thus enabling one apple tree to give fruit from several varieties of apple. Grafting is a technique that can grow some kinds of peppers. What you propose is actually quite possible without any crazy scifi tech what-so ever. Grafting multiple variety fruit bearing plants has been done for centuries.
{ "pile_set_name": "StackExchange" }
Q: How to store a table to database in Django 2.1? I have a table inside html, and I need to save it into database using view and model and form. Here are some part of the code: template.html <form method="post" action="/images/save/" enctype="multipart/form-data"> {% csrf_token %} <table class="table" border="1" id="tbl_posts"> <tbody id="tbl_posts_body"> {% for name, age in lines %} {% with i=forloop.counter0 %} {% with i|add:1|stringformat:"s" as i_id %} {% with id="rec-"|add:i_id %} <tr id={{id}}> <td><span class="sn">{{ i|add:1 }}</span>.</td> <td><INPUT type="text" name="txt1" value=""\></td> <td><INPUT type="text" name="txt2" value=""\></td> </tr> {% endwith %} {% endwith %} {% endwith %} {% endfor %} </tbody> </table> <input type="submit" value="Submit"> </form> model.py: class Names(models.Model): name= models.CharField(max_length=255) age= models.IntegerField() view.py: def save_form(request): template = "template.html" context = {'txt1': "Name", 'txt2': 0} if request.method == 'POST': dname= request.POST.get("txt1") dage= request.POST.get("txt2") names1= Names(name=dname, age=dage) names1.save() return render(request, template, context) Question: So, it works perfectly, but the issue is that It saves only the last row. I think there is a way to enter the whole data. I need to enter all data in the table not only the last row. Can someone help me? Update: lines is a zip a combination of two lists, I read it from a file. A: These two lines are the ones which get sent in a form. <td><INPUT type="text" name="txt1" value=""\></td> <td><INPUT type="text" name="txt2" value=""\></td> However, you are using the same "names" to send the values for multiple rows.The result of that is that you will only get those values once with the current code you have in your view. You will want to give each of them a unique name (just do something like this: <td><INPUT type="text" name="txt{{forloop.counter}}" value=""\></td> and then iterate through them in your view.
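An alternative sketch that differs from the answer's per-row numbering: because every row in the question's template reuses the names txt1 and txt2, request.POST.getlist() can pull back all submitted values at once and the view can loop over them (illustrative code, not the answer's own):

def save_form(request):
    template = "template.html"
    context = {'txt1': "Name", 'txt2': 0}
    if request.method == 'POST':
        names = request.POST.getlist("txt1")  # every value posted under name="txt1"
        ages = request.POST.getlist("txt2")   # every value posted under name="txt2"
        for dname, dage in zip(names, ages):
            if dname:                         # skip empty rows
                Names.objects.create(name=dname, age=dage)
    return render(request, template, context)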
{ "pile_set_name": "StackExchange" }
Q: unexpected TOKBEGIN, expecting AFFECT or SEMICOLON I'm new to vhdl i've written the code for 12-bit binary counter and i'm getting this error (unexpected TOKBEGIN, expecting AFFECT or SEMICOLON). Kindly guide me to resolve this error library IEEE; use IEEE.STD_LOGIC_1164.ALL; -- Uncomment the following library declaration if using -- arithmetic functions with Signed or Unsigned values use IEEE.NUMERIC_STD.ALL; -- Uncomment the following library declaration if instantiating -- any Xilinx primitives in this code. --library UNISIM; --use UNISIM.VComponents.all; entity bin_count is Port ( clk : in STD_LOGIC; reset : in STD_LOGIC; seq : out STD_LOGIC_VECTOR (11 downto 0)); end bin_count; architecture Behavioral of bin_count is signal ff, ff_next, max_pulse : std_logic_vector(11 downto 0) begin process(clk,reset) begin if(reset = '1') then ff <= "000000000000" elsif( rising_edge(clk)) then ff <= ff_next end if end process; ff_next <= ff + 1; max_pulse <= '1' when ff = "111111111111" else 0; seq<= ff end end Behavioral; error is; ERROR:HDLParsers:164 - "C:/.Xilinx/New folder/bin_count/bin_count.vhd" Line 39. parse error, unexpected TOKBEGIN, expecting AFFECT or SEMICOLON A: Your profile shows you haven't take the Tour (found under Help). The Tour suggests searching the site before asking a question. Searching in this case doesn't find a single answered question for ERROR:HDLParsers:164 also mentioning an expected semicolon. In VHDL a semicolon is a separator for statements and declarations. The use is so basic that you can't find the requirement exhibited anywhere else in the LRM but the Extended BNF which addresses how to parse a VHDL syntax. That you missed a lot of semicolons likely means no mention of the use of separators is presented where you were introduced to VHDL. Errors in your example code In addition to 5 missing semicolons there is no visible "+" operator (ff is not an unsigned or signed value - use package numeric_std_unsigned instead of numeric_std or use type conversions). The assignment to max_plus choices are not expressions of values of std_logic_vector's subtype max_plus ("111111111111" and "0000000000000" instead of '1' and 0). There's an errant (extra) end statement (also without a semicolon). The first error shows up on line 26 of your example, there is no line 39. You could have pointed out the location. architecture behavioral of bin_count is signal ff, ff_next, max_pulse: std_logic_vector(11 downto 0); -- missing semicolon begin process(clk, reset) begin if reset = '1' then ff <= "000000000000"; -- missing semicolon elsif rising_edge(clk) then ff <= ff_next; -- missing semicolon end if; -- missing semicolon end process; -- ff_next <= ff + 1; -- no visible operator "+" ff_next <= std_logic_vector(unsigned(ff) + 1); -- type converts ff to unsigned -- and the result back -- or in the context clause use numeric_std_unsigned: -- use ieee.numeric_std_unsigned.all; -- makes "+" visible -- instead of numeric_std; -- -- and here: -- ff_next <= ff + 1; -- abstract literal 0 and enumeration literal '1' are not values -- of std_logic_vector: -- max_pulse <= '1' when ff = "111111111111" else -- '0'; -- use string literal instead: max_pulse <= "111111111111" when ff = "111111111111" else "000000000000"; -- (and there are other expressions available) seq <= ff; -- missing semicolon -- end -- errant end statement - this doesn't match anything. end behavioral; And with the mentioned changes your code analyzes. There are some other changes that can be made. 
max_pulse doesn't need to be a std_logic_vector, it can be a std_logic. (This would allow the use of '1' and '0' as the values assigned in the conditional assignment statement. There is no need for ff_next. These give us a slightly different architecture: architecture foo of bin_count is signal ff: std_logic_vector(11 downto 0); signal max_pulse: std_logic; begin process(clk, reset) begin if reset = '1' then ff <= (others => '0'); -- an aggregate elsif rising_edge(clk) then ff <= std_logic_vector(unsigned(ff) + 1); end if; end process; max_pulse <= '1' when ff = "111111111111" else '0'; seq <= ff; end architecture; With a testbench: library ieee; use ieee.std_logic_1164.all; entity bin_count_tb is end entity; architecture fum of bin_count_tb is signal clk: std_logic := '0'; -- default value signal reset: std_logic; signal seq: std_logic_vector (11 downto 0); begin CLOCK: process begin wait for 5 ns; -- half the clock period clk <= not clk; if now > 85000 ns then -- now returns simulation time wait; end if; end process; DUT: entity work.bin_count port map ( clk => clk, reset => reset, seq => seq ); STIMULUS: process begin reset <= '1'; wait for 11 ns; reset <= '0'; wait; end process; end architecture; We can see the reset: (clickable) We can see max_pulse occurs periodically: (clickable) And we can see max_pulse occurs when seq = "111111111111": (clickable)
{ "pile_set_name": "StackExchange" }
Q: spring:batch listener issue I know this is probably a pretty simple fix but for some reason I'm not able to find anything on google. I created a Listener that will take a jobParameter, but for some reason it's not working and I'm not sure what I need to add to my code. It says I need a ref, but what would I need to reference since everything is right there <step id="idOfJob" next="nextJob"> <tasklet> <listeners> <listener> <beans:bean class="class.class.Class" scope="step"> <beans:property name="property" value="#{jobParameters['input']}'" /> </beans:bean> </listener> </listeners> </tasklet> </step> A: Per Spring Batch's XSD, the <listener> element doesn't support inline bean definitions. You need to define it as an external bean and then use a ref as follows: <step id="idOfJob" next="nextJob"> <tasklet ref="myTasklet"> <listeners> <listener ref="myListener"/> </listeners> </tasklet> </step> <beans:bean id="myListener" class="class.class.Class" scope="step"> <beans:property name="property" value="#{jobParameters['input']}'" /> </beans:bean> <beans:bean id="myTasklet" class="class.class.MyTasklet"/>
{ "pile_set_name": "StackExchange" }
Q: iOS Web page errors over Cellular Data but not over Wifi? Recent change to AT&T Cellular network? I'm encountering an issue with certain iOS web pages (in both mobile Safari, Chrome, and in also iOS Webviews in app) over cellular data vs. Wifi, The issue is identical to what was previously posted by someone else here: Mobile Safari Cellular Only Loading Error Unfortunately no answers yet posted to the above URL. Basically, I'm consistently seeing extraneous random garbage characters in the HTML that comes down from cellular data, but the same page loading perfectly okay via Wifi. This isn't a download speed or poor connection issue, it seems to be some inexplicable data transfer/interpretation malfunction over the cellular network. I've been able to replicate the same problem at different locations and with different devices. An example of a page that loads okay with Wifi but loads with errors (JavaScript and CSS errors because of the aforementioned extraneous garbage characters) over data is here: http://www.ear-say.com Has anyone else encountered the same issue? Any insights greatly appreciated. A: The content-type wasn't the issue, but that got me thinking more about how the pages were possibly being transformed between my server, the network, and ultimately the client. After some trial and error I eliminated JQuery references, after which the pages then loaded correctly over AT&T celullar data. That led to another Google search, and ultimately the answer to the problem as per the below URLs: http://bugs.jquery.com/ticket/8917 ... The above JQuery bug report referenced fixes at these two URLs, one of which was actually from stackoverflow: http://mobiforge.com/design-development/setting-http-headers-advise-transcoding-proxies Web site exhibits JavaScript error on iPad / iPhone under 3G but not under WiFi In summary, the issue is with a recent change at the AT&T cellular data network, similar to that described in the above URLs. i.e. AT&T is in some way modifying certain web content before sending it on to iPhones and iPads. The fix is simple, just set the Cache-Control "no-transform" header for pages you don't want changed/transformed by the AT&T network. I'm manually setting the header in PHP for select pages via: header("Cache-Control: no-transform"); .... but I assume it could be globally set in a directory's .htaccess file, or for the domain in the virtual host file or the entire server in the httpd.conf file, e.g.: Header set Cache-Control "no-transform" I don't know how setting "no-transform" will effect performance, I'm far from an expert on Apache configuration settings or networks, but the above has at least for now solved the initial problem.
{ "pile_set_name": "StackExchange" }
Q: Identifying 60's vintage PE Ebner amp components: are those capacitors or resistors in the picture? In the picture there are: 1) 2 brown ones with (silver, black, yellow) bands 2) 2 beige ones with (silver, red, red, red) 3) 2 beige ones with (silver, red, red, orange) 4) 1 beige one with (silver, yellow, red, brown) 5) 2 black ones with (silver, yellow, purple, yellow) 6) 2 small silvery ones with 10?? (and also 25V that cant be seen on the picture) written on them (connected to the triple red silver ones). 7) two yellow caps with 47000pF 10% What are the 1) - 6) ones? Are they caps? Resistors? A: They have Brown, black and yellow bands that tell me the value is 100 kohm. The silver denotes the tolerance. Maybe the brown band has become faded over the years and has virtually merged into the background brown colour. Red, red and red denote 2.2 kohm Red, red and orange denote 22 kohm Brown, red and yellow is 120 kohm Yellow, violet and yellow are 470 kohm The small silvery ones look like polystyrene capacitors to me The polystyrene capacitors look like this in more detail: - This one is coded in value as 682 and means 6800 pF i.e. the "2" represents two trailing zeroes applied after the 68.
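The arithmetic behind those readings is simply two significant digits followed by a multiplier band; a small illustrative Python snippet (not part of the answer) that reproduces the values above:

digits = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}

def resistor_ohms(band1, band2, multiplier):
    # value = two significant digits followed by 'multiplier' zeroes
    return (10 * digits[band1] + digits[band2]) * 10 ** digits[multiplier]

print(resistor_ohms("brown", "black", "yellow"))    # 100000 -> 100 kohm
print(resistor_ohms("red", "red", "red"))           # 2200   -> 2.2 kohm
print(resistor_ohms("red", "red", "orange"))        # 22000  -> 22 kohm
print(resistor_ohms("brown", "red", "yellow"))      # 120000 -> 120 kohm
print(resistor_ohms("yellow", "violet", "yellow"))  # 470000 -> 470 kohm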
{ "pile_set_name": "StackExchange" }
Q: Why can't I multiply a decimal by an integer on python I have been trying to make a height calculator that converts cm to feet, here is my code: print "how tall are you (in cm)?" cm = raw_input() answer = cm*int(0.03280839) print answer I know it's pretty simple but I have only just started so any tips on what is wrong would be great. Thanks in advance. A: cm is a string. You need to convert it to an number (integer or float) first. (You don't want to convert your conversion factor to an integer, since the result would just be multiplying the height by 0.) print "how tall are you (in cm)?" cm = raw_input() answer = int(cm)*0.03280839 print answer This just might be me, but 0.03280839 didn't strike me as an obvious conversion factor, while 2.54 (cm/inches) was much more recognizable. You can let Python do the work of converting cm to feet for you--convert to inches first, then convert to fee--for more readable code: answer = int(cm) * 2.54 / 12 Or better yet, define a constant to use in place of the "magic" number. FEET_PER_CM = 2.54 / 12 # 2.54 cm/inch ÷ 12 in/foot # ... answer = int(cm) * FEET_PER_CM A: Firstly, when you use raw_input(), it would return you a string. So you have to convert it to integer using int(cm) Secondly, int(0.03280839) would round the number and return 0. Why are you using int on that? just cut down the int from there. So the finished code should be: print "how tall are you (in cm)?" cm = raw_input() answer = int(cm) * 0.03280839 # use int(cm) to convert the string to integer print answer
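An illustrative variant (not from the answer) that also accepts decimal input such as 172.5 by converting with float() instead of int():

print "how tall are you (in cm)?"
cm = raw_input()
answer = float(cm) * 0.03280839  # conversion factor taken from the question
print answer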
{ "pile_set_name": "StackExchange" }
Q: What are these targets in node for? I have the following in my package json that is responsible for building and running the Angular 2 application. May I know the meaning of each of the flags in the targets ? This for deploying an Angular 2 application in IBM Bluemix "build": "rimraf dist && webpack --progress --profile --bail", "start": "tsc && concurrently \"tsc -w\" \"lite-server\" " A: The answer to your questions are located in the documentation for each of the node js libraries: https://www.npmjs.com/package/rimraf https://www.npmjs.com/package/concurrently https://www.npmjs.com/package/tsc https://www.npmjs.com/package/webpack https://www.npmjs.com/package/lite-server The build script is deleting the dist folder, then building your app with webpack. The start script is compiling your typescript and then running the typescript compiler in watch mode and then starting lite-server concurrently.
{ "pile_set_name": "StackExchange" }
Q: How can I exclude rows from a pandas dataframe depending on conditions across all columns I have a df: population plot1 plot2 plot3 plot4 0 Population1 Species1 Species1 Species2 Species2 1 Population2 Species4 Species2 Species3 Species4 2 Population3 Species1 Species2 Species1 Species2 3 Population4 Species4 Species4 Species4 Species4 4 Population5 Species2 Species2 Species4 Species2 5 Population6 Species4 Species3 Species3 Species4 6 Population7 Species3 Species4 Species1 Species3 7 Population8 Species4 Species4 Species4 Species4 8 Population9 Species3 Species4 Species2 Species3 9 Population10 Species1 Species3 Species2 Species4 10 Population11 Species2 Species4 Species2 Species4 I want to create a new dataframe with all rows (populations) in which Species4 occurs more than once are removed. I've tried several ways using .value_counts() but can't work out a way to apply it across the entire dataframe at once, rather than just by simply looping thru all rows (which takes a long time on the large dataset I have). So, I tried: dat.drop(dat.value_counts()['Species4'] > 1) but .value_counts() cannot be applied to the entire df. A: Using pandas.DataFrame.eq: new_df = df[df.eq('Species4').sum(1).le(1)] # or new_df = df[~df.eq('Species4').sum(1).gt(1)] print(new_df) Output: population plot1 plot2 plot3 plot4 0 Population1 Species1 Species1 Species2 Species2 2 Population3 Species1 Species2 Species1 Species2 4 Population5 Species2 Species2 Species4 Species2 6 Population7 Species3 Species4 Species1 Species3 8 Population9 Species3 Species4 Species2 Species3 9 Population10 Species1 Species3 Species2 Species4
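If you specifically want the value_counts() route the question attempted, a slower row-wise variant (an alternative to the answer's .eq approach, shown only for illustration) is:

mask = df.apply(lambda row: row.value_counts().get('Species4', 0), axis=1) <= 1
new_df = df[mask]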
{ "pile_set_name": "StackExchange" }
Q: Makefile Os detection and ifeq not getting triggered I'm struggling to get my ifeq condition to be triggered. I know my makefile probably is looking silly on all parts anyway, but that is why I am here, to ask. My Makefile condition is as follows: COMPILER = g++ TARGET_WIN32 = engine.exe SOURCES_WIN32 = main.cpp os_win32.cpp FLAGS_WIN32 = -mwindows TARGET_LINUX = engine SOURCES_LINUX = main.cpp os_linux.cpp FLAGS_LINUX = -lX11 ifeq ( $(OS), Windows_NT) TARGET = $(TARGET_WIN32) SOURCES = $(SOURCES_WIN32) FLAGS = $(FLAGS_WIN32) else TARGET = $(TARGET_LINUX) SOURCES = $(SOURCES_LINUX) FLAGS = $(FLAGS_LINUX) endif all: @echo $(OS) $(COMPILER) -o $(TARGET) $(SOURCES) $(FLAGS) A: Make is very sensitive for spaces :-) Your line: ifeq ( $(OS), Windows_NT) must be: ifeq ($(OS),Windows_NT)
{ "pile_set_name": "StackExchange" }
Q: Why does floor(Infinity) give large integer with opposite sign? The following Fortran program: program test double precision :: inf, one, zero one = 1.d0 zero = 0.d0 inf = one/zero write(6,*) floor( inf) write(6,*) floor(-inf) end program test compiled with gfortran test.f95 prints this: -2147483648 2147483647 I understand why these values, they are the maximum (or minimum) values of a 4-byte integer, which is the default in gfortran, and floor returns an integer. What I don't get is their sign... Mathematically, a number line would be: -Infinity -2147483648 0 2147483647 Infinity ---------------------------------------------------------> But apparently my program sees it the other way around. Mathematically speaking, floor(inf) should return (after type conversion) 2147483647 and floor(-inf) should return -2147483648. What causes the change of sign? What convention does gfortran use to make this odd result? Is there a convention in Fortran more generally? Furthermore, other functions that do type conversion return strange (for me, at least) values: write(6,*) floor( inf), nint( inf), ceiling( inf) write(6,*) floor(-inf), nint(-inf), ceiling(-inf) prints: -2147483648 0 -2147483647 2147483647 0 -2147483648 Update: After agentp's comment I found out that this doesn't happen to infinities only. Any large number (larger than 2^32) will give the same results. And to make things even stranger, this program: program test real :: big big = 2.d0**32-1000 print*, big write(6,*) floor( big), nint( big), ceiling( big) write(6,*) floor(-big), nint(-big), ceiling(-big) end program test returns this quite bizarre result: -2147483648 -1024 -2147483647 2147483647 1024 -2147483648 Just out of curiosity I checked and this C program: #include <stdio.h> #include <math.h> int main() { float one = 1; float zero = 0; float inf = one/zero; int iflr = floor(inf); int irnd = round(inf); int icei = ceil(inf); printf("%d - %d - %d\n",iflr,irnd,icei); iflr = floor(-inf); irnd = round(-inf); icei = ceil(-inf); printf("%d - %d - %d\n",iflr,irnd,icei); } compiled with gcc test.c -lm and it prints -2147483648 for all cases. A: Mathematically floor(inf) and floor(-inf) should return inf and -inf because changing an infinite number by a scalar is still infinite. Interestingly, iFort prints -2147483648 -2147483648 as opposed to what the gfortran results were. Additionally, your second test program compiled with iFort prints: -2147483648 -2147483648 -2147483648 -2147483648 -2147483648 -2147483648 The real issue here is that floor() specifically returns an integer and there are no denormalized integers and certainly no special bit patterns that can store values such as inf or NAN in an integer. With no such specifications, its programmer beware.
{ "pile_set_name": "StackExchange" }
Q: reliable way to find the location devenv.exe of Visual Studio 2017 I need to run scripts that build a visual studio solutions using devenv.exe (or devenv.com for that matter). For visual studio 2015 there was an environment variable %VS140COMNTOOLS% that I could use to find the install location of devenv. Since there is no %VS150COMNTOOLS% for Visual Studio 2017, what would be a reliable way to find the install location of devenv in a script (bat or powershell). A: One way is to use power shell and vswhere.exe. But I'm bit lazy to install new tools and ... I was trying to find simpler solution and found it from registry - there exists registry key HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\SxS\VS7, which lists all Visual studio installations. One of limitations mentioned in this link: https://developercommunity.visualstudio.com/content/problem/2813/cant-find-registry-entries-for-visual-studio-2017.html If there is more than one edition of 2017 installed, then it seems the last one installed will have their path in this key. But typically you install only one visual studio for build or use purpose. Also I've coded this sample from 64-bit machine perspective, I think Wow6432Node does not exits in 32-bit machines, but really - how many developers use 32-bit machines nowadays ? So if you're fine with limitations above, here is a simple batch which can query visual studio installation path: test.bat : @echo off setlocal call:vs%1 2>nul if "%n%" == "" ( echo Visual studio is not supported. exit /b ) for /f "tokens=1,2*" %%a in ('reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\SxS\VS7" /v "%n%.0" 2^>nul') do set "VSPATH=%%c" if "%VSPATH%" == "" ( echo Visual studio %1 is not installed on this machine exit /b ) echo Visual studio %1 path is "%VSPATH%" endlocal & exit /b :vs2017 set /a "n=%n%+1" :vs2015 set /a "n=%n%+2" :vs2013 set /a "n=%n%+1" :vs2012 set /a "n=%n%+1" :vs2010 set /a "n=%n%+10" exit /b Can be executed like this: >test 2010 Visual studio 2010 path is "C:\Program Files (x86)\Microsoft Visual Studio 10.0\" >test 2012 Visual studio 2012 path is "C:\Program Files (x86)\Microsoft Visual Studio 11.0\" >test 2013 Visual studio 2013 path is "C:\Program Files (x86)\Microsoft Visual Studio 12.0\" >test 2014 Visual studio is not supported. >test 2015 Visual studio 2015 path is "C:\Program Files (x86)\Microsoft Visual Studio 14.0\" >test 2017 Visual studio 2017 path is "C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\" A: You can use vswhere.exe or powershell to find your Visual Studio instances: for /r "usebackq tokens=1* delims=: " %%i in (`vswhere.exe -latest -requires Microsoft.VisualStudio.Workload.NativeDesktop`) do ( if /i "%%i"=="installationPath" set dir=%%j ) and Install-Module VSSetup -Scope CurrentUser Get-VSSetupInstance | Select-VSSetupInstance -Latest -Require Microsoft.VisualStudio.Component.VC.Tools.x86.x64 The path to specific workloads can be found through this api as well. https://blogs.msdn.microsoft.com/vcblog/2017/03/06/finding-the-visual-c-compiler-tools-in-visual-studio-2017/
{ "pile_set_name": "StackExchange" }
Q: Why is resolution 20? I am using an amazing code to send/read infrared pulses without any external library. The code is fine and I took about 1 hour to fully understand it. The only thing I didnt understand is the variable RESOLUTION. I believe it should be 0 not 20. I think it should be as low as possible. You can see the code at: https://learn.adafruit.com/ir-sensor/using-an-ir-sensor or below: /* Raw IR decoder sketch! This sketch/program uses the Arduino and a PNA4602 to decode IR received. This can be used to make a IR receiver (by looking for a particular code) or transmitter (by pulsing an IR LED at ~38KHz for the durations detected Code is public domain, check out www.ladyada.net and adafruit.com for more tutorials! */ // We need to use the 'raw' pin reading methods // because timing is very important here and the digitalRead() // procedure is slower! //uint8_t IRpin = 2; // Digital pin #2 is the same as Pin D2 see // http://arduino.cc/en/Hacking/PinMapping168 for the 'raw' pin mapping #define IRpin_PIN PIND #define IRpin 2 // for MEGA use these! //#define IRpin_PIN PINE //#define IRpin 4 // the maximum pulse we'll listen for - 65 milliseconds is a long time #define MAXPULSE 65000 // what our timing resolution should be, larger is better // as its more 'precise' - but too large and you wont get // accurate timing #define RESOLUTION 20 // we will store up to 100 pulse pairs (this is -a lot-) uint16_t pulses[100][2]; // pair is high and low pulse uint8_t currentpulse = 0; // index for pulses we're storing void setup(void) { Serial.begin(9600); Serial.println("Ready to decode IR!"); } void loop(void) { uint16_t highpulse, lowpulse; // temporary storage timing highpulse = lowpulse = 0; // start out with no pulse length // while (digitalRead(IRpin)) { // this is too slow! while (IRpin_PIN & (1 << IRpin)) { // pin is still HIGH // count off another few microseconds highpulse++; delayMicroseconds(RESOLUTION); // If the pulse is too long, we 'timed out' - either nothing // was received or the code is finished, so print what // we've grabbed so far, and then reset if ((highpulse >= MAXPULSE) && (currentpulse != 0)) { printpulses(); currentpulse=0; return; } } // we didn't time out so lets stash the reading pulses[currentpulse][0] = highpulse; // same as above while (! (IRpin_PIN & _BV(IRpin))) { // pin is still LOW lowpulse++; delayMicroseconds(RESOLUTION); if ((lowpulse >= MAXPULSE) && (currentpulse != 0)) { printpulses(); currentpulse=0; return; } } pulses[currentpulse][1] = lowpulse; // we read one high-low pulse successfully, continue! currentpulse++; } void printpulses(void) { Serial.println("\n\r\n\rReceived: \n\rOFF \tON"); for (uint8_t i = 0; i < currentpulse; i++) { Serial.print(pulses * RESOLUTION, DEC); Serial.print(" usec, "); Serial.print(pulses[1] * RESOLUTION, DEC); Serial.println(" usec"); } // print it in a 'array' format Serial.println("int IRsignal[] = {"); Serial.println("// ON, OFF (in 10's of microseconds)"); for (uint8_t i = 0; i < currentpulse-1; i++) { Serial.print("\t"); // tab Serial.print(pulses[1] * RESOLUTION / 10, DEC); Serial.print(", "); Serial.print(pulses[i+1][0] * RESOLUTION / 10, DEC); Serial.println(","); } Serial.print("\t"); // tab Serial.print(pulses[currentpulse-1][1] * RESOLUTION / 10, DEC); Serial.print(", 0};"); } A: Most IR remotes send pulses that are at least 400 micro seconds long. E.g. on a NEC IR remote you get pulses that are 560 or 2240 microseconds long (depending on whether it's sending a 1 or a 0). 
So with a RESOLUTION of 20 you get at least 20 'pulse counts' for a 0 and 80 for a 1. Because remotes aren't that precise, you'll get somewhere between e.g. 18 and 22 or 78 and 82. Those values are different enough to easily detect whether a 0 or 1 was sent. Changing this RESOLUTION to e.g. 2 will give you 200 or 800 pulse counts. You'll get more precise timings from a still imprecise source, but this better precision is useless. PS: setting it to 0 could break the code, as the highpulse and lowpulse are only 16-bit integers, so they will overflow after 65536, which will happen pretty fast. Hope that helps somewhat. Just let me know if anything is still unclear.
{ "pile_set_name": "StackExchange" }
Q: Ruby on Rails: How do I recursively substitute my params? So, I really don't want any nulls passed into my server, because it destroys IE when rendered. I think a before filter in teh ApplicationController would do the trick. I kinda want to do something like params.gsub(/\000/,"") but since params is a hash, that won't work. What is the shortest way to do this? A: Something like this should work: def recursive_gsub(search, replace, value) case value when String value.gsub!(search, replace) when Array,Hash value.each{|v| recursive_gsub(search, replace, v)} end end Then recursive_gsub(/\000/,"",params) should work. You could even add this method to Hash if you want something prettier like params.recursive_gsub!(/\000/,"").
{ "pile_set_name": "StackExchange" }
Q: Understanding $\liminf$ and $\limsup$ Let $\left\{ x_{n}\right\} _{n}$ be a sequence. Define $E_0=\{r\in\mathbb{R}:\lim_{k\rightarrow\infty} x_{n_{k}}=r \text{ for some subsequence } \{x_{n_k}\}_k\}$. If $\left\{ x_{n}\right\} _{n}$ has a subsequence converging to $\pm \infty$ we add this to $E_0$ to obtain $E$. So, $E=\{x\in\mathbb{R}\cup\{\pm \infty\}:\lim _{k\rightarrow \infty }x_{n_{k}}=x\}$. Remark. If $\left\{ x_{n_{k}}\right\} _{k}$ is increasing then $\sup_{k} x_{n_{k}}\in E$, if not $\inf_{k} x_{n_{k}}\in E$. I couldn't understand this remark; can you explain it more clearly? A: So $\{x_{n_k}\}_k$ is increasing. If this sequence is bounded, we know it will converge to a number $M$; otherwise, it will go to infinity. Either way, we could say $\sup_k x_{n_k}=\lim_{k\rightarrow \infty}x_{n_k}=x,\,x\in \mathbb R \cup\{-\infty,+\infty\}$. Similar logic could be applied to decreasing subsequences and $\inf$. However, I do not think the "if not" in the remark would be correct - not increasing does not mean decreasing. If it is oscillating, $\inf$ always exists in $\mathbb R \cup\{-\infty,+\infty\}$, but $\lim$ does not necessarily exist.
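A concrete example (added for illustration, not part of the original exchange): for $x_n=(-1)^n\left(1-\frac1n\right)$ the even-indexed subsequence $x_{2k}=1-\frac{1}{2k}$ is increasing with $\sup_k x_{2k}=1=\lim_k x_{2k}\in E$, while the odd-indexed subsequence $x_{2k+1}=-1+\frac{1}{2k+1}$ is decreasing with $\inf_k x_{2k+1}=-1=\lim_k x_{2k+1}\in E$, so $E=\{-1,1\}$.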
{ "pile_set_name": "StackExchange" }
Q: Correlations, Scatter Plots and P-Value I have a set of data, after questioning customers (it's about a shoe company). Two of the columns include GENDER and INCOME. I am supposed to test if there are any significant differences in income between genders, and give the corresponding P-value. I'm still a n00b when it comes to R, I'm still learning and I've been struggling for 3 days now to find the functions to do so. Does anyone have any lead, or could help me with it? Would be awesome. A: I am editing this because I realized my other answer was not correct. What you want is a linear model. Say GENDER <- factor(c(0,1,1,0,1)) INCOME <- c(20000,30000,40000,50000,550000) then you want model <- lm(INCOME~GENDER) and summary(model) anova(model) will give you the information you are after. Good luck, Bryan
{ "pile_set_name": "StackExchange" }
Q: What does http and https module do in Node? Can someone help me in understanding what does http and https module do in Express? I was going through the following docs on w3schools From definition it says Node.js has a built-in module called HTTP, which allows Node.js to transfer data over the Hyper Text Transfer Protocol (HTTP). With following example var http = require('http'); //create a server object: http.createServer(function (req, res) { res.write('Hello World!'); //write a response to the client res.end(); //end the response }).listen(8080); //the server object listens on port 8080 This is the example to live demo First, I am unable to comprehend their example like Where are they making (route) request so that they are receiving response? Second by the definition, to make a request, using libraries like axios can be alternative? third, when we make an api request, isn't the data transferred over http/https? app.post("/", (req, res) => { In short, Can someone please explain me in more human words the use of http package in express? Update: I might be confusing this with express, I am used to using express and here we aren't using express A: 1- They aren't defining any route. That piece of code only creates a server running on port 8080 that when it's created or accessed on the home route (/) returns "Hello World". If you want to define routes you should take a closer look to a module called express that it's used by most of node users due to its simplicity and documentation (https://expressjs.com/en/starter/hello-world.html) In that link you have an example for creating the server and a basic route 2- Yes it can and should be because they are way better than the default from nodeJs. Take a look at axios or superagent, superagent it's better if you want to use formdata to send images or attachments. 3- By default, all servers created using http or express are http servers (don't have a certificate to encrypt the data so they aren't secure). If you want a https server, you can buy certificates or use https://letsencrypt.org/ this module that generates free SSL certificates with 1 month validation. http module has multiple functions, it can be used to create a server, to make http requests and so on. It's up to you to decide which submodule from the package you want to use. Express is built over the http module making everything easier. If you need more explanation, tell me and I will try to explain a little better.
{ "pile_set_name": "StackExchange" }
Q: Is medical insurance mandatory for entering the Schengen area if you are not required to have a visa? For nationals that request the Schengen visa, the traveller must have insurance that covers, for a minimum of €30,000, any expenses incurred as a result of emergency medical treatment or repatriation for health reasons. But what about citizens of 'Annex II' countries and territories (Japan, Brazil, etc)? Are they also required to have such insurance? A: Travel health insurance is not mandatory for people who do not need a visa to enter the Schengen area (including people from annex II countries like Brazil and people from other countries who hold a residence permit from a Schengen country). The travel medical insurance requirement is defined in article 15 of the Schengen Visa code and then mentioned again in article 21 on the entry conditions that must be fulfilled to issue a visa. It's also one of the reasons to refuse a visa mentioned on the standard refusal form. By contrast, it is nowhere to be found in the Schengen Borders code. In particular, article 5 of this code, on “Entry conditions for third-country nationals” mentions most of the conditions listed in article 21 of the visa code (valid travel document, purpose of stay, financial means…), except travel insurance. Lack of travel insurance is also absent from the standard refusal form in annex V. The Borders Code is the regulation that applies to third-country nationals who don't need a visa and defines all the requirements that apply to them. This regulation is binding for all Schengen countries, whatever their embassies might have to say about it. There is therefore no legal basis for border guards or anyone else to require insurance from visa-exempt travellers. Some comments mention the fact that Slovakia apparently requires travel health insurance from visa-exempt short-term visitors and that this “requirement” even made it to the TIMATIC database airlines use to find out about entry rules in various situations. I would not be surprised if border guards there actually ask to see some proof of insurance but that's clearly illegal. Neither Slovakia as a whole nor individual border guards are free to make up their own rules. Of course, being insured is the easiest way to avoid unpleasant discussions and can be beneficial for other reasons but it's not a legal requirement for a visa-free short-stay in the Schengen area. Note that Schengen visa holders do need valid travel insurance every time they enter the Schengen area. The reason for that is that the visa requirement itself is included in article 5 of the Borders code and a Schengen visa can be revoked at any time if the conditions for issuing it are not met anymore, thus making all the requirements of the Visa code relevant for entry in this case. That's particularly relevant for multiple-entry visa holders who only have to prove they are covered for their first intended trip when applying for their visa. If they show up at the border with a Schengen visa but without insurance (e.g. on a subsequent trip), the border guards should in principle rule that the conditions for issuing the visa are no longer met and revoke it. In practice, border guards do not always check that. A: The French embassy site in Brazil in it's portuguese version states: All foreigners, required or not to obtain a short stay visa, who wish to enter France must have a 30.000 Euros health insurance that covers all Schengen territory." 
Strange that the same requirement is not mentioned in the french version of the same website. The Nederlands embassy site in Brazil in it's portuguese version states: To reduce risks or delays in the border it is advisable to: Hire an international insurance, valid for Europe during the period of stay in the Schengen territory with a coverage of at least €30.000,00 for medical or hospital expenses. Even though it is not mandatory for entrance in the Netherlands, some other countries in the Schengen territory require this insurance. Additionally, I looked a few other (Spain, Germany, Italy) embassies sites in Brazil and none of them mention the health insurance requirement Besides that, it is common sense in Brazil that a Brazilian who wish to travel to Europe, must have this 30.000EUR health insurance. For instance, credit cards like Mastercard or Visa advise that they will give such insurance if the airline tickets are bought using the credit card. A few travel forums and sites like Mochileiros (backpackers) or FalandoDeViagem(talking about travel) also mention the need to have such health insurance. All in all, it's hard to say if the health insurance is really mandatory or no. Evidences suggest it is not for brazilians, besides the common sense, but I just can't be sure.
{ "pile_set_name": "StackExchange" }
Q: MSP430G2553 UART Baudrate and Control Registers. Can I use word access? Is it possible to program MSP430G2553 UART registers with word access or are the internal peripherals only byte wide and therefore only byte accessible? (I know that MCTL is only byte wide on this device.) A: The 2xx family User's Guide says in section 1.4.3: The address space from 010h to 0FFh is reserved for 8-bit peripheral modules. These modules should be accessed with byte instructions. Read access of byte modules using word instructions results in unpredictable data in the high byte. If word data is written to a byte module only the low byte is written into the peripheral register, ignoring the high byte.
{ "pile_set_name": "StackExchange" }
Q: How do I force a Windows Forms C# application to ignore when a user choose 125% or 150% for the OS font size? I need a quick way of forcing my C# Windows Forms application to not scale fonts when a user choose a larger or smaller percentage in the OS settings. Is this even possible? A: Here is what worked for me... this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.None; this.Font = new System.Drawing.Font("Arial", 14F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Pixel, ((byte)(0))); The above two lines are copied from my BaseForm.Designer.cs file, but basically I found two easy steps to get "no font scaling": Set AutoScaleMode to None. Use "Pixel" as the Unit Type for all Fonts, instead of the default Point value. As far as if you should let Windows scale your fonts or not, that's up to you. I for one don't like the design, so if I feel my application needs to be scaled, I'll do it myself, with my own options and design. Over the years of speaking with actual end-users, I've also found that most of them have no idea about DPI settings, and if they have anything other than the default set, it wasn't because they wanted it that way... and they just never noticed because all they use is the web browser and maybe Excel and Microsoft Word (which use whatever font they set it to). If my application had respected the system font settings, they wouldn't have liked it as much == less sales, because it would have had this huge ugly font like the system dialogs do (and they don't know how to change it, but they don't care about system dialogs they never use). A: The problem is that the Font property of a Form or a control specifies the font size in Points. That's a measurement that affect the height of the letters when the DPI setting changes. One point is 1/72 inches. The default DPI, 96 dots per inch and a font size of 9 points yields a letter that is 9 / 72 x 96 = 12 pixels high. When the user bumps up the DPI setting to, say, 120 DPI (125%) then the letter becomes 9 / 72 x 120 = 15 pixels high. If you don't let the control get larger then the text won't fit in the control anymore. Very ugly to look at. The Form.AutoScaleMode property solves this problem. It checks at which size the form was designed and compares it against the DPI on the machine on which it runs. And resizes and relocates the controls to ensure this kind of clipping won't happen. Very useful, it is completely automatic without you having to do anything about it. The typical problem is the "relocates" bit in the previous paragraph. If you give controls their own font size instead of inheriting the size of the form or if the automatic layout of the form isn't kosher then controls may end up in the wrong spot, destroying the organized look of the form. You need to fix that, it isn't clear from your question what the source of the problem might be. Trying to prevent this auto-scaling from doing its job is not sustainable. You'll have to iterate all of the controls in the form and change their Font, picking a smaller font size. This is however going to get you into trouble a couple of years from now, if not already. Your user is going to complain about having to work with a postage stamp. 
The easiest way to debug the layout problem, avoiding the pain of constantly changing the DPI size, is to temporarily paste this code into your form class: protected override void OnLoad(EventArgs e) { this.Font = new Font(this.Font.FontFamily, this.Font.SizeInPoints * 125 / 96); base.OnLoad(e); } A: I found a pretty easy workaround. this.AutoScaleDimensions = new System.Drawing.SizeF(96F, 96F); this.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Dpi; This will make the text the same size regardless of the autoscale size.
{ "pile_set_name": "StackExchange" }
Q: Pandas resampling with custom volume weighted aggregation I'm trying to do a volume weighted price aggregation based on a 5 second timestep for which I have multiple datapoints. I can get simple mean and sum aggregations for individual fields by passing a dict of aggregation types. However, to generate a volume weighted aggregation I need to use both the pricing and volume fields to generate this for each step. TS P Q D 2018-01-01 00:00:00 1514764800 1673574.0 0.164012 2018-01-01 00:00:00 1514764800 1673954.0 0.006000 2018-01-01 00:00:00 1514764800 1673967.0 0.005808 2018-01-01 00:00:00 1514764800 1673949.0 0.040000 2018-01-01 00:00:00 1514764800 1673573.0 0.159234 2018-01-01 00:00:00 1514764800 1673569.0 0.007000 2018-01-01 00:00:00 1514764800 1673949.0 0.100000 2018-01-01 00:00:00 1514764800 1673569.0 0.008000 2018-01-01 00:00:00 1514764800 1673949.0 0.033000 2018-01-01 00:00:00 1514764800 1673346.0 0.033000 2018-01-01 00:00:01 1514764801 1673967.0 0.212200 2018-01-01 00:00:02 1514764802 1673954.0 0.006765 2018-01-01 00:00:03 1514764803 1673950.0 0.012000 2018-01-01 00:00:03 1514764803 1673955.0 0.005700 2018-01-01 00:00:03 1514764803 1673642.0 0.031197 2018-01-01 00:00:03 1514764803 1673949.0 0.067654 The volume weighting formula should simply be a cumulative sum of quantity x price divided by the total quantity for the period. Is there a way to do this with a custom aggregation using both the price and quantity series to return a VWAP? Any suggestions are appreciated, thanks! A: Using .apply you can write any custom aggregation function you want. def vwap(data): return (data.P * data.Q).sum() / data.Q.sum() When using a grouper, you can apply it like this: df.groupby(pd.Grouper(freq="5s")).apply(vwap) With resampling, .apply can be used as well: df.resample("5s").apply(vwap)
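A rough, self-contained usage sketch built around the answer's Grouper variant (the column names P and Q and the 5-second window come from the question; the few sample rows are made up):

import pandas as pd

def vwap(data):
    # volume-weighted average price of one 5-second bucket
    return (data.P * data.Q).sum() / data.Q.sum()

idx = pd.to_datetime(["2018-01-01 00:00:00", "2018-01-01 00:00:01",
                      "2018-01-01 00:00:03", "2018-01-01 00:00:06"])
df = pd.DataFrame({"P": [1673574.0, 1673967.0, 1673950.0, 1673955.0],
                   "Q": [0.164, 0.212, 0.012, 0.0057]}, index=idx)

print(df.groupby(pd.Grouper(freq="5s")).apply(vwap))  # one VWAP per 5-second window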
{ "pile_set_name": "StackExchange" }
Q: Create DropDownListFor using strings from a List I feel this should be simple but haven't found a guide on here that explains the use of dropdownlistfor in MVC. I have a simple List of Names in a method in a class Users: public List<string> getUsersFullNames() { return (from da in db.Dat_Account join ra in db.Ref_Account on da.AccountID equals ra.AccountID select ra.FirstName + " " + ra.Surname).ToList(); } I want to display each of these names in a dropdownlist so that a name can be selected. I tried to get this working but have had no success. My controller: [Authorize] public ActionResult ManageUserAccounts() { ViewBag.UserList = oUsers.getUsersFullNames(); return View(); } My Model: public class ManageUserAccountsViewModel { [Display(Name = "Users")] public List<SelectListItem> UserList { get; set; } } My View: Html.DropDownListFor(model => model.UserList, new SelectList(oUsers.getUsersFullNames(), "Select User")); I'm quite new to asp.net MVC as I have always used webforms in the past. Has anyone any idea if this is possible or a way to display this? Thanks, A: I would recommend using the model directly in the view, instead of the ViewBag. Update your action to include a model reference: public ActionResult ManageUserAccounts() { var model = new ManageUserAccountsViewModel(); model.UserList = oUsers.getUsersFullNames(); return View(model); } Your model should be updated to include a selected User property: public class ManageUserAccountsViewModel { public string User { get; set; } [Display(Name = "Users")] public List<string> UserList { get; set; } } Your view should be binding to the model: @model ManageUserAccountsViewModel @Html.DropDownListFor(m => m.User, new SelectList(Model.UserList), "Select User")
{ "pile_set_name": "StackExchange" }
Q: JSP - Netbeans cannot use session bean Solved: if you run in trouble like me doublecheck your constructor definition! mine was for some ninja-releated reason private. Hi everyone, i'm new to NetBeans-JSP programming (i'm quite confident with PHP) hi have this instruction in "doLogin.jsp": ... <jsp:useBean scope="request" id="user" class="minibay.user.LoginBean" /> <jsp:useBean scope="session" id="userSession" class="minibay.user.UserSessBean" /> ... when I run the application and go to the page I recive this error: org.apache.jasper.JasperException: /doLogin.jsp(12,0) The value for the useBean class attribute minibay.user.UserSessBean is invalid. org.apache.jasper.compiler.DefaultErrorHandler.jspError(DefaultErrorHandler.java:40) org.apache.jasper.compiler.ErrorDispatcher.dispatch(ErrorDispatcher.java:407) org.apache.jasper.compiler.ErrorDispatcher.jspError(ErrorDispatcher.java:148) org.apache.jasper.compiler.Generator$GenerateVisitor.visit(Generator.java:1220) org.apache.jasper.compiler.Node$UseBean.accept(Node.java:1178) org.apache.jasper.compiler.Node$Nodes.visit(Node.java:2361) org.apache.jasper.compiler.Node$Visitor.visitBody(Node.java:2411) org.apache.jasper.compiler.Node$Visitor.visit(Node.java:2417) org.apache.jasper.compiler.Node$Root.accept(Node.java:495) org.apache.jasper.compiler.Node$Nodes.visit(Node.java:2361) org.apache.jasper.compiler.Generator.generate(Generator.java:3416) org.apache.jasper.compiler.Compiler.generateJava(Compiler.java:231) org.apache.jasper.compiler.Compiler.compile(Compiler.java:347) org.apache.jasper.compiler.Compiler.compile(Compiler.java:327) org.apache.jasper.compiler.Compiler.compile(Compiler.java:314) org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:589) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:317) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:313) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:260) javax.servlet.http.HttpServlet.service(HttpServlet.java:717) org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:393) Actually, the class is located under "MiniBay/java/minibay/user" (where MiniBay is the project root) I've read in other post that my classes should be located under the "WEB-INF" folder. Actually, ii've tried to move them with no succes. Furthermore, the "user" bean works well, if I remove the second line of the code above I have no problem. Any Idea of how to make it work out? thnks Edit: this is the UserSessBean class definition: package minibay.user; import java.io.Serializable; /** * * @author Alessandro Artoni <[email protected]> */ public class UserSessBean implements Serializable{ private boolean loggedIn; private User user; private UserSessBean(){ } public User getUser() { return user; } public void setUser(User user) { this.user = user; } public boolean isLoggedIn() { return loggedIn; } public void setLoggedIn(boolean loggedIn) { this.loggedIn = loggedIn; } } A: The value for the useBean class attribute minibay.user.UserSessBean is invalid. This boils down that the following has failed miserably: UserSessBean userSession = new UserSessBean(); Does it have an (implicit) public no-arg constructor? It should be there to get construction to work. Also take care that any of (static) initialization blocks runs without throwing runtimeexceptions/errors. You should however have seen that back in the server logs as root cause. 
See also: Javabeans specification - This specifies how Javabean classes should look like
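Since the asker's edit says the root cause was a private constructor, here is the corrected bean for reference: the same class as in the question, with the no-arg constructor made public (a sketch of the fix, not code from the original post):

package minibay.user;

import java.io.Serializable;

public class UserSessBean implements Serializable {

    private boolean loggedIn;
    private User user;

    // <jsp:useBean> instantiates the class reflectively, so this
    // no-argument constructor must be public.
    public UserSessBean() {
    }

    public User getUser() { return user; }
    public void setUser(User user) { this.user = user; }
    public boolean isLoggedIn() { return loggedIn; }
    public void setLoggedIn(boolean loggedIn) { this.loggedIn = loggedIn; }
}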
{ "pile_set_name": "StackExchange" }
Q: Evaluate $f(x)= {\int_{-\infty}^0\ }\frac{x^2}{e^{x^2}}\operatorname d\!x$ How to evaluate this integral: $$f(x)= {\int\limits_{-\infty}^0\ }\frac{x^2}{e^{x^2}}\operatorname d\!x$$ A: As stated in the comments, integrate by parts. Since $(e^{-x^2})'=-2xe^{-x^2}$, you have that $$\begin{align*}f(x)&=\int_{-\infty}^0 x^2e^{-x^2}dx=-\frac{1}{2}\int_{-\infty}^0 x(e^{-x^2})'dx=-\frac{1}{2}\left[xe^{-x^2}\right]_{-\infty}^0+\frac{1}{2}\int_{-\infty}^0 (x)'e^{-x^2}dx=\\&=0+\frac{1}{2}\int_{-\infty}^0 e^{-x^2}dx\end{align*}$$ The last integral can be evaluated in many ways. One probabilistic way is to observe that the integrand is, up to a normalizing constant, the density of a normal distribution with $\mu=0$ and $\sigma=\frac{1}{\sqrt{2}}$. The integration limits cover half of that distribution's domain $(-\infty,+\infty)$, so by the symmetry of the normal density about zero the normalized integral equals $\frac{1}{2}$ instead of $1$. Restoring the constant gives: $$\int_{-\infty}^0 e^{-x^2}dx=\frac{\sqrt{2\pi}}{\sqrt{2}}\int_{-\infty}^0\frac{1}{\sqrt{2\pi}\,\frac{1}{\sqrt{2}}}e^{-\frac{x^2}{2\sigma^2}}dx=\frac{\sqrt{2\pi}}{\sqrt{2}}\cdot\frac{1}{2}=\frac{\sqrt{\pi}}{2}$$ So, in sum, $$f(x)=\int_{-\infty}^0 x^2e^{-x^2}dx=\frac{1}{2}\cdot\frac{\sqrt{\pi}}{2}=\frac{\sqrt{\pi}}{4}$$
{ "pile_set_name": "StackExchange" }
Q: is ko.applyBindings synchronous or asynchronous? Does the generated view exists right after you call ko.applyBindings() or does the scaffolding happen asynchronously? Thanks! A: ko.applyBindings is a synchronous call. There may be cases where bindings have special code to do things in a setTimeout, but this is not generally the case. With the addition of components in Knockout 3.2, components are asynchronous. With Knockout 3.3, there will be an option to render components synchronously if the viewmodel / template is loaded.
{ "pile_set_name": "StackExchange" }
Q: How add image, showing in PagerView, to background? It is necessary download the image (on SDcard) and then just use it? If no, how do this ? I have image store and images shows in PagerView (images load from internet). I need to select a picture from PagerView and put it on the background (wallpaper). I can not add a button in ImagePagerActivity for adding image to background. Error: E/AndroidRuntime(14608): java.lang.RuntimeException: Unable to start activity ComponentInfo{down.load.ascetix/down.load.ascetix.ImagePagerActivity}: java.lang.ClassCastException: down.load.ascetix.ImagePagerActivity. ImagePagerActivity: public class ImagePagerActivity extends BaseActivity { private ViewPager pager; private DisplayImageOptions options; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.ac_image_pager); Bundle bundle = getIntent().getExtras(); String[] imageUrls = bundle.getStringArray(Extra.IMAGES); int pagerPosition = bundle.getInt(Extra.IMAGE_POSITION, 0); options = new DisplayImageOptions.Builder() .showImageForEmptyUri(R.drawable.image_for_empty_url) .cacheOnDisc() .imageScaleType(ImageScaleType.EXACT) .build(); pager = (ViewPager) findViewById(R.id.pager); pager.setAdapter(new ImagePagerAdapter(imageUrls)); pager.setCurrentItem(pagerPosition); } @Override protected void onStop() { imageLoader.stop(); super.onStop(); } private class ImagePagerAdapter extends PagerAdapter { private String[] images; private LayoutInflater inflater; ImagePagerAdapter(String[] images) { this.images = images; inflater = getLayoutInflater(); } public void destroyItem(View container, int position, Object object) { ((ViewPager) container).removeView((View) object); } public void finishUpdate(View container) { } public int getCount() { return images.length; } public Object instantiateItem(View view, int position) { final FrameLayout imageLayout = (FrameLayout) inflater.inflate(R.layout.item_pager_image, null); final ImageView imageView = (ImageView) imageLayout.findViewById(R.id.image); final ProgressBar spinner = (ProgressBar) imageLayout.findViewById(R.id.loading); imageLoader.displayImage(images[position], imageView, options, new ImageLoadingListener() { public void onLoadingStarted() { spinner.setVisibility(View.VISIBLE); } public void onLoadingFailed(FailReason failReason) { String message = null; switch (failReason) { case IO_ERROR: message = "Input/Output error"; break; case OUT_OF_MEMORY: message = "Out Of Memory error"; break; case UNKNOWN: message = "Unknown error"; break; } Toast.makeText(ImagePagerActivity.this, message, Toast.LENGTH_SHORT).show(); spinner.setVisibility(View.GONE); imageView.setImageResource(android.R.drawable.ic_delete); } public void onLoadingComplete() { spinner.setVisibility(View.GONE); Animation anim = AnimationUtils.loadAnimation(ImagePagerActivity.this, R.anim.fade_in); imageView.setAnimation(anim); anim.start(); } public void onLoadingCancelled() { // Do nothing } }); ((ViewPager) view).addView(imageLayout, 0); return imageLayout; } public boolean isViewFromObject(View view, Object object) { return view.equals(object); } public void restoreState(Parcelable state, ClassLoader loader) { } public Parcelable saveState() { return null; } public void startUpdate(View container) { } } } My .xml file: <?xml version="1.0" encoding="utf-8"?> <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:padding="1dip"> <ImageView 
android:id="@+id/image" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:adjustViewBounds="true" /> <ProgressBar android:id="@+id/loading" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="center" android:visibility="gone" /> </FrameLayout> Thanks. A: You can set your background image this way. It is probably easier. The Code: ((ImageView)findViewById(R.id.background)).setBackgroundResource(R.drawable.<your image>); The XML: <ImageView android:id="@+id/background" android:layout_width="fill_parent" android:layout_height="fill_parent"/> For setting background take a look at this: http://www.edumobile.org/android/android-beginner-tutorials/setting-an-image-as-a-wallpaper/
{ "pile_set_name": "StackExchange" }
Q: On usage of “What's in a name?” The most famous quote from Shakespeare's “Romeo and Juliet” appears to have a more recent and different usage according to the McGraw-Hill Dictionary of American Idioms and Phrasal Verbs. What's in a name? Prov. The name of a thing does not matter as much as the quality of the thing. Sue: I want to buy this pair of jeans. Mother: This other pair is much cheaper. Sue: But it doesn't have the designer brand name. Mother: What's in a name? I am not very familiar with the above usage, which appears to suggest a different connotation from the original one, possibly a jocular one, and the McGraw-Hill Dictionary appears to be one of the few sources, if not the only one, to cite this current usage. Questions: Is the above-mentioned use of the famous quote an AmE thing? Given that the original quote is from the 16th century, how recent and how common is this current variant? A: It's exactly the same as the usage in the play. What's in a name? That which we call a rose By any other word would smell as sweet. So Romeo would, were he not Romeo called, Retain that dear perfection which he owes Without that title. "Call a rose something else, it still smells good. If Romeo had a new name, he'd still be just as perfect." ("Owes" in this case means "owns".) Juliet is saying that a thing is good if it's good, not because it has a good name attached to it. Similarly, a pair of jeans is good if it looks good and feels good, not because it has a particular brand name written on it.
{ "pile_set_name": "StackExchange" }
Q: PHP/MySql Debugging an SQL query to add values to a database I have this simple code where I am trying to have 3 fields, first name, last name, and email, tag to mysql database from a website form. When I test the form and hit "register" the error comes up that "An Error Has Occured. The item was not added." How can I debug this, as I'm not sure what point the error is coming in? <html> <head> <title> Nomad - New User Registration Results</title> </head> <body> <h1>Nomad - New User Registration Results</h1> <?php // create short variable names $First_Name=$_POST['First_Name']; $Last_Name=$_POST['Last_Name']; $Email=$_POST['Email']; if(!$First_Name || !$Last_Name || !$Email) { echo "You have not entered all the required details.<br />" ."Please go back and try again."; exit; } @ $db=new mysqli('localhost','nomad_steve','steven','nomad_prod'); if (mysqli_connect_errno()) { echo "Error: Could not connect to database. Get it together Steve!"; exit; } $query = "insert into nomad_prod values (' " .$First_Name.",' " .$Last_Name."',' ".$Email."')"; $result=$db->query($query); if ($result) { echo $db->affected_rows." book inserted into database."; } else { echo "An Error Has Occured. The item was not added."; } $db->close(); ?> </body> </html> A: Try this may solve your problem $query = "insert into Player(first_name,last_name,email) values('" .$First_Name."','".$Last_Name."','".$Email."')"; $result=$db->query($query); I have checked and got it you have to specify your column name where you have to enter your data.
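The accepted fix still interpolates user input directly into the SQL string. A safer variant of the same insert using a mysqli prepared statement; the table and column names (Player / first_name, last_name, email) are taken from the answer and may need adjusting to the real schema:

$stmt = $db->prepare("insert into Player (first_name, last_name, email) values (?, ?, ?)");
if ($stmt === false) {
    echo "Prepare failed: " . $db->error;
} else {
    $stmt->bind_param("sss", $First_Name, $Last_Name, $Email);
    if ($stmt->execute()) {
        echo $stmt->affected_rows . " user inserted into database.";
    } else {
        echo "An error has occurred. The item was not added: " . $stmt->error;
    }
    $stmt->close();
}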
{ "pile_set_name": "StackExchange" }
Q: make Excel cells read-only using EPPLUS I have the following Excel data, in which I want to make the first column and the header row read-only.

CityMaster   POC        HeadCount
Mumbai       Prasad S   2
Delhi        Kishan T   5
Banglore     Shilpa S   7
Chennai      Prasad S   2

The data in the second and third columns should remain editable. A: I tried this and it worked: workSheet.Protection.IsProtected = true; workSheet.Cells[2, 3, pocDeatils.CityMaster.Rows.Count + 1, 4].Style.Locked = false; For more detail, refer to the link below: https://epplus.codeplex.com/SourceControl/latest#SampleApp/Sample6.cs
{ "pile_set_name": "StackExchange" }
Q: Why does numpy.linalg.solve() offer more precise matrix inversions than numpy.linalg.inv()? I do not quite understand why numpy.linalg.solve() gives the more precise answer, whereas numpy.linalg.inv() breaks down somewhat, giving (what I believe are) estimates. For a concrete example, I am solving the equation C^{-1} * d where C denotes a matrix, and d is a vector-array. For the sake of discussion, the dimensions of C are shape (1000,1000) and d is shape (1,1000). numpy.linalg.solve(A, b) solves the equation A*x=b for x, i.e. x = A^{-1} * b. Therefore, I could either solve this equation by (1) inverse = numpy.linalg.inv(C) result = inverse * d or (2) numpy.linalg.solve(C, d) Method (2) gives far more precise results. Why is this? What exactly is happening such that one "works better" than the other? A: np.linalg.solve(A, b) does not compute the inverse of A. Instead it calls one of the gesv LAPACK routines, which first factorizes A using LU decomposition, then solves for x using forward and backward substitution (see here). np.linalg.inv uses the same method to compute the inverse of A by solving for A-1 in A·A-1 = I where I is the identity*. The factorization step is exactly the same as above, but it takes more floating point operations to solve for A-1 (an n×n matrix) than for x (an n-long vector). Additionally, if you then wanted to obtain x via the identity A-1·b = x then the extra matrix multiplication would incur yet more floating point operations, and therefore slower performance and more numerical error. There's no need for the intermediate step of computing A-1 - it is faster and more accurate to obtain x directly. * The relevant bit of source for inv is here. Unfortunately it's a bit tricky to understand since it's templated C. The important thing to note is that an identity matrix is being passed to the LAPACK solver as parameter B.
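A small experiment illustrating the accuracy and speed point made in the answer (a sketch; exact numbers depend on the random matrix drawn):

import numpy as np

n = 1000
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n))
d = rng.standard_normal(n)

x_solve = np.linalg.solve(C, d)   # LU factorization + forward/backward substitution
x_inv = np.linalg.inv(C) @ d      # explicit inverse, then a matrix-vector product

# Compare residual norms ||C x - d||; solve() is typically the smaller of the two.
print("solve residual:", np.linalg.norm(C @ x_solve - d))
print("inv residual:  ", np.linalg.norm(C @ x_inv - d))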
{ "pile_set_name": "StackExchange" }
Q: How to make simple logistic regression in tensorflow? My input data for one step numpy array length of 36 float [-0.712982 1.14461327 -0.46141151 -0.39443004 -0.44848472 -0.65676075 0.56058383 -0.61031222 0.43211082 -0.74852234 1.28183317 0.79719085 -0.28156522 0.16901374 -0.73715878 0.69877005 -0.40633941 0.01085454 -0.33675554 -0.37056464 -0.43088505 0.3327457 -0.15905562 0.72995877 0.56962079 0.10286932 0.25698286 0.89823145 -0.12923111 0.3219386 0.10118762 1.29127014 -0.22283298 0.75640506 0.79971719 0.60000002] Part of my code: X = tf.placeholder(tf.float32, (36)) Y = tf.placeholder(tf.float32) # Create Model # Set model weights W = tf.Variable(tf.zeros([36], name="weight")) b = tf.Variable(tf.zeros([1]), name="bias") # Construct model activation = tf.add(tf.matmul(X, W), b) In this case tf.matmul not work(ValueError: Shape (36,) must have rank 2). What changes I need to get the activation as a single float number? A: Just use: activation = tf.add(tf.mul(X, W), b) See a simple linear regression example (and others) from https://github.com/nlintz/TensorFlow-Tutorials/blob/master/1_linear_regression.py: import tensorflow as tf import numpy as np trX = np.linspace(-1, 1, 101) trY = 2 * trX + np.random.randn(*trX.shape) * 0.33 # create a y value which is approximately linear but with some random noise X = tf.placeholder("float") # create symbolic variables Y = tf.placeholder("float") w = tf.Variable(0.0, name="weights") # create a shared variable (like theano.shared) for the weight matrix y_model = tf.mul(X, w) cost = tf.square(Y - y_model) # use square error for cost function train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost) # construct an optimizer to minimize cost and fit line to my data # Launch the graph in a session with tf.Session() as sess: # you need to initialize variables (in this case just variable W) tf.initialize_all_variables().run() for i in range(100): for (x, y) in zip(trX, trY): sess.run(train_op, feed_dict={X: x, Y: y}) print(sess.run(w)) # It should be something around 2
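For the 36-feature case in the question, the rank error can also be removed by making both matmul operands rank 2. A sketch in the same TF1-style API as the question (the shapes and the sigmoid output are illustrative choices, not from the original answer):

import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 36])   # a batch of rows, 36 features each
Y = tf.placeholder(tf.float32, [None, 1])

W = tf.Variable(tf.zeros([36, 1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")

logits = tf.add(tf.matmul(X, W), b)          # shape [None, 1]; both matmul inputs are rank 2
activation = tf.nn.sigmoid(logits)           # logistic regression output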
{ "pile_set_name": "StackExchange" }
Q: Counterexample to inverse Leibniz alternating series test The alternating series test is a sufficient condition for the convergence of a numerical series. I am searching for a counterexample to its inverse: i.e. a series (alternating, of course) which converges, but for which the hypotheses of the theorem fail. In particular, if one writes the series as $\sum (-1)^n a_n$, then $a_n$ should not be monotonically decreasing (it must still tend to zero, since otherwise the series cannot converge). A: Put: $$ b_n = \begin{cases} n^{-2} &: n \text{ odd} \\ 2^{-n} &: n \text{ even} \end{cases} $$ $b_n$ is not monotonically decreasing. Still, $\sum (-1)^n b_n$ converges.
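A one-line justification of the convergence claim (not spelled out in the original answer, but immediate from absolute convergence): $$\sum_n \left|(-1)^n b_n\right| = \sum_{n\ \text{odd}} n^{-2} + \sum_{n\ \text{even}} 2^{-n} \le \sum_{n\ge 1} n^{-2} + \sum_{n\ge 1} 2^{-n} < \infty,$$ so $\sum (-1)^n b_n$ converges absolutely, hence converges, even though $b_n$ keeps jumping between the two subsequences and is not eventually monotone.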
{ "pile_set_name": "StackExchange" }
Q: Scala compiler says "No TypeTag available for T" in method using generics The following code is not compiling: override def read[T <: Product](collection : String): Dataset[T] = { val mongoDbRdd = MongoSpark.load(sparkSession.sparkContext,MongoDBConfiguration.toReadConfig(collection)) mongoDbRdd.toDS[T] } This is "toDS" definition: def toDS[T <: Product: TypeTag: NotNothing](): Dataset[T] = mongoSpark.toDS[T]() Compiler says: Error:(11, 20) No TypeTag available for T mongoDbRdd.toDS[T] Error:(11, 20) not enough arguments for method toDS: (implicit evidence$3: reflect.runtime.universe.TypeTag[T], implicit evidence$4: com.mongodb.spark.NotNothing[T])org.apache.spark.sql.Dataset[T]. Unspecified value parameters evidence$3, evidence$4. mongoDbRdd.toDS[T] Line 11 is mongoDbRdd.toDS[T] I really don't know what's going on with Scala Generics and the compiler is not very specific, any idea? A: The problem is with the type constraints on T that toDS requires: // The ':' constraint is a type class constraint. def toDS[T <: Product: TypeTag: NotNothing](): Dataset[T] = mongoSpark.toDS[T]() // The below is exactly the same as the above, although with user-defined // names for the implicit parameters. // All a type class does is append implicit parameters to your function. def toDS[T <: Product]()(implicit typeTag: TypeTag[T], notNothing: NotNothing[T]) = mongoSpark.toDS[T]() You'll notice that's what your compiler error shows - with the names expanded to evidence$3 and evidence$4. If you want your method to compile, simply add the same type classes: override def read[T <: Product: TypeTag: NotNothing]( collection : String): Dataset[T] = { /* impl */ }
{ "pile_set_name": "StackExchange" }
Q: wordpress updating deprecated get_the_author function I am updating an older theme and am getting this message. Notice: get_the_author was called with an argument that is <strong>deprecated</strong> since version 2.1 with no alternative available. in /srv/www/virtual/example.com/htdocs/zzblog/wp-includes/functions.php on line 3468 home page pageid-641 page-author-test page-template page-template-MIMindexMOD-php"> I can find the call in my theme's functions.php as follows: c[] = 'page-author-' . sanitize_title_with_dashes(strtolower(get_the_author('login'))); This is the only reference to get_the_author that I can find. The WordPress codex says the entire get_the_author function is deprecated (along with the passed argument), so I would like to update it but am not sure how. A: Just replace the line: c[] = 'page-author-' . sanitize_title_with_dashes(strtolower(get_the_author('login'))); with this: c[] = 'page-author-' . sanitize_title_with_dashes(strtolower(get_the_author())); The function as it appears in http://codex.wordpress.org/Function_Reference/get_the_author isn't deprecated; only the parameter is deprecated, because the function now always returns the user's display name, so there is no need to specify that the desired return value is the user 'login'.
{ "pile_set_name": "StackExchange" }
Q: Keeping Users Anonymous - Secure DB Only Option - General Thoughts? I'm working on a web app where was considering how to keep user's identities totally anonymous. However I've come to the conclusion, there's not too much I can do - except concentrate on securing the database from being hacked. Is this the general consensus here on StackOverflow or are there any methods I may have missed? So I thought about: A straight bcrypt hash and salt however this then leads to contacting the user for various reasons. Password resets. I could save recovery question/answer but then of course the answers would need to be readable so that defeats things. Another point was say they forgot those security questions or a username I had generated on registration. No way to link them to an account. Also what came to mind (assuming I conquered the above) restricting duplicate users. If I hashed/salted searching through would be quite 'heavy' on processing? I could simply keep a long list of emails used but then the issue again linking that to an existing account? Interested to hear your thoughts. Thanks. A: I think a scenario you describe could be possible. You could give the user a decryption token when they login. The token could be assigned to a variable in the app on the front-end, so if they leave the site or page, the token would be lost and they would have to login again. Using the token, the app can decrypt encrypted data coming from the server. So, all data could be encrypted using the token. Then, if password changes are required, you could generate a new token when you generate a new password, and your server would have to decrypt then re-encrypt all their data using the new token. In this way, you could have all your server files, code, and databases encrypted while using SSL so all data would be anonymous until it reaches the user for display. The user logging in would be the only way to get the token to decrypt any data coming from the server. This would reduce server performance considerably. This technique is used by Atmel in their microchips to have 100% encryption from device to cloud. Which makes sense for a chip because a user doesn't have to interact with it visually. In the case of a webapp, there needs to be some way to decrypt the data for display. Which limits the possibilities. Here are a couple of links that might be useful in doing this: https://www.jamesward.com/2013/05/13/securing-single-page-apps-and-rest-services https://www.fourmilab.ch/javascrypt/javascrypt.html Here is a link to the Atmel chip that uses the above mentioned security method. http://www.atmel.com/products/security-ics/cryptoauthentication/
{ "pile_set_name": "StackExchange" }
Q: In which orientations do two planar magnetic shells have maximum attraction/repulsion and no force? A very thin magnet (one side +ve and other side -ve) is called a magnetic shell. Now for two planar magnetic shells placed near to each other with unlike poles facing each other, there will be attraction. Now if we keep tilting the second shell, the attractive force would decrease and then becomes repulsive and by turning it around $180^0$ it becomes highly repulsive (as shown below). What I need to know is, while keeping the first planar shell in same orientation, by tilting the second planar shell by certain angles, in which angles there would be maximum attraction, maximum repulsion and no force? A: To a good approximation, the force between the two magnets can be modelled as the force between two magnetic dipoles of dipole moments $\mathbf m_1$ and $\mathbf m_2$ (i.e. vectors which point from the south poles to the north poles along the symmetry axis of the shells), separated by a displacement $\mathbf r$, given by $$ {\displaystyle \mathbf {F} (\mathbf {r} ,\mathbf {m} _{1},\mathbf {m} _{2})={\dfrac {3\mu _{0}}{4\pi r^{5}}}\left[(\mathbf {m} _{1}\cdot \mathbf {r} )\mathbf {m} _{2}+(\mathbf {m} _{2}\cdot \mathbf {r} )\mathbf {m} _{1}+(\mathbf {m} _{1}\cdot \mathbf {m} _{2})\mathbf {r} -{\dfrac {5(\mathbf {m} _{1}\cdot \mathbf {r} )(\mathbf {m} _{2}\cdot \mathbf {r} )}{r^{2}}}\mathbf {r} \right]}. $$ Following the simplest reading of the diagrams in the question, with the shells sharing an axis, you're fixing $\mathbf m_1$ to be antiparallel to $\hat{\mathbf r}=\mathbf r/r$, so you have $$ \mathbf {F} (\mathbf {r} ,-m_1 \hat{\mathbf r},\mathbf {m} _{2}) = \frac {3\mu _{0}m_1}{4\pi} \frac{1}{r^4} \left[ 3(\mathbf {m} _{2}\cdot \hat{\mathbf r}) \hat{\mathbf r} -\mathbf {m} _{2} \right]. $$ This then means that if $\mathbf m_2=-m_2 \hat{\mathbf r}$ points towards the first shell, then $$ \mathbf {F} (\mathbf {r} ,-m_1 \hat{\mathbf r},-m_2 \hat{\mathbf r}) = -2m_2 \hat{\mathbf r} \frac {3\mu _{0}m_1}{4\pi} \frac{1}{r^4} $$ and the force is attractive, if $\mathbf m_2=+m_2 \hat{\mathbf r}$ points away from the first shell, then $$ \mathbf {F} (\mathbf {r} ,-m_1 \hat{\mathbf r},+m_2 \hat{\mathbf r}) = +2 m_2 \hat{\mathbf r} \frac {3\mu _{0}m_1}{4\pi} \frac{1}{r^4} $$ and the force is repulsive, and if $\mathbf m_2\cdot\mathbf r=0$ and the second shell is orthogonal to the existing distance, then the force $$ \mathbf {F} (\mathbf {r} ,-m_1 \hat{\mathbf r},\mathbf {m} _{2}) = -\mathbf {m} _{2} \frac {3\mu _{0}m_1}{4\pi} \frac{1}{r^4} $$ is neither attractive nor repulsive; instead, it tries to bring swing the second shell around. If, instead, the shells are coplanar, then $\mathbf m_1\cdot\mathbf r=0$ as the shells' centres' separation is always orthogonal to the axis of the first one, so the force is instead given by $$ \mathbf {F} (\mathbf {r} ,\mathbf {m} _{1},\mathbf {m} _{2}) = \dfrac {3\mu _{0}}{4\pi r^{5}}\left[ (\mathbf {m} _{2}\cdot \mathbf {r} )\mathbf {m} _{1} +(\mathbf {m} _{1}\cdot \mathbf {m} _{2})\mathbf {r} \right] . $$ This tells you that the force is in general neither fully attractive nor fully repulsive, and it is only zero if $\mathbf m_2$ is orthogonal to both the initial dipole and the mutual separation. In any case, the line-of-sight component $$ (\mathbf {F} (\mathbf {r} ,\mathbf {m} _{1},\mathbf {m} _{2}) \cdot \hat{\mathbf r})\hat{\mathbf r} = \dfrac {3\mu _{0}}{4\pi r^{4}} (\mathbf {m} _{1}\cdot \mathbf {m} _{2})\hat{\mathbf r} . 
$$ can be attractive if $\mathbf {m} _{1}\cdot \mathbf {m} _{2}<0$ and the magnetic moments are pointing in opposite directions, repulsive if $\mathbf {m} _{1}\cdot \mathbf {m} _{2}>0$ and the two shells have their poles aligned, and zero if $\mathbf {m} _{1}\cdot \mathbf {m} _{2}=0$ and the second magnetic moment is in the same orthogonal plane as $\hat{\mathbf r}$.
{ "pile_set_name": "StackExchange" }
Q: html2canvas screen capture for a portion of a page I tried using html2canvas to generate a screen captures for a portion of the screen but It generates am image of the entire document. Am I doing something wrong here ? ("graph" is a div id) $("#graph").html2canvas(); Also,how do I save the resulting image to another location other than the specified element ? A: Can you clarify what you mean by saving the image to another location? If I'm understanding what you're asking, assuming you are using the "jquery.plugin.html2canvas.js", you need to edit the "$canvas.css" line in that file, to specify where you want to append your canvas object. One option you may consider is to keep it appending to the body, but set the css to "display: none" so that the object doesn't keep showing up. Then, after the css properties you can put in your own function to manipulate the canvas object. $canvas.css({width: '500px', height: '500px', display:'none'}).appendTo(document.body); $canvas.attr('id', 'canvasObj'); manipulateCanvasFunction(document.getElementById('canvasObj')); This sends the complete canvas object to your "manipulateCanvasFunction," and you can do whatever you want with the canvas object. You might check out this question thread for some other ideas. Edit: Removing incorrect information
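For reference, later plugin-free releases of html2canvas expose a promise-based call that captures a single element directly, which covers the "portion of the page" part of the question. A sketch, assuming a reasonably recent html2canvas build and a hypothetical #preview container:

html2canvas(document.getElementById("graph")).then(function (canvas) {
    // Append the rendered canvas wherever you want it...
    document.getElementById("preview").appendChild(canvas);
    // ...or turn it into an image you can save or upload.
    var dataUrl = canvas.toDataURL("image/png");
    console.log(dataUrl.substring(0, 30) + "...");
});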
{ "pile_set_name": "StackExchange" }
Q: Find Maximum in Array with Repeated Elements I faced a question in an interview two days back regarding optimally finding the maximum element in an array, and I have a query regarding it. The array, as described by the interviewer, consists of integer values which may repeat (not necessarily in consecutive places) and it also consists of two parts (one part may be empty): the first part in non-decreasing order, and the second part in non-increasing order, i.e. if a[n] is an array consisting of n integer values, then a[i]<=a[i+1], i in [1,m] and a[j]>=a[j+1], j in [m,n], where m lies somewhere between 1 and n inclusive. Now, they asked me to find the maximum value in the array in the most efficient way. I argued that, as some elements may be repeated, I cannot eliminate part of the search space using a divide-and-conquer strategy. Had all the elements been distinct, we would be able to find the maximum in lg n time using binary search. But there exists at least one input sequence (for instance, when all elements are the same) for which we need at least n-1 comparisons before we can tell which element is the maximum. The interviewer asked me to think about whether I can impose some restriction on the input so that the problem can be solved in lg n time. I was not able to solve it at the time and am still thinking about it. It would be helpful if you could help me think it through. A: The interviewer was trying to see if you know about ternary search: if you require that the parts are strictly increasing and then strictly decreasing, you'd be able to get an answer in Log3(N). So the additional requirement the interviewer was looking for is probably that duplicate entries, if any, should occur on different sides of the ascending-descending switchover point.
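A sketch of the ternary-search idea from the answer, assuming the restriction it proposes (strictly increasing, then strictly decreasing). Python is used purely for illustration, since the question itself contains no code:

def find_max(a):
    """Maximum of an array that strictly increases and then strictly decreases."""
    lo, hi = 0, len(a) - 1
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if a[m1] < a[m2]:
            lo = m1 + 1    # the peak cannot lie at or before m1
        else:
            hi = m2 - 1    # the peak cannot lie at or after m2
    return max(a[lo:hi + 1])   # at most three candidates remain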
{ "pile_set_name": "StackExchange" }
Q: Riemannian manifolds which admit a smooth free $\mathbb{Z}/3\mathbb{Z}$ action but do not admit an equilateral triangle action A free action of $\mathbb{Z}/3\mathbb{Z}$ on a Riemannian manifold $(M, g)$ is called an equilateral action if for every $x\in M$ all three points of the orbit of $x$ are at the same distance from each other. What is an example of a Riemannian manifold which does not admit such an action but does admit a smooth free action by the cyclic group of order $3$? A: Consider a torus with three handles, where one handle is much larger than the others, and with a smooth and free $\mathbb{Z}_3$ action which permutes the handles. Let $\gamma$ be a small geodesic loop going through one of the small handles. Let $z\gamma$ be an image of $\gamma$ under the group action which goes through the large handle. Let $p$ be a point on $\gamma$ such that $zp$ is far out on the large handle. Now the distance between $p$ and $zp$ is almost the diameter of the manifold, and there is no third point $z^2p$ which makes an equilateral triangle with them. So there is no smooth and equilateral $\mathbb{Z}_3$ action on the manifold.
{ "pile_set_name": "StackExchange" }
Q: Xamarin Forms Stacklayout doesn't fill whole MasterPage in UWP app I have a StackLayout in the MasterPage of a MasterDetailPage in Xamarin Forms, which has as VerticalOptions "FillAndExpand", but it doesn't fill the whole ContentPage in my UWP app (I don't know, if it works correctly on Android or iOS). In the screenshot you can see a green bar at the bottom left corner. What can I do to make the StackLayout fill the whole MasterPage? Here is my MasterPage: <?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="http://xamarin.com/schemas/2014/forms" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:prism="clr-namespace:Prism.Mvvm;assembly=Prism.Forms" xmlns:local="clr-namespace:TobyList_XamarinForms" Title="Toby" prism:ViewModelLocator.AutowireViewModel="True" x:Class="TobyList_XamarinForms.Views.MasterPage" BackgroundColor="Green"> <StackLayout Padding="5" VerticalOptions="FillAndExpand" BackgroundColor="#F9F9F9" /> </ContentPage> A: I wrapped the MasterPage in a NavigationPage and now the StackLayout fills the whole MasterPage as I wanted. Now the app looks like this, without the green bar: And here is the change in the XAML code of my MainPage: <MasterDetailPage.Master> <NavigationPage Title="Test"> <x:Arguments> <local:MasterPage /> </x:Arguments> </NavigationPage> </MasterDetailPage.Master>
{ "pile_set_name": "StackExchange" }
Q: How to handle readlink() of "/proc/self/exe" when executable is replaced during execution? In my C++ application, my application does an execv() in a fork()ed child process to use the same executable to process some work in a new child process with different arguments that communicates with pipes to the parent process. To get the pathname to self, I execute the following code on the Linux port (I have different code on Macintosh): const size_t bufSize = PATH_MAX + 1; char dirNameBuffer[bufSize]; // Read the symbolic link '/proc/self/exe'. const char *linkName = "/proc/self/exe"; const int ret = int(readlink(linkName, dirNameBuffer, bufSize - 1)); However, if while the executable is running, I replace the executable with an updated version of the binary on disk, the readlink() string result is: "/usr/local/bin/myExecutable (deleted)" I understand that my executable has been replaced by a newer updated version and the original for /proc/self/exe is now replaced, however, when I go to execv() it now fails with the errno 2 - No such file or directory. due to the extra trailing " (deleted)" in the result. I would like the execv() to either use the old executable for self, or the updated one. I could just detect the string ending with " (deleted)" and modify it to omit that and resolve to the updated executable, but that seems clumsy to me. How can I execv() the current executable (or its replacement if that is easier) with a new set of arguments when the original executable has been replaced by an updated one during execution? A: Instead of using readlink to discover the path to your own executable, you can directly call open on /proc/self/exe. Since the kernel already has an open fd to processes that are currently executing, this will give you an fd regardless of whether the path has been replaced with a new executable or not. Next, you can use fexecve instead of execv which accepts an fd parameter instead of a filename parameter for the executable. int fd = open("/proc/self/exe", O_RDONLY); fexecve(fd, argv, envp); Above code omits error handling for brevity.
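Since the answer notes that error handling was omitted for brevity, here is the same idea with the checks filled in (a sketch; argv and envp are whatever the program already passes to execv):

#define _GNU_SOURCE   /* for fexecve() on glibc */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void reexec_self(char *const argv[], char *const envp[])
{
    int fd = open("/proc/self/exe", O_RDONLY);
    if (fd == -1) {
        perror("open /proc/self/exe");
        exit(EXIT_FAILURE);
    }

    fexecve(fd, argv, envp);

    /* fexecve() only returns on failure. */
    perror("fexecve");
    close(fd);
    exit(EXIT_FAILURE);
}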
{ "pile_set_name": "StackExchange" }
Q: When does $f(a),f(f(a)),f(f(f(a)))...$ produce better and better approximations to $x=f(x)$? I tried to approximate the solution to $x=f(x)$ for some given $f$ by guessing $x=a$. I then observed that $x=f(a)$ was an even better approximation, and $x=f(f(a))$ and so on were better still. Why does this method work, and for which $f$? Is it sufficient that $f$ is continuous? A: This method certainly works if the function is Lipschitz continuous with Lipschitz constant $L<1$. In general, it does not work. For example, let $f(x)=x+1$; then for any starting value $x_0$, the sequence $x_n=f(x_{n-1})$ ($n\geq1$) diverges.
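A short justification of the Lipschitz condition in the answer (the standard contraction-mapping estimate, added here for completeness): if $f$ has Lipschitz constant $L<1$ and a fixed point $x^*$, then $$|x_{n+1}-x^*| = |f(x_n)-f(x^*)| \le L\,|x_n-x^*| \le \cdots \le L^{\,n+1}\,|x_0-x^*| \longrightarrow 0,$$ so every iterate is at least as close to $x^*$ as the previous one and the error shrinks geometrically; this is the content of the Banach fixed-point theorem, and mere continuity of $f$ is not enough, as the $f(x)=x+1$ example shows.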
{ "pile_set_name": "StackExchange" }
Q: How do you run OpenERP yaml unit tests I'm trying to run unit tests on my openERP module, but no matter what I write it doesnt show if the test passes or fails! Anyone know how to output the results of a test? (Using Windows OpenERP version 6.1) My YAML test is: - I test the tests - !python {model: mymodelname}: | assert False, "Testing False!" assert True, "Testing True!" The output when I reload the module with openerp-server.exe --update mymodule --log-level=test -dtestdb shows that the test ran but has no errors?! ... TEST testdb openerp.tools.yaml_import: I test the tests What am I doing wrong? Edit: --------------------------------------------------------------------- Ok so after much fiddling with the !python, I tried out another test: - I test that the state - !assert {model: mymodel, id: mymodel_id}: - state == 'badstate' Which gave the expected failure: WARNING demo_61 openerp.tools.yaml_import: Assertion "NONAME" FAILED test: state == 'badstate' values: ! active == badstate So I'm guessing it is something wrong with my syntax which may work as expected in version 7. Thanks for everyone's answers and help! A: This is what I've tried. It seems to work for me: !python {model: sale.order}: | assert True, "Testing True!" assert False, "Testing False!" (Maybe you forgot the "|" character) And then : bin/start_openerp --init=your_module_to_test -d your_testing_database --test-file=/absolute/path/to/your/testing_file.yml You might want to create your testing database before : createdb mytestdb --encoding=unicode Hope it helps you UPDATE: Here are my logs ( I called my test file sale_order_line_test.yml) ERROR mytestdb openerp.tools.yaml_import: AssertionError in Python code : Testing False! mytestdb openerp.modules.loading: At least one test failed when loading the modules. loading test file /path/to/module/test/sale_order_line_test.yml AssertionError in Python code : Testing False! A: Looking at the docs (e.g. here and here), I can't see anything obviously wrong with your code. However, I'm not familiar with --log-level=test. Maybe try running it with the -v, --debug or --log-level=debug flags instead of --log-level=test? You may also need to try the uppercase variants for the --log-level argument, i.e. --log-level=DEBUG. test certainly isn't one of the standard Python logging module's logging levels, and while I can't exclude the possibility of them adding a custom log level, I don't think that's the case. It might also be worthwhile trying to remove the line obj = self.browse(cr, uid, ref("HP001")), just in case.. A: Try to type following path on your terminal when you start your server. ./openerp-server --addons-path=<..Path>...--test-enable :Enable YAML and unit tests. ./openerp-server --addons-path=<..Path>...--test-commit :Commit database changes performed by YAML or XML tests.
{ "pile_set_name": "StackExchange" }
Q: PHP Efficient way to Convert String of Binary into Binary here is the skinny (scroll down to see the problem): I am doing Huffman Encoding to compress a file using PHP (for a project). I have made the map, and made everything into a string like so: 00101010001100001110011101001101111011111011 Now, I need to convert that into an actual binary string, in its current state, it is only a string of 1s and 0s. Here is the problem: The string of 1s and 0s is 17,747,595 characters long, and it is really slowing down at around 550,000 This is the code I have: <?php $i=0 $len = strlen($binaryString); while ($i < $len){ $section = substr($binaryString,$i,$i+8); $out .= chr(bindec($section)); $i=$i+8; } ?> How can I make this efficient enough to run the 17 million character string? Thanks very much for any support! A: You don't need to loop you can use gmp with pack $file = "binary.txt"; $string = file_get_contents($file); $start = microtime(true); // Convert the string $string = simpleConvert($string); //echo $string ; var_dump(number_format(filesize($file),2),microtime(true)- $start); function simpleConvert($string) { return pack('H*',gmp_strval(gmp_init($string, 2), 16)); } Output string '25,648,639.00' (length=13) <---- Length Grater than 17,747,595 float 1.0633520126343 <---------------- Total Conversion Time Links Original Dictionary File (349,900 words) 3,131KB Binary Version 25,048 KB Note Solution requires GMP Functions
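If GMP is not available, the original loop can also be made correct and reasonably fast. Note that substr($binaryString, $i, $i+8) passes a growing length where a fixed length of 8 is presumably intended; the sketch below uses str_split instead and right-pads a trailing partial byte (an assumption about the desired framing):

function bitsToBytes($binaryString) {
    $out = '';
    foreach (str_split($binaryString, 8) as $chunk) {
        // Pad the last chunk to 8 bits so partial bytes are framed consistently.
        $chunk = str_pad($chunk, 8, '0');
        $out .= chr(bindec($chunk));
    }
    return $out;
}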
{ "pile_set_name": "StackExchange" }
Q: Mono - System.TypeLoadException: Could not load types After recently upgrading to Xamarin Studio 6 and Mono 4.4.0.182 (running on OSX 10.10.5) we found that our application no longer runs (Asp.NET MVC/Razor website). The solution does compile correctly however. We've tried reverting back to the previous versions which did work - Xamarin Studio 5.10.3 and Mono 4.3.2, and our application still does not run correctly. The exact same application was working fine prior to the update. No other environment/code changes have been made, and restarts have also not worked. Errors being generated after the update. 1st Error (on application start): System.Reflection.ReflectionTypeLoadException This is being triggered by SimpleInjector Container.RegisterPackages(). This was working prior to the Xamarin/Mono update, and no code changes/package updates have been applied/made. Could not load type 'System.Net.HttpListener' from assembly 'System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. Could not load type 'System.Net.HttpListenerPrefixCollection' from assembly 'System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. Could not load type 'System.Net.HttpWebRequest' from assembly 'System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. Could not load type 'System.Net.Security.SslStream' from assembly 'System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. Could not load type 'System.Net.WebSockets.ClientWebSocket' from assembly 'System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'. Stacktrace at (wrapper managed-to-native) System.Reflection.Assembly:GetTypes (System.Reflection.Assembly,bool) at System.Reflection.Assembly.GetExportedTypes () [0x00000] in /private/tmp/source-mono-4.3.2/bockbuild-xamarin/profiles/mono-mac-xamarin/build-root/mono-x86/mcs/class/corlib/System.Reflection/Assembly.cs:407 at SimpleInjector.PackageExtensions.GetExportedTypesFrom (System.Reflection.Assembly assembly) [0x00000] in <filename unknown>:0 at SimpleInjector.PackageExtensions+<>c.<RegisterPackages>b__1_0 (System.Reflection.Assembly assembly) [0x00000] in <filename unknown>:0 at System.Linq.Enumerable+<SelectManyIterator>c__Iterator5`3[TSource,TCollection,TResult].MoveNext () [0x00059] in <filename unknown>:0 at System.Linq.Enumerable+WhereSelectEnumerableIterator`2[TSource,TResult].MoveNext () [0x00078] in <filename unknown>:0 at System.Linq.Buffer`1[TElement]..ctor (IEnumerable`1 source) [0x00087] in <filename unknown>:0 at System.Linq.Enumerable.ToArray[TSource] (IEnumerable`1 source) [0x00011] in <filename unknown>:0 at SimpleInjector.PackageExtensions.RegisterPackages (SimpleInjector.Container container, IEnumerable`1 assemblies) [0x000f0] in <filename unknown>:0 at SimpleInjector.PackageExtensions.RegisterPackages (SimpleInjector.Container container) [0x0002f] in <filename unknown>:0 at MyApplication.Web.UI.MvcApplication.InitializeContainer (SimpleInjector.Container container) [0x00003] in /Users/*sanitized*/MyApplication.Web.UI/Global.asax.cs:57 2nd Error (After page reload and any subsequent page requests): System.ArgumentException An item with the same key has already been added. This is referencing a System.Web.Mvc.RouteCollectionExtensions.MapRoute call in our App_Start/RouteConfig.cs file (called in turn from Global.asax.cs, Application_Start), indicating that this file is being called at least twice. Again, this was working prior to the recent Xamarin/Mono update. 
Can anyone provide any assistance and/or suggestions? EDIT Rolled back to Mono 4.2.4.4, and the error has gone away. It looks like a change introduced at some point in 4.3.2 causes this bug. A: I had a similar issue: Could not load type 'System.Net.HttpListener' from assembly 'System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' I solved it by adding a reference to Mono.Security to the executable project and rebuilding it. Hope this helps
{ "pile_set_name": "StackExchange" }
Q: Valgrind memory leak detection I am new to Valgrind, and I wanted to see how valgrind works. I wrote a sample program for memory leak. However Valgrind does not seem to detect a memory leak. Can you please tell me why? Or does the below code leak memory? #include <iostream> using namespace std; class test { private: int a; public: test(int c) { a = c; } }; int main() { test* t = new test(7); } this is the valgrind output HEAP SUMMARY: ==5449== in use at exit: 0 bytes in 0 blocks ==5449== total heap usage: 29 allocs, 29 frees, 3,592 bytes allocated ==5449== ==5449== All heap blocks were freed -- no leaks are possible A: I don't think that constitutes a memory leak; the memory at pointer t is not 'lost' until t goes out of scope, and that's at the end of main() so there's no memory loss. dan@rachel ~ $ g++ -o t t.cpp dan@rachel ~ $ valgrind ./t ==11945== Memcheck, a memory error detector ==11945== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al. ==11945== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info ==11945== Command: ./t ==11945== ==11945== ==11945== HEAP SUMMARY: ==11945== in use at exit: 36 bytes in 9 blocks ==11945== total heap usage: 9 allocs, 0 frees, 36 bytes allocated ==11945== ==11945== LEAK SUMMARY: ==11945== definitely lost: 36 bytes in 9 blocks ==11945== indirectly lost: 0 bytes in 0 blocks ==11945== possibly lost: 0 bytes in 0 blocks ==11945== still reachable: 0 bytes in 0 blocks ==11945== suppressed: 0 bytes in 0 blocks ==11945== Rerun with --leak-check=full to see details of leaked memory ==11945== ==11945== For counts of detected and suppressed errors, rerun with: -v ==11945== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2) The code was modified slightly to use test* t multiple times in a for loop, effectively "forgetting" all but the last test object. #include <iostream> using namespace std; class test { private: int a; public: test(int c){ a = c; } }; int main(){ test* t; for(int i=1; i<10;i++) t=new test(i); } For even better memory leak patching, try compiling with debugging information and using the valgrind option recommended in the output: dan@rachel ~ $ g++ -g -o t t.cpp dan@rachel ~ $ valgvalgrind --leak-check=full ./t ==11981== Memcheck, a memory error detector ==11981== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al. ==11981== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info ==11981== Command: ./t ==11981== ==11981== ==11981== HEAP SUMMARY: ==11981== in use at exit: 36 bytes in 9 blocks ==11981== total heap usage: 9 allocs, 0 frees, 36 bytes allocated ==11981== ==11981== 36 bytes in 9 blocks are definitely lost in loss record 1 of 1 ==11981== at 0x4C2C099: operator new(unsigned long) (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so) ==11981== by 0x4006EF: main (t.cpp:15) ==11981== ==11981== LEAK SUMMARY: ==11981== definitely lost: 36 bytes in 9 blocks ==11981== indirectly lost: 0 bytes in 0 blocks ==11981== possibly lost: 0 bytes in 0 blocks ==11981== still reachable: 0 bytes in 0 blocks ==11981== suppressed: 0 bytes in 0 blocks ==11981== ==11981== For counts of detected and suppressed errors, rerun with: -v ==11981== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 2 from 2)
{ "pile_set_name": "StackExchange" }
Q: Is there a way for php or ajax to get image size in bytes or kb or mb right away? I have a nice php and ajax script that uploads a photo and displays the image after uploading. I also have rules that it must follow too. But my problem is the maxSize is 2mb, and right now when you choose an image and you start to upload it -- it basically looks at the whole upload (which can take a while) and then it tells you if there is a problem.... Even if the image or picture is 10mb file, it will process this for minutes and minutes and then it will error out and say "File size too big" My question : Is there a way with ajax or php (i know there is with flash) -- to tell right away if the file size is too big?? So, if I have a 8mb picture I want to upload, it will error out right away that the file size is too big, instead of trying to upload and process this... Hope that makes sense!! Thanks A: Frustratingly, no. You'd have to use something like flash or java. Keep in mind, this would be for convenience only - you'd still need to check on the server because anything on the browser could be faked or bypassed.
{ "pile_set_name": "StackExchange" }
Q: In the presence of heteroskedasticity, is quantile regression more appropriate than OLS? ...for understanding the relationship between a dependent variable and independent variables, given that quantile regression makes no assumptions about the distribution of the residuals. A: If you are really interested in determining how the conditional mean value of the dependent variable varies with the independent variables, then you would address this question by using: Ordinary least squares regression in the absence of heteroskedasticity; Generalized least squares regression or weighted least squares regression in the presence of heteroskedasticity. On the other hand, if you are interested in determining how the quantiles of the conditional distribution of the dependent variable vary with the independent variables, then you would address that via quantile regression. All of these regression techniques target some aspect(s) of the conditional distribution of the dependent variable given the independent variables. Usually, you would choose the aspect relevant to your study based on subject-matter considerations. In some cases, focusing on a single aspect of that distribution (e.g., conditional mean or conditional median) is sufficient given the study purposes. In other cases, a more comprehensive look at the entire conditional distribution is necessary, which can be obtained by focusing on an appropriately selected set of quantiles of that distribution. So what is appropriate depends primarily on the study question, though it also has to take into account features present in the data used to elucidate this question, such as the presence/absence of heteroskedasticity when the study question involves the conditional mean of the dependent variable.
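A small illustration of the three options named in the answer, using Python's statsmodels formula API (a sketch with made-up column names and a simple variance model; none of this is from the original answer):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 500)
y = 2 + 0.5 * x + rng.normal(0, 0.3 * x)   # heteroskedastic: the spread grows with x
df = pd.DataFrame({"x": x, "y": y})

ols = smf.ols("y ~ x", data=df).fit()                         # conditional mean, constant-variance assumption
wls = smf.wls("y ~ x", data=df, weights=1.0 / x**2).fit()     # conditional mean, variance modelled as proportional to x^2
median = smf.quantreg("y ~ x", df).fit(q=0.5)                 # conditional median (the 0.5 quantile)

print(ols.params, wls.params, median.params, sep="\n")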
{ "pile_set_name": "StackExchange" }
Q: How to make a photo appear when the value of an input equals a specific word? I'm a beginner and would like to know how to make it so that, when the user types a secret word into the input field and clicks the OK button, a photo appears on the site. A: An example to start from: const img = document.querySelector("img") const input = document.querySelector("input") const button = document.querySelector("button") const secret = "secret" const handleClick = () => { const value = input.value.trim() if (!value) { console.log("enter a word") return } if (value === secret) { img.classList.toggle("hide") } else { console.log("wrong guess, try again") } } button.addEventListener("click", handleClick) img { display: block; } .hide { display: none; } <input /> <button>OK</button> <img class="hide" src="https://picsum.photos/200" />
{ "pile_set_name": "StackExchange" }
Q: Unwanted indentation with `\mintinline` [possible bug] I have stumbled upon an indentation problem with \mintinline from the minted-package in combination with the breaklinesoption. Consider the following minimal example: %!TEX TS-program = pdflatex %!TEX TS-options = -shell-escape \documentclass{scrbook} \usepackage{minted} \setmintedinline{breaklines=true} \begin{document} A sufficently long text, to make breaking meaningful. Lets have some really cool code now \mintinline{text}{cool code}. Some more code \mintinline{sql}{Test} an how about some Java \mintinline{java}{String a = "hello";} \end{document} The output (MacTeX 2016, just updated with TeX live Manager): Without the breaklines option: As this extra space in front of the verbatim part in the first image looks wrong, I assume this is a bug in minted. Any thoughts on this? A: If I use TeX Live 2015, the output is correct, so the problem seems to belong in fvextra.sty (a recent addition). Indeed the package code has unprotected end-of-lines: 1370 \def\FV@VerbatimPygments#1#2{% 1371 \edef\FV@PYG@Literal{\expandafter\FV@DetokMacro@StripSpace\detokenize{#1}}% 1372 \def\FV@BreakBeforePrep@PygmentsHook{% 1373 \expandafter\FV@BreakBeforePrep@Pygments\expandafter{\FV@PYG@Literal}} 1374 \def\FV@BreakAfterPrep@PygmentsHook{% 1375 \expandafter\FV@BreakAfterPrep@Pygments\expandafter{\FV@PYG@Literal}} 1376 \ifx#2\relax 1377 \let\FV@PYG#1 1378 \else 1379 \let\FV@PYG#2 1380 \fi 1381 \ifbool{FV@breakbytoken}% 1382 {\ifbool{FV@breakbytokenanywhere}% 1383 {\def\FV@BreakByTokenAnywhereHook{% 1384 \def\FV@BreakByTokenAnywhereBreak{% 1385 \let\FV@BreakByTokenAnywhereBreak\FancyVerbBreakByTokenAnywhereBreak}}% 1386 \def#1##1##2{% 1387 \FV@BreakByTokenAnywhereBreak 1388 \leavevmode\hbox{\FV@PYG{##1}{##2}}}}% 1389 {\def#1##1##2{% 1390 \leavevmode\hbox{\FV@PYG{##1}{##2}}}}}% 1391 {\def#1##1##2{% 1392 \FV@PYG{##1}{\FancyVerbBreakStart##2\FancyVerbBreakStop}}}% 1393 } (line numbers added for reference). If I add % at the end of lines 1373, 1375, 1377 and 1379, the output is correct: \documentclass{scrbook} \usepackage{minted} \setmintedinline{breaklines=true} \makeatletter \def\FV@VerbatimPygments#1#2{% \edef\FV@PYG@Literal{\expandafter\FV@DetokMacro@StripSpace\detokenize{#1}}% \def\FV@BreakBeforePrep@PygmentsHook{% \expandafter\FV@BreakBeforePrep@Pygments\expandafter{\FV@PYG@Literal}}% <--- \def\FV@BreakAfterPrep@PygmentsHook{% \expandafter\FV@BreakAfterPrep@Pygments\expandafter{\FV@PYG@Literal}}% <--- \ifx#2\relax \let\FV@PYG#1% <--- \else \let\FV@PYG#2% <--- \fi \ifbool{FV@breakbytoken}% {\ifbool{FV@breakbytokenanywhere}% {\def\FV@BreakByTokenAnywhereHook{% \def\FV@BreakByTokenAnywhereBreak{% \let\FV@BreakByTokenAnywhereBreak\FancyVerbBreakByTokenAnywhereBreak}}% \def#1##1##2{% \FV@BreakByTokenAnywhereBreak \leavevmode\hbox{\FV@PYG{##1}{##2}}}}% {\def#1##1##2{% \leavevmode\hbox{\FV@PYG{##1}{##2}}}}}% {\def#1##1##2{% \FV@PYG{##1}{\FancyVerbBreakStart##2\FancyVerbBreakStop}}}% } \makeatother \begin{document} A sufficently long text, to make breaking meaningful. Lets have some really cool code now \mintinline{text}{cool code}. Some more code \mintinline{sql}{Test} an how about some Java \mintinline{java}{String a = "hello";} \end{document} Update With version 1.2.1 of fvextra (released 2016/09/02), the issue has been fixed, so the extra code is no longer necessary.
{ "pile_set_name": "StackExchange" }
Q: Dative case vs. "für" + accusative case Mathe ist meinen Ausbildern wichtig. Mathe ist für meine Ausbilder wichtig. Do the two sentences have the same meaning? A: In your quoted example sentence, both expressions are nearly equal, but: wichtig sein für jemanden/etwas means something is important for/to something/someone and jemandem wichtig sein means something is considered important by someone The difference becomes apparent when you use the verbs on something that doesn't normally consider things, like: You can say Frisches Heu, Wasser und viel Auslauf sind wichtig für ein Pferd. but rather not Dem Pferd sind Frisches Heu, Wasser und Auslauf wichtig. (because you wouldn't imply an opinion on healthy living with a horse)
{ "pile_set_name": "StackExchange" }
Q: Distribution family of the mean of iid random variables Let $X_1,X_2,...,X_n$ be $n$ iid random variables following some distribution family $A$ (beta, gamma, normal, etc.). Does the mean $\bar{X}=\frac{1}{n}\sum{X_i}$ also follow an $A$-like distribution (with different parameters)? I know it is approximately normal due to the Central Limit Theorem, but does it follow the same family $A$? I'm particularly interested in Beta distributions. Thanks A: Using a counterexample by @cardinal above: let $X_1,X_2$ follow a uniform distribution on $[0,1]$, which is a Beta distribution as well. The variable $(X_1+X_2)/2$ has a triangular density function, which is not Beta. So no, the mean does not necessarily follow the same family.
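A complementary positive example (standard facts, not part of the original answer): some families are preserved because they are closed under sums of iid copies. For the normal family, $\bar X \sim \mathcal N(\mu,\ \sigma^2/n)$. For the gamma family with shape $k$ and scale $\theta$, $$\sum_{i=1}^n X_i \sim \mathrm{Gamma}(nk,\ \theta) \quad\Longrightarrow\quad \bar X = \frac{1}{n}\sum_{i=1}^n X_i \sim \mathrm{Gamma}\!\left(nk,\ \frac{\theta}{n}\right),$$ so the mean stays within the gamma family, whereas the beta family, as the uniform counterexample shows, is not closed in this way.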
{ "pile_set_name": "StackExchange" }
Q: Deploy Jhipster on clevercloud I'm deploying a Jhipster application on Clevercloud. I have set up some configuration: war.json { "build": { "type": "maven", "goal": "package -Pprod -DskipTests" }, "deploy": { "goal": "package -Pprod -DskipTests", "container": "TOMCAT8", "war": [ { "file": "target/myapp-1.0.0.war" } ] } } maven.json { "build": { "type": "maven", "goal": "package -Pprod -DskipTests" }, "deploy": { "goal": "package -Pprod -DskipTests" } } I have modified the application-prod.yml to include the url/username/password of the db add-on. When I deploy, the deployment is successfull but the application is not running. On the application page I have 404 error. The DB is correctly initialised. In the logs I have the following messages that I don't understand or I'm not able to solve: multiple times this message 2017-09-18T09:21:22.701Z: 09:21:21.483 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiTemplate - Looking up JNDI object with name [java:comp/env/logging.exception-conversion-word] 2017-09-18T09:21:22.702Z: 09:21:21.486 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiLocatorDelegate - Converted JNDI name [java:comp/env/logging.exception-conversion-word] not found - trying original name [logging.exception-conversion-word]. javax.naming.NameNotFoundException: Name [logging.exception-conversion-word] is not bound in this Context. Unable to find [logging.exception-conversion-word]. 2017-09-18T09:21:22.702Z: 09:21:21.486 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiTemplate - Looking up JNDI object with name [logging.exception-conversion-word] 2017-09-18T09:21:22.702Z: 09:21:21.486 [localhost-startStop-1] DEBUG org.springframework.jndi.JndiPropertySource - JNDI lookup for name [logging.exception-conversion-word] threw NamingException with message: Name [logging.exception-conversion-word] is not bound in this Context. Unable to find [logging.exception-conversion-word].. Returning null. then: 2017-09-18T09:21:22.777Z: [09:21:12.705][debug][talledLocalContainer] Connection attempt with socket Socket[unconnected], current time is 1505726472705 2017-09-18T09:21:22.778Z: [09:21:12.705][debug][talledLocalContainer] Socket Socket[unconnected] for port 8009 closed 2017-09-18T09:21:22.778Z: [09:21:13.068][debug][talledLocalContainer] Executing '/usr/x86_64-pc-linux-gnu/lib/icedtea8/jre/bin/java' with arguments: 2017-09-18T09:21:22.778Z: '-version' 2017-09-18T09:21:22.778Z: The ' characters around the executable and arguments are 2017-09-18T09:21:22.778Z: not part of the command. 2017-09-18T09:21:22.779Z: [09:21:13.085][debug][talledLocalContainer] Output appended to /tmp/cargo-jvm-version-4176730048875251522.txt 2017-09-18T09:21:22.779Z: [09:21:13.085][debug][talledLocalContainer] Error appended to /tmp/cargo-jvm-version-4176730048875251522.txt 2017-09-18T09:21:22.779Z: [09:21:13.086][debug][talledLocalContainer] Project base dir set to: /home/bas/app_4b724c3b-6703-474e-9ec4-65d775cd0013 2017-09-18T09:21:22.779Z: [09:21:13.086][debug][talledLocalContainer] Execute:Java13CommandLauncher: Executing '/usr/x86_64-pc-linux-gnu/lib/icedtea8/jre/bin/java' with arguments: 2017-09-18T09:21:22.779Z: '-version' 2017-09-18T09:21:22.779Z: The ' characters around the executable and arguments are 2017-09-18T09:21:22.779Z: not part of the command. And multiples times: 2017-09-18T09:21:22.793Z: [09:21:13.416][debug][URLDeployableMonitor] Checking URL [http://localhost:8080/cargocpc/index.html] for status using a timeout of [120000] ms... 
2017-09-18T09:21:22.794Z: [09:21:13.452][debug][URLDeployableMonitor] URL [http://localhost:8080/cargocpc/index.html] is not responding: -1 java.net.ConnectException: Connection refused (Connection refused) 2017-09-18T09:21:22.794Z: [09:21:13.452][debug][URLDeployableMonitor] Notifying monitor listener [org.codehaus.cargo.container.spi.deployer.DeployerWatchdog@7bd4937b] Ending with: 2017-09-18T09:21:32.710Z: 2017-09-18 09:21:28.724 INFO 2232 --- [ost-startStop-1] com.bbs.dm.config.WebConfigurer : Web application configuration, using profiles: prod 2017-09-18T09:21:32.711Z: 2017-09-18 09:21:28.735 INFO 2232 --- [ost-startStop-1] com.bbs.dm.config.WebConfigurer : Web application fully configured 2017-09-18T09:21:32.711Z: 2017-09-18 09:21:28.994 DEBUG 2232 --- [ost-startStop-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Starting Liquibase synchronously 2017-09-18T09:21:36.985Z: Nothing listening on 8080. Please update your configuration and redeploy 2017-09-18T09:21:52.730Z: 2017-09-18 09:21:47.492 DEBUG 2232 --- [ost-startStop-1] i.g.j.c.liquibase.AsyncSpringLiquibase : Started Liquibase in 18498 ms 2017-09-18T09:21:57.985Z: Application start successful 2017-09-18T09:21:57.985Z: No cron to setup 2017-09-18T09:21:57.986Z: Created symlink /etc/systemd/system/multi-user.target.wants/zabbix-agentd.service → /usr/x86_64-pc-linux-gnu/lib/systemd/system/zabbix-agentd.service. I have done nothing else except following Clevercloud documentation to deploy Have I miss something in the configuration? (For info, the application is deploying well on other platform like Heroku or Pivotal) A: To deploy a jhipster application on Clevercloud. Here is what worked for me. I have followed the indications given to create an application and deploy it using the CLI Configuration files: clevercloud/war.json { "build": { "type": "maven", "goal": "package -Pprod -DskipTests" }, "deploy": { "jarName": "target/myapp-1.0.0.war" } } clevercloud/maven.json { "build": { "type": "maven", "goal": "package -Pprod -DskipTests" }, "deploy": { "goal": "package -Pprod -DskipTests" } } I modified my application-prod.yml to link the db.
{ "pile_set_name": "StackExchange" }
Q: How to deserialize array into wrapper object? I want Json.NET deserializer to be able to directly use: ArrayWrapper[] Array { get; set; } property. Should I write custom JsonConverter or there is easier way? public class Obj { //public ArrayWrapper[] Array { get; set; } // I want it to work!!! [JsonProperty( "array" )] public int[][] Array_ { get; set; } [JsonIgnore] public ArrayWrapper[] Array => Array_.Select( i => new ArrayWrapper( i ) ).ToArray(); } public struct ArrayWrapper { private readonly int[] array; public int Item0 => array[ 0 ]; public int Item1 => array[ 1 ]; public ArrayWrapper(int[] array) { this.array = array; } public static implicit operator ArrayWrapper(int[] array) { return new ArrayWrapper( array ); } } Note: the array of arrays is returned by this API: https://github.com/binance-exchange/binance-official-api-docs/blob/master/rest-api.md#klinecandlestick-data. I want to convert inner array into object. A: If you simply want to capture a collection inside a surrogate wrapper object, the easiest way to do so is to make the wrapper appear to be a read-only collection to Json.NET. To do that, you must: Implement IEnumerable<T> for some T (here int). Add a constructor that takes an IEnumerable<T> for the same T. (From experimentation, a constructor that takes T [] is not sufficient.) Thus if you define your ArrayWrapper as follows: public struct ArrayWrapper : IEnumerable<int> { private readonly int[] array; public int Item0 { get { return array[ 0 ]; } } public int Item1 { get { return array[ 1 ]; } } public ArrayWrapper(int[] array) { this.array = array; } public ArrayWrapper(IEnumerable<int> enumerable) { this.array = enumerable.ToArray(); } public static implicit operator ArrayWrapper(int[] array) { return new ArrayWrapper( array ); } public IEnumerator<int> GetEnumerator() { return (array ?? Enumerable.Empty<int>()).GetEnumerator(); } #region IEnumerable Members IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } #endregion } You will be able to serialize and deserialize Obj into the following JSON: {"Array":[[1,101]]} Demo fiddle #1 here. However, in comments you mention your array actually has a fixed schema as documented in Public Rest API for Binance: Kline/Candlestick data. If so, you could adopt the approach from this answer to C#: Parsing a non-JSON array-only api response to a class object with x properties which specifically addresses Binance Kline/Candlestick data: Define an explicit data model for your data. Label each property with [JsonProperty(Order = N)] to indicate relative array positions. ([DataContract] and [DataMember(Order = N)] could be used instead.) Use the converter ObjectToArrayConverter<ArrayWrapper>() from this answer to C# JSON.NET - Deserialize response that uses an unusual data structure. I.e. for the specific model shown in your question, modify its definition as follows: [JsonConverter(typeof(ObjectToArrayConverter<ArrayWrapper>))] public struct ArrayWrapper { [JsonProperty(Order = 1)] public int Item0 { get; set; } [JsonProperty(Order = 2)] public int Item1 { get; set; } } And you will be able to (de)serialize the same JSON. Note that the converter is entirely generic and can be reused any time the pattern of (de)serializing an array with a fixed schema into an object arises. (You might also want to change the struct to a class since mutable structs are discouraged.) Demo fiddles #2 here and #3 here showing the use of a JsonConverter attribute applied to one of the serializable properties.
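For reference, a minimal usage sketch of the first (IEnumerable-based) approach; it assumes Obj is reduced to a single public ArrayWrapper[] Array { get; set; } property and that Newtonsoft.Json is referenced:
var json = "{\"Array\":[[1,101],[2,202]]}";
var obj = JsonConvert.DeserializeObject<Obj>(json);   // each inner array becomes an ArrayWrapper
Console.WriteLine(obj.Array[0].Item1);                // prints 101
var roundTrip = JsonConvert.SerializeObject(obj);     // back to {"Array":[[1,101],[2,202]]}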
{ "pile_set_name": "StackExchange" }
Q: MacBook Pro 2013 hard drive upgradable? My MacBook Pro late-2011 model was recently stolen when someone broke into our car. Luckily there weren't too many other items in the car and my computer was fully backed-up. I am looking for a new MBP and have found several 2013 models. I am looking at a September 2013 model that shows it only comes with 256GB of flash storage. There is no way I can possibly fit even 1/4 of my documents on 256GB. I would like to reserve the flash storage for OS X and other applications while having the rest of the data on another larger drive. I am trying to find out if this model (Sept 2013) can take an extra standard hard drive in the chassis. I had two drives in the model that was stolen. Follow-up: What is the newest MacBook Pro model that has a Serial ATA hard drive connection? Also, how do Mac users with only flash based memory store large files? Do they use external storage for everything? For example, I have a large music collection and use my MacBook for video editing for websites. I wouldn't be able to do that on a 256GB PCIe flash based hard drive. A: No, it cannot. Actually, there is no drive bay at all (standard or optical). The traditional 2.5'' 9.5mm internal notebook drives (whether SSD or HDD) that you're thinking of are not compatible (internally) with this model. This is what the solid state "drive" looks like inside late 2013 models: As you can see from the iFixit photo above, the SSD is really just a stick of flash memory, connected via the PCIe bus. It's made up of 8 identical NAND flash modules (in densities of either 32, 64, or 128 GB). The 256 GB drive is pictured here. There are 8 32GB chips in total, 4 on each side. The chip density corresponds to the listed drive capacity, so 8x64GB modules = 512 GB, etc. Though it is possible to replace the SSD with a larger-capacity one, there's currently no aftermarket upgrade available: Unfortunately, the proprietary PCIe 2.0-based SSD in the "Late 2013" models is limited to a smaller "blade" option, but upgrade options no doubt are forthcoming, nevertheless. Soon forthcoming, indeed. Consider purchasing an external storage device, or exploring 'cloud' based storage options.
{ "pile_set_name": "StackExchange" }
Q: Visualizing the inputs to convolution neural networks using keras and tensorflow as backend There is an in-detailed post on keras blog. But when compiling the code I get the error as follows: Using TensorFlow backend. Traceback (most recent call last): File "visulaize_cifar.py", line 24, in <module> model.add(MaxPooling2D((2, 2), strides=(2, 2))) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/keras/models.py", line 332, in add output_tensor = layer(self.outputs[0]) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 572, in __call__ self.add_inbound_node(inbound_layers, node_indices, tensor_indices) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 635, in add_inbound_node Node.create_node(self, inbound_layers, node_indices, tensor_indices) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 166, in create_node output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0])) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/keras/layers/pooling.py", line 160, in call dim_ordering=self.dim_ordering) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/keras/layers/pooling.py", line 210, in _pooling_function pool_mode='max') File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2866, in pool2d x = tf.nn.max_pool(x, pool_size, strides, padding=padding) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/tensorflow/python/ops/nn_ops.py", line 1617, in max_pool name=name) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1598, in _max_pool data_format=data_format, name=name) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op op_def=op_def) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2242, in create_op set_shapes_for_outputs(ret) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1617, in set_shapes_for_outputs shapes = shape_func(op) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1568, in call_with_requiring return call_cpp_shape_fn(op, require_shape_fn=True) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn debug_python_shape_fn, require_shape_fn) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/tensorflow/python/framework/common_shapes.py", line 675, in _call_cpp_shape_fn_impl raise ValueError(err.message) ValueError: Negative dimension size caused by subtracting 2 from 1 for 'MaxPool_1' (op: 'MaxPool') with input shapes: [1,1,64,128]. This error goes when I set dim_ordering='th'. But as I am using tensorflow backend so dimension ordering should be dim_ordering='tf'. Even after setting dim_ordering as 'th', I get error while loading weights from vgg16_weights.h5 as follows : Traceback (most recent call last): File "visulaize_cifar.py", line 67, in <module> model.layers[k].set_weights(weights) File "/home/dude_perf3ct/.local/lib/python2.7/site-packages/keras/engine/topology.py", line 985, in set_weights 'provided weight shape ' + str(w.shape)) ValueError: Layer weight shape (3, 3, 128, 64) not compatible with provided weight shape (64, 3, 3, 3). 
As detailed in this post about 'th' and 'tf', the above error implies the layer weights are in 'tf' ordering (but I set it to 'th' to avoid the first error) while the provided weight shape is in 'th' ordering. What seems to be the error? A: The answer to this question was pretty simple. I was using TensorFlow as the backend, so to convert I inserted the line if K.backend()=='tensorflow': K.set_image_dim_ordering("th") after from keras import backend as K. This is because vgg16_weights.h5 is in 'th' format, and so is the data returned by cifar10.load_data().
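Put together, the fix looks like this (Keras 1.x-era API, as used at the time):
from keras import backend as K

if K.backend() == 'tensorflow':
    K.set_image_dim_ordering("th")  # vgg16_weights.h5 and cifar10.load_data() use Theano ('th') ordering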
{ "pile_set_name": "StackExchange" }
Q: Spring Data JPA Query - count with in clause I am trying to implement the following query with Spring Data JPA using method-name resolution. select count(*) from example ex where ex.id = id and ex.status in (1, 2); I know there's a count method in the CRUD repository which would look like countByIdAndStatus(params), but I wasn't sure how I can incorporate the "in" clause with this method. Thank you! A: This is how you can do it: long countByIdAndStatusIn(Integer id, List<Type> params); With @Query: @Query("select count(*) from Example ex where ex.id = ?1 and ex.status in (?2)") long countByIdAndStatus(Integer id, List<Type> params);
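In context, the derived-query version would sit in a repository interface along these lines (the entity name and field types are assumptions based on the question):
public interface ExampleRepository extends CrudRepository<Example, Integer> {
    long countByIdAndStatusIn(Integer id, List<Integer> statuses);
}

// usage
long matches = exampleRepository.countByIdAndStatusIn(42, Arrays.asList(1, 2));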
{ "pile_set_name": "StackExchange" }
Q: UserControl Inheritance WPF I wanted to implement a base class for my views. This class looks like the following: public abstract class ViewBase<T> : UserControl, IView where T : ViewModelBase { protected ViewModelBase viewModel; public ViewBase(T viewModel) : base() { this.InitializeComponent(); this.viewModel = viewModel; this.DataContext = viewModel; } protected abstract void InitializeComponent(); public void OnViewCalled(object parameters) { this.viewModel.OnCalled(parameters); } } This is the usage of the base class: public sealed partial class LoginView : ViewBase<LoginViewModel> { public LoginView(LoginViewModel viewModel) : base(viewModel) { } } The problem is that when I try the above code, I get the following error: CS1729 C# 'UserControl' does not contain a constructor that takes 1 arguments Why is not the constructor of the ViewBase class called? A: It appears you inherited from ViewBase in code behind but UserControl in XAML. LoginView.xaml <Local.View:ViewBase x:Class="YourNameSpace.View.LoginView" x:TypeArguments="Local.ViewModel:LoginViewModel" xmlns:Local.View="clr-namespace:YourNameSpace.View" xmlns:Local.ViewModel="clr-namespace:YourNameSpace.ViewModel"> </Local.View:ViewBase> LoginView.xaml.cs namespace YourNameSpace.View { public partial class LoginView : ViewBase<LoginViewModel> { public LoginView(LoginViewModel viewModel) : base(viewModel) { InitializeComponent(); } } } ViewBase.cs namespace YourNameSpace.View { public abstract class ViewBase<T> : UserControl where T : ViewModelBase { protected ViewModelBase ViewModel; public ViewBase(T viewModel) : base() { ViewModel = viewModel; DataContext = ViewModel; } } }
{ "pile_set_name": "StackExchange" }
Q: Too many redirects in ASP.NET MVC I have a problem in my project: when I access the default route, it returns the following error. This page isn't working localhost redirected you too many times. Try clearing your cookies. ERR_TOO_MANY_REDIRECTS The error happens when, in my Index, I redirect to a view of another controller; if I just return View(), everything works. namespace Portal.Web.Client.Controllers { [RoutePrefix("")] [Route("{action=index}")] public class HomeController : Controller { [Route("")] [Route("index")] public ActionResult Index() { //some things if(condicao) return RedirectToAction("sign-in","user"); // it keeps "calling" this method (Action) endlessly. else return View(); //everything works } } } Does anyone have a suggestion as to why this is happening, and how to fix it? A: After reading this question on the English-language Stack Overflow, I changed return RedirectToAction("sign-in","user"); to return Redirect("~/user/sign-in"); and it worked.
{ "pile_set_name": "StackExchange" }
Q: Find a closed formula for $\sum_{n=1}^\infty nx^{n-1}$ Find a closed formula for $\sum_{n=1}^\infty nx^{n-1}$ I am trying to use the derivative of the generalized binomial theorem: differentiating both sides of $(x+1)^r=\sum_{n=0}^\infty \binom{r}{n}x^n$ gives $r(x+1)^{r-1}=\sum_{n=1}^\infty \binom{r}{n}nx^{n-1}$. However, I am not sure how to "get rid" of the binomial term. A: Note that for $|x|<1$, we have $$\sum_{n=0}^\infty x^n=\frac{1}{1-x}$$ Taking derivatives of both sides gives $$\sum_{n=0}^\infty nx^{n-1}=\frac{1}{(1-x)^2}$$ On the LHS, when $n=0$, the term vanishes. Thus $$\sum_{n=1}^\infty nx^{n-1}=\frac{1}{(1-x)^2}$$
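To address the original question about the binomial term directly: take $r=-2$ and replace $x$ by $-x$. Since $\binom{-2}{n}=\frac{(-2)(-3)\cdots(-n-1)}{n!}=(-1)^n(n+1)$, the binomial series becomes $$\frac{1}{(1-x)^2}=(1-x)^{-2}=\sum_{n=0}^\infty \binom{-2}{n}(-x)^n=\sum_{n=0}^\infty (n+1)x^n=\sum_{n=1}^\infty nx^{n-1},$$ which is the same closed form as above.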
{ "pile_set_name": "StackExchange" }
Q: How to persist the cart state in Vuex How can I persist the state of the cart in Vuex after a page reload (the store is split into modules (menu, products, cart))? A: There is an excellent plugin for this called vuex-persistedstate yarn add vuex-persistedstate@latest This plugin uses the browser's localStorage (or sessionStorage, if you pass it as a parameter) to persist the current application state: import Vue from 'vue'; import Vuex, { Store } from 'vuex'; import createPersistedState from 'vuex-persistedstate'; Vue.use(Vuex); export default new Store({ // your state state: {}, // your mutations mutations: {}, // your actions actions: {}, plugins: [createPersistedState()] }); Don't forget to register the store when initializing the application: import store from './path-to-store'; new Vue({ store, render: (h) => h(YOUR_APP_COMPONENT) }).$mount('#app'); In short, what this plugin does is: localStorage.setItem(key, JSON.stringify(state)); Saving to localStorage happens after every commit, and reading happens after every page load: // source code of the `getState` function from the plugin function getState(key, storage, value) { try { return (value = storage.getItem(key)) && typeof value !== 'undefined' ? JSON.parse(value) : undefined; } catch (err) {} return undefined; } Since your store is split into modules, each module should have a local namespace: const cart = { namespaced: true, state: { ... } }; new Store({ ....., modules: { cart } }); This lets us avoid conflicts between mutations with the same name; you commit a mutation via the module name + / + the mutation name: store.commit('cart/mutationName', mutationValue);
{ "pile_set_name": "StackExchange" }
Q: How to replace the Zebble SignaturePad UI Component or add and use another SignaturePad component? Using Visual Studio, when selecting 'Zebble for Xamarin - Cross Platform Solution' a default project will be created with five pages. I've modified the fifth page to implement a signature pad. Below is the following Page-5.zbl code. <?xml version="1.0"?> <z-Component z-type="Page5" z-base="Templates.Default" z-namespace="UI.Pages" z-partial="true" Title="About us" data-TopMenu="MainMenu" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="./../.zebble-schema.xml"> <z-place inside="Body"> <TextView Text="Hello world!" /> <SignaturePad Id="sigPad1" Enabled="true" LineThickness="4" Style.Border.Color="red" Style.Width="100" Style.Height="100"/> </z-place> </z-Component> Which ends up adding this line to .zebble-generated.cs: await Body.Add(sigPad1 = new SignaturePad { Id = "sigPad1", Enabled = true, LineThickness = 4 } .Set(x => x.Style.Border.Color = "red") .Set(x => x.Style.Width = 100) .Set(x => x.Style.Height = 100)); I have been looking at this SignaturePad component package: https://github.com/xamarin/SignaturePad If I wanted to use the Xamarian SignaturePad component or anyone else's SignaturePad component instead of the Zebble SignaturePad UI component, how would I do that? A: To use a third party component, all you need to do is to create a Zebble wrapper around it. It's explained here: http://zebble.net/docs/customrenderedview-third-party-native-components-plugins Step 1: Creating Native Adapter(s) You should first create a Zebble view class to represent an instance of your component using the following pattern. This class will be in the Shared project, available to all 3 platforms. namespace Zebble.Plugin { partial class MyComponent : CustomRenderedView<MyComponentRenderer> { // TODO: Define properties, method, events, etc. } } Note: To make the VS IntelliSense in ZBL files recognize this, you should create a .ZBL file for MyComponent as well: <z-Component z-type="MyComponent" z-base="CustomRenderedView[MyComponentRenderer]" z-namespace="Zebble.Plugin" z-partial="true" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="./../.zebble-schema.xml" /> The next step will be to create the renderer classes. Step 2: Creating Native Renderers(s) You need to create the following class each platform (UWP, iOS, Android). public class MyComponentRenderer : ICustomRenderer { MyComponent View; TheNativeType Result; public object Render(object view) { View = (MyComponent)view; Result = new TheNativeType(); // TODO: configure the properties, events, etc. return Result; } public void Dispose() => Result.Dispose(); } Using it in the application code In the application code (App.UI) you can use MyComponent just like any other built-in or custom view type. <Zebble.Plugin.MyComponent Id="..." Property1="..." on-Event1="..." />
{ "pile_set_name": "StackExchange" }
Q: How to update old Android project for the updates in new Android SDK tools? Previously I was using an older version of Android SDK Tools, and now I moved to a new pc, setup my new development environment from scratch and copied and imported the projects from the previous pc. Now if I create a new project in the current environment I notice that the SDK creates a file proguard.cfg in the root folder of the project which I didn't see for any of my previous projects on the previous setup. I looked up the use of proguard.cfg and it sounds useful. But this proguard.cfg file is not present in the projects which I created on the previous setup. And, it also makes me think that the new SDK tools might also have some new updates and/or feature additions which I could make use of. I should mention that the target platform which I'm using is the same - Android 2.2, but the "Android SDK Tools" have been updated. Also, I'm NOT getting any error of any kind as of now, but as I said, I want to make use of updates to the SDK tools (and not to the target platform updates). So, my question is - Is there a way to ensure that I'm not missing anything updated by Android SDK like the proguard.cfg or I'll need to myself identify such things and manually add the file, and similarly for any other things which got improved in the newer SDK tools. A: In the tools directory of the Android SDK, there's the executable android. You can use this to update a project by passing the path to the directory containing the AndroidManifest.xml: $ANDROID_SDK/tools/android update project -p /path/to/my-android-project This is documented on the SDK site under Managing Projects from the Command Line > Updating a Project. The Eclipse plugin doesn't do this automatically, and there's no equivalent section in the Managing Projects from Eclipse with ADT page, so it looks like you have to use the command line tool.
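If you also want to pin the project to a specific build target while updating (the question targets Android 2.2, which is android-8), the same tool can do it; the valid target ids on your machine come from the list command:
$ANDROID_SDK/tools/android list targets
$ANDROID_SDK/tools/android update project --path /path/to/my-android-project --target android-8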
{ "pile_set_name": "StackExchange" }
Q: Adding files to ZIP file I am trying to add some files to a ZIP file, it creates the file but does not add anything into it. Code 1: String fulldate = year + "-" + month + "-" + day + "-" + min; File dateFolder = new File("F:\\" + compname + "\\" + fulldate); dateFolder.mkdir(); String zipName = "F:\\" + compname + "\\" + fulldate + "\\" + fulldate + ".zip"; zipFolder(tobackup, zipName); My function: public static void zipFolder(File folder, String name) throws Exception { byte[] buffer = new byte[18024]; ZipOutputStream out = new ZipOutputStream(new FileOutputStream(name)); FileInputStream in = new FileInputStream(folder); out.putNextEntry(new ZipEntry(name)); int len; while((len = in.read(buffer)) > 0) { out.write(buffer, 0, len); } out.closeEntry(); in.close(); out.close(); } Edit: I found the problem, it was just having trouble writing files from the C:\ drive into a ZIP in the F:\ drive A: You can't zip folders, only files. To zip folders, you have to add all the subfiles manually. I wrote this class that does the job. You can have it for free :) The usage would be this: List<File> sources = new ArrayList<File>(); sources.add(tobackup); Packager.packZip(new File(zipName), sources); Here is the class: import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.io.IOException; import java.util.List; import java.util.zip.Deflater; import java.util.zip.ZipEntry; import java.util.zip.ZipOutputStream; public class Packager { public static void packZip(File output, List<File> sources) throws IOException { System.out.println("Packaging to " + output.getName()); ZipOutputStream zipOut = new ZipOutputStream(new FileOutputStream(output)); zipOut.setLevel(Deflater.DEFAULT_COMPRESSION); for (File source : sources) { if (source.isDirectory()) { zipDir(zipOut, "", source); } else { zipFile(zipOut, "", source); } } zipOut.flush(); zipOut.close(); System.out.println("Done"); } private static String buildPath(String path, String file) { if (path == null || path.isEmpty()) { return file; } else { return path + "/" + file; } } private static void zipDir(ZipOutputStream zos, String path, File dir) throws IOException { if (!dir.canRead()) { System.out.println("Cannot read " + dir.getCanonicalPath() + " (maybe because of permissions)"); return; } File[] files = dir.listFiles(); path = buildPath(path, dir.getName()); System.out.println("Adding Directory " + path); for (File source : files) { if (source.isDirectory()) { zipDir(zos, path, source); } else { zipFile(zos, path, source); } } System.out.println("Leaving Directory " + path); } private static void zipFile(ZipOutputStream zos, String path, File file) throws IOException { if (!file.canRead()) { System.out.println("Cannot read " + file.getCanonicalPath() + " (maybe because of permissions)"); return; } System.out.println("Compressing " + file.getName()); zos.putNextEntry(new ZipEntry(buildPath(path, file.getName()))); FileInputStream fis = new FileInputStream(file); byte[] buffer = new byte[4092]; int byteCount = 0; while ((byteCount = fis.read(buffer)) != -1) { zos.write(buffer, 0, byteCount); System.out.print('.'); System.out.flush(); } System.out.println(); fis.close(); zos.closeEntry(); } } Enjoy! EDIT: To check if the program is still busy, you can add the three lines I marked with a (*) EDIT 2: Try the new code. On my platform, it runs correct (OS X). I'm not sure but, there might be some limited read permissions for files in Windows in AppData.
{ "pile_set_name": "StackExchange" }
Q: Multiple DataTable with the same columns in Visual C# I wish to create an array of DataTables with the same columns. Is there a way to set the schema of all the DataTables in one shot instead of running through each DataTable and adding the columns? A: No, you can't do this without a loop. Even if there were a method like DataSet.CloneManyTables it would use a loop for you. A LINQ solution would also use a loop. So use the following: You can use DataTable.Clone, for example with 100 clone tables: for (int i = 1; i <= 100; i++) { DataTable tClone = tSource.Clone(); // tSource is your source table tClone.TableName = $"{tClone.TableName}_{i + 1}"; ds.Tables.Add(tClone); // ds is your DataSet } The columns can have the same names but the table name must be unique.
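A minimal end-to-end sketch of the same idea, this time cloning into an array as the question asks (the table and column names are made up for illustration):
DataTable tSource = new DataTable("Students");
tSource.Columns.Add("Id", typeof(int));
tSource.Columns.Add("Name", typeof(string));

DataTable[] tables = new DataTable[100];
for (int i = 0; i < tables.Length; i++)
{
    tables[i] = tSource.Clone();               // copies the columns and constraints, not the rows
    tables[i].TableName = $"Students_{i + 1}";
}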
{ "pile_set_name": "StackExchange" }
Q: Pass a vbscript String list to a SQL "in"operator In the vb script I have a select statement I am trying to pass a string value with an undetermined length to a SQL in operator the below code works but allows for SQL injection. I am looking for a way to use the ADO createParameter method. I believe the different ways I have tried are getting caught up in my data type (adVarChar, adLongChar, adLongWChar) Dim studentid studentid = GetRequestParam("studentid") Dim rsGetData, dbCommand Set dbCommand = Server.CreateObject("ADODB.Command") Set rsGetData = Server.CreateObject("ADODB.Recordset") dbCommand.CommandType = adCmdText dbCommand.ActiveConnection = dbConn dbCommand.CommandText = "SELECT * FROM students WHERE studentID in (" & studentid & ")" Set rsGetData = dbCommand.Execute() I have tried Call addParameter(dbCommand, "studentID", adVarChar, adParamInput, Nothing, studentid) which gives me this error ADODB.Parameters error '800a0e7c' Problems adding parameter (studentID)=('SID0001','SID0010') :Parameter object is improperly defined. Inconsistent or incomplete information was provided. I have also tried Call addParameter(dbCommand, "studentID", adLongVarChar, adParamInput, Nothing, studentid) and Dim studentid studentid = GetRequestParam("studentid") Dim slength slength = Len(studentid) response.write(slength) Dim rsGetData, dbCommand Set dbCommand = Server.CreateObject("ADODB.Command") Set rsGetData = Server.CreateObject("ADODB.Recordset") dbCommand.CommandType = adCmdText dbCommand.ActiveConnection = dbConn dbCommand.CommandText = "SELECT * FROM students WHERE studentID in (?)" Call addParameter(dbCommand, "studentID", adVarChar, adParamInput, slength, studentid) Set rsGetData = dbCommand.Execute() both of these options don't do anything... no error message and the SQL is not executed. Additional information: studentid is being inputted through a HTML form textarea. the design is to be able to have a user input a list of student id's (up to 1000 lines) and perform actions on these student profiles. in my javascript on the previous asp I have a function that takes the list and changes it into a comma delimited list with '' around each element in that list. A: Classic ASP does not have good support for this. You need to fall back to one of the alternatives discussed here: http://www.sommarskog.se/arrays-in-sql-2005.html That article is kind of long, but in a good way: it's considered by many to be the standard work on this subject. It also just so happens that my preferred option is not included in that article. What I like to do is use a holding table for each individual item in the list, such that each item uses an ajax request to insert or remove it from the holding table the moment the user selects or de-selects it. Then I join to that table for my list, so that you end up with something like this: SELECT s.* FROM students s INNER JOIN studentSelections ss on s.StudentID = ss.StudentID WHERE ss.SessionKey = ?
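If a holding table is more than you want, another way to keep the query parameterised is to build one placeholder per value and bind each value separately. A rough sketch (it assumes the textarea value reaches the server as a plain comma-separated list without quotes, and it reuses the addParameter helper from the question):
Dim ids, i, placeholders
ids = Split(studentid, ",")              ' e.g. "SID0001,SID0010"
placeholders = ""
For i = 0 To UBound(ids)
    If i > 0 Then placeholders = placeholders & ","
    placeholders = placeholders & "?"
Next
dbCommand.CommandText = "SELECT * FROM students WHERE studentID IN (" & placeholders & ")"
For i = 0 To UBound(ids)
    Call addParameter(dbCommand, "studentID" & i, adVarChar, adParamInput, Len(ids(i)), ids(i))
Next
Set rsGetData = dbCommand.Execute()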
{ "pile_set_name": "StackExchange" }
Q: NP languages definition Is it good to define a language $\mathcal{L}$ in NP as a language for which, given an element $x$, it is possible in polynomial-time to check whether $x \in \mathcal{L}$ or not? Because I need to have an informal definition of that, in order to give just an idea of it, without using formalism. Otherwise how could I define it roughly speaking? A: Is it good to define a language $\mathcal{L}$ in NP as a language for which, given an element $x$, it is possible in polynomial-time to check whether $x \in \mathcal{L}$ or not? No. If you could do that you could "check" that this language would be in P, because you can check (or more formally decide) all possible words for membership in polynomial time this way. A correct formulation would be: A language $\mathcal L$ is in NP if for every word $x$ in the language there exists a witness $w$ which is of length polynomial in the length of $x$ and given the witness and the word you can verify language-membership in polynomial time. Note that you only need said witness (sometimes also called certificate) need only exist for words that actually are in the language, i.e. you don't need to be able to construct them for negative instances and constructing them is allowed to take super-polynomial time. A: No! The definition you give is the definition of P! NP is the class of languages $\mathcal{L}$ where given $x$ and what you might call a "proof" $y$, you can deterministically check in time polynomial in the size of $x$ and $y$ whether $y$ really does prove that $x\in\mathcal{L}$. In addition, the size of $y$ must be bounded by some polynomial in the size of $x$. Examples of proofs would be satisfying assignments for $\text{SAT}$, colourings for $3\text{-COL}$, and so on. These proofs are more commonly called "certificates" or "witnesses". The key distinction is that the language itself is a yes/no decision problem (e.g., "Is this formula satisfiable?"), whereas the proof tends to be a solution to the related function problem (e.g., "Give me a satisfying assignment for this formula, if it has one.").
{ "pile_set_name": "StackExchange" }
Q: What would cause Prolog to succeed on a match, but fail when asked to label outputs? I'm trying to solve a logic puzzle with Prolog, as a learning exercise, and I think I've correctly mapped the problem using the GNU Prolog finite domain solver. When I run the solve function, Prolog spits back: yes and a list of variables all bounded in the range 0..1 (booleans, as I've so constrained them). The problem is, when I try to add a fd_labeling(Solution) clause, Prolog about faces and spits out: no. I'm new to this language and I can't seem to find any course of attack to figure out why everything seems to work until I actually ask it to label the answers... A: Apparently, you didn't "correctly" map the problem to FD, since you get a "no" when you try to label the variables. What you do in Constraint Logic Programming is set up a constraint model, where you have variables with a domain (in your case booleans with the domain [0,1]), and a number of constraints between these variables. Each constraint has a propagation rule that tries to achieve consistency for the domains of the variables on which the constraint is posted. Values that are not consistent are removed from the domains. There are several types of consistency, but they have one thing in common: the constraints usually won't by themselves give you a full solution, or even tell you whether there is a solution for the constraint model. As an example, say you have two variables X and Y, both with domains [1..10], and the constraint X < Y. Then the propagation rule will remove the value 1 from the domain of Y and remove 10 from the domain of X. It will then stop, since the domains are now consistent: for each value in one domain there exists a value in the other domain so that the constraint is fulfilled. In order to get a solution (where all variables are bound to values), you need to label variables. Each labeling will wake up the constraints attached to the labeled variable, triggering another round of propagation. This will lead to a solution (all variables bound to values, answer: yes) or failure (in each branch of the search tree, some variable ends up with an empty domain, answer: no) Since each constraint is only aiming for consistency of the domains of the variables on which it is posted, it is possible that an infeasibility that arises from a combination of constraints is not detected during the propagation stage. For example, three variables X,Y,Z with domains [1..2], and pairwise inequality constraints. This seems to have happened with your constraint model. If you are sure that there must be a solution to the puzzle, then your constraint model contains some infeasibility. Maybe a sharp look at the constraints is already sufficient to spot it. If you don't see any obvious infeasibility (e.g., some contradicting constraints like the inequality example above), you need to debug your program. If it's possible, don't use a built-in labeling predicate, but write your own. Then you can add some output predicate that allows you to trace what variable was instantiated and what changes in the boolean decision variables this caused or whether it led to a failure. A: (@twinterer already gave an explanation, my answer tries to take it from a different angle) When you enter a query to Prolog what you get back is an answer. Often an answer contains a solution, sometimes it contains several solutions and sometimes it does not contain any solution at all. Quite often these two notions are confused. 
Let's look at examples with GNU Prolog: | ?- length(Vs,3), fd_domain_bool(Vs). Vs = [_#0(0..1),_#19(0..1),_#38(0..1)] yes Here, we have an answer that contains 8 solutions. That is: | ?- length(Vs,3), fd_domain_bool(Vs), fd_labeling(Vs). Vs = [0,0,0] ? ; Vs = [0,0,1] ? ; ... Vs = [1,1,1] yes And now another query. That is the example @twinterer referred to. | ?- length(Vs,3), fd_domain_bool(Vs), fd_all_different(Vs). Vs = [_#0(0..1),_#19(0..1),_#38(0..1)] yes The answer looks the same as before. However, it does no longer contain a solution. | ?- length(Vs,3), fd_domain_bool(Vs), fd_all_different(Vs), fd_labeling(Vs). no Ideally in such a case, the toplevel would not say "yes" but "maybe". In fact, CLP(R), one of the very first constraint systems, did this. Another way to make this a little bit less mysterious is to show the actual constraints involved. SWI does this: ?- length(Vs,3), Vs ins 0..1, all_different(Vs). Vs = [_G565,_G568,_G571], _G565 in 0..1, all_different([_G565,_G568,_G571]), _G568 in 0..1, _G571 in 0..1. ?- length(Vs,3), Vs ins 0..1, all_different(Vs), labeling([], Vs). false. So SWI shows you all constraints that have to be satisfied to get a concrete solution. Read SWI's answer as: Yes, there is a solution, provided all this fine print is true! Alas, the fine print is false. And yet another way to solve this problem is to get an implementation of all_different/1 with stronger consistency. But this only works in specific cases. ?- length(Vs,3), Vs ins 0..1, all_distinct(Vs). false. In the general case you cannot expect a system to maintain global consistency. Reasons: Maintaining consistency can be very expensive. It is often better to delegate such decisions to labeling. In fact, the simple all_different/1 is often faster than all_distinct/1. Better consistency algorithms are often very complex. In the general case, maintaining global consistency is an undecidable problem.
{ "pile_set_name": "StackExchange" }
Q: Disable user registration password email So, WordPress 4.3 has a new password system as we all know. Unfortunately, this new system has done away with the ability to NOT send new users an email. My client was using a system where he sent a custom email to his clients, manually registering their emails, and then sending them an email with the login info with a custom message. We are aware that this new system is trying to be more secure, but this isn't working for the amount of control he would like. I've found the following code in my search for a solution to turn these emails off, but I think they only turn off the notification emails for if a user's email is CHANGED for previously registered users, not when it's first created: add_filter( 'send_password_change_email', '__return_false'); add_filter( 'send_email_change_email', '__return_false'); Does anyone know of any way to turn off these initial password emails sent after registration? Thank you. A: You can intercept this email before it is sent using the phpmailer_init hook. By default, this hook fires before any email is sent. In the function below, $phpmailer will be an instance of PHPMailer, and you can use its methods to remove the default recipient and manipulate the email before it is sent. add_action('phpmailer_init', 'wse199274_intercept_registration_email'); function wse199274_intercept_registration_email($phpmailer){ $admin_email = get_option( 'admin_email' ); # Intercept username and password email by checking subject line if( strpos($phpmailer->Subject, 'Your username and password info') ){ # clear the recipient list $phpmailer->ClearAllRecipients(); # optionally, send the email to the WordPress admin email $phpmailer->AddAddress($admin_email); }else{ #not intercepted } } A: Actually it depends how you create the new user. If you do it from administration - Users - Add New you are right. In 4.3 unfortunatelly you cannot disable sending the notification email. But if you really want to create a new user without the email, there is a way. You can create a small plugin where you'd create a new account by yourself via wp_insert_user function, which doesn't send any email by default. This function can be called like this. wp_insert_user( $userdata ); Where the userdata parameter is an array where you can pass all needed information. $userdata = array( 'user_login' => 'login', 'user_pass' => 'password', ); $user_id = wp_insert_user( $userdata ) ; //On success if ( ! is_wp_error( $user_id ) ) { echo "User created : ". $user_id; } For more informations check codex here. A: The wp_new_user_notification function is pluggable, so you can override it by defining your own. You should be able to copy the entire function from wp-includes/pluggable.php into your plugin (or functions.php) and remove the line that sends out the email.
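A sketch of that pluggable override: define the function yourself (with an empty body) before WordPress loads pluggable.php, i.e. in a plugin or mu-plugin rather than a theme. Note that the exact signature has changed between WordPress versions, so copy it from your own wp-includes/pluggable.php; the form below is an assumption:
if ( ! function_exists( 'wp_new_user_notification' ) ) {
    function wp_new_user_notification( $user_id, $deprecated = null, $notify = '' ) {
        // Intentionally empty: suppress the default new-user notification emails.
    }
}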
{ "pile_set_name": "StackExchange" }
Q: Translation question on part of a lullaby I was looking for Mama's Lullaby on Youtube (Once there was a mama bear / sitting in her rocking chair etc.), and I bumped into this. After the astonishment that a video of an evidently Japanese song could be titled "Re: mama's lullaby", I decided to try understanding it. My tentative transcription was mostly right, but then I found the lyrics here. I kanji-ized the part in the video as: いつもそばにいるの あなたには見えないけど いつもそばにいるの 思い出して くれている時も 今日も一日 楽しかったわね あなたが笑うと ママも嬉しい Which should mean: I will always be on your side Even though you can't see me I will always be on your side Even in the times when you will remember me Today too was A funny day, wasn't it? If you smile Mama is happy too I have a couple of doubts: Am I right in taking the の in the repeated line いつもそばにいるの as a marker of emphasis, as is an option in sense 3 here? Is my interpretation of implied subjects and objects right? This is an especially wild guess in 思い出して くれている時も. Is my translation of 一日 correct? Also, do you know where I can find a complete video of the song? Searching for the title given on mojim returns all sorts of videos, but definitely not the song here… A: This is an ending theme from anime 怪傑ゾロリ, and ゾロリ's mama has already been dead for years when the story begins, so... The の is #❷-1 in デジタル大辞泉: [終助]活用語の連体形に付く。 1 (下降調のイントネーションを伴って)断定の言い方を和らげる意を表す。多く、女性が使用する。 It's a sentence-ending particle. (With a falling tone) You use it to soften an assertive statement. It's more used by women. So ゾロリ's late mama is talking/singing to him: I'm always by your side Even though you can't see me I think it means: I'm always by your side (Of course) When you remember me, too (as well as when you're doing something else) As the other poster says the 楽しい means "fun, enjoyable." breakdown: 今日も today, too/again 一日 one day, the whole day 楽しかった you had fun, you enjoyed わね (a feminine way of ending a sentence) ≒ ね So I think it'd be something like: "(I can see that) you had another fun day today." "You had fun/enjoyed the whole day today again (right?/didn't you?)"
{ "pile_set_name": "StackExchange" }
Q: Free subgroup of linear groups Suppose $a,b \in GL(n,\mathbb{C})$, and $\langle a,b\rangle$ is a free group of rank $2$. Is there a way to choose a $c$ to guarantee that $\langle a,b,c\rangle$ is a free group of rank $3$? A: Yes. Note first that it suffices to answer the question in $\textit{SL}(2,\mathbb{C})$. In particular, if $A,B\in\textit{GL}(2,\mathbb{C})$, let $\hat{A},\hat{B}$ be scalar multiples of $A,B$ that lie in $\textit{SL}(2,\mathbb{C})$, and suppose that there exists a $C\in\textit{SL}(2,\mathbb{C})$ so that $\langle\hat{A},\hat{B},C\rangle$ is a free group of rank $3$. Then for any word $w(x_1,x_2,x_3)$ in the free group $\langle x_1,x_2,x_3\rangle$, if $w(A,B,C) = I$, then $w(\hat{A},\hat{B},C)$ would have to be a scalar matrix, and hence would lie in the center of $\langle\hat{A},\hat{B},C\rangle$, which is impossible. We conclude that $w(A,B,C) \ne I$ for all $w$, and hence $\langle A,B,C\rangle$ is free as well. So suppose $A,B\in\textit{SL}(2,\mathbb{C})$ and $\langle A,B\rangle$ is free of rank $2$. Consider the group $\textit{SL}(2,R)$, where $R$ is the ring $\mathbb{C}[t_1,t_2,t_3,t_4]/(t_1t_4-t_2t_3-1)$, and let $$ T \;=\; \begin{bmatrix}t_1 & t_2 \\ t_3 & t_4\end{bmatrix}. $$ Note that $T$ is invertible over $R$, with $$ T^{-1} \;=\; \begin{bmatrix}t_4 & -t_2 \\ -t_3 & t_1\end{bmatrix}, $$ and hence $T \in \textit{SL}(2,R)$. Then $\langle A,B,T\rangle$ must be free, since any element of $\langle A,B\rangle$ can be substituted for $T$, and the free group $\langle A,B\rangle$ has no non-trivial laws. Now, if $w = w(x_1,x_2,x_3)$ is any non-trivial word in the free group $\langle x_1,x_2,x_3\rangle$, the equation $$ w(A,B,T) \;=\; I $$ is equivalent to a nontrivial system of polynomial equations in $t_1,t_2,t_3,t_4$, which define a proper subvariety $V_w$ of $\textit{SL}(2,\mathbb{C})$. But $\textit{SL}(2,\mathbb{C})$ cannot be expressed as a countable union of proper subvarieties, and hence there exists a matrix $C\in\textit{SL}(2,\mathbb{C})$ that does not lie in any $V_w$. Then $\langle A,B,C\rangle$ is free. Practically speaking, the way to choose $C$ is to choose a generic matrix, i.e. a matrix whose entries do not satisfy any polynomial equations over the field generated by the entries of $A$ and $B$.
{ "pile_set_name": "StackExchange" }
Q: Using Oracle's CREATE SEQUENCE as secure random I was wondering if Oracle's CREATE SEQUENCE with NOORDER statement (enabled by default) could be used as a secure random number generator ? I mean, is it really random or it could be predicted ? I couldn't find doc regarding to this spec. Thanks in advance. lemon A: No. If you want random numbers, use the dbms_crypto package. A sequence with NOORDER specified will still give sequential, ordered numbers in a non-RAC system. If you are using RAC, however, each node will have a separate cache of values and one node will be generating values that are greater than the other nodes at the same time. For example, if you have a 3 node cluster and a sequence with a cache of 20, node 1 would cache the values 1-20, node 2 would cache the values 21-40, and node 3 would cache the values 41-60. If a session on node 3 requested the nextval, it would get 41. Then, if another session on node 1 requested the nextval, it would get the value 1. The next request on node 3 would get 42, the next request on node 1 would get 2, the next request on node 2 would get 21. And, of course, once each node exhausted its cache, it would cache the next 20 values. If one node persistently gets more nextval requests, it can get much further ahead of the nextval on the other nodes.
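For completeness, a small PL/SQL sketch of the dbms_crypto route (it assumes your schema has been granted EXECUTE on DBMS_CRYPTO):
DECLARE
  l_raw RAW(16);
  l_int BINARY_INTEGER;
BEGIN
  l_raw := DBMS_CRYPTO.RANDOMBYTES(16);   -- 16 cryptographically strong random bytes
  l_int := DBMS_CRYPTO.RANDOMINTEGER;     -- a random signed integer
  DBMS_OUTPUT.PUT_LINE(RAWTOHEX(l_raw) || ' ' || l_int);
END;
/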
{ "pile_set_name": "StackExchange" }
Q: Find the indexes of dependent columns I have a fairly large singular square matrix, of size 37 and rank 35. Is there a way to get a vector with the (groups of) indexes of the columns in the original matrix that are linearly dependent? RowReduce et al won't help me, because they perform an arbitrary number of row swaps. A: If RowReduce won't help, then perhaps I don't know what you're looking for. Here's my understanding of the question in which I use RowReduce to get the answer. Example A random matrix: SeedRandom[1]; mat = RandomSample[#~Join~Accumulate@RandomSample[#, 2] &@ RandomInteger[{-5, 5}, {35, 37}]]; MatrixRank[mat] (* 35 *) We can use the SparseArray property "AdjacencyLists" to find for each column, which columns it depends on. I'm not sure what form the final result should be in. The output below indicates for each column, which columns it is a linear of. An entry of the form i -> {i} indicates an independent vector. The entry 21 -> {3, 8} indicates that column 21 is a linear combinations of columns 3 and 8. The leading 1 in each nonzero row of the reduced matrix indicates the column with which the columns with nonzero entries in the row have a relationship. The set of rules row2col basically indicates for each row, the column in which the leading 1 is located. sa = Transpose @ Unitize @ SparseArray @ RowReduce[mat2]; row2col = Rule @@@ Reverse /@ First /@ GatherBy[sa["NonzeroPositions"], First]; Thread[Range[First @ Dimensions @ sa] -> (sa["AdjacencyLists"] /. row2col)] (* {1 -> {1}, 2 -> {2}, 3 -> {3}, 4 -> {4}, 5 -> {5}, 6 -> {6}, 7 -> {7}, 8 -> {8}, 9 -> {9}, 10 -> {10}, 11 -> {11}, 12 -> {12}, 13 -> {13}, 14 -> {14}, 15 -> {15}, 16 -> {16}, 17 -> {17}, 18 -> {18}, 19 -> {19}, 20 -> {20}, 21 -> {3, 8}, 22 -> {22}, 23 -> {23}, 24 -> {24}, 25 -> {3}, 26 -> {26}, 27 -> {27}, 28 -> {28}, 29 -> {29}, 30 -> {30}, 31 -> {31}, 32 -> {32}, 33 -> {33}, 34 -> {34}, 35 -> {35}, 36 -> {36}, 37 -> {37}} *) A: One way is to start with empty matrix. Add the first column. Then loop, each time adding the next column, and checking if the rank of this matrix has increased from before, if so, keep it, else skip over to the next column. Keep doing this until you reach the last column in the original matrix, or have collected m columns, where m is the rank of the original matrix. (no need to keep trying if found m columns). function indepCols[mat_?(MatrixQ[#, NumericQ] &)] := Module[{nRows, nCols, m, idx, i, vecs, candidate}, {nRows, nCols} = Dimensions[mat]; m = MatrixRank[mat]; idx = Table[1, {m}]; (*arrary to collect the index of columns*) vecs = {mat[[All, 1]]}; (*first column is always in*) Do[ candidate = Join[vecs, {mat[[All, i]]}]; If[MatrixRank[candidate] > Length[vecs],(*did the rank increase?*) idx[[Length[vecs] + 1]] = i; vecs = candidate; If[Length[vecs] == m, Break[]](*bail out if got the rank*) ], {i, 2, nCols} ]; {idx, vecs} ] To use the above function: First example (mat = {{1, 0, -2, 1, 0}, {0, -1, -3, 1, 3}, {-2, -1, 1, -1, 3}, {-2, -1, 1, -1, 3}, {0, 3, 9, 0, -12}}) // MatrixForm The above has rank 3. So there will be 3 L.I. columns {idx, out} = indepCols[mat]; Transpose[out] // MatrixForm idx (*{1, 2, 4}*) Verified using "AdjacencyLists" thanks to Michael E2 above. sa = Transpose@Unitize@SparseArray@RowReduce[mat]; row2col = Rule @@@ Reverse /@ First /@ GatherBy[sa["NonzeroPositions"], First]; Thread[Range[First@Dimensions@sa] -> (sa["AdjacencyLists"] /. 
row2col)] second example Using example given by Michael E2, which is much larger SeedRandom[1]; mat = RandomSample[#~Join~Accumulate@RandomSample[#, 2] &@ RandomInteger[{-5, 5}, {35, 37}]]; {idx, out} = indepCols[mat]; idx Verified: sa = Transpose@Unitize@SparseArray@RowReduce[mat]; row2col = Rule @@@ Reverse /@ First /@ GatherBy[sa["NonzeroPositions"], First]; Thread[Range[First@Dimensions@sa] -> (sa["AdjacencyLists"] /. row2col)] The above gives the index of the L.I. columns. To find the index of the L.D. columns, simply take the complement: Complement[Range[1, Length[mat]], idx] (*{36, 37}*)
{ "pile_set_name": "StackExchange" }
Q: .each is executing block on empty array I've always known that: a = [] a.each do |x| puts "I'm still being executed" end will output nothing as long as the array is empty. It behaves that way in the console too. But in my application the block is still being executed on what should be an empty array, and I've been unable to debug it. Is there something I am missing? EDIT: Here's the original code: <% @idea.comments.each do |c| %> <div class="left_box gray"> <%= c.body %> </div> <% end %> where @idea has_many comments. A: That is impossible: each never yields the block for an empty collection. Something is wrong elsewhere in your code; look at it carefully.
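One way to look carefully is to print what the view is actually iterating over, right above the loop, for example:
<%# temporary debug output: check the class and size of the collection %>
<p><%= @idea.comments.class %> (<%= @idea.comments.size %> comments)</p>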
{ "pile_set_name": "StackExchange" }
Q: "Righello" or "riga graduata"? Sometimes I find that I lack the right vocabulary to describe everyday things. What is the most common word for the instrument in the photograph? I found "righello", but also "riga graduata", on Wikipedia. The image is also taken from Wikipedia. A: I wouldn't say anything other than "righello". "Riga graduata", while correct and clear, is a more formal expression, one I would expect to find, if anywhere, in a catalogue or a legal text; or in a mathematics text, but only if strictly necessary (for example, to specify that the classic geometric constructions with ruler and compass ("riga e compasso") involve a ruler that is not graduated).
{ "pile_set_name": "StackExchange" }
Q: An animation featuring two chibis wearing tiger and dragon costumes duking it out I came across a few clips of this a few days ago, but I didn't attempt a reverse search because it was in .webm form, which I haven't had much luck reverse searching through Google. The animation seemed to feature two main characters: a girl with short hair wearing cat ears, and a glassed girl wearing a dragon suit who appeared to be the enemy. The latter might have also worn braids. The dragon suit led me to believe that the first girl was supposed to represent a tiger, what with various elements of Japanese mythology representing them as rivals. The tiger-girl may have had some sort of transformation sequence before the fight. The dragon-girl could control ice, which put their fight heavily in her favor, since she and the tiger-girl were fighting in an area that was frozen over and covered in snow and icicles. Throughout the scene, the tiger-girl attempted (mostly in vain) to land blows on her opponent while dodging the icicles her opponent broke off from the surroundings and shot at her. At one point, the tiger-girl managed to dodge a volley of icicles and take advantage of an opening in the dragon-girl's defense to leap into the air and try to claw at her, which was shown through silhouette. Time slowed down for a moment as she flew through the air, and then it returned to normal as another wave of icicles seemingly impaled her in midair from all directions, kind of like this: The camera then changed angles out of silhouette to show that the tiger-girl hadn't been stabbed - she was only trapped and tangled in the interlocking icicles as she attempted to break free. That's about all I could remember. The animation looked very smooth and polished. At one point, something shattered, sending beautifully animated ice shards flying. Both characters were super-deformed/chibi. I used "animation" and not "anime" in describing it, because I am not sure if it was just some independent video or an actual series. I tried searching various combinations of relevant keywords on Google, like "3d", "icicles", "chibi", etc., but most of what I got was mediocre DeviantArt and drawing tutorials. A: It is Etotama, an anime currently airing this season (Spring 2015). Synopsis from MAL: The anime's story revolves around Nya-tan, the cat of Chinese astrology who wants to become a member of the Chinese zodiac. [...] She meets Takeru Tendo, a high school student who lives alone in Akihabara, and becomes a freeloader at his house. Little by little, she gets closer to her goal. The anime's setting is built upon this folk story, which explains the origin of the 12 animals in the Chinese Zodiac via a story about a race between the animals to become part of the zodiac. The Cat in variations of this folk story were always tricked by the Rat in one way or another and missed its chance to become part of the zodiac. The scene you describe is from the second half of episode 1, where Nya-tan (Cat) and Doratan (Dragon) brawled out with each other to give Nya-tan the setting of "a tragic heroine who's shocked because the Dragon Eto-musume she thought was a friend ended up being a traitor". A screenshot from around 16:33 of episode 1, showing both the Cat and the Dragon. A screenshot from around 17:52 of episode 1, showing the Cat being restrained by multiple icicles.
{ "pile_set_name": "StackExchange" }
Q: Submitting form with AJAX not working. It ignores ajax I've never used Ajax before, but from researching and other posts here it looks like it should be able to run a form submit code without having to reload the page, but it doesn't seem to work. It just redirects to ajax_submit.php as if the js file isn't there. I was trying to use Ajax to get to ajax_submit without reloading anything. Is what i'm trying to do even possible? HTML form: <form class="ajax_form" action="ajax_submit.php" method="post"> <input class="input" id="license" type="text" name="license" placeholder="License" value="<?php echo htmlentities($person['license1']); ?>" /> <input class="input" id="license_number" type="text" name="license_number" placeholder="License number" value="<?php echo htmlentities($person['license_number1']); ?>" /> <input type="submit" class="form_button" name="submit_license1" value="Save"/> <input type="submit" class="form_button" name="clear1" value="Clear"/> </form> in scripts.js file: $(document).ready(function(){ $('.ajax_form').submit(function (event) { alert('ok'); event.preventDefault(); var form = $(this); $.ajax({ type: "POST", url: "ajax_submit.php",//form.attr('action'), data: form.serialize(), success: function (data) {alert('ok');} }); }); }); in ajax_submit.php: require_once("functions.php"); require_once("session.php"); include("open_db.php"); if(isset($_POST["submit_license1"])){ //query to insert }elseif(isset($_POST['clear1'])) { //query to delete } I have "<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/3.1.0/jquery.min.js"></script>" in the html head A: form.serialize() doesn't know which button was used to submit the form, so it can't include any buttons in the result. So when the PHP script checks which submit button is set in $_POST, neither of them will match. Instead of using a handler on the submit event, use a click handler on the buttons, and add the button's name and value to the data parameter. $(":submit").click(function(event) { alert('ok'); event.preventDefault(); var form = $(this.form); $.ajax({ type: "POST", url: "ajax_submit.php",//form.attr('action'), data: form.serialize() + '&' + this.name + '=' + this.value, success: function (data) {alert('ok');} }); });
{ "pile_set_name": "StackExchange" }
Q: Space efficient conversion from Wrapper to primitive and primitive to Wrapper
Given the following functions:

public void convertToWrapper(long[] longsToConvert) {
}

and

public void convertToPrimitive(Long[] longsToConvert) {
}

Apache ArrayUtils exposes the following:

public Long[] toObject(long[] array){
    final Long[] result = new Long[array.length];
    for (int i = 0; i < array.length; i++) {
        result[i] = new Long(array[i]);
    }
    return result;
}

My question is: is there a way to do this utilizing only one array? I have tried the following, which does not work:

for(int i = 0; i < array.length; i++) {
    array[i] = new Long(array[i]);
}

A: No. A Long[] is not the same type as a long[], so assignment will fail.
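If a second array is acceptable, the same conversions can also be written with the java.util.stream API (Java 8+). A minimal sketch, where primitives is an assumed long[] input and wrappers an assumed Long[] input:

import java.util.stream.LongStream;
import java.util.stream.Stream;

// long[] -> Long[]
Long[] boxed = LongStream.of(primitives).boxed().toArray(Long[]::new);

// Long[] -> long[]
long[] unboxed = Stream.of(wrappers).mapToLong(Long::longValue).toArray();

This still allocates a new array, which is unavoidable given that the two array types are distinct.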
{ "pile_set_name": "StackExchange" }
Q: Problem connecting Instruments to my iPod (touch) device
I'm having a bit of trouble connecting Instruments to my app on an iPod touch device. While debugging in the Simulator is possible, I get this error in Instruments when trying to start the app:

Target failed to run: Remote exception encountered : 'Selector' processIdentifierForBundleIdentifier:' not authorized for type 'DTSpringBoardProcessControlService"

Thanks in advance

A: I had this same problem. The current fix is to install Xcode 3.2.2 in a different directory and use the Instruments.app in the new directory. Not much of a fix, but Xcode 3.2.3 is still beta, so problems are expected.
{ "pile_set_name": "StackExchange" }
Q: Ubuntu + Opera (ipv6 issue)
Did anyone manage to get Opera working on Ubuntu 9.04? It's trying to resolve domain names the IPv6 way, and somehow gets a bogus reply from the gateway (an IP address of mostly zeroes), and fails to connect.
UPD: wireshark sniff:

8 5.647832 192.168.1.2 192.168.1.1 DNS Standard query A google.com
9 5.649655 192.168.1.1 192.168.1.2 DNS Standard query response A 1.0.0.0

By the way, ALL other networking goes fine, including Firefox. One solution I found was to disable IPv6 in the kernel, but in 9.04 that's impossible due to a BUG. Can I have Opera working without rebuilding the kernel with a patch?
UPD: if I ping some host (so its IP is cached now) - Opera finds it, and opens the page OK. Maybe there's a way to "pre-ping" everything Opera tries to connect to? :))

A: SOLVED!

$ opera -debugdns
...
dns: Host 'google.com' resolved to 1.0.0.0

This is a typical malformed response from the broken DNS implementation found on some routers. Opera first looks for IPv6, and gets the wrong reply. The solution is to point resolv.conf to OpenDNS's DNS servers - 208.67.222.222 and 208.67.220.220. Now my resolv.conf looks like this:

nameserver 208.67.222.222
nameserver 208.67.220.220
nameserver 192.168.1.1

Works like a charm! :)
{ "pile_set_name": "StackExchange" }
Q: How to ignore blank elements in linq query
I have a linq query:

var usersInDatabase = from user in licenseUserTable
                      where user.FirstName == first_name && user.LastName == last_name
                      select user;

But if I get here and first_name or last_name is blank, I still want to filter on the other data item.

A: 
var usersInDatabase = from user in licenseUserTable
                      select user;

if (!string.IsNullOrEmpty(first_name))
{
    usersInDatabase = usersInDatabase.Where(u => u.FirstName == first_name);
}

if (!string.IsNullOrEmpty(last_name))
{
    usersInDatabase = usersInDatabase.Where(u => u.LastName == last_name);
}

A: 
var usersInDatabase = from user in licenseUserTable
                      where (user.FirstName == first_name || first_name == string.Empty)
                         && (user.LastName == last_name || last_name == string.Empty)
                      select user;

Now you will get records that match the first name given; if first_name is empty, all will match as long as the record ALSO matches the last name, unless last_name is blank as well. The only issue with this right now is that if first_name and last_name are both blank, you will get everything.
{ "pile_set_name": "StackExchange" }
Q: Does realm.io database support real-time sync?
I'm considering switching from Firebase DB to Realm.io for my Android app. I wonder - does Realm make any guarantees about real-time sync? From what I found:

"Data sync will automatically happen whenever you save, with no work from you."

That sounds like a start, but there is no mention of the speed of this... Will it be real-time like in Firebase DB, or more of a slow "poll-based" process?

A: Realm is able to provide real-time synchronization, as long as you set up a Realm Object Server (ROS). Synchronization itself (between devices and to the ROS, through the ROS) is free, you just need to have a ROS somewhere.
As for listening to events and reading from/writing to the synchronized Realm on the server side using the NodeJS API, that's the paid feature. You can write to the sync database manually on the server side with the Realm Browser, though; which runs on Mac OS.
(I don't think it's worth a mention, but obviously you can write into sync Realms from Android devices and stuff. It's just the server side that's not as simple.)
{ "pile_set_name": "StackExchange" }
Q: How to write a script to iteratively search through a document and return results based on a pattern
I have a large document containing items that appear in a specific pattern:

"TEXT I NEED"
"," (comma ends the text I want to return)
"more text I DON'T need"
"."
"TEXT I NEED" (need the text immediately following a period)
"," (comma ends the text)

... and so on. I am hoping to write a script that will go through the doc and pull out (TEXT I NEED). I haven't tried much. I've tried playing around with re.compile but I am mostly a beginner.
Document example:

APPLES ARE FUN, oranges are better. ORANGES ARE FUN, bananas are better. BANANAS ARE WEIRD, bananas are a little weird.

I want to return:

APPLES ARE FUN
ORANGES ARE FUN
BANANAS ARE WEIRD

A: Extract text that is before ',' and preceded by begin of text (^) or '.':

import re

text = """APPLES ARE FUN, oranges are better. ORANGES ARE FUN, bananas are better. BANANAS ARE WEIRD, bananas are a little weird"""

print(re.findall(r'(?:^|\.\s+)([\w\s]+)(?=,)', text))
# ['APPLES ARE FUN', 'ORANGES ARE FUN', 'BANANAS ARE WEIRD']
{ "pile_set_name": "StackExchange" }
Q: Good ARM development board for bare-metal development
I'm looking for an ARM development board for bare-metal (no underlying OS) development. Some criteria that I value:

1) External SRAM/SDRAM, at least 1MB
2) External Flash, at least 512kB
3) Built-in JTAG, or at least a standard JTAG interface
4) Nice, well documented and easily programmable accessories (UART, GPIO, USB, Networking)
5) Good documentation.
6) Not too expensive.

I've been looking at the BeagleBoard (and BeagleBone). It seems to cover everything except (4). Any other ideas?

A: What are you going to do with it? Relative to your requirements the BeagleBoard and BeagleBone are a couple orders of magnitude of overkill, although the Bone is an amazing value for its power and hackability. I'm not sure why you say it fails requirement 4, it has all those things except perhaps the nice documentation. The Beagles seem to be pitched as Linux platforms so I don't know if you'll be completely on your own if you want to bootstrap it yourself; unlike with an MCU you may not have a bunch of C libraries for working directly with the hardware and peripherals.
Have you considered an ARM Cortex M3 or M4 microcontroller kit? STMicro has a $20 discovery kit for their new Cortex M4 micros: 192KB RAM, 1MB flash, JTAG, USB, and a pretty awesome array of peripherals. These are targeted towards bare-metal development, so you will just get some C libraries that let you configure the hardware, and you provide a main function and interrupt service routines. It doesn't have networking, but at that price point, unless you are designing something where that's a make-or-break requirement, it's an amazing value for learning and prototyping.
If you do require networking, I have used TI's Stellaris LM3S6965 Ethernet Evaluation Kits and they are great; the docs and libraries are pretty good (I've hit a few stumbling blocks figuring things out but overall a good experience). I've even used lwIP and uIP to build a device with a (very, very, very) simple web server. I'm a little reluctant to recommend the full kits over the BeagleBone though because they are around $70 and vastly less powerful than the Bone, but it all depends on what you want to build or learn.

A: I have a FriendlyARM Micro2440 (which is, contrary to its name, a smaller version of the Mini2440): http://www.friendlyarm.net/products/micro2440. The page cites OS support for Linux, CE and Android, but there are files for bare-metal development. IIRC I paid EUR 125 for the Stamp + Base (excluding VAT).
Specification: Stamp Module

Dimension: 63 x 52 mm
CPU: 400 MHz Samsung S3C2440A ARM920T (max freq. 533 MHz)
RAM: 64 MB SDRAM, 32 bit Bus
Flash: 64 MB / 128 MB / 256 MB / 1GB NAND Flash and 2 MB NOR Flash with BIOS
Serial, SPI, USB, LCD, CMOS Camera Interface
Analog Input and Output
User Outputs: 4x LEDs
Expansion headers (2.0 mm)
Debug: 10 pin JTAG (2.0 mm)

OS Support

Windows CE 5 and 6
Linux 2.6
Android

Specification: SDK-Board

Dimension: 180 x 130 mm
EEPROM: 1024 Byte (I2C)
Ext. Memory: SD-Card socket
Serial Ports: 3x DB9 connector (RS232)
USB: 4x USB-A Host 1.1, 1x USB-B Device 1.1
Audio Output: 3.5 mm stereo jack
Audio Input: 3.5mm jack (mono) + Condenser microphone
Ethernet: RJ-45 10/100M (DM9000)
RTC: Real Time Clock with battery
Beeper: PWM buzzer
Camera: 20 pin Camera interface (2.0 mm)
LCD: 41 pin connector for FriendlyARM Displays and VGA Board
Touch Panel: 4 pin (resistive)
User Inputs: 6x push buttons and 1x A/D pot
Expansion headers (2.0 mm)
Power: regulated 5V (DC-Plug: 1.35mm inner x 3.5mm outer diameter)
{ "pile_set_name": "StackExchange" }
Q: Popup on a ImageOverlay
On Leaflet I can get an image to overlay and display just fine, but can't seem to get a popup on hover or click. Here is my code for version 1.6.0:

<div id="map"></div>
<script>
  var map = L.map('map', {
    center: [40.75, -74.15],
    zoom: 13
  });

  var osmAttrib = 'Map data © <a href="http://openstreetmap.org">OpenStreetMap</a> contributors';
  var osm = L.tileLayer('http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
    maxZoom: 18,
    attribution: osmAttrib
  }).addTo(map);

  var cities = new L.LayerGroup([
    L.marker([40.72801, -74.07772]).bindPopup('Jersey City')
  ]);
  cities.addTo(map);

  L.imageOverlay(
    'http://research.ccountync.com/maps/images/yellow.png',
    [[40.712216, -74.22655], [40.773941, -74.12544]],
    { opacity: 1 }
  ).addTo(map).bindPopup("This is a yellow box");
</script>

A: If you look at the Leaflet docs for L.imageOverlay (https://leafletjs.com/reference-1.6.0.html#imageoverlay), you'll see that by default this layer is not interactive (the interactive option is set to false), which means it does not emit mouse events, and as a consequence the popup cannot be triggered with a mouse click.
Simply define your image layer as interactive and the popup will appear upon mouse click:

L.imageOverlay(
  'http://research.ccountync.com/maps/images/yellow.png',
  [[40.712216, -74.22655], [40.773941, -74.12544]],
  { opacity: 1, interactive: true }
).bindPopup("This is a yellow box").addTo(map);
{ "pile_set_name": "StackExchange" }
Q: What is the difference between functional and non functional requirement?
What is the difference between functional and non-functional requirements in the context of designing a software system? Give examples for each case.

A: A functional requirement describes what a software system should do, while non-functional requirements place constraints on how the system will do so.
Let me elaborate.
An example of a functional requirement would be:

A system must send an email whenever a certain condition is met (e.g. an order is placed, a customer signs up, etc).

A related non-functional requirement for the system may be:

Emails should be sent with a latency of no greater than 12 hours from such an activity.

The functional requirement is describing the behavior of the system as it relates to the system's functionality. The non-functional requirement elaborates a performance characteristic of the system.
Typically non-functional requirements fall into areas such as:

Accessibility
Capacity, current and forecast
Compliance
Documentation
Disaster recovery
Efficiency
Effectiveness
Extensibility
Fault tolerance
Interoperability
Maintainability
Privacy
Portability
Quality
Reliability
Resilience
Response time
Robustness
Scalability
Security
Stability
Supportability
Testability

A more complete list is available at Wikipedia's entry for non-functional requirements.
Non-functional requirements are sometimes defined in terms of metrics (i.e. something that can be measured about the system) to make them more tangible. Non-functional requirements may also describe aspects of the system that don't relate to its execution, but rather to its evolution over time (e.g. maintainability, extensibility, documentation, etc.).

A: Functional requirements are the main things that the user expects from the software. For example, if the application is a banking application, that application should be able to create a new account, update an account, delete an account, etc. Functional requirements are detailed and are specified in the system design.
Non-functional requirements are not straightforward requirements of the system; rather, they relate to usability (in some way). For example, for a banking application a major non-functional requirement is availability: the application should be available 24/7 with no downtime if possible.

A: Functional requirements
A functional requirement specifies a function that a system or system component must be able to perform. It can be documented in various ways. The most common ones are written descriptions in documents, and use cases.
Use cases can be textual enumeration lists as well as diagrams, describing user actions. Each use case illustrates behavioural scenarios through one or more functional requirements. Often, though, an analyst will begin by eliciting a set of use cases, from which the analyst can derive the functional requirements that must be implemented to allow a user to perform each use case.
Functional requirements describe what a system is supposed to accomplish. They may cover

Calculations
Technical details
Data manipulation
Data processing
Other specific functionality

A typical functional requirement will contain a unique name and number, a brief summary, and a rationale. This information is used to help the reader understand why the requirement is needed, and to track the requirement through the development of the system.
Non-functional requirements
LBushkin has already explained more about non-functional requirements. I will add more.
Non-functional requirements are any requirements other than functional requirements. These are the requirements that specify criteria that can be used to judge the operation of a system, rather than specific behaviours. Non-functional requirements are in the form of "system shall be ", an overall property of the system as a whole or of a particular aspect and not a specific function. The system's overall properties commonly mark the difference between whether the development project has succeeded or failed.
Non-functional requirements can be divided into two main categories:

Execution qualities, such as security and usability, which are observable at run time.
Evolution qualities, such as testability, maintainability, extensibility and scalability, which are embodied in the static structure of the software system.

Non-functional requirements place restrictions on the product being developed and the development process, and specify external constraints that the product must meet. The IEEE-Std 830 - 1993 lists 13 non-functional requirements to be included in a Software Requirements Document:

Performance requirements
Interface requirements
Operational requirements
Resource requirements
Verification requirements
Acceptance requirements
Documentation requirements
Security requirements
Portability requirements
Quality requirements
Reliability requirements
Maintainability requirements
Safety requirements

Whether or not a requirement is expressed as a functional or a non-functional requirement may depend:

on the level of detail to be included in the requirements document
on the degree of trust which exists between a system customer and a system developer.

Ex. A system may be required to present the user with a display of the number of records in a database. This is a functional requirement. How up-to-date this number needs to be is a non-functional requirement. If the number needs to be updated in real time, the system architects must ensure that the system is capable of updating the displayed record count within an acceptably short interval of the number of records changing.
References:

Functional requirement
Non-functional requirement
Quantification and Traceability of Requirements
{ "pile_set_name": "StackExchange" }
Q: Bluebird .then(): no TypeError when function reference is undefined
I recently started working with promises and found a behavior that is strange to me. When I give the .then() function a reference to an undefined function, it is just skipped and the next then is called.
An example:

var cust = customer({general: { cust_id: 22 }}); // just for testing

req.pool.getConnectionAsync()
    .then(cust.del) // cust.del is 'undefined'
    .then(function(dbResult) {
        console.log("dbresult:");
        console.log(dbResult);
        res.status(200).end();
    })
    .catch(function (e) {
        console.log(e);
        res.status(500).end();
    });

So what's happening here:

getConnectionAsync returns a connection which should be given to cust.del
cust.del is undefined (a typo by me; the correct function would be cust.delete)
no error is raised; instead the next .then function is called with the connection from getConnectionAsync as "dbresult"
the output of the last then function is the connection object and not a db result object, and status 200 is returned to the client

If I change the code to:

req.pool.getConnectionAsync()
    .then(function(conn) { cust.del(conn) }) // type error is raised
    .then(function(dbResult) {
        console.log("dbresult:");
        console.log(dbResult);
        res.status(200).end();
    })
    .catch(function (e) {
        console.log(e);
        res.status(500).end();
    });

then I get the expected TypeError and the catch function is called.
Is this expected behavior? Or am I missing something to prevent this? .then(cust.del) is obviously much cleaner code, but since this function is not callable there should be an error.
Regards
Phil

A: Like the comment said, this is similar to:

Promise.resolve().then(undefined); // undefined is ignored here

It is specified in the Promises/A+ spec and it works this way in every promise implementation. The rationale was that Promises/A+ doesn't have to support a .catch method, so you could do:

Promise.reject().then(null, function(err){ /* handle */ });

As well as interoperability with existing libraries.
{ "pile_set_name": "StackExchange" }
Q: Merge identically shaped dataframes into a multiIndex dataframe
I have two identically shaped pandas dataframes:

index = range(5)
columns = ['A', 'B', 'C']
left = pd.DataFrame(np.random.randint(1,10, size=(5,3)), index=index, columns=columns)
right = pd.DataFrame(np.random.randint(1,10, size=(5,3)), index=index, columns=columns)

Namely

left
Out[127]:
   A  B  C
0  3  4  7
1  5  8  4
2  8  8  7
3  1  3  5
4  3  5  8

and

right
Out[129]:
   A  B  C
0  2  8  2
1  3  6  5
2  4  6  4
3  8  4  2
4  4  2  9

Now I would like to combine them into a single dataframe with the same index and two levels of columns. On the top the common column name and on the bottom the original dataframe name:

combined = pd.DataFrame(np.nan, index=index,
                        columns=pd.MultiIndex.from_tuples([('A', 'left'), ('A', 'right'),
                                                           ('B', 'left'), ('B', 'right'),
                                                           ('C', 'left'), ('C', 'right')]))
for column in combined.columns:
    if column[1] == 'left':
        combined[column] = left[column[0]]
    elif column[1] == 'right':
        combined[column] = right[column[0]]

combined
Out[138]:
      A           B           C
   left right  left right  left right
0     3     2     4     8     7     2
1     5     3     8     6     4     5
2     8     4     8     6     7     4
3     1     8     3     4     5     2
4     3     4     5     2     8     9

Since the dataframes I'm dealing with are massive, is there a faster or more elegant way to achieve this? Thanks in advance!

A: You can provide the keys parameter in pd.concat to add another column level:

pd.concat([left, right], axis=1, keys=['left', 'right']).swaplevel(axis=1).sort_index(axis=1)

#      A           B           C
#   left right  left right  left right
# 0     9     7     3     4     4     2
# 1     8     3     9     1     3     5
# 2     3     6     1     6     5     7
# 3     9     1     7     2     2     2
# 4     9     5     3     1     4     3
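Getting pieces back out of the result afterwards works with ordinary MultiIndex selection. A small usage sketch, assuming the concatenated result is stored in a variable named combined:

combined = pd.concat([left, right], axis=1, keys=['left', 'right']).swaplevel(axis=1).sort_index(axis=1)
combined['A']                          # the 'left' and 'right' columns for A
combined.xs('left', axis=1, level=1)   # all columns that came from the left frame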
{ "pile_set_name": "StackExchange" }
Q: Rails local development server "rails s" to serve multiple requests at the same time
Windows 7, Rails 3 here. In local/development mode, rails server does not handle multiple requests at the same time. The process crashes and the cmd prompt comes to the front. I've noticed this behaviour when:

having too many ajax requests, too close to one another
loading a simple page in 2 browsers

Is there a way to work around that? Change the local server (default is webrick)? How is that done?
Thanks.

A: I don't know if this still needs an answer, but I did this by adding gem 'puma' to the Gemfile. Then you'll need to add config.threadsafe! to either your config/application.rb or the environment file you're running on (like config/environments/development.rb). Sometimes you might not want threadsafe on, so I did this in my development.rb:

if ENV["THREADS"]
  config.threadsafe!
end

Now (with what I did in my development.rb) I can do rails s Puma and it will run with a max of 16 threads and can handle multiple requests. You can also up the thread pool and configure more with Puma, docs are here.
Update
Note that the use of config.threadsafe! is not needed in Rails 4+ and is deprecated I believe.
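If you want explicit control over the thread pool rather than relying on the defaults, Puma also accepts a config file (passed with puma -C config/puma.rb; newer Rails versions pick up config/puma.rb automatically). A minimal sketch, where the numbers are just example values:

# config/puma.rb
threads 1, 16                                      # min and max threads per worker
port ENV.fetch("PORT", 3000)                       # port to listen on
environment ENV.fetch("RAILS_ENV", "development")  # Rails environment to boot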
{ "pile_set_name": "StackExchange" }
Q: Designing a multi-module project
I'm new to this concept so I might be wrong in calling it a "multi-module project". In the software I'm making, there's one distinct unit with distinct inputs and outputs. I decided to pull it out and run it as a separate (web) service since it has other value in this stand-alone mode.
But the thing is, now the application which was previously blended with this unit has to run standalone, too. And that means there should be a new way for it to communicate with that unit by calling its service endpoints, which adds new layers of complexity (serializing and de-serializing not-so-simple data into XML and JSON).
Is there a common practice for this in software engineering? Can I transmit the data in any other way?
The software is being written in Scala, which runs on the JVM and may or may not affect this story.

A: If you can't have your original client app connect to a server, then I would recommend the following module breakdown:

Service module with Java/Scala API
Web service module which wraps the service module with Rest/XML/JSON
Client module which calls directly to the Java/Scala API module
{ "pile_set_name": "StackExchange" }
Q: Counter Example for the union of any number of compact sets is compact in T2-spaces
We know that the intersection of arbitrarily many compact sets is compact in Hausdorff spaces. I need a counterexample showing that the analogous statement for unions is not true. Intuitively, the union does not have to be compact, but I cannot find a counterexample one way or another. Could someone give a counterexample please? Thanks

A: Just consider the union of all the singletons in $\mathbb{R}$.
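In more detail: each singleton $\{x\}$ is compact, and $\mathbb{R}$ with its usual topology is Hausdorff, yet
$$\bigcup_{x \in \mathbb{R}} \{x\} = \mathbb{R},$$
which is not compact, since the open cover $\{(-n, n) : n \in \mathbb{N}\}$ admits no finite subcover.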
{ "pile_set_name": "StackExchange" }
Q: Can POTUS contribute to political campaigns?
Can the president of the USA run a Super PAC? Can they contribute to political campaigns other than their own?

A: No politician with a campaign committee can run a SuperPAC. SuperPACs are not allowed to coordinate with campaign committees. So no, the president can't.
Yes, anyone can contribute their own funds to other campaigns. Certainly politicians can donate to their own campaign (without limit). And they can transfer funds from their campaign to other campaigns. Anyway, a president can certainly make a regular donation from his own funds subject to the normal limits.
Note that a SuperPAC can't contribute funds to political campaigns. It can only spend money on advertising and what are called electioneering activities. Electioneering generally includes things like registration drives and get-out-the-vote activities.
{ "pile_set_name": "StackExchange" }
Q: Why do you use sugar when canning peaches or pears?
I'm going to be canning peaches and pears and most instructions say to use sugar. Is sugar a preservative for the fruit or is it just for taste? I would prefer not to use sugar if I don't need to.

A: It somewhat depends on your definition of preserving. Sugar is definitely added for taste. If you prefer less, then by all means use less, choosing a light syrup rather than a heavy one, or possibly even none.
But one of the purposes of sugar in canned fruit is texture. Canned fruit in light syrup will deteriorate in texture more quickly than in heavy syrup. At least in theory, the sugar syrup will work through osmosis to replace some of the water in the fruit with sugar, making it more firm, especially after being subjected to the heat of canning. Without this, the cells will break down and turn mushy. It seems counter-intuitive to me at least, but to get this to work, the syrup needs to be higher in sugar than the fruit being canned, so very sweet peaches, for instance, would need a heavier syrup to stay firm when canned than a less sweet batch. It is sad to risk over-powering that natural fresh flavor, but that is a cost of this type of preserving.
In things like jams and meat cures, sugar is definitely used to help preserve, but those are different techniques. In this case it helps preserve some of the quality, but not really against spoilage.
{ "pile_set_name": "StackExchange" }
Q: String with spaces when concatenate
I'm using .get() for getting a view and I have to concatenate two variables in the path, but when I do that the path ends up with a lot of spaces. The variables' values are correctly received from two drop downs. The problem is this:

function getTableData(){
  $( "#getTable" ).click(function(e){
    table = $('#tabax').val();
    type = $('#type').val();
    alert(table);
    alert(type);
    alert("upload-file/tb/"+table+"/"+type);
    $.get("upload-file/tb/"+table+"/"+type, function(response){
      $('.table-data').html(response);
    });
    e.preventDefault();
  });
}

my html:

<div class="form-group">
  {!! Form::label('tabax', 'Table:') !!}
  {!! Form::select('tabax', $tabax, null, ['id'=> 'tabax', 'class' => 'form-control tabax']) !!}
</div>
<div class="form-group">
  {!! Form::label('type', 'Type:') !!}
  {!! Form::select('type', $type, null, ['id'=> 'type', 'class' => 'form-control']) !!}
</div>

and the path is:

http://localhost:8000/app/upload-file/tb/FT%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20/1

I do not know why this happens; I do this a lot and it is the first time I have had this problem.

A: Presumably the values in those tables come from CHAR(n) SQL fields, which are automatically padded with spaces. If you have access to the server, I suggest changing the field types to VARCHAR(n) and migrating the data by trimming the values; this way you will also reduce client-server traffic.
Otherwise you can trim them on the client side (replace lines 3 and 4):

table = $('#tabax').val().trim();
type = $('#type').val().trim();
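For the server-side route, the migration could look roughly like this - a sketch assuming MySQL, with an assumed table name, column name, and length; adjust the names and syntax for your actual schema and RDBMS:

ALTER TABLE licenses MODIFY tabax VARCHAR(80);   -- stop padding new values
UPDATE licenses SET tabax = TRIM(tabax);         -- strip padding from existing rows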
{ "pile_set_name": "StackExchange" }
Q: How do you add a toolbar to the bottom of an NSTableView?
Check the following image: how can I add this kind of bar to my own NSTableViews? Other uses are in the network preferences app. What's the magic trick to make this work?

A: I don't think there is any "magic trick." This is something you will have to implement yourself. It looks like a group of Gradient style NSButtons placed below the table view. An NSSegmentedControl in Small Square style would work too.
{ "pile_set_name": "StackExchange" }
Q: How to concatenate 2 arrays?
Is there any helper to concatenate 2 arrays? I mean a helper to add all the elements of array1 to the end of array2?

array1: array of integer;
array2: array of integer;

A: If you have a fairly new Delphi (XE7 as per comments), you can just do it like this:

PROCEDURE Test;
TYPE
  TIntArr = TArray<INTEGER>;
VAR
  A1,A2 : TIntArr;
BEGIN
  SetLength(A1,20);
  SetLength(A2,30);
  A1:=A1+A2
END;

ie. simple addition.
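On older Delphi versions that lack the + operator for dynamic arrays, the same effect can be had with SetLength and a copy loop. A small sketch (the type and procedure names are just illustrative):

type
  TIntArr = array of Integer;

procedure AppendArray(var Dest: TIntArr; const Src: TIntArr);
var
  OldLen, I: Integer;
begin
  OldLen := Length(Dest);
  SetLength(Dest, OldLen + Length(Src));  // grow Dest to hold both arrays
  for I := 0 to High(Src) do
    Dest[OldLen + I] := Src[I];           // copy Src after the original elements
end;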
{ "pile_set_name": "StackExchange" }