Q: TAG_ Prefix on C/C++ structs? I'm not sure why it escapes me now, but can anyone tell me why so many structs are named with the TAG_ prefix? What is this a mnemonic for, or an acronym of? It seems to be a common enough convention in all sorts of environments, so surely there is a meaning behind it. I know a lot just gets "monkey copied" and the lore behind such things often gets lost over time. I thought I knew the story myself once, but right now I'm clueless. One person suggested it stands for "The Actual Guts" of something, e.g. TAG_HANDLE being the "guts of" a handle. This doesn't ring a bell and seems just a bit too frivolous though. Can anyone help clear my mental block on this? A: It's not an acronym, it's just literally "tag". That's the technical term for it. See e.g. section 6.7.2.3 of the C99 spec. The term "tag" is often incorporated into the name simply to avoid confusion with the typedef name, e.g.: typedef struct tag_blah { ... } blah;
{ "pile_set_name": "StackExchange" }
Q: Explaining JS closures in terms of perl I understand without any problems how perl's closures work, like the next one use 5.012; use strict; use warnings; sub countdown { my $start = shift; return sub { $start-- } } my $c10 = countdown(3); say while( $_ = $c10->() ); I'm trying to understand the next piece of Javascript: var runInSandbox = (function(js, inputPath) { (function() { if ((!context.initialized__QUERY)) { return createContext(); }; })(); (function() { if (typeof(inputPath) !== 'undefined') { (process.argv)[1] = inputPath;; (context)["__dirname"] = path.dirname(inputPath);; return (module)["filename"] = inputPath;; }; })(); return vm.runInContext(js, context, "sibilant"); }); NO CHANCE! :( PLEASE can someone rewrite the above in perl? I know perl a bit - so it would be extremely useful to me for understanding JS basics and constructions like: (...)() - more precisely (function(){.....})() double (( in the if if ((!context.initialized__QUERY)) { or the next (context)["__dirname"] = something ;; or return (module)["filename"] = inputPath;; // why double ;;? And if someone could suggest a resource, something like Learning Javascript for perl programmers - that would be very nice ;) PS: the JS (shortened) is from here: https://github.com/jbr/sibilant/blob/master/lib/cli.js A: I'm not extremely well-versed with Perl closures, so I will at least try to demystify this for you. The form: (function(...) { ... })(); is a self-invoked anonymous function [1]. This means that you write out an anonymous function, and then invoke it immediately. This is usually done for encapsulation [2]. For example, if you end up creating a bunch of variables, but don't want them to pollute the global namespace, you can put them inside an anonymous, self-invoked function. However, in this case I don't see why the first invocation is necessary at all, since it's simply checking a flag or something. What is even stranger is the return inside those self-invoked functions.
They aren't being assigned to anything. I would hazard a guess that createContext() initializes the context variable, but that return in there is effectively useless. The same goes for the following: return (module)["filename"] = inputPath;; As far as the double (( and )), they seem to be largely unnecessary and so I'm not sure why the author originally put it in there. For example: if ((!context.initialized__QUERY)) Isn't any different from: if (!context.initialized__QUERY) Also, the parentheses in the following are also unnecessary, as are the double semicolons: (context)["__dirname"] = something ;; Honestly, it just looks like poorly-written Javascript, or perhaps JavaScript that was autogenerated (this is most probably the case). You could rewrite it like so: var runInSandbox = function(js, inputPath) { if (!context.initialized__QUERY) { createContext(); }; if (typeof inputPath !== 'undefined') { process.argv[1] = inputPath; context["__dirname"] = path.dirname(inputPath); module["filename"] = inputPath; }; return vm.runInContext(js, context, "sibilant"); }; Notes: In Perl, that would be sub { ... }->(). In Perl, one would use { my $var; ... } instead of sub { my $var; ... }->() and do { my $var; ...; EXPR } instead of sub { my $var; ...; return EXPR; }->().
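Since the question asks for a translation into a more familiar language, here is the same countdown closure rendered in Python (my own sketch, not from the thread). It shows the same capture of `start` that both the Perl sub and a JS closure perform; `return start + 1` mimics Perl's post-decrement `$start--`, which returns the value before decrementing.

```python
def countdown(start):
    # The inner function closes over `start`, like the Perl sub closing over $start.
    def step():
        nonlocal start
        start -= 1
        return start + 1  # post-decrement semantics: yield the old value
    return step

c10 = countdown(3)
values = []
while True:
    v = c10()
    if not v:  # Perl's `while ($_ = $c10->())` stops on a false value (0)
        break
    values.append(v)
print(values)  # [3, 2, 1]
```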
Q: Flutter - how to change TextField border color? I've tried everything to try and change the border color of textfield but it seems to be ignored. I've tried sideBorder(even width is ignored too), hintStyle, applying a specific theme to only this widget and they all seem to be ignored. child: new Theme( data: ThemeData( primaryColor: Colors.white, accentColor: Colors.white, hintColor: Colors.white //This is Ignored, inputDecorationTheme: InputDecorationTheme( border: OutlineInputBorder( borderSide: BorderSide(color: Colors.white) //This is Ignored ), ), ), child: new TextField( style: TextStyle(color: Colors.white, decorationColor: Colors.white), cursorColor: Colors.white, decoration: InputDecoration( border: new OutlineInputBorder( //borderRadius: const BorderRadius.all(Radius.circular(30.0)), borderSide: BorderSide(color: Colors.white, width: 0.0) //This is Ignored, ), hintText: "Search people", ), ), //new Divider(color: Colors.white, height: 20), ) I'd like to change that hairline looking black border and alter its color and its width. Image of what it currently is A: Use enabledBorder and focusedBorder (when the textfield is focused) InputDecoration( enabledBorder: OutlineInputBorder( borderSide: BorderSide( color: Colors.red, width: 5.0), ), focusedBorder: OutlineInputBorder( borderSide: BorderSide( color: Colors.blue, width: 3.0), ), hintText: "Search people", ),
Q: Force javascript onload handler to the back of the queue I want to add a JavaScript file to the very end of footer.php with a WordPress plugin. I tried using the 'add_action' hook, but this hook adds the JavaScript near the </body> tag. add_action('wp_footer', 'my_fucntion()'); How can I push my JavaScript to the very end of footer.php? A: Update There's a better way; the one below should work, but this one should be safer still (and it's easier, IMO). Given that JS is single-threaded, an event will push the handler to the back of the queue anyway, so if your onload handler ends up being called prior to another handler, dispatching a new event that calls a handler that actually does the work you want/need to do is probably your best bet: window.addEventListener('load', function rmMe() { var handler = function() { window.removeEventListener('click', handler, false);//remove this listener, too var i, links = document.querySelectorAll('a'); for (i=0;i<links.length;++i) { //do stuff with links } }; window.removeEventListener('load', rmMe, false); window.addEventListener('click', handler, false);//bind new listener window.dispatchEvent(new Event('click'));//calls handler, last handler in queue },false); That might be your best bet... After those endless comments, it would appear that you're looking for a way to guarantee that your JS gets executed after the page has loaded, and all other possible onload event handlers have been called. The simple answer is: you can't. Not really. JS's event loop is beyond your control, but there are a few tricks that do work in most cases: Add your script to the bottom of the body tag. Bind your event listener but wrap it in a setTimeout with a zero or 1 ms delay to be safe; let the event handler set another timeout that actually calls the function that does the work, this time using a timeout that allows for a queued handler to be called.
This is optional, because the queue is probably empty already. Important: make sure (or hope) that other handlers don't call stopPropagation on the event: this'll result in your listener not getting called. That's why the first method is preferable: it pushes a new handler to the back of the queue, allowing you to bind your listener first. Code example: setTimeout(function() { window.onload = function() { var handler = function() { var i, links = document.querySelectorAll('a'); for (i=0;i<links.length;++i) { //do stuff with links } }; return function() {//fake, initial handler setTimeout(handler, 40);//optional, return handler; should suffice }; }; },1); That's all there is to it. This code, to be clear, goes here in the DOM: <script> //code here </script> </body> </html> This is not 100% reliable, but most browsers queue event handlers as they are being bound: window.addEventListener('load', function(){}, false);//likely to be called first window.addEventListener('load', function(){}, false);//likely to be called second and so on. Not sure about this, but I wouldn't be surprised if listeners that are bound using window.addEventListener are more likely to be appended to the end of the queue, as the addEventListener call may cause overhead. Probably use: setTimeout(function() { window.addEventListener('load', function tmpLoad() { var handler = function() { var i, links = document.querySelectorAll('a'); for (i=0;i<links.length;++i) { //do stuff with links } }; return function() {//fake, initial handler window.removeEventListener('load', tmpLoad, false);//adds overhead and time setTimeout(handler, 40);//optional, return handler; should suffice }; },false); },1); You could also consider manually dispatching an event: after a timeout, bind a new listener, and then do: window.dispatchEvent(new Event('load')); to call the handler.
Q: Google App Engine blocking access to my backend services In Google App Engine, I have 3 services: 1 for the front end, 2 for the back end. Is there a way to block HTTP calls to my backend services for accounts not from my company's domain (and the service account of the front end), but allow everyone HTTP access to my front end service? I know there is the firewall option, but this is restricted to IP addresses; I would prefer user-based. If it matters, all services are Python 3. A: There's currently no option to filter traffic to specific App Engine services within a single application/project: App Engine Firewall filters by source IP ranges but can only be set for the whole app, not per service. Identity-Aware Proxy can filter access by user account as you'd prefer, but it also applies to the whole app. Also, it only supports user accounts and can't be used with service accounts. One option you may have would be to split your app into 2 different projects. Keep the front end in one project, open to the world, and restrict access to the backend services in your other project via firewall rules.
Q: Find the partial sums of the following series and then check their convergence: $$\sum_{n=0}^{\infty}(\frac{2^n-1}{4^n}) $$ So I figured out that this is equal to: $$\sum_{n=0}^{\infty}(\frac{1}{2^n})-(\frac{1}{2^{2n}}) $$ and I don't know what to do next. A: It is the difference of two geometric series: $$\sum _{ n=0 }^{ \infty } \left( \frac { 1 }{ 2^{ n } } \right) -\sum _{ n=0 }^{ \infty } \left( \frac { 1 }{ 4^{ n } } \right) =\frac { 1 }{ 1-\frac { 1 }{ 2 } } -\frac { 1 }{ 1-\frac { 1 }{ 4 } } =2-\frac { 4 }{ 3 } =\frac { 2 }{ 3 } $$
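A quick numerical check of the partial sums (my addition, not from the answer): they converge to 2/3 as the geometric-series computation predicts.

```python
# Partial sums of sum_{n>=0} (2^n - 1) / 4^n, which should approach 2/3.
def partial_sum(N):
    return sum((2**n - 1) / 4**n for n in range(N + 1))

s = partial_sum(50)
print(s)  # ~0.6666666666666666
```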
Q: Nokogiri ruby: Iterate over table rows with no class name I want to iterate over each row of a table. This is the relevant source code showing 6 table rows in total. 3 of them have no class name and 3 others do, the ... represent some attributes. <tbody> <tr> … </tr> <tr class="even"> … </tr> <tr> … </tr> <tr class="even"> … </tr> <tr> … </tr> <tr class="even"> … </tr> </tbody> Assuming that doc is a Nokogiri::HTML::Document the following code generates only 3 tr elements instead of 6. It only returns the tr elements having the class="even". doc.css('#main_result table tbody tr').each do |tr| p tr end How can I now get an array of all tr elements, making it able to iterate over them? This actual HTML can be found on the following link: http://www.motogp.com/en/Results+Statistics/1949/TT/500cc/RAC I don't really know how to paste the source code nicely... sorry A: The HTML in that page is malformed, and is missing some <tr> tags, it actually looks something like this: <tbody> <td></td> ... </tr> <tr class="even"> <td></td> ... </tr> <td></td> ... </tr> <tr class="even"> <td></td> ... </tr> <td></td> ... </tr> <tr class="even"> <td></td> ... </tr> </tbody> Note how only the tr tags with class="even" are present, the others are missing. Nokogiri therefore only sees three rows when parsing the page. One possible solution to this could be to use Nokogumbo, which adds Google’s Gumbo HTML5 parser to Nokogiri, and better handles and corrects malformed HTML like this: require 'nokogumbo' # install the gem first doc = Nokogiri.HTML5(the_page) puts doc.css('#main_result table tbody tr').size # should now be 6 rather than 3
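A quick way to see the imbalance the answer describes (an illustrative sketch in Python, independent of Nokogiri) is to count opening and closing tr tags in the markup; in well-formed HTML the counts match, but here three start tags are missing:

```python
import re

# A condensed copy of the malformed markup shown in the answer.
html = """<tbody> <td></td> </tr> <tr class="even"> <td></td> </tr>
<td></td> </tr> <tr class="even"> <td></td> </tr>
<td></td> </tr> <tr class="even"> <td></td> </tr> </tbody>"""

opens = len(re.findall(r'<tr[\s>]', html))   # <tr start tags
closes = len(re.findall(r'</tr>', html))     # </tr> end tags
print(opens, closes)  # 3 6 -> three <tr> start tags are missing
```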
Q: how to "one column of table minus 1"? I have a table named FLY with a column named FLY_NUM. I'm trying to decrease the value of this column in a MySQL query (in PHP), something like this: mysql_query("UPDATE fly SET `fly_Num` = `fly_Num` -1 WHERE fly_ID ='1' "); This is wrong! Every time this query runs, the FLY_NUM column is set to zero. How can I do such a thing? A: Your query is valid and correct. There is also no need to put a space between the - and the 1. Test case: CREATE TABLE fly (fly_id int, fly_num int); INSERT INTO fly VALUES (1, 1); INSERT INTO fly VALUES (1, 2); INSERT INTO fly VALUES (1, 3); INSERT INTO fly VALUES (1, 4); INSERT INTO fly VALUES (1, 5); INSERT INTO fly VALUES (1, 6); INSERT INTO fly VALUES (1, 7); Update Query: UPDATE fly SET `fly_Num` = `fly_Num` -1 WHERE fly_id ='1'; Query OK, 7 rows affected (0.00 sec) Rows matched: 7 Changed: 7 Warnings: 0 New table contents: SELECT * FROM fly;
+--------+---------+
| fly_id | fly_num |
+--------+---------+
| 1      | 0       |
| 1      | 1       |
| 1      | 2       |
| 1      | 3       |
| 1      | 4       |
| 1      | 5       |
| 1      | 6       |
+--------+---------+
7 rows in set (0.00 sec) It works even if you use a varchar for the fly_num column: CREATE TABLE fly (fly_id int, fly_num varchar(10)); INSERT INTO fly VALUES (1, '1'); INSERT INTO fly VALUES (1, '2'); INSERT INTO fly VALUES (1, '3'); INSERT INTO fly VALUES (1, '4'); INSERT INTO fly VALUES (1, '5'); INSERT INTO fly VALUES (1, '6'); INSERT INTO fly VALUES (1, '7'); Update Query: UPDATE fly SET `fly_Num` = `fly_Num` -1 WHERE fly_id ='1'; Query OK, 7 rows affected (0.00 sec) Rows matched: 7 Changed: 7 Warnings: 0 New table contents: SELECT * FROM fly;
+--------+---------+
| fly_id | fly_num |
+--------+---------+
| 1      | 0       |
| 1      | 1       |
| 1      | 2       |
| 1      | 3       |
| 1      | 4       |
| 1      | 5       |
| 1      | 6       |
+--------+---------+
7 rows in set (0.00 sec)
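The same decrement behavior is easy to reproduce outside MySQL; here is a sketch mirroring the test case above with Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE fly (fly_id INTEGER, fly_num INTEGER)")
cur.executemany("INSERT INTO fly VALUES (1, ?)", [(n,) for n in range(1, 8)])

# The decrement itself -- the same statement shape as the query in the answer.
cur.execute("UPDATE fly SET fly_num = fly_num - 1 WHERE fly_id = 1")

rows = [r[0] for r in cur.execute("SELECT fly_num FROM fly ORDER BY fly_num")]
print(rows)  # [0, 1, 2, 3, 4, 5, 6]
```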
Q: Qt exposes a QWidget's handle with getDC - how do I get a QWidget's handle on the Mac? I can make native win32 calls (GetPixel/SetPixel) on a QWidget by using QWidget::getDC .. How do I do this for Mac builds? Using QImage/QPixmap for retrieving pixel information is not an option because I need very fast access to what's already been drawn onto a QWidget via QPainter on both Windows and Mac. The reason I am using GetPixel on windows is to implement 2d mouse picking. A: I am not sure what you are trying to do but if you want the underlying window system handle/ID, you can use QWidget::winId() which returns HIViewRef or NSView on Mac depending on if it's Carbon or Cocoa version of Qt library.
Q: How to remove duplicate records from table using MAX function I have a table t1 that looks similar to the following:
first_name  last_name  row_number
Bob         Smith      1
Mike        Jones      2
Mike        Jones      3
Jessie      Lee        4
Bob         Smith      5
Jessie      Lee        6
and I would like to delete rows from the table so that each name is listed only once and is accompanied by its MAX row number. As such I would like the output of my query to be:
first_name  last_name  row_number
Mike        Jones      3
Bob         Smith      5
Jessie      Lee        6
The query that I came up with is: DELETE FROM table t1 WHERE t1.row_number != (SELECT MAX(row_number) FROM table t2 WHERE t1.first_name = t2.first_name and t2.last_name = t2.last_name); This query does not work (it deletes some rows but not the right ones) but I don't understand what I am doing wrong. How can I fix this query to delete the correct rows? A: You have a typo: "and t2.last_name = t2.last_name" should probably be: "and t1.last_name = t2.last_name"
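With the typo fixed, the statement deletes exactly the non-maximal rows. Here is a runnable check using Python's sqlite3 (my illustration; the column is renamed row_num to avoid clashing with the ROW_NUMBER window-function name):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (first_name TEXT, last_name TEXT, row_num INTEGER)")
cur.executemany("INSERT INTO t1 VALUES (?, ?, ?)", [
    ("Bob", "Smith", 1), ("Mike", "Jones", 2), ("Mike", "Jones", 3),
    ("Jessie", "Lee", 4), ("Bob", "Smith", 5), ("Jessie", "Lee", 6),
])

# Corrected correlated subquery: t1.last_name = t2.last_name (not t2 = t2).
cur.execute("""
    DELETE FROM t1
    WHERE row_num != (SELECT MAX(row_num) FROM t1 AS t2
                      WHERE t1.first_name = t2.first_name
                        AND t1.last_name = t2.last_name)
""")
rows = sorted(cur.execute("SELECT * FROM t1"))
print(rows)  # [('Bob', 'Smith', 5), ('Jessie', 'Lee', 6), ('Mike', 'Jones', 3)]
```

Note that MySQL itself rejects a DELETE whose subquery reads the target table (error 1093); the usual workaround there is to wrap the subquery in a derived table.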
Q: how to install jdbc driver for sql server Hi, I am new to Eclipse; I am trying to do an example of NHibernate. I downloaded sqljdbc4.jar and unzipped it, but when I double-click to install, nothing installs and there are no messages on Windows 7. Why is it not installing? However, on Windows 8 I get the message: cannot launch this type of file; file types that contain executable code like .exe cannot be launched. A: Try downloading the JDBC driver for SQL Server from this Microsoft website: http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=11774 The binary is: Microsoft JDBC Driver 4.0 for SQL Server. Make sure you follow the installation instructions on the same link to properly install it.
Q: Split string by regular expression I have the string "[110308] Asia and India" and I want only "Asia and India" as my result, via a regular expression. Can anyone please help me? A: If you want to do it without a regex, you can try >>> '[110308] Asia and India'.split(']', 1)[-1].strip() 'Asia and India' or just use lstrip (note this needs import string) >>> import string >>> '[110308] Asia and India'.lstrip(" []" + string.digits) 'Asia and India' Or, if for some reason you want to stick with a regex >>> import re >>> re.findall("^[\[\]\d ]*(.*)", " [110308] Asia and India")[0] 'Asia and India'
Q: Nominate for Reopening Shows Duplicate Message Boxes When reviewing a question in the Reopen Votes queue, there are two main options: Leave Closed and Nominate for Reopening. If I click Nominate for Reopening, a message box appears asking me if I am sure I want to vote to reopen: If I click cancel, a second message box appears with the same question. If I click cancel on the second message box, it is dismissed, but nothing else happens. If I click okay, a second message box appears with the same question. If I click okay on the second message box, it is dismissed and a vote to reopen is cast. If I click cancel on the second message box, it is dismissed and a vote to reopen is cast. I haven't yet exhausted all possible ways of interacting with these message boxes, but there is definitely at least one bug here. A similar, and possibly related, issue exists with the Leave Closed option. If I click Leave Closed, a message box appears asking me if I am sure I want to vote to reopen: If I click cancel, the message box is dismissed, and my vote to leave closed is cast. If I instead click okay, the message box is dismissed, and my vote to leave closed is cast, but I occasionally see the error, "An error occurred when reviewing this item. Please try again." Using Google Chrome Version 27.0.1453.116 m A: This seems to be fixed now. According to this post by Emmett: Oops, this was caused by a javascript copy-paste gone awry. It's fixed now – sorry about that.
Q: Determining the gem's list of files for the specification I've always used git to determine which files should go into the gem package: gem.files = `git ls-files`.split "\n" Unfortunately, this approach has recently proved to be inappropriate. I need a self-contained, pure-Ruby solution. My first idea was to simply glob the entire directory, but that alone is likely to include unwanted files. So, after researching the problem, I came up with this: # example.gemspec directory = File.dirname File.expand_path __FILE__ dotfiles = %w(.gitignore .rvmrc) ignore_file = '.gitignore' file_list = [] Dir.chdir directory do ignored = File.readlines(ignore_file).map(&:chomp).reject { |glob| glob =~ /\A(#|\s*\z)/ } file_list.replace Dir['**/**'] + dotfiles file_list.delete_if do |file| File.directory?(file) or ignored.any? { |glob| File.fnmatch? glob, file } end end # Later... gem.files = file_list That seems a bit complex for a gemspec. It also does not fully support gitignore's pattern format. It currently seems to work but I'd rather not run into problems later. Is there a simpler but robust way to compute the gem's list of files? Most gems apparently use git ls-files, and the ones that don't either use a solution similar to mine or specify the files manually. A: Hi, You can list all files of your project with pure Ruby: gem.files = Dir['**/*'].keep_if { |file| File.file?(file) } Or you can do it manually, this solution is used by Ruby on Rails gems: gem.files = Dir['lib/**/*'] + %w(.yardopts Gemfile LICENSE README.md Rakefile my_gem.gemspec) A: With Rake The easiest solution depending on rake to list all files from a directory, but exclude everything in the .gitignore file: require 'rake/file_list' Rake::FileList['**/*'].exclude(*File.read('.gitignore').split) RubyGems Official rubygems solution, list and exclude manually: require 'rake' spec.files = FileList['lib/*.rb', 'bin/*', '[A-Z]*', 'test/*'].to_a # or without Rake... 
spec.files = Dir['lib/*.rb'] + Dir['bin/*'] spec.files += Dir['[A-Z]*'] + Dir['test/**/*'] spec.files.reject! { |fn| fn.include? "CVS" } Bundler Bundler solution, list manually: s.files = Dir.glob("{lib,exe}/**/*", File::FNM_DOTMATCH).reject {|f| File.directory?(f) } Note: rejecting directories is useless as gem will ignore them by default. Vagrant Vagrant solution to mimic git ls-files and taking care of .gitignore in pure ruby: # The following block of code determines the files that should be included # in the gem. It does this by reading all the files in the directory where # this gemspec is, and parsing out the ignored files from the gitignore. # Note that the entire gitignore(5) syntax is not supported, specifically # the "!" syntax, but it should mostly work correctly. root_path = File.dirname(__FILE__) all_files = Dir.chdir(root_path) { Dir.glob("**/{*,.*}") } all_files.reject! { |file| [".", ".."].include?(File.basename(file)) } all_files.reject! { |file| file.start_with?("website/") } all_files.reject! { |file| file.start_with?("test/") } gitignore_path = File.join(root_path, ".gitignore") gitignore = File.readlines(gitignore_path) gitignore.map! { |line| line.chomp.strip } gitignore.reject! { |line| line.empty? || line =~ /^(#|!)/ } unignored_files = all_files.reject do |file| # Ignore any directories, the gemspec only cares about files next true if File.directory?(file) # Ignore any paths that match anything in the gitignore. We do # two tests here: # # - First, test to see if the entire path matches the gitignore. # - Second, match if the basename does, this makes it so that things # like '.DS_Store' will match sub-directories too (same behavior # as git). # gitignore.any? do |ignore| File.fnmatch(ignore, file, File::FNM_PATHNAME) || File.fnmatch(ignore, File.basename(file), File::FNM_PATHNAME) end end Pathspec Using pathspec gem Match Path Specifications, such as .gitignore, in Ruby! 
See https://github.com/highb/pathspec-ruby References: Bundler, Vagrant, RubyGems, Rake
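The core of the Vagrant approach above — glob everything, then drop any path matching a .gitignore glob by full path or basename — can be sketched language-neutrally; here in Python with fnmatch (my illustration, not from any of the gemspecs):

```python
from fnmatch import fnmatch

def filter_ignored(files, ignore_globs):
    # Mirrors the Vagrant gemspec logic: a file is dropped if either its full
    # path or its basename matches any ignore glob, so a bare ".DS_Store"
    # pattern matches in sub-directories too (same behavior as git).
    def ignored(f):
        base = f.rsplit("/", 1)[-1]
        return any(fnmatch(f, g) or fnmatch(base, g) for g in ignore_globs)
    return [f for f in files if not ignored(f)]

files = ["lib/gem.rb", "pkg/gem-1.0.gem", "lib/.DS_Store", "README.md"]
print(filter_ignored(files, ["pkg/*", ".DS_Store"]))  # ['lib/gem.rb', 'README.md']
```

Like the Ruby snippets, this supports only plain globs, not the full gitignore(5) syntax (no "!" negation).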
Q: Time Complexity - nested for loop For the given code, what's the time complexity in big-O notation: for(i=n; i >= 1; i /=2) for(j=i; j>=1; j/=2) x = i+j; The first loop runs log n times; how about the second loop? Is it (log n * log n)? I am confused. Thanks A: Asymptotically, the inner loop runs O(log n) times for each of the O(log n) iterations of the outer loop, so the total complexity is O(log n * log n), that is O((log n)^2).
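The bound can be checked empirically (my own sketch): for n = 2^k the outer loop visits i = 2^k, ..., 2^0, and for i = 2^m the inner loop runs m + 1 times, so the exact count is (k+1)(k+2)/2 — quadratic in log n.

```python
def count_ops(n):
    # Count how many times the body `x = i + j` executes.
    count = 0
    i = n
    while i >= 1:
        j = i
        while j >= 1:
            count += 1
            j //= 2
        i //= 2
    return count

# For n = 2^k the closed form is (k+1)(k+2)/2, i.e. Theta((log n)^2).
for k in range(0, 12):
    assert count_ops(2 ** k) == (k + 1) * (k + 2) // 2
print(count_ops(2 ** 10))  # 66
```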
Q: Access is denied when trying to CustomDevice.FromIdAsync using a Software KMDF Driver I have prepared a KMDF driver meant to be accessed by a UWP using the guidelines found within MSDN (HSA for Driver, HSA for UWP) The UWP App I used is the CustomCapability example found under Universal Windows Samples The KMDF is a sample driver with only the DriverEntry, Unload, and EvtDeviceAdd implemented. Also, the driver has been installed, and is visible in Device Manager, but there is no actual/external device plugged in. In the UWP App, I can see the Sample Driver from the device watcher. However, when attempting to connect/open the driver using : var device = await CustomDevice.FromIdAsync(Id, DeviceAccessMode.Read, DeviceSharingMode.Exclusive); An exception System.UnauthorizedAccessException' in System.Private.CoreLib.ni.dll is being thrown as seen below: I have matched the required information that needs to get across both apps, and I have gone and tried this one out with the assumption that the SCCD does not require signing if it is only meant to run in development mode. Aside from the SCCD configuration, I have also tried adding a <DeviceCapability> for the class interface of the driver, as well as for lowLevel devices, but it did not seem to do anything related to the exception. 
I don't see any other places for issues aside from the SCCD and the INF file, but I would like to show them just in case I missed something: SCCD: <?xml version="1.0" encoding="utf-8"?> <CustomCapabilityDescriptor xmlns="http://schemas.microsoft.com/appx/2016/sccd" xmlns:s="http://schemas.microsoft.com/appx/2016/sccd"> <CustomCapabilities> <CustomCapability Name="microsoft.firmwareRead_cw5n1h2txyewy"></CustomCapability> <!-- this one is not used by the way --> <CustomCapability Name="microsoft.hsaTestCustomCapability_q536wpkpf5cy2"></CustomCapability> </CustomCapabilities> <AuthorizedEntities> <AuthorizedEntity AppPackageFamilyName="Microsoft.SDKSamples.CustomCapability.CS_8wekyb3d8bbwe" CertificateSignatureHash="1db5ceeaa4c97c6f6e91c0ce76830361776c64635ecfecdb2f157ca818ae3b69"></AuthorizedEntity> </AuthorizedEntities> <Catalog>xxxx</Catalog> </CustomCapabilityDescriptor> INF Strings and Interface section: [Strings] SPSVCINST_ASSOCSERVICE= 0x00000002 ManufacturerName="Samples" ;TODO: Replace with your manufacturer name ClassName="Samples" ; TODO: edit ClassName DiskName = "Samples_Driver Installation Disk" Samples_Driver.DeviceDesc = "Samples_Driver Device" Samples_Driver.SVCDESC = "Samples_Driver Service" GUID_DEVINTERFACE_OSRUSBFX2="573E8C73-0CB4-4471-A1BF-FAB26C31D384" DEVPKEY_DeviceInterface_UnrestrictedAppCapabilities="026e516e-b814-414b-83cd-856d6fef4822" CustomCapability="microsoft.hsaTestCustomCapability_q536wpkpf5cy2" ; ;----------------- Interface Section ---------------------- ; [WDMPNPB003_Device.NT.Interfaces] AddInterface= {%GUID_DEVINTERFACE_OSRUSBFX2%},,AddInterfaceSection [AddInterfaceSection] AddProperty= AddInterfaceSection.AddProps [AddInterfaceSection.AddProps] ; DEVPKEY_DeviceInterface_UnrestrictedAppCapabilities {%DEVPKEY_DeviceInterface_UnrestrictedAppCapabilities%}, 8, 0x2012,, %CustomCapability% A: I managed to find the problem on the INF file. 
The section labeled Interface Section was copied directly from the HSA for Drivers guide by Microsoft. The only modifications made were those explicitly called for in the guidelines. The INF snippet below describes the starting point of the interface section: [WDMPNPB003_Device.NT.Interfaces] AddInterface= {zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz},,AddInterfaceSection What the guide did not explicitly mention is that you have to replace WDMPNPB003_Device with your own driver name/root namespace. A petty mistake, but most likely one that driver-development newcomers will encounter.
Q: Showing $\sin{\frac{\pi}{13}} \cdot \sin{\frac{2\pi}{13}} \cdot \sin{\frac{3\pi}{13}} \cdots \sin{\frac{6\pi}{13}} = \frac{\sqrt{13}}{64}$ I would like to show that $$ \sin{\frac{\pi}{13}} \cdot \sin{\frac{2\pi}{13}} \cdot \sin{\frac{3\pi}{13}} \cdots \sin{\frac{6\pi}{13}} = \frac{\sqrt{13}}{64} $$ I've been working on this for a few days. I've used product-to-sum formulas, writing the sines in their exponential form, etc. When I used the product-to-sum formulas, I'd get a factor of $1/64$, I obtained the same with writing the sines in their exponential form. I'd always get $1/64$ somehow, but never the $\sqrt{13}$. I've come across this: http://mathworld.wolfram.com/TrigonometryAnglesPi13.html, (look at the 10th equation). It says that this comes from one of Newton's formulas and links to something named "Newton-Girard formulas", which I cannot understand. :( Thanks in advance. A: Use this formula (found here, and mentioned recently on MSE here): $$\prod _{k=1}^{n-1}\,\sin \left({\frac {k\pi }{n}} \right)=\frac{n}{2^{n-1}} .$$ Let $n=13$, which gives $$\left(\sin{\frac{\pi}{13}} \cdot \sin{\frac{2\pi}{13}} \cdot \sin{\frac{3\pi}{13}} \cdots \sin{\frac{6\pi}{13}}\right)\left(\sin{\frac{7\pi}{13}} \cdot \sin{\frac{8\pi}{13}} \cdot \sin{\frac{9\pi}{13}} \cdots \sin{\frac{12\pi}{13}}\right) = \frac{13}{{2^{12}}}.$$ Use the fact that $\sin\dfrac{k\pi}{13}=\sin\dfrac{(13-k)\pi}{13}$ to see that this is the same as $$\left(\sin{\frac{\pi}{13}} \cdot \sin{\frac{2\pi}{13}} \cdot \sin{\frac{3\pi}{13}} \cdots \sin{\frac{6\pi}{13}}\right)^2 = \frac{13}{2^{12}}$$ and take the square root of both sides to get your answer. 
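A quick numerical check of the identity (my addition, not from the answers) confirms the closed form:

```python
import math

# Product of sin(k*pi/13) for k = 1..6, expected to equal sqrt(13)/64.
prod = 1.0
for k in range(1, 7):
    prod *= math.sin(k * math.pi / 13)

expected = math.sqrt(13) / 64
print(prod, expected)  # both ~0.0563...
```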
A: For a positive integer $n$: if $\sin(2n+1)x=0$, then $(2n+1)x=m\pi\iff x=\frac{m\pi}{2n+1}$, where $m$ is any integer. From $(3)$ of this, $\displaystyle \sin(2n+1)x=2^{2n}s^{2n+1}+\cdots+(2n+1)s=0$ where $s=\sin\frac{m\pi}{2n+1}$. So the roots of $\displaystyle 2^{2n}s^{2n+1}+\cdots+(2n+1)s=0$ are $\sin\frac{m\pi}{2n+1}$ for $0\le m\le2n$, and the roots of $\displaystyle 2^{2n}s^{2n}+\cdots+(2n+1)=0$ are $\sin\frac{m\pi}{2n+1}$ for $1\le m\le2n$. Using Vieta's formulas, $\displaystyle\prod_{m=1}^{2n}\sin\frac{m\pi}{2n+1}=\frac{2n+1}{2^{2n}}$. Now using $\sin(\pi-y)=\sin y$, i.e. $\sin\left(\pi-\frac{m\pi}{2n+1}\right)=\sin\frac{(2n+1-m)\pi}{2n+1}$, we get $\displaystyle\prod_{m=1}^{2n}\sin\frac{m\pi}{2n+1}=\prod_{m=1}^n\sin^2\frac{m\pi}{2n+1}$. Finally, for $1\le m\le n$ we have $0<\frac{m\pi}{2n+1}<\frac\pi2\implies\sin\frac{m\pi}{2n+1}>0$, so taking the positive square root gives $\displaystyle\prod_{m=1}^{n}\sin\frac{m\pi}{2n+1}=\frac{\sqrt{2n+1}}{2^{n}}$, which for $n=6$ yields $\frac{\sqrt{13}}{64}$.
Q: How do I swap some characters of a String with values of a HashMap in Kotlin? Assuming I have a val s: String = "14ABC5" and a HashMap val b: HashMap<String,String> = hashMapOf("A" to "10", "B" to "11", "C" to "12", "D" to "13", "E" to "14", "F" to "15" ) How would I change all occurrences of A, B, C with 10, 11, 12 while keeping their order ("1", "4", "10", "11", "12", "5")? So far I have this val result: List<String> = s.toUpperCase().toCharArray().map{ it.toString() }.map{ it -> b.getValue(it)} which works if ALL characters of the String exist in the HashMap, but my String may contain nonexistent keys as well. A: You could either use getOrDefault(...), or the Kotlinesque b[it] ?: it. By the way, if you're using the implicit lambda argument name (it), you can get rid of the it ->.
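For comparison (my sketch in Python, not Kotlin), the same keep-unknown-characters fallback uses dict.get with a default — the analogue of getOrDefault / `b[it] ?: it`:

```python
table = {"A": "10", "B": "11", "C": "12", "D": "13", "E": "14", "F": "15"}

s = "14abc5"
# dict.get(key, default) plays the role of Kotlin's getOrDefault:
# characters missing from the table fall through unchanged.
result = [table.get(ch, ch) for ch in s.upper()]
print(result)  # ['1', '4', '10', '11', '12', '5']
```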
Q: CORS HTTP header not showing on GAE Java I have a custom domain set up on GAE that is hosting a Java web (servlet) application. I need to support XHR requests from a different app domain, so I attempted to set the CORS response header both programmatically and as instructed in the GAE reference docs for Java, but I am unable to see the header in the response and my XHR requests fail. The following code is implemented in the servlet's doPost, doGet, and doOptions methods. response.setContentType("application/json"); response.addHeader("Access-Control-Allow-Origin:", "*"); I also attempted to set the header in the appengine-web.xml file as follows. <static-files> <include path="https://api.ezpzrentals.com/validate" > <http-header name="Access-Control-Allow-Origin" value="*" /> </include> </static-files> Neither of the above seems to be working; any tips would be greatly appreciated! A: After reviewing carefully, this was a dumb mistake on my part: there is a colon ":" at the end of the header name, and GAE obviously does not like that. What threw me off is that in my local dev instance, the header was showing up.
Q: Non-zero idempotent element is not nilpotent I have a problem that I need to solve, but I have trouble with the following question: let $a \in R$ be a nonzero idempotent. Show that $a$ is not nilpotent. ($R$ is a ring.) I will appreciate your help. Thanks in advance. A: Since $a$ is an idempotent element, $$a^2=a.$$ By induction, if $a^n=a$ then $a^{n+1}=a^n\cdot a=a\cdot a=a^2=a$, hence $$\forall n\ge 1,\qquad a^n=a\ne0,$$ so no power of $a$ vanishes and $a$ isn't nilpotent.
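A concrete instance, checked in Python: in the ring $\Bbb Z/6\Bbb Z$ the element $3$ is a nonzero idempotent ($3\cdot3=9\equiv3$), and its powers stay constant instead of ever reaching $0$:

```python
a, mod = 3, 6

assert (a * a) % mod == a        # idempotent: a^2 = a in Z/6Z

powers = [pow(a, n, mod) for n in range(1, 10)]
print(powers)  # [3, 3, 3, 3, 3, 3, 3, 3, 3]
assert all(p == a for p in powers), "no power is 0, so a is not nilpotent"
```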
Q: Phonegap Error + Error: Cannot find module 'q' Trying to build phonegap app but strange error :- user@ubuntu:~/Projects/PhoneGap/testapp$ phonegap build android [phonegap] executing 'cordova build android'... Running command: /home/user/Projects/PhoneGap/testapp/platforms/android/cordova/build module.js:338 throw err; ^ Error: Cannot find module 'q' at Function.Module._resolveFilename (module.js:336:15) at Function.Module._load (module.js:278:25) at Module.require (module.js:365:17) at require (module.js:384:17) at Object.<anonymous> (/home/user/Projects/PhoneGap/testapp/platforms/android/cordova/lib/spawn.js:23:15) at Module._compile (module.js:460:26) at Object.Module._extensions..js (module.js:478:10) at Module.load (module.js:355:32) at Function.Module._load (module.js:310:12) at Module.require (module.js:365:17) You may not have the required environment or OS to build this project cordova -v => 5.3.3 wanted: {"node":"0.8.x || 0.10.x"} (current: {"node":"0.12.7","npm":"2.11.3"} Please suggest something .. Thanks A: I was struggling with same issue and finally figure out npm install q --save helps me. A: I had the same issue because my cordova android version was deprecated. Upgrade with cordova platform update android as described here: https://cordova.apache.org/docs/en/3.1.0/guide/platforms/android/upgrading.html A: This worked for me: cordova platform remove android cordova platform add [email protected]
Q: Magento Contact Form: What is the 'hideit' input for? Looking at the Magento Contact Module (be it magento1 or magento2) there is a hidden field name="hideit". <input type="hidden" name="hideit" id="hideit" value="" /> Here the full snippet from magento2: <div class="actions-toolbar"> <div class="primary"> <input type="hidden" name="hideit" id="hideit" value="" /> <button type="submit" title="<?php /* @escapeNotVerified */ echo __('Submit') ?>" class="action submit primary"> <span><?php /* @escapeNotVerified */ echo __('Submit') ?></span> </button> </div> </div> However, apart from being in the post validation, this field does not seem to be used anywhere in the code. if (\Zend_Validate::is(trim($post['hideit']), 'NotEmpty')) { $error = true; } What is it used for? A: I reckon it's the same use as it was back in M1. If I remember right it was introduced around 1.4 to prevent spam. As you can see from the controller part you pasted, it will throw an error if this field is not empty. Robots are used to fill every input tag of the form, including hidden input tags. If that field is not empty, that means a robot has filled the form.
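The server-side check amounts to a classic honeypot; here is a minimal, framework-agnostic Python sketch of the same idea (the names are illustrative, not Magento's actual code):

```python
def is_probably_bot(form_data):
    """Reject submissions where the hidden honeypot field was filled in.

    Humans never see the field, so it stays empty; naive bots fill
    every input they find, including hidden ones.
    """
    return form_data.get("hideit", "").strip() != ""

assert not is_probably_bot({"name": "Alice", "hideit": ""})
assert is_probably_bot({"name": "SpamBot", "hideit": "buy now"})
print("honeypot check works")
```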
Q: UserSecretsId prevents any deployment of F# ASP.NET Core app to Azure App Service I'm trying to deploy my F# ASP.NET Core app to Azure App Service. Unfortunately, each time I try to deploy, a <UserSecretsId> element is added to my project file, which as explained in this article causes the build to fail with the following error: A function labeled with the 'EntryPointAttribute' attribute must be the last declaration in the last file in the compilation sequence The article explains why the error occurs and instructs to fix it by removing the element from the project file and instead adding the user secrets ID in an AssemblyInfo.fs. I have tried this and can then build manually, but each time I try to deploy, the deployment process still adds a <UserSecretsId> element with a new ID in my project file, causing the build to fail. Is there any way I can publish an F# ASP.NET Core app to Azure App Service? (Also reported on Microsoft/visualfsharp#5549) A: Add this to your .fsproj file to suppress the attribute generation. <PropertyGroup>  <GenerateUserSecretsAttribute>false</GenerateUserSecretsAttribute> </PropertyGroup> If you want to use user secrets, you will manually need to add the UserSecrets assembly attribute. module Your.Namespace.AssemblyInfo open Microsoft.Extensions.Configuration.UserSecrets [<assembly: UserSecretsIdAttribute("Your UserSecretsId")>] do() Also, this workaround should be unnecessary in ASP.NET Core 2.2. See https://github.com/aspnet/Configuration/pull/872
Q: Relation Big-O and limit of a function? It is not clear to me what is the relationship (if any) between Big-O and the limit of a function. Suppose that a function $f(x)=O(g(x))$: does this imply that $f(x)$ converges to a limit as $x\rightarrow \infty$? And vice versa, suppose that $f(x)$ converges to a limit as $x\rightarrow \infty$: does this imply that there exists a function $g(x)$ s.t. $f(x)=O(g(x))$? A: When we say a function $f$ is $O(g)$ we are making a statement about its asymptotic behavior. This means that there exists some threshold $M\gt 0$ and some constant $N$ such that $x\gt M\implies |f(x)|\le N|g(x)|$. Let's compare this to the definition of a limit. $\lim_{x\to\infty}f(x) = L$ if for all $\epsilon\gt 0$ there exists $M\gt 0$ such that $x\gt M\implies |f(x)-L|\lt\epsilon$. The two definitions are very similar but not the same. With big $O$ notation we are simply saying that, for large $x$, the magnitude of $f$ is no larger than a constant multiple of the magnitude of $g$. The limit is saying that $f$ is getting arbitrarily close to a constant. To see if you understand this I leave you with two exercises. If $\lim_{x\to\infty}f(x) = L$, prove that $f$ is $O(1)$. Prove that $f$ is $O(f)$.
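A concrete counterexample for the first direction of the question, checked numerically in Python: $f(x)=\sin x$ is $O(1)$, since $|\sin x|\le 1$ everywhere, yet it has no limit as $x\to\infty$ — so $f=O(g)$ does not imply convergence:

```python
import math

# f(x) = sin(x): |sin x| <= 1 for every x, so f is O(1) (take N = 1, any M > 0) ...
xs = [10.0 + 0.37 * k for k in range(2000)]
assert all(abs(math.sin(x)) <= 1.0 for x in xs)

# ... yet f has no limit as x -> infinity: it returns to +1 and -1 forever.
peaks = [math.sin(math.pi / 2 + 2 * math.pi * k) for k in range(1, 50)]
troughs = [math.sin(3 * math.pi / 2 + 2 * math.pi * k) for k in range(1, 50)]
assert all(math.isclose(p, 1.0) for p in peaks)
assert all(math.isclose(t, -1.0) for t in troughs)
print("sin(x) is O(1) but does not converge")
```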
Q: How to calculate rolling mean on a GroupBy object using Pandas? How to calculate rolling mean on a GroupBy object using Pandas? My Code: df = pd.read_csv("example.csv", parse_dates=['ds']) df = df.set_index('ds') grouped_df = df.groupby('city') What grouped_df looks like: I want to calculate the rolling mean on each of my groups in my GroupBy object using Pandas? I tried pd.rolling_mean(grouped_df, 3). Here is the error I get: AttributeError: 'DataFrameGroupBy' object has no attribute 'dtype' Edit: Do I use itergroups maybe and calculate the rolling mean on each group as I iterate through? A: You want the dates in your left column and all city values as separate columns. One way to do this is to set the index on date and city, and then unstack. This is equivalent to a pivot table. You can then perform your rolling mean in the usual fashion. df = pd.read_csv("example.csv", parse_dates=['ds']) df = df.set_index(['ds', 'city']).unstack('city') rm = df.rolling(3).mean() (The old top-level pd.rolling_mean has since been deprecated and removed; DataFrame.rolling is the current equivalent.) I wouldn't recommend using a function, as the data for a given city can simply be returned as follows (: returns all rows): df.loc[:, city]
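In newer pandas you can also compute the grouped rolling mean directly with groupby(...).rolling(...); a self-contained sketch on made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    "ds": pd.to_datetime(["2016-01-01", "2016-01-02", "2016-01-03"] * 2),
    "city": ["NY", "NY", "NY", "LA", "LA", "LA"],
    "temp": [10.0, 12.0, 14.0, 20.0, 22.0, 24.0],
})

# window of 2, computed independently within each city
rolled = (df.sort_values("ds")
            .groupby("city")["temp"]
            .rolling(2)
            .mean())

# per-city results: LA -> [NaN, 21.0, 23.0], NY -> [NaN, 11.0, 13.0]
print(rolled)
```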
Q: RethinkDB group and merge into single doc with sub-array Is it the best way to group and merge each reduction into a single document with sub-array? r.expr([ {id: 1, foo: 1, bar: 2, date: r.time(2016, 1, 1, 'Z')}, {id: 1, foo: 4, bar: 1, date: r.time(2016, 1, 3, 'Z')}, {id: 1, foo: 10, bar: 0, date: r.time(2016, 1, 2, 'Z')}, {id: 2, foo: 5, bar: 3, date: r.time(2016, 1, 1, 'Z')}, {id: 2, foo: 3, bar: 6, date: r.time(2016, 1, 2, 'Z')} ]).group('id').orderBy('date').map(function(d){ return d .without('foo', 'bar', 'date') .merge({stats: [d.pluck('foo', 'bar', 'date')]}) }).reduce(function(left, right){ return left .without('stats').merge({ stats: left('stats').append(right('stats')(0)) }) }).ungroup().map(function(g){ return g('reduction') }) Output: [ { "id": 1 , "stats": [ { "foo": 1, "bar": 2 , "date": Fri Jan 01 2016 00:00:00 GMT+00:00 }, { "foo": 10, "bar": 0 , "date": Sat Jan 02 2016 00:00:00 GMT+00:00 } , { "foo": 4, "bar": 1 , "date": Sun Jan 03 2016 00:00:00 GMT+00:00 } ] }, { "id": 2 , "stats": [ { "foo": 5, "bar": 3, "date": Fri Jan 01 2016 00:00:00 GMT+00:00 } , { "foo": 3, "bar": 6, "date": Sat Jan 02 2016 00:00:00 GMT+00:00 } ] } ] A: This should work: r.expr([ {id: 1, foo: 1, bar: 2, date: r.time(2016, 1, 1, 'Z')}, {id: 1, foo: 4, bar: 1, date: r.time(2016, 1, 3, 'Z')}, {id: 1, foo: 10, bar: 0, date: r.time(2016, 1, 2, 'Z')}, {id: 2, foo: 5, bar: 3, date: r.time(2016, 1, 1, 'Z')}, {id: 2, foo: 3, bar: 6, date: r.time(2016, 1, 2, 'Z')} ]).group('id') .orderBy('date') .without('id') .ungroup() .map(rec => { return { id : rec('group'), stats : rec('reduction') }; } )
Q: Is it possible to use triggers on the same table in MySQL? My problem is that I have a table called students, and I want a trigger to fire the moment a new student registers; the trigger should generate an e-mail address for me, but within the same students table, obviously in another column of that same table. Is this possible? It gives me the following error: Thanks! A: I recommend you create a stored procedure to do the inserts; instead of inserting into the table directly, you call the procedure. Something like this: CREATE DEFINER=root@localhost PROCEDURE correo( IN id_students INT (11), IN name VARCHAR(30), IN first_lastname VARCHAR(30), IN second_lastname VARCHAR(30)) BEGIN declare correo varchar (100); declare pass varchar (40); set correo = concat(name,'.',first_lastname,'@ejemplo.com'); set pass= ('12345seis'); insert into students values(id_students,name,first_lastname, second_lastname,correo,pass); END
Q: How to encrypt a 192 bit plaintext using AES-192? Can we use the AES-192 algorithm for a 192 bit plaintext? If yes, what changes do we have to make? A: Strange as it may sound, you should not use any block cipher directly as a means to provide confidentiality. A block cipher can only handle a statically sized plaintext, which in the case of AES is always 128 bits; it's not 192 bits, that's just the key size. More importantly, a block cipher will always output the same ciphertext if the key and input block remain the same. That means that if you input the same 128 bit message twice, for instance the ASCII representation of "affirmative, sir", then the two ciphertexts will be identical. If this message is the answer to two separate questions, then an adversary can immediately detect that both answers are identical as well. This breaks the security of a cipher as it leaks information about the plaintext. In the strongest semantic security notion that a cipher can obtain, IND-CPA, the ciphertext output should not leak any information even if the adversary may input as many messages as he likes. The simplest mode of operation is ECB mode; it just splits the plaintext into multiple blocks, and then encrypts each block in turn, possibly padding the last block so that the block cipher always receives 128 bits of input. This solves the first problem of the cipher only handling 128 bits, but it does nothing for the security issue mentioned in the previous paragraph. Fortunately there are many modes of operation that do provide such security. AES-CBC is used a lot, but it requires padding - which in itself can be a security risk - as well as an unpredictable IV. This IV or initialization vector is used to change the ciphertext independently of the plaintext. AES-CTR (AES in counter mode) is also used a lot; it doesn't require padding and needs only a unique nonce (number-used-once) instead of an unpredictable IV.
Currently, authenticated ciphers such as GCM are becoming more commonplace, as they do not just protect the confidentiality but also the authenticity and integrity of the message. They are generally built using AES-CTR internally. Note that the size of the plaintext is not automatically protected by AES using any of the modes of operation. This is one of many side channels that may leak information to the attacker. You cannot just encrypt "Yes" to 3 bytes of ciphertext and expect that an adversary cannot distinguish it from the 2 bytes required for "No". When you read that AES-192 is being used to protect your data, you should ask yourself how it is used to do this. To a seasoned cryptographer the use of AES, as the most common unbroken block cipher, only means that a small part of a system is possibly secure. If you want to encrypt fully random data (such as an AES key) then you could right-pad your 192 bits of data with 64 zero bits and encrypt the resulting 256 bits as two 128 bit blocks in ECB mode. This process is called key wrapping. This is only secure for completely random data though, and it is probably better to use more modern techniques such as AES in SIV mode.
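To see the ECB pattern leak in miniature, here is a Python toy: a deterministic stand-in for a block cipher (a truncated keyed hash, NOT real AES or a real permutation, just something block-shaped and key-dependent) encrypting a message whose first and third blocks are identical:

```python
import hashlib

BLOCK = 16  # block size in bytes, like AES

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    """Deterministic stand-in for a block cipher (illustration only, NOT real AES)."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """ECB: encrypt each block independently -- equal blocks leak as equal ciphertext."""
    assert len(plaintext) % BLOCK == 0, "ECB needs whole blocks (real systems pad)"
    return b"".join(toy_block_encrypt(key, plaintext[i:i + BLOCK])
                    for i in range(0, len(plaintext), BLOCK))

key = b"k" * 24  # 24 bytes = 192-bit key, as with AES-192
msg = b"affirmative, sir" + b"negative, sir..." + b"affirmative, sir"
ct = ecb_encrypt(key, msg)

blocks = [ct[i:i + BLOCK] for i in range(0, len(ct), BLOCK)]
assert blocks[0] == blocks[2]   # identical plaintext blocks -> identical ciphertext
assert blocks[0] != blocks[1]
print("ECB leaked the repeated block")
```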
Q: Database Selection Failed. Unknown Database error on phpmyAdmin I am having trouble with selecting my database via PHP script on localhost. I am 100% sure that db name is spelled correctly and infact it appears on phpmyAdmin very well and only when I am trying to connect to it by running a PHP script on localhost it displays the following error: Database selection failed Unknown database 'fokrul_justdeals' My PHP code is here: <?php class database{ public $connection; // the user for the database public $user = 'root'; // the pass for the user public $pswd = ''; // the db from where you want to parse the info public $db = 'fokrul_justdeals'; // the host where db is located public $host = 'localhost'; function __construct(){ $this->connect(); } private function connect(){ $this->connection = mysql_connect("$host", "$user", "$pswd") or die("Database connection failed ". mysql_error()); if($this->connection){ // we select the db that we want to work with mysql_select_db($this->db, $this->connection) or die("Database selection failed " . mysql_error()); } } I have read thousands of forums and I am doing everything as I shall be. But don't know what is going wrong here? One interesting thing is that out of all databases only system generated db 'mysql' connects if I change the db name to 'mysql' in PHP script. I have tried creating different db names and also tried creating new users and adding full privileges to them. Nothing has worked for me :( A: $this->connection = mysql_connect("$host", "$user", "$pswd") or die("Database connection failed ". mysql_error()); You should use $this->host,$this->user,$this->pswd
Q: Reverse Bootstrap's breakpoint effect Using bootstrap, with this sample: <div class="row"> <div class="col-md-4"></div> <div class="col-md-4"></div> <div class="col-md-4"></div> </div> After passing the breakpoint of 768px your cols will go from a vertical alignment to a horizontal alignment. How can I reverse this effect? I need my cols to start horizontal then go vertical after the breakpoint. A: Just add col-md-12 (breakpoint of 768px or higher will be full width). Then, add col-4 as well, which will take effect from this breakpoint down. <div class="row"> <div class="col-4 col-md-12"></div> <div class="col-4 col-md-12"></div> <div class="col-4 col-md-12"></div> </div> You can see more about how this grid system works in the Bootstrap documentation.
Q: AttributeError: 'RDD' object has no attribute 'show' from pyspark import SparkContext, SparkConf, sql from pyspark.sql import Row sc = SparkContext.getOrCreate() sqlContext = sql.SQLContext(sc) df = sc.parallelize([ \ Row(nama='Roni', umur=27, tingi=168), \ Row(nama='Roni', umur=6, tingi=168), Row(nama='Roni', umur=89, tingi=168),]) df.show() error: Traceback (most recent call last): File "ipython-input-24-bfb18ebba99e", line 8, in df.show() AttributeError: 'RDD' object has no attribute 'show' A: The error is clear: df is an RDD, and an RDD has no show() method. Convert it to a DataFrame using toDF(), as in the following code: df = df.toDF() df.show()
Q: Linking error with the libm.so library Good day! I am trying to compile a C project. Compilation succeeds, but at the linking stage there are problems with libm.so: the linker stubbornly fails to find it. I am using the gcc compiler on CentOS. I have reviewed the Makefile many times, and everything in it seems correct. I am not very familiar with gcc compilation, but as far as I understand, -lm should solve the problem (the linker resolves the library path automatically); however, it does not help. Can you tell me what I am doing wrong and how the problem can be solved? PS The compile and link line looks roughly like this: gcc source1.c source2.c -L. mylib.a -lm -static PPS Maybe the problem is -static? A: Most likely, the linker cannot find the static build of the libm library, the file libm.a. This file is shipped in either the glibc-static or the glibc-devel package. On CentOS it can be installed with the command yum install glibc-static (or yum install glibc-devel).
Q: Is the goal of mindfulness to develop ultimate dissociation? I came across this interpretation of Buddha's teaching that suggests that the Buddha ultimately sought a dissociative state, rather than one of freedom. the buddhistic mindfulness meditation does not ... (a) stress genuine freedom, peace and happiness ... and (b) does not eliminate the genetically-encoded instinctual passions of fear, aggression, nurture and desire (the root cause of human bondage, malice and sorrow) ... and (c) does promise a mythical ‘freedom’ in an imaginary life-after-death (‘Parinirvana’) ... and (d) is not a new, non-spiritual method ... and (e) does not produce an actual freedom from the instinctual animal passions, here and now, on earth, in this lifetime ... and (f) does not offer a step by step, down-to-earth, practical progression to becoming actually free of the human condition of malice and sorrow ... to be both happy and harmless. More to your point, however, Mr. Gotama the Sakyan’s mindfulness meditation is primarily about detachment/ dissociation from life – all existence is Dukkha due to Anicca (impermanence) and Dukkha comes from Tanha (craving) for Samsara (phenomenal existence) – and any meditation technique which stresses involvement with such is anything but what Mr. Gotama the Sakyan taught. So my question is: is anyone that practices mindfulness, as advised by Buddhist teachers, heading towards developing a sort of dissociation from their feelings? If not, how can anyone explain the fact of enlightened Buddhists still getting angry without referencing psychological dissociation? Clarification 1 Some people expressed confusion over that question at the end. My clarification follows. We can agree that Buddhist enlightenment does not guarantee extirpation of emotions (like anger). Thus, feeling angry (for example) does not invalidate someone's enlightenment. Enlightened beings can feel angry. 
Now, Richard says -- and this has been confirmed by the actually free people -- "I" am "my" feelings, and "my" feelings are "me" (i.e., emotions and self are the same thing). So if enlightened Buddhists claim to be free from illusion of self, and if emotions still remain and occur, how can that be explained as anything but dissociation (i.e., dissociation of a covert part of self from the overt rest of the self)? Hope that is clear enough. A: In Iti 109 (quoted below), the Buddha indeed taught man to swim against his nature to become free from suffering. Renunciation (nekkhamma - subject to the middle way) is against the flow i.e. it's not natural to man. Craving is natural to man. This was said by the Blessed One, said by the Arahant, so I have heard: "Suppose a man was being carried along by the flow of a river, lovely & alluring. And then another man with good eyesight, standing on the bank, on seeing him would say: 'My good man, even though you are being carried along by the flow of a river, lovely & alluring, further down from here is a pool with waves & whirlpools, with monsters & demons. On reaching that pool you will suffer death or death-like pain.' Then the first man, on hearing the words of the second man, would make an effort with his hands & feet to go against the flow. "I have given you this simile to illustrate a meaning. The meaning is this: the flow of the river stands for craving. Lovely & alluring stands for the six internal sense-media. The pool further down stands for the five lower fetters. The waves stand for anger & distress. The whirlpools stand for the five strings of sensuality. The monsters & demons stand for the opposite sex. Against the flow stands for renunciation. Making an effort with hands & feet stands for the arousing of persistence. The man with good eyesight standing on the bank stands for the Tathagata, worthy & rightly self-awakened." 
The author (Richard) quoted by the OP portrays Buddhism in a negative way, probably in an attempt to peddle his own teachings (called "actual freedom"?) - at least I guess, from some contents on the website. From this page (quoted below), we can see Richard's teachings, from which I gather that "actual freedom" is the freedom to enjoy life fully (i.e. indulge in sensual pleasures without limitation): Taken to its extreme, as Hindu and Buddhist philosophy does, one denies that this planet earth and the space that it hangs in – and the universe itself – are actual. To them it is all an illusion, a dream. For them, the ‘Dreamer’ – their god – is who ‘I’ really am and all their effort is predicated upon realising that this is who one really is. Westerners have foolishly allowed themselves to be taken in by the apparent wisdom coming from the eastern mystical states of being because of the paucity of experiential wisdom in their own culture. It all started growing exponentially after the sixties generation trekked to the Himalayas, and to other exotic places, to find the permanent drug experience ... One arrives in the actual by becoming involved, totally involved in being here ... not by practicing detachment. Being here is to put your money where your mouth is, as it were. All other actions are methods, devices, techniques ... in other words: delaying tactics. In being here one is completely immersed. Being here is total inclusion. One demonstrates one’s appreciation of life by partaking fully in existence ... by letting this moment live one. One dedicates oneself to the challenge of being here as the universe’s experience of itself. ... Meanwhile people thoughtlessly pursue the elusive chimera of Eastern Enlightenment. 
The Buddha realized that modern day hedonists like Richard, who delight in attachment, is excited by attachment and enjoys attachment, would not be able to understand the Dhamma (teachings of the Buddha) as we can see from SN 6.1 (quoted below): "This Dhamma that I have attained is deep, hard to see, hard to realize, peaceful, refined, beyond the scope of conjecture, subtle, to-be-experienced by the wise. But this generation delights in attachment, is excited by attachment, enjoys attachment. For a generation delighting in attachment, excited by attachment, enjoying attachment, this/that conditionality and dependent co-arising are hard to see. This state, too, is hard to see: the resolution of all fabrications, the relinquishment of all acquisitions, the ending of craving; dispassion; cessation; Unbinding. And if I were to teach the Dhamma and if others would not understand me, that would be tiresome for me, troublesome for me." The Buddha felt it was too troublesome to teach the Dhamma to people like Richard, but he was persuaded otherwise. And we are fortunate and blessed because of that. If people like Richard feel that practitioners of Buddhism are gloomy, "anti-life" (quoted verbatim) and do not have "genuine freedom, peace and happiness", the answer to this comes from Dhammapada 197 - 200 (quoted below): Happy indeed we live, friendly amidst the hostile. Amidst hostile men we dwell free from hatred. Happy indeed we live, friendly amidst the afflicted (by craving). Amidst afflicted men we dwell free from affliction. Happy indeed we live, free from avarice amidst the avaricious. Amidst the avaricious men we dwell free from avarice. Happy indeed we live, we who possess nothing. Feeders on joy we shall be, like the Radiant Gods. 
A: "mindfulness meditation is primarily about detachment/dissociation from life": In mindfulness one should observe in a detached manner, without attachment to the pleasant or aversion to the unpleasant, and with wisdom towards neutral experiences. What is observed can be classified as mental states (citta), mental content (cetasika) or the corporeal body (rupa); as the 5 aggregates or the 6 sense bases and their associated objects; as the 4 foundations of mindfulness, the 4 foods, etc. "heading towards developing a sort of dissociation from their feelings?": As mentioned above, this is mental non-reaction to pleasant, unpleasant, and neutral feelings. If one gets attached to the pleasant, averse to the unpleasant, or remains ignorant about the neutral, this all leads to negative mental states. This is not the same as detachment/dissociation from life: one should still do what needs to be done in life, within moral and ethical bounds and constraints. (The point of the moral constraints here is to create a mental and karmic environment conducive to further developing concentration and wisdom.)
Q: bash - processing output one line at a time I read that xargs was good for processing the output of a command one line at a time (and it is). I have the following line in my script. ./gen-data | awk '{printf $2 " "; printf $1=$2=$3=""; gsub (" ", "", $0);if(length($0) == 0){ print "0000"} else{print $0}}' | xargs -t -n2 -P1 bash -c 'datatojson "$@"' _ It produces the right output, there is no question of that. However, gen-data produces something like 1000 lines, and what I really would like is for this command to execute after each line, not after 1000 lines (It's clearly stopping regularly to get more input). Here is what gen-data looks like: candump $interface & while true; do while read p; do cansend $interface $(echo $p | awk 'NF>1{print $NF}'); done < <(shuf $indoc) done (cansend sends data to an interface and candump reads from that interface and outputs it onto the screen, but I wager that's not too relevant). In any case candump seems to be continuously streaming output, but when I pipe that into awk and xargs, it becomes chunked. Is it just because I used shuf? I would think that since it's going through the interface, and being read on the other side, it would be less chunked than shuf provides. A: You can try the same command, this time using multiple hacks to avoid buffering: ./gen-data | gawk '{printf $2 " "; printf $1=$2=$3=""; gsub (" ", "", $0);if(length($0) == 0){ print "0000"} else{print $0}; fflush(stdout)}' | stdbuf -o0 xargs -t -n2 -P1 bash -c 'datatojson "$@"' _ Mind the change from awk to gawk and the use of fflush. You can also try mawk -Winteractive. Also mind that I added stdbuf -o0 before xargs. You can also try the latest at the beginning with ./gen-data
{ "pile_set_name": "StackExchange" }
Q: Prevent table data editing in Oracle SQL Developer Is there any way in Oracle SQL Developer to view table data read-only or prevent editing? When I view the data in a table it lets me edit, and I want to avoid accidentally making a change. I would expect there to be a way to toggle this somehow, but I haven't found it. I'm using version 2.1.1.64. A: Yes there is. Ask your DBA to create a user that only has read access to the TABLES in question and then log in with that user. The other thing to remember is that if you do edit some data in a data grid, then in order for the change to be written to the database you have to separately commit it, either by issuing a COMMIT command in an SQL Worksheet, by clicking on the green tick button, or by pressing F11. Pressing F12 will roll back changes that have not been committed.
Q: Confusion about the Kleene star I have been struggling to understand one key property about the closure of two union'ed expressions. Basically what I need to know is exactly how the Kleene star works. I.e., if the regular expression R = (0+1)*, does the expression have to evaluate to something like 000111/01/00001111, or can we have an unequal amount of 0's & 1's, such as 0011111/000001/111111/0000? A: The amount of 0's and 1's can be unequal; you can even have the 0's and 1's in any order! a* means "zero or more a's, where each a is evaluated independently"; thus, in a string matching (0+1)*, each character can match (0+1) without regard for how the other characters in the string are matching it. Consider the pattern (0+1)(0+1); it matches the strings 00, 01, 10, and 11. As you can see, the 0's and 1's don't have to occur in equal amounts and don't have to occur in any specific order. The Kleene star extends this to strings of any length; after all, (0+1)* just means <empty>+(0+1)+(0+1)(0+1)+(0+1)(0+1)(0+1)+ ....
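You can check this concretely with Python's re module (note the notational shift: the formal-language union + is written | in Python regex syntax, and fullmatch anchors the whole string):

```python
import re

kleene = re.compile(r"(0|1)*")  # the regular expression (0+1)* in Python syntax

balanced = ["000111", "01", "00001111"]
unbalanced = ["0011111", "000001", "111111", "0000", ""]

# every string over {0,1} matches, regardless of counts or order
for s in balanced + unbalanced:
    assert kleene.fullmatch(s) is not None

# strings containing other symbols do not match
assert kleene.fullmatch("01201") is None
print("all binary strings match (0+1)*")
```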
Q: Driving power MOSFET from 3.3v or lower Arduino Pro Mini I want to control a low voltage (hobby class) motor with a 8Mhz/3.3v Arduino Pro Micro. I'd like to power both from 2 alkaline cells. The motor draws approximately 1 A @ 3V with new batteries. I would be doing PWM of a couple of channels at the normal Arduino rates. My thought is to connect the batteries to the Vcc pin (not Vraw) and run the Arduino unregulated at about 3V, but somewhat less when the motor is running. I'm willing to reprogram the fuses for a lower brownout voltage (or none) if needed. I'm feeling my way on the rest of this - I'll lay out my (perhaps naive) approach and get some feedback about whether I'm understanding it, and how to proceed. With such a low battery voltage, a TIP120 darlington seems to drop too much voltage and substantially reduce the motor speed/power. So I'm thinking that I need to use a power N channel MOSFET, and it appears that I need to be fairly selective to get one which can sufficiently turn on with a <3V gate to source voltage. I'm considering something like the FQP30N06L http://www.mouser.com/ds/2/149/FQP30N06L-244344.pdf If I read the datasheet correctly, Vgs of 3V and a load Ids of 1A would result in less than 100mV drop Vds. But this transistor may be overkill, and the TO220 package is larger than I'd prefer. If I'm willing to deal with SOT-23-3 surface mount packaging, there are some options like: Toshiba SSM3K324RLF http://www.semicon.toshiba.co.jp/info/docget.jsp?type=datasheet&lang=en&pid=SSM3K324R Infineon BSS806NE http://www.mouser.com/ds/2/196/BSS806NE_Rev2%2001-359816.pdf If I'm reading it right, with 2x1.5v (nominal) batteries directly powering an Arduino Pro Micro, I can sufficiently drive Vgs to handle 1A Ids at a small Vds and a low enough power dissipation, with any of these three. I would put a reverse diode and a cap across the motor. 
I've seen devices with a small electrolytic across the motor terminals, and I've seen a smaller value ceramic cap - what are the best practices? Is this a feasible and reasonable approach? What are the gotcha's that I need to take into consideration? Any advice about choosing among these (or other) transistors? My concerns are: not understanding MOSFETs enough, and the unregulated power supply being too noisy for the ATMega328p when it's also powering the motor. Can I handle the latter by adding another cap or two across the Arduino Pro Micro's Vcc/Ground? Backup option: run 3x1.5v alkalines and power the Arduino Pro Micro through its regulated Vraw input. A: I don't think the mosfets will be an issue. Looking at the last one (BSS806NE): figure 5 in the datasheet shows that for 1A at Vgs = 1.4v you'd get a Vds of only .1v. I think a bigger problem might be the voltage of the batteries dropping under load. You'd have to measure that with a multimeter while the motor is running. A simple cap might help with a little noise, but not with the voltage dropping while the motor is running. Not sure how much the AVR would mind having its supply voltage dropping all of a sudden. Off the top of my head the minimum voltage of the Atmega328 @8MHz is 1.8v, while default fuse settings use 2.7v, so I'd probably change the fuses to set BOD at 1.8v. Other fuses don't need to change. The backup option sounds good, though you might have to go with 4 x 1.5v depending on the voltage drop under load. Another alternative might be to have two sets of 1.5v batteries in parallel, halving the current load on each battery.
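To put numbers on why the MOSFET wins here, a back-of-the-envelope Python check (the 0.1 V figure is the datasheet value discussed above; the ~1 V darlington saturation drop is a typical TIP120 ballpark, not a measured value):

```python
import math

# Treat the datasheet's ~0.1 V drop at 1 A as an effective on-resistance.
i_load = 1.0                      # motor current, amps
v_ds = 0.1                        # MOSFET drain-source drop at that current, volts

r_ds_on = v_ds / i_load           # ~0.1 ohm effective on-resistance
p_mosfet = i_load ** 2 * r_ds_on  # conduction loss in the MOSFET, watts

# A TIP120-style darlington saturates around 1 V, a third of a 3 V supply:
v_ce_sat = 1.0
p_darlington = v_ce_sat * i_load

print(f"MOSFET: ~{p_mosfet:.2f} W  darlington: ~{p_darlington:.2f} W")
assert math.isclose(p_mosfet, 0.1)
assert math.isclose(p_darlington / p_mosfet, 10.0)
```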
Q: Angular: pick item from ng-repeat search results Is it possible to pick/choose one result from the ng-repeat search results so that it automatically goes to the input area? <input type="text" class="form-control" ng-model="event.url_code" ng-change="searchAccountUrl();"> <div ng-repeat="result in accounts | filter: event.url_code" style="border: 1px solid #dee3e8"> <a style="cursor: pointer">{{result.url_code}}</a> </div> </div> Like angucomplete, or a select tag: you choose one option, it goes to the input area as the selected value, and then ng-model holds your choice. A: You want only one (or several) of the <div> within the ng-repeat to show up, if they match the value in the input? You just need to use ng-if with your ng-repeat. The ng-if will have knowledge of your current item within your ng-repeat iteration and if it shares a scope with the input, it'll also have knowledge of the model you use for ng-model. No need for change handlers, etc. Data binding will do all of it for you. <input ng-model="myModel.url_code"> <div ng-repeat="result in accounts" ng-if="result.url_code == myModel.url_code"> <strong>{{result.url_code}}</strong> </div> You can use a method like .contains() or whatever in place of == if you want to worry about partial matches. Just add the method to your scope and it'll work. Edit: Based on your comments, you want something like: <input ng-model="myModel.url_code"> <a href="" ng-click="setInputValue(result)" ng-repeat="result in accounts">{{result.url_code}}</a> $scope.setInputValue = function(result) { $scope.myModel.url_code = result.url_code; };
Q: Multiline regular expressions with python requests module I am iterating over a web page line by line using requests, but trying to capture some multi-line regular expressions: import requests r = requests.get(url) for line in r.iter_lines(): pat = re.search(regex, line) if pat: print pat.group(1) I have tried concatenating the whole file into one long string, but that seems wrong. What is the best way to capture these multiline expressions (preferably using requests)? Note: I am new to requests. I have looked at the docs but haven't found, or understood, the answer. Thanks A: r = requests.get(url) pat = re.search(regex, r.text)
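To make the multiline part concrete, here is a small self-contained sketch (the sample text and pattern are made up for illustration): once the whole body is in one string, as r.text gives you, the re.DOTALL flag lets `.` span newlines so a single group can capture across lines.

```python
import re

# Stand-in for r.text: requests returns the whole response body as one
# string, so there is no need to iterate line by line.
text = """<item>
  <name>alpha</name>
</item>
<item>
  <name>beta</name>
</item>"""

# re.DOTALL makes '.' match newlines too, so the non-greedy group can
# capture content that spans several lines.
pattern = re.compile(r"<item>(.*?)</item>", re.DOTALL)
matches = [m.strip() for m in pattern.findall(text)]
print(matches)  # ['<name>alpha</name>', '<name>beta</name>']
```

Without re.DOTALL the same pattern finds nothing here, because `.` stops at each newline - which is exactly why the line-by-line loop in the question could never see a multi-line match.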
Q: Yii 2.0 $request->post() issues In my controller I have the following lines $request = Yii::$app->request; print_r($request->post()); echo "version_no is ".$request->post('version_no',-1); The output is given below Array ( [_csrf] => WnB6REZ6cTAQHD0gAkoQaSsXVxB1Kh5CbAYPDS0wOGodSRANKBImVw== [CreateCourseModel] => Array ( [course_name] => test [course_description] => kjhjk [course_featured_image] => [course_type] => 1 [course_price] => 100 [is_version] => 1 [parent_course] => test [version_no] => 1 [parent_course_id] => 3 [course_tags] => sdsdf ) ) version_no is -1 So here the return value of post() contains the version_no. But when it is called as $request->post("version_no"), it is not returning anything (or $request->post("version_no",-1) returns the default value -1). As per the Yii 2.0 docs, the syntax is correct and should return the value of the post parameter. But why is it failing in my case? The post array has the parameter in it, but the function is not returning it when called for an individual parameter value. A: Your parameters are in $_POST['CreateCourseModel']['version_no'] etc. With $request->post('version_no',-1) you are trying to get $_POST['version_no'], which is not defined, so it returns -1. So to get version_no use $data = $request->post('CreateCourseModel'); print_r($data['version_no']); A: You can access nested $_POST array elements using dot notation: \Yii::$app->request->post('CreateCourseModel.version_no', -1); Model properties are grouped like that for massive assignment that is done via $model->load(Yii::$app->request->post()). Depending on your needs, maybe it's better to use a default value validator like this: ['version_no', 'default', 'value' => -1],
Q: How to add content recursively to an array? I have a file with multiple lines; I need to iterate over each line and save it into an array. This is my code: while(($line = fgets($fh)) !== false) { $obj = json_decode($line); $content['trace']= array( 'message' => $obj->trace->details->{"[message]"}, ); } Now if I have, for example, two lines: Line 1 Line 2 then in the $content array returned after the end of the while loop I can see only the content of Line 2. Should I use array_push(), or is there something else? A: It depends on what you want the resulting array to look like. Each pass through the loop assigns a fresh array to $content['trace'], overwriting the previous one, which is why only the last line survives. To build a message array: $content['trace']['message'][] = $obj->trace->details->message; Or to build multiple arrays each with a message key: $content['trace'][]['message'] = $obj->trace->details->message;
Q: Can a 26 Inch Controller Drive a 20 Inch Wheel? My situation: A Bakfiets with a bad controller (hills too much for it, I think). The supplier has offered to send me a new controller. The supplier has also offered to supply a whole new separate battery - controller - motor system to mount on the front wheel (to prevent a recurrence of controller failure). This is a unit he has unused, and it is designed for a rear 26 inch wheel. This sounds great, but I am worried that his second controller won't properly drive my 20 inch front wheel. My question - will this system work? Do controllers need to be configured to drive different wheel sizes? I feel the answer is yes; otherwise, how can a controller know the speed of the bike - it only has two ways of knowing this - the pedelec or the motor. A: You may be able to reprogram the controller's wheel size from the LCD control panel (or a USB reprogramming cable). But this assumes a certain level of intelligence in the controller, and yours might not have this capacity. You’d have to check with the vendor. Even if it were reprogrammable, you have the following challenges:
- Motors have optimum torque and speed ranges. Your motor may be wound for the slower rotations of a 26” wheel and may not spin fast enough to give a 20” wheel the speed you need.
- Controllers may obey the ebike assist speed limit by limiting max RPMs. If your controller isn’t wheel-size reprogrammable you’ll find yourself limited to a much reduced speed, and even if it is, the physical/electronic max RPM limit may still cap your top speed (my Bafang geared hub motor/controller doesn’t seem to go above a certain RPM regardless of the wheel size; this may be either a motor or controller limitation).
- You’ll need to build the wheel, and you might find some of the spoke angles are going to be very tight.
- A rear e-hub will not fit normally in the front forks because of the hub spacing and freewheel clearance.
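The "much reduced speed" point in the answer can be quantified with simple wheel geometry - an illustrative sketch where the 250 RPM cap is a made-up placeholder, not a figure for any particular motor: at the same hub RPM, road speed scales with wheel circumference, so a 20 inch wheel reaches only 20/26 (about 77%) of the 26 inch speed.

```python
import math

def speed_kmh(wheel_diameter_inch, rpm):
    """Road speed for a wheel of the given diameter turning at a fixed RPM."""
    circumference_m = math.pi * wheel_diameter_inch * 0.0254  # inches -> metres
    return circumference_m * rpm * 60 / 1000                  # m/min -> km/h

rpm_cap = 250  # hypothetical controller/motor RPM limit, for illustration only
v26 = speed_kmh(26, rpm_cap)
v20 = speed_kmh(20, rpm_cap)
print(f'26": {v26:.1f} km/h, 20": {v20:.1f} km/h, ratio {v20 / v26:.2f}')
```

Whatever the actual RPM cap turns out to be, the ratio between the two wheel sizes is fixed by the diameters alone.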
Q: Undefined method "delete" on redirect_to When I'm authenticating using the google-drive-ruby gem, trying to redirect the user via the auth url returns the error "undefined method 'delete' for #<Addressable::URI:0x0000000d8c1128>", for reasons I'm not entirely aware of. Here is my code: class UserFormsController < ApplicationController layout 'admin' before_action :set_user_form, only: [:show, :edit, :update, :destroy] before_action :g_auth_user # GET /user_forms def index @user_forms = UserForm.all redirect_to @auth_url end [...] def g_auth_user credentials = Google::Auth::UserRefreshCredentials.new( client_id: "506139056270-iu34antv0ebbouo332p55gem8vj5uj9b.apps.googleusercontent.com", client_secret: "CNc0okSHqFBsmLSeZgzDhyHJ", scope: [ "https://www.googleapis.com/auth/drive", "https://spreadsheets.google.com/feeds/", ], redirect_uri: user_forms_url) @auth_url = credentials.authorization_uri end [...] A: It looks like the URI structure returned by that method is incompatible with redirect_to, so you should be able to fix it by converting it to a string: redirect_to @auth_url.to_s
Q: Add the Material Date/Time Picker library to Android Studio manually I want to manually add this library (Material Date/Time Picker) to my project, without this code: dependencies { compile 'com.wdullaer:materialdatetimepicker:2.3.0' } Please help me. A: Go to this link - (same library). Copy the Java code into your own custom packages in your project. Copy the items from all the resource files also. Resolve all import errors manually. It's done.
Q: Google Maps AutoComplete dropdown hidden when Google Maps Full Screen is true I've implemented a Google Map with autocomplete overlaid on the map and I've set the FullScreenControl option to "true" (You can see the FullScreenControl on the right in the image below). My problem is that when I switch to FullScreen mode by clicking the FullScreenControl, the dropdown is hidden behind the Google Map. It seems that the ZIndex is too low, but setting it to a very large number does not seem to fix the issue. You can see from the image below that the dropdown exists, but only behind the fullscreen Google Map. I did find a similar question with an answer where someone used a normal dropdown and not the Google Maps autocomplete. Similar Question and answer However the solution didn't work for me. Setting the ZIndex doesn't seem to work. I'm using TypeScript with Angular2. Thank you. A: For anyone struggling with this, if the z-index solution is not working: the Google Maps-generated div ("pac-container") with the autocomplete options is appended to the body's child elements. But when in full screen, only elements inside the target element (the map div) will be shown, so z-index is ignored. A quick workaround is to move the pac-container div inside the map div when entering full screen, and move it back on exit.
document.onfullscreenchange = function ( event ) { let target = event.target; let pacContainerElements = document.getElementsByClassName("pac-container"); if (pacContainerElements.length > 0) { let pacContainer = document.getElementsByClassName("pac-container")[0]; if (pacContainer.parentElement === target) { console.log("Exiting FULL SCREEN - moving pacContainer to body"); document.getElementsByTagName("body")[0].appendChild(pacContainer); } else { console.log("Entering FULL SCREEN - moving pacContainer to target element"); target.appendChild(pacContainer); } } else { console.log("FULL SCREEN change - no pacContainer found"); }}; A: I fixed it by adding a z-index to .pac-container; see here: .pac-container, .pac-item { z-index: 2147483647 !important; }
Q: Custom ViewController Transition Remove Gap Between Views Hi I have a custom transition between two view controllers and I want to remove the black gap between them when it makes the transition. What can I do to do this? Thank you! Here is how the transition is performed currently with a CATransition and a picture of the gap. - (void)bottomButtonScreen4:(UIGestureRecognizer *)gestureRecognizer { NSLog(@"Swipe Up Worked"); settingsViewController = [[SettingsViewController alloc] init]; CATransition* transition = [CATransition animation]; transition.duration = 0.5; transition.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear]; transition.type = kCATransitionPush; transition.subtype = kCATransitionFromTop; [self.navigationController.view.layer addAnimation:transition forKey:nil]; settingsViewController = [self.storyboard instantiateViewControllerWithIdentifier: @"settingView"]; [self.navigationController pushViewController:settingsViewController animated:NO]; } A: Using this transition, there's only so much you can do. You'll notice (if you make it slow enough) that as the new view moves in, it's also fading in and the outgoing view is fading out. You can get rid of the black bar by setting the window's background color to the same color as your incoming view's background color, but you'll also see the outgoing controller's view fade to that color as it goes out. Check that out, and see if that's a look you can live with. self.view.window.backgroundColor = settingsViewController.view.backgroundColor; P.S. You should get rid of the line where you alloc init settingsViewController. It isn't doing anything now, and it was the wrong way to get that instance anyway. 
After Edit: Another way to do this that gets rid of the black bar problem, is to use animateWithDuration like this: -(IBAction) pushViewControllerFromTop { CGFloat height = self.view.bounds.size.height; UIViewController *settingsViewController = [self.storyboard instantiateViewControllerWithIdentifier:@"Blue"]; settingsViewController.view.frame = CGRectMake(self.view.frame.origin.x, self.view.frame.origin.y - height, self.view.frame.size.width, self.view.frame.size.height); [self.view.superview addSubview:settingsViewController.view]; [UIView animateWithDuration:.5 animations:^{ self.view.center = CGPointMake(self.view.center.x, self.view.center.y + height); settingsViewController.view.center = CGPointMake(settingsViewController.view.center.x, settingsViewController.view.center.y + height); } completion:^(BOOL finished) { [self.navigationController pushViewController:settingsViewController animated:NO]; }]; }
Q: App not stopped when launched for background fetch from Xcode When I launch my app from Xcode in Background Fetch mode, it works. But it should stop when I call the callback handler, or after 30 seconds - and it does not! When I click Pause, I can see that my main thread is not doing anything. Is this some Xcode-specific issue, or do I misunderstand something about background fetch? A: Apps running under the Xcode debugger are not subject to the same background execution time limitations as a released app. For example, if you looped, logging the UIApplication property backgroundTimeRemaining, then in a released app your app would be terminated when this value reached zero. Under the debugger your app would continue indefinitely, reporting a zero value.
Q: Calculating square roots using the recurrence $x_{n+1} = \frac12 \left(x_n + \frac2{x_n}\right)$ Let $x_1 = 2$, and define $$x_{n+1} = \frac{1}{2} \left(x_n + \frac{2}{x_n}\right).$$ Show that $x_n^2$ is always greater than or equal to $2$, and then use this to prove that $x_n − x_{n+1} ≥ 0$. Conclude that $\lim x_n = \sqrt2$. My question: So I know how to do this problem but I don't know how to prove that $x_n >0$ for all $n\in \mathbb{N}$. A: If $x_n>0$ then $x_{n+1}$ cannot be negative. Since $x_1=2>0$, the claim follows by induction. A: As @CiaPan said in the comments, you also need to prove that the limit of this sequence is exactly $\sqrt 2$. Here is a different approach to reach that goal. First, show that each step squares the quantity $\frac{x_n-\sqrt 2}{x_n+\sqrt 2}$: $$\color{blue}{\frac {x_{n+1}-\sqrt 2}{x_{n+1}+\sqrt 2}=\left(\frac{x_n-\sqrt 2}{x_n+\sqrt 2}\right)^{2}}.$$ Indeed, from $$x_{n+1}=\frac 12\left(x_n+\frac 2{x_n}\right)\implies\frac {x_{n+1}}{\sqrt 2}=\frac {x_n^2+2}{2\sqrt2\, x_n},$$ the inverse of componendo-dividendo gives $$\frac {x_{n+1}-\sqrt 2}{x_{n+1}+\sqrt 2}=\frac{x_n^2-2\sqrt 2\, x_n+2}{x_n^2+2\sqrt 2\, x_n+2}=\left(\frac{x_n-\sqrt 2}{x_n+\sqrt 2}\right)^2.$$ Iterating this identity $n-1$ times (formally, a one-line induction on $n$) yields the closed form $$\color{blue}{\frac {x_n-\sqrt 2}{x_n+\sqrt 2}=\left(\frac{x_1-\sqrt 2}{x_1+\sqrt 2}\right)^{2^{n-1}}=\left(\frac{2-\sqrt 2}{2+\sqrt 2}\right)^{2^{n-1}}}\quad[\text{since }\, x_1=2].$$ Now $0<\frac{2-\sqrt 2}{2+\sqrt 2}<1$, so as $\color{red}{n\to \infty}$ the right side $\color{red}{\left(\frac{2-\sqrt 2}{2+\sqrt 2}\right)^{2^{n-1}}\to 0}$, hence $$\color{red}{\frac {x_n-\sqrt 2}{x_n+\sqrt 2}\to 0\implies x_n\to \sqrt 2}.$$ So, $${\color{fuchsia}{\lim x_n= \sqrt 2}}$$
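A quick numerical check of the recurrence (plain floating-point arithmetic, nothing assumed beyond the problem statement): starting from $x_1 = 2$ the iterates stay above $\sqrt 2$, decrease monotonically, and converge so fast that the number of correct digits roughly doubles each step.

```python
import math

def babylonian_sqrt2(n_steps):
    """Iterate x_{n+1} = (x_n + 2/x_n) / 2 starting from x_1 = 2."""
    x = 2.0
    history = [x]
    for _ in range(n_steps):
        x = 0.5 * (x + 2.0 / x)
        history.append(x)
    return history

xs = babylonian_sqrt2(5)
for i, x in enumerate(xs, start=1):
    print(f"x_{i} = {x:.15f}   error = {x - math.sqrt(2):+.2e}")
```

Five steps already reach the limit of double precision, consistent with the doubly-exponential decay of the closed form.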
Q: How to start a shell terminal session in Java and use it to put commands and read their results keeping the previous context I have a web application that has to mirror a Linux shell, that is to say, the users will have a screen that simulates the shell and will have an actual shell in the server side. I have implemented a quick & dirty solution to test the interaction with the shell Process but it doesn't seem to work: this is a simple Spring Controller where the user passes commands and should get their results: @RestController public class ShController{ @GetMapping("/sh") public DeferredResult<String> shSessionCommand(@RequestParam String command) throws IOException, InterruptedException { DeferredResult<String> dr = new DeferredResult<>(); ProcessBuilder pb = new ProcessBuilder("sh"); //START SHELL Process p = pb.start(); InputStream inputStream = p.getInputStream(); Thread t = new Thread(() -> { //BACKGROUND SHELL OUTPUT READ byte[] read = new byte[2048]; try { inputStream.read(read); dr.setResult(new String(read)); } catch (IOException e) { e.printStackTrace(); } }); t.start(); //TO ALLOW THE SHELL READER TO START BEFORE PASSING THE COMMAND Thread.sleep(5000); //WRITE THE USER COMMAND OutputStream outputStream = p.getOutputStream(); outputStream.write(command.getBytes()); outputStream.flush(); return dr; } } When I call /sh?command=ls outputStream.write(command.getBytes()); writes ls but inputStream.read(read) is blocked for ever. Why? UPDATE: Clarifications: Every user needs to have its own shell Process Every user shell session will be used for a long time to keep its context I have updated the Controller to reflect this. There is a Map where I keep the relation between users and their shell Process. Now I close the OutputStream and the inputStream.read(read); does read the command result, BUT, only the first time. Subsequent calls to issue more commands throw java.io.IOException: Stream closed when invoking outputStream.write(command.getBytes());. 
Any ideas? @RestController public class ShController{ private Map<String, Process> shellMap = new HashMap<>(); @GetMapping("/sh") public DeferredResult<String> shSessionCommand(@RequestParam String command, HttpSession session) throws IOException, InterruptedException { DeferredResult<String> dr = new DeferredResult<>(); Process p = getUserShell(session.getId()); InputStream inputStream = p.getInputStream(); Thread t = new Thread(() -> { System.out.println("****THREAD"); byte[] read = new byte[2048]; try { inputStream.read(read); dr.setResult(new String(read)); } catch (IOException e) { e.printStackTrace(); } }); t.start(); Thread.sleep(1000); System.out.println("****COMMAND"); OutputStream outputStream = p.getOutputStream(); outputStream.write(command.getBytes()); outputStream.flush(); outputStream.close(); return dr; } private Process getUserShell(String user) throws IOException { if(this.shellMap.get(user) == null){ System.out.println("****Creating process"); ProcessBuilder pb = new ProcessBuilder("sh"); Process process = pb.start(); shellMap.put(user, process); } return shellMap.get(user); } } A: The problem is I wasn't sending the new line (enter) character, it's as if I never submitted the command. 
This is the solution: @RestController public class ShController{ private Map<String, Process> shellMap = new HashMap<>(); @GetMapping("/sh") public String shSessionCommand(@RequestParam String command, HttpSession session) throws IOException, InterruptedException { String result = ""; Process process = getUserShell(session.getId()); InputStream out = process.getInputStream(); OutputStream in = process.getOutputStream(); byte[] buffer = new byte[4000]; boolean read = false; boolean written = false; while (!read) { int no = out.available(); if (no > 0) { int n = out.read(buffer, 0, Math.min(no, buffer.length)); result = new String(buffer, 0, n); read = true; } if(!written) { in.write((command + "\n").getBytes()); in.flush(); written = true; } try { Thread.sleep(10); } catch (InterruptedException e) { e.printStackTrace(); } } return result; } private Process getUserShell(String user) throws IOException { if(this.shellMap.get(user) == null){ System.out.println("****Creating process"); ProcessBuilder pb = new ProcessBuilder("sh", "-i"); Process process = pb.start(); shellMap.put(user, process); } return shellMap.get(user); } }
Q: Reading from SerialPort & Memory Management - C# I am working on a program that reads in from a serial port, then parses, formats, and displays the information appropriately. This program, however, needs to be run for upwards of 12+ hours - constantly handling a stream of incoming data. I am finding that as I let my app run for a while, the memory usage increases at a linear rate - not good for a 12 hour usage. I have implemented a logger that writes the raw incoming binary data to a file - is there a way I can utilize this idea to clear my memory cache at regular intervals? I.e. how can I, every so often, write to the log file in such a way that the data doesn't need to be stored in memory? Also - are there other aspects of a Windows Form Application that would contribute to this? E.g. I print the formatted strings to a textbox, which ends up displaying the entire string. Because this is running for so long, it easily displays hundreds of thousands of lines of text. Should I be writing this to a file and clearing the text? Or something else? A: Obviously, if the string grows over time, your app's memory usage will also grow over time. Also, WinForms textboxes can have trouble dealing with very large strings. How large does the string get? Unless you really want to display the entire string onscreen, you should definitely clear it periodically (depending on your users' expectations); this will save memory and probably improve performance.
Q: Out of memory using svmtrain in Matlab I have a set of data that I am trying to learn using SVM. For context, the data has a dimensionality of 35 and contains approximately 30'000 data-points. I have previously trained decision trees in Matlab with this dataset and it took approximately 20 seconds. Not being totally satisfied with the error rate, I decided to try SVM. I first tried svmtrain(X,Y). After about 5 seconds, I get the following message: ??? Error using ==> svmtrain at 453 Error calculating the kernel function: Out of memory. Type HELP MEMORY for your options. When I looked up this error, it was suggested to me that I use the SMO method: svmtrain(X, Y, 'method', 'SMO');. After about a minute, I get this: ??? Error using ==> seqminopt>seqminoptImpl at 236 No convergence achieved within maximum number (15000) of main loop passes Error in ==> seqminopt at 100 [alphas offset] = seqminoptImpl(data, targetLabels, ... Error in ==> svmtrain at 437 [alpha bias] = seqminopt(training, groupIndex, ... I tried using the other methods (LS and QP), but I get the first behaviour again: 5 second delay then ??? Error using ==> svmtrain at 453 Error calculating the kernel function: Out of memory. Type HELP MEMORY for your options. I'm starting to think that I'm doing something wrong because decision trees were so effortless to use and here I'm getting stuck on what seems like a very simple operation. Your help is greatly appreciated. A: Did you read the remarks near the end about the algorithm memory usage? Try setting the method to SMO and use a kernelcachelimit value that is appropriate to the memory you have available on your machine. During learning, the algorithm will build a double matrix of size kernelcachelimit-by-kernelcachelimit. default value is 5000 Otherwise subsample your instances and use techniques like cross-validation to measure the performance of the classifier. 
Here is the relevant section: Memory Usage and Out of Memory Error When you set 'Method' to 'QP', the svmtrain function operates on a data set containing N elements, and it creates an (N+1)-by-(N+1) matrix to find the separating hyperplane. This matrix needs at least 8*(n+1)^2 bytes of contiguous memory. If this size of contiguous memory is not available, the software displays an "out of memory" error message. When you set 'Method' to 'SMO' (default), memory consumption is controlled by the kernelcachelimit option. The SMO algorithm stores only a submatrix of the kernel matrix, limited by the size specified by the kernelcachelimit option. However, if the number of data points exceeds the size specified by the kernelcachelimit option, the SMO algorithm slows down because it has to recalculate the kernel matrix elements. When using svmtrain on large data sets, and you run out of memory or the optimization step is very time consuming, try either of the following: Use a smaller number of samples and use cross-validation to test the performance of the classifier. Set 'Method' to 'SMO', and set the kernelcachelimit option as large as your system permits.
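Plugging the question's numbers into the formula the docs give makes the failure unsurprising - this is just arithmetic on the quoted 8*(N+1)^2 figure, no MATLAB internals assumed: with roughly 30,000 training points the 'QP' method wants about 7.2 GB (roughly 6.7 GiB) of contiguous memory.

```python
# Memory demanded by svmtrain's 'QP' method according to the docs quoted
# above: an (N+1)-by-(N+1) matrix of 8-byte doubles, i.e. 8*(N+1)^2 bytes.
def qp_kernel_matrix_bytes(n_points):
    return 8 * (n_points + 1) ** 2

n = 30_000
needed = qp_kernel_matrix_bytes(n)
print(f"N = {n:,}: {needed:,} bytes (~{needed / 2**30:.1f} GiB contiguous)")
```

That is far beyond what a typical 32-bit MATLAB process (or a fragmented 64-bit heap) can hand out in one block, which is why the SMO method with a bounded kernelcachelimit is the practical route at this scale.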
Q: Self import of subpackages or not? Suppose you have the following b b/__init__.py b/c b/c/__init__.py b/c/d b/c/d/__init__.py In some python packages, if you import b, you only get the symbols defined in b. To access b.c, you have to explicitly import b.c or from b import c. In other words, you have to import b import b.c import b.c.d print b.c.d In other cases I saw an automatic import of all the subpackages. This means that the following code does not produce an error import b print b.c.d because b/__init__.py takes care of importing its subpackages. I tend to prefer the first (explicit better than implicit), and I always used it, but are there cases where the second one is preferred to the first? A: I like namespaces -- so I think that import b should only get what's in b itself (presumably in b/__init__.py). If there's a reason to segregate other functionality in b.c, b.c.d, or whatever, then just import b should not drag it all in -- if the "drag it all in" does happen, I think that suggests that the namespace separation was probably a bogus one to start with. Of course, there are examples even in the standard library (import os, then you can use os.path.join and the like), but they're ancient, by now essentially "grandfathered" things from way before the Python packaging system was mature and stable. In new code, I'd strongly recommend that a package should not drag its subpackages along for the ride when you import it. (Do import this at the Python prompt and contemplate the very last line it shows;-).
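The difference between the two behaviours is easy to demonstrate - a sketch that builds the question's b/b.c layout in a temporary directory (the package names are taken from the question; everything else is scaffolding): with an empty b/__init__.py, import b does not bind b.c, while an explicit import of the subpackage does.

```python
import importlib
import os
import sys
import tempfile

# Build the question's layout: b/__init__.py and b/c/__init__.py, both empty.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "b", "c"))
open(os.path.join(root, "b", "__init__.py"), "w").close()
open(os.path.join(root, "b", "c", "__init__.py"), "w").close()

sys.path.insert(0, root)
b = importlib.import_module("b")
had_c_before = hasattr(b, "c")   # False: nothing dragged the subpackage in

importlib.import_module("b.c")   # explicit import binds c as an attribute of b
had_c_after = hasattr(b, "c")    # True

print(had_c_before, had_c_after)
```

Adding `from . import c` to b/__init__.py would flip the first result to True - that is exactly the "drag it all in" style the answer argues against.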
Q: Is there a way in PHP to leave a function upon error To minimize page loading time, we are caching our PHP generated HTML as an HTML file. If a "cache file" exists we use it. Otherwise we run the PHP scripts to generate the page. To reduce the page loading speed even more, we would like to remove the "If file exists" line (which takes time) and simply let PHP generate an error if the file does not exist. We would then "catch" the error and let PHP generate the page. The question is the following: Is there a simple way to implement "On error, leave function" in PHP. This is the code we have: function UseCachedPage() { $CacheFile='mypage.html'; if(!file_exists($CacheFile)) return; //Would like to skip this to save time include($CacheFile); //Magic "on error, leave function" needed here. exit; //Cache included, so no need to run the php. } Any help would be appreciated. A: According to the docs: Files are included based on the file path given or, if none is given, the include_path specified. If the file isn't found in the include_path, include will finally check in the calling script's own directory and the current working directory before failing. The include construct will emit a warning if it cannot find a file; this is different behavior from require, which will emit a fatal error. So what this means is if you want to "catch" it, you have to use require instead. Now, you can only catch this fatal error as of PHP 7, which is important to know. function UseCachedPage() { $CacheFile = "mypage.html"; try { require($CacheFile); exit; } catch (Error $e) { // Call something to build your file here } }
Q: Handling assertions/exceptions using python-hypothesis What is considered to be best practice when it comes to property based testing using the hypothesis library with respect to assertions within the program code? I created a very simple function to show my point. The function simply divides two numbers. I put in an assertion that fails if the denominator is zero. When I run the test, it fails as soon as hypothesis chooses 0 for parameter b (denominator) due to the assertion error. However, the assertion in the function is meant to handle this particular case. from hypothesis import given from hypothesis import strategies as st def divide(a: float, b: float) -> float: assert b != 0, "Denominator must not be zero!" return a / b @given(b=st.floats()) def test_divide(b): assert isinstance(divide(100, b), (int, float)) How am I supposed to adjust the code to make the test pass for parameter value b=0? What's a pythonic way? EDIT: In my view, the suggested duplicate question (Exception handling and testing with pytest and hypothesis) doesn't solve the issue. What happens if I were to use the following code? @given(b=st.floats()) def test_divide(b): try: assert isinstance(divide(100, b), (int, float)) except AssertionError: assume(False) As far as I understand it, as soon as the assertion within the try block will be False, the except-path will be executed and the particular test case will be ignored. That is, every real test failure (found by hypothesis) will be ignored. Compared to the suggested duplicate question, my divide-function will throw an AssertionError rather than a ZeroDivisionError for b=0. Every other failing test case will also cause an AssertionError (try-block). A: @given(b=st.floats()) def test_divide(b): assume(b != 0) assert isinstance(divide(100, b), (int, float)) assume tells hypothesis that b == 0 is a bad example, and to ignore it. 
Bear in mind that with the floats strategy, you're also going to get NaN and ±infinity - you can disable these in the strategy with allow_nan=False and allow_infinity=False. Once you're modifying the strategy, you may as well also add min_value=1 or min_value=0.00000001 to prevent 0
Q: How to annotate MYSQL autoincrement field with JPA annotations Straight to the point, problem is saving the object Operator into MySQL DB. Prior to save, I try to select from this table and it works, so is connection to db. Here is my Operator object: @Entity public class Operator{ @Id @GeneratedValue private Long id; private String username; private String password; private Integer active; //Getters and setters... } To save I use JPA EntityManager’s persist method. Here is some log: Hibernate: insert into Operator (active, password, username, id) values (?, ?, ?, ?) com.mysql.jdbc.JDBC4PreparedStatement@15724a0: insert into Operator (active,password, username, id) values (0, 'pass', 'user', ** NOT SPECIFIED **) The way I see it, problem is configuration with auto increment but I can't figure out where. Tried some tricks I've seen here: Hibernate not respecting MySQL auto_increment primary key field But nothing of that worked If any other configuration files needed I will provide them. DDL: CREATE TABLE `operator` ( `id` INT(10) NOT NULL AUTO_INCREMENT, `first_name` VARCHAR(40) NOT NULL, `last_name` VARCHAR(40) NOT NULL, `username` VARCHAR(50) NOT NULL, `password` VARCHAR(50) NOT NULL, `active` INT(1) NOT NULL, PRIMARY KEY (`id`) ) A: To use a MySQL AUTO_INCREMENT column, you are supposed to use an IDENTITY strategy: @Id @GeneratedValue(strategy=GenerationType.IDENTITY) private Long id; Which is what you'd get when using AUTO with MySQL: @Id @GeneratedValue(strategy=GenerationType.AUTO) private Long id; Which is actually equivalent to @Id @GeneratedValue private Long id; In other words, your mapping should work. But Hibernate should omit the id column in the SQL insert statement, and it is not. There must be a kind of mismatch somewhere. Did you specify a MySQL dialect in your Hibernate configuration (probably MySQL5InnoDBDialect or MySQL5Dialect depending on the engine you're using)? Also, who created the table? Can you show the corresponding DDL? 
Follow-up: I can't reproduce your problem. Using the code of your entity and your DDL, Hibernate generates the following (expected) SQL with MySQL:
insert into Operator (active, password, username) values (?, ?, ?)
Note that the id column is absent from the above statement, as expected. To sum up, your code, the table definition and the dialect are correct and coherent; it should work. If it doesn't for you, maybe something is out of sync (do a clean build, double check the build directory, etc.) or something else is just wrong (check the logs for anything suspicious). Regarding the dialect, the only difference between MySQL5Dialect and MySQL5InnoDBDialect is that the latter adds ENGINE=InnoDB to the table objects when generating the DDL. Using one or the other doesn't change the generated SQL.
A: Using MySQL, only this approach was working for me:
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
private Long id;
The other 2 approaches stated by Pascal in his answer were not working for me.
A: For anyone reading this who is using EclipseLink for JPA 2.0, here are the two annotations I had to use to get JPA to persist data, where "MySequenceGenerator" is whatever name you want to give the generator, "myschema" is the name of the schema in your database that contains the sequence object, and "mysequence" is the name of the sequence object in the database.
@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="MySequenceGenerator")
@SequenceGenerator(allocationSize=1, schema="myschema", name="MySequenceGenerator", sequenceName = "mysequence")
For those using EclipseLink (and possibly other JPA providers), it is CRITICAL that you set the allocationSize attribute to match the INCREMENT value defined for your sequence in the database. If you don't, you'll get a generic persistence failure, and waste a good deal of time trying to track it down, like I did.
Here is the reference page that helped me overcome this challenge: http://wiki.eclipse.org/EclipseLink/Examples/JPA/PrimaryKey#Using_Sequence_Objects
Also, to give context, here is what we're using:
Java 7
Glassfish 3.1
PostgreSQL 9.1
PrimeFaces 3.2/JSF 2.1
Also, for laziness' sake, I built this in NetBeans with the wizards for generating Entities from DB, Controllers from Entities, and JSF from Entities, and the wizards (obviously) do not know how to deal with sequence-based ID columns, so you'll have to manually add these annotations.
{ "pile_set_name": "StackExchange" }
Q: TFS: Best way to trigger build on server restart, or on Windows Updates installation In short, the requirement is to verify that our latest released software can be built and then installed after the latest Windows updates and/or other patches have been applied. So the build server VM(s) will be configured just for this purpose, and the build only needs to run after an update. Since such updates are usually followed by a restart, I am thinking of a server restart event triggering a build and deployment. Does such an option exist in TFS 2017? If there is no way to do it through TFS then, I guess, a PowerShell script that runs on startup should work?
A: There is no such built-in function. However, a PowerShell script that runs on startup should work. Just as Jessehouwing said, you can create the script with the REST API to trigger builds. Create a script to trigger the specific build definition (reference the sample below). Run the script on startup:
How to run a batch file each time the computer boots
How to schedule a Batch File to run automatically in Windows 10/8/7
Param(
    [string]$collectionurl = "http://server:8080/tfs/DefaultCollection",
    [string]$projectName = "ProjectName",
    [string]$keepForever = "true",
    [string]$BuildDefinitionId = "34",
    [string]$user = "username",
    [string]$token = "password"
)
# Base64-encodes the Personal Access Token (PAT) appropriately
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user,$token)))
function CreateJsonBody {
    $value = @"
{
  "definition": {
    "id": $BuildDefinitionId
  },
  "parameters": "{\"system.debug\":\"true\",\"BuildConfiguration\":\"debug\",\"BuildPlatform\":\"x64\"}"
}
"@
    return $value
}
$json = CreateJsonBody
$uri = "$($collectionurl)/$($projectName)/_apis/build/builds?api-version=2.0"
$result = Invoke-RestMethod -Uri $uri -Method Post -Body $json -ContentType "application/json" -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)}
{ "pile_set_name": "StackExchange" }
Q: How can I show the category description? I'm working on a WordPress site and I'm using this theme: Ermark Adora. I want to be able to add a description to my categories. Here's the code in the functions file where the new code would go (it's near the end of the functions page):
<li class="categ"> <h3> - '.$categ->name.' - </h3>'; if ($products_full=='true') { $src['full_categ'] .= '<a href="#" class="less-products">'.__('view all','adora').'</a>'; $src['full_categ'] .= '</li>'
What code do I need to add in order to display the category's description?
A: I found the answer:
<h3> - '.$categ->name.' -</h3><p> '.$categ->description.' </p>';
{ "pile_set_name": "StackExchange" }
Q: Why does the chlorination of indene occur with syn selectivity in heptane? In the chlorination of indene to form 1,2-dichloro-2,3-dihydro-1H-indene in heptane as solvent, the syn dichloride is formed as the major product, as reported in J. Org. Chem. 1980, 45 (25), 5150–5155: The explanation I was given for this was that "in a non-polar solvent, the ion pair collapses quickly". What does this mean, and why is the stereoselectivity reversed, as compared to typical halogenations which proceed via a halonium ion intermediate to give anti dihalides?
A: Key The reaction is run in heptane, a non-polar solvent. A non-polar solvent will not stabilize or support ions as well as a polar solvent.
Background When the $\ce{Cl2}$ begins to interact with the indene, an initial ion pair is formed. In the case of chlorine, it is not always clear whether a cyclic halonium ion (the 3-membered structure on the bottom left), an open ion (the structure on the bottom right), or an equilibrium between the two ions occurs. [Side Note: In electrophilic bromination reactions involving alkenes, bromonium ions are very common intermediates, usually leading to trans addition. However, chlorine isn't as polarizable as bromine, and in this particular case with indene, the open ion is extremely stable due to its benzylic nature. Hence the involvement of only a chloronium ion is questionable in this case.]
Analysis As mentioned above, ions are not very well stabilized in the non-polar solvent. As a result, the two ions (one positive, one negative) will tend to stay close together (a tight ion pair) and stabilize each other; further, due to this lack of stabilization, they will have a short lifetime, leading to quick collapse to products. Both of these factors (chlorines located close to one another, a short lifetime that doesn't allow much movement of the second chlorine) favor syn collapse - addition of the second chlorine from the same side that the first chlorine is on.
If we were to repeat the reaction in a polar solvent that would allow the ions to separate and have a longer lifetime, more trans product from anti addition would be expected.
{ "pile_set_name": "StackExchange" }
Q: SOAP - passing a variable from Groovy to a SOAP request I'm new to SOAP and need help. :) I have a SOAP request which I would like to run five times, and I need to pass a variable from a for loop in Groovy to the request. Example:
for (i=1;i<5;i++){ SOAP request with passed i variable }
Another question: is there any way to combine variables? I have Properties with some data and I need to combine this data with the passed i variable. Example: ${adress#City + i variable} Thanks for responses.
A: Assuming my_prop is the value you pass to your SOAP request:
for (index=1;index<5;index++) {
    // set the index value in the property
    testRunner.testCase.setPropertyValue(my_prop, index.toString())
    // run your SOAP request
    testRunner.runTestStepByName("my_soap_test_step")
}
For your second problem you can do:
new_prop = context.expand( '${#TestCase#old_prop}' ) + index.toString()
{ "pile_set_name": "StackExchange" }
Q: Oracle - How to Convert VARCHAR to DATETIME format I am using an Oracle 10g database in which a table contains a column with the DATE datatype. I am using the following query to get the record:
select to_char(START_TIME, 'YYMMDD HH24:MI:SS') from table;
From the above query, the result will be of type VARCHAR. I have tried the to_date() method, but it resulted in displaying only the date. Can I convert VARCHAR to DATETIME format? The result should be of type DATETIME. Please help me resolve this problem.
A: An Oracle date contains both date and time, so you can consider it a datetime (there is no datetime datatype in Oracle). How it displays when you select it is entirely up to your client; the default display setting is controlled by the NLS_DATE_FORMAT parameter. If you're just using the date in your PL/SQL block, then just assign it into a date datatype and select into that variable without to_char, and it will work just fine and contain whatever time component is present in your table. To control the display, for example using nls_date_format:
SQL> select a from datetest;
A
---------
19-FEB-13
SQL> alter session set nls_date_format='YYMMDD HH24:MI:SS';
Session altered.
SQL> select a from datetest;
A
---------------
130219 07:59:38
But again, this is only for display.
A: Oracle's DATE type fields contain date/time values, therefore converting to datetime does not make any sense (it's already a datetime). Read more about Oracle date types here.
{ "pile_set_name": "StackExchange" }
Q: Krylov Space dimension for specific matrix This was an exam question and I couldn't solve it, so I'd like to know what the solution is. Let $b,c\in \mathbb{R}^n$ be linearly independent and define $$A(b,c)=I+bb^T+cc^T.$$
Show that $A(b,c)$ is positive definite.
Show the Krylov space $\mathcal{K}_l(b,A)$ has dimension at most $2$ for all natural $l$.
For what $b,c$ does $\mathcal{K}_2(b,A)$ have dimension $2$?
For a given $x^0$, what is the minimum and maximum number of iterations of the CG method for $A(b,c) x=b$?
Solution attempt: We have for $y\in \mathbb{R}^n\setminus\{ 0\}$: $$y^T A(b,c) y= y^Ty+ y^Tbb^Ty+y^T cc^Ty=y^Ty+(y^Tb)^2+(y^Tc)^2 \geq y^Ty > 0,$$ and the above is $0$ only when $y$ is. $\mathcal{K}_1=\text{span}( b)$ and $\mathcal{K}_2=\text{span}(b,A(b,c)b)$. Set $bb^T=B$ and $cc^T=C$. $$Ab=(I+B+C)b=b+Bb+Cb=b+bb^Tb+cc^Tb=b+\|b\|^2b+\langle c,b\rangle c.$$ $$A^2b=(I+B+C)(b+Bb+Cb)=b+2Bb+2Cb+B^2b+C^2b+BCb+CBb.$$ I have to show $\{b,Ab,A^2b\}$ is linearly dependent, but I'm not getting anywhere. We want $b$ and $b+Bb+Cb$ to be linearly independent. The maximum number of iterations in general is $n$, but here it seems to be different.
A: You need to notice that \begin{align} Ab&=b+(b^Tb)b+(c^Tb)c\\ Ac&=c+(b^Tc)b+(c^Tc)c\\ \end{align} so that all Krylov spaces are spanned by the vectors $b$ and $c$. Now consider what happens when these vectors are orthogonal.
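The hint in the answer can be expanded into a short derivation (a sketch, in exact arithmetic; it fills in the step the solution attempt was missing):

```latex
% span{b,c} is A-invariant:
\[
  Ab = \bigl(1 + \lVert b\rVert^{2}\bigr)\, b + (c^{T}b)\, c ,
  \qquad
  Ac = (b^{T}c)\, b + \bigl(1 + \lVert c\rVert^{2}\bigr)\, c ,
\]
so $A\,\operatorname{span}\{b,c\} \subseteq \operatorname{span}\{b,c\}$ and, by induction,
\[
  A^{k} b \in \operatorname{span}\{b,c\} \quad (k \ge 0)
  \qquad\Longrightarrow\qquad
  \mathcal{K}_{l}(b,A) \subseteq \operatorname{span}\{b,c\},
  \quad \dim \mathcal{K}_{l}(b,A) \le 2 .
\]
If $b^{T}c = 0$, then $Ab = (1+\lVert b\rVert^{2})\,b$, so
$\mathcal{K}_{2}(b,A)=\operatorname{span}\{b\}$ has dimension $1$;
if $b^{T}c \neq 0$, the $c$-component of $Ab$ is nonzero and
$\dim \mathcal{K}_{2}(b,A) = 2$. Hence CG started at $x^{0}=0$ solves
$A(b,c)x = b$ in one iteration when $b \perp c$ and in two iterations otherwise.
For a general $x^{0}$ the residual $b - Ax^{0}$ need not lie in
$\operatorname{span}\{b,c\}$, but $A$ has at most three distinct eigenvalues
($1$ on $\operatorname{span}\{b,c\}^{\perp}$ and two eigenvalues $>1$ on
$\operatorname{span}\{b,c\}$), so CG terminates after at most three iterations.
```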
{ "pile_set_name": "StackExchange" }
Q: open a file using CDPATH and symlink To move around quickly, I added a path to CDPATH that contains symlinks to different locations. I did this by adding the following line to .bashrc:
export CDPATH=~/symlinks
When working with directories, everything's fine and I can access the symlinked folders from everywhere. For example if I do:
$ ln -s ~/path/to/folder ~/symlinks/folder
I can then just write:
$ cd folder
to get inside the symlinked folder, regardless of my current directory. However, when I create a symlink to a file and then try to open it with an editor, I get an empty file, unless I'm in the symlink directory. For example if I do:
$ ln -s ~/path/to/file/filename ~/symlinks/filename
and then write:
$ kwrite filename
I get an empty file if I'm not in the symlink folder. I want to access the file from anywhere though; how can I achieve this?
A: The simple answer is that you can't. What CDPATH does is that if you type "cd folder", it first checks whether "folder" exists within your CDPATH; if not, it will check in the folder you're currently in. But this is specific to directory changes; kwrite doesn't check CDPATH, and AFAIK there's no configuration option to make it look in any specific directory. What you could do is make a small shell script that replaces kwrite, like this:
#!/bin/sh
FILE=$1
if [ -f "$HOME/symlinks/$FILE" ]
then
    kwrite "$HOME/symlinks/$FILE"
else
    kwrite "$FILE"
fi
Then run the script (which you could name e.g. "akwrite") instead of running kwrite directly.
{ "pile_set_name": "StackExchange" }
Q: How to document mixed array return type? What is the correct syntax to document an array of mixed strings and ints? public function toArray(): array { return [ 'string', 42, ]; } Here are the options I've considered: /** * @return string|int[] */ But this seems to indicate it would either be a string or an int[] /** * @return string[]|int[] */ And this would seem to indicate either an array of strings or an array of ints. A: You can use @return (int|string)[] More details on phpdoc
{ "pile_set_name": "StackExchange" }
Q: How can I estimate the compressibility of a file without compressing it? I'm using an event-loop based server in Twisted (Python) that stores files, and I'd like to be able to classify the files according to their compressibility. If the probability that they'd benefit from compression is high, they would go to a directory with btrfs compression switched on; otherwise they'd go elsewhere. I do not need to be sure - 80% accuracy would be plenty, and would save a lot of disk space. But since there is the CPU and fs performance issue too, I cannot just save everything compressed. The files are in the low megabytes. I cannot test-compress them without using a huge chunk of CPU and unduly delaying the event loop, or refactoring a compression algorithm to fit into the event loop. Is there any best practice to give a quick estimate for compressibility? What I came up with is taking a small chunk (a few kB) of data from the beginning of the file, test-compressing it (with a presumably tolerable delay) and basing my decision on that. Any suggestions? Hints? Flaws in my reasoning and/or problem?
A: Just 10K from the middle of the file will do the trick. You don't want the beginning or the end, since they may contain header or trailer information that is not representative of the rest of the file. 10K is enough to get some amount of compression with any typical algorithm. That will predict a relative amount of compression for the whole file, to the extent that that middle 10K is representative. The absolute ratio you get will not be the same as for the whole file, but the amount that it differs from no compression will allow you to set a threshold. Just experiment with many files to see where to set the threshold. As noted, you can save time by doing nothing for files that are obviously already compressed, e.g. .png, .jpg, .mov, .pdf, .zip, etc. Measuring entropy is not necessarily a good indicator, since it only gives the zeroth-order estimate of compressibility.
If the entropy indicates that it is compressible enough, then it is right. If the entropy indicates that it is not compressible enough, then it may or may not be right. Your actual compressor is a much better estimator of compressibility. Running it on 1K won't take long.
A: I think what you are looking for is How to calculate the entropy of a file? This question contains all kinds of methods to calculate the entropy of a file (and from that you can get the 'compressibility' of a file). Here's a quote from the abstract of this article (Relationship Between Entropy and Test Data Compression, Kedarnath J. Balakrishnan, Member, IEEE, and Nur A. Touba, Senior Member, IEEE):
The entropy of a set of data is a measure of the amount of information contained in it. Entropy calculations for fully specified data have been used to get a theoretical bound on how much that data can be compressed. This paper extends the concept of entropy for incompletely specified test data (i.e., that has unspecified or don't care bits) and explores the use of entropy to show how bounds on the maximum amount of compression for a particular symbol partitioning can be calculated. The impact of different ways of partitioning the test data into symbols on entropy is studied. For a class of partitions that use fixed-length symbols, a greedy algorithm for specifying the don't cares to reduce entropy is described. It is shown to be equivalent to the minimum entropy set cover problem and thus is within an additive constant error with respect to the minimum entropy possible among all ways of specifying the don't cares. A polynomial time algorithm that can be used to approximate the calculation of entropy is described. Different test data compression techniques proposed in the literature are analyzed with respect to the entropy bounds.
The limitations and advantages of certain types of test data encoding strategies are studied using entropy theory.
And to be more constructive, check out this site for a Python implementation of entropy calculations on chunks of data.
A: Compressed files usually don't compress well. This means that just about any media file is not going to compress very well, since most media formats already include compression. Clearly there are exceptions to this, such as BMP and TIFF images, but you can probably build a whitelist of well-compressed filetypes (PNGs, MPEGs, and venturing away from visual media - gzip, bzip2, etc.) to skip, and then assume the rest of the files you encounter will compress well. If you feel like getting fancy, you could build feedback into the system (observe the results of any compression you do and associate the resulting ratio with the filetype). If you come across a filetype that has consistently poor compression, you could add it to the whitelist. These ideas depend on being able to identify a file's type, but there are standard utilities which do a pretty good job of this (generally much better than 80%) - file(1), /etc/mime.types, etc.
{ "pile_set_name": "StackExchange" }
Q: Display rectangular background instead of circular background We want to add a circular background for an image like this: http://jsfiddle.net/rkEMR/8679/ So I followed link1; I am trying the below code in link2, but it displays like the below image and the background is not circular:
.product-options ul.options-list .label>label.colors {
    width: 30px;
    height: 30px;
    border-radius: 50%;
    background-size: cover !important;
    display: block;
    padding: 0 !important;
    font-size: 0;
    border: 0px solid #d1d1d1 !important;
}
Edit script:
var jQuery = $.noConflict();
jQuery(document).ready(function(){
    var inner = Array();
    inner = jQuery(" .product-options ul.options-list .label>label");
    for (i=0;i<inner.length;i++){
        var classN = inner[i].innerText;
        if (classN=="Black" || classN=="Green" || classN=="Red" || classN=="Purple" || classN=="Orange" || classN=="Pink" || classN=="Brown"){
            inner.eq(i).addClass("colors");
            classN = classN.toLowerCase();
            var urlB = "http://stylebaby.com/media/catalog/custom/"+classN+".png";
            inner.eq(i).css('background-image', 'url(' + urlB + ')');
        }
    }
});
A: Provide a fixed height and width to the <span class="label"> element here. Preferably give them the same value so the element is a square. Apply margin-top to the label element for it to be positioned in the center of the span. Refer to the attached code:
.label {
    height: 40px;
    width: 40px;
    border-radius: 50%;
}
.label label {
    margin-top: 3px !important;
}
The result should be like in the attached screenshot:
{ "pile_set_name": "StackExchange" }
Q: De-icing or combat icing in winter. "Snow" vocabulary I'd like to learn some "ice and snow-related" vocabulary. I've marked in bold the words I'm trying to think of. I'm not sure whether you face the same problem in the places you live, but still. In my city every winter the streets get covered with ice and snow, and special snow-clearing cars/vehicles (?) appear in the streets to remove ice, or do de-icing, or combat icing, as it becomes slippery. To remove the ice they use some reagent, road salt or... what would you call it? And what is a slippery part of a road called? Black ice or something different? Meaning it's dangerous not only for cars but also for people (pedestrians).
A: The response to ice depends on how cold it is likely to get, and the amount of snow that is normal for the region. In regions with lots of snow, this can be moved from the road with a snowplough (US snowplow) or snow blower. If the temperatures are not too low, ice can be melted with salt, usually a coarse rock salt. This is called gritting, or salt spreading, and the salt is sometimes referred to as "grit". Other chemicals can be used; these would have particular chemical names. Another word for "chemical" is "reagent". The vehicles are named according to their particular role, but in general "snow removal vehicle" is possible (though not common). Generally you would use the specific name "de-icing lorry/gritter/snow plough/snow melter". See https://en.wikipedia.org/wiki/Winter_service_vehicle for lists of more specialised types. Black ice is in contrast to compressed snow. Snow, even when compressed, remains white (due to trapped air). But if a puddle freezes, it stays clear, and is much harder to see on the road. Hence "black ice": it is transparent ice that looks black when on the road surface.
{ "pile_set_name": "StackExchange" }
Q: Python + opencv with PyCharm - 'opencv' has no attribute 'imread' A similar question to mine exists, however it does not answer my question. Here is what I am working with:
Python v. 3.6.2
opencv 1.0.1
PyCharm Community Edition 2017.2.2
macOS Sierra Version 10.12.6
I'm trying to use imread for image processing. I've looked at the documentation and I am using the function correctly. Here is the test code that comes with the opencv library:
import opencv
img = cv.imread('background.png')
if img is None:
    print("Image not loaded!")
else:
    print("Image is loaded!")
I can see my python files and modules in the project explorer. When I run the code, I get the following error:
/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6 /Users/lmc/Desktop/pywerk/opencvpractice
Traceback (most recent call last):
  File "/Users/lmc/Desktop/pywerk/opencvpractice", line 4, in
    img = cv.imread('background.png')
AttributeError: module 'opencv' has no attribute 'imread'
I've tried everything from reinstalling python and the opencv module to switching python versions to 2.7 (and using the respective opencv module) and I get the same error. Is there some sort of system configuration I could be missing? Any help would be much appreciated.
A: Turns out it was a combination of several of these suggestions; if I could give the answer props to Alexander Reynolds that would be the most accurate. I was following an outdated tutorial and ended up with an outdated version of opencv. I downloaded opencv using the instructions here, for anyone else who is looking for the exact commands: https://pypi.python.org/pypi/opencv-python/3.1.0.3 Here is what I ended up with:
import cv2
img = cv2.imread('background.png')
if img is None:
    print("Image not loaded!")
else:
    print("Image is loaded!")
Thanks for the help!
{ "pile_set_name": "StackExchange" }
Q: tmux split-window using shell script I am trying to write a shell script for creating panes in tmux.
#!/bin/sh
tmux rename-window user
tmux new-session -d
tmux split-window -h
tmux selectp -t 1
tmux split-window -h
tmux select-layout even-horizontal
tmux selectp -t 2
tmux split-window -v
But executing the above code doesn't produce the desired output. The picture shows the output produced when the code is executed: Instead, when all the commands are typed in tmux, they produce the desired output. The picture shows the output produced when the commands are manually typed: How can the code be modified to produce the desired outcome?
A: Try this, it looks like it does what you need (judging by the "Desired Output" image):
tmux split-window -h
tmux split-window -h
tmux select-layout even-horizontal
tmux split-window -v
tmux select-pane -t 0
You can also try persisting your layout using something like https://github.com/tmux-plugins/tmux-resurrect.
{ "pile_set_name": "StackExchange" }
Q: when flex-direction is column, divs take all available width. How to prevent this? See the attached snippet. I need each item to take only as much width as its content needs, while the flex items stay stacked vertically, like in the example. How can I achieve this?
.container {
    display:flex;
    flex-direction:column;
}
.green {
    background-color:green;
}
.red {
    background-color:red;
}
<div class="container"> <div class="green"> hello world</div> <div class="red"> hello world2</div> </div>
A: Assuming they should still stack vertically, use display: inline-flex and they will size equally by the content of the widest item. For each row to collapse to its individual content, use e.g. align-items: flex-start, and note, this will make them collapse when using display: flex too. The reason they stretch to equal width is that align-items defaults to stretch, so by using any other value they will size by content.
.container {
    display: inline-flex;
    flex-direction:column;
    align-items: flex-start;
}
.green {
    background-color:green;
}
.red {
    background-color:red;
}
<div class="container"> <div class="green"> hello world</div> <div class="red"> hello world2</div> </div>
{ "pile_set_name": "StackExchange" }
Q: How do I send data in the correct encoding? Good evening. I'm sending data to the database via a GET request with a URL like this: http://192.168.1.134/addItem.php?name=Славные парни&plc=ул. Кутузова, д.11А&prc=196&lplc=111&type=Детское&web=mcc.org&time=11.13.16 10:40&minfo=Что бывает, когда напарником брутального костолома становится субтильный лопух? Наемный охранник Джексон Хили и частный детектив Холланд Марч вынуждены работать в паре, чтобы распутать плевое дело о пропавшей девушке, которое оборачивается преступлением века.&img=http://www.kinopoisk.ru/images/film_big/841152.jpg&sinfo=Что бывает, когда напарником брутального костолома становится субтильный лопух? However, the data arrives in the database in the wrong encoding. The MySQL table collation is utf8_general_ci. How can I make the data get passed in the correct encoding?
A: After connecting to the database, try using mysqli_set_charset. You can also select the database in phpMyAdmin and run:
show variables like 'char%';
The result will be a table like this: Perhaps your database is in one encoding and the table in another, which is why the problems arise.
{ "pile_set_name": "StackExchange" }
Q: Not all files are showing up in ftp session with Azure Web App I lost my ASP.NET project when reinstalling Windows. So I opened an FTP session to recover the files from my web app on Azure. I can see most of my project in my wwwroot folder, but not everything. The Controllers folder is almost empty, and those are the files I was most interested in.
A: You have only deployed the executables with your publish action, not the code in your controllers or other folders. You need to find a backup of your code somewhere else. Normally you would have checked it into some kind of code repository.
{ "pile_set_name": "StackExchange" }
Q: addEventListener per element only I want my canvas to respond exclusively to its own keydown/mouse event listeners. Another user had this problem: addEventListener for keydown on Canvas But the problem still persists: I have a canvas and an input box. I have the spacebar trigger an event, but when I want to type in the input box, it triggers as well. How do I make keydown/mouse events exclusive to the canvas so they won't affect other elements such as my input?
//js file
gameCanvas = document.getElementById('gameCanvas');
document.addEventListener("keydown", function(e){
    if(e.keyCode == 32) // triggers something in canvas
}, false);
//html dom
<canvas id="gameCanvas" tabindex='1'></canvas>
//input that accidentally triggers the above event meant for canvas
<input type="text" id="conversation"></input>
A: You can create a separate keydown/click event listener for each of your input/canvas elements. To get the canvas to respond to keydown you need to add tabindex="1", like so:
<canvas id="gameCanvas" tabindex="1"></canvas>
Both input and canvas will respond to keyboard events when the right element is selected, triggered when keys are pressed. If you prefer click events, just replace keydown with click instead.
//js file
gameCanvas = document.getElementById('gameCanvas');
conversation = document.getElementById('conversation');
gameCanvas.addEventListener("keydown", function(e){
    console.log(gameCanvas);
}, false);
conversation.addEventListener("keydown", function(e){
    console.log(conversation);
}, false);
#gameCanvas {
    background: blue;
    width: 100px;
    height: 100px;
}
<p>gamecanvas</p>
<canvas id="gameCanvas" tabindex="1"></canvas><br />
<p>conversation</p>
<input id="conversation" type="text" tabindex="2"></input>
{ "pile_set_name": "StackExchange" }
Q: NodeJS stuck in uploading image I have a piece of Node code to upload images. Images are of size 10~200K. As you can see, they are not big at all. The problem is that it seems Node keeps busy for a long time (can be 10 minutes) on uploading, and it won't respond to new requests. The code is part of a JSON API consumed by an Android app.
var fs = require('fs');
exports.upload = function(req, res){
    ....
    var rawBody = new Buffer('');
    req.on('data', function(chunk){
        rawBody = Buffer.concat([rawBody, chunk]);
    });
    req.on('end', function(){
        winston.info('on end of uploading moment');
        fs.writeFile(filepath, rawBody, 'binary', function(err){
            if(err) winston.error(err);
            else{
                db.updatesomething();
            }
        });
    });
A: The request never returns anything to the browser, which is why it's hanging; it's waiting for a response.
var fs = require('fs');
exports.upload = function(req, res){
    var rawBody = new Buffer('');
    req.on('data', function(chunk){
        rawBody = Buffer.concat([rawBody, chunk]);
    });
    req.on('end', function(){
        winston.info('on end of uploading moment');
        fs.writeFile(filepath, rawBody, 'binary', function(err){
            if(err) {
                winston.error(err);
                res.end('error');
            } else {
                db.updatesomething();
                res.end('success'); // send a response back
            }
        });
    });
};
{ "pile_set_name": "StackExchange" }
Q: How to remove common letters in two Strings in iOS SDK? How can I remove the common letters in two strings and generate a new string using the remaining unique letters? For example: String 1 = Optimus Prime, String 2 = Deja Thoras; the new string should be: Djaha
A: Here's a way that avoids the nitty-gritty of character enumeration:
NSString *string1 = @"Deja Thoras";
NSString *string2 = @"Optimus Prime";
NSCharacterSet *filterSet = [NSCharacterSet characterSetWithCharactersInString:string2];
NSString *filteredString = [[string1 componentsSeparatedByCharactersInSet:filterSet] componentsJoinedByString:@""];
A: Just enumerate through the characters in the string and delete matching ranges. Be sure to search 'caselessly' (i.e. the difference between uppercase and lowercase letters is irrelevant). The following snippet logs "Djaha", as the opening post expects.
NSString *firstString = @"Deja Thoras";
NSString *secondString = @"Optimus Prime";
NSMutableString *outputString = [NSMutableString stringWithString:firstString];
[outputString enumerateSubstringsInRange:NSMakeRange(0, firstString.length) options:NSStringEnumerationByComposedCharacterSequences usingBlock:^(NSString *substring, NSRange substringRange, NSRange enclosingRange, BOOL *stop) {
    if ([secondString rangeOfString:substring options:NSCaseInsensitiveSearch].location != NSNotFound) {
        [outputString deleteCharactersInRange:substringRange];
    }
}];
NSLog(@"%@", outputString);
If this is a common operation, I would place the code into a category, with a method name like stringByRemovingMatchingCharactersInString:
Q: form submit redirecting to index page of codeigniter

When I click the submit button of my form, the page redirects to the index page, i.e. the CodeIgniter welcome page, even though the URL shows the requested page. I'm confused.

View page:

<form action='welcome/register' method='post'>
<table>......
......
</form>

In the controller I've done the validation, and if validation fails it should redirect to the register_new page, but the redirected page is the CodeIgniter welcome page. The URL, however, is correct.

A: This is how you use forms in CodeIgniter. Don't try to use plain HTML forms like that, just use the CodeIgniter style: http://ellislab.com/codeigniter/user-guide/helpers/form_helper.html

Add this to your controller (to load the form helper):

$this->load->helper('form');

In your view:

<?php echo form_open('controllername/functionname');?>
<input type ...>
//...more inputs
</form>

It redirects to the original page if no action is specified. It should never redirect to the index page unless another controller explicitly sends it there.
Q: Are core data changes reflected in local NSManagedObject subclass vars?

I'm using NSFetchedResultsController to populate a UITableView. The table view is populated with my "Contact" NSManagedObject subclass. When one of the table cells is selected, I'm passing the selected Contact to the destination view controller:

override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
    if segue.identifier == kChatSegue {
        if let controller = segue.destinationViewController as? ChatViewController {
            if let theSelectedContact = self.selectedContact {
                controller.contact = theSelectedContact
            }
        }
    }
}

As you can see from the above code, ChatViewController has a local var which holds the selected contact. Now for the problem. When changes are made to that particular Contact object elsewhere in the app and saved to the managed object context, the changes are not accurately reflected in ChatViewController's local contact var. Will changes to an NSManagedObject be reflected in local vars for that object? If not, how can I force the var to update so that it reflects the current saved values?

A: Managed objects do not automatically reflect changes made to the underlying persistent store. Once fetched, they keep their state until you change it. This is generally a good thing; you wouldn't want unsaved changes to be unexpectedly wiped out, for example. If you want to force one copy to load changes made elsewhere, use refreshObject(_:mergeChanges:) on your NSManagedObjectContext with the second argument set to true. That will tell the context to reload the object's data to reflect the current saved state. You can observe NSManagedObjectContextDidSaveNotification to find out when there might be changes that need to be loaded.
Q: Proving consistency for an estimator. Limits and Convergence in Probability. I need to show that $U$, as defined below, is a consistent estimator for $\mu^{2}$. $U=\bar{Y}^{2}-\frac{1}{n}$ By the continuous mapping theorem, which states that, $X_{n} \stackrel{\mathrm{P}}{\rightarrow} X \Rightarrow g\left(X_{n}\right) \stackrel{\mathrm{P}}{\rightarrow} g(X)$ Then, $\bar{Y} \stackrel{P}{\longrightarrow} \mu $ gives me $\bar{Y}^{2} \stackrel{P}{\longrightarrow} \mu^{2} .$ And since $\frac{1}{n} \rightarrow 0$ as $n \rightarrow \infty$ the result for conistency seems intuitively obvious. But I have a confusion with how to show this formally, whether using only the mapping theorem, or if I need something else. Showing how the $\frac{1}{n} \rightarrow 0$ part leads to consistency is the part that I'm missing, since this is a standard limit and not a convergence in probability. Any help in completing this is greatly appreciated. A: $X_n=1/n$ can be thought of as a random variable with a Dirac distribution with mass at $1/n$. $P(X_n\leq x)\rightarrow 1$, for all $x\geq 0$, and $0$ for all $x< 0$, therefore $X_n$ converges in distribution to the random variable with Dirac distribution with mass at zero, which is the same to say that it converges to zero. By Slutsky's Theorem, $\bar{Y}^2-\frac{1}{n}\rightarrow_d \mu^2-0$, and since convergence in distribution to a constant implies convergence in probability, you have your result. It is also possible to show this with the Continuous Mapping Theorem, since $(\bar{Y},X_n)$ converges jointly in probability to $(\mu,0)$. Then use function $g(y,x)=y^2-x$. A: You're missing two things. First of all, saying $1/n\to\infty$ is a 'standard limit' means that the convergence holds a.s. and hence also in probability. The next step is then to apply the continuous mapping theorem again with the function $g(x,y)=x-y$.
Q: In tensorflow/keras, why train_on_batch output loss is different when recomputed after training using predict_on_batch?

I created a model using Keras and trained it using train_on_batch. To check that the model does what it's supposed to do, I recomputed the loss before and after the training phase using the predict_on_batch method. But, as you'd guess when reading the title, I don't get the same output losses. Here follows a basic code sample to illustrate my problem:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import tensorflow as tf
import numpy as np

# Loss definition
def mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true-y_pred))

# Model definition
model = Sequential()
model.add(Dense(1))
model.compile('rmsprop',mse)

# Data creation
batch_size = 10
x = np.random.random_sample([batch_size,10])
y = np.random.random_sample(batch_size)

# Print loss before training
y_pred = model.predict_on_batch(x)
print("Before: " + str(mse(y,y_pred).numpy()))

# Print loss output from train_on_batch
print("Train output: " + str(model.train_on_batch(x,y)))

# Print loss after training
y_pred = model.predict_on_batch(x)
print("After: " + str(mse(y,y_pred).numpy()))

With this code I get the following outputs:

Before: 0.28556848
Train output: 0.29771945
After: 0.27345362

I'd suppose that the training loss and the loss computed after training should be the same. So I'd like to understand why not?

A: This is how train_on_batch works: it calculates the loss, then updates the network, so we get the loss from before the network was updated. When we apply predict_on_batch, we get the prediction from the updated network.

Under the hood, train_on_batch does many more things, like fixing your data types, standardizing your data, etc. The closest sibling of train_on_batch is test_on_batch. If you run test_on_batch you'll find the result is close to train_on_batch, but not the same.

Here's the implementation of test_on_batch: https://github.com/tensorflow/tensorflow/blob/e5bf8de410005de06a7ff5393fafdf832ef1d4ad/tensorflow/python/keras/engine/training_v2_utils.py#L442 It internally calls _standardize_user_data to fix your data types, data shapes, etc. Once you fix your x and y with proper shapes and data types, the results are very close, except for some small delta due to numerical instability.

Here's a minimal example where test_on_batch, train_on_batch and predict_on_batch seem to agree numerically:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import tensorflow as tf
import numpy as np

# Loss definition
def mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true-y_pred))

# Model definition
model = Sequential()
model.add(Dense(1, input_shape = (10,)))
model.compile(optimizer = 'adam', loss = mse, metrics = [mse])

# Data creation
batch_size = 10
x = np.random.random_sample([batch_size,10]).astype('float32').reshape(-1, 10)
y = np.random.random_sample(batch_size).astype('float32').reshape(-1,1)
print(x.shape)
print(y.shape)

model.summary()

# running 5 iterations to check
for _ in range(5):

    # Print loss before training
    y_pred = model.predict_on_batch(x)
    print("Before: " + str(mse(y,y_pred).numpy()))

    # Print loss output from train_on_batch
    print("Train output: " + str(model.train_on_batch(x,y)))

    print(model.test_on_batch(x, y))

    # Print loss after training
    y_pred = model.predict_on_batch(x)
    print("After: " + str(mse(y,y_pred).numpy()))

Output:

(10, 10)
(10, 1)
Model: "sequential_25"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_27 (Dense)             (None, 1)                 11
=================================================================
Total params: 11
Trainable params: 11
Non-trainable params: 0
_________________________________________________________________
Before: 0.30760005
Train output: [0.3076000511646271, 0.3076000511646271]
[0.3052913546562195, 0.3052913546562195]
After: 0.30529135
Before: 0.30529135
Train output: [0.3052913546562195, 0.3052913546562195]
[0.30304449796676636, 0.30304449796676636]
After: 0.3030445
Before: 0.3030445
Train output: [0.30304449796676636, 0.30304449796676636]
[0.3008604645729065, 0.3008604645729065]
After: 0.30086046
Before: 0.30086046
Train output: [0.3008604645729065, 0.3008604645729065]
[0.2987399995326996, 0.2987399995326996]
After: 0.29874
Before: 0.29874
Train output: [0.2987399995326996, 0.2987399995326996]
[0.2966836094856262, 0.2966836094856262]
After: 0.2966836

N.B.: train_on_batch updates the weights of the network after calculating the loss, so obviously the loss from train_on_batch won't exactly match test_on_batch or predict_on_batch computed afterwards. The proper question is why test_on_batch and predict_on_batch give different losses with your data.
Q: Can't call ajaxSubmit function on document body

I can only insert my code into the body of the page. I need to submit a form via Ajax and send strings to a server app, so I use the jQuery Form Plugin. This is the code that I try to add:

<script src='http://somesite.ru/interactive/jquery-1.8.3.min.js'></script>
<script src='http://malsup.github.io/min/jquery.form.min.js'></script>
<link rel='stylesheet' type='text/css' href='http://somesite.ru/interactive/somesite_interactive.css'>
<div id='question_block'>
  <b>You can leave your question in the form right here:</b>
  <form action='http://somesite.ru/interactive/question' id='interactive_form'>
    <span>Your name:</span><input type='text' name='sender' size='64'>
    <textarea name='question_text'></textarea>
    <input type='button' id='interactive_submit_question' value='Отправить'>
  </form>
</div>
<script type='text/javascript'>
$("#interactive_submit_question").click(function(){
  $("#interactive_form").ajaxSubmit();
  $("#interactive_question_block").html("<b>Thanks! Your question was submitted successfully.</b>");
});
</script>

I don't get any errors while the page is loading, but when I click the button I get "TypeError: $(...).ajaxSubmit is not a function". But I did import the form plugin with:

<script src='http://malsup.github.io/min/jquery.form.min.js'></script>

Update: It looks like the problem is that the page already loads jQuery (twice) along with an old version of the form plugin. I ended up just using the simple $.post() function to send data to my server.

A: Why not just use $.ajax? It is just as easy, and you have more control over what gets submitted.
Q: Finding eigenvalues and eigenfunctions of difference operator?

Consider the following difference operator. For n=2 this operator's action is given by:

n = 2;
tup = Tuples[{1, -1}, n];
LHS = Sum[
   Product[((1 - Subscript[u, 1] Subscript[x, i]^tup[[qq, i]]) (1 - Subscript[u, 2] Subscript[x, i]^tup[[qq, i]]))/(1 - Subscript[x, i]^(2 tup[[qq, i]])), {i, 1, n}]
   Product[(1 - t Subscript[x, i]^tup[[qq, i]] Subscript[x, j]^tup[[qq, j]])/(1 - Subscript[x, i]^tup[[qq, i]] Subscript[x, j]^tup[[qq, j]]), {j, 2, n}, {i, 1, j - 1}]
   f[Subscript[x, 1] q^(tup[[qq, 1]]/2), Subscript[x, 2] q^(tup[[qq, 2]]/2)],
   {qq, 1, Length[tup]}]

Now I would like to use Mathematica to solve the difference equation LHS == EE f[Subscript[x, 1], Subscript[x, 2]] to find eigenfunctions f and eigenvalues EE. Is there a command in Mathematica that can do this? I looked online but could not find one.

PS: There seems to be a recurrence relation solver, RSolve[], but it does not seem to be useful for the above problem.

A: One simple solution can be obtained by assuming q == 1.

Simplify[LHS /. {q -> 1}]
(* f[Subscript[x, 1], Subscript[x, 2]] (-1 + Subscript[u, 1] Subscript[u, 2]) (-1 + t Subscript[u, 1] Subscript[u, 2]) *)

So, the second line of this answer is EE in this case. This immediately suggests a more general result, namely the eigenfunction f a constant and

EE = (-1 + Subscript[u, 1] Subscript[u, 2]) (-1 + t Subscript[u, 1] Subscript[u, 2])

as may be verified by evaluating

Simplify[LHS /. f[_, _] -> c]

It is not obvious (to me, at least) whether other solutions exist. If they do, I am unaware of any single Mathematica function to find them. One might be able to make some progress, however, by expanding LHS as a power series in q about q == 1.
Q: How to prove that $x\mapsto x^{\frac35}$ is continuous Trying to teach myself calc and I know what continuity is but how exactly do we write a formal proof proving that a function is continuous. Say the function $f(x) = x^{\frac35}$. How would I write the step-by-step proof of why it is continuous? A: We have $x^{\frac35}=e^{\frac35\log x}$. Since $\log$ is defined as $$\log x = \int_1^x \frac1t \ \mathsf dt$$ for $x>0$ by the fundamental theorem of calculus, $\log$ is continuous. Moreover, $e^x = \lim_{n\to\infty} \left(1+\frac xn\right)^n$. Since $$g_n(x) := \left(1+\frac xn\right)^n = \sum_{k=0}^n \binom nk \left(\frac xn\right)^k $$ is a polynomial, $g_n$ is continuous. As $$g_n'(x) = \left(1+\frac xn\right)^{n-1} + \frac1n\left(1+\frac xn\right)^n = \left(1 +\frac xn \right)^{n-1}\left(1+\frac1n\right) $$ and $g_n'(x)=0$ implies $x=-n$. Since $g_n'\geqslant 0$ on $[0,\infty)$, and $[0,\infty)$ is a connected set which is not bounded above, it follows that any extreme value must be obtained at $x=0$. Hence $$\sup_{x\in[0,\infty)}|g_n(x) - e^x| = |g_n(0) - e^0| =0. $$ Therefore $g_n$ converges uniformly to $\exp$, from which we conclude that $\exp$ is continuous. Now, $f(x) = g(h(x))$ where $g(x) = e^x$ and $h(x)=\frac35\log x$. The composition of continuous functions is continuous, so $f$ is indeed continuous.
Q: JVM cannot load class properly

Inside class A, I have a method, and in the method there is a line:

someClassB.staticMethodB(arg);

where someClassB is another class. Now, arg is fine, but at this line I get an error:

java.lang.NoClassDefFoundError: someClassC

It seems that someClassB does not load properly, so the static method staticMethodB cannot execute. But inside someClassB we are not using someClassC at all. So why does the JVM try to find someClassC?

A: I suspect you're either actually using it in someClassB somewhere you haven't seen, or it's used in a superclass of someClassB. Either way, it sounds like you need someClassC to be present...
Q: How to copy both .h & .m files in Xcode, and other files?

Sometimes I have to make another class very similar to an existing one. I know it's against OO design, but I have to violate it. When I select the file and press command+C and command+V, nothing happens, and even Copy/Paste in the "Edit" menu does nothing. Even plist files cannot be directly copied in Xcode. So I have to make a new file and copy/paste all the code from the existing one. Is there a better way to do this? Thanks guys :)

A: Select a file in Xcode and select File > Duplicate.... Then give the new file a unique name. All of the code from the duplicated file will be copied into the new file.
Q: The XNA redistributable isn't in the prerequisites list; how do I include it when publishing?

I created a small project in XNA 4.0 and wanted to try publishing it, so I did. Yet when I attempted to install it on a different computer, it didn't work. I've read up on how the publishing process works, and I saw someone in a different thread asking the same question as me. Then I realized that in the prerequisites list (PROJECT -> PROPERTIES -> Publish), the XNA redistributable wasn't there and therefore wasn't checked, so anyone installing the game on a different computer had to install the redistributable on their own to make it work. Any idea why the redistributable isn't in the list? Maybe this isn't even the problem; if that's the case, any ideas what the problem actually is? Thanks ahead.

A: XNA components aren't in the prerequisites list for me either. Instead, go to the same location and click "Application Files", then make sure the Publish Status of the XNA libraries is set to "Prerequisite". To demonstrate, here's a screenshot of my settings for a new XNA project. (Click image for full size.)
Q: Cannot load a plugin to frama-c with call to ocamlyices functions

I am trying to develop a plugin for Frama-C. I built the application, which has several files, and then created the makefile referencing all the files I needed. I am able to run make and then make install and execute the plugin. My problem appears when I call functions from the ocamlyices library in a function... I am still able to run make and make install, but when I try to execute I get the following error:

[kernel] warning: cannot load plug-in 'name' (incompatible with Fluorine-20130601).
[kernel] user error: option `<name>' is unknown.
use `frama-c -help' for more information.
[kernel] Frama-C aborted: invalid user input.

So it says it is incompatible when I add the call to ocamlyices functions. Is there any option/configuration I am missing somewhere? Thank you for your help.

The final solution looked like this:

FRAMAC_SHARE := $(shell frama-c.byte -print-path)
FRAMAC_LIBDIR := $(shell frama-c.byte -print-libpath)
PLUGIN_NAME = Fact

# All needed files
PLUGIN_CMO = ../base/basic_types concolic_search run_fact ../lib/lib
PLUGIN_DOCFLAGS = -I ../base -I ../lib -I $(YICES) -I /usr/lib/ocaml/batteries -I ../instrumentation
PLUGIN_BFLAGS = -I ../base -I ../lib -I $(YICES) -I ../instrumentation
PLUGIN_OFLAGS = -I ../base -I ../lib -I $(YICES) -I ../instrumentation
PLUGIN_EXTRA_BYTE = $(YICES)/ocamlyices.cma
PLUGIN_EXTRA_OPT = $(YICES)/ocamlyices.cmxa

include $(FRAMAC_SHARE)/Makefile.dynamic

The variable $(YICES) is defined as

export YICES="/usr/local/lib/ocaml/3.12.1/ocamlyices"

A: As mentioned by Anne, if your plug-in uses an external library that is not already included by Frama-C itself, you need a few more steps than for a basic plug-in, especially setting PLUGIN_EXTRA_BYTE and PLUGIN_EXTRA_OPT to the external libraries that you want to be linked to your plug-in.
It might also be necessary to adapt the flags passed to the linker with PLUGIN_LINK_BFLAGS and PLUGIN_LINK_OFLAGS, but this is heavily dependent on ocamlyices itself. More information on the variables that can be used to customize the compilation of your plug-in can be found in section 5.3.3 of Frama-C's development guide.
Q: Using the same 3 lines of code to print many lines

The code below prints the text welcome, letter by letter, every 0.1 seconds:

welcome = ("Welcome")

for character in welcome:
    print(character, end="", flush=True)
    sleep(0.1)

Usually, if I wanted to print several things (in this case Welcome and Hello), I would do this:

welcome = ("Welcome")
hello = ("Hello")

for character in welcome:
    print(character, end="", flush=True)
    sleep(0.1)

for character in hello:
    print(character, end="", flush=True)
    sleep(0.1)

But it is not efficient to type this out every time I want a new line, because it wastes a lot of time and space:

for character in hello:
    print(character, end="", flush=True)
    sleep(0.1)

Is there a way I can somehow use only one of these loops to print many different lines, like this:

hello = ("How are You")
welcome = ("Welcome")
nice = ("Awesome")

A: You could iterate over an array in which you store all the messages:

from time import sleep

hello = ("How are You")
welcome = ("Welcome")
nice = ("Awesome")

messages = [hello, welcome, nice]

for message in messages:
    for character in message:
        print(character, end="", flush=True)
        sleep(0.1)
    print('')

The output looks like this:

How are You
Welcome
Awesome
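Going one step further (my own suggestion, not part of the answer above): wrap the loop in a small helper function, so each new message is a one-line call. The function name and the delay parameter are my own choices:

```python
from time import sleep

def type_out(message, delay=0.1):
    """Print `message` one character at a time, pausing `delay` seconds between characters."""
    for character in message:
        print(character, end='', flush=True)
        sleep(delay)
    print()  # finish the line

for text in ("How are You", "Welcome", "Awesome"):
    type_out(text)
```

Calling type_out(text, delay=0) prints instantly, which is handy when testing.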
Q: How to pause FFmpeg from C++ code?

I'm writing a Visual C++ program which invokes ffmpeg.exe to convert video files. I wonder, is it possible to pause/resume the ffmpeg conversion from C++ code? LR.

A: There is no way to control the conversion through ffmpeg itself. However, if you switch to mencoder, you will be able to do it.

A: All you need to do is suspend and resume the ffmpeg child process itself. The main problem is that there is no SuspendProcess API function, and there is no documented or safe way of doing this. The only simple way of doing it is via SuspendThread/ResumeThread. See this article on CodeProject about how to do it.
Q: Conflicting layer styles for geojson in Leaflet map

I am assigning a style to a GeoJSON layer, but it is rendered with that style as well as the default blue style assigned to GeoJSON layers. I can't work out what is causing it.

<!DOCTYPE html>
<html>
<head>
  <title>Global Terrorism Attacks</title>
  <meta charset="utf-8" />
  <script src="http://code.jquery.com/jquery-2.1.0.min.js"></script>
  <link rel="stylesheet" type="text/css" href="css/own_style.css">
  <link rel="stylesheet" href="http://cdn.leafletjs.com/leaflet/v0.7.7/leaflet.css" />
  <script src="http://cdn.leafletjs.com/leaflet/v0.7.7/leaflet.js"></script>
  <script type="text/javascript" src="geojsonLayer.js"></script>
</head>
<body>
<div id="container">
  <div id="header">
    <br><b>Global Terrorism Attacks</b><br><br>
  </div>
</div>
<div id="map">
<script>
var map = L.map('map').setView([41,-1],2);

var CartoDB_All = L.tileLayer('http://{s}.basemaps.cartocdn.com/dark_all/{z}/{x}/{y}.png', {
  attribution: '&copy; <a href="http://www.openstreetmap.org/copyright">OpenStreetMap</a> &copy; <a href="http://cartodb.com/attributions">CartoDB</a>',
  subdomains: 'abcd',
  maxZoom: 19
});
CartoDB_All.addTo(map);

var layerOrder = new Array();

function popupcall(feature, layer) {
  layer.bindPopup("<h1 class='header'></h1>" + feature.properties.admin);
}

// Shared base style; only the fill colour varies per value band.
function bandStyle(fillColor) {
  return {
    color: '#000000',
    weight: '1.04',
    dashArray: '',
    fillColor: fillColor,
    opacity: '0.6',
    fillOpacity: '0.6'
  };
}

function doStylegtd(feature) {
  var v = feature.properties.armassault;
  if (v >= 0.0 && v <= 500.0) return bandStyle('#2b83ba');
  if (v >= 500.0 && v <= 1000.0) return bandStyle('#80bfab');
  if (v >= 1000.0 && v <= 1500.0) return bandStyle('#c7e8ad');
  if (v >= 1500.0 && v <= 2000.0) return bandStyle('#ffffbf');
  if (v >= 2000.0 && v <= 2500.0) return bandStyle('#fdc980');
  if (v >= 2500.0 && v <= 3000.0) return bandStyle('#f07c4a');
  if (v >= 3000.0 && v <= 3017.0) return bandStyle('#d7191c');
}

L.geoJson(geojsonLayer, { style: doStylegtd }).addTo(map);
L.geoJson(geojsonLayer, { onEachFeature: popupcall }).addTo(map);

//jamie.setStyle(function(feature) { return secondStyle; });
//L.Util.setOptions(jamie,{style:secondStyle})
</script>
</div>

A: You are adding the same GeoJSON layer to the map twice:

L.geoJson(geojsonLayer, { style: doStylegtd }).addTo(map);
L.geoJson(geojsonLayer, { onEachFeature: popupcall }).addTo(map);

The second of these is giving you the default blue outlines. Try:

L.geoJson(geojsonLayer, {
  style: doStylegtd,
  onEachFeature: popupcall
}).addTo(map);
Q: Can I use a single stored procedure to operate on different schemas based on the executing user

I have thousands of schemas with the same set of tables. Each user has a default schema, but I don't want to create copies of a single stored procedure in every schema. Can a single stored procedure access the tables of the user-specific schema? I created the test project below to check this, but it throws an error saying the table is not found.

create Database PermissionsTest

-- Create user1 and UserSchema1 and assign permissions
CREATE LOGIN [User1] WITH PASSWORD=N'User1'
GO
CREATE USER [User1] FOR LOGIN [User1] WITH DEFAULT_SCHEMA=[UserSchema1]
GO
ALTER LOGIN [User1] Enable
go
CREATE SCHEMA [UserSchema1] AUTHORIZATION [User1]
GO

-- Create user2 and UserSchema2 and assign permissions
CREATE LOGIN [User2] WITH PASSWORD=N'User2'
GO
CREATE USER [User2] FOR LOGIN [User2] WITH DEFAULT_SCHEMA=[UserSchema2]
GO
ALTER LOGIN [User2] Enable
go
CREATE SCHEMA [UserSchema2] AUTHORIZATION [User2]
GO

-- Create StoredProcedures schema and the role to execute on this schema
CREATE ROLE [ExecuteSprocsOnStoredProcsSchema] AUTHORIZATION [dbo]
GO
ALTER AUTHORIZATION ON role::ExecuteSprocsOnStoredProcsSchema TO User1;
GO
Create Schema [StoredProcedures] AUTHORIZATION [ExecuteSprocsOnStoredProcsSchema]
Go
EXEC sp_addrolemember N'ExecuteSprocsOnStoredProcsSchema', N'User1'
GO
EXEC sp_addrolemember N'ExecuteSprocsOnStoredProcsSchema', N'User2'
Go
-- GRANT Execute ON SCHEMA :: StoredProcedures TO ExecuteSprocsOnStoredProcsSchema
GRANT CONNECT TO [User1]
Grant Connect to [User2]
grant SELECT ON SCHEMA::[dbo] TO [User1]
GO
grant SELECT ON SCHEMA::[dbo] TO [User2]
GO

----------------- Database data side changes ---------------

USE PermissionsTest
GO
CREATE TABLE [dbo].[StaticTable](
    [pkcol] [int] IDENTITY(1,1) NOT NULL,
    [col1] [int] NULL,
    PRIMARY KEY CLUSTERED ([pkcol])
)
GO
insert into StaticTable values (420);

-- drop table [UserSchema2].[Table1]
CREATE TABLE [UserSchema1].[Table1](
    [pkcol] [int] IDENTITY(1,1) NOT NULL,
    [col1] [varchar](max) NULL,
    PRIMARY KEY CLUSTERED ([pkcol])
)
GO
CREATE TABLE [UserSchema2].[Table1](
    [pkcol] [int] IDENTITY(1,1) NOT NULL,
    [col1] [varchar](max) NULL,
    PRIMARY KEY CLUSTERED ([pkcol])
)
GO
Create Procedure [StoredProcedures].[Insert_Table1]
as
begin
    Insert into table1 values (newID());
    Insert into table1 select col1 from StaticTable;
end

After setting up the system for the two users and the stored procedure Insert_Table1, when I try to execute the stored procedure as User1 I get this error:

Msg 208, Level 16, State 1, Procedure Insert_Table1, Line 6
Invalid object name 'table1'.

But when I run the queries separately as User1, they execute without any problem. I know that while executing, a stored procedure temporarily takes the permissions of the schema owner of that stored procedure, and I have already created the schema with authorization granted to the role assigned to User1:

Create Schema [StoredProcedures] AUTHORIZATION [ExecuteSprocsOnStoredProcsSchema]

Am I missing anything? Is there any way to make it work? Or is there any way that a single stored procedure can be used over multiple schemas?

A: You could probably use dynamic SQL:

CREATE PROCEDURE [StoredProcedures].[Insert_Table1]
AS
DECLARE @sql nvarchar(max)=N'
INSERT INTO table1 VALUES (NEWID());
INSERT INTO table1 SELECT col1 FROM StaticTable;
';
EXECUTE sys.sp_executesql @sql;

However, you should be aware that this is problematic for several reasons. The most obvious ones are:

If you're duplicating the entire schema for every user, there's probably something seriously wrong with your database design. Instead of giving users their own schemas, you should design permissions into the table (with a user column and row-level security).

It breaks ownership chaining (the user will now need permissions to the base tables, as opposed to inheriting rights from the stored procedure).

You need to manage SQL injection if your stored procedure accepts parameters that go into the SQL statement (preferably using parameterization).

Dynamic SQL has some other effects with regard to parameterization and performance that you should be aware of if you're dealing with larger OLTP-style loads.

A: I do not believe that what you are trying to accomplish is possible, at least not without an extra, over-complicated layer of dynamic SQL. Since you have already gone through the trouble of creating unique logins and schemas, you might want to consider one of the following approaches:

Accept the separate-schema concept and create the stored procedures (and functions?) in each schema. You mentioned (quote from a now-deleted duplicate question): "I want to have same set of stored procedures to be executed on each schemas to avoid the maintainance overhead for each schema." But what maintenance overhead? You already have the same tables in each schema, so your release/rollout process already needs to account for applying table/constraint/trigger/index changes to ALL schemas, so what harm is there in adding the "code" objects to the mix (and again, triggers are already copied to each schema since they exist per table)?

Move away from using schemas as the separation and create a database per client. This would allow all objects to be identical; you just need to deploy to each database instead of to each schema. But needing to do the release/rollout/deployment across a series of databases isn't really any different than needing to do it across a series of schemas (something that you already need to do in your current model).
Q: Vertical cusp vertical tangent question

Can anyone help me solve the following two problems?

Does the following function have a vertical tangent or a vertical cusp at $c=0$?

$$f(x)=3+x^{\frac{2}{5}}$$

I got for the derivative

$$f'(x)=\frac{2}{5\sqrt[5]{x^{3}}}$$

Now I think this would be a cusp, because $|f'(x)|$ approaches infinity as $x$ approaches zero from the left and from the right.

My second question is about the function $f(x)=|(x+8)^{1/3}|$. For the derivative I got

$$f'(x)=\frac{1}{3}(x+8)^{-\frac{2}{3}},\;x > -8$$

$$f'(x)=-\frac{1}{3}(x+8)^{-\frac{2}{3}},\;x < -8$$

Would this not be a vertical tangent because of the squared exponent?

A: Yes, your first function has a cusp at $x = 0$, because the derivative $f'(x)$ approaches $+\infty$ as $x \to 0^+$ and $-\infty$ as $x\to 0^-$. We can see this at the point $(0, 3)$:

$\quad f(x)=3+x^{\large\frac{2}{5}}$ (see blue curve)

Your second function also has a cusp, at $x = -8$: your own two-sided formula for the derivative shows $f'(x)\to+\infty$ as $x\to-8^+$ and $f'(x)\to-\infty$ as $x\to-8^-$.

$\quad f(x)=|(x+8)^{1/3}|$
Q: Get BaseName Of Element DTE _applicationObject Visual Studio Add-In

I'm trying to get the definition of an element with a Visual Studio add-in, but I can't reach the signature of the element on the selected line. I have a code block like:

Sub Test()
    Dim TestVariable As New TestClass
    TestVariable.Execute()
End Sub

I select the line TestVariable.Execute(). But this operation, which is in the add-in:

_applicationObject.ActiveDocument.ProjectItem.FileCodeModel. _
    CodeElementFromPoint(sel.AnchorPoint, vsCMElement.vsCMElementFunction)

returns Test(), which is the topmost element, but I need the innermost. I tried changing 'vsCMElementFunction' to attribute etc., but they all returned Nothing. Does anyone know another way to do this?

A: I found a solution to my problem, and I wanted to share it.

'When your cursor is on the element, call your add-in.
'In the Exec() function you can follow these steps:

'Go to the definition and copy the signature
_applicationObject.ExecuteCommand("Edit.GoToDefinition")
_applicationObject.ExecuteCommand("Edit.Copy")

'To close the Object Browser
Dim win As EnvDTE.Window
win = DirectCast(_applicationObject.ActiveWindow, EnvDTE.Window)

'And ta-da, the full signature of the element
basename = My.Computer.Clipboard.GetText()

'And here we get the base name
Dim tobjbasearr As String() = basename.Split(CChar("."))
Dim tobjbase As String = tobjbasearr(0)
{ "pile_set_name": "StackExchange" }
Q: Uniform convergence for $f_n(x)=x^n-x^{2n}$ The function $f_n(x)=x^n-x^{2n}$ converges to $f(x)=0$ on $(-1,1]$. Intuitively the convergence is not uniform on $(-1,1]$. How can I prove it? I tried using the definition $\lim \limits_{n\to\infty}\sup \limits_{ x\in (-1,1]}|f_n(x)-f(x)|$. The function is continuous and differentiable on $[-1,1]$, and $x=0,\;\left(\frac 1 2 \right)^{\frac 1 n}$ are the roots of the derivative. I found that the second derivative is negative at the second point, so $\sup \geq \frac 1 4$ and the function does not converge uniformly? A: Choose an arbitrarily large odd value of $n$. There exists some $0<x<1$ such that $x^n>\dfrac 12$. Then $$\begin{array}{rl}f_n(-x) &= (-x)^n - (-x)^{2n} \\ &= -\left(x^n + x^{2n}\right) \\ &\leq -\frac 34 \end{array}$$ So $f_n$ does not converge uniformly on $(-1,1]$.
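For completeness, the asker's supremum computation on the positive side can be finished explicitly:

```latex
f_n'(x) = n x^{n-1}\left(1 - 2x^{n}\right) = 0
\;\Longrightarrow\;
x_n = \left(\tfrac12\right)^{1/n},
\qquad
f_n(x_n) = \tfrac12 - \tfrac14 = \tfrac14,
```

so $\sup_{x\in(0,1)} |f_n(x)-0| = \frac14 \not\to 0$, which already rules out uniform convergence on $(0,1)\subset(-1,1]$; the odd-$n$ argument in the answer handles the negative part of the interval as well.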
{ "pile_set_name": "StackExchange" }
Q: Python - Access column based on another column value I have the following dataframe in python +-------+--------+ | Value | Number | +-------+--------+ | true | 123 | | false | 234 | | true | 345 | | true | 456 | | false | 567 | | false | 678 | | false | 789 | +-------+--------+ How do I conduct an operation which returns a list of all the 'Number' values for which Value == True? The output list expected from the above table is ['123', '345', '456'] Thanks in advance! A: df.loc[df['Value'],'Number'] should work, assuming the dtype of 'Value' is a real boolean: In [68]: df.loc[df['Value'],'Number'] Out[68]: 0 123 2 345 3 456 Name: Number, dtype: int64 The above uses boolean indexing; here the boolean values are a mask against the df. If you want a list: In [69]: df.loc[df['Value'],'Number'].tolist() Out[69]: [123, 345, 456] (Note that the values come back as ints, since 'Number' has dtype int64; map str over them if you need the strings shown in your expected output.)
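The same filter can be reproduced without pandas; this is a plain-Python sketch of what the boolean mask does, using the sample table from the question as two parallel lists:

```python
# The 'Value' and 'Number' columns from the question's table.
values = [True, False, True, True, False, False, False]
numbers = [123, 234, 345, 456, 567, 678, 789]

# Boolean indexing: keep each number whose mask entry is True.
result = [n for v, n in zip(values, numbers) if v]
print(result)  # [123, 345, 456]
```

This mirrors `df.loc[df['Value'], 'Number'].tolist()`: the boolean column acts as a row mask over the other column.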
{ "pile_set_name": "StackExchange" }
Q: Set state of object property I have this factory method that creates functions for each change handler: makeHandleChange = changedProperty => newVal => { const newState = { ...this.state, [changedProperty]: newVal }; this.setState({ ...newState }); }; It is being executed, for example, like this: onChange={this.makeHandleChange('phoneNumber')} How can I set the state of a property object using this function? For example, I have to setState the info.phoneNumber property: state = { info: { phoneNumber: '', }, } How can I do that with this makeHandleChange function? A: There are two ways to get this to work. One is to update your factory code so that it takes nested state properties into account. The other, and probably the easier, way is to pass in the entire nested object when you need to update the state. For example, if you have an input field that updates the phoneNumber property nested in this.state.info: <input type="text" onChange={e => this.makeHandleChange("info")({ ...this.state.info, phoneNumber: e.target.value})} value={this.state.info.phoneNumber} /> In the object you pass in to the function, make sure you spread this.state.info (i.e. ...this.state.info) before setting the new value, so that none of the other nested properties gets overwritten or removed.
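React aside, the copy-then-override semantics of the spread operator are easy to see with plain dictionaries. Here is a Python sketch of the same nested immutable update; the extra "email" field is invented purely to show that sibling properties survive the merge:

```python
# State shaped like the question's, plus a hypothetical sibling field.
state = {"info": {"phoneNumber": "", "email": "a@b.c"}}

def handle_change(state, changed_property, new_val):
    # Copy the outer dict, then copy-and-override the nested one,
    # mirroring { ...this.state.info, phoneNumber: newVal } in JS.
    return {
        **state,
        changed_property: {**state[changed_property], "phoneNumber": new_val},
    }

new_state = handle_change(state, "info", "555-0100")
print(new_state["info"])  # phoneNumber updated, email preserved
print(state["info"])      # the original state is untouched
```

The key point, same as in the answer: spread the old nested object first, then set the new value, or the other nested keys are dropped.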
{ "pile_set_name": "StackExchange" }
Q: Self join vs group by when counting duplicates I'm trying to count duplicates based on a column of a table in an Oracle Database. This query using group by: select count(dockey), sum(total) from ( select doc1.xdockeyphx dockey, count(doc1.xdockeyphx) total from ecm_ocs.docmeta doc1 where doc1.xdockeyphx is not null group by doc1.xdockeyphx having count(doc1.xdockeyphx) > 1 ) Returns count = 94408 and sum(total) = 219330. I think this is the correct value. Now, trying this other query using a self join: select count(distinct(doc1.xdockeyphx)) from ecm_ocs.docmeta doc1, ecm_ocs.docmeta doc2 where doc1.did > doc2.did and doc1.xdockeyphx = doc2.xdockeyphx and doc1.xdockeyphx is not null and doc2.xdockeyphx is not null The result is also 94408, but this one: select count(*) from ecm_ocs.docmeta doc1, ecm_ocs.docmeta doc2 where doc1.did > doc2.did and doc1.xdockeyphx = doc2.xdockeyphx and doc1.xdockeyphx is not null and doc2.xdockeyphx is not null is returning 1567466, which I think is wrong. The column I'm using to find duplicates is XDOCKEYPHX, and DID is the primary key of the table. Why is the value sum(total) different from the result of the last query? I can't see why the last query is returning more duplicate rows than expected. A: Thanks to @vogomatix, since his answer helped me understand my problem and where I was wrong. The last query actually results in a number of rows showing each pair of duplicates with no repetitions, but it's not suitable for counting them like the sum(total) of the first one. Given this case: DID | XDOCKEYPHX --------------- 1 | 1 2 | 1 3 | 1 4 | 2 5 | 2 6 | 3 7 | 3 8 | 3 9 | 3 The first inner query would return XDOCKEYPHX | TOTAL ------------------ 1 | 3 2 | 2 3 | 4 And the full query would give count = 3, meaning there are 3 key values with duplicates, and the total number of duplicated documents sum(total) = 9.
Now, for the second and third queries, if we use just a select *, the result will be something like: DID_1 | XDOCKEYPHX | DID_2 -------------------------- 2 | 1 | 1 3 | 1 | 1 3 | 1 | 2 5 | 2 | 4 7 | 3 | 6 8 | 3 | 6 8 | 3 | 7 9 | 3 | 6 9 | 3 | 7 9 | 3 | 8 So now the second query, select count(distinct(xdockeyphx)), will give the correct value 3, but the third query, select count(*), will give 10, which, well, is incorrect for me, since I wanted to know the sum of duplicates for each DID (9). What the third query gives you is all the pairs of duplicates, so you can then compare them or whatever. My misunderstanding was thinking that if I counted all the rows in the third query, I should get the sum of duplicates for each DID (the sum(total) of the first query), which was a wrong idea, and now I realize it.
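The relationship between the grouped totals and the self-join pair count can be checked on the toy data from the answer; this is a small pure-stdlib Python sketch, no database needed:

```python
from collections import Counter
from math import comb

# The XDOCKEYPHX column from the answer's example (DIDs 1..9).
keys = [1, 1, 1, 2, 2, 3, 3, 3, 3]

# Group-by with having count > 1, as in the first query's subquery.
dup_groups = {k: c for k, c in Counter(keys).items() if c > 1}
print(len(dup_groups))           # 3  -> count(dockey) in the first query
print(sum(dup_groups.values()))  # 9  -> sum(total)

# The self-join with doc1.did > doc2.did emits one row per unordered pair
# of duplicates, i.e. C(c, 2) rows per group of size c.
pairs = sum(comb(c, 2) for c in dup_groups.values())
print(pairs)                     # 10 -> count(*) in the third query
```

So count(*) over the self-join counts pairs, not duplicated rows, which is exactly why 10 differs from 9 here (and 1567466 from 219330 on the real table).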
{ "pile_set_name": "StackExchange" }
Q: With Protocol Buffers, is it safe to move an enum from inside a message to outside the message? I've run into a use case where I'd like to move an enum declared inside a protocol buffer message to outside the message, so that other messages can use the same enum. That is, I'm wondering if there are any issues moving from this message Message { enum Enum { VALUE1 = 1; VALUE2 = 2; } optional Enum enum_value = 1; } to this enum Enum { VALUE1 = 1; VALUE2 = 2; } message Message { optional Enum enum_value = 1; } Would this cause any issues de-serializing data created with the first protocol buffer definition into the second? A: It doesn't change the serialization data at all - the location/name of the enum is irrelevant for the actual data, since it just stores the integer value. What might change is how some languages consume the enum, i.e. how they qualify it: is it X.Y.Foo, X.Foo, or just Foo? Note that since enums follow C++ naming/scoping rules, some things (such as conflicts) aren't an issue; but it may impact some languages as consumers. So: if you're the only consumer of the .proto, you're absolutely fine here. If you have shared the .proto with other people, it may be problematic to change it unless they are happy to update their code to match any new qualification requirements.
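The "serialization doesn't change" point can be made concrete: on the wire, `enum_value = VALUE2` is just field number 1 carrying the varint 2, regardless of where `Enum` is declared. Here is a hand-rolled sketch of that encoding; it handles only small field numbers and values, not general multi-byte varints, and is not the real protobuf library:

```python
def encode_small_varint_field(field_number, value):
    # Wire type 0 (varint): the tag byte is (field_number << 3) | 0.
    # Values below 128 fit in a single varint byte.
    assert field_number < 16 and value < 128
    return bytes([(field_number << 3) | 0, value])

# Message { optional Enum enum_value = 1; } with enum_value = VALUE2 (= 2):
payload = encode_small_varint_field(1, 2)
print(payload.hex())  # 0802 - identical whether Enum is nested or top-level
```

Only the field number and the integer value appear in the bytes, which is why relocating the enum declaration is wire-compatible.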
{ "pile_set_name": "StackExchange" }
Q: How to have 2 Floating Action Buttons fixed in some specific positions with Flutter? I am developing an app with Flutter, and I want to have 2 FABs on the main screen: one in the BottomAppBar, which I have done. Scaffold( appBar: AppBar(title: Text("My App")), floatingActionButtonLocation: FloatingActionButtonLocation.centerDocked, floatingActionButton: FloatingActionButton( child: Icon(Icons.add), onPressed: ()=>{} ), bottomNavigationBar: BottomAppBar( color: Style.darkPrimaryColor, shape: CircularNotchedRectangle(), child: Container( height: 50, child: Row( children: <Widget>[] ), ), ), ) I want to have a second FAB positioned and fixed at the bottom right of the screen, in addition to the centered FAB, like the following mockup: Is there any way to achieve that? A: I don't think there is any built-in way of doing this with the scaffold, but it should be easy enough to just use a stack as your scaffold body, since you can add a floating action button anywhere. However, if you have two, you will need to change the hero tag on one of them to avoid errors when moving to/from that page. Scaffold( appBar: AppBar(title: Text("My App")), floatingActionButtonLocation: FloatingActionButtonLocation.centerDocked, floatingActionButton: FloatingActionButton( child: Icon(Icons.add), onPressed: ()=>{} ), bottomNavigationBar: BottomAppBar( color: Style.darkPrimaryColor, shape: CircularNotchedRectangle(), child: Container( height: 50, child: Row( children: <Widget>[] ), ), ), body: Stack( alignment: Alignment.bottomRight, children: [ Container( child: //Use this as if it were the body ), //Setup the position however you like Padding( padding: const EdgeInsets.all(16.0), child: FloatingActionButton( heroTag: null, //Must be null to avoid hero animation errors child: Icon(Icons.add), onPressed: () => {}, ), ), ], ), )
{ "pile_set_name": "StackExchange" }
Q: animating one color of a gradient continuously with css Saw some similar questions here but can't seem to figure out this one in particular. I have a gradient that's black on top and blue on the bottom. I want the blue to fade to several different colours on a continuous loop, but the black to remain in place, like this: http://i.stack.imgur.com/6k9EN.gif Saw some demos like this one where the gradient animates continuously, but it's just a large gradient being shifted vertically with keyframes, which wouldn't work for me. Wondering if anyone could recommend a solution. Here's the unanimated code so far: http://jsfiddle.net/ssccdd/9qA3L/ body{ background: -moz-linear-gradient(top, #000000 80%, #3300FF 100%); background: -webkit-gradient(linear, left top, left bottom, color-stop(80%,#000000), color-stop(100%,#3300FF)); background: -webkit-linear-gradient(top, #000000 80%,#3300FF 100%); background: -o-linear-gradient(top, #000000 80%,#3300FF 100%); background: -ms-linear-gradient(top, #000000 80%,#3300FF 100%); background: linear-gradient(to bottom, #000000 80%,#3300FF 100%); filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#000000', endColorstr='#3300FF',GradientType=0 ); background-attachment: fixed; } A: You might not be able to animate gradient colors, but you can animate the background color. So, in your case, have an unchanging linear gradient going from black to transparent, then change the background color using CSS transitions.
Here's a working example: http://jsfiddle.net/qSJa8/ I had to include javascript to switch from one color to another: var classes = [ "stop1", "stop2", "stop3", "stop4" ]; var stopIndex = 0; var lastClass = classes[0]; window.setInterval(function() { ++stopIndex; if(stopIndex >= classes.length) { stopIndex = 0; } var newClass = classes[stopIndex]; $('#sampleDiv').removeClass(lastClass).addClass(newClass); lastClass = newClass; }, 2000); And the css: #sampleDiv { height:300px; background-image: linear-gradient(180deg, black, black 50%, transparent); transition: background-color 2s; } .stop1 { background-color:blue; } .stop2 { background-color:purple; } .stop3 { background-color:green; } .stop4 { background-color:yellow; }
{ "pile_set_name": "StackExchange" }
Q: How to fix or replace the timer button of a microwave oven? I have a microwave oven whose timer button is broken. I would like to fix it (by using glue?) or replace it (with a similar "key"). The oven is a Samsung M1714. However, I am afraid of making things worse if I try to repair it, and I do not know any shop where I could buy something similar. I noticed there is the number 3 written on the inside of the broken piece. Do you know an online shop for this kind of thing? Here are several shots: A: I have repaired dials similar to this in the past. I have found that if you apply some CA glue (super glue) or epoxy to the crack, and then wrap the outside of the shaft tightly with electrical tape, it will hold up under most circumstances. Failing that, it appears that you can buy the replacement knob from several online suppliers. A: You can actually repair this knob with resin + fiberglass. You can see the final result here. Hope it helps.
{ "pile_set_name": "StackExchange" }
Q: How to switch from MariaDB to MySQL in WAMP server? I installed WAMP server on my PC, and by default WAMP server is connected to MariaDB, as shown below. How can I switch to MySQL? Thank you in advance for your help. A: To pick MySQL or MariaDB, right-click on the wampmanager icon in the system tray and you should see this menu. Select the components as I did; please check the second screenshot.
{ "pile_set_name": "StackExchange" }