Q: How to deal with the presence of variables in message bundles (localization) It's the first time I'm developing an app that needs to be localized into more than 20 languages. The problem is that there are a lot of messages that contain a variable. The value can't be hardcoded, because if it changed, the administrator would have to change it in 20+ property files. Are there any known ways to deal with this? I'm currently using the Spring framework. A: One approach could be to replace the variables with markers and use the String.format method ( http://download.oracle.com/javase/1.5.0/docs/api/java/lang/String.html#format(java.lang.String,%20java.lang.Object...) ) or another kind of Formatter to fill them in accordingly. Whilst I'm just getting into Spring myself and suspect that it could provide a more elegant solution, currently I would use string formatters to replace placeholders with variable values at runtime.
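As a hedged sketch of that placeholder idea (in Python for brevity; the bundle contents, keys, and variable names are invented for illustration):

```python
# Hypothetical message bundles: only templates live here, never values.
bundles = {
    "en": {"greeting": "Hello {name}, you have {count} new messages."},
    "de": {"greeting": "Hallo {name}, Sie haben {count} neue Nachrichten."},
}

def localize(locale, key, **variables):
    # Look up the message template for the locale and fill its
    # placeholders at runtime, much like String.format does in Java.
    return bundles[locale][key].format(**variables)

print(localize("de", "greeting", name="Anna", count=3))
# Hallo Anna, Sie haben 3 neue Nachrichten.
```

Because the variable values are injected at runtime, changing a value never requires touching the 20+ property files; only the templates are translated.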
{ "pile_set_name": "StackExchange" }
Q: Populate span tag with data from a json file On THIS page, I have listed some dummy estates and hard-coded, for each one, a span (with the text "VEZI TELEFON") that when clicked, reveals a phone number. I want every phone number to be retrieved from a JSON (or PHP) file called telefoane.json that has the content: { "telefoane":[{ "id":"1", "tel":"0743127315" }, { "id":"2", "tel":"072245875" }, { "id":"3", "tel":"0756129458" }, { "id":"4", "tel":"0725127216" }, { "id":"5", "tel":"0723127322" }] } My code, that can be seen below does not output the desired result: $.ajax({ url: 'telefoane.json', dataType: 'json', success: function(data){ $.each(data, function(key, val){ console.log(key + ":" + val); }); } }); The output is, unfortunately: telefoane:[object Object],[object Object],[object Object],[object Object],[object Object] What am I doing wrong? Thank you! EDIT: $.ajax({ url: 'telefoane.json', dataType: 'json', success: function(){ $.each(data.telefoane, function() { console.log(this.id + ": " + this.tel); }); } }); A: Your loop will give you the objects of data.telefoane. So you need to access the content by the property names. var data = { "telefoane": [ { "id": "1", "tel": "0743127315" }, { "id": "2", "tel": "072245875" }, { "id": "3", "tel": "0756129458" }] }; $.each(data.telefoane, function(i, object) { console.log(object.id + ": " + object.tel); }); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> Or use this: var data = { "telefoane": [ { "id": "1", "tel": "0743127315" }, { "id": "2", "tel": "072245875" }, { "id": "3", "tel": "0756129458" }] }; $.each(data.telefoane, function() { console.log(this.id + ": " + this.tel); }); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
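For readers comparing behaviors, the same distinction can be sketched in Python: iterating the top-level object yields the key and the whole array (the analogue of the `[object Object]` output), while iterating the array under the key yields each entry:

```python
import json

# The same shape of data as telefoane.json (abbreviated).
payload = json.loads(
    '{"telefoane": [{"id": "1", "tel": "0743127315"},'
    ' {"id": "2", "tel": "072245875"}]}'
)

# Top-level iteration: one key, one whole list -- not the entries.
for key, val in payload.items():
    print(key, val)

# Iterating the array under the key gives each phone entry.
numbers = [entry["id"] + ": " + entry["tel"] for entry in payload["telefoane"]]
print(numbers)  # ['1: 0743127315', '2: 072245875']
```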
Q: R: subset data according to a list I am working on a Twitter dataset and I haven't figured out how to subset my data according to a list of hashtags. df: rowID Hashtags 1 ogretmenemayistamujdehazirandaatama,onlarkonusurakpartiyapar 2 onlarkonusurakpartiyapar,halkinbasbakanitokatta 3 kurdish,mahabad,justiceforfarinaz,kurdistan 4 onlarkonusurakpartiyapar 5 anfal,halabja,kurdistan,kobani 6 onlarkonusurakpartiyapar 7 kurdistan The hashtags to match are in a character vector, hashtag_list: "onlarkonusurakpartiyapar" "kurdistan" I tried this code but it didn't work for me: new_df=df[df$Hashtags %in% hashtag_list,] It only gives the subset for the "onlarkonusurakpartiyapar" hashtag. I know that it looks simple, but I couldn't figure it out even though I have looked at all the posts on the site. Thanks for your help. A: Here is an approach that modifies yours by splitting each string on "," into individual hashtags, and treating a row as a match if any of those hashtags are in your list. Your Data df <- data.frame( rowID=1:8, Hashtags=c( "ogretmenemayistamujdehazirandaatama,onlarkonusurakpartiyapar", "onlarkonusurakpartiyapar,halkinbasbakanitokatta", "kurdish,mahabad,justiceforfarinaz,kurdistan", "onlarkonusurakpartiyapar", "anfal,halabja,kurdistan,kobani", "onlarkonusurakpartiyapar", "kurdistan", "this,willnot,befound" ), stringsAsFactors=F ) hashtag_list <- c("onlarkonusurakpartiyapar", "kurdistan") The Solution find_ht <- function(hashtags, hashtag_list){ sapply(strsplit(hashtags, split=","), function(x)any(x%in%hashtag_list)) } Implementation find_ht(hashtags=df$Hashtags, hashtag_list=hashtag_list) which returns ... [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE FALSE Edit To perform the subset, you simply need to 
sub.index <- find_ht(hashtags=df$Hashtags, hashtag_list=hashtag_list) df[sub.index,] which returns rowID Hashtags 1 1 ogretmenemayistamujdehazirandaatama,onlarkonusurakpartiyapar 2 2 onlarkonusurakpartiyapar,halkinbasbakanitokatta 3 3 kurdish,mahabad,justiceforfarinaz,kurdistan 4 4 onlarkonusurakpartiyapar 5 5 anfal,halabja,kurdistan,kobani 6 6 onlarkonusurakpartiyapar 7 7 kurdistan Or, if you want the indices do which(sub.index). To Specifically subset the rowID's only, do df[sub.index,"rowID"]. In this case, both of those return [1] 1 2 3 4 5 6 7
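For comparison, the same split-and-match idea can be sketched outside R; here it is in Python, with a few rows mirroring the example data above:

```python
rows = [
    "ogretmenemayistamujdehazirandaatama,onlarkonusurakpartiyapar",
    "kurdish,mahabad,justiceforfarinaz,kurdistan",
    "this,willnot,befound",
]
hashtag_list = {"onlarkonusurakpartiyapar", "kurdistan"}

def find_ht(hashtags, wanted):
    # Split each comma-separated string into individual hashtags and
    # flag the row if any of them appears in the wanted set.
    return [any(tag in wanted for tag in h.split(",")) for h in hashtags]

mask = find_ht(rows, hashtag_list)
matching = [row for row, keep in zip(rows, mask) if keep]
print(mask)  # [True, True, False]
```

The key point is identical to the R answer: match against the split pieces, not against the whole comma-joined string.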
Q: Lower resistance resistor in place of a higher resistance one I drew this small circuit from a design a friend gave me to blink an LED, but I am missing a certain component. I am missing a 12KΩ resistor. Can I get away with a 10KΩ resistor in its place? A: Yeah, it will work, just with slightly different timing characteristics. See https://en.wikipedia.org/wiki/555_timer_IC#Astable for how to calculate the frequency, on time, and off time.
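Following the astable formula referenced above, a quick calculation shows how much the timing shifts. The R1 and C values below are assumptions for illustration only; they are not taken from the original schematic:

```python
def astable_freq(r1, r2, c):
    # Standard 555 astable approximation: f ≈ 1.44 / ((R1 + 2*R2) * C)
    return 1.44 / ((r1 + 2 * r2) * c)

r1, c = 1_000, 10e-6  # assumed values, not from the question's schematic
f_12k = astable_freq(r1, 12_000, c)  # with the specified 12k resistor as R2
f_10k = astable_freq(r1, 10_000, c)  # with the substituted 10k resistor
print(f"12k: {f_12k:.2f} Hz, 10k: {f_10k:.2f} Hz")
```

With these assumed values the LED blinks slightly faster with the 10 kΩ part (about 6.86 Hz vs 5.76 Hz), which matches the answer's point: the circuit still works, just with different timing.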
Q: foreach through div I am having some trouble doing a loop through the following construct: <div id="items"> <p id="test1"> <select name="1" id="1"><option>A</option><option selected>B</option></select> <input type="text" name="to" id="to"> <input type="text" name="away" id="awy"> </p> <p id="test2"> <select name="2" id="2"><option>A</option><option selected>B</option></select> <input type="text" name="to" id="to"> <input type="text" name="away" id="awy"> </p> <p id="test3"> <select name="3" id="3"><option>A</option><option selected>B</option></select> <input type="text" name="to" id="to"> <input type="text" name="away" id="awy"> </p> </div> I need to run through test1, test2 and test3 and read the selected option and the values of the text fields (select, to, away). Doing this when there is only one element in, for example, test1 is no problem with jQuery: $('#items').children('p').each(function () {...} But if I add the two text fields and want to do this for each test (1-3), I have no idea how... Any ideas? Thanks! A: IDs should represent unique items within the DOM. Use classes instead, like so: <input type="text" name="to" class="to"> <input type="text" name="away" class="awy">
Q: Different value between the console and the view I have a "wtf" problem that I don't understand. I have a view where the user can see all his pictures. On each picture, there's an icon to delete the picture. This icon opens a modal where there's a link to delete the picture. But every link goes to delete the first picture in the view. The weirdest part is that: = picture_path(picture) # => path of the first picture - puts picture_path(picture) # => path of the right picture How is this possible? This is a part of my views: _gallery.slim .row - pictures.each do |picture| .col-xs-10.col-xs-offset-1.col-sm-6.col-sm-offset-0.col-md-4.col-lg-3 # Some code .caption id="pictures-#{picture.id}" .row - if current_user == @user = render 'pictures/form_position', picture: picture # => Here we go _form_position.slim .caption-edit.d-block .col-lg-6.col-md-6.col-xs-6 div span> Position span.hide-if-edit => picture.position span.icon.icon-edit.picture-position-icon.hide-if-edit span.icon.icon-delete.hidden.picture-position-icon.position-form data-toggle= 'modal' data-target= '#delete-picture-modal' = render 'pictures/destroy_picture_modal', picture: picture # => My modal .col-lg-6.col-md-6.col-xs-6.position-form.hidden = simple_form_for picture, remote: true do |f| = hidden_field_tag(:position) = f.input :position, wrapper: :vertical_input_group, label: false do = f.input_field :position, value: picture.position, class: 'form-control' .input-group-btn = button_tag type: 'submit', class: 'btn btn-primary' do i.icon.icon-check _destroy_picture_modal.slim .modal.fade#delete-picture-modal tabindex= '-1' role= 'dialog' aria-hidden= 'true' .modal-dialog role= 'document' .modal-content .modal-header button type="button" class="close" data-dismiss="modal" &times; h5.modal-title Delete the picture .modal-body p Are you sure? 
.modal-footer = button_tag 'Cancel', type: 'button', class: 'btn btn-secondary', data: { dismiss: 'modal' } = link_to picture_path(picture), method: :delete do # => Where there's a problem = button_tag 'Delete', type: 'button', class: 'btn btn-primary' I remind you: = picture_path(picture) # => return the path of the first picture - puts picture_path(picture) # => return the right path Do you see something wrong in my code? A: You are using one id identifier for multiple modals: data-target= '#delete-picture-modal' Multiple modals are being rendered, but you are always targeting the "first" one when clicking the delete icon. You need to assign a unique id to each modal, and target it accordingly. (Or, populate the modal via some asynchronous javascript.)
Q: How does jekyll run as an executable, and how does it start a server on port 4000? How does the jekyll gem work? It somehow creates a command-line executable and also runs a server on port 4000, the way Rails does. Where in the code does it implement this functionality? https://github.com/mojombo/jekyll/tree/master/lib/jekyll A: Jekyll uses WEBrick. WEBrick also happens to be used by Rails, but it is a generic Ruby HTTP server. The functionality is implemented in bin/jekyll.
Q: [http]How to identify end of stream when content-length is not specified? First let me show my code. http=require("http"); fs=require("fs"); var server=http.createServer(function(req,res){ var readStream=fs.createReadStream("1.jpg"); readStream.on("data",function(data){ res.write(data); }); readStream.on("end",function(data){ res.write("this string seems not to be sent","utf8"); res.end("end","utf8"); }); }); server.listen(3000); I created a readStream of picture 1.jpg and then sent data stream. After the "end" event was fired, I sent a string "this string seems not to be sent". I didn't specify a content-length in headers. On the client side, I actually got 1.jpg correctly. But I didn't receive the string. I guess there must be something that marks the end of stream. If so, what the mark is? how it works? I know that assigning transfer-encoding with "chunked" is a way to send data whose length is uncertain, but my safari shows the response headers are: Connection keep-alive Transfer-Encoding Identity A: On the client side, I actually got 1.jpg correctly. But I didn't receive the string. In fact, the string is sent. To confirm this: $ echo '(Contents of a JPEG file.)' >1.jpg $ curl -i http://localhost:3000/ HTTP/1.1 200 OK Date: Sat, 23 May 2015 08:01:48 GMT Connection: keep-alive Transfer-Encoding: chunked (Contents of a JPEG file.) this string seems not to be sentend Your browser (or image viewer) understands the format of a JPEG, so it ignores the extra string at the end. However, it is sent. I guess there must be something that marks the end of stream. Yes. The data is delimited by chunked transfer encoding markers. curl doesn't show them by default, but they're present. To see them in the response: $ curl -i --raw http://localhost:3000/ HTTP/1.1 200 OK Date: Sat, 23 May 2015 08:23:02 GMT Connection: keep-alive Transfer-Encoding: chunked 1b (Contents of a JPEG file.) 20 this string seems not to be sent 3 end 0
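To make the end-of-stream marker concrete, here is a small hand-rolled parser for the chunked framing shown in the curl --raw output above. This is a minimal sketch, not a spec-complete implementation (it ignores chunk extensions and trailers):

```python
def parse_chunked(raw: bytes) -> bytes:
    # Each chunk is "<hex size>\r\n<size bytes of data>\r\n";
    # a chunk of size 0 marks the end of the stream.
    body, pos = b"", 0
    while True:
        nl = raw.index(b"\r\n", pos)
        size = int(raw[pos:nl], 16)
        if size == 0:
            return body
        start = nl + 2
        body += raw[start:start + size]
        pos = start + size + 2  # skip the data and its trailing CRLF

# The tail of the response shown above ("20" hex == 32 bytes):
raw = (b"20\r\nthis string seems not to be sent\r\n"
       b"3\r\nend\r\n"
       b"0\r\n\r\n")
print(parse_chunked(raw))  # b'this string seems not to be sentend'
```

The zero-length chunk is exactly the "mark of the end" the question asks about; a browser reassembling the body simply concatenates the chunk payloads, then treats trailing non-JPEG bytes as junk.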
Q: CSS Scrollable content within flexbox child I'm trying to design the layout for a single-page application. The layout should fill the full page height and width without scrolling. Here is the layout design: All blocks contain a bit of information; they all have a fixed height; there is only one block that contains a large list of data, and it should be scrollable (it is purple in the picture). Currently, I'm using flexible boxes for all UI blocks, but I can't make the purple block scrollable. How can I make the purple block remain flexible (i.e. occupy all available space within the blue block) and make its content scrollable (i.e. the content should not overflow the purple block's body)? Maybe there is some better solution (I suspect flexible boxes serve somewhat different purposes)? A: My goal was not to use extra blocks (which easily turn my markup into div soup); however, in this case I had to. The solution is to use CSS's position property; the original idea and technical details can be found here. To achieve scrolling content I turned my purple block into a wrapper: it is a child of a column-oriented flex container (the blue block) with its flex-grow set to 1, so it occupies all available space within its parent. The wrapper is relatively positioned. Inside this block I have another one, absolutely positioned and sized by the top, bottom, left, and right properties. That is the block where the content lives, and it has overflow set to auto. I have no solution without wrapping. 
Here is live demo: var scrollElem = document.getElementById('scroll'); for (var i = 0; i < 100; i++) { (function() { var e = document.createElement('div'); e.classList.add('item'); e.innerText = "Content piece"; scrollElem.appendChild(e); }()); } HTML { width: 100vw; height: 100vh; font-family: sans-serif; font-weight: 700; } BODY, #main { width: 100%; height: 100%; margin: 0; font-size: 0.75em; } /*||||*/ #scroll { top: 0; bottom: 0; left: 0; right: 0; overflow: auto; text-align: left; } #footer { height: 30px; } /*||||*/ #main { background-color: rgb(255, 215, 125); } #upper { background-color: rgb(170, 215, 125); } #bottom { background-color: rgb(200, 50, 180); } #left { box-shadow: inset -3px 0 15px 0 rgba(60, 5, 20, 0.5); } #right { box-shadow: inset 3px 0 15px 0 rgba(60, 5, 20, 0.5); } #footer { box-shadow: inset 0 3px 15px 0 rgba(60, 5, 20, 0.5); } #left, #right, #footer { color: rgba(60, 5, 20, 0.8); } /*||||*/ .D-f { display: flex; } .A-I-c { align-items: center; } .F-D-c { flex-direction: column; } .F-D-r { flex-direction: row; } .J-C-c { justify-content: center; } .J-C-sa { justify-content: space-around; } .J-C-sb { justify-content: space-between; } .F-G-1 { flex-grow: 1; } .F-G-2 { flex-grow: 2; } .F-G-3 { flex-grow: 3; } .F-G-4 { flex-grow: 4; } .F-G-5 { flex-grow: 5; } .F-W-nw { flex-wrap: nowrap; } .F-W-w { flex-wrap: wrap; } .Pos { position: relative; } .Pos-a { position: absolute; } /*||||*/ #upper, #bottom { padding: 1em; text-align: center; } #upper .popout { display: none; } #upper:hover .popout { display: initial; } ASIDE SPAN { font-size : 0.6em; } <div id="main" class="D-f F-D-c J-C-sb"> <div id="page" class="F-G-1 D-f J-C-sb"> <aside id="left" class="F-G-3 D-f J-C-c A-I-c"> Left aside <br/> <span><sup>3</sup>/<sub>12</sub></span> </aside> <div id="content" class="F-G-5 D-f F-D-c"> <div id="upper"> <h3>Upper block</h3> <div class="popout">Ta-Da!</div> </div> <div id="bottom" class="F-G-1 D-f F-D-c"> <h3 class="header">Header</h3> <div 
class="misc">Misc</div> <div id="scroll-wrap" class="Pos F-G-1"> <div id="scroll" class="Pos-a"></div> </div> </div> </div> <aside id="right" class="F-G-2 D-f J-C-c A-I-c"> Right aside <br/> <span><sup>2</sup>/<sub>12</sub></span> </aside> </div> <div id="footer" class="D-f A-I-c J-C-c">Footer</div> </div>
Q: call of overloaded is ambiguous, how to deal with that? I really don't understand this. I thought that the compiler first evaluates what is in the braces and then gives the result to the most appropriate function. Here it looks like it gives the function an initializer list to deal with it... #include <string> #include <vector> using namespace std; void func(vector<string> v) { } void func(vector<wstring> v) { } int main() { func({"apple", "banana"}); } error: <stdin>: In function 'int main()': <stdin>:11:27: error: call of overloaded 'func(<brace-enclosed initializer list>)' is ambiguous <stdin>:11:27: note: candidates are: <stdin>:6:6: note: void func(std::vector<std::basic_string<char> >) <stdin>:8:6: note: void func(std::vector<std::basic_string<wchar_t> >) Why isn't my func(vector<string> v) overload called, and can I make it so? A: This one was subtle. std::vector has a constructor taking two range iterators. It is a template constructor (defined in 23.6.6.2 of the C++11 Standard): template<typename InputIterator> vector(InputIterator first, InputIterator last, const allocator_type& a = allocator_type()); Now the constructor of std::vector<wstring> accepting an initializer_list is not a match for the implicit conversion in your function call (const char* and string are different types); but the one above, which is of course included both in std::vector<string> and in std::vector<wstring>, is a potentially perfect match, because InputIterator can be deduced to be const char*. Unless some SFINAE technique is used to check whether the deduced template argument does indeed satisfy the InputIterator concept for the vector's underlying type, which is not the case here, this constructor is viable. But then again, both std::vector<string> and std::vector<wstring> have a viable constructor which performs the conversion from the braced initializer list: hence, the ambiguity. 
So the problem is in the fact that although "apple" and "banana" are not really iterators(*), they end up being seen as such. Adding one argument "joe" to the function call fixes the problem by disambiguating the call, because that forces the compiler to rule out the range-based constructors and choose the only viable conversion (initializer_list<wstring> is not viable because const char* cannot be converted to wstring). *Actually, they are pointers to const char, so they could even be seen as constant iterators for characters, but definitely not for strings, as our template constructor is willing to think.
Q: construct tensorflow bijectors Error I am new to use tensorflow. And I want to construct a bijector with the following properties: It takes a n dimensional probability distribution p(x1, x2, ..., xn), and it only transforms two certain dimensions i and j, such that xi' = xi, xj' = xj*exp(s(xi)) + t(xj), where s and t are two functions realized using neural networks. It outputs p(x1, x2, ..., xi', .., xj', .., xn). I have a basic code looks like below: def net(x, out_size, block_w_id, block_d_id, layer_id): x = tf.contrib.layers.fully_connected(x, 256, reuse=tf.AUTO_REUSE, scope='x1_block_w_{}_block_d_{}_layer_{}'.format(block_w_id, \ block_d_id,\ layer_id)) x = tf.contrib.layers.fully_connected(x, 256, reuse=tf.AUTO_REUSE, scope='x2_block_w_{}_block_d_{}_layer_{}'.format(block_w_id,\ block_d_id,\ layer_id)) y = tf.contrib.layers.fully_connected(x, out_size, reuse=tf.AUTO_REUSE, scope='y_block_w_{}_block_d_{}_layer_{}'.format(block_w_id,\ block_d_id,\ layer_id)) # return layers.stack(x, layers.fully_connected(reuse=tf.AUTO_REUSE), [512, 512, out_size]) return y class NVPCoupling(tfb.Bijector): """NVP affine coupling layer for 2D units. """ def __init__(self, input_idx1, input_idx2, block_w_id = 0, block_d_id = 0, layer_id = 0, validate_args = False\ , name="NVPCoupling"): """ NVPCoupling only manipulate two inputs with idx1 & idx2. 
""" super(NVPCoupling, self).__init__(\ event_ndims = 1, validate_args = validate_args, name = name) self.idx1 = input_idx1 self.idx2 = input_idx2 self.block_w_id = block_w_id self.block_d_id = block_d_id self.layer_id = layer_id # create variables tmp = tf.placeholder(dtype=DTYPE, shape = [1, 1]) self.s(tmp) self.t(tmp) def s(self, xd): with tf.variable_scope('s_block_w_id_{}_block_d_id_{}_layer_{}'.format(self.block_w_id,\ self.block_d_id,\ self.layer_id),\ reuse = tf.AUTO_REUSE): return net(xd, 1, self.block_w_id, self.block_d_id, self.layer_id) def t(self, xd): with tf.variable_scope('t_block_w_id_{}_block_d_id_{}_layer_{}'.format(self.block_w_id,\ self.block_d_id,\ self.layer_id),\ reuse = tf.AUTO_REUSE): return net(xd, 1, self.block_w_id, self.block_d_id, self.layer_id) def _forward(self, x): x_left, x_right = x[:, self.idx1:(self.idx1 + 1)], x[:, self.idx2:(self.idx2 + 1)] y_right = x_right * tf.exp(self.s(x_left)) + self.t(x_left) output_tensor = tf.concat([ x[:,0:self.idx1], x_left, x[:, self.idx1+1:self.idx2]\ , y_right, x[:, (self.idx2+1):]], axis = 1) return output_tensor def _inverse(self, y): y_left, y_right = y[:, self.idx1:(self.idx1 + 1)], y[:, self.idx2:(self.idx2 + 1)] x_right = (y_right - self.t(y_left)) * tf.exp(-self.s(y_left)) output_tensor = tf.concat([ y[:, 0:self.idx1], y_left, y[:, self.idx1+1 : self.idx2]\ , x_right, y[:, (self.idx2+1):]], axis = 1) return output_tensor def _forward_log_det_jacobian(self, x): event_dims = self._event_dims_tensor(x) x_left = x[:, self.idx1:(self.idx1+1)] return tf.reduce_sum(self.s(x_left), axis=event_dims) But it didn't work as I think it is. 
When I use the class, it pops up an error: base_dist = tfd.MultivariateNormalDiag(loc=tf.zeros([2], DTYPE)) num_bijectors = 4 bijectors = [] bijectors.append(NVPCoupling(input_idx1=0, input_idx2=1, \ block_w_id=0, block_d_id=0, layer_id=0)) bijectors.append(NVPCoupling(input_idx1=1, input_idx2=0, \ block_w_id=0, block_d_id=0, layer_id=1)) bijectors.append(NVPCoupling(input_idx1=0, input_idx2=1, \ block_w_id=0, block_d_id=0, layer_id=2)) bijectors.append(NVPCoupling(input_idx1=0, input_idx2=1, \ block_w_id=0, block_d_id=0, layer_id=3)) flow_bijector = tfb.Chain(list(reversed(bijectors))) dist = tfd.TransformedDistribution( distribution=base_dist, bijector=flow_bijector) dist.sample(1000) with error: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-16-04da05d30f8d> in <module>() ----> 1 dist.sample(1000) /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/distributions/distribution.pyc in sample(self, sample_shape, seed, name) 708 samples: a `Tensor` with prepended dimensions `sample_shape`. 709 """ --> 710 return self._call_sample_n(sample_shape, seed, name) 711 712 def _log_prob(self, value): /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/distributions/transformed_distribution.pyc in _call_sample_n(self, sample_shape, seed, name, **kwargs) 412 # returned result. 413 y = self.bijector.forward(x, **kwargs) --> 414 y = self._set_sample_static_shape(y, sample_shape) 415 416 return y /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/ops/distributions/distribution.pyc in _set_sample_static_shape(self, x, sample_shape) 1220 shape = tensor_shape.TensorShape( 1221 [None]*(ndims - event_ndims)).concatenate(self.event_shape) -> 1222 x.set_shape(x.get_shape().merge_with(shape)) 1223 1224 # Infer batch shape. 
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.pyc in merge_with(self, other) 671 return TensorShape(new_dims) 672 except ValueError: --> 673 raise ValueError("Shapes %s and %s are not compatible" % (self, other)) 674 675 def concatenate(self, other): ValueError: Shapes (1000, 4) and (?, 2) are not compatible Really hope some experts could help me understanding where I did wrong and how to fix that. Many Thanks! H. A: I believe the problem is here (reformatted slightly for clarity): output_tensor = tf.concat([ x[:,0:self.idx1], x_left, x[:, self.idx1+1:self.idx2], y_right, x[:, (self.idx2+1):] ], axis = 1) This assumes idx2 > idx1 which is not true in the case where you give idx1=1 and idx2=0. This causes you to concat more things than you meant to giving the 2nd dim of 4 instead of 2. I printed shapes in _forward thus: print("self.idx1: %s" % self.idx1) print("self.idx2: %s" % self.idx2) print("x[:,0:self.idx1]: %s" % x[:,0:self.idx1].shape) print("x_left: %s" % x_left.shape) print("x[:, self.idx1+1:self.idx2]: %s" % x[:, self.idx1+1:self.idx2].shape) print("x_right.shape: %s" % x_right.shape) print("y_right: %s" % y_right.shape) print("x[:, (self.idx2+1):]: %s" % x[:, (self.idx2+1):].shape) print("output_tensor.shape: %s" % output_tensor.shape) And got this output: self.idx1: 0 self.idx2: 1 x[:,0:self.idx1]: (1000, 0) x_left: (1000, 1) x[:, self.idx1+1:self.idx2]: (1000, 0) x_right.shape: (1000, 1) y_right: (1000, 1) x[:, (self.idx2+1):]: (1000, 0) output_tensor.shape: (1000, 2) self.idx1: 1 self.idx2: 0 x[:,0:self.idx1]: (1000, 1) x_left: (1000, 1) x[:, self.idx1+1:self.idx2]: (1000, 0) x_right.shape: (1000, 1) y_right: (1000, 1) x[:, (self.idx2+1):]: (1000, 1) output_tensor.shape: (1000, 4) self.idx1: 0 self.idx2: 1 x[:,0:self.idx1]: (1000, 0) x_left: (1000, 1) x[:, self.idx1+1:self.idx2]: (1000, 0) x_right.shape: (1000, 1) y_right: (1000, 1) x[:, (self.idx2+1):]: (1000, 2) 
output_tensor.shape: (1000, 4) self.idx1: 0 self.idx2: 1 x[:,0:self.idx1]: (1000, 0) x_left: (1000, 1) x[:, self.idx1+1:self.idx2]: (1000, 0) x_right.shape: (1000, 1) y_right: (1000, 1) x[:, (self.idx2+1):]: (1000, 2) output_tensor.shape: (1000, 4) I think you'll need to think a little more carefully about re-assembling the concat'ed pieces in this block, when idx1 > idx2. Hope this gets you back on track!
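The shape blow-up can be reproduced without TensorFlow. This plain-Python sketch mirrors the slice-and-concat logic of _forward for a single row (the affine transform itself is omitted, since only the column widths matter here):

```python
def forward_columns(row, idx1, idx2):
    # Mirrors the tf.concat in _forward on a plain list:
    # row[:idx1] + left column + row[idx1+1:idx2] + right column + row[idx2+1:]
    x_left, y_right = [row[idx1]], [row[idx2]]
    return (row[:idx1] + x_left + row[idx1 + 1:idx2]
            + y_right + row[idx2 + 1:])

row = [1.0, 2.0]
print(len(forward_columns(row, 0, 1)))  # 2 -- correct width
print(len(forward_columns(row, 1, 0)))  # 4 -- idx1 > idx2 duplicates columns
```

When idx1 > idx2, the slice row[idx1+1:idx2] is empty while row[:idx1] and row[idx2+1:] each re-include a column that is also emitted as x_left or y_right, so two input columns become four output columns — exactly the (1000, 4) tensor the traceback complains about.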
Q: Using ViewPager in API 8-10 in Android I want to use this sample from Androidhive for using ViewPager, but I can't use it in API 8 through 10. How can I change this sample for this case? Please help me. public class MyAdapter extends FragmentPagerAdapter { public MyAdapter(android.support.v4.app.FragmentManager fm) { super(fm); } public int getCount() { return 2; } public Fragment getItem(int i) { switch (i) { case 0: return new One(); case 1: return new Two(); } return null; } } and this is One.java (Two.java is the same): public class One extends Fragment { private MediaPlayer mediaPlayer; @Override public View onCreateView(LayoutInflater inflater , ViewGroup container, Bundle savedInstanceState) { final View rootView = inflater.inflate(R.layout.one, container, false); // BTN assert rootView != null; rootView.findViewById(R.id.one_soundBTN) .setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { // play sound mediaPlayer = MediaPlayer.create(getActivity(), R.raw.one); mediaPlayer.start(); //show image rootView.findViewById(R.id.one_image).setVisibility(View.VISIBLE); } }); return rootView; } } A: ActionBar tabs are deprecated. Check Google's project here for help implementing tabs with ViewPager.
Q: How can I predict using an AFT model with the survival package in R? I am using an accelerated failure time (AFT) model with a Weibull distribution to predict data. I am doing this using the survival package in R. I am splitting my data into training and test sets, training on the training set, and afterwards trying to predict the values for the test set. To do that I am passing the test set as the newdata parameter, as stated in the references. I get an error, saying that newdata does not have the same size as the training data (obviously!). Then the function seems to predict the values for the training set instead. How can I predict the values for the new data? # get data library(KMsurv) library(survival) data("kidtran") n = nrow(kidtran) kidtran <- kidtran[sample(n),] # shuffle row-wise kidtran.train = kidtran[1:(n * 0.8),] kidtran.test = kidtran[(n * 0.8):n,] # create model aftmodel <- survreg(kidtransurv~kidtran.train$gender+kidtran.train$race+kidtran.train$age, dist = "weibull") predicted <- predict(aftmodel, newdata = kidtran.test) Edit: As mentioned by Hack-R, there was this line of code missing kidtransurv <- Surv(kidtran.train$time, kidtran.train$delta) A: The problem seems to be in your specification of the dependent variable. The definition of the dependent variable was missing from your question, so I can't see what the specific mistake was, but it did not appear to be a proper Surv() survival object (see ?survreg). 
This variation on your code fixes that, makes some minor formatting improvements, and runs fine: require(survival) pacman::p_load(KMsurv) library(KMsurv) library(survival) data("kidtran") n = nrow(kidtran) kidtran <- kidtran[sample(n),] kidtran.train <- kidtran[1:(n * 0.8),] kidtran.test <- kidtran[(n * 0.8):n,] # Whatever kidtransurv was supposed to be is missing from your question, # so I will replace it with something not-missing # and I will make it into a proper survival object with Surv() aftmodel <- survreg(Surv(time, delta) ~ gender + race + age, dist = "weibull", data = kidtran.train) predicted <- predict(aftmodel, newdata = kidtran.test) head(predicted) 302 636 727 121 85 612 33190.413 79238.898 111401.546 16792.180 4601.363 17698.895
Q: Can one say "Девушка НЕЖНО ступала" ("the girl stepped tenderly")? "Легко ступать" ("to step lightly") is a set phrase. Can one say "она нежно ступала" ("she stepped tenderly")? A: If it's on her beloved's back, then yes. A: Actually, the word "нежно" ("tenderly") describes an effect on another object. You can look at someone tenderly, or touch them tenderly. That is, "нежно" conveys both the sensation of the person being acted upon and the feeling the actor put into the action. So, of course, it would help to see the context, but if this is simply about a gait, it is better to choose a different word, for example "мягко ступала" ("stepped softly").
Q: Relationship between multiple tables and a common table I have 3 tables JobType JobSubType JobSubSubType All tables take in parameters which are stored in the Parameter table. Once a parameter is used in one table it cannot be used by any other table. The Id values of all tables need to be present in the Parameter table. I created EntityId and EntityType columns in Parameter table which stores a JobType or JobSubType or JobSubSubType Id and the type of the entity. How do I define relationships which ensure the values in EntityId belongs to JobType, JobSubType or JobSubSubType? Simply having foreign keys on EntityId doesn't help. I do not want to create separate columns for JobType JobSubType or JobSubsubType Ids, as 2 columns out of 3 are always going to be null. Example Parameter Table +-------------+----------+---------------+--+ | ParameterId | EntityId | EntityType | | +-------------+----------+---------------+--+ | 1 | 2 | JobType | | | 2 | 6 | JobSubType | | | 3 | 11 | JobSubSubType | | | 4 | 4 | JobType | | | 5 | 6 | JobType | | | 6 | 12 | JobSubSubType | | +-------------+----------+---------------+--+ I had created foreign keys on the JobType, JobSubType and JobSubSubType tables with EntityId. When I was inserting values for JobSubSubTypes in parameter table I got an error. A: You could do it in two ways. First way would be to have a common base table of types that is used to serve the three tables with a common ancestor ID and thus each table would have IDs that are unique across all three tables, and you could then reference the ID in one column, with perhaps a second column to indicate which table was referenced. I believe this second approach would be preferable: Have only one types table, but use a ParentID as a self pointer to allow for sub types, sub sub types, and indeed sub sub sub sub sub sub sub sub sub types. Then all you need do is reference the ID. 
To keep the existing IDs (as asked in the follow-up comments), you add an additional field identifying the table, as mentioned in the first paragraph. The join would then play out as follows: SELECT col1, col2, ..., coln FROM mytable1 INNER JOIN types ON mytable1.typeID = types.ID AND mytable1.typetable='Type' UNION ALL SELECT col1, col2, ..., coln FROM mytable1 INNER JOIN types ON mytable1.typeID = types.ID AND mytable1.typetable='SubType' UNION ALL SELECT col1, col2, ..., coln FROM mytable1 INNER JOIN types ON mytable1.typeID = types.ID AND mytable1.typetable='SubSubType'
{ "pile_set_name": "StackExchange" }
Q: How to wire related models/controllers I'm writing an Ember application for storing "recommendations" about locations. The user must select a location from a list (retrieved by a custom API), write a description, take a photo and make a POST with that data. So I have a LocationModel and LocationController which retrieve the places from my API, display them in a list and allow the user to select one of them. Then a DescriptionController and PhotoController which allow the user to write the description and take the photo. I need to have each of these "parts" in individual controllers/models since I want to reuse them on other parts of my final app. My Recommendation model looks like this:

App.RecommendationModel = DS.Model.extend({
  location: DS.belongsTo('location'),
  description: DS.belongsTo('description'),
  photo: DS.belongsTo('photo')
});

The corresponding recommendation template:

<section id="select-location">
  {{render "location" location}}
</section>
<section id="write-description">
  {{render "description" description}}
</section>
<section id="take-photo">
  {{render "photo" photo}}
</section>
<section id="send-recommendation">
  {{render "share"}}
</section>

Each 'section' has a button with a 'complete' action. E.g.:

App.DescriptionController = Ember.Controller.extend({
  actions: {
    complete: function() {
      _saveDescription(this.get('value'));
    }
  }
});

My problem is how to "assemble" these parts in order to populate the RecommendationModel. I think that the complete action should be something like:

App.DescriptionController = Ember.Controller.extend({
  actions: {
    complete: function() {
      recommendation.set('description', this);
    }
  }
});

Should the "complete" action be in the RecommendationController instead? If so, how should I wire each component? Any other approach would be appreciated. Thank you!
A: I believe that the 'global' complete action should be handled by the RecommendationController and each individual complete action should be handled by its own controller (Description, Photo and whatever else). What you can do is use the needs property in each controller that handles a part of the Recommendation:

needs: ['recommendation'],

And make the RecommendationRoute set up and use each controller and template of every part of the recommendation like this:

App.RecommendationRoute = Ember.Route.extend({
  setupController: function(controller, model) {
    this._super();
    var descController = this.controllerFor('description').set('content', model.get('description'));
    var photoController = this.controllerFor('photo').set('content', model.get('photo'));
    this.set('descController', descController).set('photoController', photoController);
    ...
  },
  renderTemplate: function() {
    this._super();
    this.render('description', {
      into: 'application',
      outlet: 'description',
      controller: this.get('descController')
    });
    this.render('photo', {
      into: 'application',
      outlet: 'photo',
      controller: this.get('photoController')
    });
  }
});

This way, each controller of a part of the recommendation can handle its own complete action. All use the same recommendation model, and they can call the RecommendationController whenever necessary like this:

this.get('controllers.recommendation')

For instance, you'd do this to enable the global complete action once all the parts of the recommendation have been completed. Finally, you can bind the 'global' complete action to an element inside the recommendation template, or just add another render call inside renderTemplate in RecommendationRoute. Hope it helps!
Q: Regex validation on Tridion Schema fields and show alert message I created a schema with a field named 'email'. Now I need to verify whether the field is valid, and if not, show an error message. I edited the source as follows:

<xsd:element name="email" minOccurs="0" maxOccurs="50">
  <xsd:simpleType>
    <xsd:restriction base="xsd:normalizedString">
      <xsd:pattern value="\w+([-+.']\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*"/>
    </xsd:restriction>
  </xsd:simpleType>
</xsd:element>

How can I show an error message? Or please suggest a better approach.

A: Take a look at some of these links, they should get you on the right track:

Previous SF post - How to display the custom message in SDL Tridion Message bar?
Validating Tridion Content Part 1 - http://nunolinhares.blogspot.co.uk/2012/07/validating-content-on-save-part-1-of.html
Validating Tridion Content Part 2 - http://www.curlette.com/?p=913

Cheers
Q: what does $_ (ruby underscore) global method return? What does the global variable $_ return in ruby? Is it just echoing the response of the previously called method, or is it returning the (raw) last line of code read by the interpreter? Here it says:

string last read by gets

and in builtin.rb it says:

# Last line read by Kernel#gets or Kernel#readline.
# This variable is defined in current scope, thread local.
$_ = "" #value is unknown, used for indexing.

This is what I get in the console:

[1] pry(main)> x = 1 + 5
=> 6
[2] pry(main)> _
=> 6
[3] pry(main)> y = 3 + 3
=> 6
[4] pry(main)> $_
=> nil

What is the difference between _ and $_? Here's a gist with builtin.rb that someone has created...

A: $_ is the last string read from IO by one of Kernel.gets, Kernel.readline or siblings. Pry introduces the underscore variable, returning the result of the last operation, on its own. It has nothing to do with ruby globals.
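A small sketch of the "last string read" behaviour (the temp-file path is just scratch space for the demo): IO#gets, like Kernel#gets, assigns the line it reads to $_.

```ruby
# IO#gets reads a line and, as a side effect, assigns it to $_.
File.write("/tmp/underscore_demo.txt", "hello\n")
f = File.open("/tmp/underscore_demo.txt")
line = f.gets          # reads "hello\n" and also sets $_
p line                 # "hello\n"
p $_                   # "hello\n" -- set as a side effect of gets
f.close
```

This is why `$_` was nil in the pry session above: no gets/readline had happened, only ordinary expressions, which pry's own `_` tracks instead.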
Q: Execute same command for each non-interactive bash session I need to implement some config - I need to execute some code if someone on my system calls the bash command in any form (interactive, non-interactive and bash -c form). For example, this is the command I want to add to each bash call:

touch /tmp/$RANDOM

I added that line to /etc/bash.bashrc and it works fine for login shells and for shells which were initialized by calling the bash command. But this will not work for a bash session initialized like this: bash -c 'echo 1' - the new file will not be created.
From the documentation I understand that bash.bashrc and bash_profile are called only for login or interactive shells. Is there any way to create some similar calls for non-interactive shells too?

A: To make bash parse a file when invoked as a non-interactive shell, you need to set the environment variable BASH_ENV to point to that file. From man bash (section on INVOCATION):

When bash is started non-interactively, to run a shell script, for example, it looks for the variable BASH_ENV in the environment, expands its value if it appears there, and uses the expanded value as the name of a file to read and execute. Bash behaves as if the following command were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
but the value of the PATH variable is not used to search for the filename.

So where to set BASH_ENV? Either:
If you want a variable to be available to the environment system-wide, a good place to put it is /etc/environment. This file is specifically made for system-wide environment variable settings. It is not parsed by the shell but by the PAM module pam_env, so you can not use shell syntax or variable expansion within, but only simple assignments of the following type:

VARIABLE=value
VARIABLE=/full/path/to/file

Changes will take effect at the next login/authentication, so switch to a new tty console or logout and re-login to your session.
On a standard desktop system, this should work for all types of authenticated sessions using PAM, including console logins, ssh and display managers, but also daemons such as atd and cron. If everything works as expected, then you are done and there is no need to read on. However, mistakes do occasionally sneak into the PAM configuration files of some programs/distributions, so if /etc/environment is not parsed by a certain type of program, make sure that the necessary PAM module is loaded in the PAM configuration file for that program in /etc/pam.d: session required pam_env.so readenv=1 (Note: The readenv flag which turns reading of the config file on/off should not actually be needed, since it is set to on (1) by default - but it doesn't hurt to make sure either.) Or: If you are working on a system that does not provide pam_env, then the best alternative that comes to my mind would be write a simple init script (or service unit file on systemd) that parses a custom config file (such as /etc/default/environment) at boot.
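A minimal demonstration of the BASH_ENV mechanism itself (paths under /tmp are scratch locations for the demo):

```shell
# The file that every non-interactive bash should source.
rm -f /tmp/bashenv_marker
cat > /tmp/bashenv_demo.sh <<'EOF'
touch /tmp/bashenv_marker
EOF

# With BASH_ENV set, the file is sourced before the -c command runs.
BASH_ENV=/tmp/bashenv_demo.sh bash -c 'true'
ls /tmp/bashenv_marker   # the marker now exists
```

For the system-wide case, the same variable would go into /etc/environment as a plain `BASH_ENV=/path/to/file` assignment, per the answer above.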
Q: Insert the new record after an existing result In my items table I have the following records:

shop_vnum  item_vnum  count
4          13209      1
4          11229      1

And then I want to insert a new record after item_vnum 13209, so it will look like:

shop_vnum  item_vnum  count
4          13209      1
4          12400      1
4          11229      1

Is that possible?

A: If you are using an InnoDB table and item_vnum is a primary key, InnoDB will actually "physically" arrange the rows in ascending order by item_vnum (this uses something called clustered indexes). But it looks like you want them arranged in descending order, so the next best thing I can think of is creating a view:

http://net.tutsplus.com/tutorials/databases/introduction-to-mysql-views/
http://www.techotopia.com/index.php/An_Introduction_to_MySQL_Views

CREATE VIEW items_view AS SELECT * FROM items ORDER BY item_vnum DESC;

Then the view called "items_view" should behave like a table and reflect what you are looking for whenever you insert something into the "items" table. But the best answer is probably: you don't need to worry about how the rows are inserted. You only need to worry about how they are retrieved with your select statement later on. Be sure to use "ORDER BY item_vnum DESC" at the end of your select statement.
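The "order at SELECT time, not at INSERT time" point can be sketched quickly (SQLite stand-in for brevity; ORDER BY behaves the same way in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (shop_vnum INTEGER, item_vnum INTEGER, count INTEGER)")
conn.executemany("INSERT INTO items VALUES (?, ?, ?)",
                 [(4, 13209, 1), (4, 11229, 1)])

# "Inserting after 13209" is just inserting the row; the presentation
# order is controlled entirely by the SELECT.
conn.execute("INSERT INTO items VALUES (4, 12400, 1)")

rows = conn.execute(
    "SELECT item_vnum FROM items ORDER BY item_vnum DESC").fetchall()
print(rows)  # [(13209,), (12400,), (11229,)]
```

No matter what order the rows were inserted in, the ORDER BY clause always returns the sequence the question asked for.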
Q: transaction not returning false msg I am trying to update 3 tables using a transaction; if any one fails I need to roll back all the tables. For one table a foreign key constraint fails, but it does not return a false statement, instead showing a console error from the database. Rollback is working. I include my code below. Kindly help. The function below helps me to execute some queries:

function funcname($val1,$val2,$val3){
    $this->db->trans_start();//starting transaction
    try {
        // trying to execute query
        $this->db->query("UPDATE tab1 SET name = 1 WHERE id=".$val1);
        $this->db->query("UPDATE school SET emp = 2 WHERE id=".$val2);
        $this->db->query("UPDATE profile SET status = 4 WHERE id=".$val3);
        $this->db->trans_complete();
        return TRUE;
    } catch (Exception $ex) {
        //exception roll back to original state
        $this->db->trans_rollback();
        return FALSE;
    }
}

A: If you are using trans_start() and trans_complete() then you need not use a try catch statement. Your function will be like this:

function funcname($val1,$val2,$val3){
    $this->db->trans_start();//starting transaction
    $this->db->query("UPDATE tab1 SET name = 1 WHERE id=".$val1);
    $this->db->query("UPDATE school SET emp = 2 WHERE id=".$val2);
    $this->db->query("UPDATE profile SET status = 4 WHERE id=".$val3);
    $this->db->trans_complete();
    if ($this->db->trans_status() === FALSE) {
        return FALSE;
    } else {
        return TRUE;
    }
}

In case you need to do it manually, then use the code below:

function funcname($val1,$val2,$val3){
    $this->db->trans_begin();//starting transaction
    $this->db->query("UPDATE tab1 SET name = 1 WHERE id=".$val1);
    $this->db->query("UPDATE school SET emp = 2 WHERE id=".$val2);
    $this->db->query("UPDATE profile SET status = 4 WHERE id=".$val3);
    if ($this->db->trans_status() === FALSE) {
        $this->db->trans_rollback();
        return FALSE;
    } else {
        $this->db->trans_commit();
        return TRUE;
    }
}

For more reference: Transactions : CodeIgniter User Guide
Q: What is the purpose of the Build Configuration Manager in VS2008? I am struggling with the purpose of the build configuration manager in Visual Studio 2008. Specifically I am interested in knowing what it does when developing a console application and also a web application (web application project). Does setting it to Debug or Release mode make any difference when you are developing and running the application in the context of VS2008? What does it do when you want to build the solution?

A: The Build Configuration Manager allows you to set different combinations of project build options for a solution. For example, say you have a solution with 4 projects: log4net, a DAL, a business layer and the website. Most times you'll want to run the website and business layer in debug, but the DAL and log4net in release mode. Sometimes you'll want to run the DAL in debug too, but only on a rare occurrence will you want to run everything in debug. The config manager lets you define configurations like that. Additionally, you could define an x64 build that had some projects target x64 and others target AnyCPU, depending on need. Or even a build target that excluded specific projects and included others, depending on need. So in short, the config manager lets you control the inter-project build relations at a level beyond the simplistic debug-all or release-all. I'd also guess that 99% of the time you won't need to mess with the config manager anyway. :-)
Q: Cannot find adb tool during running ndk-gdb I am building my android ndk application by using the ndk-build command and it works fine, but when I use the ndk-gdb command from cygwin I get the following error:

ERROR: The 'adb' tool is not in your path. You can change your PATH variable, or use --adb=<executable> to point to a valid one.

Please help me out to solve this problem.

A: It seems that your command can't find your adb executable, so either set the PATH environment variable to include adb.exe, or give the full path of your adb.exe in the command with --adb.
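A sketch of the PATH fix (the SDK location is an assumption — point it at your real platform-tools directory; a stub script stands in for the real adb binary here so the mechanism is visible):

```shell
# Stand-in for the real SDK root -- substitute e.g. "$ANDROID_SDK".
SDK=/tmp/fake-android-sdk
mkdir -p "$SDK/platform-tools"
printf '#!/bin/sh\necho adb-ok\n' > "$SDK/platform-tools/adb"
chmod +x "$SDK/platform-tools/adb"

# The actual fix: make the adb directory resolvable from PATH.
export PATH="$PATH:$SDK/platform-tools"
command -v adb   # now found
adb              # runs the (stub) binary
```

Alternatively, skip PATH entirely and run ndk-gdb --adb=/path/to/adb, as the error message itself suggests.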
Q: Python Selenium: wait until class is visible I'm trying to get data from a website with different webpages. My code looks like this:

item_List = []

def scrape(pageNumber):
    driver.get(url + pageExtension + str(pageNumber))
    items = driver.find_elements_by_class_name("Item-information")
    for item in items:
        item_List.append(item.text)
    return item_List

Right now I'm able to collect the data that I want from one page. When I run:

print scrape(23)

I get the results I need. But when I run:

print scrape(14) #any page number really
print scrape(23)

Selenium first loads the page "url + pageExtension + str(14)" and successfully gets the data. It then loads "url + pageExtension + str(23)" but doesn't scrape the data. I get the following error:

selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document

I assume this is caused by the browser not loading the second page fast enough, resulting in selenium not being able to scrape the class I'm looking for. I've tried some waiting functions, but thus far I haven't been successful. Help would really be appreciated! Thanks in advance!

A: Try as follows (note that EC.staleness_of expects a WebElement rather than a locator tuple, so pass it one of the elements you already found):

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

item_List = []

def scrape(pageNumber):
    driver.get(url + pageExtension + str(pageNumber))
    items = driver.find_elements_by_class_name("Item-information")
    for item in items:
        item_List.append(item.text)
    if items:
        # waits till the element is NO LONGER attached to the DOM.
        WebDriverWait(driver, 10).until(EC.staleness_of(items[0]))
    return item_List

Note: As you are looking for the same elements (with the same class name), items still contains references to the elements of the previous page that you already visited (here page 14). So when you visit page 23, items refers to elements which are in page 14 but not in page 23, which gives StaleElementReferenceException.
Q: save data from one sheet to last row of other sheet Edited to add code to the bottom. I'm creating an invoice sheet that an employee will populate and then click a "save & clear" button, which will find the next empty row in a sales report and save certain fields from the invoice to the sales report. I have two solutions that work, but both seem to bog down the sheet and take a while to complete. Below is a copy of the sheet I'm working on, from which I've stripped out extraneous information. Because I've stripped it to bare bones, I don't have an issue with slowness. I initially tried to create an array of each of the cells I wanted to save and then put those to the report. I then copy and paste values on the report to keep my clear function from deleting the invoice and the report. Second, I have code that activates certain cells in lastRow+1, and then I do a copy/paste of values for each cell as I go. Neither feels like the most efficient way to do this, but I can't figure out how to use the getValues and setValues functions, or whether those would be applicable to what I am trying to do. Thanks in advance!
First Try (with edits): data = ["=Invoice!$E$3", "=Invoice!$E$2","=row()+100000-2","=Invoice!$E$30","=Invoice!$E$4","=Invoice!$C$6","=Invoice!$C$10","=Invoice!$E$6","=Invoice!$E$10","Parts","=Invoice!$H$11"] SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Sales Report").appendRow(data); var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Sales Report"); sheet.getRange('$A$47:$K$100').copyTo(sheet.getRange('$A$47:$K$100'), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); SpreadsheetApp.getActive().getSheetByName('Invoice').getRange(6,3 ).clearContent(); SpreadsheetApp.getActive().getRange('B12:C28').clearContent(); SpreadsheetApp.getActive().getRange('E29').clearContent(); Second try (with edits): var spreadsheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sales Report'); var row = spreadsheet.getLastRow() + 1; spreadsheet.getRange('Invoice!E3').copyTo(spreadsheet.getRange('A' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); spreadsheet.getRange('Invoice!E2').copyTo(spreadsheet.getRange('B' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); spreadsheet.getRange('C' + (row - 1) + 1).copyTo(spreadsheet.getRange('C' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); spreadsheet.getRange('Invoice!E30').copyTo(spreadsheet.getRange('D' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); spreadsheet.getRange('Invoice!E4').copyTo(spreadsheet.getRange('E' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); spreadsheet.getRange('Invoice!C6').copyTo(spreadsheet.getRange('F' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); spreadsheet.getRange('Invoice!C10').copyTo(spreadsheet.getRange('G' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); spreadsheet.getRange('Invoice!E6').copyTo(spreadsheet.getRange('H' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); spreadsheet.getRange('Invoice!E10').copyTo(spreadsheet.getRange('I' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); 
spreadsheet.getRange('J' + (row - 1)).copyTo(spreadsheet.getRange('J' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); spreadsheet.getRange('Invoice!H11').copyTo(spreadsheet.getRange('K' + row), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); SpreadsheetApp.getActive().getSheetByName('Invoice').getRange(6,3 ).clearContent(); SpreadsheetApp.getActive().getRange('B12:C28').clearContent(); SpreadsheetApp.getActive().getRange('E29').clearContent(); A: This should get you started with batch value operations (and some concepts). The key to remember is that JavaScript arrays are 0-base and the Spreadsheet interface uses 1-base: cell A1 aka cell R1C1 is at array indices 0, 0. The Spreadsheet Service offered by Apps Script's SpreadsheetApp also interprets the "major dimension" as rows, so the first index of a "2D" JavaScript Array is the row, and the second index is the column of that particular row. (The Sheets REST API can specify that arrays should be interpreted with columns as the first index.) function logInvoice() { const wb = SpreadsheetApp.getActive(); // 'const' means we will not change what 'wb' means. const sales = wb.getSheetByName("Sales Reports"); var lastSalesRow = sales.getLastRow(); // not 'const' since we will re-assign this primitive. const invoice = wb.getSheetByName("Invoice"); const fullReport = invoice.getDataRange().getValues(); // Gets A1:___ into a 2D JS array const data = [ fullReport[2][4], // E3 fullReport[1][4], // E2 /** I really don't know what you were doing with "=row() + 100000 - 2". If * you were just trying to get a unique identifier, then use the next line: */ // Utilities.getUuid(), fullReport[29][4], // E30 /** * add more elements as needed */ ]; // Write the summary data to the next row in the 'Sales Reports' sheet. // Grabs the cell "A#" and then uses the size of the data to write to determine // the size of the Range required. 
    // setValues expects a 2-D array matching the Range shape, so the 1-D
    // row is wrapped in another array and written into a 1-row range.
    sales.getRange(++lastSalesRow, 1, 1, data.length).setValues([data]);
    // Reset the Invoice sheet to its blank slate.
    invoice.getRangeList([
        "C6", // R6C3
        "B12:C28",
        "E29"
    ]).clearContent();
}

References:
const
Sheet#getDataRange
Range#getValues
Range#setValues
RangeList
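The shape rule behind setValues, sketched in plain Node with a hypothetical helper (not the Apps Script API itself): the array given to setValues must be 2-D and match the range's rows × columns, which is why writing one row means a 1×N range and a wrapped [rowValues] array.

```javascript
// Hypothetical helper mirroring the Range/setValues shape rule: a single
// row of N values needs a 1 x N range and a 2-D array [rowValues].
function shapeForSingleRow(rowValues) {
  return { numRows: 1, numColumns: rowValues.length, values: [rowValues] };
}

const shaped = shapeForSingleRow(["INV-1", "2015-03-02", 120]);
console.log(shaped.numRows, shaped.numColumns); // 1 3
console.log(shaped.values);
```

Passing the bare 1-D array (or sizing the range as data.length rows) is exactly the mismatch that makes Apps Script reject the write.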
Q: How can I use functools.partial on multiple methods on an object, and freeze parameters out of order? I find functools.partial to be extremely useful, but I would like to be able to freeze arguments out of order (the argument you want to freeze is not always the first one) and I'd like to be able to apply it to several methods on a class at once, to make a proxy object that has the same methods as the underlying object except with some of its methods parameters being frozen (think of it as generalizing partial to apply to classes). And I'd prefer to do this without editing the original object, just like partial doesn't change its original function. I've managed to scrap together a version of functools.partial called 'bind' that lets me specify parameters out of order by passing them by keyword argument. That part works: >>> def foo(x, y): ... print x, y ... >>> bar = bind(foo, y=3) >>> bar(2) 2 3 But my proxy class does not work, and I'm not sure why: >>> class Foo(object): ... def bar(self, x, y): ... print x, y ... >>> a = Foo() >>> b = PureProxy(a, bar=bind(Foo.bar, y=3)) >>> b.bar(2) Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: bar() takes exactly 3 arguments (2 given) I'm probably doing this all sorts of wrong because I'm just going by what I've pieced together from random documentation, blogs, and running dir() on all the pieces. Suggestions both on how to make this work and better ways to implement it would be appreciated ;) One detail I'm unsure about is how this should all interact with descriptors. Code follows. 
from types import MethodType class PureProxy(object): def __init__(self, underlying, **substitutions): self.underlying = underlying for name in substitutions: subst_attr = substitutions[name] if hasattr(subst_attr, "underlying"): setattr(self, name, MethodType(subst_attr, self, PureProxy)) def __getattribute__(self, name): return getattr(object.__getattribute__(self, "underlying"), name) def bind(f, *args, **kwargs): """ Lets you freeze arguments of a function be certain values. Unlike functools.partial, you can freeze arguments by name, which has the bonus of letting you freeze them out of order. args will be treated just like partial, but kwargs will properly take into account if you are specifying a regular argument by name. """ argspec = inspect.getargspec(f) argdict = copy(kwargs) if hasattr(f, "im_func"): f = f.im_func args_idx = 0 for arg in argspec.args: if args_idx >= len(args): break argdict[arg] = args[args_idx] args_idx += 1 num_plugged = args_idx def new_func(*inner_args, **inner_kwargs): args_idx = 0 for arg in argspec.args[num_plugged:]: if arg in argdict: continue if args_idx >= len(inner_args): # We can't raise an error here because some remaining arguments # may have been passed in by keyword. break argdict[arg] = inner_args[args_idx] args_idx += 1 f(**dict(argdict, **inner_kwargs)) new_func.underlying = f return new_func Update: In case anyone can benefit, here's the final implementation I went with: from types import MethodType class PureProxy(object): """ Intended usage: >>> class Foo(object): ... def bar(self, x, y): ... print x, y ... 
>>> a = Foo() >>> b = PureProxy(a, bar=FreezeArgs(y=3)) >>> b.bar(1) 1 3 """ def __init__(self, underlying, **substitutions): self.underlying = underlying for name in substitutions: subst_attr = substitutions[name] if isinstance(subst_attr, FreezeArgs): underlying_func = getattr(underlying, name) new_method_func = bind(underlying_func, *subst_attr.args, **subst_attr.kwargs) setattr(self, name, MethodType(new_method_func, self, PureProxy)) def __getattr__(self, name): return getattr(self.underlying, name) class FreezeArgs(object): def __init__(self, *args, **kwargs): self.args = args self.kwargs = kwargs def bind(f, *args, **kwargs): """ Lets you freeze arguments of a function be certain values. Unlike functools.partial, you can freeze arguments by name, which has the bonus of letting you freeze them out of order. args will be treated just like partial, but kwargs will properly take into account if you are specifying a regular argument by name. """ argspec = inspect.getargspec(f) argdict = copy(kwargs) if hasattr(f, "im_func"): f = f.im_func args_idx = 0 for arg in argspec.args: if args_idx >= len(args): break argdict[arg] = args[args_idx] args_idx += 1 num_plugged = args_idx def new_func(*inner_args, **inner_kwargs): args_idx = 0 for arg in argspec.args[num_plugged:]: if arg in argdict: continue if args_idx >= len(inner_args): # We can't raise an error here because some remaining arguments # may have been passed in by keyword. break argdict[arg] = inner_args[args_idx] args_idx += 1 f(**dict(argdict, **inner_kwargs)) return new_func A: You're "binding too deep": change def __getattribute__(self, name): to def __getattr__(self, name): in class PureProxy. __getattribute__ intercepts every attribute access and so bypasses everything that you've set with setattr(self, name, ... 
making those setattr bereft of any effect, which is obviously not what you want; __getattr__ is called only for access to attributes not otherwise defined, so those setattr calls become "operative" & useful.
In the body of that override, you can and should also change object.__getattribute__(self, "underlying") to self.underlying (since you're not overriding __getattribute__ any more). There are other changes I'd suggest (enumerate in lieu of the low-level logic you're using for counters, etc) but they wouldn't change the semantics. With the change I suggest, your sample code works (you'll have to keep testing with more subtle cases of course). BTW, the way I debugged this was simply to stick print statements in the appropriate places (a jurassic-era approach but still my favorite;-).
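One more note on the bind() half of the question: in Python 3, functools.partial itself already freezes arguments out of order when they are passed by keyword, so that part needs no custom code. A sketch (it does not cover the proxy-class half):

```python
from functools import partial

def foo(x, y):
    return (x, y)

# Freeze the *second* argument by keyword -- no custom bind() needed.
bar = partial(foo, y=3)
print(bar(2))  # (2, 3)

# Method version: freeze 'y' on the unbound function, then supply the
# instance as the first positional argument when calling.
class Foo:
    def bar(self, x, y):
        return (x, y)

frozen = partial(Foo.bar, y=3)
a = Foo()
print(frozen(a, 1))  # (1, 3)
```

Keyword arguments frozen this way can still be overridden at call time (frozen(a, 1, y=9)), which matches the "inner_kwargs wins" behavior the hand-rolled bind() aims for.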
Q: Select/Deselect Row in DataGridView I'm trying to insert/update data in a DataGridView but ran into a small problem. In the DataGridView are certain cells with a date and a starting time (datetime). Now what I'm trying to do is: when a button is pressed, the current date should be searched for, the row selected, then the start time should be read, and from it and the current time a TimeSpan should be calculated. Right now I'm just trying to get the row selected so I can take out the data of the start-time cell.

var Today = DateTime.Now.ToShortDateString();
dataGridView1.SelectedRows.Clear();
foreach (DataGridViewRow row in dataGridView1.Rows)
{
    if (row.Cells[0].Value.Equals(Today))
    {
        row.Selected = true;
    }
}

But this gives me an error that the listing is read-only... I'm not really sure what I did wrong here. So I'd appreciate it if someone could help me with this or give me a tip on how to solve this problem. Thanks in advance to everyone. :)

A: If you try to clear the selection in your DataGridView using

dataGridView1.SelectedRows.Clear();

you will receive an exception: Operation not supported. Collection is read-only.
To clear the selection, you can use the ClearSelection method:

dataGridView1.ClearSelection();
Q: CSS how to add sliding transition I have the following fiddle demo of a working sidenav menu with sliding sub menu contents. I followed the same demo without using Jquery (actually first using plain JS and then via CSS hover selector instead of click) and in my case sub menu doesn't slides/animates in the same way. .submenu { display: none; } .parent:hover .submenu,.submeun:hover { display: block; } Is that animation due to Jquery toggle method? $(document).ready(function() { $('.parent').click(function() { $('.submenu').toggle('visible'); }); }); How can I replicate the same approach without using jquery, via css or plain JS as I don't want to use jquery just for one simple sliding animation. JSFIDDLE A: This way ? document.getElementById('home').addEventListener('click', function(e) { var nextEl = e.target.nextElementSibling; if(!nextEl.classList.contains('submenu')) { return false; } if(nextEl.classList.contains('show')) { nextEl.classList.remove('show') } else { nextEl.classList.add('show'); } }); .submenu { -webkit-transition: max-height 1s; -moz-transition: max-height 1s; -ms-transition: max-height 1s; -o-transition: max-height 1s; transition: max-height 1s; background: #e5feff; overflow: hidden; max-height: 0; } .submenu.show { max-height: 300px; } <div id="sidebar"> <ul> <li id="home" class="parent">Home</li> <li class="submenu"><ul > <li>Home 1</li> <li>Home 2</li> <li>Home 3</li> </ul> </li> <li>Explore</li> </ul> </div>
Q: Dart Async Does Not Wait for Completion I'm currently trying to wait for a BLE connection to result in one of two outcomes: the device connected successfully, or the device failed to connect after the scan timed out. Instead of returning a true or false value as desired, null is immediately returned, without waiting for the function to finish. I'm using dart's Future and async functionality in order to wait for the completion of the connect function. Here is my code below:

BLE Connect method:

static Future<bool> connect(BluetoothDevice d) async {
  // Connect to device
  Duration timeout = new Duration(seconds: 5);
  deviceConnection = _flutterBlue.connect(d, timeout: timeout).listen((s) {
    deviceState = s;
    if (s == BluetoothDeviceState.connected) {
      device = d;
      device.discoverServices().then((s) {
        ... Some service discovery stuff ...
      });
    }
  }, onDone: () {
    return deviceState == BluetoothDeviceState.connected;
  });
}

Where the connect method is being called from:

bool isConnected = await FlutterBLE.connect(device);
if(isConnected) {
  ... Do some stuff ...
} else {
  ... Do some other stuff ...
}

What am I doing wrong here?

A: As Günther Zöchbauer has pointed out, the mistake is in the onDone part. You are returning a value there that nobody will ever see, and you are not returning anything from the surrounding function. You are inside an async function, so you can use await for to iterate the stream. You also want to stop listening to the stream the first time you get a connection event, because you only care about the first connection. The stream of connection events itself never stops.

static Future<bool> connect(BluetoothDevice d) async {
  // Connect to device
  Duration timeout = const Duration(seconds: 5);
  await for (var s in _flutterBlue.connect(d, timeout: timeout)) {
    deviceState = s;
    if (s == BluetoothDeviceState.connected) {
      device = d;
      device.discoverServices().then((s) {
        ... Some service discovery stuff ...
      });
      return true;
    }
  }
  // The stream ended somehow, there will be no further events.
  return false;
}

If you don't want to use await for (and are not using an async function to begin with), I would recommend using firstWhere to find the first connection (if any) rather than listen:

static Future<bool> connect(BluetoothDevice d) {
  // Connect to device
  Duration timeout = const Duration(seconds: 5);
  return _flutterBlue.connect(d, timeout: timeout).firstWhere((s) {
    return s == BluetoothDeviceState.connected;
  }, orElse: () => null).then((s) {
    if (s == null) return false;
    deviceState = s;
    device = d;
    device.discoverServices().then((s) {
      //... Some service discovery stuff ...
    });
    return true;
  });
}

It's also slightly suspicious that nobody waits for the future returned by device.discoverServices().then(...). Make sure that this is correct.
Q: From master equation to Fokker-Planck equation For the continuous master equation in real space and time, we have for the distribution $f(x,t)$: $$\frac{\partial f(x,t)}{\partial t}=\int_{-\infty}^{\infty}[f(x',t)W(x',x)-f(x,t)W(x,x')]\mathrm{d}x'$$ In the statistical mechanics book by P.K. Pathria, the right hand side is Taylor expanded to obtain (keeping up to second order) $$\frac{\partial f(x,t)}{\partial t}=-\frac{\partial}{\partial x}[f(x,t)\int_{-\infty}^{\infty}\xi W(x;\xi)\mathrm{d}\xi]+\frac{1}{2}\frac{{\partial}^2}{\partial x^2}[f(x,t)\int_{-\infty}^{\infty}{\xi}^2 W(x;\xi)\mathrm{d}\xi],$$ where $\xi=x'-x$, and $W(x;\xi)=W(x,x')$, which is the transition probability density from $x$ to $x'$. This is pretty much confusing to me and I have the following questions: How is the expansion carried out? It seems that we should expand around $x'$, after which a change of integration variable is done. However, it doesn't seem to lead to the above expression. The difference $\xi$ between $x$ and $x'$ is not necessarily small and in fact as the integration variable it goes all the way to $\infty$. Then in this case, how is keeping terms up to second order and neglecting all the higher orders justified? Any help is much appreciated. A: For the first question, we start by rewriting $$ \frac{\partial f(x,t)}{\partial t}=\int_{-\infty}^{\infty}[f(x',t)W(x',x)-f(x,t)W(x,x')]\, dx' $$ as $$ \begin{aligned} \frac{\partial f(x,t)}{\partial t} &=\int_{-\infty}^{\infty}[f(x',t)W(x';x-x')-f(x,t)W(x;x'-x)] \, d (x' - x) \\ &= \int_{-\infty}^{\infty} f(x-\xi,t) W(x-\xi;\xi) \, d \xi -f(x, t) \int_{-\infty}^{\infty} W(x; \xi) \, d \xi, \qquad (1) \end{aligned} $$ where $\xi \equiv x - x'$.
For the first term, the change of the minus sign in $d(-\xi) \to d\xi$ is compensated by swapping the lower and upper limits of the integral: $$ \int_{-\infty}^\infty g(\xi) d(-\xi) = -\int_{\infty}^{-\infty} g(\xi) d\xi = \int_{-\infty}^{\infty} g(\xi) d\xi $$ Next we can expand the first term using the Kramers-Moyal expansion $$ f(x-\xi,t) W(x-\xi;\xi) = f(x,t) W(x;\xi) -\xi \frac{\partial }{ \partial x} \left[ f(x,t) W(x;\xi) \right] +\frac{ \xi^2 } {2!} \frac{\partial^2 }{ \partial x^2} \left[ f(x,t) W(x;\xi) \right] + \dots, \qquad (2) $$ in which the first term of the right-hand side will cancel the second term of (1). So $$ \begin{aligned} \frac{\partial f(x,t)}{\partial t} &= \int_{-\infty}^\infty \left( -\xi \frac{\partial }{ \partial x} \left[ f(x,t) W(x;\xi) \right] +\frac{ \xi^2 } {2!} \frac{\partial^2 }{ \partial x^2} \left[ f(x,t) W(x;\xi) \right] \right) \, d\xi, \\ &= -\frac{\partial }{ \partial x} \left[f(x,t) \int_{-\infty}^\infty \xi W(x;\xi) \, d\xi\right] + \frac{ 1 } {2!} \frac{\partial^2 }{ \partial x^2}\left[ f(x,t) \int_{-\infty}^\infty \xi^2 W(x;\xi) \, d\xi \right]. \end{aligned} $$ For the second question, you are absolutely correct to say that we cannot always justify the truncation. Van Kampen has an entire chapter devoted to this (chapter X) among complaints in other places. The formally correct expansion, the Kramers-Moyal expansion, yields $$ \begin{aligned} \frac{\partial f(x,t)}{\partial t} &= \sum_{k = 1}^\infty \frac{(-1)^k}{k!} \frac{\partial^k }{ \partial x^k} \left[f(x,t) \int_{-\infty}^\infty \xi^k W(x;\xi) \, d\xi\right]. \end{aligned} $$ The problem is that this expansion is equivalent to the master equation, and it is generally too difficult to solve.
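The two integrals in the truncated equation are the first and second jump moments of $W$. As a quick numerical sanity check (with an illustrative Gaussian kernel, not anything specific to Pathria's text), simple trapezoidal quadrature recovers the known Gaussian moments $\int\xi W\,d\xi=\mu$ and $\int\xi^2 W\,d\xi=\sigma^2+\mu^2$:

```python
import math

def jump_moment(k, mean, var, lo=-10.0, hi=10.0, n=4001):
    """k-th jump moment  integral of xi^k * W(xi) d(xi)  for a Gaussian
    kernel W with the given mean and variance, by trapezoidal quadrature."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        xi = lo + i * h
        w = math.exp(-(xi - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
        weight = 0.5 if i in (0, n - 1) else 1.0  # trapezoid endpoint weights
        total += weight * (xi ** k) * w
    return total * h

mean, var = 0.3, 0.5
print(jump_moment(1, mean, var))  # ~0.3  (the drift coefficient)
print(jump_moment(2, mean, var))  # ~0.59 (= var + mean^2)
```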
Q: Converting data in daily for each date range given in dataframe row I have below types of data: I want the output in below daily forms(below is the Example where I want to convert Row 1 data into Daily and then Row 2 and … (The date Range is not Fix)): First, I just want to ask the Expert about the possibility of this kind of data treatment in python. I am trying with the following code: # data1 = dataframe name data1['Daily']=(data1['Total Spot']/((data1['Event End']-data1['Event Start']).dt.days)+1) for date in data1: pd.date_range(data1['Event Start'], data1['Event End'],freq='D') What should I include in the above code to get a desirable output? A: You can use list comprehension with flattening for new DataFrame and then DataFrame.merge together with original data: data1 = pd.DataFrame({'Event Start':['03/28/2018','04/02/2018'], 'Event End':['04/03/2018','04/05/2018'], 'Team 1':['AAB','AAC'], 'Team 2':['BBB','ABC'], 'Total Spot':[160, 350]}) c = ['Event Start','Event End'] data1[c] = data1[c].apply(pd.to_datetime) data1['Daily']=(data1['Total Spot']/((data1['Event End']-data1['Event Start']).dt.days)+1) print (data1) Event Start Event End Team 1 Team 2 Total Spot Daily 0 2018-03-28 2018-04-03 AAB BBB 160 27.666667 1 2018-04-02 2018-04-05 AAC ABC 350 117.666667 L = [(i, x) for i, s, e in zip(data1.index, data1['Event Start'], data1['Event End']) for x in pd.date_range(s, e)] df = (pd.DataFrame(L, columns=['idx','Day']) .merge(data1.drop(c + ['Total Spot'], axis=1), left_on='idx', right_index=True) .drop('idx', axis=1)) print (df) Day Team 1 Team 2 Daily 0 2018-03-28 AAB BBB 27.666667 1 2018-03-29 AAB BBB 27.666667 2 2018-03-30 AAB BBB 27.666667 3 2018-03-31 AAB BBB 27.666667 4 2018-04-01 AAB BBB 27.666667 5 2018-04-02 AAB BBB 27.666667 6 2018-04-03 AAB BBB 27.666667 7 2018-04-02 AAC ABC 117.666667 8 2018-04-03 AAC ABC 117.666667 9 2018-04-04 AAC ABC 117.666667 10 2018-04-05 AAC ABC 117.666667 Another similar solution: zipped = zip(data1['Team 1'], data1['Team 
2'], data1['Daily'], data1['Event Start'], data1['Event End']) L = [(x, t1, t2, d) for t1, t2, d, s, e in zipped for x in pd.date_range(s, e)] print (L) df = pd.DataFrame(L, columns=['Day', 'Team 1','Team 2','Daily']) print (df) Day Team 1 Team 2 Daily 0 2018-03-28 AAB BBB 27.666667 1 2018-03-29 AAB BBB 27.666667 2 2018-03-30 AAB BBB 27.666667 3 2018-03-31 AAB BBB 27.666667 4 2018-04-01 AAB BBB 27.666667 5 2018-04-02 AAB BBB 27.666667 6 2018-04-03 AAB BBB 27.666667 7 2018-04-02 AAC ABC 117.666667 8 2018-04-03 AAC ABC 117.666667 9 2018-04-04 AAC ABC 117.666667 10 2018-04-05 AAC ABC 117.666667
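The heart of both snippets above is the same expansion: one output row per day of each row's date range. Stripped of pandas, that step can be sketched in plain Python. Note this sketch simply splits the total over the inclusive day count, whereas the original post's Daily formula divides by (end - start).days and then adds 1; keep whichever convention you actually need.

```python
from datetime import date, timedelta

def expand_daily(rows):
    """One output row per day in each [start, end] range (inclusive)."""
    out = []
    for start, end, team1, team2, total in rows:
        n_days = (end - start).days + 1          # inclusive day count
        for k in range(n_days):
            out.append((start + timedelta(days=k), team1, team2, total / n_days))
    return out

rows = [
    (date(2018, 3, 28), date(2018, 4, 3), "AAB", "BBB", 160),
    (date(2018, 4, 2), date(2018, 4, 5), "AAC", "ABC", 350),
]
expanded = expand_daily(rows)
print(len(expanded))  # 11 rows: 7 for the first range, 4 for the second
```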
Q: C - Does freeing an array of pointers also free what they're pointing to? Say I have an array of pointers to structs that contain a string each and so for something like this: printf("%s\n", array[0]); The output is: Hello. If I perform a free(array) will this free what array[0] is pointing to? ("Hello."). I've spent hours attempting to manually free each element and all I get is crashes. I'm hoping this is a shortcut :/ A: This all depends on how the array was allocated. I'll give examples: Example 1: char array[10]; free(array); // nope! Example 2: char *array; array= malloc(10); // request heap for memory free(array); // return to heap when no longer needed Example 3: char **array; array= malloc(10*sizeof(char *)); for (int i=0; i<10; i++) { array[i]= malloc(10); } free(array); // nope. You should do: for (int i=0; i<10; i++) { free(array[i]); } free(array); Ad. Example 1: array is allocated on the stack ("automatic variable") and cannot be released by free. Its stack space will be released when the function returns. Ad. Example 2: you request storage from the heap using malloc. When no longer needed, return it to the heap using free. Ad. Example 3: you declare an array of pointers to characters. You first allocate storage for the array, then you allocate storage for each array element to place strings in. When no longer needed, you must first release the strings (with free) and then release the array itself (with free). A: If I perform a free(array) will this free what array[0] is pointing to? ("Hello."). No they don't get freed automatically, but depending on how you allocated each of them, there might be no need to free them actually. You would only need to free them if they point to memory which was returned by malloc and similar allocation functions. 
Say you have an array of pointers to strings: char * array[2]; array[0] = "Some text"; // You would not need to free this array[1] = malloc(LENGTH); // This one you would have to free Note that in this case you don't need to free the array itself, only the element with index 1.
Q: Prove $ (∀x_1 (∃x_2 (p(x_1, x_2) ⇒ (∀x_2 p(x_1, x_2)))))$ Can you help show that this is a theorem? $ (∀x_1 (∃x_2 (p(x_1, x_2) ⇒ (∀x_2 p(x_1, x_2)))));$ I was trying to use the deduction theorem but I hit a wall. Can you help me out using derivations and Hilbert calculus? A: Hint In this post you can find a Hilbert-style proof of : $⊢(∀xβ → α) ↔ ∃x(β → α)$, provided that $x$ is not free in $\alpha$. We have to consider : $∀x_2p(x_1,x_2) \to ∀x_2p(x_1,x_2)$; it is an instance of the propositional tautology : $\vdash A \to A$, and thus is a theorem. Now we apply the equivalence above to it, due to the fact that $x_2$ is not free in $∀x_2p(x_1,x_2)$, to get : $\vdash ∃x_2 \ (p(x_1,x_2) \to ∀x_2p(x_1,x_2))$. The last step is obtained with Generalization: $\vdash ∀x_1 \ ∃x_2 \ (p(x_1,x_2) \to ∀x_2p(x_1,x_2))$.
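This formula is a close relative of the "drinker paradox", so it is classically valid. As a semantic sanity check (not a Hilbert-style derivation), one can brute-force every interpretation of $p$ over a small finite domain in Python:

```python
from itertools import product

def formula_holds(domain, p):
    """Check  forall x1 exists x2 (p(x1,x2) -> forall y p(x1,y))
    in a finite model, where `p` maps domain pairs to True/False."""
    return all(
        any((not p[(x1, x2)]) or all(p[(x1, y)] for y in domain)
            for x2 in domain)
        for x1 in domain
    )

domain = [0, 1, 2]
pairs = [(a, b) for a in domain for b in domain]
# exhaustively try all 2^9 interpretations of p over a 3-element domain
valid = all(formula_holds(domain, dict(zip(pairs, values)))
            for values in product([False, True], repeat=len(pairs)))
print(valid)  # True: no finite counterexample, as the proof predicts
```

The informal argument mirrors the code: for each $x_1$, either $∀x_2\,p(x_1,x_2)$ holds (then any witness works), or some $x_2$ falsifies the antecedent.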
Q: Resources$NotFoundException: resource ID not valid. Why? I am trying to add a float to my dimens.xml file. I was reading the following SO answer. When I tried the solution, I got the exception described in the comments. I am trying to figure out why that exception is thrown. For completeness here is the XML: <item name="zoom_level" format="float" type="dimen">15.0</item> Here is the code that blows up: final float zoom = this.getResources().getDimension(R.dimen.zoom_level); I jumped into the Android source, and here is the method definition for getDimension: public float getDimension(int id) throws NotFoundException { synchronized (mTmpValue) { TypedValue value = mTmpValue; getValue(id, value, true); if (value.type == TypedValue.TYPE_DIMENSION) { return TypedValue.complexToDimension(value.data, mMetrics); } throw new NotFoundException( "Resource ID #0x" + Integer.toHexString(id) + " type #0x" + Integer.toHexString(value.type) + " is not valid"); } } So for whatever reason value.type != TypedValue.TYPE_DIMENSION. I do not have my Android source completely set up so I cannot easily add a Log.w("YARIAN", "value type is " + value.type)' statement in there. I then jumped into getValue and the chain of calls seems to be: Resources.getValue -> AssetManager.getResourceValue -> AssetManager.loadResourceValue loadResourceValue is a native method and here is where my digging falls apart. Anybody know what the best way to understand what's going is? I also noticed that Resources has a TypedValue.TYPE_FLOAT and TypedValue.TYPE_DIMENSION. But in XML, I cannot write type="float". The work around described in the comments is to use type=string and then use Float.parse to get the float. Is this necessary? Why or why not? A: I know it's a late answer but you should use TypedValue#getFloat() instead of parsing the String to a float like you suggested. 
XML: <item name="float_resource" format="float" type="raw">5.0</item> Java: TypedValue out = new TypedValue(); context.getResources().getValue(R.raw.float_resource, out, true); float floatResource = out.getFloat(); You can put fraction, raw or string as the type if you prefer; this only corresponds to the resource class in R. A: I just ran into this problem too, and though the error message isn't too helpful, I realized my problem was that I was putting just a float value in my resource file and didn't specify a measurement. Switching 15.0 to 15.0dp for instance would avoid the problem and allow you to still use a regular dimension resource.
Q: Greatest common divisor program not working as expected I have to write a program in C++ to accept 2 integers and find their G.C.D (Greatest Common Divisor) using a function with a return statement. Here is what I've written: int gcd(int x, int y) #include<iostream.h> #include<conio.h> void main() { clrscr(); int q, x, y, ans; cout<<"Enter 2 nos."<<endl; cin>>x>>y; q = gcd(x,y) cout<<"The GCD is: "<<q<<endl; getch(); } int gcd(int x, int y) { int ans; int i; for(i = 0; i<=x && i<=y; i++) { if(x%i==0 && y%i==0) ans = i; } return ans; } On compiling my code, I'm getting a declaration syntax error. Could someone please point out in which line my error is and how I should fix it? A: int gcd(int x, int y) Missing a ; q = gcd(x,y) Missing a ; #include<iostream.h> Maybe you meant #include <iostream> if(x%i==0 && y%i==0) Division by zero in the modulo, in the first iteration when i = 0. main must return int. Additional considerations: getch() and clrscr() are deprecated functions and conio.h is Windows specific, you should consider not using it.
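Beyond the syntax fixes, note that even the corrected trial-division loop must start at i = 1 to avoid the modulo by zero. A more robust alternative is Euclid's algorithm, sketched here in Python for brevity; it translates line for line into C++:

```python
def gcd(x, y):
    """Euclid's algorithm: repeatedly replace (x, y) with (y, x mod y)."""
    while y != 0:
        x, y = y, x % y
    return x

print(gcd(12, 18))   # 6
print(gcd(7, 13))    # 1
print(gcd(100, 0))   # 100
```

It needs no upper bound, no `ans` bookkeeping, and runs in logarithmically many steps rather than min(x, y) iterations.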
Q: SQL: Get multiple line entries linked to one item? I have a table: ID | ITEMID | STATUS | TYPE 1 | 123 | 5 | 1 2 | 123 | 4 | 2 3 | 123 | 5 | 3 4 | 125 | 3 | 1 5 | 125 | 5 | 3 Any item can have 0 to many entries in this table. I need a query that will tell me if an ITEM has all its entries in either a state of 5 or 4. For example, in the above example, I would like to end up with the result: ITEMID | REQUIREMENTS_MET 123 | TRUE --> true because all statuses are either 5 or 4 125 | FALSE --> false because it has a status of 3 and a status of 5. If the 3 was a 4 or 5, then this would be true What would be even better is something like this: ITEMID | MET_REQUIREMENTS | NOT_MET_REQUIREMENTS 123 | 3 | 0 125 | 1 | 1 Any idea how to write a query for that? A: Fast, short, simple: SELECT itemid ,count(status = 4 OR status = 5 OR NULL) AS met_requirements ,count(status < 4 OR status > 5 OR NULL) AS not_met_requirements FROM tbl GROUP BY itemid ORDER BY itemid; Assuming all columns to be integer NOT NULL. Builds on basic boolean logic: TRUE OR NULL yields TRUE FALSE OR NULL yields NULL And NULL is not counted by count(). ->SQLfiddle demo.
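The trick in the query is that a false OR NULL expression yields NULL, and count() skips NULLs, so each count only tallies rows where its condition holds. Mirroring the aggregation in plain Python makes the expected output easy to check:

```python
from collections import defaultdict

rows = [  # (id, itemid, status)
    (1, 123, 5), (2, 123, 4), (3, 123, 5),
    (4, 125, 3), (5, 125, 5),
]

met = defaultdict(int)      # status is 4 or 5
not_met = defaultdict(int)  # any other status
for _id, itemid, status in rows:
    if status in (4, 5):
        met[itemid] += 1
    else:
        not_met[itemid] += 1

for itemid in sorted(set(met) | set(not_met)):
    print(itemid, met[itemid], not_met[itemid])
# 123 3 0
# 125 1 1
```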
Q: Conky either has a title bar, disappears or overwrites itself I have been trying to set up Conky. Gothic works fine, but seamod has been giving me problems. I have tried setting the window_type as recommended elsewhere, but: normal has a title bar and appears in the side panel like a normal window. The suggested solution to this is to set the window_type to desktop Desktop disappears when I click on the desktop (left or right). The suggested solution to this is to set the window_type to normal + Override has the problem depicted here: Conky seamod GUI writing over itself. Still not sure what triggers it, but I think it may be having two windows side-by-side (by dragging into the corner) or using the terminal Panel and dock appear to work as they should, but I do not want a panel or dock. How do I display it on the desktop without it disappearing or having a title bar or writing over itself? Conky seamod config file: background yes update_interval 1 cpu_avg_samples 1 net_avg_samples 2 temperature_unit celsius double_buffer yes no_buffers yes text_buffer_size 2048 gap_x 0 gap_y 0 minimum_size 300 900 maximum_width 350 own_window yes own_window_type desktop own_window_transparent yes own_window_argb_visual yes own_window_argb_visual yes own_window_colour 000000 own_window_argb_value 0 own_window_hints undecorate,sticky,skip_taskbar,skip_pager,below border_inner_margin 0 border_outer_margin 0 alignment top_right draw_shades no draw_outline no draw_borders no draw_graph_borders no override_utf8_locale yes use_xft yes xftfont caviar dreams:size=10 xftalpha 0.5 uppercase no Notes: I'm using Unity and running from a persistent live USB. A: Fixed it by setting window_type to normal and following instructions here: Conky Widgets are Opening in their Own Windows
Q: Can't remove old PHP_CodeSniffer install My employer has asked me to move from a PEAR phpcs install to a global Composer phpcs install. In trying to make that change, I've discovered an existing phpcs of an old version. macpro@~ $: which phpcs /usr/local/bin/phpcs macpro@~ $: /usr/local/bin/phpcs --version PHP_CodeSniffer version 2.3.4 (stable) by Squiz (http://www.squiz.net) This version doesn't match with PEAR or Composer versions I've installed. macpro@~/bin/phpcs $: pear list Installed packages, channel pear.php.net: ========================================= Package Version State Archive_Tar 1.4.0 stable Console_Getopt 1.4.1 stable PEAR 1.10.1 stable PHP_CodeSniffer 2.6.0 stable Structures_Graph 1.1.1 stable XML_Util 1.3.0 stable macpro@~/.composer $: composer info squizlabs/php_codesniffer 2.6.0 PHP_CodeSniffer tokenizes PHP, JavaScript and CSS files and detects violations of a defined set of coding standards. I cannot update or uninstall this old version using PEAR. I remove and reinstall PHP_CodeSniffer using PEAR and nothing changes. I thought maybe I had installed the old phpcs version with MacPorts. Installed MacPorts to check. macpro@~ $: port installed No ports are installed. So my question is, how can I remove this old phpcs installation (without breaking anything, and in such a way that I can, ultimately uninstall every instance of phpcs except a globally available composer package)? A: Is that phpcs executable a script, or a phar file? If it's a phar file then it definitely wasn't placed there by a PEAR install. $ file /usr/local/bin/phpcs You should either get: /usr/local/bin/phpcs: PHP script, ASCII text executable or /usr/local/bin/phpcs: data If you get "data", then it's most likely a phar in which case you can simply delete it. If it was placed there via PEAR, or if you know there's a PEAR install of phpcs laying about, you can do a check of sorts to see where the PEAR installed script is by doing: $ pear list PHP_CodeSniffer | grep "phpcs$"
Q: Reputation requirements for posting on Meta Stack Exchange and per-site metas To post on a per-site meta you need to have at least 5 reputation on the parent site. To post on Meta.SE you need to register, but you don't need to have any reputation on an SE site. Now, Meta.SO is both the meta for Stackoverflow and the meta for the whole network. But wouldn't the reasons to require a minimum reputation on per-site metas be equally valid on Meta.SE? I'm not entirely sure what the main reasons behind the minimum reputation are, I can think of a few possible explanations: It prevents users from accidentally posting on meta when they should post on main It prevents noise from users that aren't active on the main site per-site metas have far lower traffic, so community moderation is not as effective and a higher barrier of entry might be necessary Requiring a minimum reputation (especially if also enabled on Meta.SE) also might have some other side-effects: 1 rep users have no valid way of challenging community or diamond moderation actions except mailing the SE team Would it make sense to enable the minimum reputation also on Meta.SE? It would reduce the number of completely off-topic posts on meta.SE, but might make it harder for 1 rep users to challenge moderation actions. Or is the minimum reputation barrier not needed on per-site metas and should it be disabled everywhere? Maybe I'm also missing a reason why the minimum reputation requirement makes sense on per-site metas and shouldn't be enabled on Meta.SE. Or maybe they should be disabled on the per-site metas as well. A: I think there should not be any minimum rep requirements on Meta.SE because users can ask questions such as if a question is off-topic, why was my question closed, etc. If the reputation to participate in Meta.SE would be higher, the users could not ask those questions, which may result in more bad quality questions on main sites. 
A: I think that perhaps MSO should be split, in one part into what it actually says it is - meta.stackoverflow.com, and then also have a meta.stackexchange.com for the network as a whole. meta.stackoverflow.com could then mirror rep from StackOverflow as with other sites, whereas perhaps rep on meta.stackexchange.com could mirror the highest rep you have on any one site, or network wide. Leaving it as it is but enforcing a rep requirement would mean needing some way to earn rep before being able to participate, even though to earn rep you need to participate.
Q: Solving equation: $\cos x-\sin x (2\cos x-4)=0$ Solve the following trigonometric equation $$\cos x-\sin x (2\cos x-4)=0$$ Thank you very much. A: Using the Weierstrass substitution, the equation reduces to $$t^4-12t^3-4t-1=0$$ where $t=\tan\frac x2$. Clearly, all the solutions are finite. But the solution is not too simple
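Spelling out the substitution: with $t=\tan\frac x2$ one has $\sin x=\frac{2t}{1+t^2}$ and $\cos x=\frac{1-t^2}{1+t^2}$, so

```latex
\begin{aligned}
\cos x-\sin x\,(2\cos x-4)
&=\frac{1-t^2}{1+t^2}-\frac{2t}{1+t^2}\left(\frac{2(1-t^2)}{1+t^2}-4\right)=0.\\
\intertext{Multiplying through by $(1+t^2)^2$:}
(1-t^2)(1+t^2)-2t\bigl[2(1-t^2)-4(1+t^2)\bigr]
&=1-t^4-2t\,(-2-6t^2)\\
&=1-t^4+4t+12t^3=0,
\end{aligned}
```

i.e. $t^4-12t^3-4t-1=0$, as stated.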
Q: How to run a java program in command line? Here is the thing: I am trying to run the example program in the joda-time project. The start of the Examples.java file looks like this: package org.joda.example.time; import java.util.Locale; import org.joda.time.DateTime; import org.joda.time.Instant; /** * Example code demonstrating how to use Joda-Time. * * @author Stephen Colebourne */ public class Examples { public static void main(String[] args) throws Exception { try { new Examples().run(); } catch (Throwable ex) { ex.printStackTrace(); } } And all the classes for compiling this Example.java are in a joda-time-2.3.jar. I can successfully compile this program by using javac -cp somewhere/joda-time-2.3.jar Example.java And it generates an Example.class, but I just cannot execute that. So far I have tried: java Examples java -cp somewhere/joda-time-2.3.jar Examples java -cp somewhere/joda-time-2.3.jar org.joda.example.time.Examples But they all generate this kind of error: Error: Could not find or load main class org.joda.example.time.Example Error: Could not find or load main class Examples And I've tried both in the org/joda/example/time folder and the parent folder of org Can anyone give an instruction on how to execute that? Really appreciate it! A: Error: Could not find or load main class org.joda.example.time.Example public class Examples { Name of your class is Examples not Example EDIT Sorry for the late reply... To execute a specific Java program you need to bring control to the root directory, so if your class is in abc/newdir/Examples.java you need to use the cd command (in Windows) to lead control to the root directory and then compile, or you can definitely go for the suggestion of kogut. C:/abc/newdir>java -cp somewhere/joda-time-2.3.jar Examples A: Modify your classpath parameter, so it should include the directory where Examples.class was generated.
In case of out/org/joda/example/time/Examples.class you need to use java -cp somewhere/joda-time-2.3.jar:out org.joda.example.time.Examples
Q: Ajax call rails returns Missing template I am trying to make a query to a db to get the available dates for that month every time a user changes the month in a jquery ui datepicker. When they select the day, it will make another query to the DB for the hours available for that day. Should I do that? Should I change it to yearly? Would that be too many records? (I feel it would if I were to send which hours were available for which days for a year). How do I do this correctly? When I make an Ajax call, I get "Template not found." Relevant code: tasks_controller.rb def new if signed_in? @task = Task.new get_dates else redirect_to root_path end end def get_dates(day=nil) if !day day = DateTime.now end day = day.utc.midnight minus_one_month = (day - 1.month).to_time.to_i plus_one_month = (day + 1.month).to_time.to_i full_days_results = AvailableDates.all.select(['unix_day, count(*) as hour_count']).group('unix_day').having('hour_count > 23').where(unix_day: (minus_one_month..plus_one_month)) full_days_arr = full_days_results.flatten full_unix_array = [] full_days_arr.each do |full_day| tmp_date = Time.at(full_day.unix_day).strftime('%m-%d-%Y') full_unix_array.append(tmp_date) end gon.full_days = full_unix_array return end tasks.js.coffee full_days = gon.full_days if gon $ -> $('.datepicker').datepicker( dateFormat: 'mm-dd-yy' beforeShowDay: available onChangeMonthYear: (year, month, inst) -> target = "#{month}-01-#{year}" jqxhr = $.get("/getdates", day: target ) console.log(jqxhr) $(this).val target return tasks\new.html.erb <%= f.label :end_time %> <%= f.text_field :end_time, :class => 'datepicker' %> routes.rb ... match '/getdates',to: 'tasks#get_dates', via: 'get' Error Template is missing Missing template tasks/get_dates, application/get_dates with {:locale=>[:en], :formats=>[:html], :variants=>[], :handlers=>[:erb, :builder, :raw, :ruby, :jbuilder, :coffee]}. 
Searched in: * "c:/dev/rails_projects/onager-web-app/app/views" A: Well, a couple of things you need to do: 1 - Make get_dates private (and choose a more ruby-like name) private def hours_for(day=nil) ... end Rails is thinking get_dates is a controller action when it's really not. That's why it cannot find the respective view for the get_dates action (when it's not an action, it's a helper method, maybe you should consider putting it in a model) 2 - The hours_for method should return something. Right now it doesn't. I don't know what this line does: gon.full_days = full_unix_array I mean, what is gon? Just return the array directly. You shouldn't be setting stuff in get methods. Also, take a look at this to learn how to render json in rails pages. 3 - Rename your tasks.rb to tasks_controller.rb in your controllers folder in your rails project. 4 - Fix the routes.rb file to: get '/getdates/:day', to: 'tasks#load_dates', as: 'getdates' Also, hours_for must be called at load_dates. Your 'new' action in tasks should render a template and every time the user updates the date, your coffeescript should call the load_dates ajax method. Now, what you need to do is learn how to update your new.html.erb page.
Q: How to pass npm package lodash as a DI in angular js for angular-google-maps using browserify? I am using npm as a package manager for my angular application and Browserify to include the libraries. I am using the angular-google-maps package : http://angular-ui.github.io/angular-google-maps I am getting : ReferenceError: _ is not defined at o (vendor.js:8) at Object.load (vendor.js:8) at $get (vendor.js:8) at Object.r [as invoke] (vendor.js:1) at vendor.js:1 at i (vendor.js:1) at Object.r [as invoke] (vendor.js:1) at p.instance (vendor.js:2) at m (vendor.js:2) at a (vendor.js:1) as error. Now this is how I include the js files : require('lodash'); require('angular-simple-logger'); require('angular-google-maps'); And this is how I inject them in the angular.module : var requires = [ /*What to inject here?*/ 'nemLogging', 'uiGmapgoogle-maps', ]; angular.module('mapApp',requires); What to include in the controller : angular.module('mapApp').controller('mapController', function($scope, uiGmapGoogleMapApi, /*What to pass here?*/){ }); Please guide as to what to inject in the angular app and what parameter to pass to my angular app to get rid of this error? A: The angular-google-maps module doesn't inject lodash using angular DI, but assumes that lodash is available globally. Make lodash global by adding it to the window object: window._ = require('lodash'); // this will add it to the global namespace require('angular-simple-logger'); require('angular-google-maps');
Q: taking input from user I tried to take input from the user; the input type is not determined (it can be char or int). I want to take the input and store it in a pointer array, and while doing that, for each pointer I want to allocate space on the heap using malloc, but the code below does not work. Why? int main(void) { char *tutar[100][20],temp; int i; int n; i=0; while(temp!='x') { scanf("%c",&temp); tutar[i]=malloc(sizeof(int)); tutar[i]=temp; ++i; } n =i; for(i=0;i<=n;++i) { printf(" %c ",*tutar[i]); } printf("\n\n"); /*for(i=0;i<=n;++i) { printf("%d",atoi(*tutar[i])); } */ } Note: this site has a problem when I rewrite (edit) the previous post; is it a general problem or not? A: There are several problems in your code, including: you declare tutar as a two-dimensional array of pointers to character, then use it as a one-dimensional array; tutar[i]=temp assigns the value of temp (a char) to tutar[i] (a pointer to char), effectively overwriting the pointer to the newly reserved memory block; you don't initialize temp, so it will have a garbage value - occasionally it may have the value x, in which case your loop will not execute. Here is an improved version (it is not tested, and I don't claim it to be perfect): int main(void) { char *tutar[100], temp = 0; int i = 0; int n; while(temp!='x') { scanf("%c",&temp); tutar[i]=malloc(sizeof(char)); *tutar[i]=temp; ++i; } n =i; for(i=0;i<=n;++i) { printf(" %c ",*tutar[i]); } printf("\n\n"); } Note that unless you really need to allocate memory dynamically, you would be better off using a simple array of chars: char tutar[100], ... ... while(temp!='x') { scanf("%c",&temp); tutar[i++]=temp; } ... For the sake of brevity, I incremented i within the assignment statement.
Q: Is geth --rpc --support-dao-fork still valid I have the blockchain synced, and then tried to run geth --rpc --support-dao-fork. I got a message saying that --support-dao-fork is an unsupported command. So I have run geth --rpc --syncmode "fast" --verbosity 3 --cache=1024 and then, running ethminer -G from cpp-ethereum, it downloads the DAG and starts to mine. geth is committing the work to the blockchain. Has the --support option been deprecated, or am I mining ETC instead of post-fork ETH by default? A: At the hard fork, PR 2813 was introduced to support the network split, and those two options were added. This PR implements setting the --support-dao-fork and --oppose-dao-fork flags. As of now they only modify a single database entry specifying whether the current default behavior should change. Nowadays both options have been removed and the official client doesn't support ETC.
Q: How to tell when we are disconnected from GameCenter GKMatch session? I'm wondering how I get the disconnect message for the local player when the game session is in progress and we're unable to communicate our data to other players. As there is nothing in the documentation that says "this method will inform you whenever your connection fails", I'm at a bit of a loss. I was trying to use this chunk of code in hopes that it would work, but it's futile. The "We're disconnected." message is never triggered. - (void)match:(GKMatch *)theMatch player:(NSString *)playerID didChangeState:(GKPlayerConnectionState)state { if (self.match != theMatch) return; switch (state) { case GKPlayerStateDisconnected: //disconnected NSLog(@"player status changed: disconnected"); matchStarted = NO; GKLocalPlayer *player = [GKLocalPlayer localPlayer]; if ([playerID isEqualToString:player.playerID]) { // We have been disconnected NSLog(@"We're disconnected."); } if ([delegate respondsToSelector:@selector(matchEnded)]) { [delegate matchEnded]; } break; } } The only other line that I found might tell us that we're unable to communicate is when we actually send data like this: - (void)sendRandomMatchData:(NSData *)data { GKMatch *match = [GCHelper sharedInstance].match; BOOL success = [match sendDataToAllPlayers:data withDataMode:GKMatchSendDataReliable error:nil]; if (!success) { [self matchEnded]; } } But I assume that "success" will also be false if the opponent has disconnected and we're unable to send messages to them. I have pretty strict game logic: if someone has been disconnected, I need to inform them that they are unable to continue playing the match and that they have lost. Any help is highly appreciated.
Another idea is to create an NSTimer and set a time limit for making moves/turns. If a player doesn't move within that time, assume the player is disconnected. Also you could check your Internet connection state to determine whether you have a connection, because maybe you just lost it and that's the reason you can't send/receive any data. Also you could check every player periodically by sending them a short piece of data over Game Center just to make them answer you. This way you could ensure all players are "alive" or detect a "zombie". Remember that if a player moves the game to the background using the Home button you won't detect it, because code in your game won't execute. And this situation doesn't mean that player is a "zombie". The player could be busy with a call or something else like another app, or could temporarily lose the Internet connection. This means that he/she could return to the game soon...
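The timeout idea is not specific to Game Center; here is a minimal, hypothetical Python sketch (all names are illustrative) that records when each player was last heard from and flags anyone silent longer than a limit:

```python
# Minimal liveness tracker: record the last time each player was heard
# from, and treat anyone silent longer than `timeout` as disconnected.
class LivenessTracker:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heard_from(self, player, now):
        # Call this whenever any data arrives from `player`.
        self.last_seen[player] = now

    def is_alive(self, player, now):
        seen = self.last_seen.get(player)
        return seen is not None and (now - seen) <= self.timeout

    def zombies(self, now):
        # Players we have heard from before, but not recently.
        return [p for p, t in self.last_seen.items() if now - t > self.timeout]
```

A real implementation would drive heard_from from the match's receive callback and run zombies() on a repeating timer; as noted above, it still cannot distinguish a backgrounded app from a genuine disconnect.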
{ "pile_set_name": "StackExchange" }
Q: Using artifactory in gradle I want to use an artifactory in gradle. To be more specific, I want to use 4 customized jars that are not in the maven repository. So I'd like them to be on the Artifactory server and downloaded when needed. Do I need to install something other than "Gradle Eclipse integration"? Can someone give me an example on how to do that in the build.gradle? A: First, you need to deploy those jars to Artifactory. Probably, using the UI will be the easiest way to go. Next, you need to declare Artifactory as your repository. You can do it by using the standard repositories clause (as @lance-java suggested), or by using the Artifactory Gradle plugin. Probably the easiest will be generating the build script snippet from Artifactory itself. Last will be adding the dependencies to your script. You can navigate to the jars you uploaded in the tree browser, and copy the snippets of dependency declarations from there. Both steps are documented in the User Guide as well. I am with JFrog, the company behind Bintray and Artifactory, see my profile for details and links.
{ "pile_set_name": "StackExchange" }
Q: Reading a text file in Scala, one line after the other, not iterated I'm trying to open a text file with Scala, read the first line, then the second, then the third. All samples I've found online want to read/buffer the entire file into a list or array and then access the individual lines from that construct. This code doesn't work as described above (of course). It reads the entire file into "first" since "file" is a BufferedStream and getLines fetches all lines, as it should. import scala.io.Source; object ScalaDemo { def main(args: Array[String]) = { val file = io.Source.fromFile("TextFile.txt"); // ----------------------------------------------- // read text from file, line by line, no iterator // ----------------------------------------------- val first = file.getLines().mkString; val second = file.getLines().mkString; val third = file.getLines().mkString; // Close the file file.close; println(first+"|"+second+"|"+third); } } What idiom/function can I use to read one line at a time... without using a list/array as an intermediate step. A: As stated in the comments, .mkString will fetch all the elements that the iterator would return and concatenate them into a single string. The option of @Régis Jean-Gilles is probably the best if you already know that you always have at least three lines in the file. Another option is to call getLines() followed by grouped(3) to get an iterator that groups elements into blocks of 3. A call to next() will give you a list with at most three elements (it can have fewer if the iterator has only two elements left to return, for example). val ite = io.Source.fromFile("textfile.txt").getLines().grouped(3) //list with the first three elements, if any - //otherwise an empty list if the file is empty val list = if(ite.hasNext) ite.next() else Nil At least it does ensure that you won't have a NoSuchElementException at runtime if there are fewer than 3 lines in the file.
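The grouped-iterator idea is not Scala-specific; for comparison, a Python sketch that takes at most the first three lines from a lazily-read iterator:

```python
from itertools import islice

def first_n(lines, n=3):
    # `lines` can be any iterator (e.g. a file object read lazily);
    # islice consumes at most n elements, like grouped(3).next() in Scala,
    # so the rest of the file is never read into memory.
    return list(islice(lines, n))
```

With a file you would call first_n(open("TextFile.txt")), and an empty or short file simply yields a shorter list instead of raising.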
{ "pile_set_name": "StackExchange" }
Q: Using and try catch in C# database related classes? Does using keyword catch or handle exceptions in C# when connecting to a database? Or should I use try catch block in all of my database methods inside the using? Does using try catch create unnecessary code? using (var db = new ApplicationContext()) { try { /* Query something */ } catch(Exception e) { logger.Debug(e); } } A: The using block does not "handle" exceptions for you, it only ensures that the Dispose() method gets called on the IDisposable object (in this case, your db instance), even in the event of an exception. So yes, you need to add try-catch blocks where needed. That said, in general, you only want to catch exceptions where you can actually do something meaningful with them. If you only need to log the exceptions, consider doing your exception handling in a single location higher in the call stack so that you don't have to litter your code with try-catch blocks all over the place. You can read about the using Statement here to see what it actually does, and how it gets translated. EDIT: If, for whatever reason, you choose to keep your try-catch block where it is, at the very least, make sure to rethrow the exception instead of swallowing it, which would be like sweeping the mess under the rug and pretending everything is fine. Also, make sure to rethrow it without losing your precious stack trace. Like this: using (var db = new ApplicationContext()) { try { /* Query something */ } catch(Exception e) { logger.Debug(e); throw; // rethrows the exception without losing the stack trace. } } EDIT 2: Very nice blog entry by Eric Lippert about exception handling. A: Does using keyword catch or handle exceptions in C# when connecting to a database? A using is logically equivalent to a try-finally, so yes, it handles exceptions, but it does not stop the exception. A finally propagates the exception. should I use try catch block in all of my database methods inside the using? No. 
The try-catch should go outside the using. That way it will protect the resource creation. Does using try catch create unnecessary code? I have no idea what this question means. Some questions you did not ask: Should I catch all exceptions for logging and then fail to re-throw them? No. Only catch and eat exceptions that you know how to handle. If you want to log exceptions then re-throw them when you're done logging; other code might want to handle them. What is the correct way to write this code? Separate your concerns. You have three concerns: Dispose the resource Log all exceptions Handle expected exogenous exceptions Each of those should be handled by a separate statement: try // handle exogenous exceptions { try // log all exceptions { using(var foo = new Foo()) // dispose the resource { foo.Bar(); } } catch(Exception x) { // All exceptions are logged and re-thrown. Log(x); throw; } } catch(FooException x) { // FooException is caught and handled } If your goal is to only log unhandled exceptions then invert the nesting of the two handlers, or use another mechanism such as the appdomain's unhandled exception event handler. A: Your using will be converted to the code below by the C# compiler; see 8.13 The using statement: { var db = new ApplicationContext(); try { try { /* Query something */ } catch(Exception e) { logger.Debug(e); } } finally { // Check for a null resource. if (db != null) // Call the object's Dispose method. ((IDisposable)db).Dispose(); } } So, in my opinion, for your situation it is better without the using statement; it will be a little bit clearer and have fewer steps: var db = new ApplicationContext(); try { /* Query something */ } catch(Exception e) { logger.Debug(e); } finally { if (db != null) { ((IDisposable)db).Dispose(); } } Because using is just syntactic sugar. P.S.: The performance cost of a try statement is very small and you can leave your code as it is.
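The layering recommended above translates directly to Python, where with plays the role of using. A minimal sketch (Resource is an illustrative stand-in for ApplicationContext) with disposal innermost, logging next, and expected-exception handling outermost:

```python
events = []

class Resource:
    # Stand-in for a disposable resource: always cleaned up, even on error.
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        events.append("disposed")
        return False  # do not swallow the exception

def run(query):
    try:                      # outermost: expected exceptions we can handle
        try:                  # middle: log everything, then re-raise
            with Resource():  # innermost: guaranteed cleanup
                return query()
        except Exception as e:
            events.append("logged %s" % type(e).__name__)
            raise
    except ValueError:
        events.append("handled")
        return None
```

On failure the cleanup runs first, then the logger sees the exception and re-raises it, and only the expected ValueError is eaten at the outer level; anything else still propagates.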
{ "pile_set_name": "StackExchange" }
Q: Proof that the limit of a particular sequence of Lebesgue integrals equals zero. Prove that if $f \in L^{1}(A)$ and {${A_{n}}$} is a sequence of measurable subsets of A with $\lim_{n \to \infty}m(A_{n})=0$ then $\lim_{n \to \infty} \int_{A_n}f=0$ I know there exists a very similar post for tackling this particular problem but I cannot find the complete proof of it. I would be grateful if someone could help. A: This is the absolute continuity of the Lebesgue integral. For any $M>0$, $\lvert\int_{A_n} f\,dm\rvert\leq\int_{A_n}\lvert f\rvert\,dm\leq\int_{A}\lvert f\rvert\mathbf{1}_{\{\lvert f\rvert>M\}}\,dm+M\,m(A_n)$. Since $\lvert f\rvert\mathbf{1}_{\{\lvert f\rvert>M\}}\leq\lvert f\rvert\in L^1(A)$ and $\lvert f\rvert\mathbf{1}_{\{\lvert f\rvert>M\}}\to 0$ a.e. as $M\to\infty$ (because $\lvert f\rvert<\infty$ a.e.), dominated convergence gives $\int_{A}\lvert f\rvert\mathbf{1}_{\{\lvert f\rvert>M\}}\,dm\to 0$. So given $\varepsilon>0$, fix $M$ with that term below $\varepsilon/2$; then for all large $n$ we have $M\,m(A_n)<\varepsilon/2$, hence $\lvert\int_{A_n} f\,dm\rvert<\varepsilon$.
{ "pile_set_name": "StackExchange" }
Q: Why does Python slow to a crawl in this code? So I'm doing a Project Euler problem trying to incorporate the Sieve of Eratosthenes to find the largest prime factor of a number, however when I try to fill my initial hashtable it slows to a crawl and eats up gigs worth of RAM and takes over my CPU. Can anyone explain why? I realize the code itself is probably subpar allNums = {} maxNum=600851475143 maxFactor=0 #fill dictionary, slows to a crawl here for x in xrange(2,maxNum+1): allNums[x]=True #sieve of Eratosthenes for x in xrange(2,len(allNums)): y=x if allNums[x]: y **= 2 while y<=maxNum: if allNums[y]: allNums.pop(y) y+=x #largest prime factor for x in allNums: if maxNum%x==0 and x>maxFactor: maxFactor=x print x A: Well, you allocate a huge dictionary (maxNum entries, dozens of bytes each). Even though dictionary lookups are fast, building it is still going to take significant time (and, more importantly, memory). Such a huge structure cannot fit into your RAM, so your OS would have to emulate the extra memory by swapping (which would mean even longer access times just to read that data). By the way, this is how your problem can be done with plain trial division - dividing each prime factor out as it is found, and stopping once x*x > maxNum: maxNum=600851475143 maxFactor=1 x = 2 while x*x <= maxNum: while maxNum % x == 0: maxFactor = x maxNum //= x x += 1 if maxNum > 1: maxFactor = maxNum print maxFactor
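A note on memory: every composite number up to N has a prime factor no larger than √N, so a sieve only ever needs to cover √N (about 775,146 entries here, not 600 billion), which is why the original dictionary could never fit in RAM. A sketch combining that bounded sieve with dividing factors out:

```python
def largest_prime_factor(n):
    # Sieve primes only up to sqrt(n): that is all trial division needs.
    limit = int(n ** 0.5) + 1
    is_prime = [True] * (limit + 1)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    largest = 1
    for p in range(2, limit + 1):
        if is_prime[p]:
            while n % p == 0:   # divide each prime out completely
                largest = p
                n //= p
    # Whatever remains is either 1 or a single prime factor > sqrt(original n).
    return n if n > 1 else largest
```

Memory is now proportional to √N instead of N, and dividing each factor out keeps the answer correct even when the largest prime factor exceeds √N.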
{ "pile_set_name": "StackExchange" }
Q: cast (sysdate as date) doesn't give exact integer values Why does select (cast(sysdate as date) - to_date('01011970','DDMMYYYY')) * 86400 from dual; give values like 1582881272,000000000000000000000000000001 or 1582881301,999999999999999999999999999999 instead of exact integer values? I thought "cast(sysdate as date)" cuts the milliseconds, so * 86400 ensures integer values. Do I really need to round (!) this? A: Looking at the factors: 86400 = 24 * 60 * 60 = (2³*3)*(2²*3*5)*(2²*3*5) = 2⁷*3³*5² Unless your number of seconds since midnight is a multiple of 3³=27 (so you are just left with the factors of 2 and 5 in the divisor) you are going to get a recurring number that cannot be expressed exactly as a decimal, so regardless of the precision that Oracle stores the result of SYSDATE - DATE '1970-01-01' it is going to be inaccurate to some small degree. You will just need to round the number: SELECT ROUND((SYSDATE - DATE '1970-01-01')* 86400) FROM DUAL; However, if you are trying to create a unix timestamp then you should use: SELECT ROUND( (CAST(SYSTIMESTAMP AT TIME ZONE 'UTC' AS DATE) - DATE '1970-01-01') * 86400 ) FROM DUAL; As Unix timestamps are in the UTC time zone. Or, if you want to do it without rounding: SELECT EXTRACT( DAY FROM diff ) * 24 * 60 * 60 + EXTRACT( HOUR FROM diff ) * 60 * 60 + EXTRACT( MINUTE FROM diff ) * 60 + EXTRACT( SECOND FROM diff ) AS unix_timestamp FROM ( SELECT CAST(SYSTIMESTAMP AS TIMESTAMP(0) WITH TIME ZONE) - TIMESTAMP '1970-01-01 00:00:00 UTC' AS diff FROM DUAL ) (The cast is there to remove the fractional seconds that SYSTIMESTAMP normally has.)
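The divisibility argument can be checked numerically; a small Python sketch with exact fractions: a reduced fraction has a terminating decimal expansion exactly when its denominator has no prime factors other than 2 and 5:

```python
from fractions import Fraction

def terminates_in_decimal(seconds, day_seconds=86400):
    # Reduce seconds/86400 and strip factors of 2 and 5 from the denominator;
    # the decimal expansion terminates iff no factor of 3 is left over.
    d = Fraction(seconds, day_seconds).denominator
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1
```

This matches the observation above: only a seconds-since-midnight value that is a multiple of 3³ = 27 produces an exact decimal fraction of a day.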
{ "pile_set_name": "StackExchange" }
Q: HTML4: Insert IBAN by separate fields? I am not sure how to design input fields for IBAN numbers. Often the IBAN is separated into different fields: XX00 0000 0000 0000 0000 00 Sometimes the customers have their IBAN in one line: XX00000000000000000000 If they have the IBAN in one line, they can get tangled up both reading and writing it. What shall I do? Separate the fields and run the risk of confusing the customers who have the IBAN in one line on their card? Make one input field that allows spaces? Additionally, show an example of a valid IBAN? EDIT: We have customers who, given an IBAN like XX00 0000 0000 0000 0000 00, enter only the first XX00 and get angry. A: Permit users to enter the IBAN in a variety of formats and syntaxes, and make the application interpret it intelligently. Users probably won’t (and shouldn’t have to) know what format your system is expecting. http://quince.infragistics.com/Patterns/Forgiving%20Format.aspx
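The forgiving-format pattern is straightforward to implement: normalise whatever was typed (strip spaces, uppercase) and validate with the ISO 13616 mod-97 check, so both the grouped and the one-line form are accepted. A Python sketch (account numbers below are illustrative, not real):

```python
def normalize_iban(raw):
    # Accept IBANs typed with or without spaces, in any case.
    return "".join(raw.split()).upper()

def iban_checksum_ok(raw):
    # ISO 13616: move the first four characters to the end, map letters
    # A..Z to 10..35, and the resulting integer must equal 1 modulo 97.
    iban = normalize_iban(raw)
    if not iban.isalnum() or len(iban) < 5:
        return False
    rearranged = iban[4:] + iban[:4]
    digits = "".join(str(int(c, 36)) for c in rearranged)
    return int(digits) % 97 == 1

def with_check_digits(country, bban):
    # Compute the two check digits for a country code plus account part.
    digits = "".join(str(int(c, 36)) for c in bban + country + "00")
    check = 98 - int(digits) % 97
    return "%s%02d%s" % (country, check, bban)
```

Validating after normalisation means the separate-fields vs. one-line question disappears on the backend; the UI can accept either.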
{ "pile_set_name": "StackExchange" }
Q: List of all the templates in flask Is there a programmatic (built-in) way of getting all the templates along with their routes in a flask app? Something like: {'route_a': 'template_name.html', 'route_b': 'template_name.html', ... } I mean I can create a mapping like this by parsing the templates directory, but just wondering if there is any built-in functionality to construct this. I have already tried looking into the app.url_map items, but don't see any template-related mappings. A: No, this is not possible. Templates don't have routes. Routes render templates. There is no way to know what templates are rendered without actually evaluating each route. There is no guarantee that only one route renders a template, or that a route only renders one template.
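Since no built-in exists, the source-parsing workaround mentioned in the question can be sketched in Python; this is purely heuristic (it only catches literal render_template calls that follow each @app.route decorator in the source text):

```python
import re

ROUTE_RE = re.compile(r"@app\.route\(\s*['\"]([^'\"]+)['\"]")
TEMPLATE_RE = re.compile(r"render_template\(\s*['\"]([^'\"]+)['\"]")

def route_template_map(source):
    # Associate each @app.route decorator with the first literal
    # render_template(...) call that follows it in the source text.
    mapping = {}
    routes = [(m.start(), m.group(1)) for m in ROUTE_RE.finditer(source)]
    for i, (pos, route) in enumerate(routes):
        end = routes[i + 1][0] if i + 1 < len(routes) else len(source)
        t = TEMPLATE_RE.search(source, pos, end)
        if t:
            mapping[route] = t.group(1)
    return mapping
```

As the answer warns, a route may render several templates (or none, or a dynamically chosen one), so treat the result as an approximation, not ground truth.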
{ "pile_set_name": "StackExchange" }
Q: How do I use a void function using delegates? I have been attempting to use event handling for a game's input. When seeing others use similar methods, they are able to add a void function to the delegate variable without an error. Whenever I try to add the Move() function to OnAxisChange, I receive the following error: Cannot implicitly convert type 'void' to 'CharacterView.InputAction' public class CharacterView : MonoBehaviour { public delegate void InputAction(); public static event InputAction OnAxisChange; public Vector2 InputAxis { get { float x = Input.GetAxisRaw("Horizontal"); float y = Input.GetAxisRaw("Vertical"); return (new Vector2(x, y)); } } private void Update() { Vector2 input = InputAxis; if (input.x != 0 || input.y != 0) { if (OnAxisChange != null) { OnAxisChange(); } } } } The following is the class that handles the event. public class CharacterController : MonoBehaviour { private void OnEnable() { CharacterView.OnAxisChange += Move(); } private void OnDisable() { CharacterView.OnAxisChange -= Move(); } public void Move() { Debug.Log("Entered the move function!"); } } Using delegates for event handling is still a bit foreign to me, so I assume I am misunderstanding something. A: You need to remove the () after Move. With the parentheses you are calling the method and trying to add its return value (which is void) rather than the method itself. Just change it to the following: CharacterView.OnAxisChange += Move; CharacterView.OnAxisChange -= Move;
{ "pile_set_name": "StackExchange" }
Q: Why are event-based network applications inherently faster than threaded ones? We've all read the benchmarks and know the facts - event-based asynchronous network servers are faster than their threaded counterparts. Think lighttpd or Zeus vs. Apache or IIS. Why is that? A: I think event based vs thread based is not the question - it is a nonblocking Multiplexed I/O, Selectable sockets, solution vs thread pool solution. In the first case you are handling all input that comes in regardless of what is using it- so there is no blocking on the reads- a single 'listener'. The single listener thread passes data to what can be worker threads of different types- rather than one for each connection. Again, no blocking on writing any of the data- so the data handler can just run with it separately. Because this solution is mostly IO reads/writes it doesn't occupy much CPU time- thus your application can take that to do whatever it wants. In a thread pool solution you have individual threads handling each connection, so they have to share time to context switch in and out- each one 'listening'. In this solution the CPU + IO ops are in the same thread- which gets a time slice- so you end up waiting on IO ops to complete per thread (blocking) which could traditionally be done without using CPU time. Google for non-blocking IO for more detail- and you can prob find some comparisons vs. thread pools too. (if anyone can clarify these points, feel free) A: Event-driven applications are not inherently faster. From Why Events Are a Bad Idea (for High-Concurrency Servers): We examine the claimed strengths of events over threads and show that the weaknesses of threads are artifacts of specific threading implementations and not inherent to the threading paradigm. As evidence, we present a user-level thread package that scales to 100,000 threads and achieves excellent performance in a web server. This was in 2003. Surely the state of threading on modern OSs has improved since then. 
Writing the core of an event-based server means re-inventing cooperative multitasking (Windows 3.1 style) in your code, most likely on an OS that already supports proper pre-emptive multitasking, and without the benefit of transparent context switching. This means that you have to manage state on the heap that would normally be implied by the instruction pointer or stored in a stack variable. (If your language has them, closures ease this pain significantly. Trying to do this in C is a lot less fun.) This also means you gain all of the caveats cooperative multitasking implies. If one of your event handlers takes a while to run for any reason, it stalls that event thread. Totally unrelated requests lag. Even lengthy CPU-intensive operations have to be sent somewhere else to avoid this. When you're talking about the core of a high-concurrency server, 'lengthy operation' is a relative term, on the order of microseconds for a server expected to handle 100,000 requests per second. I hope the virtual memory system never has to pull pages from disk for you! Getting good performance from an event-based architecture can be tricky, especially when you consider latency and not just throughput. (Of course, there are plenty of mistakes you can make with threads as well. Concurrency is still hard.) A couple of important questions for the author of a new server application: How do threads perform on the platforms you intend to support today? Are they going to be your bottleneck? If you're still stuck with a bad thread implementation: why is nobody fixing this?
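The cooperative-multitasking point can be made concrete with a toy scheduler; a hypothetical Python sketch using generators, where every yield is a voluntary context switch and a handler that never yields stalls everyone else:

```python
from collections import deque

def run_round_robin(tasks):
    # Each task is a generator; every `yield` is a voluntary context
    # switch. A task that never yields would block the whole loop,
    # which is exactly the event-loop stall described above.
    order = []
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            order.append(next(task))
            queue.append(task)       # re-schedule after it yields
        except StopIteration:
            pass                     # task finished
    return order

def worker(name, steps):
    for i in range(steps):
        yield "%s%d" % (name, i)
```

The interleaved output shows the scheduler switching only at yield points; remove the yields and one worker would run to completion before any other made progress.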
{ "pile_set_name": "StackExchange" }
Q: Kotlin inline keyword causing IntelliJ IDEA Coverage reporting 0% I created a very simple test function as below class SimpleClassTest { lateinit var simpleObject: SimpleClass @Mock lateinit var injectedObject: InjectedClass @Before fun setUp() { MockitoAnnotations.initMocks(this) } @Test fun testSimpleFunction() { simpleObject = lookupInstance() } inline fun lookupInstance() = SimpleClass(injectedObject) } I Run it with Coverage... The test coverage number is 0%. But if I remove the inline keyword, the test coverage number shows now. Is this a Kotlin issue or Android IntelliJ IDEA Coverage issue? (note: JaCoco coverage is good). Note: I'm using Android Studio 2.0 and Kotlin 1.0.2 A: When an inlined function is compiled, the compiler essentially pastes its body into the call site (in place of the function call). This means that the coverage analysis can't tell that it's an inlined function because it doesn't really exist where you defined it. In other words, this behavior is a natural artifact of what it means for a function to be inlined.
{ "pile_set_name": "StackExchange" }
Q: .GetType().GetProperties() returns properties in different order I want to check our configuration file and see if it is the same as if I were to create a new configuration file. This method is called GetConfig(). After some hours I noticed that if I save my configuration file and then call GetConfig it works, but if I close the program start it up and load my configuration file in and call GetConfig() it returns my properties in a different order. Below you can see what I mean, property b is an object of a class. There are more than 3 properties, but I only wanted to give a small example: - - - - - - - - -- - - - - - -- S A V E C O N F I G - - - - - - -- - G E T C O N F I G 1 Field: a 1 Field: b 1 Field: c and the next config object it has to save. 1 Field: a 1 Field: b 1 Field: c When I load the config on the same instance - - - -- - - - - - - - - -- - A R E E Q U A L - - - - - - -- - G E T C O N F I G 1 Field: a 1 Field: b 1 Field: c next config object 1 Field: a 1 Field: b 1 Field: c However when I load my config when I restart the program I get this: - - - -- - - - - - - - - -- - A R E E Q U A L - - - - - - -- - G E T C O N F I G 1 Field: a 1 Field: b <-- correct 1 Field: c 2nd object 1 Field: a 1 Field: c 1 Field: b <-- should be 2nd. So when I try to compare both configuration files they do not match. Has anybody any experience with this? foreach (var field in channel.GetType().GetProperties()) { Console.WriteLine(channel.ChannelNumber + " Field: " + field.Name); Help is much appreciated. A: You cannot make any assumption about the order of return values of Type.GetProperties, see what documentation says: The GetProperties method does not return properties in a particular order, such as alphabetical or declaration order. Your code must not depend on the order in which properties are returned, because that order varies. If you want an specific order, you should make your code order the collection returned.
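The practical lesson (never depend on reflection order; sort by name before comparing) is language-agnostic. A small Python sketch of an order-independent comparison, with illustrative names:

```python
def public_members(obj):
    # Reflection order is an implementation detail in many runtimes;
    # sorting by name makes the comparison deterministic.
    return sorted(
        (name, getattr(obj, name))
        for name in vars(obj)
        if not name.startswith("_")
    )

def same_config(a, b):
    # Compare two objects field by field, independent of definition order.
    return public_members(a) == public_members(b)
```

The C# equivalent is an OrderBy(p => p.Name) on the GetProperties() result before walking the two property lists side by side.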
{ "pile_set_name": "StackExchange" }
Q: Responsive DataTable causing weird select box bug I am using a DataTable and I am also using the responsive extension for it. In my DataTable, there is a select box and when you change the select box there is a prompt asking if you are sure you want to make this change. On normal mode, it works fine, but when it is responsive it doesn't work. The pop-up dialogue gives the wrong values of the data to be changed. Here is an example: $(document).ready(function() { $('#example').DataTable({ autoWidth: false, responsive: true }); }); function changeResStatus(str1) { var id = str1; var status = document.getElementById("resstatus" + id).value; var mailres = ""; var r = confirm("Change status for ID # " + id + " to " + status + "?"); if (r == true) { alert("changed!"); } else { alert("not changed!"); } } <link href="https://cdn.datatables.net/responsive/2.2.3/css/responsive.dataTables.min.css" rel="stylesheet" /> <link href="https://cdn.datatables.net/1.10.20/css/jquery.dataTables.min.css" rel="stylesheet" /> <script src="https://code.jquery.com/jquery-3.3.1.js"></script> <script src="https://cdn.datatables.net/1.10.20/js/jquery.dataTables.min.js"></script> <script src="https://cdn.datatables.net/responsive/2.2.3/js/dataTables.responsive.min.js"></script> <table id="example" class="display responsive nowrap" style="width:100%"> <thead> <tr> <th>First name</th> <th>Last name</th> <th>Position</th> <th>Office</th> <th>Age</th> <th>Start date</th> <th>Salary</th> <th>Extn.</th> <th>BUG</th> </tr> </thead> <tbody> <tr> <td>Tiger</td> <td>Nixon</td> <td>System Architect</td> <td>Edinburgh</td> <td>61</td> <td>2011/04/25</td> <td>$320,800</td> <td>5421</td> <td> <select id="resstatus1" data-previousvalue="audi" onchange="changeResStatus(1);"> <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option selected value="audi">Audi</option> </select> </td> </tr> <tr> <td>Garrett</td> <td>Winters</td> 
<td>Accountant</td> <td>Tokyo</td> <td>63</td> <td>2011/07/25</td> <td>$170,750</td> <td>8422</td> <td> <select id="resstatus2" data-previousvalue="audi" onchange="changeResStatus(2);"> <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option selected value="audi">Audi</option> </select> </td> </tr> </tbody> </table> Super weird behavior, how can I fix this?! A: Simply use event.target.value to get the selected value. $(document).ready(function() { $('#example').DataTable({ autoWidth: false, responsive: true }); }); function changeResStatus(str1) { var id = str1; var status = document.getElementById("resstatus" + id).value; var mailres = ""; var r = confirm("Change status for ID # " + id + " to " + event.target.value + "?"); if (r == true) { alert("changed!"); } else { alert("not changed!"); } } <link href="https://cdn.datatables.net/responsive/2.2.3/css/responsive.dataTables.min.css" rel="stylesheet" /> <link href="https://cdn.datatables.net/1.10.20/css/jquery.dataTables.min.css" rel="stylesheet" /> <script src="https://code.jquery.com/jquery-3.3.1.js"></script> <script src="https://cdn.datatables.net/1.10.20/js/jquery.dataTables.min.js"></script> <script src="https://cdn.datatables.net/responsive/2.2.3/js/dataTables.responsive.min.js"></script> <table id="example" class="display responsive nowrap" style="width:100%"> <thead> <tr> <th>First name</th> <th>Last name</th> <th>Position</th> <th>Office</th> <th>Age</th> <th>Start date</th> <th>Salary</th> <th>Extn.</th> <th>BUG</th> </tr> </thead> <tbody> <tr> <td>Tiger</td> <td>Nixon</td> <td>System Architect</td> <td>Edinburgh</td> <td>61</td> <td>2011/04/25</td> <td>$320,800</td> <td>5421</td> <td> <select id="resstatus1" data-previousvalue="audi" onchange="changeResStatus(1);"> <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option selected value="audi">Audi</option> </select> </td> 
</tr> <tr> <td>Garrett</td> <td>Winters</td> <td>Accountant</td> <td>Tokyo</td> <td>63</td> <td>2011/07/25</td> <td>$170,750</td> <td>8422</td> <td> <select id="resstatus2" data-previousvalue="audi" onchange="changeResStatus(2);"> <option value="volvo">Volvo</option> <option value="saab">Saab</option> <option value="mercedes">Mercedes</option> <option selected value="audi">Audi</option> </select> </td> </tr> </tbody> </table>
{ "pile_set_name": "StackExchange" }
Q: Manipulating a data structure in Pig/Hive I'm not really sure how to phrase this question, so please redirect me if there is a better place for this question. Right now I have a data structure, more or less organized like this: I want my data to look like this: Sorry for the images, apparently I can't use markdown to make these! I realize my question is similar to this one, but ideally I would like to be able to do this in Pig, but knowing how to do it in Hive, R, Python, or Excel/LibreCalc would be useful/interesting too. I'm not even sure what this kind of data manipulation is called, so directing me to some sort of general wiki page would be helpful. A: @vkp got me started in the right direction, but I had to add a few tweaks to get it working on Hive: CREATE TABLE myDatabase.newTable STORED AS TEXTFILE AS SELECT item, year, 'jan' AS Month, jan AS Value FROM myDatabase.myTable UNION ALL SELECT item, year, 'feb' AS Month, feb AS Value FROM myDatabase.myTable UNION ALL SELECT item, year, 'mar' AS Month, mar AS Value FROM myDatabase.myTable; Still interested in a solution that works on Pig.
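This reshaping is usually called unpivoting (or melting). Since the question also mentions Python, here is a plain-Python sketch mirroring the UNION ALL query (column names are illustrative):

```python
def unpivot(rows, id_cols, value_cols, var_name="month", value_name="value"):
    # Turn one wide row per (item, year) into one narrow row per month,
    # the same reshaping the UNION ALL query performs.
    out = []
    for row in rows:
        for col in value_cols:
            narrow = {k: row[k] for k in id_cols}
            narrow[var_name] = col
            narrow[value_name] = row[col]
            out.append(narrow)
    return out
```

In pandas this is a single pd.melt call, and one common Pig idiom is FLATTEN over a bag built with TOBAG(TOTUPLE('jan', jan), ...), though that is worth verifying against your Pig version.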
{ "pile_set_name": "StackExchange" }
Q: How can I best host PowerPoint slides on a web site, interleaved with native HTML content? Many of my users prepare training presentations using PowerPoint. My training app presents content in a simple chapter-questions chapter-questions format, where a course module has many chapters, and each chapter is basically a web page for content, and a collection of questions or exercises for that content. I manage the questions individually, i.e. each question is its own 'document', versus one content document and one questions document per chapter. I need to offer some kind of import from PowerPoint which will allow a course author to break the PowerPoint presentation into chapters, i.e. groups of slides, and create a new set of question documents, basically as HTML text, so the flow of a course module is e.g. chapter 1 intro - html chapter 1 content - ppt chapter 1 question html chapter 1 question html I realise I could convert the content PowerPoint to HTML, and treat the whole course as native content, but my users are much more familiar with PowerPoint as their content editor, versus ckeditor and my utilities. I want to only require the user to author their questions using my editors, and then just manage the ppt content. How can I go about this? I realise MS probably has something for hosting ppt content in web pages, and that there may be more open tools for this as well, but the latter isn't a requirement. A dependency on an MS tool, just not PowerPoint itself, is something I'm happy with. A: Microsoft does support an embed option, but you have to own a SkyDrive account for it to work. Outside of MS, Scribd and Docstoc also enable you to share PowerPoint slides; you can find some more ideas on our sister site Stack Overflow. IMHO Scribd is the best of the bunch.
{ "pile_set_name": "StackExchange" }
Q: Bootstrap form is not clickable Hello I am trying to figure out, why this form is not clickable: https://www.codeply.com/go/Am5LvvjTxC/bootstrap-4-center-form This is the HTML Code: <section id="cover" class="min-vh-100"> <div id="cover-caption"> <div class="container"> <div class="row text-white"> <div class="col-xl-5 col-lg-6 col-md-8 col-sm-10 mx-auto text-center form p-4"> <h1 class="display-4 py-2 text-truncate">Center my form.</h1> <div class="px-2"> <form action="" class="justify-content-center"> <div class="form-group"> <label class="sr-only">Name</label> <input type="text" class="form-control" placeholder="Jane Doe"> </div> <div class="form-group"> <label class="sr-only">Email</label> <input type="text" class="form-control" placeholder="[email protected]"> </div> <button type="submit" class="btn btn-primary btn-lg">Launch</button> </form> </div> </div> </div> </div> </div> </section> This is the CSS Code: #cover { background: #222 url('https://thenypost.files.wordpress.com/2018/12/santa-claus-kids.jpg') center center no-repeat; background-size: cover; height: 100%; text-align: center; display: flex; align-items: center; position: relative; z-index: -1; } #cover-caption { width: 100%; z-index: 1; } /* only used for background overlay not needed for centering */ form:before { content: ''; height: 100%; left: 0; position: absolute; top: 0; width: 100%; background-color: rgba(0,0,0,0.3); z-index: -1; } form-z-index { z-index: 10; } } I believe it is probably the zindex. But i cant figure out the right order. Can anybody help? 
A: The problem is with the z-index (the -1 on #cover): Change CSS to: #cover { background: #222 url('https://unsplash.it/1920/1080/?random') center center no-repeat; background-size: cover; height: 100%; text-align: center; display: flex; align-items: center; position: relative; } #cover-caption { width: 100%; position: relative; z-index: 1; } /* only used for background overlay not needed for centering */ form:before { content: ''; height: 100%; left: 0; position: absolute; top: 0; width: 100%; background-color: rgba(0,0,0,0.3); z-index: -1; } This should solve the problem.
{ "pile_set_name": "StackExchange" }
Q: How to count number of checkboxes selected, change background after selected, and have hover continue to work I want to create a list of checkboxes that users can select, however, limit the number of checkboxes to 5, as well as show the user how many they have currently clicked. I also want to change the background color of the checkbox labels after they have been selected. My main problem is that the number showing how many checkboxes have been selected is always one click behind. Also, the background color is changing after being selected, but the hover call stops working if selected. Finally, I'd love to hear any suggestions on how to make my count function cleaner. I don't like having 7 if statements... $(document).ready(function() { $("input[name='group_option[]']").change(function() { var maxAllowed = 5; var cnt = $("input[name='group_option[]']:checked").length; if (cnt > maxAllowed) { $(this).prop("checked", ""); } }); }); function count() { var count = 0; if ($('#checkbox1').is(':checked')) { count = count + 1; } if ($('#checkbox2').is(':checked')) { count = count + 1; } if ($('#checkbox3').is(':checked')) { count = count + 1; } if ($('#checkbox4').is(':checked')) { count = count + 1; } if ($('#checkbox5').is(':checked')) { count = count + 1; } if ($('#checkbox6').is(':checked')) { count = count + 1; } if ($('#checkbox7').is(':checked')) { count = count + 1; } document.getElementById("count").innerHTML = count + "/5 Selected"; } .options { background-color: #e6e6e6; display: block; width: 300px; margin-left: 20px; padding: 2px; margin-bottom: 1px; } .options:hover { color: black; cursor: pointer; transition-duration: .15s; background-color: #b3b3b3; } input { float: left; } label:hover { background-color: #bfbfbf; } input[type=checkbox]:checked + label { background-color: #cccccc; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script> <b id="count" style="float: left;">0/5 Selected</b> <br> <br> <input id="checkbox1" 
type="checkbox" name="group_option[]" value="option1" /> <label for="checkbox1" class="options" onclick="count(this)">&nbsp;&nbsp;&nbsp;Option 1</label> <input id="checkbox2" type="checkbox" name="group_option[]" value="option2" /> <label for="checkbox2" class="options" onclick="count(this)">&nbsp;&nbsp;&nbsp;Option 2</label> <input id="checkbox3" type="checkbox" name="group_option[]" value="option3" /> <label for="checkbox3" class="options" onclick="count(this)">&nbsp;&nbsp;&nbsp;Option 3</label> <input id="checkbox4" type="checkbox" name="group_option[]" value="option4" /> <label for="checkbox4" class="options" onclick="count(this)">&nbsp;&nbsp;&nbsp;Option 4</label> <input id="checkbox5" type="checkbox" name="group_option[]" value="option5" /> <label for="checkbox5" class="options" onclick="count(this)">&nbsp;&nbsp;&nbsp;Option 5</label> <input id="checkbox6" type="checkbox" name="group_option[]" value="option6" /> <label for="checkbox6" class="options" onclick="count(this)">&nbsp;&nbsp;&nbsp;Option 6</label> <input id="checkbox7" type="checkbox" name="group_option[]" value="option7" /> <label for="checkbox7" class="options" onclick="count(this)">&nbsp;&nbsp;&nbsp;Option 7</label> A: There's no need for your separate count() function, as you can do all the required processing in your jQuery change event handler (and on* event attributes are considered outdated and should be avoided anyway). You already have the cnt variable stored there, which you can use.
Try this: $(document).ready(function() { var maxAllowed = 5; $("input[name='group_option[]']").change(function() { var cnt = $("input[name='group_option[]']:checked").length; if (cnt > maxAllowed) $(this).prop("checked", false); else $('#count').text(cnt + '/5 Selected'); }); }); .options { background-color: #e6e6e6; display: block; width: 300px; margin-left: 20px; padding: 2px; margin-bottom: 1px; } .options:hover { color: black; cursor: pointer; transition-duration: .15s; background-color: #b3b3b3; } input { float: left; } input:checked + label { background-color: #cccccc; } input:checked + label:hover { background-color: #bfbfbf; } <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script> <b id="count" style="float: left;">0/5 Selected</b><br><br> <input id="checkbox1" type="checkbox" name="group_option[]" value="option1" /> <label for="checkbox1" class="options">&nbsp;&nbsp;&nbsp;Option 1</label> <input id="checkbox2" type="checkbox" name="group_option[]" value="option2" /> <label for="checkbox2" class="options">&nbsp;&nbsp;&nbsp;Option 2</label> <input id="checkbox3" type="checkbox" name="group_option[]" value="option3" /> <label for="checkbox3" class="options">&nbsp;&nbsp;&nbsp;Option 3</label> <input id="checkbox4" type="checkbox" name="group_option[]" value="option4" /> <label for="checkbox4" class="options">&nbsp;&nbsp;&nbsp;Option 4</label> <input id="checkbox5" type="checkbox" name="group_option[]" value="option5" /> <label for="checkbox5" class="options">&nbsp;&nbsp;&nbsp;Option 5</label> <input id="checkbox6" type="checkbox" name="group_option[]" value="option6" /> <label for="checkbox6" class="options">&nbsp;&nbsp;&nbsp;Option 6</label> <input id="checkbox7" type="checkbox" name="group_option[]" value="option7" /> <label for="checkbox7" class="options">&nbsp;&nbsp;&nbsp;Option 7</label>
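A framework-free way to sanity-check the selection rule is to model it as plain data: count the proposed checked boxes and reject the change once the cap is hit. The sketch below is in Python purely for illustration; the toggle function and the list-of-booleans model are made up here and are not part of jQuery or the page above.

```python
MAX_ALLOWED = 5

def toggle(checked, index, max_allowed=MAX_ALLOWED):
    """Flip checkbox `index` in the list of booleans `checked`.

    Mirrors the jQuery change handler: if flipping a box on would exceed
    the cap, the change is rejected and the box stays unchecked.
    Returns (new_state, count), with count as shown in "n/5 Selected".
    """
    proposed = list(checked)
    proposed[index] = not proposed[index]
    if sum(proposed) > max_allowed:
        return checked, sum(checked)   # reject the change, count unchanged
    return proposed, sum(proposed)

boxes = [False] * 7
for i in range(7):                     # the user clicks every box in turn
    boxes, count = toggle(boxes, i)
print("%d/5 Selected" % count)         # prints "5/5 Selected"; clicks 6 and 7 were rejected
```

Because the count is derived from the proposed state rather than read back from the page afterwards, it can never be "one click behind".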
Q: Why does my UIAlertController not appear until the second time I press its tab? I am using the following class to try and make an alert pop up when I click on a specific tab (rather than switching to that tab). The first time I click the tab it goes to an empty view and does not show the alert. The second time I click it (and beyond) it works as intended. Is there a way to make the alert fire the first time I click the tab? Code for the class is below: class TabOverlayViewController: UIViewController, UITabBarControllerDelegate { override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) self.tabBarController?.delegate = self } override func viewDidLoad() { super.viewDidLoad() } func tabBarController(_ tabBarController: UITabBarController, shouldSelect viewController: UIViewController) -> Bool { if viewController == tabBarController.viewControllers?[1] { let alert = UIAlertController(title: "Add", message: "", preferredStyle: .alert) alert.addAction(UIAlertAction(title: "Album", style: .default) { action in }) self.present(alert, animated: true, completion: { }) return false } else { return true } } } A: This happens because you set the UITabBarControllerDelegate only in the viewWillAppear method. On the first tap no delegate has been set yet, so there is no delegate call; the view then appears and sets the delegate, and the func tabBarController(_ tabBarController: UITabBarController, shouldSelect viewController: UIViewController) -> Bool method is invoked only on the following taps. UPD: By the way, are you sure that this particular controller should be the delegate for the UITabBarController? Is it correct that a concrete view controller decides which tabs can be opened and which cannot?
In a simple app I would assign the tab bar controller itself as the delegate: final class TabBarVC: UITabBarController { override func viewDidLoad() { super.viewDidLoad() delegate = self } } extension TabBarVC: UITabBarControllerDelegate { func tabBarController(_ tabBarController: UITabBarController, shouldSelect viewController: UIViewController) -> Bool { // do your logic here } } Of course, you should specify that your current tab bar controller is of type TabBarVC.
Q: Why does ASP.NET MVC 4 with IList for editor not write index notation properly? I have a peculiar situation in which referencing the model in my editor template produces a malformed index 'name' field: Editor @model IList<BillingRateItem> @for (int i = 0;i < this.Model.Count(); i++) { @Html.HiddenFor(m => this.Model[i].BillingRateItemID) } The hidden field produced includes a period before the index, which is problematic: HTML Rendered <input id="Model_BillingRateItems__0__BillingRateItemID" name="Model.BillingRateItems.[0].BillingRateItemID" type="hidden" value="9"> What I want Rendered Note that 'Model.BillingRateItems[0]' has no period between the name and the index. This is good! <input id="Model_BillingRateItems__0__BillingRateItemID" name="Model.BillingRateItems[0].BillingRateItemID" type="hidden" value="9"> EDIT - This is an example of the view calling the editor @model BillingRateViewModel // has multiple BillingRateItems @Html.EditorFor(m=>m.BillingRateItems,"BillingRateItemsGrid") Note: When I change the Editor to accept the parent object (e.g. BillingRate) the indexing works fine. Maybe the conclusion is simply that all editors will begin with a '.' regardless of the context. In the case of Enumerables it will add the '.[].' even though this gives the native model binder no help in reconstructing the object. A: I don't know exactly which is the issue with the version of the MVC assembly that you are using, but it looks like some bug that I can't reproduce on the latest version, which you can install from the Visual Studio Extensions and Updates menu. You can create this hidden field using @Html.Hidden this way: @Html.Hidden("[" + i + "].BillingRateItemID", this.Model[i].BillingRateItemID); The produced HTML will look like this, where i will be the index of each item and the value will assume the BillingRateItemID: <input name="[1].BillingRateItemID" type="hidden" value="1"> Then the default Model Binder will take care of the list correctly.
Q: Add a column in an existing table and make it the primary key mysql Is there a way to add a column in an existing table (which has data populated) in MySQL and make it the primary key? The issue that I am running into is that the system is now generating a GUID (unique ID) for each record, and ideally that should be the primary key in the table. However, the previous records will not have this GUID populated, but the new records will. So the existing rows in the table will not allow me to make this GUID column the primary key. A: You will first need to remove the primary key from your existing key column: ALTER TABLE table MODIFY UIDD INT NOT NULL; ALTER TABLE table DROP PRIMARY KEY; After that you can add your new column: ALTER TABLE table ADD pk_column INT NOT NULL Add your new IDs to the new column. After that, add the primary key to it: ALTER TABLE table ADD PRIMARY KEY (pk_column) You can set your old primary key column to a unique index, so that queries are still fast and you still have the unique constraint.
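The reason the primary key cannot be added directly is that the pre-existing rows have no GUID yet, so the usual order is: add the column, backfill a GUID per row, then add the constraint. Here is a rough sketch of that order using Python's sqlite3 and uuid modules as a stand-in for MySQL; SQLite cannot add a primary key after table creation, so a unique index plays that role in this illustration, and the table and column names are invented.

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (name TEXT)")
conn.executemany("INSERT INTO records (name) VALUES (?)",
                 [("old-1",), ("old-2",), ("old-3",)])

# 1. Add the new column; existing rows get NULL, so it cannot be a key yet.
conn.execute("ALTER TABLE records ADD COLUMN guid TEXT")

# 2. Backfill a GUID for every pre-existing row.
rows = conn.execute("SELECT rowid FROM records WHERE guid IS NULL").fetchall()
for (rowid,) in rows:
    conn.execute("UPDATE records SET guid = ? WHERE rowid = ?",
                 (str(uuid.uuid4()), rowid))

# 3. Only now can a uniqueness constraint be enforced
#    (in MySQL: ALTER TABLE ... ADD PRIMARY KEY (guid)).
conn.execute("CREATE UNIQUE INDEX idx_records_guid ON records (guid)")

missing = conn.execute(
    "SELECT COUNT(*) FROM records WHERE guid IS NULL").fetchone()[0]
print(missing)  # prints 0: every row now has a GUID
```

The same add/backfill/constrain sequence works in MySQL with the ALTER statements from the answer above.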
Q: Best way to use the same HTML on static web pages If you use dynamic pages like JSP or ASP.NET, you can have your page template included and then content added. But what if you have no server-side component and all pages are just HTML/JS? You can of course create a template and then copy it for each page, but then if you want to change something you risk having to modify every page, even if you put most styling in CSS properly. Are there any non-awful ways to do this? I could see that an iframe could be used to load the content into the central page, but that sounds nasty. Does HTML provide any way to include a base file and add to it? A: You can use Server Side Includes to include other files on the server. It's similar to scripting languages like ASP or PHP, but SSI is usually supported by the server directly, so it's available on many servers, even if there is no scripting language available.
Q: How do I get the value passed to RETURN when calling a stored procedure? Suppose we have an arbitrary stored procedure; for the sake of a clean experiment, let's take a trivial one: CREATE PROCEDURE dbo.GetAnswer AS RETURN 42 and the C# code that calls it: using(SqlConnection con = new SqlConnection(connectionString)) { con.Open(); SqlCommand cmd = con.CreateCommand(); cmd.CommandText = "dbo.GetAnswer"; cmd.CommandType = CommandType.StoredProcedure; cmd.ExecuteNonQuery(); } How do I get the return value? Replacing cmd.ExecuteNonQuery(); with cmd.ExecuteScalar(); does not help, because ExecuteScalar returns null for such a procedure. A: The answer turned out to be far from obvious to me. In code it looks like this: using(SqlConnection con = new SqlConnection(connectionString)) { // the beginning does not change con.Open(); SqlCommand cmd = con.CreateCommand(); cmd.CommandText = "dbo.GetAnswer"; cmd.CommandType = CommandType.StoredProcedure; // Add a parameter to the procedure call (the name does not matter) // Wait, but our procedure has no parameters??? var returnParameter = cmd.Parameters.Add("@ReturnVal", SqlDbType.Int); // Note the Direction value returnParameter.Direction = ParameterDirection.ReturnValue; // execute as usual cmd.ExecuteNonQuery(); // retrieve the value int answer = (int)returnParameter.Value; } Nothing complicated overall, but the last thing I expected was having to add a parameter that does not exist. Note also that no changes to the stored procedure itself are required: the parameter added to the command is not a parameter of the procedure. A stored procedure always returns some integer value; if there is no RETURN with an explicit value in the procedure, 0 is returned.
Q: Implementing the reliability pattern in CloudHub with VM queues I have more-or-less implemented the Reliability Pattern in my Mule application using persistent VM queues in CloudHub, as documented here. While everything works fine, it has left me with a number of questions about actually ensuring reliable delivery of my messages. To illustrate the points below, assume I have an http-request component within my "application logic flow" (see the diagram on the link above) that is throwing an exception because the endpoint is down, and I want to ensure that the in-flight message will eventually get delivered to the endpoint: As detailed on the link above, I have observed that when the exception is thrown within my "application logic flow", and I have made the flow transactional, the message is put back on the VM queue. However, all that happens is that the message is repeatedly taken off the queue, processed by the flow, and the exception is thrown again - ad infinitum. There appears to be no way of configuring any sort of retry delay or maximum number of retries on VM queues as is possible, for example, with ActiveMQ. The best workaround I have come up with is to surround the http-request message processor with the until-successful scope, but I'd rather have these sorts of things apply to my whole flow (without having to wrap the whole flow in until-successful). Is this sort of thing possible using only VM queues and CloudHub? I have configured my until-successful to place the message on another VM queue which I want to use as a dead-letter queue. Again, this works fine, and I can log in to CloudHub and see the messages populated on my DLQ - but then it appears to offer no way of moving messages from this queue back into the flow when the endpoint comes back up. All it seems you can do in CloudHub is clear your queue. Again, is this possible using VM queues and CloudHub only (i.e. no other queueing tool)? A: VM queues are very basic, whether you use them in CloudHub or not.
VM queues have no capacity for delaying redelivery (like exponential back-offs). Use JMS queues if you need such features. You need to create a flow for processing the DLQ, for example one that regularly consumes the queue via the requester module and re-injects the messages into the main queue. Again, with JMS, you would have better control. Alternatively to JMS, you could consider hosted queues like CloudAMQP, Iron.io or AWS SQS. You would lose transaction support on the inbound endpoint but would gain better control on the (re)delivery behaviour.
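For comparison, the kind of redelivery schedule that VM queues lack (and that JMS brokers typically provide) is just an exponentially growing, capped delay between attempts. A minimal Python sketch of such a schedule, purely illustrative and not Mule or JMS API:

```python
def backoff_delays(initial=1.0, factor=2.0, max_retries=5, cap=30.0):
    """Yield the wait in seconds before each redelivery attempt."""
    delay = initial
    for _ in range(max_retries):
        yield min(delay, cap)
        delay *= factor

print(list(backoff_delays()))         # [1.0, 2.0, 4.0, 8.0, 16.0]
print(list(backoff_delays(cap=5.0)))  # [1.0, 2.0, 4.0, 5.0, 5.0]
```

A consumer would sleep for each yielded value between attempts and move the message to a dead-letter queue once the generator is exhausted.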
Q: Neat responsive horizontal gutter When using the Neat framework, the column's gutter automatically adjusts to the window size (as is expected from grid frameworks, of course). In my current project I'd like to use the gutter width ($gutter) as top or bottom padding for some elements. Using the $gutter variable directly works, except that the padding won't be adjusted when downsizing the viewport. Does anyone have a solution for this? A: Well, I was a little bit too fast with asking this question. I did a little browsing through the Neat code, and apparently using padding: flex-gutter(); works. Does anyone know if this has any downsides? Edit: Well, I found the downside. flex-gutter(), or using the Bourbon function pad() (same result except for padding), defines the padding/margin at around 2.3%. The problem is that, when you view your grid on a large screen, the padding will get huge (while the gutter on your grid holds on to a maximum of 25px). So I kinda made my own max-margin: margin-bottom: 25px; @media (max-width: 1100px) { margin-bottom: flex-gutter();} The first line defines the max margin, the second line sets flex-gutter if your viewport is < 1100px (my max grid size for Neat). Would love to hear if I can implement this in a Sass function!
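The work-around above is effectively a clamp: a fluid gutter below the breakpoint and a fixed pixel value above it. The arithmetic can be checked with a tiny script; the 2.3% ratio, 1100px breakpoint, and 25px cap come from the answer, while the function itself is made up for illustration.

```python
def gutter_px(viewport_px, fluid_ratio=0.023, breakpoint_px=1100, max_px=25):
    """Bottom margin in pixels: fluid below the breakpoint, fixed above it."""
    if viewport_px <= breakpoint_px:
        return viewport_px * fluid_ratio
    return max_px

print(gutter_px(800))   # roughly 18.4: fluid gutter on a small viewport
print(gutter_px(1600))  # 25: capped on a large screen
```

This mirrors what the two CSS lines do together: the media query supplies the fluid branch, the plain declaration the fixed one.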
Q: Taxonomy archive template to have conditional logic for displaying child categories I have a custom post-type "downloads", and within the downloads menu I have download categories; the taxonomy name is "downloadcats". My Downloads page uses a page template that lists all categories in the "downloadcats" taxonomy; this only shows parent categories. When a category is clicked, it uses the taxonomy archive template (taxonomy-downloadcats.php), and what I am trying to do is use a conditional to say: if there are child categories, display a list, otherwise show a list of posts. In my taxonomy-downloadcats.php at the moment I have the following: <?php $term = get_queried_object(); $children = get_terms( $term->taxonomy, array( 'parent' => $term->term_id, 'hide_empty' => false ) ); // print_r($children); // uncomment to examine for debugging if($children) { // get_terms will return false if tax does not exist or term wasn't found. echo 'show child categories'; } else { if (have_posts()) : while (have_posts()) : the_post(); ?> <li><?php the_post_thumbnail('post-image'); ?> <a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></li> <?php endwhile; ?> <?php endif; ?> <?php } ?> The conditional statement is correct: if the category has no children it shows posts, and if there are child categories it displays "show child categories". I need to know what goes in there to make the parent's children appear. Thanks A: You are so very close here, you can almost smell it. Here is how you would complete your code: You already have all your child terms stored in an array called $children. To display these, you simply need a foreach loop. Something like this will do the trick. (You can just extend on this; simply do a var_dump or print_r to get the available objects you can use) foreach ( $children as $child ) { echo $child->name; // very basic: display the term name } If you need to get a link to the term, make use of get_term_link
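The branch the template needs (children found: list them; otherwise: list posts) is independent of WordPress and can be prototyped on plain data. A rough Python model, where the flat list of terms with parent ids loosely mirrors what get_terms returns; the field names and sample data are illustrative only.

```python
def children_of(terms, parent_id):
    """Return the child terms of `parent_id` from a flat term list."""
    return [t for t in terms if t["parent"] == parent_id]

def render(terms, current_id, posts):
    kids = children_of(terms, current_id)
    if kids:               # show the child categories
        return [t["name"] for t in kids]
    return posts           # no children: fall back to the post list

terms = [
    {"id": 1, "name": "Downloads", "parent": 0},
    {"id": 2, "name": "Manuals",   "parent": 1},
    {"id": 3, "name": "Drivers",   "parent": 1},
    {"id": 4, "name": "Legacy",    "parent": 3},
]
print(render(terms, 1, []))             # ['Manuals', 'Drivers']
print(render(terms, 2, ["setup.pdf"]))  # ['setup.pdf']: a leaf category
```

The PHP foreach in the answer is exactly the "kids" branch of this sketch.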
Q: Python: why am I getting a NameError? I have the following code: from crypt import crypt import itertools from string import ascii_letters, digits def decrypt(all_hashes, salt, charset=ascii_letters + digits + "-"): products = (itertools.product(charset, repeat=r) for r in range(8)) chain = itertools.chain.from_iterable(products) for candidate in chain: hash = crypt(candidate, salt) if hash in all_hashes: yield candidate, hash all_hashes.remove(hash) if not all_hashes: return all_hashes = 'aaRrt6qwqR7xk', 'aaacT.VSMxhms' , 'aaWIa93yJI9kU', \ 'aakf8kFpfzD5E', 'aaMOPiDnXYTPE', 'aaz71s8a0SSbU', 'aa6SXFxZJrI7E', \ 'aa9hi/efJu5P.', 'aaBWpr07X4LDE', 'aaqwyFUsGMNrQ', 'aa.lUgfbPGANY', \ 'aaHgyDUxJGPl6', 'aaTuBoxlxtjeg', 'aaluQSsvEIrDs', 'aajuaeRAx9C9g', \ 'aat0FraNnWA4g', 'aaya6nAGIGcYo', 'aaya6nAGIGcYo', 'aawmOHEectP/g', \ 'aazpGZ/jXGDhw', 'aadc1hd1Uxlz.', 'aabx55R4tiWwQ', 'aaOhLry1KgN3.', \ 'aaGO0MNkEn0JA', 'aaGxcBxfr5rgM', 'aa2voaxqfsKQA', 'aahdDVXRTugPc', \ 'aaaLf47tEydKM', 'aawZuilJMRO.w', 'aayxG5tSZJJHc', 'aaPXxZDcwBKgo', \ 'aaZroUk7y0Nao', 'aaZo046pM1vmY', 'aa5Be/kKhzh.o', 'aa0lJMaclo592', \ 'aaY5SpAiLEJj6', 'aa..CW12pQtCE', 'aamVYXdd9MlOI', 'aajCM.48K40M.', \ 'aa1iXl.B1Zjb2', 'aapG.//419wZU' all_hashes = set(all_hashes) salt = 'aa' for candidate, hash in decrypt(all_hashes, salt): print 'Found', hash, '! The original string was', candidate And when I go to run it I get the following traceback error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<stdin>", line 5, in decrypt NameError: must be string, not tuple I think it has something to do with my "all_hashes" variable, but if I take out the commas all the different hashes are stored as one long string. Someone please shed some light, thanks in advance A: The products are tuples, like ('g', 'g', 'g', 'e', 'a', 'b', 'f') and crypt expects a string.
Transform your tuples to strings before passing them to crypt: for candidate in chain: hash = crypt(''.join(candidate), salt) (thanks to Joel Cornett for the nice transformation to string and to HVNSweeting for the improvement on that.)
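To see why the join is needed, note that itertools.product yields tuples of characters, never strings. A quick demonstration (crypt itself is left out, since it is Unix-only):

```python
import itertools
from string import ascii_lowercase

products = (itertools.product(ascii_lowercase, repeat=r) for r in range(3))
chain = itertools.chain.from_iterable(products)

first = next(chain)    # (): repeat=0 yields a single empty tuple
second = next(chain)   # ('a',): still a tuple, not the string 'a'
print(type(second), second)

# Joining turns each candidate into the string that crypt() expects:
joined = [''.join(c) for c in itertools.product('ab', repeat=2)]
print(joined)          # ['aa', 'ab', 'ba', 'bb']
```

Note the empty tuple from repeat=0: the fixed code will test the empty password first, which is usually what you want in an exhaustive search.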
Q: How can I make the loop item appear on a new line, not on one line I have some code to display data from the database in a label. All of the data is displayed on one line, but I want each row of data to be displayed on a new line. I'm using VB.Net Aspx.vb Sub ResourceName_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Try Dim LanguageName As String = vbNullString Dim LangNameList As String = vbNullString Dim LangWrittenList As String = vbNullString Dim LangSpokenList As String = vbNullString Dim WrittenRate As String = vbNullString Dim SpokenRate As String = vbNullString lblLanguageName.Text = "" lblLanguageWritten.Text = "" lblLanguageSpoken.Text = "" attPage.SQLQuery = DC.Data_TechnicalResource("12",chkResourceName1.SelectedValue) DS = DA.GetSQLDataset(attPage.SQLQuery) If DS IsNot Nothing AndAlso DS.Tables(0).Rows.Count > 0 Then For Each dr In DS.Tables(0).Rows LanguageName = dr("LanguageName").ToString WrittenRate = dr("WrittenLevel").ToString SpokenRate = dr("SpokenLevel").ToString If cnt <> 0 Then LangNameList = LangNameList + LanguageName LangWrittenList = LangWrittenList + WrittenRate LangSpokenList = LangSpokenList + SpokenRate Else LangNameList = LanguageName LangWrittenList = WrittenRate LangSpokenList = SpokenRate End If cnt = cnt + 1 Next End If lblLanguageName.Text = LangNameList lblLanguageWritten.Text = LangWrittenList lblLanguageSpoken.Text = LangSpokenList The output is as below Bahasa MalaysiaEnglishTamil 10,10,4 10,10,4 I want the output to be like this Bahasa Malaysia 10 10 English 10 10 Tamil 4 4 Can anyone help me do this? I would appreciate it. Thank you.
A: You are not inserting a newline anywhere in your code. To do that, change your if block to this: If cnt <> 0 Then LangNameList = LangNameList + vbNewLine + LanguageName LangWrittenList = LangWrittenList + vbNewLine + WrittenRate LangSpokenList = LangSpokenList + vbNewLine + SpokenRate Else LangNameList = LanguageName LangWrittenList = WrittenRate LangSpokenList = SpokenRate End If So your entire code should be like this: Sub ResourceName_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Try Dim LanguageName As String = vbNullString Dim LangNameList As String = vbNullString Dim LangWrittenList As String = vbNullString Dim LangSpokenList As String = vbNullString Dim WrittenRate As String = vbNullString Dim SpokenRate As String = vbNullString lblLanguageName.Text = "" lblLanguageWritten.Text = "" lblLanguageSpoken.Text = "" attPage.SQLQuery = DC.Data_TechnicalResource("12",chkResourceName1.SelectedValue) DS = DA.GetSQLDataset(attPage.SQLQuery) If DS IsNot Nothing AndAlso DS.Tables(0).Rows.Count > 0 Then For Each dr In DS.Tables(0).Rows LanguageName = dr("LanguageName").ToString WrittenRate = dr("WrittenLevel").ToString SpokenRate = dr("SpokenLevel").ToString If cnt <> 0 Then LangNameList = LangNameList + vbNewLine + LanguageName LangWrittenList = LangWrittenList + vbNewLine + WrittenRate LangSpokenList = LangSpokenList + vbNewLine + SpokenRate Else LangNameList = LanguageName LangWrittenList = WrittenRate LangSpokenList = SpokenRate End If cnt = cnt + 1 Next End If lblLanguageName.Text = LangNameList lblLanguageWritten.Text = LangWrittenList lblLanguageSpoken.Text = LangSpokenList
Q: In Timeline, does a "time" traveller truly die during transit? In Timeline by Michael Chrichton, does a time traveller die during transit? I've been confused about this because from Chris' point of view of the transit, things seem to end suddenly in darkness (which I took to be death). But then Gordon explains to Stern that at the moment of transit, the travelers from this timeline are destroyed, while exact copies from another universe arrive at the destination. While all this fictionalized quantum mechanics gives me a migraine, I'm unsure if I am wrapping my head around the details correctly. A: My reading of it is the same as yours. Technically the traveler dies (is destroyed), but an identical copy of the traveler is reconstituted (from another universe) so even the traveler doesn't feel any different. It reminds me of the Steven Wright joke, "I woke up one morning and everything in my apartment had been stolen and replaced with an exact replica."
{ "pile_set_name": "StackExchange" }
Q: Boot Camp partition removal failed in macOS Sierra I installed Windows 10 on my Mac using Boot Camp Assistant, but later decided to remove the partition as I no longer needed Windows. I did (or tried to) using Boot Camp "restore", but it gave me an error saying it failed. Now when I go into Disk Utility, the Windows partition doesn't appear, so I can't delete it. But it's there somewhere because I'm only showing 180 GB on the Mac partition (not the full 250 GB). Going back into Boot Camp Assistant doesn't help, it just allows an option to make another partition (no delete or restore option). I'm at a loss for how to proceed. A: After an unsuccessful Boot Camp Assistant “Remove Windows 10 or later version” execution, almost any CoreStorage partition I came across was FUBAR. This usually doesn't affect the file system/content of the system volume but some internal CoreStorage structures which inhibits any modification of the Logical Volume Group and subsequent containers. Backup your main macOS volume with Time Machine or another backup solution. Disconnect your external backup drive. Last resort (very unlikely to work!): Open Terminal and enter diskutil cs list With the UUID of the Logical Volume (usually the last UUID in the output of the previous command) enter diskutil cs resizeStack <lvUUID> <size> e.g. diskutil cs resizeStack F95E3156-8CFA-4B73-98F7-5A6D9644CA6F 248500m The command usually fails with some error. But if you get Error: -69720: There is not enough free space... use a slightly smaller size like 248200m until you are successful. Common approach (with a Time Machine backup): Restart to Internet Recovery Mode by pressing alt cmd R at startup. Booting to Recovery Mode is not conducive because the Recovery HD will be moved in one of the steps below. And you can't move a partition used as a boot volume. The prerequisites are the latest firmware update installed, either E thernet or WLAN (WPA/WPA2) and a router with DHCP activated. 
On a 50 Mbps-line it takes about 4 min (presenting a small animated globe) to boot into a recovery netboot image which usually is loaded from an Apple/Akamai server. I recommend Ethernet because it's more reliable. If you are restricted to Wi-Fi and the boot process fails, just restart your Mac with the same shortcut until you succeed booting. Alternatively you may start from a bootable installer thumb drive (preferably Sierra) or a thumb drive containing a full system (Sierra). Open Disk Utility in the macOS Utilities window Repartition your internal drive to one volume/GUID partition table/JHFS+. Quit Disk Utility. Attach your backup drive. Open Restore from Time Machine Backup in the macOS Utilities window and restore the backup to your newly created main volume.
{ "pile_set_name": "StackExchange" }
Q: ASP.NET form validator display dynamic, adjust CSS of input? I've got my dynamic validator working, (its creating a span on invalidation client-side), is there a way to control the styling of the invalid input on client-side val? I want to give it a redish background. Looking for a light-weight simplistic solution but open to all options. Thanks! A: To Exactly quote a previous answer of mine: This article might help you: http://msdn.microsoft.com/en-us/library/aa479045.aspx Particularly this section (look for "Client Side Validation" then under there, "Special Effects"): <asp:Label id=lblZip runat=server Text="Zip Code:"/> <asp:TextBox id=txtZip runat=server OnChange="txtZipOnChange();" /></asp:TextBox><br> <asp:RegularExpressionValidator id=valZip runat=server ControlToValidate=txtZip ErrorMessage="Invalid Zip Code" ValidationExpression="[0-9]{5}" /><br> <script language=javascript> function txtZipOnChange() { // Do nothing if client validation is not active if (typeof(Page_Validators) == "undefined") return; // Change the color of the label txtZip.style.color = valZip.isvalid ? "Black" : "Red"; } </script> There is still some wiring up that needs to be done, which you may be able to tidy up with some jQuery or the like
{ "pile_set_name": "StackExchange" }
Q: Long Polling loop de request em phpt? Preciso criar uma forma de notificar os usuários do sistema cada vez que uma tabela do banco de dados recebe um novo registro. Estou usando como base um exemplo publicado aqui Server Push: Long Polling porém estou com alguns problemas, o primeiro deles é que analisando o network do navegador ele faz diversas solicitações até o navegador travar... e como consequência ele fica repetindo a lista de registro do banco de dados, segue o código do server.php: <?php include ('config.php'); include ('database.php'); $requestedTimestamp = isset ( $_GET [ 'entry' ] ) ? (int)$_GET [ 'entry' ] : time(); while ( true ) { $stmt = $PDO->prepare("SELECT * FROM notifications WHERE entry >= :requestedTimestamp" ); $stmt->bindParam(':requestedTimestamp', $requestedTimestamp); $stmt->execute(); $rows = $stmt->fetchAll(PDO::FETCH_ASSOC); if ( count( $rows ) > 0 ){ $json = json_encode( $rows ); echo $json; break; }else{ sleep( 2 ); continue; } } e esse é o js: function getContent( entry ) { var queryString = { 'entry' : entry }; $.get('./database/server.php', queryString, function( data ) { var obj = jQuery.parseJSON( data ), string = ""; // lista obj json for (var i in obj) { var classy = obj[i]['readable'] == 0 ? 'new-notfication' : ''; string += '<div class="table notification-table '+classy+'">'; string += '<div class="td"><img src="img/default/default-user-image.png" alt=""></div>'; string += '<div class="td">'; string += '<p><a href="#"><strong>'+obj[i]['title']+'</strong></a></p>'; string += '<p>'+obj[i]['msg'].substr(0,66)+'...</p>'; string += '</div>'; string += '<div class="td"><a href="#" class="close"><span class="fa fa-times" aria-hidden="true"></span></a></div>'; string += '</div>'; $('#entry').append(string); } //reconecta // getContent(data); }); } getContent(); na primeira execução da função ele repete os registro e traz 6 valores. 
A: Procurei mais a respeito do long polling para entender como funciona e desenvolver a lógica: server.php # includes de conexão include ('config.php'); include ('database.php'); include ('conection.php'); # define momento em que começou a roda o Polling $start = time(); # Defini tempo maximo da conexão $timeRequest = 55; #verifica se ouve post if (isset( $_POST['entry'] )) { # previne injections e tags invalidas $entry = DBEscape( trim( $_POST['entry'] ) ); }else{ # pega hora atual do servidor $getTime = $PDO->prepare("SELECT NOW() as now"); # executa query $getTime->execute(); # transforma o resultado em objeto $result = $getTime->fetchObject(); # atribui valor do resultado $entry = $result->now; } # pepara a query para buscar os registro novos $stmt = $PDO->prepare("SELECT * FROM notifications WHERE entry > '$entry'" ); # controle para saber se ouve novo registro $newData = false; # array para as novas notificações $notify = array(); # mantem a conexão aberta até o limite maximo estabelecido em $start while (!$newData and (time()-$start) < $timeRequest ) { $stmt->execute(); # caso encontre resultado fecha a conexão while ($result = $stmt->fetchAll(PDO::FETCH_ASSOC)) { # atribui valor do resultado $notify = $result; # encerra a conexão $newData = true; } # aguarda 1/2 segundo para abrir a conexão usleep(500000); } # pega novamente a hora do servidor $getTime = $PDO->prepare("SELECT NOW() as now"); # executa query $getTime->execute(); # transforma resultado em objeto $result = $getTime->fetchObject(); # atribui valor do resultado $entry = $result->now; # converte tudo em um array $data = array('notify' => $notify, 'entry' => $entry ); # envia dados em formato Json para o front-end echo json_encode($data); # encerra execução de escript php exit; front-end: // chamada a função getNotifications(); function getNotifications( entry ) { // cria array de dados var data = {}; // verifica se existe um tempo definido if(typeof entry != 'undefined'){ // atribui o tempo 
definido ao objeto data.entry = entry; } // envia os dados para o script de polling $.post('./database/server.php', data, function(result) { for(i in result.notify){ // passa os valores para a lista de notitficações $('#entry').append(result.notify[i].title); } // reinicia a busca por dados getNotifications(result.entry); }, 'json'); }
{ "pile_set_name": "StackExchange" }
Q: Domain User logged in with Temp Profile in Windows Server 2008 R2 64 Bit OS I am new to the forum. I am having a trouble with Windows server 2008 R2 Domain. I am having IBM 3650 Server installed with Windows Server 2008 R2 64 Bit domain. I am using Thin Clients as Nodes. Model No of thin client is : NComputing L300. Now I have created Domain users. All the thin Client are successfully connected to the server over the LAN and ask to provide the user log in details i.e. credentials. Now if I login using one of the user and then Go to "c"\Users\" folder what I can see is a "Temp.domainname" folder is created as profile folder instead of the "username" profile. Due to this every time the user logs off, the "temp.domainname" profile is deleted automatically. This is happening with all the users. More over I tried log in using the uers credentials locally in server it self in order to assure that is related to thin client or some thing is wrong with the server. But even if I login locally in the server using the user credential, it loads with "Temp.domainname" profile instead of the "username" profile. Can you please let me know how can I get out of this? A: This happens a lot when you have Mandatory Profiles configured for Server 2003 and then move to 2008+ without planning. Profiles are different between these versions of Windows, so a mandatory 2003/XP profile won't load on Vista+/2008+, which will cause a temporary profile to load. My hunch is that you just have misconfigured profiles. If you don't, it could also be caused by disk corruption or some other oddity. Check the event logs for more descriptive details and update your question if this doesn't resolve it.
{ "pile_set_name": "StackExchange" }
Q: XHTML possibly blocking jQuery method .append? I was trying to improve my website by adding parts that will be there on every page and automatically update as I update a file: I know there are many methods, but I'm still a beginner and I decided to use jQuery. The problem is that I use a <table> because I have a bar div that needs to be on the side of what's in the HTML. I prepended the header, the opening tags of the table (including tr and td) and their content, and appended the closing of the tags. The HTML contained whatever stayed near the bar: and it does! But it's under the bar: I'm not really sure what happened: my thought was that XHTML was closing the tags automatically before appending, since in the HTML it didn't find them... I'll include the code that we're interested in: $(document).ready(function(){ $('body').prepend('<table><tr><td>Text</td><td>'); $('body').append('</td></tr></table>'); }) That is the jQuery code, and I can assure you I have proper links to the script and to jQuery (+UI), which work. Now everything should be simple: you just add your HTML code in the body of the .html file... But nope! <html> <head> <title>Cheese</title> <script to my script and jQuery library/> </head> <body> <!-- here should go what I prepended, and it does, but closes the <table></td></tr> tags as well --> This should be to the side of 'Text' and contained in the table, in a td <!-- here should go what I appended, but strangely it doesn't --> </body> </html> Have I done anything wrong? If you don't see anything wrong here I'll send you all of the code, thank you in advance! A: You cannot append/prepend/insert tags into the DOM, only complete HTML elements. When you pass partial fragments of HTML to be inserted into the DOM, the browser (or maybe jQuery) does the best it can with it.
What you should do is build a table with the desired content and insert that into the DOM: $(document).ready(function(){ $('body').html('<table><tr><td>Text</td><td>'+$('body').html()+'</td></tr></table>'); }) http://jsfiddle.net/yY92d/
{ "pile_set_name": "StackExchange" }
Q: Float: right doesn't work with an element I'm trying to make a responsive navigation bar, and I want an element on the right of the nav, like this: Example I tried float: right, but it doesn't work; after that, I tried margin-left, but if I use margin-left, it won't be responsive. The element in question is a li element (it currently floats left like the rest of the nav). This is all my HTML: <!DOCTYPE html> <html lang="es"> <head> <meta charset="UTF-8"> <link rel="stylesheet" type="text/css" href="css/style.css"> <title>Responsive Menu</title> </head> <body> <div class="container"> <a class="toggleMenu" href="#">Menu</a> <ul class="nav"> <li class="test"> <a href="#">I amsterdam</a> </li> <li class="test"> <a href="#">La Ciudad</a> <ul> <li> <a href="#">Museos</a> <ul> <li><a href="#">Museo van Gogh</a></li> <li><a href="#">Rijksmuseum</a></li> <li><a href="#">Casa de Ana Frank</a></li> <li><a href="#">Museo Casa de Rembrandt</a></li> </ul> </li> <li> <a href="#">Lugares</a> <ul> <li><a href="#">Vondelpark</a></li> <li><a href="#">Barrio de Jordan</a></li> <li><a href="#">Red Light Distrit</a></li> </ul> </li> </ul> </li> <li class="test"> <a href="#">Alojamiento</a> <ul> <li> <a href="#">Hoteles</a> <ul> <li><a href="#">Los mejores Hoteles</a></li> <li><a href="#">Hoteles más económicos</a></li> </ul> </li> <li> <a href="#">Albergues</a> </li> </ul> </li> <li class="test"> <a href="#">Eventos</a> <ul> <li> <a href="#">Concierto Kanye West</a> </li> <li> <a href="#">Ajax - Real Madrid</a> </li> <li> <a href="#">Amsterdam Fashion Week</a> </li> </ul> </li> <li class="languages"> <a href="#">Languages</a> <ul> <li> <a href="#">English</a> </li> <li> <a href="#">Español</a> </li> <li> <a href="#">Français</a> </li> <li> <a href="#">Deutsch</a> </li> </ul> </li> </ul> </div> And my CSS: I thought that it could be a conflict with other styles, but the class .languages is the last one; then I tried with an ID, but I got this: The CSS is this: body, nav, ul, li, a { margin: 0; padding: 0; } body { width:
100%; font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; background-color: red; } a { text-decoration: none; } .container { width: 100%; } .toggleMenu { display: none; background: #666; padding: 10px 15px; color: #fff; } .nav { list-style: none; *zoom: 1; background:#175e4c; position: relative; } .nav:before, .nav:after { content: " "; display: table; } .nav:after { clear: both; } .nav ul { list-style: none; width: 9em; } .nav a { padding: 10px 15px; color:#fff; *zoom: 1; } .nav > li { float: left; border-top: 1px solid #104336; z-index: 200; } .nav > li > a { display: block; } .nav li ul { position: absolute; left: -9999px; z-index: 100; } .nav li li a { display: block; background: #1d7a62; position: relative; z-index:100; border-top: 1px solid #175e4c; } .nav li li li a { background:#249578; z-index:200; border-top: 1px solid #1d7a62; } #languages{ float: right; } A: I'm not sure why it wasn't working for you. Using your example code but simply adding id="languages" to the li element seems to work. See https://jsfiddle.net/eaLow5h3/
{ "pile_set_name": "StackExchange" }
Q: Calculating primes: Could this be any faster? I've ported a prime-number calculation program from primes.pyx to C++ for benchmark purpose. Since I wrote it in C++, I thought that my program would be faster than the original. However, mine took 25.8 ms at the fastest, while the original took only 1.45 ms on the same machine. I tested 10 times each, but got similar results (25.8~51.7ms vs 1.45~1.47ms). But, why? Here's my code: #include <iostream> #include <vector> #include <chrono> using namespace std; vector<int> primes(size_t nb_primes) { int n; vector<int> p; p.reserve(nb_primes); n = 2; while (p.size() < nb_primes) { bool other = true; for (size_t i = 0; i < p.size(); i++) { if (n % p[i] == 0) { other = false; break; } } if (other) p.push_back(n); n += 1; } return p; } int main() { auto start = std::chrono::high_resolution_clock::now(); vector<int> p = primes(1000); //for (auto i : p) // cout << i << ' '; auto finish = std::chrono::high_resolution_clock::now(); std::chrono::duration<double> elapsed = finish - start; std::cout << "Elapsed Time: " << elapsed.count() << " s\n"; } These algorithms are exactly the same, I believe. Please don't limit the check up to sqrt(n), to achieve a sieve of Eratosthenes. I need to compare with the original. One thing I'm worried is the for ... else statement in the original. I borrowed the idea to use the other flag from Username: haccks. If you can apply another for ... else method, please go ahead. My Windows 10 machine (i5) spec: Clock Frequency: 1.60GHz 1.80GHz Memory: 8.00GB I wrote the original version on Anaconda Prompt/Python 3.8. I write the C++ version on Visual Studio 2019. If you need more information, please ask me. Thank you in advance. A: There are some optimizations that can make your code better, even with compiler optimizations turned on: Pre-allocate the vector and treat it like an array. Use a variable to keep track of the length. 
Putting these together, I found about a 10% increase in speed: vector<int> primes(size_t nb_primes) { vector<int> p(nb_primes,2); int n = 2; size_t len_p = 0; while (len_p < nb_primes) { bool other = true; for (size_t i = 0; i < len_p; i++) { if (n % p[i] == 0) { other = false; break; } } if (other) p[len_p++] = n; n += 1; } return p; }
{ "pile_set_name": "StackExchange" }
Q: Are these 3 sequences of commands equivalent in git? I'm curious whether the following are equivalent. Let's say latest master has head C and my branch is based off C and has one commit D. origin/master | A -> B -> C \ foo | D Then let's say master changes. origin/master | A -> B -> C -> E \ foo | D I'm curious whether all of git pull origin/master git fetch && git merge origin/master git reset --soft HEAD~ && git stash save && git fetch && git reset --hard origin/master && git stash pop are expected to be equivalent and whether the algorithms that git runs for each are logically equivalent. A: All your arrows and branch labels are misleading, because they are all quite sensible. Git, however, works backwards. :-) Let's draw them the other way, the way Git does them: A <-B <-C <-E <-- origin/master \ D <-- foo That is, the name origin/master contains the hash ID of commit E. Commit E contains the hash ID of commit C, which contains the hash ID of commit B, which contains the hash ID of commit A. Commit A has no other hash ID, because it's the first commit and can't have a parent, so it points nowhere. All "interior" arrows point backwards. They have to, because commits, like all Git objects, are read-only once created. We know, when we create a new commit, what its parent hash ID is, but we don't know, when we create it, what its child or children will be, if and when they are ever created. As a result there's no need to draw in the parent arrows themselves; we can just connect the commits, remembering that they point backwards. Branch names, on the other hand, move all the time. So it's a good idea to keep the branch-name arrows. Let's add in the name master and an arrow, and note that master is the current branch (HEAD) as well: E <-- origin/master / A--B--C <-- master (HEAD) \ D <-- foo git pull origin/master This isn't quite a valid Git command. The actual command is the peculiarly-spelled git pull origin master.
If you are a newcomer to Git, I recommend avoiding git pull entirely for a while. I think it mostly confuses people. All it really does is run two other Git commands for you: git fetch (passing on the rest of the arguments you gave it, if any, or a remote-name and branch-name it extracts from the current branch if not), followed by (normally) git merge. Once you are familiar with the other Git commands and know what to expect from them, you can start using git pull as a convenience, provided that you find it convenient (sometimes it is, sometimes it's not). Let's look instead/first at git fetch. What git fetch does is call up another Git and ask it about its branches and tags. This second Git has its own, independent master. Your Git finds out which commit hash their Git is identifying by their master. Your Git then obtains that commit by its hash ID. The hash ID is the "true name" of the commit—a name like master is just a moveable pointer, containing a hash ID, and the hash ID that your master, or their master, has, changes over time. If their master names commit E, and you already have commit E, your Git does not have to download commit E. Your Git simply changes your own origin/master to point to commit E (which is no change at all, if it already points there). If you don't have commit E yet, your Git gets it from their Git. (Along with commit E, your Git gets anything they have, that you need, that you don't already have—such as commits C, B, and/or A and/or all the tree and blob objects any of those might need. You will usually have most of these already, but whatever you don't have, they will package up and ship to you, so that your Git can set your origin/master.) If their master names some other commit (any of A through D, or some commit you don't have yet), your Git will download whatever it needs so that it has that commit and all its auxiliary data and other reachable commits, then make your origin/master point to that commit by its hash ID. 
I'll assume for now that their master still points to E, though. That's the end of all the work for git fetch: it obtains the various objects, and then updates your remote-tracking names (your origin/* names). Well, there's one more thing it does, of historical interest: it writes every name it fetched to .git/FETCH_HEAD. If you run git fetch, it will default to fetching all the branch and tag names from origin; if you run git fetch origin master, you tell it to fetch only one name, the one matching master (hence branch master), from the other Git that you call origin. git fetch && git merge origin/master After running git fetch origin master, git pull origin master will, in effect, run git merge origin/master. It does so via the special FETCH_HEAD file, rather than by literally running git merge origin/master—but git pull origin master and git fetch && git merge origin/master will, in this case, do the same thing. Note that git fetch is the unrestricted form: update all remote-tracking names. If you're not currently on your own master, or your master has a different upstream setting, git pull will run git fetch origin some-other-name, but git pull origin master will explicitly run git fetch origin master. Then it will run git merge with a hash ID extracted out of .git/FETCH_HEAD (and a -m argument as well). So there are a lot of differences here, but most are usually minor, assuming you're on master with its upstream set to origin/master. The git merge step is a fair bit more complicated. This: Checks whether the index and current (HEAD) commit match, or if not, whether the merge looks safe. Ideally they should match (you should have run git commit if not). It's tricky to back out of a failed merge if the index and the HEAD commit don't match (although git merge --abort will do its best). Uses the current commit's hash ID and the merge target commit's hash ID to locate two specific commits. 
Since HEAD names master and master points to C, the current commit is C and the target is E. Git doesn't have a single consistent name for the target commit; I like to call the HEAD commit L for left/local/--ours and the other one R for right/remote/--theirs. It won't matter much here, though, as we'll see in a moment. Computes the merge base of the L and R commits. The merge base is, simply put (somewhat too simply in hard cases), the first place the two branches come together when we start at both L and R and work backwards. In this case, that's commit L (aka C) itself! If there is no common ancestor merge base commit, fail (in modern Git). If the merge base is not one of the two L and R commits, do a true merge. If the common base is R, do nothing: there is nothing to merge. If the merge base is L / HEAD, do a fast-forward operation if allowed. If not allowed, resort to a true merge. Since the merge base is L, and you did not say --no-ff, Git will use the fast-forward operation for this particular merge. The result will be to check out commit E and move the name master to point to E: E <-- master (HEAD), origin/master / A--B--C \ D <-- foo Finally: git reset --soft HEAD~ && git stash save && git fetch && git reset --hard origin/master && git stash pop This one is much more complex. A soft reset using HEAD~1 tells Git to: Find the current hash ID by reading .git/HEAD. This will normally contain a string like ref: refs/heads/master, which tells Git that the current branch is master. If you're in "detached HEAD" mode, .git/HEAD will have a raw hash ID in it, rather than a branch name; this affects step 4 below. Otherwise, read the branch name itself to find the hash ID. Read that commit's parent hash ID (HEAD~ means HEAD~1 which means "one parent back along the first-parent line of ancestry"). Don't touch the index (--soft), and don't touch the work-tree (--soft or --mixed). Write the new hash back into the current branch. 
Or, if HEAD is detached, write the new hash directly into .git/HEAD. Since we have not touched the index and work-tree, they remain unchanged, regardless of whether we had a branch name to rewrite in step 4. Assuming that HEAD names master, and that the index and work-tree match commit C (to which master points), this soft reset will change the name master to point to commit B, leaving the index and work-tree matching the contents of commit C. Next, git stash save writes two commits, not on any branch. One contains the contents of the index, and one contains the contents of the work-tree. (It doesn't matter that these two match each other, or that they match commit C for that matter—that just means that the two new commits use the existing top level tree object from commit C, which saves space.) The resulting diagram now looks like this: E <-- origin/master / C--D <-- foo / A--B <-- master (HEAD) |\ i-w <-- refs/stash (I call the i-w commit clump, to which refs/stash points, a stash bag, because it hangs off the commit that was current when you ran git stash save.) The git fetch step now does whatever it does, possibly adding more commits and/or moving origin/master to point somewhere. We'll assume here that it leaves origin/master pointing to commit E. The git reset --hard origin/master now turns origin/master into a hash ID. This was step 1 above in our earlier git reset, but this time we don't read .git/HEAD, we just read the value of origin/master: git rev-parse origin/master Note that we can do the same to compute HEAD~1: git rev-parse HEAD~1 At any time, git rev-parse can turn a name into a raw hash ID, whenever that's what we need. For git reset, that's what we need: what commit are we resetting to? The git reset now writes that hash ID into master, and this time, because we used --hard, writes that commit's tree into the index and updates the work-tree to match. 
While the index and work-tree are not in the diagram, we now have this: E <-- master (HEAD), origin/master / C--D <-- foo / A--B |\ i-w <-- refs/stash (we could draw the A-B-C-D line horizontally here, or go back to having D down one row except for the refs/stash in the way). Last, the git stash pop takes whatever is in the w commit and tries to merge it, using git merge-recursive, with commit B as the merge base, the current index turned into a tree as the L tree—since we just git reset --hard to commit E, that's E as L—and the saved w commit as R. This merge may, depending on what has happened since commit B, see that there is no work to be done, and do nothing. If it does nothing, or does something and thinks the merge succeeded, it drops the stash: E <-- master (HEAD), origin/master / C--D <-- foo / A--B It does not make any new commit, so the index and/or work-tree may now differ from the snapshot in commit E, if the merge did some work. There are a number of important things to note here: git pull really is git fetch followed by a second Git command. The syntax for git pull is odd, and either of the two sub-commands it runs can fail, although a failure of git fetch is unlikely (and generally pretty harmless except for stopping the pull). A failure during git merge is common and requires manual intervention to complete or abort the merge. It's a good idea to know what you are doing here, including whether you're in a git merge that needs help; and to know that, it's good to run git merge yourself the first however-many times. git merge itself is quite complicated. It can do a fast-forward, which is not a merge at all (and never encounters merge conflicts). It can do nothing at all. Or, it can do a real merge, which can fail with merge conflicts. To find out what it will do, you must find the merge base, which requires looking at the commit graph (git log --graph). 
Some of the clicky web interfaces, such as those on GitHub, hide the graph from you, and make it difficult or impossible to tell what will happen. git stash is also quite complicated internally. When all goes well, it seems simple, but when it fails, it fails rather spectacularly. git reset has too many modes of operation to make it easy to use. With --soft, --mixed, and --hard, it works one way, and the three options just tell it when to stop working: after moving the current branch, or after resetting the index, or after resetting both index and work-tree. With other options, it works another (different) way. Using git stash for anything complicated is tricky. All it does is make commits anyway, so if you are doing something complicated, just make a commit that you can see and work with. You can remove it later with git reset with --soft or --mixed.
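As a concrete check of the fast-forward case described above, here is a small throwaway experiment (a sketch, assuming git is installed; everything happens in a temporary directory). It builds a "remote" with commit C, clones it, advances the remote to E, then fetches and merges; because the merge base is HEAD, the merge fast-forwards and the local branch ends up at the same commit as origin's:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# a bare-bones "remote" repository with one commit (C)
git init -q remote
git -C remote config user.email you@example.com
git -C remote config user.name you
echo one > remote/f
git -C remote add f
git -C remote commit -qm C

# clone it, then advance the remote by one commit (E)
git clone -q remote local
echo two >> remote/f
git -C remote commit -qam E

cd local
branch=$(git rev-parse --abbrev-ref HEAD)   # master or main, depending on git version
git fetch -q
git merge -q "origin/$branch"               # merge base is HEAD, so this fast-forwards
test "$(git rev-parse HEAD)" = "$(git rev-parse "origin/$branch")" && echo fast-forwarded
```

The script asks git rev-parse --abbrev-ref HEAD for the branch name rather than hard-coding master, because newer Git versions may default to main.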
{ "pile_set_name": "StackExchange" }
Q: Max or min of $F(x) = \int_0^{2x-x^2} \cos\Big(\frac {1}{1+t^2}\Big) \,dt$ $$F(x) = \int_0^{2x-x^2} \cos\left(\frac {1}{1+t^2}\right) \,dt$$ Does the function have a max or min? Can someone help me with this? How can I calculate the maximum and minimum? A: Hint: Let: $$I(x) = \int_0^x \cos\left(\frac{1}{1+t^2}\right)\, {\rm d}t \quad \text{and} \quad g(x) = 2x-x^2.$$ This way, $F = I \circ g$ and you get $F'(x) = I'(g(x)) \, g'(x)$ by the chain rule. To compute $I'$ use the fundamental theorem of calculus. Look for points where $F'(x) = 0$.
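Carrying the hint through to an answer (a short sketch added here): with $g(x) = 2x - x^2$, the chain rule and the fundamental theorem of calculus give $$F'(x) = \cos\left(\frac{1}{1+(2x-x^2)^2}\right)\,(2-2x).$$ Since $0 < \frac{1}{1+t^2} \le 1 < \frac{\pi}{2}$, the cosine factor is strictly positive, so $F'(x)$ has the sign of $2-2x$: $F$ increases for $x<1$ and decreases for $x>1$. Hence $F$ has a global maximum at $x=1$, namely $F(1) = \int_0^1 \cos\left(\frac{1}{1+t^2}\right)dt$, and no minimum (as $x \to \pm\infty$ we have $2x-x^2 \to -\infty$, and since the integrand is bounded below by $\cos(1) > 0$, $F(x) \to -\infty$).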
{ "pile_set_name": "StackExchange" }
Q: Extension of rings of integers always locally free In his answer to this question, Andrea claims that if $A \subset B$ is an extension of rings of integers of number fields, $B$ is locally free over $A$. How can one prove this? Furthermore, I am looking for an example (with $A$ and $B$ as above) where $B$ is not a free $A$-module (in case $A = \mathbb{Z}$, $B$ is always free over $A$, since it is a finitely generated, torsion-free $\mathbb{Z}$-module). A: The extension $A\subseteq B$ is always finite (because $B$ is finite over $\mathbf{Z}$). Since $B$ is a torsion-free $A$-module, it is $A$-flat (since $A$ is Dedekind). Over a Noetherian ring, finite flat is the same as finite locally free.
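A remark on the second part of the question, following the standard structure theory over Dedekind domains (added for context): since $B$ is finite and torsion-free over the Dedekind ring $A$, it is projective of rank $n = [L:K]$, and by Steinitz's theorem $$B \cong A^{\,n-1} \oplus I$$ for some fractional ideal $I$ of $A$. Then $B$ is free over $A$ exactly when the class of $I$ (the Steinitz class) is trivial in the class group of $A$. In particular, non-free examples can only occur when $A$ has nontrivial class group (consistent with the observation that everything is free over $A = \mathbf{Z}$), and extensions with nontrivial Steinitz class are known to exist.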
{ "pile_set_name": "StackExchange" }
Q: INSERT NULL if a field is left blank (decimal) MySQL/PHP I have an if NULL echo "Message" else $value. There is some JavaScript hiding the input field, so they check it and enter the value. If they don't check it to enter the field, the db enters 0.00 and doesn't specify NULL so my PHP if statement works. How do I either set the variable as NULL if it's blank, or set NULL in the INSERT statement? `myTableField` decimal(10,2) default NULL, A: Not sure exactly what you're after, but you can run the below if statement to check what value was entered, and set it to null accordingly: if(!is_numeric($fieldvalue) || $fieldvalue==0){ // if the entered value isn't a number (i.e. isn't entered, or invalid) or if the value is zero, which sounded like it was your default $fieldvalue=NULL; // could also use unset($fieldvalue); } If the variable $fieldvalue is set to null (or un-set), it will be inserted as a NULL in your DB according to your field definitions. Make sure your insert statement references the value without ' or " encapsulating figures, however (not needed as it's a decimal field).
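The same idea sketched in Python, with sqlite3 standing in for MySQL purely so the example is self-contained (the table and variable names are made up; the PHP/MySQL logic is identical): bind None when the field is blank, non-numeric, or zero, and the driver stores a real NULL instead of 0.00.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (myTableField DECIMAL(10,2) DEFAULT NULL)")

fieldvalue = ""          # pretend the form field was left blank
try:
    value = float(fieldvalue)
except ValueError:
    value = None         # blank / non-numeric input
if value == 0:
    value = None         # zero was the unwanted default, so store NULL instead

# binding None makes the driver write NULL into the row
conn.execute("INSERT INTO t (myTableField) VALUES (?)", (value,))
stored = conn.execute("SELECT myTableField FROM t").fetchone()[0]
print(stored)  # None, i.e. NULL in the table
```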
{ "pile_set_name": "StackExchange" }
Q: Conditional, term-wise addition of sublist slices I have a list of lists: ls = [['01',2,3,4], ['02',5,2,4], ['03',2,6,4], ['01',1,3,4]] I want to add the terms of the sublist after the string with their corresponding ones from all other sublists beginning with the same string. So the result should look like this: result = [['01',3,6,8], ['02',5,2,4], ['03',2,6,4]] # the first and last original sublists were "combined" I tried this code: ls=[['01',2,3,4],['02',5,2,4],['03',2,6,4],['01',1,3,4]] totals = {} for key, value1,value2,value3 in ls: totals[key] = totals.get(key, 0) + value1 totals[key] = totals.get(key, 0) + value2 totals[key] = totals.get(key, 0) + value3 print(totals) but it is not my goal: {'01': 17, '02': 11, '03': 12} A: A small amendment to your code: I have initialised the values [0, 0, 0] when there is a new 'key'. ls = [['01', 2, 3, 4],['02', 5, 2, 4],['03', 2, 6, 4],['01', 1, 3, 4]] totals = {} for key, value1, value2, value3 in ls: if key not in totals.keys(): totals[key] = [0, 0, 0] totals[key][0] += value1 totals[key][1] += value2 totals[key][2] += value3 totals = [[k] + v for k, v in totals.items()] print(totals) # Output: # [['01', 3, 6, 8], ['02', 5, 2, 4], ['03', 2, 6, 4]] The first problem with your code is that totals.get(key, 0) will give you 0, not our desired initial value [0, 0, 0], when you have a new key. Second, you are summing all the values of the list in your for loop instead of summing the lists element-wise. Therefore, you are getting {'01': 17, '02': 11, '03': 12}
{ "pile_set_name": "StackExchange" }
Q: What does bar plot compute in Y-axis in seaborn? I am visualizing the titanic dataset. I created 9 different age categories and was trying to visualize the age_categories vs Survived using a bar chart. I wrote the following piece of code: age_cats = [1, 2, 3, 4, 5, 6, 7, 8, 9] df_train['Age_Cats'] = pd.cut(df_train['Age'], 9, labels = age_cats) sns.barplot(x = 'Age_Cats', y = 'Survived', hue = 'Sex', data = df_train) I am not understanding what do the numbers on the Y-axis represent? My assumption is: {n(Survived = 1)}/{n(Survived = 1) + n(Survived = 0)} or the ratio of people survived out of all people in that category. But how is seaborn calculating it? Or do the numbers on the Y-axis represent anything else? A: The bar plot shows the survival rate or percentage of people who survived. E.g. in the age class 1 60% of all males survived. In the age class 7 less than 15% of all males survived. This is calculated by taking the mean of the survival variable for that age class. E.g. if you had 3 people, 2 of which survived, this variable could look like [1,0,1], the mean of this array is (1+0+1)/3=0.66; the bar plot would hence show a bar up to 0.66.
{ "pile_set_name": "StackExchange" }
Q: GSL Fast-Fourier Transform - Non-zero Imaginary for Transformed Gaussian? As an extension to this question that I asked. The Fourier transform of a real Gaussian is a real Gaussian. Now of course a DFT of a set of points that only resemble a Gaussian will not always be a perfect Gaussian, but it should certainly be close. In the code below I'm taking this [discrete] Fourier transform using GSL. Aside from the issue of the returned/transformed real components (outlined in linked question), I'm getting a weird result for the imaginary component (which should be identically zero). Granted, it's very small in magnitude, but its still weird. What is the cause for this asymmetric & funky output? #include <gsl/gsl_fft_complex.h> #include <gsl/gsl_errno.h> #include <fstream> #include <iostream> #include <iomanip> #define REAL(z,i) ((z)[2*(i)]) //complex arrays stored as [Re(z0),Im(z0),Re(z1),Im(z1),...] #define IMAG(z,i) ((z)[2*(i)+1]) #define MODU(z,i) ((z)[2*(i)])*((z)[2*(i)])+((z)[2*(i)+1])*((z)[2*(i)+1]) #define PI 3.14159265359 using namespace std; int main(){ int n = pow(2,9); double data[2*n]; double N = (double) n; ofstream file_out("out.txt"); double xmin=-10.; double xmax=10.; double dx=(xmax-xmin)/N; double x=xmin; for (int i=0; i<n; ++i){ REAL(data,i)=exp(-100.*x*x); IMAG(data,i)=0.; x+=dx; } gsl_fft_complex_radix2_forward(data, 1, n); for (int i=0; i<n; ++i){ file_out<<(i-n/2)<<" "<<IMAG(data,((i+n/2)%n))<<'\n'; } file_out.close(); } A: Your result for the imaginary part is correct and expected. The difference to zero (10^-15) is less than accuracy that you give to pi (12 digits, pi is used in the FFT, but I'm can't know whether you are overriding the pi inside the routine). The FFT of a real function is not in general a real function. 
When you do the math analytically you integrate over the following expression: f(t) e^{i w t} = f(t) cos wt + i f(t) sin wt, so only if the function f(t) is real and even will the imaginary part (which is otherwise odd) vanish during integration. This has little meaning though, since the real part and imaginary part have physical meaning only in special cases. Direct physical meaning is in the abs value (magnitude spectrum), the abs. value squared (intensity spectrum) and the phase or angle (phase spectrum). A more significant offset from zero in the imaginary part would happen if it wasn't centered at the center of your time vector. Try shifting the x vector by some fraction of dx. See below how the shift of the input by dx/2 (right column) affects the imaginary part, but not the magnitude (example written in Python, Numpy). from __future__ import division import numpy as np import matplotlib.pyplot as p %matplotlib inline n=512 # number of samples 2**9 x0,x1=-10,10 dx=(x1-x0)/n x= np.arange(-10,10,dx) # even number, asymmetric range [-10, 10-dx] #make signal s1= np.exp(-100*x**2) s2= np.exp(-100*(x+dx/2 )**2) #make ffts f1=np.fft.fftshift(np.fft.fft(s1)) f2=np.fft.fftshift(np.fft.fft(s2)) #plots p.figure(figsize=(16,12)) p.subplot(421) p.title('gaussian (just ctr shown)') p.plot(s1[250:262]) p.subplot(422) p.title('same, shifted by dx/2') p.plot(s2[250:262]) p.subplot(423) p.plot(np.imag(f1)) p.title('imaginary part of FFT') p.subplot(424) p.plot(np.imag(f2)) p.subplot(425) p.plot(np.real(f1)) p.title('real part of FFT') p.subplot(426) p.plot(np.real(f2)) p.subplot(427) p.plot(np.abs(f1)) p.title('abs. value of FFT') p.subplot(428) p.plot(np.abs(f2))
{ "pile_set_name": "StackExchange" }
Q: Go: Act on Signal while not Blocking Operations I want to fire the ps command continuously in a goroutine to monitor mem and cpu usages. I didn't use top because top doesn't allow me to select columns as ps does. This goroutine needs to receive a stop signal to stop the command, but I don't know how not to block running the command while waiting the signal. For top I can do: top := exec.Command("top") <-stop // blocking top.Process.Signal(os.Kill) But for ps if I do: ps := exec.Command("ps") for { ps.Run() <-stop } The above code will block on stop. I want to keep firing ps.Run(), while being able to stop when a stop signal is ready. Thanks. A: One way that you could achieve this is by utilizing the for/select timeout idiom, there are a couple of similar methods of doing this. Take this trivial example: package main import ( "fmt" "time" ) func main() { abort := make(chan struct{}) go func() { for { select { case <-abort: return case <-time.After(1 * time.Second): // replace fmt.Println() with the command you wish to run fmt.Println("tick") } } }() // replace time.Sleep() with code waiting for 'abort' command input time.Sleep(10 * time.Second) abort <- struct{}{} } To modify this example to work in your circumstance place the code that you want to run in the <-time.After(): case, which (in this instance) will run once per second, if no other case is available to receive for that duration. And instead of time.Sleep() which I placed at the end, put the code that will interrupt the <-time.After(): case, sending <- struct{}{} on the abort channel (or whatever you name it). NOTE: In an earlier version of this answer I had abort as a chan bool, as I like the clarity of <-abort true and don't consider chan struct{} to be as clear, I opted to change it in this example however, as <- struct{}{} isn't unclear, especially once you've gotten used to the pattern. 
Also, if you want the command to execute on each iteration of the for loop and not wait for a timeout then you can make that case default:, remove <-time.After() and it will run on each iteration of the loop where another channel is not ready to receive. You can play with this example in the playground if you'd like, although it will not allow syscalls, or the default: case example to be run in that environment.
{ "pile_set_name": "StackExchange" }
Q: quadratic formula - getting wrong values I know this is a stupid question but, I posted this thread up not long a go Function derivatives. I'm trying to replicate this question. However, I can't seem to work out how I got the value $-2$ and $3$ from the quadratic formula. I've tried and tried to use the template, however each time im getting different values to the ones I got when I successfully done it. The formula is $6x^2-6x-36\dots$ A: No need to use the quadratic formula, but of course you can do so to yield the same roots: $$\begin{align} 6x^2 - 6x - 36 = 0 & \iff 6(x^2 - x - 6) = 0 \\ \\ & \iff x^2 - x - 6 = 0\\ \\ & \iff (x+2)(x - 3)= 0 \\ \\ &\iff x = -2 \;\text{or}\; x = 3\end{align}$$
{ "pile_set_name": "StackExchange" }
Q: selectOneMenu ajax events

I am using an editable PrimeFaces selectOneMenu to display some values. If the user selects an item from the list, a textarea should be updated. However, if the user types something in the selectOneMenu, the textarea should not be updated. I thought I could work this out with an ajax event. However, I don't know which event I can use here. I only know the valueChange event. Are there any other events, like onSelect or onKeyUp? Here is my code:

    <p:selectOneMenu id="betreff" style="width: 470px !important;"
                     editable="true" value="#{post.aktNachricht.subject}">
        <p:ajax event="valueChange" update="msgtext"
                listener="#{post.subjectSelectionChanged}" />
        <f:selectItems value="#{post.subjectList}" />
    </p:selectOneMenu>

    <p:inputTextarea style="width:550px;" rows="15" id="msgtext"
                     value="#{post.aktNachricht.text}" />

A: The PrimeFaces ajax events are sometimes very poorly documented, so in most cases you must go to the source code and check yourself. p:selectOneMenu supports the change event:

    <p:selectOneMenu ..>
        <p:ajax event="change" update="msgtext"
                listener="#{post.subjectSelectionChanged}" />
        <!--...-->
    </p:selectOneMenu>

which triggers the listener with an AjaxBehaviorEvent argument in its signature:

    public void subjectSelectionChanged(final AjaxBehaviorEvent event) {...}

A: I'd rather use the more convenient itemSelect event. With this event you can use org.primefaces.event.SelectEvent objects in your listener.

    <p:selectOneMenu ...>
        <p:ajax event="itemSelect" update="messages"
                listener="#{beanMB.onItemSelectedListener}"/>
    </p:selectOneMenu>

With such a listener:

    public void onItemSelectedListener(SelectEvent event){
        MyItem selectedItem = (MyItem) event.getObject();
        //do something with selected value
    }

A: Be careful that the page does not contain any empty component which has its "required" attribute set to "true" before your selectOneMenu component runs. If you use a component such as

    <p:inputText label="Nm:" id="id_name" value="#{myHelper.name}" required="true"/>

and forget to fill in the required component, the ajax listener of the selectOneMenu cannot be executed.
{ "pile_set_name": "StackExchange" }
Q: Freebase quick start demo returns empty result

I'm going over "Get Started with the Freebase API", and when trying to send the following request from the browser URL box:

    https://www.googleapis.com/freebase/v1/search?q=bob&key=MY_API_KEY

I'm getting

    {"status":"200 OK","result":[]}

I activated Freebase in the API console and used a browser API key.

A: Thanks for catching that. It's a typo: the parameter should be query, not q. It should be:

    https://www.googleapis.com/freebase/v1/search?query=bob&key=MY_API_KEY
{ "pile_set_name": "StackExchange" }
Q: Moment of Inertia of Tetrahedron

How do you calculate the moment of inertia of a regular tetrahedron of side $a$ about an axis passing through the center of one of the faces and perpendicular to it?

A: The moment of inertia is given by $\frac{m s^2}{20}$, where $m$ refers to the mass and $s$ refers to the length of a side of the tetrahedron. You can read up on the proof over here and here.
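A quick way to see why the face-centered axis gives the commonly quoted centroidal value (a sketch, not a full derivation): the symmetry group of a regular tetrahedron leaves no preferred direction, so its inertia tensor about the centroid is isotropic, and the line through a face center perpendicular to that face is the altitude from the opposite vertex, which passes through the centroid. Hence the face axis inherits the centroidal value:

```latex
I_{\text{face axis}} \;=\; I_{\text{centroid}} \;=\; \tfrac{1}{20}\, m\, s^2
```

with $s$ the edge length and $m$ the mass of the (solid, uniform) tetrahedron.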
{ "pile_set_name": "StackExchange" }
Q: Perl Data Structures: How to create an array ref from an array

I have the following data structure:

    @keys = [1, 2, 3, 4];

And using a loop (for) from 1 to 4, I want to create a new data structure like

    $new = +{
        key     => '1',
        meaning => '',
        time    => '',
    };

So basically, in this case I would have four $new data structures. Do I need to use the map function?

A: Confusing question, but I think your problem is that

    @keys = [1, 2, 3, 4];

is probably not what you mean. It should be either

    @keys = (1, 2, 3, 4);

or

    $keysref = [1, 2, 3, 4];

I'll assume the first. Then yes, you could populate an array of records with map:

    # the leading + tells Perl the braces are an anonymous hash, not a block
    @records = map { +{ key => $_, meaning => '', time => '' } } @keys;
{ "pile_set_name": "StackExchange" }
Q: Why while(n) n = n>>1 loop does not terminate with negative n

My goal was to write a program to count the number of bits required to represent a number in Python. If I choose number = -1, or any negative number, the program does not terminate. Here is my code:

    number = -1
    cnt = 0
    while(number != 0):
        number = number >> 1
        cnt += 1
    print(cnt)

I thought it should print 32 and terminate. It's the same for all negative numbers. I would appreciate it if you could clarify the reason behind this.

A: Arithmetic right shift rounds towards -infinity on a normal CPU, or in a language like C, and it's the same in Python. You're probably thinking about 2's complement integers, where -1 has all bits set. A 2's complement signed right shift shifts in a copy of the sign bit, not zero:

    -1 >> 1 == -1

Python integers are arbitrary-precision, so a 32-bit width doesn't apply here: the max width is effectively unlimited, the conceptual sign bit extends forever, and since -1 >> 1 stays -1 the loop never reaches 0.
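One way to make the loop terminate, assuming you want the 32-bit two's-complement representation (a sketch; the helper name is mine, not from the question), is to mask the number to a fixed width first, after which the shift behaves like an ordinary unsigned shift:

```python
def bits_needed_32(number: int) -> int:
    """Count shifts needed to empty a 32-bit two's-complement value."""
    n = number & 0xFFFFFFFF  # reinterpret as an unsigned 32-bit pattern
    cnt = 0
    while n != 0:
        n >>= 1  # now guaranteed to reach 0 within 32 shifts
        cnt += 1
    return cnt

print(bits_needed_32(-1))  # -1 has all 32 bits set, so this prints 32
print(bits_needed_32(1))   # prints 1
```

For other fixed widths, change the mask accordingly (e.g. `& 0xFFFFFFFFFFFFFFFF` for 64 bits).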
{ "pile_set_name": "StackExchange" }
Q: Shrink and extend function on new ExtendedFloatingActionButton in Material Components for Android is not working

I am using the new ExtendedFloatingActionButton from the Material Components for Android library 1.1.0-alpha06. It is rendered just fine, but the 'extend' and 'shrink' methods are not doing anything.

    <com.google.android.material.floatingactionbutton.ExtendedFloatingActionButton
        android:id="@+id/extended_fab"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_anchor="@id/bottom_sheet"
        android:text="Drag map to change location"
        app:icon="@drawable/my_location"
        app:backgroundTint="@color/white"
        app:iconTint="@color/quantum_googblueA200"
        android:textColor="@color/quantum_googblueA200"
        app:iconSize="18dp"
        style="@style/Widget.MaterialComponents.ExtendedFloatingActionButton"
        android:padding="4dp"
        android:textSize="12sp"
        android:textAllCaps="false"
        android:layout_margin="8dp"
        app:layout_anchorGravity="right|top"/>

Here's the rendered layout:

A: Here is a Kotlin version which matches the behavior of the built-in Contacts app. The FAB is extended whenever the RecyclerView is at the top, and the FAB shrinks whenever the user scrolls away from the top.

    class FabExtendingOnScrollListener(
        private val floatingActionButton: ExtendedFloatingActionButton
    ) : RecyclerView.OnScrollListener() {

        override fun onScrollStateChanged(recyclerView: RecyclerView, newState: Int) {
            if (newState == RecyclerView.SCROLL_STATE_IDLE
                && !floatingActionButton.isExtended
                && recyclerView.computeVerticalScrollOffset() == 0
            ) {
                floatingActionButton.extend()
            }
            super.onScrollStateChanged(recyclerView, newState)
        }

        override fun onScrolled(recyclerView: RecyclerView, dx: Int, dy: Int) {
            if (dy != 0 && floatingActionButton.isExtended) {
                floatingActionButton.shrink()
            }
            super.onScrolled(recyclerView, dx, dy)
        }
    }

Usage:

    recyclerView.addOnScrollListener(FabExtendingOnScrollListener(fab))
{ "pile_set_name": "StackExchange" }
Q: Flexbox layout with multiple rows having 3 items in each

I am trying to have the 3 unordered list items take up the whole block, spaced out evenly: the 1st one at the start, the 2nd one in the middle, and the last one touching the end of the block. I am trying to do it with flexbox, but currently I am having difficulty; I figured justify-content would work for this task, but it's not doing anything. Here is the end result I am trying to achieve: link to codepen

Example:

    body {
        background: green;
    }
    .area--third-amenities--list {
        width: 375px;
        background: blue;
    }
    ul {
        padding: 0;
        list-style-type: none;
        margin: 0;
        display: flex;
        flex-wrap: wrap;
        justify-content: space-between;
    }
    ul li {}

    <div class="area--third-amenities--list">
        <ul>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
        </ul>
    </div>

A: You could use flex-basis: 33.33% on the lis so they stay 3 on one line, and text-align: center:

    body {
        background: green;
    }
    .area--third-amenities--list {
        width: 375px;
        background: blue;
    }
    ul {
        padding: 0;
        list-style-type: none;
        margin: 0;
        display: flex;
        flex-wrap: wrap;
        text-align: center;
    }
    ul li {
        flex-basis: 33.33%;
    }

    <div class="area--third-amenities--list">
        <ul>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
            <li><img src="http://i.imgur.com/PwKgUnO.png" /><p> 50px </p></li>
        </ul>
    </div>

Let me know if this helps.
{ "pile_set_name": "StackExchange" }
Q: Trying to access a temporary array in another method

I'm sorry if this is a silly question, but as a beginner in coding, I find it hard to remember the limits/bounds of the variables that I create. I am trying to create a temporary array in the GetLetters() method below, but I later need to access this information in the EstimateGrade() method so as to "estimate a grade" for the user based on their name. I get the error that "the name 'threeLetters' does not exist in the current context". Is there a way to access the threeLetters array without creating a public array?

    public int[] GetLetters(String userName)
    {
        //Creating an array that will hold the 3 values that determine grade
        int[] threeLetters = new int[3];

        char firstLetter = userName[0];
        threeLetters[0] = userName[0];
        char thirdLetter = userName[2];
        threeLetters[1] = userName[2];
        char fifthLetter = userName[4];
        threeLetters[2] = userName[4];

        if (userName.Length > 5)
        {
            threeLetters = new int[0];
        }

        return threeLetters;
    }

    public int EstimateGrade(int[] grade)
    {
        int sum = (threeLetters[0] + threeLetters[1] + threeLetters[2]) * 85;
        int result = sum % 101;
        return result;
    }

A: threeLetters[] is local to GetLetters(), i.e., threeLetters[] is not accessible outside GetLetters(). Since you are passing threeLetters[] as a parameter to EstimateGrade() under the alias name grade[], change threeLetters to grade. See the code below:

    public int EstimateGrade(int[] grade)
    {
        int sum = (grade[0] + grade[1] + grade[2]) * 85;
        int result = sum % 101;
        return result;
    }
{ "pile_set_name": "StackExchange" }
Q: Desktop Application Dependency Injection with Unity

I've used Unity with an MVC web app, and in MVC there is the app_start method, where we use

    DependencyResolver.SetResolver(new UnityDependencyResolver(container));

and this sends the parameters to the controller constructor. Now I am trying to find out how to apply this pattern to a desktop application. You know that we use

    new Form1().Show(); //bla bla foo

and when I create a new form, is it possible, for a new instance of a System.Windows.Forms type, to have the parameters sent to the constructor automatically? Sorry for my language. Right now I am using something like this, and I am asking if there's a better solution:

    public partial class frm_FirmaSecimi : XtraForm
    {
        private IFirmaService _firmaService;

        public frm_FirmaSecimi()
        {
            InitializeComponent();
            _firmaService = UnityConfig.Container.Resolve<IFirmaService>();
        }
    }

Is there a way to turn this into:

    public partial class frm_FirmaSecimi : XtraForm
    {
        private IFirmaService _firmaService;

        public frm_FirmaSecimi(IFirmaService firmaService)
        {
            InitializeComponent();
            _firmaService = firmaService;
        }
    }

This is a DevExpress form, by the way. Thanks for any answers.

A: Try introducing interfaces for your forms, then use this on each form:

    public class FormMain : IFormMain
    {
        ISubForm _subForm;

        public FormMain(ISubForm subForm)
        {
            InitializeComponent();
            _subForm = subForm;
        }
        ...
    }

Now, on program start, create the Unity container, and resolve your IFormMain dependency:

    private static void Main()
    {
        var container = BuildContainer();
        // cast needed: Application.Run expects a Form, not the interface
        Application.Run((Form)container.Resolve<IFormMain>());
    }

    public static IUnityContainer BuildContainer()
    {
        var currentContainer = new UnityContainer();
        currentContainer.RegisterType<IFormMain, FormMain>();
        // note: registering types could be moved off to app config if you want as well
        return currentContainer;
    }

Obviously your implementation won't look exactly like this; however, it should point you in the right direction.

Disclaimer: I don't work with WinForms at all, so the above may not be the best solution for your app, depending on its complexity etc. Also try reading this; it may help: http://foyzulkarim.blogspot.co.nz/2012/09/using-dependency-injection-unity-to.html
{ "pile_set_name": "StackExchange" }
Q: Kentico "Required document no longer exists"

I have a Kentico 5.5 site and some custom document types. When trying to edit the HomePage in CMSDesk, it says "Required document no longer exists, please select another document." I have not deleted the HomePage, but its document type was missing, so I restored the document type by exporting and importing it from another of my sites. The error is still present. When I create a new HomePage with the same document type, it shows the "this site has no content, please go to CMSDesk" message.

Is there something wrong with my document type, or is my DB just ruined?

UPDATE: I imported a new document type from my other site, along with a page template and a role, after which the error occurs. I think it blows away the HomePage's document type or some other related file.

A: It turns out I had imported a document type from another server where the IDs were not in sync, so the new document type overwrote the old document type with the same ID. IDs are sequential, so if you are importing from one site into another, be very careful to always have all the same document types, created in the same order, on both sites.
{ "pile_set_name": "StackExchange" }