2018/03/20
<issue_start>username_0: In Haskell, I know that there are tuples `(x1,x2,x3)` and also lists `[x1,x2,x3]`, and that tuples are not necessarily homogeneous and have a fixed length. On the other hand, lists can be rewritten for example in the form `x1:x2:x3:[]`, which I guess is a function `a -> [a] -> [a]`, which makes some sense to me, but in the following code: ``` head' :: [a] -> a head' [] = error "empty list" head' (x:_) = x ``` it is unclear to me why we use parentheses, which is to my understanding tuple notation. It looks more like we should have something like an uncurry function on the colon operator, and take the first input, or something like that. How are the parentheses behaving here, and why can we not use brackets in some way or another?<issue_comment>username_1: The parentheses in `head' (x:_)` are just being used for grouping; you can tell no tuples are involved because there are no commas inside. Without the parentheses, `head' x:_` would be parsed as `(head' x):_`, which doesn't work on the left-hand side of a function definition. We can't use brackets here because they would require knowing how many elements are in the list passed to `head'`. Upvotes: 3 [selected_answer]<issue_comment>username_2: Parentheses denote tuples only when they contain commas. Parentheses without commas are used for grouping, e.g. to distinguish between `x - (y - z)` and `(x - y) - z` or, in this case, between `head' (x:_) = x` and `(head' x) : _ = x`. In other words, all that the parentheses do here is to denote that `:` is being applied to `x` and `_` rather than to `head' x` and `_`. If you used `[]` here, that would be a list pattern. Generally the pattern `[p]` matches if the given value is a list with one element and that element matches the pattern `p`. So `[x:_]` matches a list that contains one element, and that element is itself a list matching the pattern `x:_`. Since that's not what you want, you need to use the version without the list pattern. Upvotes: 2
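To make the grouping-vs-list-pattern distinction above concrete, here is a small sketch (the `headOfSingleton` name is made up for illustration):

```haskell
-- (x:_) only groups the ':' pattern: match a non-empty list, bind its head to x
head' :: [a] -> a
head' []    = error "head': empty list"
head' (x:_) = x

-- [x:_] is a *list pattern*: match a list with exactly one element,
-- where that element must itself be a non-empty list
headOfSingleton :: [[a]] -> a
headOfSingleton [x:_] = x
headOfSingleton _     = error "not a singleton list of non-empty lists"

-- head' [1,2,3]           == 1
-- headOfSingleton [[9,8]] == 9
```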
2018/03/20
<issue_start>username_0: I'm trying to insert the values of a counter into an array called numpay. Unfortunately nothing happens. Where's my mistake? Here's what I tried below. ``` Example-1 function validateForm() { var months=(principal+principal*interestrate*0.01)/monthlypayment; var numpay = new Array(months); for(var i=0;i<=months-1;i++) { numpay.push(i); text += numpay[i] + "<br>"; } document.getElementById("demo").innerHTML = text; } ```<issue_comment>username_1: Arrays in JS are zero-based, so if you do `.push(i)` into an empty array with `i === 1`, the value `1` goes into the slot for `numpay[0]`. And then to access that item, you must use `numpay[i-1]`. **Another thing** is that you seem to have forgotten to declare and initialize `text` before the loop starts. You need to do that; otherwise `text += ...` either throws a `ReferenceError` (in strict mode) or concatenates onto `undefined`. Demo: ```js var months = 12; var numpay = []; // just as easy var text = "- "; for (var i = 1; i <= months; i++) { numpay.push(i); text += numpay[i - 1] + " - "; } console.log(text); ``` Upvotes: 0 <issue_comment>username_2: 1.) By creating an array like that (`new Array(12)`) you are saying to create 12 `undefined` entries in the array. Then you are pushing to the end of the array, after those entries. 2.) You also need to declare `text` along with months: `var months = 12, text;`. ``` var months = 12, text; var numpay = new Array(months); for(var i=1; i<=months; i++){ numpay[i] = i; text += numpay[i] + " "; } document.getElementById("demo").innerHTML = text; ``` This will still give you undefined at index 0 but I will leave that for you to work out. But doing `numpay[i] = i` will overwrite the `undefined` that was created with the array. Upvotes: 1 [selected_answer]<issue_comment>username_3: When you instantiate an array with a parameter like this: ``` var numpay = Array(12); ``` it results in an array with 12 undefined elements. When you push a new item, it will be placed in the 13th slot. 
Instead, just do this: ``` var text = ""; var months=12; var numpay = []; for(var i=1;i<=months;i++) { numpay.push(i); text += numpay[i-1] + " "; //use i-1 here, not i } document.getElementById("demo").innerHTML = text; ``` The result is: ``` "1 2 3 4 5 6 7 8 9 10 11 12 " ``` Upvotes: 0 <issue_comment>username_4: Fixing existing problems ------------------------ As others have pointed out, something like this should work: ```js var months = 12; var numpay = []; // just as easy var text = ""; for (var i = 1; i <= months; i++) { numpay.push(i); text += numpay[i - 1] + " "; } document.getElementById('demo').innerHTML = text; ``` ```html (empty) ``` Enhancing your skills --------------------- Although you're combining them in your `for`-loop, you're doing two separate things here: filling out the months, and creating the text you want to add to the DOM. There's a lot to be said for one bit of code doing only one thing. You could write a reusable `range` function which uses more modern JS techniques to give you a numeric integer range between two values. So ``` const range = (lo, hi) => [...new Array(hi - lo + 1)].map((_, i) => i + lo); ``` Using that, you can create your `months` variable by calling this function: ``` const months = range(1, 12); ``` Then, with this array, you can use [`Array.prototype.join`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/join) to combine the values into the text you would like: ``` const text = months.join(' ') ``` And that leads to a nicer bit of code: ```js const range = (lo, hi) => [...new Array(hi - lo + 1)].map((_, i) => i + lo); const months = range(1, 12); document.getElementById('demo').innerHTML = months.join(' '); ``` ```html (empty) ``` If you need that `text` variable for something additional, then just assign it as the result of the join, and then assign the `innerHTML` to it. Obviously that `range` function is unnecessary. 
You could just write `const months = [...new Array(12)].map((_, i) => i + 1);`. But thinking in terms of such abstractions often lets you write cleaner code. Upvotes: 1
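The `new Array(n)`-plus-`push` pitfall discussed throughout this thread is easy to verify in plain JavaScript (no DOM needed); this is just a sketch of the behavior, not part of the original form code:

```javascript
// new Array(12) creates 12 empty slots; push() appends AFTER them
const sparse = new Array(12);
sparse.push(1);
console.log(sparse.length); // 13 -- the pushed value landed at index 12
console.log(sparse[0]);     // undefined

// starting from [] avoids the problem entirely
const numpay = [];
for (let i = 1; i <= 12; i++) numpay.push(i);
console.log(numpay[0], numpay[11]); // 1 12
```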
2018/03/20
<issue_start>username_0: Okay, I did some research and apparently there are a lot of duplicate questions on SO on this topic, to name a few: * [Elegant solution to duplicate, const and non-const, getters?](https://stackoverflow.com/questions/856542/elegant-solution-to-duplicate-const-and-non-const-getters) * [How to avoid operator's or method's code duplication for const and non-const objects?](https://stackoverflow.com/questions/5735287/how-to-avoid-operators-or-methods-code-duplication-for-const-and-non-const-obj) * [How do I remove code duplication between similar const and non-const member functions?](https://stackoverflow.com/questions/123758/how-do-i-remove-code-duplication-between-similar-const-and-non-const-member-func) etc. But I cannot help but raise this again, because 1. With [c++14](/questions/tagged/c%2b%2b14 "show questions tagged 'c++14'") `auto`-typed return values, I am literally duplicating the function body with the only difference being the `const` function qualifier. 2. It is possible that the `const` version and non-`const` version may return types that are totally incompatible with each other. In such cases, neither Scott Meyers's `const_cast` idiom, nor the "private `const` function returning non-`const`" technique would work. As an example: ``` struct A { std::vector<int> examples; auto get() { return examples.begin(); } auto get() const { return examples.begin(); } }; // Do I really have to duplicate? // My real-world code is much longer // and there are a lot of such get()'s ``` In this particular case, `auto get()` returns an `iterator` while `auto get() const` returns a `const_iterator`, and the two cannot be converted into each other (I know the `erase(it,it)` trick does the conversion in this case but that's not the point). The `const_cast` hence does not work; even if it works, that requires the programmer to manually deduce the `auto` type, totally defeating the purpose of using `auto`. 
So is there really no way except with macros?<issue_comment>username_1: Ok, so after a bit of tinkering I came up with the following two solutions that allow you to keep the auto return type and only implement the getter once. It uses the opposite cast of what Meyers does. C++ 11/14 --------- This version simply returns both versions in the implemented function, either with `cbegin()` or, if you don't have that for your type, this should work as a replacement for `cbegin()`: `return static_cast<const A&>(*this).examples.begin();` Basically cast to constant and use the normal `begin()` function to obtain the constant one. ``` // Return both, and grab the required one struct A { private: // This function does the actual getter work, hiding the template details // from the public interface, and allowing the use of auto as a return type auto get_do_work() { // Your getter logic etc. // ... // ... // Return both versions, but have the other functions extract the required one return std::make_pair(examples.begin(), examples.cbegin()); } public: std::vector<int> examples{ 0, 1, 2, 3, 4, 5 }; // You'll get a regular iterator from the .first auto get() { return get_do_work().first; } // This will get a const iterator auto get() const { // Force using the non-const to get a const version here // Basically the inverse of Meyers' casting. Then just get // the const version of the type and return it return const_cast<A&>(*this).get_do_work().second; } }; ``` C++ 17 - Alternative with if constexpr -------------------------------------- This one should be better since it only returns one value and it is known at compile time which value is obtained, so `auto` will know what to do. Otherwise the `get()` functions work mostly the same. ``` // With if constexpr struct A { private: // This function does the actual getter work, hiding the template details // from the public interface, and allowing the use of auto as a return type template <bool asConst> auto get_do_work() { // Your getter logic etc. // ... // ... if constexpr (asConst) { return examples.cbegin(); // Alternatively // return static_cast<const A&>(*this).examples.begin(); } else { return examples.begin(); } } public: std::vector<int> examples{ 0, 1, 2, 3, 4, 5 }; // Nothing special here, you'll get a regular iterator auto get() { return get_do_work<false>(); } // This will get a const iterator auto get() const { // Force using the non-const to get a const version here // Basically the inverse of Meyers' casting, except you get a // const_iterator as a result, so no logical difference to users return const_cast<A&>(*this).get_do_work<true>(); } }; ``` This may or may not work for your custom types, but it worked for me, and it solves the need for code duplication, although it uses a helper function. But in turn the actual getters become one-liners, so that should be reasonable. The following main function was used to test both solutions, and worked as expected: ``` int main() { const A a; *a.get() += 1; // This is an error since it returns const_iterator A b; *b.get() += 1; // This works fine std::cout << *a.get() << "\n"; std::cout << *b.get() << "\n"; return 0; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Just a hypothetical solution, that I am thinking about applying everywhere once we have concepts: *using a friend free function in place of a member function*. ``` //Forwarding ref concept template <typename T, typename A> concept FRef = std::is_same_v<std::decay_t<T>, std::decay_t<A>>; struct A{ std::vector<int> examples; friend decltype(auto) get(FRef{T,A}&& aA){ return std::forward<T>(aA).examples.begin(); //the std::forward is actually not necessary here //because there is no rvalue-reference overload of begin. } }; ``` Upvotes: 0 <issue_comment>username_3: ``` struct A { std::vector<int> examples; private: template <typename ThisType> static auto get_tmpl(ThisType& t) { return t.examples.begin(); } public: auto get() { return get_tmpl(*this); } auto get() const { return get_tmpl(*this); } }; ``` How about the above? 
Yes, you still need to declare both methods, but the logic can be contained in a single static template method. By adding a template parameter for the return type this will work even for pre-C++11 code. Upvotes: 1
2018/03/20
<issue_start>username_0: I am trying to create a summary list for people in a downstream application to feed several of my production machines. Each machine is going to have its own tab to request material, and I want all of their requests to be summarized on one tab (called "Core_Cutter_List"). So basically I am trying to create a VBA macro that will copy over a row from spreadsheet "2" into the next blank line on spreadsheet "Core_Cutter_List". I want it to copy if there is text in column A and column G is blank. I have limited knowledge of VBA. The code that I found was able to only test for one of my criteria, which was that column G is blank, but basically it runs through every single cell in my file. Do you know how I can add the other criteria of column A having text in it so that it doesn't look through every cell on my sheet? Thanks for any help! ``` Sub Test() ' ' Test Macro ' Sheets("2").Select For Each Cell In Sheets(1).Range("G:G") If Cell.Value = "" Then matchRow = Cell.Row Rows(matchRow & ":" & matchRow).Select Selection.Copy Sheets("Core_Cutting_List").Select ActiveSheet.Rows(matchRow).Select ActiveSheet.Paste Sheets("2").Select End If Next End Sub ```
2018/03/20
<issue_start>username_0: Say I have a function. If I wanted to add it as a method to an object, I would use: ``` let foofunc = function() {} { foo: foofunc } ``` However, what if I want to add it as a getter? You'd think I could do it like this: ``` { get x: foofunc } ``` but my IDE complains, so I assume that's not possible. How would I do this?<issue_comment>username_1: You can use the `Object.defineProperty` function like so: ``` function getterFunc() { return 1; } let anObject = {}; Object.defineProperty(anObject, 'propertyName', {get: getterFunc}); ``` Live Example: ```js function getterFunc() { return 1; } let anObject = {}; Object.defineProperty(anObject, 'propertyName', {get: getterFunc}); console.log(anObject.propertyName); // 1 ``` You can then use the getter normally by doing `anObject.propertyName`. The [MDN page](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/get) has more detailed information if you still have more questions. Upvotes: 4 [selected_answer]<issue_comment>username_2: To make it a getter function inside the literal itself: ``` let foofunc = function() { return this.x;} let obj = { x: "marvel", get foo(){return foofunc.call(this)} }; ``` use: ``` console.log(obj.foo); ``` Upvotes: -1
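Besides `Object.defineProperty`, a getter can also live directly in an object literal by giving it a body with the `get name() { ... }` form and delegating to the existing function (reusing the question's `foofunc` name here):

```javascript
// `get x: foofunc` is a syntax error, but a getter body can delegate:
let foofunc = function () { return 1; };

let obj = {
  get x() { return foofunc(); } // accessed as obj.x, not obj.x()
};

console.log(obj.x); // 1
```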
2018/03/20
<issue_start>username_0: Building a custom template for the wagtail `StreamField` block I found myself in the situation that I need somehow pass the ID of the current block to the other views. For instance when the URL is clicked in the particular block, the landing page `view` must know exactly in which of the blocks the URL has been clicked. Then the `view` can extract other information which is associated with the particular block but not necessarily visually present to the user. My current strategy is using the `snippets`, so I can pass the ID of the `snippet` and the `view` may obtain related but beforehand hidden data. This works not so bad, but people have to edit the content in two places and I have to look at their sad faces. It seems that the `value` variable in the block template context is an instance of the `wagtail.core.blocks.struct_block.StructValue`, which gives me access to all the fields of the block but it doesn't seem to reveal its footprint in the DB. Further `value` has an interesting attribute: `value.block`, which seems like it's an instance of the actual model used to construct the block, but again I can't find anything useful like `id` or `pk` which would allow to identify that instance in the database. Is there a way?<issue_comment>username_1: The block IDs you see in the database representation of a StreamField are a detail implemented by the enclosing StreamBlock, so that we can keep track of each block's history as it gets added / moved / deleted from the stream. The items within the stream do not know their own ID - this is because they could be any possible data type (for example, a `CharBlock` produces a string value, and you can't attach an ID to a string). As a result, the block template doesn't have access to the ID either. To access the ID, you'll need to make use of the `BoundBlock` (or, more precisely, `StreamChild`) object that's returned whenever you iterate over the StreamField value (or access it by index, e.g. 
`page.body[0]` or `page.body.0` within template code); this object is a wrapper around the block value which knows the block's type and ID. (More background on `BoundBlock` in the docs here: <http://docs.wagtail.io/en/v2.0/topics/streamfield.html#boundblocks-and-values>) ``` {% for block in page.body %} {% include_block block with id=block.id %} {% endfor %} ``` Here `block` is an instance of `StreamChild`, which has 'value', 'block\_type' and 'id' properties. Normally, the `{% include_block %}` tag will just pass on the `value` variable to the block template, but here we're passing `id` as an additional variable that will now be available within that block template. StreamField blocks are not 'real' database objects, so to retrieve the value again based on the ID you'll need to scan through the StreamField, using code such as: ``` value = None for block in page.body: if block.id == requested_id: value = block.value break ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: In the HTML file that displays your block, try adding ``` {% with block.id|stringformat:"s" as block_id %} {{ block_id }} {% endwith %} ``` Upvotes: 0
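The scan-by-id lookup at the end of the accepted answer can be sketched with plain Python objects standing in for Wagtail's `StreamChild` wrappers (the `SimpleNamespace` stand-ins here are for illustration only, not Wagtail's API):

```python
from types import SimpleNamespace

# Stand-ins for StreamField children: each wrapper knows its id and value
body = [
    SimpleNamespace(id="a1", value="first block"),
    SimpleNamespace(id="b2", value="second block"),
]

def find_block(body, requested_id):
    """Linear scan over the stream, mirroring the loop in the answer above."""
    for block in body:
        if block.id == requested_id:
            return block.value
    return None

print(find_block(body, "b2"))  # second block
```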
2018/03/20
<issue_start>username_0: I have a standard field in a form for location, using the Google Places API. ``` ``` As a user types, it shows suggestions below. Is there a setting to also fill in the field with the first returned value? This is what it currently looks like: [![enter image description here](https://i.stack.imgur.com/IQaqS.png)](https://i.stack.imgur.com/IQaqS.png) The goal here is that a user can't just type "New York". If they stop typing, the whole field will already be filled with "New York, NY, USA". This is the ideal experience as a user types: [![enter image description here](https://i.stack.imgur.com/Sbm8U.png)](https://i.stack.imgur.com/Sbm8U.png) Thank you! UPDATE - I've found [this discussion](https://ux.stackexchange.com/questions/20607/is-there-a-name-for-this-instant-filter-search-pattern) about what it's called. Here is an image from the link above, of what I'm talking about: [![enter image description here](https://i.stack.imgur.com/j5KdM.png)](https://i.stack.imgur.com/j5KdM.png)<issue_comment>username_1: Check this out — this is the Google Places Autocomplete API; maybe you can make it work with this. 
<https://developers.google.com/maps/documentation/javascript/examples/places-autocomplete> Upvotes: 0 <issue_comment>username_2: ```js // Bind dollar signs to query selector (IE8+) var $ = document.querySelector.bind(document); function preventStandardForm(evt) { // prevent standard form from submitting evt.preventDefault(); } function autoCallback(predictions, status) { // *Callback from async google places call if (status != google.maps.places.PlacesServiceStatus.OK) { // show that this address is an error pacInput.className = 'error'; return; } // Show a successfull return pacInput.className = 'success'; pacInput.value = predictions[0].description; } function queryAutocomplete(input) { // *Uses Google's autocomplete service to select an address var service = new google.maps.places.AutocompleteService(); service.getPlacePredictions({ input: input, componentRestrictions: { country: 'us' } }, autoCallback); } function handleTabbingOnInput(evt) { // *Handles Tab event on delivery-location input if (evt.target.id == "pac-input") { // Remove active class evt.target.className = ''; // Check if a tab was pressed if (evt.which == 9 || evt.keyCode == 9) { queryAutocomplete(evt.target.value); } } } // ***** Initializations ***** // // initialize pac search field // var pacInput = $('#pac-input'); pacInput.focus(); // Initialize Autocomplete var options = { componentRestrictions: { country: 'us' } }; var autocomplete = new google.maps.places.Autocomplete(pacInput, options); // ***** End Initializations ***** // // ***** Event Listeners ***** // google.maps.event.addListener(autocomplete, 'place_changed', function () { var result = autocomplete.getPlace(); if (typeof result.address_components == 'undefined') { queryAutocomplete(result.name); } else { console.log(result.address_components); } }); // Tabbing Event Listener if (document.addEventListener) { document.addEventListener('keydown', handleTabbingOnInput, false); } else if (document.attachEvent) { // IE8 and below 
document.attachEvent("onsubmit", handleTabbingOnInput); } // search form listener var standardForm = $('#search-shop-form'); if (standardForm.addEventListener) { standardForm.addEventListener("submit", preventStandardForm, false); } else if (standardForm.attachEvent) { // IE8 and below standardForm.attachEvent("onsubmit", preventStandardForm); } // ***** End Event Listeners ***** // ``` ```html Delivery Location Search ``` **Update:** Note: According to your requirement there is any option available for google places. So you have change timeout seconds as per your requirement. I'm assuming with `300` ms i will get the suggestions in drop down. ```js $("#find").click(function(){ $("#geocomplete").trigger("geocode"); }); $(function(){ $("#geocomplete").geocomplete() }); $('.autoc').bind('keyup', function(e) { if(e.keyCode==13){ $('#find').click() }else{ setTimeout(function(){ var inputVal = $("#geocomplete").val(); var inpt = $("#geocomplete"); if( inputVal != "" ){ var firstRowVal = firstRowValue(); if( firstRowVal != "" ){ if(firstRowVal.toLowerCase().indexOf(inputVal.toLowerCase()) === 0){ inpt.val(firstRowVal);//change the input to the first match inpt[0].selectionStart = inputVal.length; //highlight from end of input inpt[0].selectionEnd = firstRowVal.length;//highlight to the end } } } }, 300);// please change this 300 as per your internet connection speed. } }); function firstRowValue() { var selected = ''; // Check if any result is selected. if ($(".pac-item-selected")[0]) { selected = '-selected'; } // Get the first suggestion's text. var $span1 = $(".pac-container:visible .pac-item" + selected + ":first span:nth-child(2)").text(); var $span2 = $(".pac-container:visible .pac-item" + selected + ":first span:nth-child(3)").text(); // Adds the additional information, if available. 
var firstResult = $span1; if ($span2) { firstResult += " - " + $span2; } return firstResult; } ``` ```html ``` Hope this will helps you reference [here](http://jsfiddle.net/x4utt1tL/111/) Upvotes: 3 <issue_comment>username_3: I looked for an effective solution regarding your request but wasn't able to find anything working through Google ( except Google itself!… =) ). I tried to do things with jQuery and jQuery-UI… and in the end, I started over from the beginning. Anyway, here is what I've read and want to share with you, I'm sure you'll find something interesting in all this. **About Google** Here is some answers to how google instant works: [How does Google Instant work?](https://stackoverflow.com/questions/3670831/how-does-google-instant-work) <https://searchengineland.com/how-google-instant-autocomplete-suggestions-work-62592> Also, Google isn't using only autocomplete, but a prediction algorithm too: <https://support.google.com/websearch/answer/106230?hl=en> <https://blog.google/products/search/google-search-autocomplete/> Visually, we could try to do something similar to google, and this link could be helpfull: [How to implement a google suggest-like input field?](https://stackoverflow.com/questions/10795843/how-to-implement-a-google-suggest-like-input-field) **What I'll do** Here is the snippet I ended with: ```js $(function() { var states = [ 'Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California', 'Colorado', 'Connecticut', 'Delaware', 'Florida', 'Georgia', 'Hawaii', 'Idaho', 'Illinois', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana', 'Maine', 'Maryland', 'Massachusetts', 'Michigan', 'Minnesota', 'Mississippi', 'Missouri', 'Montana', 'Nebraska', 'Nevada', 'New Hampshire', 'New Jersey', 'New Mexico', 'New York', 'North Carolina', 'North Dakota', 'Ohio', 'Oklahoma', 'Oregon', 'Pennsylvania', 'Rhode Island', 'South Carolina', 'South Dakota', 'Tennessee', 'Texas', 'Utah', 'Vermont', 'Virginia', 'Washington', 'West Virginia', 'Wisconsin', 'Wyoming' ]; 
$("#tags").autocomplete({ source: states, autoSelectFirst: true, autoFocus: true }); }); ``` ```html Tags: ``` As we're not Google, we can't predict what the user wants to write. So, in my snippet, they need to press enter or tab to confirm the autocomplete. It is not wise to auto-select the first hit after the user typed only two or three characters. But, if we do auto-select anyway… Which one will be auto-selected ? For example, when I am starting to type “New ”, the auto-selection will certainly be “New York” if we follow Google suggestion, “New Hampshire” if we're based on the alphabetical order, and it could be “New Mexico” if based on the user location. Hope it helps, in any way. Upvotes: 0
2018/03/20
<issue_start>username_0: Consider the following classes: ``` class A{ } class B extends A{ } ``` As we know, this compiles fine: ``` List<? extends A> xx = new ArrayList<B>(); List<? extends List<? extends A>> xy = new ArrayList<List<B>>(); ``` But this gives a compile-time error: ``` List<? extends A> yx = new ArrayList<? extends A>(); List<? extends List<? extends A>> yy = new ArrayList<? extends List<? extends A>>(); ``` The error says: > > required: class or interface without bounds > > > I'm aware that the fresh values interpreted by the compiler for the above initializations are different, and thus they cannot be cast safely. But what does 'without bounds' mean in the above error message?<issue_comment>username_1: The type arguments to the class instance being created can't be wildcards ([§15.9](https://docs.oracle.com/javase/specs/jls/se9/html/jls-15.html#jls-15.9)): > > If [type arguments are] present immediately after `new`, or immediately before `(`, then it is a compile-time error if any of the type arguments are wildcards. > > > That's pretty much it. The type arguments may contain wildcards (as in `new ArrayList<List<? extends A>>()`), but wildcards can't be given directly (as in `new ArrayList<? extends A>()`). This distinction is maybe clarified by the grammar of type arguments which the above quote references ([§4.5.1](https://docs.oracle.com/javase/specs/jls/se9/html/jls-4.html#jls-4.5.1)): ``` TypeArguments: < TypeArgumentList > TypeArgumentList: TypeArgument {, TypeArgument} TypeArgument: ReferenceType Wildcard Wildcard: {Annotation} ? [WildcardBounds] WildcardBounds: extends ReferenceType super ReferenceType ``` In other words, for each `TypeArgument` in the `TypeArgumentList` provided to the class instance creation expression, the `TypeArgument` may only be a `ReferenceType` and not a `Wildcard`. If a reference type is *itself* generic, then it *can* have type arguments provided to *it* which are wildcards. 
Upvotes: 2 <issue_comment>username_2: At the time of construction/definition of a generic type, the type has to be specified. Both of these ``` ? extends A ? extends List<? extends A> ``` are creating a new wildcard, which is an as-yet-unknown generic type (you can tell from the `?` in front). Both of these ``` B List<? extends A> ``` are actual types. So when calling ``` new ArrayList<B>(); new ArrayList<List<? extends A>>(); ``` the generic type is specified. In one case it is `B`, in the other it is a `List` containing `? extends A`. When calling ``` new ArrayList<? extends A>(); new ArrayList<? extends List<? extends A>>(); ``` both times the generic type is undefined. This is not allowed. The reason why `List<? extends A>` doesn't cause the error is because here it is a type. The inner List within your original `List<? extends List<? extends A>>` wasn't created yet. You only define that later it has to be a `List<? extends A>`. If you try to create this list like this ``` new ArrayList<? extends A>(); ``` you will run into the same problem. Understandable? Upvotes: 0 <issue_comment>username_3: This error is referring to the creation of the new `ArrayList` whose direct, top-level type parameter is using a wildcard. This is not allowed, despite the fact that a *nested* type parameter is allowed to have a wildcard. The [JLS, Section 15.9, "Class Instance Creation Expressions"](https://docs.oracle.com/javase/specs/jls/se8/html/jls-15.html#jls-15.9), states: > > If *TypeArguments* is present immediately after `new`, or immediately before `(`, then it is a compile-time error if any of the type arguments are wildcards ([§4.5.1](https://docs.oracle.com/javase/specs/jls/se8/html/jls-4.html#jls-4.5.1)). > > > The key word here is "immediately", because that represents the direct type argument, not the nested type argument. 
Here is the restriction mentioned in [<NAME>'s article about generics and its usages](http://www.angelikalanger.com/Articles/JavaPro/02.JavaGenericsWildcards/Wildcards.html#More%20Details%20Regarding%20the%20Wildcard%20Language%20Feature): > > They [wildcards] can not be used for creation of objects or arrays, that is, a wildcard instantiation is not permitted in a new expression. Wildcard instantiations are not types, they are **placeholders for a member from a family of types**. In a way, a wildcard instantiation is similar to an interface: we can declare variables of interface types, but we cannot create objects of interface types; the created objects must be of a class type that implements the interface. Similar with wildcard instantiations: we can declare variables of a wildcard instantiated type, but we cannot create objects of such a type; the created objects must be of a concrete instantiation from the family of instantiations designated by the wildcard instantiation. > > > (emphasis mine) Basically, a wildcard type is not a concrete type and cannot be instantiated, with similar reasoning to not being allowed to create an instance of interface directly. However, that says nothing about nested wildcards, which relate to the creation of initially unrelated objects that may eventually be associated with this type. In your example, this would be the nested `List`s that may be added to the outer `ArrayList`. They could be references to `List`s of any matching wildcard type, but they're not being created here. **Summary** Java doesn't allow a wildcard in a instance creation expression (with `new`), but it does allow a nested wildcard in such an expression, because a direct wildcard type is not a concrete type. Upvotes: 3 [selected_answer]
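As a footnote to the answers above, here is a minimal compilable sketch of the rule (the class names `A` and `B` follow the question; the commented-out line is the one the compiler rejects):

```java
import java.util.ArrayList;
import java.util.List;

public class WildcardInstantiation {
    static class A { }
    static class B extends A { }

    public static void main(String[] args) {
        // Direct type argument is a concrete class: allowed.
        List<? extends A> xx = new ArrayList<B>();

        // The wildcard is only *nested* inside the direct type argument: also allowed.
        List<? extends List<? extends A>> xy = new ArrayList<List<? extends A>>();

        // Direct type argument is itself a wildcard: rejected at compile time
        // ("unexpected type ... required: class or interface without bounds").
        // List<? extends A> yx = new ArrayList<? extends A>();

        System.out.println(xx.size() + " " + xy.size());  // prints 0 0
    }
}
```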
2018/03/20
529
1,658
<issue_start>username_0: Using the following class, I would like to be able to create an instance of a specified `Foo()` to store and call a function, but how can I transmit the necessary arguments to a function `call`?

```
template <typename R, typename... Args>
struct Foo {
    std::function<R(Args...)> func;

    Foo(const std::function<R(Args...)> &funcIn) : func(funcIn) {}

    R call(Args...) { return func(/* ? */); }
};
```

E.g:

```
int main() {
    typedef typename Foo<int, int, int> Bar;

    Bar x([](int y, int z){return y + z + 2;});
    std::cout << x.call(12, 7);
    return 0;
}
```

<issue_comment>username_1: To make it simple

```
R call (Args... as) { return func(as...); }
```

This should work well if the `Args...` types are `int`, `long`, `float` and other simple types without support for move syntax. If you want to add perfect forwarding, something as

```
template <typename... As>
R call (As && ... as) { return func(std::forward<As>(as)...); }
```

**--- EDIT ---**

If I understand correctly (?) according to <NAME> this code doesn't even compile. There are some errors in the OP's original code (the `class Foo` should be a `struct` or make something `public`; the `typedef` in `main()` is wrong) but the following corrected code compiles with my g++ and my clang++:

```
#include <iostream>
#include <functional>

template <typename R, typename... Args>
struct Foo {
    std::function<R(Args...)> func;

    Foo(const std::function<R(Args...)> &funcIn) : func(funcIn) {}

    R call (Args... as) { return func(as...); }
};

int main () {
    typedef Foo<int, int, int> Bar;

    Bar x([](int y, int z){return y + z + 2;});
    std::cout << x.call(12, 7);
    return 0;
}
```

Upvotes: 0 <issue_comment>username_2: It is simple. Just add the names of the arguments.

```
R call(Args... args) { return func(args...); }
```

Upvotes: 1
2018/03/20
865
3,360
<issue_start>username_0: I have made a new widget that has two different layout designs in my CSHTML, depending on customers' needs. All other code is the same; it is only the HTML that is really different. Instead of having two different widgets, I was wondering if I could use parameters in the CSHTML, then call them in the Control Properties of the widget to call that design. Pseudo code below. --- if(parameter in sitecore = null) {This HTML code} else if(parameter in sitecore = scrolling) {This HTML code} --- I haven't been able to find any examples of this online as of yet, so any help would be fantastic. Thanks!<issue_comment>username_1: You could use rendering parameters for this purpose. Rendering parameters can be used to pass parameters to Sitecore presentation components. They are normally used to define the presentation of a component. Rendering parameters can be set on a rendering on the page by the editors (so they can decide which display is used). More info here: <https://www.youtube.com/watch?v=lkpWgv2Pt0c> or [here](http://www.nttdatasitecore.com/Blog/2016/September/Create-Rendering-Parameters-in-Sitecore-MVC). Upvotes: 1 <issue_comment>username_2: There are multiple options to do this in Sitecore. To switch or adjust the HTML, you can use these options in the Experience Editor.
1) Rendering parameters

Rendering parameters can be used to pass parameters to Sitecore renderings: [Create-Rendering-Parameters-in-Sitecore-MVC](http://www.nttdatasitecore.com/Blog/2016/September/Create-Rendering-Parameters-in-Sitecore-MVC) [Youtube Sitecore Rendering Parameters](https://www.youtube.com/watch?v=lkpWgv2Pt0c) [Friday Sitecore Best Practice: Using Parameter Templates](https://www.youtube.com/watch?v=QjN4IXXjnLQ)

2) Compatible renderings

An easy way for a content editor to switch between renderings and use the same datasource: [Compatible rendering](http://sitecore.stockpick.nl/english/the-experience-editor/)

Upvotes: 0 <issue_comment>username_3: As mentioned in the other answers, you could use Rendering Parameters, which will allow your authors to select a value on the Rendering Properties and then decide on the logic to run. This will require your user to make a change after adding the component to the page. An alternative option is to make use of the `Parameters` field on the View / Controller Rendering. This will allow you to define 2 separate Renderings in Sitecore, and allow your editors to select the variant they require. Coupled together with [compatible renderings](http://sitecore.stockpick.nl/english/the-experience-editor/), this will allow your editors to quickly switch between the variants. On the View Rendering/Controller Rendering, set the Parameters field as required, e.g.:

`Parameters: ShouldScroll=true&WidgetClass=alert`

Then in your code you can access these parameters:

```
@using Sitecore.Mvc.Presentation
@{
    RenderingParameters parameters = Sitecore.Mvc.Presentation.RenderingContext.CurrentOrNull.Rendering.Parameters;
    bool shouldScroll = MainUtil.GetBool(parameters["ShouldScroll"], false);
    string widgetClass = parameters["WidgetClass"];
}

@if(shouldScroll)
{
    Keep Scrolling!
}
else if (widgetClass == "alert")
{
    OH NO!
}
```

You can [read more in this similar answer](https://sitecore.stackexchange.com/a/3049/135) I previously provided.
Upvotes: 0
2018/03/20
326
1,183
<issue_start>username_0: While running `docker:dind` I can't use the `docker login` command or any other docker command. My use case is that I have a Nexus Docker Registry and I'm trying to connect to this registry through GitLab CI.

```
docker run --rm -it docker:stable-dind docker login -u user -p password https://registry.mine.io
```

This gives:

```
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
```

<issue_comment>username_1: Just add `--privileged` to the command:

`docker run --rm -it --privileged docker:stable-dind docker login -u user -p password https://registry.mine.io`

Upvotes: 3 [selected_answer]<issue_comment>username_2: I found a solution to the same problem in this article: <https://www.santoshsrinivas.com/docker-on-ubuntu-16-04/>

You need to run the following commands on the machine with your gitlab-ci worker:

```
sudo groupadd docker
sudo gpasswd -a gitlab-runner docker
sudo service docker restart
```

Upvotes: -1
2018/03/20
1,000
3,670
<issue_start>username_0: I want project `A` to compile and its tests to run, but when I place it as a dependency into project `B`, project `A`'s dependencies should not be available to project `B`. For example:

1. Add `org.example.foo` as a dependency into project `A` (not `B`)
2. Add project `A` as a dependency inside project `B`
3. Add this statement in a class inside project `B`: `import org.example.foo.*`
4. You should get a compilation error on this line: `import org.example.foo.*`

<issue_comment>username_1: You can exclude particular transitive dependencies via exclusions like this:

```
<project>
  ...
  <dependencies>
    <dependency>
      <groupId>sample.ProjectA</groupId>
      <artifactId>Project-A</artifactId>
      <version>1.0</version>
      <exclusions>
        <exclusion>
          <groupId>sample.ProjectB</groupId>
          <artifactId>Project-B</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>
</project>
```

Or you can exclude all transitive dependencies like this (requires [Maven 3.2.1+](http://maven.apache.org/docs/3.2.1/release-notes.html)):

```
<project>
  ...
  <dependencies>
    <dependency>
      <groupId>sample.ProjectA</groupId>
      <artifactId>Project-A</artifactId>
      <version>1.0</version>
      <exclusions>
        <exclusion>
          <groupId>*</groupId>
          <artifactId>*</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>
</project>
```

Apart from that, I would really think about your project structure if you might need to change your project A to use different dependencies etc. Upvotes: 2 <issue_comment>username_2: For Gradle, you can use the [gradle-dependency-analyze](https://github.com/wfhartford/gradle-dependency-analyze) plugin; it will add the `analyzeClassesDependencies` task:

> This task depends on the `classes` task and analyzes the dependencies of the main source set's output directory. This ensures that all dependencies of the classes are declared in the `compile`, `compileOnly`, or `provided` configuration. It also ensures the inverse, that all of the dependencies of these configurations are used by classes

So it won't trigger a compilation issue, but if `B` tries to import `org.example.foo.*`, during `B`'s build you will get this message:

```
FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':B:analyzeClassesDependencies'.
> Dependency analysis found issues:
  usedUndeclaredArtifacts:
   - org.example.foo:foo-core:1.0.0
```

Either `B` needs its own `foo-core` dependency (it can't use `A`'s), or it's not allowed and `A`'s dependencies remain unusable (no build possible). Upvotes: 1 <issue_comment>username_3: The Readme of the [gradle-dependency-analyze](https://github.com/wfhartford/gradle-dependency-analyze) plugin mentioned in this [answer](https://stackoverflow.com/a/49693428/2438951) mentions:

> This plugin attempts to replicate the functionality of the maven dependency plugin's analyze goals which fail the build if dependencies are declared but not used or used but not declared.

Taking this lead, when I checked the maven dependency plugin's [dependency:analyze](https://maven.apache.org/plugins/maven-dependency-plugin/analyze-mojo.html) goal, I found the option [**failOnWarning**](https://maven.apache.org/plugins/maven-dependency-plugin/analyze-mojo.html#failOnWarning), which might be what you are after. So, you can plug it into your project `B`'s maven invocation like so:

> mvn -DfailOnWarning=true dependency:analyze < other goals e.g. package >

And, if there are instances of any `Used undeclared dependencies` or `Unused declared dependencies`, the former being your use case, the build will fail. The only downside is, I didn't find any command-line way to ask the plugin to ignore `Unused declared dependencies`. Upvotes: 1 <issue_comment>username_4: You should specify the dependencies in project "A" with `provided`. This way the respective dependency is used for compilation & testing, but is not transitively used in project "B". Here you can find the different scopes on maven dependencies: <https://maven.apache.org/pom.html#Dependencies> Upvotes: 2 [selected_answer]
2018/03/20
811
2,502
<issue_start>username_0: I have a function that uses dplyr to summarize a variable. I want to be able to pass the name of the summary function as a parameter. The approach below works (using match.fun). I was wondering if there is a better/simpler approach? ``` exampleFunction <- function(df, var, function_name, ...){ var <- enquo(var) apply_some_function <-function(data, function_name, ...){ FUN <- match.fun(function_name) FUN(data,...) } results <- df %>% summarize (result=apply_some_function(!!var, function_name,...)) } exampleFunction(iris, Sepal.Width, "mean") exampleFunction(iris, Sepal.Width, "min") ```<issue_comment>username_1: Usually no need to pass a function by its name in R -- since functions are first-class(ish), you can almost always just pass the function itself! For example: ``` library(dplyr) # data to illustrate iris <- iris[1:10, ] iris$Sepal.Length[1:3] <- NA # the custom summary function custom_summary <- function(df, var, summary_func, ...){ var <- enquo(var) df %>% summarize(res = summary_func(!!var, ...)) } # check that we can pass params to `summary_func` via `...`: custom_summary(iris, var=Sepal.Length, summary_func=mean) custom_summary(iris, var=Sepal.Length, summary_func=mean, na.rm=TRUE) # double-check result against same thing in global env: iris %>% summarize(res = mean(Sepal.Length)) iris %>% summarize(res = mean(Sepal.Length, na.rm=TRUE)) ``` Note that while passing column names to functions is annoying and complicated in `dplyr::`, passing functions as parameters to other functions is a perfectly natural thing to do in R. Especially when combined with the `magrittr::` pipe, this enables super compact summaries. 
Just one example: ``` funcs <- c(mean=mean, mdn=median, lu=function(x) length(unique(x))) cols <- c("Petal.Length", "Petal.Width", "Sepal.Length", "Sepal.Width") funcs %>% sapply(function(f) iris[, cols] %>% sapply(f)) ## mean mdn lu ## Petal.Length 3.758000 4.35 43 ## Petal.Width 1.199333 1.30 22 ## Sepal.Length 5.843333 5.80 35 ## Sepal.Width 3.057333 3.00 23 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use `summarize_at` ``` exampleFunction2 <- function(df, var, function_name, ...){ var <- enquo(var) results <- df %>% summarize_at(vars(!!var), .funs = function_name,...) %>% setNames("result") } identical(exampleFunction2(iris, Sepal.Width, "mean"), exampleFunction(iris, Sepal.Width, "mean")) # [1] TRUE ``` Upvotes: 1
2018/03/20
917
3,395
<issue_start>username_0: Currently I am working on to deploy the Azure SQL Database by adding multiple IP addresses under Firewall rules using Azure ARM templates. This is the code for adding one IP address under Firewall settings of Azure SQL Server. ``` { "name": "AllowAllMicrosoftAzureIps", "type": "firewallrules", "apiVersion": "2014-04-01", "location": "[resourceGroup().location]", "properties": { "startIpAddress": "[parameters('startIpAddress')]", "endIpAddress": "[parameters('endIpAddress')]" }, "dependsOn": [ "[variables('sqlServerName')]" ] }, ``` But I want to add the multiple IP addresses at a time under Firewall settings of Azure SQL Database using Azure ARM templates.<issue_comment>username_1: I haven't tested it, but I believe it would look something like this. Use the `copy` iterator and supply an array of start and end IP addresses. ```json "parameters": { "firewallIpAddresses": { "type": "object", "defaultValue": [ { "start": "192.168.3.11", "end": "172.16.58.3","clientName": "Client1" }, { "start": "192.168.127.12", "end": "192.168.3.11","clientName": "Client2" }, { "start": "172.16.58.3", "end": "192.168.3.11","clientName": "Client3" } ] } }, "resources": [ { "name": "[concat(variables('sqlServerName'), '/', parameters('firewallIpAddresses')[copyIndex()].clientName)]", "type": "Microsoft.Sql/servers/firewallrules", "apiVersion": "2014-04-01", "location": "[resourceGroup().location]", "properties": { "startIpAddress": "[parameters('firewallIpAddresses')[copyIndex('firewallrulecopy')].start]", "endIpAddress": "[parameters('firewallIpAddresses')[copyIndex('firewallrulecopy')].end]" }, "dependsOn": [ "[variables('sqlServerName')]" ], "copy": { "name": "firewallrulecopy", "count": "[length(parameters('firewallIpAddresses'))]" } } ] ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: ``` "name": "nba-instance-one", "type": "Microsoft.Sql/servers", "apiVersion": "2014-04-01", "location": "[resourceGroup().location]", "tags": { "displayName": 
"sql-server-instance" }, "properties": { "administratorLogin": "admin", "administratorLoginPassword": "<PASSWORD>" }, "resources": [ { "type": "firewallRules", "apiVersion": "2014-04-01", "location": "[resourceGroup().location]", "name": "LaptopIp", "properties": { "startIpAddress": "172.16.31.10", "endIpAddress": "172.16.31.10" }, "dependsOn": [ "[resourceId('Microsoft.Sql/servers', 'sql-server-instance')]" ] }, { "type": "firewallRules", "apiVersion": "2014-04-01", "location": "[resourceGroup().location]", "name": "OtherIP", "properties": { "startIpAddress": "192.168.3.11", "endIpAddress": "192.168.3.11" }, "dependsOn": [ "[resourceId('Microsoft.Sql/servers', 'sql-server-instance')]" ] } ``` If it's only a few IP addresses you could add more fire wall rules for each IP address. Upvotes: 1
2018/03/20
2,266
4,746
<issue_start>username_0: I have a csv file with a column containing dates, but the dates are in two different formats: "m/d/y H:M" and "y m d H:M:S". I want to make a new column with these dates all in one format (I don't care which one). I tried the parse\_date\_time function but it would only work for one of the formats, not both. How can I go about doing this? Here is the code I was trying to use: ``` newdata <- mutate(data, newcolumn = parse_date_time(x = data$date_column, orders = c("m/d/y H:M", "y m d H:M:S"), locale = "eng") ) ``` Here are some example dates from the column: ``` x <- c("6/21/2006 0:00", "1889-06-13 00:00:00", "6/28/2012 0:00", "5/19/2015 0:00", "6/6/2016 0:00", "1884-05-24 00:00:00", "7/28/2013 0:00") ```<issue_comment>username_1: So we start by separating the two: ``` x <- c("03/20/2018 10:42", "2018-03-20 10:37:02") DF <- data.frame(x = x, stringsAsFactors = FALSE) slash_index <- grep("/", DF$x) slash <- DF$x[slash_index] dash <- DF$x[-slash_index] ``` Then we convert them. I like lubridate, but you can use your method if you'd like ``` library(lubridate) slash <- mdy_hm(slash) dash <- ymd_hms(dash) ``` Then we put them into a date vector: ``` date_times <- integer(0) date_times[slash_index] <- slash date_times[seq_along(DF$x)[-slash_index]] <- dash DF$x <- as.POSIXct(date_times, origin = "1970-01-01 00:00:00") DF # x # 1 2018-03-20 03:42:02 # 2 2018-03-20 03:37:02 ``` Note: The tricky part here was re-assigning parts of a vector to a vector according to their index. When a portion of a vector was assigned to a `POSIXct` object, it had its attributes stripped, turning it into the internal integer code for the date time. This was resolved by stripping the attributes at the beginning, and then re-assigning the class at the end. 
Here's the full thing with your example: ``` install.packages("lubridate") library(lubridate) x <- c("6/21/2006 0:00", "1889-06-13 00:00:00", "6/28/2012 0:00", "5/19/2015 0:00", "6/6/2016 0:00", "1884-05-24 00:00:00", "7/28/2013 0:00") DF <- data.frame(x = x, stringsAsFactors = FALSE) slash_index <- grep("/", DF$x) slash <- DF$x[slash_index] dash <- DF$x[-slash_index] slash <- mdy_hm(slash) dash <- ymd_hms(dash) date_times <- integer(0) date_times[slash_index] <- slash date_times[seq_along(DF$x)[-slash_index]] <- dash DF$x <- as.POSIXct(date_times, origin = "1970-01-01 00:00:00", tz = "UTC") DF # x # 1 2006-06-21 # 2 1889-06-13 # 3 2012-06-28 # 4 2015-05-19 # 5 2016-06-06 # 6 1884-05-24 # 7 2013-07-28 ``` Because the times for these are all `"00:00:00"`, they've been truncated. You can display them with the `"00:00:00"` using the method described in answers to [this question](https://stackoverflow.com/questions/19756771/as-posixct-with-datetimes-including-midnight). Upvotes: 0 <issue_comment>username_2: The `anytime` package does just that -- heuristically evaluating plausible formats: ``` R> library(anytime) R> x <- c("6/21/2006 0:00", + "1889-06-13 00:00:00", + "6/28/2012 0:00", + "5/19/2015 0:00", + "6/6/2016 0:00", + "1884-05-24 00:00:00", + "7/28/2013 0:00") R> anytime(x) [1] "2006-06-21 CDT" "1889-06-13 CST" "2012-06-28 CDT" [4] "2015-05-19 CDT" NA "1884-05-24 CST" [7] "2013-07-28 CDT" R> ``` It uses Boost's date\_time library parser by default, and that one does *not* do single digit month/day, hence the `NA` on element six. But we also added R's parser as a fallback: ``` R> anytime(x, useR=TRUE) [1] "2006-06-21 CDT" "1889-06-13 CST" "2012-06-28 CDT" [4] "2015-05-19 CDT" "2016-06-06 CDT" "1884-05-24 CST" [7] "2013-07-28 CDT" R> ``` So here is *all just works* without a single format specification. 
Upvotes: 2 <issue_comment>username_3: Using `lubridate::parse_date_time()`: ``` library(lubridate) library(dplyr) x <- c("6/21/2006 0:00", "1889-06-13 00:00:00", "6/28/2012 0:00", "5/19/2015 0:00", "6/6/2016 0:00", "1884-05-24 00:00:00", "7/28/2013 0:00") df <- data_frame(date_column = x) df_new <- df %>% mutate(new_column = parse_date_time(date_column, orders = c('ymdHMS', "mdyHM"))) df_new # A tibble: 7 x 2 date_column new_column 1 6/21/2006 0:00 2006-06-21 00:00:00 2 1889-06-13 00:00:00 1889-06-13 00:00:00 3 6/28/2012 0:00 2012-06-28 00:00:00 4 5/19/2015 0:00 2015-05-19 00:00:00 5 6/6/2016 0:00 2016-06-06 00:00:00 6 1884-05-24 00:00:00 1884-05-24 00:00:00 7 7/28/2013 0:00 2013-07-28 00:00:00 ``` Upvotes: 2
2018/03/20
871
3,255
<issue_start>username_0: I need to get the frames from a local video file so I can process them before the video is played. I already tried using AVAssetReader and VideoOutput. [EDIT] Here is the code I used from [Accesing Individual Frames using AV Player](https://stackoverflow.com/questions/39570745/accesing-individual-frames-using-av-player)

```
let asset = AVAsset(URL: inputUrl)
let reader = try! AVAssetReader(asset: asset)
let videoTrack = asset.tracksWithMediaType(AVMediaTypeVideo)[0]

// read video frames as BGRA
let trackReaderOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: [String(kCVPixelBufferPixelFormatTypeKey): NSNumber(unsignedInt: kCVPixelFormatType_32BGRA)])

reader.addOutput(trackReaderOutput)
reader.startReading()

while let sampleBuffer = trackReaderOutput.copyNextSampleBuffer() {
    print("sample at time \(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))")
    if let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        // process each CVPixelBufferRef here
        // see CVPixelBufferGetWidth, CVPixelBufferLockBaseAddress, CVPixelBufferGetBaseAddress, etc
    }
}
```

<issue_comment>username_1: I believe AVAssetReader should work. What did you try? Have you seen this sample code from Apple? <https://developer.apple.com/library/content/samplecode/ReaderWriter/Introduction/Intro.html> Upvotes: 2 [selected_answer]<issue_comment>username_2: You can have a look at VideoToolbox: <https://developer.apple.com/documentation/videotoolbox> But beware: this is close to the hardware decompressor and sparsely documented terrain. Upvotes: 0 <issue_comment>username_3: Depending on what processing you want to do, OpenCV may be an option - in particular if you are detecting or tracking objects in your frames. If your needs are simpler, then the effort to use OpenCV with Swift may be a little too much - see below.
You can open a video, read it frame by frame, do your work on the frames and then display them - bearing in mind the need to be efficient to avoid delaying the display. The basic code structure is quite simple - this is a Python example, but the same principles apply across supported languages:

```python
import numpy as np
import cv2

cap = cv2.VideoCapture('vtest.avi')

while(cap.isOpened()):
    ret, frame = cap.read()

    # Do whatever work you want on the frame here - in this example
    # from the tutorial the image is being converted from one colour
    # space to another
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # This displays the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```

More info here: <http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html>

The one caveat is that using OpenCV with Swift requires some additional effort - this is a good example, but it evolves constantly so it is worth searching for if you decide to go this way: <https://medium.com/@yiweini/opencv-with-swift-step-by-step-c3cc1d1ee5f1> Upvotes: -1 <issue_comment>username_4: I found out what the problem was! It was with my implementation. The code I posted is correct. Thank you all. Upvotes: 1
2018/03/20
380
1,490
<issue_start>username_0: I have a base class that inherits from MonoBehaviour. How do I cast my MonoBehaviour to the base class when finding it in the hierarchy?

```cs
GameManager : MonoBehaviour {
    public MonoBaseClass MyThing;

    void Awake() {
        MyThing = GameObject.Find("Child") as MonoBaseClass;
    }
}

MonoBaseClass : MonoBehaviour {
    public void BaseClassMethod() {}
}
```

<issue_comment>username_1: You need to use `FindObjectOfType()`, i.e.:

```cs
void Awake() {
    MyThing = FindObjectOfType<MonoBaseClass>();
}
```

Upvotes: 1 <issue_comment>username_2: `GameObject.Find` returns a GameObject; a MonoBehaviour is a component of a GameObject. That's why you can't cast the GameObject to the MonoBaseClass. Instead you have to get a reference to the GameObject and then get the component:

```
GameObject childGameObject = GameObject.Find("Child");
MyThing = childGameObject.GetComponent<MonoBaseClass>();
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: The problem with both `Find` and `FindObjectOfType` is: they are quite slow, and you will get the first hit from the entire scene. If the component you are looking for is on a GameObject which is a child of the current GameObject (which seems to be the case), then you can just use:

```
MyThing = GetComponentInChildren<MonoBaseClass>();
```

<https://docs.unity3d.com/ScriptReference/Component.GetComponentInChildren.html>

Of course this will still only get the first hit. For more, use an array and `GetComponentsInChildren<MonoBaseClass>()`. Upvotes: 1
2018/03/20
1,696
5,995
<issue_start>username_0: Our Rest API takes JSON input from several external parties. They all use "ISO-ish" formats, but the formatting of the time zone offset is slightly different. These are some of the most common formats we see: ``` 2018-01-01T15:56:31.410Z 2018-01-01T15:56:31.41Z 2018-01-01T15:56:31Z 2018-01-01T15:56:31+00:00 2018-01-01T15:56:31+0000 2018-01-01T15:56:31+00 ``` Our stack is Spring Boot 2.0 with Jackson ObjectMapper. In our data classes we use the type `java.time.OffsetDateTime` a lot. Several developers have tried to achieve a solution that parses all of the above formats, none have been successful. Particularly the fourth variant with a colon (`00:00`) seems to be unparseable. It would be great if the solution works without having to place an annotation on each and every date/time field of our models. Dear community, do you have a solution?<issue_comment>username_1: One alternative is to create a custom deserializer. First you annotate the respective field: ``` @JsonDeserialize(using = OffsetDateTimeDeserializer.class) private OffsetDateTime date; ``` And then you create the deserializer. 
It uses a [`java.time.format.DateTimeFormatterBuilder`](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatterBuilder.html), using lots of optional sections to deal with all the different types of offsets:

```
public class OffsetDateTimeDeserializer extends JsonDeserializer<OffsetDateTime> {

    private DateTimeFormatter fmt = new DateTimeFormatterBuilder()
        // date/time
        .append(DateTimeFormatter.ISO_LOCAL_DATE_TIME)
        // offset (hh:mm - "+00:00" when it's zero)
        .optionalStart().appendOffset("+HH:MM", "+00:00").optionalEnd()
        // offset (hhmm - "+0000" when it's zero)
        .optionalStart().appendOffset("+HHMM", "+0000").optionalEnd()
        // offset (hh - "+00" when it's zero)
        .optionalStart().appendOffset("+HH", "+00").optionalEnd()
        // offset (pattern "X" uses "Z" for zero offset)
        .optionalStart().appendPattern("X").optionalEnd()
        // create formatter
        .toFormatter();

    @Override
    public OffsetDateTime deserialize(JsonParser p, DeserializationContext ctxt) throws IOException, JsonProcessingException {
        return OffsetDateTime.parse(p.getText(), fmt);
    }
}
```

I also used the built-in constant [`DateTimeFormatter.ISO_LOCAL_DATE_TIME`](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ISO_LOCAL_DATE_TIME) because it takes care of the optional fraction of seconds - and the number of fractional digits seems to be variable as well, and this built-in formatter already takes care of those details for you.
--- I'm using JDK 1.8.0\_144 and found a shorter (but not much) solution: ``` private DateTimeFormatter fmt = new DateTimeFormatterBuilder() // date/time .append(DateTimeFormatter.ISO_LOCAL_DATE_TIME) // offset +00:00 or Z .optionalStart().appendOffset("+HH:MM", "Z").optionalEnd() // offset +0000, +00 or Z .optionalStart().appendOffset("+HHmm", "Z").optionalEnd() // create formatter .toFormatter(); ``` Another improvement you can make is change the formatter to be `static final`, [because this class is immutable and thread-safe](https://stackoverflow.com/questions/48540562/can-i-create-only-1-static-instance-of-datetimeformatter-of-java8?rq=1#comment84077015_48540562). Upvotes: 2 <issue_comment>username_2: This is just about a quarter of an answer. I neither have experience with Kotlin nor Jackson, but I have a couple of solutions in Java that I’d like to contribute. I should be glad if you can fit them into a total solution somehow. ``` String modifiedEx = ex.replaceFirst("(\\d{2})(\\d{2})$", "$1:$2"); System.out.println(OffsetDateTime.parse(modifiedEx)); ``` On my Java 9 (9.0.4) the one-arg `OffsetDateTime.parse` parses all of your example strings except the one with offset `+0000` without colon. So my hack is to insert that colon and then parse. The above parses all of your strings. It doesn’t work readily in Java 8 (there were some changes from Java 8 to Java 9). The nicer solution that works in Java 8 too (I have tested): ``` DateTimeFormatter formatter = new DateTimeFormatterBuilder() .append(DateTimeFormatter.ISO_LOCAL_DATE_TIME) .appendPattern("[XXX][XX][X]") .toFormatter(); System.out.println(OffsetDateTime.parse(ex, formatter)); ``` The patterns `XXX`, `XX` and `X` match `+00:00`, `+0000` and `+00`, respectively. We need to try them in order from the longest to the shortest to make sure that all text is being parsed in all cases. Upvotes: 1 <issue_comment>username_3: Thank you very much for all your input! 
I chose the deserializer suggested by username_1 combined with the formatter suggested by Ole V.V. (because it's shorter).

```
class DefensiveIsoOffsetDateTimeDeserializer : JsonDeserializer<OffsetDateTime>() {

    private val formatter = DateTimeFormatterBuilder()
        .append(DateTimeFormatter.ISO_LOCAL_DATE_TIME)
        .appendPattern("[XXX][XX][X]")
        .toFormatter()

    override fun deserialize(p: JsonParser, ctxt: DeserializationContext) =
        OffsetDateTime.parse(p.text, formatter)

    override fun handledType() = OffsetDateTime::class.java
}
```

I also added a custom serializer to make sure we use the correct format when producing JSON:

```
class OffsetDateTimeSerializer : JsonSerializer<OffsetDateTime>() {

    override fun serialize(
        value: OffsetDateTime,
        gen: JsonGenerator,
        serializers: SerializerProvider
    ) = gen.writeString(value.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME))

    override fun handledType() = OffsetDateTime::class.java
}
```

Putting all the parts together, I added a `@Configuration` class to my Spring classpath to make it work without any annotations on the data classes:

```
@Configuration
open class JacksonConfig {

    @Bean
    open fun jacksonCustomizer() = Jackson2ObjectMapperBuilderCustomizer {
        it.deserializers(DefensiveIsoOffsetDateTimeDeserializer())
        it.serializers(OffsetDateTimeSerializer())
    }
}
```

Upvotes: 2 [selected_answer]
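Since the lenient formatter chosen here is plain `java.time` with no Jackson dependency, it can be exercised on the question's sample inputs on its own - a sketch for illustration (the class name is hypothetical):

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;

public class LenientOffsetParsing {
    // Accepts Z, +00:00, +0000 and +00 style offsets after an ISO local date-time.
    static final DateTimeFormatter LENIENT_OFFSET = new DateTimeFormatterBuilder()
            .append(DateTimeFormatter.ISO_LOCAL_DATE_TIME)
            .appendPattern("[XXX][XX][X]")
            .toFormatter();

    public static void main(String[] args) {
        String[] samples = {
                "2018-01-01T15:56:31.410Z",
                "2018-01-01T15:56:31.41Z",
                "2018-01-01T15:56:31Z",
                "2018-01-01T15:56:31+00:00",
                "2018-01-01T15:56:31+0000",
                "2018-01-01T15:56:31+00"
        };
        for (String s : samples) {
            // Every variant parses without any per-format branching.
            System.out.println(OffsetDateTime.parse(s, LENIENT_OFFSET));
        }
    }
}
```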
2018/03/20
493
1,654
<issue_start>username_0: I followed this video: <https://www.youtube.com/watch?v=Y2q_b4ugPWk> and got my cmd to recognize python as a path variable. But this only works if I open cmd and powershell as an Admin. How do I get it to work for all users? As Admin: ``` PS C:\windows\system32> python Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 10:22:32) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> ``` As user: ``` PS C:\Users> python python : The term 'python' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + python + ~~~~~~ + CategoryInfo : ObjectNotFound: (python:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException PS C:\Users> ```<issue_comment>username_1: With elevated privileges (right-click `powershell.exe`, run as admin) ``` $PathToPython = 'C:\whatever\containingfolderforpythonexe' $CurrentValue = [Environment]::GetEnvironmentVariable('Path', 'Machine') [Environment]::SetEnvironmentVariable('Path', "$CurrentValue;$PathToPython", 'Machine') ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I build a batch opener for my powershell script. Once open in elevated privileges any other script called is also elevated. I do a similar thing with .vbs to open batch in admin. ``` @ECHO off ECHO Please Wait. powershell.exe -ExecutionPolicy ByPass -file "File.ps1" ECHO. ECHO PRESS ANY KEY ONCE DEBUGGED Pause >nul exit ``` Upvotes: -1
2018/03/20
436
1,521
<issue_start>username_0: I have this URL here with my access key, ``` https://api.instagram.com/v1/users/MY_USER_ID/media/recent?access_token=MY_ACCESS_KEY ``` but it only returns 1 item. I have googled for a solution, but can't seem to find one. This was **working** about a **month ago** and now it's broken. I have deleted the old client, created a new one and generated a new access\_token. But it stays with the same result. My question is: **How can I get the same results like a month ago?** *I got my access token via [freevision.me/instagram](http://freevision.me/instagram/) and my client ID by following these instructions [doc.freevision.me/adventure/knowledge-base/get-instagram-client-id](http://doc.freevision.me/adventure/knowledge-base/get-instagram-client-id/).*
2018/03/20
325
1,125
<issue_start>username_0: I need assistance in creating a program that takes a vector of strings and combines it with a vector of numbers. To produce an output of (for example) Apple.1, Orange.1, Peach.1, Apple.2, Orange.2, Peach.2 I currently have ``` rep(paste(c("Apple", "Orange", "Peach"), c(1,2), sep='.') each=2) ``` But that is not working. Can anyone show me what I'm doing wrong? Thanks.
2018/03/20
274
1,012
<issue_start>username_0: I am able to read the messages from activemq using camel context[xml], but I would like to read only a set number of the messages. For example, if the queue contains 10 000 messages, we want to read only the first 1 000 messages; the rest shouldn't be touched. I am new to camel
2018/03/20
1,364
4,668
<issue_start>username_0: No matter what I try I keep hitting the "MongoClient opened before fork" warning regarding not forking active mongo connections when trying to use multiprocessing on a mongoengine db. The [standard mongo advice](http://api.mongodb.com/python/current/faq.html#using-pymongo-with-multiprocessing) seems to be to only connect to the db from within the child processes but I think what I'm doing should be functionally equivalent because I'm closing the database prior to using multiprocessing however I still hit the problem. Related questions either without a minimal example or with inapplicable solutions are [here](https://stackoverflow.com/questions/45530741/manage-python-multiprocessing-with-mongodb), [here](https://stackoverflow.com/questions/44133435/mongoengine-and-dealing-with-userwarning-mongoclient-opened-before-fork-creat), and specifically for the case of flask/celery and [here](https://stackoverflow.com/questions/41905472/pymongo-create-mongoclient-with-connect-false-or-create-client-after-forking) Minimal example to reproduce the problem: ``` from mongoengine import connect, Document, StringField, ListField, ReferenceField from pathos.multiprocessing import ProcessingPool class Base(Document): key = StringField(primary_key=True) name = StringField() parent = ReferenceField('Parent', required=True) class Parent(Document): key = StringField(primary_key=True) name = StringField() bases = ListField(ReferenceField('Base')) def remove_base(key): db = connect('mydb') mongo_b = Base.objects().get(key=key) mongo_b.parent.update(pull__bases=mongo_b) mongo_b.delete() ### setup db = connect('mydb', connect=False) Base(key='b1', name='test', parent='p1').save() Base(key='b2', name='test', parent='p1').save() Base(key='b3', name='test2', parent='p1').save() p=Parent(key='p1', name='parent').save() p.update(add_to_set__bases='b1') p.update(add_to_set__bases='b2') p.update(add_to_set__bases='b3') ### find objects we want to delete my_base_objects = 
Base.objects(name='test') keys = [b.key for b in my_base_objects] del my_base_objects # close db to avoid problems?! db.close() del db # parallel map removing base objects and references from the db # warning generated here pp = ProcessingPool(2) pp.map(remove_base, keys) ```<issue_comment>username_1: Ok so I figured it out. Mongoengine caches connections to the database all over the place. If you manually remove them then the issue is resolved. Adding the following import ``` from mongoengine import connection ``` then adding in: ``` connection._connections = {} connection._connection_settings ={} connection._dbs = {} Base._collection = None Parent._collection = None ``` to the '#close db' section appears to solve the issue. Complete code: ``` from mongoengine import connect, Document, StringField, ListField, ReferenceField, connection from pathos.multiprocessing import ProcessingPool class Base(Document): key = StringField(primary_key=True) name = StringField() parent = ReferenceField('Parent', required=True) class Parent(Document): key = StringField(primary_key=True) name = StringField() bases = ListField(ReferenceField('Base')) def remove_base(key): db = connect('mydb', connect=False) mongo_b = Base.objects().get(key=key) mongo_b.parent.update(pull__bases=mongo_b) mongo_b.delete() def setup(): Base(key='b1', name='test', parent='p1').save() Base(key='b2', name='test', parent='p1').save() Base(key='b3', name='test2', parent='p1').save() p=Parent(key='p1', name='parent').save() p.update(add_to_set__bases='b1') p.update(add_to_set__bases='b2') p.update(add_to_set__bases='b3') db = connect('mydb', connect=False) setup() ### find objects we want to delete my_base_objects = Base.objects(name='test') keys = [b.key for b in my_base_objects] del my_base_objects ### close db to avoid problems?! 
db.close() db = None connection._connections = {} connection._connection_settings = {} connection._dbs = {} Base._collection = None Parent._collection = None ### parallel map removing base objects from the db pp = ProcessingPool(2) pp.map(remove_base, keys) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This got improved recently: as of MongoEngine>=0.18.0, the methods `disconnect()` and `disconnect_all()` should be used to disconnect one or all existing connections, respectively ([changelog 0.18.0](http://docs.mongoengine.org/changelog.html#changes-in-0-18-0)). See the official [doc](http://docs.mongoengine.org/guide/connecting.html?highlight=disconnect#disconnecting-an-existing-connection). Upvotes: 1
2018/03/20
594
1,856
<issue_start>username_0: I'm building a line-through header that can span multiple lines. Using the sample code below, is it possible to write my CSS in such a way that the left and right divs are not needed? Where they could be added as pseudo-classes to my header class? [CodePen](https://codepen.io/nickvsg/pen/qordBO) ```css .container { box-sizing: border-box; display: flex; place-content: center space-evenly; align-items: center; } .line { flex: 1; height: 2px; background: black; } .header { font-size: 50px; margin: 0 30px; text-align: center; } .header-broken:after { content: ''; display: -webkit-inline-flex; display: -ms-inline-flexbox; display: inline-flex; width: 50px; height: 5px; flex: auto; width: 100%; height: 2px; background: black; } ``` ```html Normal Title fdasfsaf ```<issue_comment>username_1: It can be done with just one div, see the example below, add some margin to the pseudo elements as needed for spacing. ```css .container { display: flex; text-align: center; } .container:before, .container:after { content: ""; flex: 1; background: linear-gradient(black, black) center / 100% 1px no-repeat; } ``` ```html Normal Title fdasfsaf ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You can also try this. HTML: ``` Normal Title fdasfsaf ====================== ``` CSS: ``` .container { display: flex; text-align: center; } .header { flex: 1; } .header h1 { font-size: 50px; margin: 0 30px; text-align: center; background-color: #fff; display: inline-block; } .header:after { content: ''; border-bottom: 1px solid #000; display: block; margin-top: -58px; } ``` Upvotes: 0
2018/03/20
1,295
4,049
<issue_start>username_0: This is where I am at now: ``` CharList = [] text = "xxxxAAaaaSSSxxx" PasswordSubHT = {"a":"@" , "A":"4" , "S":"5"} for LETTER in text: CharList.append(LETTER) for EL in CharList: if EL in PasswordSubHT: print(text.replace(EL,str(PasswordSubHT[EL]))) ``` This is what I get: ``` xxxx44aaaSSSxxx xxxx44aaaSSSxxx xxxxAA@@@SSSxxx xxxxAA@@@SSSxxx xxxxAA@@@SSSxxx xxxxAAaaa555xxx xxxxAAaaa555xxx xxxxAAaaa555xxx ``` This is what I am trying to get: ``` xxxx4AaaaSSSxxx xxxx44aaaSSSxxx xxxxAA@aaSSSxxx xxxxAA@@aSSSxxx xxxxAA@@@SSSxxx ``` I want to do substitution character by character, which is why I am piping the string to an array. Is this the right way to go about this?<issue_comment>username_1: Check this and let me know: ``` text = "xxxxAAaaaSSSxxx" PasswordSubHT = {"a":"@" , "A":"4" , "S":"5"} for letter in text: if letter in PasswordSubHT.keys(): text = text.replace(letter, str(PasswordSubHT[letter])) print(text) ``` Note: `replace(x, y)` will replace all the `x` in the text. If you want to replace just one char at a time, add the `count` argument = 1: `text = text.replace(letter, str(PasswordSubHT[letter]), 1)` Upvotes: 1 <issue_comment>username_2: Python string [`replace`](https://docs.python.org/2/library/string.html#string.replace) method replaces all instances of the letter with the new letter in the string. For example in the first pass, it skips all letters until A, then sees that A is in the Sub List and replaces ALL INSTANCES of A in the string with 4. Then it moves to the next character, which at that point has become 4 instead of A. Python strings are immutable. So you can't really pick them up and change letters in between. What you can do is convert this string into a list. `char_list = list(text)`. Then you can iterate this and keep a counter.
``` for counter, char in enumerate(char_list): if char in PassSubList: char_list[counter] = PassSubList[char] return_list = ''.join(char_list) ``` Upvotes: 2 <issue_comment>username_3: Before answering your question, there is an important thing you need to know about Python strings, which is that they are immutable. As the [Data Model](https://docs.python.org/2/reference/datamodel.html#objects-values-and-types) doc proposes: > > The value of some objects can change. Objects whose value can change are said to be mutable; objects whose value is unchangeable once they are created are called immutable. > > > So you can't just use the `replace` method, but you have to reassign the new string into your `text` string. So this part: ``` print(text.replace(EL,str(PasswordSubHT[EL]))) ``` Would be: ``` text = text.replace(LETTER, PasswordSubHT[LETTER], 1) print(text) ``` You also don't have to copy the string into a list; just loop over the string itself and check the condition you want. According to the [`replace`](https://docs.python.org/2/library/string.html#string.replace) method doc, you can specify how many characters are allowed to be replaced by providing the `maxreplace` argument. So what you need is something like: ``` CharList = [] text = "xxxxAAaaaSSSxxx" PasswordSubHT = {"a":"@" , "A":"4" , "S":"5"} for LETTER in text: if LETTER in PasswordSubHT.keys(): # Note: Check the new max character argument.
text = text.replace(LETTER, PasswordSubHT[LETTER], 1) print(text) ``` Upvotes: 1 <issue_comment>username_4: You can use regex: ``` import re text = "xxxxAAaaaSSSxxx" PasswordSubHT = {"a":"@" , "A":"4"} combos = {a:iter([b*i for i in range(1, len(re.findall(a, text))+1)]) for a, b in PasswordSubHT.items()} new_results = [[re.sub('{}+'.format(i), next(combos[i[0]])+i[c:], text) for c, i in enumerate(b)] for b in re.findall('|'.join(map(lambda x:'{}+'.format(x), PasswordSubHT.keys())), text)] final_results = [i for b in new_results for i in b] ``` Output: ``` ['xxxx4AaaaSSSxxx', 'xxxx44aaaSSSxxx', 'xxxxAA@aSSSxxx', 'xxxxAA@@SSSxxx', 'xxxxAA@@@SSSxxx'] ``` Upvotes: 1
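For what it's worth, none of the answers reproduces the exact output listed in the question, where each printed line substitutes only the first *k* occurrences of a single letter while starting from the untouched original every time. The `count` argument of `str.replace` mentioned by username_1 and username_3 makes that a short loop; a sketch (the first five results match the question's expected output, followed by the analogous lines for `S`):

```python
text = "xxxxAAaaaSSSxxx"
subs = {"A": "4", "a": "@", "S": "5"}

results = []
for src, dst in subs.items():
    # Replace only the first k occurrences, always starting from the
    # original string rather than from the previous result.
    for k in range(1, text.count(src) + 1):
        results.append(text.replace(src, dst, k))

for line in results:
    print(line)
```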
2018/03/20
1,158
3,856
<issue_start>username_0: I have an http ohs that redirects requests to weblogic 12c; I would like to introduce error 404 if there was a redirect to a custom page and did not display the default weblogic error. Here is the apache location configuration: ``` WebLogicCluster xxxxxx:xxxx,xxxxxx:xxxx SetHandler weblogic-handler ErrorPage /error.html ``` even informing the "ErrorPage" it does not redirect to the chosen page. would you perhaps have some way to modify weblogic page 404 and not apache?
2018/03/20
988
3,170
<issue_start>username_0: Cross entropy formula: [![enter image description here](https://i.stack.imgur.com/W3xm0.gif)](https://i.stack.imgur.com/W3xm0.gif) But why does the following give `loss = 0.7437` instead of `loss = 0` (since `1*log(1) = 0`)? ```python import torch import torch.nn as nn from torch.autograd import Variable output = Variable(torch.FloatTensor([0,0,0,1])).view(1, -1) target = Variable(torch.LongTensor([3])) criterion = nn.CrossEntropyLoss() loss = criterion(output, target) print(loss) ```<issue_comment>username_1: Your understanding is correct but pytorch doesn't compute [**cross entropy**](http://pytorch.org/docs/0.3.1/nn.html?highlight=crossentropyloss#torch.nn.CrossEntropyLoss) in that way. Pytorch uses the following formula. ``` loss(x, class) = -log(exp(x[class]) / (\sum_j exp(x[j]))) = -x[class] + log(\sum_j exp(x[j])) ``` Since, in your scenario, `x = [0, 0, 0, 1]` and `class = 3`, if you evaluate the above expression, you would get: ``` loss(x, class) = -1 + log(exp(0) + exp(0) + exp(0) + exp(1)) = 0.7437 ``` Pytorch considers natural logarithm. Upvotes: 5 <issue_comment>username_2: In your example you are treating output `[0, 0, 0, 1]` as probabilities as required by the mathematical definition of cross entropy. But PyTorch treats them as outputs, that don’t need to sum to `1`, and need to be first converted into probabilities for which it uses the softmax function. So `H(p, q)` becomes: ``` H(p, softmax(output)) ``` Translating the output `[0, 0, 0, 1]` into probabilities: ``` softmax([0, 0, 0, 1]) = [0.1749, 0.1749, 0.1749, 0.4754] ``` whence: ``` -log(0.4754) = 0.7437 ``` Upvotes: 8 [selected_answer]<issue_comment>username_3: I would like to add an important note, as this often leads to confusion. **Softmax is not a loss function**, nor is it really an activation function. It has a very specific task: It is used for multi-class classification to normalize the scores for the given classes. 
By doing so we get probabilities for each class that sum up to **1**. **Softmax is combined with Cross-Entropy-Loss** to calculate the loss of a model. Unfortunately, because this combination is so common, it is often abbreviated. Some are using the term **Softmax-Loss**, whereas PyTorch calls it only **Cross-Entropy-Loss**. Upvotes: 4 <issue_comment>username_4: > > The combination of `nn.LogSoftmax` and `nn.NLLLoss` is equivalent to using > `nn.CrossEntropyLoss`. This terminology is a particularity of PyTorch, as the > `nn.NLLoss` [sic] computes, in fact, the cross entropy but with log probability predictions as inputs where `nn.CrossEntropyLoss` takes scores (sometimes called *logits*). Technically, `nn.NLLLoss` is the cross entropy between the Dirac distribution, putting all mass on the target, and the predicted distribution given by the log probability inputs. > > > * [Deep Learning with PyTorch](https://pytorch.org/assets/deep-learning/Deep-Learning-with-PyTorch.pdf) PyTorch's `CrossEntropyLoss` expects unbounded scores (interpretable as logits / log-odds) as input, not probabilities (as the CE is traditionally defined). Upvotes: 3
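The formula quoted by username_1 is easy to verify with nothing but the standard library; this reproduces the 0.7437 from the question (natural log, scores `x = [0, 0, 0, 1]`, target class 3) and also checks that the softmax route from username_2 gives the same number:

```python
import math

x = [0.0, 0.0, 0.0, 1.0]  # raw scores ("logits"), not probabilities
target = 3

# loss(x, class) = -x[class] + log(sum_j exp(x[j]))
loss = -x[target] + math.log(sum(math.exp(v) for v in x))

# Equivalent: softmax first, then negative log of the target probability.
total = sum(math.exp(v) for v in x)
softmax = [math.exp(v) / total for v in x]
loss_via_softmax = -math.log(softmax[target])

print(round(loss, 4))  # 0.7437
```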
2018/03/20
567
2,093
<issue_start>username_0: When I developed the app in Xcode 8 it used to work fine, both in Portrait and Landscape modes. I didn't change anything in particular. Now that I've started developing in Xcode 9, Portrait works fine, but any View in the App acts weird if switched to Landscape. Like in the images, the view gets cut off [![enter image description here](https://i.stack.imgur.com/DHGqb.png)](https://i.stack.imgur.com/DHGqb.png) [![enter image description here](https://i.stack.imgur.com/ZYxKc.png)](https://i.stack.imgur.com/ZYxKc.png) I use Autolayouts in Storyboard... Is this happening because of the new `safearea` feature? EDIT: View Hierarchy added for Sign View [![enter image description here](https://i.stack.imgur.com/oadKR.png)](https://i.stack.imgur.com/oadKR.png) [![enter image description here](https://i.stack.imgur.com/iCxib.png)](https://i.stack.imgur.com/iCxib.png)<issue_comment>username_1: First of all, please update your Xcode to 9.2 if you are using any version below that. Next, turn on the storyboard feature called "Use Safe Area Layout Guides" inside the "File inspector", then remove and re-apply the viewController constraints. Hope it will work! Upvotes: 1 <issue_comment>username_2: This can be caused by anything; without reviewing the code this can't be answered properly. My suggestions are to check for a `layer` property added to any view in the hierarchy. The view may not have been updated properly in Landscape mode due to the **new `autoLayout` property of `Xcode 9`.** I have a feeling that it may have been caused by the "Hugin.ShadowView"; if any shadow property is added, please check the constraint of the view it is added to and rectify it as needed for `Xcode 9`. If this doesn't help, you may share a short and simple project file and I'll have a look at it. Upvotes: 1 <issue_comment>username_3: This can be due to some layer effect which is being applied unintentionally on **UIViewControllerWrapperView**.
Please check your UIViewControllerWrapperView layer properties or any layer effect on your **UINavigationController**. Upvotes: 2
2018/03/20
818
2,522
<issue_start>username_0: I have the following script: ``` User.includes(:owned_ratings).map{|x| x.owned_ratings.average(:score)} ``` calling `x.owned_ratings.average(:score)` causes n+1 queries: ``` (0.2ms) SELECT AVG("ratings"."score") FROM "ratings" INNER JOIN "video_chats" ON "ratings"."video_chat_id" = "video_chats"."id" WHERE "video_chats"."user_id" = $1 [["user_id", 4]] (0.1ms) SELECT AVG("ratings"."score") FROM "ratings" INNER JOIN "video_chats" ON "ratings"."video_chat_id" = "video_chats"."id" WHERE "video_chats"."user_id" = $1 [["user_id", 1]] (0.1ms) SELECT AVG("ratings"."score") FROM "ratings" INNER JOIN "video_chats" ON "ratings"."video_chat_id" = "video_chats"."id" WHERE "video_chats"."user_id" = $1 [["user_id", 5]] (0.1ms) SELECT AVG("ratings"."score") FROM "ratings" INNER JOIN "video_chats" ON "ratings"."video_chat_id" = "video_chats"."id" WHERE "video_chats"."user_id" = $1 [["user_id", 7]] (0.1ms) SELECT AVG("ratings"."score") FROM "ratings" INNER JOIN "video_chats" ON "ratings"."video_chat_id" = "video_chats"."id" WHERE "video_chats"."user_id" = $1 [["user_id", 3]] ``` Why includes is not working with aggregate methods? Is there any way to fix that? I know that I can implement average method on my own and omit the problem but I want to be sure that there is not better solution for that.<issue_comment>username_1: > > Why `includes` is not working with aggregate methods? > > > Because it would not make sense to reimplement aggregate methods in ruby, when the database server can do the work so much faster. In fact, if this is what you need to do, it would probably be better to prepare and execute a raw SQL query, so that entire iteration is done in the database, therefore avoiding roundtrips and N+1 (not to mention loading **everything**). Upvotes: 4 [selected_answer]<issue_comment>username_2: Because [average](https://apidock.com/rails/ActiveRecord/Calculations/average) actually requires sql query? 
Try something like: ``` User.includes(:owned_ratings).map{ |x| x.owned_ratings.map(&:score).instance_eval { reduce(:+) / size.to_f } } ``` The method to calc the average of an array comes from this [SO answer](https://stackoverflow.com/questions/1341271/how-do-i-create-an-average-from-a-ruby-array). Efficiency depends on how many `owned_ratings` records there are. Oh, [username_1](https://stackoverflow.com/questions/49390861/ruby-on-rails-average-method-causes-n1-queries/49390998#49390998) is right and it's better to do this on the db side Upvotes: 2
2018/03/20
611
1,852
<issue_start>username_0: I am using python for C code generation, I want to have a function that prints the following: ``` { .data1="egg", .data2="dog", }, ``` I tried this function: ``` def funky(data1,data2): return """\ { .data1="egg", .data2="dog", },""".format(data1,data2) ``` Calling "`funky("egg","dog")`" results in a KeyError. Relating to the unpaired curly braces. How can I print these braces?<issue_comment>username_1: You would need to: 1. Use two `{`s and two `}`s. 2. Escape the double quotes inside the string. 3. Use `{0}` and `{1}` instead of `{egg}` and `{dog}`. --- ``` def funky(data1,data2): return """\ {{ .data1=\"{0}\", .data2=\"{1}\", }},""".format(data1,data2) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You have no variables in your string literal, you didn't escape the `{` and `}`, and you broke the indentation. Try this: ``` def stripMargin(txt): return '\n'.join([r.split('|', 1)[1] for r in txt.split('\n')]) def funky(data1,data2): return stripMargin("""|{{ |.data1="{0}", |.data2="{1}", |}},""".format(data1,data2)) print(funky("egg", "dog")) ``` The `|`-part combined with `stripMargin` ensures that the indentation of the generated code (the object language) does not interfere with the indentation of python (the metalanguage). I would actually indent the generated code differently: ``` def funky(data1,data2): return stripMargin("""|{{ | .data1="{0}", | .data2="{1}", |}},""".format(data1,data2)) ``` --- Since you don't need any common indentation in the final output, you can also use `dedent`: ``` from textwrap import dedent def funky2(data1,data2): return dedent("""\ {{ .data1="{0}", .data2="{1}", }},""").format(data1,data2) ``` (Thanks @user2357112 for pointing it out) Upvotes: 0
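Putting username_1's points together into one runnable sketch — doubled braces `{{`/`}}` emit literal braces, while `{0}`/`{1}` are the substitution fields:

```python
def funky(data1, data2):
    # {{ and }} produce literal { and }; {0} and {1} take the arguments.
    return """\
{{
.data1="{0}",
.data2="{1}",
}},""".format(data1, data2)

print(funky("egg", "dog"))
```

Note that double quotes inside a triple-quoted string need no escaping, so doubling the braces and adding the positional fields are the only essential changes to the question's code.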
2018/03/20
1,371
5,698
<issue_start>username_0: I have been spending the day learning how Azure Functions work. I have gotten my Azure Function registered with my Azure Active Directory, and have registered this application to be secured by way of my Azure Active Directory provide in my Azure Portal. I have deployed this to Azure and everything works as expected, asking for my Azure AD User account, and once I sign in it shows me my HelloWorld Azure Function as I expect. Additionally, I have been able to debug my Azure Function locally. However, locally it is using the `AuthorizationLevel` as configured on my `HttpTriggerAttribute` (`AuthorizationLevel.Anonymous`). This, of course, is ignored when deployed in Azure, but locally now I have lost my user identity as it is configured to be anonymous and not using Azure Active Directory. Is there a way to enable Azure Active Directory authentication on my locally deployed Azure Function? To be clear here, I would like to sign in with my Azure Function locally just as I do with my deployed Azure Function (so I will be redirected to `login.microsoftonline.com` to login), but have that same identity be available to my local Azure Function development environment. Thank you for any assistance you can provide!<issue_comment>username_1: Alright after a few more (OK, a LOT) hours, I have figured out a solution for now. This works in both local and deployed scenarios. I have posted a template solution here: <https://github.com/username_1-angelo/Stash/tree/master/AzureV2Authentication/AzureV2Authentication> Here are the steps that outline the overall process: 1. Sign into your function at https://`function-name`.azurewebsites.net 2. CTRL-SHIFT-C in Chrome -> Application -> Cookies -> -> AppServiceAuthSession -> Copy Value 3. Open `local.settings.json` and paste value from previous step in `AuthenticationToken` setting. 4. While you're there, paste in the URL from first step in `AuthenticationBaseAddress` 5. Launch application. 6. Cross fingers. 7. 
Enjoy magic (Hopefully.) Here is the main event: ```cs public static class AuthenticationExtensions { public static Authentication Authenticate(this HttpRequest @this) { var handler = new HttpClientHandler(); var client = new HttpClient(handler) // Will want to make this a singleton. Do not use in production environment. { BaseAddress = new Uri(Environment.GetEnvironmentVariable("AuthenticationBaseAddress") ?? new Uri(@this.GetDisplayUrl()).GetLeftPart(UriPartial.Authority)) }; handler.CookieContainer.Add(client.BaseAddress, new Cookie("AppServiceAuthSession", @this.Cookies["AppServiceAuthSession"] ?? Environment.GetEnvironmentVariable("AuthenticationToken"))); var service = RestService.For<IAuthentication>(client); var result = service.GetCurrentAuthentication().Result.SingleOrDefault(); return result; } } ``` Note that: 1. An `HttpClient` is created for each call. This is against best practices. 2. Sample code is based on [EasyAuth sample by @stuartleeks](https://github.com/stuartleeks/AzureFunctionsEasyAuth) 3. This solution makes use of the excellent [Refit](https://github.com/paulcbetts/refit) project to get its data.
Here are the remaining classes of interest, for the sake of completeness: ```cs public class Authentication // structure based on sample here: https://cgillum.tech/2016/03/07/app-service-token-store/ { [JsonProperty("access_token", NullValueHandling = NullValueHandling.Ignore)] public string AccessToken { get; set; } [JsonProperty("provider_name", NullValueHandling = NullValueHandling.Ignore)] public string ProviderName { get; set; } [JsonProperty("user_id", NullValueHandling = NullValueHandling.Ignore)] public string UserId { get; set; } [JsonProperty("user_claims", NullValueHandling = NullValueHandling.Ignore)] public AuthenticationClaim[] UserClaims { get; set; } [JsonProperty("access_token_secret", NullValueHandling = NullValueHandling.Ignore)] public string AccessTokenSecret { get; set; } [JsonProperty("authentication_token", NullValueHandling = NullValueHandling.Ignore)] public string AuthenticationToken { get; set; } [JsonProperty("expires_on", NullValueHandling = NullValueHandling.Ignore)] public string ExpiresOn { get; set; } [JsonProperty("id_token", NullValueHandling = NullValueHandling.Ignore)] public string IdToken { get; set; } [JsonProperty("refresh_token", NullValueHandling = NullValueHandling.Ignore)] public string RefreshToken { get; set; } } public class AuthenticationClaim { [JsonProperty("typ")] public string Type { get; set; } [JsonProperty("val")] public string Value { get; set; } } interface IAuthentication { [Get("/.auth/me")] Task<Authentication[]> GetCurrentAuthentication(); } public static class Function1 { [FunctionName("Function1")] public static IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequest req, TraceWriter log) { log.Info("C# HTTP trigger function processed a request."); var authentication = req.Authenticate(); return authentication != null ? (ActionResult)new OkObjectResult($"Hello, {authentication.UserId}") : new BadRequestObjectResult("Authentication not found. 
:("); } } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: [Here is another alternative](https://stackoverflow.com/a/59762344/97803) if you are developing a SPA with JWT tokens that uses Azure-AD or Azure B2C via [Easy Auth](https://learn.microsoft.com/en-us/azure/app-service/configure-authentication-provider-aad?toc=%2Fazure%2Fazure-functions%2Ftoc.json#-configure-with-express-settings). Upvotes: 2
2018/03/20
238
831
<issue_start>username_0: In an xml file, how to change all occurrence of *annotation* tag which is in format: ``` ``` to: ``` ``` I couldn't figure out options to do this using xmlstarlet<issue_comment>username_1: The following [xmlstarlet](/questions/tagged/xmlstarlet "show questions tagged 'xmlstarlet'") command should do the job: ```sh xmlstarlet ed --append //annotation -t attr -n name -v version \ -r //annotation/@version -v value file.xml ``` The order of the attributes is different, but in XML this doesn't matter. Upvotes: 1 <issue_comment>username_2: ``` xmlstarlet edit --omit-decl \ --insert '//manifest/project/annotation' --type 'attr' -n 'name' --value version \ --rename '//manifest/project/annotation/@version' -v 'value' file.xml ``` Output: ``` ``` Upvotes: 3 [selected_answer]
2018/03/20
730
2,273
<issue_start>username_0: I am trying to send JSON data to a PHP server, but I can't understand how I can parse my JSON in PHP and insert the values into a MySQL database. Here is how my JSON data looks: ``` {"reps_list":{"selected_subcategory_id":[0,1,2]}} ``` Here is my PHP code: ``` $reps_list = $_POST['reps_list']; $reps_list= json_decode($reps_list,TRUE); for($i = 0; $i <= count($array['reps_list']['selected_subcategory_id']); $i++){ mysqli_query($conn, "INSERT INTO reps VALUES(NULL, '".$array['reps_list']['selected_subcategory_id'][i]."', 1, 1 )"); ``` } Any Suggestions?<issue_comment>username_1: Try like this for your **valid** json, decode the string as `array` using [json\_decode](http://php.net/manual/en/function.json-decode.php) with second parameter **true** to make it an array, and then treat it as an array before using it in the db query. ``` $array = json_decode('{"reps_list":{"selected_subcategory_id":[0,1,2]}}',1); echo "Main Array after json string decode \n\n"; print_r($array); echo "\n\n"; echo "Access selected_subcategory_id \n\n"; print_r($array['reps_list']['selected_subcategory_id']); ``` **Output:** **Main Array** after json string decode ``` Array ( [reps_list] => Array ( [selected_subcategory_id] => Array ( [0] => 0 [1] => 1 [2] => 2 ) ) ) ``` Access **selected\_subcategory\_id** ``` Array ( [0] => 0 [1] => 1 [2] => 2 ) ``` **DEMO**: <https://eval.in/975261> Upvotes: 2 <issue_comment>username_2: Since you've updated your post with the intention to `INSERT` it into the database, here is what will work: ``` $reps_list = $_POST['reps_list']; $json = json_decode($reps_list,TRUE); $arr = $json['reps_list']['selected_subcategory_id']; $insertWorkout = $mysqli->prepare("INSERT INTO reps (column1, column2, column3) VALUES (?, ?, ?)"); $insertWorkout->bind_param("iii", $arr[0], $arr[1], $arr[2]); // iii means they are all integers $insertWorkout->execute(); ``` Upvotes: 1 <issue_comment>username_3: You can use ``` foreach($reps_list['reps_list']['selected_subcategory_id'] as $key
=> $value){ mysqli_query($conn, "INSERT INTO reps VALUES(NULL, '".$value."', 1, 1 )"); } ``` Upvotes: 0
2018/03/20
897
2,935
<issue_start>username_0: Consider this example of solving [Advent of Code 2015 1.2](https://adventofcode.com/2015/day/1). ``` fn main() { // advent of code 1.2 2015 // you are at floor 0 // if instruction is ) go one floor up, else go one floor down // what index has the character that makes you go below floor 0 let instruction = ")))(((()))))"; let mut floor = 0; for (i, c) in instruction.chars().enumerate() { if c.to_string() == ")" { floor += 1; } else { floor -= 1; } if floor < 0 { break; } } // will fail println!("floor: {}", i) } ``` How can `i` be accessed outside the loop block? Having read [Understanding scope and shadowing matches](https://stackoverflow.com/questions/33422543/understanding-scope-and-shadowing-matches) and [this chapter of the book](https://doc.rust-lang.org/book/first-edition/variable-bindings.html), I understand **why** my code fails, but I can't figure out how to deal with it and use `i` outside the block. Is my problem that I have not understood the purpose of scopes in Rust? Should I put the loop inside of a function and return `i` if I want to use it outside the loop's scope?
2018/03/20
1,509
3,514
<issue_start>username_0: Take a look at following code: [typescript playground](https://www.typescriptlang.org/play/#src=interface%20A%20%7B%0D%0A%20%20get()%3A%20number%3B%0D%0A%7D%0D%0A%0D%0Aclass%20Smth1%20%7B%0D%0A%20%20public%20x%3A%20A%20%3D%20%7B%20value%3A%202%2C%20get()%20%7B%20return%20this.value%20%7D%20%7D%0D%0A%7D%0D%0A%0D%0Aclass%20Smth2%20%7B%0D%0A%20%20public%20x%3A%20A%20%3D%20%7B%20value%3A%202%2C%20get()%20%7B%20return%20this.value%20%7D%20%7D%20as%20A%0D%0A%7D%0D%0A%0D%0Aclass%20Smth3%20%7B%0D%0A%20%20public%20x%3A%20A%0D%0A%0D%0A%20%20constructor()%20%7B%0D%0A%20%20%20%20const%20x%20%3D%20%7B%20value%3A%202%2C%20get()%20%7B%20return%20this.value%20%7D%20%7D%0D%0A%20%20%20%20this.x%20%3D%20x%0D%0A%20%20%7D%0D%0A%7D) (turn on `noImplicitThis` flag) ``` interface A { get(): number; } class Smth1 { public x: A = { value: 2, get() { return this.value } } } class Smth2 { public x: A = { value: 2, get() { return this.value } } as A } class Smth3 { public x: A constructor() { const x = { value: 2, get() { return this.value } } this.x = x } } ``` Both `Smth1` and `Smth2` have compilation errors: ``` public x: A = { value: 2, get() { return this.value } } ``` > > Object literal may only specify known properties, and 'value' does not exist in type 'A'. > > > Property 'value' does not exist on type 'A'. > > > ``` public x: A = { value: 2, get() { return this.value } } as A ``` > > Property 'value' does not exist on type 'A'. > > > Only `Smth3` has no compilation errors. That means that I must add explicit constructor for my class and split assignment into two statements: temporary variable and assign it to a class field. As for me, it seems to much code for such thing. 
**How can I assign an object literal with extra fields as an interface type without using `as any`?**<issue_comment>username_1: You need to declare a separate subtype that has that property, so that TypeScript recognizes your `this` as a type that has the property: ``` interface AWithValue extends A { value: number; } class Smth2 { public x: A = { value: 2, get() { return this.value } } as AWithValue } ``` [Demo](https://www.typescriptlang.org/play/#src=interface%20A%20%7B%0D%0A%20%20get()%3A%20number%3B%0D%0A%7D%0D%0A%0D%0Ainterface%20AWithValue%20extends%20A%20%7B%20value%3A%20number%3B%20%7D%0D%0Aclass%20Smth4%20%7B%0D%0A%20%20public%20x%3A%20A%20%3D%20%7B%20value%3A%202%2C%20get()%20%7B%20return%20this.value%20%7D%20%7D%20as%20AWithValue%0D%0A%7D%0D%0A) Upvotes: 2 <issue_comment>username_2: A way to do this without an extra interface is to use a helper function. This gets around the error that object literals must only specify known properties, but it maintains full type checking on compatibility between `A` and the object literal: ``` function objectLiteral<T>(v: T): T { return v; } class Smth1 { public x: A = objectLiteral({ value: 2, get() { return this.value } }) } ``` You still need to define an extra entity, but at least it's reusable. **Edit** A functionless way to do it would be an extra field/variable; it works pretty much the same way: `_x` will be inferred and then checked for compatibility when assigned to `x`: ``` class Smth2 { private _x = { value: 2, get() { return this.value } }; public x: A = this._x; } ``` Or without defining the function explicitly, using a self-executing function: ``` class Smth2 { public x: A = (() => ({ value: 2, get() { return this.value } }))(); } ``` Upvotes: 2
2018/03/20
1,171
4,908
<issue_start>username_0: I'm getting this to work on my production server, but on localhost `canMakePayment()` returns `null`. I've traced this through the minified Stripe code but hit a wall with function `ko` which just sends an action called `CAN_MAKE_PAYMENT` to some message queue, at which point execution becomes asynchronous and I can't track further until the request is resolved with `e.available === false` with no further information. I've verified the API is indeed available in Chrome on localhost (`window.PaymentRequest` is available). I'm also running on local `https` (though without a green check). How can I trace what is causing Stripe to report that `PaymentRequest` is unavailable? Will Chrome reject PaymentRequest calls if I don't have a green SSL check? If so, how would I test this? Chrome documentation just says if PaymentRequest is available then you can call the API. If I know where the message queue is getting processed I could debug further.<issue_comment>username_1: Bypassing Stripe, I was able to verify that Chrome is reporting "basic-card" payment method is not supported. This required setting up a PaymentRequest per Google's documentation and attempting a `request.show()` command. I'm guessing this has to do with not having a green SSL verification, I'll try fixing that. Upvotes: 0 <issue_comment>username_2: **Stripe's support team confirmed to me that a green SSL verification is required.** > > "One of the prerequisites for the payment request button is that the > page the payment request is located on will have to be served as > secure with a valid certificate. This is a requirement for both > production and development." > > > Here is an experiment. Browse to a site in Chrome where the URL says "Secure https:" in green, such as <https://stackoverflow.com>. 
Open the developer console, and paste in these commands ([from here](https://developers.google.com/web/fundamentals/payments/deep-dive-into-payment-request)) and press `Enter`: ``` const supportedPaymentMethods = [ { supportedMethods: 'basic-card', } ]; const paymentDetails = { total: { label: 'Total', amount:{ currency: 'USD', value: 0 } } }; const options = {}; const request = new PaymentRequest( supportedPaymentMethods, paymentDetails, options ); request.show(); ``` You'll then see a payment request modal pop up. But if you browse to your own local site where the address bar says in *red* "Not secure" (and "https" is crossed out), and if you try to run that same code in the console, no payment request modal will pop up (even if you've added a security exception for the domain). **So, apparently Chrome (and probably other browsers) prevent Stripe (and other tools like Stripe) from accessing that browser functionality when the connection isn't fully secure.** UPDATE from Stripe: > > While Chrome iOS does include a PaymentRequest implementation, it does not allow PaymentRequest to be used from an iframe which prevents Stripe's payment request button element from working. We're working with the Chrome team to get this resolved in a future release. > > > In order for Apple Pay to work in the iOS Simulator or on a test device the domain must be publicly accessible and registered through the Stripe dashboard (<https://dashboard.stripe.com/account/apple_pay>) or API. <https://stripe.com/docs/stripe-js/elements/payment-request-button#verifying-your-domain-with-apple-pay> We recommend using a tool like ngrok (ngrok.com) to create a public-facing domain with a valid HTTPS certificate that tunnels to your local environment. > > > Upvotes: 4 <issue_comment>username_3: You should enable SSL in Visual Studio to use paymentRequest.
[Enable SSL in Visual Studio](https://stackoverflow.com/questions/39183773/enable-ssl-in-visual-studio) Upvotes: 0 <issue_comment>username_4: I was experiencing the same issue, but `paymentRequest.canMakePayment()` was returning `null` on both development and production despite working fine previously in Chrome. The issue is that Google have disabled the `basic-card` payment method in the PaymentRequest API, so browser-saved cards no longer work and the payment request button no longer appears in our checkout. The solution was to: * [Join the Google Pay API Test Card group to get access to the test card suite](https://developers.google.com/pay/api/web/guides/resources/test-card-suite) * Add a **valid** card to Google Pay After performing these two steps the Google Pay button appeared in our checkout, and on our development server the test cards appeared in the payment methods on the pay sheet. There no longer appears to be a way of testing browser-saved cards, which is a bit of a pain when you're trying to test your integration and simulate things like failed payments. Hopefully this will be helpful to others who encounter this issue. Upvotes: 1
2018/03/20
1,888
5,401
<issue_start>username_0: I am using the example from [Here](https://github.com/aws/aws-logging-dotnet#apache-log4net) My log4net configuration looks like this: ``` MY\_Logs us-west-2 ``` my app.config: ``` xml version="1.0" encoding="utf-8"? ``` my awscredentials ``` [default] aws_access_key_id=[my_id] aws_secret_access_key=[my_access_key] ``` here is the debug info on my console app: ``` log4net: Configuration update mode [Merge]. log4net: Logger [root] Level string is [ALL]. log4net: Logger [root] level set to [name="ALL",value=-2147483648]. log4net: Loading Appender [AWS] type: [AWS.Logger.Log4net.AWSAppender,AWS.Logger .Log4net] log4net: Setting Property [LogGroup] to String value [MY_Logs] log4net: Setting Property [Region] to String value [us-west-2] log4net: Converter [message] Option [] Format [min=-1,max=2147483647,leftAlign=F alse] log4net: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=F alse] log4net: Setting Property [ConversionPattern] to String value [%-4timestamp [%th read] %-5level %logger %ndc - %message%newline] log4net: Converter [timestamp] Option [] Format [min=4,max=2147483647,leftAlign= True] log4net: Converter [literal] Option [ [] Format [min=-1,max=2147483647,leftAlign =False] log4net: Converter [thread] Option [] Format [min=-1,max=2147483647,leftAlign=Fa lse] log4net: Converter [literal] Option [] ] Format [min=-1,max=2147483647,leftAlign =False] log4net: Converter [level] Option [] Format [min=5,max=2147483647,leftAlign=True ] log4net: Converter [literal] Option [ ] Format [min=-1,max=2147483647,leftAlign= False] log4net: Converter [logger] Option [] Format [min=-1,max=2147483647,leftAlign=Fa lse] log4net: Converter [literal] Option [ ] Format [min=-1,max=2147483647,leftAlign= False] log4net: Converter [ndc] Option [] Format [min=-1,max=2147483647,leftAlign=False ] log4net: Converter [literal] Option [ - ] Format [min=-1,max=2147483647,leftAlig n=False] log4net: Converter [message] Option [] Format 
[min=-1,max=2147483647,leftAlign=F alse] log4net: Converter [newline] Option [] Format [min=-1,max=2147483647,leftAlign=F alse] log4net: Setting Property [Layout] to object [log4net.Layout.PatternLayout] log4net: Creating repository for assembly [AWSSDK.Core, Version=3.3.0.0, Culture =neutral, PublicKeyToken=<KEY>] log4net: Assembly [AWSSDK.Core, Version=3.3.0.0, Culture=neutral, PublicKeyToken =<KEY>] Loaded From [C:\Users\stal\documents\visual studio 2017\Proje cts\ConsoleApp9\ConsoleApp9\bin\Debug\AWSSDK.Core.dll] log4net: Assembly [AWSSDK.Core, Version=3.3.0.0, Culture=neutral, PublicKeyToken =<KEY>04] does not have a RepositoryAttribute specified. log4net: Assembly [AWSSDK.Core, Version=3.3.0.0, Culture=neutral, PublicKeyToken =<KEY>] using repository [log4net-default-repository] and repository type [log4net.Repository.Hierarchy.Hierarchy] log4net: repository [log4net-default-repository] already exists, using repositor y type [log4net.Repository.Hierarchy.Hierarchy] log4net: Creating repository for assembly [AWSSDK.CloudWatchLogs, Version=3.3.0. 0, Culture=neutral, PublicKeyToken=<KEY>] log4net: Assembly [AWSSDK.CloudWatchLogs, Version=3.3.0.0, Culture=neutral, Publ icKeyToken=<KEY>] Loaded From [C:\Users\stal\documents\visual studio 2017\Projects\ConsoleApp9\ConsoleApp9\bin\Debug\AWSSDK.CloudWatchLogs.dll] log4net: Assembly [AWSSDK.CloudWatchLogs, Version=3.3.0.0, Culture=neutral, Publ icKeyToken=<KEY>] does not have a RepositoryAttribute specified. log4net: Assembly [AWSSDK.CloudWatchLogs, Version=3.3.0.0, Culture=neutral, Publ icKeyToken=<KEY>] using repository [log4net-default-repository] and r epository type [log4net.Repository.Hierarchy.Hierarchy] log4net: repository [log4net-default-repository] already exists, using repositor y type [log4net.Repository.Hierarchy.Hierarchy] log4net: Created Appender [AWS] log4net: Adding appender named [AWS] to logger [root]. log4net: Hierarchy Threshold [] ``` I dont see anything out of the ordinary in the debug info. 
Looking at my AWS CloudWatch console, I do not see the group My\_Logs. I have permissions to view/write to CloudWatch. Any idea?<issue_comment>username_1: According to the [documentation](https://logging.apache.org/log4net/release/manual/configuration.html): > > In order to embed the configuration data in the .config file the section name must be identified to the .NET config file parser using a configSections element. The section must specify the log4net.Config.Log4NetConfigurationSectionHandler that will be used to parse the config section. > > > Which is missing in your config file. So add the following: ``` <configSections><section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /></configSections> ``` Upvotes: 0 <issue_comment>username_2: I ran into a similar issue, my log4net logs were not writing to CloudWatch even though it looked like the configuration was correct. I was able to find the issue (in my case, I was missing some configuration settings) by enabling the logging feature of AWS.Logger.Log4net by adding the following to the appender node ``` c:\logs\awslog.txt ``` So in your case the new configuration section would look like ``` MY\_Logs us-west-2 c:\logs\awslog.txt ``` Just make sure to check the log file quickly, in my case it gained size quite rapidly. Hopefully the error messages in the log file will point you to the issue. Upvotes: 2
2018/03/20
1,394
6,165
<issue_start>username_0: I have a class set up to return a customised ObjectMapper. As far as I can find, the correct way to have Spring Boot use this ObjectMapper is to declare it as @Primary, which it is. ```java @Configuration public class MyJacksonConfiguration { @Bean @Primary public ObjectMapper objectMapper() { return Jackson2ObjectMapperBuilder .json() .findModulesViaServiceLoader(true) .mixIn(Throwable.class, ThrowableMixin.class) .featuresToDisable( WRITE_DATES_AS_TIMESTAMPS) .serializationInclusion( Include.NON_ABSENT) .build(); } } ``` However, when I return an object from a controller method it is serialized with the default Jackson ObjectMapper configuration. If I add an explicit ObjectMapper to my controller and call writeValueAsString on it, I can see that this ObjectMapper is the customised one that I would like Spring Boot to use. ```java @RestController public class TestController { @Autowired private TestService service; @Autowired private ObjectMapper mapper; @GetMapping(value = "/test", produces = "application/json") public TestResult getResult() { final TestResult ret = service.getResult(); String test = ""; try { test = mapper.writeValueAsString(ret); // test now contains the value I'd like returned by the controller! } catch (final JsonProcessingException e) { e.printStackTrace(); } return ret; } } ``` When I run tests on my controller the test class also uses an autowired ObjectMapper. Again the ObjectMapper supplied to the test is the customised one. So Spring knows about the customised ObjectMapper to some extent, but it isn't being used by my rest controller classes. I have tried turning on Debug logging for Spring but can't see anything useful in logs. Any idea what might be happening, or where else I should be looking to track down the issue? 
**EDIT**: There appear to be multiple ways to do this, however the way I'm trying to do it appears to be a recommended method and I would like to get it to work this way - see 71.3 of <https://docs.spring.io/spring-boot/docs/1.4.7.RELEASE/reference/html/howto-spring-mvc.html#howto-customize-the-jackson-objectmapper> - am I misunderstanding something there?<issue_comment>username_1: Spring uses HttpMessageConverters to render @ResponseBody (or responses from @RestController). I think you need to override HttpMessageConverter. You can do that by extending `WebMvcConfigurerAdapter` and overriding the following: ``` @Override public void configureMessageConverters(List<HttpMessageConverter<?>> converters) { converters.add(new MappingJackson2HttpMessageConverter()); super.configureMessageConverters(converters); } ``` [Spring documentation](https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#howto-customize-the-responsebody-rendering) Upvotes: 2 <issue_comment>username_2: What you have done is create a bean that you can `@Autowired` for later usage (just as you wrote in the code). Spring Boot uses an auto-configured `Jackson2ObjectMapperBuilder` to get the default ObjectMapper for serialization. So if you want to customize the `ObjectMapper` used for serialization in your rest controller, you need to find a way to configure `Jackson2ObjectMapperBuilder`.
For more detail information, please check here: [Customize the Jackson ObjectMapper](https://docs.spring.io/spring-boot/docs/current/reference/html/howto-spring-mvc.html#howto-customize-the-jackson-objectmapper) Upvotes: -1 <issue_comment>username_3: Whilst the other answers show alternative ways of achieving the same result, the actual answer to this question is that I had defined a separate class that extended `WebMvcConfigurationSupport`. By doing that the `WebMvcAutoConfiguration` bean had been disabled and so the @Primary ObjectMapper was not picked up by Spring. (Look for `@ConditionalOnMissingBean(WebMvcConfigurationSupport.class)` in the [`WebMvcAutoConfiguration` source](https://github.com/spring-projects/spring-boot/blob/master/spring-boot-project/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/web/servlet/WebMvcAutoConfiguration.java).) Temporarily removing the class extending `WebMvcConfigurationSupport` allowed the `@Primary` `ObjectMapper` to be picked up and used as expected by Spring. 
As I couldn't remove the `WebMvcConfigurationSupport` extending class permanently, I instead added the following to it: ``` @Autowired private ObjectMapper mapper; @Override public void configureMessageConverters(final List<HttpMessageConverter<?>> converters) { converters.add(new MappingJackson2HttpMessageConverter(mapper)); addDefaultHttpMessageConverters(converters); super.configureMessageConverters(converters); } ``` Upvotes: 4 [selected_answer]<issue_comment>username_4: A correct way that won't break string serialization (and keeps the converter order): ``` public void configureMessageConverters(List<HttpMessageConverter<?>> converters) { addDefaultHttpMessageConverters(converters); HttpMessageConverter<?> jacksonConverter = converters.stream() .filter(converter -> converter.getClass().equals(MappingJackson2HttpMessageConverter.class)) .findFirst().orElseThrow(RuntimeException::new); converters.add(converters.indexOf(jacksonConverter), new MappingJackson2HttpMessageConverter(objectMapper)); } ``` Upvotes: 0
2018/03/20
1,075
4,369
<issue_start>username_0: I have the following function: ``` function returnAge(PlayerId){ var data = [ {name : 'jack', isPlayer: true, games:[{id:343, age:12}, {id:3422, age :132}]}, { id :2 , name : 'king'}, {id: 145, name: 'james'} ] let filteredData= data.filter(item=>item.isPlayer) let itemData = filteredData[0].games.find(game=>game.id===PlayerId) return itemData } console.log(returnAge(343)) ``` The function returns the information of a user with a particular Id if the user is a player. I used a filter and then used a find to get the information based on the filtered data. I am looking for a better way of doing this by using reduce/destructuring or any other JavaScript feature. Kindly appreciate any suggestion.
2018/03/20
723
2,580
<issue_start>username_0: I am currently working on a project with Python3, and was needed to implement such code, where I use an API that passes items with limit count, and need to figure out the latest items that have been added. I'm sure there must be a more pythonic way, but couldn't think of any. Kindly let me know if there's similar duplicate question cause I could not find one. ``` list1 = ["d", "e", "f", "g", "h", "i", "j"] # items requested before list2 = ["a", "b", "c", "d", "e", "f", "g"] # items requested now for index, item in enumerate(list2): if item is list1[0]: print(list2[:index]) break ``` UPDATE: I have thousands of items in those lists, and majority of list2 duplicates with items in list1. I need somewhat more efficient way to process those diffs.<issue_comment>username_1: This will solve your problem: ``` list(set(list2) - set(list1)) ``` Upvotes: 2 <issue_comment>username_2: You can use `list2.index()` to find the index of the first item of `list1` in `list2`: ``` list1 = ["d", "e", "f", "g", "h", "i", "j"] # items requested before list2 = ["a", "b", "c", "d", "e", "f", "g"] # items requested now print(list2[:list2.index(list1[0])]) ``` --- The following part of the code ``` list2.index(list1[0]) ``` will efficiently find the first index in `list2` of the first element in `list1`. This is just as efficient as your manually-implemented python loop from an algorithmic point of view, but more efficient in practice since the loop is implemented natively instead of in python. Then, you can use it to take a slice out of your `list2` just like you were already doing in your own solution. Upvotes: 3 [selected_answer]<issue_comment>username_3: To do a set difference (or intersection, or any other set operation), you don't need to convert *both* lists to sets, just one of them. The set operation actually works by iterating over the second argument and doing a set operation against the `self` argument. 
(This only works with the named methods, not the operators.) Similarly, you can write every set operation as a comprehension with only a small constant slowdown, leaving the main iterable alone, and only converting the one you're `in`-testing to a set. Normally, what you do with this information is convert the smaller one to a set, and leave the other one as a list. But in your case, you want to preserve the order of one of the two. So, leave *that* one as a list, and convert the other to a set: ``` set1 = set(list1) newlist = [elem for elem in list2 if elem not in set1] ``` Upvotes: 2
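To make the trade-off above concrete, here is a small runnable sketch (variable names are illustrative, using the question's sample lists) contrasting the plain set difference with the order-preserving one-sided conversion:

```python
# Sample data from the question.
list1 = ["d", "e", "f", "g", "h", "i", "j"]  # items requested before
list2 = ["a", "b", "c", "d", "e", "f", "g"]  # items requested now

# Plain set difference: linear-time membership checks, but the
# result order is arbitrary because sets are unordered.
unordered = list(set(list2) - set(list1))

# One-sided conversion: only list1 becomes a set; list2 stays a list,
# so the result preserves the order in which items appear in list2.
seen = set(list1)
new_items = [item for item in list2 if item not in seen]

print(sorted(unordered))  # ['a', 'b', 'c']
print(new_items)          # ['a', 'b', 'c'] -- always in list2's order
```

Both variants run in O(len(list1) + len(list2)), which matters for the thousands of mostly-overlapping items mentioned in the question's update.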
2018/03/20
782
2,591
<issue_start>username_0: I have multi-line strings which contain newline (\n) and non-breaking space (\u00A0) characters, the latter shown for readability: **Sample 1** ``` dog \u00A0cat mat ``` **Sample 2** ``` bat can \u00A0boo ``` I would like to return a Java regex match ONLY when the first instance of '\n' is followed by '\u00A0'. Thus: * Sample 1 would match. * Sample 2 wouldn't, as the first '\n' after 'bat' is followed by 'can', not '\u00A0'. I'm struggling with this; all I can get is a match for both samples with a simple \n\u00A0, as per the screenshots below. Any suggestions appreciated. I think I need to use a negative lookahead, but can't work out how. Thanks. [Match as expected](https://i.stack.imgur.com/AQRQ0.png) [Do not want a match](https://i.stack.imgur.com/1mA4C.png)
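One way to get the behaviour asked for, without a lookahead, is to anchor at the start of the string and let `[^\n]*` consume the first line, so the `\n` that follows is guaranteed to be the first one. A quick sketch using Python's `re` module; the same pattern carries over to Java's `java.util.regex.Pattern`:

```python
import re

sample1 = "dog\n\u00A0cat\nmat"   # first \n is followed by \u00A0: should match
sample2 = "bat\ncan\n\u00A0boo"   # first \n is followed by 'c': should not match

# \A anchors the match at the very start of the string, so the
# \n matched here can only be the first newline in the input.
pattern = re.compile(r"\A[^\n]*\n\u00A0")

assert pattern.search(sample1) is not None
assert pattern.search(sample2) is None
```

In Java the equivalent would be `Pattern.compile("\\A[^\\n]*\\n\\u00A0")`, checked with `find()` on the matcher.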
2018/03/20
330
1,102
<issue_start>username_0: My requirement is as follows: ``` ``` Above is the content of one file, contact.xml, where I need to get the value of the password `qqqTy<PASSWORD>ahLDjHJH6LvQ==`. How can I achieve that through an Ansible task?<issue_comment>username_1: You can use <https://docs.ansible.com/ansible/2.4/xml_module.html> There is an example with "Retrieve and display". There is a Python module that you need to install with pip, but it should do the trick. OR: ``` shell: cat /path/to/your_file.xml | grep -e username -e password | awk -F '"' '{print $4}' register: output ``` And when you want to get the value you just call {{ output.stdout }} Upvotes: 0 <issue_comment>username_2: If you wanted to try to use modules that are included, you could use the slurp module and the set\_fact module, and with jinja2 you can extract the password using a regex like such: ``` - name: Slurp file slurp: src: /your/file register: passwordfile - name: Set Password set_fact: your_password: "{{ passwordfile['content'] | b64decode | regex_findall('\bpassword\b\=\"(.+)\"') }}" ``` Upvotes: 2
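Both answers ultimately pull the attribute value out of the raw text, so for reference here is the same extraction sketched in Python. The XML line is a made-up stand-in, since the question's contact.xml content is not shown; the only assumption is a `password="..."` attribute:

```python
import re

# Hypothetical stand-in for a line of contact.xml; the real layout may differ.
line = '<connection username="admin" password="s3cret=="/>'

match = re.search(r'password="([^"]+)"', line)
assert match is not None
assert match.group(1) == "s3cret=="
```

For well-formed XML, parsing it (for example with Python's `xml.etree.ElementTree`, or the `xml` Ansible module from the first answer) and reading the attribute directly is more robust than a regex.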
2018/03/20
427
1,622
<issue_start>username_0: When executing `formBuilder.group` I am creating two values that I use only for validation; I do not want to save these two values in the database, so I would like to remove them before saving. **profile.component.ts:** `profileForm: FormGroup;` ``` constructor(){ this.profileForm = this.createProfileForm(); } createProfileForm() { return this.formBuilder.group({ id: [this.perfil.id], name: [this.perfil.name, [Validators.required, Validators.minLength(5), Validators.maxLength(45)]], email: [this.perfil.email, [Validators.required, Validators.email]], password: [''], passwordConfirm: ['', [confirmPassword]], }); } saveProfile(){ // I need to remove here password and passwordConfirm //before saving to the database this.authService.updateProfile(this.profileForm.value); } ``` I need to remove the `password` and `passwordConfirm` values from `this.profileForm.value`, since I do not want to save these values in the database.<issue_comment>username_1: ``` saveProfile(){ // copy the form value, then drop the validation-only keys let copy = { ... this.profileForm.value }; delete copy.password; delete copy.passwordConfirm; this.authService.updateProfile(copy); } ``` try this? Upvotes: 3 [selected_answer]<issue_comment>username_2: Make a new object with only what you need: ``` this.authService.updateProfile({id: this.profileForm.value.id, name: this.profileForm.value.name, email:this.profileForm.value.email }) ``` Upvotes: 1
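Both answers come down to the same pattern: build a copy that omits the validation-only keys instead of mutating the form value itself. For reference, the pattern sketched in Python with illustrative values:

```python
profile = {
    "id": 1,
    "name": "Some Name",
    "email": "user@example.com",
    "password": "secret",
    "passwordConfirm": "secret",
}

# Copy everything except the validation-only keys; the original stays intact.
payload = {k: v for k, v in profile.items()
           if k not in ("password", "passwordConfirm")}

assert payload == {"id": 1, "name": "Some Name", "email": "user@example.com"}
assert "password" in profile  # original untouched
```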
2018/03/20
811
2,527
<issue_start>username_0: I want to pass a bunch of boolean flags to a Windows batch script function. Something like this: ``` 1 SETLOCAL 2 SET _debug=1 3 SET _x64=1 4 5 call :COMPILE %_debug% %_x64% 6 :: call again where %_debug% is now 0 7 :: call again where %_x64% is now 0 8 :: etc... 9 10 :COMPILE 11 :: stuff depending on input args ``` I am using variables `_debug` and `_x64` to make the calls (line 5-7) more readable, as opposed to something like this: ``` call :COMPILE 0 1 ``` Is there a simple way to pass the equivalent of `not variable`? Like: ``` call :COMPILE ~%_debug% ~%_64% ``` or ``` call :COMPILE (%_debug% eq 1) (%_64% eq 1) ``` or do I have to declare *not*-variables like: ``` SET _debug=1 SET _not_debug=0 SET _x64=1 SET _not_x64=0 ``` It's easy enough when I just have these 2 variables, but I anticipate having more. --- *edit based on initial response*: Regarding lines 6 and 7, where I wrote: ``` 6 :: call again where %_debug% is now 0 7 :: call again where %_x64% is now 0 ``` I am not interested in actually changing the values of `_debug` and `_x64` to 0. What I want to know is if there is a way to pass "not \_var" to the function. This way I can preserve the meaning of the argument.<issue_comment>username_1: Unfortunately, there is no way to do any kind of (bit) arithmetics in an arbitrary command line, so you always have to use interim variables together with the [`set /A` command](https://ss64.com/nt/set.html#expressions "Arithmetic expressions (SET /a)"): ```cmd :: ... call :COMPILE %_debug% %_x64% set /A "_not_debug=!_debug, _not_x64=!_x64" call :COMPILE %_not_debug% %_not_x64% :: ... ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You could minimize the code it takes to set up your various variables: ``` for %%V in (debug x64) do set /a _%%V=1, _not_%%V=0 ``` It might seem silly to do the above for just two flags with four values. But it becomes more and more attractive as you add more flags. 
But I don't see why you don't take this one step further - why not have your routines expect meaningful text arguments instead of cryptic integral values. Not only will your CALLs be well documented, but the routine code will also be easier to follow. And the CALLer no longer needs to worry about translating meaningful text into an integer value. If you find yourself developing complex routines with optional arguments, then you should have a look at <https://stackoverflow.com/a/8162578/1012053> Upvotes: 3
2018/03/20
1,426
4,716
<issue_start>username_0: For C++ code, doxygen has no insurmountable problem with `\code{.markdown}`. For example ``` //=========================================================================== //! \defgroup markdown_cpp C++ markdown test //! \brief test cpp markdown //! \date 2018 March //! //! \code{.markdown} //! ______ //! | | //! x ->-| |->- y //! |______| //! \endcode //=========================================================================== int test(void) { const unsigned x=3; return x*2; } ``` and it produces in a `.tex` file ``` \begin{DoxyCode} \_\_\_\_\_\_ | | x ->-| |->- y |\_\_\_\_\_\_| \end{DoxyCode} ``` However, in VHDL code, doxygen crashes on Windows 10 ("doxygen.exe has stopped working") when I attempt this: ``` library IEEE; use IEEE.STD_LOGIC_1164.all; ----------------------------------------------------------------------------- --! \defgroup markdown_vhd VHDL markdown test --! \ingroup markdown_vhd --! --! \code{.markdown} --! ______ --! | | --! x1 ->-| | --! | |->- y --! x2 ->-| | --! |______| --! \endcode ----------------------------------------------------------------------------- entity test is port (--inpts x1 : in std_logic; x2 : in std_logic; --outputs y : out std_logic; ); end entity test; architecture test_arch of test is begin y <= x1 xor x2; end architecture test_arch; ```<issue_comment>username_1: The syntax for your doxygen comment in your example is correct. This is resulting in a crash which is a bug (tested in doxygen 1.8.11 in windows). All commands in the documentation start with a backslash (\) or an at-sign (@), see [doxygen manual](http://www.doxygen.nl/manual/commands.html). To achieve the exact same (intended) result, use the following: ``` --! @code{.markdown} --! ______ --! | | --! x1 ->-| | --! | |->- y --! x2 ->-| | --! |______| --! @endcode ``` It will not crash in doxygen (tested in 1.8.11 on windows). 
**produced html result for code:** ![produced html result for code](https://i.stack.imgur.com/TZLC9.jpg) You may also be interested in the following variant: ``` --! \verbatim --! ______ --! | | --! x1 ->-| | --! | |->- y --! x2 ->-| | --! |______| --! \endverbatim ``` The outcome is different: **produced html result for verbatim:** ![produced html result for verbatim](https://i.stack.imgur.com/a9LGs.jpg) Note 1: I do not have the latest version installed on my machine which is why I used an slightly older version. It will hopefully not affect the provided solution. Note 2: Workaround had been successfully tested for 1.8.14 by OP. Upvotes: 1 <issue_comment>username_2: All of the following work for Doxygen 1.8.14 on Windows 10: ``` --! --! @code{.markdown} --! ______ --! | | --! x1 ->-| | --! | |->- y --! x2 ->-| | --! |______| --! @endcode --! --! \verbatim --! ______ --! | | --! x1 ->-| | --! | |->- y --! x2 ->-| | --! |______| --! \endverbatim --! --! @verbatim --! ______ --! | | --! x1 ->-| | --! | |->- y --! x2 ->-| | --! |______| --! @endverbatim ``` These produce the following LaTeX code: ``` \begin{DoxyCode} \_\_\_\_\_\_ | | x1 ->-| | | |->- y x2 ->-| | |\_\_\_\_\_\_| \end{DoxyCode} \begin{DoxyVerb} ______ | | x1 ->-| | | |->- y x2 ->-| | |______|\end{DoxyVerb} \begin{DoxyVerb} ______ | | x1 ->-| | | |->- y x2 ->-| | |______|\end{DoxyVerb} ``` The --! leads are completely stripped out with @code, replaced with spaces (plus an additional space) with the \verbatim and @verbatim. The DoxyCode environment is defined as ``` \newenvironment{DoxyCode}{% \par% \scriptsize% \begin{alltt}% }{% \end{alltt}% \normalsize% } ``` where alltt is defined in the alltt package: <https://ctan.org/pkg/alltt> and the DoxyVerb is defined as ``` \newenvironment{DoxyVerb}{% \footnotesize% \verbatim% }{% \endverbatim% \normalsize% } ``` Note that the verbatim methods typeset larger ("\footnotesize") than the @code ("\scriptsize"). 
Many many thanks to @username_1 for very helpful help. Upvotes: 0
2018/03/20
1,115
3,568
<issue_start>username_0: I am working on a Struts 1 to Struts 2 migration application. I have successfully migrated the JSPs, Actions, POJOs and XML too. But when I integrated Tiles 3 into my Struts 2 application, it suddenly shows a bigger font size on the web page compared to Struts 1 + Tiles. I don't understand what exactly is happening that impacts the JSP, since I haven't changed anything w.r.t. HTML or CSS.
2018/03/20
863
2,637
<issue_start>username_0: SQLFiddle: <http://sqlfiddle.com/#!4/db1bd/49/0> I'm working on a query that returns an object's DN:`(cn=name,ou=folder,dc=hostname,dc=com)` My goal is to return this information in a "prettier" output akin to AD:`(name\folder\hostname.com)` I've accomplished this in a clunky way: ``` REGEXP_REPLACE(REGEXP_REPLACE(TEST, '.*CN=(.+?),DC=.*', '\1', 1, 1, 'i'), ',OU=', '\', 1, 0, 'i') -- grab everything between CN= and DC=, replace with \'s -- || '\' || REGEXP_REPLACE(SUBSTR(TEST, REGEXP_INSTR(TEST, ',DC=', 1, 1, 0, 'i')+4),',DC=','.', 1, 0, 'i') -- grab everything after DC=, replace with .'s -- ``` While that works I'm not thrilled with how overly complicated it is (and that it involves having to stitch two regex'd strings together). I started clean and realized I was doing too much to get what I wanted and my starting point is now here: ``` REGEXP_REPLACE(test, '(,?(cn=|ou=)(.+?),)', '\3\') ``` I *think* I have a good understanding of how this one works but if I add an additional (...) it breaks what I already have working and returns the entire string. I've read that Oracle's regex engine is not as advanced as some others, but I'm struggling to grasp the order of how things are evaluated. Example Input (can have multiple OUs/DCs): `cn=name,ou=subgroup,ou=group,dc=accounts,dc=hostname,dc=com cn=name,ou=group,dc=hostname,dc=com` Expected Output `name\subgroup\group\accounts.hostname.com name\group\hostname.com` The data coming in is dynamic and never a set number of OUs or DCs.<issue_comment>username_1: You may use ``` SELECT REPLACE( REGEXP_REPLACE( test, '(^|,)(cn|ou)=([^,]*)(,dc=)?', '\3\\'), ',dc=', '.') FROM regexTest ``` See the [SQLFiddle](http://sqlfiddle.com/#!4/217a61/7). The first `(^|,)(cn|ou)=([^,]*)(,dc=)?` regex matches `,` or start of string, then `cn` or `ou`, then `=`, then captures into Group 3 zero or more chars other than a comma, and then matches an optional `,dc=` substring (thus, removing the first instance of `,dc=`). 
The replacement is Group 3 contents and a backslash. So, the second operation is easy, just replace all `,dc=` with `.`, you do not even need a regex for this. Upvotes: 3 [selected_answer]<issue_comment>username_2: May be something like that: ``` SELECT nvl(regexp_replace( regexp_replace( nullif( regexp_replace(test, '^cn=(.+?),DC=(.+?)$', '\1 \2',1,1,'i') , test ) , ' |,(CN|OU)=', '\\', 1, 0,'i' ), ',DC=', '.', 1, 0,'i' ),test) result FROM regexTest ``` This query does not change the input if there is no `DC=`. Upvotes: -1
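For reference, the transformation the query performs (`cn`/`ou` values joined by backslashes, then the `dc` values joined by dots) can also be sketched outside SQL. A Python version checked against the question's sample inputs; the function name is illustrative:

```python
def pretty_dn(dn):
    # Split the DN into "key=value" components; LDAP attribute
    # names are case-insensitive, hence the lower() calls.
    parts = [p.split("=", 1) for p in dn.split(",")]
    names = [v for k, v in parts if k.lower() in ("cn", "ou")]
    domain = ".".join(v for k, v in parts if k.lower() == "dc")
    return "\\".join(names + [domain])

assert pretty_dn("cn=name,ou=group,dc=hostname,dc=com") == r"name\group\hostname.com"
assert (pretty_dn("cn=name,ou=subgroup,ou=group,dc=accounts,dc=hostname,dc=com")
        == r"name\subgroup\group\accounts.hostname.com")
```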
2018/03/20
615
1,795
<issue_start>username_0: I want to rename files present in several subdirectories using a bash script. My files are in these folders: ./FolderA/ABCD/ABCD\_Something.ctl ./FolderA/EFGH/EFGH\_Something.ctl ./FolderA/WXYZ/WXYZ\_Something.ctl I want to rename all of the .ctl files to the same name (name.ctl). I tried several commands using mv or rename, but they didn't work. Working from FolderA: > > find . -name '\*.ctl' -exec rename \*.ctl name.ctl '{}' \; > > > or > > for f in ./\*/\*.ctl; do mv "$f" "${f/\*.ctl/name .ctl}"; done > > > or > > for f in $(find . -type f -name '\*.ctl'); do mv $f $(echo "$f" | sed 's/\*.ctl/name.ctl/'); done > > > Can you help me using bash? Thanks
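For reference, here is a sketch of the rename in Python's `pathlib`, demonstrated on a throwaway copy of the directory layout described in the question:

```python
import tempfile
from pathlib import Path

# Build a disposable copy of the question's layout to demonstrate on.
root = Path(tempfile.mkdtemp())
for sub in ("ABCD", "EFGH", "WXYZ"):
    (root / sub).mkdir()
    (root / sub / f"{sub}_Something.ctl").touch()

# The actual rename: every */*.ctl becomes name.ctl in its own folder.
for ctl in root.glob("*/*.ctl"):
    ctl.rename(ctl.with_name("name.ctl"))

renamed = sorted(p.relative_to(root).as_posix() for p in root.glob("*/*.ctl"))
assert renamed == ["ABCD/name.ctl", "EFGH/name.ctl", "WXYZ/name.ctl"]
```

In bash itself, something like `for f in FolderA/*/*.ctl; do mv "$f" "${f%/*}/name.ctl"; done` should achieve the same, since `${f%/*}` strips the filename and keeps the directory.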
2018/03/20
575
1,877
<issue_start>username_0: I am using the W3 Total Cache plugin for WordPress. My post has one table which updates every minute/hour. But because of the caching, the server does not serve the latest version; instead it displays a cached copy to the visitor. How can I purge the cache every 30 minutes automatically so that it shows the latest version of the post? I have already tried inserting this code in the functions.php file, but it's not purging the cache every hour; instead it does so 2-3 hours later. ``` function w3_flush_cache( ) { $w3_plugin_totalcache->flush_all(); } // Schedule Cron Job Event function w3tc_cache_flush() { if ( ! wp_next_scheduled( 'w3_flush_cache' ) ) { wp_schedule_event( current_time( 'timestamp' ), 'hourly', 'w3_flush_cache' ); } } ```
2018/03/20
785
2,865
<issue_start>username_0: I have an application which I would like to deploy, compiled for java 1.8.0\_151. However, the user has only 1.8.0\_25. The user cannot launch the app because `LocalDateStringConverter` is missing. As written here (<https://docs.oracle.com/javase/8/javafx/api/javafx/util/converter/LocalDateStringConverter.html>), this class was only added in `8u40`. How can I compile (is it possible?) a jar with dependencies for the specific Java version of the user? Or maybe I misunderstood something; I'm new to Java. **EDIT** I tried specifying the precise version with the update number in my pom.xml, but it didn't help<issue_comment>username_1: There are two ways to approach this: * You have to set your environment to use the JDK that is expected by the user. In this case `1.8.0_25` - you need to develop all logic using this JDK and the classes available in it. If `LocalDateStringConverter` is your only missing dependency, you can easily find the [source code](http://grepcode.com/file/repo1.maven.org/maven2/org.datanucleus/datanucleus-core/3.2.6/org/datanucleus/store/types/converters/LocalDateStringConverter.java) and duplicate the logic inside your project as a utility class and use that instead. * You can ask/help/advise your user to upgrade their environment to a more recent JDK version (for security reasons and such). As you are aware, most software has minimum requirements to run, and it's expected from the end user to meet those requirements in order to run the software. As already mentioned, you can use the [Maven Enforcer Plugin](https://maven.apache.org/enforcer/enforcer-rules/requireJavaVersion.html) to enforce a specific Java version, but this will not make the functionality from `1.8.0_151` available in `1.8.0_25`. Upvotes: 1 <issue_comment>username_2: Compiling your code with 1.8.0\_25 and hoping that the newer versions are backward compatible is probably the easiest solution. Assuming you have automated tests, this will catch problems like a missing class. 
The other option would be to build an executable bundle containing both your application and the entire JRE 1.8.0\_40 or newer. This is going to result in your software bundle growing by dozens of MBs so I would not recommend it. However one way to do it would be to use [Launch4j](http://launch4j.sourceforge.net/) as advised [here](https://stackoverflow.com/questions/13996547/how-do-i-bundle-a-jre-into-an-exe-for-a-java-application-launch4j-says-runtime). You can try implementing your own `LocalDateStringConverter` but how many other classes are you missing? What if there are other subtleties in behaviour between versions? Based on [java.com](https://java.com/en/download/faq/release_dates.xml) 1.8.0\_25 was released on October 14, 2014 while 1.8.0\_151 on October 17, 2017. That's 3 years of Java development that your user is missing. Upvotes: 0
2018/03/20
1,037
3,428
<issue_start>username_0: I tried making a simple "investing game". But for some reason the variable cash still says 1000 after the "investment". I also want to make this game continuous. Like the player can keep playing it and gaining/losing cash. The program is below! Thanks! ``` import sys import random print "INVEST" cash = 1000 highlow = ["h", "l"] percentrand = random.randint(1,99) percentup = percentrand/100 + 1 percentdown = percentrand/100 - 1 randomhighlow = random.choice(highlow) print "You have 1000$ on you now." investquit = raw_input("Invest or quit?") if investquit.lower() == "quit": quit() elif investquit.lower() == "invest": if randomhighlow == "h": cash == cash*percentup print str(cash) + ",up," + str(percentrand) + "%" if randomhighlow == "l": cash == cash*percentdown print str(cash) + ",down," + str(percentrand) + "%" ```<issue_comment>username_1: You don't have a loop to run the program multiple times. Furthermore, in python 2.7 dividing two ints will produce another int, not a float. That is where your main issue is because that is causing percent up or down to always be 1. So you should be doing this: ``` percentrand = float(random.randint(1,99)) percentup = percentrand/100.0 + 1 percentdown = percentrand/100.0 - 1 randomhighlow = random.choice(highlow) ``` Upvotes: 2 <issue_comment>username_2: Double equals `==` is a comparison operator while a single equals `=` is assignment. In your case to have the cash value update you want ``` cash = cash * percentup ``` (and percentdown accordingly). To have the game play infinitely, or until a certain condition (i.e. cash > 0) you can surround the whole thing in a while loop, such as ``` while cash > 0: percentrand = float(random.randint(1,99)) [.. rest of code ...] ``` **edit**: as Ryan rightly mentions, you'd want `percentrand = float(random.randint(1,99))` to make sure your division result is not an integer. Upvotes: 0 <issue_comment>username_3: You have several problems. 
Other answers and comments cover most of them, but I'll combine them into one answer. First, you're using integer division when you should be using floating point division. This would work in Python 3.x, but since you've tagged it 2.7 it's different: ``` percentup = percentrand/100.0 + 1 ``` Same with the down, except you've subtracted 1 instead of subtracting *from* 1: ``` percentdown = 1 - percentrand/100.0 ``` Then you're using the wrong operator to assign `cash`: ``` cash = cash*percentup ``` And you have incorrect indentation in the code as you've posted it. Finally, you need a loop to keep playing: ``` while True: ``` This seems to work: ``` import sys import random print "INVEST" cash = 1000 highlow = ["h", "l"] while True: percentrand = random.randint(1,99) percentup = percentrand/100.0 + 1 percentdown = 1 - percentrand/100.0 randomhighlow = random.choice(highlow) print "You have $" + str(cash) + " on you now." investquit = raw_input("Invest or quit?") if investquit.lower() == "quit": break elif investquit.lower() == "invest": if randomhighlow == "h": cash = cash*percentup print str(cash) + ",up," + str(percentrand) + "%" if randomhighlow == "l": cash = cash*percentdown print str(cash) + ",down," + str(percentrand) + "%" print 'Thanks for playing!' ``` Upvotes: 1
2018/03/20
915
3,111
<issue_start>username_0: I have installed node-sass and have set it up to recompile every time that the .scss source is changed. Separately I use nodemon to look for changes in my files and restart the server. My package.json scripts look like this: ``` "watch": "nodemon ./bin/www", "watch-css": "node-sass -w scss -o public/css" ``` This works fine; however, it requires me to keep two terminals open at all times. Is there any way to tell nodemon to run/restart the server, and in the special case where an .scss file is changed, to recompile AND restart?
2018/03/20
1,917
5,930
<issue_start>username_0: As known, at the moment PostgreSQL has no method to compare two json values. The comparison like `json = json` doesn't work. But what about casting `json` to `text` before? Then ``` select ('{"x":"a", "y":"b"}')::json::text = ('{"x":"a", "y":"b"}')::json::text ``` returns `true` while ``` select ('{"x":"a", "y":"b"}')::json::text = ('{"x":"a", "y":"d"}')::json::text ``` returns `false` I tried several variants with more complex objects and it works as expected. Are there any gotchas in this solution? **UPDATE:** The compatibility with v9.3 is needed<issue_comment>username_1: Yes there are multiple problem with your approach (i.e. converting to text). Consider the following example ``` select ('{"x":"a", "y":"b"}')::json::text = ('{"y":"b", "x":"a"}')::json::text; ``` This is like your first example example, except that I flipped the order of the `x` and `y` keys for the second object, and now it returns false, even thought the objects are equal. Another issue is that `json` preserves white space, so ``` select ('{"x":"a", "y":"b"}')::json::text = ('{ "x":"a", "y":"b"}')::json::text; ``` returns false just because I added a space before the `x` in the second object. A solution that works with v9.3 is to use the `json_each_text` function to expand the two JSON objects into tables, and then compare the two tables, e.g. like so: ``` SELECT NOT exists( SELECT FROM json_each_text(('{"x":"a", "y":"b"}')::json) t1 FULL OUTER JOIN json_each_text(('{"y":"b", "x":"a"}')::json) t2 USING (key) WHERE t1.value<>t2.value OR t1.key IS NULL OR t2.key IS NULL ) ``` Note that this only works if the two JSON values are objects where for each key, the values are strings. The key is in the query inside the `exists`: In that query we match all keys from the first JSON objects with the corresponding keys in the second JSON object. 
Then we keep only the rows that correspond to one of the following two cases: * a key exists in both JSON objects but the corresponding values are different * a key exists only in one of the two JSON objects and not the other These are the only cases that "witness" the inequality of the two objects, hence we wrap everything with a `NOT exists(...)`, i.e. the objects are equal if we didn't find any witnesses of inequality. If you need to support other types of JSON values (e.g. arrays, nested objects, etc), you can write a `plpgsql` function based on the above idea. Upvotes: 4 [selected_answer]<issue_comment>username_2: You can also use the `@>` operator. Let's say you have A and B, both JSONB objects, so `A = B` if: ``` A @> B AND A <@ B ``` Read more here: <https://www.postgresql.org/docs/current/functions-json.html> Upvotes: 4 <issue_comment>username_3: Most notably `A @> B AND B @> A` will signify `TRUE` if they are both equal JSONB objects. However, be careful when assuming that it works for all kinds of JSONB values, as demonstrated with the following query: ``` select old, new, NOT(old @> new AND new @> old) as changed from ( values ( '{"a":"1", "b":"2", "c": {"d": 3}}'::jsonb, '{"b":"2", "a":"1", "c": {"d": 3, "e": 4}}'::jsonb ), ( '{"a":"1", "b":"2", "c": {"d": 3, "e": 4}}'::jsonb, '{"b":"2", "a":"1", "c": {"d": 3}}'::jsonb ), ( '[1, 2, 3]'::jsonb, '[3, 2, 1]'::jsonb ), ( '{"a": 1, "b": 2}'::jsonb, '{"b":2, "a":1}'::jsonb ), ( '{"a":[1, 2, 3]}'::jsonb, '{"b":[3, 2, 1]}'::jsonb ) ) as t (old, new) ``` Problems with this approach are that JSONB arrays are not compared correctly, as in JSON `[1, 2, 3] != [3, 2, 1]` but Postgres returns `TRUE` nevertheless. A correct solution will recursively iterate through the contents of the json and comparing arrays and objects differently. I have quickly built a set of functions that accomplishes just that. Use them like `SELECT jsonb_eql('[1, 2, 3]'::jsonb, '[3, 2, 1]'::jsonb)` (the result is `FALSE`). 
``` CREATE OR REPLACE FUNCTION jsonb_eql (a JSONB, b JSONB) RETURNS BOOLEAN AS $$ DECLARE BEGIN IF (jsonb_typeof(a) != jsonb_typeof(b)) THEN RETURN FALSE; ELSE IF (jsonb_typeof(a) = 'object') THEN RETURN jsonb_object_eql(a, b); ELSIF (jsonb_typeof(a) = 'array') THEN RETURN jsonb_array_eql(a, b); ELSIF (COALESCE(jsonb_typeof(a), 'null') = 'null') THEN RETURN COALESCE(a, 'null'::jsonb) = 'null'::jsonb AND COALESCE(b, 'null'::jsonb) = 'null'::jsonb; ELSE RETURN coalesce(a = b, FALSE); END IF; END IF; END; $$ LANGUAGE plpgsql; ``` ``` CREATE OR REPLACE FUNCTION jsonb_object_eql (a JSONB, b JSONB) RETURNS BOOLEAN AS $$ DECLARE _key_a text; _val_a jsonb; _key_b text; _val_b jsonb; BEGIN IF (jsonb_typeof(a) != jsonb_typeof(b)) THEN RETURN FALSE; ELSIF (jsonb_typeof(a) != 'object') THEN RETURN jsonb_eql(a, b); ELSE FOR _key_a, _val_a, _key_b, _val_b IN SELECT t1.key, t1.value, t2.key, t2.value FROM jsonb_each(a) t1 LEFT OUTER JOIN ( SELECT * FROM jsonb_each(b) ) t2 ON (t1.key = t2.key) LOOP IF (_key_a != _key_b) THEN RETURN FALSE; ELSE RETURN jsonb_eql(_val_a, _val_b); END IF; END LOOP; RETURN a = b; END IF; END; $$ LANGUAGE plpgsql; ``` ``` CREATE OR REPLACE FUNCTION jsonb_array_eql (a JSONB, b JSONB) RETURNS BOOLEAN AS $$ DECLARE _val_a jsonb; _val_b jsonb; BEGIN IF (jsonb_typeof(a) != jsonb_typeof(b)) THEN RETURN FALSE; ELSIF (jsonb_typeof(a) != 'array') THEN RETURN jsonb_eql(a, b); ELSE FOR _val_a, _val_b IN SELECT jsonb_array_elements(a), jsonb_array_elements(b) LOOP IF (NOT(jsonb_eql(_val_a, _val_b))) THEN RETURN FALSE; END IF; END LOOP; RETURN TRUE; END IF; END; $$ LANGUAGE plpgsql; ``` Upvotes: 2
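The key-order and whitespace pitfalls username_1 describes are properties of JSON's textual form, not of the values themselves, while the array-ordering caveat from username_3 is the opposite case. A quick Python sketch (purely illustrative, outside the database) shows both distinctions:

```python
import json

a = '{"x":"a", "y":"b"}'
b = '{"y":"b", "x":"a"}'  # same object, different key order

text_equal = (a == b)                            # False: the raw strings differ
parsed_equal = (json.loads(a) == json.loads(b))  # True: dict comparison ignores key order

# Arrays, unlike objects, are order-sensitive, which is exactly the case
# where a purely containment-based (@> / <@) check in Postgres goes wrong.
arrays_equal = (json.loads('[1, 2, 3]') == json.loads('[3, 2, 1]'))  # False
print(text_equal, parsed_equal, arrays_equal)
```

This is why the recursive `jsonb_eql` above dispatches on `jsonb_typeof`: objects and arrays need different comparison rules.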
2018/03/20
<issue_start>username_0: I've got some problems with a Python script. I'm not that good at this language, as I'm doing something for my friend. The code is working, but the output is straight up weird: random things instead of binary numbers. Here's the code:

```
def decToBin(n):
    wynik = ""
    while n > 0:
        wynik = str(n % 2) + wynik
        n = n / 2
    return wynik

print("zamiana liczb z systemu dziesietnego na binarny")

with open('program.txt', 'r') as plik:  # open the file for reading
    for line in plik:
        x = int(line)
        with open('wyniki.txt', 'w') as plik1:
            plik1.write(decToBin(x))
        plik1.close()
plik.close()
```

And here's the link to the script: [repl.it](https://repl.it/repls/SerenePastelLegacysystem)

Edit: Okay, figured it out. Here's the code now:

```
def decToBin(n):
    wynik = ""
    while n > 0:
        wynik = str(n % 2) + wynik
        n = int(n/2)
    return wynik

plik1 = open('wyniki.txt', 'w')

print("zamiana liczb z systemu dziesietnego na binarny")

with open('program.txt', 'r') as plik:  # open the file for reading
    for line in plik:
        x = int(line)
        plik1.write(decToBin(x))
        plik1.write("\n")

plik1.close()
```

<issue_comment>username_1: Here's a way to do it for a sample of the text. Python will not convert chars to ints in the way that Java/C may. Instead, you would get the ordinal value, and then convert this to a binary number:

```
def to_bin(c):
    bin_rep = bin(ord(c))
    print(bin_rep)  # for testing
    return bin_rep  # can be written to a file here or not...

text = "Hello there"
for each_char in text:
    to_bin(each_char)
```

I think this approach is worth trying with a sample of your input file, and then refactor to do it as a file r/w.
To do this with the file, try this:

```
write_str = ""
with open('program.txt', 'r') as plik:  # open the file for reading
    for line in plik:
        for each_char in line:
            write_str += str(to_bin(each_char))

with open('wyniki.txt', 'w') as plik1:
    plik1.write(write_str)
plik1.close()
plik.close()
```

If you want the order of the bytes reversed, the entire file could be reversed, or change `write_str += str(to_bin(each_char))` to `write_str = str(to_bin(each_char)) + write_str`. Upvotes: 0 <issue_comment>username_2: First things first: there already is something that does precisely what you're trying to do, so it would be wise and effective to just use it:

```
for char in text:
    bin(int(char))
```

The problem in your code, I guess, is that you're using Python 3.x, where standard division between integers returns a float (as of [PEP 238](https://www.python.org/dev/peps/pep-0238/)). Here is a suggestion on how you should change your function for it to work. You can check it against the built-in *bin* function:

```
def decToBin(n):
    wynik = ""
    while n > 0:
        wynik = str(n % 2) + wynik
        n = n // 2
    return wynik
```

Upvotes: 1
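To make username_2's point concrete, here is a standalone version of the asker's function with the `//` fix applied (the function name and zero-handling are mine), which can be checked against the built-in `bin`/`format`:

```python
def dec_to_bin(n):
    """Decimal to binary string, using // so it also works on Python 3."""
    if n == 0:
        return "0"
    wynik = ""
    while n > 0:
        wynik = str(n % 2) + wynik
        n = n // 2  # integer division; plain / yields a float on Python 3
    return wynik

print(dec_to_bin(10))  # 1010
```

With `/` instead of `//`, `n` becomes a float after one pass and `n % 2` starts producing fractional values, which is exactly the "random things" the asker saw in the output file.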
2018/03/20
<issue_start>username_0: I have a template of handlebars in which there are multiple divs which I need to make clickable. However, in my div the text is dynamic; how will I be able to make the whole div clickable? Here is the code

```
{{#each event}} {{#is this.event ../eventName }} ![](/Small/{{this.imageId}}.JPG) [*favorite*](/{{this.imageId}}/love) {{this.love}} [*sentiment_very_satisfied*](/{{this.imageId}}/laugh) {{this.laugh}} [*trending_down*](/{{this.imageId}}/sad) {{this.sad}} {{!-- {{this.imageId}} --}} button {{/is}} {{/each}}
```

I want to make my div `action-time` clickable with the link in the `a` tag. I tried to use the popular solution [like this](https://stackoverflow.com/a/3494108/6787187) but no use: it works for the last `action-time`, leaving the first two not working. How can I solve this? CSS code for `action-time` and `card` is:

```
.action-time{
  display: inline-block;
  border: 1px solid rgba(202, 202, 202, 0.733);
  width: 32%;
  padding-top:2px;
  padding-left: 4px;
  border-radius:8px;
}
```

and for card:

```
.card {
  margin: 10px;
  padding: 5px;
  border-radius: 8px;
  box-shadow: 0 19px 38px rgba(0,0,0,0.30), 0 15px 12px rgba(0,0,0,0.22);
}
.card-action {
  padding: 5px 0px !important;
  /* padding-right: 5px !important; */
}
```

Here is what my webpage looks like; I have multiple of those cards [![this is what my card look like now](https://i.stack.imgur.com/L1VY3.png)](https://i.stack.imgur.com/L1VY3.png) I want to make the whole area in the border of the heart clickable, rather than just the heart image.<issue_comment>username_1: You can write some javascript to handle the operation

```
$(".action-time").click(function() {
  window.location = $(this).find("a").attr("href");
  return false;
});
```

This looks for a link inside the div with class "action-time" and redirects to that link's value when anywhere in the div is clicked. Upvotes: 0 <issue_comment>username_2: You can update your HTML so that the `span` is inside your `a`.
```
[*sentiment_very_satisfied* {{this.laugh}}](/{{this.imageId}}/laugh)
```

Displaying your `a` as a block element will have it fill up the entire `div`. Then you can float your `span` to the right and style it accordingly until it's exactly where you want it.

```
.action-time a {
  display: block;
}
.action-time span {
  float: right;
}
```

Upvotes: 3 [selected_answer]
2018/03/20
<issue_start>username_0: So I have a view -> UITextField & UILabel are the two children, I have multiple views that contain the children. I am using the delegate function below and I assign each of the UITextfields as delegates "textFieldShouldReturn(\_ textField: UITextField) -> Bool". However, they do not seem to change the textField Focus when I press return. When use this focus technique with UITextFields without having them nested in the view. It allows me to change focus without issue. Why does nesting a UITextField, in a view cause the inability to have the next UITextField become the first responder ? I have read a few things about the first Responder and how it works, but it doesn't clearly explain how to work around this issue. ``` class ScrollingViewWithFields:UIViewController, UITextFieldDelegate { let scrollView = UIScrollView() let contentView = UIView() var textFields:[UITextField] = [] var labeledTextField:[LabeledTextField] = [] override func viewDidLoad() { contentView.backgroundColor = UIColor.white contentView.translatesAutoresizingMaskIntoConstraints = false view.addSubview(scrollView) scrollView.addSubview(contentView) scrollView.translatesAutoresizingMaskIntoConstraints = false scrollView.backgroundColor = UIColor.white let top = view.safeAreaLayoutGuide.topAnchor let bottom = view.safeAreaLayoutGuide.bottomAnchor NSLayoutConstraint.activate([ scrollView.topAnchor.constraint(equalTo: top), scrollView.bottomAnchor.constraint(equalTo: bottom), scrollView.leftAnchor.constraint(equalTo: view.leftAnchor), scrollView.rightAnchor.constraint(equalTo: view.rightAnchor) ]) let ltf = LabeledTextField() ltf.translatesAutoresizingMaskIntoConstraints = false contentView.addSubview(ltf) ltf.populate(title: "Hello", font: UIFont.systemFont(ofSize: 14.0)) ltf.textField.delegate = self ltf.textField.tag = 0 let ltf2 = LabeledTextField() ltf2.translatesAutoresizingMaskIntoConstraints = false contentView.addSubview(ltf2) ltf2.populate(title: "What", font: 
UIFont.systemFont(ofSize: 14.0)) ltf2.textField.tag = 1 ltf2.textField.delegate = self self.textFields.append(ltf2.textField) self.textFields.append(ltf.textField) NSLayoutConstraint.activate([ ltf.topAnchor.constraint(equalTo: contentView.topAnchor, constant: 8.0), ltf.leadingAnchor.constraint(equalTo: contentView.leadingAnchor, constant: 8.0), ltf.trailingAnchor.constraint(equalTo: contentView.trailingAnchor, constant: 8.0), ltf.heightAnchor.constraint(equalToConstant: 60) ]) NSLayoutConstraint.activate([ ltf2.topAnchor.constraint(equalTo: ltf.bottomAnchor, constant: 8.0), ltf2.leadingAnchor.constraint(equalTo: contentView.leadingAnchor, constant: 8.0), ltf2.trailingAnchor.constraint(equalTo: contentView.trailingAnchor, constant: 8.0), ltf2.heightAnchor.constraint(equalToConstant: 60) ]) NSLayoutConstraint.activate([ contentView.topAnchor.constraint(equalTo: scrollView.topAnchor), contentView.widthAnchor.constraint(equalTo: scrollView.widthAnchor), contentView.bottomAnchor.constraint(equalTo: scrollView.bottomAnchor), contentView.heightAnchor.constraint(equalToConstant:4000) ]) } func textFieldShouldReturn(_ textField: UITextField) -> Bool { let tag = textField.tag let next = tag + 1 if next < self.textFields.count { let textField = self.textFields[next] textField.becomeFirstResponder() self.scrollView.contentOffset = CGPoint(x: 0.0, y: textField.frame.origin.y - 8.0) } else { textField.resignFirstResponder() } return true } ``` }<issue_comment>username_1: The issue is the way you're setting the tags on the textfield and putting them in the array. ``` ltf.textField.tag = 0 ltf2.textField.tag = 1 self.textFields.append(ltf2.textField) self.textFields.append(ltf.textField) ``` The issue that the tags don't match the order in the array, since the array will end up being `[ltf2.textField, ltf.textField]`. I would totally skip using tags and just use the order in the array. 
```
func textFieldShouldReturn(_ textField: UITextField) -> Bool {
    if let index = textFields.index(of: textField) {
        let nextIndex = index + 1
        let lastIndex = textFields.count - 1
        if nextIndex <= lastIndex {
            textFields[nextIndex].becomeFirstResponder()
        }
    }
    return true
}
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: You can avoid keeping an array of the text fields altogether and look the next field up by its tag instead; try something like this:

```
func textFieldShouldReturn(_ textField: UITextField) -> Bool {
    if let next = self.view.viewWithTag(textField.tag + 1) as? UITextField {
        next.becomeFirstResponder()
    } else {
        textField.resignFirstResponder()
    }
    return true
}
```

Upvotes: 0
2018/03/20
<issue_start>username_0: is there an easy way to generate a Vector Drawable that is a circle with the icon inside from the existing vector drawable? Example: [![existing vector drawable](https://i.stack.imgur.com/X73Vn.png)](https://i.stack.imgur.com/X73Vn.png) [![generated vector drawable, circle with empty icon inside](https://i.stack.imgur.com/1774u.png)](https://i.stack.imgur.com/1774u.png)<issue_comment>username_1: I would suggest something like this: ``` ``` The resources with ids ic_brightness_1_black_24dp and ic_call_black_24dp are imported vector drawables. ic_brightness_1_black_24dp: ``` ``` and ic_call_black_24dp: ``` ``` Upvotes: 5 <issue_comment>username_2: Actually it's quite simple, you just need to include both paths in a single vector, so with your paths it will look like the following: ``` ``` The result will obviously depend on the sizes of the paths in relation to each other, and since scaling them without a graphical tool is a pain, [username_1's solution](https://stackoverflow.com/a/49392438/2911458) with a `layer-list` is easier to implement. Upvotes: 3 <issue_comment>username_3: Use this way, ![enter image description here](https://i.stack.imgur.com/yhUHN.png) I tried it on my own and it's working fine. It looks like this. ![enter image description here](https://i.stack.imgur.com/EUctV.png) For button_round: ![enter image description here](https://i.stack.imgur.com/TCWiw.png) Upvotes: -1 <issue_comment>username_4: Since nobody has mentioned how to do this using vector drawing, as the question says, here is the way to do it.

* `M72,72m` -> the circle's center coordinates
* `50` -> the circle's radius
* `100` -> the circle's diameter
* `strokeWidth` -> the ring's thickness

* To make it into a disc instead of a ring, change the `fillColor`
* To make your circle half the size, change all occurrences of 50 to 25 and all occurrences of 100 to 50. Change accordingly for other sizes.
* To move the circle around inside the viewport change the circle's coordinates (the `72` numbers) These numbers obviously are related to the viewport size. 72 is the center for 144 which is the defined viewport size in this case. To center it in a 200 viewport size you would need to use 100 [![enter image description here](https://i.stack.imgur.com/nKNjd.png)](https://i.stack.imgur.com/nKNjd.png) [![enter image description here](https://i.stack.imgur.com/Edn75.png)](https://i.stack.imgur.com/Edn75.png) Upvotes: 3
2018/03/20
<issue_start>username_0: I have a simple markup with just a body with nothing in it. When I style the body like this:

```
body {
  width: 200px;
  height: 200px;
  background-color: antiquewhite;
  margin: auto;
}
```

Instead of just filling the body, the whole screen is filled up with the background-color. It doesn't make any sense to me; is this some crazy CSS weirdness?<issue_comment>username_1: Just add this snippet to your code in the css file

```css
html{
  background-color:white;
}
```

Upvotes: -1 <issue_comment>username_2: Create a div in your body and style that.

```css
#body{
  margin:auto;
  width:250px;
  height:250px;
  background-color:antiquewhite;
}
```

```html
```

Upvotes: 0 <issue_comment>username_3: The `body` element contains all the contents of an HTML document, such as: text, hyperlinks, images, tables, lists, etc. If you want to change the color of the body, you will change it for the whole screen; but if you want to change the background color of only a piece of your screen, you can divide the body by using div elements, sections, etc. This is an example of a structure:

```css
header {
  height:50px;
  background-color:red;
}
section {
  height:500px;
  background-color:blue;
}
footer {
  height:50px;
  background-color:green;
}
```

```html
```

Upvotes: 0 <issue_comment>username_4: By calling the `body` tag in the css file you are including the whole interface of your web browser. So, it is obvious that the whole screen will be filled with the selected color. My suggestion would be that it would be better if you divide the body into header and footer, or use a div with a class or ID to adjust the portion you want to color. Upvotes: 0 <issue_comment>username_5: Refer to <https://www.w3.org/TR/CSS2/colors.html#background> Basically, if the background of your HTML is not specified, it is "transparent", and it will use the background-color of the BODY if present.
I believe it will be easier to set a bgcolor for the HTML: ``` html{background-color: white;} body { width: 200px; height: 200px; background-color: black; margin: auto; } ``` Upvotes: 2
2018/03/20
<issue_start>username_0: I am using a white background for a Carousel using Bootstrap 4.0 and would like to change the color of the controls. It seems that bootstrap now uses SVG for their carousel icons. This means altering the attributes directly does not work. I am currently using Font Awesome for other elements on the site as well, so if there is a way to use fa-chevrons and format those instead, and it will still behave the same regarding resizing and formatting, that could be an effective solution as well. Here is my current code for the control elements: ``` [Previous](#carouselExampleIndicators) [Next](#carouselExampleIndicators) ``` I found a similar question [here](https://stackoverflow.com/questions/47122852/change-color-of-svg-background-image-bootstrap-4-carousel) but was not able to make sense of the answer provided there. I also found [this](https://github.com/twbs/bootstrap/issues/21985) page on GitHub but was not able to make any of the answers there work for me either.<issue_comment>username_1: There's no need for any unnecessary css hacks. If you want to modify any Bootstrap css (or the carousel control colors in particular), you can easily do that. Here are the rules that control the color of the carousel controls: ``` .carousel-control-prev-icon { background-image: url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' viewBox='0 0 8 8'%3E%3Cpath d='M5.25 0l-4 4 4 4 1.5-1.5-2.5-2.5 2.5-2.5-1.5-1.5z'/%3E%3C/svg%3E"); } .carousel-control-next-icon { background-image: url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='%23fff' viewBox='0 0 8 8'%3E%3Cpath d='M2.75 0l-1.5 1.5 2.5 2.5-2.5 2.5 1.5 1.5 4-4-4-4z'/%3E%3C/svg%3E"); } ``` Replace the `fff` in the `fill='%23fff'` parts with the hex code of the desired color. 
Here's a working code snippet where `fill='%23fff'` has been replaced with `fill='%23f00'` for red instead of white: ```html .carousel-control-prev-icon { background-image: url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='%23f00' viewBox='0 0 8 8'%3E%3Cpath d='M5.25 0l-4 4 4 4 1.5-1.5-2.5-2.5 2.5-2.5-1.5-1.5z'/%3E%3C/svg%3E"); } .carousel-control-next-icon { background-image: url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='%23f00' viewBox='0 0 8 8'%3E%3Cpath d='M2.75 0l-1.5 1.5 2.5 2.5-2.5 2.5 1.5 1.5 4-4-4-4z'/%3E%3C/svg%3E"); } ![First slide](https://placeimg.com/800/400/animals) ![Second slide](https://placeimg.com/800/400/arch) ![Third slide](https://placeimg.com/800/400/nature) [Previous](#carouselExampleControls) [Next](#carouselExampleControls) ``` Upvotes: 7 [selected_answer]<issue_comment>username_2: Just insert your own icon inside the `carousel-control-next` & `carousel-control-prev` class. For example, I want to change the prev & next icon with **Font-Awesome** icons. I can do this: ```html [Previous](#carouselExampleIndicators) [Next](#carouselExampleIndicators) ``` Hope it solve your problem Upvotes: 3
2018/03/20
308
1,022
<issue_start>username_0: Is it possible to pass command line arguments to a Spring boot app on Azure via the web.config file? Our app is up and running but we need to set the: ``` --spring.profiles.active=local ``` at startup. ``` xml version="1.0" encoding="UTF-8"? ```<issue_comment>username_1: I got a response from Microsoft and they had two ways: 1. Add "environmentVariables" block to the xml above: Add "environmentVariables" block to the xml above: ``` xml version="1.0" encoding="UTF-8"? ``` 2. Add this in the App settings. [![enter image description here](https://i.stack.imgur.com/GFVcA.png) 3, Actually, neither of those worked for us. We had to do this: ``` xml version="1.0" encoding="UTF-8"? ``` AND #2 above. Upvotes: 3 <issue_comment>username_2: Configuration-> Application Settings, I've simply added: [![enter image description here](https://i.stack.imgur.com/EGppp.png)](https://i.stack.imgur.com/EGppp.png) After that, Springboot could load: application-prod.properties Upvotes: 2
2018/03/20
352
1,508
<issue_start>username_0: I've got this basic setup, with `interrupt` having been registered from outside as edge-triggered callback for a GPIO pin: ``` public class Foo { private static final Object notifier = new Object(); public static GpioCallback interrupt = pin -> { synchronized (notifier) { notifier.notifyAll(); } return true; }; public void waitForInterrupt() { try { synchronized (notifier) { notifier.wait(5000); } Log.d("FOO", "Done."); } catch (InterruptedException e) { e.printStackTrace(); } } } ``` The timeout of `wait()` is always exhausted, even if the interrupt occurs. Only then is the callback being executed. Is there a way to execute the callback as soon as it occurs, and if so, how?<issue_comment>username_1: You probably don't want to be using notify/wait at all, especially on the main thread. Android uses an event loop to do things like post results to callbacks. If you wait on the main thread, you will be blocking the event loop, ensuring that your callback never gets called (and your application will be unresponsive in general). Upvotes: 1 <issue_comment>username_2: Solved by moving the call to `waitForInterrupt` to its own thread. Before, it was called by a different callback function, now that one just starts the thread. I'm guessing that waiting for callbacks inside a callback is asking for trouble, maybe GPIO callbacks can only be executed in series..? Upvotes: 1 [selected_answer]
2018/03/20
438
1,334
<issue_start>username_0: I am trying to get the index of the key in the map. I use distance for this. Why is the result always one off? I was expecting "ale" index to be 2 but the answer is 1. ``` #include #include int main(){ std::map my\_map; my\_map.insert(std::make\_pair("apple", 0)); my\_map.insert(std::make\_pair("a", 0)); my\_map.insert(std::make\_pair("ale", 0)); my\_map.insert(std::make\_pair("aple", 0)); my\_map.insert(std::make\_pair("aplle", 0)); std::cout << "map size = " << my\_map.size() << std::endl; int index = distance(my\_map.begin(), my\_map.find("ale")); std::cout << "index = " << index << std::endl; index = distance(my\_map.begin(), my\_map.find("a")); std::cout << "index = " << index << std::endl; } ```<issue_comment>username_1: `std::map` (an *associative* container) does not maintain elements in 'insert order` like `std::vector` (a *sequence* container). It maintains them in whatever sorted order makes map lookup efficient. Upvotes: 1 <issue_comment>username_2: Print the keys of the `map` in the order in which they are stored in the map. Then, the return value of `std::distance` will make sense. ``` for ( auto& item : my_map ) { std::cout << item.first << " "; } std::cout << std::endl; ``` Output: ```none a ale aple aplle apple ``` Upvotes: 3 [selected_answer]
2018/03/20
359
1,227
<issue_start>username_0: I have a dropdown list of car brands and I want to pass the value of the selected car brand to method getCarModelsByBrand(brand). Any help would be much appreciated. ``` Pasirinkite {{brand}} ```<issue_comment>username_1: Use [ngModel](https://angular.io/guide/template-syntax#inside-ngmodel) with 2 way binding and bind to a property/field on the component along with [ngModelChange](https://angular.io/guide/template-syntax#inside-ngmodel) to trigger the selection has changed. ``` Pasirinkite {{brand}} ``` --- *I removed code that was not needed in the example to illustrate the point* Upvotes: 4 [selected_answer]<issue_comment>username_2: **Edit** Remove the set value attribute from your option tag. `Pasirinkite {{brand}}` Upvotes: 0 <issue_comment>username_3: Here is my own and it works [My Personal site](http://www.alexmackinnon.ca) with the country select list for the globe. ``` {{country.name}} selectCountry(event) { let fireboxFix = event.target || event.srcElement; let indexToFind = (fireboxFix).value; let testArray = []; for (let row of this.countryList) { testArray.push(row.name) } this.changeI = testArray.indexOf(indexToFind); } ``` Upvotes: 0
2018/03/20
1,068
4,205
<issue_start>username_0: I'm trying to display a list of names using an AsyncTask. `doInBackground()` stores all the names found on a database in a String array. ``` public class GetAll extends AsyncTask { public String convertStreamToString(InputStream is) { java.util.Scanner s = new java.util.Scanner(is).useDelimiter("\\A"); return s.hasNext() ? s.next() : ""; } @Override protected String[] doInBackground(String... apikey) { String[] Students; //Making a http call HttpURLConnection urlConnection; InputStream in = null; try { // the url we wish to connect to URL url = new URL("http://radikaldesign.co.uk/sandbox/studentapi/getallstudents.php?apikey="+apikey); // open the connection to the specified URL urlConnection = (HttpURLConnection) url.openConnection(); // get the response from the server in an input stream in = new BufferedInputStream(urlConnection.getInputStream()); } catch (IOException e) { e.printStackTrace(); } // convert the input stream to a string String response = convertStreamToString(in); // print the response to android monitor/log cat System.out.println("Server response = " + response); final ArrayList allStudents= new ArrayList<>(); try { // declare a new json array and pass it the string response from the server // this will convert the string into a JSON array which we can the iterate // over using a loop JSONArray jsonArray = new JSONArray(response); // instantiate the cheeseNames array and set the size // to the amount of cheese object returned by the server Students = new String[jsonArray.length()]; // use a for loop to iterate over the JSON array for (int i=0; i < jsonArray.length(); i++) { // the following line of code will get the name of the cheese from the // current JSON object and store it in a string variable called name String name = jsonArray.getJSONObject(i).get("name").toString(); String gender= jsonArray.getJSONObject(i).get("gender").toString(); String dob= jsonArray.getJSONObject(i).get("dob").toString(); String address= 
jsonArray.getJSONObject(i).get("address").toString(); String postcode= jsonArray.getJSONObject(i).get("postcode").toString(); String studentNumber= jsonArray.getJSONObject(i).get("studentNumber").toString(); String courseTitle= jsonArray.getJSONObject(i).get("courseTitle").toString(); String startDate= jsonArray.getJSONObject(i).get("startDate").toString(); String bursary= jsonArray.getJSONObject(i).get("bursary").toString(); String email= jsonArray.getJSONObject(i).get("email").toString(); Student s= new Student(name, gender, dob, address, postcode, studentNumber, courseTitle, startDate, bursary, email); allStudents.add(s); Students[i]= name; return Students; } } catch (JSONException e) { e.printStackTrace(); } return new String[0]; } ``` Once the array is filled I want to display the results in a ListView on the MainActivity. I've tried to store the results like using ``` String[] Students= new GetAll.execute(apikey); ``` and then use an ArrayAdapter to fill the listview. That did not work so I'm here for help and suggestions. Thanks<issue_comment>username_1: `execute((Params... params)` returns the task itself . See the signature . ``` AsyncTask execute (Params... params) ``` Use `void onPostExecute (Result result)` to set The result to UI .Use an `ArrayList` instead of `Array` to make things easier. Below is an example. ``` public class GetAll extends AsyncTask> { @Override protected ArrayList doInBackground(String... strings) { ArrayList result=new ArrayList<>(); // Do your stuff and fill the data in result return result; } @Override protected void onPostExecute(ArrayList result) { super.onPostExecute(result); // Set the adapter here with result } } ``` Now only call execute as below. Read about [Asynctask](https://developer.android.com/reference/android/os/AsyncTask.html). ``` new GetAll.execute(apikey) ``` Upvotes: 1 <issue_comment>username_2: Adding to username_1's answer.. 
Create a inner class in the activity which extends Async Task which makes populating your original list after thread call simpler. Call it as: new GetAll().execute("string"); Upvotes: 1 [selected_answer]
2018/03/20
534
1,966
<issue_start>username_0: Specifically, I'm trying to delay `a = 1 + 1` for 10 seconds. During this time, I would like the user to be able to input a value. If the user doesn't respond in time, I would `a = 1 + 1` to make `a = 2` which will cause a bomb to go off and kill the user. I tried using Sleep, but that halts the entire program and doesn't allow input from the user. How can I make the system preform both tasks at once, where for 10 seconds the user has a change to disable the bomb and make it not go off?<issue_comment>username_1: > > How can I make the system perform both tasks at once? > > > The answer is Concurrency. For example 2 threads, one for the first task, another for the second. And a raindrop of synchronization between them. For further reading, I would like to recommend C++ Concurrency in Action: Practical Multithreading by <NAME>. Upvotes: 1 <issue_comment>username_2: You don't need to use Sleep, nor you're forced to use multithreading to accomplish the task. The simplest solution would use a 'message loop', and a tick/time-control function (i.e. GetTickCount, QueryPerformance, timeGetTime). Concurrency is a little bit of an overkill for a not so complex task. Upvotes: 0 <issue_comment>username_3: The answer for this depends on the situation. If you're obtaining input through a method that blocks the current thread, like `std::cin` then a thread will probably necessary. However, if you are obtaining input through a means that isn't blocking, then measuring time intervals would be a valid solution since you could simply exit as soon as the time limit has been reached. For this situation you could measure time via `std::chrono`. ``` #include #include #include int main() { using clock = std::chrono::steady\_clock; auto start = clock::now(); while (std::chrono::duration\_cast(clock::now() - start).count() < 2000) { } std::cout << "Took two seconds to complete."; } ``` Upvotes: 1
2018/03/20
1,050
3,492
<issue_start>username_0: I'm trying to pass arguments to a MSI installer: ``` $Servers = @("ServerOne", "ServerTwo", "ServerThree") $InstallerArguments = @( "/i `"$InstallerPath`"", "/quiet", "/log `"$LogFile`"", "EMAILSERVER=`"$Servers[0]`"", "DATABASESERVER=`"$Servers[1]`"", "SHAREPOINTSERVER=`"$Servers[2]`"", "USERNAME=`"$UserName`"" ) ``` Reviewing the installer logfile shows the result: ``` Property(S): EMAILSERVER = ServerOne ServerTwo ServerThree[0] Property(S): DATABASESERVER = ServerOne ServerTwo ServerThree[0] ``` Expected result: ``` Property(S): EMAILSERVER = ServerOne ``` I think I need to escape the index somehow, what is wrong with the code? Edit (passing arguments to installer): ``` Start-Process -FilePath msiexec.exe -ArgumentList $InstallerArguments -Wait ```<issue_comment>username_1: If you use `"`, PowerShell will recognize variables and replace them with their content. If it's an array, the elements will be joined by `$OFS`. If you want to specify properties, elements, etc. you have to use `$()` to make shure, PowerShell understand that this is part of the variable (e.g. `$($Servers[0])`. If you need in addition `"` inside a string, I recommend to work with `'` (which does not replace variables) and the `-f` operator. ``` 'EMAILSERVER="{0}"' -f $Servers[0] ``` Also there is no need for all those arrays. ``` $InstallerPath = 'C:\' $LogPath = 'D:\' $MailServer = 'mail' $InstallerArguments = '/i "{0}" /quiet /log "{1}" EMAILSERVER="{2}"' -f $InstallerPath, $LogPath, $MailServer Start-Process -FilePath msiexec.exe -ArgumentList $InstallerArguments -Wait ``` I think that's much more readable. Upvotes: 0 <issue_comment>username_2: This does exactly what you want. I presume you did want the quotes. 
``` $Servers = @("ServerOne", "ServerTwo", "ServerThree") $InstallerArguments = @( "/i `"$InstallerPath`"", "/quiet", "/log `"$LogFile`"", "EMAILSERVER=`"$($Servers[0])`"", "DATABASESERVER=`"$($Servers[1])`"", "SHAREPOINTSERVER=`"$($Servers[2])`"", "USERNAME=`"$UserName`"" ) ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: The subexpression operator - `$()` - is one way you can embed an array element (or the output of an arbitrary expression) within a string: ``` $Servers = @("ServerOne", "ServerTwo", "ServerThree") $InstallerArguments = @( "/i ""$InstallerPath""" "/quiet" "/log ""$LogFile""" "EMAILSERVER=""$($Servers[0])""" "DATABASESERVER=""$($Servers[1])""" "SHAREPOINTSERVER=""$($Servers[2])""" "USERNAME=""$UserName""" ) Start-Process msiexec.exe $InstallerArguments -Wait ``` The subexpression operator is documented in the `about_Operators` help topic. For an even more readable array, you can use the `-f` string formatting operator and single-quotes; example: ``` $InstallerArguments = @( ('/i "{0}"' -f $InstallerPath) '/quiet' ('/log "{0}"' -f $LogFile) ('EMAILSERVER="{0}"' -f $Servers[0]) ('DATABASESERVER="{0}"' -f $Servers[1]) ('SHAREPOINTSERVER="{0}"' -f $Servers[2]) ('USERNAME="{0}"' -f $UserName) ) ``` Upvotes: 1 <issue_comment>username_4: So much unneeded complexity with the array. This should work: ``` $Servers = @("ServerOne", "ServerTwo", "ServerThree") $args = '/i "{0}" /quiet /log "{1}" EMAILSERVER={2} DATABASESERVER={3} SHAREPOINTSERVER={4} USERNAME={5}' -f $InstallerPath, $LogFile, $servers[0], $servers[1], $servers[2], $username Start-Process msiexec.exe $args ``` Upvotes: 0
2018/03/20
974
3,191
<issue_start>username_0: I'm using the primeng autocomplete input. I would like the blue glow effect to be disabled when I focus on the input. [![enter image description here](https://i.stack.imgur.com/mtiNF.png)](https://i.stack.imgur.com/mtiNF.png) Here's my html component

```
{{elm.name}} ( ID: {{elm.code}} )
```

I have tried to change the css according to the [documentation](https://www.primefaces.org/primeng/#/autocomplete)

```
::ng-deep .ui-autocomplete {
    box-shadow: 0 !important;
}
```

but that doesn't work.
2018/03/20
216
902
<issue_start>username_0: I want to show a splash screen until the web-service response arrives in the app.<issue_comment>username_1: `LaunchScreen.storyboard`'s VC stays for a fixed time and then hands off to the `rootViewController`. You can create a view with the same look as the splash above the first VC's view and remove it when the response comes; that way you can fake the splash as if it were still shown. It's also better to add a `UIActivityIndicatorView` above it (or show network activity in the status bar) to get a better UX. Upvotes: 2 <issue_comment>username_2: The LaunchScreen.storyboard can't have any class associated with it. What you can do, and Apple suggests it, is to have the LaunchScreen have the same overall layout as your first view. All you need to do is have your web request performed in your VC on Main.storyboard, and have your initial VC use the same UI as LaunchScreen.storyboard. Upvotes: 1
2018/03/20
883
3,461
<issue_start>username_0: I have a superclass like this:

```
public abstract class Foo {
    //some method here
    public void aMethod() {
        //some code
    }
}
```

and 2 child classes, `BarOne` and `BarTwo`, which extend the `Foo` class. In `BarOne`, everything is normal, but in `BarTwo`, I want to mark `aMethod()` as `@Deprecated`. To be precise: I mean hiding it, not deprecating it, due to privacy reasons. Is there a better way than overriding the method?<issue_comment>username_1: Even when it is *technically* possible, it is conceptually wrong, as it is a **conceptual** violation of the Liskov substitution principle. You see, inheritance is more than putting `A extends B` in your source code. The idea is that the methods in your base class define a contract. Code using these methods should not need to **care** about the fact that such a method is implemented in the base class - or overridden in a subclass. You are basically saying: I want to have method `x()` on the base class, but I want to express that `x()` is deprecated on child classes. Which would mean that code calling `x()` should only do that if it is calling the method on a base class instance. Thus the non-answer here is: don't even think about doing things like that. Either a method is "fine to be called" on all levels of inheritance, or it is not. There is no point in putting such a restriction on child classes only. So the real answer here: step back, and consider *why* you came up with this idea. Then look out for *other* ways to solve that problem. And given the comment about *hiding* it: same story. And technically: you can't **hide** a method. Because, as said, when calling `foo.bar()` is a valid statement for some base class `foo` instance, then it needs to be a valid statement also when `foo` happens to be an instance of a subclass.
The only thing that is legit:

* have javadoc on the **base** method explaining that subclasses might override methods to do nothing (or throw an exception)
* then, well, have your subclass methods do whatever is required

Upvotes: 4 [selected_answer]<issue_comment>username_2: You are describing something like this:

```
      ___________
     | class Foo |
     |           |
     | aMethod() |
     |___________|
        /     \
       /       \
 _____/________   __\___________
| class BarOne | | class BarTwo |
|              | |              |
|   inherits   | |    hides     |
|   aMethod()  | |   aMethod()  |
|______________| |______________|
```

You cannot and you should not do that, as already described in username_1's answer. However, what you can do is something like this:

```
      ___________
     | class Baz |
     |           |
     |___________|
        /     \
       /       \
 _____/________   __\___________
| class Foo    | | class BarTwo |
|              | |              |
| aMethod()    | |______________|
|______________|
       |
 ______|_______
| class BarOne |
|              |
|   inherits   |
|   aMethod()  |
|______________|
```

`Baz` would then have all the code that `Foo` previously had, except `aMethod()`. `Foo` would simply add `aMethod()` and be the *superclass* for all *classes* that want to inherit `aMethod()`, while `Baz` is the *superclass* for all *classes* that don't want to inherit `aMethod()`. Upvotes: 3
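The refactoring in the diagram above is language-agnostic. A minimal sketch of the same hierarchy (written in Python rather than Java, purely for brevity) shows that `BarTwo` never acquires `aMethod()` in the first place, so there is nothing to hide:

```python
class Baz:
    """Everything Foo previously had, except a_method()."""
    def shared_behavior(self):
        return "common to all subclasses"

class Foo(Baz):
    """Adds a_method() for the subclasses that want to inherit it."""
    def a_method(self):
        return "aMethod"

class BarOne(Foo):   # inherits a_method()
    pass

class BarTwo(Baz):   # never had a_method(), so nothing needs hiding
    pass
```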
2018/03/20
864
2,600
<issue_start>username_0: I have the following dataframe in pandas:

```
df = pd.DataFrame({'field_1' : ['a', 'b', np.nan, 'a', 'c'],
                   'field_2': ['c', 'b', 'a', np.nan, 'c']}, index=[1,2,3,4,5])
```

I want to apply the following function to the entire dataframe, replacing each value with something else. For example:

```
def func_replace(value, n):
    if value == 'a':
        return 'This is a'*n
    elif value == 'b':
        return 'This is b'*n
    elif value == 'c':
        return 'This is c'*n
    elif str(value) == 'nan':
        return np.nan
    else:
        'The value is not included'
```

so that the final product would look like this (given that `n=1`):

```
df = pd.DataFrame({'field_1' : ['This is a', 'This is b', np.nan, 'This is a', 'This is c'],
                   'field_2': ['This is c', 'This is b', 'This is a', np.nan, 'This is c']}, index=[1,2,3,4,5])
```

I tried the following:

```
df.apply(func_replace, args=(1), axis=1)
```

and a bunch of other options, but it always gives me an error. I know that I can write a `for` loop that goes through every column and uses a lambda function to solve this problem, but I feel that there is an easier option. I feel the solution is easier than I think, but I just can't figure out the correct syntax. Any help would be really appreciated.<issue_comment>username_1: Just modify your function to operate at the level of each value in a `Series` and use `applymap`.
```
df = pd.DataFrame({'field_1' : ['a', 'b', np.nan, 'a', 'c'],
                   'field_2': ['c', 'b', 'a', np.nan, 'c']}, index=[1,2,3,4,5])

df
Out[35]:
  field_1 field_2
1       a       c
2       b       b
3     NaN       a
4       a     NaN
5       c       c
```

Now, if we define the function as:

```
def func_replace(value):
    if value == 'a':
        return 'This is a'
    elif value == 'b':
        return 'This is b'
    elif value == 'c':
        return 'This is c'
    elif str(value) == 'nan':
        return np.nan
    else:
        return 'The value is not included'
```

Calling this function on each value of the `DataFrame` is very straightforward:

```
df.applymap(func_replace)
Out[42]:
     field_1    field_2
1  This is a  This is c
2  This is b  This is b
3        NaN  This is a
4  This is a        NaN
5  This is c  This is c
```

Upvotes: 1 <issue_comment>username_2: I think you need:

```
def func_replace(df, n):
    df_temp = df.replace({r"[^abc]": "The value is not included"}, regex=True)
    return df_temp.replace(["a", "b", "c"], ["This is a " * n, "This is b " * n, "This is c " * n])

df.apply(func_replace, args=(2,))
```

Upvotes: 0
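Building on the `applymap` idea: a closure can carry `n` into the per-element function, so no `args` plumbing is needed at all. A minimal sketch, assuming only standard pandas/NumPy:

```python
import numpy as np
import pandas as pd

def make_replacer(n):
    # Build the value -> text mapping once; 'This is a' * n mirrors the
    # repetition semantics from the question.
    mapping = {v: ("This is " + v) * n for v in ("a", "b", "c")}
    def replace(value):
        if pd.isna(value):
            return np.nan
        return mapping.get(value, "The value is not included")
    return replace

df = pd.DataFrame({"field_1": ["a", "b", np.nan, "a", "c"],
                   "field_2": ["c", "b", "a", np.nan, "c"]},
                  index=[1, 2, 3, 4, 5])

# applymap applies the closure element-wise
# (the method was renamed DataFrame.map in pandas >= 2.1)
result = df.applymap(make_replacer(1))
```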
2018/03/20
358
1,464
<issue_start>username_0: I would like to have a VOIP gateway server to route/control calls from a public site to an internal network and vice versa. As far as I know, Asterisk and FreeSWITCH can handle this job, but in terms of functionality and security I am unsure how to decide between them. If you know of any other software that can take care of this job better, please help me pick a good one! Thank you very much!<issue_comment>username_1: Questions asking for tool recommendations are off-topic on SO. Asterisk is simpler to configure, it is easier to find developers and help for it, and it is simple for a beginner to install (there are ready-made ISOs such as Elastix). FreeSWITCH has slightly better source code (because it is newer) and slightly better performance (though only if you compare the default settings of both; for advanced use the performance is the same). Security depends mostly on your skill and confidence with whichever switch you use as your primary, not on the solution you choose. Yes, I know of other software: Kamailio/OpenSIPS will work "better" (they can handle 5000+ calls), but they are much more complex to set up and maintain. Upvotes: 0 <issue_comment>username_2: Asterisk: more online resources, high-level features. FreeSWITCH: more stable, higher performance. It usually comes down to the SIP stack: FreeSWITCH has used Sofia from the start, while Asterisk had its own implementation for a while (chan\_sip, now deprecated) and integrated PJSIP just a couple of years ago. Upvotes: 1
2018/03/20
913
2,695
<issue_start>username_0: I have a file with comma-separated strings. I want to split every line into 2 arrays: v[i].data and v[i].valor. However, when I run the code it shows random values for the arrays. Is there anything I should change?

> Input
> 1761
> 02/20/18,11403.7
> 02/19/18,11225.3
> 02/18/18,10551.8
> 02/17/18,11112.7
> 02/16/18,10233.9

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct{
    char data[10];
    double valor;
}vetor;

int main(int argc, char *argv[]){
    FILE *csv;
    if((csv=fopen(argv[1], "r")) == NULL ) {
        printf("not found csv\n");
        exit(1);
    }
    long int a=0;
    char linha[256];
    char *token = NULL;
    if(fgets(linha, sizeof(linha), csv)) {
        token = strtok(linha, "\n");
        a = (atoi(token));
    }
    printf("%d\n", a);
    rewind(csv);
    vetor *v;
    v = (vetor*)malloc(a*sizeof(vetor));
    char linha2[256];
    while (fgets(linha2, sizeof(linha2), csv) != 0) {
        fseek(csv, +1, SEEK_CUR);
        for(int i=0;i
```
<issue_comment>username_1: Your `data` field only holds one `char` now. It needs to have room for at least the typical value, like 02/19/18. Use `char[10]` for example (if you know for sure that it can never be longer than 9 characters). I think the compiler ought to have warned against your `fscanf` call. Upvotes: 1 <issue_comment>username_2: Your `fscanf()` calls are off: When you make this call

```
fscanf(csv, "%s[^,]", v[i].data);
```

`fscanf()` will first parse a string of non-whitespace characters until it finds a whitespace character (the `%s` conversion), so `"02/20/18,11403.7"` gets written to your array, and your buffer will overflow. After that, you are in undefined behavior territory, and anything could happen.
**The `[^,]` conversion won't even be reached before you get UB.** Of course, you can simply fix this by dropping the `%s` from your format string, but I think what you really want to do is just parse all the numbers as numbers:

```
int month, day, year;
double value;
int conversionCount = fscanf(csv, "%d/%d/%d,%lf\n", &month, &day, &year, &value);
if(conversionCount != 4)
{
    handleError();
}
```

Do not forget to check the result of `fscanf()`, as it's the only way to know whether parsing completed successfully. I have added `\n` at the end of the format string, which will gobble up any whitespace following the float value, including the newline character. This allows you to perform your conversions one line at a time with a single `fscanf()` in a simple loop, without any need to call `fseek()`. Just keep parsing lines until an `fscanf()` call returns something other than the number of conversions you expect. Upvotes: 1 [selected_answer]
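For reference, the parsing semantics the accepted answer describes — one `MM/DD/YY,value` record per line, with the numbers parsed as numbers — can be sketched in a few lines of Python (an illustration of the intended result, not a replacement for the C code):

```python
def parse_line(line):
    """Split one 'MM/DD/YY,value' record into its numeric parts."""
    date_part, value_part = line.strip().split(",")
    month, day, year = (int(piece) for piece in date_part.split("/"))
    return (month, day, year), float(value_part)

record = parse_line("02/20/18,11403.7")
```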
2018/03/20
1,828
6,142
<issue_start>username_0: I have added a "before\_save" to my model to apply some logic to my model before saving. When I use this code, the record is created, then immediately updated (with the incorrect value). If I comment it out, there is no subsequent update when I create a new record. **Model** ``` class Transaction < ApplicationRecord belongs_to :account attr_accessor :trx_type before_save do if self.trx_type == "debit" self.amount = self.amount * -1 end end end ``` **Controller** ``` class TransactionsController < ApplicationController before_action :find_account before_action :find_transaction, only: [:edit, :update, :show, :destroy] # Index action to render all transactions def index @transactions = @account.transactions respond_to do |format| format.html # index.html.erb format.xml { render :xml => @transactions } end end # New action for creating transaction def new @transaction = @account.transactions.build respond_to do |format| format.html # new.html.erb format.xml { render :xml => @transaction } end end # Create action saves the trasaction into database def create @transaction = @account.transactions.create(transaction_params) respond_to do |format| if @transaction.save format.html { redirect_to([@transaction.account, @transaction], :notice => 'Transaction was successfully created.') } format.xml { render :xml => @transaction, :status => :created, :location => [@transaction.account, @transaction] } else format.html { render :action => "new" } format.xml { render :xml => @transaction.errors, :status => :unprocessable_entity } end end end # Edit action retrieves the transaction and renders the edit page def edit end # Update action updates the transaction with the new information def update respond_to do |format| if @transaction.update_attributes(transaction_params) format.html { redirect_to([@transaction.account, @transaction], :notice => 'Transaction was successfully updated.') } format.xml { head :ok } else format.html { render :action => "edit" } 
format.xml { render :xml => @transaction.errors, :status => :unprocessable_entity } end end end # The show action renders the individual transaction after retrieving the the id def show respond_to do |format| format.html # show.html.erb format.xml { render :xml => @transaction } end end # The destroy action removes the transaction permanently from the database def destroy @transaction.destroy respond_to do |format| format.html { redirect_to(account_transactions_url) } format.xml { head :ok } end end private def transaction_params params.require(:transaction).permit(:trx_date, :description, :amount, :trx_type) end def find_account @account = current_user.accounts.find(params[:account_id]) end def find_transaction @transaction = @account.transactions.find(params[:id]) end end ``` **Console Output** ``` Started POST "/accounts/1/transactions" for 127.0.0.1 at 2018-03-20 13:59:37 -0400 Processing by TransactionsController#create as HTML Parameters: {"utf8"=>"✓", "authenticity_token"=>"<KEY> "transaction"=>{"trx_type"=>"debit", "trx_date(1i)"=>"2018", "trx_date(2i)"=>"3", "trx_date(3i)"=>"20", "description"=>"Test 10", "amount"=>"132"}, "commit"=>"Create Transaction", "account_id"=>"1"} User Load (0.5ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 ORDER BY "users"."id" ASC LIMIT $2 [["id", 1], ["LIMIT", 1]] Account Load (0.5ms) SELECT "accounts".* FROM "accounts" WHERE "accounts"."user_id" = $1 AND "accounts"."id" = $2 LIMIT $3 [["user_id", 1], ["id", 1], ["LIMIT", 1]] (0.2ms) BEGIN SQL (0.6ms) INSERT INTO "transactions" ("trx_date", "description", "amount", "account_id", "created_at", "updated_at") VALUES ($1, $2, $3, $4, $5, $6) RETURNING "id" [["trx_date", "2018-03-20"], ["description", "Test 10"], ["amount", "-132.0"], ["account_id", 1], ["created_at", "2018-03-20 13:59:37.349781"], ["updated_at", "2018-03-20 13:59:37.349781"]] (3.5ms) COMMIT (0.1ms) BEGIN SQL (0.3ms) UPDATE "transactions" SET "amount" = $1, "updated_at" = $2 WHERE "transactions"."id" = $3 
[["amount", "132.0"], ["updated_at", "2018-03-20 13:59:37.355748"], ["id", 27]] (0.9ms) COMMIT Redirected to http://localhost:3000/accounts/1/transactions/27 Completed 302 Found in 16ms (ActiveRecord: 6.6ms) ``` I'm new with Rails and trying to understand what is happening with my code. I appreciate any help in advance. Thanks!<issue_comment>username_1: There are two things here that are causing you some grief, but they're easy to address. First, in the create action of your controller you're actually calling two methods that persist data to the database, so that's why you're seeing two saves in the console output. The first line in the method is responsible for the first save: `@transaction = @account.transactions.create(transaction_params)` And this line here in your respond\_to block is responsible for the second save: `if @transaction.save` Second, the reason the record has the correct `amount` in the first save and not in the second save is related to the logic in the `before_save` callback of your Transaction model. It's taking the `amount` and calling `* -1` on it. Since the first save has already made the amount negative, the second save will flip it back to positive. Upvotes: 3 [selected_answer]<issue_comment>username_2: It seems like you need `before_create` because it's unlikely you would change the type of transaction right? ``` before_create do if self.trx_type == "debit" self.amount = self.amount * -1 end end ``` Update: Looks like you need in your controller change: ``` @transaction = @account.transactions.create(transaction_params) ``` to ``` @transaction = @account.transactions.build(transaction_params) ``` Upvotes: 1
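The double sign flip username_1 describes is easy to see in isolation. Here is a toy model (plain Python, not Rails or ActiveRecord) of a `before_save` hook that runs once per persist call:

```python
class Transaction:
    """Toy stand-in for the ActiveRecord model; before_save mimics the callback."""
    def __init__(self, amount, trx_type):
        self.amount = amount
        self.trx_type = trx_type

    def before_save(self):
        # Negate debits, exactly like the callback in the question.
        if self.trx_type == "debit":
            self.amount = self.amount * -1

t = Transaction(132, "debit")
t.before_save()   # first persist (create): 132 -> -132
first = t.amount
t.before_save()   # redundant second persist (save): -132 -> 132 again
second = t.amount
```

Calling both `create` and `save` in the controller fires the hook twice, which is why the stored amount ends up positive again.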
2018/03/20
3,798
10,427
<issue_start>username_0: right now I'm in the middle at developing an application using Ruby and Mongo. But I got stuck when I deploy the application using gitlab-ci where it seems that mongo server doesn't start at test environment on gitlab runner. Here are my gitlab-ci.yml ``` stages : - test - deploy services: - mongo:latest unitTest:API: image: ruby:2.6.10 stage: test cache: paths: - API/vendor/ before_script: - ruby -v - gem install bundler --no-ri --no-rdoc - bundle install --gemfile=API/Gemfile --path vendor script: - RAILS_ENV=test rspec API/spec - RAILS_ENV=test rubocop API artifacts: paths: - coverage/ deploy_coba: stage: deploy before_script: - cd API script: - gem install dpl - dpl --provider=heroku --app=exhibisi-cobacoba --api-key=<KEY> only: - coba_coba staging: stage: deploy before_script: - cd API script: - gem install dpl - dpl --provider=heroku --app=exhibisi-staging --api-key=<KEY> only: - sit_uat production: stage: deploy before_script: - cd API script: - gem install dpl - dpl --provider=heroku --app=exhibisi-prod --api-key=<KEY> only: - master pages: stage: deploy script: - mv coverage/ public/ artifacts: paths: - public expire_in: 30 days ``` and this are the result of the gitlab runner error ``` Running with gitlab-runner 10.6.0-rc1 (0a9d5de9) on docker-auto-scale e11ae361 Using Docker executor with image ruby:2.3.0 ... Starting service mongo:2.6.10 ... Pulling docker image mongo:2.6.10 ... Using docker image sha256:54fb6f9984dde283c9ff55f5aba7d883355793dd7997b0f9f75bb31e89607311 for mongo:2.6.10 ... Waiting for services to be up and running... Pulling docker image ruby:2.3.0 ... Using docker image sha256:7ca70eb2dfea00e9e3eaece33024ad5c06b7473728d559e1a3a574629de95a6a for ruby:2.3.0 ... Running on runner-e11ae361-project-5550018-concurrent-0 via runner-e11ae361-srm-1521565965-49af3740... Cloning repository... Cloning into '/builds/cymon1997/ppl-coba'... Checking out 61333b01 as US02_menampilkan_daftar_produk... 
Skipping Git submodules setup Checking cache for default... FATAL: file does not exist Failed to extract cache $ ruby -v ruby 2.3.0p0 (2015-12-25 revision 53290) [x86_64-linux] $ gem install bundler --no-ri --no-rdoc Successfully installed bundler-1.16.1 1 gem installed $ bundle install --gemfile=API/Gemfile --path vendor Fetching gem metadata from http://rubygems.org/.......... Fetching rake 12.3.0 Installing rake 12.3.0 Fetching concurrent-ruby 1.0.5 Installing concurrent-ruby 1.0.5 Fetching i18n 0.9.5 Installing i18n 0.9.5 Fetching minitest 5.11.3 Installing minitest 5.11.3 Fetching thread_safe 0.3.6 Installing thread_safe 0.3.6 Fetching tzinfo 1.2.5 Installing tzinfo 1.2.5 Fetching activesupport 5.1.5 Installing activesupport 5.1.5 Fetching activemodel 5.1.5 Installing activemodel 5.1.5 Fetching ast 2.4.0 Installing ast 2.4.0 Fetching bson 4.3.0 Installing bson 4.3.0 with native extensions Using bundler 1.16.1 Fetching diff-lcs 1.3 Installing diff-lcs 1.3 Fetching docile 1.1.5 Installing docile 1.1.5 Fetching json 2.1.0 Installing json 2.1.0 with native extensions Fetching mongo 2.5.1 Installing mongo 2.5.1 Fetching mongoid 6.3.0 Installing mongoid 6.3.0 Fetching mustermann 1.0.2 Installing mustermann 1.0.2 Fetching parallel 1.12.1 Installing parallel 1.12.1 Fetching parser 2.5.0.3 Installing parser 2.5.0.3 Fetching powerpack 0.1.1 Installing powerpack 0.1.1 Fetching rack 2.0.4 Installing rack 2.0.4 Fetching rack-protection 2.0.1 Installing rack-protection 2.0.1 Fetching rack-test 0.8.2 Installing rack-test 0.8.2 Fetching racksh 1.0.0 Installing racksh 1.0.0 Fetching rainbow 3.0.0 Installing rainbow 3.0.0 Fetching rspec-support 3.7.1 Installing rspec-support 3.7.1 Fetching rspec-core 3.7.1 Installing rspec-core 3.7.1 Fetching rspec-expectations 3.7.0 Installing rspec-expectations 3.7.0 Fetching rspec-mocks 3.7.0 Installing rspec-mocks 3.7.0 Fetching rspec 3.7.0 Installing rspec 3.7.0 Fetching rspec-json_expectations 2.1.0 Installing rspec-json_expectations 
2.1.0 Fetching ruby-progressbar 1.9.0 Installing ruby-progressbar 1.9.0 Fetching unicode-display_width 1.3.0 Installing unicode-display_width 1.3.0 Fetching rubocop 0.53.0 Installing rubocop 0.53.0 Fetching shotgun 0.9.2 Installing shotgun 0.9.2 Fetching simplecov-html 0.10.2 Installing simplecov-html 0.10.2 Fetching simplecov 0.15.1 Installing simplecov 0.15.1 Fetching tilt 2.0.8 Installing tilt 2.0.8 Fetching sinatra 2.0.1 Installing sinatra 2.0.1 Bundle complete! 10 Gemfile dependencies, 39 gems now installed. Bundled gems are installed into `./vendor` $ RAILS_ENV=test rspec API/spec ...FFFF. Failures: 1) ListProductController GET to /products returns status 200 OK Failure/Error: expect(last_response).to be_ok expected `#"text/html", "Content-Length...bled the `show_exceptions` setting.\n \n \n\n"]>.ok?` to return true, got false # ./API/spec/controllers/listproduct\_controller\_spec.rb:9:in `block (3 levels) in ' 2) ListProductController GET to /products show a list of product's name and its icon Failure/Error: Product.each do |product| payload.push({ exhibit\_name: product.exhibit\_name, icon: product.icon }) end Mongo::Error::NoServerAvailable: No server is available matching preference: # using server\_selection\_timeout=30 and local\_threshold=0.015 # ./API/vendor/ruby/2.3.0/gems/mongo-2.5.1/lib/mongo/server\_selector/selectable.rb:119:in `select\_server' # ./API/vendor/ruby/2.3.0/gems/mongo-2.5.1/lib/mongo/collection/view/iterable.rb:41:in `block in each' # ./API/vendor/ruby/2.3.0/gems/mongo-2.5.1/lib/mongo/retryable.rb:44:in `read\_with\_retry' # ./API/vendor/ruby/2.3.0/gems/mongo-2.5.1/lib/mongo/collection/view/iterable.rb:40:in `each' # ./API/vendor/ruby/2.3.0/gems/mongoid-6.3.0/lib/mongoid/query\_cache.rb:222:in `each' # ./API/vendor/ruby/2.3.0/gems/mongoid-6.3.0/lib/mongoid/contextual/mongo.rb:132:in `each' # ./API/vendor/ruby/2.3.0/gems/mongoid-6.3.0/lib/mongoid/contextual.rb:20:in `each' # 
./API/vendor/ruby/2.3.0/gems/mongoid-6.3.0/lib/mongoid/findable.rb:15:in `each' # ./API/spec/controllers/listproduct\_controller\_spec.rb:15:in `block (3 levels) in ' 3) ListProductController GET to /:id returns status 200 OK Failure/Error: expect(last\_response).to be\_ok expected `#"text/html", "Content-Length...bled the `show_exceptions` setting.\n \n \n\n"]>.ok?` to return true, got false # ./API/spec/controllers/listproduct\_controller\_spec.rb:31:in `block (3 levels) in ' 4) ListProductController GET to /:id displays the product's profil Failure/Error: product = Product.find\_by(exhibit\_id: "EXH1") Mongo::Error::NoServerAvailable: No server is available matching preference: # using server\_selection\_timeout=30 and local\_threshold=0.015 # ./API/vendor/ruby/2.3.0/gems/mongo-2.5.1/lib/mongo/server\_selector/selectable.rb:119:in `select\_server' # ./API/vendor/ruby/2.3.0/gems/mongo-2.5.1/lib/mongo/collection/view/iterable.rb:41:in `block in each' # ./API/vendor/ruby/2.3.0/gems/mongo-2.5.1/lib/mongo/retryable.rb:44:in `read\_with\_retry' # ./API/vendor/ruby/2.3.0/gems/mongo-2.5.1/lib/mongo/collection/view/iterable.rb:40:in `each' # ./API/vendor/ruby/2.3.0/gems/mongoid-6.3.0/lib/mongoid/query\_cache.rb:222:in `each' # ./API/vendor/ruby/2.3.0/gems/mongoid-6.3.0/lib/mongoid/contextual/mongo.rb:278:in `first' # ./API/vendor/ruby/2.3.0/gems/mongoid-6.3.0/lib/mongoid/contextual/mongo.rb:278:in `find\_first' # ./API/vendor/ruby/2.3.0/gems/mongoid-6.3.0/lib/mongoid/contextual.rb:20:in `find\_first' # ./API/vendor/ruby/2.3.0/gems/mongoid-6.3.0/lib/mongoid/findable.rb:114:in `find\_by' # ./API/spec/controllers/listproduct\_controller\_spec.rb:35:in `block (3 levels) in ' Finished in 7 minutes 2 seconds (files took 1.31 seconds to load) 8 examples, 4 failures Failed examples: rspec ./API/spec/controllers/listproduct\_controller\_spec.rb:7 # ListProductController GET to /products returns status 200 OK rspec ./API/spec/controllers/listproduct\_controller\_spec.rb:12 # 
ListProductController GET to /products show a list of product's name and its icon rspec ./API/spec/controllers/listproduct\_controller\_spec.rb:29 # ListProductController GET to /:id returns status 200 OK rspec ./API/spec/controllers/listproduct\_controller\_spec.rb:33 # ListProductController GET to /:id displays the product's profil Coverage report generated for RSpec to /builds/cymon1997/ppl-coba/coverage. 44 / 56 LOC (78.57%) covered. ERROR: Job failed: exit code ``` and this are my mongoid.yml file : ``` development: options: raise_not_found_error: false clients: default: database: mongoid_dev hosts: - localhost:27017 test: options: raise_not_found_error: false clients: default: database: mongoid_dev hosts: - localhost:27017 ``` Can anybody tell me what I'm missing. I search everywhere about gitlab-ci and databases but only mysql and postgre sql that showed up about this. Did I miss to run the mongo server on test? or Should I run the server from the code itself. Or it is another problem?<issue_comment>username_1: Your MongoDB most probably runs on GitlabCI, but it is executed in the separate Docker container. It must be accessed from the tests using `mongo:27017`, not `localhost:27017`. Check here: <https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services> So you should fix your configuration (mongoid.yml) appropriately. Upvotes: 3 <issue_comment>username_2: The problem is with your connection string for the MongoDB database. Since you have used the mongo service, and no alias is put, so Gitlab takes "mongo" as its hostname. According to Gitlab: The default aliases for the service’s hostname are created from its image name following these rules: * Everything after the colon (:) is stripped. * Slash (/) is replaced with double underscores (\_\_) and the primary alias is created. * Slash (/) is replaced with a single dash (-) and the secondary alias is created (requires GitLab Runner v1.1.0 or higher). 
So, instead of localhost, you have to use **mongo:27017**. Your connection string will look like: `mongodb://mongo:27017/${databaseName}` Upvotes: 1
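The alias rules quoted above are mechanical enough to sketch in a few lines (a paraphrase of the documented rules, not GitLab's actual implementation):

```python
def service_aliases(image):
    """Derive the hostname aliases GitLab CI creates for a service image name."""
    name = image.split(":", 1)[0]        # everything after ':' is stripped
    primary = name.replace("/", "__")    # '/' -> '__' gives the primary alias
    secondary = name.replace("/", "-")   # '/' -> '-' gives the secondary alias
    return primary, secondary

mongo_aliases = service_aliases("mongo:latest")        # ('mongo', 'mongo')
wp_aliases = service_aliases("tutum/wordpress")        # ('tutum__wordpress', 'tutum-wordpress')
```

This is why `mongoid.yml` should point at `mongo:27017` when the service is declared as `mongo:latest`.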
2018/03/20
741
2,712
<issue_start>username_0: I am learning javascript. I used 'this' as below in javascript

```
let person = {
  name: 'Krishna',
  age: 35,
  sayHello() {
    return 'Hello, my name is ' + this.name;
  },
  sayHi: () => `Hi ${this.name}`
};

console.log(person.sayHello());
console.log(person.sayHi());
```

The sayHello() function is properly getting 'this' as the person object, but in the shorthand function sayHi(), `this` refers to the global window object. Here I am calling sayHi() on the person object, so I am not sure why `this` refers to the window object. Can anyone please explain why `this` in sayHi() is not initialized with the person object?<issue_comment>username_1: In `sayHello`, you are returning the resulting string (which is calculated in the context of the object, which has `this`). In `sayHi`, you are returning a function to console.log and then calling it where `this` is no longer defined, because you're out of context. Upvotes: -1 <issue_comment>username_2:

> Arrow Functions lexically bind their context, so this actually refers to the originating context.

```
{
  sayHi: () => // in context,
  sayHello() {
    // needs this to be in context
  }
}
```

Upvotes: 1 <issue_comment>username_3: When you use `sayHi: () => ...` you are binding `this` to the global context, not to `person`; thus you don't have a `name` property in that scope. [This reading might help you.](https://derickbailey.com/2015/09/28/do-es6-arrow-functions-really-solve-this-in-javascript/) Upvotes: 1 <issue_comment>username_4: `this` inside the arrow function points to the same object as it did right before the arrow function was assigned (`window`).
If you really want to access the `person` object inside the arrow function, you have to do that directly by the object name (`person`):

```js
let person = {
  name: 'Krishna',
  age: 35,
  sayHello() {
    return 'Hello, my name is ' + this.name;
  },
  sayHi: () => `Hi ${person.name}`
};

console.log(person.sayHello());
console.log(person.sayHi());
```

Upvotes: 2 <issue_comment>username_5: When you define `sayHello()`, you are leaving the `this` keyword unbound, to be assigned in the function invocation `person.sayHello()`. `this` refers to whatever is left of the dot in the invocation, namely `person`. In `sayHi()`, you are binding the `this` keyword at the moment `sayHi` is defined. Since the context is not a function invocation but rather an object definition, the value of `this` is unknown and it defaults to the `window` object. Once you bind it in this way, **it cannot be reassigned.** When you run `person.sayHi()`, `this` refers to `window` and not `person`. Upvotes: 2 [selected_answer]
2018/03/20
553
1,695
<issue_start>username_0: When running a Corda 3 node, I get the following exception:

```
Exception in thread “main” java.lang.OutOfMemoryError: Java heap space
```

How can I increase the amount of memory available to the node?<issue_comment>username_1: You can run a node with additional memory by running the node's corda JAR from the command line with the following flag:

```
java -Xmx2048m -jar corda.jar
```

You can also specify that the node should be run with extra memory in the node's `node.conf` configuration file:

```
myLegalName="O=PartyA,L=London,C=GB"
...
jvmArgs=["-Xmx8G"]
```

Finally, you can specify that the node should be run with extra memory in the `deployNodes` task:

```
task deployNodes(type: net.corda.plugins.Cordform, dependsOn: ['jar']) {
    directory "./build/nodes"
    node {
        name "O=Node,L=London,C=GB"
        ...
        extraConfig = [
            jvmArgs : [ "-Xmx1g"]
        ]
    }
}
```

See <https://docs.corda.net/running-a-node.html#starting-an-individual-corda-node>. Upvotes: 3 [selected_answer]<issue_comment>username_2: Adding the following block in the "task deployNodes" section worked for me:

> extraConfig = [
>   jvmArgs : [ "-Xmx1g"]
> ]

Upvotes: 0 <issue_comment>username_3: Adding `extraConfig` in Gradle's `Cordform` task worked for me with Corda Enterprise 4.2:

```
task deployNodes(type: net.corda.plugins.Cordform) {
    nodeDefaults {
        // ...
        extraConfig = [
            custom: [jvmArgs: [
                "-Xms8G",
                "-Xmx8G",
                "-XX:+UseG1GC"
            ]]
        ]
    }
    // ...
}
```

The resulting node.conf fragment is:

```
custom {
    jvmArgs=[
        "-Xms8G",
        "-Xmx8G",
        "-XX:+UseG1GC"
    ]
}
```

Upvotes: 1
2018/03/20
1,040
3,835
<issue_start>username_0: I believe this is a unique problem, but definitely link me to other answers elsewhere if they exist. I have a convolutional sequential network in Keras, very similar to the one in the [guide to the sequential model (and here is their model):](https://keras.io/getting-started/sequential-model-guide/) ``` from keras.models import Sequential from keras.layers import Dense, Dropout from keras.layers import Embedding from keras.layers import Conv1D, GlobalAveragePooling1D, MaxPooling1D model = Sequential() model.add(Conv1D(64, 3, activation='relu', input_shape=(seq_length, 100))) model.add(Conv1D(64, 3, activation='relu')) model.add(MaxPooling1D(3)) model.add(Conv1D(128, 3, activation='relu')) model.add(Conv1D(128, 3, activation='relu')) model.add(GlobalAveragePooling1D()) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) model.fit(x_train, y_train, batch_size=16, epochs=10) score = model.evaluate(x_test, y_test, batch_size=16) ``` Unfortunately, my sequence sizes are pretty massive (up to a million), and I would really like to do an embedding. With that, I'd like to do 2d Convolution (and possibly much deeper architectures). My GPU runs fast enough as convolution is easy, but it has 2GB of memory. Therefore, I cannot even train the network one sample at a time. As soon as I introduce an embedding, it will explode the size of the network - in this example, (batch\_size, 1000000, 100, embed\_size). I know about [fit\_generator](https://keras.io/models/sequential/), but using fit\_generator (and TimeSeriesGenerator) requires me to have a label for every step of the broken up timesteps of the sequence. My problem is a simple classification problem so it does not make sense to provide a label at, for example, after the first 1000 timesteps of the sequence compared to all million. 
My impression is that the network is probably running the GlobalAveragePooling for every part of the broken-up sequence. As proof, when I run fit\_generator compared to regular\_fit on a small dataset, the performance for fit\_generator suffers greatly. Therefore my problem is: what can I use to create a large network to run on extremely long sequences in Keras? Is it possible I am misunderstanding fit\_generator? Or is there some other way to break up long sequences into parts? If this absolutely does not exist, I can probably write it myself and submit it to Keras, but I would rather not. This is NOT like an LSTM with extremely long sequence lengths because I do not care about TBTT, and convolutional networks do not have state.<issue_comment>username_1: You have a sequence of sentences, and the embedding can only be applied to one sentence at a time, so you need to wrap it in a TimeDistributed layer ``` from keras.models import Sequential from keras.layers import Embedding, TimeDistributed # plug-in your own values vocab_size = 10000 embed_size = 200 seq_length = 1000000 model = Sequential() model.add(TimeDistributed(Embedding(vocab_size, embed_size), input_shape=(seq_length, 100))) ``` The above gives me an input\_shape of `(None, 1000000, 100)` and an output shape of `(None, 1000000, 100, 200)`, with 2 million parameters. Upvotes: 2 <issue_comment>username_2: If anyone stumbles on this, to solve the issue I simply used a max pooling layer (not avg. pooling) of size 10 as the input. This effectively reduces the number of items in the sequence by a factor of 10, allowing space for the embedding layer. It performs great, so I don't think reducing the input had an adverse effect. Using max pooling essentially just chooses items at random, as the number values for each item are chosen at random. Upvotes: 2 [selected_answer]
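For intuition on the accepted workaround: a pool of size 10 in front of the embedding shrinks the sequence axis by a factor of 10. A plain-Python sketch of that downsampling step (my own illustration of the idea, not Keras code):

```python
def max_pool_1d(seq, pool=10):
    """Take the max of each non-overlapping window of `pool` elements,
    mirroring what a size-10 max pooling input layer does to the
    sequence length (hypothetical helper, not part of Keras)."""
    n = (len(seq) // pool) * pool  # drop any ragged tail
    return [max(seq[i:i + pool]) for i in range(0, n, pool)]

print(len(max_pool_1d(range(1_000_000))))  # 100000
```

With the sequence axis reduced from one million to one hundred thousand, the embedding tensor is ten times smaller, which is the whole point of the trick.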
2018/03/20
1,406
5,926
<issue_start>username_0: I am trying to add the data from one json string to another in php, but I can't get it to work properly. These are my json strings. json\_A ``` { data: [ { date: "2018032012", p: [ { lon: -7.777, lat: 66.666, precip-intensity: 0.0625, wind-dir: 256.50015, temperature: 5.5065155, wind-speed: 9.045654, weather-symbol: 3, pressure-sealevel: 102366.94 } ] } [ } ``` json\_B ``` { data: [ { date: "2018032012", p: [ { lon: -8.888, lat: 99.999, precip-intensity: 0.0625, wind-dir: 256.50015, temperature: 5.5065155, wind-speed: 9.045654, weather-symbol: 3, pressure-sealevel: 102366.94 } ] } [ } ``` This is my desired result json\_A ``` { data: [ { date: "2018032012", p: [ { lon: -8.888, lat: 99.999, precip-intensity: 0.0625, wind-dir: 256.50015, temperature: 5.5065155, wind-speed: 9.045654, weather-symbol: 3, pressure-sealevel: 102366.94 }, lon: -7.777, lat: 66.666, precip-intensity: 0.0625, wind-dir: 256.50015, temperature: 5.5065155, wind-speed: 9.045654, weather-symbol: 3, pressure-sealevel: 102366.94 } ] } [ } ``` This is my php code: ``` $a = file_get_contents('json_A', FILE_USE_INCLUDE_PATH); $b = file_get_contents('json_B', FILE_USE_INCLUDE_PATH); $a2 = json_decode($a, true); $b2 = json_decode($b, true); $a2["data"][] = $b2; echo json_encode($a2); ``` This is what I am getting: json\_A ``` { data: [ { date: "2018032012", p: [ { lon: -7.777, lat: 66.666, precip-intensity: 0.0625, wind-dir: 256.50015, temperature: 5.5065155, wind-speed: 9.045654, weather-symbol: 3, pressure-sealevel: 102366.94 } ] }, { data: [ { date: "2018032012", p: [ { lon: -8.888, lat: 99.999, precip-intensity: 0.0625, wind-dir: 256.50015, temperature: 5.5065155, wind-speed: 9.045654, weather-symbol: 3, pressure-sealevel: 102366.94 } ] } ] } } ``` So yea I am getting the data in the other json string, but not the desired way, and I cant figure out what I am doing wrong. 
Any help is much obliged!
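For what it's worth, the desired output appends json_B's point to the inner `p` array, not the whole decoded document to `data`. In PHP that would be roughly `$a2["data"][0]["p"] = array_merge($a2["data"][0]["p"], $b2["data"][0]["p"]);` (my suggestion, not a tested answer from this thread). The same merge sketched in Python:

```python
import json

# Trimmed-down stand-ins for the two decoded JSON documents.
a = {"data": [{"date": "2018032012", "p": [{"lon": -7.777, "lat": 66.666}]}]}
b = {"data": [{"date": "2018032012", "p": [{"lon": -8.888, "lat": 99.999}]}]}

# Append b's points to a's inner "p" list instead of nesting b under "data".
a["data"][0]["p"].extend(b["data"][0]["p"])
print(json.dumps(a))
```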
2018/03/20
641
2,646
<issue_start>username_0: I have two classes (Person and Payment) and I struggle with their communication. So can you show me how I can write a method Process which checks if a given person exists? Also, I don't understand how I should take information from cmd and call the createPeople and process methods in the main method. ``` class Person { private String name; private String surname; private double salary; boolean adjustSalary(double money, String type) { if (type.equals("+")) { salary = salary + money; return true; } else if (type.equals("-")) { if (salary > 0) { salary = salary - money; return true; } } else System.out.println("Wrong adjustment type!"); return false; } } class Payment { static Person[] people; static int success = 0; static int fail = 0; static void createPeople(String[][] personInfo) { people = new Person[personInfo.length]; for (int i = 0; i < personInfo.length; i++) { Person x = new Person(); people[i] = x; } } static void process(String[] Info) { Person obj = new Person(); System.out.println(obj.adjustSalary); if (obj.adjustSalary == true) success++; else fail++; } public static void main(String[] args) { } } ```<issue_comment>username_1: You can use a process called composition. This is the idea that one object (object1) must hold an instance of another object (object2), e.g. received through its constructor; that way, in order to create object1 you must ensure that you have an object2! Lmk if this helps Upvotes: 1 <issue_comment>username_2: Your Person class must provide access to its members. You have to implement getters and setters. e.g.: ``` public void setName(String name){ this.name = name; } public String getName(){ return this.name; } public void setSurname(String surname){ this.surname = surname; } public String getSurname(){ return this.surname; } ... 
``` In your Payment class put the following method: ``` static boolean personIsKnown(Person person) { boolean found = false; for(Person _person: people){ if(_person.getName().toUpperCase().equals(person.getName().toUpperCase()) && _person.getSurname().toUpperCase().equals(person.getSurname().toUpperCase())) { found = true; break; } } return found; } ``` Whenever you now call personIsKnown() you can find out whether a person exists in your array or not. This is just for the start. Upvotes: 0
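A side note on the question's `process` method: `adjustSalary` is a method, so it must be invoked with arguments rather than read like a field (`obj.adjustSalary == true` will not compile). A minimal self-contained sketch of the corrected call (hypothetical `Demo` class with a simplified `Person`):

```java
public class Demo {
    public static class Person {
        private double salary;

        // Simplified version of the question's adjustSalary.
        public boolean adjustSalary(double money, String type) {
            if (type.equals("+")) { salary += money; return true; }
            if (type.equals("-") && salary > 0) { salary -= money; return true; }
            return false;
        }
    }

    public static void main(String[] args) {
        Person p = new Person();
        // Call the method and capture its boolean result.
        boolean ok = p.adjustSalary(100.0, "+");
        System.out.println(ok);                         // true
        System.out.println(p.adjustSalary(50.0, "-"));  // true: salary was > 0
    }
}
```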
2018/03/20
637
2,245
<issue_start>username_0: I am implementing this in Python with SQLite 3. I have the following table ‘parts’ with attributes part1, part2, and supplier. ``` part1 part2 supplier a j Foo b g Bar c d Nom a b Bar b k Bar c m Bar c l Foo ``` I want to select `part1` and `supplier` if `part1` is obtainable from more than one supplier (or not with one only supplier), independent of `part2`. The result I am looking for is: ``` a, Foo a, Bar c, Nom c, Bar c, Foo ``` or simply a,c. I don’t want b because it only occurs with Bar. I tried this: ``` co.execute("SELECT ar.part1,ar.supplier FROM parts ar, parts ts WHERE ar.part1 = ts.part1 AND ar.supplier != ts.supplier ") for row in co.fetchall(): print(row) ``` The results were pairs of `part1` and `supplier` but only included 2 suppliers, and the table has 20. Answers using case imply stating specific suppliers, such as a `part1` that occurs with supplier Bar and Foo, or Bar and Nom, but I have more than 20 suppliers.<issue_comment>username_1: You can use a process called composition. This is the idea that one object1 must have another instance of an object2 in its constructor that way in order to create object1 you must ensure that you have an object2! Lmk if this helps Upvotes: 1 <issue_comment>username_2: Your Person class must provide access to its members. You have to implement getters and setters. e.g.: ``` public void setName(String name){ this.name = name; } public String getName(){ return this.name; } setSurname(String surname){ this.surname = surname; } public String getSurname(){ return this.surname; } ... 
``` In your Payment class put the following method: ``` static boolean personIsKnown(Person person) { boolean found = false; for(Person _person: people){ if(_person.getName().toUpperCase().equals(person.getName().toUpperCase()) && _person.getSurname().toUpperCase().equals(person.getSurname().toUpperCase())) { found = true; break; } } return found; } ``` Whenever you now call personIsKnown() you can find out whether a person exists in your array or not. This is just for the start. Upvotes: 0
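One common way to express "obtainable from more than one supplier" without naming any specific supplier (my own sketch, not an answer from this thread) is a `HAVING COUNT(DISTINCT supplier) > 1` subquery. Runnable against an in-memory copy of the table with Python's built-in `sqlite3`:

```python
import sqlite3

# In-memory copy of the 'parts' table from the question.
co = sqlite3.connect(":memory:")
co.execute("CREATE TABLE parts (part1 TEXT, part2 TEXT, supplier TEXT)")
co.executemany(
    "INSERT INTO parts VALUES (?, ?, ?)",
    [("a", "j", "Foo"), ("b", "g", "Bar"), ("c", "d", "Nom"),
     ("a", "b", "Bar"), ("b", "k", "Bar"), ("c", "m", "Bar"),
     ("c", "l", "Foo")],
)

# part1 values available from more than one distinct supplier,
# each (part1, supplier) pair listed once.
rows = co.execute("""
    SELECT DISTINCT part1, supplier
    FROM parts
    WHERE part1 IN (
        SELECT part1
        FROM parts
        GROUP BY part1
        HAVING COUNT(DISTINCT supplier) > 1
    )
    ORDER BY part1, supplier
""").fetchall()
print(rows)  # b is excluded: it only ever occurs with Bar
```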
2018/03/20
377
1,195
<issue_start>username_0: I want to search for one string at the beginning of another in Python 3.x so I have ``` for pattern, responses in substrings: match = re.match(pattern, statement) if match: # etc. ``` But if one of the substrings is `'no'` this will be found not only if the first word is 'no' but also if it is `'noggin'`, for example. How can I search for the `'no'` at the start only if the word is `'no'`? Thank you<issue_comment>username_1: You need a regex [word boundary anchor](https://www.regular-expressions.info/wordboundaries.html): `\b`, which matches the boundary between word characters and non-word characters. Try `^no\b` See [my regex101 example](https://regex101.com/r/aAlx2k/1) Upvotes: 2 <issue_comment>username_2: `'^no$'` - this regular expression should work: `re.match('^no.*', string)` ``` import re mat = re.match('^no.*', 'no') print(mat) # Out: <_sre.SRE_Match object; span=(0, 2), match='no'> ``` Upvotes: 0 <issue_comment>username_3: Just search for `'no '` - it won't find the `no` in `noggin` because there is no space after it: ``` for pattern, responses in substrings: match = re.match(pattern+' ', statement) if match: # etc. ``` Upvotes: 0
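A quick sanity check of the word-boundary answer (Python 3, standard `re` module):

```python
import re

# 'no' should match only as a whole word at the start of the statement.
for statement in ("no", "no thanks", "noggin"):
    print(statement, bool(re.match(r"no\b", statement)))
# no True, "no thanks" True, noggin False
```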
2018/03/20
2,065
7,261
<issue_start>username_0: I have two Meteor calls in client events in Meteor, which I want to execute one after another. But as I debugged, the flow doesn't go the way I want it to. **client.js** ``` Meteor.call('methodCall',param1,param2,param3,function (error, result) { if (error) console.log(error.reason); Session.set("xyz",result); }); var abc=Session.get("xyz"); Meteor.call('methodCall',abc,param2,param3,function (error, result) { if (error) console.log(error.reason); console.log("result: "+result); Session.set("cdf",result); }); var pqr=Session.get("cdf"); ``` As you can see, this is the code I want to run in sequential order, i.e. one after another. But when I debugged the code I found that the order of execution is: ``` 1. Meteor will be called 3. session.get("xyz") return undefined. 4. Meteor will be called 6. session.get("cdf") return undefined. 2. session.set() will have results as value. 5. session.get() will not have any value. ``` The second meteor.call() will not execute successfully because the 1st parameter will not have any value, as step 3 executed before step 2. So is there any way I can achieve this and wait for meteor call completion to execute the next instructions?<issue_comment>username_1: One way is to reorganize your code slightly. ``` Meteor.call('methodCall',param1,param2,param3,function (error, result) { if (error) console.log(error.reason); Session.set("xyz",result); var abc=Session.get("xyz"); Meteor.call('methodCall',abc,param2,param3,function (error, result) { if (error) console.log(error.reason); console.log("result: "+result); Session.set("cdf",result); var pqr=Session.get("cdf"); }); }); ``` Upvotes: 1 <issue_comment>username_2: I have done some research on the various options for such a situation, as some others here might have faced it already, too. Option A - Nested calls in client ================================ The first and most obvious one is to do nested calls.
This means to call the next function after the result has been received in the callback. ``` // level 1 Meteor.call('methodCall', param1, param2, param3, function (error, result) { // level 2 if (error) console.log(error.reason); Session.set("xyz",result); Meteor.call('methodCall',result, param2, param3, function (error, result) { // level 3... if (error) console.log(error.reason); console.log("result: "+result); Session.set("cdf",result); }); }); ``` **Pros:** classic js way, no fancy new concepts required, the server does the complex work **Cons:** ugly, can cause confusion and sometimes hard to debug **Requires:** `Template.autorun` or `Tracker.autorun` to capture the changes from `Session` reactively. Option B - Wrap Async ===================== Many might have already found this method to be the no. 1 choice for structuring async code into sync code. Fibers (and wrapAsync, which utilizes fibers) make the code only **look** synchronous, but the nature of execution remains async. This works the same way as Promises or async/await. **Pros:** powerful when in a single environment **Cons:** not to be used with Meteor.call **Requires:** a fiber to run in ### Problem with Meteor.call However, you can't easily call a Meteor method using this feature. Consider the following code ``` const param1 = "param1"; const param2 = "param2"; const param3 = "param3"; const asyncCall = Meteor.wrapAsync(Meteor.call); const result1 = asyncCall("methodCall", param1, param2, param3); // result1 will be undefined ``` To further explain I will cite the [documentation](https://docs.meteor.com/api/methods.html#Meteor-call): > > On the client, if you do not pass a callback and you are not inside a stub, call will return undefined, and you will have no way to get the return value of the method. That is because the client doesn’t have fibers, so there is not actually any way it can block on the remote execution of a method. 
> > > Summary: `Meteor.wrapAsync` is not to be utilized together with `Meteor.call`. Option C - Bundle in one method =============================== Instead of trying to create a synced sequence of meteor calls, you could also provide all parameters and logic to a single server method that returns an object which keeps all returned values: *client.js* ``` const param1 = "param1"; const param2 = "param2"; const param3 = "param3"; Meteor.call('methodCall', param1, param2, param3, function (err, result) { const xyz = result.xyz; const cdf = result.cdf; }); ``` *server.js* ``` function _methodCall(p1, p2, p3) { // ... return result; } Meteor.methods({ 'methodCall'(p1, p2, p3) { const result1 = _methodCall(p1, p2, p3); const result2 = _methodCall(result1, p2, p3); return { xyz: result1, cdf: result2, } } }) ``` This will create a sequential execution (following the sequential logic you provided in your question) and return all its results in a bundled object. Pros: sequential as desired, one request - all results Cons: one extra method to be tested, can introduce tight coupling between methods, return objects can become large and complex to parse for the client Requires: a good sense for method design If I find other options I will add them to this post. Upvotes: 3 [selected_answer]<issue_comment>username_3: You must use a promise, for example a Future (fibers) on the server: ``` Meteor.methods({ 'methodCall': function(params...){ var future = new Future(); try{ // your code... future.return(result) }catch(e){ future.throw(e) }finally{ return future.wait(); } }, }) ``` On client ``` Meteor.call('methodCall',params...,(err,res)=>{ if(err){ console.log(err); }else{ console.log(res); } }); ``` link for ref <https://github.com/jagi/meteor-astronomy/issues/562> Upvotes: 0 <issue_comment>username_4: I'm sorry, I don't like any of those solutions. What about converting the Meteor.call callbacks to promises?
``` const meteorPromiseCall = (method: string, ...args: any[]) => new Promise((resolve, reject) => { Meteor.call(method, ...args, (err: any, res: any) => { if (err) reject(err); else resolve(res); }); }); ``` And example of use: ``` const Dashboards = () => { const [data, setData] = useState(null); const readData = async () => { // Waiting to all Meteor.calls const res = await Promise.all([ meteorPromiseCall( "reports.activitiesReport", DateTime.now().startOf("day").minus({ day: 30 }).toJSDate(), DateTime.now().startOf("day").toJSDate(), ), meteorPromiseCall( "reports.activitiesReport2", DateTime.now().startOf("day").minus({ day: 30 }).toJSDate(), DateTime.now().startOf("day").toJSDate(), ), meteorPromiseCall( "reports.activitiesReport3", DateTime.now().startOf("day").minus({ day: 30 }).toJSDate(), DateTime.now().startOf("day").toJSDate(), ), ]); setData(res[0]); }; useEffect(() => { readData(); }, []); if (!data) return Loading...; return (...) ``` Upvotes: 0
2018/03/20
618
1,712
<issue_start>username_0: Possibly there's an out-the-box method that will do all this for me! I need to get an array (to put into a select box) in the form i.e. **the expected output**: ``` array: [ "4" => "Siemens" "5" => "Dell" ] ``` Currently I'm doing (using Eloquent): `$array = $this->get(['id','manufacturer'])->toArray();` Which produces: ``` array:2 [▼ 0 => array:2 [▼ "id" => 4 "manufacturer" => "Siemens" ] 1 => array:2 [▼ "id" => 5 "manufacturer" => "Dell" ] ] ``` I'm then doing: ``` $test = []; $i=0; $key=''; $value=''; $it = new RecursiveIteratorIterator(new RecursiveArrayIterator($array)); foreach($it as $v) { $i++; if ($i % 2 == 0) { $key = $v; array_push($test,[$key=>$value]); } else { $value = $v; } } ``` Which produces: ``` array:2 [▼ 0 => array:1 [▼ 4 => "Siemens" ] 1 => array:1 [▼ 5 => "Dell" ] ] ``` Which is very close...! I'm a bit stuck on the final bit, but wondering if there's a better way to solve this altogether?<issue_comment>username_1: A bit more digging and turns out that Eloquent's `pluck` method achieves exactly this `$manufacturers->pluck('manufacturer','id');` returns: ``` Collection {#562 ▼ #items: array:2 [▼ 4 => "Siemens" 5 => "Dell" ] } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Array\_combine and array\_column can do what you need. ``` $arr =[ 0 => [ "id" => 4, "manufacturer" => "Siemens", ], 1 => [ "id" => 5, "manufacturer" => "Dell" ] ]; $res = array_combine(array_column($arr, "id"), array_column($arr, "manufacturer")); Var_dump($res); ``` <https://3v4l.org/4f495> Upvotes: 0
2018/03/20
767
2,436
<issue_start>username_0: I've tried creating a basic registration form (I'm pretty new to PHP). The form works sometimes, but most of the times it's just sending blank entries into the MySQL database. Below is the code: I have the following form: ``` Participant 1 ------------- Year 1st Year 2nd Year 3rd Year Participant 2 ------------- Year 1st Year 2nd Year 3rd Year ``` I'm sorry for the long form code. This is the PHP code to post the data to the database: ``` $servername = "localhost"; $username = "fic"; $password = "<PASSWORD>"; $dbname = "fic201718"; $conn = new mysqli($servername, $username, $password, $dbname); if ($conn->connect_error) { die("Connection failed: " . $conn->connect_error); } $name1 = $_POST['name1']; $year1 = $_POST['year1']; $phone1 = $_POST['phone1']; $college1 = $_POST['college1']; $email1 = $_POST['email1']; $name2 = $_POST['name2']; $year2 = $_POST['year2']; $phone2 = $_POST['phone2']; $college2 = $_POST['college2']; $email2 = $_POST['email2']; $sql = "INSERT INTO identitytheft (Participant1Name,Participant1Year,Participant1Phone,Participant1College,Participant1eMail,Participant2Name,Participant2Year,Participant2Phone,Participant2College,Participant2eMail) VALUES ('$name1','$year1','$phone1','$college1','$email1','$name2','$year2','$phone2','$college2','$email2')"; $conn->query($sql); if (!empty($_POST['name1'])) { echo (" alert('Successfully Registered'); "); } ``` However, the form sometimes inserts absolutely blank data into the database. It sometimes works though. One thing that I have noticed is, I do not get blank rows if there are no special characters in the responses. My columns are set to utf8\_unicode\_ci (all of the columns). Could there be something wrong here? 
Please help?
2018/03/20
405
1,402
<issue_start>username_0: I'm trying to create a window using glew but I am getting this link error. I also tried compiling the libraries myself, which didn't work either. I also made sure that glew is properly linked. Here's the code that's causing the error: ``` if (configuration.api == API::OpenGL) { static bool sGLEWInitialized; if (!sGLEWInitialized) { glfwMakeContextCurrent(handle); #if WINDOWS glewExperimental = true; auto error = glewInit(); if (error) { destroy_glfw_window(handle); throw std::runtime_error("Failed to initialize GLEW"); } #endif sGLEWInitialized = true; } } ```
2018/03/20
1,658
4,125
<issue_start>username_0: I want to get summed row values per day and per year, and show them on the same row. The database that the first and second queries get results from include a table like this (ltg\_data): ``` time lon lat geom 2018-01-30 11:20:21 -105.4333 32.3444 01010.... ``` And then some geometries that I'm joining to. One query: > > SELECT to\_char(time, 'MM/DD/YYYY') as day, count(\*) as strikes FROM counties JOIN ltg\_data on ST\_contains(counties.the\_geom, ltg\_data.ltg\_geom) WHERE cwa = 'MFR' and time >= (now() at time zone 'utc') - interval '50500 hours' group by 1; > > > Results are like: ``` day strikes 01/28/2018 22 03/23/2018 15 12/19/2017 20 12/20/2017 12 ``` Second query: > > SELECT to\_char(time, 'YYYY') as year, count(\*) as strikes FROM counties JOIN ltg\_data on ST\_contains(counties.the\_geom, ltg\_data.ltg\_geom) WHERE cwa = 'MFR' and time >= (now() at time zone 'utc') - interval '50500 hours' group by 1; > > > Results are like: ``` year strikes 2017 32 2018 37 ``` What I'd like is: ``` day daily_strikes year yearly_strikes 01/28/2018 22 2018 37 03/23/2018 15 2018 37 12/19/2017 20 2017 32 12/20/2017 12 2017 32 ``` I found that union all shows the year totals at the very bottom, but I'd like to have the results horizontally, even if there are repeat yearly totals. Thanks for any help!<issue_comment>username_1: ``` create table strikes (game_date date, strikes int ) ; insert into strikes (game_date, strikes) values ('01/28/2018', 22), ('03/23/2018', 15), ('12/19/2017', 20), ('12/20/2017', 12) ; select * from strikes ; select game_date, strikes, sum(strikes) over(partition by extract(year from game_date) ) as sum_strikes_by_year from strikes ; "2017-12-19" 20 "32" "2017-12-20" 12 "32" "2018-01-28" 22 "37" "2018-03-23" 15 "37" ``` This application of aggregation is known as "windowing" functions or analytic functions: [PostgreSQL Docs](https://www.postgresql.org/docs/9.1/static/tutorial-window.html) ``` ---- EDIT --- based on comments... 
create table strikes_tally (strike_time timestamp, lat varchar(10), long varchar(10), geom varchar(10) ) ; insert into strikes_tally (strike_time, lat, long, geom) values ('2018-01-01 12:43:00', '100.1', '50.8', '1234'), ('2018-01-01 12:44:00', '100.1', '50.8', '1234'), ('2018-01-01 12:45:00', '100.1', '50.8', '1234'), ('2018-01-02 20:01:00', '100.1', '50.8', '1234'), ('2018-01-02 20:02:00', '100.1', '50.8', '1234'), ('2018-01-02 22:03:00', '100.1', '50.8', '1234') ; select to_char(strike_time, 'dd/mm/yyyy') as strike_date, count(strike_time) over(partition by to_char(strike_time, 'dd/mm/yyyy')) as daily_strikes, to_char(strike_time, 'yyyy') as year, count(strike_time) over(partition by to_char(strike_time, 'yyyy') ) as yearly_strikes from strikes_tally ; ``` Upvotes: 0 <issue_comment>username_2: You can try this kind of approach. It's not very optimal but at least works: I have a test table like this: ``` postgres=# select * from test; d | v ------------+--- 2001-02-16 | a 2002-02-16 | a 2002-02-17 | a 2002-02-17 | a (4 rows) ``` And the query: ``` select q.year, sum(q.countPerDay) over (partition by extract(year from q.day)), q.day, q.countPerDay from ( select extract('year' from d) as year, date_trunc('day', d) as day, count(*) as countPerDay from test group by day, year ) as q ``` So the result looks like this: ``` 2001 | 1 | 2001-02-16 00:00:00 | 1 2002 | 3 | 2002-02-16 00:00:00 | 1 2002 | 3 | 2002-02-17 00:00:00 | 2 ``` Upvotes: 2 [selected_answer]
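The question itself is about Postgres, but the same day-plus-year layout can also be produced without window functions by joining a yearly aggregate back onto the daily rows. A generic sketch (my own, using the question's daily counts and Python's built-in `sqlite3` so it is easy to try):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily (day TEXT, strikes INTEGER)")
conn.executemany(
    "INSERT INTO daily VALUES (?, ?)",
    [("2018-01-28", 22), ("2018-03-23", 15),
     ("2017-12-19", 20), ("2017-12-20", 12)],
)

# Join each daily row to the total for its year.
rows = conn.execute("""
    SELECT d.day, d.strikes, y.year, y.yearly
    FROM daily AS d
    JOIN (SELECT substr(day, 1, 4) AS year, SUM(strikes) AS yearly
          FROM daily
          GROUP BY substr(day, 1, 4)) AS y
      ON substr(d.day, 1, 4) = y.year
    ORDER BY d.day
""").fetchall()
for row in rows:
    print(row)
```

Each daily row repeats its year's total, matching the "What I'd like" table in the question.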
2018/03/20
1,412
4,143
<issue_start>username_0: Suppose, you're given the following dataframe: ``` a <- data.frame(var = c(",1,2,3,", ",2,3,5,", ",1,3,5,5,")) ``` What I am looking for is to create the variables flag\_1, ..., flag\_7 in a containing the information of how many times the respective values occur. For a, I would expect the following result: ``` var flag_1 flag_2 flag_3 flag_4 flag_5 ",1,2,3," 1. 1. 1. 0. 0. ",2,3,5," 0. 1. 1. 0. 1. ",1,3,5,5," 1. 0. 1. 0. 2. ``` I managed to get the result using a nested for-loop and an if-condition but there must be a nicer (more aesthetic and better performing) solution.<issue_comment>username_1: One option would be to do `strsplit`, get the `table` and then `cbind` with original data ``` cbind(a, do.call(rbind, lapply(strsplit(as.character(a$var), ","), function(x) table(factor(x[nzchar(x)], levels = 1:5, labels = paste0("flag_", 1:5)))))) # var flag_1 flag_2 flag_3 flag_4 flag_5 #1 ,1,2,3, 1 1 1 0 0 #2 ,2,3,5, 0 1 1 0 1 #3 ,1,3,5,5, 1 0 1 0 2 ``` --- Another option is with `tidyverse` ``` library(tidyverse) str_extract_all(a$var, "[0-9]") %>% map(~ as.integer(.x) %>% as_tibble) %>% bind_rows(.id = 'grp') %>% count(grp, value = factor(value, levels = min(value):max(value))) %>% spread(value, n, drop = FALSE, fill = 0) %>% select(-grp) %>% bind_cols(a, .) %>% rename_at(vars(matches("^[0-9]+$")), ~ paste0("flag_", .)) # var flag_1 flag_2 flag_3 flag_4 flag_5 #1 ,1,2,3, 1 1 1 0 0 #2 ,2,3,5, 0 1 1 0 1 #3 ,1,3,5,5, 1 0 1 0 2 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: First, don't make the strings into factors. Nothing good comes from that. ``` a <- data.frame(var = c(",1,2,3,", ",2,3,5,", ",1,3,5,5,"), stringsAsFactors = FALSE) ``` To get from strings to your table is simple enough if we take it in small steps. Here, I've written (or renamed) a function per step and then gone through the steps using `lapply` one at a time. You can string it all together in a pipeline if like, but it would be roughly these steps. 
First, I extract the numbers from the strings. That involves splitting on commas, getting rid of empty strings, you have those because you can begin and end a string with a comma, but otherwise, that step wouldn't be necessary. Then we need to translate the strings into numbers, count how often we see each (we can do that with the `as.numeric` and `table` functions, respectively), and then it is just a question of mapping the observed counts into a table that also includes those we haven't observed. ``` pick_indices <- function(str) unlist(strsplit(str, split = ",")) remove_empty <- function(chrs) chrs[nchar(chrs) > 0] get_indices <- as.numeric to_counts <- table to_flag_vect <- function(counts, len) { vec <- rep(0, len) names(vec) <- 1:len vec[names(counts)] <- counts vec } strings <- lapply(a$var, pick_indices) cleaned <- lapply(strings, remove_empty) indices <- lapply(cleaned, get_indices) counts <- lapply(indices, to_counts) flags <- lapply(counts, to_flag_vect, len = 5) ``` We now have the flag-counts in a list, so to make it into the table you want, with the column names you want, we simply do this: ``` tbl <- do.call(rbind, flags) colnames(tbl) <- paste0("flag_", 1:5) tbl ``` Done. Upvotes: 1 <issue_comment>username_3: Split and unlist the values into a factor with appropriate levels ``` x = strsplit(a$var, ",") xp = factor(unlist(x), levels = seq_len(5)) ``` Create an index that maps the values of `xp` to the rows they came from ``` i = rep(seq_along(x), lengths(x)) ``` use `xtabs()` to cross-tabulate the entries by row ``` xt = xtabs(~ i + xp) ``` and `cbind()` the matrix representation of the result to the original ``` > cbind(a, unclass(xt)) var 1 2 3 4 5 1 ,1,2,3, 1 1 1 0 0 2 ,2,3,5, 0 1 1 0 1 3 ,1,3,5,5, 1 0 1 0 2 ``` Upvotes: 0
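Language aside, all three answers boil down to the same pipeline: split on commas, drop the empty pieces, then tally each value against the fixed levels 1 to 5. A quick Python sketch of that logic producing the expected flag table (illustration only; the thread itself is about R):

```python
rows = [",1,2,3,", ",2,3,5,", ",1,3,5,5,"]

flags = []
for s in rows:
    vals = [int(tok) for tok in s.split(",") if tok]  # split, drop empties
    flags.append([vals.count(level) for level in range(1, 6)])

for row in flags:
    print(row)
# [1, 1, 1, 0, 0], [0, 1, 1, 0, 1], [1, 0, 1, 0, 2]
```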
2018/03/20
764
2,684
<issue_start>username_0: I'm a bit new to laravel and trying to do a simple thing: just trying to select multiple rows with eloquent, so I tried: ``` php namespace App\Http\Controllers; use Auth; use App\Company; use App\User; use Illuminate\Support\Facades\View; use Model; class BaseController extends Controller { public function __construct() { //$companies = Company::find(1); //$companies = Company::all(); $companies = Company::where('owner_id', Auth::user()->id); print_r($cpm); View::share ( 'companies', '$companies' ); } } ``` But I always get this error: > > ErrorException > > > Trying to get property of non-object in BaseController.php (line 16) > > > And the 2 commented lines above are working fine, so I'm a bit lost? Thanks, Nicolas<issue_comment>username_1: ``` public function __construct() { //$companies = Company::find(1); //$companies = Company::all(); $companies = Company::where('owner_id', Auth::user()->id); print_r($cpm); View::share ( 'companies', '$companies' ); } ``` This piece: ``` $companies = Company::where('owner_id', Auth::user()->id); ``` Needs to change into this: ``` $companies = Company::where('owner_id', Auth::user()->id)->get(); ``` The get makes sure your sql gets run, and the output is returned to $companies. And I believe ``` View::share ( 'companies', '$companies' ); ``` needs to be: ``` View::share ( 'companies', $companies ); ``` resulting in: ``` public function __construct() { //$companies = Company::find(1); //$companies = Company::all(); $companies = Company::where('owner_id', Auth::user()->id)->get(); print_r($cpm); View::share ( 'companies', $companies ); } ``` Upvotes: 2 <issue_comment>username_2: The `where()` method returns a [Builder](https://laravel.com/api/5.4/Illuminate/Database/Eloquent/Builder.html#method_where) object and not the result of the query. You need to call the [`get()` method](https://laravel.com/api/5.4/Illuminate/Database/Eloquent/Builder.html#method_get) in order to get an exploitable Collection. 
Upvotes: 0 <issue_comment>username_3: You are trying to get the ID of a logged-in user when no user is logged in. So you should check if a user is logged in. I advise you to use a [middleware](https://laravel.com/docs/5.6/middleware). You can also check if the user is logged in using: ``` if (Auth::check()) { $companies = Company::where('owner_id', Auth::user()->id)->get(); } ``` Read this for more information about Authentication: <https://laravel.com/docs/5.6/authentication> Upvotes: 1
2018/03/20
625
2,185
<issue_start>username_0: I am attempting to select just 2 distinct columns to determine the records that are shown in my query. The column userid is capable of owning several houses, which means userid can currently be present multiple times. However, I only care about the specific colors of a house, so I'd like the userid column to be distinct, along with the House column, while the rest of the columns can remain whatever is within that row. ``` Select UserID, House, NumOfPpl, NumOfCars from people ``` Results: ``` userID House NumOfPpl NumOfCars ----------------------------------- 1a red 3 2 1a blue 1 1 1a red 5 4 1a green 2 3 1a blue 1 3 2a red 3 3 3a green 4 6 3ab red 2 1 3ab red 5 5 3ab blue 2 1 ``` Would need to be: ``` userID House NumOfPpl NumOfCars ---------------------------------- 1a red 3 2 1a blue 1 1 1a green 2 3 2a red 3 3 3a green 4 6 3ab red 2 1 3ab blue 2 1 ``` I have used a CTE to get rid of duplicate userids, but how can I get rid of duplicate houses within userids? ``` ;with cte AS ( select userid, house, numofppl, numofcars, row_number() OVER(partition by userID order by house) AS rowcounter FROM people ) SELECT userid, house, numofppl, numofcars from cte WHERE rowcounter = 1 ```<issue_comment>username_1: Put the values in the `partition by` that you want to be unique. So, I think you want `userID, house` there. The `order by` doesn't make a difference: ``` with cte AS ( select p.*, row_number() over (partition by userID, house order by house) AS seqnum from people p ) select userid, house, numofppl, numofcars from cte where seqnum = 1; ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This query would work. > > SELECT \* FROM people > WHERE house = ANY (SELECT DISTINCT house FROM people); > > > Try this; I guess this should work. Upvotes: 0
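For intuition about what the accepted answer's `ROW_NUMBER() ... PARTITION BY userID, house` filter does, the keep-first-row-per-partition logic can be sketched procedurally. This is an illustrative Python walk-through of the question's sample data, not a replacement for the SQL:

```python
# Keep only the first row seen for each (userID, house) pair -- the
# procedural equivalent of filtering on seqnum = 1 in the answer's CTE.
rows = [
    ("1a", "red", 3, 2), ("1a", "blue", 1, 1), ("1a", "red", 5, 4),
    ("1a", "green", 2, 3), ("1a", "blue", 1, 3), ("2a", "red", 3, 3),
    ("3a", "green", 4, 6), ("3ab", "red", 2, 1), ("3ab", "red", 5, 5),
    ("3ab", "blue", 2, 1),
]

seen = set()
deduped = []
for user_id, house, num_ppl, num_cars in rows:
    if (user_id, house) not in seen:   # this row has seqnum 1 in its partition
        seen.add((user_id, house))
        deduped.append((user_id, house, num_ppl, num_cars))

print(len(deduped))   # 7 rows, matching the expected output above
```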
2018/03/20
2,131
6,588
<issue_start>username_0: I'm trying to create an animated plot using `matplotlib`. It works as expected when I'm using integers for the `X` values: ``` #!/usr/bin/env python import os import random import numpy as np from datetime import datetime as dt, timedelta from collections import deque import matplotlib.pyplot as plt # $ pip install matplotlib import matplotlib.animation as animation %matplotlib notebook npoints = 30 x = deque([0], maxlen=npoints) y = deque([0], maxlen=npoints) fig, ax = plt.subplots() [line] = ax.plot(x, y) def get_data(): t = random.randint(-100, 100) return t * np.sin(t**2) def data_gen(): while True: yield get_data() def update(dy): x.append(x[-1] + 1) y.append(dy) line.set_data(x, y) ax.relim() ax.autoscale_view(True, True, True) return line, ax plt.rcParams['animation.convert_path'] = 'c:/bin/convert.exe' ani = animation.FuncAnimation(fig, update, data_gen, interval=500, blit=True) #ani.save(os.path.join('C:/','temp','test.gif'), writer='imagemagick', fps=30) plt.show() ``` This produces the following animation: [![enter image description here](https://i.stack.imgur.com/UmbNO.gif)](https://i.stack.imgur.com/UmbNO.gif) However, as soon as I try to use `datetime` values as `x` values, the plot is empty: ``` npoints = 30 x = deque([dt.now()], maxlen=npoints) # NOTE: `dt.now()` y = deque([0], maxlen=npoints) fig, ax = plt.subplots() [line] = ax.plot(x, y) def get_data(): t = random.randint(-100, 100) return t * np.sin(t**2) def data_gen(): while True: yield get_data() def update(dy): x.append(dt.now()) # NOTE: `dt.now()` y.append(dy) line.set_data(x, y) ax.relim() ax.autoscale_view(True, True, True) return line, ax plt.rcParams['animation.convert_path'] = 'c:/bin/convert.exe' ani = animation.FuncAnimation(fig, update, data_gen, interval=1000, blit=True) #ani.save(os.path.join('C:/','temp','test.gif'), writer='imagemagick', fps=30) plt.show() ``` What am I doing wrong? 
PS I'm using `matplotlib` version: `2.1.2`<issue_comment>username_1: The code from the question runs fine for me in matplotlib 2.2.0 in a Jupyter notebook (`%matplotlib notebook`). It does fail however using any of the following backends when run as script: Qt4Agg, Qt4Cairo, TkAgg, TkCairo. I would hence suspect that @M.F.'s comment above is indeed true and that date2num conversion is necessary. This is what the following code does, apart from getting rid of the blitting, which is not useful in the case where the axes itself has to be drawn as well. ``` import random import numpy as np from datetime import datetime as dt, timedelta from collections import deque import matplotlib.pyplot as plt import matplotlib.dates as mdates import matplotlib.animation as animation npoints = 30 x = deque([mdates.date2num(dt.now())], maxlen=npoints) # NOTE: `dt.now()` y = deque([0], maxlen=npoints) fig, ax = plt.subplots() [line] = ax.plot_date(x, y, ls="-", marker="") def get_data(): t = random.randint(-100, 100) return t * np.sin(t**2) def data_gen(): while True: yield get_data() def update(dy): x.append(mdates.date2num(dt.now())) # NOTE: `dt.now()` y.append(dy) line.set_data(x, y) ax.relim() ax.autoscale_view(True, True, True) ani = animation.FuncAnimation(fig, update, data_gen, interval=1000) #ani.save("anidates.gif", writer='imagemagick', fps=30) plt.show() ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Using pandas, you could register a converter (by calling `register_matplotlib_converters()`) to tell matplotlib how to handle `datetime.datetime` objects when `line.set_data` is called so that you do not have to call `date2num` on each value yourself: ``` import datetime as DT import collections import random import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation import matplotlib.dates as mdates import pandas.plotting as pdplt pdplt.register_matplotlib_converters() npoints = 30 x = collections.deque([DT.datetime.now()], 
maxlen=npoints) y = collections.deque([0], maxlen=npoints) fig, ax = plt.subplots() [line] = ax.plot(x, y) # Not necessary, but offers more control over the format xfmt = mdates.DateFormatter('%H:%M:%S') ax.xaxis.set_major_formatter(xfmt) def get_data(): t = random.randint(-100, 100) return t * np.sin(t**2) def data_gen(): while True: yield get_data() def update(dy): x.append(DT.datetime.now()) y.append(dy) line.set_data(list(x), y) ax.relim() ax.autoscale_view() # Not necessary, but it rotates the labels, making them more readable fig.autofmt_xdate() return [line] ani = animation.FuncAnimation(fig, update, data_gen, interval=1000, blit=False) plt.show() ``` Tested with matplotlib version 2.2.0, backends TkAgg, Qt4Agg, Qt5Agg, GTK3Agg, and GTK3Cairo. --- `matplotlib.units` maintains a registry of converters which it uses to [convert non "numlike" values](https://github.com/matplotlib/matplotlib/blob/master/lib/matplotlib/axis.py#L1506) to plottable values. ``` In [91]: import matplotlib.units as munits In [92]: munits.registry Out[92]: {numpy.str_: , numpy.bytes\_: , str: , bytes: } ``` High-level plot functions like `plt.plot` handle `datetime`s automatically, but lower-level methods like `line.set_data` do not. So if we want to make an animation which uses `datetime` objects, and we do not wish to call `date2num` manually on each value, then we could instead [register a converter](https://matplotlib.org/examples/units/evans_test.html). If we have `pandas` installed, then instead of writing a converter from scratch, we could use `pandas.plotting.register_matplotlib_converters`, which teaches matplotlib to handle (among other things) *lists* of `datetime.datetime` objects. 
``` In [96]: import pandas.plotting as pdplt In [97]: pdplt.register_matplotlib_converters() In [98]: munits.registry Out[98]: {datetime.datetime: , numpy.str_: , numpy.bytes_: , pandas._libs.tslibs.timestamps.Timestamp: , str: , numpy.datetime64: , datetime.date: , datetime.time: , bytes: , pandas._libs.tslibs.period.Period: } ``` Unfortunately, [`DatetimeConverter`](https://github.com/pandas-dev/pandas/blob/master/pandas/plotting/_converter.py#L322) does not handle *deque*s of `datetime.datetime` objects. To get around this little roadblock, call ``` line.set_data(list(x), y) ``` instead of ``` line.set_data(x, y) ``` Upvotes: 1
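Both answers keep only the newest `npoints` samples by using a bounded `collections.deque` as a rolling window; that part of the pattern can be sketched with the standard library alone (small numbers here so the effect is visible):

```python
from collections import deque
from datetime import datetime, timedelta

npoints = 5
start = datetime(2018, 3, 20)
x = deque([start], maxlen=npoints)

# Appending past maxlen silently drops the oldest timestamp from the left,
# which is what keeps the animated window scrolling.
for second in range(1, 8):
    x.append(start + timedelta(seconds=second))

print(len(x))        # still 5
print(x[0].second)   # 3 -- the oldest surviving sample
```

Note that the second answer passes `list(x)` to `line.set_data` precisely because the registered pandas converter accepts lists of datetimes but not deques.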
2018/03/20
899
3,910
<issue_start>username_0: What is meant by an activation function in machine learning? I have gone through most of the articles and videos; everyone states it or compares it with neural networks. I'm a newbie to machine learning and not that familiar with deep learning and neural networks. So, can anyone explain to me what exactly an activation function is, instead of explaining it with neural networks? I got stuck on this ambiguity while learning the sigmoid function for logistic regression.<issue_comment>username_1: Activation functions are really important for an Artificial Neural Network to learn and make sense of something really complicated: non-linear, complex functional mappings between the inputs and the response variable. They introduce non-linear properties to our Network. Their main purpose is to convert an input signal of a node in an A-NN to an output signal. That output signal is then used as an input in the next layer in the stack. Specifically, in an A-NN we take the sum of products of inputs (X) and their corresponding Weights (W) and apply an Activation function f(x) to it to get the output of that layer and feed it as an input to the next layer. [More info here](https://towardsdatascience.com/activation-functions-and-its-types-which-is-better-a9a5310cc8f) Upvotes: 0 <issue_comment>username_2: It's rather difficult to describe activation functions without *some* reference to automated learning, because that's exactly their application, as well as the rationale behind a collective term. They help us focus learning in a stream of functional transformations. I'll try to reduce the complexity of the description. Very simply, an [activation function](https://en.wikipedia.org/wiki/Activation_function) is a filter that alters an output signal (series of values) from its current form into one we find more "active" or useful for the purpose at hand. For instance, a very simple activation function would be a cut-off score for college admissions. 
My college requires a score of at least 500 on each section of the SAT. Thus, any applicant passes through this filter: if they don't meet that requirement, the "admission score" is dropped to zero. This "activates" the other candidates. Another common function is the sigmoid you studied: the idea is to differentiate the obviously excellent values (map them close to 1) from obviously undesirable values (map them close to -1), and preserve the ability to discriminate or learn about the ones in the middle (map them to something with a gradient useful for further work). A third type might accentuate differences at the top end of a spectrum -- say, football goals and assists. In trying to judge relative levels of skill between players, we have to consider: is the difference between 15 and 18 goals in a season the same as between 0 and 3 goals? Some argue that the larger numbers show a greater differentiation in scoring skill: the more you score, the more opponents focus to stop you. Also, we might want to consider that there's a little "noise" in the metric: the first two goals in a season don't really demonstrate much. In this case, we might choose an activation function for goals `g` such as ``` 1.2 ^ max(0, g-2) ``` This evaluation would then be added to other factors to obtain a metric for the player. Does this help explain things for you? Upvotes: 4 [selected_answer]<issue_comment>username_3: Simply put, an activation function is a function that is added into an artificial neural network in order to help the network learn complex patterns in the data. When comparing with a neuron-based model that is in our brains, the activation function is at the end deciding what is to be fired to the next neuron. That is exactly what an activation function does in an ANN as well. It takes in the output signal from the previous cell and converts it into some form that can be taken as input to the next cell. Upvotes: 0
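The three filters described in the accepted answer can be written out numerically. A sketch — the exact constants and shapes here are illustrative choices, not canonical definitions:

```python
import math

def admission_filter(score, cutoff=500):
    # Hard cut-off: anything below the threshold is "deactivated" to zero.
    return score if score >= cutoff else 0

def sigmoid(z):
    # Logistic sigmoid: squashes any input into (0, 1), with the steepest
    # (most learnable) gradient around the undecided middle.
    return 1.0 / (1.0 + math.exp(-z))

def goal_activation(g):
    # The answer's example 1.2 ^ max(0, g - 2): the first two goals are
    # treated as noise, and differences grow toward the top of the scale.
    return 1.2 ** max(0, g - 2)

print(admission_filter(480))                      # 0 -- filtered out
print(round(sigmoid(0), 2))                       # 0.5 -- undecided middle
print(goal_activation(2))                         # 1.0 -- first two goals add nothing
print(goal_activation(18) - goal_activation(15))  # a 3-goal gap at the top...
print(goal_activation(5) - goal_activation(2))    # ...counts for more than at the bottom
```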
2018/03/20
1,200
3,739
<issue_start>username_0: In a child-parent table, I need to aggregate all parents for each child. I can readily get children per parent in a CTE query, but can't figure out how to reverse it ([sqlfiddle here](http://sqlfiddle.com/#!17/bb52c/1)). Given this: ``` CREATE TABLE rel( child integer, parent integer ); INSERT INTO rel(child, parent) VALUES (1,NULL), (2,1), (3,1), (4,3), (5,2), (6,4), (7,2), (8,7), (9,8); ``` a query that will return an array of parents (order is not important): ``` 1, {NULL} 2, {1} 3, {1} 4, {3,1} 5, {2,1} 6, {4,3,1} 7, {2,1} 8, {7,2,1} 9, {8,7,2,1} ```<issue_comment>username_1: For this you *can* create a PL. I did something similar; here is my PL that handles any father-son structure. It returned a table, but for your case I changed it a little: ``` DROP FUNCTION IF EXISTS ancestors(text,integer,integer); CREATE OR REPLACE FUNCTION ancestors( table_name text, son_id integer,-- the id of the son you want its ancestors ancestors integer)-- how many ancestors you want. 0 for every ancestor. 
RETURNS integer[] AS $$ DECLARE ancestors_list integer[]; father_id integer:=0; query text; row integer:=0; BEGIN LOOP query:='SELECT child, parent FROM '||quote_ident(table_name) || ' WHERE child='||son_id; EXECUTE query INTO son_id,father_id; RAISE NOTICE 'son:% | father: %',son_id,father_id; IF son_id IS NOT NULL THEN ancestors_list:=array_append(ancestors_list,father_id); son_id:=father_id; ELSE ancestors:=0; father_id:=0; END IF; IF ancestors=0 THEN EXIT WHEN father_id IS NULL; ELSE row:=row+1; EXIT WHEN ancestors<=row; END IF; END LOOP; RETURN ancestors_list; END; $$ LANGUAGE plpgsql; ``` Once the PL is created, to get wat you want just query: ``` SELECT *,ancestors('rel',child,0) from rel ``` This returns: ``` child | parent | ancestors ------+--------+----------------- 1 | NULL | {NULL} 2 | 1 | {1,NULL} 3 | 1 | {1,NULL} 4 | 3 | {3,1,NULL} 5 | 2 | {2,1,NULL} 6 | 4 | {4,3,1,NULL} 7 | 2 | {2,1,NULL} 8 | 7 | {7,2,1,NULL} 9 | 8 | {8,7,2,1,NULL} ``` If you don't want the NULL to appear, just update the PL ;) Upvotes: 2 <issue_comment>username_2: Even if there is an accepted answer, I would like to show how the problem can be solved in pure SQL in a much simpler way, with a *recursive CTE:* ``` WITH RECURSIVE t(child, parentlist) AS ( SELECT child , ARRAY[]::INTEGER[] FROM rel WHERE parent IS NULL UNION SELECT rel.child, rel.parent || t.parentlist FROM rel JOIN t ON rel.parent = t.child ) SELECT * FROM t; child | parentlist -------+------------ 1 | {} 2 | {1} 3 | {1} 4 | {3,1} 5 | {2,1} 7 | {2,1} 6 | {4,3,1} 8 | {7,2,1} 9 | {8,7,2,1} (9 rows) ``` If you insist on having a singleton `{NULL}` for children with an empty list of parents, just say ``` SELECT child, CASE WHEN CARDINALITY(parentlist) = 0 THEN ARRAY[NULL]::INTEGER[] ELSE parentlist END FROM t; ``` instead of `SELECT * FROM t`, but frankly, I don’t see why you should. A final remark: I am not aware of any efficient way to do this with relational databases, either in pure SQL or with procedural languages. 
The point is that `JOIN`s are inherently expensive, and if you have really large tables, your queries will take lots of time. You can mitigate the problem with indexes, but the best way to tackle this kind of problem is by using graph databases rather than an RDBMS. Upvotes: 4 [selected_answer]
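For intuition, the walk the recursive CTE performs can be traced in plain Python against the question's sample data (a sketch only — the SQL above is the answer):

```python
# child -> parent lookup built from the INSERT in the question.
rel = {1: None, 2: 1, 3: 1, 4: 3, 5: 2, 6: 4, 7: 2, 8: 7, 9: 8}

def parent_list(child):
    # Follow child -> parent links until the root (parent IS NULL),
    # accumulating ancestors nearest-first, the same way the CTE's
    # `rel.parent || t.parentlist` builds up each array.
    parents = []
    parent = rel[child]
    while parent is not None:
        parents.append(parent)
        parent = rel[parent]
    return parents

for c in sorted(rel):
    print(c, parent_list(c))   # e.g. 9 -> [8, 7, 2, 1], matching {8,7,2,1}
```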
2018/03/20
458
1,823
<issue_start>username_0: We are using a tool in which we need to delete log tables. Right now, we are keeping 1 million rows and deleting the rest of the rows, if any. This is time consuming and in production sometimes takes 12 hours to delete the data, which affects daily transactions. Is there any other way to delete log tables efficiently, without affecting daily transactions? Let's suppose we want to keep 1 million rows: ``` Select query: select min(date) from (select date from table order by date desc) where rownum <= 1000000 Delete query: Delete from table where date > (result of select query) ``` Is there any way we can optimize these two queries?<issue_comment>username_1: Inserts into a smaller table are much faster than deletions from a larger table. In this case you can insert the records you want to keep into a staging table. **If logging is not a concern** and **referential integrity will allow it**, you could simply: 1. Create a staging table that is an exact copy of your log table. 2. Insert the records you intend to keep from your log table into your staging table. 3. Drop your log table. 4. Rename your staging table to log. Upvotes: 2 <issue_comment>username_2: You want to delete ten thousand rows at a time. ``` delete top (10000) from tableA where condition1 <> xyz while (@@rowcount > 0) begin delete top (10000) from tableA where condition1 <> xyz end delete from tableA where condition1 <> xyz ``` This way you won't have a large transaction log. You may want to experiment with the number of rows (some people go with 1000 rows), but it's very dependent on the amount of activity on your machine, the speed of your drives, and the placement and configuration of your log files. You said you want to keep certain rows, so I added condition1 <> xyz. Upvotes: 1
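The control flow of the second answer's chunked delete — small transactions repeated until nothing is left to remove — can be sketched outside SQL. Python here, purely illustrative; the batch size and the keep-predicate are the knobs to tune:

```python
def delete_in_batches(rows, keep, batch_size=100):
    # Mimic: DELETE TOP (batch_size) ... WHERE <not keep> in a loop,
    # stopping when a pass removes nothing (@@rowcount = 0).
    batches = 0
    while True:
        doomed = [r for r in rows if not keep(r)][:batch_size]
        if not doomed:
            break
        for r in doomed:
            rows.remove(r)   # each pass stands in for one small transaction
        batches += 1
    return batches

log = list(range(250))                               # 250 fake log rows
n = delete_in_batches(log, keep=lambda r: r >= 240)  # keep the newest 10
print(n, len(log))   # 3 batches, 10 rows kept
```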
2018/03/20
735
2,353
<issue_start>username_0: I am trying to achieve the following: ``` IMG TEXT IMG TEXT ``` Here is what I have so far: ``` var ws_ftr = data.ws_ftr.records; console.log(JSON.stringify(ws_ftr)); jQuery.each(ws_ftr, function(index, ftr) { jQuery('.carousel-inner').append(''); jQuery('.item').append(''); jQuery('.container-fluid').append(''); jQuery('.row').append('![feature-'+ftr[2]+'](img/features_sliding/'+ftr[3] +')'+ftr[2]+' ---------- '+ftr[1]+''); ``` Which gives me this, I stopped at the first major error because I believe the others will be corrected once I fix it... ``` IMAGE TEXT \*\*IMG TEXT\*\* ``` This should not be giving me the col-md-3 and col-md-9 for each loop, but instead should be giving me the entire item block. I am fairly new to jQuery/JavaScript and am learning as I go. Can anyone explain to me what I have done wrong and the best way to correct it? Thanks so much!<issue_comment>username_1: Just noted that in your jQuery you never closed any of your divs. So, continuing with your code, it can be completed like so: ``` var ws_ftr = data.ws_ftr.records; console.log(JSON.stringify(ws_ftr)); $.each(ws_ftr, function(index, ftr) { $('.carousel-inner').append(''); $('.item').append(''); $('.container-fluid').append(''); $('.row').append('![feature-'+ftr[2]+'](img/features_sliding/'+ftr[3] +')'+ftr[2]+' ---------- '+ftr[1]+''); $('.container-fluid').append(''); $('.item').append(''); $('.carousel-inner').append(''); ); ``` I believe this can also be completed this way: ``` var ws_ftr = data.ws_ftr.records; console.log(JSON.stringify(ws_ftr)); $.each(ws_ftr, function(index, ftr) { $('.carousel-inner').append(''); $('.item').append(''); $('.container-fluid').append(''); $('.row').append('![feature-'+ftr[2]+'](img/features_sliding/'+ftr[3] +')'+ftr[2]+' ---------- '+ftr[1]+''); ); ``` Upvotes: -1 <issue_comment>username_2: 1. Instead of multiple `.append()` calls, do everything in a single `.append()`. 2. Close all the divs that you started. 
``` jQuery('.carousel-inner').append('![feature-'+ftr[2]+'](img/features_sliding/'+ftr[3] +')'+ftr[2]+' ---------- '+ftr[1]+''); ``` Note: it seems that the non-closed divs are creating the issue. Upvotes: 1 [selected_answer]
2018/03/20
777
2,346
<issue_start>username_0: I am trying to use Python3 and Pandas to shape a dataframe. My current frame looks like this: ```html .tg {border-collapse:collapse;border-spacing:0;} .tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;} .tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;} .tg .tg-baqh{text-align:center;vertical-align:top} | | Col1 | Col2 | Col3 | | --- | --- | --- | --- | | 0 | X | Y | | | 1 | | Z | B | ``` I am trying to drop the row if the second column is blank (so drop row 2, index 1, only here), but the name of the second column can change depending on the file being used, so I'm attempting to use ix but to no avail... ``` c = df.ix[:,1] df.dropna(subset=c, how='all', inplace=True) ``` Any advice?
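A sketch of how the drop can be done without hard-coding the column name: resolve the target column positionally via `df.columns` and pass it to `dropna`. This assumes the blanks are NaN/None (if they are empty strings, convert them first, e.g. with `df.replace('', float('nan'))`), and uses position 0 to match where the blank sits in the sample data — adjust the index to whichever column counts as "second" in your file. Note that `ix` is deprecated in later pandas versions in favor of `iloc`/`columns`:

```python
import pandas as pd

# The question's frame: row 0 is blank in Col3, row 1 is blank in Col1.
df = pd.DataFrame({"Col1": ["X", None],
                   "Col2": ["Y", "Z"],
                   "Col3": [None, "B"]})

# Resolve the target column by position instead of by name, so the code
# survives renames; here position 0 (Col1) is what is blank in row index 1.
target = df.columns[0]
cleaned = df.dropna(subset=[target])

print(cleaned.shape)   # only row index 0 survives
```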
2018/03/20
631
2,144
<issue_start>username_0: I have integrated froala editor on my HTML page. I need to fetch the data I have written inside the `div` with its HTML properties. Below is my code: **html** ``` click The editor can use `BR` tags. When ENTER key is hit, a `BR` tag is inserted. ``` **js code** ``` $('div#froala-editor-br').froalaEditor({ enter: $.FroalaEditor.ENTER_BR }); $("button").click(function(){ var chk = $('#froala-editor-br').html(); alert(chk); }); ``` And here is the working [jsfiddle](https://jsfiddle.net/18svg452/17/). If you press the `click` button, it is fetching the froala code, not the code I am entering.<issue_comment>username_1: Just update the selector in the one line inside the button click function to: ``` var chk = $('#froala-editor-br .fr-element').html(); ``` --- ES6 Update ========== Here's the ES6 version of the same code using `querySelector`: ``` const chk = document.querySelector('#froala-editor-br .fr-element').innerHTML; ``` Upvotes: 2 <issue_comment>username_2: Just replace your code with the below script. ``` $("button").click(function(){ var chk = $('#froala-editor-br').froalaEditor('html.get'); alert(chk); }); ``` Here is the documentation link: [`getHTML`](https://www.froala.com/wysiwyg-editor/examples/getHTML) Upvotes: 2 <issue_comment>username_3: The froala editor adds its own `div`s dynamically and wraps many elements. If you want only the text inside the resulting editor, you need to change your selector. So, your selector should be `$('#froala-editor-br .fr-view')` instead. As in: ``` $("button").click(function() { var chk = $('#froala-editor-br .fr-view').text(); alert(chk); }); ``` As mentioned in the comments, @Smit Raval's answer uses the API for the froala editor and it seems like a better option to use that instead. 
Upvotes: 3 [selected_answer]<issue_comment>username_4: Pure JS code ``` let button = document.querySelector("button"); let div = document.getElementById("froala-editor-br"); button.addEventListener("click", function(e){ alert(div.innerHTML); }); ``` Upvotes: 1
2018/03/20
638
2,443
<issue_start>username_0: I'm doing a WPF application with a *UserControl* with a *TextBlock* element. The content of this element depends on an enum in the view model (Success, Pending, Error etc.). Here are some examples of the different states of the TextBlock: **Example 1 - Simple** ``` Please wait ``` **Example 2 - With hyperlink** ``` Searching for item. Link to details ``` **Example 3 - With linebreak** ``` The content has been uploaded The item is not ready to use ``` What is the best approach for changing the content of this element dynamically depending on the state of my enum in the view model? If I bind the Text property to a string in my view model, I don't think that I'm able to insert child elements like *Hyperlink*, *LineBreak* etc. What options do I have?<issue_comment>username_1: You can use a Label as host and set the template based on a trigger: ``` <Style.Triggers> <DataTrigger Binding="{Binding YourEnum}" Value="something"> <Setter Property="Template"> <Setter.Value> <ControlTemplate> <TextBlock> Please wait </TextBlock> </ControlTemplate> </Setter.Value> </Setter> </DataTrigger> <DataTrigger Binding="{Binding YourEnum}" Value="somethingElse"> <Setter Property="Template"> <Setter.Value> <ControlTemplate> <TextBlock> Searching for item. 
<Hyperlink Command="{Binding DetailsCommand}">Link to details</Hyperlink> </TextBlock> </ControlTemplate> </Setter.Value> </Setter> </DataTrigger> <DataTrigger Binding="{Binding YourEnum}" Value="else"> <Setter Property="Template"> <Setter.Value> <ControlTemplate> <TextBlock> The content has been uploaded<LineBreak /> The item is not ready to use </TextBlock> </ControlTemplate> </Setter.Value> </Setter> </DataTrigger> </Style.Triggers> ``` Upvotes: 2 <issue_comment>username_2: I would implement it with TemplateSelector: ``` public class TemplSelector : DataTemplateSelector { public override DataTemplate SelectTemplate(object item, DependencyObject container) { var element = container as FrameworkElement; if (element != null && item != null) { var vm = (ViewModel)item; if (vm.YourEnum == 1) return element.FindResource("templ1") as DataTemplate; else if (vm.YourEnum == 0) return element.FindResource("templ0") as DataTemplate; } return null; } } Please wait Searching for item. Link to details ``` Upvotes: 3 [selected_answer]
2018/03/20
284
1,048
<issue_start>username_0: I am looking for a way to change the CSS of the focused element. ``` input.on("select2-focus", function(elem) { console.log('FOCUSING'); $(this).css('background-color', 'yellow'); }); ``` What I am trying to do is make the clicked, focused element have a yellow background. But this is not working; it does not do anything.<issue_comment>username_1: Check once with this code: ``` $(document).on("focus", ".select2-focus", function() { console.log('FOCUSING'); $(this).css('background-color', 'yellow'); }); ``` Upvotes: 0 <issue_comment>username_2: That's because Select2 will hide the original input and create a new component on your page. Your code targets the original input, which is hidden, and styling it won't make any difference. But you can get the select2 container from the input, like this: ``` input.on("select2-focus", function(elem) { var $container = $(this).data('select2').$container; $container.css('background-color', 'yellow'); }); ``` Upvotes: 2 [selected_answer]
2018/03/20
712
2,415
<issue_start>username_0: I was trying to display the status of processing to the user on the front end when using *StreamingHttpResponse*. I was able to get the current status, but it is being appended to the previous one. I want the response template to contain only the current yield. > > views.py > > > ``` from django.shortcuts import render from django.http import StreamingHttpResponse,HttpResponse import time def f1(): x = 0 while x<5: time.sleep(1) x = x+1 code = """{} """.format(x) yield code def home(request): return StreamingHttpResponse(f1()) ``` > > output in the browser > > > ``` 1 2 3 4 ``` > > expected output > > > 1st: `1` 2nd: `2` instead of `1 2` 3rd: `3` instead of `1 2 3` 4th: `4` instead of `1 2 3 4` Instead of appending to the previous yield, I want the template to be filled with the current yield only.<issue_comment>username_1: ``` def f1(): x = 1 while x < 5: time.sleep(1) x += 1 s = str() for i in range(1,x): s+= """{} """.format(str(i)) code = s yield code ``` I think like this... ``` def home(request): for z in f1(): return StreamingHttpResponse(z) ``` Upvotes: -1 <issue_comment>username_2: You can't do it this way. ``` def home(request): return StreamingHttpResponse(f1()) ``` A `StreamingHttpResponse` means you want to stream data slowly instead of in one go. Now, once you have placed the `1` on the stream, you can't call it back and make it vanish. So you have a few options on how to get it to work. 
**AJAX** You can make an AJAX call from the page which fetches the latest status, and update the page using JavaScript. For this you could reuse your existing setup with [jQuery read AJAX stream incrementally?](https://stackoverflow.com/questions/7740646/jquery-read-ajax-stream-incrementally) and then display only the last line, though I won't recommend this. The better option is to make an AJAX method which only returns the current status. **Socket.io** You can use [django-socketio](https://github.com/stephenmcd/django-socketio) for the same. **Django Channels** You can use [django-channels](https://channels.readthedocs.io/en/latest/). But adding `sockets` and `channels` would be added complexity for your problem. So you should try to solve your problem with pure AJAX. Upvotes: 4 [selected_answer]
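The accepted answer's core point — chunks already written to the stream cannot be recalled, the browser just concatenates them — is easy to see by consuming the question's generator directly (a sketch; no Django required, and the `sleep` is dropped for brevity):

```python
def f1():
    # The question's generator: yields "1 ", "2 ", ... "5 " one by one.
    x = 0
    while x < 5:
        x = x + 1
        yield "{} ".format(x)

# StreamingHttpResponse writes each yielded chunk to the socket in turn;
# the page renders their concatenation, hence "1 2 3 ..." piling up.
body = "".join(f1())
print(repr(body))   # '1 2 3 4 5 '

# A polling AJAX endpoint would instead return just one current status
# per request, so each response replaces the previous display.
statuses = [chunk.strip() for chunk in f1()]
print(statuses)     # one value per poll: ['1', '2', '3', '4', '5']
```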
2018/03/20
1,264
4,950
<issue_start>username_0: Currently I am working on a Jenetics ([link to jenetics](http://jenetics.io/)) implementation to optimize a particle accelerator beam line. My fitness function calls accelerator detector devices and is defined as follows: ``` private double fitness(final DoubleChromosome chromosomes) { // private double fitness(final Genotype chromosomes) { // Convert genes to a format the device scanner can understand // we will inject a 1:n Set> final Set> trimValues = new HashSet<>(); final List valueList = new ArrayList<>(); for (final DoubleGene chromosome : chromosomes) { valueList.add(Double.valueOf(chromosome.doubleValue())); } trimValues.add(valueList); .... more code specific to application } ``` Jenetics' stream engine is initialized in a specific method: ``` public void initAlgorithm(final Object scanParameters) throws Exception { if (scanParameters != null) { /// See constructor of EvolvingImagesWorker _geneticScanParameters = (GeneticScanParameters) scanParameters; } if (_geneticScanParameters.getTrimParameterSets() != null) { final int chromosomeCount = _geneticScanParameters.getTrimParameterSets().size(); if (chromosomeCount > 0) { ISeq chromosomeSet = ISeq.empty(); // create an ISeq of genes for (final TrimParameterValueSet valueSet : _geneticScanParameters.getTrimParameterSets()) { final double minValue = valueSet.getMinValue(); final double maxValue = valueSet.getMaxValue(); final double initialValue = (maxValue + minValue) / 2; final DoubleGene doubleGene = DoubleGene.of(initialValue, minValue, maxValue); final DoubleChromosome doubleChromosome = DoubleChromosome.of(doubleGene.newInstance()); chromosomeSet = chromosomeSet.append(doubleChromosome.newInstance()); } Codec codec = null; try { final Genotype genotype = Genotype.of(chromosomeSet); codec = Codec.of(genotype.newInstance(), // gt -> (DoubleChromosome) gt.getChromosome()); } catch (final IllegalArgumentException ex) { MessageLogger.logError(getClass(), Thread.currentThread(), 
ex); throw ex; } _scannerEngine = Engine.builder(this::fitness, codec) // .executor(Executors.newSingleThreadExecutor()) // without this command, engine will be executed // in // parallel threads .populationSize(_geneticScanParameters.getPopulationSize()) // .optimize(_geneticScanParameters.getOptimizationStrategy()) // .offspringFraction(_geneticScanParameters.getOffspringSize()) // .survivorsSelector(new RouletteWheelSelector<>()) // .offspringSelector(new TournamentSelector<>(_geneticScanParameters.getTournamentSizeLimit())) // .alterers( // new Mutator<>(_geneticScanParameters.getMutator()), // new MeanAlterer<>(_geneticScanParameters.getMeanAlterer()) // ) // .build(); } else { throw new IllegalStateException(ERROR_INITSCANNER_NO_SETTING_DEVICE); } } } ``` where: ``` private Engine _scannerEngine = null; ``` What I would like to do is to call the fitness function such that I have the Genotype available in the fitness function, to have access to the genes' values (settings I send to the accelerator). I already tried to define fitness() as follows: ``` private double fitness(final Genotype genotype) { ... } ``` but this call causes a compilation error.<issue_comment>username_1: I had a look at your code and I think you want to do something like this: ``` class Foo { // Your parameter class. class TrimParameterSet { double min, max; } static double fitness(final double[] values) { // Your fitness function. return 0; } public static void main(final String[] args) { final List valueSets = ...; final DoubleRange[] ranges = valueSets.stream() .map(p -> DoubleRange.of(p.min, p.max)) .toArray(DoubleRange[]::new); final Codec codec = Codecs.ofVector(ranges); final Engine engine = Engine.builder(Foo::fitness, codec) .build(); // ... } } ``` The `double[]` array of your fitness function has a different range, according to the defined ranges in your `TrimParameterSet` class. 
If you want to define a *direct* fitness function, you have to define a genotype with a gene as parameter type. ``` double fitness(Genotype gt) {...} ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: As the comment function did not allow me to finish documenting my final implementation, here is the actual code: ``` ISeq chromosomeSet = ISeq.empty(); // create an ISeq of genes for (loop criteria) { final DoubleGene doubleGene = DoubleGene.of(initialValue, minValue, maxValue); final DoubleChromosome doubleChromosome = DoubleChromosome.of(doubleGene.newInstance()); chromosomeSet = chromosomeSet.append(doubleChromosome.newInstance()); } _genotype = Genotype.of(chromosomeSet); _scannerEngine = Engine.builder(this::fitness, _genotype) ... // engine settings .build(); double fitness(Genotype gt) {...} ``` Upvotes: 0