Dataset columns: qid (int64, 1 to 74.7M), question (string, 0 to 58.3k chars), date (string, 10 chars), metadata (sequence), response_j (string, 2 to 48.3k chars), response_k (string, 2 to 40.5k chars)
53,203,011
It's abundantly clear to me that when we want to delete a node in a linked list (be it doubly or singly linked) and we have to search for that node, the time complexity is O(n), as we may have to traverse the whole list in the worst case to find it. Similarly, it is O(k) if we want to delete the k-th node and don't already have a reference to it.

It is commonly cited that one of the benefits of a doubly linked list over a singly linked list is that deletion is O(1) when we have a reference to the node we want to delete. I.e., to delete node `i`, simply do: `i.prev.next = i.next` and `i.next.prev = i.prev`.

It is said that deletion is O(1) in a singly linked list ONLY if you have a reference to the node prior to the one you want to delete. However, I don't think this is necessarily the case. If you want to delete node `i` (and you have a reference to node `i`), why can't you just copy over the data from `i.next` and set `i.next = i.next.next`? This would also be O(1), as in the doubly linked case, meaning that deletion is no more efficient in a doubly linked list in ANY case, as far as Big-O is concerned. Of course, this idea wouldn't work if the node you're trying to delete is the last node in the list. It's really bugging me that no one mentions this when comparing singly and doubly linked lists. What am I missing?

**To clarify**: what I'm suggesting in the singly linked case is **overwriting the data at the node you want to delete** with the data from the next node, and then deleting the next node. This has the same desired effect as deleting node `i`, though it is not literally what you're doing.

**EDIT**

**What I've learned:** It seems I am correct to some extent. First of all, many people pointed out that my solution isn't complete, as deletion of the last element is a problem, so my algorithm is O(n) (by the definition of Big-O). I naively suggested getting around this by keeping track of the "2nd-to-last node" in the list; of course, that breaks once the last node has been deleted the first time. A suggested solution that does seem to work is to demarcate the end of the list with something like a NullNode sentinel, and I like this approach. Other problems raised were referential integrity and the time associated with copying the data itself from the next node (presumably a costly deep copy might be necessary). If you can assume that no other objects hold references to the node you're copying, and that the copy itself is O(1), then it seems like my solution works. Although, at this point, maybe it's worth it to just use a doubly linked list :)
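The copy trick described above can be sketched in a few lines of Python (hypothetical names; an illustration of the idea, not code from the original post):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def delete_by_copy(node):
    """Delete `node` from a singly linked list given only a reference
    to it: copy the next node's data over it and unlink the next node.
    O(1), but it cannot delete the tail node."""
    if node.next is None:
        raise ValueError("cannot delete the last node this way")
    node.data = node.next.data
    node.next = node.next.next

# build 1 -> 2 -> 3
c = Node(3)
b = Node(2, c)
a = Node(1, b)
delete_by_copy(b)          # logically removes the value 2

vals = []
n = a
while n:
    vals.append(n.data)
    n = n.next
print(vals)  # [1, 3]
```

Note that the node object `b` survives, now carrying the value 3, while `c` is the one actually unlinked; that detail is what the referential-integrity objection below is about.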
2018/11/08
[ "https://Stackoverflow.com/questions/53203011", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10622251/" ]
For a node in the middle of the list you need to modify the *previous* node (so that its "next" pointer points to the removed node's "next"). With a doubly linked list this is simple, since the node to delete contains a pointer to the previous node. That's not possible with a singly linked list, where you need to iterate over the list until you find the node whose "next" pointer is the node to delete. Therefore removing a node from a doubly linked list is O(1), while for a singly linked list it's O(n), where n is the number of nodes before the node you want to remove.
**Deletion in a singly linked list** Assume there are six nodes in total, and the first node is the head. If you want to delete the first node, the complexity is O(1), because you need just one step. If you want to delete the 4th node, the complexity is O(n). If you want to delete the last node, the complexity is also O(n), because you have to iterate over all the nodes to reach it.
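Those cases can be illustrated with a small Python sketch (hypothetical names, 0-based positions) that counts the traversal steps dominating the cost:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def delete_kth(head, k):
    """Delete the k-th node (0-based) from a singly linked list.
    Returns (new_head, steps_traversed): O(1) for the head,
    O(k) in general, O(n) for the tail."""
    if k == 0:
        return head.next, 0
    prev, steps = head, 0
    while steps < k - 1:           # walk to the predecessor
        prev = prev.next
        steps += 1
    prev.next = prev.next.next     # unlink the k-th node
    return head, steps + 1

head = None
for v in range(5, 0, -1):          # build 1 -> 2 -> 3 -> 4 -> 5
    head = Node(v, head)

head, steps = delete_kth(head, 4)  # delete the last of 5 nodes
print(steps)  # 4 traversal steps: O(n) for the tail
```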
It is true that copying the data from `i.next` to `i` and then unlinking `i.next` would be `O(1)`, assuming that copying the data is also `O(1)`. But even with this algorithm, since deleting the last element is `O(n)`, and a description of a function in big-O notation only provides an upper bound on its growth rate, your algorithm is still `O(n)`. Regarding your comment:

> I guess my dissatisfaction comes from the fact that textbooks and basically every resource online cites the #1 biggest advantage of doubly linked lists as deletion - this seems a little disingenuous. It's a very specific case of deletion - deletion at the tail! If efficient deletion is all you're going for, it seems like this doesn't warrant using a doubly instead of a singly linked list (due to all the overhead of having double the number of pointers). Simply store a reference to the second-to-last node in your list, and you're good to go!

You can certainly store a reference to the second-to-last node and make deletion of the last node `O(1)`, but only for the first time you delete the last node. You could then update the reference to the node before it, but finding that node is `O(n)`. You can solve this by also keeping a reference to the third-to-last element, and so on - at which point you have reasoned your way to a doubly linked list, whose main advantage is exactly this kind of deletion, and since you already have pointers to previous nodes you don't need to move values around at all.

**Remember that big-`O` notation describes the worst-case scenario, so if even a single case is `O(n)`, your entire algorithm is `O(n)`.** When you say a solution is `O(n)`, you are basically saying *"in the worst possible case, this algorithm will grow as fast as `n` grows"*. Big `O` does not talk about expected or average performance; it's a great theoretical tool, but you need to consider your particular use cases when deciding what to use.
Additionally, if you need to preserve reference integrity, you would not want to move values from one node to another: if you had a reference to node `i+1` and deleted node `i`, you wouldn't expect your reference to silently become invalid. So when removing elements, the more robust option is to delete the node itself.
The problem with this approach is that it invalidates the wrong reference. Deleting a node should only invalidate references to *that* node, while references to *any other* node should remain valid. As long as you do not hold any outside references into the list, this approach works; otherwise it is prone to failure.
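The failure mode is easy to reproduce. In this hypothetical Python sketch, an outside reference to the node *after* the deleted one silently ends up pointing at a node that is no longer in the list:

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

# list: a(1) -> b(2) -> c(3); some outside holder keeps a reference to c
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next = b, c
held = c                      # e.g. an iterator positioned at node 3

# delete node 2 by the copy trick: b takes c's data, c is unlinked
b.data, b.next = c.data, c.next

# `held` still points at the object c, but that object is no longer
# part of the list; the "live" node holding 3 is now b
in_list = []
n = a
while n:
    in_list.append(n)
    n = n.next
print(held in in_list)  # False
```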
I was looking this up as a way to explain it and to get references for a blog post. Assuming you have to search for the node, as we often do with arrays and lists when looking for a value, you can only travel in one direction, and it takes O(n) time in both doubly and singly linked lists to reach the node and retrieve its address in memory. In a doubly linked list, once you have the node's location, you can set the pointers for the previous and next nodes as needed without having to temporarily store any data.

I think your idea would work, regardless of the last-node issue, if while traversing to find the node to be removed you keep a temporary reference to the previous node. The real issue is that in a singly linked list you have to hold node addresses in temporary variables as you traverse, so that the pointers can be reassigned: at each node you'd store the current node as "previous" and the next node as "next" - which is essentially what the doubly linked list records as it is created. Even if we have to travel to the end node, if the previous node is kept in a temporary variable, we can go back and assign a null to its next pointer. But again, this is what the doubly linked list accomplishes by storing the addresses of its neighbors, so nothing has to go into a temporary state during search and deletion.

Consider also that the benefit might not be the O(n) itself, but not having to hold temporary data to do the removal. At the node's location we can access the neighbors directly in a doubly linked list, while in a singly linked list we would have to store data temporarily on each iteration in case the value is found. There's always the possibility the value is not in the list at all; in a doubly linked list the traversal happens without storing temporary information. What if there are parallel processes and that temporary data is changed before the pointer swap can happen? What happens if that temporary data is deleted before the new assignment? Just some thoughts on it - I was hoping for a more thorough explanation myself.

Wikipedia: <https://en.wikipedia.org/wiki/Doubly_linked_list>
> It is said that deletion is O(1) in a singly linked list ONLY if you have a reference to the node prior to the one you want to delete. However, I don't think this is necessarily the case. If you want to delete Node i (and you have a reference to Node i), why can't you just copy over the data from i.next, and set i.next = i.next.next?

Because it is the previous node's "next" member that you want to point at what `i.next` points to before you delete `i`. Finding the previous node is an O(n) operation in a singly linked list if you don't have a reference to it. In a doubly linked list, finding the previous node is an O(1) operation: it is simply `i.prev`.
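For contrast, a minimal Python sketch of the doubly linked case (hypothetical names): with `prev` stored on every node, the predecessor is found in O(1) and no data has to be copied:

```python
class DNode:
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

def unlink(node):
    """O(1) removal from a doubly linked list given only the node:
    the predecessor is right there in node.prev."""
    if node.prev:
        node.prev.next = node.next
    if node.next:
        node.next.prev = node.prev

# build 1 <-> 2 <-> 3
a, b, c = DNode(1), DNode(2), DNode(3)
a.next, b.prev = b, a
b.next, c.prev = c, b

unlink(b)                 # no search, no data copying
vals = []
n = a
while n:
    vals.append(n.data)
    n = n.next
print(vals)  # [1, 3]
```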
27,529,956
I need to enlarge an input field when it is focused, without the containing `td` enlarging too. Furthermore, both fields should stay in their actual positions (have a look at the fiddle - it moves) and not move up or down (which is caused by the absolute positioning). This is my current state:

```
$('.Bemerkung').focus(function() {
    $(this).attr('size','30');
    $(this).parent().css({'position':'absolute'});
    $(this).css({'position':'absolute'});
})
$('.Bemerkung').blur(function() {
    $(this).attr('size','5');
    $(this).removeAttr('style');
    $(this).parent().removeAttr('style');
})
```

<http://jsfiddle.net/kazuya88/dhy0dxyy/>

Hope you can help me.
2014/12/17
[ "https://Stackoverflow.com/questions/27529956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4371042/" ]
It's not too far from what you originally had:

```
$('.Bemerkung').focus(function() {
    var oldWidth = $(this).width();
    $(this).attr('size','30');
    $(this).css({'position':'relative'});
    $(this).css({'margin-right':('-'+($(this).width()-oldWidth)+'px')});
})
$('.Bemerkung').blur(function() {
    $(this).attr('size','5');
    $(this).removeAttr('style');
    $(this).parent().removeAttr('style');
})
```

Update: I figured I should add more explanation of what is going on here. This detects the width before the element gains focus, then increases the size and sets the right margin to minus the difference between the new and old widths (thus making the renderer see it as the same "width" as before gaining focus). `position: relative` is so that the element's width will not affect the td width (as long as the right margin is set negative).
You could use `transform: scale(1.2)` to enlarge it, add a `transition`, and remove the `absolute` position declarations.

```js
$('.Bemerkung').focus(function() {
  $(this).css({
    'transition': 'transform 0.5s',
    'transform': 'scale(1.2)'
  });
});
$('.Bemerkung').blur(function() {
  $(this).css({
    'transform': 'scale(1)'
  });
});
```

```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<table id="weektimereg" class="recordlist" border=1 style="text-align:center;">
  <tr>
    <th style="width:110px;">sample</th>
    <th style="width:110px;">sample2</th>
  </tr>
  <tr>
    <td>
      <input type=text size=3 name=abc value='03:00'>
      <input class='Bemerkung' type=text size=5 name=cde value='test'>
    </td>
    <td>
      <input type=text size=3 name=abc value='03:00'>
      <input class='Bemerkung' type=text size=5 name=cde value='test'>
    </td>
  </tr>
  <tr>
    <td>
      <input type=text size=3 name=abc value='03:00'>
      <input class='Bemerkung' type=text size=5 name=cde value='test'>
    </td>
    <td>
      <input type=text size=3 name=abc value='03:00'>
      <input class='Bemerkung' type=text size=5 name=cde value='test'>
    </td>
  </tr>
</table>
```
Try replacing the JavaScript with pure CSS. One thing I did was make the td cell `text-align: left`; that's the only real difference right now. The reason there is movement when you switch to absolute positioning is that the browser tries to align the elements to the center, but absolutely positioned elements don't get measured or centered. Here is the CSS:

```
table td {
    position: relative;
    overflow: visible;
    text-align: left;
}
.Bemerkung:focus {
    width: 196px;
    position: absolute;
    z-index: 10000;
}
```

Let me know if you need `text-align: center` and I'll look for a different solution.
27,541,934
I'm trying to accomplish a dynamic button which is always square and sized based on the height of the text it accompanies. Something like this: ![Example](https://i.stack.imgur.com/9Vg9m.png) Basically, the icon stays the same, but the size of the box varies based on the size of the text it is next to. The icon should be centered vertically and horizontally. To get it to look like the image, I had to put everything in manually, but I want it to work whether the `font-size` is `20px`, `70px`, or anything else. Basically, I don't know the height, but it should still work - that's the goal, and that seems to be what makes this question different from others around the site/web. This is the HTML code:

```
<!-- This may be any font size, but the result should be like the image above. -->
<div id="name">
    <!-- This holds the text -->
    <span>Amy</span>
    <!-- This holds the image, and the anchor is the box. -->
    <a href="#"><img src="/images/edit.png" alt="Edit Name" /></a>
</div>
```

I've tried following [this tutorial](http://blog.brianjohnsondesign.com/responsive-centered-square-div-resizes-height/), but I can't get it to work for some reason. Everything I've tried (which is too many things to enumerate here) either gives me the right height but the wrong width, the exact size of the image, or the image at the size of the square. Is this possible with just CSS, or am I going to have to resort to JavaScript? Thanks.
2014/12/18
[ "https://Stackoverflow.com/questions/27541934", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1940394/" ]
Something like this should do it. You may have to change the size a little depending on the font. Also, you may have to `vertical-align` it.

```
.edit {
    display: inline-block;
    width: 1em;
    height: 1em;
}
.edit img {
    display: block;
}
```
[DEMO](http://jsfiddle.net/gkqsckdb/1/)

HTML:

```
<button>
  <h2 id="name">
    <span>Amy<a href="#" class="edit"><img src="/images/edit.png" alt="Edit Name" /></a></span>
  </h2>
</button>
```

CSS:

```
button img {
  vertical-align: middle;
}
h2 {
  font-size: 20pt;
}
```

Is this what you want?
35,435,755
I am creating a JSON string using a dictionary, and I have to remove only certain parts from the string. My string is ``` [{Id: "code": "AAA" , Title: "display": "ANAA,FRENCH POLYNESIA"},{Id: "code": "AAB" , Title: "display": "ARRABURY, QL AUSTRALIA"}] ``` I want to remove only ``` "code": ``` and this part of the string, using string.Replace('', "") ``` "display": ``` I am trying this: ``` var entries = dict.Select(d => string.Format("{{Id: {0} , Title: {1}}}", d.Key, string.Join(",", d.Value))); return "" + string.Join(",", entries) + ""; ``` but it is not working. I want to achieve ``` [{Id: "AAA" , Title: "ANAA,FRENCH POLYNESIA"},{Id: "AAB" , Title: "ARRABURY, QL AUSTRALIA"}] ```
2016/02/16
[ "https://Stackoverflow.com/questions/35435755", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5879311/" ]
You can run this code: ``` string json = "[{Id: \"code\": \"AAA\" , Title: \"display\": \"ANAA,FRENCH POLYNESIA\"},{Id: \"code\": \"AAB\" , Title: \"display\": \"ARRABURY, QL AUSTRALIA\"}]"; json = json.Replace("\"code\":", String.Empty); json = json.Replace("\"display\":", String.Empty); ``` You can remove the text with the `Replace` method, using **String.Empty** as the replacement.
As suggested, use string.Replace: ``` const string codeToRemove = "\"code\":"; const string displayToRemove = "\"display\":"; var entries = dict.Select(d => string.Format("{{Id: {0} , Title: {1}}}", d.Key.Replace(codeToRemove, ""), string.Join(",", d.Value.Replace(displayToRemove, "")))); var result = "" + string.Join(",", entries) + ""; ```
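Not part of the original answers, but the token-stripping idea they both use can be sketched language-agnostically. Here is a minimal Python illustration (the input string is copied from the question) showing that plain substring replacement produces the desired output; note it removes the trailing space after each label so no double space is left behind:

```python
# Sketch of the same idea as the C# answers, in Python for illustration.
# The input string is the one from the question.
raw = ('[{Id: "code": "AAA" , Title: "display": "ANAA,FRENCH POLYNESIA"},'
       '{Id: "code": "AAB" , Title: "display": "ARRABURY, QL AUSTRALIA"}]')

# Remove the labels, including the trailing space after each colon.
cleaned = raw.replace('"code": ', '').replace('"display": ', '')

print(cleaned)
# [{Id: "AAA" , Title: "ANAA,FRENCH POLYNESIA"},{Id: "AAB" , Title: "ARRABURY, QL AUSTRALIA"}]
```

The same caveat applies as in the C# answers: this is blind text replacement, so it only works while the labels never appear inside the data values themselves.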
29,175
I'm trying to write a SQL Server script to iterate through .bak files in a directory and then restore them to our server. In doing so, I've created three temp tables: #Files to keep track of the file-list returned by running xp\_dirtree, #HeaderInfo to hold data returned when querying restore headeronly to get the database names and #FileListInfo to hold data returned from querying restore filelistonly to get the logical file names. My question is regarding the #HeaderInfo table. Consulting the [MSDN definition](http://msdn.microsoft.com/en-us/library/ms178536%28v=sql.100%29.aspx) of the resultset returned from restore header only, I find that the fifth column (Compressed) and the last column (CompressedBackupSize) have 'invalid data types' (BYTE(1) and uint64, respectively). This, obviously, gives me an error when I try to execute the query. To get around this, I have used tinyint and bigint, respectively, and the code now runs fine. My question(s) is/are this/these: * Is using tinyint/bigint the 'correct' work around for this? Or is there a better way to do it? * Is using them likely to cause any undesired behaviour? * If SQL is expecting BYTE(1) and uint64s, why does using different data types not cause an error? * And why does MSDN specify BYTE(1) and uint64 if they're not what gets returned? What are these used for and where? * Bonus question, for anyone who's interested, is there a more elegant/efficient way of automating a restore script? Many thanks EDIT: SQL Server 2008
2012/11/23
[ "https://dba.stackexchange.com/questions/29175", "https://dba.stackexchange.com", "https://dba.stackexchange.com/users/15382/" ]
**First your first questions** 1. I would use TINYINT for the `BYTE(1)`; in this case they told us the possible values are 1 or 0, so `BIT` may also work. But `uint64` is an unsigned 64-bit integer. BIGINT is signed, so the max value is lower. So technically speaking, a DECIMAL(20,0) or greater precision would be used here. But in later versions of that same article this is a BIGINT (for SQL Server 2008 R2 and SQL Server 2012), so I am sure you are fine with BIGINT here. If you ever get enough disk space and time to create a database big enough to compress to a value that overflows BIGINT, you can test this theory out someday ;-) 2. No undesired behavior if you go with `TINYINT`/`BIGINT`/`DECIMAL(20,0)` 3. I am not sure I understand your question, but I believe the answer is conversion, if you are asking what I think you are asking - but this is potentially just an oops 4. I'm not sure why those data types are in the documentation, but you've chosen good logical approximations. **Then the last question** I hate to shove off on this one, but I'm kind of going to do that. There are a lot of great restore scripts out there on the internet for different scenarios. You haven't fully described yours, so I'm not sure I can comment on the efficiency/elegance, but you are right to read the headers to determine what you do next. Some questions to ask yourself: Are you looking at things like the date to ensure you restore the latest? Are you looking at things like full/diff/log backups and accounting for them in the restore? What purpose is this for? Restoring a dev environment? Or for a production restore? If a dev restore, I like to go more automated. If a prod restore, I like to have a script that eliminates some "oops" factor from a critical production restore, but not automate so much of it that it makes it easy to forget a critical step or to do something like back up the tail of the log. 
I'd search for restore scripts and see what others have done, ask yourself these questions, and incorporate what you like. I am also not sure you need to know whether the file is compressed or what the compressed size is. Those facts shouldn't be terribly necessary for a restore script, since SQL Server just handles the restore of a compressed backup for you. You don't have to tell SQL it is compressed. So you may just drop those columns altogether and only take what you require from the header to perform your restore.
You can find the [data types mapping](http://msdn.microsoft.com/en-us/library/cc716729.aspx) between SQL Server and C# on MSDN. I believe that the two return types there are just a typo in the documentation, and not the actual requirement. Actually uint64 should be BIGINT, as you already found; nothing else matches closely. BYTE(1) is equivalent to [BINARY](http://msdn.microsoft.com/en-us/library/ms188362%28v=sql.105%29.aspx) in SQL Server, not tinyint. But going through the same documentation, I'd say that this one should actually be a [bit](http://msdn.microsoft.com/en-us/library/ms177603%28v=sql.105%29.aspx) data type, as the returned values are only 1 and 0. Except for unnecessary casts, there shouldn't be any issues in the future. Bonus answer: this can also be done in PowerShell scripts, SSIS packages, whatever your preference is. Do you verify anywhere the type of the backups and their order of restore?
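To make the range argument in the answers above concrete, here is a small check (not from the original answers, shown in Python purely as a calculator) confirming that a signed BIGINT cannot represent every uint64 value, while DECIMAL(20,0) can:

```python
# Range comparison behind the data-type choice discussed above.
UINT64_MAX = 2**64 - 1   # largest value an unsigned 64-bit integer can hold
BIGINT_MAX = 2**63 - 1   # SQL Server BIGINT is a signed 64-bit integer

print(UINT64_MAX)  # 18446744073709551615 (20 digits)
print(BIGINT_MAX)  # 9223372036854775807  (19 digits)

# BIGINT cannot represent the full uint64 range...
assert UINT64_MAX > BIGINT_MAX
# ...but DECIMAL(20,0), with 20 digits of precision, can:
assert 10**20 - 1 >= UINT64_MAX
```

In practice, as the answer notes, a backup large enough for CompressedBackupSize to overflow BIGINT is not realistic, which is presumably why later documentation simply lists the column as bigint.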
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue, or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
What you see around the corners is most probably not glue, and certainly not put there to hold the chip in place during automated assembly. Some SMD components need to be glued down, as in the case of a PCB with components on both sides: when you flip it upside down to assemble the other side, some components might fall off while the solder is liquid. This happens when the surface tension of the molten solder is not enough to hold them in place. That said, it is possible that it is just a sealing compound, put there to prevent moisture from penetrating below the chip, especially for chips that might get hot, such as a CPU or GPU. Moisture can penetrate below the part and diffuse inside the chip itself, depending on the package technology, and when the chip heats up the water can become steam and crack certain parts of the chip. To avoid this, you bake the parts before soldering, you solder, and then you seal the sides to prevent any moisture ingress. How you can solder your BGAs is not a single-answer question. It entirely depends on what pitch we are dealing with, the PCB thickness, the PCB status (new/used), and the equipment you have. A good recipe is to use a very (very very very) thin amount of really thin (as in runny) flux on the PCB, place the part, hot air & pray. You might need a hot plate if the PCB is particularly thick/big, and if it is small enough you can get away with the hot plate only. And absolutely, categorically, no glue.
The main reason for using staking or underfill is to 1) reduce the stress on the BGA solder joints caused by CTE differences between the package and the board, 2) reduce the possibility of the part detaching from the board during a high shock (depth charge near a submarine) or vibration (rocket launch) event and 3) in the case of underfill, provide a better thermal path from the package to the board. As others have said, you want the part to be able to move a bit during the solder reflow operation, so any staking or underfill is done after soldering and after initial testing shows that the unit works properly. Whether staking or underfill is needed, and the specific material used, can only be determined after a rigorous structural (for staking) and thermal (for underfill) analysis has been performed. Staking is preferred to underfill because of the difficulty/impossibility of reworking a BGA that has been underfilled. **Edit 1** Note that on double-sided SMT boards that go through a solder reflow operation, it is usual practice to use a small bit of adhesive under components to hold them in place, especially those on the underside of the board. However, this adhesive is not meant to serve a structural or thermal need.
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue, or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
To add on to the other excellent answers, and to answer your third question: the red glue you see is likely to be some kind of **corner staking** or **underfilling**. After soldering, an adhesive compound is added to mitigate in-the-field failure, particularly when packages are subjected to thermal or physical stresses. It's not intended to improve solderability. See [the ANSYS website](https://www.ansys.com/blog/bga-and-qfn-failure-mitigation-underfilling-edge-bonds-and-corner-staking-physics-of-failure) for more info.
What you see around the corners is most probably not glue, and certainly not put there to hold the chip in place during automated assembly. Some SMD components need to be glued down, as in the case of a PCB with components on both sides: when you flip it upside down to assemble the other side, some components might fall off while the solder is liquid. This happens when the surface tension of the molten solder is not enough to hold them in place. That said, it is possible that it is just a sealing compound, put there to prevent moisture from penetrating below the chip, especially for chips that might get hot, such as a CPU or GPU. Moisture can penetrate below the part and diffuse inside the chip itself, depending on the package technology, and when the chip heats up the water can become steam and crack certain parts of the chip. To avoid this, you bake the parts before soldering, you solder, and then you seal the sides to prevent any moisture ingress. How you can solder your BGAs is not a single-answer question. It entirely depends on what pitch we are dealing with, the PCB thickness, the PCB status (new/used), and the equipment you have. A good recipe is to use a very (very very very) thin amount of really thin (as in runny) flux on the PCB, place the part, hot air & pray. You might need a hot plate if the PCB is particularly thick/big, and if it is small enough you can get away with the hot plate only. And absolutely, categorically, no glue.
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue, or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
What you see around the corners is most probably not glue, and certainly not put there to hold the chip in place during automated assembly. Some SMD components need to be glued down, as in the case of a PCB with components on both sides: when you flip it upside down to assemble the other side, some components might fall off while the solder is liquid. This happens when the surface tension of the molten solder is not enough to hold them in place. That said, it is possible that it is just a sealing compound, put there to prevent moisture from penetrating below the chip, especially for chips that might get hot, such as a CPU or GPU. Moisture can penetrate below the part and diffuse inside the chip itself, depending on the package technology, and when the chip heats up the water can become steam and crack certain parts of the chip. To avoid this, you bake the parts before soldering, you solder, and then you seal the sides to prevent any moisture ingress. How you can solder your BGAs is not a single-answer question. It entirely depends on what pitch we are dealing with, the PCB thickness, the PCB status (new/used), and the equipment you have. A good recipe is to use a very (very very very) thin amount of really thin (as in runny) flux on the PCB, place the part, hot air & pray. You might need a hot plate if the PCB is particularly thick/big, and if it is small enough you can get away with the hot plate only. And absolutely, categorically, no glue.
The "red" glue you are seeing is an SMT red glue, a certain type of temperature-set adhesive. Normally most assembly houses will not use these adhesives, as surface tension will position the components correctly. However, that is the "theory" ... In practice, and depending on the circumstances, it may be required or used sometimes. I know for sure that hand soldering small BGAs based on the common idea of "surface tension will center it naturally - don't worry" is pie in the sky! If the hot-air gun blows too strongly, as they often do even on the minimum setting, you will quickly blow the part away, quicker than you can sneeze! In this case, some sort of manual positioning on your prototype PCB might be the answer ... it's not easy to say what type of glue works best (if you really had to go that way), but the temperature-sensitive red glue is used because it hardens very quickly above 130 - 150 Celsius, so there's a reason for it being used in some cases ... Having said that, for bigger BGA components, say with over 160 BGA balls, you hopefully don't need any adhesive at all, as the surface tension etc. will do the job once the solder is melting ...
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue, or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
What you see around the corners is most probably not glue, and certainly not put there to hold the chip in place during automated assembly. Some SMD components need to be glued down, as in the case of a PCB with components on both sides: when you flip it upside down to assemble the other side, some components might fall off while the solder is liquid. This happens when the surface tension of the molten solder is not enough to hold them in place. That said, it is possible that it is just a sealing compound, put there to prevent moisture from penetrating below the chip, especially for chips that might get hot, such as a CPU or GPU. Moisture can penetrate below the part and diffuse inside the chip itself, depending on the package technology, and when the chip heats up the water can become steam and crack certain parts of the chip. To avoid this, you bake the parts before soldering, you solder, and then you seal the sides to prevent any moisture ingress. How you can solder your BGAs is not a single-answer question. It entirely depends on what pitch we are dealing with, the PCB thickness, the PCB status (new/used), and the equipment you have. A good recipe is to use a very (very very very) thin amount of really thin (as in runny) flux on the PCB, place the part, hot air & pray. You might need a hot plate if the PCB is particularly thick/big, and if it is small enough you can get away with the hot plate only. And absolutely, categorically, no glue.
* 1. No, absolutely no glue should be used on BGAs prior to reflow. BGA solder balls collapse slightly during reflow, increasing contact with the pad, and any adhesive would interfere with that. * 2. The solder paste holds the chip in place prior to the melt, then surface tension during. No need for anything else. * 3. For backside components, limit the BGA size / weight depending on the solder paste contact area. If you can’t avoid this, you may need to do a 2-pass reflow with higher-melt solder on the back for pass 1, then lower-melt solder on top for pass 2. This adds cost of course. You can consider using NSMD pads to increase solder-to-pad contact area (and thus, surface tension) during reflow. NSMD pads have proven more mechanically robust than SMD pads for very fine-pitch BGAs. Finally, make sure your board planarity is well controlled. Your package planarity also needs to be specified and assured by the vendor. The glue you're seeing isn't for soldering. It's 'corner staking' applied after the reflow process for improved shock-and-vibe mechanical robustness. It's a reduced type of underfill where adhesive is injected between the corners of the soldered BGA and the board, as opposed to complete underfill, which is injected underneath the entire IC. More here: <https://www.ansys.com/blog/bga-and-qfn-failure-mitigation-underfilling-edge-bonds-and-corner-staking-physics-of-failure>
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue, or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
What you see around the corners is most probably not glue, and certainly not put there to hold the chip in place during automated assembly. Some SMD components need to be glued down, as in the case of a PCB with components on both sides: when you flip it upside down to assemble the other side, some components might fall off while the solder is liquid. This happens when the surface tension of the molten solder is not enough to hold them in place. That said, it is possible that it is just a sealing compound, put there to prevent moisture from penetrating below the chip, especially for chips that might get hot, such as a CPU or GPU. Moisture can penetrate below the part and diffuse inside the chip itself, depending on the package technology, and when the chip heats up the water can become steam and crack certain parts of the chip. To avoid this, you bake the parts before soldering, you solder, and then you seal the sides to prevent any moisture ingress. How you can solder your BGAs is not a single-answer question. It entirely depends on what pitch we are dealing with, the PCB thickness, the PCB status (new/used), and the equipment you have. A good recipe is to use a very (very very very) thin amount of really thin (as in runny) flux on the PCB, place the part, hot air & pray. You might need a hot plate if the PCB is particularly thick/big, and if it is small enough you can get away with the hot plate only. And absolutely, categorically, no glue.
Personally, I'm intimidated by the whole idea of BGA rework with hobby-grade equipment, and really wouldn't do it. But, no, if I were doing it, I would be very hesitant to glue the chip in place. Surface mount soldering relies on the surface tension of melted solder being able to pull the chip into alignment. Gluing would prevent that from happening. The issue is that with hot air, you may have problems getting all the balls molten at the same time. I really wouldn't even consider trying it without using a board heater to bring the temperature to just below the eutectic point, and then using hot air to nudge the region of the board over melting. Note that facilities that do BGA rework often have x-ray devices to check the results. Certainly, you would need to be able to tolerate errors, and be able to run a functionality test to verify correct placement.
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue, or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
To add on to the other excellent answers, and to answer your third question: the red glue you see is likely to be some kind of **corner staking** or **underfilling**. After soldering, an adhesive compound is added to mitigate in-the-field failure, particularly when packages are subjected to thermal or physical stresses. It's not intended to improve solderability. See [the ANSYS website](https://www.ansys.com/blog/bga-and-qfn-failure-mitigation-underfilling-edge-bonds-and-corner-staking-physics-of-failure) for more info.
* 1. No, absolutely no glue should be used on BGAs prior to reflow. BGA solder balls collapse slightly during reflow, increasing contact with the pad, and any adhesive would interfere with that. * 2. The solder paste holds the chip in place prior to the melt, then surface tension during. No need for anything else. * 3. For backside components, limit the BGA size / weight depending on the solder paste contact area. If you can’t avoid this, you may need to do a 2-pass reflow with higher-melt solder on the back for pass 1, then lower-melt solder on top for pass 2. This adds cost of course. You can consider using NSMD pads to increase solder-to-pad contact area (and thus, surface tension) during reflow. NSMD pads have proven more mechanically robust than SMD pads for very fine-pitch BGAs. Finally, make sure your board planarity is well controlled. Your package planarity also needs to be specified and assured by the vendor. The glue you're seeing isn't for soldering. It's 'corner staking' applied after the reflow process for improved shock-and-vibe mechanical robustness. It's a reduced type of underfill where adhesive is injected between the corners of the soldered BGA and the board, as opposed to complete underfill, which is injected underneath the entire IC. More here: <https://www.ansys.com/blog/bga-and-qfn-failure-mitigation-underfilling-edge-bonds-and-corner-staking-physics-of-failure>
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue, or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
To add on to the other excellent answers, and to answer your third question: the red glue you see is likely to be some kind of **corner staking** or **underfilling**. After soldering, an adhesive compound is added to mitigate in-the-field failure, particularly when packages are subjected to thermal or physical stresses. It's not intended to improve solderability. See [the ANSYS website](https://www.ansys.com/blog/bga-and-qfn-failure-mitigation-underfilling-edge-bonds-and-corner-staking-physics-of-failure) for more info.
The "red" glue you are seeing is an SMT red glue, a type of temperature-set adhesive. Normally most assembly houses will not use these adhesives, as surface tension will position the components correctly. However, that is the theory. In practice, and depending on the circumstance, it may be required or used sometimes. I know for sure that hand soldering small BGAs based on the common idea of "surface tension will center it naturally - don't worry" is pie in the sky! If the hot-air gun blows too strongly, as they often do even on the minimum setting, you will blow the part away quicker than you can sneeze! In this case, some sort of manual positioning on your prototype PCB might be the answer. It's not easy to say what type of glue works best (if you really had to go that way), but the temperature-sensitive red glue is used because it hardens very quickly above 130 - 150 Celsius, so there's a reason for it being used in some cases. Having said that, with bigger BGA components, say with over 160 balls, you hopefully don't need any adhesive at all, as surface tension will do the job once the solder is melting.
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
To add on to the other excellent answers, and to answer your third question: the red glue you see is likely to be some kind of **corner staking** or **underfilling**. After soldering, an adhesive compound is added to mitigate in-the-field failure, particularly when packages are subjected to thermal or physical stresses. It's not intended to improve solderability. See [the ANSYS website](https://www.ansys.com/blog/bga-and-qfn-failure-mitigation-underfilling-edge-bonds-and-corner-staking-physics-of-failure) for more info.
Personally, I'm intimidated by the whole idea of BGA rework with hobby-grade equipment, and really wouldn't do it. But no, if I were doing it, I would be very hesitant to glue the chip in place. Surface-mount soldering relies on the surface tension of melted solder pulling the chip into alignment. Gluing would prevent that from happening. The issue is that with hot air, you may have problems getting all the balls molten at the same time. I really wouldn't even consider trying it without using a board heater to bring the temperature just below the eutectic point, and then using hot air to nudge the region of the board over melting. Note that facilities that do BGA rework often have x-ray devices to check the results. Certainly, you would need to be able to tolerate errors, and be able to run a functionality test to verify correct placement.
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
To add on to the other excellent answers, and to answer your third question: the red glue you see is likely to be some kind of **corner staking** or **underfilling**. After soldering, an adhesive compound is added to mitigate in-the-field failure, particularly when packages are subjected to thermal or physical stresses. It's not intended to improve solderability. See [the ANSYS website](https://www.ansys.com/blog/bga-and-qfn-failure-mitigation-underfilling-edge-bonds-and-corner-staking-physics-of-failure) for more info.
What others have said. We use a small under-board preheater (my technician says that is essential) so minimal airflow is needed on top using a hot air pencil. We just bought one of those inexpensive reflow ovens (under USD$500), which with some controller modifications can be quite good for doing small board runs. I saw one in use at an OSHWA meeting a couple of years ago and was quite impressed. It has actual profiles, and you can add thermocouples etc. to do it right, but on a small scale. For double-sided boards, surface tension should hold the upside-down parts on, but you can of course use a slightly lower-temperature solder for the second side. Also, you may find a local assembly house that can do small runs for you, and they should even have x-ray inspection for BGAs. There's such a place in Salt Lake City.
654,953
I have often seen ball grid array (BGA) chips, mostly those from CPUs or GPUs, being glued around the corners with some red glue or around the perimeter with a translucent one. Having to manually solder BGA chips using hot air, should I glue the chips to the board before heating? In their answers to a quite similar question about soldering small electronic parts (but not specifically BGA chips), some users mention that glue can cause additional problems when it is not applied properly: [adhesive glue before soldering](https://electronics.stackexchange.com/questions/71682/adhesive-glue-before-soldering) Not having an assistant, this soldering process remains challenging for me, as I hold the hot air gun (from a rework station) in one hand and the tweezers in the other. Without using glue, I see at least three difficulties: * positioning and aligning the chips precisely * keeping the surface of the chips parallel to that of the PCB when bringing them to it * remaining stable during the soldering, without false moves or trembling So, my questions are: 1. Is the use of glue recommended, given the context? 2. Are there alternative compounds to help keep the chips in place during the soldering, like a kind of "butter" that would progressively melt when reaching high temperatures? 3. Are there other improvements that I can make to facilitate the soldering of the BGA chips?
2023/02/21
[ "https://electronics.stackexchange.com/questions/654953", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/199752/" ]
To add on to the other excellent answers, and to answer your third question: the red glue you see is likely to be some kind of **corner staking** or **underfilling**. After soldering, an adhesive compound is added to mitigate in-the-field failure, particularly when packages are subjected to thermal or physical stresses. It's not intended to improve solderability. See [the ANSYS website](https://www.ansys.com/blog/bga-and-qfn-failure-mitigation-underfilling-edge-bonds-and-corner-staking-physics-of-failure) for more info.
The main reason for using staking or underfill is to 1) reduce the stress on the BGA solder joints caused by CTE differences between the package and the board, 2) reduce the possibility of the part detaching from the board during a high shock (depth charge near a submarine) or vibration (rocket launch) event and 3) in the case of underfill, provide a better thermal path from the package to the board. As others have said, you want the part to be able to move a bit during the solder reflow operation, so any staking or underfill is done after soldering and after initial testing shows that the unit works properly. Whether staking or underfill is needed, and the specific material used, can only be determined after a rigorous structural (for staking) and thermal (for underfill) analysis has been performed. Staking is preferred to underfill because of the difficulty/impossibility of reworking a BGA that has been underfilled. **Edit 1** Note that on double-sided SMT boards that go through a solder reflow operation, it is usual practice to use a small bit of adhesive under components to hold them in place, especially those on the underside of the board. However this adhesive is not meant to serve a structural or thermal need.
56,444,790
Why do I need to bind() a function inside a constructor? ``` constructor (props){ super(props); this.funcao = this.funcao.bind(this); } ``` Could I not bind() without using a constructor?
2019/06/04
[ "https://Stackoverflow.com/questions/56444790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11599057/" ]
You don't have to bind the methods in the constructor, have a look at the explanation below. ```js // Setup (mock) class MyValue { constructor(value) { this.value = value; } log() { console.log(this.value); } bindMethods() { this.log = this.log.bind(this); } } const value = new MyValue("Hello World!"); var onclick; // Explanation console.log("==== 1. ===="); // Normally when you call a function, you call it with // a receiver. Meaning that it is called upon an object. // The receiver is used to fill the `this` keyword. // // «receiver».«method name»(«params») // value.log(); // In the line above `value` is the receiver and is used // when you reference `this` in the `log` function. console.log("==== 2. ===="); // However in React a function is often provided to a // handler. For example: // // onclick={this.log} // // This could be compared to the following: onclick = value.log; // When the event is triggered the function is executed. try { onclick(); } catch (error) { console.error(error.message); } // The above throws an error because the function is // called without receiver. Meaning that `this` is // `undefined`. console.log("==== 3. ===="); // Binding a function pre-fills the `this` keywords. // Allowing you to call the function without receiver. onclick = value.log.bind(value); onclick(); console.log("==== 4. ===="); // Another option is using an anonymous function to // call the function with receiver. onclick = () => value.log(); onclick(); console.log("==== 5. ===="); // Binding doesn't have to occur in the constructor, // this can also be done by any other instance function. // a.k.a. a method. value.bindMethods(); // After binding the functions to the receiver you're // able to call the method without receiver. (Since all // `this` keywords are pre-filled.)
onclick = value.log; onclick(); ``` For more details about the `this` keyword you should have a look through the [MDN `this` documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this). Methods that don't make use of the `this` keyword don't have to be bound.
Keep in mind that if you write your functions in your class as arrow functions, you don't need to bind(this); it's automatic. An arrow function doesn't create its own `this`: it captures the `this` of the enclosing class instance, so no binding in the constructor is needed.
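A minimal plain-JavaScript sketch of the point above (no React involved; the `Counter` class and its method names are made up for illustration), contrasting a detached unbound method, a constructor-bound method, and a class-field arrow function:

```js
class Counter {
  constructor() {
    this.count = 0;
    // bound once here, so a detached reference keeps `this`
    this.incBound = this.incBound.bind(this);
  }

  incPlain() { this.count += 1; } // loses `this` when detached
  incBound() { this.count += 1; }

  // class-field arrow function: captures the instance's `this` lexically
  incArrow = () => { this.count += 1; };
}

const c = new Counter();

const bound = c.incBound;
const arrow = c.incArrow;
bound(); // works: `this` was pre-filled by bind()
arrow(); // works: the arrow captured `this` at construction

const plain = c.incPlain;
try {
  plain(); // throws: class bodies are strict mode, so `this` is undefined here
} catch (e) {
  console.log(e instanceof TypeError); // true
}

console.log(c.count); // 2
```

This is exactly the situation a React event handler like `onClick={this.handleClick}` creates: the handler is called detached, so only the bound or arrow variants keep access to the instance.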
56,444,790
Why do I need to bind() a function inside a constructor? ``` constructor (props){ super(props); this.funcao = this.funcao.bind(this); } ``` Could I not bind() without using a constructor?
2019/06/04
[ "https://Stackoverflow.com/questions/56444790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11599057/" ]
Keep in mind that if you write your functions in your class as arrow functions, you don't need to bind(this); it's automatic. An arrow function doesn't create its own `this`: it captures the `this` of the enclosing class instance, so no binding in the constructor is needed.
The reason you `bind()` functions is that class methods are not automatically bound to the class instance, which in React's case means you don't have access to the component's `state` or `props` when the method is called as a detached event handler. Use arrow functions to automatically bind to the instance object: `funcao = () => {...}` And call it from anywhere inside your component like this: `this.funcao()`
56,444,790
Why do I need to bind() a function inside a constructor? ``` constructor (props){ super(props); this.funcao = this.funcao.bind(this); } ``` Could I not bind() without using a constructor?
2019/06/04
[ "https://Stackoverflow.com/questions/56444790", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11599057/" ]
You don't have to bind the methods in the constructor, have a look at the explanation below. ```js // Setup (mock) class MyValue { constructor(value) { this.value = value; } log() { console.log(this.value); } bindMethods() { this.log = this.log.bind(this); } } const value = new MyValue("Hello World!"); var onclick; // Explanation console.log("==== 1. ===="); // Normally when you call a function, you call it with // a receiver. Meaning that it is called upon an object. // The receiver is used to fill the `this` keyword. // // «receiver».«method name»(«params») // value.log(); // In the line above `value` is the receiver and is used // when you reference `this` in the `log` function. console.log("==== 2. ===="); // However in React a function is often provided to a // handler. For example: // // onclick={this.log} // // This could be compared to the following: onclick = value.log; // When the event is triggered the function is executed. try { onclick(); } catch (error) { console.error(error.message); } // The above throws an error because the function is // called without receiver. Meaning that `this` is // `undefined`. console.log("==== 3. ===="); // Binding a function pre-fills the `this` keywords. // Allowing you to call the function without receiver. onclick = value.log.bind(value); onclick(); console.log("==== 4. ===="); // Another option is using an anonymous function to // call the function with receiver. onclick = () => value.log(); onclick(); console.log("==== 5. ===="); // Binding doesn't have to occur in the constructor, // this can also be done by any other instance function. // a.k.a. a method. value.bindMethods(); // After binding the functions to the receiver you're // able to call the method without receiver. (Since all // `this` keywords are pre-filled.)
onclick = value.log; onclick(); ``` For more details about the `this` keyword you should have a look through the [MDN `this` documentation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this). Methods that don't make use of the `this` keyword don't have to be bound.
The reason you `bind()` functions is that class methods are not automatically bound to the class instance, which in React's case means you don't have access to the component's `state` or `props` when the method is called as a detached event handler. Use arrow functions to automatically bind to the instance object: `funcao = () => {...}` And call it from anywhere inside your component like this: `this.funcao()`
34,624
I am about to upgrade the hard disk of my MacBook (from 60GB to 320GB, which I know is below the limit of 500GB). I was able both to install OS X on the new drive and to transfer the old hard disk's partition to the new one with a [sysresccd](http://www.sysresccd.org/Main_Page) (Linux live disk) and `dd` (`dd if=/dev/sdb of=/dev/sda`, so the entire disk). So it's not a problem of hardware, limitations or restrictions. After having transferred the 60GB disk to the new one, every partition manager would show me that there is about 260GB left on the disk, but none would allow me to resize the partition. The `Disk Utility` on the OS X installation CD allows resizing, but also erases the data. I tried to copy (`dd`) the old partition into the new, bigger one (`dd if=/dev/sdb2 of=/dev/sda2`), but then the system would refuse to boot. What I have not yet tried is manually creating a partition, then copying all the contents (with hfsplus support on the Linux live disk), because I suppose this partition would not boot either. Is there a way to either resize the partition after having copied the entire disk, or to render the manually resized partition bootable?
2009/07/01
[ "https://serverfault.com/questions/34624", "https://serverfault.com", "https://serverfault.com/users/1509/" ]
I think there is a GUI way of doing this with Disk Utility, but I can't remember it off the top of my head. Anyway, you should be able to resize an HFS+ volume nondestructively from the command line. The article is kind of old, but the syntax hasn't changed, so it should still work. The standard warnings apply: have a backup, and don't try to resize the current boot drive; boot off a CD or a backup drive instead. You may also want to run sync and reboot right after the resize, as I don't know how much to trust Finder to pick up the change right away. <http://www.macworld.com/article/55274/2007/02/marchgeekfactor.html>
Interestingly enough, information online suggests that you can use the Boot Camp installer to return your Mac to a non-Boot Camp machine which will grow your HFS+ partitions. <http://wiki.onmac.net/index.php/Triple_Boot_via_BootCamp#Restoring_your_Mac_to_its_original_state>
34,624
I am about to upgrade the hard disk of my MacBook (from 60GB to 320GB, which I know is below the limit of 500GB). I was able both to install OS X on the new drive and to transfer the old hard disk's partition to the new one with a [sysresccd](http://www.sysresccd.org/Main_Page) (Linux live disk) and `dd` (`dd if=/dev/sdb of=/dev/sda`, so the entire disk). So it's not a problem of hardware, limitations or restrictions. After having transferred the 60GB disk to the new one, every partition manager would show me that there is about 260GB left on the disk, but none would allow me to resize the partition. The `Disk Utility` on the OS X installation CD allows resizing, but also erases the data. I tried to copy (`dd`) the old partition into the new, bigger one (`dd if=/dev/sdb2 of=/dev/sda2`), but then the system would refuse to boot. What I have not yet tried is manually creating a partition, then copying all the contents (with hfsplus support on the Linux live disk), because I suppose this partition would not boot either. Is there a way to either resize the partition after having copied the entire disk, or to render the manually resized partition bootable?
2009/07/01
[ "https://serverfault.com/questions/34624", "https://serverfault.com", "https://serverfault.com/users/1509/" ]
Just cleanly partition the new disk to any size you like and copy the data over with [Carbon Copy Cloner.](http://www.bombich.com/) It will be bootable and have the size you want. You can do that on a running system, and you don't need any live CDs or anything, just a USB or FireWire interface for the new/second hard disk.
Interestingly enough, information online suggests that you can use the Boot Camp installer to return your Mac to a non-Boot Camp machine which will grow your HFS+ partitions. <http://wiki.onmac.net/index.php/Triple_Boot_via_BootCamp#Restoring_your_Mac_to_its_original_state>
34,624
I am about to upgrade the hard disk of my MacBook (from 60GB to 320GB, which I know is below the limit of 500GB). I was able both to install OS X on the new drive and to transfer the old hard disk's partition to the new one with a [sysresccd](http://www.sysresccd.org/Main_Page) (Linux live disk) and `dd` (`dd if=/dev/sdb of=/dev/sda`, so the entire disk). So it's not a problem of hardware, limitations or restrictions. After having transferred the 60GB disk to the new one, every partition manager would show me that there is about 260GB left on the disk, but none would allow me to resize the partition. The `Disk Utility` on the OS X installation CD allows resizing, but also erases the data. I tried to copy (`dd`) the old partition into the new, bigger one (`dd if=/dev/sdb2 of=/dev/sda2`), but then the system would refuse to boot. What I have not yet tried is manually creating a partition, then copying all the contents (with hfsplus support on the Linux live disk), because I suppose this partition would not boot either. Is there a way to either resize the partition after having copied the entire disk, or to render the manually resized partition bootable?
2009/07/01
[ "https://serverfault.com/questions/34624", "https://serverfault.com", "https://serverfault.com/users/1509/" ]
Just cleanly partition the new disk to any size you like and copy the data over with [Carbon Copy Cloner.](http://www.bombich.com/) It will be bootable and have the size you want. You can do that on a running system, and you don't need any live CDs or anything, just a USB or FireWire interface for the new/second hard disk.
I think there is a GUI way of doing this with Disk Utility, but I can't remember it off the top of my head. Anyway, you should be able to resize an HFS+ volume nondestructively from the command line. The article is kind of old, but the syntax hasn't changed, so it should still work. The standard warnings apply: have a backup, and don't try to resize the current boot drive; boot off a CD or a backup drive instead. You may also want to run sync and reboot right after the resize, as I don't know how much to trust Finder to pick up the change right away. <http://www.macworld.com/article/55274/2007/02/marchgeekfactor.html>
34,624
I am about to upgrade the hard disk of my MacBook (from 60GB to 320GB, which I know is below the limit of 500GB). I was able both to install OS X on the new drive and to transfer the old hard disk's partition to the new one with a [sysresccd](http://www.sysresccd.org/Main_Page) (Linux live disk) and `dd` (`dd if=/dev/sdb of=/dev/sda`, so the entire disk). So it's not a problem of hardware, limitations or restrictions. After having transferred the 60GB disk to the new one, every partition manager would show me that there is about 260GB left on the disk, but none would allow me to resize the partition. The `Disk Utility` on the OS X installation CD allows resizing, but also erases the data. I tried to copy (`dd`) the old partition into the new, bigger one (`dd if=/dev/sdb2 of=/dev/sda2`), but then the system would refuse to boot. What I have not yet tried is manually creating a partition, then copying all the contents (with hfsplus support on the Linux live disk), because I suppose this partition would not boot either. Is there a way to either resize the partition after having copied the entire disk, or to render the manually resized partition bootable?
2009/07/01
[ "https://serverfault.com/questions/34624", "https://serverfault.com", "https://serverfault.com/users/1509/" ]
I think there is a GUI way of doing this with Disk Utility, but I can't remember it off the top of my head. Anyway, you should be able to resize an HFS+ volume nondestructively from the command line. The article is kind of old, but the syntax hasn't changed, so it should still work. The standard warnings apply: have a backup, and don't try to resize the current boot drive; boot off a CD or a backup drive instead. You may also want to run sync and reboot right after the resize, as I don't know how much to trust Finder to pick up the change right away. <http://www.macworld.com/article/55274/2007/02/marchgeekfactor.html>
I'll second the idea of just partitioning the drive as you want and copying the data. See [How to Create a Bootable Backup of Mac OS X](http://www.bombich.com/mactips/image.html) for various methods to do so.
34,624
I am about to upgrade the hard disk of my MacBook (from 60GB to 320GB, which I know is below the limit of 500GB). I was able both to install OS X on the new drive and to transfer the old hard disk's partition to the new one with a [sysresccd](http://www.sysresccd.org/Main_Page) (Linux live disk) and `dd` (`dd if=/dev/sdb of=/dev/sda`, so the entire disk). So it's not a problem of hardware, limitations or restrictions. After having transferred the 60GB disk to the new one, every partition manager would show me that there is about 260GB left on the disk, but none would allow me to resize the partition. The `Disk Utility` on the OS X installation CD allows resizing, but also erases the data. I tried to copy (`dd`) the old partition into the new, bigger one (`dd if=/dev/sdb2 of=/dev/sda2`), but then the system would refuse to boot. What I have not yet tried is manually creating a partition, then copying all the contents (with hfsplus support on the Linux live disk), because I suppose this partition would not boot either. Is there a way to either resize the partition after having copied the entire disk, or to render the manually resized partition bootable?
2009/07/01
[ "https://serverfault.com/questions/34624", "https://serverfault.com", "https://serverfault.com/users/1509/" ]
Just cleanly partition the new disk to any size you like and copy the data over with [Carbon Copy Cloner.](http://www.bombich.com/) It will be bootable and have the size you want. You can do that on a running system, and you don't need any live CDs or anything, just a USB or FireWire interface for the new/second hard disk.
I'll second the idea of just partitioning the drive as you want and copying the data. See [How to Create a Bootable Backup of Mac OS X](http://www.bombich.com/mactips/image.html) for various methods to do so.
14,741,859
I have 2 tables: * **Table1** = names of gas stations (in pairs) * **Table2** = co-ordinate information (longitude and latitude, amongst other things) Example of **Table1**: ``` StationID1 StationID2 Name1 Name2 Lattitude1 Longitude1 Lattitude2 Longitude2 Distance ------------------------------------------------------------------------------------------------ 93353477 52452 FOO BAR NULL NULL NULL NULL NULL 93353527 52452 HENRY BENNY NULL NULL NULL NULL NULL 93353551 52452 GALE SAM NULL NULL NULL NULL NULL ``` Example of **Table2**: ``` IDInfo Name Lattitude Longitude ------------------------------------------- 93353477 BAR 37.929654 -87.029622 ``` I want to update this table with the coordinate information which resides in `table2`. I tried to do the following as per [SQL Server 2005: The multi-part identifier … could not be bound](https://stackoverflow.com/questions/4933888/sql-server-2005-the-multi-part-identifier-could-not-be-bound) ``` update table1 set t1.[Lattitude1] = t2.[Lattitude] from table1 t1 left join table2 t2 on (t1.StationID1 = t2.IDInfo) ``` I get the following error message: > > *Msg 4104, Level 16, State 1, Line 1 > > The multi-part identifier "t1.Lattitude1" could not be bound.* > > > However, if I do the following it works, which I can then store into another table. ``` SELECT t1.[StationID1] ,t1.[StationID2] ,t1.[Name1] ,t1.[Name2] ,t2.[Lattitude] AS [Lattitude1] ,t2.[Longitude] AS [Longitude1] ,t3.[Lattitude] AS [Lattitude2] ,t3.[Longitude] AS [Longitude2] from table1 t1 left join table2 t2 on (t1.StationID1 = t2.IDInfo) left join table2 t3 on (t1.StationID2 = t2.IDInfo) ``` I am very new to SQL and am having a difficult time understanding why some things work and others don't. Based on the link I posted above, my initial query should have worked, no? Perhaps I'm not thinking straight, as I have spent many hours trying this and I finally got help from a co-worker (she suggested the approach I mention above).
2013/02/07
[ "https://Stackoverflow.com/questions/14741859", "https://Stackoverflow.com", "https://Stackoverflow.com/users/668624/" ]
I think you can modify your UPDATE statement to reference the table alias in the UPDATE line. ``` update t1 set t1.[Lattitude1] = t2.[Lattitude] from table1 t1 left join table2 t2 on (t1.StationID1 = t2.IDInfo) ```
You need to change the inner table and give a different alias to the columns that are similar. This should work. ``` update table1 set [Lattitude1] = x.[lat] from ( SELECT IDInfo [id], Lattitude [lat] FROM table2 ) x WHERE StationID1 = x.[id] ``` In your particular case it's not necessary to rename Lattitude to lat, but if you end up updating a table with itself and force yourself into giving the columns different names, it will save you headaches down the road.
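The same fix can be sketched outside SQL Server with Python's built-in sqlite3 module. SQLite lacks SQL Server's `UPDATE alias ... FROM` join form (before version 3.33), so a correlated subquery stands in for the left join; the tiny schema below mirrors the question's tables, including its `Lattitude` spelling:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (StationID1 INTEGER, Lattitude1 REAL);
    CREATE TABLE table2 (IDInfo INTEGER, Lattitude REAL);
    INSERT INTO table1 VALUES (93353477, NULL);
    INSERT INTO table2 VALUES (93353477, 37.929654);
""")

# A correlated subquery does the equivalent of the left join:
# rows with no match in table2 get NULL, matched rows get the latitude.
con.execute("""
    UPDATE table1
    SET Lattitude1 = (SELECT t2.Lattitude
                      FROM table2 t2
                      WHERE t2.IDInfo = table1.StationID1)
""")

print(con.execute("SELECT Lattitude1 FROM table1").fetchone()[0])
# -> 37.929654
```

The correlated-subquery form is also valid T-SQL, so it would work on the asker's SQL Server as an alternative to the `update t1 ... from` rewrite above.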
56,637,126
I'm testing the cell content of a table view in XCUITest. In my case, I don't know the order of the cell text, nor am I allowed to set an accessibility id for the text. How can I get the index of a cell given the text inside? [![enter image description here](https://i.stack.imgur.com/J6a9R.png)](https://i.stack.imgur.com/J6a9R.png) For instance, if I wanted to get the index of the cell containing the text "Cell 2 Text", I would try something like this: ``` func testSample() { let app = XCUIApplication() let table = app.tables let cells = table.cells let indexOfCell2Text = cells.containing(.staticText, identifier: "Cell 2 Text").element.index(ofAccessibilityElement: "I dunno") print(indexOfCell2Text) } ``` I feel like I'm close, but I'm unsure. Can anyone suggest a solution? I apologize if this question has been asked before. I wasn't able to find anything specific about this. References I visited beforehand: <https://developer.apple.com/documentation/xctest/xcuielementquery/1500842-element> [How can I verify existence of text inside a table view row given its index in an XCTest UI Test?](https://stackoverflow.com/questions/32365327/how-can-i-verify-existence-of-text-inside-a-table-view-row-given-its-index-in-an/32389031) [iOS UI Testing tap on first index of the table](https://stackoverflow.com/questions/36566558/ios-ui-testing-tap-on-first-index-of-the-table)
2019/06/17
[ "https://Stackoverflow.com/questions/56637126", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8234508/" ]
The most reliable way really is to add the index into the accessibility identifier. But, you can't. Can you change the accessibility identifier of the cell instead of the text? Anyway, if you don't scroll your table view, you can handle it like that: ``` var idx = 0 for cell in table.cells.allElementsBoundByIndex { if cell.staticTexts["Text you are looking for"].exists { return idx } idx = idx + 1 } ``` Note that `idx` must be declared with `var`, not `let`, since it is mutated in the loop. Otherwise, the index you will use is related to cells which are displayed on the screen. So, after scrolling, the new first visible cell would become the cell at index 0 and would screw up your search.
``` for index in 0..<table.cells.count { if table.cells.element(boundBy: index).staticTexts["Your Text"].exists { return index } } ```
132,698
I used linear interpolation between points: ``` T = 1; w = 0.05; num = 1000; A = 1; pulse[x_] := A*(UnitStep[x + w*T/2] - UnitStep[x - w*T/2]) fun = Table[pulse[x] + 0.2*(RandomReal[] - 0.5), {x, -T/2, T/2, T/(num - 1)}]; funX = Table[i, {i, -T/2, T/2, T/(num - 1)}]; funINT = Interpolation[Transpose[{funX[[All]], fun[[All]]}], InterpolationOrder -> 1]; ListPlot[Transpose[{funX[[All]], fun[[All]]}], PlotRange -> All, Filling -> Axis, Frame -> True, FrameLabel -> {"Time [s]", "Amplitude [V]"}, PlotLegends -> {"Pulse"}, ImageSize -> Large] ``` This produces exactly what I want: [![plot](https://i.stack.imgur.com/2fVP4.png)](https://i.stack.imgur.com/2fVP4.png) But in order to calculate the coefficients of complex Fourier series, I have to calculate the following integral: ``` cn[k_] = NIntegrate[funINT[t]*Exp[-I*2*Pi*k*t/T]/T, {t, -T/2, T/2}, Method -> "Trapezoidal"]; ``` which is not working. I get an error saying that the integrand has evaluated to non-numerical values. Any ideas on what the problem is? EDIT: If there is a method to calculate the integral between the discrete points, that might be even better in my case. However, I couldn't find one.
2016/12/03
[ "https://mathematica.stackexchange.com/questions/132698", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19601/" ]
You could find a general symbolic Fourier coefficient for a linear polynomial and use the formula to integrate the interpolating function piecewise. If you're content with machine precision (double precision), then you can `Compile` it for really great speed. ``` (* Basic integral formulas *) ClearAll[cn0]; cn0[0][{t0_, t1_}, {x0_, x1_}] = (* k == 0 is a special case *) Integrate[(x0 + (x1 - x0)/(t1 - t0) (t - t0)) * Exp[-I*2*Pi*k*t/T]/T /. k -> 0, {t, t0, t1}]; cn0[k_][{t0_, t1_}, {x0_, x1_}] = Integrate[(x0 + (x1 - x0)/(t1 - t0) (t - t0)) * Exp[-I*2*Pi*k*t/T]/T, {t, t0, t1}]; (* Coefficient function *) Clear[cn]; cn[0] = Total@ MapThread[ (* map over interpolation segments *) cn0[0], {Partition[funX, 2, 1], Partition[fun, 2, 1]}]; cn[k_] = Total@ MapThread[ cn0[k], {Partition[funX, 2, 1], Partition[fun, 2, 1]}]; (* Compiled version *) cnC = With[ (* basic integrals *) {i0 = Function[{t0, t1, x0, x1}, (* k == 0 is a special case *) Evaluate@ Integrate[ (x0 + (x1 - x0)/(t1 - t0) (t - t0)) * Exp[-I*2*Pi*k*t/T]/T /. 
k -> 0, {t, t0, t1}]], i = Function[{t0, t1, x0, x1}, Evaluate@ Integrate[ (x0 + (x1 - x0)/(t1 - t0) (t - t0)) * Exp[-I*2*Pi*k*t/T]/T, {t, t0, t1}]]}, Compile[{{k, _Integer}, {t, _Real, 1}, {x, _Real, 1}}, Total@If[k == 0, i0[Most[t], Rest[t], Most[x], Rest[x]], (* vectorized for speed *) i[Most[t], Rest[t], Most[x], Rest[x]]] ]]; ``` Checks and comparison of speeds: ``` (* OP's method for comparison *) cn1[k_] := NIntegrate[funINT[t]*Exp[-I*2*Pi*k*t/T]/T, {t, -T/2, T/2}, Method -> "Trapezoidal"]; res1 = Table[cn1[k], {k, 0, 5}] // AbsoluteTiming res2 = Table[cn[k], {k, 0, 5}] // AbsoluteTiming res3 = Table[cnC[k, funX, fun], {k, 0, 5}] // AbsoluteTiming (* {6.41549, {0.0509924, 0.0485667 + 0.000384561 I, 0.0489479 + 0.000373475 I, 0.0489737 + 0.000976852 I, 0.0459565 + 0.00132399 I, 0.0452069 + 0.000833868 I}} {0.154046, {0.0509924, 0.0485667 + 0.000384561 I, 0.0489479 + 0.000373475 I, 0.0489737 + 0.000976852 I, 0.0459565 + 0.00132399 I, 0.0452069 + 0.000833868 I}} {0.001207, {0.0509924 + 0. I, 0.0485667 + 0.000384561 I, 0.0489479 + 0.000373475 I, 0.0489737 + 0.000976852 I, 0.0459565 + 0.00132399 I, 0.0452069 + 0.000833868 I}} *) res1 - res2 res2 - res3 (* {6.26145, {-6.245*10^-17, -1.27026*10^-10 - 3.56074*10^-10 I, 5.71595*10^-12 - 8.24057*10^-11 I, -5.08276*10^-10 - 1.50366*10^-11 I, 8.25427*10^-11 - 4.73669*10^-10 I, -3.07932*10^-10 + 1.72791*10^-10 I}} {0.152839, {-1.38778*10^-17 + 0. I, -3.7817*10^-15 - 1.49451*10^-14 I, 1.1019*10^-14 - 3.11627*10^-15 I, -1.10328*10^-15 - 2.62073*10^-15 I, -3.42781*10^-15 - 8.26162*10^-17 I, 1.88738*10^-15 - 1.13711*10^-15 I}} *) ``` So `cn` is almost 50 times faster than `NIntegrate` and `cnC` is over 100 times faster than `cn`.
There are several ways to approach answering this question. Not using NIntegrate -------------------- One approach is to use direct Trapezoidal formula integration as shown in [this answer](https://mathematica.stackexchange.com/a/5629/34008) of ["Is it possible to compute with the trapezoidal rule by numerical integration?"](https://mathematica.stackexchange.com/q/5627/34008). Using NIntegrate's range specification -------------------------------------- > > If there is a method to calculate the integral between the discrete points, that might be even better in my case. However, I couldn't find one. > > > `NIntegrate`'s range specification can take range splitting points. In this case we put the points `funX` in the range specification with `Evaluate[{t, Sequence @@ funX}]`. ``` Clear[cn] cn[k_] := NIntegrate[funINT[t]*Exp[-I*2*Pi*k*t/T]/T, Evaluate[{t, Sequence @@ funX}], PrecisionGoal -> 4, AccuracyGoal -> 6, MaxRecursion -> 1, Method -> {"GlobalAdaptive", "SymbolicProcessing" -> 0}]; Clear[rekon] rekon[t_, st_] := Sum[cn[n]*Exp[I*2*Pi*n*t/T], {n, -st, st}] In[74]:= AbsoluteTiming[res = rekon[t, 100];] Out[74]= {33.411, Null} In[75]:= AbsoluteTiming[res = rekon[t, 1000];] Out[75]= {360.723, Null} ``` Using a specially made rule for a list of functions --------------------------------------------------- We can apply the approach given in [this answer](https://mathematica.stackexchange.com/a/126041/34008) of the question ["How to calculate the numerical integral more efficiently?"](https://mathematica.stackexchange.com/q/126001/34008). We make a list of all the integrands formed in `rekon` and we integrate them at once with [`ArrayOfFunctionsRule`](https://github.com/antononcube/MathematicaForPrediction/blob/master/Misc/ArrayOfFunctionsRule.m). The computations are 8-10 times faster. 
``` Import["https://raw.githubusercontent.com/antononcube/\ MathematicaForPrediction/master/Misc/ArrayOfFunctionsRule.m"] Clear[n, st, t] With[{st = 1000}, factors = Table[Exp[I*2*Pi*n*t/T], {n, -st, st}]; funcs = Table[funINT[t]*Exp[-I*2*Pi*k*t/T]/T, {k, -st, st}]; ] AbsoluteTiming[ res = NIntegrate[1, Evaluate[{t, Sequence @@ funX}], PrecisionGoal -> 4, AccuracyGoal -> 6, MaxRecursion -> 20, Method -> {"UnitCubeRescaling", "FunctionalRangesOnly" -> True, Method -> {"GlobalAdaptive", "SingularityHandler" -> None, Method -> {ArrayOfFunctionsRule, "Functions" -> funcs}}}]; ] (* {39.5742, Null} *) res.factors ```
132,698
I used linear interpolation between points: ``` T = 1; w = 0.05; num = 1000; A = 1; pulse[x_] := A*(UnitStep[x + w*T/2] - UnitStep[x - w*T/2]) fun = Table[pulse[x] + 0.2*(RandomReal[] - 0.5), {x, -T/2, T/2, T/(num - 1)}]; funX = Table[i, {i, -T/2, T/2, T/(num - 1)}]; funINT = Interpolation[Transpose[{funX[[All]], fun[[All]]}], InterpolationOrder -> 1]; ListPlot[Transpose[{funX[[All]], fun[[All]]}], PlotRange -> All, Filling -> Axis, Frame -> True, FrameLabel -> {"Time [s]", "Amplitude [V]"}, PlotLegends -> {"Pulse"}, ImageSize -> Large] ``` This produces exactly what I want: [![plot](https://i.stack.imgur.com/2fVP4.png)](https://i.stack.imgur.com/2fVP4.png) But in order to calculate the coefficients of complex Fourier series, I have to calculate the following integral: ``` cn[k_] = NIntegrate[funINT[t]*Exp[-I*2*Pi*k*t/T]/T, {t, -T/2, T/2}, Method -> "Trapezoidal"]; ``` which is not working. I get an error saying that the integrand has evaluated to non-numerical values. Any ideas on what the problem is? EDIT: If there is a method to calculate the integral between the discrete points, that might be even better in my case. However, I couldn't find one.
2016/12/03
[ "https://mathematica.stackexchange.com/questions/132698", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19601/" ]
There are several ways to approach answering this question. Not using NIntegrate -------------------- One approach is to use direct Trapezoidal formula integration as shown in [this answer](https://mathematica.stackexchange.com/a/5629/34008) of ["Is it possible to compute with the trapezoidal rule by numerical integration?"](https://mathematica.stackexchange.com/q/5627/34008). Using NIntegrate's range specification -------------------------------------- > > If there is a method to calculate the integral between the discrete points, that might be even better in my case. However, I couldn't find one. > > > `NIntegrate`'s range specification can take range splitting points. In this case we put the points `funX` in the range specification with `Evaluate[{t, Sequence @@ funX}]`. ``` Clear[cn] cn[k_] := NIntegrate[funINT[t]*Exp[-I*2*Pi*k*t/T]/T, Evaluate[{t, Sequence @@ funX}], PrecisionGoal -> 4, AccuracyGoal -> 6, MaxRecursion -> 1, Method -> {"GlobalAdaptive", "SymbolicProcessing" -> 0}]; Clear[rekon] rekon[t_, st_] := Sum[cn[n]*Exp[I*2*Pi*n*t/T], {n, -st, st}] In[74]:= AbsoluteTiming[res = rekon[t, 100];] Out[74]= {33.411, Null} In[75]:= AbsoluteTiming[res = rekon[t, 1000];] Out[75]= {360.723, Null} ``` Using a specially made rule for a list of functions --------------------------------------------------- We can apply the approach given in [this answer](https://mathematica.stackexchange.com/a/126041/34008) of the question ["How to calculate the numerical integral more efficiently?"](https://mathematica.stackexchange.com/q/126001/34008). We make a list of all the integrands formed in `rekon` and we integrate them at once with [`ArrayOfFunctionsRule`](https://github.com/antononcube/MathematicaForPrediction/blob/master/Misc/ArrayOfFunctionsRule.m). The computations are 8-10 times faster. 
``` Import["https://raw.githubusercontent.com/antononcube/\ MathematicaForPrediction/master/Misc/ArrayOfFunctionsRule.m"] Clear[n, st, t] With[{st = 1000}, factors = Table[Exp[I*2*Pi*n*t/T], {n, -st, st}]; funcs = Table[funINT[t]*Exp[-I*2*Pi*k*t/T]/T, {k, -st, st}]; ] AbsoluteTiming[ res = NIntegrate[1, Evaluate[{t, Sequence @@ funX}], PrecisionGoal -> 4, AccuracyGoal -> 6, MaxRecursion -> 20, Method -> {"UnitCubeRescaling", "FunctionalRangesOnly" -> True, Method -> {"GlobalAdaptive", "SingularityHandler" -> None, Method -> {ArrayOfFunctionsRule, "Functions" -> funcs}}}]; ] (* {39.5742, Null} *) res.factors ```
Michael's and Anton's answers involve the manual splitting of the piecewise linear functions involved in the computation of the Fourier coefficients at the interpolation points. Yet another way to tell *Mathematica* to automatically split the integrand before integrating is to use the option setting `Method -> "InterpolationPointsSubdivision"`, like so: ``` With[{T = 1, w = 0.05, num = 1000, A = 1}, BlockRandom[SeedRandom["many pulses"]; (* for reproducibility *) funINT = Interpolation[Table[{x, A (UnitStep[x + w T/2] - UnitStep[x - w T/2]) + RandomReal[{-1, 1} 0.1]}, {x, -T/2, T/2, T/(num - 1)}], InterpolationOrder -> 1]]]; With[{T = 1}, Table[NIntegrate[funINT[t] Exp[-2 π I k t/T]/T, {t, -T/2, T/2}, Method -> "InterpolationPointsSubdivision"], {k, 0, 5}]] {0.0511259271805279, 0.05148235941558258 - 0.001596805628001697*I, 0.05010102394935553 - 0.0007387015194085259*I, 0.05130250748609571 + 0.0018324339082365982*I, 0.04620591693520239 - 0.00029540402973357425*I, 0.04467904044269826 + 0.0016773681935466633*I} ```
132,698
I used linear interpolation between points: ``` T = 1; w = 0.05; num = 1000; A = 1; pulse[x_] := A*(UnitStep[x + w*T/2] - UnitStep[x - w*T/2]) fun = Table[pulse[x] + 0.2*(RandomReal[] - 0.5), {x, -T/2, T/2, T/(num - 1)}]; funX = Table[i, {i, -T/2, T/2, T/(num - 1)}]; funINT = Interpolation[Transpose[{funX[[All]], fun[[All]]}], InterpolationOrder -> 1]; ListPlot[Transpose[{funX[[All]], fun[[All]]}], PlotRange -> All, Filling -> Axis, Frame -> True, FrameLabel -> {"Time [s]", "Amplitude [V]"}, PlotLegends -> {"Pulse"}, ImageSize -> Large] ``` This produces exactly what I want: [![plot](https://i.stack.imgur.com/2fVP4.png)](https://i.stack.imgur.com/2fVP4.png) But in order to calculate the coefficients of complex Fourier series, I have to calculate the following integral: ``` cn[k_] = NIntegrate[funINT[t]*Exp[-I*2*Pi*k*t/T]/T, {t, -T/2, T/2}, Method -> "Trapezoidal"]; ``` which is not working. I get an error saying that the integrand has evaluated to non-numerical values. Any ideas on what the problem is? EDIT: If there is a method to calculate the integral between the discrete points, that might be even better in my case. However, I couldn't find one.
2016/12/03
[ "https://mathematica.stackexchange.com/questions/132698", "https://mathematica.stackexchange.com", "https://mathematica.stackexchange.com/users/19601/" ]
You could find a general symbolic Fourier coefficient for a linear polynomial and use the formula to integrate the interpolating function piecewise. If you're content with machine precision (double precision), then you can `Compile` it for really great speed. ``` (* Basic integral formulas *) ClearAll[cn0]; cn0[0][{t0_, t1_}, {x0_, x1_}] = (* k == 0 is a special case *) Integrate[(x0 + (x1 - x0)/(t1 - t0) (t - t0)) * Exp[-I*2*Pi*k*t/T]/T /. k -> 0, {t, t0, t1}]; cn0[k_][{t0_, t1_}, {x0_, x1_}] = Integrate[(x0 + (x1 - x0)/(t1 - t0) (t - t0)) * Exp[-I*2*Pi*k*t/T]/T, {t, t0, t1}]; (* Coefficient function *) Clear[cn]; cn[0] = Total@ MapThread[ (* map over interpolation segments *) cn0[0], {Partition[funX, 2, 1], Partition[fun, 2, 1]}]; cn[k_] = Total@ MapThread[ cn0[k], {Partition[funX, 2, 1], Partition[fun, 2, 1]}]; (* Compiled version *) cnC = With[ (* basic integrals *) {i0 = Function[{t0, t1, x0, x1}, (* k == 0 is a special case *) Evaluate@ Integrate[ (x0 + (x1 - x0)/(t1 - t0) (t - t0)) * Exp[-I*2*Pi*k*t/T]/T /. 
k -> 0, {t, t0, t1}]], i = Function[{t0, t1, x0, x1}, Evaluate@ Integrate[ (x0 + (x1 - x0)/(t1 - t0) (t - t0)) * Exp[-I*2*Pi*k*t/T]/T, {t, t0, t1}]]}, Compile[{{k, _Integer}, {t, _Real, 1}, {x, _Real, 1}}, Total@If[k == 0, i0[Most[t], Rest[t], Most[x], Rest[x]], (* vectorized for speed *) i[Most[t], Rest[t], Most[x], Rest[x]]] ]]; ``` Checks and comparison of speeds: ``` (* OP's method for comparison *) cn1[k_] := NIntegrate[funINT[t]*Exp[-I*2*Pi*k*t/T]/T, {t, -T/2, T/2}, Method -> "Trapezoidal"]; res1 = Table[cn1[k], {k, 0, 5}] // AbsoluteTiming res2 = Table[cn[k], {k, 0, 5}] // AbsoluteTiming res3 = Table[cnC[k, funX, fun], {k, 0, 5}] // AbsoluteTiming (* {6.41549, {0.0509924, 0.0485667 + 0.000384561 I, 0.0489479 + 0.000373475 I, 0.0489737 + 0.000976852 I, 0.0459565 + 0.00132399 I, 0.0452069 + 0.000833868 I}} {0.154046, {0.0509924, 0.0485667 + 0.000384561 I, 0.0489479 + 0.000373475 I, 0.0489737 + 0.000976852 I, 0.0459565 + 0.00132399 I, 0.0452069 + 0.000833868 I}} {0.001207, {0.0509924 + 0. I, 0.0485667 + 0.000384561 I, 0.0489479 + 0.000373475 I, 0.0489737 + 0.000976852 I, 0.0459565 + 0.00132399 I, 0.0452069 + 0.000833868 I}} *) res1 - res2 res2 - res3 (* {6.26145, {-6.245*10^-17, -1.27026*10^-10 - 3.56074*10^-10 I, 5.71595*10^-12 - 8.24057*10^-11 I, -5.08276*10^-10 - 1.50366*10^-11 I, 8.25427*10^-11 - 4.73669*10^-10 I, -3.07932*10^-10 + 1.72791*10^-10 I}} {0.152839, {-1.38778*10^-17 + 0. I, -3.7817*10^-15 - 1.49451*10^-14 I, 1.1019*10^-14 - 3.11627*10^-15 I, -1.10328*10^-15 - 2.62073*10^-15 I, -3.42781*10^-15 - 8.26162*10^-17 I, 1.88738*10^-15 - 1.13711*10^-15 I}} *) ``` So `cn` is almost 50 times faster than `NIntegrate` and `cnC` is over 100 times faster than `cn`.
Michael's and Anton's answers involve the manual splitting of the piecewise linear functions involved in the computation of the Fourier coefficients at the interpolation points. Yet another way to tell *Mathematica* to automatically split the integrand before integrating is to use the option setting `Method -> "InterpolationPointsSubdivision"`, like so: ``` With[{T = 1, w = 0.05, num = 1000, A = 1}, BlockRandom[SeedRandom["many pulses"]; (* for reproducibility *) funINT = Interpolation[Table[{x, A (UnitStep[x + w T/2] - UnitStep[x - w T/2]) + RandomReal[{-1, 1} 0.1]}, {x, -T/2, T/2, T/(num - 1)}], InterpolationOrder -> 1]]]; With[{T = 1}, Table[NIntegrate[funINT[t] Exp[-2 π I k t/T]/T, {t, -T/2, T/2}, Method -> "InterpolationPointsSubdivision"], {k, 0, 5}]] {0.0511259271805279, 0.05148235941558258 - 0.001596805628001697*I, 0.05010102394935553 - 0.0007387015194085259*I, 0.05130250748609571 + 0.0018324339082365982*I, 0.04620591693520239 - 0.00029540402973357425*I, 0.04467904044269826 + 0.0016773681935466633*I} ```
13,003,257
If I have some kind of tree, and I need to find a specific node in that tree which is essentially null (the struct is not initialised / malloc'ed yet). If I want to return that very specific uninitialised struct place to be able to initialise it, would something like: ``` if (parentNode->childNode == NULL) return parentNode->childNode; ``` work? Or would it return NULL? Would I need to return the parent node instead? The problem I have is that this specific node may be a child of multiple different struct types. I'm not sure how to overcome this in C, since it does not support generics. Does anyone have any ideas?
2012/10/21
[ "https://Stackoverflow.com/questions/13003257", "https://Stackoverflow.com", "https://Stackoverflow.com/users/680441/" ]
You can not return NULL. It will not be an identifiable location. What you can do however is: * `malloc` the node at the point where you find it and return the pointer returned by malloc, * you can `return &(parentNode->childNode)` (a pointer to the childNode pointer) which the caller of the function can use to set it to the new node, or * `return parentNode` I don't see how that problem could be mitigated by generics.
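A minimal sketch of the first option with a hypothetical node type (the names here are illustrative, not from the question): allocate the missing child at the point where the empty slot is found, and return the pointer returned by `malloc`:

```c
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *child;
} Node;

/* Option 1: allocate the missing child on the spot and return it.
 * If the child already exists, the existing one is returned. */
Node *get_or_create_child(Node *parent, int value) {
    if (parent->child == NULL) {
        Node *child = malloc(sizeof *child);
        if (child == NULL)
            return NULL;              /* allocation failed */
        child->value = value;
        child->child = NULL;
        parent->child = child;
    }
    return parent->child;
}
```

This keeps allocation inside the search function, so the caller never has to deal with an "empty slot" at all.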
You can return that `parentNode` and have its `child` initialized elsewhere.
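A sketch of that approach with a hypothetical node type (names are illustrative, not from the question): the search returns the parent whose child slot is empty, and a separate caller-side step does the initialisation through it:

```c
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *child;
} Node;

/* Returns the parent whose child slot is still empty,
 * or NULL if the slot is already occupied. */
Node *parent_with_open_slot(Node *parent) {
    if (parent != NULL && parent->child == NULL)
        return parent;
    return NULL;
}

/* Caller-side initialisation through the returned parent. */
Node *attach_child(Node *parent, int value) {
    Node *child = malloc(sizeof *child);
    if (child == NULL)
        return NULL;
    child->value = value;
    child->child = NULL;
    parent->child = child;
    return child;
}
```

The split keeps the search function free of allocation policy; only the caller decides when and how the child is created.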
13,003,257
If I have some kind of tree, and I need to find a specific node in that tree which is essentially null (the struct is not initialised / malloc'ed yet). If I want to return that very specific uninitialised struct place to be able to initialise it, would something like: ``` if (parentNode->childNode == NULL) return parentNode->childNode; ``` work? Or would it return NULL? Would I need to return the parent node instead? The problem I have is that this specific node may be a child of multiple different struct types. I'm not sure how to overcome this in C, since it does not support generics. Does anyone have any ideas?
2012/10/21
[ "https://Stackoverflow.com/questions/13003257", "https://Stackoverflow.com", "https://Stackoverflow.com/users/680441/" ]
You can not return NULL. It will not be an identifiable location. What you can do however is: * `malloc` the node at the point where you find it and return the pointer returned by malloc, * you can `return &(parentNode->childNode)` (a pointer to the childNode pointer) which the caller of the function can use to set it to the new node, or * `return parentNode` I don't see how that problem could be mitigated by generics.
You can return NULL, but it would make no sense. You can return a *pointer* to the pointer whose value is NULL: ``` typedef struct link { struct link *next; } LL; LL **getTailPP(LL **ppHead) { for( ; *ppHead; ppHead = &(*ppHead)->next ) {;} return ppHead; } ```
13,003,257
If I have some kind of tree, and I need to find a specific node in that tree which is essentially null (the struct is not initialised / malloc'ed yet). If I want to return that very specific uninitialised struct place to be able to initialise it, would something like: ``` if (parentNode->childNode == NULL) return parentNode->childNode; ``` work? Or would it return NULL? Would I need to return the parent node instead? The problem I have is that this specific node may be a child of multiple different struct types. I'm not sure how to overcome this in C, since it does not support generics. Does anyone have any ideas?
2012/10/21
[ "https://Stackoverflow.com/questions/13003257", "https://Stackoverflow.com", "https://Stackoverflow.com/users/680441/" ]
You can not return NULL. It will not be an identifiable location. What you can do however is: * `malloc` the node at the point where you find it and return the pointer returned by malloc, * you can `return &(parentNode->childNode)` (a pointer to the childNode pointer) which the caller of the function can use to set it to the new node, or * `return parentNode` I don't see how that problem could be mitigated by generics.
If the child node is not initialised yet, then you cannot return a pointer to it... how can you return a pointer to something that doesn't exist yet? What you need to do is return a pointer to the parent node's pointer, which can then be changed to point to the newly-allocated memory for the child node.
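A minimal sketch of that pointer-to-pointer idea with a hypothetical binary-search-tree node (the type and function names are illustrative, not from the question): the function returns the address of the NULL child pointer, and the caller assigns the new allocation through it:

```c
#include <stdlib.h>

typedef struct TNode {
    int key;
    struct TNode *left;
    struct TNode *right;
} TNode;

/* Walks down a binary search tree and returns the address of the
 * NULL pointer slot where `key` would be inserted; the caller can
 * then allocate a node and assign through that slot. */
TNode **find_insert_slot(TNode **root, int key) {
    while (*root != NULL)
        root = (key < (*root)->key) ? &(*root)->left : &(*root)->right;
    return root;
}
```

Because the returned `TNode **` addresses the slot itself (which may live in the parent node or be the root variable), the caller needs no special case for an empty tree.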
17,268,287
I am developing a website and currently I am stuck in the registration process. When I ask users to register to my website, they need to choose the number of people that a team will have. When I select the number of people in the selection box, my website displays input fields according to the number of people that I selected. This works fine in Mozilla Firefox (Mozilla/5.0 (X11; U; Linux x86\_64; en-US; rv:1.9.2.24) Gecko/20111109 CentOS/3.6.24-3.el6.centos Firefox/3.6.24). Nonetheless, the other web browsers do not display the input fields according to the number of people that I selected. This is the following code: ``` <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.1/jquery.min.js"> </script> <script> $(document).ready(function(){ $("#muestra1").click(function(){ $("#loscuatro").hide(); }); }); $(document).ready(function(){ $("#muestra2").click(function(){ $("#loscuatro").show(); }); }); </script> </head> <body> <div id="container" class="ltr"> <li><span> <div id="container" class="ltr"> <h2>Please give us your name</h2> <ul> <li> <span> Number of team members <select name="integrantes" id="integrantes" > <option name="member1" value="1" id="muestra1" >1</option> <option name="member2" value="2" id="muestra2">2</option> </li><hr /></span> <b><h3>Leader</h3></b> <li> <span> Name* <input id="Field0" /></span> </li><hr /> <div id="loscuatro"> <b><h3>Volunteer</h3></b> <li> <span> Name* <input id="Field0" /></span> </li><hr /> </div><!--end of volunteer loscuatros--> </div> </body> </html> ``` Please, I would be very glad to receive support to this code because I've looked at it several times and I can't find the bug. I remark that this is not the final design of my registration part. Cheers.
2013/06/24
[ "https://Stackoverflow.com/questions/17268287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2514936/" ]
Here is a [**Live demo**](http://jsfiddle.net/mplungjan/EyG6g/) You need to change your option click to the change event of the select. You can also drop the name and ID of the options: ``` $(function(){ $("#integrantes").on("change",function(){ $("#loscuatro").toggle(this.selectedIndex==1); // second option is option 1 }); $("#integrantes").change(); // set to whatever it is at load time }); ``` and close the select ``` <select name="integrantes" id="integrantes" > <option value="1">1</option> <option value="2">2</option> </select> ``` Lastly run your code through a [validator](http://validator.w3.org/) You have unwrapped LIs and two divs with the same ID and as far as I can tell without seeing the CSS, useless spans. Here is a cleaned up version [**Live demo**](http://jsfiddle.net/mplungjan/eTcUH/) ``` <div id="container" class="ltr"> <h2>Please give us your name</h2> <ul> <li> Number of team members <select name="integrantes" id="integrantes" > <option value="1">1</option> <option value="2">2</option> </select> </li> </ul> <h3>Leader</h3> <ul> <li> Name* <input id="Field0" /><hr /> </li> </ul> <div id="loscuatro"> <h3>Volunteer</h3> <ul> <li> Name* <input id="Field0" /><hr /> </li> </ul> </div><!--end of volunteer loscuatros--> </div> ```
I don't think `option` is an element that can be clicked (at least in a cross-browser compatible way). It's better to rely on the `change` event of the `select` element: ``` $(document).ready(function(){ $("#integrantes").change(function(){ var val = $(this).val(); if(val=='1') { $('#loscuatro').hide(); } else { $('#loscuatro').show(); } }); }); ``` Also close your `<select>` element. Lastly, check the `#loscuatro` div element: `li` must be a child of a `ul` or `ol` element.
17,268,287
I am developing a website and currently I am stuck in the registration process. When I ask users to register to my website, they need to choose the number of people that a team will have. When I select the number of people in the selection box, my website displays input fields according to the number of people that I selected. This works fine in Mozilla Firefox (Mozilla/5.0 (X11; U; Linux x86\_64; en-US; rv:1.9.2.24) Gecko/20111109 CentOS/3.6.24-3.el6.centos Firefox/3.6.24). Nonetheless, the other web browsers do not display the input fields according to the number of people that I selected. This is the following code: ``` <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.10.1/jquery.min.js"> </script> <script> $(document).ready(function(){ $("#muestra1").click(function(){ $("#loscuatro").hide(); }); }); $(document).ready(function(){ $("#muestra2").click(function(){ $("#loscuatro").show(); }); }); </script> </head> <body> <div id="container" class="ltr"> <li><span> <div id="container" class="ltr"> <h2>Please give us your name</h2> <ul> <li> <span> Number of team members <select name="integrantes" id="integrantes" > <option name="member1" value="1" id="muestra1" >1</option> <option name="member2" value="2" id="muestra2">2</option> </li><hr /></span> <b><h3>Leader</h3></b> <li> <span> Name* <input id="Field0" /></span> </li><hr /> <div id="loscuatro"> <b><h3>Volunteer</h3></b> <li> <span> Name* <input id="Field0" /></span> </li><hr /> </div><!--end of volunteer loscuatros--> </div> </body> </html> ``` Please, I would be very glad to receive support to this code because I've looked at it several times and I can't find the bug. I remark that this is not the final design of my registration part. Cheers.
2013/06/24
[ "https://Stackoverflow.com/questions/17268287", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2514936/" ]
Here is a [**Live demo**](http://jsfiddle.net/mplungjan/EyG6g/) You need to change your option click to the change event of the select. You can also drop the name and ID of the options: ``` $(function(){ $("#integrantes").on("change",function(){ $("#loscuatro").toggle(this.selectedIndex==1); // second option is option 1 }); $("#integrantes").change(); // set to whatever it is at load time }); ``` and close the select ``` <select name="integrantes" id="integrantes" > <option value="1">1</option> <option value="2">2</option> </select> ``` Lastly run your code through a [validator](http://validator.w3.org/) You have unwrapped LIs and two divs with the same ID and as far as I can tell without seeing the CSS, useless spans. Here is a cleaned up version [**Live demo**](http://jsfiddle.net/mplungjan/eTcUH/) ``` <div id="container" class="ltr"> <h2>Please give us your name</h2> <ul> <li> Number of team members <select name="integrantes" id="integrantes" > <option value="1">1</option> <option value="2">2</option> </select> </li> </ul> <h3>Leader</h3> <ul> <li> Name* <input id="Field0" /><hr /> </li> </ul> <div id="loscuatro"> <h3>Volunteer</h3> <ul> <li> Name* <input id="Field0" /><hr /> </li> </ul> </div><!--end of volunteer loscuatros--> </div> ```
There are several things wrong with the HTML: ``` <body> <div id="container" class="ltr"> <h2>Please give us your name</h2> <ul> <li> <span>Number of team members <select name="integrantes" id="integrantes" > <option name="member1" value="1" id="muestra1" >1</option> <option name="member2" value="2" id="muestra2">2</option> </select> </span> </li> <b><h3>Leader</h3></b> <li> <span> Name* <input id="Field0" /></span> </li> </ul> </div> <div id="loscuatro"> <b><h3>Volunteer</h3></b> <ul> <li> <span> Name* <input id="Field0" /> </span> </li><hr /> </ul> </div><!--end of volunteer loscuatros--> </body> ``` * You cannot have two elements with the same `id`. * Most HTML tags need to have a closing tag aka :`<li>....</li>` * You cannot have an `<li>` outside of a `<ul>` or `<ol>` This is the correct form: ``` <ul> <li>...</li> <li>...</li> </ul> ```
3,935,641
How can I add a close button to a draggable/resizable div? I understand that I am essentially describing a dialog, but I have to be able to take advantage of a few of the properties of resizable/draggable (such as containment) that are not a part of dialog. Any ideas?
2010/10/14
[ "https://Stackoverflow.com/questions/3935641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476055/" ]
You can use multiple class names (a perfectly normal thing to do), but are only allowed one class attribute on your HTML element. Do this instead: ``` <a href="#" class="paren defaul">text</a> ```
Following on from RedFilters' answer you could of course extend your class selectors by using the [angular](https://angularjs.org/) ng-class attribute as follows: ```html <a href="#" class="paren defaul" ng-class="['tea', 'mat', 'thirs']">text</a> ``` The resulting html would then be: ```html <a href="#" class="paren defaul tea mat thirs">text</a> ``` Might come in useful to get around a tslint "line too long" error :-)
3,935,641
How can I add a close button to a draggable/resizable div? I understand that I am essentially describing a dialog, but I have to be able to take advantage of a few of the properties of resizable/draggable (such as containment) that are not a part of dialog. Any ideas?
2010/10/14
[ "https://Stackoverflow.com/questions/3935641", "https://Stackoverflow.com", "https://Stackoverflow.com/users/476055/" ]
You can use multiple class names (a perfectly normal thing to do), but are only allowed one class attribute on your HTML element. Do this instead: ``` <a href="#" class="paren defaul">text</a> ```
There's no need for two class statements, simply: ``` <a href="#" class="paren defaul">text</a> ``` Now, in order to handle this in CSS you need to do this: ``` .paren.default{ } ``` ...Whithout spaces between the two class selectors. Cheers!
511,515
We have a scenario One Main e-commerce website - currently attracting a lot of visitors. Three sub "brand specific" sites which will hang off this site - each of these sites will potentiall have the same level of traffic over time. The client requires order processing for each brand site to happen in one place (i.e. one backoffice). What topology should we choose? We think that perhaps having a main sql server with both reads and writes from the backoffice, and replicate that data to "brand specific" sql server instances might work. Each brand specific site would have its own dedicated sql server for Frontoffice "reads". Any writes we perform would go back to the main database to keep stock concurrent Any thoughts? Future scalability is a major factor.
2009/02/04
[ "https://Stackoverflow.com/questions/511515", "https://Stackoverflow.com", "https://Stackoverflow.com/users/42446/" ]
Without having a more detailed understanding of how your application is to function it is difficult to provide you with clear direction. Your proposed implementation of having a central server (Publisher) supporting reads and writes, with a number of additional site specific servers (subscribers) for reads only, is certainly plausible. This has the added benefit of giving you the flexibility to replicate only the tables that would be required for read queries i.e. your central server will likely manage data such as supplier information, billing etc. that may not need to be pushed to subscribers. Your central server is likely going to be your sticking point if any, as all other servers will be directing write activity back to it. The location of your distributed sites i.e. how far they are from the central server will also affect the transactional latency of your replicated environment. If you wish to present all of your database data as read only at the distributed sites then you may wish to consider using Log Shipping for this. The disadvantage of this implementation is that your application needs to be aware that only read activity can be processed on the local server and all write activity needs to be routed to the central server. I hope this helps but please feel free to pose additional questions. Cheers, John
Not a specific answer to your question, but [Youtube scaling](http://video.google.com/videoplay?docid=-6304964351441328559) is an interesting video about how YouTube scaled. Perhaps it will give you some ideas.
67,359,673
I'm using *Entity Framework* and *Dynamic Linq Core* to perform some dynamic queries at run time. I have a question on how to write dynamic linq statements to output columns of counts where each column is a field item of another field. Say I have a table with 3 columns: ID, Gender, and Age (assuming they are only in the 10s). ``` ID | Gender | Age 01 | male | 20 02 | female | 30 ... some thousands of rows ``` I would like to count the number of people in each gender (groupBy Gender), by their age group. ``` [{Gender:"male", "20": 120, "30": 200, "40": 300}, {Gender:"female", "20": 300, "30": 200, "40": 1000 }] ``` I tried to group by age, but this doesn't give exactly what I wanted in the above format, because each gender and age combo becomes a new array item. ``` var query = db.someDB .GroupBy("new(Gender, Age)") .Select("new(Key.Gender as Gender, Key.Age as Age, Count() as value)"); ``` I'm restricted to use dynamic linq core because in my real application, the gender and age fields are up to the user to decide, so their field names will change at run time. How would you do it?
2021/05/02
[ "https://Stackoverflow.com/questions/67359673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11735492/" ]
Let's say your query returns a list of objects of the following class ``` public class Data { public string Gender { get; set;} public int Age { get; set;} public int Value { get; set;} } ``` ```cs List<Data> results = ... // query result var resultsV2 = results.GroupBy(r => r.Gender); var list = new List<IDictionary<string, object>>(); foreach(var g in resultsV2) { IDictionary<string, object> obj = new ExpandoObject(); var data = g.Select(x => x); obj["Gender"] = $"{g.Key}"; foreach(var d in data) { var propName = $"{d.Age}"; obj[propName] = $"{d.Value}"; } list.Add(obj); } string jsonString = JsonConvert.SerializeObject(list); ``` [Fiddler](https://dotnetfiddle.net/rPMYN1) input: ``` new List<Data> { new Data() { Gender = "Male", Age = 20, Value = 5 }, new Data() { Gender = "Male", Age = 30, Value = 3 }, new Data() { Gender = "Female", Age = 20, Value = 9 } }; ``` output: ``` [{"Gender":"Male","20":"5","30":"3"},{"Gender":"Female","20":"9"}] ```
You could leverage the JSON.Net types in your LINQ query. [JObject](https://www.newtonsoft.com/json/help/html/M_Newtonsoft_Json_Linq_JObject__ctor_3.htm) accepts a collection of [JProperty](https://www.newtonsoft.com/json/help/html/M_Newtonsoft_Json_Linq_JProperty__ctor_1.htm) and [JArray](https://www.newtonsoft.com/json/help/html/M_Newtonsoft_Json_Linq_JArray__ctor_3.htm) accepts a collection of JObject. Those can be constructed in one LINQ query: ``` var table = new List<Person>{ new Person(30,"M"), new Person(30,"M"), new Person(30,"M"), new Person(20,"M"), new Person(40,"M"), new Person(30,"V"), new Person(50,"V"), new Person(50,"V") }; var data = from p1 in table group p1 by p1.Gender into genderGrp // Build our JObject select new JObject( new JProperty( "Gender", genderGrp.Key) // and the rest of the properties will be the collection of Age , ( from p2 in genderGrp // over the gender group p2 by p2.Age into ageGrp select new JProperty( ageGrp.Key.ToString() // needs to be a string , ageGrp.Count())) // aggregate count ); var jtoken = new JArray(data); // the collection of JObjects goes in the JArray Console.WriteLine(jtoken.ToString()); ``` The result will be: ```json [ { "Gender": "M", "30": 3, "20": 1, "40": 1 }, { "Gender": "V", "30": 1, "50": 2 } ] ```
2,436,542
I know that on Mac OS X / POSIX systems, there is atomic compare-and-swap for C/C++ code via g++. However, I don't need the compare -- I just want to atomically swap two values. Is there an atomic swap operation available? [Everything I can find is atomic\_compare\_and\_swap ... and I just want to do the swap, without comparing]. Thanks!
2010/03/12
[ "https://Stackoverflow.com/questions/2436542", "https://Stackoverflow.com", "https://Stackoverflow.com/users/247265/" ]
The `lock xchg` Intel assembly instruction probably achieves what you want, but I don't think there is a GCC wrapper function to make it portable. Therefore you're stuck using inline assembly (not portable) or using compare-and-swap and forcing the compare to be true (inefficient). Hope this helps :-)
Don't think there is. Here's the reference, btw: <http://developer.apple.com/Mac/library/documentation/DriversKernelHardware/Reference/libkern_ref/OSAtomic_h/index.html#//apple_ref/doc/header/user_space_OSAtomic.h>
2,436,542
I know that on Mac OS X / POSIX systems, there is atomic compare-and-swap for C/C++ code via g++. However, I don't need the compare -- I just want to atomically swap two values. Is there an atomic swap operation available? [Everything I can find is atomic\_compare\_and\_swap ... and I just want to do the swap, without comparing]. Thanks!
2010/03/12
[ "https://Stackoverflow.com/questions/2436542", "https://Stackoverflow.com", "https://Stackoverflow.com/users/247265/" ]
GCC does provide this operation on some processors, under the (confusingly named) `__sync_lock_test_and_set`. From the GCC documentation: > > > ``` > This builtin, as described by Intel, is not a traditional > test-and-set operation, but rather an atomic exchange operation. > It writes VALUE into `*PTR', and returns the previous contents of > `*PTR'. > > Many targets have only minimal support for such locks, and do not > support a full exchange operation. In this case, a target may > support reduced functionality here by which the _only_ valid value > to store is the immediate constant 1. The exact value actually > stored in `*PTR' is implementation defined. > > ``` > > However the full swap operation is supported on x86-32 and x86-64, effectively providing the `lock xchg` wrapper you would otherwise need to write.
Don't think there is. Here's the reference, btw: <http://developer.apple.com/Mac/library/documentation/DriversKernelHardware/Reference/libkern_ref/OSAtomic_h/index.html#//apple_ref/doc/header/user_space_OSAtomic.h>
2,436,542
I know that on Mac OS X / POSIX systems, there is atomic compare-and-swap for C/C++ code via g++. However, I don't need the compare -- I just want to atomically swap two values. Is there an atomic swap operation available? [Everything I can find is atomic\_compare\_and\_swap ... and I just want to do the swap, without comparing]. Thanks!
2010/03/12
[ "https://Stackoverflow.com/questions/2436542", "https://Stackoverflow.com", "https://Stackoverflow.com/users/247265/" ]
The `lock xchg` Intel assembly instruction probably achieves what you want, but I don't think there is a GCC wrapper function to make it portable. Therefore you're stuck using inline assembly (not portable) or using compare-and-swap and forcing the compare to be true (inefficient). Hope this helps :-)
GCC does provide this operation on some processors, under the (confusingly named) `__sync_lock_test_and_set`. From the GCC documentation: > > > ``` > This builtin, as described by Intel, is not a traditional > test-and-set operation, but rather an atomic exchange operation. > It writes VALUE into `*PTR', and returns the previous contents of > `*PTR'. > > Many targets have only minimal support for such locks, and do not > support a full exchange operation. In this case, a target may > support reduced functionality here by which the _only_ valid value > to store is the immediate constant 1. The exact value actually > stored in `*PTR' is implementation defined. > > ``` > > However the full swap operation is supported on x86-32 and x86-64, effectively providing the `lock xchg` wrapper you would otherwise need to write.
45,888
I use tabs in C/C++ but want to convert those tabs to spaces when copying code that I'm going to paste into an external program. However, I want the original code to retain tabs. Additionally (if possible), I'd like to keep the tabs when copying/pasting within Emacs. I sort of adapted the code at [this link](https://emacs.stackexchange.com/a/18425/20317) and tried: ``` (advice-add 'kill-ring-save :before 'untabify) (advice-add 'kill-ring-save :after 'tabify) ``` but it doesn't copy all the text and converts away from tabs when copying and pasting within Emacs. For instance, if I use the above setup on ``` int main() { int i = 5; } ``` it only copies ``` int main() { int i ``` I'm not a Lisp programmer and am relatively new to Emacs, so forgive me if I'm missing something obvious here.
2018/11/10
[ "https://emacs.stackexchange.com/questions/45888", "https://emacs.stackexchange.com", "https://emacs.stackexchange.com/users/20317/" ]
It's not documented well enough, and possibly poorly named, but the `hideshow` function `hs-hide-level` will collapse all the blocks within the current block. That is, if your cursor is on the `class ...` line (or below it) in your example input, it will give you something very similar to your desired output. Since `hideshow` works with indentation I've found that it works best with *well formatted* code or text. Given: ```python class LongClassIDidNotWrite(): def method1(): print("") print("") print("") def method2(): print("") print("") print("") def method3(): print("") print("") print("") ``` Placing the cursor in "Long" on the `class ...` line and calling `M-x hs-hide-level` or `C-c @ C-l` will give you: ```python class LongClassIDidNotWrite(): def method1():...) def method2():...) def method3():...) ``` This works on any point that is part of the `class ...` block. That is, that line, on the blank lines, and on the `def ...` lines. If you try it on the `print ...` lines it won't work, as you're in a new block, and there are no child blocks to fold. `hs-hide-level` also runs a hook `hs-hide-hook` which could prove useful. There is also a `hs-hide-level-recursive` which will recursively hide blocks in a region. It does not run `hs-hide-hook`, according to its documentation.
I believe the method given in the question, although not efficient, is sufficient. Let me explain it in greater detail. Assuming you have `evil-mode` enabled: 1. Toggle `hs-minor-mode` to enable folding 2. Place your cursor at the first column of a `def` statement line. This can be done by pressing `0` 3. Record a macro which folds the current section and then moves to the next section: `qazc]]q`. * `q`: begin recording a macro * `a`: name the new macro "a" * `zc`: close the current section (fold the section) * `]]`: go to the next section via `evil-forward-section-begin` * `q`: finish recording the macro 4. Replay the macro 100 times: `100@a` * `100`: the number of times to call the macro * `@a`: call macro with name "a" Replace 100 with whatever number of folds you want to make. Or, simply repeat the macro call with `@@`. --- I found that the real question I had was something like, *"How can I get a quick overview of a class's methods?"* or *"How can I quickly navigate my code?"* To this end, `elpy-mode` has `elpy-occur-definitions` which lists all `class` and `def` statements with the `occur` function. I find the defaults for `occur` clunky. Here is how I adjust the behavior to my tastes. ```lisp (evil-mode) ; enable evil mode (add-hook 'python-mode-hook 'elpy-mode) ; always use elpy-mode with python files ;; Make *Occur* window size to the contents (setq fit-window-to-buffer-horizontally t) (add-hook 'occur-hook (lambda () (save-selected-window (pop-to-buffer "*Occur*") (fit-window-to-buffer)))) ;; Make *Occur* window always open on the right side (setq display-buffer-alist `(("\\*Occur\\*" display-buffer-in-side-window (side . right) (slot . 0) (window-width . 
fit-window-to-buffer)))) ;; Automatically switch to *Occur* buffer (add-hook 'occur-hook '(lambda () (switch-to-buffer-other-window "*Occur*"))) ;; Modify *Occur* buffer navigation ;;; Set initial state of *Occur* buffer to normal mode (evil-set-initial-state 'occur-mode 'normal) ;;; Use q or ESC to quit close the *Occur* window (evil-define-key 'normal occur-mode-map (kbd "q") 'quit-window) (evil-define-key 'normal occur-mode-map (kbd "<escape>") 'quit-window) ;;; Use TAB or SPC to show the occurrence in the other window (evil-define-key 'normal occur-mode-map (kbd "TAB") 'occur-mode-display-occurrence) (evil-define-key 'normal occur-mode-map (kbd "SPC") 'occur-mode-display-occurrence) ;;; Use <return> to jump to the occurrence and close the *Occur* window (evil-define-key 'normal occur-mode-map (kbd "<return>") '(lambda () (interactive) (occur-mode-goto-occurrence-other-window) (kill-buffer "*Occur*"))) ```
53,242
How do I check if a partition is encrypted? In particular I would like to know how I check if `/home` and swap is encrypted.
2011/07/15
[ "https://askubuntu.com/questions/53242", "https://askubuntu.com", "https://askubuntu.com/users/19490/" ]
Regarding the standard home encryption provided by Ubuntu, you can run ``` sudo ls -lA /home/username/ ``` and if you get something like ``` totale 0 lrwxrwxrwx 1 username username 56 2011-05-08 18:12 Access-Your-Private-Data.desktop -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop lrwxrwxrwx 1 username username 38 2011-05-08 18:12 .ecryptfs -> /home/.ecryptfs/username/.ecryptfs lrwxrwxrwx 1 username username 37 2011-05-08 18:12 .Private -> /home/.ecryptfs/username/.Private lrwxrwxrwx 1 username username 52 2011-05-08 18:12 README.txt -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt ``` then the username's home directory is encrypted. This works when username is not logged in, so the partition is not mounted. Otherwise you can look at `mount` output. About the swap, run ``` sudo blkid | grep swap ``` and you should see output similar to ``` /dev/mapper/cryptswap1: UUID="95f3d64d-6c46-411f-92f7-867e92991fd0" TYPE="swap" ```
In addition to the answer provided by enzotib, there's the possibility of full disk encryption as provided by the alternate installer. (Also called LUKS-crypt.) You can use `sudo dmsetup status` to check if there are any LUKS-encrypted partitions. The output should look something like: ``` ubuntu-home: 0 195305472 linear ubuntu-swap_1: 0 8364032 linear sda5_crypt: 0 624637944 crypt ubuntu-root: 0 48824320 linear ``` The line marked "crypt" shows that sda5 has been encrypted. You can see which filesystems are on that via the lvm tools. In the case of LUKS encryption, the Disk Utility in Ubuntu will also show the encryption layer and the configuration in a graphical manner.
53,242
How do I check if a partition is encrypted? In particular I would like to know how I check if `/home` and swap is encrypted.
2011/07/15
[ "https://askubuntu.com/questions/53242", "https://askubuntu.com", "https://askubuntu.com/users/19490/" ]
Regarding the standard home encryption provided by Ubuntu, you can run ``` sudo ls -lA /home/username/ ``` and if you get something like ``` totale 0 lrwxrwxrwx 1 username username 56 2011-05-08 18:12 Access-Your-Private-Data.desktop -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop lrwxrwxrwx 1 username username 38 2011-05-08 18:12 .ecryptfs -> /home/.ecryptfs/username/.ecryptfs lrwxrwxrwx 1 username username 37 2011-05-08 18:12 .Private -> /home/.ecryptfs/username/.Private lrwxrwxrwx 1 username username 52 2011-05-08 18:12 README.txt -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt ``` then the username's home directory is encrypted. This works when username is not logged in, so the partition is not mounted. Otherwise you can look at `mount` output. About the swap, run ``` sudo blkid | grep swap ``` and you should see output similar to ``` /dev/mapper/cryptswap1: UUID="95f3d64d-6c46-411f-92f7-867e92991fd0" TYPE="swap" ```
To check the encrypted swap status and cipher details, use this cmd: ``` $ sudo cryptsetup status /dev/mapper/cryptswap1 /dev/mapper/cryptswap1 is active and is in use. type: PLAIN cipher: aes-cbc-essiv:sha256 keysize: 256 bits device: /dev/sda2 offset: 0 sectors size: 8388608 sectors mode: read/write ``` Your swap device name may be different, you can check the proper name by: ``` $ swapon -s Filename Type Size Used Priority /dev/mapper/cryptswap1 partition 4194300 0 -1 ```
53,242
How do I check if a partition is encrypted? In particular I would like to know how I check if `/home` and swap is encrypted.
2011/07/15
[ "https://askubuntu.com/questions/53242", "https://askubuntu.com", "https://askubuntu.com/users/19490/" ]
In addition to the answer provided by enzotib, there's the possibility of full disk encryption as provided by the alternate installer. (Also called LUKS-crypt.) You can use `sudo dmsetup status` to check if there are any LUKS-encrypted partitions. The output should look something like: ``` ubuntu-home: 0 195305472 linear ubuntu-swap_1: 0 8364032 linear sda5_crypt: 0 624637944 crypt ubuntu-root: 0 48824320 linear ``` The line marked "crypt" shows that sda5 has been encrypted. You can see which filesystems are on that via the lvm tools. In the case of LUKS encryption, the Disk Utility in Ubuntu will also show the encryption layer and the configuration in a graphical manner.
To check the encrypted swap status and cipher details, use this cmd: ``` $ sudo cryptsetup status /dev/mapper/cryptswap1 /dev/mapper/cryptswap1 is active and is in use. type: PLAIN cipher: aes-cbc-essiv:sha256 keysize: 256 bits device: /dev/sda2 offset: 0 sectors size: 8388608 sectors mode: read/write ``` Your swap device name may be different, you can check the proper name by: ``` $ swapon -s Filename Type Size Used Priority /dev/mapper/cryptswap1 partition 4194300 0 -1 ```
105,877
So I'm making a game using love2d where the player finds himself in a zombie-infested city. I don't want the city/map to be the same every time, so I want to create a random map/city generator, but I don't know where to start. I could probably write my own, but the result likely wouldn't be what I want: I don't want tiles placed messily all over the place (it won't look like a city then). I want sets of tiles, such as buildings, placed sensibly throughout the world/map. I hope someone can help me with this, as this is really hard for me...
2015/08/19
[ "https://gamedev.stackexchange.com/questions/105877", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/70087/" ]
As stated by Shiro in a comment, it's difficult to give a precise answer, but I can suggest a possible starting point. Use random [Voronoi](https://it.wikipedia.org/wiki/Diagramma_di_Voronoi) generation: given a set of random points P, each point in space is assigned to the nearest p in P. [![enter image description here](https://i.stack.imgur.com/JWgJh.png)](https://i.stack.imgur.com/JWgJh.png) Now, instead of considering Euclidean distance, use [Manhattan distance](https://en.wikipedia.org/wiki/Taxicab_geometry) and you get something like this: [![enter image description here](https://i.stack.imgur.com/koXjs.png)](https://i.stack.imgur.com/koXjs.png) then adjust the gradient to obtain this [![enter image description here](https://i.stack.imgur.com/83aDs.png)](https://i.stack.imgur.com/83aDs.png)
You should do some research on L-systems; they allow you to specify some basic rules and then procedurally generate the map. You could for example specify that every building must be surrounded by roads, every road must continue in a straight line or eventually end, and every road is surrounded by buildings or empty space. Then, the algorithm will start "substituting" a building with a building surrounded by roads, a road piece with a full road... until you have a complete city. Read the answer to this question for a more detailed explanation: [Using L-Systems to procedurally generate cities](https://gamedev.stackexchange.com/questions/86234/using-l-systems-to-procedurally-generate-cities)
105,877
So I'm making a game using love2d where the player finds himself in a zombie-infested city. I don't want the city/map to be the same every time, so I want to create a random map/city generator, but I don't know where to start. I could probably write my own, but the result likely wouldn't be what I want: I don't want tiles placed messily all over the place (it won't look like a city then). I want sets of tiles, such as buildings, placed sensibly throughout the world/map. I hope someone can help me with this, as this is really hard for me...
2015/08/19
[ "https://gamedev.stackexchange.com/questions/105877", "https://gamedev.stackexchange.com", "https://gamedev.stackexchange.com/users/70087/" ]
There are simple ways to generate cities, depending on your needs. Some time ago, I wanted to generate cities for a fantasy setting, so I started [playing with a generator](http://rpg20.com/cityGen.php?imgStyle=3&citySize=very_large). Like I said in [another SE post](https://rpg.stackexchange.com/questions/57695/modern-floorplan-generator/59603#59603): "rough on the edges" is an understatement. But it suits my needs (it may meet your needs too). **This is what it looks like:** [![Generated city](https://i.stack.imgur.com/jEOXQ.png)](https://i.stack.imgur.com/jEOXQ.png) Some terms I use internally: * A tile is the smallest unit I handle; on the image one of the small houses fills almost a full tile. * A block is a group of NxM tiles. **And this is how I generate it:** First I generate a central feature, such as the block with the "fountain" (let's call this block a town square). I decide its width and height and place it near the center. Then I try generating "blocks" for as long as I can: 1. Generate a block size (somewhere between 2x2 and 5x5). 2. Iterate through the image and decide where to place it. For this a weight is assigned depending on adjacency to other blocks (similar to what I explained on [this post](https://gamedev.stackexchange.com/questions/82059/algorithm-for-procedureral-2d-map-with-connected-paths/87569#87569)). 3. Place the block if a suitable location is found. When no suitable location is found, go on to the next step. Once all blocks have been placed, I can add houses and buildings for each block. I took care of leaving a gap between blocks and another between houses. Some buildings use four tiles (2x2). Finally, I do the actual drawing in an image.
You should do some research on L-systems; they allow you to specify some basic rules and then procedurally generate the map. You could for example specify that every building must be surrounded by roads, every road must continue in a straight line or eventually end, and every road is surrounded by buildings or empty space. Then, the algorithm will start "substituting" a building with a building surrounded by roads, a road piece with a full road... until you have a complete city. Read the answer to this question for a more detailed explanation: [Using L-Systems to procedurally generate cities](https://gamedev.stackexchange.com/questions/86234/using-l-systems-to-procedurally-generate-cities)
951,981
Expanding on [How can I make Windows 8 use the classic theme?](https://superuser.com/questions/513492/how-can-i-make-windows-8-use-the-classic-theme) and [Windows 10 TenForums: Windows Classic Look Theme in Windows 10](http://www.tenforums.com/customization/11432-windows-classic-look-theme-windows-10-a.html) -- how does one use Windows 10 with the old classic theme? [![enter image description here](https://i.stack.imgur.com/cwmlt.png)](https://i.stack.imgur.com/cwmlt.png) There's a [Windows 10 theme over at DeviantArt](http://kizo2703.deviantart.com/art/Windows-classic-theme-for-Windows-8-RTM-8-1-10-325642288) but it does not work with the final RTM: [![enter image description here](https://i.stack.imgur.com/dUZHw.jpg)](https://i.stack.imgur.com/dUZHw.jpg) Also, your vote over at [Windows 10 UserVoice: Windows Classic Look Theme in Windows 10](https://windows.uservoice.com/forums/265757-windows-feature-suggestions/suggestions/9193677-windows-classic-look-theme-in-windows-10) would be much appreciated.
2015/08/06
[ "https://superuser.com/questions/951981", "https://superuser.com", "https://superuser.com/users/327566/" ]
Have a look at this thread: <http://forum.thinkpads.com/viewtopic.php?f=67&t=113024&p=777781&hilit=classictheme#p777781> They're discussing/testing how to modify Windows binary files to "get back" to the classic interface by "unusual" methods, rather than just turning colors into gray! But it appears to be very complex due to the totally different structure of Explorer, the window manager and everything else in Windows 10 with respect to previous versions. Note: be sure to create a restore point before running ClassicTheme.exe for testing! It can screw up the whole interface! For the start menu, things are easier thanks to ClassicShell: <http://www.classicshell.net/gallery/> Additionally, try googling for: Classic AE by Saarineames (download from deviantart) Aero Lite Theme (among other things, it makes window borders larger than one pixel, for better resizing and better visibility) You can also play (carefully) with registry keys: * HKEY\_CURRENT\_USER\SOFTWARE\Microsoft\Windows\DWM *(you can add **AccentColorInactive** key here, or edit existing keys)* * HKEY\_CURRENT\_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Themes\Personalize <http://www.thewindowsclub.com/get-colored-window-title-bar-windows-10> Ultimate Windows Tweaker is also an amazing tool: <http://www.thewindowsclub.com/image-gallery-for-uwt4>
It's impossible to change it to: [![Windows XP](https://i.stack.imgur.com/YMIdr.png)](https://i.stack.imgur.com/YMIdr.png) If you are REALLY desperate for the classic theme, [downgrade to Windows 7](http://www.pcadvisor.co.uk/how-to/windows/roll-back-windows-7-from-windows-8-3459580/). Or go along with this [![Windows 8 Version of classic](https://i.stack.imgur.com/irtmo.jpg)](https://i.stack.imgur.com/irtmo.jpg)
951,981
Expanding on [How can I make Windows 8 use the classic theme?](https://superuser.com/questions/513492/how-can-i-make-windows-8-use-the-classic-theme) and [Windows 10 TenForums: Windows Classic Look Theme in Windows 10](http://www.tenforums.com/customization/11432-windows-classic-look-theme-windows-10-a.html) -- how does one use Windows 10 with the old classic theme? [![enter image description here](https://i.stack.imgur.com/cwmlt.png)](https://i.stack.imgur.com/cwmlt.png) There's a [Windows 10 theme over at DeviantArt](http://kizo2703.deviantart.com/art/Windows-classic-theme-for-Windows-8-RTM-8-1-10-325642288) but it does not work with the final RTM: [![enter image description here](https://i.stack.imgur.com/dUZHw.jpg)](https://i.stack.imgur.com/dUZHw.jpg) Also, your vote over at [Windows 10 UserVoice: Windows Classic Look Theme in Windows 10](https://windows.uservoice.com/forums/265757-windows-feature-suggestions/suggestions/9193677-windows-classic-look-theme-in-windows-10) would be much appreciated.
2015/08/06
[ "https://superuser.com/questions/951981", "https://superuser.com", "https://superuser.com/users/327566/" ]
Have a look at this thread: <http://forum.thinkpads.com/viewtopic.php?f=67&t=113024&p=777781&hilit=classictheme#p777781> They're discussing/testing how to modify Windows binary files to "get back" to the classic interface by "unusual" methods, rather than just turning colors into gray! But it appears to be very complex due to the totally different structure of Explorer, the window manager and everything else in Windows 10 with respect to previous versions. Note: be sure to create a restore point before running ClassicTheme.exe for testing! It can screw up the whole interface! For the start menu, things are easier thanks to ClassicShell: <http://www.classicshell.net/gallery/> Additionally, try googling for: Classic AE by Saarineames (download from deviantart) Aero Lite Theme (among other things, it makes window borders larger than one pixel, for better resizing and better visibility) You can also play (carefully) with registry keys: * HKEY\_CURRENT\_USER\SOFTWARE\Microsoft\Windows\DWM *(you can add **AccentColorInactive** key here, or edit existing keys)* * HKEY\_CURRENT\_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Themes\Personalize <http://www.thewindowsclub.com/get-colored-window-title-bar-windows-10> Ultimate Windows Tweaker is also an amazing tool: <http://www.thewindowsclub.com/image-gallery-for-uwt4>
On Windows 11 and Windows 10 version 1903 and above I suggest to use [Explorer Patcher](https://github.com/valinet/ExplorerPatcher) and [ClassicThemeTray](https://github.com/spitfirex86/ClassicThemeTray): [![Windows 11 in classic theme, with the help of ExplorerPatcher and OpenShell](https://i.stack.imgur.com/7CACo.png)](https://i.stack.imgur.com/7CACo.png) Check [this GitHub discussion](https://github.com/valinet/ExplorerPatcher/discussions/167) for a detailed tutorial.
10,569,321
In Spring WS, endpoints are typically annotated with the @Endpoint annotation. e.g. ``` @Endpoint public class HolidayEndpoint { ... } ``` My question is: is there any way to deifine schema-based endpoint (based on XML configuration)? Thanks...
2012/05/13
[ "https://Stackoverflow.com/questions/10569321", "https://Stackoverflow.com", "https://Stackoverflow.com/users/379814/" ]
In your spring-ws-servlet.xml configuration, add the following: ``` <?xml version="1.0" encoding="UTF-8"?> <beans> <context:annotation-config /> <sws:annotation-driven /> <sws:dynamic-wsdl id="holidayEndPoint" portTypeName="HolidayEndpoint" ............ ...... ``` More info can be found here: [Unable to access web service endpoint: Spring-WS 2](https://stackoverflow.com/questions/7718257/unable-to-access-web-service-endpoint-spring-ws-2) Maybe this will help you.
Generate and publish wsdl: ``` <sws:dynamic-wsdl id="EntityService" portTypeName="Entity" locationUri="/ws/EntityService/" targetNamespace="http://me.com/myproject/definitions"> <sws:xsd location="WEB-INF/schemas/EntityCommons.xsd" /> <sws:xsd location="WEB-INF/schemas/EntityService.xsd" /> </sws:dynamic-wsdl> <bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping"> <property name="mappings"> <props> <prop key="ws/EntityService/*.wsdl">EntityService</prop> </props> </property> <property name="defaultHandler" ref="messageDispatcher" /> </bean> ``` Setup an interceptor: ``` <sws:interceptors> <bean class="org.springframework.ws.server.endpoint.interceptor.PayloadLoggingInterceptor" /> <!-- Postel's Law: “Be conservative in what you do; be liberal in what you accept from others.” --> <bean id="validatingInterceptor" class="org.springframework.ws.soap.server.endpoint.interceptor.PayloadValidatingInterceptor"> <property name="schema" value="WEB-INF/schemas/EntityService.xsd"/> <property name="validateRequest" value="false"/> <property name="validateResponse" value="true"/> </bean> </sws:interceptors> ``` Or alternativly if you are using JAXB, you can configure the marshaller to use the schema.
40,012,211
The question reads "Just as it is possible to multiply by adding over and over, it is possible to divide by subtracting over and over. Write a program with a procedure to compute how many times a number N1 goes into another number N2. You will need a loop, and count for how many times that loop is executed". I am really stuck at the subtraction phase. I know I have to create a loop but I don't know where to place it. ``` org 100h .MODEL SMALL .STACK 100H .DATA MSG1 DB 'FIRST > $' MSG2 DB 'SECOND > $' MSG3 DB 'THE SUBTRACTION OF ' VALUE1 DB ? MSG4 DB ' AND ' VALUE2 DB ?, ' IS ' SUM DB ?,'.$' CR DB 0DH, 0AH, '$' .CODE MAIN PROC ;INITIALIZE DS MOV AX, @DATA MOV DS, AX ;PROMPT FOR FIRST INPUT LEA DX, MSG1 MOV AH, 9H INT 21H MOV AH, 1H INT 21H MOV VALUE1, AL MOV BH, AL SUB BH, '0' ;CARRIAGE RETURN FORM FEED LEA DX, CR MOV AH, 9H INT 21H ;PROMPT FOR SECOND INPUT LEA DX, MSG2 MOV AH, 9H INT 21H MOV AH, 1H INT 21H MOV VALUE2, AL MOV BL, AL SUB BL, '0' SUBTRACT: ;SUB THE VALUES CONVERT TO CHARACTER AND SAVE SUB BH, BL ADD BH, '0' MOV SUM, BH ;CARRIAGE RETURN FORM FEED LEA DX, CR MOV AH, 9H INT 21H ;OUTPUT THE RESULT LEA DX, MSG3 MOV AH, 9H INT 21H TERMINATE: ;RETURN TO DOS MOV AH, 4CH INT 21H MAIN ENDP END MAIN ```
2016/10/13
[ "https://Stackoverflow.com/questions/40012211", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7011413/" ]
Algorithm for positive N1,N2: 1. prepare `N1`, `N2` and set some `R` to -1 2. increment `R` 3. subtract `N1` from `N2` (update `N2` with result) 4. when result of subtraction is above or equal to zero, go to step 2. 5. `R` has result of integer division `N2`/`N1` Steps 2. to 4. can be written in x86 Assembly by single instruction per step (the `sub` updates "carry flag", which can be used to decide whether the subtraction did "overflow" - used by one of the "Jcc" = jump-condition-code instructions to either jump somewhere else or continue by next instruction).
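The five steps above can be modelled in a high-level language first (a small Python sketch, purely illustrative) before translating them into assembly instructions:

```python
def divide_by_subtraction(n2, n1):
    """Integer quotient n2 // n1 for positive numbers, by repeated subtraction.

    Mirrors the register algorithm above: R starts at -1 and is incremented
    once per subtraction attempt, so when the subtraction finally goes
    negative, R already holds the quotient.
    """
    r = -1
    while True:
        r += 1        # step 2: increment R
        n2 -= n1      # step 3: subtract N1 from N2
        if n2 < 0:    # step 4: keep looping while the result is >= 0
            break
    return r          # step 5: R holds N2 // N1

print(divide_by_subtraction(95, 3))  # 31
```

Each loop body line corresponds to roughly one x86 instruction (`inc`, `sub`, and a conditional jump on the carry flag).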
The next program does the job. The numbers are declared as variables in the data segment, and the comments explain everything (just copy-paste it into EMU8086 and run it):

```
.model small
.stack 100h
.data
n1 dw 3
n2 dw 95
count dw ?
msg db 'Quotient = $'
str db 10 dup('$')
.code
mov ax, @data
mov ds, ax

;DIVIDE BY SUBTRACTIONS.
mov count, 0 ;COUNTER FOR SUBTRACTIONS.
subtract:
;CHECK IF SUBTRACTION CAN BE DONE.
mov ax, n1 ;COPY N1 INTO AX BECAUSE CAN...
cmp n2, ax ;...NOT COMPARE MEMORY-MEMORY.
jl finale ;N2 < N1. NO MORE SUBTRACTIONS.
;SUBTRACT.
sub n2, ax ;N2-N1.
inc count ;INCREASE SUBTRACTIONS COUNTER.
jmp subtract ;REPEAT.

finale:
;DISPLAY MESSAGE.
mov ah, 9
mov dx, offset msg
int 21h
;CONVERT QUOTIENT (COUNT) INTO STRING.
mov ax, count
mov si, offset str
call number2string
;DISPLAY COUNT (QUOTIENT).
mov ah, 9
mov dx, offset str
int 21h
;WAIT FOR A KEY TO BE PRESSED.
mov ah, 0
int 16h
;EXIT.
mov ax, 4c00h
int 21h

;------------------------------------------
;CONVERT A NUMBER IN STRING.
;ALGORITHM : EXTRACT DIGITS ONE BY ONE, STORE
;THEM IN STACK, THEN EXTRACT THEM IN REVERSE
;ORDER TO CONSTRUCT STRING (STR).
;PARAMETERS : AX = NUMBER TO CONVERT.
;             SI = POINTING WHERE TO STORE STRING.
number2string proc
mov bx, 10 ;DIGITS ARE EXTRACTED DIVIDING BY 10.
mov cx, 0 ;COUNTER FOR EXTRACTED DIGITS.
cycle1:
mov dx, 0 ;NECESSARY TO DIVIDE BY BX.
div bx ;DX:AX / 10 = AX:QUOTIENT DX:REMAINDER.
push dx ;PRESERVE DIGIT EXTRACTED FOR LATER.
inc cx ;INCREASE COUNTER FOR EVERY DIGIT EXTRACTED.
cmp ax, 0 ;IF NUMBER IS
jne cycle1 ;NOT ZERO, LOOP.
;NOW RETRIEVE PUSHED DIGITS.
cycle2:
pop dx
add dl, 48 ;CONVERT DIGIT TO CHARACTER.
mov [ si ], dl
inc si
loop cycle2
ret
number2string endp
```

Procedure "number2string" is used to convert the result into a string, which is necessary in case the result has more than one digit. Notice both numbers, dividend (N2) and divisor (N1), are not captured from the keyboard; they are static values in the data segment.
In order to capture them from keyboard we would need another procedure, "string2number", to convert the strings into numeric format.
4,468,310
Suppose there is a tree with the number of child nodes increasing from 2 to 4, then 8, and so on. How can we write a recurrence relation for such a tree?
2010/12/17
[ "https://Stackoverflow.com/questions/4468310", "https://Stackoverflow.com", "https://Stackoverflow.com/users/531802/" ]
* Using the substitution method:

  T(n) = 2T(n/2) + n
       = 2[2T(n/2^2) + n/2] + n
       = 2^2 T(n/2^2) + 2n
       = 2^2 [2T(n/2^3) + n/2^2] + 2n
       = 2^3 T(n/2^3) + 3n

  Similarly, substituting k times gives T(n) = 2^k T(n/2^k) + nk. The recursion terminates when n/2^k = 1, i.e. k = log2(n). Thus, with T(1) = 1, T(n) = 2^(log2 n) + n log2(n) = n + n log2(n), so the tight bound for this recurrence relation is Θ(n log n) (log base 2).
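The closed form can be checked numerically against the recurrence itself (a small Python sketch, assuming T(1) = 1 and n a power of two):

```python
def T(n):
    """The recurrence T(n) = 2*T(n/2) + n with base case T(1) = 1,
    evaluated directly, for n a power of two."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# The substitution method gives the closed form T(n) = n + n*log2(n).
for k in range(0, 11):
    n = 2 ** k
    assert T(n) == n + n * k  # k = log2(n)
```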
Take a look at this [link](http://www.cs.duke.edu/~ola/ap/recurrence.html).

```
T(n) = 2 T(n/2) + O(n)   [the O(n) is for Combine]
T(1) = O(1)
```
34,686,217
```
ggplot(all, aes(x=area, y=nq)) +
  geom_point(size=0.5) +
  geom_abline(data = levelnew,
              aes(intercept=log10(exp(interceptmax)), slope=fslope)) + #shifted regression line
  scale_y_log10(labels = function(y) format(y, scientific = FALSE)) +
  scale_x_log10(labels = function(x) format(x, scientific = FALSE)) +
  facet_wrap(~levels) +
  theme_bw() +
  theme(panel.grid.major = element_line(colour = "#808080"))
```

And I get this figure [![enter image description here](https://i.stack.imgur.com/VIMue.png)](https://i.stack.imgur.com/VIMue.png) Now I want to add **one geom\_line** to one of the facets. Basically, I want a dotted line (say, at x = 10,000) in only the **major** panel. How can I do this?
2016/01/08
[ "https://Stackoverflow.com/questions/34686217", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3942806/" ]
I don't have your data, so I made some up:

```
df <- data.frame(x=rnorm(100), y=rnorm(100), z=rep(letters[1:4], each=25))
ggplot(df, aes(x,y)) + geom_point() + theme_bw() + facet_wrap(~z)
```

[![enter image description here](https://i.stack.imgur.com/uCOR9.png)](https://i.stack.imgur.com/uCOR9.png)

To add a vertical line at `x = 1` we can use `geom_vline()` with a dataframe that has the same faceting variable (in my case `z='b'`, but yours will be `levels='major'`):

```
ggplot(df, aes(x,y)) + geom_point() + theme_bw() + facet_wrap(~z) +
  geom_vline(data = data.frame(xint=1, z="b"),
             aes(xintercept = xint), linetype = "dotted")
```

[![enter image description here](https://i.stack.imgur.com/0uBOB.png)](https://i.stack.imgur.com/0uBOB.png)
Another way to express this which is possibly easier to generalize (and formatting stuff left out):

```
ggplot(df, aes(x,y)) + geom_point() + facet_wrap(~ z) +
  geom_vline(data = subset(df, z == "b"), aes(xintercept = 1))
```

The key things being: facet first, then decorate facets by subsetting the original data frame, and put the details in a new `aes` if possible. Other examples of a similar idea:

```
ggplot(df, aes(x,y)) + geom_point() + facet_wrap(~ z) +
  geom_vline(data = subset(df, z == "b"), aes(xintercept = 1)) +
  geom_smooth(data = subset(df, z == "c"), aes(x, y), method = lm, se = FALSE) +
  geom_text(data = subset(df, z == "d"), aes(x = -2, y = 0, label = "Foobar"))
```
6,117,315
I have three questions regarding SSL that I don't fully understand. 1. If I get it correctly, a server `A` submits a request to a certain CA. Then, it receives (after validation etc.) a digital certificate composed of a public key + identity + an encryption of this information using the CA's private key. Later on, a client `B` wants to open an SSL communication with `A`, so `A` sends `B` its digital certificate. My question is: can't `B` just take this certificate, thus stealing the identity of `A` - which will allow them to authenticate as `A` to `C`, for example? I understand that `C` will decrypt the certificate with the CA's public key. It will then encrypt its symmetric key, which will only be decryptable by the real `A`. However, I do not see where authentication comes into play if `B` can actually steal `A`'s identity. Unless I am missing something. 2. Second question: why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? 3. If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers? Thank you very much.
2011/05/24
[ "https://Stackoverflow.com/questions/6117315", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764420/" ]
An SSL identity is characterized by four parts: 1. A *private* key, which is not shared with anyone. 2. A *public* key, which you can share with anyone. The private and public key form a matched pair: anything you encrypt with one can be decrypted with the other, but you *cannot* decrypt something encrypted with the public key without the private key or vice versa. This is genuine mathematical magic. 3. *Metadata* attached to the public key that states who it is talking about. For a server key, this would identify the DNS name of the service that is being secured (among other things). Other data in here includes things like the intended uses (mainly used to limit the amount of damage that someone with a stolen certificate can do) and an expiry date (to limit how long a stolen certificate can be used for). 4. A *digital signature* on the combination of public key and metadata so that they can't be messed around with and so that someone else can know how much to trust the metadata. There are multiple ways to handle who does the signature: * Signing with the private key (from part 1, above); a self-signed certificate. Anyone can do this but it doesn't convey much trust (precisely because *anyone* can do this). * Getting a group of people who trust each other to vouch for you by signing the certificate; a web-of-trust (so called because the trust relationship is transitive and often symmetric as people sign each other's certificates). * Getting a trusted third party to do the signing; a certificate authority (CA). The identity of the CA is guaranteed by another higher-level CA in a *trust chain* back to some root authority that “everyone” trusts (i.e., there's a list built into your SSL library, which it's possible to update at deployment time). There's no basic technical difference between the three types of authorities above, but the nature of the trust people put in them is extremely variable. The details of why this is would require a very long answer indeed!
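The matched-pair property in parts 1 and 2 can be sketched with a toy "textbook RSA" example (tiny primes, purely illustrative and completely insecure; real keys are thousands of bits):

```python
# Toy "textbook RSA" to illustrate the matched key pair: what one key
# encrypts, only the other can decrypt. NOT secure; illustration only.
p, q = 61, 53
n = p * q        # 3233, the public modulus
e = 17           # public exponent
d = 2753         # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

msg = 65         # a message, encoded as a number < n

# Encrypt with the public key, decrypt with the private key:
cipher = pow(msg, e, n)
assert pow(cipher, d, n) == msg

# "Sign" with the private key, verify with the public key
# (this direction is what a CA's signature relies on):
sig = pow(msg, d, n)
assert pow(sig, e, n) == msg
```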
Items 2–4 are what comprises the digital certificate. When the client, B, starts the SSL protocol with the server, A, the server's digital certificate is communicated to B as part of the protocol. A's private key is *not* sent, but because B can successfully decrypt a message sent by the other end with the public key in the digital certificate, B can know that A has the private key that matches. B can then look at the metadata in the certificate and see that the other end claims to be A, and can examine the signature to see how much to trust that assertion; if the metadata is signed by an authority that B trusts (directly or indirectly) then B can trust that the other end has A's SSL identity. If that identity is the one that they were expecting (i.e., they wanted to talk to A: in practice, this is done by comparing the DNS name in the certificate with the name that they used when looking up the server address) then they can know that they have a secured communications channel: they're good to go. B can't impersonate A with that information though: B doesn't get A's private key and so it would all fall apart at the first stage of verification. In order for some third party to impersonate A, they need to have (at least) two of: * *The private key*. The owner of the identity needs to take care to stop this from happening, but it is ultimately in their hands. * *A trusted authority that makes false statements*. There are occasional weaknesses here — a self-signed authority is never very trustworthy, a web of trust runs into problems because trust is an awkward thing to handle transitively, and some CAs are thoroughly unscrupulous and others too inclined to not exclude the scum — but mostly this works fairly well because most parties are keen to not cause problems, often for financial reasons. * *A way to poison DNS* so that the target believes a different server is really the one being impersonated.
Without DNSsec this is somewhat easy unfortunately, but this particular problem is being reduced. As to your other questions… > > Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? > > > While keys are fairly long, certificates are longer (for one thing, they include the signer's public key anyway, which is typically the same length as the key being signed). Hashing is part of the general algorithm for signing documents anyway because nobody wants to be restricted to signing only very short things. Given that the algorithm is required, it makes sense to use it for this purpose. > > If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. > > > If you have several servers serving the same DNS name (there are many ways to do this, one of the simplest being round-robin DNS serving) you can put the same identity on each of them. This slightly reduces security, but only very slightly; it's still *one service* that just happens to be implemented by *multiple servers*. In theory you could give each one a different identity (though with the same name) but I can't think of any good reason for actually doing it; it's more likely to worry people than the alternative. Also note that it's possible to have a certificate for more than one service name at once. There are two mechanisms for doing this (adding alternate names to the certificate or using a wildcard in the name in the certificate) but CAs tend to charge quite a lot for signing certificates with them in.
Question N°1 ------------ > > can't B just take this certificate [...] which will allow them to authenticate as A to C > > > This [part](http://i.imgur.com/jrUIZOn.png) of a larger [diagram](http://i.imgur.com/5T2fJsG.png) deals with that question. Mainly: if you only have the public key then you cannot establish an SSL connection with any client, because you need to exchange a secret key with them and that secret key has to be encrypted using your public key, which is why the client asks for it in the first place. The client is supposed to encrypt the shared secret key with your public key and you are supposed to decrypt it using your private key. Since you don't have the private key, you can't decrypt the secret exchange key, hence you can't establish any SSL communication with any client. Question N°2 ------------ > > Why use hashing on the certificate if a part of it is already > encrypted by the CA? > > > This is also answered in the [original diagram](http://i.imgur.com/5T2fJsG.png) by the question "what's a signature ?". Basically, we're hashing the whole certificate to be sure that it hasn't been tampered with (data integrity), that nobody has changed anything in it, and that what you see is really what was delivered by the CA. The diagram shows how hashing makes that possible. Question N°3 ------------ > > If I am stackoverflow and I have 3 servers [...] do I have to have a > different digital certificate for each of the 3 servers. > > > This is not necessarily always the case. Consider the situation where all three servers are on the same domain, then you only need one certificate; if each of them is on its own subdomain you can have one single wildcard certificate installed on all of them. On the contrary, if you have one server that hosts multiple domains you would have one single multi-domain SSL certificate.
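The tamper-detection role of hashing described under Question N°2 can be sketched with a plain hash comparison (a minimal illustration; real X.509 signing additionally encrypts the hash with the CA's private key):

```python
import hashlib

# The CA signs a hash of the certificate bytes, so any later modification
# changes the hash and the signature check fails. Here we show only the
# hash-comparison part; the certificate content is made up for the example.
cert = b"CN=example.com, pubkey=..., expires=2031-01-01"
digest_at_signing = hashlib.sha256(cert).hexdigest()

# An attacker edits the identity inside the certificate:
tampered = cert.replace(b"example.com", b"attacker.net")

assert hashlib.sha256(cert).hexdigest() == digest_at_signing      # intact
assert hashlib.sha256(tampered).hexdigest() != digest_at_signing  # detected
```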
I also have some answers. Q1) If B steals A's certificate and tries to impersonate A to C: * C will validate the IP address of B and find out that it does not belong to A. It will then abort the SSL connection. Of course, even if C sends an encrypted message, only the real A will be able to decrypt it. Q2) A certificate is usually represented in plain text using the common format X.509. All entries are readable by anyone. The hashing process is used to digitally sign a document. Digitally signing a certificate lets the end user validate that the certificate has not been altered by anyone after it was created. Hashing and then encrypting the content using the issuer's private key is what creates the digital signature.
> > My question is can't "B" just take this certificate, thus stealing the identity of "A" - which will allow them to authenticate as "A" to "C" > > > There's also a private part of the certificate that does not get transmitted (the private key). **Without the private key, B cannot authenticate as A.** Similarly, I know your StackOverflow username, but that doesn't let me log in as you. > > Why use hashing on the certificate if a part of it is already encrypted by the CA? > > > By doing it this way, anyone can verify that it was the CA who produced the hash, and not someone else. This proves that the certificate was **produced by the CA, and thus, the "validation etc." was performed.** > > If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. > > > It depends on the particular case, but you will likely have identical certificates on each.
Question N°1 ------------ > > can't B just take this certificate [...] which will allow them to authenticate as A to C > > > This [part](http://i.imgur.com/jrUIZOn.png) of the a larger [diagram](http://i.imgur.com/5T2fJsG.png) deals with that question. Mainly : if you only have the public key then you can not establish an SSL connection with any client because you need to exchange a secret key with them and that secret key has to be encrypted using your public key, which is why the client asks for it in the first time. The client is supposed to encrypt the shared secret key with your public key and you are supposed to decrypt it using your private key. Since you don't have the private key, you can't decrypt the secret exchange key, hence you can't establish any SSL communication with any client. Question N°2 ------------ > > Why use hashing on the certificate if a part of it is already > encrypted by the CA? > > > This is also answered in the [original diagram](http://i.imgur.com/5T2fJsG.png) by the question "what's a signature ?". Basically, we're hashing the whole certificate to be sure that it hasn't been tampered with (data integrity), that nobody has changed anything in it, and that what you see is really what was delivered by the CA. The diagram shows how hasing makes that possible. Question N°3 ------------ > > If I am stackoverflow and I have 3 servers [...] do I have to have a > different digital certificate for each of the 3 servers. > > > This is not necessarily always the case. Consider the situation where all three servers are on the same domain, then you only need one certificate, if each of them is on its own subdomain you can have one single wildcard certificate installed on all of them. On the contrary, if you have one server that hosts multiple domains you would have one single multi-domain SSL certificate.
6,117,315
I have three questions regarding SSL that I don't fully understand. 1. If I get it correctly, a server `A` submits a request to a certain CA. Then, it receives (after validation etc.) a digital certificate composed of a public key + identity + an encription of this information using the CA's private key. Later on, a client `B` wants to open an SSL communication with `A`, so `A` sends `B` its digital certificate. My question is can't `B` just take this certificate, thus stealing the identity `A` - which will allow them to authenticate as `A` to `C`, for example. I understand that `C` will decrypt the certificate with the CA's public key, It will then encrypt its symetric key which will only be decryptable by the real `A`. However, I do not see where authentication comes to play if `B` can actually steal `A`'s identity. Unless I am missing something. 2. Second question: Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? 3. If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. Thank you very much.
2011/05/24
[ "https://Stackoverflow.com/questions/6117315", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764420/" ]
Question N°1 ------------ > > can't B just take this certificate [...] which will allow them to authenticate as A to C > > > This [part](http://i.imgur.com/jrUIZOn.png) of the a larger [diagram](http://i.imgur.com/5T2fJsG.png) deals with that question. Mainly : if you only have the public key then you can not establish an SSL connection with any client because you need to exchange a secret key with them and that secret key has to be encrypted using your public key, which is why the client asks for it in the first time. The client is supposed to encrypt the shared secret key with your public key and you are supposed to decrypt it using your private key. Since you don't have the private key, you can't decrypt the secret exchange key, hence you can't establish any SSL communication with any client. Question N°2 ------------ > > Why use hashing on the certificate if a part of it is already > encrypted by the CA? > > > This is also answered in the [original diagram](http://i.imgur.com/5T2fJsG.png) by the question "what's a signature ?". Basically, we're hashing the whole certificate to be sure that it hasn't been tampered with (data integrity), that nobody has changed anything in it, and that what you see is really what was delivered by the CA. The diagram shows how hasing makes that possible. Question N°3 ------------ > > If I am stackoverflow and I have 3 servers [...] do I have to have a > different digital certificate for each of the 3 servers. > > > This is not necessarily always the case. Consider the situation where all three servers are on the same domain, then you only need one certificate, if each of them is on its own subdomain you can have one single wildcard certificate installed on all of them. On the contrary, if you have one server that hosts multiple domains you would have one single multi-domain SSL certificate.
I also have some answers. Q1) If B steals A's certificate and try to impersonate as A to C. * C will validate the IP address of B and find out that it does not belong to C. It will then abort the SSL connection. Of course, even if C sends an encrypted message, then only the Real A will be able to decrypt it. Q2) A certificate is usually represented in plain-text using the common format X.509. All entries are readable by anyone. The hashing process is used to digitally sign a document. Digital signing a certificate makes the end user validate that the certificate has not been altered by anyone after it was created. Hashing and encrypting the content using the issuer's private key is done to create a digital signature.
6,117,315
I have three questions regarding SSL that I don't fully understand. 1. If I get it correctly, a server `A` submits a request to a certain CA. Then, it receives (after validation etc.) a digital certificate composed of a public key + identity + an encription of this information using the CA's private key. Later on, a client `B` wants to open an SSL communication with `A`, so `A` sends `B` its digital certificate. My question is can't `B` just take this certificate, thus stealing the identity `A` - which will allow them to authenticate as `A` to `C`, for example. I understand that `C` will decrypt the certificate with the CA's public key, It will then encrypt its symetric key which will only be decryptable by the real `A`. However, I do not see where authentication comes to play if `B` can actually steal `A`'s identity. Unless I am missing something. 2. Second question: Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? 3. If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. Thank you very much.
2011/05/24
[ "https://Stackoverflow.com/questions/6117315", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764420/" ]
An SSL identity is characterized by four parts: 1. A *private* key, which is not shared with anyone. 2. A *public* key, which you can share with anyone. The private and public key form a matched pair: anything you encrypt with one can be decrypted with the other, but you *cannot* decrypt something encrypted with the public key without the private key or vice versa. This is genuine mathematical magic. 3. *Metadata* attached to the public key that states who it is talking about. For a server key, this would identify the DNS name of the service that is being secured (among other things). Other data in here includes things like the intended uses (mainly used to limit the amount of damage that someone with a stolen certificate can do) and an expiry date (to limit how long a stolen certificate can be used for). 4. A *digital signature* on the combination of public key and metadata so that they can't be messed around with and so that someone else can know how much to trust the metadata. There are multiple ways to handle who does the signature: * Signing with the private key (from part 1, above); a self-signed certificate. Anyone can do this but it doesn't convey much trust (precisely because *anyone* can do this). * Getting a group of people who trust each other to vouch for you by signing the certificate; a web-of-trust (so called because the trust relationship is transitive and often symmetric as people sign each other's certificates). * Getting a trusted third party to do the signing; a certificate authority (CA). The identity of the CA is guaranteed by another higher-level CA in a *trust chain* back to some root authority that “everyone” trusts (i.e., there's a list built into your SSL library, which it's possible to update at deployment time). There's no basic technical difference between the three types of authorities above, but the nature of the trust people put in them is extremely variable. The details of why this is would require a very long answer indeed! 
Items 2–4 comprise the digital certificate. When the client, B, starts the SSL protocol with the server, A, the server's digital certificate is communicated to B as part of the protocol. A's private key is *not* sent, but because B can successfully decrypt a message sent by the other end with the public key in the digital certificate, B can know that A has the private key that matches. B can then look at the metadata in the certificate and see that the other end claims to be A, and can examine the signature to see how much to trust that assertion; if the metadata is signed by an authority that B trusts (directly or indirectly) then B can trust that the other end has A's SSL identity. If that identity is the one that they were expecting (i.e., they wanted to talk to A: in practice, this is done by comparing the DNS name in the certificate with the name that they used when looking up the server address) then they can know that they have a secured communications channel: they're good to go. B can't impersonate A with that information though: B doesn't get A's private key and so it would all fall apart at the first stage of verification. In order for some third party to impersonate A, they need to have (at least) two of: * *The private key*. The owner of the identity needs to take care to stop this from happening, but it is ultimately in their hands. * *A trusted authority that makes false statements*. There are occasional weaknesses here — a self-signed authority is never very trustworthy, a web of trust runs into problems because trust is an awkward thing to handle transitively, and some CAs are thoroughly unscrupulous and others too inclined to not exclude the scum — but mostly this works fairly well because most parties are keen to not cause problems, often for financial reasons. * *A way to poison DNS* so that the target believes a different server is really the one being impersonated. 
Without DNSSEC this is somewhat easy unfortunately, but this particular problem is being reduced. As to your other questions… > > Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? > > > While keys are fairly long, certificates are longer (for one thing, they include the signer's public key anyway, which is typically the same length as the key being signed). Hashing is part of the general algorithm for signing documents anyway because nobody wants to be restricted to signing only very short things. Given that the algorithm is required, it makes sense to use it for this purpose. > > If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. > > > If you have several servers serving the same DNS name (there are many ways to do this, one of the simplest being round-robin DNS serving) you can put the same identity on each of them. This slightly reduces security, but only very slightly; it's still *one service* that just happens to be implemented by *multiple servers*. In theory you could give each one a different identity (though with the same name) but I can't think of any good reason for actually doing it; it's more likely to worry people than the alternative. Also note that it's possible to have a certificate for more than one service name at once. There are two mechanisms for doing this (adding alternate names to the certificate or using a wildcard in the name in the certificate) but CAs tend to charge quite a lot for signing certificates with them in.
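The two client-side checks this answer describes — verifying the signature chain back to a trusted authority, and comparing the DNS name in the certificate with the name that was looked up — are exactly what TLS client libraries enable by default. A minimal sketch using Python's standard `ssl` module (illustrative; not part of the original answer):

```python
import ssl

# A default client context verifies the server's certificate chain
# against the trusted CA list, and checks that the DNS name in the
# certificate matches the host we asked to connect to.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain must validate to a trusted CA
print(ctx.check_hostname)                    # certificate name must match the target host
```

Disabling either check reopens the impersonation attacks described above, which is why both default to on.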
In general, yes, if the cert file gets stolen, nothing will stop someone from installing it on their server and suddenly assuming the stolen site's identity. However, unless the thief takes over control of the original site's DNS setup, any requests for the site's URL will still go to the original server, and the thief's server will stay idle. It's the equivalent of building an exact duplicate of the Statue of Liberty in Antarctica with the expectation of stealing away New York's tourist revenue. Unless you start hacking every single tourist guide book and history textbook to replace "New York" with Antarctica, everyone'll still go to New York to see the real statue and the thief will just have a very big, green, completely idle icicle. However, when you get a cert from a CA, the cert is password-protected and cannot simply be installed in a webserver. Some places will remove the password so the webserver can restart itself without intervention. But a secure site will keep the password in place, which means that any server restarts will kill the site until someone gets to the admin console and enters the PW to decrypt the cert.
> > My question is can't "B" just take this certificate, thus stealing the identity of "A" - which will allow them to authenticate as "A" to "C" > > > There's also a private part of the certificate that does not get transmitted (the private key). **Without the private key, B cannot authenticate as A.** Similarly, I know your StackOverflow username, but that doesn't let me log in as you. > > Why use hashing on the certificate if a part of it is already encrypted by the CA? > > > By doing it this way, anyone can verify that it was the CA who produced the hash, and not someone else. This proves that the certificate was **produced by the CA, and thus, the "validation etc." was performed.** > > If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. > > > It depends on the particular case, but you will likely have identical certificates on each.
First question: You are correct about what you get back from the CA, but you are missing part of what you need before you submit your request to the CA. You need (1) a certificate request, and (2) the corresponding *private* key. You do not send the private key as part of the request; you keep it secret on your server. Your signed certificate includes a copy of the corresponding *public* key. Before any client will believe that B "owns" the certificate, B has to prove it by using the secret key to sign a challenge sent by the client. B cannot do that without A's private key. Second question: Typical public-key cryptography operates on fixed-size data (e.g., 2048 bits) and is somewhat computationally expensive. So in order to digitally sign an arbitrary-size document, the document is hashed down to a fixed-size block which is then encrypted with the private key. Third question: You can use a single certificate on multiple servers; you just need the corresponding private key on all servers. (And of course the DNS name used to reach the server must match the CN in the certificate, or the client will likely balk. But having one DNS name refer to multiple servers is a common and simple means of load-balancing.)
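To illustrate the second point: the hash collapses a document of any size down to a fixed-size digest, and it is that digest the private-key operation is applied to. A small sketch using Python's standard `hashlib` (illustrative only — real signature schemes also add padding such as PKCS#1 v1.5 or PSS before the private-key step):

```python
import hashlib

# Whatever the document size, the digest that gets signed is fixed-size.
short_doc = b"hello"
long_doc = b"x" * 10_000_000  # ~10 MB

short_digest = hashlib.sha256(short_doc).digest()
long_digest = hashlib.sha256(long_doc).digest()

print(len(short_digest), len(long_digest))  # 32 32 — both 256 bits
```

This is why signing scales to arbitrarily large certificates and documents: the expensive public-key operation always runs on the same small input.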
6,117,315
I have three questions regarding SSL that I don't fully understand. 1. If I get it correctly, a server `A` submits a request to a certain CA. Then, it receives (after validation etc.) a digital certificate composed of a public key + identity + an encription of this information using the CA's private key. Later on, a client `B` wants to open an SSL communication with `A`, so `A` sends `B` its digital certificate. My question is can't `B` just take this certificate, thus stealing the identity `A` - which will allow them to authenticate as `A` to `C`, for example. I understand that `C` will decrypt the certificate with the CA's public key, It will then encrypt its symetric key which will only be decryptable by the real `A`. However, I do not see where authentication comes to play if `B` can actually steal `A`'s identity. Unless I am missing something. 2. Second question: Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? 3. If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. Thank you very much.
2011/05/24
[ "https://Stackoverflow.com/questions/6117315", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764420/" ]
An SSL identity is characterized by four parts: 1. A *private* key, which is not shared with anyone. 2. A *public* key, which you can share with anyone. The private and public key form a matched pair: anything you encrypt with one can be decrypted with the other, but you *cannot* decrypt something encrypted with the public key without the private key or vice versa. This is genuine mathematical magic. 3. *Metadata* attached to the public key that state who it is talking about. For a server key, this would identify the DNS name of the service that is being secured (among other things). Other data in here includes things like the intended uses (mainly used to limit the amount of damage that someone with a stolen certificate can do) and an expiry date (to limit how long a stolen certificate can be used for). 4. A *digital signature* on the combination of public key and metadata so that they can't be messed around with and so that someone else can know how much to trust the metadata. There are multiple ways to handle who does the signature: * Signing with the private key (from part 1, above); a self-signed certificate. Anyone can do this but it doesn't convey much trust (precisely because *anyone* can do this). * Getting a group of people who trust each other to vouch for you by signing the certificate; a web-of-trust (so called because the trust relationship is transitive and often symmetric as people sign each others certificates). * Getting a trusted third party to do the signing; a certificate authority (CA). The identity of the CA is guaranteed by another higher-level CA in a *trust chain* back to some root authority that “everyone” trusts (i.e., there's a list built into your SSL library, which it's possible to update at deployment time).There's no basic technical difference between the three types of authorities above, but the nature of the trust people put in them is extremely variable. The details of why this is would require a very long answer indeed! 
Items 2–4 are what comprises the digital certificate. When the client, B, starts the SSL protocol with the server, A, the server's digital certificate is communicated to B as part of the protocol. A's private key is *not* sent, but because B can successfully decrypt a message sent by the other end with the public key in the digital certificate, B can know that A has the private key that matches. B can then look at the metadata in the certificate and see that the other end claims to be A, and can examine the signature to see how much to trust that assertion; if the metadata is signed by an authority that B trusts (directly or indirectly) then B can trust that the other end has A's SSL identity. If that identity is the one that they were expecting (i.e., they wanted to talk to A: in practice, this is done by comparing the DNS name in the certificate with the name that they used when looking up the server address) then they can know that they have a secured communications channel: they're good to go. B can't impersonate A with that information though: B doesn't get A's private key and so it would all fall apart at the first stage of verification. In order for some third party to impersonate B, they need to have (at least) two of: * *The private key*. The owner of the identity needs to take care to stop this from happening, but it is ultimately in their hands. * *A trusted authority that makes false statements*. There's occasional weaknesses here — a self-signed authority is never very trustworthy, a web of trust runs into problems because trust is an awkward thing to handle transitively, and some CAs are thoroughly unscrupulous and others too inclined to not exclude the scum — but mostly this works fairly well because most parties are keen to not cause problems, often for financial reasons. * *A way to poison DNS* so that the target believes a different server is really the one being impersonated. 
Without DNSsec this is somewhat easy unfortunately, but this particular problem is being reduced. As to your other questions… > > Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? > > > While keys are fairly long, certificates are longer (for one thing, they include the signers public key anyway, which is typically the same length the key being signed). Hashing is part of the general algorithm for signing documents anyway because nobody wants to be restricted to signing only very short things. Given that the algorithm is required, it makes sense to use it for this purpose. > > If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. > > > If you have several servers serving the same DNS name (there's many ways to do this, one of the simplest being round-robin DNS serving) you can put the same identity on each of them. This slightly reduces security, but only very slightly; it's still *one service* that just happens to be implemented by *multiple servers*. In theory you could give each one a different identity (though with the same name) but I can't think of any good reason for actually doing it; it's more likely to worry people than the alternative. Also note that it's possible to have a certificate for more than one service name at once. There are two mechanisms for doing this (adding alternate names to the certificate or using a wildcard in the name in the certificate) but CAs tend to charge quite a lot for signing certificates with them in.
> > My question is can't "B" just take this certificate, thus stealing the identity of "A" - which will allow them to authenticate as "A" to "C" > > > There's also a private part of the certificate that does not get transmitted (the private key). **Without the private key, B cannot authenticate as A.** Similarly, I know your StackOverflow username, but that doens't let me log in as you. > > Why use hashing on the certificate if a part of it is already encrypted by the CA? > > > By doing it this way, anyone can verify that it was the CA who produced the hash, and not someone else. This proves that the certificate was **produced by the CA, and thus, the "validation etc." was performed.** > > If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. > > > It depends on the particular case, but you will likely have identical certificates on each.
6,117,315
I have three questions regarding SSL that I don't fully understand. 1. If I get it correctly, a server `A` submits a request to a certain CA. Then, it receives (after validation etc.) a digital certificate composed of a public key + identity + an encription of this information using the CA's private key. Later on, a client `B` wants to open an SSL communication with `A`, so `A` sends `B` its digital certificate. My question is can't `B` just take this certificate, thus stealing the identity `A` - which will allow them to authenticate as `A` to `C`, for example. I understand that `C` will decrypt the certificate with the CA's public key, It will then encrypt its symetric key which will only be decryptable by the real `A`. However, I do not see where authentication comes to play if `B` can actually steal `A`'s identity. Unless I am missing something. 2. Second question: Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? 3. If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. Thank you very much.
2011/05/24
[ "https://Stackoverflow.com/questions/6117315", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764420/" ]
First question: You are correct about what you get back from the CA, but you are missing part of what you need before you submit your request to the CA. You need (1) a certificate request, and (2) the corresponding *private* key. You do not send the private key as part of the request; you keep it secret on your server. Your signed certificate includes a copy of the corresponding *public* key. Before any client will believe that B "owns" the certificate, B has to prove it by using the secret key to sign a challenge sent by the client. B cannot do that without A's private key. Second question: Typical public-key cryptography operates on fixed-size data (e.g., 2048 bits) and is somewhat computationally expensive. So in order to digitally sign an arbitrary-size document, the document is hashed down to a fixed-size block which is then encrypted with the private key. Third question: You can use a single certificate on multiple servers; you just need the corresponding private key on all servers. (And of course the DNS name used to reach the server must match the CN in the certificate, or the client will likely balk. But having one DNS name refer to multiple servers is a common and simple means of load-balancing.)
I also have some answers. Q1) If B steals A's certificate and try to impersonate as A to C. * C will validate the IP address of B and find out that it does not belong to C. It will then abort the SSL connection. Of course, even if C sends an encrypted message, then only the Real A will be able to decrypt it. Q2) A certificate is usually represented in plain-text using the common format X.509. All entries are readable by anyone. The hashing process is used to digitally sign a document. Digital signing a certificate makes the end user validate that the certificate has not been altered by anyone after it was created. Hashing and encrypting the content using the issuer's private key is done to create a digital signature.
6,117,315
I have three questions regarding SSL that I don't fully understand. 1. If I get it correctly, a server `A` submits a request to a certain CA. Then, it receives (after validation etc.) a digital certificate composed of a public key + identity + an encription of this information using the CA's private key. Later on, a client `B` wants to open an SSL communication with `A`, so `A` sends `B` its digital certificate. My question is can't `B` just take this certificate, thus stealing the identity `A` - which will allow them to authenticate as `A` to `C`, for example. I understand that `C` will decrypt the certificate with the CA's public key, It will then encrypt its symetric key which will only be decryptable by the real `A`. However, I do not see where authentication comes to play if `B` can actually steal `A`'s identity. Unless I am missing something. 2. Second question: Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? 3. If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. Thank you very much.
2011/05/24
[ "https://Stackoverflow.com/questions/6117315", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764420/" ]
> > My question is can't "B" just take this certificate, thus stealing the identity of "A" - which will allow them to authenticate as "A" to "C" > > > There's also a private part of the certificate that does not get transmitted (the private key). **Without the private key, B cannot authenticate as A.** Similarly, I know your StackOverflow username, but that doens't let me log in as you. > > Why use hashing on the certificate if a part of it is already encrypted by the CA? > > > By doing it this way, anyone can verify that it was the CA who produced the hash, and not someone else. This proves that the certificate was **produced by the CA, and thus, the "validation etc." was performed.** > > If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. > > > It depends on the particular case, but you will likely have identical certificates on each.
I also have some answers. Q1) If B steals A's certificate and try to impersonate as A to C. * C will validate the IP address of B and find out that it does not belong to C. It will then abort the SSL connection. Of course, even if C sends an encrypted message, then only the Real A will be able to decrypt it. Q2) A certificate is usually represented in plain-text using the common format X.509. All entries are readable by anyone. The hashing process is used to digitally sign a document. Digital signing a certificate makes the end user validate that the certificate has not been altered by anyone after it was created. Hashing and encrypting the content using the issuer's private key is done to create a digital signature.
6,117,315
I have three questions regarding SSL that I don't fully understand. 1. If I get it correctly, a server `A` submits a request to a certain CA. Then, it receives (after validation etc.) a digital certificate composed of a public key + identity + an encription of this information using the CA's private key. Later on, a client `B` wants to open an SSL communication with `A`, so `A` sends `B` its digital certificate. My question is can't `B` just take this certificate, thus stealing the identity `A` - which will allow them to authenticate as `A` to `C`, for example. I understand that `C` will decrypt the certificate with the CA's public key, It will then encrypt its symetric key which will only be decryptable by the real `A`. However, I do not see where authentication comes to play if `B` can actually steal `A`'s identity. Unless I am missing something. 2. Second question: Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? 3. If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. Thank you very much.
2011/05/24
[ "https://Stackoverflow.com/questions/6117315", "https://Stackoverflow.com", "https://Stackoverflow.com/users/764420/" ]
An SSL identity is characterized by four parts: 1. A *private* key, which is not shared with anyone. 2. A *public* key, which you can share with anyone. The private and public key form a matched pair: anything you encrypt with one can be decrypted with the other, but you *cannot* decrypt something encrypted with the public key without the private key or vice versa. This is genuine mathematical magic. 3. *Metadata* attached to the public key that state who it is talking about. For a server key, this would identify the DNS name of the service that is being secured (among other things). Other data in here includes things like the intended uses (mainly used to limit the amount of damage that someone with a stolen certificate can do) and an expiry date (to limit how long a stolen certificate can be used for). 4. A *digital signature* on the combination of public key and metadata so that they can't be messed around with and so that someone else can know how much to trust the metadata. There are multiple ways to handle who does the signature: * Signing with the private key (from part 1, above); a self-signed certificate. Anyone can do this but it doesn't convey much trust (precisely because *anyone* can do this). * Getting a group of people who trust each other to vouch for you by signing the certificate; a web-of-trust (so called because the trust relationship is transitive and often symmetric as people sign each others certificates). * Getting a trusted third party to do the signing; a certificate authority (CA). The identity of the CA is guaranteed by another higher-level CA in a *trust chain* back to some root authority that “everyone” trusts (i.e., there's a list built into your SSL library, which it's possible to update at deployment time).There's no basic technical difference between the three types of authorities above, but the nature of the trust people put in them is extremely variable. The details of why this is would require a very long answer indeed! 
Items 2–4 are what comprises the digital certificate. When the client, B, starts the SSL protocol with the server, A, the server's digital certificate is communicated to B as part of the protocol. A's private key is *not* sent, but because B can successfully decrypt a message sent by the other end with the public key in the digital certificate, B can know that A has the private key that matches. B can then look at the metadata in the certificate and see that the other end claims to be A, and can examine the signature to see how much to trust that assertion; if the metadata is signed by an authority that B trusts (directly or indirectly) then B can trust that the other end has A's SSL identity. If that identity is the one that they were expecting (i.e., they wanted to talk to A: in practice, this is done by comparing the DNS name in the certificate with the name that they used when looking up the server address) then they can know that they have a secured communications channel: they're good to go. B can't impersonate A with that information though: B doesn't get A's private key and so it would all fall apart at the first stage of verification. In order for some third party to impersonate B, they need to have (at least) two of: * *The private key*. The owner of the identity needs to take care to stop this from happening, but it is ultimately in their hands. * *A trusted authority that makes false statements*. There's occasional weaknesses here — a self-signed authority is never very trustworthy, a web of trust runs into problems because trust is an awkward thing to handle transitively, and some CAs are thoroughly unscrupulous and others too inclined to not exclude the scum — but mostly this works fairly well because most parties are keen to not cause problems, often for financial reasons. * *A way to poison DNS* so that the target believes a different server is really the one being impersonated. 
Without DNSsec this is somewhat easy unfortunately, but this particular problem is being reduced. As to your other questions… > > Why use hashing on the certificate if a part of it is already encrypted by the CA? Doesn't this mean that no one can mess around with a digital certificate (in high probability) anyway? > > > While keys are fairly long, certificates are longer (for one thing, they include the signers public key anyway, which is typically the same length the key being signed). Hashing is part of the general algorithm for signing documents anyway because nobody wants to be restricted to signing only very short things. Given that the algorithm is required, it makes sense to use it for this purpose. > > If I am stackoverflow and I have 3 servers doing the same thing - allowing clients to access, read, identify etc. - do I have to have a different digital certificate for each of the 3 servers. > > > If you have several servers serving the same DNS name (there's many ways to do this, one of the simplest being round-robin DNS serving) you can put the same identity on each of them. This slightly reduces security, but only very slightly; it's still *one service* that just happens to be implemented by *multiple servers*. In theory you could give each one a different identity (though with the same name) but I can't think of any good reason for actually doing it; it's more likely to worry people than the alternative. Also note that it's possible to have a certificate for more than one service name at once. There are two mechanisms for doing this (adding alternate names to the certificate or using a wildcard in the name in the certificate) but CAs tend to charge quite a lot for signing certificates with them in.
First question: You are correct about what you get back from the CA, but you are missing part of what you need before you submit your request to the CA. You need (1) a certificate request, and (2) the corresponding *private* key. You do not send the private key as part of the request; you keep it secret on your server. Your signed certificate includes a copy of the corresponding *public* key. Before any client will believe that B "owns" the certificate, B has to prove it by using the secret key to sign a challenge sent by the client. B cannot do that without A's private key.

Second question: Typical public-key cryptography operates on fixed-size data (e.g., 2048 bits) and is somewhat computationally expensive. So in order to digitally sign an arbitrary-size document, the document is hashed down to a fixed-size block, which is then encrypted with the private key.

Third question: You can use a single certificate on multiple servers; you just need the corresponding private key on all servers. (And of course the DNS name used to reach the server must match the CN in the certificate, or the client will likely balk. But having one DNS name refer to multiple servers is a common and simple means of load-balancing.)
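The hash-then-sign point above can be illustrated with a short sketch. Python's `hashlib` stands in for the digest step; the private-key encryption of the digest is deliberately omitted, since RSA is not in the standard library. This only shows why hashing is used: documents of any size reduce to the same small, fixed-size block, which is what actually gets signed.

```python
import hashlib

def digest(document: bytes) -> bytes:
    # Hash an arbitrary-size document down to a fixed-size block;
    # in a real signature scheme this digest (not the document itself)
    # is what gets encrypted with the signer's private key.
    return hashlib.sha256(document).digest()

short_doc = b"hello"
long_doc = b"x" * 1_000_000

# Both digests are the same fixed size (32 bytes for SHA-256)...
assert len(digest(short_doc)) == len(digest(long_doc)) == 32
# ...but any change to the document changes the digest, so a signature
# over the digest still covers the whole document.
assert digest(long_doc) != digest(long_doc + b"!")
```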
10,017,027
I have a table TRANSACTION which has duplicates. I want to keep the record with the minimum id and delete all the duplicates based on four fields: DATE, AMOUNT, REFNUMBER, PARENTFOLDERID. I wrote this query, but I am not sure whether it can be written more efficiently. Do you think there is a better way? I am asking because I am worried about the run time. ``` DELETE FROM TRANSACTION WHERE ID IN (SELECT FIT2.ID FROM (SELECT MIN(ID) AS ID, FIT.DATE, FIT.AMOUNT, FIT.REFNUMBER, FIT.PARENTFOLDERID FROM EWORK.TRANSACTION FIT GROUP BY FIT.DATE, FIT.AMOUNT, FIT.REFNUMBER, FIT.PARENTFOLDERID HAVING COUNT(1) > 1 AND FIT.AMOUNT > 0) FIT1, EWORK.TRANSACTION FIT2 WHERE FIT1.DATE = FIT2.DATE AND FIT1.AMOUNT = FIT2.AMOUNT AND FIT1.REFNUMBER = FIT2.REFNUMBER AND FIT1.PARENTFOLDERID = FIT2.PARENTFOLDERID AND FIT1.ID <> FIT2.ID) ```
2012/04/04
[ "https://Stackoverflow.com/questions/10017027", "https://Stackoverflow.com", "https://Stackoverflow.com/users/510242/" ]
It would probably be more efficient to do something like the following. Note the direction of the id comparison: deleting every row for which a same-keyed row with a *smaller* id exists keeps the minimum-id row of each group, which is what you asked for (with `t2.id > t1.id` you would keep the maximum instead). The `amount` column is included in the correlation so that all four of your duplicate-key fields are matched. ``` DELETE FROM transaction t1 WHERE EXISTS( SELECT 1 FROM transaction t2 WHERE t1.date = t2.date AND t1.amount = t2.amount AND t1.refnumber = t2.refnumber AND t1.parentFolderId = t2.parentFolderId AND t2.id < t1.id ) ```
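The EXISTS pattern can be sanity-checked against an in-memory SQLite copy of the table. The table name `txn` and the sample rows are illustrative assumptions (TRANSACTION is a reserved word in SQLite, and the real schema isn't shown); the `t2.id < txn.id` comparison keeps the minimum id per duplicate group, as the question asks.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE txn (id INTEGER PRIMARY KEY, date TEXT, amount REAL,"
    " refnumber TEXT, parentfolderid INTEGER)"
)
conn.executemany("INSERT INTO txn VALUES (?, ?, ?, ?, ?)", [
    (1, "2012-04-04", 10.0, "R1", 7),  # minimum id of its group: kept
    (2, "2012-04-04", 10.0, "R1", 7),  # duplicate: deleted
    (3, "2012-04-04", 10.0, "R1", 7),  # duplicate: deleted
    (4, "2012-04-05", 20.0, "R2", 7),  # no duplicates: kept
])

# Delete every row for which a row with the same four key fields and a
# SMALLER id exists; the minimum-id row of each group survives.
conn.execute("""
    DELETE FROM txn
     WHERE EXISTS (SELECT 1
                     FROM txn t2
                    WHERE t2.date = txn.date
                      AND t2.amount = txn.amount
                      AND t2.refnumber = txn.refnumber
                      AND t2.parentfolderid = txn.parentfolderid
                      AND t2.id < txn.id)
""")
remaining = [r[0] for r in conn.execute("SELECT id FROM txn ORDER BY id")]
print(remaining)  # [1, 4]
```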
I would try something like this: ``` DELETE transaction FROM transaction LEFT OUTER JOIN (SELECT MIN(id) AS id, date, amount, refnumber, parentfolderid FROM transaction GROUP BY date, amount, refnumber, parentfolderid) AS validRows ON transaction.id = validRows.id WHERE validRows.id IS NULL ```
I would try it like this: ``` DELETE FROM transaction WHERE id IN ( SELECT id FROM (SELECT id, ROW_NUMBER() OVER (PARTITION BY date, amount, refnumber, parentfolderid ORDER BY id) rn FROM transaction) WHERE rn <> 1); ```
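As a sanity check, the ROW_NUMBER() approach runs essentially unchanged on an in-memory SQLite database (SQLite ≥ 3.25 is needed for window functions). The table name `txn` and the sample rows are illustrative assumptions, since TRANSACTION is reserved in SQLite and the real schema isn't shown.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE txn (id INTEGER PRIMARY KEY, date TEXT, amount REAL,"
    " refnumber TEXT, parentfolderid INTEGER)"
)
conn.executemany("INSERT INTO txn VALUES (?, ?, ?, ?, ?)", [
    (1, "2012-04-04", 10.0, "R1", 7),  # rn = 1 in its group: kept
    (2, "2012-04-04", 10.0, "R1", 7),  # rn = 2: deleted
    (5, "2012-04-05", 20.0, "R2", 7),  # rn = 1 in its own group: kept
])

# Number each row within its duplicate group, smallest id first, then
# delete everything except row number 1 (the minimum id per group).
conn.execute("""
    DELETE FROM txn
     WHERE id IN (SELECT id
                    FROM (SELECT id,
                                 ROW_NUMBER() OVER (
                                     PARTITION BY date, amount, refnumber, parentfolderid
                                     ORDER BY id) AS rn
                            FROM txn)
                   WHERE rn <> 1)
""")
remaining = [r[0] for r in conn.execute("SELECT id FROM txn ORDER BY id")]
print(remaining)  # [1, 5]
```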
43,339,561
I want to select the rows whose row number is greater than 3 using the `rownum` function, i.e. `(rownum > 3)`. For example, if there are 25 rows, I want to retrieve the last 22 rows by `rownum`. But when I write ``` select * from test_table where rownum>3; ``` it retrieves no rows. Can anyone help me solve this problem? Thanks in advance.
2017/04/11
[ "https://Stackoverflow.com/questions/43339561", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7677507/" ]
In an RDBMS there is no inherent first or last row. What you call "rows" is really a set (or sets), and a set may or may not be ordered. `rownum` is a pseudocolumn that simply enumerates the result set, so it only makes sense after the set has been computed. To order your data (rows) before `rownum` is applied, you must order them in a subquery first: you have to tell the database what "order" means for that particular select statement.
``` select * from (select rownum as rn, t.* from test_table t) where rn > 3 ``` See this article for more samples: [On Top-n and Pagination Queries By Tom Kyte](http://www.oracle.com/technetwork/issue-archive/2007/07-jan/o17asktom-093877.html)
It is not working because Oracle assigns the first row a `ROWNUM` of 1, and since your `WHERE` clause is `ROWNUM>3`, this reduces to `1>3` and the row is discarded. The subsequent row is then tested against a `ROWNUM` of 1 (since the previous row is no longer in the output and no longer needs a row number), again fails the test, and is discarded. Repeat, ad nauseam: every row fails the `WHERE` clause filter and is discarded. If you want to assign the rows a `ROWNUM`, you need to do this in a sub-query: ``` SELECT * -- Finally, in the outer query, filter on the assigned ROWNUM FROM ( SELECT t.*, -- First, in the inner sub-query, apply a ROWNUM ROWNUM AS rn FROM test_table t ) WHERE rn > 3; ``` Or, if you want to order the results before numbering: ``` SELECT * -- Finally, in the outer query, filter on the assigned ROWNUM FROM ( SELECT t.*, -- Second, in the next level sub-query, apply a ROWNUM ROWNUM AS rn FROM ( SELECT * -- First, in the inner-most sub-query, apply an order FROM test_table ORDER BY some_column ) t ) WHERE rn > 3; ```
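The assign-then-test loop described above can be simulated in a few lines of Python. This is a conceptual sketch of how `ROWNUM` is assigned (the counter only advances when a row is accepted), not Oracle's actual implementation.

```python
def filter_with_rownum(rows, predicate):
    # Mimic Oracle's ROWNUM: each candidate row is tentatively given
    # rownum = (rows accepted so far) + 1; if it fails the predicate,
    # it is discarded and the counter does NOT advance.
    accepted = []
    for row in rows:
        rownum = len(accepted) + 1
        if predicate(rownum):
            accepted.append(row)
    return accepted

rows = list(range(25))
# WHERE ROWNUM > 3: every candidate is tested as rownum 1, and 1 > 3
# is false, so nothing is ever accepted.
print(len(filter_with_rownum(rows, lambda rn: rn > 3)))   # 0
# WHERE ROWNUM <= 3 works: rows numbered 1..3 pass and are kept.
print(len(filter_with_rownum(rows, lambda rn: rn <= 3)))  # 3
```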
6,213,814
How to populate form with JSON data using data store? How are the textfields connected with store, model? ``` Ext.define('app.formStore', { extend: 'Ext.data.Model', fields: [ {name: 'naziv', type:'string'}, {name: 'oib', type:'int'}, {name: 'email', type:'string'} ] }); var myStore = Ext.create('Ext.data.Store', { model: 'app.formStore', proxy: { type: 'ajax', url : 'app/myJson.json', reader:{ type:'json' } }, autoLoad:true }); Ext.onReady(function() { var testForm = Ext.create('Ext.form.Panel', { width: 500, renderTo: Ext.getBody(), title: 'testForm', waitMsgTarget: true, fieldDefaults: { labelAlign: 'right', labelWidth: 85, msgTarget: 'side' }, items: [{ xtype: 'fieldset', title: 'Contact Information', items: [{ xtype:'textfield', fieldLabel: 'Name', name: 'naziv' }, { xtype:'textfield', fieldLabel: 'oib', name: 'oib' }, { xtype:'textfield', fieldLabel: 'mail', name: 'email' }] }] }); testForm.getForm().loadRecord(app.formStore); }); ``` JSON ``` [ {"naziv":"Lisa", "oib":"2545898545", "email":"[email protected]"} ] ```
2011/06/02
[ "https://Stackoverflow.com/questions/6213814", "https://Stackoverflow.com", "https://Stackoverflow.com/users/765628/" ]
The field names of your model and form **should match**. Then you can load the form using `loadRecord()`. For example: ``` var record = Ext.create('XYZ', { name: 'Abc', email: '[email protected]' }); formpanel.getForm().loadRecord(record); ``` Or, get the values from the already loaded store.
The answer of Abdel Olakara works great. But if you want to populate without the use of a store you can also do it like: ``` var record = { data : { group : 'Moody Blues', text : 'One of the greatest bands' } }; formpanel.getForm().loadRecord(record); ```
I suggest you use Ext Direct methods. That way you can implement all operations (edit, delete, etc.) very cleanly.
33,864,134
I'm developing an Android application which consists of a navigation drawer and a Google Map. I have successfully developed my navigation drawer and connected my map to it. The thing is, I need the map to zoom to the current location. Here is the code I used in `MapsActivity.java`. ``` protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_maps); setUpMapIfNeeded(); mMap.setMyLocationEnabled(true); // Identify the current location of the device mMap.setOnMyLocationChangeListener(this); // change the place when the device is moving Location currentLocation = getMyLocation(); // Calling the getMyLocation method if(currentLocation!=null){ LatLng currentCoordinates = new LatLng( currentLocation.getLatitude(), currentLocation.getLongitude()); mMap.animateCamera(CameraUpdateFactory.newLatLngZoom(currentCoordinates, 13.0f)); } } ``` Here is how I implemented the getMyLocation() method. ``` //Zoom to the current location private Location getMyLocation() { LocationManager lm = (LocationManager)getSystemService(Context.LOCATION_SERVICE); // Get location from GPS if it's available Location myLocation = lm.getLastKnownLocation(LocationManager.GPS_PROVIDER); // Location wasn't found, check the next most accurate place for the current location if (myLocation == null) { Criteria criteria = new Criteria(); criteria.setAccuracy(Criteria.ACCURACY_COARSE); // Finds a provider that matches the criteria String provider = lm.getBestProvider(criteria, true); // Use the provider to get the last known location myLocation = lm.getLastKnownLocation(provider); } return myLocation; } ``` Here is how I pass the MapsFragment into the NavigationDrawerActivity. ``` fragment = new MapFragment(); ``` When I run this alone (with an intent filter for MapsActivity in the manifest) it works perfectly. But when I run the navigation drawer as the MainActivity, this function is not working; only the default map is loading. What should I do? 
-edit- ``` private void setUpMapIfNeeded() { // Do a null check to confirm that we have not already instantiated the map. if (mMap == null) { // Try to obtain the map from the SupportMapFragment. mMap = ((SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map)) .getMap(); // Check if we were successful in obtaining the map. if (mMap != null) { setUpMap(); } } } ``` My Maps.xml is like this. ``` <fragment xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:id="@+id/map" tools:context=".MapsActivity" android:name="com.google.android.gms.maps.SupportMapFragment" /> ``` My whole MapsActivity.java ``` public class MapsActivity extends FragmentActivity implements GoogleMap.OnMyLocationChangeListener { private GoogleMap mMap; // Might be null if Google Play services APK is not available. private MapView mapView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_maps); setUpMapIfNeeded(); mMap.setMyLocationEnabled(true); // Identify the current location of the device mMap.setOnMyLocationChangeListener(this); // change the place when the device is moving initializaMap(rootView, savedInstanceState); } @Override protected void onResume() { super.onResume(); setUpMapIfNeeded(); } private void setUpMapIfNeeded() { // Do a null check to confirm that we have not already instantiated the map. if (mMap == null) { // Try to obtain the map from the SupportMapFragment. mMap = ((SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map)) .getMap(); // Check if we were successful in obtaining the map. 
if (mMap != null) { setUpMap(); } } } private void initializaMap(View rootView, Bundle savedInstanceState){ MapsInitializer.initialize(MapsActivity.this); switch (GooglePlayServicesUtil.isGooglePlayServicesAvailable(MapsActivity.this)) { case ConnectionResult.SUCCESS: mapView = (MapView) rootView.findViewById(R.id.mapView); mapView.onCreate(savedInstanceState); if (mapView != null) { mMap = mapView.getMap(); mMap.setMapType(GoogleMap.MAP_TYPE_NORMAL); UiSettings mUiSettings = mMap.getUiSettings(); mMap.setMyLocationEnabled(true); mMap.animateCamera(CameraUpdateFactory.zoomTo(15.0f)); mUiSettings.setCompassEnabled(true); mUiSettings.setMyLocationButtonEnabled(false); mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(new LatLng(6.9270786, 79.861243), 13)); } break; case ConnectionResult.SERVICE_MISSING: break; case ConnectionResult.SERVICE_VERSION_UPDATE_REQUIRED: break; default: } } /** * This is where we can add markers or lines, add listeners or move the camera. In this case, we * just add a marker near Africa. * <p/> * This should only be called once and when we are sure that {@link #mMap} is not null. 
*/ private void setUpMap() { mMap.addMarker(new MarkerOptions().position(new LatLng(0, 0)).title("Marker")); } @Override public void onMyLocationChange(Location location) { } } ``` Here is my NavigationDrawer.java ``` public class NavigationDrawer extends ActionBarActivity { private GoogleMap mMap; String[] menutitles; TypedArray menuIcons; // nav drawer title private CharSequence mDrawerTitle; private CharSequence mTitle; private DrawerLayout mDrawerLayout; private ListView mDrawerList; private ActionBarDrawerToggle mDrawerToggle; private List<RowItem> rowItems; private CustomAdapter adapter; private LinearLayout mLenear; static ImageView imageView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_NavigationDrawer); mTitle = mDrawerTitle = getTitle(); menutitles = getResources().getStringArray(R.array.titles); menuIcons = getResources().obtainTypedArray(R.array.icons); mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout); mDrawerList = (ListView) findViewById(R.id.slider_list); mLenear = (LinearLayout)findViewById(R.id.left_drawer); getSupportActionBar().setBackgroundDrawable(new ColorDrawable(Color.parseColor("#FFA500"))); imageView=(ImageView)findViewById(R.id.profPic); Bitmap bitmap= BitmapFactory.decodeResource(getResources(), R.drawable.ic_prof); imageView.setImageBitmap(getCircleBitmap(bitmap)); rowItems = new ArrayList<RowItem>(); for (int i = 0; i < menutitles.length; i++) { RowItem items = new RowItem(menutitles[i], menuIcons.getResourceId( i, -1)); rowItems.add(items); } menuIcons.recycle(); adapter = new CustomAdapter(getApplicationContext(), rowItems); mDrawerList.setAdapter(adapter); mDrawerList.setOnItemClickListener(new SlideitemListener()); // enabling action bar app icon and behaving it as toggle button getSupportActionBar().setDisplayHomeAsUpEnabled(true); getSupportActionBar().setHomeButtonEnabled(true); 
getSupportActionBar().setHomeAsUpIndicator(R.drawable.ic_menu); mDrawerToggle = new ActionBarDrawerToggle(this, mDrawerLayout,R.drawable.ic_menu, R.string.app_name,R.string.app_name) { public void onDrawerClosed(View view) { getSupportActionBar().setTitle(mTitle); // calling onPrepareOptionsMenu() to show action bar icons invalidateOptionsMenu(); } public void onDrawerOpened(View drawerView) { getSupportActionBar().setTitle(mDrawerTitle); // calling onPrepareOptionsMenu() to hide action bar icons invalidateOptionsMenu(); } }; mDrawerLayout.setDrawerListener(mDrawerToggle); if (savedInstanceState == null) { // on first time display view for first nav item updateDisplay(0); } initializaMap(savedInstanceState); } private void initializaMap(Bundle savedInstanceState){ MapsInitializer.initialize(Extract.this); switch (GooglePlayServicesUtil.isGooglePlayServicesAvailable(Extract.this)) { case ConnectionResult.SUCCESS: MapView mapView = (MapView) findViewById(R.id.mapView); mapView.onCreate(savedInstanceState); if (mapView != null) { mMap = mapView.getMap(); mMap.setMapType(GoogleMap.MAP_TYPE_NORMAL); UiSettings mUiSettings = mMap.getUiSettings(); mMap.setMyLocationEnabled(true); mMap.animateCamera(CameraUpdateFactory.zoomTo(15.0f)); mUiSettings.setCompassEnabled(true); mUiSettings.setMyLocationButtonEnabled(false); mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(new LatLng(6.9192, 79.8950), 13)); } break; case ConnectionResult.SERVICE_MISSING: break; case ConnectionResult.SERVICE_VERSION_UPDATE_REQUIRED: break; default: } } //Circle Image public static Bitmap getCircleBitmap(Bitmap bitmap) { int w = bitmap.getWidth(); int h = bitmap.getHeight(); int radius = Math.min(h / 2, w / 2); Bitmap output = Bitmap.createBitmap(w + 8, h + 8, Bitmap.Config.ARGB_8888); Paint p = new Paint(); p.setAntiAlias(true); Canvas c = new Canvas(output); c.drawARGB(0, 0, 0, 0); p.setStyle(Paint.Style.FILL); c.drawCircle((w / 2) + 4, (h / 2) + 4, radius, p); p.setXfermode(new 
PorterDuffXfermode(PorterDuff.Mode.SRC_IN)); c.drawBitmap(bitmap, 4, 4, p); p.setXfermode(null); p.setStyle(Paint.Style.STROKE); p.setColor(Color.WHITE); p.setStrokeWidth(3); c.drawCircle((w / 2) + 2, (h / 2) + 2, radius, p); return output; } class SlideitemListener implements ListView.OnItemClickListener { @Override public void onItemClick(AdapterView<?> parent, View view, int position, long id) { updateDisplay(position); } } private void updateDisplay(int position) { Fragment fragment = null; switch (position) { case 0: // fragment = new MapFragment(); //break; default: break; } if (fragment != null) { FragmentManager fragmentManager = getFragmentManager(); fragmentManager.beginTransaction().replace(R.id.frame_container, fragment).commit(); // update selected item and title, then close the drawer setTitle(menutitles[position]); mDrawerLayout.closeDrawer(mLenear); } else { // error in creating fragment Log.e("Extract", "Error in creating fragment"); } } @Override public void setTitle(CharSequence title) { mTitle = title; getSupportActionBar().setTitle(mTitle); } @Override public boolean onCreateOptionsMenu(Menu menu) { // Inflate the menu; this adds items to the action bar if it is present. 
getMenuInflater().inflate(R.menu.menu_extract, menu); return true; } @Override public boolean onOptionsItemSelected(MenuItem item) { // toggle nav drawer on selecting action bar app icon/title if (mDrawerToggle.onOptionsItemSelected(item)) { return true; } // Handle action bar actions click switch (item.getItemId()) { case R.id.action_settings: return true; default : return super.onOptionsItemSelected(item); } } /*** * Called when invalidateOptionsMenu() is triggered */ @Override public boolean onPrepareOptionsMenu(Menu menu) { // if nav drawer is opened, hide the action items boolean drawerOpen = mDrawerLayout.isDrawerOpen(mLenear); menu.findItem(R.id.action_settings).setVisible(!drawerOpen); return super.onPrepareOptionsMenu(menu); } /** * When using the ActionBarDrawerToggle, you must call it during * onPostCreate() and onConfigurationChanged()... */ @Override protected void onPostCreate(Bundle savedInstanceState) { super.onPostCreate(savedInstanceState); // Sync the toggle state after onRestoreInstanceState has occurred. mDrawerToggle.syncState(); } @Override public void onConfigurationChanged(Configuration newConfig) { super.onConfigurationChanged(newConfig); // Pass any configuration change to the drawer toggles mDrawerToggle.onConfigurationChanged(newConfig); } } ```
2015/11/23
[ "https://Stackoverflow.com/questions/33864134", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Try this: ``` map.animateCamera(CameraUpdateFactory.newLatLngZoom((sydney), 13.0f)); ``` You have not given the zoom level as a float, so it is not working. Try this.
It will not work because the navigation drawer takes a fragment, and you are initializing: ``` fragment = new MapFragment(); ``` so it takes the **MapFragment default layout**. You must change **updateDisplay** to take an activity, not a fragment. In other words, change the navigation drawer to use activities instead of fragments.
33,864,134
I'm developing an Android Application which is consists of a Navigation drawer and a Google Map. I have successfully developed my Navigation Drawer and connect my Map into it. The thing is I need my Map to Zoom to the current location. Here is the code I used in `MapsActivity.java`. ``` protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_maps); setUpMapIfNeeded(); mMap.setMyLocationEnabled(true); // Identify the current location of the device mMap.setOnMyLocationChangeListener(this); // change the place when the device is moving Location currentLocation = getMyLocation(); // Calling the getMyLocation method if(currentLocation!=null){ LatLng currentCoordinates = new LatLng( currentLocation.getLatitude(), currentLocation.getLongitude()); mMap.animateCamera(CameraUpdateFactory.newLatLngZoom(currentCoordinates, 13.0f)); } } ``` Here I implemented getMyLocation() method. ``` //Zoom to the current location private Location getMyLocation() { LocationManager lm = (LocationManager)getSystemService(Context.LOCATION_SERVICE); // Get location from GPS if it's available Location myLocation = lm.getLastKnownLocation(LocationManager.GPS_PROVIDER); // Location wasn't found, check the next most accurate place for the current location if (myLocation == null) { Criteria criteria = new Criteria(); criteria.setAccuracy(Criteria.ACCURACY_COARSE); // Finds a provider that matches the criteria String provider = lm.getBestProvider(criteria, true); // Use the provider to get the last known location myLocation = lm.getLastKnownLocation(provider); } return myLocation; } ``` Here is How I gave MapsFragment in to NavigatioDrawerActivity. ``` fragment = new MapFragment(); ``` When I run this alone (Insert intent filter to MapsActivity in Manifest) it works perfect. But, when I'm running the Nvigation Drawer as MainActivity this function is not working. Only the default Map is loading. What should I do? 
-edit- ``` private void setUpMapIfNeeded() { // Do a null check to confirm that we have not already instantiated the map. if (mMap == null) { // Try to obtain the map from the SupportMapFragment. mMap = ((SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map)) .getMap(); // Check if we were successful in obtaining the map. if (mMap != null) { setUpMap(); } } } ``` My Maps.xml is like this. ``` <fragment xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" android:id="@+id/map" tools:context=".MapsActivity" android:name="com.google.android.gms.maps.SupportMapFragment" /> ``` My whole MapsActivity.java ``` public class MapsActivity extends FragmentActivity implements GoogleMap.OnMyLocationChangeListener { private GoogleMap mMap; // Might be null if Google Play services APK is not available. private MapView mapView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_maps); setUpMapIfNeeded(); mMap.setMyLocationEnabled(true); // Identify the current location of the device mMap.setOnMyLocationChangeListener(this); // change the place when the device is moving initializaMap(rootView, savedInstanceState); } @Override protected void onResume() { super.onResume(); setUpMapIfNeeded(); } private void setUpMapIfNeeded() { // Do a null check to confirm that we have not already instantiated the map. if (mMap == null) { // Try to obtain the map from the SupportMapFragment. mMap = ((SupportMapFragment) getSupportFragmentManager().findFragmentById(R.id.map)) .getMap(); // Check if we were successful in obtaining the map. 
if (mMap != null) { setUpMap(); } } } private void initializaMap(View rootView, Bundle savedInstanceState){ MapsInitializer.initialize(MapsActivity.this); switch (GooglePlayServicesUtil.isGooglePlayServicesAvailable(MapsActivity.this)) { case ConnectionResult.SUCCESS: mapView = (MapView) rootView.findViewById(R.id.mapView); mapView.onCreate(savedInstanceState); if (mapView != null) { mMap = mapView.getMap(); mMap.setMapType(GoogleMap.MAP_TYPE_NORMAL); UiSettings mUiSettings = mMap.getUiSettings(); mMap.setMyLocationEnabled(true); mMap.animateCamera(CameraUpdateFactory.zoomTo(15.0f)); mUiSettings.setCompassEnabled(true); mUiSettings.setMyLocationButtonEnabled(false); mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(new LatLng(6.9270786, 79.861243), 13)); } break; case ConnectionResult.SERVICE_MISSING: break; case ConnectionResult.SERVICE_VERSION_UPDATE_REQUIRED: break; default: } } /** * This is where we can add markers or lines, add listeners or move the camera. In this case, we * just add a marker near Africa. * <p/> * This should only be called once and when we are sure that {@link #mMap} is not null. 
*/ private void setUpMap() { mMap.addMarker(new MarkerOptions().position(new LatLng(0, 0)).title("Marker")); } @Override public void onMyLocationChange(Location location) { } } ``` Here is my NavigationDrawer.java ``` public class NavigationDrawer extends ActionBarActivity { private GoogleMap mMap; String[] menutitles; TypedArray menuIcons; // nav drawer title private CharSequence mDrawerTitle; private CharSequence mTitle; private DrawerLayout mDrawerLayout; private ListView mDrawerList; private ActionBarDrawerToggle mDrawerToggle; private List<RowItem> rowItems; private CustomAdapter adapter; private LinearLayout mLenear; static ImageView imageView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_NavigationDrawer); mTitle = mDrawerTitle = getTitle(); menutitles = getResources().getStringArray(R.array.titles); menuIcons = getResources().obtainTypedArray(R.array.icons); mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout); mDrawerList = (ListView) findViewById(R.id.slider_list); mLenear = (LinearLayout)findViewById(R.id.left_drawer); getSupportActionBar().setBackgroundDrawable(new ColorDrawable(Color.parseColor("#FFA500"))); imageView=(ImageView)findViewById(R.id.profPic); Bitmap bitmap= BitmapFactory.decodeResource(getResources(), R.drawable.ic_prof); imageView.setImageBitmap(getCircleBitmap(bitmap)); rowItems = new ArrayList<RowItem>(); for (int i = 0; i < menutitles.length; i++) { RowItem items = new RowItem(menutitles[i], menuIcons.getResourceId( i, -1)); rowItems.add(items); } menuIcons.recycle(); adapter = new CustomAdapter(getApplicationContext(), rowItems); mDrawerList.setAdapter(adapter); mDrawerList.setOnItemClickListener(new SlideitemListener()); // enabling action bar app icon and behaving it as toggle button getSupportActionBar().setDisplayHomeAsUpEnabled(true); getSupportActionBar().setHomeButtonEnabled(true); 
        getSupportActionBar().setHomeAsUpIndicator(R.drawable.ic_menu);

        mDrawerToggle = new ActionBarDrawerToggle(this, mDrawerLayout, R.drawable.ic_menu, R.string.app_name, R.string.app_name) {

            public void onDrawerClosed(View view) {
                getSupportActionBar().setTitle(mTitle);
                // calling onPrepareOptionsMenu() to show action bar icons
                invalidateOptionsMenu();
            }

            public void onDrawerOpened(View drawerView) {
                getSupportActionBar().setTitle(mDrawerTitle);
                // calling onPrepareOptionsMenu() to hide action bar icons
                invalidateOptionsMenu();
            }
        };
        mDrawerLayout.setDrawerListener(mDrawerToggle);

        if (savedInstanceState == null) {
            // on first time display view for first nav item
            updateDisplay(0);
        }

        initializaMap(savedInstanceState);
    }

    private void initializaMap(Bundle savedInstanceState) {
        MapsInitializer.initialize(Extract.this);

        switch (GooglePlayServicesUtil.isGooglePlayServicesAvailable(Extract.this)) {
            case ConnectionResult.SUCCESS:
                MapView mapView = (MapView) findViewById(R.id.mapView);
                mapView.onCreate(savedInstanceState);

                if (mapView != null) {
                    mMap = mapView.getMap();
                    mMap.setMapType(GoogleMap.MAP_TYPE_NORMAL);
                    UiSettings mUiSettings = mMap.getUiSettings();
                    mMap.setMyLocationEnabled(true);
                    mMap.animateCamera(CameraUpdateFactory.zoomTo(15.0f));
                    mUiSettings.setCompassEnabled(true);
                    mUiSettings.setMyLocationButtonEnabled(false);
                    mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(new LatLng(6.9192, 79.8950), 13));
                }
                break;
            case ConnectionResult.SERVICE_MISSING:
                break;
            case ConnectionResult.SERVICE_VERSION_UPDATE_REQUIRED:
                break;
            default:
        }
    }

    // Circle Image
    public static Bitmap getCircleBitmap(Bitmap bitmap) {
        int w = bitmap.getWidth();
        int h = bitmap.getHeight();
        int radius = Math.min(h / 2, w / 2);
        Bitmap output = Bitmap.createBitmap(w + 8, h + 8, Bitmap.Config.ARGB_8888);

        Paint p = new Paint();
        p.setAntiAlias(true);

        Canvas c = new Canvas(output);
        c.drawARGB(0, 0, 0, 0);
        p.setStyle(Paint.Style.FILL);
        c.drawCircle((w / 2) + 4, (h / 2) + 4, radius, p);

        p.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));
        c.drawBitmap(bitmap, 4, 4, p);
        p.setXfermode(null);
        p.setStyle(Paint.Style.STROKE);
        p.setColor(Color.WHITE);
        p.setStrokeWidth(3);
        c.drawCircle((w / 2) + 2, (h / 2) + 2, radius, p);

        return output;
    }

    class SlideitemListener implements ListView.OnItemClickListener {

        @Override
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            updateDisplay(position);
        }
    }

    private void updateDisplay(int position) {
        Fragment fragment = null;

        switch (position) {
            case 0:
                // fragment = new MapFragment();
                // break;
            default:
                break;
        }

        if (fragment != null) {
            FragmentManager fragmentManager = getFragmentManager();
            fragmentManager.beginTransaction().replace(R.id.frame_container, fragment).commit();

            // update selected item and title, then close the drawer
            setTitle(menutitles[position]);
            mDrawerLayout.closeDrawer(mLenear);
        } else {
            // error in creating fragment
            Log.e("Extract", "Error in creating fragment");
        }
    }

    @Override
    public void setTitle(CharSequence title) {
        mTitle = title;
        getSupportActionBar().setTitle(mTitle);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.menu_extract, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        // toggle nav drawer on selecting action bar app icon/title
        if (mDrawerToggle.onOptionsItemSelected(item)) {
            return true;
        }

        // Handle action bar actions click
        switch (item.getItemId()) {
            case R.id.action_settings:
                return true;
            default:
                return super.onOptionsItemSelected(item);
        }
    }

    /***
     * Called when invalidateOptionsMenu() is triggered
     */
    @Override
    public boolean onPrepareOptionsMenu(Menu menu) {
        // if nav drawer is opened, hide the action items
        boolean drawerOpen = mDrawerLayout.isDrawerOpen(mLenear);
        menu.findItem(R.id.action_settings).setVisible(!drawerOpen);
        return super.onPrepareOptionsMenu(menu);
    }

    /**
     * When using the ActionBarDrawerToggle, you must call it during
     * onPostCreate() and onConfigurationChanged()...
     */
    @Override
    protected void onPostCreate(Bundle savedInstanceState) {
        super.onPostCreate(savedInstanceState);
        // Sync the toggle state after onRestoreInstanceState has occurred.
        mDrawerToggle.syncState();
    }

    @Override
    public void onConfigurationChanged(Configuration newConfig) {
        super.onConfigurationChanged(newConfig);
        // Pass any configuration change to the drawer toggles
        mDrawerToggle.onConfigurationChanged(newConfig);
    }
}
```
2015/11/23
[ "https://Stackoverflow.com/questions/33864134", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
Try this:

```
map.moveCamera(CameraUpdateFactory.newLatLngZoom(currentCoordinates, 13));
```

In XML:

```
<com.google.android.gms.maps.MapView
    android:id="@+id/mapView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```

In the Java activity:

```
private void initializaMap(Bundle savedInstanceState) {
    MapsInitializer.initialize(MainActivity.this);

    switch (GooglePlayServicesUtil.isGooglePlayServicesAvailable(getActivity())) {
        case ConnectionResult.SUCCESS:
            mapView = (MapView) findViewById(R.id.mapView);
            mapView.onCreate(savedInstanceState);

            if (mapView != null) {
                mMap = mapView.getMap();
                mMap.setMapType(GoogleMap.MAP_TYPE_NORMAL);
                UiSettings mUiSettings = mMap.getUiSettings();
                mMap.setMyLocationEnabled(true);
                mMap.animateCamera(CameraUpdateFactory.zoomTo(15.0f));
                mUiSettings.setCompassEnabled(true);
                mUiSettings.setMyLocationButtonEnabled(false);
                mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(new LatLng(mLatitude, mLongitude), 13));
            }
            break;
        case ConnectionResult.SERVICE_MISSING:
            break;
        case ConnectionResult.SERVICE_VERSION_UPDATE_REQUIRED:
            break;
        default:
    }
}
```

Call it like this:

```
initializaMap(savedInstanceState);
```
It will not work because the navigation drawer takes a fragment and you are initializing:

```
fragment = new MapFragment();
```

so it takes the **MapFragment default layout**. You must change **updateDisplay** to take an activity, not a fragment. In other words, change the navigation drawer to use activities instead of fragments.
32,739,103
I have a complex object and I'm trying to set the > > SelectedTransportation > > > property which I manually add in a mapping. The code properly populates the dropdownlist, but I can't figure out how to set SelectedTransportation properly. I've tried setting it during the mapping process and after mapping through a loop but all attempts have failed. Seems like this should be rather easy, but the solution eludes me. ```js var model = {"LoadCarriers":[ { "Id":1, "IsDispatched":false, "IsPrimary":false, "RCNotes":null, "CarrierId":4, "Carrier":{ "Id":4, "Name":"West Chase", "MCNumber":"EPZEPFEEGJAJ", "DOTNumber":"AJSCEXFTFJ", "InternetTruckStopCACCI":"DJOGRBQ", "Phone":"0773283820", "RemitToPhone":null, "AlternatePhone":"4428290470", "Fax":null, "MainAddress":{ "ShortAddress":"Toledo, IN", "Address1":"Lee St", "Address2":"apt 131", "City":"Toledo", "State":"IN", "PostalCode":"07950", "Country":"US" }, "RemitToAddress":{ "ShortAddress":"Fuquay Varina, MO", "Address1":"Manchester Rd", "Address2":"", "City":"Fuquay Varina", "State":"MO", "PostalCode":"23343", "Country":"US" }, "EmailAddress":"[email protected]", "Coverage":null, "CanLoad":false, "InsuranceNumber":"RIQERAIAJBMP", "InsuranceExpirationDate":"\/Date(1442978115587)\/", "AdditionalInsurance":null, "ProNumber":"07643", "URL":"http://www.west-chase.com", "AccountId":1 }, "Dispatcher":"Bob McGill", "DriverId":null, "LoadDriver":{ "Id":1, "Name":"Bobby Pittman", "Phone":"8950121068", "Mobile":null, "MT":false, "Tractor":"OQRNBP", "Trailer":"QTZP", "Location":"Lee St", "TansportationSize":"958424896573544192", "Pallets":null, "IsDispatched":false, "TransportationId":7, "Transportation":{ "Name":"Flatbed or Van", "Id":7 }, "TransportationList":[ { "Name":"Flat", "Id":1 }, { "Name":"Van or Reefer", "Id":2 }, { "Name":"Rail", "Id":3 }, { "Name":"Auto", "Id":4 }, { "Name":"Double Drop", "Id":5 }, { "Name":"Flat with Tarps,", "Id":6 }, { "Name":"Flatbed or Van", "Id":7 }, { "Name":"Flatbed, Van or Reefer", "Id":8 }, { 
"Name":"Flatbed with Sides", "Id":9 }, { "Name":"Hopper Bottom", "Id":10 }, { "Name":"Hot Shot", "Id":11 }, { "Name":"Lowboy", "Id":12 }, { "Name":"Maxi", "Id":13 }, { "Name":"Power Only", "Id":14 }, { "Name":"Reefer w/ Pallet Exchange", "Id":15 }, { "Name":"Removable Gooseneck", "Id":16 }, { "Name":"Step Deck", "Id":17 }, { "Name":"Tanker", "Id":18 }, { "Name":"Curtain Van", "Id":19 }, { "Name":"Flatbed Hazardous", "Id":20 }, { "Name":"Reefer Hazardous", "Id":21 }, { "Name":"Van Hazardous", "Id":22 }, { "Name":"Vented Van", "Id":23 }, { "Name":"Van w/ Pallet Exchange", "Id":24 }, { "Name":"B-Train", "Id":25 }, { "Name":"Container", "Id":26 }, { "Name":"Double Flat", "Id":27 }, { "Name":"Flatbed or Stepdeck", "Id":28 }, { "Name":"Air", "Id":29 }, { "Name":"Ocean", "Id":30 }, { "Name":"Walking Floor", "Id":31 }, { "Name":"Landoll Flatbed", "Id":32 }, { "Name":"Conestoga", "Id":33 }, { "Name":"Load Out", "Id":34 }, { "Name":"Van Air-Ride", "Id":35 }, { "Name":"Container Hazardous", "Id":36 } ], "CarrierId":0, "Carrier":null }, "Carriers":null, "LoadId":1 } ]}; var loadDriverModel = function (data) { ko.mapping.fromJS(data, {}, this); this.SelectedTransportation = ko.observable(); }; var loadDriverMapping = { 'LoadDriver': { key: function (data) { return data.Id; }, create: function (options) { return new loadDriverModel(options.data); } } }; var carrierModel = function (data) { ko.mapping.fromJS(data, loadDriverMapping, this); }; var carrierMapping = { 'LoadCarriers': { key: function (data) { return data.Id; }, create: function (options) { return new carrierModel(options.data); } } }; var model = ko.mapping.fromJS(model); ko.applyBindings(model); ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.2.0/knockout-min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/knockout.mapping/2.4.1/knockout.mapping.js"></script> <!-- ko foreach: 
LoadCarriers --> <select class="form-control" data-bind="options:LoadDriver.TransportationList, optionsText:'Name', value:$data.LoadDriver.SelectedTransportation"></select> <!-- /ko --> ```
2015/09/23
[ "https://Stackoverflow.com/questions/32739103", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2855467/" ]
Your idea of creating a proxy is good imo, however, if you have access to ES6, why not look into [Proxy](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy)? I think it does what you want out-of-the-box. The MDN provides good examples of how to write traps for value validation in a setter, etc. **EDIT:** **Possible trap** I have imagined: `document.currentScript` is not supported in IE. So if you care about it and decide to polyfill it/use a pre-existing polyfill, make sure it is secure. Otherwise it could be used to modify on the fly the external script URL returned by `document.currentScript` and skew the proxy. I don't know if this could happen in real life, though.
This approach to protecting JavaScript objects has a very significant issue that should be addressed, otherwise it will not work properly. [MDN](https://developer.mozilla.org/en-US/docs/Web/API/Document/currentScript) notes about this API that:

> It's important to note that this will not reference the `<script>` element if the code in the script is being called as a *callback* or *event handler*; it will only reference the element while it's initially being processed.

Thus, any call to `proxy.doStuffWithBigApple();` inside callbacks and event handlers might lead to misbehaving of your framework.
5,100,229
So I have set up a MySQL database that holds an image (more specifically, a path to an image) and the image's rank (starting at 0). I then created a web page that displays two images at random at a time. [Up till here everything works fine] I want my users to be able to click on one of the images that they like better and for that image's rank to increase by +1 in the database. How would I set that up? Please be as detailed as possible in your explanation. I am new to PHP and MySQL.
2011/02/24
[ "https://Stackoverflow.com/questions/5100229", "https://Stackoverflow.com", "https://Stackoverflow.com/users/518080/" ]
Well, I wasn't able to work out why the BETWEEN function didn't work - but MySQL doesn't allow BETWEEN on floating point numbers either, so I'm going to assume it's a similar reason. I changed my code to build its own between statement:

```
NSPredicate *longPredicate = nil;
NSPredicate *latPredicate = nil;

if ([neCoordLong floatValue] > [swCoordLong floatValue]) {
    longPredicate = [NSPredicate predicateWithFormat:@"longitude <= %@ AND longitude >= %@", neCoordLong, swCoordLong];
} else {
    longPredicate = [NSPredicate predicateWithFormat:@"longitude <= %@ AND longitude >= %@", swCoordLong, neCoordLong];
}

if ([neCoordLat floatValue] > [swCoordLat floatValue]) {
    latPredicate = [NSPredicate predicateWithFormat:@"latitude <= %@ AND latitude >= %@", neCoordLat, swCoordLat];
} else {
    latPredicate = [NSPredicate predicateWithFormat:@"latitude <= %@ AND latitude >= %@", swCoordLat, neCoordLat];
}
```

Hopefully this helps someone else.
I don't see the exact cause either, and you haven't shown the code where the rest of boutiqueRequest's properties are set and the fetch is fired (not that it matters unless you set the predicate again or something else funky), but the report

> 2011-02-24 13:57:18.916 DL2[9628:207] -[NSCFNumber constantValue]: unrecognized selector sent to instance 0x954ba80

indicates you're sending the message -constantValue to an object of class NSCFNumber (a toll-free bridged NSNumber/CFNumber instance), which doesn't handle it. So I suggest you try using only portions of the predicate to narrow down which parts do and don't cause some expression within the predicate to be substituted with an NSCFNumber instance.
5,100,229
So I have set up a MySQL database that holds an image (more specifically, a path to an image) and the image's rank (starting at 0). I then created a web page that displays two images at random at a time. [Up till here everything works fine] I want my users to be able to click on one of the images that they like better and for that image's rank to increase by +1 in the database. How would I set that up? Please be as detailed as possible in your explanation. I am new to PHP and MySQL.
2011/02/24
[ "https://Stackoverflow.com/questions/5100229", "https://Stackoverflow.com", "https://Stackoverflow.com/users/518080/" ]
I had a similar error as well. I am pretty sure it is a very low-level database-related issue. Here is some more information I have:

a. With an in-memory database, I could run a test case with exactly the same data just fine.

b. When I got the error "-[\_\_NSCFNumber constantValue]: unrecognized selector sent to instance 0xa9e4ca0", I did

```
(lldb) po 0xa9e4ca0
$0 = 178146464
-96.24999189101783
```

Not sure where 178146464 is coming from. I will try the manual "between" now.
I don't see the exact cause either, and you haven't shown the code where the rest of boutiqueRequest's properties are set and the fetch is fired (not that it matters unless you set the predicate again or something else funky), but the report

> 2011-02-24 13:57:18.916 DL2[9628:207] -[NSCFNumber constantValue]: unrecognized selector sent to instance 0x954ba80

indicates you're sending the message -constantValue to an object of class NSCFNumber (a toll-free bridged NSNumber/CFNumber instance), which doesn't handle it. So I suggest you try using only portions of the predicate to narrow down which parts do and don't cause some expression within the predicate to be substituted with an NSCFNumber instance.
5,100,229
So I have set up a MySQL database that holds an image (more specifically, a path to an image) and the image's rank (starting at 0). I then created a web page that displays two images at random at a time. [Up till here everything works fine] I want my users to be able to click on one of the images that they like better and for that image's rank to increase by +1 in the database. How would I set that up? Please be as detailed as possible in your explanation. I am new to PHP and MySQL.
2011/02/24
[ "https://Stackoverflow.com/questions/5100229", "https://Stackoverflow.com", "https://Stackoverflow.com/users/518080/" ]
Well, I wasn't able to work out why the BETWEEN function didn't work - but MySQL doesn't allow BETWEEN on floating point numbers either, so I'm going to assume it's a similar reason. I changed my code to build its own between statement:

```
NSPredicate *longPredicate = nil;
NSPredicate *latPredicate = nil;

if ([neCoordLong floatValue] > [swCoordLong floatValue]) {
    longPredicate = [NSPredicate predicateWithFormat:@"longitude <= %@ AND longitude >= %@", neCoordLong, swCoordLong];
} else {
    longPredicate = [NSPredicate predicateWithFormat:@"longitude <= %@ AND longitude >= %@", swCoordLong, neCoordLong];
}

if ([neCoordLat floatValue] > [swCoordLat floatValue]) {
    latPredicate = [NSPredicate predicateWithFormat:@"latitude <= %@ AND latitude >= %@", neCoordLat, swCoordLat];
} else {
    latPredicate = [NSPredicate predicateWithFormat:@"latitude <= %@ AND latitude >= %@", swCoordLat, neCoordLat];
}
```

Hopefully this helps someone else.
At the beginning of your fetch you're declaring `NSPredicate *predicateToRun = nil;` and then you assign an NSCompoundPredicate to it in `predicateToRun = [NSCompoundPredicate andPredicateWithSubpredicates:[NSArray arrayWithObjects:longPredicate, latPredicate, nil]];` See if that solves it. Rog
5,100,229
So I have set up a MySQL database that holds an image (more specifically, a path to an image) and the image's rank (starting at 0). I then created a web page that displays two images at random at a time. [Up till here everything works fine] I want my users to be able to click on one of the images that they like better and for that image's rank to increase by +1 in the database. How would I set that up? Please be as detailed as possible in your explanation. I am new to PHP and MySQL.
2011/02/24
[ "https://Stackoverflow.com/questions/5100229", "https://Stackoverflow.com", "https://Stackoverflow.com/users/518080/" ]
Well, I wasn't able to work out why the BETWEEN function didn't work - but MySQL doesn't allow BETWEEN on floating point numbers either, so I'm going to assume it's a similar reason. I changed my code to build its own between statement:

```
NSPredicate *longPredicate = nil;
NSPredicate *latPredicate = nil;

if ([neCoordLong floatValue] > [swCoordLong floatValue]) {
    longPredicate = [NSPredicate predicateWithFormat:@"longitude <= %@ AND longitude >= %@", neCoordLong, swCoordLong];
} else {
    longPredicate = [NSPredicate predicateWithFormat:@"longitude <= %@ AND longitude >= %@", swCoordLong, neCoordLong];
}

if ([neCoordLat floatValue] > [swCoordLat floatValue]) {
    latPredicate = [NSPredicate predicateWithFormat:@"latitude <= %@ AND latitude >= %@", neCoordLat, swCoordLat];
} else {
    latPredicate = [NSPredicate predicateWithFormat:@"latitude <= %@ AND latitude >= %@", swCoordLat, neCoordLat];
}
```

Hopefully this helps someone else.
I had a similar error as well. I am pretty sure it is a very low-level database-related issue. Here is some more information I have:

a. With an in-memory database, I could run a test case with exactly the same data just fine.

b. When I got the error "-[\_\_NSCFNumber constantValue]: unrecognized selector sent to instance 0xa9e4ca0", I did

```
(lldb) po 0xa9e4ca0
$0 = 178146464
-96.24999189101783
```

Not sure where 178146464 is coming from. I will try the manual "between" now.
5,100,229
So I have set up a MySQL database that holds an image (more specifically, a path to an image) and the image's rank (starting at 0). I then created a web page that displays two images at random at a time. [Up till here everything works fine] I want my users to be able to click on one of the images that they like better and for that image's rank to increase by +1 in the database. How would I set that up? Please be as detailed as possible in your explanation. I am new to PHP and MySQL.
2011/02/24
[ "https://Stackoverflow.com/questions/5100229", "https://Stackoverflow.com", "https://Stackoverflow.com/users/518080/" ]
I had a similar error as well. I am pretty sure it is a very low-level database-related issue. Here is some more information I have:

a. With an in-memory database, I could run a test case with exactly the same data just fine.

b. When I got the error "-[\_\_NSCFNumber constantValue]: unrecognized selector sent to instance 0xa9e4ca0", I did

```
(lldb) po 0xa9e4ca0
$0 = 178146464
-96.24999189101783
```

Not sure where 178146464 is coming from. I will try the manual "between" now.
At the beginning of your fetch you're declaring `NSPredicate *predicateToRun = nil;` and then you assign an NSCompoundPredicate to it in `predicateToRun = [NSCompoundPredicate andPredicateWithSubpredicates:[NSArray arrayWithObjects:longPredicate, latPredicate, nil]];` See if that solves it. Rog
357,141
I am stuck finding a solution for the below requirement. Assume drop down #1 and drop down #2 are selection boxes in the Magento 2 admin UI grid filters. If we select an option in drop down #1, then we have to filter the options in drop down #2 based on the drop down #1 selection value. If anyone is aware of a solution / any alternative to achieve this requirement, please help by sharing your feedback / suggestions. [![enter image description here](https://i.stack.imgur.com/vThDa.png)](https://i.stack.imgur.com/vThDa.png) Thanks in advance
2022/06/23
[ "https://magento.stackexchange.com/questions/357141", "https://magento.stackexchange.com", "https://magento.stackexchange.com/users/12392/" ]
The answer is just a concept and I don't have enough time to provide a full example, but I hope this will help to understand the concept.

**1. Custom Column Ui Component**

You need to create a custom column element for your second filter and extend, for example, `Magento_Ui/js/grid/columns/select`. You can still use the native template.

```js
define([
    'Magento_Ui/js/grid/columns/select',
    'uiRegistry',
    'underscore'
], function (Select, registry, _) {
    'use strict';

    return Select.extend({
        defaults: {
            sourceFilter: '',     // the name of the primary filter
            sourceFilterUi: null,
            allOptions: []        // all available options
        },

        initConfig: function () {
            this._super();

            // maybe in some cases you need to use a defer to wait for the element
            this.sourceFilterUi = registry.get(this.parent + '.' + this.sourceFilter);

            // store all options and reset the applicable ones
            this.allOptions = this.options;
            this.options = [];

            // track source changes
            if (this.sourceFilterUi) {
                this.sourceFilterUi.value.subscribe(this.sourceChanged.bind(this));
            }

            // here you can add logic to check the source value and rebuild options (loaded from bookmark, etc.)
        },

        sourceChanged: function (parentValue) {
            // parentValue might be undefined, a string or an array depending on the parent source
            // store the current value
            var oldValue = this.value(),
                restoredValue;

            // generate the applicable options
            var options = [];
            _.each(this.allOptions, function (option) {
                if (option.parent == parentValue) {
                    options.push(option);
                }
                if (oldValue == option.value) {
                    restoredValue = option.value;
                }
            });

            // update options
            this.options(options);

            // restore the value after updating the options
            if (restoredValue) {
                this.value(restoredValue);
            }
        }
    });
});
```

**2. Filter in listing**

You need to define a custom filter for the listing and disable the original filter.

```xml
<?xml version="1.0"?>
<listing xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="urn:magento:module:Magento_Ui:etc/ui_configuration.xsd">
    <listingToolbar name="listing_top">
        <filters name="listing_filters">
            <filterSelect name="second_filter" provider="${ $.parentName }" component="Acme_StackExchange/js/grid/filters/elements/select-deps">
                <argument name="data" xsi:type="array">
                    <item name="config" xsi:type="array">
                        <item name="sourceFilter" xsi:type="string" translate="true">first_filter</item>
                    </item>
                </argument>
                <settings>
                    <options class="Acme\StackExchange\Ui\Component\Listing\Columns\Second\Options"/>
                    <!-- ... -->
                </settings>
            </filterSelect>
        </filters>
    </listingToolbar>
</listing>
```

**3. Create custom options with relations**

```php
// ...
class Options implements OptionSourceInterface
{
    // ...

    public function toOptionArray(): array
    {
        return [
            [
                'label' => __('Label'),
                'value' => 'value',
                'parent' => 'parent',
            ],
            // ...
        ];
    }
}
```
Credit to Victor, just refactored the code with working examples:

```
define([
    'Magento_Ui/js/form/element/select',
    'uiRegistry',
    'underscore'
], function (Select, registry, _) {
    'use strict';

    return Select.extend({
        defaults: {
            parent: '${ $.parentName }',
            sourceFilter: '',
            sourceFilterUi: null,
            allOptions: []
        },

        initConfig: function () {
            this._super();
            this.sourceFilterUi = registry.get(this.parent + '.' + this.sourceFilter);
            this.allOptions = this.options;

            if (!this.sourceFilterUi || (this.sourceFilterUi == undefined)) {
                return;
            }

            this.sourceFilterUi.value.subscribe(this.setFilteredOptions.bind(this));
        },

        setFilteredOptions: function (parent) {
            if (parent == undefined) {
                return;
            }

            var filteredOptions = [];

            _.each(this.allOptions, function (option) {
                if (option.parent == parent) {
                    filteredOptions.push(option);
                }
            });

            this.options(filteredOptions);
        }
    });
});
```
34,798,757
I am working on an app written in Polymer. I have some CSS variables defined like this:

```
:root {
    --my-option-1: #ff8a80;
    --my-option-2: #4a148c;
    --my-option-3: #8c9eff;
}
```

The user literally chooses "1", "2", or "3". I have a function that looks like this:

```
// The v parameter will be 1, 2, or 3
function getOptionColor(v) {
    var name = '--my-option-' + v;
    return ?;
}
```

I need `getOptionColor` to return `#ff8a80`, `#4a148c`, or `#8c9eff` based on the value entered into the function. However, I do not know how to get the value of a CSS variable at runtime. Is there a way to do this? If so, how?
2016/01/14
[ "https://Stackoverflow.com/questions/34798757", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1185425/" ]
You can use a constructor, I don't see a need for inheritance:

```
function Config(key, name, icon, values, filterFn) {
    this.key = key;
    this.name = name;
    this.icon = icon;
    this.values = values;
    this.filterFn = filterFn;
}

var cfg = {};

// Don't want to repeat "someKey"? Put it in a variable.
var myKey = 'someKey';
cfg[myKey] = new Config(myKey, 'someName', 'icon', [1, 2], function (obj) { return obj + myKey; });

myKey = 'otherKey';
cfg[myKey] = new Config(myKey, 'anotherName', 'anotherIcon', [3, 4], function (obj) { return obj + '_' + myKey; });

// Or create a helper method, may be overdoing it...
function addConfig(key, name, icon, values, filterFn) {
    cfg[key] = new Config(key, name, icon, values, filterFn);
}

addConfig('someKey', 'thatName', 'thisIcon', [6, 7], o => o);
```

In EcmaScript 6, you can save some typing:

```
function Config(key, name, icon, values, filterFn) {
    Object.assign(this, {key, name, icon, values, filterFn});
}
```
What I've done before:

```
var defaultConfig = {
    everything: 'goes',
    'in': {here: ""},
    once: true
};

var selectedOptions = jQuery.extend(true, {}, defaultConfig, optionConfig);
```

Where optionConfig can look like this:

```
{once: false}
```
16,637,051
I'm trying to make a number input. I've made it so my textbox only accepts numbers via this code:

```
function isNumber(evt) {
    evt = (evt) ? evt : window.event;
    var charCode = (evt.which) ? evt.which : evt.keyCode;
    if (charCode > 31 && (charCode < 48 || charCode > 57)) {
        return false;
    }
    return true;
}
```

But now the question is, how would I create spaces as the user is typing in their number, much like the iPhone's telephone spacing, but with numbers? So for example if they type in 1000 it will become 1 000:

```
1
10
100
1 000
10 000
100 000
1 000 000
10 000 000
100 000 000
1 000 000 000
```

Etc... I've tried to read the input from the right, adding a space after every 3 characters. But my way is inefficient, and when I'm changing the value from 1000 to 1 000 in the textbox, it selects it for some reason, making any key press after that erase the whole thing. If you know how to do this, please refrain from using JavaScript plugins like jQuery. Thank you.
2013/05/19
[ "https://Stackoverflow.com/questions/16637051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1768788/" ]
For integers, use:

```
function numberWithSpaces(x) {
    return x.toString().replace(/\B(?=(\d{3})+(?!\d))/g, " ");
}
```

For floating point numbers, you can use:

```
function numberWithSpaces(x) {
    var parts = x.toString().split(".");
    parts[0] = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, " ");
    return parts.join(".");
}
```

This is simple regex work. Find more about regular expressions here: <https://developer.mozilla.org/en-US/docs/JavaScript/Guide/Regular_Expressions>

If you are not sure whether the number will be an integer or a float, just use the second one...
Easiest way:

1.

```
var num = 1234567890,
    result = num.toLocaleString(); // result will equal "1 234 567 890"
```

2.

```
var num = 1234567.890,
    result = num.toLocaleString() + num.toString().slice(num.toString().indexOf('.')); // will equal 1 234 567.890
```

3.

```
var num = 1234567.890123,
    result = Number(num.toFixed(0)).toLocaleString() + '.' + Number(num.toString().slice(num.toString().indexOf('.') + 1)).toLocaleString(); // will equal 1 234 567.890 123
```

4. If you want ',' instead of ' ':

```
var num = 1234567.890123,
    result = Number(num.toFixed(0)).toLocaleString().split(/\s/).join(',') + '.' + Number(num.toString().slice(num.toString().indexOf('.') + 1)).toLocaleString(); // will equal 1,234,567.890 123
```

If it's not working, set the locale parameter explicitly, e.g. `toLocaleString('ru-RU')`; the `'en-EN'` parameter will split the number with ',' instead of ' '.
16,637,051
I'm trying to make a number input. I've made it so my textbox only accepts numbers via this code:

```
function isNumber(evt) {
    evt = (evt) ? evt : window.event;
    var charCode = (evt.which) ? evt.which : evt.keyCode;
    if (charCode > 31 && (charCode < 48 || charCode > 57)) {
        return false;
    }
    return true;
}
```

But now the question is, how would I create spaces as the user is typing in their number, much like the iPhone's telephone spacing, but with numbers? So for example if they type in 1000 it will become 1 000:

```
1
10
100
1 000
10 000
100 000
1 000 000
10 000 000
100 000 000
1 000 000 000
```

Etc... I've tried to read the input from the right, adding a space after every 3 characters. But my way is inefficient, and when I'm changing the value from 1000 to 1 000 in the textbox, it selects it for some reason, making any key press after that erase the whole thing. If you know how to do this, please refrain from using JavaScript plugins like jQuery. Thank you.
2013/05/19
[ "https://Stackoverflow.com/questions/16637051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1768788/" ]
For integers, use:

```
function numberWithSpaces(x) {
    return x.toString().replace(/\B(?=(\d{3})+(?!\d))/g, " ");
}
```

For floating point numbers, you can use:

```
function numberWithSpaces(x) {
    var parts = x.toString().split(".");
    parts[0] = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, " ");
    return parts.join(".");
}
```

This is simple regex work. Find more about regular expressions here: <https://developer.mozilla.org/en-US/docs/JavaScript/Guide/Regular_Expressions>

If you are not sure whether the number will be an integer or a float, just use the second one...
This function works well inside an input:

```
const formatAndVerifyNumericValue = (value, callback) => {
    const reg = new RegExp('^[0-9]+$');
    let newValue = value.replace(/\s/g, '');

    if (reg.test(newValue)) {
        newValue = newValue.toString().replace(/\B(?<!\.\d*)(?=(\d{3})+(?!\d))/g, ' ');
        return callback(newValue);
    }
};
```
16,637,051
I'm trying to make a number input. I've made it so my textbox only accepts numbers via this code:

```
function isNumber(evt) {
    evt = (evt) ? evt : window.event;
    var charCode = (evt.which) ? evt.which : evt.keyCode;
    if (charCode > 31 && (charCode < 48 || charCode > 57)) {
        return false;
    }
    return true;
}
```

But now the question is, how would I create spaces as the user is typing in their number, much like the iPhone's telephone spacing, but with numbers? So for example if they type in 1000 it will become 1 000:

```
1
10
100
1 000
10 000
100 000
1 000 000
10 000 000
100 000 000
1 000 000 000
```

Etc... I've tried to read the input from the right, adding a space after every 3 characters. But my way is inefficient, and when I'm changing the value from 1000 to 1 000 in the textbox, it selects it for some reason, making any key press after that erase the whole thing. If you know how to do this, please refrain from using JavaScript plugins like jQuery. Thank you.
2013/05/19
[ "https://Stackoverflow.com/questions/16637051", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1768788/" ]
Easiest way: 1 = ``` var num = 1234567890, result = num.toLocaleString(); // result will equal "1 234 567 890" ``` 2 = ``` var num = 1234567.890, result = num.toLocaleString() + num.toString().slice(num.toString().indexOf('.')) // will equal 1 234 567.890 ``` 3 = ``` var num = 1234567.890123, result = Number(num.toFixed(0)).toLocaleString() + '.' + Number(num.toString().slice(num.toString().indexOf('.')+1)).toLocaleString() // will equal 1 234 567.890 123 ``` 4 = If you want ',' instead of ' ': ``` var num = 1234567.890123, result = Number(num.toFixed(0)).toLocaleString().split(/\s/).join(',') + '.' + Number(num.toString().slice(num.toString().indexOf('.')+1)).toLocaleString() // will equal 1,234,567.890 123 ``` If the grouping comes out wrong, pass an explicit locale: `toLocaleString('ru-RU')` groups with ' ', while `'en-EN'` groups with ',' instead of ' '.
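Since default-locale output varies by machine, a safer variant (my own sketch, not from the answer) pins the locale and swaps its group separator for a space:

```javascript
// en-US groups with commas; replacing them yields the space-grouped form.
function spaceGrouped(n) {
  return n.toLocaleString('en-US', { maximumFractionDigits: 6 })
          .replace(/,/g, ' ');
}

console.log(spaceGrouped(1234567.89)); // "1 234 567.89"
```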
This function works well inside an input: ``` const formatAndVerifyNumericValue = (value, callback) => { const reg = new RegExp('^[0-9]+$'); let newValue = value.replace(/\s/g, ''); if (reg.test(newValue)) { newValue = newValue.replace(/\B(?<!\.\d*)(?=(\d{3})+(?!\d))/g, ' '); return callback(newValue); } } ```
72,120,997
Let's say I have a data frame like this: ``` dat<- data.frame(ID= c("A","A","A","A","B","B", "B", "B"), test= rep(c("pre","post"),4), item= c(rep("item1",2), rep("item2",2), rep("item1",2), rep("item2",2)), answer= c("1_2_3_4", "1_2_3_4","2_4_3_1","4_3_2_1", "2_4_3_1","2_4_3_1","4_3_2_1","4_3_2_1")) ``` For each group of `ID` and `item`, I want to determine if the levels of `answer` match. The result data frame would look like this: ``` res<- data.frame(ID= c("A","A","B","B"), item= c("item1","item2","item1","item2"), match=c("TRUE","FALSE", "TRUE", "TRUE")) ```
2022/05/05
[ "https://Stackoverflow.com/questions/72120997", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8061255/" ]
```r dat<- data.frame(ID= c("A","A","A","A","B","B", "B", "B"), test= rep(c("pre","post"),4), item= c(rep("item1",2), rep("item2",2), rep("item1",2), rep("item2",2)), answer= c("1_2_3_4", "1_2_3_4","2_4_3_1","4_3_2_1", "2_4_3_1","2_4_3_1","4_3_2_1","4_3_2_1")) library(data.table) setDT(dat) dat[, .(match = all(answer == answer[1])), by = .(ID, item)] #> ID item match #> <char> <char> <lgcl> #> 1: A item1 TRUE #> 2: A item2 FALSE #> 3: B item1 TRUE #> 4: B item2 TRUE ``` Created on 2022-05-04 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)
In `dplyr`, we can use `group_by` and `summarize` to see if the `answer` column is the same in "pre" and "post" with the same `ID` and `item` columns. ```r library(dplyr) dat<- data.frame(ID= c("A","A","A","A","B","B", "B", "B"), test= rep(c("pre","post"),4), item= c(rep("item1",2), rep("item2",2), rep("item1",2), rep("item2",2)), answer= c("1_2_3_4", "1_2_3_4","2_4_3_1","4_3_2_1", "2_4_3_1","2_4_3_1","4_3_2_1","4_3_2_1")) dat %>% group_by(ID, item) %>% summarize(match = answer[test == "pre"] == answer[test == "post"]) #> # A tibble: 4 × 3 #> # Groups: ID [2] #> ID item match #> <chr> <chr> <lgl> #> 1 A item1 TRUE #> 2 A item2 FALSE #> 3 B item1 TRUE #> 4 B item2 TRUE ``` Created on 2022-05-05 by the [reprex package](https://reprex.tidyverse.org) (v2.0.1)
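For readers outside R, the same group-and-compare idea can be sketched with the Python standard library (plain dicts rather than any data-frame machinery):

```python
from collections import defaultdict

# The same eight rows as the dat data frame: (ID, test, item, answer).
rows = [
    ("A", "pre", "item1", "1_2_3_4"), ("A", "post", "item1", "1_2_3_4"),
    ("A", "pre", "item2", "2_4_3_1"), ("A", "post", "item2", "4_3_2_1"),
    ("B", "pre", "item1", "2_4_3_1"), ("B", "post", "item1", "2_4_3_1"),
    ("B", "pre", "item2", "4_3_2_1"), ("B", "post", "item2", "4_3_2_1"),
]

# Collect the distinct answers seen for each (ID, item) pair ...
answers = defaultdict(set)
for ID, test, item, answer in rows:
    answers[(ID, item)].add(answer)

# ... a pair "matches" when exactly one distinct answer was seen.
match = {key: len(vals) == 1 for key, vals in sorted(answers.items())}
print(match)
```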
49,320,845
<https://colab.research.google.com/notebooks/io.ipynb#scrollTo=KHeruhacFpSU> In this notebook help it explains how to upload a file to drive and then download to Colaboratory but my files are already in drive. Where can I find the file ID ? ``` # Download the file we just uploaded. # # Replace the assignment below with your file ID # to download a different file. # # A file ID looks like: 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz file_id = 'target_file_id' ```
2018/03/16
[ "https://Stackoverflow.com/questions/49320845", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8833943/" ]
If the parameter is definitely unused, `[[maybe_unused]]` is not particularly useful; unnamed parameters and comments work just fine for that. `[[maybe_unused]]` is mostly useful for things that are *potentially* unused, like in ``` void fun(int i, int j) { assert(i < j); // j not used here anymore } ``` This can't be handled with unnamed parameters, but if `NDEBUG` is defined, it will produce a warning because `j` is unused. Similar situations can occur when a parameter is only used for (potentially disabled) logging.
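The answer's scenario can be made concrete; `smaller` is a made-up function for illustration. Without the attribute, compiling with `-DNDEBUG -Wunused-parameter` would warn about `j`, since the `assert` expands to nothing:

```cpp
#include <cassert>

// j is only read inside assert(); with NDEBUG defined it would
// otherwise be flagged as an unused parameter.
int smaller(int i, [[maybe_unused]] int j) {
    assert(i < j);
    return i;  // i is the smaller value by the asserted precondition
}
```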
[Baum mit Augen's answer](https://stackoverflow.com/a/49320892/817643) is the definitive and undisputed explanation. I just want to present another example, which doesn't require macros. Specifically, C++17 introduced the `constexpr if` construct. So you may see template code like this (bar the stupid functionality): ``` #include <type_traits> template<typename T> auto add_or_double(T t1, T t2) noexcept { if constexpr (std::is_same_v<T, int>) return t1 + t2; else return t1 * 2.0; } int main(){ add_or_double(1, 2); add_or_double(1.0, 2.0); } ``` As of writing this, GCC 8.0.1 warns me about `t2` being unused when the else branch is the instantiated one. The attribute is indispensable in a case like this too.
49,320,845
<https://colab.research.google.com/notebooks/io.ipynb#scrollTo=KHeruhacFpSU> In this notebook help it explains how to upload a file to drive and then download to Colaboratory but my files are already in drive. Where can I find the file ID ? ``` # Download the file we just uploaded. # # Replace the assignment below with your file ID # to download a different file. # # A file ID looks like: 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz file_id = 'target_file_id' ```
2018/03/16
[ "https://Stackoverflow.com/questions/49320845", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8833943/" ]
If the parameter is definitely unused, `[[maybe_unused]]` is not particularly useful, unnamed parameters and comments work just fine for that. `[[maybe_unused]]` is mostly useful for things that are *potentially* unused, like in ``` void fun(int i, int j) { assert(i < j); // j not used here anymore } ``` This can't be handled with unnamed parameters, but if `NDEBUG` is defined, will produce a warning because `j` is unused. Similar situations can occur when a parameter is only used for (potentially disabled) logging.
I find `[[maybe_unused]]` useful when you have a set of constants that define a configuration, where some constants may or may not be used depending on that configuration. You are then free to change the configuration without having to define new constants or worry about unused ones. I use this mainly in embedded code where you have to set specific values. Otherwise, enumerations are generally better.
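A small sketch of that use; the constant names are invented for illustration. Only some build configurations read `kVerboseLevel`, so it is marked rather than deleted when configurations change:

```cpp
// Hypothetical build-configuration constants.
constexpr bool kEnableCache = true;
[[maybe_unused]] constexpr int kVerboseLevel = 2;  // read only in logging builds

bool cache_enabled() {
    return kEnableCache;
}
```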
49,320,845
<https://colab.research.google.com/notebooks/io.ipynb#scrollTo=KHeruhacFpSU> In this notebook help it explains how to upload a file to drive and then download to Colaboratory but my files are already in drive. Where can I find the file ID ? ``` # Download the file we just uploaded. # # Replace the assignment below with your file ID # to download a different file. # # A file ID looks like: 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz file_id = 'target_file_id' ```
2018/03/16
[ "https://Stackoverflow.com/questions/49320845", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8833943/" ]
[Baum mit Augen's answer](https://stackoverflow.com/a/49320892/817643) is the definitive and undisputed explanation. I just want to present another example, which doesn't require macros. Specifically, C++17 introduced the `constexpr if` construct. So you may see template code like this (bar the stupid functionality): ``` #include <type_traits> template<typename T> auto add_or_double(T t1, T t2) noexcept { if constexpr (std::is_same_v<T, int>) return t1 + t2; else return t1 * 2.0; } int main(){ add_or_double(1, 2); add_or_double(1.0, 2.0); } ``` As of writing this, GCC 8.0.1 warns me about `t2` being unused when the else branch is the instantiated one. The attribute is indispensable in a case like this too.
I find `[[maybe_unused]]` useful when you have a set of constants that define a configuration, where some constants may or may not be used depending on that configuration. You are then free to change the configuration without having to define new constants or worry about unused ones. I use this mainly in embedded code where you have to set specific values. Otherwise, enumerations are generally better.
33,001,985
I have started using Webpack when developing usual web sites consisting of a number of pages of different page types. I'm used to the RequireJs script loader that loads all dependencies on demand when needed: just a small piece of javascript is downloaded when the page loads. What I want to achieve is this: * A small initial javascript file that loads dependencies asynchronously * Each page type can have its own javascript, which also may have dependencies. * Common modules and vendor scripts should be bundled in common scripts I have tried many configurations to achieve this but with no success. ```js entry: { main: 'main.js', //Used on all pages, e.g. mobile menu 'standard-page': 'pages/standard-page.js', 'start-page': 'pages/start-page.js', 'vendor': ['jquery'] }, alias: { jquery: 'jquery/dist/jquery.js' }, plugins: [ new webpack.optimize.CommonsChunkPlugin("vendor", "vendor.js"), new webpack.optimize.CommonsChunkPlugin('common.js') ] ``` In the html I want to load the javascripts like this: ```html <script src="/Static/js/dist/common.js"></script> <script src="/Static/js/dist/main.js" async></script> ``` And on a specific page type (start page) ```html <script src="/Static/js/dist/start-page.js" async></script> ``` common.js should be a tiny file for fast loading of the page. main.js loads async and does require('jquery') inside. The output from Webpack looks promising, but I can't get the vendors bundle to load asynchronously. Other dependencies (my own modules and domReady) are loaded in their autogenerated chunks, but not jquery. I can find plenty of examples that do almost this, but not the important part of loading vendors asynchronously.
Output from webpack build: ``` Asset Size Chunks Chunk Names main.js.map 570 bytes 0, 7 [emitted] main main.js 399 bytes 0, 7 [emitted] main standard-page.js 355 bytes 2, 7 [emitted] standard-page c6ff6378688eba5a294f.js 348 bytes 3, 7 [emitted] start-page.js 361 bytes 4, 7 [emitted] start-page 8986b3741c0dddb9c762.js 387 bytes 5, 7 [emitted] vendor.js 257 kB 6, 7 [emitted] vendor common.js 3.86 kB 7 [emitted] common.js 2876de041eaa501e23a2.js 1.3 kB 1, 7 [emitted] ```
2015/10/07
[ "https://Stackoverflow.com/questions/33001985", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4886214/" ]
The solution to this problem is two-fold: 1. First you need to understand [how code-splitting works in webpack](https://webpack.github.io/docs/code-splitting.html) 2. Secondly, you need to use something like the `CommonsChunkPlugin` to generate that shared bundle. ### Code Splitting Before you start using webpack you need to unlearn being dependent on configuration. Require.js was all about configuration files. This mindset made it difficult for me to transition into webpack, which is modeled more closely after CommonJS in node.js and relies on no configuration. With that in mind, consider the following. If you have an app and you want it to asynchronously load some other parts of javascript, you need to use one of the following paradigms. **Require.ensure** `require.ensure` is one way that you can create a "split point" in your application. Again, you may have thought you'd need to do this with configuration, but that is not the case. In the example, when I hit `require.ensure` in my file, webpack will automatically create a second bundle and load it on-demand. Any code executed inside of that split point will be bundled together in a separate file. ``` require.ensure(['jquery'], function() { var $ = require('jquery'); /* ... */ }); ``` **Require([])** You can also achieve the same thing with the AMD-version of `require()`, the one that takes an array of dependencies. This will also create the same split point: ``` require(['jquery'], function($) { /* ... */ }); ``` ### Shared Bundles In your example above you use `entry` to create a `vendor` bundle which has jQuery. You don't need to manually specify these dependency bundles. Instead, using the split points above, webpack will generate this automatically. *Use `entry` only for separate `<script>` tags you want in your pages*. 
Now that you've done all of that, you can use the `CommonsChunkPlugin` to additionally optimize your chunks, but again most of the magic is done for you, and outside of specifying which dependencies should be shared you won't need to do anything else. `webpack` will pull in the shared chunks automatically without the need for additional `<script>` tags or `entry` configuration. ### Conclusion The scenario you describe (multiple `<script>` tags) may not actually be what you want. With webpack, all of the dependencies and bundles can be managed automatically starting with only a single `<script>` tag. Having gone through several iterations of re-factoring from require.js to webpack, I've found that's usually the simplest and best way to manage your dependencies. All the best!
Here's the solution I came up with. First, export these two functions to `window.*` -- you'll want them in the browser. ``` import _ from 'lodash'; // the helpers below use lodash's isFunction export function requireAsync(module) { return new Promise((resolve, reject) => require(`bundle!./pages/${module}`)(resolve)); } export function runAsync(moduleName, data={}) { return requireAsync(moduleName).then(module => { if(module.__esModule) { // if it's an es6 module, then the default function should be exported as module.default if(_.isFunction(module.default)) { return module.default(data); } } else if(_.isFunction(module)) { // if it's not an es6 module, then the module itself should be the function return module(data); } }) } ``` Then, when you want to include one of your scripts on a page, just add this to your HTML: ``` <script>requireAsync('script_name.js')</script> ``` Now everything in the `pages/` directory will be pre-compiled into a separate chunk that can be asynchronously loaded at run time, only when needed. Furthermore, using the functions above, you now have a convenient way of passing server-side data into your client-side scripts: ``` <script>runAsync('script_that_needs_data', {my:'data',wow:'much excite'})</script> ``` And now you can access it: ``` // script_that_needs_data.js export default function({my,wow}) { console.log(my,wow); } ```
33,001,985
I have started using Webpack when developing usual web sites consisting of a number of pages of different page types. I'm used to the RequireJs script loader that loads all dependencies on demand when needed: just a small piece of javascript is downloaded when the page loads. What I want to achieve is this: * A small initial javascript file that loads dependencies asynchronously * Each page type can have its own javascript, which also may have dependencies. * Common modules and vendor scripts should be bundled in common scripts I have tried many configurations to achieve this but with no success. ```js entry: { main: 'main.js', //Used on all pages, e.g. mobile menu 'standard-page': 'pages/standard-page.js', 'start-page': 'pages/start-page.js', 'vendor': ['jquery'] }, alias: { jquery: 'jquery/dist/jquery.js' }, plugins: [ new webpack.optimize.CommonsChunkPlugin("vendor", "vendor.js"), new webpack.optimize.CommonsChunkPlugin('common.js') ] ``` In the html I want to load the javascripts like this: ```html <script src="/Static/js/dist/common.js"></script> <script src="/Static/js/dist/main.js" async></script> ``` And on a specific page type (start page) ```html <script src="/Static/js/dist/start-page.js" async></script> ``` common.js should be a tiny file for fast loading of the page. main.js loads async and does require('jquery') inside. The output from Webpack looks promising, but I can't get the vendors bundle to load asynchronously. Other dependencies (my own modules and domReady) are loaded in their autogenerated chunks, but not jquery. I can find plenty of examples that do almost this, but not the important part of loading vendors asynchronously.
Output from webpack build: ``` Asset Size Chunks Chunk Names main.js.map 570 bytes 0, 7 [emitted] main main.js 399 bytes 0, 7 [emitted] main standard-page.js 355 bytes 2, 7 [emitted] standard-page c6ff6378688eba5a294f.js 348 bytes 3, 7 [emitted] start-page.js 361 bytes 4, 7 [emitted] start-page 8986b3741c0dddb9c762.js 387 bytes 5, 7 [emitted] vendor.js 257 kB 6, 7 [emitted] vendor common.js 3.86 kB 7 [emitted] common.js 2876de041eaa501e23a2.js 1.3 kB 1, 7 [emitted] ```
2015/10/07
[ "https://Stackoverflow.com/questions/33001985", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4886214/" ]
The solution to this problem is two-fold: 1. First you need to understand [how code-splitting works in webpack](https://webpack.github.io/docs/code-splitting.html) 2. Secondly, you need to use something like the `CommonsChunkPlugin` to generate that shared bundle. ### Code Splitting Before you start using webpack you need to unlearn being dependent on configuration. Require.js was all about configuration files. This mindset made it difficult for me to transition into webpack, which is modeled more closely after CommonJS in node.js and relies on no configuration. With that in mind, consider the following. If you have an app and you want it to asynchronously load some other parts of javascript, you need to use one of the following paradigms. **Require.ensure** `require.ensure` is one way that you can create a "split point" in your application. Again, you may have thought you'd need to do this with configuration, but that is not the case. In the example, when I hit `require.ensure` in my file, webpack will automatically create a second bundle and load it on-demand. Any code executed inside of that split point will be bundled together in a separate file. ``` require.ensure(['jquery'], function() { var $ = require('jquery'); /* ... */ }); ``` **Require([])** You can also achieve the same thing with the AMD-version of `require()`, the one that takes an array of dependencies. This will also create the same split point: ``` require(['jquery'], function($) { /* ... */ }); ``` ### Shared Bundles In your example above you use `entry` to create a `vendor` bundle which has jQuery. You don't need to manually specify these dependency bundles. Instead, using the split points above, webpack will generate this automatically. *Use `entry` only for separate `<script>` tags you want in your pages*. 
Now that you've done all of that, you can use the `CommonsChunkPlugin` to additionally optimize your chunks, but again most of the magic is done for you, and outside of specifying which dependencies should be shared you won't need to do anything else. `webpack` will pull in the shared chunks automatically without the need for additional `<script>` tags or `entry` configuration. ### Conclusion The scenario you describe (multiple `<script>` tags) may not actually be what you want. With webpack, all of the dependencies and bundles can be managed automatically starting with only a single `<script>` tag. Having gone through several iterations of re-factoring from require.js to webpack, I've found that's usually the simplest and best way to manage your dependencies. All the best!
I've recently travelled this same road. I'm working on optimizing my Webpack output since I think the bundles are too big, HTTP2 can load js files in parallel, caching will be better with separate files, I was getting some dependencies duplicated in bundles, etc. While I got a solution working with Webpack 4's SplitChunksPlugin configuration, I'm currently moving towards using mostly Webpack's dynamic import() syntax, since that syntax alone will cause Webpack to automatically bundle each dynamically imported module in its own file, which I can name via a "magic comment": ``` import(/* webpackChunkName: "mymodule" */ "mymodule"); // I added a resolve.alias.mymodule entry in Webpack.config ```
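A minimal runnable illustration of the `import()` mechanics, here loading Node's built-in `path` module in place of a real page module (the function name is my own):

```javascript
// Under webpack, each import() call site becomes its own chunk;
// the call returns a promise for the module's exports.
async function loadHelper() {
  const mod = await import('path');
  return mod.basename('/Static/js/dist/start-page.js');
}

loadHelper().then(name => console.log(name)); // "start-page.js"
```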
33,001,985
I have started using Webpack when developing usual web sites consisting of a number of pages of different page types. I'm used to the RequireJs script loader that loads all dependencies on demand when needed: just a small piece of javascript is downloaded when the page loads. What I want to achieve is this: * A small initial javascript file that loads dependencies asynchronously * Each page type can have its own javascript, which also may have dependencies. * Common modules and vendor scripts should be bundled in common scripts I have tried many configurations to achieve this but with no success. ```js entry: { main: 'main.js', //Used on all pages, e.g. mobile menu 'standard-page': 'pages/standard-page.js', 'start-page': 'pages/start-page.js', 'vendor': ['jquery'] }, alias: { jquery: 'jquery/dist/jquery.js' }, plugins: [ new webpack.optimize.CommonsChunkPlugin("vendor", "vendor.js"), new webpack.optimize.CommonsChunkPlugin('common.js') ] ``` In the html I want to load the javascripts like this: ```html <script src="/Static/js/dist/common.js"></script> <script src="/Static/js/dist/main.js" async></script> ``` And on a specific page type (start page) ```html <script src="/Static/js/dist/start-page.js" async></script> ``` common.js should be a tiny file for fast loading of the page. main.js loads async and does require('jquery') inside. The output from Webpack looks promising, but I can't get the vendors bundle to load asynchronously. Other dependencies (my own modules and domReady) are loaded in their autogenerated chunks, but not jquery. I can find plenty of examples that do almost this, but not the important part of loading vendors asynchronously.
Output from webpack build: ``` Asset Size Chunks Chunk Names main.js.map 570 bytes 0, 7 [emitted] main main.js 399 bytes 0, 7 [emitted] main standard-page.js 355 bytes 2, 7 [emitted] standard-page c6ff6378688eba5a294f.js 348 bytes 3, 7 [emitted] start-page.js 361 bytes 4, 7 [emitted] start-page 8986b3741c0dddb9c762.js 387 bytes 5, 7 [emitted] vendor.js 257 kB 6, 7 [emitted] vendor common.js 3.86 kB 7 [emitted] common.js 2876de041eaa501e23a2.js 1.3 kB 1, 7 [emitted] ```
2015/10/07
[ "https://Stackoverflow.com/questions/33001985", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4886214/" ]
The solution to this problem is two-fold: 1. First you need to understand [how code-splitting works in webpack](https://webpack.github.io/docs/code-splitting.html) 2. Secondly, you need to use something like the `CommonsChunkPlugin` to generate that shared bundle. ### Code Splitting Before you start using webpack you need to unlearn being dependent on configuration. Require.js was all about configuration files. This mindset made it difficult for me to transition into webpack, which is modeled more closely after CommonJS in node.js and relies on no configuration. With that in mind, consider the following. If you have an app and you want it to asynchronously load some other parts of javascript, you need to use one of the following paradigms. **Require.ensure** `require.ensure` is one way that you can create a "split point" in your application. Again, you may have thought you'd need to do this with configuration, but that is not the case. In the example, when I hit `require.ensure` in my file, webpack will automatically create a second bundle and load it on-demand. Any code executed inside of that split point will be bundled together in a separate file. ``` require.ensure(['jquery'], function() { var $ = require('jquery'); /* ... */ }); ``` **Require([])** You can also achieve the same thing with the AMD-version of `require()`, the one that takes an array of dependencies. This will also create the same split point: ``` require(['jquery'], function($) { /* ... */ }); ``` ### Shared Bundles In your example above you use `entry` to create a `vendor` bundle which has jQuery. You don't need to manually specify these dependency bundles. Instead, using the split points above, webpack will generate this automatically. *Use `entry` only for separate `<script>` tags you want in your pages*. 
Now that you've done all of that, you can use the `CommonsChunkPlugin` to additionally optimize your chunks, but again most of the magic is done for you, and outside of specifying which dependencies should be shared you won't need to do anything else. `webpack` will pull in the shared chunks automatically without the need for additional `<script>` tags or `entry` configuration. ### Conclusion The scenario you describe (multiple `<script>` tags) may not actually be what you want. With webpack, all of the dependencies and bundles can be managed automatically starting with only a single `<script>` tag. Having gone through several iterations of re-factoring from require.js to webpack, I've found that's usually the simplest and best way to manage your dependencies. All the best!
Some time ago I made a small proof of concept to check how lazy `import()` would work in IE11. I have to admit it works :) After clicking the button, the code responsible for changing the background color of the page is loaded - [full example](https://github.com/tomik23/importlazy) Js: ``` // polyfills for IE11 import 'core-js/modules/es.array.iterator'; const button = document.getElementById('background'); button.addEventListener('click', async (event) => { event.preventDefault(); try { const background = await import(/* webpackChunkName: "background" */ `./${button.dataset.module}.js`); background.default(); } catch (error) { console.log(error); } }) ``` Html: ``` <button id="background" class="button-primary" data-module="background">change the background</button> ```
510,152
Quite often, the script I want to execute is not located in my current working directory and I don't really want to leave it. Is it a good practice to run scripts (BASH, Perl etc.) from another directory? Will they usually find all the stuff they need to run properly? If so, what is the best way to run a "distant" script? Is it ``` . /path/to/script ``` or ``` sh /path/to/script ``` and how to use `sudo` in such cases? This, for example, doesn't work: ``` sudo . /path/to/script ```
2012/11/24
[ "https://superuser.com/questions/510152", "https://superuser.com", "https://superuser.com/users/105968/" ]
sh /path/to/script will spawn a new shell and run the script independently of your current shell. The `source` (.) command will call all the commands in the script in the current shell. If the script happens to call `exit`, for example, then you'll lose the current shell. Because of this it is usually safer to call scripts in a separate shell with sh, or to execute them as binaries using either the full (starting with `/`) or relative path (`./`). If called as binaries, they will be executed with the specified interpreter (`#!/bin/bash`, for example). As for knowing whether or not a script will find the files it needs, there is no good answer other than looking at the script to see what it does. As an option, you could always go to the script's folder in a sub-process without leaving your current folder: ``` (cd /wherever/ ; sh script.sh) ```
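The subshell trick at the end is easy to verify; this sketch uses a throwaway script under `/tmp` (an assumed scratch location) and confirms the caller's working directory never changes:

```shell
#!/bin/sh
# Create a trivial script somewhere else, run it via a subshell,
# and check that the caller's working directory is untouched.
mkdir -p /tmp/demo_scripts
printf '#!/bin/sh\necho ran-ok\n' > /tmp/demo_scripts/where.sh

start_dir=$(pwd)
result=$( cd /tmp/demo_scripts && sh where.sh )
end_dir=$(pwd)

echo "$result"      # ran-ok
echo "$start_dir"   # same as $end_dir: the cd happened in the subshell only
```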
I'm not sure it works like this in Linux; assuming it doesn't, since no one has suggested it. But instead of using `../` to go back directories, can you use quotes to give it an absolute path? Maybe it doesn't give you access to the whole drive to even be able to do that, come to think of it.
510,152
Quite often, the script I want to execute is not located in my current working directory and I don't really want to leave it. Is it a good practice to run scripts (BASH, Perl etc.) from another directory? Will they usually find all the stuff they need to run properly? If so, what is the best way to run a "distant" script? Is it ``` . /path/to/script ``` or ``` sh /path/to/script ``` and how to use `sudo` in such cases? This, for example, doesn't work: ``` sudo . /path/to/script ```
2012/11/24
[ "https://superuser.com/questions/510152", "https://superuser.com", "https://superuser.com/users/105968/" ]
Not sure why no one has suggested this one, but it's super easy! I've Googled a few times and couldn't find this exact answer I'm giving, so I thought I'd share. IMO this is the best solution, and also the easiest one, for me anyway; however, others may feel and do things differently. ``` # Place this somewhere in your .bashrc/.bash_profile/etc and edit as you see fit YOURCOMMAND () { cd /path/to/directory/containing/your/script/ && ./YOURSCRIPT } ``` First the `cd` command, telling it the directory of the script's location. Then `&&`, so the next command runs only after the first succeeds. Finally, run your script just as you would execute it within the terminal! Save it in your BASH file and it takes 5 seconds to set up. Hope this helps someone.
Ancient question, but a timeless one. The solution I've consistently seen is to have a `$HOME/bin` directory and put it first in `$PATH` (via `~/.bashrc` if it isn't already there; on some systems `~/bin` is first in `$PATH` by default). Dropping scripts in there for execution, or symlinks to scripts/executables elsewhere, is the simple way to deal with path issues that shouldn't affect the system or other users. If a script requires additional resources that can be found *relative to its own location* (not uncommon), then the envvar `$BASH_SOURCE` is used. `$BASH_SOURCE` contains the path to the currently running script as it was invoked (absolute here, since the script is found via `$PATH`), regardless of the value of `$PWD`. Consider the following: ``` ceverett@burrito:~$ echo $PATH /home/ceverett/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games ``` So we can see that `$HOME/bin` is first in `$PATH`, so whatever I put in `~/bin` will run. I have a demonstration script called `~/bin/findme`: ``` #!/bin/bash echo "Running from $BASH_SOURCE" ``` This can be used to get the absolute path to the running script's location. ``` ceverett@burrito:~$ findme Running from /home/ceverett/bin/findme ceverett@burrito:~$ cd foo ceverett@burrito:~/foo$ findme Running from /home/ceverett/bin/findme ceverett@burrito:~/foo$ cd / ceverett@burrito:/$ findme Running from /home/ceverett/bin/findme ```
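A non-interactive version of that demonstration, using `/tmp` instead of `~/bin` (an assumption for the sketch): `$BASH_SOURCE` reports the script's own invocation path regardless of the caller's directory.

```shell
#!/bin/sh
# Write the answer's findme script to a throwaway location and run it
# from a different working directory.
printf '#!/bin/bash\necho "Running from $BASH_SOURCE"\n' > /tmp/findme_demo
chmod +x /tmp/findme_demo

out=$( cd / && /tmp/findme_demo )
echo "$out"   # Running from /tmp/findme_demo
```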
510,152
Quite often, the script I want to execute is not located in my current working directory and I don't really want to leave it. Is it a good practice to run scripts (BASH, Perl etc.) from another directory? Will they usually find all the stuff they need to run properly? If so, what is the best way to run a "distant" script? Is it ``` . /path/to/script ``` or ``` sh /path/to/script ``` and how to use `sudo` in such cases? This, for example, doesn't work: ``` sudo . /path/to/script ```
2012/11/24
[ "https://superuser.com/questions/510152", "https://superuser.com", "https://superuser.com/users/105968/" ]
You can definitely do that (with the adjustments the others mentioned, like `sudo sh /pathto/script.sh` or `./script.sh`). However, I do one of a few things to run them system wide, to not worry about dirs and save myself useless extra typing. 1) Symlink to `/usr/bin` ``` ln -s /home/username/Scripts/name.sh /usr/bin/name ``` (be sure there is no overlapping name there, because you would obviously override it.) This also lets me keep them in my development folders so I can adjust as necessary. 2) Add the Scripts dir to your path (using .bash\_profile - or whatever .profile you have on your shell) `PATH=/path/to/scripts/:$PATH` 3) Create aliases in the `.bash_profile` In `~/.bash_profile` add something like: ``` alias l="ls -l" ``` As you can tell, the syntax is just `alias`, the name you want to act as a command, and the command. So typing "l" anywhere in the terminal would result in `ls -l`. If you want sudo, just `alias sl="sudo ls -l"` to note to yourself l vs sl (as a useless example). Either way, you can just type `sudo nameofscript` and be on your way. No need to mess with ./ or . or sh, etc. Just mark them as executable first :D
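Option 2 (a scripts directory on `$PATH`) in a runnable sketch; `/tmp/demo_bin` stands in for your real scripts folder:

```shell
#!/bin/sh
# Drop an executable into a directory, put that directory on PATH,
# and the script becomes callable by bare name from anywhere.
mkdir -p /tmp/demo_bin
printf '#!/bin/sh\necho hello-from-path\n' > /tmp/demo_bin/hello
chmod +x /tmp/demo_bin/hello

PATH="/tmp/demo_bin:$PATH"
out=$(hello)
echo "$out"   # hello-from-path
```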
Say you want to run `./script` and its path is `/home/test/stuff/`, but the path you're currently in is `/home/test/public_html/a/`. Then you would need to do `../../stuff/script`, which goes back two folders, then into the `stuff` folder there, and runs the script.
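If counting the `..` components by hand gets tedious, GNU coreutils' `realpath` can compute the relative path for you; a small sketch using the example paths above (`-m` lets it work even when the paths don't exist on your machine):

```shell
# From /home/test/public_html/a/, what is the relative path to the script?
realpath -m --relative-to="/home/test/public_html/a" /home/test/stuff/script
# prints: ../../stuff/script
```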
It's better to modify your script to use its absolute path before addressing any relative files. For scripts this can be done conveniently and flexibly by keeping the [`dirname`](https://linux.die.net/man/1/dirname) of the script in a variable beforehand. For example, change:

```
cat needed_file
```

to:

```
SCRIPT_PATH=$(dirname "$0")
cat "${SCRIPT_PATH}/needed_file"
```
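To sketch how that pattern behaves (the file name is hypothetical, echoed here rather than read): because the path is anchored to `$0` instead of `$PWD`, the same expression points at the script's own directory no matter where the script is invoked from.

```shell
#!/bin/bash
# Anchor bundled resources to the script's location, not the caller's CWD.
SCRIPT_PATH=$(dirname "$0")
echo "bundled file would be: ${SCRIPT_PATH}/needed_file"
# cat "${SCRIPT_PATH}/needed_file"   # the actual read from the answer above
```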
If you have scripts lying around that you need to run often, and they depend on their location for finding resources, you can easily handle this by combining commands in an alias, like this:

```
alias run-script="cd /home/user/path/to/script/ && bash script.sh"
```

This way you don't have to alter anything else to make it work.
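One caveat worth hedging: an alias runs in your current shell, so the `cd` above leaves you in the script's directory afterwards. Wrapping the commands in a subshell (same hypothetical paths as above) keeps the caller's working directory unchanged:

```shell
# The parentheses run cd + script in a subshell, so your interactive shell
# stays in whatever directory you called the alias from.
alias run-script='(cd /home/user/path/to/script/ && bash script.sh)'
```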