2018/03/20 <issue_start>username_0: I'm trying to convert my audio data which is 32 bit signed int, into 24 bit signed.
The goal is to write a Wave file. I already managed to write correct Wave files with 8, 16 and 32 bit depths, but for some reason I struggle with 24 bit. The sound output sounds like 8 bits (lots of noise), which seems to mean only the MSB is correctly interpreted; the second and third bytes of the 24-bit value seem not to be used.
The code:
```
int buffer_32[1024];
unsigned char buffer_24[1024*3];

for (unsigned i = 0; i < 1024; i++)
{
    buffer_24[i*3]   = (unsigned char)((buffer_32[i] & 0xff0000) >> 16);
    buffer_24[i*3+1] = (unsigned char)((buffer_32[i] & 0xff00) >> 8);
    buffer_24[i*3+2] = (unsigned char)(buffer_32[i] & 0xff);
}
```
Basically, I manipulate bytes to create a 24-bit stream ('24bit packed').
I don't know what's wrong in my conversion algorithm: can you help me?
EDIT: I forgot to say, my int values are already correctly scaled into the -8388608..8388607 (24-bit signed integer) range before going into this conversion loop.<issue_comment>username_1: The reason it sounds like noise is that you're stripping the MSB from the data.
Try this instead:
```
buffer_24[i*3] = (unsigned char)((buffer_32[i] & 0xff000000) >> 24);
buffer_24[i*3+1] = (unsigned char)((buffer_32[i] & 0xff0000) >> 16);
buffer_24[i*3+2] = (unsigned char)((buffer_32[i] & 0xff00) >> 8);
```
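Note that canonical WAV data is little-endian. If the samples really are already scaled into the 24-bit range, as the question's edit says, then a hedged alternative sketch is to keep the low three bytes from the original code but emit them least-significant byte first:

```
/* Little-endian 24-bit packing; assumes buffer_32[i] is already in the
   range -8388608..8388607, so the low three bytes hold the whole sample. */
buffer_24[i*3]   = (unsigned char)( buffer_32[i]        & 0xff); /* LSB */
buffer_24[i*3+1] = (unsigned char)((buffer_32[i] >> 8)  & 0xff);
buffer_24[i*3+2] = (unsigned char)((buffer_32[i] >> 16) & 0xff); /* MSB */
```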
Upvotes: 2 <issue_comment>username_2: It is over 30 years since I did this kind of stuff, so bear with me:
1. If your audio is multiplexed stereo then I doubt what you are doing works.
2. If the signed bit is for left and right then you should work (AND) the sign out of it, work with the result, and then re-attach (OR) the sign back.
3. If you are just trying to remove the top byte then you should AND out the top byte with 0x00ffffff (four bytes).
If it is a pure analogue sample, I would reduce the resolution (software resample), depending on what type of quality I want, reduce the volume, etc.
Trust this helps.
Upvotes: 0
2018/03/20 <issue_start>username_0: I've an async function that returns a string asynchronously. I'm calling that function within a test method and it throws a computation-expression error; what would be the possible fix?
Code
```
let requestDataAsync (param: string) : Async<string> =
    async {
        Console.WriteLine param
        return "my result"
    }
```
Test Code
```
[]
member this.TestRequestDataAsync() =
    let! result = requestDataAsync("some parameter")
    Console.WriteLine(result)
    Assert.IsTrue(true)
```
Error for this line `let! result = requestDataAsync("some parameter")`
> This construct can only be used within computation expressions
Question, How to wait and display the result of the async function?
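The `let!` keyword is only legal inside a computation expression. A hedged sketch of one common fix, assuming an NUnit-style test attribute: wrap the test body in `async { ... }` and block on it with `Async.RunSynchronously` so the test waits for the result:

```
member this.TestRequestDataAsync() =
    async {
        let! result = requestDataAsync "some parameter"
        Console.WriteLine(result)
        Assert.IsTrue(true)
    }
    |> Async.RunSynchronously
```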
2018/03/20 <issue_start>username_0: I have a question on how to add onto this code so that it does not copy duplicate rows. My columns A, C, and D combined together would make a unique identifier, but I'd rather not have to add that "helper" column to my spreadsheet if possible.
```
Option Explicit
Public Sub CopyRows()
Dim ws1 As Worksheet, ws2 As Worksheet, ws1r As Range, ws2r As Range
Dim ws1lr As Long, ws1lc As Long, ws2lr As Long, i As Long
Set ws1 = ThisWorkbook.Worksheets("2")
Set ws2 = ThisWorkbook.Worksheets("Core_Cutter_List")
ws1lr = ws1.Range("A" & Rows.Count).End(xlUp).Row 'last row in "2"
ws1lc = ws1.Cells(1, Columns.Count).End(xlToLeft).Column 'last col in "2"
ws2lr = ws2.Range("A" & Rows.Count).End(xlUp).Row + 1 'last row in "Core_Cutter"
For i = 1 To ws1lr
If Len(ws1.Cells(i, "A")) > 0 And Len(ws1.Cells(i, "G")) = 0 Then
Set ws1r = ws1.Range(ws1.Cells(i, 1), ws1.Cells(i, ws1lc))
Set ws2r = ws2.Range(ws2.Cells(ws2lr, 1), ws2.Cells(ws2lr, ws1lc))
ws2r.Value2 = ws1r.Value2
ws2lr = ws2lr + 1
End If
Next i
End Sub
```
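A hedged sketch of one way to do it (assuming columns A, C and D on sheet "2" form the key): track the keys already copied in a `Scripting.Dictionary`, so no helper column is needed:

```
Dim seen As Object, key As String
Set seen = CreateObject("Scripting.Dictionary")
For i = 1 To ws1lr
    If Len(ws1.Cells(i, "A")) > 0 And Len(ws1.Cells(i, "G")) = 0 Then
        key = ws1.Cells(i, "A").Value & "|" & ws1.Cells(i, "C").Value & "|" & ws1.Cells(i, "D").Value
        If Not seen.Exists(key) Then
            seen.Add key, True
            Set ws1r = ws1.Range(ws1.Cells(i, 1), ws1.Cells(i, ws1lc))
            Set ws2r = ws2.Range(ws2.Cells(ws2lr, 1), ws2.Cells(ws2lr, ws1lc))
            ws2r.Value2 = ws1r.Value2
            ws2lr = ws2lr + 1
        End If
    End If
Next i
```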
2018/03/20 <issue_start>username_0: That is the premise of an exercise I'm doing, but I don't get what the kth element of an array is.
For example, the exercise asks me the following input/output:
input: [7, 2, 1, 6, 1] and k = 3
output: 6
I just don't get what's the relation between K and the array.<issue_comment>username_1: K is just a number, in this example: the `K'th smallest element`. So if K is 2 you're looking for the `2'nd smallest element`
Upvotes: 0 <issue_comment>username_2: k=3 means that you want the 3rd smallest element in the array. I don't understand the result, however:
1st smallest: 1
2nd smallest: 1 (the *other* 1)
3rd smallest: 2
4th smallest: 6
When we're talking about array elements in their original order, however, the `k`th element is the one in position `k`. For instance, if you have
```
arr = [7, 2, 1, 6, 1]
```
Then the 3rd element is `1` (the first one). However, if your array indexing begins at 0, then `arr[k]` is `arr[3]`, or `6`.
Does that bludgeon the question to death?
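A short sketch contrasting the two readings, which also explains the expected output of 6 (it is the 3rd smallest *distinct* value):

```
int[] arr = {7, 2, 1, 6, 1};
int k = 3;
int[] withDups = arr.clone();
java.util.Arrays.sort(withDups);                  // [1, 1, 2, 6, 7]
System.out.println(withDups[k - 1]);              // counting duplicates -> 2
int[] distinct = java.util.stream.IntStream.of(arr)
        .distinct().sorted().toArray();           // [1, 2, 6, 7]
System.out.println(distinct[k - 1]);              // distinct values -> 6
```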
Upvotes: 0 <issue_comment>username_3: I think this question is asking you to return the third smallest number in an array without counting duplicates.
For example: `[4,5,2,7,8], 3` should return `5`, whereas in your case it should return `6`
```
import java.util.Comparator;
import java.util.PriorityQueue;
public class FindKthSmallest {
public static void main(String[] args) {
int[] arr = new int[]{7,2,1,6,1};
FindKthSmallest f = new FindKthSmallest();
System.out.println(f.findKthSmall(arr, 3));
}
public int findKthSmall(int[] nums, int k) {
if (nums == null || nums.length == 0 || k == 0) {
return -1;
}
PriorityQueue<Integer> pq = new PriorityQueue<>(new Comparator<Integer>() {
    @Override
    public int compare(Integer i, Integer j) {
        return j - i;
    }
});
for (int i : nums) {
if (pq.contains(i)) {
continue;
}
pq.offer(i);
if (pq.size() > k) {
pq.poll();
}
}
return pq.peek();
}
}
```
Upvotes: 2 <issue_comment>username_4: If you're confident, that the number of distinct elements is big enough:
```
scala> val arr = Array (7,2,1,6,1)
arr: Array[Int] = Array(7, 2, 1, 6, 1)
scala> arr.distinct.sorted.take(3).last
res85: Int = 6
```
However,
```
scala> arr.distinct.sorted.take(103).last
```
will return the last element, which is 7 but not the correct answer, so you need some check, to return an Option[Int]. Or use a Try (scala.util.Try)
```
Try {arr.distinct.sorted.toList (5)}
res92: scala.util.Try[Int] = Failure(java.lang.IndexOutOfBoundsException: 5)
scala> Try {arr.distinct.sorted.toList (2)}
res94: scala.util.Try[Int] = Success(6)
```
Since indexing starts at 0, we have to look for index 2, to get the 3rd element.
Upvotes: 0
2018/03/20 <issue_start>username_0: I am trying to replace a selected text with a single word from that selected text using regex. I tried re.sub(), but it seems that it takes the second argument (the word that I want to replace the text with) as a plain string, not as a regex.
Here is my string:
```
I go to Bridgebrook i go out <ERR targ=sometimes> some times </ERR> on Tuesday night i go to Youth <ERR targ=club> clob </ERR> .
```
And here is my code:
```
# The regex of the selected text of the form <ERR targ=...> ... </ERR>
select_text_regex = r"<ERR.*?</ERR>"
# The regex of the correct word that will replace the selected text of the form <ERR targ=...>
correct_word_regex = r"targ=([^>]+)>"
line = re.sub(select_text_regex, correct_word_regex, line.rstrip())
```
I get:
```
I go to Bridgebrook i go out targ=([^>]+)> on Tuesday night i go to
Youth targ=([^>]+)> .
```
My goal is:
```
I go to Bridgebrook i go out sometimes on Tuesday night i go to
Youth club .
```
Does Python support replacing two strings using Regex?<issue_comment>username_1: You would need to match the target word in the pattern, as a capturing group - you can't start an entirely new search in the replacement string!
Not tested, but this should do the job:
Replace `r"<ERR targ=(.*?)>.*?</ERR>"`
With `r"\1"`
Upvotes: 0 <issue_comment>username_2: What you're looking for is regex capture groups. Instead of selecting the regex and then trying to replace it with another regex, put the part of your regex you want to match inside parenthesis in your select statement, then get it back in the replacement with \1. (the number being the group you included)
```
line = "I go to Bridgebrook i go out <ERR targ=sometimes> some times </ERR> on Tuesday night i go to Youth <ERR targ=club> clob </ERR> ."
select_text_regex = r"<ERR targ=(.+?)>[^<]+</ERR>" #Correct Here.
correct_word_regex = r"\1" #And here.
line = re.sub(select_text_regex, correct_word_regex, line.rstrip())
print(line)
```
Upvotes: 0 <issue_comment>username_3: Here's another solution (I also rewrote the regex using "non-greedy" modifiers by putting `?` after `*` because I find it more readable).
The group referenced by `r"\1"` is done with parenthises as an unnamed group. Also used `re.compile` as a style preference to reduce the number of args:
```
line = "I go to Bridgebrook i go out <ERR targ=sometimes> some times </ERR> on Tuesday night i go to Youth <ERR targ=club> clob </ERR> ."
select_text_regex = re.compile(r"<ERR targ=(.*?)>.*?</ERR>")
select_text_regex.sub(r"\1", line)
```
Named group alternative:
```
line = "I go to Bridgebrook i go out <ERR targ=sometimes> some times </ERR> on Tuesday night i go to Youth <ERR targ=club> clob </ERR> ."
select_text_regex = re.compile(r"<ERR targ=(?P<word>.*?)>.*?</ERR>")
select_text_regex.sub(r"\g<word>", line)
```
You can find some docs on group referencing here:
<https://docs.python.org/3/library/re.html#regular-expression-syntax>
Upvotes: 2 [selected_answer]
2018/03/20 <issue_start>username_0: I'm having an issue trying to query my database; my script with cassandra-driver was this:
```
const query = 'CREATE TABLE IF NOT EXISTS test.RestaurantMenuItems ' +
'(id UUID, restaurantId varchar, menuName text, menuCategoryNames text, menuItemName text, menuItemDescription text, menuItemPrice decimal, PRIMARY KEY (id))';
return client.execute(query);
```
I have no idea how I could query with the spaces involved.
<https://i.stack.imgur.com/0HU9b.png>
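One hedged guess at the cause (the screenshot isn't readable here): CQL folds unquoted identifiers to lower case, so a mixed-case column such as `restaurantId` is actually stored as `restaurantid`. To preserve case, or to embed spaces, the identifier must be double-quoted, consistently at both CREATE and SELECT time:

```
// Quoted identifiers keep their exact case (or spaces); unquoted ones are
// lower-cased by CQL.
const query = 'CREATE TABLE IF NOT EXISTS test.RestaurantMenuItems ' +
  '(id UUID, "restaurantId" varchar, "menuName" text, PRIMARY KEY (id))';
return client.execute(query);
```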
2018/03/20 <issue_start>username_0: Yes, I know there are heaps of posts about converting objects to JSON, but my question is more specific...
Say Im calling some data from an api and the response is an object that looks like this
```
{
date: ...,
value: ...,
useless-info: ...,
useless-info: ...
}
```
now I know I can do this `JSON.stringify(returnedobject);`
so I get the newly formed json..
```
{
"date": ...,
"value": ...,
"useless-info": ...,
"useless-info": ...
}
```
now all I want in my newly formed json to be the `"date"` and `"value"` and remove the `useless-info` is this even possible?
any help would be appreciated!<issue_comment>username_1: You can either create a new object with the data you want, or delete the fields you don't need:
```
const someReturn = {
date: ...,
value: ...,
badstuff: ...
}
const goodObj = {
date: someReturn.date,
value: someReturn.value
}
```
Or to delete fields you can just call `delete someReturn.badstuff`
Upvotes: 1 [selected_answer]<issue_comment>username_2: ```
const oldJson = {
"date": ...,
"value": ...,
"useless-info": ...,
"useless-info": ...
}
const newJson = {
"date" : oldJson.date,
"value": oldJson.value
}
```
Upvotes: 1 <issue_comment>username_3: **Working Demo**
```js
var jsonObj = {
"date": "",
"value": "",
"useless-info": "",
"useless-info": ""
};
delete jsonObj["useless-info"];
var jsonString = JSON.stringify(jsonObj);
console.log(jsonString);
```
Upvotes: 2 <issue_comment>username_4: [`JSON.stringify()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify) has a `replacer` `param` that can be used to limit `output` to a `whitelisted` `array` of `keys` you want to keep.
```js
// Input.
const input = {
date: new Date(),
value: 8905934,
useless: 'useless',
extra: 'extra'
}
// Output.
const output = JSON.stringify(input, ['date', 'value'])
// Proof.
console.log(output)
```
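A related sketch using destructuring with a rest element (ES2018+), which builds the trimmed object without mutating the input:

```
const { useless, extra, ...kept } = input
console.log(JSON.stringify(kept)) // only date and value remain
```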
Upvotes: 2
2018/03/20 <issue_start>username_0: In App Purchases work in the Sandbox environment but don't work during Apple review, which results in the app being rejected:
[](https://i.stack.imgur.com/wGLT6.jpg)
I can't find the problem as In App purchases work well on the test device with Sandbox accounts. I did send the IAPs for review as well and I don't do any kind of receipt validation.
Any ideas where I can find the problem roots?<issue_comment>username_1: Add some sort of logging that you can see remotely. I suspect they're hitting one of the SKErrorDomain errors because of the weird environment they operate in (app store signed builds, but sandbox accounts).
Do you refresh the receipt in your purchase flow? That's a step where they typically encounter an error.
Upvotes: 0 <issue_comment>username_2: Always verify your receipt first with the production URL; proceed to verify with the sandbox URL if you receive a 21007 status code. Following this approach ensures that you do not have to switch between URLs while your application is being tested or reviewed in the sandbox or is live in the App Store.
The 21007 status code indicates that this receipt is a sandbox receipt, but it was sent to the production service for verification. A status of 0 indicates that the receipt was properly verified.
Look here: <https://developer.apple.com/library/content/technotes/tn2413/_index.html#//apple_ref/doc/uid/DTS40016228-CH1-RECEIPTURL>
<https://developer.apple.com/library/content/releasenotes/General/ValidateAppStoreReceipt/Chapters/ValidateRemotely.html>
**How do I verify my receipt (iOS)?**
Always verify your receipt first with the production URL; proceed to verify with the sandbox URL if you receive a 21007 status code. Following this approach ensures that you do not have to switch between URLs while your application is being tested or reviewed in the sandbox or is live in the App Store.
<https://developer.apple.com/library/content/technotes/tn2259/_index.html>
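A hedged sketch of that flow in Swift; `postReceipt` is a placeholder for whatever HTTP call you use, while the two URLs are Apple's documented verification endpoints:

```
func verifyReceipt(_ receipt: Data, completion: @escaping (Int) -> Void) {
    // Try production first; fall back to the sandbox verifier only on 21007.
    postReceipt(receipt, to: "https://buy.itunes.apple.com/verifyReceipt") { status in
        if status == 21007 {
            postReceipt(receipt, to: "https://sandbox.itunes.apple.com/verifyReceipt", completion: completion)
        } else {
            completion(status)
        }
    }
}
```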
Upvotes: 1
2018/03/20 <issue_start>username_0: I'm trying to pass a defined variable to an element selector. I tried a few options, but it didn't work. Can anyone help me?
```
def test_04(self):
    driver = self.driver
    spreadsheet = pd.read_excel('SCC_ProdEdit_Page_Top80_Usage_ControlIds.xlsx',
                                sheetname='Prod_Edit_Page')
    usernameField = spreadsheet['ControlID'][0]
    username = spreadsheet['ControlID'][1]
    passwordfield = spreadsheet['ControlID'][2]
    password = spreadsheet['ControlID'][3]
    login = spreadsheet['ControlID'][4]
    print(usernameField)
    print(username)
    print(passwordfield)
    print(password)
    print(login)
    self.driver.get("https://stagenext-scc3.foodchainid.com/Login")
    driver.maximize_window()
    driver.find_element_by_id(%s username?? ).send_keys(username ??)
    driver.find_element_by_id(%s username?? ).send_keys(username ??)
```
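For what it's worth, a hedged sketch of the likely fix: the values read from the spreadsheet are already plain strings, so they can be passed straight to Selenium, and no `%s` formatting is needed:

```
driver.find_element_by_id(usernameField).send_keys(username)
driver.find_element_by_id(passwordfield).send_keys(password)
driver.find_element_by_id(login).click()
```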
2018/03/20 <issue_start>username_0: I have a dictionary with lists as values:
```
my_dict = {1: [2,3], 2: [4, 5], 3: [6, 7]}
```
and I want to update the dictionary so that the values become the sum of the old list values:
```
my_dict = {1: 5, 2: 9, 3: 13}
```
What is the most efficient/pythonic way of doing so? What I usually do is:
```
for key in my_dict:
    my_dict[key] = sum(my_dict[key])
```
Are there better ways?<issue_comment>username_1: You can use a dictionary comprehension:
```
my_dict = {1: [2,3], 2: [4, 5], 3: [6, 7]}
new_d = {a:sum(b) for a, b in my_dict.items()}
```
Output:
```
{1: 5, 2: 9, 3: 13}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use `reduce` instead of `sum`:
```
from functools import reduce
my_dict = {1: [2,3], 2: [4, 5], 3: [6, 7]}
final = {k: reduce(lambda x,y: x+y, v) for k,v in my_dict.items()}
```
output:
```
{1: 5, 2: 9, 3: 13}
```
Otherwise you can refer to this [thread](https://stackoverflow.com/questions/33772092/when-would-you-use-reduce-instead-of-sum) for more information.
Upvotes: 1
2018/03/20 <issue_start>username_0: While static member variables can be templated in C++14, this won't work:
```
class SomeClass
{
public:
template <typename T>
T var = {};
};
int main()
{
SomeClass instance;
instance.var = 50;
instance.var = 0.1;
}
```
What are the reasons that templates for member variables are not supported by the C++ standard, since it should be possible in principle?<issue_comment>username_1: When you instantiate the class you don't know how much memory it will use. Does this class contain an int and a double? What if you write
```
instance.var = 0.2;
instance.var = 1;
```
later in your code
Upvotes: 2 <issue_comment>username_2: This would make two objects of the same type `SomeClass` different, rendering the class concept as we understand it in c++ useless.
Also, your code sample implies that `var` could change type during runtime; this can be done using std::variant or std::any.
Upvotes: 2 <issue_comment>username_3: It *cannot* be possible in principle or in practice, as the other answers explain: `sizeof(SomeClass)` would be impossible to compute in general, and `SomeClass` would no longer have any predictable or sane identity, defeating the purpose of its existence.
If there are only a select few types you wish to choose from, and you wish to change the "selected" type at runtime, perhaps a *variant* is what you're looking for?
```
#include <variant>
class SomeClass
{
public:
std::variant<int, double> var = {};
};
int main()
{
SomeClass instance;
instance.var = 50;
instance.var = 0.1;
}
```
(This [requires C++17](http://en.cppreference.com/w/cpp/utility/variant), but a Boost equivalent has been available for many, many years.)
It works because `var` will be as big as it needs to to store *either* an `int` or a `double` (plus some housekeeping), and this size is fixed no matter which "mode" your variant is in at any given time.
If you want to accept *any* type, you could use [`std::any`](http://en.cppreference.com/w/cpp/utility/any), which is like a variant on drugs. The overhead is a little heavier, but if your requirements are *really* so relaxed then this can do the job.
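A minimal sketch of that `std::any` route (C++17), at the cost of an explicit cast on every read:

```
#include <any>

std::any var = 50;                     // holds an int
var = 0.1;                             // now holds a double
double d = std::any_cast<double>(var); // throws std::bad_any_cast on a type mismatch
```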
But if you want multiple variables, have multiple variables.
Upvotes: 2 <issue_comment>username_4: [c++](/questions/tagged/c%2b%2b "show questions tagged 'c++'") has value types with known sizes. All complete types in C++ that you can create can have their sizes calculated by the compiler based only on information at or above the line of creation within that compilation unit.
In order to do what you want, either the size of instances of a class varies with every template variable ever used in any compilation unit, or the size of instances varies over time as new elements are added.
Now you can create new data based on type, but it won't be inside the class; instead, you add a map storing the data.
```
#include <map>
#include <memory>
#include <typeindex>

using upvoid = std::unique_ptr<void, void(*)(void*)>;

template <typename T>
static upvoid make() {
    return { new T, [](void* ptr){ delete static_cast<T*>(ptr); } };
}

std::map<std::type_index, upvoid> m_members;

template <typename T>
T& get() {
    auto it = m_members.find(typeid(T));
    if (it == m_members.end()) {
        auto r = m_members.insert( {typeid(T), make<T>()} );
        it = r.first;
    }
    return *static_cast<T*>(it->second.get());
}
```
now `foo.get<int>()` allocates an `int` if it wasn't there, and if it was there gets it. Extra work would have to be done if you want to be able to copy instances.
This kind of mess emulates what you want, but its abstraction leaks (you can tell it isn't a member variable). And it isn't really a template member variable, it just acts a bit like one.
Barring doing something like this, what you ask for is impossible. And doing this as part of the language would be, quite frankly, a bad idea.
Upvotes: 1
2018/03/20 <issue_start>username_0: I have an array: Option[Seq[People]]
```
case class People (
name: Option[String],
tall: Option[Boolean],
fat: Boolean
)
```
What I want looks like:
```
String name = "Jack|Tom|Sam"
String tall = "True|True|True"
String fat = "True|False|True"
```
So, I tried:
```
name = array.flatMap(x => x.name).map(_.mkString("|"))
name = array.flatMap(_.slot_name).map(_.mkString("|"))
```
The above attempts didn't work.<issue_comment>username_1: The `mkString` is a method of `Seq[String]`
```
val names = array.map(_.flatMap(x => x.name).mkString("|")).getOrElse("")
val tall = array.map(_.flatMap(_.tall).map(_.toString.capitalize).mkString("|")).getOrElse("")
val fat = array.map(_.map(_.fat.toString.capitalize).mkString("|")).getOrElse("")
```
Upvotes: 0 <issue_comment>username_2: Here's what you need (demonstrated in a *Scala REPL* session):
```scala
$ scala
Welcome to Scala 2.12.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_161).
Type in expressions for evaluation. Or try :help.
scala> case class People (
| name: Option[String],
| tall: Option[Boolean],
| fat: Boolean
| )
defined class People
scala> val array = Option(
| Seq(
| People(Some("Jack"), Some(true), true),
| People(Some("Tom"), Some(true), false),
| People(Some("Sam"), Some(true), true),
| )
| )
array: Option[Seq[People]] = Some(List(People(Some(Jack),Some(true),true), People(Some(Tom),Some(true),false), People(Some(Sam),Some(true),true)))
scala> val name = array.fold("")(_.flatMap(_.name).mkString("|"))
name: String = Jack|Tom|Sam
scala> val tall = array.fold("")(_.flatMap(_.tall).map(_.toString.capitalize).mkString("|"))
tall: String = True|True|True
scala> val fat = array.fold("")(_.map(_.fat.toString.capitalize).mkString("|"))
fat: String = True|False|True
```
Each `fold` operation considers that the value of `array` may be `None` (the first argument list, which it maps to an empty string); otherwise, `fold` takes the defined sequence (in the second argument list) then processes each member.
The `flatMap` operations convert `People` instances to the corresponding required optional values (`name` and `tall`), retrieving the defined values while filtering out those that are undefined. (`flatMap` is equivalent to a `map` followed by `flatten`.) Since the `fat` field is not optional, only a `map` is required, instead of `flatMap`.
Resulting `Boolean` values must be converted into capitalized strings through another `map` operation, in order to match your required output. (In the case of `fat`, this can be combined with the `map` call that converts `People` instances to a `Boolean` value.)
Finally, the resulting `Seq[String]`'s are joined into a single `String` using the virgule ("|") as a *separator* via the `mkString` function.
Upvotes: 2 <issue_comment>username_3: Here is another approach using `collect` over an `array` of elements
```
val array = Seq(
People(Some("Jack"), Some(true), true),
People(Some("Tom"), Some(true), false),
People(Some("Sam"), Some(true), true)
)
array: Seq[People] = List(People(Some(Jack),Some(true),true), People(Some(Tom),Some(true),false), People(Some(Sam),Some(true),true))
```
For getting name(s) of all people
```
scala> val name = array.collect{
case p : People => p.name
}.flatten.mkString("|")
res3: name: String = Jack|Tom|Sam
```
For getting all tall(s) of people
```
scala> val tall = array.collect{
case p: People => p.tall
}.flatten.mkString("|")
tall: String = true|true|true
```
Same way for fat(s)
```
scala> val tall = array.collect{
case p: People => p.fat.toString.capitalize
}.mkString("|")
tall: String = True|False|True
```
Upvotes: 0
2018/03/20 <issue_start>username_0: I googled this question but didn't find an appropriate solution in the few links I checked, so I thought to put it here.
Question: if we have multiple for loops (each calling a simple method), then how can we refactor this scenario? In my Selenium script it looks really weird to have such methods back to back.
I need that specific sequence. Method_2 cannot be executed until Method_1 has executed completely.
```
for (int i = 0; i < n; i++)
{
Method_1();
}
for (int i = 0; i < n; i++)
{
Method_2();
}
for (int i = 0; i < n; i++)
{
Method_3();
}
for (int i = 0; i < n; i++)
{
Method_4();
}
```
Consider: all methods under the for loops are different.
The count n can be considered constant for now. Please let me know whether it's a good idea to refactor such code and, if possible, what the solution would be.<issue_comment>username_1: ***If*** the order of execution is unimportant, you can put all the calls in one loop, since they're all called an equal number of times
```
for (int i = 0; i < n; i++)
{
Method_1();
Method_2();
Method_3();
Method_4();
}
```
Upvotes: -1 <issue_comment>username_2: From a PHP background (which you stated in the tags initially, not sure why) this is the approach if I had to run a certain function n times across 4 functions:
**Different syntax to what you did earlier**
```
$n = 50;
for ($i = 0; $i < $n; $i++) Method_1();
for ($i = 0; $i < $n; $i++) Method_2();
for ($i = 0; $i < $n; $i++) Method_3();
for ($i = 0; $i < $n; $i++) Method_4();
```
**Different loop approach**
```
$n = 50; // per-function calls
$a = 4; // amount of functions being called
for ($i = 1; $i <= $a * $n; $i++){
$nm = floor($i/$n)+1;
$fn = "Method_".$nm;
if ($nm <= $a) $fn();
}
```
**This codeset works if you are starting from `Method_0`, and it has less code. The above versions were adapted for methods that don't start from 0, which was creating less maintainable code**
```
$n = 50; // per-function calls
$a = 4; // amount of functions being called
for ($i = 1; $i <= ($a * $n) -2; $i++){
$nm = floor($i/$n);
$fn = "Method_".$nm;
$fn();
}
```
Tested code, works perfectly fine for the use case necessary.
Upvotes: 0 <issue_comment>username_3: (This is for C#, back when there was a C# tag. Not sure how this converts to java)
You can write a method that takes an integer that represents how many times you want to execute some method, and that takes an `Action` or delegate for the method to execute:
```
private static void ExecuteNTimes(int n, Action method)
{
for (int i = 0; i < n; i++)
{
method();
}
}
```
Then, if you have some simple methods with the same signature as in your example:
```
private static void Method_1()
{
Console.WriteLine("Executed Method_1");
}
private static void Method_2()
{
Console.WriteLine("Executed Method_2");
}
private static void Method_3()
{
Console.WriteLine("Executed Method_3");
}
private static void Method_4()
{
Console.WriteLine("Executed Method_4");
}
```
You can execute them in your main code like:
```
private static void Main()
{
var numTimesToExecute = 3;
ExecuteNTimes(numTimesToExecute, Method_1);
ExecuteNTimes(numTimesToExecute, Method_2);
ExecuteNTimes(numTimesToExecute, Method_3);
ExecuteNTimes(numTimesToExecute, Method_4);
Console.Write("\nPress any key to exit...");
Console.ReadKey();
}
```
**Output**
[](https://i.stack.imgur.com/nQb9S.png)
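A hedged Java translation of the same idea, using `Runnable` in place of C#'s `Action` and a method reference at the call site:

```
private static void executeNTimes(int n, Runnable method) {
    for (int i = 0; i < n; i++) {
        method.run();
    }
}

// usage, assuming static methods method1() ... method4() exist:
// executeNTimes(3, MyClass::method1);
```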
Upvotes: 3 [selected_answer]<issue_comment>username_4: You could do an array with your method names such as:
$functions = [ 0 => 'Method_1', 1 => 'Method_2' ]
And then do a couple of nested loops: one for moving towards the end of the array (first Method_1, then Method_2, etc.), and the inner loop can be exactly the one you have, but using call_user_func to dynamically change methods.
PHP Docs: [Call User Func](http://php.net/manual/en/function.call-user-func.php)
I personally don't see any real advantage to this unless you have a lot of methods. If you only have to do it for 4, I would just leave it as is...
**got confused and thought this was PHP, sorry for the references but logic in Java should be the same**
Upvotes: 0
2018/03/20 <issue_start>username_0: Can somebody explain why, in this program, the flag ends up as "win" if I call printf this way, but not without it? I can't understand why this printf allows such things. Why, without the printf, can't the array overwrite the variable flag?
```
#include <stdio.h>
#include <stdbool.h>

int main() {
    int flag = false;
    int arr[10] = {0};
    int siz = sizeof(arr) / sizeof(*arr);
    printf("%p", &flag);
    arr[10] = 1; // Without the printf call can't get the win. Why?
    puts("");
    if (flag == true)
    {
        printf("win !");
    }
    else
    {
        printf("lose");
    }
    return 0;
}
```<issue_comment>username_1: Your program accesses the array beyond its bounds. The array indexes start at *0* and end at *N - 1*, where *N* is the size of the array.
Doing this invokes undefined behavior, so your prediction of the program's behavior will be wrong after this. Adding the `printf()` can change this behavior, and it does; that is what *undefined behavior* means. It should not affect the behavior of the program, but once you have caused the undefined behavior at
```
arr[10] = 1;
```
you cannot know how the program will behave anymore.
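As an aside, both GCC and Clang can catch this class of bug at run time if you build with AddressSanitizer:

```
cc -g -fsanitize=address file.c && ./a.out   # reports a stack-buffer-overflow at arr[10] = 1;
```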
Upvotes: 3 [selected_answer]<issue_comment>username_2: This is causing your problem
```
arr[10] = 1;
```
You only allocate 10 elements in your array
```
int arr[10] = { 0 };
```
`arr[10]` is actually trying to access the 11th element in the array because array indices start at `0`.
Upvotes: 0
2018/03/20 <issue_start>username_0: I have the following script, which runs on .zip files in a directory; the zips contain a whole directory structure with many files. Those files then have 7-Zip run on them to extract them, and then .eml is added to each extracted file.
```
& "c:\program files\7-zip\7z.exe" x c:\TestZip -r -oC:\TestRestore 2> c:\TestLog\ziplog.txt
& "c:\program files\7-zip\7z.exe" x c:\TestRestore -r -aos -oc:\TestExtract 2> c:\TestLog\sevenzip.txt
gci -path "c:\TestExtract" -file | rename-item -newname {$PSItem.name + ".eml"}
```
My problem is that out of these files sometimes the final extraction cannot be done by 7zip as it does not see it as an archive. I have found that these particular files if I just put .eml on them they are accessible as emails. So when the archive fails to extract I write the output to the sevenzip.txt file.
What I need help with is how do I read this file to get the filenames and place them in a directory so I can add the .eml extension.
An example of the output in the sevenzip.txt file is as follows
```
ERROR: c:\TestRestore\0\0\54\3925ccb78f80d28b7569b6759554d.0_4011
Can not open the file as archive
ERROR: c:\TestRestore\0\0\5b\6fa7acb219dec5d9e55d4eadd6eb1.0_3958
Can not open the file as archive
```
Any help would be greatly appreciated on how to do this.
Sorry for all the comments but I am working on this
```
$SourceFile = 'c:\testlog\sevenzip.txt'
$DestinationFile = 'c:\testlog\testlogextractnew.txt'
$Pattern = 'c:\\TestRestore\\'
(Get-Content $SourceFile) |
    % {if ($_ -match $Pattern){$_}} |
    Set-Content $DestinationFile
(Get-Content $DestinationFile).replace('ERROR: ', '') |
    Set-Content $DestinationFile
(Get-Content $DestinationFile).replace('7z.exe : ', '') |
    Set-Content $DestinationFile
```
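A hedged sketch of one way to finish the job: pull each failing path out of `sevenzip.txt` with a regex, then rename it with an `.eml` extension:

```
Get-Content 'c:\TestLog\sevenzip.txt' | ForEach-Object {
    if ($_ -match '^ERROR:\s+(.+)$') {
        $path = $Matches[1].Trim()
        if (Test-Path $path) {
            Rename-Item -Path $path -NewName ($path + '.eml')
        }
    }
}
```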
2018/03/20 <issue_start>username_0: I am currently using Python 3.5.5 on Anaconda and I am unable to import torch. It is giving me the following error in Spyder:
```
Python 3.5.5 |Anaconda, Inc.| (default, Mar 12 2018, 17:44:09) [MSC v.1900
64 bit (AMD64)]
Type "copyright", "credits" or "license" for more information.
IPython 6.2.1 -- An enhanced Interactive Python.
import torch
Traceback (most recent call last):
File "", line 1, in
import torch
File "C:\Users\trish\Anaconda3\envs\virtual_platform\lib\site-packages\torch\__init__.py", line 76, in
from torch._C import *
ImportError: DLL load failed: The specified module could not be found.
```
Many suggestions on the internet say that the working directory should not be the same directory that the torch package is in; however, I've manually set my working directory to C:/Users/trish/Downloads, and I am getting the same error.
Also I've already tried the following: reinstalling Anaconda and all packages from scratch, and I've ensured there is no duplicate "torch" folder in my directory.
Please help! Thank you!<issue_comment>username_1: Make sure you installed the right version of pytorch for your environment. I had the same problem: I was using pytorch on Windows, but I had the default package installed, which was meant for CUDA 8. So I reinstalled the pytorch package for CPU, which was what I needed.
Upvotes: 0 <issue_comment>username_2: I had the same issue with running torch installed with pure pip and solved it by switching to conda.
Following steps:
1. uninstall python 3.6 from python.org (if exists)
2. install [miniconda](https://conda.io/miniconda.html)
3. install torch in conda ("conda install pytorch -c pytorch")
Issue with pip installation:
```
import torch
File "C:\Program Files\Python35\lib\site-packages\torch\__init__.py", line 78, in
from torch._C import *
ImportError: DLL load failed: The specified module could not be found.
```
After switching to conda it works fine. I believe the issue was resolved by conda through installing the vs\_redist 2017
> vs2017_runtime 15.4.27004.2010 peterjc123
But I have tried it w/o conda and it did not help. Could not find how to check (and tweak) Python's vs_redist.
Upvotes: 0 <issue_comment>username_3: Had the same problem and fixed it by re-installing numpy with mkl (Intel's math kernel library)
<https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>
Download the right `.whl` for your machine. For me it was numpy‑1.14.5+mkl‑cp36‑cp36m‑win\_amd64.whl (python 3.6, windows, 64-bit)
and then install using pip.
```
pip install numpy‑1.14.5+mkl‑cp36‑cp36m‑win_amd64.whl
```
Upvotes: 2 <issue_comment>username_4: I had this similar problem in windows 10...
Solution:
* Download **win-64/intel-openmp-2018.0.0-8.tar.bz2** from <https://anaconda.org/anaconda/intel-openmp/files>
* Extract it and put the dll files in **Library\bin** into
`C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin`
* Make sure your cuda directory is added to your `%PATH%` environment variable
Upvotes: 4 [selected_answer]<issue_comment>username_5: I had the same problem. In my case I didn't want the GPU version of pytorch.
I uninstalled it. The version was pytorch: 0.3.1-py36\_cuda80\_cudnn6he774522\_2 peterjc123.
The problem is that cuda and cudnn . then installed with the following command and now it works!
```
conda install -c peterjc123 pytorch-cpu
```
Upvotes: 2 <issue_comment>username_6: I also encountered the same problem when I used a conda environment with python 3.6.8 and pytorch installed by conda from channel -c pytorch.
Here is what worked for me:
1:) `conda create -n envName python=3.6 anaconda`
2:) `conda update -n envName conda`
3:) `conda activate envName`
4:) `conda install pytorch torchvision cudatoolkit=9.0 -c pytorch`
and then tested torch with the given code:
5:) `python -c "import torch; print(torch.cuda.get_device_name(0))"`
Note: 5th step will return your gpu name if you have a cuda compatible gpu
**Summary: I just created a conda environment containing whole anaconda and then to tackle the issue of unmatched conda version I updated conda of new environment from the base environment and then installed pytorch in that environment and tested pytorch.**
For CPU version, here is the link for my another answer: <https://gist.github.com/peterjc123/6b804651288e76db7b5fabe5348e1f03#gistcomment-2842825>
<https://gist.github.com/peterjc123/6b804651288e76db7b5fabe5348e1f03#gistcomment-2842837>
Upvotes: 2 <issue_comment>username_7: **Windows10 Solution(This worked for my system):**
I was having the same issue in my system. Previously I was using Python 3.5 and I created a virtual environment named pytorch\_test using the virtualenv module because I didn't want to mess up my tensorflow installation(which took me a lot of time). I followed every instruction but it didn't seem to work. I installed python 3.6.7 added it to the path. Then I created the virtual environment using:
`virtualenv --python=3.6 pytorch_test`
Then go to the destination folder
`cd D:\pytorch_test`
and activate the virtual environment entering the command in cmd:
`.\Scripts\activate`
After you do this the command prompt will show:
`(pytorch_test) D:\pytorch_test>`
Update pip if you have not done it before using:
`(pytorch_test) D:\pytorch_test>python -m pip install --upgrade pip`
Then go for installing numpy+mkl from the site:
<https://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy>
Choose the correct version from the list; if you have python 3.6.7, go with the wheel file:
numpy‑1.15.4+mkl‑cp36‑cp36m‑win_amd64.whl (for 64-bit)
(Note: if the whole thing does not work, just go with a simple numpy installation and an mkl installation separately.)
Then go for installing openmp using:
`(pytorch_test) D:\pytorch_test>pip install intel-openmp`
Now you are done with the prerequisites. To install pytorch go to the previous versions site:
<https://pytorch.org/get-started/previous-versions/>
Here select the suitable version from the list of Windows Binaries. For example I am having CUDA 9.0 installed in my system with python 3.6.7 so I went with the gpu version:
cu90/torch-1.0.0-cp36-cp36m-win\_amd64.whl
(There are two available versions 0.4.0 and 1.0.0 for pytorch, I went with 1.0.0)
After downloading the file install it using pip(assuming the whl file is in **D:**).You have to do this from the virtual environment pytorch\_test itself:
`(pytorch_test) D:\pytorch_test>pip install D:\torch-1.0.0-cp36-cp36m-win_amd64.whl`
Prerequisites like six, pillow will be installed automatically.
Then once everything is done, install the models using torchvision.
Simply type :
`(pytorch_test) D:\pytorch_test>pip install torchvision`
To check everything is working fine try the following script:
```
import torch
test = torch.rand(4, 7)
print(test)
```
If everything was good then it won't be an issue. Whenever there is an issue like this, it is related to a version mismatch of one or more dependencies. This also occurred during tensorflow installation.
Deactivate the following virtual environment using the command deactivate in the cmd:
`(pytorch_test) D:\pytorch_test>deactivate`
This is the output of **pip list** in my system:
```
Package Version
------------ -----------
intel-openmp 2019.0
mkl 2019.0
numpy 1.16.2
Pillow 6.0.0
pip 19.0.3
setuptools 41.0.0
six 1.12.0
torch 1.0.0
torchvision 0.2.2.post3
wheel 0.33.1
```
Hope this helps. This is my first answer in this community; hope you all find it helpful. I set up pytorch today in the afternoon after trying all sorts of combinations. The same import problem occurred to me while installing CNTK and tensorflow. Anyway, I kept them separate in different virtual environments so that I can use them anytime.
Upvotes: 0 <issue_comment>username_8: I am using a Windows 10 computer with an NVIDIA GeForce graphics card. NVIDIA showed I had CUDA 10.1, but I was getting this error when running `import torch` in Jupyter Lab and suspected it had something to do with CUDA support.
I fixed this problem by downloading and installing the [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit) directly from NVIDIA. It installed all required Visual Studio components. When I returned to Jupyter Lab, `import torch` ran without error.
Upvotes: 1
2018/03/20 <issue_start>username_0: I would like to know how, in the Julia language, I can determine if a `file.jl` is run as a script, such as in the call:
```
bash$ julia file.jl
```
It must start a function `main` only in this case, for example. Thus I could use `include("file.jl")` without actually executing the function.
To be specific, I am looking for something similar answered already [in a python question](https://stackoverflow.com/questions/1389044/how-do-i-determine-whether-a-python-script-is-imported-as-module-or-run-as-scrip#1389048):
```
def main():
    # does something

if __name__ == '__main__':
    main()
```
---
Edit:
To be more specific, the method `Base.isinteractive` (see [here](https://discourse.julialang.org/t/detect-if-running-as-script/8818)) does not solve the problem when using `include("file.jl")` from within a non-interactive (e.g. script) environment.<issue_comment>username_1: The global constant `PROGRAM_FILE` contains the script name passed to Julia from the command line (it does not change when `include` is called).
On the other hand `@__FILE__` macro gives you a name of the file where it is present.
For instance if you have a files:
*a.jl*
```
println(PROGRAM_FILE)
println(@__FILE__)
include("b.jl")
```
*b.jl*
```
println(PROGRAM_FILE)
println(@__FILE__)
```
You have the following behavior:
```
$ julia a.jl
a.jl
D:\a.jl
a.jl
D:\b.jl
$ julia b.jl
b.jl
D:\b.jl
```
In summary:
* `PROGRAM_FILE` tells you what is the file name that Julia was started with;
* `@__FILE__` tells you in what file actually the macro was called.
Upvotes: 2 <issue_comment>username_2: **tl;dr version:**
```
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
main()
end
```
---
**Explanation:**
There seems to be some confusion. Python and Julia work very differently in terms of their "modules" (even though the two use the same term, in principle they are different).
In python, a source file is either a module or a script, depending on how you chose to "load" / "run" it: the boilerplate exists to detect the environment in which the source code was run, by querying the `__name__` of the embedding module at the time of execution. E.g. if you have a file called `mymodule.py`, it you import it normally, then within the module definition the variable `__name__` automatically gets set to the value `mymodule`; but if you ran it as a standalone script (effectively "dumping" the code into the "main" module), the `__name__` variable is that of the global scope, namely `__main__`. This difference gives you the ability to detect *how* a python file was ran, so you could act slightly differently in each case, and this is exactly what the boilerplate does.
In julia, however, a module is defined *explicitly as code*. Running a file that contains a `module` declaration will load that module regardless of whether you did `using` or `include`; however in the former case, the module will not be reloaded if it's already on the workspace, whereas in the latter case it's as if you "redefined" it.
Modules can have initialisation code via the special `__init__()` function, whose job is to only run the first time a module is loaded (e.g. when imported via a using statement). So one thing you could do is have a standalone script, which you could either `include` directly to run as a standalone script, or `include` it within the scope of a `module` definition, and have it detect the presence of module-specific variables such that it behaves differently in each case. But it would still have to be a standalone file, separate from the main module definition.
If you want the module to do stuff, that the standalone script shouldn't, this is easy: you just have something like this:
```
module MyModule
__init__() = # do module specific initialisation stuff here
include("MyModule_Implementation.jl")
end
```
If you want the reverse situation, you need a way to detect whether you're running inside the module or not. You could do this, e.g. by detecting the presence of a suitable `__init__()` function, belonging to that particular module. For example:
```
### in file "MyModule.jl"
module MyModule
export fun1, fun2;
__init__() = print("Initialising module ...");
include("MyModuleImplementation.jl");
end
```
```
### in file "MyModuleImplementation.jl"
fun1(a,b) = a + b;
fun2(a,b) = a * b;
main() = print("Demo of fun1 and fun2. \n" *
" fun1(1,2) = $(fun1(1,2)) \n" *
" fun2(1,2) = $(fun2(1,2)) \n");
if !isdefined(:__init__) || Base.function_module(__init__) != MyModule
main()
end
```
If `MyModule` is loaded as a module, the `main` function in `MyModuleImplementation.jl` will not run.
If you run `MyModuleImplementation.jl` as a standalone script, the `main` function will run.
So this is a way to achieve something close to the effect you want; but it's very different to saying running a module-defining file as either a module or a standalone script; I don't think you can simply "strip" the `module` instruction from the code and run the module's "contents" in such a manner in julia.
Upvotes: 2 <issue_comment>username_3: The answer is available at the official Julia docs [FAQ](https://docs.julialang.org/en/v1/manual/faq/#How-do-I-check-if-the-current-file-is-being-run-as-the-main-script?). I am copy/pasting it here because this question comes up as the first hit on some search engines. It would be nice if people found the answer on the first-hit site.
> How do I check if the current file is being run as the main script?
> ===================================================================
>
> When a file is run as the main script using `julia file.jl` one might want to activate extra functionality like command line argument handling. A way to determine that a file is run in this fashion is to check if `abspath(PROGRAM_FILE) == @__FILE__` is `true`.
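Putting that together, the Julia equivalent of Python's boilerplate is:

```
function main()
    # does something
end

if abspath(PROGRAM_FILE) == @__FILE__
    main()
end
```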
Upvotes: 2
2018/03/20 <issue_start>username_0: I have two classes
```
class A {
public:
    virtual void doStuff() = 0;
};

class B : public A {
    int x;
public:
    virtual void doStuff() override { x = x*2; } // just an example function
};
```
And another class that modifies and uses data from the previous ones
```
class Foo {
    A a;
public:
    Foo(A &a_) : a(a_) {}
};
```
now I create the objects and pass them to the Foo class
```
B b;
// edit b attributes,
Foo foo(b);
```
So I know there is no object-slicing problem at the constructor's argument list, because it is a reference, but what happens at the moment of assigning the variable `a(a_)`?
Since I don't know how long the object `b` is going to live, I need to make a safe copy. I have a lot of different classes derived from A, even derived from the derived ones.
Will there be object slicing?
Is there a solution to this, or I need to pass pointers (don't want this approach)?<issue_comment>username_1: There will be object slicing with what you currently have. You're calling the `A` copy-constructor in `Foo`'s constructor, and there aren't virtual constructors.
Having a member variable of type `A` only reserves enough space within an instance of `Foo` for an instance of `A`. There is only dynamic binding with pointers and references (which are pointers under the hood), not with member variables.
You would have to use pointers to get around this or you could rethink whether you really need a set-up like this.
Upvotes: 1 <issue_comment>username_2: This causes slicing. C++ built in polymorphism only works with pointer/reference semantics.
In fact:
```
class Foo {
A a;
```
that won't even compile, because `A` is not a concrete class.
To fix this, first make `virtual ~A(){};` and then pass smart pointers to `A` around. Either unique or shared.
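A minimal sketch of that fix, with `Foo` owning a heap-allocated copy so the dynamic type survives:

```
#include <memory>

class Foo {
    std::unique_ptr<A> a;
public:
    explicit Foo(std::unique_ptr<A> a_) : a(std::move(a_)) {}
};

// usage: Foo foo(std::make_unique<B>(b));  // copies b onto the heap, no slicing
```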
---
Failing that you can use your own bespoke polymorphism. The easiest way is to stuff a `pImpl` smart pointer as a private member of a class and implement copy/move semantics in the holding class. The `pImpl` can have a virtual interface, and the wrapping class just forwards the non-overridable part of the behaviour to it.
This technique can be extended with the small buffer optimization, or even bounded size instances, in order to avoid heap allocation.
All of this is harder than just using the built in C++ object model directly, but it can have payoff.
To see a famous example of this, examine `std::function` which is a value type that behaves polymorphically.
Upvotes: 2 <issue_comment>username_3: Yes, there is slicing.
There has to be slicing, because a `B` does not fit inside a `A`, but it is an `A` that you are storing inside the class `Foo`. The `B` part is "sliced off" to fit; hence the name.
Upvotes: 0
2018/03/20 <issue_start>username_0: I am given a list, and I am trying to recursively check whether some adjacent elements in the list sum to a multiple of 12; if they do, then I need to check whether the sum of the remaining numbers in the list is odd. For example, [6, 6, 5] returns true and [6, 5, 6, 1] returns false. I am having trouble with the case where [10, 7, 5, 5] returns true, because (7+5) = 12 (a multiple of 12) and (10+5) = 15 (odd). This is my code. It works when the elements are at indexes 0 and 1, but not when the multiple would be in the middle.
```
public static boolean twoGroups(List<Integer> t) {
    if (t.size() < 2) {
        return false;
    }
    if ((t.get(0) * t.get(1)) % 12 == 0) {
        List<Integer> i = new ArrayList<>(t);
        i.remove(0);
        i.remove(1);
        int works = checkSum(i, 0);
        if (works % 2 == 0) {
            return true;
        } else {
            return false;
        }
    } else { // I think this is where I am going wrong
        List<Integer> i = new ArrayList<>(t);
        List<Integer> newList = new ArrayList<>();
        newList.add(i.get(0));
        i.remove(0);
        return twoGroups(i);
    }
}

/*
 * Returns the sum of the elements of the list (the caller checks
 * whether that sum is odd).
 * Helper method
 */
public static int checkSum(List<Integer> t, int index) {
    if (t.size() == index) {
        return 0;
    }
    return t.get(index) + checkSum(t, index + 1);
}
```
I commented at the part where I think I am going wrong. Please help!<issue_comment>username_1: I believe you have a few issues.
1. In line 6 where you multiply
`t.get(0) * t.get(1) % 12 == 0`. From the description you gave, you should instead add. [10, 7, 5, 5] would multiply 10 * 7, which equals 70 (not a multiple of 12).
2. In lines 8 & 9 `i.remove(0); i.remove(1);` does not remove the variables as you think. First, you remove 10 (index 0) from [10, 7, 5, 5] ->[7, 5, 5]. Then, you remove 5 (the new index 1) [7, 5]
3. You add the variables traversed over into `List<Integer> newList = new ArrayList<>();` but that variable is never used. You should use that list along with the rest of the variables in `List t` to find the sum.
Upvotes: 3 [selected_answer]<issue_comment>username_2: This should work:
```
public class Divisible {
    public boolean divisibleBy12(List<Integer> numbers) {
        return divisibleHelper(numbers, 0);
    }

    private boolean divisibleHelper(List<Integer> numbers, int currentIndex) {
        if (currentIndex + 1 >= numbers.size()) {
            return false;
        } else if (isDivisible(numbers.get(currentIndex), numbers.get(currentIndex + 1))) {
            return getSum(numbers, numbers.get(currentIndex), numbers.get(currentIndex + 1));
        } else {
            return divisibleHelper(numbers, currentIndex + 1);
        }
    }

    private boolean getSum(List<Integer> numbers, Integer num1, Integer num2) {
        // Sum of everything except the adjacent pair; true when that sum is odd.
        int sum = numbers.stream().mapToInt(Integer::intValue).sum() - num1 - num2;
        return (sum & 1) == 1;
    }

    private boolean isDivisible(Integer num1, Integer num2) {
        return (num1 + num2) % 12 == 0;
    }
}
```
Your problem was removing elements from list so you couldn't count them in the sum. Code is a little messy so I'll leave it to you to tidy it up.
Upvotes: 0 <issue_comment>username_3: I only have a solution in Scala, which is in parts easy to translate into Java, but in parts not so; at least, I'm not used to the newly introduced functional parts of Java, and I constantly get lost between the different collections with their different capabilities.
If we turn the List around, we can first check, if the sublist is odd. Only then, we test the rest of the list incrementally for adding up to a multiple of 12:
```
// easy: sum of list is odd/multiple of 12
def isOdd (l: List[Int]): Boolean = { (l.sum % 2 == 1) }
def dozenix (l: List[Int]): Boolean = { (l.sum % 12 == 0) }
// For (1,8,7,6) test 1 => 1, 8 => 1, 8, 7 => 1, 8, 7, 6
def findDozenic (fromFront: List[Int]): Boolean = {
(1 to fromFront.size - 1).exists (i => dozenix (fromFront.take (i)))
}
// for (a,b,c,d) test (a)(bcd), (ab)(cd), (abc)(d)
def dozenAfterOddRest (rest: List[Int]): Boolean = {
for (i <- (1 to rest.length - 1);
if isOdd (rest.take (i));
if findDozenic (rest.drop (i))) return true
return false
}
// initial method to call, turning the list around
def restAfterDozen (rest: List[Int]): Boolean = {
dozenAfterOddRest (rest.reverse)
}
```
Test:
```
scala> val samples = (1 to 10).map (dummy => (1 to 5).map (foo => {rnd.nextInt (15)+1;}).toList)
samples: scala.collection.immutable.IndexedSeq[List[Int]] = Vector(List(13, 3, 14, 8, 2), List(8, 13, 5, 4, 6), List(5, 1, 5, 12, 1), List(5, 14, 15, 8, 2), List(1, 6, 15, 2, 12), List(1, 1, 13, 15, 8), List(4, 15, 2, 10, 8), List(2, 12, 1, 7, 2), List(7, 9, 4, 9, 8), List(6, 8, 7, 15, 8))
scala> samples.map (restAfterDozen)
res107: scala.collection.immutable.IndexedSeq[Boolean] = Vector(false, false, true, false, false, false, false, false, false, false)
```
As a true result, List(5, 1, 5, 12, 1) is obviously correct: the last element 1 is odd, and in front of it is 12, which is divisible by 12. I didn't check the other lists, but I changed the code to make the examples more readable:
```
def dozenAfterOddRest (rest: List[Int]): Boolean = {
for (i <- (1 to rest.length - 1);
if isOdd (rest.take (i));
if findDozenic (rest.drop (i))) {
println (rest.drop (i).reverse.mkString ("+") + "> -- <" + rest.take(i).mkString ("+"));
return true
}
return false
}
```
So for
```
Vector(List(1, 3, 3, 8, 2, 7, 3, 7, 5), List(2, 2, 8, 11, 9, 4, 5, 5, 9), List(10, 10, 8, 9, 1, 4, 8, 5, 6), List(3, 10, 2, 6, 3, 2, 10, 4, 2), List(7, 5, 6, 5, 11, 6, 8, 1, 2), List(11, 4, 1, 1, 4, 1, 4, 4, 6), List(3, 10, 1, 11, 8, 2, 8, 3, 11), List(10, 1, 10, 7, 6, 2, 6, 2, 11), List(10, 9, 10, 9, 3, 8, 9, 5, 4), List(2, 8, 5, 2, 9, 7, 4, 3, 1))
```
it prints (only matches):
```
scala> samples.map (restAfterDozen)
2+2+8+11+9+4> -- <9+5+5 // 9+5+5 is Odd, 4+9=13, +11=24
10+10+8+9+1+4+8> -- <6+5 // 6+5 odd, 8+4=12
3+10+2> -- <2+4+10+2+3+6 // and so on.
7+5+6+5+11+6+8> -- <2+1
10+1+10+7+6> -- <11+2+6+2
10+9+10+9+3+8+9> -- <4+5
2+8+5+2+9> -- <1+3+4+7
res115: scala.collection.immutable.IndexedSeq[Boolean] = Vector(false, true, true, true, true, false, false, true, true, true)
```
Upvotes: 0 <issue_comment>username_3: Here is my solution in Java; it works like the Scala solution above, and not just for lists of 4 elements. The code starts with convenience methods, and you have to read it from the bottom, where the entry method is defined:
```
int sum (List<Integer> l) {
    return (l.stream().reduce(0, (x, y) -> x + y));
}

boolean isOdd (List<Integer> l) {
    return (sum(l) % 2 == 1);
}

boolean dozenix (List<Integer> l) {
    return (sum(l) % 12 == 0);
}

// For (1,8,7,6) test 1 => 1, 8 => 1, 8, 7 => 1, 8, 7, 6
boolean findDozenic (List<Integer> fromFront) {
    for (int i = 1; i < fromFront.size(); ++i)
        if (dozenix (fromFront.subList (0, i)))
            return true;
    return false;
}

// for (a,b,c,d) test (a)(bcd), (ab)(cd), (abc)(d)
boolean dozenAfterOddRest (List<Integer> rest) {
    for (int i = 1; i < rest.size (); ++i)
        if (isOdd (rest.subList (0, i)))
            if (findDozenic (rest.subList (i, rest.size ())))
                return true;
    return false;
}

// initial method to call, turning the list around
boolean restAfterDozen (List<Integer> input) {
    Collections.reverse (input);
    return dozenAfterOddRest (input);
}
```
Since the rest after a sum of dozens has to be odd, I start from the end, searching for an odd sum, because there we are bound to the end of the list, while the parts that sum up to multiples of 12 can grow or shrink at either end. It is easy to lose the overview.
From the end it is easy. Last digit odd? Then try to build a div-by-12 group from the digits in front of it - just a sequential task. If the sum isn't odd, take the next digit and try again, until the (former) head of the list (now the end of the list).
Upvotes: 0 |
2018/03/20 | 2,823 | 7,606 | <issue_start>username_0: I'm noticing that when a Sidekiq / Active Job fails due to an error being thrown, any database changes that occurred during the job are rolled back. This seems to be an intentional feature to make jobs idempotent.
My problem is that the method run by the job can send emails to users and it uses database modifications to prevent re-sending emails. If the database change is rolled back, then the email will be resent whenever the job is retried.
Here's roughly what my job looks like:
```
class ProcessPaymentsJob < ApplicationJob
queue_as :default
def perform(*args)
begin
# This can send emails to users.
PaymentProcessor.perform
rescue StandardError => error
puts 'PaymentsJob failed, ignoring'
puts error
end
end
end
```
The job is scheduled to run periodically using [sidekiq-scheduler](https://github.com/moove-it/sidekiq-scheduler). I'm using rails-api v5.
I've added a `rescue` to try to prevent the job from rolling back the database changes but it still happens.
It occurred to me that maybe this isn't a Sidekiq issue at all but a feature of Rails.
What's the best solution here to prevent spamming the user with emails?
2018/03/20 | 5,798 | 22,075 <issue_start>username_0: I have a TVP+SP insert strategy implemented, as I need to insert large numbers of rows (probably concurrently) while being able to get some info back, like the generated `Id`s. Initially I'm using the EF code-first approach to generate the DB structure. My entities:
FacilityGroup
```
public class FacilityGroup
{
public int Id { get; set; }
[Required]
public string Name { get; set; }
public string InternalNotes { get; set; }
    public virtual List<FacilityInstance> Facilities { get; set; } = new List<FacilityInstance>();
}
```
FacilityInstance
```
public class FacilityInstance
{
public int Id { get; set; }
[Required]
[Index("IX_FacilityName")]
[StringLength(450)]
public string Name { get; set; }
[Required]
public string FacilityCode { get; set; }
//[Required]
public virtual FacilityGroup FacilityGroup { get; set; }
[ForeignKey(nameof(FacilityGroup))]
[Index("IX_FacilityGroupId")]
public int FacilityGroupId { get; set; }
    public virtual List<DataBatch> RelatedBatches { get; set; } = new List<DataBatch>();
    public virtual HashSet<BatchRecord> BatchRecords { get; set; } = new HashSet<BatchRecord>();
}
```
BatchRecord
```
public class BatchRecord
{
public long Id { get; set; }
//todo index?
public string ItemName { get; set; }
[Index("IX_Supplier")]
[StringLength(450)]
public string Supplier { get; set; }
public decimal Quantity { get; set; }
public string ItemUnit { get; set; }
public string EntityUnit { get; set; }
public decimal ItemSize { get; set; }
public decimal PackageSize { get; set; }
[Index("IX_FamilyCode")]
[Required]
[StringLength(4)]
public string FamilyCode { get; set; }
[Required]
public string Family { get; set; }
[Index("IX_CategoryCode")]
[Required]
[StringLength(16)]
public string CategoryCode { get; set; }
[Required]
public string Category { get; set; }
[Index("IX_SubCategoryCode")]
[Required]
[StringLength(16)]
public string SubCategoryCode { get; set; }
[Required]
public string SubCategory { get; set; }
public string ItemGroupCode { get; set; }
public string ItemGroup { get; set; }
public decimal PurchaseValue { get; set; }
public decimal UnitPurchaseValue { get; set; }
public decimal PackagePurchaseValue { get; set; }
[Required]
public virtual DataBatch DataBatch { get; set; }
[ForeignKey(nameof(DataBatch))]
public int DataBatchId { get; set; }
[Required]
public virtual FacilityInstance FacilityInstance { get; set; }
[ForeignKey(nameof(FacilityInstance))]
[Index("IX_FacilityInstance")]
public int FacilityInstanceId { get; set; }
[Required]
public virtual Currency Currency { get; set; }
[ForeignKey(nameof(Currency))]
public int CurrencyId { get; set; }
}
```
DataBatch
```
public class DataBatch
{
public int Id { get; set; }
[Required]
public string Name { get; set; }
public DateTime DateCreated { get; set; }
public BatchStatus BatchStatus { get; set; }
    public virtual List<FacilityInstance> RelatedFacilities { get; set; } = new List<FacilityInstance>();
    public virtual HashSet<BatchRecord> BatchRecords { get; set; } = new HashSet<BatchRecord>();
}
```
And then my SQL Server related code, TVP Structure:
```
CREATE TYPE dbo.RecordImportStructure
AS TABLE (
ItemName VARCHAR(MAX),
Supplier VARCHAR(MAX),
Quantity DECIMAL(18, 2),
ItemUnit VARCHAR(MAX),
EntityUnit VARCHAR(MAX),
ItemSize DECIMAL(18, 2),
PackageSize DECIMAL(18, 2),
FamilyCode VARCHAR(4),
Family VARCHAR(MAX),
CategoryCode VARCHAR(MAX),
Category VARCHAR(MAX),
SubCategoryCode VARCHAR(MAX),
SubCategory VARCHAR(MAX),
ItemGroupCode VARCHAR(MAX),
ItemGroup VARCHAR(MAX),
PurchaseValue DECIMAL(18, 2),
UnitPurchaseValue DECIMAL(18, 2),
PackagePurchaseValue DECIMAL(18, 2),
FacilityCode VARCHAR(MAX),
CurrencyCode VARCHAR(MAX)
);
```
Insert stored procedure:
```
CREATE PROCEDURE dbo.ImportBatchRecords (
@BatchId INT,
@ImportTable dbo.RecordImportStructure READONLY
)
AS
SET NOCOUNT ON;
DECLARE @ErrorCode int
DECLARE @Step varchar(200)
--Clear old stuff?
--TRUNCATE TABLE dbo.BatchRecords;
INSERT INTO dbo.BatchRecords (
ItemName,
Supplier,
Quantity,
ItemUnit,
EntityUnit,
ItemSize,
PackageSize,
FamilyCode,
Family,
CategoryCode,
Category,
SubCategoryCode,
SubCategory,
ItemGroupCode,
ItemGroup,
PurchaseValue,
UnitPurchaseValue,
PackagePurchaseValue,
DataBatchId,
FacilityInstanceId,
CurrencyId
)
OUTPUT INSERTED.Id
SELECT
ItemName,
Supplier,
Quantity,
ItemUnit,
EntityUnit,
ItemSize,
PackageSize,
FamilyCode,
Family,
CategoryCode,
Category,
SubCategoryCode,
SubCategory,
ItemGroupCode,
ItemGroup,
PurchaseValue,
UnitPurchaseValue,
PackagePurchaseValue,
@BatchId,
--FacilityInstanceId,
--CurrencyId
(SELECT TOP 1 f.Id from dbo.FacilityInstances f WHERE f.FacilityCode=FacilityCode),
(SELECT TOP 1 c.Id from dbo.Currencies c WHERE c.CurrencyCode=CurrencyCode)
FROM @ImportTable;
```
And finally my quick, test only solution to execute this stuff on .NET side.
```
public class BatchRecordDataHandler : IBulkDataHandler
{
    public async Task<int> ImportAsync(SqlConnection conn, SqlTransaction transaction, IEnumerable<BatchRecordImportItem> src)
{
using (var cmd = new SqlCommand())
{
cmd.CommandText = "ImportBatchRecords";
cmd.Connection = conn;
cmd.Transaction = transaction;
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandTimeout = 600;
var batchIdParam = new SqlParameter
{
ParameterName = "@BatchId",
SqlDbType = SqlDbType.Int,
Value = 1
};
var tableParam = new SqlParameter
{
ParameterName = "@ImportTable",
TypeName = "dbo.RecordImportStructure",
SqlDbType = SqlDbType.Structured,
Value = DataToSqlRecords(src)
};
cmd.Parameters.Add(batchIdParam);
cmd.Parameters.Add(tableParam);
cmd.Transaction = transaction;
using (var res = await cmd.ExecuteReaderAsync())
{
var resultTable = new DataTable();
resultTable.Load(res);
var cnt = resultTable.AsEnumerable().Count();
return cnt;
}
}
}
    private IEnumerable<SqlDataRecord> DataToSqlRecords(IEnumerable<BatchRecordImportItem> src)
{
var tvpSchema = new[] {
new SqlMetaData("ItemName", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("Supplier", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("Quantity", SqlDbType.Decimal),
new SqlMetaData("ItemUnit", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("EntityUnit", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("ItemSize", SqlDbType.Decimal),
new SqlMetaData("PackageSize", SqlDbType.Decimal),
new SqlMetaData("FamilyCode", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("Family", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("CategoryCode", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("Category", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("SubCategoryCode", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("SubCategory", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("ItemGroupCode", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("ItemGroup", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("PurchaseValue", SqlDbType.Decimal),
new SqlMetaData("UnitPurchaseValue", SqlDbType.Decimal),
new SqlMetaData("PackagePurchaseValue", SqlDbType.Decimal),
new SqlMetaData("FacilityInstanceId", SqlDbType.VarChar, SqlMetaData.Max),
new SqlMetaData("CurrencyId", SqlDbType.VarChar, SqlMetaData.Max),
};
var dataRecord = new SqlDataRecord(tvpSchema);
foreach (var importItem in src)
{
dataRecord.SetValues(importItem.ItemName,
importItem.Supplier,
importItem.Quantity,
importItem.ItemUnit,
importItem.EntityUnit,
importItem.ItemSize,
importItem.PackageSize,
importItem.FamilyCode,
importItem.Family,
importItem.CategoryCode,
importItem.Category,
importItem.SubCategoryCode,
importItem.SubCategory,
importItem.ItemGroupCode,
importItem.ItemGroup,
importItem.PurchaseValue,
importItem.UnitPurchaseValue,
importItem.PackagePurchaseValue,
importItem.FacilityCode,
importItem.CurrencyCode);
yield return dataRecord;
}
}
}
```
Import entity structure:
```
public class BatchRecordImportItem
{
public string ItemName { get; set; }
public string Supplier { get; set; }
public decimal Quantity { get; set; }
public string ItemUnit { get; set; }
public string EntityUnit { get; set; }
public decimal ItemSize { get; set; }
public decimal PackageSize { get; set; }
public string FamilyCode { get; set; }
public string Family { get; set; }
public string CategoryCode { get; set; }
public string Category { get; set; }
public string SubCategoryCode { get; set; }
public string SubCategory { get; set; }
public string ItemGroupCode { get; set; }
public string ItemGroup { get; set; }
public decimal PurchaseValue { get; set; }
public decimal UnitPurchaseValue { get; set; }
public decimal PackagePurchaseValue { get; set; }
public int DataBatchId { get; set; }
public string FacilityCode { get; set; }
public string CurrencyCode { get; set; }
}
```
Please don't mind the useless reader at the end; it doesn't really do much. Without the reader, inserting 2.5kk rows took around 26 minutes, while `SqlBulkCopy` took around 6 minutes. Is there something I'm doing fundamentally wrong? I'm using `IsolationLevel.Snapshot`, if this matters. Using SQL Server 2014; I'm free to change the DB structure and indices.
**UPD 1**
---
Done a couple of adjustments/improvement attempts described by @Xedni, specifically:
1. Limited all string fields that didn't have a max length to some fixed length
2. Changed all TVP members from `VARCHAR(MAX)` to `VARCHAR(*SomeValue*)`
3. Added a unique index to FacilityInstance->FacilityCode
4. Added a unique index to Currency->CurrencyCode
5. Tried adding WITH RECOMPILE to my SP
6. Tried using `DataTable` instead of `IEnumerable`
7. Tried batching data into smaller buckets, 50k and 100k per SP execution instead of 2.5kk
My structure is now like this:
```
CREATE TYPE dbo.RecordImportStructure
AS TABLE (
ItemName VARCHAR(4096),
Supplier VARCHAR(450),
Quantity DECIMAL(18, 2),
ItemUnit VARCHAR(2048),
EntityUnit VARCHAR(2048),
ItemSize DECIMAL(18, 2),
PackageSize DECIMAL(18, 2),
FamilyCode VARCHAR(16),
Family VARCHAR(512),
CategoryCode VARCHAR(16),
Category VARCHAR(512),
SubCategoryCode VARCHAR(16),
SubCategory VARCHAR(512),
ItemGroupCode VARCHAR(16),
ItemGroup VARCHAR(512),
PurchaseValue DECIMAL(18, 2),
UnitPurchaseValue DECIMAL(18, 2),
PackagePurchaseValue DECIMAL(18, 2),
FacilityCode VARCHAR(450),
CurrencyCode VARCHAR(4)
);
```
---
So far no noticeable performance gains unfortunately, 26-28 min as before
---
**UPD 2**
Checked the execution plan - indices are my bane? [execution plan](https://i.stack.imgur.com/l3AOV.png)
---
**UPD 3**
Added `OPTION (RECOMPILE);` at the end of my SP; gained a minor boost, now sitting at ~25 min for 2.5kk.<issue_comment>username_1: I would guess your proc could use some love. Without seeing an execution plan it's hard to say for sure, but here are some thoughts.
A table variable (which a table-valued parameter essentially is) is always assumed by SQL Server to contain exactly 1 row (even if it doesn't). This is irrelevant in many cases, but you have two correlated subqueries in your insert list, which is where I'd focus my attention. It's more than likely hammering that poor table variable with a bunch of nested loop joins because of the cardinality estimate. I would consider putting the rows from your TVP into a temp table, updating the temp table with the IDs from `FacilityInstances` and `Currencies`, then doing your final insert from that.
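A minimal sketch of that temp-table variant inside the procedure (hypothetical; only a handful of the question's columns are shown, the remaining ones carry over unchanged):

```
SELECT it.*,
       CAST(NULL AS int) AS FacilityInstanceId,
       CAST(NULL AS int) AS CurrencyId
INTO #Import
FROM @ImportTable AS it;

UPDATE i SET FacilityInstanceId = f.Id
FROM #Import AS i
JOIN dbo.FacilityInstances AS f ON f.FacilityCode = i.FacilityCode;

UPDATE i SET CurrencyId = c.Id
FROM #Import AS i
JOIN dbo.Currencies AS c ON c.CurrencyCode = i.CurrencyCode;

INSERT INTO dbo.BatchRecords (ItemName, Supplier, Quantity, DataBatchId, FacilityInstanceId, CurrencyId)
OUTPUT INSERTED.Id
SELECT ItemName, Supplier, Quantity, @BatchId, FacilityInstanceId, CurrencyId
FROM #Import;
```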
Upvotes: 2 <issue_comment>username_2: Well... why not just use SQL Bulk Copy?
There are plenty of solutions out there that help you convert a collection of entities into an `IDataReader` object that can be handed directly to SqlBulkCopy.
This is a good start...
<https://github.com/matthewschrager/Repository/blob/master/Repository.EntityFramework/EntityDataReader.cs>
Then it becomes as simple as...
```
SqlBulkCopy bulkCopy = new SqlBulkCopy(connection);
IDataReader dataReader = storeEntities.AsDataReader();
bulkCopy.WriteToServer(dataReader);
```
I've used this code, the one caveat is that you need to be quite careful about the definition of your entity. The order of the properties in the entity determines the order of the columns exposed by the IDataReader and this needs to correlate with the order of the columns in the table that you are bulk copying to.
Alternatively there's other code here..
<https://www.codeproject.com/Tips/1114089/Entity-Framework-Performance-Tuning-Using-SqlBulkC>
Upvotes: 1 <issue_comment>username_3: You could set [traceflag 2453](https://support.microsoft.com/en-us/help/2952444/fix-poor-performance-when-you-use-table-variables-in-sql-server-2012-o):
>
> FIX: Poor performance when you use table variables in SQL Server 2012 or SQL Server 2014
>
>
> When you use a table variable in a batch or procedure, the query is compiled and optimized for the initial empty state of table variable. If this table variable is populated with many rows at runtime, the pre-compiled query plan may no longer be optimal. For example, the query may be joining a table variable with nested loop since it is usually more efficient for small number of rows. This query plan can be inefficient if the table variable has millions of rows. A hash join may be a better choice under such condition. To get a new query plan, it needs to be recompiled. Unlike other user or temporary tables, however, row count change in a table variable does not trigger a query recompile. Typically, you can work around this with OPTION (RECOMPILE), which has its own overhead cost.
> The trace flag 2453 allows the benefit of query recompile without OPTION (RECOMPILE). This trace flag differs from OPTION (RECOMPILE) in two main aspects.
> (1) It uses the same row count threshold as other tables. The query does not need to be compiled for every execution unlike OPTION (RECOMPILE). It would trigger recompile only when the row count change exceeds the predefined threshold.
> (2) OPTION (RECOMPILE) forces the query to peek parameters and optimize the query for them. This trace flag does not force parameter peeking.
>
>
> **You can turn on trace flag 2453 to allow a table variable to trigger recompile when enough number of rows are changed. This may allow the query optimizer to choose a more efficient plan**
>
>
>
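If you want to experiment with it, the flag can be enabled per session or server-wide; a sketch (the global form requires sysadmin rights and affects the whole instance):

```
DBCC TRACEON (2453);        -- current session only
DBCC TRACEON (2453, -1);    -- server-wide
DBCC TRACESTATUS (2453);    -- check whether it is active
```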
Upvotes: 2 <issue_comment>username_4: Try with the following stored procedure:
```
CREATE PROCEDURE dbo.ImportBatchRecords (
@BatchId INT,
@ImportTable dbo.RecordImportStructure READONLY
)
AS
SET NOCOUNT ON;
DECLARE @ErrorCode int
DECLARE @Step varchar(200)
CREATE TABLE #FacilityInstances
(
Id int NOT NULL,
FacilityCode varchar(512) NOT NULL UNIQUE WITH (IGNORE_DUP_KEY=ON)
);
CREATE TABLE #Currencies
(
Id int NOT NULL,
CurrencyCode varchar(512) NOT NULL UNIQUE WITH (IGNORE_DUP_KEY = ON)
)
INSERT INTO #FacilityInstances(Id, FacilityCode)
SELECT Id, FacilityCode FROM dbo.FacilityInstances
WHERE FacilityCode IS NOT NULL AND Id IS NOT NULL;
INSERT INTO #Currencies(Id, CurrencyCode)
SELECT Id, CurrencyCode FROM dbo.Currencies
WHERE CurrencyCode IS NOT NULL AND Id IS NOT NULL
INSERT INTO dbo.BatchRecords (
ItemName,
Supplier,
Quantity,
ItemUnit,
EntityUnit,
ItemSize,
PackageSize,
FamilyCode,
Family,
CategoryCode,
Category,
SubCategoryCode,
SubCategory,
ItemGroupCode,
ItemGroup,
PurchaseValue,
UnitPurchaseValue,
PackagePurchaseValue,
DataBatchId,
FacilityInstanceId,
CurrencyId
)
OUTPUT INSERTED.Id
SELECT
ItemName,
Supplier,
Quantity,
ItemUnit,
EntityUnit,
ItemSize,
PackageSize,
FamilyCode,
Family,
CategoryCode,
Category,
SubCategoryCode,
SubCategory,
ItemGroupCode,
ItemGroup,
PurchaseValue,
UnitPurchaseValue,
PackagePurchaseValue,
@BatchId,
F.Id,
C.Id
FROM
#FacilityInstances F RIGHT OUTER HASH JOIN
(
#Currencies C
RIGHT OUTER HASH JOIN @ImportTable IT
ON C.CurrencyCode = IT.CurrencyCode
)
ON F.FacilityCode = IT.FacilityCode
```
This forces the execution plan to use hash match joins instead of nested loops. I think the culprit of the bad performance is the first nested loop, which performs an index scan for each row in `@ImportTable`.
I don't know if `CurrencyCode` is unique in the `Currencies` table, so I create the temporary table #Currencies with unique currency codes.
I don't know if `FacilityCode` is unique in the `Facilities` table, so I create the temporary table #FacilityInstances with unique facility codes.
If they are unique you don't need the temporary tables; you can use the permanent tables directly.
Assuming CurrencyCode and FacilityCode are unique the following stored procedure would be better because it doesn't create unnecessary temporary tables:
```
CREATE PROCEDURE dbo.ImportBatchRecords (
@BatchId INT,
@ImportTable dbo.RecordImportStructure READONLY
)
AS
SET NOCOUNT ON;
DECLARE @ErrorCode int
DECLARE @Step varchar(200)
INSERT INTO dbo.BatchRecords (
ItemName,
Supplier,
Quantity,
ItemUnit,
EntityUnit,
ItemSize,
PackageSize,
FamilyCode,
Family,
CategoryCode,
Category,
SubCategoryCode,
SubCategory,
ItemGroupCode,
ItemGroup,
PurchaseValue,
UnitPurchaseValue,
PackagePurchaseValue,
DataBatchId,
FacilityInstanceId,
CurrencyId
)
OUTPUT INSERTED.Id
SELECT
ItemName,
Supplier,
Quantity,
ItemUnit,
EntityUnit,
ItemSize,
PackageSize,
FamilyCode,
Family,
CategoryCode,
Category,
SubCategoryCode,
SubCategory,
ItemGroupCode,
ItemGroup,
PurchaseValue,
UnitPurchaseValue,
PackagePurchaseValue,
@BatchId,
F.Id,
C.Id
FROM
dbo.FacilityInstances F RIGHT OUTER HASH JOIN
(
dbo.Currencies C
RIGHT OUTER HASH JOIN @ImportTable IT
ON C.CurrencyCode = IT.CurrencyCode
)
ON F.FacilityCode = IT.FacilityCode
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: I know there is an accepted answer, but I can't resist. I believe you can improve the performance 20-50% over the accepted answer.
The key is to `SqlBulkCopy` to the final table `dbo.BatchRecords` directly.
To make this happen you need `FacilityInstanceId` and `CurrencyId` before you `SqlBulkCopy`. To get them, load `SELECT Id, FacilityCode FROM FacilityInstances` and `SELECT Id, CurrencyCode FROM Currencies` into collections, then build dictionaries:
```
var facilityIdByFacilityCode = facilitiesCollection.ToDictionary(x => x.FacilityCode, x => x.Id);
var currencyIdByCurrencyCode = currenciesCollection.ToDictionary(x => x.CurrencyCode, x => x.Id);
```
Once you have the dictionaries, getting the id for a code is a constant-time operation. This is equivalent and very similar to a `HASH MATCH JOIN` in SQL Server, but at the client side.
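For each import row, the lookup then reduces to plain dictionary indexing; a tiny assumed-usage sketch, where `importItem` and `batchRecord` are illustrative names for the source row and the entity being built:

```
batchRecord.FacilityInstanceId = facilityIdByFacilityCode[importItem.FacilityCode];
batchRecord.CurrencyId = currencyIdByCurrencyCode[importItem.CurrencyCode];
```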
The other barrier you need to tear down is getting the `Id` column values of the newly inserted rows in the `dbo.BatchRecords` table. Actually, you can get the `Id`s before inserting them.
Make the `Id` column "sequence driven":
```
CREATE SEQUENCE BatchRecords_Id_Seq START WITH 1;
CREATE TABLE BatchRecords
(
Id int NOT NULL CONSTRAINT DF_BatchRecords_Id DEFAULT (NEXT VALUE FOR BatchRecords_Id_Seq),
.....
CONSTRAINT PK_BatchRecords PRIMARY KEY (Id)
)
```
Once you have the `BatchRecords` collection, you know how many records are in it. You can then reserve a contiguous range of sequence values. Execute the following T-SQL:
```
DECLARE @BatchCollectionCount int = 2500 -- Replace with the actual value
DECLARE @range_first_value sql_variant
DECLARE @range_last_value sql_variant
EXEC sp_sequence_get_range
@sequence_name = N'BatchRecords_Id_Seq',
@range_size = @BatchCollectionCount,
@range_first_value = @range_first_value OUTPUT,
@range_last_value = @range_last_value OUTPUT
SELECT
CAST(@range_first_value AS INT) AS range_first_value,
CAST(@range_last_value AS int) as range_last_value
```
This returns `range_first_value` and `range_last_value`. You can now assign `BatchRecord.Id` to each record:
```
int id = range_first_value;
foreach (var record in batchRecords)
{
record.Id = id++;
}
```
Next, you can `SqlBulkCopy` the batch record collection directly into the final table `dbo.BatchRecords`.
To get a `DataReader` from an `IEnumerable` to feed [`SqlBulkCopy.WriteToServer`](https://msdn.microsoft.com/en-us/library/434atets(v=vs.110).aspx) you can use code like [this](https://github.com/jesuslpm/EntityLite/blob/master/inercya.EntityLite/Collections/CollectionDataReader.cs) which is part of `EntityLite`, a micro ORM I developed.
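A hedged sketch of that final bulk-copy step; the names and the `batchRecordsReader` adapter are illustrative, standing in for whichever `IEnumerable`-to-`IDataReader` bridge you pick:

```
using (var bulkCopy = new SqlBulkCopy(connection, SqlBulkCopyOptions.Default, transaction))
{
    bulkCopy.DestinationTableName = "dbo.BatchRecords";
    bulkCopy.BatchSize = 10000;

    // the pre-assigned sequence values and looked-up foreign keys travel with the rows
    bulkCopy.ColumnMappings.Add("Id", "Id");
    bulkCopy.ColumnMappings.Add("FacilityInstanceId", "FacilityInstanceId");
    bulkCopy.ColumnMappings.Add("CurrencyId", "CurrencyId");
    // ...one mapping per remaining column...

    bulkCopy.WriteToServer(batchRecordsReader);
}
```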
You can make it even faster if you cache `facilityIdByFacilityCode` and `currencyIdByCurrencyCode`. To be sure these dictionaries are up to date you can use [`SqlDependency`](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/detecting-changes-with-sqldependency)or techniques like [this one](https://stackoverflow.com/questions/48926435/is-there-an-equivalent-of-rowversion-in-sql-server-but-for-a-table/48928776#48928776).
Upvotes: 0 |
2018/03/20 | 908 | 3,553 <issue_start>username_0: Hello, I am new to C programming and currently working on a simple guessing game (maybe not so simple for me). Every time I build and run the code, on the first attempt the command interface asks to press any key to continue and then closes. I have to start from the beginning, and it doesn't let me go any further unless I run the code again, even though it is meant to allow another guess until the game is over or until I guess right. I do not know what is wrong with my code. Please help; the code is attached below. Thank you.
```
while(match ==0 && chance_count <= MAX_CHANCES){
printf("Enter your number: ");
scanf("%match", &user_guess);
if(user_guess > computer_guess)
{
printf("your guess is high\n");
printf("your remaining chances are %d\n", MAX_CHANCES - chance_count);
printf("number of steps taken is %d\n", chance_count);
}
else if (user_guess < computer_guess){
printf("your guess is too low\n");
printf("your remaining chances are %d\n",MAX_CHANCES - chance_count);
printf("number of steps taken is %d\n", chance_count);
}
else{
printf("BINGO! you have won\n");
printf("it took you %d steps for the number\n", MAX_CHANCES - (MAX_CHANCES - chance_count));
}
break;
printf("\n\n");
chance_count++;
} // end of while loop
if(chance_count > MAX_CHANCES){
printf("GAME OVER");
}
return 0;
}
```<issue_comment>username_1: Instead of your approach you can use an infinite loop `while(1)` and break out of it when certain criteria are met. This way your code will be more readable, efficient and scalable.
So there is no need for the flag variable 'match'; in the else branch, add a `break;` statement at the end, executed when the user guesses the right number.
Also use an if statement to check whether the user still has chances left to guess, or has exhausted them.
Last but not least, you have to replace `scanf("%match", &user_guess);` with `scanf("%d", &user_guess);`, because you need to read an int from the user and store it in the 'user_guess' variable.
```
while(1)
{
printf("Enter your number: ");
scanf("%d", &user_guess);
if(user_guess > computer_guess)
{
printf("your guess is high\n");
printf("your remaining chances are %d\n", MAX_CHANCES - chance_count);
printf("number of steps taken is %d\n", chance_count);
}
else if (user_guess < computer_guess)
{
printf("your guess is too low\n");
printf("your remaining chances are %d\n",MAX_CHANCES - chance_count);
printf("number of steps taken is %d\n", chance_count);
}
else
{
printf("BINGO! you have won\n");
printf("it took you %d steps for the number\n", chance_count);
break;
}
if(chance_count > MAX_CHANCES)
{
printf("GAME OVER");
break;
}
chance_count++;
} // end of while loop
```
Upvotes: -1 <issue_comment>username_2: ```
scanf("%match", &user_guess);
```
Wrong. There's no such placeholder in a `scanf` format string. Read this carefully: <https://en.wikipedia.org/wiki/Scanf_format_string>
Your unconditional `break;` belongs to `while(match == 0 && chance_count <= MAX_CHANCES)`.
So the loop body runs only once, even while the condition is still true.
Upvotes: 0 |
2018/03/20 | 756 | 2,971 <issue_start>username_0: I'm building a translator that saves the translation in a dictionary where the first string is an identifier and the second string is the translated string.
It seems to me that the dictionary syntax is not very readable so I'm thinking about wrapping my dictionary like
```
class Translation : Dictionary<string, string> {}
```
and then also the keyvaluepair like
```
class SingleTranslation : KeyValuePair<string, string>
```
But the KeyValuePair class is sealed (cannot be inherited). Does anyone have any suggestions on how I can make my dictionary more readable?
My biggest worry is when I have to iterate over the dictionary with
```
foreach(KeyValuePair<string, string> kvp in _translation)
{
string whatever = kvp.Value;
do stuff...
if(kvp.key)
do stuff..
}
```
I could of course create a string in the foreach that is called Identifier and set it equal to kvp.key. But I would prefer something like
```
foreach(SingleTranslation singleTranslation in _translation)
{
singleTranslation.Identifier ... do stuff...
}
```
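For what it's worth, a hedged sketch of one way to get exactly that loop shape without inheriting from the sealed type is to wrap the pair instead; the names `Text` and `Entries` are illustrative assumptions, not part of any framework API:

```
using System.Collections.Generic;
using System.Linq;

public class SingleTranslation
{
    private readonly KeyValuePair<string, string> _pair;

    public SingleTranslation(KeyValuePair<string, string> pair) { _pair = pair; }

    public string Identifier { get { return _pair.Key; } }
    public string Text       { get { return _pair.Value; } }
}

public class Translation : Dictionary<string, string>
{
    // exposes the entries under the readable wrapper type
    public IEnumerable<SingleTranslation> Entries
    {
        get { return this.Select(kvp => new SingleTranslation(kvp)); }
    }
}

// usage:
// foreach (SingleTranslation singleTranslation in _translation.Entries)
// {
//     ... singleTranslation.Identifier ...
// }
```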
2018/03/20 | 1,095 | 3,463 | <issue_start>username_0: I have a character vector that I need to clean. Specifically, I want to remove the number that comes before the word "Votes." Note that the number has a comma to separate thousands, so it's easier to treat it as a string.
I know that `gsub("*. Votes", "", text)` will remove everything, but how do I just remove the number? Also, how do I collapse the repeated spaces into just one space?
Thanks for any help you might have!
Example data:
```
text <- "STATE QUESTION NO. 1 Amendment to Title 15 of the Nevada Revised Statutes Shall Chapter 202 of the Nevada Revised Statutes be amended to prohibit, except in certain circumstances, a person from selling or transferring a firearm to another person unless a federally-licensed dealer first conducts a federal background check on the potential buyer or transferee? 558,586 Votes"
```<issue_comment>username_1: You may use
```
text <- "STATE QUESTION NO. 1 Amendment to Title 15 of the Nevada Revised Statutes Shall Chapter 202 of the Nevada Revised Statutes be amended to prohibit, except in certain circumstances, a person from selling or transferring a firearm to another person unless a federally-licensed dealer first conducts a federal background check on the potential buyer or transferee? 558,586 Votes"
trimws(gsub("(\\s){2,}|\\d[0-9,]*\\s*(Votes)", "\\1\\2", text))
# => [1] "STATE QUESTION NO. 1 Amendment to Title 15 of the Nevada Revised Statutes Shall Chapter 202 of the Nevada Revised Statutes be amended to prohibit, except in certain circumstances, a person from selling or transferring a firearm to another person unless a federally-licensed dealer first conducts a federal background check on the potential buyer or transferee? Votes"
```
See the [online R demo](http://rextester.com/BIYZ80757) and the [**online regex demo**](https://regex101.com/r/Dv5KRn/1).
**Details**
* `(\\s){2,}` - matches 2 or more whitespace chars while capturing the last occurrence that will be reinserted using the `\1` placeholder in the replacement pattern
* `|` - or
* `\\d` - a digit
* `[0-9,]*` - 0 or more digits or commas
* `\\s*` - 0+ whitespace chars
* `(Votes)` - Group 2 (will be restored in the output using the `\2` placeholder): a `Votes` substring.
Note that `trimws` will remove any leading/trailing whitespace.
Upvotes: 2 [selected_answer]<issue_comment>username_2: **Easiest** way is with `stringr`:
```
> library(stringr)
> regexp <- "-?[[:digit:]]+\\.*,*[[:digit:]]*\\.*,*[[:digit:]]* Votes+"
> str_extract(text,regexp)
[1] "558,586 Votes"
```
To do the same thing but extract only the number, wrap it in `gsub`:
```
> gsub('\\s+[[:alpha:]]+', '', str_extract(text,regexp))
[1] "558,586"
```
Here's a version that will strip out all numbers before the word "Votes" even if they have commas or periods in it:
```
> gsub('\\s+[[:alpha:]]+', '', unlist(regmatches (text,gregexpr("-?[[:digit:]]+\\.*,*[[:digit:]]*\\.*,*[[:digit:]]* Votes+",text) )) )
[1] "558,586"
```
If you want the label too, then just throw out the `gsub` part:
```
> unlist(regmatches (text,gregexpr("-?[[:digit:]]+\\.*,*[[:digit:]]*\\.*,*[[:digit:]]* Votes+",text) ))
[1] "558,586 Votes"
```
And if you want to pull out all the numbers:
```
> unlist(regmatches (text,gregexpr("-?[[:digit:]]+\\.*,*[[:digit:]]*\\.*,*[[:digit:]]*",text) ))
[1] "1" "15" "202" "558,586"
```
Upvotes: 0 |
2018/03/20 | 349 | 1,267 <issue_start>username_0: [documentation screenshot](https://i.stack.imgur.com/onLtn.png)
I'm trying to study the Map object by reading the documentation. However, it confuses me right at the beginning. I wonder what is meant by "Return undefined." here?<issue_comment>username_1: This just means that the effect of `forEach` is entirely dependent on what you do inside the function `f`; you don't get an overall return value from the `forEach` operation. If you did this:
```
let a = arr.forEach(myFunc);
```
...then `a` would be undefined.
Upvotes: 0 <issue_comment>username_2: >
> I wonder what is meant by "Return undefined." here?
>
>
>
It means the function `forEach` will execute the provided function `f` and nothing else; basically, `forEach` doesn't return anything because its job is to loop over the provided entries, calling the callback function `f` on every entry within the map.
If the callback returns a value, this will be ignored.
```js
var undefinedValue = new Map([
['foo', 3],
['bar', {}],
['baz', 2]
]).forEach(() => {
console.log('Looping...');
return "HELLO WORLD!"; // This will be ignored!
});
console.log("Value returned from forEach:", undefinedValue)
```
Upvotes: 1 |
2018/03/20 | 643 | 2,333 | <issue_start>username_0: I'm trying to work with a DynamoDB table, and have successfully connected to it, but am unable to read it. When I try, I get the following error:
>
> Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '\_datePublished is not a property of ProjectName.Article.'
>
>
>
Here's my Article class/model:
```
class Article: AWSDynamoDBObjectModel, AWSDynamoDBModeling {
var _articleSource: String?
var _articleUrl: String?
var _datePublished: String?
var _headline: String?
var _imageURL: String?
var _rating: String?
var _listingId: String?
class func dynamoDBTableName() -> String {
return "tableName"
}
class func hashKeyAttribute() -> String {
return "_listingId"
}
class func rangeKeyAttribute() -> String {
return "_articleUrl"
}
override class func jsonKeyPathsByPropertyKey() -> [AnyHashable: Any] {
return [
"_articleSource" : "articleSource",
"_articleUrl" : "articleUrl",
"_datePublished" : "datePublished",
"_headline" : "headline",
"_imageURL" : "imageURL",
"_rating" : "rating",
"_listingId" : "listingId",
]
}
}
```
And the function I'm calling in `ViewDidLoad` to read an article from the DB:
```
func readArticle() {
let dynamoDbObjectMapper = AWSDynamoDBObjectMapper.default()
// Create data object
let article: Article = Article();
dynamoDbObjectMapper.load(
Article.self,
hashKey: "2018-03-17T08:50:30+00:00",
rangeKey: "https://www.example.com",
completionHandler: {
(objectModel: AWSDynamoDBObjectModel?, error: Error?) -> Void in
if let error = error {
print("Amazon DynamoDB Read Error: \(error)")
return
}
print("Article:\n \(article)")
})
}
```
What am I doing wrong? datePublished exists in that entry on the DB, and is defined in the model<issue_comment>username_1: Check to see if datePublished is a String or Date type in the database. If the types do not match, then you will have an error.
Upvotes: 1 <issue_comment>username_2: It turns out the AWS SDK isn't exactly Swift 4 compatible yet. In Swift 4, you have to put @objcMembers before the object model class definition, like:
```
@objcMembers class Article: AWSDynamoDBObjectModel, AWSDynamoDBModeling {
}
```
Upvotes: 1 [selected_answer] |
2018/03/20 | 1,249 | 3,719 | <issue_start>username_0: I'm having trouble getting usort to work and not sure what I'm missing. Below is an example of my array. I want to sort the array based on the value of the sort key.
```
Array
(
[0] => Array
(
[sort] => 1520546956
[row] => Data lives here
)
[1] => Array
(
[sort] => 1521047928
[row] => Data lives here
)
[2] => Array
(
[sort] => 1520525366
[row] => Data lives here
)
[3] => Array
(
[sort] => 1520525227
[row] => Data lives here
)
```
My code to try and sort this is:
```
foreach ($resultsArray as $record)
{
usort($record['sort'], function($a, $b)
{
if ($a == $b)
{
return 0;
}
return ($a < $b) ? -1 : 1;
});
}
```
However, my code seems to be ineffective, as the order of the array isn't changing. I feel like I'm close but can't identify what I'm missing. Thank you for any help!<issue_comment>username_1: Modified my code to reflect the suggestion below:
```
usort($resultsArray, function($a, $b) { /* compare $a['sort'] and $b['sort'] */ });
```
Working perfectly.
Upvotes: 0 <issue_comment>username_2: `$record['sort']` would have to be an array for that to work. Yet still, it does nothing.
I'm pretty sure you want to do this:
```
<?php
$multiArray = array(
array('sort'=>1520546956, 'row'=>'row 0 data'),
array('sort'=>1521047928, 'row'=>'row 1 data'),
array('sort'=>1520525366, 'row'=>'row 2 data'),
array('sort'=>1520525227, 'row'=>'row 3 data'));
foreach($multiArray as $a){
$numArray[] = $a['sort'];
}
asort($numArray, SORT_NUMERIC);
foreach($numArray as $k => $v){
$resArray[] = $multiArray[$k];
}
print_r($resArray);
?>
```
Upvotes: 0 <issue_comment>username_3: A different approach to accomplish the same functionality is to use [`array_multisort`](http://php.net/manual/en/function.array-multisort.php) with the desired combination of sorting flags.
Dataset:
```
$resultsArray = array(
array('sort'=>1520546956, 'row'=>'row 0 data'),
array('sort'=>1521047928, 'row'=>'row 1 data'),
array('sort'=>1520525366, 'row'=>'row 2 data'),
array('sort'=>1520525227, 'row'=>'row 3 data')
);
```
`array_multisort` Example:
```
$sortValues = array_column($resultsArray, 'sort');
array_multisort($sortValues, SORT_ASC, $resultsArray);
print_r($resultsArray);
```
Results: <https://3v4l.org/NpVIc>
```
Array
(
[0] => Array
(
[sort] => 1520525227
[row] => row 3 data
)
[1] => Array
(
[sort] => 1520525366
[row] => row 2 data
)
[2] => Array
(
[sort] => 1520546956
[row] => row 0 data
)
[3] => Array
(
[sort] => 1521047928
[row] => row 1 data
)
)
```
---
Alternatively you can still use [`usort`](http://php.net/manual/en/function.usort.php), but in your function, you need to retrieve the associated array key named `sort` in order to compare the values.
usort Example:
```
usort($resultsArray, function($a, $b) {
if ($a['sort'] == $b['sort']) {
return 0;
}
return ($a['sort'] < $b['sort'] ? -1 : 1);
});
print_r($resultsArray);
```
Results: <https://3v4l.org/5nfbc>
Upvotes: 1 <issue_comment>username_4: It is because php foreach construct works on the copy of the array provided(ie: $resultsArray), where php usort() function references or points to the same array. That is why your code is not working as expected.
If you don't understand this concept I suggest you a good online course by <NAME> (php essential training) in Lynda.com
Upvotes: 0 |
2018/03/20 | 1,688 | 3,949 <issue_start>username_0: I am writing a small macro to do a simple task. I have made some progress so far; however, I am stuck trying to write an If statement that checks whether the contents of a cell equal a string in an array and, if they do, performs the next statement. Here is my code so far:
```
Public Sub Saturdays()
Dim i As Integer
Dim j As Integer
Dim Sat As Variant
Sat = Array("1/6/2018", "1/13/2018", "1/20/2018", "1/27/2018", "2/3/2018", "2/10/2018", "2/17/2018", "2/24/2018", "3/3/2018", "3/10/2018", "3/17/2018", "3/24/2018", "3/31/2018", "4/7/2018", "4/14/2018", "4/21/2018", "4/28/2018", "5/5/2018", "5/12/2018", "5/19/2018", "5/26/2018", "6/2/2018", "6/9/2018", "6/16/2018", "6/23/2018", "6/30/2018", "7/7/2018", "7/14/2018", "7/21/2018", "7/28/2018", "8/4/2018", "8/11/2018", "8/18/2018", "8/25/2018", "9/1/2018", "9/8/2018", "9/15/2018", "9/22/2018", "9/29/2018", "10/6/2018", "10/13/2018", "10/20/2018", "10/27/2018", "11/3/2018", "11/10/2018", "11/17/2018", "11/24/2018", "12/1/2018", "12/8/2018", "12/15/2018", "12/22/2018", "12/29/2018")
For i = 5 To 100
If Sheet1.Cells(i, 1) <> "" Then
For j = 4 To 730
If Sheet1.Cells(4, j) = Sat Then
Sheet1.Cells(i, j) = 0
End If
Next j
End If
Next i
End Sub
```
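For reference, a minimal sketch of one way to write that membership test, assuming the header cells hold the dates as text in exactly the form used in the array; `Application.Match` returns an error value rather than raising when there is no match:

```
If Not IsError(Application.Match(Sheet1.Cells(4, j).Text, Sat, 0)) Then
    Sheet1.Cells(i, j) = 0
End If
```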
2018/03/20 | 1,385 | 4,530 | <issue_start>username_0: In an AngularJS record display (with filters) I have a multiselect array of territories that a user can select from to find out if a certain item is available in a certain territory.
The array returns a list of values such as
```
['001','010','200']
```
based on the ID of the territories selected. This is then checked against a JSON list of records, which has a JSON value that looks like this:
```
territoriesnotavailable: "001, 085, 090"
```
Each record either has this set to null, or has a list from one to many numbers.
I currently use the following code (customFilter), which works perfectly if you only select ONE value: it basically filters the item out if the territory selected in the multiselect appears in the list of territoriesnotavailable.
```
function CustomTerritoryFilter() {
return function(data, query) {
if (query.length === 0) return data;
if (data) return data.filter(function(item) {
for (var i = 0; i < query.length; i++) {
var queryitem = query[i]["id"];
if(item.territoriesnotavailable) {
stringB = item.territoriesnotavailable;
} else {
stringB = 'xxxxxxxx';
}
stringA = queryitem;
if (!(stringB.indexOf( stringA ) > -1)) {
return data;
}
}
});
return [];
};
}
```
So if I choose only one filter (resulting in a query of ['010'], for example) and this appears in territoriesnotavailable for the record, it vanishes as expected. But if I also choose any value that is NOT in territoriesnotavailable, the item appears again. I need the record to vanish if ANY selected territory appears in the list, regardless of any that do not.
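A hedged sketch of the all-selected-territories check the question describes, using the same shapes as above; `Array.prototype.every` keeps a record only when none of the selected ids appear in its blocked list:

```
function CustomTerritoryFilter() {
    return function(data, query) {
        if (!data || query.length === 0) return data;
        return data.filter(function(item) {
            var blocked = item.territoriesnotavailable || '';
            // keep the record only if NO selected territory id is in the blocked list
            return query.every(function(q) {
                return blocked.indexOf(q.id) === -1;
            });
        });
    };
}
```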
2018/03/20 | 197 | 743 | <issue_start>username_0: Partial indexes only include a subset of the rows of a table.
I've been able to create partial indexes in Oracle, DB2, PostgreSQL, and SQL Server. For example, in SQL Server I can create the index as:
```
create index ix1_case on client_case (date)
where status = 'pending';
```
This index is cheap since it does not include all 5 million rows of the table, but only the pending cases, that should not exceed a thousand rows.
How do I do it in MySQL?<issue_comment>username_1: Queries that need the index you suggest may benefit from
```
INDEX(`status`, `date`)
```
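A sketch of creating that composite index with the question's table and column names, plus the query shape it serves; MySQL can seek on the leading `status` column and then range-scan `date`:

```
CREATE INDEX ix_status_date ON client_case (status, `date`);

SELECT *
FROM client_case
WHERE status = 'pending'
  AND `date` >= '2018-01-01';
```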
Upvotes: 0 <issue_comment>username_2: As @Gordon Linoff commented, MySQL does not (yet) support partial indexes.
Upvotes: 4 [selected_answer] |
2018/03/20 | 393 | 1,172 <issue_start>username_0: I'd like to know if there's an easy command to read and use data from an Excel file. I'm moving from `MATLAB` to `Python`, so I'd like to know if there's something as simple as in `MATLAB`:
>
> data = xlsread(filename).
>
>
>
I need this to load data to train an ANN.
Thank you.<issue_comment>username_1: This can be done with Pandas
```
sudo pip install pandas
```
By convention, it's abbreviated as pd when imported:
```
import pandas as pd
```
Once imported, use Panda's [read\_excel](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html) attribute to load the excel file as a Pandas Dataframe:
```
df = pd.read_excel('path/to/xlxs_file/')
df.head()
```
Read more on Pandas [here](https://pandas.pydata.org/pandas-docs/stable/index.html).
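Since the goal is feeding an ANN, note that `df.values` hands the loaded sheet over as a NumPy array; a small assumed-usage sketch:

```
import numpy as np

X = np.float32(df.values)   # features as a float32 NumPy array, ready for training
```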
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
import tensorflow as tf
import pandas as pd
import numpy as np
# read three columns, skipping the first (header) row
df = pd.read_csv('./blahblah.csv', usecols=[0, 1, 2], skiprows=[0], header=None)
data = np.float32(df.values)

# placeholder shaped like the data; feed the array in when running the graph,
# e.g. sess.run(some_op, feed_dict={x: data})
x = tf.placeholder(tf.float32, shape=data.shape)
```
This is one of the ways to load a `.csv` file into TF.
Upvotes: 2 |
2018/03/20 | 1,642 | 5,567 | <issue_start>username_0: So I have a function that performs just fine on small lists. It's function is to check if removing one element from the sequence will make the sequence a strictly increasing sequence:
```
def almostIncreasingSequence(sequence):
    length = len(sequence)
    for i in range(1, length):
        newSequence = sequence[:i-1] + sequence[i:]
        if checkIfSorted(newSequence, length):
            return True
    return checkIfSorted(sequence[:length-1], length)

def checkIfSorted(sequence, length):
    for i in range(1, length - 1):
        if sequence[i-1] >= sequence[i]:
            return False
    return True
```
But I need it to work on lists up to 100,000 elements long. What sort of optimizations could I make for this to work faster? Right now, it's abysmally slow on lists of 100,000, working through a few thousand elements a second.<issue_comment>username_1: I wrote [another answer on this site](https://stackoverflow.com/a/43017981/6246044) answering almost the same question as you, but mine was for checking if removing *at most one* element from the sequence makes it strictly increasing. That may be what you mean--there seems to be no practical difference. It seems you want my second solution, copied here.
```
def first_bad_pair(sequence, k):
    """Return the first index of a pair of elements in sequence[]
    for indices k-1, k+1, k+2, k+3, ... where the earlier element is
    not less than the later element. If no such pair exists, return -1."""
    if 0 < k < len(sequence) - 1:
        if sequence[k-1] >= sequence[k+1]:
            return k-1
    for i in range(k+1, len(sequence)-1):
        if sequence[i] >= sequence[i+1]:
            return i
    return -1

def almostIncreasingSequence(sequence):
    """Return whether it is possible to obtain a strictly increasing
    sequence by removing no more than one element from the array."""
    j = first_bad_pair(sequence, -1)
    if j == -1:
        return True  # List is increasing
    if first_bad_pair(sequence, j) == -1:
        return True  # Deleting earlier element makes increasing
    if first_bad_pair(sequence, j+1) == -1:
        return True  # Deleting later element makes increasing
    return False  # Deleting either does not make increasing
```
Your code is slow because it, like my first solution, makes new sequences by joining slices. This copies almost the entire sequence, and doing that many times slows the code. The code above avoids that by complicating the routine that checks a sequence to see if it is strictly increasing. Check my other linked answer for more details.
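A quick sanity check of the function above:
```
print(almostIncreasingSequence([1, 3, 2]))     # True: drop the 3 or the 2
print(almostIncreasingSequence([1, 3, 2, 1]))  # False: two removals would be needed
```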
Upvotes: 2 [selected_answer]<issue_comment>username_2: My solution:
```
def is_almost_increasing(x):
    lastx = x[0]  # value to use in the next comparison
    found_one = False
    for i in range(1, len(x)):
        if x[i] <= lastx:
            if found_one:
                return False
            found_one = True
            if i > 1 and x[i] <= x[i-2]:  # i > 1 in case the first comparison failed
                break
        lastx = x[i]
    return True
print('\nThese should be True.')
print(is_almost_increasing([1]))
print(is_almost_increasing([1, 2]))
print(is_almost_increasing([1, 2, 3]))
print(is_almost_increasing([1, 3, 2]))
print(is_almost_increasing([10, 1, 2, 3, 4, 5]))
print(is_almost_increasing([0, -2, 5, 6]))
print(is_almost_increasing([1, 1]))
print(is_almost_increasing([1, 2, 3, 4, 3, 6]))
print(is_almost_increasing([1, 2, 3, 4, 99, 5, 6]))
print(is_almost_increasing([1, 2, 2, 3]))
print('\nThese should be False.')
print(is_almost_increasing([1, 3, 2, 1]))
print(is_almost_increasing([3, 2, 1]))
print(is_almost_increasing([1, 1, 1]))
```
This is pretty similar to, but slightly shorter than, username_1's. I borrowed his test code from the link he provided, so thanks to him for that. The point is that you don't want to build a lot of secondary lists, which is inefficient. To get a really big improvement in efficiency you almost always need to find a better algorithm.
The two complications here are (1) what to do when the first element fails the test? (2) when you find an element out of sequence, do you drop that element or the one before it? The comments address that.
Upvotes: 1 <issue_comment>username_3: Okay, I know this post already has an answer, but I would just like to offer my code for the same problem. Suppose you have lists such as the following:
```
test_lis1 = [1, 2, 3, 4, 5, 2, 6]
test_lis2 = [1, 2, 3, 1, 4, 5, 1]
test_lis3 = [7, 1, 2, 3, 4, 5, 6]
```
Now, I guess your problem is that you want to know if this list can be converted to a strictly increasing sequence by removing ONLY one element. If we need to remove two, then the answer is no.
Here is the function:-
```
def remove_one_only(lis):
    len_lis = len(lis)
    i = 0
    j = 1
    rem_count = 0
    flag = False
    while j <= len_lis - 1:
        if lis[i] > lis[j]:
            rem_count += 1
            if rem_count == 2:
                break
            if i > 0:
                j += 1
                continue
        i = j
        j += 1
    if rem_count == 1:
        flag = True
    return flag
```
This function will tell you, yes or no, whether removing only one element can convert the list into a strictly increasing list, and it runs in O(n). This is the fastest you can get.
Upvotes: 1 <issue_comment>username_4: This would be the most pythonic solution I've found, but it's not the fastest.
```
def almostIncreasingSequence(sequence):
    return any([all([i < j for i, j in zip(t, t[1:])]) for t in [sequence[:k] + sequence[k+1:] for k in range(len(sequence))]])
```
Looks nice, though. :)
Upvotes: 0 |
2018/03/20 | 1,739 | 5,511 | <issue_start>username_0: I have a string, which contains a specific number that I would like extracted into a single number.
```
string = Result: ResultSet({'(u'examplemeasure', None)': [{u'value': 15, u'time': u'2018-03-20T22:50:33.803894733Z'}]})
```
I would like the number 15 by itself as a result.
```
15 (<-- just like this)
```
My code is as follows.
```
import re
m = re.search('(\d+)', 'Result: ResultSet({'(u'examplemeasure', None)': [{u'value': 15, u'time': u'2018-03-20T22:50:33.803894733Z'}]})', re.IGNORECASE)
print (m.group(1))
```
However, the abundance of apostrophes gives a syntax error.
```
File "filename.py", line 3
print (m.group(1))
^
SyntaxError: invalid syntax
```
Is there a way to disregard the abundance of apostrophes and get the number 15 by itself?
Thanks,
whyiamafool
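For what it's worth, a sketch of the usual fix (mine, not from the thread): use a different outer quote style so the inner apostrophes don't terminate the string literal early:
```
import re

s = "Result: ResultSet({'(u'examplemeasure', None)': [{u'value': 15, u'time': u'2018-03-20T22:50:33.803894733Z'}]})"
m = re.search(r"u'value': (\d+)", s)
print(m.group(1))  # 15
```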
2018/03/20 | 296 | 938 | <issue_start>username_0: So I have a column with a list of dates in it. I have a list/array with a set of specific dates in it. I want to assign a new column in my dataframe with true/false as to whether or not the specific date was in the list. I have the following, but it doesn't work and I'm not sure why.
```
__DATELIST = [date(2017, 7, 4), date(2016, 7, 4), ...]
def isholiday(x):
    return x in __DATELIST

df['isholiday'] = df['date'].apply(isholiday)
```
Any thoughts? The above is always false.<issue_comment>username_1: You should try using `datetime.datetime` objects instead of `datetime.date` objects to construct your list of dates. Your data types need to be equivalent.
Upvotes: 0 <issue_comment>username_2: Convert to `datetime` using `to_datetime`, and then use `isin` to get your mask:
```
dates = pd.to_datetime([date(2017, 7, 4), date(2016, 7, 4), ...])
df['isholiday'] = df['date'].isin(dates)
```
Upvotes: 2 |
2018/03/20 | 452 | 1,366 | <issue_start>username_0: I am learning about dynamic memory allocation and overloading operators in C++.
I am trying out a simple program to test out my knowledge but I can't find out what I'm doing wrong.
Here's the code:
```
#include <iostream>
#include <cstring>
using namespace std;

class myClass{
private:
    char *ptr;
public:
    myClass () {}
    myClass (char *str)
    {
        ptr = new char[strlen(str)];
        strcpy(ptr, str);
    }
    myClass (const myClass &k)
    {
        ptr = new char [strlen(k.ptr)+1];
        strcpy(ptr, k.ptr);
    }
    myClass& operator= (const myClass k)
    {
        delete [] ptr;
        ptr = new char [strlen(k.ptr)+1];
        strcpy(ptr, k.ptr);
        return *this;
    }
    ~myClass() {
        delete [] ptr;
    }
    void print() {
        cout << *ptr;
    }
};

int main() {
    char s[6] = "Hello";
    myClass p(s), m;
    m = p;
    m.print();
    return 0;
}
```
I am trying to use the operator = so I can assign the value of ptr from object p to object m, but I get no output. Any ideas what I am doing wrong?
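For reference, a corrected sketch (annotations and fixes mine): the crash comes from the default constructor leaving `ptr` uninitialized, so `operator=` deletes a garbage pointer; the `char*` constructor also allocates one byte too few for the terminator, and `print` outputs only the first character:
```
#include <iostream>
#include <cstring>
using namespace std;

class myClass {
    char *ptr;
public:
    myClass() : ptr(new char[1]{'\0'}) {}  // never leave ptr uninitialized
    myClass(const char *str) : ptr(new char[strlen(str) + 1]) { strcpy(ptr, str); }
    myClass(const myClass &k) : ptr(new char[strlen(k.ptr) + 1]) { strcpy(ptr, k.ptr); }
    myClass &operator=(const myClass &k) {
        if (this != &k) {               // guard against self-assignment
            delete[] ptr;
            ptr = new char[strlen(k.ptr) + 1];
            strcpy(ptr, k.ptr);
        }
        return *this;
    }
    ~myClass() { delete[] ptr; }
    void print() { cout << ptr; }       // *ptr would print only 'H'
};

int main() {
    myClass p("Hello"), m;
    m = p;
    m.print();  // prints "Hello"
    return 0;
}
```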
2018/03/20 | 1,593 | 5,375 | <issue_start>username_0: I have already pre-cleaned the data, and below shows the format of the top 4 rows:
```
[IN] df.head()
[OUT] Year cleaned
0 1909 acquaint hous receiv follow letter clerk crown...
1 1909 ask secretari state war whether issu statement...
2 1909 i beg present petit sign upward motor car driv...
3 1909 i desir ask secretari state war second lieuten...
4 1909 ask secretari state war whether would introduc...
```
I have called train_test_split() as follows:
```
[IN] X_train, X_test, y_train, y_test = train_test_split(df['cleaned'], df['Year'], random_state=2)
```
(Note: `X_train` and `y_train` are now pandas.core.series.Series of shape (1785,), and `X_test` and `y_test` are pandas.core.series.Series of shape (595,).)
I have then vectorized the X training and testing data using the following TfidfVectorizer and fit/transform procedures:
```
[IN] v = TfidfVectorizer(decode_error='replace', encoding='utf-8', stop_words='english', ngram_range=(1, 1), sublinear_tf=True)
X_train = v.fit_transform(X_train)
X_test = v.transform(X_test)
```
I'm now at the stage where I would normally apply a classifier, etc (if this were a balanced set of data). However, I initialize imblearn's [SMOTE()](http://contrib.scikit-learn.org/imbalanced-learn/stable/generated/imblearn.over_sampling.SMOTE.html) class (to perform over-sampling)...
```
[IN] smote_pipeline = make_pipeline_imb(SMOTE(), classifier(random_state=42))
smote_model = smote_pipeline.fit(X_train, y_train)
smote_prediction = smote_model.predict(X_test)
```
... but this results in:
```
[OUT] ValueError: "Expected n_neighbors <= n_samples, but n_samples = 5, n_neighbors = 6.
```
I've attempted to whittle down the number of n_neighbors, but to no avail; any tips or advice would be much appreciated. Thanks for reading.
**------------------------------------------------------------------------------------------------------------------------------------**
**EDIT:**
[Full Traceback](https://i.stack.imgur.com/nVP0G.png)
The dataset/dataframe (`df`) contains 2380 rows across two columns, as shown in `df.head()` above. `X_train` contains 1785 of these rows in the format of a list of strings (`df['cleaned']`) and `y_train` also contains 1785 rows in the format of strings (`df['Year']`).
Post-vectorization using `TfidfVectorizer()`: `X_train` and `X_test` are converted from `pandas.core.series.Series` of shape '(1785,)' and '(595,)' respectively, to `scipy.sparse.csr.csr_matrix` of shape '(1785, 126459)' and '(595, 126459)' respectively.
As for the number of classes: using `Counter()`, I've calculated that there are 199 classes (Years), each instance of a class is attached to one element of aforementioned `df['cleaned']` data which contains a list of strings extracted from a textual corpus.
The objective of this process is to automatically determine/guess the year, decade or century (any degree of classification will do!) of input textual data based on the vocabulary present.<issue_comment>username_1: Since there are approximately 200 classes and 1800 samples in the training set, you have on average 9 samples per class. The reason for the error message is that a) the data are probably not perfectly balanced, so there are classes with fewer than 6 samples, and b) the number of neighbors is 6. A few solutions for your problem:
1. Calculate the minimum number of samples (n_samples) among the 199 classes and select the `n_neighbors` parameter of the `SMOTE` class to be less than or equal to n_samples (see the sketch after this list).
2. Exclude from oversampling the classes with n_samples < n_neighbors, using the `ratio` parameter of the `SMOTE` class.
3. Use the `RandomOverSampler` class, which does not have a similar restriction.
4. Combine solutions 2 and 3: create a pipeline that uses `SMOTE` and `RandomOverSampler` in a way that satisfies the condition n_neighbors <= n_samples for SMOTEd classes and uses random oversampling when the condition is not satisfied.
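A minimal sketch of solution 1 (variable names from the question; note `fit_resample` is called `fit_sample` in older imblearn releases):
```
from collections import Counter
from imblearn.over_sampling import SMOTE

n_min = min(Counter(y_train).values())        # size of the rarest class
smote = SMOTE(k_neighbors=max(1, n_min - 1))  # must stay below n_min
X_res, y_res = smote.fit_resample(X_train, y_train)
```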
Upvotes: 6 [selected_answer]<issue_comment>username_2: Try to do the below code for SMOTE
`oversampler=SMOTE(kind='regular',k_neighbors=2)`
This worked for me.
Upvotes: 3 <issue_comment>username_3: I think it's possible to use this code:
>
> sampler = SMOTE(ratio={1: 1927, 0: 300}, random_state=0)
>
Upvotes: 0 <issue_comment>username_4: **WHY IT OCCURS:**
In my case it was occurring because I had as few as 1 sample for some of the values/categories. Since SMOTE is based on the KNN concept, it's not possible to apply SMOTE to classes with a single sample.
**HOW I SOLVED IT:**
>
> Since those single-sample values/categories were effectively outliers, I removed them from the dataset, then applied SMOTE, and it worked.
>
>
>
>
> Also try decreasing the `k_neighbors` parameter to make it work
>
>
>
```py
xr, yr = SMOTE(k_neighbors=3).fit_resample(x, y)
```
Upvotes: 2 <issue_comment>username_5: I was able to solve this issue following number 1 of [this](https://stackoverflow.com/a/49418705/6663387) answer.
```
from collections import Counter
Counter(y) # inspect the class counts
# drop the classes with 1 as their value because it's lower than k_neighbors which has 2 as minimum value in my case
X_res, y_res = SMOTE(k_neighbors = 2).fit_resample(X, y)
```
Upvotes: 0 |
2018/03/20 | 1,325 | 4,808 | <issue_start>username_0: I am trying to follow the instructions at <https://docs.docker.com/docker-for-windows/kubernetes/#use-docker-commands> for running a docker-compose.yml file against kubernetes on Docker for Windows.
I am using the Edge version of Docker for Windows -- 18.03.0-ce-rc4 -- and I have kubernetes enabled.
I am using the example docker-compose app at <https://docs.docker.com/compose/gettingstarted/#step-3-define-services-in-a-compose-file>, i.e.
```
version: '3.3'
services:
  web:
    build: .
    ports:
      - '5000:5000'
  redis:
    image: redis
```
This example works fine with `docker-compose build` and `docker-compose up`
But following the documentation linked above for `docker stack`, I get the following:
```
PS C:\dev\projects\python\kubetest> docker stack deploy --compose-file .\docker-compose.yml mystack
Ignoring unsupported options: build
Stack mystack was created
Waiting for the stack to be stable and running...
- Service redis has one container running
PS C:\dev\projects\python\kubetest> kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 443/TCP 5d
redis ClusterIP None 55555/TCP 8s
```
Note that it doesn't create the web service, along with that "ignoring unsupported options: build" error
I also tried using the sample docker-compose.yml file in that documentation linked above, and it didn't work, either, with a totally different error.
In short, by following the documentation, I'm unable to get anything deployed to kubernetes on Docker for Windows.<issue_comment>username_1: Due to the lack of support for a `build` there would be no `image` to run for the `web` service containers.
Compose can manage the build for you on a single Docker host. As Swarm and Kubernetes are normally run across multiple nodes, an `image` should reference a registry available on the network so all nodes can access the same image.
[Dockers `stack deploy` example](https://docs.docker.com/engine/swarm/stack-deploy/#create-the-example-application) includes a step to [setup a private registry](https://docs.docker.com/engine/swarm/stack-deploy/#set-up-a-docker-registry) and use that for source of the image:
```
services:
  web:
    image: 127.0.0.1:5000/stackdemo
```
### Workaround
In this instance, it *might* be possible to get away with building the image manually and referencing that image name due to everything running under the one Docker instance, it depends on how Kubernetes is setup.
```
version: '3.3'
services:
  web:
    build: .
    image: me/web
    ports:
      - '5000:5000'
  redis:
    image: redis
```
Build the image externally
```
docker-compose build web
```
or directly with `docker`:
```
docker build -t me/web .
```
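If you do set up a local registry as in that walkthrough, the tag-and-push steps would look roughly like this:
```
docker tag me/web 127.0.0.1:5000/stackdemo
docker push 127.0.0.1:5000/stackdemo
```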
Upvotes: 3 [selected_answer]<issue_comment>username_2: I ran into the same problem when following the official instruction.
To bypass this issue, I chose to use `kubectl` for deploying docker images to local k8s instead of `docker stack` (it seems the root cause might be the `--orchestrator kubernetes` flag, which doesn't work).
Here are the steps:
1. **Using the Kubernetes' docker registry per terminal (important)**:
run `& minikube docker-env | iex` under Windows Powershell (`iex` is the alias of `Invoke-Expression`)
or
run `eval $(minikube docker-env)` under bash environment.
After that, run `docker image ls`, make sure your docker registry is set to Kubernetes's env. (You should see some default images under 'k8s.gcr.io' domain.)
You may need to do this in **every** terminal if multiple terminals are opened.
2. **Rebuild your docker image**:
run `docker-compose -f /path/to/your/docker-compose.yml build`
Your image should appear in K8s's local registry.
3. **Run your image with 'kubectl'**:
run `kubectl run hello-world --image=myimage --image-pull-policy=Never`
### References:
<https://stackoverflow.com/a/48999680/4989702>
Upvotes: 0 <issue_comment>username_3: There is a project, **Kompose** (<https://github.com/kubernetes/kompose>), that helps users who already have docker-compose files deploy their applications on Kubernetes by automatically converting an existing docker-compose file into the corresponding Kubernetes yaml files.
Upvotes: 2 <issue_comment>username_4: Though it might be possible to deploy docker-compose yamls on Kubernetes as suggested, it is better to create proper Kubernetes yamls for your specific cluster.
There are some tools out there which can make your life simple.
One such tool is Move2Kube (<https://github.com/konveyor/move2kube>). You can do
```
move2kube translate -s
```
It can create the relevant deployments, services, ingress, etc., that will be required to deploy your application.
Upvotes: 0 |
2018/03/20 | 809 | 3,001 | <issue_start>username_0: I'm quite new to C# so I apologize if this has been asked before but I've done some searches and haven't found what I'm looking for.
Specifically, I'm aware I can use the keyword `using` in the following manner to (in some way) mimic the use of `typedef`:
```
using myStringDict = System.Collections.Generic.Dictionary<string, string>;
```
or other pre-defined types.
I am curious to know if there is a way to do this with generic types. As in
```
using mySomethingDict<TKey, TValue> = System.Collections.Generic.Dictionary<TKey, TValue>; // this doesn't work
```
This is all to avoid having to include `using System.Collections.Generic;` in my files (as there are many files in my project).
Alternative advice is also welcome.<issue_comment>username_1: No you can't do this, using aliases cannot contain type parameters. From the specification:
**9.4.1 Using alias directives**
>
> Using aliases can name a closed constructed type, but cannot name an
> unbound generic type declaration without supplying type arguments. For
> example:
>
>
>
```
namespace N1
{
    class A<T>
    {
        class B {}
    }
}

namespace N2
{
    using W = N1.A;       // Error, cannot name unbound generic type
    using X = N1.A.B;     // Error, cannot name unbound generic type
    using Y = N1.A<int>;  // Ok, can name closed constructed type
    using Z<T> = N1.A<T>; // Error, using alias cannot have type parameters
}
```
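For instance, a closed constructed alias compiles fine (a minimal sketch):
```
using IntMap = System.Collections.Generic.Dictionary<int, string>;

class Demo
{
    static void Main()
    {
        var map = new IntMap { [1] = "one" };
        System.Console.WriteLine(map[1]);  // prints "one"
    }
}
```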
Upvotes: 2 <issue_comment>username_2: You're misusing the [`using`](https://learn.microsoft.com/dotnet/csharp/language-reference/keywords/using-directive) statement. The meaning of this statement depends on its context. In your case you refer to the "using alias directive" to define an alias for a namespace or a type. However, this is usually used to avoid ambiguities, e.g. when you have your own class `Math` defined but also want to use the `System.Math` class in your code. In order to be able to refer to both types, you can use a `using` alias, e.g. `using MyMath = MyNamespace.Math`.
So `using` is *not* the C# equivalent of a `typedef`.
On the other side it's absolutely okay to have multiple usings in your code; it simply shows the classes that are used by your code. You shouldn't bother about that at all. In contrast to the statement in your question, you're *not* importing a complete namespace. You simply load the classes you want to use in your code. You could do the exact same by not using any `using` at all and always using fully-qualified names for all the types, e.g. `System.Collections.Generic.Dictionary`. This will compile to the exact same code, but it is harder to read and write.
This differs from how Java imports types, which might be the cause of your confusion. In Java you can write `import MyNameSpace.*` to load *all* classes within the namespace `MyNameSpace`. In C#, however, there's no such thing; you're just referring to *single types*.
[See also the examples on the alias](https://learn.microsoft.com/dotnet/csharp/programming-guide/namespaces/how-to-use-the-global-namespace-alias)
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,084 | 4,584 | <issue_start>username_0: I'm new to the Scala world and I'm using PLAY to make an API. It's going well but I am having some trouble understanding some of the notation and there is not a lot of documentation on it. Specifically, I am confused by the following controller method in one of the examples on the PLAY site:
```
class HomeController @Inject()(@Named("userParentActor") userParentActor: ActorRef,
cc: ControllerComponents)
(implicit ec: ExecutionContext) {
}
```
My question is what is going on in this constructor? Which part is the constructor and which part is the injected parameters? Is the `ExecutionContext` injected as well? And why is the `ec` in a separate parenthesis?
Thank you for the clarification.<issue_comment>username_1: It's just a constructor with two parameter lists, and the parameter in the second list is implicit.
```
class HomeController @Inject()( // list-1 start
@Named("userParentActor") userParentActor: ActorRef, // list-1, arg-1
cc: ControllerComponents // list-1, arg-2
)( // 1 end, 2 start
implicit ec: ExecutionContext // list-2, arg-1
) { // list-2 end
// body
}
```
The `@Inject` annotation applies to both argument lists, so `ec` is also injected by guice (using [Play's default thread pool](https://www.playframework.com/documentation/2.6.x/ThreadPools)).
The `@Named` annotation affects only the first argument.
The `ec` argument is in a separate list, because implicit arguments must be declared in a separate list.
It is declared in a separate list probably because the author anticipated the use case where the controller is instantiated manually instead of by the dependency injection container: it is simpler then, because you don't have to specify the default thread pool everywhere.
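A minimal illustration of the same two-parameter-list shape outside Play (names hypothetical):
```
import scala.concurrent.ExecutionContext

class Worker(name: String)(implicit ec: ExecutionContext) {
  def describe(): String = s"$name runs on $ec"
}

object Demo extends App {
  implicit val ec: ExecutionContext = ExecutionContext.global
  println(new Worker("job-1").describe())  // second parameter list filled in implicitly
}
```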
Upvotes: 3 [selected_answer]<issue_comment>username_2: OK, so let's go back a little and talk about why Play evolved to be like this. Normally when you want to write a class/method/function in an imperative kind of manner we write something like:
```
class XProviderFromCloud {
def getX (xId: String) : X = ??? // ???: To be implemented
}
```
Assuming the above code is somewhere in your `models`, this is OK: you can import models and use the method here. However, a good engineering approach here is to create an interface and test things: something like test-driven development (TDD). In this case, the code will be:
```
trait XProvider{
def getX(xId: String): X
}
class XProviderFromCloud extends XProvider {
override def getX (xId: String) : X = ??? // ???: To be implemented
}
```
Here you go through the interface, so you can inject the interface into a controller:
```
class MyController @Inject()(xProvider: XProvider)
```
So you can see here that your controller class can have a number of injectable components to use. One of the main reasons I do this is because I can mock the interface and return a result, and test against that. It means I do not need the code within `override def getX` in order to test the controller that uses it. After I'm sure the controller can use the result from getX, I write the test for `getX`, and then write the code for its body.
Now let's go to the next point: the `@Named` annotation. Sometimes an interface has multiple implementations (it is extended by a number of classes); we use the @Named annotation to explicitly express which implementation we want. For example, I can extend the above interface with two classes: one gets the X from the Amazon cloud (e.g., S3), the other gets it from the Google cloud. As simple as that. You can also look at the docs: <https://www.playframework.com/documentation/2.6.x/ScalaDependencyInjection#Programmatic-bindings>
What about the `ec: ExecutionContext` part, you may ask. Well, that comes in later, when you want to deal with concurrency and Futures. The above code is not good in the sense that it is not concurrent; if we want to call a cloud service or a database, we need to write non-blocking concurrent code using Future. Futures run on CPU threads, and we can either use the default execution context (as shown in your code) or create our own execution context, as demonstrated in Play's docs: <https://www.playframework.com/documentation/2.6.x/ScalaAsync#Creating-non-blocking-actions>.
Upvotes: 1 |
2018/03/20 | 1,053 | 4,391 <issue_start>username_0: Writing automated tests, I need to trigger a download.
Clicking the button to download runs fine, and all other assertions pass.
The problem is that Edge will not allow Selenium to close the browser when I call `browser.end()`, because there is a dialog saying a download is pending and waiting for confirmation.
Is there a flag or capability that allows Selenium to dismiss this "download is pending" dialog?
I've tried
```js
edge: {
desiredCapabilities: {
browserName: "edge",
browser_version: "16.0",
unhandledPromptBehavior: "accept",
},
}
```
2018/03/20 | 614 | 1,395 <issue_start>username_0: I have a dictionary whose value is a list of lists. I want to sum up the first item in list 1 with the first item in list 2, and so on to the end, and then do the same for all the items. In the end, I will have one list as the value for my dict. All of the lists have the same length.
>
> `my_dict = {'r1': [[0,1,0,1],[1,1,0,1],[0,1,1,1],[1,0,0,1]] , 'r2' : [[1,0,0,0],[1,1,0,0],[0,1,1,0],[1,0,0,1]]}`
>
>
>
result :
```
my_dict = {'r1':[2,3,1,4], 'r2' : [3,2,1,1]}
```<issue_comment>username_1: This operation would be much simpler with numpy.
You could try something like this:
```
import numpy as np
result = {}
for k in my_dict:
    v = np.array(my_dict[k]).sum(axis=0).tolist()
    result[k] = v
```
Upvotes: 0 <issue_comment>username_2: You could use a dict comprehension:
```
new_dict = {k: list(map(sum, zip(*v))) for k, v in my_dict.items()}
```
The key idea here is zip(*v), which transforms your list of lists into a sequence of tuples of the i-th elements:
```
zip(*[[1,2,3,4], [5,6,7,8]]) # yields (1, 5), (2, 6), (3, 7), (4, 8)
```
Upvotes: 1 <issue_comment>username_3: You can try:
```
my_dict = {'r1': [[0,1,0,1],[1,1,0,1],[0,1,1,1],[1,0,0,1]] , 'r2' : [[1,0,0,0],[1,1,0,0],[0,1,1,0],[1,0,0,1]]}
print({i: list(map(sum, zip(*j))) for i, j in my_dict.items()})
```
output:
```
{'r1': [2, 3, 1, 4], 'r2': [3, 2, 1, 1]}
```
Upvotes: 1 |
2018/03/20 | 560 | 1,961 <issue_start>username_0: I'm having a small issue running a function inside of a function. Hope you guys can guide me.
```
function myFunction()
{
function play(a, b)
{
document.getElementById("demo").innerHTML = a \* b;
}
}
myFunction().play(2,3);
```<issue_comment>username_1: So what you've done with `myFunction` is nest a 'private' function `play` inside it, so the outside cannot call it. You can either return the inner function, or **call** play inside the function and only call `myFunction`:
```
function myFunction(a, b) {
function play(a, b) {
document.getElementById("demo").innerHTML = a * b;
}
play(a, b);
}
myFunction(2,3); // will set innerHTML of demo to a*b
```
Or remove the nesting and just have
```
function play(a, b) {
document.getElementById("demo").innerHTML = a * b;
}
play(2,3);
```
Upvotes: 0 <issue_comment>username_2: You will not be able to access the play function where you are currently invoking it. The play function is out of scope.
```
function myFunction()
{
function play(a, b)
{
document.getElementById("demo").innerHTML = a * b;
}
play(2,3);
}
myFunction()
```
This should work for you
Upvotes: 0 <issue_comment>username_3: To keep with your current usage, try this:
```
function myFunction()
{
return {
play: function(a, b)
{
document.getElementById("demo").innerHTML = a * b;
}
}
}
myFunction().play(2,3);
```
Upvotes: 2 <issue_comment>username_4: You can't do this because that function `play` only exists within the function `myFunction` and everything outside of it won't be able to execute it.
An alternative is to return an object with a property called `play` whose value is the function you need to execute.
```js
function myFunction() {
return {
play: function(a, b) {
document.getElementById("demo").innerHTML = a * b;
}
}
}
myFunction().play(2, 3);
```
Upvotes: 0 |
2018/03/20 | 1,323 | 4,676 | <issue_start>username_0: I have a large excel file, where column A has random numbers in each cell, for all the 1 million+ rows in that excel sheet. I am trying to print number of occurrences of each of those values in column B. Usually, I do this in Excel itself, by sorting the data first and then using COUNTIF formula. But since I have so many rows (1 million), copy pasting the formula in all rows of Column B doesn't seem to work. Excel takes forever to compute and hangs up frequently. I want to now try doing this with Python.
Any ideas to get me started would be very much appreciated!
Update:
Here's what I tried:
```
import csv
import collections
with open('test.csv', 'rb') as f:
    reader = csv.reader(f)
    my_list = list(reader)
#print my_list[1000]
counter = collections.Counter(my_list)
print counter
```
But I get TypeError: unhashable type: 'list'
Can anyone help?<issue_comment>username_1: You could use the same strategy in Python: read the entire sequence of numbers into a list, sort the list and count the number of duplicates.
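A minimal sketch of that idea (file name as in the question):
```
import csv
from itertools import groupby

with open('test.csv') as f:
    values = sorted(row[0] for row in csv.reader(f) if row)

for value, group in groupby(values):
    print(value, sum(1 for _ in group))
```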
Upvotes: 0 <issue_comment>username_2: First a bit of advice: your question has all the right content, but the phrasing is quite poor. I am answering it because of the former, but I feel the need to point out the latter so you can avoid getting so many close votes in the future. "Any ideas to get me started would be very much appreciated!" and "Can anyone help?" are not valid questions for SO. The problem here is that they are fluff that detracts from the real question, to the point that most reviewers will see them as trigger phrases. In your case, you actually have a good clear problem statement, a coding attempt that is nearly spot-on, and all you need is help with a specific exception. Next time, phrase your question to be about your error or actual problem, and stay away from vagueness like "can you help?".
Enough of that.
A CSV reader is an iterable over the rows of the CSV. Each row is a list. Therefore, when you do `list(reader)`, you are actually creating a list of lists. In your case, each list contains only one element, but that is irrelevant to the `Counter`: lists can't be dictionary keys, so you get your exception. Literally all you need to change is to extract the first element of each row before you pass it to the `Counter`. Replace `my_list = list(reader)` with any of the following:
```
my_list = list(r[0] for r in reader)
```
OR
```
my_list = [r[0] for r in reader]
```
OR
```
counter = collections.Counter(r[0] for r in reader)
```
The last one creates a generator expression that will be evaluated lazily. It is probably your best option for a very large input since it will not retain the entire data set in memory, only the histogram.
Since the generator is evaluated lazily, you can not evaluate the `Counter` outside the `with` block. If you attempt to do so, the file will already have been closed, and the generator will raise an error on the first iteration.
You might get a slight speed boost by using `operator.itemgetter` instead of an explicit `r[0]` in any of the expressions above. All combined, the example below is pretty close to what you already have:
```
import csv
from collections import Counter
from operator import itemgetter
with open('test.csv', 'rb') as f:
    reader = csv.reader(f)
    g = itemgetter(0)
    counter = Counter(g(r) for r in reader)
print(counter)
```
Upvotes: 1 <issue_comment>username_3: Consider using [pandas](https://pandas.pydata.org/), which is simple to use and optimized for large datasets.
**Given**
```
import csv
import random
import pandas as pd
```
For demonstration, here is a csv file with a single column of random numbers:
```
random.seed(123)
data = [random.randint(0, 100) for _ in range(25)]
# Write data to csv
filename = "discard.csv"
with open(filename, "w+") as f:
writer = csv.writer(f)
for row in data:
writer.writerow([str(row)])
```
**Code**
```
# Read and count
s = pd.read_csv(filename, header=None, index_col=False)[0]
s.value_counts()
```
Output
```
34 2
20 2
6 2
71 2
43 2
42 2
98 1
11 1
99 1
4 1
13 1
31 1
48 1
17 1
52 1
55 1
68 1
89 1
0 1
Name: 0, dtype: int64
```
Apply the latter code to your dataset.
Upvotes: 1 <issue_comment>username_4: The `pandas` package is a simple way to load Excel data. Then you can use the `value_counts()` member function of the resulting dataframe. For example,
```
import pandas as pd
xl = pd.ExcelFile("C:\\Temp\\test.xlsx") # or whatever your filename is
df = xl.parse("Sheet1", header=None)
answer = df[0].value_counts()
print(answer)
```
Upvotes: 0 |
2018/03/20 | 1,355 | 4,897 | <issue_start>username_0: I need to send a few requests to the DB and update each entry from the response.
I am trying to use forEach/for, but since the requests are async I can't find a way to make several requests to the Firebase DB from inside a loop.
Any ideas?
Here is my code. I've tried to work around it with a counter inside the loop, but unfortunately this doesn't work for me.
```
function updateMethod(inputArray) {
return new Promise((resolve, reject) => {
var updates = {};
var ctr = 0;
for (let i = 0; i < inputArray.length; i++) {
db.ref("/foo").orderByChild("finished").equalTo(inputArray[i]).once("value", (snapshot) => {
ctr++;
var key = snapshot.key;
updates['transactions/' + key + '/finished'] = true;
if (ctr === inputArray.length) {
resolve(updates);
}
}).catch((e) => {
console.log(e);
});
}
});
}
```
**So bottom line:**
I need to do a few requests inside the loop and then return from the method.
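For reference, a sketch of the `Promise.all` alternative (my annotations; assumes the same `db` handle and data layout as above):
```
function updateMethod(inputArray) {
  // Fire all queries at once; Promise.all resolves when every snapshot has arrived.
  const queries = inputArray.map(value =>
    db.ref("/foo").orderByChild("finished").equalTo(value).once("value"));
  return Promise.all(queries).then(snapshots => {
    const updates = {};
    snapshots.forEach(snapshot => {
      // The query snapshot holds the matching children; iterate them for their keys.
      snapshot.forEach(child => {
        updates['transactions/' + child.key + '/finished'] = true;
      });
    });
    return updates;
  });
}
```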
2018/03/20 | 1,262 | 4,421 <issue_start>username_0: I want to import a DML script after Spring creates the tables.
I have been fighting with the **`data.sql`** file, but my application **doesn't see it**.
I don't know why. It works when I rename `data.sql` to **`import.sql`**, but **it should also work** with `data.sql`.
Anybody know why?
My application.properties:
```
spring.jpa.hibernate.ddl-auto=create-drop
spring.datasource.url=jdbc:postgresql://localhost:5432/yyy
spring.datasource.username=xxx
spring.datasource.password=xxx
spring.datasource.driver-class-name=org.postgresql.Driver
```
I put `data.sql` into `src/main/resources`
When only data.sql is in `resources`:
```
2018-03-21 00:42:13.646 INFO 4740 --- [ main] o.h.t.schema.internal.SchemaCreatorImpl : HHH000476: Executing import script 'org.hibernate.tool.schema.internal.exec.ScriptSourceInputNonExistentImpl@eebc0db'
```
When only import.sql (also in `src/main/resources`):
```
2018-03-21 00:48:57.023 INFO 16600 --- [ main] o.h.t.schema.internal.SchemaCreatorImpl : HHH000476: Executing import script 'ScriptSourceInputFromUrl(file:/C:/Users/Pawel/Desktop/Project/target/classes/import.sql)'
```
When I type `spring.datasource.data=data.sql` into `application.properties`
```
Exception in thread "SimpleAsyncTaskExecutor-2" org.springframework.boot.context.properties.source.InvalidConfigurationPropertyValueException: Property spring.datasource.data with value 'ServletContext resource [/data.sql]' is invalid: The specified resource does not exist.
at org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer.getResources(DataSourceInitializer.java:169)
at org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer.getScripts(DataSourceInitializer.java:151)
at org.springframework.boot.autoconfigure.jdbc.DataSourceInitializer.initSchema(DataSourceInitializer.java:114)
at org.springframework.boot.autoconfigure.jdbc.DataSourceInitializerInvoker.onApplicationEvent(DataSourceInitializerInvoker.java:93)
at org.springframework.boot.autoconfigure.jdbc.DataSourceInitializerInvoker.onApplicationEvent(DataSourceInitializerInvoker.java:37)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
at org.springframework.context.event.SimpleApplicationEventMulticaster.lambda$multicastEvent$0(SimpleApplicationEventMulticaster.java:136)
at java.lang.Thread.run(Thread.java:745)
```
I can see both `data.sql` and `import.sql` in `target/classes/data.sql`, `target/classes/import.sql` ...<issue_comment>username_1: If you put data.sql inside the jar, then prepend its name with classpath or META-INF:
```
spring.datasource.data=classpath:/data.sql
spring.datasource.data=/META-INF/data.sql
```
(I'm not 100% sure, so it would be great if you try both solutions and give me feedback.)
Upvotes: 2 <issue_comment>username_2: You have to deactivate the Hibernate loading (from import.sql) by commenting out your spring.jpa.hibernate.ddl-auto=create-drop line and setting it to validate.
Then enable data source initialization:
```
spring.datasource.initialization-mode=always
spring.jpa.hibernate.ddl-auto=validate
```
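For instance, a minimal `data.sql` placed in `src/main/resources` (table and values hypothetical):
```
INSERT INTO users (id, name) VALUES (1, 'alice');
INSERT INTO users (id, name) VALUES (2, 'bob');
```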
Upvotes: 2 <issue_comment>username_3: If somebody has this trouble, here is what I did:
application.properties:
Oracle settings
===============
```
spring.datasource.url = jdbc:oracle:thin:@localhost:1521:xe
spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
spring.datasource.username=xxxx
spring.datasource.password=<PASSWORD>
spring.jpa.hibernate.ddl-auto=none
spring.datasource.initialization-mode=always
spring.datasource.platform=oracle
```
MYSQL settings
==============
```
spring.datasource.url = jdbc:mysql://localhost:3306/mysql
spring.datasource.username = xxxx
spring.datasource.password = <PASSWORD>
spring.jpa.hibernate.ddl-auto=none
spring.datasource.initialization-mode=always
spring.datasource.platform=mysql
```
The lines that helped me were the last two; also remember that you must have a file named "data-oracle.sql" or "data-mysql.sql" respectively.
Here is the source:
Upvotes: 1 <issue_comment>username_4: The first option above worked for me:
spring.datasource.data=classpath:/data.sql
Upvotes: 0 |
2018/03/20 | 1,174 | 4,625 | <issue_start>username_0: I need help figuring out how to make a button that will play the same sound over and over again without having to wait for the sound to finish playing.
Here is my code so far.
```
final MediaPlayer mp = MediaPlayer.create(this,R.raw.boosto);
Button button = (Button) this.findViewById(R.id.button);
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
mp.start();
}
});
```<issue_comment>username_1: When playback has already started, calls to `start()` don't have any effect. Use `seekTo()` to reset playback to the beginning.
```
public void onClick(View v) {
if(mp.isPlaying())
mp.seekTo(0L, MediaPlayer.SEEK_NEXT_SYNC); // continues playback from millisecond 0 (this seekTo(long, int) overload requires API 26; use mp.seekTo(0) on older versions)
else
mp.start();
}
```
Upvotes: 0 <issue_comment>username_2: **SoundPool**
A SoundPool holds a set of audio samples; the sources can be files bundled in the app or files on the file system, and SoundPool supports playing several samples simultaneously.
How to use:
<https://developer.android.com/reference/android/media/SoundPool.html>
```
SoundPool sp = new SoundPool(5, AudioManager.STREAM_MUSIC, 0);
/* soundId for later handling of the sound pool */
int soundId = sp.load(context, R.raw.windows_8_notify, 1);
sp.play(soundId, 1, 1, 0, 0, 1);
```
example:
```
public class MainActivity extends AppCompatActivity {
private SoundPool soundPool;
private AudioManager audioManager;
// Maximum number of simultaneous sound streams.
private static final int MAX_STREAMS = 5;
// Stream type.
private static final int streamType = AudioManager.STREAM_MUSIC;
private boolean loaded;
private int soundIdDestroy;
private int soundIdGun;
private float volume;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
// AudioManager audio settings for adjusting the volume
audioManager = (AudioManager) getSystemService(AUDIO_SERVICE);
// Current volume index of the particular stream type.
float currentVolumeIndex = (float) audioManager.getStreamVolume(streamType);
// Get the maximum volume index for a particular stream type.
float maxVolumeIndex = (float) audioManager.getStreamMaxVolume(streamType);
// Volume (0 --> 1)
this.volume = currentVolumeIndex / maxVolumeIndex;
// Suggests an audio stream whose volume should be changed by
// the hardware volume controls.
this.setVolumeControlStream(streamType);
// For Android SDK >= 21
if (Build.VERSION.SDK_INT >= 21 ) {
AudioAttributes audioAttrib = new AudioAttributes.Builder()
.setUsage(AudioAttributes.USAGE_GAME)
.setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
.build();
SoundPool.Builder builder= new SoundPool.Builder();
builder.setAudioAttributes(audioAttrib).setMaxStreams(MAX_STREAMS);
this.soundPool = builder.build();
}
// for Android SDK < 21
else {
// SoundPool(int maxStreams, int streamType, int srcQuality)
this.soundPool = new SoundPool(MAX_STREAMS, AudioManager.STREAM_MUSIC, 0);
}
// When Sound Pool load complete.
this.soundPool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
@Override
public void onLoadComplete(SoundPool soundPool, int sampleId, int status) {
loaded = true;
}
});
// Load sound file (destroy.wav) into SoundPool.
this.soundIdDestroy = this.soundPool.load(this, R.raw.destroy,1);
// Load sound file (gun.wav) into SoundPool.
this.soundIdGun = this.soundPool.load(this, R.raw.gun,1);
}
// When users click on the button "Gun"
public void playSoundGun(View view) {
if(loaded) {
float leftVolumn = volume;
float rightVolumn = volume;
// Play sound of gunfire. Returns the ID of the new stream.
int streamId = this.soundPool.play(this.soundIdGun,leftVolumn, rightVolumn, 1, 0, 1f);
}
}
// When users click on the button "Destroy"
public void playSoundDestroy(View view) {
if(loaded) {
float leftVolumn = volume;
float rightVolumn = volume;
// Play sound objects destroyed. Returns the ID of the new stream.
int streamId = this.soundPool.play(this.soundIdDestroy,leftVolumn, rightVolumn, 1, 0, 1f);
}
}
}
```
Upvotes: 2 |
2018/03/20 | 1,739 | 5,437 | <issue_start>username_0: I'm new to Nim, so this might be an obtuse question, but how does one create a short-hand alias variable for the purpose of simplifying code?
For instance:
```
import sdl2
import sdl2.gfx
type
Vector[T] = object
x, y: T
Ball = object
pos: Vector[float]
Game = ref object
renderer: RendererPtr
ball: array[10, Ball]
proc render(game: Game) =
# ...
# Render the balls
for ix in low(game.ball)..high(game.ball):
var ball : ref Ball = game.ball[ix]
game.renderer.filledCircleRGBA(
int16(game.renderer.ball[ix].pos.x),
int16(game.renderer.ball[ix].pos.y),
10, 100, 100, 100, 255)
# ...
```
Instead of that last part, I'd like to use a shorter alias to access the ball position:
```
# Update the ball positions
for ix in low(game.ball)..high(game.ball):
??? pos = game.ball[ix].pos
game.renderer.filledCircleRGBA(
int16(pos.x),
int16(pos.y),
10, 100, 100, 100, 255)
```
However, if I use a `var` in place of `???`, then I seem to create a copy in `pos`, which then means the original isn't updated. A `ref` isn't allowed, and `let` won't let me mutate it.
This seems a natural thing to want to do, so I'd be surprised if Nim doesn't let you do it, I just can't see anything in the manuals or tutorials.
[later] Well, apart from "abusing" `ptr` to achieve this, but I had thought that use of `ptr` is discouraged except for C API interoperability.
What I'm hoping for is something like Lisp/Haskell's `let*` construct...<issue_comment>username_1: There are [rules to creating a reference](https://stackoverflow.com/a/30584797/172690) so you would likely need to use an unsafe pointer into the memory held by the `Game` variable like this:
```
type
Vector[T] = object
x, y: T
RendererPtr = ref object
dummy: int
Ball = object
pos: Vector[float]
Game = ref object
renderer: RendererPtr
ball: array[10, Ball]
proc filledCircleRGBA(renderer: RendererPtr, x, y: int16,
a, b, c, d, e: int) =
discard
proc render(game: Game) =
# Render the balls
for ix in low(game.ball)..high(game.ball):
let ball: ptr Ball = addr game.ball[ix]
game.renderer.filledCircleRGBA(
int16(ball.pos.x), int16(ball.pos.y),
10, 100, 100, 100, 255)
```
Note that the `let` only applies to the local `ball` alias, you can still mutate whatever it is pointing at. Another way to reduce typing might be to write a wrapper around `filledCircleRGBA` which accepts a `Game` and the index to the `Ball` you want to render:
```
proc filledCircleRGBA(renderer: RendererPtr, x, y: int16,
a, b, c, d, e: int) =
discard
proc filledCircleRGBA(game: Game, ballIndex: int,
a, b, c, d, e: int) =
filledCircleRGBA(game.renderer,
game.ball[ballIndex].pos.x.int16,
game.ball[ballIndex].pos.y.int16,
a, b, c, d, e)
proc render(game: Game) =
# Render the balls
for ix in low(game.ball)..high(game.ball):
game.filledCircleRGBA(ix, 10, 100, 100, 100, 255)
```
Depending on your performance needs you could [inline](https://nim-lang.org/docs/manual.html#types-procedural-type) that wrapper `proc` or turn it into a [template](https://nim-lang.org/docs/manual.html#templates) guaranteeing no proc call overhead.
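For illustration, a minimal sketch of the template variant (same body as the wrapper above; since a template is substituted at the call site at compile time, there is no call overhead):
```
template filledCircleRGBA(game: Game, ballIndex: int,
                          a, b, c, d, e: int) =
  # Substituted in place of each call; no proc call is generated.
  filledCircleRGBA(game.renderer,
                   game.ball[ballIndex].pos.x.int16,
                   game.ball[ballIndex].pos.y.int16,
                   a, b, c, d, e)
```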
Upvotes: 2 <issue_comment>username_2: Another solution, maybe more Nim-like, would be to use a template. Templates in Nim are just a simple substitution at the AST level. So if you create a couple templates like this:
```
template posx(index: untyped): untyped = game.ball[index].pos.x.int16
template posy(index: untyped): untyped = game.ball[index].pos.y.int16
```
You can now replace your code with:
```
proc render(game: Game) =
# Render the balls
for ix in low(game.ball)..high(game.ball):
game.renderer.filledCircleRGBA(
posx(ix),
posy(ix),
10, 100, 100, 100, 255)
```
This will get converted to your original code on compile-time and not carry any overhead. It will also maintain the same type-safety of the original code.
Of course if this is something you find yourself doing often you can create a template to create the templates:
```
template alias(newName: untyped, call: untyped) =
template newName(): untyped = call
```
This can then be used like this in your code:
```
proc render(game: Game) =
# Render the balls
for ix in low(game.ball)..high(game.ball):
alias(posx, game.ball[ix].pos.x.int16)
alias(posy, game.ball[ix].pos.y.int16)
game.renderer.filledCircleRGBA(
posx(),
posy(),
10, 100, 100, 100, 255)
```
As you can see that solution is only really useful if you use it multiple times. Also note that since the alias template is expanded within the for loop the created templates will be scoped in there as well and can therefore share a name just fine.
Of course what might be more normal in a game setting is to use a more object-oriented approach (one of the few cases where OO really makes sense IMHO, but that's another discussion). If you create a procedure for the ball type you can annotate it with the `{.this: self.}` pragma to save on some typing:
```
type
A = object
x: int
{.this: self.}
proc testproc(self: A) =
echo x # Here we can access x without doing self.x
var t = A(x: 10)
t.testproc()
```
Upvotes: 4 [selected_answer] |
2018/03/20 | 1,589 | 5,379 | <issue_start>username_0: I've looked for an answer to this, but all the questions I could find tell me to use delegation which I have done as seen below, but yet it still only fires once when I select an item from a dropdown.
My HTML:
```
<select id="CatalogItems">
    <option>Keyboard Trays</option>
    <option>Another option</option>
</select>
```
Pretty simple. My javascript code is:
```
$(document).on("change", "#CatalogItems", function() {
console.log("The item has been changed...");
});
```
This HTML is generated by a partial form when I select another item from another dropdown. I have that working just fine, so not including that code. This code works only once when I select the first item. When I use the other dropdown to reload the partial view that creates this CatalogItems dropdown, the above on change code will work again only once. So it seems to be attached correctly for persisting through Ajax calls, but I'm not sure why it is only firing once. Any help would be greatly appreciated.
EDIT: So to make things a bit clearer: If my Ajax call only happens at the beginning of the page load, which loads in the above dropdown I only am able to get my script as above to fire off once. If I do not do any more Ajax calls, it still fires off once. When I inspect the element of the dropdown list, nothing there changes (as far as ids, classes, etc) when I select an item. There's not duplicates as I haven't fired off the Ajax call to replace the partial. I'd supply more code, but that's really all I have for this bit on functionality.
2018/03/21 | 366 | 1,099 | <issue_start>username_0: I need my elements to show in a single line and be hidden on overflow, but my current code breaks the line if the screen is too short in width. How do I make the elements hidden on overflow when screen width is short?
<https://jsfiddle.net/4szyqv20/5/>
```
<div class="container">
  <div class="element">1000000</div>
  <div class="element">1000001</div>
  <div class="element">1000002</div>
  <div class="element">1000003</div>
  <div class="element">1000004</div>
  <div class="element">1000005</div>
  <div class="element">1000006</div>
  <div class="element">1000007</div>
</div>
```
CSS:
```
.container {
display: block
overflow: hidden;
}
.element {
display: inline-block;
border: solid black 1px;
}
```<issue_comment>username_1: On `.container` you want to add `white-space: nowrap;`
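A minimal sketch of the resulting rule (selector name taken from the question):
```
.container {
  overflow: hidden;    /* clip children that don't fit */
  white-space: nowrap; /* keep inline-block children on one line */
}
```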
Upvotes: 2 <issue_comment>username_2: `overflow: hidden` will not do anything if the container doesn't have a fixed size. You'll also need to add `white-space: nowrap` in order to prevent text-wrapping.
Your CSS will look something like this:
```
.container {
display: block;
overflow: hidden;
width: 100%;
white-space: nowrap;
}
.element {
display: inline-block;
border: solid black 1px;
}
```
[Here is an updated fiddle.](https://jsfiddle.net/4szyqv20/17/)
Upvotes: 2 [selected_answer] |
2018/03/21 | 2,730 | 8,059 | <issue_start>username_0: In Java or Scala, what is the best way to find the number of seconds between 2 `LocalTimes`?
If I have `11:20:00.000Z` and `11:21:00.500Z` I would want the result to be `61` seconds, not `60` seconds.<issue_comment>username_1: You can calculate the time difference in `millis` and round the result in seconds using `math.round()`
```
import java.time.{LocalTime, OffsetTime}
import java.time.temporal.ChronoUnit.MILLIS
val t1 = LocalTime.parse("11:20:00.000")
val t2 = LocalTime.parse("11:21:00.500")
math.round(MILLIS.between(t1, t2) / 1000.0)
// res1: Long = 61
```
[UPDATE]
The above timezone-ignored calculation works fine given that the times to be compared are in the same timezone. In case different timezones are involved, [OffsetTime](https://docs.oracle.com/javase/8/docs/api/java/time/OffsetTime.html) as suggested in @username_4's answer should be used instead:
```
val t1 = OffsetTime.parse("11:20:00.000Z") // UTC +00:00
val t2 = OffsetTime.parse("11:21:00.500-01:00") // UTC -01:00
math.round(MILLIS.between(t1, t2) / 1000.0)
// res2: Long = 3661
```
Upvotes: 2 <issue_comment>username_2: **DateTimeFormatter, LocalTime and Duration in java.time**
```
val time1 = "11:20:00.000Z"
val time2 = "11:21:00.500Z"
import java.time._
import java.time.format._
val format = DateTimeFormatter.ofPattern("HH:mm:ss.SSS")
val parsedTime1 = LocalTime.parse(time1.replace("Z", ""), format)
val parsedTime2 = LocalTime.parse(time2.replace("Z", ""), format)
val diff = Duration.between(parsedTime1, parsedTime2)
val roundedSeconds = math.round((diff.getSeconds.toDouble+(diff.getNano.toDouble/1000000000.0)))
println(roundedSeconds)
//61 is printed
```
**using SimpleDateFormat in java.text**
```
val time1 = "11:20:00.000Z"
val time2 = "11:21:00.500Z"
import java.text._
val format = new SimpleDateFormat("HH:mm:ss.SSSZ")
val parsedTime1 = format.parse(time1.replaceAll("Z$", "+0000"))
val parsedTime2 = format.parse(time2.replaceAll("Z$", "+0000"))
val diff = math round (parsedTime2.getTime - parsedTime1.getTime)/1000.0
println(diff)
//61 is printed
```
Upvotes: 0 <issue_comment>username_3: tl;dr
=====
```
ChronoUnit.SECONDS.between( // Calculate elapsed time as a total number of seconds.
LocalTime.parse( "11:20:00.000" ).truncatedTo( ChronoUnit.SECONDS ) , // Parse string as a `LocalTime` time-of-day, then lop off any fractional second.
LocalTime.parse( "11:21:00.500" ).truncatedTo( ChronoUnit.SECONDS ) // Lop off the half-second present in this `LocalTime` object.
)
```
>
> 60
>
>
>
Truncate
========
If you want to ignore the fractional seconds, [truncate](https://docs.oracle.com/javase/9/docs/api/java/time/LocalTime.html#truncatedTo-java.time.temporal.TemporalUnit-), just lop them off.
```
LocalTime start = LocalTime.parse( "11:20:00.000" ).truncatedTo( ChronoUnit.SECONDS ) ;
LocalTime stop = LocalTime.parse( "11:21:00.500" ).truncatedTo( ChronoUnit.SECONDS ) ; // Lop off the extra half-second we don't care about.
```
Calculate the elapsed time as a total number of seconds using the [`ChronoUnit`](https://docs.oracle.com/javase/9/docs/api/java/time/temporal/ChronoUnit.html) enum.
```
long seconds = ChronoUnit.SECONDS.between( start , stop ) ;
```
>
> 60
>
>
>
By the way, your Question asks about `LocalTime`, which has no concept of time zone or offset-from-UTC. Yet your example times have a `Z` on the end which is short for `Zulu` and means UTC. So those examples cannot be `LocalTime` values.
If you really have date-time values in UTC, represent them as `Instant` objects rather than as `LocalTime`. And perform the same logic: truncate, and calculate with `SECONDS.between`.
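For example, a sketch of the same truncate-then-count logic with `Instant` (the timestamp strings here are hypothetical, padding the question's times with a date):
```
Instant start = Instant.parse( "2018-03-20T11:20:00.000Z" ).truncatedTo( ChronoUnit.SECONDS ) ;
Instant stop = Instant.parse( "2018-03-20T11:21:00.500Z" ).truncatedTo( ChronoUnit.SECONDS ) ;  // Lop off the half-second.
long seconds = ChronoUnit.SECONDS.between( start , stop ) ;  // 60
```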
---
About *java.time*
=================
The [*java.time*](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) framework is built into Java 8 and later. These classes supplant the troublesome old [legacy](https://en.wikipedia.org/wiki/Legacy_system) date-time classes such as [`java.util.Date`](https://docs.oracle.com/javase/9/docs/api/java/util/Date.html), [`Calendar`](https://docs.oracle.com/javase/9/docs/api/java/util/Calendar.html), & [`SimpleDateFormat`](http://docs.oracle.com/javase/9/docs/api/java/text/SimpleDateFormat.html).
The [*Joda-Time*](http://www.joda.org/joda-time/) project, now in [maintenance mode](https://en.wikipedia.org/wiki/Maintenance_mode), advises migration to the [java.time](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) classes.
To learn more, see the [*Oracle Tutorial*](http://docs.oracle.com/javase/tutorial/datetime/TOC.html). And search Stack Overflow for many examples and explanations. Specification is [JSR 310](https://jcp.org/en/jsr/detail?id=310).
You may exchange *java.time* objects directly with your database. Use a [JDBC driver](https://en.wikipedia.org/wiki/JDBC_driver) compliant with [JDBC 4.2](http://openjdk.java.net/jeps/170) or later. No need for strings, no need for `java.sql.*` classes.
Where to obtain the java.time classes?
* [**Java SE 8**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_8), [**Java SE 9**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_9), and later
+ Built-in.
+ Part of the standard Java API with a bundled implementation.
+ Java 9 adds some minor features and fixes.
* [**Java SE 6**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_6) and [**Java SE 7**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7)
+ Much of the java.time functionality is back-ported to Java 6 & 7 in [***ThreeTen-Backport***](http://www.threeten.org/threetenbp/).
* [**Android**](https://en.wikipedia.org/wiki/Android_(operating_system))
+ Later versions of Android bundle implementations of the java.time classes.
+ For earlier Android (<26), the [***ThreeTenABP***](https://github.com/JakeWharton/ThreeTenABP) project adapts *ThreeTen-Backport* (mentioned above). See [*How to use ThreeTenABP…*](http://stackoverflow.com/q/38922754/642706).
The [**ThreeTen-Extra**](http://www.threeten.org/threeten-extra/) project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as [`Interval`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Interval.html), [`YearWeek`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearWeek.html), [`YearQuarter`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearQuarter.html), and [more](http://www.threeten.org/threeten-extra/apidocs/index.html).
Upvotes: 0 <issue_comment>username_4: You want to use `LocalTime`, but the inputs (11:20:00.000Z) have the `Z` in the end, which is the [UTC designator](https://en.wikipedia.org/wiki/ISO_8601#Coordinated_Universal_Time_(UTC)). If the times are in UTC, it's not wise to ignore this info - assuming that other inputs may have another offsets.
So I would not ignore the `Z` and [use the proper type](https://docs.oracle.com/javase/8/docs/api/java/time/OffsetTime.html):
```
OffsetTime t1 = OffsetTime.parse("11:20:00.000Z");
OffsetTime t2 = OffsetTime.parse("11:21:00.500Z");
```
Both inputs have the same offset, so using `LocalTime` will work as well. But if there are inputs with different offsets, then `OffsetTime` is the right type to use.
Then you get the difference in milliseconds and round it to get the rounded seconds:
```
long millis = ChronoUnit.MILLIS.between(t1, t2);
long seconds = Math.round(millis / 1000.0); // 61
```
Upvotes: 1 <issue_comment>username_5: Give a try to `LocalTime#until`
```java
LocalTime past = LocalTime.now().minusSeconds(4);
LocalTime now = LocalTime.now();
System.out.println("4 == " + past.until(now, ChronoUnit.SECONDS));
```
You can find more about the method in doc of [LocalTime#until](https://docs.oracle.com/en/java/javase/13/docs/api/java.base/java/time/LocalTime.html#until(java.time.temporal.Temporal,java.time.temporal.TemporalUnit)).
Upvotes: 2 |
2018/03/21 | 1,001 | 2,847 | <issue_start>username_0: I have two `List<Guid>()`s and I want to find `GUID` values that are **not** in the second list.
How do I do this using `LINQ`? I think `LINQ` would be a more efficient approach than a `foreach()`.<issue_comment>username_1: For that you can use the LINQ `Except()` extension method:
```
var result = list1.Except(list2);
```
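Note that `Except` uses set semantics, so duplicates *within* the first list are collapsed as well, and the query is deferred until enumerated. A small sketch (the list names come from the answer above):
```
var missing = list1.Except(list2).ToList(); // ToList() forces the query to run once
```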
Upvotes: 2 <issue_comment>username_2: I made a test to compare how much time different methods take to complete this task.
For the test, I used 2 `List<Guid>` of 200 items each.
The second list contains ~1/10 of pseudo-random elements which are also in the first one.
I measured the time each method required to complete using a `StopWatch()`.
>
> Since `Except`, `Where` and `LookUp` are cached, the test has been
> restarted each time. It can however be useful to know that the cached
> Functions take only a few `Tick` (1 ~ 7) to complete once initialized.
>
> If the same query must be repeated multiple times, these Functions'
> feature can really make the difference.
>
>
>
This is how the two Lists are created:
```
static Random random = new Random();
// [...]
random.Next(0, 10);
List<Guid> guid1 = new List<Guid>(200);
List<Guid> guid2 = new List<Guid>(200);
int insertPoint = random.Next(0, 10);
for (int x = 0; x < 200; x++)
{
guid1.Add(Guid.NewGuid());
guid2.Add((x == insertPoint) ? guid1.Last() : Guid.NewGuid());
if (x > 9 && ((x % 10F) == 0.0F))
insertPoint = random.Next(x, x + 10);
}
```
These are the Functions tested:
List1 `Except` List2:
```
var result1 = guid1.Except(guid2);
```
List1.Item `Where` != List2.Item
```
var result2 = guid1.Where(g1 => guid2.All(g2 => g2 != g1));
```
List1.Items `FindAll` != List2.Items
```
var result3 = guid1.FindAll(g1 => guid2.All(g2 => g2 != g1));
```
List1.Item `LookUp Contains` (List2.Item)
```
var lookUpresult = guid1.ToLookup(g1 => guid2.Contains(g1));
var result4 = lookUpresult[false].ToList();
```
List1 `Hashset GroupBy Contains` (List2 `Hashset`)
```
var guidHS1 = new HashSet<Guid>(guid1);
var guidHS2 = new HashSet<Guid>(guid2);
var hsGroups = guid1.GroupBy(g => guidHS2.Contains(g));
var result5 = hsGroups.First().ToList();
```
`ForEach` List1->Item `ForEach` List2->Item (Item1 == Item2) => List3
```
List<Guid> guid3 = new List<Guid>();
bool found;
foreach (Guid guidtest in guid1) {
found = false;
foreach (Guid guidcompare in guid2) {
if (guidtest == guidcompare) {
found = true;
break;
}
}
if (!found) guid3.Add(guidtest);
}
```
These are the results of this test: (20 rounds)
```
Number of equal elements found: 181~184
EXCEPT => Time: 1724 ~ 4356 ticks
WHERE => Time: 3651 ~ 7360 ticks
FINDALL => Time: 3037 ~ 6472 ticks
LOOKUP => Time: 9406 ~ 16502 ticks
HASHSET GROUPBY => Time: 1773 ~ 3597 ticks
FOREACH => Time: 650 ~ 1529 ticks
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 851 | 2,939 | <issue_start>username_0: I use live-server to live reload folder with `HTML`, `javascript` and `CSS` files using `Visual Studio Code`.
When I open the console in Google Chrome I see this
[](https://i.stack.imgur.com/e0KYG.png):
>
> "Failed to load resource: the server responded with a status of 404
> (Not Found)
> <http://eluxer.net/code?sesscheck=1&id=105&subid=51824_5848_>"
>
>
>
I tried this with many different folders, and every time there is the same message with live-server. At the same time, there is no error message when I open the `HTML` file directly in the browser (without live-server).
Please, Can anyone tell me how to fix this?<issue_comment>username_1: **I fixed the problem making this steps:**
* **Clear cache and cookies.** [Here](https://support.google.com/accounts/answer/32050?co=GENIE.Platform%3DDesktop&hl=en) You can see how.
* **Reset Chrome** this way: Click on "Customize and control Google Chrome" in the upper right corner => Click on "Settings" => then "Advanced" => "Reset and clean up" => "Reset" => Restart Chrome.
* **Uninstall Chrome.** For `Windows 10` it is this way: Close all Chrome windows and tabs => Right click on the "Start menu" => Click "Apps and Features" => Click on `Google Chrome` => Click "Uninstall" => (The current step will delete your user profile information, like your browser preferences, bookmarks and history, so if you want to save it, do it in advance! My profile information was saved in my Google account.) Check the box "Also delete your browsing data" and click "Uninstall". Here is a [video tutorial "How to uninstall Google Chrome web browser and remove all Google Chrome files from Your computer"](https://www.youtube.com/watch?v=fgvjwq6rDz0).
* **Install fresh copy of Google Chrome**
**After all this, the error was gone but I had a new one:**
[](https://i.stack.imgur.com/7jlGG.png)
**I removed this error [this way](https://stackoverflow.com/questions/31075893/im-getting-favicon-ico-error/#50052294).**
**Finally all is clear:**
[](https://i.stack.imgur.com/WNeeV.png)
Upvotes: 3 [selected_answer]<issue_comment>username_2: I faced this issue.
If you are just testing/developing, you can use **Incognito** mode (`Ctrl+Shift+N`);
this quickly clears the issue and you can proceed with testing/development.
But please follow the other steps for a permanent fix.
Upvotes: 0 <issue_comment>username_3: In my case, I got the error because I was having issues trying to load a picture that the server could not find.
I solved by moving the picture from the root location to
```
\wwwroot\images\picture.jpg
```
Upvotes: 0 <issue_comment>username_4: The reason for the error may be;
You may not have defined the js file in the body with the script tag.
Upvotes: 0 |
2018/03/21 | 581 | 2,068 | <issue_start>username_0: I have a retrofit service defined as such:
```
package com.example.android;
import java.util.Map;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import retrofit.http.RestMethod;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
interface MyRetrofitService {
@Target(METHOD)
@Retention(RUNTIME)
@RestMethod(value = "DELETE", hasBody = true)
@interface DELETE_WITH_A_BODY {
String value();
}
@DELETE_WITH_A_BODY(BASE_URL + "/user/{uid}")
Observable deleteUser(@Path("uid") String uid, @Body Map deleteBody)
}
```
What do I have to tell proguard to make it stop stripping this annotation from the interface method? It's causing problems:
```
03-20 17:04:27.991 21813-27757/? E/RetrofitErrorHandler: 1686168:[ERROR] ~20734 Message: Error happened. Response null., cause MyRetrofitService.deleteUser: HTTP method annotation is required (e.g., @GET, @POST, etc.).
```
I found [an issue on Github](https://github.com/square/retrofit/issues/1167), but I can't figure out what the actual proguard rules to make this work are.<issue_comment>username_1: You did not specify which rules you already include in your project, but you will need to keep all the RuntimeVisible*Annotation* attributes like this:
```
-keepattributes RuntimeVisible*Annotation*
```
apart from the other attributes that you want to keep.
Edit: also you might need to prevent the annotation classes from being shrunk like this:
```
-keep @interface com.example.android.MyRetrofitService$*
```
This will prevent annotations that are defined as inner classes of `MyRetrofitService` from being shrunk/obfuscated by ProGuard.
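Putting the two pieces together, a minimal rule set might look like this (a sketch only; the package name is taken from the question, so adjust it to your project):
```
-keepattributes RuntimeVisibleAnnotations,RuntimeVisibleParameterAnnotations
-keep @interface com.example.android.MyRetrofitService$*
```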
Upvotes: 2 [selected_answer]<issue_comment>username_2: Use the following lines to prevent annotation being removed
```
-keepattributes *Annotation*
-keepclassmembers interface com.example.android.MyRetrofitService {
@com.example.android.MyRetrofitService.DELETE_WITH_A_BODY <methods>;
}
```
Upvotes: 0 |
2018/03/21 | 775 | 2,868 | <issue_start>username_0: I want to see output from my systemctl commands. For example:
```
systemctl restart systemd-networkd
```
would display the output of
```
systemctl status systemd-networkd.
```
I know that I could write a script that always puts the commands sequentially but I am hoping there is something like
```
systemctl --verbose restart ....
```
that didn't make it into the man page.<issue_comment>username_1: To my knowledge, there is no such thing. That being said, you can go ahead and "make your own":
We're going to edit our bashrc file to add this as an alias-style command:
```
echo "startstat(){ systemctl start \$*; systemctl status \$* }" >> ~/.bashrc
```
Note that this will only work for bash sessions and for the user you're running it for, so don't run this inside stuff that doesn't run bashrc before starting.
You can then start services and immediately get the status by running
```
startstat [arguments to pass to BOTH systemctl start AND systemctl status]
```
Sample usage:
```
startstat systemd-networkd
```
If you want to wait a little bit before checking the status, you can always add a sleep between:
Just `nano ~/.bashrc`, scroll to the bottom (or if you added things, whichever line it's at), and just add `sleep [seconds];` between `systemctl start \$*;` and `systemctl status \$*;`
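For example, the resulting function with a two-second pause might look like this (a sketch; pick whatever delay suits you):
```
startstat(){ systemctl start $*; sleep 2; systemctl status $*; }
```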
If you don't want to wait for the start to finish before checking the status, you can put a single `&` sign (with a space in front of it) between the `\$*` and the `;` to fork the start command off into the background.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Unfortunately `systemctl` does not provide a "verbose" option like most Unix commands do.
One solution is to increase `SYSTEMD_LOG_LEVEL` to `debug` (`info` is useless here) and then filter the output like e.g.:
```
$ SERVICES="smartmontools cron xxxxx"
$ SYSTEMD_LOG_LEVEL=debug systemctl restart $SERVICES 2>&1|egrep "Got result|Failed"
Failed to restart xxxxx.service: Unit xxxxx.service not found.
Got result done/Success for job cron.service
Got result done/Success for job smartmontools.service
```
You can also add a prefix like e.g.
```
$ SYSTEMD_LOG_LEVEL=debug systemctl restart $SERVICES 2>&1|egrep -i "Got result|Failed"|sed 's,^,restart: ,'
restart: Failed to restart xxxxx.service: Unit xxxxx.service not found.
restart: Got result done/Success for job cron.service
restart: Got result done/Success for job smartmontools.service
```
`SYSTEMD_LOG_LEVEL` might not be available on all systems.
Upvotes: 1 <issue_comment>username_3: `systemctl` does not have a `verbose` option. If you want to see the output of the service you are running in real time, what you can do is to open another terminal and run:
`sudo journalctl --unit=systemd-networkd.service -f`
Journalctl documentation: <https://www.freedesktop.org/software/systemd/man/journalctl.html>
Upvotes: 1 |
2018/03/21 | 289 | 873 | <issue_start>username_0: I am trying to print out the variables val1-5 in HTML. Doesn't seem to be working, any help would be appreciated.
```
var val1 = 0;
var val2 = 0;
var val3 = 0;
var val4 = 0;
var val5 = 0;
var fing = prompt("enter code");
window.onload = function(){
var str = fing;
var res = str.charAt()
val1 = res;
//str 2//
var str = fing;
var res1 = str.charAt(1)
val2 = res1;
var str = fing;
var res2 = str.charAt(2)
val3 = res2;
};
```<issue_comment>username_1: Okay, i am clueless about your end game and for some reason your code doesn't make sense but..
to print in HTML, You need to do something like..
```
document.getElementById("var1").innerHTML = val1
```
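A minimal self-contained sketch (the element id `var1` is hypothetical and must match an element in your page):
```
<p id="var1"></p>
<script>
  var val1 = 0;
  // Write the variable's value into the element's HTML:
  document.getElementById("var1").innerHTML = val1;
</script>
```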
Upvotes: 2 [selected_answer]<issue_comment>username_2: JS:
```
document.getElementById("demo").innerHTML = val1
```
HTML:
```
<p id="demo"></p>
```
Upvotes: -1 |
2018/03/21 | 528 | 1,843 | <issue_start>username_0: I am trying to make a structure of some sort that has key value pairs where the key is an identifier like 'equals' and the values are the unicode equivalent html like '\u003D'. Right now I am using an enum, but I am not sure if this is the best structure. Anyways, I need to use this enum (or other structure) to display the unicode characters in a drop down on a page by using a ngFor statement to iterate over my enum and making options which innerHtml correspondes to the unicode character. What is the best way to do this?
Right now my code is as follows:
**enum**
```
export enum Symbols {
equals = '\u003D'
}
```
**component.ts**
```
symbols: Symbols;
```
**component.html**
```
<select>
    <option *ngFor="let symbol of symbols">{{symbol}}</option>
</select>
```<issue_comment>username_1: To have the enum accessible within the component, you must declare a property, like you did, but instead of declaring it of *type* `Symbols` (using `:`), you should assign `Symbols` to it (using `=`).
To declare a `<select>` with options, you should use the `*ngFor` on the `<option>`s, not on the `<select>`.
Also, to iterate the enum, you must use `Object.keys(symbols)`, which is aliased by `keys(symbol)` in the template below.
[**Stackblitz Demo here.**](https://stackblitz.com/edit/ngfor-enum-select-options?file=app/app.component.ts)
```
import { Component } from '@angular/core';
export enum Symbols {
equals = '\u003D',
notEquals = '!='
}
@Component({
selector: 'my-app',
template: `
<p>Having the name as label and symbol as value:</p>
<select>
<option *ngFor="let symbol of keys(symbols)" [value]="symbols[symbol]">{{symbol}}</option>
</select>
<p>Having the symbol as label and name as value:</p>
<select>
<option *ngFor="let symbol of keys(symbols)" [value]="symbol">{{symbols[symbol]}}</option>
</select>
`
})
export class AppComponent {
keys = Object.keys;
symbols = Symbols;
}
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: You can also use the `keyvalue` pipe; it makes your life easier.
```
<div *ngFor="let enum of symbols | keyvalue">{{ enum.key }} - {{ enum.value }}</div>
```
Upvotes: 0 |
2018/03/21 | 733 | 2,506 | <issue_start>username_0: I have this simple SQL Select query:
```
SELECT part_number, bin_location FROM list ORDER BY bin_location
```
This will give me a result like this:
```
part_number bin_location
Alaska wine A1
German water A2
London whisky A3
German wine A4
London water B1
German wine B2
London whisky B3
German wine C1
London whisky C2
Water Green Wine C3
Pure Water D1
Orange whisky D2
Ireland Moat D3
French wine XX
Italy water XX
Water Green Wine XX
Pure Water XX
Orange whisky XX
Ireland Moat XX
French wine XX
Italy water XX
```
Now what I'm trying to do is order by bin\_location and, once bin\_location equals xx, order by part\_number.
This is the query I am using to do this:
```
SELECT part_number,bin_location
FROM list
ORDER BY case when bin_location = 'xx' then part_number end
```
This is the result I get:
```
part_number bin_location
Alaska wine A1
German water A2
London whisky A3
German wine C1
London water A4
German wine B2
London whisky B3
German wine A4
London whisky C2
Water Green Wine C3
Pure Water D1
Orange whisky D2
Ireland Moat D3
French wine XX
French water XX
Ireland Moat xx
Italy water XX
Orange whisky XX
Pure Water XX
Water Green Wine XX
```
As you can see, after the xx condition is met it puts part\_number in order, but before that the bin\_location is not ordered.
How can I fix this?
Thanks in advance.<issue_comment>username_1: I think you want:
```
SELECT part_number, bin_location
FROM list
ORDER BY (CASE WHEN bin_location = 'xx' THEN 1 ELSE 2 END), -- put non-XX first
(CASE WHEN bin_location <> 'xx' THEN bin_location END),
part_number;
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I don't think there is any need for a `CASE` statement. You can simply query as follows:
```
SELECT part_number, bin_location
FROM list
ORDER BY bin_location, part_number;
```
If you intend for `bin_location` to be case-insensitive (that is, `xx` and `XX` should be treated similarly), then you can do this instead:
```
SELECT part_number, bin_location
FROM list
ORDER BY UPPER(bin_location), part_number;
```
Hope this helps.
Upvotes: 1 |
2018/03/21 | 1,498 | 5,230 | <issue_start>username_0: I'm trying to add a UICollectionView in a swift class programmatically, without any storyboard, with a custom cell (a simple label in every cell)
```
import Foundation
import UIKit
class BoardView : UIView, UICollectionViewDataSource, UICollectionViewDelegate {
var boardTable: UICollectionView!
var cellLabel:UILabel!
let layout: UICollectionViewFlowLayout = UICollectionViewFlowLayout()
var items = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]
convenience init(frame: CGRect, title: String) {
self.init(frame: frame)
}
override init(frame: CGRect) {
super.init(frame: frame)
layout.sectionInset = UIEdgeInsets(top: 60, left: 10, bottom: 10, right: 10)
layout.itemSize = CGSize(width: 30, height: 30)
boardTable = UICollectionView(frame: self.frame, collectionViewLayout: layout)
boardTable.dataSource = self
boardTable.delegate = self
boardTable.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "cell")
boardTable.backgroundColor = UIColor.clear
self.addSubview(boardTable)
}
required init?(coder aDecoder: NSCoder) {
fatalError("MainViewis not NSCoding compliant")
}
func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
return self.items.count
}
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath as IndexPath)
cellLabel = UILabel(frame: CGRect(x: 0, y: 0, width: 30, height: 30))
cellLabel.textAlignment = .center
cellLabel.text = self.items[indexPath.item]
cell.contentView.addSubview(cellLabel)
return cell
}
func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath)
{
print("User tapped on item \(indexPath.row)")
}
}
```
I'm able to change the cell background with the code
```
let cell = self.boardTable.cellForItem(at: IndexPath(row: 1 , section: 0))
cell?.backgroundColor = UIColor.lightGray
```
How I may change the cell text color or the cell text content (the cellLabel)?
Thanks in advance for your support.<issue_comment>username_1: ```
/* For text color */
//Default set of colours
cellLabel.textColor = .white
//Color of your choice - RGB components in as float values
cellLabel.textColor = UIColor(red: 100.0, green: 100.0, blue: 100.0, alpha: 1)
/* For text content */
cellLabel.text = "Hello World"
```
Upvotes: 0 <issue_comment>username_2: There are two ways of doing it
1) Create a custom `UICollectionViewCell` subclass which has `cellLabel` as a property.
Create a custom class, say `CollectionViewCell`:
```
class CollectionViewCell: UICollectionViewCell {
    var cellLabel: UILabel!

    override func drawRect(rect: CGRect) { // Your code should go here.
        super.drawRect(rect)
    }
}
```
And to create the custom cell, change the below func:
```
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath as IndexPath) as! CollectionViewCell
    cell.cellLabel = UILabel(frame: CGRect(x: 0, y: 0, width: 30, height: 30))
    cell.cellLabel.textAlignment = .center
    cell.cellLabel.text = self.items[indexPath.item]
    cell.contentView.addSubview(cell.cellLabel)
    return cell
}
```
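Note that for the `as! CollectionViewCell` cast to succeed, the collection view must register the custom class instead of the plain `UICollectionViewCell` (a sketch against the question's setup):
```
// Register the subclass so dequeued cells really are CollectionViewCell instances.
boardTable.register(CollectionViewCell.self, forCellWithReuseIdentifier: "cell")
```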
To change the properties of `cellLabel`, use the below code:
```
let cell = self.boardTable.cellForItem(at: IndexPath(row: 1, section: 0)) as? CollectionViewCell
cell?.backgroundColor = UIColor.lightGray
cell?.cellLabel.textColor = .white
// or a custom color:
cell?.cellLabel.textColor = UIColor(red: 100.0, green: 100.0, blue: 100.0, alpha: 1)
cell?.cellLabel.text = "Hello World"
```
2) Iterate through cell subviews and find label by UILabel class and then change its property.
```
for view in cell.subviews {
if let label = view as? UILabel {
label.textColor = UIColor.red
}
}
```
I would suggest using 1st one.
Upvotes: 2 [selected_answer]<issue_comment>username_3: Try This :
```
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath as IndexPath)
var colors : [UIColor] = [UIColor.blue, UIColor.red, UIColor.cyan]
cellLabel = UILabel(frame: CGRect(x: 0, y: 0, width: 30, height: 30))
// Create the label first, then color it (wrap the index so rows beyond the palette don't crash):
cellLabel.textColor = colors[indexPath.row % colors.count]
cellLabel.textAlignment = .center
cellLabel.text = self.items[indexPath.item]
cell.contentView.addSubview(cellLabel)
return cell
}
```
Upvotes: 1 <issue_comment>username_4: For time being you can use this but the ideal approach would be the one suggested in this [answer](https://stackoverflow.com/a/49396687/3760078) by username_2
```
for view in cell.subviews {
if let lable = view as? UILabel {
lable.textColor = UIColor.red
}
}
```
Upvotes: 0 |
2018/03/21 | 1,298 | 4,308 | <issue_start>username_0: My excel spreadsheet has the following datasets, but as you can see some are duplicates, while others have similar names. I want to find duplicates that are the same and similar. The condition is that if they have three or more similar keywords, then they are also duplicates. I have the following excel function, but how do I expand it to find similar duplicates:
```
=IF(COUNTIF($C$2:C2,C2)>1, "Duplicate!","Original")
```
Spreadsheet:
```
The Power by <NAME>
Grant by <NAME>*********
Exit West by <NAME>
Janesville: An American Story by <NAME>
Exit West by <NAME>
Five-Carat Soul by <NAME>
Anything Is Possible by <NAME>
Dying: A Memoir by <NAME>
A Gentleman in Moscow by <NAME>
Janesville: An American Story by <NAME>
Exit West by <NAME>
Five-Carat Soul by <NAME>
Janesville: An Story by Amy
Exit West by <NAME>
Five-Carat Soul by <NAME>
Evicted: Poverty and Profit in the American City <NAME>
Exit West by <NAME>
An American Story by <NAME>
Poverty and Profit American City Matthew
Grant by Ron*********
Grant by <NAME>
```
As you can see ***Grant by <NAME>*** has multiple exact same duplicates and there is another one that simply has ***Grant by Ron*** without ***Chernow***. Please help.
Here is the screenshot: [Link](https://i.stack.imgur.com/YL0SL.jpg)
2018/03/21 | 413 | 1,282 | <issue_start>username_0: I'm trying to update all my Documents so that they only have the 3 highest-scored opinions but I don't know how to do it. Here's my example:
```
{
_id:9787878,
name: "<NAME>",
scores: [
{grade:8, opinion:"Very good"},
{grade:1, opinion:"Bad"},
{grade:10, opinion:"Very good"},
{grade:2, opinion:"Bad"},
{grade:6, opinion:"Very good"},
]
}
```
I want to update the document so it looks like this:
```
{
_id:9787878,
name: "<NAME>",
scores: [
{grade:10, opinion:"Very good"},
{grade:8, opinion:"Very good"},
{grade:6, opinion:"Very good"},
]
}
```
Any idea what type of Query should I use?<issue_comment>username_1: ```
db.server.updateMany(
{}, // all documents
{
$push : {
scores : {
$each: [ ], // push nothing; just apply the modifiers below
$sort: { grade: 1 }, // low->high
$slice: -3 // keep last 3 (the highest grades)
}
}
}
);
```
Upvotes: 0 <issue_comment>username_2: Try [`$sort with $slice`](https://docs.mongodb.com/manual/reference/operator/update/sort/#use-sort-with-other-push-modifiers) modifier.
```
db.col.update(
{ name: "<NAME>" },
{ $push: { scores: { $each: [ ], $sort: { grade: -1 }, $slice: 3 } } }
)
```
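The same modifier combination can be applied to every document at once; a sketch, assuming MongoDB 3.2+ for `updateMany`:
```
db.col.updateMany(
  { }, // all documents
  { $push: { scores: { $each: [ ], $sort: { grade: -1 }, $slice: 3 } } }
)
```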
Upvotes: 3 [selected_answer] |
2018/03/21 | 3,249 | 12,329 | <issue_start>username_0: I am building a calendar that allows the user to cycle through the months and years by pressing the buttons created of the previous month and next month. Essentially what I want the main window to do is update with the new month upon clicking PREV or NEXT month with the correct days, which it does, only issue is the day buttons that display the specific days of the month overlap when cycling through.
Below is the part where I am having issues:
```
def prevMonth(self):
try:
self.grid_forget()
#SHOULD REFRESH THE WINDOW SO BUTTONS DONT OVERLAP
print "forgeting"
except:
print "passed the forgetting"
pass
lastMonth = self.month - 1
self.month = lastMonth
self.curr_month()
def nextMonth(self):
try:
self.grid_forget()
#SHOULD REFRESH THE WINDOW SO BUTTONS DONT OVERLAP
print "forgeting"
except:
print "passed the forgetting"
pass
nextMonth = self.month + 1
self.month = nextMonth
self.curr_month()
```
When the program iterates between the months the grid does not refresh it just overlaps the days and months. I have tried EVERYTHING I found in my hours of research. "self.destroy()" merely creates a blank window. "self.grid.destroy()" returns and error that function has no attribute destroy. I have tried making the children of grid all global variables within self and I cant iterate through the months correctly so the set up is permanent but I feel like I am missing something simple as far as working with refreshing the grid and reprinting the based upon the updated month.
Can you please point me in the right direction or correct the error I am missing?
below is the entire program
```
from Tkinter import *
from calendar import *
import datetime
class Application(Frame):
def __init__(self, master=None):
Frame.__init__(self, master)
self.grid()
DateNow = datetime.datetime.now()
self.year = DateNow.year#declaring global variable year
self.month = DateNow.month#declaring global variable month
self.curr_month()
def curr_month(self):
try:#iterating the month and year backward if index is out of range
if self.month == 0:
self.month = 12
trueYear = int(self.year)
self.year = trueYear - 1
except:
pass
try:#iterating month and year forward if index is out of range
if self.month == 13:
self.month = 1
trueYear = int(self.year)
self.year = trueYear + 1
except:
pass
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
#create labels
self.label = Label(self, text=months[self.month - 1])#displaying month
self.label.grid(row=0, column = 1)
self.label = Label(self, text=self.year)#displaying year
self.label.grid(row=0, column = 6)
try:#displaying previous month
prevMonthBut = Button(self, text=months[self.month-2], command=self.prevMonth)
prevMonthBut.grid(row=0,column=0)
except:#USED ONLY IF PREVIOUS MONTH IS IN PREVIOUS YEAR
prevMonthBut = Button(self, text=months[11], command=self.prevMonth)
prevMonthBut.grid(row=0,column=0)
try:#displaying next month
nextMonthBut = Button(self, text=months[self.month], command=self.nextMonth)
nextMonthBut.grid(row=0,column=2)
except:#USED ONLY IF NEXT MONTH IS IN NEXT YEAR
nextMonthBut = Button(self, text=months[0], command=self.nextMonth)
nextMonthBut.grid(row=0,column=2)
for i in range(7):
self.label = Label(self, text=days[i])
self.label.grid(row = 1, column = i)
weekday, numDays = monthrange(self.year, self.month)
week = 2
for i in range(1, numDays + 1):
self.button = Button(self, text = str(i))
self.button.grid(row = week, column = weekday)
weekday += 1
if weekday > 6:
week += 1
weekday = 0
def prevMonth(self):
try:
self.grid_forget()
#SHOULD REFRESH THE WINDOW SO BUTTONS DONT OVERLAP
print "forgeting"
except:
print "passed the forgetting"
pass
lastMonth = self.month - 1
self.month = lastMonth
self.curr_month()
def nextMonth(self):
try:
self.grid_forget()
#SHOULD REFRESH THE WINDOW SO BUTTONS DONT OVERLAP
print "forgeting"
except:
print "passed the forgetting"
pass
nextMonth = self.month + 1
self.month = nextMonth
self.curr_month()
mainWindow = Tk()
obj = Application()
mainWindow.mainloop()
```<issue_comment>username_1: Tkinter is fairly efficient. And for the number of widgets that you have, it won't impact performance much to create them all initially. Here is a sample that works about like what you were trying to do.
```
from calendar import *
import datetime
try:
from tkinter import * # Python 3.x
except:
from Tkinter import * # Python 2.x
class Application(Frame):
def __init__(self, master=None):
Frame.__init__(self, master)
self.grid(row=0, column=0, sticky='news')
DateNow = datetime.datetime.now()
month = int(DateNow.month)
year = int(DateNow.year)
self.createDaysOfWeekLabels()
month_name = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
# Create frames and button controls for previous, current and next month.
self.frameList = [] # List that contains the frame objects.
self.buttonList = [] # List that contains the button objects.
amonth = month - 1
for i in range(3):
if amonth < 0:
amonth = 11
year -= 1
elif amonth == 12:
amonth = 0
year += 1
mFrame = Frame(self)
self.createMonth(mFrame, amonth, year)
self.frameList.append(mFrame)
mButton = Button(self, text=month_name[amonth-1])
mButton['command'] = lambda f=mFrame, b=mButton: self.showMonth(f, b)
mButton.grid(row=0, column=i)
# Grid each frame
mFrame.grid(row=2, column=0, columnspan=7, sticky='news')
if (i == 1):
mButton['relief'] = 'flat'
else:
# Remove all but the ith frame. More efficient to remove than forget and configuration is remembered.
mFrame.grid_remove()
self.buttonList.append(mButton)
amonth += 1
# Create year widget at top left of top frame
label = Label(self, text=year)#displaying year
label.grid(row=0, column=6)
def createDaysOfWeekLabels(self):
days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
for i in range(7):
label = Label(self, text=days[i])
label.grid(row = 1, column = i)
def showMonth(self, mFrame, mButton):
# Display all buttons normally
for button in self.buttonList:
button['relief'] = 'raised'
# Set this month's button relief to flat
mButton['relief'] = 'flat'
# Hide all frames
for frame in self.frameList:
frame.grid_remove()
mFrame.grid()
def createMonth(self, mFrame, month, year):
weekday, numDays = monthrange(year, month)
week = 0
for i in range(1, numDays + 1):
button = Button(mFrame, text = str(i), width=3)
button.grid(row = week, column = weekday)
weekday += 1
if weekday > 6:
week += 1
weekday = 0
mainWindow = Tk()
obj = Application(mainWindow)
mainWindow.mainloop()
```
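As an aside, the reason the original `self.grid_forget()` attempt appeared to do nothing is that it only hides the frame itself; it does not touch the day buttons inside it. If you do want to rebuild the month in place instead, you have to destroy the old child widgets first, e.g. (a sketch):
```
for child in self.winfo_children():
    child.destroy()  # remove the old labels/buttons before redrawing the month
```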
Upvotes: 0 <issue_comment>username_2: This is a modified version of the proposed answer that also includes the original desired intent of allowing the user to cycle through the months and will also increment the year.
```
from calendar import *
import datetime
try:
from tkinter import * # Python 3.x
except:
from Tkinter import * # Python 2.x
class Application(Frame):
def __init__(self, master=None):
Frame.__init__(self, master)
self.grid(row=0, column=0, sticky='news')
DateNow = datetime.datetime.now()
self.month = int(DateNow.month)
self.year = int(DateNow.year)
self.createDaysOfWeekLabels()
# Create frames and button controls for previous, current and next month.
self.frameList = [] # List that contains the frame objects.
self.buttonList = [] # List that contains the button objects.
self.split()
def split(self):
month_name = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
leftArrow = Button(self, text="<", command=self.prevMonth)
leftArrow.grid(row = 0, column = 0)
rightArrow = Button(self, text=">", command=self.nextMonth)
rightArrow.grid(row = 0, column = 1)
for i in range(3):
try:
print i, "this is i"
print self.month
mFrame = Frame(self)
self.createMonth(mFrame)
self.frameList.append(mFrame)
mButton = Button(self, text=month_name[self.month-1])
mButton['command'] = lambda f=mFrame, b=mButton: self.showMonth(f, b)
mButton.grid(row=1, column=i)
# Grid each frame
mFrame.grid(row=3, column=0, columnspan=7, sticky='news')
if (i == 1):
mButton['relief'] = 'flat'
else:
mButton.grid_remove()
# Remove all but the ith frame. More efficient to remove than forget and configuration is remembered.
mFrame.grid_remove()
self.buttonList.append(mButton)
except:
pass
# Create year widget at top right of top frame
label = Label(self, text=self.year)#displaying year
label.grid(row=0, column=6)
print "-------------------"
def prevMonth(self):
self.month -= 1
print self.month, "this is month in PREV"
if self.month <= 0:
self.month = 12
print self.month, "month inside forinif in PREVMONTH"
self.year -= 1
elif self.month >= 13:
self.month = 0
print self.month, "month inside forinelif in PREVMONTH"
self.year += 1
self.split()
def nextMonth(self):
self.month += 1
print self.month, "this is month in NEXT"
for frame in self.frameList:
frame.grid_remove()
if self.month <= -1:
self.month = 11
print self.month, "month inside forinif in NEXTMONTH"
self.year -= 1
elif self.month >= 13:
self.month = 1
print self.month, "month inside forinelif in NEXTMONTH"
self.year += 1
self.split()
def createDaysOfWeekLabels(self):
days = ['Mon','Tue','Wed','Thu','Fri','Sat','Sun']
for i in range(7):
label = Label(self, text=days[i], width = 3)
label.grid(row = 2, column = i)
def showMonth(self, mFrame, mButton):
# Display all buttons normally
for button in self.buttonList:
button['relief'] = 'raised'
# Set this month's button relief to flat
mButton['relief'] = 'flat'
# Hide all frames
for mframe in self.frameList:
mframe.grid_remove()
mFrame.grid()
def createMonth(self, mFrame):
weekday, numDays = monthrange(self.year, self.month)
week = 0
for i in range(1, numDays + 1):
button = Button(mFrame, text = str(i), width=3)
button.grid(row = week, column = weekday)
weekday += 1
if weekday > 6:
week += 1
weekday = 0
mainWindow = Tk()
obj = Application(mainWindow)
mainWindow.mainloop()
```
Upvotes: 3 [selected_answer] |
2018/03/21 | 642 | 2,209 | <issue_start>username_0: I have a
```
<select>
    <option v-for="tipoUsuario in tipoUsuarios">{{tipoUsuario.Tipo}}</option>
</select>
```
populated with Vue and axios, but I need to obtain the ID Value to post in another table.
In the response.data returns these values:
```
[ {
"TipoUsuarioId": 1,
"Tipo": "Administrador"
},
{
"TipoUsuarioId": 2,
"Tipo": "Usuario"
} ]
```
To populate my i use this code:
```
export default {
data() {
return {
tipoUsuarios:[],
}
},
methods: {
getTipoUsuario() {
axios.get("http://localhost:50995/api/GetTipoUsuario")
.then(response => {
this.tipoUsuarios = response.data,
this.status = response.data
})
.catch(e => {
console.log(e)
})
}
}
```
This is my POST method for now:
```
addUsuario() {
axios.post("http://localhost:50995/api/PostUsuario", {
"Nombre": this.nombre,
"ApellidoP": this.apellidoP,
"ApellidoM": this.apellidoM,
"Email": this.email,
"NombreUsuario": this.nombreUsuario,
"Contrasena": this.password
})
},
```
I need to generate a POST with the value of the ID when I select one option of the `<select>`.
Thank You.<issue_comment>username_1: When you select one of the options, the `select` will fire a `change` event, which you can catch and send to a method:
```
<select @change="selectChange">
    <option v-for="tipoUsuario in tipoUsuarios">{{tipoUsuario.Tipo}}</option>
</select>
```
The method will receive a normal Event object as its argument. You can get the `id` from it in the usual way:
```
selectChange(event) {
this.selectId = event.target.id;
this.selectedOption = event.target.value; // you probably also want this
}
```
Upvotes: 0 <issue_comment>username_2: You have to set a `v-model` on the `<select>` to a data property to store the selected value, and add the `:value` to the `<option>`.
```js
new Vue({
el: '#example',
data: {
types: [{id: 1, name: 'admin'}, {id: 2, name: 'user'}],
selectedType: 1
}
})
```
```html
<div id="example">
  <select v-model="selectedType">
    <option v-for="item in types" :value="item.id">{{ item.name }}</option>
  </select>
  <p>Selected: {{ selectedType }}</p>
</div>
```
Take a look at the example in [Form Input Bindings: Select](https://v2.vuejs.org/v2/guide/forms.html#Select) from the official documentation
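Once the selection is stored in `selectedType`, it can simply be included in the POST body from the question (a sketch; the endpoint comes from the question, while the `TipoUsuarioId` field name is an assumption about the POST API):
```
addUsuario() {
  axios.post("http://localhost:50995/api/PostUsuario", {
    "Nombre": this.nombre,
    // ...the other fields from the question...
    "TipoUsuarioId": this.selectedType
  })
}
```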
Upvotes: 2 [selected_answer] |
2018/03/21 | 406 | 1,377 | <issue_start>username_0: 
I'm very confused and not sure what to do with joistcost, floorcost and projecttotal.
joistcost is the cost of an individual component
floorcost is the joist cost multiplied by the width
projecttotal is the cost of all the floors in a project
I'd be very appreciative of any help, I have no idea how to normalize this correctly.
2018/03/21 | 1,024 | 4,451 | <issue_start>username_0: We are embarking on integrating Selenium into our web projects (something we’ve been meaning to do for quite some time).
I’ve read a few articles that argue whether to use C#, node, python etc with selenium, but even those articles tend to finish by saying go with which ever language you already know best. That would put us firmly in the C# camp. I’d be quite happy with this approach because it gives me all the things I’m used to - VS, Intelkisense, Debugger etc, however one niggle in my mind tells me that using a flavour of JavaScript is going to provide better access to the elements in the DOM compared to C# - I might want tests to check the padding on an element, the height of an element, or the x/y position of an element. In standard web programming you would never dream of trying to evaluate these checks in c# - you’d always opt for JavaScript, so my question becomes whether using JavaScript to script selenium tests actually provides stronger functionality than in C#? Can anyone advise?
Many thanks
dotdev<issue_comment>username_1: You saw it right *go with which ever language you already know best* is the best policy.
The current version of *Selenium* are released through five major *Language Bindings Art* (several minor) as follows :
* **Java**
* **C#**
* **Ruby**
* **Python**
* **Javascript (Node)**
It is worth mentioning that the *Selenium* team takes great care when releasing the *Selenium* libraries through the different *language bindings*. All the *Selenium* clients are released at par, with the same new/updated/enhanced/bug-fixed functionality. Hence, irrespective of the *Selenium* language binding you use, your experience will be much the same.
But the efficiency of designing/coding/implementing a framework to work with *Selenium* can make a lot of difference in terms of effectiveness and performance. Hence the selection of a *Selenium* language binding effectively boils down to **go with whichever language you already know best**.
[A bit of Selenium History](https://docs.seleniumhq.org/docs/01_introducing_selenium.jsp#brief-history-of-the-selenium-project)
-------------------------------------------------------------------------------------------------------------------------------
Selenium was initially developed as a **Javascript library** that could drive interactions with a webpage, which allowed its author to automatically rerun tests against multiple browsers. That library eventually became **Selenium Core**.
Though *Selenium* was a tremendous tool at the time, it had its own drawbacks. As a result of the **Javascript-based automation engine** and the security restrictions browsers apply to *Javascript*, a lot of things were impossible to achieve.
In 2006 an engineer at Google developed a testing tool that spoke directly to the browser, using the **native method** for each browser and operating system to avoid the restrictions of a sandboxed Javascript environment. That tool is the **WebDriver** we know today.
Hence, if you do choose **NodeJS**, its Javascript heritage is an **advantage for you**.
Upvotes: 0 <issue_comment>username_2: For starters, the Selenium WebDriver has the ability to execute javascript in the middle of a test run
```
IWebDriver myDriver = new ChromeDriver();
IJavaScriptExecutor js = (IJavaScriptExecutor)myDriver;
js.ExecuteScript(yourJavascriptHere);
```
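Since the original question mentions checking padding, element heights, and x/y positions: the C# bindings expose most of that directly, and `IJavaScriptExecutor` covers the rest. A minimal sketch (the URL and the `#header` selector are illustrative, not from the question):
```csharp
using System.Drawing;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

IWebDriver driver = new ChromeDriver();
driver.Navigate().GoToUrl("https://example.com");

IWebElement header = driver.FindElement(By.CssSelector("#header"));

// Layout checks without any JavaScript:
string padding = header.GetCssValue("padding"); // computed style, e.g. "8px"
int height = header.Size.Height;                // rendered height in px
Point position = header.Location;               // x/y within the page

// JavaScript is still available when Selenium doesn't expose something:
IJavaScriptExecutor js = (IJavaScriptExecutor)driver;
long scrollY = (long)js.ExecuteScript("return Math.round(window.scrollY);");

driver.Quit();
```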
More importantly, I would definitely recommend using the programming language you and your team are most familiar with.
Here are a few of my reasons why:
* As you mentioned, **you get to use tools you are familiar with**. Working within the same IDE is a huge plus when you are correcting bugs that were found in a UI Test run.
* You can also **Reuse Utility Libraries** you may have already written for your web applications. You may still need Logging, Data Access and other infrastructure
* Easier Deployments. The ability to run all tests on build
* **Inheritance with strongly typed objects comes in handy** when creating a UI Testing Framework. Essentially all of the HTML paths, CSS, etc. can be abstracted out.
<NAME> has a great course on Pluralsight about creating an automated testing framework with C#. I recommend you check it out if Pluralsight is a resource you have available to you.
<https://app.pluralsight.com/library/courses/automated-testing-framework-selenium/table-of-contents>
Good Luck!
Upvotes: 1 |
2018/03/21 | 890 | 2,555 | <issue_start>username_0: I have a function to simulate genetic drift and I would like to loop it over multiple values of each parameter
The function is below:
```
wright.fisher<-function(p,Ne,nsim,t){
N <-Ne/2
NA1 <- 2*N*p
NA2 <- 2*N*(1-p)
k <- matrix(0, nrow = max(2, t), ncol = nsim)
k[1,] <- rep(NA1, nsim)
for (j in 1:nsim) {
for (i in 2:t) {
k[i, j] <- rbinom(1, 2*N, prob = k[i-1, j] / (2*N))
}
}
k <- as.matrix(k/(2*N))
t(k)
}
```
I've attempted to loop it over t (generations of drift), but that fails; the following is my code:
```
locifreq<-runif(49, .4, 0.8)
gen <- 2:99
looppop<-list()
for (i in 2:length(gen)){
looppop[i]<-lapply(locifreq,wright.fisher,3000,4,gen[i])
}
```
Doing that results in an error for each iteration -
```
> warnings()
Warning messages:
1: In pop[i] <- lapply(locifreq, wright.fisher, 3000, 4, ... :
number of items to replace is not a multiple of replacement length
2: In pop[i] <- lapply(locifreq, wright.fisher, 3000, 4, ... :
number of items to replace is not a multiple of replacement length
```
I suspect that the issue may be in storing the output into a matrix, and perhaps the function isn't accessing the matrix correctly, but I'm not sure.
Thanks!<issue_comment>username_1: Honestly, I am not completely sure what you want but does this give you what you want? I used [[i]] on the looppop instead of [i]
```
locifreq<-runif(49, .4, 0.8)
gen <- 2:99
looppop<-list()
for (i in 2:length(gen)){
looppop[[i]]<-lapply(locifreq,wright.fisher,3000,4,gen[i])
}
```
When lists are concatenated, one of two things will happen: preservation or simplification. Your code uses `[`, so simplification is attempted. Because your matrices at each iteration have different dimensions, simplification produces an error. But if you use `[[`, then the concatenation will use preservation and the result will be a list of your matrices of ever-increasing size. The accepted answer changes the concatenation from that of matrices to that of lists. In that case the `[` is preserving a list, not a matrix, and the code runs as needed.
Upvotes: 0 <issue_comment>username_2: You have to add list() inside any for loop that populates a new list inside the loop:
```
locifreq<-runif(49, .4, 0.8)
gen <- 2:99
looppop<-list()
for (i in 2:length(gen)){
looppop[i]<-list(lapply(locifreq,wright.fisher,3000,4,gen[i])) # note the list( addition
}
```
[](https://i.stack.imgur.com/mCjdx.png)
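To pull results back out afterwards, each element of `looppop` is itself a list of 49 matrices (one per locus); note that `looppop[[1]]` stays `NULL` because the loop starts at `i = 2`. A small sketch of the access pattern (indices purely illustrative):
```
res  <- looppop[[10]]   # all 49 loci, each run for t = gen[10] generations
traj <- res[[3]]        # trajectory matrix for locus 3: nsim rows, t columns
dim(traj)               # 4 x gen[10] here, given nsim = 4
```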
Upvotes: 2 [selected_answer] |
2018/03/21 | 597 | 1,923 | <issue_start>username_0: My problem is: scrape the curse price (8875.53) out of the webpage.
```
<span class="inst-rate">8875.53</span>
```
But my attempt failed. Does somebody know what my mistake is?
```
import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;
public class JavaTest {
public static void main(String[] args) throws IOException {
Document doc=
Jsoup.connect("https://www.plus500.de/Instruments/BTCUSD").get();
Elements element =doc.select(".inst-rate");
System.out.println(element);
}
}
```
Console output:
2018/03/21 | 734 | 2,423 | <issue_start>username_0: I've spent now days trying to solve a problem, but I totally failed ...
I do have a simple PNG file with only 2 indexed colors, and I need to change the first index color dynamically ... so far no problem ... (just for example, I change the color hardcoded to some green) and deliver the outcome ... calling this PHP file in a browser works perfectly:
```
<?php
$imgname = '../images/pdf/sidebar01.png';
$im = imagecreatefrompng($imgname);
imagecolorset($im,0, 0,150,0);
header("Content-type: image/png");
imagepng($im);
imagedestroy($im);
?>
```
BUT, now I need to use this image while creating a PDF file with FPDF:
```
$pdf->Image('http://server/phps/getColoredLogopart.php',50,1,15,'PNG');
```
and this line gives me the creeps ... I only get error messages from FPDF like
>
> Fatal error: Uncaught Exception: FPDF error: Unsupported image type
>
>
>
I have tried so many different variations of headers, outputs, filetypes ... nothing has worked.
Is there anybody out there, who managed to solve this issue? ... I really have no further clue on how to get this working
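One detail worth checking, as an observation rather than something stated in the thread: FPDF's `Image()` signature is `Image($file, $x = null, $y = null, $w = 0, $h = 0, $type = '', $link = '')`, so in the call above `'PNG'` is being passed as the *height*, not the type. FPDF then tries to guess the type from the `.php` extension, which produces exactly the "Unsupported image type" error. A minimal sketch of the corrected call:
```
<?php
// pass a height explicitly (0 = scale automatically) so 'PNG' lands in $type
$pdf->Image('http://server/phps/getColoredLogopart.php', 50, 1, 15, 0, 'PNG');
```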
2018/03/21 | 845 | 2,812 | <issue_start>username_0: I wrote a code for my school assignment:
>
> Write a function to change the brightness (pwm) of the led with the press of an input switch. The first press and hold increases the brightness, next press will decrease it.
>
>
>
It works when it is between 0 and 100 but when it hits 0 or 100 it just stops working. Here's the code:
```
import RPi.GPIO as GPIO
def main():
sw = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(sw, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(2, GPIO.OUT)
pwm_red=GPIO.PWM(2,500)
pwm_red.start(100)
bright=0
state=0
    while 1:
        swin = GPIO.input(sw)      # this line was garbled in the original post
        if swin == 0:              # pressed (pull-up: input reads 0 when pressed)
            if state == 1 and bright != 100:
                # bright != 100 because without it, it would just kick me out
                # of the loop when it hits 100
                bright = bright + 1
                pwm_red.ChangeDutyCycle(bright)
                print(bright)
                while GPIO.input(sw) == 1:
                    state = 0
            if state == 0 and bright != 0:
                bright = bright - 1
                pwm_red.ChangeDutyCycle(bright)
                print(bright)
                while GPIO.input(sw) == 1:
                    state = 1
```
Can someone help me find the solution? I made multiple versions of this code and they all have this problem. I tried to Google the problem but there was nothing there. I can make it so it never reaches 100 or 0, but that is not the right solution. I want it to turn off completely (when it is at 99.999999 it is still on).
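For what it's worth, one way around the boundary lock-up is to stop guarding with `bright != 100` / `bright != 0` and instead clamp the value, toggling the direction on each release. A hedged sketch (pin numbers follow the question; the press-detection details are assumptions about the wiring):
```
import time
import RPi.GPIO as GPIO

sw = 17
GPIO.setmode(GPIO.BCM)
GPIO.setup(sw, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(2, GPIO.OUT)
pwm_red = GPIO.PWM(2, 500)
pwm_red.start(0)

bright = 0
direction = 1          # +1 = brighten during this hold, -1 = dim
was_pressed = False

while True:
    pressed = GPIO.input(sw) == 0      # pull-up: reads 0 while pressed
    if pressed:
        # clamping lets bright actually reach 0 and 100 without locking up
        bright = max(0, min(100, bright + direction))
        pwm_red.ChangeDutyCycle(bright)
        print(bright)
    elif was_pressed:
        direction = -direction         # released: next hold goes the other way
    was_pressed = pressed
    time.sleep(0.05)                   # ramp speed / crude debounce
```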
2018/03/21 | 2,320 | 5,419 | <issue_start>username_0: If I wanted to sum over some variables in a data-frame using `dplyr`, I could do:
```
> head(iris)
Sepal.Length Sepal.Width Petal.Length Petal.Width Species
1 5.1 3.5 1.4 0.2 setosa
2 4.9 3.0 1.4 0.2 setosa
3 4.7 3.2 1.3 0.2 setosa
4 4.6 3.1 1.5 0.2 setosa
5 5.0 3.6 1.4 0.2 setosa
6 5.4 3.9 1.7 0.4 setosa
> select(iris, starts_with('Petal')) %>% rowSums()
[1] 1.6 1.6 1.5 1.7 1.6 2.1 1.7 1.7 1.6 1.6 1.7 1.8 1.5 1.2 1.4 1.9 1.7 1.7 2.0 1.8 1.9 1.9 1.2 2.2 2.1 1.8 2.0 1.7 1.6 1.8 1.8 1.9 1.6 1.6 1.7 1.4
[37] 1.5 1.5 1.5 1.7 1.6 1.6 1.5 2.2 2.3 1.7 1.8 1.6 1.7 1.6 6.1 6.0 6.4 5.3 6.1 5.8 6.3 4.3 5.9 5.3 4.5 5.7 5.0 6.1 4.9 5.8 6.0 5.1 6.0 5.0 6.6 5.3
[73] 6.4 5.9 5.6 5.8 6.2 6.7 6.0 4.5 4.9 4.7 5.1 6.7 6.0 6.1 6.2 5.7 5.4 5.3 5.6 6.0 5.2 4.3 5.5 5.4 5.5 5.6 4.1 5.4 8.5 7.0 8.0 7.4 8.0 8.7 6.2 8.1
[109] 7.6 8.6 7.1 7.2 7.6 7.0 7.5 7.6 7.3 8.9 9.2 6.5 8.0 6.9 8.7 6.7 7.8 7.8 6.6 6.7 7.7 7.4 8.0 8.4 7.8 6.6 7.0 8.4 8.0 7.3 6.6 7.5 8.0 7.4 7.0 8.2
[145] 8.2 7.5 6.9 7.2 7.7 6.9
```
That's fine, but I would have thought `rowwise` accomplishes the same thing, but it doesn't,
```
> select(iris, starts_with('Petal')) %>% rowwise() %>% sum()
[1] 743.6
```
What I particularly want to do is select a set of columns, and create a new variable each value of which is the maximum value of each row of the selected columns. For example, if I selected the "Petal" columns, my maximum values would be 1.4, 1.4, 1.3 and so on.
I could do it like this:
```
> select(iris, starts_with('Petal')) %>% apply(1, max)
```
and that's fine. But I'm just curious as to why the `rowwise` approach doesn't work. I realize I am using `rowwise` incorrectly, I'm just not sure why it is wrong.<issue_comment>username_1: The problem is that the entire data frame is passed as dot despite the `rowwise`. To handle this use `do` which will interpret dot as meaning just the current row. One further problem is that the dot within `do` will represent the row as a list so convert it appropriately.
```
library(dplyr)
iris %>%
slice(1:6) %>%
select(starts_with('Petal')) %>%
rowwise() %>%
do( (.) %>% as.data.frame %>% mutate(sum = sum(.)) ) %>%
ungroup
```
giving:
```
# A tibble: 6 x 3
Petal.Length Petal.Width sum
*        <dbl>       <dbl> <dbl>
1 1.40 0.200 1.60
2 1.40 0.200 1.60
3 1.30 0.200 1.50
4 1.50 0.200 1.70
5 1.40 0.200 1.60
6 1.70 0.400 2.10
```
dplyr 1.0 - added later
-----------------------
Since this was asked dplyr 1.0 was released and it has `cur_data()` which can be used to simplify the above eliminating the need for `do`. `cur_data()` within a `rowwise` block refers only to the current row.
```
iris %>%
slice(1:6) %>%
select(starts_with('Petal')) %>%
rowwise() %>%
mutate(sum = sum(cur_data())) %>%
ungroup
```
Upvotes: 4 <issue_comment>username_2: In short: you are expecting the "sum" function to be aware of `dplyr` data structures like a data frame grouped by row. `sum` is not aware of it so it just takes the sum of the whole `data.frame`.
Here is a brief explanation. This:
```
select(iris, starts_with('Petal')) %>% rowwise() %>% sum()
```
Can be rewritten without using the pipe operator as the following:
```
data <- select(iris, starts_with('Petal'))
data <- rowwise(data)
sum(data)
```
As you can see you were constructing something called a `tibble`. Then the `rowwise` call adds additional information on this object and specifies that it should be grouped row-wise.
However only the functions aware of this grouping like `summarize` and `mutate` can work like intended. Base R functions like `sum` are not aware of these objects and treat them as any standard `data.frame`s. And the standard approach for `sum()` is to sum the entire data frame.
Using `mutate` works:
```
select(iris, starts_with('Petal')) %>%
rowwise() %>%
mutate(sum = sum(Petal.Width, Petal.Length))
```
Result:
```
Source: local data frame [150 x 3]
Groups: <by row>
# A tibble: 150 x 3
Petal.Length Petal.Width sum
1 1.40 0.200 1.60
2 1.40 0.200 1.60
3 1.30 0.200 1.50
...
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: You can skip the use of `select` if you use `c_across` to select the variables you want to sum:
```
iris %>%
rowwise() %>%
mutate(sum = sum(c_across(starts_with("Petal"))), .keep = "used") %>%
ungroup()
```
**Output**
If you want to keep all the columns in your data frame then remove the `.keep` argument.
```
Petal.Length Petal.Width sum
1 1.4 0.2 1.6
2 1.4 0.2 1.6
3 1.3 0.2 1.5
4 1.5 0.2 1.7
5 1.4 0.2 1.6
6 1.7 0.4 2.1
7 1.4 0.3 1.7
8 1.5 0.2 1.7
9 1.4 0.2 1.6
10 1.5 0.1 1.6
# ... with 140 more rows
```
---
Similarly, with `max`:
```
iris %>%
rowwise() %>%
mutate(max = max(c_across(starts_with("Petal"))), .keep = "used") %>%
ungroup()
```
---
**Note**
If a row-wise aggregation function already exists, it is very likely much faster than using `rowwise`. For example, to get row sums, the row-wise aggregation function `rowSums` is available in base R and can be implemented like so with `across`, *not* `c_across`:
```
# dplyr 1.1.0 use pick instead of across
iris %>%
mutate(sum = rowSums(across(starts_with("Petal"))), .keep = "used")
```
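For row maxima specifically, base R's vectorized `pmax()` plays the same role that `rowSums` plays for sums, so no `rowwise()` is needed (a sketch, assuming dplyr >= 1.0 for `.keep`):
```
iris %>%
  mutate(max = pmax(Petal.Length, Petal.Width), .keep = "used")
```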
Upvotes: 3 |
2018/03/21 | 2,807 | 7,773 | <issue_start>username_0: Given the following code, where `x` is a dangling `const reference` to a vanished object, and is therefore undefined behavior.
```
auto get_vec() { return std::vector{1,2,3,4,5}; }
const auto& x = get_vec().back();
```
It seems that none of *GCC 7.3*, *Clang 6.0* and *MSVC* is able to emit a warning, even with all warnings enabled.
Does anyone know if it is any way to emit a warning in these cases?
Is there any difference between `const auto&` and `auto&&` in these cases?
Note: if `back()` returned by value, it wouldn't be undefined behavior, as the lifetime of the temporary object would be extended to the lifetime of the reference `x`.
*Long story: I have a code base where `const auto&` is used as the default way of initializing variables, and for some odd reason these cases execute correctly using MSVC, but when compiled with Clang for Android, every occurrence results in a wrongly assigned value. For now the solution seems to be to investigate every `const auto&` in the whole code base.
Also, in many cases the `const auto&` refers to a heavy object returned by reference so simply removing the `&` is not a solution.*
*One more thing: I'm responsible for the misuse of `const auto&` :)*<issue_comment>username_1: Almost certainly there is no way of warning on this. The compiler has no idea whether the referenced object returned by `back()` will outlive the line or not, and if it does, there's no problem (though I'd be hard pressed to think of a realistic situation where a non-static member function called on a temporary object returns a reference to an object which outlives the temporary object).
It sounds like whoever wrote that code read about [the most important const](https://herbsutter.com/2008/01/01/gotw-88-a-candidate-for-the-most-important-const/), and took away entirely the wrong lesson from it.
Upvotes: 2 <issue_comment>username_2: Only thing I can come up with right now is to use CLANG with -fsanitize=address. But of course this will only help at runtime, but then you get something nice like this:
```
==102554==ERROR: AddressSanitizer: heap-use-after-free on address 0x603000000020 at pc 0x00000050db71 bp 0x7ffdd3a5b770 sp 0x7ffdd3a5b768
READ of size 4 at 0x603000000020 thread T0
#0 0x50db70 in main (/home/user/testDang+0x50db70)
#1 0x1470fb404889 in __libc_start_main (/lib64/libc.so.6+0x20889)
#2 0x41a019 in _start (/home/user/testDang+0x41a019)
0x603000000020 is located 16 bytes inside of 20-byte region [0x603000000010,0x603000000024)
freed by thread T0 here:
#0 0x50a290 in operator delete(void*) (/home/user/testDang+0x50a290)
#1 0x50eccf in __gnu_cxx::new_allocator::deallocate(int\*, unsigned long) (/home/user/testDang+0x50eccf)
#2 0x50ec9f in std::allocator\_traits >::deallocate(std::allocator&, int\*, unsigned long) (/home/user/testDang+0x50ec9f)
#3 0x50ec2a in std::\_Vector\_base >::\_M\_deallocate(int\*, unsigned long) (/home/user/testDang+0x50ec2a)
#4 0x50e577 in std::\_Vector\_base >::~\_Vector\_base() (/home/user/testDang+0x50e577)
#5 0x50e210 in std::vector >::~vector() (/home/user/testDang+0x50e210)
#6 0x50db16 in main (/home/user/testDang+0x50db16)
#7 0x1470fb404889 in \_\_libc\_start\_main (/lib64/libc.so.6+0x20889)
previously allocated by thread T0 here:
#0 0x509590 in operator new(unsigned long) (/home/user/testDang+0x509590)
#1 0x50e9ab in \_\_gnu\_cxx::new\_allocator::allocate(unsigned long, void const\*) (/home/user/testDang+0x50e9ab)
#2 0x50e94b in std::allocator\_traits >::allocate(std::allocator&, unsigned long) (/home/user/testDang+0x50e94b)
#3 0x50e872 in std::\_Vector\_base >::\_M\_allocate(unsigned long) (/home/user/testDang+0x50e872)
#4 0x50e2ff in void std::vector >::\_M\_range\_initialize(int const\*, int const\*, std::forward\_iterator\_tag) (/home/user/testDang+0x50e2ff)
#5 0x50deb7 in std::vector >::vector(std::initializer\_list, std::allocator const&) (/home/user/testDang+0x50deb7)
#6 0x50dafb in main (/home/user/testDang+0x50dafb)
#7 0x1470fb404889 in \_\_libc\_start\_main (/lib64/libc.so.6+0x20889)
SUMMARY: AddressSanitizer: heap-use-after-free (/home/user/testDang+0x50db70) in main
Shadow bytes around the buggy address:
0x0c067fff7fb0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c067fff7fc0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c067fff7fd0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c067fff7fe0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c067fff7ff0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x0c067fff8000: fa fa fd fd[fd]fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8010: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8020: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8030: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8040: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c067fff8050: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
```
Maybe you have automated unit tests you can easily run as "sanitizer" builds.
Upvotes: 3 [selected_answer]<issue_comment>username_3: >
> I have a code base where const auto& is used as the default way of initializing variables
>
>
>
Ouch. :(
>
> for some odd reason these cases executes correctly using MSVC, but when compiled with Clang for android, every occurance results in a wrongly assigned value
>
>
>
UB is UB innit.
>
> For now the solution seems to investigate every const auto& in the whole code base
>
>
>
Yes.
Just as you cannot tell at a glance whether a particular case is "safe"/correct, the compiler cannot tell simply from a function signature.
If it always had access to the full definition of every function, it would be able to warn you in some cases (and analysis tools like `-fsanitize=address` will do their best with this), but there is no general-case solution for the compiler to detect dangling references at runtime.
Also congratulations on the payrise you can receive now that the guilty employees (the author and the reviewer) have been fired, right? :)
Upvotes: 2 <issue_comment>username_4: Obviously, for the above example, one would write something like:
```
std::vector xx{1,2,3,4,5};
const auto& x = xx.back();
```
It does not make much sense to create a whole vector to keep only its last element. And if you have an expression like the above one and want to use a single expression, then you should almost never use `auto &` to start with.
If the object is large, then you should either use move semantics or reference counting. So maybe you would have a function like `GetLastValue` that returns, by value, a copy of the last vector element, and then move that into the target destination.
You really need to understand what you are doing. Otherwise, you should use a language like C# where you need less knowledge about the internal working of the compiler or the exact language specifications.
As a general rule, I would say that you should not use `auto &` unless **you are sure** that you want a reference to the returned item. The most common case when I would use `auto &` or `const auto &` would be for a range based loop. For example, with the above vector named `xx`, I would generally write:
```
for (auto & item : xx) …
```
except if I know that it returns trivial types.
Upvotes: 0 |
2018/03/21 | 2,658 | 7,086 | <issue_start>username_0: I have django 1.11.5 and celery 4.
I want to pass user in view to task (because I can't do this in tasks.py, right?)
```
def form_valid(self, form):
form.instance.user = self.request.user
dict_obj = model_to_dict(self.request.user)
# serialized = json.dumps(dict_obj)
# print(serialized)
task_number_one.delay(dict_obj['username'])
return super().form_valid(form)
```
In celery I got an error:
```
ValueError: invalid literal for int() with base 10: 'my_username'
```
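The `invalid literal for int()` strongly suggests something downstream expects an integer (typically a primary key). A common, hedged pattern (the question doesn't show the task body, so the names here are illustrative) is to pass only `user.pk` and re-fetch the user inside the task:
```
# tasks.py (sketch)
from celery import shared_task
from django.contrib.auth import get_user_model

@shared_task
def task_number_one(user_pk):
    # look the user up inside the worker; only the int pk crosses the broker
    user = get_user_model().objects.get(pk=user_pk)
    print(user.username)
```
In the view you would then call `task_number_one.delay(self.request.user.pk)` instead of passing the username.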
2018/03/21 | 2,711 | 7,325 | <issue_start>username_0: I actually have three problems, but I believe they are all related to trying to place the two images within the same background. I only need this to appear on one page only. 1) The image on the left is XXpx away from the edge, but the image on the right is butt up against the edge. 2) The footer is appearing between the bottom 25% or so of the two images instead of being at the bottom of the page. 3) I cannot get the size of the images to change, so that each is the same.
**HTML**
```
<div class="homepage"></div>
<footer>
  <h5>© BPA Productions 2018</h5>
</footer>
```
**CSS**
```
.homepage {
background-image: url(pagepics/emmy.png), url(pagepics/big_oscar.png);
background-position: left, right;
background-repeat: no-repeat, no-repeat;
padding: 250px;
}
```
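For reference, a hedged sketch of one way to address all three symptoms at once (edge offsets in `background-position`, one `background-size` per image, and an explicit height so the footer is pushed below); the pixel values are illustrative:
```
.homepage {
  background-image: url(pagepics/emmy.png), url(pagepics/big_oscar.png);
  /* edge offsets keep both images the same distance from their edge */
  background-position: left 30px center, right 30px center;
  background-repeat: no-repeat, no-repeat;
  /* one size per image: scale both to the same height */
  background-size: auto 400px, auto 400px;
  /* reserve real height instead of padding, so the footer sits below */
  min-height: 450px;
}
```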
2018/03/21 | 621 | 1,793 | <issue_start>username_0: I found the function `pop()`, which will remove a single value from a list. However, I want to remove two values from the list - randomly, but both numbers must be adjacent. For example, in a list of `[1, 2, 3, 4, 5]`, if I randomly picked `2` with `pop()`, I'd also want to remove `1` or `3`.
I need to store the numbers (p and q) for a later calculation, here's my code so far:
```
import random
from random import shuffle

nlist = [1, 2, 3, 4, 5]
shuffle(nlist)
while nlist:
p = nlist.pop(random.randrange(len(nlist)))
#save p and one adjacent value (q) within this loop
#remove p and q from list
```<issue_comment>username_1: You could choose the `randrange` to be across one less than the list's length, and then pop the same index twice:
```
pop_index = random.randrange(len(nlist)-1)
p = nlist.pop(pop_index)
q = nlist.pop(pop_index)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to handle some edge cases when removing your elements, namely when the `p` is the first or last element in the list. This uses a handy `random` function, `choice` to determine which adjacent element you choose.
```
import random

def pop_adjacent(nlist):
    # the number to remove
    p_index = random.randrange(len(nlist))
    if p_index == 0:
        q_index = p_index + 1
    elif p_index == len(nlist) - 1:
        q_index = p_index - 1
    else:
        q_index = p_index + random.choice([-1, 1])
    p, q = nlist[p_index], nlist[q_index]
    # pop the higher index first so the lower one is not shifted
    nlist.pop(max(p_index, q_index))
    nlist.pop(min(p_index, q_index))
    return p, q

while len(nlist) > 1:
    p, q = pop_adjacent(nlist)
return p, q
```
Upvotes: 1 <issue_comment>username_3: You can try this approach:
```
from random import randint
nlist = [1, 2, 3, 4, 5]
data=randint(0,len(nlist)-2)
print([j for i,j in enumerate(nlist) if i not in range(data,data+2)])
```
output:
```
#[3, 4, 5]
#[1, 4, 5]
#[1, 2, 5]
```
Upvotes: 0 |
2018/03/21 | 618 | 1,699 | <issue_start>username_0: Hi friends I want to perform an operation I believe I can solve this with map.
The following operation is actually working correctly. But I want to do something a bit different.
When 18 increases by 1 it becomes 19. But when the next value comes, it should now be added to that 19 (the running value), not back to the original 18.
In this example, the output I want is:
```
>> [18,18,18,19,19,21]
```
This is what I can do:
```
def sum(x):
return x+18
new_list = list(map(sum, [0,0,0,1,0,2]))
>> [18, 18, 18, 19, 18, 20]
```
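For the running-total behaviour described above, `itertools.accumulate` is one straightforward option (a sketch, not taken from the thread):
```
from itertools import accumulate

deltas = [0, 0, 0, 1, 0, 2]
result = [18 + s for s in accumulate(deltas)]
print(result)  # [18, 18, 18, 19, 19, 21]
```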
2018/03/21 | 1,325 | 4,480 | <issue_start>username_0: I want to integrate AngularJS in my custom `password_reset_confirm.html` template in Django. But when I fill out a new password and hit "submit," nothing happens.
I have this in my custom `password_reset_confirm.html` template:
```
{% block main %}
{% trans "Reset password" %}
----------------------------
{% if validlink %}
{% csrf\_token %}
[[[ newPW.showPW ? 'Hide' : 'Show' ]]]
Submit
{% else %}
{% trans "The password reset link was invalid, possibly because it has already been used. Please request a new password reset." %}
{% endif %}
{% endblock main %}
```
And this in my JS file:
```
var app= angular.module("app",[]);
app.config(function($interpolateProvider, $httpProvider){
$interpolateProvider.startSymbol("[[[");
$interpolateProvider.endSymbol("]]]");
$httpProvider.defaults.xsrfCookieName= "csrftoken";
$httpProvider.defaults.xsrfHeaderName= "X-CSRFToken";
});
app.controller("Ctrl", function($scope, $http){
});
```
How do I make the form submit to the correct path for setting my new password when I hit the "submit" button?
**UPDATE** I added `action="{% url 'password_reset_confirm' %}"` to my `form` and now I get an error.
1. First I go to `password_reset/` and enter my email.
2. I check the email in my inbox and click the link to go reset my password: e.g. `http://127.0.0.1:8000/reset/MQ/4uq-4fdadaa29b99110fcccb/`
3. I see the following error.
NoReverseMatch at /reset/MQ/4uq-4fdadaa29b99110fcccb/
Reverse for 'password_reset_confirm' with arguments '()' and keyword arguments '{}' not found. 1 pattern(s) tried: [u'reset/(?P<uidb64>[0-9A-Za-z_\-]+)/(?P<token>.+)/$']
The offending line is this:
Here's some code from my `urls.py`:
```
from __future__ import unicode_literals
from django.conf.urls import include, url
from django.conf.urls.i18n import i18n_patterns
from django.contrib import admin
from django.views.i18n import set_language
from django.contrib.auth import views as auth_views
from mezzanine.core.views import direct_to_template
from mezzanine.conf import settings
from theme import views
admin.autodiscover()
urlpatterns += [
url("^map-types/$", views.get_map_types, name="map_types"),
url("^searched-location/$", views.insert_searched_location, name="searched_location"),
url("^geoms/$", views.get_geoms, name="get_geoms"),
url("^single-geom/$", views.get_single_geom_data, name="single_geom_data"),
url("^email-check/$", views.email_check, name="email_check"),
url("^login-register/$", views.login_register, name="login_register"),
url("^membership/$", views.membership, name="membership"),
url("^member/$", views.member, name="member"),
url("^change-email/$", views.change_email, name="change_email"),
url("^change-password/$", views.change_pw, name="change_pw"),
url("^change-cc/$", views.change_cc, name="change_cc"),
url("^change-plan/$", views.change_plan, name="change_plan"),
url("^cancel-plan/$", views.cancel_plan, name="cancel_plan"),
url("^logout/$", views.log_out, name="logout"),
url(r'^', include('django.contrib.auth.urls')),
url("^", include("mezzanine.urls")),
]
```<issue_comment>username_1: Either:
1. Provide an `action="/"` in your form or
2. Provide a click event on your `submit` button, `ng-click="submit()"`, and implement the `submit()` function in your controller to submit the details. If you use this option, remove `method="post"` from your form.
`$scope.submit = function () {
    $http.post('', /* form data */).then(function () { /* ... */ });
}`
Upvotes: 0 <issue_comment>username_2: The path you're sending cannot be interpreted.
Add `url(r'^', include('django.contrib.auth.urls'))` to your `urls.py` towards the end or import the individual reset view from `auth_views` as you've defined them.
Upvotes: 0 <issue_comment>username_3: `password_reset_confirm` requires two arguments (`uuidb64` and `token`) which you are not supplying to the `url` template tag, hence the `NoReverseMatch` error.
Rather than trying to reconstruct the URL with `reverse` it would be easier just to change your form `action` to point to the current URL (that is what you need to post the form to):
```
<form action="" method="post">
```
If for some reason you do want to reconstruct the URL manually then you'd need to do something like this:
```
{% url 'password_reset_confirm' uidb64=view.kwargs.uidb64 token=view.kwargs.token %}
```
... but that really isn't necessary IMO.
Upvotes: 2 [selected_answer] |
2018/03/21 | 2,780 | 10,563 | <issue_start>username_0: My following understanding of the terminology is this
>
> **1) An interrupt**
>
> is "a notification" that is initiated by the hardware to call the OS to run its handlers
>
>
> **2) A trap**
>
> is "a notification" that is initiated by the software to call the OS to run its handlers
>
>
> **3) A fault**
>
> is an exception that is raised by the processor if an error has occurred but it is recoverable
>
>
> **4) An abort**
>
> is an exception that is raised by the processor if an error has occurred but it is non-recoverable
>
>
>
Why do we call it a `segmentation fault` and not a `segmentation abort` then?
>
> **A segmentation fault**
>
> is when your program attempts to access memory it
> has either not been assigned by the operating system, or is otherwise
> not allowed to access.
>
>
>
My experience (primarily while testing `C` code) is that anytime a program throws a `segmentation fault` it is back to the drawing board - is there a scenario where the programmer can actually "catch" the exception and do something useful with it?<issue_comment>username_1: There are two types of exceptions: faults and traps. When a fault occurs, the instruction ca be restarted. When a trap occurs the instruction cannot be restarted.
For example, when a page fault occurs, the operating system exception handler loads the missing page and the restarts the instruction that caused the fault.
If the processor has defined a "segmentation fault" then the instruction causing the exception is restartable—but it is possible that the operating system's handler might not restart the instruction.
Upvotes: 1 <issue_comment>username_2: At a CPU level, modern OSes don't use x86 segment limits for memory protection. (And in fact they couldn't even if they wanted to in long mode (x86-64); segment base is fixed at 0 and limit at -1).
OSes use virtual memory page tables, so the real CPU exception on an out-of-bounds memory access is a page fault.
x86 manuals call this a **`#PF(fault-code)` exception**, e.g. see [the list of exceptions `add` can raise](https://github.com/HJLebbink/asm-dude/wiki/ADD). Fun fact: the x86 exception for access outside of a segment limit is `#GP(0)`.
It's up to the OS's page-fault handler to decide how to handle it. Many `#PF` exceptions happen as part of normal operation:
* copy-on-write mapping got written: copy the page and mark it writeable in the page table, then return to user-space to retry the instruction that faulted. (This is a type of "soft" aka "minor" page fault.)
* other soft page fault, e.g. the kernel was lazy and didn't actually have the page table updated to reflect the mappings the process made. (e.g. [`mmap(2)`](http://man7.org/linux/man-pages/man2/mmap.2.html) without `MAP_POPULATE`).
* hard page fault: find some physical memory and read the file from disk (a file mapping or from swap file/partition for anonymous pages).
After sorting out any of the above, update the page table that the CPU reads on its own, and invalidate that TLB entry if necessary. (e.g. valid but read-only changed to valid + read-write).
Only if the kernel finds that the process really doesn't logically have anything mapped to that address (or that it's a write to a read-only mapping) will the kernel deliver a **`SIGSEGV`** to the process. **This is purely a software thing,** after sorting out the cause of the hardware exception.
---
**The English text for `SIGSEGV` ([from `strerror(3)`](http://man7.org/linux/man-pages/man3/strsignal.3.html)) is "Segmentation Fault"** on all Unix/Linux systems, so that's what's printed (by the shell) when a child process dies from that signal.
This term is well understood, so even though it mostly only exists for historical reasons and hardware doesn't use segmentation.
Note that you also get a SIGSEGV for stuff like trying to execute privileged instructions in user-space (like `wbinvd` or [`wrmsr` (write model-specific register)](https://github.com/HJLebbink/asm-dude/wiki/WRMSR)). At a CPU level, the x86 exception is `#GP(0)` for privileged instructions when you're not in ring 0 (kernel mode).
Also for misaligned SSE instructions (like `movaps`), although some Unixes on other platforms send `SIGBUS` for misaligned accesses faults (e.g. Solaris on SPARC).
---
>
> Why do we call it a segmentation fault and not a segmentation abort then?
>
>
>
**It *is* recoverable**. It doesn't crash the whole machine / kernel, it just means that user-space process tried to do something that the kernel doesn't allow.
Even for the process that segfaulted it *can* be recoverable. This is why it's a catchable signal, unlike `SIGKILL`. Usually you can't just resume execution, but you can usefully record where the fault was (e.g. print a precise exception error message and even a stack backtrace).
The signal handler for SIGSEGV could `longjmp` or whatever. Or if the SIGSEGV was expected, then modify the code or the pointer used for the load, before returning from the signal handler. (e.g. [for a Meltdown exploit](https://meltdownattack.com/meltdown.pdf), although there are much more efficient techniques that do the chained loads in the shadow of a mispredict or something else that suppresses the exception, instead of actually letting the CPU raise an exception and catching the SIGSEGV the kernel delivers)
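As a concrete illustration of the "catchable" point, a minimal C sketch that recovers from a SIGSEGV with `sigsetjmp`/`siglongjmp` (fine as a demonstration; production code rarely does more than log and exit):
```c
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static sigjmp_buf env;

static void handler(int sig) {
    (void)sig;
    siglongjmp(env, 1);   /* jump back past the faulting access;
                             savemask=1 below also unblocks SIGSEGV */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = handler;
    sigaction(SIGSEGV, &sa, NULL);

    if (sigsetjmp(env, 1) == 0) {
        volatile int *p = NULL;
        *p = 42;          /* faults: the kernel turns the #PF into a SIGSEGV */
        puts("not reached");
    } else {
        puts("caught SIGSEGV and kept running");
    }
    return 0;
}
```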
Most programming languages (other than assembly) aren't low-level enough to give well defined behaviour when optimizing around an access that might segfault in a way that would let you write a handler that recovers. This is why usually you don't do anything more than print an error message (and maybe a stack backtrace) in a SIGSEGV handler if you install one at all.
---
Some JIT compilers for sandboxed languages (like Javascript) use hardware memory access checks to eliminate NULL pointer checks. In the normal case there's no fault, so it doesn't matter how slow the faulting case is.
**A Java JVM can turn a `SIGSEGV` received by a thread of the JVM into a `NullPointerException` for the Java code it's running, without any problems for the JVM.**
* [Effective Null Pointer Check Elimination Utilizing Hardware Trap](http://prolangs.cs.vt.edu/refs/docs/kawahito-asplos00.pdf) a research paper on this for Java, from three IBM scientists.
* [SableVM: 6.2.4 Hardware Support on Various Architectures](http://www.sable.mcgill.ca/%7Edbelan2/research/thesis/thesis.pdf#page=82) about NULL pointer checks
A further trick is to put the end of an array at the end of a page (followed by a large-enough unmapped region), so bounds-checking on every access is done for free by the hardware. If you can statically prove the index is always positive, and that it can't be larger than 32 bit, you're all set.
* [Implicit Java Array Bounds Checking on 64-bit
Architectures](https://www2.cs.arizona.edu/%7Edkl/Publications/Papers/ics.pdf). They talk about what to do when array size isn't a multiple of the page size, and other caveats.
---
### Trap vs. abort
I don't think there's standard terminology to make that distinction. It depends what kind of recovery you're talking about. Obviously the OS can keep running after anything user-space can make the hardware do, otherwise unprivileged user-space could crash the machine.
Related: On
[When an interrupt occurs, what happens to instructions in the pipeline?](https://stackoverflow.com/questions/8902132/when-an-interrupt-occurs-what-happens-to-instructions-in-the-pipeline), <NAME> (CPU architect who worked on Intel's P6 microarchitecture) says "trap" is basically any interrupt that's caused by the code that's running (rather than an external signal), and happens synchronously. (e.g. when a faulting instruction reaches the retirement stage of the pipeline without an earlier branch-mispredict or other exception being detected first).
"Abort" isn't standard CPU-architecture terminology. Like I said, you want the OS to be able to continue no matter what, and only hardware failure or kernel bugs normally prevent that.
AFAIK, "abort" is not very standard operating-systems terminology either. Unix has signals, and some of them are uncatchable (like SIGKILL and SIGSTOP), but most can be caught.
**[`SIGABRT` can be caught by a signal handler](https://stackoverflow.com/questions/8934879/how-to-handle-sigabrt-signal/8935037#8935037)**. The process exits if the handler returns, so if you don't want that you can `longjmp` out of it. But AFAIK no error condition raises SIGABRT; it's only sent manually by software, e.g. by calling the `abort()` library function. (It often results in a stack backtrace.)
---
### x86 exception terminology
If you look at x86 manuals or [this exception table on the osdev wiki](https://wiki.osdev.org/Exceptions), there are specific meanings in this context ([thanks to @MargaretBloom for the descriptions](https://stackoverflow.com/questions/49396346/why-are-segfaults-called-faults-and-not-aborts-if-they-are-not-recoverable/49398662?noredirect=1#comment85809199_49398662)):
* **trap**: raised after an instruction successfully completed, the return address points after the trapping inst. `#DB` debug and `#OF` overflow (`into`) exceptions are traps. ([Some sources of #DB are faults instead](https://wiki.osdev.org/Exceptions#Debug)). But `int 0x80` or other software interrupt instructions are also traps, as is `syscall` (but it puts the return address in `rcx` instead of pushing it; `syscall` is not an exception, and thus not really a trap in this sense)
* **fault**: raised after an attempted execution is made and then rolled back; the return address points to the faulting instruction. (Most exception types are faults)
* **abort** is when the return address points to an unrelated location (i.e. for `#DF` double-fault and `#MC` machine-check). Triple fault can't be handled; it's what happens when the CPU hits an exception trying to run the double-fault handler, and really does stop the whole CPU.
Note that even Intel CPU architects like Andy Glew sometimes use the term "trap" more generally, I think meaning any synchronous exception, when discussing computer-architecture theory. Don't expect people to stick to the above terminology unless you're actually talking about handling specific exceptions on x86. Although it is useful and sensible terminology, and you could use it in other contexts. But if you want to make the distinction, you should clarify what you mean by each term so everyone's on the same page.
Upvotes: 4 [selected_answer] |
2018/03/21 | 1,119 | 3,855 <issue_start>username_0: I have an API Gateway that calls Lambda function 1, which in turn invokes Lambda function 2, in Go. I want to see these 2 functions joined in the service map.
The only way I have been able to do this so far is to create a custom segment, e.g. called "parent", and then create a subsegment from this context, e.g. called "child". Then I use client.InvokeWithContext to invoke function 2, passing the "child" segment context.
```
sess := session.Must(session.NewSession())
client := lambda.New(sess, &aws.Config{Region: aws.String(region)})
xray.Configure(xray.Config{LogLevel: "trace"})
xray.AWS(client.Client)
ctx, seg := xray.BeginSegment(context.Background(), "Parent")
ctx, subseg := xray.BeginSubsegment(ctx, "Child")
result, _ := client.InvokeWithContext(ctx,
    &lambda.InvokeInput{FunctionName: aws.String(functionName), Payload: nil})
subseg.Close(nil)
seg.Close(nil)
```
The problem is that this creates the parent -> child trace in the service map, but the map also shows function 1 on its own.
What is the best way to join these 2 functions on the service map please ?
Note: I have more than 2 lambdas that I want to see linked up on the service map, to show my whole flow through them.
Please help.
Thanks
Rick<issue_comment>username_1: You don't need to add a subsegment for the "child" call unless you want to add annotation/metadata.
The API gateway adds a trace ID called `X-Amzn-Trace-Id` to the header of incoming requests, which X-ray picks up. If you forward that trace ID in your call from lambda 1 to lambda 2, then X-ray will visually represent the calls with an arrow from lambda 1 to lambda 2 in the overview and include the trace details of lambda 2 when viewing the trace details of lambda 1.
As long as you forward the top trace ID through the call chain, X-ray will correctly visualize the call chain from service to service with nodes and arrows.
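As a small illustration (a Go sketch; the environment variable is populated by the Lambda runtime), the incoming trace ID is also visible to the function itself, and an X-Ray-instrumented SDK client forwards it on outgoing calls automatically:
```
package main

import (
	"fmt"
	"os"
)

func main() {
	// Lambda populates this with the current trace ID; instrumented
	// AWS SDK clients forward it in the X-Amzn-Trace-Id header.
	fmt.Println(os.Getenv("_X_AMZN_TRACE_ID"))
}
```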
From <https://aws.amazon.com/xray/faqs/>:
>
> Q: What is a trace?
>
>
> An X-Ray trace is a set of data points that share the same trace ID.
> For example, when a client makes a request to your application, it is
> assigned a unique trace ID. As the request makes its way through
> services in your application, the services relay information regarding
> the request back to X-Ray using this unique trace ID. The piece of
> information relayed by each service in your application to X-Ray is a
> segment, and a trace is a collection of segments.
>
>
>
<https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader>
<https://docs.aws.amazon.com/xray/latest/devguide/xray-services-apigateway.html>
Upvotes: 3 <issue_comment>username_2: This [Go and Lambda boilerplate app](https://github.com/nzoschke/gofaas) demonstrates Lambda to Lambda traces on the X-Ray service map:
<https://github.com/nzoschke/gofaas/blob/master/worker.go>
[Resulting X-Ray Service Map](https://i.stack.imgur.com/UQBKW.png)
`WorkCreateFunction` (function 1) is an API Gateway handler function. It calls `WorkerFunction` (function 2) via a `Lambda.InvokeWithContext` call.
The trick is to instrument the Lambda API client with xray before making Lambda API calls:
```
// Lambda is an xray instrumented Lambda client
func Lambda() *lambda.Lambda {
c := lambda.New(sess)
xray.AWS(c.Client)
return c
}
out, err := Lambda().InvokeWithContext(ctx, &lambda.InvokeInput{
FunctionName: aws.String(os.Getenv("WORKER_FUNCTION_NAME")),
InvocationType: aws.String("Event"), // async
})
if err != nil {
return responseEmpty, errors.WithStack(err)
}
```
The `aws-xray-sdk-go` copies the `X-Amzn-Trace-Id` header from function 1 into the Lambda API request for function 2:
<https://github.com/aws/aws-xray-sdk-go/blob/master/xray/aws.go#L56>
If this is not working, try updating to the latest `aws-xray-sdk-go`.
Upvotes: 3 [selected_answer] |
2018/03/21 | 1,023 | 2,837 <issue_start>username_0: I want to filter my data frame so that I get all the columns that have a particular value for a given row.
```
DF
vec1 vec2 vec3
1 a aa d
2 b bb e
3 c cc f
4 1 1 2
```
For example, all the columns that have `1` in the 4th row.
```
DF[4,1]==1 and DF[4,2]==1
# These both evaluate to TRUE. I want those columns.
vec1 vec2
1 a aa
2 b bb
3 c cc
4 1 1
```
or all the columns that have something other than `1` in the 4th row
```
DF[4,3]==1
# This evaluates to FALSE. So this would go in a separate data frame
vec3
1 d
2 e
3 f
4 2
```<issue_comment>username_1: Generally, the way people use data.frames and their children, this is most often done the other way around. The columns are variables, and the rows are observations of those variables. You can filter a data frame for only those observations (rows) with a particular value for a variable (column). It's not typical, but you could do it the other way around, I suppose.
Filtering by columns instead of rows:
```
DF <- data.frame(vec1 = c("a", "b", "c", 1), vec2 = c("aa", "bb", "cc", 1), vec3 = c("d", "e", "f", 2))
DF
# vec1 vec2 vec3
# 1 a aa d
# 2 b bb e
# 3 c cc f
# 4 1 1 2
DF[,DF[4,] == 1, drop = FALSE]
# vec1 vec2
# 1 a aa
# 2 b bb
# 3 c cc
# 4 1 1
DF[,DF[4,] != 1, drop = FALSE]
# vec3
# 1 d
# 2 e
# 3 f
# 4 2
```
NOTE: you don't NEED `drop = FALSE` if your filter by columns expression is going to have more than one column, but you do if it isn't. The idiom you rely on should account for this, since you may not know ahead of time how many columns will meet your conditions. That's why it's in both statements.
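To see the difference, a sketch using the same DF:
```
# Without drop = FALSE, a single matching column collapses to a plain vector
# (losing the data.frame structure and the column name):
DF[, DF[4,] != 1]

# With drop = FALSE, the result stays a one-column data.frame:
DF[, DF[4,] != 1, drop = FALSE]
```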
EDIT:
@thelatemail's suggestion in the comments works as well. For my own style, I don't like to use `c()` for its attribute-stripping effect, but you might prefer not having to type drop.
```
DF[c(DF[4,] != 1)]
# vec3
# 1 d
# 2 e
# 3 f
# 4 2
```
If you use either of these, you'll want to pay special attention to the commas.
Upvotes: 4 [selected_answer]<issue_comment>username_2: This sounds like a `dplyr` problem using some combination of `slice` to pick rows and `filter` to pick observations based on certain values, but it's not clear from your question above what you're trying to do. Here are some examples:
```
library(dplyr)
> df %>% slice(n=4)
# A tibble: 1 x 3
vec1 vec2 vec3
1 1 1 2
> df %>% slice(n=4) %>% filter(vec1==1)
# A tibble: 1 x 3
vec1 vec2 vec3
1 1 1 2
> df %>% slice(n=4) %>% filter(vec2==1 && vec3==2)
# A tibble: 1 x 3
vec1 vec2 vec3
1 1 1 2
> df %>% slice(n=4) %>% filter(vec2==1 && vec3==1)
# A tibble: 0 x 3
# ... with 3 variables: vec1 , vec2 , vec3
> df %>% filter(vec1==1)
vec1 vec2 vec3
1 1 1 2
```
Upvotes: 1 |
2018/03/21 | 738 | 2,246 | <issue_start>username_0: I have a blank table for which I've set up a trigger:
```
CREATE OR REPLACE TRIGGER authors_bir
BEFORE INSERT ON authors
FOR EACH ROW
begin
if upper(:new.name) = 'TEST' then
raise_application_error(-20001, 'Sorry, that value is not allowed.');
end if;
end;
```
After executing:
```
insert into AUTHORS
VALUES (1, 'test', '1-Jan-1989', 'M');
```
Why am I getting ORA-06512 and ORA-04088 error messages in addition to the expected ORA-20001 error prompt?
**ErrorMessage**
```
Error starting at line : 5 in command -
insert into AUTHORS
VALUES (1, 'test', '1-Jan-1989', 'M')
Error report -
ORA-20001: Sorry, that value is not allowed.
ORA-06512: at "RPS.AUTHORS_BIR", line 3
ORA-04088: error during execution of trigger 'RPS.AUTHORS_BIR'
```<issue_comment>username_1: Your trigger works perfectly. ORA-06512 is part of the error stack and tells you which line of code raised the ORA-20001 that you coded, while ORA-04088 says that an error occurred during execution of the trigger. Both error codes are a generic part of Oracle's error reporting.
Upvotes: 3 [selected_answer]<issue_comment>username_2: According to [documentation](https://docs.oracle.com/cd/E11882_01/server.112/e17766/e4100.htm):
>
> ORA-06512: at stringline string
>
>
> Cause: Backtrace message as the stack is unwound by unhandled exceptions.
>
>
>
Basically, this error is part of the error stack, telling you at which line the actual error occurred.
And [documentation](https://docs.oracle.com/cd/E11882_01/server.112/e17766/e2100.htm):
>
> ORA-04088: error during execution of trigger 'string.string'
>
>
> Cause: A runtime error occurred during execution of a trigger.
>
>
>
And this error is a part of the error stack telling you that error actually occurred in a trigger.
When an unhandled error occurs, error stack is always displayed. If you want to display just the error message, you could use an exception handling part so body of the trigger would look something like this:
```
begin
if upper(:new.name) = 'TEST' then
raise_application_error(-20001, 'Sorry, that value is not allowed.');
end if;
exception
when others then
    dbms_output.put_line(sqlcode || ' ' || sqlerrm);
end;
```
Note, though, that swallowing the exception this way also lets the INSERT succeed; if the trigger is meant to block the row, re-raise the exception after printing the message.
Upvotes: 0 |
2018/03/21 | 732 | 2,318 | <issue_start>username_0: I have everything I'd like my code to perform, but it is bugging me that the square is not centered. I've been searching the internet for hours...please help! I have tried using `anchor = "center"` and `.place()` but I just can't seem to get it right.
```
from tkinter import *
import random

class draw():
    def __init__(self, can, start_x, start_y, size):
        self.can = can
        self.id = self.can.create_rectangle((start_x, start_y, start_x+size, start_y+size), fill="red")
        self.can.tag_bind(self.id, "<Button-1>", self.set_color)
        self.color_change = True

    def set_color(self, event=None):
        self.color_change = not self.color_change
        colors = ["red", "orange", "yellow", "green", "blue", "violet", "pink", "teal"]
        self.can.itemconfigure(self.id, fill=random.choice(colors))

root = Tk()
canvas = Canvas(root)
canvas.grid(column=1, row=1)
square = draw(canvas, 1, 1, 90)
root.mainloop()
```<issue_comment>username_1: By defining a height and a width for the canvas and using `pack()` instead of `grid()` (like so)
```
from tkinter import *
import random

class draw():
    def __init__(self, can, start_x, start_y, size):
        self.can = can
        self.id = self.can.create_rectangle((start_x, start_y, start_x+size, start_y+size), fill="red")
        self.can.tag_bind(self.id, "<Button-1>", self.set_color)
        self.color_change = True

    def set_color(self, event=None):
        self.color_change = not self.color_change
        colors = ["red", "orange", "yellow", "green", "blue", "violet", "pink", "teal"]
        self.can.itemconfigure(self.id, fill=random.choice(colors))

WIDTH = 400   # change as needed
HEIGHT = 500  # change as needed

root = Tk()
canvas = Canvas(root, height=HEIGHT, width=WIDTH)
canvas.pack()
square = draw(canvas, WIDTH/2, HEIGHT/2, 10)
root.mainloop()
```
You can center the rectangle
Upvotes: 2 [selected_answer]<issue_comment>username_2: Your starting position is being set by your call to the draw method. You can have it automatically detect the correct center by calculating it from the canvas object.
```
size = 90
center_height = canvas.winfo_reqheight() / 2 - size / 2
center_width = canvas.winfo_reqwidth() / 2 - size / 2
square = draw(canvas, center_width, center_height, size)
```
You could also set start_x and start_y in the draw method if you'd prefer.
Upvotes: 0 |
2018/03/21 | 518 | 1,670 | <issue_start>username_0: How can I add a Xamarin.Forms XAML file to an FSharp project?
By default, XAML files have C# code-behind.
I tried moving a XAML file with C# code-behind to an F# project.
I then changed the file extension to ".fs".
However, my attempt crashed VS2017.
2018/03/21 | 1,119 | 3,775 <issue_start>username_0: I am writing a few sbt tasks in a Scala file. These sbt tasks will be imported into many other projects.
```
lazy val root = (project in file(".")).
settings(
inThisBuild(List(
organization := "com.example",
scalaVersion := "2.11.8",
version := "1.0.0"
)),
name := "sbttasks",
libraryDependencies ++= Seq(
"org.scala-sbt" % "sbt" % "1.0.0" % "provided"
)
)
```
I get a compilation error
```
[error] java.lang.RuntimeException: Conflicting cross-version suffixes in: org.scala-lang.modules:scala-xml, org.scala-lang.modules:scala-parser-combinators
[error] at scala.sys.package$.error(package.scala:27)
[error] at sbt.librarymanagement.ConflictWarning$.processCrossVersioned(ConflictWarning.scala:39)
[error] at sbt.librarymanagement.ConflictWarning$.apply(ConflictWarning.scala:19)
[error] at sbt.Classpaths$.$anonfun$ivyBaseSettings$64(Defaults.scala:1995)
[error] at scala.Function1.$anonfun$compose$1(Function1.scala:44)
[error] at sbt.internal.util.$tilde$greater.$anonfun$$u2219$1(TypeFunctions.scala:39)
[error] at sbt.std.Transform$$anon$4.work(System.scala:66)
[error] at sbt.Execute.$anonfun$submit$2(Execute.scala:262)
[error] at sbt.internal.util.ErrorHandling$.wideConvert(ErrorHandling.scala:16)
[error] at sbt.Execute.work(Execute.scala:271)
[error] at sbt.Execute.$anonfun$submit$1(Execute.scala:262)
[error] at sbt.ConcurrentRestrictions$$anon$4.$anonfun$submitValid$1(ConcurrentRestrictions.scala:174)
[error] at sbt.Completion
```
I don't want to write the custom tasks in build.sbt itself (as the SBT documentation shows) because then I won't be able to import my custom tasks into other projects.<issue_comment>username_1: To write reusable tasks that you can "import" in different projects, you need to make an [sbt plugin](https://www.scala-sbt.org/release/docs/Using-Plugins.html).
If you have a [multi-project build](https://www.scala-sbt.org/release/docs/Multi-Project.html) and want to reuse your tasks in the subprojects, you can create a file `project/MyPlugin.scala` with
```
import sbt._
import sbt.Keys._

object MyPlugin extends AutoPlugin {
  override def trigger = noTrigger

  object autoImport {
    val fooTask = taskKey[Foo]("Foo description")
    val barTask = taskKey[Bar]("Bar description")
  }

  import autoImport._

  override lazy val projectSettings = Seq(
    fooTask := { ??? },
    barTask := { ??? }
  )
}
```
Then to enable this plugin (i.e. make those tasks available) in a subproject, you can write this in your `build.sbt`:
```
lazy val subproject = (project in file("subproject"))
.enablePlugins(MyPlugin)
```
---
By contrast, if you want to reuse these tasks in other, unrelated projects, you need to make this plugin a separate project and publish it. It's a normal sbt project, but instead of an explicit sbt dependency, you write in its `build.sbt`:
```
sbtPlugin := true
```
And the code defining tasks goes to `src/main/scala/` (like in a normal project).
You can read in detail about writing plugins in the [sbt documentation](https://www.scala-sbt.org/release/docs/Plugins.html).
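Once the plugin is enabled, the tasks can be run on the subproject as usual (a hypothetical session):
```
sbt subproject/fooTask
```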
Upvotes: 3 [selected_answer]<issue_comment>username_2: Change version of **"org.scala-sbt" to "1.0.0-M4"**
```
lazy val root = (project in file(".")).
settings(
inThisBuild(List(
organization := "com.example",
scalaVersion := "2.11.8",
version := "1.0.0",
name := "sbttasks"
)),
libraryDependencies ++= Seq(
"org.scala-sbt" % "sbt" % "1.0.0-M4" % "provided"
)
)
```
For entire compatibility matrix check
<https://mvnrepository.com/artifact/org.scala-sbt/main>
Upvotes: 0 |
2018/03/21 | 384 | 1,376 | <issue_start>username_0: I'm still very new to programming so bear with me. This is what I have started with:
```
letter = input ("Please enter a letter of the alphabet to be displayed as a banner: ")
while letter == **UPPERCASE**:
......
```
What I want to do is: if the input is an uppercase letter, to continue with the while loop, but if the input letter is lowercase to not even start the while loop. So what could I put in place of "**UPPERCASE**" to check for uppercase or lowercase and use it in a while or if statement.
Thanks in advance.<issue_comment>username_1: If you take the input as a char, you can test it with a for loop like this:
```
for i in range(ord('A'), ord('Z') + 1):
    if letter == chr(i):
        while True:
            # Do whatever
            pass
```
This compares the variable letter to every capital letter in the alphabet, and if it finds a match it means that yes, it is a capital letter.
Upvotes: 0 <issue_comment>username_2: ```
d = {"Upper case": 0, "Lower case": 0}
sen = input(">")
for i in sen:
if i.isupper():
d["Upper case"]+=1
elif i.islower():
d["Lower case"]+=1
else:
pass
print("Upper", d["Upper case"])
print("Lower", d["Lower case"])
```
In Python, isupper() and islower() are built-in string methods.
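Applied to the loop in the question, a minimal sketch (assuming a single-letter input) looks like:
```
letter = input("Please enter a letter of the alphabet to be displayed as a banner: ")
while letter.isupper():
    # ... display the banner here, then prompt again ...
    letter = input("Please enter a letter of the alphabet to be displayed as a banner: ")
```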
Upvotes: 2 |
2018/03/21 | 344 | 1,165 | <issue_start>username_0: I have this JSON file,
```
{
"file_paths": {
"PROCESS": "C:\incoming",
"FAILED": "C:\failed"
}
}
```
I get an error when I try to access PROCESS or FAILED. The error is `SyntaxError: Unexpected token i in JSON`. It must be due to the backslashes. How can I access PROCESS or FAILED without editing the JSON file?
2018/03/21 | 292 | 1,028 | <issue_start>username_0: I am trying to implement some changes into a deployed Flask API.
Am I able to edit files in the deployed API and have the changes take effect without having to restart the API build?
2018/03/21 | 576 | 2,057 | <issue_start>username_0: I am writing a Java program that can convert an inputted Roman Numeral into a Short, and I want it to be able to recognize and ask for input again when any letter has been inputted besides a Roman Numeral (I, V, X, L, C, D, and M).
Is there a method similar to .contains(), but with the opposite function?
Or do I have to check each individual letter with some kind of loop?<issue_comment>username_1: I suggest you use a regular expression to check whether the input is a Roman numeral. You can find a regular expression for this problem [here](https://stackoverflow.com/a/267405). Use `String#matches()` to determine whether your input matches the regex.
```
if(!input.matches("^M{0,4}(CM|CD|D?C{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})$")) { // input is the String the user entered
// handle invalid input here
}
```
Upvotes: 1 <issue_comment>username_2: Well of course, you need some type of filter to test against the input.
One solution could be to use a string that contains all the possible valid characters in the input and then return false if a character wasn't found in the filter.
```
public class HelloWorld
{
    public static boolean filter(String test, String filter) {
        for (int i = 0; i < test.length(); i++) {
            if (filter.indexOf(test.charAt(i)) == -1) {
                return false;
            }
        }
        return true;
    }

    // arguments are passed using the text field below this editor
    public static void main(String[] args)
    {
        System.out.println(filter("XDQX", "XDQ"));
    }
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: Just testing for valid characters, not for a valid sequence, can be done with contains:
```
boolean roman (String s)
{
    for (char c : s.toCharArray())
        if (! "IVXCLDM".contains("" + c))
            return false;
    return true;
}
```
However, I would prefer a regular expression
```
boolean roman (String s)
{
    return s.matches("[IVXCLDM]+");
}
```
which means any number(+) of characters from that Set, at least one.
Upvotes: 0 |
2018/03/21 | 802 | 3,011 <issue_start>username_0: I have 2 textboxes and want to validate whether the fields are empty. But I want to display the message differently based on the conditions.
//Inside my Model
```
[Required(ErrorMessage ="Please enter Name")]
public string txtName { get; set; }
[Required(ErrorMessage = "Address")]
public string txtAddress { get; set; }
```
//In 1st view, two textboxes appear so the message should be "Please enter Name/Address"
```
@Html.ValidationMessageFor(m => m.txtName, "", new { @class = "error" })
@Html.ValidationMessageFor(m => m.txtAddress, "", new { @class = "error" })
```
//In 2nd View, only Address textbox appears, so the message should be "Please enter Address"
```
@Html.ValidationMessageFor(m => m.txtAddress, "", new { @class = "error" })
```
I do not have an idea how to achieve this. Anyone who has any ideas, please share. Thank you.<issue_comment>username_1: There is a (quick and dirty) workaround. In the second view, instead of using @Html.EditorFor(m => m.txtAddress), use plain HTML with the jQuery validation attributes.
Example (the original markup was lost in extraction; this reconstruction is illustrative):
```
<input type="text" name="txtAddress" id="txtAddress"
       data-val="true" data-val-required="Please enter Address" />
```
Please note the usage of the data-val-required attribute that contains the actual message.
Upvotes: 1 <issue_comment>username_2: Well based on your comments you could do something, but the main principles should really be one "id" per page, and the DataAnnotations in the ViewModel are tied to that one input. Having to work around this strongly suggests that the design of the page(s) should be reviewed.
You could consider:
(i) text inputs (note TextBoxFor and TextBox):
```
@Html.TextBoxFor(m => m.txtAddress, new { @class = "some-class" })
@Html.ValidationMessageFor(m => m.txtAddress, "", new { @class = "error" })
// txtAddressTwo takes the value assigned to txtAddress in the ViewModel:
@Html.TextBox("txtAddressTwo", Model.txtAddress, new { @class = "some-class" })
@Html.ValidationMessage("txtAddressTwo", null, new { @class = "error" })
```
(ii) Set the validation rules for txtAddressTwo using jQuery:
```
$(function () {
$("#txtAddressTwo").rules("add", {
required: true,
messages: {
required: "xxx Please enter Address xxx"
}
});
});
```
(iii) Reconcile values when the form is submitted (not great practice)
```
$(function () {
$("#your-form-id").on("submit", function () {
// Work out the appropriate reconciliation using
var a = $("#txtAddress").val();
var b = $("#txtAddressTwo").val();
// For example (or something similar (very hacky))
if ($("#txtAddress").val() === "") {
$("#txtAddress").val($("#txtAddressTwo").val());
}
});
});
```
I can't help but think that there must be some special reason to have to need to go about it in this way, and I would really suggest revisiting the limitations you have about only one entry in the ViewModel.
Anyway, I hope this helps a little and sets you on the path to a solution.
Upvotes: 3 [selected_answer] |
2018/03/21 | 931 | 3,822 | <issue_start>username_0: My app has an `Activity` that's declared with:
```
android:windowSoftInputMode="adjustResize|stateAlwaysHidden"
```
`adjustResize` is set and in that activity I have a `RecyclerView` and an `EditText` as in a chat like app.
The problem I'm facing is that when the keyboard shows up the layout is resized as intented but it also scrolls up the `RecyclerView` contents.
The behavior desired is that the scroll stays put.
I've tried using `LayoutManager#onSaveInstanceState()` and its counterpart `LayoutManager#onRestoreInstanceState()` to no avail.
I know many have had similar issues, but I couldn't find a good solution for this.<issue_comment>username_1: Ok, amazing how a clear head and a sudden flash of thought work wonders.
I don't know about everyone else, but I hadn't found a solution for this simple problem anywhere, and this works for me. Sharing:
```
recyclerView.addOnLayoutChangeListener { _, _, _, _, bottom, _, _, _, oldBottom ->
    val y = oldBottom - bottom
    val firstVisibleItem = linearLayoutManager.findFirstCompletelyVisibleItemPosition()
    if (y.absoluteValue > 0 && !(y < 0 && firstVisibleItem == 0)) {
        recyclerView.scrollBy(0, y)
    }
}
```
Only drawback so far is that when you're scrolled to the second item and you hide the soft keyboard, it scrolls to the very end but no big deal for me.
Hope it helps someone.
**EDIT**:
Here's how I solved without any drawbacks now:
```
private var verticalScrollOffset = AtomicInteger(0)

recyclerView.addOnLayoutChangeListener { _, _, _, _, bottom, _, _, _, oldBottom ->
    val y = oldBottom - bottom
    if (y.absoluteValue > 0) {
        // if y is positive the keyboard is up, else it's down
        recyclerView.post {
            if (y > 0 || verticalScrollOffset.get().absoluteValue >= y.absoluteValue) {
                recyclerView.scrollBy(0, y)
            } else {
                recyclerView.scrollBy(0, verticalScrollOffset.get())
            }
        }
    }
}

recyclerView.addOnScrollListener(object : RecyclerView.OnScrollListener() {
    var state = AtomicInteger(RecyclerView.SCROLL_STATE_IDLE)

    override fun onScrollStateChanged(recyclerView: RecyclerView, newState: Int) {
        state.compareAndSet(RecyclerView.SCROLL_STATE_IDLE, newState)
        when (newState) {
            RecyclerView.SCROLL_STATE_IDLE -> {
                if (!state.compareAndSet(RecyclerView.SCROLL_STATE_SETTLING, newState)) {
                    state.compareAndSet(RecyclerView.SCROLL_STATE_DRAGGING, newState)
                }
            }
            RecyclerView.SCROLL_STATE_DRAGGING -> {
                state.compareAndSet(RecyclerView.SCROLL_STATE_IDLE, newState)
            }
            RecyclerView.SCROLL_STATE_SETTLING -> {
                state.compareAndSet(RecyclerView.SCROLL_STATE_DRAGGING, newState)
            }
        }
    }

    override fun onScrolled(recyclerView: RecyclerView, dx: Int, dy: Int) {
        if (state.get() != RecyclerView.SCROLL_STATE_IDLE) {
            verticalScrollOffset.getAndAdd(dy)
        }
    }
})
```
**EDIT2**: This can be easily converted to Java if that's your poison.
Upvotes: 3 [selected_answer]<issue_comment>username_2: use addOnscrollListener to check weather currently recyclerview is at bottom or not
if yes then use the method provided in accepted answer by doing it youe recycler only scroll when last item is visible else it wont scroll
```
recycler.addOnScrollListener(new RecyclerView.OnScrollListener() {
    @Override
    public void onScrollStateChanged(@NonNull RecyclerView recyclerView, int newState) {
        super.onScrollStateChanged(recyclerView, newState);
        cannotScrollVertically = !recyclerView.canScrollVertically(1); // use a Boolean variable
    }
});
```
Upvotes: 0 |
2018/03/21 | 1,436 | 4,701 | <issue_start>username_0: I'm assuming ImageMagick is the best option for this, but please let me know if you have other recommendations that can be scripted.
I am trying to replace all the 32x32 tiles of an image with a single tile. This is an example for the original image:
[](https://i.stack.imgur.com/byXPr.png)
This is the tile that I want to use to replace all tiles on the original image:
[](https://i.stack.imgur.com/Ro1Lp.png)
And this is what I want the output to be:
[](https://i.stack.imgur.com/yyYMq.png)
I've figured out from other posts on Stack Overflow that I can use ImageMagick's *composite* option to overlay the tile onto the original image:
`$ convert original.png tile.png -composite overlay.png`
Resulting in the following:
[](https://i.stack.imgur.com/0LKIT.png)
And I assume that, knowing the original image's dimensions, I can overlay the tile manually multiple times. But is there a way to automate the process? In the example pictures I have given, I need to overlay the tile 8 times on the original 64x128 image.
How can I do this with ImageMagick or another software? And if ImageMagick, would the *montage* or *composite* command be a better option?
**Edit:** As an additional question, would it be possible to skip tiles that are completely transparent?
Input example:
[](https://i.stack.imgur.com/GM5Aq.png)
Output example:
[](https://i.stack.imgur.com/r13jP.png)
It isn't really important to be able to do this part, but would be nice.<issue_comment>username_1: I don't know why you would need to overlay the 8 tiles on the original. Just create it from scratch and name the output the same as your original
You could use Imagemagick montage to do that (unix syntax):
```
nx=`convert original.png -format "%[fx:w/32]" info:`
ny=`convert original.png -format "%[fx:h/32]" info:`
num=$((nx*ny-1))
montage tile.png -duplicate $num -tile ${nx}x${ny} -geometry +0+0 result.png
```
[](https://i.stack.imgur.com/1os6f.png)
Here I use convert to duplicate the tile, but it uses a relatively recent -duplicate feature. If you do not have a current enough version of ImageMagick, then just repeat the tile in montage as follows:
```
montage Ro1Lp.png Ro1Lp.png Ro1Lp.png Ro1Lp.png Ro1Lp.png Ro1Lp.png Ro1Lp.png Ro1Lp.png -tile 2x8 -geometry +0+0 result.png
```
Upvotes: 2 <issue_comment>username_2: If the tile image fits evenly into the dimensions of the original, a command like this should do most of what you want...
```
convert original.png tile.png -background none -virtual-pixel tile \
-set option:distort:viewport %[fx:u.w]x%[fx:u.h] -distort SRT 0 +swap \
-compose copyopacity -composite overlay.png
```
That reads in both images. Then it creates another canvas the size of the original and filled with multiple copies of the tile image. Then it uses the original as a transparency mask to create a copy of the new tiled image with the same transparent cells as the original.
Upvotes: 3 [selected_answer]<issue_comment>username_3: As Fred (username_1) says, *"why don't you just create the whole image from scratch?"*.
Maybe your description isn't complete, so here are a couple more pieces that might help you work it out.
Given `bluetiles.png` and `singlered.png`:
[](https://i.stack.imgur.com/CVv18.png) [](https://i.stack.imgur.com/SpReH.png)
you can position red ones as you wish like this:
```
convert bluetiles.png \
singlered.png -geometry +0+32 -composite \
singlered.png -geometry +32+96 -composite result.png
```
[](https://i.stack.imgur.com/esjQp.png)
---
Given `bluewithtransparent.png`:
[](https://i.stack.imgur.com/AAkLX.png)
you can copy its transparency to the newly-created image like this:
```
convert bluetiles.png \
singlered.png -geometry +0+32 -composite \
singlered.png -geometry +32+96 -composite \
\( bluewithtransparent.png -alpha extract \) -compose copyopacity -composite result.png
```
[](https://i.stack.imgur.com/rOOeX.png)
Upvotes: 0 |
2018/03/21 | 1,909 | 7,063 | <issue_start>username_0: There is a table in our SQL Server 2012 to generate and send emails. Its simplified structure is as follows:
```
CREATE TABLE [dbo].[EmailRequest]
(
[EmailRequestID] [int] NOT NULL,
[EmailAddress] [varchar](1024) NULL,
[CCEmailAddress] [varchar](1024) NULL,
[EmailReplyToAddress] [varchar](128) NULL,
[EmailReplyToName] [varchar](128) NULL,
[EmailSubject] [varchar](max) NULL,
[EmailBody] [varchar](max) NULL,
[Attachments] [varchar](max) NULL,
[CreateDateTime] [datetime] NULL,
[_EmailSent] [varchar](1) NULL,
[_EmailSentDateTime] [datetime] NULL,
CONSTRAINT [PK_EmailRequest]
PRIMARY KEY CLUSTERED ([EmailRequestID] ASC)
)
```
I don't have any control over that table or the database where it sits; it is provided "as is".
Different programs and scripts insert records into the table at random intervals. I suspect most of them do this with queries like this:
```
INSERT INTO [dbo].[EmailRequest] ([EmailRequestID], ... )
SELECT MAX([EmailRequestID]) + 1,
FROM [dbo].[EmailRequest];
```
I run a big SQL script which at some conditions must send emails as well. In my case the part responsible for emails looks like this:
```
INSERT INTO [dbo].[EmailRequest] ([EmailRequestID], ... )
SELECT MAX([EmailRequestID]) + 1,
FROM [dbo].[EmailRequest]
JOIN db1.dbo.table1 ON ...
JOIN db1.dbo.table2 ON ... and so on;
```
The "select" part takes its time, so when it actually inserts data the calculated `MAX([EmailRequestID]) + 1` value may become redundant and cause primary key violation (rare event, but nevertheless annoying one).
The question: is there a way to design the query so it calculates `MAX([EmailRequestID])+1` later, just before `insert`?
One of the options might be:
```
INSERT INTO [dbo].[EmailRequest] ([EmailRequestID], ... )
SELECT
(SELECT MAX([EmailRequestID]) + 1
FROM [dbo].[EmailRequest]),
FROM db1.dbo.table1
JOIN db1.dbo.table2 ON ... and so on;
```
but I am not sure if it brings any advantages.
So there may be another question: is there a way to see "time-lapse" of query execution?
Testing is a challenge, because no one sends request to the test database, so I will never get PK violation in there.
Thank you.
**Some amazing results from testing the accepted answer.**
The elapsed time for original (real) query - 2000...2800 ms;
same query without "insert" part - 1200...1800 ms.
Note: the "select" statement collects information from three databases.
The test query retains real "select" statement (removed below):
```
Declare @mailTable table
(mt_ID int,
mt_Emailaddress varchar(1024),
mt_CCEmailAddress varchar(1024),
mt_EmailSubject varchar(max),
mt_EmailBody varchar(max)
);
insert into @mailTable
select row_number() over (ORDER BY (SELECT NULL)),
am.ul_EMail, ... -- EmailAddress - the rest is removed
FROM ;
insert into dbo.EmailRequest
    (EmailRequestID, _MessageID, EmailType, EmailAddress, CCEmailAddress,
     BulkFlag, EmailSubject, EmailBody, EmailReplyToAddress,
     CreateDateTime, SQLServerUpdated, SQLServerDateTime, _EmailSent)
select (select Max(EmailRequestID)+1 from dbo.EmailRequest),
    0, '*TEXT',          -- _MessageID, EmailType
    mt_Emailaddress,
    mt_CCEmailAddress,
    'N',                 -- BulkFlag
    mt_EmailSubject,     -- EmailSubject
    mt_EmailBody,        -- EmailBody
'', GetDate(), '0', GetDate(), '0'
FROM @mailTable;
```
Elapsed time on 10 runs for first part - 48 ms (worst), 8 (best);
elapsed time for second part, where collision may occur - 85 ms (worst), 1 ms (best)<issue_comment>username_1: You don't have any good options, if you cannot fix the table. The table should be defined as:
```
CREATE TABLE [dbo].[EmailRequest](
[EmailRequestID] [int] identity(1, 1) NOT NULL PRIMARY KEY,
. . .
```
Then the database will generate a unique id for each row.
If you didn't care about performance, you could lock the table to prevent other threads from writing to it. That's a lousy idea.
Your best bet is to capture the error and try again. No guarantee of when things will finish, and you could end up with different threads all deadlocking.
Wait, there is one thing you could do. You could use a sequence instead of the max id. If you control *all* the inserts into the table, then you could create a sequence and insert from that value rather than from the table. This would solve the performance problem and the need for a unique id. To really effect this, you would want to take the database down, bring it back up, set up all the code using the sequence, and then let'er rip.
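A sketch of that sequence approach (object names are illustrative; SQL Server 2012 supports `CREATE SEQUENCE`):
```
-- One-time setup: start the sequence above the current MAX(EmailRequestID).
CREATE SEQUENCE dbo.EmailRequestSeq AS INT START WITH 100000 INCREMENT BY 1;

-- Every writer draws its key atomically, so concurrent inserts cannot collide:
INSERT INTO dbo.EmailRequest (EmailRequestID, EmailAddress)
VALUES (NEXT VALUE FOR dbo.EmailRequestSeq, 'someone@example.com');
```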
That said, much the better solution is an identity primary key.
Upvotes: 2 <issue_comment>username_2: I know this might not be the most ideal solution, but I wanted to add it for completeness sake. Unfortunately, sometimes we don't have much of a choice in how we deal with certain problems.
Let me preface this with a disclaimer:
This may not work well in extremely high concurrency scenarios since it will hold an exclusive lock on the table. In practice, I've used this approach with up to 32 concurrent threads interacting with the table across 4 different machines and this was not the bottleneck. Make sure that the transaction here runs separately if at all possible.
The basic idea is that you perform your complex query first and store the results somewhere temporarily (a table variable in this example). You then take a lock on the table while locating the max ID, insert your records based on that ID, and then release the lock.
Assuming your table is structured like this:
```
CREATE TABLE EmailRequest (
EmailRequestID INT,
Field1 INT,
Field2 VARCHAR(20)
);
```
You could try something like this to push your inserts:
```
-- Define a table variable to hold the data to be inserted into the main table:
DECLARE @Emails TABLE(
RowID INT IDENTITY(1, 1),
Field1 INT,
Field2 VARCHAR(20)
);
-- Run the complex query and store the results in the table variable:
INSERT INTO @Emails (Field1, Field2)
SELECT Field1, Field2
FROM (VALUES
(10, 'DATA 1'),
(11, 'DATA 2'),
(15, 'DATA 3')
) AS a (Field1, Field2);
BEGIN TRANSACTION;
-- Determine the current max ID, and lock the table:
DECLARE @MaxEmailRequestID INT = (
SELECT ISNULL(MAX(EmailRequestID), 0)
FROM [dbo].[EmailRequest] WITH(TABLOCKX, HOLDLOCK)
);
-- Insert the records into the main table:
INSERT INTO EmailRequest (EmailRequestID, Field1, Field2)
SELECT
@MaxEmailRequestID + RowID,
Field1,
Field2
FROM @Emails;
-- Commit to release the lock:
COMMIT;
```
If your complex query returns a large number of rows (thousands), you might want to consider using a temp table instead of a table variable.
Honestly, even if you remove the `BEGIN TRANSACTION`, `COMMIT`, and locking hints (`WITH(TABLOCKX, HOLDLOCK)`), this still has the potential to dramatically reduce the frequency of the issue you described. In that case, the disclaimer above would no longer apply.
Upvotes: 2 [selected_answer] |
2018/03/21 | 909 | 1,993 <issue_start>username_0: I have a huge list of tuples that looks like:
```
data =[
('-0.167969896634', '0.475981802514', ''),
('-0.186100643368', '0.47510168705', ''),
('-0.205064369305', '0.476225633961', ''),
...]
```
I need to remove the ' ' and convert them all to floats, but I am not sure how to do it. I tried this:
```
[tuple(float(x) for x in t) for t in data]
```
It however will tell me that a `str` cannot be converted to type `float` and I suspect it may be the `''` part?<issue_comment>username_1: You can convert using a try/except block like:
### Code:
```
def convert(a_float):
    try:
        return float(a_float)
    except ValueError:
        return 0
```
### Test Code:
```
data = [('-0.167969896634', '0.475981802514', ''),
('-0.186100643368', '0.47510168705', ''),
('-0.205064369305', '0.476225633961', '')
]
new_data = [tuple(convert(f) for f in t) for t in data]
print(new_data)
```
### Results:
```
[(-0.167969896634, 0.475981802514, 0),
(-0.186100643368, 0.47510168705, 0),
(-0.205064369305, 0.476225633961, 0)]
```
Upvotes: 2 <issue_comment>username_2: If empty strings are the only non-numeric values in your floats, then simply filter them out:
```
>>> tuple(map(float, filter(lambda s: s, ('1.23', '3.14', ''))))
(1.23, 3.14)
```
Or simply `None` for identity function:
```
>>> tuple(map(float, filter(None, ('1.23', '3.14', ''))))
(1.23, 3.14)
```
Upvotes: 2 <issue_comment>username_3: A simple list comprehension will do the job along with [`filter`](https://docs.python.org/3/library/functions.html#filter) and [`map`](https://docs.python.org/3/library/functions.html#map). `filter` will be used to filter out `None` values and `map` will be used to cast the filtered string to `float`
```
>>> [tuple(map(float,filter(None, a))) for a in data]
>>> [
(-0.167969896634, 0.475981802514),
(-0.186100643368, 0.47510168705),
(-0.205064369305, 0.476225633961)
]
```
Upvotes: 1 |
2018/03/21 | 1,549 | 4,188 <issue_start>username_0: I need to plot the velocities of some objects (cars).
Each velocity is calculated by a routine and written to a file, roughly like this (I have deleted some lines to simplify):
```
thefile_v = open('vels.txt', 'w')
for car in cars:
    velocities.append(new_velocity)
    if len(car.velocities) > 4:
        try:
            thefile_v.write("%s\n" % car.velocities)  # write vels once we get 5 values
            thefile_v.close()
        except:
            print "Unexpected error:", sys.exc_info()[0]
            raise
```
The result of this is a text file with list of velocities for each car.
something like this:
```
[0.0, 3.8, 4.5, 4.3, 2.1, 2.2, 0.0]
[0.0, 2.8, 4.0, 4.2, 2.2, 2.1, 0.0]
[0.0, 1.8, 4.2, 4.1, 2.3, 2.2, 0.0]
[0.0, 3.8, 4.4, 4.2, 2.4, 2.4, 0.0]
```
Then I wanted to plot each velocity
```
with open('vels.txt') as f:
    lst = [line.rstrip() for line in f]

plt.plot(lst[1])  # let's plot the second line
plt.show()
```
This is what I found. The values are taken as a string and put them as yLabel.

I got it working through this:
```
import numpy as np

y = np.fromstring(str(lst[1])[1:-1], dtype=np.float, sep=',')
plt.plot(y)
plt.show()
```

What I learnt is that the velocity lists I built previously were treated as lines of text.
I had to convert them to arrays to be able to plot them. However, the brackets [] were getting in the way, so I converted each line of data to a string and stripped the brackets with the [1:-1] slice.
It is working now, but I'm sure there is a better way of doing this.
Any comments?<issue_comment>username_1: Just say you had the array `[0.0, 3.8, 4.5, 4.3, 2.1, 2.2, 0.0]`, to graph this the code would look something like:
```
import matplotlib.pyplot as plt
ys = [0.0, 3.8, 4.5, 4.3, 2.1, 2.2, 0.0]
xs = [x for x in range(len(ys))]
plt.plot(xs, ys)
plt.show()
# Make sure to close the plt object once done
plt.close()
```
if you wanted to have different intervals for the x axis then:
```
interval_size = 2.4 #example interval size
xs = [x * interval_size for x in range(len(ys))]
```
**Also**, when reading your values from the text file, make sure that you have **converted your values from strings back to numbers**. This may be why your code is treating your input as the y label.
Upvotes: 3 <issue_comment>username_2: Just one possible easy solution: use the `map` function. Say in your file you have the data stored like this, without any `[` and `]` characters that can't be converted to numbers.
```
#file_name: test_example.txt
0.0, 3.8, 4.5, 4.3, 2.1, 2.2, 0.0
0.0, 2.8, 4.0, 4.2, 2.2, 2.1, 0.0
0.0, 1.8, 4.2, 4.1, 2.3, 2.2, 0.0
0.0, 3.8, 4.4, 4.2, 2.4, 2.4, 0.0
```
Then the next step is:
```
import matplotlib.pyplot as plt

path = r'VAR_DIRECTORY/test_example.txt'  # the full path of the file
with open(path, 'rt') as f:
    ltmp = [list(map(float, line.split(','))) for line in f]

plt.plot(ltmp[1], 'r-')
plt.show()
```
Above, I just assume you want to plot the second line, `0.0, 2.8, 4.0, 4.2, 2.2, 2.1, 0.0`. Then here is the result.
[](https://i.stack.imgur.com/QCCdH.png)
Upvotes: 0 <issue_comment>username_3: The example is not complete, so some assumptions must be made here. In general, use numpy or pandas to store your data.
Suppose `car` is an object, with a `velocity` attribute, you can write all velocities in a list, save this list as text file with numpy, read it again with numpy and plot it.
```
import numpy as np
import matplotlib.pyplot as plt

class Car():
    def __init__(self):
        self.velocity = np.random.rand(5)

cars = [Car() for _ in range(5)]
velocities = [car.velocity for car in cars]
np.savetxt("vels.txt", np.array(velocities))

####
vels = np.loadtxt("vels.txt")
plt.plot(vels.T)
## or plot only the first velocity
#plt.plot(vels[0])
plt.show()
```
[](https://i.stack.imgur.com/FH7bE.png)
Upvotes: 1 [selected_answer] |
2018/03/21 | 462 | 1,622 <issue_start>username_0: Okay, so I have the following issue. I have a Mac, so the default Python 2.7 is installed for the OS's use. However, I also have Python 3.6 installed, and I want to install a package using pip that is only compatible with Python 3. How can I install a package with Python 3 and not 2?<issue_comment>username_1: Why do you ask such a thing here?
<https://docs.python.org/3/using/mac.html>
>
> 4.3. Installing Additional Python Packages
> There are several methods to install additional Python packages:
>
>
> Packages can be installed via the standard Python distutils mode (python setup.py install).
> Many packages can also be installed via the setuptools extension or pip wrapper, see <https://pip.pypa.io/>.
>
>
>
<https://pip.pypa.io/en/stable/user_guide/#installing-packages>
>
> Installing Packages
> pip supports installing from PyPI, version control, local projects, and directly from distribution files.
>
>
> The most common scenario is to install from PyPI using Requirement Specifiers
>
>
> `$ pip install SomePackage` # latest version
> `$ pip install SomePackage==1.0.4` # specific version
> `$ pip install 'SomePackage>=1.0.4'` # minimum version
> For more information and examples, see the pip install reference.
>
>
>
Upvotes: 0 <issue_comment>username_2: To download use
```
pip3 install package
```
and to run the file
```
python3 file.py
```
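If several Python versions are installed, invoking pip through the interpreter you intend to use removes any ambiguity about which installation it targets:
```
python3 -m pip install package
```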
Upvotes: 1 <issue_comment>username_3: Just a suggestion: before you run any command that you don't know, please use `which your_cmd` or `whereis your_cmd` to find its path.
Upvotes: 0 |
2018/03/21 | 740 | 2,590 | <issue_start>username_0: I have a project loaded in Android Studio 3.0. Gradle sync works fine, and the project builds.
When I add `implementation 'com.amazonaws.aws-android-sdk-mobile-client:2.6.+'` to my build.gradle (Module: app) file, right next to all the other dependencies that are already part of this fine project, Gradle fails to find that dependency. Many of the existing project dependencies are under `com.amazonaws:aws-android-sdk-*` and are being synced just fine, e.g. `implementation 'com.amazonaws:aws-android-sdk-core:2.6.+'` is fine.
So I double check that new project dependency actually exists, browsing <http://repo.maven.apache.org/maven2/com/amazonaws/aws-android-sdk-mobile-client> shows it exists, I don't see a typo.
Looking at my build.gradle (Project: myProject), I see the following
```
allprojects {
repositories {
mavenCentral()
google()
jcenter()
}
}
```
Yet when gradle syncs i get
```
Unable to resolve dependency for ':app@debug/compileClasspath': Could not resolve com.amazonaws.aws-android-sdk-mobile-client:2.6.+:.
Could not resolve com.amazonaws.aws-android-sdk-mobile-client:2.6.+:.
Required by:
project :app
No cached version of com.amazonaws.aws-android-sdk-mobile-client:2.6.+: available for offline mode.`
```
2018/03/21 | 405 | 1,435 | <issue_start>username_0: In the [doc](https://docs.oracle.com/javase/8/docs/api/java/nio/file/Files.html) it says
>
> Read all lines from a file as a `Stream`.
>
>
>
Does that necessarily mean that it's loading the entire file? For example:
```
try (Stream<String> stream = Files.lines(Paths.get("myfilename.txt"))) {
    stream.forEach(x -> {
```
if `myfilename` is *100GB*, will `Files.lines` load the entire *100GB*?<issue_comment>username_1: No, it doesn't load the entire file into memory. It internally uses a `BufferedReader` with the default buffer size, repeatedly calling `readLine()`.
Upvotes: 3 <issue_comment>username_2: Well, the link you provided states it already:
>
> Unlike readAllLines, this method does not read all lines into a List, but instead populates lazily as the stream is consumed.
>
>
>
So for every time your `for-each` block is called, a new line is read.
Upvotes: 3 <issue_comment>username_3: A new method, lines(), was added in Java 8; it lets BufferedReader return its content as a Stream. This method does not read all lines into a List, but instead populates lazily as the stream is consumed: each iteration of your for-each reads a new line. Hope this helps!
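A minimal, self-contained sketch of that lazy consumption (the filename is illustrative):
```
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class CountLines {
    public static void main(String[] args) throws IOException {
        // Each line is read on demand; the whole file is never held in memory.
        try (Stream<String> lines = Files.lines(Paths.get("myfilename.txt"))) {
            System.out.println(lines.filter(s -> !s.isEmpty()).count());
        }
    }
}
```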
Upvotes: 2 <issue_comment>username_4: There is an issue with the Spliterator if Files.lines is used with a parallel stream. Then all processed lines will be stored in the Spliterator, which makes it unusable for large files.
Upvotes: 0 |
2018/03/21 | 570 | 2,232 <issue_start>username_0: Today when I woke up to continue my development work I got a `Firefox` update, and then I wasn't able to reach my `localhost` websites because they were being redirected to the `HTTPS` protocol.
We all know that Google did the same a while before, but since many of us mostly use Firefox, we (at least I) didn't care and continued our work with Firefox. Now that Firefox has decided to play with us developers, here are some unanswered questions for me:
Questions
=========
1. How do we add HTTPS to our localhost?
2. Should we buy SSL certificate for our local environment?
3. How do I add SSL to my laravel project on localhost?
4. What will happen if I develop application with SSL and when I move it to host my domain doesn't have SSL (will be any conflict there?)
Concerns
========
My main concerns are:
1. What if I don't want to buy an `SSL certificate` for my local environment and **don't want to share my project's data (such as names etc.) with others (basically SSL companies)**.
2. What if I develop with `HTTPS` and my live site is `HTTP`
UPDATE
======
As I'm working on `Windows` and using `Laragon` (I don't know about MAMP, XAMPP etc.), here is how I solved my issue **but I'm still looking for answers to my other questions**.
First of all I turned on Laragon's SSL certificate, then I changed my domain suffix to `pp`; now my sites load like `domain.pp`.
PS: I also tested the same way with `.local`, `.test` and `.app`; they didn't work, but `pp` did.<issue_comment>username_1: You can also change the domain suffix.
just like
* .localhost
* .invalid
* .test
* .example
Upvotes: 1 <issue_comment>username_2: The folks that created DesktopServer (which I \*\*\*highly\*\*\*\* recommend over MAMP/XAMPP) registered the domain .dev.cc for local development use when Google did its thing with dev, which, as we all know, now requires https for local work when you use Chrome or Firefox. When you use DesktopServer to install a new instance of a site locally, DS will append the .dev.cc TLD which will only exist on your local computer. DesktopServer modifies all instances of .dev.cc to the correct production domain when you push your site to live. But, even if you don't use DS, you can use the .dev.cc domain.
Upvotes: 0 |
2018/03/21 | 1,578 | 5,090 | <issue_start>username_0: I've been tasked with locating the bug in the following code, and fixing it:
```
/* $Id: count-words.c 858 2010-02-21 10:26:22Z tolpin $ */
#include <stdio.h>
#include <string.h>

/* return string "word" if the count is 1 or "words" otherwise */
char *words(int count) {
    char *words = "words";
    if(count==1)
        words[strlen(words)-1] = '\0';
    return words;
}

/* print a message reporting the number of words */
int print_word_count(char **argv) {
    int count = 0;
    char **a = argv;
    while(*(a++))
        ++count;
    printf("The sentence contains %d %s.\n", count, words(count));
    return count;
}

/* print the number of words in the command line and return the number as the exit code */
int main(int argc, char **argv) {
    return print_word_count(argv+1);
}
```
The program works well for every number of words given to it, except for one word. Running it with `./count-words hey` will cause a segmentation fault.
I'm running my code on the Linux subsystem on Windows 10 (that's what I understand it is called at least...), with the official Ubuntu app.
When running the program from terminal, I do get the segmentation fault, but using gdb, for some reason the program works fine:
```
(gdb) r hey
Starting program: .../Task 0/count-words hey
The sentence contains 1 word.
[Inferior 1 (process 87) exited with code 01]
(gdb)
```
After adding a breakpoint on line 9 and stepping through the code, I get this:
```
(gdb) b 9
Breakpoint 1 at 0x400579: file count-words.c, line 9.
(gdb) r hey
Starting program: /mnt/c/Users/tfrei/Google Drive/BGU/Semester F/Computer Architecture/Labs/Lab 2/Task 0/count-words hey
Breakpoint 1, words (count=1) at count-words.c:9
9 if(count==1)
(gdb) s
10 words[strlen(words)-1] = '\0';
(gdb) s
strlen () at ../sysdeps/x86_64/strlen.S:66
66 ../sysdeps/x86_64/strlen.S: No such file or directory.
(gdb) s
67 in ../sysdeps/x86_64/strlen.S
(gdb) s
68 in ../sysdeps/x86_64/strlen.S
(gdb)
```
The weird thing is that when I ran the same thing from a "true" Ubuntu (using a virtual machine on Windows 10), the segmentation fault did happen on gdb.
I tend to believe that the reason for this is somehow related to my runtime environment (the "Ubuntu on Windows" thing), but could not find anything that will help me.
This is my makefile:
```
all:
gcc -g -Wall -o count-words count-words.c
clean:
rm -f count-words
```
Thanks in advance<issue_comment>username_1: This function is wrong
```
char *words(int count) {
    char *words = "words";
    if(count==1)
        words[strlen(words)-1] = '\0';
    return words;
}
```
The pointer `words` points to the string literal `"words"`. Modifying a string
literal is undefined behaviour, and on most systems string literals are stored in
read-only memory, so doing
```
words[strlen(words)-1] = '\0';
```
will lead to a segfault. That's the behaviour you see in Ubuntu. I don't know
where string literals are stored in Windows executables, but modifying a string
literal is undefined behaviour and anything can happen and it's pointless to try
to deduce why sometimes things work and why sometimes things don't work. That's
the nature of undefined behaviour.
**edit**
>
> username_1 thanks, but I'm not asking about the bug itself, and why the segmentation fault happened. I'm asking why it didn't happen with gdb. Sorry if that was not clear enough.
>
>
>
I don't know why it doesn't happen for you, but when I run your code in my gdb I get:
```
Reading symbols from ./bug...done.
(gdb) b 8
Breakpoint 1 at 0x6fc: file bug.c, line 8.
(gdb) r hey
Starting program: /tmp/bug hey
Breakpoint 1, words (count=1) at bug.c:8
8 words[strlen(words)-1] = '\0';
(gdb) s
Program received signal SIGSEGV, Segmentation fault.
0x0000555555554713 in words (count=1) at bug.c:8
8 words[strlen(words)-1] = '\0';
(gdb)
```
Upvotes: 0 <issue_comment>username_2: >
> I'm asking why it didn't happen with gdb
>
>
>
It *did* happen with GDB, when run on a real (or virtual) UNIX system.
It didn't happen when running under the weird "Ubuntu on Windows" environment, because that environment is doing crazy sh*t. In particular, for some reason the Windows subsystem maps usually readonly sections (`.rodata`, and probably `.text` as well) with writable permissions (which is why the program no longer crashes), but only when you run the program under a debugger.
I don't know why exactly Windows does that.
Note that debuggers *do* need to write to the (read-only) `.text` section in order to insert breakpoints. On a real UNIX system, this is achieved by the `ptrace(PTRACE_POKETEXT, ...)` system call, which updates the read-only page, *but leaves it read-only* for the inferior (being debugged) process.
I am guessing that Windows is imperfectly emulating this behavior (in particular does not write-protect the page after updating it).
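For illustration, here is a stripped-down sketch of that mechanism; error handling is omitted and `addr` is assumed to be a valid instruction address inside the traced process:
```
#include <sys/ptrace.h>
#include <sys/types.h>

/* Patch an x86 breakpoint (INT3, 0xCC) into the text of process `pid`.
   The kernel performs the write even though the page stays read-only
   from the inferior's point of view. */
long set_breakpoint(pid_t pid, void *addr) {
    long orig = ptrace(PTRACE_PEEKTEXT, pid, addr, 0);
    ptrace(PTRACE_POKETEXT, pid, addr, (orig & ~0xFFL) | 0xCC);
    return orig;   /* saved so the byte can be restored later */
}
```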
P.S. In general, using "Ubuntu on Windows" to learn Ubuntu is going to be full of gotchas like this one. You will likely be *much* better off using a virtual machine instead.
Upvotes: 2 [selected_answer] |
2018/03/21 | 396 | 1,259 | <issue_start>username_0: In a Zeppelin notebook, running the following query with elasticsearch-py 5x
```
es = Elasticsearch(["es-host:9200"])
es.search(index="some_index",
doc_type="some_type",
body={"query": {"term": {"day": "2018_02_04"}}}
)
```
Takes 28 minutes to return.
From the same notebook, using curl to run:
```
curl -XGET 'http://es-host:9200/some_index/some_type/_search?pretty' -H 'Content-Type: application/json' -d'
{"query": {"term": {"day": "2018_02_04"}}}
'
```
returns basically instantly.
Why is the python library performance so poor, and what can be done to make it fast?<issue_comment>username_1: This is not anything I have ever seen; judging from this question, I would guess that there is something wrong with your environment.
Upvotes: 0 <issue_comment>username_2: I do not understand *why* this works, but if I add a `filter_path` to the query, it returns as quickly as the raw curl:
```
es = Elasticsearch(["es-host:9200"])
results = es.search(index="some_index",
doc_type="some_type",
filter_path=['hits.hits._id'],
body={"query": {"term": {"day": "2018_02_04"}}}
)
```
If anyone has an explanation for this behavior, I'd appreciate that.
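My best guess (untested) is that without `filter_path` the client has to download and JSON-decode the full `_source` of every hit, and with large documents that decoding dominates. One way to test that theory is to time the same query while shrinking the payload with the documented `size` and `_source` parameters:
```
import time
from elasticsearch import Elasticsearch

es = Elasticsearch(["es-host:9200"])  # host/index names as in the question
query = {"query": {"term": {"day": "2018_02_04"}}}

for kwargs in ({}, {"size": 1}, {"_source": False}):
    start = time.time()
    es.search(index="some_index", doc_type="some_type", body=query, **kwargs)
    print(kwargs, "->", round(time.time() - start, 2), "seconds")
```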
Upvotes: 2 [selected_answer] |
2018/03/21 | 4,918 | 18,527 | <issue_start>I am trying to configure Kubernetes RBAC in the least-permissive way possible and I want to scope my roles to specific resources and subresources. I've dug through the docs and can't find a concise list of resources and their subresources.
I'm particularly interested in the subresource that governs a part of a Deployment's spec--the container image.<issue_comment>username_1: *I hesitate to even put this as an "Answer", but it is for sure too long for a comment*
For the list of resources, are you aware of `$HOME/.kube/cache/discovery` wherein the Swagger JSON files are persisted to disk in a directory that matches their enclosing `apiVersion`? [This is](https://blog.openshift.com/kubernetes-deep-dive-api-server-part-3a/) the fastest link I could find (look in the "Discovering and Using CRDs" heading) but `ls -la ~/.kube/cache/discovery` will show what I mean. Those Swagger JSON files enumerate all the major players within an `apiVersion` in a way that I find a lot more accessible than the API reference website.
I don't have those files in front of me to know if they contain subresource definitions, so hopefully someone else can weigh in on that.
The minor asterisk to the "weigh in" part is that, based on the surfing I did of the RBAC docs and the 1.9 API reference, I didn't get the impression that a subresource is "field level access" to its parent resource. For example, [v1beta1/Evictions](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#eviction-v1beta1-policy) is a Pod subresource of `/evictions`, which to the best of my knowledge is not a field within `PodSpec`.
So if you are interested in doing RBAC to constrain a Deployment's image, you may be *much* happier with [Webhook Mode](https://kubernetes.io/docs/admin/authorization/webhook/) where one can have almost unbounded business logic applied to the attempted request.
Upvotes: 3 <issue_comment>username_2: You can find the resources list of Kubernetes v1.26 from here: <https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.26/>. For other K8s versions, check <https://kubernetes.io/docs/reference/kubernetes-api/>
Check the catalog on the left side: for example, 'Workloads' gives a high-level overview of the basic resource types such as Container, Deployment, CronJob etc. Resources like 'Container, Deployment, CronJob' are the typical basic Kubernetes API resources.
You can access these basic resources via kubectl; there is also a list of 'Resource types' available at <https://kubernetes.io/docs/reference/kubectl/cheatsheet/>
But I'm confused by your statement "the subresource that governs a part of a Deployment's spec--the container image": if you are trying to manage the permissions of a container image, you should do it on your image registry, not on the Kubernetes side. For example, your registry should have an access controller that authenticates users pulling images.
Upvotes: 2 <issue_comment>username_3: The resources, sub-resources and verbs that you need to define RBAC roles are not documented anywhere in a static list. They are available in the discovery documentation, i.e. via the API, e.g. `/api/apps/v1`.
The following bash script will list all the resources, sub-resources and verbs in the following format:
```
api_version resource: [verb]
```
where `api_version` is `core` for the core resources and should be replaced by `""` (an empty quoted string) in your role definition.
For example, `core pods/status: get patch update`.
The script requires [jq](https://stedolan.github.io/jq/).
```
#!/bin/bash
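# NOTE: assumes an unauthenticated API endpoint, e.g. one exposed with: kubectl proxy --port=8080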
SERVER="localhost:8080"
APIS=$(curl -s $SERVER/apis | jq -r '[.groups | .[].name] | join(" ")')
# do core resources first, which are at a separate api location
api="core"
curl -s $SERVER/api/v1 | jq -r --arg api "$api" '.resources | .[] | "\($api) \(.name): \(.verbs | join(" "))"'
# now do non-core resources
for api in $APIS; do
version=$(curl -s $SERVER/apis/$api | jq -r '.preferredVersion.version')
curl -s $SERVER/apis/$api/$version | jq -r --arg api "$api" '.resources | .[]? | "\($api) \(.name): \(.verbs | join(" "))"'
done
```
**WARNING:** Note that where no verbs are listed via the api, the output will just show the api version and the resource, e.g.
```
core pods/exec:
```
In the specific instance of the following resources, no verbs are shown via the api, which is wrong (Kubernetes bug [#65421](https://github.com/kubernetes/kubernetes/issues/65421), fixed by [#65518](https://github.com/kubernetes/apiserver/commit/173c0190d3dc561466519bed9f9776a925b8cbc3)):
```
nodes/proxy
pods/attach
pods/exec
pods/portforward
pods/proxy
services/proxy
```
The supported verbs for these resources are as follows:
```
nodes/proxy: create delete get patch update
pods/attach: create get
pods/exec: create get
pods/portforward: create get
pods/proxy: create delete get patch update
services/proxy: create delete get patch update
```
**WARNING 2:** Sometimes Kubernetes checks for additional permissions using specialised verbs that are not listed here. For example, the `bind` verb is needed for `roles` and `clusterroles` resources in the `rbac.authorization.k8s.io` API group. Details of these specialised verbs are to be found in the [docs here](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb).
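To connect the output back to a role definition: a subresource is written after a slash in the `resources` list. A sketch of a least-privilege rule built from the `core pods/status: get patch update` example above (role and namespace names are illustrative):
```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-status-reader   # illustrative name
  namespace: default
rules:
- apiGroups: [""]           # "" is the core API group, as noted above
  resources: ["pods/status"]
  verbs: ["get"]
```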
Upvotes: 5 <issue_comment>username_4: Using `kubectl api-resources -o wide` shows all the **resources**, **verbs** and associated **API-group**.
```
$ kubectl api-resources -o wide
NAME SHORTNAMES APIGROUP NAMESPACED KIND VERBS
bindings true Binding [create]
componentstatuses cs false ComponentStatus [get list]
configmaps cm true ConfigMap [create delete deletecollection get list patch update watch]
endpoints ep true Endpoints [create delete deletecollection get list patch update watch]
events ev true Event [create delete deletecollection get list patch update watch]
limitranges limits true LimitRange [create delete deletecollection get list patch update watch]
namespaces ns false Namespace [create delete get list patch update watch]
nodes no false Node [create delete deletecollection get list patch update watch]
persistentvolumeclaims pvc true PersistentVolumeClaim [create delete deletecollection get list patch update watch]
persistentvolumes pv false PersistentVolume [create delete deletecollection get list patch update watch]
pods po true Pod [create delete deletecollection get list patch update watch]
statefulsets sts apps true StatefulSet [create delete deletecollection get list patch update watch]
meshpolicies authentication.istio.io false MeshPolicy [delete deletecollection get list patch create update watch]
policies authentication.istio.io true Policy [delete deletecollection get list patch create update watch]
...
...
```
I guess you can use this to create the list of resources needed in your RBAC config.
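The command also accepts filters, which helps when narrowing a role down to one API group; for example (flags as documented by `kubectl api-resources --help`):
```
kubectl api-resources --api-group=apps --namespaced=true -o name
```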
Upvotes: 7 <issue_comment>username_5: ```
for kind in `kubectl api-resources | tail -n +2 | awk '{ print $1 }' | sort`; do kubectl explain $kind ; done | grep -e "KIND:" -e "VERSION:" | awk '{print $2}' | paste -sd' \n'
```
Upvotes: 3 <issue_comment>username_6: If you are using the kubectl krew plug-in, I suggest using [get-all](https://github.com/kubernetes-sigs/krew-index/blob/master/plugins.md). It can get almost 90% of resources, including configmap, secret, endpoints, istio, etc.
It also has a great arg, `--since`, which you can use to list resources created in the last x minutes.
Example:
```
kubectl get-all --since 1d
```
[](https://i.stack.imgur.com/qZG17.png)
Upvotes: 0 <issue_comment>username_7: I wrote a tiny Go utility for this exact purpose. It generates a complete RBAC role with every possible resource & sub-resource on the cluster, which you can then prune back to fit your role's use case.
<https://github.com/username_7/kube-role-gen>
Upvotes: 2 <issue_comment>username_8: Another option, especially for those who don't have immediate access to a live `k8s`, is the `OpenAPI` spec.
From the [api reference](https://kubernetes.io/docs/reference/kubernetes-api/), you can reach the [latest docs](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/) which has a link, at the top right, to the [git managed OpenAPI spec](https://github.com/kubernetes/kubernetes/blob/release-1.19/api/openapi-spec/swagger.json) which you can load at the [Swagger live web editor](https://editor.swagger.io/).
Endpoints like `/api/v1/namespaces/{namespace}/pods/{name}/log` will be listed there.
I placed all these links in an attempt to future-proof this answer. I couldn't find a `/latest`-type URL that points to the latest version.
Upvotes: 0 <issue_comment>username_9: Markdown version, using kubectl instead of curl
-----------------------------------------------
Here follows a different code snippet, derived from the script posted in the [answer by username_3](https://stackoverflow.com/a/51289417/1010681).
When executed in Bash, it produces a more detailed output in the form of a [Markdown table](https://www.markdownguide.org/extended-syntax/#tables), saved as the file `Kubernetes_API_resources.md`.
It uses `kubectl get --raw ...` instead of `curl` to query the API, and the resulting Markdown file documents its own creation in a code block.
```sh
echo "# Kubernetes API resources
Updated on `date -I`
\`\`\`bash
${BASH_COMMAND}
\`\`\`
| API name/version | Resource | Verbs | Kind | Namespaced |
| ---------------- | -------- | ----- | ---- | ---------- |
`
for apipath in $(kubectl api-versions | sort | sed '/\//{H;1h;$!d;x}'); do
version=${apipath#*/}
api=${apipath%$version}
api=${api%/}
prefix="/api${api:+s}/"
api=${api:-(core)}
>&2 echo "${prefix}${apipath}: ${api}/${version}"
kubectl get --raw "${prefix}${apipath}" | jq -r --arg api "${api}/${version}" '.resources | sort_by(.name) | .[]? | "| \($api) | \(.name) | \(.verbs | join(" ")) | \(.kind) | \(if .namespaced then "true" else "false" end) |"'
done
`" > Kubernetes_API_resources.md
```
Upvotes: 2 <issue_comment>username_10: You can use the `explain` command to get more details about an api-resource and its sub-resources.
Here I am taking the POD api-resource as an example:
```
kubectl explain pod
KIND: Pod
VERSION: v1
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts.
```
If you want to check more about the spec section (sub-resource) of a POD, use
```
kubectl explain pod.spec
```
For tolerations:
```
kubectl explain pod.spec.tolerations
```
and if you want to check a value and its input type, use
```
kubectl explain pod.spec.tolerations.value
```
[](https://i.stack.imgur.com/4V9Qk.png)
Hope that answers your question
Upvotes: 0 <issue_comment>username_11: There is a kubectl plugin - [rbac-tool](https://github.com/alcideio/rbac-tool/releases/tag/v1.10.0) - that has a subcommand that outputs the available permissions for each resource (and subresource).
Under the hood it uses the Kubernetes dynamic API client to fetch the server's API resources for all groups.
For example:
```
$kubectl rbac-tool show --for-groups=,apps
```
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations: null
creationTimestamp: null
labels: null
name: custom-cluster-role
rules:
- apiGroups:
- ""
resources:
- bindings
verbs:
- create
- apiGroups:
- ""
resources:
- componentstatuses
verbs:
- get
- list
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- endpoints
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- limitranges
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- namespaces
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- namespaces/finalize
verbs:
- update
- apiGroups:
- ""
resources:
- namespaces/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- nodes/proxy
verbs:
- create
- delete
- get
- patch
- update
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- persistentvolumeclaims
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- persistentvolumeclaims/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- persistentvolumes
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- persistentvolumes/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods/attach
verbs:
- create
- get
- apiGroups:
- ""
resources:
- pods/binding
verbs:
- create
- apiGroups:
- ""
resources:
- pods/eviction
verbs:
- create
- apiGroups:
- ""
resources:
- pods/exec
verbs:
- create
- get
- apiGroups:
- ""
resources:
- pods/log
verbs:
- get
- apiGroups:
- ""
resources:
- pods/portforward
verbs:
- create
- get
- apiGroups:
- ""
resources:
- pods/proxy
verbs:
- create
- delete
- get
- patch
- update
- apiGroups:
- ""
resources:
- pods/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- podtemplates
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- replicationcontrollers
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- replicationcontrollers/scale
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- replicationcontrollers/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- resourcequotas
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- resourcequotas/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- secrets
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- serviceaccounts
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- services/proxy
verbs:
- create
- delete
- get
- patch
- update
- apiGroups:
- ""
resources:
- services/status
verbs:
- get
- patch
- update
- apiGroups:
- apps
resources:
- controllerrevisions
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- daemonsets
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- daemonsets/status
verbs:
- get
- patch
- update
- apiGroups:
- apps
resources:
- deployments
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- deployments/scale
verbs:
- get
- patch
- update
- apiGroups:
- apps
resources:
- deployments/status
verbs:
- get
- patch
- update
- apiGroups:
- apps
resources:
- replicasets
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- replicasets/scale
verbs:
- get
- patch
- update
- apiGroups:
- apps
resources:
- replicasets/status
verbs:
- get
- patch
- update
- apiGroups:
- apps
resources:
- statefulsets
verbs:
- create
- delete
- deletecollection
- get
- list
- patch
- update
- watch
- apiGroups:
- apps
resources:
- statefulsets/scale
verbs:
- get
- patch
- update
- apiGroups:
- apps
resources:
- statefulsets/status
verbs:
- get
- patch
- update
```
Upvotes: 2 <issue_comment>username_12: Run `kubectl proxy`; the server will start running on <http://127.0.0.1:8001/>. Just open that in a browser and you will see all api-resources.
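For example, a quick sketch of inspecting the group list through that proxy:
```
curl http://127.0.0.1:8001/apis
```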
Upvotes: 1 |
2018/03/21 | 1,350 | 4,681 | <issue_start>username_0: I have the following code snippet where `Func0` and `Action1` are used.
```
Observable.defer(new Func0<Observable<String>>() {
    @Override
    public Observable<String> call() {
        try {
            return Observable.just(Database.readValue());
        }
        catch (IOException e) {
            return Observable.error(e);
        }
    }
})
.subscribe(new Action1<String>() {
    @Override
    public void call(String result) {
        resultTextView.setText(result);
    }
});
```
But I am just wondering what the difference between them is. I understand that the number indicates the number of parameters, i.e. `Func0` has no parameters and `Action1` has 1 parameter.
However, how would you know which one to use? Should I use `Action` or `Func`?
What is the purpose of the `call` method?
Many thanks for any suggestions,<issue_comment>username_1: Look at their definitions:
```java
interface Func0<R> {
    R call();
}

interface Action1<T> {
    void call(T t);
}
```
The `Func0` provides data whereas `Action1` consumes data. These are dual functionalities and you can't mistake the two.
Upvotes: 3 <issue_comment>username_2: **The short answer: you'll know based on which method you're calling.**
First let's take a look at the two methods you're trying to use:
>
> [Observable.defer](http://grepcode.com/file/repo1.maven.org/maven2/io.reactivex/rxjava/1.0.0-rc.1/rx/Observable.java#Observable.defer%28rx.functions.Func0%29)
> Returns an Observable that calls an Observable factory to create an Observable for each new Observer that subscribes. That is, for each subscriber, the actual Observable that subscriber observes is determined by the factory function.
>
>
> **Parameters**:
> observableFactory the Observable factory function to invoke for each Observer that subscribes to the resulting Observable
>
>
> **Returns**:
> an Observable whose Observers' subscriptions trigger an invocation of the given Observable factory function
>
>
>
> ```
> public final static <T> Observable<T> defer(Func0<Observable<T>> observableFactory)...
>
> ```
>
>
---
>
> [Observable.subscribe](http://grepcode.com/file/repo1.maven.org/maven2/io.reactivex/rxjava/1.0.0-rc.1/rx/Observable.java#Observable.subscribe%28rx.functions.Action1%29)
> Subscribes to an Observable and provides a callback to handle the items it emits.
>
>
> **Parameters**:
> onNext the Action1 you have designed to accept emissions from the Observable
>
>
> **Returns**:
> a Subscription reference with which the Observer can stop receiving items before the Observable has finished sending them
>
>
>
> ```
> public final Subscription subscribe(final Action1<? super T> onNext)...
>
> ```
>
>
---
What you see above are two examples of [Higher-order functions](https://en.wikipedia.org/wiki/Higher-order_function) or implementations of the [Strategy Pattern](https://en.wikipedia.org/wiki/Strategy_pattern) which each accept a different strategy format.
In the case of `defer` you are providing a way to create a new `Observable` with no initial input provided. A [Func0](http://grepcode.com/file/repo1.maven.org/maven2/io.reactivex/rxjava/1.0.0-rc.1/rx/functions/Func0.java#Func0) is requested because it has that format (where `R` is an `Observable`):
>
>
> ```
> public interface Func0<R> extends Function, Callable<R> {
> @Override
> public R call();
> }
>
> ```
>
>
In the case of `subscribe` you are providing a way to accept a value from an observable. The best interface to represent this would be an [Action1](http://grepcode.com/file/repo1.maven.org/maven2/io.reactivex/rxjava/1.0.0-rc.1/rx/functions/Action1.java#Action1) (where `T1` is a `String`):
>
>
> ```
> public interface Action1<T1> extends Action {
> public void call(T1 t1);
> }
>
> ```
>
>
When you write `new Action1<>() {...}` or `new Func0<>() {...}` you are creating what is known as an [Anonymous Class](https://docs.oracle.com/javase/tutorial/java/javaOO/anonymousclasses.html). You are defining in place what happens when the method `Action1.call` or `Func0.call` is invoked.
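Since each of these interfaces declares exactly one method, Java 8 lambdas can replace the anonymous classes. A sketch of the question's pipeline (assuming Java 8 and RxJava 1.x, with the same `Database` and `resultTextView` from the question):
```java
Observable.defer(() -> {
    try {
        return Observable.just(Database.readValue());
    } catch (IOException e) {
        return Observable.error(e);
    }
})
.subscribe(result -> resultTextView.setText(result));
```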
---
Your questions:
>
> how would you know which one to use? Should I use Action or Func.
>
>
>
It depends on the needs of your application. Read through the docs and see which method best suits your needs. Depending on the method you choose you will have to implement the interface it specifies in the method signature.
>
> What is the purpose of the call method?
>
>
>
This is the name of the method in the strategy/interface required by the higher-order function you are using. You will know the name by looking at the interface definition. It is only by chance that each interface declares a method named `call`. One could have easily been titled `foo` and the other `bar`.
Upvotes: 3 [selected_answer] |