<issue_start>username_0: Can someone explain this behavior to me?
```
const object1 = {
a: {
d : 1,
e : 4
},
b: 2,
c: 3
};
const object2 = Object.assign({}, object1);
console.log('object1 :', object1);
console.log('object2 :', object2);
delete object2.a;
console.log('object1 :', object1);
console.log('object2 :', object2);
```
If I delete `a`, only `object2` is affected.
```
> "object1 :" Object { a: Object { d: 1, e: 4 }, b: 2, c: 3 }
> "object2 :" Object { a: Object { d: 1, e: 4 }, b: 2, c: 3 }
> "object1 :" Object { a: Object { d: 1, e: 4 }, b: 2, c: 3 }
> "object2 :" Object { b: 2, c: 3 }
```
And if I delete a nested property...
```
const object2 = Object.assign({}, object1);
console.log('object1 :', object1);
console.log('object2 :', object2);
delete object2.a.d;
console.log('object1 :', object1);
console.log('object2 :', object2);
```
Both `object1` and `object2` lose the property `d`.
```
> "object1 :" Object { a: Object { d: 1, e: 4 }, b: 2, c: 3 }
> "object2 :" Object { a: Object { d: 1, e: 4 }, b: 2, c: 3 }
> "object1 :" Object { a: Object { e: 4 }, b: 2, c: 3 }
> "object2 :" Object { a: Object { e: 4 }, b: 2, c: 3 }
```
Why?<issue_comment>username_1: When you are doing:
```
const object2 = Object.assign({}, object1);
```
You are assigning the properties of `object1` to a new object, which is stored in `object2`. In your case only one property of `object1` holds an object, and that is `a`. **But the `a` you put into `object2` is the very same object as the `a` in `object1`.**
Then, this line :
```
delete object2.a;
```
just deletes the property `a` (the reference) from `object2`. The referenced object itself is not changed and will still exist as long as other references to it remain.
In your second case, when you do :
```
delete object2.a.d;
```
you actually delete the property `d` from the object referenced by `object2.a`, which, remember, is the same object as the one referenced by `object1.a`, so the change indeed shows up in both `object1` and `object2`.
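A minimal sketch (added for illustration, not part of the original answer) that makes the shared reference visible:
```
const inner = { d: 1, e: 4 };
const object1 = { a: inner, b: 2, c: 3 };
const object2 = Object.assign({}, object1); // shallow copy

console.log(object1.a === object2.a); // true: both point at `inner`
delete object2.a;                     // removes only object2's reference
console.log(object1.a.d);             // 1: the inner object is untouched
```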
Upvotes: 3 [selected_answer]<issue_comment>username_2: From the examples section in the documentation of `Object.assign()` (<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign>):
>
> Warning for Deep Clone
>
>
> For deep cloning, we need to use other alternatives because
> Object.assign() copies property values. If the source value is a
> reference to an object, it only copies that reference value.
>
>
>
There is a specific example of how to deep clone an object:
```
JSON.parse(JSON.stringify(obj1))
```
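One caveat worth knowing about this approach (my addition, not in the original answer): the JSON round trip only preserves JSON-serialisable data, so functions and `undefined` values are dropped and `Date` objects become strings:
```
const src = { when: new Date(), greet: function () {}, missing: undefined };
const copy = JSON.parse(JSON.stringify(src));
console.log(copy); // { when: "2018-03-19T..." }; the function and the undefined property are gone
```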
And if you want more info, there's an old SO thread on this issue:
[What is the most efficient way to deep clone an object in JavaScript?](https://stackoverflow.com/questions/122102/what-is-the-most-efficient-way-to-deep-clone-an-object-in-javascript/5344074#5344074)
Upvotes: 2 <issue_comment>username_3: Hi, [referred from](https://stackoverflow.com/questions/728360/how-do-i-correctly-clone-a-javascript-object).
You can copy the object as follows:
```
const object1 = {
a: {
d : 1,
e : 4
},
b: 2,
c: 3
};
const object2 = JSON.parse(JSON.stringify(object1));
```
Upvotes: 0
<issue_start>username_0: I'm trying to extract the degree rate from the CSS transform property,
```
transform = "rotate(33.8753deg) translateZ(0px)"
```
with a regular expression. So far I've succeeded in getting almost the exact number:
```
const re = new RegExp('.*rotate( *(.*?) *deg).*', 'm');
let degRate = transform.match(re);
```
Output: an array whose third element is:
```
"(33.8753"
```
1. How can I get only the number without the parenthesis?
2. How can I get only the number? (not in an array)
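One way to do this (a sketch of mine, not from the original thread): escape the literal parentheses so they are not treated as a capture group, and capture only the digits:
```
const transform = "rotate(33.8753deg) translateZ(0px)";
// \( and \) match literal parentheses; ([\d.]+) captures just the number.
const match = transform.match(/rotate\(\s*([\d.]+)\s*deg\)/);
const degRate = match ? parseFloat(match[1]) : null;
console.log(degRate); // 33.8753, a plain number rather than an array
```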
<issue_start>username_0: I'm using the SoundPlayer object to play audio in my C# web application. But when I tried to play an .mp3 file, it wasn't successful. How can I play an mp3 file from a URL in a C# web application? I have the URL of the audio to be played.
This is the code I'm using to play .wav audio file, but it fails for .mp3 files.
```
SoundPlayer player = new SoundPlayer();
string soundurl = "http://audio.oxforddictionaries.com/en/mp3/ranker_gb_1_8.mp3";
player.SoundLocation = soundurl;
player.Play();
System.Threading.Thread.Sleep(2000);
player.Stop();
```
<issue_comment>username_1: If you open the documentation page for SoundPlayer (<https://msdn.microsoft.com/en-us/library/system.media.soundplayer(v=vs.110).aspx>) you'll read that this class "**Controls playback of a sound from a .wav file.**"
This means, that if you want to play .mp3 you need something different, something capable of doing the job. There are a lot of options, to name a few: <https://github.com/filoe/cscore>, <https://www.ambiera.com/irrklang/>, <https://github.com/naudio/NAudio> and many more others.
Another thing is that you are working with ASP.Net. This means that what you probably want is to play music on the client's machine, not on your server like you do now. If this is the case, then it is a completely different story: no .Net sound library will help you, and you need to learn the difference between client-side and server-side execution first, because until you do you won't get anywhere.
Upvotes: 2 [selected_answer]<issue_comment>username_2: It looks like SoundPlayer is designed to play wav files only from this [documentation](https://learn.microsoft.com/en-us/dotnet/framework/winforms/controls/soundplayer-class-overview).
You can follow the steps provided in the following [forum](https://forums.asp.net/t/1770305.aspx?Play%20MP3%20file%20in%20asp%20net%20C%20) thread for playing mp3 files.
Or you could also try [HTML5 Audio](https://www.w3schools.com/html/html5_audio.asp) tag to embed mp3 in your application.
Upvotes: 0 <issue_comment>username_3: One important thing you should know: the [`System.Media.SoundPlayer`](http://msdn.microsoft.com/en-us/library/system.media.soundplayer.aspx) class will play the sound on the **server side** instead of the client side. This is indistinguishable if you're running the project locally, since your machine plays it, but you should be able to recognize something going wrong when trying to access the page from another client.
As far as I know, you need to use an `audio`, `embed`, or `object` HTML tag to play an MP3 from a URL in the client browser, such as in the example below:
```
<audio controls src="http://audio.oxforddictionaries.com/en/mp3/ranker_gb_1_8.mp3">
  Your browser does not support the audio element.
</audio>
```
Similar issue:
[How do I play a sound in an asp.net web page?](https://stackoverflow.com/questions/12329191/how-do-i-play-a-sound-in-an-asp-net-web-page)
Upvotes: 0 <issue_comment>username_4: Create an empty HTML page that plays the music with loop and auto-start enabled, put that file on any server, then load it in your project with a WebBrowser control.
Upvotes: 0
<issue_start>username_0: I'm developing a simple search box in my application, and I want to highlight multiple words in one sentence.
I use [SpannableString](https://developer.android.com/reference/android/text/SpannableString.html) to add multiple span in one sentence.
Here is the function I wrote:
```
private CharSequence highlightText(String text, String query) {
    if (query != null && !query.isEmpty()) {
        Spannable spannable = new SpannableString(text);
        ForegroundColorSpan highlightSpan = new ForegroundColorSpan(Color.BLUE);
        String[] queryParts = query.split(" ");
        for (String queryPart : queryParts) {
            int startPos = text.toLowerCase(Locale.US).indexOf(queryPart.toLowerCase(Locale.US));
            int endPos = startPos + queryPart.length();
            if (startPos != -1) {
                Log.d(TAG, "find: '" + queryPart + "' in '" + text + "' (" + startPos + ")");
                spannable.setSpan(highlightSpan, startPos, endPos, Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
            }
        }
        return spannable;
    } else {
        return text;
    }
}
```
When I call this function with
```
TextView spannableTest = findViewById(R.id.spannable_test);
spannableTest.setText(highlightText(
"Lorem ipsum dolor sit amet, consectetur adipisicing elit.",
"ipsum consect"));
```
I got this logs
```
D/SPAN: find: 'ipsum' in 'Lorem ipsum dolor sit amet, consectetur adipisicing elit.' (6)
D/SPAN: find: 'consect' in 'Lorem ipsum dolor sit amet, consectetur adipisicing elit.' (28)
```
But on the result screen only the last occurrence is really highlighted:
[](https://i.stack.imgur.com/DW6BL.png)<issue_comment>username_1: Add this file to your project:**RichTextView.java**
```
package com.outpace.expert.utility;
import android.graphics.Typeface;
import android.text.Spannable;
import android.text.SpannableString;
import android.text.style.ClickableSpan;
import android.text.style.ForegroundColorSpan;
import android.text.style.RelativeSizeSpan;
import android.text.style.StrikethroughSpan;
import android.text.style.StyleSpan;
import android.text.style.URLSpan;
import android.view.View;
/**
* Created by username_1 on 19-11-16
*/
//.setText(text, TextView.BufferType.SPANNABLE); to textView if not work
public class RichTextView extends SpannableString {
private String syntax;
public RichTextView(String syntax) {
super(syntax);
this.syntax = syntax;
}
public RichTextView setTextColor(String word, int color) {
setSpan(new ForegroundColorSpan(color), syntax.indexOf(word), syntax.indexOf(word) + word.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
return this;
}
public RichTextView setSize(String word, float howMuch) {
setSpan(new RelativeSizeSpan(howMuch), syntax.indexOf(word), syntax.indexOf(word) + word.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
return this;
}
public RichTextView setStrikeOut(String word) {
setSpan(new StrikethroughSpan(), syntax.indexOf(word), syntax.indexOf(word) + word.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
return this;
}
public RichTextView setUrl(String word, String redirectUrl) {
setSpan(new URLSpan(redirectUrl), syntax.indexOf(word), syntax.indexOf(word) + word.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
return this;
}
public RichTextView setBold(String word) {
StyleSpan boldSpan = new StyleSpan(Typeface.BOLD);
setSpan(boldSpan, syntax.indexOf(word), syntax.indexOf(word) + word.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
return this;
}
//setMovementMethod(LinkMovementMethod.getInstance()); after or before call
public RichTextView setClickable(String word, final setOnLinkClickListener listener) {
ClickableSpan clickableSpan = new ClickableSpan() {
@Override
public void onClick(View view) {
if (listener != null) {
listener.onLinkClicked();
}
}
};
setSpan(clickableSpan, syntax.indexOf(word), syntax.indexOf(word) + word.length(), Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
return this;
}
public interface setOnLinkClickListener {
void onLinkClicked();
}
}
```
Here is how to use:
```
String data = model.getMessage().replace("{from_name}", model.getFromUser()); // String to display
tvNotificationText.setText(new RichTextView(data)
.setBold(model.getFromUser()) // bold
.setTextColor(model.getFromUser(), ContextCompat.getColor(mContext, R.color.textPrimary))); // set text color for specific string
```
There are other spanning options available please explore and get the best fit solution!
Happy Coding !!
Upvotes: 1 <issue_comment>username_2: @pskink pointed out that I should move `ForegroundColorSpan highlightSpan = new ForegroundColorSpan(Color.BLUE);` into the `for` loop, so that every match gets its own span object; applying the same span instance again only moves it to the new range.
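A minimal sketch of that fix (my reconstruction, not the poster's exact code):
```
for (String queryPart : queryParts) {
    int startPos = text.toLowerCase(Locale.US).indexOf(queryPart.toLowerCase(Locale.US));
    if (startPos != -1) {
        int endPos = startPos + queryPart.length();
        // A fresh span object per match: re-applying one instance only moves it.
        spannable.setSpan(new ForegroundColorSpan(Color.BLUE),
                startPos, endPos, Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
    }
}
```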
Upvotes: 4 [selected_answer]
<issue_start>username_0: Suppose I have a vector as follows
```
std::vector<int> v = {3, 9, 7, 7, 2};
```
I would like to sort this vector so that it ends up as 77932: first the duplicated elements (the 7s), then the remaining elements sorted from highest to lowest.
If I have a vector as follows
```
std::vector<int> v = {3, 7, 7, 7, 2};
```
Here, it would lead to 77732.
Same for
```
std::vector<int> v = {7, 9, 2, 7, 9};
```
it should lead to 99772, because the 9s are higher than 7s.
One last example
```
std::vector<int> v = {7, 9, 7, 7, 9};
```
it should lead to 77799, because there are more 7s than 9s.
What could be the fastest algorithm to implement this?<issue_comment>username_1: You will need an auxiliary frequency-count structure; then you can just define a comparator lambda and use whatever sort you like. `std::sort` is a sensible default:
```
std::unordered_map<int, int> frequency;
std::for_each(v.begin(), v.end(),
              [&](int i) { ++frequency[i]; });
std::sort(v.begin(), v.end(),
          [&](int lhs, int rhs)
          {
              // descending by (frequency, value), as the question asks
              return std::tie(frequency[lhs], lhs) > std::tie(frequency[rhs], rhs);
          });
```
Upvotes: 1 <issue_comment>username_2: Use `std::multiset` to do counting for you. Then sort using a simple custom comparer with tie breaking logic implemented with `std::tie`:
```
std::vector<int> data = {7, 9, 2, 7, 9};
std::multiset<int> count(data.begin(), data.end());
std::sort(
    data.begin(),
    data.end(),
    [&](int a, int b) {
        int ca = count.count(a);
        int cb = count.count(b);
        return std::tie(ca, a) > std::tie(cb, b);
    }
);
std::copy(data.begin(), data.end(), std::ostream_iterator<int>(std::cout, " "));
```
[Demo 1](https://ideone.com/HYdBZj)
**Edit:** the `count(n)` function of `std::multiset` is linear in the number of duplicates, which may degrade the performance of your sorting algorithm. You can address this by using `std::unordered_map` in its place:
```
std::vector<int> data = {7, 9, 2, 7, 9};
std::unordered_map<int, int> count;
for (auto v : data)
    count[v]++;
std::sort(
    data.begin(),
    data.end(),
    [&](int a, int b) {
        return std::tie(count[a], a) > std::tie(count[b], b);
    }
);
std::copy(data.begin(), data.end(), std::ostream_iterator<int>(std::cout, " "));
```
[Demo 2.](https://ideone.com/EUYh7M)
Upvotes: 4 [selected_answer]<issue_comment>username_3: I wouldn't be satisfied if a candidate proposed an auxiliary map for this task - clearly a sort does most of the work, and the auxiliary structure should be a vector (or, after I've actually tried to implement it, 2 vectors):
```
void custom_sort(std::vector<int>& v)
{
    if (v.size() < 2)
        return;
    std::sort(v.begin(), v.end(), std::greater<int>());
    std::vector<int> dupl;
    std::vector<int> singl;
    int d;
    bool dv = false;
    for (size_t i = 1; i < v.size(); ++i)
    {
        if (!dv)
        {
            if (v[i - 1] == v[i])
            {
                d = v[i];
                dv = true;
                dupl.push_back(d);
            }
            else
            {
                singl.push_back(v[i - 1]);
            }
        }
        else
        {
            dupl.push_back(d);
            if (v[i] != d)
                dv = false;
        }
    }
    if (!dv)
        singl.push_back(v.back());
    else
        dupl.push_back(d);
    auto mid = std::copy(dupl.begin(), dupl.end(), v.begin());
    std::copy(singl.begin(), singl.end(), mid);
}
```
But yes, the branching is tricky - if you want to use it for more than an interview, please test it... :-)
Upvotes: 0 <issue_comment>username_4: **EDIT** this answers an early version of the question.
If the elements are small integers, i.e. have limited range, we can extend the [counting sort](https://en.wikipedia.org/wiki/Counting_sort) algorithm (since the keys here are the elements, we don't need to establish the starting position separately).
```
void custom_sort(std::vector<int>& v, const int N)
// assume that all elements are in [0,N[ and N elements fit into cache
{
    std::vector<int> count(N);
    for (auto x : v)
        count.at(x)++;  // replace by count[x]++ if you're sure that 0 <= x < N
    int i = 0;
    // first pass: insert multiple elements
    for (auto n = N - 1; n >= 0; --n)
        if (count[n] > 1)
            for (auto k = 0; k != count[n]; ++k)
                v[i++] = n;
    // second pass: insert single elements
    for (auto n = N - 1; n >= 0; --n)
        if (count[n] == 1)
            v[i++] = n;
}
```
Upvotes: 0 <issue_comment>username_5: There is O(N Log(N)) algorithm with extra O(N) memory.
```
#include <cstdio>
#include <vector>
#include <algorithm>
#include <utility>

int main(){
    typedef std::pair<int, int> pii;
    typedef std::vector<int> vi;
    typedef std::vector<pii> vii;
    vi v = {7, 9, 7, 7, 9};
    // O(N log N)
    std::sort(v.begin(), v.end());
    vii vc;
    vc.reserve(v.size());
    // O(N): build a vector of (cnt, value) pairs
    for (size_t i = 0; i != v.size(); ++i)
    {
        if (vc.empty() || v[i] != vc.back().second) {
            vc.push_back(pii(0, v[i]));
        }
        vc.back().first++;
    }
    // O(N log N): sort by (cnt, value)
    std::sort(vc.begin(), vc.end());
    // O(N): write them back, in reverse order
    v.clear();
    for (int i = 0; i < (int)vc.size(); ++i) {
        int rev_i = vc.size() - i - 1;
        int cnt = vc[rev_i].first;
        for (int k = 0; k < cnt; ++k)
            v.push_back(vc[rev_i].second);
    }
    /////////////////////////
    for (size_t i = 0; i != v.size(); ++i) {
        printf("%4d, ", v[i]);
    }
    printf("\n");
}
```
Upvotes: 0
<issue_start>username_0: I have written a script to convert a text file into a dictionary.
script.py
```
l = []
d = {}
count = 0
f = open('/home/asha/Desktop/test.txt', 'r')
for row in f:
    rowcount += 1
    if row[0] == ' ' in row:
        l.append(row)
    else:
        if count == 0:
            temp = row
            count += 1
        else:
            d[temp] = l
            l = []
            count = 0
print d
```
textfile.txt
```
Time
NtGetTickCount
NtQueryPerformanceCounter
NtQuerySystemTime
NtQueryTimerResolution
NtSetSystemTime
NtSetTimerResolution
RtlTimeFieldsToTime
RtlTimeToTime
System informations
NtQuerySystemInformation
NtSetSystemInformation
Enumerations
Structures
```
The output I have got is
```
{'Time\n': [' NtGetTickCount\n', ' NtQueryPerformanceCounter\n', ' NtQuerySystemTime\n', ' NtQueryTimerResolution\n', ' NtSetSystemTime\n', ' NtSetTimerResolution\n', ' RtlTimeFieldsToTime\n', ' RtlTimeToTime\n']}
```
It is only able to convert up to the 9th line of the text file. Suggest where I am going wrong.<issue_comment>username_1: So you need to know two things at any given time while looping over the file:
1) Are we on a title level or content level (by indentation) and
2) What is the current title
In the following code, we first check whether the current line is a title (i.e. it does not start with a space); if so, we set `currentTitle` to it and insert it into our dictionary as a key with an empty list as the value.
If it is not a title, we just append the line to the corresponding title's list.
```
with open('49359186.txt', 'r') as input:
    topics = {}
    currentTitle = ''
    for line in input:
        line = line.rstrip()
        if line[0] != ' ':
            currentTitle = line
            topics[currentTitle] = []
        else:
            topics[currentTitle].append(line)
    print topics
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: You never commit (i.e. run `d[row] = []`) the final list to the dictionary.
You can simply commit when you create the row:
```
d = {}
cur = []
for row in f:
    if row[0] == ' ':  # line in section
        cur.append(row)
    else:  # new row
        d[row] = cur = []
print (d)
```
Upvotes: 1 <issue_comment>username_3: Try this:
```
d = {}
key = None
with open('/home/asha/Desktop/test.txt', 'r') as file:
    for line in file:
        if line.startswith(' '):
            d[key].append(line.strip())
        else:
            key = line.strip(); d[key] = []
print(d)
```
Upvotes: 0 <issue_comment>username_4: Using [`dict.setdefault`](https://docs.python.org/3/library/stdtypes.html#dict.setdefault) to create a dictionary with lists as values will make your job easier.
```
d = {}
with open('input.txt') as f:
    key = ''
    for row in f:
        if row.startswith(' '):
            d.setdefault(key, []).append(row.strip())
        else:
            key = row
print(d)
```
Output:
```
{'Time\n': ['NtGetTickCount', 'NtQueryPerformanceCounter', 'NtQuerySystemTime', 'NtQueryTimerResolution', 'NtSetSystemTime', 'NtSetTimerResolution', 'RtlTimeFieldsToTime', 'RtlTimeToTime'], 'System informations\n': ['NtQuerySystemInformation', 'NtSetSystemInformation', 'Enumerations', 'Structures']}
```
A few things to note here:
1. Always use `with open(...)` for file operations.
2. If you want to check the first index, or the first few indices, use `str.startswith()`
The same can be done using [`collections.defaultdict`](https://docs.python.org/3/library/collections.html#collections.defaultdict):
```
from collections import defaultdict

d = defaultdict(list)
with open('input.txt') as f:
    key = ''
    for row in f:
        if row.startswith(' '):
            d[key].append(row)
        else:
            key = row
```
Upvotes: 1 <issue_comment>username_5: Just for the sake of adding in my 2 cents.
This problem is easier to tackle backwards. Consider iterating through your file backwards and then storing the values into a dictionary whenever a header is reached.
```
f = open('test.txt', 'r')
d = {}
l = []
for row in reversed(f.read().split('\n')):
    if row[0] == ' ':
        l.append(row)
    else:
        d.update({row: l})
        l = []
```
Upvotes: 0 <issue_comment>username_6: Just keep track of the lines that start with ' ' and you are done in a single loop:
```
final = []
keys = []
flag = True
with open('new_text.txt', 'r') as f:
    data = []
    for line in f:
        if not line.startswith(' '):
            if line.strip():
                keys.append(line.strip())
                flag = False
                if data:
                    final.append(data)
                    data = []
                flag = True
        else:
            if flag == True:
                data.append(line.strip())
    final.append(data)
print(dict(zip(keys, final)))
```
output:
```
{'Example': ['data1', 'data2'], 'Time': ['NtGetTickCount', 'NtQueryPerformanceCounter', 'NtQuerySystemTime', 'NtQueryTimerResolution', 'NtSetSystemTime', 'NtSetTimerResolution', 'RtlTimeFieldsToTime', 'RtlTimeToTime'], 'System informations': ['NtQuerySystemInformation', 'NtSetSystemInformation', 'Enumerations', 'Structures']}
```
Upvotes: 0
<issue_start>username_0: Can someone help with the error below?
```
npm install nightwatch --save
ERR! code EPROTO
npm ERR! errno EPROTO
npm ERR! request to https://registry.npmjs.org/nightwatch failed, reason: write
EPROTO 101057795:error:14077419:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert access denied:openssl\ssl\s23_clnt.c:802:
npm ERR!
```
<issue_start>username_0: I came across this bug in some code running on a Blackfin 533 processor.
The first time `Func()` runs, `fooStruct` will contain garbage, but in the next iteration, the old value that was returned by `getFoo()` will by chance still be in `fooStruct.foo`.
```
struct FooStruct
{
    double foo;
    double bar;
};

void Func()
{
    FooStruct fooStruct;
    double bar = 123.4 / fooStruct.foo;
    fooStruct.foo = getFoo();
    fooStruct.bar = bar;
}
```
That means that the first time this runs, we are reading from an uninitialized variable, which is undefined behavior. What about the following iterations? Is that still undefined behavior? What sort of behavior can we expect to see when reading uninitialized variables on embedded processors?<issue_comment>username_1: Once undefined behaviour has been encountered, the behaviour of that *and all subsequent statements* is undefined too.
Paradoxically, the behaviour of any statements *prior to the undefined one* is undefined too.
As for the *sort* of behaviour, asking to categorise undefined behaviour is not logical.
Upvotes: 3 <issue_comment>username_2: Yes it is undefined, but the behaviour you observe is not necessarily surprising; it is just that the stack is reused and the reused space is not initialised, and you happened to have reused exactly the same stack location as the previous call. All memory has to contain something and if you call this function and it happens to re-use the same stack frame as a previous call, it will contain whatever was last left there.
For example if you call:
```
Func() ;
Func() :
```
It is not defined, but not unreasonable for the second call `fooStruct.foo` to contain the value left by the first call, because that is what would happen when the compiler takes *no action* to initialise the variable.
However if instead you had:
```
void Func2()
{
int x = 0 ;
int y = 0 ;
Func() ;
}
```
Then called:
```
Func() ;
Func2() ;
```
The second call to `Func()` via `Func2()` would *almost* certainly place the local `fooStruct.foo` at a different address within the stack because of the stack frame for `Func2`, so would not then have the same value other then by coincidence. Moreover if the sequence were:
```
Func() ;
Func2() ;
Func() ;
```
The third call to `Func()` *might* use the same stack location as the first, but that space will *probably* have been modified by `Func2()` (because of the initialised variables), so likely you will no longer observe the same value in `fooStruct.foo`.
That is what *uninitialised* means; you get whatever happens to be there. And because when a variable goes out of scope, it is not generally modified, such values can "reappear" (and not necessarily in the same variable) - just because that is the simplest and most efficient implementation (i.e. to do nothing).
Upvotes: 2
<issue_start>username_0: I'm working on a project which is an online shop.
I want to show the most sold items on a page,
so my SQL is:
```
Select (*), Count(Product_ID) as n from Order_Details order by n desc.
```
But it doesn't work. Can someone help?<issue_comment>username_1: You need to aggregate the data first; this can be done using the `GROUP BY` clause:
```sql
SELECT Product_ID, COUNT(*) AS n
FROM Order_Details
GROUP BY Product_ID
ORDER BY n DESC
```
The `DESC` keyword allows you to show the highest count first, `ORDER BY` by default orders in ascending order which would show the lowest count first.
Upvotes: 2 <issue_comment>username_2: ```
SELECT Product_ID, COUNT(*) AS n
FROM Order_Details
GROUP BY Product_ID
ORDER BY n DESC
LIMIT 1
```
For the single most sold item, take just the top row of the aggregated counts (here via `LIMIT 1`).
Upvotes: 0
<issue_start>username_0: I already asked a question about this 2 days ago; here is the link:
[Got "password authentication failed for user" but in pgAdmin 3 its working](https://stackoverflow.com/questions/49332711/got-password-authentication-failed-for-user-but-in-pgadmin-3-its-working)
But I still didn't get an answer to solve the problem.
So I tried to create a new laravel project, then edit the `.env` file, check if `php artisan migrate` can run.
After I run `php artisan migrate` it runs, so that means my credentials for the PostgreSQL database are correct, right? If they were not, it would tell you **password authentication failed for user "postgres"**, but I don't get any error at all, so I go to the next step. After making sure everything is OK, I run `php artisan make:auth`; it succeeds without any error. So I open the site in the browser, click the `register` / `login` button, fill in the fields, and submit; then the nightmare happens again.
I got this message from the website
>
> SQLSTATE[08006] [7] FATAL: password authentication failed for user "postgres" FATAL: password authentication failed for user "postgres" (SQL: select count(\*) as aggregate from "users" where "email" = <EMAIL>)
>
>
>
Even though `php artisan migrate` ran really well, I have no idea why this is happening. Has anybody run into this problem before, or does anyone know why it happens?
I have already searched all the keywords that could possibly fix this problem, but I can't find the answer; it's really stressing me out.
For the record, I'm using:
>
> PostgreSQL 9.6.8
>
>
> Laravel 5.6
>
>
> Ubuntu 17.10
>
>
>
Edited: Here is my `pg_hba.conf`
[](https://i.stack.imgur.com/S3d9j.png)<issue_comment>username_1: Check your `database.php` file in the `config` directory, specifically the `pgsql` array. If accessing the DB credential values from `.env` doesn't work there, test it by putting the credentials there directly. Hope it helps. Artisan commands work by reading the credentials from the `.env` file directly, not from `database.php`; that's why the migration worked.
Upvotes: 2 <issue_comment>username_2: I had a similar problem: my generated password contained the character '#', which turned the rest of the line into a comment, so part of the password was ignored. I just changed my password.
Upvotes: 0
<issue_start>username_0: I want to do something like this:
**HTML code:**
```
<a href="#" id="myAnchor" onclick="myfun(event)">Click Me</a>
```
**JavaScript code:**
```
function myfun(event) {
event.preventDefault();
alert("OK");
}
```
**I know we can code it like below:**
```
document.getElementById("myAnchor").addEventListener("click",
function(event){
event.preventDefault();
alert("OK");
});
```
**But I want something like the inline version mentioned above.**
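A minimal sketch of the inline-handler style being asked about (a sketch, not from the original thread): inside an inline `onclick` attribute, the name `event` refers to the current DOM event, so the handler receives it just like the `addEventListener` version:
```
<a href="#" id="myAnchor" onclick="myfun(event)">Click Me</a>
<script>
function myfun(event) {
  event.preventDefault();
  alert("OK");
}
</script>
```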
<issue_start>username_0: I've just reinstalled VS 2017 15.6.2 using the web installer and tried to create a new
empty Asp.net core 2.0 project.
But when I open it I get this error:
>
> Project file is incomplete. Expected imports are missing.
>
>
>
My version of SDK is 2.1.101, which was installed with visual studio.
I selected the web and .NET Core development features during install. What am I missing?
I tried to create an empty .NET Core console app and everything works fine.
**UPDATE**
This error is related to the folder structure:
```
C:\Novatek.Monitoring.WebUI // no errors
C:\Svn\Monitoring\trunk\Novatek.Monitoring.WebUI // Project file is incomplete. Expected imports are missing.
```
<issue_start>username_0: I have some crystal reports on a website. They are working fine on my local machine. I'm using `Crystal Reports version 13.0.3500.0`
The test environment is on another server: `Windows 2008 R2`, 64-bit operating system.
The problem is that when I want to use a crystal report on the test environment, I get the following error: `The document has not been opened`
[](https://i.stack.imgur.com/tGGac.png)
I have already spent some time trying to figure out what the problem could be, but without any success. I'm using the same `DLL`s as on my local machine.
I checked and the folder where I have the report has all the rights.
Can you please advise?<issue_comment>username_1: I was getting this error before I installed Crystal Reports Runtime (21) on my server.
[Here](https://www.tektutorialshub.com/how-to-download-and-install-crystal-report-runtime/) is an article where you can find download links etc.
Upvotes: 0 <issue_comment>username_2: If you are using IIS, go to Application Pools, select your web site's pool, click Advanced Settings on the right side, and set 'Enable 32-Bit Applications' to true (the 32-bit Crystal Reports runtime engine for .NET has to be installed).
Upvotes: 1
<issue_start>username_0: I have a DAL project with this ApplicationContext.cs
```
[DbConfigurationType(typeof(MyConfiguration))]
public class ApplicationDbContext : IdentityDbContext
{
public ApplicationDbContext() : base("DefaultConnection", throwIfV1Schema: false) { }
...
}
```
but when I instantiate it in another project
```
using (var databaseContext = new ApplicationDbContext())
{ }
```
I get this error:
>
> The default DbConfiguration instance was used by the Entity Framework
> before the 'MyConfiguration' type was discovered. An instance of
> 'MyConfiguration' must be set at application start before using any
> Entity Framework features or must be registered in the application's
> config file. See <http://go.microsoft.com/fwlink/?LinkId=260883> for
> more information.
>
>
>
I tried to solve it by updating the web.config file, inserting `codeConfigurationType` in the `entityFramework` node, but I get the same error.
Can anyone help me?
UPDATE
------
My custom class is this:
```
public class MyConfiguration : DbConfiguration
{
public MyConfiguration()
{
SetExecutionStrategy("System.Data.SqlClient", () => new System.Data.Entity.SqlServer.SqlAzureExecutionStrategy(2, TimeSpan.FromSeconds(30)));
}
}
```
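A sketch of the configuration-file route (not from the thread; `MyDal` is a placeholder for the DAL assembly and namespace): EF6 expects the assembly-qualified type name in the `codeConfigurationType` attribute of the `entityFramework` element:
```
<entityFramework codeConfigurationType="MyDal.MyConfiguration, MyDal">
  <!-- existing provider configuration stays here -->
</entityFramework>
```
EF discovers a `DbConfiguration` automatically only when it lives in the same assembly as the `DbContext`; with the configuration in a separate DAL project, the config-file attribute above is the documented way to register it.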
<issue_start>username_0: [](https://i.stack.imgur.com/VMu1X.png) I am using a bitmap, but after capturing an image, when I try to save it to external storage it comes out blurred. Please give me a solution.
This is my code -
```
public void saveImageToExternalStorage() {
String root =Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES).toString();
File myDir = new File(root + "/saved_images");
myDir.mkdirs();
Random generator = new Random();
int n = 10000;
n = generator.nextInt(n);
String fname = "Image-" + n + ".jpg";
File file1 = new File(myDir, fname);
if (file1.exists())
file1.delete();
try {
FileOutputStream out = new FileOutputStream(file1);
finalBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
out.flush();
out.close();
}
catch (Exception e) {
e.printStackTrace();
}
// Tell the media scanner about the new file so that it is
// immediately available to the user.
MediaScannerConnection.scanFile(this, new String[] { file1.toString() }, null,
new MediaScannerConnection.OnScanCompletedListener() {
public void onScanCompleted(String path, Uri uri) {
Log.i("ExternalStorage", "Scanned " + path + ":");
Log.i("ExternalStorage", "-> uri=" + uri);
}
});
}
}
```
<issue_comment>username_1: It's probably the compression's fault. Try modifying the parameters of
```
finalBitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
```
to eg:
```
finalBitmap.compress(Bitmap.CompressFormat.JPEG, 100, out);
```
or save it to lossless PNG format if it's not against your requirements:
```
finalBitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
```
Source:
<https://developer.android.com/reference/android/graphics/Bitmap.html#compress(android.graphics.Bitmap.CompressFormat,%20int,%20java.io.OutputStream)>
Upvotes: 0 <issue_comment>username_2: You need to notify the gallery that a new image was added at that path; in other words, you must update the MediaStore so the file shows up as a new image:
```
private void addImageGallery( File file ) {
ContentValues values = new ContentValues();
values.put(MediaStore.Images.Media.DATA, file.getAbsolutePath());
values.put(MediaStore.Images.Media.MIME_TYPE, "image/jpeg"); // set this
getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
}
```
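A usage sketch (my addition): call it right after the stream is closed in the question's `saveImageToExternalStorage()`:
```
out.flush();
out.close();
addImageGallery(file1); // make the saved file visible in the gallery
```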
Upvotes: 2 [selected_answer]
<issue_start>username_0: I'm busy on an angular/nativescript project but I can't get the application running on the iOS emulator. The build itself is successful but the deployment fails.
```
Unable to apply changes on device: B1F01CA0-AEC5-452A-9D85-957B8BC398E3. Error is: Unable to get application path on device..
```
The above error is coming up whenever I want to send the build to the emulator.
I can't find anything related to this error.
The stacktrace is as follows (I left out some path parts that aren't required)
```
=== BUILD TARGET nativescript OF PROJECT nativescript WITH CONFIGURATION Debug ===
while processing while processing /nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o):
/nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o):
warning: /Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/28BJBFDGVGZZ3/UIKit-1V5UHAPTOD24G.pcm: No such file or directory
warning: /Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/2DKXPQ92SAISO/UIKit-1V5UHAPTOD24G.pcm: No such file or directory
note: Linking a static library that was built with -gmodules, but the module cache was not found. Redistributable static libraries should never be built with module debugging enabled. The debug experience will be degraded due to incomplete debug information.
note: Linking a static library that was built with -gmodules, but the module cache was not found. Redistributable static libraries should never be built with module debugging enabled. The debug experience will be degraded due to incomplete debug information.
while processing /nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o):
warning: /Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/28BJBFDGVGZZ3/Darwin-38I6DLZ5IH61J.pcm: No such file or directory
while processing /nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o):
warning: while processing /Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/2DKXPQ92SAISO/Darwin-38I6DLZ5IH61J.pcm /nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o): :
No such file or directorywarning:
/Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/28BJBFDGVGZZ3/Foundation-3DFYNEBRQSXST.pcm: No such file or directory
while processing /nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o)while processing /nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o):
:
warning: /Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/2DKXPQ92SAISO/Foundation-3DFYNEBRQSXST.pcm: No such file or directorywarning:
/Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/28BJBFDGVGZZ3/Dispatch-2LX9MWM6UEPQW.pcm: No such file or directory
while processing /nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o):
warning: while processing /Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/2DKXPQ92SAISO/Dispatch-2LX9MWM6UEPQW.pcm /nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o): :
No such file or directorywarning:
/Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/28BJBFDGVGZZ3/ObjectiveC-1RQA2RIXJIE19.pcm: No such file or directory
while processing /nativescript/platforms/ios/internal/TKLiveSync/TKLiveSync(TKLiveSync.o):
warning: /Users/nsbuilduser/Library/Developer/Xcode/DerivedData/ModuleCache/2DKXPQ92SAISO/ObjectiveC-1RQA2RIXJIE19.pcm: No such file or directory
Project successfully built.
```
What setting/config do I need to look at? I cant find any difference in the variables from when I set it up the first time and got it working.<issue_comment>username_1: I got it fixed, the solution was most likely as follows:
Start the emulator -> In the statusbar go to Hardware -> "Erase all content and Settings..." -> Press the power button on the emulator skin to restart
Upvotes: 2 <issue_comment>username_2: I had the same problem I solved it as I explain in the picture.
I'm not sure it works for all iOS versions. In my app my iOS is V5.0.0 (iPhone 5) and my NativeScript is version 5v.
Show the first three steps in my picture:
[](https://i.stack.imgur.com/xGeXZ.jpg)
4. Once you have crushed the state of emulation.
5. Exit and close the emulator.
6. Restart the command: $ tns run ios
Normally the phone will completely reset itself to reinstall the application.
Upvotes: 1 |
<issue_start>username_0: I need to load an item when the CheckBox is selected and if it is selected it should disappear because something else will be loaded.
Where did I make a mistake in this case?
XAML:
```
Load linek [kN/m]
```
NegatingConverter
```
[ValueConversion(typeof(bool), typeof(bool))]
public class NegatingConverter : IValueConverter
{
public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
{
return !((bool)value);
}
public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
{
throw new NotImplementedException();
}
}
```
<issue_start>username_0: I'm trying to set up CI with AWS ECS and docker. I use Codeship as a CI tool, but that should not really matter much.
I do the following steps in a shell script:
* build an image with my Dockerfile,
* push the image to ECS repository,
* push a task-definition.json to ECS `aws ecs register-task-definition --family postgraphile --cli-input-json file:///deploy/ecs-task-def.json --region us-east-2`
* run the ECS task `aws ecs run-task --task-definition postgraphile --cluster testcluster --region us-east-2`
The shell script runs successfully; however, I see an error in the output when I try to run my ECS task:
```
{
"tasks": [],
"failures": [
{
"arn": "arn:aws:ecs:us-east-2:99999999999:container-instance/050ab165-7669-45d5-8be7-d990cf4fff42",
"reason": "RESOURCE:MEMORY"
}
]
}
```
my `ecs-task-def.json`:
```
{
"containerDefinitions": [
{
"name": "postgraphile-container",
"image": "999999999999.dkr.ecr.us-east-2.amazonaws.com/test-repository",
"memory": 500,
"essential": true,
"portMappings": [
{
"hostPort": 5000,
"containerPort": 5000
}
]
}
],
"volumes": [],
"memory": "900",
"cpu": "128",
"placementConstraints": [],
"family": "postgraphile",
"taskRoleArn": ""
}
```
I think I have already checked all the memory limits... Am I missing anything?
UPDATE:
After a couple of reboots of the EC2 instance I can finally run the ECS task with no errors, but after running the task several times the error returns.<issue_comment>username_1: I was getting this error when deploying a service on an ECS cluster. Removing the service completely and then redeploying it afresh helped me resolve this issue.
Upvotes: 0 <issue_comment>username_2: I had faced a similar error while deploying services to EC2 using docker (via PuTTY, not the CLI). I ran `docker images` to get the size of each image and found the total was more than my EC2 instance storage, so I increased the EC2 instance volume to 20GB (EC2 --> Volume --> Modify volume) and the issue was resolved. Later, after much trial and error, I faced the issue again: this time I found a lot of unused images that had accumulated from repeated docker compose commands. I removed all the containers, volumes, and images, and the memory error was resolved.
Upvotes: 0 <issue_comment>username_3: There are limited values accepted for CPU and memory. Check the docs for the supported values:
| CPU value | Memory value (MiB) |
| --- | --- |
| 256 (.25 vCPU) | 512 (0.5GB), 1024 (1GB), 2048 (2GB) |
| 512 (.5 vCPU) | 1024 (1GB), 2048 (2GB), 3072 (3GB), 4096 (4GB) |
| 1024 (1 vCPU) | 2048 (2GB), 3072 (3GB), 4096 (4GB), 5120 (5GB), 6144 (6GB), 7168 (7GB), 8192 (8GB) |
| 2048 (2 vCPU) | Between 4096 (4GB) and 16384 (16GB) in increments of 1024 (1GB) |
| 4096 (4 vCPU) | Between 8192 (8GB) and 30720 (30GB) in increments of 1024 (1GB) |
<https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-cpu-memory-error.html>
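A quick way to see what actually triggers `RESOURCE:MEMORY` (an addition, not from the thread): compare the 900 MiB the task definition reserves against what the container instance still has free:
```
aws ecs describe-container-instances \
    --cluster testcluster \
    --container-instances 050ab165-7669-45d5-8be7-d990cf4fff42 \
    --region us-east-2 \
    --query 'containerInstances[].remainingResources'
```
If the MEMORY entry in `remainingResources` is below the task's reservation (for example because earlier `run-task` invocations are still holding it), placement fails with exactly this error.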
Upvotes: 1
<issue_start>username_0: I have a table `mark_summery`. Here is my table structure:

Now I just want to sum the `obt_marks` and `total_marks` of a specific student over multiple exams (maximum four; "multiple exams" means the summation of four exams). How can I write the SQL for it? Any ideas?
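A minimal sketch (not from the thread; the column names `student_id` and `exam_id` are assumptions, since the real structure is only visible in the screenshot):
```
SELECT student_id,
       SUM(obt_marks)   AS total_obtained,
       SUM(total_marks) AS total_maximum
FROM mark_summery
WHERE student_id = 123          -- the specific student
  AND exam_id IN (1, 2, 3, 4)   -- the (up to four) exams to combine
GROUP BY student_id;
```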
<issue_start>username_0: I would like to write the outputs of the following code in a text file. It gives this error:
`for x in zip(c(), R1):
TypeError: zip argument #1 must support iteration`
I could not find any solution. Any help please?
```
import numpy as np
from math import *
from scipy.integrate import quad
from scipy.integrate import odeint

xx=np.array([0.01,0.012])
yy=np.array([32.95388698,33.87900347])
Cov=[[137,168],[28155,-2217]]

with open('txtfile.txt', 'w') as f:
    for j in range (1,20):
        R1=np.random.uniform(0,1)
        Omn=0.32+R1
        Odn=1-Omn

        def dl(n):
            fn=xx[n]*Odn+yy[n]*Omn
            return fn

        def c():
            f_list = []
            for i in range(2): #the value '2' reflects matrix size
                f_list.append(dl(i))
            r1=[f_list]
            r2=[[f] for f in f_list]
            a=np.dot(r1,Cov)
            b=np.dot(a,r2)
            matrix=np.linalg.det(b)
            return matrix

        for x in zip(c(), R1):
            f.write("{0}\t{1}\n".format(x[0],x[1]))
```
I appreciate your help.<issue_comment>username_1: `c()` and `R1` are both simple values, not lists. So to write them to a file with a tab, you would just need:
```
f.write("{}\t{}\n".format(c(), R1))
```
For example:
```
import numpy as np
from math import *
from scipy.integrate import quad
from scipy.integrate import odeint

def dl(n):
    return xx[n] * Odn + yy[n] * Omn

def c():
    f_list = []
    for i in range(2):  # the value '2' reflects matrix size
        f_list.append(dl(i))
    r1 = [f_list]
    r2 = [[f] for f in f_list]
    a = np.dot(r1, Cov)
    b = np.dot(a, r2)
    matrix = np.linalg.det(b)
    return matrix

xx = np.array([0.01, 0.012])
yy = np.array([32.95388698, 33.87900347])
Cov = [[137, 168], [28155, -2217]]

with open('txtfile.txt', 'w') as f:
    for j in range(1, 20):
        R1 = np.random.uniform(0, 1)
        Omn = 0.32 + R1
        Odn = 1 - Omn
        f.write("{}\t{}\n".format(c(), R1))
```
This would create your `txtfile.txt` as follows:
```none
35206063.6746 0.777596199441
45374454.3839 0.926105934266
3990656.69091 0.0493187574204
28925205.8769 0.674852617966
45542873.2768 0.928417018276
4412088.81481 0.0683471360264
20148228.6097 0.510253466599
6934013.9475 0.166927414742
18602042.1473 0.477747802178
49485237.1146 0.981343401759
31379848.1448 0.716219179241
21670623.7641 0.541061316417
25859179.9751 0.620631842725
10642383.5164 0.28331967175
14640960.1091 0.387697186294
5183085.91921 0.100940240452
12734994.2117 0.340005554729
26863086.7454 0.638722906359
6227944.29448 0.141453730959
```
---
To write extra variables for each row, I would recommend you switch to using a CSV writer as follows:
```
import numpy as np
from math import *
from scipy.integrate import quad
from scipy.integrate import odeint
import csv

def dl(n):
    return xx[n] * Odn + yy[n] * Omn

def c():
    f_list = [dl(i) for i in range(2)]
    r1 = [f_list]
    r2 = [[f] for f in f_list]
    a = np.dot(r1, Cov)
    b = np.dot(a, r2)
    matrix = np.linalg.det(b)
    return matrix

xx = np.array([0.01, 0.012])
yy = np.array([32.95388698, 33.87900347])
Cov = [[137, 168], [28155, -2217]]

with open('txtfile.txt', 'w', newline='') as f:
    csv_output = csv.writer(f, delimiter='\t')
    for j in range(1, 20):
        R1 = np.random.uniform(0, 1)
        Omn = 0.32 + R1
        Odn = 1 - Omn
        csv_output.writerow([c(), R1])
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: When you are using `zip()`, you are working with lists (iterables).
Here, your function result and your random number are plain numbers, with no `[]` denoting a list, so you can write them without the `for` loop containing `zip()`:
`f.write("{0}\t{1}\n".format(c(),R1))`
The other point: move the function definitions out of the `with open` block.
Upvotes: 0 <issue_comment>username_3: Input to the `zip` function must be iterables, such as an array or a list.
Please try below, hopefully it will work.
```
for x in zip([c()], [R1]):
    f.write("{0}\t{1}\n".format(x[0], x[1]))
```
Documentation for zip in python3 is available [here](https://docs.python.org/3/library/functions.html).
Upvotes: 1
<issue_start>username_0: I'm trying to follow the examples found [here](https://learn.microsoft.com/en-US/dynamics365/customer-engagement/developer/clientapi/reference/xrm-navigation/openform "here") that explain how to use `Xrm.Navigation.openForm` method to open a CRM form for a new entity.
My target entity has multiple forms and I'm trying to specify the form ID in the `entityFormOptions` object as described in the link above. I've copied the relevant text here (with the relevant line in bold):
>
> entityFormOptions
> =================
>
>
> Entity form options for opening the form. The
> object contains the following attributes:
>
>
> * cmdbar: (Optional) Boolean. Indicates whether to display the command bar. If you do not specify this parameter, the command bar is displayed by default.
> * createFromEntity: (Optional) Lookup. Designates a record that will provide default values based on mapped attribute values. The lookup object has the following String properties: entityType, id, and name (optional).
> * entityId: (Optional) String. ID of the entity record to display the form for.
> * entityName: (Optional) String. Logical name of the entity to display the form for.
> * **formId: (Optional) String. ID of the form instance to be displayed**.
> * height: (Optional) Number. Height of the form window to be displayed in pixels.
> * navBar: (Optional) String. Controls whether the navigation bar is displayed and whether application navigation is available using the
> areas and subareas defined in the sitemap. Valid values are: "on",
> "off", or "entity".
>
>
>
However this doesn't seem to work for me.
The ID of my form is `375DE297-C0AF-4711-A811-5F1663FAE5DA`
Here's my code:
```
var entityFormOptions = {};
entityFormOptions["entityName"] = "contact";
entityFormOptions["formId"] = "375DE297-C0AF-4711-A811-5F1663FAE5DA";
Xrm.Navigation.openForm(entityFormOptions);
```
The new entity form opens; however it uses the default form, not the specified form.
I am running as a System Administrator and I have confirmed that I have access to all the forms for the specified entity so I don't think it is a form-security issue.
Has anyone tried this method of opening forms in Dynamics 365?<issue_comment>username_1: That looks like a mistake in the docs or a bug in Dynamics.
The previous implementation (v8 and before) took **formid** in the parameters object: <https://msdn.microsoft.com/en-us/library/jj602956.aspx#openEntityForm>
Although the current documentation states that formId must be set in entityFormOptions, it isn't actually honoured there. But it is honoured when you put it in the good old formParameters.
Thus this does the trick:
```
var entityFormOptions = {};
entityFormOptions["entityName"] = "contact";
var formParameters = {};
formParameters ["formid"] = "375DE297-C0AF-4711-A811-5F1663FAE5DA";
Xrm.Navigation.openForm(entityFormOptions, formParameters);
```
P.S. Note the lowercase **"formid"**.
Upvotes: 3 [selected_answer]<issue_comment>username_2: We can also use the code below to open a particular entity form:
```
var entityFormOptions = {};
entityFormOptions["entityName"] = "nrw_contact";//Logical name of the entity
entityFormOptions["entityId"] = "nrw_contact_ID"; //ID of the entity record
entityFormOptions["formId"] = "CF8D885B-256D-43E6-8776-CBBB7AA88EF5"; //FormId
Xrm.Navigation.openForm(entityFormOptions);
```
Please refer to this link for more details: <https://learn.microsoft.com/en-us/dynamics365/customer-engagement/developer/clientapi/reference/xrm-navigation/openform>
Upvotes: 1 <issue_comment>username_3: This may be a little late but hopefully will help someone else.
The documentation is correct. You can supply formId as shown. You only need to make sure the form is added to the Model-Driven App in App Designer (you add the form by checking it on the panel on the right). [](https://i.stack.imgur.com/brRJu.jpg)
```
var pageInput = {
pageType: "entityrecord",
entityName:"icon_case",
entityId: recordId,
formId: v_formId
};
```
Upvotes: 2 |
2018/03/19 | 401 | 1,349 | <issue_start>username_0: ```
void slpEnableService (void)
{
STATUS status;
slp_task_msg_t msg;
int slpEventBuf = 1;
msg.id = SLP_MSG_ENABLE;
status = msgQSend(slp_msg_queue, (char *)&msg, sizeof(slp_task_msg_t),
NO_WAIT, MSG_PRI_NORMAL);
assert(status == OK);
write(slp_fifo_wr_fd, &slpEventBuf, sizeof(slpEventBuf));
}
```
what is STATUS here?<issue_comment>username_1: There is no "STATUS" in C.
It's probably defined in a header that we don't know.
Upvotes: 2 <issue_comment>username_2: It might be a structure, macro, or enum; check the header files included in your current code file.
Without knowing what is present in your header files, it's impossible to answer.
Upvotes: 0 <issue_comment>username_3: Given that it is all in upper case, it is probably a `#define` macro.
Given its location - it is where you would expect to see the type in a variable declaration - it is probably a macro that defines a type.
Given that it is used as the return type for `msgQSend()`, it is probably an integer type, in fact, it is probably `int`.
Somewhere in your code (probably in a header) there will be a line like
```
#define STATUS int
```
or possibly
```
typedef int STATUS;
```
or possibly even an enum
```
typedef enum { OK = 0, ERROR = -1 } STATUS;
```
Upvotes: 0 |
2018/03/19 | 1,050 | 3,682 | <issue_start>username_0: I ran into a little problem. I just wrote a simple shell script to write a few Maven dependencies into files. It works fine, but there is one problem: during the process, I show information about the progress and so on (I described that in the example below), and one of the lines I display using `printf` is duplicated. It looks like:
>
> Writing dependencies in ../path/all\_dependencies.txt
>
>
> -- Applying with "mvn dependency:tree -Dverbose ".....
>
>
> -- Applying with "mvn dependency:tree -Dverbose -Dincludes=javax.servlet".....
>
>
>
I can't work out why an additional line appears here. I went through a lot of sources on the Internet but found nothing about which direction to dig in. I guess it could be influenced by the progress bar, but I can't understand how exactly.
What is also interesting: why do I get different lines? Maybe it's a feature of how arrays work?...
I would appreciate any help/explanation/etc.
This is my script; I added a comment before the troubled line:
```
#!/bin/sh
start=$(date)
printf "\n Show dependencies from all projects \n $start"
printf "\n -----------------------------------"
#Paths to catalogs
PATH1=..some/path1
PATH2=..some/path2
PATH3=..some/path3
green='\033[0;32m'
red='\033[0;31'
nc='\033[0m'
# mvn_params represents Maven Parametrs options
# you can specify here all params you want to use when dependency tree will be applied
#
# E.G: -Dverbose -Dincludes=javax.servlet
#
if [ "$#" -eq 0 ]; then
maven_params=""
else
maven_params=( "$@" )
fi
array=(
$PATH1
$PATH2
$PATH3
)
cp /dev/null all-dependencies.txt
for element in ${array[@]}; do
module=$element
if [ -d "$module" ]; then
cd $element
full_path="dependencies.txt"
printf "\n Writing dependencies in $module/$full_path"
# A duplicated line is below
printf "\n -- Applying with \"mvn dependency:tree %s \"....." "${maven_params[@]}"
sp='/-\|'
printf ' '
mvn dependency:tree "${maven_params[@]}" > $full_path &
while [[ -n $(jobs -r) ]]; do
printf '\b%.1s' "$sp"
sp=${sp#?}${sp%???}
done
status_maven=$?
cat $full_path >> ../all-dependencies.txt
if [ $status_maven -eq 0 ]; then
printf "\b%.1s ${green}\\u2714${nc} Done\n"
else
printf "\b%.1s ${red}\\u274C${nc} Failed\n"
fi
else
printf "\n ${red}\\u274C Failed. ${nc} $module: No such file or directory\n"
fi
done
printf "\n ${green}DONE:${nc} File with all dependencies has been created: all-dependencies.txt"
exit 0
```<issue_comment>username_1: The behaviour you see can be demonstrated by this example:
```
printf "\nParameter: %s\n" a b c
```
If you have only one formatting code (here: `%s`), but pass several arguments to `printf`, the string will be used over and over again, so the output of this statement is:
```
Parameter: a
Parameter: b
Parameter: c
```
In your case, it means that the array `maven_params` contains two elements.
Upvotes: 2 <issue_comment>username_2: Thanks to username_1, I was able to think about how array elements are represented in the shell. I found a simple solution.
I hope it can be helpful for somebody else who is looking for an answer.
SOLUTION:
=========
In my case, I wrote `"${maven_params[@]}"`, where `[@]` expands all elements as separate arguments. I had to use `[*]` instead, which puts the array elements in a row, like `$1$2`... etc.
>
> "$@" expands each element as a separate argument, while "$\*" expands to the args merged into one argument
>
>
>
Upvotes: 2 [selected_answer] |
2018/03/19 | 300 | 1,036 | <issue_start>username_0: I use python 3.6.4 and my OS is macOS High Sierra.
So I've installed `pynput`. I can import it fine in the terminal, but not in Python IDLE.

I've used `sys.version` & `sys.path` and they're the same. I only have python 3.6 installed on my Mac

Does anyone know the reason why and how I can solve it?<issue_comment>username_1: This usually happens when you have multiple versions of python installed.
Run these lines in both IDLE and Terminal:
```
import sys
sys.version
sys.path
```
You will get the system PATH of your current Python. Now just delete one of them and that's it.
Upvotes: 1 <issue_comment>username_2: This happens because on the terminal you use Anaconda which is unknown to IDLE.
You can use Spyder, which comes with Anaconda.
Edit: You'll find IDLE at: c:\Anaconda\Lib\idlelib\idle.bat
On Mac, I think you should run `pip install spyder`
Upvotes: 0 |
2018/03/19 | 405 | 1,372 | <issue_start>username_0: As you can see below, the first photo works fine when
```
this.state.blabal
```
is NOT inside the
```
map(a, b){blabla}
```
but as in photo 2, when I move the working-fine block inside the
```
map(a, b){`here!!`}
{Object.keys(newsProviderID_Name_Dic).map(function(key, index){
return
}
label="<NAME>"
/>;
})}
```
It will show up the error:
>
> TypeError: Cannot read property 'state' of undefined
>
>
>
Why is this happening, and how can I solve it without binding individually?


<issue_comment>username_1: use an arrow function
```
Object.keys(newsProviderID_Name_Dic).map((key, index) => {
    return /* ...the same JSX as before; `this.state` works here... */
})
```
inside an arrow function, `this` maintains the same meaning as the enclosing scope
Upvotes: 3 [selected_answer]<issue_comment>username_2: Maybe you did not set an initial state for the component.
Try this:
```
class Example extends Component {
    state = {};
    render(){
        //.....code
    }
}
```
or
```
class Example extends Component {
    constructor(props){
        super(props);
        this.state = {};
    }
}
```
Good luck!
Upvotes: 1 |
2018/03/19 | 631 | 2,531 | <issue_start>username_0: After a button click, I want to create an Intent for browsing files and selecting a directory. Then I want to store a path for this directory in shared preferences. Later I want to use this path as an argument for a File object, so I can, for example, get a parent directory of a directory I picked, or list all its files. The problem is, I am getting this path from Intent:
content://com.android.externalstorage.documents/tree/primary%3AAndroid%2Fdata
I read here [link](https://stackoverflow.com/questions/5657411/android-getting-a-file-uri-from-a-content-uri) and tried to convert content Uri to File Uri using cursor, but i am getting this error:
```
java.lang.UnsupportedOperationException: Unsupported Uri content://com.android.externalstorage.documents/tree/primary%3AAndroid%2Fdata
```
Is it because of characters before "Android" and "data" folder ? It always fails when trying to create a cursor.
Here is a simple example of what I want to achieve. I did not include a code for converting Content Uri to File Uri. I tried almost every code for this what I found, but with no result.
```
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
button.setOnClickListener {
val intent = Intent(Intent.ACTION_OPEN_DOCUMENT_TREE)
startActivityForResult(intent, SELECT_DIRECTORY)
}
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
if (resultCode == Activity.RESULT_OK) {
if (requestCode == SELECT_DIRECTORY) {
val path = data?.data
// convert content Uri to File Uri ?
// store path in shared preferences...
// later use it in File File(storedPath)
}
}
}
```<issue_comment>username_1: Do not even try to convert a content scheme to a file scheme.
There is no reason for it.
Please explain why you think you should.
```
// store path in shared preferences...
```
No. You should do that with
```
content://com.android.externalstorage.documents/tree/primary%3AAndroid%2Fdata
```
Then you can use it later.
Well if you made the obtained permissions persistent.
Upvotes: 2 <issue_comment>username_2: I was trying to implement a "default folder" function in my file manager app, and to pick a directory I wanted to use another app, but that did not work for me. Thanks to @username_1 I realized that I can select the default folder within my own app.
Upvotes: 1 [selected_answer] |
2018/03/19 | 748 | 1,905 | <issue_start>username_0: Hello, everybody, I have this task:
I have an array [4,7,3,6,9] and I have to make an array like this:
```
[4,7,3,6,9]
[9,4,7,3,6]
[6,9,4,7,3]
[3,6,9,4,7]
[7,3,6,9,4]
```
I have to make a program where the array rotates, and even if I add a new item to the array it should change accordingly. I am a total newbie at JS (1 week or so); here is my current try:
```
var numbers = [4, 7, 3, 6, 9];
console.log(numbers);
numbers[0] = 9; numbers[1] = 4; numbers[2] = 7; numbers[3] = 3; numbers[4] = 6;
console.log(numbers);
numbers[0] = 6; numbers[1] = 9; numbers[2] = 4; numbers[3] = 7; numbers[4] = 3;
console.log(numbers);
numbers[0] = 3; numbers[1] = 6; numbers[2] = 9; numbers[3] = 4; numbers[4] = 7;
console.log(numbers);
numbers[0] = 7; numbers[1] = 3; numbers[2] = 6; numbers[3] = 9; numbers[4] = 4;
console.log(numbers);
```
Also in my mind I have .push, .splice, etc. I don't know why, but I really feel that JavaScript is not for my brain, haha :D<issue_comment>username_1: You could pop the value and unshift it.
```js
var array = [4, 7, 3, 6, 9],
i = array.length;
while (i--) {
console.log(array.join(' '));
array.unshift(array.pop());
}
console.log(array.join(' '));
```
Upvotes: 3 <issue_comment>username_2: You can simply use `splice` in conjunction with `pop`:
```js
var arr = [4,7,3,6,9];
for(var i = 0; i < arr.length; i++){
    console.log(arr);
    arr.splice(0, 0, arr.pop()); // move the last element to the front
}
```
Upvotes: 0 <issue_comment>username_3: This is my solution:
```
var numbers = [4, 7, 3, 6, 9];
for(var i = 0; i < numbers.length; i++) {
console.log(numbers);
var lastElement = numbers.pop();
numbers = [lastElement].concat(numbers);
}
```
Upvotes: 0 <issue_comment>username_4: You can use shift and push:
```
function rotate( array , times ){
while( times-- ){
var temp = array.shift();
array.push( temp )
}
}
//Test
var players = ['Bob','John','Mack','Malachi'];
rotate( players ,2 )
console.log( players );
```
Upvotes: 2 |
2018/03/19 | 787 | 2,074 | <issue_start>username_0: Is it possible to do this? Here is my code:
```
$sponsor1 = "Sponsor 1";
$sponsor2 = "Sponsor 2";
$sponsor3 = "Sponsor 3";
$sponsor4 = "Sponsor 4";
$sponsor5 = "Sponsor 5";
for($i=1;$i<=5;$i++) {
if($sponsor.$i){
echo $sponsor.$i;
echo "
";
}}
```
I want the above loop to display something like this:
```
Sponsor 1
Sponsor 2
Sponsor 3
Sponsor 4
Sponsor 5
```<issue_comment>username_1: You can construct your variables using PHP's variable variables; read here:
<http://php.net/manual/en/language.variables.variable.php>
```
$sponsor1 = "Sponsor 1";
$sponsor2 = "Sponsor 2";
$sponsor3 = "Sponsor 3";
$sponsor4 = "Sponsor 4";
$sponsor5 = "Sponsor 5";
for($i=1;$i<=5;$i++) {
$v = 'sponsor'.$i;
echo $$v;
echo "
";
}
```
Notice the `$$v`
Upvotes: 2 <issue_comment>username_2: There are three possible solutions:
1.Create array and iterate over it. (***Best***)
```
$sponsor_array = ["Sponsor 1","Sponsor 2","Sponsor 3","Sponsor 4","Sponsor 5"];
foreach($sponsor_array as $sponsor){
echo $sponsor;
echo PHP_EOL;
}
```
Output:- <https://eval.in/974403>
2. Directly use a `for()` loop (***Better***)
```
for($i=1;$i<=5;$i++) {
    echo "Sponsor ".$i;
    echo PHP_EOL;
}
```
Output:- <https://eval.in/974401>
3. Use the [variable variables](http://php.net/manual/en/language.variables.variable.php) concept (***Not recommended for your case***)
```
$sponsor1 = "Sponsor 1";
$sponsor2 = "Sponsor 2";
$sponsor3 = "Sponsor 3";
$sponsor4 = "Sponsor 4";
$sponsor5 = "Sponsor 5";
for($i=1;$i<=5;$i++) {
$v = 'sponsor'.$i;
echo $$v;
echo "
";
}
```
Output:- <https://eval.in/974405>
Upvotes: 2 <issue_comment>username_3: The proper solution is to use an array.
```
$sponsors = [
"Sponsor 1",
"Sponsor 2",
"etc"
];
foreach ($sponsors as $thisSponsor) {
echo "$thisSponsor
\n";
}
```
If for whatever reason you can't use an array directly you could always do the following:
```
foreach ([$sponsor1, $sponsor2, $sponsor3, $sponsor4, $sponsor5] as $thisSponsor) {
```
Upvotes: 2 |
2018/03/19 | 902 | 2,671 | <issue_start>username_0: The Unit test code from the simplegraph-core testsuite below displays the region count of airports but it is not ordered as I would have expected.
The result starts with:
```
NZ-BOP= 3
MZ-A= 1
MZ-B= 1
IN-TN= 5
MZ-N= 1
PW-004= 1
MZ-I= 2
BS-FP= 1
IN-TR= 1
MZ-T= 1
BJ-AQ= 1
GB-ENG= 27
```
I looked into
* [Gremlin group by vertex property and get sum other properties in the same vertex](https://stackoverflow.com/questions/45094967/gremlin-group-by-vertex-property-and-get-sum-other-properties-in-the-same-vertex)
* [Gremlin query for groupcount of last value](https://stackoverflow.com/questions/42016541/gremlin-query-for-groupcount-of-last-value)
and searched for "GroupCount" in questions tagged gremlin to no avail
**What is necessary to fix the ordering?**
**Unit Test**
see also <https://github.com/BITPlan/com.bitplan.simplegraph/blob/master/simplegraph-core/src/test/java/com/bitplan/simplegraph/core/TestTinkerPop3.java>
```
@Test
public void testSortedGroupCount() throws Exception {
Graph graph = getAirRoutes();
GraphTraversalSource g = graph.traversal();
Map counts = g.V().hasLabel("airport").groupCount()
.by("region").order().by(Order.decr).next();
assertEquals(1473, counts.size());
for (Object key : counts.keySet()) {
System.out.println(String.format("%s=%3d", key, counts.get(key)));
}
}
```<issue_comment>username_1: You need to order the `values` with `local` scoping:
```
g.V().hasLabel("airport").
groupCount().
by("region").
order(local).
by(values, Order.decr)
```
With `local` scoping you order within the current traverser (i.e. order the contents of each `Map` in the traversal).
```
@Test
public void testSortedGroupCount() throws Exception {
Graph graph = getAirRoutes();
GraphTraversalSource g = graph.traversal();
Map counts = g.V().hasLabel("airport").groupCount()
.by("region").order(Scope.local).by(Column.values,Order.decr).next();
// https://stackoverflow.com/a/49361250/1497139
assertEquals(1473, counts.size());
assertEquals("LinkedHashMap",counts.getClass().getSimpleName());
debug=true;
if (debug)
for (Object key : counts.keySet()) {
System.out.println(String.format("%s=%3d", key, counts.get(key)));
}
}
```
will then show:
```
US-AK=149
AU-QLD= 50
CA-ON= 46
CA-QC= 44
PF-U-A= 30
US-CA= 29
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Using the Gremlin language only, you can do it using the `unfold` method. This may be useful if you need to run your query via the REST API.
```js
g.V()
.hasLabel("airport")
.values("region")
.groupCount()
.unfold()
.order()
.by(values)
```
Upvotes: 0 |
2018/03/19 | 2,121 | 8,390 | <issue_start>username_0: I've got the following route:
```
public void configure() throws Exception {
from(ftpEndpoint)
.routeId("import-lib-files")
.log(INFO, "Processing file: '${headers.CamelFileName}' from Libri-FTP")
.choice()
.when(method(isFilenameAlreadyImported))
.log(DEBUG, "'${headers.CamelFileName}' is already imported.")
.endChoice()
.otherwise()
.bean(method(unzipLibFile))
.bean(method(persistFilename))
.log(DEBUG, "Import file '${headers.CamelFileName}'.")
.endChoice()
.end()
.end();
}
```
inside the `unzipLibFile` processor bean the file from the ftp gets uncompressed and is written to the HD.
I want to test (integration test) this route, like:
* 1. Copy file to ftp
* 2. Start the route
* 3. evaluate the 'outcome'
I like:
```
@Before
public void setUp() throws Exception {
// delete test-file from sftp
final String uploaded = ftpPath + "/" + destination + "/libri-testfile.zip";
final File uploadedFile = new File(uploaded);
uploadedFile.delete();
// delete unzipped test-file
final String unzippedFile = unzipped + "/libri-testfile.xml";
final File expectedFile = new File(unzippedFile);
expectedFile.delete();
// delete entries from db
importedLibFilenameRepository.deleteAll();
// copy file to ftp
final File source =
new ClassPathResource("vendors/references/lib.zip/libri-testfile.zip").getFile();
final String target = ftpPath + "/" + destination + "/libri-testfile.zip";
FileUtils.copyFile(new File(source.getAbsolutePath()), new File(target));
}
@Test
@Ignore
public void testStuff() throws Exception {
// Well here is a problem, I can't fix at the moment
// the Camel-Context within the SpringContext get started when the tests starts
// during this process the Camel-Routes are executed and because i copied the file to
// the ftp all is fine... but I don't want to have a sleep in a test, I want to start the
// route (like commented code beneath the sleep)
Thread.sleep(2000);
// final Map headers = Maps.newHashMap();
// headers.put("CamelFileName", "libri-testfile.zip");
//
// final File file =
// new ClassPathResource("vendors/references/lib.zip/libri-testfile.zip").getFile();
// final GenericFile genericFile =
// FileConsumer.asGenericFile(file.getParent(), file, StandardCharsets.UTF\_8.name(), false);
//
// final String uri = libFtpConfiguration.getFtpEndpoint();
// producer.sendBodyAndHeaders(uri, InOut, genericFile, headers);
// test if entry was made in the database
final List filenames = importedLibFilenameRepository.findAll();
assertThat(filenames).usingElementComparatorIgnoringFields("id", "timestamp")
.containsExactly(expectedFilename("libri-testfile.zip"));
// test if content of unzipped file is valid
final String expected = unzipped + "/libri-testfile.xml";
final Path targetFile = Paths.get(expected);
final byte[] encoded = Files.readAllBytes(targetFile);
final String actualFileContent = new String(encoded, Charset.defaultCharset());
final String expectedFileContent = "This is my little test file for Libri import";
assertThat(actualFileContent).isEqualTo(expectedFileContent);
}
private ImportedLibFilename expectedFilename(final String filename) {
final ImportedLibFilename entity = new ImportedLibFilename();
entity.setFilename(filename);
return entity;
}
```
The **problem** is:
All Camel routes are started automatically, and because I copied the file to the FTP the test is green. But I have a #sleep inside my test, which I don't want. I want no Camel route to start automatically; I want to start only the route I need.
My **questions** are:
* 1. How can I prevent the Camel routes from starting automatically?
* 2. Is the commented code (in the test method) the right way to start a route manually?
* 3. What are best practices to test a Camel route with an FTP?<issue_comment>username_1: 1. Use `.autoStartup(yourVariable)` in your routes to make their startup configurable. Set the variable to `true` in normal environments and to `false` in your test cases.
2. I don't see code to start a route?!?
3. Well, take a step back. Think about splitting your FTP route. For testing and more reasons:
For example split the route into an FTP and a processing route. The first does only the FTP transfer and then sends the received messages to the processing route (for example a `direct:` route).
Benefits:
* [**SRP**](https://de.wikipedia.org/wiki/Single-Responsibility-Prinzip): Both routes do just one thing and you can concentrate on it.
* **Testability**: You can test the processing route easily by sending messages to the `direct:` endpoint of the processing route. The tests can focus on one thing too.
* **Extensibility**: Imagine there is a new input channel (JMS, HTTP, whatever). Then you just add another input route that also sends to your processing route. Done.
When you really want to test the whole process from FTP file drop until the end, think about using the [Citrus test framework](http://www.citrusframework.org/) or similar tooling. Camel route tests are (in my opinion) a kind of "Unit tests for Camel routes", not full integration tests.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Thanks to @username_1...
His advice to split the routes (Single Responsibility) helped me solve my problem:
Here is the route:
The "Main-Route" consuming from the sFTP:
```
@Override
public void configure() throws Exception {
// @formatter:off
from(endpoint)
.setHeader("Address", constant(address))
.log(INFO, "Import Libri changeset: Consuming from '${headers.Address}' the file '${headers.CamelFileName}'.")
.to("direct:import-new-file");
// @formatter:on
}
```
The first sub-route:
```
@Override
public void configure() throws Exception {
// @formatter:off
from("direct:import-new-file")
.choice()
.when(method(isFilenameAlreadyImported))
.log(TRACE, "'${headers.CamelFileName}' is already imported.")
.endChoice()
.otherwise()
.log(TRACE, "Import file '${headers.CamelFileName}'.")
.multicast()
.to("direct:persist-filename", "direct:unzip-file")
.endChoice()
.end()
.end();
// @formatter:on
}
```
The two multicasts:
```
@Override
public void configure() throws Exception {
// @formatter:off
from("direct:persist-filename")
.log(TRACE, "Try to write filename '${headers.CamelFileName}' to database.")
.bean(method(persistFilename))
.end();
// @formatter:on
}
```
and
```
@Override
public void configure() throws Exception {
// @formatter:off
from("direct:unzip-file")
.log(TRACE, "Try to unzip file '${headers.CamelFileName}'.")
.bean(method(unzipFile))
.end();
// @formatter:on
}
```
And with this setup I can write my tests like:
```
@Test
public void testRoute_validExtractedFile() throws Exception {
final File source = ZIP_FILE_RESOURCE.getFile();
producer.sendBodyAndHeaders(URI, InOut, source, headers());
final String actual = getFileContent(unzippedPath, FILENAME);
final String expected = "This is my little test file for Libri import";
assertThat(actual).isEqualTo(expected);
}
@Test
public void testRoute_databaseEntryExists() throws Exception {
final File source = ZIP_FILE_RESOURCE.getFile();
producer.sendBodyAndHeaders(URI, InOut, source, headers());
final List actual = importedFilenameRepository.findAll();
final ImportedFilename expected = importedFilename(ZIPPED\_FILENAME);
assertThat(actual).usingElementComparatorIgnoringFields("id", "timestamp")
.containsExactly(expected);
}
private String getFileContent(final String path, final String filename) throws IOException {
final String targetFile = path + "/" + filename;
final byte[] encodedFileContent = Files.readAllBytes(Paths.get(targetFile));
return new String(encodedFileContent, Charset.defaultCharset());
}
private Map headers() {
final Map headers = Maps.newHashMap();
headers.put("CamelFileName", ZIPPED\_FILENAME);
return headers;
}
```
I can start the camel route with the `ProducerTemplate` (producer) and send a message to a direct endpoint (instead the ftp endpoint).
Upvotes: 0 |
2018/03/19 | 526 | 1,683 | <issue_start>username_0: In CSS, I have the class 'current-cat-parent' and the class 'hide-it'. One is at the top of the page and the second is in the middle of the page.
Now the thing is, I want the 'hide-it' class's content to appear only if the class 'current-cat-parent' exists somewhere in the page; otherwise hide that content.
Please let me know how I can make this possible.
Thank you<issue_comment>username_1: ```
var parent = document.querySelectorAll('.current-cat-parent');
if(parent.length == 0){
document.querySelector('.hide-it').style.display = none;
}
```
Upvotes: 1 <issue_comment>username_2: You can also do it through CSS alone, if `.hide-it` sits inside `.current-cat-parent`: hide it by default and show it only when the parent class is present.
```
.hide-it { display: none; }
.current-cat-parent .hide-it { display: block; }
```
Upvotes: 0 <issue_comment>username_3: The problem with Amir's answer is that the length of a div that exists on the page is `1` not `0`. Also `none` should be `"none"`
see below
```js
var parent = document.querySelectorAll('.current-cat-parent');
console.log(parent.length)
if (parent.length === 1) { // or > 0
document.querySelector('.hide-it').style.display = "none";
}
```
```css
.current-cat-parent {
height: 100px;
width: 100px;
background: Red;
}
.hide-it {
height: 100px;
width: 100px;
background: blue;
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_4: If both of your HTML elements are siblings, for example if you have something like below:
```
<div class="current-cat-parent">...</div>
---- other contents -----
<div class="hide-it">...</div>
```
Then you can simply achieve that using CSS only. Try this:
```
.hide-it {
display: none;
}
.current-cat-parent ~ .hide-it {
display: block;
}
```
Upvotes: 1 |
2018/03/19 | 221 | 902 | <issue_start>username_0: I have a problem with my account in windows. It disappeared and I can't find a way to recover it. So I can't access my data on comp and my jobs are not up in the cloud. Is any chance that I can recover my jobs with my talend account?
Thank you!<issue_comment>username_1: if your disk (hdd) not encrypted you still can connect the disk to another computer and read your data.
Upvotes: 0 <issue_comment>username_2: If you are using Talend Open Studio, the source code of your jobs is only stored on your local disk.
With the subscription version of Talend, you would be using a remote repository (svn or git), where your jobs are stored.
With either version, your jobs are never going to be stored outside these locations, so a Talend account is of no use.
You may have a chance with a data recovery software to restore the files from disk.
Upvotes: 2 [selected_answer] |
2018/03/19 | 1,113 | 3,374 | <issue_start>username_0: Is there any easy way to find out the trend of a list?
For example:
* `[5.0, 6.0, 9.0, 4.0, 10.0]`. On the whole, its elements are increasing over time.
* `[6.0, 4.0, 5.0, 4.0, 3.0]`. Its elements are decreasing over time.
Ideally, I want a Boolean result from these kinds of lists.
Actually, I'd like to know the trend of a collection of data: not strictly linear, and not increasing exactly element by element. In the real world, data is not always clean; perhaps one or two quarters are not as good as before (but not too bad, and not too many of them). As long as the overall trend is good, it counts as good.<issue_comment>username_1: >
> On the whole, Its elements are increasing.
>
>
>
I take this to mean you want to consider the change in moving average. Half the job is defining *what you really want*, so I advise you think carefully about this before starting to write logic.
I've combined a moving average [solution by @Jaime](https://stackoverflow.com/a/14314054/9209546) with `np.diff` to suggest a possible way to infer what you want.
```
import numpy as np
def moving_average(a, n=3) :
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
lst1 = [5.0, 6.0, 9.0, 4.0, 10.0]
lst2 = [6.0, 4.0, 5.0, 4.0, 3.0]
res1 = np.all(np.diff(moving_average(np.array(lst1), n=4))>0)
# True; i.e. "generally increasing"
res2 = np.all(np.diff(moving_average(np.array(lst2), n=4))>0)
# False, i.e. "generally not increasing"
```
**Explanation**
* `moving_average` calculates the moving average across a window of 4 entries.
* In each case you get an array of 2 numbers (for list of length 5).
* `np.diff` then calculates the pairwise changes between these numbers.
* `np.all` with test `>0` determines if the changes are all positive or not all positive. An oversimplification driven by no clear requirement.
Upvotes: 4 <issue_comment>username_2: You can simply check it. Will this work?
```
def checker(list_):
check={}
temp = []
for m, n in enumerate(list_):
try:
if list_[m] < list_[m + 1]:
temp.append('Increasing')
else:
temp.append('Decreasing')
except IndexError:
pass
check[temp.count('Increasing')] = 1
check[temp.count('Decreasing')] = 0
return check[max(check)]
```
Test case 1:
```
print(checker([5.0, 6.0, 9.0, 4.0, 10.0]))
```
output:
```
1
```
Test case 2:
```
print(checker([6.0, 4.0, 5.0, 4.0, 3.0]))
```
output:
```
0
```
Upvotes: 0 <issue_comment>username_3: I just had the same issue, and created a version using only pandas for this problem, making the implementation of the moving\_average function unnecessary.
```
def get_trend(array=[], reverse=False):
array = pd.Series(array).iloc[::-1].reset_index(drop=True) if reverse else pd.Series(array) #reverse Array if needed and convertes it to a pandas Series object
trend = pd.Series(array).rolling(len(array)-1).mean().dropna().reset_index(drop=True) #calculate MA from array
return -1 if trend[0] > trend[1] else 1 if trend[0] < trend[1] else 0
```
The function returns 1 for uptrends, -1 for downtrends and 0 if neither of those are given.
According to my measurements, using your arrays from above, this function takes about 0.002 seconds per call.
Upvotes: 0 |
2018/03/19 | 1,778 | 6,211 | <issue_start>username_0: I am writing unit test cases for my application. There is one function that is written in the Utils section and used in all files. I want to mock this Utils function whenever I need to, but I am unable to do so.
Here is my code setup:
Utils.js
```
const getData = (name) => "Hello !!! " + name;

const getContact = () => Contacts.mobile;

export {
  getData,
  getContact }
```
Login.js (Which uses Utils.js)
```
const welcomeMessage = (name) => {
return getData(name);
}
```
My Test file (Login.spec.js)
```
import { getData } from '../../src/utils';
jest.mock('getData', () => jest.fn())
describe('User actions', () => {
it('should get username', () => {
const value = 'Hello !!! Jest';
expect(welcomeMessage('Jest')).toEqual(value);
});
});
```
When I run my test case then I am getting this error:
```
Cannot find module 'getData' from 'Login.spec.js'
```
I tried to find the solution in the official Jest documentation and on SO as well but couldn't find anything. I am unable to fix this error and mock this function.<issue_comment>username_1: The first argument of `jest.mock(...)` must be a module path:
```
jest.mock('../../src/utils');
```
because the `utils` module is your own code, not a 3rd-party lib, you must use Jest's manual mocks:
<https://facebook.github.io/jest/docs/en/manual-mocks.html>
If you have this file: `src/utils.js`,
you can mock it by creating the file `src/__mocks__/utils.js`.
The content of that file replicates the original's exports, but replaces each implementation with a stub such as `getData = jest.fn()`.
In your test file, just call `jest.mock('../../src/utils');` at the beginning of the file.
Once you're familiar with it, you can call that function inside `beforeEach()` and its counterpart `jest.unmock('../../src/utils');` inside `afterEach()`.
An easy way to think about it is that:
when you call `jest.mock('../../src/utils');`, it means you tell `jest` that:
>
> hey, if the running test meets the line `require('../../src/utils')`, don't load it; load `../../src/__mocks__/utils` instead.
>
>
>
Upvotes: 5 <issue_comment>username_2: Another solution would be to fake it by doing something like:
```
window['getData'] = jest.fn();
```
Upvotes: 1 <issue_comment>username_3: I had the same question and finally found the solution. I post it here for anyone with the same issue.
Jest test file:
```
import * as utils from "./demo/utils";
import { welcomeMessage } from "./demo/login";
// option 1: mocked value
const mockGetData = jest.spyOn(utils, "getData").mockReturnValue("hello");
// option 2: mocked function
const mockGetData = jest.spyOn(utils, "getData").mockImplementation((name) => "Hello !!! " + name);
describe("User actions", () => {
it("should get username", () => {
const value = "Hello !!! Jest";
expect(welcomeMessage("Jest")).toEqual(value);
});
});
```
references:
<https://jestjs.io/docs/jest-object#jestfnimplementation>
[`jest.spyOn(object, methodName)`](https://jestjs.io/docs/jest-object#jestspyonobject-methodname)
>
> Creates a mock function similar to jest.fn but also tracks calls to
> object[methodName]. Returns a Jest mock function.
>
>
>
Note: By default, jest.spyOn also calls the spied method. This is different behavior from most other test libraries. If you want to override the original function, you can use:
```
jest.spyOn(object, methodName).mockImplementation(() => customImplementation)
```
or
```
object[methodName] = jest.fn(() => customImplementation);
```
Upvotes: 5 <issue_comment>username_4: Using `jest.spyOn` fails in 2022
--------------------------------
I was trying to recreate [sherwin waters solution](https://stackoverflow.com/a/66669162/3978701) using `jest.spyOn` but it was not working. Don't know why.
>
> **UPDATE**: Now I know why. I was using Create-React-App, and in a new version they changed a default: the `resetMocks` flag is now set to true by default [source](https://github.com/facebook/create-react-app/issues/9935). That's why we need to declare the mocks in `beforeEach`, as they are cleared after each test.
>
>
>
```js
// ./demo/welcomeMessage.js
import { getData } from "./utils"

export const WelcomeMessage = ({name}) => {
  return getData(name)
}

// ./demo/utils.js
const getData = (name) => `Hello ${name}!`;

export { getData }

// ./App.js
import { WelcomeMessage } from "./demo/welcomeMessage";

function App() {
  return (
    <div>
      <h1>My app</h1>
      <WelcomeMessage name="world" /> {/* the prop value here is illustrative */}
    </div>
  );
}

export default App;

// ./App.test.js
import { render, screen } from '@testing-library/react';
import App from './App';
import * as utils from "./demo/utils";

jest.spyOn(utils, "getData").mockReturnValue("mocked message"); // this doesn't work as intended

describe('App', () => {
  test('renders header', () => {
    render(<App />);
    expect(screen.getByText(/My app/i)).toBeInTheDocument()
  });

  test('renders correct welcome message', () => {
    render(<App />)
    expect(screen.getByText(/mocked message/i)).toBeInTheDocument()
  });
})
```
Solution #1 using `jest.spyOn` in `beforeEach`
----------------------------------------------
Wrap `jest.spyOn` in `beforeEach` block
```
beforeEach(() => {
jest.spyOn(utils, "getData").mockReturnValue("mocked message");
});
```
Now tests should work correctly. This is similar to this [Stack Overflow post](https://stackoverflow.com/a/69163310/3978701).
Solution #2 using `jest.mock`
-----------------------------
Instead of using `import * as ...` we can mock our module with `jest.mock`. The following test works fine:
```
import { render, screen } from '@testing-library/react';
import App from './App';

jest.mock('./demo/utils', () => ({
  getData: () => 'mocked message'
}));

describe('App', () => {
  test('renders header', () => {
    render(<App />);
    expect(screen.getByText(/My app/i)).toBeInTheDocument()
  });

  test('renders correct welcome message', () => {
    render(<App />)
    expect(screen.getByText(/mocked message/i)).toBeInTheDocument()
  });
})
```
This approach is harder to use if we want to have multiple mocked implementations, e.g. one for testing failing cases and one for normal cases. We would need to use `doMock` and async imports.
Upvotes: 4 |
2018/03/19 | 724 | 2,854 | <issue_start>username_0: I am wondering how does Keras compute a metric (a custom one or not).
For example, suppose I have the following metric which yields the maximal error between the prediction and the ground truth:
```
def max_error(y_true, y_pred):
import keras.backend as K
return K.max(K.abs(y_true-y_pred))
```
Is the output scalar metric computed on all mini-batches and then averaged, or is the metric directly computed on the whole dataset (training or validation)?<issue_comment>username_1: There is a difference between the metric on the training dataset and on the validation dataset. For the val set, the metric is calculated at epoch end over your whole val dataset.
For the train set, the metric is calculated at each batch end and the running average keeps getting updated until the epoch's end.
As you can see, the metric for the train set is evaluated on the fly, and each batch is evaluated using different weights. That's why the train metric sometimes shows strange behaviour.
Upvotes: 2 <issue_comment>username_2: Dennis has already explained this clearly.
One more thing to point out: if you want to compute the metric over the whole training dataset, or your custom metric should be computed in a single pass with no averaging, you can ***use a Keras callback and define `on_epoch_end`; in the `on_epoch_end` method you can compute it on the whole training data.***
like this :
```
def on_epoch_end(self, epoch, logs={}):
    y_pred = self.model.predict(self.X_train, verbose=0)
    score = max_error(self.y_train, y_pred)
    y_val_pred = self.model.predict(self.X_val, verbose=0)
    val_score = max_error(self.y_val, y_val_pred)
    print("\n max_error - epoch: %d - train score: %.6f \n - val score: %.6f" % (epoch+1, score, val_score))
```
And you need to **pass the train data and the val data to model.fit's validation\_data parameter**.
Upvotes: 2 <issue_comment>username_3: **Something additional to know with respect to the metric for the VALIDATION set:**
Contrary to what is suggested in another answer, I just saw that the metric on the **validation set is calculated in batches**, and then averaged (of course the trained model at the end of the epoch is used, in contrast to how the metric score is calculated for the training set).
If you want to compute it on the whole validation data at once, you have to use a callback as described in the ~~accepted~~ answer of guangshengzuo (see <https://keras.io/guides/writing_your_own_callbacks/> for more details).
Sure, for the usual metrics, there will not be any difference whether you calculate first in batches and average, or do it all in one big batch. BUT for custom metrics, there very well can be: I just had a case where the metric would tune a parameter, based on the data.
Edit: added link on callbacks, in response to comment
Upvotes: 4 [selected_answer] |
2018/03/19 | 1,600 | 4,585 | <issue_start>username_0: I have the PHP code below that checks two associative arrays against each other. My problem is that this code performs many loops, and I don't want that because it affects the performance of my application. Is there any way or function in PHP that will automatically detect the duplicates, with the same result as mine?
Please check sample [here](http://sandbox.onlinephpfunctions.com/code/cec307bd4ca79eeb56e57397bbb5fe2d06a842ce).
Here is my code:
```
// first Array
$m1Array = array();
$m1Array[] = array('eId' => '0001', 'numVal' => 1);
$m1Array[] = array('eId' => '0002', 'numVal' => 2);
$m1Array[] = array('eId' => '0003', 'numVal' => 3);
$m1Array[] = array('eId' => '0004', 'numVal' => 4);
$m1Array[] = array('eId' => '0005', 'numVal' => 5);
$m1Array[] = array('eId' => '0006', 'numVal' => 6);
//second Array
$m2Array = array();
$m2Array[] = array('eId' => '0001', 'numVal' => 1);
$m2Array[] = array('eId' => '0004', 'numVal' => 4);
$m2Array[] = array('eId' => '0005', 'numVal' => 5);
$m2Array[] = array('eId' => '0006', 'numVal' => 6);
$m2Array[] = array('eId' => '0007', 'numVal' => 7);
//final result array
$finalResult = array();
//[seond array] will be my master or the bases
//loop thru the [second array(m2Array)]
foreach($m2Array as $m2Arr){
$numValSum = 0;
$dupFound = false;
//get current eId value in [second array]
$eId2 = $m2Arr['eId'];
//loop thru the [first array(m1Array)] to check if eId2 has duplicate
$arrIndex = 0;
foreach($m1Array as $m1Arr){
//get current eId value in [first array]
$eId1 = $m1Arr['eId'];
//check if the value of eId2 is equal to eId 1
if($eId1 == $eId2){
//if equal then
//add their respective numVal value and put it to [final result array]
$numValSum = $m2Arr['numVal'] + $m1Arr['numVal'];
$finalResult[] = array('eId' => $eId2, 'numValSum' => $numValSum);
unset($m1Array[$arrIndex]); //remove the duplicate eId in [first array]
sort($m1Array); //sort the index of [first array]
$dupFound = true;
}
$arrIndex += 1;
}
//if eId2 has no duplicate then just add the numVal to [final result array]
if($dupFound == false){
$finalResult[] = array('eId' => $eId2, 'numValSum' => $m2Arr['numVal']);
}
}
//now check if [second array] still have an element
//if still have then add all to [final result array]
if(count($m1Array)){
foreach($m1Array as $m1Arr){
$finalResult[] = array('eId' => $m1Arr['eId'], 'numValSum' => $m1Arr['numVal']);
}
}
//display my final result array
foreach($finalResult as $fRes){
echo $fRes['eId'].' -> '.$fRes['numValSum'].PHP_EOL;
}
```
Additional Info.:
Apologies, but I think some of you missed the point of my code. I don't want to remove duplicates from my arrays. The code checks for duplicates between the two arrays: if an `eId` has a duplicate in the other array, add their corresponding `numVal` values before putting the result into my `finalResult` array; if it doesn't have a duplicate, put it directly into my `finalResult` array.
I attached a table diagram image of how it works. Hope this helps. Thank you.
[](https://i.stack.imgur.com/J4XUg.png)<issue_comment>username_1: The solution is to use hash tables. Instead of having a list of objects, use `eId` as a key as well; this way you can avoid nested loops and only look up keys, like:
```
$allKeys = array_unique( array_merge( array_keys($m1Array), array_keys($m2Array) ) );
foreach ($allKeys as $k) {
if (isset($m1Array[$k]) and isset($m2Array[$k]) ) {
// duplicate
}
}
```
Upvotes: 0 <issue_comment>username_2: You can reduce the `59` lines of code to `3` by using [`array_udiff()`](http://php.net/array_udiff) to get the difference between two multidimensional arrays. It accepts a function for data comparision:
```
var_dump(array_udiff($m1Array, $m2Array, function($a, $b) {
return $a["eId"] - $b["eId"];
}));
```
There is no such function in php, so you can break it down to pieces:
```
$finalResult = $a1 = array_column($m1Array, "numVal", "eId");
$a2 = array_column($m2Array, "numVal", "eId");
array_walk($finalResult, function(&$item, $key) use ($a2) {
if(isset($a2[$key])) $item += $a2[$key];
});
$finalResult = array_merge($finalResult, array_diff_key($a2, $finalResult));
// Array ( [0001] => 2 [0002] => 2 [0003] => 3 [0004] => 8 [0005] => 10 [0006] => 12 [0007] => 7 )
```
Upvotes: 2 [selected_answer] |
2018/03/19 | 702 | 2,386 | <issue_start>username_0: I have the following service...
```
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
app: mongo
spec:
ports:
- name: mongo
port: 27017
clusterIP: None
selector:
app: mongo
```
And the following stateful set...
```
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: mongo
selector:
matchLabels:
app: mongo
replicas: 3
template:
metadata:
labels:
app: mongo
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
ports:
- name: mongo
containerPort: 27017
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: "VERSION"
value: "2"
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo"
```
However, my code can't make a connection, using the url `mongo:27017`. I've tried connecting to `mongo`, `mongo-0.mongo:27017`, loads of others. If I exec into a container and run `$ nslookup mongo` I get...
```
Name: mongo
Address 1: 10.1.0.80 mongo-0.mongo.default.svc.cluster.local
Address 2: 10.1.0.81 mongo-1.mongo.default.svc.cluster.local
Address 3: 10.1.0.82 mongo-2.mongo.default.svc.cluster.local
```
Hitting `$ curl mongo:27017` or `$ telnet mongo 27017` gives me a connection refused error.<issue_comment>username_1: Add `bind_ip` to command:
```
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
- "--bind_ip"
- "0.0.0.0"
```
This [option](https://docs.mongodb.com/manual/reference/program/mongod/#core-options) tells the monogodb daemon to listen on all IPv4 addresses, rather than the default of `localhost`.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Okay, so I fixed this. It was a case of setting bind\_ip=0.0.0.0, but I also had to change the selector in my service to `role: mongo` instead of `app: mongo`. Then in my deployment I had to include `role: mongo` in my metadata, like...
```
template:
metadata:
labels:
app: mongo
role: mongo
```
Upvotes: 1 |
2018/03/19 | 708 | 2,708 | <issue_start>username_0: I have this program which I am working on. When I started to debug it, it seemed to run perfectly smoothly, but then it just stops and a note pops up:
>
> System.Data.SqlClient.SqlException Incorrect syntax near '`'.'
>
>
>
Here is my code.
```
public void searchData(string valueToSearch)
{
string query = "SELECT * FROM users WHERE CONCAT(`lastname`, `middle`, `firstname`, `username`) like '%" + valueToSearch + "%'";
SqlCommand command = new SqlCommand(query, connection);
SqlDataAdapter sda = new SqlDataAdapter(command);
DataTable dt = new DataTable();
sda.Fill(dt);
dataGridView1.DataSource = dt;
}
```
and the message popped up on `sda.Fill(dt);` and I don't really know what part is the problem since it doesn't show up on the error list.
Please help me.<issue_comment>username_1: The params in the concat function are surrounded by quotes, which doesn't make sense. Those have to be the column names.
Change your query object to:
```
var query = "SELECT * FROM users WHERE CONCAT(lastname, middle, firstname, username) like '%" + valueToSearch + "%'";
```
Upvotes: 2 <issue_comment>username_2: There are two problems in your SQL; the *immediate* one is that you're using the wrong variant of escaping. SQL is "more what you call *guidelines* than actual rules", and different vendors use different rules. You're using *backtick* escaping of the form:
```
`identifier`
```
You also mention `SqlClient` in the question, which tells us that you're using SQL Server. SQL Server uses TSQL syntax, where `[identifier]` is the correct escape syntax - and it is optional, so unless your column/table names are reserved keywords you can just use the name directly:
```
WHERE CONCAT(lastname, middle, firstname, username)
```
---
The second and IMO much more serious problem is: SQL injection. You should **never ever** concatenate user input into a query. That's how a huge number of data breaches and outages happen - it is a **huge** security hole, and the sooner you learn not to do that: the better. Instead, use *parameters*:
```
string query = "... like @searchVal";
SqlCommand command = new SqlCommand(query, connection);
command.Parameters.AddWithValue("@searchVal", "%" + valueToSearch + "%");
// ...
```
This *completely protects you* from SQL injection (unless you've done something silly inside the SQL, such as `EXEC`-ing it), *and* (in the case of numbers / dates / etc) solves a wide range of "culture" issues (such as whether "123,456" is "one hundred and twenty three **thousand** four hundred and fifty six", or whether it is "one hundred and twenty three **point** four five six").
Upvotes: 2 |
2018/03/19 | 521 | 2,023 | <issue_start>username_0: I want to expand the grid view by clicking the plus image on the gridview. But my code below is not working. How can I do this?
Code :
2018/03/19 | 1,058 | 3,855 | <issue_start>username_0: I have a little specific case here and I'm struggling with it.
I'm trying to insert information into the database, but the situation is slightly different from the other cases I've seen.
The things I have to do are:
1. Create a HTML form, and the values should come with `$_POST` request
from it.
2. Create a credentials plus connection and database.
3. Need to assign the variables which will save the values of `$_POST`, they need to be = NULL.
4. I need to put these `$_POST` values into "if"'s body.
5. All of this should be done only if `$_SERVER['REQUEST_METHOD'] == POST`.
This is my HTML Form:
```
<NAME>:
User Phone:
Email Address:
```
These are my credentials and database connection:
```
session_start();
$host = "localhost";
$user_name = "root";
$user_password = "";
$database = "our_new_database";

function db_connect($host, $user_name, $user_password, $database) {
    $connection = mysqli_connect($host, $user_name, $user_password, $database);
    if(mysqli_connect_errno()){
        die("Connection failed: ".mysqli_connect_error());
    }
    mysqli_set_charset($connection, "utf8");
    return $connection;
}
```
This is my database creation:
```
$foo_connection = db_connect($host, $user_name, $user_password, $database);
$sql = "CREATE TABLE user_info(
user_name_one VARCHAR(30) NOT NULL,
user_name_two VARCHAR(30) NOT NULL,
user_email VARCHAR(70) NOT NULL UNIQUE
)";
if(mysqli_query($foo_connection, $sql)){
echo "Table created successfully";
}
else {
echo "Error creating table".mysqli_connect_error($foo_connection);
}
```
And this is where I'm badly stuck. When I try to assign the `$_POST` form values, I get this error:
>
> Notice: Undefined index: userNameOne Notice: Undefined index:
> userNameTwo Notice: Undefined index: userEmail
>
>
>
Also I don't know where to use this `$_SERVER['REQUEST_METHOD'] == POST`.
Can you help me a little bit to finish this "mission"? :)
2018/03/19 | 348 | 789 | <issue_start>username_0: I have an array `a=[2,4,5,'Flag',3,7,'Flag',2,5]`.
Want to break this up into `b[0]=[2,4,5]`,`b[1]=[3,7]`,`b[2]=[2,5]`
How do I do this? Can use numpy.<issue_comment>username_1: You could use [`itertools.groupby()`](https://docs.python.org/3/library/itertools.html#itertools.groupby):
```
>>> from itertools import groupby
>>> a=[2,4,5,'Flag',3,7,'Flag',2,5]
>>> b = [list(g) for k, g in groupby(a, lambda x: x == 'Flag') if not k]
>>> b
[[2, 4, 5], [3, 7], [2, 5]]
```
Upvotes: 2 <issue_comment>username_2: With simple `for` loop:
```
a = [2, 4, 5, 'Flag', 3, 7, 'Flag', 2, 5]
result = [[]]
for i in a:
if i == 'Flag':
result.append([])
else:
result[-1].append(i)
print(result)
```
The output:
```
[[2, 4, 5], [3, 7], [2, 5]]
```
Upvotes: 0 |
2018/03/19 | 927 | 3,035 | <issue_start>username_0: I am looking to import an excel workbook into R with multiple sheets. However, I can't seem to quite make this work. The code I have been using is the following:
```
library(XLConnect)
# Read Excel Sheet
excel <- loadWorkbook("C:/Users/rawlingsd/Downloads/17-18 Prem Stats.xlsx")
# get sheet names
sheet_names <- getSheets(excel)
names(sheet_names) <- sheet_names
# put sheets into a list of data frames
sheet_list <- lapply(sheet_names, function(.sheet){readWorksheet(object=excel, .sheet)})
# limit sheet_list to sheets with at least 1 dimension
# sheet_list2 <- sheet_list[sapply(sheet_list, function(x) dim(x)[1]) > 0]
# code to read in each excel worksheet as individual dataframes
for (i in 1:length(sheet_list)){assign(paste0("2018df", i), as.data.frame(sheet_list[i]))}
# define function to clean data in each data frame (updated based on your data)
```
If anyone could help me with my code or share a code that works for them, it would be greatly appreciated<issue_comment>username_1: What I use:
```
library(readxl)
library(data.table)

full_excel_read <- function(fpath, v = TRUE){
  sheetnames <- excel_sheets(fpath)
  workbook <- sapply(sheetnames, function (x){readxl::read_excel(fpath, sheet = x)})
  for (sh in sheetnames) {
    workbook[[sh]] <- as.data.table(workbook[[sh]])
  }
  if (v){
    lapply(sheetnames, function(x){View(workbook[[x]], x)})
  }
  workbook
}
```
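Usage would then look something like this (the path is the one from the question; `v = FALSE` skips the interactive `View()` calls):

```
wb <- full_excel_read("C:/Users/rawlingsd/Downloads/17-18 Prem Stats.xlsx", v = FALSE)
names(wb)      # the sheet names
head(wb[[1]])  # first sheet as a data.table
```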
Upvotes: 2 <issue_comment>username_2: You can use the `readxl` package. See the following example.
```
library(readxl)
path <- readxl_example("datasets.xls")
sheetnames <- excel_sheets(path)
mylist <- lapply(excel_sheets(path), read_excel, path = path)
# name the dataframes
names(mylist) <- sheetnames
```
The spreadsheet will be captured in a list with the sheetname as the name of the dataframe in the list.
If you want to bring the dataframes out of the list use the next bit of code.
```
# Bring the dataframes to the global environment
list2env(mylist ,.GlobalEnv)
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: Please look into the openxlsx package, which allows you to do loads of stuff with Excel workbooks.
Here is a code script to read all the sheets from a given workbook.
```
library(openxlsx)
a <- loadWorkbook('~/filename.xlsx')
sheetNames <- sheets(a)
for(i in 1:length(sheetNames))
{
assign(sheetNames[i],readWorkbook(a,sheet = i))
}
```
You can verify the data is loaded in R and view it in your workspace.
Thanks.
Upvotes: 3 <issue_comment>username_4: See [Read all worksheets in an Excel workbook into an R list with data.frames](https://stackoverflow.com/questions/12945687/read-all-worksheets-in-an-excel-workbook-into-an-r-list-with-data-frames/12948450#12948450)
```
require(XLConnect)
wb <- loadWorkbook(system.file("demoFiles/mtcars.xlsx", package = "XLConnect"))
lst <- readWorksheet(wb, sheet = getSheets(wb))
```
`lst` is a named list whose names correspond to the sheet names. Note that `readWorksheet` is vectorized and therefore you can read multiple worksheets with a single `readWorksheet` call.
Upvotes: 2 |
2018/03/19 | 1,622 | 5,037 | <issue_start>username_0: Hello everybody, I have a script that loops over an array and puts data in a CSV file; I need to count the rows with the same ID.
This is my script that loops over the array and puts the data in a CSV file for export.
```
public function fputToFile($file, $allexportfields, $object, $ae)
{
if($allexportfields && $file && $object && $ae)
{
//one ready for export product
$readyForExport = array();
//put in correct sort order
foreach ($allexportfields as $value)
{
$object = $this->processDecimalSettings($object, $ae, $value);
$readyForExport[$value] = iconv("UTF-8", $ae->charset, $object[$value]);
}
//write into csv line by line
fputcsv($file, $readyForExport, $ae->delimiter, $ae->separator);
}
}
```
I've tried to use:
```
$numero_riga = array_count_values($readyForExport);
$readyForExport['numero_riga'] = $numero_riga;
```
but it does not print any correct value in the CSV file, maybe because it is a multidimensional array. You can see the CSV export in the text and screenshot below:
```
ID row_number
```
```
198 Array
199 Array
200 Array
200 Array
201 Array
201 Array
201 Array
201 Array
202 Array
202 Array
203 Array
203 Array
203 Array
204 Array
204 Array
204 Array
204 Array
204 Array
205 Array
205 Array
205 Array
206 Array
207 Array
207 Array
208 Array
209 Array
```
[csv export](https://i.stack.imgur.com/ejQbe.png)
The result has to be like this; in the text and screenshot below you can see a column that counts the rows with the same ID.
```
ID row_number
```
```
176 1
177 1
177 2
178 1
178 2
179 1
179 2
180 1
181 1
181 2
182 1
182 2
183 1
184 1
184 2
185 1
185 2
186 1
186 2
186 3
```
[correct result](https://i.stack.imgur.com/KhCFV.png)
Thanks in advance.
EDIT
Edited with suggestions from scaisEdge, but now the CSV export acts in a strange way. I paste a screenshot here: [csv strange behaviour](https://i.stack.imgur.com/11uTf.png)
EDIT
Now I'm using this code with the help of scaisEdge; I think we are close to the solution.
```
$cnt_row = 0;
$match_id = -1;
//put in correct sort order
foreach ($allexportfields as $value)
{
if ( $value['id_order'] == $match_id){
$cnt_row++;
} else {
$cnt_row =1;
$match_id = $value['id_order'];
}
//$value['num_row'] = $cnt_row;
print_r($cnt_row);
$object = $this->processDecimalSettings($object, $ae, $value);
$readyForExport[$value] = iconv("UTF-8", $ae->charset, $object[$value]);
}
$readyForExport['num_row'] = $cnt_row;
```
I paste the screenshot of the actual result here: [partially correct result](https://i.stack.imgur.com/2fEWn.png). You can see that now it prints some values in the correct column, but it prints "4" all the time...
2018/03/19 | 845 | 2,736 | <issue_start>username_0: I have an input time in UTC and want to convert it to the device's local time. I want to use the TryParse method to avoid exceptions.
The input time is 7 PM. Expected `out DateTime date` parameter is 20:00. Instead it becomes 21:00.
My time zone is GMT+1 Central European Time. It is not currently DST, but DST starts later this March.
```
var value = "4/2/2018 7:00:00 PM"; // UTC time
if (DateTime.TryParse(value.ToString(), CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal, out DateTime date))
{
var isDST = date.IsDaylightSavingTime(); // true
var kind = date.Kind; // DateTimeKind.Local
var time = date.ToShortTimeString(); // 21:00
}
```
2018/03/19 | 566 | 2,161 | <issue_start>username_0: I get a crash with GCD `dispatch_once` at this point:
```
+ (instancetype)sharedInstance {
static TabBarViewController *_sharedInstance = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
_sharedInstance = [[TabBarViewController alloc] init];
});
return _sharedInstance;
}
_dispatch_once(dispatch_once_t *predicate,
DISPATCH_NOESCAPE dispatch_block_t block){
if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {
dispatch_once(predicate, block); <-Crash:Thread 1: signal SIGABRT
} else {
dispatch_compiler_barrier();
}
DISPATCH_COMPILER_CAN_ASSUME(*predicate == ~0l);
}
```
Strangely, the code sometimes crashes, and after a few tries it works fine.
I have no idea why...<issue_comment>username_1: I had the same issue; wrap `dispatch_once {}` in `dispatch_async(dispatch_get_main_queue(), ...)` so it will work
Upvotes: 2 <issue_comment>username_2: I had the same problem. The reason was calling [... sharedInstance] twice at almost the same time, so it enters the dispatch_once block a second time before the first one has finished.
Upvotes: 0 <issue_comment>username_3: It could be a problem with TabBarViewController. Its singleton method may be called again in the init method, or it is held by another singleton whose `sharedInstance` is called right in the init method.
```
// implementation ClassA
+ (instancetype)sharedInstance {
static ClassA *_sharedInstance = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
_sharedInstance = [[ClassA alloc] init];
});
return _sharedInstance;
}
- (instancetype)init
{
if (self = [super init]) {
[ClassB sharedInstance];
}
return self;
}
// implementation ClassB
+ (instancetype)sharedInstance {
static ClassB *_sharedInstance = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
_sharedInstance = [[ClassB alloc] init];
});
return _sharedInstance;
}
- (instancetype)init
{
if (self = [super init]) {
[ClassA sharedInstance]; // deadlock: re-enters ClassA's dispatch_once, which will crash
}
return self;
}
```
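One possible way out (a sketch, not necessarily the only fix) is to defer the cross-singleton call so it runs after `init` has returned, instead of inside the other singleton's `dispatch_once` block:

```
// Sketch: break the cycle by deferring the cross-reference.
- (instancetype)init
{
    if (self = [super init]) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [ClassA sharedInstance]; // now runs after both inits have returned
        });
    }
    return self;
}
```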
Upvotes: 0 |
2018/03/19 | 569 | 1,743 | <issue_start>username_0: I have an `If Not` statement with two `Or`s, but the code still runs like it is a regular `If` statement. `MonCol` is an integer variable that equals 13, so the If statement should skip to `End If`, but it does not. This code should delete columns only when `MonCol` does not equal 12, 13 or 14.
```
With NewPayWS
If Not MonCol = 12 Or Not MonCol = 13 Or Not MonCol = 14 Then
.Range(.Cells(1, lastcol - 1), .Cells(1, MonCol + 1)).EntireColumn.Delete
.Range(.Cells(1, DataType + 1), .Cells(1, MonCol - 4)).EntireColumn.Delete
End If
End With
```<issue_comment>username_1: Try `Select Case` instead; when you have multiple scenarios of `If` and `Else`, it is much easier to use and read.
```
Select Case MonCol
Case 12, 13, 14
' do nothing
Case Else
.Range(.Cells(1, lastcol - 1), .Cells(1, MonCol + 1)).EntireColumn.Delete
.Range(.Cells(1, DataType + 1), .Cells(1, MonCol - 4)).EntireColumn.Delete
End Select
```
**Edit 1**: Following @Rory comments, you can also use `Case 12 To 14`, this may come handy especially for ranges with a lot of values, then you can use `Case 12 To 30`, etc.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Your current If statement will always result in True: `MonCol` can equal at most one of 12, 13 and 14 at a time, so at least two of the three `Not` conditions are always True, and `Or` needs only one of them.
You can do:
```
With NewPayWS
If Not (MonCol = 12 Or MonCol = 13 Or MonCol = 14) Then
.Range(.Cells(1, lastcol - 1), .Cells(1, MonCol + 1)).EntireColumn.Delete
.Range(.Cells(1, DataType + 1), .Cells(1, MonCol - 4)).EntireColumn.Delete
End If
End With
```
Upvotes: 3 <issue_comment>username_3: There are multiple ways to handle it. Here's one more
```
If MonCol >= 12 And MonCol <= 14 Then
    ' Do nothing
Else
    ' delete code
End If
```
Upvotes: 2 |
2018/03/19 | 564 | 2,042 | <issue_start>username_0: I have my sidebar menus located in the `main.scala.html` as a default and put all the other contents in separate `*.scala.html` pages.
I realized that I need to put the `user id` on the sidebar menu, and it has to appear all the time. Is it possible to pass other parameters to the `main.scala.html` page while calling different pages? If so, how?<issue_comment>username_1: Yes, just put it, like
```
@(title: String, id: String)(content: Html)
...
```
Then you can call it like
```
@main(title = "Home", id=myId) {
...
```
Here you can read more about how to put parameters in a template:
<https://www.playframework.com/documentation/2.6.x/ScalaTemplates#template-parameters>
You can use a dependency injection in the case of parameters that often used:
<https://www.playframework.com/documentation/2.6.x/ScalaTemplatesDependencyInjection>
Upvotes: 2 <issue_comment>username_2: OK, first let's discuss this: why do we want to pass parameters between views?
The reason is that you can factor out components and reuse them throughout your views. For example, you can
imagine that you have a web app that has a topbar, body, and footer. You can simply factor out the topbar and footer and reuse
them throughout your view files.
Second, you can pass almost any type (String, Int, List[String], etc.) to the views, either from the controller to a view, or from view to view.
Let's take a look at this (`userPage.scala.html`):
```
@(name: String, lastname: String, location: String)
```
Then you can call the userPage as following:
```
@userPage("Jessica", "Jones", "<NAME>")
```
You are calling the `userPage` and passing the arguments.
Sometimes you want to send a form (e.g., signin/feedback/signup form) to the views; that is not a problem either. You can do the following (signin.scala.html):
```
@(singin: Form[Signin])
```
where you create the form and its related case class (`Signin` in our example) and pass it from the controller to the views.
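A minimal controller-side sketch of that (field names and the view name are illustrative, not from the question):

```
import play.api.data._
import play.api.data.Forms._

case class Signin(usr: String, pwd: String)

val signinForm: Form[Signin] = Form(
  mapping(
    "usr" -> nonEmptyText,
    "pwd" -> nonEmptyText
  )(Signin.apply)(Signin.unapply)
)

// inside a controller:
def signin = Action { implicit request =>
  Ok(views.html.signin(signinForm)) // passes the form to signin.scala.html
}
```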
Upvotes: 2 [selected_answer] |
2018/03/19 | 612 | 2,298 | <issue_start>username_0: I used Python to get a JSON response from a website; the JSON file is as follows:
```
{
"term":"albany",
"moresuggestions":490,
"autoSuggestInstance":null,
"suggestions":[
{
"group":"CITY_GROUP",
"entities":[
{
"geoId":"1000000000000000355",
"destinationId":"1508137",
"landmarkCityDestinationId":null,
"type":"CITY",
"caption":"Albany, Albany County, United States of America",
"redirectPage":"DEFAULT_PAGE",
"latitude":42.650249,
"longitude":-73.753578,
"name":"Albany"
},
{},
{},
{},
{},
{}
]
},
{},
{},
{}
]
}
```
I used the following script to display the values according to a key:
```
import json
a =['']
data = json.loads(a)
print data["suggestions"]
```
This displays everything under 'suggestions' from the JSON file. However, if I want to go one or two more levels down, it throws an error. For example, I wanted to display the value of "caption"; I searched for a solution but could not find what I need. I even tried calling:
```
print data["suggestions"]["entities"]
```
But the above syntax throws an error. What am I missing here?<issue_comment>username_1: `data["suggestions"]` is a list of dictionaries. You either need to provide an index (i.e. `data["suggestions"][0]["entities"]`) or use a loop:
```
for suggestion in data["suggestions"]:
print suggestion["entities"]
```
Keep in mind that `"entities"` is also a list, so the same will apply:
```
for suggestion in data["suggestions"]:
for entity in suggestion["entities"]:
print entity["caption"]
```
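One caveat: the truncated sample above contains empty `{}` entities, which would raise a `KeyError` in the loop. A hedged variant using `.get()` skips them:

```
for suggestion in data["suggestions"]:
    for entity in suggestion["entities"]:
        caption = entity.get("caption")
        if caption:
            print caption # Albany, Albany County, United States of America
```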
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you look at the data, "suggestions" is an array, so you should read it like below:
`print data["suggestions"][0]["entities"]`
Upvotes: 1 <issue_comment>username_3: "Suggestion" key holds a list of dicts.
You can access it like this, provided the positions of the dictionaries remain intact.
`data["suggestions"][0]["entities"][0]["caption"]`
Upvotes: 0 <issue_comment>username_4: ```
print data["suggestions"][0]["entities"][0]["caption"]
```
Upvotes: 0 |
2018/03/19 | 1,464 | 5,428 | <issue_start>username_0: Trying to do a redirect depending on user status in my app (logged in or not), but it won't work as I want because I am not sure how to get the BuildContext inside the method.
```
import 'dart:async';
import 'package:flutter/material.dart';
import 'package:shared_preferences/shared_preferences.dart';
import 'package:t2/helpers/currentuser.dart';
import 'screens/dashboard.dart';
import 'screens/login.dart';
void main() => runApp(new MyApp());
CurrentUser user = new CurrentUser();
Future checkActiveUser() async {
SharedPreferences prefs = await SharedPreferences.getInstance();
user.usr = prefs.get('usr');
user.pwd = <PASSWORD>('pwd');
if (user.usr.length == 0 && user.pwd.length == 0) {
user.isLoggedIn = false;
Navigator.of(x).pushNamedAndRemoveUntil('/dashboard', (Route route) => false);
} else {
// Send to login screen
user.isLoggedIn = false;
Navigator.of(x).pushNamedAndRemoveUntil('/login', (Route route) => false);
}
return user.isLoggedIn;
/*
// How to read/write to local storage
int counter = (prefs.getInt('counter') ?? 0) + 1;
print('Pressed $counter times.');
prefs.setInt('counter', counter);
*/
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return new MaterialApp(
title: 'Flutter Demo',
theme: new ThemeData(
// This is the theme of your application.
//
// Try running your application with "flutter run". You'll see the
// application has a blue toolbar. Then, without quitting the app, try
// changing the primarySwatch below to Colors.green and then invoke
// "hot reload" (press "r" in the console where you ran "flutter run",
// or press Run > Flutter Hot Reload in IntelliJ). Notice that the
// counter didn't reset back to zero; the application is not restarted.
primarySwatch: Colors.blue,
),
home: new MyHomePage(),
routes: {
'/dashboard': (BuildContext context) => new Dashboard(),
'/login': (BuildContext context) => new Login()
});
}
}
class MyHomePage extends StatelessWidget {
var isLoggedIn = checkActiveUser();
@override
Widget build(BuildContext context) {
return new Scaffold(
appBar: new AppBar(
title: new Text('Demo Building'),
),
body: new Container(
child: new Center(
child: new Column(
children: [new Text('DASHBOARD')],
),
)));
}
}
```
If you have suggestions for a different approach, I'm all ears! I basically want to run this check on app load and redirect accordingly.
Regards, Bob
UPDATED CODE: Tried the suggestion from Hadrien, and got a step closer. It now runs and I get context access, but I get the following error:
'Navigator operation requested with a context that does not include a Navigator. The context used to push or pop routes from the Navigator must be that of a widget that is a descendant of a Navigator widget.'
This is the updated code:
```
import 'package:flutter/material.dart';
import 'package:shared_preferences/shared_preferences.dart';
import 'package:t2/helpers/currentuser.dart';
import 'screens/dashboard.dart';
import 'screens/login.dart';
void main() => runApp(new MyApp());
CurrentUser user = new CurrentUser();
checkActiveUser(BuildContext context) async {
SharedPreferences prefs = await SharedPreferences.getInstance();
try {
user.usr = prefs.get('usr');
user.pwd = prefs.get('pwd');
if (user.usr.length == 0 && user.usr.length == 0) {
user.isLoggedIn = false;
Navigator
.of(context)
.pushNamedAndRemoveUntil('/dashboard', (Route route) => false);
} else {
throw new Exception('No user data found');
}
} catch (e) {
// Send to login screen
user.isLoggedIn = false;
Navigator
.of(context)
.pushNamedAndRemoveUntil('/login', (Route route) => false);
}
/*
// How to read/write to local storage
int counter = (prefs.getInt('counter') ?? 0) + 1;
print('Pressed $counter times.');
prefs.setInt('counter', counter);
*/
}
class MyApp extends StatefulWidget {
@override
_MyAppState createState() => new _MyAppState();
}
class _MyAppState extends State<MyApp> {
void initState() {
super.initState();
checkActiveUser(context);
}
@override
Widget build(BuildContext context) {
return new MaterialApp(
title: 'Welcome to Flutter',
home: new Scaffold(
appBar: new AppBar(
title: new Text('CADSYS'),
),
body: new Center(
child: new Text('Loading...'),
),
),
routes: {
'/dashboard': (BuildContext context) => new Dashboard(),
'/login': (BuildContext context) => new Login()
},
);
}
}
```<issue_comment>username_1: The `Navigator` come with the `MaterialApp` widget, so you can only access it from the routes you defined in it and from their child. (`Login`, `Dashboard`, `MyHomePage`).
if you transform your `MyHomePage` widget into a stateful widget. you will be able to call your `checkActiveUser()` function inside `initState`
```
@override
void initState() {
  // note: initState must return void and cannot be marked async
  super.initState();
  checkActiveUser(context);
}
```
Upvotes: 0 <issue_comment>username_2: I would probably do it a little differently... instead of pushing a route inside a function, set the login state inside your `StatefulWidget` and then set the `body` based on that.
`body: user.isLoggedIn ? new Dashboard() : new Login(),`
then elsewhere in your code you'll need to check the active user and do `setState((){ user.isLoggedIn = true; });` (or false).
When the login state changes, your view will automatically update with the new Widget.
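A minimal sketch of that idea, reusing `user` and the widgets from the question (hedged: details like the stored preference key are assumptions). It also sidesteps the "context that does not include a Navigator" error from the update, because nothing is pushed above the `MaterialApp`:

```
class _MyAppState extends State<MyApp> {
  @override
  void initState() {
    super.initState();
    _check();
  }

  Future<void> _check() async {
    SharedPreferences prefs = await SharedPreferences.getInstance();
    setState(() {
      user.isLoggedIn = (prefs.getString('usr')?.isNotEmpty ?? false);
    });
  }

  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      home: user.isLoggedIn ? new Dashboard() : new Login(),
    );
  }
}
```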
Upvotes: 4 [selected_answer] |
2018/03/19 | 408 | 1,393 | <issue_start>username_0: I have read the forums to find a solution for my issue but I am stuck with a MySQL error when I use the query.
I want to extract part of a field, everything that is after `\\nt4\applications\prod\hde\atn\` in the FILE_NAME column
Here is the query:
```
SELECT FILE_NAME,
REPLACE (FILE_NAME,'\\nt4\applications\prod\hde\atn\','') as newfilename
from atn_documents
```
It always returns me a
>
> syntax error near ''\
>
>
>
It looks like the search string cannot contain the \ character??
Can anyone guide me?
Thanks
Cedric<issue_comment>username_1: You have to escape the "\" character in the query. You can add additional "\" to escape it.
e.g.
```
SELECT FILE_NAME, REPLACE (FILE_NAME,'\\nt4\\applications\\prod\\hde\\atn\\','') as newfilename from atn_documents
```
Upvotes: 0 <issue_comment>username_2: Use `SUBSTRING_INDEX`:
```
SELECT
SUBSTRING_INDEX(FILE_NAME,
'\\nt4\\applications\\prod\\hde\\atn\\',
-1) AS path
FROM yourTable;
```
[Demo
----](http://rextester.com/LOV64823)
The above query is a verbatim implementation of your requirement, since it returns only what is after the path of interest. Also note that the immediate reason why your query does not even run is that you need to escape backslashes by doubling them up `\\` if you want them as literals.
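If you prefer to keep your original `REPLACE` approach, the same doubling rule applies to every backslash. A sketch, assuming the stored value really begins with two literal backslashes:

```
SELECT FILE_NAME,
       REPLACE(FILE_NAME, '\\\\nt4\\applications\\prod\\hde\\atn\\', '') AS newfilename
FROM atn_documents;
```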
Upvotes: 3 [selected_answer] |
2018/03/19 | 2,019 | 7,658 | <issue_start>username_0: I have a PowerShell script to back up a database. But today it stopped working with the following error:
```
Backup-SqlDatabase : The term 'Backup-SqlDatabase' is not recognized as
the name of a cmdlet, function, script file,
or operable program. Check the spelling of the name, or if a path was
included, verify that the path is correct and
try again.
```
I didn't change the script. What could be the reason for that?
UPDATE:
Installed SqlServer module. Now I have next:
```
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
PS C:\Temp> import-module sqlserver -erroraction stop -verbose
VERBOSE: Loading module from path 'C:\Program Files\WindowsPowerShell\Modules\sqlserver\21.0.17224\sqlserver.psd1'.
VERBOSE: Loading 'TypesToProcess' from path 'C:\Program
Files\WindowsPowerShell\Modules\sqlserver\21.0.17224\sqlprovider.types.ps1xml'.
VERBOSE: Loading 'FormatsToProcess' from path 'C:\Program
Files\WindowsPowerShell\Modules\sqlserver\21.0.17224\sqlprovider.format.ps1xml'.
VERBOSE: Populating RepositorySourceLocation property for module sqlserver.
VERBOSE: Loading module from path 'C:\Program Files\WindowsPowerShell\Modules\sqlserver\21.0.17224\SqlServer.psm1'.
VERBOSE: Exporting function 'SQLSERVER:'.
VERBOSE: Exporting alias 'Encode-SqlName'.
VERBOSE: Exporting alias 'Decode-SqlName'.
VERBOSE: Importing function 'SQLSERVER:'.
VERBOSE: Importing alias 'Decode-SqlName'.
VERBOSE: Importing alias 'Encode-SqlName'.
PS C:\Temp> Get-Command -Name Backup-SqlDatabase
Get-Command : The term 'Backup-SqlDatabase' is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again.
At line:1 char:1
+ Get-Command -Name Backup-SqlDatabase
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Backup-SqlDatabase:String) [Get-Command], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException,Microsoft.PowerShell.Commands.GetCommandCommand
```
UPDATE 2: Uninstalled the `SqlServer` module, its directory was removed. After that I reinstalled it. Installation log:
```
PS C:\WINDOWS\system32> Install-Module -Name SqlServer -Repository PSGallery -Verbose
VERBOSE: Repository details, Name = 'PSGallery', Location = 'https://www.powershellgallery.com/api/v2/'; IsTrusted
= 'False'; IsRegistered = 'True'.
VERBOSE: Using the provider 'PowerShellGet' for searching packages.
VERBOSE: Using the specified source names : 'PSGallery'.
VERBOSE: Getting the provider object for the PackageManagement Provider 'NuGet'.
VERBOSE: The specified Location is 'https://www.powershellgallery.com/api/v2/' and PackageManagementProvider is
'NuGet'.
VERBOSE: Searching repository 'https://www.powershellgallery.com/api/v2/FindPackagesById()?id='SqlServer'' for ''.
VERBOSE: Total package yield:'1' for the specified package 'SqlServer'.
VERBOSE: Performing the operation "Install-Module" on target "Version '21.0.17224' of module 'SqlServer'".
Untrusted repository
You are installing the modules from an untrusted repository. If you trust this repository, change its
InstallationPolicy value by running the Set-PSRepository cmdlet. Are you sure you want to install the modules from
'PSGallery'?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "N"): y
VERBOSE: The installation scope is specified to be 'AllUsers'.
VERBOSE: The specified module will be installed in 'C:\Program Files\WindowsPowerShell\Modules'.
VERBOSE: The specified Location is 'NuGet' and PackageManagementProvider is 'NuGet'.
VERBOSE: Downloading module 'SqlServer' with version '21.0.17224' from the repository
'https://www.powershellgallery.com/api/v2/'.
VERBOSE: Searching repository 'https://www.powershellgallery.com/api/v2/FindPackagesById()?id='SqlServer'' for ''.
VERBOSE: InstallPackage' - name='SqlServer',
version='21.0.17224',destination='C:\Users\Oleg\AppData\Local\Temp\1981035148'
VERBOSE: DownloadPackage' - name='SqlServer',
version='21.0.17224',destination='C:\Users\Oleg\AppData\Local\Temp\1981035148\SqlServer\SqlServer.nupkg',
uri='https://www.powershellgallery.com/api/v2/package/SqlServer/21.0.17224'
VERBOSE: Downloading 'https://www.powershellgallery.com/api/v2/package/SqlServer/21.0.17224'.
VERBOSE: Completed downloading 'https://www.powershellgallery.com/api/v2/package/SqlServer/21.0.17224'.
VERBOSE: Completed downloading 'SqlServer'.
VERBOSE: Hash for package 'SqlServer' does not match hash provided from the server.
VERBOSE: InstallPackageLocal' - name='SqlServer',
version='21.0.17224',destination='C:\Users\Oleg\AppData\Local\Temp\1981035148'
VERBOSE: Catalog file 'SqlServer.cat' is not found in the contents of the module 'SqlServer' being installed.
VERBOSE: Valid authenticode signature found in the file 'SqlServer.psd1' for the module 'SqlServer'.
VERBOSE: Module 'SqlServer' was installed successfully to path 'C:\Program
Files\WindowsPowerShell\Modules\SqlServer\21.0.17224'.
```
Even after that, `Import-Module SqlServer -ErrorAction Stop -Verbose` hasn't changed and `Backup-SqlDatabase` is still not available. What could be the reason?<issue_comment>username_1: You need to import the SQL Server PowerShell module to be able to access the cmdlets it contains:
```
Import-Module SQLPS -ErrorAction Stop
```
Run this code to see if the function is available to you or not:
```
Get-Command -Name Backup-SqlDatabase
```
Here are the results from my machine:
```
CommandType Name Version Source
----------- ---- ------- ------
Cmdlet Backup-SqlDatabase 14.0 SQLPS
```
Upvotes: 0 <issue_comment>username_2: That function is provided by the `sqlps` (old & busted) and `sqlserver` (current) modules. `sqlps` and older versions of `sqlserver` were provided by the SQL Server Management Studio installation, but `sqlserver` is now in the PowerShell Gallery. Assuming you have a current version of PowerShell/Windows Management Framework, you can `install-module sqlserver` (run in an Administrator PowerShell session) and get the latest version installed globally.
As to what happened to your script:
Possibility #1: You're using a very old version of PowerShell which doesn't auto-load modules and you're not explicitly importing the `sqlserver` or `sqlps` module into the session/script where you're calling this function. Solution: upgrade to a current release of PowerShell which does support auto-loading modules and/or explicitly import the proper module into your script/session with `import-module`.
Possibility #2: Someone uninstalled or moved the module that you're getting that function from, or it's not in your module search path. Solution: Check `$env:PSModulePath` and then look in each location for the module. Easiest would be to reinstall it in a global scope.
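A quick diagnostic sketch to check both possibilities (all standard cmdlets):

```
# Where does PowerShell look for modules?
$env:PSModulePath -split ';'
# Is the module installed in any of those locations?
Get-Module -ListAvailable SqlServer, SQLPS
# Import it explicitly and confirm the cmdlet is exported
Import-Module SqlServer -Verbose
Get-Command -Name Backup-SqlDatabase -Module SqlServer
```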
Upvotes: 3 <issue_comment>username_3: I too have the same issue but with `Delete-SqlDatabase` from SQLPS.
In my case I was trying to call a function `Delete-SqlDatabase` which I had declared and was consuming in my code.
The mistake I made was to call the function before its declaration further down.
You see, in PowerShell, for a function to be visible it has to be declared before it is called. The main function should be the last section, hierarchy-wise.
It's such a silly thing. I am sure you figured this out within a day.
I am 100% certain this is your issue. I know this thread is too old, but it might help others like me who can potentially save an hour's time.
Upvotes: 0 |
2018/03/19 | 881 | 3,640 | <issue_start>username_0: I would like to run Python scripts in various stages of Jenkins (pipeline) jobs, across a wide range of agents. I want the same Python environment for all of these, so I'm considering using Docker for this purpose.
I'm considering using Docker to build an image that contains the Python environment (with installed packages, etc.) and that then runs an external Python script given as an argument:
`docker run my_image my_python_file.py`
My question is now, how should the infrastructure be? I see that the Python docker distribution is 688MB, and transferring this image to all steps would surely be an overhead? However, they *are* all on the same network, so perhaps it wouldn't be a big issue.
**Updates.** So, my Dockerfile looks like this:
```
FROM python:3.6-slim-jessie
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python3"]
```
Then I build the image using
`>docker build ./ -t my-app`
which successfully builds the image and installs my requirements. Then I want to start the image as a daemon using
`> docker run -dit my-app`
Then I execute the process using
`> docker exec -d {DAEMON_ID} my-script.py`
2018/03/19 | 135 | 575 | <issue_start>username_0: How can I retrieve a paginated result when calling a stored procedure with Spring Data JPA?<issue_comment>username_1: There is no explicit support for this.
Therefore all you can really do is create a wrapper method, that takes pageable, uses it to pass the required parameters to the stored procedure, takes the result and wraps it in a page object.
Upvotes: 2 <issue_comment>username_2: You can retrieve the List and then wrap it in a Page. Note that `PageImpl` has no (content, pageable) constructor, so a total has to be passed as well (`MyEntity` stands for your result type):
```
List<MyEntity> content = storedProcedureQuery.getResultList();
return new PageImpl<>(content, PageRequest.of(page, size), content.size());
```
Upvotes: 1 |
2018/03/19 | 257 | 813 | <issue_start>username_0: I have a huge file structured:
```
>ABC_123|XX|YY|ID
CNHGYDGHA
>BBC_153|XX|YY|ID
ACGFDRER
```
I need to split this file based on the first value on each line:
```
File1: ABC_123 -> should contain
>ABC_123|XX|YY|ID
CNHGYDGHA
File2: BBC_153 -> should contain
>BBC_153|XX|YY|ID
ACGFDRER
```
2018/03/19 | 6,588 | 18,415 | <issue_start>username_0: So I'm working on an **OpenGL** project from learnopengl and I am a beginner in C++, so I have a little problem with it. It is a VS2017 project.
I have a problem with `main.cpp`; when I compile it, it shows this error:
>
> name followed by '::' must be a class or namespace
>
>
> * it is in (FileSystem::getPath), so when I include filesystem.h in main.cpp it shows another error, but in filesystem.h: cannot open
> source file "root_directory.h"
>
>
>
So I downloaded `root_directory.h` from <https://github.com/alifradityar/LastOrder>, same for `entry.h`. Now I have 10 warnings and 3 errors :-) that is just what happens when one tries to repair one error.
`logl_root` undeclared identifier in `filesystem.h` line 23; next, 'getenv': This function or variable may be unsafe. Consider using _dupenv_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. Any help is welcome.
I know I am only a beginner, but how am I supposed to learn without trying to deal with problems? And I know how stupid this question is :D...
Here is the full project in 7z:
<https://drive.google.com/open?id=1vNTkh9HEcMKvM8Yzm0iCTJtx1d2xqvlR>
filesystem.h, root_directory.h and entry.h are in /includes/learnopengl
Libs and includes are linked in VS.
>
> line: 24 - logl_root undefined; 23 - 'getenv': This function or
> variable may be unsafe. Consider using _dupenv_s instead. To disable
> deprecation, use _CRT_SECURE_NO_WARNINGS. Every help is welcome.
>
>
>
```
**filesystem.h**
#ifndef FILESYSTEM_H
#define FILESYSTEM_H
#include <string>
#include <cstdlib>
#include "root_directory.h" // This is a configuration file generated by CMake.
class FileSystem
{
private:
typedef std::string (*Builder) (const std::string& path);
public:
static std::string getPath(const std::string& path)
{
static std::string(*pathBuilder)(std::string const &) = getPathBuilder();
return (*pathBuilder)(path);
}
private:
static std::string const & getRoot()
{
static char const * envRoot = getenv("LOGL_ROOT_PATH");
static char const * givenRoot = (envRoot != nullptr ? envRoot : logl_root);
static std::string root = (givenRoot != nullptr ? givenRoot : "");
return root;
}
//static std::string(*foo (std::string const &)) getPathBuilder()
static Builder getPathBuilder()
{
if (getRoot() != "")
return &FileSystem::getPathRelativeRoot;
else
return &FileSystem::getPathRelativeBinary;
}
static std::string getPathRelativeRoot(const std::string& path)
{
return getRoot() + std::string("/") + path;
}
static std::string getPathRelativeBinary(const std::string& path)
{
return "../../../" + path;
}
};
// FILESYSTEM_H
#endif
**root_directory.h**
#ifndef __ROOT
#define __ROOT
#include "entry.h"
#include
#include
#include
#include
using namespace std;
class RootDirectory {
public:
vector data;
RootDirectory();
string toString();
void load(string);
};
#endif
**main.cpp**
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <stb_image.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
#include <learnopengl/filesystem.h>
#include <learnopengl/shader_m.h>
#include <learnopengl/camera.h>
#include <iostream>
void framebuffer_size_callback(GLFWwindow* window, int width, int height);
void mouse_callback(GLFWwindow* window, double xpos, double ypos);
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset);
void processInput(GLFWwindow *window);
unsigned int loadTexture(const char *path);
// settings
const unsigned int SCR_WIDTH = 800;
const unsigned int SCR_HEIGHT = 600;
// camera
Camera camera(glm::vec3(0.0f, 0.0f, 3.0f));
float lastX = SCR_WIDTH / 2.0f;
float lastY = SCR_HEIGHT / 2.0f;
bool firstMouse = true;
// timing
float deltaTime = 0.0f;
float lastFrame = 0.0f;
int main()
{
// glfw: initialize and configure
// ------------------------------
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
#ifdef __APPLE__
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
#endif
// glfw window creation
// --------------------
GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", NULL, NULL);
if (window == NULL)
{
std::cout << "Failed to create GLFW window" << std::endl;
glfwTerminate();
return -1;
}
glfwMakeContextCurrent(window);
glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
glfwSetCursorPosCallback(window, mouse_callback);
glfwSetScrollCallback(window, scroll_callback);
// tell GLFW to capture our mouse
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
// glad: load all OpenGL function pointers
// ---------------------------------------
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
{
std::cout << "Failed to initialize GLAD" << std::endl;
return -1;
}
// configure global opengl state
// -----------------------------
glEnable(GL_DEPTH_TEST);
// build and compile our shader zprogram
// ------------------------------------
Shader lightingShader("5.4.light_casters.vs", "5.4.light_casters.fs");
Shader lampShader("5.4.lamp.vs", "5.4.lamp.fs");
// set up vertex data (and buffer(s)) and configure vertex attributes
// ------------------------------------------------------------------
float vertices[] = {
// positions // normals // texture coords
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f,
0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f,
0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 1.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f,
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f,
0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f,
-0.5f, 0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
-0.5f, 0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 1.0f,
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
-0.5f, 0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f,
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f,
0.5f, -0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f,
0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 1.0f,
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f,
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f,
-0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f,
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f,
0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f,
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f, 0.0f,
-0.5f, 0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 1.0f
};
// positions all containers
glm::vec3 cubePositions[] = {
glm::vec3(0.0f, 0.0f, 0.0f),
glm::vec3(2.0f, 5.0f, -15.0f),
glm::vec3(-1.5f, -2.2f, -2.5f),
glm::vec3(-3.8f, -2.0f, -12.3f),
glm::vec3(2.4f, -0.4f, -3.5f),
glm::vec3(-1.7f, 3.0f, -7.5f),
glm::vec3(1.3f, -2.0f, -2.5f),
glm::vec3(1.5f, 2.0f, -2.5f),
glm::vec3(1.5f, 0.2f, -1.5f),
glm::vec3(-1.3f, 1.0f, -1.5f)
};
// first, configure the cube's VAO (and VBO)
unsigned int VBO, cubeVAO;
glGenVertexArrays(1, &cubeVAO);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindVertexArray(cubeVAO);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)(6 * sizeof(float)));
glEnableVertexAttribArray(2);
// second, configure the light's VAO (VBO stays the same; the vertices are the same for the light object which is also a 3D cube)
unsigned int lightVAO;
glGenVertexArrays(1, &lightVAO);
glBindVertexArray(lightVAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
// note that we update the lamp's position attribute's stride to reflect the updated buffer data
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// load textures (we now use a utility function to keep the code more organized)
// -----------------------------------------------------------------------------
unsigned int diffuseMap = loadTexture(FileSystem::getPath("resources/textures/container2.png").c_str());
unsigned int specularMap = loadTexture(FileSystem::getPath("resources/textures/container2_specular.png").c_str());
// shader configuration
// --------------------
lightingShader.use();
lightingShader.setInt("material.diffuse", 0);
lightingShader.setInt("material.specular", 1);
// render loop
// -----------
while (!glfwWindowShouldClose(window))
{
// per-frame time logic
// --------------------
float currentFrame = glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
// input
// -----
processInput(window);
// render
// ------
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// be sure to activate shader when setting uniforms/drawing objects
lightingShader.use();
lightingShader.setVec3("light.position", camera.Position);
lightingShader.setVec3("light.direction", camera.Front);
lightingShader.setFloat("light.cutOff", glm::cos(glm::radians(12.5f)));
lightingShader.setFloat("light.outerCutOff", glm::cos(glm::radians(17.5f)));
lightingShader.setVec3("viewPos", camera.Position);
// light properties
lightingShader.setVec3("light.ambient", 0.1f, 0.1f, 0.1f);
// we configure the diffuse intensity slightly higher; the right lighting conditions differ with each lighting method and environment.
// each environment and lighting type requires some tweaking to get the best out of your environment.
lightingShader.setVec3("light.diffuse", 0.8f, 0.8f, 0.8f);
lightingShader.setVec3("light.specular", 1.0f, 1.0f, 1.0f);
lightingShader.setFloat("light.constant", 1.0f);
lightingShader.setFloat("light.linear", 0.09f);
lightingShader.setFloat("light.quadratic", 0.032f);
// material properties
lightingShader.setFloat("material.shininess", 32.0f);
// view/projection transformations
glm::mat4 projection = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
glm::mat4 view = camera.GetViewMatrix();
lightingShader.setMat4("projection", projection);
lightingShader.setMat4("view", view);
// world transformation
glm::mat4 model;
lightingShader.setMat4("model", model);
// bind diffuse map
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseMap);
// bind specular map
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, specularMap);
// render containers
glBindVertexArray(cubeVAO);
for (unsigned int i = 0; i < 10; i++)
{
// calculate the model matrix for each object and pass it to shader before drawing
glm::mat4 model;
model = glm::translate(model, cubePositions[i]);
float angle = 20.0f * i;
model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
lightingShader.setMat4("model", model);
glDrawArrays(GL_TRIANGLES, 0, 36);
}
// again, a lamp object is weird when we only have a spot light, don't render the light object
// lampShader.use();
// lampShader.setMat4("projection", projection);
// lampShader.setMat4("view", view);
// model = glm::mat4();
// model = glm::translate(model, lightPos);
// model = glm::scale(model, glm::vec3(0.2f)); // a smaller cube
// lampShader.setMat4("model", model);
// glBindVertexArray(lightVAO);
// glDrawArrays(GL_TRIANGLES, 0, 36);
// glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
// -------------------------------------------------------------------------------
glfwSwapBuffers(window);
glfwPollEvents();
}
// optional: de-allocate all resources once they've outlived their purpose:
// ------------------------------------------------------------------------
glDeleteVertexArrays(1, &cubeVAO);
glDeleteVertexArrays(1, &lightVAO);
glDeleteBuffers(1, &VBO);
// glfw: terminate, clearing all previously allocated GLFW resources.
// ------------------------------------------------------------------
glfwTerminate();
return 0;
}
// process all input: query GLFW whether relevant keys are pressed/released this frame and react accordingly
// ---------------------------------------------------------------------------------------------------------
void processInput(GLFWwindow *window)
{
if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
glfwSetWindowShouldClose(window, true);
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
camera.ProcessKeyboard(FORWARD, deltaTime);
if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
camera.ProcessKeyboard(BACKWARD, deltaTime);
if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
camera.ProcessKeyboard(LEFT, deltaTime);
if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
camera.ProcessKeyboard(RIGHT, deltaTime);
}
// glfw: whenever the window size changed (by OS or user resize) this callback function executes
// ---------------------------------------------------------------------------------------------
void framebuffer_size_callback(GLFWwindow* window, int width, int height)
{
// make sure the viewport matches the new window dimensions; note that width and
// height will be significantly larger than specified on retina displays.
glViewport(0, 0, width, height);
}
// glfw: whenever the mouse moves, this callback is called
// -------------------------------------------------------
void mouse_callback(GLFWwindow* window, double xpos, double ypos)
{
if (firstMouse)
{
lastX = xpos;
lastY = ypos;
firstMouse = false;
}
float xoffset = xpos - lastX;
float yoffset = lastY - ypos; // reversed since y-coordinates go from bottom to top
lastX = xpos;
lastY = ypos;
camera.ProcessMouseMovement(xoffset, yoffset);
}
// glfw: whenever the mouse scroll wheel scrolls, this callback is called
// ----------------------------------------------------------------------
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset)
{
camera.ProcessMouseScroll(yoffset);
}
// utility function for loading a 2D texture from file
// ---------------------------------------------------
unsigned int loadTexture(char const * path)
{
unsigned int textureID;
glGenTextures(1, &textureID);
int width, height, nrComponents;
unsigned char *data = stbi_load(path, &width, &height, &nrComponents, 0);
if (data)
{
GLenum format;
if (nrComponents == 1)
format = GL_RED;
else if (nrComponents == 3)
format = GL_RGB;
else if (nrComponents == 4)
format = GL_RGBA;
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0, format, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
stbi_image_free(data);
}
else
{
std::cout << "Texture failed to load at path: " << path << std::endl;
stbi_image_free(data);
}
return textureID;
}
```<issue_comment>username_1: I make this an answer instead of a comment (although it probably should be a comment). As you said yourself, you want to learn. So instead of telling you a "solution", I'll try to show you the ropes of how to properly deal with this kind of problem.
First and foremost, the most important part when dealing with compilation errors is to actually *read* the error message and then to **understand** it! *Don't jump to conclusions, download arbitrary files from unrelated sources and mash things together!* This approach won't work!
Let's break this down. You have a compiler error. It reads like the following:
>
>
> >
> >
> > ```
> > (…) name followed by '::' must be a class or namespace (…)
> >
> > ```
> >
> >
>
>
>
Your quote, unfortunately, is missing some information, namely in what file and at which line the problem occurred. There's a certain logic behind how compilers report errors; older versions of GCC spat out rather arcane error logs, often several pages long; the culprit usually hides somewhere in the very first 5 lines or so of the whole error log. Usually you can safely ignore all the rest.
Anyway, it tells you what is wrong. Namely, that in C++ if you write something like `a::b` then `a` must be the *name of a class or a namespace* (which is exactly what the error tells you). However, usually classes and namespaces are pulled in by a header include. If an include directive fails, the preprocessor bails out, so it's unlikely that this has anything to do with a missing include at all.
But what can happen as well is that *before* an include, something is not how it should be. Usually a missing semicolon (`;`), which might cause a class declaration to be mangled up with something else.
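A classic illustration of that effect (a hypothetical snippet, not from your code):

```
class Foo
{
}          // <- missing ';' here

int main() // the compiler now reports a confusing error *here*,
{          //    because it tries to parse 'int main()' as part of the declaration above
    return 0;
}
```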
So here's what you should do: Carefully reread the compiler log *from the beginning*. Look at all the warnings and errors on top, then work your way down.
If you get stuck again, **edit** your question, and *if* I can help you, I'll append to this answer.
Upvotes: 3 [selected_answer]<issue_comment>username_2: If anyone is still searching for an answer...
// The FileSystem::getPath(...) is part of the GitHub repository so we can find files on any IDE/platform; replace it with your own image path.
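For example, a sketch using the texture path from the question:

```
// a plain (relative or absolute) path instead of FileSystem::getPath(...):
unsigned int diffuseMap = loadTexture("resources/textures/container2.png");
```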
Upvotes: 2 |
2018/03/19 | 916 | 2,835 | <issue_start>username_0: I want to extract the Vimeo Id from its URL. I have tried many solutions but have not found exactly what I want for Swift. I referred to many questions and found one solution in Java. I want the same behaviour in iOS Swift so I can extract the ID from the matched-group array.
[Using Regular Expressions to Extract a Value in Java](https://stackoverflow.com/questions/237061/using-regular-expressions-to-extract-a-value-in-java)
I use the following Vimeo URL regex, and I want group 3 if the string matches the regex:
"[http|https]+:\/\/(?:www\.|player\.)?vimeo\.com\/(?:channels\/(?:\w+\/)?|groups\/([^\/]\*)\/videos\/|album\/(\d+)\/video\/|video\/|)([a-zA-Z0-9\_\-]+)(&.+)?"
Test Vimeo URL: <https://vimeo.com/62092214?query=foo><issue_comment>username_1: ```
let strToTest = "https://vimeo.com/62092214?query=foo"
let pattern = "[http|https]+:\\/\\/(?:www.|player.)?vimeo.com\\/(?:channels\\/(?:\\w+\\/)?|groups\\/([^\\/]*)\\/videos\\/|album\\/(\\d+)\\/video\\/|video\\/|)([a-zA-Z0-9_\\-]+)(&.+)?"
let regex = try! NSRegularExpression.init(pattern: pattern, options: [])
let match = regex.firstMatch(in: strToTest, options: [], range: NSRange(location: 0, length: strToTest.utf16.count)) // utf16.count, since NSRange counts UTF-16 units
let group3Range = match?.range(at: 3)
let substring = (strToTest as NSString).substring(with: group3Range!)
print("substring: \(substring)")
```
That should work.
You need to escape all `\` in the pattern.
You need to call `range(at:)` to get the range of the group you want according to your pattern (currently group3), then substring.
What should be improved?
Well, I did all sorts of force unwrapping (every time I wrote a `!`) for the sake of the logic, to avoid adding `do`/`catch`, `if let`, etc. I strongly suggest you check them carefully.
Upvotes: 3 <issue_comment>username_2: Here is yet another version. I am using [named capturing group](https://www.regular-expressions.info/named.html), a bit different than the answer provided by username_1.
```
let regex = "[http|https]+:\\/\\/(?:www\\.|player\\.)?vimeo\\.com\\/(?:channels\\/(?:\\w+\\/)?|groups\\/(?:[^\\/]*)\\/videos\\/|album\\/(?:\\d+)\\/video\\/|video\\/|)(?<vimeoId>[a-zA-Z0-9_\\-]+)(?:&.+)?"
let vimeoURL = "https://vimeo.com/62092214?query=fooiosiphoneswift"
let regularExpression = try! NSRegularExpression(pattern: regex,
options: [])
let match = regularExpression.firstMatch(in: vimeoURL,
options: [],
range: NSRange(vimeoURL.startIndex ..< vimeoURL.endIndex,
in: vimeoURL))
if let range = match?.range(withName: "vimeoId"),
let stringRange = Range(range, in: vimeoURL) {
let vimeoId = vimeoURL[stringRange]
}
```
Also, please note that I have modified your regex a bit, such that everything except vimeoId is [non-capturing](https://stackoverflow.com/questions/3512471/what-is-a-non-capturing-group-what-does-do?rq=1).
Upvotes: 3 [selected_answer] |
2018/03/19 | 2,466 | 6,384 | <issue_start>username_0: I have a question for which I can't find an answer, or don't know how to search for one.
I don't know how to position text above an image in the way I want them to align. The following image will clarify what I am asking.
[](https://i.stack.imgur.com/NqZQk.jpg)
Next is my HTML and CSS code, I only provided the HTML for about page, but CSS is for the whole website. This isn't anything professional, I am just trying to learn by doing. My idea is to use those images as links ( I know how to do that ). If there is similar question already asked, I apologize, I tried searching here and on YouTube, but could not find solution for this problem. If nothing I will edit pictures in GIMP with text in them.
```css
body {
background: #e5fcf4;
font-family: Arial;
}
header {
text-align: center;
}
header nav {
display: inline-block;
}
header nav ul {
list-style: none;
margin: 0;
padding: 0;
}
header ul li {
float: left;
color: white;
width: 200px;
height: 40px;
background-color: #0d3801;
opacity: .9;
line-height: 40px;
text-align: center;
font-size: 20px;
}
header ul li a {
text-decoration: none;
color: white;
display: block;
}
header ul li a:hover {
background-color: green;
color: black;
}
header ul li ul li {
display: none;
}
header ul li:hover ul li {
display: block;
}
div.maincontent {
width: 70%;
padding: 2px;
margin: 2px;
float: left;
}
div.sidecontent {
width: 23%;
float: right;
padding: 2px;
margin: 2px;
margin-top: 10px;
}
div.maincontent img {
width: 900px;
height: 400px;
}
.clear {
clear: both;
}
footer {
background-color: #0d3801;
text-align: center;
}
footer img {
width: 200px;
height: 200px;
margin: 5px;
}
footer h2 {
font-size: 2rem;
color: white;
}
img.aboutimage {
width: 450px;
height: 400px;
float: left;
padding: 5px;
margin-left: 125px;
margin-top: 100px;
}
```
```html
* [Home](index.html)
* [About](about.html)
+ Our team
+ Camp sites
+ Mission & Vision
* [Things to do](things.html)
+ Activities
+ Parks
+ Coffee bars
* [Contact](contact.html)
+ Map
+ Directions
* [News](news.html)



```<issue_comment>username_1: Wrap each image in a div and add your text before the image:
```
<div>
  <h3>sample title</h3>
  <a href="#"><img src="..." alt=""></a>
</div>
```
Position your "a" depending on the intended result. Right now it only makes the image a link.
Upvotes: 3 [selected_answer]<issue_comment>username_2: First you should wrap the image with a div and add the following style for that div.
Example:
```
Sample Text
```
Try this.
Upvotes: -1 <issue_comment>username_3: Please check with this snippet
```css
body {
background: #e5fcf4;
font-family: Arial;
}
header {
text-align: center;
}
header nav {
display: inline-block;
}
/* FIRST STEP */
header nav ul {
list-style: none;
margin: 0;
padding: 0;
}
/* SECOND STEP */
header ul li {
float: left;
color: white;
width: 200px;
height: 40px;
background-color: #0d3801;
opacity: .9;
line-height: 40px;
text-align: center;
font-size: 20px;
}
/* THIRD STEP */
header ul li a {
text-decoration: none;
color: white;
display: block;
}
/* FOURTH STEP */
header ul li a:hover {
background-color: green;
color: black;
}
/* FIFTH STEP */
header ul li ul li {
display: none;
}
header ul li:hover ul li {
display: block;
}
div.maincontent {
width: 70%;
padding: 2px;
margin: 2px;
float: left;
}
div.sidecontent {
width: 23%;
float: right;
padding: 2px;
margin: 2px;
margin-top: 10px;
}
div.maincontent img {
width: 900px;
height: 400px;
}
.clear {
clear: both;
}
footer {
background-color: #0d3801;
text-align: center;
}
footer img {
width: 200px;
height: 200px;
margin: 5px;
}
footer h2 {
font-size: 2rem;
color: white;
}
img.aboutimage {
width: 450px;
height: 400px;
float: left;
padding: 5px;
margin-left: 125px;
margin-top: 100px;
}
.img-block a{
position:relative;
}
.img-block a span{
position:absolute;
width:100%;
top:0;
left:0;
background:rgba(0,0,0,0.5);
padding:5px;
font-size:14px;
color:#fff;
font-weight:700;
text-align:center;
}
.img-block img{
padding:0;
width:100%;
margin:0;
height:auto;
}
.img-block a{
overflow:hidden;
float:left;
width:calc( 33.33% - 20px );
margin:0 10px;
}
```
```html
* [Home](index.html)
* [About](about.html)
+ Our team
+ Camp sites
+ Mission & Vision
* [Things to do](things.html)
+ Activities
+ Parks
+ Coffee bars
* [Contact](contact.html)
+ Map
+ Directions
* [News](news.html)
<div class="img-block">
  <a href="#"><span>Text1</span><img src="..."></a>
  <a href="#"><span>Text2</span><img src="..."></a>
  <a href="#"><span>Text3</span><img src="..."></a>
</div>
```
Upvotes: 2 <issue_comment>username_4: You can use `figure` and `figcaption` to have text and an image aligned with each other.
I've used `flex` to make sure everything lines up how it should.
```css
.imageblock {
display: flex;
justify-content: space-between;
}
.imageblock figure {
display: inline-flex;
flex-direction: column;
text-align: center;
width: 30vw;
margin: 0;
}
.imageblock figure * {
width: 100%;
}
```
```html
<div class="imageblock">
  <figure><figcaption>How to add text here?</figcaption><img src="..."></figure>
  <figure><figcaption>How to add text here?</figcaption><img src="..."></figure>
  <figure><figcaption>How to add text here?</figcaption><img src="..."></figure>
</div>
```
Upvotes: 2 |
2018/03/19 | 1,239 | 3,807 | <issue_start>username_0: To avoid conflicts, I want to use a defined range of subnets for the `docker0` bridge and all interfaces dynamically created by the docker deamon when a new container is started.
After a quick look in the docker documentation I found the promising **fixed-cidr** option. I added this option to my daemon.json. My configuration looks as follows:
`/etc/docker/daemon.json`
```
{
"bip": "192.168.89.1/22",
"fixed-cidr": "192.168.89.2/24"
}
```
after restarting the docker deamon the configuration looks promising:
```
docker inspect bridge
[
{
"Name": "bridge",
"Id": "365e0d373bcfc82bc73c623d680dcaee773e247f631e5b2324e3b63401bcf6fd",
"Created": "2018-03-19T09:59:22.20439309+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.89.1/22",
"IPRange": "192.168.89.0/24",
"Gateway": "192.168.89.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
```
But for some reason, when I create a new network:
```
docker network create test-network
```
it gets a new address from default IP range:
```
docker inspect test-network
[
{
"Name": "test-network",
"Id": "bf0f6baa8239c73a9524f8a77035bc2be18a67ad4d0d2ba4f73b3d175f5315b3",
"Created": "2018-03-19T10:31:24.450183553+01:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
```
For now, I have a temporary solutation. I'll simply define a custom subnet in my docker-compose.yml
```
yaml
networks:
default:
ipam:
config:
- subnet: 192.168.89.2/24
```
But this leads to further problems as a cannot start the project multiple times on the same hosts, because of network conflicts.
Does anyone know where the problem is?
**Edit:**
It seems to be a missing feature in docker. [Github Issue](https://github.com/moby/moby/issues/21776)
Fortunately this feature has been committed to master branch some days ago: [Committed feature](https://github.com/docker/libnetwork/commit/0ae9b6f38f24f65567d4b46602502b33c95cf57a#diff-def8fd71217e3995c71d711614a5399f)<issue_comment>username_1: Network options in `/etc/docker/daemon.json` only work for the default `docker0` bridge.
You can specify network options when creating new networks, see: <https://docs.docker.com/engine/reference/commandline/network_create/>
Upvotes: 0 <issue_comment>username_2: In new docker releases there is the `default-address-pools` option.
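For example, a minimal `/etc/docker/daemon.json` sketch (the `base` subnet below is illustrative, pick a range that fits your environment):
```
{
  "default-address-pools": [
    { "base": "192.168.88.0/21", "size": 24 }
  ]
}
```
After a daemon restart, every new network (including those from `docker network create` and Compose default networks) is then carved out of that pool as a `/24`.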
Upvotes: 3 [selected_answer] |
2018/03/19 | 309 | 1,121 | <issue_start>username_0: I have a strange issue about using Retrofit2 in my android project. I got the issue about the server error since the request is something like that.
<https://www.example.com/api/v1/skills?q=Good%00>
Since the invalid value "%00" is not acceptable in our server, so it showed error on my activity.
API service
```
@GET("skills")
Observable getSkills(@Query("q") String keyword);
```
In my fragment, I just get the text using following simple statement.
```
String keyword = editText.getText().toString()
api.getSkills(keyword);
```
What I want to know is the following:
1. Is it possible to have a word can be converted to "%00" ?
2. How to avoid this "Good%00" before I send to `getSkills` function? |
2018/03/19 | 499 | 1,744 | <issue_start>username_0: I can't seem to find the best answer to my question SO.
I have this code that is "OK" but not ideal:
```
func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
print(mapView.camera.altitude)
if mapView.camera.altitude < 800.00 && !modifyingMap
{
modifyingMap = true
mapView.camera.altitude = 800.00
modifyingMap = false
}
}
```
I would like to limit a user's max and min zoom to my map in my app.
any links to the SO answer are greatly appreciated!
Thanks!<issue_comment>username_1: You could use the `mapView:regionDidChangeAnimated:` delegate method to listen for region change events, and if the region is wider/narrower than your maximum/minimum region, set it back to the max/min region with `setRegion:animated:` to indicate to your user that they can't zoom out/in that far.
e.g.
```
func mapView(_ mapView: MKMapView, regionDidChangeAnimated animated: Bool) {
let coordinate = CLLocationCoordinate2DMake(mapView.region.center.latitude, mapView.region.center.longitude)
var span = mapView.region.span
if span.latitudeDelta < 0.002 { // MIN LEVEL
span = MKCoordinateSpanMake(0.002, 0.002)
} else if span.latitudeDelta > 0.003 { // MAX LEVEL
span = MKCoordinateSpanMake(0.003, 0.003)
}
let region = MKCoordinateRegionMake(coordinate, span)
mapView.setRegion(region, animated:true)
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Try this out:
```
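// Note: MKMapView.cameraZoomRange is available from iOS 13 onwards.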
mapView.cameraZoomRange = MKMapView.CameraZoomRange(
minCenterCoordinateDistance: 1000, // Minimum zoom value
maxCenterCoordinateDistance: 10000) // Max zoom value
```
Upvotes: 2 |
2018/03/19 | 252 | 865 | <issue_start>username_0: I am trying to download the pdf using the "download" attribute in an `<a>` tag:
```
<a href="/path/to/file.pdf" download>Download PDF</a>
```
But it is only opening in the next tab, not downloading.<issue_comment>username_1: For React, the `download` attribute needs a value:
```
<a href={fileUrl} download={fileName}>your link</a>
```
fileUrl must be same-origin with the page
Upvotes: 2 <issue_comment>username_2: ```html
<a href="url" download>Download File</a>
```
Upvotes: 0 <issue_comment>username_3: import the file like normal in React
```
import myResume from "../../assets/myResume.pdf";
```
(you choose your own path this is just an example)
after that you just have to add the reference to your `href` and that's all.
Upvotes: -1 <issue_comment>username_4: What I did on my react app using typescript is as shown below
```
<a href={require('url')} download>Download</a>
```
Example
```
<a href={require('../assets/doc/resume.pdf')} download>Download</a>
```
Upvotes: 0 |
2018/03/19 | 580 | 2,077 | <issue_start>username_0: I have set constraint for my text field in story board. I want to create a table view under the text field with the following code. Setting the width similar to the `textfield.frame.size.width` brings the tableView all the way out of the screen, why? Please help.
```
-(void)cofigureAutoComTableView
{
autocompleteTableView = [[UITableView alloc] initWithFrame:CGRectMake(self.txtActivity.frame.origin.x,self.txtActivity.frame.origin.y+32,self.txtActivity.frame.size.width, 200) style:UITableViewStylePlain];
autocompleteTableView.delegate = self;
autocompleteTableView.dataSource = self;
autocompleteTableView.scrollEnabled = YES;
//autocompleteTableView.
autocompleteTableView.hidden = YES;
[self.view addSubview:autocompleteTableView];
CALayer *layer = autocompleteTableView.layer;
[layer setMasksToBounds:YES];
[layer setCornerRadius: 4.0];
[layer setBorderWidth:1.0];
[layer setBorderColor:[[UIColor blackColor] CGColor]];
}
```
[](https://i.stack.imgur.com/thOwe.png)<issue_comment>username_1: It looks like that because you are calling the function before the text field has its final frame; call it in `viewDidAppear` instead:
```
[self cofigureAutoComTableView];
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can try
```
-(void)viewDidLayoutSubviews
{
[super viewDidLayoutSubviews];
if(once) // `once` is a BOOL flag initialised to YES, so this block runs only once
{
once = NO;
[self conf]; // i.e. call your cofigureAutoComTableView setup here
}
}
```
Upvotes: 1 <issue_comment>username_3: The problem is that in viewDidLoad the frames are not in the final form. Only in viewDidAppear the frames are final.
The best solution in this case is to update the autocompleteTableView's frame in viewDidLayoutSubviews.
viewDidLayoutSubviews is called every time the viewController's view frame is updated.
```
-(void)viewDidLayoutSubviews
{
[super viewDidLayoutSubviews];
autocompleteTableView.frame = CGRectMake(self.txtActivity.frame.origin.x,self.txtActivity.frame.origin.y+32,self.txtActivity.frame.size.width, 200);
}
```
Upvotes: 0 |
2018/03/19 | 507 | 1,812 | <issue_start>username_0: I have 2 databases from which I have deleted rows in a specific table in order to decrease the size of the database.
After deleting, the size of `DB.mdf` does not change.
I also tried to rebuild the index and used `cleantable`, but to no effect!
```
ALTER INDEX ALL ON dbo.'Tablename' REBUILD
DBCC CLEANTABLE ('DBname', 'Tablename', 0)
```<issue_comment>username_1: Deleting rows in a database will not decrease the actual database file size.
You need to compact the database after row deletion.
[Look for this](https://msdn.microsoft.com/en-us/library/ms190488(v=sql.90).aspx)
After running this, you'll want to rebuild indexes. Shrinking typically causes index fragmentation, and that could be a significant performance cost.
I would also recommend that after you shrink, you re-grow the files so that you have some free space. That way, when new rows come in, they don't trigger autogrowth. Autogrowth has a performance cost and is something you would like to avoid whenever possible.
Upvotes: 3 <issue_comment>username_2: Even I faced the same issue: my db was 40MB, and after deleting some columns its size still did not change.
I installed SQLManager, then opened my db and ran the 'VACUUM' command, which cleaned my db and reduced its size to 10MB.
[](https://i.stack.imgur.com/IzATL.png)
Upvotes: 1 <issue_comment>username_3: You need to shrink the db. Right click db, Tasks->Shrink database
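The equivalent T-SQL, as a sketch (the database name is a placeholder for your own):
```
-- shrink the database, leaving 10 percent free space:
DBCC SHRINKDATABASE (N'DBname', 10);
```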
Upvotes: 3 <issue_comment>username_4: I wrote this after being in the exact same scenario and needing to shrink the database. However, not wanting to use DBCC SHRINKFILE, I used Paul Randal's method of shrinking the database.
<https://gist.github.com/tcartwright/ea60e0a38fac25c847e39bced10ecd04>
Upvotes: 0 |
2018/03/19 | 353 | 1,116 | <issue_start>username_0: Here I'm trying to get the names of the checked items. For example I want to get the Name **Milk** and display it.
Here is the code:
```js
var ckboxes=document.getElementsByClassName("ckbox");
for(i=0;i<ckboxes.length;i++){
    values = ckboxes[i].value;
    console.log(values);
}
```
```html
<label><input type="checkbox" class="ckbox" value="milk">Milk</label>
```<issue_comment>username_1: You can get the [next sibling node](https://developer.mozilla.org/en-US/docs/Web/API/Node/nextSibling)
```
ckboxes[i].nextSibling.nodeValue
```
**Demo**
```js
var ckboxes = document.getElementsByClassName("ckbox");
for (i = 0; i < ckboxes.length; i++) {
values = ckboxes[i].value;
console.log(ckboxes[i].nextSibling.nodeValue);
}
```
```html
<label><input type="checkbox" class="ckbox" value="milk">Milk</label>
```
However, if your code is going to have line-breaks within `label` and after `input`, then use @teemu's suggestion
```
ckboxes[i].parentElement.textContent;
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: `Milk` is not the part of `checkbox`. It is part of the `label`. Also you should not access `value` instead you should access `textContent`:
```js
var ckboxes=document.getElementsByClassName("ckbox");
for(i=0;i<ckboxes.length;i++){
    console.log(ckboxes[i].parentElement.textContent);
}
```
```html
<label><input type="checkbox" class="ckbox" value="milk">Milk</label>
```
Upvotes: 1 |
2018/03/19 | 1,314 | 4,397 | <issue_start>username_0: ```
//Code 1
log.info undefined
```
When we run the code 1 , we get below error in soapui/readyapi as
[](https://i.stack.imgur.com/gmX8h.png)
**please note :-** Line number is visible in error message
However to avoid this alert , we used try/catch to print this, so the above code is amended to below as code 2
```
//code 2
try
{
log.info undefined
}
catch(Exception e)
{
log.info e
}
```
when we run the code 2 we get below results
```
Mon Mar 19 15:04:16 IST 2018:INFO:groovy.lang.MissingPropertyException: No such property: undefined for class: Script6
```
**Problem** :- How can we see the line number where the problem is just like we are able to see in code1
**Requirement** :- Our exception block should be able to tell problem is in which line.
Since its a small code we are able to know, Sometimes the code is having 100+ lines and its difficult to know where the exception is<issue_comment>username_1: You can use `log.info e.getStackTrace().toString();` to get the full stack trace.
However, it'll be hard to pick out the issue. Here's my Groovy script....
```
try
{
log.info undefined
}
catch(Exception e)
{
log.info e.getStackTrace().toString();
}
```
Here's the trace....
Mon Mar 19 17:15:20 GMT 2018:INFO:[org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:50), org.codehaus.groovy.runtime.callsite.PogoGetPropertySite.getProperty(PogoGetPropertySite.java:49), org.codehaus.groovy.runtime.callsite.AbstractCallSite.callGroovyObjectGetProperty(AbstractCallSite.java:231), **Script21.run(Script21.groovy:3)**, com.eviware.soapui.support.scripting.groovy.SoapUIGroovyScriptEngine.run(SoapUIGroovyScriptEngine.java:100), com.eviware.soapui.support.scripting.groovy.SoapUIProGroovyScriptEngineFactory$SoapUIProGroovyScriptEngine.run(SourceFile:89), com.eviware.soapui.impl.wsdl.teststeps.WsdlGroovyScriptTestStep.run(WsdlGroovyScriptTestStep.java:154), com.eviware.soapui.impl.wsdl.panels.teststeps.GroovyScriptStepDesktopPanel$RunAction$1.run(GroovyScriptStepDesktopPanel.java:277), java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source), java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source), java.lang.Thread.run(Unknown Source)]
Note- the emboldened part. It's line 3 and that is what I expected. However, SoapUI must use internal numbering for scripts as I called the script "Dummy Groovy Script" and the stack trace says "Script21".
Anyhow, I do think you ought to look at your Groovy script: 100+ lines in a try seems a bit much and, as you point out, it's difficult to see the issue.
I'd suggest breaking it down into functions, or even better, calling a Java class external to SoapUI that has nice well-defined functions.
The SmartBear site describes how this can be done. Plus, it removes a lot of the bloat from the SoapUI project file.
Upvotes: 1 <issue_comment>username_2: Building on @tim_yates' answer of using `e.stackTrace.head().lineNumber`:
```
import org.codehaus.groovy.runtime.StackTraceUtils
try {
println undefined
} catch (Exception e) {
StackTraceUtils.sanitize(e)
e.stackTrace.head().lineNumber
}
```
Use `sanitize()` on your Exception to remove all the weird Groovy internal stuff from the stack trace for your Exception. Otherwise when you look at the first StackTraceElement, it probably won't be the one you want.
`deepSanitize()` is the same, but also applies the transform to all the nested Exceptions if there are any.
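Putting the two together, a sketch of how this could look inside a SoapUI script step (`log` being the logger SoapUI injects):
```
import org.codehaus.groovy.runtime.StackTraceUtils

try {
    log.info undefined
} catch (Exception e) {
    StackTraceUtils.sanitize(e)
    log.error "Error at line ${e.stackTrace.head().lineNumber}: ${e}"
}
```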
Upvotes: 3 <issue_comment>username_3: Thanks Chris and Jeremy for solving my problem.
I used the solution below, based on Chris's answer, with full respect to your answer.
```
try
{
log.info undefined
}
catch(Exception e)
{
log.error "Exception = " + e
String str= e.getStackTrace().toString()
def pattern = ( str =~ /groovy.(\d+)./ )
log.error " Error at line number = " + pattern[0][1]
}
```
The reason I am using that answer is that I can avoid an import in all my scripts.
I have used pattern matching to extract the line number, as it always comes out like
```
(Script18.groovy:17),
```
so i have used the pattern
`/groovy.(\d+)./`
Now I get both the exception details and the line number
[](https://i.stack.imgur.com/Yl2TE.png)
Upvotes: 3 [selected_answer] |
2018/03/19 | 841 | 2,721 | <issue_start>username_0: [Similar to this unsolved question](https://stackoverflow.com/questions/24520950/json-numeric-check-and-phone-numbers)
My PHP output requires `JSON_NUMERIC_CHECK` enabled, however there is one string **nick** column in my database that needs to be returned originally as a string. Values in that column can contain numbers and there's no length restriction. Code example:
```
$response["players"] = array();
...
$stmt = $connection->prepare('SELECT id, nick FROM players WHERE NOT id = ? ORDER BY nick');
$stmt->bind_param('i', $_POST["id"]);
$stmt->execute();
$result = $stmt->bind_result($id, $nick);
while ($stmt->fetch()) {
$players = array();
$players["id"] = $id;
$players["nick"] = $nick;
array_push($response["players"], $players);
}
...
echo json_encode($response, JSON_NUMERIC_CHECK);
```
For example, nick **"007"** is being returned as **"7"** and I need it to be the original **"007"**. Removing **"JSON\_NUMERIC\_CHECK"** helps, but it bugs the rest of the code. **Strval()** function used like quoted below didn't help.
```
$result = $stmt->bind_result($id, strval($nick));
$players["nick"] = strval($nick);
```<issue_comment>username_1: In PHP, unfortunately, there is no explicit type for variables, which makes your problem not so easy to solve.
I came up with two solutions.
1) if you like OOP, make a `Serializable`/`Arrayable` class to use as a container for your string, with a method returning the right encoded value.
2) first encode using `JSON_NUMERIC_CHECK` in order to make sure your data is what you expect it to be, then decode the validated data, manually set the string back to its original value, then encode again, this time without `JSON_NUMERIC_CHECK`.
```
$str = $response['nick'];
$response = json_decode(json_encode( $response, JSON_NUMERIC_CHECK ) );
$response->nick = $str;
$response = json_encode( $response );
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: The management of numbers is strange to say the least ...
It's almost scary !
then on PHPTESTER:
```
$CUSTOMER['id'] = 123;
$CUSTOMER['tel'] = '06 06 06 06 06';
// workaround to keep the leading zero of the phone number through json_encode
$CUSTOMER['tel'] = "".$CUSTOMER['tel']."";
echo json_encode($CUSTOMER, JSON_NUMERIC_CHECK);
```
Output : {"id":123,"tel":"06 06 06 06 06"}
On my host (same code) : Output : {"id":123,"tel":"6 06 06 06 06"} // zero was removed !
I found an unlikely solution :
I surrounded the variable with a space ...
```
$CUSTOMER['tel'] = " ".$CUSTOMER['tel']." "; // wrap with spaces
```
Output (on my host) : {"id":123,"tel":"06 06 06 06 06"}
in PHP version 5.5
Here is a solution, I would like to have your opinion, in any case it worked for me.
Upvotes: 0 |
2018/03/19 | 867 | 2,786 | <issue_start>username_0: I am trying to filter out items by date. I want to get one item that has the dateFrom equal to or bigger than today and dateTo equal to or less than today. This is my query and reformat-date function. In SharePoint Online I have the date set to `only date`. By default SharePoint sets the time part to 23:00:00, which is why it is used statically. The `ge dateFrom` part of the query works fine but the `le dateTo` ignores the equal part. So if dateTo is set to `2018-03-19`, I don't get any items
returned with the dateTo set to that date `2018-03-19`. Is my query wrong?
```
var requestUrl = _spPageContextInfo.webAbsoluteUrl + "/_api/web/lists/getbytitle('" + _weaklyQuestion + "')/items?$select=Id,Title,answersOptions,dateFrom,dateTo&$filter=('" + ReFormatTime() + "' ge dateFrom) and ('" + ReFormatTime() + "' le dateTo)&$top=1";
function ReFormatTime() {
var date = new Date(),
year = date.getFullYear(),
month = date.getMonth() + 1,
dt = date.getDate();
if (dt < 10) {
dt = '0' + dt;
}
if (month < 10) {
month = '0' + month;
}
return year + '-' + month + '-' + dt + 'T23:00:00Z';
}
``` |
2018/03/19 | 577 | 1,901 | <issue_start>username_0: It used to work, but somehow it doesn't anymore and I cannot figure out why.
I want to vertically and horizontally center an image inside a div frame. Right now it's horizontally centered, but not vertically, and instead on the top of the div. I have tried an inline-box which worked before. I'm not sure why it stopped working. |
2018/03/19 | 654 | 2,326 | <issue_start>username_0: I have an Angular 5 application and I need to redirect all the traffic to Https when it's not already the case. I only have access to .htaccess to do that.
At first, i had that configuration :
```
RewriteEngine On
RewriteBase /
RewriteRule ^index\.html$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.html [L]
```
Works fine to avoid 404 and let Angular handle the routing, but when i try to add https, it broke everything.
I've tried multiple things, first i tried to add :
```
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
```
But I got too many redirections and can't load the page
Then I tried to replace my conf with :
```
RewriteEngine On
RewriteBase /
RewriteRule ^index\.html$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . https://%{SERVER_NAME}/index.html [L]
```
All the URI are handled but they always redirect to "/" in Angular, so I can't use Url to navigate (for example I have a /admin section that can be accessed only by URL )
I'm not used to apache, I usually work with nginx, does somebody have an idea on how to fix my issue?
Thanks<issue_comment>username_1: In case somebody has the same problem, i managed to make it work with this .htaccess :
```
RewriteEngine On
RewriteCond %{ENV:HTTPS} !on
RewriteRule (.*) https://%{HTTP_HOST}/$1 [R,L]
RewriteBase /dist
RewriteRule ^index\.html$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.html [L]
```
I guess the error in my first try was the condition for the https test
Upvotes: 1 <issue_comment>username_2: Thanks username_1. In my case my `dist` files are directly in `public_html` so I don't need to rewrite base, and I tried your solution without it but it didn't work, eventually I think I have a good idea why it failed (through trial and error/experimenting) and it's that between each line my server apparently needs an empty line, something like:
```
RewriteEngine On

RewriteCond %{ENV:HTTPS} !on

RewriteRule (.*) https://%{HTTP_HOST}/$1 [R,L]
```
instead of:
```
RewriteEngine On
RewriteCond %{ENV:HTTPS} !on
RewriteRule (.*) https://%{HTTP_HOST}/$1 [R,L]
```
It is super weird but I understand why it happens. Regardless I thought I'd let anyone else know to check for this if absolutely nothing is working for you.
Upvotes: 0 |
2018/03/19 | 507 | 1,572 | <issue_start>username_0: Stuck in this Numpy Problem
```
country=['India','USA']
gdp=[22,33]
import numpy as np
a=np.column_stack((country,gdp))
array([['India', '22'],
['USA', '33']], dtype='<U11')
```
I have an NDArray and I want to find the maximum of the 2nd column. I tried the below
```
print(a.max(axis=1)[1])
print(a[:,1].max())
```
It threw this error: `TypeError: cannot perform reduce with flexible type`
Tried converting the type
```
datatype=([('country',np.str_,64),('gross',np.float32)])
new=np.array(a,dtype=datatype)
```
But got the below error
>
> could not convert string to float: 'India'.
>
>
><issue_comment>username_1: The error is due to the string data in your array, which makes the dtype a Unicode string (indicated by U11, i.e. an 11-character unicode string).
If you wish to store data in the numerical format, then use `structured arrays`.
However, if you only wish to compute the maximum of the numerical column, use
```
print(a[:, 1].astype(np.int).max())
# 33
```
You may choose to use other numerical dtypes such as `np.float` in place of `np.int` based on the nature of data in the specific column.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Consider using `numpy` structured arrays for mixed types. You will have no issues if you *explicitly* set data types.
This is often necessary, and certainly advisable, with `numpy`.
```
import numpy as np
country = ['India','USA','UK']
gdp = [22,33,4]
a = np.array(list(zip(country, gdp)),
dtype=[('Country', '|S11'), ('Number', '<i4')])
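# the numeric field can then be reduced directly, e.g. a['Number'].max() gives 33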
```
Upvotes: -1 |
2018/03/19 | 1,106 | 4,452 | <issue_start>username_0: I am studying Angular's change detection and I came across the following query: Let's take as base the following code:
```
@Component({
template: `
<h1>{{item}}</h1>
<button (click)="changeModel()">Change</button>
`
})
class MyApp {
item:string = 'cheese';
changeModel() {
this.item = 'bread';
}
}
```
Once we click `Change` button, our ***model changes***, ***change detection is triggered*** and the appropriate DOM updates will be applied as to reflect the change in `item`. So far so good. However, Angular triggers change detection to all the components. *So, here is my query*: What's the point in doing that? Let's say, there is another component **B** which has a binding with `item.` When sometime in the future, **B** is about to be shown, the new DOM will be built upon the new correct value of `item`. So once again, `what's the point in running change detection in B` every time the value of `item` is changed through the above presented component?<issue_comment>username_1: >
> So once again, what's the point in running change detection in B every
> time the value of item is changed through the above presented
> component?
>
>
>
Let's assume your **B component** is a child of **MyApp component**. `item` is bound to the **B component** via `@Input`. So:
```
class BComponent{
@Input() item: string;
ngOnChanges(changeObj){
// catch item changes here
}
}
```
Every time the value of `item` changes in the **parent** component, the `ngOnChanges` lifecycle hook of the B component is called accordingly.
[StackBlitz example](https://stackblitz.com/edit/angular-h16d2s)
>
> So, what's the use of applying change detection in C component when
> something changes in A?
>
>
>
In Angular, change detection runs from the top component down to its children.
Any component in Angular is a direct or indirect child of the **root component**, so the root component is the starting point. If you have a tree of components in the UI, you can disable change detection **with some exceptions** on components and their children accordingly:
[](https://i.stack.imgur.com/cFo7z.jpg)
Just apply to component decorator:
```
@Component({
selector: 'hello',
template: `<h1>Hello {{name}}!</h1>`,
styles: [`h1 { font-family: Lato; }`],
changeDetection: ChangeDetectionStrategy.OnPush
})
```
If you want to read about more deeply, there are great articles from <NAME> K. @AngularInDepth.com
[Everything you need to know about change detection in Angular](https://blog.angularindepth.com/everything-you-need-to-know-about-change-detection-in-angular-8006c51d206f)
Upvotes: 2 [selected_answer]<issue_comment>username_2: **TL;DR** : *Angular checks for changes everywhere at each user interaction because it's the easiest and most reliable way to handle interactions; by using services you could have one component impact another while they're only linked by a service handling data. The `Default` change detector strategy acts like that, but there are other solutions.*
The default change detector strategy is basically like saying "If anything happens anywhere in the app, check every other element to make sure there are no changes, then apply the changes". This is nice for small apps, but once you start to have a lot of components with badly optimized calls (like heavy computing function calls in templates) it becomes really heavy and slow.
I guess your question behind this is "How to change this behavior?".
Change detector can be affected by multiple ways:
### [ChangeDetectionStrategy](https://angular.io/api/core/ChangeDetectionStrategy)
The first way of changing that is by setting the `changeDetection` property in your `@Component` decorator parameter object to `ChangeDetectionStrategy.OnPush`; this is like saying to the change detector "Only consider this component and its children as changed if one of the inputs changed."
### Detaching [ChangeDetectorRef](https://angular.io/api/core/ChangeDetectorRef)
You can inject the change detector ref anywhere by simply adding it to your component constructor function (`private cdRef: ChangeDetectorRef`). Once you have injected this, you have full control over the change detector for the current component and its children: you can detach it, which is like saying "Don't check changes here", and reattach it when you want; you can also manually tell the CD to check changes and apply them to the DOM.
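A minimal sketch of that pattern (the component name and the `refresh` trigger here are illustrative, not from the question):
```
import { ChangeDetectorRef, Component, OnInit } from '@angular/core';

@Component({
  selector: 'heavy-cmp',
  template: `{{ data }}`
})
export class HeavyComponent implements OnInit {
  data = '';

  constructor(private cdRef: ChangeDetectorRef) {}

  ngOnInit() {
    this.cdRef.detach();        // "don't check changes here" for this view and its children
  }

  refresh(newData: string) {
    this.data = newData;
    this.cdRef.detectChanges(); // run one check on demand and update the DOM
    // this.cdRef.reattach();   // or resume automatic checking
  }
}
```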
Upvotes: 0 |
2018/03/19 | 661 | 1,800 | <issue_start>username_0: I have below mentioned tables:
Table_1
```
ID Var1 Var2
1 123 10
2 456 12
3 789 11
4 112 14
```
Table_2
```
ID Value2 Freq
1 123 5
2 555 4
3 779 7
4 112 8
```
I want those `Var1` which are not present in `Table_2`.
Output:
```
ID Var1 Var2
1 456 12
4 789 11
```
I have tried this:
```
select *
from Table_1 t1
left join Table_2 t2 on t1.Var1 = t2.Value2
where t1.Var1 not in t2.Value2
```<issue_comment>username_1: You can use the following:
```
SELECT * FROM Table_1 WHERE NOT Var1 IN (SELECT Value2 FROM Table_2)
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: ```
select * from Table_1
where Var1 not in (select Value2 from Table_2)
```
or
```
select ID, Var1, Var2
from Table_1 t1
left join Table_2 t2 on t1.Var1 = t2.Value2
where t2.Value2 is null
```
The second way may be more efficient.
Upvotes: 1 <issue_comment>username_3: you can use the following :
```
select * from Table_1 t1
left join Table_2 t2 on t1.Var1 = t2.Value2
where t2.Value2 is null
```
Upvotes: 0 <issue_comment>username_4: Try this:
```
SELECT A.*
FROM table_1 A
WHERE NOT EXISTS (SELECT NULL
FROM table_2 B
WHERE A.Var1=B.Value2);
```
See it [run on SQL Fiddle](http://sqlfiddle.com/#!9/9e4591/3).
Upvotes: 0 <issue_comment>username_5: The following code will give you the expected output.
```
SELECT ID, Var1, Var2
FROM Table_1
WHERE Var1 NOT IN (SELECT Value2 FROM Table_2);
```
The `NOT IN` operator is used when you want to retrieve rows whose value has no matching entry in the referenced table.
I have attached my [SQLFiddle](http://sqlfiddle.com/#!9/c6e381/3) with this. You can check it. Good Luck!
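One general caveat worth adding (standard SQL behaviour, independent of this data): if the subquery can return `NULL`s, `NOT IN` matches nothing at all, so either filter them out or switch to `NOT EXISTS`:
```
SELECT ID, Var1, Var2
FROM Table_1
WHERE Var1 NOT IN (SELECT Value2 FROM Table_2 WHERE Value2 IS NOT NULL);
```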
Upvotes: 0 |
2018/03/19 | 696 | 2,221 | <issue_start>username_0: If we have a module name like this:
```
Module.V1.CountryTest
```
I can convert it to String like this:
```
Module.V1.CountryTest |> to_string
```
Now there are some interesting results I am getting on `iex`
```
module = Module.V1.CountryTest |> to_string
"Elixir.Module.V1.CountryTest"
iex(2)> replace = Regex.replace(~r/Test/, module, "")
"Elixir.Module.V1.Country"
iex(3)> replace |> String.to_atom
Module.V1.Country
```
So if I remove `Test`. And convert it back to `atom`. It will give me back the module name. But If I `replace` or `remove` anything else from the module name it gives me this output:
```
some = Regex.replace(~r/Country/, replace, "")
"Elixir.Module.V1."
iex(5)> some |> String.to_atom
:"Elixir.Module.V1."
```
Can anybody please explain this behavior? And why won't it allow any other part to be changed or replaced, i.e. give me back output like this:
```
Module.V1.Country
```
I mean, if it's possible.
Thanks.<issue_comment>username_1: Elixir module names are just atoms prefixed with `"Elixir."`. Elixir prints atoms which start with `"Elixir."` and contain a valid Elixir module name after that differently than other atoms:
```
iex(1)> :"Elixir.Foo"
Foo
iex(2)> :"Elixir.F-o"
:"Elixir.F-o"
```
When you replace `Test`, the rest of the value is a valid Elixir module name, but when you replace `Country` as well, you end up with a `.` at the end which is not a valid module name. If you remove the dot too, you'll get what you want:
```
iex(3)> Module.V1.Country |> to_string |> String.replace(~r/Country/, "") |> String.to_atom
:"Elixir.Module.V1."
iex(4)> Module.V1.Country |> to_string |> String.replace(~r/\.Country/, "") |> String.to_atom
Module.V1
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: To convert a string to a module, you may want to use [`Module.safe_concat/1`](https://hexdocs.pm/elixir/master/Module.html#safe_concat/1) or [`/2`](https://hexdocs.pm/elixir/master/Module.html#safe_concat/2):
```
Module.safe_concat(Module.V1, "CountryTest") # => Module.V1.CountryTest
Module.safe_concat(~w[Module V1 CountryTest]) # => Module.V1.CountryTest
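# note: safe_concat raises ArgumentError if the resulting module atom does not already exist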
```
Upvotes: 3 |
2018/03/19 | 664 | 2,411 | <issue_start>username_0: I have an email attribute in my database. I stored it in the `TINYTEXT`. But now I decided to encrypt it using `AES_ENCRYPT` and store it encrypted. With what type and size for the email should I go to avoid any problems whatsoever?
From [the](https://dev.mysql.com/doc/refman/5.5/en/encryption-functions.html) documentation I decided to store email as `VARBINARY(60)`. Here are my thoughts on it:
```
16 × (trunc(string_length / 16) + 1)
I want my email to be VARCHAR(32)
16 × (trunc(32 / 16) + 1) = 48
But in order to avoid padding problems I will use 60 instead of 48.
So, the type of the encrypted email will be VARBINARY(60)
```
Am I right?<issue_comment>username_1: **Simple Answer**: `VARBINARY(48)`, as you computed. But, to handle longer email addresses, `BLOB`.
**Messier Answer**:
`AES_ENCRYPT` expands the string some, so 254 will exceed 255 after encrypting. Also, it returns non-character bytes.
`BLOB` is a simple answer. `VARBINARY(256)` would probably also work, but might not have any advantage. (I suspect utf8 characters could make it exceed 256.)
Do you need to index it? That could lead to other issues.
If you limit emails to 32 ascii characters, what percentage of the world will you exclude or truncate?
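For reference, the round trip would look roughly like this (table, column and key names are placeholders, and a real key should come from configuration rather than a literal):
```
INSERT INTO users (email) VALUES (AES_ENCRYPT('[email protected]', 'secret_key'));
SELECT CAST(AES_DECRYPT(email, 'secret_key') AS CHAR) AS email FROM users;
```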
Upvotes: 2 <issue_comment>username_2: The length of a VARCHAR, as the name suggests, is based on the number of characters in the data. The length of a VARBINARY column is based on the number of bytes (octets) in the value. You cannot calculate the appropriate size for the column without reference to the character set being used for the data.
Trying to store VARBINARY data in a column that isn't long enough will result in an error rather than truncated data so you need to allow for the longest possible value and for the fact that your character set might allow for multi-byte characters. The simplest way to get this is to query the information\_schema.columns table and get the current size from `CHARACTER_OCTET_LENGTH` (rather than `CHARACTER_MAXIMUM_LENGTH`) and then apply your formula to that
e.g.
```
SELECT 16 * (FLOOR(CHARACTER_OCTET_LENGTH / 16) + 1)
FROM information_schema.columns
WHERE TABLE_SCHEMA = 'yrdb'
AND TABLE_NAME = 'yrtable'
AND COLUMN_NAME = 'email'
```
You don't then need to add a bit extra to take care of padding, because that's already taken account of in your formula.
Upvotes: 2 [selected_answer] |
2018/03/19 | 318 | 1,244 | <issue_start>username_0: I was trying to commit the changes to my dev branch in Gitlab, but whenever i try to commit changes it shows the errors of ESLint and the commit gets aborted.
I tried cloning the repository from dev branch and made changes and installed dependencies using '**npm install**' and the errors seems to appear only when dependencies are installed.
Tested without installing npm dependencies, made changes to the code and tried to commit , and Voila! it worked, and changes pushed to the dev branch.
***Help me to avoid these errors appearing at the time of pushing the changes.***
(Please check the image below to view the errors displayed)
[](https://i.stack.imgur.com/TCmnx.png)<issue_comment>username_1: Fix lint errors before pushing. If you don't understand a rule, search it on eslint doc.
Upvotes: 0 <issue_comment>username_2: It worked when I used
```
git commit -m "Commit Message" --no-verify
```
and this is what I was looking for: a way to bypass the hook without fixing those ESLint errors!
Upvotes: 2 <issue_comment>username_3: Put `--no-verify` before `-m` (message):
```
git commit --no-verify -m 'commit message'
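# --no-verify skips the pre-commit and commit-msg hooks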
```
Upvotes: 0 |
2018/03/19 | 271 | 922 | <issue_start>username_0: I am a newbie in React and Node. I have two folders:
1. Client, hosted on 3000
2. Server, express on 3001
Registration worked fine for me, but after authentication `successRedirect:` goes to `http://localhost:3001/login`.
How to redirect it to client at `http://localhost:3000/login`
```
passport.authenticate('local-signup',{
successRedirect:'/login',
failureRedirect: '/register',
failureFlash : true
})(req, res);
``` |
2018/03/19 | 1,434 | 4,551 | <issue_start>username_0: in airflow, I would like to run a dag each monday at 8am (the execution\_date should be of course "current day monday 8 am"). The relevant parameters to set up for this workflow are :
* `start_date` : "2018-03-19"
* `schedule_interval` : "0 8 * * MON"
I expect to see a dag run every Monday at 8am, the first one being run on 19-03-2018 at 8am with `execution_date` = 2018-03-19-08-00-00, and so on each Monday.
However, that's not what happens: the dag is not started on 19/03/18 at 8am. The real behaviour is explained here, for example: <https://stackoverflow.com/a/39620901/1510109> or <https://stackoverflow.com/a/48213964/1510109>
The behaviour is: at each end of the interval (weekly in my case) the dag is run with `execution_date` = beginning of the interval (i.e. the previous week). This behaviour is apparently motivated by an "ETL way of thinking" (see the link above). But it's absolutely not what I want.
How can I achieve running my dag each Monday at 08:00am with `execution_date` = `trigger_date` = now (= current Monday 8am)?
Thanks<issue_comment>username_1: That is how airflow behaves, it always runs when the duration is completed. Detailed behavior [here](https://stackoverflow.com/questions/38856886/airflow-does-not-backfill-latest-run) and [airflow faq](http://airflow.apache.org/faq.html).
But in order to somehow make it run for current week, what we can do is manipulate `execution_date` of DAG. That may be in form of adding 7 days to a `datetime` object (if weekly schedule) or may use `{{ next_execution_date }}` macro.
Agreed that this is only possible if somehow dates are used in your DAG or dependencies are triggered by it.
Just to be clear again, the DAG is still running as per its normal behavior. The only thing we are trying to do is manipulate the `date` in the program/DAG.
```
args = { ....
'start_date': datetime.datetime(2018,3,18)
}
dag = DAG(...
schedule_interval = "@weekly"
)
# DAG would run on 3/25/2018 for week of 18th March
# but lets say we manipulate here
# {{ next_execution_date }} macro
# or add 7 days
# So basically we are running with date 3/25/2018 instead of 3/18/2018 for the week of 18th March
```
Upvotes: 1 <issue_comment>username_2: Take a [quick look at my answer](https://stackoverflow.com/a/49530174/459) with start times and execution_date examples.
You want to run every Monday at 8am.
So this part is going to stay the same:
```
schedule_interval: '0 8 * * MON',
```
You want its first run to be on 2018-03-19; since the first run occurs at the end of the first full schedule period after the start date, you should change your start date to:
```
start_date: datetime(2018, 3, 12),  # no leading zero: 03 is an octal literal in Python 2 and a syntax error in Python 3
```
You will have to live with the fact that Airflow will name your DagRuns with the start of each period and pass in macros based on the `execution_date` set to the start of the interval period. Adjust your logic accordingly.
Your first run will start after `2018-03-19T08:00:00.0Z` and the `execution_date`, every other macro that depends on it, and name of the DagRun will be `2018-03-12T08:00:00.0Z`
So long as you understand what to expect from the `execution_date` and you don't try to base your time off of `datetime.now()` your DAGs will be able to be idempotent in operation. Feel free to make a new variable like `my_execution_date = execution_date + datetime.timedelta(7)` within any `PythonOperator` or custom operator (you get execution\_date from the context of the task), use template statements like `{{ (execution_date + macros.timedelta(7)).strftime('%Y%m%d') }}` or `{{ macros.ds_add(ds, 7) }}`, or use the `next_execution_date`.
You can even add a dag level `user_defined_macros` like `{'dt':lambda d: d+datetime.timedelta(days=7)}` to enable `{{ dt(execution_date) }}`. And recently `user_defined_filters` were added like `{'dt':lambda d: d+datetime.timedelta(days=7)}` enabling `{{ execution_date | dt }}`. The `next_ds` and `next_execution_date` would be easier for your purposes.
While thinking about templating, you may as well read up on the built-in stuff out there: <http://jinja.pocoo.org/docs/2.10/templates/#builtin-filters>
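As a sketch, wiring such a macro into the DAG could look like this (the dag_id is illustrative):
```
from datetime import datetime, timedelta
from airflow import DAG

dag = DAG(
    'weekly_example',
    start_date=datetime(2018, 3, 12),
    schedule_interval='0 8 * * MON',
    user_defined_macros={'dt': lambda d: d + timedelta(days=7)},
)
# any templated field can then use: {{ dt(execution_date) }}
```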
Upvotes: 5 [selected_answer]<issue_comment>username_3: For me I solved it in this way:
```
{{ ds if dag_run.external_trigger or dag_run.is_backfill else macros.ds_add(ds, 1) }}
```
If the DAG was run by an external trigger, we shouldn't change `ds`.
If the DAG was run by backfilling, we shouldn't change `ds`.
If the DAG was scheduled, we use macros to increment it by one day.
Upvotes: 2 |
2018/03/19 | 985 | 3,392 | <issue_start>username_0: I have a terminal command I run to download all URLs included in a supplied JSON file:
```
egrep -o 'https:[^\"]*png' file-name.json | xargs -n 1 curl -O
```
This works as expected, but some of the URLs are in different "sub-folders", e.g.:
<https://website.com/a-folder-name/display/image.png>
or
<https://website.com/another-folder-name/display-side/image.png>
or
<https://website.com/a-different-folder-name/thumb/image.png>
I would like to adapt this command to only retrieve files from a specified folder name (so I can run the command again, changing the sub-folder name to only retrieve images in each folder), e.g.:
* File name starting with `http:`
* File name **including** `display-sides`
* File name ending with `png`
Below is a sample of my JSON data:
```
{
"parent_groups": [
{
"id": 1,
"name": "Main name",
"groups": [
{
"id": 3,
"name": "Sub Name",
"components": [
{
"id": "id-number",
"name": "Unit name",
"image": "https://website.com/a-folder-name/display/image.png"
},
{
"id": "another-id-number",
"name": "Another Unit name",
"image": "https://website.com/another-folder-name/display/another-image.png"
}
]
}
]
}
],
"display": {
"side": {
"components": [
{
"id": "side-id",
"filename": "https://website.com/another-folder-name/display-side/image.png"
},
{
"id": "another-side-id",
"filename": "https://website.com/some-folder-name/display-side/another-image.png"
}
]
},
"main": [
{
"position": 0,
"conditions": [
{
"ids": [
"thumb-id9"
],
"filename": "https://website.com/irrelevant-folder-name/thumb/image.png"
},
{
"ids": [
"another-thumb-id"
],
"filename": "https://website.com/this-is-a-folder-name/thumb/another-image.png"
}
]
}
]
}
}
```
This is heavily cut-down from the files I work with but hopefully serves as a relevant example.<issue_comment>username_1: Something like this?
```
egrep -o 'https:[^"]*/display-sides/[^"]*png' file-name.json | xargs -n 1 curl -O
```
This could be made a bit more robust by taking the surrounding quotes into account as well:
```
egrep -o '"https:[^"]*/display-sides/[^"]*png"' file-name.json | tr -d '"' | xargs -n 1 curl -O
```
That will allow you to leave off the `png` file extension too, if you wish.
Upvotes: 2 [selected_answer]<issue_comment>username_2: `grep` is not the right tool for processing JSON data.
The right way with [**`jq`**](https://stedolan.github.io/jq/manual/v1.5/) tool:
```
jq '.. | select(type =="string" and
test("^https://.+display-side.+\\.png$"))' file-name.json | xargs -n1 curl -O
```
This will find all URLs independently of key names.
It can also be extended to analyze only certain keys.
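For example, limiting the match to the `image`/`filename` keys could look like this (a sketch):
```
jq -r '.. | objects | (.image? // .filename?) | select(type == "string")' file-name.json | xargs -n1 curl -O
```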
Upvotes: 2 <issue_comment>username_3: @Roman is right, and if what you are *really* looking for is the field in the JSON-structure:
```
jq -r '.display.side.components[].filename' foo.json |
parallel curl -O
```
Upvotes: 0 |
2018/03/19 | 457 | 1,584 | <issue_start>username_0: I am loading all assets and `js` & `css` from next page and store them in an `Array` but I forEach loop through the document and store the javascript link seperatly now so like this:
```
0:{images: Array(61)}
1:{js: script}
2:{js: script}
3:{js: script}
4:{js: script}
5:{js: script}
6:{js: script}
etc...
```
Here is the loop:
```
[].forEach.call( document.querySelectorAll("script[src]"), function( src ) {
console.log( src );
resources.push({js: src})
});
```
But how do I store them like the images all in one object instead of pushing them separately.<issue_comment>username_1: You can do
```
resources.push( {js : [ ...document.querySelectorAll( "script[src]" ) ] } );
```
Or better, you can set them as properties of `resources` (an object instead of an *Array*):
```
resources = {};
resources.js = [ ...document.querySelectorAll( "script[src]" ) ];
resources.images = [ ...document.querySelectorAll( "img" ) ];
```
For older browsers
```
var toArray = function( arrLike ){ return [].slice.call( arrLike ) };
resources = {};
resources.js = toArray( document.querySelectorAll( "script[src]" ) );
resources.images = toArray( document.querySelectorAll( "img" ) );
```
Upvotes: 0 <issue_comment>username_2: You push `{js: []}` into `resources` **once**, and then push the scripts into the array that is the value of the `js` property.
(It doesn't make much sense to do this though, `resources` would probably be better off as an object with a `images` and a `js` property instead of as an array).
Upvotes: 2 [selected_answer] |
2018/03/19 | 330 | 1,111 | <issue_start>username_0: Can the "IN" be used in an other clause than "WHERE"? For instance in an iif:
```
iif((([Date] IS NOT NULL) AND ([Result] in ('PassedWithHonors', 'Passed')), 'Passed', 'Failed)))
```
Thank you.<issue_comment>username_1: Try this:
```
case when [Date] IS NOT NULL AND [Result] in ('PassedWithHonors', 'Passed')
then 'Passed'
else 'Failed'
end
```
Upvotes: 1 <issue_comment>username_2: You can put an `IN` anywhere you can put a Boolean Expression. One of those places is the `WHERE` clause, but a Boolean expression could almost be anywhere in a query. For example, in the `ON` clause (as @jarlh mentioned), or within the `SELECT`, `GROUP`, `HAVING` clauses within (for example) a `CASE` or `IIF` (but limited to those functions).
It can even go in an `IF` statement as part of your Logical Flow Operations. For example:
```
IF @i IN (1,2) BEGIN
SELECT TOP 1 * FROM MyTable;
END ELSE IF @i in (3,4) BEGIN
SELECT TOP 1 * FROM YourTable;
END
```
So, like I said at the start, an `IN` can go anywhere you can put a Boolean Expression.
Upvotes: 3 [selected_answer] |
2018/03/19 | 316 | 1,123 | <issue_start>username_0: I have a WebView into which I'm loading a web page. I want to allow the user to zoom in on the web page in a similar fashion as they would with the web browser.
Any ideas how to enable this?
Thanks in advance<issue_comment>username_1: Try this:
```
case when [Date] IS NOT NULL AND [Result] in ('PassedWithHonors', 'Passed')
then 'Passed'
else 'Failed'
end
```
Upvotes: 1 <issue_comment>username_2: You can put an `IN` anywhere you can put a Boolean Expression. One of those places is the `WHERE` clause, but a Boolean expression could almost be anywhere in a query. For example, in the `ON` clause (as @jarlh mentioned), or within the `SELECT`, `GROUP`, `HAVING` clauses within (for example) a `CASE` or `IIF` (but limited to those functions).
It can even go in an `IF` statement as part of your Logical Flow Operations. For example:
```
IF @i IN (1,2) BEGIN
SELECT TOP 1 * FROM MyTable;
END ELSE IF @i in (3,4) BEGIN
SELECT TOP 1 * FROM YourTable;
END
```
So, like I said at the start, an `IN` can go anywhere you can put a Boolean Expression.
Upvotes: 3 [selected_answer] |
2018/03/19 | 476 | 1,674 | <issue_start>username_0: I have table in which I store the evauation results of customer. Evaluation can be triggered multiple times. Below is the sample data
```
CUSTOMER_EVAL_RESULTS:
SEQ CUSTOMER_ID STATUS RESULT
1 100 C XYZ
3 100 C XYZ
7 100 C ABC
8 100 C PQR
11 100 C ABC
12 100 C ABC
```
From above data set I want only the rows with SEQ as 1,7,8,11.
I used below query suggested on other links but it is not giving the desired result. Please help
```
SELECT * FROM (
SELECT E.*, ROW_NUMBER() OVER(PARTITION BY CUSTOMER_ID, STATUS, RESULT ORDER BY SEQ) ROW_NUM
FROM CUSTOMER_EVAL_RESULTS E WHERE E.CUSTOMER_ID=100
) WHERE ROW_NUM=1;
```<issue_comment>username_1: You can utilize LAG to check the previous row's value:
```
SELECT *
FROM
(
SELECT E.*,
LAG(RESULT)
OVER(PARTITION BY CUSTOMER_ID, STATUS
ORDER BY SEQ) prevResult
FROM CUSTOMER_EVAL_RESULTS E
WHERE E.CUSTOMER_ID=100
)
WHERE prevResult IS NULL
OR prevResult <> RESULT
```
Upvotes: 2 <issue_comment>username_2: Please try the below
```
select * from CUSTOMER_EVAL_RESULTS
where not exists (select 1 from CUSTOMER_EVAL_RESULTS
a,CUSTOMER_EVAL_RESULTS b
where a.seq_no < b.seq_no and a.customer_id=b.customer_id
and a.status=b.status and a.result=b.result
and not exists(select 1 from CUSTOMER_EVAL_RESULTS c
where a.seq_no < c.seq_no and c.seq_no < b.seq_no ));
```
Upvotes: 0 |
2018/03/19 | 860 | 2,704 | <issue_start>username_0: I cannot debug the following code. I would like to update chart data (not add on top; delete current data and add completely new dataset). (Not)Working example on codepen:
<https://codepen.io/anon/pen/bvBxpr>
```
var config = {
type: 'line',
data: {
labels: ["January", "February", "March", "April", "May", "June", "July"],
datasets: [{
label: "My First dataset",
data: [65, 0, 80, 81, 56, 85, 40],
fill: false
}]
}
};
var ctx = document.getElementById("myChart").getContext("2d");
var myChart = new Chart(ctx, config);
labelsNew = ["Why", "u", "no", "work", "???"];
dataNew = [2, 4, 5, 6, 10];
function updateData(chart, label, data) {
removeData();
chart.data.labels.push(label);
chart.data.datasets.forEach((dataset) => {
dataset.data.push(data);
});
chart.update();
};
function removeData(chart) {
chart.data.labels.pop();
chart.data.datasets.forEach((dataset) => {
dataset.data.pop();
});
chart.update();
}
$('.button-container').on('click', 'button', updateData(myChart, labelsNew, dataNew));
```<issue_comment>username_1: I see 2 problems:
* in function `updateData()`, the chart argument is missing in the call; it should be `removeData(chart);`
* for the button click handler, simply use:

```
$("#btn").click(function() {
  updateData(myChart, labelsNew, dataNew);
});
```
Upvotes: 1 <issue_comment>username_2: I figured it out. This works:
```
function addData(chart, label, data) {
chart.data.labels = label
chart.data.datasets.forEach((dataset) => {
dataset.data = data;
});
chart.update();
}
$("#btn").click(function() {
addData (myChart, labelsNew, dataNew);
});
```
Instead of pushing the data (which appends to the existing data), the new data needs to be assigned with `=`.
Upvotes: 4 [selected_answer]<issue_comment>username_3: ```js
var config = {
type: 'line',
data: {
labels: ["January", "February", "March", "April", "May", "June", "July"],
datasets: [{
label: "My First dataset",
data: [65, 0, 80, 81, 56, 85, 40],
fill: false
}]
}
};
var ctx = document.getElementById("myChart").getContext("2d");
var myChart = new Chart(ctx, config);
labelsNew = ["Why", "u", "no", "work", "???"];
dataNew = [2, 4, 5, 6, 10];
function addData(chart, label, data) {
chart.data.labels = label
chart.data.datasets.forEach((dataset) => {
dataset.data = data;
});
chart.update();
}
function clickupdate(){
addData(myChart, labelsNew, dataNew);
}
```
```css
.chart-container {
height: 300px;
width: 500px;
position: relative;
}
canvas {
position: absolute;
}
```
```html
Change Data
```
Upvotes: 1 |
2018/03/19 | 659 | 2,367 | <issue_start>username_0: I created a simple test class for a form, `SearchForm.js`:
```
import React from 'react';
import ReactDOM from 'react-dom';
const formContainer = document.querySelector('.form-container')
class SeacrhForm extends React.Component {
constructor(props) {
super(props)
this.state = {
keywords: '',
city: '',
date: ''
}
this.handleChange = this.handleChange.bind(this)
this.handleSubmit = this.handleSubmit.bind(this)
}
render() {
return (
<h1>Say Hi!</h1>
)
}
}
//ReactDOM.render(<SeacrhForm />, formContainer)
```
It's my `App.js`
```
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
import './SearchForm.js';
class App extends React.Component {
render() {
return (
<div>Test</div>
);
}
}
export default App;
```
`index.js`
```
import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import registerServiceWorker from './registerServiceWorker';
import './bootstrap.min.css';
ReactDOM.render(<App />, document.getElementById('root'));
registerServiceWorker();
```
But I don't understand how to render my form in `App.js`.<issue_comment>username_1: Render `SearchForm` in App.js and import it like `import { SearchForm } from './SearchForm.js';`
```
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
import { SearchForm } from './SearchForm.js';
class App extends React.Component {
render() {
return (
<div>
  Test
  <SearchForm />
</div>
);
}
}
export default App;
```
and export it from the SeachForm file after correcting the typo,
```
class SearchForm extends React.Component {
constructor(props) {
super(props)
this.state = {
keywords: '',
city: '',
date: ''
}
this.handleChange = this.handleChange.bind(this)
this.handleSubmit = this.handleSubmit.bind(this)
}
render() {
return (
<h1>Say Hi!</h1>
)
}
}
export { SearchForm}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
import React, { Component } from 'react';
import logo from './logo.svg';
import './App.css';
import { SearchForm } from './SearchForm.js';
class App extends React.Component {
render() {
return (
  <SearchForm />
);
}
}
export default App;
```
Upvotes: 0 |
2018/03/19 | 158 | 648 | <issue_start>username_0: I want to prevent the autofocus on the first input element when opening the PrimeNG dialog.
```
```
Is there a possibility to prevent the autofocus?<issue_comment>username_1: Not sure if you're still looking for an answer, but there is a `focusOnShow` attribute for `p-dialog` which defaults to `true`. Setting it to `false` will stop the focus on the first button. Not sure which version of PrimeNG this was introduced in.
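A minimal sketch of that (the header text and the `display` binding are placeholder assumptions):

```
<p-dialog header="Details" [(visible)]="display" [focusOnShow]="false">
  <input type="text" />
</p-dialog>
```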
Upvotes: 3 <issue_comment>username_2: `focusOnShow` can be used only if you have a `p-dialog`. If you are using the dialog service, then there is no possibility to remove the autofocus, or at least I couldn't find one.
Upvotes: 0 |
2018/03/19 | 386 | 1,362 | <issue_start>username_0: I read about creating a variable for an increment counter in a select in this post: [select increment counter in mysql](https://stackoverflow.com/questions/13566695/select-increment-counter-in-mysql)
I need some addition to this query: I need to reset the increment based on `user_id`.
Sample data:
```
id user_id name
1 1 A
2 2 B
3 3 C
4 1 D
5 2 E
6 2 F
7 1 G
8 3 H
```
Expected result:
```
id user_id name increment
1 1 A 1
4 1 D 2
7 1 G 3
2 2 B 1
5 2 E 2
6 2 F 3
3 3 C 1
8 3 H 2
```
It doesn't stop at 3 increments; if there are more rows for a `user_id`, the increment continues.
How do I make a query so the output looks like that? Thanks!
2018/03/19 | 590 | 1,689 | <issue_start>username_0: For Example:
```
var arr = ["tag1, tag2"]
```
I want to have the above array in `JSON` format as follows:
```
var arr = [
{"name": "tag1"},
{"name": "tag2"}
]
```
Can someone please help me solve this problem?<issue_comment>username_1: ```
var obj = { "name":"John", "age":30, "city":"New York"};
var myJSON = JSON.stringify(obj);
```
that's it!
Upvotes: -1 <issue_comment>username_2: This should work:
```
var arr = ["tag1", "tag2"];
var json= [];
arr.forEach(e => json.push({name:e}));
```
Upvotes: 0 <issue_comment>username_3: Your input array should be as follows. Then you can use `array.map` to iterate over the values and create new objects:
```
var arr = ["tag1", "tag2"]
```
**DEMO**
```js
var arr = ["tag1", "tag2"];
var newList = arr.map(v => {
return Object.assign({}, { Name: v })
});
console.log(newList);
```
Upvotes: 0 <issue_comment>username_4: you could also do this on your array using `map` function:
```js
var objArray = ["tag1", "tag2"].map(el => {
return { name: el };
});
console.log(objArray);
```
if your problem is having 2 different arrays, as you mentioned in a comment above, and you want to remove duplicate tags, you could do this:
```js
var l1 = [
{'id': 2,'name':'tag1'},
{'id': 3, 'name':'tag2'},
{'id': 4, 'name':'tag3'}];
var l2 = [ "tag5", "tag2", "tag3", "tag6" ];
l2 = l2.map(el => {
return { name: el };
});
// concat both the arrays
var src = l1.concat(l2);
// remove duplicates
var temp = [];
const unique = src.filter(item => {
if (temp.indexOf(item.name) === -1) {
temp.push(item.name);
return item;
}
});
console.log(unique);
```
Upvotes: 0 |
2018/03/19 | 872 | 2,578 | <issue_start>username_0: Is there a way I can add weightage to specific fields in MongoDB without indexing?
Document Format:
```
{
"_id" : "55b3551164518e48",
"item" : "Item A",
"price": 400,
"added_date": "2015-07-22",
"seller": "Seller A"
},{
"_id" : "55b3551164518e49",
"item" : "Item A",
"price": 200,
"added_date":"2015-06-21",
"seller": "Seller B"
}
```
In a document like the one above, the find query should be working based on priority in the following order:
1. Item name
2. Price(lowest)
3. Added Date(Latest)
Find query with "Item A" should be working in a way to return the second entry first as it is the one with lower price although the first entry is latest. Is there a way to add weightage(importance) to field in search query without indexing.
I am looking for a weightage field search like in
```
db.blog.createIndex(
{
content: "text",
keywords: "text",
about: "text"
},
{
weights: {
content: 10,
keywords: 5
},
name: "TextIndex"
}
)
```
but without creating the index.
2018/03/19 | 942 | 2,452 | <issue_start>username_0: I tried to solve **problem set 16** in **projecteuler.net**. I am trying to find `2^1000` for which I had written a code in C with a function named `power()`.The problem here is that if i place `printf()` inside `power()` to get the result value, the answer is `10715086071862673209484250490600018105614048117055336074437503883703510511249361224931983788156958581275946729175531468251871452856923140435984577574698574803934567774824230985421074605062371141877954182153046474983581941267398767559165543946077062914571196477686542167660429831652624386837205668069376.000000` but if I use return value in `power()` and try to print the result from `main()` the answer is `-2147483648.000000`.
here is my code.
```
#include <stdio.h>

int power(double base, int ex)
{
    int i = 1;
    double final = 0;
    double ans = 1;
    while (i <= ex) {
        ans *= base;
        i++;
    }
    printf("%lf\n", ans); // return ans;
}

int main()
{
    double num;
    double result;
    int power_value;
    printf("enter the base value\n");
    scanf("%lf", &num);
    printf("enter the exponent value\n");
    scanf("%d", &power_value);
    power(num, power_value); //result = power(num, power_value)
    //printf("%lf", result);
}
```<issue_comment>username_1: The number you want to calculate is actually much greater than the standard types in C can handle. `long` on today's machines almost always has a width of 64 bits, so the maximum number that can be stored in it is **2^64 - 1** = 18,446,744,073,709,551,615, which is far less than 2^1000.
Also, you can't start a program with `double main() { }`; the specification requires `main` to return `int`.
You'll have to find some library that does that calculation for you.
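For instance, a minimal sketch using the GNU GMP library (my suggestion, not from the question; compile with `-lgmp`):

```
#include <gmp.h>
#include <stdio.h>

int main(void)
{
    mpz_t r;
    mpz_init(r);
    mpz_ui_pow_ui(r, 2, 1000); /* r = 2^1000 as an exact arbitrary-precision integer */
    gmp_printf("%Zd\n", r);
    mpz_clear(r);
    return 0;
}
```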
Upvotes: 2 <issue_comment>username_2: The reason that a different value gets printed inside `power` as opposed to `main` is that the return type of `power` is `int`. As a result, the value gets truncated to what will fit in an `int`.
Change the return type to `double` so that it matches the type of the variables you're using and you'll get the expected result.
```
double power(double base,int ex)
```
As others have mentioned, numbers this large are outside the range of an `int`, and *most* numbers this size can't be accurately represented in a `double`. In this particular case however, because the number you're calculating is a power of two it *can* be represented exactly. Had you chosen a base besides 2 (or a power of 2), the result would not be exact.
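A quick way to see this (assuming an IEEE-754 `double` and a correctly rounded `pow`; 2^1000 is roughly 1.07e301, well inside double's exponent range):

```
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* 2^1000 is a power of two within double's range, so it is represented exactly */
    printf("%.0f\n", pow(2.0, 1000.0));
    return 0;
}
```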
Upvotes: 3 [selected_answer] |
2018/03/19 | 628 | 2,027 | <issue_start>username_0: I am working with two object-type arrays in C#. One of my arrays is populated from MS Access and the other is being filled from MySQL.
```
object[] Product = [{123, "Tea"},{234, "Coffee"},{345, "Drinks"}]; // from MySql
object[] ProductDetails = [{123, "T", 23.00},{234, "C", 25.02},{345, "D", 11.88}]; // from MS Access
```
I need to loop through all of `ProductDetails` and match each entry by ID against the `Product` array; where the ID matches, the name from the `Product` array should replace the one in the `ProductDetails` array.
Or suggest any other approach to do this efficiently in C# code. **Records may number in the thousands**.<issue_comment>username_1: If you're using LINQ you could just join the two arrays on the ID (I am unsure if you have to convert them to `List()` first),
and then create a new array where you select what you want from each array.
```
var result = (from p in Products join pd in ProductDetails on p.Id equals pd.Id select new ...).ToList()
```
Upvotes: 0 <issue_comment>username_2: Here you see a good reason why you should not use multi-dimensional object arrays. Instead you should use a dictionary. I'd suggest implementing a custom class `Product`; the collection would then be a `Dictionary<int, Product>`, keyed by the product ID. This would be very efficient and readable.
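A rough sketch of that idea (the `Product` shape and the sample lookup are my assumptions, not from the question):

```
using System;
using System.Collections.Generic;

class Product
{
    public string Name;
    public double Price;
}

class Demo
{
    static void Main()
    {
        var products = new Dictionary<int, Product>
        {
            [123] = new Product { Name = "Tea",    Price = 23.00 },
            [234] = new Product { Name = "Coffee", Price = 25.02 },
            [345] = new Product { Name = "Drinks", Price = 11.88 }
        };

        // O(1) lookup by ID instead of scanning an object[,]
        if (products.TryGetValue(234, out var p))
            Console.WriteLine($"{p.Name}: {p.Price}");
    }
}
```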
However, to answer your question and show the mess:
```
object[,] Product = { { 123, "Tea" }, { 234, "Coffee" }, { 345, "Drinks" } }; // from MySql
object[,] ProductDetails = { { 123, "T", 23.00 }, { 234, "C", 25.02 }, { 345, "D", 11.88 } };
for (int k = 0; k < ProductDetails.GetLength(0); k++)
{
object id = ProductDetails[k, 0];
string longName = null;
bool idFound = false;
for (int l = 0; l < Product.GetLength(0); l++)
{
if (id.Equals(Product[l, 0])) // Equals necessary because boxing of int to object
{
idFound = true;
longName = Product[l, 1] as string;
break;
}
}
if (idFound)
{
ProductDetails[k, 1] = longName;
}
}
```
Upvotes: 3 [selected_answer] |
2018/03/19 | 1,605 | 5,578 | <issue_start>username_0: I refer to [Quick Start Guide - Leaflet - a JavaScript library for interactive maps](http://leafletjs.com/examples/quick-start/) to implement showing markers on the map.
I want to show the popups of all the markers, and if I click on the map, it should still keep the popups open.
The bottleneck is:
1. How to change the code to show multiple popups for the markers
2. How to keep the popups open if I click on the map
I have googled this but can't find a solution.
Can anybody help me?<issue_comment>username_1: Here's a modified version of Leaflet's quickstart tutorial.
It adds three markers with their own, individual popups and keeps the open:
```
var mymap = L.map('mapid').setView([51.505, -0.09], 13);
L.tileLayer('https://api.tiles.mapbox.com/v4/{id}/{z}/{x}/{y}.png?access_token=<KEY>', {
maxZoom: 18,
  attribution: 'Map data © <a href="http://openstreetmap.org">OpenStreetMap</a> contributors, ' +
    '<a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, ' +
    'Imagery © <a href="http://mapbox.com">Mapbox</a>',
id: 'mapbox.streets'
}).addTo(mymap);
var markers = [
{pos: [51.51, -0.10], popup: "This is the popup for marker #1"},
{pos: [51.50, -0.09], popup: "This is the popup for marker #2"},
{pos: [51.49, -0.08], popup: "This is the popup for marker #3"}];
markers.forEach(function (obj) {
var m = L.marker(obj.pos).addTo(mymap),
p = new L.Popup({ autoClose: false, closeOnClick: false })
.setContent(obj.popup)
.setLatLng(obj.pos);
m.bindPopup(p);
});
```
The key points are:
* each marker needs its own popup layer
* the popup layers need to be configured with `autoClose: false` (=> the popup is not closed when another popup is opened) and `closeOnClick: false` (=> the popup is not closed when the map is clicked).
Upvotes: 4 [selected_answer]<issue_comment>username_2: ```
var mymap = L.map('mapid').setView([51.505, -0.09], 13);
L.tileLayer('https://api.tiles.mapbox.com/v4/{id}/{z}/{x}/{y}.png?access_token=<KEY>', {
maxZoom: 18,
  attribution: 'Map data © <a href="http://openstreetmap.org">OpenStreetMap</a> contributors, ' +
    '<a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, ' +
    'Imagery © <a href="http://mapbox.com">Mapbox</a>',
id: 'mapbox.streets'
}).addTo(mymap);
var markers = [
{pos: [51.51, -0.10], popup: "This is the popup for marker #1"},
{pos: [51.50, -0.09], popup: "This is the popup for marker #2"},
{pos: [51.49, -0.08], popup: "This is the popup for marker #3"}];
markers.forEach(function (obj) {
var m = L.marker(obj.pos).addTo(mymap),
p = new L.Popup({ autoClose: false, closeOnClick: false })
.setContent(obj.popup)
.setLatLng(obj.pos);
m.bindPopup(p);
});
window.addEventListener('DOMContentLoaded', (event) => {
console.log('DOM fully loaded and parsed');
var allLocation = document.querySelectorAll('.leaflet-marker-pane img')
allLocation.forEach((location)=>{
location.click()
})
});
```
Upvotes: 1 <issue_comment>username_3: ```
var map = L.map('map', {
center: [11.8166, 122.0942],
zoom:8,
});
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '© <a href="https://www.openstreetmap.org/copyright">OpenStreetMap</a> contributors'
}).addTo(map);
var LeafIcon = L.Icon.extend({
options: {
// shadowUrl: 'images/marker-shadow.png',
iconSize: [25, 41],
shadowSize: [41, 41],
iconAnchor: [22, 41],
// shadowAnchor: [22, 40],//map shadow position change
popupAnchor: [-10, -44] // map popup position
}
});
var pinIcon = new LeafIcon({iconUrl: 'https://leafletjs.com/examples/custom-icons/leaf-green.png'});
// lat long and popup description
var locations = [
["Hello World!", 11.8166, 122.0942],
["Hello world! 2", 11.9804, 121.9189],
["Hello world!", 10.7202, 122.5621],
["Hello world!", 11.3889, 122.6277],
["Hello world!", 10.5929, 122.6325]
];
var latLngArr = []; // make empty array for long lat
// autoClose:false
// .openPopup() added for auto content opened
for (var i = 0; i < locations.length; i++) {
marker = new L.marker([locations[i][1], locations[i][2]],pinIcon)
.bindPopup(locations[i][0],{autoClose:false})
.addTo(map)
.openPopup();
latLngArr.push([locations[i][1], locations[i][2]]);
}
// map center
map.fitBounds(latLngArr);
```
Upvotes: 0 |
2018/03/19 | 1,217 | 4,144 | <issue_start>username_0: I just recently started programming in bash and came across GNU Parallel, which is exactly what I need for my project.
I have a basic loop script which is meant to loop through a list of IPs and request each one once. The list of IPs is constantly updated with new ones by another script.
For multithreading, I would like to use GNU Parallel.
My idea was to run 10 parallel instances, each capturing one IP from the list, inserting it into the curl command, and removing it from the list so the other instances won't pick it up.
```
#! /bin/bash
while true; do
while read -r ip; do
curl $ip >> result.txt
sed -i '1,1 d' ipslist
done < ipslist
done
```
I'm not sure what's the right way to run the bash script in this case; every solution I could find doesn't work properly and things get totally messy.
I have a feeling this can all be done in a single line, but for my own reasons I'd prefer to run it as a bash script.
I would be grateful for any help!<issue_comment>username_1: This works for me:
```
#!/bin/bash
while true; do
parallel -j10 curl '{}' < ipslist >> result.txt
done
```
If that's not what you intended, please update your question to clarify.
Upvotes: 1 <issue_comment>username_2: You can do this and it will work:
```
#! /bin/bash
while true; do
while read -r ip; do
curl $ip >> result.txt &
sed -i '1,1 d' iplist
done < ipslist
wait
done
```
Upvotes: -1 <issue_comment>username_3: I would just use `xargs`. Not many people seem to know this, but there's much more to it than the standard usage of just squeezing all lines of the input onto a single command line. That is, this:
```
echo -e "A\nB\nC\nD\nE" | xargs do_something
```
would essentially mean the same as this:
```
do_something A B C D E
```
However you can specify, how many lines are processed in one chunk, using the `-L` option:
```
echo -e "A\nB\nC\nD\nE" | xargs -L2 do_something
```
would translate to:
```
do_something A B
do_something C D
```
Additionally, you can also specify, how many of these chunks run in parallel, with the `-P` option. So to process the lines one-by-one, with a parallelism of, say 3, you would say:
```
echo -e "A\nB\nC\nD\nE" | xargs -L1 -P3 do_something
```
Et voilà, you have proper parallel execution, with basic unix tools.
The only catch is that you have to make sure you separate the outputs. I am not sure whether this has been thought of before, but a solution for the `curl` case is something like this:
```
cat url_list.txt | xargs -L1 -P10 curl -o paralell_#0.html
```
Where `#0` will be replaced by cURL with the URL being fetched. See the manuals for further details:
* <http://man7.org/linux/man-pages/man1/xargs.1.html>
* <https://curl.haxx.se/docs/manpage.html>
Upvotes: 0 <issue_comment>username_4: username_1's solution looks like the correct one for this particular situation. If, however, you need to do more than simply `curl`, then I will recommend making a function:
```
#! /bin/bash
doit() {
ip="$1"
curl "$ip"
echo do other stuff here
}
export -f doit
while true; do
parallel -j10 doit < ipslist >> result.txt
done
```
If you want to `ipslist` to be a queue so you can later add stuff to the queue and you only want it `curl`ed once:
```
tail -n+0 -f ipslist | parallel doit >> result.txt
```
Now you can later simply add stuff to ipslist and GNU Parallel will `curl` that, too.
(There is a small issue when using GNU parallel as a queue system/batch
manager: You have to submit JobSlot number of jobs before they will
start, and after that you can submit one at a time, and job will start
immediately if free slots are available. Output from the running or
completed jobs are held back and will only be printed when JobSlots more
jobs has been started (unless you use --ungroup or --line-buffer, in
which case the output from the jobs are printed immediately). E.g. if
you have 10 jobslots then the output from the first completed job will
only be printed when job 11 has started, and the output of second
completed job will only be printed when job 12 has started.)
Upvotes: 2 |
2018/03/19 | 1,683 | 5,578 | <issue_start>username_0: THIS IS THE ORIGINAL, EDITED IN MY NEXT ANSWER BELOW:
I have posted the original code below with very few changes. Ask me for changes or clarifications if I missed something.
What I want to do:
A scoring system that connects to a database and gives some choices to the user, e.g. age, education.
What I have done so far:
I connect to the database and echo values according to each choice. However, it is not convenient to write too many "if && if && if, then" statements.
What I want to improve:
It is much better to build a "foreach" statement, so I have a variable that gives points according to each answer: if age>20, 5 points; if age<20, 10 points.
Then: if education = highschool, 5 points. If education = university, 10 points.
Which would be the best way to build such a "foreach" statement?
```
<?php
// Get a db connection.
$db = JFactory::getDbo();
// Create a new query object.
$query = $db->getQuery(true);
// Select all records from the user profile table.
// Order it by the ordering field.
$query->select($db->quoteName(array('FieldValue')));
$query->from($db->quoteName('table'));
$query->where($db->quoteName('SubmissionId') . ' = '. $db->quote('2'));
$query->where($db->quoteName('FieldName') . ' = '. $db->quote('age'));
```
// (Extra, but for later: I currently have submission id = 2. It should become submission id = the same as the last user submitted).
```
// Reset the query using our newly populated query object.
$db->setQuery($query);
```
Rows 5 and 6 are age and education; more will be added, but I need to find a way to improve this after I fix the "foreach" statement.
```
$row = $db->loadObjectList();
echo nl2br("\n");
echo $row['5']->FieldValue;
echo nl2br("\n");
echo $row['6']->FieldValue;
```
// This is my statement so far, which I need to improve. Instead of echoing the value, I'd better assign variables to it.
```
echo nl2br("\n");
if($row['5']->FieldValue==">20" && $row['6']->FieldValue=="university" )
{
echo "15 points";
}
```
//should be 5+10 from the variables, not just echo value.
```
else if($row['5']->FieldValue==">20" && $row['6']->FieldValue=="high school" )
{
echo "10 points";
}
```
//should be 5+5 from the variables, not just echo value.
```
else
{
echo "not variables given";
}
echo nl2br("\n");
// Load the results as a list of stdClass objects (see later for more options on retrieving data).
$results = $db->loadObjectList();
?>
```
2018/03/19 | 1,050 | 3,498 | <issue_start>username_0: I have an issue with CSS and ng2-completer. I am trying to align the dropdown part and the input.
There is no example on the demo page of styling the elements with CSS, and when I try to select the class `.completer-dropdown-holder` I get no result.
[](https://i.stack.imgur.com/oioDT.png)
2018/03/19 | 1,218 | 5,008 | <issue_start>username_0: I am using a consumer group with just one consumer and just one broker (the docker wurstmeister image). It is decided in code whether to commit the offset or not: if the code returns an error, the message is not committed. I need to ensure that the system does not lose any message, even if that means retrying the same msg forever (for now ;) ). For testing this I have created a simple handler which does not commit the offset when an 'error' string is sent as the message to kafka. All other strings are committed.
```
kafka-console-producer --broker-list localhost:9092 --topic test
>this will be committed
```
Now running
```
kafka-run-class kafka.admin.ConsumerGroupCommand --bootstrap-server localhost:9092 --group michalgrupa --describe
```
returns
```
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
test 0 13 13 0
```
so that's OK, there is no lag. Now we pass an 'error' string to fake that something bad happened, and the message is not committed:
```
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
test 0 13 14 1
```
The current offset stays at the right position and there is 1 lagged message. Now if we pass a correct message again, the offset will move on to 15:
```
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG
test 0 15 15 0
```
and message number 14 will never be picked up again. Is this the default behaviour? Do I need to track the last offset and load the message at it+1 manually? I have set the commit interval to 0, hopefully to avoid any auto.commit mechanism.
fetch/commit code:
```
go func() {
for {
ctx := context.Background()
m, err := mr.brokerReader.FetchMessage(ctx)
if err != nil {
break
}
if err := msgFunc(m); err != nil {
log.Errorf("# messaging # cannot commit a message: %v", err)
continue
}
// commit message if no error
if err := mr.brokerReader.CommitMessages(ctx, m); err != nil {
// should we do something else to just logging not committed message?
log.Errorf("cannot commit message [%s] %v/%v: %s = %s; with error: %v", m.Topic, m.Partition, m.Offset, string(m.Key), string(m.Value), err)
}
}
}()
```
reader configuration:
```
kafkaReader := kafka.NewReader(kafka.ReaderConfig{
Brokers: brokers,
GroupID: groupID,
Topic: topic,
CommitInterval: 0,
MinBytes: 10e3,
MaxBytes: 10e6,
})
```
library used: <https://github.com/segmentio/kafka-go><issue_comment>username_1: In Kafka you just commit offsets, not single messages. If I understand your code right (I'm not a Go developer), you just continue after you hit an invalid message. If a valid message appears after an invalid one, you will commit the offset again; I guess that was not your intention.
Just to make clear what committing an offset means: your consumer group will store the offset in a dedicated internal Kafka topic (or, on older Kafka versions, in ZooKeeper). An offset identifies a single position within a topic (or, to be more precise, within a partition of a given topic). This means you can only consume a topic in a linear fashion.
Here you can see what happens on kafka-consumer side:
[](https://i.stack.imgur.com/32esG.png)
You are consuming from a (most likely multiple) stack(s) of messages. You commit the position (a.k.a. offset) on this topic/partition. So you can **not** say "I want to re-consume a specific message". What you can do is stop consuming once you hit an invalid message. In this case your problem will be: how do I get rid of this message? Deleting a single message from a Kafka topic is tricky. A common pattern is to write these messages to some kind of dead-letter topic and deal with them with a different consumer.
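A hedged sketch of that pattern with segmentio/kafka-go, slotting into the question's consume loop (the dead-letter topic name is an assumption):

```
// Hypothetical dead-letter writer, created once at startup.
dlq := kafka.NewWriter(kafka.WriterConfig{
    Brokers: brokers,
    Topic:   "mytopic-dead-letter",
})

// Inside the consume loop:
if err := msgFunc(m); err != nil {
    // Park the bad message instead of blocking the partition forever.
    if werr := dlq.WriteMessages(ctx, kafka.Message{Key: m.Key, Value: m.Value}); werr != nil {
        log.Errorf("dead-letter write failed: %v", werr)
        continue // offset stays uncommitted, so the message is retried later
    }
}
if err := mr.brokerReader.CommitMessages(ctx, m); err != nil {
    log.Errorf("cannot commit message: %v", err)
}
```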
Hope that made things a little bit clearer to you.
Upvotes: 2 <issue_comment>username_2: It looks like your Kafka consumer is set up to commit offsets automatically (that's the default setting).
If so, that's probably why your app skips over the erroneous message: despite the fact that you skip the `CommitMessages` invocation, the commit is performed on a background thread.
Please check out `enable.auto.commit` property specification in the docs: <https://kafka.apache.org/documentation/#newconsumerconfigs>
Upvotes: 0 <issue_comment>username_3: Here it makes sense to understand the concept of the consumer offset. A running consumer app stores the offset of consumed messages in memory, regardless of whether the offset is committed; when the consumer app restarts, it will retrieve the 'CURRENT-OFFSET' to continue consumption.
Upvotes: 1 |
2018/03/19 | 547 | 1,769 | <issue_start>username_0: I was wondering how I can store values in an array, because I can just `echo` `$row['Name']`, but if I want to store multiple values in an array I get nothing.
```
$conn = mysqli_connect($db_host, $db_user, $db_pass, $db_name);
if (!$conn) {
die ('Failed to connect to MySQL: ' . mysqli_connect_error());
}
$sql = "SELECT * FROM " . $dbname . " WHERE id ='$id'";
$query = mysqli_query($conn, $sql);
while ($row = mysqli_fetch_array($query)) {
$thisArray[] = array( $row['Name'],$row['accountNumber']);
}
echo $thisArray[0];
```<issue_comment>username_1: As you just want to load the results of a SQL statement into an array, you can use `mysqli_fetch_all()` and just fetch the columns you need.
When outputing the results, arrays should be output with something like `print_r()`...
```
$conn = mysqli_connect($db_host, $db_user, $db_pass, $db_name);
if (!$conn) {
die ('Failed to connect to MySQL: ' . mysqli_connect_error());
}
$sql = "SELECT Name,accountNumber FROM " . $dbname . " WHERE id ='$id'";
$query = mysqli_query($conn, $sql);
$thisArray = mysqli_fetch_all($query, MYSQLI_ASSOC);
print_r( $thisArray[0]);
```
**Update:**
If you want to re-arrange the array to key by one of the columns, `array_column()` can do this...
```
$thisArray = array_column($thisArray, 'accountNumber', 'Name');
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You have to initialize your array outside of the loop first, and when you output it, use `print_r()` on the whole array unless you need a specific key.
Based on your code above:
```
$thisArray = [];
while ($row = mysqli_fetch_array($query)) {
$thisArray[] = [
$row['Name'],
$row['accountNumber']
];
}
print_r($thisArray);
```
Upvotes: 0 |
2018/03/19 | 850 | 2,188 | <issue_start>username_0: I have a table (Table_Marks); it contains values as described below:
```
CLASS | STD | NAME | SUBJECT | PT1_M | PTB1_M | PT2_M | PTB2_M |
1 | ST1 | NITYA | ENG | 12 | 15 | 30 | 9 |
1 | ST1 | NITYA | HIN | 2 | 22 | 25 | 6 |
1 | ST1 | NITYA | MATH | 3 | 10 | 32 | 8 |
1 | ST2 | SHIV | ENG | AB | AB | 10 | 2 |
1 | ST2 | SHIV | HIN | 2 | 22 | 20 | 1 |
1 | ST2 | SHIV | MATH | 3 | 10 | AB | 5 |
```
Now I want to use this as a pivot table, as described below:
```
CLASS|STD |NAME |ENG |HIN |MATH |T1 |ENG |HIN |MATH |T2 |T1+T2|
|PT1|PB1 |PT1|PB1|PT1|PB1| |PT2|PB2|PT2|PB2|PT2|PB2|
1 |STD1|NITYA |12 |15 |2 |22 |3 |10 |64 |30 |9 |25 |6 |32 |8 |110 |174
1 |STD2|SHIV |AB |AB |2 |22 |3 |10 |37 |10 |2 |20 |1 |AB |5 |38 |75
```
Please help, anyone.
2018/03/19 | 1,654 | 5,720 | <issue_start>username_0: 
I have sheet named "raw" and I want to filter it using button function. in "raw" sheet, there this table which have random header. what I want to do is that when I click the button, then new sheet "filter" will be generate with table where the header is more organized.
I am able to create new sheet within button but generating organized table is harder. I want to ask is it possible to create this table? I am a VBA Learner and interest in learn more in VBA programming.
By the way, I have try to make table using
```
Dim Ws As Worksheet
Set Ws = ThisWorkbook.Sheets("Sheet_Name")
Ws.ListObjects.Add(xlSrcRange, Ws.Range("A$xx:$V$xx"), , xlYes).Name = "New_Table_Name"
Ws.ListObjects("New_Table_Name").TableStyle = "TableStyleLight1"
```
and still I cannot name the table's column headers.<issue_comment>username_1: Since you have been studying VBA for only 3 days, it is a really good idea to start using the Macro Recorder for tasks like this, at least to have a starting point. This is a simple example from the Macro Recorder:
```
Sub Makro1()
'
' Makro1 Makro
'
'
Cells.Clear
ActiveSheet.ListObjects.Add(xlSrcRange, Range("$A$1:$E$13"), , xlNo).Name = _
"Table1"
Range("Table1[#All]").Select
ActiveSheet.ListObjects("Table1").TableStyle = "TableStyleLight9"
Range("Table1[[#Headers],[Column1]]").Select
ActiveCell.FormulaR1C1 = "Header1"
Range("Table1[[#Headers],[Column2]]").Select
ActiveCell.FormulaR1C1 = "Second Header"
Range("Table1[[#Headers],[Column3]]").Select
ActiveCell.FormulaR1C1 = "Third Header"
Range("Table1[[#Headers],[Column4]]").Select
ActiveCell.FormulaR1C1 = "Forth Header"
Range("Table1[[#Headers],[Column5]]").Select
ActiveCell.FormulaR1C1 = "Fifth Header"
Range("A2").Select
End Sub
```
Play a bit, see how it works, use `F8`. After some time, you can look for a way to avoid `.Select` and `ActiveSheet`. This is an example that can be automated further with a loop, based on the number of header cells. However, it does not use `ActiveSheet` or `Select`:
```
Option Explicit
Sub TestMe()
Dim ws As Worksheet
Set ws = ThisWorkbook.Worksheets(1)
Dim tbl As ListObject
With ws
.Cells.Clear
.ListObjects.Add(xlSrcRange, .Range("A1:E10"), , xlNo).Name = "MyFirstTable"
Set tbl = .ListObjects(1)
tbl.HeaderRowRange.Cells(1, 1) = "SomeHeader1"
tbl.HeaderRowRange.Cells(1, 2) = "SomeHeader2"
tbl.HeaderRowRange.Cells(1, 3) = "SomeHeader3"
tbl.HeaderRowRange.Cells(1, 4) = "SomeHeader4"
tbl.HeaderRowRange.Cells(1, 5) = "SomeHeader5"
End With
End Sub
```
E.g., if you want to loop through the header and give it some values, then this is the content of the `With ws` block:
```
With ws
.Cells.Clear
.ListObjects.Add(xlSrcRange, .Range("A1:E10"), , xlNo).Name = "MyFirstTable"
Set tbl = .ListObjects(1)
Dim myCell As Range
For Each myCell In tbl.HeaderRowRange.Cells
myCell = "SomeHeader " & myCell.Column
Next myCell
End With
```
Upvotes: 0 <issue_comment>username_2: Create a new standard VBA module and paste the code below.
If Worksheets("Filter") already exists:
---
```
Option Explicit
Public Sub CopyTable() 'Worksheets("Filter") exists
Const TBL_ID = "New_Table_Name"
Dim ws1 As Worksheet, ws2 As Worksheet
Set ws1 = ThisWorkbook.Worksheets("Raw")
Set ws2 = ThisWorkbook.Worksheets("Filter")
Application.ScreenUpdating = False
ws1.ListObjects(1).Range.Copy
With ws2
.Cells(1).PasteSpecial Paste:=xlPasteAll
.Cells(1).PasteSpecial Paste:=xlPasteColumnWidths
.Cells(1).Select
.ListObjects(1).Name = TBL_ID
MoveTableCols ws2, TBL_ID 'calls 3rd Sub **************
End With
Application.ScreenUpdating = True
End Sub
```
---
This will create a new Worksheet called "Filter"
---
```
Public Sub CopyWs() 'Creates a new Worksheets("Filter")
Const TBL_ID = "New_Table_Name"
Dim ws1 As Worksheet, ws2 As Worksheet, wsCount As Long
Application.ScreenUpdating = False
With ThisWorkbook
Set ws1 = .Worksheets("Raw")
ws1.Copy After:=.Worksheets(.Worksheets.Count)
wsCount = .Worksheets.Count
Set ws2 = .Worksheets(wsCount)
End With
ws2.Name = "Filter"
ws2.ListObjects(1).Name = TBL_ID
MoveTableCols ws2, TBL_ID 'calls 3rd Sub **************
Application.ScreenUpdating = True
End Sub
```
---
The Sub bellow is called by both Subs above, and reorganizes the new table
---
```
'Called by CopyTable() and CopyWs() Subs
Private Sub MoveTableCols(ByRef ws As Worksheet, ByVal tblId As String)
Dim arr As Variant
With ws
.Rows(4).Delete Shift:=xlUp 'To delete rows based on criteria use Autofilter
.ListObjects(tblId).ListColumns.Add Position:=6
arr = .ListObjects(tblId).ListColumns(1).DataBodyRange
.ListObjects(tblId).ListColumns(6).DataBodyRange = arr
arr = .Cells(1)
.Columns(1).Delete Shift:=xlToLeft
.Cells(5) = arr
End With
End Sub
```
---
As username_1 mentioned, the Macro Recorder will generate the code for all your manual actions; you'll just need to improve it by removing all `Activate` and `Select` statements.
**Note**: A table cannot have 2 identical headers, so moving a column involves creating a new column, copying the data from the initial column, "remembering" the header name, deleting the initial column, and renaming the new column's header to the initial header name.
Upvotes: 2 [selected_answer] |
2018/03/19 | 1,178 | 4,556 | <issue_start>username_0: I am using Rendertron as a solution for server-side rendering; below is my index.js file. How and where do I execute index.js? I have set up my own instance of Rendertron using Docker on my server, and my Angular app build is in the dist folder. How do I render the complete HTML of my Angular app using Rendertron, and where do I execute index.js?
```
const express = require('express');
const fetch = require('node-fetch');
const url = require('url');
const app = express('');
const appUrl = 'http://xyzabc.com';
const renderUrl = 'http://pqr.com/render';
function generateUrl(request) {
return url.format({
protocol: request.protocol,
host: appUrl,
pathname: request.originalUrl
});
}
function detectBot(userAgent){
const bots = [
'bingbot',
'yandexbot',
'duckduckbot',
'slurp',
//Social
'twitterbot',
'facebookexternalhit',
'linkedinbot',
'embedly',
'pinterest',
'W3C_Validator'
]
const agent = userAgent.toLowerCase();
for (const bot of bots) {
if (agent.indexOf(bot) > -1) {
console.log('bot detected', bot, agent);
return true;
}
}
console.log('no bots found');
return false;
}
app.get('*', (req, res) =>{
const isBot = detectBot(req.headers['user-agent']);
if (isBot) {
const botUrl = generateUrl(req);
fetch(`${renderUrl}/${botUrl}`)
.then(res => res.text())
.then(body => {
res.set('Cache-Control', 'public, max-age=300, s-maxage=600');
res.set('Vary', 'User-Agent');
res.send(body.toString())
});
} else {
fetch(`https://${appUrl}/`)
.then(res => res.text())
.then(body => {
res.send(body.toString());
});
}
});
```<issue_comment>username_1: Is `http://pqr.com/render` your personal rendering server? If not, you have to forward the request to `https://render-tron.appspot.com/render` or deploy Rendertron separately yourself.
Also, right now you just assign the created Express instance to a constant (`const app = express('')`), configure it and export it for Firebase (which you don't use). Instead you have to run Express yourself on a Node.js server.
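For example, a minimal sketch of actually starting the server (the port number is an arbitrary assumption):

```
const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`Listening on port ${port}`));
```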
Upvotes: 0 <issue_comment>username_2: I'm using an Angular 6 app and I was facing the same issue. I did it without using an Express server or Firebase; instead I used NGINX to check the user-agent header and route requests to Rendertron if it's a bot, or to the Angular app if it's a normal user.
In case you want to take this approach using NGINX, use this configuration:
```
server {
server_name your-server-name;
root /path to your dist;
index index.html;
location ~ /\. {
deny all;
}
location / {
try_files $uri @prerender;
}
location @prerender {
set $prerender 0;
if ($http_user_agent ~* "googlebot|yahoo|bingbot|baiduspider|yandex|yeti|yodaobot|gigabot|ia_archiver|facebookexternalhit|twitterbot|developers\.google\.com") {
set $prerender 1;
}
if ($args ~ "_escaped_fragment_|prerender=1") {
set $prerender 1;
}
if ($http_user_agent ~ "Prerender") {
set $prerender 0;
}
if ($prerender = 1) {
rewrite .* /render/$scheme://$host$request_uri? break;
proxy_pass https://render-tron.appspot.com; #You can use our own hosted Rendertron
}
if ($prerender = 0) {
rewrite .* /index.html break;
}
}
}
```
And yes, you can now pre-render if it's a bot.
If you still want to do it using Express, Rendertron offers an Express middleware. You can check it out [here](https://github.com/GoogleChrome/rendertron/tree/master/middleware).
I found this NGINX configuration from [prerender.io](https://prerender.io/), you can find something useful for different server or any other approach in their [repo](https://github.com/prerender/prerender).
Upvotes: 2 <issue_comment>username_3: You can use an AWS server architecture. Once you have set up an AWS account, you can log in there and push all your code into the Rendertron folder. Please kindly follow this link:
<https://medium.com/@aakashbanerjee/deploying-rendertron-on-aws-ec2-8c00a4bb6b1e>
Upvotes: -1 |
2018/03/19 | 505 | 1,431 | <issue_start>username_0: I'm trying to print a number in binary with these two approaches:
Approach 1:
```
int input;
scanf("%d", &input);
for(int i = sizeof(int)*CHAR_BIT - 1; i >= 0; i--)
printf("%u",(input & (1<> i);
```
Approach 2:
```
int input;
scanf("%d", &input);
for(int i = sizeof(int)*CHAR_BIT - 1; i >= 0; i--)
(input & (1 << i)) ? printf("1") : printf("0");
```
Approach 2 works fine, but in Approach 1 the first "digit" that is printed is 4294967295 and I can't find the error.<issue_comment>username_1: Use an unsigned type, and watch out for the `i >= 0` comparison, because an unsigned type is always >= 0; it can never be negative.
```
unsigned int input;
scanf("%u", &input);
for(unsigned int i = sizeof(int)*8 - 1; ; i--) {
printf("%u",(input & (1u<> i);
if ( i==0 ) break ;
}
printf("\n") ;
```
Upvotes: 0 <issue_comment>username_2: It doesn't make sense to use signed numbers for bit shifts. When you shift data into the sign bit of the `int`, you invoke undefined behavior. Also be aware that the `1` literal is of type `int`.
Solve this by using unsigned types, and get rid of the na(t)ive C types at the same time in favour of stdint.h.
```
uint32_t input = 0x88776655;
for(uint32_t i=0; i<32; i++)
{
printf("%u", (input & (1u<<31-i)) >> 31-i);
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Solve this by casting to unsigned type before working with bits.
```
printf("%u", ((unsigned int)input & (1 << i)) >> i);
```
Upvotes: 0