date | nb_tokens | text_size | content
---|---|---|---
2018/03/22 | 9,845 | 31,972 | <issue_start>username_0: I have written the program (below) to:
* read a huge text file as `pandas dataframe`
* then `groupby` using a specific column value to split the data and store as list of dataframes.
* then pipe the data to `multiprocess Pool.map()` to process each dataframe in parallel.
Everything is fine, the program works well on my small test dataset. But, when I pipe in my large data (about 14 GB), the memory consumption exponentially increases and then freezes the computer or gets killed (in HPC cluster).
I have added codes to clear the memory as soon as the data/variable isn't useful. I am also closing the pool as soon as it is done. Still with 14 GB input I was only expecting 2\*14 GB memory burden, but it seems like lot is going on. I also tried to tweak using `chunkSize and maxTaskPerChild, etc` but I am not seeing any difference in optimization in both test vs. large file.
I think improvements to this code are required at this position in the code, where I start `multiprocessing`.
`p = Pool(3) # number of pool to run at once; default at 1
result = p.map(matrix_to_vcf, list(gen_matrix_df_list.values()))`
But I am posting the whole code.
**Test example:** I created a test file ("genome\_matrix\_final-chr1234-1mb.txt") of up to 250 MB and ran the program. When I check the system monitor I can see that memory consumption increased by about 6 GB. I am not clear why so much memory space is taken up by a 250 MB file plus some outputs. I have shared that file via Dropbox if it helps in seeing the real problem. <https://www.dropbox.com/sh/coihujii38t5prd/AABDXv8ACGIYczeMtzKBo0eea?dl=0>
Can someone suggest how I can get rid of the problem?
**My python script:**
```
#!/home/bin/python3

import pandas as pd
import collections
from multiprocessing import Pool
import io
import time
import resource

print()
print('Checking required modules')
print()


''' change this input file name and/or path as need be '''
genome_matrix_file = "genome_matrix_final-chr1n2-2mb.txt"   # test file 01
genome_matrix_file = "genome_matrix_final-chr1234-1mb.txt"  # test file 02
#genome_matrix_file = "genome_matrix_final.txt"             # large file


def main():
    with open("genome_matrix_header.txt") as header:
        header = header.read().rstrip('\n').split('\t')
        print()

    time01 = time.time()
    print('starting time: ', time01)

    '''load the genome matrix file onto pandas as dataframe.
    This makes is more easy for multiprocessing'''
    gen_matrix_df = pd.read_csv(genome_matrix_file, sep='\t', names=header)

    # now, group the dataframe by chromosome/contig - so it can be multiprocessed
    gen_matrix_df = gen_matrix_df.groupby('CHROM')

    # store the splitted dataframes as list of key, values(pandas dataframe) pairs
    # this list of dataframe will be used while multiprocessing
    gen_matrix_df_list = collections.OrderedDict()
    for chr_, data in gen_matrix_df:
        gen_matrix_df_list[chr_] = data

    # clear memory
    del gen_matrix_df

    '''Now, pipe each dataframe from the list using map.Pool() '''
    p = Pool(3)  # number of pool to run at once; default at 1
    result = p.map(matrix_to_vcf, list(gen_matrix_df_list.values()))

    del gen_matrix_df_list  # clear memory

    p.close()
    p.join()

    # concat the results from pool.map() and write it to a file
    result_merged = pd.concat(result)
    del result  # clear memory
    pd.DataFrame.to_csv(result_merged, "matrix_to_haplotype-chr1n2.txt", sep='\t', header=True, index=False)

    print()
    print('completed all process in "%s" sec. ' % (time.time() - time01))
    print('Global maximum memory usage: %.2f (mb)' % current_mem_usage())
    print()


'''function to convert the dataframe from genome matrix to desired output '''
def matrix_to_vcf(matrix_df):

    print()
    time02 = time.time()

    # index position of the samples in genome matrix file
    sample_idx = [{'10a': 33, '10b': 18}, {'13a': 3, '13b': 19},
                  {'14a': 20, '14b': 4}, {'16a': 5, '16b': 21},
                  {'17a': 6, '17b': 22}, {'23a': 7, '23b': 23},
                  {'24a': 8, '24b': 24}, {'25a': 25, '25b': 9},
                  {'26a': 10, '26b': 26}, {'34a': 11, '34b': 27},
                  {'35a': 12, '35b': 28}, {'37a': 13, '37b': 29},
                  {'38a': 14, '38b': 30}, {'3a': 31, '3b': 15},
                  {'8a': 32, '8b': 17}]

    # sample index stored as ordered dictionary
    sample_idx_ord_list = []
    for ids in sample_idx:
        ids = collections.OrderedDict(sorted(ids.items()))
        sample_idx_ord_list.append(ids)

    # for haplotype file
    header = ['contig', 'pos', 'ref', 'alt']

    # adding some suffixes "PI" to available sample names
    for item in sample_idx_ord_list:
        ks_update = ''
        for ks in item.keys():
            ks_update += ks
        header.append(ks_update+'_PI')
        header.append(ks_update+'_PG_al')

    #final variable store the haplotype data
    # write the header lines first
    haplotype_output = '\t'.join(header) + '\n'

    # to store the value of parsed the line and update the "PI", "PG" value for each sample
    updated_line = ''

    # read the piped in data back to text like file
    matrix_df = pd.DataFrame.to_csv(matrix_df, sep='\t', index=False)
    matrix_df = matrix_df.rstrip('\n').split('\n')

    for line in matrix_df:
        if line.startswith('CHROM'):
            continue

        line_split = line.split('\t')
        chr_ = line_split[0]
        ref = line_split[2]
        alt = list(set(line_split[3:]))

        # remove the alleles "N" missing and "ref" from the alt-alleles
        alt_up = list(filter(lambda x: x!='N' and x!=ref, alt))

        # if no alt alleles are found, just continue
        # - i.e : don't write that line in output file
        if len(alt_up) == 0:
            continue

        #print('\nMining data for chromosome/contig "%s" ' %(chr_ ))
        #so, we have data for CHR, POS, REF, ALT so far
        # now, we mine phased genotype for each sample pair (as "PG_al", and also add "PI" tag)
        sample_data_for_vcf = []
        for ids in sample_idx_ord_list:
            sample_data = []
            for key, val in ids.items():
                sample_value = line_split[val]
                sample_data.append(sample_value)

            # now, update the phased state for each sample
            # also replacing the missing allele i.e "N" and "-" with ref-allele
            sample_data = ('|'.join(sample_data)).replace('N', ref).replace('-', ref)
            sample_data_for_vcf.append(str(chr_))
            sample_data_for_vcf.append(sample_data)

        # add data for all the samples in that line, append it with former columns (chrom, pos ..) ..
        # and .. write it to final haplotype file
        sample_data_for_vcf = '\t'.join(sample_data_for_vcf)
        updated_line = '\t'.join(line_split[0:3]) + '\t' + ','.join(alt_up) + \
            '\t' + sample_data_for_vcf + '\n'
        haplotype_output += updated_line

    del matrix_df  # clear memory
    print('completed haplotype preparation for chromosome/contig "%s" '
          'in "%s" sec. ' %(chr_, time.time()-time02))
    print('\tWorker maximum memory usage: %.2f (mb)' %(current_mem_usage()))

    # return the data back to the pool
    return pd.read_csv(io.StringIO(haplotype_output), sep='\t')


''' to monitor memory '''
def current_mem_usage():
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.


if __name__ == '__main__':
    main()
```
**Update for bounty hunters:**
I have achieved multiprocessing using `Pool.map()`, but the code is causing a big memory burden (input test file ~300 MB, but the memory burden is about 6 GB). I was only expecting a 3\*300 MB memory burden at most.
* Can somebody explain what is causing such a huge memory requirement for such a small file and such a short computation?
* Also, I am trying to take the answer and use it to improve multiprocessing in my large program. So, the addition of any method or module that doesn't change the structure of the computation part (the CPU-bound process) too much should be fine.
* I have included two test files to play with the code.
* The attached code is the full code, so it should work as intended as-is when copied and pasted. Any changes should be used only to improve optimization in the multiprocessing steps.<issue_comment>username_1: I had the same issue. I needed to process a huge text corpus while keeping a knowledge base of a few DataFrames of millions of rows loaded in memory. I think this issue is common, so I will keep my answer oriented towards general purposes.
A **combination** of settings solved the problem for me (1 & 3 & 5 only might do it for you):
1. Use `Pool.imap` (or `imap_unordered`) instead of `Pool.map`. This will iterate over the data lazily rather than loading all of it into memory before processing starts.
2. Set a value for the `chunksize` parameter. This will make `imap` faster too.
3. Set a value for the `maxtasksperchild` parameter.
4. Append output to disk rather than keeping it in memory, either immediately or every once in a while when it reaches a certain size.
5. Run the code in different batches. You can use [itertools.islice](https://docs.python.org/3/library/itertools.html#itertools.islice) if you have an iterator. The idea is to split your `list(gen_matrix_df_list.values())` into three or more lists, then pass only the first third to `map` or `imap`, then the second third in another run, etc. Since you have a list you can simply slice it in the same line of code. A minimal sketch combining several of these points follows.
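Here is a minimal sketch of how points 1–3 and 5 might be combined; `matrix_to_vcf` and `gen_matrix_df_list` are the names from the question, and the pool size and batch size are arbitrary choices:
```
import itertools
from multiprocessing import Pool

def run_in_batches(func, items, batch_size=3):
    """Feed items to a worker pool in small batches, collecting the results."""
    results = []
    it = iter(items)
    while True:
        batch = list(itertools.islice(it, batch_size))
        if not batch:
            break
        # maxtasksperchild recycles each worker after one task, returning its
        # memory to the OS; imap consumes the batch lazily instead of copying it
        with Pool(processes=3, maxtasksperchild=1) as p:
            results.extend(p.imap_unordered(func, batch, chunksize=1))
    return results

# result = run_in_batches(matrix_to_vcf, gen_matrix_df_list.values())
```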
Upvotes: 3 <issue_comment>username_2: When you use `multiprocessing.Pool` a number of child processes will be created using the `fork()` system call. Each of those processes start off with an exact copy of the memory of the parent process at that time. Because you're loading the csv before you create the `Pool` of size 3, each of those 3 processes in the pool will unnecessarily have a copy of the data frame. (`gen_matrix_df` as well as `gen_matrix_df_list` will exist in the current process as well as in each of the 3 child processes, so 4 copies of each of these structures will be in memory)
Try creating the `Pool` before loading the file (at the very beginning, actually). That should reduce the memory usage.
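A minimal sketch of that reordering, using the names from the question (only the relevant part of `main` is shown):
```
def main():
    with open("genome_matrix_header.txt") as f:
        header = f.read().rstrip('\n').split('\t')

    # create the worker processes while the parent is still small, so each
    # fork()ed child starts from a tiny memory image
    p = Pool(3)

    gen_matrix_df = pd.read_csv(genome_matrix_file, sep='\t', names=header)
    gen_matrix_df_list = {chr_: df for chr_, df in gen_matrix_df.groupby('CHROM')}
    del gen_matrix_df

    result = p.map(matrix_to_vcf, list(gen_matrix_df_list.values()))
    p.close()
    p.join()
```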
If it's still too high, you can:
1. Dump gen\_matrix\_df\_list to a file, one pickled item after another, e.g.:
```
import pickle

with open('tempfile.pkl', 'wb') as f:
    for item in gen_matrix_df_list.items():
        pickle.dump(item, f)
```
2. Use `Pool.imap()` on a generator that reads the items back from this file one at a time, e.g.:
```
import pickle

def pickled_items(path):
    with open(path, 'rb') as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return

result = p.imap(matrix_to_vcf, pickled_items('tempfile.pkl'))
```
(Note that `matrix_to_vcf` takes a `(key, value)` tuple in the example above, not just a value.)
I hope that helps.
NB: I haven't tested the code above. It's only meant to demonstrate the idea.
Upvotes: 4 <issue_comment>username_3: **GENERAL ANSWER ABOUT MEMORY WITH MULTIPROCESSING**
You asked: "What is causing so much memory to be allocated?" The answer has two parts.
*First*, as you already noticed, **each `multiprocessing` worker gets its own copy of the data** (quoted [from here](https://pythonhosted.org/joblib/parallel.html)), so you should chunk large arguments. Or, for large files, read them in a little bit at a time, if possible.
>
> By default the workers of the pool are real Python processes forked
> using the multiprocessing module of the Python standard library when
> n\_jobs != 1. The arguments passed as input to the Parallel call are
> serialized and reallocated in the memory of each worker process.
>
>
> This can be problematic for large arguments as they will be
> reallocated n\_jobs times by the workers.
>
>
>
*Second*, if you're trying to reclaim memory, you need to understand that Python works differently than other languages, and **you are relying on [del to release the memory when it doesn't](http://effbot.org/pyfaq/why-doesnt-python-release-the-memory-when-i-delete-a-large-object.htm)**. I don't know if it's best, but in my own code, I've overcome this by reassigning the variable to None or an empty object.
**FOR YOUR SPECIFIC EXAMPLE - MINIMAL CODE EDITING**
As long as you can fit your large data in memory *twice*, I think you can do what you are trying to do by changing just a single line. I've written very similar code and it worked for me when I reassigned the variable (rather than calling del or any kind of garbage collection). If this doesn't work, you may need to follow the suggestions above and use disk I/O:
```
#### earlier code all the same
# clear memory by reassignment (not del or gc)
gen_matrix_df = {}
'''Now, pipe each dataframe from the list using map.Pool() '''
p = Pool(3) # number of pool to run at once; default at 1
result = p.map(matrix_to_vcf, list(gen_matrix_df_list.values()))
#del gen_matrix_df_list # I suspect you don't even need this, memory will free when the pool is closed
p.close()
p.join()
#### later code all the same
```
**FOR YOUR SPECIFIC EXAMPLE - OPTIMAL MEMORY USAGE**
As long as you can fit your large data in memory *once*, and you have some idea of how big your file is, you can use Pandas [read\_csv](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) partial file reading, either to read in [only `nrows` at a time](https://stackoverflow.com/questions/23853553/python-pandas-how-to-read-only-first-n-rows-of-csv-files-in) if you really want to micro-manage how much data is read in, or [a fixed amount of memory at a time using `chunksize`, which returns an iterator](https://stackoverflow.com/questions/25962114/how-to-read-a-6-gb-csv-file-with-pandas). The `nrows` parameter is just a single read: you might use it to peek at a file, or if for some reason you want each part to have exactly the same number of rows (which is not the same amount of memory if, for example, some of your data is strings of variable length). For prepping a file for multiprocessing, though, chunks are far easier to use, because chunk size relates directly to memory, which is your concern: it is easier to use trial and error to fit into memory with a specific chunk size than with a number of rows, whose memory usage depends on how much data is in the rows. The only other difficult part is that, for an application-specific reason, you are grouping some rows, which makes it a little bit more complicated. Using your code as an example:
```
'''load the genome matrix file onto pandas as dataframe.
This makes is more easy for multiprocessing'''

# store the splitted dataframes as list of key, values(pandas dataframe) pairs
# this list of dataframe will be used while multiprocessing
#not sure why you need the ordered dict here, might add memory overhead
#gen_matrix_df_list = collections.OrderedDict()
#a defaultdict won't throw an exception when we try to append to it the first time. if you don't want a default dict for some reason, you have to initialize each entry you care about.
gen_matrix_df_list = collections.defaultdict(list)
chunksize = 10 ** 6

for chunk in pd.read_csv(genome_matrix_file, sep='\t', names=header, chunksize=chunksize):
    # now, group the dataframe by chromosome/contig - so it can be multiprocessed
    gen_matrix_df = chunk.groupby('CHROM')
    for chr_, data in gen_matrix_df:
        gen_matrix_df_list[chr_].append(data)

'''Having sorted chunks on read to a list of df, now create single data frames for each chr_'''
#The dict contains a list of small df objects, so now concatenate them
#by reassigning to the same dict, the memory footprint is not increasing
for chr_ in gen_matrix_df_list.keys():
    gen_matrix_df_list[chr_] = pd.concat(gen_matrix_df_list[chr_])

'''Now, pipe each dataframe from the list using map.Pool() '''
p = Pool(3)  # number of pool to run at once; default at 1
result = p.map(matrix_to_vcf, list(gen_matrix_df_list.values()))
p.close()
p.join()
```
Upvotes: 2 <issue_comment>username_4: Prerequisite
============
1. In Python (in the following I use 64-bit build of Python 3.6.5) everything is an object. This has its overhead and with [`getsizeof`](https://docs.python.org/3/library/sys.html#sys.getsizeof) we can see exactly the size of an object in bytes:
```
>>> import sys
>>> sys.getsizeof(42)
28
>>> sys.getsizeof('T')
50
```
2. When the fork system call is used (the default on \*nix, see `multiprocessing.get_start_method()`) to create a child process, the parent's physical memory is not copied and the [copy-on-write](https://en.wikipedia.org/wiki/Copy-on-write) technique is used.
3. A forked child process will still report the full RSS (resident set size) of the parent process. Because of this fact, [PSS](https://en.wikipedia.org/wiki/Proportional_set_size) (proportional set size) is a more appropriate metric to estimate the memory usage of a forking application. Here's an example from the page:
>
> * Process A has 50 KiB of unshared memory
> * Process B has 300 KiB of unshared memory
> * Both process A and process B have 100 KiB of the same shared memory region
>
>
> Since the PSS is defined as the sum of the unshared memory of a process and the proportion of memory shared with other processes, the PSS for these two processes are as follows:
>
>
> * PSS of process A = 50 KiB + (100 KiB / 2) = 100 KiB
> * PSS of process B = 300 KiB + (100 KiB / 2) = 350 KiB
>
>
>
The data frame
==============
Now let's look at your `DataFrame` alone. [`memory_profiler`](https://pypi.python.org/pypi/memory_profiler) will help us.
*justpd.py*
```
#!/usr/bin/env python3

import pandas as pd
from memory_profiler import profile


@profile
def main():
    with open('genome_matrix_header.txt') as header:
        header = header.read().rstrip('\n').split('\t')

    gen_matrix_df = pd.read_csv(
        'genome_matrix_final-chr1234-1mb.txt', sep='\t', names=header)

    gen_matrix_df.info()
    gen_matrix_df.info(memory_usage='deep')

if __name__ == '__main__':
    main()
```
Now let's use the profiler:
```
mprof run justpd.py
mprof plot
```
We can see the plot:
[mprof memory-usage plot](https://i.stack.imgur.com/xYt0h.png)
and line-by-line trace:
```
Line # Mem usage Increment Line Contents
================================================
6 54.3 MiB 54.3 MiB @profile
7 def main():
8 54.3 MiB 0.0 MiB with open('genome_matrix_header.txt') as header:
9 54.3 MiB 0.0 MiB header = header.read().rstrip('\n').split('\t')
10
11 2072.0 MiB 2017.7 MiB gen_matrix_df = pd.read_csv('genome_matrix_final-chr1234-1mb.txt', sep='\t', names=header)
12
13 2072.0 MiB 0.0 MiB gen_matrix_df.info()
14 2072.0 MiB 0.0 MiB gen_matrix_df.info(memory_usage='deep')
```
We can see that the data frame takes ~2 GiB with peak at ~3 GiB while it's being built. What's more interesting is the output of [`info`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html).
```
RangeIndex: 4000000 entries, 0 to 3999999
Data columns (total 34 columns):
...
dtypes: int64(2), object(32)
memory usage: 1.0+ GB
```
But `info(memory_usage='deep')` ("deep" means introspection of the data deeply by interrogating `object` `dtype`s, see below) gives:
```
memory usage: 7.9 GB
```
Huh?! Looking outside of the process, we can make sure that `memory_profiler`'s figures are correct. `sys.getsizeof` also shows the same value for the frame (most probably because of a custom `__sizeof__`), and so will other tools that use it to estimate the size of allocated objects walked via `gc.get_objects()`, e.g. [`pympler`](https://pypi.python.org/pypi/Pympler).
```
# added after read_csv
from pympler import tracker
tr = tracker.SummaryTracker()
tr.print_diff()
```
Gives:
```
types | # objects | total size
================================================== | =========== | ============
```
So where do these 7.93 GiB come from? Let's try to explain this. We have 4M rows and 34 columns, which gives us 134M values. They are either `int64` or `object` (which is a 64-bit pointer; see [using pandas with large data](https://www.dataquest.io/blog/pandas-big-data/) for detailed explanation). Thus we have `134 * 10 ** 6 * 8 / 2 ** 20` ~1022 MiB only for values in the data frame. What about the remaining ~ 6.93 GiB?
String interning
================
To understand the behaviour it's necessary to know that Python does string interning. There are two good articles ([one](http://guilload.com/python-string-interning/), [two](https://www.laurentluce.com/posts/python-string-objects-implementation/)) about string interning in Python 2. Besides the Unicode change in Python 3 and [PEP 393](https://www.python.org/dev/peps/pep-0393/) in Python 3.3 the C-structures have changed, but the idea is the same. Basically, every short string that looks like an identifier will be cached by Python in an internal dictionary and references will point to the same Python objects. In other word we can say it behaves like a singleton. Articles that I mentioned above explain what significant memory profile and performance improvements it gives. We can check if a string is interned using [`interned`](https://github.com/python/cpython/blob/7ed7aead/Include/unicodeobject.h#L283) field of `PyASCIIObject`:
```
import ctypes


class PyASCIIObject(ctypes.Structure):
    _fields_ = [
        ('ob_refcnt', ctypes.c_size_t),
        ('ob_type', ctypes.py_object),
        ('length', ctypes.c_ssize_t),
        ('hash', ctypes.c_int64),
        ('state', ctypes.c_int32),
        ('wstr', ctypes.c_wchar_p)
    ]
```
Then:
```
>>> a = 'name'
>>> b = '!@#$'
>>> a_struct = PyASCIIObject.from_address(id(a))
>>> a_struct.state & 0b11
1
>>> b_struct = PyASCIIObject.from_address(id(b))
>>> b_struct.state & 0b11
0
```
With two strings we can also do an identity comparison (an address-in-memory comparison in the case of CPython).
```
>>> a = 'foo'
>>> b = 'foo'
>>> a is b
True
>>> gen_matrix_df.REF[0] is gen_matrix_df.REF[6]
True
```
Because of that fact, in regard to the `object` `dtype`, the data frame allocates at most 20 strings (one per amino acid). It's worth noting, though, that Pandas recommends [categorical types](https://www.dataquest.io/blog/pandas-big-data/#optimizingobjecttypesusingcategoricals) for enumerations.
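For instance, a minimal sketch of that recommendation (the column selection is illustrative, not taken from the question):
```
# convert repetitive string columns to pandas' categorical dtype; each distinct
# string is then stored once and referenced by a small integer code per row
for col in gen_matrix_df.select_dtypes(include=['object']).columns:
    gen_matrix_df[col] = gen_matrix_df[col].astype('category')

gen_matrix_df.info(memory_usage='deep')  # should report a much smaller footprint
```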
Pandas memory
=============
Thus we can explain the naive estimate of 7.93 GiB like:
```
>>> rows = 4 * 10 ** 6
>>> int_cols = 2
>>> str_cols = 32
>>> int_size = 8
>>> str_size = 58
>>> ptr_size = 8
>>> (int_cols * int_size + str_cols * (str_size + ptr_size)) * rows / 2 ** 30
7.927417755126953
```
Note that `str_size` is 58 bytes, not 50 as we've seen above for 1-character literal. It's because PEP 393 defines compact and non-compact strings. You can check it with `sys.getsizeof(gen_matrix_df.REF[0])`.
Actual memory consumption should be ~1 GiB, as reported by `gen_matrix_df.info()`, yet it is twice as much. We can assume it has something to do with memory (pre)allocation done by Pandas or NumPy. The following experiment shows that this is not without reason (multiple runs show the same picture):
```
Line # Mem usage Increment Line Contents
================================================
8 53.1 MiB 53.1 MiB @profile
9 def main():
10 53.1 MiB 0.0 MiB with open("genome_matrix_header.txt") as header:
11 53.1 MiB 0.0 MiB header = header.read().rstrip('\n').split('\t')
12
13 2070.9 MiB 2017.8 MiB gen_matrix_df = pd.read_csv('genome_matrix_final-chr1234-1mb.txt', sep='\t', names=header)
14 2071.2 MiB 0.4 MiB gen_matrix_df = gen_matrix_df.drop(columns=[gen_matrix_df.keys()[0]])
15 2071.2 MiB 0.0 MiB gen_matrix_df = gen_matrix_df.drop(columns=[gen_matrix_df.keys()[0]])
16 2040.7 MiB -30.5 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
...
23 1827.1 MiB -30.5 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
24 1094.7 MiB -732.4 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
25 1765.9 MiB 671.3 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
26 1094.7 MiB -671.3 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
27 1704.8 MiB 610.2 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
28 1094.7 MiB -610.2 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
29 1643.9 MiB 549.2 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
30 1094.7 MiB -549.2 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
31 1582.8 MiB 488.1 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
32 1094.7 MiB -488.1 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
33 1521.9 MiB 427.2 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
34 1094.7 MiB -427.2 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
35 1460.8 MiB 366.1 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
36 1094.7 MiB -366.1 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
37 1094.7 MiB 0.0 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
...
47 1094.7 MiB 0.0 MiB gen_matrix_df = gen_matrix_df.drop(columns=[random.choice(gen_matrix_df.keys())])
```
I want to finish this section with a quote from a [fresh article about design issues and future Pandas2](http://wesmckinney.com/blog/apache-arrow-pandas-internals/) by the original author of Pandas.
>
> pandas rule of thumb: have 5 to 10 times as much RAM as the size of your dataset
>
>
>
Process tree
============
Let's finally come to the pool and see if we can make use of copy-on-write. We'll use [`smemstat`](http://manpages.ubuntu.com/manpages/artful/man8/smemstat.8.html) (available from an Ubuntu repository) to estimate process-group memory sharing and [`glances`](https://pypi.python.org/pypi/Glances) to record system-wide free memory. Both can write JSON.
We'll run original script with `Pool(2)`. We'll need 3 terminal windows.
1. `smemstat -l -m -p "python3.6 script.py" -o smemstat.json 1`
2. `glances -t 1 --export-json glances.json`
3. `mprof run -M script.py`
Then `mprof plot` produces:
[mprof plot of the original script run with Pool(2)](https://i.stack.imgur.com/6uWZx.png)
The sum chart (`mprof run --nopython --include-children ./script.py`) looks like:
[sum chart of RSS, including children](https://i.stack.imgur.com/G19sU.png)
Note that the two charts above show RSS. The hypothesis is that because of copy-on-write it doesn't reflect actual memory usage. Now we have two JSON files from `smemstat` and `glances`. I'll use the following script to convert the JSON files to CSV.
```
#!/usr/bin/env python3

import csv
import sys
import json

def smemstat():
    with open('smemstat.json') as f:
        smem = json.load(f)

    rows = []
    fieldnames = set()
    for s in smem['smemstat']['periodic-samples']:
        row = {}
        for ps in s['smem-per-process']:
            if 'script.py' in ps['command']:
                for k in ('uss', 'pss', 'rss'):
                    row['{}-{}'.format(ps['pid'], k)] = ps[k] // 2 ** 20
        # smemstat produces empty samples, backfill from previous
        if rows:
            for k, v in rows[-1].items():
                row.setdefault(k, v)
        rows.append(row)
        fieldnames.update(row.keys())

    with open('smemstat.csv', 'w') as out:
        dw = csv.DictWriter(out, fieldnames=sorted(fieldnames))
        dw.writeheader()
        list(map(dw.writerow, rows))

def glances():
    rows = []
    fieldnames = ['available', 'used', 'cached', 'mem_careful', 'percent',
                  'free', 'mem_critical', 'inactive', 'shared', 'history_size',
                  'mem_warning', 'total', 'active', 'buffers']
    with open('glances.csv', 'w') as out:
        dw = csv.DictWriter(out, fieldnames=fieldnames)
        dw.writeheader()
        with open('glances.json') as f:
            for l in f:
                d = json.loads(l)
                dw.writerow(d['mem'])

if __name__ == '__main__':
    globals()[sys.argv[1]]()
```
First let's look at `free` memory.
[free memory over time](https://i.stack.imgur.com/rdHyP.png)
The difference between first and minimum is ~4.15 GiB. And here is how PSS figures look like:
[PSS per process](https://i.stack.imgur.com/8H2OP.png)
And the sum:
[sum of PSS](https://i.stack.imgur.com/KCwzl.png)
Thus we can see that because of copy-on-write actual memory consumption is ~4.15 GiB. But we're still serialising data to send it to worker processes via `Pool.map`. Can we leverage copy-on-write here as well?
Shared data
===========
To use copy-on-write we need to have the `list(gen_matrix_df_list.values())` be accessible globally so the worker after fork can still read it.
1. Let's modify code after `del gen_matrix_df` in `main` like the following:
```
...
global global_gen_matrix_df_values
global_gen_matrix_df_values = list(gen_matrix_df_list.values())
del gen_matrix_df_list
p = Pool(2)
result = p.map(matrix_to_vcf, range(len(global_gen_matrix_df_values)))
...
```
2. Remove `del gen_matrix_df_list` that goes later.
3. And modify first lines of `matrix_to_vcf` like:
```
def matrix_to_vcf(i):
    matrix_df = global_gen_matrix_df_values[i]
```
Now let's re-run it. Free memory:
[free memory over time, shared-data version](https://i.stack.imgur.com/wkEaQ.png)
Process tree:
[process tree memory, shared-data version](https://i.stack.imgur.com/uFsTD.png)
And its sum:
[sum of PSS, shared-data version](https://i.stack.imgur.com/ieW5P.png)
Thus we're at maximum of ~2.9 GiB of actual memory usage (the peak main process has while building the data frame) and copy-on-write has helped!
As a side note, there's so called copy-on-read, the behaviour of Python's reference cycle garbage collector, [described in Instagram Engineering](https://engineering.instagram.com/dismissing-python-garbage-collection-at-instagram-4dca40b29172) (which led to `gc.freeze` in [issue31558](https://bugs.python.org/issue31558)). But `gc.disable()` doesn't have an impact in this particular case.
Update
------
An alternative to copy-on-write copy-less data sharing is delegating it to the kernel from the beginning by using [`numpy.memmap`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html). Here's [an example implementation](https://github.com/DonaldWhyte/high-performance-data-processing-in-python/blob/master/code/find_outliers_parallel_nocopy.py) from the *High Performance Data Processing in Python* talk. The [tricky part](https://stackoverflow.com/q/45943160/2072035) is then to make Pandas use the mmapped NumPy array.
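A minimal sketch of that idea (the file name, dtype and shape are placeholders, not taken from the question):
```
import numpy as np
from multiprocessing import Pool

ROWS, COLS = 4000000, 2

def worker(row_slice):
    # each worker re-opens the memmap read-only; the kernel shares the pages
    shared = np.memmap('genome_matrix.dat', dtype='int64', mode='r', shape=(ROWS, COLS))
    return int(shared[row_slice].sum())

if __name__ == '__main__':
    # parent: write the numeric data once into a disk-backed array
    data = np.memmap('genome_matrix.dat', dtype='int64', mode='w+', shape=(ROWS, COLS))
    data[:] = 1  # ... fill from the real source instead ...
    data.flush()

    with Pool(3) as p:
        print(p.map(worker, [slice(0, ROWS // 2), slice(ROWS // 2, ROWS)]))
```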
Upvotes: 8 [selected_answer] |
2018/03/22 | 783 | 2,428 | <issue_start>username_0: Assume i > 0 is true.
Why is this expression:
```
while(i > 0) {
printf("Hello \n);
i--;
};
```
equal to this expression:
```
while(i--) {
printf("Hello \n");
};
```<issue_comment>username_1: Firstly, this is not an expression. It's a `while` loop. Also, they're not equal. `while(i--)` is equivalent to `while(i-- != 0)`, which checks for *inequality*, not *greater than*.
If `i` is greater than or equal to `0` in the beginning, both snippets will behave the same way.
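A small program (not from the original answer) that makes the divergence visible when `i` starts out negative:
```
#include <stdio.h>

int main(void)
{
    int i = -3;

    while (i > 0) {        /* condition is false immediately: body never runs */
        printf("Hello \n");
        i--;
    }

    i = -3;
    while (i--) {          /* -3 is non-zero, so the body does run            */
        printf("Hello \n");
        if (i < -6)        /* stop early; otherwise it would keep going until */
            break;         /* signed overflow, which is undefined behavior    */
    }
    return 0;
}
```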
Upvotes: 2 <issue_comment>username_2: It is evident that you mean the equivalence of the loop executions. They are equivalent due to the fact that the variable `i` is not used in the body of the loop and provided that `i` is a positive number.
However the loops logically look the following way
```
while(i > 0) {
printf("Hello \n);
i--;
};
```
and
```
while( int temp = i; i -= 1; temp > 0 ) {
printf("Hello \n");
};
```
Though the second construction of the while loop is not valid, it shows that the variable `i` will be changed in any case: before each iteration of the loop body, and also when the condition finally evaluates to false.
Consider the following demonstrative program and its output.
```
#include 
int main(void)
{
int i = 1;
puts( "The first loop" );
while ( i > 0 )
{
printf("%d: Hello\n", i );
i--;
}
putchar( '\n' );
i = 1;
puts( "The second loop" );
while ( i-- )
{
printf("%d: Hello\n", i );
}
putchar( '\n' );
i = 0;
while ( i > 0 )
{
printf("%d: Hello\n", i );
i--;
}
printf( "After the first loop i = %d\n", i );
putchar( '\n' );
i = 0;
while ( i-- )
{
printf("%d: Hello\n", i );
i--;
}
printf( "After the second loop i = %d\n", i );
return 0;
}
```
The program output is
```
The first loop
1: Hello
The second loop
0: Hello
After the first loop i = 0
After the second loop i = -1
```
Take into account that the condition of the first loop checks whether the variable `i` is greater than 0, while the condition of the second loop checks whether the variable `i` is not equal to 0. So the conditions for entering the bodies of the loops are different.
If `i` has an unsigned integer type or it is guaranteed that `i` can not be negative then the following loops would be fully equivalent
```
while(i > 0) {
printf("Hello \n);
i--;
};
```
and
```
while(i != 0) {
printf("Hello \n);
i--;
};
```
Upvotes: -1 |
2018/03/22 | 1,170 | 4,002 | <issue_start>username_0: >
> **Late Comment**
>
>
> The only way for this to work seems to be defining the foreign key columns in ***table\_name\_primary\_key*** format. If they don't fit this format, the relations simply doesn't work without specifying the column names.
>
>
>
I'm trying to learn how to use foreign keys with the `reliese/laravel` code generator (it's generating models from database), but I have a problem which is forcing me to re-specify all the foreign key names in the generated code. Here's my migration code and the generated relation codes:
**// migration**
```
Schema::create('hotels', function(Blueprint $table) {
$table->increments('hotel_id');
$table->string('name', 64);
$table->string('description', 512);
$table->softDeletes();
$table->timestamps();
});
Schema::create('floors', function(Blueprint $table){
$table->increments('floor_id');
$table->integer('hotel_id')->unsigned();
$table->string('label', 128);
$table->softDeletes();
$table->timestamps();
$table->foreign('hotel_id')->references('hotel_id')->on('hotels')->onDelete('cascade');
});
```
**// relation**
```
// Hotel.php contains
public function floors()
{
return $this->hasMany(\Otellier\Floor::class);
}
// Floor.php contains
public function hotel()
{
return $this->belongsTo(\Otellier\Hotel::class);
}
```
Now, when I'm creating a floor with Faker:
```
$hotel = factory(App\Hotel::class)->make();
$floor = factory(App\Floor::class)->make([ "label" => "Floor #" . $floor_number ]);
$hotel->floors()->save($floor);
```
In the last line, I get this error:
>
> Illuminate\Database\QueryException : SQLSTATE[42S22]: Column not found: 1054 Unknown column 'hotel\_hotel\_id' in 'field list' (SQL: insert into `floors` (`label`, `hotel_hotel_id`, `updated_at`, `created_at`) values (Floor #1, 8, 2018-03-22 12:37:39, 2018-03-22 12:37:39))
>
>
>
Apparently, it searches for a `hotel_hotel_id` field as a column inside `floors` table, which I suspect is adding the table name as a prefix to the column name. Why is this happening and how can I prevent this and force the whole system not to prefix any column
### without doing this:
```
public function hotel()
{
return $this->belongsTo(\Otellier\Hotel::class, "hotels", "hotel_id", "hotel_id");
}
```<issue_comment>username_1: You need to define your own primary key in the models:
```
protected $primaryKey='floor_id';
```
and for the other:
```
protected $primaryKey='hotel_id';
```
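In context, that might look like the following sketch (class names are taken from the question; on its own this only changes which column Eloquent treats as the primary key):
```
class Floor extends Model
{
    protected $primaryKey = 'floor_id';
}

class Hotel extends Model
{
    protected $primaryKey = 'hotel_id';
}
```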
Upvotes: -1 <issue_comment>username_2: Laravel uses the `getForeignKey()` method in **Model.php** to predict the foreign key of a relation.
```
/**
* Get the default foreign key name for the model.
*
* @return string
*/
public function getForeignKey()
{
return Str::snake(class_basename($this)).'_'.$this->primaryKey;
}
```
You can **override** it in your model or **base class** to change its behaviour. In your case it may be used like this:
```
public function getForeignKey()
{
return $this->primaryKey;
}
```
It will return your `$primaryKey` as the foreign key when a relation tries to access the corresponding foreign key.
NB: If you specify the foreign key in the relation, then this method won't take effect.
Upvotes: 3 [selected_answer]<issue_comment>username_3: By default eloquent determines the foreign key of the relationship based on the model name. In this case, the `Floor` model is automatically assumed to have a `hotel_id` foreign key. Additionally, Eloquent assumes that the foreign key should have a value matching the id (or the custom `$primaryKey`) column of the parent. In other words, Eloquent will look for the value of the hotel's `id` column in the `hotel_id` column of the `Floor`.If you would like the relationship to use a value other than `id`, you may pass a third argument to the hasMany method specifying your custom `local key`
```
public function floors()
{
return $this->hasMany(\Otellier\Floor::class, 'hotel_id', 'hotel_id');
}
```
Upvotes: 0 |
2018/03/22 | 1,033 | 3,384 | <issue_start>username_0: I'm running a process that produces n number of outputs at different timestamps for a given input CSV. The output files are in CSV form and labelled as such:
"Output" \_ RouteName \_ Direction \_ YYMMDD \_ HHMMSS
I have a macro that reports on missing data in the files, I just need a list of the number of rows in each CSV.
I have been doing this by using the command:
```
@Echo Off
:_Loop
If "%~1"=="" Pause&Goto EOF
Find /C /V "Wont@findthisin#anyfile" %1 >> LineCount.txt
Shift
Goto _Loop
```
The command is called counter.cmd and I just drag the output CSV's into it and it creates this output in a text file for each output:
```
---------- R:\10_TECHNICAL\10_TESTRUN\RUN\AM\ITN\A6_1N\OUTPUT_A6_1N_180313_070112.CSV: 5
```
The problem is that, I can only use this command to process a maximum of ~ 100 files, and I have ~ 1000 output files. When I try to make it do all 1000 files I get this error:

I have relatively basic windows command scripting skills and so don't know how to overcome this problem. Any help would be appreciated!<issue_comment>username_1: Before I get to the source of your problem, and the fix, I want to point out a couple things that could improve your current script.
1) Your FIND command can be simplified to `find /n /v "" filePath` - it seems nonsensical, but it works.
2) GOTO is relatively slow. You can get rid of the GOTO loop by using the FOR command to process all of the arguments. You can use `%*` to retrieve all of the arguments.
3) Every time you redirect the file must be opened and the file pointer positioned to the end of file. This takes time. It is much faster to redirect only once.
Incorporating all of the above, your script could be as simple as
```
@echo off
> LineCount.txt (for %%F in (%*) do find /n /v "" "%%F")
pause
```
When you drag files onto a batch script, it creates a single command line containing the path to each of the files. A Windows command line is limited to 8191 bytes long. So there is no way your strategy will work if you have ~1000 files.
I'm assuming all of your files are within a single folder, in which case you could change your script to process a single folder path instead of a list of file paths.
```
@echo off
>LineCount.txt (for %%F in ("%~1\*.csv") do find /n /v "" "%%F")
pause
```
If the files are spread across a few folders, then you can add an extra loop to iterate each of the folders
```
@echo off
>LineCount.txt ( for %%A in (%*) do for %%F in ("%%~A\*.csv") do find /n /v "" %%F")
pause
```
Upvotes: 2 <issue_comment>username_2: I have to assume the restriction is with your `%1` variable and the maximum allowed command line length.
This can probably be easily remedied with a single command line
```bat
@Find /C /V "" R:\10_Technical\10_TestRun\Run\AM\ITN\A6_1N\*.csv > LineCount.txt
```
**Edit**
You can drag and drop the folder containing your csv files onto the batch file too.
If you want the outputfile in the same directory as the csv's then use:
```bat
@Find /C /V "" "%~1\*.csv" > "%~1\LineCount.txt"
```
Or in the same directory as the batch file:
```bat
@Find /C /V "" "%~1\*.csv" > "%~dp0LineCount.txt"
```
You could even have it output to the directory holding that folder:
```bat
@Find /C /V "" "%~1\*.csv" > "LineCount.txt"
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 198 | 756 | <issue_start>username_0: I installed Laravel on my PC but forgot the directory I installed it in. How can I find where I installed it, if that's possible? I used Composer to install Laravel but I have forgotten the directory.<issue_comment>username_1: That's not a Laravel problem.
But generally people are using wamp64/www or xampp/htdocs,
or cmd automatically opens in c:/users/name.
If you can't find it, just use a file-search program.
Upvotes: 0 <issue_comment>username_2: This is the default directory of laravel in windows
```
C:\Users\YourPcName\AppData\Local\Composer\files\laravel
```
Upvotes: 1 <issue_comment>username_3: This will be a little bit mad, but try searching for the text `"artisan"` in the search window, and it'll show the Laravel project folders.
Upvotes: 1 |
2018/03/22 | 2,058 | 7,232 | <issue_start>username_0: When a script is invoked explicitly with `python`, the `argv` is mucked with so that `argv[0]` is the path to the script being run. This is the case if invoked as `python foo/bar.py` or even as `python -m foo.bar`.
I need a way to recover the original `argv` (ie. the one received by `python`). Unfortunately, it's not as easy as prepending `sys.executable` to `sys.argv` because `python foo/bar.py` is different than `python -m foo.bar` (the implicit `PYTHONPATH` differs, which can be crucial depending on your module structure).
More specifically in the cases of `python foo/bar.py some other args` and `python -m foo.bar some other args`, I'm looking to recover `['python', 'foo/bar.py', 'some', 'other', 'args']` and `['python', '-m', 'foo.bar', 'some', 'other', 'args']`, respectively.
I am aware of prior questions about this:
* [how to get the ORIGINAL command line in python? with spaces, tabs, etc](https://stackoverflow.com/questions/6184925/how-to-get-the-original-command-line-in-python-with-spaces-tabs-etc)
* [Full command line as it was typed](https://stackoverflow.com/questions/667540/full-command-line-as-it-was-typed)
But these seem to have a misunderstanding of how shells work and the answers reflect this. I am not interested in undoing the work of the shell (eg. evaluated shell vars and functions are fine), I just want to get at the original `argv` given to `python`.
The [only solution](https://stackoverflow.com/a/4867889/568785) I've found is to use `/proc/<pid>/cmdline`:
```
import os
with open("/proc/{}/cmdline".format(os.getpid()), 'rb') as f:
    original_argv = f.read().split('\0')[:-1]
```
This does work, but it is Linux-only (no OSX, and Windows support seems to require installing the [wmi package](https://mail.python.org/pipermail/python-win32/2007-December/006498.html)). Fortunately for my current use case this restriction is fine. But, it would be nice to have a cleaner, cross platform approach.
The fact that the `/proc/<pid>/cmdline` approach works gives me hope that python isn't execing before it runs the script (at least not the syscall exec, but maybe the `exec` builtin). I remember reading somewhere that all of this argument handling (ex. `-m`) is done in pure python, not C (this is confirmed by the fact that `python -m this.does.not.exist` will produce an exception that looks like it came from the runtime). So, I'd venture a guess that somewhere in pure python the original `argv` is available (perhaps this requires some spelunking through the runtime initialization?).
**tl;dr** Is there a cross platform (builtin, preferably) way to get at the original `argv` passed to `python` (before it remove the `python` executable and transforms `-m blah` into `blah.py`)?
**edit** From spelunking, I discovered [`Py_GetArgcArgv`](https://github.com/python/cpython/blob/186b606d8a2ea4fd51b7286813302c8e8c7006cc/Modules/main.c#L2709), which can be accessed via ctypes (found it [here](https://github.com/cherrypy/cherrypy/blob/85b48e3c45d329437c2c03595eb280c76d5f4ef3/cherrypy/process/wspbus.py#L443), links to [several](https://stackoverflow.com/a/28338254) [SO](https://stackoverflow.com/a/6683222) posts that mention this approach):
```
import ctypes
_argv = ctypes.POINTER(ctypes.c_wchar_p)()
_argc = ctypes.c_int()
ctypes.pythonapi.Py_GetArgcArgv(ctypes.byref(_argc),
ctypes.byref(_argv))
argv = _argv[:_argc.value]
print(argv)
```
Now this is OS-portable, but not python implementation portable (only works on cpython and `ctypes` is yucky if you don't need it). Also, peculiarly, I don't get the right output on Ubunutu 16.04 (`python -m foo.bar` gives me `['python', '-m', '-m']`), but I may just be making a silly mistake (I get the same behavior on OSX). It would be great to have a fully portable solution (that doesn't dig into `ctypes`).<issue_comment>username_1: Your stated problem is:
1. User called my app with environment variables and arguments.
2. I want to display a "run like this" diagnostic that will exactly reproduce the results of the current run.
There are at least two solutions:
1. Abandon the "reproduction" aspect, since the original bash calling command is lost to the portable python app, and instead go for "same effect".
2. Use a wrapper to capture the original calling command, as suggested by <NAME>.
With (1) you would be willing to accept ['-m', 'foo'] becoming ['foo.py'], or even turning it into ['/some/dir/foo.py'] in case PYTHONPATH could cause trouble. Displaying ['a', 'b c'] as `"a" "b c"`, or more concisely as `a "b c"`, is straightforward. If environment variables like SEED are an important part of the command line interface then you'll need to iterate over envp and output them, as well. For true reproducibility, you might choose to convert input args to canonical form, compare with observed input args, and exec using the canonical form if they're not identical, so there's no way to execute the bulk of your code using "odd" syntax.
With (2) you would bury the app in some inconveniently named file, advertise the wrapper program far and wide, and enjoy the benefits of seeing args before they're munged.
Upvotes: 0 <issue_comment>username_2: This seems like an XY problem: you are getting into the weeds in order to accommodate some existing complicated test setup (I've found the question behind the question in your [comment](https://stackoverflow.com/questions/49429412/recovering-original-argv#comment85861526_49429412)). Further efforts would be better spent writing a sane test setup.
1. Use a [better test runner](https://docs.pytest.org/en/latest/), not unittest.
2. **Create any initial state within the test setup, not in the external environment before entering the Python runtime**.
3. Use a plugin for the randomization and seed stuff, personally I use [this one](https://pypi.python.org/pypi/pytest-randomly) but there are others.
For example if you decide to go with pytest runner, all the test setup can be configured within a [`[tool.pytest.ini_options]`](https://docs.pytest.org/en/7.1.x/reference/customize.html#pyproject-toml) section of the `pyproject.toml` file and/or with a [fixture](https://docs.pytest.org/en/latest/fixture.html) defined in `conftest.py`. Overriding the default test configuration can be done with environment variables and/or command line arguments, and neither of these approaches will get mucked around by the shell or during Python interpreter startup.
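A hedged sketch of what that configuration might look like (the option values are illustrative, not taken from the question):
```
# pyproject.toml
[tool.pytest.ini_options]
testpaths = ["tests"]
# pin the seed used by the pytest-randomly plugin so runs are reproducible;
# override for a single run with:  pytest --randomly-seed=42
addopts = "--randomly-seed=1234"
```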
The manner in which to execute the test suite can and should be as simple as executing a single command:
```
pytest
```
And then your perceived problem of needing to recover the original `sys.argv` will go away.
Upvotes: 2 <issue_comment>username_3: Python 3.10 adds [`sys.orig_argv`](https://docs.python.org/3/library/sys.html#sys.orig_argv), which the docs describe as the arguments originally passed to the Python executable. If this isn't exactly what you're looking for, it may be helpful in this or similar cases.
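For instance, a quick illustration (requires Python 3.10+; the exact paths printed will vary):
```
# invoked as:  python -m foo.bar some other args
import sys

print(sys.orig_argv)  # e.g. ['python', '-m', 'foo.bar', 'some', 'other', 'args']
print(sys.argv)       # e.g. ['/path/to/foo/bar.py', 'some', 'other', 'args']
```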
There were [a bunch of possibilities considered](https://bugs.python.org/issue23427), including changing `sys.argv`, but this was, I think, wisely chosen as the most effective and non-disruptive option.
Upvotes: 3 |
2018/03/22 | 473 | 1,815 | <issue_start>username_0: I need to query data from an API, then save it to a Realm object. I need to get the data (observable) to the Presenter from the Realm object unless 5 minutes have elapsed since the last API query; in that case I need to fetch from the API again. I'm new to RxJava. Any suggestions?<issue_comment>username_1: You could create a class hosting a `ReplaySubject` and some update logic:
```
class TimedCache {
final Subject cache =
ReplaySubject.createWithTime(5, TimeUnit.MINUTES).toSerialized();
final Single valueProvider;
TimedCache(Single valueProvider) {
this.valueProvider = valueProvider;
}
public Observable valueObservable() {
return cache.take(1)
.switchIfEmpty(
valueProvider
.doOnSuccess(v -> {
cache.onNext(v);
// update realm here
})
.toObservable()
);
}
}
```
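A rough usage sketch (the `apiService.getData()` Single, the view calls, and the RxAndroid main-thread scheduler are assumptions, not part of the original answer):
```
// wrap whatever Single fetches from the API
TimedCache cache = new TimedCache(apiService.getData());

// presenter side: emits the cached value while it is younger than 5 minutes,
// otherwise triggers a fresh API call (which also refreshes the cache/Realm)
cache.valueObservable()
     .subscribeOn(Schedulers.io())
     .observeOn(AndroidSchedulers.mainThread())
     .subscribe(value -> view.show(value), throwable -> view.showError(throwable));
```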
Upvotes: 2 <issue_comment>username_2: How about setting up two different observables:
1.) for observing the Realm data:
```
realm.where(MyData.class)
.findAllAsync()
.asFlowable()
.filter(RealmResults::isLoaded)
.subscribe(...);
```
2.) for fetching the data every 5 minutes
```
Observable.interval(5, TimeUnit.MINUTES)
.subscribeOn(Schedulers.io())
.switchMap((ignored) -> {
return apiService.getData();
})
.subscribe((data) -> {
try(Realm realm = Realm.getDefaultInstance()) {
realm.executeTransaction((r) -> {
r.insertOrUpdate(data);
});
}
});
```
---
**EDIT:** then just call a method like
```
flowable = realm.where(MyData.class)
.findAllAsync()
.asFlowable()
.filter(RealmResults::isLoaded)
.subscribe(...);
if(cache.isOlderThanFiveMinutes()) {
startRefreshTaskOnBackgroundThread();
}
```
Upvotes: 0 |
2018/03/22 | 643 | 2,520 | <issue_start>username_0: There is an array `Link ports[4]` which is defined as static and is part of the main thread. In my project, there are various producer and consumer threads which use this array through a static reference, i.e. using the class name to access the array. In some parts of the project this array is updated, and in other parts it is read. How should I use synchronized, wait, notify or join so that I can maintain consistency in my code?
```
class Router {                       // on main thread
    static Link[] ports = new Link[4];

    void processCnnect() {
        // ...
        else if (Router.ports[i] == null && processPort > 1023 && processPort <= 65535) {
            Router.ports[i] = new Link(rd, rd_c);
        }
    }
}

class ServerThread extends Thread {
    // reading then updating Router.ports[]
}
```
There are some threads which perform a heartbeat mechanism by sending alive messages (packets) to other consumers (receivers). It is a mix of multithreading and socket programming. |
2018/03/22 | 4,343 | 14,353 | <issue_start>username_0: I built this app with the help of [this question](https://stackoverflow.com/questions/49408550/structure-of-a-synchronous-application-in-node-js-and-mongodb/49409296?noredirect=1#comment85825494_49409296) I did previously.
**app.js**:
```
var mongolib = require('./middlewares/db.js');
var downloaderCoverageWho = require('./routers/downloaderCoverageWho.js');
var downloaderCoverageIta = require('./routers/downloaderCoverageIta.js');
const start = async function() {
const conn = await mongolib.connectToMongoDb();
const coverages = await mongolib.createACollection('coverages');
const isPageHasUpdates = true;
if(isPageHasUpdates) {
await downloadCoverageIta();
await downloadCoverageWho();
}
await mongolib.closeConnection();
await console.log('d3.js creation...');
return 'FINISH';
}
start()
.then(res => console.log(res))
.catch(err => console.log(err));
async function downloadCoverageWho() {
await downloaderCoverageWho.download();
console.log('Finish');
}
async function downloadCoverageIta() {
await downloaderCoverageIta.download();
console.log('Finish');
}
```
**db.js**:
```
var fs = require('fs');
var MongoClient = require('mongodb').MongoClient;
const url = 'mongodb://localhost:27017/';
const dbName = 'db';
var collCovName = 'coverages';
var collCov;
let myClient;
let myConn;
var methods = {};
methods.getConnection = function() {
return myConn;
}
methods.connectToMongoDb = async function() {
return MongoClient.connect(url + dbName)
.then(function(conn) {
console.log('Connected to MongoDB');
myClient = conn;
myConn = myClient.db(dbName);
return myConn;
})
.catch(function(err) {
console.log('Error during connection');
throw err;
});
}
methods.closeConnection = async function() {
myClient.close()
.then(function() {
console.log('Connection closed');
})
.catch(function(err) {
console.log('Error closing connection');
throw err;
});
}
methods.createACollection = async function(collectionName) {
return myConn.createCollection(collectionName)
.then(function() {
console.log('Collection', collectionName, 'created');
})
.catch(function(err) {
console.log('Error during creation of collection', collectionName);
throw err;
});
}
methods.insert = async function(collectionName, obj) {
return myConn.collection(collectionName).updateOne(obj, {$set: obj}, {upsert: true})
.then(function(res) {
console.log('Inserted 1 element in', collectionName);
})
.catch(function(err) {
console.log('Error during insertion in', collectionName);
throw err;
});
}
module.exports = methods;
```
**downloadCoverageIta.js**:
```
var cheerio = require('cheerio');
var express = require('express');
var fs = require('fs');
var request = require('request');
var textract = require('textract');
var util = require('../helpers/util.js');
var mongolib = require('../middlewares/db.js');
var methods = {};
var outDir = './output/';
var finalFilename = outDir + 'coverage-ita.json'
var urls = [
{year: '2013', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_0_fileAllegati_itemFile_1_file.pdf'},
{year: '2012', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_5_fileAllegati_itemFile_0_file.pdf'},
{year: '2011', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_6_fileAllegati_itemFile_0_file.pdf'},
{year: '2010', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_7_fileAllegati_itemFile_0_file.pdf'},
{year: '2009', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_8_fileAllegati_itemFile_0_file.pdf'},
{year: '2008', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_15_fileAllegati_itemFile_0_file.pdf'},
{year: '2007', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_14_fileAllegati_itemFile_0_file.pdf'},
{year: '2006', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_13_fileAllegati_itemFile_0_file.pdf'},
{year: '2005', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_12_fileAllegati_itemFile_0_file.pdf'},
{year: '2004', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_11_fileAllegati_itemFile_0_file.pdf'},
{year: '2003', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_10_fileAllegati_itemFile_0_file.pdf'},
{year: '2002', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_9_fileAllegati_itemFile_0_file.pdf'},
{year: '2001', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_1_fileAllegati_itemFile_0_file.pdf'},
{year: '2000', link: 'http://www.salute.gov.it/imgs/C_17_tavole_20_allegati_iitemAllegati_0_fileAllegati_itemFile_0_file.pdf'}
];
var jsons = [];
methods.download = async function(req, res) {
jsons = await extractText()
.then(function() {
console.log('Extract text success');
})
.catch(function() {
console.log('Extract text error');
});
};
async function extractText() {
var config = {
preserveLineBreaks: true
};
//await extractTextTest();
await urls.forEach(async function(url) {
await textract.fromUrl(url.link, config, async function(error, text) {
if(error) {
throw error;
}
switch(url.year) {
case '2000':
case '2001':
case '2002':
case '2003':
case '2004':
case '2005':
case '2006':
case '2007':
case '2008':
case '2009':
case '2010':
case '2011':
case '2012':
await extractTextType1(url, text)
.then(function() {
console.log('extractTextType1 success');
})
.catch(function() {
console.log('extractTextType1 error');
});
break;
case '2013':
extractTextType2(url, text)
.then(function() {
console.log('extractTextType2 success');
})
.catch(function() {
console.log('extractTextType2 error');
});
break;
default:
console.log('Error: no case');
}
});
});
}
async function extractTextTest() { // THIS WORKS
var obj = {A: 'aa', B: 'bb', C: 'cc'};
await mongolib.insert('coverages', obj);
}
async function extractTextType1(url, text) {
var matrix = [];
var map = [];
var vaccines = [];
var regionsTemp = [];
var regions = [];
var regionLength = [1, 2, 1, 2, 3, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1];
// text to matrix
var textArray = text.split('\n');
for(var i = 0; i < 23; i++) {
matrix[i] = textArray[i].split(' ');
}
matrix[0].shift();
vaccines = matrix[0];
map[0] = vaccines;
for(var i = 0; i < regionLength.length; i++) {
var j = i + 1;
var indexToRemove = 0;
var numberToRemove = regionLength[i];
var region = matrix[j].splice(indexToRemove, numberToRemove);
regionsTemp.push(region);
map[j+1] = matrix[j];
}
for(var i = 0; i < regionsTemp.length; i++) {
var region = '';
if(regionLength[i] > 1) {
region = regionsTemp[i].join(' ');
}
else {
region = regionsTemp[i].join('');
}
regions.push(region);
}
map[1] = regions;
vaccines = map.shift();
regions = map.shift();
var thisJson = await map.reduce(function(result, v, i) {
v.forEach(function(o, k) {
var obj = util.createJsonObjectCoverage(url.year, 'Italy', vaccines[k], regions[i], o);
// save on db
mongolib.insert('coverages', obj); // HERE
result.push(obj);
});
return result;
}, jsons);
util.printOnFile(jsons, finalFilename);
}
function extractTextType2(url, text) {
var matrix = [];
var map = [];
var vaccines = [];
var regions = [];
var textArray = text.split('\n');
for(var i = 0; i < 36; i++) {
matrix[i] = textArray[i].split(' ');
}
vaccines.push(matrix[0][1].replace(/\(a\)/g, '').replace(/\(b\)/g, '').replace(/\(c\)/g, '').replace(/\r/g, ''));
for(var i = 1; i < 10; i++) {
vaccines.push(matrix[i][0].replace(/\(a\)/g, '').replace(/\(b\)/g, '').replace(/\(c\)/g, '').replace(/\r/g, ''));
}
var meningo = ''.concat(matrix[10][0], matrix[11]).replace(/\(a\)/g, '').replace(/\(b\)/g, '').replace(/\(c\)/g, '').replace(/\r/g, '');
vaccines.push(meningo);
var pneumo = ''.concat(matrix[12][0], ' ', matrix[13]).replace(/\(a\)/g, '').replace(/\(b\)/g, '').replace(/\(c\)/g, '').replace(/\r/g, '');
vaccines.push(pneumo);
map[0] = vaccines;
for(var i = 14; i < matrix.length; i++) {
regions.push(matrix[i][0]);
}
map[1] = regions;
for(var i = 14; i < matrix.length; i++) {
matrix[i].shift();
map.push(matrix[i]);
}
vaccines = map.shift();
regions = map.shift();
var thisJson = map.reduce(function(result, v, i) {
v.forEach(function(o, k) {
var obj = util.createJsonObjectCoverage(url.year, 'Italy', vaccines[k], regions[i], o);
// save on db
mongolib.insert('coverages', obj); // HERE
result.push(obj);
});
return result;
}, jsons);
util.printOnFile(jsons, finalFilename);
}
module.exports = methods;
```
**downloaderCoverageWho.js**:
```
var cheerio = require('cheerio');
var express = require('express');
var fs = require('fs');
var request = require('request');
var util = require('../helpers/util.js');
var mongolib = require('../middlewares/db.js');
var methods = {};
var countries = {
'Albania': 'ALB',
'Austria': 'AUT'
};
var outDir = './output/';
var finalData = outDir + 'coverage-eu.json'
var jsons = [];
methods.download = async function(req, res) {
for(country in countries) {
var url = 'http://apps.who.int/immunization_monitoring/globalsummary/coverages?c=' + countries[country];
request(url, (function(country) {
var thisCountry = country;
return function(error, res, html) {
if(error) {
throw error;
}
$ = cheerio.load(html);
var years = [];
var vaccines = [];
var coverages = [];
$('.ts .year').each(function() {
years.push($(this).text().trim());
});
$('.ts .odd td a, .ts .even td a').each(function() {
vaccines.push($(this).text().trim());
});
$('.ts .odd .statistics_small, .ts .even .statistics_small').each(function() {
coverages.push($(this).text().trim());
});
const numYears = years.length;
const numVaccines = vaccines.length;
for(var vaccineIdx = 0; vaccineIdx < numVaccines; vaccineIdx++) {
for(var yearIdx = 0; yearIdx < numYears; yearIdx++) {
let obj = {
year: years[yearIdx],
country: country,
region: "",
vaccine: vaccines[vaccineIdx],
coverage: coverages[vaccineIdx*numYears + yearIdx]
}
jsons.push(obj);
// save on db
mongolib.insert('coverages', obj); // PROBLEM HERE
}
}
util.printOnFile(jsons, finalData);
}
})(country));
}
};
module.exports = methods;
```
When I run the code, I get this error:
>
> (node:11952) UnhandledPromiseRejectionWarning: Unhandled promise
> rejection (rejection id: 2950): MongoError: server instance pool was
> destroyed
>
>
>
I think there is the same problem in both files (`downloaderCoverageIta` and `downloaderCoverageWho`).
I read [here](https://stackoverflow.com/questions/39029893/why-is-the-mongodb-node-driver-generating-instance-pool-destroyed-errors) that probably I'm calling `db.close()` before my `inserts` have completed but this is not true. I don't know how to fix it.
It's the first time I use `async/await`. How can I solve this?<issue_comment>username_1: I think `db.closeConnection()` is called before all request responses are handled and database access is finished.
The cause of this behaviour may be that you are only waiting for the requests to be created, not for their results (responses).
You start the request with a statement like this inside an async function:
```
request(url, (function(country) {
...
})(...));
```
That means, the async function will wait for the `request(...)` function call, but not for the callback function which handles the response.
You can also use `request(...)` in the following way:
```
const response = await request(url);
```
Now your async function will wait for the response.
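Note that the plain `request` library does not return a promise by default, so `await request(url)` only helps if you use a promise-based variant (such as `request-promise-native`) or wrap it yourself. A minimal sketch of such a wrapper; the helper name `requestAsync` is just illustrative:
```
var request = require('request');

// Illustrative helper: wraps the callback-style request() in a Promise
function requestAsync(url) {
  return new Promise(function(resolve, reject) {
    request(url, function(error, res, html) {
      if (error) {
        reject(error);
      } else {
        resolve(html);
      }
    });
  });
}

// inside an async function you could then write:
// var html = await requestAsync(url);
```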
Upvotes: 2 <issue_comment>username_2: I found the issue...
I forgot the fact that JavaScript is async :)
The `mongo.close()` call was the issue.
I commented it out and it worked.
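In other words, the connection must only be closed after every insert has finished. A minimal sketch of that idea, assuming `mongolib.insert` returns a promise; the names follow the question's code, and the close function name is an assumption about what the db module exposes:
```
async function saveAll(objects) {
  // kick off every insert and keep the returned promises
  var pending = objects.map(function(obj) {
    return mongolib.insert('coverages', obj);
  });
  await Promise.all(pending); // wait until every insert has completed
  await mongolib.close();     // only close the connection afterwards
}
```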
Upvotes: 5 [selected_answer]<issue_comment>username_3: "err": "server instance pool was destroyed"
You are getting this error because of the line `mongoose.set('useCreateIndex', true)`.
Comment that line out and you will not get this error again.
Upvotes: -1 <issue_comment>username_4: Looks like the server is not able to connect with MongoDB, try restarting the server.
Upvotes: 0 |
2018/03/22 | 379 | 1,465 | <issue_start>username_0: How can I use a parameter instead of a field name like:
```
Declare @Thecode varchar(10)
Set @Thecode = 'code' --'code' is field name.
Select @Thecode from sqltable
``` |
2018/03/22 | 1,416 | 3,249 | <issue_start>username_0: I have the output data frame from apriori, the rules as given below:
```
rules
{A,B} => {C}
{C,A} => {B}
{A,B} => {D}
{A,D} => {B}
{A,B} => {E}
{E,A} => {B}
```
I got it till this point where I grouped the items in each rule (data.frame is df\_basket)
```
rules basket
{A,B} => {C} A,B,C
{C,A} => {B} C,A,B
{A,B} => {D} A,B,D
{A,D} => {B} A,D,B
{A,B} => {E} A,B,E
{E,A} => {B} E,A,B
```
I want to be able to order the basket in alphabetical order as given below:
```
rules basket Group
{A,B} => {C} A,B,C A,B,C
{C,A} => {B} C,A,B A,B,C
{A,B} => {D} A,B,D A,B,D
{A,D} => {B} A,D,B A,B,D
{A,B} => {E} A,B,E A,B,E
{E,A} => {B} E,A,B A,B,E
```
I used the code below which works fine for small data frames and gets the job done. The for loop is inefficient for large data frames. Please help me in optimizing this atomic operation in R:
```
for(i in 1:nrow(df_basket))
{
df_basket$Basket[i]<- ifelse(1==1,paste(unlist(strsplit(df_basket$basket[i],","))
[order(unlist(strsplit(df_basket$basket[i],",")))],collapse=","))
}
```
Please let me know if there is anything easy or more direct to get the "Group" field of my data frame.<issue_comment>username_1: Try to adapt this solution:
```
f<-function(x)
{
sorted<-sort(unlist(strsplit(x,",")))
return(paste0(sorted,collapse = ","))
}
cbind(basket,unlist(lapply(basket,f)))
```
Input data:
```
basket<-c("A,B,C","C,A,B","A,B,D","A,D,B","A,B,E","E,A,B")
```
Output:
```
basket
[1,] "A,B,C" "A,B,C"
[2,] "C,A,B" "A,B,C"
[3,] "A,B,D" "A,B,D"
[4,] "A,D,B" "A,B,D"
[5,] "A,B,E" "A,B,E"
[6,] "E,A,B" "A,B,E"
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is another way using more support from `arules`:
```
### create some random data and mine rules
library("arules")
dat <- replicate(10, sample(LETTERS[1:5], size = 3), simplify = FALSE)
trans <- as(dat, "transactions")
rules <- apriori(trans)
inspect(rules)
lhs rhs support confidence lift count
[1] {} => {A} 0.8 0.8 1.000000 8
[2] {B} => {A} 0.6 1.0 1.250000 6
[3] {C,D} => {E} 0.2 1.0 1.428571 2
[4] {B,D} => {A} 0.1 1.0 1.250000 1
[5] {B,C} => {A} 0.2 1.0 1.250000 2
[6] {B,E} => {A} 0.3 1.0 1.250000 3
### Get the itemsets that generated each rule and convert the itemsets
### into a list. I use a list, since in general, rules will not all
### have the same number of items.
itemsets <- as(items(generatingItemsets(rules)), "list")
### sort the item labels alphabetically. Note that you could already
### start with the item labels correctly sorted in the transaction set
### (see manual page for itemcoding in arules).
lapply(itemsets, sort)
[[1]]
[1] "A"
[[2]]
[1] "A" "B"
[[3]]
[1] "C" "D" "E"
[[4]]
[1] "A" "B" "D"
[[5]]
[1] "A" "B" "C"
[[6]]
[1] "A" "B" "E"
```
If all rules have the same number of items then you can drop this list into a matrix.
If you want them as a single string then you can do:
```
sapply(lapply(itemsets, sort), paste0, collapse = ",")
[1] "A" "A,B" "C,D,E" "A,B,D" "A,B,C" "A,B,E"
```
Upvotes: 1 |
2018/03/22 | 817 | 2,932 | <issue_start>username_0: Basically, I have a component with some animations, but I want to show them only when the user resizes the page past a certain breakpoint (if the width is <800px, I rearrange the divs, and in that case I want to show the animation).
I managed to do that, but the problem is that when I first load the page the animations run, even though I don't want to show them then.
Hope I've been clear enough
**EDIT:** I have these divs in the render() function of the 'Test' component, where 'main-content' is flex: the first two divs are in the first row, each 50% wide, and the third inner div is in the second row at 100% width.
```
```
In the .css file I have this code:
```
@media only screen and (max-width: 1000px) {
.change-password-container, .organization-setting-container {
animation: fadeDown 0.5s ease-out;
}
}
@keyframes fadeDown {
0% {
opacity: 0.01;
transform: translateX(-25%);
}
100% {
opacity: 1;
transform: translateX(0);
}
}
```
So when the window width becomes <1000 I get three rows, one for each inner div. It works, but the problem is that if I open the page while the width is already <1000, the animation still runs, even though I want to prevent it in that case.
One thing I tried was to give the 'main-content' div a special class 'preload', and then remove that class once the component has mounted. The CSS class 'preload' would prevent the animation. Let me show you the code.
```
/* IN REACT COMPONENT FILE */
componentDidMount() {
document.getElementsByClassName("preload")[0].classList.remove("preload");
}
... ... ...
/*IN CSS FILE */
.preload * {
animation-duration: 0s !important;
-webkit-animation-duration: 0s !important;
}
```<issue_comment>username_1: Do you have a code snippet at all? It sounds like something is happening in "componentDidMount"; you could try the following.
```
componentDidMount = (e) => {
e.preventDefault();
}
```
Upvotes: -1 <issue_comment>username_2: Ok, now I see what you're talking about.
The thing is you are trying to animate only when you're resizing, but the media query applies every time your width is lower than 1000px.
IMO you should add a class for a resizing event.
I would build it this way:
```
updateWidth() {
if(window.innerWidth < 1000 && this.state.width > 1000 || window.innerWidth > 1000 && this.state.width < 1000) {
this.setState({ width: window.innerWidth });
}
}
componentDidMount() {
window.addEventListener("resize", this.updateWidth);
}
componentWillUnmount() {
window.removeEventListener("resize", this.updateWidth);
}
```
and in the component I would check if the width on the state is lower than 1000 and add the animate class. if it's bigger, remove the animate class.
Don't forget to bind the event listener handler in the constructor.
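For completeness, a minimal sketch of that constructor (the initial state value is just an assumption):
```
constructor(props) {
  super(props);
  this.state = { width: window.innerWidth };
  // bind the handler so "this" works inside updateWidth
  this.updateWidth = this.updateWidth.bind(this);
}
```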
Upvotes: 2 [selected_answer] |
2018/03/22 | 623 | 2,228 | <issue_start>username_0: I am still new to Angular 4 (using Angular-CLI). I somehow do not manage to get simple Bootstrap Collapse work.
The following is my code for the collapse:
```
{{ idea.title }}
{{ idea.author }}
{{ idea.description }}
More
Show Details
```
**Update:**
I did import all the relevant Bootstrap and jQuery scripts. As you can see below, the IDs do match, so I don't know why it does not work. Is there an issue with Angular 5 and Bootstrap's Collapse?
[](https://i.stack.imgur.com/yrxWf.png)<issue_comment>username_1: I was facing a similar issue with **Angular 5** and **Bootstrap 4**, when I was trying to make collapsibles in `*ngFor`. After debugging I found that `data-target="#someId"` is rendered as `target="#someId"` and that's why the bootstrap collapse wasn't working.
I found my solution in [this answer](https://stackoverflow.com/a/40203518/3070768). The solution to your specific question would be to use:
```
attr.data-target="#{{idea._id}}"
```
Upvotes: 3 <issue_comment>username_2: The first issue that I see here is that you are missing jQuery. If you have included it, you might have done so after bootstrap.js (or bootstrap.min.js). You should fix that, as jQuery is required for toggling elements.
After reading your comments I see where the problem is. You are using Bootstrap 3 and looking at examples for Bootstrap 4 to achieve what you want.
For bootstrap 3, you need to use href="#id" to toggle it instead of data-target="#id".
Upvotes: -1 <issue_comment>username_3: In my case, I had to replace `[attr.data-target]="'#collapse'+index"` with `[attr.data-bs-target]="'#collapse'+index"`
I also replaced `data-parent` with `data-bs-parent` and `data-toggle` with `data-bs-toggle`.
The issue was I didn't have the -bs- but neither does the example given on the site: <https://getbootstrap.com/docs/4.3/components/collapse/>
I found this out by comparing it against the page source code on <https://getbootstrap.com/docs/5.0/examples/sidebars/#>
I am on Bootstrap 5.1.3 and those docs are for 4.3 so perhaps this is a Bootstrap version issue, but this comment might help me or someone fix this issue again later.
Upvotes: 0 |
2018/03/22 | 752 | 3,063 | <issue_start>username_0: I tried to convert my objective C AppDelegate to Swift AppDelegate. So I deleted the main.m file and Converted all the AppDelegate code to swift. I tried to run the project, then this error occurs.
>
> Cannot find protocol declaration for
> 'UNUserNotificationCenterDelegate'
>
>
>
In the -Swift.h file they generate, this is the delegate they showed
```
- (void)userNotificationCenter:(UNUserNotificationCenter * _Nonnull)center willPresentNotification:(UNNotification * _Nonnull)notification withCompletionHandler:(void (^ _Nonnull)(UNNotificationPresentationOptions))completionHandler SWIFT_AVAILABILITY(ios,introduced=10.0);
```
This is how I have written
```
@available(iOS 10.0, *)
func userNotificationCenter(_ center: UNUserNotificationCenter, willPresent notification: UNNotification, withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
handleNotification(notification.request.content.userInfo)
}
```
Please help me figure out what issue I am facing.
I tried cleaning, restarting Xcode, and restarting my Mac. I am using Swift 3.2.
This is my -Swift.h file error
[](https://i.stack.imgur.com/MoFBP.png)
If I comment out the delegates, there are no errors.<issue_comment>username_1: You need to import the following framework: import UserNotifications
Upvotes: 0 <issue_comment>username_2: This is how you need to proceed
```
import UIKit
import UserNotifications
@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
registerForPushNotification()
return true
}
/**Register for push notification*/
func registerForPushNotification() {
let userNotification = UNUserNotificationCenter.current()
userNotification.delegate = self
userNotification.requestAuthorization(options: [.sound, .badge, .alert]) { (status, error) in
if error == nil {
DispatchQueue.main.async { // From iOS 10 onwards we need to call pushNotification registration method in main thread.
UIApplication.shared.registerForRemoteNotifications()
}
}
}
}
}
// MARK: - UNUserNotificationCenterDelegate methods
extension AppDelegate: UNUserNotificationCenterDelegate {
func userNotificationCenter(_ center: UNUserNotificationCenter, willPresent notification: UNNotification, withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
completionHandler([.sound, .badge, .alert])
}
}
```
Thanks.
Upvotes: 0 <issue_comment>username_3: Strange fix, but adding `#import <UserNotifications/UserNotifications.h>` to my bridging header resolved the issue.
Upvotes: 3 [selected_answer]<issue_comment>username_4: You just need to `#import` the required framework in **BridgingHeader.h** to solve the problem,
* e.g.
>
> `#import <UserNotifications/UserNotifications.h>`
>
>
Upvotes: 0 |
2018/03/22 | 1,018 | 3,950 | <issue_start>username_0: I have a requirement to call a controller from java code itself. The controller is as follows,
```
@RequestMapping(value = "temp", method = RequestMethod.POST)
@ResponseBody
public String uploadDataFromExcel(@RequestBody Map colMapObj, @ModelAttribute ReqParam reqParam) {
}
```
I am trying to call the above controller using http post as follows,
```
String url ="http://localhost:8081/LeadM" + "/temp/?searchData="+ reqParam.getSearchData()+" &exportDiscardRec=" + reqParam.isExportDiscardRec() + "&fileName=" + reqParam.getFileName() + "&sheetName=" + reqParam.getSheetName() + "&importDateFormat=" + reqParam.getImportDateFormat() + "&selectedAddressTypes="+ reqParam.getSelectedAddressTypes() + "&duplicatesHandleOn=" + reqParam.getDuplicatesHandleOn() + colMapObj;
HttpPost httpPost = new HttpPost(url);
CloseableHttpClient httpClient = HttpClients.createDefault();
CloseableHttpResponse httpResponse = httpClient.execute(httpPost);
HttpEntity entity = httpResponse.getEntity();
InputStream rstream = entity.getContent();
jsonObject = new JSONObject(new JSONTokener(rstream));
```
where reqParam is the class object and colMapObj is the map that I want to pass to the above controller. However, when the HTTP POST is executed it throws an exception because of the URL.
If anybody knows the right way then please suggest, thank you.<issue_comment>username_1: This should work
```
@RequestMapping(value = "/temp", method = RequestMethod.POST)
@ResponseBody
public String uploadDataFromExcel(@RequestBody Map colMapObj, @ModelAttribute ReqParam reqParam) {
}
```
and url should be
```
String url ="http://localhost:8081/LeadM" + "/temp?"+ reqParam.getSearchData()+" &exportDiscardRec=" + reqParam.isExportDiscardRec() + "&fileName=" + reqParam.getFileName() + "&sheetName=" + reqParam.getSheetName() + "&importDateFormat=" + reqParam.getImportDateFormat() + "&selectedAddressTypes="+ reqParam.getSelectedAddressTypes() + "&duplicatesHandleOn=" + reqParam.getDuplicatesHandleOn() + colMapObj;
```
Upvotes: 2 <issue_comment>username_2: URLs do not work with spaces. From your code above: " &exportDiscardRec="
To avoid such issues use **URIBuilder** or something similar if possible.
Now for the request: you are not building it correctly; for example, you do not provide the body.
Check below example:
```
Map colMapObj = new HashMap<>();
colMapObj.put("testKey", "testdata");
CloseableHttpClient client = HttpClients.createDefault();
HttpPost httpPost = new HttpPost(url);
JSONObject body = new JSONObject(colMapObj);
StringEntity entity = new StringEntity(body.toString());
httpPost.setEntity(entity);
httpPost.setHeader("Accept", "application/json");
httpPost.setHeader("Content-type", "application/json");
CloseableHttpResponse response = client.execute(httpPost);
System.out.println(response.getEntity().toString());
client.close();
```
More examples just google "apache http client post examples" (e.g. <http://www.baeldung.com/httpclient-post-http-request>)
Upvotes: 2 [selected_answer]<issue_comment>username_3: Encode your query string.
```
String endpoint = "http://localhost:8081/LeadM/tmp?";
String query = "searchData="+ reqParam.getSearchData()+" &exportDiscardRec=" + reqParam.isExportDiscardRec() + "&fileName=" + reqParam.getFileName() + "&sheetName=" + reqParam.getSheetName() + "&importDateFormat=" + reqParam.getImportDateFormat() + "&selectedAddressTypes="+ reqParam.getSelectedAddressTypes() + "&duplicatesHandleOn=" + reqParam.getDuplicatesHandleOn() + colMapObj;
String q = URLEncoder.encode(query, "UTF-8");
String finalUrl = endpoint + q;
```
If this doesn't work, then encode individual params before concatenating.
On a side note
1. If you are running in the same JVM, you can call the method directly.
2. If you own the upload method, consider changing the query string into form parameters (see the sketch below).
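A minimal sketch of point 2, assuming Apache HttpClient 4.x; `httpClient`, `reqParam` and the parameter names come from the question, everything else is illustrative:
```
// Sketch: send the parameters as a form body instead of a query string
List<NameValuePair> form = new ArrayList<>();
form.add(new BasicNameValuePair("searchData", reqParam.getSearchData()));
form.add(new BasicNameValuePair("fileName", reqParam.getFileName()));
// ... add the remaining parameters the same way
HttpPost httpPost = new HttpPost("http://localhost:8081/LeadM/temp");
httpPost.setEntity(new UrlEncodedFormEntity(form, StandardCharsets.UTF_8));
CloseableHttpResponse httpResponse = httpClient.execute(httpPost);
```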
Upvotes: 1 |
2018/03/22 | 1,329 | 4,491 | <issue_start>username_0: I design a notebook so that variables that could be changed by the user are grouped into distinct cells throughout the notebook. I would like to highlight those cells with a different background color so that it is obvious to the user where the knobs are.
How could I achieve that?
NB: [This related question](https://stackoverflow.com/questions/28034783/ipython-notebook-backgound-colour-of-a-cell) was about *static* code highlighting (for a manual) and the accepted answer proposed to basically put everything in markup comments. In my case, I want highlighted code to be in a *runnable cell*.<issue_comment>username_1: Here you go (assuming that you use Python kernel):
```
from IPython.display import HTML, display
def set_background(color):
script = (
"var cell = this.closest('.jp-CodeCell');"
"var editor = cell.querySelector('.jp-Editor');"
"editor.style.background='{}';"
"this.parentNode.removeChild(this)"
).format(color)
display(HTML('<img src onerror="{}">'.format(script)))
```
Then use it like this:
```
set_background('honeydew')
```
The solution is a bit hacky, and I would be happy to see a more elegant one.
Demo:
[](https://i.stack.imgur.com/uXXQA.png)
Tested in Firefox 60 and Chrome 67 using JupyterLab 0.32.1.
Edit to have it as cell magic, you could simply do:
```
from IPython.core.magic import register_cell_magic
@register_cell_magic
def background(color, cell):
set_background(color)
return eval(cell)
```
and use it like:
```
%%background honeydew
my_important_param = 42
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: Small addition to [*username_1*'s code](https://stackoverflow.com/a/50824920/6284645) (tried to add it as comment but couldn't get the formatting to work).
```
from IPython.core.magic import register_cell_magic
from IPython.display import HTML, display
@register_cell_magic
def bgc(color, cell=None):
script = (
"var cell = this.closest('.jp-CodeCell');"
"var editor = cell.querySelector('.jp-Editor');"
"editor.style.background='{}';"
"this.parentNode.removeChild(this)"
).format(color)
display(HTML('<img src onerror="{}">'.format(script)))
```
This way you can use it both as magic and with normal function call:
```
bgc('yellow')
bla = 'bla'*3
```
or
```
%%bgc yellow
bla = 'bla'*3
```
Upvotes: 2 <issue_comment>username_3: If you only need to change the color of cells converted with `nbconvert`,
create a template `mytemplate.tpl` in your folder and add:
```
{% extends 'full.tpl'%}
{% block any_cell %}
{% if 'highlight' in cell['metadata'].get('tags', []) %}
{{ super() }}
{% else %}
{{ super() }}
{% endif %}
{% endblock any_cell %}
```
(adapted from the official [docs](https://nbconvert.readthedocs.io/en/latest/customizing.html))
.. then add a tag "highlight" to your cell. In Jupyter lab, you can do this on the left for the selected cell:
[](https://i.stack.imgur.com/Criuh.png)
Now, convert the notebook with nbconvert using the template:
```sh
jupyter nbconvert --to html 'mynb.ipynb' --template=mytemplate.tpl
```
The resulting HTML will look like this:
[](https://i.stack.imgur.com/ktODs.png)
I found this suitable to highlight specific cells to readers.
Upvotes: 2 <issue_comment>username_4: Here's what worked for me in both `jupyter-notebook` (v6.3.0) and `jupyter-nbconvert --to=html` (v6.0.7).
It's different from @username_1 and @username_2's answers in two ways:
1. The interactive notebook uses the class names `.cell` and `.input_area`, but the nbconvert HTML uses `.jp-CodeCell` and `.jp-Editor` and `.highlight`. This code handles all of those.
2. I prefer "line magic" over "cell magic", because line magic doesn't change the evaluation of the rest of the cell.
```
from IPython.core.magic import register_line_magic
from IPython.display import HTML, display
import json
@register_line_magic
def bg(color, cell=None):
script = (
"var n = [this.closest('.cell,.jp-CodeCell')];"
"n = n.concat([].slice.call(n[0].querySelectorAll('.input_area,.highlight,.jp-Editor')));"
f"n.forEach(e=>e.style.background='{color}');"
"this.parentNode.removeChild(this)"
)
display(HTML(f'<img src onerror="{script}">'))
%bg yellow
```
Upvotes: 1 |
2018/03/22 | 697 | 1,846 | <issue_start>username_0: I have a pandas dataframe of the form:
```
index | id | group
0 | abc | A
1 | abc | B
2 | abc | B
3 | abc | C
4 | def | A
5 | def | B
6 | ghi | B
7 | ghi | C
```
I would like to transform this to a weighted graph / adjacency matrix where nodes are the 'group', and the weights are the sum of shared ids per group pair:
The weights are the count of the group pair combinations per id, so:
```
AB = 'abc' indexes (0,1),(0,2) + 'def' indexes (4,5) = 3
AC = 'abc' (0,3) = 1
BC = 'abc' (2,3), (1,3) + 'ghi' (6,7) = 3
```
and the resulting matrix would be:
```
A |B |C
A| 0 |3 |1
B| 3 |0 |3
C| 1 |3 |0
```
At the moment I am doing this very inefficiently by:
```
f = df.groupby(['id']).agg({'group':pd.Series.nunique}) # to count groups per id
f.loc[f['group']>1] # to get a list of the ids with >1 group
# i then for loop through the id's getting the count of values per pair (takes a long time).
```
This is a first pass crude hack approach, I'm sure there must be an alternative approach using groupby or crosstab but I cant figure it out.<issue_comment>username_1: You can use the following:
```
df_merge = df.merge(df, on='id')
results = pd.crosstab(df_merge.group_x, df_merge.group_y)
np.fill_diagonal(results.values, 0)
results
```
Output:
```
group_y A B C
group_x
A 0 3 1
B 3 0 3
C 1 3 0
```
Note: the difference between your result and my result (C-B and B-C are three instead of two) is due to the duplicate records for B in id 'abc' (index rows 1 and 2).
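If you ever want each id to contribute at most one link per group pair, you could drop the duplicated id/group rows before merging; a small sketch of that variation (note that it changes the counts for ids that repeat a group):
```
df_unique = df.drop_duplicates(subset=['id', 'group'])
df_merge = df_unique.merge(df_unique, on='id')
results = pd.crosstab(df_merge.group_x, df_merge.group_y)
np.fill_diagonal(results.values, 0)
```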
Upvotes: 4 [selected_answer]<issue_comment>username_2: Maybe try `dot`
```
s=pd.crosstab(df.id,df.group)
s=s.T.dot(s)
s.values[[np.arange(len(s))]*2] = 0
s
Out[15]:
group A B C
group
A 0 3 1
B 3 0 3
C 1 3 0
```
Upvotes: 1 |
2018/03/22 | 1,779 | 5,733 | <issue_start>username_0: I have written a program that encrypts and decrypts a file that gets read in using a foursquare cipher. Currently, I am storing the file to a character array, passing that into a function which breaks it up into bigrams, and encrypting it that way using nested for loops in another function. At this point, I'm basically trying to optimize the running time. Is there an alternative way to iterate through the two-dimensional array I'm using that doesn't use two for loops or, at the most, using only one for loop? Below is the relevant code:
**FileHandler.java**
```
import java.io.*;
import java.util.*;
public class FileHandler {
private List characters = new ArrayList();
public char fileToChar[];
public void preEncryptionFile(String fileText) throws IOException {
String line;
FileInputStream fileReader = new FileInputStream(fileText);
DataInputStream dataInputStream = new DataInputStream(fileReader);
BufferedReader bufferedReader =
new BufferedReader(new InputStreamReader(dataInputStream));
while ((line = bufferedReader.readLine()) != null) {
characters.add(line);
}
String charsToString = characters.toString();
charsToString = charsToString.replaceAll("[^a-zA-Z]", "").toUpperCase();
fileToChar = charsToString.toCharArray();
bufferedReader.close();
}
}
```
**FourSquareCipher.java**
```
import java.util.*;
public class FourSquareCipher {
List encryptionList = new ArrayList();
List decryptionList = new ArrayList();
private char[][] matrix = {
{ 'A', 'B', 'C', 'D', 'E', 'Z', 'G', 'P', 'T', 'F' },
{ 'F', 'G', 'H', 'I', 'K', 'O', 'I', 'H', 'M', 'U' },
{ 'L', 'M', 'N', 'O', 'P', 'W', 'D', 'R', 'C', 'N' },
{ 'Q', 'R', 'S', 'T', 'U', 'Y', 'K', 'E', 'Q', 'A' },
{ 'V', 'W', 'X', 'Y', 'Z', 'X', 'V', 'S', 'B', 'L' },
{ 'M', 'F', 'N', 'B', 'D', 'A', 'B', 'C', 'D', 'E' },
{ 'C', 'R', 'H', 'S', 'A', 'F', 'G', 'H', 'I', 'K' },
{ 'X', 'Y', 'O', 'G', 'V', 'L', 'M', 'N', 'O', 'P' },
{ 'I', 'T', 'U', 'E', 'W', 'Q', 'R', 'S', 'T', 'U' },
{ 'L', 'Q', 'Z', 'K', 'P', 'V', 'W', 'X', 'Y', 'Z' } };
public void encryptionBigram(char[] fileToText) {
int i;
char x, y;
for (i = 0; i < fileToText.length - 1; i += 2) {
x = fileToText[i];
y = fileToText[i + 1];
encryption(x, y);
}
}
private void encryption(char x, char y) {
int i, j;
int a, b, c, d;
a = b = c = d = 0;
for (i = 0; i < 5; i++) {
for (j = 0; j < 5; j++) {
if (x == matrix[i][j]) {
a = i;
b = j;
}
}
}
for (i = 5; i < 10; i++) {
for (j = 5; j < 10; j++) {
if (y == matrix[i][j]) {
c = i;
d = j;
}
}
}
encryptionList.add(matrix[a][d]);
encryptionList.add(matrix[c][b]);
}
}
```<issue_comment>username_1: [Asymptotically](https://en.wikipedia.org/wiki/Asymptotic_computational_complexity), there's not much to do, since your program still has `O(n)` time complexity - it reads every character and performs fixed 50 iterations in the two nested for loops for each character, plus some other constant-time work. Since you have to read all input, you can't get better asymptotically.
However, if you want to speed up your program somewhat, those nested for loops are indeed the place to start - in each of them, you're effectively performing a lookup - going through the 25 positions and looking for a match, then returning line and column. This can be improved by creating a `Map` that maps every letter to its position data, and has constant asymptotic time complexity for `get()`.
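A minimal sketch of that idea against the code in the question (it assumes `java.util.*` is imported as in the question; the field and method names here are illustrative, and the lookup tables would be built once, e.g. in the constructor):
```
private final Map<Character, int[]> plainSquare1 = new HashMap<>(); // rows 0-4, cols 0-4
private final Map<Character, int[]> plainSquare2 = new HashMap<>(); // rows 5-9, cols 5-9

private void buildLookups() {
    for (int i = 0; i < 5; i++)
        for (int j = 0; j < 5; j++)
            plainSquare1.put(matrix[i][j], new int[] { i, j });
    for (int i = 5; i < 10; i++)
        for (int j = 5; j < 10; j++)
            plainSquare2.put(matrix[i][j], new int[] { i, j });
}

private void encryption(char x, char y) {
    int[] ab = plainSquare1.get(x); // replaces the first nested loop
    int[] cd = plainSquare2.get(y); // replaces the second nested loop
    encryptionList.add(matrix[ab[0]][cd[1]]);
    encryptionList.add(matrix[cd[0]][ab[1]]);
}
```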
Upvotes: 1 <issue_comment>username_2: Using a `break;` will allow you to stop looping as soon as the value is matched, versus continuing through the loop even after your values have been found and set. Of course, if your value is at the end of both loops, this may take longer. If your value is at the front of both loops, then you essentially cut out all the extra comparisons. You could change your mappings to account for higher-frequency letters by placing them in positions that are found earlier in the loops.
```
for (i = 5; i < 10; i++) {
for (j = 5; j < 10; j++) {
if (y == matrix[i][j]) {
c = i;
d = j;
break; // Using break here allows you to do the minimum comparisons.
}
}
}
```
Upvotes: 1 <issue_comment>username_3: Iterate over a 2D array using a single loop.
============================================
More specifically, this code calculates the row & column based on the current index & the width of the sub-arrays.
**This will only work if the sub-arrays are of a fixed length/width.**
I haven't benchmarked this, so I can't say for sure if it's more efficient than what you currently have. The overhead from checking a 2nd condition & incrementing a 2nd variable may be the same (or even less) than calculating the row & column.
Never-the-less:
```
char[][] letters = {
{'A', 'B', 'C'},
{'D', 'E', 'F'},
{'G', 'H', 'I'}
};
int width = 3;
int maxIndex = letters.length * width;
for(int i = 0; i < maxIndex; i++) {
int row = i / width; // determines row
int column = i % width; // determines column
System.out.println("Value["+letters[row][column]+"] Row["+row+"] Column["+column+"]");
}
```
The row is determined by dividing the index (represented by `i`) by the width. Since the width of the sub-arrays in the above example is `3`:
* if the index is `6`, the row is 2.
* if the index is `9`, the row is 3.
* if the index is `11`, the row is 3.66 (still 3)
The column is determined by modular arithmetic. Since every sub-array in the example above has width of `3`:
* if the index is `6`, the column is 0.
* if the index is `9`, the column is 0.
* if the index is `11`, the column is 2.
Upvotes: 3 |
2018/03/22 | 1,075 | 3,543 | <issue_start>username_0: Here is the piece of code so far I've tried:
```
$month = array('red','green','red');
$values = array();
foreach($month as $dataset)
{
$values[] = ($dataset);
}
$columns = implode(", ",array_keys($values));
$escaped_values = array_values($values);
$valu = implode(", ", $escaped_values);
$sql = "INSERT INTO abc (col1,col2,col3) VALUES ('$valu');";
```
Here is the output:
>
> Error: INSERT INTO abc (col1,col2,col3) VALUES ('red, green, red');
> Column count doesn't match value count at row 1
>
>
>
What I am trying to do is to store values in the array where the value of the array may vary depending upon the value the user gave, and then store it in different columns. For example, if the total columns are 3 and array value is 2 then store values in col1 and col2 and null value in col3.<issue_comment>username_1: The values are into the single quote. Please check below example.
```
INSERT INTO abc (col1,col2,col3) VALUES ('red', 'green', 'red');
```
Upvotes: 0 <issue_comment>username_2: With the single quotes around the whole string `'red, green, red'` that is the value for `col1` only.
It should look more like this 'red','green','red'.
So quick fix is this `$valu = implode("','", $escaped_values);`
This adds single quotes inside your implode.
The outer quotes will be captured in the final statement as detailed in the problem above:
```
$sql = "INSERT INTO abc (col1,col2,col3) VALUES ('$valu');";
```
Upvotes: 1 <issue_comment>username_3: No one else seems to have addressed the actual issue.
>
> What I am trying to do is to store values in the array where the value
> of the array may vary depending upon the value the user gave, and then
> store it in different columns. For example, if the total columns are 3
> and array value is 2 then store values in col1 and col2 and null value
> in col3.
>
>
>
So if the values come in as an array at different lengths, and you want to insert as null or limit to the max length of the columns then you can do it like the following.
Ideally, you want to produce an array which looks like:
```
$data = [
'col1' => 'red',
'col2' => 'green',
'col3' => null
];
```
To do that without any looping define the database columns, then create an array of the same length of null values, then slice the input array to the same length and merge, this will produce an array like above.
```
<?php
$columns = [
'col1',
'col2',
'col3',
];
$month = [
'red',
'green'
];
$data = array_combine(
$columns,
array_slice($month, 0, count($columns))+array_fill(0, count($columns), null)
);
```
Now you simply need to implode the array into your query, using `?` for placeholders for the prepared query.
```
$sql = '
INSERT INTO abc (
'.implode(', ', array_keys($data)).'
) VALUES (
'.implode(', ', array_fill(0, count($data), '?')).'
)';
```
Will produce:
```
INSERT INTO abc (
col1, col2, col3
) VALUES (
?, ?, ?
)
```
Then just do your query, for example:
```
$stmt = $pdo->prepare($sql);
$stmt->execute($data);
```
Simple clean and safe.
Upvotes: 0 <issue_comment>username_4: Here is the code after making it work properly:
```
$month = array('red','green','red');
$values = array();
foreach($month as $dataset)
{
$values[] = "'{$dataset}'";
}
$columns = implode(", ",array_keys($values));
$escaped_values = array_values($values);
$valu = implode(", ", $escaped_values);
$sql = "INSERT INTO abc (col1,col2,col3) VALUES ($valu);";
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 1,151 | 3,428 | <issue_start>username_0: I have data in table that can presented by SQL as below :
```
SELECT T.VERSION_ID T_VERSION_ID
,cast(T.START_DATE As Date) as T_START_DATE
,cast(ISNULL( LEAD (START_DATE) OVER (ORDER BY START_DATE),'9999-12-31') As Date) as CALC_END_DATE_LEAD
,cast(ISNULL( LAG (START_DATE) OVER (ORDER BY START_DATE),'9999-12-31') As Date) as CALC_END_DATE_LAG
FROM(select 'Vrandom1' as VERSION_ID
,cast('22-MAR-2018' As Date) as start_date
,'9999-12-31' as end_date
, 1 as is_approved
union
select 'Vrandom2' as VERSION_ID
,cast('28-MAR-2018' As Date) as start_date
,'9999-12-31' as end_date
,1 as is_approved
union
select 'Vrandom3' as VERSION_ID
,cast('25-MAR-2018' As date) as start_date
,'9999-12-31' as end_date
,1 as is_approved
) as T
```
**Output**
```
T_VERSION_ID T_START_DATE CALC_END_DATE_LEAD CALC_END_DATE_LAG
Vrandom1 22/03/2018 25/03/2018 31/12/9999
Vrandom3 25/03/2018 28/03/2018 22/03/2018
Vrandom2 28/03/2018 31/12/9999 25/03/2018
```
This table is used inside an application where one record, say the one with version "Vrandom3", will be in effect. For processing, I need to find the keys of the immediately leading and lagging records by start date, i.e. I would need to display Vrandom2 and Vrandom1 as the keys of the leading and lagging records.
Desired result in the application:
```
T_VERSION_ID T_START_DATE CALC_END_DATE_LEAD CALC_END_DATE_LAG key_leading key_lagging
Vrandom3 25/03/2018 28/03/2018 22/03/2018 Vrandom2 Vrandom1
```
or
```
T_VERSION_ID T_START_DATE CALC_END_DATE_LEAD CALC_END_DATE_LAG key_leading key_lagging
Vrandom1 22/03/2018 25/03/2018 31/12/9999 Vrandom3 null
```
I can think of joining inline views based on start\_date but is there any better way to achieve this?<issue_comment>username_1: [LAG (there's also LEAD) windowing function](https://learn.microsoft.com/en-us/sql/t-sql/functions/lag-transact-sql)
>
> Accesses data from a previous row in the same result set without the
> use of a self-join starting with SQL Server 2012. LAG provides access
> to a row at a given physical offset that comes before the current row.
> Use this analytic function in a SELECT statement to compare values in
> the current row with values in a previous row.
>
>
>
These functions are designed to get leading and lagging rows.
Example from the link:
```
USE AdventureWorks2012;
GO
SELECT BusinessEntityID, YEAR(QuotaDate) AS SalesYear, SalesQuota AS CurrentQuota,
LAG(SalesQuota, 1,0) OVER (ORDER BY YEAR(QuotaDate)) AS PreviousQuota
FROM Sales.SalesPersonQuotaHistory
WHERE BusinessEntityID = 275 and YEAR(QuotaDate) IN ('2005','2006');
```
Upvotes: 1 <issue_comment>username_1: How about adding:
```
,LEAD(key_col) OVER (ORDER BY START_DATE) as Key_col_LEAD
,LAG(key_col) OVER (ORDER BY START_DATE) as Key_col_LAG
```
to your SELECT
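For example, a sketch built on the query from the question (table and column names are taken from there; `key_col` would be `VERSION_ID`, and `my_versions` is a placeholder for the derived table in the question):
```
SELECT T.VERSION_ID AS T_VERSION_ID
      ,cast(T.START_DATE As Date) AS T_START_DATE
      ,LEAD(T.VERSION_ID) OVER (ORDER BY T.START_DATE) AS key_leading
      ,LAG(T.VERSION_ID)  OVER (ORDER BY T.START_DATE) AS key_lagging
FROM my_versions AS T
```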
Upvotes: 1 [selected_answer] |
2018/03/22 | 785 | 2,644 | <issue_start>username_0: I am trying to create a SQL query/stored procedure to count the number of applications in progress on a certain day for a certain team.
Where I am having trouble is the following scenario: when the application is transferred to another user, the count for the day should not double count (a count for each team on the day of transfer) and should go to the transferred user.
My Tables
```
**Users**
Id || Name || TeamId
---------------------------------
1 User 1 1
2 User 2 2
**Application**
Id || Name
-------------
1 Application1
**ApplicationUser**
Id || ApplicationId || UserId || AssignedDate || UnassignedDate
----------------------------------------------------------
1 1 1 2018-03-01 2018-03-02
2 1 2 2018-03-02 2018-03-03
```
so in the Stored Procedure I am sending a date in as a parameter and the result i want to return is the following.
```
Date || Team 1 || Team 2 || Total
-------------------------------------------
2018-03-02 0 1 1
```
so if I put all results together they would look like this.
```
Date || Team 1 || Team 2 || Total
-------------------------------------------
2018-02-28 0 0 0
2018-03-01 1 0 1
2018-03-02 0 1 1
2018-03-03 0 1 1
```
Thank you very much in advance :) |
2018/03/22 | 994 | 3,140 | <issue_start>username_0: I'm trying to rename one column with the `rename` function. However, nothing changes. Could someone help me?
The code i have tried is the following:
```
Snd_Mer_Vol_Output = Snd_Mer_Vol_Output.rename(columns={'(1,Snd_Mer_Vol_Probability')': 'Snd_Mer_Vol_Probability'})
File "", line 1
Snd\_Mer\_Vol\_Output = Snd\_Mer\_Vol\_Output.rename(columns={'(1, 'Snd\_Mer\_Vol\_Probability')': 'Snd\_Mer\_Vol\_Probability'})
^
SyntaxError: invalid syntax
```
Thank you.<issue_comment>username_1: following the answer from <NAME>:
```
import pandas as pd
df = pd.DataFrame({"(1, 'Snd_Mer_Vol_Probability')": [1, 2, 3], "B": [4, 5, 6]})
print (df)
df = df.rename(columns={"(1, 'Snd_Mer_Vol_Probability')": 'Snd_Mer_Vol_Probability'})
print (df)
(1, 'Snd_Mer_Vol_Probability') B
0 1 4
1 2 5
2 3 6
Snd_Mer_Vol_Probability B
0 1 4
1 2 5
2 3 6
```
Upvotes: 1 <issue_comment>username_2: Could you try this instead? Assuming I've understood what you're trying to do, which is rename a column called `(1, 'Snd_Mer_Vol_Probability')` to `Snd_Mer_Vol_Probability`
```
Snd_Mer_Vol_Output.rename(columns={"(1, 'Snd_Mer_Vol_Probability')": 'Snd_Mer_Vol_Probability'},inplace=True)
```
**EDIT:**
You actually need:
```
Snd_Mer_Vol_Output.rename(columns={(1, 'Snd_Mer_Vol_Probability'): 'Snd_Mer_Vol_Probability'},inplace=True)
```
As your .columns output below shows that the column name is a tuple and not a string, so it doesn't need quotes (double or otherwise) around it, as you can see I've done an example myself:
```
df = pd.DataFrame({(1,'hello'):[1],'test':[2]})
print(df)
>> test (1, hello)
>> 2 1
df.rename(columns={(1,'hello'):'testing2'},inplace=True)
print(df)
>> test testing2
>> 2 1
```
Upvotes: 0 <issue_comment>username_3: Ran into the same problem with one specific DF after I concatenated a Pandas DF with two Pandas Series. Tried to use several variants of `df.rename()` listed below. None of them worked.
```
# using column name and axis
df = df.rename({'oldName1':'newName1', 'oldName2':'newName2'}, axis = 'columns')
# using column index and axis
df = df.rename({28:'newName1', 29:'newName2'}, axis = 'columns')
# using column name
df = df.rename(columns = {'oldName1':'newName1', 'oldName2':'newName2'})
# using column name and inplace function
df.rename(columns = {'oldName1':'newName1', 'oldName2':'newName2'}, inplace = True)
# using column index and inplace function
df.rename(columns = {28:'newName1', 29:'newName2'}, inplace = True)
```
Also tried above suggestion `df.rename(columns={(28, 'newName1'): 'newName1'}, inplace = True`, which did not work.
What worked is this:
`df.columns.values[27]= 'newName1'`
This is of course not ideal as it needs to be done individually for each column. As I only had 2 columns to rename this is ok for me. If possible I recommend to use `df.rename()`, but if it just doesn't work this may be an alternative.
Upvotes: 0 |
2018/03/22 | 475 | 1,626 | <issue_start>username_0: Can anybody please help me?
I'm unable to connect to my server after running the command `sudo ufw allow 'Nginx Full'`.
In AWS, is there any option to undo this change, or anything else I can do?
Thanks in advance.<issue_comment>username_1: If you have a server backup, try restoring to that backup.
If not, try looking at [AWS Troubleshooting Guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html).
Please post your error or logs upon connecting. Can't help much without logs.
Upvotes: 0 <issue_comment>username_2: 1. Stop the running EC2 instance
2. Detach its `/dev/sda1` volume (let's call it volume A)
3. Start the new t1.micro EC2 instance, create it on the same subnet, otherwise you will have to terminate the instance and create it again.
4. Attach volume A to the new micro instance, as `/dev/xvdf`
5. SSH to the new micro instance and mount volume A to `/mnt/tmp`
6. Disable UFW by setting `ENABLED=no` in `/mnt/tmp/etc/ufw/ufw.conf`
7. Exit
8. Terminate micro instance
9. Detach volume A from it
10. Attach volume A back to the main instance as /dev/sda1, then start the main instance
11. Login as before
[Source](https://stackoverflow.com/questions/7881469/change-key-pair-for-ec2-instance)
Upvotes: 2 <issue_comment>username_3: After struggling for 2 days I found a few easy alternatives; here they are:
* Use AWS Session Manager to connect without SSH or a key ([yt](https://youtu.be/SwSEmvWMuMU))
* Use EC2 serial console
* Update the user instance details ([link](https://stackoverflow.com/questions/41929267/locked-myself-out-of-ssh-with-ufw-in-ec2-aws))
Upvotes: 0 |
2018/03/22 | 989 | 3,157 | <issue_start>username_0: I'm reading the codes about a TF official example about cifar10 on <https://github.com/tensorflow/models/blob/master/official/resnet/cifar10_main.py>
and I have some questions:
* In the function **`input_fn`**, what does
>
> num\_images = is\_training and \_NUM\_IMAGES['train'] or
> \_NUM\_IMAGES['validation']
>
>
>
... mean? How can we get the right size of data while training and validating through this function?
* In the function **`main`**, there's a similar one
>
> input\_function = FLAGS.use\_synthetic\_data and get\_synth\_input\_fn() or
> input\_fn
>
>
>
Again, I don't know how it works.<issue_comment>username_1: This is a clever application of boolean operations with integers (or actually any objects) in python (see [this question](https://stackoverflow.com/q/394809/712995) for more details). Example:
```
>>> True and 10 or 20
Out[11]: 10
>>> False and 10 or 20
Out[12]: 20
>>> a = False and (lambda x: x) or (lambda x: 2*x)
>>> a(1)
Out[14]: 2
```
So the result of `num_images` is an integer (first or second one depending on `is_training`), the result of `input_function` is a function (again, the first or the second one depending on the flag `use_synthetic_data`).
Upvotes: 0 <issue_comment>username_2: You are the victim of the bad code style of TensorFlow in this case. The tutorial is written by using a particular Python anti-trick in which you use `and` to select the final object of two objects that evaluate to `True` in a Boolean context, and you use `or` to select the final object in the case when the first object evaluates to `False`.
You can see it more easily with some simpler examples:
```
In [9]: 3 and "hello"
Out[9]: 'hello'
In [10]: 3 and False or "foobar"
Out[10]: 'foobar'
```
So these lines are selecting the necessary function or data size by chaining together these two anti-tricks.
This makes for needlessly obfuscated and unreadable code, and the minute someone defends it as OK just because the language supports it or "it's Pythonic", you can mostly stop trusting their advice.
Upvotes: 0 <issue_comment>username_3: ```
num_images = is_training and _NUM_IMAGES['train'] or _NUM_IMAGES['validation']
```
is equivalent to
```
if is_training:
num_images = _NUM_IMAGES['train']
else:
num_images = _NUM_IMAGES['validation']
```
In the same vein:
```
input_function = FLAGS.use_synthetic_data and get_synth_input_fn() or input_fn
```
is equivalent to:
```
if FLAGS.use_synthetic_data:
input_function = get_synth_input_fn()
else:
input_function = input_fn
```
While my more verbose variants may be more readable, the original TensorFlow version is more compact.
The `and` operator short circuits, e.g in
```
(A and B)
```
`B` is only evaluated if `A` is true.
This means that in:
```
A and B or C
```
If `A` is true, then B is evaluated and `or` never gets to evaluate `C`,
so the result is `B`. If `A` is false, then `B` is never evaluated and the result is `C`.
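One caveat worth adding (a general Python gotcha, not something the TensorFlow code runs into here): `A and B or C` falls through to `C` whenever `B` itself is falsy, which is why the conditional expression is normally preferred:
```
is_training = True
num_images = is_training and 0 or 500   # intended 0, but evaluates to 500
num_images = 0 if is_training else 500  # the conditional expression gives 0
```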
For more information study the docs:
<https://docs.python.org/2/library/stdtypes.html#boolean-operations-and-or-not>
Upvotes: 2 [selected_answer] |
2018/03/22 | 648 | 1,849 | <issue_start>username_0: I am trying to remove the empty array elements (from a *csv* file) by using the `splice()` function.
The array elements are stored inside `csv.data`:
```
csv.data.forEach(function(item,index) {
if (item.length < 2) { // don't want anything less than two
csv.data.splice(index,1);
}
});
```
This works, but it still leaves two empty arrays (lines) from the *csv* file: originally there are six empty lines, but it skips two of them.
Am I doing anything wrong?
This is the `csv.data`
```
[
[
"1212",
"okay",
""
],
[
""
],
[
""
],
[
""
],
[
""
],
[
""
],
[
""
]
]
```
Expected
```
[
[
"1212",
"okay",
""
],
]
```<issue_comment>username_1: It's not a good idea to use splice inside the loop. You can miss some indexes.
You can use `filter` function instead of `forEach`
```js
var csv = { data: [["1212", "okay", ""], [""], [""], [""], [""], [""], [""]] };
csv.data = csv.data.filter(items => items.length > 1);
console.log(csv.data);
```
Upvotes: 3 <issue_comment>username_2: You could iterate the array from the end and splice without colliding with the index.
```js
var array = [["1212", "okay", ""], [""], [""], [""], [""], [""], [""]],
i = array.length;
while (i--) {
if (array[i].length < 2) {
array.splice(i, 1);
}
}
console.log(array);
```
Upvotes: 1 [selected_answer]<issue_comment>username_3: If you also wanted to remove empty elements from the top array, you could do another filter.
eg.
```js
const a = [ [ "1212", "okay", "" ], [ "" ], [ "" ], [ "" ], [ "" ], [ "" ], [ "" ] ];
const ret1 = a.map((b) => b.filter((f) => f));
console.log(ret1);
//if you also want to remove empty elements from top array.
const ret2 = ret1.filter((f) => f.length);
console.log(ret2);
```
Upvotes: 1 |
2018/03/22 | 1,061 | 3,988 | <issue_start>username_0: I have implemented [**Firebase**](https://ionicframework.com/docs/native/firebase/) plugin with ionic 3 app. It is working fine. Could you tell me how can I use [**Crashlytics**](https://firebase.google.com/docs/crashlytics/get-started) with this plugin? According to the doc it seems for the native apps. So how can we do it with Ionic 3?
There is a plugin [**cordova-fabric-plugin**](https://www.npmjs.com/package/cordova-fabric-plugin?activeTab=readme) which we can use with Ionic apps. But it seems we don't need it anymore, given this note in the native apps doc (below). Any guidance please?
>
> Note: If you're upgrading from Fabric Crashlytics, remove the Fabric
> API key from your AndroidManifest.xml. Delete the key's meta-data tag,
> which should have io.fabric.ApiKey and the API key.
>
>
><issue_comment>username_1: In theory you should set up the FabricPlugin just as described (the Firebase versions required are lower than the ones included).
To setup on ionic I recommend to read the following issue: <https://github.com/sarriaroman/FabricPlugin/issues/70>
Short Version: Create then a custom error handler
```js
{provide: ErrorHandler, useClass: FabricErrorHandler},
```
and then your custom Error Handler
```js
import {Injectable} from "@angular/core";
import {IonicErrorHandler } from 'ionic-angular';
import * as stacktrace from 'stacktrace-js';
@Injectable()
export class FabricErrorHandler extends IonicErrorHandler {
constructor (public analytics: Analytics) {
super();
}
handleError(error) {
window.fabric.Crashlytics.addLog('crash triggered');
stacktrace.get().then(
trace => window.fabric.Crashlytics.endNonFatalCrash(error.message, trace)
);
super.handleError(error);
}
}
```
Upvotes: 2 <issue_comment>username_2: Fabric does not support any non-native platforms except for Unity at this time. However, as @username_1 mentioned, there are community workarounds.
Upvotes: -1 <issue_comment>username_3: Refer to this link:
<https://fabric.io/kits/android/crashlytics/features>
Crashlytics with Ionic 3 app
First off, follow what they say… installing the IDE plugin, etc.
Now, Ionic uses Gradle for its builds. You would expect the install to work right away, but it won't: Ionic/Cordova overwrites the build.gradle definitions in the dependencies section.
Anyway, first open build.gradle, and in buildscript > repositories, if it's not already in there, add
jcenter() maven { url '<https://maven.fabric.io/public>' }
Next, there are 3 dependencies for each Gradle version. If you know what version you are running, great; otherwise add the following to all of them.
classpath 'io.fabric.tools:gradle:1.+'
Now what we need to do is extend gradle’s build.
Create a file in your android/ios directory called build-extras.gradle.
Inside this, we need to define the repositories.
apply plugin: 'io.fabric' repositories { jcenter() maven { url '<https://maven.fabric.io/public>' } } dependencies { // Crashlytics Kit compile('com.crashlytics.sdk.android:crashlytics:2.5.5@aar') { transitive = true } }
Now, when you run or build the app, it should work, although Fabric.io won't recognize your app.
We will need to install a cordova plugin,
The one I use is:
<https://www.npmjs.com/package/cordova-fabric-plugin>
After this is installed, add a force crash and rebuild & run your app.
If it is still not working, make sure you do not have any ad-blocking software installed. Even if you don’t but you have in the past, reinstall it and make sure all settings are reset (I.E AdAway edits your hosts files, uninstalling it does not reset them, you need to reset them then you can uninstall them).
Hope this has helped anyone else who had issues setting it up.
EDIT
Currently Crashlytics does not support custom stacktraces. All crashes will come from the Crashlytics/Fabric.io plugin. You have to use the logs to specify the error.
Upvotes: 0 |
2018/03/22 | 916 | 3,514 | <issue_start>username_0: I'm trying to create an Integration Automation Script for a PUBLISHED Channel which updates a database field.
Basically for WOACTIVITY I just want a field value setting to 1 for the Work Order if the PUBLISHED channel is triggered.
Any ideas or example scripts that anyone has or can help with please? Just can't get it work.<issue_comment>username_1: In theory you should setup the FabricPlugin just as described. (the versions required of the firebase are lower than the once included)
To setup on ionic I recommend to read the following issue: <https://github.com/sarriaroman/FabricPlugin/issues/70>
Short Version: Create then a custom error handler
```js
{provide: ErrorHandler, useClass: FabricErrorHandler},
```
and then your custom Error Handler
```js
import {Injectable} from "@angular/core";
import {IonicErrorHandler } from 'ionic-angular';
import * as stacktrace from 'stacktrace-js';
@Injectable()
export class FabricErrorHandler extends IonicErrorHandler {
constructor (public analytics: Analytics) {
super();
}
handleError(error) {
window.fabric.Crashlytics.addLog('crash triggered');
stacktrace.get().then(
trace => window.fabric.Crashlytics.endNonFatalCrash(error.message, trace)
);
super.handleError(error);
}
}
```
Upvotes: 2 <issue_comment>username_2: Fabric does not support any non-native platforms except for Unity at this time. However, as @username_1 mentioned, there are community workarounds.
Upvotes: -1 <issue_comment>username_3: Refer this link
<https://fabric.io/kits/android/crashlytics/features>
Crashlytics with Ionic 3 app
First off, follow what they say… installing the IDE plugin, etc.
Now, Ionic uses Gradle for its builds. You would expect the install to work right away, but it won't: Ionic/Cordova overwrites the build.gradle definitions in the dependencies section.
Anyway, first off open build.gradle, and in buildscript > repositories, if its not already in there, add
jcenter() maven { url '<https://maven.fabric.io/public>' }
Next, there are 3 dependencies for each gradle version. if you know what version you are running, great. otherwise add the following to all of them.
classpath 'io.fabric.tools:gradle:1.+'
Now what we need to do is extend gradle’s build.
Create a file in your android/ios directory called build-extras.gradle.
Inside this, we need to define the repositories.
apply plugin: 'io.fabric' repositories { jcenter() maven { url '<https://maven.fabric.io/public>' } } dependencies { // Crashlytics Kit compile('com.crashlytics.sdk.android:crashlytics:2.5.5@aar') { transitive = true } }
Now, when you run or build the app, it should work, although Fabric.io won't recognize your app.
We will need to install a cordova plugin,
The one I use is:
<https://www.npmjs.com/package/cordova-fabric-plugin>
After this is installed, add a force crash and rebuild & run your app.
If it is still not working, make sure you do not have any ad-blocking software installed. Even if you don’t but you have in the past, reinstall it and make sure all settings are reset (I.E AdAway edits your hosts files, uninstalling it does not reset them, you need to reset them then you can uninstall them).
Hope this has helped anyone else who had issues setting it up.
EDIT
Currently Crashlytics does not support custom stacktraces. All crashes will come from the Crashlytics/Fabric.io plugin. You have to use the logs to specify the error.
Upvotes: 0 |
2018/03/22 | 1,631 | 5,403 | <issue_start>username_0: I am trying to create a working program where you have to input town, product and quantity and output the total price.
For example Town1>Milk>2 should result in `2`. But for some reason, there is no output. Can somebody please help me and show me the mistake?
Here is the code:
```
Console.Write("Enter product: ");
var product = Console.ReadLine().ToLower();
Console.Write("Enter town: ");
var town = Console.ReadLine().ToLower();
Console.Write("Enter quantity: ");
var quantity = double.Parse(Console.ReadLine());
if (town == "Town1")
{
if (product == "Milk")
Console.WriteLine(1.50 * quantity);
if (product == "Water")
Console.WriteLine(0.80 * quantity);
if (product == "Whiskey")
Console.WriteLine(4.20 * quantity);
if (product == "Peanuts")
Console.WriteLine(0.90 * quantity);
if (product == "Chocolate")
Console.WriteLine(2.60 * quantity);
}
if (town == "Town2")
{
if (product == "Milk")
Console.WriteLine(1.40 * quantity);
if (product == "Water")
Console.WriteLine(0.70 * quantity);
if (product == "Whiskey")
Console.WriteLine(3.90 * quantity);
if (product == "Peanuts")
Console.WriteLine(0.70 * quantity);
if (product == "Chocolate")
Console.WriteLine(1.50 * quantity);
}
if (town == "Town3")
{
if (product == "Milk")
Console.WriteLine(1.90 * quantity);
if (product == "Water")
Console.WriteLine(1.50 * quantity);
if (product == "Whiskey")
Console.WriteLine(5.10 * quantity);
if (product == "Peanuts")
Console.WriteLine(1.35 * quantity);
if (product == "Chocolate")
Console.WriteLine(3.10 * quantity);
}
}}}
```<issue_comment>username_1: You are setting `town = value.ToLower()` and `product = value.ToLower()`, this makes all characters lowercase, change these lines:
```
var town = Console.ReadLine().ToLower();
var product = Console.ReadLine().ToLower();
```
To this:
```
var town = Console.ReadLine();
var product = Console.ReadLine();
```
OR change your if statement conditions to use lowercase values as comparisons
```
if (town == "town1")
{
```
etc...
Upvotes: 4 [selected_answer]<issue_comment>username_2: For bonus points, consider using a `Dictionary<,>`, which will also allow you to specify that a case-insensitive comparison should be performed.
```
var townProductPrices = new Dictionary<string, Dictionary<string, double>>(StringComparer.CurrentCultureIgnoreCase) {
["Town1"] = new Dictionary(StringComparer.CurrentCultureIgnoreCase) {
["Milk"] = 1.50d,
["Water"] = 0.80d,
["Whiskey"] = 4.20d,
["Peanuts"] = 0.90d,
["Chocolate"] = 2.60d,
},
["Town2"] = new Dictionary(StringComparer.CurrentCultureIgnoreCase) {
["Milk"] = 1.40d,
["Water"] = 0.70d,
["Whiskey"] = 3.90d,
["Peanuts"] = 0.70d,
["Chocolate"] = 1.50d,
},
//...
};
Console.Write("Enter product: ");
var product = Console.ReadLine().Trim();
Console.Write("Enter town: ");
var town = Console.ReadLine().Trim();
Console.Write("Enter quantity: ");
var quantity = double.Parse(Console.ReadLine().Trim());
var productPrices = townProductPrices[town];
var price = productPrices[product];
var total = price * quantity;
Console.WriteLine(total.ToString("c"));
```
Upvotes: 1 <issue_comment>username_3: You dont need these many matches. If at all this matters, Try this:
```
public class Program
{
private static void Main(string[] args)
{
Console.Write("Enter product: ");
var product = Console.ReadLine().ToLower();
Console.Write("Enter town: ");
var town = Console.ReadLine().ToLower();
Console.Write("Enter quantity: ");
var quantity = double.Parse(Console.ReadLine());
var mapperData = new List<Mapper>()
{
new Mapper { TownName = "Town1", ProductInfo = "Milk", Quantity = 1.50 },
new Mapper { TownName = "Town1", ProductInfo = "Water", Quantity = 0.80 },
new Mapper { TownName = "Town1", ProductInfo = "Whiskey", Quantity = 4.20 },
new Mapper { TownName = "Town1", ProductInfo = "Peanuts", Quantity = 0.90 },
new Mapper { TownName = "Town1", ProductInfo = "Chocolate", Quantity = 2.60 },
new Mapper { TownName = "Town2", ProductInfo = "Milk", Quantity = 1.40 },
new Mapper { TownName = "Town2", ProductInfo = "Water", Quantity = 0.70 },
new Mapper { TownName = "Town2", ProductInfo = "Whiskey", Quantity = 3.90 },
new Mapper { TownName = "Town2", ProductInfo = "Peanuts", Quantity = 0.70 },
new Mapper { TownName = "Town2", ProductInfo = "Chocolate", Quantity = 1.50 },
new Mapper { TownName = "Town3", ProductInfo = "Milk", Quantity = 1.90 },
new Mapper { TownName = "Town3", ProductInfo = "Water", Quantity = 1.50 },
new Mapper { TownName = "Town3", ProductInfo = "Whiskey", Quantity = 5.10 },
new Mapper { TownName = "Town3", ProductInfo = "Peanuts", Quantity = 1.35 },
new Mapper { TownName = "Town3", ProductInfo = "Chocolate", Quantity = 3.10 },
};
var matchingQuantity = mapperData.FirstOrDefault(i => i.TownName.ToString().ToLower() == town.ToLower().Trim()
&& i.ProductInfo.ToString().ToLower() == product.ToLower().Trim()).Quantity;
Console.WriteLine(matchingQuantity * quantity);
}
}
public class Mapper
{
public string TownName { get; set; }
public string ProductInfo { get; set; }
public double Quantity { get; set; }
}
```
Furthermore, you can even create enums for town and product and use them.
Upvotes: -1 |
2018/03/22 | 1,496 | 4,905 | <issue_start>username_0: I'm having issues with css and js files being cached and a solution I found here was to add a querystring and version number to my links. Unfortunately this does not work with all browsers. Another solution proffered was to embed the version number into the filename and the remove it with URL rewrite.
My files are typically `http://Site/SiteScripts/Test_js.2018.2.11.6645.js` or
```
http://Site/Css/SiteCss.2018.2.11.6645.css
```
In both cases I need the 2018.2.11.6645 part removed. This is the Version that I get from Assembly and changes with each build but is always the format of yyyy.d+.d+.d+ where yyyy is current year and d+ is one or more digits. If it makes it easier I could just user the last 4 digits only e.g. `http://Site/SiteScripts/Test_js.6645.js`
The finished result should be <http://Site/SiteScripts/Test_js.js> and <http://Site/Css/SiteCss.css>
I've got the following regex that I think is a correct match for the filename and extension....
```
\w+\.\d+\.\d+.\d+\.\d+\.(js|css)
```
but I'm having trouble working out how to remove the version and the reconstructing the url to have the full path and filename plus extention without the version.
Any pointers would be appreciated.<issue_comment>username_1: You are setting `town = value.ToLower()` and `product = value.ToLower()`, this makes all characters lowercase, change these lines:
```
var town = Console.ReadLine().ToLower();
var product = Console.ReadLine().ToLower();
```
To this:
```
var town = Console.ReadLine();
var product = Console.ReadLine();
```
OR change your if statement conditions to use lowercase values as comparisons
```
if (town == "town1")
{
```
etc...
Upvotes: 4 [selected_answer]<issue_comment>username_2: For bonus points, consider using a `Dictionary<,>`, which will also allow you to specify that a case-insensitive comparison should be performed.
```
var townProductPrices = new Dictionary<string, Dictionary<string, double>>(StringComparer.CurrentCultureIgnoreCase) {
["Town1"] = new Dictionary(StringComparer.CurrentCultureIgnoreCase) {
["Milk"] = 1.50d,
["Water"] = 0.80d,
["Whiskey"] = 4.20d,
["Peanuts"] = 0.90d,
["Chocolate"] = 2.60d,
},
["Town2"] = new Dictionary(StringComparer.CurrentCultureIgnoreCase) {
["Milk"] = 1.40d,
["Water"] = 0.70d,
["Whiskey"] = 3.90d,
["Peanuts"] = 0.70d,
["Chocolate"] = 1.50d,
},
//...
};
Console.Write("Enter product: ");
var product = Console.ReadLine().Trim();
Console.Write("Enter town: ");
var town = Console.ReadLine().Trim();
Console.Write("Enter quantity: ");
var quantity = double.Parse(Console.ReadLine().Trim());
var productPrices = townProductPrices[town];
var price = productPrices[product];
var total = price * quantity;
Console.WriteLine(total.ToString("c"));
```
Upvotes: 1 <issue_comment>username_3: You dont need these many matches. If at all this matters, Try this:
```
public class Program
{
private static void Main(string[] args)
{
Console.Write("Enter product: ");
var product = Console.ReadLine().ToLower();
Console.Write("Enter town: ");
var town = Console.ReadLine().ToLower();
Console.Write("Enter quantity: ");
var quantity = double.Parse(Console.ReadLine());
var mapperData = new List<Mapper>()
{
new Mapper { TownName = "Town1", ProductInfo = "Milk", Quantity = 1.50 },
new Mapper { TownName = "Town1", ProductInfo = "Water", Quantity = 0.80 },
new Mapper { TownName = "Town1", ProductInfo = "Whiskey", Quantity = 4.20 },
new Mapper { TownName = "Town1", ProductInfo = "Peanuts", Quantity = 0.90 },
new Mapper { TownName = "Town1", ProductInfo = "Chocolate", Quantity = 2.60 },
new Mapper { TownName = "Town2", ProductInfo = "Milk", Quantity = 1.40 },
new Mapper { TownName = "Town2", ProductInfo = "Water", Quantity = 0.70 },
new Mapper { TownName = "Town2", ProductInfo = "Whiskey", Quantity = 3.90 },
new Mapper { TownName = "Town2", ProductInfo = "Peanuts", Quantity = 0.70 },
new Mapper { TownName = "Town2", ProductInfo = "Chocolate", Quantity = 1.50 },
new Mapper { TownName = "Town3", ProductInfo = "Milk", Quantity = 1.90 },
new Mapper { TownName = "Town3", ProductInfo = "Water", Quantity = 1.50 },
new Mapper { TownName = "Town3", ProductInfo = "Whiskey", Quantity = 5.10 },
new Mapper { TownName = "Town3", ProductInfo = "Peanuts", Quantity = 1.35 },
new Mapper { TownName = "Town3", ProductInfo = "Chocolate", Quantity = 3.10 },
};
var matchingQuantity = mapperData.FirstOrDefault(i => i.TownName.ToString().ToLower() == town.ToLower().Trim()
&& i.ProductInfo.ToString().ToLower() == product.ToLower().Trim()).Quantity;
Console.WriteLine(matchingQuantity * quantity);
}
}
public class Mapper
{
public string TownName { get; set; }
public string ProductInfo { get; set; }
public double Quantity { get; set; }
}
```
Furthermore, you can even create enums for town and product and use them.
Upvotes: -1 |
2018/03/22 | 1,443 | 4,706 | <issue_start>username_0: The **i** variable from the **for** counter works when getting each API for each Twitch streamer, however when I use it to generate divs it just comes out as 8. Is there a way I can make the counter work when getting the API data and iterating through the streamers?
```
$(document).ready(function(){
// streamers I want to look up
var streamers = ["ESL_SC2", "OgamingSC2", "cretetion", "freecodecamp", "storbeck", "habathcx", "RobotCaleb", "noobs2ninjas"]
// counts the number of streamers in the array
for (var i = 0; i < streamers.length; i++){
// gets each streamer, one by one
$.getJSON("https://wind-bow.gomix.me/twitch-api/streams/" + streamers[i] +"?callback=?", function(json) {
//if they are not offline, pulls information from them and adds it to a div
if ((json.stream) !== null) {
$("#results").prepend("" + json.stream.channel.display\_name + "");
// if they are offline, marks them as offline
} else {
$("#results").append("" + streamers[i] + " is offline");
}
});
};
```<issue_comment>username_1: You are setting `town = value.ToLower()` and `product = value.ToLower()`, this makes all characters lowercase, change these lines:
```
var town = Console.ReadLine().ToLower();
var product = Console.ReadLine().ToLower();
```
To this:
```
var town = Console.ReadLine();
var product = Console.ReadLine();
```
OR change your if statement conditions to use lowercase values as comparisons
```
if (town == "town1")
{
```
etc...
Upvotes: 4 [selected_answer]<issue_comment>username_2: For bonus points, consider using a `Dictionary<,>`, which will also allow you to specify that a case-insensitive comparison should be performed.
```
var townProductPrices = new Dictionary<string, Dictionary<string, double>>(StringComparer.CurrentCultureIgnoreCase) {
["Town1"] = new Dictionary(StringComparer.CurrentCultureIgnoreCase) {
["Milk"] = 1.50d,
["Water"] = 0.80d,
["Whiskey"] = 4.20d,
["Peanuts"] = 0.90d,
["Chocolate"] = 2.60d,
},
["Town2"] = new Dictionary(StringComparer.CurrentCultureIgnoreCase) {
["Milk"] = 1.40d,
["Water"] = 0.70d,
["Whiskey"] = 3.90d,
["Peanuts"] = 0.70d,
["Chocolate"] = 1.50d,
},
//...
};
Console.Write("Enter product: ");
var product = Console.ReadLine().Trim();
Console.Write("Enter town: ");
var town = Console.ReadLine().Trim();
Console.Write("Enter quantity: ");
var quantity = double.Parse(Console.ReadLine().Trim());
var productPrices = townProductPrices[town];
var price = productPrices[product];
var total = price * quantity;
Console.WriteLine(total.ToString("c"));
```
Upvotes: 1 <issue_comment>username_3: You dont need these many matches. If at all this matters, Try this:
```
public class Program
{
private static void Main(string[] args)
{
Console.Write("Enter product: ");
var product = Console.ReadLine().ToLower();
Console.Write("Enter town: ");
var town = Console.ReadLine().ToLower();
Console.Write("Enter quantity: ");
var quantity = double.Parse(Console.ReadLine());
var mapperData = new List<Mapper>()
{
new Mapper { TownName = "Town1", ProductInfo = "Milk", Quantity = 1.50 },
new Mapper { TownName = "Town1", ProductInfo = "Water", Quantity = 0.80 },
new Mapper { TownName = "Town1", ProductInfo = "Whiskey", Quantity = 4.20 },
new Mapper { TownName = "Town1", ProductInfo = "Peanuts", Quantity = 0.90 },
new Mapper { TownName = "Town1", ProductInfo = "Chocolate", Quantity = 2.60 },
new Mapper { TownName = "Town2", ProductInfo = "Milk", Quantity = 1.40 },
new Mapper { TownName = "Town2", ProductInfo = "Water", Quantity = 0.70 },
new Mapper { TownName = "Town2", ProductInfo = "Whiskey", Quantity = 3.90 },
new Mapper { TownName = "Town2", ProductInfo = "Peanuts", Quantity = 0.70 },
new Mapper { TownName = "Town2", ProductInfo = "Chocolate", Quantity = 1.50 },
new Mapper { TownName = "Town3", ProductInfo = "Milk", Quantity = 1.90 },
new Mapper { TownName = "Town3", ProductInfo = "Water", Quantity = 1.50 },
new Mapper { TownName = "Town3", ProductInfo = "Whiskey", Quantity = 5.10 },
new Mapper { TownName = "Town3", ProductInfo = "Peanuts", Quantity = 1.35 },
new Mapper { TownName = "Town3", ProductInfo = "Chocolate", Quantity = 3.10 },
};
var matchingQuantity = mapperData.FirstOrDefault(i => i.TownName.ToString().ToLower() == town.ToLower().Trim()
&& i.ProductInfo.ToString().ToLower() == product.ToLower().Trim()).Quantity;
Console.WriteLine(matchingQuantity * quantity);
}
}
public class Mapper
{
public string TownName { get; set; }
public string ProductInfo { get; set; }
public double Quantity { get; set; }
}
```
Furthermore, you can even create enums for town and product and use them.
Upvotes: -1 |
2018/03/22 | 472 | 1,535 | <issue_start>username_0: I am working on an automation, where I will get the list of jobs that did not start to run even though the scheduled time has crossed. I am going to get the list based on an 2 hour time gap.
Now my question is how to get the list of jobs that are scheduled on a particular time period on that particular day.
For eg., 22-03-3018 08:00 - 10:00 am list of jobs scheduled on this period
I want to execute the command in unix.<issue_comment>username_1: Depending on how your linux system is set up, you can look in:
```
/var/spool/cron/* (user crontabs)
/etc/crontab (system-wide crontab)
```
also, many distros have:
```
/etc/cron.d/* These configurations have the same syntax as /etc/crontab
/etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly, /etc/cron.monthly
```
These are simply directories that contain executables that are executed hourly, daily, weekly or monthly, per their directory name.
On top of that, you can have at jobs (check /var/spool/at/*), anacron (/etc/anacrontab and /var/spool/anacron/*) and probably others I'm forgetting.
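If you want to script the "which jobs fall in this window" check yourself (for example the 08:00 - 10:00 window from the question), a rough sketch is below. It is only an illustration: it assumes a system-style crontab (minute hour dom month dow user command) and the third-party `croniter` package (pip install croniter); adjust the file path and window to your setup.
```
from datetime import datetime
from croniter import croniter

def jobs_in_window(crontab_path, start, end):
    # Return (next_run, command) for entries that would fire between start and end.
    hits = []
    with open(crontab_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#') or '=' in line.split()[0]:
                continue  # skip blanks, comments and VAR=value lines
            fields = line.split(None, 6)
            if len(fields) < 7:
                continue  # expecting: min hour dom month dow user command
            expr, command = ' '.join(fields[:5]), fields[6]
            nxt = croniter(expr, start).get_next(datetime)
            if nxt <= end:
                hits.append((nxt, command))
    return sorted(hits)

window_start = datetime(2018, 3, 22, 8, 0)
window_end = datetime(2018, 3, 22, 10, 0)
for when, cmd in jobs_in_window('/etc/crontab', window_start, window_end):
    print(when, cmd)
```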
Upvotes: 1 <issue_comment>username_2: In Autosys there is no easy way to do this using the native Autosys commands.
However, you can get all this information from the DB. It is in the WCC DB, in the dbo.MON_JOB table; the query below should get you started:
```
SELECT [NAME]
,[NEXT_TIME]
FROM [CAAutosys11_3_5_WCC_PRD].[dbo].[MON_JOB]
WHERE [NEXT_TIME] > '1970'
ORDER BY [NEXT_TIME] ASC
```
Let me know if you need more clarification.
Upvotes: 0 |
2018/03/22 | 451 | 1,483 | <issue_start>username_0: Whenever I try printing I always get truncated results
```
import tensorflow as tf
import numpy as np
np.set_printoptions(threshold=np.nan)
tensor = tf.constant(np.ones(999))
tensor = tf.Print(tensor, [tensor])
sess = tf.Session()
sess.run(tensor)
```
As you can see I've followed a guide I found on [Print full value of tensor into console or write to file in tensorflow](https://stackoverflow.com/questions/35298299/print-full-value-of-tensor-into-console-or-write-to-file-in-tensorflow)
But the output is simply
...\core\kernels\logging_ops.cc:79] [1 1 1...]
I want to see the full tensor, thanks.<issue_comment>username_1: This is solved easily by checking [the Tensorflow API](https://www.tensorflow.org/api_docs/python/tf/Print) for `tf.Print`. Pass `summarize=n` where `n` is the number of elements you want displayed.
Upvotes: 4 <issue_comment>username_2: You can do it as follows in TensorFlow 2.x:
```
import tensorflow as tf
tensor = tf.constant(np.ones(999))
tf.print(tensor, summarize=-1)
```
From TensorFlow docs -> summarize: The first and last summarize elements within each dimension are recursively printed per Tensor. If set to -1, it will print all elements of every tensor.
<https://www.tensorflow.org/api_docs/python/tf/print>
Upvotes: 4 <issue_comment>username_3: To print all tensors without truncation in TensorFlow 2.x:
```
import numpy as np
import sys
np.set_printoptions(threshold=sys.maxsize)
```
Upvotes: 1 |
2018/03/22 | 696 | 2,448 | <issue_start>username_0: I'm given an image (32, 32, 3) and two vectors (3,) that represent mean and std. I'm trying normalize the image by getting the image into a state where I can subtract the mean and divide by the std but I'm getting the following error when I try to plot it.
```
ValueError: Floating point image RGB values must be in the 0..1 range.
```
I understand the error so I'm thinking I'm not performing the correct operations when I try to normalize. Below is the code I'm trying to use normalize the image.
```
mean.shape #(3,)
std.shape #(3,)
sample.shape #(32,32,3)
# trying to unroll and by RGB channels
channel_1 = sample[:, :, 0].ravel()
channel_2 = sample[:, :, 1].ravel()
channel_3 = sample[:, :, 2].ravel()
# Putting the vectors together so I can try to normalize
rbg_matrix = np.column_stack((channel_1,channel_2,channel_3))
# Trying to normalize
rbg_matrix = rbg_matrix - mean
rbg_matrix = rbg_matrix / std
# Trying to put back in "image" form
rgb_image = np.reshape(rbg_matrix,(32,32,3))
```<issue_comment>username_1: Your error seems to point to a lack of normalization of the image.
I've used this function to normalize images in my Deep Learning projects
```
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalized data
"""
return np.array((x - np.min(x)) / (np.max(x) - np.min(x)))
```
Upvotes: 3 <issue_comment>username_2: Normalizing an image by setting its mean to zero and its standard deviation to 1, as you do, leads to an image where a majority, but not all, of the pixels are in the range [-2,2]. This is perfectly valid for further processing, and often explicitly applied in certain machine learning methods. I have seen it referred to as "whitening", but is more properly called a [standardization transform](https://en.wikipedia.org/wiki/Whitening_transformation).
It seems that the plotting function you use expects the image to be in the range [0,1]. This is a limitation of the plotting function, not of your normalization. [Other image display functions](https://github.com/DIPlib/diplib) would be perfectly able to show your image.
To normalize to the [0,1] range you should not use the mean and standard deviation, but the maximum and the minimum, as shown in [username_1's answer](https://stackoverflow.com/a/49429842/7328782).
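If you do want to keep the standardized image (zero mean, unit standard deviation) and only need to display it, you can rescale a copy just for plotting. A small sketch, assuming `sample`, `mean` and `std` as defined in the question:
```
import numpy as np
import matplotlib.pyplot as plt

standardized = (sample - mean) / std   # zero mean, unit std per channel

# Option 1: min-max rescale the standardized image just for display
display = (standardized - standardized.min()) / (standardized.max() - standardized.min())

# Option 2: clip to +/- 2 standard deviations and map that range to [0,1]
display_clipped = np.clip(standardized, -2, 2) / 4 + 0.5

plt.imshow(display)
plt.show()
```
The second variant simply clips to two standard deviations and maps that range to [0,1], which is usually enough for a quick visual check.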
Upvotes: 0 |
2018/03/22 | 832 | 2,613 | <issue_start>username_0: I'm trying to create a view that returns all the bookings that are between a certain date. I have a table called Booking which includes the columns:
```
startDate date,
noOfDays int,
```
and more columns which aren't relevant to the view I'm trying to create.
I'm trying to run the following query to create a view but it seems to fail with the error **"ORA-00904: "DATEADD": Invalid identifier"**
```
CREATE VIEW Present_bookings AS
SELECT * FROM Booking
WHERE startDate => '2018-03-12'
AND startDate <= DATEADD(Booking.startDate, 'YYYY-MM-DD', Booking.noOfDays);
```
What am I doing wrong?<issue_comment>username_1: There are a couple of things wrong with your query. But the primary thing is that `DATEADD()` is not a valid function in Oracle. Oracle date arithmetic is very simple; just add `noOfDays` to `startDate`:
```
CREATE VIEW Present_bookings AS
SELECT * FROM Booking
WHERE startDate >= DATE'2018-03-12'
AND startDate <= startDate + noOfDays;
```
Note that I changed your operator `=>` to `>=` and added the `DATE` keyword to the date literal `2018-03-12` to tell Oracle that it's a date.
As an aside, I don't know what the second condition is supposed to do; `startDate` should always be less than `startDate + noOfDays` unless `noOfDays` is negative. I think you might want something like the following:
```
CREATE VIEW Present_bookings AS
SELECT * FROM Booking
WHERE startDate <= DATE'2018-03-12'
AND DATE'2018-03-12' <= startDate + noOfDays;
```
OR
```
CREATE VIEW Present_bookings AS
SELECT * FROM Booking
WHERE DATE'2018-03-12' BETWEEN startDate AND startDate + noOfDays;
```
Upvotes: 1 <issue_comment>username_2: The logic is not correct. **Every startDate is less than startDate + noOfDays** (assuming noOfDays is positive). Are you sure about this condition? If yes, then it can be simplified as:
```
CREATE VIEW Present_bookings AS
select *
FROM Booking
where startDate >= to_date('2018-03-12','YYYY-MM-DD')
```
OR probably you are thinking about this:
```
CREATE VIEW Present_bookings AS
select *
FROM Booking
where startDate between to_date('2018-03-12','YYYY-MM-DD')
and to_date('2018-03-17','YYYY-MM-DD')
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: I guess you're looking for an easy way to add a number of days to a date.
Try using `interval` here:
```
select sysdate, sysdate + interval '5' day as "FUTURE" from dual;
=>
SYSDATE FUTURE
22.03.2018 14:37:27 27.03.2018 14:37:27
```
Take care of the `'5'` as a character, not a number.
This also works with years, months (take care of 29.02 here) etc.
Upvotes: 0 |
2018/03/22 | 902 | 2,872 | <issue_start>username_0: After firing some async queries on my multi node cluster using a python program , one of my node in cluster has gone down.
>
> cassandra.pool:Error attempting to reconnect to 192.168.19.5,
> scheduling retry in 4.0 seconds: errors=Timed out creating connection
> (5 seconds), last\_host=None
>
>
>
But upon `nodetool status`, I see all of my nodes to be up.
I tried cqlsh on the affected node but received the following error:
>
> Connection error: ('Unable to connect to any servers',
> {'': OperationTimedOut('errors=Timed out creating
> connection (5 seconds), last\_host=None',)})
>
>
>
I am wondering whether it is a client request timeout, a connect timeout, or a server-side read request timeout.
To resolve the issue: I tried `cqlsh --request-timeout 20` but I am still getting the same Connection error.
What could be the solution?<issue_comment>username_1: There are a couple of things wrong with your query. But the primary thing is that `DATEADD()` is not a valid function in Oracle. Oracle date arithmetic is very simple; just add `noOfDays` to `startDate`:
```
CREATE VIEW Present_bookings AS
SELECT * FROM Booking
WHERE startDate >= DATE'2018-03-12'
AND startDate <= startDate + noOfDays;
```
Note that I changed your operator `=>` to `>=` and added the `DATE` keyword to the date literal `2018-03-12` to tell Oracle that it's a date.
As an aside, I don't know what the second condition is supposed to do; `startDate` should always be less than `startDate + noOfDays` unless `noOfDays` is negative. I think you might want something like the following:
```
CREATE VIEW Present_bookings AS
SELECT * FROM Booking
WHERE startDate <= DATE'2018-03-12'
AND DATE'2018-03-12' <= startDate + noOfDays;
```
OR
```
CREATE VIEW Present_bookings AS
SELECT * FROM Booking
WHERE DATE'2018-03-12' BETWEEN startDate AND startDate + noOfDays;
```
Upvotes: 1 <issue_comment>username_2: The logic is not correct. **All startdate is less than startdate+5 days**. Are you sure about this condition? If yes, then it will be simplied as:
```
CREATE VIEW Present_bookings AS
select *
FROM Booking
where startDate >= to_date('2018-03-12','YYYY-MM-DD')
```
OR probably you are thinking about this:
```
CREATE VIEW Present_bookings AS
select *
FROM Booking
where startDate between to_date('2018-03-12','YYYY-MM-DD')
and to_date('2018-03-17','YYYY-MM-DD')
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: I guess you're looking for a easy way to add a number of days to a date.
Try using `interval` here:
```
select sysdate, sysdate + interval '5' day as "FUTURE" from dual;
=>
SYSDATE FUTURE
22.03.2018 14:37:27 27.03.2018 14:37:27
```
Take care of the `'5'` as a character, not a number.
This also works with years, months (take care of 29.02 here) etc.
Upvotes: 0 |
2018/03/22 | 1,603 | 5,748 | <issue_start>username_0: I'm all very new to this stuff - so apologies if this is a silly question.
I've created a multiple-choice quiz using HTML, CSS, JavaScript (angular.js), and a JSON data file; off a tutorial I found. I am rather pleased with how it turned out; but I really need to add the functionality of having the user select more than one answer for the question to be marked as correct. How can this be done? Is it simply a case of marking more than one correct answer in the JSON file?
I appreciate any help!
Here is my code:
HTML:
```
Test Your Knowledge: Saturn
Test your Knowledge: Planets
============================
Welcome
-------
Click begin to start a topic
Begin
{{myQuestion.question}}
{{Answer.text}}
You are **correct**.
Oops! That is not correct.
{{myQuestion.feedback}}
Continue
### Results
You scored {{percentage}}% by correctly answering {{score}} of the total {{totalQuestions}} questions.
Use the links below to challenge your friends.
```
JSON (there is more than one question - but they're all in this format.
```
[{
"question" : "What is the name of Saturn's largest moon?",
"answers" : [
{"id" : 0, "text" : "Hercules"},
{"id" : 1, "text" : "Europa"},
{"id" : 2, "text" : "Goliath"},
{"id" : 3, "text" : "Zeus"},
{"id" : 4, "text" : "Titan"},
{"id" : 5, "text" : "Triton"}
],
"correct" : 4,
"feedback" : "Though the names seem similar, Triton orbits the planet Neptune."
}]
```
JavaScript
```
(function(){
var app = angular.module('myQuiz',[]);
app.controller('QuizController',['$scope','$http','$sce',function($scope,$http,$sce){
$scope.score = 0;
$scope.activeQuestion = -1;
$scope.activeQuestionAnswered = 0;
$scope.percentage = 0;
$http.get('quiz_data.json').then(function(quizData){
$scope.myQuestions = quizData.data;
$scope.totalQuestions = $scope.myQuestions.length;
});
$scope.selectAnswer = function(qIndex,aIndex){
var questionState = $scope.myQuestions[qIndex].questionState;
if(questionState != 'answered'){
$scope.myQuestions[qIndex].selectedAnswer = aIndex;
var correctAnswer = $scope.myQuestions[qIndex].correct;
$scope.myQuestions[qIndex].correctAnswer = correctAnswer;
if(aIndex === correctAnswer){
$scope.myQuestions[qIndex].correctness = 'correct';
$scope.score += 1;
}else{
$scope.myQuestions[qIndex].correctness = 'incorrect';
}
$scope.myQuestions[qIndex].questionState = 'answered';
}
$scope.percentage = (($scope.score / $scope.totalQuestions)*100).toFixed(1);
}
$scope.isSelected = function(qIndex, aIndex){
return $scope.myQuestions[qIndex].selectedAnswer === aIndex;
}
$scope.isCorrect = function(qIndex, aIndex){
return $scope.myQuestions[qIndex].correctAnswer === aIndex;
}
$scope.selectContinue = function(qIndex, aIndex){
return $scope.activeQuestion += 1;
}
$scope.createShareLinks=function(percentage){
var url = 'http://theoryquiz.com';
var emailLink = ' [Email a Friend](mailto:?subject=Try to beat my score!&body= I scored '+percentage+'% on this quiz! Try to beat my score at '+url+'.) ';
var twitterLink = ' [Tweet your score](http://twitter.com/share?text=I scored '+percentage+' on this quiz. Try to beat my score at &hashtags=TheoryQuiz&url='+url+') ';
var newMarkup = emailLink + twitterLink;
return $sce.trustAsHtml(newMarkup);}}]);})();
```<issue_comment>username_1: There are many ways you can achieve this.
I would just give you the direct solution (as I'm sure someone else will), but I think there's a lot of benefit in figuring it out yourself with the right guidance, especially given that you are "new", as you claim. :)
**Here is one approach you can take**:
Create a "store" of answers in memory to check against
* When someone checks a checkbox, check that question and answer against the store.
* If the question in the store contains an answer that was selected, we have a winner. Otherwise, we have a loser.
Example:
```
var questions = {
"question1": ["answer1", "answer2"],
"question2": ["answer1", "answer3", "answer3"]
};
function selectAnswer(question, selectedAnswer) {
if (questions[question]) {
var found = false;
questions[question].forEach(function (answer) {
if (answer === selectedAnswer) {
found = true;
}
});
if (found) {
// Do something
}
return found;
}
}
console.log(selectAnswer("question1", "answer1")) // true
console.log(selectAnswer("question1", "answer4")) // false
```
Now you can expand on that function by making it take in an argument of a list of answers instead of just one answer, and you can check all answers against the store for a particular question. OR, you can use that one function against each answer that was selected for a given question. That works as well.
That should achieve what you want! :)
---
If all fails and you still need help, feel free to leave a comment and I'll cook something up for your specific environment (Angular, etc.).
Upvotes: 1 <issue_comment>username_2: You should change your json "correct: field to be an array of ids.
If your question has only one answer, the array will just contain one element.
If Your question has several answer, you just have to modify your javascript function selectAnswer() to handle an array instead of a single value.
Small tip: for your "correctness" and "answered" field, I would use a boolean instead of a string
Upvotes: 1 <issue_comment>username_3: Solved with the following:
```
var superbag = function(sup, sub) {
sup.sort();
sub.sort();
var i, j;
for (i=0,j=0; i -1;};
```
Upvotes: -1 [selected_answer] |
2018/03/22 | 700 | 2,714 | <issue_start>username_0: I need to LEFT JOIN but the "right" table is very large and I only need values after 2018-01-01, while the whole table has records from up to 2012. To speed up my query I therefore need to not LEFT JOIN the full table, but the table with records > 2018-01-01. If I use a where statement at the end of my query it will still use the full table right? How can I do this?<issue_comment>username_1: There are many ways you can achieve this.
I would just give you the direct solution (as I'm sure someone else will), but I think there's a lot of benefit in figuring it out yourself with the right guidance, especially given that you are "new", as you claim. :)
**Here is one approach you can take**:
Create a "store" of answers in memory to check against
* When someone checks a checkbox, check that question and answer against the store.
* If the question in the store contains an answer that was selected, we have a winner. Otherwise, we have a loser.
Example:
```
var questions = {
"question1": ["answer1", "answer2"],
"question2": ["answer1", "answer3", "answer3"]
};
function selectAnswer(question, selectedAnswer) {
if (questions[question]) {
var found = false;
questions[question].forEach(function (answer) {
if (answer === selectedAnswer) {
found = true;
}
});
if (found) {
// Do something
}
return found;
}
}
console.log(selectAnswer("question1", "answer1")) // true
console.log(selectAnswer("question1", "answer4")) // false
```
Now you can expand on that function by making it take in an argument of a list of answers instead of just one answer, and you can check all answers against the store for a particular question. OR, you can use that one function against each answer that was selected for a given question. That works as well.
That should achieve what you want! :)
---
If all fails and you still need help, feel free to leave a comment and I'll cook something up for your specific environment (Angular, etc.).
Upvotes: 1 <issue_comment>username_2: You should change your json "correct: field to be an array of ids.
If your question has only one answer, the array will just contain one element.
If Your question has several answer, you just have to modify your javascript function selectAnswer() to handle an array instead of a single value.
Small tip: for your "correctness" and "answered" field, I would use a boolean instead of a string
Upvotes: 1 <issue_comment>username_3: Solved with the following:
```
var superbag = function(sup, sub) {
sup.sort();
sub.sort();
var i, j;
for (i=0,j=0; i -1;};
```
Upvotes: -1 [selected_answer] |
2018/03/22 | 972 | 3,308 | <issue_start>username_0: I have data like
```
ConcurrentDictionary<string, ConcurrentDictionary<string, string>> OneTwoThree =
new ConcurrentDictionary<string, ConcurrentDictionary<string, string>>();
```
I want a result like this [Final Image](https://i.stack.imgur.com/GNHE5.png)
What I tried:
```
DataTable dt = new DataTable();
dt.Columns.Add("TwoDetails");
ISet two = new HashSet();
Parallel.ForEach(OneTwoThree , One=>
{
dt.Columns.Add(One.Key);
foreach(var Two in One.Value)
{
two.Add(Two.Key); // To get Distinct Values
}
});
foreach(var item in two)
{
var row = dt.NewRow();
row["TwoDetails"] = row;
}
```
Now I don't have the idea to append "Three-Values" to a particular cell, as shown in the image.
Any Suggestions.<issue_comment>username_1: You can not access the data table rows with their names but using row numbers you can find a particular row in the data table.So for that, You need to create one class like
```
public class DataTableDetails
{
public string RowName { get; set; }
public int RowNumber { get; set; }
}
```
And this is not effecient solution, but can help you
```
ISet<string> two = new HashSet<string>();
List<DataTableDetails> dtdetails = new List<DataTableDetails>();
Parallel.ForEach(OneTwoThree , One=>
{
dt.Columns.Add(One.Key);
foreach(var Two in One.Value)
{
two.Add(Two.Key); // To get Distinct Values
}
});
int count=0;
foreach(var item in two)
{
var row = dt.NewRow();
row["TwoDetails"] = row;
DataTableDetails details = new DataTableDetails();
details.RowName = item;
details.RowNumber = count++; // we can easily get row number
dtdetails.Add(details);
}
```
And Finally
```
foreach(var One in OneTwoThree)
{
foreach(var Two in One.Value)
{
foreach(var rowdetails in dtdetails)
{
if(Two.Key==rowdetails.RowName)
{
dt.Rows[rowdetails.RowNumber][One.Key] = Two.Value;
}
}
}
}
```
Upvotes: 0 <issue_comment>username_2: Another pivot table question. Done a 1000. See code below :
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
namespace ConsoleApplication31
{
class Program
{
static void Main(string[] args)
{
Dictionary<string, Dictionary<string, string>> OneTwoThree = new Dictionary<string, Dictionary<string, string>>() {
{"A", new Dictionary(){{"U","s"}, {"Z","a"}}},
{"B", new Dictionary(){{"W","e"},{"X","d"},{"Y","d"}}},
{"C", new Dictionary(){{"V","f"}, {"W","a"},{"Z","w"}}},
};
string[] columns = OneTwoThree.Select(x => x.Key).OrderBy(x => x).ToArray();
DataTable dt = new DataTable();
dt.Columns.Add("TwoDetails", typeof(string));
foreach(string column in columns)
{
dt.Columns.Add(column, typeof(string));
}
string[] rows = OneTwoThree.Select(x => x.Value.Select(y => y.Key)).SelectMany(x => x).Distinct().OrderBy(x => x).ToArray();
var flip = rows.Select(x => new { row = x, columns = OneTwoThree.Where(y => y.Value.ContainsKey(x)).Select(y => new { col = y.Key, value = y.Value[x] }).ToList() }).ToList();
//create pivot table
foreach (var row in flip)
{
DataRow newRow = dt.Rows.Add();
newRow["TwoDetails"] = row.row;
foreach (var column in row.columns)
{
newRow[column.col] = column.value;
}
}
}
}
}
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 820 | 2,821 | <issue_start>username_0: ```
var str = "[0005000003](javascript:void(0))"
var result = "Complaint ID"
```
I wanted to extract alt value of anchor tag in the above string<issue_comment>username_1: You can not access the data table rows with their names but using row numbers you can find a particular row in the data table.So for that, You need to create one class like
```
public class DataTableDetails
{
public string RowName { get; set; }
public int RowNumber { get; set; }
}
```
And this is not effecient solution, but can help you
```
ISet two = new HashSet();
List dtdetaills=new List();
Parallel.ForEach(OneTwoThree , One=>
{
dt.Columns.Add(One.Key);
foreach(var Two in One.Value)
{
two.Add(Two.Key); // To get Distinct Values
}
});
int count=0;
foreach(var item in two)
{
var row = dt.NewRow();
row["TwoDetails"] = row;
DataTableDetails details = new DataTableDetails();
details.RowName = item;
details.RowNumber = count++; // we can easily get row number
dtdetails.Add(details);
}
```
And Finally
```
foreach(var One in OnrTwoThree)
{
foreach(var Two in One.Value)
{
foreach(var rowdetails in dtdetails)
{
if(Two.Key==rowdetails.RowName)
{
dt.Rows[rowdetails.RowNumber][One.Key] = Two.Value;
}
}
}
}
```
Upvotes: 0 <issue_comment>username_2: Another pivot table question. Done a 1000. See code below :
```
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Data;
namespace ConsoleApplication31
{
class Program
{
static void Main(string[] args)
{
Dictionary> OneTwoThree = new Dictionary>() {
{"A", new Dictionary(){{"U","s"}, {"Z","a"}}},
{"B", new Dictionary(){{"W","e"},{"X","d"},{"Y","d"}}},
{"C", new Dictionary(){{"V","f"}, {"W","a"},{"Z","w"}}},
};
string[] columns = OneTwoThree.Select(x => x.Key).OrderBy(x => x).ToArray();
DataTable dt = new DataTable();
dt.Columns.Add("TwoDetails", typeof(string));
foreach(string column in columns)
{
dt.Columns.Add(column, typeof(string));
}
string[] rows = OneTwoThree.Select(x => x.Value.Select(y => y.Key)).SelectMany(x => x).Distinct().OrderBy(x => x).ToArray();
var flip = rows.Select(x => new { row = x, columns = OneTwoThree.Where(y => y.Value.ContainsKey(x)).Select(y => new { col = y.Key, value = y.Value[x] }).ToList() }).ToList();
//create pivot table
foreach (var row in flip)
{
DataRow newRow = dt.Rows.Add();
newRow["TwoDetails"] = row.row;
foreach (var column in row.columns)
{
newRow[column.col] = column.value;
}
}
}
}
}
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 2,250 | 8,007 | <issue_start>username_0: I have classic web application rendered on server. I want to create admin panel as single page application in React. I want to server admin panel from <https://smyapp.example.com/admin/>. I try to use `create-react-app` but it assumes that i serve SPA from root URL. How should I configure `create-react-app` to serve app from `"admin"` subdirectory? In documentation I found `"homepage"` property but if I properly understand it requires complete url. I can't give complete url because my app is deployed in few environments.<issue_comment>username_1: Maybe you could use `react-router` and its *relative* `basename` parameter which allows you to serve your app from a subdirectory.
`basename` is the base URL for all locations. If your app is served from a sub-directory on your server, you’ll want to set this to the sub-directory. A properly formatted basename should have a leading slash, but no trailing slash.
For instance:
```
```
So will render
See: <https://reacttraining.com/react-router/web/api/BrowserRouter/basename-string>
Upvotes: 2 <issue_comment>username_2: You can specify the public path in your webpack configuration along with use of react route basepath.
Link to Public Path: <https://webpack.js.org/guides/public-path/>
Note that public path will be both leading and trailing slashes / to be valid.
Upvotes: -1 <issue_comment>username_3: put in package.json something like this:
**"homepage" : "<http://localhost:3000/subfolder>",**
and work fine on any public or local server. Of course, subfolder must be your folder.
Upvotes: 2 <issue_comment>username_4: You should add entry in `package.json` for this.
Add a key "homepage": "your-subfolder/" in your **package.json**
All static files will be loaded from "your-subfolder"
If there is no subfolder and you need to load from same folder you need to add the path as "./" or remove the entire line which has `"homepage": "xxxxxxxxxx"`
`"homepage": "./"`
**From the official docs**
By default, Create React App produces a build assuming your app is hosted at the server root.
To override this, specify the homepage in your package.json, for example:
>
> "homepage": "http://mywebsite.com/relativepath",
>
>
>
**Note:** If you are using `react-router@^4`, you can root `<Link>`s using the `basename` prop on any `<Router>`.
From [here](https://username_4.blogspot.com/2018/08/create-react-app-custom-build-to.html) and also check the [official CRA docs](https://create-react-app.dev/docs/deployment/#building-for-relative-paths)
Upvotes: 5 <issue_comment>username_5: For create-react-app v2 and react-router v4, I used the following combo to serve a production (staging, uat, etc) app under "/app":
package.json:
```
"homepage": "/app"
```
Then in the app entry point:
```
{/\* other components \*/}
```
And everything "just works" across both local-dev and deployed environments. HTH!
Upvotes: 6 <issue_comment>username_6: In addition to your **requirements**, I am adding mine:
* It should be done by CD, through an env variable.
* If I need to rename the subdirectory, I should only have to change the env variable.
* It should work with react-router.
* It should work with scss (sass) and html.
* Everything should work normally in dev mode (npm start).
I also had to implement it in Angular2+ project not long ago, I found it harder to implement in React then in Angular2+ where you are good to go with `ng build --base-href //`. [source](https://angular.io/guide/deployment#deploy-to-github-pages)
---
### Short version
1. Before building, set `PUBLIC_URL` env variable to the value of your subdirectory, let use `/subdir` for example. You can also put this variable into your `.env.production` ([in case you do not have that file you can check the doc](https://create-react-app.dev/docs/adding-custom-environment-variables))
2. In `public/index.html` add the base element below; this is for static files like images.
```html
<base href="%PUBLIC_URL%/">
```
3. Also in `public/index.html`, if you have custom `link` elements, make sure they are prefixed with `%PUBLIC_URL%` (like the `manifest.json` and `favicon.ico` href).
4. If you use `BrowserRouter`, you can add the `basename` prop:
```html
<BrowserRouter basename={process.env.PUBLIC_URL}>
```
5. If you use `Router` instead, because you need access to `history.push` method, to programmatically change page, do the following:
```
// history.tsx
import {createBrowserHistory} from 'history';
export default createBrowserHistory({ basename: process.env.PUBLIC_URL });
```
```html
...
```
6. Use relative links inside your elements
```html

```
7. Move your `background-image` links from scss to jsx/tsx files (note that you may not need to do that if you use css files):
```
/*remove that*/
background-image: url('/assets/background-form.jpg');
```
```html
...
```
**You should be done.**
---
### Additional informations
I preferred to use `PUBLIC_URL` instead of `homepage` in `package.json` because I want to use env variable set on gitlab to set the subdir. Relevant resources about the subject:
* <https://create-react-app.dev/docs/deployment/#building-for-relative-paths>
* <https://create-react-app.dev/docs/advanced-configuration/>
* <https://github.com/facebook/create-react-app/issues/998>
`PUBLIC_URL` override `homepage`, and `PUBLIC_URL` also take the domain name, if you provide one. If you set only `homepage`, `PUBLIC_URL` will be set to the value of `homepage`.
---
If you do not want to use a base element in your index.html (I would not know why), you will need to append `process.env.PUBLIC_URL` to every link yourself. Note that if you have react-router with a base element, but have not set `basename` prop, you will get a warning.
---
Sass won't compile with an incorrect relative path. It also won't compile with correct relative path to your `../public/assets` folder, because of `ModuleScopePlugin` restrictions, you can avoid the restriction by moving your image inside the src folder, I haven't tried that.
---
There seem to be no way of testing relative path in development mode (npm start). [see comment](https://github.com/facebook/create-react-app/issues/527#issuecomment-393158807)
---
Finnaly, theses stackoverflow link have related issues:
* [Can I set a base route in react-router](https://stackoverflow.com/questions/38196448/can-i-set-a-base-route-in-react-router/58506212)
* [Setting base href using Environment variables](https://stackoverflow.com/questions/45208595/setting-base-href-using-environment-variables/58506035)
Upvotes: 6 <issue_comment>username_7: To get relative URLs you can build the app like this:
```
PUBLIC_URL="." npm run build
```
Upvotes: 4 <issue_comment>username_8: In our case, we did everything as described in [Ambroise's answer](https://stackoverflow.com/a/58508562/12084678), but got a blank page in production with no errors or warnings in the console. It turned out to be a problem with BrowserRouter - we got it working by setting BrowserRouter's basename like so:
```
```
Upvotes: 1 <issue_comment>username_9: I was facing the similar kind of issue. Need to serve **react app from Godaddy hosting**. let's say for example the context path should be **/web** ie:- http://example\_123.com/web
Following changes works fine for me with React 18.2.0 , react-router-dom: 6.11.0
1. set homepage in package.json to /web ie: homepage: "/web"
2. update basename in BrowserRouter to process.env.PUBLIC\_URL
```
<BrowserRouter basename={process.env.PUBLIC_URL}>
  {/* Your components */}
</BrowserRouter>
```
3. build app and put your build inside /web folder in public\_html ie: /public\_html/web
4. on godaddy inside public\_html created index.html and redirect that to /web
```
welcome to example\_123
```
**Bonus**
If want to put your content inside /web and don;t want context path. follow below steps :-
1. set homepage to /web in package.json
2. put your build inside /web in public\_html
3. just move index.html from /web to publi\_html ie.: put index.html generated by build inside public\_html and all other files in /web
Upvotes: 0 |
2018/03/22 | 1,546 | 5,215 | <issue_start>username_0: I'm trying to have only some properties of *ancestor* exposed on my *descendant*. I try to achieve it through `Pick`
```
export class Base {
public a;
public b;
public c;
}
export class PartialDescendant extends Pick {
public y;
}
```
but I receive two errors -
>
> Error: TS2693: 'Pick' only refers to a type, but is being used as a value here.
>
>
>
and
>
> Error:TS4020: 'extends' clause of exported class 'PartialDescendant' has or is using private name 'Pick'.
>
>
>
Am I doing something wrong, and is there another way to expose only chosen properties of the base class?<issue_comment>username_1: **See below for 3.0 solution**
`Pick` is only a type it is not a class, a class is both a type and an object constructor. Types only exist at compile time, this is why you get the error.
You can create a function which takes in a constructor, and returns a new constructor that will instantiate an object with less fields (or at least declare it does):
```
export class Base {
public c: number = 0;
constructor(public a: number, public b: number) {
}
}
function pickConstructor(ctor: T)
: >(...keys: TKeys[]) => ReplaceInstanceType, TKeys>> & { [P in keyof Omit] : T[P] } {
return function (keys: string) { return ctor as any };
}
export class PartialDescendant extends pickConstructor(Base)("a", "b") {
public constructor(a: number, b: number) {
super(a, b)
}
}
var r = new PartialDescendant(0,1);
type IsValidArg = T extends object ? keyof T extends never ? false : true : true;
type ReplaceInstanceType = T extends new (a: infer A, b: infer B, c: infer C, d: infer D, e: infer E, f: infer F, g: infer G, h: infer H, i: infer I, j: infer J) => infer R ? (
IsValidArg extends true ? new (a: A, b: B, c: C, d: D, e: E, f: F, g: G, h: H, i: I, j: J) => TNewInstance :
IsValidArg *extends true ? new (a: A, b: B, c: C, d: D, e: E, f: F, g: G, h: H, i: I) => TNewInstance :
IsValidArg extends true ? new (a: A, b: B, c: C, d: D, e: E, f: F, g: G, h: H) => TNewInstance :
IsValidArg extends true ? new (a: A, b: B, c: C, d: D, e: E, f: F, g: G) => TNewInstance :
IsValidArg extends true ? new (a: A, b: B, c: C, d: D, e: E, f: F) => TNewInstance :
IsValidArg extends true ? new (a: A, b: B, c: C, d: D, e: E) => TNewInstance :
IsValidArg extends true ? new (a: A, b: B, c: C, d: D) => TNewInstance :
IsValidArg extends true ? new (a: A, b: B, c: C) => TNewInstance :
IsValidArg **extends true ? new (a: A, b: B) => TNewInstance :
IsValidArgextends true ? new (a: A) => TNewInstance :
new () => TNewInstance
) : never***
```
For constructors parameters you will loose things like parameter names, optional parameters and multiple signatures.
**Edit**
Since the original question was answered typescript has improved the possible solution to this problem. With the addition of [Tuples in rest parameters and spread expressions](https://github.com/Microsoft/TypeScript/pull/24897) we now don't need to have all the overloads for `ReplaceReturnType`:
```
export class Base {
public c: number = 0;
constructor(public a: number, public b: number) {
}
}
type Omit = Pick>
function pickConstructor(ctor: T)
: >(...keys: TKeys[]) => ReplaceInstanceType, TKeys>> & { [P in keyof Omit] : T[P] } {
return function (keys: string| symbol | number) { return ctor as any };
}
export class PartialDescendant extends pickConstructor(Base)("a", "b") {
public constructor(a: number, b: number) {
super(a, b)
}
}
var r = new PartialDescendant(0,1);
type ArgumentTypes<T> = T extends new (...args: infer U) => any ? U : never;
type ReplaceInstanceType<T, TNewInstance> = T extends new (...args: any[]) => any ? new (...a: ArgumentTypes<T>) => TNewInstance : never;
```
Not only is this shorter but it solves a number of problems
* Optional parameters remain optional
* Argument names are preserved
* Works for any number of arguments
Upvotes: 4 [selected_answer]<issue_comment>username_2: I am a little late to the game here, but there is an alternative and shorter way to do it if you're mainly interested in making intellisense work.
You can extend the base class and then redeclare the members you want to omit as private. This will generate a typescript error, but adding //@ts-ignore will clear it up and shouldn't affect compilation.
This is my preferred way to do it when things are simple. No real overhead here or challenging type syntax. The only real downside here is that adding //@ts-ignore above the extending class could prevent you from receiving other error messages related to incorrectly extending the Base class.
The one advantage to this approach over the accepted "pickConstructor" approach is that this method doesn't generate any extra code. Whereas "pickConstructor" literally exists as a function after compilation that runs during class definition.
```js
class Base
{
public name:string;
}
// @ts-ignore
class Ext extends Base
{
private readonly name:undefined; // re-declare
}
let thing:Ext = new Ext();
// The line below...
// Doesn't show up in intellisense
// complains about privacy
// can't be set to anything
// can't be used as an object
thing.name = "test"; // ERROR
```
Upvotes: 1 |
2018/03/22 | 270 | 772 | <issue_start>username_0: Probably an incredibly easy change but I seem to have a brain fart in this. I have a column and I want my formula to search for both "Open" and "In-Transit".
```
=COUNTIFS('<NAME>'!A:A,"Open",'<NAME>'!G:G, "PAB")
```<issue_comment>username_1: perhaps:
```
=COUNTIFS('<NAME>'!A:A,"Open",'<NAME>'!G:G, "PAB")+COUNTIFS('<NAME>'!A:A,"In-Transit",'<NAME>'!G:G, "PAB")
```
Building an *OR* into a formula can be tough...........easier to just add up the cases.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Or you can go with this one:
```
=SUM(COUNTIFS('<NAME>'!A:A,{"Open","In-Transit"},'<NAME>'!G:G, "PAB"))
```
Generally makes it easier for multiple `OR` clauses.
Upvotes: 0 |
2018/03/22 | 490 | 1,847 | <issue_start>username_0: The avgtool associated with increment\_build\_number plugin in Fastlane returning error.
When I'm running `increment_build_number` in Fastlane, I'm getting
>
>
> ```
> Updating CFBundleVersion in Info.plist(s)...
>
> $(SRCROOT)/Info.plist
> Cannot find "$(SRCROOT)/Info.plist"
>
> ```
>
>
The reason for the issue is that the avgtool couldn't identify the $(SRCROOT)
One of the solution found is to update the path to the Info.plist in Xcode settings to absolute path.
As there are a number of developers working on this project, updating the project settings with absolute path (to the plist) will affect others to build the project.
Is there any other way I can get this issue fixed?<issue_comment>username_1: The current solution to your problem would be to remove the $(SRCROOT) from your build settings. $(SRCROOT) means "directory where .xcodeproj is" so you will be perfectly fine to remove it (Xcode will still look for it relatively from the .xcodeproj).
We've recently removed agvtool from the `get_version_number` action (in version `2.87.0` in favor of using the `xcodeproj` gem where we can more nicely handle which target is found, handle $(SRCROOT), and remove that "avgtool setup process". I will be working on replacing agvtool in `increment_build_number` and `get_build_number` soon which should hopefully prevent further issues like this one.
Upvotes: 6 [selected_answer]<issue_comment>username_2: You can open the project setting and find the key **INFOPLIST\_FILE**
* `INFOPLIST_FILE = "$(SRCROOT)/ProjectName-info.plist";`
Then remove the `$(SRCROOT)` and **make sure** that after the change Xcode still finds your Info.plist.
The new value may be:
* `INFOPLIST_FILE = "/ProjectName-info.plist";`
* `INFOPLIST_FILE = "/MyProjectFolderName/ProjectName-info.plist";`
Upvotes: 4 |
2018/03/22 | 1,680 | 5,589 | <issue_start>username_0: I'm attempting to stream a H.264 video feed to a web browser. Media Foundation is used for encoding a fragmented MPEG4 stream (`MFCreateFMPEG4MediaSink` with `MFTranscodeContainerType_FMPEG4`, `MF_LOW_LATENCY` and `MF_READWRITE_ENABLE_HARDWARE_TRANSFORMS` enabled). The stream is then connected to a web server through `IMFByteStream`.
Streaming of the H.264 video works fine when it's being consumed by a tag. However, the resulting latency is ~2sec, which is too much for the application in question. My suspicion is that client-side buffering causes most of the latency. Therefore, I'm experimenting with Media Source Extensions (MSE) for programmatic control over the in-browser streaming. Chrome does, however, fail with the following error when consuming the same MPEG4 stream through MSE:
>
> Failure parsing MP4: TFHD base-data-offset not allowed by MSE. See
> <https://www.w3.org/TR/mse-byte-stream-format-isobmff/#movie-fragment-relative-addressing>
>
>
>
mp4dump of a moof/mdat fragment in the MPEG4 stream. This clearly shows that the TFHD contains an "illegal" `base data offset` parameter:
```
[moof] size=8+200
[mfhd] size=12+4
sequence number = 3
[traf] size=8+176
[tfhd] size=12+16, flags=1
track ID = 1
base data offset = 36690
[trun] size=12+136, version=1, flags=f01
sample count = 8
data offset = 0
[mdat] size=8+1624
```
I'm using Chrome 65.0.3325.181 (Official Build) (32-bit), running on Win10 version 1709 (16299.309).
Is there any way of generating a MSE-compatible H.264/MPEG4 video stream using Media Foundation?
Status Update:
--------------
Based on [roman-r](https://stackoverflow.com/users/868014/roman-r) advise, I managed to fix the problem myself by intercepting the generated MPEG4 stream and perform the following modifications:
>
> * Modify **Track Fragment Header Box** (tfhd):
> + remove `base_data_offset` parameter (reduces stream size by 8bytes)
> + set `default-base-is-moof` flag
> * Add missing **Track Fragment Decode Time** (tfdt) (increases stream size by 20bytes)
> + set `baseMediaDecodeTime` parameter
> * Modify **Track fragment Run box** (trun):
> + adjust `data_offset` parameter
>
>
>
The field descriptions are documented in <https://www.iso.org/standard/68960.html> (free download).
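As a rough illustration of the interception step (this is not the C++ code from the project; only the ISO BMFF box header layout — a 32-bit big-endian size followed by a 4-character type — is taken from the spec, everything else is an assumption), a box walk over one captured fragment could look like this in Python:

```
import struct

def iter_boxes(buf, offset=0, end=None):
    """Yield (box_type, payload_start, box_end) for each ISO BMFF box in buf[offset:end]."""
    end = len(buf) if end is None else end
    while offset + 8 <= end:
        size, = struct.unpack_from(">I", buf, offset)
        box_type = buf[offset + 4:offset + 8].decode("ascii", "replace")
        if size < 8:                 # size == 0/1 (box-to-end / 64-bit size) not handled in this sketch
            break
        yield box_type, offset + 8, offset + size
        offset += size

fragment = b""                       # placeholder: one moof/mdat pair captured from the byte stream
for box, start, stop in iter_boxes(fragment):
    if box == "moof":                # descend into moof -> traf to reach the tfhd/tfdt/trun boxes
        for inner, s, e in iter_boxes(fragment, start, stop):
            print(inner, s, e)
```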
Switching to MSE-based video streaming reduced the latency from ~2.0 to 0.7 sec. The latency was furthermore reduced to 0-1 frames by calling `IMFSinkWriter::NotifyEndOfSegment` after each IMFSinkWriter::WriteSample call.
There's a sample implementation available on <https://github.com/forderud/AppWebStream><issue_comment>username_1: The problem was solved by following [roman-r](https://stackoverflow.com/users/868014/roman-r)'s advise, and modifying the generated MPEG4 stream. See answer above.
Upvotes: 0 <issue_comment>username_2: Another way to do this is again using the same code @Fredrik mentioned but I write my own IMFByteStream and and I check the chunks written to the IMFByteStream.
FFMpeg writes the atoms almost once at a time. So you can check the atom name and do the mods. It is the same thing. I wish there was an MSE compliant windows sinker.
Is there one that can generate .ts files for HLS?
Upvotes: 0 <issue_comment>username_3: The mentioned 0.7 sec latency (in your **Status Update**) is caused by the Media Foundation's `MFTranscodeContainerType_FMPEG4` containterizer which gathers and outputs each roughly 1/3 seconds (from unknown reason) of frames in one MP4 `moof`/`mdat` box pair. This means that you need to wait 19 frames before getting any output from `MFTranscodeContainerType_FMPEG4` at 60 FPS.
To output single MP4 `moof`/`mdat` per each frame, simply lie that `MF_MT_FRAME_RATE` is 1 FPS (or anything higher than 1/3 sec). To play the video at the correct speed, use Media Source Extensions' `.playbackRate` or rather update `timescale` (i.e. multiply by real FPS) of `mvhd` and `mdhd` boxes in your MP4 stream interceptor to get the correctly timed MP4 stream.
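For illustration only, here is a rough Python sketch of that `mvhd` timescale fix-up. It assumes the box is located by a naive byte search and uses the version-0/version-1 field layout from ISO/IEC 14496-12; a real interceptor should walk the box tree instead, and the `mdhd` timescale needs the same treatment:

```
import struct

def scale_movie_timescale(init_segment: bytes, factor: int) -> bytes:
    buf = bytearray(init_segment)
    pos = buf.find(b"mvhd")          # naive search; walk the box tree in real code
    if pos < 0:
        return init_segment
    version = buf[pos + 4]           # payload = version(1) flags(3) creation modification timescale ...
    ts_off = pos + 16 if version == 0 else pos + 24
    timescale, = struct.unpack_from(">I", buf, ts_off)
    struct.pack_into(">I", buf, ts_off, timescale * factor)
    return bytes(buf)

# e.g. patched = scale_movie_timescale(init_segment, 60)  # muxer was told 1 FPS, real rate is 60
```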
Doing that, the latency can be squeezed to under 20 ms. This is barely recognizable when you see the output side by side on `localhost` in chains such as Unity (research) -> NvEnc -> `MFTranscodeContainerType_FMPEG4` -> WebSocket -> Chrome Media Source Extensions display.
Note that `MFTranscodeContainerType_FMPEG4` still introduces 1 frame delay (1st frame in, no output, 2nd frame in, 1st frame out, ...), hence the 20 ms latency at 60 FPS. The only solution to that seems to be writing own FMPEG4 containerizer. But that is order of magnitude more complex than intercepting of Media Foundation's MP4 streams.
Upvotes: 2 [selected_answer]<issue_comment>username_4: I was getting the same error (Failure parsing MP4: TFHD base-data-offset not allowed by MSE) when trying to play an fmp4 via MSE. The fmp4 had been created from an mp4 using the following ffmpeg command:
```
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov myfmp4video.mp4
```
Based on this question I was able to find out that to have the fmp4 working in Chrome I had to add the "**default\_base\_moof**" flag. So, after creating the fmp4 with the following command:
```
ffmpeg -i myvideo.mp4 -g 52 -vcodec copy -f mp4 -movflags frag_keyframe+empty_moov+default_base_moof myfmp4video.mp4
```
I was able to play successfully the video using Media Source Extensions.
This Mozilla article helped to find out that missing flag:
<https://developer.mozilla.org/en-US/docs/Web/API/Media_Source_Extensions_API/Transcoding_assets_for_MSE>
Upvotes: 2 |
2018/03/22 | 1,444 | 5,097 | <issue_start>username_0: I need to call an azure function; fn(b), from another azure function; fn(a).
fn(a) -> fn(b)
Both these functions are in the same function app. The problem is that whenever I try to call fn(b), I get 403 Forbidden: "data at the root level is invalid".
Is it possible to call an Azure Function from another Azure Function within the same function app?
**Function 1**
```
public static class Function1
{
[FunctionName("Function1")]
public static async Task Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
HttpRequestMessage req, TraceWriter log)
{
log.Info("---- C# HTTP trigger function 1 processed a request.");
UploadToF2(log);
return null;
}
private static IRestResponse UploadToF2(TraceWriter log)
{
SomeObject payload = new SomeObject();
payload.One = "One";
payload.Two = 2;
payload.Three = false;
payload.Four = 4.4;
var Fn2Url = Convert.ToString(ConfigurationManager.AppSettings["F2Url"]);
log.Info("Hitting F2 at " + Fn2Url);
var method = Method.POST;
var client = new RestClient(Fn2Url);
var body = JsonConvert.SerializeObject(payload);
var request = new RestRequest(method);
request.RequestFormat = DataFormat.Json;
request.AddHeader("Content-Type", "application/json");
request.AddBody(payload); // uses JsonSerializer
IRestResponse response = client.Execute(request);
return response;
}
}
class SomeObject
{
public string One { get; set; }
public int Two { get; set; }
public bool Three { get; set; }
public double Four { get; set; }
}
```
**Function 2**
```
public static class Function2
{
[FunctionName("Function2")]
public static async Task Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequestMessage req, TraceWriter log)
{
log.Info("---- C# HTTP trigger function 2 processed a request.");
string payload = await req.Content.ReadAsStringAsync();
log.Info("payload == "+payload);
return null;
}
}
```
Additional Information:
1. F2Url is a fully qualified url coming from config.
2. I tried running both functions in localhost. It works. I.e. fn(a) can call fn(b) in localhost. However when I host both of them in Azure, fn(b) is not callable from fn(a).
3. I tried a hybrid test too. I.e. I kept one function in local and another one in Azure. It works this way too. I.e. I kept fn(a) in local and fn(b) in Azure, fn(b) is callable.
4. I tried calling fn(b) directly from Postman and again it works.
5. `authLevel` is anonymous for both functions
6. I have IP restrictions (Platform features > Networking > IP restrictions) applied to the Function app. When I remove IP restrictions, Function1 is able to call Function2. However keeping IP restrictions, the call is not allowed.
**The only condition when fn(a) cannot call fn(b) is when both these functions are hosted in Azure.**<issue_comment>username_1: Works locally through a GET to Function1 when using:
```
var Fn2Url = "http://localhost:7071/api/Function2";
```
What value are you using in your configuration?
Upvotes: 0 <issue_comment>username_2: 1. Call Function #2 by its full URL, since there's a front end layer that gets hit first before the request makes it to your function app. This is true even for functions calling each other within the same function app.
```
GET https://{func-app-name}.azurewebsites.net/api/function2
```
2. If the `authLevel` is not anonymous in `function.json`, pass in the API key as `?code=` —
```
https://{func-app-name}.azurewebsites.net/api/function2?code={API-key}
```
or as a header —
```
GET https://{func-app-name}.azurewebsites.net/api/function2
x-functions-key: {API-key}
```
When running locally (Visual Studio/`func host start`), the value of `authLevel` is ignored. Looking at your decorator, `AuthorizationLevel.Anonymous` is present so most probably that's not it.
More on [authorization keys here](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-http-webhook#authorization-keys).
On top of that, you could do better that returning `null` in Function #2: `200 OK`, `202 Accepted`, `204 No Content`, all valid choices depending on what's supposed to happen next (async/sync processing).
Upvotes: 0 <issue_comment>username_3: >
> 403 (Forbidden) while calling one azure function from another
>
>
>
If you don't add the **client IP to the IP restrictions**, then any call from that client will get a 403 error. This applies not only to one Azure Function calling another, but to all callers: every function is restricted if the caller's IP is not in the IP restrictions.
In your case, you need to add your test [client IP](https://www.whatismyip.com/) to the IP restrictions, then it will work.
[](https://i.stack.imgur.com/HfsYm.png)
[](https://i.stack.imgur.com/Cv1AG.png)
**Update:**
Add the test result.
[](https://i.stack.imgur.com/l5DVD.png)
Upvotes: 3 [selected_answer] |
2018/03/22 | 796 | 2,640 | <issue_start>username_0: I am new to Gensim, and I am trying to load my given (pre-trained) Word2vec model. I have 2 files: *xxxx.model.wv* and a bigger one *xxxx.model.wv.syn0.npy*.
When I call the following line:
```
gensim.models.Word2Vec.load('xxxx.model.wv')
```
I get the following error:
```
AttributeError: 'EuclideanKeyedVectors' object has no attribute 'negative'
```
How can I solve this error?
2018/03/22 | 805 | 2,839 | <issue_start>username_0: The best way to include System.Net.Http.Formatting might be through nuget. But when a developer sees it in the default Assemblies section in reference manager then they just add it, expecting all developers to have it installed by default. But to our surprise, some developer machines did not have this dll.
All developers have the correct folder where this dll is found
"C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET MVC 4\Assemblies\"
Some developers just have XML files and others have dlls in it, even though the file names are the same.
Why are dlls missing in some machines?
2018/03/22 | 848 | 2,854 | <issue_start>username_0: This is part of a `html` code in ASP.NET (Visual Studio) and when I try to run the code, it gives me the following `error`:
>
> Unhandled exception at line 216, column 63 in <http://localhost:55031/WebForm1.aspx>
> 0x800a1391 - JavaScript runtime error: 'addInput' is undefined
>
>
>
Some people have been saying that I need to add a `JQuery` script but I keep getting this error. What am I doing wrong? Any help would be appreciated!
This is the HTML code in ASP.NET (Visual Studio):
```
Please click the Add Another Dimension button to add more dimensions/measurements
I agree.
```
2018/03/22 | 635 | 2,114 | <issue_start>username_0: I'm wondering how to let typescript know that the code is functionally valid.
It's saying it could be a string, when I'm really unsure how that would be possible. Is this a bug? Or am I just typing something incorrectly?
Example:
```
const i18nInstance = {
options: {defaultNS: 'common'}
}
const getInitialProps = (req: any, namespaces?: string | string[]) => {
if (!namespaces) {
namespaces = i18nInstance.options.defaultNS
}
if (typeof namespaces === 'string') {
namespaces = [namespaces]
}
const initialI18nStore = req.i18n.languages.reduce((langs, lang) => {
// typescript thinks namespaces could be a string, but it obviously will never be.
langs[lang] = namespaces.reduce((ns, n) => {
ns[n] = (req.i18n.services.resourceStore.data[lang] || {})[ns] || {}
return ns
}, {})
return langs
}, {})
return {
i18n: req.i18n,
initialI18nStore,
initialLanguage: req.i18n.language,
}
}
```<issue_comment>username_1: I would reassign this to a variable created with the `string[]` type:
```
let nss: string[];
if (typeof namespaces === 'string') {
namespaces = [namespaces];
}
nss = namespaces;
```
You have another issue on this line:
```
ns[n] = (req.i18n.services.resourceStore.data[lang] || {})[ns] || {}
```
I think this should probably be
```
ns[n] = (req.i18n.services.resourceStore.data[lang] || {})[n] || {}
```
`ns` is an object so it can't be used as a key.
Upvotes: 1 <issue_comment>username_2: Add `| undefined` to `namespaces` parameter
Consider not mutating `namespaces` but creating a separate constant instead
```
const nss: string[] = !ns
? []
: typeof ns === 'string'
? [ns]
: ns
```
Upvotes: 0 <issue_comment>username_3: You can just make the compiler treat it as an array of strings.
```
const getInitialProps = (req: any, namespaces?: string | string[]) => {
if (!namespaces) {
namespaces = 'default';
}
if (typeof namespaces === 'string') {
namespaces = [namespaces] as string[]
}
namespaces.concat();
}
```
Upvotes: 1 [selected_answer] |
2018/03/22 | 848 | 3,039 | <issue_start>username_0: All,
I have the below code which works okay; however, I would like to save (overwrite changes) instead of SaveAs. To my knowledge Close(true) should achieve this, however it still presents the user with a Save As prompt. :S
Any help / recommendations on the below would be much appreciated.
```
using System;
using System.Collections;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Windows.Forms;
using System.Xml;
using Microsoft.Office.Interop.Excel;
namespace Dynamic.Script_8D58F1E2C23B36A
{
// Script generated by Pega Robotics Studio 8.0.1072.0
// Please use caution when modifying class name, namespace or attributes
[OpenSpan.TypeManagement.DynamicTypeAttribute()]
[OpenSpan.Design.ComponentIdentityAttribute("Script-8D58F1E2C23B36A")]
public sealed class Script
{
public void deleterow(string WorkbookName2)
{
Microsoft.Office.Interop.Excel.Application myApp;
Microsoft.Office.Interop.Excel.Workbook myWorkBook;
Microsoft.Office.Interop.Excel.Worksheet myWorkSheet;
Microsoft.Office.Interop.Excel.Range range;
myApp = new Microsoft.Office.Interop.Excel.Application();
myWorkBook = myApp.Workbooks.Open(WorkbookName2, 0, true, 5, "", "", true, Microsoft.Office.Interop.Excel.XlPlatform.xlWindows, "\t", false, false, 0, true, 1, 0);
myWorkSheet = (Microsoft.Office.Interop.Excel.Worksheet)myWorkBook.Worksheets.get_Item(1);
range = (Microsoft.Office.Interop.Excel.Range)myWorkSheet.Application.Rows[1, Type.Missing];
range.Select();
range.Delete(Microsoft.Office.Interop.Excel.XlDirection.xlUp);
myWorkBook.Close(true);
myApp.Quit();
}
}
}
```<issue_comment>username_1: You can "silence" Excel temporarily
```
myApp.DisplayAlerts = false;
myWorkBook.Close(true);
myApp.DisplayAlerts = true;
```
Well and you should think about releasing your COM objects
Example:
```
if (obj != null && Marshal.IsComObject(obj))
{
Marshal.ReleaseComObject(obj);
}
```
Upvotes: 0 <issue_comment>username_2: So you can do this to stop DisplayAlerts from appearing:
```
myApp.DisplayAlerts = false;
```
And then if you want to save with specifying a file name, you can do this:
```
//specifying a file name
myWorkSheet.SaveAs(filename, Microsoft.Office.Interop.Excel.XlFileFormat.xlWorkbookDefault, Type.Missing, Type.Missing, true, false, XlSaveAsAccessMode.xlNoChange, XlSaveConflictResolution.xlLocalSessionChanges, Type.Missing, Type.Missing);
myWorkBook.Close(true);
```
Or you can do it short hand by doing the following:
```
myWorkBook.Close(SaveChanges: true, Filename: @"YourFileDirectory\FileName"); // C# named arguments (the := form is VB syntax)
```
If you just want to save a file that exists without specifying the file directory/file name. Just do this:
```
myApp.DisplayAlerts = false;
myWorkBook.Save();
myWorkBook.Close(true);
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 1,171 | 3,819 | <issue_start>username_0: I've got 2000+ nodes and 900+ edges, but when I was trying to make graphics in networkx, I found all the nodes crowded together. I tried changing attribute values, such as scale and k, but found them of no use since there are hundreds of nodes with labels below, which means I could not choose a small node size. I'm wondering if there's a method to expand the canvas or other ways to increase the distance between nodes to avoid overlapping, so I can see each node and its label clearly.
Thanks<issue_comment>username_1: When I had the same problem I figured out where I wanted my nodes to be and gave them as an input to networkx from a csv file:
```
import csv
import networkx as nx
G = nx.Graph()
f1 = csv.reader(open('nodes-C4-final.csv','r'), delimiter="\t")
for row in f1:
    G.add_node(row[0], label=row[1], weight=float(row[3]), pos=(float(row[4]), float(row[5])))
```
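For completeness, a small sketch of how those stored positions can then be used when drawing (illustrative only; the node size, font size and output file name are arbitrary choices, not part of the original answer):

```
import networkx as nx
import matplotlib.pyplot as plt

pos = nx.get_node_attributes(G, 'pos')       # reuse the fixed positions loaded above
labels = nx.get_node_attributes(G, 'label')
nx.draw(G, pos=pos, node_size=20, with_labels=False)
nx.draw_networkx_labels(G, pos, labels=labels, font_size=6)
plt.savefig('graph.png', dpi=300)            # a large, high-dpi canvas helps with crowded labels
```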
Upvotes: 0 <issue_comment>username_2: You can use interactive graphs from **plotly** to plot such a large number of nodes and edges. You can change every attribute, like the canvas size, and explore the graph more easily with zooming and other interactions.
Example:
Import plotly
```
import plotly.graph_objects as go
import networkx as nx
```
Add edges as disconnected lines in a single trace and nodes as a scatter trace
```
G = nx.random_geometric_graph(200, 0.125)
edge_x = []
edge_y = []
for edge in G.edges():
x0, y0 = G.nodes[edge[0]]['pos']
x1, y1 = G.nodes[edge[1]]['pos']
edge_x.append(x0)
edge_x.append(x1)
edge_x.append(None)
edge_y.append(y0)
edge_y.append(y1)
edge_y.append(None)
edge_trace = go.Scatter(
x=edge_x, y=edge_y,
line=dict(width=0.5, color='#888'),
hoverinfo='none',
mode='lines')
node_x = []
node_y = []
for node in G.nodes():
x, y = G.nodes[node]['pos']
node_x.append(x)
node_y.append(y)
node_trace = go.Scatter(
x=node_x, y=node_y,
mode='markers',
hoverinfo='text',
marker=dict(
showscale=True,
# colorscale options
#'Greys' | 'YlGnBu' | 'Greens' | 'YlOrRd' | 'Bluered' | 'RdBu' |
#'Reds' | 'Blues' | 'Picnic' | 'Rainbow' | 'Portland' | 'Jet' |
#'Hot' | 'Blackbody' | 'Earth' | 'Electric' | 'Viridis' |
colorscale='YlGnBu',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
title='Node Connections',
xanchor='left',
titleside='right'
),
line_width=2))
```
Color node points by the number of connections.
Another option would be to size points by the number of connections i.e. node\_trace.marker.size = node\_adjacencies
```
node_adjacencies = []
node_text = []
for node, adjacencies in enumerate(G.adjacency()):
node_adjacencies.append(len(adjacencies[1]))
node_text.append('# of connections: '+str(len(adjacencies[1])))
node_trace.marker.color = node_adjacencies
node_trace.text = node_text
```
Create network graph
```
fig = go.Figure(data=[edge_trace, node_trace],
layout=go.Layout(
title='Network graph made with Python',
titlefont_size=16,
showlegend=False,
hovermode='closest',
margin=dict(b=20,l=5,r=5,t=40),
annotations=[ dict(
text="Python code: <https://plotly.com/ipython-notebooks/network-graphs/>",
showarrow=False,
xref="paper", yref="paper",
x=0.005, y=-0.002 ) ],
xaxis=dict(showgrid=False, zeroline=False, showticklabels=False),
yaxis=dict(showgrid=False, zeroline=False, showticklabels=False))
)
fig.show()
```
You can get more details about plotly on internet
See docs: <https://plotly.com/python/network-graphs/>
Upvotes: 2 |
2018/03/22 | 760 | 3,013 | <issue_start>username_0: I'm developing with Visual Studio 2017 a WebForm Asp.Net web application.
For a couple of days now, the event handler autocomplete hasn't been working anymore:
Previously, after I've added a control on the page, I was able to type the name of the event, *OnClick* in example, and then by typing the **="** Visual studio shows me a drop down with the **"create new..."** and the list of the compatible handlers for the event.
The "create new..." feature is quite fundamental to me as I don't always know the signature of the method, and overall I don't have to write it by hand.
All these tasks were performed in the source view of the aspx page, and the event handler method was inserted automatically by Visual Studio in the aspx.cs file.
For example, on a button control, by selecting the **"Create New..."** item in the event handlers suggestions, VS automatically create the following code in aspx.cs:
```
protected void BtnDoSomething_Click(object sender, EventArgs e)
{
}
```
**Now the drop-down suggestions are not shown anymore.**
The project compiles successfully.
I've tried to restart the project, restart VS, restart the PC, open an old project... nothing has worked.
What can I do to bring it back?<issue_comment>username_1: Also upgrading Visual Studio to **15.6.3** has not solved the issue.
I've tried also the following:
1. Uncheck option "Allow parallel project initialization" (Tools >
Options > Projects and Solutions > General)
2. Delete .vs folder in the project solution folder
3. Restart Visual Studio
4. Disable all extensions, one by one and restart VS
**None of the above has worked.**
I've noticed that inside a new blank page in the same project the intellisense was working fine.
After some debug I've found that the attribute "style" of some elements in the page was causing the issue:
```
```
**Removing all the style attribs from the .aspx page solved the issue**.
Adding even one of them back causes IntelliSense to stop working.
It's not really an issue as I use style only for fast&quick test, so I'll avoid them, but **Does anyone know why?!?**
Upvotes: 2 [selected_answer]<issue_comment>username_2: Well I'm running VS2017 15.5.6 and have the same problem with a Webforms application. My workaround is to temporarily delete most of the code from markup (saving it in Notepad), leaving only the controls directly surrounding and including the control that needs the event handler itself, then add the event handler, which then succeeds, and add the removed code back again. Although there might be some hidden errors in my code, I'm unable to find them and there are no warnings or errors mentioned. But this particular page is rather large: 600 lines in markup and 1932 lines in code behind. Markup contains about 500 controls.
**Edit:**
I've updated to version 15.7.3: the problem still exists
Upvotes: 1 <issue_comment>username_3: Check your
```
CodeBehind="yourwebpagename.aspx.cs"
```
at the top of the page.
Upvotes: 0 |
2018/03/22 | 302 | 1,004 | <issue_start>username_0: How can I fetch my 10 latest comments with just 1 comment per user in Laravel?
I saw [this](https://stackoverflow.com/questions/36578370/show-last-comment-which-just-1-comment-per-user) but I want to do it with Eloquent in a clean way.<issue_comment>username_1: Did you try
```
Comment::groupBy('user_id')->limit(10)->get();
```
If you want to get the latest comments, create a class called UserComment that stores the latest comment\_id for each user and update it whenever a new comment is created; then you can fetch the latest 10 comments with
```
UserComment::orderBy('updated_at')->limit(10)->get();
```
Upvotes: 1 <issue_comment>username_2: This is the logic:
`Comment::groupBy('user_id')->orderBy('id', 'desc')->first();`
Upvotes: 0 <issue_comment>username_3: If you can find the latest comments by the highest ids:
```
$ids = Comment::selectRaw('MAX(id) id')->groupBy('user_id')->take(10)->pluck('id');
$comments = Comment::whereIn('id', $ids)->orderByDesc('id')->get();
```
Upvotes: 0 |
2018/03/22 | 656 | 2,373 | <issue_start>username_0: Consider the following React Router V4 code:
```
const isAuthenticated = () => {
let hasToken = localStorage.getItem("jwtToken");
if (hasToken) return true;
return false;
};
const AuthenticatedRoute = ({ component: Component, ...rest }) =>
isAuthenticated()
?
: window.location = "/auth/login" } <<-- ERROR HERE
/>;
class App extends Component {
render() {
return (
);
}
}
export default App;
```
`auth/login` is another ReactJS application that needs to be loaded from the server; in other words, `auth/login` needs to bypass client routing and be sent to the server, where it will be routed and will serve a new ReactJS application (the `auth` application with a `login` page).
When trying to redirect (not authenticated) I'm getting the following error on browser console:
```
You are attempting to use a basename on a page whose URL path does not begin with a basename.
Expected path "/login" to begin with "/editor"
```
Looks like I'm "trapped" inside React Router V4 and nothing can get out of the basename.
How can I solve that and redirect to a server page, quitting or bypassing the react router itself ?<issue_comment>username_1: You need to install react-router-dom in your app
```
yarn add react-router-dom
```
or
```
npm i -S react-router-dom
```
And import this
```
import { BrowserRouter as Router } from 'react-router-dom';
```
Where you want to redirect
```
browserHistory.push('/path/some/where');
```
**your code should be**
```
import { BrowserRouter as Router } from 'react-router-dom';
const isAuthenticated = () => {
let hasToken = localStorage.getItem("jwtToken");
if (hasToken) return true;
return false;
};
const AuthenticatedRoute = ({ component: Component, ...rest, history }) =>
isAuthenticated()
?
: history.push("/auth/login")}
/>;
class App extends Component {
render() {
return (
);
}
}
export default App;
```
Upvotes: -1 <issue_comment>username_2: You should use the `Redirect` component provided by `react-router-dom`
```
import { Redirect, Route } from 'react-router-dom';
const AuthenticatedRoute = ({ component: Component, ...rest }) =>
isAuthenticated()?
: (
)
}
/>
```
Here is a working [example](https://github.com/strapi/strapi-examples/tree/master/good-old-react-authentication-flow/src/containers/PrivateRoute)
Upvotes: 0 |
2018/03/22 | 493 | 2,114 | <issue_start>username_0: I am new to Hyperledger.
Where is the transaction data stored?
In the disk or any kind of db?
How many transactions can be stored in the Hyperledger node where it is stored?
Is there a max limit?
Is there an estimate of how fast Hyperledger transactions grow?
What are the steps to be taken once it reaches the max limit?<issue_comment>username_1: Where is the transaction data stored? In the disk or any kind of db?
Transaction data is stored on the channel (there is one blockchain ledger per channel) and the current state, called the world state, is stored in LevelDB by default or in CouchDB (if you explicitly configure it).
How many transactions can be stored in the Hyperledger node where it is stored? Is there a max limit?
After you log in to peer you can view the channel with your 'channelname' at the following path
/var/hyperledger/production/ledgersData/chains/chains/channelname
There is no limit to the number of transactions a channel can store. However, you can put a limit on the number of transactions that go into a single block.
Is there a estimate on how fast a Hyperledger transactions grows?
Transactions are executed and recorded on channel (or blockchain), as a result of which blockchain grows. How fast the blockchain grows depends on the frequency and number of transactions.
What are the steps to be taken once it reaches the max limit?
There is no upper limit to the size of ledger/blockchain other than the hard-disk size.
Upvotes: 0 <issue_comment>username_2: It might be worth investigating the new Hyperledger Caliper incubator project to answer your performance questions. I expect that the answers will depend a lot on what you are trying to achieve and Caliper should let you run tests with your own blockchain network to get results specific to your own scenarios.
There are more details on the project page and announcement blog post if you think it might be useful:
* <https://www.hyperledger.org/projects/caliper>
* <https://www.hyperledger.org/blog/2018/03/19/measuring-blockchain-performance-with-hyperledger-caliper>
Upvotes: 2 |
2018/03/22 | 591 | 1,930 | <issue_start>username_0: I made an input form when the user types in a name.
An isset($\_COOKIE) check determines whether the cookie that was set already exists.
if the cookie exists you'll get a message: Welcome back.
if not you'll get the message: this is your first time here.
But somehow I always get the "Welcome back" message.
Here is my code:
```
php
if(!empty($_POST))
{
header("Location:form_data.php");
setcookie('name',$_POST['name'], time() + (86400 * 30));
}
if(isset($_COOKIE['name']))
{
echo "Welcome back ".$_COOKIE['name'];
}else
{
echo "hello ".$_COOKIE['name']; echo " this is your first time here.";
setcookie('name',$_POST['name'], time() + (86400 * 30));
}
?
```
Can someone help me with this problem?<issue_comment>username_1: **setcookie()** must be called before any output is sent to the browser. Otherwise it will cause a header error.
In your code:
**Change from:**
```
echo "hello ".$_POST['name']; echo " this is your first time here.";
setcookie('name',$_COOKIE['name'], time() + (86400 * 30));
```
**To:**
```
setcookie('name',$_POST['name'], time() + (86400 * 30));
echo "hello ".$_POST['name']; echo " this is your first time here.";
```
So that no other code is executed after the header() redirection, you should append exit() to it:
**So also change:**
```
header("Location:form_data.php");
setcookie('name',$_POST['name'], time() + (86400 * 30));
```
**To:**
```
setcookie('name',$_POST['name'], time() + (86400 * 30));
header("Location:form_data.php"); exit();
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your code seems to be correct. (except the section pointed out by Karlo)
1. Make sure you don't POST anything. Otherwise your cookie is already set.
2. Make sure you delete your cookie from your browser.
3. Make sure you refresh the page without sending the POST data again. So you should reach the "this is your first time" section.
Upvotes: 0 |
2018/03/22 | 490 | 1,665 | <issue_start>username_0: I am stuck in token saving in mvc
While calling a Web API 2.0 from MVC 5, I want to store the generated API token on the MVC client side, so that the next time I hit an authorized API action, MVC only passes the token to the API. The API should first check whether the token has expired and, if it has, create a new one.
The problems are:
1: Where to store token in mvc.
2: How to pass to api.
3: How to check in api whether expired or not.
2018/03/22 | 625 | 2,430 | <issue_start>username_0: I'm using Tensorflow to generate a transformation matrix for a set of input vectors (X) to target vectors (Y). To minimize the error between the transformed input and the target vector samples I'm using a gradient descent algorithm. Later on I want to use the generated matrix to transform vectors coming from the same source as the training input vectors so that they look like the corresponding target vectors. Linear regression, pretty much, but with 3-dimensional targets.
I can assume that the input and target vectors are in cartesian space. Thus, the transformation matrix should consist of a rotation and a translation. I'm working solely with unit vectors, so I can also safely assume that there's no translation, only rotation.
So, in order to get a valid rotation matrix that I can turn into a rotation quaternion I understand that I have to make sure the matrix is orthogonal.
Thus, the question is, is it possible to give Tensorflow some kind of constraint so that the matrix it tries to converge to is guaranteed to be orthogonal? Can be a parameter, a mathematical constraint, a specific optimizer, whatever. I just need to make sure the algorithm converges to a valid rotation matrix.
2018/03/22 | 1,012 | 3,600 | <issue_start>username_0: I am building a shared library on a linux system which is intended to be used in a project that will be deployed in a Docker on a server. This shared library uses openmp. I am therefore wondering if it is better (or more portable) to have
1. openmp linked statically to my shared library
2. I should install gcc properly in the Docker in order to have openmp found
3. Distribute the .so of openmp together with my library
If option 1 is the best, can someone provide the right way to do this with cmake?
I add below some documentation on the subject:
* [Are runtime libraries inherently dynamic libraries](https://stackoverflow.com/questions/33502293/are-runtime-libraries-inherently-dynamic-libraries)
* [intelForum](https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/278427)
* <https://cmake.org/pipermail/cmake/2007-May/014347.html><issue_comment>username_1: You generally cannot link a static library into a shared one.
Because static libraries (`lib*.a`) don't contain [position-independent code](https://en.wikipedia.org/wiki/Position-independent_code) (PIC), but shared libraries need PIC (in practice).
(in theory, non PIC shared libraries could be possible; but they would contain so much [relocation](https://en.wikipedia.org/wiki/Relocation_(computing)) that the "shared" aspect is lost and the [dynamic linker](https://en.wikipedia.org/wiki/Dynamic_linker) would have a lot of work. So in practice, every shared library needs to be PIC to permit its [code segment](https://en.wikipedia.org/wiki/Code_segment)[s] to be [mmap(2)](http://man7.org/linux/man-pages/man2/mmap.2.html)-ed at different addresses in various processes and still stay shared)
Read Drepper's [*How To Write Shared Libraries*](https://www.akkadia.org/drepper/dsohowto.pdf) paper.
However, you can link a shared library (e.g. `libopenmp.so`) into another one (your shared library, see [this](https://stackoverflow.com/a/19424604/841108)). Then the program using your shared library will need that `libopenmp.so`.
So you could do 2 or 3, or even package your library as a proper [`.deb`](https://en.wikipedia.org/wiki/Deb_(file_format)) package (which would *depend* on `libopenmpi2` Debian package).
You may want to understand [package management](https://en.wikipedia.org/wiki/Package_manager).
You should understand more the [virtual address space](https://en.wikipedia.org/wiki/Virtual_address_space) of your [process](https://en.wikipedia.org/wiki/Process_(computing)). For that, use [proc(5)](http://man7.org/linux/man-pages/man5/proc.5.html) and [pmap(1)](http://man7.org/linux/man-pages/man1/pmap.1.html). For first examples, try `cat /proc/self/maps` and `cat /proc/$$/maps`. Then, if your process has pid 1234, try `cat /proc/1234/maps` and/or `pmap 1234`. You'll then understand how shared libraries are [mmap(2)](http://man7.org/linux/man-pages/man2/mmap.2.html)-ed.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Linking the OpenMP runtime statically into another shared library is a bad idea even if you could achieve it. If some other component of the final code also uses OpenMP you will end up with two different OpenMP runtimes in the process. That rapidly leads to over-subscription when each runtime creates its own thread pool and thus poor performance. (As well as potential correctness failures if your code assumes that OpenMP critical sections will protect it against the other parallel code...)
The easiest answer is probably your number 3: ship a copy of the relevant OpenMP runtime shared library with your code.
Upvotes: 2 |
2018/03/22 | 1,181 | 4,649 | <issue_start>username_0: I am getting this error when I try to execute HTTP unit test cases.
I am using Angular 5. How can I resolve this?
Below is my code for a normal GET; the code just performs a plain GET request.
```
import { TestBed } from '@angular/core/testing';
import { HttpClientTestingModule, HttpTestingController, TestRequest } from
'@angular/common/http/testing';
import { DangerService } from './danger.service';
import { DangerFlag } from '../danger.model';
describe('DataService Tests', () => {
let dataService: DangerService;
let httpTestingController: HttpTestingController;
let testDangerFlags: DangerFlag[] = [ "sample data" ]
beforeEach(() => {
TestBed.configureTestingModule({
imports: [ HttpClientTestingModule ],
providers: [ DangerService ]
});
dataService = TestBed.get(DangerService);
httpTestingController = TestBed.get(HttpTestingController);
});
afterEach(() => {
httpTestingController.verify();
});
fit('should get all danger flags', () => {
dataService.getDangerFlagDetails()
.subscribe((data: DangerFlag[]) => {
expect(data.length).toBe(3);
});
});
});
```<issue_comment>username_1: If a test is async, you have to tell jasmine that it is async and when it is finished.
```
it('should get all danger flags', (done) => {
                                  ^^^^^^
dataService.getDangerFlagDetails()
.subscribe((data: DangerFlag[]) =>{
expect(data.length).toBe(3);
done();
^^^^^^
});
});
```
You do that by calling the `done` function that Jasmine provides as a parameter when running the test.
Upvotes: 2 <issue_comment>username_2: I also faced this issue today. Add **expectOne** function call
```
it('should get all danger flags', () => {
  const expectedDangerFlags = createDangerFlagResponse();

  dataService.getDangerFlagDetails()
    .subscribe((data: DangerFlag[]) => {
      expect(data.length).toBe(3);
    });

  // the request expectations must run inside the same it() block
  const httpRequest = httpTestingController.expectOne(BASE_URL + relativeURL);
  expect(httpRequest.request.method).toBe('GET');
  httpRequest.flush(expectedDangerFlags);
});

createDangerFlagResponse(): DangerFlag[] {
  return []; /* return a DangerFlag array here */
}
```
Upvotes: 3 <issue_comment>username_3: >
> Expected no open requests, found 1
>
>
>
This happens when you make a mock request, but don't 'complete/close' it. An open request may stay open after a test is run, eventually memory leaking, especially if the test is ran multiple times.
Subscribing to a mock request **calls it as far as the client side is concerned** but does not 'complete' it as far as the backend is concerned. 'Completing' a request can be done in a number of ways;
`backend = TestBed.get(HttpTestingController)`
1. `backend.expectOne(URL)` - this will both test for a url, and 'close' the backend call. This will not test for params, and will fail if your query has params in it.
2. `backend.expectNone(URL)` - in case you're testing for urls that have params, `expectOne()` wont work. You'll have to use `backend.match()`. Match does not auto close the backend api call, so you can `expectNone()` after it to close it out.
3. `.flush(RESPONSE)` - flush will force-send a response for the http call, and subsequently close the call. Note: if calling flush on a `match()`, watch out for match returning an array, i.e. `backend.match(...)[0].flush({})`
Any of these methods will close out the http request, and make `backend.verify()` behave.
### References
* You can find in depth examples, and more explanations [here](https://www.ng-conf.org/2019/angulars-httpclient-testing-depth/)
* `expectOne()` and `match()` return an instance of [TestRequest](https://angular.io/api/common/http/testing/TestRequest)
* `expectNone()` always returns `void`
Upvotes: 5 <issue_comment>username_4: First, add the **expectOne** function call; if that doesn't solve it, the approach below might help.
Sometimes the validation part is what goes wrong; add an alert or a console log to verify that the condition is actually true (that is what happened in my case):
```
service.authenticate(model).subscribe(token => {
expect(token).toBeTruthy();
expect(token).toBeDefined();
});
const request = httpTestingController.expectOne(req =>{
console.log("url: ", req.url);
return req.method === 'GET' && req.url === URL
});
expect(request.request.method).toBe('GET');
request.flush(response);
```
Hope it will solve problem :)
Happy Coding !!!
Upvotes: 0 |
2018/03/22 | 856 | 2,294 | <issue_start>username_0: I want to create a 3d mask from a 2d mask. Lets assume I have a 2d mask like:
```
mask2d = np.array([[ True, True ,False],
[ True , True, False],
[ True , True ,False]])
mask3d = np.zeros((3,3,3),dtype=bool)
```
**The desired output should look like:**
```
mask3d = [[[ True True False]
[ True True False]
[ True True False]]
[[ True True False]
[ True True False]
[ True True False]]
[[ True True False]
[ True True False]
[ True True False]]]
```
Now I want to create a 3d array mask with the 2d array mask in every z slice. It should work no matter how big the 3d array is in the z direction. How would I do this?
**EDIT**
Ok, now I am trying to find out which method is faster. I know I could do it with timeit, but I do not really understand why in the first method it loops 10000 times and in the second 1000 times:
```
mask3d=np.zeros((3,3,3),dtype=bool)
def makemask():
b = np.zeros(mask3d.shape, dtype=bool)
b[:]=mask2d
%timeit for x in range(100): np.repeat(mask2d[np.newaxis,:], 4, 0)
%timeit for x in range(100): makemask()
```<issue_comment>username_1: You can use `np.repeat`:
```
np.repeat([mask2d],3, axis=0)
```
Notice the `[]` around `mask2d` which makes `mask2d` 3D, otherwise the result will be still a 2D array.
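To sanity-check the result for an arbitrary depth `z`, here is a quick illustrative example using the 3×3 `mask2d` from the question (the value of `z` is arbitrary):

```
import numpy as np

mask2d = np.array([[True, True, False],
                   [True, True, False],
                   [True, True, False]])

z = 5                                           # works for any number of z slices
mask3d = np.repeat(mask2d[np.newaxis, :, :], z, axis=0)
print(mask3d.shape)                             # (5, 3, 3)
print(np.array_equal(mask3d[2], mask2d))        # True -- every slice equals mask2d
```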
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can do such multiplication with vanilla Python already:
```
>>> a=[[1,2,3],[4,5,6]]
>>> [a]*2
[[[1,2,3],[4,5,6]],[[1,2,3],[4,5,6]]]
```
Or, if the question is about concatenating multiple masks (probably this was the question for real):
```
>>> b=[[10,11,12],[13,14,15]]
>>> [a,b]
[[[1,2,3],[4,5,6]],[[10,11,12],[13,14,15]]
```
(This works with bools too, just it is easier to follow with numbers)
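One caveat worth noting with the plain-list approach (an illustrative note, not part of the original answer): `[a]*2` repeats references to the same inner list rather than copies, so mutating one entry is visible in the other. Wrapping the result in `np.array(...)` copies the data and avoids this:

```
a = [[1, 2, 3], [4, 5, 6]]
stacked = [a] * 2
stacked[0][0][0] = 99
print(stacked[1][0][0])   # 99 -- both entries point at the same nested list
```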
Upvotes: 0 <issue_comment>username_3: Numpy [dstack](https://numpy.org/doc/stable/reference/generated/numpy.dstack.html) works well for this, d stands for depth, meaning arrays are stacked in the depth dimension (the 3rd):
assuming you have a 2D mask of size (539, 779) and want a third dimension
```
print(mask.shape) # result: (539, 779)
mask = np.dstack((mask, mask, mask))
print(mask.shape) # result: (539, 779, 3)
```
If for any reason you want e.g. (539, 779, 5), just do mask = np.dstack((mask, mask, mask, mask, mask))
Upvotes: 0 |
2018/03/22 | 586 | 1,639 | <issue_start>username_0: I am working with VSTO Outlook addin. I have to find the version outlook which is installed and then have to set some registry value.
I have to set **Software\Microsoft\Office\16.0\Outlook\Preferences** one DWord value if outlook16 present and if outlook13 is present then need to set **Software\Microsoft\Office\15.0\Outlook\Preferences** one value.
I have to check that in If Else, I am not getting how to do.
2018/03/22 | 381 | 1,524 | <issue_start>username_0: How do I use a custom bean as a producer endpoint in a Camel route and then wire it to processing beans and/or consumer beans?
For example I would like to use one of the bean methods to consume data from database and then pass on the results to another method of the same bean or to a another bean to process the data and then pass to a jms queue.
I would like to do something like the following, but the flow never reaches the patstatService bean:
```
public void configure() throws Exception {
from("direct:start").bean("patstatService", "getTestData")
.bean("patstatExtractorAutmn","generatRSSFromData")
.to("activemq:patstat:test");
}
```<issue_comment>username_1: I could not find a direct way to consume from bean.
For my use case, scheduling consumption from the bean every second might work. So for now the following does the job, but I hope there is a way to consume directly from the bean, as this is a valid use case and in my opinion quite useful.
```
public void configure() throws Exception {
from("timer:patstat?period=1s").bean("patstatService", "getTestData")
.bean("patstatExtractorAutmn","generatRSSFromData")
.to("activemq:patstat:test");
}
```
Upvotes: 1 <issue_comment>username_1: Another solution could be to invoke the bean just once when the route starts and put the processing logic in the bean as follows:
```
from("timer:patstat?repeatCount=1").threads().bean("patstatService", "getData").routeId(""+startYear);
```
Upvotes: 0 |
2018/03/22 | 974 | 2,453 | <issue_start>username_0: I have a text file with entries like this:
```
Interface01 :
adress
192.168.0.1
next-interface:
interface02:
adress
10.123.123.214
next-interface:
interface01 :
adress
172.123.456.123
```
I'd like to parse it and get only the IP address corresponding to Interface01
I tried many things with python `re.findall` but couldn't get anything matching.
```
i = open(f, 'r', encoding='UTF-8')
txt = i.read()
interface = re.findall(r'Interface01 :\s*(.adress*)n',txt,re.DOTALL)
```
but nothing works.
The expected result is `192.168.0.1`.<issue_comment>username_1: ```
interface = re.findall(r'Interface01 :\s*.adress\s*(.*?)$',txt,re.S|re.M)
```
Upvotes: 0 <issue_comment>username_2: How about creating a pattern that said "Interface01", then skip all chars that are not digits, then get the digits and dots?
```
re.findall(r'Interface01[^0-9]+([0-9.]+)', text)
```
Result:
```
['192.168.0.1']
```
Update
======
Thanks to @zipa, here is the updated regex:
```
re.findall(r'[iI]nterface01[^0-9]+([0-9.]+)', text)
```
Result:
```
['192.168.0.1', '172.123.456.123'
```
Upvotes: 2 <issue_comment>username_3: You could try something like this:
```
interface = re.findall(r'Interface01 :\n +adress\n +(\d+.\d+.\d+.\d+)', txt)
# ['192.168.0.1']
```
Upvotes: 0 <issue_comment>username_4: For getting one single match it's better to use `re.serach()` function:
```
import re
with open('filename') as f:
pat = r'Interface01 :\s*\S+\s*((?:[0-9]{1,3}\.){3}[0-9]{1,3})'
result = re.search(pat, f.read()).group(1)
print(result)
```
The output:
```
192.168.0.1
```
Upvotes: 0 <issue_comment>username_5: You may use
```
Interface01\s*:\s*adress\s+(.*)
```
See the [regex demo](https://regex101.com/r/9sSdgK/1). In Python, use `re.search` to get the first match since you only want to extract 1 IP address.
**Pattern details**:
* `Interface01` - a literal substring
* `\s*:\s*` - a `:` enclosed with 0+ whitespaces
* `adress` - a literal substring
* `\s+` - 1+ whitespaces
* `(.*)` - Group 1: any 0+ chars other than line break chars.
[Python demo](https://ideone.com/QoE1uF):
```
import re
reg = r"Interface01\s*:\s*adress\s+(.*)"
with open('filename') as f:
m = re.search(reg, f.read())
if m:
print(m.group(1))
# => 192.168.0.1
```
Upvotes: 3 [selected_answer]<issue_comment>username_6: you can use `Interface01 :\n.*?\n(.*)`
Upvotes: 0 |
2018/03/22 | 695 | 2,470 | <issue_start>username_0: I'm attempting to use GraphViz to visualize a decision tree with Google Cloud Datalab, but the following error is being thrown:
InvocationException: GraphViz's executables not found
I found a [related post](https://stackoverflow.com/questions/27666846/pydot-invocationexception-graphvizs-executables-not-found), but the solutions here did not resolve the problem in Datalab.
Run the following to replicate:
```
!pip install graphviz
import graphviz
import numpy as np
from sklearn.datasets import load_iris
from sklearn import tree
from sklearn.externals.six import StringIO
iris = load_iris()
train_data = iris.data
train_labels = iris.target
clf = tree.DecisionTreeClassifier()
clf.fit(train_data, train_labels)
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=iris.feature_names,
class_names=iris.target_names)
graph = graphviz.Source(dot_data)
graph
```<issue_comment>username_1: For me, it worked on Datalab (Python2 kernel) after running these cells:
```
%%bash
apt-get update -y
```
(this will most likely give you an error about repositories not being signed but you can proceed anyway with the `--allow-unauthenticated` flag)
```
%%bash
apt-get install python-pydot -y --allow-unauthenticated
```
[](https://i.stack.imgur.com/W0Lfn.png)
Upvotes: 2 <issue_comment>username_2: I was also having the same problem running graphviz on the Datalab. Please run the following command before importing graphviz library in the Datalab notebook:
```
%%bash
/usr/bin/yes | apt-get install graphviz
pip install --upgrade graphviz
/usr/bin/yes | pip uninstall pyparsing
pip install -Iv https://pypi.python.org/packages/source/p/pyparsing/pyparsing-1.5.7.tar.gz
pip install --upgrade pydot
```
After that run the command and it will work well like this:
```
import graphviz
#visually representing the decision tree for num_leaves = 2
targetNames=['No Fault', 'Minor Fault','Major Fault']
dot_data = tree.export_graphviz(decTree, out_file= None,
feature_names=important_cols,
class_names = targetNames,
filled=True, rounded=True)
graph = graphviz.Source(dot_data)
graph
```
[](https://i.stack.imgur.com/vTMZx.jpg)
Upvotes: 0 |
2018/03/22 | 723 | 2,470 | <issue_start>username_0: I have written a unit test in Jest for a TypeScript function(`checkEmail`) that internally calls `showAlert`.
`showAlert` in the `utils.ts`:
```
export const showAlert = (message: string) => {
toast(message);
};
```
And in my test case I have mocked the above function:
```
import {showAlert} from './utils'
showAlert = jest.fn()
```
[](https://i.stack.imgur.com/w1fgt.png)
Although the test case works as expected, the IDE (in VS Code and WebStorm) shows an error in the test file: `Cannot assign to 'showAlert' because it is not a variable.`
```
showAlert = jest.fn()
^^^^^^^^^
```
Any help to get rid of the above error would be greatly appreciated.
Usage of the `showAlert`:
```
function checkEmail(email: string) {
if (!email.trim()) {
showAlert('Email is required.');
}
}
```
Repo to reproduce the issue: <https://github.com/shishiranshuman13/tsjest-demo-error>
2018/03/22 | 589 | 2,068 | <issue_start>username_0: I am trying to join the third table `comment` in this query. The complaint and comment table relate to the main table `queue` by `id` and `queue_id`. The complaint and comment table have a couple of different columns. This query brings in all the columns and the rows from `comment` but not `complaint`.
```
SELECT t.*,complaint.*, `comment`.*
FROM queue t
LEFT JOIN complaint
on complaint.queue_id = t.id AND t.state='open'
RIGHT JOIN `comment`
on `comment`.queue_id = t.id AND t.state='open'
ORDER BY date_sort DESC
```
2018/03/22 | 413 | 1,475 | <issue_start>username_0: I am using `Toggleclass` jquery function which toggles `animated wobble` class on each click. How can i animate that button on every click (not every two clicks). [animate.css](https://github.com/daneden/animate.css) on Github.
```js
$("#Bouncebtn").click(
function(e) {
e.preventdefault
$("#Bouncebtn").toggleClass('animated wobble');
});
```
```html
bounce
```<issue_comment>username_1: You need to wait for animation end. Hence, you can listen for [animationend](https://developer.mozilla.org/en-US/docs/Web/Events/animationend):
```js
$("#Bouncebtn").on('click', function (e) {
e.preventDefault();
$("#Bouncebtn").toggleClass('animated wobble');
}).on('animationend', function(e) {
$("#Bouncebtn").toggleClass('animated wobble');
});
```
```html
bounce
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Add the lines below under the `$("#Bouncebtn").toggleClass('animated wobble');` line. The issue happens because on the first click the class is assigned and on the second click the class is removed; removing the class after a timeout lets the animation run on every click.
```
setTimeout(function(){
$("#Bouncebtn").removeClass('animated wobble');
}, 1000);
```
```js
$("#Bouncebtn").click(
function(e) {
e.preventdefault
$("#Bouncebtn").toggleClass('animated wobble');
setTimeout(function(){
$("#Bouncebtn").removeClass('animated wobble');
}, 1000);
});
```
```html
bounce
```
Upvotes: 0 |
2018/03/22 | 917 | 3,446 | <issue_start>username_0: I have a script that runs on two different pages. All it does is find the text of a specific div that is next to the div with the text UID. Note that I cannot edit the actual content of the page in question, otherwise I would just give these classes or IDs to target them directly.
```
$(document).ready(function() {
var uid = $('div.label').filter(function() {
return $(this).text() == 'UID';
}).next().text();
$('.column.right.top-ribbon-nav, .client_custom_fields_container').append(function() {
return '[User Page](https://www.xxxxxxx.com/private/index.html?content=admin2&page=userdetails&id=' + uid + ')';
}
});
$('.pLink').on('click', function() {
if(uid == '~') {
alert('No UID is present for this contact. Please add it and try again.');
return false;
}
});
});
```
This works as expected, but on one of the two pages I use it on, there are actually two matches for that variable. As a result, on that page it takes the two results and adds them together, so that the variable uid matches UID1 and UID2 and becomes UID1UID2.
In this instance, I actually only want the second match.
My thought was to add a conditional that first determines if there is more than one result for that variable. I believe I was mistaken in thinking I could do `if uid.length > 1` because uid is actually returning the text of the match for the container, not the container itself. I then tried using `index` or `eq` to get `1` which would be my second result.
I am pretty new to javascript and jquery and think I might be misunderstanding something basic that's getting in my way.
Edit: I found that I was just using `.eq()` in the wrong location. Here is my working code:
```
$(document).ready(function() {
var uid = $('div.label').filter(function() {
return $(this).text() == 'UID';
}).next().text();
$('.column.right.top-ribbon-nav, .client_custom_fields_container').append(function() {
if (uid.length > 1) {
var uid2 = $('div.label').filter(function() {
return $(this).text() == 'UID';
}).eq(1).next().text();
return '[User Page](https://www.xxxxxxxx.com/private/index.html?content=admin2&page=userdetails&id=' + uid2 + ')';
} else {
return '[User Page](https://www.xxxxxxxx.com/private/index.html?content=admin2&page=userdetails&id=' + uid + ')';
}
});
$('.pLink').on('click', function() {
if(uid == '~') {
alert('No UID is present for this contact. Please add it and try again.');
return false;
}
});
});
```<issue_comment>username_1: You can get the result of your filter function
```
var uid = $('div.label').filter(function() {
return $(this).text() == 'UID';
});
```
And then get the last item of this result
```
var unique = uid[uid.length-1];
```
Then you can do what you want on this object
```
unique.text()...
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I feel this is the best option and is much easier to read.
You could use `.last()` or `.first()` after your initial filter to get the always the last or first one.
You could also use `.eq(1)` if it is always the second match but not the third etc.
```
var uid = $('div.label').filter(function() {
return $(this).text() == 'UID';
}).last().next().text();
```
Upvotes: 0 |
2018/03/22 | 360 | 1,256 | <issue_start>username_0: I want to pass php variable into html for further processing.
this is php variable-
```
$coords = array();
array_push($coords,some value);
```
I want to pass this to html-
```
var Coordinates = [];
Coordinates.push('<?php echo $coords; ?>');
```
But this is not the correct way as it gives error. Someone please help me to achieve this
Thanks.<issue_comment>username_1: Try something like that:
`Coordinates.push('<?= "[".implode($coords,",")."]" ?>');`
<http://php.net/manual/en/function.implode.php>
Upvotes: 2 [selected_answer]<issue_comment>username_2: The simplest example is to json\_encode this out into the javascript var like so:
```
var Coordinates = <?= json_encode($coords) ?>;
```
Then simply loop through this like you would any other json.
This question has been asked a lot of times before, however, so please check here for further methods.
[How to pass variables and data from PHP to JavaScript?](https://stackoverflow.com/questions/23740548/how-to-pass-variables-and-data-from-php-to-javascript)
Upvotes: 2 <issue_comment>username_3: Server side, you can create a javascript array directly in your rendered html :
```
var Coordinates = [<?php
echo '"'.implode('","', $coords).'"';
?>];
```
Upvotes: 1 |
2018/03/22 | 895 | 2,800 | <issue_start>username_0: I have two tables that I created which have the following columns:
* Table 1: MediaID, CurrentIndex, isUsing
* Table 2: Table2ID, MediaID, NewIndex
I want to make a query which checks the following things
1. `IsUsing` is null or false in Table1
2. Return the top 1 from table2 if `NewIndex` is greater than `currentIndex` for each `MediaID`. Like this
```
SELECT TOP 1 *
FROM table2
ORDER BY NewIndex DESC
```
Is it possible to combine these two statements into one?
Reason being, I am doing this query using C#, and the first query returns more than 200+ records. I don't want to be sending 200+ database requests checking every time.
Input like this
```
+-----------+--+----------+--------------+--+
| MediaID | | IsUsing | CurrentIndex | |
+-----------+--+----------+--------------+--+
| 123123123 | | false | 2 | |
| 123321 | | | 0 | |
| 123123 | | false | 5 | |
+-----------+--+----------+--------------+--+
+-----------+-----------+----------+--+--+--+
| Table2ID | MediaID | NewIndex | | | |
+-----------+-----------+----------+--+--+--+
| 1 | 123123123 | 3 | | | |
| 2 | 123321 | 2 | | | |
| 3 | 123123 | 0 | | | |
+-----------+-----------+----------+--+--+--+
```
So the output would be like
MediaID,Table2ID,Index(from table2)<issue_comment>username_1: You can use the following query:
```
;WITH CTE AS (
SELECT t2.MediaID, t2.Table2ID, t2.NewIndex,
ROW_NUMBER() OVER (PARTITION BY t2.MediaID
ORDER BY t2.NewIndex DESC) AS rn
FROM table2 AS t2
JOIN table1 AS t1
ON t2.MediaID = t1.MediaID AND t2.NewIndex > t1.CurrentIndex
WHERE (t1.IsUsing IS NULL) OR (t1.IsUsing = 0)
)
SELECT MediaID, Table2ID, NewIndex
FROM CTE
WHERE rn = 1
```
The query of the `CTE`:
* Filters out `table1` records having `IsUsing` set to 1 (this is the `bit` value for `true`)
* Filters out `table2` records having `NewIndex` less than or equal to `CurrentIndex` of `table1`.
Finally, using `ROW_NUMBER()` we can get the greatest-per-MediaID record.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You might find that `apply` is simpler for this type of query:
```
select . . .
from table1 t1 cross apply
(select top 1 t2.*
from table2 t2
where t2.mediaid = t1.mediaid and t2.NewIndex > t1.currentIndex
order by t2.NewIndex desc
) t2
where t1.isUsing is null or t1.isUsing = 'false';
```
You will note that this query very much follows the logic in how you described the question. In addition, `apply` tends to do a better job of making use of indexes (in this case, on `table2(mediaid, newindex)`) than `row_number()` does.
Upvotes: 0 |
2018/03/22 | 1,468 | 4,793 | <issue_start>username_0: My client has the following topology:
```
User <--> Apache <--> TomCat <--> JBossAS7 <--> Mule
```
The Mule needs to communicate with another server, **on another domain**, to retrieve information ask by the user. This communication uses HTTPS, and the certificate is in the trustore of the Mule. Therefore, the Mule and the other server are able to create an SSL connection.
This is the ajax request used to perform what the user wants:
```
$.ajax({
url : DS.nav.importDS,
data : data,
type : "GET",
cache : false,
success : function(html)
{
//do some stuff
},
error:function (error)
{
//do some stuff
}});
```
(jquery version: 1.7.1)
About the Response Headers:
* Cache-Control: max-age=0, no-cache, no-store, must-revalidate
* Cache-Control: no-cache
* Connection: Keep-Alive
* Content-Type: text/html;charset=UTF-8
* Keep-Alive: timeout=5, max=100
This GET request returns an HTML content.
My problem is the following:
This request stops working without warning.
On **Chrome** (version used: 65.0.3325.162), after several minutes I have the following error message in the console: "ERR\_INCOMPLETE\_CHUNKED\_ENCODING.". There is no link with a real time protection antivirus.
On **Edge**, "XMLHttpRequest: Network Error 0x800c0007, No data is available for the requested resource".
BUT, it's working on IE. What I suppose is that IE is more permissive than Chrome or Edge. But I want to understand why.
I am not looking for the perfect answer, but for any idea which can put me on the trail on what's happening.
**EDIT**
On Chrome
- Status Code: 200 OK
- Timing: CAUTION: request is not finished yet! (after content download)
**EDIT**
By using chrome://net-export tool, this is the result of the HTTP request:
```
t=203357 [st= 2948] HTTP_TRANSACTION_READ_RESPONSE_HEADERS
--> HTTP/1.1 200 OK
Date: Fri, 23 Mar 2018 14:44:16 GMT
Server: Apache-Coyote/1.1
X-Frame-Options: SAMEORIGIN
Cache-Control: max-age=0, no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: Sat, 26 Jul 1997 05:00:00 GMT
X-Frame-Options: SAMEORIGIN
X-UA-Compatible: IE=9,chrome=1
Content-Type: text/html;charset=UTF-8
Content-Language: en
Keep-Alive: timeout=5, max=98
Connection: Keep-Alive
Cache-Control: no-cache
X-Via-NSCOPI: 1.0
Transfer-Encoding: chunked
t=203357 [st= 2948] -HTTP_TRANSACTION_READ_HEADERS
t=203357 [st= 2948] HTTP_CACHE_WRITE_INFO [dt=0]
t=203357 [st= 2948] +URL_REQUEST_DELEGATE [dt=4]
t=203358 [st= 2949] DELEGATE_INFO [dt=3]
--> delegate_blocked_by = "extension ModHeader"
t=203361 [st= 2952] -URL_REQUEST_DELEGATE
t=203361 [st= 2952] -URL_REQUEST_START_JOB
t=203361 [st= 2952] URL_REQUEST_DELEGATE [dt=1]
t=203362 [st= 2953] HTTP_TRANSACTION_READ_BODY [dt=0]
t=203362 [st= 2953] URL_REQUEST_JOB_FILTERED_BYTES_READ
--> byte_count = 12971
t=203363 [st= 2954] HTTP_TRANSACTION_READ_BODY [dt=313130]
--> net_error = -355 (ERR_INCOMPLETE_CHUNKED_ENCODING)
t=516493 [st=316084] FAILED
--> net_error = -355 (ERR_INCOMPLETE_CHUNKED_ENCODING)
t=516495 [st=316086] -REQUEST_ALIVE
--> net_error = -355 (ERR_INCOMPLETE_CHUNKED_ENCODING)
```
**UPDATE**
I've disabled JavaScript and typed my request directly in the URL; the result was displayed, but the page kept loading for 5 minutes.<issue_comment>username_1: The reason could be your anti-virus. Test these scenarios:
1. Turn off the anti-virus (Avast/Kaspersky) and test.
2. Check if it could be a Chrome version issue. As far as I can tell, these versions are massively sensitive to the content length of the chunk being sent versus the expressed size of that chunk (I could be far off on that one). In short, a slightly imperfect headers issue.
3. It could be caused by the owner and permissions in a Linux environment.
Upvotes: 0 <issue_comment>username_2: **SOLVED**
Error was due to the vanilla JavaScript version of the Lazy Load plugin (1.9.3) <https://appelsiini.net/projects/lazyload/>
The latest Chrome, IE, and Firefox versions do not support it.
I have stopped using this plugin, but I suppose the latest version can be used in the various versions of Chrome, IE, and Firefox.
Upvotes: 3 [selected_answer] |
2018/03/22 | 476 | 1,555 | <issue_start>username_0: I have Months' `DropDownListFor` and I want to select current month as default I tried this two options
```
@{
var currentMonth = month.FirstOrDefault(x => x.Id == DateTime.Now.Month).Id;
}
```
**1.**
```
@Html.DropDownListFor(x => x.monthId, new SelectList(month, "Id", "Name", currentMonth ))
```
**2.**
```
@Html.DropDownListFor(x => x.monthId, month.Select(x => new SelectListItem
{ Text = x.Name.ToString(), Value = x.Id.ToString(), Selected = (x.Id == currentMonth ?true:false)})),
```
but neither works.
How can I achieve my goal?<issue_comment>username_1: Your code is correct, and `Selected = (x.Id == currentMonth ?true:false)}` is useless because you're binding to the property `monthId` of your model and this property is probably `null`. So at the top of your view, add the following code after setting the `currentMonth`, like below:
```
@{
var currentMonth = month.FirstOrDefault(x => x.Id == DateTime.Now.Month).Id;
Model.monthId = Model.monthId ?? currentMonth;
}
```
Upvotes: 1 <issue_comment>username_2: If you want it with an option label, use
```
@Html.DropDownListFor(x => x.MonthId, new SelectList(month, "Id", "Name", Model.MonthId), "Select Month")
```
otherwise
```
@Html.DropDownListFor(x => x.MonthId, new SelectList(month, "Id", "Name", Model.MonthId))
```
if that also doesn't work, then try to assign the month id in your model property
```
Model.MonthId = month.FirstOrDefault(x => x.Id == DateTime.Now.Month).Id;
```
and follow the above steps.
Upvotes: 1 [selected_answer] |
2018/03/22 | 1,166 | 4,178 | <issue_start>username_0: I know how ajax works but I wanna know that how to retrieve data from a specific php function so I dont create a number of pages for each function 1 page can do all the work.
AngularJs:
```
var app = angular.module("myapp",[]);
app.controller("usercontroller",function($scope,$http){
$http({
method : "GET",
url : "functions.php",
data: {
name: 'myname',
},
}).then(function mySuccess(response) {
$scope.myWelcome = response.data;
}, function myError(response) {
$scope.myWelcome = response.statusText;
});
});
```
Php:
```
function abc(){
echo 'abc';
}
```<issue_comment>username_1: You could do it by passing the function name as a GET variable and then using `eval()`:
```js
var app = angular.module("myapp",[]);
app.controller("usercontroller",function($scope,$http){
$http({
method : "GET",
data: "fx=abc",
dataType: "text",
url : "functions.php",
data: {
name: 'myname',
},
}).then(function mySuccess(response) {
$scope.myWelcome = response.data;
}, function myError(response) {
$scope.myWelcome = response.statusText;
});
});
```
```php
if (function_exists ($_GET['fx']))
eval ($_GET['fx'].'();');
function abc(){
echo 'abc';
}
```
Upvotes: -1 <issue_comment>username_2: Add another key to `data` set to get specific output from specific function.
Just an idea to make it work, modify as desired
1. single File holding all the method/functions
2. single class manage all the ajax calls.
3. single json response call.
`**JS**`:
```
data: {
name: 'myname',
_method: 'my_custom_function',
},
```
and then in PHP check the request to see which function needs to be called for the specific task.
Using `$_REQUEST` is useful because whether the request is `POST` or `GET` the value is always accessible; `method_exists` then verifies the method exists in the class, and if it does it is executed via `call_user_func` on the given class instance.
`**function.php**`
```
class Ajax_calls{
// ... multiple functions goes here
public function my_custom_function() {
// Your logics
}
}
if( isset( $_REQUEST ) ) {
$obj = new Ajax_calls();
$method = $_REQUEST['_method'];
if( method_exists($obj, $method) ) {
$data = call_user_func(array($obj, $method));
echo json_encode($data);
}
else {
echo json_encode("no function found");
}
}
```
Upvotes: 0 <issue_comment>username_3: Why don't you add code to your php file with the functions to choose the function to execute? Checking the value of the variable you are sending to functions.php and then making the decision. For example:
```
$action = $_GET["name"];
switch ($action){
case "myname":
abc();
break;
default:
echo "";
break;
}
function abc(){
echo "abc";
}
```
I prefer separate the logic of the "controller" and the functions file having a controller.php
```
require_once('functions.php');
$action = $_GET["name"];
switch ($action){
case "myname":
abc();
break;
default:
echo "";
break;
}
```
and a functions.php
```
function abc(){
echo "abc";
}
```
But that's up to you
Upvotes: 0 <issue_comment>username_4: The problem is your PHP file. You should do something like this:
```
<?php
$post_date = file_get_contents("php://input");
$data = json_decode($post_date);
//now i am just printing the values
echo "Name : ".$data-name."n";
```
In the PHP page we retrieve the JSON data and decode it using json\_decode. Remember, this is just an example; you can improve it. I'm just giving you a minimalist way to do it.
Let me know if you need something else about it.
///////
NEW EDIT - Send variable to PHP
///////
**Angular**
```
$http({
url: "urltopost.php",
method: "POST",
data: {
data: variable
}
}).success(function(response) {
console.log(response);
});
```
**PHP**
```
<?php
$request = json_decode( file_get_contents('php://input') );
$variable = $request->data;
```
Upvotes: 1 |
2018/03/22 | 1,269 | 3,961 | <issue_start>username_0: I have a popup that has two containers:
1. text-container
2. buttons-container
The popup has width of 370px and the buttons appear below the text.
The width of the popup can be changed **only in a case** that there is long text in the buttons that cause the popup width to grow (the buttons should appear in one row always).
If there is a long text in the text container, the popup width should remain 370px.
For example:
1. The popup width is 383px because of long text in buttons:
[](https://i.stack.imgur.com/ThY76.png)
2. The popup width is 370px (The buttons text can be displayed in 370px):
[](https://i.stack.imgur.com/fbD3Y.png)
This is my jsbin:
<http://jsbin.com/fuzozehipo/1/edit?html,css,output>
HTML:
```
some text
Button 1 with long long long long long long text
Button 2
```
CSS:
```
.popup {
display: inline-block;
position: relative;
border: 1px solid;
min-width: 370px;
}
.text-container {
width: 100%;
}
```
Any help appreciated!<issue_comment>username_1: For the text, is is expected that `.popup` grows with the text because it is an `inline-block` element, so `width: 100%;` doesn't behave like you would want it. Since your value of `370px` is hard-coded, I see no issue in reusing it in your ruleset for the `.text-container` as part of a `max-width` rule. Although, it seems you have a `padding: 30px;` rule on your screenshots, so if you're using `box-sizing: border-box`, compensate for `.popup`'s padding by making `.text-container`'s `max-width` `310px`, which is the `width` - `padding` on each side.
For the buttons, use `white-space: nowrap;` on your `.buttons-container` to keep yours buttons on the same line, although the aforementioned behaviour of `.popup`'s `inline-block` should take care of it by itself.
```css
.popup {
display: inline-block;
position: relative;
border: 1px solid;
min-width: 370px;
}
.text-container {
max-width: 370px; /* Compensate for the padding if using box-sizing: border-box */
}
.buttons-container {
white-space: nowrap;
}
```
```html
some text that is also white long but that goes to the next line when it reaches a certain width
Button 1 with longer text
Button 2 with some text as well
```
Upvotes: 0 <issue_comment>username_2: Can you do something like
```
.wrapper {
display: inline-block;
position: relative;
border: 1px solid;
min-width: 370px;
}
.buttons-container {
display: flex;
flex-direction: row;
}
.buttons-container > button {
flex: 1;
white-space: nowrap;
margin-left: 10px;
overflow: hidden;
text-overflow: ellipsis;
}
.buttons-container > button:first-child {
margin-left: 0;
}
.text-container {
width: 100%;
}
```
Upvotes: 1 <issue_comment>username_3: You can use an [interesting Flex rendering trick](http://dcousineau.com/blog/2011/07/14/flex-box-prevent-children-from-stretching/) to make the text not expand the parent by setting the child's width to `0`. Then just set `min-width: 100%` to ensure it takes up the full width of the parent container, which is now controlled only by the width of the buttons.
```css
.popup {
border: 1px solid;
border-radius: 4px;
padding: 10px;
box-sizing: border-box;
min-width: 370px;
display: inline-flex;
flex-direction: column;
}
.text {
width: 0;
min-width: 100%;
}
.buttons {
margin-top: 10px;
display: flex;
justify-content: flex-end;
white-space: nowrap;
}
button + button {
margin-left: 10px;
}
```
```html
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla at porta sapien. Quisque sapien justo, fringilla consectetur molestie eu, hendrerit vehicula purus. Pellentesque ac ante urna.
Really long text that will make the buttons push out the parrent div
text
```
Upvotes: 4 [selected_answer] |
2018/03/22 | 680 | 2,922 | <issue_start>username_0: I am pulling weather data from an API. I wrote a script that fetches the data every 15 minutes. What is the best practice for running this scripton google's app engine?
* Appengine python app?
* Cron job?
* Running the script in a VM?<issue_comment>username_1: I have done exactly what you're asking here in the past, pull weather data (likely from a .gov source) and then do some processing to it and store it in a database.
I started using a python/cron combo but had issues tracking down what part of it failed when it failed. There were many times where the data that *should* have been available was not.
In my case I was in AWS so I used Lambda, but Google Cloud Platform's Cloud Functions is similar. I kicked individual functions off with Jenkins using their scheduled triggers then tracked their completion to ensure it completed successfully. If the function fails then I can see which specific part of the process failed easily in Jenkins.
Upvotes: 0 <issue_comment>username_2: Assuming you don't want to rewrite your script in another language (e.g. JavaScript, that would allow Cloud Functions or Google Apps Script), the question is what you actually want to do with the fetched data and if you already use an App Engine app or a VM.
You can use an App Engine app in Python standard environment for just this feature. Basically you would write a request handler that will fetch the data and configure cron.yaml to [schedule a cron-job](https://cloud.google.com/appengine/docs/standard/python/config/cron). As a result, your request handler will receive an HTTP request according to your schedule and then performs an [Outbound Request](https://cloud.google.com/appengine/docs/standard/python/outbound-requests) with `fetch()`. See the doc for limitations (e.g. port restrictions). For this setup I also suggest to configure the [task-queue](https://cloud.google.com/appengine/docs/standard/python/taskqueue/) so that only one request is handled at any time and also add an (exponential?) back-off in case a request fails. Also keep in mind, that the default `idle_timeout` before an instance is [shutdown is 5 minutes (for "basic scaling")](https://cloud.google.com/appengine/docs/standard/python/config/appref#scaling_elements). 15 minutes is the [startup fee that is billed for a new instance](https://cloud.google.com/appengine/docs/standard/python/how-instances-are-managed#instance_billing). Since cron-jobs do not exactly run on a per second base but are slightly distributed around the scheduled time, this might lead to additional costs depending on your configuration. So it might make sense to either increase `idle_timeout` in a basic-scaling configuration to 16 or 17 minutes, or to schedule your task every 13.5 minutes or so.
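As a rough sketch of that setup (handler name, URL, module choice and the 15-minute schedule are illustrative assumptions, not taken from this answer), a Python 2 standard-environment app could look like:

```
# cron.yaml (separate file):
#   cron:
#   - description: fetch weather data
#     url: /tasks/fetch-weather
#     schedule: every 15 minutes

import webapp2
from google.appengine.api import urlfetch

class FetchWeatherHandler(webapp2.RequestHandler):
    def get(self):
        # Outbound request to a hypothetical weather API endpoint.
        result = urlfetch.fetch('https://example.com/api/weather', deadline=30)
        if result.status_code != 200:
            # A non-2xx response makes the cron service log the run as failed.
            self.abort(500)
        # ... persist result.content (Datastore, Cloud Storage, ...) ...
        self.response.write('ok')

app = webapp2.WSGIApplication([('/tasks/fetch-weather', FetchWeatherHandler)])
```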
If the `fetch()` restrictions don't meet your requirements, you might want to consider either a flexible environment or a VM.
Upvotes: 2 |
2018/03/22 | 1,716 | 4,731 | <issue_start>username_0: I'm appending a row to a table. How can I change the background color in that row for a group of cells. Say column has 25 columns; columns starting from 17 till 22 needs a background color change.
This is what I tried so far:
```
$("#table1").append(row1);
$(row1).children('td')
.not('td:eq(0)')
.not('td:eq(0)')
.css({ "background-color": workcolor });
```
Here I use children('td'), which colors all the cells in the row; instead I need a particular cell's background to change according to the column ID.
I added my HTML code so you can see this more clearly:
```
| Image | Employee | 00:00 | 01:00 | 02:00 | 03:00 | 04:00 | 05:00 | 06:00 | 07:00 | 08:00 | 09:00 | 10:00 | 11:00 | 12:00 | 13:00 | 14:00 | 15:00 | 16:00 | 17:00 | 18:00 | 19:00 | 20:00 | 21:00 | 22:00 | 23:00 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
```<issue_comment>username_1: You can try `.filter()`
```
$(row1).find('td').filter(function(index){
return parseInt(this.id) >= 17 && parseInt(this.id) <= 22
}).css({ "background-color": yellow});
```
The above example uses the td id (`this.id`); you can switch to the `index` passed to the filter function if you are working with td indexes,
and you can still set all the td elements of the row by using `.css()` before `filter()`.
```
$(row1).find('td').css({ "background-color": workcolor }).filter(function(){
return parseInt(this.id) >= 17 && parseInt(this.id) <= 22
}).css({ "background-color": yellow});
```
**Notes:**
* No need to use `.find('td')` if you're looking for row id not td id
* If you want to reference the row by td you can keep using `.find('td')` and on filter you can use `parseInt($(this).closest('tr').attr('id'))`
**Up to your question update** .. you can use just css to do that
```
th:nth-child(n + 20):not(:nth-child(26)),
td:nth-child(n + 20):not(:nth-child(26)){
background : yellow;
}
```
Explanation:
>
> Why `(n + 20)`? because `nth-child` index is starting from 1 .. so if
> you count the element with id 17 you'll find its index is 20 and `n + 20` will select the element with index is 20 and above and
> `:not(:nth-child(26))` to not the last column
>
>
>
Upvotes: 2 <issue_comment>username_2: You could use a combination of the [gt](https://api.jquery.com/gt-selector/) and [lt](https://api.jquery.com/lt-selector/) selectors:
```
$("#table1").append(row1);
$(row1).children('td:gt(17):lt(22)').css('background-color', workcolor);
```
Upvotes: 0 <issue_comment>username_3: You can try this way :
```js
/* added this loop to append tr you can ignore this loop as this only for demo */
for(i=0; i<=10; i++){
var row = $('|
');
for(j=0; j<=25; j++){
row.append(' '+j+' |');
}
$("#table1 tbody#body1").append(row);
}
// demo code ends here
// actual logic starts here
var startIndex = $('#table1 thead th').index($('#table1 thead th[id=17]')) // get the index of id 17
var endingIndex = $('#table1 thead th').index($('#table1 thead th[id=22]')) // get the index of id 22
// iterates through all rows.
$.each($('#table1 #body1 tr'), function(i, item) {
// update background-color of td between the index values 17 and 22
$(item).children('td:lt('+(endingIndex + 1)+'):gt('+(startIndex - 1)+')').addClass('highlight')
});
```
```css
.highlight{background-color: yellow;}
```
```html
| Image | Employee | 00:00 | 01:00 | 02:00 | 03:00 | 04:00 | 05:00 | 06:00 | 07:00 | 08:00 | 09:00 | 10:00 | 11:00 | 12:00 | 13:00 | 14:00 | 15:00 | 16:00 | 17:00 | 18:00 | 19:00 | 20:00 | 21:00 | 22:00 | 23:00 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
```
So as a first step we need to find starting index and ending index in which we want to fill the background color.
```
var startIndex = $('#table1 th').index($('#table1 th[id=17]')) // index is 19
var lastIndex = $('#table1 th').index($('#table1 th[id=22]')) // index is 24
```
now we can iterate through each row and select columns based on starting index and last index:
```
$.each($('#body1 tr'), function(i, item) {
$(item).children('td:lt('+(endingIndex + 1)+'):gt('+(startIndex -1)+')').css('background-color', 'yellow')
});
```
in which **item** is row/`tr` itself so from that using Jquery **`:lt`** and **`:gt`** selector we can get children(`td`) between those index and apply css on it.
`:lt(index):gt(index)` will gives all the elements in an array between this index as we also want startIndex and endingIndex along with this we incremented and decremented accordingly.
Upvotes: 2 [selected_answer] |
2018/03/22 | 1,845 | 5,734 | <issue_start>username_0: I have an Excel VBA program that can produce hundreds of potential reports in Word. It exports a large table (up to 100 rows) from Excel to Word, and then inserts a hard page break before the next section. Then it continues to the next section of the report, pasting another table. The issue is that sometimes the table length is just the right size so that Word inserts a soft page break. Then the Excel VBA code adds a hard page break, and the result is a blank page in the middle of the report.
**I'd like to insert some sort of IF statement where the Excel VBA determines if a soft page break has just occurred and only insert a hard page break if a soft page break is not there. Is this possible?**
Here's the relevant code:
```
For Each cell In ThisWorkbook.Names("TemplateTextRange").RefersToRange.Cells
If cell.Value <> False Then
If UCase(Right(cell.Value, 8)) = "TBLPASTE" Then
strTableName = Left(cell.Value, Len(cell.Value) - 10)
DoEvents
ThisWorkbook.Names(strTableName).RefersToRange.Copy
DoEvents
wrdApp.Selection.Paste
Application.CutCopyMode = False
DoEvents
wrdApp.Selection.TypeParagraph
ElseIf cell.Value = "" Then
wrdApp.Selection.InsertBreak Type:=wdPageBreak
Else
wrdApp.Selection.EndKey Unit:=wdStory
DoEvents
cell.Copy
DoEvents
wrdApp.Selection.PasteAndFormat (wdPasteDefault)
Application.CutCopyMode = False
DoEvents
wrdApp.Selection.TypeParagraph
End If
End If
Next cell
```
What it is doing is reading a set of instructions from an Excel worksheet. A potential template might look like this:
Cell 1:FALSE
Cell 2:A bunch of text (appears as written)
Cell 3:SummaryValuesTable, TBLPASTE (tells the program to paste the values in range SummaryValuesTable as a table)
Cell 4:A bunch more text
Cell 5: <insert page break> (tells the program to insert a hard page break)
Cell 6: A bunch more text
Cell 7: AnotherTable, TBLPASTE
etc.
2018/03/22 | 1,091 | 3,613 | <issue_start>username_0: I have this hash
```
FORMATS = {
'vertical_4' => %w[vertical small small small small],
'horizontal_4' => %w[horizontal small small small small],
'horizontal_2' => %w[large small small],
'mixed_3' => %w[vertical horizontal small small],
'huge' => %w[horizontal small small horizontal small small small]
}
```
I need the max value, so I use
```
MAX_ELEMENTS = FORMATS.map {|_,v| v}.max.size
```
Why it returns 5 instead of 7?<issue_comment>username_1: If you want the max of the size of the arrays you need:
```
FORMATS.map {|_,v| v}.map(&:size).max
```
Map the value of the arrays to their size and select max of that array.
UPDATE better answer no need for 2nd map:
```
FORMATS.map{|_,v| v.size}.max
```
To explain why your code gave you the result it did, `.max` uses logic to get the max on enumerable. So if it compares array of strings it will look for the max by alphabetic sort.
```
[['aaa','bbb'],['ddd','ccc'], ['bbb','ccc']].max #=> returns ["ddd", "ccc"]
```
Your code was mapping the values to array, calling `.max` on that, which would give the highs max by alpha sort, then calling `.size` on that, just gives size of array.
Upvotes: 2 <issue_comment>username_2: You can make use of `max` iterator with a block
`FORMATS.values.max { |values| values.length }.size`
Upvotes: 2 <issue_comment>username_3: The array with the greatest number of elements
```
FORMATS.values.max_by(&:count)
```
Upvotes: 4 [selected_answer]<issue_comment>username_4: You asked why, so I'm going to break this down step by step, first we know map returns an array of values (same as `FORMATS.values`)
```
FORMATS.map {|_,v| v}
=> [
["vertical", "small", "small", "small", "small"],
["horizontal", "small", "small", "small", "small"],
["large", "small", "small"],
["vertical", "horizontal", "small", "small"],
["horizontal", "small", "small", "horizontal", "small", "small", "small"]
]
```
We then call [`Array#max`](https://ruby-doc.org/core-2.5.0/Array.html#method-i-max) without any arguments or a block which according to the docs:
>
> Returns the object in ary with the maximum value. The first form assumes all objects implement Comparable;
>
>
>
In your case, each element is an array and since array does respond to the comparable operator [`<=>`](https://ruby-doc.org/core-2.5.0/Array.html#method-i-3C-3D-3E)
it runs each element against each other like this:
```
FORMATS.map {|_,v| v}.max { |a, b| a <=> b }
```
However `Array#<=>` does it's own comparison element by element:
>
> Arrays are compared in an “element-wise” manner; the first element of ary is compared with the first one of other\_ary using the <=> operator, then each of the second elements, etc… As soon as the result of any such comparison is non zero (i.e. the two corresponding elements are not equal), that result is returned for the whole array comparison.
>
>
>
However when we call max we get back the first item in the array, why?
```
FORMATS.map {|_,v| v}.max
=> ["vertical", "small", "small", "small", "small"]
```
That's because if we look at this element-by-element, the first value is "vertical" and "vertical" is `>=` every first element for the other arrays which then leaves us only with the first element:
```
["vertical", "small", "small", "small", "small"]
```
and the 4th element
```
["vertical", "horizontal", "small", "small"]
```
If we then compare the second elements, `small` is > `horizontal` and we're done, the first element is the `max`.
You then call `size` on the max element which is 5.
Upvotes: 2 |
2018/03/22 | 457 | 1,752 | <issue_start>username_0: When the user successfully login, i'm storing the username in tempdata so i can use it in my \_Layout:
```
TempData["username"] = model.Email.Split('@')[0];
TempData.Keep("username");
```
on my \_Layout:
```
- ##### Welcome, @TempData["username"]
```
this actually works on the first load, but if I go to another page, the tempdata turns to null and no username is displaying. How can i keep the username on my \_layout?
[Username displaying](https://i.stack.imgur.com/ckD4U.png)
[Username not display](https://i.stack.imgur.com/MN6Qf.png)<issue_comment>username_1: TempData is designed to have a life span only in between the current and the next request. You'd have to re-store it on every request (or call `.Keep()`) to make it available on the subsequent request.
You would be better off using a Session object or retrieving it from your user identity.
However you can "keep" your TempData object, if you call `.Keep()` after calling it (displaying counts towards calling).
```
- ##### Welcome, @TempData["username"]
@TempData.Keep("username")
```
Yet another way to circumvent this, is to use `.Peek()`:
```
- ##### Welcome, @TempData.Peek("username").ToString()
```
Upvotes: -1 <issue_comment>username_2: The TempData Keep method works only for the next request. If you want data available across all pages, use the MVC identity principal (IPrincipal) approach to persist it.
Upvotes: 0 <issue_comment>username_3: If you've recently changed your authentication to azure Ad and your application is load balanced. Please make sure, you've updated the load balancer to use *sticky session*. Without a sticky session, the response can come from any server which can result in null temp data.
Upvotes: 0 |
2018/03/22 | 152 | 443 | <issue_start>username_0: How do I convert from int to uint16 in vala?
My requirement is to convert the text in decimal in a gtkEntry to uint16.
How do I do this?<issue_comment>username_1: ```
int myint = 543;
uint16 z = 0 + myint;
```
Upvotes: 0 <issue_comment>username_2: Just cast. `uint16 as_uint16 = (uint16) as_int;`
You may want to first check that the value is between `uint16.MIN` and `uint16.MAX`.
Upvotes: 3 [selected_answer] |
2018/03/22 | 290 | 1,035 | <issue_start>username_0: I am trying to create a method to delete rows from my dynamic form and I am having trouble targeting the array.
so the form group is this:
```
this.piForm = this.fb.group({
milestoneSaveModel: this.fb.group({
milestonesToCreate: this.fb.array([this.mileStoneCreate()]),
}),
});
```
and then my delete method is this so far:
```
deleteRow(index: number) {
const control = this.piForm.controls['milestoneSaveModel'].controls['milestonesToCreate'];
control.removeAt(index);
}
```
my linter tells me `Property 'controls' does not exist on type 'AbstractControl'.`
however when I trigger this in the browser it actually works. So how do i fix the linting error?<issue_comment>username_1: use this syntax instead :
```
this.piForm.get('milestoneSaveModel').get('milestonesToCreate')
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to convert type of control to FormArray.
```
form.get('milestoneSaveModel').get('milestonesToCreate') as FormArray;
```
Upvotes: 0 |
2018/03/22 | 456 | 1,796 | <issue_start>username_0: I use [React Navigation](https://reactnavigation.org/) for routing and navigation in a react native project. I have a bottom tab bar component, which is partially hidden in Android.
**TabNavigator**
```
...
{
lazy: true,
tabBarPosition: 'bottom',
animationEnabled: false,
initialRouteName: 'Schedule',
swipeEnabled: false,
tabBarOptions: {
showIcon: true,
showLabel: true,
allowFontScaling: true,
upperCaseLabel: false,
activeTintColor: Colors.white,
inactiveTintColor: Colors.darkGrey,
indicatorStyle: {
// backgroundColor: 'transparent',
},
labelStyle: {
fontSize: 15,
fontWeight: '500',
},
style: {
backgroundColor: Colors.darkBlue,
height: 50,
},
iconStyle: {
height: TAB_ICON_SIZE,
width: TAB_ICON_SIZE,
padding: 0,
margin: 0,
},
},
},
```
[](https://i.stack.imgur.com/3ZpP5.png)<issue_comment>username_1: Add the following alignSelf property to iconStyle, and make sure that TAB\_ICON\_SIZE is not greater than 24, because react-native on Android follows the Material Design guidelines here.
```
iconStyle: {
height: TAB_ICON_SIZE,
width: TAB_ICON_SIZE,
padding: 0,
margin: 0,
alignSelf: 'center'
}
```
Upvotes: 1 <issue_comment>username_2: There was `margin: 8` for `labelStyle`.
Changing labelStyle fixed that:
```
labelStyle: {
fontSize: 15,
fontWeight: '500',
margin: 2,
},
```
Upvotes: 1 [selected_answer] |
2018/03/22 | 263 | 949 | <issue_start>username_0: this is code
```
[${jk.read}
${jk.idnum}](openmessage)
```
---
javascript for selecting the value and showing in alert box
```
var df = document.getElementsByTagName('az3');
var de = document.getElementsByTagName('a1');
function sendtoanother() {
alert(df+" "+de); }
```
2018/03/22 | 731 | 2,280 | <issue_start>username_0: I'll try every solution which I get from stack overflow but its my bad luck. I store my JSON response in [Any] array like this:
```
var json = JSON()
var arrClientType = [Any]()
self.json = JSON(value) //value is json data
self.arrClientType = self.json["client_type_data"].arrayValue
```
now, I want to filter this array and reload that filtered data in tableview.
```
[{
"client_type_name" : "Asset Manager",
"client_type_id" : 1
}, {
"client_type_name" : "Broker Dealer",
"client_type_id" : 5
}, {
"client_type_name" : "Corporate",
"client_type_id" : 8
}, {
"client_type_name" : "Custodian and Prime Broker",
"client_type_id" : 3
}, {
"client_type_name" : "Diversified Financial Services Firms",
"client_type_id" : 4
}, {
"client_type_name" : "Fund Administrator",
"client_type_id" : 6
}, {
"client_type_name" : "Hedge Fund Manager",
"client_type_id" : 2
}, {
"client_type_name" : "Individual",
"client_type_id" : 7
}]
```
I tried this also:
```
let filtered = JSON(self.arrList).arrayValue.filter({
$0["client_type_name"].arrayValue.map({ $0.stringValue }).contains("Broker Dealer")
})
print ("filterdData: \(filtered)")
```
but it gives me an empty filtered array.
Please help me.<issue_comment>username_1: try this
```
filtered = arrList.filter { $0["client_type_name"].stringValue.contains("Broker Dealer") }
```
and change your `self.arrList` to an array of JSON (dictionary) type:
```
var arrList: [JSON] = []
var filtered :[JSON] = []
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this one; you will be able to filter the data whose **"client\_type\_name"** is **Broker Dealer**.
```
let arrClientType = [["client_type_name":"Asset Manager" , "client_type_id":1] , ["client_type_name":"Broker Dealer" , "client_type_id":5] , ["client_type_name":"Corporate" , "client_type_id":8]]
if let filteredData = arrClientType.filter({(($0 as? [String:Any])?["client_type_name"] as? String ?? "") == "Broker Dealer"}) as? [Any] {
print(filteredData)
}
```
The filteredData is the array of dictionaries in which **"client\_type\_name"** is **Broker Dealer**.
Upvotes: 0 |
2018/03/22 | 372 | 1,426 | <issue_start>username_0: I just started OOP java and im struggling with getting a sum of my class type elements from an array. Can anyone help me?
hwComponents is a list type of a class HardwareComponent. Any help would be appreciated.
```
private Collection<HardwareComponent> hwComponents = new ArrayList<>();
public float calculatePrice()
{
float sum=0;
for (int i=1;i < hwComponents.size(); i++)
sum+= hwComponents.get(i); //The method get(i) is undefined for this type
return sum;
}
```<issue_comment>username_1: A `Collection` doesn't have a `get(index)` method.
Store your `ArrayList` in a `List` variable instead:
```
private List<HardwareComponent> hwComponents = new ArrayList<>();
```
Also note that the indices of your loop should begin at 0.
As an alternative, you can use an enhanced for loop, which doesn't require the `get` method:
```
for (HardwareComponent hc : hwComponents) {
sum+= hc.getSomeProperty(); // note that HardwareComponent cannot be added to
// a float (or to anything else for that matter, so you probably
// intended to call some method of your class which returns a float
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you don’t want to change the type of your array/collection, you just need to iterate through the collection in the collections defined order:
```
sum = 0;
for( HardwareComponent hc: hwComponents)
sum += hc.cost;
return sum;
```
Upvotes: 0 |
2018/03/22 | 563 | 2,117 | <issue_start>username_0: As a beginner, what I understood is that Python Standard Library (PSL) provides a lot of modules which provide a lot of functionalities, but still if I want to use those then I have to import the module, for example, `sys`, `os` etc. are PSL modules but still those need to be imported.
Now, I wonder if that is the case then how without importing anything I am able to use functions like `print`, `list`, `len` etc.? Is it that their "support is built-in into the interpreter"?<issue_comment>username_1: You should give the page on [built-in](https://docs.python.org/3/library/functions.html) functions a read
Quote:
>
> The Python interpreter has a number of functions and types built into
> it that are **always available**.
>
>
>
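For example, none of these names need an import first (demo added for illustration):

```
print(len("hello"), max(3, 7), sorted([2, 1]))  # -> 5 7 [1, 2]
```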
Upvotes: 2 <issue_comment>username_2: Yes. They're built-in functions (or in the case of `list`, a built-in class). You can explicitly import [the `__builtin__` module](https://docs.python.org/2/library/__builtin__.html) (Py2) or [the `builtins` module](https://docs.python.org/3/library/builtins.html) (Py3) if you want qualified access to the names, but by default, those modules are searched whenever an attempt to access a global name doesn't find the name in the module globals. They're not normally needed though, per the docs:
>
> This module is not normally accessed explicitly by most applications, but can be useful in modules that provide objects with the same name as a built-in value, but in which the built-in of that name is also needed.
>
>
>
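A small interactive illustration of that lookup order, and of when the qualified access helps (example added for illustration, Python 3):

```
>>> len = 3                # shadow the built-in with a module-global name
>>> import builtins
>>> builtins.len("abc")    # the original is still reachable through the module
3
>>> del len                # drop the shadow; the global lookup falls back again
>>> len("abc")
3
```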
Upvotes: 3 [selected_answer]<issue_comment>username_3: The `print` function comes from the `builtins` module.
You can find its documentation [here](https://docs.python.org/3/library/builtins.html#module-builtins).
Here is an example session.
I first check which module `print` comes from, which is stored in its `__module__` attribute.
Then I import the `builtins` module and check whether its `print` function is the same object as the prefix-less `print`.
```
>>> print.__module__
'builtins'
>>> import builtins
>>> builtins.print("hello")
hello
>>> print is builtins.print
True
```
Upvotes: 2 |
2018/03/22 | 685 | 2,601 | <issue_start>username_0: I am attempting to create a form in Google Spreadsheets which will pull an image file from my Drive based on the name of the file and insert it into a cell. I've read that you can't currently do this directly through Google Scripts, so I'm using setFormula() adn the =IMAGE() function in the target cell to insert the image. However, I need the URL of the image in order to do this. I need to use the name of the file to get the URL, since the form concatenates a unique numerical ID into a string to use the standardized naming convention for these files. My issue is that, when I use getFilesByName, it returns a File Iteration, and I need a File in order to use getUrl(). Below is an snippet of my code which currently returns the error "Cannot find function getUrl in object FileIterator."
```
var poNumber = entryFormSheet.getRange(2, 2);
var proofHorizontal = drive.getFilesByName('PO ' + poNumber + ' Proof Horizontal.png').getUrl();
packingInstructionsSheet.getRange(7, 1).setFormula('IMAGE(' + proofHorizontal + ')');
```<issue_comment>username_1: If you know the file name exactly, You can use `DriveApp` to search the file and getUrl()
```
function getFile(name) {
var files = DriveApp.getFilesByName(name);
while (files.hasNext()) {
var file = files.next();
//Logs all the files with the given name
Logger.log('Name:'+file.getName()+'\nUrl'+ file.getUrl());
}
}
```
If you don't know the name exactly, You can use [`DriveApp.searchFiles()`](https://developers.google.com/apps-script/reference/drive/drive-app#searchFiles(String)) method.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You're close - once you have the [`FileIterator`](https://developers.google.com/apps-script/reference/drive/file-iterator), you need to advance it to obtain a File, i.e. call `FileIterator.next()`.
If multiple files can have the same name, the file you want may not be the first one. I recommend checking this in your script, just in case:
```
var searchName = "PO + .....";
var results = DriveApp.getFilesByName(searchName);
var result = "No matching files";
while (results.hasNext()) {
var file = results.next();
if (file.getMimeType() == MimeType. /* pick your image type here */ ) {
result = "=IMAGE( .... " + file.getUrl() + ")");
if (results.hasNext()) console.warn("Multiple files found for search '%s'", searchName);
break;
}
}
sheet.getRange( ... ).setFormula(result);
```
You can view the available MimeTypes in [documentation](https://developers.google.com/apps-script/reference/base/mime-type)
Upvotes: 1 |
2018/03/22 | 1,372 | 6,865 | <issue_start>username_0: First of all I'm really new to Firestore and its functionalities, so I apologize if some of this might feel obvious to others. These things are just not registering in my mind yet.
I'm trying to create a 2 person multiplayer game using the language Swift and Firestore as the backend. However I'm not to sure how to create this functionality of only allowing two players inside a single game at a given time. How would I go about restricting each game to only allowing two players inside one game? Would this be something I need to set up within the security and rules portion of Firestore? Or would I need to create this functionality within how I model my data?
My current setup for how I'm modeling the data includes creating a collection of "Games" where each "Game" has two documents for "player1" and "player2". Then, within each one of those players/documents I store the values of each players functionalities. But with this approach, I still haven't solved the issue of only allowing two players within a single "Game"/collection. How do I prevent a third player from entering the game? or how would I handle the situation when more than one person enters a game at the same time?
Thank you for any advice possible.<issue_comment>username_1: You can use Cloud Functions to assign players to a game, manage when a game is full and then start it. Take a look at this article on Medium, [Building a multi-player board game with Firebase Firestore & Functions](https://medium.com/@feloy/building-a-multi-player-board-game-with-firebase-firestore-functions-part-1-17527c5716c5)
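As a rough sketch of that idea (not the article's code — the collection and field names here are only placeholders), a callable Cloud Function can wrap the join in a transaction so two clients can't both grab the second slot:
```
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.joinGame = functions.https.onCall(async (data, context) => {
  const gameRef = admin.firestore().collection('games').doc(data.gameId);
  return admin.firestore().runTransaction(async (t) => {
    const snap = await t.get(gameRef);
    const players = (snap.exists && snap.data().players) || [];
    if (players.length >= 2) {
      // second slot already taken - reject the third player
      throw new functions.https.HttpsError('failed-precondition', 'Game is full');
    }
    t.set(gameRef, { players: players.concat(context.auth.uid) }, { merge: true });
    return { joined: true };
  });
});
```
Because the read and the write happen in one transaction, two simultaneous joins cannot both see an empty slot.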
Upvotes: 3 [selected_answer]<issue_comment>username_2: This is the code I ended up creating in Swift using Firestore to create the 2 person multiplayer game.
```
import UIKit
import Firebase
class WaitForOpponentViewController: UIViewController
{
@IBOutlet weak var activityIndicator: UIActivityIndicatorView!
var battleRoomDocumentReference : DocumentReference!
var battleRoomListenerRegistration : ListenerRegistration!
override func viewDidLoad(){
super.viewDidLoad()
activityIndicator.hidesWhenStopped = true
activityIndicator.startAnimating()
Firestore.firestore().collection(BATTLEROOMS_Collection)
.whereField(BATTLEROOMFULL_FIELD, isEqualTo: false)
.whereField(NUMOFPLAYERS_FIELD, isLessThan: 2)
.limit(to: 1)
.getDocuments { (snapShot, error) in
if let error = error
{
print("There was a error while fetching the documents: \(error)")
}
else
{
guard let snap = snapShot else {return}
if(snap.documents.count > 0)
{
//Update the current battle room
for document in snap.documents
{
Firestore.firestore().collection(BATTLEROOMS_Collection)
.document(document.documentID)
.setData(
[
BATTLEROOMFULL_FIELD : true,
NUMOFPLAYERS_FIELD : 2, //Note: Player1Id is not changed because there is already a player1Id when this document is updated
PLAYER2ID_FIELD : Auth.auth().currentUser?.uid ?? "AnonymousNum2"
], options: SetOptions.merge(), completion: { (error) in
if let error = error
{
print("There was an error while adding the second player to the battle room document : \(error)")
}
self.addBattleRoomListener(battleRoomDocumentId: document.documentID)
})
}
}
else
{
//Create a new battle room
self.battleRoomDocumentReference = Firestore.firestore().collection(BATTLEROOMS_Collection)
.addDocument(data:
[
BATTLEROOMFULL_FIELD: false,
NUMOFPLAYERS_FIELD : 1,
PLAYER1ID_FIELD : Auth.auth().currentUser?.uid ?? "AnonymousNum1",
PLAYER2ID_FIELD : ""
], completion: { (error) in
if let error = error
{
print("Error while adding a new battle room/player 1 to the battle room document : \(error)")
}
})
self.addBattleRoomListener(battleRoomDocumentId: self.battleRoomDocumentReference.documentID)
}
}
}
}
override func viewWillDisappear(_ animated: Bool) {
//Remove Battle Room Listener
battleRoomListenerRegistration.remove()
activityIndicator.stopAnimating()
}
func addBattleRoomListener(battleRoomDocumentId : String)
{
battleRoomListenerRegistration = Firestore.firestore().collection(BATTLEROOMS_Collection)
.document(battleRoomDocumentId)
.addSnapshotListener { (documentSnapshot, error) in
guard let snapshot = documentSnapshot else { return }
guard let documentData = snapshot.data() else { return }
let battleRoomFullData = documentData[BATTLEROOMFULL_FIELD] as? Bool ?? false
let numOfPlayerData = documentData[NUMOFPLAYERS_FIELD] as? Int ?? 0
if(battleRoomFullData == true && numOfPlayerData == 2)
{
print("Two Players in the Game, HURRAY. Segue to GAME VIEW CONTROLLER")
}
else
{
return
}
}
}
@IBAction func cancelBattle(_ sender: UIButton) {
//NOTE: Canceling is only allowed for the first user thats creates the Battle Room, once the Second Person enters the Battle Room the system will automatically segue to the Game View Controller sending both players into the game VC
Firestore.firestore().collection(BATTLEROOMS_Collection)
.document(battleRoomDocumentReference.documentID)
.delete { (error) in
if let error = error
{
print("There was an error while trying to delete a document: \(error)")
}
}
activityIndicator.stopAnimating()
self.dismiss(animated: true, completion: nil)
}
}
```
Upvotes: 1 |
2018/03/22 | 605 | 2,154 | <issue_start>username_0: I am currently testing ways to reliably detect throttling in the C# GraphSdk.
When sending too many contact requests (10.000req/10min) to:
```
/v1.0/users/{Userid}/contacts/{contactId}
```
I will receive the following ServiceException:
[](https://i.stack.imgur.com/2nC9S.png)
My first idea was to just check against the StatusCode, but since it is from type [System.Net.HttpStatusCode](https://msdn.microsoft.com/de-de/library/system.net.httpstatuscode(v=vs.110).aspx) the "Throttling" code 429 sent by the REST API is not contained in the enum.
Currently I use this code to detect a throttling message:
```
if(e.StatusCode.ToString() == "429")
{
Console.WriteLine("this is an throttling exception");
}
```
The disadvantage of this approach is that if, at some point in the future, an enum value for 429 is added, this code will fail. Silently. Resulting in a hard-to-notice bug.
I can't use the string property `Code` from the `Error` object, as it is not guaranteed to always be the same message, e.g. "Too Many Requests".
Especially since in this case the `ErrorCode` (string) is wrongly set to "unknown error" instead of "Too Many Requests" (probably a bug).
* Any recommendations or ideas how I could reliably notice a Throttling
error?
* Especially recommendations on how to reliably compare if the
HTTPStatusCode equals 429?
* Maybe there is some documentation describing another way to check for the error code?<issue_comment>username_1: You can avoid being hit by any changes to the `HttpStatusCode` enumeration by using the `int` rather than the `string` value. As you noted, the `string` would change if an enum was added for `429` but the `int` value will remain the same:
```
switch ((int) e.StatusCode)
{
    case 429:
        Console.WriteLine("this is a throttling exception");
        break;
    default:
        break;
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Another way is to look at the response headers. When we 429 you, we add a Retry-After response header which you can actually see in your screen shot above.
Upvotes: 0 |
2018/03/22 | 456 | 1,805 | <issue_start>username_0: I am making a fetch request that returns an array of strings.
```
fetch(linkFetch)
.then(resp => resp.json())
.then(arr => {
that.setState({
images: arr
});
})
.then(this.test());
```
on the last line you can see I try to call the method `test()` which is just trying to access the state that was set in the `fetch()` request.
```
test(){
console.log(this.state.images[1]);
}
```
console logs 'undefined'
However, if I assign `test()` to a button or something so that I can call it manually, it works fine, which leads me to believe that when I call `.then(this.test());` in the fetch request it is actually being called before the state is set.
How can I make sure it gets called after the state has been set in the fetch request?<issue_comment>username_1: The argument you pass to `then` needs to be a **function**.
You are calling `this.test` *immediately* (i.e. before the asynchronous function has resolved) and passing its return value (`undefined` as there is no `return` statement).
You already have an example of how to do this correctly **on the previous line** of your code.
```
fetch(linkFetch)
.then(resp => resp.json())
.then(arr => {
that.setState({
images: arr
});
})
.then(() => {
this.test();
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: React provides a way to call a function after the setState has been executed. You can use the callback function as described in the react documentation
<https://reactjs.org/docs/react-component.html#setstate>
You can pass your desired function as a second argument to the setState call.
```
that.setState({
  key: val
}, that.test);
```
Upvotes: 0 |
2018/03/22 | 650 | 2,784 | <issue_start>username_0: As I need to send message on user's whatsapp number so I have downloaded WhatsApp API from nuget packages.
I also implemented code according to that. But after a lot of searching I found that I need to register the mobile number to get a password... using WART, but that application does not seem to work and cannot get a password from WhatsApp.
So if anyone has implemented this feature of sending a message to a WhatsApp number in ASP.NET C#, please let me know how to do it with the WhatsApp API.
I don't even know whether WhatsApp provides an API or not.
I have tried the code below... but I need the password before that:
```
string fromMob = "919xxxxxxxxx";
string toMob = txtMobileNo.Text.Trim();
string msg = "This is first WhatsApp Message Whatsapp API";
WhatsApp wa = new WhatsApp(fromMob, "<PASSWORD>Password", "SD", false, false);
try
{
wa.OnConnectSuccess += () =>
{
wa.OnLoginSuccess += (phoneNumber, data) =>
{
wa.SendMessage(toMob, msg);
};
wa.OnLoginFailed += (data) =>
{
msg = "Login Failed" + data;
};
};
wa.OnConnectFailed += (ex) =>
{
msg = "Connection Failed" + ex;
};
wa.Connect();
}
catch { }
```<issue_comment>username_1: **[WhatsApp Business API](https://developers.facebook.com/docs/whatsapp/getting-started)** is live now. Please refer this [link](https://developers.facebook.com/docs/whatsapp) for detailed information.
>
> The WhatsApp Business API Client supports a subset of the features
> provided by the WhatsApp applications you already know from Android,
> iOS, Web and other platforms including end-to-end encryption. The
> difference is that this application can be deployed on a server,
> providing a local API that allows you to programmatically send and
> receive messages and integrate this workflow with your own systems
> (CRMs, customer care, etc.).
>
>
> Please note that if you use anything other than the official WhatsApp
> Business API or other official WhatsApp tools, we reserve the right to
> limit or remove your access to WhatsApp as this violates our policies.
> Please do not use any non-WhatsApp authorized third-party tools to
> communicate on WhatsApp.
>
>
>
Upvotes: 2 <issue_comment>username_2: Referring to [Whatsapp API](https://chat-api.com/en/?lang=EN) documentation,
In the documentation, you can find the API docs for sending messages, reading messages, and setting up webhooks.
Upvotes: 0 |
2018/03/22 | 486 | 1,522 | <issue_start>username_0: I'm creating node.js project now. I've a VPS which **ubuntu 16.04** as op system and **nginx** as http server installed on it. As you know we use URL with port number like <http://localhost.com:3000> to access our node.js projects. I wonder is there any way to access without port number like <http://localhost.com> like normal php and other projects?<issue_comment>username_1: Solution 1
==========
Use Apache or Nginx as a reverse proxy.
* Apache: <https://www.digitalocean.com/community/tutorials/how-to-use-apache-as-a-reverse-proxy-with-mod_proxy-on-ubuntu-16-04>
* Nginx: <https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-web-server-and-reverse-proxy-for-apache-on-one-ubuntu-16-04-server>
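For example, a minimal Nginx reverse-proxy block for a Node app on port 3000 might look like this (a sketch — the server name is a placeholder):
```
server {
    listen 80;
    server_name example.com;              # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:3000; # your Node app
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```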
Solution 2
==========
Forward port 80 to port 3000 with iptables
```
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
```
References:
* <https://www.digitalocean.com/community/questions/how-can-i-get-node-js-to-listen-on-port-80?answer=5772>
* <https://stackoverflow.com/a/16573737/1728166>
Upvotes: 1 <issue_comment>username_2: The only reason we use those kinds of port numbers for local development is because ports under 1024 are "reserved" and require root permissions to be used. You can still use port 80 for local development if you set your node server to listen on this port.
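For example, a bare-bones server bound straight to port 80 could look like this (just a sketch):
```
const http = require('http');

// listening directly on 80 instead of 3000; ports below 1024 need elevated rights
http.createServer((req, res) => {
  res.end('hello');
}).listen(80);
```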
If you do make these changes to your node server, in order to run it locally you will need to run it as root or using `sudo`.
Upvotes: 0 |
2018/03/22 | 402 | 1,522 | <issue_start>username_0: Running this command inside a .gitlab-ci.yml:
```
task:
script:
- yes | true
- yes | someOtherCommandWhichNeedsYOrN
```
Returns:
```
$ yes | true
ERROR: Job failed: exit status 1
```
Any clues, ideas why this happens or how to debug this?
Setup:
Gitlab runner in a docker<issue_comment>username_1: Can't comment yet.
I would extract the script into a file, run that file from the pipeline with some debug output in it, and see if you can reproduce it.
Make sure you make it to, and not past, the line in question.
I'd try the following to get some more info:
```
( set -x ; yes | true ; echo status: "${PIPESTATUS[@]}" )
```
See if you have some weird characters in the file or some weird modes set.
Make sure you are in the right shell; `true` can be a shell built-in, so that's worth checking.
Good luck.
Upvotes: 1 <issue_comment>username_2: If running with `set -o pipefail`, a failure at any stage in a shell pipeline will cause the entire pipeline to be considered failed. When `yes` tries to write to stdout but the program whose stdin that stdout is connected to is not reading, this will cause an `EPIPE` signal -- thus, an expected failure message, which the shell will usually ignore (in favor of treating only the last component of a pipeline as important for purposes of that pipeline's exit status).
* Turn this off for the remainder of your current script with `set +o pipefail`
* Explicitly ignore a single failure: `{ yes || :; } | true`
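If you simply want the job to tolerate the broken pipe from `yes`, a minimal variant of the job from the question could be (a sketch; `someOtherCommandWhichNeedsYOrN` is the question's placeholder):
```
task:
  script:
    - set +o pipefail                       # don't fail the pipeline on SIGPIPE from `yes`
    - yes | someOtherCommandWhichNeedsYOrN
```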
Upvotes: 4 [selected_answer] |
2018/03/22 | 331 | 1,175 | <issue_start>username_0: I know the question is already out there. But i cant solve my problem with the other solutions. I want to create a List out of Lists.
My Code:
```
class SomeClassName
{
static List saveFields = new List();
public static List SaveFields
{
get { return saveFields; }
set { saveFields = value; }
}
}
public void someMethod()
{
List someListName = new List();
}
```
My Namespace is TicTacToe so there is no name issue.
How can I solve this error?
I get the error for "SaveFields"<issue_comment>username_1: Have you tried using generics?
```
class SomeClassName
{
static List saveFields = new List();
public static List SaveFields
{
get { return saveFields; }
set { saveFields = value; }
}
}
```
It is insanely good and useful, but resource-hungry, so use it carefully.
Upvotes: -1 <issue_comment>username_2: `SomeClassName.SaveFields` isn't a type, it is a property. The type of that property is `List`. If you want to create a variable with the same type, you have to use `List`.
You can assign that value to the variable though:
```
public void someMethod()
{
List someListName = SomeClassName.SaveFields;
}
```
Upvotes: 3 |
2018/03/22 | 693 | 2,595 | <issue_start>username_0: I am getting the error for following boto3 code in lambda to access dynamo db table through STS credentials.
The code is below
```
import boto3

def lambda_handler(event, context):
    sts_client = boto3.client('sts')

    # Call the assume_role method of the STSConnection object and pass the role
    # ARN and a role session name.
    assumedRoleObject = sts_client.assume_role(
        RoleArn="arn:aws:iam::012345678910:role/test_role",
        RoleSessionName="AssumeRoleSession1"
    )

    # From the response that contains the assumed role, get the temporary
    # credentials that can be used to make subsequent API calls
    credentials = assumedRoleObject['Credentials']

    dynamoDB = boto3.resource('dynamodb',
                              aws_access_key_id=credentials['AccessKeyId'],
                              aws_secret_access_key=credentials['SecretAccessKey'],
                              aws_session_token=credentials['SessionToken'])
    test1 = dynamoDB.get_available_subresources
    table = dynamoDB.Table('Test1')
    response = table.get_item(
        Key={
            'Name': 'ABC'
        }
    )
```
The error stacktrace is below:
-------------------------------
```
ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the GetItem operation: Requested resource not found
```<issue_comment>username_1: I have gotten that error often usually due to the table not existing.
Check:
1. Table exists and spelling is correct
2. You are accessing the right DynamoDB instance where your table exists
3. The role you are using has access to the table
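A quick way to check points 1 and 2 is a small boto3 sketch run with the same credentials/region (the table name and region below are placeholders):
```
import boto3

def check_table(table_name='Test1', region='us-east-1'):
    client = boto3.client('dynamodb', region_name=region)
    # 1) does the table show up at all in this region / with these credentials?
    print(client.list_tables()['TableNames'])
    # 2) this raises ResourceNotFoundException if the table is missing here
    print(client.describe_table(TableName=table_name)['Table']['TableStatus'])
```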
Upvotes: 0 <issue_comment>username_2: **First check whether Table "Test1" exists**
From [this](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html) documentation:
>
> **ResourceNotFoundException**
>
>
> Message: Requested resource not found.
>
>
> Example: Table which is being requested does not exist, or is too
> early in the CREATING state.
>
>
>
Check whether this table exists with the [list-tables](https://docs.aws.amazon.com/cli/latest/reference/dynamodb/list-tables.html) command:
```
aws dynamodb list-tables
```
**Verify whether your CLI default region is the same as your table's region**
If this table does exist, check your [cli configuration](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) to verify that you're querying in the same region that the table exists. You can check your default region like this:
```
aws configure get region
```
You can use `aws configure` to change your default settings, or specify `--region` directly on any CLI command to override your default region.
Upvotes: 2 |
2018/03/22 | 2,248 | 6,833 | <issue_start>username_0: Problem applying just created migration (Added db.Model) through Flask-Migrate (SQLAlchemy) for PostgresSQL DB.
The error itself:
```
sqlalchemy.exc.InternalError: (psycopg2.InternalError) cannot drop table parameter_subtype because other objects depend on it
```
Full error stack trace is:
```
INFO [alembic.autogenerate.compare] Detected removed foreign key (event_id) (id) on table stage_has_event
INFO [alembic.autogenerate.compare] Detected removed column 'stage_has_event.event_id'
Generating /app/migrations/versions/e224df1a4818_.py ... done
(venv) $ ./manage db upgrade
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade cadf22871ae0 -> e224df1a4818, empty message
Traceback (most recent call last):
File "/app/venv/bin/flask", line 11, in
sys.exit(main())
File "/app/venv/lib/python2.7/site-packages/flask\_cli/cli.py", line 502, in main
cli.main(args=args, prog\_name=name)
File "/app/venv/lib/python2.7/site-packages/flask\_cli/cli.py", line 369, in main
return AppGroup.main(self, \*args, \*\*kwargs)
File "/app/venv/lib/python2.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/app/venv/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
return \_process\_result(sub\_ctx.command.invoke(sub\_ctx))
File "/app/venv/lib/python2.7/site-packages/click/core.py", line 1066, in invoke
return \_process\_result(sub\_ctx.command.invoke(sub\_ctx))
File "/app/venv/lib/python2.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, \*\*ctx.params)
File "/app/venv/lib/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(\*args, \*\*kwargs)
File "/app/venv/lib/python2.7/site-packages/click/decorators.py", line 17, in new\_func
return f(get\_current\_context(), \*args, \*\*kwargs)
File "/app/venv/lib/python2.7/site-packages/flask/cli.py", line 257, in decorator
return \_\_ctx.invoke(f, \*args, \*\*kwargs)
File "/app/venv/lib/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(\*args, \*\*kwargs)
File "/app/venv/lib/python2.7/site-packages/flask\_migrate/cli.py", line 134, in upgrade
\_upgrade(directory, revision, sql, tag, x\_arg)
File "/app/venv/lib/python2.7/site-packages/flask\_migrate/\_\_init\_\_.py", line 259, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File "/app/venv/lib/python2.7/site-packages/alembic/command.py", line 254, in upgrade
script.run\_env()
File "/app/venv/lib/python2.7/site-packages/alembic/script/base.py", line 427, in run\_env
util.load\_python\_file(self.dir, 'env.py')
File "/app/venv/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in load\_python\_file
module = load\_module\_py(module\_id, path)
File "/app/venv/lib/python2.7/site-packages/alembic/util/compat.py", line 141, in load\_module\_py
mod = imp.load\_source(module\_id, path, fp)
File "migrations/env.py", line 87, in
run\_migrations\_online()
File "migrations/env.py", line 80, in run\_migrations\_online
context.run\_migrations()
File "", line 8, in run\_migrations
File "/app/venv/lib/python2.7/site-packages/alembic/runtime/environment.py", line 836, in run\_migrations
self.get\_context().run\_migrations(\*\*kw)
File "/app/venv/lib/python2.7/site-packages/alembic/runtime/migration.py", line 330, in run\_migrations
step.migration\_fn(\*\*kw)
File "/app/migrations/versions/e224df1a4818\_.py", line 21, in upgrade
op.drop\_table('parameter\_subtype')
File "", line 8, in drop\_table
File "", line 3, in drop\_table
File "/app/venv/lib/python2.7/site-packages/alembic/operations/ops.py", line 1187, in drop\_table
operations.invoke(op)
File "/app/venv/lib/python2.7/site-packages/alembic/operations/base.py", line 319, in invoke
return fn(self, operation)
File "/app/venv/lib/python2.7/site-packages/alembic/operations/toimpl.py", line 70, in drop\_table
operation.to\_table(operations.migration\_context)
File "/app/venv/lib/python2.7/site-packages/alembic/ddl/impl.py", line 203, in drop\_table
self.\_exec(schema.DropTable(table))
File "/app/venv/lib/python2.7/site-packages/alembic/ddl/impl.py", line 118, in \_exec
return conn.execute(construct, \*multiparams, \*\*params)
File "/app/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/app/venv/lib/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in \_execute\_on\_connection
return connection.\_execute\_ddl(self, multiparams, params)
File "/app/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1009, in \_execute\_ddl
compiled
File "/app/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in \_execute\_context
context)
File "/app/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1413, in \_handle\_dbapi\_exception
exc\_info
File "/app/venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise\_from\_cause
reraise(type(exception), exception, tb=exc\_tb, cause=cause)
File "/app/venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in \_execute\_context
context)
File "/app`enter code here`/venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 507, in do\_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.InternalError: (psycopg2.InternalError) cannot drop table parameter\_subtype because other objects depend on it
DETAIL: constraint parameter\_parameter\_subtype\_id\_fkey on table parameter depends on table parameter\_subtype
HINT: Use DROP ... CASCADE to drop the dependent objects too.
[SQL: '\nDROP TABLE parameter\_subtype']
```
This results in the table and its columns not being created in the PostgreSQL DB.
I'm unsure what caused this problem or how to interpret this error message.
I did manually drop the tables and that helped, but how should this be handled in the migration?
Any Suggestions?<issue_comment>username_1: Flask-Migrate does not read in the database to see the dependencies between the objects. You can achieve a successful migration by reordering the drop\_table in your migration file.
What you can do is modify the migration file like so:
```
op.drop_table("parameter_subtype")
op.drop_table("parameter")
```
And then run the upgrade command. Dropping `parameter` first (it holds the foreign key that references `parameter_subtype`) should solve the problem.
Upvotes: 4 <issue_comment>username_2: I got the same error and what worked for me was to first drop the table in the database using the command:
```
$ DROP TABLE table_name;
```
And then in the migration file in the upgrade() function delete or comment out the line where it says:
```
op.drop_table('table_name')
```
Upvotes: -1 |
2018/03/22 | 3,654 | 12,926 | <issue_start>username_0: I know this question has been asked lots of times but none of solutions worked for me.
I have a custom UIView class which I use for displaying alert message. I added UIButton to close the view. However, nothing happens when I tab it.
```
import UIKit
public class Alert: UIView {
public var image: UIImage?
public var title: String?
public var message: String?
public var closeButtonText: String?
public var dialogBackgroundColor: UIColor = .white
public var dialogTitleTextColor: UIColor = .black
public var dialogMessageTextColor: UIColor = UIColor(red: 0.2, green: 0.2, blue: 0.2, alpha: 1)
public var dialogImageColor: UIColor = UIColor(red:0.47, green:0.72, blue:0.35, alpha:1.0)
public var overlayColor: UIColor = .black
public var overlayOpacity: CGFloat = 0.66
public var paddingSingleTextOnly: CGFloat = 8
public var paddingTopAndBottom: CGFloat = 24
public var paddingFromSides: CGFloat = 8
public var seperatorHeight: CGFloat = 6
private var height: CGFloat = 0
private var width: CGFloat = 0
private var maxSize: CGSize = CGSize()
private let marginFromSides: CGFloat = 80
public lazy var imageSize: CGSize = CGSize(width: 75, height: 75)
public var overlay = false
public var blurOverlay = true
//animation duration
public var duration = 0.33
private var onComplete: (() -> Void)?
@objc public var titleFont: UIFont = UIFont.systemFont(ofSize: 18)
@objc public var messageFont: UIFont = UIFont.systemFont(ofSize: 15)
private lazy var backgroundView: UIView = {
let view = UIView()
view.alpha = 0
return view
}()
public let dialogView: UIView = {
let view = UIView()
view.layer.cornerRadius = 6
view.layer.masksToBounds = true
view.alpha = 0
view.clipsToBounds = true
return view
}()
private lazy var imageView: UIImageView = {
let view = UIImageView()
view.contentMode = .scaleAspectFit
return view
}()
public lazy var closeButton: UIButton = {
let button = UIButton()
return button
}()
private lazy var titleLabel: UILabel = {
let label = UILabel()
label.numberOfLines = 0
label.textAlignment = .center
return label
}()
private lazy var messageLabel: UILabel = {
let label = UILabel()
label.numberOfLines = 0
label.textAlignment = .center
return label
}()
@objc func closeButtonTapped(sender: UIButton){
dismiss()
}
private func calculations() {
height += paddingTopAndBottom
maxSize = CGSize(width: frame.width - marginFromSides * 2, height: frame.height - marginFromSides)
}
public convenience init(title:String, message: String, image:UIImage) {
self.init(frame: UIScreen.main.bounds)
self.title = title
self.message = message
self.image = image
}
public convenience init(title:String, image:UIImage) {
self.init(frame: UIScreen.main.bounds)
self.title = title
self.image = image
}
public convenience init(title: String, message: String) {
self.init(frame: UIScreen.main.bounds)
self.title = title
self.message = message
}
public convenience init(message: String) {
self.init(frame: UIScreen.main.bounds)
paddingTopAndBottom = paddingSingleTextOnly
paddingFromSides = paddingSingleTextOnly * 2
self.message = message
}
override init(frame: CGRect) {
super.init(frame: frame)
}
public required init?(coder aDecoder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
private func createOverlay() {
backgroundView.frame = frame
backgroundView.backgroundColor = overlayColor
backgroundView.isUserInteractionEnabled = true
addSubview(backgroundView)
if let window = UIApplication.shared.keyWindow {
window.addSubview(backgroundView)
} else if let window = UIApplication.shared.delegate?.window??.rootViewController {
window.view.addSubview(self)
}
}
private func createBlurOverlay() {
backgroundView.frame = frame
//Blur Effect
let blurEffect = UIBlurEffect(style: UIBlurEffectStyle.dark)
let blurEffectView = UIVisualEffectView(effect: blurEffect)
blurEffectView.frame = frame
blurEffectView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
backgroundView.addSubview(blurEffectView)
addSubview(backgroundView)
if let window = UIApplication.shared.keyWindow {
window.addSubview(backgroundView)
} else if let window = UIApplication.shared.delegate?.window??.rootViewController {
window.view.addSubview(self)
}
}
private func createTitle(title: String) {
titleLabel.font = titleFont
titleLabel.text = title
titleLabel.frame.origin.y = height + 2
let titleLabelSize = titleLabel.sizeThatFits(maxSize)
handleSize(size: titleLabelSize)
titleLabel.frame.size = titleLabelSize
titleLabel.textColor = self.dialogTitleTextColor
dialogView.addSubview(titleLabel)
}
private func createMessage(message: String) {
messageLabel.font = messageFont
messageLabel.text = message
messageLabel.frame.origin.y = height
let messageLabelSize = messageLabel.sizeThatFits(maxSize)
messageLabel.frame.size = messageLabelSize
messageLabel.textColor = self.dialogMessageTextColor
handleSize(size: messageLabelSize)
dialogView.addSubview(messageLabel)
}
private func createImage(image: UIImage) {
imageView.image = image.withRenderingMode(.alwaysTemplate)
imageView.frame.origin.y = height
imageView.frame.size = imageSize
imageView.tintColor = self.dialogImageColor
handleSize(size: imageSize)
dialogView.addSubview(imageView)
}
private func createButton(){
closeButton.setTitle("Close", for: .normal)
closeButton.tintColor = UIColor.white
closeButton.frame.origin.y = height + 20
let closeButtonSize = CGSize(width: width - 60, height: 60)
closeButton.frame.size = closeButtonSize
closeButton.layer.cornerRadius = 6
closeButton.backgroundColor = Color.NavigationBar.tintColor
closeButton.isUserInteractionEnabled = true
handleSize(size: closeButtonSize)
dialogView.addSubview(closeButton)
}
private func createDialog() {
centerAll()
height += paddingTopAndBottom
dialogView.frame.size = CGSize(width: width, height: height)
dialogView.backgroundColor = self.dialogBackgroundColor
dialogView.isUserInteractionEnabled = true
addSubview(dialogView)
self.dialogView.center = self.center
self.dialogView.transform = CGAffineTransform(scaleX: 1.15, y: 1.15)
if let window = UIApplication.shared.keyWindow {
window.addSubview(dialogView)
closeButton.addTarget(self, action: #selector(closeButtonTapped(sender:)), for: .touchUpInside)
} else if let window = UIApplication.shared.delegate?.window??.rootViewController {
UIApplication.topViewController()?.view.addSubview(self)
window.view.addSubview(self)
closeButton.addTarget(self, action: #selector(closeButtonTapped(sender:)), for: .touchUpInside)
}
}
private func centerAll() {
if ((messageLabel.text) != nil) {
messageLabel.frame.origin.x = (width - messageLabel.frame.width) / 2
}
if ((titleLabel.text) != nil) {
titleLabel.frame.origin.x = (width - titleLabel.frame.width) / 2
}
if ((imageView.image) != nil) {
imageView.frame.origin.x = (width - imageView.frame.width) / 2
}
closeButton.frame.origin.x = (width - closeButton.frame.width) / 2
}
private func handleSize(size: CGSize) {
if width < size.width + paddingFromSides * 2 {
width = size.width + paddingFromSides * 2
}
if paddingTopAndBottom != paddingSingleTextOnly {
height += seperatorHeight
}
height += size.height
}
private func showAnimation() {
UIView.animate(withDuration: duration, animations: {
if self.overlay {
self.backgroundView.alpha = self.overlayOpacity
self.dialogView.transform = CGAffineTransform(scaleX: 1, y: 1)
}
self.dialogView.alpha = 1
})
}
public func show() {
if let complete = onComplete {
self.onComplete = complete
}
calculations()
if self.overlay {
if blurOverlay {
createBlurOverlay()
} else {
createOverlay()
}
}
if let img = image {
createImage(image: img)
}
if let title = title {
createTitle(title: title)
}
if let message = message {
createMessage(message: message)
}
createButton()
createDialog()
showAnimation()
}
public func dismiss(){
UIView.animate(withDuration: duration, animations: {
if self.overlay {
self.backgroundView.alpha = 0
}
self.dialogView.transform = CGAffineTransform(scaleX: 1.15, y: 1.15)
self.dialogView.alpha = 0
}, completion: { (completed) in
self.dialogView.removeFromSuperview()
if (self.overlay)
{
self.backgroundView.removeFromSuperview()
}
self.removeFromSuperview()
if let completionHandler = self.onComplete {
completionHandler()
}
})
}
}
```
How I create the alert;
```
let alert = Alert(title: "hata",message: "hata mesajı ekrana basıldı", image: #imageLiteral(resourceName: "error"))
alert.show()
```
If I declare target inside UIViewController (Where I create this UIView) as
```
alert.closeButton.addTarget(self, action: #selector(closeButtonTapped(sender:)), for: .touchUpInside)
```
and create the function inside the UIViewController, it works. I can't figure out why it doesn't work when it's in the custom class.
So my question is: how can I close the alert view when the button is tapped?
I tried below solution but didn't work for me;
[UIButton target action inside custom class](https://stackoverflow.com/questions/25465455/uibutton-target-action-inside-custom-class)<issue_comment>username_1: Look this code
1- Call IBAction inside UIView Class
```
import UIKit
public class Alert: UIView {
public lazy var closeButton: UIButton = {
let button = UIButton(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
button.backgroundColor = #colorLiteral(red: 0.9254902005, green: 0.2352941185, blue: 0.1019607857, alpha: 1)
return button
}()
func createDialog() {
closeButton.addTarget(self, action: #selector(self.closeButtonTapped(sender:)), for: .touchUpInside)
self.addSubview(closeButton)
}
@objc func closeButtonTapped(sender: UIButton){
print("Call 1")
}
}
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
let alert = Alert(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
alert.createDialog()
self.view.addSubview(alert)
}
}
```
2- Call IBAction inside UIViewController Class
```
import UIKit
public class Alert: UIView {
public lazy var closeButton: UIButton = {
let button = UIButton(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
button.backgroundColor = #colorLiteral(red: 0.9254902005, green: 0.2352941185, blue: 0.1019607857, alpha: 1)
return button
}()
func createDialog() {
// closeButton.addTarget(self, action: #selector(self.closeButtonTapped(sender:)), for: .touchUpInside)
self.addSubview(closeButton)
}
@objc func closeButtonTapped(sender: UIButton){
print("Call 1")
}
}
class ViewController: UIViewController {
override func viewDidLoad() {
super.viewDidLoad()
let alert = Alert(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
alert.createDialog()
alert.closeButton.addTarget(self, action: #selector(self.closeButtonTappedController(_:)), for: .touchUpInside)
self.view.addSubview(alert)
}
@IBAction func closeButtonTappedController(_ sender:UIButton){
print("Call 2")
}
}
```
Upvotes: -1 <issue_comment>username_2: Assuming these lines are inside a function - such as due to a button tap:
```
@IBAction func didTap(_ sender: Any) {
let alert = Alert(title: "hata",message: "hata mesajı ekrana basıldı", image: #imageLiteral(resourceName: "error"))
alert.show()
}
```
You are creating an instance of your `Alert` class, calling the `.show()` function inside it, and then it goes out of scope.
So, as soon as that function exits, `alert` no longer exists, and no code inside it can run.
You need to have a class-level variable to hold onto it while it is displayed:
```
class MyViewController: UIViewController {
var alert: Alert?
@IBAction func didTap(_ sender: Any) {
alert = Alert(title: "hata",message: "hata mesajı ekrana basıldı", image: #imageLiteral(resourceName: "error"))
alert?.show()
}
}
```
Here is a demonstration of the "Wrong Way" and the "Right Way" to handle your `Alert` view: <https://github.com/username_2/EmreTest>
Upvotes: 3 [selected_answer] |
2018/03/22 | 389 | 1,691 | <issue_start>username_0: We have a brand new (as in, created a couple of days ago and no resources in it yet) Azure pay-as-you-go subscription, and now when I try to provision something, I'm not allowed.
Looking at the subscription in the portal, under "Usage + quotas" the list is empty.
Did we do something wrong when signing up? How do we enable it?<issue_comment>username_1: Turns out there were some things I could do myself on this, and some things I had to turn to Azure Support for.
Thing I could do myself: Enable a bunch of resource providers
-------------------------------------------------------------
It turned out the subscription did not have a single resource provider enabled. To fix, I found the Subscription blade, and clicked the "Resource Providers" menu item (toward the bottom). That opened up a list where I could register lots of stuff. This also enabled corresponding quotas on the subscription.
Thing I couldn't do myself: Increase e.g. VM quotas
---------------------------------------------------
Some of those quotas, however, I wasn't able to figure out how to turn on myself. Crucially, one of them was for provisioning VM:s (including "hidden" ones, e.g. the underlying VM:s in an AKS cluster or an App Service). Thankfully, Azure Support were really responsive and from first contact until I was able to provision stuff took less than a business day.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You only need to register to the *Microsoft.Compute* Resource Provider on the Subscriptions Blade.
* Go to Subscriptions
* Select the desired subscription
* Click on Resource Providers
* Register to the option *"Microsoft.Compute"*
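If you prefer the CLI over the portal, the equivalent registration can be done like this (assuming the Azure CLI is installed and logged in to the right subscription):
```
az provider register --namespace Microsoft.Compute
az provider show --namespace Microsoft.Compute --query registrationState
```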
Upvotes: 3 |
2018/03/22 | 485 | 1,899 | <issue_start>username_0: I work with some big file. I need to check that the file end with a empty line or the previous line end with a LF.
Example of file:
```
a
b
c
d
empty line
```
To read it I use NIO and an iterator.
```
try (Stream<String> ligneFichier = Files.lines(myPath, myCharset)){
Iterator<String> iterator = ligneFichier.iterator();
int i = 0;
while (iterator.hasNext()) {
String valeurLigne = iterator.next();
i++;
}
}
```
When I check the count `i` I get 4 lines, but the file actually has 4 + 1 empty (so 5).
Any idea how to check whether the last line ends with LF?
Thanks