date | nb_tokens | text_size | content |
---|---|---|---|
2018/03/19 | 618 | 2,561 | <issue_start>username_0: The text file has 10000 lines. I've tried using `File.ReadLines` and `StreamReader` but it seems pretty slow. Here's my code
```
foreach (var line in File.ReadLines(ofd.FileName))
{
if (analysisDatas.All(analysisData =>!string.Equals(analysisData.Text, line, StringComparison.CurrentCultureIgnoreCase)))
{
var item = new AnalysisData { Text = line };
analysisDatas.Add(item);
}
}
```
Is there a more efficient way to add them into my list of objects?<issue_comment>username_1: If you can get a good key for each line, I suggest using a `HashSet` rather than `All()` to check each line. A simple/naive example might look like this:
```
var lineKeys = new HashSet<int>();
foreach (var line in File.ReadLines(ofd.FileName))
{
int hash = line.ToUpper().GetHashCode();
if (lineKeys.Add(hash) || analysisDatas.All(analysisData => !string.Equals(analysisData.Text, line, StringComparison.CurrentCultureIgnoreCase)))
{
var item = new AnalysisData { Text = line };
analysisDatas.Add(item);
}
}
```
Note I said, "If". Comparing via hashcode and the `ToUpper()` method is not exactly the same as `StringComparison.CurrentCultureIgnoreCase`. Some cultures have characters that need special handling based on accents or similar. This might be a problem in your situation, but it might not... you'll have to look at your data and evaluate your needs. *Don't short yourself on that evaluation.*
Also note my use of `int` for the HashSet. I could just put the string there. However, then we end up storing two sets of data in memory for each line: the original line string in the `analysisDatas` collection, and the upper case string in the `HashSet`. Even if comparisons in the HashSet are only done via the HashCode values, the full version of the string would be stored, too. Using the `int` hash instead allows the GC to collect the uppercase versions of the string. Since there have already been OutOfMemoryException issues, I opted to take a hit on potential wrong-matches in order to save memory.
Upvotes: 0 <issue_comment>username_2: You're iterating your new collection (with `.All`) on every pass of the loop, leading to some pretty nasty slow-down as the number of items increases.
Here's one way that might show better performance characteristics:
```
var analysisDatas = File
    .ReadLines(filePath)
    .Distinct(StringComparer.CurrentCultureIgnoreCase)
    .Select(line => new AnalysisData { Text = line })
    .ToList();
```
Upvotes: 3 [selected_answer] |
2018/03/19 | 1,205 | 3,737 | <issue_start>username_0: I have trouble reading a CSV file with Python.
My csv file has Korean and numbers.
Below is my python code.
```
import csv
import codecs
csvreader = csv.reader(codecs.open('1.csv', 'rU', 'utf-16'))
for row in csvreader:
    print(row)
```
First, there was a UnicodeDecodeError when I ran the "for row in csvreader" line in the above code.
So I used the code below, and the problem seemed to be solved:
```
csvreader = csv.reader(codecs.open('1.csv', 'rU', 'utf-16'))
```
Then I ran into a NULL byte error, and I can't figure out what's wrong with the csv file.
[update] I don't think I changed anything from the previous code but my program shows "UnicodeError: UTF-16 stream does not start with BOM"
When I open the csv in Excel I can see the table in the proper format (image attached at the bottom),
but when I open it in Sublime Text, below is a snippet of what I get:
```
504b 0304 1400 0600 0800 0000 2100 6322
f979 7701 0000 d405 0000 1300 0802 5b43
6f6e 7465 6e74 5f54 7970 6573 5d2e 786d
6c20 a204 0228 a000 0200 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
```
If you need more information about my file, let me know!
I appreciate your help.
Thanks in advance :)
csv file shown in excel
[](https://i.stack.imgur.com/ufFLM.png)
csv file shown in sublime text
[](https://i.stack.imgur.com/dsm2b.png)<issue_comment>username_1: The problem is that your input file apparently doesn’t start with a BOM (a special character that gets recognizably encoded differently for little-endian vs. big-endian utf-16), so you can’t just use “utf-16” as the encoding, you have to explicitly use “`utf-16-le`” or “`utf-16-be`”.
If you don’t do that, `codecs` will guess, and if it guesses wrong, it’ll try to read each code point backward and get illegal values.
If your posted sample starts at an even offset and contains a bunch of ASCII, it’s little-endian, so use the -le version. (But of course it’s better to look at what it actually is than to guess.)
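A minimal sketch of that check (the `utf-16-le` fallback is an assumption about this particular file, not something the file guarantees):
```py
import codecs
import csv

with open('1.csv', 'rb') as f:
    head = f.read(2)

if head in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE):
    encoding = 'utf-16'      # BOM present, codecs will pick the right byte order
else:
    encoding = 'utf-16-le'   # no BOM: assume little-endian

with open('1.csv', encoding=encoding, newline='') as f:
    for row in csv.reader(f):
        print(row)
```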
Upvotes: 5 <issue_comment>username_1: Now that you’ve included more of the file in your question, that isn’t a CSV file at all. My guess is that it’s an old-style binary XLS file, but that’s just a guess. If you’re just renaming spam.xls to spam.csv, you can’t do that; you need to export it to CSV format. (If you need help with that, ask on another site that offers help with Excel instead of with programming.)
If you can’t do that for some reason, there are libraries on PyPI to parse XLS files—but if you wanted CSV, and you can export CSV, that’s a better idea.
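For instance, a small sketch with pandas (assuming the file really is an .xlsx workbook and that openpyxl is installed):
```py
import pandas as pd

# The .csv extension is misleading, so tell pandas explicitly how to parse it.
df = pd.read_excel('1.csv', engine='openpyxl')
print(df.head())

# If an actual CSV is needed afterwards:
df.to_csv('1_converted.csv', index=False, encoding='utf-8')
```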
Upvotes: 4 [selected_answer]<issue_comment>username_2: The file begins with a [PKZIP signature](https://users.cs.jmu.edu/buchhofp/forensics/formats/pkzip.html) so it is actually an XLSX file.
This is great because instead of a CSV file, where you would have to know the character encoding, headers, column types, delimiter, text quoting and escape rules, and line endings, you can just open it and programs can see the structure of the data.
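A quick way to verify that from Python is to look at the file's first bytes, which in the hex dump above are `504b 0304`:
```py
with open('1.csv', 'rb') as f:
    magic = f.read(4)

# ZIP archives (which include .xlsx files) start with "PK\x03\x04" (50 4b 03 04)
print(magic == b'PK\x03\x04')
```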
Upvotes: 1 <issue_comment>username_3: After hours of struggling with such an issue, I came to learn that Excel exports data in Multiple CSV formats.
From Excel, please make sure to use 'CSV UTF-8 (Comma delimited)' option while exporting.
(You often may want to use this type than the other CSV options).
Once you are sure of the UTF-type, in this case, 'UTF-8', go back to your python script and change encoding to 'UTF-8', though I found skipping this parameter also works.
```
with open('schools_dataset.csv', encoding='utf-8') as csv_file:
    # continue opening the file
```
Upvotes: 2 |
2018/03/19 | 295 | 1,028 | <issue_start>username_0: I have single-read fastq files from an Illumina HiSeq, and I would like to generate the reverse using Biopython (or others).
I can only find information on how to get the reverse complement using reverse\_complement(dna), but I don't know how to get only the reverse.
Thanks!<issue_comment>username_1: This just prints out the reverse of the sequence. If you have a need for the quality information in the fastq file, you need to take the reverse of that also!
```py
from Bio import SeqIO
with open('sample.fastq') as handle:
    for record in SeqIO.parse(handle, 'fastq'):
        sequence = str(record.seq)
        reverse_sequence = sequence[::-1]
        print(reverse_sequence)
```
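A sketch of reversing the qualities along with the sequence (this assumes the records carry the usual `phred_quality` per-letter annotations):
```py
from Bio import SeqIO

with open('sample.fastq') as handle:
    for record in SeqIO.parse(handle, 'fastq'):
        reverse_sequence = str(record.seq)[::-1]
        reverse_quality = record.letter_annotations['phred_quality'][::-1]
        print(reverse_sequence)
        print(reverse_quality)
```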
Upvotes: 1 [selected_answer]<issue_comment>username_2: a one-liner use **rev** and **tr** to convert the 2nd line (and rev the 4th) of your input.
```
gunzip -c in.fq.gz | while read L; do echo $L && read L && echo $L | rev | tr "ATGCN" "TACGN" && read L && echo $L && read L && echo $L | rev ;done
```
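If what is actually wanted is the reverse complement that this one-liner produces, Biopython can do the same from a gzipped fastq; a sketch (the `in.fq.gz` name just mirrors the command above):
```py
import gzip
from Bio import SeqIO

with gzip.open('in.fq.gz', 'rt') as handle:
    for record in SeqIO.parse(handle, 'fastq'):
        rc = record.reverse_complement(id=True, description=True)  # qualities get reversed too
        print(rc.format('fastq'), end='')
```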
Upvotes: 2 |
2018/03/19 | 589 | 1,465 | <issue_start>username_0: I have a text file which has 4 attributes like this:
```
taxi id date time longitude latitude
0 1 2008-02-02 15:36:08 116.51172 39.92123
1 1 2008-02-02 15:46:08 116.51135 39.93883
2 1 2008-02-02 15:46:08 116.51135 39.93883
3 1 2008-02-02 15:56:08 116.51627 39.91034
4 1 2008-02-02 16:06:08 116.47186 39.91248
```
I have read this file in Jupyter using this command:
```
res=pd.read_csv("C:/Users/malik/Desktop/result.txt",low_memory=False)
res.head()
```
but when I want to get the data type of a column using this code:
```
type(res)
res['longitude'].dtype
```
It gives me error like:
>
> KeyError: 'longitude'
>
>
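The sample shown above is whitespace-separated rather than comma-separated, so a hedged sketch along these lines (the `sep` regex being the main assumption) would give `read_csv` a real `longitude` column to work with:
```py
import pandas as pd

res = pd.read_csv(
    "C:/Users/malik/Desktop/result.txt",
    sep=r"\s+",   # columns are separated by whitespace, not commas
)
print(res.dtypes)
print(res["longitude"].dtype)
```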
|
2018/03/19 | 1,335 | 5,390 | <issue_start>username_0: I created a git repository to test some git features out. I learned from a blog post that branches are stored in `.git/refs/heads/`, that the file names are the names of branches in your application, and that each of the files contains the commit sha that the branch points to.
Turns out you can replace the commit sha with whatever commit sha exists inside the `.git` repo. When I did this and it worked, I thought to myself this is really wrong and I shouldn't be able to do this, but I could only think of human error being involved. So my questions are kind of two fold.
1. What are some dangers of replacing the commit sha within a git branch/ref file?
2. Is there *ever* a situation where you would want to directly modify the branch files in the `.git` folder? Rather then using git commands to do these behaviors.<issue_comment>username_1: >
> What are some dangers of replacing the commit sha within a git branch/ref file?
>
>
>
`git gc` could delete something you don't want it to. I forget the exact rules (and they could change in the future anyway), but the short version is that unreachable objects (blobs, trees, and commits) can be irrevocably deleted. If a branch points to a hash that leaves a large portion of the commit tree unreachable, you could find yourself losing that data permanently.
>
> Is there ever a situation where you would want to directly modify the branch files in the `.git` folder? Rather then using git commands to do these behaviors.
>
>
>
I'm not aware of any, but git is designed with the Unix philosophy in mind: it doesn't stop you from doing stupid things because that might stop you from doing clever things.
Upvotes: 1 <issue_comment>username_2: There's no real danger to it. You're meant to be able to look under the hood of git. The biggest potential problem is that if the commit that that head pointed to isn't accessible any other way, and you replace it with something else, then you might not be able to find the old data and it will eventually get deleted. But there are lots of things (the reflog, remote repositories) to save you even from that. It doesn't hurt to know how to do things manually in case you ever end up with a repo in some inconsistent state that the tools don't want to work with, and you need to attempt a recovery.
Upvotes: 1 <issue_comment>username_3: There is a big future risk of Git changing the way it does these things. This is not particularly unlikely since Git on Windows has issues with branch name case, e.g., the branch names `a` and `A` are two different branch names on Linux and in Git, but are stored in the same *file* on Windows. When the branch name is *packed* (stored in `.git/packed-refs`—view the file to see, especially after running `git gc`), these two branches are different on Windows too. When the branch name is *unpacked*, the two branch names become one, with rather messy consequences.
Aside from future risks, the biggest danger with just dumping a raw hash ID into a `.git/refs/heads/branchname` file is that you could get it wrong: it could be a bogus hash ID, or it could be the hash ID of an object that is not a commit object. Neither of these is allowed by the normal operation of Git, so various internal bits of Git may assume that branch file contents are valid hash IDs pointing to commit objects.
Otherwise, this is precisely what `git reset --soft *commit-specifier*` does with whichever branch name `HEAD` is attached-to, and precisely what `git branch -f *name* *commit-specifier*` does with a branch name to which `HEAD` is not attached. So if your goal is to emulate one of those two commands without using that command, you *can* do it (at least today, and until Git does something about the name-case-issue on Windows and similar OSes).
As for this:
>
> Is there ever a situation where you would want to directly modify the branch files in the `.git` folder?
>
>
>
It's useful for experimentation (to test source code changes to Git, for instance, if you want to see if you've fixed some crash that occurs when a branch name has an invalid hash ID in it). It might occasionally be useful post-system-crash if Git's own internal arrangements have become corrupted.
Upvotes: 3 [selected_answer]<issue_comment>username_4: >
> 1. What are some dangers of replacing the commit sha within a git branch/ref file?
>
>
>
Modifying files in `.git/refs/heads/` directly *should* be fine, but there is to my knowledge no guarantee that this will remain true. There is also nothing to say that `.git/refs/heads/` contains all or even any branch refs -- they can also reside in a packfile. As you probably already know, you are better off using the built-in tools to update refs. See [git update-ref](https://gitirc.eu/git-update-ref.html), [git symbolic-ref](https://gitirc.eu/git-symbolic-ref.html).
>
> 2. Is there ever a situation where you would want to directly modify the branch files in the .git folder? Rather then using git commands to do these behaviors.
>
>
>
No, aside from testing the behaviour of Git itself I can think of no reason. If you need to modify them, Git provides enough tooling to do so safely. You can use custom refs to keep arbitrary data in your repository. [git notes](https://gitirc.eu/git-notes.html) is an example of such a usage that has later made it into the standard tooling.
Upvotes: 1 |
2018/03/19 | 1,828 | 4,997 | <issue_start>username_0: I am trying to get the lines from an HTML page, processed by BeautifulSoup, that contain
the word 'billion', but I am getting an empty list. Btw, these lines are between
`<li>` tags; I have tried to use `soup.findAll("li", {"class": "tabcontent"})`
but it gives me an empty list also.
```
import requests
from bs4 import BeautifulSoup
import re
url = 'http://www.worldstopexports.com/united-states-top-10-exports/'
header = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36",
"X-Requested-With": "XMLHttpRequest"
}
page = requests.get (url, headers=header)
soup = BeautifulSoup (page.text, 'lxml')
table = soup.find_all (class_='tabcontent')[0].text
print(re.findall(r'^.*? billion', table))
print(table)
Machinery including computers: US$201.7 billion (13% of total exports)
Electrical machinery, equipment: $174.2 billion (11.3%)
Mineral fuels including oil: $138 billion (8.9%)
Aircraft, spacecraft: $131.2 billion (8.5%)
Vehicles: $130.1 billion (8.4%)
Optical, technical, medical apparatus: $83.6 billion (5.4%)
Plastics, plastic articles: $61.5 billion (4%)
Gems, precious metals: $60.4 billion (3.9%)
Pharmaceuticals: $45.1 billion (2.9%)
Organic chemicals: $36.2 billion (2.3%)
```<issue_comment>username_1: You could use `select()` to first get the tab and then the `li` children and the text:
```
# ... right under soup = BeautifulSoup (page.text, 'lxml') ...
# select the first tab
tab = soup.select('div.tabcontent')[0]
# select its items
items = [text
         for item in tab.select('li')
         for text in [item.text]
         if "billion" in text]
print(items)
```
This yields
```
['Machinery including computers: US$201.7 billion (13% of total exports)', 'Electrical machinery, equipment: $174.2 billion (11.3%)', 'Mineral fuels including oil: $138 billion (8.9%)', 'Aircraft, spacecraft: $131.2 billion (8.5%)', 'Vehicles: $130.1 billion (8.4%)', 'Optical, technical, medical apparatus: $83.6 billion (5.4%)', 'Plastics, plastic articles: $61.5 billion (4%)', 'Gems, precious metals: $60.4 billion (3.9%)', 'Pharmaceuticals: $45.1 billion (2.9%)', 'Organic chemicals: $36.2 billion (2.3%)']
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Your error is in using `.*`; the dot operator doesn't normally match newlines, and the `table` string contains newlines between the start and the word *billion*. If you are going to use a regex, then at least use the `re.MULTILINE` flag to have the `^` match after newlines:
```
>>> re.findall(r'^.*billion', table, flags=re.MULTILINE)
['Machinery including computers: US$201.7 billion',
'Electrical machinery, equipment: $174.2 billion',
'Mineral fuels including oil: $138 billion',
'Aircraft, spacecraft: $131.2 billion',
'Vehicles: $130.1 billion',
'Optical, technical, medical apparatus: $83.6 billion',
'Plastics, plastic articles: $61.5 billion',
'Gems, precious metals: $60.4 billion',
'Pharmaceuticals: $45.1 billion',
'Organic chemicals: $36.2 billion']
```
However, since you want to find the text in `li` elements, why not select on those?
```
soup.find(class_='tabcontent').find_all('li', string=re.compile(r'billion'))
```
Passing in a regular expression pattern to `string` lets you filter on the contents of the elements. This gives you the matching elements:
```
>>> soup.find(class_='tabcontent').find_all('li', string=re.compile(r'billion'))
[<li>Machinery including computers: US$201.7 billion (13% of total exports)</li>,
 <li>Electrical machinery, equipment: $174.2 billion (11.3%)</li>,
 <li>Mineral fuels including oil: $138 billion (8.9%)</li>,
 <li>Aircraft, spacecraft: $131.2 billion (8.5%)</li>,
 <li>Vehicles: $130.1 billion (8.4%)</li>,
 <li>Optical, technical, medical apparatus: $83.6 billion (5.4%)</li>,
 <li>Plastics, plastic articles: $61.5 billion (4%)</li>,
 <li>Gems, precious metals: $60.4 billion (3.9%)</li>,
 <li>Pharmaceuticals: $45.1 billion (2.9%)</li>,
 <li>Organic chemicals: $36.2 billion (2.3%)</li>]
```
You can always apply `.get_text()` to those elements if you only wanted their contents.
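For instance, a quick sketch of pulling just the text out of those matches:
```py
items = soup.find(class_='tabcontent').find_all('li', string=re.compile('billion'))
lines = [li.get_text(strip=True) for li in items]
print(lines)
```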
Upvotes: 2 <issue_comment>username_3: Another approach could be something like below:
```
import requests
from bs4 import BeautifulSoup
URL = 'http://www.worldstopexports.com/united-states-top-10-exports/'
soup = BeautifulSoup(requests.get(URL,headers={"User-Agent":"Mozilla/5.0"}).text, 'lxml')
table = soup.find(class_='tabcontent')
data = '\n'.join([item.text for item in table.find_all("li")])
print(data)
```
Output:
```
Machinery including computers: US$201.7 billion (13% of total exports)
Electrical machinery, equipment: $174.2 billion (11.3%)
Mineral fuels including oil: $138 billion (8.9%)
Aircraft, spacecraft: $131.2 billion (8.5%)
Vehicles: $130.1 billion (8.4%)
Optical, technical, medical apparatus: $83.6 billion (5.4%)
Plastics, plastic articles: $61.5 billion (4%)
Gems, precious metals: $60.4 billion (3.9%)
Pharmaceuticals: $45.1 billion (2.9%)
Organic chemicals: $36.2 billion (2.3%)
```
Upvotes: 1 |
2018/03/19 | 1,236 | 4,777 | <issue_start>username_0: I have a reactive form where on load no fields are required. If an option is selected that will add additional form elements into the formGroup then the new shown fields will be all required.
If the nickname field is hidden then you should be able to submit the form just fine. If the nickname is shown then the nickname field is required and the submit button is disabled until the nickname field is full.
Here is a sample of what I want to do.
My question is, how can I enable/disable validation once the form element is shown/hidden?
App.module.ts
```
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule, ReactiveFormsModule } from '@angular/forms';
import { AppComponent } from './app.component';
import { HelloComponent } from './hello.component';
@NgModule({
imports: [ BrowserModule, FormsModule, ReactiveFormsModule ],
declarations: [ AppComponent, HelloComponent ],
bootstrap: [ AppComponent ]
})
export class AppModule { }
```
App.component.ts
```
import { Component, OnInit } from '@angular/core';
import { Validators, FormControl, FormGroup, FormBuilder } from '@angular/forms';
@Component({
selector: 'my-app',
templateUrl: './app.component.html',
styleUrls: [ './app.component.css' ]
})
export class AppComponent implements OnInit {
name = 'My Reactive Form';
constructor(
private fb: FormBuilder
) {}
myForm: FormGroup;
showNick: boolean = false;
ngOnInit() {
this.myForm = this.fb.group({
'firstName': new FormControl(),
'nickName': new FormControl('', Validators.required),
'lastName': new FormControl()
})
}
toggleNick() {
this.showNick = !this.showNick;
}
}
```
app.component.html
```
First Name
Nickname? yes / no
Nickname
This field is invalid
Last Name
Submit
```<issue_comment>username_1: In my application, I have a similar requirement. If the user asks to be notified by text, the phone is required. Otherwise the phone number is optional.
I wrote this method:
```
setNotification(notifyVia: string): void {
const phoneControl = this.customerForm.get('phone');
if (notifyVia === 'text') {
phoneControl.setValidators(Validators.required);
} else {
phoneControl.clearValidators();
}
phoneControl.updateValueAndValidity();
}
```
It is called from this code which is in the ngOnInit:
```
this.customerForm.get('notification').valueChanges
.subscribe(value => this.setNotification(value));
```
If the user changes the notification field (which is a radio button), it calls the `setNotification` method passing in the value. If the value is notification by 'text', it sets the phone's validation to required.
Otherwise it clears the phone field's validation.
Then it *must* call `updateValueAndValidity` to update the form info with this new validation.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I think the best option for to do this it's make fields part of inital form with all theirs validations and when you want disable and enable fields or nested forms programmatically.
Example: <https://stackblitz.com/edit/angular-ojebff>
<https://stackblitz.com/edit/angular-ojebff?embed=1&file=app/input-errors-example.html>
Upvotes: 2 <issue_comment>username_3: Even the fields are hidden from user the fields are active in the reactive from. So simply you need to disable the field from the reactive from by using following code
```
this.myForm.get("nickName").disable();
```
Change the function toggleNick() as given below
```
toggleNick() {
this.showNick = !this.showNick;
if(showNick) {
this.myForm.get("nickName").enable();
} else {
this.myForm.get("nickName").disable();
}
}
```
Upvotes: 4 <issue_comment>username_4: I didn't find those solutions viable if you have a lot of optional fields in a complex form.
What I did is somtehing like that :
```
export class MyComponent implements AfterViewChecked {
@ViewChildren(FormControlName) allFormControlInDOM: QueryList<FormControlName>;
// ...
ngAfterViewChecked(): void {
// We must disable the controls that are not in the DOM
// If we don't, they are used for validating the form
if (this.allFormControlInDOM) {
const controls: { [p: string]: AbstractControl } = this.myForm.controls;
for (const control in controls) {
if (controls.hasOwnProperty(control) && controls[control] instanceof FormControl) {
const found = this.allFormControlInDOM.find((item: FormControlName) => item.name === control);
if (found) {
controls[control].enable();
} else {
controls[control].disable();
controls[control].setValue(null);
}
}
}
}
}
// ...
}
```
I hope that can help :)
Upvotes: 1 |
2018/03/19 | 1,038 | 3,487 | <issue_start>username_0: I made an interface using React, but when I add more items to the screen and the screen gets full, the scrollbar doesn't appear and I can't see the elements that are at the end of the page. Thank you for your time, I really appreciate it.
This CSS is written in a single folder; it is meant to be the CSS for all my pages. I don't know if this could cause the problem.
CSS code:
```
.App-header {
position: fixed;
display: block;
width: 100%;
background-color: #222;
color: #FFFF;
height:90px;
position: fixed;
}
.App-header ul li{
float:left;
padding-top:25px;
padding-left: 10px;
padding-bottom: auto;
}
.App-header ul li h1{
display:inline;
padding-left: 8px;
padding-right: 8px;
}
.App-header ul li{
display:inline;
}
.App-header ul li a{
text-decoration:none;
font-size:15px;
font-weight:bold;
color:white;
}
.App-header ul li a:hover{
text-decoration:underline;
text-decoration-color: red;
}
.Body{
position:absolute;
margin-top: 90px;
width:100%;
}
.Body h1 {
color: black;
}
.Body p{
color:black;
}
.Newfeeds {
float:right;
border: solid red;
margin-right:10px;
width:200px;
}
.Newfeeds h1{
font-size: 20px;
text-decoration:underline;
}
.weather{
border: solid green;
float:left;
height:400px;
width:400px;
margin-left:15px;
}
```
This is the React code that I used to make my interface. I have another page called index; the only code in there is to create paths between pages, and it also has the header.
Home page code:
```
import React, {Component} from 'react';
import '../Css/App.css';
class HomePage extends Component {
render(){
return(
Your daily information
======================
A Kangaroo cannot walk backwards.
The San Francisco cable cars are the only mobile National Monuments.
The only 15 letter word that can be spelled without repeating a letter is uncopyrightable.
Non - dairy creamer is flammable
A duck's quack doesn't echo and no one knows why.
Although, the show Brianiac has proved that it does in fact does echo.
In the Wizard of Oz Dorothy's last name is
Gail. It is shown on the mail box.
);
}
}
export default HomePage;
```
Thank you for your time, I hope I described my problem in a good way :)<issue_comment>username_1: Try adding `overflow-y: auto` to `.Body{}` in your css
Upvotes: 0 <issue_comment>username_2: I do not have the exact answer, but what I would do is delete one CSS property after another, or maybe even a whole block of CSS properties on a specific class, until I find which line causes the problem.
Do you have this code on GitHub, so I can download it and see if I can solve the problem for you?
Upvotes: 3 [selected_answer]<issue_comment>username_3: There are several things that can cause this problem. You can try the following:
1. When the scrollbar disappears, go to inspect and look for the main container where the scrollbar should be. Try revising its CSS properties, like setting width to "100%"; it may work for you as it worked for me.
2. You can make the scroll bar always visible by doing these steps:
```css
html {
overflow: -moz-scrollbars-vertical;
overflow-y: scroll;
}
```
3. If those two above didn't work you can try `overflow-y: auto;` as well
Upvotes: 2 <issue_comment>username_4: It is because the position of your .App-header is set to fixed. Remove the position: fixed; property and the scrollbar will show.
Upvotes: 0 |
2018/03/19 | 541 | 1,400 | <issue_start>username_0: How to convert a Float value to Varchar in SQL Server without rounding or padding.
```
DECLARE @A AS FLOAT, @B AS FLOAT
SELECT @A=1.353954 , @B=1.353
SELECT
CAST(@A AS VARCHAR(40)) AS B_FLOAT_TO_VARCHAR, -- Rounding
STR(@B, 25, 5) -- Padding 0's
```
Actual Result : 1.35395, 1.35300
Expected Result : 1.353954, 1.353<issue_comment>username_1: First cast as decimal and specify the length of the decimal you have
```
DECLARE @A AS FLOAT, @B AS FLOAT
SELECT @A=1.353954 , @B=1.353
SELECT
cast(CAST(@A AS decimal(9,6)) as varchar(max)) AS B_FLOAT_TO_VARCHAR, -- Rounding
cast(CAST(@B AS decimal(9,3)) as varchar(max))
```
Upvotes: 0 <issue_comment>username_2: Float is an approximate datatype so this is always going to be a bit tricky. You could convert your float to a numeric and then to a varchar. It works for these values but won't for every float value.
```
select convert(varchar(10), convert(numeric(9,6), @A))
, convert(varchar(10), @B)
```
Upvotes: 1 <issue_comment>username_3: For SQL Server 2008, you can use the 128 or 129 style:
```
DECLARE @A AS FLOAT, @B AS FLOAT
SELECT @A=1.353954 , @B=1.353
SELECT CONVERT(varchar(50), @A, 128) [A]
,CONVERT(varchar(50), @B, 128) [B]
```
Produces:
```
A B
1.353954 1.353
```
<https://learn.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql>
Upvotes: 0 |
2018/03/19 | 954 | 3,338 | <issue_start>username_0: I recently had a coding quiz that asked me to find the node in a tree that appears in the most levels.
For example,
```
     a
    / \
   c   a
  / \ / \
 c  a b  c
```
In this tree, `a` should be the answer since it appears in levels 0, 1, and 2.
I tried to approach this using level-order traversal, but I was confused about how to keep track of which level each node appeared in.
How can I approach this problem, preferably using Python?
Tree Struct:
```
class TreeNode:
    def __init__(self, data = None):
        self.data = data
        self.left = None
        self.right = None

    def insert(self, data):
        if self.data:
            if data < self.data:
                if self.left is None:
                    self.left = TreeNode(data)
                else:
                    self.left.insert(data)
            elif data > self.data:
                if self.right is None:
                    self.right = TreeNode(data)
                else:
                    self.right.insert(data)
        else:
            self.data = data
```<issue_comment>username_1: How I would go about it; this is basically pseudocode and untested
```
countingdict = {}
for tag, element in root:
    if tag not in countingdict:
        countingdict.update({tag: 1})
    else:
        countingdict[tag] += 1
```
You could nest the loop for as many levels as needed.
Upvotes: 0 <issue_comment>username_2: While you traverse the tree you use a `dict` to keep track of which level each node type was seen on. This can be achieved by having your keys to be nodes and your values to be sets of levels on which the node was seen.
```
def most_frequent_in_levels(tree):
    counter = {}

    def level_counter(tree, counter, level):
        if tree.data not in counter:
            counter[tree.data] = {level}
        else:
            counter[tree.data].add(level)
        if tree.left:
            level_counter(tree.left, counter, level + 1)
        if tree.right:
            level_counter(tree.right, counter, level + 1)

    level_counter(tree, counter, 0)
    return max(counter.keys(), key=lambda data: len(counter[data]))
```
Here is a working example.
```
tree = TreeNode(data='a')
tree.left, tree.right= TreeNode(data='a'), TreeNode(data='b')
tree.left.left, tree.left.right, tree.right.left = TreeNode(data='c'), TreeNode(data='c'), TreeNode(data='c')
# Which creates the following tree
#
# a
# / \
# a b
# / \ /
# c c c
most_frequent_in_levels(tree) # 'a'
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can use a custom version of the [Breadth-First-Search](https://en.wikipedia.org/wiki/Breadth-first_search):
```
from collections import deque, defaultdict
def bsf(tree):
    d = deque([tree])
    levels = defaultdict(list)
    count = 0
    seen = [tree.data]
    while seen:
        listing = []
        while d:
            val = d.popleft()
            if val:
                levels[count].append(val.data)
                listing.extend([val.right, val.left])
        count += 1
        if not any(listing):
            break
        d.extend(listing)
    return levels
result = bsf(t1)
frequencies = {i:[b for _, b in result.items() if i in b] for i in [c for h in result.values() for c in h]}
last_result = max(frequencies.items(), key=lambda x: len(x[-1]))[0]
```
Upvotes: 0 |
2018/03/19 | 278 | 1,146 | <issue_start>username_0: In the documentation of BigQuery, there are only three types of resources: organization, project, and dataset. Roles and permissions are on these resources. My question is: is there any way to define access control on a particular table in a dataset?<issue_comment>username_1: With BigQuery you can define read access up to a per-row level:
* <https://cloud.google.com/bigquery/docs/views#row-level-permissions>
With that said, going to a per-row or per-table level access will take a lot more work (involving authorized views) than working the native project/dataset access controls.
Upvotes: 2 <issue_comment>username_2: No, you cannot define access control on table. Only down to dataset!
At the same time there is a way to define [`row-level`](https://cloud.google.com/bigquery/docs/views#row-level-permissions) access
Yet, another option for you (depends on specific use case) would potentially be [`Protecting Data with Cloud KMS Keys`](https://cloud.google.com/bigquery/docs/customer-managed-encryption) - this will not control access but rather ability to see actual data vs. encrypted
Upvotes: 3 [selected_answer] |
2018/03/19 | 366 | 1,408 | <issue_start>username_0: New to VBA code. I need help matching strings with a macro. I have a system name "<NAME>" in Column A and an input name "<NAME>" in Column B. I want to compare the first and last strings in columns A & B, and if the first and last strings match, the result in Column C should be "OK"; otherwise Column C should be "Check".
PS: The format in Column A is "FirstName-MiddleName-LastName" & the format in Column B is "FirstName-Initial-LastName".
I am not sure of the coding for this. Can someone help? Thanks. |
2018/03/19 | 1,038 | 3,487 | <issue_start>username_0: How can I create a new repository in an organization with PyGithub on Github? In particular I like to know how to use the `create_repo` method?
My question is identical to [this question](https://stackoverflow.com/questions/28675121/how-to-create-a-new-repository-with-pygithub), but I would like the created repository to appear in an organization.
The solution to creating a repo without the organization level is:
```
g = Github("username", "password")
user = g.get_user()
repo = user.create_repo(full_name)
```<issue_comment>username_1: This link gave me the answer: [link](https://chase-seibert.github.io/blog/2016/07/22/pygithub-examples.html)
I thought I would update my question to let others know what the solution was.
Pretty simple:
```
from github import Github
# using username and password
g = Github("Username", "Password")
org = g.get_organization('orgName')
repo = org.create_repo("test name")
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: The code below will help you create a new repo in an organization.
Using a username and password, establish a connection to GitHub:
```
g = Github(userName, password)
org = g.get_organization('yourOrgName')
```
If you are using Github Enterprise then use below code to login:
```
g = Github(base_url="https://your_host_name/api/v3", login_or_token="personal_access_token")
org = g.get_organization('yourOrgName')
```
create the new repository:
```
repo = org.create_repo(projectName, description = projectDescription )
```
full code to create a Repo:
```
from github import Github
import pygit2
g = Github(userName, password)
org = g.get_organization('yourOrgName')
repo = org.create_repo(projectName, description = projectDescription )
```
Clone a repo :
```
repoClone = pygit2.clone_repository(repo.git_url, 'path_where_to_clone')
```
push code to repo:
```
repoClone.remotes.set_url("origin", repo.clone_url)
index = repoClone.index
index.add_all()
index.write()
tree = index.write_tree()
oid = repoClone.create_commit('refs/heads/master', author, commiter, "init commit",tree,[repoClone.head.peel().hex])
remote = repoClone.remotes["origin"]
credentials = pygit2.UserPass(userName, password)
#if the above credentials does not work,use the below one
#credentials = pygit2.UserPass("personal_access_token", 'x-oauth-basic')
remote.credentials = credentials
callbacks=pygit2.RemoteCallbacks(credentials=credentials)
remote.push(['refs/heads/master'],callbacks=callbacks)
```
Full code to clone,create and push to a repo:
```
from github import Github
import pygit2
g = Github(userName, password)
org = g.get_organization('yourOrgName')
repo = org.create_repo(projectName, description = projectDescription )
repo.create_file("/README.md", "init commit", Readme_file)
repoClone = pygit2.clone_repository(repo.git_url, 'path_where_to_clone')
repoClone.remotes.set_url("origin", repo.clone_url)
index = repoClone.index
index.add_all()
index.write()
tree = index.write_tree()
oid = repoClone.create_commit('refs/heads/master', author, commiter, "init commit",tree,[repoClone.head.peel().hex])
remote = repoClone.remotes["origin"]
credentials = pygit2.UserPass(userName, password)
#if the above credentials does not work,use the below one
#credentials = pygit2.UserPass("personal_access_token", 'x-oauth-basic')
remote.credentials = credentials
callbacks=pygit2.RemoteCallbacks(credentials=credentials)
remote.push(['refs/heads/master'],callbacks=callbacks)
```
Upvotes: 0 |
2018/03/19 | 1,219 | 5,121 | <issue_start>username_0: I'm wondering if it's possible to express the time complexity of an algorithm that relies on convergence using Big O notation.
In most algorithmic analysis I've seen, we evaluate our function's rate of growth based on input size.
In the case of an algorithm that has some convergence criteria (where we repeat an operation until some defined error metric is below a threshold, or the rate at which the error metric is changing is below some threshold), how can we measure the time complexity? The number of iterations required to converge and exit that loop seems difficult to reason about, since the way an algorithm converges tends to depend on the content of the input rather than just its size.
How can we represent the time complexity of an algorithm that relies on convergence in Big O notation?<issue_comment>username_1: Asymptotic notations don't rely on convergence.
According to **CLRS** book (Introduction to Algorithms Third Edition chapter 3 page 43):
>
> When we look at input sizes large enough to make only the order of
> growth of the running time relevant, we are studying the
> ***asymptotic*** efficiency of algorithms.That is, we are concerned with how the running time of an algorithm increases with he size of
> the input in the **limit**, as the size of the input increases without
> bound. Usually, an algorithm that is asymptotically more efficient
> will be the best choice for all but very small inputs.
>
>
>
You mentioned that your code (or idea) has a loop that keeps running until a condition is satisfied, and you called satisfying that condition **convergence**. In that sense, convergence is not related to asymptotic notations like `big O`, because a necessary condition for code to be an algorithm is that its iterations must finish. You need to make sure the iterations of your code finish; only then can you call it an algorithm and do an asymptotic analysis of it.
Another thing: it is true that one input may take more running time and another less. That is not what asymptotic analysis is about; that is *best case vs. worst case*. We can analyse algorithms in the best case or the worst case with `big O` or other asymptotic notations, and the most reliable approach is to analyse your algorithm in the worst case. Finally, to analyse your code you should describe the steps of your algorithm exactly.
Upvotes: 2 <issue_comment>username_2: In order to analyse an algorithm that relies on convergence, it seems that we have to prove something about the rate of convergence.
Convergence usually has a termination condition that checks if our error metric is below some threshold:
```
do {
// some operation with time complexity O(N)
} while (errorMetric > 0.01) // if this is false, we've reached convergence
```
Generally, we seek to define something about the algorithm's manner of convergence - usually by identifying that its a function of something.
For instance, we might be able to show that an algorithm's measure of error is a function of the number of iterations so that the error = 1 / 2^i, where i is the number of iterations.
This can be re-written in terms of the number of iterations like so: iterations = log(1 / E), where E is the desired error value.
Therefore, if we have an algorithm that performs some linear operation on each iteration of the convergence loop (as in the example above), we can surmise that our time complexity is O(N \* log(1 / E)). Our function's rate of growth is dependent on the amount of error we're willing to tolerate, in addition to the input size.
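A quick sketch that illustrates the relationship (hypothetical numbers, just halving the error each pass) shows the iteration count tracking log2(1/E):
```py
import math

def iterations_to_converge(threshold):
    error, iterations = 1.0, 0
    while error > threshold:      # the convergence loop
        error /= 2                # error = 1 / 2^i after i iterations
        iterations += 1
    return iterations

for eps in (1e-2, 1e-4, 1e-8):
    print(eps, iterations_to_converge(eps), math.ceil(math.log2(1 / eps)))
```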
So, if we're able to determine some property about the behaviour of convergence, such as if its a function of the error, or size of the input, then we can perform asymptotic analysis.
Take, for example, PageRank, an algorithm called [power iteration](http://mlwiki.org/index.php/Power_Iteration) is used in its computation, which is an algorithm that approximates the dominant eigenvector of a matrix. It seems possible that the rate of convergence can be shown to be a function of the first two eigenvalues (shown in the link).
Upvotes: 4 [selected_answer]<issue_comment>username_3: From math point of view, the main problem is estimation of the [Rate of convergence](https://en.wikipedia.org/wiki/Rate_of_convergence) of used approach. I am not so familiar with numerical methods for speak fluently about higher than 1 Dimensions (matrixes and tensors you probably more interested in). But ley's take other example of [Equation Solving](https://en.wikibooks.org/wiki/Numerical_Methods/Equation_Solving) than Bisection, already estimated above as `O(log(1/e))`.
Consider [Newton method](https://en.wikipedia.org/wiki/Newton%27s_method) and assume we try to find one root with accuracy e=10e-8 for all float numbers. We have square as Rate of convergence, so we have approximately 2\*log(float\_range/e) cycle iterations, what's means the same as Bisection algorithmic complexity `O(log(range/accuracy))`, if we are able to calculate the derivative for constant time.
Hope, this example has a sense for you.
Upvotes: 1 |
2018/03/19 | 577 | 2,521 | <issue_start>username_0: I am brand new to Liquibase...
Today I wrote a Liquibase changeset using --liquibase formatted sql.
I created two tables where the second had a foreign key dependency on the first.
My rollback strategy was (mistakenly) drop table1; drop table2. When I ran the update and tested the rollback it failed because of the foreign key constraint. However, when I corrected my mistake and attempted to rerun it, it failed because the checksum didn't match.
I know that the obvious answer is to make more atomic changesets, however...
Does Liquibase support a way to test this sort of thing without actually running it so I can avoid the problem with the checksum on the edited rollback?
Failing that: is there a workaround for the checksum problem that will let me edit my files after running the update? (ctrl+z?)<issue_comment>username_1: The short answer to your question is, no Liquibase doesn't have such a thing.
Liquibase is a great toolkit, but it doesn't have a lot of bells and whistles, and it doesn't have much of an 'opinion' on how it should be used, or what your workflow should be. In your case, I would suggest that one way to deal with the problem is to just drop the database and then re-create it from the changelog. If you have already deployed the changelog in multiple places, that might not be possible, and if you weren't prepared to do that, that could be a problem.
There is an option to specify a `validChecksums` attribute on a changeset, so you could use that, but in general if you are using that you are making your changelog more complicated.
If you wanted to look at something that is more full-featured and that has the ability to forecast changes before they are deployed, please check out my company's product, [Datical DB](http://www.datical.com/). It uses liquibase at its core, but adds a whole lot more (and is priced accordingly).
Upvotes: 3 [selected_answer]<issue_comment>username_2: Liquibase provides `updateTestingRollback` command that basically updates the database, then tests its rollback and if successful, applies the changes again.
Your problem with invalid checksums may be solved with `clearCheckSums` command. It removes current checksums from database and on next update change sets that have already been deployed will have their checksums recomputed, and change sets that have not been deployed will be deployed.
For more details check Liquibase [commands](https://www.liquibase.org/documentation/command_line.html).
Upvotes: 0 |
2018/03/19 | 1,222 | 3,542 | <issue_start>username_0: I have data of the form:
```
ID name date count
---------------------------------
1 A 1/1/2015 3
2 B 1/4/2015 2
3 C 1/6/2015 4
4 D 1/10/2015 2
```
Which I would like to turn into something like...
```
1 A 1/1/2015
1 A 1/2/2015
1 A 1/3/2015
2 B 1/4/2015
2 B 1/5/2015
3 C 1/6/2015
...
```
I believe this is possible using a partition query, but I'm having a real problem understanding the [examples I find on the MS page](https://learn.microsoft.com/en-us/sql/t-sql/functions/row-number-transact-sql). I need to use `ROW_NUMBER` in `DATEADD`, but I can't figure out how to get it to return the right number of rows, say 3 for the first case. The `TerritoryName` example seems close...
In the worst case I can do this in VBA code, and I also know of solutions using tables filled with dates, but I think this is something I should know how to do in the SELECT if possible.
**UPDATE:** I am not allowed to change anything in the original database (a licensing issue). I could use a #blah, but would prefer to avoid this if possible.<issue_comment>username_1: This is where a tally table is the right tool for the job. I keep one as a view on my system. It is insanely fast.
```
create View [dbo].[cteTally] as
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
select N from cteTally
GO
```
Now we just need some sample data and we can then join to our tally table.
```
set dateformat mdy --you really should use ANSI standard YYYYMMDD
insert @Something values
(1, 'A', '1/1/2015', 3)
,(2, 'B', '1/4/2015', 2)
,(3, 'C', '1/6/2015', 4)
,(4, 'D', '1/10/2015', 2)
select s.*
, DATEADD(day, t.N - 1, s.SomeDate)
from @Something s
join cteTally t on t.N <= s.SomeCount
```
Since you can't create a view you can trim this cte down a bit also to fit your current needs and have the cte be in your query. Something like this.
```
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E1
)
select s.*
, DATEADD(day, t.N - 1, s.SomeDate)
from @Something s
join cteTally t on t.N <= s.SomeCount
```
Upvotes: 2 <issue_comment>username_2: Perhaps you could use built-in `spt_values` table, if selecting on `Type`='P', `Number` returns consecutive numbers between 0 and 2047.
If column `count` contains reliable value of day count query could be:
```
select t.id, t.name, dateadd(d, numbers.Number, t.date) as date
from t
join master..spt_values as numbers
on numbers.Type = 'P'
and numbers.Number < t.count
```
Fiddle: <http://sqlfiddle.com/#!18/62aee/2>
Upvotes: 2 [selected_answer]<issue_comment>username_3: If not all dates are represented, and you only want to list dates that are in the table, I think something like the following would work:
```
SELECT ranknum, itemname, itemdate
FROM (SELECT RANK() OVER(PARTITION BY t.itemdate ORDER BY allitems.itemname) ranknum
, t.id
, allitems.itemname
, t.itemdate
FROM (SELECT DISTINCT itemname FROM tablename) allitems
LEFT JOIN tablename t ON allitems.itemname = t.itemname) ranked
WHERE id IS NOT NULL
ORDER BY ranknum, itemdate
```
Upvotes: 0 |
2018/03/19 | 1,229 | 3,327 | <issue_start>username_0: I have a number of dates for which I would like to make an indicator variable for. The problem is that I am having difficulty making this happen in `R` using `timeDate`. Here is a toy example
```
library(timeDate)
library(lubridate)
library(tidyverse)
>df <- tribble(
~date,
"2010-12-31",
"2011-01-01",
"2011-01-02") %>%
mutate(date = ymd(date))
> df
# A tibble: 3 x 1
date
1 2010-12-31
2 2011-01-01
3 2011-01-02
```
I would like to add an indicator for New Years Day called `is_new_year`.
I tried the following
```
df %>% rowwise() %>%
mutate(is_new_year = ifelse(USNewYearsDay(year = year(date)) == date,1,0))
```
and got the error
>
> Error in mutate\_impl(.data, dots) : Evaluation error: comparison (1) is possible only for atomic and list types.
>
>
>
What should I do to get
```
date is_new_year
1 2010-12-31 0
2 2011-01-01 1
3 2011-01-02 0
```
|
2018/03/19 | 669 | 2,480 | <issue_start>username_0: I'm running a spring boot scheduled process that takes 5-10 seconds to complete. After it completes, 60 seconds elapse before the process begins again (Note that I'm not using fixedRate):
```
@Scheduled(fixedDelay=60_000)
```
Now, I want to limit it to run every minute Mon-Fri 9am to 5pm. I can accomplish this with
```
@Scheduled(cron="0 * 9-16 ? * MON-FRI")
```
The problem here is that this acts similarly to fixedRate - the process triggers EVERY 60 seconds regardless of the amount of time it took to complete the previous run...
Any way to combine the two techniques?
```
@Scheduled(cron="0 * 9-16 ? * MON-FRI", fixedDelay=60_000)
```
From the documentation: <https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/annotation/Scheduled.html>
Upvotes: -1 <issue_comment>username_2: try this:
```
@Schedules({
@Scheduled(fixedRate = 1000),
@Scheduled(cron = "* * * * * *")
})
```
Upvotes: 0 <issue_comment>username_3: You can try this one:
@Scheduled(cron="1 9-16 \* \* MON-FRI")
---
Also, you can check that your cron expression is correct on this site: <https://crontab.guru/>
Upvotes: -1 <issue_comment>username_4: it worked for me like this
I created a bean that returns a specific task executor and allowed only 1 thread.
```
@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {
@Bean(name = "movProcTPTE")
public TaskExecutor movProcessualThreadPoolTaskExecutor() {
ThreadPoolTaskExecutor exec = new ThreadPoolTaskExecutor();
exec.setMaxPoolSize(1);
exec.initialize();
return exec;
}
}
```
In my service, I injected my task executor and wrapped my logic with it, so even though my schedule runs every minute, my logic will only run when the task executor is free.
```
@Service
@EnableScheduling
public class ScheduledService {
@Autowired
private ReportDataService reportDataService;
@Autowired
private AsyncService async;
@Autowired
@Qualifier("movProcTPTE")
private TaskExecutor movProcTaskExecutor;
@Scheduled(cron = "0 * * 1-7 * SAT,SUN")
public void agendamentoImportacaoMovProcessual(){
movProcTaskExecutor.execute(
() -> {
reportDataService.importDataFromSaj();
}
);
}
}
```
Upvotes: 1 |
2018/03/19 | 757 | 2,654 | <issue_start>username_0: I would like to convert a dictionary of key-value pairs to an excel file with column names that match the values to the corresponding columns.
For example :
I have an excel file with column names as:
```
a,b,c,d,e,f,g and h.
```
I have a dictionary like:
{`1:['c','d'],2:['a','h'],3:['a','b','b','f']}`.
I need the output to be:
```
   a  b  c  d  e  f  g  h
1        1  1
2  1                    1
3  1  2           1
```
the `1,2,3` are the keys from the dictionary.
The rest of the columns could be either `0` or `null`.
I have tried splitting the dictionary and am getting
```
1 = ['c','d']
2 = ['a','h']
3 = ['a','b','b','f']
```
but, I don't know how to pass this to match with the excel file.
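One hedged sketch of the transformation described above (assuming pandas is available, that the `a`–`h` column names come from the existing sheet, and a hypothetical `output.xlsx` file name):
```py
import pandas as pd
from collections import Counter

columns = list('abcdefgh')   # assumed to be read from the Excel header
data = {1: ['c', 'd'], 2: ['a', 'h'], 3: ['a', 'b', 'b', 'f']}

# Count how often each letter appears per key, then line the counts up with the columns.
df = (pd.DataFrame([Counter(v) for v in data.values()], index=data.keys())
        .reindex(columns=columns)
        .fillna(0)
        .astype(int))

print(df)
df.to_excel('output.xlsx')
```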
|
2018/03/19 | 987 | 3,740 | <issue_start>username_0: I'm trying to use a scanner to parse out some text but i keep getting an InputMismatchException. I'm using the scanner.next(Pattern pattern) method and i want to return the next n amount of characters (including whitespace).
For example when trying to parse out
```
"21 SPAN 1101"
```
I want to store the first 4 characters (`"21 "`) in a variable, then the next 6 characters (`" "`) in another variable, then the next 5 (`"SPAN "`), and finally the last 4 (`"1101"`)
What I have so far is:
```
String input = "21 SPAN 1101";
Scanner parser = new Scanner(input);
avl = parser.next(".{4}");
cnt = parser.next(".{6}");
abbr = parser.next(".{5}");
num = parser.next(".{4}");
```
But this keeps throwing an InputMismatchException even though according to the java 8 documentation for the scanner.next(Pattern pattern) it doesn't throw that type of exception. Even if I explicitly declare the pattern and then pass that pattern into the method i get the same exception being thrown.
Am I approaching this problem with the wrong class/method altogether? As far as i can tell my syntax is correct but i still cant figure out why im getting this exception.<issue_comment>username_1: At documentation of [`next(String pattern)`](https://docs.oracle.com/javase/9/docs/api/java/util/Scanner.html#next-java.lang.String-) we can find that it (emphasis mine)
>
> Returns the next token **if it matches the pattern constructed from the specified string**.
>
>
>
But `Scanner` uses *one or more whitespace characters* as its default delimiter, so it doesn't consider spaces part of a token. So the first token it returns is `"21"`, not `"21 "`, and the condition "...if it matches the pattern constructed from the specified string" is not fulfilled for `.{4}` because of its length.
The simplest solution would be to read the entire line with `nextLine()` and split it into separate parts via a regex like `(.{4})(.{6})(.{5})(.{4})` or a series of `substring` calls, for example:
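A minimal sketch of the regex variant (assuming the line really is 4+6+5+4 = 19 characters long; variable names are just examples):
```
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FixedWidthParse {
    public static void main(String[] args) {
        Scanner parser = new Scanner(System.in);
        String line = parser.nextLine();            // keeps the spaces, unlike next()
        Matcher m = Pattern.compile("(.{4})(.{6})(.{5})(.{4})").matcher(line);
        if (m.matches()) {                          // only succeeds for a 19-character line
            String avl  = m.group(1);
            String cnt  = m.group(2);
            String abbr = m.group(3);
            String num  = m.group(4);
            System.out.println("[" + avl + "][" + cnt + "][" + abbr + "][" + num + "]");
        }
    }
}
```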
Upvotes: 2 <issue_comment>username_2: You might want to consider creating a convenience method to cut your input String into variable number of pieces of variable length, as approach with `Scanner.next()` seems to fail due to not considering spaces as part of tokens (spaces are used as delimiter by default). That way you can store result pieces of input String in an array and assign specific elements of an array to other variables (I made some additional explanations in comments to proper lines):
```
public static void main(String[] args) throws IOException {
String input = "21 SPAN 1101";
String[] result = cutIntoPieces(input, 4, 6, 5, 4);
// You can assign elements of result to variables the following way:
String avl = result[0]; // "21 "
String cnt = result[1]; // " "
String abbr = result[2]; // "SPAN "
String num = result[3]; // "1101"
// Here is an example how you can print whole array to console:
System.out.println(Arrays.toString(result));
}
public static String[] cutIntoPieces(String input, int... howLongPiece) {
String[] pieces = new String[howLongPiece.length]; // Here you store pieces of input String
int startingIndex = 0;
for (int i = 0; i < howLongPiece.length; i++) { // for each "length" passed as an argument...
pieces[i] = input.substring(startingIndex, startingIndex + howLongPiece[i]); // store at the i-th index of pieces array a substring starting at startingIndex and ending "howLongPiece indexes later"
startingIndex += howLongPiece[i]; // update value of startingIndex for next iterations
}
return pieces; // return array containing all pieces
}
```
Output that you get:
```
[21 , , SPAN , 1101]
```
Upvotes: 1 |
2018/03/19 | 643 | 2,780 | <issue_start>username_0: When trying to inject field variables using dagger I'm getting null. Here are the files. Some are in Java and some in Kotlin
App.java
```
public class App extends DaggerApplication{
@Override
 protected AndroidInjector<? extends DaggerApplication> applicationInjector() {
return DaggerAppComponent.builder().application(this).build();
}
}
```
AppComponent.kt
```
@Singleton
@Component(modules = arrayOf(
NetworkModule::class,
ApplicationModule::class,
AndroidSupportInjectionModule::class
))
interface AppComponent : AndroidInjector<App> {
@Component.Builder
interface Builder {
@BindsInstance
fun application(application: Application): AppComponent.Builder
fun build(): AppComponent
}
}
```
NetworkModule.kt
```
@Module
class NetworkModule {
@Provides
@Singleton
fun provideOkHttpClient(): OkHttpClient {
val builder = OkHttpClient.Builder();
if (BuildConfig.DEBUG) {
val interceptor = HttpLoggingInterceptor()
interceptor.setLevel(HttpLoggingInterceptor.Level.BODY)
builder.addInterceptor(interceptor).build()
}
return builder.build()
}
@Singleton
@Provides
fun provideRetrofit(client: OkHttpClient): Retrofit {
val retrofit = Retrofit.Builder()
.baseUrl(BaseApi.SITE_ENDPOINT)
.addConverterFactory(GsonConverterFactory.create())
.addCallAdapterFactory(RxJava2CallAdapterFactory.create())
.client(client)
.build();
return retrofit
}
}
```
// Repository where injection should be done
```
class Repository {
private var examsService: BlogExamsService
@Inject
var retrofit: Retrofit? = null
init {
// retrofit is null here
examsService = retrofit?.create(BlogExamsService::class.java)!!
}
}
```<issue_comment>username_1: Change your `Repository` to:
```
class Repository {
private var examsService: BlogExamsService
@Inject
constructor(retrofit: Retrofit) {
examsService = retrofit.create(BlogExamsService::class.java)!!
}
}
```
Upvotes: 0 <issue_comment>username_2: Field injection won't work here because you never call the `inject()` method.
To make it work with your current approach you would have to call, in your `Repository` class:
`App.self.getComponent().inject(this)`
Where:
`self` is a `static` instance of your application
`getComponent()` is a public getter for your `ApplicationComponent`
**Though I would not recommend that in your case: it is a misuse of the DI framework.**
You should instead create a `RepositoryModule` and `@Provides` an instance of `Repository`, the same way you have done with `NetworkModule`.
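For example, a minimal sketch of such a module (the `Repository` constructor shown is an assumption: it simply takes `Retrofit` as a dependency instead of relying on field injection):
```
@Module
class RepositoryModule {

    @Singleton
    @Provides
    fun provideRepository(retrofit: Retrofit): Repository {
        // Retrofit comes from NetworkModule; no inject() call is needed anywhere.
        return Repository(retrofit)
    }
}
```
Then add `RepositoryModule::class` to the `modules` list of your `AppComponent` and let Dagger hand you the `Repository` wherever you need it.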
Upvotes: 2 |
2018/03/19 | 1,379 | 5,453 | <issue_start>username_0: I have a csv file of 500GB and a mysql database of 1.5 TB of data and I want to run aws sagemaker classification and regression algorithm and random forest on it.
Can aws sagemaker support it? can model be read and trained in batches or chunks? any example for it<issue_comment>username_1: Amazon SageMaker is designed for such scales and it is possible to use it to train on very large datasets. To take advantage of the scalability of the service you should consider a few modifications to your current practices, mainly around distributed training.
You will want to use distributed training to get much faster training (“100 hours of a single instance cost exactly the same as 1 hour of 100 instances, just 100 times faster”), better scalability (“if you have 10 times more data, you just add 10 times more instances and everything just works”) and better reliability, as each instance only handles a small part of the dataset or the model and doesn't run out of disk or memory space.
It is not obvious how to implement the ML algorithm in a distributed way that is still efficient and accurate. Amazon SageMaker has modern implementations of classic ML algorithms such as Linear Learner, K-means, PCA, XGBoost etc. that are supporting distributed training, that can scale to such dataset sizes. From some benchmarking these implementations can be 10 times faster compared to other distributed training implementations such as Spark MLLib. You can see some examples in this notebook: <https://github.com/awslabs/amazon-sagemaker-workshop/blob/master/notebooks/video-game-sales-xgboost.ipynb>
The other aspect of the scale is the data file(s). The data shouldn’t be in a single file as it limits the ability to distribute the data across the cluster that you are using for your distributed training. With SageMaker you can decide how to use the data files from Amazon S3. It can be in a fully replicated mode, where all the data is copied to all the workers, but it can also be sharded by key, that distributed the data across the workers, and can speed up the training even further. You can see some examples in this notebook: <https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/data_distribution_types>
Upvotes: 3 <issue_comment>username_2: You can use SageMaker for large scale Machine Learning tasks! It's designed for that. I developed this open source project <https://github.com/Kenza-AI/sagify> (sagify), it's a CLI tool that can help you train and deploy your Machine Learning/Deep Learning models on SageMaker in a very easy way. I managed to train and deploy all of my ML models whatever library I was using (Keras, Tensorflow, scikit-learn, LightFM, etc)
Upvotes: 1 <issue_comment>username_3: Amazon Sagemaker is built to help you scale your training activities. With large datasets, you might consider two main aspects:
* The way data are stored and accessed,
* The actual training parallelism.
*Data storage*: S3 is the most cost-effective way to store your data for training. To get faster startup and training times, you can consider the followings:
* If your data is are already stored on Amazon S3, you might want first to consider leveraging the [`Pipe` mode](https://aws.amazon.com/blogs/machine-learning/accelerate-model-training-using-faster-pipe-mode-on-amazon-sagemaker/) with built-in algorithms or [bringing your own](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/pipe_bring_your_own). But `Pipe` mode is not suitable all the time, for example, if your algorithm needs to backtrack or skip ahead within an epoch (the underlying FIFO cannot support lseek() operations) or if it is not easy to parse your training dataset from a streaming source.
* In those cases, you may want to leverage [Amazon FSx for Lustre and Amazon EFS file systems](https://aws.amazon.com/blogs/machine-learning/speed-up-training-on-amazon-sagemaker-using-amazon-efs-or-amazon-fsx-for-lustre-file-systems/). If your training data is already in an [Amazon EFS](https://aws.amazon.com/efs/), I recommend using it as a data source; otherwise, choose [Amazon FSx for Lustre](https://aws.amazon.com/fsx/lustre/).
*Training Parallelism*: With large datasets, it is likely you'll want to train on different GPUs. In that case, consider the followings:
* If your training is already Horovod ready, you can do it with [Amazon SageMaker](https://aws.amazon.com/blogs/machine-learning/multi-gpu-and-distributed-training-using-horovod-in-amazon-sagemaker-pipe-mode/) ([notebook](https://github.com/aws/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/keras_script_mode_pipe_mode_horovod/tensorflow_keras_CIFAR10.ipynb)).
* In December, AWS has released [managed data parallelism](https://aws.amazon.com/blogs/aws/managed-data-parallelism-in-amazon-sagemaker-simplifies-training-on-large-datasets/), which simplifies parallel training over multiple GPUs. As of today, it is available for TensorFlow and PyTorch.
*(bonus) Cost Optimisation*: Do not forget to leverage [Managed Spot training](https://aws.amazon.com/blogs/aws/managed-spot-training-save-up-to-90-on-your-amazon-sagemaker-training-jobs/) to save up to 90% of the compute costs.
You will find other examples on the Amazon SageMaker Distributed Training [documentation page](https://sagemaker-examples.readthedocs.io/en/latest/training/distributed_training/index.html)
Upvotes: 3 |
2018/03/19 | 748 | 2,941 | <issue_start>username_0: I want to create a user registration with custom usernames. I store the user's username on a user document in Firestore. How can I validate a username already exists in my Users Collection?
[](https://i.stack.imgur.com/WARAs.png)
Maybe someone already have snippet for reactive form validation?<issue_comment>username_1: There is no efficient way to check all documents in a collection for a specific value. You would have to read each document in turn, and check them. That is prohibitive both from a performance and a cost standpoint.
What you can instead do is create an additional collection (typically called a reverse index, or reverse map) where you use the username as the name of the document, and (for example) the user's UID as the data of the document. You can then easily check if a username is already taken by [checking for the existence of a document with that specific name](https://stackoverflow.com/questions/47308159/whats-the-best-way-to-check-if-a-firestore-record-exists-if-its-path-is-known?rq=1), which is a direct access lookup and thus highly scaleable.
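A minimal sketch of that lookup with the plain web SDK (collection and field names are only examples):
```
// One document per username, e.g. /usernames/jdoe -> { uid: "..." }
function isUsernameTaken(username) {
  return firebase.firestore()
    .collection('usernames')
    .doc(username.toLowerCase())        // store lowercase so the check is case-insensitive
    .get()
    .then(snapshot => snapshot.exists); // direct document lookup, no collection scan
}

// isUsernameTaken('jdoe').then(taken => console.log(taken ? 'taken' : 'free'));
```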
Since you tagged with `google-cloud-datastore`; if you are indeed looking for an answer for that database too, check [Unique email in Google Datastore](https://stackoverflow.com/questions/45820737/unique-email-in-google-datastore).
Upvotes: 4 [selected_answer]<issue_comment>username_2: In my Angular/Ionic project, I use an async validator to check for existing usernames stored as fields on user documents in a users collection. In my constructor I have:
```
this.signupForm = formBuilder.group({
    username: ['', Validators.compose([Validators.required,
                                       Validators.minLength(2),
                                       Validators.maxLength(24)]),
               this.asyncValidator.bind(this)],
password: ['', Validators.compose([Validators.minLength(6),
Validators.required])]
})
```
My asyncValidator method:
```
asyncValidator(control) {
let username = control.value
return new Promise(resolve => {
this.checkUsername(username).then(snapshot => {
if(snapshot.docs.length > 0){
resolve({
"username taken": true
});
} else {
resolve(null);
}
})
})
}
```
My query to Firestore:
```
checkUsername(username) {
return firebase.firestore().collection('users').where("username", "==", username).get()
}
```
Upvotes: 0 <issue_comment>username_3: Here is angular code:
```
fs_collection: AngularFirestoreCollection;
this.db.collection('Users').ref.where('username', '==',
this.model.username).get().then((ref) => {
let results = ref.docs.map(doc => doc.data() as UserItems);
if (results.length > 0) {
console.log(userData); //do what you want with code
}
else {
 this.error("no user.");
}
});
```
Upvotes: 2 |
2018/03/19 | 544 | 1,828 | <issue_start>username_0: I am not allowed to use == or /=, but I have no idea how I can rewrite it without them.
```
iffC x y =
if x == True && y == True then True
else if x == False && y == False then True
else False
iffG x y
| y == True && x == True = True
| y == False && x == False = True
| otherwise = False
```<issue_comment>username_1: You could rewrite you existing solutions like this, which I think is what the exercise was aimed at.
```haskell
iffC :: Bool -> Bool -> Bool
iffC x y = if x && y
then True
else False
iffG :: Bool -> Bool -> Bool
iffG x y | x && y = True
| otherwise = False
iffP :: Bool -> Bool -> Bool
iffP True True = True
iffP _ _ = False
```
I don't really see how this exercise is instructive because all those implementations are a complicated way of saying `&&`.
Upvotes: 1 [selected_answer]<issue_comment>username_2: Any binary boolean operator can be defined using the pattern
```
op :: Bool -> Bool -> Bool
op x y =
if x
then if y then ... else ...
else if y then ... else ...
```
where the four `...` are, essentially, the truth table for `op`.
Often this leads to antipatterns like
```
if z then True else False
```
which should be rewritten as
```
z
```
or
```
if z then False else True
```
which should be rewritten as
```
not z
```
or
```
if z then True else True
```
which should be rewritten as
```
True
```
or the analogous case for `False`. Essentially, such `if`s can always be rewritten as one of `z, not z, True, False`.
Upvotes: 1 <issue_comment>username_3: Pattern matching seems like it would work like a charm for this.
```
iffC :: Bool -> Bool -> Bool
iffC True True = True
iffC False False = True
iffC _ _ = False
```
You get the free bonus of having your truth table visible.
Upvotes: 0 |
2018/03/19 | 2,045 | 5,635 | <issue_start>username_0: Using Spark 1.6 on scala, how do I group each characters in the code column by position by key? First strings together, second characters together etc...
```
val someDF = Seq(
(123, "0000"),
(123, "X000"),
(123, "C111"),
(124, "0000"),
(124, "0000"),
(124, "C200")).toDF("key", "code")
someDF.show()
+---+----+
|key|code|
+---+----+
|123|0000|
|123|X000|
|123|C111|
|124|0000|
|124|0000|
|124|C200|
+---+----+
val df = someDF.select($"key", split($"code","").as("code_split"))
df.show()
+---+--------------+
|key| code_split|
+---+--------------+
|123|[0, 0, 0, 0, ]|
|123|[X, 0, 0, 0, ]|
|123|[C, 1, 1, 1, ]|
|124|[0, 0, 0, 0, ]|
|124|[0, 0, 0, 0, ]|
|124|[C, 2, 0, 0, ]|
+---+--------------+
```
Using `collect_list`, I can do it for one column at a time. How can I do it for all the combinations without looping?
```
df
.select($"id",
$"code_split"(0).as("m1"),
$"code_split"(1).as("m2"),
$"code_split"(2).as("m3"),
$"code_split"(3).as("m4")
)
.groupBy($"id").agg(
collect_list($"m1"),
collect_list($"m2"),
collect_list($"m3"),
collect_list($"m4")
)
.show()
+---+----------------+----------------+----------------+----------------+
| id|collect_list(m1)|collect_list(m2)|collect_list(m3)|collect_list(m4)|
+---+----------------+----------------+----------------+----------------+
|123| [0, X, C]| [0, 0, 1]| [0, 0, 1]| [0, 0, 1]|
|124| [0, 0, C]| [0, 0, 1]| [0, 0, 0]| [0, 0, 0]|
+---+----------------+----------------+----------------+----------------+
```
Is there a way to get to the same result without repeating the `collect_list` in agg? If I have 60 instances, I don't want to copy-paste that 60 times.<issue_comment>username_1: I think that the `code` column will have to be split in order to achieve the result, but into columns per character, rather than into arrays. This will facilitate the further grouping of characters.
This split can be done with the following:
```
import org.apache.spark.sql.functions._
val originalDf: DataFrame = ...
// split function: returns a new dataframe with column "code{i}"
// containing the character at index "i" from "code" column
private def splitCodeColumn(df: DataFrame, i: Int): DataFrame = {
df.withColumn("code" + i, substring(originalDf("code"), i, 1))
}
// number of columns to split code in
val nbSplitColumns = "0000".length
val codeColumnSplitDf =
(1 to nbSplitColumns).foldLeft(originalDf){ case(df, i) => splitCodeColumn(df, i)}.drop("code")
// register it in order to use with Spark SQL
val splitTempViewName = "code_split"
codeColumnSplitDf.registerTempTable(splitTempViewName)
```
Now `codeColumnSplitDf` contains:
```
+---+-----+-----+-----+-----+
|key|code1|code2|code3|code4|
+---+-----+-----+-----+-----+
|123| 0| 0| 0| 0|
|123| X| 0| 0| 0|
|123| C| 1| 1| 1|
|124| 0| 0| 0| 0|
|124| 0| 0| 0| 0|
|124| C| 2| 0| 0|
+---+-----+-----+-----+-----+
```
We'll use `collect_list` function to aggregate the characters grouped by `key`:
```
// collect_list calls to insert into SQL
val aggregateSelections = (1 to nbSplitColumns).map(i => s"collect_list(code$i) as code_$i").mkString(", ")
val sqlCtx: SQLContext = ...
// DataFrames with expected results
val resultDf = sqlCtx.sql(s"SELECT key, $aggregateSelections FROM $splitTempViewName GROUP BY key")
```
`resultDf` contains:
```
+---+---------+---------+---------+---------+
|key| code_1| code_2| code_3| code_4|
+---+---------+---------+---------+---------+
|123|[0, X, C]|[0, 0, 1]|[0, 0, 1]|[0, 0, 1]|
|124|[0, 0, C]|[0, 0, 2]|[0, 0, 0]|[0, 0, 0]|
+---+---------+---------+---------+---------+
```
*Update*
To avoid repeating elements in `select` and `agg`:
```
val codeSplitColumns =
Seq(col("id")) ++ (0 until nbSplitColumns).map(i => col("code_split")(i).as("m" + i))
val aggregations =
(0 until nbSplitColumns).map(i => collect_list(col("m" + i)))
df.select(codeSplitColumns:_*)
.groupBy(col("id"))
.agg(aggregations.head, aggregations.tail:_*)
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: >
>
> >
> > Is there a way to get to the same result without repeating the collect\_list in agg?
> >
> >
> >
>
>
>
Yes, definitely there are multiple ways to use **`collect_list` only once**. I am going to show you one using a `udf` function.
**the udf function**
```
def combineUdf = udf((strs: Seq[String])=> {
val length = strs(0).length
val groupLength = strs.length
val result = for(i <- 0 until length; arr <- strs)yield arr(i).toString
result.grouped(groupLength).map(_.toArray).toArray
})
```
The `combineUdf` function *takes the collected list of strings* (as you are going to use only one `collect_list` function) and then *parses the array* to give you the required output, but in `Array` form as `Array[Array[String]]`
Just call the `udf` and `select` all the necessary columns as
```
someDF.groupBy("key").agg(combineUdf(collect_list("code")).as("value"))
.select(col("key") +: (0 to 3).map(x => col("value")(x).as("value_"+x)): _*)
.show(false)
```
where `(0 to 3)` can be varied according to your *final column necessity*
which should give you *your desired output* as
```
+---+---------+---------+---------+---------+
|key|value_0 |value_1 |value_2 |value_3 |
+---+---------+---------+---------+---------+
|123|[0, X, C]|[0, 0, 1]|[0, 0, 1]|[0, 0, 1]|
|124|[0, 0, C]|[0, 0, 2]|[0, 0, 0]|[0, 0, 0]|
+---+---------+---------+---------+---------+
```
I hope the answer is helpful
Upvotes: 0 |
2018/03/19 | 624 | 2,373 | <issue_start>username_0: I get data-results from a stored procedureas Laravel collection.
Data is like SQL table. Same keys (column names) inside the object of collection.
Is there a way to query (raw SQL query) this collection and get the final result data I want?
I looked into the [Laravel collection documentation](https://laravel.com/docs/5.6/collections) and I can run some methods to query the collection, but that is not raw SQL query(I need to run little complex raw SQL query on collection data.)
1. Ideally, I need to create a temporary SQL table from the collection data. The collection can be different based on the result and need to create temp table based on that.
2. I also looked into `Redis server` where I store my collection temporary and make a query, but don't find anything that I can store as SQL table and run SQL query on it.
3. The third option would be storing the result data on `Web SQL` and run query but also required to download excel that is being done from the back end.
Did anyone run into the similar issue? Maybe there is something else I can use for this issue.<issue_comment>username_1: You should be able to query with models like so:
```
//Query the db, and fetch collection with get()
$userCollection = User
::where('function_id', 3)
->get();
//Query the collection and fetch the first answer, resulting in a user model
$pietPaulusma = $userCollection
->where('firstname', 'Piet')
->where('lastname', 'Paulusma')
->where('email', '<EMAIL>')
->first();
```
Or without models like so:
```
DB::table('users')
->select(DB::raw('count(*) as user_count, status'))
->where('status', '<>', 1)
->groupBy('status')
->get();
```
If you study [this documentation](https://laravel.com/docs/5.6/queries), you should get far :)
Hope it helps!
Upvotes: 2 <issue_comment>username_2: You can write raw SQL queries with the `query builder` class. If you want to retrieve some data, you can use the `select` method like
```
\DB::select('SELECT * FROM myTable WHERE ...'); // your complex query
```
You can put the whole raw SQL query in this method. Also use prepared statements like
```
\DB::select('SELECT * FROM myTable WHERE id = ?', [2])
```
to make sure you prevent SQL injection.
Read more about raw queries in the [docs](https://laravel.com/docs/5.6/database#running-queries)
Upvotes: 0 |
2018/03/19 | 385 | 1,389 | <issue_start>username_0: I'm reinstalling my Homestead/Laravel using Vagrant on my new machine.. but I can't get past this error:
>
> Unable to mount one of your folders. Please check your folders in Homestead.yaml
>
>
>
This is my configuration:
* Vagrant Version: 2.0.3
* Homestead pulled from [here](https://github.com/laravel/homestead.git) using Git Desktop and placed on my user folder.
* Homestead.yaml is unaltered
* Windows 10 with all updates
* Virtualbox version is 5.2.8 r121009 (Qt5.6.2)
to reproduce the error I use `vagrant up` or `vagrant provision` (if already up)
Everything seems to work normally but the shared folder does not show up.<issue_comment>username_1: If you are on Windows, you have to make sure that the directory paths are not in this format `~/code/` but change them to `c:\path\to\your\code\` for all occurrences of such paths, for example:
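A sample `folders` entry in Homestead.yaml (paths below are placeholders; use your own):
```
folders:
    - map: C:\Users\you\projects\code
      to: /home/vagrant/code
```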
Upvotes: 0 <issue_comment>username_2: Solved the issue by creating the Code directory. It needs to exist before trying to connect it.
After using homestead destroy and homestead up it started working again.
Upvotes: 2 [selected_answer]<issue_comment>username_3: If you are on Ubuntu (18.04 is what I have),
make sure that the folders in your `folders: map:` entries exist. You can create them first (for example, the `/code` folder must be created first) and then re-run `vagrant reload --provision`.
That's how I managed to resolve that issue.
Upvotes: 0 |
2018/03/19 | 599 | 2,161 | <issue_start>username_0: **Function**
```
function autoHeadline(time) {
var elem = $('div#main-headline ul.main-headline-item li.x');
var id = elem.index();
if(id==-1) {
id = 0;
elem = $('ul.main-headline-item li.x').eq(id);
}
$('div#main-headline ul.main-headline-item li').removeClass('active').removeClass('x');
$('div#main-headline ul.main-headline-item li').eq(id).addClass('active');
$('div#main-headline ul.main-headline-item li').eq(id+1).addClass('x');
$('div#main-headline ul.main-headline-number li').removeClass('active');
$('div#main-headline ul.main-headline-number li').eq(id).addClass('active');
headlineTime = setTimeout('autoHeadline('+time+')', time);
}
```
This is JS for slider:
```
$('.image-news span.btn').click(function slider() {
var container = $(this).parent();
var type = $(this).data('type');
var index = container.find('ul li.active').index();
var count = container.find('ul li').size()-1 ;
var timer = setInterval(slider,1000);
if(type == 'prev') {
index -= 1;
} else {
index += 1;
}
if(index < 0) {
index = count;
} else if(index > count) {
index = 0;
}
container.find('ul li.active').removeClass('active')
container.find('ul li').eq(index).addClass('active');
}, function() { autoHeadline(headlineDelay); });
```
The auto slider doesn't work; it only starts working after I click an arrow once.
How can I fix it?
I want my slider changing by itself.<issue_comment>username_1: const time = 1000;
setInterval(function(){
```
Paste your code here, which will run every 1000 milliseconds
```
},time);
Upvotes: -1 <issue_comment>username_2: Currently, you only make the initial `autoHeadline` call when your button is clicked. You need to call it first when the page loads in order to kick off the chain.
Since you're using jQuery, you should have a `$(document).ready(...)` call. In that handler function, call `autoHeadline(headlineDelay)`.
Upvotes: 1 [selected_answer] |
2018/03/19 | 1,299 | 4,310 | <issue_start>username_0: Is pip capable of recursively searching through a file system path and installing a package and its dependencies? For example, given the following file structure (files not shown) `pip install packageA -f C:\packages\` does not work.
```
C:\packages\
C:\packages\packageA\
C:\packages\packageA\1.0.0
C:\packages\packageA\1.0.1
C:\packages\packageB\
C:\packages\packageB\2.2.1
C:\packages\packageB\2.2.4
```
Additionally, can these packages be pure source with a setup.py file? Or do they need to be a binary like wheel or zip files. And finally, is there a way to resolve the dependencies? For example, if packageA requires a version of packageB, can pip get that version of packageB from my folders? Do I need html files indicating where and what things are?
I am aware I can point pip directly to local paths (`pip install C:\packages\packageA\1.0.0`), but I want this to function as if the packages were available in PyPI. For example, if a user types `pip install packageB` or `pip install requirements.txt` and that requirements file contains packages that exist locally but not in PyPI, it would just work. (I could set the local package storage path in a configuration file so the pip command doesn't need the -f argument)
Basically, I want to replicate PyPI functionality using a file system without a webserver (security won't let us run one). Any insight would be greatly appreciated.<issue_comment>username_1: >
> Is pip capable of recursively searching…?
>
>
>
No. Either one flat directory or a pre-generated html file that lists files in subdirectories.
>
> Additionally, can these packages be pure source with a setup.py file?
>
>
>
Yes. `pip` can install source distributions, eggs and wheels.
>
> And finally, is there a way to resolve the dependencies?
>
>
>
Yes, but they must be accessible. That is, a PyPI-like server (not necessarily *the* PyPI), one flat directory, or an html file.
PS. To prevent `pip` from looking for packages on PyPI, use the option `--no-index`. For example:
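(Illustrative path; this only works when the distribution files sit directly in that folder, or when you point `--find-links` at a pre-generated html index.)
```
pip install --no-index --find-links=C:\packages packageA
```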
Upvotes: 0 <issue_comment>username_2: I figured this out. I used a package called [pip2pi](https://github.com/wolever/pip2pi) . This package has a command called dir2pi. I create a directory containing tar.gz files. Each tar.gz file was a Python package created with [this layout](http://the-hitchhikers-guide-to-packaging.readthedocs.io/en/latest/quickstart.html#lay-out-your-project). To identify versions, I added a hyphen and then used semantic versioning.
The directory looked like this:
```
C:\packages\package_a-1.0.8.tar.gz
C:\packages\package_a-1.1.0.tar.gz
C:\packages\package_b-2.0.0.tar.gz
C:\packages\package_c-1.0.5.tar.gz
```
Then I ran, `dir2pi C:\packages -S`. This created the necessary HTML files and folder layout. At the root of the folder, a folder called simple was created. The name *simple* appears to be a PEP 503 thing. Inside of simple, the folder structure looked like this:
```
C:\packages\simple\index.html
C:\packages\simple\package_a
C:\packages\simple\package_a\index.html
C:\packages\simple\package_a\package_a-1.0.8.tar.gz
C:\packages\simple\package_a\package_a-1.1.1.tar.gz
C:\packages\simple\package_b
C:\packages\simple\package_b\index.html
C:\packages\simple\package_b\package_b-2.0.0.tar.gz
C:\packages\simple\package_c
C:\packages\simple\package_c\index.html
C:\packages\simple\package_c\package_c-1.0.5.tar.gz
```
This is apparently what a compliant file/folder layout looks like for pip.
Each index.html file contains links to its neighboring files and folders. For example, `C:\packages\simple\index.html` looks like this:
```
Simple Index
[package\_a](package_a/)
[package\_b](package_b/)
[package\_c](package_c/)
```
`C:\packages\simple\package_a\index.html` looks like this:
```
<package_a-1.0.8.tar.gz>
<package_a-1.1.1.tar.gz>
```
Now with the proper HTML files and folder structure, I told pip to look for packages in the `C:\packages\simple` folder. I did this by editing the pip.ini file (the location varies from system to system) and adding this line:
```
[global]
extra-index-url = file:///c:/packages/simple
```
Alternatively, I could also pass this folder into pip as a parameter like so:
```
pip install --index-url=file:///c:/packages/simple package_a==1.1.1
```
Upvotes: 4 [selected_answer] |
2018/03/19 | 704 | 2,378 | <issue_start>username_0: I have some html that includes a *table* with some *tr* rows and each row has a *td* element containing an *img* element. Elsewhere on the page I have an ajax method that sends a url to the server, server then saves this locally and returns the location back.
I then try to update the src attribute of each img element with the new url, but nothing is updated. My debugging alert calls show that the image elements are found and the correct location is returned, so I am at a loss to know what I'm doing wrong.
**Html**
```
|
#
|
Replace
|
Filename
|
Artwork
|
| --- | --- | --- | --- |
|
0
| |
E:\Music\Tango in the Night\02 - Seven Wooooooonders.WAV
|
600 x 585
|
|
1
| |
E:\Music\Tango in the Night\03 - Everywhere.WAV
|
600 x 585
|
.........
```
**Javascript Ajax function**
```
function sendFormData(urls)
{
var xhr = new XMLHttpRequest();
xhr.open('POST', '/editsongs.update_artwork', false);
xhr.setRequestHeader("Content-type", "text/plain");
xhr.send(urls[0]);
var images = document.getElementById("ARTWORK_TABLE").getElementsByTagName("img");
alert("Found img"+images.length);
for(image in images)
{
alert(xhr.responseText);
image.src = xhr.responseText;
}
}
```<issue_comment>username_1: Try changing
```
image.src = xhr.responseText;
```
to
```
images[image].src = xhr.responseText;
```
See <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in> for more details
Upvotes: 4 [selected_answer]<issue_comment>username_2: Try to `console.log` the `images` collection and then each image in your [for...in](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in) loop.
You can also `console.log(typeof images)` to find out that it is actually an object and that explains the unexpected values you find in each `image` of your `for...in` loop.
Since this is the case, you could use a normal `for` loop:
```
for (var i = 0; i < images.length; i++) {
console.log(images[i]);
}
```
Upvotes: 1 <issue_comment>username_3: If you want to keep the `for ... in` syntax, you will need to refer to the property `image` (each `img` element) on your HTML collection of `img` elements:
```
for (var image in images) {
images[image].src = xhr.responseText;
}
```
Upvotes: 0 |
2018/03/19 | 678 | 2,759 | <issue_start>username_0: I have been noticing that some of the programs that I write in C have not been working on my machine, but they work on others. So to test this, I wrote a simple program to test how the stack is pushing and popping local variables:
```
#include
#include
int main() {
 char char_test[10];
 int int_test = 0;
 strcpy(char_test, "Hello");
}
```
I then debugged this program and found that `int_test` had a higher memory address than `char_test`, even though, according to my knowledge, the first declared local variable is supposed to have a higher memory address. I then added two print functions that printed the memory addresses of the variables:
```
#include
#include
int main() {
 char char_test[10];
 int int_test = 0;
 strcpy(char_test, "Hello");
 printf("Address of char_test: 0x%x\n", &char_test);
 printf("Address of int_test: 0x%x\n", &int_test);
}
```
Now, the first local variable has a higher memory address than the second. Does printing the addresses change the ordering of the variables? Am I doing something wrong?<issue_comment>username_1: The ordering of variables in memory is an implementation detail of the compiler.
It may put them in order, or in reverse order, or grouped by datatype, or some other way. This differs from one compiler to the next, and can also differ with different optimization levels. Also, as you've seen, which strategy is used can change with seemingly unrelated code changes.
The C standard imposes no requirements on how this is done, so you can't portably depend on their order.
Upvotes: 1 <issue_comment>username_2: Sequential requirement to which you are referring is for members of `struct`s. There is no requirement for a compiler to order local variables one way or the other. In fact, there is no a requirement for a compiler to use stack for its automatic variables, or even to allocate them at all, if it could get away with it.
In your example, it looks like the compiler did just that: since `int_test` is not used, the compiler pretended that it does not exist. Once you started using it by taking its address, the compiler was forced to allocate it. It happened to allocate the variable ahead of `char_test`, but it is not under an obligation to do so.
Upvotes: 2 <issue_comment>username_3: The ordering of variables in memory (whether they're local *or* global) is of no concern to any reasonable program.
The behavior of already-buggy programs can of course depend on relative proximity of variables (especially arrays that are being overflowed), but that's obviously because the programs are, well, already buggy.
And, no, there are no rules which govern, no guarantees about how, your compiler (and linker) will assign addresses.
Upvotes: 2 |
2018/03/19 | 652 | 1,793 | <issue_start>username_0: I have a data frame like this:
```
df = pd.DataFrame({'a1': [2,3,4,8,8], 'a2': [2,5,7,5,10], 'a3':[1,9,4,10,2]})
a1 a2 a3
0 2 2 1
1 3 5 9
2 4 7 4
3 8 5 10
4 8 10 2
```
The output should be:
```
0 2
1 3
2 4
3 8
4 8
```
What to do: I want to calculate mode row-wise, and if the mode is not present, I want the value from a1 (first column).
For example: In second row `(3,5,9)`, the mode is not present so I get `3` in output.
**Note: I've already tried `df.mode(axis=1)` but that seems to shuffle the sequence of values row wise, so I don't always get the value of first column in the output.**
------------------------------------------------------------------------------------------------------------------------------------------------------------------------<issue_comment>username_1: **No-Sort Methods**
`agg` + `collections.Counter`. *Does not sort the modes*.
```
from collections import Counter
df.agg(lambda x: Counter(x).most_common(1)[0][0], axis=1)
0 2
1 3
2 4
3 8
4 8
dtype: int64
```
---
**Mode Sorting Methods**
1. Use `mode` along the first axis and then take whatever comes first:
```
df.mode(axis=1).iloc[:, 0]
```
Or,
```
df.mode(axis=1)[0]
```
```
0 2.0
1 3.0
2 4.0
3 5.0
4 2.0
Name: 0, dtype: float64
```
2. `scipy.stats.mode`
```
from scipy.stats import mode
np.array(mode(df, axis=1))[0].squeeze()
array([2, 3, 4, 5, 2])
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: One more option is to use `np.where`:
```
mode = df.mode(axis=1)
np.where(mode.iloc[:,-1].isnull(),
mode.iloc[:,0], # No tie, use the calculated mode
df.iloc[:,0]) # Tie, use the first column of the original df
# array([2., 3., 4., 8., 8.])
```
Upvotes: 2 |
2018/03/19 | 741 | 2,641 | <issue_start>username_0: I have a CNN network with three layers and after fitting the model, weights are saved on disk. The second time I load the weights, but this time model is increased with a layer. So it is now 4 layer network. Is it possible to transfer model weights with the different architecture? If yes then how? I am using keras for development.
For me it shows error: 'You are trying to load a weight file containing 3 layers into a model with 4 layers'.
Thanks in advance!<issue_comment>username_1: I have not tried this, but it should be possible by using the layer.get\_weights() and layer.set\_weights(weights) methods.
```
weights = old_model_layer.get_weights()
new_model_layer.set_weights(weights)
```
See <https://keras.io/layers/about-keras-layers/> for more info.
Upvotes: 2 <issue_comment>username_2: I'm a bit late at answering, but hopefully this will help somebody.
This is how I did it:
1. Use a list to store all layers:
```
model_layers = []
model_layers.append(keras.layers.Conv2D(...))
...
model_layers.append(keras.layers.Dense(units=num_classes))
model_layers.append(keras.layers.Softmax())
```
2. Define the source model, and add layers from the list. Load weights from the saved file:
```
model = keras.Sequential()
for layer in model_layers:
model.add(layer)
model.compile(...)
model.load_weights(filename)
```
3. Copy the original list to a new temporary list. Clear the original list, and to it add new instances of layers required for the *target network*:
```
temp_layers = model_layers.copy()
model_layers.clear()
model_layers.append(keras.layers.Conv2D(...))
...
model_layers.append(keras.layers.Dense(units=num_classes))
model_layers.append(keras.layers.Softmax())
```
4. Assuming all layers in the source network form the initial part of the target network, copy all layer references form the temporary list to `model_layers`. EXCLUDE THE CLASSIFICATION LAYERS if required. You can also restore individual layers if you know their indices:
```
for i in range(len(temp_layers) - 2):
model_layers[i] = temp_layers[i]
```
5. Create the new model by following step 2:
```
new_model = keras.Sequential()
for layer in model_layers:
new_model.add(layer)
```
6. Add any additional layers that may be required (along with the classification layer, if required):
```
new_model.add(...)
new_model.add(keras.layers.Dense(units=num_classes))
new_model.add(keras.layers.Softmax())
new_model.compile(...)
```
I realise this is not a general answer. I have mentioned the exact steps I used in my successful implementation.
Feel free to try out different variations.
Upvotes: -1 |
2018/03/19 | 1,260 | 4,471 | <issue_start>username_0: ```
def Factorial(n):
n=int(input("Input a number?"))
if n==0:
return 1
else:
return n * Factorial(n-1)
def Fibonacci(num):
i=0
present=1
previous=0
while i<=num:
nextterm=present+previous
present=previous
previous=nextterm
i=i+1
print("The fibonacci number for", i, "is", nextterm)
def CallFibOrFac(s):
s=input('Fib or Fac?')
if s == 'Fib':
num=input('Up to what number?')
print(Fibonacci(num))
elif s == 'Fac':
n=input('What number?')
print(Factorial(n))
CallFibOrFac()
```
I've spent a lot of time trying to solve this assignment. I don't understand how I'm supposed to have my defined function CallFibOrFac call the other two functions for solving the Fibonacci or Factorial numbers. Any help/explanation is appreciated. Thanks!
This is the exact prompt that I was left with in class:
>
> Write a Python program that has 3 Functions:
>
>
> The first function will be called CallFibOrFac
>
>
> It will receive a single string as a parameter.
>
>
> If the string is "Fib" it will call the second function called
> Fibonacci
>
>
> If the string is "Fac" it will call the third function called
> Factorial.
>
>
> The Second function prints the first 10 numbers in the Fibonacci
> sequence.
>
>
> The Third function prints the value of 10 factorial.
>
>
> We want to make the second and third functions work for cases other
> than just the first 10 numbers so add a parameter to the Fibonacci
> function and the Factorial function which will tell it how far to go
> along the Fibonacci sequence or the Factorial.
>
>
><issue_comment>username_1: Try this:
```
def Factorial(n):
n=int(input("Input a number?"))
if n==0:
return 1
else:
return n * Factorial(n-1)
def Fibonacci(num):
i=0
present=1
previous=0
while i<=num:
nextterm=present+previous
present=previous
previous=nextterm
i=i+1
print("The fibonacci number for", i, "is", nextterm)
def CallFibOrFac():
s=input('Fib or Fac?')
if s == 'Fib':
num=input('Up to what number?')
print(Fibonacci(int(num)))
elif s == 'Fac':
n=input('What number?')
print(Factorial(int(n)))
CallFibOrFac()
```
Upvotes: -1 <issue_comment>username_2: You define `CallFibOrFac` to take an input, but when you run it at the bottom, you don't give it one. Either define the function as `def CallFibOrFac()`, or give it a string input when you run it. The way you have written the script, by asking for the string sequence with `input`, means that you don't need s in the definition of the function. When I get rid of the s in the definition, the script runs just fine for me, but remember that you need to reply with 'Fib' or 'Fac' with marks to declare that they are strings
Also, in the future it would be helpful if you posted the error message you receive
Upvotes: -1 <issue_comment>username_3: There are a few errors on your code:
* Your `CallFibOrFac` function receives a string but overwrites it. The task says it will just pass the string, use it as given;
* You are asking for a new input on every iteration of the factorial method, it should be given before the function starts;
* You forgot to convert the string to int before passing it to the Fibonacci Function.
That said, all the corrected functions should be:
Factorial Function:
```
def Factorial(n):
# Removed the input read
if n==0:
return 1
else:
return n * Factorial(n-1)
```
Fibonacci Function:
```
# It is working as expected
def Fibonacci(num):
i=0
present=1
previous=0
while i<=num:
nextterm=present+previous
present=previous
previous=nextterm
i=i+1
print("The fibonacci number for", i, "is", nextterm)
```
CallFibOrFac
```
def CallFibOrFac(s):
# Moved the number detection out of the ifs, since both will use it
# Note that the task says it will _receive_ the string as parameter, so you don't need to ask here.
num=int(input('Up to what number?'))
if s == 'Fib':
print(Fibonacci(num))
elif s == 'Fac':
print(Factorial(num))
```
*Note: There are stuff left to fix and / or improve, I just worked on which might be a problem for you right now (As noted by @abarnert)*
Upvotes: 3 [selected_answer] |
2018/03/19 | 2,539 | 8,946 | <issue_start>username_0: I have a container `vector` that has `std::unique_ptr` of some type. I want to return that container, but also want to enforce that I don't want the container, pointer or the object pointed at to be modifiable. I also don't want to make some paralel copy of this object. My alias type would be something like:
```
using container_t = vector<std::unique_ptr<T>>
```
So I'm thinking that I could make another alias like this:
```
using const_container_t = const vector<std::unique_ptr<const T>>
```
and do a `reinterpret_cast` for my getter:
```
const_container_t& encompassing_type::get_container() const
{
 return reinterpret_cast<const_container_t&>(m_container);
}
```
I'm thinking that this should work, but I'm wondering if there are any gotchas that I'm not seeing, or if there is some other better way of doing this.
I would also imagine that this might result in duplicate binary code in the final build, but as these are most likely inlined anyway, that shouldn't be an issue.<issue_comment>username_1: The "problem" is that `std::unique_ptr::operator*` is defined to return a non-const reference:
```
std::add_lvalue_reference<T>::type operator*() const
```
Since it is an internal class, you could use plain pointers and manage the life-time explicitly, allowing you to do something like
```
span encompassing_type::get_container() const
{
 return span( m_container );
}
```
Justin proposes to use something like `span` to implement a view of constant pointers to your vector. You could e.g. do this with Boost.Range and return a range of const pointers:
```
#include
#include
using namespace boost::adaptors;
class X {
public:
void nonConst() {}
void constF() const {}
};
class A{
 std::vector<std::unique_ptr<X>> v;
public:
A() : v(10) {}
 auto get_container() {
 return v | transformed( [](std::unique_ptr<X> const& x) -> X const* {return x.get();});
}
};
int main() {
A a;
 auto const& v = a.get_container();
 a.get_container()[0]->constF();
 a.get_container()[0]->nonConst();
return 0;
}
```
[This should be fairly efficient with an optimizing compiler.](https://godbolt.org/g/sq6tSt)
You could also switch from `std::vector<std::unique_ptr<X>>` to `boost::ptr_vector<X>`. It also assumes ownership of the elements stored by pointer, but it returns a `const_reference` in `operator[] const` such that objects cannot be modified.
```
#include
class X {
public:
void nonConst() {}
};
class A{
 boost::ptr_vector<X> v;
 public:
 boost::ptr_vector<X> const& get_container() const {
return v;
}
};
int main() {
A a;
 auto const& v = a.get_container();
 a.get_container()[0].nonConst();
return 0;
}
```
This would protect the elements from being modified when `get_container()` returns a const reference:
>
> prog.cc:26:1: error: 'this' argument to member function 'nonConst' has
> type 'const
> boost::ptr\_container\_detail::reversible\_ptr\_container > >,
> boost::heap\_clone\_allocator>::Ty\_' (aka 'const X'), but function is
> not marked const a.get\_container()[0].nonConst(); ^~~~~~~~~~~~~~~~~~~~
> prog.cc:9:9: note: 'nonConst' declared here void nonConst() {}
> ^ 1 error generated.
>
>
>
Upvotes: 1 <issue_comment>username_2: I didn't want to include boost and `span` wouldn't work, because as @username_1 pointed out, a `unique_ptr` doesn't propagate the cv qualifiers. Also, even if I did include boost, I wouldn't be able to get an actual object reference for each item in the vector, which I would need to allow me to compare relative locations of the object with others in the container.
So I opted instead for writing a wrapper over `std::unique_ptr` which will propagate the cv qualifiers.
The following is an excerpt from my `enable_if.h` file, which I use for the comparison operators to limit how many times I have to write them:
```
namespace detail
{
// Reason to use an enum class rather than just an int is so as to ensure
// there will not be any clashes resulting in an ambiguous overload.
enum class enabler
{
enabled
};
}
#define ENABLE_IF(...) std::enable_if_t<__VA_ARGS__, detail::enabler> = detail::enabler::enabled
#define ENABLE_IF_DEFINITION(...) std::enable_if_t<__VA_ARGS__, detail::enabler>
```
Here is my implementation of c++20's `std::remove_cvref_t`:
```
template <typename T>
using remove_cvref_t = std::remove_cv_t<std::remove_reference_t<T>>;
```
And here is the wrapped unique ptr:
```
template >
class unique\_ptr\_propagate\_cv;
namespace detail
{
template
std::unique\_ptr const& get\_underlying\_unique\_ptr(unique\_ptr\_propagate\_cv const& object)
{
return object.ptr;
}
}
template
class unique\_ptr\_propagate\_cv
{
template
friend std::unique\_ptr const& detail::get\_underlying\_unique\_ptr(unique\_ptr\_propagate\_cv const&);
using base = std::unique\_ptr;
base ptr;
public:
template
unique\_ptr\_propagate\_cv(Ts&&...args) noexcept : ptr(std::forward(args)...) {}
using element\_type = typename base::element\_type;
using deleter\_type = typename base::deleter\_type;
using pointer = element\_type \*;
using pointer\_const = element\_type const \*;
using pointer\_volatile = element\_type volatile \*;
using pointer\_const\_volatile = element\_type const volatile \*;
using reference = element\_type &;
using reference\_const = element\_type const &;
using reference\_volatile = element\_type volatile &;
using reference\_const\_volatile = element\_type const volatile &;
pointer get() noexcept { return ptr.get(); }
pointer\_const get() const noexcept { return ptr.get(); }
pointer\_volatile get() volatile noexcept { return ptr.get(); }
pointer\_const\_volatile get() const volatile noexcept { return ptr.get(); }
pointer operator->() noexcept { return ptr.get(); }
pointer\_const operator->() const noexcept { return ptr.get(); }
pointer\_volatile operator->() volatile noexcept { return ptr.get(); }
pointer\_const\_volatile operator->() const volatile noexcept { return ptr.get(); }
reference operator[](size\_t index) noexcept { return ptr.operator[](index); }
reference\_const operator[](size\_t index) const noexcept { return ptr.operator[](index); }
reference\_volatile operator[](size\_t index) volatile noexcept { return ptr.operator[](index); }
reference\_const\_volatile operator[](size\_t index) const volatile noexcept { return ptr.operator[](index); }
reference operator\*() noexcept { return ptr.operator\*(); }
reference\_const operator\*() const noexcept { return ptr.operator\*(); }
reference\_volatile operator\*() volatile noexcept { return ptr.operator\*(); }
reference\_const\_volatile operator\*() const volatile noexcept { return ptr.operator\*(); }
template
unique\_ptr\_propagate\_cv& operator=(T\_&& rhs)
{
return static\_cast(ptr.operator=(std::forward(rhs)));
}
decltype(auto) get\_deleter() const noexcept { return ptr.get\_deleter(); }
operator bool() const noexcept { return ptr.operator bool(); }
decltype(auto) reset(pointer ptr = pointer()) noexcept { get\_base\_nonconst().reset(ptr); }
decltype(auto) release() noexcept { return get\_base\_nonconst().release(); }
};
template
struct is\_unique\_ptr\_propagate\_cv : std::false\_type {};
template
struct is\_unique\_ptr\_propagate\_cv> : std::true\_type {};
namespace detail
{
inline nullptr\_t const& get\_underlying\_unique\_ptr(nullptr\_t const& object)
{
return object;
}
template
std::unique\_ptr const& get\_underlying\_unique\_ptr(std::unique\_ptr const& object)
{
return object;
}
}
template >::value
|| is\_unique\_ptr\_propagate\_cv>::value
)
>
bool operator==(L&& lhs, R&& rhs) noexcept
{
return detail::get\_underlying\_unique\_ptr(std::forward(lhs))
== detail::get\_underlying\_unique\_ptr(std::forward(rhs));
}
template >::value
|| is\_unique\_ptr\_propagate\_cv>::value
)
>
auto operator!=(L&& lhs, R&& rhs) noexcept
{
return detail::get\_underlying\_unique\_ptr(std::forward(lhs))
!= detail::get\_underlying\_unique\_ptr(std::forward(rhs));
}
template >::value
|| is\_unique\_ptr\_propagate\_cv>::value
)
>
bool operator<=(L&& lhs, R&& rhs) noexcept
{
return detail::get\_underlying\_unique\_ptr(std::forward(lhs))
<= detail::get\_underlying\_unique\_ptr(std::forward(rhs));
}
template >::value
|| is\_unique\_ptr\_propagate\_cv>::value
)
>
bool operator>=(L&& lhs, R&& rhs) noexcept
{
return detail::get\_underlying\_unique\_ptr(std::forward(lhs))
>= detail::get\_underlying\_unique\_ptr(std::forward(rhs));
}
template >::value
|| is\_unique\_ptr\_propagate\_cv>::value
)
>
bool operator<(L&& lhs, R&& rhs) noexcept
{
return detail::get\_underlying\_unique\_ptr(std::forward(lhs))
< detail::get\_underlying\_unique\_ptr(std::forward(rhs));
}
template >::value
|| is\_unique\_ptr\_propagate\_cv>::value
)
>
bool operator >(L&& lhs, R&& rhs) noexcept
{
return detail::get\_underlying\_unique\_ptr(std::forward(lhs))
> detail::get\_underlying\_unique\_ptr(std::forward(rhs));
}
```
Thanks for your help and reminding me that it was just a propagation issue.
Upvotes: 1 [selected_answer] |
2018/03/19 | 454 | 1,151 | <issue_start>username_0: I want to print the binary numbers 0x00 to 0xFF with their full 8-bit format.
The `bin()` Python function won't return the full 8 bits, so I need to use the `format` function to format the binary string.
This code below is giving me a `ValueError`, but I'm passing a binary string to the format function and I'm using `08b` so why is it giving an error?
```
a = bin(10);
a_bin = "{0:08b}".format(a);
```<issue_comment>username_1: Use the number, not the string:
```
>>> "{0:08b}".format(10)
'00001010'
```
To print all:
```
>>> for i in range(0, 256):
... print("{0:08b}".format(i))
...
00000000
00000001
00000010
00000011
00000100
...
11111011
11111100
11111101
11111110
11111111
```
Upvotes: 1 <issue_comment>username_2: `bin` will return a string and you cannot format a string as if it were a number. That's why the exception message said that it's an unknown format for **strings**:
```
>>> "{0:08b}".format(bin(10))
ValueError: Unknown format code 'b' for object of type 'str'
```
You have to use the integer as input for format:
```
>>> "{0:08b}".format(10)
'00001010'
```
Upvotes: 3 [selected_answer] |
2018/03/19 | 647 | 2,088 | <issue_start>username_0: **PLEASE HELP.**
I got these lines of code in my app; the value of the input here comes from the Firebase database. That works, as the value is shown when I run the app. My problem is when I console.log(attendance.studentName): it doesn't get the value. It is null or {}.
>
> HTML
>
>
>
```
Present
Absent
Late
```
>
> TS
>
>
>
```
attendance = {} as Attendance
export class CheckAttendancePage {
studentList$: Observable
attendance = {} as Attendance
constructor(private afDatabase: AngularFireDatabase, private studentsList: StudentListService, private attendanceList: AttendanceListService, public navCtrl: NavController, public navParams: NavParams) {
this.studentList$ = this.studentsList
.getStudentList()
.snapshotChanges()
.map(
changes => {
return changes.map(c => ({
key: c.payload.key, ...c.payload.val()
}))
}
)
}
addAttendance(attendance: Attendance){
console.log(attendance)
}
```
>
> MODEL
>
>
>
```
export interface Attendance {
key?: string;
studentName: string;
status: string;
}
```
>
> SERVICE
>
>
>
```
@Injectable()
export class AttendanceListService {
private attendanceListRef = this.db.list('attendance')
private studentListRef = this.db.list('students-list')
constructor(private db: AngularFireDatabase){}
getStudentsList() {
return this.studentListRef;
}
```
**PLEASE HELP**<issue_comment>username_1: Why are you using different names for the ngModel and the value property of the input?
The syntax you are using won't work; try this syntax instead:
with ngModel:
```
```
without ngModel directive:
```
```
Upvotes: 1 <issue_comment>username_2: I was able to get the value of the input..
>
> html
>
>
>
```
{{student.name}}
Present
Absent
Late
```
>
> ts
>
>
>
attendance: any = {
'studentName': '',
'status': ''
}
>
> then on constructor
>
>
>
```
this.attendance.studentName = "asdasd"
```
My problem here is that the value of the this.attendance.studentName should come from firebase. And I don't know how to do that.
Upvotes: 0 |
2018/03/19 | 1,404 | 4,256 | <issue_start>username_0: We are to display the number of times the numbers from 1 through 9 appear in a two-dimensional array. We can't use map, vector, or anything advanced.
**Here is what I have so far:**
```
#include
using namespace std;
int main()
{
//declare numbers array
int numbers[5][3] = {{1, 2, 7},
{2, 5, 3},
{1, 9, 4},
{2, 6, 5},
{7, 2, 2}};
//declare counts array
int counts[9];
//declare variable
int digit = 0; //keeps track of numbers from 1 through 9
while (digit <= 9)
{
for (int row = 0; row < 5; row++)
for (int col = 0; col < 3; col++)
//count the number of times the digit appears in the numbers
array
if (numbers[row][col] == digit)
counts[numbers[row][col]]++;
//end if
//end for
//end for
digit++; //look for next digit
} //end while
//display counts
for (int x = 1; x < 10; x++)
cout << "The number " << x << " appears "
<< counts[x] << " time(s)." << endl;
//end for
cin.get();
return 0;
} //end of main function
```
I can get it to show "The numbers 1 appears (a large number) time(s)." It will show that sentence 9 times with the 1 incrementing at each statement. The problem is the large number. How do I get it to show the correct number of times?<issue_comment>username_1: ```
int counts[9];
```
This will create the array, but it won't initialize the values, so you're getting undefined behavior! (The compiler is probably just giving you whatever bits were already there in memory, which likely evaluates to some very big numbers).
Instead, you want to initialize your counts to `0`:
```
int counts[9] = {0};
```
Upvotes: 2 <issue_comment>username_2: Even if you fix your first problem and initialize `int counts[9] = {0};`, you will still experience *Undefined Behavior* and end up attempting to access values beyond the end of your array bounds because declaring:
```
int counts[9] = {0};
```
Makes the valid indexes for counts `0 - 8` *not* `1 - 9`. So when you iterate `counts[x]` in:
```
for (int x = 1; x < 10; x++)
cout << "The number " << x << " appears "
<< counts[x] << " time(s)." << endl;
```
You invoke *Undefined Behavior* attempting to read `counts[9]`. If you are going to use `x = 1; x < 10`, you need to access `counts[x-1]`.
A better approach is to loop `x = 0; x < 9` and just add `1` to:
```
cout << "The number " << x + 1 << " appears "
```
The same indexing care must be taken when incrementing the values in your `counts` array to store the frequency of the numbers `1 - 9` (note: these are 1-above your actual array indexes `0 - 8`, which appears to largely be the intent of this exercise), e.g. you must subtract `1` when accounting for the frequency of occurrence of numbers `1 - 9` so that the proper corresponding index `0 - 8` hold the associated value, e.g.
```
counts[numbers[i][j] - 1]++; /* indexes are 0-8 */
```
Always looping over your array bounds (not outside them) will prevent issues such as this.
A short rewrite of the logic, and you could do something like:
```
#include <iostream>
#define NCOLS 3
#define NRANGE 9
using namespace std;
int main (void) {
int numbers[][NCOLS] = {{1, 2, 7},
{2, 5, 3},
{1, 9, 4},
{2, 6, 5},
{7, 2, 2}},
nrows = sizeof numbers / sizeof *numbers, /* rows */
counts[NRANGE] = {0}; /* frequency */
for (int i = 0; i < nrows; i++) /* fill counts */
for (int j = 0; j < NCOLS; j++)
if (1 <= numbers[i][j] && numbers[i][j] <= NRANGE)
counts[numbers[i][j] - 1]++; /* indexes are 0-8 */
for (int x = 0; x < NRANGE; x++) /* display counts */
cout << "The number " << x + 1 << " appears "
<< counts[x] << " time(s)." << endl;
#if defined (_WIN32) || defined (_WIN64)
cin.get();
#endif
}
```
(if you are using `cin.get();` to prevent the terminal from closing on windows, you should wrap that in preprocessor conditionals as that is unneeded on other OSs)
**Example Use/Output**
```
$ ./bin/frequency_array
The number 1 appears 2 time(s).
The number 2 appears 5 time(s).
The number 3 appears 1 time(s).
The number 4 appears 1 time(s).
The number 5 appears 2 time(s).
The number 6 appears 1 time(s).
The number 7 appears 2 time(s).
The number 8 appears 0 time(s).
The number 9 appears 1 time(s).
```
Upvotes: 1 [selected_answer] |
2018/03/19 | 838 | 2,917 | <issue_start>username_0: How to trigger multiple download's on a page using js/jquery?
The individual downloads are done using the `download` attribute set from JavaScript. The page has multiple such download links, and I want to trigger all of them with a global "Download All" button.
```
[Download 1](#)
[Download 2](#)
[Download 3](#)
Download All
$("a.btn-download").click(function(){
this.href = "data:text/plain;charset=UTF-8, Some Text" ;
});
function downloadAll(){
$( "a.btn-download" ).trigger( "click" );
}
```
Fiddle <https://jsfiddle.net/u0jhaaL3/6/><issue_comment>username_1: You're correctly triggering the click of each button. The problem is that you're triggering the click **before** setting the `href`, so nothing is actually **happening** when you click on each of the downloads.
To resolve this, you'll need to run `this.click()` **again** in your click handler for each of the download links. Unfortunately this would mean that if you were to click on a **singular** download would download that file twice, but this can be resolved by passing a second argument to **[`.trigger()`](https://api.jquery.com/trigger/)** that indicates that you're attempting to download multiple files. This can the be checked in your function called on click of the individual links, but you'll need to make use of **[`.on()`](https://api.jquery.com/on/)** rather than **[`.click()`](https://api.jquery.com/click/)** so you can check against the second function parameter.
This can be seen in the following working example:
```js
$("a.btn-download").on("click", function(e, multiple) {
this.href = "data:text/plain;charset=UTF-8, Some Text";
if (multiple == 'multiple') {
this.click();
}
});
function downloadAll() {
$("a.btn-download").trigger("click", ['multiple']);
};
```
```html
[Download 1](#)
[Download 2](#)
[Download 3](#)
Download All
```
Note that you'll be **warned** that you're attempting to download multiple files, so if you want to avoid this, you'd be better off zipping the files.
Upvotes: 1 <issue_comment>username_2: Dynamically generate a zip download
-----------------------------------
* <https://github.com/eligrey/FileSaver.js/>
* <https://github.com/Stuk/jszip>
Working example based on your code:
```js
$('a.btn-download').click(function() {
this.href = 'data:text/plain;charset=UTF-8, Some Text';
});
function downloadAll() {
var zip = new JSZip();
$('a.btn-download').each(function() {
zip.file(this.download, this.text);
});
zip.generateAsync({
type: 'blob'
}).then(
function(blob) {
saveAs(blob, 'all.zip');
},
function(error) {
console.log(error);
}
);
}
```
```html
[Download 1](#)
[Download 2](#)
[Download 3](#)
Download All
```
[](https://i.stack.imgur.com/RlosP.png)
Upvotes: 0 |
2018/03/19 | 973 | 3,868 | <issue_start>username_0: I have an Elastic BeanStalk environment where I run my application on 1 EC2 instance. I've added load balancer, when I configured the environment initially, but since then I set it only use 1 instance.
The application running within the container apparently produces quite a lot of logs; after several days they use up the whole disk space and then the application crashes. The health check drops to severe.
I see that terminating instance manually helps - environment removes old instance and creates a new one that works (until it fills up the whole disk again).
What are my options? A script that regularly cleans up logs? Some log rotation? Trigger that reboots instance when disk is nearly full?
I do not write anything to file myself - my application only log to std out and std err, so writing to file is done by EC2/EBS wrapper. (I deploy the application as a ZIP containing a JAR, a bash script and `Procfile` if that is relevant).<issue_comment>username_1: Logrotation is the way forward. You can create a configuration file in `/etc/logrotate.d/' where you state your options in order to avoid having large log files.
You can read more about the configurations here <https://linuxconfig.org/setting-up-logrotate-on-redhat-linux>
A sample configuration file would look something like this:
```
/var/log/your-large-log.log {
missingok
notifempty
compress
size 20k
daily
create 0600 root root
}
```
You can also test the new configuration file from the cli by running the follow:
```
logrotate -d [your_config_file]
```
This will test if the log rotation will be successful or not but only in debugging mode, therefore the log file will not be actually rotated.
Upvotes: 3 [selected_answer]<issue_comment>username_2: By default EB will rotate *some* of the logs produced by the Docker containers, but not all of them. After contacting support on this issue I received the following helpful config file, to be placed in the source path `.ebextensions/liblogrotate.config`:
```
files:
"/etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.containers.conf":
mode: "00644"
owner: "root"
group: "root"
content: |
/var/lib/docker/containers/*/*.log {
size 10M
rotate 5
missingok
compress
notifempty
copytruncate
dateext
dateformat %s
olddir /var/lib/docker/containers/rotated
}
"/etc/cron.hourly/cron.logrotate.elasticbeanstalk.containers.conf":
mode: "00755"
owner: "root"
group: "root"
content: |
#!/bin/sh
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate /etc/logrotate.elasticbeanstalk.hourly/logrotate.elasticbeanstalk.containers.conf
container_commands:
create_rotated_dir:
command: mkdir -p /var/lib/docker/containers/rotated
test: test ! -d /var/lib/docker/containers/rotated
99_cleanup:
command: rm /etc/cron.hourly/*.bak /etc/logrotate.elasticbeanstalk.hourly/*.bak
ignoreErrors: true
```
What this does is install an additional log rotation configuration and cron task for the `/var/lib/docker/containers/*/*.log` files which are the ones not automatically rotated on EB.
Eventually, however, the rotated logs themselves will fill up the disk if the host lives long enough. For this, you can add `shred` to the list of logrotate options (alongside `compress`, `notifempty`, etc.); see the sketch below.
(However, I'm not sure if the container logs that are already configured for rotation are set to be shredded, probably not - so those may accumulate too and require modification of the default EB log rotation config. Not sure how to do that yet. But the above solution in most cases would be sufficient since hosts typically do not live *that* long. The volume of logging and lifetime of your containers may force you to go even further.)
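For illustration, the container-log stanza from the config above could be extended with `shred` like this (sketch only; same paths and sizes as assumed above):
```
/var/lib/docker/containers/*/*.log {
    size 10M
    rotate 5
    missingok
    compress
    notifempty
    copytruncate
    shred
    dateext
    dateformat %s
    olddir /var/lib/docker/containers/rotated
}
```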
Upvotes: 4 |
2018/03/19 | 497 | 1,433 | <issue_start>username_0: I have a tblAbsence
```
Name Start End
Joe 18-Mar-2018 0800 26-Mar-2018 1830
Mary 19-Mar-2018 0010 19-Mar-2018 2349
Adam 21-Mar-2018 0700 21-Mar-2018 1300
```
Is there a SQL query that would create a new table of dates with the count of people absent that day?
```
Date Absent
18-Mar-18 1
19-Mar-18 2
20-Mar-18 3
```
I know how to create a recordset from a calendar table and basically rs1.movenext through each date while I Update tblTemp with a count that meets the
```
WHERE tblAbsence.start <= rs1.date AND tblAbsence.End >= rs1.date;
```
but that seems ridiculously repetitive when there is probably some type of JOIN that might work?<issue_comment>username_1: If you have a list of dates, the problem is pretty simple. You can use a subquery to count from your absence list, where the date from your list is between the start and end of absence.:
```
SELECT TheDate, (SELECT Count(Name) FROM Absent WHERE Absent.End > DateList.TheDate AND Absent.Start < DateList.TheDate + #23:59:59#) As Absent
From DateList
```
Where DateList is a list of all dates you're interested in (either from a table, or from Gustav's query)
Upvotes: 2 [selected_answer]<issue_comment>username_2: you may try something like below:
```
select rs1.date,count(ta.Name) from Calendar rs1
join tblAbsence ta on (rs1.date between ta.Start and ta.End)
group by rs1.date
order by rs1.date
```
Upvotes: 0 |
2018/03/19 | 588 | 2,029 | <issue_start>username_0: I have a data frame with a single column.
There are 620 rows. The first 31 rows we label class "A", the next 31 rows we label "class B", and so on. There are therefore 20 classes.
What I want to do is quite simple to explain but I need help coding it.
In the first iteration, I want to delete all rows that correspond to the last row for each class. That is, delete the last "A class" row, then delete the last "B class row", and so on. This iteration, and all others, have to be performed, since I intend to do something else with the newly created dataset.
In the second iteration, I want to delete all rows that correspond to the last TWO rows for each class. So, delete the last two rows for "A class", last two rows for "B class" and so on.
In the third iteration, delete the last three rows for each class. And so on.
In the final iteration, we delete the last 30 rows for each class. Meaning basically we only keep 1 row for each observation, the first one.
What's a quick way to put this into R code? I know I need to use a for loop and carefully pick some index to remove, but how?
EXAMPLE
```
column
A1
A2
A3
B1
B2
B3
```
If above is our original data frame, then in the first iteration, we should be left with
```
column
A1
A2
B1
B2
```
and so on.
2018/03/19 | 528 | 1,822 | <issue_start>username_0: I have two dockers sitting on two different machines, both running the vespa. When I submit an application which have two nodes - vespa1 and vespa2 (resolved in /etc/hosts). I get the following error.
```
Uploading application '/vespa-eval/src/main/application/' using http://localhost:19071/application/v2/tenant/default/session?name=application
Session 6 for tenant 'default' created.
Preparing session 6 using
http://localhost:19071/application/v2/tenant/default/session/6/prepared
Request failed. HTTP status code: 400
Invalid application package: default.default: Error loading model:
Could not find host in the application's host system: 'vespa-container'. Hostsystem=host 'vespa1',host 'vespa2'
```
I do not have a problem when using only localhost.
hosts.xml
```
xml version="1.0" encoding="utf-8" ?
node0
node1
```
services.xml
```
xml version="1.0" encoding="utf-8" ?
1
```<issue_comment>username_1: Looks like a host named vespa-container is already deployed but not in the new application package. To debug, try
```
vespa-model-inspect hosts
```
on the config server and see if it lists the host. Maybe a good idea to try from scratch, I don't see anything wrong with the enclosed files. To clean the config server, search for
```
vespa-configserver-remove-state
```
in the documentation
Upvotes: 3 [selected_answer]<issue_comment>username_2: I came across the same issue, and fixed the error by replacing 'vespa-container' (below command) to the hostname of physical box. However, this caused a couple of other errors in rpc connection. Did you fix the problem yet? @aman.gupta
```
docker run --detach --name vespa --hostname vespa-container --privileged \
--volume $VESPA_SAMPLE_APPS:/vespa-sample-apps --publish 8080:8080 vespaengine/vespa
```
Upvotes: 1 |
2018/03/19 | 1,055 | 3,442 | <issue_start>username_0: ```
var each = function(collection, callback){
if(Array.isArray(collection)){
for(var i=0;i
```
How to write a callback function that would output each boolean value of the callback?
// example, even of an array, [1,2,3,4,5] -> false, true, false, true, false<issue_comment>username_1: * Within the function `filter` is missing the `newArray` with the filtered items.
* **This is an alternative:**
```
var result = filter([1,2,3,4,5], function(n) {
return n % 2 === 0;
});
```
```js
var each = function(collection, callback) {
if (Array.isArray(collection)) {
for (var i = 0; i < collection.length; i++) {
callback(collection[i]);
}
} else {
for (var key in collection) {
callback(collection[key]);
}
}
};
var filter = function(collection, callback) {
var newArray = [];
each(collection, function(item) {
if (callback(item)) newArray.push(item);
});
return newArray;
};
var result = filter([1, 2, 3, 4, 5], function(n) {
return n % 2 === 0;
});
console.log(result);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 1 <issue_comment>username_2: If you want to save whether the callback returned `true` or `false` for each element, you need to store the single return values in a new array (as username_1 already suggested);
```js
var each = function(collection, callback) {
if (Array.isArray(collection)) {
for (var i = 0; i < collection.length; i++) {
callback(collection[i]);
}
} else {
for (var key in collection) {
callback(collection[key]);
}
}
};
var filter = function(collection, callback) {
var newArray = [];
each(collection, function(item) {
newArray.push(callback(item));
});
return newArray;
};
var source = [1,2,3,4,5];
var result = filter(source, function(n) {
return n % 2 === 0;
});
console.log("source array", source);
console.log("result array", result);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 0 <issue_comment>username_3: ```js
const source = [1,2,3,4,5];
const isEven = n => n%2 === 0;
const result = source.map(el=>isEven(el));
console.log("source array", source);
console.log("result array", result);
```
Upvotes: 0 <issue_comment>username_4: I am not sure where you want to use this: in a browser? Old Scripting environment? Node? but it seems to me you are trying to re-invent the wheel here.
The Javascript builtin array functions include the Find, Foreach and Map. I am not sure how you want the output either, so I'll just log it.
In this case you can use Foreach In long hand..
```js
var nums=[1,2,3,4,5];
nums.forEach(function(element) {
console.log(element%2==0)
});
```
which will output
```
false
true
false
true
false
```
If you are familiar with arrow functions this is even simpler
```js
var nums=[1,2,3,4,5];
nums.forEach(element => {console.log(element%2==0)});
```
If you want to do this asynchronously, then you can wrap in a promise in most modern environments
```js
var nums=[1,2,3,4,5];
var x= new Promise( (resolve) => {
var result=[];
nums.forEach(function(element) {
result.push(element%2==0);
});
resolve(result);
})
.then(evens=>console.log(evens));
```
Upvotes: 0 |
2018/03/19 | 1,140 | 3,747 | <issue_start>username_0: I want to read the data from the table in this PDF.
[PDF](https://www.fnbaloncesto.com/img/noticias/103/pdfs/ENaB%2020180317.pdf)
I had thought about reading the PDF, exporting it to Excel and then using the data. The problem with reading the PDF and exporting it to Excel is that elements of some columns shift into empty columns, because I read it with Apache POI and the whole PDF ends up saved in a single string.
Another way was to read exact coordinates data, but I do not think it's a very good option.
Could someone advise me? Which way is better, or is there some new way?
2018/03/19 | 1,636 | 5,217 | <issue_start>username_0: I am using David Walsh css flip effect: <http://davidwalsh.name/css-flip>
I have this working onClick with a JavaScript function called showCard(). See the codePen here:
<https://codepen.io/Chris_Nielsen/pen/YaWmMe>
When you first click the button, it animates correctly (**opens** from left to right). However, when you click the button again, it **closes**, animating from right to left. The third time the button is clicked, it **opens** again, animating correctly (from left to right).
What I want to do is get this to **re-open** from **left to right every time**.
Can someone point out how I can make this work? I have worked on this for 2 hours and am stumped.
The Code:
```js
function showCard() {
document.querySelector("#errorMessage").classList.toggle("flip");
}
```
```css
body {
background: #575955;
color: white;
}
.error-box {
width: 380px;
height: 110px;
background: #fff;
border: solid 1px #B71C1C;
border-radius: 9px;
font-family: 'Raleway', sans-serif;
font-size: 1.6rem;
color: #B71C1C;
text-align: center;
padding: 30px;
}
/* entire container, keeps perspective */
.flip-container {
perspective: 1000px;
}
/* flip the pane when hovered */
.flip-container.flip .flipper {
visibility: visible;
transform: rotateY(90deg);
/* transform: rotateY(90deg); */
}
.flip-container, .front, .back {
width: 320px;
height: 480px;
}
/* flip speed goes here */
.flipper {
transition: .35s;
transform-style: preserve-3d;
position: relative;
}
/* hide back of pane during swap */
.front, .back {
backface-visibility: hidden;
position: absolute;
top: 0;
left: 0;
}
/* front pane, placed above back */
.front {
z-index: 2;
/* for firefox 31 */
transform: rotateY(-90deg);
}
/* back, initially hidden pane */
.back {
transform: rotateY(180deg);
}
```
```html
Click the button below to see the animated alert.
=================================================
Email address or password
Incorrect.
Please Try Again.
```
2018/03/19 | 1,273 | 4,282 | <issue_start>username_0: I'm adding a reload button for iOS in-app web browser using `WKWebView.reload`
I tried two options, and the two options works for me the same but I want to know what is the technical correct option to use with #selector. Is it `#selector(class.method)` or `#selector (object.method)` ?
Here is the first:
```
let webView = WKWebView()
view = webView
navigationItem.rightBarButtonItem = UIBarButtonItem(barButtonSystemItem: .refresh, target: webView, action: #selector(WKWebView.reload))
```
and here the second one:
```
let webView = WKWebView()
view = webView
navigationItem.rightBarButtonItem = UIBarButtonItem(barButtonSystemItem: .refresh, target: webView, action: #selector(webView))
```
Here is the full code
```
import UIKit
import WebKit
class ViewController: UIViewController {
let webView = WKWebView()
override func loadView() {
view = webView
}
override func viewDidLoad() {
super.viewDidLoad()
navigationItem.rightBarButtonItem = UIBarButtonItem(barButtonSystemItem: .refresh, target: webView, action: #selector(WKWebView.reload))
}
}
```
2018/03/19 | 1,285 | 4,248 | <issue_start>username_0: I am a python starter and I am trying to solve the Longest Palindromic Substring problem on Lintcode. The description is:"Given a string S, find the longest palindromic substring in S. You may assume that the maximum length of S is 1000, and there exists one unique longest palindromic substring." I ran my codes, and it showed
>
> line 12, in ispalindrome while k[0] == k[-1]: IndexError: string index out of range.
>
>
>
I don't really know how that error could be raised. Can anyone help me review my code?
```
def longestPalindrome(self, s):
# write your code here
if len(s) == 0 or len(s) == 1:
return s
else:
def ispalindrome(k):
while k[0] == k[-1]:
k = k[1:-1]
if len(k) > 1:
return False
else:
return True
for i in s:
if s.rfind(i) > s.find(i):
s = s[s.find(i):s.rfind(i)+1]
if ispalindrome(s) == True:
break
return s
else:
s = s[1:-1]
```
2018/03/19 | 1,803 | 6,316 | <issue_start>username_0: i'm using Flask-SQLAlchemy and i have the following models with one to many relationship,
```
class User(db.Model):
# Table name
__tablename__ = "users"
# Primary key
user_id = db.Column(db.Integer, primary_key=True)
# Fields (A-Z)
email = db.Column(db.String(50), nullable=False, unique=True)
password = db.Column(db.String, nullable=False)
username = db.Column(db.String(50), unique=True)
# Relationships (A-Z)
uploads = db.relationship("Upload", backref="user")
class Upload(db.Model):
# Table name
__tablename__ = "uploads"
# Primary key
upload_id = db.Column(db.Integer, primary_key=True)
# Fields (A-Z)
name = db.Column(db.String(50), nullable=False)
path_to_file = db.Column(db.String(256), nullable=False, unique=True)
uploaded_by = db.Column(db.Integer, db.ForeignKey("users.user_id"))
```
and i want to return JSON like this:
```
{
"users": [
{
"email": "<EMAIL>",
"uploads": [
{
"name": "1.png",
"path_to_file": "static/1.png"
}
],
"username": "maro"
},
{
"email": "<EMAIL>",
"uploads": [
{
"name": "2.jpg",
"path_to_file": "static/2.jpg"
}
],
"username": "makos"
}
]
}
```
So basically i want to return user object with all uploads (files user uploaded) in array.
I know i can access Upload class object within user with User.uploads (created with db.relationship) but i need some kind of serializer.
I wanted to add custom serialize() method to all my models:
```
# User serializer
def serialize_user(self):
if self.uploads:
uploads = [upload.serialize_upload() for upload in self.uploads]
return {
"email": self.email,
"password": <PASSWORD>,
"username": self.username,
"uploads": uploads
}
# Upload serializer
def serialize_upload(self):
if self.user:
dict_user = self.user.serialize_user()
return {
"name": self.name,
"path_to_file": self.path_to_file,
"user": dict_user
}
```
But problem with this is that i end up with nesting loop. My User object has upload files and each upload has it's user's data and these user's data has uploads files...
My view endpoint:
```
@app.route('/users', methods=["GET"])
def get_users():
users = [user.serialize_user() for user in User.query.all()]
return jsonify(users)
```
Error:
```
RecursionError: maximum recursion depth exceeded while calling a Python object
```
Partial solution:
I can simply ommit serializing user object inside Upload serializer but then i won't be able to create similiar endpoint but to get uploads.
Example: /uploads - JSON with all uploads and user object nested.
How can I effectively work with relationships to return them as serialized JSON data similar to the JSON structure above?
2018/03/19 | 1,311 | 4,619 | <issue_start>username_0: I'm trying to load some data from stage to relational environment and something is happening I can't figure out.
I'm trying to run the following query:
```
SELECT
CAST(SPLIT_PART(some_field,'_',2) AS BIGINT) cmt_par
FROM
public.some_table;
```
The some\_field is a column that has data with two numbers joined by an underscore like this:
```
some_field -> 38972691802309_48937927428392
```
And I'm trying to get the second part.
That said, here is the error I'm getting:
```
[Amazon](500310) Invalid operation: Invalid digit, Value '1', Pos 0,
Type: Long
Details:
-----------------------------------------------
error: Invalid digit, Value '1', Pos 0, Type: Long
code: 1207
context:
query: 1097254
location: :0
process: query0_99 [pid=0]
-----------------------------------------------;
Execution time: 2.61s
Statement 1 of 1 finished
1 statement failed.
```
It's literally saying some numbers are not valid digits. I've already tried to look at the exact data which is throwing the error and it appears to be a normal field, like I was expecting. It happens even if I throw out NULL fields.
I thought it would be an encoding error, but I've not found any references to solve that.
Anyone has any idea?
Thanks everybody.<issue_comment>username_1: Hmmm. I would start by investigating the problem. Are there any non-digit characters?
```
SELECT some_field
FROM public.some_table
WHERE SPLIT_PART(some_field, '_', 2) ~ '[^0-9]';
```
Is the value too long for a `bigint`?
```
SELECT some_field
FROM public.some_table
WHERE LEN(SPLIT_PART(some_field, '_', 2)) > 27
```
If you need more than 27 digits of precision, consider a `decimal` rather than `bigint`.
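For example, a straight swap in the original query could look like this (sketch; `DECIMAL(38,0)` is just an illustrative precision/scale):
```
SELECT
CAST(SPLIT_PART(some_field, '_', 2) AS DECIMAL(38,0)) cmt_par
FROM
public.some_table;
```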
Upvotes: 1 <issue_comment>username_2: I just ran into this problem and did some digging. Seems like the error `Value '1'` is the misleading part, and the problem is actually that these fields are just not valid as numeric.
In my case they were empty strings. I found the solution to my problem in [this blogpost](https://blog.fishtownanalytics.com/how-to-safely-convert-strings-to-integers-in-redshift-a4d7aa39c70), which is essentially to find any fields that aren't numeric, and fill them with null before casting.
```
select cast(colname as integer) from
(select
case when colname ~ '^[0-9]+$' then colname
else null
end as colname
from tablename);
```
Bottom line: this Redshift error is completely confusing and really needs to be fixed.
Upvotes: 4 <issue_comment>username_3: For my `Redshift` SQL, I had to wrap my columns with `Cast(col As Datatype)` to make this error go away.
For example, setting my columns `datatype` to `Char` with a specific length worked:
```
Cast(COLUMN1 As Char(xx)) = Cast(COLUMN2 As Char(xxx))
```
Upvotes: 1 <issue_comment>username_4: If you get error message like “Invalid digit, Value ‘O’, Pos 0, Type: Integer” try executing your copy command by eliminating the header row. Use IGNOREHEADER parameter in your copy command to ignore the first line of the data file.
So the COPY command will look like below:
`COPY orders FROM 's3://sourcedatainorig/order.txt' credentials 'aws_access_key_id=;aws\_secret\_access\_key=' delimiter '\t' IGNOREHEADER 1;`
Upvotes: 2 <issue_comment>username_5: #### When you are using a Glue job to upsert data from any data source to Redshift:
Glue will rearrange the data *then* copy which can cause this issue. This happened to me even after using `apply-mapping`.
In my case, the `datatype` was not an issue at all. In the source they were typecast to exactly match the fields in Redshift.
>
> **Glue was rearranging the columns by the alphabetical order of column names then copying the data into Redshift table** *(which will
> obviously throw an error because my first column is an ID Key, not
> like the other string column).*
>
>
>
**To fix the issue, I used a SQL query within Glue to run a select command with the correct order of the columns in the table.**.
It's weird why Glue did that even after using `apply-mapping`, but the work-around I used helped.
For example: source table has fields ID|EMAIL|NAME with values 1|<EMAIL>|abcd and target table has fields ID|EMAIL|NAME But when Glue is upserting the data, it is rearranging the data by their column names before writing. Glue is trying to write <EMAIL>|1|abcd in ID|EMAIL|NAME. This is throwing an error because ID is expecting a int value, EMAIL is expecting a string. I did a SQL query transform using the query "SELECT ID, EMAIL, NAME FROM data" to rearrange the columns before writing the data.
Upvotes: 1 |
2018/03/19 | 1,461 | 5,253 | <issue_start>username_0: This has probably been asked before because of how high level the question is, however I couldn't find a solution and am struggling to get this set up. I am working on my first full stack web app using MERN stack. I am on a Mac. Everything I'm doing here is on my local machine.
**For MongoDB**, I have it installed on my local machine. I have the mongod dameom running. Here's what I have in my interactive mongo shell:
```
// run in terminal
> mongo
> show dbs
admin 0.000GB
config 0.000GB
mydboverhere 0.064GB
local 0.000GB
> use mydboverhere
switched to db mydboverhere
> show collections
table_one
table_two
andathirdtable
```
I would like to connect my node/express API to the *mydboverhere* database. In my node directory structure, I have a *models* directory with:
**/models/index.js**
```
var mongoose = require('mongoose');
mongoose.set('debug', true);
// is this line correct?
mongoose.connect('mongodb://localhost:27017/mydboverhere/table_one');
mongoose.Promise = Promise;
module.exports.Todo = require("./table1"); // this requires table1.js
```
and **/models/table1.js**
```
// this matches the form of the data in the database, I believe
var mongoose = require('mongoose');
var tab1Schema = new mongoose.Schema({
name: {
type: String,
required: 'cannot be blank'
},
completed: {
type: Boolean,
default: false
},
created_date: {
type: Date,
default: Date.now
}
})
var Table1 = mongoose.model('Table1', tab1Schema)
module.exports = Table1;
```
I believe I have my **/routes/tableroutes** file correct:
```
var express = require('express');
var router = express.Router();
var db = require('../models')
router.get('/', function(req, res){
// res.send("Hello from the table1 route")
db.Table1.find()
.then(function(data) {
res.json(data);
})
.catch(function(err) {
res.send(err);
})
});
module.exports = router;
```
and also I think I am loading these routes into my **root /index.js** file correctly as well:
```
var express = require('express')
var app = express();
var tableRoutes = require('./routes/tableroutes');
// test the root route
app.get('/', function(req, res){
res.send("Hello from the Root Route")
});
// set base route to /api for my tableRoutes
app.use('/api', tableRoutes);
app.listen(3000, () => console.log('Example app listening on port 3000!'))
```
Unfortunately, with mongod running, when I try to run node index.js to get my node app running, I receive the following error message in my command line:
*... (node:66245) UnhandledPromiseRejectionWarning: Error: Unsupported host 'localhost:27017/mydboverhere/table\_one', hosts must be URL encoded and contain at most one unencoded slash ...*
And I'm stuck here right now... pretty much, i'm not sure if Im connecting my node API with mongodb correctly or not. This is all being done on my local machine, and I have mongodb installed at /data/db as it should be. Maybe the error is due to the underscore in the collection name table\_one. Maybe the error is because the data in the table\_one collection in mongo doesnt' match exactly with the schema in table1.js (I created the mongodb separately by pushing a dataframe from R into it, and then wrote the table1.js schema to match it).
Regardless of which of the following above issues is **the issue**, I'm not sure, and I'm struggling to continue. Any help here is greatly appreciated!
EDIT1: I have a strong feeling that the following line:
```
mongoose.connect('mongodb://localhost:27017/mydboverhere/table_one');
```
is incorrect, and I am seeking the proper way to connect to a specific db.
EDIT2: I think there's another javascript library called mongoDB for this, but I would very much prefer to get this working with mongoose.<issue_comment>username_1: I think there is an error here:
You are using `thisdboverhere` whereas it should be `mydboverhere`.
```
mongoose.connect('mongodb://localhost:27017/mydboverhere', function(){
// do your process here
});
```
Or
```
mongoose.connect('mongodb://localhost:27017/mydboverhere');
var db = mongoose.connection // I think you are forgetting to instantiate the connection here
```
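As a small follow-up sketch, listening on the connection makes failures visible right away (standard Mongoose connection events; the database name is the one from the question):
```
var db = mongoose.connection;
db.on('error', console.error.bind(console, 'connection error:'));
db.once('open', function() {
  console.log('connected to mydboverhere');
});
```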
Upvotes: 2 <issue_comment>username_2: From [this good github post here](https://github.com/Automattic/mongoose/issues/2461), I found the following:
```
Make sure you're connecting to the same database
(mongoose.connect('mongodb://hostname:27017/'))
and accessing the same collection
(mongoose.model('', schema, ''))
```
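Applied to the names in the question, that would mean connecting to the database only and binding the model to the existing collection explicitly (sketch; the third argument of `mongoose.model` names the collection):
```
// connect to the database, not to a collection
mongoose.connect('mongodb://localhost:27017/mydboverhere');

// map the model to the existing 'table_one' collection
module.exports = mongoose.model('Table1', tab1Schema, 'table_one');
```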
Upvotes: 1 <issue_comment>username_3: I'm late to the party, but I had the same error message. Removing the underscore in the database name fixed it for me.
My original database name I tried was:
`const URL_MONGODB = "mongodb://localhost:27017/portfolio_db";`
I removed the underscore, and used this database name:
`const URL_MONGODB = "mongodb://localhost:27017/portfoliodb";`
After which, I no longer got the error "UnhandledPromiseRejectionWarning: Error: Unsupported host 'localhost:27017/data/db', hosts must be URL encoded and contain at most one unencoded slash"
Upvotes: 0 |
2018/03/19 | 1,285 | 4,304 | <issue_start>username_0: Here are my `user` and `product` schemas:
```
const productSchema = new Schema({
//...
addedBy: {
type: mongoose.Schema.Types.ObjectId,
ref: "users"
}
});
const userSchema = new Schema({
//...
addedItems: [{
type: mongoose.Schema.ObjectId,
ref: "products"
}]
});
mongoose.model("products", productSchema);
mongoose.model("users", userSchema);
```
In my Node back end route I do this query:
```
User.findOneAndUpdate(
{ _id: req.body.id },
{ $push: { addedItems: newProduct._id } },
{ upsert: true, new: true },
function(err, doc) {
console.log(err, doc);
}
);
```
The `console.log` prints out this:
```
{
//...
addedItems: [ 5ab0223118599214f4dd7803 ]
}
```
Everything looks good. I go to actually look at the data using the front-end website for my mongo db; I'm using [mlab.com](https://mlab.com), and this is what shows:
```
{
//...
"addedItems": [
{
"$oid": "5ab0223118599214f4dd7803"
},
{
"$oid": "5ab0223118599214f4dd7803"
}
]
}
```
**Question:** What the heck happened? Why does it add an additional entry into addedItems ?! Even though my console.log only showed one.
**Note:**
I tested to see if the backend route was being called more than once. It is not.
It seems to be a problem with `$push` because if I just have `{ addedItems: newProduct._id }` then only one entry goes in, but it overwrites the entire array.
**Edit:**
Made a test project to produce the same results: <https://github.com/philliprognerud/test-mcve-stackoverflow>
Can anyone figure out what's going on?<issue_comment>username_1: The problem is caused by your mixed used of promises (via async/await) and callbacks with the `findOneAndUpdate` call which ends up executing the command twice.
To fix the problem:
```
const updatedUser = await User.findOneAndUpdate(
{ id: userID },
{ $push: { addedItems: newProduct.id } },
{ upsert: true, new: true }
);
console.log(updatedUser);
```
Future readers note that the use of `await` isn't shown here in the question, but is in the MCVE.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I am facing similar issue. Just landed to this page. I find that previous answer is not very descriptive. So posting this:
```
export const updateUserHandler = async (req, res) => {
const request = req.body;
await User.findOneAndUpdate( //<== remove await
{ _id: request.id },
{ $push: { addedItems: newProduct._id } },
{ upsert: true, new: true },
(findErr, findRes) => {
if (findErr) {
res.status(500).send({
message: 'Failed: to update user',
IsSuccess: false,
result: findErr
});
} else {
res.status(200).send({
message: 'Success: to update user',
IsSuccess: true,
result: findRes
});
}
}
);
}
```
Here there are two async calls one is the **async** and other is **await**. Because of this there are two entries in the document. Just remove await from **await** User.findOneAndUpdate. It will work perfectly.
Thanks!!
Upvotes: 2 <issue_comment>username_3: This code $push keeps adding two entries:
const ali = { "_id": "5eaa39a18e7719140e3f4430" };
```
// return await customerModel.findOneAndUpdate(
// ali,
// {
// "$push": {
// "address": [objAdr],
// },
// },
// function (error: any, success: any) {
// if (error) {
// console.log(error);
// } else {
// console.log(success);
// }
// }
// );
```
My solutions working true:
```
return await customerModel
.findOneAndUpdate(
{ _id: ids },
{ $push: { "address": objAdr } }
)
.catch((err: string | undefined) => new Error(err));
```
Upvotes: 1 <issue_comment>username_4: When you await Query you are using the promise-like, specifically, `.then()` and `.catch(()` of Query. Passing a callback as well will result in the behavior you're describing.
If you await the Query and use `.then()` of the Query simultaneously, that would make the query execute twice.
use:
```
await Model.findOneAndUpdate(query, doc, options)
```
OR
```
Model.findOneAndUpdate(query, doc, options, callback)
```
Upvotes: 2 |
2018/03/19 | 867 | 3,245 | <issue_start>username_0: I have data in variables that I want to add into a dataframe. The issue I am running into is that I need to organize the variables where they are not populating every row.
I need the data to look like this:
```
name notification1 notification2 notification3
a 1
b 2
c 3
```
The dataframe currently look like this:
```
name notification1 notification2 notification3
a 1 1 1
b 2 2 2
c 3 3 3
```
The variables are set up like(all variables are str):
```
notification1 = 1.0
notification2 = 2.0
notification3 = 3.0
person_notification1 = a
person_notification2 = b
person_notification3 = c
```
Every notification has only one person attached to it, so not every row needs data per person.
Thank you in advance hope my question makes sense.<issue_comment>username_1: Consider storing your data in lists, instead of individual variables.
```
notifs = [1, 2, 3]
persons = ['a', 'b', 'c']
```
Initialise a diagonal 2D array using `np.diag`, and pass it to `pd.DataFrame`:
```
pd.DataFrame(
np.diag(notifs),
index=persons,
columns=np.arange(1, len(notifs) + 1)
).add_prefix('notification')
notification1 notification2 notification3
a 1 0 0
b 0 2 0
c 0 0 3
```
Upvotes: 1 <issue_comment>username_2: A way to do this:
```
import pandas as pd
notification1 = 1.0
notification2 = 2.0
notification3 = 3.0
person_notification1 = 'a'
person_notification2 = 'b'
person_notification3 = 'c'
def row(name, notification):
return {'name': name, 'notification_'+str(notification) : notification}
df = pd.DataFrame()
df = df.append(row(person_notification1, int(notification1)),ignore_index=True)
df = df.append(row(person_notification2, int(notification2)),ignore_index=True)
df = df.append(row(person_notification3, int(notification3)),ignore_index=True)
```
The result:
```
name notification_1 notification_2 notification_3
0 a 1.0 NaN NaN
1 b NaN 2.0 NaN
2 c NaN NaN 3.0
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: I think you want each column having only one value and rest as empty string. Please find the my solution below. I hope it helps.
```
import pandas as pd
import numpy as np
def main():
notification = [1.0, 2.0, 3.0]
persons = ['a', 'b', 'c']
columns = ['notification{}'.format(i) for i, elem in enumerate(notification, 1)]
df = pd.DataFrame(columns=columns)
for r,c,v in zip(persons,columns,notification):
df.at[r,c] = v
df = df.replace(np.nan, '', regex=True)
print(df)
if __name__ == '__main__':
main()
```
Output
```
notification1 notification2 notification3
a 1
b 2
c 3
```
Upvotes: 0 |
2018/03/19 | 809 | 3,066 | <issue_start>username_0: I'm having trouble returning data frames from a loop in R. I have a set of functions that reads in files and turns them into data frames for the larger project to use/visualize.
I have a list of file names to pass:
```
# list of files to read
frameList <-c("apples", "bananas", "pears")
```
This function iterates over the list and runs the functions to create the data frames if they are not already present.
```
populateFrames <- function(){
for (frame in frameList){
if (exists(frame) && is.data.frame(get(frame))){
# do nothing
}
else {
frame <- clean_data(gather_data(frame))
}
}
}
```
When executed, the function runs with no errors, but does not save any data frame to the environment.
I can manually run the same thing and that saves a data frame:
```
# manually create "apples" data frame
apples <- clean_data(gather_data(frameList[1]))
```
From my reading through similar questions here, I see that assign() is used for similar things. But in the same way as before, I can run the code manually fine; but when put inside the loop no data frame is saved to the environment.
```
# returns a data frame, "apples" to the environment
assign(x = frame[1], value = clean_data(gather_data(frame[1])))
```<issue_comment>username_1: Solutions, following the principle of "change as little about the OPs implementation as possible".
You have two problems here.
1. Your function is not returning anything, so any changes that happen are stuck in the environment of the function
2. I think you're expecting the re-assignment of `frame`in the `else`statement to re-assign it to that element in frameList. It's not.
This is the NOT RECOMMENDED\* way of doing this where you assign a variable in the function's parent environment. In this case you are *populatingFrames* as a side effect, mutating the `frameList` in the parent environment. Mutating the input is generally something you want to avoid if you want to practice defensive programming.
```
populateFrames <- function(){
  for (i in seq_along(frameList)){
    if (exists(frameList[[i]]) && is.data.frame(get(frameList[[i]]))){
      # do nothing
    }
    else {
      frameList[[i]] <<- clean_data(gather_data(frameList[[i]]))
    }
  }
}
```
This is the RECOMMENDED version where you return the new frameList (which means you have to assign it to a value).
```
populateFrames <- function(){
  for (i in seq_along(frameList)){
    if (exists(frameList[[i]]) && is.data.frame(get(frameList[[i]]))){
      # do nothing
    }
    else {
      frameList[[i]] <- clean_data(gather_data(frameList[[i]]))
    }
  }
  frameList
}
```
Upvotes: 2 <issue_comment>username_2: Avoiding global variable assignments, which are typically a no-no, try lapply:
```
lapply(
  frameList,
  function(frame){
    if(exists(frame) && is.data.frame(get(frame))){
      frame
    } else {
      clean_data(gather_data(frame))
    }
  }
)
```
Upvotes: 1 |
2018/03/19 | 788 | 2,197 | <issue_start>username_0: Hei i have this code in CSS:
```css
.gradient-four {
height: 100px;
width: 100px;
margin: 5px auto;
border-radius: 50%;
float: left;
background-image: radial-gradient( circle closest-side, red, purple);
}
```
And it's applied to a div. How can I repeat that div, without repeating the code over and over? I tried the background repeat but that wouldn't do the trick. Is it possible?
Thank you<issue_comment>username_1: Use the [`repeating-radial-gradient`](https://developer.mozilla.org/en-US/docs/Web/CSS/repeating-radial-gradient) function. It takes the same parameters as `radial-gradient` and functions the same, just repeating.
Upvotes: 0 <issue_comment>username_2: To repeat a linear/radial-background you simply need de specify a size then you may adjust the background-repeat to choose how to repeat it:
```css
.gradient-four {
height: 100px;
width: 100px;
margin: 5px auto;
border-radius: 50%;
float: left;
background-image: radial-gradient( circle closest-side, red, purple);
background-size:50px 50px;
}
```
```html
<div class="gradient-four"></div>
```
And if you want to repeat the result of the whole div you have, you may adjust the gradient like this:
```css
.gradient-four {
height: 500px;
width: 500px;
margin: 5px auto;
float: left;
background-image: radial-gradient( circle closest-side, red, purple 98%, transparent 100%);
background-size:100px 100px;
}
```
Upvotes: 1 <issue_comment>username_3: Use `repeating-radial-gradient` like so:
```
.gradient-four {
height: 100px;
width: 100%;
margin: 5px auto;
float: left;
background-image: -webkit-repeating-radial-gradient(center center, red, purple 49px, rgba(0,0,0,0) 50px, rgba(0,0,0,0) 100%);
background-image: -moz-repeating-radial-gradient(center center, red, purple 49px, rgba(0,0,0,0) 50px, rgba(0,0,0,0) 100%);
background-image: -ms-repeating-radial-gradient(center center, red, purple 49px, rgba(0,0,0,0) 50px, rgba(0,0,0,0) 100%);
background-image: repeating-radial-gradient(center center, red, purple 49px, rgba(0,0,0,0) 50px, rgba(0,0,0,0) 100%);
background-size: 100px 100px;
}
```
[JSFiddle](https://jsfiddle.net/coxzocnn/13/)
Upvotes: 0 |
2018/03/19 | 2,805 | 10,155 | <issue_start>username_0: I've looked at a ton of articles about how to rebase with Git and it makes sense...or at least I think it does. However, I'm struggling with the following...
For this scenario we have the following branches:
* master (local)
* origin/master
* jheigs (local)
* origin/jheigs
Ok, so now let's say that origin/master is ahead by 1 commit. What I was told, by articles and by the dev team I'm working with, is to do the following on the jheigs branch:
```
$ git add ...
$ git commit ...
$ git status (ensure everything is up-to-date locally)
$ git pull (again, check and ensure that everything is ready to go)
$ git pull --rebase origin master
$ git push (defaulting to origin/jheigs)
```
1. When the rebase runs fine, what I have run into is that origin/jheigs and local jheigs have HEAD commits that don't match (given the above rebase), so at times I may have to pull and then push, which can cause conflicts. What I am confused about is...should I instead be using:
```
$ git push --force origin jheigs (?)
```
2. Second question...now, let's say that I've pushed and jheigs has been rebased correctly with origin/master. No conflicts existed and my jheigs and origin/jheigs are now ahead of master by 1 commit. Ok, a day goes by and I need to make more changes. So I make those changes on jheigs and add/commit. However, origin/master has no additional updates. Essentially I'm still ahead of origin/master by 1 commit (soon to be 2). Do I still follow the same process above? Or do I just add/commit and push to origin/jheigs without rebasing since I'm already ahead of origin/master?
I apologize that this is so length, but I thought I had this figured out and it's just not as smooth as I thought it would be. I want to be careful with rebasing, so any help is appreciated!<issue_comment>username_1: Well, first, I would (and do) just avoid `git pull` entirely, or mostly: it's meant to be convenient, but it turns out to be *in*convenient. It just runs `git fetch` followed by a second Git command that affects whatever branch you have checked out. Sometimes I don't want or need the fetch; other times, I want to look around at what happened *when* the fetch ran, *before* I do any second command to affect the current branch.
As you have probably read, `git rebase` is really about *copying* (some) commits. You have some collection of commits in your repository, and now, for whatever reason, the shine has gone off some commit(s), and you want to make newer, better, prettier, shinier commits.
The way to make all of this make sense is to *draw commit graphs*. Remember that there are multiple clones—different repositories that contain *mostly* the same sets of commits, but not exactly the same sets—involved!
### Drawing your own repository's commits
You can have Git do this for you: `git log --all --decorate --oneline --graph`, or git log (with) A DOG. It's good exercise to do it by hand at first though. Also, when you do it by hand, you can draw these out horizontally, which tends to make more sense.
Remember that each commit, identified by its unique hash ID, is read-only. Each commit points back to its *parent* commit. A branch name like `master` or `jheigs`, or a remote-tracking name like `origin/master` or `origin/jheigs`, *points to* (records the hash ID of) the *tip* commit, and Git works backwards from these tips:
```
...--C--D--E <-- master
\
F--G <-- jheigs (HEAD)
```
This might be the graph fragment of a repository where commit `E` is the tip of `master`, and `E` points back to `D`, while you've added two commits of your own: `G`, which is the tip of `jheigs` and which points back to `F`, and earlier, you added `F` pointing back to `D`.
Note that here, commits `D` and earlier are on *both* branches.
A `git rebase` will copy¹ some commits to new-and-improved commits. For instance, here you might want to copy `F` to a new-and-improved `F'`, where the main difference is that `F'` is attached to `E`, not to `D`:
```
F' [new and improved!]
/
...--C--D--E <-- master
\
F--G <-- jheigs (HEAD)
```
Once you've copied `F` to `F'`, you can now copy `G` to new-and-improved `G'`:
```
F'-G' [new and improved!]
/
...--C--D--E <-- master
\
F--G <-- jheigs (HEAD)
```
And now that all of the commits that are "on" (reachable from) `jheigs`, but *not* on `master`, have been copied to the new-and-improved version, you can have Git peel the `jheigs` label off commit `G` and paste it onto `G'`:
```
F'-G' <-- jheigs (HEAD)
/
...--C--D--E <-- master
\
F--G [abandoned]
```
The old, dull `F` and `G` commits that were so shiny and nice yesterday are now junk, replaced with the shiny new `F'` and `G'`. This is because *your* branch name `jheigs` now points to the last of the new copied commits.
---
¹ This copying is done as if by `git cherry-pick`. Depending on which rebase command you use, it may actually *be* done with `git cherry-pick`.
---
### There's another repository involved: `fetch` and `push`
The graph above is for *your* repository, but there's a second repository involved here. They have their own branch names, and their own commits. The commits that you have, that they also have, share commit hash IDs. There might be some commits that you have that they don't; and there might be some commits that they have that you don't.
If you think they might have commits that you don't, you should run `git fetch` or `git fetch origin`. This has your Git call up their Git, at the URL you have listed under the name `origin`. Your Git calls up their Git and has them list all of their commits (by hash IDs) as given by *their* branch names.
If they have commits that you don't, your Git now downloads those commits. Your Git also *changes* their branch names, such as `master` or `jheigs`, so that they read `origin/master` and `origin/jheigs`. These new names won't interfere with *your* branch names.
Now that you have all the commits that they have, plus all the commits that you had before, your repository perhaps looks like this—let's assume you *haven't* done `git rebase` yet:
```
H <-- origin/master
/
...--C--D--E <-- master
\
F--G <-- jheigs (HEAD), origin/jheigs
```
Your `origin/*` is your Git's memory of *their* branch names. This means that *their* `master` identifies commit `H`. You have commit `H` now, thanks to `git fetch`. *Their* `jheigs` identifies commit `G`, just like yours.
If you now run `git rebase master`, your Git will copy your `F` and `G` to a new `F'` and `G'`, built to come after commit `E`, which is where your `master` points. You probably would like to have them come after commit `H` instead.
Here, you can do this pretty easily: you can just run `git rebase origin/master`. This tells your Git to find commits that you have (i.e., `F` and `G`) that are not *reachable from* your `origin/master` (i.e., their `master`, as far as you can remember anyway) and put the copies after commit `H`. The result looks like this:
```
F'-G' <-- jheigs (HEAD)
/
H <-- origin/master
/
...--C--D--E <-- master
\
F--G <-- origin/jheigs
```
Note that none of the `origin/*` names have moved, and your own `master` has not moved either. However, *their* `jheigs` which you call `origin/jheigs` still remembers commit `G`.
*Now* you need `git push --force jheigs`, to tell them: *Throw away commits `F` and `G` in favor of the new shiny `F'` and `G'`.* Once they agree to do that, your Git will remember their `jheigs` as pointing to `G'`:
```
F'-G' <-- jheigs (HEAD), origin/jheigs
/
H <-- origin/master
/
...--C--D--E <-- master
```
With no names to find them, commits `F` and `G` appear to vanish entirely.
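Put together, a minimal sketch of that flow on the command line (using the branch names from the question) might be:

```
git fetch origin                   # update your origin/* remote-tracking names
git rebase origin/master           # copy F-G to F'-G' on top of H
git push --force origin jheigs     # ask them to move their jheigs to G'
```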
### Why you might not want to rebase, ever
Note that there may be a *third* Git repository out there, and a fourth or more, with another `origin/jheigs` in it. If so, all those *other* repositories must take some action (such as running `git fetch`) to get the new commits and update their own `origin/jheigs` names.
Further, they might have built their own commits atop your commits before you decided to throw out your commits in favor of new-and-improved ones. If so, they may be forced to copy *their* commits just like you copied yours. That might even be a good thing, or it might make them annoyed.
Hence, if you're ever about to rebase commits that *other people* have, or might have, you should be reasonably sure that it's OK to make work for them. If no one else has the original commits, it's obviously safe to copy-and-replace them. If everyone else has agreed that this kind of copy-replace is *supposed* to happen, it's still fine. So rebase if it's OK to do so, and if it makes sense to do so.
Upvotes: 3 [selected_answer]<issue_comment>username_2: >
> When the rebase runs fine, what I have run into is that origin/jheigs and local jheigs have HEAD commits that don't match (given the above rebase)
>
>
>
Yes, but the difference is your additional un-pushed work. After a `git pull --rebase`, the upstream HEAD will be an *ancestor* of your local HEAD.
>
> so at times I may have to pull and then push
>
>
>
Which is generally the correct thing to do
>
> which can cause conflicts.
>
>
>
It shouldn't, unless someone else pushes in between your pull and push (in which case, you need to do it again, to incorporate their work).
>
> Ok, a day goes by and I need to make more changes. So I make those changes on jheigs and add/commit. However, origin/master has no additional updates. Essentially I'm still ahead of origin/master by 1 commits (soon to be 2). Do I still follow the same process above? Or do I just add/commit and push to origin/jheigs without rebasing since I'm already ahead of origin/master?
>
>
>
Either is fine. If upstream hasn't moved, `git pull --rebase` will do nothing.
Upvotes: 0 |
2018/03/19 | 918 | 3,400 | <issue_start>username_0: I have a React Navigation Tab Component like this:
```
const RootNavigator=TabNavigator({
Home:{
screen: Home,
navigationOptions:{
tabBarIcon: ({focused}) => (
)
}
},
Notifications:{
screen: Notifications,
navigationOptions:{
tabBarIcon: ({focused}) => (
)
}
}, {});
```
Is there a way to make a callback when leaving a screen?
In this case, I would like to perform a function when I leave the Notifications tab. Such as mark the notifications as seen and remove the badge indicator.
As of now, I am pulling the Notification icon from another component in order to show the number badge.
Thanks in advance.<issue_comment>username_1: One option is to use [`onNavigationStateChange`](https://reactnavigation.org/docs/custom-navigator-overview.html#onnavigationstatechangeprevstate-newstate-action) to check the current change of the navigation and do the action you need to clear notifications etc.
>
> **onNavigationStateChange(prevState, newState, action)**
>
>
> Function that gets called every time navigation state managed by the
> navigator changes. It receives the previous state, the new state of
> the navigation and the action that issued state change. By default it
> prints state changes to the console.
>
>
>
Another option is to use [`addListener`](https://reactnavigation.org/docs/navigation-prop.html#addlistener-subscribe-to-updates-to-navigation-lifecycle). This way you can subscribe to `willFocus`/`didFocus` or `willBlur`/`didBlur` events and do the action you need.
>
> **addListener** - *Subscribe to updates to navigation lifecycle*
>
>
> React Navigation emits events to screen components that subscribe to
> them:
>
>
> * `willBlur` - the screen will be unfocused
> * `willFocus` - the screen will focus
> * `didFocus` - the screen focused (if there was a transition, the transition completed)
> * `didBlur` - the screen unfocused (if there was a transition, the transition completed)
>
>
>
*Example from the docs*
```
const didBlurSubscription = this.props.navigation.addListener(
  'didBlur',
  payload => {
    console.debug('didBlur', payload);
  }
);

// Remove the listener when you are done
didBlurSubscription.remove();

// Payload
{
  action: { type: 'Navigation/COMPLETE_TRANSITION', key: 'StackRouterRoot' },
  context: 'id-1518521010538-2:Navigation/COMPLETE_TRANSITION_Root',
  lastState: undefined,
  state: undefined,
  type: 'didBlur',
};
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: For those who want a third option, you could use the `NavigationEvents` component as suggested by the [docs](https://reactnavigation.org/docs/en/2.x/navigation-events.html) to listen to navigations hooks and to whatever you intended to do.
>
> Edit: This is documentation for React Navigation 2.x, which is no
> longer actively maintained. For up-to-date documentation, see the
> latest version (6.x).
>
>
>
```
import React from 'react';
import { View } from 'react-native';
import { NavigationEvents } from 'react-navigation';
const MyScreen = () => (
  <View>
    <NavigationEvents
      onWillFocus={payload => console.log('will focus', payload)}
      onDidFocus={payload => console.log('did focus', payload)}
      onWillBlur={payload => console.log('will blur', payload)}
      onDidBlur={payload => console.log('did blur', payload)}
    />
    {/*
      Your view code
    */}
  </View>
);
export default MyScreen;
```
Upvotes: 2 |
2018/03/19 | 889 | 3,329 | <issue_start>username_0: I'm trying to do some laravel validation.
I need to ensure that the field max rent is always greater than min rent, and to provide a message letting the user know.
Here is my validation code in my controller
```
$this->validate($request, [
"county" => "required",
"town" => "required",
"type" => "required",
"min-bedrooms" => "required",
"max-bedrooms" => "required",
"min-bathrooms" => "required",
"max-bathrooms" => "required",
"min-rent" => "required|max4",
"max-rent" => "required|max4",
]);
```
I'm not using a separate rules method. This is within the controller.<issue_comment>username_1: You could use a [Custom Validation Rule](https://laravel.com/docs/5.6/validation#custom-validation-rules).
1. Create the Rule class
========================
```php
php artisan make:rule RentRule
```
2. Insert your logic
====================
**App\Rules\RentRule**
```php
namespace App\Rules;
use Illuminate\Contracts\Validation\Rule;
class RentRule implements Rule
{
protected $min_rent;
/**
* Create a new rule instance.
*
* @param $min_rent
*/
public function __construct($min_rent)
{
// Here we are passing the min-rent value to use it in the validation.
$this->min_rent = $min_rent;
}
/**
* Determine if the validation rule passes.
*
* @param string $attribute
* @param mixed $value
* @return bool
*/
public function passes($attribute, $value)
{
// This is where you define the condition to be checked.
return $value > $this->min_rent;
}
/**
* Get the validation error message.
*
* @return string
*/
public function message()
{
// Customize the error message
return 'The maximum rent value must be greater than the minimum rent value.';
}
}
```
3. Use it
=========
```php
use App\Rules\RentRule;
// ...
$this->validate($request, [
"county" => "required",
"town" => "required",
"type" => "required",
"min-bedrooms" => "required",
"max-bedrooms" => "required",
"min-bathrooms" => "required",
"max-bathrooms" => "required",
"min-rent" => "required|max4",
"max-rent" => ["required", new RentRule($request->get('min-rent')],
]);
```
---
Side note
=========
I suggest you use [Form Request](https://laravel.com/docs/5.6/validation#form-request-validation) classes to extract the validation logic from the controller and decouple your code. This will let you have classes that have just one responsibility, making them easier to test and cleaner to read.
Upvotes: 2 <issue_comment>username_2: We can use parameters from the request as a part of the validation rule. This may use in a situation like one field must be greater than another field. The following code is an example of checking `max-rent` should be greater than `min-rent.` In this case the validation rule `numeric` important we are checking numbers, otherwise it will check the number of characters.
```
$request->validate([
"min-rent" => "required|numeric|max:9999",
"max-rent" => "required|numeric|min:{$request->input('min-rent')}|max:99999",
]);
```
Upvotes: -1 |
2018/03/19 | 941 | 3,355 | <issue_start>username_0: I'm trying to write code to organize a data dump.
Basically I need to extract
asdf from: *123456**asdf**123456789*
I can do this in normal worksheet mode in Excel, but I am having trouble with this in VBA. Theoretically the code below should return the string 9 spaces left and 9 spaces right, but the code just hangs there with an error.
```
Set wks = ActiveWorkbook.ActiveSheet
LastRow = wks.UsedRange.Rows.Count
For MyRow = 1 To LastRow
celltxt = wks.Cells(MyRow, 1).Text
If InStr(1, celltxt, "name") Then
LString = Len(wks.Cells(MyRow, 1))
wks.Cells(MyRow, 1) = RIGHT(LEFT(wks.Cells(MyRow, 1),LString-6),LString -9)
End If
Next For
```
2018/03/19 | 909 | 2,753 | <issue_start>username_0: I am not sure what I am missing in a query where i am trying to convert a not in subquery to join.
Here is my original query that works perfectly for me:
```
select
battery_id
from
battery_price
where
clinic_id = 2
and battery_id not in
(
select battery_id
from battery_price
where clinic_id = 4569
)
;
```
Here is the query that I am trying and it does not work because it does not give me any null field:
```
select leftq.battery_id as leftf, rightq.battery_id as rightf
from
(
select
battery_id
from
battery_price
where
clinic_id = 2
) as leftq
left join
(
select battery_id
from battery_price
where clinic_id = 4569
) as rightq
on rightq.battery_id = leftq.battery_id
;
```
Here is the table schema:
```
CREATE TABLE `battery_price` (
`battery_price_id` int(11) NOT NULL AUTO_INCREMENT,
`clinic_id` int(11) NOT NULL DEFAULT '0',
`battery_id` int(11) NOT NULL DEFAULT '0',
`retail_price` decimal(10,2) NOT NULL DEFAULT '0.00',
`actual_cost` decimal(10,2) NOT NULL DEFAULT '0.00',
`provider_cost` decimal(10,2) NOT NULL DEFAULT '0.00',
`in_use` tinyint(1) NOT NULL DEFAULT '1',
`sales_tax_id1` int(11) NOT NULL DEFAULT '0',
`sales_tax_id2` int(11) NOT NULL DEFAULT '0',
`sales_tax_id3` int(11) NOT NULL DEFAULT '0',
`sales_tax_id4` int(11) NOT NULL DEFAULT '0',
`sales_tax_id5` int(11) NOT NULL DEFAULT '0',
`sales_tax_category_id` int(11) NOT NULL DEFAULT '0',
`price_locked` tinyint(1) NOT NULL DEFAULT '1',
`item_number` varchar(50) NOT NULL DEFAULT '',
`last_update` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`last_edit_user_id` int(11) DEFAULT '0',
PRIMARY KEY (`battery_price_id`),
KEY `battery_id` (`battery_id`),
KEY `battery_id_2` (`battery_id`),
KEY `battery_id_3` (`battery_id`)
) ENGINE=MyISAM AUTO_INCREMENT=2639 DEFAULT CHARSET=latin1
```<issue_comment>username_1: Although I'm not quite sure why you want to select a NULL column (if you don't, just omit the second selected column after the comma), I think this query solves your problem:
```
select battery_price.battery_id as leftf, rightq.battery_id as rightf
from battery_price left join
(
select battery_id
from battery_price
where clinic_id = 4569
) as rightq
on rightq.battery_id = battery_price.battery_id
where battery_price.clinic_id = 2 and rightq.battery_id is null;
```
Upvotes: 1 <issue_comment>username_2: Try this one:
```
select
t.battery_id
from
battery_price t
left outer join battery_price u
on t.battery_id = u.battery_id
and u.clinic_id = 4569
where t.clinic_id = 2
and u.battery_id is NULL
```
Upvotes: 3 [selected_answer] |
2018/03/19 | 441 | 1,584 | <issue_start>username_0: Can someone help me figure out what i am doing wrong? When i run it, it will not display.
```html
function reverseMyString() {
string = parseInt(prompt("Enter a String"));
var str = string;
var reverseStr = "";
for (var i = str.length - 1; i >= 0; i--)
reverseStr += str[i];
document.write("Reversed String =" + reverseStr);
}
Click here to reverse a String
```<issue_comment>username_1: Unnecessary `parseInt` over entered string.
```
string = parseInt(prompt("Enter a String"));
^
```
Remove that call, and your logic will work as expected because the input will be treated as a string rather than a number (Integer).
```html
function reverseMyString() {
  var str = prompt("Enter a String");
  var reverseStr = "";
  for (var i = str.length - 1; i >= 0; i--)
    reverseStr += str[i];
  document.write("Reversed String =" + reverseStr);
}
Click here to reverse a String
```
**Aside note:**
You don't need to re-assign the entered value to a new variable:
```
string = parseInt(prompt("Enter a String"));
var str = string;
^^^^^^^^^^^^
```
Upvotes: 2 <issue_comment>username_2: If you want to have a string you dont't have to parse it to an Integer.
This code below splits the string into an array, then reverses it with the built in reverse() function, then joins it again into a string.
```js
function reverseMyString() {
  var string = prompt("Enter a String");
  var reverseStr = string.split("").reverse().join("");
  console.log(reverseStr);
}
```
```html
Click here to reverse a String
```
Upvotes: 2 |
2018/03/19 | 463 | 1,805 | <issue_start>username_0: New to perforce (and stackoverflow). I'm trying determine what has changed in my local view: I want to list all the recent changes under a directory in my workspace.
Perforce documentation seems to suggest everything is file based. Is there not a simple solution that does not require either probing each file individually and recursively down the directory tree, or filtering and processing changelists (multiple projects are sharing the same depot in my case)?
Ideally, I'm looking for the SVN equivalent of "svn log -v" (i.e., no path specification).
I'm also looking for a command-line solution.<issue_comment>username_1: Unnecessary `parseInt` over entered string.
```
string = parseInt(prompt("Enter a String"));
^
```
Remove that call, and your logic will work as expected because will be treated as string rather than a number (Integer).
```html
function reverseMyString() {
var str = prompt("Enter a String");
var reverseStr = "";
for (var i = str.length - 1; i >= 0; i--)
reverseStr += str[i];
document.write("Reversed String =" + reverseStr);
}
Click here to reverse a String
```
**Aside note:**
You don't need to re-assign the entered value to a new variable:
```
string = parseInt(prompt("Enter a String"));
var str = string;
^^^^^^^^^^^^
```
Upvotes: 2 <issue_comment>username_2: If you want to have a string you dont't have to parse it to an Integer.
This code below splits the string into an array, then reverses it with the built in reverse() function, then joins it again into a string.
```js
function reverseMyString() {
var string = prompt("Enter a String");
var reverseStr = string.split("").reverse().join("");
console.log(reverseStr);
}
```
```html
Click here to reverse a String
```
Upvotes: 2 |
2018/03/19 | 772 | 2,101 | <issue_start>username_0: I have a bunch of docker containers running on the default bridge network, that need to communicate with each other.
I want to move some of the containers to a separate user defined network so I can specify their IP addresses.
Is there any way to do this without having to take down/replicate all the containers and move them to the other network, or is this the only way?<issue_comment>username_1: It's possible to create networks and connect containers while they are live. You may still need to stop/start processes if the process is listening on a specific IP address rather than all interfaces (`*` or `::`).
### Create a network
```
docker network create \
  --driver=bridge \
  --subnet=192.168.38.0/24 \
  --gateway=172.16.238.1 \
  <network>
```
### Connect a container
```
docker network connect \
  --ip 192.168.38.14 \
  <network> \
  <container>
### Disconnect from original network
```
docker network disconnect
```
### Example
Before the containers `eth0` is on the default bridge network
```
→ docker exec $CONTAINER ip ad sh
1: lo: mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid\_lft forever preferred\_lft forever
15: eth0@if16: mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
valid\_lft forever preferred\_lft forever
```
Afterwards, `eth1` has been added and no more `eth0`
```
→ docker exec $CONTAINER ip ad sh
1: lo: mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid\_lft forever preferred\_lft forever
17: eth1@if18: mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:26:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.38.14/24 brd 192.168.38.255 scope global eth1
valid\_lft forever preferred\_lft forever
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You also should think about using a docker compose. It will create a network automatically, with its own DNS, allowing the containers to be connected.
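For example, a minimal `docker-compose.yml` sketch (the service and network names are placeholders; the subnet and address are reused from the answer above):

```yaml
version: "2.4"
services:
  app:
    image: alpine
    command: sleep infinity
    networks:
      mynet:
        ipv4_address: 192.168.38.14
networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 192.168.38.0/24
```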
Upvotes: 0 |
2018/03/19 | 2,136 | 8,176 | <issue_start>username_0: I set numpy random seed at the beginning of my program. During the program execution I run a function multiple times using `multiprocessing.Process`. The function uses numpy random functions to draw random numbers. The problem is that `Process` gets a copy of the current environment. Therefore, each process is running independently and they all start with the same random seed as the parent environment.
So my question is how can I share the random state of numpy in the parent environment with the child process environment? Just note that I want to use `Process` for my work and need to use a **separate class** and do `import numpy` in that class separately. I tried using `multiprocessing.Manager` to share the random state but it seems that things do not work as expected and I always get the same results. Also, it does not matter if I move the for loop inside `drawNumpySamples` or leave it in `main.py`; I still cannot get different numbers and the random state is always the same. Here's a simplified version of my code:
```
# randomClass.py
import numpy as np
class myClass:
    def __init__(self, randomSt):
        print('setup the object')
        np.random.set_state(randomSt)

    def drawNumpySamples(self, idx):
        np.random.uniform()
```
And in the main file:
```
# main.py
import numpy as np
from multiprocessing import Process, Manager
from randomClass import myClass
np.random.seed(1) # set random seed
mng = Manager()
randomState = mng.list(np.random.get_state())
myC = myClass(randomSt = randomState)
for i in range(10):
    myC.drawNumpySamples() # this will always return the same results
```
**Note**: I use Python 3.5. I also posted an issue on Numpy's GitHub page. Just sending the issue link [here](https://github.com/numpy/numpy/issues/10769) for future reference.<issue_comment>username_1: You need to update the state of the `Manager` each time you get a random number:
```
import numpy as np
from multiprocessing import Manager, Pool, Lock
lock = Lock()
mng = Manager()
state = mng.list(np.random.get_state())
def get_random(_):
    with lock:
        np.random.set_state(state)
        result = np.random.uniform()
        state[:] = np.random.get_state()
    return result
np.random.seed(1)
result1 = Pool(10).map(get_random, range(10))
# Compare with non-parallel version
np.random.seed(1)
result2 = [np.random.uniform() for _ in range(10)]
# result of Pool.map may be in different order
assert sorted(result1) == sorted(result2)
```
Upvotes: 2 <issue_comment>username_2: Even if you manage to get this working, I don’t think it will do what you want. As soon as you have multiple processes pulling from the same random state in parallel, it’s no longer deterministic which order they each get to the state, meaning your runs won’t actually be repeatable. There are probably ways around that, but it seems like a nontrivial problem.
Meanwhile, there is a solution that should solve both the problem you want and the nondeterminism problem:
Before spawning a child process, ask the RNG for a random number, and pass it to the child. The child can then seed with that number. Each child will then have a different random sequence from other children, but the same random sequence that the same child got if you rerun the entire app with a fixed seed.
If your main process does any other RNG work that could depend non-deterministically on the execution of the children, you'll need to pre-generate the seeds for all of your child processes, in order, before pulling any other random numbers.
---
As senderle pointed out in a comment: If you don't need multiple distinct runs, but just one fixed run, you don't even really need to pull a seed from your seeded RNG; just use a counter starting at 1 and increment it for each new process, and use that as a seed. I don't know if that's acceptable, but if it is, it's hard to get simpler than that.
As Amir pointed out in a comment: a better way is to draw a random integer every time you spawn a new process and pass that random integer to the new process to set the numpy's random seed with that integer. This integer can indeed come from `np.random.randint()`.
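A minimal sketch of that idea (the `worker` function and the argument passing here are assumptions for illustration, not the asker's real code):

```
import numpy as np
from multiprocessing import Process

def worker(seed):
    np.random.seed(seed)   # each child gets its own, reproducible stream
    print(np.random.uniform())

if __name__ == '__main__':
    np.random.seed(1)      # fix the parent seed once
    child_seeds = [np.random.randint(0, 2**32 - 1) for _ in range(4)]
    processes = [Process(target=worker, args=(s,)) for s in child_seeds]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```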
Upvotes: 4 [selected_answer]<issue_comment>username_3: Fortunately, according to [the documentation](https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.random.html), you can access [the complete state of the numpy random number generator using `get_state`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.get_state.html#numpy.random.get_state) and set it again using `set_state`. The generator itself uses the [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_Twister) algorithm (see [the `RandomState` part of the documentation](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.RandomState.html#numpy.random.RandomState)).
This means you can do anything you want, though whether it will be *good* and *efficient* is a different question entirely. As [username_2 points out](https://stackoverflow.com/a/49375587/1256452), no matter how you share the parent's state—this could use [username_1's method](https://stackoverflow.com/a/49372952/1256452), which looks correct—your sequencing within each child will depend on the order in which each child draws random numbers from the MT state machine.
It would perhaps be better to build a large pool of pseudo-random numbers for each child, saving the start state of the entire generator once at the start. Then each child can draw a PRNG value until its particular pool runs out, after which you have the child coordinate with the parent for the next pool. The parent enumerates which children got which "pool'th" number. The code would look something like this (note that it would make sense to turn this into an infinite generator with a `next` method):
```
class PrngPool(object):
    def __init__(self, child_id, shared_state):
        self._child_id = child_id
        self._shared_state = shared_state
        self._numbers = []

    def next_number(self):
        if not self._numbers:
            self._refill()
        return self._numbers.pop(0)  # XXX inefficient

    def _refill(self):
        # ... something like username_1's lock/gen/unlock,
        # but fill up self._numbers with the next 1000 (or
        # however many) numbers after adding our ID and
        # the index "n" of which n-through-n+999 numbers
        # we took here. Any other child also doing a
        # _refill will wait for the lock and get an updated
        # index n -- eg, if we got numbers 3000 to 3999,
        # the next child will get numbers 4000 to 4999.
        pass
```
This way there is not nearly as much communication through Manager items (MT state and our ID-and-index added to the "used" list). At the end of the process, it's possible to see which children used which PRNG values, and to re-generate those PRNG values if needed (remember to record the full MT internal start state!).
Edit to add: The way to think about this is like this: the MT is *not* actually random. It is periodic with a very long period. When you use any such RNG, your seed is simply a starting point within the period. To get repeatability you must use *non*-random numbers, such as a set from a book. There is a (virtual) book with every number that comes out of the MT generator. We're going to write down which page(s) of this book we used for each group of computations, so that we can re-open the book to those pages later and re-do the same computations.
Upvotes: 2 <issue_comment>username_4: You can use [`np.random.SeedSequence`](https://numpy.org/doc/stable/reference/random/bit_generators/generated/numpy.random.SeedSequence.html). See <https://numpy.org/doc/stable/reference/random/parallel.html>:
```
from numpy.random import SeedSequence, default_rng
ss = SeedSequence(12345)
# Spawn off 10 child SeedSequences to pass to child processes.
child_seeds = ss.spawn(10)
streams = [default_rng(s) for s in child_seeds]
```
This way, each of your threads/processes will get a statistically independent random generator.
Upvotes: 1 |
2018/03/19 | 386 | 1,763 | <issue_start>username_0: While creating some basic workflow using KNIME and PSQL I have encountered problems with selecting proper node for fetching data from db.
In node repo we can find at least:
1. **PostgreSQL Connector**
2. **Database Reader**
3. **Database Connector**
Actually, we can do the same using 2) alone or connecting either 1) or 2) to node 3) input.
I assumed there are some hidden advantages like improved performance with complex queries or better overall stability but on the other hand we are using exactly the same database driver, anyway..<issue_comment>username_1: One advantage of using 1 or 2 is that you only need to enter connection details once for a database in a workflow, and can then use multiple reader or writer nodes. I'm not sure if there is a performance benefit.
1 offers simpler connection details with the bundled postgres jdbc drivers than 2
Upvotes: 0 <issue_comment>username_2: There is a big difference between the Connector Nodes and the Reader Node.
The Database Reader reads data into KNIME; the data is then on the machine running the workflow. This can be a bad idea for big tables.
The Connector nodes do not. The data remains where it is (usually on a remote machine in your cluster). You can then connect Database nodes to the connector nodes. All data manipulation will then happen within the database, no data is loaded to your machine (unless you use the output port preview).
For the difference of the other two:
The PostgreSQL Connector is just a special case of the Database Connector that has a pre-set configuration. However, you can make the same configuration with the Database Connector, which allows you to choose more detailed options for non-standard databases.
Upvotes: 3 [selected_answer] |
2018/03/19 | 2,608 | 8,492 | <issue_start>username_0: Guid is a 128bits structure, long is a Int64 so 64 bits structure, therefore Guid can be used to represent two long and two long can be stored in a Guid.
I have been searching several times for a reliable way to perform the transformation of a Guid to 2 longs and the way around, mainly to get a simple way to provide a tracking id to external services.
The objective is to get a reversible way to pass 2 longs in a single parameter, and decode it back later (of course it is not intended to be used "decoded" on the other side). It is like a session id for the external service.
Taking advantage of new features of C# 7, I came out with the following tools class, which transforms long, ulong, int, uint to Guid and reverse:
```
public static class GuidTools
{
public static Guid GuidFromLongs(long a, long b)
{
byte[] guidData = new byte[16];
Array.Copy(BitConverter.GetBytes(a), guidData, 8);
Array.Copy(BitConverter.GetBytes(b), 0, guidData, 8, 8);
return new Guid(guidData);
}
public static (long, long) ToLongs(this Guid guid)
{
var bytes = guid.ToByteArray();
var long1 = BitConverter.ToInt64(bytes, 0);
var long2 = BitConverter.ToInt64(bytes, 8);
return (long1, long2);
}
public static Guid GuidFromULongs(ulong a, ulong b)
{
byte[] guidData = new byte[16];
Array.Copy(BitConverter.GetBytes(a), guidData, 8);
Array.Copy(BitConverter.GetBytes(b), 0, guidData, 8, 8);
return new Guid(guidData);
}
public static (ulong, ulong) ToULongs(this Guid guid)
{
var bytes = guid.ToByteArray();
var ulong1 = BitConverter.ToUInt64(bytes, 0);
var ulong2 = BitConverter.ToUInt64(bytes, 8);
return (ulong1, ulong2);
}
public static Guid GuidFromInts(int a, int b, int c, int d)
{
byte[] guidData = new byte[16];
Array.Copy(BitConverter.GetBytes(a), guidData, 4);
Array.Copy(BitConverter.GetBytes(b), 0, guidData, 4, 4);
Array.Copy(BitConverter.GetBytes(c), 0, guidData, 8, 4);
Array.Copy(BitConverter.GetBytes(d), 0, guidData, 12, 4);
return new Guid(guidData);
}
public static (int, int , int, int) ToInts(this Guid guid)
{
var bytes = guid.ToByteArray();
var a = BitConverter.ToInt32(bytes, 0);
var b = BitConverter.ToInt32(bytes, 4);
var c = BitConverter.ToInt32(bytes, 8);
var d = BitConverter.ToInt32(bytes, 12);
return (a, b, c, d);
}
public static Guid GuidFromUInts(uint a, uint b, uint c, uint d)
{
byte[] guidData = new byte[16];
Array.Copy(BitConverter.GetBytes(a), guidData, 4);
Array.Copy(BitConverter.GetBytes(b), 0, guidData, 4, 4);
Array.Copy(BitConverter.GetBytes(c), 0, guidData, 8, 4);
Array.Copy(BitConverter.GetBytes(d), 0, guidData, 12, 4);
return new Guid(guidData);
}
public static (uint, uint, uint, uint) ToUInts(this Guid guid)
{
var bytes = guid.ToByteArray();
var a = BitConverter.ToUInt32(bytes, 0);
var b = BitConverter.ToUInt32(bytes, 4);
var c = BitConverter.ToUInt32(bytes, 8);
var d = BitConverter.ToUInt32(bytes, 12);
return (a, b, c, d);
}
}
```
Also found another solution inspired from there: [Converting System.Decimal to System.Guid](https://stackoverflow.com/questions/3563830/converting-system-decimal-to-system-guid)
```
[StructLayout(LayoutKind.Explicit)]
struct GuidConverter
{
[FieldOffset(0)]
public decimal Decimal;
[FieldOffset(0)]
public Guid Guid;
[FieldOffset(0)]
public long Long1;
[FieldOffset(8)]
public long Long2;
}
private static GuidConverter _converter;
public static (long, long) FastGuidToLongs(this Guid guid)
{
_converter.Guid = guid;
return (_converter.Long1, _converter.Long2);
}
public static Guid FastLongsToGuid(long a, long b)
{
_converter.Long1 = a;
_converter.Long2 = b;
return _converter.Guid;
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Following pair of methods could do what you need:
```
public static void GuidToInt16(Guid guidToConvert, out long guidAsLong1, out long guidAsLong2)
{
byte[] guidByteArray = guidToConvert.ToByteArray();
var segment1 = new ArraySegment(guidByteArray, 0, 8);
var segment2 = new ArraySegment(guidByteArray, 8, 8);
guidAsLong1 = BitConverter.ToInt64(segment1.ToArray(), 0);
guidAsLong2 = BitConverter.ToInt64(segment2.ToArray(), 0);
}
public static Guid Int16ToGuid(long guidAsLong1, long guidAsLong2)
{
var segment1 = BitConverter.GetBytes(guidAsLong1);
var segment2 = BitConverter.GetBytes(guidAsLong2);
return new Guid(segment1.Concat(segment2).ToArray());
}
```
And possible usage:
```
Guid guidToConvert = new Guid("cbd5bb87-a249-49ac-8b06-87c124205b99");
long guidAsLong1, guidAsLong2;
GuidToInt16(guidToConvert, out guidAsLong1, out guidAsLong2);
Console.WriteLine(guidAsLong1 + " " + guidAsLong2);
Guid guidConvertedBack = Int16ToGuid(guidAsLong1, guidAsLong2);
Console.WriteLine(guidConvertedBack);
Console.ReadKey();
```
Upvotes: 0 <issue_comment>username_3: My solution should help understand whole process with binary operations:
```
class Program
{
public static Guid LongsToGuid(long l1, long l2)
{
var a = (int)l1;
var b = (short)(l1 >> 32);
var c = (short)(l1 >> 48);
var d = (byte)l2;
var e = (byte)(l2 >> 8);
var f = (byte)(l2 >> 16);
var g = (byte)(l2 >> 24);
var h = (byte)(l2 >> 32);
var i = (byte)(l2 >> 40);
var j = (byte)(l2 >> 48);
var k = (byte)(l2 >> 56);
return new Guid(a, b, c, d, e, f, g, h, i, j, k);
}
public static long BytesToLong(byte[] bytes, int start, int end)
{
long toReturn = 0;
for (var i = start; i < end; i++)
{
toReturn |= ((long)bytes[i]) << (8 * i);
}
return toReturn;
}
static void Main(string[] args)
{
var l1 = long.MinValue;
var l2 = long.MaxValue;
var guid = LongsToGuid(l1, l2);
var guidBytes = guid.ToByteArray();
var readL1 = BytesToLong(guidBytes, 0, 8);
var readL2 = BytesToLong(guidBytes, 8, 16);
Console.WriteLine(l1 == readL1);
Console.WriteLine(l2 == readL2);
Console.ReadKey();
}
}
```
Upvotes: 0 <issue_comment>username_4: As an `unsafe` but very efficient version (no `byte[]` allocations, via `BitConverter`):
```
static void Main()
{
var g = Guid.NewGuid();
Console.WriteLine(g);
GuidToInt64(g, out var x, out var y);
Console.WriteLine(x);
Console.WriteLine(y);
var g2 = GuidFromInt64(x, y);
Console.WriteLine(g2);
}
public static unsafe void GuidToInt64(Guid value, out long x, out long y)
{
long* ptr = (long*)&value;
x = *ptr++;
y = *ptr;
}
public static unsafe Guid GuidFromInt64(long x, long y)
{
long* ptr = stackalloc long[2];
ptr[0] = x;
ptr[1] = y;
return *(Guid*)ptr;
}
```
You could actually do the same thing with a union struct, if you don't like using the `unsafe` keyword, but: it is more code, and a union struct is still fundamentally unverifiable, so this doesn't gain you much at the IL level (it just means you don't need the "allow unsafe code" flag):
```
static void Main()
{
var g = Guid.NewGuid();
Console.WriteLine(g);
var val = new GuidInt64(g);
var x = val.X;
var y = val.Y;
Console.WriteLine(x);
Console.WriteLine(y);
var val2 = new GuidInt64(x, y);
var g2 = val2.Guid;
Console.WriteLine(g2);
}
[StructLayout(LayoutKind.Explicit)]
struct GuidInt64
{
[FieldOffset(0)]
private Guid _guid;
[FieldOffset(0)]
private long _x;
[FieldOffset(8)]
private long _y;
public Guid Guid => _guid;
public long X => _x;
public long Y => _y;
public GuidInt64(Guid guid)
{
_x = _y = 0; // to make the compiler happy
_guid = guid;
}
public GuidInt64(long x, long y)
{
_guid = Guid.Empty;// to make the compiler happy
_x = x;
_y = y;
}
}
```
Upvotes: 2 |
2018/03/19 | 2,731 | 12,027 | <issue_start>username_0: I'm trying to implement a simple read/write lock for a resource accessed concurrently by multiple threads. The workers randomly try reading or writing to a shared object. When a read lock is set, workers should not be able to write until the lock is released. When a write lock is set, read and write are not permitted.
Although my implementation seems to work, I believe it is conceptually wrong.
A read operation taking place should allow for more read operations happening at the same time, resulting in the overall number of reads being larger than the number of writes. My program yields numbers that follow the probability of these operations being performed by a worker.
I feel like my implementation is actually not concurrent at all, but I'm having a hard time identifying the mistake. I would really appreciate being pointed in the right direction.
Main class that dispatches and terminates workers:
```
class Main {
private static final int THREAD_NUMBER = 4;
public static void main(String[] args) {
// creating workers
Thread[] workers = new Thread[THREAD_NUMBER];
for (int i = 0; i < THREAD_NUMBER; i++) {
workers[i] = new Thread(new Worker(i + 1));
}
System.out.println("Spawned workers: " + THREAD_NUMBER);
// starting workers
for (Thread t : workers) {
t.start();
}
try {
Thread.sleep((long) 10000);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
// stopping workers
System.out.println("Stopping workers...");
for (Thread t : workers) {
t.interrupt();
}
}
}
```
The Resource class:
```
class Resource {
enum ResourceLock {
ON,
OFF
}
private static Resource instance = null;
private ResourceLock writeLock = ResourceLock.OFF;
private ResourceLock readLock = ResourceLock.OFF;
private Resource() {}
public static synchronized Resource getInstance() {
if (instance == null) {
instance = new Resource();
}
return instance;
}
public ResourceLock getWriteLock() {
return writeLock;
}
public ResourceLock getReadLock() {
return readLock;
}
public void setWriteLock() {
writeLock = ResourceLock.ON;
}
public void setReadLock() {
readLock = ResourceLock.ON;
}
public void releaseWriteLock() {
writeLock = ResourceLock.OFF;
}
public void releaseReadLock() {
readLock = ResourceLock.OFF;
}
}
```
And finally the Worker class:
```
import java.util.Random;
class Worker implements Runnable {
private static final double WRITE_PROB = 0.5;
private static Random rand = new Random();
private Resource res;
private int id;
public Worker(int id) {
res = Resource.getInstance();
this.id = id;
}
public void run() {
message("Started.");
while (!Thread.currentThread().isInterrupted()) {
performAction();
}
}
private void message(String msg) {
System.out.println("Worker " + id + ": " + msg);
}
private void read() {
synchronized(res) {
while (res.getWriteLock() == Resource.ResourceLock.ON) {
try {
wait();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
res.setReadLock();
// perform read
try {
Thread.sleep((long) 500);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
res.releaseReadLock();
res.notifyAll();
}
message("Finished reading.");
}
private void write() {
synchronized(res) {
while (res.getWriteLock() == Resource.ResourceLock.ON || res.getReadLock() == Resource.ResourceLock.ON) {
try {
wait();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
res.setWriteLock();
// perform write
try {
Thread.sleep((long) 500);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
res.releaseWriteLock();
res.notifyAll();
}
message("Finished writing.");
}
private void performAction() {
double r = rand.nextDouble();
if (r <= WRITE_PROB) {
write();
} else {
read();
}
}
}
```
The reasoning behind having two separate locks for read and write is that I want to have the ability to atomise both operations and their queries for the lock.
Here is an example of the output I'm getting with a 0.5 write probability:
```
Spawned workers: 4
Worker 2: Started.
Worker 3: Started.
Worker 1: Started.
Worker 4: Started.
Worker 2: Finished writing.
Worker 4: Finished reading.
Worker 1: Finished writing.
Worker 3: Finished writing.
Worker 1: Finished reading.
Worker 4: Finished writing.
Worker 2: Finished reading.
Worker 4: Finished reading.
Worker 1: Finished reading.
Worker 3: Finished writing.
Worker 1: Finished writing.
Worker 4: Finished writing.
Worker 2: Finished writing.
Worker 4: Finished writing.
Worker 1: Finished reading.
Worker 3: Finished writing.
Worker 1: Finished writing.
Worker 4: Finished reading.
Worker 2: Finished writing.
Stopping workers...
Worker 4: Finished writing.
Worker 1: Finished writing.
Worker 3: Finished reading.
Worker 2: Finished reading.
```
Help much appreciated.<issue_comment>username_1: You are performing the entire operation within a `synchronized` block, so there is no concurrency. Further, there is no precedence towards any lock kind, as at most one thread can own a lock. Not performing the entire operation in a `synchronized` block won’t work with your current code, as every reader does a `readLock = ResourceLock.OFF` at the end, regardless of how many readers are there. Without a counter, you can’t support multiple readers correctly.
Besides that, it’s a strange code structure, to provide a `Resource` class maintaining the state but leaving it entirely up to the callers to do the right thing with it. That’s not the way to deal with responsibility and encapsulation.
An implementation may look like
```
class ReadWriteLock {
    static final int WRITE_LOCKED = -1, FREE = 0;
    private int numberOfReaders = FREE;
    private Thread currentWriteLockOwner;

    public synchronized void acquireReadLock() throws InterruptedException {
        while(numberOfReaders == WRITE_LOCKED) wait();
        numberOfReaders++;
    }
    public synchronized void releaseReadLock() {
        if(numberOfReaders <= 0) throw new IllegalMonitorStateException();
        numberOfReaders--;
        if(numberOfReaders == FREE) notifyAll();
    }
    public synchronized void acquireWriteLock() throws InterruptedException {
        while(numberOfReaders != FREE) wait();
        numberOfReaders = WRITE_LOCKED;
        currentWriteLockOwner = Thread.currentThread();
    }
    public synchronized void releaseWriteLock() {
        if(numberOfReaders!=WRITE_LOCKED || currentWriteLockOwner!=Thread.currentThread())
            throw new IllegalMonitorStateException();
        numberOfReaders = FREE;
        currentWriteLockOwner = null;
        notifyAll();
    }
}
```
It simply uses a counter of acquired read locks, setting the counter to `-1` when there is a write lock (so write locks can not be nested). Acquiring a read lock may succeed whenever there is no write lock, so there is no need to implement precedence for them, the possibility to succeed when another thread already has a real lock, is sufficient. In fact, when having a significantly larger number of readers than writers, you may encounter the [“starving writer” problem](https://en.wikipedia.org/wiki/Readers%E2%80%93writers_problem).
The worker simplifies to
```
class Worker implements Runnable {
private static final double WRITE_PROB = 0.5;
private static final Random rand = new Random();
private final ReadWriteLock theLock;
private final int id;
public Worker(int id, ReadWriteLock lock) {
theLock = lock;
this.id = id;
}
public void run() {
message("Started.");
while(!Thread.currentThread().isInterrupted()) {
performAction();
}
}
private void message(String msg) {
System.out.println("Worker " + id + ": " + msg);
}
private void read() {
try {
theLock.acquireReadLock();
} catch(InterruptedException e) {
Thread.currentThread().interrupt();
return;
}
// perform read
try {
Thread.sleep(500);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
finally { theLock.releaseReadLock(); }
message("Finished reading.");
}
private void write() {
try {
theLock.acquireWriteLock();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
return;
}
// perform write
try {
Thread.sleep(500);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
finally { theLock.releaseWriteLock(); }
message("Finished writing.");
}
private void performAction() {
double r = rand.nextDouble();
if (r <= WRITE_PROB) {
write();
} else {
read();
}
}
}
```
Note that I avoided global variables here. The lock should get passed to the constructor. It’s also important that the methods return when being interrupted during the lock acquisition. Self interrupting and retrying the acquisition like in your original code will lead to an infinite loop, as the next wait would again throw an `InterruptedException` after you restored the current thread’s interrupted state. Of course, proceeding without having the lock would be wrong too, so the only valid options are not restoring the interrupted state or returning immediately.
The only change to your main program is to construct a pass the lock instance:
```
ReadWriteLock sharedLock = new ReadWriteLock();
// creating workers
Thread[] workers = new Thread[THREAD_NUMBER];
for (int i = 0; i < THREAD_NUMBER; i++) {
workers[i] = new Thread(new Worker(i + 1, sharedLock));
}
System.out.println("Spawned workers: " + THREAD_NUMBER);
// starting workers
for (Thread t : workers) {
t.start();
}
try {
Thread.sleep(10000);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
// stopping workers
System.out.println("Stopping workers...");
for (Thread t : workers) {
t.interrupt();
}
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: This is the simple implementation for `ReadWriteLock` with more priority given to write operation:
```
public class ReadWriteLock{
    private int readers = 0;
    private int writers = 0;
    private int writeRequests = 0;

    public synchronized void lockRead() throws InterruptedException{
        while(writers > 0 || writeRequests > 0){
            wait();
        }
        readers++;
    }
    public synchronized void unlockRead(){
        readers--;
        notifyAll();
    }
    public synchronized void lockWrite() throws InterruptedException{
        writeRequests++;
        while(readers > 0 || writers > 0){
            wait();
        }
        writeRequests--;
        writers++;
    }
    public synchronized void unlockWrite() throws InterruptedException{
        writers--;
        notifyAll();
    }
}
```
Source: <http://tutorials.jenkov.com/java-concurrency/read-write-locks.html>
Upvotes: 2 |
2018/03/19 | 1,347 | 4,380 | <issue_start>username_0: I tried to run this
```
int array( void )
{
char text[12] = { 112, 114, 111, 103, 112, 0 };
int i;
for(i = 0; text[i]; i = i +1)
printf("%c", text[i]);
printf("\n");
return 0;
}
int main( void )
{
int array(void);
return 0;
}
```
and the program runs but I am getting no result. Now when I use the main function to define the program:
```
int main( void )
{
char text[12] = { 112, 114, 111, 112, 0 };
int i;
for(i = 0; text[i]; i = i +1)
printf("%c", text[i]);
printf("\n");
return 0;
}
```
I get the result `progr`(as wanted). I already searched but the only related questions I find are about using `main` as a function name and wrong outputs. Maybe I am searching the wrong way but I would be pleased if someone could answer this.<issue_comment>username_1: As stated in the comments on the question, instead of this...
```
int main(void)
{
int array(void);
return 0;
}
```
You should instead write
```
int main(void)
{
array();
return 0;
}
```
This is because when saying `int something();` you are declaring a prototype of a function called `something` that takes in no parameters and returns an int. Instead, you need to invoke that function simply by typing `something();`.
Upvotes: 2 <issue_comment>username_2: Since the function returns `int`, You should've assigned its return value to `int` too. **OR**
You can just call it normally with only its name because it always returns a `0` constant.
```
int i = array(); // i = 0
// or
array();
```
It's just **printing** and its value is always `0`.
Then I suggest making it `void` instead because You'll need to call it by name only.
```
#include
void array()
{
char text[12] = { 112, 114, 111, 103, 112, 0 };
for(int i = 0; text[i]; i++)
printf("%c", text[i]);
printf("\n");
}
int main()
{
array();
return 0;
}
```
**Note** that You can't assign the type `void` to anything.
Upvotes: 2 <issue_comment>username_3: Your direct problem is that you did not **call the function** `array`, you
`declare` it inside the body of `main`. **The declaration itself does not execute the code.**
A function **declaration** tells the compiler about a function's name, return type, and parameters.
A function **definition** provides the actual body of the function.
**Defining a Function**
The general form of a function definition in C programming language is as follows
```
return_type function_name( parameter list ) {
body of the function
}
```
In your case you have two function definitions:
```
// 1.
int array( void )
{
char text[12] = { 112, 114, 111, 103, 112, 0 };
int i;
for(i = 0; text[i]; i = i +1)
printf("%c", text[i]);
printf("\n");
return 0;
}
// 2.
int main(void)
{
//...
return 0;
}
```
**Function Declarations**
A function declaration tells the compiler about a function name and how to call the function. The actual body of the function can be defined separately.
```
int array(void); // function declaration, no parameters, returns int value
```
**Calling a Function**
**To call a function, you simply need to pass the required parameters along with the function name, and if the function returns a value, then you can store the returned value.**
In your case call it like this:
```
array();
```
since there are no parameters to pass.
To sum up:
```
int array(void); // function declaration, no parameters, returns int value
int array(void) // definition of the function `array`
{
// function body
char text[12] = { 112, 114, 111, 103, 112, 0 };
// ...
return 0;
}
int main(void) // definition of the function `main`,
{
array(); // function call, calling function `array` with no parameters
return 0;
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: `int array(void);` inside of `main` is only a declaration. You are telling to the compiler that you have a function elsewhere that is called `array` that it takes no parameters, and returns nothing. But in no case the `array` function is being called on main. To call it just change the declaration to an statement like:
```
int main()
{
int array(void); // you tell compiler that you have a function array.
array(); // you are executing the code of array();
return 0;
}
```
Upvotes: 2 |
2018/03/19 | 1,386 | 5,869 | <issue_start>username_0: I have a question regarding the repository and service pattern in combination with ASP.NET Core and EF Core. I'm in the stage of learning all of this, so I might miss the clear picture fully in front of me right now.
My current project structure looks as follows:
* Project.Repository.Contracts
* Project.Repository.EF
* Project.Repository.FakeData
* Project.Service.Contracts
* Project.Service
* Project.WebAPI
* WebApp
In my understanding of the repository pattern, only the Project.Repository.EF project knows about Entity Framework.
But all "Repository, Service, ASP, EF" examples register the DbContext in the ConfigureService method in the WebAPI. By calling services.AddDbContext.
Isn't this a break of the concept?
I want to avoid having the Entity Framework dependency in my WebAPI.
So my question is: how can I achieve this?
This is my code so far:
WebApp.Startup.cs
```
public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddMvc();
Project.WebApi.Module.ConfigureServices(services);
services.AddAutoMapper();
}
}
```
Project.WebAPI.Module.cs
```
public class Module
{
public static void ConfigureServices(IServiceCollection services)
{
services.AddSingleton();
services.AddSingleton();
}
}
```
The Service and Repository are just stubs at the moment.
So again, what I want to avoid is, that I have to call services.AddDbContext in my Project.WebAPI.Module.cs class.
What I want to, is to register the DbContext in my Project.Repository.EF Project without hardcoupling it with my WebAPI.
Is this even possible?<issue_comment>username_1: Ok so let me make it a bit clearer for you.
The Repository pattern is more than just a data access layer that does some CRUD operations but I will try to limit my answer just to your problem to help you understand it.
(Keep in mind that the answer below is only relevant if you have a need to use repositories)
First lets talk a bit the naming of your projects.
The project `Project.Repository.Contracts` should be renamed to `Project.Repository.DTOs` to make it clearer.
This project contains Data Transfer Objects which is what your DataContext will use to generate your db and handle all the operations.
Then the `Project.Service.Contracts` can be renamed to `Project.Domain` in which you will have all your business logic related models.
Automapper will be used to do the two way mapping from dtos to domain objects and vice versa.
Now we will have to split your `Project.Repository.EF` project to two different projects.
You see, the repositories only need to know about the data context. EF doesn't need to know about the repositories.
So from the `Project.Repository.EF` project you will create `Project.Data` and `Project.Repository`
Your migrations and the data context itself are in the `Data` project, while the repositories are in the `Repository` project, which references the `Data` project.
Now the `Data` project can be referenced in the API project in order to be used in the startup class, and as long as the services only know about the repositories project (and the web project only about services), you should be fine.
I would also make an extension method in the `Data` project instead of a static call like this `Project.WebApi.Module.ConfigureServices(services);`. It's cleaner.
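For illustration, such an extension method could look roughly like this (the class, method and connection-string names are made up for the example, and `ProjectDbContext` stands in for whatever context the `Data` project defines):

```
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Hypothetical sketch inside the Project.Data project.
public static class DataServiceCollectionExtensions
{
    public static IServiceCollection AddProjectData(
        this IServiceCollection services, IConfiguration configuration)
    {
        // The DbContext stays an implementation detail of the Data project.
        services.AddDbContext<ProjectDbContext>(options =>
            options.UseSqlServer(configuration.GetConnectionString("Default")));

        return services;
    }
}
```

The WebAPI's `Startup` then only calls something like `services.AddProjectData(Configuration);` and never references Entity Framework directly.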
You can also have your Startup class in a shared project between the dependencies and reference this single project on the project from which you start the host as well. Keep in mind that both DbContext, Services and repositories need to be configured at the IoC somehow. My cuppa is to create assembly markers and use [Scrutor](https://github.com/khellang/Scrutor) to scan for these dependencies.
EDIT: As Camilo suggested however, there's absolutely no need for a repository pattern in Entity Framework Core. EF Core is based on interfaces so you can implement your own classes around them.
The main reason why you would need a repository on top of EF was mocking.
This is no longer needed because:
1. In memory database built-in into EF Core
2. Is based on interfaces so you can implement your own classes around them
Upvotes: 2 [selected_answer]<issue_comment>username_2: This may not be the best answer or the one you're looking for, but I hope it helps.
>
> Isn't this a break of the concept?
>
>
>
From an idealistic view, perhaps. The important portion that you're separating out into another project(s) is the implementation of your repository, the setup of your context (mappings, etc).
The value in that is two-fold (at least):
1. You can reuse these in other projects
2. You can switch them out with a different implementation (with a matching interface) within the same WebAPI project. For instance, if you were supporting a legacy database and a new database.
>
> Is this even possible?
>
>
>
Possibly with extra work and complexity, but is it practical? IME, I haven't seen a situation where it would improve the readability and maintainability of the code.
The question I would ask is, Am I going to need to dynamically choose between EF and a completely different repository type (such as Dapper, NHibernate, etc)? If the answer is no, or even not in the near future, I wouldn't add to the complexity.
As my grandmother used to say, "Don't borrow trouble." If you do need to completely switch to a different repository framework / ORM down the road, switching out the lines in `ConfigureServices` will be trivial. It's only complicated if you need to support two or more at once.
Upvotes: 0 |
2018/03/19 | 917 | 3,445 | <issue_start>username_0: I am trying to move entries from my old tables to the new one with the updated schema. The problem is, I have to move content from 10 tables with old config to 10 tables with the new config.
I am doing this with the help of console command. When I add the new table and execute the command, I get Duplicate entry error for the tables that already have data which is obvious.
When I try to use `DB::connection('mysql_old')->table('users')->truncate();`, it throws a `1701 Cannot truncate a table referenced in a foreign key constraint` error, which is obvious too!
Here is how I am moving entries from old tables to the new one.
```
$entries = DB::connection('mysql_old')->table('users')->get();
DB::table('users')->truncate();
foreach($entries as $entry){
$user = \App\User::create([
'name' => $entry->name,
'email' => $entry->email,
'status' => $entry->status,
'credits' => $entry->credits,
'role' => $entry->user_role,
'email_subscription' => $entry->email_subscription,
'activation_key' => $entry->activation_key,
'password' => $<PASSWORD>,
'remember_token' => $entry->remember_token,
'created_at' => $entry->created_at,
'updated_at' => $entry->updated_at
]);
}
```
The only solution is to disable foreign key check before truncate and enable it again after truncate (I think). It is a relational database as obvious. So, is there any better way to complete this task?
I thought about giving a try to move entries from old table to the new one in `a relational way` but it is not possible in this case.
I could execute `php artisan migrate:refresh` every time the command runs. But here is the problem with that: there are more than 25 tables, and it takes about 20-30 seconds to complete `migrate:refresh`.
I am really confused how to get this done. Is there any proper or standard way?<issue_comment>username_1: You can do this:
```php
Schema::disableForeignKeyConstraints();
// Your database operations go here..
Schema::enableForeignKeyConstraints();
```
Upvotes: 4 <issue_comment>username_2: Finally, I found the solution to turn off and turn on the foreign key check. Here is how I moved information from old table to new one.
```
// Disable foreign key checks!
DB::statement('SET FOREIGN_KEY_CHECKS=0;');
// Move users from old table to the new one
$entries = DB::connection('mysql_old')->table('users')->get();
DB::table('users')->truncate();
foreach($entries as $entry){
$user = \App\User::create([
'name' => $entry->name,
'email' => $entry->email,
'status' => $entry->status,
'credits' => $entry->credits,
'role' => $entry->user_role,
'email_subscription' => $entry->email_subscription,
'activation_key' => $entry->activation_key,
'password' => $<PASSWORD>,
'remember_token' => $entry->remember_token,
'created_at' => $entry->created_at,
'updated_at' => $entry->updated_at
]);
}
// Enable foreign key checks!
DB::statement('SET FOREIGN_KEY_CHECKS=1;');
```
It worked!
Upvotes: 4 [selected_answer]<issue_comment>username_3: If you have a migrations table, just add an index to the attribute before referencing it, like this:
```
$table->unsignedBigInteger('city_id')->index();
$table->foreign('city_id')->references('id')
->on('cities')->onDelete('cascade');
```
Upvotes: 0 |
2018/03/19 | 977 | 3,369 | <issue_start>username_0: ### Steps to reproduce
I have application bootstrapped from `vue-cli` with a `webpack` template. I'm running it on Chrome `65.0.3325.146` but it also exists on `64.X.XXX` version.
I'm adding here:
`package.json`: <https://gist.github.com/marcinlesek/a7e6076ce4befe2e810743fdbaf81480>
`webpack.base.conf.js`: <https://gist.github.com/marcinlesek/80cbf27b6ef4172248709f32c257d0cd>
### What is expected?
The application should works fine with Chrome Browser and I should be able to disable/change styles in Chrome Dev Tools.
### What is actually happening?
When I change a style via Chrome `dev tools`, it breaks all styles (after changing or disabling a single property); the page then looks like pure HTML without any styling. A fresh dev tools setup and a Chrome reinstall don't help. What is a bit tricky: on Firefox `58.0.2` everything works correctly.
---
My collegues also had this problem, so it convince me that it isn't my local bug but something bigger on Vue side. Also find some questions regarding this bug, like [Page styles break when I change styles in Chrome DevTools with Webpack HMR](https://stackoverflow.com/questions/48407862/page-styles-break-when-i-change-styles-in-chrome-devtools-with-webpack-hmr)
Thanks in advance.
Best regards,
Marcin<issue_comment>username_1: I've encountered the issue as well, and I was able to prevent this by disabling CSS Source maps in development. I'm still looking into why this only happens on Chrome, but at least we can start looking there. I don't believe this is a Webpack issue.
**-- Updated --**
I simply changed the devtool to "eval-source-map" in my config/index.js file and everything works.
```
file: config/index.js
...
// https://webpack.js.org/configuration/devtool/#development
devtool: 'eval-source-map'
...
```
Upvotes: 2 <issue_comment>username_2: **I found another solution**. Thanks to @username_1's answer regarding changing `cheap-module-eval-source-map` to `eval-source-map`. Unfortunately, this change **didn't fix my styles** in Chrome Dev Tools, but it gave me a good place to look.
After a bit I found that changing `cacheBusting: true,` to `false` in `config/index.js` solves it, and now it's possible to change styles in Chrome Dev Tools.
```
// file: config/index.js
...
// If you have problems debugging vue-files in devtools,
// set this to false - it *may* help
// https://vue-loader.vuejs.org/en/options.html#cachebusting
cacheBusting: false,
...
```
Hope this will help anyone! :)
Upvotes: 3 [selected_answer]<issue_comment>username_3: I had this issue, but only when I had multiple `<style>` blocks in one component.
E.g.,
```
...
...
```
I couldn't work out the exact cause, except I noted that I could see that the `sources` devtools tab only ever shows one inline style block, so figure there's some fragile trickery there. My quick workaround was to simply move at least one of the style blocks into its own file.
```
...
```
I don't know why this worked. Hope it helps someone.
Upvotes: 1 <issue_comment>username_4: Inside your webpack config file you can try to **enable source map** for your sass loader configuration.
You need to edit your file as follow:
```
module.exports = {
css: {
loaderOptions: {
sass: {
sourceMap: true
}
}
}
}
```
Upvotes: 2 |
2018/03/19 | 749 | 2,762 | <issue_start>username_0: ```
new.df <- as.data.frame(match(unique_numbers$ID, MASTERFILE$ID))
```
I have a few million rows in a data frame called MASTERFILE. It contains a column "ID" with a bunch of integers. I have another data frame called "unique\_numbers" which has a similar integer column "ID" with numbers in it.
I want to match the two "ID" columns from the different data frames so that the IDs that match in the MASTERFILE, will be copied to the new data frame "new.df".
The above command seems to work, but I'm afraid it only goes through each number once, and the MASTERFILE may have the same ID written in multiple rows, which I think it doesn't pick up!
2018/03/19 | 354 | 1,405 | <issue_start>username_0: I'm using the proxy integration with my Java lambda function. The input for the lambda handler is a JSON object that represents an incoming request. It has a body, headers, query params and so on. But it doesn't include the source URL that is parsed by the API Gateway for the body, query params and etc. Is there a way to get it?
The thing is API Gateway doesn't support arrays in query parameters. Example: `/endpoint?numbers=1&numbers=2`. Seems, the only way to pass an array is to pass it as a JSON array string `/endpoint?numbers=[1,2]` but it is not always suitable. So I want to parse the URL by myself.<issue_comment>username_1: Unfortunately, API Gateway doesn't provide you with the full URL for you to parse yourself.
If you have the option of using the `POST` method to `/endpoint`, you might consider sending a request body instead of query string parameters.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The 'path' field inside the 'requestContext' field contains the path of API gateway resource. See [docs](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html#api-gateway-create-api-as-simple-proxy-for-lambda-test) for more details.
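For illustration, a minimal Java handler that reads that field could look like this (the handler class name is made up; the event is assumed to arrive as the usual proxy-event map described in the linked docs):

```
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

// Minimal sketch: echo back the resource path from the proxy event.
public class PathEchoHandler implements RequestHandler<Map<String, Object>, String> {
    @Override
    @SuppressWarnings("unchecked")
    public String handleRequest(Map<String, Object> event, Context context) {
        Map<String, Object> requestContext = (Map<String, Object>) event.get("requestContext");
        return (String) requestContext.get("path"); // e.g. "/stage/endpoint"
    }
}
```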
Upvotes: 0 <issue_comment>username_3: you can inject the UriInfo object. It holds all the information you need.
```
@Context
UriInfo uri;
```
Upvotes: 0 |
2018/03/19 | 1,064 | 3,331 | <issue_start>username_0: I couldn't figure out what Vtiger was on about when it asked me to set the php.ini file to `error_reporting`. Even after the error\_reporting had been edited to show;
`error_reporting = E_WARNING & ~E_NOTICE & ~E_DEPRECATED & ~E_STRICT`
The CRM still persists in telling me it requires resetting, no matter how many time I restarted apache with `sudo apachectl restart`
Anyway...
After moving on, I managed to get to add the database, which at first, looks straight forward, but beware! There's one more huddle to get over before you can successfully install.
You'll probably receive the command to add:
```
sql_mode = ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
```
Big headache!!
Firstly, Mac OS X 10.13.* running MySQL mysql-5.7.21-macos10.13-x86_64 doesn't ship with a /my.cnf, and if you're like me, your MySQL files will probably be stored in usr/local/mysql/bin/.
Stop looking, you won't find it anywhere else on your Mac...
But there is light at the end of this tunnel...<issue_comment>username_1: start up terminal and type:
```
sudo pico /etc/my.cnf
```
My.cnf will be empty. Copy and paste this in the file:
```
[mysqld]
sql_mode = ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
```
Save it (ctrl-x and hit y to accept saving the file) and exit, back into terminal.
Type:
```
sudo touch /etc/my.cnf
```
Then run mysql; `mysql -u root -p`
and check the entry,
```
SHOW VARIABLES LIKE 'sql_mode';
```
The result should show the settings you added to my.cnf
Now return to vtiger and continue the installation and it should work :)
Upvotes: 3 <issue_comment>username_2: If you use mysql on windows, please edit file [mysql.ini]
(sample: C:\wamp64\bin\mysql\mysql5.7.21\mysql.ini)
and add below info and then restart mysql service. It's OK
```
[mysqld]
sql_mode = ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
```
Upvotes: 3 <issue_comment>username_3: For Ubuntu - Run command:
`sudo nano /etc/mysql/my.cnf`
Add the following part to the bottom:
```
[mysqld]
sql_mode=ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
```
Run command to restart MySQL Service:
`sudo service mysql restart`
You will also need to change Database Collation to `utf8_general_ci` in phpmyadmin.
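If you prefer doing that in SQL instead of phpMyAdmin, the equivalent statement is roughly this (the database name is a placeholder):

```sql
ALTER DATABASE vtigercrm CHARACTER SET utf8 COLLATE utf8_general_ci;
```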
Done !
Upvotes: 3 <issue_comment>username_4: Fire this query through the root user in MySQL
```sql
SET GLOBAL sql_mode = 'ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION'
```
And check if the settings are applied by firing the query as follows
```sql
SHOW GLOBAL VARIABLES LIKE 'sql_mode';
```
Upvotes: 1 <issue_comment>username_5: Quick & Dirty:
edit /modules/Install/Utils.php
Block: \* Function to check sql\_mode configuration
Line 349: return false;
change false to true
YOU DISABLE THE CHECK - AT YOUR OWN RISK
Vtiger CRM 7.3.0
Upvotes: 1 <issue_comment>username_6: You basically have to disable it in utils.php as mysql 8.0 doesnt support NO\_AUTO\_CREATE\_USER.
Upvotes: 0 <issue_comment>username_7: [Windows Manual Installations]
Find My.ini in C:\ProgramData\MySQL\MySQL Server 8.0
Replace
```
sql-mode="ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION"
```
note: MySQL 8.0 does not support NO_AUTO_CREATE_USER (so remove it, or the MySQL service will not start)
Upvotes: 1 |
2018/03/19 | 351 | 1,133 | <issue_start>username_0: Is it possible to list all S3 buckets using a boto3 resource, ie `boto3.resource('s3')`?
I know that it's possible to do so using a low-level service client:
```
import boto3
boto3.client('s3').list_buckets()
```
However in an ideal world we can operate at the higher level of resources. Is there a method that allows us to to do and, if not, why?<issue_comment>username_1: Get `.buckets.pages()` from the `S3` resource and then loop through the pages to grab the buckets:
```
import boto3
buckets_iter = boto3.resource('s3').buckets.pages()
buckets = []
for page in buckets_iter:  # each page is a list of Bucket resources
    buckets += page
print(buckets)
```
I hope this helps.
Upvotes: 2 <issue_comment>username_2: You can use [`s3.buckets.all()`](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/collections.html#boto3.resources.collection.CollectionManager):
```
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
```
Using list comprehension:
```
s3 = boto3.resource('s3')
buckets = [bucket.name for bucket in s3.buckets.all()]
print(buckets)
```
Upvotes: 6 [selected_answer] |
2018/03/19 | 1,591 | 5,876 | <issue_start>username_0: I have one file called `ClientList.txt` that has a output of:
```
client1.hello.com
client2.hello.com
client3.hello.com
```
And I use this script to append the values of the `ClientList.txt` in a `output.txt` file. Code:
```
with open("ClientList.txt", "r") as infile:
with open("output.txt", "a") as outfile:
for line in infile:
outfile.write("".join(["clients name: ",line.strip(), ", clients URL: ", line.strip(), ", service: VIP\n"]))
```
Output:
```
clients name: client1.hello.com, clients URL: client1.hello.com, service: VIP
clients name: client2.hello.com, clients URL: client2.hello.com, service: VIP
clients name: client3.hello.com, clients URL: client3.hello.com, service: VIP
```
Question: In the future I would like to update the `ClientList.txt` with new clients (example: `client4.hello.com` etc.). Is it possible to not append the value if it already exists in the `output.txt` file?<issue_comment>username_1: Files are just simple streams of text, so they don't support any notion of "append this row if no equivalent row exists"; you have to build that manually.
You *can* do that with just a plain text file, but it's clunky and potentially inefficient. What you have to do is read through the file and check for yourself. You can optimize that by reading through the file once and storing it in a set instead of doing it over and over again, but it's still a bit ugly:
```
with open("ClientList.txt", "r") as infile:
with open("output.txt", "r") as outfile:
existing = set(outfile)
with open("output.txt", "a") as outfile:
for line in infile:
outline = "".join(["clients name: ",line.strip(), ", clients URL: ", line.strip(), ", service: VIP\n"])
if outline not in existing:
outfile.write(outline)
existing.add(outline)
```
If you're wondering how that `set(outfile)` works: A file object in Python is an iterable of lines. That's why `for line in infile:` works. And it means we can construct a set of all of the lines just by passing that iterable to `set`.
---
You might be better off with a database.
The simplest database is probably the `dbm` format that comes builtin to Python, which works a lot like a Python dict. Just as you can't store the same key in a dict multiple times (repeats just overwrite the originals), the same is true with a `dbm`. So:
```
with open("ClientList.txt", "r") as infile:
with dbm.open("output.dbm", "c") as outfile:
for line in infile:
outline = "".join(["clients name: ",line.strip(), ", clients URL: ", line.strip(), ", service: VIP\n"])
outfile[outline] = ""
```
Or, better, actually *use* the key-value-ness. If it's the `clients name` rather than the whole string that has to be unique, make that the key, and the rest the value:
```
with open("ClientList.txt", "r") as infile:
with dbm.open("output.dbm", "c") as outfile:
for line in infile:
outline = json.dumps({
"clients name": line.strip(),
"clients URL": line.strip(),
"service": "VIP"})
outfile[clients_name] = outline
```
Then, of course, your output is a `dbm` database rather than a textfile, which only works if the thing that's consuming your data knows how to use a `dbm`. But if you're writing the thing that's consuming your data, that shouldn't be a problem.
---
Of course you have multiple values to associate with each key, so the ideal solution is probably either a multi-column key-value database, a document database, or a relational database. Python comes with a simple relational database called `sqlite3`, which you could use something like this (untested):
```
with open("ClientList.txt", "r") as infile:
db = sqlite3.connect('output.sqlite')
db.execute('''CREATE TABLE IF NOT EXISTS Clients COLUMNS (
Name TEXT PRIMARY KEY,
URL TEXT,
Service TEXT)''')
for line in infile:
db.execute('''INSERT OR IGNORE INTO Clients (Name, URL, Service)
VALUES (?, ?, ?)''', (line.strip(), line.strip(), 'VIP'))
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: Surely you can just check if the client's name is in output.txt like:
```
with open("ClientList.txt", "r") as infile:
with open("output.txt", "a") as outfile:
file=outfile.read()
for line in infile:
clientName=line.strip()
if (clientName in file)==False:
outfile.write("".join(["clients name: ",line.strip(), ", clients URL: ", line.strip(), ", service: VIP\n"]))
```
Not sure if that would work but it should do.
Upvotes: -1 <issue_comment>username_3: As long as the file size is not too big, I'd go with the simplestest solution and just read the data into memory, alter it, write it back. That's extremely simple in python and still fast.
```
with open('ClientList.txt') as f:
    data = set(f.readlines())
data.add('nextValue\n')
with open('ClientList.txt', 'w') as f:
    f.writelines(data)
```
Upvotes: 0 <issue_comment>username_4: you can add a validation like this:`
```
def exist(file_name, link):
    with open(file_name) as tmp:
        for line in tmp:
            if link in line:
                return True
    return False

with open("ClientList.txt", "r") as infile:
    with open("output.txt", "a") as outfile:
        for line in infile:
            if not exist('output.txt', line.strip()):
                outfile.write("".join(["clients name: ", line.strip(), ", clients URL: ", line.strip(), ", service: VIP\n"]))
```
Even closing and opening the file on every iteration may not be a good idea. You could modify the function so that it keeps a reference to the file and only moves the pointer back to the beginning before each search, or you could hold all the content in memory and search there directly; that trade-off is your decision.
Upvotes: 0 |
2018/03/19 | 1,588 | 5,375 | <issue_start>username_0: So I have a .txt file I'm supposed to be reading information in from and displaying in a neat little table. Here's a snippet of the .txt files contents
in the format
Farm name, item count, item, price, total price
```none
Collins Farm, 43900 tomatoes 0.67 29413
Bart Smith Farms, 34910 cassavas 0.99 34560.9
Allen Farms, 117 coconuts 0.54 63.18
```
etc...
It should print out in the console as
>
> Collins Farm (some spaces here) 43900 items contributed totaling $29413.00
>
>
>
```
#include
#include
#include
#include
using namespace std;
int addLine(int);
int main()
{
using std::ifstream;
ifstream myFile;
myFile.open("ASSGN6-B.txt");
string farmName;
int itemCount;
string itemName;
double itemPrice;
double totalPrice;
if (!myFile)
{
cout << "File open failed!" << endl;
}
else
{
cout << "\t\t\t=========================================================" << endl;
cout << "\t\t\t= FARMER'S MARKET INVENTORY =" << endl;
cout << "\t\t\t=========================================================" << endl;
while (myFile.good())
{
getline (myFile, farmName);
getline (myFile, itemName);
myFile >> itemName >> itemPrice >> totalPrice;
cout << farmName << " " << itemCount << " items contributed totaling $" << totalPrice << endl;
}
}
myFile.close();
return 0;
}
```
This is what I've been messing with while trying to figure out how this input stuff works. I guess what I really don't get is how it's supposed to know which item is which. I previously thought it just read in a line at a time, but there's gotta be a way to separate the items even on the same line and print them all separately in the console.
Also, some farm names appear twice in the .txt file and I'm supposed to combine their data into one line if it's a duplicate. Help with this would be appreciated too.
Thanks.<issue_comment>username_1: First piece of advice:
Don't use
```
while (myFile.good()) { ... }
```
You need to make sure that the data you expect to read are indeed read successfully.
See [Why is iostream::eof inside a loop condition considered wrong?](https://stackoverflow.com/questions/5605125/why-is-iostreameof-inside-a-loop-condition-considered-wrong) to understand why.
Coming to the other problems...
The line
```
getline (myFile, farmName);
```
will read an entire line to `farmName`. That's not what you want. You want to read everything up to the comma (`,`) character. `std::getline` has such an option. Use
```
getline (myFile, farmName, ',');
```
It's not clear what you were hoping to accomplish by
```
getline (myFile, itemName);
```
That line can be removed.
Looking at the sample data, all you need is
```
myFile >> itemCount >> itemName >> itemPrice >> totalPrice;
```
to read rest of the data.
However, after you read them, make sure to ignore everything else in that line. You can use `istream::ignore` for that.
Here's my suggestion.
```
while ( getline(myFile, farmName, ',') &&
(myFile >> itemCount >> itemName >> itemPrice >> totalPrice) )
{
cout << farmName << " " << itemCount << " items contributed totaling $" << totalPrice << endl;
// Ignore rest of the line
myFile.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
}
```
Make sure to add
```
#include <limits>
```
to be able to use `std::numeric_limits`.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I think you should try to do more research to understand file operations in c++, but I noticed a few things with your code:
```
getline(myFile, farmName);
```
This line will take the entire line of the file it's reading and store it into farmName. It basically reads the line until it finds the end of line character. I think your intention was for the line to read until the comma for the farm name, thus you would use:
```
getline(myFile, farmName, ',');
```
The third parameter is an optional delimiting character that tells getline what character to look for to stop reading at.
The next lines that I noticed are:
```
getline(myFile, itemName);
myFile >> itemName >> itemPrice >> totalPrice;
```
Why are you reading data into itemName twice? I think you meant to read into itemCount first before itemName by the looks of your file format, thus you should eliminate the first line shown above and just have:
```
myFile >> itemCount >> itemName >> itemPrice >> totalPrice;
```
In terms of adding to the existing data if the farm name appears more than once, your current code overwrites the data from each line after printing and continuing to the next iteration of the while loop. Thus, you would have to redesign your code to save previous values from farms and check if the farm appears more than once and determine what to do from there.
Upvotes: 0 <issue_comment>username_3: Could use [regex](https://regex101.com/r/8YElIU/1):
```
std::string line;
std::regex r{R"(([^,]*), ?([^ ]*) ([^ ]*) ([^ ]*) (\d*.?\d*))"};
std::smatch m;
while(getline(myFile, line)){
std::regex_match(line, m, r);
std::string farmName { m[1] };
int itemCount {std::stoi(m[2])};
std::string itemName { m[3] };
double itemPrice {std::stod(m[4])};
double totalPrice {std::stod(m[5])};
std::cout << farmName << " " << itemCount << " items contributed totaling $" << totalPrice << std::endl;
}
```
Upvotes: 0 |
2018/03/19 | 824 | 3,448 | <issue_start>username_0: I'm making a list of tasks to learn how to use PouchDB / CouchDB, the application is quite simple, would have authentication and the user would create their tasks.
My question is regarding how to store each user's information in the database. Should I create a database for each user with their tasks? Or is there a way to put all of the tasks of all users into a database called "Tasks" and somehow filter the synchronization so that PouchDB does not synchronize the whole database (including other users' tasks) that is in the server?
(I have read the pouchdb documentation a few times and I have not been able to define this, if it is documented, please inform me where.)<issue_comment>username_1: Both pattern are valid. The only difference is that in order to use the filtered replication, you need to provide access to the main database.
Since it's in javascript, it's easy to get credentials and then access the main database. This would give users the ability to see everyone's data.
A more secure approach would be to use a database-per-user pattern. Each database will be protected by the user's credentials.
Upvotes: 2 <issue_comment>username_2: You can use both approaches to fulfill your use case:
**Database per user**
* A database per user, is the db-per-user pattern in CouchDB. CouchDB can handle the database creation/deletion each time a user is created/deleted in CouchDB. In this case each PouchDB client will replicate the complete user database.
* You can enable it in the server [config](http://docs.couchdb.org/en/2.1.1/config/couch-peruser.html)
* This is a proper approach if the users data is isolated and you don't need to share information between users. In this case you can have some scalability issues if you need you sync many user databases with another one in CouchDB. See this [post](https://stackoverflow.com/questions/48886402/couchdb-db-per-user-with-shared-data-scalability).
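For reference, a minimal sketch of that server config change (CouchDB 2.x ini syntax; the `delete_dbs` line is optional):

```
[couch_peruser]
enable = true
; optionally also remove a user's database when the user doc is deleted
delete_dbs = true
```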
**Single database for every user**
* You need to use the filtered-replication feature in CouchDB/PouchDB. This [post](https://pouchdb.com/2015/04/05/filtered-replication.html) explains how to use it.
* With this approach you can replicate a subset of the CouchDB database in PouchDB
* As you have a single database, it is easier to share info between users
* But, this approach has some performance problems. The filtering process is very inefficient. As it has to process the whole dataset, including the deleted documents to determine the set of documents to be included in the replication. This filtering is done in a couchdb external process in the server which add more cost to the process.
* If you need to use the filtering approach it is better to use a *Mango Selector* for this purpose as it is evaluated in the CouchDB main process and it could be indexed. See [options.selector](https://pouchdb.com/api.html#replication) in the PouchDB replication filtering options.
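A minimal sketch of such a selector-based replication from the PouchDB side (the database names, remote URL and the `owner` field are assumptions for the example; the `selector` option needs a Mango-capable CouchDB 2.x server):

```
var PouchDB = require('pouchdb');

var localDB = new PouchDB('tasks');
var remoteDB = new PouchDB('http://localhost:5984/tasks');

// Pull only this user's documents; the selector is evaluated by CouchDB itself,
// which is much cheaper than a JavaScript filter function.
localDB.replicate.from(remoteDB, {
  live: true,
  retry: true,
  selector: { owner: 'user_42' }
});
```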
**Conclusion**
Which is better? depends on your use case... In any case you should consider the scalability issues in both cases:
* In the case of filtered replication, you will face some issues as the number of documents grow if you have to filter the complete dataset. This is reported to be 10x faster when using mango selectors.
* In the case of db-per-user, you will have some issues if you need to consolidate the different user databases in a single one when the number of users grow.
Upvotes: 3 [selected_answer] |
2018/03/19 | 468 | 1,985 | <issue_start>username_0: Let say I have the following VCs:
RootVC --> VC A --> VC B
I'm using present method to present view controller from RootVC to VC A then to VC B. Now I'm on VC B and I want to dismiss from VC B back to RootVC using
```
self.view.window!.rootViewController?.dismiss(animated: true, completion: nil)
```
it works but I still see VC A shows up during the dismiss process. Then, I try this method
```
self.presentationController?.presentedViewController.dismiss(animated: true, completion: nil)
```
It also works to dismiss back to root VC but I still see VC A in process.
My question is is there a way to not show VC A during the dismiss process? I already try animated: false but still get the same result. Thanks!<issue_comment>username_1: You can try to use this:
```
navigationController?.popToRootViewController(animated: false)
```
Upvotes: -1 <issue_comment>username_2: You need to make the change in the `modalPresentationStyle` to the `.custom`. The `custom` will allow you to view the `presentingViewController` view when the current visible controller's view is transparent.
Now when you want to go back to root view on the current presenting stack you need to call the method `dismissToRootViewController(animated: completion:)`.
The implementation of this method makes all the intermediate presenting view controllers' views transparent, which gives you a dismiss animation straight from `VC B` to `RootVC`.
```
extension UIViewController {
func dismissToRootViewController(animated: Bool, completion: (() -> Swift.Void)? = nil) {
var viewController = self.presentingViewController
while viewController?.presentingViewController != nil {
viewController?.view.alpha = 0.0
viewController = viewController?.presentingViewController
}
self.dismiss(animated: true) {
viewController?.dismiss(animated: false, completion: completion)
}
}
}
```
Upvotes: 0 |
2018/03/19 | 433 | 1,764 | <issue_start>username_0: I tried running the program below:
```
from functools import lru_cache

@lru_cache(Maxsize = None)
def count(n):
    factorial_num = 1
    num_digits = 0
    if n == 1:
        factorial_num = 1
    else:
        factorial_num = n * count(n-1)
    return len(str(factorial_num))
```
However, it didn't give me the length of the factorial number as anticipated.
I also wanted to use the code to find the factorial of very big numbers in the range of billions and tried using `lru_cache`. Still, no luck.
2018/03/19 | 1,069 | 3,447 | <issue_start>username_0: ```
template <class T>
struct foo {
    int x;
    decltype(x) f1();
};
```
It seems to be impossible to define `f1` out-of-line. I have tried the following definitions, and none of them work:
```
template <class T> decltype(x) foo<T>::f1() {}
template <class T> auto foo<T>::f1() -> decltype(x) {}
template <class T> auto foo<T>::f1() { return x; }
template <class T> decltype(std::declval<foo<T>>().x) foo<T>::f1() {}
// This return type is copied from the gcc error message
template <class T> decltype (((foo<T>*)(void)0)->foo<T>::x) foo<T>::f1() {}
```
This isn't a problem in real code because changing the in-class declaration of `f1` to `auto f1() -> decltype(x);` allows the second definition. but I'm puzzled as to why that changes anything. Is it even possible to declare the original `f1` out-of-line?<issue_comment>username_1: As dumb as this might seem, I believe the following is correct:
```
template <class T>
struct foo {
    int x;
    decltype(x) f1();
};

template <class T>
int foo<T>::f1() { return 0; }
```
Clang accepts it, but GCC doesn't, so I am going to say that I think GCC has a bug. [[Coliru link](http://coliru.stacked-crooked.com/a/de51b5df3a5c7299)]
The issue is whether these two declarations of `f1` declare the same function (more technically, the same member function of the same class template). This is governed by [basic.link]/9, according to which:
>
> Two names that are the same (Clause 6) and that are declared in different scopes shall denote the same variable, function, type, template or namespace if
>
>
> * both names have external linkage or else both names have internal linkage and are declared in the same translation unit; and
> * both names refer to members of the same namespace or to members, not by inheritance, of the same class; and
> * when both names denote functions, the parameter-type-lists of the functions (11.3.5) are identical; and
> * when both names denote function templates, the signatures (17.5.6.1) are the same.
>
>
>
The requirements appear to be satisfied, provided that the return types are in fact the same (since the return type is part of the signature for a class member function template, according to [defns.signature.member.templ]). Since `foo<T>::x` is `int`, they are the same.
This would not be the case if the type of `x` were dependent. For example, GCC and Clang both reject the definition when the declaration of `x` is changed to `typename identity::type x;`. [[Coliru link](http://coliru.stacked-crooked.com/a/8e27f386bdae0df7)] In that case, [temp.type]/2 would apply:
>
> If an expression *e* is type-dependent (17.6.2.2), `decltype(`*e*`)` denotes a unique dependent type. Two such
> *decltype-specifiers* refer to the same type only if their *expression*s are equivalent (17.5.6.1). [ *Note:* However, such a type may be aliased, e.g., by a *typedef-name*. — *end note* ]
>
>
>
Perhaps GCC is in error for considering `x` to be type-dependent (it shouldn't be). However, this note suggests a workaround:
```
template <class T>
struct foo {
    int x;
    decltype(x) f1();
    using x_type = decltype(x);
};

template <class T>
typename foo<T>::x_type foo<T>::f1() { return 0; }
```
This works on both GCC and Clang. [[Coliru link](http://coliru.stacked-crooked.com/a/fc19cda216bf94e4)]
Upvotes: 3 [selected_answer]<issue_comment>username_2: (I cheated... sort of)
Using MSVC I clicked on "quick action -> create function declaration" for that member function and got this:
```
template <class T>
decltype(x) foo<T>::f1()
{
    return x;
}
```
Upvotes: 0 |
2018/03/19 | 1,105 | 3,813 | <issue_start>username_0: I have an object setup like this:
```
window.onload = function(){
window.someSettings = {
result: false,
init: function() {
$html = $('html'),
$window = $(window),
$someElement = $('container');
this.doWork($html, $window, $someElement);
},
doWork: function($html, $window, $someElement){
console.log('inside do work ', this.result); //prints true
$window.scroll(function(){
console.log('inside scroll ', this.result); //prints undefined
})
},
trigger: function() {
this.result = true;
}
}
$(function() {
window.someSettings.init();
})
window.someSettings.trigger();
}
```
inside the `doWork()` function, console log prints **true**
but the same thing inside the `$window.scroll()` prints **undefined**.
I don't understand why it is `undefined`, or how to fix the scoping so I can access the `result` variable.
Here's the fiddle for the same: <https://jsfiddle.net/ufwkecLL/4/>
2018/03/19 | 5,189 | 22,523 | <issue_start>username_0: I had a form that had two fields. An InputFilter with validators was applied to it. It was working fine. Then I moved the fields to a fieldset and added the fieldset to the form. Now the validators are no longer applied to the fields. The validator object's `isValid` method is not triggered at all. So how do I apply the InputFilter validators to fields in a fieldset? Here are the classes:
Text class Validator
```
namespace Application\Validator;
use Zend\Validator\StringLength;
use Zend\Validator\ValidatorInterface;
class Text implements ValidatorInterface
{
protected $stringLength;
protected $messages = [];
public function __construct()
{
$this->stringLengthValidator = new StringLength();
}
public function isValid($value, $context = null)
{
if (empty($context['url'])) {
if (empty($value)) return false;
$this->stringLengthValidator->setMin(3);
$this->stringLengthValidator->setMax(5000);
if ($this->stringLengthValidator->isValid($value)) {
return true;
}
$this->messages = $this->stringLengthValidator->getMessages();
return false;
}
if (!empty($value)) return false;
return true;
}
public function getMessages()
{
return $this->messages;
}
}
```
Test class InputFilter
```
namespace Application\Filter;
use Application\Fieldset\Test as Fieldset;
use Application\Validator\Text;
use Application\Validator\Url;
use Zend\InputFilter\InputFilter;
class Test extends InputFilter
{
public function init()
{
$this->add([
'name' => Fieldset::TEXT,
'required' => false,
'allow_empty' => true,
'continue_if_empty' => true,
'validators' => [
['name' => Text::class],
],
]);
$this->add([
'name' => Fieldset::URL,
'required' => false,
'allow_empty' => true,
'continue_if_empty' => true,
'validators' => [
['name' => Url::class],
],
]);
}
}
```
Test class Fieldset
```
namespace Application\Fieldset;
use Zend\Form\Fieldset;
class Test extends Fieldset
{
const TEXT = 'text';
const URL = 'url';
public function init()
{
$this->add([
'name' => self::TEXT,
'type' => 'textarea',
'attributes' => [
'id' => 'text',
'class' => 'form-control',
'placeholder' => 'Type text here',
'rows' => '6',
],
'options' => [
'label' => self::TEXT,
],
]);
$this->add([
'name' => self::URL,
'type' => 'text',
'attributes' => [
'id' => 'url',
'class' => 'form-control',
'placeholder' => 'Type url here',
],
'options' => [
'label' => self::URL,
],
]);
}
}
```
Test class Form
```
namespace Application\Form;
use Application\Fieldset\Test as TestFieldset;
use Zend\Form\Form;
class Test extends Form
{
public function init()
{
$this->add([
'name' => 'test',
'type' => TestFieldset::class,
'options' => [
'use_as_base_fieldset' => true,
],
]);
$this->add([
'name' => 'submit',
'attributes' => [
'type' => 'submit',
'value' => 'Send',
],
]);
}
}
```
TestController class
```
namespace Application\Controller;
use Application\Form\Test as Form;
use Zend\Debug\Debug;
use Zend\Mvc\Controller\AbstractActionController;
use Zend\View\Model\ViewModel;
class TestController extends AbstractActionController
{
private $form;
public function __construct(Form $form)
{
$this->form = $form;
}
public function indexAction()
{
if ($this->getRequest()->isPost()) {
$this->form->setData($this->getRequest()->getPost());
Debug::dump($this->getRequest()->getPost());
if ($this->form->isValid()) {
Debug::dump($this->form->getData());
die();
}
}
return new ViewModel(['form' => $this->form]);
}
}
```
TestControllerFactory class
```
namespace Application\Factory;
use Application\Controller\TestController;
use Application\Form\Test;
use Interop\Container\ContainerInterface;
use Zend\ServiceManager\Factory\FactoryInterface;
class TestControllerFactory implements FactoryInterface
{
public function __invoke(ContainerInterface $container, $requestedName, array $options = null)
{
$form = $container->get('FormElementManager')->get(Test::class);
return new TestController($form);
}
}
```
Test class
```
namespace Application\Factory;
use Application\Filter\Test as Filter;
use Application\Entity\Form as Entity;
use Application\Form\Test as Form;
use Interop\Container\ContainerInterface;
use Zend\Hydrator\ClassMethods;
use Zend\ServiceManager\Factory\FactoryInterface;
class Test implements FactoryInterface
{
public function __invoke(ContainerInterface $container, $requestedName, array $options = null)
{
return (new Form())
->setHydrator($container
->get('HydratorManager')
->get(ClassMethods::class))
->setObject(new Entity())
->setInputFilter($container->get('InputFilterManager')->get(Filter::class));
}
}
```
Test Fieldset
```
namespace Application\Factory;
use Application\Entity\Fieldset as Entity;
use Application\Fieldset\Test as Fieldset;
use Interop\Container\ContainerInterface;
use Zend\Hydrator\ClassMethods;
use Zend\ServiceManager\Factory\FactoryInterface;
class TestFieldset implements FactoryInterface
{
public function __invoke(ContainerInterface $container, $requestedName, array $options = null)
{
return (new Fieldset())
->setHydrator($container->get('HydratorManager')->get(ClassMethods::class))
->setObject(new Entity());
}
}
```
UPDATE
------
I updated the fieldset class according to @Nukeface's advice by adding `setInputFilter()`. But it did not work. It did not even execute the InputFilter class's `init` method. Perhaps I did it wrong:
```
<?php
namespace Application\Fieldset;
use Application\Filter\Test as Filter;
use Zend\Form\Fieldset;
use Zend\InputFilter\InputFilterAwareTrait;
class Test extends Fieldset
{
use InputFilterAwareTrait;
const TEXT = 'text';
const URL = 'url';
public function init()
{
$this->add([
'name' => self::TEXT,
'type' => 'textarea',
'attributes' => [
'id' => 'text',
'class' => 'form-control',
'placeholder' => 'Type text here',
'rows' => '6',
],
'options' => [
'label' => self::TEXT,
],
]);
$this->add([
'name' => self::URL,
'type' => 'text',
'attributes' => [
'id' => 'url',
'class' => 'form-control',
'placeholder' => 'Type url here',
],
'options' => [
'label' => self::URL,
],
]);
$this->setInputFilter(new Filter());
}
}
```<issue_comment>username_1: Just use the `InputFilterProviderInterface` class to your fieldset. This implements the `getInputFilterSpecification` method to your fieldset, which executes the input filters mentioned in this method.
```
class MyFieldset extends Fieldset implements InputFilterProviderInterface
{
public function init()
{
$this->add([
'name' => 'textfield',
'type' => Text::class,
'attributes' => [
...
],
'options' => [
...
]
]);
}
public function getInputFilterSpecification()
{
return [
'textfield' => [
'required' => true,
'filters' => [
...
],
'validators' => [
[
'name' => YourTextValidator::class,
'options' => [
...
],
],
],
],
];
}
}
```
As long as you add this fieldset to your form, the bound filters and validators will be executed on the `isValid` method call of your form.
Upvotes: 0 <issue_comment>username_2: Tried an answer before and ran out of chars (30k limit), so [created a repo instead](https://github.com/username_2/zf-doctrine-form). The repo contains abstraction of the answer below, which is a working example.
Your question shows you having the right idea, just not yet the implementation. It also contains a few mistakes, such as setting a FQCN for a Fieldset name. Hopefully the below can have you up and running.
As a use case, we'll have a basic Address form. Relationships for Country, Timezones and other things I'll leave out of the scope. For more in depth and nesting of Fieldsets (also with Collections) I'll refer you to my repo.
---
General setup
-------------
First create the basic setup. Create the Entity and configuration.
### Basic Entity
```
namespace Demo\Entity;
class Address
{
protected $id; // int - primary key - unique - auto increment
protected $street; // string - max length 255 - not null
protected $number; // int - max length 11 - not null
protected $city; // string - max length 255 - null
// getters/setters/annotation/et cetera
}
```
To handle this in a generic and re-usable way, we're going to need:
* AddressForm (general container)
* AddressFormFieldset (form needs to be validated)
* AddressFieldset (contains the entity inputs)
* AddressFieldsetInputFilter (must validate the data entered)
* AddressController (to handle CRUD actions)
* Factory classes for all of the above
* a form partial
### Configuration
To tie these together in Zend Framework, these need to be registered in the config. With clear naming, you can already add these. If you're using something like PhpStorm as your IDE, you might want to leave this till last, as the `use` statements can be generated for you.
As this is an explanation, I'm showing you now. Add this to your module's config:
```
// use statements here
return [
'controllers' => [
'factories' => [
AddressController::class => AddressControllerFactory::class,
],
],
'form_elements' => [ // <-- note: both Form and Fieldset classes count as Form elements
'factories' => [
AddressForm::class => AddressFormFactory::class,
AddressFieldset::class => AddressFieldsetFactory::class,
],
],
'input_filters' => [ // <-- note: input filter classes only!
'factories' => [
AddressFormInputFilter::class => AddressFormInputFilterFactory::class,
AddressFieldsetInputFilter::class => AddressFieldsetInputFilterFactory::class,
],
],
'view_manager' => [
'template_map' => [
'addressFormPartial' => __DIR__ . '/../view/partials/address-form.phtml',
],
],
];
```
Fieldset
--------
First we create the Fieldset (and Factory) class. This is because this contains the actual object we're going to handle.
### AddressFieldset
```
// other use statements for Elements
use Zend\Form\Fieldset;
class AddressFieldset extends Fieldset
{
public function init()
{
parent::init(); // called due to inheritance
$this->add([
'name' => 'id',
'type' => Hidden::class,
]);
$this->add([
'name' => 'street',
'required' => true,
'type' => Text::class,
'options' => [
'label' => 'Name',
],
'attributes' => [
'minlength' => 1,
'maxlength' => 255,
],
]);
$this->add([
'name' => 'number',
'required' => true,
'type' => Number::class,
'options' => [
'label' => 'Number',
],
'attributes' => [
'step' => 1,
'min' => 0,
],
]);
$this->add([
'name' => 'city',
'required' => false,
'type' => Text::class,
'options' => [
'label' => 'Name',
],
'attributes' => [
'minlength' => 1,
'maxlength' => 255,
],
]);
}
}
```
### AddressFieldsetFactory
```
// other use statements
use Zend\ServiceManager\Factory\FactoryInterface;
class AddressFieldsetFactory implements FactoryInterface
{
public function __invoke(ContainerInterface $container, $requestedName, array $options = null)
{
$this->setEntityManager($container->get(EntityManager::class));
/** @var AddressFieldset $fieldset */
$fieldset = new AddressFieldset($this->getEntityManager(), 'address');
$fieldset->setHydrator(
new DoctrineObject($this->getEntityManager())
);
$fieldset->setObject(new Address());
return $fieldset;
}
}
```
InputFilter
-----------
Above we created the Fieldset. That allows for the generation of the Fieldset for use in a Form. At the same time, Zend Framework also has defaults already set per type of input (e.g. `'type' => Text::class`). However, if we want to validate it to our own, stricter, standards, we need to override the defaults. For this we need an InputFilter class.
### AddressFieldsetInputFilter
```
// other use statements
use Zend\InputFilter\InputFilter;
class AddressFieldsetInputFilter extends InputFilter
{
public function init()
{
parent::init(); // called due to inheritance
$this->add([
'name' => 'id',
'required' => true,
'filters' => [
['name' => ToInt::class],
],
'validators' => [
['name' => IsInt::class],
],
]);
$this->add([
'name' => 'street',
'required' => true,
'filters' => [
['name' => StringTrim::class], // remove whitespace before & after string
['name' => StripTags::class], // remove unwanted tags
[ // if received is empty string, set to 'null'
'name' => ToNull::class,
'options' => [
'type' => ToNull::TYPE_STRING, // also supports other types
],
],
],
'validators' => [
[
'name' => StringLength::class, // set min/max string length
'options' => [
'min' => 1,
'max' => 255,
],
],
],
]);
$this->add([
'name' => 'number',
'required' => true,
'filters' => [
['name' => ToInt::class], // received from HTML form always string, have it cast to integer
[
'name' => ToNull::class, // if received is empty string, set to 'null'
'options' => [
'type' => ToNull::TYPE_INTEGER,
],
],
],
'validators' => [
['name' => IsInt::class], // check if actually integer
],
]);
$this->add([
'name' => 'city',
'required' => false, // <-- not required
'filters' => [
['name' => StringTrim::class], // remove whitespace before & after string
['name' => StripTags::class], // remove unwanted tags
[ // if received is empty string, set to 'null'
'name' => ToNull::class,
'options' => [
'type' => ToNull::TYPE_STRING, // also supports other types
],
],
],
'validators' => [
[
'name' => StringLength::class, // set min/max string length
'options' => [
'min' => 1,
'max' => 255,
],
],
],
]);
}
}
```
### AddressFieldsetInputFilterFactory
```
// other use statements
use Zend\ServiceManager\Factory\FactoryInterface;
class AddressFieldsetInputFilterFactory implements FactoryInterface
{
public function __invoke(ContainerInterface $container, $requestedName, array $options = null)
{
// Nothing else required in this example. So it's as plain as can be.
return new AddressFieldsetInputFilter();
}
}
```
Form & Validation
-----------------
So. Above we created the Fieldset, its InputFilter and 2 required Factory classes. This already allows us to do a great deal, such as:
* use the InputFilter in stand-alone setting to dynamically validate an object
* re-use Fieldset + InputFilter combination in other Fieldset and InputFilter classes for nesting
### Form
```
use Zend\Form\Form;
use Zend\InputFilter\InputFilterAwareInterface;
// other use statements
class AddressForm extends Form implements InputFilterAwareInterface
{
public function init()
{
//Call parent initializer. Check in parent what it does.
parent::init();
$this->add([
'type' => Csrf::class,
'name' => 'csrf',
'options' => [
'csrf_options' => [
'timeout' => 86400, // day
],
],
]);
$this->add([
'name' => 'address',
'type' => AddressFieldset::class,
'options' => [
'use_as_base_fieldset' => true,
],
]);
$this->add([
'name' => 'submit',
'type' => Submit::class,
'attributes' => [
'value' => 'Save',
],
]);
}
}
```
### Form Factory
```
use Zend\ServiceManager\Factory\FactoryInterface;
// other use statements
class AddressFormFactory implements FactoryInterface
{
public function __invoke(ContainerInterface $container, $requestedName, array $options = null)
{
/** @var AbstractForm $form */
$form = new AddressForm('address', $options);
$form->setInputFilter(
$container->get('InputFilterManager')->get(AddressFormInputFilter::class)
);
return $form;
}
}
```
Making it all come together
---------------------------
I'll show just the `AddressController#addAction`
### AddressController
```
use Zend\Mvc\Controller\AbstractActionController;
// other use statements
class AddressController extends AbstractActionController
{
protected $addressForm; // + getter/setter
protected $entityManager; // + getter/setter
public function __construct(
EntityManager $entityManager,
AddressForm $form
) {
$this->entityManager = $entityManager;
$this->addressForm = $form;
}
// Add your own: index, view, edit and delete functions
public function addAction () {
/** @var AddressForm $form */
$form = $this->getAddressForm();
/** @var Request $request */
$request = $this->getRequest();
if ($request->isPost()) {
$form->setData($request->getPost());
if ($form->isValid()) {
$entity = $form->getObject();
$this->getEntityManager()->persist($entity);
try {
$this->getEntityManager()->flush();
} catch (\Exception $e) {
$this->flashMessenger()->addErrorMessage($e->getMessage());
return [
'form' => $form,
'validationMessages' => $form->getMessages() ?: '',
];
}
$this->flashMessenger()->addSuccessMessage(
'Successfully created object.'
);
return $this->redirect()->toRoute($route, ['param' => 'routeParamValue']);
}
$this->flashMessenger()->addWarningMessage(
'Your form contains errors. Please correct them and try again.'
);
}
return [
'form' => $form,
'validationMessages' => $form->getMessages() ?: '',
];
}
}
```
### AddressControllerFactory
```
class AddressControllerFactory implements FactoryInterface
{
public function __invoke(ContainerInterface $container, $requestedName, array $options = null)
{
/** @var AddressController $controller */
$controller = new AddressController(
$container->get(EntityManager::class),
$container->get('FormElementManager')->get(AddressForm::class)
);
return $controller;
}
}
```
### Display in addressFormPartial
```
$this->headTitle('Add address');
$form->prepare();
echo $this->form()->openTag($form);
echo $this->formRow($form->get('csrf'));
echo $this->formRow($form->get('address')->get('id'));
echo $this->formRow($form->get('address')->get('street'));
echo $this->formRow($form->get('address')->get('number'));
echo $this->formRow($form->get('address')->get('city'));
echo $this->formRow($form->get('submit'));
echo $this->form()->closeTag($form);
```
To use this partial, say in a `add.phtml` view, use:
```
<?= $this->partial('addressFormPartial', ['form' => $form]) ?>
```
This bit of code will work with the demonstrated `addAction` in the Controller code above.
---
Hope you found this helpful ;-) If you have any questions left, don't hesitate to ask.
Upvotes: 1 |
2018/03/19 | 1,743 | 6,668 | <issue_start>username_0: I've been playing with VirtualBox implementations of Xubuntu. I'm learning provisioning test boxes with content using both VBoxManage and Vagrant. Unfortunately my boxes work very erratically. I'd like to enable the debug mode in VirtualBox to better understand why the boxes sometimes freeze up.
Current top menu: VirtualBox VM / Machine / View / Input / Devices / Window / Help. I'd like to be able to get to the / Debug / top menu with its selection of Statistics / Command Line / Logging / Show Log as displayed in the VirtualBox Graphical User Interface (GUI) tool (select Virtual Machine --> Settings --> User Interface).
As I understand it, I'm looking for [the Built In Debugger](https://www.virtualbox.org/manual/ch12.html#ts_debugger). I've found the instructions, but they don't make sense to me.
The debugger can be enabled in three ways:
>
> Start the VM directly using VirtualBox --startvm, with an additional
> --dbg, --debug, or --debug-command-line argument.
>
>
> Set the VBOX\_GUI\_DBG\_ENABLED or VBOX\_GUI\_DBG\_AUTO\_SHOW environment
> variable to true before launching the VirtualBox process. Setting
> these variables (only their presence is checked) is effective even
> when the first VirtualBox process is the VM selector window. VMs
> subsequently launched from the selector will have the debugger
> enabled.
>
>
> Set the GUI/Dbg/Enabled extra data item to true before launching the
> VM. This can be set globally or on a per VM basis.
>
>
>
I know how to start my virtual boxes in two ways. 1) Open the Virtual Box GUI, click on the machine of interest, then start it up. 2) Start the box up from the folder that contains my custom Vagrantfile, then `$ vagrant up`.
It's not clear to me from the above link on VirtualBox how to set up Vagrant, VBoxManage, or the VirtualBox GUI to start up a box with Debug mode enabled (or better yet, enable Debug mode when building a custom box...). I can't believe someone built up a nice GUI and then omitted the chance to implement Debug into the tool.
Note: I'm using MacOS for my host; I've had best luck using `bstoots/xubuntu-16.04-desktop-amd64` as the base for my guest virtual machine box. Anybody been here before? Tips and hints as to how to start a box with debug enabled? Many thanks.<issue_comment>username_1: <https://reactos.org/wiki/VirtualBox#Built-in_VirtualBox_.28low-level.29_debugger> explains things a little better, however only for Windows (I presume 7 ...). There are 3 methods:
1 - Start VBox from the command line window, adding the options as listed (see the VBox User manual for the command line method of running). The advantage is that it is per machine; the disadvantage is that the command line is long and best issued from a shell script (in the case of Windows: BAT or CMD).
2 - Declare the environment variables:
I tried some under Windows 7 (64-bit Enterprise): here I declared the User environment variables VBOX\_GUI\_DBG\_ENABLED and VBOX\_GUI\_DBG\_AUTO\_SHOW (Computer / Properties / Advanced System Settings / Environment Variables), and after restarting the VirtualBox GUI, when I started the VM it would come up in the debug console. My VM was x86\_64 and, like many other kernel debuggers, this one was quite useless for stepping through ROM BIOS (at least the initial portion). I am too lazy to see if I can set the break-point either after the ROM BIOS is RAM-ed (relocated to RAM) or in the boot-loader (first I would have to find if and how this VM relocates BIOS and then where the best place would be to break into the boot-loader, which is custom); but I did things like that in the past with a similar debugger (this one seems to borrow stuff from the old Compusoft CodeView x86 kernel debugger).
3' - Modify the VBox global or per-machine configuration files. I have not tried that, but I tracked down the global file:
%homedisk%:\Users\%username%\.VirtualBox\VirtualBox.xml
It is pretty straightforward; I assume adding the listed item should work (%homedisk% is usually C, substitute %username% for the login name).
3'' - Modify the individual VM's file (\*.vbox). The VMs are in %homedisk%:\Users\%username%\VirtualBox VMs
The \*.vbox files are in XML format and finding where to add the data is also simple. This method has the advantage of being per-VM, and the disadvantage of possibly screwing the VM up (so make a backup).
I have VirtualBox on my home iMac but I have not tried this yet. I did not dig into VirtualBox on Mac file structures, but I would be surprised if they were not somewhat analogous to Windows. The command line should be pretty similar (paths of course would be different, and the shell script too), one possible annoyance might be that you may have to use sudo ...
Regardless of the host, debugging the initial boot sequence of x86_64 is not trivial because usually in the beginning we time-travel to the 80s and pretend that we are running a 386 with two cascaded PICs and the extended address lines controlled by a keyboard controller ... lots of fun! (or not ...)
Upvotes: 2 <issue_comment>username_1: So I tried a few more things (VirtualBox 5.2.20 r125813 on 64-bit Win7 Enterprise).
Method 3'' (per-machine ExtraDataItem): does NOT do anything; it does not even add the 'Debug' menu to the VM window.
Method 3' (global ExtraDataItem): adds the 'Debug' menu to the VM window but does not break at VM start (the VM is just running; you can open the debug console and stop it, but then of course we are deep into the boot process, or past it...). But it could be useful ... no harm in having 'Debug' as a default.
Method 1: works, BUT not as described; even the VBox User manual itself is confused, as page 261 describes the options WRONG. However, chapter 8 gives some ideas, here we go:
you can add the environment variables to the command line:
```
C:\Program Files\Oracle\VirtualBox>vboxmanage startvm "SomeVM" -E VBOX_GUI_DBG_AUTO_SHOW=true -E VBOX_GUI_DBG_ENABLED=true
```
will show the 'Debug' menu, open the debug window and load the VM halted at the reset vector
VBOX\_GUI\_DBG\_ENABLED=true
alone will just add 'Debug' to the VM's window (the VM will run)
VBOX\_GUI\_DBG\_AUTO\_SHOW=true
alone will load VM halted but no 'Debug' menu so really nothing to do ... (however this can be paired with the global setting!)
The remark in the manual (and online) that the variables only have to be defined is NOT true; unless set to "true" they do not have any discernible effect.
BTW: the ExtraDataItem line is:
```
<ExtraDataItem name="GUI/Dbg/Enabled" value="true"/>
```
I decided to set it up: this way all the VMs have the 'Debug' menu enabled but start as usual; if I want to debug one from the start, I use the command line with
-E VBOX\_GUI\_DBG\_AUTO\_SHOW=true
Upvotes: 2 |
2018/03/19 | 502 | 1,921 | <issue_start>username_0: I have created a simple app component which renders a side bar and a dashboard. On a link click within the sidebar, I want to do an AJAX request and change the state of the dashboard. I have moved the click handler function to the index.js app component so it can pass the props down to the dashboard.
index.js:
```
import React from "react";
import { NavBar } from "./components/NavBar";
import { NavBarSide} from "./components/NavBarSide";
import { Dashboard} from "./components/Dashboard"
import { render } from "react-dom";
class App extends React.Component {
handleNavClick(url) {
console.log(url)
}
render() {
return (
<NavBarSide clickHandler={(url) => this.handleNavClick(url)}/>
)
}
}
render(<App/>, window.document.getElementById("root"));
```
My NavBarSide is like so...
NavBarSide.js:
```
import React from 'react';
import { Nav, NavItem, NavLink } from 'reactstrap';
export class NavBarSide extends React.Component {
render() {
return (
<Nav>
  <NavItem>
    <NavLink onClick={this.props.clickHandler("/api/highest/price")}>Highest Price</NavLink>
  </NavItem>
</Nav>
);
}
}
```
Instead of the expected behaviour, this function appears to immediately execute.
If there is a better way of doing this (I think with react-router v4) it would be helpful if that was also included.<issue_comment>username_1: You need to place the function inside another.
Like this:
>
> onClick={()=>this.props.clickHandler("/api/highest/price")}
>
>
>
If not, the render will execute the function on mount because you are triggering it with the "(...)".
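A minimal sketch of the corrected `render`, assuming the reactstrap markup implied by the question's imports:
```
render() {
  return (
    <Nav>
      <NavItem>
        {/* The arrow function defers the call until the link is actually clicked */}
        <NavLink onClick={() => this.props.clickHandler("/api/highest/price")}>Highest Price</NavLink>
      </NavItem>
    </Nav>
  );
}
```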
Upvotes: 0 <issue_comment>username_2: ```
export default class NavBarSide extends React.Component {
constructor(props) {
super(props);
// Bind the function only once - on creation
this.onClick = this.onClick.bind(this);
}
onClick() {
this.props.clickHandler("/api/highest/price");
}
render() {
return (
<NavLink onClick={this.onClick}>Highest Price</NavLink>
);
}
}
```
Upvotes: 1 |
2018/03/19 | 526 | 2,103 | <issue_start>username_0: I was curious about the differences between `.jar` with `.class` files and `.jar` with `.java` files. I partially got the answer [here](https://stackoverflow.com/a/32877987/8614565), But then what is the usefulness of `.java` files in the jar?
My guess is that the java files in the jar are like an interface that prevents compilation error, because I solved the `IllegalAccessError` thrown on runtime by replacing jar files with `.class` with jar files with `.java` specifically when using `Xposed Framework`. (Got the hint from this [thread](https://stackoverflow.com/questions/11155340/java-lang-illegalaccesserror-class-ref-in-pre-verified-class-resolved-to-unexpe).)
**Also**
Thank you for your explanations and they were helpful. But I want to learn more about the differences in compiler's view, because I am wondering why my app works fine even if I only included the jar with java files, not class files (**zxing**). Also there are some cases that throws IllegalAccessException when I include the jar with class files, but not thrown when I include the jar with java files(**xposed**), even though I have to include at least one of them to make the compiler(AIDE) not complain about references, like unknown package. Why does the compiler not complain when I include only jar with java files though the compiler would not be able to resolve the actual implementation of the referred classes?
2018/03/19 | 311 | 1,125 | <issue_start>username_0: I want to write a dataframe to a file
```
dataToGO = {'Midpoint': xdata1, '': "", 'Avg Diam': ydata1, '' : ""}
colums = ['Midpoint', '', 'Avg Diam', '']
ToFile = pad.DataFrame(data=dataToGO, columns=colums)
ToFile.to_csv("processed"+filname+".csv", index=False)
```
However, I want to add 10 blank lines in the file before the contents of the dataframe. How do I do that?
2018/03/19 | 1,279 | 4,683 | <issue_start>username_0: I'm sending emails using: <https://github.com/sendgrid/sendgrid-nodejs/tree/master/packages/mail>
I have not been able to find out HOW I can add the `[Unsubscribe]` equivalent. This is documented here: <https://sendgrid.com/docs/Classroom/Basics/Marketing_Campaigns/unsubscribe_groups.html#-Using-a-Custom-Unsubscribe-Link>
On the website, you just use a shortcode [Unsubscribe], this does not work when sending emails via the sendgrid/mail package.<issue_comment>username_1: Since you're sending using code, it's a "transactional" type of message. You'll want to either turn on the Subscription Tracking filter at the account level (via [UI](subscription tracking setting) or [API](https://sendgrid.com/docs/API_Reference/Web_API_v3/Settings/tracking.html#Update-Subscription-Tracking-Settings-PATCH)), or turn it on as you send the message, as part of the [mail/send API call](https://sendgrid.com/docs/API_Reference/Web_API_v3/Mail/index.html), under `tracking_settings`.
It's important to note that you can't mix those. If you define *anything* in the `mail/send` API call, you'll need to define *everything* for Subscription Tracking in that call. SendGrid won't look at some settings at the mail level, and some at the account level.
Most users will just set it at the account level. There, you can customize the HTML & Text of the Unsubscribe footer, customize the HTML of the landing page, or redirect landing to a URL of your choosing, which will send the recipient there with `?email=<EMAIL>` in the URL string for your system to catch. You can also define the "replacement tag" like `[%unsubscribe%]`, so that you can place the URL wherever you want within your HTML.
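For example, a minimal sketch of enabling it per message with the `@sendgrid/mail` helper (the camelCase keys are the Node helper's form of the raw API's `tracking_settings` / `subscription_tracking`; the addresses and the `[unsubscribe]` tag name are assumed placeholders):
```js
const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(process.env.SENDGRID_API_KEY);

const msg = {
  to: 'recipient@example.com',   // placeholder
  from: 'sender@example.com',    // placeholder
  subject: 'Hello',
  html: 'Hi there. <a href="[unsubscribe]">Unsubscribe</a>',
  trackingSettings: {
    subscriptionTracking: {
      enable: true,
      substitutionTag: '[unsubscribe]', // SendGrid swaps this tag for the unsubscribe URL
    },
  },
};

sgMail.send(msg);
```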
Upvotes: 5 [selected_answer]<issue_comment>username_2: The easiest way is to do this via the [SendGrid GUI](https://app.sendgrid.com/settings/tracking).
Go to Settings -> Tracking -> Subscription Tracking
Upvotes: 2 <issue_comment>username_3: One tip that would have saved me an hour or two is that:
It's possible to send the following in the api json along with other stuff:
```
"asm":{
"group_id":123,
"groups_to_display": [123],
}
```
then the following variables become available to use within the template:
```
<%asm_group_unsubscribe_raw_url%>
<%asm_preferences_raw_url%>
```
If you want to keep things simple don't include the following variable as it fiddles with too many things (this wasn't obvious from the [documentation](https://sendgrid.com/docs/API_Reference/Web_API_v3/Mail/index.html) so obviously I did so and wasted time :( ):
```
"tracking_settings": {
"subscription_tracking": {
"enable": true,
"substitution_tag": "[unsubscribe_url]"
}
}
```
Just use them in their raw format and you shall be fine.
Upvotes: 4 <issue_comment>username_4: The best approach is to use Group Unsubscribes.
1. First create a group in Sendgrid:
* *Groups > Unsubscribe Groups > Create a group*
2. Next, insert a module into the Sendgrid template that creates specific tags in your email, which are populated when you make an API request
* Go to your template
* Insert an unsubscribe module in an HTML block
* Save
[](https://i.stack.imgur.com/dW9S1.png)
3. Finally make an API request and specify the group created in step 1:
```
"asm":{
"group_id":544,
"groups_to_display": [544, 788],
}
```
4. These will be inserted into the module mentioned in step 2 when the email is sent.
Unfortunately Sendgrid unsubscribe links are not as straightforward as they could be. They are explained in more detail [here](https://sgwidget.com/blog/unsubscribe-links-sendgrid-emails)
Upvotes: 3 <issue_comment>username_5: 1. <https://app.sendgrid.com/> > Suppressions > Unsubscribe Groups > Create New Group
2. Note down the group\_id(s), e.g. 123 (a number only, not a string).
[](https://i.stack.imgur.com/KfqCU.png)
3. Send email using node.js
```js
const sgMail = require('@sendgrid/mail');
sgMail.setApiKey(SENDGRID_API_KEY);
const tags = { invitedBy: 'Alex' }
const msg = {
to: email,
from: { "email": SENDER_EMAIL,
"name": SENDER_NAME
},
templateId: TEMPLATE_ID,
dynamic_template_data: {
Sender_Name: name,
...tags
},
asm: {
group_id: 123,
groups_to_display: [
123
],
},
};
await sgMail.send(msg);
```
Upvotes: 4 |
2018/03/19 | 354 | 978 | <issue_start>username_0: I have an hour and a minute column in my dataset. I want to concatenate them so that there is a "." in between. In cases where the minute column is less than 10, I want to add a leading zero.
Example of what I want it to look like:
Hour(A): 1
Minute (B): 2
Desired Outcome (C): 1.02
I have been playing with this formula Con (A,”.”, IF (Len (B) =1, con (0, B), B)) but I can't seem to make it work
Can anyone give any insight?<issue_comment>username_1: Try this:
```
=A1&"."&TEXT(B1,"00")
```
Or this also works:
```
=TEXT(TIME(A1,B1,0),"h.mm")
```
---
As required:
[](https://i.stack.imgur.com/vGLh3.png)
Upvotes: 3 [selected_answer]<issue_comment>username_2: Put this formula in column C,
```
=time(a1, b1, 0)
```
Format the cell as `h.mm`.
[](https://i.stack.imgur.com/VeB2p.png)
Upvotes: 1 |
2018/03/19 | 287 | 1,173 | <issue_start>username_0: I have been playing around with Kafka for a few months now and I have noticed that I have a rather large number of message/sec under the internal Kafka topic called \_\_consumer\_offsets. Now we have a hand full of other topics that are rather small in terms of the number of message across the brokers. At the most we can see ~1k message/sec on a couple topics, but I almost always see 15-20k message/sec from \_consumer\_offsets. Can someone point me to something that will explain this or provide some insight in why the consumer\_offsets topic has so many messages.
[](https://i.stack.imgur.com/fBe4b.png)
Thanks<issue_comment>username_1: This is how Apache Kafka works. See <http://kafka.apache.org/documentation.html#impl_offsettracking> for more details.
Upvotes: 1 <issue_comment>username_2: I have the same. The number of messages in all other topics is insignificant in comparison to the \_\_consumer\_offsets topic. Is this a problem because of a high number of commits by producers? If not, we should hide this topic from the monitoring graph so the other topics can be seen clearly.
Upvotes: 0 |
2018/03/19 | 1,511 | 6,143 | <issue_start>username_0: I'm having troubles with hiding the keyboard on Android 8.
I used this before and it worked for the older androids:
```
val view = activity.currentFocus
if (view != null) {
val imm = activity.getSystemService(Context.INPUT_METHOD_SERVICE) as InputMethodManager
imm.hideSoftInputFromWindow(view.windowToken, 0)
}
```
Android 8 just ignores it and shows the keyboard anyway.
Probably making the input field unfocusable would help, but I really need it to be focusable, so this is not an option.<issue_comment>username_1: Instead of `hideSoftInputFromWindow` you can use `toggleSoftInput`.
```
val imm: InputMethodManager = getSystemService(Activity.INPUT_METHOD_SERVICE) as InputMethodManager
if (imm.isActive)
imm.toggleSoftInput(InputMethodManager.HIDE_IMPLICIT_ONLY, 0)
```
works for Android 8 on the emulator at least
Upvotes: 4 [selected_answer]<issue_comment>username_2: I had the same issue recently, and worked around it by providing the root view from (in my case) the fragment for the window token instead of the activity's current focus.
This way, the keyboard is dismissed, and focus in the `EditText` is maintained.
Tested on Pixel 2 XL running `Android 8.1`:
```
/**
* Hides the Soft Keyboard on demand
*
* @param activity the activity from which to get the IMM
* @param view the view from which to provide a windowToken
*/
fun hideSoftKeyboard(activity: Activity, view: View?) {
val inputMethodManager = activity.getSystemService(Activity.INPUT_METHOD_SERVICE) as InputMethodManager
inputMethodManager.hideSoftInputFromWindow(view?.windowToken, 0)
}
```
Upvotes: 0 <issue_comment>username_3: I had a similar problem and I solved it like this:
```
class MainActivity : Activity() {
override fun onCreate(savedInstanceState: Bundle?) {
...
window.setSoftInputMode(
WindowManager.LayoutParams.SOFT_INPUT_STATE_ALWAYS_HIDDEN)
}
}
```
>
> This is not the most elegant solution. But in my situation it was acceptable
>
>
>
Upvotes: 0 <issue_comment>username_4: Use this method I have created
```
public static void showHideInput(Context context,boolean visibility, View view){
if (view != null) {
InputMethodManager imm = (InputMethodManager) context.getSystemService(Context.INPUT_METHOD_SERVICE);
if (visibility) imm.toggleSoftInput(InputMethodManager.SHOW_FORCED, InputMethodManager.HIDE_IMPLICIT_ONLY);
else imm.hideSoftInputFromWindow(view.getWindowToken(), 0);
}
}
```
Upvotes: 0 <issue_comment>username_5: Here you have two static functions to hide the keyboard; depending on your case, use the one you need. I tested on Android Oreo and it works.
```
object UIHelper {
fun hideSoftKeyboard(activity: Activity?) {
if (activity != null) {
val inputManager = activity.getSystemService(Context.INPUT_METHOD_SERVICE) as InputMethodManager
if (activity.currentFocus != null && inputManager != null) {
inputManager.hideSoftInputFromWindow(activity.currentFocus!!.windowToken, 0)
inputManager.hideSoftInputFromInputMethod(activity.currentFocus!!.windowToken, 0)
}
}
}
fun hideSoftKeyboard(view: View?) {
if (view != null) {
val inputManager = view!!.getContext().getSystemService(Context.INPUT_METHOD_SERVICE) as InputMethodManager
inputManager?.hideSoftInputFromWindow(view!!.getWindowToken(), 0)
}
}
fun showKeyboard(activityContext: Context, editText: EditText) {
editText.requestFocus()
Handler().postDelayed(Runnable {
val inputMethodManager = activityContext.getSystemService(Context.INPUT_METHOD_SERVICE) as InputMethodManager
inputMethodManager.showSoftInput(editText, InputMethodManager.SHOW_IMPLICIT)
}, 250)
}
}
```
Example of use:
1. `UIHelper.hideSoftKeyboard(this)`
2. `UIHelper.hideSoftKeyboard(passwordField)`
To show:
```
UIHelper.showKeyboard(this, passwordField)
```
Upvotes: 1 <issue_comment>username_6: Hide the keyboard inside a runnable by calling the post method of your view:
```
view.post(() -> {
hideKeyboard(view);
});
```
Upvotes: 0 <issue_comment>username_7: @EarlOfEgo's solution caused some trouble on older Android versions. This is the ideal solution, which works perfectly on all (or at least almost all) Android versions:
```
protected fun hideKeyboard() {
val view = activity.currentFocus
if(android.os.Build.VERSION.SDK_INT >= 26) {
val imm: InputMethodManager = context.getSystemService(Activity.INPUT_METHOD_SERVICE) as InputMethodManager
view?.post({
imm.hideSoftInputFromWindow(activity.currentFocus.windowToken, 0)
imm.hideSoftInputFromInputMethod(activity.currentFocus.windowToken, 0)
})
} else {
if (view != null) {
val imm = activity.getSystemService(Context.INPUT_METHOD_SERVICE) as InputMethodManager
imm.hideSoftInputFromWindow(view.windowToken, 0)
imm.hideSoftInputFromInputMethod(view.windowToken, 0)
}
}
}
```
Upvotes: 2 <issue_comment>username_8: just try this method to intercept the focus event and hide the softkeyboard:
```
@Override
public void onWindowFocusChanged(boolean hasFocus) {
super.onWindowFocusChanged(hasFocus);
if (hasFocus) {
View lFocused = getCurrentFocus();
if (lFocused != null)
lFocused.postDelayed(new Runnable() {
@Override
public void run() {
InputMethodManager lInputManager = (InputMethodManager) pContext.getSystemService(Context.INPUT_METHOD_SERVICE);
lInputManager.hideSoftInputFromWindow(getCurrentFocus().getWindowToken(), 0);
}
}, 100);//Modified to 100ms to intercept SoftKeyBoard on Android 8 (Oreo) and hide it.
}
}
```
Upvotes: 0 |
2018/03/19 | 373 | 1,414 | <issue_start>username_0: It is possible to get the current input value without jQuery?
I mean, I don't want to use document.getElementById / class. I want to get the elements by the input tag.
I have this implemented with jQuery:
```
$(document).on('keyup', 'input', function(e) {
console.log($(this).val());
})
```
And it works for every input.
```
```
I don't know how I can transform the jQuery code into pure JavaScript code. I tried this:
```
document.querySelector('input').addEventListener('input', function (evt) {
console.log(this.value);
});
```
But it works only for the first input. Any ideas?<issue_comment>username_1: You need to use the function `querySelectorAll` to get the whole set of elements. Then, you need to loop over those elements and bind the necessary event.
The function `querySelector` returns the first found element only.
```js
document.querySelectorAll('input').forEach(function(input) {
input.addEventListener('input', function(evt) {
console.log(this.value);
});
})
```
```html
```
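If inputs can also be added to the page later (which the original jQuery `$(document).on('keyup', 'input', ...)` delegation handled), a sketch of the same idea in plain JavaScript is to delegate from the document:
```js
// One listener on the document covers current and future inputs
document.addEventListener('input', function (evt) {
  if (evt.target.matches('input')) {
    console.log(evt.target.value);
  }
});
```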
Upvotes: 3 [selected_answer]<issue_comment>username_2: you can use `document.getElementsByTagName` and loop through them
```js
for (var i=0; i< document.getElementsByTagName("input").length; i++) {
document.getElementsByTagName("input")[i].addEventListener('input', function(evt) {
console.log(this.value);
});
}
```
```html
```
Upvotes: 0 |
2018/03/19 | 325 | 1,249 | <issue_start>username_0: I am attempting to stretch a 300x300 image to fill the parent as a background for a sample layout. However, it seems that at most it can reach it's max dimensions as a perfect square. Here is my code.
```
```
There is still white space at both the top and bottom of the app and I want to eliminate that. Any and all solutions to this dilemma would be greatly appreciated.[enter image description here](https://i.stack.imgur.com/PTlSZ.png)
2018/03/19 | 806 | 3,046 | <issue_start>username_0: I want to define a generic strong alias type, i.e. a type
```
template
class StrongAlias {
T value;
};
```
such that for a type `T` a `StrongAlias` can be used in exactly the same way as `T`, but `StrongAlias` and `StrongAlias` are different types that cannot be implicitly converted to each other.
In order to mimic a `T` as perfectly as possible, I would like my `StrongAlias` to have the same constructors as `T`.
This means I would like to do something like the following:
```
template
class StrongAlias {
T value;
public:
// doesn't work
template>>
StrongAlias(Args&&... args) noexcept(std::is_nothrow_constructible_v)
: value(std::forward(args)...) {}
};
```
except that this wouldn't work since the `template parameter pack must be the last template parameter`, as clang 5.0 would tell me.
The other way to use SFINAE that I thought of would be in the return type, but since a constructor doesn't have a return type, this does not seem to work either.
Is there any way to use SFINAE on a variadic template parameter pack in a constructor?
Alternatively, if there isn't one, can I accomplish what I want in another way?
Note that being implicitly constructible from a `T` isn't enough in my case, as the example of `StrongAlias>` shows: If `StrongAlias` can only be implicitly constructed from a `std::optional`, it cannot be constructed from a `std::nullopt` (of type `std::nullopt_t`), because that would involve 2 user-defined conversions. I really want to have all constructors of the aliased type.
EDIT:
Of course it would be possible to implement this without SFINAE and let the program be invalid if a `StrongAlias` is constructed from incompatible arguments. However, while this would be an acceptable behaviour in my specific case, it is clearly not optimal as the `StrongAlias` may be used in a template that queries if the given type is constructible from some arguments (via `std::is_constructible`). While this would yield a `std::false_type` for `T`, it would result in a `std::true_type` for `StrongAlias`, which could mean unnecessary compile errors for `StrongAlias` that wouldn't exist for `T`.<issue_comment>username_1: Just change `std::enable_if_t` to a non-type template parameter:
```
template
class StrongAlias {
T value;
public:
template, int> = 0>
StrongAlias(Args&&... args) noexcept(noexcept(T(std::declval()...)))
: value(std::forward(args)...) {}
};
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: The two issues with your snippet that make it not compile are
* Trying to pass a type `std::is_constructible` as the first argument of `std::enable_if_t`;
* Trying to pass `decltype(...)` to a `noexcept` operator.
(There's also a third problem with the function style cast inside the `noexcept`, but that only affects semantics, not compilability.)
Neither causes the error message you cite, which concerns a rule that doesn't apply to function templates at all. With these two problems fixed, Wandbox's Clang 5.0 happily accepts it.
Upvotes: 2 |
2018/03/19 | 440 | 1,508 | <issue_start>username_0: I remember reading somewhere that local variables with inferred types can be reassigned with values of the same type, which would make sense.
```
var x = 5;
x = 1; // Should compile, no?
```
However, I'm curious what would happen if you were to reassign `x` to an object of a different type. Would something like this still compile?
```
var x = 5;
x = new Scanner(System.in); // What happens?
```
I'm currently not able to install an early release of JDK 10, and did not want to wait until tomorrow to find out.<issue_comment>username_1: Would not compile, throws *"incompatible types: Scanner cannot be converted to int"*. Local variable type inference does not change the static-typed nature of Java. In other words:
```
var x = 5;
x = new Scanner(System.in);
```
is just syntactic sugar for:
```
int x = 5;
x = new Scanner(System.in);
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Once a `var` variable has been initialized, you cannot reassign it to a different type as the type has already been inferred.
so, for example this:
```
var x = 5;
x = 1;
```
would compile as `x` is inferred to be `int` and reassigning the value `1` to it is also fine as they're the same type.
on the other hand, something like:
```
var x = 5;
x = "1";
```
will not compile as `x` is inferred to be `int`, hence assigning a `String` to `x` would cause a compilation error.
The same applies to the `Scanner` example you've shown; it will fail to compile.
Upvotes: 2 |
2018/03/19 | 1,352 | 5,178 | <issue_start>username_0: I am trying to implement a custom or-filter in API Platform. But for some reason it is not loading. Find below my configuration.
This is my filter:
```php
<?php
namespace AppBundle\Filter;
use ApiPlatform\Core\Bridge\Doctrine\Orm\Filter\AbstractFilter;
use ApiPlatform\Core\Bridge\Doctrine\Orm\Util\QueryNameGeneratorInterface;
use Doctrine\ORM\QueryBuilder;
use Doctrine\Common\Annotations\AnnotationReader;
final class SearchFilter extends AbstractFilter
{
protected function filterProperty(string $property, $value, QueryBuilder $queryBuilder, QueryNameGeneratorInterface $queryNameGenerator, string $resourceClass, string $operationName = null)
{
if ($property === 'search') {
$this->logger->info('Search for: ' . $value);
} else {
return;
}
$reader = new AnnotationReader();
$annotation = $reader->getClassAnnotation(new \ReflectionClass(new $resourceClass), \AppBundle\Filter\SearchAnnotation::class);
if (!$annotation) {
throw new \HttpInvalidParamException('No Search implemented.');
}
$parameterName = $queryNameGenerator->generateParameterName($property);
$search = [];
$mappedJoins = [];
foreach ($annotation->fields as $field)
{
$joins = explode(".", $field);
for ($lastAlias = 'o', $i = 0, $num = count($joins); $i < $num; $i++) {
$currentAlias = $joins[$i];
if ($i === $num - 1) {
$search[] = "LOWER({$lastAlias}.{$currentAlias}) LIKE LOWER(:{$parameterName})";
} else {
$join = "{$lastAlias}.{$currentAlias}";
if (false === array_search($join, $mappedJoins)) {
$queryBuilder->leftJoin($join, $currentAlias);
$mappedJoins[] = $join;
}
}
$lastAlias = $currentAlias;
}
}
$queryBuilder->andWhere(implode(' OR ', $search));
$queryBuilder->setParameter($parameterName, '%' . $value . '%');
}
/**
* @param string $resourceClass
* @return array
*/
public function getDescription(string $resourceClass): array
{
$reader = new AnnotationReader();
$annotation = $reader->getClassAnnotation(new \ReflectionClass(new $resourceClass), \AppBundle\Filter\SearchAnnotation::class);
$description['search'] = [
'property' => 'search',
'type' => 'string',
'required' => false,
'swagger' => ['description' => 'Filter on ' . implode(', ', $annotation->fields)],
];
return $description;
}
}
```
In api\_filters.yml:
```php
driver.custom_search_filter:
class: 'AppBundle\Filter\SearchFilter'
autowire: true
tags: [ { name: 'api_platform.filter' } ]
```
In my annotation file:
```php
<?php
namespace AppBundle\Filter;
use Doctrine\Common\Annotations\Annotation;
use Doctrine\Common\Annotations\Annotation\Target;
use Doctrine\Common\Annotations\AnnotationException;
/**
* @Annotation
* @Target("CLASS")
*/
final class SearchAnnotation
{
public $fields = [];
/**
* Constructor.
*
* @param array $data Key-value for properties to be defined in this class.
* @throws AnnotationException
*/
public function __construct(array $data)
{
if (!isset($data['value']) || !is_array($data['value'])) {
throw new AnnotationException('Options must be a array of strings.');
}
foreach ($data['value'] as $key => $value) {
if (is_string($value)) {
$this->fields[] = $value;
} else {
throw new AnnotationException('Options must be a array of strings.');
}
}
}
}
```
And finally in my entity:
```php
/**
* A driver that bring meals from hub to customer.
*
*
* @ApiResource(
* attributes={
* "filters"={"driver.search_filter","driver.custom_search_filter"},
* "denormalization_context"={"groups"={"post_driver"}}
* }
* )
* @Searchable({"firstName"})
*
* @ORM\Entity
* @ORM\Table(name="vendor_driver")
*/
class Driver
{
```
It is exactly according to the issue that was reported here:
<https://github.com/api-platform/core/issues/398>
I am not getting any errors, but the filter is simply not working. I can see it in Swagger, but when I enter a value in Swagger, the db returns all entities. It's never reaching the filterProperty method.
Does anyone have an idea?<issue_comment>username_1: I've just managed to get this working by removing the autowiring, e.g.
```
my.custom_search_filter:
class: AppBundle\Filter\CustomSearchFilter
arguments:
- '@doctrine'
- '@request_stack'
- '@logger'
tags: [ { name: 'api_platform.filter', id: 'search' } ]
```
Hope that helps.
Upvotes: 2 <issue_comment>username_2: Did you import your custom filter in the Driver entity? In the Regexp example of a custom filter and [here](https://github.com/api-platform/core/issues/398) they explicitly import their custom filters like:
```
use AppBundle\Filter\SearchAnnotation as Searchable;
/**
* @Searchable({"name", "description", "whatever"})
*/
class Product
{
```
Upvotes: 0 |
2018/03/19 | 397 | 1,344 | <issue_start>username_0: ```
int *i;
ters_cevir(){
char *term=i;
char *som=i;
char som1;
while (*term != '\0') { term++; }
while (*som != '\0') {
som1=som*;
*term=som;
term--;
som++;
}
}
int main() {
char *isim=malloc(sizeof(char));
i=&isim
printf("Reverse words=");
scanf("%s",isim);
printf("Kelimenizin tersi:\n ");
ters_cevir(); // When I call this, it must make the reverse one that make from memory
while (*isim != '\0') {
printf("%c",*isim);
isim++;
sayac++;
}
return 0;
}
```
2018/03/19 | 434 | 1,750 | <issue_start>username_0: I have an Azure function and I wrapped the logic in a try catch block.
I can log the exception to Azure, but I want to save and then send the details of the exception to an email address. I see I can use [Elmah.io](https://docs.elmah.io/logging-to-elmah-io-from-azure-functions/) but I have to pay a monthly subscription.
Doesn't Azure have a simple way to log the data and then send me an email?
Something like this
```
catch (Exception ex)
{
log.Info($"Points Event Error: " + ex.Message);
// send me email with details
}
```<issue_comment>username_1: The most Azure Function-ish way to do this would be to use the [Azure Functions SendGrid output binding](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-sendgrid).
You can follow the examples from that article, but instead of `out` parameter use `ICollector` / `IAsyncCollector` because you only need to send e-mails conditionally.
I believe Send Grid has a free tier available for low usage scenarios.
Upvotes: 2 <issue_comment>username_2: Rather than looking to leverage SendGrid to send you emails on Exceptions directly from your Function you could consider leveraging [Functions' Application Insights integration](https://github.com/Azure/Azure-Functions/wiki/App-Insights). This will give you additional benefits (performance tracking, history of events, trace logging) in addition to being able to [set Alerts based on Exceptions](https://learn.microsoft.com/en-us/azure/application-insights/app-insights-alerts) in Application Insights itself. It also means your monitoring / alerting isn't baked into your business logic in your Function.
Application Insights has a free tier that should suffice for most use cases.
Upvotes: 1 |
2018/03/19 | 646 | 1,369 | <issue_start>username_0: I'm new with Pyhton and would like to filter a dictionary with keys composed by two values. Here my dict:
```
{(0, 'DRYER'): [103.0, 131.0, 9.0, 1.24],
(2, 'DRYER'): [106.0, 120.0, 5.0, 1.24],
(2, 'WASHING'): [70.0, 90.0, 11.0, 0.19]}
```
The keys are composed of two values `(n,a)` and I want to create a new dictionary where `n=2`, which results in:
```
{(2, 'DRYER'): [106.0, 120.0, 5.0, 1.24],
(2, 'WASHING'): [70.0, 90.0, 11.0, 0.19]}
```
If anyone could help, I would be thankful!<issue_comment>username_1: Check if the first value of the tuple key is 2, with a dict comprehension:
```
{k: v for k, v in dct.items() if k[0]==2}
```
So:
```
In [11]: dct = {(0, 'DRYER'): [103.0, 131.0, 9.0, 1.24],
...: (2, 'DRYER'): [106.0, 120.0, 5.0, 1.24],
...: (2, 'WASHING'): [70.0, 90.0, 11.0, 0.19]}
...:
In [12]: {k: v for k, v in dct.items() if k[0]==2}
Out[12]:
{(2, 'DRYER'): [106.0, 120.0, 5.0, 1.24],
(2, 'WASHING'): [70.0, 90.0, 11.0, 0.19]}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try using `filter`
```
new_dict = dict(filter(function, old_dict.items()))
```
in your case `function` would be
```
lambda it: True if it[0][0] == 2 else False
```
Basically `filter()` takes a function and an iterable and removes all the elements of the iterable for which the function returns False.
Upvotes: 0 |
2018/03/19 | 590 | 1,871 | <issue_start>username_0: I'm writing an application on node.js
I need to insert a Static Google Map.
The code looks like this
```
.panel.panel-primary
.panel-heading
h2.panel-title On the map
.panel-body
img.img-responsive.img-rounded(src="http://maps.googleapis.com/maps/api/staticmap?center=#{location.coords.lat},#{location.coords.lng}&zoom=17&size=400x350&sensor=false&markers=#{location.coords.lat},#{location.coords.lng}&scale=2&key=<KEY>")
```
Controller, from where information about coordinates is taken
```
res.render('location-info', {
title: 'empty',
pageHeader: {title: 'empty'},
sidebar: {
context: 'empty',
callToAction: 'empty.'
},
location: {
name: 'empty',
address: 'empty',
rating: 5,
facilities: ['empty'],
coords: {lat: 55.752813, lng: 37.791908},
openingTimes: [{
days: 'empty',
opening: '19:00',
closing: '01:00',
closet: false
},{
```
[How it looks like.](https://i.stack.imgur.com/VikzQ.jpg)
When I try copying someone else's code, everything works fine. Help! Thank you in advance.<issue_comment>username_1: You can check the error messages from Google Maps [here](https://developers.google.com/maps/documentation/static-maps/error-messages).
In your case, the size parameter is not within the expected range of numeric values, or is missing from the request.
Upvotes: 0 <issue_comment>username_2: Template literals use `$`, eg `${varNameHere}` - not `#{location.coords.lat}`.
Try:
`src="http://maps.googleapis.com/maps/api/staticmap?center=${location.coords.lat},${location.coords.lng}&zoom=17&size=400x350&sensor=false&markers=${location.coords.lat},${location.coords.lng}&scale=2&key=<KEY>"`
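For instance, a sketch of building the URL with a template literal in the Node controller and passing it to the view (the `mapUrl` local and the `YOUR_API_KEY` placeholder are not in the original code, just assumed names):
```js
// Build the Static Maps URL once, then hand it to the template
const coords = { lat: 55.752813, lng: 37.791908 };
const mapUrl = `http://maps.googleapis.com/maps/api/staticmap?center=${coords.lat},${coords.lng}` +
  `&zoom=17&size=400x350&sensor=false&markers=${coords.lat},${coords.lng}&scale=2&key=YOUR_API_KEY`;

res.render('location-info', {
  mapUrl,
  // ...plus the existing locals (title, pageHeader, sidebar, location, ...)
});
```
In the Pug template the attribute then becomes `img.img-responsive.img-rounded(src=mapUrl)`.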
Upvotes: 2 |