2018/03/22 | <issue_start>username_0: Using the Foxit .NET SDK in an application to flatten a PDF. When I open the document using PhantomPDF, the document appears to be flattened. When the document is opened using Adobe Reader, it appears not to be flattened and the form fields are accessible. This is the code I'm currently using:
```
// Code added to "Flatten" the PDFs
SignatureFieldFlatteningOptions sFFO = new SignatureFieldFlatteningOptions();
sFFO = SignatureFieldFlatteningOptions.Retain;
FormFlatteningOptions fFO = new FormFlatteningOptions();
fFO.DigitalSignatures = sFFO;
mergeDocument.FormFlattening = fFO;
mergeDocument.CompressionLevel = 9;
byte[] pdfModifiedOutput = mergeDocument.Draw();
```
Is there something I should be doing differently? Has anyone else seen this problem?<issue_comment>username_1: **Update**: Colab now supports input prompts, so you should see these immediately, e.g.,
[](https://i.stack.imgur.com/7hCnv.png)
**Old answer:** A typical pattern is to run the command without prompts, if it has such an option. For example, for installation using `apt`, provide the option `-y`.
If no such option exists, you can do something like piping yes to the program like so: `yes | programThatHasConfirmationPrompts`.
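A concrete sketch of both approaches (the package name is hypothetical; `yes` simply prints `y` forever, so every prompt receives an answer):

```shell
# if the command supports it, pass an "assume yes" flag, e.g.:
#   apt-get install -y some-package      # package name is hypothetical
# otherwise, pipe `yes` into the prompting program:
yes | head -n 3   # head stands in for a prompting program; prints "y" three times
```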
Upvotes: 5 [selected_answer]<issue_comment>username_2: Colab input prompts did not pop out for my command but the pipelining worked, for example:
```
!yes | sh -c "$(curl -fsSL https://someURL...)"
```
Upvotes: 0
2018/03/22 | <issue_start>username_0: I want to export some data from my PLC by writing it to a text file and saving it to a USB stick. I managed to create the text file, but I can't write anything to it.
I use function blocks from the TwinCAT standard libraries in the following code:
```
PROGRAM P_WriteFile
VAR
nStateP : INT := 1;
fbOpenFile : FB_FileOpen; // open or create file
fbWriteFile : FB_FilePuts; // write to file
fbCloseFile : FB_FileClose; // Close file
sPath : STRING := '\Hard Disk2\foobar.txt'; // target path
sAmsNetID : STRING := '1.23.34.456.1.1';
sOutput : STRING := 'foo';
bDone : BOOL;
END_VAR
CASE nStateP OF
1:
// open/create file
fbOpenFile(sNetId := sAmsNetID, sPathName := sPath, nMode := 2, bExecute := TRUE, tTimeout := INT_TO_TIME(200), bBusy =>, bError => , nErrId =>, hFile => );
IF fbOpenFile.bBusy THEN
nStateP := 2;
END_IF
2:
// write to file
IF NOT fbOpenFile.bError THEN
fbWriteFile(sNetId := sAmsNetID, hFile := fbOpenFile.hFile, sLine := sOutput, bExecute := TRUE, tTimeout := INT_TO_TIME(200), bBusy =>, bError =>, nErrId =>);
fbOpenFile(bExecute := FALSE);
END_IF
IF fbWriteFile.bBusy THEN
nStateP := 3;
END_IF
3:
// Close file
IF NOT fbWriteFile.bBusy AND NOT fbWriteFile.bError THEN
fbCloseFile(sNetId := sAmsNetID, hFile := fbOpenFile.hFile, bExecute := TRUE, tTimeout := INT_TO_TIME(200), bBusy =>, bError =>, nErrId =>);
END_IF
IF fbWriteFile.bBusy THEN
nStateP := 4;
END_IF
4:
IF NOT fbCloseFile.bBusy AND NOT fbCloseFile.bError THEN
bDone := TRUE;
nStateP := 1;
ELSE
bDone := FALSE;
END_IF
END_CASE
```
The program enters all states, but the result is an empty text file, which I can't open on the control panel ("A sharing violation occurred while accessing \Hard Disk2\foobar.txt").
Also, the bBusy variables of the function blocks (e.g. fbOpenFile.bBusy) don't change back to FALSE.
It would be great if anyone could help me!
Thanks :)<issue_comment>username_1: Generally:
What the busy-flag is telling you is that the function block is currently busy doing the operation that you are requesting the FB to do. This means that you should not change state of your state-machine when it is busy, but the other way around. You should also check whether the operation was successful or not (by looking at the bError-flag) prior to going to the next step. As long as the function block that you are calling is busy (bBusy = true), you call the function block with the bExecute-flag set to low. What I usually do is to set this up as two separate stages for opening, such as:
Some sort of pseudo-code:
```
Step1_Open:
FBOPENFILE(bExecute=TRUE)...
GOTO STEP2_OPEN
Step2_Open:
FBOPENFILE(bExecute=FALSE)
IF NOT FBOPENFILE.bBusy AND NOT FBOPENFILE.bError THEN
GOTO Step3_StartWrite
END_IF
Step3_StartWrite
FBWRITEFILE(bExecute=TRUE)
GOTO STEP4_WRITEFILE
Step4_Writefile:
FBWRITEFILEFILE(bExecute=FALSE)
IF NOT FBWRITEFILEFILE.bBusy AND NOT FBWRITEFILEFILE.bError THEN
NEXT STEP
END_IF
```
...and so forth...
So in your example your stage 2 is very critical. You should not close the file until the writing is finished, which it will be as soon as bBusy is false. What you're basically doing is closing the file while it's still being written! Also, you can remove the `fbOpenFile(bExecute := FALSE);` call in this stage, because once you have (successfully) opened the file and have a file handle, you don't need to call this function block again.
Other thoughts:
Is the sAmsNetId the local one of your computer? If it's the local one I don't think you need to provide it.
I've written my own file-writer which I've been using for quite some time and which is working. The code for it is:
```
fbRisingEdge(CLK := bExecute);
CASE eFileWriteStep OF
E_FileWriteStep.IDLE :
IF fbRisingEdge.Q THEN
nFileHandle := 0;
bBusy := TRUE;
eFileWriteStep := E_FileWriteStep.OPEN;
nFileWriteSubStep := 0;
END_IF
E_FileWriteStep.OPEN :
CASE nFileWriteSubStep OF
0 :
fbFileOpen(sPathName := sPathName, bExecute := FALSE);
fbFileOpen(sPathName := sPathName, bExecute := TRUE);
nFileWriteSubStep := nFileWriteSubStep + 1;
1 :
fbFileOpen(bExecute := FALSE);
IF NOT fbFileOpen.bBusy THEN
IF fbFileOpen.bError THEN
bError := TRUE;
eFileWriteStep := E_FileWriteStep.CLEAN;
nFileWriteSubStep := 0;
ELSE
nFileHandle := fbFileOpen.hFile;
eFileWriteStep := E_FileWriteStep.WRITE;
nFileWriteSubStep := 0;
END_IF
END_IF
END_CASE
E_FileWriteStep.WRITE :
CASE nFileWriteSubStep OF
0 :
fbFileWrite(bExecute := FALSE);
fbFileWrite(hFile := nFileHandle,
pWriteBuff := aFileData,
cbWriteLen := UDINT_TO_UINT(UPPER_BOUND(aFileData, 1)),
bExecute := TRUE);
nFileWriteSubStep := nFileWriteSubStep + 1;
1 :
fbFileWrite(bExecute := FALSE);
IF NOT fbFileWrite.bBusy THEN
IF fbFileWrite.bError THEN
bError := TRUE;
eFileWriteStep := E_FileWriteStep.CLEAN;
ELSE
eFileWriteStep := E_FileWriteStep.CLEAN;
nBytesWritten := fbFileWrite.cbWrite;
END_IF
nFileWriteSubStep := 0;
END_IF
END_CASE
E_FileWriteStep.CLOSE :
CASE nFileWriteSubStep OF
0 :
fbFileClose(bExecute := FALSE);
fbFileClose(hFile := nFileHandle, bExecute := TRUE);
nFileWriteSubStep := 1;
1 :
fbFileClose(bExecute := FALSE);
IF NOT fbFileClose.bBusy THEN
IF fbFileClose.bError THEN
bError := TRUE;
END_IF
eFileWriteStep := E_FileWriteStep.CLEAN;
nFileHandle := 0;
nFileWriteSubStep := 0;
END_IF
END_CASE
E_FileWriteStep.CLEAN :
IF nFileHandle <> 0 THEN
eFileWriteStep := E_FileWriteStep.CLOSE;
nFileWriteSubStep := 0;
ELSE
eFileWriteStep := E_FileWriteStep.IDLE;
bBusy := FALSE;
END_IF
END_CASE
```
You activate the function block by the rising edge at the beginning. The data to be written is provided by an array of bytes (aFileData). At the end of this state machine you also have some cleaning code plus eventual error-handling. In this code you can also see how I make sure that the previous step succeeds before I go on to the next step.
Good luck!
Upvotes: 4 [selected_answer]<issue_comment>username_2: The main problem in your program is that you are calling the function blocks only once, as if they were normal functions. You must call the function blocks continuously, and check for when they have finished their specific function.
When calling them continuously, you should first wait for the Busy flag going high and then wait for it going low (and without error). This will be easiest if you have a state for both waiting situation and calls the function block in both states.
Remember, that these file operation function blocks are working together with the Windows system and might take some time to finish their job.
BTW, your code example might actually work, if you add a 'NOT fbOpenFile.bBusy' criteria to the first IF in state 2. But the program would be easier to read/debug if you use two states for starting and finishing for each of the FB's.
Upvotes: 0
2018/03/22 | <issue_start>username_0: I have the following code for the DropDown SelectedIndexChanged event, but it's not firing as expected:
```
```
I'm getting the following error in the Console
>
> Uncaught Error: Sys.WebForms.PageRequestManagerServerErrorException:
> Invalid postback or callback argument. Event validation is enabled
> using <pages enableEventValidation="true"/> in configuration or <%@
> Page EnableEventValidation="true" %> in a page. For security
> purposes, this feature verifies that arguments to postback or callback
> events originate from the server control that originally rendered
> them. If the data is valid and expected, use the
> ClientScriptManager.RegisterForEventValidation method in order to
> register the postback or callback data for validation.
>
>
>
Any help is much appreciated<issue_comment>username_1: Try setting this property on your aspx page:
```
EnableEventValidation="false"
```
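If you prefer not to edit every page, the same setting can also be applied site-wide in `web.config` (a config fragment; note that this disables a security check, so use it with care):

```
<configuration>
  <system.web>
    <pages enableEventValidation="false" />
  </system.web>
</configuration>
```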
Upvotes: 3 [selected_answer]<issue_comment>username_2: I was able to resolve the issue by adding `EnableEventValidation="false"` to my page directive:
```
<%@ Page Language="C#" AutoEventWireup="true" EnableEventValidation="false"
CodeBehind="Application.aspx.cs" Inherits="USI.User.Application"
Title="Course Application" MasterPageFile="~/User/MyPage.Master" %>
```
Upvotes: 0
2018/03/22 | <issue_start>username_0: I have a multibyte character ''. How can I convert this into '\360\245\242\245'?
`use Devel::Peek;` gives me that information, but it only prints it to the console...
2018/03/22 | <issue_start>username_0: I have a number of documents with selection dates and location ids. I am trying to build a pipeline that will match on documents with a selection time that falls within the last hour, then run a count on those. My dates are in mm-dd-yyyy hh:mm:ss format but seem to be represented as strings in MongoDB.
Here is what I have so far
```
last_hour = datetime.datetime.now() - datetime.timedelta(minutes=60)
now = datetime.datetime.now()
pipeline = [
{"$match":{"select_time":{"$gt":last_hour,"$lte":now}}},
{"$unwind":"$loc_id"},
{"$group": {"_id":"$loc_id"}},
{"$sort": SON([("_id", -1), ("count", -1)])}
]
for i in list(db.loc_counter.aggregate(pipeline)):
print i
```
Everything but the match piece is working. I am not sure if it is a date format issue or what.<issue_comment>username_1: Convert the current format of time to ISODate format.
You may want to write a migration to convert current `select_time` to ISODate format ([See supported formats](https://docs.mongodb.com/manual/reference/method/Date/).)
The projection stage of the pipeline can become inefficient when the number of documents in the collection gets huge.
Also, fix the script inserting the documents in the wrong date format to insert document with `select_time` field in ISODate format.
```
lasthour = datetime.now() - timedelta(hours=1)
pipeline = [
{
'$project': {
'select_time_ISODate': {
'$dateFromString': {
'dateString': {
'$concat': [
{'$substr': ['$select_time', 6, 4]},
'-', {'$substr': ['$select_time', 0, 2]},
'-', {'$substr': ['$select_time', 3, 2]},
'T', {'$substr': ['$select_time', 11, 8]}
]
}
}
},
'loc_id': 1
}
},
{'$match': {'select_time_ISODate': {'$gte': lasthour}}},
{'$count': 'num_logs_since_past_hour'}
]
cursor = db.loc_counter.aggregate(pipeline)
print(tuple(cursor))
```
Suppose that `select_time` were in the right date format, you'll only need the match and count stage of the current pipeline.
Upvotes: 3 [selected_answer]<issue_comment>username_2: **$dateFromString** works fine, but my suggestion is to store `select_time` as an **ISO date** in the first place. Datetime values stored as strings are not useful: you cannot use them for sorting or in date-comparison conditions, and projecting `select_time` as an ISO datetime on every query is an unnecessary pipeline stage.
Upvotes: 0
2018/03/22 | <issue_start>username_0: I'm working on a password-change form in my Laravel app. I want to use the validator with custom error messages.
My code looks like this:
```
$rules = [
'username' => 'required|max:255',
'oldpassword' => 'required|max:255',
'newpassword' => 'required|min:6|max:255|alpha_num',
'newpasswordagain' => 'required|same:newpassword',
];
$messages = [
'username.required' => Lang::get('userpasschange.usernamerequired'),
'username.max:255' => Lang::get('userpasschange.usernamemax255'),
'oldpassword.required' => Lang::get('userpasschange.oldpasswordrequired'),
'oldpassword.max:255' => Lang::get('userpasschange.oldpasswordmax255'),
'newpassword.required' => Lang::get('userpasschange.newpasswordrequired'),
'newpassword.min:6' => Lang::get('userpasschange.newpasswordmin6'),
'newpassword.max:255' => Lang::get('userpasschange.newpasswordmax255'),
'newpassword.alpha_num' => Lang::get('userpasschange.newpasswordalpha_num'),
'newpasswordagain.required' => Lang::get('userpasschange.newpasswordagainrequired'),
'newpasswordagain.same:newpassword' => Lang::get('userpasschange.newpasswordagainsamenewpassword'),
];
$validator = Validator::make($request->all(), $rules, $messages);
$validator->setCustomMessages($messages);
Log::debug("custommessages: " . json_encode($messages));
Log::debug("messages: " . json_encode($validator->messages()));
```
In the log **custommessages** is shows my custom msgs, but in the next line there is the original **messages**.
I'm working from the [official doc](https://laravel.com/docs/5.5/validation#working-with-error-messages).
Have anybody meet this problem?
Thx for the answers in advance!<issue_comment>username_1: After you've indicated the messages in `Validator::make`
```
$validator = Validator::make($request->all(), $rules, $messages);
```
you shouldn't indicate them again
```
$validator->setCustomMessages($messages); // don't do that
```
### NOTE
The **better** way to use Request validation is to move [them to another file](https://laravel.com/docs/5.5/validation#form-request-validation)
Upvotes: 0 <issue_comment>username_2: A rewrite and the recommended way of doing it.
Manual for reference <https://laravel.com/docs/5.5/validation#creating-form-requests>
Use requests files.
1. run `php artisan make:request UpdateUserPasswordRequest`
2. Write the request file
Edit (Feb 2020): in recent versions of Laravel, the global `auth()` helper can be used in the `authorize` method instead of `\Auth`, so `\Auth::check()` becomes `auth()->check()`. Both still work; this will be updated if either is removed from the framework.
```
<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;
class UpdateUserPasswordRequest extends FormRequest
{
/**
* Determine if the user is authorized to make this request.
*
* @return bool
*/
public function authorize()
{
// only allow updates if the user is logged in
return \Auth::check();
// in newer Laravel versions you can equivalently write: return auth()->check();
}
/**
* Get the validation rules that apply to the request.
*
* @return array
*/
public function rules()
{
return [
'username' => 'required|max:255',
'oldpassword' => 'required|max:255',
'newpassword' => 'required|min:6|max:255|alpha_num',
'newpasswordagain' => 'required|same:newpassword',
];
}
/**
* Get the validation attributes that apply to the request.
*
* @return array
*/
public function attributes()
{
return [
'username' => trans('userpasschange.username'),
'oldpassword' => trans('userpasschange.oldpassword'),
'newpassword' => trans('userpasschange.newpassword'),
'newpasswordagain' => trans('userpasschange.newpasswordagain'),
];
}
/**
* Get the validation messages that apply to the request.
*
* @return array
*/
public function messages()
{
// the trans() helper can be used instead of Lang::get()
return [
'username.required' => Lang::get('userpasschange.usernamerequired'),
'oldpassword.required' => Lang::get('userpasschange.oldpasswordrequired'),
'oldpassword.max' => Lang::get('userpasschange.oldpasswordmax255'),
'newpassword.required' => Lang::get('userpasschange.newpasswordrequired'),
'newpassword.min' => Lang::get('userpasschange.newpasswordmin6'),
'newpassword.max' => Lang::get('userpasschange.newpasswordmax255'),
'newpassword.alpha_num' =>Lang::get('userpasschange.newpasswordalpha_num'),
'newpasswordagain.required' => Lang::get('userpasschange.newpasswordagainrequired'),
'newpasswordagain.same' => Lang::get('userpasschange.newpasswordagainsamenewpassword'),
'username.max' => 'The :attribute field must have under 255 chars',
];
}
```
3. In UserController
>
>
> ```
> php namespace App\Http\Controllers;
>
>
> // VALIDATION: change the requests to match your own file names if you need form validation
> use App\Http\Requests\UpdateUserPasswordRequest as ChangePassRequest;
> //etc
>
> class UserCrudController extends Controller
> {
> public function changePassword(ChangePassRequest $request)
> {
> // save new pass since it passed validation if we got here
> }
> }
> ```
>
>
Upvotes: 6 [selected_answer]<issue_comment>username_3: ```
$messages = [
'username.required' => Lang::get('userpasschange.usernamerequired'),
'username.max' => Lang::get('userpasschange.usernamemax255'),
'oldpassword.required' => Lang::get('userpasschange.oldpasswordrequired'),
'oldpassword.max' => Lang::get('userpasschange.oldpasswordmax255'),
'newpassword.required' => Lang::get('userpasschange.newpasswordrequired'),
'newpassword.min' => Lang::get('userpasschange.newpasswordmin6'),
'newpassword.max' => Lang::get('userpasschange.newpasswordmax255'),
'newpassword.alpha_num' => Lang::get('userpasschange.newpasswordalpha_num'),
'newpasswordagain.required' => Lang::get('userpasschange.newpasswordagainrequired'),
 'newpasswordagain.same' => Lang::get('userpasschange.newpasswordagainsamenewpassword'),
];
```
Don't use the `:255` and `:6` suffixes in the message keys; the key is just `field.rule`.
---
Wrong:
```
'username.max:255' => Lang::get('userpasschange.usernamemax255'),
```
Correct:
```
'username.max' => Lang::get('userpasschange.usernamemax255'),
```
Upvotes: -1 <issue_comment>username_4: For Laravel **7.x**, **6.x**, **5.x**
With the custom rule defined, you might use it in your controller validation like :
```
$validatedData = $request->validate([
'f_name' => 'required|min:8',
'l_name' => 'required',
],
[
'f_name.required'=> 'Your First Name is Required', // custom message
'f_name.min'=> 'First Name Should be Minimum of 8 Character', // custom message
'l_name.required'=> 'Your Last Name is Required' // custom message
]
);
```
For localization you can use :
```
['f_name.required' => trans('user.your first name is required')],
```
Hope this helps...
Upvotes: 3
2018/03/22 | <issue_start>username_0: I'm trying to filter an array of File objects via jQuery $.grep.
For this particular example, I'm trying to filter via the file name.
So here is what I have so far:
Step 1: Create an array of files:
```
Array.prototype.push.apply(fileBuffers, e.originalEvent.dataTransfer.files);
```
Step 2: using fileBuffers:
```
var arr = $.grep(fileBuffers, function (fileBuffers, value) { return (fileBuffers[value].name !== filename) });
```
So I'm trying to create a new array, arr, that contains an array of files, except the file that has filename.
So far this is not working for me. What am I doing wrong?.<issue_comment>username_1: The correct syntax for `$.grep` is;
```
var arr = $.grep(fileBuffers, function (file, idx) {
return (file.name !== filename);
});
```
**Note** that the first parameter of the callback is the element in filebuffers and the second parameter is that item's index in the filebuffers array.
[Link to `$.grep` documentation](https://api.jquery.com/jquery.grep/)
Upvotes: 2 [selected_answer]<issue_comment>username_2: Steve's answer was correct! I had a whitespace that crept into filename, so I had to do a .trim().
So:
```
var arr = $.grep(fileBuffers, function (file, idx) {
return (file.name !== filename.trim()); });
```
Upvotes: 0
2018/03/22 | <issue_start>username_0: I am trying to vertically align a `TextInput` with `Icon`s, and I have tried various style rules, but it doesn't work. On iOS the layout is correct, but on Android the elements are not vertically aligned.
```
this.setState({ searchTextInputVal: value })}
returnKeyType="search"
placeholder="Type Here..."
value={this.state.searchTextInputVal}
underlineColorAndroid="transparent"
/>
input: {
paddingLeft: 35,
paddingRight: 19,
margin: 8,
borderRadius: 3,
overflow: 'hidden',
borderColor: Colors.grey,
borderWidth: 1,
fontSize: 16,
color: Colors.darkBlue,
height: 40,
...Platform.select({
ios: {
height: 40,
},
android: {
borderWidth: 0,
},
}),
},
searchIcon: {
left: 16,
color: Colors.grey,
},
clearIcon: {
right: 16,
color: Colors.grey,
},
```
[](https://i.stack.imgur.com/Sfq82.png)<issue_comment>username_1: The TextInput component on Android has some default padding and margins that are added to the ones you set in the input style. If you use the Toggle Inspector option, you can see them.
Upvotes: 1 <issue_comment>username_2: I didn't notice I had another style:
```
icon: {
backgroundColor: 'transparent',
position: 'absolute',
color: 'white',
top: 15.5,
...Platform.select({
android: {
top: 20,
},
}),
},
```
When I removed:
```
top: 15.5,
...Platform.select({
android: {
top: 20,
},
}),
```
and added: `justifyContent: 'center'`, it works now.
Upvotes: 1 [selected_answer]
2018/03/22 | <issue_start>username_0: I am trying to write a function that takes the following 2 parameters:
1. A sentence as a string
2. A number of lines as an integer
So if I was to call **formatLines("My name is Gary", 2);** ...
The possible outcomes would be:
* array("My name is", "Gary");
* array("My name", "<NAME>");
* array("My", "name is Gary");
It would return: **array("My name", "<NAME>");** because the difference in character counts for each line is as small as possible.
So the part I am ultimately stuck on is creating an array of possible outcomes where the words are in the correct order, split over x lines. Once I have an array of possible outcomes I would be fine working out the best result.
So how would I go about generating all the possible combinations?
Regards
Joe<issue_comment>username_1: It seems like doing this by creating all possible ways of splitting the text and then determining the best one would be unnecessarily inefficient. You can count the characters and divide by the number of lines to find approximately the right number of characters per line.
```
function lineSplitChars($text, $lines) {
if (str_word_count($text) < $lines) {
throw new InvalidArgumentException('lines must be fewer than word count', 1);
}
$width = strlen($text) / $lines; // initial width calculation
while ($width > 0) {
$result = explode("\n", wordwrap($text, $width)); // generate result
// check for correct number of lines. return if correct, adjust width if not
$n = count($result);
if ($n == $lines) return $result;
if ($n > $lines) {
$width++;
} else {
$width--;
};
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: An answer has been accepted here - but this strikes me as a rather cumbersome method for solving the problem when PHP already provides a wordwrap() function which does most of the heavy lifting:
```
function format_lines($str, $lines)
{
    $guess_length = (integer)(strlen($str) / ($lines + 1));
    do {
        $out = explode("\n", wordwrap($str, $guess_length));
        $guess_length++;
    } while ($guess_length < strlen($str) && count($out) > $lines);
    return $out;
}
```
As it stands, it is rather a brute force method, and for very large inputs, a better solution would use optimum searching (adding/removing a larger initial interval then decreasing this in iterations)
Upvotes: 1
2018/03/22 | <issue_start>username_0: New to coding, but I'm trying to reset my Rock Paper Scissors game back to zero after 10 tries.
I have this code here. When I play and one of them reaches 10, they don't reset:
```
function reset(userScore, computerScore) {
return = 0;
}
function gameEnd() {
if (userScore || computerScore === "10");
return reset;
}
```<issue_comment>username_1: The condition `if (userScore || computerScore === "10");` is incorrect and should be:
```
if (userScore === "10" || computerScore === "10")
```
But this means that both `userScore` and `computerScore` are strings. They should be numbers, in which case the condition should be:
```
if (userScore === 10 || computerScore === 10)
```
Also, why does the `reset()` function have two parameters that are not used? You should call it, not return it.
Try this:
```
function reset() {
userScore = 0;
computerScore = 0;
return = 0;
}
function gameEnd() {
if (userScore === 10 || computerScore === 10)
return reset();
}
```
If the `gameEnd()` doesn't need to return anything, then `reset()` doesn't need to return anything either:
```
function reset() {
userScore = 0;
computerScore = 0;
}
function gameEnd() {
if (userScore === 10 || computerScore === 10)
reset();
}
```
Upvotes: 0 <issue_comment>username_2: I dont see where `reset` is setting the scores to zero. Also you don't need to accept userScore or computerScore as arguments unless you'll be using them in the function. Also, javascript doesn't mandate a return for void functions.
```
function reset() {
userScore = 0;
computerScore = 0;
}
function gameEnd() {
// check both scores to see if either are 10
// + will type cast a string to int (or keep an int) to check to see if 10
if (+userScore === 10 || +computerScore === 10) {
reset();
}
}
```
Upvotes: 1 <issue_comment>username_3: If you want both to be 10 (and 10 is an integer):
```
function gameEnd() {
if (userScore === 10 && computerScore === 10) {
userScore = 0;
computerScore = 0;
}
}
```
If your goal is to check for just one of the two replace `&&` (AND) with `||` (OR)
I'd however recommend to use `>=` instead of `===` if you ever implement a "bonus" or whatever that gives two points and one has already 9, it will still get caught.
Note: I'm assuming `userScore` and `computerScore` are in a global scope.
A alternative would be:
```
function reset() {
userScore = 0;
computerScore = 0;
}
function gameEnd() {
if (userScore >= 10 && computerScore >= 10) {
reset();
}
}
```
Upvotes: 0 <issue_comment>username_4: You can use a reset() method to set the scores back to 0, and before calling it in the endGame method check whether either score has reached 10:
```
function reset() { userscore = 0; computerscore = 0; }
function endGame() {
  if (userscore >= 10 || computerscore >= 10) {
    reset();
  }
}
```
Upvotes: 0
2018/03/22 | <issue_start>username_0: I'd like to create a new column in a Pandas dataframe, populated with True or False based on the other values in each specific row. My approach to solving this task was to apply a function checking boolean conditions across each row in the dataframe and populating the new column with either True or False.
This is the dataframe:
```
l={'DayTime':['2018-03-01','2018-03-02','2018-03-03'],'Pressure':
[9,10.5,10.5], 'Feed':[9,10.5,11], 'Temp':[9,10.5,11]}
df1=pd.DataFrame(l)
```
This is the function I wrote:
```
def ops_on(row):
return row[('Feed' > 10)
& ('Pressure' > 10)
& ('Temp' > 10)
]
```
The function ops\_on is used to create the new column ['ops\_on']:
```
df1['ops_on'] = df1.apply(ops_on, axis='columns')
```
Unfortunately, I get this error message:
TypeError: ("'>' not supported between instances of 'str' and 'int'", 'occurred at index 0')
Thankful for help.<issue_comment>username_1: You should work column-wise (vectorised, efficient) rather than row-wise (inefficient, Python loop):
```
df1['ops_on'] = (df1['Feed'] > 10) & (df1['Pressure'] > 10) & (df1['Temp'] > 10)
```
The `&` ("and") operator is applied to Boolean series element-wise. An arbitrary number of such conditions can be chained.
---
Alternatively, for the special case where you are performing the same comparison multiple times:
```
df1['ops_on'] = df1[['Feed', 'Pressure', 'Temp']].gt(10).all(1)
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: In your current setup, just re-write your function like this:
```
def ops_on(row):
return (row['Feed'] > 10) & (row['Pressure'] > 10) & (row['Temp'] > 10)
```
Upvotes: 1
2018/03/22 | <issue_start>username_0: How do I iterate over text files only within a directory? What I have thus far is:
```
for file in glob.glob('*'):
f = open(file)
text = f.read()
f.close()
```
This works; however, I am having to store my .py file in the same directory (folder) to get it to run, and as a result the iteration includes the .py file itself. Ideally, what I want to express is either:
1. "Look in this subdirectory/folder, and iterate over all the files in there"
OR...
2. "Look through all files in this directory and iterate over those with .txt extension"
I'm sure I'm asking for something fairly straightforward, but I do not know how to proceed. It's probably worth highlighting that I found the glob module through trial and error, so if this is the wrong way to go about it, feel free to correct me! Thanks.
```
for file in glob.glob('*'):
if not file.endswith('.txt'):
continue
f = open(file)
text = f.read()
f.close()
```
Upvotes: -1 <issue_comment>username_2: The `glob.glob` function actually takes a globbing pattern as its parameter.
For instance, `"*.txt"` while match the files whose name ends with `.txt`.
Here is how you can use it:
```
for file in glob.glob("*.txt"):
f = open(file)
text = f.read()
f.close()
```
If however you want to exclude some specific files, say `.py` files, this is not directly supported by globbing's syntax, as explained [here](https://stackoverflow.com/q/20638040/7051394).
In that case, you'll need to get those files, and manually exclude them:
```
pythonFiles = glob.glob("*.py")
otherFiles = [f for f in glob.glob("*") if f not in pythonFiles]
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: `glob.glob()` uses the same wildcard pattern matching as your standard unix-like shell. The pattern can be used to filter on extensions of course:
```
# this will list all ".py" files in the current directory
>>> glob.glob("*.py")
['__init__.py', 'manage.py', 'fabfile.py', 'fixmig.py']
```
but it can also be used to explore a given path, relative:
```
>>> glob.glob("../*")
['../etc', '../docs', '../setup.sh', '../tools', '../project', '../bin', '../pylint.html', '../sql']
```
or absolute:
```
>>> glob.glob("/home/bruno/Bureau/mailgun/*")
['/home/bruno/Bureau/mailgun/Domains_ Verify - Mailgun.html', '/home/bruno/Bureau/mailgun/Domains_ Verify - Mailgun_files']
```
And you can of course do both at once:
```
>>> glob.glob("/home/bruno/Bureau/*.pdf")
['/home/bruno/Bureau/marvin.pdf', '/home/bruno/Bureau/24-pages.pdf', '/home/bruno/Bureau/alice-in-wonderland.pdf']
```
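For completeness, the same `.txt` filtering can also be written with the standard `pathlib` module, which wraps the identical wildcard matching in an object-oriented API. A small sketch:

```python
from pathlib import Path

def txt_files(directory):
    # return the names of the .txt files in `directory` (non-recursive),
    # skipping everything else, including the .py script itself
    return sorted(p.name for p in Path(directory).glob("*.txt"))
```

`Path.glob` accepts the same patterns as `glob.glob`, and each yielded `Path` can be opened directly, e.g. `p.read_text()`.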
Upvotes: 1
2018/03/22 | <issue_start>username_0: I've come to an issue when handling concurrency.
In the example below, two users A and B edit the same invoice and make different changes to it. If both of them **click save at the same time** I would like one of them to succeed, and the other one to fail. Otherwise the resulting invoice would be an undesired "merged invoice".
Here's the example, tested in PostgreSQL (but I think this question should be database agnostic):
```
create table invoice (
id int primary key not null,
created date
);
create table invoice_line (
invoice_id int,
line numeric(6),
amount numeric(10,2),
constraint fk_invoice foreign key (invoice_id) references invoice(id)
);
insert into invoice(id, created) values (123, '2018-03-17');
insert into invoice_line (invoice_id, line, amount) values (123, 1, 24);
insert into invoice_line (invoice_id, line, amount) values (123, 2, 26);
```
So the initial rows of the invoice are:
```
invoice_id line amount
---------- ---- ------
123 1 24
123 2 26
```
Now, user A edits the invoice, **removes line 2 and clicks SAVE**:
```
-- transaction begins
set transaction isolation level serializable;
select * from invoice where id = 123; -- #1 will it block the other thread?
delete from invoice_line where invoice_id = 123 and line = 2;
commit; -- User A would expect the invoice to only include line 1.
```
At the same time user B edits the invoice and **adds line 3, and clicks SAVE**:
```
-- transaction begins
set transaction isolation level serializable;
select * from invoice where id = 123; -- #2 will this wait for the other thread?
insert into invoice_line (invoice_id, line, amount) values (123, 3, 45);
commit; -- User B would expect the invoice to include lines 1, 2, and 3.
```
Unfortunately both transactions succeed, and I get the merged rows (corrupted state):
```
invoice_id line amount
---------- ---- ------
123 1 24
123 3 45
```
Since this is not what I wanted, what options do I have to control concurrency?<issue_comment>username_1: This is not a database concurrency issue. The ACID properties of databases are about transactions completing, while maintaining database integrity. In the situation you describe, the transactions are correct, and the database is correctly processing them.
What you want is a locking mechanism, essentially a semaphore that guarantees that only one user can have write access to the data at any one time. You might be able to rely on database locking mechanisms, capturing when locks fail to occur.
But, I would suggest one of two other approaches. If you are comfortable with the changes being only in the application logic, then put the locking mechanism there. Have a place where a user can "lock" the table or record; then don't let anyone else touch it.
You can go a step further. You can require that users obtain "ownership" of the table for changes. Then you can implement a trigger that fails unless the user is the one making the changes.
And, you might think of other solutions. What I really want to point out is that your use-case is outside what RDBMSs do by default (because they would let both transactions complete successfully). So, you will need additional logic for any database (that I'm familiar with).
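One concrete way to implement that additional logic is optimistic locking: add a `version` column (an addition to the original schema, not part of the question's tables) that every save increments, and make each save conditional on the version the user originally read. Sketched here with Python's built-in sqlite3 for brevity; the same conditional `UPDATE` works in PostgreSQL:

```python
import sqlite3

def save_invoice(conn, invoice_id, seen_version):
    """Succeeds only if nobody else saved since this user loaded the invoice."""
    cur = conn.execute(
        "UPDATE invoice SET version = version + 1 "
        "WHERE id = ? AND version = ?",
        (invoice_id, seen_version),
    )
    return cur.rowcount == 1  # 0 rows updated -> a concurrent save won the race

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY, version INTEGER NOT NULL)")
conn.execute("INSERT INTO invoice (id, version) VALUES (123, 1)")

print(save_invoice(conn, 123, 1))  # True  - user A's save is applied, version becomes 2
print(save_invoice(conn, 123, 1))  # False - user B still holds version 1, save rejected
```

The rejected user then reloads the invoice, sees the other user's change, and can retry.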
Upvotes: 3 [selected_answer]<issue_comment>username_2: Invoice line items, as a general rule, should not be edited or deleted after being posted. If a customer needs a charge reversed, the typical way to do that is to add a new transaction that credits the amount, possibly with a cross-reference field that contains the ID of the line item that is being reversed. The advantage of this approach is (1) You can modify a customer's balance without having to go back and rebook any prior statement periods, and (2) you won't run into concurrency issues like this one, which are hard to solve.
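In terms of the schema from the question, such a reversal could look like this (the `reverses_line` column is hypothetical, an addition to the original tables):

```sql
-- Hypothetical cross-reference column pointing at the reversed line
alter table invoice_line add column reverses_line numeric(6);

-- Instead of deleting line 2, book a credit that cancels it out
insert into invoice_line (invoice_id, line, amount, reverses_line)
values (123, 3, -26.00, 2);
```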
If the invoice hasn't posted yet, you still don't allow editing of the line items. Instead, you'd cancel the prior invoice and create a new one, with all new line items. This again dodges the concurrency issue at hand.
Upvotes: 0 |
2018/03/22 | 584 | 2,616 | <issue_start>username_0: Below is the relevant code (JS+jQuery on the client side):
```
function getuser(username, password) {
    var user = new Object();
    user.username = username;
    user.password = <PASSWORD>;
    //....
    return user;
}

$("#a1").click(function () {
    var u = getuser($("#username").val(), $("#password").val());
    if (u == false) {
        alert("error");
    } else {
        //....
    }
});
```
The question is how to send `var u` to a session on the server side? |
2018/03/22 | 759 | 2,597 | <issue_start>username_0: I have to find the last column of a row in a sheet. I am able to find the last column in the sheet, but for a particular row, I need to find the last column which will vary for every sheet in the excel, and it will vary at every run. To find the last column, I have used the below code, with reference from the question [Finding last column across multiple sheets in a function](https://stackoverflow.com/questions/49428994/how-to-find-last-column-across-multiple-sheets-in-a-function):
```
For Each ws In ThisWorkbook.Sheets
lc = ws.Cells.Find("*", SearchOrder:=xlByColumns, _
    SearchDirection:=xlPrevious).Column
Debug.Print ws.Name, lc
MsgBox lc
Next ws
```
Updated:
Trying to use the below code, but it's showing error 91. The function is:
```
Function lastColumn(Optional sheetName As String, Optional rowToCheck As Long = 1) As Long
Dim ws As Worksheet
If sheetName = vbNullString Then
Set ws = ActiveSheet
Else
Set ws = Worksheets(sheetName)
End If
lastColumn = ws.Cells(rowToCheck, ws.Columns.Count).End(xlToLeft).Column
End Function
```
Calling it in the code as:
```
For Each ws In ThisWorkbook.Worksheets
i = ws.Columns(2).Find("Total").Row ' run-time error 91 here
Debug.Print lastColumn(ws.Name, i)
Next ws
```<issue_comment>username_1: ```
Sub Test()
For Each ws In ThisWorkbook.Sheets
lc = ws.Cells(i, ws.Columns.Count).End(xlToLeft).Column
Debug.Print ws.Name, lc
MsgBox lc
Next ws
End Sub
```
Just replace `i` with the row number.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this :
```
With Worksheets(set_sheet_name)
LastCol = .Cells(5, .Columns.Count).End(xlToLeft).Column
End With
```
this will get you the last used column of row 5; if you want another row, just change the 5 to whatever row you need.
Upvotes: 0 <issue_comment>username_3: This is the function that I am using for lastColumn per specific row:
```
Function lastColumn(Optional sheetName As String, Optional rowToCheck As Long = 1) As Long
Dim ws As Worksheet
If sheetName = vbNullString Then
Set ws = ActiveSheet
Else
Set ws = Worksheets(sheetName)
End If
lastColumn = ws.Cells(rowToCheck, ws.Columns.Count).End(xlToLeft).Column
End Function
```
It takes optional arguments `sheetName` and `rowToCheck`. This is a way to run it for your case:
```
Public Sub TestMe()
Dim ws As Worksheet
Dim lc As Long
lc = 8
For Each ws In ThisWorkbook.Worksheets
Debug.Print lastColumn(ws.Name, lc)
Next ws
End Sub
```
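Regarding the run-time error 91 in the question's calling code: `Find` returns `Nothing` when the searched value doesn't exist in the column, so guard the result before reading `.Row` — a sketch:

```vba
Dim found As Range
Set found = ws.Columns(2).Find("Total")
If Not found Is Nothing Then
    Debug.Print lastColumn(ws.Name, found.Row)
End If
```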
Upvotes: 1 |
2018/03/22 | 574 | 1,850 | <issue_start>username_0: I use [vue-toasted](https://github.com/shakee93/vue-toasted). I registered `vue-toasted` in `main.js` and use it as below:
```
import Toasted from 'vue-toasted'
Vue.use(Toasted, Option)
```
And use it like this in a vue component
```
this.$toasted.error("Temperature too Hot").goAway(5000),
```
But I do not know how to change the position of the toasted message. The current position is `'top-right'`. How do I change this?<issue_comment>username_1: By just following the link you provided, it is apparent that you can change the position with the Container Position setting.
[](https://i.stack.imgur.com/gxV98.png)
Also you can set the position according to the API page.
>
> API Options below are the options you can pass to create a toast:
>
> `position` (String, default `'top-right'`): position of the toast container, one of `'top-right'`, `'top-center'`, `'top-left'`, `'bottom-right'`, `'bottom-center'`, `'bottom-left'`
>
Upvotes: 0 <issue_comment>username_2: I found a way to change the position of the toasted message: if you want to pass options to the `error` or `show` call itself, do not also pass an options object to `Vue.use()`. After removing the options from `Vue.use()`, I could move the message. I hope this helps.
Upvotes: 0 <issue_comment>username_3: You can refer the demo [here](https://shakee93.github.io/vue-toasted/)
```
let toast = this.$toasted.show("Toasted !!", {
theme: "toasted-primary",
position: "top-right",
duration : 5000
});
```
Upvotes: 1 <issue_comment>username_4: You haven't actually set the options in your main.js
```
import Toasted from 'vue-toasted';
const toastOptions = {
position: 'top-center',
duration : 2000,
theme: "toasted-primary"
}
Vue.use(Toasted, toastOptions);
```
Upvotes: 0 |
2018/03/22 | 1,751 | 6,124 | <issue_start>username_0: I am making a would you rather game, and I would like to not have character restrictions for the W.Y.R. questions. I have seen many examples here on Stack Overflow and other websites, but they use other modules and methods I don't understand how to use or want to use. So I would rather use
```py
button_text_font = pygame.font.Font(font_location, 20)
red_button_text = button_text_font.render(red_text, True, bg_color)
blue_button_text = button_text_font.render(blue_text, True, bg_color)
```
I would like to know how to use this method and, for example, somehow input how far the text can go until it wraps to the next line.
Thanks
P.S. If you could, please also include centering text, etc.<issue_comment>username_1: This is adapted from some very old code I wrote:
```
def renderTextCenteredAt(text, font, colour, x, y, screen, allowed_width):
# first, split the text into words
words = text.split()
# now, construct lines out of these words
lines = []
while len(words) > 0:
# get as many words as will fit within allowed_width
line_words = []
while len(words) > 0:
line_words.append(words.pop(0))
fw, fh = font.size(' '.join(line_words + words[:1]))
if fw > allowed_width:
break
# add a line consisting of those words
line = ' '.join(line_words)
lines.append(line)
# now we've split our text into lines that fit into the width, actually
# render them
# we'll render each line below the last, so we need to keep track of
# the culmative height of the lines we've rendered so far
y_offset = 0
for line in lines:
fw, fh = font.size(line)
# (tx, ty) is the top-left of the font surface
tx = x - fw / 2
ty = y + y_offset
font_surface = font.render(line, True, colour)
screen.blit(font_surface, (tx, ty))
y_offset += fh
```
The basic algorithm is to split the text into words and iteratively build up lines word by word checking the resulting width each time and splitting to a new line when you would exceed the width.
As you can query how wide the rendered text will be, you can figure out where to render it to centre it.
Upvotes: 3 [selected_answer]<issue_comment>username_2: This is messy and there is far more you can do but if you want a specific length of text for say a paragraph...
```
import textwrap

font = pygame.font.SysFont("Times New Roman, Arial", 20, bold=True)
your_text = "blah blah blah"
txtX, txtY = 125, 500
wraplen = 50
count = 0
my_wrap = textwrap.TextWrapper(width=wraplen)
wrap_list = my_wrap.wrap(text=your_text)
# Draw one line at a time further down the screen
for i in wrap_list:
txtY = txtY + 35
Mtxt = font.render(f"{i}", True, (255, 255, 255))
WIN.blit(Mtxt, (txtX, txtY))
count += 1
# Update All Window and contents
pygame.display.update()
```
Upvotes: 1 <issue_comment>username_3: Using the implementation in [Pygame Zero](https://github.com/lordmauve/pgzero/blob/master/pgzero/ptext.py#L81-L143), text can be wrapped with the following function.
```
# Adapted from https://github.com/lordmauve/pgzero/blob/master/pgzero/ptext.py#L81-L143
def wrap_text(text, font, max_width):
texts = text.replace("\t", " ").split("\n")
lines = []
for text in texts:
text = text.rstrip(" ")
if not text:
lines.append("")
continue
# Preserve leading spaces in all cases.
a = len(text) - len(text.lstrip(" "))
# At any time, a is the rightmost known index you can legally split a line. I.e. it's legal
# to add text[:a] to lines, and line is what will be added to lines if
# text is split at a.
a = text.index(" ", a) if " " in text else len(text)
line = text[:a]
while a + 1 < len(text):
# b is the next legal place to break the line, with `bline`` the
# corresponding line to add.
if " " not in text[a + 1:]:
b = len(text)
bline = text
else:
# Lines may be split at any space character that immediately follows a non-space
# character.
b = text.index(" ", a + 1)
while text[b - 1] == " ":
if " " in text[b + 1:]:
b = text.index(" ", b + 1)
else:
b = len(text)
break
bline = text[:b]
bline = text[:b]
if font.size(bline)[0] <= max_width:
a, line = b, bline
else:
lines.append(line)
text = text[a:].lstrip(" ")
a = text.index(" ", 1) if " " in text[1:] else len(text)
line = text[:a]
if text:
lines.append(line)
return lines
```
Bear in mind that wrapping text requires multiple lines that must be rendered separately. Here's an example of how you could render each line.
```
import numpy as np
import pygame

def create_text(text, color, pos, size, max_width=None, line_spacing=1):
font = pygame.font.SysFont("monospace", size)
if max_width is not None:
lines = wrap_text(text, font, max_width)
else:
lines = text.replace("\t", " ").split("\n")
line_ys = (
np.arange(len(lines)) - len(lines) / 2 + 0.5
) * 1.25 * font.get_linesize() + pos[1]
# Create the surface and rect that make up each line
text_objects = []
for line, y_pos in zip(lines, line_ys):
text_surface = font.render(line, True, color)
text_rect = text_surface.get_rect(center=(pos[0], y_pos))
text_objects.append((text_surface, text_rect))
return text_objects
# Example case
lines = create_text(
text="Some long text that needs to be wrapped",
color=(255, 255, 255), # White
pos=(SCREEN_WIDTH // 2, SCREEN_HEIGHT // 2), # Center of the screen
size=16,
max_width=SCREEN_WIDTH,
)
# Render each line
for text_object in lines:
screen.blit(*text_object)
```
Upvotes: 0 |
2018/03/22 | 289 | 1,083 | <issue_start>username_0: I need a query to see which Roles users have in DotNetNuke.
I found this query but it gives the RoleID and not the name.
What if there is more than one role associated with a user?
```
SELECT Users.FirstName, Users.LastName, Users.Email,UserRoles.RoleID
FROM UserRoles
INNER JOIN Users ON UserRoles.UserID = Users.UserID
```<issue_comment>username_1: If a user has 2 roles, it will return 2 rows for the same user with different RoleIDs: n rows for n roles.
Upvotes: -1 <issue_comment>username_2: You need to include the `Roles` table in your query if you want the Role Names.
```
SELECT Users.FirstName, Users.LastName, Users.Email, UserRoles.RoleID, Roles.RoleName
FROM UserRoles
INNER JOIN Users ON UserRoles.UserID = Users.UserID
INNER JOIN Roles ON UserRoles.RoleID = Roles.RoleID
WHERE (Roles.PortalID = 0)
```
You also might want to include the PortalID to avoid duplicates from other portals. However, I would recommend using the DNN core functionality to determine a user's roles, with the `RoleController`.
Upvotes: 3 [selected_answer] |
2018/03/22 | 502 | 1,585 | <issue_start>username_0: I think i am not moving my pop-up correctly as you can see on the picture.
[Two dialog instead of one](https://i.stack.imgur.com/TZaM9.png)
How to handle this in angular?
EDIT : if you take a closer look at the picture I sent, there is one pop-up in the center, and there is also one at the bottom (in red). I would like the one at the bottom to disappear.
To display the pop-up, i am using this code : <https://material.angular.io/components/dialog/examples>
The only difference is the css :
[my css](https://i.stack.imgur.com/6417c.png)<issue_comment>username_1: This example may help you. First one:

```
this.dialogRef = this.dialog.open(Dialog1, {
    width: '100px',
    height: '1000px'
});
this.dialogRef.updatePosition({ top: '20px', left: '20px' });
this.dialogRef.afterClosed().subscribe((result: any) => {
    this.dialogRef2.close();
});
```

Second one, 130 pixels to the left:

```
this.dialogRef2 = this.dialog.open(Dialog2, {
    width: '100px',
    height: '1000px'
});
this.dialogRef2.updatePosition({ top: '20px', left: '130px' });
```

Store the returned dialog references in their own fields (here `dialogRef` and `dialogRef2`); overwriting the injected `MatDialog` service with `this.dialog = this.dialog.open(...)` would break further `open()` calls.
Upvotes: 1 <issue_comment>username_2: It seems like you didn't include a theme to your project.
Are you using angular-cli?
If you are, put
`@import "~@angular/material/prebuilt-themes/indigo-pink.css";` in `style.css`.
If not, put it in `index.html`.
Then don't forget to remove your custom CSS!
Check step 4 here: [material.angular.io/guide/getting-started](http://material.angular.io/guide/getting-started)
Upvotes: 3 [selected_answer] |
2018/03/22 | 455 | 1,517 | <issue_start>username_0: Well, I was trying out convolution on grey scale images, but then when I searched for convolution on rgb images, I couldn't find satisfactory explanation. How to apply convolution to rgb images?<issue_comment>username_1: A linear combination of vectors can be computed by linearly combining corresponding vector elements:
```
a * [x1, y1, z1] + b * [x2, y2, z2] = [a*x1+b*x2, a*y1+b*y2 , a*z1+b*z2]
```
Because a convolution is a linear operation (i.e. you weight each pixel within a neighborhood and add up the results), it follows that you can apply a convolution to each of the RGB channels independently (e.g. using MATLAB syntax):
```matlab
img = imread(...);
img(:,:,1) = conv2(img(:,:,1),kernel);
img(:,:,2) = conv2(img(:,:,2),kernel);
img(:,:,3) = conv2(img(:,:,3),kernel);
```
Upvotes: 2 <issue_comment>username_2: You can look at this in two different ways. First, you may convert the color image into an intensity image with a weighting vector; the most widely used one is (.299, .587, .114), which is the natural gray scale conversion. To get the intensity you compute I = .299\*R + .587\*G + .114\*B.
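As a sketch of that conversion with NumPy (assuming an `(h, w, 3)` RGB array; the array here is a dummy):

```python
import numpy as np

weights = np.array([0.299, 0.587, 0.114])  # R, G, B luminance weights
rgb = np.ones((4, 4, 3))                   # dummy (h, w, 3) image
intensity = rgb @ weights                  # weighted channel sum, shape (h, w)
```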
If you are designing your own convolutional network and intend to keep the color channels as inputs, just treat the colored image as a 4D tensor with 3 channels. For example, if you have an (h x w) image, the tensor size is (1 x h x w x 3), and you may use a filter of size (kh x kw x 3 x f), where kh and kw are your filter sizes and f is the number of required output features.
Upvotes: 0 |
2018/03/22 | 445 | 1,412 | <issue_start>username_0: I'm using Log4j2 ver 2.3
log4j2.xml looks like:
```
<?xml version="1.0" encoding="UTF-8"?>
```
Everything works fine until the next day, when the log from the previous day gets overwritten by some logs from the current day.
Example:
catalina.2018-03-21.log was fine yesterday (March 21st), but today it got overwritten by some logs from 2018-03-22, while catalina.2018-03-22.log contains the rest of today's logs (March 22nd).
Any thoughts?<issue_comment>username_1: try
```
BasicConfigurator.resetConfiguration()
```
in the method that uses logger after *BasicConfigurator.configure()*
Upvotes: 0 <issue_comment>username_2: Log4j 2.3 is somewhat old, released on 2015-05-09, almost 3 years ago. So try using an updated version; version 2.11.0 is the latest as of now, released on 2018-03-11.
**Update:** To continue using Log4j 2.3, you can *compromise your requirements*. One of the options could be using a static value for the `fileName` attribute, e.g. `.../catalina.log`, `.../catalina.current.log`, etc.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Try adding an auto-increment variable to your file pattern (%i).
This worked fine for me.
```
filePattern="${sys:catalina.base}/logs/catalina.%d{yyyy-MM-dd}.%i.log"
```
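For context, a minimal sketch of how that pattern could sit in a `RollingFile` appender (the appender name and layout here are illustrative, not taken from the question's configuration):

```xml
<RollingFile name="CatalinaFile"
             fileName="${sys:catalina.base}/logs/catalina.log"
             filePattern="${sys:catalina.base}/logs/catalina.%d{yyyy-MM-dd}.%i.log">
  <PatternLayout pattern="%d %-5p [%t] %c - %m%n"/>
  <Policies>
    <TimeBasedTriggeringPolicy interval="1"/>
  </Policies>
</RollingFile>
```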
Upvotes: 0 <issue_comment>username_4: Set the `TimeBasedTriggeringPolicy` time interval to **1**; it will roll the file over on a daily basis.
Upvotes: 0 |
2018/03/22 | 283 | 1,177 | <issue_start>username_0: I'm trying to learn how to use GitHub.
I managed to create a repository and update the master,
But I'm trying to understand how the branching is working.
I have created a new branch on eclipse by right clicking the master and then "Create branch..." button.
After that I double-clicked it to work on this branch, and I added a new class to see if I am working on this branch.
But when I go back to the master, that class is already there, and I did not use the merge command...
In the project explorer, the text in the [...] after the project's name has changed to the new branch's name, which I assume means that I'm working on a different branch.
So, what am I doing wrong?
Thanks!<issue_comment>username_1: Before a new item is considered attached to a branch in git, you must commit it. Git works with commits. Branches (and Tags) are only "pointers" to those commits (or "named" commits).
Upvotes: 2 [selected_answer]<issue_comment>username_2: It suggests that you did not change the branch. You worked on the master branch only. You need to switch to a specific branch by going to the **Git perspective -> Switch To** option.
Upvotes: 0 |
2018/03/22 | 353 | 1,047 | <issue_start>username_0: ```
pic_names <- c("../img/pics/111L.jpg", NA,
"../img/pics/134L.jpg",NA,
"../img/pics/164L.jpg", NA,
"../img/pics/187L.jpg", NA)
df <- as.data.frame(pic_names)
```
I want to do two operations:
* Remove parts of the strings. The desired result is to go from "../img/pics/111L.jpg" to "111L", from "../img/pics/134L.jpg" to "134L", and so on
* the NA cell has to be filled with the strings contained in the preceding rows. The output should look like this:
```
  pic_names
1      111L
2      111L
3      134L
4      134L
5      121R
6      121R
7      166R
8      166R
```
|
2018/03/22 | 1,129 | 3,498 | <issue_start>username_0: I have a DataFrame(df) in pyspark, by reading from a hive table:
```
df=spark.sql('select * from ')
+++++++++++++++++++++++++++++++++++++++++++
| Name | URL visited |
+++++++++++++++++++++++++++++++++++++++++++
| person1 | [google,msn,yahoo] |
| person2 | [fb.com,airbnb,wired.com] |
| person3 | [fb.com,google.com] |
+++++++++++++++++++++++++++++++++++++++++++
```
When i tried the following, got an error
```
df_dict = dict(zip(df['name'],df['url']))
"TypeError: zip argument #1 must support iteration."
```
`type(df.name) is of 'pyspark.sql.column.Column'`
How do i create a dictionary like the following, which can be iterated later on
```
{'person1':'google','msn','yahoo'}
{'person2':'fb.com','airbnb','wired.com'}
{'person3':'fb.com','google.com'}
```
Appreciate your thoughts and help.<issue_comment>username_1: If you wanted your results in a python dictionary, you could use `collect()`1 to bring the data into local memory and then massage the output as desired.
First collect the data:
```python
df_dict = df.collect()
#[Row(Name=u'person1', URL visited=[u'google', u'msn', u'yahoo']),
# Row(Name=u'person2', URL visited=[u'fb.com', u'airbnb', u'wired.com']),
# Row(Name=u'person3', URL visited=[u'fb.com', u'google.com'])]
```
This returns a list of [`pyspark.sql.Row`](http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.Row) objects. You can easily convert this to a list of `dict`s:
```python
df_dict = [{r['Name']: r['URL visited']} for r in df_dict]
#[{u'person1': [u'google', u'msn', u'yahoo']},
# {u'person2': [u'fb.com', u'airbnb', u'wired.com']},
# {u'person3': [u'fb.com', u'google.com']}]
```
1 Be advised that for large data sets, this operation can be slow and potentially fail with an Out of Memory error. You should consider if this is what you really want to do first as you will lose the parallelization benefits of spark by bringing the data into local memory.
Upvotes: 3 <issue_comment>username_2: I think you can try `row.asDict()`; this code runs directly on the executors, and you don't have to collect the data on the driver.
Something like:
```
df.rdd.map(lambda row: row.asDict())
```
Upvotes: 5 <issue_comment>username_3: Given:
```
+++++++++++++++++++++++++++++++++++++++++++
| Name | URL visited |
+++++++++++++++++++++++++++++++++++++++++++
| person1 | [google,msn,yahoo] |
| person2 | [fb.com,airbnb,wired.com] |
| person3 | [fb.com,google.com] |
+++++++++++++++++++++++++++++++++++++++++++
```
This should work:
```
df_dict = df \
.rdd \
.map(lambda row: {row[0]: row[1]}) \
.collect()
df_dict
#[{'person1': ['google','msn','yahoo']},
# {'person2': ['fb.com','airbnb','wired.com']},
# {'person3': ['fb.com','google.com']}]
```
This way you just collect after processing.
Please, let me know if that works for you :)
Upvotes: 2 <issue_comment>username_4: How about using the pyspark [`Row.asDict()`](https://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Row.asDict) method? This is part of the DataFrame API (which I understand is the "recommended" API at time of writing) and would not require you to use the RDD API at all.
```
df_list_of_dict = [row.asDict() for row in df.collect()]
type(df_list_of_dict), type(df_list_of_dict[0])
#(<class 'list'>, <class 'dict'>)
df_list_of_dict
#[{'person1': ['google','msn','yahoo']},
# {'person2': ['fb.com','airbnb','wired.com']},
# {'person3': ['fb.com','google.com']}]
```
Upvotes: 4 |
2018/03/22 | 230 | 822 | <issue_start>username_0: Anyone please help me. I already had Android Studio 2.2; I uninstalled it and downloaded the new version of Android Studio from the developer site. When I want to open the project, it shows a popup like:
Cannot load project: com.intellij.ide.plugins.PluginManager$StartupAbortedException: Fatal error initializing plugin net.rim.tools.ajde.
Please any one help me.<issue_comment>username_1: The problem is the BlackBerry plugin for Android Studio (net.rim.tools.ajde)
Uninstall it or deselect it in AS from
>
> File > Settings > Plugins
>
>
>
Now, the problem is how to run BlackBerry Plug-in for Android Studio in AS > 3.0
Upvotes: -1 <issue_comment>username_2: Clear the file in `C:\Users\lianjia\.IntelliJIdea2017.3\config\plugins` and then restart the IDE.
Upvotes: 2 |
2018/03/22 | 229 | 770 | <issue_start>username_0: my android app is calling a native c++ function which works fine using a .so library.
>
> "openDatabase(String path)"
>
>
>
The path is the sqlite database .db but i have no idea how to get this file inside my android compiled app. Right now it is located in resources folder but how can i access it inside my app?
Thanks<issue_comment>username_1: The problem is the BlackBerry plugin for Android Studio (net.rim.tools.ajde)
Uninstall it or deselect it in AS from
>
> File > Settings > Plugins
>
>
>
Now, the problem is how to run BlackBerry Plug-in for Android Studio in AS > 3.0
Upvotes: -1 <issue_comment>username_2: Clear the file in `C:\Users\lianjia\.IntelliJIdea2017.3\config\plugins` and then restart the IDE.
Upvotes: 2 |
2018/03/22 | 1,991 | 7,966 | <issue_start>username_0: As documented in questions like [Entity Framework Indexing ALL foreign key columns](https://stackoverflow.com/questions/29707363/entity-framework-indexing-all-foreign-key-columns), EF Core seems to automatically generate an index for every foreign key. This is a sound default for me (let's not get into an opinion war here...), but there are cases where it is just a waste of space and slowing down inserts and updates. How do I prevent it on a case-by-case basis?
I don't want to wholly turn it off, as it does more good than harm; I don't want to have to manually configure it for all those indices I *do* want. I just want to prevent it on *specific* FKs.
Related side question: is the fact that these index are automatically created mentioned anywhere in the EF documentation? I can't find it anywhere, which is probably why I can't find how to disable it?
*Someone is bound to question why I would want to do this... so in the interest of saving time, the OPer of the linked question gave a great example in a comment:*
>
> We have a `People` table and an `Addresses` table, for example. The
> `People.AddressID` FK was Indexed by EF but I only ever start from a
> `People` row and search for the `Addresses` record; I never find an
> `Addresses` row and then search the `People.AddressID` column for a
> matching record.
>
>
><issue_comment>username_1: If it is really necessary to avoid some of the foreign key indices, then - as far as I know (currently) - in .NET Core it is necessary to remove the code that sets those indices in the generated migration code file.
Another approach would be to implement a custom migration generator, in combination with an attribute or maybe an extension method, that avoids the index creation. You can find more information in this answer for EF6: [EF6 preventing not to create Index on Foreign Key](https://stackoverflow.com/questions/46212828/ef6-preventing-not-to-create-index-on-foreign-key). But I'm not sure whether it will work in .NET Core too; the approach seems to be a bit different. Here is a [MS doc article](https://learn.microsoft.com/en-us/ef/core/managing-schemas/migrations/operations) that should help.
But, **I strongly advise against doing this!** I'm against it because you would have to modify generated migration files, not because of skipping indices on FKs. Like you mentioned in the question's comments, some real-world scenarios do need such an approach.
---
For other people, who are not really sure whether they have to avoid indices on FKs and therefore modify migration files:
Before you go that way, I would suggest implementing the application with indices on FKs and checking the performance and space usage, producing plenty of test data for that purpose.
If it really results in performance and space usage issues on a test or QA stage, it's still possible to remove indices in migration files.
Because we already chatted about `EnsureCreated` vs `migrations` here, for completeness some further information about EnsureCreated and migrations (*even if you don't need it :-)*):
* MS doc about [EnsureCreated()](https://learn.microsoft.com/en-us/ef/core/api/microsoft.entityframeworkcore.storage.idatabasecreator) (It will not update your database if you have some model changes - migrations would do it)
* interesting too (even if for EF7) [EF7 EnsureCreated vs. Migrate Methods](http://thedatafarm.com/data-access/ef7-ensurecreated-vs-migrate-methods/)
Upvotes: 2 [selected_answer]<issue_comment>username_2: EF Core has a configuration option to replace one of its services.
I found replacing IConventionSetBuilder to custom one would be a much cleaner approach.
<https://giridharprakash.me/2020/02/12/entity-framework-core-override-conventions/>
Upvotes: 2 <issue_comment>username_3: Entity Framework core 2.0 (the latest version available when the question was asked) doesn't have such a mechanism, but EF Core 2.2 just might - in the form of [*Owned Entity Types*](https://learn.microsoft.com/en-us/ef/core/modeling/owned-entities).
Namely, since you said:
>
> *" I only ever start from a `People` row and search for the `Addresses` record; I never find an `Addresses` row"*
>
>
>
Then you may want to make the `Address` an *Owned Entity Type* (and especially the variant with [*'Storing owned types in separate tables'*](https://learn.microsoft.com/en-us/ef/core/modeling/owned-entities), to match your choice of storing the address information in a separate `Addresses` table).
The docs of the feature seem to say a matching:
>
> *"Owned entities are essentially a part of the owner and cannot exist without it"*
>
>
>
By the way, now that the feature is in EF, this may justify why EF always creates the indexes for `HasMany`/`HasOne`. It's likely because the `Has*` relations are meant to be used towards other entities (as opposed to 'value objects') and these, since they have their own identity, *are* meant to be queried independently and allow accessing other entities they relate to using navigational properties. For such a use case, it would be simply dangerous to use such navigation properties without indexes (a few queries could make the database slow down hugely).
There are a few caveats here though:
Turning an entity into an owned one doesn't instruct EF only about the index, but rather it instructs to map the model to database in a way that is a bit different (more on this below) but the end effect is in fact free of that extra index on `People`.
But chances are, this actually might be the **better** solution for you: this way you also say that no one should query the Address (by [not allowing to create a `DbSet` of that type](https://learn.microsoft.com/en-us/ef/core/modeling/owned-entities#by-design-restrictions)), minimizing the chance of someone using it to reach the other entities with these costly indexless queries.
As to what the difference is, you'll note that if you make the `Address` owned by `Person`, EF will create a `PersonId` column in the `Address` table, which is different to your `AddressId` in the `People` table (in a sense, lack of the foreign key is a bit of a cheat: an index for querying Person from Address is there, it's just that it's the primary key index of the `People` table, which was there anyways). But take note that this design is actually rather good - it not only needs one column less (no `AddressId` in `People`), but it also guarantees that there's no way to make orphaned `Address` record that your code will never be able to access.
If you would still like to keep the `AddressId` column in the `Addresses`, then there's still one option:
* Just choose a name of `AddressId` for the foreign key in the `Addresses` table and just "pretend" you don't know that it happens to have the same values as the `PersonId` :)
If that option isn't funny (e.g. because you can't change your database schema), then you're somewhat out of luck. But do take note that among the [Current shortcomings of EF](https://learn.microsoft.com/en-us/ef/core/modeling/owned-entities#current-shortcomings) they still list *"Instances of owned entity types cannot be shared by multiple owners"*, while some shortcomings of the previous versions are already listed as addressed. Might be worth watching that space as, it seems to me, resolving that one will probably involve introducing the ability to have your `AddressId` in the `People`, because in such a model, for the owned objects to be shared among many entities the foreign keys would need to be sitting with the owning entities to create an association to the same value for each.
Upvotes: 0 <issue_comment>username_4: in the OnModelCreating override
AFTER the call to
```
base.OnModelCreating(modelBuilder);
```
add:
```
// "YourEntity" is a placeholder for the entity type the index was generated for
var indexForRemoval = modelBuilder.Entity<YourEntity>().HasIndex(x => x.Column_Index_Is_On).Metadata;
modelBuilder.Entity<YourEntity>().Metadata.RemoveIndex(indexForRemoval);
```
Upvotes: 0 |
2018/03/22 | 1,738 | 6,982 | <issue_start>username_0: Is there any way to change the baseUrl of CKFinder dynamically?
I need to use this kind of path: `/websitebuilder/www/user_images/$id/`. I used Google to find some answers, but I didn't manage to make it work.
Can someone please give me any hint on how I should do that?
I know that in config.php you change the **baseUrl** param, but how do I make it dynamic?
2018/03/22 | 840 | 2,953 | <issue_start>username_0: I want to create my own signal emitter or callback storage, call it whatever you want.
Here is what I have come up with so far:
```
var DetonationCallback = function detonationCallback() {
detonationCallback.callbacks = [];
detonationCallback.add = function(callback) {
detonationCallback.callbacks.push(callback);
};
for(var i = 0; i < detonationCallback.callbacks.length; ++i) {
callback[i](arguments);
}
};
```
Basically I have two problems now which I can not tackle. The first one is how can I move
```
detonationCallback.callbacks = [];
detonationCallback.add = function(callback) {
detonationCallback.callbacks.push(callback);
};
```
outside of the function? And another question is how can I pass all the arguments which were passed into the `detonationCallback` just in the same order into `callback[i]`?
Will be grateful for any answer.
Additional info: when done with the implementation of the type I would like to use it as follows:
```
var callback = new DetonationCallback();
function f1() {
}
function f2(firstArg, secondArg) {
}
callback.add(f1);
callback.add(f2);
callback();
```<issue_comment>username_1: First, make detonationCallback an Object
```
var detonationCallback = {};
detonationCallback.callbacks = [];
detonationCallback.clear = function() {
  detonationCallback.callbacks = [];
}
detonationCallback.add = function(callback) {
  detonationCallback.callbacks.push(callback);
};
```
Second, make your detonate a function in detonationCallback
```
detonationCallback.detonate = function() {
  for(var i = 0; i < detonationCallback.callbacks.length; ++i) {
    // forward this call's arguments to each stored callback, in order
    detonationCallback.callbacks[i].apply(this, arguments);
  }
};
```
If you want/need to use new, just create an object prototype for this
Upvotes: 2 [selected_answer]<issue_comment>username_2: I'm not sure why you want to move the add function outside, but how about using a class instead?
Also note the use of [.apply()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/apply) to send several arguments with an array.
```js
class DetonationCallback {
constructor() {
this.callbacks = [];
}
add(callback) {
this.callbacks.push(callback);
}
call(args) {
this.callbacks.forEach(function(callback) {
callback.apply(this, args);
});
}
}
var callback = function(arg1, arg2) {
console.log(arg1, arg2);
}
var callback2 = function(arg1) {
console.log(arg1);
}
var handler = new DetonationCallback();
handler.add(callback);
handler.add(callback2);
handler.call(['arg1', 'arg2']);
```
Upvotes: 1 <issue_comment>username_3: No need to write this yourself:
When using node you can use the built in EventEmitter lib.
<https://nodejs.org/api/events.html>
There is also a port for the Browser
<https://github.com/Olical/EventEmitter>
Upvotes: 0 |
2018/03/22 | 1,451 | 4,322 | <issue_start>username_0: I'm working on creating an hmac-sha1 hash in Cerner's proprietary language, CCL. It's similar to PL/SQL and has access to native Oracle functions. The idea is to mimic a hash created inside javascript. Oracle is using DBMS_CRYPTO and javascript is using CRYPTO.JS. Below are my two implementations, however I'm unable to get the hashes to match with basic testing strings.
If anyone can shed some light on what I'm doing wrong, I'd appreciate it! I've tried playing around with how I'm giving the data to the dbms_crypto.mac() function, but I'm not able to get it to match.
Javascript output: bad02f0a5324ad708bb8100220bae499e2c127b8
Codepen: <https://codepen.io/bookluvr416/pen/jzmVWx>
```
var consumerKey = "testConsumer";
var secretKey = "testSecret";
var valueToSign = consumerKey + secretKey;
var hmac = Crypto.HMAC(Crypto.SHA1, valueToSign.toLowerCase(),
secretKey.toLowerCase(), { asBytes: false });
```
DBMS_CRYPTO output: 0BCC191B3A941C95ECAA46C8F825394706096E62
PL/SQL Sample that I'm trying to base my CCL on:
```
DECLARE
typ INTEGER := DBMS_CRYPTO.SH1;
key RAW(100) := 'testsecret';
mac_value RAW(100);
BEGIN
mac_value := DBMS_CRYPTO.MAC('testconsumertestsecret', typ, key);
END;
```
Caveat - I'm not able to actually test the PL/SQL version, since I don't have an oracle sandbox to play in. I'm also not allowed to post the proprietary code on external websites, so I can't show my actual implementation.<issue_comment>username_1: I was using an incorrect or outdated example.
```
DBMS_CRYPTO.mac(UTL_I18N.string_to_raw(l_oauth_base_string, 'AL32UTF8')
,DBMS_CRYPTO.hmac_sh1
,UTL_I18N.string_to_raw(l_oauth_key, 'AL32UTF8'));
```
Upvotes: 1 <issue_comment>username_2: Since there is no direct compatibility between CryptoJS in JavaScript and dbms_crypto in Oracle PL/SQL, especially when encrypting using Pkcs7, the following code shows how that can be done.
The answer is here:
Javascript
```
var AesUtil = function() {};
AesUtil.prototype.encrypt = function(key, iv, plainText) {
var encrypted = CryptoJS.AES.encrypt(
plainText,
CryptoJS.enc.Base64.parse(key),
{ iv: CryptoJS.enc.Utf8.parse(iv), mode: CryptoJS.mode.CBC, padding: CryptoJS.pad.Pkcs7 });
return encrypted.ciphertext.toString(CryptoJS.enc.Base64);
}
AesUtil.prototype.decrypt = function(key, iv, cipherText) {
var cipherParams = CryptoJS.lib.CipherParams.create({
ciphertext: CryptoJS.enc.Base64.parse(cipherText)
});
var decrypted = CryptoJS.AES.decrypt(
cipherParams,
CryptoJS.enc.Base64.parse(key),
{ iv: CryptoJS.enc.Utf8.parse(iv), mode: CryptoJS.mode.CBC, padding: CryptoJS.pad.Pkcs7 });
return decrypted.toString(CryptoJS.enc.Utf8);
}
```
PL/SQL
```
create or replace package encrypt_decrypt_pk as
-- Created by username_2 19/09/2020
-- grant execute on sys.dbms_crypto to ; -- Using sysdba
  function fn_encrypt(a_text in varchar2, a_key varchar2, a_iv varchar2) return varchar2;
  function fn_decrypt(a_token in varchar2, a_key varchar2, a_iv varchar2) return varchar2;
end encrypt_decrypt_pk;
/
create or replace package body encrypt_decrypt_pk as
-- Created by username_2 19/09/2020
-- grant execute on sys.dbms_crypto to ; -- Using sysdba
  function fn_encrypt(a_text in varchar2, a_key varchar2, a_iv varchar2)
  return varchar2
  as
    encryption_type pls_integer :=
      dbms_crypto.encrypt_aes128
      + dbms_crypto.chain_cbc
      + dbms_crypto.pad_pkcs5;
  begin
    return utl_raw.cast_to_varchar2(utl_encode.base64_encode(dbms_crypto.encrypt(
      src => utl_raw.cast_to_raw(a_text),
      typ => encryption_type,
      key => utl_encode.base64_decode(utl_raw.cast_to_raw(a_key)),
      iv  => utl_raw.cast_to_raw(a_iv)
    )));
  end;

  function fn_decrypt(a_token in varchar2, a_key varchar2, a_iv varchar2)
  return varchar2
  as
    encryption_type pls_integer :=
      dbms_crypto.encrypt_aes128
      + dbms_crypto.chain_cbc
      + dbms_crypto.pad_pkcs5;
  begin
    return utl_raw.cast_to_varchar2(dbms_crypto.decrypt(
      src => utl_encode.base64_decode(utl_raw.cast_to_raw(a_token)),
      typ => encryption_type,
      key => utl_encode.base64_decode(utl_raw.cast_to_raw(a_key)),
      iv  => utl_raw.cast_to_raw(a_iv)
    ));
  end;
end encrypt_decrypt_pk;
/
```
Upvotes: 0 |
2018/03/22 | 1,253 | 3,795 | <issue_start>username_0: I was wondering if it is possible to use a macro to execute a solver/what if analysis function while Excel focuses on a different sheet.
I set up a macro for both of these methods and they work fine when I am currently on the sheet with the formulas and data, but when I switch to a different sheet and execute the macro it tends to run into these errors: "Error 438: Object Doesn't Support This Property or Method" and "runtime error 1004".
My code currently looks like this:
```
Sub Solver_alpha()
Worksheets("Input Output").Activate
Worksheets("Input Output").Range ("$B$7:$B$6")
' Solver_alpha Macro
'
' Keyboard Shortcut: Ctrl+t
'
SolverOk SetCell:="$B$7", MaxMinVal:=3, ValueOf:=0, ByChange:="$B$6", Engine:=1 _
, EngineDesc:="GRG Nonlinear"
SolverOk SetCell:="$B$7", MaxMinVal:=3, ValueOf:=0, ByChange:="$B$6", Engine:=1 _
, EngineDesc:="GRG Nonlinear"
SolverSolve UserFinish:=True
End Sub
```
2018/03/22 | 540 | 2,105 | <issue_start>username_0: I'd like to create a [`CompletableFuture`](https://docs.oracle.com/javase/10/docs/api/java/util/concurrent/CompletableFuture.html) that has already completed exceptionally.
Scala provides what I'm looking for via a factory method:
```
Future.failed(new RuntimeException("No Future!"))
```
Is there something similar in Java 10 or later?<issue_comment>username_1: I couldn't find a factory method for a failed future in the standard library for Java 8 (Java 9 fixed this as helpfully pointed out by [username_2](https://stackoverflow.com/questions/49432257/completablefuture-immediate-failure/49432258#49432258)), but it's easy to create one:
```
/**
* Creates a {@link CompletableFuture} that has already completed
* exceptionally with the given {@code error}.
*
* @param error the returned {@link CompletableFuture} should immediately
* complete with this {@link Throwable}
 * @param <T> the type of value inside the {@link CompletableFuture}
 * @return a {@link CompletableFuture} that has already completed with the
 * given {@code error}
 */
public static <T> CompletableFuture<T> failed(Throwable error) {
    CompletableFuture<T> future = new CompletableFuture<>();
    future.completeExceptionally(error);
    return future;
}
```
Upvotes: 5 <issue_comment>username_2: Java 9 provides [`CompletableFuture#failedFuture(Throwable)`](https://docs.oracle.com/javase/9/docs/api/java/util/concurrent/CompletableFuture.html#failedFuture-java.lang.Throwable-) which
>
> Returns a new `CompletableFuture` that is already completed
> exceptionally with the given exception.
>
>
>
that is more or less what you submitted
```
/**
* Returns a new CompletableFuture that is already completed
* exceptionally with the given exception.
*
* @param ex the exception
 * @param <U> the type of the value
 * @return the exceptionally completed CompletableFuture
 * @since 9
 */
public static <U> CompletableFuture<U> failedFuture(Throwable ex) {
    if (ex == null) throw new NullPointerException();
    return new CompletableFuture<U>(new AltResult(ex));
}
```
Upvotes: 5 [selected_answer] |
2018/03/22 | 391 | 1,221 | <issue_start>username_0: How do I format a date in React?
The date comes in the form `Thu, 22 Mar 2018 14:11:40 GMT`.
I would like to display only `22 Mar 2018`.
```
render() {
  return (
    <ul>
      {this.state.posts.map(function(item, i) {
        return (
          <li>{item.pubDate}</li>
        )
      })}
    </ul>
  )
}
```
**Edited**
After installing Moment.js with `npm install moment --save`, I used it successfully this way:
```
import React, { Component } from 'react';
import moment from 'moment'
class PrettyDate extends Component {
  render() {
    return (
      <span>{moment(item.pubDate).format('ll')}</span>
    )
  }
}
export default PrettyDate;
```<issue_comment>username_1: A no-lib-dependent JavaScript solution is `item.pubDate.split(" ").slice(1, 4).join(" ")` but you might consider using [moment](https://momentjs.com/) if you already have it installed.
Upvotes: 2 <issue_comment>username_2: you can format the date like that.
```
function myDate() {
const date = new Date(2018,3,22);
const year = date.getFullYear();
const month = date.toLocaleString("en-US", { month: "long" });
const day = date.toLocaleString("en-US", { day: "2-digit" });
return (
{day}
{month}
{year}
);
}
export default myDate;
```
Upvotes: 1 |
2018/03/22 | 299 | 1,039 | <issue_start>username_0: I have a log folder `/var/www/app/logs/` in an EC2 AMI instance. My PHP application is writing logs there; however, every day I must log on and run chmod to avoid the error message that the folder is not writable:
`$ sudo chmod 7777 -R /var/www/app/logs/`
For some reason, these permissions on the instance are reset automatically every morning by some unknown process. Is there a way to make this change permanent?
2018/03/22 | 487 | 1,729 | <issue_start>username_0: Is it possible to achieve this kind of query in cassandra efficiently?
Say I have a table something
```
CREATE TABLE something(
a INT,
b INT,
c INT,
d INT,
e INT,
PRIMARY KEY(a,b,c,d,e)
);
```
And I want to query this table in following way:
`SELECT * FROM something WHERE a=? AND b=? AND e=?`
or
`SELECT * FROM something WHERE a=? AND c=? AND d=?`
or
`SELECT * FROM something WHERE a=? AND b=? AND d=?`
and so on.
All of the above queries won't work because Cassandra requires the query to specify the clustering columns in order.
I know that normally this kind of scenario would call for a materialized view or denormalizing the data into several tables. However, in this case, I would need to make 4*3*2*1 = **24** tables, which is basically not a viable solution.
Secondary indexes require that the `ALLOW FILTERING` option be turned on for a multiple-index query to work, which seems to be a bad idea. Besides, there may be some high-cardinality columns in the something table.
I would like to know if there is any workaround to allow such a complicated query to work?
2018/03/22 | 536 | 2,092 | <issue_start>username_0: I am generating some important information on the HTML page at the end of a transaction.
```
<script>
function downloadReference(){
}
</script>
<div id="data">
  <p>Hello! Thank you for your interest.</p>
  <table>
    <tr><td><b>Interest Received Date:</b></td><td>22 March 2018</td></tr>
    <tr><td><b>Confirmation Number:</b></td><td>J4835K3344</td></tr>
  </table>
</div>
<a href="#" onclick="return downloadReference()">Download this info!</a>
```
Now, I want to give the user an option to download this info for his reference. I don't want to create a file on the server side.
I know that in HTML5 we can put the download attribute on an anchor tag to let the user download a file, but how do I generate the content on the fly and not refer to any physical file?<issue_comment>username_1: Thanks to @bloodyKnuckles for the hint. Writing this answer for anyone who needs it in the future.
I added id to my anchor tag:
Then, JavaScript function:
```
function downloadReference(){
var dl = document.getElementById ("dl");
dl.href="data:text/plain," + document.getElementById("data").outerHTML;
return true;
}
```
`return true;` is required for download to happen.
We don't support dead browsers like IE, so no worries about that.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can do something like this which would be pretty simple. You're going to need to pass your data into a data url at some point. if it's rendered with JS I would recommend passing the data to a function like in my example below and using a listener instead of onclick. Otherwise, you could encode it into a data attribute on your link and pass it in that way. I'm happy to update to better explain.
```
function generateFile(dataurl, filename) {
  // create a hidden anchor
  var link = document.createElement("a");
  link.href = dataurl;
  link.setAttribute("download", filename);
  // programmatically click the dynamic anchor
  var mouse = document.createEvent("MouseEvents");
  mouse.initEvent("click", false, true);
  link.dispatchEvent(mouse);
  return false;
}

document.getElementById('download').onclick = function() {
  generateFile("data:text/html,<h1>Your HTML</h1>", "reference.html");
}
```
Upvotes: 0 |
2018/03/22 | 1,434 | 4,524 | <issue_start>username_0: let's say I have this C# class
```
public class Product
{
public Guid Id { get; set; }
public string ProductName { get; set; }
public Decimal Price { get; set; }
public int Level { get; set; }
}
```
The equivalent typescript would be something like:
```
export class Product {
id: ???;
productName: string;
price: number;
level: number;
}
```
How to represent Guid in typescript?<issue_comment>username_1: Guids are usually represented as strings in Javascript, so the simplest way to represent the GUID is as a string. Usually when serialization to JSON occurs it is represented as a string, so using a string will ensure compatibility with data from the server.
To make the GUID different from a simple string, you could use branded types:
```
type GUID = string & { isGuid: true};
function guid(guid: string) : GUID {
return guid as GUID; // maybe add validation that the parameter is an actual guid ?
}
export interface Product {
id: GUID;
productName: string;
price: number;
level: number;
}
declare let p: Product;
p.id = "" // error
p.id = guid("guid data"); // ok
p.id.split('-') // we have access to string methods
```
This [article](https://spin.atomicobject.com/2018/01/15/typescript-flexible-nominal-typing/) has a bit more of a discussion on branded types. Also the typescript compiler uses branded types for [paths](https://github.com/Microsoft/TypeScript/blob/8e8e879fc22e080f0c8fe69a73921899ab062c3f/src/compiler/types.ts#L54) which is similar to this use case.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Another alternative is using the following NPM package:
**guid-typescript** which you can find here: <https://www.npmjs.com/package/guid-typescript>
Then it will be just like this:
```
import { Guid } from "guid-typescript";
export class Product {
id: Guid;
productName: string;
price: number;
level: number;
}
```
Upvotes: 4 <issue_comment>username_3: For most of my use cases, I need to accept a deserialized string that comes from an API, but also generate new ids and validate them once they're on the client.
Both of the previous answers are great and each tackles a piece of the problem; this gist combines the typing from the accepted answer with the spirit of the *guid-typescript* package: <https://gist.github.com/username_3/b9b2bbc6f10c686921c0f216bfe4cb40>
```
class GuidFlavoring<FlavorT> {
  // tslint:disable-next-line: variable-name
  _type?: FlavorT;
}

/** A **guid** type, based on **string** */
type GuidFlavor<T, FlavorT> = T & GuidFlavoring<FlavorT>;

/** A **guid**-flavored string primitive, supported by factory methods in the **Guid** class */
export type guid = GuidFlavor<string, 'guid'>;

/** A container for factory methods, which support the **guid** type */
export class Guid {
  /** Specifies the RegExp necessary to validate **guid** values */
  private static validator: RegExp = new RegExp(
    '^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$',
    'i'
  );

  /** Generates a random, hyphenated **guid** value */
  static newGuid = (): guid =>
    [
      Guid.generateGuidSegment(2),
      Guid.generateGuidSegment(1),
      Guid.generateGuidSegment(1),
      Guid.generateGuidSegment(1),
      Guid.generateGuidSegment(3),
    ].join('-');

  /** Generates a new **guid**, with the empty/least possible value
   * @returns {guid} 00000000-0000-0000-0000-000000000000
   */
  static empty = (): guid => '00000000-0000-0000-0000-000000000000';

  /** Generates a new **guid**, with the full/greatest possible value
   * @returns {guid} ffffffff-ffff-ffff-ffff-ffffffffffff
   */
  static full = (): guid => 'ffffffff-ffff-ffff-ffff-ffffffffffff';

  /** Evaluates whether the supplied **guid** is equal to the empty/least possible value */
  static isEmpty = (value: guid) => value === Guid.empty();

  /** Evaluates whether the supplied **guid** is equal to the full/greatest possible value */
  static isFull = (value: guid) => value === Guid.full();

  /** Evaluates whether the supplied value is a valid **guid** */
  static isValid = (value: string | guid): boolean =>
    Guid.validator.test(value);

  /** Generates a specified number of double-byte segments for **guid** generation */
  private static generateGuidSegment(count: number): string {
    let out = '';
    for (let i = 0; i < count; i++) {
      // tslint:disable-next-line:no-bitwise
      out += (((1 + Math.random()) * 0x10000) | 0)
        .toString(16)
        .substring(1)
        .toLowerCase();
    }
    return out;
  }
}
```
Upvotes: 0 |
2018/03/22 | 1,572 | 4,799 | <issue_start>username_0: I am trying to get the 2 newest dates in a date array and arrange them in order. So far, I have only managed to arrange them in order:
```
<?php
$data = array(
    array(
        "title" => "Another title",
        "date" => "Fri, 17 Jun 2011 08:55:57 +0200"
    ),
    array(
        "title" => "Another title",
        "date" => "Fri, 17 Jun 2014 08:55:57 +0200"
    ),
    array(
        "title" => "My title",
        "date" => "Mon, 18 Jun 2012 09:55:57 +0200"
    )
);

function sortFunction( $a, $b ) {
    return strtotime($a["date"]) - strtotime($b["date"]);
}
usort($data, "sortFunction");
var_dump($data);
?>
```
Please give me a suggestion to get the 2 newest dates in $data.
2018/03/22 | 556 | 1,901 | <issue_start>username_0: I know I can accomplish my objective with this process:
1. Create new column
2. Update new from old
3. Drop old
I am looking for a way to do this with one command. I know that `rename object` works at the table level. For example, I can do this:
```
rename object Test.danPatient to dimPatient
```
But these all fail with various error messages:
```
rename object Test.dimPatient.City to Test.dimPatient.Town
rename object Test.dimPatient.City to Town
rename object DatabaseName.Test.dimPatient.City to Town
rename object DatabaseName.Test.dimPatient.City to DatabaseName.Test.dimPatient.Town
```
Plus, according to [this](https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-rename-transact-sql), `sp_rename` is not supported by Azure Data Warehouse. My unsuccessful efforts to use it suggest that the web page is accurate.
Am I attempting the impossible?<issue_comment>username_1: You're correct that right now you cannot rename a column in this fashion. The fastest way to do this today is to run a [CREATE TABLE AS SELECT (CTAS)](https://learn.microsoft.com/en-us/sql/t-sql/statements/create-table-as-select-azure-sql-data-warehouse) operation. Your statement would look something like this:
`CREATE TABLE Test.DimPatientNew AS
SELECT
City [town]
FROM
Test.DimPatient;`
You can add your feedback to our public feedback forum here:
<https://feedback.azure.com/forums/307516-sql-data-warehouse>
Rename a column request: <https://feedback.azure.com/forums/307516/suggestions/18434083>
Upvotes: 4 [selected_answer]<issue_comment>username_2: `sp_rename` for columns is now supported in Azure Synapse Analytics. So you can use:
```
sp_rename '[schema].[table].[old_column]', '[new_column]', 'COLUMN';
```
Ref: [link](https://learn.microsoft.com/en-us/sql/t-sql/statements/rename-transact-sql?view=aps-pdw-2016-au7)
Upvotes: 2 |
2018/03/22 | 694 | 2,687 | <issue_start>username_0: As an Atom user, I am used to a new buffer being opened when committing staged changes, where I can edit the commit message. In particular, the advantages are:
* existing spell check is available
* I can provide a one line comment and an additional longer description
* I see a list of files to be committed (similar to the case when using the CLI)
What is the vscode equivalent? So far I found two options:
1. `Command Palette -> Git: commit (staged)`
2. Use the activity bar (where I see the staged files but still get a limited editing option of the commit message)<issue_comment>username_1: You could always specify VSCode as the editor to use when running `git commit` from the command line.
VSCode even has the `--wait` option: it waits until you close *the opened buffer* (not until you quit VSCode):
```
GIT_EDITOR='code --wait' git commit
```
The main downside is that I don't know how to target a specific instance of VSCode to use if I have several open.
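To see the `GIT_EDITOR` mechanism in isolation, here is a throwaway sketch that substitutes a tiny script for `code --wait` (the temp repo, file names, and commit message are invented for the demo; it assumes `git` is on the PATH):

```
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email "you@example.com"
git config user.name "you"

# A stand-in for `code --wait`: git hands the editor the path of COMMIT_EDITMSG.
cat > fake-editor.sh <<'EOF'
#!/bin/sh
echo "message written by the editor" > "$1"
EOF
chmod +x fake-editor.sh

echo hi > file.txt
git add file.txt
GIT_EDITOR="$PWD/fake-editor.sh" git commit -q
git log -1 --pretty=%s   # -> message written by the editor
```

Whatever command `GIT_EDITOR` names gets the commit-message file as its first argument, which is exactly how `code --wait` slots in.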
Upvotes: 2 <issue_comment>username_2: A full editor can be used as git commit message editor, see [v1.69 release notes: Author commit message using an editor](https://github.com/microsoft/vscode-docs/blob/vnext/release-notes/v1_69.md#author-commit-message-using-an-editor).
>
> Use the full text editor to author your commit message. To accept the
> commit message and complete the commit operation you can either close
> the editor tab or select the `Accept Commit Message` button in the
> editor toolbar. To cancel the commit operation, you can either clear
> the contents of the text editor and close the editor tab, or select
> the `Discard Commit Message` button in the editor toolbar.
>
>
> You can disable this new flow, and fallback to the previous experience
> that uses the quick input control, by toggling the
> `git.useEditorAsCommitInput` setting. After the setting is changed, you
> will have to restart VS Code for the change to take effect.
>
>
> To leverage the same flow for git commit commands that executed in the
> integrated terminal, enable the git.terminalGitEditor setting. After
> enabling the setting, you will have to restart your terminal session.
>
>
>
Steps to enable:
1. Enable the setting: `Git: Use Editor as Commit Input`
2. Optionally, enable: `Git: Verbose Commit`
3. With an *empty* commit message input box, trigger one of the commands that sends a commit, like `Git: Commit` or `Git: Commit All`.
You can also hit `Ctrl`+`Enter` with focus in the empty commit message input.
[There is a bug in testing this right now (https://github.com/microsoft/vscode/issues/150463). There is an mp4 of it working at the link above, though.]
Upvotes: 0 |
2018/03/22 | 596 | 2,016 | <issue_start>username_0: Here is a section on #include's from [Google's C++ style guide](https://google.github.io/styleguide/cppguide.html#Names_and_Order_of_Includes):
>
> In dir/foo.cc or dir/foo\_test.cc, whose main purpose is to implement
> or test the stuff in dir2/foo2.h, order your includes as follows:
>
>
>
> ```
> dir2/foo2.h.
> A blank line
> C system files.
> C++ system files.
> A blank line
> Other libraries' .h files.
> Your project's .h files.
>
> ```
>
> Note that any adjacent blank lines should be collapsed.
>
>
> With the preferred ordering, if dir2/foo2.h omits any necessary
> includes, the build of dir/foo.cc or dir/foo\_test.cc will break. Thus,
> this rule ensures that build breaks show up first for the people
> working on these files, not for innocent people in other packages.
>
>
>
I don't understand the last line:
>
> Thus, this rule ensures that build breaks show up first for the people
> working on these files, not for innocent people in other packages.
>
>
>
Can someone explain how putting `dir2/foo2.h` first results in a "good" build break, and how putting `dir2/foo2.h` last results in a "bad" build break?<issue_comment>username_1: A "good" build break is one that breaks for the person making the breaking change, because that person can repair it. If they do *not* notice the breaking change (because the includes happen to be ordered such that the code still compiles for them by pure chance), then it will be released.
A "bad" build break is when the released version breaks the build of the users of that code.
Upvotes: 2 <issue_comment>username_2: In `dir2.h`, you forget to include `X.h`. Then in current file you include:
```
X.h
dir2.h
```
This will compile fine. Then someone else includes `dir2.h` somewhere else and they end up with a compilation error originating from `dir2.h`, even though they never changed anything in that file...
If you have the correct order, you should get the error the first time you include `dir2.h`.
Upvotes: 5 [selected_answer] |
2018/03/22 | 322 | 919 | <issue_start>username_0: I have an array of 100 indices
```
const randomArr = [...Array(100).keys()]
```
How do I return 100 arrays like this:
```
[{index: i}, {title: `title${i}`}]
```
where `i` should be the index of the random array.<issue_comment>username_1: Use [`Array.from()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/from):
```js
const result = Array.from({ length: 100 }, (_, i) => [
{ index: i }, { title: `title${i}` }
]);
console.log(result);
```
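A variant building on the question's own `[...Array(100).keys()]` array (a sketch, not part of the original answer):

```js
const randomArr = [...Array(100).keys()];

// Each index i becomes the requested pair of objects.
const result = randomArr.map(i => [{ index: i }, { title: `title${i}` }]);

console.log(result.length);             // 100
console.log(JSON.stringify(result[3])); // [{"index":3},{"title":"title3"}]
```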
Upvotes: 3 <issue_comment>username_2: I think you mean that you want a list like
```
[{index: i, title: `title${i}`}]
```
But here goes anyway.
Using [Array.prototype.fill](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/fill)
```js
const randomArr = Array(100).fill(0).map((e, i) =>
[{ index: i}, {title: `title${i}`}]
);
console.log(randomArr);
```
Upvotes: 1 |
2018/03/22 | 655 | 2,392 | <issue_start>username_0: I'm using DELPHI with ADO and SQL Server 2014.
In our database table there is a spatial column for geometrical data. We can read and write data to this field (more info is here : <https://learn.microsoft.com/de-de/sql/relational-databases/spatial/spatial-data-sql-server>).
If I display this table using a TDBGRID component I got only (BLOB) shown for the content of this column in my table.
Now I want to see the content of this column. Is the any good coding to show the content of this column e.g. in a dbmemo as text.
The only solution I know is to read the field as text into a string and put this to a normal memo, I'm looking forward to get a more efficient method to access this data<issue_comment>username_1: You can query e.g. for [**Well-known text**](https://en.wikipedia.org/wiki/Well-known_text) format by using SQL function like [**STAsText**](https://learn.microsoft.com/en-us/sql/t-sql/spatial-geometry/stastext-geometry-data-type):
```sql
SELECT MyColumn.STAsText() FROM MyTable
```
An alternative would be fetching your data as a [**Well-known binary**](https://en.wikipedia.org/wiki/Well-known_text#Well-known_binary) stream and parsing it on the client side to represent it as text yourself (the format is described). To fetch such a stream you'd use the [**STAsBinary**](https://learn.microsoft.com/en-us/sql/t-sql/spatial-geometry/stasbinary-geometry-data-type) function:
```
SELECT MyColumn.STAsBinary() FROM MyTable
```
Yet another option would be fetching the raw [**geometry**](https://learn.microsoft.com/en-us/sql/t-sql/spatial-geometry/spatial-types-geometry-transact-sql) data as it is stored in the database (as you do right now) and parsing it yourself. The format is described in the [**[MS-SSCLRT]**](https://msdn.microsoft.com/en-us/library/ee320529.aspx) document. But if I were you I would rather write a parser for the WKB format and fetch data in WKB, because it's quite an established universal format, whilst SQL Server internal formats may change frequently.
Upvotes: 3 [selected_answer]<issue_comment>username_2: In case your geometry includes Z and/or M values, it is better to call the **.ToString()** method.
```
SELECT MyColumn.ToString () FROM MyTable
```
The output includes Z and M values in addition to the X,Y coordinates. The .STAsText() method returns only the X,Y coordinates of your shape.
Upvotes: 1 |
2018/03/22 | 685 | 2,405 | <issue_start>username_0: I have a JavaScript script that makes a jQuery AJAX call, and passes a serialized JavaScript object in the "data" property:
>
> data: { Specific: JSON.stringify({DAY: "1", DEP: "2", CARRIER: "3",
> FLT: "4", LEGCD: "5"})
>
>
>
It is received in a C# Generic Handler thusly:
```
var Specific = JsonConvert.DeserializeObject(context.Request.Params["Specific"]);
```
In the Generic Handler, within Visual Studio debugger, I can see the received object.
>
> Specific = {{ "DAY": "", "DEP": "", "CARRIER": "", "FLT": "",
> "LEGCD": "" }}
>
>
>
My question is, how do I reference the received object's properties (DAY, DEP, FLT, etc)?
I tried `Specific.DAY`, and `Specific["DAY"]`, with no success.<issue_comment>username_1: Rather than using
```
var Specific = JsonConvert.DeserializeObject(context.Request.Params["SpecificFlt"]);
```
And ending up with a type of `System.Object` for "Specific", it might help to deserialize to a custom type as follows:
```
public class SpecificObj
{
    public string DAY {get; set;}
    public string DEP {get; set;}
    public string CARRIER {get; set;}
    public string FLT {get; set;}
    public string LEGCD {get; set;}
}
```
And
```
var Specific = JsonConvert.DeserializeObject(context.Request.Params["SpecificFlt"]);
```
From there you should be able to access the properties using the typical dot operation (`Specific.DAY`)
EDIT: Alternatively you can use reflection:
```
Type t = Specific.GetType();
PropertyInfo p = t.GetProperty("DAY");
string day = (string)p.GetValue(Specific);
```
This reflection can be done other ways using newer versions of C# as detailed in one of the answers here:
[How to access property of anonymous type in C#?](https://stackoverflow.com/questions/1203522/how-to-access-property-of-anonymous-type-in-c)
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you don't want to create the class, the following will also work
```
var specific = JObject.Parse(json);
// specific["DAY"] alone will return a JToken (JValue in this case),
// so use the explicit conversion to string
var day = (string)specific["DAY"];
```
or, if all the values are strings
```
var specific = JsonConvert.DeserializeObject<Dictionary<string, string>>(json);
var day = specific["DAY"];
```
If `DAY` is not present in the JSON, the first one will return `null`, the second one will throw `KeyNotFoundException`.
Upvotes: 0 |
2018/03/22 | 1,052 | 4,062 | <issue_start>username_0: I just started Flutter; this is a basic question, but I am not able to solve it.
I created a stateful widget and I need to call the setState() method on click of a button. The button is not part of this stateful widget. The button is present in the footer of application.
complete application code:
```
import 'package:flutter/material.dart';

void main() => runApp(new MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
        title: "My app title",
        home: new Scaffold(
          appBar: new AppBar(
            title: new Text("My App"),
            backgroundColor: Colors.amber,
          ),
          body: new Container(
            child: new Center(
              child: new MyStateFullWidget(),
            ),
          ),
          persistentFooterButtons: <Widget>[
            new FlatButton(
              onPressed: () {
                // I need to call the update() of MyStateFullWidget/MyStateFullWidgetState class
              },
              child: new Text("Click Here"),
              color: Colors.amber,
              textColor: Colors.white,
            ),
          ],
        ));
  }
}

class MyStateFullWidget extends StatefulWidget {
  @override
  State<MyStateFullWidget> createState() {
    return new MyStateFullWidgetState();
  }
}

class MyStateFullWidgetState extends State<MyStateFullWidget> {
  int count = 0;

  @override
  Widget build(BuildContext context) {
    return new Text("Count: $count");
  }

  update() {
    setState(() {
      count++;
    });
  }
}
```<issue_comment>username_1: >
> I need to call the setState() method on click of a button
>
>
>
You may have a few options (or alternatives) to achieve the end result (all with different tradeoffs):
* Elevate the state (i.e. `count`) to an [Inherited Widget](https://docs.flutter.io/flutter/widgets/InheritedWidget-class.html) above the Button and Display Widget. This may be the easiest or most appropriate.
* Leverage some kind of Action-based communication such as [Flutter Redux](https://pub.dartlang.org/packages/flutter_redux) (where you `dispatch` an action, which affects the display widget via a `StoreConnector` and rebuilds). This can be seen as just another way to 'elevate' state. However, this requires a whole new dependency and a lot of overhead given your simple example, but I wanted to point it out.
* You can create some kind of [Stream](https://docs.flutter.io/flutter/dart-async/Stream-class.html) and [StreamController](https://docs.flutter.io/flutter/dart-async/StreamController-class.html) that the Display widget subscribes/listens to. However, this may be overkill and I'm not sure how appropriate representing button clicks over a stream would be.
There may be other solutions that I'm not thinking of; however, remember that the goal of reactive UI is to [keep state simple](https://www.youtube.com/watch?v=zKXz3pUkw9A).
So if you have multiple leaf widgets that care about a piece of state (want to know it, want to change it), it might mean that this belongs to a higher level component (e.g a App-level piece of state, or maybe some other common ancestor - but both could use Inherited Widget to prevent passing the piece of state all around)
Upvotes: 3 <issue_comment>username_2: You should use your `Scaffold` in the `State` instead of using in `StatelessWidget`.
Here is the working solution.
```
class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
        title: "My app title",
        home: MyStateFullWidget());
  }
}

class MyStateFullWidget extends StatefulWidget {
  @override
  State<MyStateFullWidget> createState() {
    return MyStateFullWidgetState();
  }
}

class MyStateFullWidgetState extends State<MyStateFullWidget> {
  int count = 0;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text("My App"),
        backgroundColor: Colors.amber,
      ),
      body: Container(
        child: Center(
          child: Text("Count: $count"),
        ),
      ),
      persistentFooterButtons: <Widget>[
        FlatButton(
          onPressed: update,
          child: Text("Click Here"),
          color: Colors.amber,
          textColor: Colors.white,
        ),
      ],
    );
  }

  void update() {
    setState(() {
      count++;
    });
  }
}
```
Upvotes: 0 |
2018/03/22 | 850 | 3,147 | <issue_start>username_0: Can I fetch JSON data of the items from this route, "/items"?
```
app.get("/items", middleware.isLoggedIn, function(req, res) {
Item.findById(req.params.id, function(err, item) {
if(err){
console.log(err);
}else{
res.render("buildings/items", {item : item});
}
});
});
```
I tried:
```
$(document).ready(function(){
    $.getJSON("/items")
        .then(function(data){
            console.log(data);
        });
});
```
but it didn't work.
2018/03/22 | 410 | 1,557 | <issue_start>username_0: I was reading [Simo's blog](https://www.simoahava.com/analytics/track-non-javascript-visits-google-analytics/), trying to identify a way to track non-JS-enabled hits. A solution is given in the comments by a user with the moniker Duncan. He suggests creating a new custom JS variable in Google Tag Manager:
```
function(){
return true;
}
```
Then the commentator suggests creating a rule which matches on 'Does not equal: true'. I do not understand this bit. What does the commentator mean by 'a rule' and how do I realize this? Can somebody guide me?<issue_comment>username_1: This is a JavaScript variable. If JavaScript is disabled it will not be executed. Therefore the value is never equal to true in a browser that does not execute JavaScript.
Since there is a supported way of passing in variables (i.e. appending them to the url of the noscript iframe) I don't think you need to use such a workaround, funny as it might be.
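For reference, the "supported way" refers to GTM's noscript snippet; extra key=value pairs appended to the ns.html URL are picked up as data layer values. A rough sketch (the container id `GTM-XXXXXX` and the `noScriptVisit` parameter are placeholders):

```html
<noscript>
  <iframe src="https://www.googletagmanager.com/ns.html?id=GTM-XXXXXX&noScriptVisit=true"
          height="0" width="0" style="display:none;visibility:hidden"></iframe>
</noscript>
```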
Upvotes: 0 <issue_comment>username_2: 1. Create a custom JS variable
![screenshot](https://i.stack.imgur.com/n8Kdr.png)
2. Create a new trigger, for example a pageview trigger, and in the conditions options do the following
![screenshot](https://i.stack.imgur.com/QASt9.png)
3. Now this trigger should only fire when there's a pageview in a browser that does not have JavaScript on, because then the function will never execute and it will never be true.
Upvotes: 2 [selected_answer] |
2018/03/22 | 474 | 1,886 | <issue_start>username_0: I am trying to import faker from a Jupyter notebook on Mac and it failed.
It complains as follows:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
in ()
----> 1 from faker import Faker
ModuleNotFoundError: No module named 'faker'
```
I used `pip install faker` and the system says:
```
Requirement already satisfied: faker in /Applications/anaconda3/lib/python3.6/site-packages
Requirement already satisfied: text-unidecode in /Applications/anaconda3/lib/python3.6/site-packages (from faker)
Requirement already satisfied: python-dateutil>=2.4 in /Applications/anaconda3/lib/python3.6/site-packages (from faker)
Requirement already satisfied: six in /Applications/anaconda3/lib/python3.6/site-packages (from faker)
```
I would appreciate it if anyone could help me out.
Thanks
2018/03/22 | 343 | 1,264 | <issue_start>username_0: I can replace a single quote by two quotes.
But I also need to check if a value is empty.
The following code returns:
it's: command not found
```
original="it's smth"
modified=${original:-NULL}
$modified=${modified//\'/\'\'}
echo "$modified"
```
What is wrong?
2018/03/22 | 1,754 | 5,466 | <issue_start>username_0: I want a timestamp field in a MySQL table to be set only on inserts, not on updates. The table is created like this:
```
CREATE TABLE `test_insert_timestamp` (
`key` integer NOT NULL,
`value` integer NOT NULL,
`insert_timestamp` timestamp DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (`key`)
);
```
The data is loaded with this statement (LOAD DATA LOCAL INFILE needs to be used):
```
LOAD DATA LOCAL INFILE
"inserts_test_timestamp1.txt"
REPLACE
INTO TABLE
`test_insert_timestamp`
FIELDS TERMINATED BY ';'
```
Note: I need to use the REPLACE option, no matter why.
The content of `inserts_test_timestamp1.txt` is:
```
1;2
3;4
```
I have another file, `inserts_test_timestamp2.txt`:
```
3;4
5;6
```
What I want is:
1. if I load file `inserts_test_timestamp1.txt`, then the field `insert_timestamp` is set (that works with the code above)
2. if I load `inserts_test_timestamp2.txt`, record (3;4) keeps the `insert_timestamp` already set, but record (5;6) gets a new `insert_timestamp`.
But no luck: both records get timestamped with the same value, instead of leaving (3;4) with the old timestamp.
I'm working on a `MariaDB 5.5.52` database on `CentOS 7.3`. I think the `MariaDB` version matters, but I can't change it.<issue_comment>username_1: You can divide the process into two steps:
```
MariaDB [_]> DROP TABLE IF EXISTS
-> `temp_test_insert_timestamp`,
-> `test_insert_timestamp`;
Query OK, 0 rows affected (0.00 sec)
MariaDB [_]> CREATE TABLE IF NOT EXISTS `test_insert_timestamp` (
-> `key` integer NOT NULL,
-> `value` integer NOT NULL,
-> `insert_timestamp` timestamp DEFAULT CURRENT_TIMESTAMP,
-> PRIMARY KEY (`key`)
-> );
Query OK, 0 rows affected (0.00 sec)
MariaDB [_]> CREATE TABLE IF NOT EXISTS `temp_test_insert_timestamp` (
-> `key` integer NOT NULL,
-> `value` integer NOT NULL,
-> `insert_timestamp` timestamp DEFAULT CURRENT_TIMESTAMP,
-> PRIMARY KEY (`key`)
-> );
Query OK, 0 rows affected (0.00 sec)
MariaDB [_]> LOAD DATA LOCAL INFILE '/path/to/file/inserts_test_timestamp1.txt'
-> INTO TABLE `test_insert_timestamp`
-> FIELDS TERMINATED BY ';'
-> (`key`, `value`);
Query OK, 2 rows affected (0.00 sec)
Records: 2 Deleted: 0 Skipped: 0 Warnings: 0
MariaDB [_]> SELECT
-> `key`,
-> `value`,
-> `insert_timestamp`
-> FROM
-> `test_insert_timestamp`;
+-----+-------+---------------------+
| key | value | insert_timestamp |
+-----+-------+---------------------+
| 1 | 2 | 2018-03-20 00:49:38 |
| 3 | 4 | 2018-03-20 00:49:38 |
+-----+-------+---------------------+
2 rows in set (0.00 sec)
MariaDB [_]> DO SLEEP(5);
Query OK, 0 rows affected (5.00 sec)
MariaDB [_]> LOAD DATA LOCAL INFILE '/path/to/file/inserts_test_timestamp2.txt'
-> INTO TABLE `temp_test_insert_timestamp`
-> FIELDS TERMINATED BY ';'
-> (`key`, `value`);
Query OK, 2 rows affected (0.00 sec)
Records: 2 Deleted: 0 Skipped: 0 Warnings: 0
MariaDB [_]> SELECT
-> `key`,
-> `value`,
-> `insert_timestamp`
-> FROM
-> `temp_test_insert_timestamp`;
+-----+-------+---------------------+
| key | value | insert_timestamp |
+-----+-------+---------------------+
| 3 | 4 | 2018-03-20 00:49:43 |
| 5 | 6 | 2018-03-20 00:49:43 |
+-----+-------+---------------------+
2 rows in set (0.00 sec)
MariaDB [_]> REPLACE INTO `test_insert_timestamp`
-> SELECT
-> `ttit`.`key`,
-> `ttit`.`value`,
-> `tit`.`insert_timestamp`
-> FROM
-> `temp_test_insert_timestamp` `ttit`
-> LEFT JOIN `test_insert_timestamp` `tit`
-> ON `ttit`.`key` = `tit`.`key`;
Query OK, 2 rows affected (0.01 sec)
Records: 2 Duplicates: 0 Warnings: 0
MariaDB [_]> SELECT
-> `key`,
-> `value`,
-> `insert_timestamp`
-> FROM
-> `test_insert_timestamp`;
+-----+-------+---------------------+
| key | value | insert_timestamp |
+-----+-------+---------------------+
| 1 | 2 | 2018-03-20 00:49:38 |
| 3 | 4 | 2018-03-20 00:49:38 |
| 5 | 6 | 2018-03-20 00:49:43 |
+-----+-------+---------------------+
3 rows in set (0.00 sec)
MariaDB [_]> TRUNCATE TABLE `temp_test_insert_timestamp`;
Query OK, 0 rows affected (0.00 sec)
```
Upvotes: 1 <issue_comment>username_2: I implemented the solution in this post: [MySQL LOAD DATA INFILE with ON DUPLICATE KEY UPDATE](https://stackoverflow.com/questions/15271202/mysql-load-data-infile-with-on-duplicate-key-update)
This solution not only gives me the `insert_timestamp`, but also a field with an `update_timestamp`:
```
# --- Create temporary table ---
CREATE TEMPORARY TABLE temporary_table LIKE test_insert_timestamp;
# --- Delete index to speed up
DROP INDEX `PRIMARY` ON temporary_table;
DROP INDEX `STAMP_INDEX` ON temporary_table;
# --- Load data in temporary table
LOAD DATA LOCAL INFILE "./inserts_test_timestamp1.txt"
INTO TABLE temporary_table
FIELDS TERMINATED BY ';' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES
SET
insert_timestamp = CURRENT_TIMESTAMP(),
update_timestamp = NULL
;
# --- Insert data in temporary table ---
INSERT INTO test_insert_timestamp
SELECT * FROM temporary_table
ON DUPLICATE KEY UPDATE
update_timestamp = CURRENT_TIMESTAMP();
# --- Drop temporary
DROP TEMPORARY TABLE temporary_table;
```
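The keep-`insert_timestamp`/set-`update_timestamp` behaviour of this recipe can be sketched outside MariaDB as well. Below is a minimal SQLite analog (an illustration, not the poster's setup; it assumes a bundled SQLite new enough for `ON CONFLICT ... DO UPDATE`, i.e. 3.24+), with integer stamps standing in for `CURRENT_TIMESTAMP()` so the effect is visible:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v INTEGER,"
            " insert_stamp INTEGER, update_stamp INTEGER)")

def load(rows, stamp):
    # New keys get insert_stamp; keys that already exist keep their
    # insert_stamp and only the value and update_stamp change.
    con.executemany(
        "INSERT INTO t (k, v, insert_stamp) VALUES (?, ?, ?)"
        " ON CONFLICT(k) DO UPDATE SET v = excluded.v, update_stamp = ?",
        [(k, v, stamp, stamp) for k, v in rows])

load([(1, 2), (3, 4)], stamp=100)  # first file
load([(3, 4), (5, 6)], stamp=200)  # second file; (3;4) already exists
print(con.execute("SELECT * FROM t ORDER BY k").fetchall())
# [(1, 2, 100, None), (3, 4, 100, 200), (5, 6, 200, None)]
```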
Thanks for help !
Upvotes: 1 [selected_answer]
2018/03/22 | <issue_start>username_0: Hi, I'm trying to compile C code without the compiler reordering my variables on the stack, but I can't manage it.
I've tried using:
>
> `__attribute__((no_reorder))`
>
>
>
But that doesn't work. I also tried compiling with the flag:
>
> -fno-toplevel-reorder
>
>
>
But that didn't work either... so I'm stuck.
Actual code:
```
uint8_t __attribute__((no_reorder)) first_buf[64];
uint8_t second_buf[32];
```
This is my compiler version:
```
gcc (Debian 7.2.0-19) 7.2.0
```
Thank you for reading!<issue_comment>username_1: from the gcc documentation here:
<https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html>
>
> no\_reorder
>
>
> Do not reorder functions or variables marked no\_reorder **against each other** or top level assembler statements the executable. The
> actual order in the program will depend on the linker command line.
> Static variables marked like this are also not removed. This has a
> similar effect as the -fno-toplevel-reorder option, but only applies
> to the marked symbols.
>
>
>
(emphasis mine)
So it would appear that you need to apply the attribute to all of the variables whose relative order you want preserved. Applying the attribute to only a single variable will only preserve the order of that variable relative to itself, which has no effect.
Upvotes: 2 <issue_comment>username_2: **-fno-stack-protector** will do it. It disables the stack canary and the reordering of buffers on the stack.
Add that flag when you compile, i.e.
```
gcc myprogram.c -fno-stack-protector
```
Upvotes: 2
2018/03/22 | <issue_start>username_0: I have the following TextBox bound to an MVVM ViewModel
Textbox
```
```
Below is my `ImportPresenter`, which handles the input.
```
public class ImportPresenter : ObservableObject
{
private double _BalanceValue = 0;
public double BalanceValue
{
get
{
return _BalanceValue;
}
set
{
_BalanceValue = double.Parse(value.ToString(),
System.Globalization.NumberStyles.Currency);
RaisePropertyChangedEvent("BalanceValue");
}
}//END BALANCEVALUE
}
```
For the most part this works, except that when testing, the `TextBox` updates itself while I'm typing. Should I use a different event for the `TextBox`?<issue_comment>username_1: By default, the `UpdateSourceTrigger` for a `TextBox` binding is `LostFocus`. Setting it to `PropertyChanged` makes it update for every change, e.g. every character.
Upvotes: 3 [selected_answer]<issue_comment>username_2: In some situations, updating the viewmodel on `LostFocus` and on every `PropertyChanged` are equally inconvenient (e.g. typing a keyword for a search: we don't want to run the search after each letter, we want to do it when users stop typing, but they don't leave the search field).
Since .NET 4.5 the `Binding` class has a [Delay](https://msdn.microsoft.com/en-us/library/system.windows.data.bindingbase.delay(v=vs.110).aspx) property (in milliseconds):
```
Text="{Binding BalanceValue, Mode=TwoWay, Delay=333, UpdateSourceTrigger=PropertyChanged, StringFormat=N2}"
```
>
> To avoid updating the source object with every keystroke, set the Delay property to a reasonable value to cause the binding to update only after that amount of time has elapsed since the user stopped typing.
>
>
>
Upvotes: 3
2018/03/22 | <issue_start>username_0: I want to remove the first two words from multiple strings (date and month).
```js
jQuery(function($) {
/* Get the text of the element I'm after */
var niceText = $('div').text(),
openSpan = '<span>',
closeSpan = '</span>';
/* Make the sentence into an array */
niceText = niceText.split(' ');
/* Add span to the beginning of the array */
niceText.unshift(openSpan);
/* Add as the 4th value in the array */
niceText.splice(3, 0, closeSpan);
/* Turn it back into a string */
niceText = niceText.join(' ');
/* Append the new HTML to the header */
$('div').html(niceText);
});
```
```html
<div>22 may 2018</div>
<div>21 may 2018</div>
<div>20 february 2018</div>
<div>18 February 2018</div>
```
But it does not work. Create a loop.
Do you have any suggestions or other solutions?<issue_comment>username_1: I created a little snippet that does what you want.
`22 may 2018` becomes `<span>22 may</span> 2018`
>
> **Explanation**
>
>
>
I replace each div's html with [`.html()`](http://api.jquery.com/html/).
I wrap the first two words with a `span` using [`Array#splice()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice).
**Demo:**
```js
$('.test').html((_, html) => {
var words = html.trim().split(' '); // Trim the html before splitting it
words.splice(0, 0, '<span>'); // Add the opening <span> at index 0
words.splice(3, 0, '</span>'); // Add the closing </span> at index 3
return words.join(' '); // Returns the new HTML
});
console.log($('.test').html());
```
```html
<div class="test">22 may 2018</div>
<div class="test">21 may 2018</div>
<div class="test">20 february 2018</div>
<div class="test">18 February 2018</div>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: `$('div').text()` selects all the divs in the DOM, so use jQuery `each` to iterate through them, like in the following code.
```
var divs = $('div');
$.each(divs, function(){
var string = $(this).text();
var words = string.trim().split(' ');
$(this).html('<span>' + words[0] + ' ' + words[1] + '</span>' + ' ' + words[2]);
});
```
working demo here
<https://jsfiddle.net/w3e4gpth/6/>
Upvotes: 0 <issue_comment>username_3: Here you go:
```js
$('div').html(function(_,html) {
var mydate = html.trim().split(' ');
return '<span>'+mydate[0]+' '+mydate[1]+'</span> '+mydate[2];
});
```
```html
<div>22 may 2018</div>
<div>21 may 2018</div>
<div>20 february 2018</div>
<div>18 February 2018</div>
```
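The DOM-free core of all three answers is the same string manipulation; it can be exercised on its own in plain JavaScript (no jQuery):

```javascript
// Wrap the first two words (day and month) of a date string in a span.
function wrapDate(text) {
  var words = text.trim().split(/\s+/);
  return '<span>' + words[0] + ' ' + words[1] + '</span> ' +
         words.slice(2).join(' ');
}

console.log(wrapDate('22 may 2018')); // <span>22 may</span> 2018
```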
Upvotes: 1
2018/03/22 | <issue_start>username_0: I am trying to show nested data in a ul/li tree, but the nested children are not showing. See my code and please tell me what is wrong there.
Controller:
```
public ActionResult Index()
{
List<MenuItem> allMenu = new List<MenuItem>
{
new MenuItem {Id=1,Name="Parent 1", ParentId=0},
new MenuItem {Id=2,Name="child 1", ParentId=1},
new MenuItem {Id=3,Name="child 2", ParentId=1},
new MenuItem {Id=4,Name="child 3", ParentId=1},
new MenuItem {Id=5,Name="Parent 2", ParentId=0},
new MenuItem {Id=6,Name="child 4", ParentId=4}
};
List<MenuItem> mi = allMenu
.Where(e => e.ParentId == 0) /* grab only the root parent nodes */
.Select(e => new MenuItem
{
Id = e.Id,
Name = e.Name,
ParentId = e.ParentId,
Children = allMenu.Where(x => x.ParentId == e.Id).ToList()
}).ToList();
ViewBag.menusList = mi;
return View();
}
```
POCO class:
```
public class MenuItem
{
public int Id { get; set; }
public string Name { get; set; }
public int ParentId { get; set; }
public virtual List<MenuItem> Children { get; set; }
}
```
View:
```
@helper ShowTree(List<MenuItem> menusList)
{
<ul>
@foreach (var item in menusList)
{
<li>@item.Name
@if (item.Children != null && item.Children.Any())
{
@ShowTree(item.Children)
}
</li>
}
</ul>
}
@{
var menuList = ViewBag.menusList as List<MenuItem>;
@ShowTree(menuList);
}
```
If you run the code you will see that child 4, which is a child of child 3, is not showing. Please advise what I need to change in my code. Thanks<issue_comment>username_1: Your query gets the top-level elements (`ParentId == 0`) only and then populates just their direct child elements.
Your query needs to be changed to populate all child elements at all levels. Note that your `MenuItem` does not need the `ParentId` property.
```
// Group the items by parentId and project to MenuItem
var groups = allMenu.ToLookup(x => x.ParentId, x => new MenuItem
{
Id = x.Id,
Name = x.Name,
});
// Assign the child menus to all items
foreach (var item in allMenu)
{
item.children = groups[item.Id].ToList();
}
// Return just the top level items
ViewBag.menusList = groups[0].ToList();
```
As a side note, do not use `ViewBag`. Pass the model to the view instead
```
return View(groups[0].ToList());
```
and in the view
```
@model List<MenuItem>
....
@ShowTree(Model);
```
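The grouping idea in this answer is language-independent. For illustration, here is the same two-pass build in plain JavaScript, using hypothetical literals that mirror the question's data (not the poster's C# classes):

```javascript
var items = [
  { id: 1, name: 'Parent 1', parentId: 0 },
  { id: 2, name: 'child 1',  parentId: 1 },
  { id: 4, name: 'child 3',  parentId: 1 },
  { id: 6, name: 'child 4',  parentId: 4 }
];

// Pass 1: index the items by parentId (the ToLookup step).
var byParent = {};
items.forEach(function (it) {
  (byParent[it.parentId] = byParent[it.parentId] || []).push(it);
});

// Pass 2: attach a children list to every item, at every depth.
items.forEach(function (it) { it.children = byParent[it.id] || []; });

// The items grouped under parentId 0 are the roots.
var roots = byParent[0];
console.log(roots[0].children[1].children[0].name); // child 4
```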
Upvotes: 2 [selected_answer]<issue_comment>username_2: Now I could fix my problem. The problem was in the logic of the Razor code, and I also commented out this line: `//.Where(e => e.ParentId == 0)`. Here I am adding the working code.
```
@helper ShowTree(List<MenuItem> menu, int? parentid = 0, int level = 0)
{
var items = menu.Where(m => m.ParentId == parentid);
if (items.Any())
{
if (items.First().ParentId > 0)
{
level++;
}
<ul>
@foreach (var item in items)
{
<li>@item.Name
@ShowTree(menu, item.Id, level);
</li>
}
</ul>
}
}
@{
var menuList = ViewBag.menusList as List<MenuItem>;
@ShowTree(menuList);
}
```
Action
------
```
public ActionResult Index()
{
List<MenuItem> allMenu = new List<MenuItem>
{
new MenuItem {Id=1,Name="Parent 1", ParentId=0},
new MenuItem {Id=2,Name="child 1", ParentId=1},
new MenuItem {Id=3,Name="child 2", ParentId=1},
new MenuItem {Id=4,Name="child 3", ParentId=1},
new MenuItem {Id=5,Name="Parent 2", ParentId=0},
new MenuItem {Id=6,Name="child 4", ParentId=4}
};
List<MenuItem> mi = allMenu
//.Where(e => e.ParentId == 0) /* grab only the root parent nodes */
.Select(e => new MenuItem
{
Id = e.Id,
Name = e.Name,
ParentId = e.ParentId,
Children = allMenu.Where(x => x.ParentId == e.Id).ToList()
}).ToList();
ViewBag.menusList = mi;
return View();
}
```
Upvotes: 0
2018/03/22 | <issue_start>username_0: I cannot connect to my database: I receive the message "invalid user or password" even though the password is correct. What should I do? Does anyone have an idea, or is my code wrong? Here is all my code for the login form:
```
Imports System.Data.SqlClient
Public Class LoginForm
Private Sub UsersBindingNavigatorSaveItem_Click(sender As Object, e As EventArgs) Handles UsersBindingNavigatorSaveItem.Click
Me.Validate()
Me.UsersBindingSource.EndEdit()
Me.TableAdapterManager.UpdateAll(Me.DataSet1)
End Sub
Private Sub LoginForm_Load(sender As Object, e As EventArgs) Handles MyBase.Load
'TODO: This line of code loads data into the 'DataSet1.users' table. You can move, or remove it, as needed.
Me.UsersTableAdapter.Fill(Me.DataSet1.users)
End Sub
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
Dim connection As New SqlConnection("Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=|DataDirectory|\Database_topdent.mdf;Integrated Security=True")
Dim command As New SqlCommand("Select * from users where User = @user and Password = @password", connection)
command.Parameters.Add("@user", SqlDbType.VarChar).Value = UserTextBox.Text
command.Parameters.Add("@password", SqlDbType.VarChar).Value = PasswordTextBox.Text
Dim adapter As New SqlDataAdapter(command)
Dim table As New DataTable()
adapter.Fill(table)
If table.Rows.Count() <= 0 Then
MessageBox.Show("username or textbox invalid")
Else
Form1.Show()
Me.Hide()
End If
End Sub
End Class
```
2018/03/22 | <issue_start>username_0: I am creating an Android application. I want to have a screen that asks the user to select their preferred language. Here is my code for my ListView.
```
public class LanguageSelect extends Activity {
int[] IMAGES = {R.drawable.english, R.drawable.french, R.drawable.spanish, R.drawable.german, R.drawable.swedish, R.drawable.russia};
String[] NAMES = {"English", "Français", "Español", "Deutsche", "svenska", "русский"};
protected void onCreate(Bundle savedInstanceState)
{
super.onCreate(savedInstanceState);
requestWindowFeature(Window.FEATURE_NO_TITLE);
setContentView(R.layout.activity_language_select);
ListView listView = (ListView)findViewById(R.id.Lang);
CustomAdpater customadapter = new CustomAdpater();
listView.setAdapter(customadapter);
listView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
@Override
public void onItemClick(AdapterView<?> adapterView, View view, int i, long l) {
if(i==0)
{
LangHelper.changeLocale(this.getResources(), "fr");
}
}
});
}
class CustomAdpater extends BaseAdapter{
@Override
public int getCount() {
return IMAGES.length;
}
@Override
public Object getItem(int i) {
return null;
}
@Override
public long getItemId(int i) {
return 0;
}
@Override
public View getView(int i, View view, ViewGroup viewGroup)
{
view = getLayoutInflater().inflate(R.layout.custom_layout_lang,null);
ImageView imageView=(ImageView)view.findViewById(R.id.imageView);
TextView textViewLang=(TextView)view.findViewById(R.id.textViewLang);
imageView.setImageResource(IMAGES[i]);
textViewLang.setText(NAMES[i]);
return view;
}
}}
```
I have also created several strings.xml files for the languages I wish to use, following a tutorial I found; see the image below.
[strings.xml image](https://i.stack.imgur.com/zXjFQ.png)
I also created a LangHelper class. see below
```
public class LangHelper {
public static void changeLocale(Resources res, String locale)
{
Configuration config;
config = new Configuration(res.getConfiguration());
switch (locale) {
case "es":
config.locale = new Locale("es");
break;
case "fr":
config.locale = new Locale("fr");
break;
default:
config.locale = new Locale("en");
}
res.updateConfiguration(config, res.getDisplayMetrics());
}}
```
My issue now is connecting the LangHelper class to the ListView, so that when the user selects their desired language from the list, the following pages (i.e. the full application) are translated into the language the user has selected.
2018/03/22 | <issue_start>username_0: I'm implementing a WebAPI using .NET Core 2 and EF Core. The application uses the UnitOfWork/Repository pattern. I've come to a point where I need to implement a many-to-many relation, but I'm having some trouble. This is what I've got so far:
**Entities:**
```
public class Team : DataEntity
{
[Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int TeamId { get; set; }
public int OrganizationID { get; set; }
public string Name { get; set; }
public string URL { get; set; }
public virtual ICollection<Season> Seasons { get; set; }
public ICollection<TeamMember> TeamMember { get; set; }
}
public class Member : DataEntity
{
[Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
public int MemberId { get; set; }
public string Name { get; set; }
public string FirstName { get; set; }
public string LastName { get; set; }
public string MobilePhone { get; set; }
public string Email { get; set; }
public int RelatedTo { get; set; }
public int TeamRole { get; set; }
public ICollection<TeamMember> TeamMember { get; set; }
}
public class TeamMember
{
public int TeamId { get; set; }
public Team Team { get; set; }
public int MemberId { get; set; }
public Member Member { get; set; }
}
```
**In my DbContext-class:**
```
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
modelBuilder.Entity<TeamMember>()
.HasKey(t => new { t.TeamId, t.MemberId });
modelBuilder.Entity<TeamMember>()
.HasOne(tm => tm.Team)
.WithMany(t => t.TeamMember)
.HasForeignKey(tm => tm.TeamId);
modelBuilder.Entity<TeamMember>()
.HasOne(tm => tm.Member)
.WithMany(t => t.TeamMember)
.HasForeignKey(tm => tm.MemberId);
}
```
On my repository I have en extension for IncludeAll:
```
public static IQueryable<Team> IncludeAll(this IGenericRepository<Team> repository)
{
return repository
.AsQueryable()
.Include(s => s.Seasons)
.Include(tm => tm.TeamMember);
}
```
Everything builds as expected, but when I try to invoke the controller action that fetches a Team (which I expect to include all members) or a Member (which I expect to include all the member's teams - code not included above), SwaggerUI returns: TypeError: Failed to fetch. If I try to invoke the controller action directly in Chrome (<http://localhost/api/Team/1>) I get an incomplete result, but nevertheless... a result :) :
```
{
"value":
{
"teamId":1,
"organizationID":1,
"name":"Team1",
"url":"http://www.team1.com",
"seasons":[
{
"id":1,
"teamID":1,
"name":"PK4",
"description":null,
"startDate":"2017-09-01T00:00:00",
"endDate":"2018-12-31T00:00:00",
"created":"2017-12-01T00:00:00",
"updated":"2017-12-27T00:00:00",
"createdBy":"magnus",
"updatedBy":"magnus"
}],
"teamMember":[
{
"teamId":1
```
Am I missing something obvious?
2018/03/22 | <issue_start>username_0: I am getting this error when trying to reset the index on a simple pandas DataFrame.
My input is:
`df2 = a.reset_index(drop=True)`
Output
```
TypeError Traceback (most recent call last)
in ()
----> 1 df2 = a.reset_index(drop=True)
      2 df2
TypeError: 'bool' object is not callable
```
This doesn't usually happen when I reset indices and also couldn't find anything on here when the error involves a reset.
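One common way this particular `TypeError` arises (an assumption on my part; the offending line is not shown in the question) is that `reset_index` was shadowed earlier in the session by an assignment such as `a.reset_index = True`, after which the attribute is a `bool` rather than a method. A stdlib-only sketch of the same failure:

```python
class Frame:
    """Stand-in for a DataFrame exposing a reset_index method."""
    def reset_index(self, drop=False):
        return Frame()

a = Frame()
a.reset_index = True          # accidental assignment shadows the method
try:
    a.reset_index(drop=True)  # now calls the bool, not the method
except TypeError as exc:
    print(exc)                # 'bool' object is not callable
```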
Thanks
2018/03/22 | <issue_start>username_0: I have a `data.frame` that arbitrarily defines parameter names and sequence boundaries:
```
dfParameterValues <- data.frame(ParameterName = character(), seqFrom = integer(), seqTo = integer(), seqBy = integer())
row1 <- data.frame(ParameterName = "parameterA", seqFrom = 1, seqTo = 2, seqBy = 1)
row2 <- data.frame(ParameterName = "parameterB", seqFrom = 5, seqTo = 7, seqBy = 1)
row3 <- data.frame(ParameterName = "parameterC", seqFrom = 10, seqTo = 11, seqBy = 1)
dfParameterValues <- rbind(dfParameterValues, row1)
dfParameterValues <- rbind(dfParameterValues, row2)
dfParameterValues <- rbind(dfParameterValues, row3)
```
I would like to use this approach to create a grid of c parameter columns based on the number of unique `ParameterName`s that contain r rows of all possible combinations of the sequences given by `seqFrom`, `seqTo`, and `seqBy`. The result would therefore look somewhat like this or should have a content like the following:
```
ParameterA ParameterB ParameterC
1 5 10
1 5 11
1 6 10
1 6 11
1 7 10
1 7 11
2 5 10
2 5 11
2 6 10
2 6 11
2 7 10
2 7 11
```
Edit: Note that the parameter names and their numbers are not known in advance. The data.frame comes from elsewhere so I cannot use the standard static expand.grid approach and need something like a flexible function that creates the expanded grid based on any dataframe with the columns ParameterName, seqFrom, seqTo, seqBy.
I've been playing around with for loops (which is bad to begin with) and it hasn't led me to any elegant ideas. I can't seem to find a way to come up with the result using tidyr without constructing the sequences separately first, either. Do you have any elegant approaches?
Bonus kudos for extending this to include not only numerical sequences, but vectors/sets of characters / other factors, too.
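For what it's worth, the underlying operation is a plain Cartesian product over per-row sequences, so any language with a product primitive can express it dynamically; here is the idea sketched in Python as an analog (hypothetical tuples mirroring the data.frame; in R itself one would build the sequences from the rows and feed them to `expand.grid`):

```python
from itertools import product

# Rows mirror the data.frame: (ParameterName, seqFrom, seqTo, seqBy).
rows = [("parameterA", 1, 2, 1),
        ("parameterB", 5, 7, 1),
        ("parameterC", 10, 11, 1)]

names = [r[0] for r in rows]
# Inclusive upper bound, like R's seq(from, to, by).
seqs = [range(lo, hi + 1, by) for _, lo, hi, by in rows]

grid = [dict(zip(names, combo)) for combo in product(*seqs)]
print(len(grid))  # 12
print(grid[0])    # {'parameterA': 1, 'parameterB': 5, 'parameterC': 10}
```

Replacing one of the ranges with a list of strings covers the bonus case, since `product` does not care about element types.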
Many thanks!
2018/03/22 | <issue_start>username_0: I have this error, but I don't really know why:
>
> Argument 1 of Window.getComputedStyle does not implement interface Element
>
>
>
HTML:
```
```
JavaScript / jQuery:
```
var reveal = $('.reveal');
reveal.css('margin', '10px');
var resulte = window.getComputedStyle(reveal, 'margin');
```<issue_comment>username_1: `getComputedStyle()` expects a DOM `Element`, not a jQuery object. Pass it `reveal[0]` and it will work.
The second argument of the `getComputedStyle()` function is optional and it is for the pseudo element, not the CSS property. You can use `getComputedStyle()` to get all the properties and then use `getPropertyValue('margin')` to get the specific property that you want.
The problem is that when you assign a value to the `margin` property in jQuery like this, `reveal.css('margin', '10px')`, it gets applied to each of the margins (top, right, bottom, and left) and the `margin` property will return nothing (in some browsers). You'll have to get each margin separately.
```js
var reveal = $('.reveal');
reveal.css('margin', '10px');
var resulte = window.getComputedStyle(reveal[0], null).getPropertyValue('margin-top');
console.log(resulte);
```
```css
.reveal {
background-color: #f00;
height: 50px;
width: 50px;
}
```
```html
<div class="reveal"></div>
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: The problem here is that you're passing a `jQuery` object where an `Element`[1](https://developer.mozilla.org/en-US/docs/Web/API/Element) is expected.
You're mixing up jQuery and (vanilla/plain) JavaScript, which are NOT the same. JavaScript is a language and jQuery is a library that allows you to deal with (primarily) the DOM.
A jQuery object is not always interchangeable with a plain JavaScript object, so you need to extract the actual matched elements.
A jQuery object (off the top of my head) is basically an advanced iterable which contains all of its matched elements inside. If you only expect one match, it should be in `reveal[0]`.
`reveal[0]` should in that case be an `Element` or `HTMLElement` which you can then pass to the `window.getComputedStyle`[2](https://developer.mozilla.org/en-US/docs/Web/API/Window/getComputedStyle) function.
Upvotes: 0 |
2018/03/22 | 762 | 2,815 | <issue_start>username_0: I am creating a blogging project. I want that every time a new post is created, the user doesn't have to refresh the page to see the post; it should just be appended to the already existing ones. This is what I am doing presently:
```
function postTin() {
$.ajax({
type:'POST',
url: "frextra.php",
data:"ins=passPost",
success: (function (result) {
$("#post").html(result);
})
})
}
postTin(); // To output when the page loads
setInterval(postTin, (5 * 1000)); // x * 1000 to get it in seconds
```
What I am doing here is that every 5 seconds the page reloads the posted data, in which case it also adds the newly posted ones.
**What I want is this:**
I don't want to use a timer; I want the request to load only when there is a new post added to the database. I've been stuck here for two weeks, searched around, and have not found anything to help. |
2018/03/22 | 580 | 2,167 | <issue_start>username_0: I want to rename a file from a beforeRemote hook, using the context.
```
container.beforeRemote('upload', function (context, res, next) {
/////rename file
});
```
Can anyone tell me how I can access the files from this? |
2018/03/22 | 760 | 2,844 | <issue_start>username_0: I have created a directive which wraps a jQuery element. This directive is bound to an object which contains some callback functions, as follows:
```
vm.treeEvents = {
check_node: function(node, selected){
vm.form.$setDirty();
...
},
uncheck_node: function(node, selected){
vm.form.$setDirty();
...
}
};
```
In the directive's post-link function I have this:
```
if (scope.tree.treeEvents.hasOwnProperty(evt)) {
scope.tree.treeView.on(evt.indexOf('.') > 0 ? evt : evt + '.jstree', scope.tree.treeEvents[evt]);
}
```
so whenever an event declared in the `treeEvents` scope binding is triggered, the callback function is executed, and then the form is set to `dirty` state.
When I did this, I noticed that the form does not switch to the dirty state unless I scroll the page or click on some element in the form.
How can I solve this? |
2018/03/22 | 620 | 1,931 | <issue_start>username_0: I am trying to create a new dynamic array. I assigned one dynamic variable's value to another new variable and pushed it into a new array, but every entry ends up updated to the last variable's value.
Code
----
```
for(let i=0;i
```
My Json Value:
--------------
```
[{
"itemName" : "3 SS Finish Baskets",
"itemDesc" : "3 SS Finish Baskets",
"itemId" : 1,
"unitId" : 2,
"categories" : [
{
"id" : 1,
"text" : "single room"
},
{
"id" : 2,
"text" : "Foyer/Living"
}
]
},
{
....
}]
```
Output
------
```
[{
"itemName" : "3 SS Finish Baskets",
"categoryId " : 2
},
{
"itemName" : "3 SS Finish Baskets",
"categoryId " : 2
}]
```
Expecting Output
----------------
```
[{
"itemName" : "3 SS Finish Baskets",
"categoryId " 1
},
{
"itemName" : "3 SS Finish Baskets",
"categoryId " 2
}]
```<issue_comment>username_1: Problem with shallow copy, it is assigning reference of array.
Use angular.copy(source, destination) for deep copy.
```
var allItems = [{
"itemName" : "3 SS Finish Baskets",
"itemDesc" : "3 SS Finish Baskets",
"itemId" : 1,
"unitId" : 2,
"categories" : [
{
"id" : 1,
"text" : "single room"
},
{
"id" : 2,
"text" : "Foyer/Living"
}
]
}]
$scope.reitems = [];
for(let i=0;i
```
Upvotes: 0 <issue_comment>username_2: Try this code, it will work.
```
let allItems = data;
let reitems = [];
let n = 0;
let allItemCategory = data.categories;
for(let i=0; i < allItems.length; i++){
  allItems[i].categories.forEach(element => {
console.log('element.id',element.id)
let categoryId = element.id;
var obj = {
categoryId : element.id,
itemName : allItems[i].itemName,
itemDesc : allItems[i].itemDesc
}
reitems[n] = obj;
n++;
});
}
console.log('allI new reitems ------->', reitems);
```
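As a side note, the same reshaping can be done without index bookkeeping. A minimal runnable sketch in plain JavaScript (sample data inlined; `flatMap` requires a reasonably recent runtime):

```javascript
const allItems = [{
  itemName: "3 SS Finish Baskets",
  itemDesc: "3 SS Finish Baskets",
  categories: [{ id: 1, text: "single room" }, { id: 2, text: "Foyer/Living" }]
}];

// Build one fresh object per (item, category) pair, so no references are shared
const reitems = allItems.flatMap(item =>
  item.categories.map(cat => ({
    categoryId: cat.id,
    itemName: item.itemName,
    itemDesc: item.itemDesc
  }))
);

console.log(reitems);
```

Because the object literal runs once per category, every entry keeps its own `categoryId` instead of all sharing the last one.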
Upvotes: 2 [selected_answer] |
2018/03/22 | 746 | 2,048 | <issue_start>username_0: I want to print a Code 128 barcode with a Zebra printer, but I just can't get it exactly the way I want, because the barcode is either too small or too big for the label size of 40x20 mm. Is there anything else I can try besides using the `^BY` (Bar Code Field Default) module width and ratio?
```
^XA^PQ2^LH0,0^FS
^MUM
^GB40,20,0.1,B^FS
^FO1.5,4
^BY0.2
^BCN,10,N,N
^FD*030493LEJCG002999*^FS
^FO8,15
^A0N,3,3
^FD*030493LEJCG002830*^FS
^MUD
^XZ
```
Above script gives me a label that looks like this:
[](https://i.stack.imgur.com/mF7yu.png)
But when I just decrease the module width to 0.1 (which is the lowest) the barcode becomes too small and may be problematic to scan with a hand scanner:
[](https://i.stack.imgur.com/UwISW.png)<issue_comment>username_1: Code-128 is a fixed-ratio code, so you would appear to have the choice of two sizes. You may be able to solve the problem by using a 300dpi printer in place of a 200.
If you can change the format (and I'm intrigued by the barcode and readable being different values) then you could save a little by printing *one* number-sequence and *one* alpha-sequence, as an even count of numerics will be encoded as alphabet C so you'd save one change-alphabet element.
Do you really need the `*` on each end?
Otherwise, perhaps code 39 (which prints the `*` if you use the print-interpretation-line option) would suit your purposes better.
Upvotes: 1 <issue_comment>username_2: Another possibility is to do on-the-fly code-set changes. Try something like:
```
^XA^PQ2^LH0,0^FS
^MUM
^GB60,20,0.1,B^FS
^FO1.5,4
^BY0.2
^BCN,10,N,N
^FD>:*>5030493>6LEJCG>5002830>6*^FS
^FO8,15
^A0N,3,3
^FD*030493LEJCG002830*^FS
^MUD
^XZ
```
This will allow fewer symbols to encode your data,
especially if you can structure the content to have all the alpha characters at one end or the other.
Or (depending on your firmware) you could use auto mode: `^BCN,10,N,N,N,A`
Upvotes: 0 |
2018/03/22 | 3,183 | 12,002 | <issue_start>username_0: I wrote this code in `lib/helper.js`:
```js
var myfunction = async function(x,y) {
....
return [variableA, variableB]
}
exports.myfunction = myfunction;
```
Then I tried to use it in another file:
```js
var helper = require('./helper.js');
var start = function(a,b){
....
const result = await helper.myfunction('test','test');
}
exports.start = start;
```
I got an error:
>
> await is only valid in async function
>
>
>
What is the issue?<issue_comment>username_1: The error is not referring to `myfunction` but to `start`.
```
async function start() {
....
const result = await helper.myfunction('test', 'test');
}
```
---
```js
// My function
const myfunction = async function(x, y) {
return [
x,
y,
];
}
// Start function
const start = async function(a, b) {
const result = await myfunction('test', 'test');
console.log(result);
}
// Call start
start();
```
---
---
I'll use the opportunity of this question to advise you about a known anti-pattern when using `await`, which is: `return await`.
---
**WRONG**
```js
async function myfunction() {
console.log('Inside of myfunction');
}
// Here we wait for the myfunction to finish
// and then returns a promise that'll be waited for aswell
// It's useless to wait the myfunction to finish before to return
// we can simply returns a promise that will be resolved later
// useless async here
async function start() {
// useless await here
return await myfunction();
}
// Call start
(async() => {
console.log('before start');
await start();
console.log('after start');
})();
```
---
**CORRECT**
```js
async function myfunction() {
console.log('Inside of myfunction');
}
// Here we wait for the myfunction to finish
// and then returns a promise that'll be waited for aswell
// It's useless to wait the myfunction to finish before to return
// we can simply returns a promise that will be resolved later
// Also point that we don't use async keyword on the function because
// we can simply returns the promise returned by myfunction
function start() {
return myfunction();
}
// Call start
(async() => {
console.log('before start');
await start();
console.log('after start');
})();
```
---
Also, know that there is a special case where `return await` is correct and important (when using try/catch):
[Are there performance concerns with `return await`?](https://stackoverflow.com/questions/43353087/are-there-performance-concerns-with-return-await/43985067#43985067)
Upvotes: 10 [selected_answer]<issue_comment>username_2: **To use `await`, its executing context needs to be `async` in nature**
As said, you first need to define the nature of the `executing context` in which you are willing to `await` a task.
**Just put `async` before the `fn` declaration in which your `async` task will execute.**
```
var start = async function(a, b) {
// Your async task will execute with await
await foo()
console.log('I will execute after foo get either resolved/rejected')
}
```
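For reference, here is a self-contained, runnable version of the same idea, with `foo` stubbed out as a stand-in async task (the stub is an assumption, not part of the original snippet):

```javascript
const foo = () => Promise.resolve('done'); // stand-in async task

const start = async function (a, b) {
  // Execution of start() pauses here until foo's promise settles
  const outcome = await foo();
  console.log('I will execute after foo is resolved/rejected:', outcome);
  return outcome;
};

start();
```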
**Explanation:**
```
var helper = require('./helper.js');
var start = async function(a,b){
....
const result = await helper.myfunction('test','test');
}
exports.start = start;
```
**Wondering what's going on under the hood?**
`await` consumes promise/future / task-returning methods/functions and `async` marks a method/function as capable of using await.
Also, if you are familiar with `promises`, `await` is doing a very similar promise/resolve process: it creates a promise chain and executes your next task in the `resolve` callback.
For more info you can refer to [MDN DOCS](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function).
Upvotes: 5 <issue_comment>username_3: When I got this error, it turned out I had a call to the map function inside my "async" function, so this error message was actually referring to the map function not being marked as "async". I got around this issue by taking the "await" call out of the map function and coming up with some other way of getting the expected behavior.
```
var myfunction = async function(x,y) {
....
someArray.map(someVariable => { // <- This was the function giving the error
return await someFunction(someVariable);
});
}
```
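If you do want to keep the `map` and still `await` inside it, a common pattern is to mark the callback itself `async` and collect the resulting promises with `Promise.all`. A runnable sketch (`someFunction` and `someArray` are stand-ins):

```javascript
const someFunction = async (v) => v * 2; // stand-in for the real async work
const someArray = [1, 2, 3];

const myfunction = async function () {
  // Each async callback returns a promise; Promise.all awaits them together
  return Promise.all(someArray.map(async (someVariable) => someFunction(someVariable)));
};

myfunction().then((results) => console.log(results)); // [ 2, 4, 6 ]
```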
Upvotes: 4 <issue_comment>username_4: The current implementation of `async` / `await` only supports the `await` keyword inside of `async` functions. Change your `start` function signature so you can use `await` inside `start`.
```
var start = async function(a, b) {
}
```
For those interested, the proposal for top-level `await` is currently in Stage 2: <https://github.com/tc39/proposal-top-level-await>
Upvotes: 2 <issue_comment>username_5: Yes, await / async was a great concept, but the implementation is completely broken.
For whatever reason, the await keyword has been implemented such that it can only be used within an async method. This is in fact a bug, though you will not see it referred to as such anywhere but right here. The fix for this bug would be to implement the await keyword such that it can only be used TO CALL an async function, regardless of whether the calling function is itself synchronous or asynchronous.
Due to this bug, if you use await to call a real asynchronous function somewhere in your code, then ALL of your functions must be marked as async and ALL of your function calls must use await.
This essentially means that you must add the overhead of promises to all of the functions in your entire application, most of which are not and never will be asynchronous.
If you actually think about it, using await in a function should require the function containing the await keyword TO NOT BE ASYNC - this is because the await keyword is going to pause processing in the function where the await keyword is found. If processing in that function is paused, then it is definitely NOT asynchronous.
So, to the developers of javascript and ECMAScript - please fix the await/async implementation as follows...
* await can only be used to CALL async functions.
* await can appear in any kind of function, synchronous or asynchronous.
* Change the error message from "await is only valid in async function" to "await can only be used to call async functions".
Upvotes: -1 <issue_comment>username_6: I had the same problem and the following block of code was giving the same error message:
```
repositories.forEach( repo => {
const commits = await getCommits(repo);
displayCommit(commits);
});
```
The problem is that the `await` is inside the arrow-function callback passed to `forEach`, so that callback itself is the function that has to be marked `async`. So, I had to add the word `async` to it like this, `async(repo)`, and it started working:
```
repositories.forEach( async(repo) => {
const commits = await getCommits(repo);
displayCommit(commits);
});
```
Upvotes: 4 <issue_comment>username_7: "await is only valid in async function"
But why? 'await' explicitly turns an async call into a synchronous call, and therefore the caller cannot be async (or asyncable) - at least, not because of the call being made at 'await'.
Upvotes: -1 <issue_comment>username_8: async/await is the mechanism of handling promise, two ways we can do it
```
functionWhichReturnsPromise()
.then(result => {
console.log(result);
})
.catch(err => {
console.log(err);
});
```
or we can use await to wait for the promise to be fulfilled first, which means it is either resolved or rejected.
Now if we want to use **await** (waiting for a promise to fulfil) inside a function, the containing function must be an async function, because we are waiting for a promise to be fulfilled asynchronously. Makes sense, right?
```
async function getRecipesAw(){
const IDs = await getIds; // returns promise
const recipe = await getRecipe(IDs[2]); // returns promise
return recipe; // returning a promise
}
getRecipesAw().then(result=>{
console.log(result);
}).catch(error=>{
console.log(error);
});
```
Upvotes: 2 <issue_comment>username_9: If you are writing a Chrome Extension and you get this error for your code at root, you can fix it using the following "workaround":
```
async function run() {
// Your async code here
const beers = await fetch("https://api.punkapi.com/v2/beers");
}
run();
```
Basically you have to wrap your async code in an `async function` and then call the function without awaiting it.
Upvotes: 4 <issue_comment>username_10: This in one file works..
Looks like await only is applied to the local function which has to be async..
I also am struggling now with a more complex structure and in between different files. That's why I made this small test code.
edit: i forgot to say that I'm working with node.js.. sry. I don't have a clear question. Just thought it could be helpful with the discussion..
```
function helper(callback){
function doA(){
var array = ["a ","b ","c "];
var alphabet = "";
return new Promise(function (resolve, reject) {
array.forEach(function(key,index){
alphabet += key;
if (index == array.length - 1){
resolve(alphabet);
};
});
});
};
function doB(){
var a = "well done!";
return a;
};
async function make() {
var alphabet = await doA();
var appreciate = doB();
callback(alphabet+appreciate);
};
make();
};
helper(function(message){
console.log(message);
});
```
Upvotes: 0 <issue_comment>username_11: If you have called async function inside **foreach** update it to **for loop**
Upvotes: 2 <issue_comment>username_12: Found the code below in this nice article: [HTTP requests in Node using Axios](https://flaviocopes.com/node-axios/#get-requests)
```
const axios = require('axios')
const getBreeds = async () => {
try {
return await axios.get('https://dog.ceo/api/breeds/list/all')
} catch (error) {
console.error(error)
}
}
const countBreeds = async () => {
const breeds = await getBreeds()
if (breeds.data.message) {
console.log(`Got ${Object.entries(breeds.data.message).length} breeds`)
}
}
countBreeds()
```
Or using Promise:
```
const axios = require('axios')
const getBreeds = () => {
try {
return axios.get('https://dog.ceo/api/breeds/list/all')
} catch (error) {
console.error(error)
}
}
const countBreeds = async () => {
const breeds = getBreeds()
.then(response => {
if (response.data.message) {
console.log(
`Got ${Object.entries(response.data.message).length} breeds`
)
}
})
.catch(error => {
console.log(error)
})
}
countBreeds()
```
Upvotes: 2 <issue_comment>username_13: ### A common problem in Express:
The warning can refer to the function, or ***where you call it***.
Express items tend to look like this:
```
app.post('/foo', ensureLoggedIn("/join"), (req, res) => {
const facts = await db.lookup(something)
res.redirect('/')
})
```
Notice the `=>` arrow function syntax for the function.
The problem is NOT actually in the db.lookup call, **but right here in the Express item**.
Needs to be:
```
app.post('/foo', ensureLoggedIn("/join"), async function (req, res) {
const facts = await db.lookup(something)
res.redirect('/')
})
```
Basically, nix the `=>` and add `async function`.
Upvotes: 1 <issue_comment>username_14: In later nodejs (>=14), top await is allowed with `{ "type": "module" }` specified in `package.json` or with file extension `.mjs`.
<https://www.stefanjudis.com/today-i-learned/top-level-await-is-available-in-node-js-modules/>
Upvotes: 3 |
2018/03/22 | 297 | 1,214 | <issue_start>username_0: I am using Visual Studio Code, is it possible to change a class name for instance only in the CSS file and this change is also applied in the HTML document? or vice verse, changing the class name in HTML and seeing it change in the CSS file.<issue_comment>username_1: Full disclosure, I don't use Visual Studio, but a quick search looked like this is what you may be looking for?
<https://marketplace.visualstudio.com/items?itemName=Zignd.html-css-class-completion>
Unless I'm misreading?
Upvotes: 1 <issue_comment>username_2: At the time of writing, this feature does not exist in VS Code.
VS Code is currently unable to know if the same CSS class is referenced in other files, so refactoring across multiple files wouldn’t work.
If you have your cursor on a CSS selector and you press `Shift` + `F12` to find all references, you will find references to the class name only within the same CSS file, even when you are using the class in your HTML.
There is an [open issue](https://github.com/microsoft/vscode/issues/47331) for implementing cross-file IntelliSense for CSS classes and ids. This might lead to supporting refactoring across HTML and CSS files in the future.
Upvotes: 2 |
2018/03/22 | 407 | 1,591 | <issue_start>username_0: I was trying to use the REST API of TeamCity, but I can't find a list of all supported requests and the names of the parameters. I wanted to look it up in their official documentation (<https://confluence.jetbrains.com/display/TCD10/REST+API>)
where a link to exactly this list is provided
(<http://teamcity:8111/app/rest/application.wadl>)
but I just can't connect to it. It seems like the page is down.
I have googled all kinds of stuff in the hope of finding this list somewhere else, but I couldn't find anything useful.
Does anyone know where to find such a list or can provide one? That would be fantastic.
Thanks |
2018/03/22 | 1,091 | 3,567 | <issue_start>username_0: I'm trying to extract the CSRF token so I can log in, and be able to obtain cookies, but I'm not able to.
I'm able to get a 200 response code when accessing the URL that contains the CSRF token, and I'm able to see it on the browser and the console, but my response assertion is not able to assert anything regardless of me changing the apply to, field to test, and pattern matching rules sections. My regular expression extractor isn't able to get anything either. All the headers to get to the URL are there. Any suggestions?
Forgot to mention: I'm able to get it on one server that is (or at least should be) exactly the same as this one...
[](https://i.stack.imgur.com/0pbmx.png)
[](https://i.stack.imgur.com/m5QE4.png)
EDIT:
I placed it under the HTTP Sampler that has that response, and here is an example of what I get for my response assertion. I've also added various images.
[](https://i.stack.imgur.com/j2kiL.png)
[](https://i.stack.imgur.com/rXA6k.png)
[](https://i.stack.imgur.com/xPAbB.png)
[](https://i.stack.imgur.com/FGd6S.png)
[](https://i.stack.imgur.com/uVtpf.png)<issue_comment>username_1: Unfortunately you didn't share your output, so I cannot tell for sure, but although your RegEx seems correct in both cases, it may not be matching due to some extra spacing.
It appears that you are expecting a valid JSON, so instead of RegEx you could use [JSON Extractor](https://jmeter.apache.org/usermanual/component_reference.html#JSON_Extractor) and/or [JSON Assertion](https://jmeter.apache.org/usermanual/component_reference.html#JSON_Assertion), for which extra spacing will not matter.
Example: if Response Data is
>
>
> ```
> {"token":"1<PASSWORD>"}
>
> ```
>
>
I can specify JSON Extractor as
[](https://i.stack.imgur.com/WtzGQ.png)
(most important line is **JSON Path**: `$.token`)
and the result will be variable `token` with value `12345`.
[Here's](http://jsonpath.com/) a good online JSON Path tester, which can help you to figure out the right JSON Path.
Upvotes: 2 [selected_answer]<issue_comment>username_2: If your goal is to check presence of a [JSON Object](https://www.w3schools.com/js/js_json_objects.asp) with name of `token` and an arbitrary value I would recommend going for [JSON Assertion](http://jmeter.apache.org/usermanual/component_reference.html#JSON_Assertion) instead.
1. Add JSON Assertion as a child of the request you would like to assert.
2. Use the following [JSON Path](https://github.com/json-path/JsonPath#path-examples) query:
```
$.token
```
[](https://i.stack.imgur.com/pdQdv.png)
JSON Assertion is available since [JMeter 4.0](https://www.blazemeter.com/blog/whats-new-in-jmeter-4)
---
If you still want to go for the Response Assertion - configure it as follows:
* Pattern Matching Rules: `Contains`
* Patterns to Test: `{"token":"(.+?)"}`
[](https://i.stack.imgur.com/DPS4E.png)
Upvotes: 0 |
2018/03/22 | 688 | 2,224 | <issue_start>username_0: I want to create a string vector out of a row of a data frame. without its row and column names. This is my dataframe.
```
agreement = c("Strongly Disagree"," Disagree", "Neither", "Agree", "Strongly Agree")
likelihood = c("Very unlikely","Unlikely", "Neither", "Likely", "Very Likely")
df <- as.data.frame(rbind(agreement, likelihood))
```
Once I have the data frame, how do I go back to the character vector without its column values? E.g.
```
> "Strongly Disagree"," Disagree", "Neither", "Agree", "Strongly Agree"
```
I tried with `as.character`, but what I get in return are numeric values.
```
as.character(df[1,1:5])
> "1" "1" "1" "1" "1"
```
I also tried with `as.vector`, but it returns me a list including column names.
```
as.vector(df[1,1:5])
> V1 V2 V3 V4 V5
agreement Strongly Disagree Disagree Neither Agree Strongly Agree
```
Any help would be appreciated!<issue_comment>username_1: You need to set the `stringsAsFactors` to `FALSE` when creating a dataframe.
```
df <- as.data.frame(rbind(agreement, likelihood), stringsAsFactors = FALSE)
```
`as.character(df[1,1:5])` now results in
```
"Strongly Disagree" " Disagree" "Neither" "Agree" "Strongly Agree"
```
When retrieving all rows or columns you can simply leave the field empty e.g (`df[1,]` returns the first row with all columns)
Upvotes: 3 [selected_answer]<issue_comment>username_2: In addition to @YusufSyakur's answer for the `as.character` part, you should know a subset of a data.frame with more than one column is always data.frame itself. Even if there's only one row. And a data.frame is just a list with extra sugar on top, so `as.vector` removes that sugar and leaves the list.
Upvotes: 0 <issue_comment>username_3: Have in mind that you can use tibbles instead of dataframes (<https://cran.r-project.org/web/packages/tibble/vignettes/tibble.html>)
with a tibble stings are not converted to factors by default. So in your case you could use:
```
df <- as.tibble(rbind(agreement, likelihood))
> as.character(df[1,1:5])
[1] "Strongly Disagree" " Disagree" "Neither" "Agree" "Strongly Agree"
```
Upvotes: 0 |
2018/03/22 | 422 | 1,418 | <issue_start>username_0: I am looking for some advice. I have some checkboxes, and I would like to add a dynamic count that shows the number selected.
So far I have
```
$('.individual').length
```
This returns the number, but how would I add it to my span:
```
- 0 of 29 records selected
- [Select All 29 Records](#)
```
This is what is on my Table Data
```
|
```
I forgot to mention that on my application, the table is loaded with Ajax, would I just need to add it to my AJAX callback?<issue_comment>username_1: Use JQuery .text() and pass the length to it like so:
```
$('#count-checked-checkboxes').text($('.individual').length);
```
Upvotes: 0 <issue_comment>username_2: This should do it:
```
$('#count-checked-checkboxes').text($('.individual:checked').length)
```
Upvotes: 0 <issue_comment>username_3: Try this code. I used **[`text()`](http://api.jquery.com/text/)** of jQuery to add text.
```js
$(".1").on("change", check);
function check(){
if($(".1:checked").length>0){
$('p').show(); //instead of $('p') select whatever you want show(selector for that text)
$("#selected").text($(".1:checked").length);
$("#total").text($(".1").length);
}
else{
$('p').hide(); //instead of $('p') select whatever you want hide(selector for that text)
}
}
check();
```
```html
1
2
3
4
5
of checkbox selected
```
Upvotes: 2 [selected_answer] |
2018/03/22 | 473 | 1,789 | <issue_start>username_0: I am managing quite a large, fairly complex website. We are using Ansible for deployments; the majority of deployments are fine: we can just include the playbooks and roles in a master playbook and it works like a charm.
The master playbook looks something like below:
* Master PlayBook
+ includes deployment playbook that have vars/config specific for deploying latest release
+ deployment playbooks includes specific roles related to the SW
- roles have tasks that are tagged appropriately
The issue we are having is that we can't pass on the tags while including the playbooks in the master playbook. Something like:
* include: task1.yml
tags: t1
This works absolutely fine if it's called from the command line, without using tags in the master playbook:
ansible-playbook -i host master\_playbook.yml -t t1
Any suggestions for possible solutions would be helpful.
2018/03/22 | 392 | 1,234 | <issue_start>username_0: Why is the size of the compressed string bigger?
Doesn't zlib need to **compress**?
Example:
```
import zlib
import sys
str1 = "abcdefghijklmnopqrstuvwxyz"
print "size1: ", sys.getsizeof(str1)
print "size2: ", sys.getsizeof(zlib.compress(str1))
```
The output:
```
size1: 47
size2: 55
```
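For reference, the result above is expected rather than a bug. A zlib stream carries a fixed overhead (a 2-byte header, a 4-byte Adler-32 checksum, and deflate block framing), and a 26-character string with no repetition gives the compressor nothing to squeeze. On top of that, `sys.getsizeof` measures the whole Python string object, including a few dozen bytes of interpreter bookkeeping, not just the payload. A minimal sketch (Python 3 syntax, unlike the Python 2 `print` in the question) comparing payload sizes with `len()`:

```python
import zlib

short = b"abcdefghijklmnopqrstuvwxyz"         # 26 distinct bytes: nothing to compress
long_ = b"abcdefghijklmnopqrstuvwxyz" * 1000  # 26,000 highly repetitive bytes

# len() measures the raw byte payload; sys.getsizeof() also counts the
# Python object header, which distorts the comparison in the question.
print(len(short), len(zlib.compress(short)))  # compressed form is *larger* here
print(len(long_), len(zlib.compress(long_)))  # compression clearly wins here
```

On long or repetitive input the fixed overhead becomes negligible and the output shrinks dramatically; zlib only guarantees a lossless round trip, not that every output is smaller than its input.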
2018/03/22 | 3,699 | 13,564 | <issue_start>username_0: After formatting my computer I reinstalled VS 2017 v15.6.3
and installed the ASP.NET Core SDK 2.1.4 from Microsoft.
But when I create a new ASP.NET Core application, VS fails with the error
>
> "Project file is incomplete. Expected imports are missing"
>
>
>
Please, can anyone help?
[](https://i.stack.imgur.com/hiHP1.png)<issue_comment>username_1: I had the same issue. In my case, deleting global.json and appsettings solved the problem.
Upvotes: 6 <issue_comment>username_2: I got the same issue. I could not create a new ASP.NET Core 2 project on a freshly installed VS2017, or open an existing one that works perfectly on another computer with VS2017.
It started working for me after uninstalling all .NET Core Runtimes and .NET Core Windows Server Hosting from Windows. Only the SDKs were left, and everything finally works.
[](https://i.stack.imgur.com/igSl9.jpg)
Probably, I needed to uninstall just .NET Core Runtimes or .NET Core Windows Server Hosting from Windows and it would be enough.
Upvotes: 2 <issue_comment>username_3: Thanks guys, I reinstalled an older version of Visual Studio 2017 and it works well. My VS version is 15.4 and it's fine.
Upvotes: 1 <issue_comment>username_4: I had the same symptoms from a corrupted dotnet core SDK (from the command line I was unable to run dotnet -v, where previously I was able). My project failed to load after a failed IIS web platform installation failure. Reinstalling .NET core SDK resolved the issue.
Upvotes: 2 <issue_comment>username_5: Eh, rookie mistake, I had Visual Studio SSDT open and was trying to open an existing .Net Core project which produces the same error:
>
> "Project file is incomplete. Expected imports are missing"
>
>
>
The solution was to use Visual Studio *not SSDT*, see in Help > About:
[](https://i.stack.imgur.com/eV06p.png)
Upvotes: 4 <issue_comment>username_6: In my case, I went to the folder C:\Program Files\dotnet\sdk,
deleted the preview SDK folder, then on the command line ran:
>
> dotnet sdk 2.1.200
>
>
>
to set the correct sdk version for asp.net core. this fixed the problem.
Upvotes: 2 <issue_comment>username_7: Repairing last Core installation worked for me
Upvotes: 4 <issue_comment>username_8: I had the same problem with an ASP.NET Core 2.0 project after I installed the DotNetCore.2.0.5-WindowsHosting framework on my development machine.
I was able to solve this problem after I uninstalled the Windows Hosting framework and all .NET Core packages. After that I reinstalled the dotnet-sdk-2.1.200-win-x64 package and everything worked fine.
Upvotes: 0 <issue_comment>username_9: I have solved this problem a couple of times lately by closing Visual Studio and running the following commands
1. dotnet nuget locals -c all
2. dotnet clean
3. dotnet build
There seems to be times, especially at the beginning of projects, when nuget gets out of whack (technical term).
Upvotes: 3 <issue_comment>username_10: In my case, I had two versions of Visual Studio installed (15.7 and 15.6).
15.7 didn't have the web workloads installed, though .NET Core 2.1 RC1 was already installed. I installed the ASP.NET workload into VS2017, and then repaired my .NET Core install for good measure. Something in this process swapped my c:\program files (x86)\dotnet and c:\program files\dotnet entries in the PATH, and Visual Studio (15.7) wouldn't open my web project.
I simply edited the System Environment variables, moved the x86 folder down one and voila - reopened Visual Studio and it now loaded my project.
[](https://i.stack.imgur.com/PNCfe.png)
Upvotes: 4 <issue_comment>username_11: Deleting the bin and obj folders for the project fixed this for me.
Upvotes: 0 <issue_comment>username_12: Same issue...took me a while to figure out. Since I was working on an ASP.NET Core project that targeted .NET Core 1.1, I thought I still needed the .NET Core 1.1 SDK installed. Since I had .NET Core SDK 2.1 installed too there was some sort of conflict and I couldn't open the project file. After uninstalling .NET Core 1.1 I was then able to open my project.
Upvotes: 0 <issue_comment>username_13: This happened to me when I uninstalled a VS2015 instance of Visual Studio I had running side by side with VS2017.
I had to go out and reinstall the .Net Core sdk/runtime with the x86 version. I had already manually done the x64 versions of these but didn't think about visual studio running in 32bit.
These are the files:
* dotnet-sdk-x.x.x-win-x86.exe
* DotNetCore.x.x.x-WindowsHosting.exe
Upvotes: 1 <issue_comment>username_14: You can check the global.json file, which you can find in the solution root directory, and make sure that the target SDK version in it is installed on your machine, or update it to one you already have installed. For example, if you have SDK version 2.1.4 installed, your global.json should look like this:
```
{
"sdk":
{
"version": "2.1.400"
}
}
```
Upvotes: 5 <issue_comment>username_15: Run **dotnet restore** on command line for the solution.
**Tip.** *If you've uninstalled the latest .net core version let's say 2.1.403 and installed the previous one, ensure the 2.1.403 folder has been actually removed at the path*
C:\Program Files\dotnet\sdk\
*I had the issue because an empty 2.1.403 folder was left behind*
Upvotes: 0 <issue_comment>username_16: **Delete all build generated bin and obj folders**. This worked for me after renaming project and trying to reload solution. Not really sure about the real cause of incompatible loading.
Upvotes: 1 <issue_comment>username_17: I got the same issue, and I ran Visual Studio with Administrator privileges. This let me open the project normally.
Upvotes: 0 <issue_comment>username_18: I started to get this error when I uninstalled all my old .NET Core SDKs and installed the latest one, 2.1.5 at the time of posting. I tried all the solutions here without any luck, so I decided to check the current SDK version after all the cleanup I did, because I thought this was causing the problem, and I was right: I started a command prompt, typed `dotnet --version`, and got the message "Did you mean to run dotnet SDK commands? Please install dotnet SDK from:". Strange, the SDK is installed but the system doesn't see it. Then I looked in my environment variables, and the PATH variable had the x86 dotnet path before the regular Program Files (the x64 one) path. So I moved the x64 entry before the x86 one and voila, everything is back to normal.
Hope I helped someone.
Upvotes: 3 <issue_comment>username_19: Turns out my project was dependent on another project in the solution, which had failed NuGet dependency issues, which in turn was because the Target Framework in the project properties was blank.
I opened the `.csproj` file to check the target framework, [downloaded the target framework](https://www.microsoft.com/net/download/visual-studio-sdks) and chose "repair", then restarted Visual Studio and all good again!
Upvotes: 0 <issue_comment>username_20: >
> ***Resolved***
>
>
>
1. Even after uninstalling a given version from Control Panel, files and folders for the deleted version may still be found in C:\Program Files\dotnet\sdk
2. Go and delete the unwanted version folder
3. Try reloading the project
>
> This worked for me
>
>
>
Upvotes: 5 [selected_answer]<issue_comment>username_21: I was facing the same issue. I was running my project on .Net core 2.2. I tried every single solution given here but nothing worked. I repaired my visual studio and it's working fine now.
Upvotes: 1 <issue_comment>username_22: I just had the same issue while installing the new preview version of VS 2019.
I fixed it by uninstalling the latest update, which in turn uninstalled the most recently downloaded version of Core, which I believe caused the mismatch.
Upvotes: 0 <issue_comment>username_23: Solved for me by updating Visual Studio 2017 to 15.9.11 (I can't remember the version that was failing). Maybe repairing Visual Studio would have worked too. Repairing the .NET Core installation and cleaning build artifacts and temporary files didn't solve it.
Upvotes: 0 <issue_comment>username_24: Another potential cause is you **installed a .NET Core version which is incompatible with Visual Studio**. This might be unrelated to the original question by the OP, since it concerns different version numbers, but since I landed on this page while looking for help I thought this might be useful to others.
At the time of writing, I installed **.NET Core 2.2.203** which is **not compatible with Visual Studio 2017** (Professional 15.9.11).
The [.NET Core download page](https://dotnet.microsoft.com/download) lists a separate download for Visual Studio 2017. Be sure to download this one when you intend to use Visual Studio 2017. The current supported version is **.NET Core 2.2.106**.
Upvotes: 5 <issue_comment>username_25: I solved this problem by modifying the first line in the .csproj file from:
```
```
to:
```
```
Upvotes: 4 <issue_comment>username_26: You just need to cross-check the .NET Core library version your project was created with against the updated one. When you update the Core library it won't get updated in .csproj; you either need to update it manually or remove the latest/updated Core SDK.
To check the .NET Core version, run `dotnet --version` on the command line,
then compare against the libraries installed under Programs and Features.
Removing the latest library resolved the problem without changing a single line.
Upvotes: 0 <issue_comment>username_27: After installing **VS2019** I started to have the same error when opening existing .Net Core solutions in VS2017.
In **Visual Studio Installer** I've updated VS2017 to the latest version (15.9.11) and the problem disappeared.
Later after upgrading **VS2019** to the latest release I started to have the same errors and had to upgrade VS2017 again.
Upvotes: 3 <issue_comment>username_28: I had the same issue. It took me quite a while to figure out. In my case, I was trying to open a new ASP.NET Core application and I was getting back 0 projects, like the one in this post.
**What I tried:**
I tried changing environmental variables under Control Panel\System and Security\System (click advance settings)
[](https://i.stack.imgur.com/HlQ3E.png)
Then click Environmental Variables
[](https://i.stack.imgur.com/9zOQh.png)
Then check to see if you have dotnet path included, whether it is under Program Files or Program files (x86)
[](https://i.stack.imgur.com/ObGgh.png)
Some people suggested to move (x86) up, which is based on [microsoft documentation](https://learn.microsoft.com/en-us/aspnet/core/test/troubleshoot?view=aspnetcore-2.2#no-net-core-sdks-were-detected)
**What fixed my issue:**
Turns out I had the wrong version installed and it was not compatible with Visual Studio 2017
[](https://i.stack.imgur.com/5H0yl.png)
Then I removed the other installations and reinstalled the right version that is compatible with VS17, and it works like magic. Hope it helps someone in the future.
Upvotes: 2 <issue_comment>username_29: The only thing that worked for me was to upgrade Visual studio:
[](https://i.stack.imgur.com/eX0CH.png)
[](https://i.stack.imgur.com/6I2ea.png)
Upvotes: 2 <issue_comment>username_30: I had the same error message. In my case, I had to install the **.NET Core cross-platform development** toolset.
Steps:
1. In Visual Studio, go to Tools > Get Tools and Features...
2. Modify the installation under Workloads: select .NET Core cross-platform development and click Modify.
3. Restart VS and rebuild your project.
Upvotes: 2 <issue_comment>username_31: I resolved it by updating Visual Studio 2017.
The actual problem was that the .NET Core version was newer than what Visual Studio supported.
Go to Help> Check for update, update Visual Studio.
Upvotes: 1 <issue_comment>username_26: It seems that Visual Studio is searching for the right version of the SDK. VS2017 does not support all SDK versions, whereas VS2019 does. When you install VS2019 it always installs the latest SDK version, which is not supported by VS2017; this leads to the "**Project file is incomplete. Expected imports are missing.**" message and the project becoming **unavailable** in VS2017.
**.NET Core 2.2** sdk is supported by both VS2017 and VS2019.
**Installation Steps:**
1. Uninstall all previously installed SDKs from Control Panel
2. Delete the sdk folder from "C:\Program Files\dotnet"
3. Install the .NET Core 2.2 SDK
4. Check the installed SDK by running dotnet --version in an administrative command prompt (if it says dotnet is not supported, then update the **global.json** file at C:\Windows\system32\global.json so it points to the right version of the SDK)
[](https://i.stack.imgur.com/rws2o.png)
5. Restart the computer and reopen VS
List of SDKs: <https://dotnet.microsoft.com/download/visual-studio-sdks>
Upvotes: 0 |
2018/03/22 | 3,907 | 14,961 | <issue_start>username_0: I have included ExoPlayer in an app. The client is requesting that the user watch the full video before closing the screen. I have implemented a Player.EventListener to identify the finished state, but didn't find any callback or a way to stop the user's seeking action. Please help me figure out how to stop seeking in the video.
```
private Player.EventListener exoEventListener = new Player.EventListener() {
@Override
public void onTimelineChanged(Timeline timeline, Object manifest) {}
@Override
public void onTracksChanged(TrackGroupArray trackGroups, TrackSelectionArray trackSelections) {}
@Override
public void onLoadingChanged(boolean isLoading) {}
@Override
public void onPlayerStateChanged(boolean playWhenReady, int playbackState) {
switch(playbackState) {
case Player.STATE_BUFFERING:
break;
case Player.STATE_ENDED:
isVideoFinished = true;
videoSectionListener.onVideoFinished();
break;
case Player.STATE_IDLE:
break;
case Player.STATE_READY:
break;
default:
break;
}
}
@Override
public void onRepeatModeChanged(int repeatMode) {}
@Override
public void onShuffleModeEnabledChanged(boolean shuffleModeEnabled) {}
@Override
public void onPlayerError(ExoPlaybackException error) {}
@Override
public void onPositionDiscontinuity(int reason) {}
@Override
public void onPlaybackParametersChanged(PlaybackParameters playbackParameters) {}
@Override
public void onSeekProcessed() {}
};
```
2018/03/22 | 3,737 | 10,011 | <issue_start>username_0: The app I'm building is a very simple website monitoring tool with Users, Alerts, and Crawls. I created a Rake task to take care of crawling the sites at specified intervals, and it was working fine, saving the crawl history to the db when manually run with `rake crawl_next`.
After integrating logic to check if the crawl is over a user's specified limits or there are errors & then emailing the user, I'm no longer able to save a crawl record to the database. I'm getting a `NoMethodError: undefined method 'clear' for false:FalseClass` when running the rake task, which I can't pinpoint the source of. Based on the console output, I presume it's some failing validation, but I can't determine what validation might be causing it to fail. I'm hoping a more experienced Dev can point me in the right direction.
I've isolated the issue with the saving of the crawl record to the database...I think.
* I've used `pry` to inspect the inputs & variables, and nothing seems amiss.
* I've commented out the `after_save` action on the crawl model to eliminate possible alert model code errors.
* In the Rails console, I've tried manually creating new crawls using both mass assignment & the `.create` method. It fails the same way.
* Using Rails 5.1.5 w/ Ruby 2.5.0p0, c9.io IDE
Console output:
```
** Invoke crawl_next (first_time)
** Invoke environment (first_time)
** Execute environment
** Execute crawl_next
rake aborted!
NoMethodError: undefined method `clear' for false:FalseClass
/usr/local/rvm/gems/ruby-2.5.0/gems/activemodel-5.1.5/lib/active_model/validations.rb:334:in `valid?'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/validations.rb:65:in `valid?'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/validations.rb:82:in `perform_validations'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/validations.rb:44:in `save'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/attribute_methods/dirty.rb:35:in `save'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/transactions.rb:308:in `block (2 levels) in save'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/transactions.rb:384:in `block in with_transaction_returning_status'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/connection_adapters/abstract/database_statements.rb:235:in `block in transaction'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/connection_adapters/abstract/transaction.rb:194:in `block in within_new_transaction'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/connection_adapters/abstract/transaction.rb:191:in `within_new_transaction'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/connection_adapters/abstract/database_statements.rb:235:in `transaction'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/transactions.rb:210:in `transaction'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/transactions.rb:381:in `with_transaction_returning_status'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/transactions.rb:308:in `block in save'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/transactions.rb:323:in `rollback_active_record_state!'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/transactions.rb:307:in `save'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/suppressor.rb:42:in `save'
/home/ubuntu/workspace/web_monitor/lib/tasks/scheduler.rake:19:in `block (2 levels) in '
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/relation/delegation.rb:39:in `each'
/usr/local/rvm/gems/ruby-2.5.0/gems/activerecord-5.1.5/lib/active_record/relation/delegation.rb:39:in `each'
/home/ubuntu/workspace/web_monitor/lib/tasks/scheduler.rake:5:in `block in '
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/task.rb:251:in `block in execute'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/task.rb:251:in `each'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/task.rb:251:in `execute'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/task.rb:195:in `block in invoke_with_call_chain'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/2.5.0/monitor.rb:226:in `mon_synchronize'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/task.rb:188:in `invoke_with_call_chain'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/task.rb:181:in `invoke'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/application.rb:160:in `invoke_task'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/application.rb:116:in `block (2 levels) in top_level'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/application.rb:116:in `each'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/application.rb:116:in `block in top_level'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/application.rb:125:in `run_with_threads'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/application.rb:110:in `top_level'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/application.rb:83:in `block in run'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/application.rb:186:in `standard_exception_handling'
/usr/local/rvm/rubies/ruby-2.5.0/lib/ruby/gems/2.5.0/gems/rake-12.3.0/lib/rake/application.rb:80:in `run'
/usr/local/rvm/gems/ruby-2.5.0@global/gems/rake-12.3.0/exe/rake:27:in `'
/usr/local/rvm/rubies/ruby-2.5.0/bin/rake:29:in `load'
/usr/local/rvm/rubies/ruby-2.5.0/bin/rake:29:in `'
Tasks: TOP => crawl_next
```
lib/tasks/scheduler.rake:
```
desc "perform crawls for active alerts if their interval has passed"
task :crawl_next => :environment do
alerts = Alert.where(active: true).includes(:crawls)
alerts.each do |alert|
last = alert.crawls.last
last_crawl_time = last.crawl_time if !last.nil?
if last.nil? ||
(last_crawl_time + alert.crawl_interval_mins*60) < Time.now + 1
crawl_stats = alert.crawl
end
if crawl_stats
crawl = Crawl.new(crawl_stats)
crawl.save
if crawl.exceeds_limits? || crawl.errors
UserMailer.crawl_alert(alert, crawl).deliver_later
end
else
alert.deactivate
end
end
end
```
crawl.rb model:
```
class Crawl < ActiveRecord::Base
belongs_to :alert
after_save :update_alert_last_crawl
def update_alert_last_crawl
alert = self.alert
alert.update(last_crawl: self.crawl_time)
end
def exceeds_limits?
self.resp_time_ms > self.alert.response_time_threshold_ms
end
def errors
self.resp_code != "200"
end
end
```
alert.rb model:
```
class Alert < ActiveRecord::Base
has_many :user_alerts
has_many :users, through: :user_alerts, dependent: :destroy
has_many :crawls, dependent: :destroy
before_save :activate # TODO: don't activate all alerts before save
validate :valid_url? # Using custom method instead of valid url gem
validates :crawl_interval_mins, presence: true, inclusion: {in: [10, 30, 60]}
validates :notify_emails, presence: true, length: {minimum: 6}
validates :name, presence: true, length: {minimum: 2}
def activate
self.active = true
end
def deactivate
self.active = false
UserMailer.alert_deactivated(self).deliver_later
end
# Return a hash corresponding to a Crawl's schema, to be used in creating
# a new crawl record
def crawl
data = {}
resp = nil
time = Benchmark.measure do
begin
resp = HTTParty.get(self.url)
rescue => e
data = crawl_error_info(e)
end
end
{
alert_id: self.id,
crawl_time: Time.now.to_s,
resp_code: data[:resp_code] || resp.code,
resp_time_ms: data[:resp_time_ms] || time.real * 1_000,
resp_status: data[:resp_status] || resp.message,
resp_size_kb: data[:resp_size_kb] || resp.size # TODO: convert to mb
}
end
def valid_url?
if !self.url.match(/^(((http|https):\/\/|)?[a-z0-9]+([\-\.]{1}[a-z0-9]+)*\.[a-z]{2,6}(:[0-9]{1,5})?(\/.*)?)$/i)
errors.add(:url, "not valid")
end
end
def crawl_error_info(e)
if e.class == SocketError
return {
resp_code: 443,
resp_time_ms: 0,
resp_status: "NAME/SVC NOT KNOWN",
resp_size_kb: 0 # TODO: convert to mb
}
end
end
end
```<issue_comment>username_1: The error is happening in the `crawl_next` task, in this line:
```
crawl.save
```
Then, as you mentioned, it fails inside the Active Model validations in this method, at `errors.clear`:
```
def valid?(context = nil)
current_context, self.validation_context = validation_context, context
errors.clear
run_validations!
ensure
self.validation_context = current_context
end
```
Put a `pry` breakpoint before the `Crawl` save to see what data is being passed into `Crawl.new` and track down the issue.
Upvotes: 0 <issue_comment>username_2: This is not good...
```
def errors
self.resp_code != "200"
end
```
`errors` is a method provided by `ActiveRecord::Base`, and you've overwritten it with your own method that just returns `true` or `false`. When Rails tries to clear down errors using `errors.clear`, it's not expecting to get back a boolean `false`, and the boolean `false` doesn't support `#clear`.
Change the name of your `errors` method to something else, like maybe `bad_response_code?` and then change the line in your rake file to...
```
if crawl.exceeds_limits? || crawl.bad_response_code?
```
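The failure mode generalizes beyond Rails. Here is a small, hypothetical Python analogy (all names invented for illustration): a base class assumes `errors()` returns a list it can `.clear()`, and a subclass that overrides `errors()` to return a boolean breaks that assumption in exactly the same way:

```python
class Model:
    """Framework-style base class; valid() assumes errors() returns a list."""

    def errors(self):
        return []

    def valid(self):
        self.errors().clear()  # a boolean has no .clear() -> AttributeError
        return True


class Crawl(Model):
    def __init__(self, resp_code):
        self.resp_code = resp_code

    def errors(self):  # shadows the framework method with a boolean
        return self.resp_code != "200"


try:
    Crawl("500").valid()
except AttributeError as exc:
    print(exc)  # 'bool' object has no attribute 'clear'
```

Renaming the override, as suggested above, removes the clash with the framework's own method.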
Upvotes: 1 |
2018/03/22 | 1,111 | 3,807 | <issue_start>username_0: I am trying to get a drop down data from a route.
I made the dropdown data a model.
```
create_table "choices", force: :cascade do |t|
t.string "description"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
```
and I seed it as following:
```
Choice.create!(
[
{
description: "Choice 1"
},
{
description: "Choice 2"
}
]
)
```
This drop down model does not have its own controller, it is used by another model's api controller. The model looks like
```
create_table "forms", force: :cascade do |t|
t.text "username"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
```
And this is how "Choice" Model is used in the form's controller
```
class Api::FormsController < ApplicationController
def choices
all_choices = Choice.all.
map {|e| {"label" => e.description, "value" => e.id}}.as_json
respond_to do |format|
format.json { render json: all_choices, status: :ok }
end
end
end
```
I also create a get route for that model
```
constraints(lambda { |req| req.format.symbol.in? [:json, :csv, :xlsx] }) do
post '/login' => 'sessions#create'
namespace :api, defaults: { format: :json } do
get '/forms/choices' => 'forms#choices'
end
end
```
and from my front-end(Using React)
```
getDropdown(){
ajax({
url: `/api/forms/choices.json`,
dataType: 'json',
success: data => {
this.setState({
choices: data
});
},
error: () => {
console.log("Error retrieving data")
}
})
}
```
When I use `console.log(this.state.choices)` to print out the data I got, I got empty object. I am relatively new to rails and understand that it is convention over configuration, so I believe I am missing something small here.
Any help is appreciated.<issue_comment>username_1: The problem is my route. There are other routes before the `choice` route in fact.
```
constraints(lambda { |req| req.format.symbol.in? [:json, :csv, :xlsx] }) do
post '/login' => 'sessions#create'
namespace :api, defaults: { format: :json } do
get '/forms/otherstuff' => 'forms#otherstuff'
get '/forms/:user_id' => 'forms#show' <----
get '/forms/choices' => 'forms#choices' <----
end
end
```
I fixed it by switching the `choice` route with the route above it, just simply switched their places and it worked.
```
constraints(lambda { |req| req.format.symbol.in? [:json, :csv, :xlsx] }) do
post '/login' => 'sessions#create'
namespace :api, defaults: { format: :json } do
get '/forms/otherstuff' => 'forms#otherstuff'
get '/forms/choices' => 'forms#choices' <----
get '/forms/:user_id' => 'forms#show' <----
end
end
```
I have no idea why; I would love to get an explanation of this Rails behavior.
Upvotes: 0 <issue_comment>username_2: The explanation is simple. You have the following route:
```
get '/forms/:user_id' => 'forms#show'
```
So, it will match:
```
/api/forms/1
/api/forms/2
```
But it also matches:
```
/api/forms/otherstuff
/api/forms/some_string
/api/forms/choices
```
In general, you should read your route as:
match `/api/forms` and then, if there is anything else added to the route, treat it as a `:user_id`.
This is described in the Rails documentation:
>
> Rails routes are matched in the order they are specified, so if you have a resources :photos above a get 'photos/poll' the show action's route for the resources line will be matched before the get line. To fix this, move the get line above the resources line so that it is matched first.
>
>
>
Source: <http://guides.rubyonrails.org/routing.html#crud-verbs-and-actions>
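The first-match behavior can be sketched with a tiny dispatcher (hypothetical patterns, not the actual Rails router):

```python
import re

# Routes are tried top to bottom; the first pattern that matches wins.
routes = [
    (r"^/api/forms/(?P<user_id>[^/]+)$", "forms#show"),
    (r"^/api/forms/choices$", "forms#choices"),
]

def dispatch(path):
    for pattern, action in routes:
        if re.match(pattern, path):
            return action
    return None

print(dispatch("/api/forms/choices"))  # forms#show -- shadowed by :user_id
```

Swapping the two entries lets `/api/forms/choices` reach `forms#choices` while `/api/forms/42` still falls through to `forms#show`, which is exactly what reordering the Rails routes achieves.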
Upvotes: 2 [selected_answer] |
2018/03/22 | 1,148 | 3,705 | <issue_start>username_0: I am calling my Java webservice (POST request) via Postman in the following manner which works perfectly fine (i.e. I can see my records getting inserted into the database):
[](https://i.stack.imgur.com/M1D5Y.png)
And, here's how the contents inside the `Headers(1)` tab look like:
[](https://i.stack.imgur.com/uwVrt.png)
Instead of calling it via Postman, I have to call the same request in PHP using cURL. I am wondering if there's a way to export this command to a `curl` command so that I could use it in my PHP code? I have found the opposite approach at many places online where someone is asking to convert a curl based request to Postman but couldn't figure out how to do the opposite.
I found this question for curl to Postman: [Simulate a specific CURL in PostMan](https://stackoverflow.com/questions/27957943/simulate-a-specific-curl-in-postman)<issue_comment>username_1: [](https://i.stack.imgur.com/ziTKS.png)
You can see the button "Code" in the attached screenshot, press it and you can get your code in many different languages including PHP cURL
[](https://i.stack.imgur.com/JKzG4.png)
Upvotes: 11 [selected_answer]<issue_comment>username_2: Starting from `Postman 8` you need to visit here
[](https://i.stack.imgur.com/wX6ht.png)
Upvotes: 7 <issue_comment>username_3: If you're like me running MacOS and still have Postman v7, you have to click the ellipses here to find the code option
[](https://i.stack.imgur.com/92rBv.png)
Upvotes: 3 <issue_comment>username_4: The first answer here works with the older versions of Postman. With the latest releases in 2021 the cURL can be found clicking this icon (circled in red)
[](https://i.stack.imgur.com/6Mld1.jpg)
Upvotes: 5 <issue_comment>username_5: [](https://i.stack.imgur.com/HXFxx.png)To extend the existing answers, there is an option to ***generate PHP ready cURL code*** if needed. With latest Postman, you will find dropdown for **PHP - cURL** and other languages.
See screenshot:
Upvotes: 2 <issue_comment>username_6: Here is a quick video demonstration [postman request to curl](http://youtu.be/L3m6cpQPsV0)
Latest version 8.x:
* Find **Code** symbol on the right side of the postman.
[](https://i.stack.imgur.com/JYk4W.png)
* Select cURL from the dropdown.
[](https://i.stack.imgur.com/zCRju.png)
Upvotes: 3 <issue_comment>username_7: Steps:
* Open Postman
* Open your request
* Select icon from right side bar
* Select cURL from dropdown (if not cURL is there by default)
* Copy the code and use.
Upvotes: 3 <issue_comment>username_8: Selecting the icon from the right sidebar works fine for me in Postman as well.
Upvotes: 2 <issue_comment>username_9: For those who use v7.36.5 for Ubuntu, or similar.
Instead of the "" icon, there is a clickable "Code" text:
[](https://i.stack.imgur.com/v3ZlV.png)
Upvotes: 4 <issue_comment>username_10: For `Postman Version 7.36.5 (7.36.5)`
[](https://i.stack.imgur.com/UItGI.png)
Upvotes: 0 |
2018/03/22 | 1,001 | 3,252 | <issue_start>username_0: Currently we can set all files from a local folder or files patterns using the "pattern" attribute, but we need to upload multiple files from a local folder to blob storage in single command invocation. Is it possible to do this?
For example, the source folder contains many files, and I need to send only selected files in a single invocation, without using a loop. Is there any way to specify the selected file names on the command line as comma-separated values?
Syntax:
```
AzCopy /Source:C:\myfolder /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /S
```
2018/03/22 | 865 | 3,068 | <issue_start>username_0: **Problem**:
I have a function `void myFunc(data)`
I am reading data from database using QSqlQuery:
```
QSqlQuery qry;
if (qry.exec("SELECT data, interval from table"))
{
while(qry.next())
{
// Somehow create and call function: myFunc(int data) periodically with interval = interval
}
}
```
As far as I understand I could use a timer like that:
```
QTimer *timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(myFunc()));
timer->start(interval); //time specified in ms
```
but how can I pass argument `data` to `myFunc` when I create this timer?<issue_comment>username_1: You have lots of options:
* As mentioned in comments store the int data in `this`, maybe in a `std::map` so that when you have multiple timers you can lookup the correct value and call `myFunc` with it. Since you have not stated if `myFunc` is one function or there can be multiple, you may have to store the function as well
* You can make a class which gets the `data` value in the constructor, saves it in a member, you connect a slot of that class to the timer, and from the slot invoke `myFunc()` with the stored value
* You can make a class that inherits from `QTimer` and has the data you need, use that class when creating the timer instead of the plain `QTimer`, and then in the slot `myFunc` you can access that instance via `QObject::sender()`, cast it to your type and do whatever needs to be done.
Upvotes: 1 <issue_comment>username_2: If you use C++11, you can connect your timer to a lambda function in which you capture your data value.
Example (untested):
```
int interval = 500;
int data = 42;
QTimer *timer = new QTimer(this);
connect(timer, &QTimer::timeout, [data] {
/* Implement your logic here */
});
```
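The capture idea is not Qt-specific. As a rough, hypothetical analogy in Python (standard-library `threading.Timer` in place of `QTimer`; names invented for illustration), the data value is bound into the callback at scheduling time:

```python
import threading

results = []

def my_func(data):
    results.append(data)

def schedule(data, interval_s):
    # The lambda captures `data`, much like the C++ capture list [data];
    # each scheduled timer carries its own value.
    t = threading.Timer(interval_s, lambda: my_func(data))
    t.start()
    return t

schedule(42, 0.01).join()  # join() waits until the callback has run
print(results)  # [42]
```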
Upvotes: 4 [selected_answer]<issue_comment>username_3: One more option: have a QObject-derived class that runs the function by calling `QObject::startTimer`. In this same class, use a `QMap<int, int>` where each pair has the timer id as key and the data as value.
A simple implementation:
```
#include <QObject>
#include <QMap>

class TimedExecution : QObject
{
    Q_OBJECT
public:
    TimedExecution() : QObject(0){}
    void addFunction(int data, int interval);

protected:
    void timerEvent(QTimerEvent *event);

private:
    QMap<int, int> map;
};
```
Use the `addFunction` method to create a new timed execution task (the passed interval is assumed to be expressed in seconds, here):
```
void TimedExecution::addFunction(int data, int interval)
{
map.insert(startTimer(interval * 1000), data);
}
```
Start the same function in the overridden `timerEvent` method, passing the data retrieved from the map, using the timer id retrieved from the timer event as the map key:
```
void TimedExecution::timerEvent(QTimerEvent *event)
{
myFunc( map[event->timerId()] );
}
```
Upvotes: 2 <issue_comment>username_4: If you can use Qt5 and C++11 then you can leverage `std::bind`:
for example, assuming `decltype(this)==MyClass*`:
```
connect(timer, &QTimer::timeout, this, std::bind(&MyClass::myFunc,this,data));
```
Upvotes: 2 |
2018/03/22 | 509 | 1,979 | <issue_start>username_0: I am trying to combine interpolation and angular-translate to retrieve lang translations from en -> fr from a couple json files. All the definitions are defined however to display the interpolated string in HTML it looks like this:
```
{{ 'this.section.in.json.' + object.param | translate}}
```
so it'll take the param as a string, find it in the en.json and if the setting is french find the translation in fr.json.
My issue is that Object.param is coming from an API and it has a whitespace in it while the json is structured differently:
```
Need param with no spaces--> "thisString": "this String" <--Object.Param returns this
```
I can define a function in my component to use .replace() and return a new value but there's a lot of different translations to deal with for a lot of different params. Is there a way to use .replace in the interpolation string in the html file? as shown below
```
{{ 'this.section.in.json.' + object.param.replace(*regex*, '') | translate}}
```<issue_comment>username_1: Nope, you can't use those method-functions directly in the interpolation context. But you can chain pipes. Which means that you can write your own pipe for removing those whitespaces at first and then have your translation applied afterwards.
e.g.:
```
{{ 'this.section.in.json.' + object.param | removeWhitespaces | translate}}
```
Here you first remove whitespaces and then the 'cleaned' string gets translated.
Upvotes: 1 <issue_comment>username_2: I would just make a new pipe that strips out white spaces.
Just be sure to register it in your app module.
```
import { Component, Pipe, PipeTransform } from '@angular/core';
@Pipe({ name: 'stripSpaces' })
export class StripSpaces implements PipeTransform {
transform(str: string): any {
return str.replace(/\s/g, '')
}
}
```
Then in your template use this
```
{{ 'this.section.in.json.' + object.param | stripSpaces | translate }}
```
Upvotes: 3 [selected_answer] |
2018/03/22 | 1,072 | 3,571 | <issue_start>username_0: I would like to search for a pattern in bunch of source files. The pattern should act as a marker. If the pattern is found, I would like to process that line by performing a substitution of another string
**For example:**
```
Private const String myTestString = @"VAL15"; // STRING—REPLACE-VAL##
```
Here, I want to search my source file for pattern `STRING—REPLACE-VAL` and then replace `VAL15` with `VAL20` in same.
**Output:**
```
private const String myTestString = @"VAL20"; // STRING—REPLACE-VAL##
```
I tried the command below, but it is not working as expected.
```
sed -i '/,STRING—REPLACE-VAL##/ {; s/,VAL15,/,VAL20,/;}' myTestFile.cpp
```
**Question:** Is it possible to search for `STRING—REPLACE-VAL##` and then search for the matching pattern `@"VAL??"` in the same line and replace 15 with 20?
`sed` makes it easy to search for and replace the same pattern, but I'm not sure whether `sed` can search for one pattern and then replace a different string in the matching line.
Any help would be appreciated. Thanks in advance.<issue_comment>username_1: You were very close :
```
sed -i '/STRING—REPLACE-VAL##/{s/VAL15/VAL20/}' myTestFile.cpp
```
In your original you were trying to replace `",VAL15,"` but this string is not in the line (the commas). Furthermore, the same occurs for your search string `",STRING—REPLACE-VAL##"`.
It also occured to me that the first hyphen between `STRING` and `REPLACE` is an em-dash and not a standard `-`, maybe this is another problem. Make sure that the string is exactly the same. If you are not sure about the dashes, you could use
```
sed -i '/STRING.REPLACE.VAL##/{s/VAL15/VAL20/}' myTestFile.cpp
```
and to answer your question, yes you can try to do multiple matches in the following way:
**`sed`:**
```
sed -i '/STRING.REPLACE.VAL##/{ /@"VAL/ { s/VAL15/VAL20/ } }' myTestFile.cpp
```
**`awk`:**
```
awk '/STRING.REPLACE.VAL##/&&/@"VAL/{sub("VAL15","VAL20")}1' myTestFile.cpp
```
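If it helps to prototype the logic outside the shell, the same marker-guarded substitution can be sketched in Python (a hypothetical helper, not part of the answer's toolchain; the dots in the marker tolerate either a hyphen or an em dash):

```python
import re

MARKER = re.compile(r"STRING.REPLACE.VAL##")  # '.' matches '-' or the em dash

def bump(text, old="VAL15", new="VAL20"):
    out = []
    for line in text.splitlines():
        if MARKER.search(line):                      # address: marker present?
            line = re.sub(r"\b" + re.escape(old) + r"\b", new, line)
        out.append(line)
    return "\n".join(out)

src = 'private const String myTestString = @"VAL15"; // STRING-REPLACE-VAL##'
print(bump(src))  # ... @"VAL20"; // STRING-REPLACE-VAL##
```

The `\b` word boundaries play the same role as `sed`'s `\<` and `\>`: `VAL152` is left untouched.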
Upvotes: 2 <issue_comment>username_2: The search pattern in the attempt includes many superfluous/erroneous commas -- and that is the reason the match was failing. Patterns and searches are delimited by default with the `/` character. (Unless you have a specific reason to do otherwise, it is a good idea to stick with `/` for readability.)
One thing to be careful of is to make sure that your word is exactly matched... and that *VAL15**2*** (for example) is not matched. You can enclose *VAL15* within `\<` and `\>` to match the word boundary.
Also, wrapping parentheses are not required after the match pattern.
```
sed -i '/STRING—REPLACE-VAL##/ s/\<VAL15\>/VAL20/' myTestFile.cpp
```
@username_1 noted that you specified a special em dash character `—` in the match that is non-ascii & suggested ways to work around this -- but we can also use a trick to exactly match what you want and make that aspect stand out more (note: sh/bash assumed as the shell -- note the careful use of quoting).
```
sed -i "/STRING$(printf '\342\200\224')REPLACE-VAL##/ s/\/VAL20/" myTestFile.cpp
```
Or to match hyphen and em dash in both places:
```
emdash=$(printf '\342\200\224')
sed -i "/STRING[-$emdash]REPLACE[-$emdash]VAL##/ s/\/VAL20/" myTestFile.cpp
```
We can expand on this further on this to include the en dash as well -- left as an exercise for the reader. To help with that exercise, here is how I was able to decode your em dash char:
```
echo '—' | od -c
```
Upvotes: 1 <issue_comment>username_3: With GNU sed you could just do
```
sed -i '/STRING—REPLACE-VAL##/s/VAL15/VAL20/' myTestFile.cpp
```
Upvotes: 1 |
2018/03/22 | 341 | 1,202 | <issue_start>username_0: I tried to launch my Cordapp from IDE. When I make request I get following error:
>
> net.corda.core.transactions.MissingContractAttachments: Cannot find
> contract attachments for [com.example.contract.IBuildContract].
>
>
>
I read about this problem here: <https://docs.corda.net/api-contract-constraints.html#debugging>
But the IDE can't find `setExtraCordappPackagesToScan`. How can I set extra packages, or solve this problem in another way, in Corda V3?<issue_comment>username_1: If you're using the node driver to start your nodes, here's an example of setting the extra CorDapp packages to scan:
```
driver(DriverParameters(
isDebug = true,
extraCordappPackagesToScan = listOf("net.corda.vega.contracts", "net.corda.vega.plugin.customserializers"))
) {
TODO("Driver logic.")
}
```
Upvotes: 0 <issue_comment>username_2: Do not add the contract name. I mean, just include up to the package:

**withExtraCordappPackagesToScan(Arrays.asList("com.example.contract"))**

In Java, we include the packages as below:

```
@Before
public void setup() {
    network = new MockNetwork(ImmutableList.of("com.xyz.module1.contract", "com.xyz.module2.contract"));
}
```
Upvotes: 1 |
2018/03/22 | 762 | 2,158 | <issue_start>username_0: I'm trying to create a network graph in NetworkX based on a defeaultdict which contains nodes and edges. When I try to add the edges, I get the following error message:
>
> add\_edge() missing 1 required positional argument: 'v\_of\_edge'
>
>
>
My code is as follows:
```
graph = { "a" : ["c"],
"b" : ["c", "e"],
"c" : ["a", "b", "d", "e"],
"d" : ["c"],
"e" : ["c", "b"],
"f" : []
}
G = nx.Graph()
for k,v in graph.items():
G.add_node(k)
G.add_edge(*v)
nx.draw()
plt.show()
```
I know that `add_edge` takes `(u,v)` argument, where I suppose that `u` is node and `v` is the edges, so instead I tried:
```
G.add_edge(k,v)
```
But that resulted in a new error message saying:
>
> unhashable type: 'list'
>
>
>
I don't know how to proceed, but it leaves me at least one question.
In the second approach, should I somehow access each edge for a given node individually?<issue_comment>username_1: The problem in your code is that `v` is a list in your graph structure definition. The following code will make it work:
```
graph = { "a" : ["c"],
"b" : ["c", "e"],
"c" : ["a", "b", "d", "e"],
"d" : ["c"],
"e" : ["c", "b"],
"f" : []
}
G = nx.Graph()
for k,v in graph.items():
for vv in v:
G.add_edge(k,vv)
nx.draw(G)
plt.show()
```
Based on your definition, you are supposed to loop through the list associated with each node to define the edge.
[](https://i.stack.imgur.com/u7qU2.png)
Upvotes: 4 [selected_answer]<issue_comment>username_2: Here is a fix-up to your code that will make it work:
```
import networkx as nx
graph = {'a': ['c'], 'b': ['c', 'e'], 'c': ['a', 'b', 'd', 'e'], 'd': ['c'], 'e': ['c', 'b'], 'f': []}
G = nx.Graph()
for k,v in graph.items():
G.add_node(k)
for i in v:
G.add_edge(k, i)
nx.draw(G)
import matplotlib.pyplot as plt
plt.show()
```
As you can see, the main issue is that you need to properly add each edge. Since your `v` are lists, you must go item by item in that list to add the edge.
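If you prefer to avoid the explicit nested loop, the adjacency dict can also be flattened into an edge list first and handed to networkx in one call via `G.add_edges_from(edges)` (isolated nodes such as `"f"` still need `G.add_nodes_from(graph)`). A plain-Python sketch of the flattening step:

```python
graph = {"a": ["c"], "b": ["c", "e"], "c": ["a", "b", "d", "e"],
         "d": ["c"], "e": ["c", "b"], "f": []}

# One (u, v) pair per neighbour; keys with empty lists contribute no edges.
edges = [(k, vv) for k, v in graph.items() for vv in v]
print(edges[:3])  # [('a', 'c'), ('b', 'c'), ('b', 'e')]
```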
Upvotes: 1 |
2018/03/22 | 564 | 1,561 | <issue_start>username_0: For example I have the next classes
```
abstract class X(val x: T)
data class Y(val y: String = "y") : X(x = "x")
class Holder(
val value: X
)
```
I try to serialize with gson:
```
val gson = Gson()
val x = Holder(Y())
val json = gson.toJson(x)
```
I've got `{"value":{"x":"x"}}` but I need `{"value":{"x":"x", "y":"y"}}`
2018/03/22 | 2,823 | 9,719 | <issue_start>username_0: I am in the middle of a project where I am working on a survey for the user on a website. The user can either click a "back" button, which takes him to the previous question, or a "next" button, which takes him to the next question.
All of these questions are within the same parent div, separated into their own divs, so they are all on the same page. I am giving each question div within the parent div a hide/show property so that the relevant question shows up based on the user clicking next and back. So if the user clicks next, I will hide the current question div and show the next question div, and vice versa when the user clicks the back button.
HTML
```
### 7%
On a scale of 0-5, where 0 is not at all likely and 5 is extremely likely:
How likely are you to recommend Classic Screenings Cinema to someone else?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
On a scale of 0-5, where 0 is not enjoyed it at all and 5 is enjoyed it a lot:
How much did you enjoy the film/event you watched?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
On a scale of 0-5, where 0 is very unhelpful and 5 is very helpful:
Overall, how helpful were the Classic Screenings staff?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
Did you purchase any food / snacks from the cinema?
Yes
No
On a scale of 0-5, where 0 is very poor and 5 is excellent:
What do you rate the quality of the food / snacks?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
Did you visit the toilet facilities during your visit to Classic Screenings Cinema?
Yes
No
On a scale of 0-5, where 0 is extremely dissatisfied and 5 is extremely satisfied, how would you rate the following:?
Cleanliness of the toilet facilities
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
On a scale of 0-5, where 0 is extremely dissatisfied and 5 is extremely satisfied, how would you rate the following:?
The amount of time it took to purchase your ticket, factoring in any time with queuing?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
With regards to entering the cinema screen and taking your allocated seat, please let us know how strongly you agree with the following statements, with 0 representing strongly disagree and 5 representing strongly agree.
My allocated seat was in great condition
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
The area around my seat was clean and tidy
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
Were there any disruptions that occurred during the viewing of your movie / event?
Yes
No
Please tell us more about the disruption and what staff did to handle this
THANK YOU FOR COMPLETING OUR SURVEY!
------------------------------------
We are very grateful to you for taking the time to complete this survey.
Every couple of months our team reflects on these surveys and we develop strategies on the best ways to deal with areas which could do with improving, as well as ideas on how to further strengthen the positives.
Once again, we thank you for filling in the survey and we hope to see you soon!
BACK
----
NEXT
----
```
js:
```
$(document).ready(function(){
var facilityQ = document.querySelectorAll(".facilities");
var foodChoice = document.querySelectorAll(".snacks");
var backBtn = document.getElementById("backBtn");
var nextBtn = document.getElementById("nextBtn");
var questions = ["one","two","three","four","five","six","seven","eight","nine"];
currentQuestion = 0;
//scrolling to next question
$(nextBtn).click(function(){
currentQuestion++;
if (currentQuestion == questions.length-1){
nextBtn.style.display = "none";
backBtn.style.display = "none";
}
if (currentQuestion >= 1){
backBtn.style.visibility = "visible";
}
if (currentQuestion < questions.length){
document.getElementById(questions[currentQuestion]).style.display = "block";
document.getElementById(questions[currentQuestion-1]).style.display = "none";
}
});
$(foodChoice[0]).click(function(){
$("#food-followUp").slideDown();
})
$(facilityQ[0]).click(function(){
$("#facilities-followUp").slideDown();
})
// scrolling to previous question
$(backBtn).click(function() {
if (currentQuestion > 0) {
document.getElementById(questions[currentQuestion]).style.display = "none";
document.getElementById(questions[currentQuestion - 1]).style.display = "block";
}
currentQuestion--;
if (currentQuestion === 0) {
backBtn.style.visibility = "hidden";
}
  });
});
```
THE PROBLEM:
I have stumbled across a problem which I didn't even consider before laying out the survey this way. I really wanted that single-page survey functionality, but I am struggling to come up with a way to check each radio-button question (to ensure the user has selected an option) before the Next button advances.
Is anyone with a bit more experience able to help me out here? I have been trying to think of a way, but whenever I think I have found one, it raises another problem.
Your time, help and suggestions are very much appreciated!<issue_comment>username_1: Maybe you could concatenate the current question number to the element name so you can check whether that specific group of radio buttons is selected.
```
function getCheckedValue( groupName ) {
var radios = document.getElementsByName( groupName );
for( i = 0; i < radios.length; i++ ) {
if( radios[i].checked ) {
return radios[i].value;
}
}
return false;
}
if ( getCheckedValue( "q" + currentQuestion ) ) {
// there is a radio button selected, proceed
} else {
// ask the user to select a radio button
}
```
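The rule behind `getCheckedValue` (a question counts as answered when at least one of its radios is checked) can also be expressed as a small pure function, which is easy to test without touching the DOM; the names here are illustrative:

```javascript
// "Answered" means at least one option flag in the group is true.
function hasSelection(checkedFlags) {
  return checkedFlags.some(function (checked) { return checked; });
}

console.log(hasSelection([false, false, true])); // true
console.log(hasSelection([false, false]));       // false
```

In the page itself you would build `checkedFlags` from the `.checked` property of the radios in the current question's group.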
(How to check if radio button is checked got from here [In JavaScript, how can I get all radio buttons in the page with a given name?](https://stackoverflow.com/questions/1682964/in-javascript-how-can-i-get-all-radio-buttons-in-the-page-with-a-given-name))
Hope it helps
Upvotes: 2 [selected_answer]<issue_comment>username_2: You could check for **visible** inputs and see if there is **at least 1 checked**
```
$('.class:visible').find('input[type=radio]:checked').val()
```
I have not tested this, but it should at least get you on the right path.
I noticed you had some errors in your HTML as well. I cleaned it up a bit, here ya go!
```
### 7%
On a scale of 0-5, where 0 is not at all likely and 5 is extremely likely:
How likely are you to recommend Classic Screenings Cinema to someone else?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
On a scale of 0-5, where 0 is not enjoyed it at all and 5 is enjoyed it a lot:
How much did you enjoy the film/event you watched?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
On a scale of 0-5, where 0 is very unhelpful and 5 is very helpful:
Overall, how helpful were the Classic Screenings staff?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
Did you purchase any food / snacks from the cinema?
Yes
No
On a scale of 0-5, where 0 is very poor and 5 is excellent:
What do you rate the quality of the food / snacks?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
Did you visit the toilet facilities during your visit to Classic Screenings Cinema?
Yes
No
On a scale of 0-5, where 0 is extremely dissatisfied and 5 is extremely satisfied, how would you rate the following:?
Cleanliness of the toilet facilities
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
On a scale of 0-5, where 0 is extremely dissatisfied and 5 is extremely satisfied, how would you rate the following:?
The amount of time it took to purchase your ticket, factoring in any time with queuing?
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
With regards to entering the cinema screen and taking your allocated seat, please let us know how strongly you agree with
the following statements, with 0 representing strongly disagree and 5 representing strongly agree.
My allocated seat was in great condition
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
The area around my seat was clean and tidy
* 0
*
* 1
*
* 2
*
* 3
*
* 4
*
* 5
*
Were there any disruptions that occurred during the viewing of your movie / event?
Yes
No
Please tell us more about the disruption and what staff did to handle this
THANK YOU FOR COMPLETING OUR SURVEY!
------------------------------------
We are very grateful to you for taking the time to complete this survey.
Every couple of months our team reflects on these surveys and we develop strategies on the best ways to deal with areas
which could do with improving, as well as ideas on how to further strengthen the positives.
Once again, we thank you for filling in the survey and we hope to see you soon!
BACK
----
NEXT
----
```
Upvotes: 1 <issue_comment>username_3: You're almost there. All you have to do is add this right above `currentQuestion++;` and close the `else` after your current code. This validates that something in the current question has been checked before allowing the code to go through. I would suggest also adding something that prompts the user to select an option.
```
var checkinput = document.getElementById(questions[currentQuestion])
    .querySelector("input[type=radio]:checked"); // null when nothing is selected
if (checkinput == null) {
    // nothing selected - prompt the user here
} else {
```
Upvotes: 0 |
2018/03/22 | 929 | 2,936 | <issue_start>username_0: I have created a pure `Python` project in PyCharm and imported `numpy`, `tensorflow` and `opencv` in order to test a small program.
All packages are updated to the latest version. My `Python` version is `3.6.4` and I am running on `Windows x64`. I have browsed through all the solutions on related threads that suggested updating the `NVIDIA` driver, but I have an `Intel` driver.
I am new to `Python`, `Tensorflow` and `Pycharm`.
Here is the logged error:
```
Faulting application name: python.exe, version: 3.6.4150.1013, time stamp: 0x5a38b889
Faulting module name: ucrtbase.dll, version: 10.0.16299.248, time stamp: 0xe71e5dfe
Exception code: 0xc0000409
Fault offset: 0x000000000006b79e
Faulting process ID: 0x4004
Faulting application start time: 0x01d3c1ef8a3d751c
Faulting application path: C:\Users\xtr\Test\TfLayers\Scripts\python.exe
Faulting module path: C:\WINDOWS\System32\ucrtbase.dll
Report ID: e96d98cb-28c9-4340-bcd3-a7033d4b4972
Faulting package full name:
Faulting package-relative application ID:
```<issue_comment>username_1: The problem does not come from PyCharm; if you used any other IDE, the result would be the same. In fact, they all use a package called `pydev` to debug. Your best bet would be to create a brand new Python environment (PyCharm has a function for this) and gradually install packages.
If the solution works and you can find out which package conflicts with `pydev`, it will be most helpful.
Upvotes: 0 <issue_comment>username_2: This was solved by installing pyqt. I installed pyqt with the command (from conda-forge)
```
conda install -c conda-forge pyqt
```
Upvotes: 2 <issue_comment>username_3: In my case it was an obsolete pyqt library. The following worked for me.
>
> conda install -c anaconda pyqt
>
>
>
Upvotes: 1 <issue_comment>username_4: I got the same error and bumped into this question, but for a different reason which I want to present in case somebody else faces the same situation. I already had `pyqt` installed as mentioned in other answers. But, just to be sure and according to the unspoken "did you try restarting" principle, I reinstalled `pyqt` in my `conda` env alongside `PyQt5` (and also `PyQt6`) - both PyQts installed via `pip`.
No idea why and sadly I do not have the time to debug it, but `PyQt6` was the problematic part in my code (which I am trying for the 1st time). My minimal test code is as follows and includes two subversions denoted by `v1` and `v2`. I hope that part is obvious:
```
from PyQt6.QtWidgets import QApplication, QWidget # v1
# from PyQt5.QtWidgets import QApplication, QWidget # v2
import sys
q = QApplication(sys.argv)
w = QWidget()
w.show() # in debug mode, ran ok till here (?)
sys.exit(q.exec()) # for `v1` from above
# sys.exit(q.exec_()) # for `v2` from above
```
The code variation `v2` is working while `v1` throws the same error mentioned by the OP.
Upvotes: 0 |
2018/03/22 | 248 | 965 | <issue_start>username_0: I'm looking for an option to push my remote configuration to origin.
Locally I have 2 remote sources I work with. I would like to push this information to our origin so that a fresh clone from origin will contain both remotes. Is that possible?
local repository my\_repo:
```
git remote add remote2 http://some-url.git
# magic command here
```
fresh repository my\_repo:
```
git clone my_repo
cd my_repo
git fetch remote2
```<issue_comment>username_1: I don't think Git has any option to add remotes automatically.
You could commit a small setup script to the repository that adds the remote (remote configuration lives in the local `.git/config`, which is never pushed), or you could simply ask your repository's collaborators to add the new remote manually. It's very straightforward:
```
git remote add remote2 http://some-url.git
git fetch remote2
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Git has no built-in functionality to push its configuration between repositories.
Upvotes: 1 |
2018/03/22 | 200 | 837 | <issue_start>username_0: My application allows users direct access to DynamoDB from their browser. They can view and edit any record for which they know the partition key (a UUID).
My problem is that users can create new records by 'editing' a non-existent partition key. Is there a way to use an IAM policy to prevent that?
2018/03/22 | 768 | 3,207 | <issue_start>username_0: I have got code that reads the data from the array perfectly when I use an AJAX request. When I push an object to the array, however, ng-repeat doesn't render the new row, and I have to refresh the page to fetch the data that was sent to the server.
Why does it do this?
Thanks
Javascript
----------
```
function processError() {
var responseCode = 404;
var error = {};
error["client"] = document.getElementById('client').value;
error["errorMessage"] = document.getElementById('error-message').value;
error["simpleRes"] = document.getElementById('simple-fix').value;
error["fullRes"] = document.getElementById('full-fix').value;
error["reason"] = document.getElementById('reason').value;
var errorJson = JSON.stringify(error);
$.ajax({
url: "../ErrorChecker/rest/error",
type: "POST",
data: errorJson,
contentType: "application/json"
})
.done(function (data, statusText, xhr, displayMessage) {
$('.form').hide();
responseCode = xhr.status;
reloadData(data);
});
function reloadData(data) {
if (responseCode == 200) {
processPositiveResponse(data);
} else {
$('#negative-message').show(1000);
}
}
}
function processPositiveResponse(data) {
$('#positive-message').show(1000);
updateTable(data);
$('#errorTable').DataTable().destroy();
setupTable();
clearInputs();
console.log($scope.controller.errors);
}
function updateTable(data) {
$scope.controller.errors.push({
"id": data,
"client": document.getElementById('client').value,
"errorMessage": document.getElementById('error-message').value,
"simpleRes": document.getElementById('simple-fix').value,
"fullRes": document.getElementById('full-fix').value,
"reason": document.getElementById('reason').value
})
}
```
HTML
----
```
| | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
|
| {{ x.id }} | {{ x.client }} | {{ x.errorMessage }} | {{ x.simpleRes }} | {{ x.fullRes }} | {{ x.reason }} |
```<issue_comment>username_1: That's because you're using jQuery and Angular together. Don't do that. EVER. Angular is not aware of what jQuery is doing, and jQuery is not aware of what Angular is generating in the DOM. Solution : REMOVE jQuery and use Angular's own `$http` service.
The same way, don't use `document.getElementById('full-fix').value`. You're taking Angular backwards. Angular *generates* the DOM from data, so you don't need to *select* DOM elements to read their value because that value is already in your data.
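To make the "data first" idea concrete, here is a tiny plain-JavaScript sketch (framework-free; all names are illustrative): keep the form values in an object and push that object into the array the view renders from, instead of re-reading the DOM:

```javascript
var errors = [];                                    // what ng-repeat would iterate over
var form = { client: "ACME", errorMessage: "404" }; // what ng-model would keep in sync

function addError(list, data, id) {
  // copy the form data and attach the id returned by the server
  list.push(Object.assign({ id: id }, data));
}

addError(errors, form, 1);
console.log(errors.length);    // 1
console.log(errors[0].client); // ACME
```

With `$http` doing the POST inside Angular's digest cycle, a push like this is picked up by `ng-repeat` automatically, with no manual table rebuild.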
Upvotes: 4 [selected_answer]<issue_comment>username_2: Update the document. Use `$scope.$apply()` as a quick fix. It's not the way to do it imo, but it will solve your problem fast. I'm on my mobile; if I do not forget, I'll update this comment later with more details and best practices. For now a Google search for scope apply will help you a long way.
Upvotes: -1 |
2018/03/22 | 1,346 | 5,046 | <issue_start>username_0: I am working on a program where I use `Tkinter` for the UI. I am writing code to play an audio file repeatedly, using `pygame.mixer.music` for playback.
In the UI I created two buttons ("Start" and "Stop"). I attached a method containing the loop to the Start button, so that when the Start button is pressed the loop runs and plays the audio repeatedly. Now I don't know how to attach the Stop button: when it is pressed, control should exit the loop. Can I use interrupts or something like that? I am totally new to the concept of interrupts. To proceed with that, help me with what kind of interrupt to use, what library it needs, etc. Otherwise, please help me with how to proceed with the stop button.
Here is my code:
```
from pygame import *
from Tkinter import *
import time
root = Tk()
root.geometry("1000x200")
root.title("sampler")
m=1
n=1
mixer.init()
def play():
while m==1:
print 'playing'
mixer.music.load('audio 1.mp3')
mixer.music.play()
time.sleep(n)
start = Button(root, text="play", command = play)
start.pack()
stop = Button(root, text="Stop")
stop.pack()
mainloop()
```
`n` defines how long the audio should be played for each loop.<issue_comment>username_1: You need to add a command to your button... `stop = Button(root, text="Stop", command=stop)`
Upvotes: 0 <issue_comment>username_2: Just adding a stop command probably won't work, because with the way your infinite loop is structured you can't interact with the tkinter interface while play is clicked. Try restructuring your program like this:
```
from Tkinter import *
from pygame import *
import time
import threading
switch = True
root = Tk()
n = 1
mixer.init()
root.geometry("1000x200")
root.title("sampler")
def play():
def run():
while switch:
print 'playing'
mixer.music.load('audio 1.mp3')
mixer.music.play()
time.sleep(n)
if not switch:
break
thread = threading.Thread(target=run)
thread.start()
def switch_on():
global switch
switch = True
play()
def switch_off():
global switch
switch = False
def kill():
root.destroy()
onbutton = Button(root, text="Play", command=switch_on)
onbutton.pack()
offbutton = Button(root, text="Stop", command=switch_off)
offbutton.pack()
killbutton = Button(root, text="Kill", command=kill)
killbutton.pack()
root.mainloop()
```
This way the tkinter buttons are running on separate threads, so while one is looping you can still interact with the other.
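The start/stop flag idea used above can be exercised without any GUI at all. Here is a minimal headless sketch using `threading.Event` (all names are illustrative; `ticks.append` stands in for playing one clip):

```python
import threading
import time

stop_event = threading.Event()
ticks = []

def worker():
    # keep "playing" until the Stop button's callback sets the event
    while not stop_event.is_set():
        ticks.append(time.time())  # stands in for mixer.music.play()
        stop_event.wait(0.05)      # like time.sleep, but wakes early on stop

t = threading.Thread(target=worker)
t.start()          # what the Play button would trigger
time.sleep(0.2)
stop_event.set()   # what the Stop button would trigger
t.join()
print(len(ticks) > 0)  # True
```

`Event.wait(timeout)` is preferable to a plain `sleep` here because the worker reacts to Stop immediately instead of finishing its nap first.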
Upvotes: 0 <issue_comment>username_3: Python doesn't exactly support interrupts; the closest thing would probably be some sort of signal handler, which is supported via its [`signal`](https://docs.python.org/2/library/signal.html#module-signal) library. However, signal handlers may not work well with Tkinter (or `pygame`), so I don't think that would be a good approach—and they're not really necessary anyway, because what you want to do can be handled within Tkinter's `mainloop()`.
Although it may seem somewhat complex, the way I would suggest implementing it would be to encapsulate most of the playing-control functionality within a single Python `class`. This will reduce the use of global variables, which will make the program easier to debug and develop further (because of the many advantages of Object-Oriented Programming, a.k.a. [**OOP**](https://en.wikipedia.org/wiki/Object-oriented_programming)).
Below illustrates what I mean. Note, I'm using Python 3, so had to make a few additional changes to your code in order for it would work with that version. I'm not sure, but this version ought to work in Python 2, as well, except you'll need to change the `import` of the Tkinter module as indicated.
```
from pygame import *
from tkinter import * # Change to "from Tkinter import *" for Python 2.x.
class PlayController(object):
def __init__(self, mixer, music_filename, polling_delay):
self.mixer = mixer
self.music_filename = music_filename
self.polling_delay = polling_delay # In milliseconds.
self.playing = False
def play(self):
if self.playing:
self.stop()
self.mixer.music.load(self.music_filename)
self.mixer.music.play(-1) # -1 means to loop indefinitely.
self.playing = True
root.after(self.polling_delay, self.check_status) # Start playing check.
def stop(self):
if self.playing:
self.mixer.music.stop()
self.playing = False
def check_status(self):
if self.playing:
print('playing')
root.after(self.polling_delay, self.check_status) # Repeat after delay.
root = Tk()
root.geometry("1000x200")
root.title("Sampler")
mixer.init()
play_control = PlayController(mixer, 'tone.wav', 1000)
Button(root, text="Play", command=play_control.play).pack()
Button(root, text="Stop", command=play_control.stop).pack()
mainloop()
```
Upvotes: 1 |
2018/03/22 | 855 | 2,735 | <issue_start>username_0: I want to read data from a CSV file in Java and then put this data into a list. The data in the CSV is put into rows which looks like:
Data, 32, 4.3
Month, May2, May 5
The code I have currently only prints the [32].
```
ArrayList myList = new ArrayList();
Scanner scanner = new Scanner(new File("\\C:\\Users\\Book1.csv\\"));
scanner.useDelimiter(",");
while(scanner.hasNext()){
myList.add(scanner.next());
for (int i = 0; i <= myList.size(); i++) {
System.out.println(myList.toString());
}
scanner.close();
}
```<issue_comment>username_1: Maybe this code can help you; it is a bit different from yours, since you use an ArrayList while I use a regular array.
**Example of the data:**
* Farhan,3.84,4,72
* Rajab,2.98,4,72
* Agil,2.72,4,72
* Alpin,3.11,4,73
* Mono,3,6,118 K
* imel,3.97,7,132
* Rano,2.12,6,110
* username_1,4,1,22
**Reading each comma-separated row of the CSV file into the fields of an array element**
```
      // Mahasiswa is a simple data class with fields: nama, ipk, sem, sks
      int tmp = 0;
      int i = 0;   // counts how many rows were read
      String read;
      Mahasiswa[] mhs = new Mahasiswa[100];
      BufferedReader ins;
try {
ins = new BufferedReader(new FileReader("src/file.csv"));
tmp = 0;
while ((read = ins.readLine()) != null) {
String[] siswa = read.split(",");
mhs[tmp] = new Mahasiswa();
mhs[tmp].nama = siswa[0];
mhs[tmp].ipk = Float.parseFloat(siswa[1]);
mhs[tmp].sem = Integer.parseInt(siswa[2]);
mhs[tmp].sks = Integer.parseInt(siswa[3]);
tmp++;
i++;
}
ins.close();
} catch (IOException e) {
System.out.println("Terdapat Masalah: " + e);
}
```
**Print the array data**
```
tmp = 0;
while (tmp < i) {
System.out.println(mhs[tmp].nama + "\t\t" +
mhs[tmp].ipk + "\t\t" +
mhs[tmp].sem + "\t\t" +
mhs[tmp].sks);
tmp++;
}
```
Upvotes: 2 <issue_comment>username_2: ```
ArrayList myList = new ArrayList();
try (Scanner scanner = new Scanner(new File("C:\\Users\\Book1.csv"))) {
        // your original path had backslashes at the front and end of the string,
        // which was the main reason you were not able to read the csv file
scanner.useDelimiter(",");
while (scanner.hasNext()) {
myList.add(scanner.next());
}
for (int i = 0; i < myList.size(); i++) { //remember index is always equal to "length - 1"
System.out.println(myList);
}
} catch (Exception e) {
e.printStackTrace();
}
```
you also did not handle the `FileNotFoundException`
Hope this helps:)
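If you want one list entry per CSV row (rather than one entry per field, which is what a comma delimiter on `Scanner` tends to give you), here is a self-contained sketch of the row-oriented approach; the hard-coded lines stand in for the file's contents:

```java
import java.util.ArrayList;
import java.util.List;

public class CsvRows {
    public static void main(String[] args) {
        // Stand-ins for lines read from Book1.csv
        String[] lines = { "Data, 32, 4.3", "Month, May2, May 5" };

        List<String[]> rows = new ArrayList<>();
        for (String line : lines) {
            String[] fields = line.split(",");
            for (int i = 0; i < fields.length; i++) {
                fields[i] = fields[i].trim(); // drop the space after each comma
            }
            rows.add(fields);
        }

        System.out.println(rows.size());    // 2
        System.out.println(rows.get(0)[1]); // 32
    }
}
```

In the real program you would fill `lines` with `BufferedReader.readLine()` calls instead of the hard-coded array.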
Upvotes: 0 |
2018/03/22 | 378 | 1,466 | <issue_start>username_0: I am using NodeJS with Express and the PUG view engine.
I am attempting to check if an array contains a certain string. I have tried the builtin javascript methods such as:
```
array.includes(str)
array.indexOf(str) > -1.
```
However neither of these options seem to work. How can I check if an array contains a certain string in PUG?
```
if example_array.length > 0
span() Array contains elements! // This works and appears on page
if example_array.includes('example string') // This does not work
span() Array includes example string! // This does not appear on page
```<issue_comment>username_1: If you want to run inline JS in your template you have to mark your code as unbuffered.
<https://pugjs.org/language/code.html#unbuffered-code>
```
if example_array.length > 0
span() Array contains elements!
- if (example_array.includes('example string'))
span() Array includes example string!
```
*Note the "-" in front of the "if" on line 3.*
Since this is a literal JS expression now, the parentheses are required as well.
Upvotes: 5 [selected_answer]<issue_comment>username_2: This kept me busy for hours so I wanted to share my own solution. While the answer is correct when checking for a string directly, remember you may need to add `.toString()` if you're checking from a variable:
```
- if (example_array.includes(myVariable.toString()))
```
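The reason `.toString()` matters is that `Array.prototype.includes` uses strict (SameValueZero) comparison, so a number never matches an array of strings. A quick Node sketch:

```javascript
var ids = ['1', '2', '3'];
var myVariable = 2;

console.log(ids.includes(myVariable));            // false - number vs string
console.log(ids.includes(myVariable.toString())); // true
```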
Hopefully this saves someone some time!
Upvotes: 1 |
2018/03/22 | 486 | 2,049 | <issue_start>username_0: I am working on a react-native based project where Google made it frustratingly hard to implement their OAuth without using an SDK.
First of all, WebView is not allowed for OAuth. So I attempted to implement it through a normal browser / Chrome tab where the redirect is my app id, like `com.myCompany.myApp`, but when I add this as an allowed OAuth redirect URL I get an error in the Google Cloud console that it is not a valid redirect URI for a web-based project.
I then tried to set it up with Firebase dynamic links, but when I save changes it errors with `Request contains an invalid argument.`<issue_comment>username_1: You can't use it because your app id is not a valid [URI](https://en.wikipedia.org/wiki/Uniform_Resource_Identifier)
>
> This is the path in your application that users are redirected to after they have authenticated with Google. The path will be appended with the authorization code for access. Must have a protocol. Cannot contain URL fragments or relative paths. Cannot be a public IP address.
>
>
>
You can't redirect to an app id; there is no way of knowing where the redirect should go. The following are valid redirect URIs:
```
http://test.example.com/index.php
http://localhost:38898/signin-google
```
Upvotes: 1 <issue_comment>username_2: Different types of OAuth keys have different redirect requirements and best practices. They are too numerous to name them all, so please see this [documentation](https://developers.google.com/identity/protocols/OAuth2InstalledApp).
In general, if it's an installed app or a mobile app, the redirect URI isn't as important because you are just redirecting back to the app. However for web based app OAuth keys, you will need to be very careful with the redirect because it carries your access token. You can read more about [redirect URIs](https://www.oauth.com/oauth2-servers/redirect-uris/) and see what would qualify as a proper URI.
Basically Google makes sure your URI seems ok for your OAuth key type, otherwise it will reject it.
Upvotes: 3 [selected_answer] |
2018/03/22 | 681 | 2,548 | <issue_start>username_0: I was trying to retrieve weather data through the OpenWeather API, but I was not able to retrieve the weather forecast.
```
import json,requests,urllib.parse
url='https://api.openweathermap.org/data/2.5/forecast?'
while True:
APPID='xxxxxxxxxxxx'
id='1283378'
main_url=url +urllib.parse.urlencode({'id':id}) +'&APPID=xxxxxxxxx'
response=requests.get(main_url).json()
#print(main_url)
status=response['cod']
print(status + '\n')
inc=0
if status =="200":
for i in range(5):
print(i)
for each in response["list"][i]["weather"]:
print(each["main"])
date= response["list"][i]["dt_txt"]
print(date)
```
How could I increment the value of `i` so that the forecast data can be retrieved? `x+=` and `range` don't help here, as `x+=` can only increment one value and `range` cannot be given as an input one by one.
Output of the above program is:
```none
200
0
1
2
3
4
Clouds
2018-03-23 06:00:00
```
2018/03/22 | 893 | 2,306 | <issue_start>username_0: I have the following AVRO message in a Kafka topic.
```
{
"table": {
"string": "Schema.xDEAL"
},
"op_type": {
"string": "Insert"
},
"op_ts": {
"string": "2018-03-16 09:03:25.000462"
},
"current_ts": {
"string": "2018-03-16 10:03:37.778000"
},
"pos": {
"string": "00000000000000010722"
},
"before": null,
"after": {
"row": {
"DEA_PID_DEAL": {
"string": "AAAAAAAA"
},
"DEA_NME_DEAL": {
"string": "MY OGG DEAL"
},
"DEA_NME_ALIAS_NAME": {
"string": "MY OGG DEAL"
},
"DEA_NUM_DEAL_CNTL": {
"string": "4swb6zs4"
}
}
}
}
```
When I run the following query, it creates the stream with null values.
```
CREATE STREAM tls_deal (DEA_PID_DEAL VARCHAR, DEA_NME_DEAL varchar, DEA_NME_ALIAS_NAME VARCHAR, DEA_NUM_DEAL_CNTL VARCHAR) WITH (kafka_topic='deal-ogg-topic',value_format='AVRO', key = 'DEA_PID_DEAL');
```
But when I change the AVRO message to the following, it works.
```
{
"table": {
"string": "Schema.xDEAL"
},
"op_type": {
"string": "Insert"
},
"op_ts": {
"string": "2018-03-16 09:03:25.000462"
},
"current_ts": {
"string": "2018-03-16 10:03:37.778000"
},
"pos": {
"string": "00000000000000010722"
},
"DEA_PID_DEAL": {
"string": "AAAAAAAA"
},
"DEA_NME_DEAL": {
"string": "MY OGG DEAL"
},
"DEA_NME_ALIAS_NAME": {
"string": "MY OGG DEAL"
},
"DEA_NUM_DEAL_CNTL": {
"string": "4swb6zs4"
}
}
```
Now if I run the above query, the data is populated.
My question is: if I need to populate a stream from a nested field, how can I handle this?
I am not able to find a solution in the KSQL documentation.
Thanks in advance. I appreciate the help. :)<issue_comment>username_1: KSQL does not currently (22 Mar 2018 / v0.5) support nested Avro. You can use a Single Message Transform (SMT) to flatten the data coming from Kafka Connect. For example, Debezium ships with `UnwrapFromEnvelope`.
Upvotes: 1 <issue_comment>username_2: As Robin states, this is not *currently* supported (22 Mar 2018 / v0.5). However, it is a tracked feature request. You may want to up-vote or track this GitHub issue in the KSQL repo:
<https://github.com/confluentinc/ksql/issues/638>
Upvotes: 3 [selected_answer] |
2018/03/22 | 281 | 1,230 | <issue_start>username_0: I need a Java/J2EE program that will load an updated class file while the server is still running. In our web-application testing environment a few classes (Java files) are frequently changed to fix defects and need to be loaded fresh into the JVM.
We cannot restart the server, as that would impact ongoing testing.
Can we write a program to load it using a class loader?<issue_comment>username_1: No, you cannot; you would need support from the container on the server side. Servers such as Tomcat, JBoss, WebSphere, etc. can be configured to hot-deploy, but they still need to reload once the new class file is placed. If you are trying to hot-swap a class file, you can try using JRebel; it will allow you to hot-deploy the class file without actually restarting, because it is managed within the JRebel container (also note that JRebel is not free).
Upvotes: 0 <issue_comment>username_2: There are quite a few solutions/plugins which help with hot deployment of Java code, saving you a complete build and server restart. I have used JRebel, available at <https://www.jrebel.com/products/jrebel>. There are other such solutions/tools that can help with hot deployment.
Upvotes: -1 |
2018/03/22 | 1,273 | 4,890 | <issue_start>username_0: How can I perform the proper comparison check in the following code?
```
enum Location {
case ontop
case inside
case underneath
}
struct Item {
var location: Location?
func checkStuff(currentLocation: Location?) {
if case .ontop = currentLocation {
// DO SOME STUFF
}
}
}
// currentLocation is optional, and initially nil
var currentLocation: Location?
var item1 = Item(location: .ontop)
item1.checkStuff(currentLocation: currentLocation)
currentLocation = item1.location
var item2 = Item(location: .inside)
item2.checkStuff(currentLocation: currentLocation)
```
So there is a struct, one of whose properties is an enum `Location`, so that it can only have one of three values.
The struct has a method that takes action if an instance's `location` property is the same as an externally provided value of the same type that is the current status of the Location (from another instance of the same object type).
I cannot get the correct syntax to get into the right section of the `if` statement.
I have also tried unwrapping the optional `currentLocation`:
```
if let tempCurrentLocation = currentLocation {
if case tempCurrentLocation = Location.ontop {
print("Currently ontop")
location = .ontop
} else {
print("Currently NOT ontop")
location = .inside
}
} else {
print("Not able to unwrap enum...")
}
```<issue_comment>username_1: It's important to note that `currentLocation` is not a `Location`, it's a `Location?` (a.k.a. `Optional`). So you have to pattern match against the cases of `Optional` first, not of `Location`. Then, within the patterns of `Optional`'s cases, you can match against the various cases of `Location`.
Here is the progression of syntactic sugar, starting with the most verbose, and arriving at the most succinct, common way of doing this:
* `if case Optional.some(.ontop) = currentLocation { ... }`
* `if case .some(.ontop) = currentLocation { ... }`
* And finally, the preferred way: `if case .ontop? = currentLocation { ... }`
`if case` is only really ideal if you want to check for a very small subset of a large set of cases. If you need to check multiple cases, it's better to use a `switch`. The case patterns are the same:
```
switch currentLocation {
case .ontop?: // ...
case .inside?: // ...
case .underneath?: // ...
case nil: // ...
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try using a `switch` statement:
```
switch self.currentLocation {
case .ontop?:
break
case .inside?:
break
case .underneath?:
break
case nil:
// In this case, `self.currentLocation` is `nil` rather than one of the cases of `Location`
break
}
```
You need to have a `?` after each of the cases in the switch statement, because `self.currentLocation` is a `Location?`. The values in the switch cases have to match the type of `self.currentLocation`. By writing simply `.ontop`, `.inside`, etc., they would be of type `Location`, whereas writing `.ontop?`, `.inside?`, etc., makes their type `Location?`, which is the correct type.
Upvotes: 2 <issue_comment>username_3: You could write it gracefully like this; it is better to use a `switch` statement than `if` conditions here.
```
enum Location {
case ontop
case inside
case underneath
}
struct Item {
var location: Location?
func checkStuff(currentLocation: Location?) {
switch currentLocation {
case .ontop?:
print("on top")
case .inside?:
print("inside")
case .underneath?:
print("underneath")
case .none:
print("Location is nil")
}
}
}
```
Upvotes: 2 <issue_comment>username_4: If you want a single `if` test with your `Location?` parameter, you can either add a `?`:
```
func checkStuff(currentLocation: Location?) {
if case .ontop? = currentLocation {
// DO SOME STUFF
}
}
```
Or just use `==` test (which can be used to compare an optional to a literal):
```
func checkStuff(currentLocation: Location?) {
if currentLocation == .ontop {
// DO SOME STUFF
}
}
```
I think this latter approach leads to more natural looking code, but the `if case` approach is really valuable if your enum had associated values (which it doesn't).
Clearly, if you wanted tests for the other `Location` enum possibilities, then you'd use a `switch` statement (again with `?` after the various cases), as suggested by others:
```
func checkStuff(currentLocation: Location?) {
switch currentLocation {
case .ontop?:
// DO SOME STUFF IF ONTOP
case .inside?:
// DO SOME OTHER STUFF IF INSIDE
case .underneath?:
// DO YET SOME OTHER STUFF IF UNDERNEATH
case nil:
// DO YET SOME OTHER STUFF IF NIL
}
}
```
Upvotes: 0 |
2018/03/22 | 1,043 | 3,677 | <issue_start>username_0: My component has the following computed code:
```
textButton() {
const key = this.$root.feature.name === //...
// ...
},
```
Right now I'm trying desperately to mock "root" in my test, but I just don't know how. Any tips?<issue_comment>username_1: You may use a Vue Plugin inside the test to inject the mock data into `localVue` so that your components can access it.
```
import {createLocalVue, shallow} from '@vue/test-utils';
const localVue = createLocalVue();
localVue.use((Vue) => {
Vue.prototype.feature = {
name: 'Fake news!'
};
});
let wrapper = shallow(Component, {
localVue
});
```
---
I had the same issues a while ago and I came to the conclusion that accessing `this.$root` may be a sign that you have to further improve the way you communicate with components. Consider using the plugin structure to define globally available properties and methods *not only inside the test* for example. Mixins might be helpful as well.
* <https://v2.vuejs.org/v2/guide/plugins.html>
* <https://v2.vuejs.org/v2/api/#mixins>
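A minimal sketch of that plugin idea (the names here are my own, not from the question): install the shared object once so components read `this.$feature` instead of `this.$root.feature`.

```javascript
// Hypothetical Vue 2-style plugin: installs a shared `feature` object
// on the component prototype so it is available app-wide (and in tests
// via localVue.use), instead of being read from this.$root.
const FeaturePlugin = {
  install(Vue, options) {
    Vue.prototype.$feature = options.feature;
  }
};

// Usage in the app (and with createLocalVue() in tests):
// Vue.use(FeaturePlugin, { feature: { name: 'search' } });
```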
Upvotes: 2 <issue_comment>username_2: Vue test utils provides you with the ability to inject mocks when you mount (or shallow mount) your component.
```js
const $root = 'some test value or jasmine spy'
let wrapper = shallow(ComponentToTest, {
mocks: { $root }
})
```
That should then be easily testable. Hope that helps
Upvotes: 3 <issue_comment>username_3: There are two ways to accomplish this with `vue-test-utils`.
One way, as mentioned above, is using the `mocks` mounting option.
```
const wrapper = shallowMount(Foo, {
mocks: {
$root: {
feature: {
name: "Some Val"
}
}
}
})
```
But in your case, you probably want to use the computed mounting option, which is a bit cleaner than a deep object in `mocks`.
```
const wrapper = shallowMount(Foo, {
computed: {
textButton: () => "Some Value"
}
})
```
Hopefully this helps!
If you are interested, I am compiling a collection of simple guides on how to test Vue components [here](https://github.com/username_3/vue-testing-handbook). It's under development, but feel free to open an issue if you need help with other things related to testing Vue components.
Upvotes: 2 <issue_comment>username_4: Solution from <https://github.com/vuejs/vue-test-utils/issues/481#issuecomment-423716430>:
You can set $root on the vm directly:
```
wrapper.vm.$root = { loading: true }
wrapper.vm.$forceUpdate()
```
Or you can pass in a parent component with the `parentComponent` mounting option. In VTU, the parent will be the `$root`:
```
const Parent = {
data() {
return {
loading: "asdas"
}
}
}
const wrapper = shallowMount(TestComponent, {
parentComponent: Parent
})
```
Upvotes: 2 <issue_comment>username_5: For people having the same problem in Vue 3 and [Vue Test Utils v2](https://github.com/vuejs/test-utils):
I recommend wrapping `$root` accesses with component computed variables. So, replace this:
```
methods: {
textButton() {
const key = this.$root.feature.name === //...
// ...
},
}
```
with this:
```
computed: {
feature() {
return this.$root.feature;
}
},
methods: {
textButton() {
const key = this.feature.name === //...
// ...
},
}
```
And then, in test implementation, mock this computed variable **before** mounting:
```
import { shallowMount } from '@vue/test-utils';
import MyComponent from '';
MyComponent.computed.feature = jest.fn(() => ({ name: 'random feature name' }));
const wrapper = shallowMount(MyComponent);
...
```
This way you won't need to mock `$root` anymore.
Upvotes: 0 |
2018/03/22 | 1,367 | 4,673 | <issue_start>username_0: Using Knockout JS:
I have a requirement as follows.
I have a table with 2 static columns, each containing a text-box. I also have an add-row button outside the table and a remove-row button on each row.
When the user clicks add row, a new row is added to the table with a textbox in each of the two columns. The user can add more rows by clicking add row, and remove a row by clicking its remove row button.
This has all been set up and works fine, as shown here:
<https://jsfiddle.net/aman1981/xmd1xobm/14/>
My issue is that there is also a Get Columns button. When the user clicks this button, I fetch a list of columns and want to add these columns, with headers, between the already existing columns of the table. Each of these columns also needs to have a textbox.
For example, I would get the list of columns as:
```
var columnsList = 'name, address, zip';
```
I am not sure how to add columns dynamically. Would appreciate inputs.
Here is the setup of my table:
```
| SIID | Comment |
| --- | --- |
| | |
```
Anyone?<issue_comment>username_1: Luckily for you, I have a dynamic table component from a previous project:
```js
var observable = ko.observable;
var pureComputed = ko.pureComputed;
var observableArray = ko.observableArray;
var unwrap = ko.unwrap;
var data = observableArray([{
a: 123,
b: 234,
c: 345
},{
a: 1231,
b: 2341,
c: 3451
},{
a: 1232,
b: 2342,
c: 3425
},{
a: 1233,
b: 2343,
c: 3453
}]);
var columns = observableArray([{
field: "a",
displayName: "A"
},{
field: "b",
displayName: "B (i can even change the title)"
}])
function viewModel(params) {
var paramColumns = params.columns;
var paramData = params.data;
var paramFieldKey = params.fieldKey || "field";
var paramDisplayNameKey = params.displayNameKey || "displayName";
var koColumnHeaders = pureComputed(function () {
var columns = paramColumns();
var columnHeaders = [];
var fieldKey = unwrap(paramFieldKey);
var displayNameKey = unwrap(paramDisplayNameKey);
for (var i in columns) {
var column = columns[i];
columnHeaders.push({
field: column[fieldKey],
displayName: column[displayNameKey]
})
}
return columnHeaders;
})
var koRows = pureComputed(function () {
var data = paramData();
var columns = paramColumns();
var fieldKey = unwrap(paramFieldKey);
var rows = [];
for (var i in data) {
var datum = data[i];
var cells = []
var row = {
entity: data,
cells: cells
};
for (var j in columns) {
var column = columns[j];
cells.push(datum[column[fieldKey]] || "");
}
rows.push(row);
}
return rows;
});
return {
rows: koRows,
columns: koColumnHeaders,
}
}
ko.applyBindings(new viewModel({
data: data,
columns: columns
}))
```
```html
| |
| --- |
| |
```
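If the column list arrives as the comma-separated string from the question, it can be split into the `{ field, displayName }` descriptors this component's `columns` observableArray holds — a sketch under that assumption:

```javascript
// Hypothetical glue code: parse the question's columnsList string into
// column descriptors, ready to be appended to the ko.observableArray.
var columnsList = 'name, address, zip';
var dynamicColumns = columnsList.split(',').map(function (name) {
  var field = name.trim();
  return { field: field, displayName: field };
});
// columns(columns().concat(dynamicColumns)); // would append them to the table
```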
Upvotes: 2 [selected_answer]<issue_comment>username_2: I was able to resolve this using the post at:
[Knockout JS: Get Textbox data from Table under for-each binding](https://stackoverflow.com/questions/49515028/knockout-js-get-textbox-data-from-table-under-for-each-binding)
Here is my code:
```
(function() {
var ViewModel = function() {
var self = this;
self.valuesData = ko.observableArray();
self.columns = ko.computed(function() {
if (self.valuesData().length === 0)
return [];
return ValuesData.columns;
});
self.addRow = function() {
self.valuesData.push(new ValuesData());
};
self.Save = function() {
alert('Data:')
};
self.removeRow = function(data) {
self.valuesData.remove(data);
};
}
// Dynamic values.
var ValuesData = function(siid, comment) {
var self = this;
// Add observables dynamically for all relevant columns.
for (var i = 0; i < ValuesData.columns.length; i++) {
var column = ValuesData.columns[i];
self[column.Property] = ko.observable(column.InitialValue)
}
};
// Basic column definition.
ValuesData.columns = [{
Caption: 'SIID',
Property: 'SIID',
InitialValue: undefined
},
{
Caption: 'Column 1',
Property: 'Col1',
InitialValue: undefined
},
{
Caption: 'Column 2',
Property: 'Col2',
InitialValue: 'banana'
},
{
Caption: 'Comment',
Property: 'Comment',
InitialValue: undefined
}
]
vm = new ViewModel()
ko.applyBindings(vm);
// add initial row.
vm.addRow();
})();
```
Here is the html part:
```
| |
| |
| |
```
Upvotes: 0 |