<issue_start>username_0: I've written some code in Laravel to display only the images that are in the MySQL database, in the Post table.
This is the function to display the images
```
public function index()
{
$posts = Post::all()->pluck('image');
return response()->json(['images' => $posts]);
}
```
And this is the response that I am getting, which displays image filenames in a JSON array:
```
{
"images": [
"1509695371.jpg",
"1509696465.jpg",
"1509697249.jpg"
]
}
```
But I want to display them with the full URL, like below, in JSON format. It would be better to do this with Laravel Eloquent in that function, but without using SQL concatenation.
```
{
"images": [
"http://localhost:8000/images/1509695371.jpg",
"http://localhost:8000/images/1509696465.jpg",
"http://localhost:8000/images/1509697249.jpg"
]
}
```
Any help will be much appreciated!<issue_comment>username_1: You can use the [map](https://laravel.com/docs/5.5/collections#method-map) method on your collection:
```
public function index()
{
$posts = Post::all()->pluck('image')->map(function($image){
return "http://localhost:8000/images/".$image;
});
return response()->json(['images' => $posts]);
}
```
Upvotes: 2 <issue_comment>username_2: This could be handled with a simple loop:
```
$posts = Post::all()->pluck('image');
foreach($posts AS $index => $image){
$posts[$index] = url("/images/".$image);
}
```
The `url()` helper returns a fully-qualified URL based on your config and the path passed, so
```
url("/images/1509695371.jpg")
```
should return
```
http://localhost:8000/images/1509695371.jpg
```
Edit: To include all Data, but still format `images`, you'll need to remove the `->pluck()` function and loop `$posts`, then `$post->images`:
```
$posts = Post::all();
foreach($posts AS $post){
foreach($post->images AS $index => $image){
$post->images[$index] = url("/images/".$image);
}
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: One solution is to make an AssetsService; it can have a method for appending a path to an image: `assetLink('images', $image)`.
An example implementation for this:
```
public function link(string $path, string $fileName): string
{
    return sprintf(
        '%s/%s/%s',
        env('APP_URL'),
        $path,
        $fileName
    );
}
```
Now, you need to append to several paths. Simply make a separate method that takes an array and iterates it using the method above. Another example:
```
public function linkArray(string $path, array $files): array
{
    return array_map(function ($fileName) {
        return $this->link($path, $fileName);
    }, $files);
}
```
You can then call it like this: `$assetsService->linkArray('images', $files)`. Remember you can use Dependency Injection to get the service instantiated by Laravel's container, as sketched below.
This gives you a reusable set of methods for file paths without making your database do unnecessary work. Services are small classes that cost very little but give you a lot of transparency. You define what you use a service for or when something is at all a service.
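For illustration, a minimal constructor-injection sketch (the controller and property names here are hypothetical, not from the original answer):
```
class PostController extends Controller
{
    private $assetsService;

    // Laravel's container resolves the type-hinted service automatically.
    public function __construct(AssetsService $assetsService)
    {
        $this->assetsService = $assetsService;
    }

    public function index()
    {
        $files = Post::all()->pluck('image')->all();
        return response()->json([
            'images' => $this->assetsService->linkArray('images', $files),
        ]);
    }
}
```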
Upvotes: 2

<issue_start>username_0: I'm trying to take a method and make it generic, and I'm a little stuck because the method uses LINQ to look at elements. Here's the example method:
```cs
private List<int> GetListFromIDS(string ids, IEnumerable<SubSpace_Function> data)
{
if (string.IsNullOrEmpty(ids))
return null;
var list = ids
.Split(new char[] { ',' })
.Where(x => !string.IsNullOrWhiteSpace(x))
.Select(x => int.Parse(x.Trim()));
return data
 .Where(x => list.Contains(x.Function_Id))
 .Select(x => x.Function_Id)
.ToList();
}
```
The parts that change are the type (`SubSpace_Function`) and the property to look up, `Function_ID`.
I know I can just change the `SubSpace_Function` part to `T` in the generic method signature, but since each type will have its own property to look up, I'm not sure how to 'pass' in something like `Function_Id`.<issue_comment>username_1: It's pretty easy to do with `Func`:
```
private List<int> GetListFromIDS<T>(string ids, IEnumerable<T> data, Func<T, IEnumerable<int>, bool> filterExpression, Func<T, int> selectExpression)
{
if (string.IsNullOrEmpty(ids))
return null;
var list = ids
.Split(',') // simplify
.Where(x => !string.IsNullOrWhiteSpace(x))
.Select(x => int.Parse(x.Trim()));
return data
.Where(x => filterExpression(x, list))
.Select(selectExpression)
.ToList();
}
```
And call using:
```
var data = GetListFromIDS(
"123,123,123",
someList,
(x, list) => list.Contains(x.Function\_Id),
x => x.Function\_Id);
```
---
Another way is to call the select `Func` inline:
```
private List<int> GetListFromIDS<T>(string ids, IEnumerable<T> data, Func<T, int> selectExpression)
{
if (string.IsNullOrEmpty(ids))
return null;
var list = ids
.Split(',') // simplify
.Where(x => !string.IsNullOrWhiteSpace(x))
.Select(x => int.Parse(x.Trim()));
return data
.Where(x => list.Contains(selectExpression(x)))
.Select(selectExpression)
.ToList();
}
```
And call using:
```
var data = GetListFromIDS(
"123,123,123",
someList,
x => x.Function\_Id);
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I know this focused on generics, but I took the approach of using an interface instead:
```
interface ISubSpaceFunction
{
int FunctionId { get; }
}
class Foo : ISubSpaceFunction
{
public int FunctionId => FooMethodForFunctionId();
private int FooMethodForFunctionId()
{
//do foo function id stuff
throw new NotImplementedException();//so it compiles
}
}
class Bar : ISubSpaceFunction
{
public int FunctionId => BarMethodForFunctionId();
private int BarMethodForFunctionId()
{
//do bar function id stuff
throw new NotImplementedException();//so it compiles
}
}
static class MyClass
{
    private static List<int> GetListFromIds(string idsString, IEnumerable<ISubSpaceFunction> subSpaceFunctions)
{
var ids = string.IsNullOrWhiteSpace(idsString) ?
            Enumerable.Empty<int>() :
idsString.Split(new[] { ',' })
.Where(x => !string.IsNullOrWhiteSpace(x))
.Select(x => x.Trim())
.Select(int.Parse);
        var idSet = new HashSet<int>(ids);
return subSpaceFunctions.Select(ssf => ssf.FunctionId)
            .Where(idSet.Contains)
.ToList();
}
}
class Example
{
public void Test()
{
string ids = "1, 2, 3, 4, 5";
var subSpaceFunctions = new ISubSpaceFunction[] { new Foo(), new Bar() };
var results = MyClass.GetListFromIds(ids, subSpaceFunctions);
}
}
```
My attitude on this and related matters is that the code to get the Property value for each particular type has to go *somewhere*, so it might as well go in the Type's class. This ensures that if the Property's value is needed elsewhere, there is no duplication. This also allows for mixing multiple types that satisfy ISubSpaceFunction, as is done in the example, and you could easily have the interface also specify some common method to be used elsewhere.
I also prefer returning empty collections over null when writing these kinds of LINQ based transformation methods in order to minimize null checking "down the pipeline," but a "fail fast" use case may call for a null return value.
Upvotes: 0

<issue_start>username_0: I am writing a C terminal program that runs until the user terminates it with `Ctrl`+`C`. Think something like `ping` or `top`.
My program allocates to the heap but starts no other threads or processes. Should I be handling SIGINT and freeing any allocated memory before exit or is leaving it to the OS better practice?<issue_comment>username_1: If you exit anyway, you don't need to release any resources. The OS will take care of it just fine, and there is no benefit in doing it manually.
Note that free() is not [async-safe](http://man7.org/linux/man-pages/man7/signal-safety.7.html), so you would definitely have to do the actual freeing in the main thread, and not in the handler. But don't do that, unless you want to do other things than exit().
Use SIGINT handlers for things like resetting the terminal (e.g. with ncurses), or saving critical state.
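For illustration, a minimal sketch of that flag pattern (a hypothetical example, not from the original answer):
```
#include <signal.h>
#include <stdlib.h>

static volatile sig_atomic_t stop = 0;

/* Async-signal-safe: the handler only sets a flag. */
static void on_sigint(int sig) { (void)sig; stop = 1; }

int main(void) {
    signal(SIGINT, on_sigint);
    char *buf = malloc(1024);
    while (!stop) {
        /* ... do the program's work with buf ... */
    }
    free(buf); /* cleanup happens in the main thread, not in the handler */
    return 0;
}
```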
Upvotes: 0 <issue_comment>username_2: The short answer is yes given your context, which is a normal exit situation. In an abnormal exit situation, then the short answer is absolutely no.
If you are concerned that your program is leaking memory during its execution, which is a bad thing in the sense that it slows your program execution, then you can keep track of the memory that you allocate and then free it before you exit. Then you can run your program with valgrind and if valgrind complains about blocks that weren't free'd, then you will know you have some type of leak. The location of the allocation will help you know if the leak is of any importance.
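For example, a typical invocation looks like this (`./yourprogram` stands in for your binary):
```
valgrind --leak-check=full ./yourprogram
```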
Upvotes: 1

<issue_start>username_0: Recently switched from Sublime Text 3 to VS Code. Overall pleased with the switch except for this one little thing that's just annoying enough to be a dealbreaker if there's no solution. In ST3 if I type, say, a `<div>`, it doesn't automatically drop in a `</div>`, which is nice because I'm often pasting it in and don't want it closed right there.
What ST3 DOES do, however, is complete the tag the moment I type `</`. It autofills `div>` the moment I type the forward slash. This is the behavior I want from VS Code. I can't find any mention of this anywhere, which is completely baffling. I know how to autoclose tags, but that's no good because then I have to manually close them. I want VS Code, like ST3, to autocomplete the tag for me, just not immediately.<issue_comment>username_1: Go to **File > Preferences > Settings**, search for `html.autoClosingTags` and set it to `false`.
This will make it so when you type `<div>`, it won't insert `</div>` automatically; when you start typing `</`, you can press `ENTER` to make it autocomplete the closing tag for you.
Or you can leave this option enabled, and when you type `<div>` and it autocompletes the closing tag, you can just press `CTRL` + `Z`.
More information on this behavior [here](https://code.visualstudio.com/Docs/languages/html#_close-tags).
Upvotes: 6 <issue_comment>username_2: Add this to settings.json to make it work like Sublime Text:
```
"html.autoClosingTags": false,
"auto-close-tag.SublimeText3Mode": true
```
Upvotes: 2 <issue_comment>username_3: On Windows/Linux - Ctrl + Shift + P
On MacOS - Cmd + Shift + P
In the search box type settings.json
paste the following line there
`"html.autoClosingTags": false`
Upvotes: 1 <issue_comment>username_4: ### A very simple trick i learnt
If you want to disable tags auto completion for just a single task for example. To save a file without vscode adding closing tags. Just set a different `language mode` for that file.
Change from the inferred one i.e `html` to `Batch`, `Diff` `ignore`. The options on vscode are many. This will enable you to save the file without addition of any closing tags.
After you are remember to reset the language mode to `Auto Detect`.
**TLDR;**
To access language mode-:
* Use the command pallete and search `Change Language Mode` or
* Find a shortcut at the bottom right section on Vscode.
Upvotes: 0 <issue_comment>username_5: Go to Settings, search for "auto closing" and enable/disable these options as needed
[](https://i.stack.imgur.com/MUBbs.png)
Or set them in your settings.json file like so:
```
"html.autoClosingTags": false,
"typescript.autoClosingTags": false,
"javascript.autoClosingTags": false,
```
Upvotes: 2

<issue_start>username_0: I am creating a car park app and I want users to enter some information in edit texts before registering. The edit texts are as follows:
`First Name`, `Last Name`, `Email`, `password`, `car no`.
When the user hits the register button, I want to store these values in the Firebase database connected to my project. I want to know how to create tables in Firebase and how these values will be stored. I am new to programming.<issue_comment>username_1: First retrieve the edittexts:
```
String email = textEmail.getText().toString().trim();
String firstname = firstName.getText().toString().trim();
//etc
```
first authenticate the user using `createUserWithEmailAndPassword` and then add to the database:
```
private DatabaseReference mDatabase, newUser;
private FirebaseUser mCurrentUser;
mDatabase = FirebaseDatabase.getInstance().getReference().child("users");
auth.createUserWithEmailAndPassword(email, password)
.addOnCompleteListener(SignUpActivity.this, new OnCompleteListener<AuthResult>() {
    @Override
    public void onComplete(@NonNull Task<AuthResult> task) {
        Toasty.info(getApplicationContext(), "creation of account was: " + task.isSuccessful(), Toast.LENGTH_SHORT).show();
        if (!task.isSuccessful()) {
            Toasty.error(getApplicationContext(), "Authentication failed: " + task.getException().getMessage(),
                Toast.LENGTH_SHORT).show();
        } else {
            mCurrentUser = task.getResult().getUser(); // <-- gets you the uid
            newUser = mDatabase.child(mCurrentUser.getUid());
            newUser.child("email").setValue(email);
            newUser.child("firstname").setValue(name);
        }
    }
});
```
you will have the following database:
```
users
userid
email:<EMAIL> <--example
firstname:userx <--example
```
Upvotes: 1 <issue_comment>username_2: Firebase doesn't store data in tables, it is not a classic database. It's just a big JSON file that might be replicated in several nodes and might span several of them too.
The classic approach is to have the first level children of the root be what you imagine would be the tables. So, for example, you will have
```
{
"users": {
...
}
}
```
Second, what you are trying to do is very simple and you can know how to do it by opening the starting guide.
Moreover, the best approach to handle users' authentication is not this but using FirebaseAuth. In the guide you'll find about this too.
Upvotes: 0

<issue_start>username_0: I just started learning this language and I have a problem trying to create a Matrix of type char from user input.
For example I want to read this as my input:
```
3 // this is an int n that will give me a square matrix[n][n]
.#.
###
.#.
```
For this example, this is what I have:
```
//...
Scanner stdin = new Scanner(System.in);
int n = stdin.nextInt();
char[][] matrix = new char[n][n];
for(int i = 0; i < n; i++){
matrix = stdin.nextLine();
}
```
Obviously this is wrong, and I know that. I'm just not seeing a way to correctly read this input.
If anyone could help me I would appreciate it.
ps: if possible, keep it simple, because like I said, I just started learning java :)<issue_comment>username_1: First, you need to add `stdin.nextLine();` after reading `n` to skip the new line character.
Second, this is what you need inside your loop:
```
matrix[i] = stdin.nextLine().toCharArray();
```
This reads the next line and converts it to an array of chars.
Upvotes: 1 <issue_comment>username_2: see code sample:
```
import java.util.Scanner;

public class MatInput {
    public static void main(String[] args) {
        int matX = 3;
        int matY = 3;
        String matrix[][] = new String[matX][matY];
        Scanner input = new Scanner(System.in);
        System.out.println("enter the strings for the Matrix");
        // NOTE: the original snippet was cut off mid-loop; the nested
        // read loop below is a reconstruction from context.
        for (int row = 0; row < matX; row++) {
            for (int col = 0; col < matY; col++) {
                matrix[row][col] = input.next();
            }
        }
    }
}
```
Upvotes: -1 <issue_comment>username_3: This is a runnable version of your question with output
```
import javafx.application.Application;
import javafx.stage.Stage;
import java.util.Arrays;
import java.util.Scanner;
public class MainNoFXML extends Application {
@Override
public void start(Stage stage) {
System.out.println("Enter Matrix Size:");
Scanner stdin = new Scanner(System.in);
int n = stdin.nextInt();
char[][] matrix = new char[n][n];
stdin.nextLine();
for(int i = 0; i < n; i++) {
System.out.println("Enter "+n+" Number of Chars");
System.arraycopy(stdin.nextLine().toCharArray(), 0, matrix[i], 0, n);
}
System.out.println("\nYour Matrix:");
for(int i = 0; i < n; i++)
System.out.println(Arrays.toString(matrix[i]));
}
public static void main(String[] args) { launch(args); }
}
```
Output:
```
Enter Matrix Size:
3
Enter 3 Number of Chars
.#.
Enter 3 Number of Chars
###
Enter 3 Number of Chars
.#.
Your Matrix:
[., #, .]
[#, #, #]
[., #, .]
```
Upvotes: 0 <issue_comment>username_4: First of all, thank you for all your answers.
I emailed my teacher and this was the solution he gave to me, if anyone is wondering:
```
Scanner stdin = new Scanner(System.in);
int n = stdin.nextInt();
stdin.nextLine();
char[][] matrix = new char[n][n];
for(int i = 0; i < n; i++){
String line = stdin.nextLine();
    for(int j = 0; j < n; j++){
matrix[i][j] = line.charAt(j);
}
}
```
Upvotes: 0

<issue_start>username_0: If I want to download a newer version of a python3 package, it seems like pip, pip3, and pip3.6 all download the python2 version anyway. When I check the version of each pip, I get the following:
```
$ pip -V
pip 9.0.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)
$ pip3 -V
pip 9.0.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)
$ pip3.6 -V
pip 9.0.2 from /usr/local/lib/python2.7/dist-packages (python 2.7)
```
I would assume that pip3 and pip3.6 would want to say something like python 3.6?<issue_comment>username_1: pip is bundled with python > 3.4
so,if you're on a **Unix machine** try:
```
python3.6 -m pip install [Package_to_install]
```
or if you're on a **Windows machine**
```
py -m pip install [Package_to_install]
```
I hope this is what you meant..
Upvotes: 3 [selected_answer]<issue_comment>username_2: 

> I would assume that pip3 and pip3.6 would want to say something like python 3.6?

They must be, but it's not magic, it's the shebang line (the first line of the script, it starts with `#!`).
Open the scripts in your editor and fix the shebang lines. Something like that:
```
vim $(which pip3.6)
```
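After the edit, the first line of the script should point at the matching interpreter, something like the following (the exact path is system-dependent):
```
#!/usr/bin/python3.6
```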
Upvotes: 0

<issue_start>username_0: Are they different or simple aliases?
I obtain the /private/var by running:
```
FileManager.default.contentsOfDirectory(at: folder, includingPropertiesForKeys: [], options: [])
```
And the second is created with a simple:
```
data.write(to: f, options: [.atomic])
```
where f is in the same directory as "folder"<issue_comment>username_1: Those are the same directories, as one can verify by retrieving the
“canonical path” for both:
```
let url1 = URL(fileURLWithPath: "/var/mobile/Containers/")
if let cp = (try? url1.resourceValues(forKeys: [.canonicalPathKey]))?.canonicalPath {
print(cp)
// "/private/var/mobile/Containers"
}
let url2 = URL(fileURLWithPath: "/private/var/mobile/Containers/")
if let cp = (try? url2.resourceValues(forKeys: [.canonicalPathKey]))?.canonicalPath {
print(cp)
// "/private/var/mobile/Containers"
}
```
In fact, `/var` is a symbolic link to `/private/var`:
```
var buffer = [CChar](repeating: 0, count: 1024)
if readlink("/var", &buffer, buffer.count) > 0 {
print(String(cString: &buffer))
// "private/var"
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: For Swift users, using `URL.standardizedFileURL` eliminates the ambiguity/confusion caused by paths which contain soft links or other different elements that ultimately resolve to the same file.
Upvotes: 2

<issue_start>username_0: Example:
```
row_number |id |firstname | middlename | lastname |
0 | 1 | John | NULL | Doe |
1 | 1 | John | Jacob | Doe |
2 | 2 | Alison | Marie | Smith |
3 | 2 | NULL | Marie | Smith |
4 | 2 | Alison | Marie | Smith |
```
I'm trying to figure out how to group by id and then grab the row with the least number of NULL values in each group; dropping any extra rows that tie for the least number of NULLs is fine (for example, dropping row_number 4 since it ties row_number 2 for the least number of NULLs where id=2).
The answer for this example would be row_numbers 1 and 2.
Preferably would be ANSI SQL, but I can translate other languages (like python with pandas) if you can think of a way to do it
Edit:
Added a row for the case of tie-breaking.
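For reference, a minimal sketch of one ANSI-style approach (not from the original thread; the table name `people` is a placeholder): rank the rows of each id by their NULL count and keep the first one.
```
SELECT id, firstname, middlename, lastname
FROM (
    SELECT t.*,
           ROW_NUMBER() OVER (
               PARTITION BY id
               ORDER BY (CASE WHEN firstname  IS NULL THEN 1 ELSE 0 END)
                      + (CASE WHEN middlename IS NULL THEN 1 ELSE 0 END)
                      + (CASE WHEN lastname   IS NULL THEN 1 ELSE 0 END)
           ) AS rn
    FROM people t
) ranked
WHERE rn = 1;
```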
<issue_start>username_0: I want to read Spotfire binary data into a non-TERR R engine that can handle graphing and other complex packages, etc. So I want to use the SpotfireData package with other non-TERR R engines. Yet when I try to install, I get an error:
```
install.packages("SpotfireData")
Warning in install.packages :
package ‘SpotfireData’ is not available (for R version 3.4.4)
```
Has anyone had luck using the SpotfireData package outside of TERR?
I'm using:
```
> version
_
platform x86_64-w64-mingw32
arch x86_64
os mingw32
system x86_64, mingw32
status
major 3
minor 4.4
year 2018
month 03
day 15
svn rev 74408
language R
version.string R version 3.4.4 (2018-03-15)
nickname Someone to Lean On
```
Also, when I switch engines to R3.4.3, I get the same error:
```
install.packages("SpotfireData")
Warning in install.packages :
package ‘SpotfireData’ is not available (for R version 3.4.3)
```
Also, when I copy/paste the actual SpotfireData package folder into my R3.4.4 library, I get this error:
```
library(SpotfireData)
Error in library(SpotfireData) :
‘SpotfireData’ is not a valid installed package
```
<issue_start>username_0: I am having some difficulty figuring out how to take the user input and write the number to a file.
For instance, if the user inputs the number `50`, the program should create a text file with the numbers `1,2,3,....50` save the output in the file.
This is what I have so far and it works and saves the users input to the file.
I can't figure out how to break it down so it saves to the file starting at `1` and counts to the number inputted by the user.
```
def main():
outfile = open('counting.txt', 'w')
print('This program will create a text file with counting numbers')
 N = int(input('How many numbers would you like to store in this file: '))
outfile.write(str(N) + '\n')
outfile.close()
print('Data has been written to counting.txt')
main()
```
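For reference, a minimal sketch of the counting loop (not from the original thread; assumes Python 3 and the `counting.txt` filename from the question):
```
def main():
    print('This program will create a text file with counting numbers')
    n = int(input('How many numbers would you like to store in this file: '))
    # Write 1..n, one number per line; the with-block closes the file.
    with open('counting.txt', 'w') as outfile:
        for i in range(1, n + 1):
            outfile.write(str(i) + '\n')
    print('Data has been written to counting.txt')

main()
```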
<issue_start>username_0: Here is a stack implementation I found on the web:
```
public struct Stack<T> {
 fileprivate var array = [T]()
public var isEmpty: Bool {
return array.isEmpty
}
public var count: Int {
return array.count
}
 public mutating func push(_ element: T) {
array.append(element)
}
public mutating func pop() -> T? {
return array.popLast()
}
public var top: T? {
return array.last
}
}
```
I wanted a simple contains method to see if an element is in the stack.
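For reference, a minimal sketch of such a method (not from the original thread; it assumes the extension sits in the same file as the struct, so the fileprivate array is visible, and that the element type is Equatable):
```
extension Stack where T: Equatable {
    // True if the element occurs anywhere in the stack.
    public func contains(_ element: T) -> Bool {
        return array.contains(element)
    }
}
```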
<issue_start>username_0: I am pretty new to R and I have a loop which sometimes gives a matrix like this:
```
1 2
FALSE 0 0
TRUE 0 2
```
I need to do as follows:
If the two cells in a single row are both zero, replace them with 0.5
If one of the cells is not zero, divide each cell by the sum of the row
so the result of this will be:
```
1 2
FALSE 0.5 0.5
TRUE 0 1
```
Any idea please?
Thank you<issue_comment>username_1: If your matrix is `x`,
```
(x <- matrix(c(0, 0, 0, 2), 2))
# [,1] [,2]
# [1,] 0 0
# [2,] 0 2
zero_rows <- as.logical(rowSums(x != 0))
x[zero_rows,] <- x[zero_rows,]/sum(x[zero_rows,])
x[rowSums(x) == 0, ] <- rep(0.5, ncol(x))
x
# [,1] [,2]
# [1,] 0.5 0.5
# [2,] 0.0 1.0
```
This will work for a matrix (2 dimensional array) of arbitrary size
@akrun's suggested edit, constructing `zero_rows` with `rowSums(x != 0)` instead of `apply(x, 1, function(r) 0 %in% r)` should make this even more efficient.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Let `x <- matrix(c(0, 0, 0, 2), 2)`
```
t(apply(x,1,function(y)if(all(!y))replace(y,!y,0.5)else if(any(!y))y/sum(y) else y))
[,1] [,2]
[1,] 0.5 0.5
[2,] 0.0 1.0
```
Upvotes: 1 <issue_comment>username_3: ```
x = matrix(c(0, 0, 0, 2), 2)
t(apply(x, 1L, function(y) ifelse(all(y == 0), return(rep(0.5, length(y))), return(y/sum(y)))))
# [,1] [,2]
#[1,] 0.5 0.5
#[2,] 0.0 1.0
```
Upvotes: 0

<issue_start>username_0: 

> Write a program that examines three variables — x, y, and z — and
> prints the largest odd number among them. If none of them are odd, it
> should print a message to that effect.

This is my source code:
```
#include <stdio.h>
int main() {
int x,y,z;
x = 11;
y = 15;
z = 18;
if (x > y && x > z && x % 2 != 0)
printf("%d", x);
else if (y > z && y > x && y % 2 != 0)
printf("%d", y);
else if (z > x && z > y && z % 2 != 0)
printf("%d", z);
else
printf("error");
return 0;
}
```
The program is compiling and running but is giving the wrong answer. For the above it gives "error" as the output, but the greatest odd number is 15.<issue_comment>username_1: The code is not printing the largest odd number, but is printing the largest number **if it happens to be odd**.
You need to keep track of the largest odd number you've found and then print that.
For example:
```
int found = 0, largest = 0;
if (x%2 != 0) {
found = 1;
largest = x;
}
if (y%2 != 0) {
found = 1;
if (y > largest) {
largest = y;
}
}
if (z%2 != 0) {
found = 1;
if (z > largest) {
largest = z;
}
}
if (found) {
printf("%d", largest);
} else {
printf("error");
}
```
Upvotes: 0 <issue_comment>username_2: You may have noticed that your code is very repetitive. And the next exercise in the book is likely to be "OK, now do this for *ten* input numbers", which would be too tedious to contemplate, or "... for *any number* of input numbers", which just plain can't be done the way you're doing it.
So instead you should be thinking about a code structure like this:
```
int current_largest_odd_number = 0; /* note: zero is even */
for (;;) {
int n = read_a_number();
if (n == -1) break; /* read_a_number returns -1 on EOF */
if (is_even(n)) continue;
if (n > current_largest_odd_number)
current_largest_odd_number = n;
}
if (current_largest_odd_number > 0)
printf("largest odd number: %d\n", current_largest_odd_number);
```
Upvotes: 0 <issue_comment>username_3: In your example, y is the largest odd number, but it is not the largest number. z is larger, so "y>z" evaluates to false, and the path you want is not taken.
Upvotes: 1 <issue_comment>username_4: You are just printing the largest integer of the three if it is odd. To avoid changing your code too much, and if you don't care about the other values, you can set the even variables to -INF. That is:
```
//INF can be whatever big even value you think that works fine (1e9), 0x3ffffffe, etc
#define INF 1e9
if (x % 2 == 0) x = -INF;
if (y % 2 == 0) y = -INF;
if (z % 2 == 0) z = -INF;
//here the rest of your code
```
Upvotes: 2 <issue_comment>username_5: Your code prints the largest number if and only if it is odd.
You could instead test each number in turn and update a pointer to the maximum odd value if any.
```
#include <stdio.h>
int main() {
    int x = 11;
    int y = 15;
    int z = 18;
    int *largest = NULL;
    if (x % 2 != 0) {
        largest = &x;
    }
    if (y % 2 != 0) {
        if (!largest || y > *largest) {
            largest = &y;
        }
    }
    if (z % 2 != 0) {
        if (!largest || z > *largest) {
            largest = &z;
        }
    }
    if (largest) {
        printf("%d\n", *largest);
    } else {
        printf("error\n");
    }
    return 0;
}
```
Here is an alternative approach with an indicator instead of a pointer:
```
#include <stdio.h>
int main() {
int x = 11;
int y = 15;
int z = 18;
int largest = 0;
int found = 0;
if (x % 2 != 0) {
largest = x;
found = 1;
}
if (y % 2 != 0) {
if (!found || y > largest) {
largest = y;
found = 1;
}
}
if (z % 2 != 0) {
if (!found || z > largest) {
largest = z;
found = 1;
}
}
if (found) {
printf("%d\n", largest);
} else {
printf("error\n");
}
return 0;
}
```
Upvotes: 0 <issue_comment>username_6: My version examines all possible three integer numbers and prints the largest
odd number (As an absolute beginner.)
```
x = int(input("number x = "))
y = int(input("number y = "))
z = int(input("number z = "))
# Ask the user to input three variables
ans = 0
if x % 2 == 0 and y % 2 == 0 and z % 2 == 0:
    print("There are no odd numbers.")
else:
    if x <= y <= z and z % 2 != 0:
        ans = z
    elif y % 2 != 0:
        ans = y
    elif x % 2 != 0:
        ans = x
    if x <= y >= z and y % 2 != 0:
        ans = y
    elif x >= z and x % 2 != 0:
        ans = x
    elif z % 2 != 0:
        ans = z
    if x >= y <= z and x >= z and x % 2 != 0:
        ans = x
    elif z % 2 != 0:
        ans = z
    elif y % 2 != 0:
        ans = y
    if x >= y >= z and x % 2 != 0:
        ans = x
    elif y % 2 != 0:
        ans = y
    elif z % 2 != 0:
        ans = z
    print("The largest odd number is " + str(ans))
```
Upvotes: 0

<issue_start>username_0: I have a DataFrame with Arrays.
```
val DF = Seq(
("123", "|1|2","3|3|4" ),
("124", "|3|2","|3|4" )
).toDF("id", "complete1", "complete2")
.select($"id", split($"complete1", "\\|").as("complete1"), split($"complete2", "\\|").as("complete2"))
|id |complete1|complete2|
+-------------+---------+---------+
| 123| [, 1, 2]|[3, 3, 4]|
| 124| [, 3, 2]| [, 3, 4]|
+-------------+---------+---------+
```
How do I extract the minimum of each arrays?
```
|id |complete1|complete2|
+-------------+---------+---------+
| 123| 1 | 3 |
| 124| 2 | 3 |
+-------------+---------+---------+
```
I have tried defining a UDF to do this but I am getting an error.
```
def minArray(a:Array[String]) :String = a.filter(_.nonEmpty).min.mkString
val minArrayUDF = udf(minArray _)
def getMinArray(df: DataFrame, i: Int): DataFrame = df.withColumn("complete" + i, minArrayUDF(df("complete" + i)))
val minDf = (1 to 2).foldLeft(DF){ case (df, i) => getMinArray(df, i)}
java.lang.ClassCastException: scala.collection.mutable.WrappedArray$ofRef cannot be cast to [Ljava.lang.String;
```<issue_comment>username_1: You can define your `udf` function as below
```
def minUdf = udf((arr: Seq[String])=> arr.filterNot(_ == "").map(_.toInt).min)
```
and call it as
```
DF.select(col("id"), minUdf(col("complete1")).as("complete1"), minUdf(col("complete2")).as("complete2")).show(false)
```
which should give you
```
+---+---------+---------+
|id |complete1|complete2|
+---+---------+---------+
|123|1 |3 |
|124|2 |3 |
+---+---------+---------+
```
**Updated**
In case *the array passed to the udf function is empty, or an array of empty strings*, you will encounter
> java.lang.UnsupportedOperationException: empty.min

You should handle that with `if else` condition in `udf` function as
```
def minUdf = udf((arr: Seq[String])=> {
val filtered = arr.filterNot(_ == "")
if(filtered.isEmpty) 0
else filtered.map(_.toInt).min
})
```
I hope the answer is helpful
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is how you can do it without using `udf`
First `explode` the array you got with `split()` and then group by the same id and find `min`
```
val DF = Seq(
("123", "|1|2","3|3|4" ),
("124", "|3|2","|3|4" )
).toDF("id", "complete1", "complete2")
.select($"id", split($"complete1", "\\|").as("complete1"), split($"complete2", "\\|").as("complete2"))
.withColumn("complete1", explode($"complete1"))
.withColumn("complete2", explode($"complete2"))
.groupBy($"id").agg(min($"complete1".cast(IntegerType)).as("complete1"), min($"complete2".cast(IntegerType)).as("complete2"))
```
Output:
```
+---+---------+---------+
|id |complete1|complete2|
+---+---------+---------+
|124|2 |3 |
|123|1 |3 |
+---+---------+---------+
```
Upvotes: 1 <issue_comment>username_3: You don't need an UDF for this, you can use `sort_array`:
```
val DF = Seq(
("123", "|1|2","3|3|4" ),
("124", "|3|2","|3|4" )
).toDF("id", "complete1", "complete2")
.select(
$"id",
split(regexp_replace($"complete1","^\\|",""), "\\|").as("complete1"),
split(regexp_replace($"complete2","^\\|",""), "\\|").as("complete2")
)
// now select minimum
DF.
.select(
$"id",
sort_array($"complete1")(0).as("complete1"),
sort_array($"complete2")(0).as("complete2")
).show()
+---+---------+---------+
| id|complete1|complete2|
+---+---------+---------+
|123| 1| 3|
|124| 2| 3|
+---+---------+---------+
```
Note that I removed the leading `|` before splitting to avoid empty strings in the array
Upvotes: 1 <issue_comment>username_4: Since Spark 2.4, you can use [`array_min`](https://docs.databricks.com/spark/latest/spark-sql/language-manual/functions.html#array-min) to find the minimum value in an array. To use this function you will first have to cast your arrays of strings to arrays of integers. Casting will also take care of the empty strings by converting them into `null` values.
```scala
DF.select($"id",
 array_min(expr("cast(complete1 as array<int>)")).as("complete1"),
 array_min(expr("cast(complete2 as array<int>)")).as("complete2"))
```
Upvotes: 3

<issue_start>username_0: This is a C++03 question.
In the following code, `class Foo` implements `operator[]` that returns a pointer to a member function. The code currently does this by returning a reference to a `TFunc`, which is `typedef`ed to a member function.
I'd like to learn: what would be the syntax of the `operator[]` definition without using `typedef`? You can see I flailed around a bit, without success, trying. My first thought was that `typedef` worked like a macro, i.e. that a simple string substitution should work - but apparently not. All the other variations I tried didn't work either.
```
#include <iostream>
#include <map>
#include <string>

template <typename T>
class Foo
{
public:
    typedef void (T::*TFunc)( const std::string&, const std::string& );
    typedef std::map< std::string, TFunc > FooMap;
    operator FooMap&()
    {
        return member_;
    }
    //void (T::*)( const std::string&, const std::string& ) operator []( const std::string& str )
    //void (T::*&)( const std::string&, const std::string& ) operator []( const std::string& str )
    //void (T::*)&( const std::string&, const std::string& ) operator []( const std::string& str )
    //void (T::*)( const std::string&, const std::string& )& operator []( const std::string& str )
    TFunc& operator []( const std::string& str )
    {
        return member_[ str ];
    }
private:
    FooMap member_;
};
class Bar
{
public:
    void func()
    {
        fb_["a"] = &Bar::abc;
    }
    void callFunc( const std::string& str, const std::string arg1,
                   const std::string& arg2 )
    {
        (this->*fb_[ str ])( arg1, arg2 );
    }
    void abc( const std::string& key, const std::string& val )
    {
        std::cout << key << ": " << val << std::endl;
    }
private:
    Foo<Bar> fb_;
};
int main( int argc, char* argv[] )
{
    Bar b;
    b.func();
    b.callFunc( "a", "hello", "world" );
    return 0;
}
```<issue_comment>username_1: The ugly syntax would be:
```
void (T::*&operator [](const std::string& str))(const std::string&, const std::string&);
```
[Demo](http://coliru.stacked-crooked.com/a/169513f57d8a1dd6)
Upvotes: 3 [selected_answer]<issue_comment>username_2: In C++ 14 or later, you can use
```
decltype(auto)
```
as the return type.
You might also consider using `std::function` instead of what you currently have as the map's value type; a sketch follows below.
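A minimal illustrative sketch of that alternative (not from the original answer; it needs C++11, which the question's C++03 constraint would rule out):
```
// With std::function as the mapped type, operator[] has an ordinary
// return type; the target object is supplied at call time.
typedef std::function<void(T&, const std::string&, const std::string&)> TFunc;

// A pointer-to-member still converts directly:
//     fb_["a"] = &Bar::abc;
// and the call site becomes:
//     fb_[str](*this, arg1, arg2);
```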
Upvotes: 2

<issue_start>username_0: I'm new to SQL Server and I am getting this error: "Cannot insert the value NULL into column 'Occupied', table 'DBProjectHamlet.dbo.tblGrave'; column does not allow nulls. INSERT fails. The statement has been terminated."
This is my code for the insert followed by the code to create the table
```
INSERT INTO tblGrave (GraveName)
SELECT Grave
FROM tblPlotsandOccupants
IF EXISTS(SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'tblGrave' AND TABLE_SCHEMA = 'dbo')
DROP TABLE dbo.tblGrave;
GO
CREATE TABLE tblGrave
(
GraveID INT IDENTITY (1,1),
GraveName VARCHAR(MAX) NULL,
GraveTypeID INT NOT NULL,
PlotID INT NOT NULL,
Occupied BIT NOT NULL
)
```
I'm not trying to insert anything into column Occupied, I don't know why this is happening or how to fix it. I just want to insert values into tblGrave (GraveName). Any help would be great.<issue_comment>username_1: Exactly! You *aren't* doing anything with `Occupied` and that is the problem. The column is specified to be `NOT NULL` but has no default value. You are not inserting a value, so it gets the default. The default default is `NULL`, and that is not allowed.
One simple solution is:
```
INSERT INTO tblGrave (GraveName, Occupied)
SELECT Grave, 0
FROM tblPlotsandOccupants;
```
This fixes your immediate problem, but will then you will get an error on `PlotId`.
A more robust solution would add a default value for the `NOT NULL` columns and declare the rest to be nullable (the default). Something like this:
```
CREATE TABLE tblGrave (
GraveID INT IDENTITY (1,1) PRIMARY KEY,
GraveName VARCHAR(MAX),
    GraveTypeID INT,
PlotID INT,
Occupied BIT NOT NULL DEFAULT 0
);
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: When you created your table, you defined that column as "NOT NULL" rather than allowing it to be null.
You need to either allow "Occupied" to be null, set a default value, or define the value that you want upon inserting.
You can even set the value to be ' ' which is blank but isn't null.
EDIT
Note @Gordon's answer for sql examples.
Upvotes: 0

<issue_start>username_0: When I write this code:
```
ListView lv = new ListView();
foreach (ListViewDataItem item in lv.Items)
{
}
```
I get "**the type or name ListViewDataItem could not be found**"
**Items** is also not found under the lv object.
Basically I need to iterate through each row of the ListView and set a checkbox added using item template.
How can I accomplish that?<issue_comment>username_1: The correct way to loop through a listview is to access its ItemsSource. Then you can cast the item into your view model and do stuff with it.
```
foreach (var item in lv.ItemsSource)
{
// cast the item
var dataItem = (ListViewDataItem) item;
// then do stuff with your casted item
...
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I used a for loop to iterate over my listView's ChildCount, cast the Tag of each GetChildAt result to ImageAdapterViewHolder, and then set my checkbox to false.
```
class ImageAdapterViewHolder : Java.Lang.Object
{
public ImageView SavedImage { get; set; }
public TextView Description { get; set; }
public CheckBox SelectImage { get; set; }
}
for (int i = 0; i < listView.ChildCount; i++)
{
var row = listView.GetChildAt(i).Tag as ImageAdapterViewHolder;
row.SelectImage.Checked = false;
}
```
Upvotes: 0

<issue_start>username_0: I have a table that has a SQL column with a date stored as a string, like 'January 1, 2018'. I'm trying to turn that into a DateTime object in C# so I can use it to sort a list. I'm currently grouping everything by the ID so I can return the highest revision. This is working great, but I also need to OrderByDescending by date from the column that represents a date. The below code will order everything alphanumerically, but I need to sort by DateTime.
```
using (dbEntities entities = new dbEntities())
{
var db = entities.db_table
.GroupBy(x => x.ID) //grouping by the id
.Select(x => x.OrderByDescending(y =>
y.REVISIONID).FirstOrDefault());
return db.OrderBy(e => e.Date_String).ToList();
}
```
Thanks, I appreciate any help on this!<issue_comment>username_1: You'll need to materialize the objects and use LINQ-to-Objects to do the conversion to a C# DateTime.
```
return db.AsEnumerable().OrderBy(e => DateTime.Parse(e.Date_String)).ToList();
```
If at all possible, I would strongly recommend changing your column to a `datetime2` or `datetimeoffset` at the database level, though.
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you don't mind some of the work being done on the client side you could do something like this:
```
using (dbEntities entities = new dbEntities())
{
var db = entities.db_table
.GroupBy(x => x.ID) //grouping by the id
.Select(x => x.OrderByDescending(y =>
y.REVISIONID).FirstOrDefault()).ToList();
return db.OrderBy(e => DateTime.Parse(e.Date_String)).ToList();
}
```
The parsing of the DateTime needs to be modified so it matches the format in the database, but otherwise it should work.
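For the 'January 1, 2018' shape shown in the question, an exact parse would look something like this (a sketch; the format string is inferred from that one sample):
```
DateTime.ParseExact(e.Date_String, "MMMM d, yyyy",
    System.Globalization.CultureInfo.InvariantCulture)
```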
Upvotes: 0

<issue_start>username_0: I have a Symfony 2.8 application and I recently integrated Vue.js 2 as my front-end framework, because it gives a lot of flexibility.
My application is not a single-page app and I use the Symfony controllers to render views. All the views are wrapped in a base Twig layout:
```
{% block body %} {% endblock %}
```
I load most of the JS with webpack, all my vue components and JS dependencies are compiled in `vendor-bundle.js` and `vue-bundle.js`. My VueJs instance looks like this:
```
import './components-dir/component.vue'
import './components-dir/component2.vue'
Vue.component('component', Component);
Vue.component('component2', Component2);
window.onload = function () {
new Vue({
el: '#app',
components: {}
});
};
```
I want to pass some PHP variables from the controller to the Vue.js components, but I can't manage to make it work.
A very simple example of a controller looks like this:
```
/**
* @Route("/contract", name="contract")
* @Method("GET")
*/
public function indexAction()
{
$paymentMethods = PaymentMethod::getChoices();
return $this->render('contracts/index.html.twig', [
'paymentMethods' => $serializer->normalize($paymentMethods, 'json'),
]);
}
```
All the HTML, CSS and JS are handled by Vue.js. The Twig view looks like this:
```
{% extends 'vue-base.html.twig' %}
{% block body %}
{% endblock %}
```
The `contracts.vue` component looks like this:
```
Hi from component
export default {
data() {
return {}
},
props: ['paymentMethods'],
mounted: function () {
console.log(this.paymentMethods)
}
}
```
> How can I pass the PHP variables as props to Vue.js?

In the example above, I don't get any errors, but the property is not passed to Vue.js. The console log prints `undefined`.
I want to be able to do this because I don't want a SPA, but I also want to pass some variables from Symfony to Vue so that I won't have to make additional requests.<issue_comment>username_1: You need to add the following to your .vue file: `props: ['paymentMethods']`. Please refer to the following URL for the complete documentation: <https://v2.vuejs.org/v2/guide/components.html#Passing-Data-with-Props>
Upvotes: 1 <issue_comment>username_2: Probably late to the party, but if anyone is having the same issue, the problem here was the casing.
CamelCased props like `paymentMethods` are converted to hyphen-case in html, and can be used like this:
```
```
<component :payment-methods="..."></component>
```
```
```
you can render whole using twig:
```
```
Thanks to this you will avoid a delimiters conflict between Vue and Twig.
Also, as the value comes directly from Twig, it probably won't change over time, as it is generated in the backend (not in some Vue source), so you don't need to bind it; just pass it like:
```
```
Upvotes: 2 <issue_comment>username_4: The simplest way to pass variables from twig to Vue application is:
Twig:
```
<div id="app" data-foo="{{ foo }}" data-bar="{{ bar }}"></div>
```
JS:
```
import Vue from 'vue'
new Vue({
el: '#app',
data: {
foo: '',
bar: ''
},
template: 'foo = {{ foo }}bar = {{ bar }}',
beforeMount: function() {
this.foo = this.$el.attributes['data-foo'].value
this.bar = this.$el.attributes['data-bar'].value
}
})
```
If you would like to use a Vue component you can do it the following way:
Twig:
```
<div id="app" data-foo="{{ foo }}" data-bar="{{ bar }}"></div>
```
JS:
```
import Vue from 'vue'
import App from 'App'
new Vue({
el: '#app',
render(h) {
return h(App, {
props: {
foo: this.$el.attributes['data-foo'].value,
bar: this.$el.attributes['data-bar'].value,
}
})
}
})
```
App.vue:
```
foo = {{ foo }}
bar = {{ bar }}
export default {
props: ['foo', 'bar'],
}
```
Please note if you would like to pass arrays you should convert them to json format before:
Twig:
```
<div id="app" data-foo="{{ foo|json_encode }}"></div>
```
and then you should decode json:
JS:
```
this.foo = JSON.parse(this.$el.attributes['data-foo'].value)
```
Upvotes: 1

<issue_start>As homework I have to build a simple URL shortener, where I can add a full link to a list, which is processed by the [Hashids.net library](https://github.com/ullmark/hashids.net), and I get a short version of the URL.

I've got something like this now, but I got stuck on redirecting the short URL back to the full link.
I would like to add a new controller which takes responsibility for redirecting a short URL to the full URL. After clicking a short URL it should go to `localhost:xxxx/ShortenedUrl` and then redirect to the full link. Any tips on how I can create this?
I was trying to do it with `@Html.ActionLink(@item.ShortenedLink, "Index", "Redirect")` and `return Redirect(fullLink)` in the Redirect controller, but it didn't work as I expected.
And one more question about routes: how can I make clicking a short URL give me `localhost:XXXX/ShortenedURL` (i.e. `localhost:XXXX/FSIAOFJO2@`)? Now I've got
```
<a href="@item.ShortenedLink">@Html.DisplayFor(model => item.ShortenedLink)</a>
```
and
```
app.UseMvc(routes =>
{
routes.MapRoute("default", "{controller=Link}/{action=Index}");
});
```
but it gives me `localhost:XXXX/Link/ShortenedURL`, so I would like to omit the `Link` segment in the URL.
**View** (part with Short URL):
```
@Html.ActionLink(item.ShortenedLink, "GoToFull", "Redirect", new { target = "_blank" }) |
```
**Link controller:**
```
public class LinkController : Controller
{
private ILinksRepository _repository;
public LinkController(ILinksRepository linksRepository)
{
_repository = linksRepository;
}
[HttpGet]
public IActionResult Index()
{
var links = _repository.GetLinks();
return View(links);
}
[HttpPost]
public IActionResult Create(Link link)
{
_repository.AddLink(link);
return Redirect("Index");
}
[HttpGet]
public IActionResult Delete(Link link)
{
_repository.DeleteLink(link);
return Redirect("Index");
}
}
```
**Redirect controller which I am trying to do:**
```
private ILinksRepository _repository;
public RedirectController(ILinksRepository linksRepository)
{
_repository = linksRepository;
}
public IActionResult GoToFull()
{
var links = _repository.GetLinks();
return Redirect(links[0].FullLink);
}
```
Is there a better way to get access to the links list in the Redirect controller?<issue_comment>username_1: This is my suggestion: trigger the link via AJAX. Here is a working example:
This is the HTML element binded through model:
```
@Html.ActionLink(Model.ShortenedLink, "", "", null,
new { onclick = "fncTrigger('" + "http://www.google.com" + "');" })
```
This is the javascript ajax code:
```
function fncTrigger(id) {
$.ajax({
url: '@Url.Action("TestDirect", "Home")',
type: "GET",
data: { id: id },
success: function (e) {
},
error: function (err) {
alert(err);
},
});
}
```
Then on your controller to receive the ajax click:
```
public ActionResult TestDirect(string id)
{
return JavaScript("window.location = '" + id + "'");
}
```
Basically what I am doing here is that, after I click the link, it will call the TestDirect action, then redirect using the passed url parameter. You can do the conversion inside this action.
Upvotes: 1 <issue_comment>username_2: To create dynamic data-driven URLs, you need to create a custom [`IRouter`](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.routing.irouter?view=aspnetcore-2.0). Here is how it can be done:
`CachedRoute`
-------------
This is a reusable generic class that maps a set of dynamically provided URLs to a single action method. You can inject an `ICachedRouteDataProvider` to provide the data (a URL to primary key mapping).
The data is cached to prevent multiple simultaneous requests from overloading the database (routes run on every request). The default cache time is for 15 minutes, but you can adjust as necessary for your requirements.
> If you want it to act "immediate", you could build a more advanced cache that is updated just after a successful database update of one of the records. That is, the same action method would update both the database and the cache.

```
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.Caching.Memory;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

public class CachedRoute<TPrimaryKey> : IRouter
{
    private readonly string _controller;
    private readonly string _action;
    private readonly ICachedRouteDataProvider<TPrimaryKey> _dataProvider;
    private readonly IMemoryCache _cache;
    private readonly IRouter _target;
    private readonly string _cacheKey;
    private object _lock = new object();

    public CachedRoute(
        string controller,
        string action,
        ICachedRouteDataProvider<TPrimaryKey> dataProvider,
        IMemoryCache cache,
        IRouter target)
    {
        if (string.IsNullOrWhiteSpace(controller))
            throw new ArgumentNullException("controller");
        if (string.IsNullOrWhiteSpace(action))
            throw new ArgumentNullException("action");
        if (dataProvider == null)
            throw new ArgumentNullException("dataProvider");
        if (cache == null)
            throw new ArgumentNullException("cache");
        if (target == null)
            throw new ArgumentNullException("target");
        _controller = controller;
        _action = action;
        _dataProvider = dataProvider;
        _cache = cache;
        _target = target;

        // Set Defaults
        CacheTimeoutInSeconds = 900;
        _cacheKey = "__" + this.GetType().Name + "_GetPageList_" + _controller + "_" + _action;
    }

    public int CacheTimeoutInSeconds { get; set; }

    public async Task RouteAsync(RouteContext context)
    {
        var requestPath = context.HttpContext.Request.Path.Value;
        if (!string.IsNullOrEmpty(requestPath) && requestPath[0] == '/')
        {
            // Trim the leading slash
            requestPath = requestPath.Substring(1);
        }
        // Get the page id that matches.
        TPrimaryKey id;
        //If this returns false, that means the URI did not match
        if (!GetPageList().TryGetValue(requestPath, out id))
        {
            return;
        }
        //Invoke MVC controller/action
        var routeData = context.RouteData;
        // TODO: You might want to use the page object (from the database) to
        // get both the controller and action, and possibly even an area.
        // Alternatively, you could create a route for each table and hard-code
        // this information.
        routeData.Values["controller"] = _controller;
        routeData.Values["action"] = _action;
        // This will be the primary key of the database row.
        // It might be an integer or a GUID.
        routeData.Values["id"] = id;

        await _target.RouteAsync(context);
    }

    public VirtualPathData GetVirtualPath(VirtualPathContext context)
    {
        VirtualPathData result = null;
        string virtualPath;

        if (TryFindMatch(GetPageList(), context.Values, out virtualPath))
        {
            result = new VirtualPathData(this, virtualPath);
        }
        return result;
    }

    private bool TryFindMatch(IDictionary<string, TPrimaryKey> pages, IDictionary<string, object> values, out string virtualPath)
    {
        virtualPath = string.Empty;
        TPrimaryKey id;
        object idObj;
        object controller;
        object action;

        if (!values.TryGetValue("id", out idObj))
        {
            return false;
        }
        id = SafeConvert<TPrimaryKey>(idObj);
        values.TryGetValue("controller", out controller);
        values.TryGetValue("action", out action);
        // The logic here should be the inverse of the logic in
        // RouteAsync(). So, we match the same controller, action, and id.
        // If we had additional route values there, we would take them all
        // into consideration during this step.
        if (action.Equals(_action) && controller.Equals(_controller))
        {
            // The 'OrDefault' case returns the default value of the type you're
            // iterating over. For value types, it will be a new instance of that type.
            // Since KeyValuePair<TKey, TValue> is a value type (i.e. a struct),
            // the 'OrDefault' case will not result in a null-reference exception.
            // Since TKey here is string, the .Key of that new instance will be null.
            virtualPath = pages.FirstOrDefault(x => x.Value.Equals(id)).Key;
            if (!string.IsNullOrEmpty(virtualPath))
            {
                return true;
            }
        }
        return false;
    }

    private IDictionary<string, TPrimaryKey> GetPageList()
    {
        IDictionary<string, TPrimaryKey> pages;
        if (!_cache.TryGetValue(_cacheKey, out pages))
        {
            // Only allow one thread to populate the data
            lock (_lock)
            {
                if (!_cache.TryGetValue(_cacheKey, out pages))
                {
                    pages = _dataProvider.GetPageToIdMap();
                    _cache.Set(_cacheKey, pages,
                        new MemoryCacheEntryOptions()
                        {
                            Priority = CacheItemPriority.NeverRemove,
                            AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(this.CacheTimeoutInSeconds)
                        });
                }
            }
        }
        return pages;
    }

    private static T SafeConvert<T>(object obj)
    {
        if (typeof(T).Equals(typeof(Guid)))
        {
            if (obj.GetType() == typeof(string))
            {
                return (T)(object)new Guid(obj.ToString());
            }
            return (T)(object)Guid.Empty;
        }
        return (T)Convert.ChangeType(obj, typeof(T));
    }
}
```
`LinkCachedRouteDataProvider`
-----------------------------
Here we have a simple service that retrieves the data from the database and loads it into a Dictionary. The most complicated part is the scope that needs to be setup in order to use `DbContext` from within the service.
```
public interface ICachedRouteDataProvider<TPrimaryKey>
{
    IDictionary<string, TPrimaryKey> GetPageToIdMap();
}

public class LinkCachedRouteDataProvider : ICachedRouteDataProvider<int>
{
    private readonly IServiceProvider serviceProvider;

    public LinkCachedRouteDataProvider(IServiceProvider serviceProvider)
    {
        this.serviceProvider = serviceProvider
            ?? throw new ArgumentNullException(nameof(serviceProvider));
    }

    public IDictionary<string, int> GetPageToIdMap()
    {
        using (var scope = serviceProvider.CreateScope())
        {
            var dbContext = scope.ServiceProvider.GetService<ApplicationDbContext>();

            return (from link in dbContext.Links
                    select new KeyValuePair<string, int>(
                        link.ShortenedLink.Trim('/'),
                        link.Id)
                    ).ToDictionary(pair => pair.Key, pair => pair.Value);
        }
    }
}
```
`RedirectController`
--------------------
Our redirect controller accepts the primary key as an `id` parameter and then looks up the database record to get the URL to redirect to.
```
public class RedirectController
{
private readonly ApplicationDbContext dbContext;
public RedirectController(ApplicationDbContext dbContext)
{
this.dbContext = dbContext
?? throw new ArgumentNullException(nameof(dbContext));
}
public IActionResult GoToFull(int id)
{
var link = dbContext.Links.FirstOrDefault(x => x.Id == id);
return new RedirectResult(link.FullLink);
}
}
```
>
> In a production scenario, you would probably want to make this a *permanent* redirect `return new RedirectResult(link.FullLink, true)`, but those are automatically cached by browsers which makes testing difficult.
>
>
>
`Startup.cs`
------------
We set up the `DbContext`, the memory cache, and the `LinkCachedRouteDataProvider` in our DI container for use later.
```
public void ConfigureServices(IServiceCollection services)
{
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
services.AddMvc();
services.AddMemoryCache();
services.AddSingleton<ICachedRouteDataProvider<int>, LinkCachedRouteDataProvider>();
}
```
And then we setup our routing using the `CachedRoute`, providing all dependencies.
```
app.UseMvc(routes =>
{
routes.Routes.Add(new CachedRoute<int>(
controller: "Redirect",
action: "GoToFull",
dataProvider: app.ApplicationServices.GetService<ICachedRouteDataProvider<int>>(),
cache: app.ApplicationServices.GetService<IMemoryCache>(),
target: routes.DefaultHandler)
// Set to 60 seconds of caching to make DB updates refresh quicker
{ CacheTimeoutInSeconds = 60 });
routes.MapRoute(
name: "default",
template: "{controller=Home}/{action=Index}/{id?}");
});
```
To build these short URLs on the user interface, you can use tag helpers (or HTML helpers) the same way you would with any other route:
```
@Url.Action("GoToFull", "Redirect", new { id = 1 })
```
Which is generated as:
```
/M81J1w0A
```
You can of course use a model to pass the `id` parameter into your view when it is generated.
```
@Url.Action("GoToFull", "Redirect", new { id = Model.Id })
```
I have made a [Demo on GitHub](https://github.com/username_2/ShortenedUrls). If you enter the short URLs into the browser, they will be redirected to the long URLs.
* `M81J1w0A` -> `https://maps.google.com/`
* `r33NW8K` -> `https://stackoverflow.com/`
I didn't create any of the views to update the URLs in the database, but that type of thing is covered in several tutorials such as [Get started with ASP.NET Core MVC and Entity Framework Core using Visual Studio](https://learn.microsoft.com/en-us/aspnet/core/data/ef-mvc/intro), and it doesn't look like you are having issues with that part.
References:
* [Get started with ASP.NET Core MVC and Entity Framework Core using Visual Studio](https://learn.microsoft.com/en-us/aspnet/core/data/ef-mvc/intro)
* [Change route collection of MVC6 after startup](https://stackoverflow.com/q/32565768)
* [MVC Routing template to represent infinite self-referential hierarchical category structure](https://stackoverflow.com/q/48122318)
* [Implementing a Custom IRouter in ASP.NET 5 (vNext) MVC 6](https://stackoverflow.com/q/32582232)
Upvotes: 0 |
2018/03/20 | 471 | 1,857 | <issue_start>username_0: I have an ES6 class in Ember 3.1 which is being handed an Ember Data object called `certificate`. I would like to be able to call `.reload()` on that certificate as follows:
```
@action
showCertificateInfo(this: DomainCard, certificate) {
this.setProperties({
isShowingCertificateModal: true,
selectedCert: certificate,
})
certificate
.reload()
.then(() => {
this.set('isShowingCertificateModal', true)
})
.catch(e => {
// TODO: handle this
})
}
```
However, if I do this, then Ember gives the following error/warning:
```
Assertion Failed: You attempted to access the 'reload' property
(of ...)... However in this case the object
in question is a special kind of Ember object (a proxy). Therefore,
it is still necessary to use `.get('reload')` in this case.
```
If I do as the code suggests and call `.get('reload')` instead, then I get an internal Ember error that `this` is not defined when calling `this._internalModel`. I get the same error when doing:
```
const reload = certificate.get('reload').bind(certificate)
reload().then()
...
```
What do I need to do to be able to reload this ember data object properly?<issue_comment>username_1: Pulling the content out of the proxy solved the problem:
```
const certContent = certificate.content || certificate
certContent.reload()
```
I'm still not sure why Ember 3.1 isn't able to work with the proxy properly, but this was the solution.
Upvotes: -1 <issue_comment>username_1: Actually, the fundamental problem seems to be that the certificate model had an async relationship to the domain model. By using `{async: false}`, we remove the need to get a PromiseProxy object returned to us, and with it the need to pull the `content` object out of the promise.
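For illustration only (the model and attribute names here are assumptions, not from the original post), the non-proxy relationship could be declared like this:
```js
// app/models/certificate.js (hypothetical)
import DS from 'ember-data';

export default DS.Model.extend({
  name: DS.attr('string'),
  // { async: false } returns the related record directly instead of a
  // PromiseProxy, so methods like reload() can be called on it as-is.
  domain: DS.belongsTo('domain', { async: false })
});
```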
Upvotes: 2 [selected_answer] |
2018/03/20 | 1,521 | 4,852 | <issue_start>username_0: Does anyone know if there is a way to run the code changes in a Laravel project **without** refreshing the page every time.
I know that to see the changes I need to
```
php artisan serve
```
but I do it every time and it is kind of frustrating.
Thank you anyways.<issue_comment>username_1: You can achieve this with [Laravel Mix](https://laravel.com/docs/mix).
According to [this part](https://laravel.com/docs/mix#browsersync-reloading) of the documentation, you need to edit your `webpack.mix.js` file, and add this to the end:
```
mix.browserSync('127.0.0.1:8000');
```
It needs to match the output of the `php artisan serve` command, where you will find a line something like this:
```
Laravel development server started: http://127.0.0.1:8000
```
After this, you have to run the `php artisan serve` and the `npm run watch` commands simultaneously. You must leave both commands running while you edit your files.
Note: The first time you run the `npm run watch`, it installs additional components. But the command output is quite clear on that. If everything is in order, Laravel Mix automatically opens your browser with `http://localhost:3000`, or something like this.
Upvotes: 7 [selected_answer]<issue_comment>username_2: **add in webpack.mix.js file in laravel**
```
mix.browserSync('127.0.0.1:8000');
```
**then run this command**
```
> npm install browser-sync browser-sync-webpack-plugin --save-dev --production=false
```
**after this run npm run watch**
```
> Browsersync will automatically run on port 3000
```
Upvotes: 3 <issue_comment>username_3: To achieve this you can use [Laravel Mix](https://gist.github.com/vades/28f7e487192b3fbb61c429efd200cf1f)
* Ensure that Node.js and NPM are installed:
>
> run node -v and npm -v.
>
>
>
* Install Laravel Mix
>
> npm install.
>
>
>
* Install browser-sync and browser-sync-webpack-plugin
>
> npm install browser-sync browser-sync-webpack-plugin --save-dev --production=false
>
>
>
* Open webpack.mix.js and add this line
>
> mix.browserSync('127.0.0.1:8000');.
>
>
>
* Run the project with these two commands. The **npm run watch** command will continue running in your terminal and watch all relevant CSS and JavaScript files for changes; Webpack will automatically recompile your assets when it detects a change.
>
> php artisan serve and then npm run watch
>
>
>
Upvotes: 1 <issue_comment>username_4: First make sure you have [Node.js](https://nodejs.org/en/) installed. After that, install laravel-mix:
```
npm install --save-dev laravel-mix
```
create `webpack.mix.js` file in root folder for your project and add to it
```js
const mix = require('laravel-mix');
mix.browserSync('127.0.0.1:8000');
```
Open the package.json file and add this to the scripts section:
```json
"scripts": {
"watch": "mix watch"
}
```
Run the Laravel project:
```
php artisan serve
```
To have the Laravel project update automatically when you make changes, run this in another terminal:
```
npm run watch
```
Updated from Laravel 9.x
========================
You can use `vite` instead of `laravel-mix`. You should run this command to install the dependencies:
```
npm install
```
Without any configuration, the next line is included automatically in the default master page. If you want to include it in another master page (like an admin layout), add it there so the page auto-refreshes when you make changes:
```
@vite(['resources/sass/app.scss', 'resources/js/app.js'])
```
After installing Vite, run this command:
```
npm run dev
```
And run
```
php artisan serve
```
For more information, [view docs](https://laravel.com/docs/9.x/vite)
Upvotes: 2 <issue_comment>username_5: The **Live Server** extension can help you easily achieve this.
1. Install **[Live Server](https://i.stack.imgur.com/NhCBD.png)** from the **VSCode marketplace**.
2. Install the [**Live Server Extension**](https://chrome.google.com/webstore/detail/live-server-web-extension/fiegdmejfepffgpnejdinekhfieaogmj/) in the **Chrome browser**, then edit it like [**this**](https://i.stack.imgur.com/eAE3v.png).
\*Note: In the **Live Server Web Chrome Extension**, the **Actual Server Address** is where `php artisan serve` is running; by default it is
>
> **<http://127.0.0.1:8000>**
>
>
>
And the **Live Server Address** is where your VSCode **Live Server** is running (mine is <http://127.0.0.1:5500>).
3. Open **VSCode**, press
>
> **Ctrl + Shift + P**
>
>
>
and enter "**change live**" then choose like [**this**](https://i.stack.imgur.com/7oZ7W.png), then choose your **Workspace** (your PHP file's parent directory)
\*Note: whenever you change your **Workspace** remember to do this **Step 3**
4. Done. Whenever you run the `php artisan serve` command, remember to turn on **Live Server** in **VSCode** like [**this**](https://i.stack.imgur.com/8ZW96.png). Your browser will auto-refresh whenever **VSCode** auto-saves.
Enjoy ;)
Upvotes: 1 |
2018/03/20 | 701 | 2,591 | <issue_start>username_0: EDIT: I am also using ui-router. Can I use resolve?
```
name: 'PLANSPONSOR.SUMMARY',
state: {
url: "/summary",
templateUrl: '../script/planSummary.html',
controller: "summaryCtrl",params:{obj:null}
}
}
]
```
I am trying to trigger the API before the directive in my controller.
The Directive needs my API to be called to get the data so that it can populate on the page.
When I load the page, the directive fires first because it is called in the HTML, and the API is only triggered afterwards.
Can anybody help me with using the $watch function, or do I need to use something else, so that the API call runs first and then the directive?
API CODE (trimmed for code sanity)
```
$timeout(function () {
  $http({
    method: 'GET',
    url: 'getPSDetails?psuid=' + $scope.psuId,
    //url: 'getPSDetails?controlNumber=' + $scope.DataEntered,
  }).success(function (data) {
    console.log('success response');
    $scope.summaryObject = data; // I am getting all the data here
  });
});
```
My Directive. (trimmed for code sanity)
```
myapp.directive('summaryFeatureDetails',function(){
return{
restrict: 'E',
scope:true,
controller:function($scope){
$scope.columnPerPage = 5;
$scope.currentPage = 0;
$scope.currentColumns = [];
$scope.controlData = [];
$scope.totalNumberOfPage = Math.floor($scope.selectedData.length/$scope.columnPerPage);
if (($scope.selectedData.length % $scope.columnPerPage) > 0){
$scope.totalNumberOfPage = $scope.totalNumberOfPage + 1;
}
}
}
```<issue_comment>username_1: If the directive loads before the `$scope.summaryObject` is set then make sure to load the directive **after** the object is set.
This can be done by simply adding an [ngIf](https://docs.angularjs.org/api/ng/directive/ngIf) expression on the directive tag, which checks the object value and only renders the HTML if the object is not null. For example (the element name follows from the directive's camel-case name):
```
<summary-feature-details ng-if="summaryObject"></summary-feature-details>
```
Upvotes: 1 <issue_comment>username_2: I will try an answer to help you with what I see. This is how I do my stuff:
Controller:
```
var vm = this;
vm.dataToDisplay = [];
...
$http({
method: 'GET',
url: 'getPSDetails?psuid='+$scope.psuId,
}).success(function (data) {
Array.prototype.push.apply(vm.dataToDisplay, data);
}
```
Directive:
```
myapp.directive('summaryFeatureDetails',function(){
return{
restrict: 'E',
scope: {
myData: '='
}
/* No controller */
```
HTML:
```
```
<!-- assuming the controller is registered with controllerAs: vm -->
<summary-feature-details my-data="vm.dataToDisplay"></summary-feature-details>
```
Upvotes: 0 |
2018/03/20 | 879 | 2,795 | <issue_start>username_0: I am creating a spreadsheet that a LARGE number of people are going to use who have no experience with Excel...
What I want to have happen is they scan an order number into a field and it will populate the information on all lines of their order. When they scan, the scanner only populates the first 8 digits on the order, and it does not pick up on how many lines are on the data.
So for example; The scanner will return FK560082 but the data from the system will say FK560082.001.8051 and if there are multiple lines on the order it will have FK560082.002.8051 and etc... (We have no limit on the number of lines allowed an order).
Right now, I used the formula below to break the order number away from the other details.
```
=IFERROR(LEFT(A2,FIND(".",A2,1)-1),A2)
```
Which allowed me to use this formula to get my first occurrence (or first line) of my order. However, I'm looking for a formula that will allow me to find data from my other line items too.
```
=IFERROR(INDEX('Current Orders'!F:F,MATCH('2'!A2,'Current Orders'!L:L,0)),"")
```
Since so many people are going to use this spreadsheet, I'd prefer to not have to train everyone on the ctrl+shift+enter of an array formula, but if that's all that's possible I'll make it work.<issue_comment>username_1: One or more FK560082.002.8051 values in 'Current Orders'!L:L. FK560082 in 2!A2. Additional information to be retrieved from 'Current Orders'!F:F.
Try,
```
=iferror(index('Current Orders'!F:F, aggregate(15, 6, row('Current Orders'!L$1:index('Current Orders'!L:L, match("zzz", 'Current Orders'!L:L)))/(left('Current Orders'!L$1:index('Current Orders'!L:L, match("zzz", 'Current Orders'!L:L)), len('2'!A$2))='2'!A$2), row(1:1))), text(,))
```
Drag down for subsequent invoice lines.
Upvotes: 2 <issue_comment>username_2: I think this can be done using INDIRECT and by searching through Current Orders beginning after the first line that was found.
These search result formulas can be used on sheet '2' or any other sheet.
Separate your search results into two columns: the result, and a column for the row that was found. I'll use B. First item on the order, cell B4 formula is =MATCH('2'!$A$2,'Current Orders'!L:L,0)
Second item in column B (cell B5) will be =IFERROR(MATCH('2'!$A$2, INDIRECT("'Current Orders'!L" & $B4 & ":L9999"),0),""). Fill down from cell B5.
Column A will be the actual value, for instance at A4 =IFERROR(INDEX('Current Orders'!F:F, $B4),""). Fill down from A4 onward.
Good luck!
Upvotes: 1 <issue_comment>username_3: Does this answer you question?
[<https://stackoverflow.com/a/18767728/9492960][1]>
```
=INDEX('Sheet2'!B:B,MATCH(1,INDEX((A1='Sheet2'!A:A)*(C1='Sheet2'!C:C),0),0))
```
Is an Index Match with multiple criteria without an array.
Upvotes: 0 |
2018/03/20 | 562 | 1,886 | <issue_start>username_0: I want to get an element from a mutable map and do an operation on it.
For example, I want to change its name value (the element in the map will then hold the new value)
and I want to return it at the end.
To start, I wrote working code, but it is very Java-like:
```
var newAppKey: AppKey = null
val appKey = myMap(request.appKeyId)
if (appKey != null) {
newAppKey = appKey.copy(name = request.appKeyName)
myMap.put(appKey.name, newAppKey)
newAppKey
} else {
newAppKey = null
}
```
This code works but it is very Java-like.
I thought about something like:
```
val newAppKey = appIdToApp(request.appKeyId) match {
case: Some(appKey) => appKey.copy(name = request.appKeyName)
case: None => None{AppKey}
}
```
This either doesn't compile, or it doesn't update the myMap object with the new value.
How do I improve it using Scala idioms?<issue_comment>username_1: There are a couple of mistakes in your code.
>
> case: Some(appKey) => appKey.copy(name = request.appKeyName)
>
>
>
This syntax for case is incorrect. It should be
```
case Some(appKey) => appKey.copy(name = request.appKeyName)
```
Also, the return type of your expression is currently `Any` (the Scala equivalent of Object), because your success case returns an object of appKey's type, whereas the failure case returns a `None`, which is of type `Option`. To make things consistent, your success case should return
```
Some(appKey.copy(name = request.appKeyName))
```
While there are better ways to deal with Options than pattern matching, the corrected code would be
```
val newAppKey = appIdToApp(request.appKeyId) map (appKey =>
appKey.copy(name = request.appKeyName))
```
Upvotes: 0 <issue_comment>username_2: Simply:
```
val key = request.appKeyId
val newValueOpt = myMap.get(key).map(_.copy(name = request.appKeyName))
newValueOpt.foreach(myMap.update(key, _))
```
Upvotes: 1 |
2018/03/20 | 318 | 1,241 | <issue_start>username_0: I need to clone/check out a git repository from a local server, but that repository contains a file with the following extension
>
> asmx?wsdl
>
>
>
And I receive an error message from git
```
error: unable to create file path/to/file/file.asmx?wsdl: Invalid argument
fatal: unable to checkout working tree
warning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry the checkout with 'git checkout -f HEAD'
```
How can I solve this? I need that file.<issue_comment>username_1: Since you have the local copy (on what we can call the server), you can go to the project directory on that server and remove the file from the repo without cloning it.
So you can use `git rm filename` and commit and push your changes. When you then try to clone the repo it should work without that file.
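A minimal sketch of those commands (the path comes from the error message above; the commit message is an assumption). Note the quotes, which stop the shell from treating `?` as a glob character:
```
git rm 'path/to/file/file.asmx?wsdl'
git commit -m "Remove file with '?' in its name"
git push
```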
Upvotes: 0 <issue_comment>username_2: I am assuming you are using Windows. Windows cannot create files with "?" in the filename. Checking out under Linux or macOS should work.
If desperate, you can get the file content with
```
git show master:path/to/file/file.asmx?wsdl
```
where "master" is a branch that contains the file.
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,260 | 4,115 | <issue_start>username_0: I got used to working in IDLE, but got a recommendation to use PyCharm instead.
As I'm just getting used to it, I have a question on accessing elements of a matrix.
I'm getting different string inputs from the user and filling the matrix with them;
```
from numpy import *
row=int(input())
col=int(input())
m=range(row*col)
m=reshape(m,(row,col))
for i in range(0,row):
for j in range(0,col):
el=int(input())
m[i][j]=el
```
What PyCharm is telling me:
Class 'ndarray' does not define '__getitem__', so the '[]' operator cannot be used on its instances.
This inspection detects names that should resolve but don't. Due to dynamic dispatch and duck typing, this is possible in a limited but useful number of cases. Top-level and class-level items are supported better than instance items.
Could anyone please explain to me how I can change/fix this?<issue_comment>username_1: Try this :
```
import numpy as np
row=int(input())
col=int(input())
m = []
for i in range(0,row):
for j in range(0,col):
el=int(input())
m.append(el)
m=np.reshape(m,(row,col))
```
Upvotes: 1 <issue_comment>username_2: You are using the vanilla `range` object in Python. It looks like you want to use the `numpy.arange` operation instead which creates a NumPy array for you. `range` does not give you what `numpy.arange` does.
You need to change that statement to use `arange`:
```
m=arange(row*col)
```
The reason why PyCharm is probably giving you that error is because you are implicitly converting from one type to another. This is due to their [type hinting](https://www.jetbrains.com/help/pycharm/type-hinting-in-pycharm.html) mechanism so it's good practice to produce the expected output immediately when you produce the array with `arange`.
However, this is quite superfluous as you are creating a `row x col` 2D array that is linearly increasing, but you end up replacing all of the elements anyway. Consider pre-allocating the matrix with `numpy.zeros` of the desired size instead:
```
m = zeros((row, col))
```
Also, it is very bad practice to do this: `from numpy import *`. This is because you may be importing other packages that could share the same function names between packages. Unless you absolutely know that doing the above style of import won't provide any conflicts, I would not recommend you do this. Instead, import NumPy with the alias `np` as many people do so:
```
import numpy as np
```
After, call any NumPy functions by accessing `np`:
```
import numpy as np
row=int(input())
col=int(input())
m=np.arange(row*col)
m=np.reshape(m,(row,col))
# or:
# m = np.zeros((row, col))
for i in range(0,row):
for j in range(0,col):
el=int(input())
m[i][j]=el
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Your code works, in IDLE, and `ipython`:
```
In [178]: row=3
...: col=2
...:
...: m=range(row*col)
...: m=np.reshape(m,(row,col))
...:
...: for i in range(0,row):
...: for j in range(0,col):
...: el=int(input())
...: m[i][j]=el
...:
1
2
3
4
5
6
In [179]: m
Out[179]:
array([[1, 2],
[3, 4],
[5, 6]])
```
If `pycharm` objects, it's because it's looking at style as much as syntax. I don't use it, so can't say how well it handles `numpy` constructs.
I would use `np.arange(row*col).reshape(row,col)` to generate `m`, but your use of `range` and `reshape` works as well. Actually since you are filling in all values from user input, `m = np.zeros((row,col),int)` works just as well.
Iterative input like that is slow and clumsy, but at least you aren't trying to use `np.append`.
The double indexing might be a problem for pycharm.
```
m[i][j] = e1 # works
m[i, j] = e1 # better
```
I have no idea why `pycharm` is complaining that `'ndarray' does not define 'getitem'.
```
In [184]: m.__getitem__
Out[184]:
```
It may just be that `pycharm` is objecting to `[i][j]`, but for rather convoluted reasons. I'd try the `[i,j]` syntax and see if the complaint goes away.
Upvotes: 1 |
2018/03/20 | 1,134 | 3,644 | <issue_start>username_0:
I need help adding a zero to the text box value if the input has fewer digits than the maximum length of the field. For example: if someone enters 1234 then it should add a zero to it and make it 01234. In the same way, if someone enters 12 then it should make it 00012 when the user moves out of the text field. We also need to make sure that if a user enters 00000 it is not accepted as input.
Thanks!
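For what it's worth, a minimal client-side sketch of the behavior described in the question (the selector and the field length of 5 are assumptions based on the examples):
```js
// Pad on blur; reject empty, non-numeric, and all-zero input.
const input = document.querySelector('input[maxlength]'); // selector assumed
input.addEventListener('blur', () => {
  const v = input.value.trim();
  if (!/^\d+$/.test(v) || Number(v) === 0) { // all zeros -> reject
    input.value = '';
    return;
  }
  input.value = v.padStart(input.maxLength, '0'); // e.g. 1234 -> 01234
});
```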
2018/03/20 | 779 | 2,843 | <issue_start>username_0: I'm using Monolog (<https://github.com/Seldaek/monolog>) in my project and we want to receive an email when there are errors in our application. The logs are working as expected but if the log level is ERROR (below log level is DEBUG), I want the logger to send me an email.
I tried to use the class NativeMailerHandler but it doesn't seem to work, and I would prefer to use our SMTP mail server (it works great in PHP but I can't figure out how to link it with the Monolog error handler).
```
$logger = new Logger('LOG');
$logHandler = new StreamHandler('synchro.log',Logger::DEBUG);
$logger->pushHandler($logHandler);
```<issue_comment>username_1: Well, I found a solution to my problem, maybe it will help someone one day:
Since the class NativeMailerHandler uses the mail() PHP function, I changed the code of the send() function in monolog/monolog/src/Monolog/Handler/NativeMailerHandler.php and declared my PHPMailer() there instead of the mail() function.
```
$mailHandler = new NativeMailerHandler('<EMAIL>', $subject, $from, Logger::ERROR, true, 70);
$logHandler = new StreamHandler('synchro.log',Logger::DEBUG);
$logger->pushHandler($logHandler);
$logger->pushHandler($mailHandler); // push the mail handler: at the log level defined (Logger::ERROR in this example), it will send an email to the address configured in the send() function in monolog/monolog/src/Monolog/Handler/NativeMailerHandler.php
```
Upvotes: 0 <issue_comment>username_2: I created PHPMailer handler for Monolog. It enables you to send logs to emails with PHPMailer.
It is available on [GitHub](https://github.com/filips123/MonologPHPMailer/) and [Packagist](https://packagist.org/packages/filips123/monolog-phpmailer/), but it can also be used without Composer (which requires manual installation of Monolog and PHPMailer).
```
<?php
use MonologPHPMailer\PHPMailerHandler;
use Monolog\Formatter\HtmlFormatter;
use Monolog\Logger;
use Monolog\Processor\IntrospectionProcessor;
use Monolog\Processor\MemoryUsageProcessor;
use Monolog\Processor\WebProcessor;
use PHPMailer\PHPMailer\PHPMailer;
require __DIR__ . '/vendor/autoload.php';
$mailer = new PHPMailer(true);
$logger = new Logger('logger');
$mailer->isSMTP();
$mailer->Host = 'smtp.example.com';
$mailer->SMTPAuth = true;
$mailer->Username = '<EMAIL>';
$mailer->Password = '<PASSWORD>';
$mailer->setFrom('<EMAIL>', 'Logging Server');
$mailer->addAddress('<EMAIL>', 'Your Name');
$logger->pushProcessor(new IntrospectionProcessor);
$logger->pushProcessor(new MemoryUsageProcessor);
$logger->pushProcessor(new WebProcessor);
$handler = new PHPMailerHandler($mailer);
$handler->setFormatter(new HtmlFormatter);
$logger->pushHandler($handler);
$logger->error('Error!');
$logger->alert('Something went wrong!');
```
Upvotes: 2 |
2018/03/20 | 781 | 2,706 | <issue_start>username_0: **Problem** :
I want to get the latest csv file in the downloads folder by typing `$LATEST`. When I dereference `$LATEST` I want to see the last csv file put in there.
**What I have tried** :
1. `'ls -t $DL/*.csv | head -1'` (this works)
2. `export $LATEST='ls -t $DL/*.csv | head -1'`
The problem with 2. is it always returns the latest file at the time export is run. (e.g. `old.csv`) When I add a new file (e.g. `new.csv`) I want `$LATEST` to show `new.csv` not `old.csv`.
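A minimal sketch of one way to get that deferred behavior (assuming `$DL` points at the downloads directory): define a shell function instead of exporting a variable, since a function body is re-evaluated on every call.
```
# In ~/.bashrc or similar (assumption): re-runs ls each time it is called.
latest() { ls -t "$DL"/*.csv | head -1; }

# Usage:
cat "$(latest)"
```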
2018/03/20 | 3,410 | 11,872 | <issue_start>username_0: Using Visual Studio 2017, AspNetCore 1.1.2
All of a sudden I am getting following error when I am trying to publish (Release build) any project in the solution:
>
> Assets file 'C:\example\obj\project.assets.json' doesn't have a target for
> '.NETFramework,Version=v4.5.2/win7-x86'. Ensure that restore has run
> and that you have included 'net452' in the TargetFrameworks for your
> project. You may also need to include 'win7-x86' in your project's
> RuntimeIdentifiers.
>
>
>
Have checked in the `project.assets.json` files, I have:
```
"targets": {
".NETFramework,Version=v4.5.2": {
```
and
```
"runtimes": {
"win7-x86": {
"#import": []
}
```
In the \*.csproj files I have:
```
<TargetFramework>net452</TargetFramework>
<PlatformTarget>x86</PlatformTarget>
```
Have made no changes to config in the projects. The only thing is that I have updated VS2017 to the latest version today, 15.6.3. Could this cause the issue?<issue_comment>username_1: According to the Microsoft blog (which, bizarrely, my account doesn't have permissions to post in), this *isn't* a bug, and is entirely caused by ReSharper. If you disable this, the problem goes away.
Errr, one problem: I'm getting this error, and I don't have ReSharper.
After a *lot* of hunting around, I found the reason I was getting the error on my .NET Core project which had been upgraded from 1.0 to 2.1.
When running my project in Debug or Release mode, everything worked fine, but when I tried to publish to Azure, I got that error:
>
> Assets file '(mikesproject)\obj\project.assets.json' doesn't have a target for '.NETCoreApp,Version=v2.0'. Ensure that restore has run and that you have included 'netcoreapp2.0' in the TargetFrameworks for your project.
>
>
>
Although I had updated the version of .NET Core to 2.1 in Project\Properties and upgraded the various nuget packages, there was one place which hadn't picked up this change... the Publish Profile file.
I needed to go into the `Properties\PublishProfiles` folder in my solution, open up the `.pubxml` file relating to the way I was publishing to Azure, and change this setting from `netcoreapp2.0` to `netcoreapp2.1`:
```
. . .
netcoreapp2.0
. . .
```
Ridiculous, hey?
I do wish Microsoft error messages gave some clue as to the source of problems like this.
Upvotes: 8 [selected_answer]<issue_comment>username_2: Restarting Visual Studio solved the error for me.
Upvotes: 7 <issue_comment>username_3: Right click on the project file, and click unload. Then right click on the project and reload.
Upvotes: 6 <issue_comment>username_4: For me the problem ended up being that one of my NuGet feeds was down, so a package was not getting updated properly. It wasn't until I ran a NuGet package restore directly on the solution that I saw any error messages related to my NuGet feed being down.
Upvotes: 2 <issue_comment>username_5: Restarting Visual Studio or unloading/reloading the project didn't work for me, but deleting the "obj" folder and then rebuilding seems to have fixed the issue.
Upvotes: 2 <issue_comment>username_6: Had this error in similar situation. This has helped me: [Link](https://github.com/dotnet/sdk/issues/1321#issuecomment-323606946)
This is my property group in \*.csproj file of my .net core 3.0 project:
```
Exe
netcoreapp3.0
win-x64 <----- SOLVES IT. Mandatory Line
```
Upvotes: 5 <issue_comment>username_7: Delete the publish profile you created and create a new one. The wizard will put in the correct targetframe work for you and publish again. It should solve the problem.
Upvotes: 4 <issue_comment>username_8: A colleague ran into this after upgrading an application from dotnet core 1.1 to dotnet core 2.1. He properly updated all the targetFramework references within the various csproj files, and had no issues on his local development machine. However, we run Azure DevOps Server and build agents on-premises, so the build agent was reporting this error after a pull request build was executed.
The `dotnet clean` task was throwing an error because of the new targeted framework. `dotnet clean` uses the same targets as build, publish, etc, so after a change in target frameworks the `dotnet restore` *must* happen before the `dotnet clean` to update the dependent files. In hindsight this makes sense because you want to restore dependencies to the proper target framework before you do any building or deploying.
This may only affect projects with upgraded target frameworks but I have not tested it.
Upvotes: 1 <issue_comment>username_9: To me, the error was caused because of an existing *global.json* file in the solution level folder, pointing to a different .NET version.
Removing that file (or changing its SDK version) resolved the problem
Upvotes: 2 <issue_comment>username_10: **Migrating from nuget 5.4 to nuget 5.8** solve the problem on my devops build server
Upvotes: 5 <issue_comment>username_11: Upgrading NuGet version from 5.5.1 to 5.8.0 fixed the issue.
Upvotes: 2 <issue_comment>username_12: You should try all the other solutions here first. Failing that you can try what eventually unblocked me when none of these did. I ran into this problem when porting a Jenkins build to an Azure DevOps pipeline on a pool of agents. It took about 60 builds before I tried every other possibility. I found out that needed to do two things:
1. Ensure the tooling was consistent for this specific project
2. Use a nuget restore friendly with the version of MSBuild used after [finding out that mattered](https://developercommunity2.visualstudio.com/t/error-NETSDK1005:-Assets-file-projecta/1248649?preview=true) yet I couldn't use the proposed workaround for just updated nuget tooling.
The versions I needed to use are likely different than yours.
1:
```
call choco install windows-adk-all --version=10.0.15063.0 --force
call choco install windows-sdk-10.1 --version=10.1.15063.468 --force
```
2:
```
call MSBuild -t:restore Solution.sln
call MSBuild Solution.sln /t:rebuild;pack /p:Configuration=Release /p:Platform="Any CPU"
```
Upvotes: 2 <issue_comment>username_13: Receiving similar error for 'netcoreapp3.1' when building using command line. It turned out to be an MsBuild switch that caused the issue. Specifically speaking:
```
/p:TargetFramework="netcoreapp3.1"
```
Removed the switch and the error was fixed.
Upvotes: 1 <issue_comment>username_14: I had similar issue, when I installed a new sdk version.
Exception was:
```
Severity Code Description Project File Line Suppression State Error NETSDK1005
Assets file '.. \RazorPages\obj\project.assets.json' doesn't have a target for
'netcoreapp3.1'. Ensure that restore has run and that you have included 'netcoreapp3.1'
in the TargetFrameworks for your project. RazorPages
C:\Program Files\dotnet\sdk\5.0.102\Sdks\Microsoft.NET.Sdk
\targets\Microsoft.PackageDependencyResolution.targets 241
```
Solution was to select again the target version of the project.
1. right click on solution
2. Properties\Application Tab
3. Change Target framework version to something different and change it back.
Upvotes: 2 <issue_comment>username_15: If your build script starts with a `dotnet restore` and ends with a `dotnet publish --no-restore`, you must make sure that they both include the same `--runtime` parameter.
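For example (the runtime identifier and options here are placeholders), both steps should name the same RID:
```
dotnet restore --runtime linux-x64
dotnet publish --no-restore --runtime linux-x64 -c Release
```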
Upvotes: 3 <issue_comment>username_16: I got this error when upgrading a web project from netcoreapp3.1 to net5.0.
One of the answers here pointed me in the right direction:
*The publish profile still had netcoreapp3.1 as target framework.* Edited the publish profile and changed target framework to net5.0 and it worked.
(Visual Studio 16.8)
Upvotes: 2 <issue_comment>username_17: In my case updating visual studio 2019 to the latest version, fixed the issue.
Upvotes: 0 <issue_comment>username_18: In my case, if you have `TargetFrameworks` and `TargetFramework` together in the csrpoj file, remove `TargetFramework` will solve the problem.
edit it from:
```xml
netstandard2.0;net461;
net461
```
to
```xml
netstandard2.0;net461;
```
Upvotes: 0 <issue_comment>username_19: From my experience if you have dependencies in your solution built using "ProjectSection(ProjectDependencies) = postProject"
then in this case dotnet build goes nuts.
Upvotes: 0 <issue_comment>username_20: I ran into the `NETSDK1047` when playing around with Docker in a brand new dotnet project created using `dotnet new worker` and the docker file from [dotnet-docker samples](https://github.com/dotnet/dotnet-docker/blob/3739ebf9f8fa2cc85eb3b73bc00fca467672771f/samples/dotnetapp/Dockerfile.alpine-x64-slim).
```
❯ docker build -t dockertest .
output elided...
/usr/share/dotnet/sdk/6.0.300/Sdks/Microsoft.NET.Sdk/targets/Microsoft.PackageDependencyResolution.targets(267,5): error NETSDK1047: Assets file '/source/obj/project.assets.json' doesn't have a target for 'net6.0/linux-musl-x64'. Ensure that restore has run and that you have included 'net6.0' in the TargetFrameworks for your project. You may also need to include 'linux-musl-x64' in your project's RuntimeIdentifiers. [/source/dockertest.csproj]
The command '/bin/sh -c dotnet publish -c release -o /app -r linux-musl-x64 --self-contained true --no-restore /p:PublishTrimmed=true /p:PublishReadyToRun=true /p:PublishSingleFile=true' returned a non-zero code: 1
dockertest on main [✘] via .NET v6.0.202 net6.0
❯
```
The issue was because I forgot to add a `.dockerignore` [file](https://github.com/username_20/dockertest/blob/08ef0c3cd7d984af20902245d9573fdf3601223b/.dockerignore) ignoring the `bin` and `obj` directories.
I only realized why because I tried different Dockerfiles from the `dotnet-docker` repo and got a [different error](https://learn.microsoft.com/en-us/dotnet/core/tools/sdk-errors/netsdk1064) which has this same resolution. I'll try to make a PR to the docs of `NETSDK1047` to add this resolution. edit: link to PR <https://github.com/dotnet/docs/pull/29530>
Upvotes: 0 <issue_comment>username_21: For me it was the happening because I had migrated my project from `.net5.0` to `.net6.0` and the problem was caused when I was publishing the project while debugging worked fine.
Checking the publishing profile showed that it had a configuration for `.net5.0` in it:
[](https://i.stack.imgur.com/yPjfn.png)
Changing the existing `.net core` version with the desired one resolved the issue:
---
[](https://i.stack.imgur.com/JWK3M.png)
---
Or you can directly change it by going into the publishing profile `.pubxml` file under `Properties > PublishingProfiles` directory.
[](https://i.stack.imgur.com/lsyJD.png)
Upvotes: 3 <issue_comment>username_22: Had the same problem, for me it was that I had a space before my `TargetFramework`
```
net6.0
```
Upvotes: 0 <issue_comment>username_23: I upgraded from netstandard to net6.0, when publishing had to change TargetFramework to net6.0
[](https://i.stack.imgur.com/9GljC.png)
Upvotes: 1 <issue_comment>username_24: Running `dotnet restore --packages .nuget` in the project directory fixed the issues for me.
Upvotes: 2 <issue_comment>username_25: On my end I had this issue with net6.0 at build time, even before trying to publish. Even if everything was pointing to csproj and it had all the right TargetFramework, the issue was that our repo had a Nuget.Config at it's root and it included a configuration for a local disk Nuget Repo of another programmer. I disabled the Nuget.Config file and I was able to build the project. It was probably unable to restore the Nuget Packages but the error message was misleading.
Upvotes: 0 <issue_comment>username_26: clean cache and restart VS then it worked for me.
Upvotes: -1 |
2018/03/20 | 2,812 | 8,215 | <issue_start>username_0: I am using the following code to run the k-means algorithm on the Iris flower dataset - <https://github.com/marcoscastro/kmeans/blob/master/kmeans.cpp>
I have modified the above code to read input from files. Below is my code -
```
#include <iostream>
#include <fstream>
#include <vector>
#include <math.h>
#include <stdlib.h>
#include <time.h>
#include <algorithm>

using namespace std;

class Point
{
private:
    int id_point, id_cluster;
    vector<double> values;
    int total_values;
    string name;

public:
    Point(int id_point, vector<double>& values, string name = "")
    {
        this->id_point = id_point;
        total_values = values.size();

        for(int i = 0; i < total_values; i++)
            this->values.push_back(values[i]);

        this->name = name;
        this->id_cluster = -1;
    }

    int getID()
    {
        return id_point;
    }

    void setCluster(int id_cluster)
    {
        this->id_cluster = id_cluster;
    }

    int getCluster()
    {
        return id_cluster;
    }

    double getValue(int index)
    {
        return values[index];
    }

    int getTotalValues()
    {
        return total_values;
    }

    void addValue(double value)
    {
        values.push_back(value);
    }

    string getName()
    {
        return name;
    }
};

class Cluster
{
private:
    int id_cluster;
    vector<double> central_values;
    vector<Point> points;

public:
    Cluster(int id_cluster, Point point)
    {
        this->id_cluster = id_cluster;
        int total_values = point.getTotalValues();

        for(int i = 0; i < total_values; i++)
            central_values.push_back(point.getValue(i));

        points.push_back(point);
    }

    void addPoint(Point point)
    {
        points.push_back(point);
    }

    bool removePoint(int id_point)
    {
        int total_points = points.size();

        for(int i = 0; i < total_points; i++)
        {
            if(points[i].getID() == id_point)
            {
                points.erase(points.begin() + i);
                return true;
            }
        }
        return false;
    }

    double getCentralValue(int index)
    {
        return central_values[index];
    }

    void setCentralValue(int index, double value)
    {
        central_values[index] = value;
    }

    Point getPoint(int index)
    {
        return points[index];
    }

    int getTotalPoints()
    {
        return points.size();
    }

    int getID()
    {
        return id_cluster;
    }
};

class KMeans
{
private:
    int K; // number of clusters
    int total_values, total_points, max_iterations;
    vector<Cluster> clusters;

    // return ID of nearest center (uses euclidean distance)
    int getIDNearestCenter(Point point)
    {
        double sum = 0.0, min_dist;
        int id_cluster_center = 0;

        for(int i = 0; i < total_values; i++)
        {
            sum += pow(clusters[0].getCentralValue(i) -
                       point.getValue(i), 2.0);
        }

        min_dist = sqrt(sum);

        for(int i = 1; i < K; i++)
        {
            double dist;
            sum = 0.0;

            for(int j = 0; j < total_values; j++)
            {
                sum += pow(clusters[i].getCentralValue(j) -
                           point.getValue(j), 2.0);
            }

            dist = sqrt(sum);

            if(dist < min_dist)
            {
                min_dist = dist;
                id_cluster_center = i;
            }
        }

        return id_cluster_center;
    }

public:
    KMeans(int K, int total_points, int total_values, int max_iterations)
    {
        this->K = K;
        this->total_points = total_points;
        this->total_values = total_values;
        this->max_iterations = max_iterations;
    }

    void run(vector<Point>& points)
    {
        if(K > total_points)
            return;

        vector<int> prohibited_indexes;
        printf("Inside run \n");

        // choose K distinct values for the centers of the clusters
        printf(" K distinct cluster\n");
        for(int i = 0; i < K; i++)
        {
            while(true)
            {
                int index_point = rand() % total_points;

                if(find(prohibited_indexes.begin(), prohibited_indexes.end(),
                        index_point) == prohibited_indexes.end())
                {
                    printf("i= %d\n", i);
                    prohibited_indexes.push_back(index_point);
                    points[index_point].setCluster(i);
                    Cluster cluster(i, points[index_point]);
                    clusters.push_back(cluster);
                    break;
                }
            }
        }

        int iter = 1;
        printf(" Each point to nearest cluster\n");
        while(true)
        {
            bool done = true;

            // associates each point to the nearest center
            for(int i = 0; i < total_points; i++)
            {
                int id_old_cluster = points[i].getCluster();
                int id_nearest_center = getIDNearestCenter(points[i]);

                if(id_old_cluster != id_nearest_center)
                {
                    if(id_old_cluster != -1)
                        clusters[id_old_cluster].removePoint(points[i].getID());

                    points[i].setCluster(id_nearest_center);
                    clusters[id_nearest_center].addPoint(points[i]);
                    done = false;
                }
            }

            // recalculating the center of each cluster
            for(int i = 0; i < K; i++)
            {
                for(int j = 0; j < total_values; j++)
                {
                    int total_points_cluster = clusters[i].getTotalPoints();
                    double sum = 0.0;

                    if(total_points_cluster > 0)
                    {
                        for(int p = 0; p < total_points_cluster; p++)
                            sum += clusters[i].getPoint(p).getValue(j);
                        clusters[i].setCentralValue(j, sum / total_points_cluster);
                    }
                }
            }

            if(done == true || iter >= max_iterations)
            {
                cout << "Break in iteration " << iter << "\n\n";
                break;
            }

            iter++;
        }

        // shows elements of clusters
        for(int i = 0; i < K; i++)
        {
            int total_points_cluster = clusters[i].getTotalPoints();

            cout << "Cluster " << clusters[i].getID() + 1 << endl;
            for(int j = 0; j < total_points_cluster; j++)
            {
                cout << "Point " << clusters[i].getPoint(j).getID() + 1 << ": ";
                for(int p = 0; p < total_values; p++)
                    cout << clusters[i].getPoint(j).getValue(p) << " ";

                string point_name = clusters[i].getPoint(j).getName();

                if(point_name != "")
                    cout << "- " << point_name;

                cout << endl;
            }

            cout << "Cluster values: ";
            for(int j = 0; j < total_values; j++)
                cout << clusters[i].getCentralValue(j) << " ";
            cout << "\n\n";
        }
    }
};

int main(int argc, char *argv[])
{
    srand(time(NULL));

    int total_points, total_values, K, max_iterations, has_name;

    ifstream inFile("datafile.txt");
    if (!inFile) {
        cerr << "Unable to open file datafile.txt";
        exit(1); // call system to stop
    }

    inFile >> total_points >> total_values >> K >> max_iterations >> has_name;
    cout << "Details- \n";

    vector<Point> points;
    string point_name, str;
    int i = 0;

    while(inFile.eof())
    {
        string temp;
        vector<double> values;

        for(int j = 0; j < total_values; j++)
        {
            double value;
            inFile >> value;
            values.push_back(value);
        }

        if(has_name)
        {
            inFile >> point_name;
            Point p(i, values, point_name);
            points.push_back(p);
            i++;
        }
        else
        {
            inFile >> temp;
            Point p(i, values);
            points.push_back(p);
            i++;
        }
    }

    inFile.close();

    KMeans kmeans(K, total_points, total_values, max_iterations);
    kmeans.run(points);

    return 0;
}
```
Output of code is -
```
Details-
15043100000Inside run
K distinct cluster i= 0
Segmentation fault
```
When I run it in gdb, the error shown is -
```
Program received signal SIGSEGV, Segmentation fault.
0x0000000000401db6 in Point::setCluster (this=0x540, id_cluster=0)
at kmeans.cpp:41
41 this->id_cluster = id_cluster;
```
I am stuck at this as I cannot find the cause for this segmentation fault.
My dataset file looks like -
```
150 4 3 10000 1
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
. . .
7.0,3.2,4.7,1.4,Iris-versicolor
6.4,3.2,4.5,1.5,Iris-versicolor
6.9,3.1,4.9,1.5,Iris-versicolor
5.5,2.3,4.0,1.3,Iris-versicolor
6.5,2.8,4.6,1.5,Iris-versicolor
. . .
```<issue_comment>username_1: `points[index_point].setCluster(i);` could be accessing the vector out of bounds. The code you quoted actually always sets a number of `total_points` in the vector `points` before calling `run`, while your modified code just reads until end of file and has no guarantees that the number of total points passed to the constructor of `KMeans` matches the value of entries in `points`. Either fix your file I/O or fix the logic of bounds checking.
Upvotes: 0 <issue_comment>username_2: in `KMeans::run(vector&)` you call `points[index_point].setCluster(i);` without any guarantee that `index_point` is within bounds.
`index_point` is determined by `int index_point = rand() % total_points;`, and `total_points` is retrieved from the input file "datafile.txt" which could be anything. It certainly does not have to match `points.size()`, but it should. Make sure it does, or just use `points.size()` instead.
A bit offtopic, but using [`rand()` and only using modulo](https://stackoverflow.com/a/10984975/3931225) is almost always wrong. If you use C++11 or newer, please consider using [std::uniform\_int\_distribution](http://www.cplusplus.com/reference/random/uniform_int_distribution/).
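A sketch of that approach (C++11; the helper name is illustrative, not from the original code):
```cpp
#include <random>
#include <vector>

// Returns a uniformly distributed index that is always within bounds
// (assumes size > 0).
int randomIndex(std::size_t size)
{
    static std::mt19937 gen{std::random_device{}()};
    std::uniform_int_distribution<std::size_t> dist(0, size - 1);
    return static_cast<int>(dist(gen));
}

// Inside KMeans::run, instead of rand() % total_points:
// int index_point = randomIndex(points.size());
```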
Upvotes: 2 [selected_answer] |
2018/03/20 | 444 | 1,685 | <issue_start>username_0: What to do?
I have other Android Studio projects where Alt+F7 works OK.
I guess something got broken in this particular project's settings.
Also, a class that is in use is shown (hovering the mouse over the class name) as "Class XX is never used".
Should I do a project clean with "Invalidate caches / Restart" from the File menu?
I don't like that because I'd lose all local file history, but if that's what it takes... Any ideas?
2018/03/20 | 2,834 | 9,734 | <issue_start>username_0: As far as I know, in C when using `printf()` we don't use `&`. Right? But in my program, if I don't use it in the display function it gives me an error. Can someone explain this? Thank you.
```
#include <stdio.h>
#define max 100
void enqueue();
char dequeue(void);
int front=-1,rear=-1,option;
char name[max][max],val;
void display(void);
void enqueue() {
printf("Enter the name of the paitent : ");
if(rear==max-1)
printf("Line is full");
else if (front==-1 && rear==-1)
front=rear=0;
else
rear++;
scanf("%s",&name[rear][rear]);
}
char dequeue(void) {
char val;
if(front==-1 || front >rear )
printf("Line is empty");
else
{
val=name[front];
front++;
if(front>rear)
front=rear=-1;
return val;
}
}
void display(void) {
int i;
if(front==-1|| front >rear)
printf("The queue is empty");
else
{
for(i=front; i<=rear; i++) {
printf("%s\t",&name[i][i]);
}
}
}
int main () {
printf("\n\n\*\*\*\*\*\*\*Medical Cneter\*\*\*\*\*\*\*\*");
printf("\n\t1. Insert a paitent");
printf("\n\t2. Remove a paitent");
printf("\n\t3. Check Paitent list");
printf("\n\t4. Display");
printf("\n\t5. Exit");
do {
printf("\nEnter your option: ");
scanf("%d",&option);
switch(option)
{
case 1:
enqueue();
break;
case 2:
dequeue();
break;
// case 3:
case 4:
display();
break;
}
} while(option !=5);
}
```
If I don't use `&` the program will crash. As far as I know, in C when using `printf()` we don't use `&`. But in my program, if I don't use it in the display function it gives me an error. Can someone explain this? Thank you.<issue_comment>username_1: The `&` operator means "take the reference of". `scanf` modifies its arguments, therefore you need to give a reference to the arguments you want modified. `printf` does not modify its arguments, therefore you can pass its arguments by value, because it can do its job with a copy.
However, this is not a hard rule, as the format specifier `%p` will print a pointer (i.e., a reference). So `printf` might need a reference to a variable in some cases.
A more general rule would be this one: `scanf` takes its arguments by reference, `printf` takes its arguments by value.
Upvotes: -1 <issue_comment>username_2: To answer your question, let's review what the `&` operator does, and what types of values we need to pass to `printf`, and also, for comparison, what types of values we need to pass to `scanf`.
If you have a thing `x`, then the expression `&x` gives you a pointer to `x`. If `x` is an `int`, `&x` gives a pointer-to-`int`. If `x` is a `char`, `&x` gives pointer-to-`char`.
For example, if I write
```
int i;
int *ip;
ip = &i
```
I have declared an `int` variable `i`, and a pointer-to-int variable `ip`. I have used the `&` operator to make a pointer to `i`, and I have stored that pointer-to-`int` in the variable `ip`. This is all fine.
As you may know, when you call `scanf` you always have to pass pointers to the variables which you want `scanf` to fill in for you. You can't write
```
scanf("%d %d", x, y); /* WRONG */
```
because that would pass the values of the variables `x` and `y` to `scanf`. But you don't want to pass values *to* `scanf`, you want `scanf` to read some values from the user, and transmit them back to you. In fact, you want `scanf` to write *to* your variables `x` and `y`. That's why you pass pointers to `x` and `y`, so that `scanf` can use the pointers to fill in your variables. So that's why you almost always see `&`'s on the arguments in `scanf` calls.
But none of those reasons applies to `printf`. When you call `printf`, you *do* want to pass ordinary values to it. You're not (usually) asking `printf` to pass any data back to you. So most of the `printf` format specifiers are defined as accepting ordinary values, *not* pointers-to-values. So that's why you hardly ever see `&`'s in `printf` calls.
Now, you might think of `&` as an operator that "converts things" to pointers, but that's not really a good way of thinking about it. As I said, given an object `x`, the expression `&x` *constructs* a pointer to `x`. It doesn't "convert" anything; it certainly doesn't "convert" x. It constructs a brand-new pointer value, pointing to `x`.
In the code you posted, it looks like you might have used `&` in an attempt to perform such a "conversion". You had an array `name` of type array-of-array-of-`char`, or a two-dimensional array of characters. You were trying to print a string with `%s`. You knew, or perhaps your compiler warned you, that `%s` needs a pointer-to-char, and you knew (or your compiler told you) that the expression `name[i][i]` gave a value of type `char`. Now, putting a `&` in front of `name[i][i]` did indeed get a value of type pointer-to-`char`, as `%s` requires, and it might even have seemed to work, but it's a pretty haphazard solution.
It's true that `printf`'s `%s` needs a pointer-to-`char`, but it doesn't need just *any* pointer-to-`char`; it needs a pointer-to-`char` that points to a valid, null-terminated string. And that's why, even though `%s` needs a pointer, you still don't usually see `&`'s in `printf` calls. You could use `&` on a single character variable to get a pointer to that character, like this:
```
char c = 'x';
printf("%s", &c); /* WRONG */
```
But this is broken code, and won't work properly, because the pointer you get is not to a valid string, because there's no null termination.
In your code, you *probably* want to change the line
```
printf("%s\t",&name[i][i]);
```
to
```
printf("%s\t",name[i]);
```
`name` is a two-dimensional array of `char`, but since a string in C is an array of `char`, you can also think of `name` as being a (single dimensional) array of strings, and I think that's how you're trying to use it.
Similarly, I suspect you want to change the line
```
scanf("%s",&name[rear][rear]);
```
to
```
scanf("%s", name[rear]);
```
But before you say "I thought `scanf` always needed `&`!", remember, the rule is that `scanf` needs a *pointer*. And since `name[i]` is an array, you automatically get a pointer to its first element when you pass it to `scanf` (or in fact when you use it in any expression). (And this is also the reasoning behind `printf("%s\t",name[i])`.)
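A small sketch of that equivalence (the `%99s` width limit is just good practice for a 100-char buffer):
```
char buf[100];
scanf("%99s", buf);     /* buf decays to a pointer to its first element, */
scanf("%99s", &buf[0]); /* so these two calls are equivalent */
```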
If you wanted to use an explicit `&`, what you want is a pointer to the *beginning* of the string array you want `scanf` to fill in, so I think you'd want
```
scanf("%s", &name[rear][0]);
```
instead of the expression you had.
(It looks like you were, probably accidentally, running your strings down the diagonal of the `name` array, instead of down the left edge. It's also curious that you declared the perfectly square array
```
char name[max][max];
```
that is, 100 strings of 100 characters each. It's not wrong, and it'll work, but it's curious, and it makes it easier to mix up the rows and the columns.)
Upvotes: 4 [selected_answer]<issue_comment>username_3: I tried compiling your code and it seems I'm getting the error:
```
warning: assignment makes integer from pointer without a cast [-Wint-conversion]
val=name[front];
^
```
The reason is that `name` is a 2D array and you are assigning a pointer to a 1D array of chars to a char.
I think that in order to fully understand the problem one would have to understand how C deals with memory and pointers.
The '&' symbol means that you are taking the address of something (not to be confused with C++ references).
This means that if a function has an argument like the following:
```
void foo(char* bar);
```
and you want to pass your variable `char var;` to the function, you would have to call the function using the '&' symbol to explicitly tell the compiler that you want to pass the address of `var` as the parameter to the function.
```
foo(&var);
```
if you want to access the value of `var` inside function `foo` you would have to "dereference" the pointer so that the compiler understands that you want the value stored at the address you just passed to the function.
In order to dereference a pointer you use the '\*' symbol.
```
char some_char_in_foo = *var;
```
I suggest reading up on pointers to clarify the matter further.
Upvotes: 0 <issue_comment>username_4: The `%s` conversion specifier in both `printf` and `scanf` expects it's corresponding argument to have type `char *`, and to point to the first character in a *string*.
Remember that in C, a *string* is a sequence of characters including a 0-valued terminator. Strings are *stored* in arrays of `char` (or `wchar_t` for "wide" strings).
Your `name` array can store `max` strings, each of which can be up to `max`-1 characters long, not counting the string terminator.
When an array *expression* is not the operand of the `sizeof` or unary `&` operators, or isn't a string literal used to initialize a character array in a declaration, the expression is implicitly converted ("decays") from type "array of `char`" to "pointer to `char`", and the value of the expression is the address of the first element.
`name` has type "`max`-element array of `max`-element array of `char`". This means that each `name[i]` has type "`max`-element array of `char`", which, if it's not the operand of `&` or `sizeof`, "decays" to type `char *`.
That's basically a long-winded way of saying that the *expression* `name[i]` is equivalent to `&name[i][0]`. So you can write your `printf` statement as
```
printf( "%s\t", name[i] );
```
Similarly, you can rewrite your `scanf` statement in the `enqueue` function as
```
scanf( "%s", name[i] );
```
although honestly, for user input it's safer to use `fgets`:
```
if ( fgets( name[rear], sizeof name[rear], stdin ) )
// successful input
else
// EOF or error on read
```
Upvotes: 0 |
2018/03/20 | 970 | 2,290 | <issue_start>username_0: I have an array which is a set of coordinates, and I want to swap them, and generate a new array.
I want to swap only the regions entries:
```
var Regions = [
{"group": "region_01",
"coords": [
[3110, 2323],
[3119, 2344],
[3117, 2385],
[3110, 2417],
[3110, 2323]
]}]
```
So it should be like this:
```
var Regions = [
{"group": "region_01",
"coords": [
[2323, 3110],
[2344, 3119],
[2385, 3117],
[2417, 3110],
[2323, 3110]
]}]
```
The reverse method doesn't work in this type of array right? What is the best way to do this?<issue_comment>username_1: You can use nested [`Array.forEach()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach) calls to reverse the sub arrays in place:
```js
const Regions = [{"group":"region_01","coords":[[2323,3110],[2344,3119],[2385,3117],[2417,3110],[2323,3110]]}];
Regions.forEach(({ coords }) =>
coords.forEach((arr) => arr.reverse())
);
console.log(Regions);
```
Upvotes: 1 <issue_comment>username_2: You can use the function `reverse` to modify the original array.
```js
var Regions = [
{"group": "region_01",
"coords": [
[3110, 2323],
[3119, 2344],
[3117, 2385],
[3110, 2417],
[3110, 2323]
]}];
Regions.forEach((r) => r.coords.forEach(c => c.reverse()));
console.log(Regions);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
If you don't want to use the function `reverse()`, you can use the function `map` to make the swap.
```js
var Regions = [
{"group": "region_01",
"coords": [
[3110, 2323],
[3119, 2344],
[3117, 2385],
[3110, 2417],
[3110, 2323]
]}];
Regions.forEach((r) => r.coords = r.coords.map(([left, right]) => [right, left]));
console.log(Regions);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 1 [selected_answer]<issue_comment>username_3: ```
var Regions = {
group: "region_01",
coords: [
[3110, 2323],
[3119, 2344],
[3117, 2385],
[3110, 2417],
[3110, 2323]
]
}
// Swap each coordinate pair in place. (Here Regions is a single object; with the
// original array of objects, loop over each entry's coords the same way.)
for (var i = 0, len = Regions.coords.length; i < len; i++) {
  var coord = Regions.coords[i];
  // Array-literal swap trick: the first element saves the old coord[0], the second
  // assigns coord[1] into coord[0]; the final [0] writes the saved value to coord[1].
  coord[1] = [coord[0], coord[0] = coord[1]][0];
}
```
Upvotes: 0 |
2018/03/20 | 303 | 977 | <issue_start>username_0: I'm working on a program in Pike and looking for a method like *endswith()* in Python: a method that tells me the extension of a given file.
Can someone help me with that?
thank you<issue_comment>username_1: extract the end of the string, and compare it with the desired extension:
```
"hello.html"[<4..] == ".html"
```
(`<4` counts from the end of the string/array)
Upvotes: 0 <issue_comment>username_2: Python's *endswith()* is something like Pike's *has\_suffix(string s, string suffix)*:
```
has_suffix("index.html", ".html");
```
Reference:
<http://pike.lysator.liu.se/generated/manual/modref/ex/predef_3A_3A/has_suffix.html>
Upvotes: 1 <issue_comment>username_3: If you want to see what the extension of a file is, just find the last dot and get the substring after it, e.g. `(str/".")[-1]`
If you just want to check if the file is of a certain extension, using has\_suffix() is a good way, e.g. `has_suffix(str, ".html")`
Upvotes: 0 |
2018/03/20 | 303 | 1,001 | <issue_start>username_0: This is for a battleship program, and the height/width of the array is passed in through a different method (called promptInt). I'm not sure how to go about calling a method to determine the size of the array.
2018/03/20 | 323 | 1,114 | <issue_start>username_0: I want to trigger a Python script from my Spring Boot microservice in an asynchronous manner, so that my microservice will be notified once the execution of the Python script completes. Can anyone suggest the best approach for this? It would be appreciated if anyone could provide a reference to sample code.
Thanks in advance!!!
Thanks,
Sudheer
2018/03/20 | 1,244 | 4,818 | <issue_start>username_0: So in AngularJS you had the possibility to define a directive and bind the HTML template to an already existing controller. In principle this meant you could reuse the controller for multiple directives, and therefore for multiple templates.
```
angular
.module('App')
.component('Name', {
templateUrl: 'some.html',
controller: 'someController'
});
```
How can this be done in Angular? As far as I understand it, in Angular components are directives and always directly bind the HTML. Basically I want to use another view which only changes the HTML but keeps the same functionality.
Edit:
Basically I want this:
```
@Component({
selector: 'jhi-exercise-list',
templateUrl: '(MULTIPLE HTML TEMPLATE PATHS HERE)',
providers: [
]
})
export class className{
//USING THE SAME COMPONENT CODE FOR THE MULTIPLE TEMPLATES
constructor(){}
}
```
The only option I have found so far would be through extending the component, but I think that's overkill.<issue_comment>username_1: In AngularJS there is a directive for this, `ng-if`, or something like that.
In Angular 4, one way you can do this is:
```
<div *ngIf="x === 1">
  <!-- first template -->
</div>
<div *ngIf="x === 2">
  <!-- second template -->
</div>
```
And you can pass the value of `x` according to your need.
Added Solution
```
import { Component, Input } from '@angular/core';
import { Blogger } from '../models/blogger';
import { BloggerProfileService } from '../service/blogger-profile.service';
import { SharedService } from '../service/shared.service';
import { AuthService } from '../auth-service.service';
import { Router } from '@angular/router';
@Component({
selector: 'article-author-panel',
templateUrl: './article-author-panel.component.html',
styleUrls: ['./article-author-panel.component.css']
})
export class ArticleAuthorPanelComponent {
@Input('templateType') templateType;
author;
blogger : Blogger;
articleDt;
user;
@Input()
get articleDate(){
return this.articleDt;
}
set articleDate(date){
this.articleDt = this.sharedService.getUTCDate(new Date(date));
}
@Input()
set articleAuthor(author) {
this.author = author;
this.bloggerProfileService.getBlogger(author)
.take(1)
.subscribe((blogger) => {
this.blogger = blogger;
})
}
get articleAuthor() {
return this.author;
}
constructor(
private bloggerProfileService: BloggerProfileService,
private sharedService: SharedService,
private router : Router,
private authService: AuthService
) {
authService.user$
.take(1)
.subscribe((user) => {
if(user) this.user = user;
});
}
clickFollow(){
if(this.user === null || this.user === undefined){
this.router.navigate(['/']);
}
}
}
```
Template
```
<!-- The exact tags were lost in formatting; this assumes the two layouts are switched on the templateType input -->
<div *ngIf="templateType === 1">
  <!-- blogger image -->
  {{ blogger.name }}
  {{ blogger.summary }}
  <button (click)="clickFollow()">Follow</button>
</div>
<div *ngIf="templateType === 2">
  <!-- blogger image -->
  {{ blogger.name }}
  {{ blogger.summary }}
  {{ articleDate }}
  <button (click)="clickFollow()">Follow</button>
</div>
```
This is a component I created where the layout changes according to where I place it. You can consider it as two different HTML layouts connected together with a normal if/else, keeping the code behind the same. Depending on the layout, not every variable is used. Hope this helps.
Upvotes: -1 <issue_comment>username_2: ```
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { LoadingComponent } from './loading.component'
@NgModule({
imports: [
CommonModule
],
declarations: [LoadingComponent],
exports: [LoadingComponent]
})
export class LoadingModule { }
```
In this case I am planning to reuse the loading component in multiple places. I will import this module in each specific area where I intend to use the loading component.
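For instance, a feature module that needs the spinner would simply import it (a sketch; the importing module's name is illustrative):
```
import { NgModule } from '@angular/core';
import { LoadingModule } from './loading.module';

@NgModule({
  imports: [LoadingModule] // LoadingComponent is now usable in this module's templates
})
export class SomeFeatureModule { }
```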
Upvotes: -1 <issue_comment>username_3: Define `templateUrl` as a function
----------------------------------
You can specify `templateUrl` as a string representing the URL **or as a function** which takes two arguments `tElement` and `tAttrs`.
The function takes the following arguments:
* `tElement` - template element - The element where the directive has been declared.
* `tAttrs` - template attributes - Normalized list of attributes declared on this element.
For more information, see [AngularJS Comprehensive Directive API Reference - templateURL](https://docs.angularjs.org/api/ng/service/$compile#-templateurl-)
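A minimal sketch of that approach (the attribute name `layout` and the template file names are placeholders, not part of the API):
```
angular
  .module('App')
  .directive('myWidget', function () {
    return {
      controller: 'someController',
      templateUrl: function (tElement, tAttrs) {
        // pick a template based on an attribute declared on the element
        return tAttrs.layout === 'compact' ? 'compact.html' : 'full.html';
      }
    };
  });
```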
Upvotes: 0 <issue_comment>username_4: After more research it still seems that the best solution I could come up with is using inheritance.
<https://coryrylan.com/blog/angular-component-inheritance-and-template-swapping>
He has a rather nice description of it
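In short, the pattern looks roughly like this (a minimal sketch with illustrative names; the base class carries the shared logic and each subclass only swaps the template):
```
import { Component } from '@angular/core';

@Component({ selector: 'jhi-exercise-list-base', template: '' })
export class ExerciseListBaseComponent {
  exercises: string[] = []; // shared state and logic live here
}

@Component({
  selector: 'jhi-exercise-list-cards',
  templateUrl: './exercise-list-cards.component.html'
})
export class ExerciseListCardsComponent extends ExerciseListBaseComponent {}
```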
Thank you to all who helped me ;)
Upvotes: 0 |
2018/03/20 | 1,661 | 6,164 | <issue_start>username_0: I am building a React application to connect to and display data from an MQTT server.
I have implemented the basic connection code in `mqtt/actions.js`. See below:
```
const client = mqtt.connect(options);
client.on('connect', function () {
mqttConnectionState('MQTT_CONNECTED')
client.subscribe(['btemp', 'otemp'], (err, granted) => {
if (err) alert(err)
console.log(`Subscribed to: otemp & btemp topics`)
})
})
client.on('message', function (topic, message) {
updateTemp({topic: topic, value: message.toString()})
});
const mqttConnectionState = (action, err = null) => {
return {
type: action,
payload: err
}
}
```
I am looking to on button press initiate the mqtt connection and then dispatch a connection success event.
However with the above code I am unsure exactly how this would work.
I could move the connect line `const client = mqtt.connect(options);` into a function and run that function on button click, but then the `client.on` handlers would not be able to see the `client` const.
What is the best way to approach this?
I am using React.JS, Redux and the MQTT.JS libraries.
Update: Trying to dispatch an action when a message is received
Reducer:
```
const createClient = () => {
const client = mqtt.connect(options);
client.on('connect', function () {
mqttConnectionState('MQTT_CONNECTED')
client.subscribe(['btemp', 'otemp'], (err, granted) => {
if (err) alert(err)
console.log(`Subscribed to: otemp & btemp topics`)
});
});
client.on('message', (topic, message) => {
console.log('message received from mqtt')
processMessage({topic, message})
})
return client;
}
case MESSAGE_RECEIVED:
console.log('message received')
messageReceived(payload)
return state;
```
Actions:
```
export const processMessage = (data) => dispatch => {
console.log('Processing Message')
return {
type: 'MESSAGE_RECEIVED',
payload: data
}
}
```
`message received from mqtt` log each time a message arrives, however `processMessage({topic, message})` never executes as `Processing Message` never logs to the console<issue_comment>username_1: "Actions are payloads of information that send data from your application to your store" ([docs](https://redux.js.org/basics/actions))
So you have to create the `client` in the reducer function. Put it on the Redux `state` like this:
```
initialState = {
client: null
}
```
and you reducer.js file should look like this:
```
import {
mqttConnectionState
} from './actions'
let initialState = {
client: null ,
err: null
}
const createClient = () => {
const client = mqtt.connect(options);
client.on('connect', function () {
mqttConnectionState('MQTT_CONNECTED')
client.subscribe(['btemp', 'otemp'], (err, granted) => {
if (err) alert(err)
console.log(`Subscribed to: otemp & btemp topics`)
});
});
return client;
}
function app(state = initialState, action) {
switch (action.type) {
case 'INIT_CONNECTION':
return {
...state,
client: createClient()
}
case 'MQTT_CONNECTED':
return {
...state,
err: action.payload
}
default:
return state
}
}
```
and you actions.js:
```
...
const mqttConnectionInit = () => {
return {
type: 'INIT_CONNECTION'
}
}
const mqttConnectionState = (err = null) => {
return {
type: 'MQTT_CONNECTED',
payload: err
}
}
...
```
this way you can dispatch the action mqttConnectionInit in the onclick button event.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Dispatching Redux actions inside a reducer may clear your store. I mean, setting it's state to what you have under `initialState`. And require cycles are no-op.
Yesterday I did some changes in my code, because I've tried solution above and ended up with a warning "require cycles are allowed but can result in uninitialized values". I moved mqtt connection related code into the middleware.
```
import { DEVICE_TYPE, HOST, PASSWORD, PORT, USER_NAME } from '../utils/variables';
import { mqttConnectionInit, mqttConnectionState } from '../actions/devices';
import mqtt from 'mqtt/dist/mqtt';
import { SIGNED_IN } from '../constants/types';
const MqttMiddleware = store => next => action => {
if (action.type == SIGNED_IN) {
store.dispatch(mqttConnectionInit());
const client = mqtt.connect(`ws://${HOST}:${PORT}`, { username: USER_NAME, password: <PASSWORD> });
client.on('connect', function () {
let license = store.getState().auth.license;
store.dispatch(mqttConnectionState(client));
client.subscribe(`/${USER_NAME}/${license}/+/${DEVICE_TYPE}/#`);
});
client.on('message', ((topic, payload) => {
const device = JSON.parse(message(topic, payload.toString()));
console.log(device);
}));
}
next(action);
};
export function message(message, value) {
const msg = message.split('/');
return JSON.stringify({
"id": msg[3],
"user": msg[1],
"license": msg[2],
"type": msg[4],
"name": msg[5],
"value": value == "0" ? 0 : (value.match(/[A-Za-z]/) ? value : Number(value))
});
}
export default MqttMiddleware;
```
You can do pretty much all you want with the store.
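For completeness, the middleware has to be registered when the store is created (a sketch; the file paths are assumptions):
```
import { createStore, applyMiddleware } from 'redux';
import rootReducer from './reducers';
import MqttMiddleware from './middlewares/MqttMiddleware';

const store = createStore(rootReducer, applyMiddleware(MqttMiddleware));
```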
```
actions.js
```
```
import { INIT_CONNECTION, MQTT_CONNECTED } from '../constants/types'
export const mqttConnectionInit = () => {
return {
type: INIT_CONNECTION
}
}
export const mqttConnectionState = (client, err = null) => {
return {
type: MQTT_CONNECTED,
error: err,
client: client,
}
}
```
```
reducers.js
```
```
import { INIT_CONNECTION, MQTT_CONNECTED } from '../constants/types';
const mqttReducer = (state = initialState, action) => {
switch (action.type) {
case INIT_CONNECTION:
return {
...state,
client: null,
};
case MQTT_CONNECTED:
return {
...state,
err: action.error,
client: action.client,
};
default:
return state;
}
}
const initialState = {
client: null,
err: null,
}
export default mqttReducer;
```
Upvotes: 1 |
2018/03/20 | 370 | 1,302 | <issue_start>username_0: Alright, I have a terms-of-service modal, which is an ngBootstrap modal, and when I press the button that closes the modal, I want that action to define whether the checkbox is checked or not.
This is the html:
```
#### Terms of service.
×
I accept.
```
the link to open the modal and the checkbox
```
I have read
[the terms of service](javascript:void(0)).
```
And under it I have `{{accepted}}` just for testing
And the typescript
```
accepted: boolean = false;
constructor(private modalService: NgbModal) {}
open(content) {
this.modalService.open(content);
}
setAccepted(accepted:boolean){
this.accepted = accepted;
}
```
I tried binding `[(ngModel)]`, `*ngIf`, and `ngModel` to the `accepted` boolean from my TypeScript, but nothing seems to work.<issue_comment>username_1: Hmm, one comment... You have...
```
I accept.
```
Should be
```
I accept.
```
At least this is how I do it for a button click; you cannot have two separate `(click)` handlers. I have used this successfully on my own personal site.
Upvotes: 0 <issue_comment>username_2: Use the `[checked]` input property or attribute.
Use a boolean to check or uncheck the checkbox.
In Template:
```
<input type="checkbox" [checked]="accepted">
```
In TS:
```
accepted: boolean = true; // or false
```
Upvotes: 4 [selected_answer] |
2018/03/20 | 528 | 1,838 | <issue_start>username_0: I have a div with the id #one. I set the background color to be red. I want the option to change this to a random color when the div is clicked on, so I made a function to do this. My question:
Why does it not work when I write 'background-color' instead of 'backgroundColor' in JS? If I write 'background-color', then I get an error saying that there's a bad assignment on the left of the operator. Thanks in advance!
Code:
<https://codepen.io/simonrevill/pen/pLebyj>
```
//This works:
var col = '#'+Math.floor(Math.random()*16777215).toString(16);
function change() {
document.getElementById("one").style.backgroundColor=col;
}
//This doesn't:
var col = '#'+Math.floor(Math.random()*16777215).toString(16);
function change() {
document.getElementById("one").style.background-color=col;
}
```<issue_comment>username_1: JavaScript properties and variables use camelCase syntax. Moreover, dashes are forbidden here because they would be interpreted as a subtraction operator. That's why you get this error.
Upvotes: 0 <issue_comment>username_2: To specify a CSS property in JavaScript that contains a dash, simply remove the dash. For example, `background-color` becomes `backgroundColor`, the `border-radius` property transforms into `borderRadius`, and so on.
You can't use `-` in JavaScript because `-` is a special keyword. For example, if you are using:
```
background-color
```
It means you are subtracting two variables `background` and `color`.
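If you do want to keep the dashed CSS name, the standard `setProperty` method accepts it (a small sketch):
```
var el = document.getElementById("one");
el.style.backgroundColor = col;                // camelCase property access
el.style.setProperty("background-color", col); // dashed name via setProperty
```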
Your code changes the background color only once, because `col` is computed a single time outside the function. If you want a new random color every time, generate it inside the function:
```
function change() {
  let r1 = Math.floor(Math.random() * 255);
  let r2 = Math.floor(Math.random() * 255);
  let r3 = Math.floor(Math.random() * 255);
  document.getElementById("one").style.backgroundColor = `rgb(${r1},${r2},${r3})`;
}
```
Upvotes: 2 [selected_answer] |
2018/03/20 | 635 | 2,277 | <issue_start>username_0: This might be a basic SQL question; however, I was curious to know the answer.
I need to fetch the top record from the db. Which query would be more efficient: one with a where clause, or one with order by?
Example:
Table
```
Movie
id name isPlaying endDate isDeleted
```
Above is a versioned table for storing records for movie.
If the endDate is not null and isDeleted = 1, then the record is old and an updated one already exists in this table.
So to fetch the movie "Gladiator" which is currently playing, I can write a query in two ways:
```
1.
Select m.isPlaying
From Movie m
where m.name=:name (given)
and m.endDate is null and m.isDeleted=0
2. Select TOP 1 m.isPlaying
From Movie m
where m.name=:name (given)
order by m.id desc --- This will always give me the active record (one which is not deleted)
```
Which query is faster and the correct way to do it?
Update:
id is the only indexed column and id is the unique key. I am expecting the queries to return me only one result.
Update:
Examples:
```
Movie
id name isPlaying EndDate isDeleted
3 Gladiator 1 03/1/2017 1
4 Gladiator 1 03/1/2017 1
5 Gladiator 0 null 0
```
2018/03/20 | 1,140 | 4,494 | <issue_start>username_0: I'm attempting to implement a custom authorization policy in my asp.net core 2.0 application.
In my Custom AuthorizationHandler I have this check:
```
if (!context.User.Identity.IsAuthenticated)
{
this.logger.LogInformation("Failed to authorize user. User is not authenticated.");
return Task.CompletedTask;
}
// ... More authorization checks
```
which makes sense, as unauthenticated users are not authorized for my controller action.
I'm using JWT bearer auth and have it configured like this:
```
services.AddAuthentication(options =>
{
options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
})
.AddJwtBearer(cfg =>
{
cfg.RequireHttpsMetadata = false;
cfg.SaveToken = true;
cfg.TokenValidationParameters = new TokenValidationParameters
{
ValidIssuer = "",
ValidAudience = "",
IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingKey)),
ValidateLifetime = false
};
});
```
I'm also registering my authorization requirement and handler (note these registrations happen AFTER I've configured authentication above, and BEFORE I call `services.AddMVC()`):
```
services.AddAuthorization(options =>
{
options.AddPolicy(Constants.AuthorizationPolicies.MyCustomPolicy,
policy => policy.Requirements.Add(new MyCustomRequirement()));
});
services.AddScoped();
```
Here's my problem: it appears my authorization policy is executing before the JWT token is challenged and validated; as a result, my authorization policy fails because the user is not authenticated.
Here's a log of a sample call:
```
Now listening on: http://localhost:60235
Application started. Press Ctrl+C to shut down.
[14:50:57 INF] Request starting HTTP/1.1 GET http://localhost:60235/api/values application/json
[14:51:03 INF] Failed to authorize user. User is not authenticated.
[14:51:03 INF] Authorization failed for user: null.
[14:51:03 INF] Authorization failed for the request at filter 'Microsoft.AspNetCore.Mvc.Authorization.AuthorizeFilter'.
[14:51:03 INF] Executing ChallengeResult with authentication schemes ([]).
[14:51:03 INF] Successfully validated the token.
[14:51:03 INF] AuthenticationScheme: Bearer was challenged.
[14:51:03 INF] Executed action AuthorizationTestClient.Controllers.ValuesController.Get (AuthorizationTestClient) in 5819.1586ms
[14:51:03 INF] Request finished in 5961.5619ms 401
```
As you can see, the authorization policy executes first, fails because the user isn't authenticated and then the authentication challenge happens.
Shouldn't this be the other way around? How do I tell asp.net to perform authentication before authorization?<issue_comment>username_1: >
> I'm also registering my authorization requirement and handler (note
> these registrations happen AFTER I've configured authentication above,
> and BEFORE I call serivces.AddMVC()
>
>
>
The order of service registration in the DI container is not that important. What **is** important is the order of middleware registration in the `Startup.Configure` method. You have not provided the code of this method, but I bet you add the authentication middleware after the MVC middleware, or don't add it at all. Authentication middleware should be added before MVC, so make sure your `Startup.Configure` looks similar to this:
```
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseAuthentication();
app.UseMvc();
}
```
Check following articles for more details:
* [ASP.NET Core Middleware -
Ordering](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/middleware/?tabs=aspnetcore2x#ordering)
* [Why UseAuthentication must be before UseMvc in NET Core
2.0](https://stackoverflow.com/questions/49276987/why-useauthentication-must-be-before-usemvc-in-net-core-2-0)
Upvotes: 5 [selected_answer]<issue_comment>username_2: OK... turns out it was a very easy fix. I needed a call to `app.UseAuthentication()` in my `Configure` method, before `app.UseMvc()`. Silly me!
```
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseAuthentication();
app.UseMvc();
}
```
Upvotes: -1 |
2018/03/20 | 576 | 2,201 | <issue_start>username_0: How can I remove NA cases in a column and calculate the mean value of a factor at the same time?
With this code I calculate the mean value of DC1 in Group\_A, within the x dataframe:
`test.mean <- mean(x$DC1[x$Groups=="Group_A"])`
However, some values of the DC1 column in the Group\_A factor do have NA cells. In order to remove NA cases from DC1, I run this code, where the column DC1 is the 3rd.
```
test.filterNA <- x[complete.cases(x[ , 3]), ]
```
How can I merge both codes in one simple line?
2018/03/20 | 1,921 | 7,495 | <issue_start>username_0: I must use a for loop to go through the h2 elements in the array and remove the class attribute for all h2 elements that aren’t the one that has been clicked. I also need to remove the class attributes for all of the div siblings of the h2 elements that weren’t clicked, but I am not sure how to do this. The code I am trying to use is under the "//remove all other answers" comment. Please help me out, thanks!
```js
var toggle = function() {
var h2 = this; // clicked h2 tag
var div = h2.nextElementSibling; // h2 tag's sibling div tag
// toggle plus and minus image in h2 elements by adding or removing a class
if (h2.hasAttribute("class")) {
h2.removeAttribute("class");
} else {
h2.setAttribute("class", "minus");
}
// toggle div visibility by adding or removing a class
if (div.hasAttribute("class")) {
div.removeAttribute("class");
} else {
div.setAttribute("class", "open");
}
//remove all other answers
var faqs = $("faqs");
var h2Elements = faqs.getElementsByTagName("h2");
for (var i = 0; i < h2Elements.length; i++ ) {
if(!h2Elements.onclick) {
h2.removeAttribute("class", "minus");
} else {
h2Elements.onclick;
}
}
};
```
```html
JavaScript FAQs
===============
[What is JavaScript?](#)
------------------------
JavaScript is a browser-based programming language
that makes web pages more responsive and saves round trips to the server.
[What is jQuery?](#)
--------------------
jQuery is a library of the JavaScript functions that you're most likely
to need as you develop websites.
[Why is jQuery becoming so popular?](#)
---------------------------------------
Three reasons:
* It's free.
* It lets you get more done in less time.
* All of its functions are cross-browser compatible.
```<issue_comment>username_1: There is an easy common pattern for your type of problem. Give all questions a single, shared classname. Then on click
1. use document.getElementsByClassName with the shared classname and apply css display:"none" (or a class that achieves this style) on all elements
2. set display:"block" or display:"inline" on the current selection (see the sketch below)
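A minimal sketch of that pattern (the shared class name `answer` is an assumption):
```
function showOnly(clickedHeading) {
  var answers = document.getElementsByClassName("answer");
  for (var i = 0; i < answers.length; i++) {
    answers[i].style.display = "none";                       // step 1: hide all
  }
  clickedHeading.nextElementSibling.style.display = "block"; // step 2: show current
}
```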
Upvotes: 0 <issue_comment>username_2: You've wrapped all this code in your `toggle` function, but the function is not called anywhere.
You should attach the event listener to your `h2` tags after defining them with jQuery.
The order of your set/remove attributes is a little off.
Try comparing this working example to your code:
```
var h2 = $("h2");
h2.on('click', function() {
for (var i = 0; i < h2.length; i++) {
if (h2[i] !== this) {
h2[i].setAttribute('class', 'red');
} else {
h2[i].removeAttribute('class');
}
}
})
```
I've used the example class `red` here if you wanted to, say, toggle the color in your CSS. You can use whatever class here in place of my example.
Upvotes: 0 <issue_comment>username_3: Hope this helps. What I have done is hide all the divs (and remove the class `red` from every h2 tag other than the one that was clicked, inside the for loop) and then toggle the clicked h2 and its sibling.
```js
function func(e){
  var x = document.getElementsByClassName("ans");
  var h = document.getElementsByTagName("h2");
  // hide every answer and un-highlight every heading except the clicked one
  for (var i = 0; i < x.length; i++) {
    if (h[i] !== e.target) { x[i].classList.add("hide"); h[i].classList.remove("red"); }
  }
  // then toggle the clicked heading and its sibling answer
  e.target.classList.toggle("red");
  e.target.nextElementSibling.classList.toggle("hide");
}
```
```css
.red{
background-color:red;
}
.hide{
display:none;
}
```
```html
JavaScript FAQs
===============
[What is JavaScript?](#)
------------------------
JavaScript is a browser-based programming language
that makes web pages more responsive and saves round trips to the server.
[What is jQuery?](#)
--------------------
jQuery is a library of the JavaScript functions that you're most likely
to need as you develop websites.
[Why is jQuery becoming so popular?](#)
---------------------------------------
Three reasons:
* It's free.
* It lets you get more done in less time.
* All of its functions are cross-browser compatible.
```
Upvotes: 0 <issue_comment>username_4: This example should accomplish what you've outlined in your question. Here I'm looping through all H2 elements and processing the one that was clicked separately.
```js
$('h2').on('click',function(){
var thisH2 = this;
$('h2').each(function(){
if (this === thisH2){
if ($(this).next().is(":visible")){
$(this).removeClass('plus').addClass('minus');
$(this).next().hide();
}else{
$(this).removeClass('minus').addClass('plus');
$(this).next().toggle();
}
}else{
$(this).removeClass('plus').addClass('minus');
$(this).next().hide();
}
});
});
```
```css
h2{
cursor:pointer;
}
h2:hover{
text-decoration:underline;
}
```
```html
JavaScript FAQs
===============
What is JavaScript?
-------------------
JavaScript is a browser-based programming language
that makes web pages more responsive and saves round trips to the server.
What is jQuery?
---------------
jQuery is a library of the JavaScript functions that you're most likely
to need as you develop websites.
Why is jQuery becoming so popular?
----------------------------------
Three reasons:
* It's free.
* It lets you get more done in less time.
* All of its functions are cross-browser compatible.
```
Upvotes: 1 <issue_comment>username_5: To help you identify your sections from your Subheadings
>
> Add this to all sections you can use different identifiers
>
>
>
I'd suggest adding a class or attribute
```
<h2><a href="#">What is JavaScript?</a></h2>
<div class="section">
  ...
</div>
```
This will enable us to select all the divs with the class `section`:
```
const sections = document.querySelectorAll('.section')
```
Then we can loop over them all and add the `minus` class. I'd suggest just adding this in the markup if you intend this to be your default state.
```
sections.forEach(el => {
el.classList.add('minus')
});
```
Now we can loop over all your anchor tags. I'd suggest giving them an identifier such as a class to separate them from other anchor tags, but in the example I'll just select all the anchor tags.
We attach a function reference to the on click of the element called `openSection` which we'll define shortly.
```
document.querySelectorAll('a').forEach((el, index) => {
el.onclick = openSection;
})
```
Now, this is the function that will toggle your `minus` class and remove it from the other items.
Your function gets passed an `event` which contains the information we need to find the correct section to hide. We loop through the sections and toggle `minus` on the one that matches the clicked element; any other item that doesn't have `minus` gets it added to make sure it stays hidden.
```
function openSection(e) {
// we use - 1 because lists start at 0
const el = e.srcElement.classList.value - 1;
sections.forEach((section, index) => {
if (index === el) {
section.classList.toggle('minus')
} else if (!section.classList.contains('minus')) {
section.classList.add('minus')
}
})
}
```
>
> Working example
>
>
> <https://codepen.io/anon/pen/KoWgwm>
>
>
>
Stuff used
<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach>
<https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll>
<https://developer.mozilla.org/en-US/docs/Web/API/Element/classList>
Upvotes: 0 |
2018/03/20 | 1,187 | 4,346 | <issue_start>username_0: I have the function that works with edge and chrome, but not with IE (V 11.0.15063.0)
```
item = totalsArray.find(function([[l,r]]) {
return l === a && r === b
});
```
But array.find does not work with IE so I have the polyfill from mdn as well <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find>
The error I get when I try running with polyfill is `SCRIPT1010: Expected Identifier` highlighting the first parentheses of `function([[l,r]])` I cant seem to figure out whats wrong<issue_comment>username_1: If you read the doc of Array.prototype.find() you can view that they're is no support for IE for this function.
<https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find>
You will have to write a loop who will do it manually.
Upvotes: 0 <issue_comment>username_2: The problem here is that you're using new JavaScript syntax that isn't available in Internet Explorer or other older browsers.
As you noted, you can provide IE with the missing `array.find()` method by simply defining it yourself (a "polyfill").
But you're also using new ES6 *syntax*, the [destructuring function parameter](http://2ality.com/2015/01/es6-destructuring.html).
Forget about `.find()` for a moment and let's look at the syntax question on its own. Here's a version of your callback written as a standalone function that logs the `l` and `r` values, and a simple test:
```js
// This function actually takes a single argument,
// which is an array of one element. That element is
// an array of two elements which we call l and r.
// We use a destructuring function parameter to get
// those values directly without writing explicit
// code to access the array elements.
var fun = function( [ [l,r] ] ) {
console.log( l, r );
};
fun( [ [12,34] ] ); // Should log: 12 34
```
That snippet runs fine in modern browsers that support ES6 syntax, but in any version of IE you will get the "Expected Identifier" message, because it doesn't recognize the new function syntax with a destructuring parameter.
You can't fix this with a polyfill. That just adds a missing method to the `Array` prototype, it doesn't let IE11's version of JavaScript understand new syntax that didn't exist at the time IE11 was written.
If you want IE compatibility, you have two options. One is to use a JavaScript compiler such as TypeScript or Babel that will let you use this ES6 syntax and translate it down to an ES5 equivalent. Here's a copy of the above snippet with identical code but with Babel enabled. And in addition to the test, we'll log the ES5 source code that our ES6 function was translated into:
```js
var fun = function( [ [l,r] ] ) {
console.log( l, r );
};
fun( [ [12,34] ] );
console.log( fun.toString() );
```
The displayed ES5 code may have a couple of calls to a helper function named `_slicedToArray` or a variation on that. This function is included by the Babel compiler in the code it generates.
This runs fine in IE but would require you to start using one kind of build process or another so that the Babel or TypeScript (or other) compiler runs whenever you make a change.
The other option is to write ES5-compatible code where you do the destructuring yourself. Look at the syntax for your function and what you pass in when you call it. The function actually takes a single parameter which is an array. That array has a single element which is also an array. That inner array has two elements, which you've named `l` and `r` for convenience.
So you can do that yourself like this:
```js
// arg is an array of one element.
// That element is itself an array of two elements.
// We call those two elements l and r.
// In ES6, we could use this destructuring syntax:
// function( [ [l,r] ] ) {}
// But for compatibility with ES5, we'll use a single
// function argument containing the outer array, and
// access the l and r values with code.
var fun = function( arg ) {
var l = arg[0][0], r = arg[0][1];
console.log( l, r )
};
fun( [ [12,34] ] );
```
If you plug that idea back into your original code, it should work in IE too:
```
item = totalsArray.find( function( arg ) {
var l = arg[0][0], r = arg[0][1];
return l === a && r === b;
});
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,000 | 3,392 | <issue_start>username_0: I want my Vagrant provision script to run some checks that will require user action if they're not satisfied. As easy as:
```
if [ ! -f /some/required/file ]; then
echo "[Error] Please do required stuff before provisioning"
exit
fi
```
But, as long as this is not a real error, I got the `echo` printed in green. I'd like my output to be red (or, a different color at least) to alert the user.
I tried:
```
echo "\033[31m[Error] Blah blah blah"
```
that works locally, but on Vagrant output the color code gets escaped and I got it echoed in green instead.
Is that possible?<issue_comment>username_1: Vagrant commands runs by default with `--no-color` option. You could try to set color on with `--color`. The environmental variables for Vagrant are documented [here](https://www.vagrantup.com/docs/other/environmental-variables.html).
Upvotes: 0 <issue_comment>username_2: This is happening because some tools write some of their messages to `stderr`, which Vagrant then interprets as an error and prints in red.
Not all terminals support ANSI colour codes and Vagrant don't take care of that. Said that, I won't suggest colorizing a word by sending it to `stderr` unless it is an error.
To achieve that you can simply:
```
echo "Your error message" > /dev/stderr
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You need to use keep\_color true then it works as intended;
```
config.vm.provision "shell", keep_color: true, inline: $echoes
$echoes = <<-ECHOES
echo "\e[32mPROVISIONING DONE\e[0m"
ECHOES
```
From <https://www.vagrantup.com/docs/provisioning/shell.html>
keep\_color (boolean) - Vagrant automatically colors output in green and red depending on whether the output is from stdout or stderr. If this is true, Vagrant will not do this, allowing the native colors from the script to be outputted.
Upvotes: 1 <issue_comment>username_4: Here is a bash script `test.sh` which should demonstrate how to output to stderr or stdout conditionally. This form is good for a command like `[` / `test` or `touch` that does not return any stdout or stderr normally. This form is checking the exit status code of the command which is stored in `$?`.
```bash
test -f $1
if [ $? -eq 0 ]; then
echo "File exists: $1"
else
echo "File not found: $1"
fi
```
You can alternatively hard code your file path like your question shows:
```bash
file="/some/required/file"
test -f $file
if [ $? -eq 0 ]; then
echo "File exists: $file"
else
echo "File not found: $file"
fi
```
If you have output of the command, but its being sent to stderr rather than stdout and ending up in red in the Vagrant output, you can use the following forms to redirect the output to where you would expect it to be. This is good for commands like `update-grub` or `wget`.
#### `wget`
```bash
url='https://example.com/file'
out=$(wget --no-verbose $url 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
```
#### `update-grub`
```bash
out=$(update-grub 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
```
One Liners
==========
#### `wget`
```bash
url='https://example.com/file'
out=$(wget --no-verbose $url 2>&1) && echo "$out" || echo "$out" > /dev/stderr
```
#### `update-grub`
```bash
out=$(update-grub 2>&1) && echo "$out" || echo "$out" > /dev/stderr
```
Upvotes: 0 |
2018/03/20 | 989 | 3,324 | <issue_start>username_0: I have integrated **froala editor**. When I fetch the value of div, the data after does not get saved in my db.
Below is the value of my div
`Hello, Froala inline!
line 2
line 3 and url - <http://google>.com`
Data saved in db: `Hello, Froala inline!
line 2
line 3`
How do I make sure entire data is saved? Do I need to do domething at the froala editor end or at php end?
I think this happens with `'` and `&` , if these special characters are there, PHP deletes the text beyond this. What is the solution?<issue_comment>username_1: Vagrant commands runs by default with `--no-color` option. You could try to set color on with `--color`. The environmental variables for Vagrant are documented [here](https://www.vagrantup.com/docs/other/environmental-variables.html).
Upvotes: 0 <issue_comment>username_2: This is happening because some tools write some of their messages to `stderr`, which Vagrant then interprets as an error and prints in red.
Not all terminals support ANSI colour codes and Vagrant don't take care of that. Said that, I won't suggest colorizing a word by sending it to `stderr` unless it is an error.
To achieve that you can simply:
```
echo "Your error message" > /dev/stderr
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You need to use keep\_color true then it works as intended;
```
config.vm.provision "shell", keep_color: true, inline: $echoes
$echoes = <<-ECHOES
echo "\e[32mPROVISIONING DONE\e[0m"
ECHOES
```
From <https://www.vagrantup.com/docs/provisioning/shell.html>
keep\_color (boolean) - Vagrant automatically colors output in green and red depending on whether the output is from stdout or stderr. If this is true, Vagrant will not do this, allowing the native colors from the script to be outputted.
Upvotes: 1 <issue_comment>username_4: Here is a bash script `test.sh` which should demonstrate how to output to stderr or stdout conditionally. This form is good for a command like `[` / `test` or `touch` that does not return any stdout or stderr normally. This form is checking the exit status code of the command which is stored in `$?`.
```bash
test -f $1
if [ $? -eq 0 ]; then
echo "File exists: $1"
else
echo "File not found: $1"
fi
```
You can alternatively hard code your file path like your question shows:
```bash
file="/some/required/file"
test -f $file
if [ $? -eq 0 ]; then
echo "File exists: $file"
else
echo "File not found: $file"
fi
```
If you have output of the command, but its being sent to stderr rather than stdout and ending up in red in the Vagrant output, you can use the following forms to redirect the output to where you would expect it to be. This is good for commands like `update-grub` or `wget`.
#### `wget`
```bash
url='https://example.com/file'
out=$(wget --no-verbose $url 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
```
#### `update-grub`
```bash
out=$(update-grub 2>&1)
if [ $? -ne 0 ]; then
echo "$out" > /dev/stderr
else
echo "$out"
fi
```
One Liners
==========
#### `wget`
```bash
url='https://example.com/file'
out=$(wget --no-verbose $url 2>&1) && echo "$out" || echo "$out" > /dev/stderr
```
#### `update-grub`
```bash
out=$(update-grub 2>&1) && echo "$out" || echo "$out" > /dev/stderr
```
Upvotes: 0 |
2018/03/20 | 916 | 3,454 | <issue_start>username_0: I have a module in which public method of a public class creates and returns a new instance of a private class. The requirement is that `MyClassPrivateHelper` must only be instantiated by `MyClass`.
```
class MyClassPrivateHelper {
constructor(private cls: MyClass) {
}
public someHelperMethod(arg): void {
this.cls.someMethod();
}
}
export class MyClass {
public createHelper(): MyClassPrivateHelper { // error here
return new MyClassPrivateHelper(this);
}
public someMethod(): void {
/**/
}
}
```
With this arrangement TypeScript reports error:
`[ts] Return type of public method from exported class has or is using private name 'MyClassPrivateHelper'.`
My goal is to export just the "type" of the private class without letting an module consuming code be able instantiate it directly. e.g.
```
const mycls = new module.MyClass();
// should be allowed
const helper: MyClassPrivateHelper = mycls.createHelper();
// should not be allowed
const helper = new module.MyClassPrivateHelper();
```
I have tried using `typeof` like so without success.
```
export type Helper = typeof MyClassPrivateHelper
```
Maybe I am not understanding how "typeof" works. My questions are:
* Why export of type using `typeof` not working?
* How do I export type without exposing the private class outside module?<issue_comment>username_1: You should create an interface and export that:
```
export interface IMyHelper {
someHelperMethod(arg): void;
}
```
and then let the Helper implement that:
```
class MyClassPrivateHelper implements IMyHelper {
constructor(private cls: MyClass) {
}
public someHelperMethod(arg): void {
this.cls.someMethod();
}
}
```
The public class will return the interface
```
export class MyClass {
public createHelper(): IMyHelper {
return new MyClassPrivateHelper(this);
}
public someMethod(): void {
/**/
}
}
```
From the outside the helper is again referenced by its interface:
```
const helper: IMyHelper = mycls.createHelper();
```
Upvotes: 3 <issue_comment>username_2: >
> Why export of type using typeof not working?
>
>
>
> ```
> export type MyInterface = typeof MyClassPrivateHelper
>
> ```
>
>
In this example `MyInterface` is the type of the constructor function but you'd like to export the type of the instances this constructor can produce.
>
> How do I export type without exposing the private class outside module?
>
>
>
Like this:
```
export type MyInterface = InstanceType
```
`InstanceType` is described briefly here: <https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-8.html>.
Alternatively, I found that the following also works:
```
type Interface = { [P in keyof T]: T[P] }
export interface MyInterface extends Interface {}
```
The `Interface` type basically copies all public properties from `T`, which we then use to declare a new interface.
See <https://github.com/Microsoft/TypeScript/issues/471#issuecomment-381842426> for more details.
Upvotes: 4 <issue_comment>username_3: The simplest way to do this is the following:
```js
class MyClassPrivateHelper {
constructor(private cls: any) {
}
public someHelperMethod(arg: any): void {
this.cls.someMethod();
}
}
export type Helper = MyClassPrivateHelper;
```
Calling `export type Helper = InstanceType` is redundant.
Upvotes: 2 |
2018/03/20 | 605 | 2,423 | <issue_start>username_0: I want to host some html files on Google Cloud and wondered, if this is possible to do, without adding a custom domain...
With for example Cloudflare or AWS, that's possible...<issue_comment>username_1: GCS objects can be loaded just fine from a web browser, with or without a domain. They follow either of these naming schemes:
```
https://storage.googleapis.com/YOUR_BUCKET_NAME/YOUR_OBJECT_NAME
https://YOUR_BUCKET_NAME.storage.googleapis.com/YOUR_OBJECT_NAME
```
If you simply need to serve resources via a web browser, this is quite sufficient.
If you need a bucket to represent an entire website, it'd be a good idea to use a custom domain. This enables a handful of nice, website-like features, such as defining default pages when none is specified as well as providing a customization 404 page.
Upvotes: 4 <issue_comment>username_2: You have three options (well, only two of them are really viable, but the last one can be useful in certain situations).
In order of ease to use and viability:
1) Google App Engine:
The default Google App Engine app is served out of \*.appspot.com site, so if you create a project call "cutekittens", your site address will be cutekittens.appspot.com.
Furthermore, you can choose to do something simple like a [static webpage](https://cloud.google.com/appengine/docs/standard/python/getting-started/hosting-a-static-website), or you can host an entire webapp on [Google App Engine](https://cloud.google.com/appengine/kb/). It's easy to use and very powerful. Google App Engine supports its own storage (Datastore), bigdata (Big Query), and MySQL (Cloud SQL) solutions and all of that can be served out of the default appspot.com site which acts the the front end.
2) Static Website on [Google Cloud Storage](https://cloud.google.com/storage/docs/hosting-static-website). Google Cloud Storage is less powerful but should suffice if you just need a static website served. It uses "storage.googleapis.com/[BUCKET\_NAME]/[OBJECT\_NAME]", in which your object is probably an index.html.
3) Use a [Google Compute Engine VM on static IP](https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address). This option is probably the MOST powerful, as you can do anything you want on your own VM. However this is also the less friendly usage since you will need the actual IP address to access the site and its resources.
Upvotes: 3 |
2018/03/20 | 895 | 2,947 | <issue_start>username_0: I just upgraded my PhpStorm project from PHPUnit 4.x to PHPUnit 5.x. Running my unit tests works fine.
In order to get "run with coverage" working, I had to edit my Run/Debug configuration and add `--whitelist=/path/to/whitelisted/dir` to Test Runner options. Now coverage is generated when I choose "Run with coverage," but I get this Xdebug warning.
```
Testing started at 12:06 PM ...
/usr/local/opt/php56/bin/php -dzend_extension=/usr/local/opt/php56-
xdebug/xdebug.so -dxdebug.coverage_enable=1
/Users/bwood/code/php/WpsConsole/vendor/phpunit/phpunit/phpunit --
whitelist=/Users/bwood/code/php/WpsConsole/src --coverage-clover /Users/bwood/Library/Caches/PhpStorm2017.3/coverage/WpsConsole$NEW.coverage --no-configuration /Users/bwood/code/php/WpsConsole/tests/UnitTests --teamcity
Cannot load Xdebug - extension already loaded
PHPUnit 5.7.27 by <NAME> and contributors.
```
I think the issue is that php is being called with
```
-dzend_extension=/usr/local/opt/php56-xdebug/xdebug.so
```
But I can't figure out where I can remove that option. I'm testing a console application so there is no webserver configuration.<issue_comment>username_1: Check your `PHP Interpreter` in PhpStorm -- there is a separate field for xdebug extension. It allows you to not to have xdebug loaded in actual php.ini (so the ordinary script execution will be faster) and only load xdebug when needed (debug/coverage).
Based on your info so far you have it in php.ini and there. **Proposed solution is to remove it from that field.**
[](https://i.stack.imgur.com/ieELx.png)
*P.S. Obviously, it works for CLI stuff only and dopes not affect browser based debug in any way (as it's web server that runs PHP so IDE cannot pass such parameter there).*
Upvotes: 3 [selected_answer]<issue_comment>username_2: I had two different problems that led to this same situation. (Mine was saying already loaded twice).
The first was the above debugger extension field in PHPStorm. I fixed that but was still having problems.
So then I ran:
```
>php --ini
Cannot load Xdebug - it was already loaded
Configuration File (php.ini) Path: /usr/local/etc/php/7.2
Loaded Configuration File: /usr/local/etc/php/7.2/php.ini
Scan for additional .ini files in: /usr/local/etc/php/7.2/conf.d
Additional .ini files parsed: /usr/local/etc/php/7.2/conf.d/ext-opcache.ini
```
Since all of those files were in the same base directory I went to /usr/local/etc/php
```
$:/usr/local/etc/$ grep -r 'xdebug' .
./php/7.2/php.ini:zend_extension="xdebug.so"
./php/7.2/php.ini:zend_extension="xdebug.so"
```
This showed me that the problem was in my php.ini file. For some reason, I had this at the top of my php.ini file.
```
zend_extension="xdebug.so"
zend_extension="xdebug.so"
[PHP]
```
I removed one of the lines and the warning went away!
Upvotes: 4 |
2018/03/20 | 1,953 | 7,002 <issue_start>username_0: I'm working on Magento 1.9.3.7 and I want to understand whether it is a good idea to migrate to Magento 2 or not.
I summarized the differences:
* Magento 2.0 is faster than Magento 1.x
* Some significant changes in the directory structure, which reduce its complexity
* New technologies and latest versions (e.g. PHP, jQuery, etc.)
* Allows developers to set up automated tests easily
* Many features are now integrated into Magento 2
* Improvements to checkout and other stuff
My questions:
1. Is there any indicator to help decide when is a good moment to migrate to Magento 2?
2. Are there any hidden issues I have to know about beforehand?
3. Has anyone ever tried this migration? If yes, did you see a big improvement?
4. Will all my modules (third-party & hand-written) be obsolete?
5. Why is Magento 1.x still releasing new security updates if there is Magento 2?
I hope to hear different experiences or solutions to understand whether this is the right way.
Please tell me if I said something wrong.
Docs on Internet (differences) : <https://gauge.agency/articles/the-differences-between-magento-1-and-magento-2-and-which-is-better/><issue_comment>username_1: >
> Is there any indicator to help decide when is a good moment to
> migrate to Magento 2?
>
>
>
It depends on the individual store environment (big stores with their own ESB may use M2 as a storefront; small ones will have to wait until their ERP provider releases a plugin or connector).
>
> Are there any hidden issues I have to know about beforehand?
>
>
>
M2 code architecture is nothing like M1. MVC has been dropped in favour of MVVM
>
> Has anyone ever tried this migration? If yes, did you see a big
> improvement?
>
>
>
Yes. FPC has been improved a lot, and the general TTFB response is way better.
>
> Will all my modules (third-party & hand-written) be obsolete?
>
>
>
Yes, due to different design patterns.
>
> Why is Magento 1.x still releasing new security updates if there is
> Magento 2?
>
>
>
Magento Inc. has promised ongoing support for M1. There are too many enterprise customers, I guess.
Upvotes: 1 <issue_comment>username_2: Having worked extensively with both platforms, I have to say that Magento Inc's reasons for upgrading to M2 are just silly.
>
> Magento 2.0 is faster than Magento 1.x
>
>
>
This is not really true, right? The reason they say that M2 is faster is that it supports PHP 7.x and runs Varnish. To this I say: so what? M1 does as well.
Community efforts like this one work like a charm: <https://github.com/Inchoo/Inchoo_PHP7> (I'm in no way affiliated with Inchoo).
**Edit**: This is now even less true since M1 (as of 1.9.4) supports 7.2 without third party modules.
On the other side, M2 has a semi-working asset precompiling system, which keeps causing issues at every turn. Furthermore, it slows development down to such a degree that M1 feels like a blazing fast solution.
(If you think that this must be an outrageous exaggeration, which it probably should be but sadly isn't, check out some of the [GH issues](https://github.com/magento/magento2/issues/7859#issuecomment-268214155).)
>
> Some significant changes in the directory structure, which reduce its complexity
>
>
>
This was a great idea, but the actual result is terrible. How the hell did M2 end up with more configuration and more XMLs? What's with the XML-heavy UI components?
Is this the example of the simplified module structure – <https://github.com/magento/magento2/tree/2.3-develop/app/code/Magento/Catalog>?
Yeah sure, M1 is not great here, but M2 did not improve here at all; just check out the number of XMLs in a single module – <https://github.com/magento/magento2/tree/2.3-develop/app/code/Magento/Catalog/etc>
>
> New technologies and latest versions (e.g. PHP, jQuery, etc.)
>
>
>
Sure, and stuff like ZF1, KnockoutJS and [Fotorama](https://github.com/artpolikarpov/fotorama/issues/532).
>
> Allows developers to set up automated tests easily
>
>
>
I agree here. M2 has proper support for automated testing, while M1 has almost none.
>
> Many features are now integrated into Magento 2
>
>
>
I'm not sure what exactly you wanted to say here, but the problem I had is that they simply migrated features from M1 to M2, didn't improve them at all, slapped a new interface on top, and called it a new platform.
While there's no problem here, I feel like this was a huge opportunity to improve the system, but they dropped the ball.
>
> Improvements to checkout and other stuff
>
>
>
I disagree; checkout is now not nearly as flexible as it was. Working with KnockoutJS and UI Components is the last thing you want to do.
I'm fine with it being quirky and all, but the flexibility to improve checkout for a particular shop is nowhere near M1.
>
> Is there any indicator to help decide when is a good moment to migrate to Magento 2?
>
>
>
Most of the Magento agencies are using this to promote their services and offer migrations to M2 as a way to make extra profit. So you'll always see companies talking about performance and feature improvements which aren't there.
This is the only case where someone says something differently: <https://amasty.com/blog/magento-1-vs-magento-2-performance-comparison-speed-test-results/> (I'm in no way affiliated with Amasty).
>
> Are there any hidden issues I have to know about beforehand?
>
>
>
The platform is not stable enough; major bugs are still present. Just do a quick browse through the issue reports on GH.
>
> Why is Magento 1.x still releasing new security updates if there is Magento 2?
>
>
>
There are lots of businesses that will never migrate to M2. They have no option here.
Lastly, I want to say that I'm sorry for all the hate in this answer, really wasn't my intention. :D
Upvotes: 2 <issue_comment>username_3: I have tried the Magento 1 to Magento 2 migration a few times before. But for me, there is only one reason I can think of that stands out enough to justify such a major overhaul of a website, and that is security.
You should not just upgrade to Magento 2, but specifically to 2.3, as it has a lot more invested in security and is less prone to malware attacks. It also has new features that did not exist in Magento 2.2.
If you are still on Magento 1, then in theory, it's only a matter of time before malware finds your site.
A good practice would be to have a fork of this repository <https://github.com/magento/magento2> and bring the latest fixes into your code periodically. This would of course give you another reason to upgrade to the latest version since Magento 1 is no longer maintained.
You will have to reproduce all of your modules for Magento 2; there is absolutely no other way. If you use the Data Migration tool, you should have an easy time bringing the data over. The next step is to create the theme for your site once more; there is no easy way to bring your theme over from M1 either.
Good luck my friend =D
Upvotes: 0 |
2018/03/20 | 1,784 | 6,175 <issue_start>username_0: How do you convert relative links to absolute links:
```
[A](/foo/ba.pdf)
[B](foo/ba.pdf)
[A](http://google.com/foo/ba.pdf)
[A](#hello)
```
should transform to
```
[A](PREFIX/foo/ba.pdf)
[B](PREFIX/foo/ba.pdf)
[A](http://google.com/foo/ba.pdf)
[A](#hello)
```
where `PREFIX` is a user-defined string.
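A minimal sketch of the transformation in Python (the regex and the `PREFIX` value here are illustrative; it leaves absolute `http(s)` URLs and `#` anchors untouched):

```python
import re

PREFIX = "PREFIX"  # user-defined base, e.g. "https://example.com"

def absolutize(markdown):
    def repl(m):
        text, target = m.group(1), m.group(2)
        if target.startswith(("http://", "https://", "#")):
            return m.group(0)  # already absolute or an in-page anchor
        return "[{}]({}/{})".format(text, PREFIX, target.lstrip("/"))
    return re.sub(r"\[([^\]]*)\]\(([^)]+)\)", repl, markdown)

print(absolutize("[A](/foo/ba.pdf)"))  # [A](PREFIX/foo/ba.pdf)
print(absolutize("[A](#hello)"))       # [A](#hello)
```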
|
2018/03/20 | 323 | 925 <issue_start>username_0: I have 2 models with a HABTM relation:
**`User`**
```
has_and_belongs_to_many :rooms
```
**`Room`**
```
has_and_belongs_to_many :users
```
I also created the migration for the join table like this:
```
create_join_table :users, :rooms do |t|
t.index [:user_id, :room_id]
t.index [:room_id, :user_id]
end
```
I would like to query the rooms of user A that also contain user B. How can I do it?<issue_comment>username_1: This should work:
```
user_b = User.find(123) # or the id of user B
user_a = User.find(234) # or the id of user A
user_b.rooms.joins(:users).where(users: {id: user_a.id})
```
Upvotes: 0 <issue_comment>username_2: I’m not sure you can do this in a single SQL call, but it sounds like you want the intersection of two sets.
```
UserA.rooms & UserB.rooms
```
That should give you the rooms both users shared.
Upvotes: 3 [selected_answer] |
2018/03/20 | 1,412 | 4,740 <issue_start>username_0: I am writing the following classifier to check out scikit-learn.
...
```
class MyClassifier():
def fit(self, x_train, y_train):
self.x_train = x_train
self.y_train = y_train
return
def predict(self, x_test):
prediction = []
for row in x_test:
label = self.closest(row)
prediction.append(label)
return prediction
def closest(self, row):
best_dist = euc(row, self.x_train[0])
best_index = 0
for i in range(1, len(self.x_train)):
dist = euc(row, self.x_train[0])
if dist < best_dist:
best_dist = dist
best_index = i
return self.y_train[best_index]
```
And later, I want to use my own classifier:
```
# Use my own Classifier
classifer = MyClassifier()
print(classifer)
classifer = classifer.fit(x_train, y_train)
prediction = classifer.predict(x_test)
print(prediction)
print(y_test)
```
When I run it, I am getting the following error:
```
<__main__.MyClassifier object at 0x103ec5668>
Traceback (most recent call last):
File "/.../NewClassifier.py", line 72, in
prediction = classifer.predict(x\_test)
AttributeError: 'NoneType' object has no attribute 'predict'
```
What's wrong with the predict() function?<issue_comment>username_1: Your `fit` method
```
def fit(self, x_train, y_train):
self.x_train = x_train
self.y_train = y_train
return
```
returns nothing, so it implicitly returns `None`.
Therefore `classifer = classifer.fit(x_train, y_train)` overwrites the variable named `classifer` of type `MyClassifier` with `None`.
`None` has no methods that you can call; that's the exact error message you got.
You should change `classifer = classifer.fit(x_train, y_train)` to simply
```
classifer.fit(x_train, y_train)
```
so you keep the variable named `classifer` as your class instance instead of "overwriting" it with `None`.
---
This should fix it:
```
# Use my own Classifier
classifer = MyClassifier()
print(classifer)
classifer.fit(x_train, y_train)
prediction = classifer.predict(x_test)
print(prediction)
print(y_test)
```
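As an aside, scikit-learn's own estimators follow the convention of ending `fit` with `return self`, which is what makes chaining possible. If you adopt that convention in `MyClassifier`, a one-liner like this also works (a sketch):

```python
# Works once fit() ends with `return self`
prediction = MyClassifier().fit(x_train, y_train).predict(x_test)
```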
Upvotes: 2 <issue_comment>username_2: I recommend using Python's built-in debugger, pdb. If you add `import pdb; pdb.set_trace()` before your `classifer = MyClassifier()` statement, you can see every variable and interact with your code.
Now, you are overwriting your class instantiation.
```
-> print(classifer)
(Pdb) n
<__main__.MyClassifier object at 0x7f7fe2f139e8>  # This is your classifer object
-> classifer = classifer.fit("test", "test2")
(Pdb) classifer
-> prediction = classifier.predict(x_test)
(Pdb) classifer
(Pdb)
```
So, because you are naming the variable the same thing, it's overwriting your previous class.
You have `classifer = MyClassifier()` and then `classifer = classifer.foo`, so it loses its original reference to `MyClassifier()`.
Secondly, your `fit(x_train, y_train)` function doesn't return anything.
Having:
```
def fit(self, x_train, y_train):
self.x_train = x_train
self.y_train = y_train
return
```
Is the same as:
```
def fit(self, x_train, y_train):
self.x_train = x_train
self.y_train = y_train
return None
```
Which is what you're getting:
```
(Pdb) print(classifer)
None
```
And thus, that's why you're receiving `AttributeError: 'NoneType' object has no attribute 'predict'`: because `classifer` is None.
I'm not sure what the fit function is supposed to return, but I imagine it's `self`. So, the following code works for me in getting past your error, but since I don't know what x\_train, y\_train, x\_test, and y\_test are supposed to be, I couldn't run all of your code. Still, it fixes the problem you asked the question about.
```
class MyClassifier():
def fit(self, x_train, y_train):
self.x_train = x_train
self.y_train = y_train
        return self  # Must return something, and from context, this
                     # seems to be your intention.
def predict(self, x_test):
prediction = []
for row in x_test:
label = self.closest(row)
prediction.append(label)
return prediction
def closest(self, row):
best_dist = euc(row, self.x_train[0])
best_index = 0
for i in range(1, len(self.x_train)):
            dist = euc(row, self.x_train[i])
if dist < best_dist:
best_dist = dist
best_index = i
return self.y_train[best_index]
classifier = MyClassifier()
print(classifier)
classifier2 = classifier.fit("test", "test2")
prediction = classifier2.predict(x_test)
print(prediction)
print(y_test)
```
Upvotes: 2 |
2018/03/20 | 1,380 | 4,595 <issue_start>username_0: I am running a server. I have pointed my domain via Cloudflare to my server IP and have a signed SSL certificate via Let's Encrypt for my domain. My server is running an Apache web server using port 443 for the SSL traffic.
I installed Docker and run a couple of containers. My goal is to get traefik up and running on port 443 as well and route all Docker traffic through it. Is that even possible?
I used this here: <https://www.linuxserver.io/2018/02/03/using-traefik-as-a-reverse-proxy-with-docker/> to write my traefik.toml file and my docker-compose file.
However, whenever I start the docker-compose stack, all services come up except traefik.
I receive the following error:
>
> ERROR: for traefik Cannot start service traefik: driver failed programming external connectivity on endpoint traefik (2d10b64b47e62e7dcb5f94265529fb647e4ba62dbeeb43c201ea02d39f60b381): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use
> ERROR: Encountered errors while bringing up the project.
>
>
>
I wonder if the reason is that I already use port 443 for my domain?!
How can I fix this?
Thanks for your help!
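The error message does indicate exactly that: only one process can bind a given host port, so as long as Apache holds 443, Docker cannot publish traefik on it. A quick way to confirm which side holds the port (a small Python sketch; `netstat`/`ss` on the host work just as well):

```python
import socket

def port_is_free(port, host="0.0.0.0"):
    # Binding fails with "address already in use" when another process
    # (here: Apache) already listens on the port. Note that binding
    # ports below 1024 also needs root, which raises OSError as well.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_free(443))  # expect False while Apache is listening on 443
```

The usual ways out are to stop Apache, move it to a different port, or publish traefik on different host ports.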
|
2018/03/20 | 435 | 1,351 <issue_start>username_0: I am working on a website; what I want to achieve is that the navbar of my page is stacked over the slider.
I have tried the z-index property.
My HTML Code:
```
[St. Clare's Sr. Sec School](#)
* [Home](#)
* [About](#)
* [Contact](#)
* [Messages](#)
* [More](#)
```
What I want to achieve is shown in the image below:
[](https://i.stack.imgur.com/hfCBC.jpg)
What I have right now is:
[](https://i.stack.imgur.com/GoKF7.png)<issue_comment>username_1: Bootstrap 4 has the class `fixed-top` for this purpose.
```
[St. Clare's Sr. Sec School](#)
* [Home](#)
* [About](#)
* [Contact](#)
* [Messages](#)
* [More](#)
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: The `fixed-top` answer by ZimSystem is a good one, but in the event that you don't want the position to be fixed, you could set the navbar to be absolute. Bootstrap 4 has the class `position-absolute` for that (though it would take more work to center the navbar horizontally).
Upvotes: 0 <issue_comment>username_3: **If you want to put your slider behind your navbar, try the CSS below for your slider and navbar:**
```
.slider{
position:absolute;
top:0;
z-index:0;
}
.navbar-nav{
z-index:1;
}
```
Upvotes: 0 |
2018/03/20 | 366 | 1,283 | <issue_start>username_0: I need to apply some styling to the label of a Redux Form Field. The Redux Form API doesn't mention any way to style the label. The `classes.formField` class gets applied to the field itself but not the label.
This may also be a question about forcing inheritance because the label, in this case, is not inheriting the parent's styles.
```
import { Field } from 'redux-form'
import TextField from 'redux-form-material-ui/lib/TextField'
```<issue_comment>username_1: Add your own component to the label prop.
```
<Field
  label={<CustomLabel />}
  fullWidth
  name="answer"
  required
  type="text"
  validate={[required]}
/>
```
Make custom label component and pass it
```
const CustomLabel = () => {
  var labelStyle = {
    color: 'white',
  };
  return <span style={labelStyle}>Im the custom label with css</span>;
}
```
>
> In React, inline styles are not specified as a string. Instead they
> are specified with an object whose key is the camelCased version of
> the style name, and whose value is the style's value, usually a string.
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can pass props directly to `TextField` using the `props` prop, e.g. something like:
```
<Field
  name="answer"
  component={TextField}
  props={{ floatingLabelStyle: { color: 'white' } }}
/>
```
Sadly this is undocumented on [Redux-Form: Field docs](https://redux-form.com/6.0.0-alpha.4/docs/api/field.md/) :/
Upvotes: 2 |
2018/03/20 | 618 | 1,988 <issue_start>username_0: I'm using Base64 to encrypt/decrypt links. It works great, except that the encoded link ends with two equals signs, so:
When a customer gets an email, the whole link is not clickable; the two important "==" are not read as part of the link.
How can I make the encrypted text be read as one whole link, including the two equals signs?
Also, if some characters are missing, is it possible to redirect to another page/error message instead of throwing an exception (like
>
> Invalid length for a Base-64 char array or string)?
>
>
>
For example, right now the link is like:
www.blaa/constollername/methodname?base64link=f5HmbS2tfYRozBfcIV9bCUa1YcGmOFp0AR==
but I want it to be:
www.blaa/constollername/methodname?base64link=f5HmbS2tfYRozBfcIV9bCUa1YcGmOFp0AR
Also if you print out like:
www.blaa/constollername/methodname?base64link=f5HmbS2tfYRozBf
in the URL, you should be given an error like "This page is forbidden" or something like that, instead of an exception (as I said before).
The code is working and I'm using:
Encoding.UTF8.GetBytes and ToBase64String
Because the code is working except for the equals signs, I'm not posting any code yet unless it's needed.
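For reference, the standard fix for this class of problem is to use the URL-safe Base64 alphabet and strip the `=` padding before putting the value in a link, then re-add the padding when decoding; malformed input can then be caught and turned into a friendly error page instead of an unhandled exception. A minimal sketch of the idea (shown here in Python; in C# the equivalent is `Convert.ToBase64String(...)` with `+`/`/` replaced by `-`/`_` and `TrimEnd('=')`):

```python
import base64
import binascii

def encode_for_url(raw):
    # URL-safe alphabet; strip padding so the link contains no '=' signs
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

def decode_from_url(token):
    padded = token + "=" * (-len(token) % 4)  # restore the padding
    try:
        return base64.urlsafe_b64decode(padded)
    except binascii.Error:
        # e.g. a truncated token: map this to an error page, not a 500
        raise ValueError("invalid token")
```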
|
2018/03/20 | 633 | 1,996 | <issue_start>username_0: I've been trying to implement a function in OCaml that returns the smallest missing number (greater than 0) in a sorted list.
Here is what I've done
```
let getMissingNumber l =
let rec find min = function
| [] -> min
| t :: [] -> t + 1
| t1 :: t2 :: r -> if t2 - t1 > 1 then t1 + 1 else find min (t2 :: r)
in find 1 l;;
```
Here are the results:
```
# getMissingNumber [1; 4; 5];;
- : int = 2
# getMissingNumber [1; 2; 5];;
- : int = 3
# getMissingNumber [1; 2; 3];;
- : int = 4
# getMissingNumber [3; 4; 5];;
- : int = 6
```
All the results are correct but the last one. Any suggestions?<issue_comment>username_1: The problem is that if list contains more than one element, this function will never return 1, just because of `| t :: [] -> t + 1` (if `t > 0`).
So we may replace `| t :: [] -> t + 1` by `| t :: [] -> min`, but in this case there will be a problem with all lists of the form `[1; 2; 3; ...; n]`, because in the `| t1 :: t2 :: r` branch we don't change `min`, so we will end up by returning 1 (even if correct response is `n+1`).
So we need to "update" `min`, but what's interesting is that if we replace `find min (t2 :: r)` by `find (t2 + 1) (t2 :: r)`, we will return to your original function.
In fact this function search smallest missing number greater than smallest presented number. The main problem is that you distinguish `[t]` and `t1::t2::r` for no good reason.
```
let getMissingNumber l =
let rec find min = function
| [] -> min
| t::r ->
if t > min then min
else find (t + 1) r
in find 1 l
```
Upvotes: 1 <issue_comment>username_2: If the input list may start with values smaller than one, then you also need to skip over these.
```
let getMissingNumber l =
let rec find min = function
| [] -> min
| h :: t when min < t -> min
| h :: t when min = t -> find (min+1) t
| _ :: t -> find min t
in find 1 l
```
Upvotes: 0 |
2018/03/20 | 529 | 1,362 | <issue_start>username_0: I'm trying to tackle what I thought was a simple query.
I have two databases, each with one table.
What I would like to do is find all of the emails from DB1.Table that don't exist in DB2.Table.
I'm using this query, but the result is incorrect, because I know DB1.Table contains emails that don't exist in DB2.Table (the result always comes back as 0):
`SELECT DB1.20180320.email
FROM DB1.20180320
WHERE DB1.20180319.email NOT IN
(SELECT DB2.20180319.email FROM DB2.20180319 WHERE Status = 'active')`
Any ideas on what I'm doing wrong here? I'm working with about 80k rows in each table.
Thanks.<issue_comment>username_1: Without seeing your data, try something like this:
```
SELECT DB1.20180320.email
FROM DB1.20180320
left join DB2.20180319 on DB1.20180320.email = DB2.20180319.email
AND DB2.20180319.Status = 'active'
WHERE DB2.20180319.email IS null;
```
This should show all the emails in DB1.20180320 that don't exist in DB2.20180319
Upvotes: 3 [selected_answer]<issue_comment>username_2: A `NOT EXISTS` query should do it. It returns the emails that exist in DB1, but not in DB2.
```
SELECT DB1.20180320.email
FROM DB1.20180320
WHERE NOT EXISTS(
SELECT 1
FROM DB2.20180319
WHERE DB1.20180320.email = DB2.20180319.email
AND DB2.20180319.Status = 'active'
)
```
Upvotes: 0 |
2018/03/20 | 571 | 1,667 <issue_start>username_0: We have an interface with a generic Customer (gold/silver). Now let's say I stored the last created customer somewhere (cache/DB/etc.).
How do I create a GetCustomer method that returns the correct type of customer?
Should I add GetCustomer to the base class, the interface, or elsewhere? And how do we use GetCustomer?
Hope that makes sense.
```
interface ICstomerInterface<T>
{
T MakeCustomer();
}
public abstract class BaseCustomer<T> : ICstomerInterface<T>
{
public string Type { get; set; }
public string Name { get; set; }
// methods
public abstract T MakeCustomer();
}
public class Gold : BaseCustomer<Gold>
{
public override Gold MakeCustomer()
{
var customer = new Gold
{
Type= "Gold",
Name = "Jack"
};
return customer;
}
}
public class Silver : BaseCustomer<Silver>
{
public override Silver MakeCustomer()
{
var customer = new Silver();
customer.Name = "Jones";
customer.Type = "Silver";
return customer;
}
}
```
|
2018/03/20 | 819 | 2,493 <issue_start>username_0: I am trying to convert mysqli code to PDO, but I ran into an issue.
It should behave like this: if a main parent is deleted, delete its sub-categories as well, so that there are no "orphans" left in the database.
The main parents have parent 0 in the [database](https://i.stack.imgur.com/9BZw2.jpg), while sub-categories are linked via their parent's ID.
**This is the working mysqli example:**
```
// Delete
if(isset($_GET['delete']) && !empty($_GET['delete'])) {
$delete_id = (int)$_GET['delete'];
$delete_id = sanitize($delete_id);
/* Deleting a parent and its children to avoid orphaned categories in the database. */
$result = $db->query("SELECT * FROM categories WHERE id = '{$delete_id}'");
$category = mysqli_fetch_assoc($result);
if($category['parent'] == 0) {
$db->query("DELETE FROM categories WHERE parent = '{$delete_id}'");
header("Location: categories.php");
}
$db->query("DELETE FROM categories WHERE id = '{$delete_id}'");
header("Location: categories.php");
}
```
**What I have tried the PDO way:**
```
//delete Category
if(isset($_GET['delete']) && !empty($_GET['delete'])){
$delete_id = (int)$_GET['delete'];
$delete_id = sanitize($delete_id);
//Deleting sub-categories if parent is deleted
$sql= $veza->prepare ("SELECT * FROM categories WHERE id = '$delete_id'");
$result = $sql->execute();
$category = $result->fetch(PDO::FETCH_ASSOC);
if($category['parent'] == 0){
$sql = "DELETE FROM categories WHERE parent = '$delete_id'";
$sql->execute();
}
$dsql=$veza->prepare("DELETE FROM categories WHERE id = '$delete_id'");
$dsql->execute($_GET);
header("location: categories.php");
}
```
I can't find the solution.
I have Uncaught Error: Call to a member function fetch() on boolean.
|
2018/03/20 | 650 | 2,227 | <issue_start>username_0: I have two columns of data I am cleaning up using VBA. If the value in column A is non-numeric or blank, I need to delete the entire row. Below is a sample of the data and the code I am trying to use. It seems to be completely skipping over the portion of the code that deletes the rows if IsNumeric returns false.
```
9669 DONE
9670 OPEN
Order # STATUS
9552
9672
```
Code that isn't working.
```
Dim cell As Range
For Each cell In Range("A1:A" & max_col)
If IsNumeric(cell) = False Then
Cells(cell, 1).Select
Rows(cell).EntireRow.Delete
Exit For
End If
Next cell
```
Any help is appreciated!<issue_comment>username_1: Loop from the bottom
```
Dim max_col As Long
max_col = 100
Dim i As Long
For i = max_col To 1 Step -1
    'IsNumeric(Empty) is True in VBA, so check for blanks explicitly
    If IsEmpty(ActiveSheet.Cells(i, 1).Value) Or Not IsNumeric(ActiveSheet.Cells(i, 1).Value) Then
        ActiveSheet.Rows(i).Delete
    End If
Next i
```
Upvotes: 2 <issue_comment>username_2: When deleting (or adding, for that matter) rows you need to loop backwards through the data set - See @ScottCraner's example for an exact answer like that - or, you create the range of cells to delete then delete at once, like below:
```
Dim rowNo As Long
For rowNo = 1 To max_col
    Dim cell As Range
    Set cell = Cells(rowNo, 1)
    'Collect the rows to delete: blank or non-numeric values in column A
    If IsEmpty(cell.Value) Or Not IsNumeric(cell.Value) Then
        Dim collectRows As Range
        If collectRows Is Nothing Then
            Set collectRows = cell
        Else
            Set collectRows = Union(collectRows, cell)
        End If
    End If
Next
If Not collectRows Is Nothing Then collectRows.EntireRow.Delete
```
Upvotes: 2 <issue_comment>username_3: use just
```
With Range("A1", Cells(Rows.Count, 1).End(xlUp))
.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
.SpecialCells(xlCellTypeConstants, xlTextValues).EntireRow.Delete
End With
```
or, if you don't know for sure whether there will be empty or non-numeric cells:
```
With Range("A1", Cells(Rows.Count, 1).End(xlUp))
If WorksheetFunction.CountBlank(.Cells) > 0 Then .SpecialCells(xlCellTypeBlanks).EntireRow.Delete
If WorksheetFunction.Count(.Cells) < .Rows.Count Then .SpecialCells(xlCellTypeConstants, xlTextValues).EntireRow.Delete
End With
```
Upvotes: 3 [selected_answer] |
2018/03/20 | 305 | 1,209 <issue_start>username_0: I'm trying to use the Facebook login button in a React Native Android app, but I keep having .iml files created in the root folder of my project every time I sync with Gradle.
Then I get a message saying "the modules [module1, module2] point to the same directory in the file system. Each module has to have a unique path."
I tried to delete and refresh the .iml files several times, but I keep getting the same result.
Has anyone faced this issue?<issue_comment>username_1: I also experienced this error, and was able to resolve it by doing File → Invalidate Caches / Restart… → Invalidate & Restart.
Upvotes: 1 <issue_comment>username_2: I have this issue. I followed the advice to invalidate caches, but after that, Android Studio throws this error:
>
> Help please! Unsupported Modules Detected: Compilation is not
> supported for following modules: react-native-vector-icons,
> react-native-cookie.
>
>
>
Unfortunately you can't have non-Gradle Java modules and Android-Gradle modules in one project.
Upvotes: -1 <issue_comment>username_3: Go to the project root in your terminal and run:
```
npm run clean_nm
npm install
```
It will fix the problem
Upvotes: -1 |
2018/03/20 | 455 | 1,825 <issue_start>username_0: I have been successfully writing Robot Framework test scripts (using the Eclipse IDE with the RED Robot Editor) and now want to start scheduling them using Windows batch files. However, when I try to run a script from a Windows command prompt, I get an error (see below). As I mentioned, the scripts worked fine before: I would right-click the script file in Eclipse, then Run As -> Robot Test.
Here's my error:
```
Importing test library 'Library' failed: ImportError: No module named Library
```
At script startup, here's what happens:
**\_\_init\_\_.robot**
```
*** Settings ***
Resource ../Generic_Configurations/Import_File.robot
```
**Import\_File.robot**
```
*** Settings ***
Library Selenium2Library
Library Library
```
A couple of preliminary questions:
* Is running my scripts from a command line dependent upon the Eclipse environment (RED plugin)? If so, how do I tell the system to pull in those settings?
* Do I need to set any environmental variables so that the Library.py will be recognized?
Any comments/suggestions appreciated!
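For what it's worth: when Robot Framework runs outside the IDE, the directory containing `Library.py` must be on Python's module search path; RED adds it for you, while a plain command prompt does not. A sketch of a programmatic runner that makes this explicit (the `libraries` folder name is an assumption; use whatever directory actually holds `Library.py`):

```python
# run_tests.py -- programmatic Robot Framework runner (paths illustrative)
import robot

rc = robot.run(
    "tests/UnitTests",
    pythonpath=["libraries"],  # the folder that contains Library.py
    outputdir="results",
)
raise SystemExit(rc)
```

From a batch file, setting the `PYTHONPATH` environment variable to that folder before invoking Robot achieves the same thing.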
|
2018/03/20 | 1,737 | 4,818 <issue_start>username_0: I looked at a number of different solutions online, but could not find what I am trying to achieve.
Please help me on this.
I am using Apache Spark 2.1.0 with Scala. Below is my dataframe:
```
+-----------+-------+
|COLUMN_NAME| VALUE |
+-----------+-------+
|col1 | val1 |
|col2 | val2 |
|col3 | val3 |
|col4 | val4 |
|col5 | val5 |
+-----------+-------+
```
I want this to be transposed, as below:
```
+-----+-------+-----+------+-----+
|col1 | col2 |col3 | col4 |col5 |
+-----+-------+-----+------+-----+
|val1 | val2 |val3 | val4 |val5 |
+-----+-------+-----+------+-----+
```<issue_comment>username_1: You can do this using `pivot`, but you still need an aggregation; and what if you have multiple `VALUE`s for one `COLUMN_NAME`?
```
val df = Seq(
("col1", "val1"),
("col2", "val2"),
("col3", "val3"),
("col4", "val4"),
("col5", "val5")
).toDF("COLUMN_NAME", "VALUE")
df
.groupBy()
.pivot("COLUMN_NAME").agg(first("VALUE"))
.show()
+----+----+----+----+----+
|col1|col2|col3|col4|col5|
+----+----+----+----+----+
|val1|val2|val3|val4|val5|
+----+----+----+----+----+
```
EDIT:
if your dataframe is really that small as in your example, you can collect it as `Map`:
```
val map = df.as[(String,String)].collect().toMap
```
and then apply [this answer](https://stackoverflow.com/questions/49386299/spark-convert-map-to-a-single-row-dataframe/49386539#49386539)
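For completeness, the same pivot in PySpark looks like this (a sketch; it assumes a running SparkSession):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("col1", "val1"), ("col2", "val2"), ("col3", "val3"),
     ("col4", "val4"), ("col5", "val5")],
    ["COLUMN_NAME", "VALUE"],
)
df.groupBy().pivot("COLUMN_NAME").agg(F.first("VALUE")).show()
```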
Upvotes: 4 <issue_comment>username_2: If your *dataframe is small enough as in the question*, then you can *collect COLUMN\_NAME to form schema* and *collect VALUE to form the rows* and then *create a new dataframe* as
```
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
//creating schema from existing dataframe
val schema = StructType(df.select(collect_list("COLUMN_NAME")).first().getAs[Seq[String]](0).map(x => StructField(x, StringType)))
//creating RDD[Row]
val values = sc.parallelize(Seq(Row.fromSeq(df.select(collect_list("VALUE")).first().getAs[Seq[String]](0))))
//new dataframe creation
sqlContext.createDataFrame(values, schema).show(false)
```
which should give you
```
+----+----+----+----+----+
|col1|col2|col3|col4|col5|
+----+----+----+----+----+
|val1|val2|val3|val4|val5|
+----+----+----+----+----+
```
Upvotes: 5 [selected_answer]<issue_comment>username_3: Another solution, though lengthy, using crosstab:
```
val dfp = spark.sql(""" with t1 (
select 'col1' c1, 'val1' c2 union all
select 'col2' c1, 'val2' c2 union all
select 'col3' c1, 'val3' c2 union all
select 'col4' c1, 'val4' c2 union all
select 'col5' c1, 'val5' c2
) select c1 COLUMN_NAME, c2 VALUE from t1
""")
dfp.show(50,false)
+-----------+-----+
|COLUMN_NAME|VALUE|
+-----------+-----+
|col1 |val1 |
|col2 |val2 |
|col3 |val3 |
|col4 |val4 |
|col5 |val5 |
+-----------+-----+
val dfp2=dfp.groupBy("column_name").agg( first($"value") as "value" ).stat.crosstab("value", "column_name")
dfp2.show(false)
+-----------------+----+----+----+----+----+
|value_column_name|col1|col2|col3|col4|col5|
+-----------------+----+----+----+----+----+
|val1 |1 |0 |0 |0 |0 |
|val3 |0 |0 |1 |0 |0 |
|val2 |0 |1 |0 |0 |0 |
|val5 |0 |0 |0 |0 |1 |
|val4 |0 |0 |0 |1 |0 |
+-----------------+----+----+----+----+----+
val needed_cols = dfp2.columns.drop(1)
needed_cols: Array[String] = Array(col1, col2, col3, col4, col5)
val dfp3 = needed_cols.foldLeft(dfp2) { (acc,x) => acc.withColumn(x,expr(s"case when ${x}=1 then value_column_name else 0 end")) }
dfp3.show(false)
+-----------------+----+----+----+----+----+
|value_column_name|col1|col2|col3|col4|col5|
+-----------------+----+----+----+----+----+
|val1 |val1|0 |0 |0 |0 |
|val3 |0 |0 |val3|0 |0 |
|val2 |0 |val2|0 |0 |0 |
|val5 |0 |0 |0 |0 |val5|
|val4 |0 |0 |0 |val4|0 |
+-----------------+----+----+----+----+----+
dfp3.select( needed_cols.map( c => max(col(c)).as(c)) :_* ).show
+----+----+----+----+----+
|col1|col2|col3|col4|col5|
+----+----+----+----+----+
|val1|val2|val3|val4|val5|
+----+----+----+----+----+
```
Upvotes: 2 <issue_comment>username_3: To enhance username_2's answer, collect and then convert it to a map.
```
val mp = df.as[(String,String)].collect.toMap
```
with a dummy dataframe, we can build further using foldLeft
```
val f = Seq("1").toDF("dummy")
mp.keys.toList.sorted.foldLeft(f) { (acc,x) => acc.withColumn(mp(x),lit(x) ) }.drop("dummy").show(false)
+----+----+----+----+----+
|val1|val2|val3|val4|val5|
+----+----+----+----+----+
|col1|col2|col3|col4|col5|
+----+----+----+----+----+
```
Upvotes: 0 |
2018/03/20 | 693 | 2,116 | <issue_start>username_0: I am trying to use Artifactory as a front for our Helm Charts. I have the following set up:
* helm-remote-stable : stable community Helm Charts
* helm-local-stable : stable company Helm Charts
* helm-stable: virtual repo with both of the above as upstreams
What's supposed to be happening is that the `helm-stable` virtual repo manages merging the two upstream index.yaml files.
However, I am getting the following exception in the logs:
```
2018-03-20 18:58:04,483 [art-exec-276943] [ERROR] (o.a.a.h.r.m.HelmVirtualMerger:194) - Couldn't read index file in remote repository helm-remote-stable : (was com.github.zafarkhaja.semver.UnexpectedCharacterException) (through reference chain: org.jfrog.repomd.helm.model.HelmIndexYamlMetadata["entries"]->java.util.LinkedHashMap["grafana"]->java.util.TreeSet[6])
```
It looks like Artifactory is trying to enforce semver through some library and it's not parsing the community index.yaml file. This breaks the entire feature of the product.
Here's what's breaking from the community index.yaml:
```
- created: 2018-01-28T21:04:13.090211594Z
description: The leading tool for querying and visualizing time series and metrics.
digest: 6c25c79e16df4c31637d3f8b1b379bb4c0a34157fa5b817f4c518ef50d43911b
engine: gotpl
home: https://grafana.net
icon: https://raw.githubusercontent.com/grafana/grafana/master/public/img/logo_transparent_400x.png
maintainers:
- email: <EMAIL>
name: <NAME>
name: grafana
sources:
- https://github.com/grafana/grafana
urls:
- https://kubernetes-charts.storage.googleapis.com/grafana-0.6.tgz
version: "0.6"
```
Please note the `version: "0.6"`, which is breaking the entire thing.
Any idea on how to get around this? I am using the Artifactory cloud offering.<issue_comment>username_1: Have you tried changing the version of the `grafana` chart from 0.6 to 0.6.0 and pushing it to `helm-local-stable`?
Upvotes: 0 <issue_comment>username_2: This was fixed in Artifactory version 5.9.0.
You can find more details here: <https://www.jfrog.com/jira/browse/RTFACT-15668>
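Until the fix is available to you, a quick way to find every offending (non-semver) chart version in an index.yaml is a short script like the sketch below (Python, assuming PyYAML is installed; the regex is a deliberately simplified semver check):

```python
import re
import yaml  # assumes PyYAML is installed

SEMVER = re.compile(r"^\d+\.\d+\.\d+(?:[-+].*)?$")  # simplified check

with open("index.yaml") as f:
    index = yaml.safe_load(f)

for name, releases in index.get("entries", {}).items():
    for release in releases:
        version = str(release.get("version", ""))
        if not SEMVER.match(version):
            print("{}: non-semver version {!r}".format(name, version))
```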
Upvotes: 2 |
2018/03/20 | 1,724 | 6,379 | <issue_start>username_0: I have Miniconda3 installed at C:\Users\me\Miniconda3, and my 'Project Interpreter' within PyCharm set to my conda environment, and that is all working correctly. However it appears that conda is not set for my path variable as if I type `conda` into the PyCharm Terminal I get
```
'conda' is not recognized as an internal or external command, operable program or batch file.
```
Is there a way to set the PyCharm Terminal to behave like the Anaconda Prompt?
I have Windows 10, PyCharm 2018.1 EAP, and conda 4.4.10 installed.<issue_comment>username_1: You can change PyCharm's settings to achieve this.
In **Settings > Tools > Terminal**, change the `Shell path` as follows:
`cmd.exe "/K" "C:\Users\me\Miniconda3\Scripts\activate.bat" "C:\Users\me\Miniconda3"`
And `C:\Users\me\Miniconda3` can be replaced by the path of any one of your conda environments, such as `base`.
Close the Terminal and reopen it; you will get the Anaconda prompt.
It works in my PyCharm Community Edition 2018.1.2

Upvotes: 7 [selected_answer]<issue_comment>username_2: The shell path may differ; you can check it in the properties of the 'Anaconda Prompt' shortcut: right-click the 'Anaconda Prompt' icon >> Properties >> Shortcut >> Target
Upvotes: 2 <issue_comment>username_3: Great answer by `dd.` It helped me out as well, but I chose to do it in a slightly different way in PyCharm.
It appears we can get the Anaconda prompt running in the PyCharm terminal without having to redirect to a new Shell path, ie. we may keep the original Shell path which in my case is `"C:\Windows\System32\cmd.exe"` for Windows 10. And instead point to the Environment Variables that are used by the conda command prompt, in the following way:
1. Get the PATH value of your conda environment, for instance by performing `echo %PATH` from the conda command prompt as described [here](https://stackoverflow.com/questions/54175042/python-3-7-anaconda-environment-import-ssl-dll-load-fail-error) in the answer by `Rob` / `Adrian`. If you have already set the PATH for the python interpreter in PyCharm you can find it here: `Settings - Build, Execution, Deployment - Console - Python Console`. Click the folder button to the right of Environment variables input and then copy the path value from the Value field to the right of the variable under Name
2. Then go to `Settings - Tools - Terminal`
3. Click the folder icon to the right of Environment Variables input section, and create a new variable by pressing the `+` symbol. Name it `PATH` and paste in the previously copied value. Click OK and then Apply
You could restart PyCharm, or close and restart Terminal within PyCharm, in order to make sure the changes have been recognized.
Now you should be able to use for instance both `pip list` and `conda list` within the same Terminal window within PyCharm. In my case the former command returns a smaller list compared to the larger list from the other command (from conda).
Regardless, it appears you should now be able to use both in one, i.e. to use the same Terminal window to perform conda and regular Python operations, for instance for installations.
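A quick way to verify from that Terminal that conda is actually discoverable (a one-line Python check):

```python
import shutil
print(shutil.which("conda"))  # prints the conda executable's path, or None
```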
>
> Sidenote: Though the two-in-one option works for the Terminal windows it does not seem to work for the Python Console - where I use the conda one within PyCharm. In that Console it currently only recognize packages from the conda interpreter and not the packages from my previous regular python interpreter.
>
>
>
Anyway, hope this helps other people! If anyone has any insights into whether or not this is a viable solution in the long run, please let me know.
Upvotes: 0 <issue_comment>username_4: For Windows users, first check the location of your Anaconda environment;
you can type `conda env list` to show them.
In my case, the env I want for my Anaconda prompt is located at `C:\Users\YOURUSERNAME\Anaconda3\` (which is the root env, the very first you get).
Then in PyCharm, go to Settings > Tools > Terminal.
Inside Shell path enter
`cmd.exe "/K" C:\Users\YOURUSERNAME\Anaconda3\Scripts\activate.bat C:\Users\YOURUSERNAME\Anaconda3`
Upvotes: 3 <issue_comment>username_5: Here's what I got to work (it's a variation of username_1's post):
1. right click 'anaconda powershell prompt' in the start menu; click 'open file location'
2. right click 'anaconda powershell prompt' in file explorer; click 'properties'
3. under the 'Shortcut' tab, the 'Target' line is what you need; mine looked like
```
%windir%\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy ByPass -NoExit -Command "& 'C:\ProgramData\Anaconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\ProgramData\Anaconda3' "
```
4. go to pycharm under settings -> tools -> Terminal
5. leave the current powershell path (don't change it!), and append on:
```
-ExecutionPolicy ByPass -NoExit -Command "& 'C:\ProgramData\Anaconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\ProgramData\Anaconda3' "
```
(this is part of the path above)
In fact, the full version can be written directly as
```
powershell.exe -ExecutionPolicy ByPass -NoExit -Command "& 'C:\ProgramData\Anaconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\ProgramData\Anaconda3' "
```
No need to explicitly specify the path to powershell. (Still need to replace the path to anaconda with your own.)
(also make sure there is a space between the end of the powershell path and the dash)
6. restart the terminal in pycharm and you should be in the base conda environment
Upvotes: 1 <issue_comment>username_6: Type in Anaconda Prompt conda env list to retrieve the conda path.
Move to the folder with your bash (eg : GitBash):
```
cd /etc/profile.d
```
Then add conda to the ~/.bashrc file:
```
echo ". ${PWD}/conda.sh" >> ~/.bashrc
```
Activate the modifications to your ~/.bashrc:
```
source ~/.bashrc
```
Upvotes: 0 <issue_comment>username_7: In case you are using Miniconda, Windows 11 and PyCharm Community Edition:
[](https://i.stack.imgur.com/Ms7RN.png)
In Shell path, set the path below, replacing your respective user ID:
```
cmd.exe "/K"
C:\Users\your_user_Id\AppData\Local\miniconda3\Scripts\activate.bat
C:\Users\your_user_id\AppData\Local\miniconda3
```
Upvotes: 1 |
2018/03/20 | 1,875 | 6,749 <issue_start>username_0: I have a file that I am reading data from; the data is in the format
[function] [number1] [number2] where [number2] is optional!
Eg.
```
+ 88 19
- 29 28
! 4
+ 2 2
```
Output from above:
>
> 107
>
> 1
>
>
>
My code works just fine when the line is in the format [function] [number1] [number2], but it fails when [number2] does not exist on a line.
My code so far:
Libraries I am using:
iostream
fstream
cstdlib
string
...
```
while (infile >> function >> number1 >> number2)
{
switch (function)
{
case '+':
addition(number1, number2);
break;
case '-':
subtraction(number1, number2);
break;
case '!':
factorial(number1);
break;
```
...
How can I read [number1] only and skip back to [function] on the next read IF [number2] does not exist?
Any help is appreciated,
Thanks!
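The usual fix for optional trailing fields is to read one whole line at a time and then parse it; in C++ that means `std::getline` into a `std::string`, followed by a `std::istringstream`. The structure, sketched here in Python for brevity (hypothetical file path; the same shape carries over to C++):

```python
import math

def process(path):
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue  # skip blank lines
            op, nums = parts[0], [int(x) for x in parts[1:]]
            if op == "+":
                print(nums[0] + nums[1])
            elif op == "-":
                print(nums[0] - nums[1])
            elif op == "!":
                print(math.factorial(nums[0]))

process("input.txt")  # hypothetical input file
```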
In **Settings > Tools > Terminal**, change the `Shell path` as following:
`cmd.exe "/K" "C:\Users\me\Miniconda3\Scripts\activate.bat" "C:\Users\me\Miniconda3"`
And the `C:\Users\me\Miniconda3` can be replaced by either one of your conda environment name such as `base`
Close the Terminal and reopen it, you will get the Anaconda prompt.
It works in my PyCharm Community Edition 2018.1.2

Upvotes: 7 [selected_answer]<issue_comment>username_2: The shell path may differ, you can check from the properties of shortcut of 'Anaconda Prompt': rightClick on the icon of 'Anaconda Prompt' >> properties >> shortcut >> Target
Upvotes: 2 <issue_comment>username_3: Great answer by `dd.` It helped me out as well, but I chose to do it in a slightly different way in PyCharm.
It appears we can get the Anaconda prompt running in the PyCharm terminal without having to redirect to a new Shell path, ie. we may keep the original Shell path which in my case is `"C:\Windows\System32\cmd.exe"` for Windows 10. And instead point to the Environment Variables that are used by the conda command prompt, in the following way:
1. Get the PATH value of your conda environment, for instance by performing `echo %PATH` from the conda command prompt as described [here](https://stackoverflow.com/questions/54175042/python-3-7-anaconda-environment-import-ssl-dll-load-fail-error) in the answer by `Rob` / `Adrian`. If you have already set the PATH for the python interpreter in PyCharm you can find it here: `Settings - Build, Execution, Deployment - Console - Python Console`. Click the folder button to the right of Environment variables input and then copy the path value from the Value field to the right of the variable under Name
2. Then go to `Settings - Tools - Terminal`
3. Click the folder icon to the right of Environment Variables input section, and create a new variable by pressing the `+` symbol. Name it `PATH` and paste in the previously copied value. Click OK and then Apply
You could restart PyCharm, or close and restart Terminal within PyCharm, in order to make sure the changes have been recognized.
Now you should be able to use for instance both `pip list` and `conda list` within the same Terminal window within PyCharm. In my case the former command returns a smaller list compared to the larger list from the other command (from conda).
Regardless, it appears you should now be able to use both within one, ie. to use the same Terminal window to perform conda and regular python operations, for instance for installations.
>
> Sidenote: Though the two-in-one option works for the Terminal windows it does not seem to work for the Python Console - where I use the conda one within PyCharm. In that Console it currently only recognize packages from the conda interpreter and not the packages from my previous regular python interpreter.
>
>
>
Anyway, hope this helps other people! If anyone has any insights into whether or not this is a viable solution in the long run, please let me know.
Upvotes: 0 <issue_comment>username_4: For Windows users: first of all, check the location of your Anaconda environment.
You can type `conda env list` to show them.
In my case, the env I want my Anaconda prompt to use is located at `C:\Users\YOURUSERNAME\Anaconda3\` (which is the root env, the very first one you get).
Then go to PyCharm: Settings > Tools > Terminal.
Inside Shell path, enter
`cmd.exe "/K" C:\Users\YOURUSERNAME\Anaconda3\Scripts\activate.bat C:\Users\YOURUSERNAME\Anaconda3`
Upvotes: 3 <issue_comment>username_5: Here's what I got to work (it's a variation of username_1's post):
1. right click 'anaconda powershell prompt' in the start menu; click 'open file location'
2. right click 'anaconda powershell prompt' in file explorer; click 'properties'
3. under 'shortcut' tab the 'target' line is what you need. mine looked like
```
%windir%\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy ByPass -NoExit -Command "& 'C:\ProgramData\Anaconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\ProgramData\Anaconda3' "
```
4. go to pycharm under settings -> tools -> Terminal
5. leave the current powershell path (don't change it!), and append on:
```
-ExecutionPolicy ByPass -NoExit -Command "& 'C:\ProgramData\Anaconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\ProgramData\Anaconda3' "
```
(this is part of the path above)
In fact, the full version can be written directly as
```
powershell.exe -ExecutionPolicy ByPass -NoExit -Command "& 'C:\ProgramData\Anaconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\ProgramData\Anaconda3' "
```
No need to explicitly specify the path to powershell. (Still need to replace the path to anaconda with your own.)
(also make sure there is a space between the end of the powershell path and the dash)
6. restart the terminal in pycharm and you should be in the base conda environment
Upvotes: 1 <issue_comment>username_6: Type `conda env list` in the Anaconda Prompt to retrieve the conda path.
Move to the folder with your bash (e.g. Git Bash):
```
cd /etc/profile.d
```
And add conda to the ~/.bashrc file:
```
echo ". ${PWD}/conda.sh" >> ~/.bashrc
```
Activate the modifications of your ~/.bashrc:
```
source ~/.bashrc
```
Upvotes: 0 <issue_comment>username_7: In case you are using Miniconda, Windows 11 and PyCharm Community Edition
[](https://i.stack.imgur.com/Ms7RN.png)
In Shell path, set the path below, replacing your respective user id:
```
cmd.exe "/K" C:\Users\your_user_id\AppData\Local\miniconda3\Scripts\activate.bat C:\Users\your_user_id\AppData\Local\miniconda3
```
Upvotes: 1 |
2018/03/20 | 701 | 2,470 | <issue_start>username_0: Is the following C++ code standard compliant?
```
#include <iostream>
int main()
{
[](auto v){ std::cout << v << std::endl; }.operator()(42);
}
```
Both *clang++ 3.8.0* and *g++ 7.2.0* [compile this code fine](http://coliru.stacked-crooked.com/a/d6534b6f121a1706) (the compiler flags are `-std=c++14 -Wall -Wextra -Werror -pedantic-errors`).<issue_comment>username_1: **Yes, it appears to be well-defined** since template parameters for lambdas' `operator()` are strictly defined.
`[expr.prim.lambda]/5`
>
> ...
>
> For a generic lambda, the closure type has a public inline function
> call operator member template (14.5.2) whose *template-parameter-list* consists of one invented type *template-parameter*
> for each occurrence of `auto` in the lambda’s *parameter-declaration-clause*, in order of appearance.
>
> ...
>
>
>
Upvotes: 4 <issue_comment>username_2: This is indeed standard compliant. The standard specifies there must be a member `operator()`, and that it has one template argument for every occurrence of `auto` in its parameter-declaration-clause. There is no wording that forbids providing those explicitly.
Bottom line: The call operator of a lambda is just a normal function (template, if generic).
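To make that concrete, here is roughly the closure type the compiler invents for the generic lambda above; this is only an illustrative sketch, since the real type is unnamed:

```
#include <iostream>

// hand-written stand-in for [](auto v){ std::cout << v << std::endl; }
struct Closure
{
    template <typename T>
    void operator()(T v) const { std::cout << v << std::endl; }
};

int main()
{
    Closure{}.operator()(42); // same explicit call as in the question
    // explicit template arguments are also allowed on the real lambda:
    [](auto v){ std::cout << v << std::endl; }.operator()<int>(42);
}
```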
---
For reference, the relevant standard clause:
>
> The closure type for a non-generic lambda-expression has a public
> inline function call operator (16.5.4) whose parameters and return
> type are described by the lambda-expression’s
> parameter-declaration-clause and trailing-return-type respectively.
> **For a generic lambda, the closure type has a public inline function
> call operator member template (17.5.2) whose template-parameter-list
> consists of one invented type template- parameter for each occurrence
> of auto in the lambda’s parameter-declaration-clause, in order of
> appearance.** The invented type template-parameter is a parameter pack
> if the corresponding parameter-declaration declares a function
> parameter pack (11.3.5). The return type and function parameters of
> the function call operator template are derived from the
> lambda-expression’s trailing-return-type and
> parameter-declaration-clause by replacing each occurrence of auto in
> the decl-specifiers of the parameter-declaration-clause with the name
> of the corresponding invented template-parameter.
>
>
>
8.1.5.1/3 [expr.prim.lambda.closure] in N4659 (C++17), emphasis mine.
Upvotes: 4 |
2018/03/20 | 311 | 1,044 | <issue_start>username_0: So, I'm receiving a string named "desktop", which is, actually, a property of my obj.
So, as a property it should look like `obj.desktop`.
I've been trying to concatenate it as follows:
```
oJSonElementByIndex +"["+"'desktop'"+"]"
```
And also:
```
oJSonElementByIndex + "." + "desktop"
```
But it always looks like this:
```
"[object Object]['detalleDesktop']"
```
Any ideas on what's wrong?
Thanks in advance.<issue_comment>username_1: If you have a property name of an object stored in a string you can access the property value via the `[]` bracket notation, putting the variable in the brackets as shown below.
```
var desktop = 'some_property_name';
...
var value = oJSonElementByIndex[desktop];
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You were almost there:
```
v = "desktop"
oJSonElementByIndex[v]
oJSonElementByIndex["desktop"]
```
string + object = string + object.toString() :
```
({}).toString() // "[object Object]"
({}) + "" // "[object Object]"
```
Upvotes: 0 |
2018/03/20 | 782 | 3,291 | <issue_start>username_0: I'm working on Play Framework 2.4 and AngularJs 1.5.8 with coffeescript.
I am analyzing if it is convenient to use npm.
We are using several libraries and many of them have their own dependencies.
So, I do not know, what would happen if 2 different libraries had the same dependency but in different versions?
Could it cause a problem? Which version will be downloaded into the node\_modules directory?
Is it possible to use 2 versions of the same library just using npm or do I need something like jspm?
Thanks in advance.<issue_comment>username_1: In my experience, npm normally replaces the old version of a package with the latest one. So I think if you are using npm then it is not possible to store 2 different versions of the same package.
Upvotes: -1 <issue_comment>username_2: npm uses [semantic versioning](https://semver.org/) to specify version ranges. By default, when you run `npm install --save foo`, it downloads the latest version of the `foo` package and stores its number in `package.json` dependencies, beginning with a caret (^). The caret indicates 'compatible' with, which generally means anything with the same major version (the first number).
When npm resolves nested dependencies, it checks to see if the version strings of dependencies can be resolved with a single version. If so, it installs that version in the top-level `node_modules` directory. Otherwise, it installs a version for each in nested `node_modules` directories.
In other words, it handles this problem automatically, provided publishers follow the semantic versioning conventions and don't include breaking changes in minor or bugfix releases. This is enforced by the community, in that failure to follow said conventions is a good way to make people not want to use your stuff.
Upvotes: 1 <issue_comment>username_3: NPM, from the start, was always designed to cope with multiple versions of a dependency. It did this by making every NPM module have its own node\_modules directory. This did cause module bloat, so NPM was later made to intelligently flatten the node\_modules directory where it could.
But for now let's forget about NPM and node\_module directory flattening and think about how it was possible to have 2 versions of the same dependency.
Let's say we have 2 modules called `X` and `Y` that both require a module called `Z`; the problem is `X` requires version 1, and `Y` requires version 2. NPM would create a structure like this:
```
node_modules
  X
    node_modules
      Z ver 1
  Y
    node_modules
      Z ver 2
```
Because of the way Node searches node\_modules directories, X would always find the correct version of Z, and the same goes for Y. This is because Node will first check the current directory for a node\_modules folder; if this does not exist, it traverses up the directory tree until it finds a module called Z.
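A minimal sketch of that lookup in Node-style JavaScript (the real algorithm also handles `package.json`, file extensions and core modules):

```js
const path = require('path');
const fs = require('fs');

// Walk up from `fromDir` until a node_modules folder containing `name` is found
function resolveModuleDir(name, fromDir) {
  let dir = fromDir;
  while (true) {
    const candidate = path.join(dir, 'node_modules', name);
    if (fs.existsSync(candidate)) return candidate;
    const parent = path.dirname(dir);
    if (parent === dir) return null; // hit the filesystem root, give up
    dir = parent;
  }
}

console.log(resolveModuleDir('Z', __dirname));
```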
Now back to the flattening bit.
If X & Y were both now using Z ver 2, the directory structure would look something like this:
```
node_modules
  X
  Y
  Z ver 2
```
As you can see, X will now find Z ver 2, and so will Y.
This is a brief explanation of how Node does module resolution.
Hope that helps..
ps. And like @username_2 has pointed out, knowing what to keep and merge is determined by semantic versioning.
Upvotes: 2 |
2018/03/20 | 1,029 | 4,289 | <issue_start>username_0: I have a component. It takes a value as its @Input. If the onDelete() method is called, there is an EventEmitter that will emit the value that was put in. I am trying to write a test for this component, and the value I get from the emitter is undefined, not the value I passed.
Here is the test:
```
import { CheckedComponent } from '../checked/checked.component';
describe('CheckedComponent', () => {
it('emits the input value on delete', () => {
let total = null;
const component = new CheckedComponent();
component.item = 'example';
component.delete.subscribe(item => {
total = item;
});
component.onDelete()
console.log("HERE", total)
expect(total).not.toBeNull();
// expect(total).toEqual('example');
});
});
```
The commented line does not work. total equals "undefined," not 'example.' Interestingly, if I console.log(component.item), I still get the 'example' piece of data.
Here is the actual component:
```
import { Component, Input, Output, EventEmitter } from '@angular/core';
@Component({
templateUrl: './checked.component.html',
selector: 'checked',
styleUrls: ['./checked.component.css']
})
export class CheckedComponent {
@Input('item') item;
@Output() delete = new EventEmitter()
onDelete() {
this.delete.emit(this.item.value)
}
}
```
Not sure what I am doing wrong here. Semi-new to testing in Angular 4. Any guidance would be appreciated. Thanks!
2018/03/20 | 988 | 3,505 | <issue_start>username_0: Why does it say that it cannot find my class? Why should I create another class with the same name in order to make it not complain?
```
from typing import Dict
class WeekDay:
    def __init__(self, day_number, day_name):
        self.day_name = day_name
        self.day_number = day_number

    @staticmethod
    def get_week_days() -> Dict[str, WeekDay]:  # WeekDay unresolved reference error
        weekdays = {
            "monday": WeekDay(1, "Monday"),
            "tuesday": WeekDay(2, "Tuesday"),
            "wednesday": WeekDay(3, "Wednesday"),
            "thursday": WeekDay(4, "Thursday"),
            "friday": WeekDay(5, "Friday"),
            "saturday": WeekDay(6, "Saturday"),
            "sunday": WeekDay(7, "Sunday")
        }
        return weekdays
```<issue_comment>username_1: you cannot reference a class from its own definition
```
class A:
    def foo(self):
        pass

    bar = A.foo
```
this will raise the following error:
```
Traceback (most recent call last):
  ...
    class A:
  File "/home/shmulik/so/ans.py", line 28, in A
    bar = A.foo
NameError: name 'A' is not defined
```
As a workaround for this issue, [PEP484 - Type Hints](https://www.python.org/dev/peps/pep-0484/#forward-references) (thanks to @ashwini-chaudhary for the comment) allows writing the not-yet-defined class name as a string, to be resolved later.
>
> When a type hint contains names that have not been defined yet, that
> definition may be expressed as a string literal, to be resolved later.
>
>
>
so we can for example write:
```
class A:
    def foo(self, x: 'A'):
        pass
```
and this class will be interpreted happily by python.
Side Note
=========
So, we mentioned in the first example that we cannot reference a class from its own definition, so why does this code work?
```
class A:
    def foo(self):
        A.bar()

    @staticmethod
    def bar():
        print(42)

A().foo()
```
This code works because the Python interpreter skips the body of the method definition of `foo()` during the definition of class `A`; only on the last line, when `foo()` is called (and class A is defined), does the Python interpreter execute the body of `foo()`, calling `A.bar()`.
Upvotes: 0 <issue_comment>username_2: From docs ([Section Forward references](https://www.python.org/dev/peps/pep-0484/#forward-references))
>
> When a type hint contains names that have not been defined yet, that definition may be expressed as a string literal, to be resolved later.
>
>
> A situation where this occurs commonly is the definition of a
> container class, where the class being defined occurs in the signature
> of some of the methods.
>
>
>
so in order to solve that just wrap the type with quotes, like this:
```
from typing import Dict

class WeekDay:
    def __init__(self, day_number, day_name):
        self.day_name = day_name
        self.day_number = day_number

    @staticmethod
    def get_week_days() -> Dict[str, 'WeekDay']:  # quote WeekDay
        weekdays = {
            "monday": WeekDay(1, "Monday"),
            "tuesday": WeekDay(2, "Tuesday"),
            "wednesday": WeekDay(3, "Wednesday"),
            "thursday": WeekDay(4, "Thursday"),
            "friday": WeekDay(5, "Friday"),
            "saturday": WeekDay(6, "Saturday"),
            "sunday": WeekDay(7, "Sunday")
        }
        return weekdays
```
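As a quick check (run after the class definition above) that the quoted name resolves back to the real class:

```
import typing

print(typing.get_type_hints(WeekDay.get_week_days))
# roughly: {'return': typing.Dict[str, WeekDay]}
```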
Upvotes: 6 [selected_answer]<issue_comment>username_3: From Python 3.7, you can use:
`from __future__ import annotations`
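With that import at the top of the module, every annotation is kept as a string automatically (PEP 563), so the original signature works without manual quoting; a minimal sketch:

```
from __future__ import annotations

from typing import Dict

class WeekDay:
    @staticmethod
    def get_week_days() -> Dict[str, WeekDay]:  # no quotes needed
        ...
```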
Upvotes: 4 |
2018/03/20 | 2,028 | 4,850 | <issue_start>username_0: I was using Android NDK 13b with Visual Studio 2017 and I got an update to Android NDK 15c, which added one error. I am also using the new Clang 5.0 (before it was 3.8). This is the error:
>
> /usr/local/google/buildbot/src/android/ndk-r15-release/ndk/sources/android/support/src/stdio/vfprintf.c(242):
> error : undefined reference to '\_\_signbit'
>
>
>
This is my full verbose output:
>
> 1>Android clang version 5.0.300080 (based on LLVM 5.0.300080)
> 1>Target: i686-none-linux-android 1>Thread model: posix
> 1>InstalledDir:
> C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\toolchains\llvm\prebuilt\windows-x86\_64\bin
> 1>Found candidate GCC installation:
> C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\toolchains\x86-4.9\prebuilt\windows-x86\_64/lib/gcc/i686-linux-android\4.9.x
> 1>Selected GCC installation:
> C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\toolchains\x86-4.9\prebuilt\windows-x86\_64/lib/gcc/i686-linux-android/4.9.x
> 1>Candidate multilib: .;@m32 1>Selected multilib: .;@m32 1>
> "C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\toolchains\x86-4.9\prebuilt\windows-x86\_64/lib/gcc/i686-linux-android/4.9.x/../../../../i686-linux-android/bin\ld"
> "--sysroot=C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\platforms\android-23\arch-x86"
> --eh-frame-hdr -m elf\_i386 -shared -o "x86\Release\libPredictEngineMultiLang.so"
> "C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\platforms\android-23\arch-x86/usr/lib\crtbegin\_so.o"
> "-LC:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\toolchains\llvm\prebuilt\windows-x86\_64\lib64\clang\5.0.300080\lib\linux\i386"
> "-LC:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\toolchains\x86-4.9\prebuilt\windows-x86\_64/lib/gcc/i686-linux-android/4.9.x"
> "-LC:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\toolchains\x86-4.9\prebuilt\windows-x86\_64/lib/gcc/i686-linux-android/4.9.x/../../../../i686-linux-android/lib"
> "-LC:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\platforms\android-23\arch-x86/usr/lib"
> "-rpath-link=C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\platforms\android-23\arch-x86\usr\lib"
> "-rpath-link=C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\platforms\android-23\arch-x86\usr\lib"
> "-LC:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\platforms\android-23\arch-x86\usr\lib"
> "-LC:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\toolchains\x86-4.9\prebuilt\windows-x86\_64\lib\gcc\i686-linux-android\4.9.x"
> "-LC:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\sources\cxx-stl\llvm-libc++\libs\x86"
> "-LC:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\toolchains\x86-4.9\prebuilt\windows-x86\_64\lib\gcc\i686-linux-android\4.9.x"
> "-LC:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\sources\cxx-stl\llvm-libc++\libs\x86"
> --no-undefined -z relro -z now -z noexecstack "x86\Release\CharsetConverter.o" "x86\Release\CppSQLite3.o"
> "x86\Release\PhonemConverterEN.o"
> "x86\Release\PhonemConverterFR.o" "x86\Release\PhoneticEngineEN.o"
> "x86\Release\PhoneticEngineFR.o" "x86\Release\PredictDb.o"
> "x86\Release\PredictEngineEN.o" "x86\Release\PredictEngineFR.o"
> "x86\Release\SearchEngineEN.o" "x86\Release\SearchEngineFR.o"
> "x86\Release\sqlite3.o" "x86\Release\DictionaryEN.o"
> "x86\Release\DictionaryFR.o" "x86\Release\PhonemEN.o"
> "x86\Release\PhonemFR.o" "x86\Release\PredictEN.o"
> "x86\Release\PredictFR.o"
> "C:\Users\hhenry-garon\Downloads\OpenSSL-for-Android-Prebuilt-master\OpenSSL-for-Android-Prebuilt-master\openssl-1.0.2\x86\lib\libcrypto.a"
> -landroid\_support -lc++\_static -lc++abi -landroid\_support -lc++\_static -lc++abi -llog -landroid -lgcc -ldl -lc -lgcc -ldl "C:\ProgramData\Microsoft\AndroidNDK64\android-ndk-r15c\platforms\android-23\arch-x86/usr/lib\crtend\_so.o"
> 1>/usr/local/google/buildbot/src/android/ndk-r15-release/ndk/sources/android/support/src/stdio/vfprintf.c(242):
> error : undefined reference to '\_\_signbit' 1>clang.exe : error :
> linker command failed with exit code 1 (use -v to see invocation)
>
>
>
I am compiling an x86 Android library (.so) with Clang 5.0 in Visual Studio 2017. I have only read that maybe I can add a no-stdio configuration, but I have no idea where to do that in Visual Studio 2017.
Thanks<issue_comment>username_1: There are issues with Android NDK 15c on Visual Studio 2017.
I was using the LLVM static C++ standard library and I changed to the GNU static one; everything works now.
Thanks to microsoft (not)
Upvotes: 1 [selected_answer]<issue_comment>username_2: For anyone else running into this: I resolved it by adding "m" to the "Library Dependencies" in the linker flags in VS. It seems like libc++ has a dependency on the C math library. Functions like printf, sprintf, etc. are using `__signbit`.
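If you drive the linker yourself instead of through the VS property pages, the equivalent fix is simply appending `-lm` to the link line; a sketch based on the log above, with most flags and object files elided:

```
ld --sysroot=<ndk>/platforms/android-23/arch-x86 -shared -o libPredictEngineMultiLang.so sqlite3.o ... -landroid_support -lc++_static -lc++abi -lm -llog -landroid -ldl -lc
```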
Upvotes: 1 |
2018/03/20 | 768 | 3,132 | <issue_start>username_0: My goal is to simply offer my addin if the Office application is launched with a certain argument.
----------------------------------------------------------------------------------------------
Unfortunately, I could not find anything to help me do this. I have tried to use the Office Application Load Addin switch `/lc:Addin.dll` with no success. One option I entertained was to create all of the Office addin registry entries at the time I wish to launch the addin; however, this seemed extremely clumsy and way too much overhead. Also, the deal breaker for me was that creating the registry entries requires elevated privileges in order to initialize the addin.
I decided to have my addin not do much of anything at startup unless a certain environment variable exists.
In order to do it this way, I need to either set the ribbon to non-visible by default and show the ribbon upon discovering the env variable, or the opposite: have the ribbon visible by default and hide it upon discovering the env variable.
Things I have tried
===================
* Setting the ribbon's tab: `Globals.Ribbons.MyRibbon.MyTab.visible = false`.
* Invalidating the ribbon: `Globals.Ribbons.MyRibbon.RibbonUi.Invalidate()`.
* Invalidating the tab after setting visible to false: `Globals.Ribbons.MyRibbon.RibbonUi.InvalidateControl(tabCtrlId)`.
***The things tried don't include the dozens of things tried in order to only load the addin in certain circumstances.***<issue_comment>username_1: I figured out a solution.
-------------------------
*After digging into the base class `AddInBase`, I discovered some methods available for me to override.*
So I overrode the `CreateRibbonExtensibilityObject` method.
-----------------------------------------------------------
```
protected override IRibbonExtensibility CreateRibbonExtensibilityObject( )
{
if( Environment.GetCommandLineArgs( ).ToList( ).FirstOrDefault( a => a.ToLower( ).Contains( "/launchmyaddin" ) ) != null )
{
return null;
}
return base.CreateRibbonExtensibilityObject( );
}
```
>
> What this does is prevent the ribbon from even being created if my switch is present; if it is not present, then I just pass off to the base class implementation in order to have the Addin create my ribbon like normal.
>
>
>
Also, `CreateRibbonExtensibilityObject()` returns an object that has a `GetCustomUI( ribbonXml )` so we can create our custom ribbon from xml.
This gives us more power.
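For example, a standalone `IRibbonExtensibility` implementation could hand back (or withhold) the ribbon XML itself; this is only a sketch, and the class name and XML are illustrative:

```
using System;
using System.Linq;
using Microsoft.Office.Core;

public class ConditionalRibbon : IRibbonExtensibility
{
    public string GetCustomUI(string ribbonID)
    {
        // returning null suppresses the ribbon, mirroring the switch check above
        if (Environment.GetCommandLineArgs().Any(a => a.ToLower().Contains("/launchmyaddin")))
            return null;

        return @"<customUI xmlns=""http://schemas.microsoft.com/office/2009/07/customui"">
  <ribbon><tabs><tab id=""MyTab"" label=""My Addin"" /></tabs></ribbon>
</customUI>";
    }
}
```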
My solution only needed to show/hide the ribbon once, at startup. I did think about how this might be toggled on and off, so I went poking around for other members I could override.
I believe you can override the `CreateRibbonObjects( )` member, which I think gets called every time a ribbon is invalidated. There you may be able to remove, from the collection the base class returns, the item that represents the ribbon you wish to hide.
Upvotes: 2 <issue_comment>username_2: If you use custom tab(s) (that is, ControlIdType=Custom) you may set visibility via:
```cs
foreach (var tab in Globals.Ribbons.Ribbon1.Tabs)
{
tab.Visible = false;
}
```
Upvotes: 0 |
2018/03/20 | 2,360 | 7,727 | <issue_start>username_0: I am trying my best to learn ASP.NET MVC programming but I've run into a problem that I can't resolve. I am writing a code-first database and I want to populate 3 of my tables via the Seed method; here's the models code:
```
//Doctor Table
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web;
namespace Przychodnia.Models.Clinic
{
public class Doctor
{
public int DoctorID { get; set; }
public string Name { get; set; }
public string Surname { get; set; }
public string City { get; set; }
public string PostCode { get; set; }
public string Adress { get; set; }
public string Salary { get; set; }
[StringLength(9, MinimumLength = 9, ErrorMessage = "Phone number must be 9 characters long.")]
public string PhoneNumber { get; set; }
public string RoomNumber { get; set; }
public Specialization Specialization { get; set; }
public List<Appointment> Appointments { get; set; }
public List<Vacation> Vacations { get; set; }
}
}
//Patient Table
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Web;
namespace Przychodnia.Models.Clinic
{
public class Patient
{
public int PatientID { get; set; }
[StringLength(11, MinimumLength = 11, ErrorMessage = "PESEL number must be 11 characters long.")]
public string PESEL { get; set; }
public string Name { get; set; }
public string Surname { get; set; }
public string City { get; set; }
public string PostCode { get; set; }
public string Adress { get; set; }
[DataType(DataType.Date)]
[DisplayFormat(DataFormatString = "{0:yyyy-MM-dd}", ApplyFormatInEditMode = true)]
public DateTime BirthDate { get; set; }
public string InsuranceNumber { get; set; }
[StringLength(9, MinimumLength = 9, ErrorMessage = "Phone number must be 9 characters long.")]
public string PhoneNumber { get; set; }
public List<Appointment> Appointments { get; set; }
}
}
//Specialization Table
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace Przychodnia.Models.Clinic
{
public class Specialization
{
public int SpecializationID { get; set; }
public string Name { get; set; }
}
}
```
Now, I have DummyData class, which looks like so:
```
using Przychodnia.Models.Clinic;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace Przychodnia.Data
{
public class DummyData
{
public static List<Specialization> getSpecializations()
{
List<Specialization> specializations = new List<Specialization>()
{
new Specialization()
{
Name = "Dermatolog"
},
new Specialization()
{
Name = "Chirurg"
},
new Specialization()
{
Name = "Laryngolog"
}
};
return specializations;
}
public static List<Doctor> getDoctors(ClinicContext context)
{
List<Doctor> doctors = new List<Doctor>()
{
new Doctor()
{
Name = "Jan",
Surname = "Kowalski",
City = "Olsztyn",
PostCode = "123-123",
Adress = "Boeinga 2/14",
Salary = "5500",
PhoneNumber = "111222333",
RoomNumber = "18",
Specialization = context.Specializations.FirstOrDefault(o => o.Name == "Chirurg")
},
new Doctor()
{
Name = "Aleksy",
Surname = "Dimitriv",
City = "Warszawa",
PostCode = "300-200",
Adress = "Dymowskiego 2/14",
Salary = "10000",
PhoneNumber = "000999888",
RoomNumber = "101",
Specialization = context.Specializations.FirstOrDefault(o => o.Name == "Dermatolog")
},
new Doctor()
{
Name = "Juliusz",
Surname = "Petrarka",
City = "Kraków",
PostCode = "000-123",
Adress = "Mickiewicza 2/14",
Salary = "8500",
PhoneNumber = "333444222",
RoomNumber = "01",
Specialization = context.Specializations.FirstOrDefault(o => o.Name == "Laryngolog")
}
};
return doctors;
}
public static List<Patient> getPatients()
{
List<Patient> patients = new List<Patient>()
{
new Patient()
{
Name = "Anna",
Surname = "Pszczoła",
PESEL = "01234567890",
City = "Olsztyn",
PostCode = "123-123",
Adress = "Heweliusza 22",
BirthDate = DateTime.Now,
InsuranceNumber = "123123123ZZ",
PhoneNumber = "123321123"
},
new Patient()
{
Name = "Juliusz",
Surname = "Słowacki",
PESEL = "02030405060",
City = "Elbląg",
PostCode = "2-123",
Adress = "Mariusza 50",
BirthDate = DateTime.Now,
InsuranceNumber = "00000000123Z",
PhoneNumber = "000221122"
},
new Patient()
{
Name = "Karolina",
Surname = "Ogórek",
PESEL = "11104592831",
City = "Lublin",
PostCode = "123-2",
Adress = "Batorego 2",
BirthDate = DateTime.Now,
InsuranceNumber = "zzxxcc0002333",
PhoneNumber = "989231453"
},
};
return patients;
}
}
}
```
And I've edited Configuration.cs seed method by adding some lines:
```
context.Specializations.AddOrUpdate(
s => s.Name, DummyData.getSpecializations().ToArray());
context.SaveChanges();
context.Patients.AddOrUpdate(
p => new { p.PESEL, p.Name, p.Surname, p.City, p.PostCode, p.Adress, p.BirthDate, p.InsuranceNumber, p.PhoneNumber }, DummyData.getPatients().ToArray());
context.SaveChanges();
context.Doctors.AddOrUpdate(
d => new { d.Name, d.Surname, d.City, d.PostCode, d.Adress, d.Salary, d.PhoneNumber, d.RoomNumber, d.Specialization }, DummyData.getDoctors(context).ToArray());
context.SaveChanges();
```
Then, when I try to update-database I am getting this kind of message:
```
"Unable to create a constant value of type 'Przychodnia.Models.Clinic.Specialization'.
Only primitive types or enumeration types are supported in this context."
```<issue_comment>username_1: You can't assign `d.Specialization` directly inside the `AddOrUpdate` method, since `d.Specialization` is itself declared as a complex class, while that method expects all property arguments in the expression tree to be declared as primitive/simple types. Instead, you can create a `SpecializationID` unique key property inside the `Doctor` class and add a proper association from that property to the `Specialization` class, like this:
```
public class Doctor
{
// other primitive type properties
// since EF is used, ForeignKeyAttribute may be used here
[ForeignKey("Specialization")]
public int SpecializationID { get; set; }
public virtual Specialization Specialization { get; set; }
}
```
Then, you need to adjust the `AddOrUpdate` method expression by including the `SpecializationID` property in place of the `Specialization` class:
```
context.Doctors.AddOrUpdate(
d => new { d.Name, d.Surname, d.City, d.PostCode, d.Adress, d.Salary, d.PhoneNumber, d.RoomNumber, d.SpecializationID }, DummyData.getDoctors(context).ToArray());
```
A similar issue may be found here (with multiple complex classes instead of one):
[`Only primitive types or enumeration are supported in this context` error whilst seeding ASP.NET MVC using AddOrUpdate match on multiple fields](https://stackoverflow.com/questions/42316385/only-primitive-types-or-enumeration-are-supported-in-this-context-error-whilst)
Upvotes: 1 <issue_comment>username_2: Alright, I got it now! I'll post the answer so maybe it'll help somebody.
All I had to do was change the seed method like so:
```
context.Specializations.AddOrUpdate(
s => s.SpecializationID , DummyData.getSpecializations().ToArray());
context.SaveChanges();
context.Patients.AddOrUpdate(
p => p.PatientID, DummyData.getPatients().ToArray());
context.SaveChanges();
context.Doctors.AddOrUpdate(
d => d.DoctorID, DummyData.getDoctors(context).ToArray());
context.SaveChanges();
```
So it passes only a primitive type, and the rest of the populating is done in DummyData.cs. It was as simple as that...
Upvotes: 0 |
2018/03/20 | 991 | 3,529 | <issue_start>username_0: I have some number fields set based on a large number of factors in an eCommerce site. I want an option that will clear out those numbers if a radio option is clicked, but then return to their previous numbers if a different radio option is clicked. I have the following code to set the values to 0, but I don't know how to continue for setting them back. My values are defined in several different places, so I can't easily refer to them, but is there a way to read the fields before they're set to 0, and then set them back to their previous state?
```
$('input[type="radio"]').click(function() {
if($(this).attr('id') == 'yes-option') {
$('#option1').val('0');
$('#option2').val('0');
$('#option3').val('0');
$('#option4').val('0');
}
else if($(this).attr('id') == 'no-option') {
???
}
```<issue_comment>username_1: You can use **[`data-attributes`](https://api.jquery.com/data/)** to store the previously entered/selected value:
```
$('input[type="radio"]').click(function() {
var $optionOne = $('#option1');
var $optionTwo = $('#option2');
var $optionThree = $('#option3');
var $optionFour = $('#option4');
if($(this).attr('id') == 'yes-option') {
$optionOne.data('previous-value', $optionOne.val());
$optionOne.val('0');
$optionTwo.data('previous-value', $optionTwo.val());
$optionTwo.val('0');
$optionThree.data('previous-value', $optionThree.val());
$optionThree.val('0');
$optionFour.data('previous-value', $optionFour.val());
$optionFour.val('0');
} else if($(this).attr('id') == 'no-option') {
$optionOne.val($optionOne.data('previous-value'));
$optionTwo.val($optionTwo.data('previous-value'));
$optionThree.val($optionThree.data('previous-value'));
$optionFour.val($optionFour.data('previous-value'));
}
});
```
Upvotes: 2 <issue_comment>username_2: A possible approach would be to always keep your options' previous state in an array.
```js
var previousStates = [];
$('input[type="radio"]').click(function() {
if($(this).attr('id') == 'yes-option') {
$.saveState(); //Save the current state before changing values
$('[id^="option"]').each(function( index ) { // target the #option1..#option4 inputs
$(this).val('0');
});
}
else if($(this).attr('id') == 'no-option') {
$.restoreState();
}
});
$.saveState = function() {
previousStates = []; //Empty the array
$('[id^="option"]').each(function( index ) {
previousStates[index] = $(this).val();
});
}
$.restoreState = function() {
$('[id^="option"]').each(function( index ) {
$(this).val(previousStates[index]);
});
}
```
*Note: as this method uses indexes to identify options, be careful if you need to dynamically add or remove an option!*
Upvotes: 0 <issue_comment>username_3: Use data-attributes to store the old values, so you can read them later on.
It's better to use a loop to go through each element, so you don't need to modify it every time a new input field is added.
You can also change the selector to `$('input')` or anything that suits your needs.
```js
$('input[type="radio"]').click(function() {
if($(this).attr('id') == 'yes-option') {
$('[id^="option"]').each(function(){
$(this).data('oldval', $(this).val());
$(this).val(0);
});
}
else if($(this).attr('id') == 'no-option') {
$('[id^="option"]').each(function(){
$(this).val($(this).data('oldval'));
});
}
});
```
```html
<!-- minimal markup assumed from the script above -->
<input type="radio" name="choice" id="yes-option"> YES
<input type="radio" name="choice" id="no-option"> NO
<input type="number" id="option1" value="10">
<input type="number" id="option2" value="20">
```
Upvotes: 0 |
2018/03/20 | 368 | 1,096 | <issue_start>username_0: I am fairly new to python. Presently, I have a pandas series `pds`
```
pds.shape
#(1159,)
```
`pds` has an index which is not sequential and at each index there is a `(18,100)` array
```
pds[pds.index[1]].shape
#(18, 100)
```
How can I convert this to a pandas dataframe and/or a numpy array with dimensions `(1159,18,100)`?
```
pdf = pd.DataFrame(pds)
```
gives me a DataFrame with shape
```
pdf.shape
(1159, 1)
```<issue_comment>username_1: Does this work: `numpy.stack([pds[pds.index[i]] for i in range(1159)], axis=0)`?
[stack](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.stack.html) should put all your arrays together along the axis given.
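A variant that avoids hard-coding the length, assuming every element really has shape `(18, 100)`; a sketch:

```
import numpy as np

arr = np.stack(pds.values)  # or np.stack(list(pds))
arr.shape  # (1159, 18, 100)
```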
Upvotes: 2 [selected_answer]<issue_comment>username_2: You say you want to keep indexing, that means numpy is out (someone already posted a numpy solution as well). My recommendation would be to create a series of DataFrames, as panels are deprecated.
```
import pandas as pd

new_series = pd.Series(dtype=object)
for index, element in pds.items():
    new_series[index] = pd.DataFrame(element)
```
Should do the trick.
Upvotes: 2 |
2018/03/20 | 463 | 1,474 | <issue_start>username_0: I have a dataset with the following structure:
Month | Day | Hour | Minute | Value1 | Value2 | Value3
The dataset has a length of 525,600 rows. What I need is the mean over fifteen minutes for each value (value1, value2, value3). The output should have the following structure:
```
Month | Begin | End | MeanValues1 | MeanValues2 | MeanValues3
01 | 0:00 | 0:15 | 1.23 | 2.34 | 3.23
01 | 0:15 | 0:30 | 1.76 | 3.02 | 3.24
```
Hence, the output dataset should have a length of 35,040 rows.
Can anybody help me and give me a lightweight solution for this in R?
I don't know how I can implement that in a very efficient way. Moreover, it is not clear how I can build the Begin and End column in the output dataset.
I thank you in advance for any input.
Best
2018/03/20 | 2,332 | 8,084 | <issue_start>username_0: I've been struggling reading the javadocs to determine how to use lambdas to elegantly combine a list of rows of one type into a grouped-up list of another type.
I've figured out how to use the `Collectors.groupingBy` syntax to get the data into a `Map<String, List<String>>` but since the results will be used in a variety of later function calls... I'd ideally like to have these reduced to a list of objects which contain the new mapping.
Here are my data types, `RowData` is the source... I want to get the data combined into a list of `CodesToBrands`:
```
class RowData {
private String id;
private String name;
public RowData() {
}
public RowData(String id, String name) {
this.id = id;
this.name = name;
}
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
class CodeToBrands {
private String code;
private List<String> brands = new ArrayList<>();
public String getCode() {
return code;
}
public void setCode(String code) {
this.code = code;
}
public List<String> getBrands() {
return brands;
}
public void addBrands(List<String> brands) {
this.brands.addAll(brands);
}
public void addBrand(String brand) {
this.brands.add(brand);
}
}
```
Here's the test I'm writing to try and figure it out...
```
@Test
public void testMappingRows() {
List<RowData> rows = new ArrayList<>();
rows.add(new RowData("A", "Foo"));
rows.add(new RowData("B", "Foo"));
rows.add(new RowData("A", "Bar"));
rows.add(new RowData("B", "Zoo"));
rows.add(new RowData("C", "Elf"));
// Groups a list of elements to a Map<String, List<String>>
System.out.println("\nMapping the codes to a list of brands");
Map<String, List<String>> result = rows.stream()
.collect(Collectors.groupingBy(RowData::getId, Collectors.mapping(RowData::getName, Collectors.toList())));
// Show results are grouped nicely
result.entrySet().forEach((entry) -> {
System.out.println("Key: " + entry.getKey());
entry.getValue().forEach((value) -> System.out.println("..Value: " + value));
});
/\*\*Prints:
\* Mapping the codes to a list of brands
Key: A
..Value: Foo
..Value: Bar
Key: B
..Value: Foo
..Value: Zoo
Key: C
..Value: Elf\*/
// How to get these as a List<CodeToBrands> to avoid working with a Map<String, List<String>>?
List<CodeToBrands> resultsAsNewType;
}
```
Can anyone provide any help in trying to get this same overall result in an easier-to-use datatype?
Thanks in advance<issue_comment>username_1: An easy way to do it is creating a constructor in `CodeToBrands` using its fields:
```
public CodeToBrands(String code, List<String> brands) {
this.code = code;
this.brands = brands;
}
```
Then simply [`map`](https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html#map-java.util.function.Function-) each entry to a new `CodeToBrands` instance:
```
List<CodeToBrands> resultsAsNewType = result
.entrySet()
.stream()
.map(e -> new CodeToBrands(e.getKey(), e.getValue()))
.collect(Collectors.toList());
```
Upvotes: 2 <issue_comment>username_2: You can simply chain further operations after grouping and collect the result like so:
```
List<CodeToBrands> resultSet = rows.stream()
.collect(Collectors.groupingBy(RowData::getId,
Collectors.mapping(RowData::getName, Collectors.toList())))
.entrySet()
.stream()
.map(e -> {
CodeToBrands codeToBrand = new CodeToBrands();
codeToBrand.setCode(e.getKey());
codeToBrand.addBrands(e.getValue());
return codeToBrand;
}).collect(Collectors.toCollection(ArrayList::new));
```
This approach creates a stream over the `entrySet` after grouping, then simply maps each `Map.Entry<String, List<String>>` into a `CodeToBrands` instance, and finally we accumulate the elements into a list implementation.
Another approach would be using the `toMap` collector:
```
List<CodeToBrands> resultSet = rows.stream()
.collect(Collectors.toMap(RowData::getId,
valueMapper -> new ArrayList<>(Collections.singletonList(valueMapper.getName())),
(v, v1) -> {
v.addAll(v1);
return v;
})).entrySet()
.stream()
.map(e -> {
CodeToBrands codeToBrand = new CodeToBrands();
codeToBrand.setCode(e.getKey());
codeToBrand.addBrands(e.getValue());
return codeToBrand;
}).collect(Collectors.toCollection(ArrayList::new));
```
This approach is quite similar to the above but just "another" way to go about it. So, this specific overload of the `toMap` collector takes a key mapper (`RowData::getId` in this case) which produces the keys for the map.
The function `valueMapper -> new ArrayList<>(Collections.singletonList(valueMapper.getName()))` is the value mapper which produces the map values.
Finally, the function `(v, v1) -> {...}` is the merge function, used to resolve collisions between values associated with the same key.
The following chained functions are the same as the first example shown.
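To see that merge function in isolation, here is a tiny standalone sketch of the same strategy (the values are illustrative):

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.BinaryOperator;

class MergeDemo {
    public static void main(String[] args) {
        // same merge strategy as in the toMap call above
        BinaryOperator<List<String>> merge = (v, v1) -> { v.addAll(v1); return v; };
        List<String> a = new ArrayList<>(Arrays.asList("Foo"));
        List<String> b = new ArrayList<>(Arrays.asList("Bar"));
        System.out.println(merge.apply(a, b)); // [Foo, Bar]
    }
}
```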
Upvotes: 2 <issue_comment>username_3: You could do it in one pass using [`Collectors.toMap`](https://docs.oracle.com/javase/9/docs/api/java/util/stream/Collectors.html#toMap-java.util.function.Function-java.util.function.Function-java.util.function.BinaryOperator-):
```
Collection<CodeToBrands> values = rows.stream()
.collect(Collectors.toMap(
RowData::getId,
rowData -> {
CodeToBrands codeToBrands = new CodeToBrands();
codeToBrands.setCode(rowData.getId());
codeToBrands.addBrand(row.getName());
return codeToBrands;
},
(left, right) -> {
left.addBrands(right.getBrands());
return left;
}))
.values();
```
Then, if you need a `List` instead of a `Collection`, simply do:
```
List<CodeToBrands> result = new ArrayList<>(values);
```
---
The code above could be simplified if you had a specific constructor and a merge method in the `CodeToBrands` class:
```
public CodeToBrands(String code, String brand) {
this.code = code;
this.brands.add(brand);
}
public CodeToBrands merge(CodeToBrands another) {
this.brands.addAll(another.getBrands());
return this;
}
```
Then, simply do:
```
Collection<CodeToBrands> values = rows.stream()
.collect(Collectors.toMap(
RowData::getId,
rowData -> new CodeToBrands(rowData.getId(), rowData.getName()),
CodeToBrands::merge))
.values();
```
Upvotes: 4 [selected_answer]<issue_comment>username_4: The other answers seem to be keen on first creating the original map you already have, then creating `CodeToBrands` in a second go.
IIUC, you wanted this in one go, which you can do by creating your own `Collector`. All it needs to do is collect into your target class directly, instead of creating a list first.
```
// the target map
Supplier<Map<String, CodeToBrands>> supplier = HashMap::new;
// for each id, you want to reuse an existing CodeToBrands,
// or create a new one if we don't have one already
BiConsumer<Map<String, CodeToBrands>, RowData> accumulator = (map, rd) -> {
// this assumes a CodeToBrands(String id) constructor
CodeToBrands ctb = map.computeIfAbsent(rd.getId(), CodeToBrands::new);
ctb.addBrand(rd.getName());
};
// to complete the collector, we need to be able to combine two CodeToBrands objects
BinaryOperator<Map<String, CodeToBrands>> combiner = (map1, map2) -> {
// add all map2 entries to map1
for (Entry<String, CodeToBrands> entry : map2.entrySet()) {
map1.merge(entry.getKey(), entry.getValue(), (ctb1, ctb2) -> {
// add all ctb2 brands to ctb1 and continue collecting in there
ctb1.addBrands(ctb2.getBrands());
return ctb1;
});
}
return map1;
};
// now, you have everything, use these for a new Collector
Collector<RowData, Map<String, CodeToBrands>, Map<String, CodeToBrands>> collector =
Collector.of(supplier, accumulator, combiner);
// let's give it a go
System.out.println("\nMapping the codes by new collector");
Map<String, CodeToBrands> map = rows.stream().collect(collector);
map.forEach((s, ctb) -> {
System.out.println("Key: " + s);
ctb.getBrands().forEach((value) -> System.out.println("..Value: " + value));
});
```
Now if you insist on having a `List<CodeToBrands>` instead, maybe do `new ArrayList<>(map.values())`, or use `collectingAndThen()`; but in my experience, you might as well use the `Map` it returns for almost all use cases.
You can boil down the code by using the lambdas directly, but I figured it would be easier to follow to do it step by step.
Upvotes: 1 |
2018/03/20 | 826 | 2,565 | <issue_start>username_0: Microsoft's Requirements and compatibility page for TFS, found [here](https://learn.microsoft.com/en-us/vsts/tfs-server/requirements), does not show Update 2 in the SQL section.
In our current environment we have TFS2017 Update 2 running on the same machine as SQL Server 2014. I'd like to install SQL Server 2017 and move the TFS database off the same machine to the new instance of SQL then point TFS 2017 at the new instance.
The next weekend I would do the TFS upgrade from 2017 to 2018. I have added extra context in case there is an obvious flaw in my plan that can be pointed out by the community.<issue_comment>username_1: TFS 2017 Update 2 isn't listed because the SQL Server requirements didn't change from Update 1. Thus, TFS 2017 Update 2 is not compatible with SQL Server 2017.
I would expect the configuration process would throw an error when attempting to attach the database.
Upvotes: 1 <issue_comment>username_2: **Not support**
Please take a look at the Q&A below:
>
> Markus: *Does TFS 2017 support SQL Server 2017 as well?*
>
>
> <NAME> MS: *@Markus, It does not. I believe **SQL 2017 shipped
> after TFS 2017** and, therefore, support was not included. Here’s our
> system requirements page:
> <https://learn.microsoft.com/en-us/vsts/tfs-server/requirements>*
>
>
> *[Source Link](https://blogs.msdn.microsoft.com/bharry/2017/10/13/team-foundation-server-2018-and-sql-server/)*
>
>
>
Likewise, SQL 2017 also shipped after TFS2017 update2. According to the [SQL Server 2017 Release Notes](https://learn.microsoft.com/en-us/sql/sql-server/sql-server-2017-release-notes), it was generally released in October 2017, while TFS2017 update2 was released July 24, 2017.
Besides, according to the [TFS 2017 Update 2 Release Notes](https://learn.microsoft.com/en-us/visualstudio/releasenotes/tfs2017-update2), support for SQL2017 is also not mentioned.
You could use SQL2016 (minimum SP1) instead, which is supported on both TFS2017 update2 and 2018, as a workaround for now. When you upgrade to TFS2018, then update your SQL version to SQL2017.
Upvotes: 3 [selected_answer]<issue_comment>username_3: In case someone needs this information: when attempting to use Microsoft SQL Server 2017 (RTM) - 14.0.1000.169 (X64), TFS 2017 Update 2 returned the following error:
TF255146: The SQL Server instance you specified (XXXXXX) is version 'SQL Server vNext', which is not supported by this version of Team Foundation Server. For more information about supported versions of SQL Server, visit <https://www.visualstudio.com/docs/setup-admin/requirements>
Upvotes: 1 |
2018/03/20 | 747 | 2,055 | <issue_start>username_0: The code below gets some data from a CSV file, but the result looks like this:
5,Jones,123-123-1234,BCBS,GP1234,39,Sarah,Broken Arm,3
6,Smith,123-231-1234,UHC,G1234,47,Francine,Physical Therapy,03/25/2015
9,Adams,123-123-4321,Cigna,U1234,28,Bob,Broken Arm,2
5,<NAME>,123-321-1234,BCBS,GP1235,37,Andrea,Tummy Ache,3
10,Pewterschmidt,123-312-1234,UHC,G1112,42,Peter,Supervision normal first pregnancy,03/26/2015
But I want to get the data except first column(such as 5,6,9,5,10)
How can I do that? Could you give me an idea? Thanks.
```
void Hospital::readRecordsFile()
{
fileName = "Waterbury Hospital patients records.csv";
ifstream file(fileName);
string value;
vector<string> getInform;
while(file.good())
{
getline(file, value);
getInform.push\_back(value);
//getInform.erase(getInform.begin()+1);
}
for(int i=0;i<getInform.size();i++)
cout << getInform[i] << endl;
}
```<issue_comment>username_1: You can find the first separator (,) in each line and then delete all characters before it:
```
getline(file, value);
const auto pos = value.find(',');
if(pos != string::npos)
value.erase(0, pos + 1);
```
If you are not sure about the separator character (,) used in the CSV file, you could instead skip all the digits at the beginning of each line:
```
getline(file, value);
const auto pos = value.find_first_not_of("0123456789");
if(pos != string::npos)
value.erase(0, pos + 1);
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You should split each line by ',' and then ignore the first part.
Simply scan the string until you reach the first ',' character, then take the substring from that index to the end.
Upvotes: 0 <issue_comment>username_3: [`std::istream::ignore`](http://en.cppreference.com/w/cpp/io/basic_istream/ignore) can be used to ignore some of the text from an input stream.
>
> Extracts and discards characters from the input stream until and including `delim`.
>
>
>
```
file.ignore(std::numeric_limits<std::streamsize>::max(), ',');
```
and follow it up with `getline` to read the rest of the line.
```
getline(file, value);
```
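Combined with the read loop from the question, that could look like this (a sketch; needs `#include <limits>`, error handling kept minimal):

```
while (file.good())
{
    // skip everything up to and including the first comma
    file.ignore(std::numeric_limits<std::streamsize>::max(), ',');
    if (getline(file, value))
        getInform.push_back(value);
}
```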
Upvotes: 1 |
2018/03/20 | 2,561 | 8,964 | <issue_start>username_0: I am trying to create a filter dropdown where only the label of the dropdown will be visible initially, and then on click of the label, the list of options will drop down over the top of the rest of the content (using `absolute` positioning). The part I am struggling with is enclosing both the `relative` positioned span and the absolute positioned list within a container so that both are within a border and the border expands as the absolutely positioned menu slides down. Below is what I've tried; as you can see it's a little wonky (content jumping around) and the borders don't line up quite right. Not sure if this is the right approach, open to ideas on how to improve the look/functionality:
```js
$(function() {
$('.dropdown span').click(function() {
$(this).parent().toggleClass('open');
$(this).next('ul').slideToggle();
});
});
```
```css
.container {
width:400px;
}
.dropdown {
position:relative;
border:1px solid black;
}
.dropdown.open {
border-bottom:none;
}
.dropdown span {
display:block;
padding:10px 15px;
}
.dropdown ul {
display:block;
background:#fff;
width:100%;
list-style-type:none;
padding:0 0 15px;
margin:0;
border:1px solid black;
border-top:none;
position:absolute;
z-index:10;
display:none;
}
.dropdown ul > li {
padding:15px 15px 0;
}
.dropdown ul > li:first-child {
padding-top:0;
}
```
```html
<!-- structure reconstructed from the CSS selectors above -->
<div class="container">
  <div class="dropdown">
    <span>Label</span>
    <ul>
      <li>Option 1</li>
      <li>Option 2</li>
      <li>Option 3</li>
      <li>Option 4</li>
    </ul>
  </div>
  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut vel tellus sit amet diam sagittis tempor. Nullam sed nunc non ipsum rhoncus tincidunt. Ut odio nisi, convallis et augue vitae, dictum semper mauris. Donec ullamcorper vehicula mi in interdum. Cras at hendrerit dolor, a scelerisque arcu. Nullam sagittis consectetur hendrerit. Donec interdum gravida tincidunt. Morbi id sem eleifend, gravida urna sit amet, vestibulum nibh. Pellentesque non convallis massa. Vivamus non metus lobortis, condimentum lorem vitae, semper augue. Ut eget ante eget orci elementum sodales. Donec nec ligula mauris.</p>
  <p>Nunc a consectetur nulla, vel viverra velit. Maecenas sagittis velit turpis, eu dapibus turpis blandit vitae. Duis mollis, lorem ac consectetur hendrerit, turpis odio lacinia eros, sed lacinia velit justo in est. Integer non mauris lacinia, sagittis justo sed, accumsan tortor. Suspendisse a commodo tortor. Etiam tincidunt mi sit amet elementum fringilla. Pellentesque luctus ac leo non lobortis. Morbi iaculis consequat lacus eget tristique. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Vivamus ultrices congue augue, vel consequat velit viverra sed. Sed a finibus velit. Mauris sed orci lectus. Vivamus bibendum ante et quam volutpat, sed venenatis mi dignissim. Ut tempus iaculis faucibus.</p>
</div>
```<issue_comment>username_1: I made a couple of changes to get it to work. I added `box-sizing:border-box` to all elements, just so I know we are dealing with consistent widths with borders. Notice how the borders of the absolute positioned items were both on the inside of the parent border. If you add the width of the border to the width of the children, your borders line up in terms of how far apart they are (`width: calc(100% + 2px)`) and you just need to offset the absolute positioning to `left: -1px`. Different browsers may give you fits with a negative value here, so you could use `transform: translateX(-1px)` instead.
```js
$(function() {
$('.dropdown span').click(function() {
$(this).parent().toggleClass('open');
$(this).next('ul').slideToggle();
});
});
```
```css
* {
box-sizing: border-box
}
.container {
width:400px;
}
.dropdown {
position:relative;
border:1px solid black;
}
.dropdown.open {
border-bottom:none;
}
.dropdown span {
display:block;
padding:10px 15px;
}
.dropdown ul {
display:block;
background:#fff;
width: calc(100% + 2px);
list-style-type:none;
padding:0 0 15px;
margin:0;
border:1px solid black;
border-top:none;
position:absolute;
z-index:10;
display:none;
left: 0;
-webkit-transform: translateX(-1px);
transform: translateX(-1px);
}
.dropdown ul > li {
padding:15px 15px 0;
width: 100%;
}
.dropdown ul > li:first-child {
padding-top:0;
}
```
```html
<!-- structure reconstructed from the CSS selectors above -->
<div class="container">
  <div class="dropdown">
    <span>Label</span>
    <ul>
      <li>Option 1</li>
      <li>Option 2</li>
      <li>Option 3</li>
      <li>Option 4</li>
    </ul>
  </div>
  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut vel tellus sit amet diam sagittis tempor. Nullam sed nunc non ipsum rhoncus tincidunt. Ut odio nisi, convallis et augue vitae, dictum semper mauris. Donec ullamcorper vehicula mi in interdum. Cras at hendrerit dolor, a scelerisque arcu. Nullam sagittis consectetur hendrerit. Donec interdum gravida tincidunt. Morbi id sem eleifend, gravida urna sit amet, vestibulum nibh. Pellentesque non convallis massa. Vivamus non metus lobortis, condimentum lorem vitae, semper augue. Ut eget ante eget orci elementum sodales. Donec nec ligula mauris.</p>
  <p>Nunc a consectetur nulla, vel viverra velit. Maecenas sagittis velit turpis, eu dapibus turpis blandit vitae. Duis mollis, lorem ac consectetur hendrerit, turpis odio lacinia eros, sed lacinia velit justo in est. Integer non mauris lacinia, sagittis justo sed, accumsan tortor. Suspendisse a commodo tortor. Etiam tincidunt mi sit amet elementum fringilla. Pellentesque luctus ac leo non lobortis. Morbi iaculis consequat lacus eget tristique. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Vivamus ultrices congue augue, vel consequat velit viverra sed. Sed a finibus velit. Mauris sed orci lectus. Vivamus bibendum ante et quam volutpat, sed venenatis mi dignissim. Ut tempus iaculis faucibus.</p>
</div>
```
Upvotes: 1 <issue_comment>username_2: Well done so far. The only missing details are:
* a `margin-left: -1px;` on the `ul`, which will make it shift to the left by `1px`
* a `border: 1px solid transparent` on `.dropdown.open`, to prevent the rest of the page shifting up by `1px`.
```js
$(function() {
$('.dropdown span').click(function() {
$(this).parent().toggleClass('open');
$(this).next('ul').slideToggle();
});
$(window).on('click', function(e) {
if (!$(e.target).closest('.dropdown').is('.dropdown')
|| $(e.target).closest('li').is('.dropdown li')
) {
$('.dropdown.open span').trigger('click')
}
})
});
```
```css
.container {
width: 400px;
}
.dropdown {
position: relative;
border: 1px solid black;
}
.dropdown.open {
border-bottom: none;
}
.dropdown span {
display: block;
padding: 10px 15px;
}
.dropdown ul {
display: block;
background: #fff;
width: 100%;
list-style-type: none;
padding: 0 0 15px;
margin: 0;
border: 1px solid black;
border-top: none;
position: absolute;
z-index: 10;
display: none;
margin-left: -1px;
}
.dropdown ul>li {
padding: 15px 15px 0;
}
.dropdown ul>li:first-child {
padding-top: 0;
}
.dropdown.open {
border-bottom: 1px solid transparent;
}
```
```html
<!-- structure reconstructed from the CSS selectors above -->
<div class="container">
  <div class="dropdown">
    <span>Label</span>
    <ul>
      <li>Option 1</li>
      <li>Option 2</li>
      <li>Option 3</li>
      <li>Option 4</li>
    </ul>
  </div>
  <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut vel tellus sit amet diam sagittis tempor. Nullam sed nunc non ipsum rhoncus tincidunt. Ut odio nisi, convallis et augue vitae, dictum semper mauris. Donec ullamcorper vehicula mi in interdum. Cras at hendrerit dolor, a scelerisque arcu. Nullam sagittis consectetur hendrerit. Donec interdum gravida tincidunt. Morbi id sem eleifend, gravida urna sit amet, vestibulum nibh. Pellentesque non convallis massa. Vivamus non metus lobortis, condimentum lorem vitae, semper augue. Ut eget ante eget orci elementum sodales. Donec nec ligula mauris.</p>
  <p>Nunc a consectetur nulla, vel viverra velit. Maecenas sagittis velit turpis, eu dapibus turpis blandit vitae. Duis mollis, lorem ac consectetur hendrerit, turpis odio lacinia eros, sed lacinia velit justo in est. Integer non mauris lacinia, sagittis justo sed, accumsan tortor. Suspendisse a commodo tortor. Etiam tincidunt mi sit amet elementum fringilla. Pellentesque luctus ac leo non lobortis. Morbi iaculis consequat lacus eget tristique. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Vivamus ultrices congue augue, vel consequat velit viverra sed. Sed a finibus velit. Mauris sed orci lectus. Vivamus bibendum ante et quam volutpat, sed venenatis mi dignissim. Ut tempus iaculis faucibus.</p>
</div>
```
*Note:* I also took the liberty of adding a small closing function to your JavaScript, for when clicking outside `.dropdown` or on one of its options, as yours only closes when the label is clicked.
Upvotes: 3 [selected_answer] |
2018/03/20 | 3,626 | 8,664 | <issue_start>username_0: DDL:
```
CREATE TABLE [testXML]
(
[scheduleid] [uniqueidentifier] primary key,
[XMLData1] [xml] NULL
)
INSERT INTO testXML ([scheduleid],XMLData1)
VALUES ('88888888-DDDD-4444-AAAA-666666666666','
')
```
Query:
```
SELECT
[scheduleid],
StationID_q = ARD3.res.value('@q', 'varchar(max)'),
ProgramID_s = ARD2.ag.value('@s', 'varchar(max)'),
StartTime_o = ARD3.res.value('@o', 'datetime')
FROM
[DVR_0601].[dbo].testXML Sch
CROSS APPLY
Sch.XMLData1.nodes('/ArrayOfRDData/RDData/rps') AS AoD(RDData)
CROSS APPLY
AoD.RDData.nodes('rp') AS ARD2(ag)
CROSS APPLY
AoD.RDData.nodes('rp/res/re') AS ARD3(res)
WHERE
ISNULL( ARD2.ag.value('@ag', 'int'), 0) = 1
```
Output:
```
StationID_q ProgramID_s StartTime_o
000000cb-0000-0000-0000-000000000000 00a566e2-0000-0000-0000-000000000000 2018-01-10 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a566e2-0000-0000-0000-000000000000 2018-01-26 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a566e2-0000-0000-0000-000000000000 2018-01-31 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a566e2-0000-0000-0000-000000000000 2018-01-29 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a5860a-0000-0000-0000-000000000000 2018-01-10 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a5860a-0000-0000-0000-000000000000 2018-01-26 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a5860a-0000-0000-0000-000000000000 2018-01-31 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a5860a-0000-0000-0000-000000000000 2018-01-29 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a595c0-0000-0000-0000-000000000000 2018-01-10 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a595c0-0000-0000-0000-000000000000 2018-01-26 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a595c0-0000-0000-0000-000000000000 2018-01-31 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a595c0-0000-0000-0000-000000000000 2018-01-29 17:00:00.000
```
Required output:
```
StationID_q ProgramID_s StartTime_o
000000cb-0000-0000-0000-000000000000 00a566e2-0000-0000-0000-000000000000 2018-01-10 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a5860a-0000-0000-0000-000000000000 2018-01-26 17:00:00.000
000000cb-0000-0000-0000-000000000000 00a595c0-0000-0000-0000-000000000000 2018-01-29 17:00:00.000
```
I am getting cross joins between the rows of data.
Also note that if ag="0" I want to skip that data; it does skip it, but that row still gets joined. I'm not sure how to express the join, or if it's possible at all.
2018/03/20 | 2,391 | 7,488 <issue_start>username_0: I am just trying to display an nth-level relation using `ul` and `li` in Razor via a recursive function call. Suppose I have a db table where I store a parent-child relation like the one below.
table structure
---------------
```
+----+----------+----------+
| ID | Name | ParentID |
+----+----------+----------+
| 1 | Parent 1 | 0 |
+----+----------+----------+
| 2 | child 1 | 1 |
+----+----------+----------+
| 3 | child 2 | 1 |
+----+----------+----------+
| 4 | child 3 | 1 |
+----+----------+----------+
| 5 | Parent | 0 |
+----+----------+----------+
| 6 | child 4 | 4 |
+----+----------+----------+
```
So i like to show the nested data this way in razor view
```
Parent 1
child 1
child 2
child 3
child 4
Parent
```
So this code i tried but could not achieve the goal.
**c# POCO classes**
```
public class MenuItem
{
public int Id { get; set; }
public string Name { get; set; }
public int ParentId { get; set; }
public virtual ICollection Children { get; set; }
}
public class MenuDTO
{
public int Id { get; set; }
public string Name { get; set; }
public int ParentId { get; set; }
public virtual ICollection Children { get; set; }
}
```
**Action code**
```
public ActionResult Index()
{
List allMenu = new List
{
new MenuItem {Id=1,Name="Parent 1", ParentId=0},
new MenuItem {Id=2,Name="child 1", ParentId=1},
new MenuItem {Id=3,Name="child 2", ParentId=1},
new MenuItem {Id=4,Name="child 3", ParentId=1},
new MenuItem {Id=5,Name="Parent 2", ParentId=0},
new MenuItem {Id=6,Name="child 4", ParentId=4}
};
List mi = allMenu
.Select(e => new
{
Id = e.Id,
Name = e.Name,
ParentId = e.ParentId,
Children = allMenu.Where(x => x.ParentId == e.Id).ToList()
}).ToList()
.Select(p => new MenuDTO
{
Id = p.Id,
Name = p.Name,
ParentId = p.ParentId,
Children = p.Children
//Children = p.Children.Cast()
}).ToList();
ViewBag.menusList = mi;
return View();
}
```
**Razor code**
```
@{
var menuList = ViewBag.menusList as List;
ShowTree(menuList);
}
@helper ShowTree(List menusList)
{
if (menusList != null)
{
foreach (var item in menusList)
{
- @item.Name
@if (item.Children.Any())
{
@ShowTree(item.Children)
}
}
}
}
```
In my case I am querying `List allMenu` to get the data instead of reading it from a db table. When I run my code, I get the error below:
>
> CS1502: The best overloaded method match for
> 'ASP.\_Page\_Views\_Menu\_Index\_cshtml.ShowTree(System.Collections.Generic.List)'
> has some invalid arguments
>
>
>
Can you tell me what is wrong in my code? Please help me fix it so I can achieve my goal. Thanks.
EDIT
----
The full working code is as follows:
```
@helper ShowTree(List menusList)
{
@foreach (var item in menusList)
{
* @item.Name
@if (item.Children!=null && item.Children.Any())
{
@ShowTree(item.Children)
}
}
}
@{
var menuList = ViewBag.menusList as List;
@ShowTree(menuList);
}
public ActionResult Index()
{
List allMenu = new List
{
new MenuItem {Id=1,Name="Parent 1", ParentId=0},
new MenuItem {Id=2,Name="child 1", ParentId=1},
new MenuItem {Id=3,Name="child 2", ParentId=1},
new MenuItem {Id=4,Name="child 3", ParentId=1},
new MenuItem {Id=5,Name="Parent 2", ParentId=0},
new MenuItem {Id=6,Name="child 4", ParentId=4}
};
List mi = allMenu
.Where(e => e.ParentId == 0) /\* grab only the root parent nodes \*/
.Select(e => new MenuItem
{
Id = e.Id,
Name = e.Name,
ParentId = e.ParentId,
Children = allMenu.Where(x => x.ParentId == e.Id) /\* grab second level children \*/
.Select(e2 => new MenuItem
{
Id = e2.Id,
Name = e2.Name,
ParentId = e2.ParentId,
Children = allMenu.Where(x2 => x2.ParentId == e2.Id).ToList() /\* grab third level children \*/
}).ToList()
}).ToList();
ViewBag.menusList = mi;
return View();
}
public class MenuItem
{
public int Id { get; set; }
public string Name { get; set; }
public int ParentId { get; set; }
public virtual List Children { get; set; }
}
```<issue_comment>username_1: Your code is not able to find a function ShowTree taking a parameter of type `ICollection` when it executes the line
```
@ShowTree(item.Children)
```
because item.Children is of type `ICollection`. The function ShowTree in your code takes a parameter of a different type, `List`, which is not the same as `ICollection`. As a result, the runtime reports the CS1502 error you see.
After realizing that you were looking for a recursive solution, I have modified the code to do just that.
**Action Code**
```
public ActionResult Index()
{
List allMenu = new List
{
new MenuItem {Id=1,Name="Parent 1", ParentId=0},
new MenuItem {Id=2,Name="child 1", ParentId=1},
new MenuItem {Id=3,Name="child 2", ParentId=1},
new MenuItem {Id=4,Name="child 3", ParentId=1},
new MenuItem {Id=5,Name="Parent 2", ParentId=0},
new MenuItem {Id=6,Name="child 4", ParentId=4}
};
List mi = allMenu
.Where(e => e.ParentId == 0) /* grab only the root parent nodes */
.Select(e => new MenuItem
{
Id = e.Id,
Name = e.Name,
ParentId = e.ParentId,
Children = GetChildren(allMenu, e.Id) /* Recursively grab the children */
}).ToList();
ViewBag.menusList = mi;
return View();
}
///
/// Recursively grabs the children from the list of items for the provided parentId
///
/// List of all items
/// Id of parent item
/// List of children of parentId
private static List GetChildren(List items, int parentId)
{
return items
.Where(x => x.ParentId == parentId)
.Select(e => new MenuItem
{
Id = e.Id,
Name = e.Name,
ParentId = e.ParentId,
Children = GetChildren(items, e.Id)
}).ToList();
}
```
**Razor Code**
```
@{
var menuList = ViewBag.menusList as List;
@ShowTree(menuList);
}
@helper ShowTree(List menusList)
{
if (menusList != null)
{
foreach (var item in menusList)
{
- @item.Name
@if (item.Children.Any())
{
@ShowTree(item.Children)
}
}
}
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I fixed it and got the job done this way.
The problem was in the logic of the Razor code; I also commented out this line: `//.Where(e => e.ParentId == 0)`. Here is the working code.
**working code sample**
```
@helper ShowTree(List menu, int? parentid = 0, int level = 0)
{
var items = menu.Where(m => m.ParentId == parentid);
if (items.Any())
{
if (items.First().ParentId > 0)
{
level++;
}
@foreach (var item in items)
{
* @item.Name
@ShowTree(menu, item.Id, level);
}
}
}
@{
var menuList = ViewBag.menusList as List;
@ShowTree(menuList);
}
public ActionResult Index()
{
List allMenu = new List
{
new MenuItem {Id=1,Name="Parent 1", ParentId=0},
new MenuItem {Id=2,Name="child 1", ParentId=1},
new MenuItem {Id=3,Name="child 2", ParentId=1},
new MenuItem {Id=4,Name="child 3", ParentId=1},
new MenuItem {Id=5,Name="Parent 2", ParentId=0},
new MenuItem {Id=6,Name="child 4", ParentId=4}
};
List mi = allMenu
//.Where(e => e.ParentId == 0) /\* grab only the root parent nodes \*/
.Select(e => new MenuItem
{
Id = e.Id,
Name = e.Name,
ParentId = e.ParentId,
//Children = allMenu.Where(x => x.ParentId == e.Id).ToList()
}).ToList();
ViewBag.menusList = mi;
return View();
}
```
Upvotes: 2 |
2018/03/20 | 2,282 | 7,416 | <issue_start>username_0: I am working on training a VGG16-like model in Keras, on a 3 classes subset from Places205, and encountered the following error:
```
ValueError: Error when checking target: expected dense_3 to have shape (3,) but got array with shape (1,)
```
I read multiple similar issues but none helped me so far. The error is on the last layer, where I've put 3 because this is the number of classes I'm trying right now.
The code is the following:
```
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
import os
# Constants used
img_width, img_height = 224, 224
train_data_dir='places\\train'
validation_data_dir='places\\validation'
save_filename = 'vgg_trained_model.h5'
training_samples = 15
validation_samples = 5
batch_size = 5
epochs = 5
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)
model = Sequential([
# Block 1
Conv2D(64, (3, 3), activation='relu', input_shape=input_shape, padding='same'),
Conv2D(64, (3, 3), activation='relu', padding='same'),
MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
# Block 2
Conv2D(128, (3, 3), activation='relu', padding='same'),
Conv2D(128, (3, 3), activation='relu', padding='same'),
MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
# Block 3
Conv2D(256, (3, 3), activation='relu', padding='same'),
Conv2D(256, (3, 3), activation='relu', padding='same'),
Conv2D(256, (3, 3), activation='relu', padding='same'),
MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
# Block 4
Conv2D(512, (3, 3), activation='relu', padding='same'),
Conv2D(512, (3, 3), activation='relu', padding='same'),
Conv2D(512, (3, 3), activation='relu', padding='same'),
MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
# Block 5
Conv2D(512, (3, 3), activation='relu', padding='same',),
Conv2D(512, (3, 3), activation='relu', padding='same',),
Conv2D(512, (3, 3), activation='relu', padding='same',),
MaxPooling2D(pool_size=(2, 2), strides=(2, 2)),
# Top
Flatten(),
Dense(4096, activation='relu'),
Dense(4096, activation='relu'),
Dense(3, activation='softmax')
])
model.summary()
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
# no augmentation config
train_datagen = ImageDataGenerator()
validation_datagen = ImageDataGenerator()
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='binary')
model.fit_generator(
train_generator,
steps_per_epoch=training_samples // batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=validation_samples // batch_size)
model.save_weights(save_filename)
```<issue_comment>username_1: The problem is with your label-data shape. In a multiclass problem you are predicting the probability of every possible class, so you must provide label data in (N, m) shape, where N is the number of training examples and m is the number of possible classes (3 in your case).
Keras expects y-data in (N, 3) shape, not (N,) as you've probably provided; that's why it raises an error.
Use e.g. [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) to convert your label data to one-hot encoded form.
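A minimal sketch of that conversion (the label array is invented for illustration; note that newer scikit-learn versions spell the flag `sparse_output` instead of `sparse`):
```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

y = np.array([0, 2, 1, 0]).reshape(-1, 1)    # integer labels, shape (N, 1)
encoder = OneHotEncoder(sparse=False, categories='auto')
y_onehot = encoder.fit_transform(y)          # one-hot matrix, shape (N, 3)
print(y_onehot)
```
After this, `y_onehot` has the `(N, m)` shape that the final `Dense(3)` layer expects.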
Upvotes: 5 <issue_comment>username_2: Had the same issue. To solve the problem, simply change the class mode in validation\_generator and train\_generator from 'binary' to 'categorical'; that's because you have 3 classes, which is not binary.
Upvotes: 4 <issue_comment>username_3: As mentioned by others, Keras expects "one hot" encoding in multiclass problems.
**Keras comes with a handy function to recode labels:**
```
print(train_labels)
[1. 2. 2. ... 1. 0. 2.]
print(train_labels.shape)
(2000,)
```
Recode labels using `to_categorical` to get the correct shape of inputs:
```
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
print(train_labels)
[[0. 1. 0.]
[0. 0. 1.]
[0. 0. 1.]
...
[0. 1. 0.]
[1. 0. 0.]
[0. 0. 1.]]
print(train_labels.shape)
(2000, 3) # viz. 2000 observations, 3 labels as 'one hot'
```
**Other important things to change/check in multiclass problems (compared to binary classification):**
Set `class_mode='categorical'` in the `generator()` function(s).
Don't forget that the *last* dense layer must specify the number of labels (or classes):
```
model.add(layers.Dense(3, activation='softmax'))
```
Make sure that `activation=` and `loss=` are chosen to suit multiclass problems; usually this means `activation='softmax'` and `loss='categorical_crossentropy'`.
Upvotes: 5 <issue_comment>username_4: Problem : expected dense\_3 to have shape (3,) but got array with shape (1,)
If you are using it for classification, the number of units passed to the final dense layer must match the number of categories.
```
variables_for_classification=5 #change it as per your number of categories
model.add(Dense(variables_for_classification, activation='softmax'))
model.fit(X_train, Y_train, epochs=epochs, batch_size=batch_size,validation_split=0.1,callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])
```
To make it clearer: I was using an LSTM to predict the category of news items, and there were 5 categories (business, tech, politics, sports, entertainment).
Once I put 5 in that Dense layer, it worked correctly.
Upvotes: 3 <issue_comment>username_5: The reason for this is you would have used 'binary' class\_mode in the fit\_generator() method for a multi class problem. Change that to 'categorical' and the error goes.
Upvotes: 2 <issue_comment>username_6: If you get this error, you just need to give the last layer the number of classes. For example, if you have 6 classes you have to write:
```
model.add(Dense(6, activation='softmax'))
```
Alternatively, you can define
```
num_classes=...
```
and the last layer will be
```
model.add(Dense(num_classes, activation='softmax'))
```
Upvotes: 2 <issue_comment>username_7: I also got the same error and solved it by setting `class_mode` as `categorical` instead of `binary`
Upvotes: 2 <issue_comment>username_8: The problem is with the shape of the labels of the data "Y".
The shape you have for the labels is (m,), and this will not work with:
```
loss = "binary_crossentropy"
```
I believe if you don't want to play with the shape of the labels, then use:
```
loss = "sparse_categorical_crossentropy"
```
Upvotes: 1 <issue_comment>username_9: For me, this worked.
```
from keras.utils import to_categorical
num_labels=10 #for my case
train_labels=to_categorical(train_labels,10)
test_labels=to_categorical(test_labels,10)
```
Specifying the number of labels as an argument while categorically encoding my labels helped train effectively on my training set.
Upvotes: 0 |
2018/03/20 | 1,157 | 4,252 | <issue_start>username_0: I tried searching, so forgive me if my searching skills are weak. I'm trying to slim down an Excel model for speed and filesize, and make it more legible, so I thought I'd replace a bunch of nested IF() statements with some AND() statements. I confirmed that the logic is the same, but my file exploded in size and I'm quite surprised. Is AND() horribly inefficient and worse than nested IF()? I can't share the formulas, but are there general suggestions for optimizing nested IF() formulas?
Thanks!<issue_comment>username_1: **File size will not increase because formulas are designed inefficiently, but it can be inflated by unnecessarily lengthy formulas.** In both cases, it's not the *number* of formulas that matters (since they are stored in plain text) as much as the number of **dependencies on other cells**, since Excel stores separate **chain of calculation** information about each dependency.
You can check it out (and see why your file is so big) **by changing the Excel file's extension to `.ZIP`, opening it up as a compressed file, and examining files within**, comparing a previous (smaller) version of your file to this one.
An `.XLSX` file is just a compressed set of text `.XML` files. An `.XLSM` adds not much more than a `VBAProject.bin` (binary file).
As was already stated, **efficiency of your *formatting*** can have a huge effect on file size. This can be improved by:
* **Using conditional formatting** (since a *rule* can format an area of cells that would otherwise need to have separate information stored for each cell)
* **Using built-in (common) formatting styles** instead of custom formats, which need to be stored individually
* An **overview of the file structure** [here](http://professor-excel.com/xml-zip-excel-file-structure/).
* Microsoft's **Official XLSX File Structure document**: *Download **[`PDF`](http://interoperability.blob.core.windows.net/files/MS-XLSX/[MS-XLSX].pdf) or [`DOCX`](http://interoperability.blob.core.windows.net/files/MS-XLSX/[MS-XLSX]-171212.docx)***
* Stack Overflow : **[How to properly assemble a valid xlsx file from its internal sub-components?](https://msdn.microsoft.com/en-us/library/dd922181(v=office.12).aspx)**
---
More related information:
-------------------------
* MSDN : **[Excel Recalculation and Construction of a calculation chain](https://msdn.microsoft.com/en-us/library/bb687891(v=office.15).aspx)**
* MSDN : **[Improving calculation performance](https://msdn.microsoft.com/en-us/vba/excel-vba/articles/excel-improving-calcuation-performance)**
* **[The "Ultimate Guide" to Reducing File Size in Excel](http://www.excelefficiency.com/reduce-excel-file-size/)**
* Microsoft.com : **[How to minimize the size of an XML Spreadsheet file in Excel](https://support.microsoft.com/en-us/help/325091/how-to-minimize-the-size-of-an-xml-spreadsheet-file-in-excel)**
Upvotes: 1 <issue_comment>username_2: Nested `IF`'s are more efficient than `AND` in the sense that the `IF` *usually* won't bother calculating the rest of the formula that doesn't apply, e.g.
```
= IF(TRUE,,)
```
Generally, the value-if-false argument in the equation above will be completely ignored.
`AND` **always** checks all conditions, even if unnecessary. E.g., consider this:
```
= AND(FALSE,,,...)
```
The result of the above equation will always be `FALSE` because the first argument is `FALSE`. (The remaining arguments are irrelevant if the first argument is `FALSE`.) Still, `AND` will check all of them anyway. `OR` acts the same way in this case:
```
= OR(TRUE,,,...)
```
In this way, `IF`'s are better because they "stop" after a condition is met.
Depending on the details of your spreadsheet (of which I know none), this could drastically affect calculation times especially if volatile formulas are used, which are known to spread like the plague to other cells in your spreadsheet if you're not careful.
However, I don't really prefer either. Nested `IF`'s are sloppy, difficult to read, and difficult to maintain. I practically always prefer a lookup table. See [this question](https://stackoverflow.com/questions/48643955/using-if-formula-in-spreadsheets/) (and accepted answer) for an example of what I'm talking about.
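For instance (a hypothetical sketch; the lookup range and input cell are invented), a two-column rate table in `$D$2:$E$10` can replace a whole chain of nested `IF`'s with a single formula:
```
= VLOOKUP(A2, $D$2:$E$10, 2, TRUE)
```
The final `TRUE` requests an approximate (range) match, which is the usual substitute for tiered `IF` conditions, and the table itself stays visible and editable on the sheet.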
Upvotes: 1 [selected_answer] |
2018/03/20 | 947 | 3,266 | <issue_start>username_0: I am including a mixin from one SASS file into another file but I've got an error with the ionic `serve` command:
```
Error: "Sass Error
Invalid CSS after " @include": expected identifier, was '"../../../theme/mai'"
```
This is the file where I am importing the mixin:
```
action-sheet-layout-1 {
@include '../../../theme/main.scss'
@include "settingAnimationHeader"; // this mixin is defined in main.scss
[button-action-shit] {
z-index: 99999; right: 16px; top: 56px;
...
}
```
The mixin as defined in `main.scss`:
```
@mixin settingAnimationHeader {
@keyframes headerOff {
from {
background-color: theme-colors('mainColors', 'primary');
}
to {
background-color: transparent;
}
}
...
}
```
I am new to Ionic and SASS; is what I am doing right, or am I missing something?
The directory structure of both files from the app root:
```sh
src/theme/main.scss # the mixin is defined in this file.
src/components/action-sheet/layout-1/action-sheet-layout-1.scss # the mixin is imported here.
```<issue_comment>username_1: At the top of your SASS file (`action-sheet-layout-1.scss`) you need to include `@import "../../../theme/main.scss"`; then you can access the mixins inside `main.scss` by doing `@include settingAnimationHeader;` inside the CSS rule where you want to apply this mixin.
Upvotes: 4 <issue_comment>username_2: You can **rename your main.scss file to \_main.scss**
The prefix '\_' tells the sass/scss compiler that this file is a **partial**. And so, the compiler will not try to compile the file.
[Sass partials](https://sass-lang.com/guide#topic-4)
Upvotes: 3 <issue_comment>username_3: If the mixins are defined in *main.scss*, just make sure to import that file before the file they are used in, i.e. *action-sheet-layout-1.scss* in this case.
Upvotes: 2 <issue_comment>username_4: **2021 update**
>
> The Sass team discourages the continued use of the @import rule. Sass will gradually phase it out over the next few years, and eventually remove it from the language entirely. Prefer the @use rule instead.
>
>
>
So as per the situation here, the code should be changed **from**
`@import 'path/to/main.scss'` //*Note: make sure you name your Sass partials starting with an underscore (\_); this way Sass knows the file is not meant to be compiled to a CSS file on its own, and you also don't need to include the extension, as seen below.*
**to**
`@use 'path/to/main' as m` //*where 'm' is a reference variable or namespace for your partial, which BTW is optional but recommended.*
You can find more about it here: [Sass-docs/imports](https://sass-lang.com/documentation/at-rules/import)
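For completeness, a minimal sketch of the namespaced call in this question's terms (assuming `main.scss` has been renamed to `_main.scss`):
```scss
@use '../../../theme/main' as m;

action-sheet-layout-1 {
  @include m.settingAnimationHeader;
}
```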
Upvotes: 4 <issue_comment>username_5: For an Angular application:
Create or rename the file with the prefix `_`, e.g. \_main.scss. That tells the Sass/SCSS compiler that this file is a partial.
**\_style.scss**
```
$sizes: 3, 7, 20, 33,37,40;
@mixin width-classes {
@each $i in $sizes {
$width: $i +'% !important';
.w-#{$i} {
width: #{$width};
}
}
}
@include width-classes;
```
In the other scss file, import the scss file.
**app.scss**
```
@import '../../../../../../style';
```
Upvotes: 0 |
2018/03/20 | 860 | 2,873 <issue_start>username_0: There are these tables in a MySQL DB:
```
entity
----------
ID NAME
1 entity1
2 entity2
3 entity3
entity_props
----------
ENTITY_ID PROP_ID PROP_VALUE
1 23 abc
1 24 def
1 25 xyz
```
When I need to select all entities which have property values 23="abc", 24="def" and 25="xyz", I use a request like this:
```
SELECT ID
FROM entity
WHERE PROP_ID=23 AND PROP_VALUE="abc" AND ID IN
(SELECT ENTITY_ID FROM entity_props WHERE PROP_ID=24 AND PROP_VALUE="def" and ENTITY_ID IN
(SELECT ENTITY_ID FROM entity_props WHERE PROP_ID=25 AND PROP_VALUE="xyz"))
```
But when there are too many properties it looks terrible. Can you suggest how to simplify it?
Thanks in advance!
2018/03/20 | 1,636 | 5,587 <issue_start>username_0: I'm following the tutorial to encrypt data with Google Cloud KMS, but when I try to encrypt it gives me a 404 error. I searched the code and noted that it has DEFAULT\_ROOT\_URL = <https://cloudkms.googleapis.com/>, but that root does not appear in the request URL, so of course it gives a 404. Could someone tell me why the URL was not built correctly? I looked in the properties and there is no reference to Root\_URL.
The requested URL `/v1beta1/projects/condoease-3f3ea/locations/global/keyRings/test/cryptoKeys/quickstart:encrypt` was not found on this server. That's all we know.
```
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:321)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1056)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.example.getstarted.util.CloudKeyManagementServiceHelper.wrapDataEncryptionKey(CloudKeyManagementServiceHelper.java:129)
at com.example.getstarted.util.CloudStorageHelper.getImageUrl(CloudStorageHelper.java:121)
at com.example.getstarted.basicactions.CreateBookServlet.doPost(CreateBookServlet.java:56)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:845)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1689)
at com.example.getstarted.util.DatastoreSessionFilter.doFilter(DatastoreSessionFilter.java:111)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
```
<issue_comment>username_1: <https://github.com/GoogleCloudPlatform/java-docs-samples/tree/master/kms> has a good sample to get you started. I don't think the getting-started-java has KMS samples (that I know of).
What version of the library are you using?
Your URL is using /v1beta1/ which makes me think you're using an older version. Cloud KMS is GA and should be /v1/.
*Update*:
Full disclosure I'm an engineer @ Google on Cloud KMS.
<https://codelabs.developers.google.com/codelabs/cloud-bookshelf-java-cloud-kms> looks like it has instructions to download a legacy java client library which is constructing broken URLs to v1beta1. We've filed an issue internally to track getting this resolved ASAP, thank you for posting your issue!
In the meantime, I'd encourage you to start from the GitHub [link](https://github.com/GoogleCloudPlatform/java-docs-samples/tree/master/kms) above to get going with KMS. Thanks for using the product!
Upvotes: 1 <issue_comment>username_2: API version `v1beta1` is no longer supported; can you tell us where you found it documented? I'd like to get it fixed.
You'll also get this uninformative error message if you err in other parts of the URL, such as missing the fixed components like `/projects/`, `/locations/`, `/keyRings/`, or `/cryptoKeys/`. We'll look at improving it.
Upvotes: 0 |
2018/03/20 | 845 | 2,448 | <issue_start>username_0: Why can't I match a string in a Pandas series using `in`? In the following example, the first evaluation results in False unexpectedly, but the second one works.
```
df = pd.DataFrame({'name': [ 'Adam', 'Ben', 'Chris' ]})
'Adam' in df['name']
'Adam' in list(df['name'])
```<issue_comment>username_1: ### In the first case:
Because the `in` operator is interpreted as a call to `df['name'].__contains__('Adam')`. If you look at the implementation of `__contains__` in `pandas.Series`, you will find that it's the following (inherited from `pandas.core.generic.NDFrame`):
```
def __contains__(self, key):
"""True if the key is in the info axis"""
return key in self._info_axis
```
so, your first use of `in` is interpreted as:
```
'Adam' in df['name']._info_axis
```
This gives `False`, expectedly, because `df['name']._info_axis` actually contains information about the `range/index` and not the data itself:
```
In [37]: df['name']._info_axis
Out[37]: RangeIndex(start=0, stop=3, step=1)
In [38]: list(df['name']._info_axis)
Out[38]: [0, 1, 2]
```
---
### In the second case:
```
'Adam' in list(df['name'])
```
The use of `list`, converts the `pandas.Series` to a list of the values. So, the actual operation is this:
```
In [42]: list(df['name'])
Out[42]: ['Adam', 'Ben', 'Chris']
In [43]: 'Adam' in ['Adam', 'Ben', 'Chris']
Out[43]: True
```
---
Here are a few more idiomatic ways to do what you want (with the associated speed):
```
In [56]: df.name.str.contains('Adam').any()
Out[56]: True
In [57]: timeit df.name.str.contains('Adam').any()
The slowest run took 6.25 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 144 µs per loop
In [58]: df.name.isin(['Adam']).any()
Out[58]: True
In [59]: timeit df.name.isin(['Adam']).any()
The slowest run took 5.13 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 191 µs per loop
In [60]: df.name.eq('Adam').any()
Out[60]: True
In [61]: timeit df.name.eq('Adam').any()
10000 loops, best of 3: 178 µs per loop
```
Note: the last way is also suggested by @Wen in the comment above
Upvotes: 5 [selected_answer]<issue_comment>username_2: ```
found = df[df['Column'].str.contains('Text_to_search')]
print(len(found))
```
`len(found)` will give you the number of matches in the column.
Upvotes: -1 |
2018/03/20 | 529 | 2,337 | <issue_start>username_0: I am watching a tutorial on how to add headers with OkHttp Interceptors, but I am confused about a few things.
* What is a `Chain` object?
* What does `Request original = chain.request()` do?
* What does `return chain.proceed(request)` do?
Code:
```
OkHttpClient.Builder httpClient = new OkHttpClient.Builder();
httpClient.addInterceptor(new Interceptor() {
@Override
public Response intercept(Interceptor.Chain chain) throws IOException {
Request original = chain.request();
// Request customization: add request headers
Request.Builder requestBuilder = original.newBuilder()
.header("Authorization", "auth-value");
Request request = requestBuilder.build();
return chain.proceed(request);
}
});
OkHttpClient client = httpClient.build();
```<issue_comment>username_1: ```
@Override
public Response intercept(Chain chain) throws IOException {
    // Build a form body carrying the ServiceId field
    FormBody.Builder formBody = new FormBody.Builder()
            .add("ServiceId", ServiceId);
    RequestBody requestBody = formBody.build();
    // Take the original request and decorate it
    Request original = chain.request();
    Request.Builder builder = original.newBuilder()
            .post(requestBody)
            .header("Authorization", Authorization);
    Request request = builder.build();
    // Pass the modified request down the chain
    return chain.proceed(request);
}
```
Upvotes: -1 <issue_comment>username_2: The Chain object in retrofit is an implementation of the [Chain of Responsibility](https://en.m.wikipedia.org/wiki/Chain-of-responsibility_pattern) design pattern, and each interceptor is a processing object which acquires the result of the previous interceptor through `chain.request()`, applies its own logic on it (by a builder pattern), and usually passes it to the next unit (interceptor) using `chain.proceed`.
In some special cases, an interceptor may throw an exception to halt the normal flow of the chain and prevent the API from being called (e.g. an interceptor checking the expiry of JWT tokens), or return a Response without actually calling other chain items (caching is an example of this usage).
Obviously, interceptors are called in the order they've been added. The last unit of this chain connects to OkHttp and performs the HTTP request; then retrofit tries to convert the plain result taken from the API to your desired objects using the Converter Factories.
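For illustration, here is a minimal sketch of a single link in that chain (the header name is invented, not from the question), annotated with the three steps described above:
```java
import java.io.IOException;

import okhttp3.Interceptor;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

OkHttpClient client = new OkHttpClient.Builder()
        .addInterceptor(new Interceptor() {
            @Override
            public Response intercept(Chain chain) throws IOException {
                // 1. Acquire the request produced by the previous unit in the chain.
                Request original = chain.request();
                // 2. Apply this interceptor's own logic (here: decorate with a header).
                Request decorated = original.newBuilder()
                        .header("X-Trace-Id", "demo") // hypothetical header
                        .build();
                // 3. Hand the request to the next unit and return its response.
                return chain.proceed(decorated);
            }
        })
        .build();
```
Returning a `Response` directly (e.g. from a cache) or throwing instead of calling `chain.proceed` is what short-circuits the chain in the special cases mentioned above.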
Upvotes: 3 |
2018/03/20 | 719 | 2,669 | <issue_start>username_0: Webpack dev server proxy config documentation seen here:
<https://webpack.js.org/configuration/dev-server/#devserver-proxy>
says it uses http-proxy-middleware:
<https://github.com/chimurai/http-proxy-middleware#http-proxy-events>
Using the `onProxyRes` function documented in the above link I do the following:
```
function onProxyRes(proxyRes, req, res) {
proxyRes.headers['x-added'] = 'foobar'; // add new header to response
delete proxyRes.headers['x-removed']; // remove header from response
console.log(req.headers) // log headers
console.log(req.body) // undefined
console.log(proxyReq.body) // undefined
}
```
My problem: although everything else works great, I cannot log the request body; it returns `undefined`.
Anyone know how to read the request body for debugging purposes? Do I somehow need to use the npm `body-parser` module? If so, how? Thanks.<issue_comment>username_1: I tried logging the request with the express [body-parser](https://www.npmjs.com/package/body-parser) module but it caused the requests to hang. Using the [body](https://www.npmjs.com/package/body) module to log the request body fixed it.
```
const anyBody = require('body/any')
onProxyReq(proxyReq, req, res) {
  anyBody(req, res, function (err, body) {
    if (err) console.error(err)
    console.log(body)
  })
}
```
Note that I also used this same approach inside express as follows:
```
app.use((req, res, next) => {
  anyBody(req, res, function (err, body) {
    if (err) console.error(err)
    console.log(body)
  })
  next()
})
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I solved a similar problem using `body-parser`.
I was trying to modify the request body before sending it to the server, and it caused the request to hang, probably because after altering the body the `Content-Length` of the request no longer matched (causing it to be truncated).
The solution was "resizing" `Content-Length` before writing the new request body:
```
var bodyParser = require('body-parser');
devServer: {
  before: function (app, server) {
    app.use(bodyParser.json());
  },
  onProxyReq: function (proxyReq, req, res) {
    req.body = {
      ...req.body,
      myProperty: "myPropertyValue"
    };
    const body = JSON.stringify(req.body);
    proxyReq.setHeader('Content-Length', Buffer.byteLength(body));
    proxyReq.write(body);
  }
}
```
Not sure, but in your case it could be that the request has been truncated by adding/removing headers, causing it to hang.
Upvotes: 2 |
2018/03/20 | 707 | 2,782 | <issue_start>username_0: I am using a Vaadin BeanValidationBinder.
If I bind a field like this:
```
binder.forField(email).bind("email");
```
then the validation works.
If I bind the field like this:
```
binder.forField(email).bind(PersonDTO::getEmail, PersonDTO::setEmail);
```
then it does not work (no attempt is made to validate).
I prefer the latter form because it is more explicit and less error-prone, but why doesn't validation work, and how can I get it to work?
I have tried the `@Email` annotation on the field, the getter, and the setter. Using the annotation on the setter caused an exception, but having the annotation on either the field or the getter has the effect described above.<issue_comment>username_1: I haven't dug through the sources to see exactly where this is done, but it makes some sense if you think about it:
* using a field with [`binder.bind("name")`](https://vaadin.com/api/com/vaadin/data/Binder.BindingBuilder.html#bind-java.lang.String-), the framework is able to inspect it and deduce the associated annotations, and thus evaluate the new value
* using [`ValueProvider`](https://vaadin.com/api/com/vaadin/data/ValueProvider.html) & [`Setter`](https://vaadin.com/api/com/vaadin/server/Setter.html) with [`binder.bind(PersonDTO::getEmail, PersonDTO::setEmail)`](https://vaadin.com/api/com/vaadin/data/Binder.BindingBuilder.html#bind-com.vaadin.data.ValueProvider-com.vaadin.server.Setter-), you can supply any kind of method implementation which may not necessarily manipulate a field, although that's usually not the case. For example (not the brightest, but you get the point), you allow the user to input a series of comma separated values in a `TextField`, which internally you then split and save into a list.
What you can do in your case is explicitly add a specific [email validator](https://vaadin.com/api/com/vaadin/data/validator/EmailValidator.html) to the binding:
```java
binder.forField(email)
.withValidator(new EmailValidator("Please provide a valid e-mail address"))
.asRequired("Please provide a valid e-mail address")
.bind(PersonDTO::getEmail, PersonDTO::setEmail);
```
Upvotes: 2 <issue_comment>username_2: Unfortunately, the way you are trying to do the binding is unsupported by the built-in BeanValidation support. The somewhat limited JSR 303 support in Vaadin is only available with name-based binding: Vaadin's JSR 303 support works only at the property level, and without the naming-convention-based binding the framework has no way to connect errors to the related field.
If you can cope with displaying the errors separately in a generic place, like close to the save button, I'd suggest using the JSR 303 APIs directly and doing the validation yourself.
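For illustration, a minimal sketch of that manual approach (assuming the bean is a `PersonDTO` instance named `dto`, and showing the violations in a plain Vaadin notification):
```java
import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

import com.vaadin.ui.Notification;

Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
Set<ConstraintViolation<PersonDTO>> violations = validator.validate(dto);
for (ConstraintViolation<PersonDTO> v : violations) {
    // e.g. "email: must be a well-formed email address"
    Notification.show(v.getPropertyPath() + ": " + v.getMessage());
}
```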
Upvotes: 1 |
2018/03/20 | 3,256 | 6,973 | <issue_start>username_0: I have an array of strings like this:
```
strings = [
"ANT 107 90 Intro to Envrmntl Archaeology CMWL 101 TTH 01:00PM-02:15PM Markin 2/15 0 4.00",
"AMS 210 10 Intro to American Lit II SMTH 222 TTH 11:30AM-12:45PM DeProspo,R 0/25 0 4.00",
"AMS 210 11 Intro to American Lit II SMTH 222 TTH 01:00PM-02:15PM DeProspo,R 1/25 0 4.00",
"AMS 300 10 <NAME> DALY 107 TTH 10:00AM-11:15AM Knight 12/20 0 4.00",
"AMS 394 11 SpTp: Public Opinion Amer Pol DALY 107 TTH 02:30PM-03:45PM Cossette 5/16 0 4.00",
"ANT 105 10 Introduction to Anthropology CMWL 210 TTH 11:30AM-12:45PM Lampman 1/25 1 4.00",
"ANT 107 10 Intro to Envrmntl Archaeology CMWL 101 TTH 11:30AM-12:45PM Markin 2/25 0 4.00",
"ANT 107 90 Intro to Envrmntl Archaeology CMWL 101 TTH 01:00PM-02:15PM Markin 2/15 0 4.00",
"ANT 294 10 SpTp: Queer Anthropology CMWL 210 TTH 01:00PM-02:15PM Neely 0/12 1 4.00",
"ANT 300 10 Language and Culture CMWL 101 TTH 02:30PM-03:45PM Neely 1/18 0 4.00",
"ANT 320 10 Race and Ethnicity CMWL 101 TTH 10:00AM-11:15AM Lampman -4/16 2 4.00",
"ANT 104 10 Intro to World Music & Ethno GCA 204 TTH 10:00AM-11:15AM McCollum, J 0/25 0 4.00",
"ANT 105 10 Introduction to Anthropology CMWL 210 TTH 11:30AM-12:45PM Lampman 1/25 1 4.00",
"ANT 294 10 SpTp: Queer Anthropology CMWL 210 TTH 01:00PM-02:15PM Neely 0/12 1 4.00",
"ANT 300 10 Language and Culture CMWL 101 TTH 02:30PM-03:45PM Neely 1/18 0 4.00",
"ANT 320 10 Race and Ethnicity CMWL 101 TTH 10:00AM-11:15AM Lampman -4/16 2 4.00",
"ANT 104 10 Intro to World Music & Ethno GCA 204 TTH 10:00AM-11:15AM McCollum, J 0/25 0 4.00",
"ANT 105 10 Introduction to Anthropology CMWL 210 TTH 11:30AM-12:45PM Lampman 1/25 1 4.00",
"ANT 300 10 Language and Culture CMWL 101 TTH 02:30PM-03:45PM Neely 1/18 0 4.00",
"ANT 320 10 Race and Ethnicity CMWL 101 TTH 10:00AM-11:15AM Lampman -4/16 2 4.00",
"ANT 104 10 Intro to World Music & Ethno GCA 204 TTH 10:00AM-11:15AM McCollum, J 0/25 0 4.00",
"AMS 210 10 Intro to American Lit II SMTH 222 TTH 11:30AM-12:45PM DeProspo,R 0/25 0 4.00",
"AMS 210 11 Intro to American Lit II SMTH 222 TTH 01:00PM-02:15PM DeProspo,R 1/25 0 4.00",
"AMS 300 10 <NAME> DALY 107 TTH 10:00AM-11:15AM Knight 12/20 0 4.00",
"AMS 394 11 SpTp: Public Opinion Amer Pol DALY 107 TTH 02:30PM-03:45PM Cossette 5/16 0 4.00",
"ANT 104 10 Intro to World Music & Ethno GCA 204 TTH 10:00AM-11:15AM McCollum, J 0/25 0 4.00",
"ANT 294 10 SpTp: Queer Anthropology CMWL 210 TTH 01:00PM-02:15PM Neely 0/12 1 4.00",
"AMS 300 10 <NAME> DALY 107 TTH 10:00AM-11:15AM Knight 12/20 0 4.00",
]
```
I want to sort this array by the start time; the first value would be `01:00PM` for `ANT 107 90 Intro to Envrmntl Archaeology`. Is there any straightforward way of doing this?<issue_comment>username_1: Here is a quick hack approach based on your data. It is not really sorting by clock time (it ignores AM/PM), just by the time digits as a numerical value.
```
strings.sort! { |x,y|
# split on the time delimiter
s = x.index('-')
# sort by time as numerical
x[s-7..s-1] <=> y[s-7..s-1]
}
puts strings
```
outputs:
```
ANT 107 90 Intro to Envrmntl Archaeology CMWL 101 TTH 01:00PM-02:15PM Markin 2/15 0 4.00
ANT 294 10 SpTp: Queer Anthropology CMWL 210 TTH 01:00PM-02:15PM Neely 0/12 1 4.00
AMS 210 11 Intro to American Lit II SMTH 222 TTH 01:00PM-02:15PM DeProspo,R 1/25 0 4.00
ANT 294 10 SpTp: Queer Anthropology CMWL 210 TTH 01:00PM-02:15PM Neely 0/12 1 4.00
ANT 294 10 SpTp: Queer Anthropology CMWL 210 TTH 01:00PM-02:15PM Neely 0/12 1 4.00
ANT 107 90 Intro to Envrmntl Archaeology CMWL 101 TTH 01:00PM-02:15PM Markin 2/15 0 4.00
AMS 210 11 Intro to American Lit II SMTH 222 TTH 01:00PM-02:15PM DeProspo,R 1/25 0 4.00
ANT 300 10 Language and Culture CMWL 101 TTH 02:30PM-03:45PM Neely 1/18 0 4.00
AMS 394 11 SpTp: Public Opinion Amer Pol DALY 107 TTH 02:30PM-03:45PM Cossette 5/16 0 4.00
...
...
```
Upvotes: 1 <issue_comment>username_2: Here's another alternative using `DateTime`:
```
require 'date'
strings.sort_by! do |item|
time = item.scan(/(\d{2}:\d{2}(PM|AM))/)
DateTime.parse(time.first.first).to_time.to_i
end
puts strings
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can try:
```
data = <
```
The `$2=='A' ? 0 : 1` is not needed with your data, but it is relevant for data like
```
data = <
```
Upvotes: 0 <issue_comment>username_4: ```
arr = [
"ANT 107 90 Intro to Archaeology 01:00PM-02:15PM Markin ",
"AMS 210 10 Intro to Lit I 11:30AM-12:45PM DeProspo,R ",
"AMS 210 11 Intro to Lit II 02:00PM-03:15PM DeProspo,R ",
"AMS 300 10 <NAME> 10:00AM-11:15AM Knight "
]
arr.sort_by { |s| [s[40], s[35,5]] }
#=> ["AMS 300 10 <NAME> 10:00AM-11:15AM Knight ",
# "AMS 210 10 Intro to Lit I 11:30AM-12:45PM DeProspo,R ",
# "ANT 107 90 Intro to Archaeology 01:00PM-02:15PM Markin ",
# "AMS 210 11 Intro to Lit II 02:00PM-03:15PM DeProspo,R "]
```
Notice that if `s = arr.first` then
```
[s[40], s[35,5]]
#=> ["P", "01:00"]
```
The ordering of arrays used by [Enumerable#sort\_by](http://ruby-doc.org/core-2.4.0/Enumerable.html#method-i-sort_by) is explained in the third paragraph of the doc for [Array#<=>](http://ruby-doc.org/core-2.4.0/Array.html#method-i-3C-3D-3E). The ordering of strings (e.g., comparing `"P"` vs `"A"` or `"01:00"` vs `"01:20"`) is explained in the doc for [String#<=>](http://ruby-doc.org/core-2.4.0/String.html#method-i-3C-3D-3E). `["A", "11:30"]` would be sorted before `["P", "01:00"]` because `"A"` precedes `"P"`. Similarly, `["P", "01:00"]` is sorted before `["P", "02:00"]` because the first elements of the two arrays are equal and `"01:00"` precedes `"02:00"`.
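For example, evaluating the sort keys directly (a quick illustration you can run in irb):
```ruby
["A", "11:30"] <=> ["P", "01:00"] #=> -1, sorts first, since "A" precedes "P"
["P", "01:00"] <=> ["P", "02:00"] #=> -1, first elements tie, so "01:00" decides
```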
One could alternatively write
```
arr.sort_by { |s| s[40] + s[35,5] }
```
If `s = arr.first` then
```
s[40] + s[35,5]
#=> "P01:00"
```
Upvotes: 0 |
2018/03/20 | 1,820 | 4,091 <issue_start>username_0: I want to copy a file from:
>
> C:\Users\Machina\Documents\Visual Studio
> 2017\Projects\P\Patcher\bin\Debug\Patches\0.0.0.2\SomeDir\OtherDir\File.txt
>
>
>
to this Folder:
>
> C:\Users\Machina\Documents\Visual Studio
> 2017\Projects\P\Patcher\bin\Debug\Builds
>
>
>
but I need to create a subfolder in the destination folder for this file:
>
> \0.0.0.2\SomeDir\OtherDir\
>
>
>
so the new path to the file should be:
>
> C:\Users\Machina\Documents\Visual Studio
> 2017\Projects\P\Patcher\bin\Debug\Builds\0.0.0.2\SomeDir\OtherDir\File.txt
>
>
>
I tried:
```
fileList[i].Replace(filePath, $"{path}Builds/")
```
but this returns the source file path :/ I have no idea how to do this.