date (stringlengths, 10 to 10) | nb_tokens (int64, 60 to 629k) | text_size (int64, 234 to 1.02M) | content (stringlengths, 234 to 1.02M)
---|---|---|---
2018/03/21 | 445 | 1,443 | <issue_start>username_0: I'm having a weird problem. My domain is [aridod.com](http://aridod.com), which works fine, but if I add www, i.e. [www.aridod.com](http://www.aridod.com), it takes me to [domainnamesales.com](http://domainnamesales.com), which is completely weird. I don't know how this happens. My source files are fairly simple HTML. When I looked into the browser source I saw an iframe being injected, and it is not coming from my source files. Here are my DNS settings: [](https://i.stack.imgur.com/auhSd.jpg)
Any help would be greatly appreciated.<issue_comment>username_1: I would break the image up into four separate images (with different class names) and use the `z-index` property.
<https://www.w3schools.com/cssref/pr_pos_z-index.asp>
Hope that helps
Upvotes: 0 <issue_comment>username_2: You will need to change the `background-position` of all the images on hover...
```css
body,
html {
height: 100%;
margin: 0;
}
.bg {
background-image: url("http://via.placeholder.com/80x80"), url("http://via.placeholder.com/80x80"), url("http://via.placeholder.com/80x80"), url("http://via.placeholder.com/80x80");
height: 100%;
background-position: center;
background-repeat: no-repeat;
transition: 1s ease;
}
.bg:hover {
background-position: top left, top right, bottom left, bottom right;
}
```
Upvotes: 2 |
2018/03/21 | 1,064 | 3,722 | <issue_start>username_0: My work is running Jenkins and Bitbucket Server (so instead of the bitbucket cloud, they host their own bitbucket version). I am used to having passing/failing builds on github and bitbucket cloud immediately reporting back on PRs and branches as to whether the build passed or failed. I want to give that gift to my team in the current environment. How do I get PRs in Bitbucket server to receive success/failure of builds from Jenkins?
---
[Figure 1 just shows an example of the functionality I want, operational on PRs in github+codeship]
---
[](https://i.stack.imgur.com/2W6mj.png)<issue_comment>username_1: While the [Webhook to Jenkins for Bitbucket](https://marketplace.atlassian.com/plugins/com.nerdwin15.stash-stash-webhook-jenkins/server/overview) can help notify Jenkins to poll whenever there is a commit, you still need to be mindful of the [“lazy ref updates” in Bitbucket](https://christiangalsterer.wordpress.com/2015/04/23/continuous-integration-for-pull-requests-with-jenkins-and-stash/#comment-108) (described [in this thread](https://community.atlassian.com/t5/Bitbucket-questions/Change-pull-request-refs-after-Commit-instead-of-after-Approval/qaq-p/194702#M6839))
>
> We had to implement something that would do a get to the REST API for the `pull-request/*/changes` before the call to the Jenkins `/git/notifyCommit` url.
>
>
>
The last Jenkins URL `/git/notifyCommit` comes from the [Jenkins Git plugin](https://wiki.jenkins.io/display/JENKINS/Git+Plugin#GitPlugin-Pushnotificationfromrepository).
See more at "[Configuring Webhook To Jenkins for Bitbucket](https://mohamicorp.atlassian.net/wiki/spaces/DOC/pages/121274372/Configuring+Webhook+To+Jenkins+for+Bitbucket)".
Once Jenkins is properly called, you can then, as mentioned in "[Notify build status from Jenkins to Bitbucket Server](https://stackoverflow.com/a/47338075/6309)", use the "[Jenkins Stash Pullrequest Builder](https://plugins.jenkins.io/stash-pullrequest-builder)", from [`nemccarthy/stash-pullrequest-builder-plugin`](https://github.com/nemccarthy/stash-pullrequest-builder-plugin).
[](https://i.stack.imgur.com/bdSh5.png)
Upvotes: 3 [selected_answer]<issue_comment>username_2: Bitbucket Server has a `build-status` [API](https://developer.atlassian.com/server/bitbucket/how-tos/updating-build-status-for-commits/). It stores a build status for a particular commit; there is no separate PR build status. The PR build status is the build status of the head commit in that PR.
You can implement the REST API call to update the build status yourself, or use one of the existing plugins. We use the [Post Webhooks for Bitbucket](https://marketplace.atlassian.com/plugins/nl.topicus.bitbucket.bitbucket-webhooks/server/overview) Bitbucket plugin in conjunction with the [Bitbucket Branch Source](https://plugins.jenkins.io/cloudbees-bitbucket-branch-source) Jenkins plugin.
Upvotes: 2 <issue_comment>username_3: You can set up the [Stash Notifier plugin](https://wiki.jenkins.io/display/JENKINS/Bitbucket+%28Stash%29+Notifier+Plugin); it works perfectly with Bitbucket and reports the build status to branches and pull requests.
Upvotes: 1 <issue_comment>username_4: You could use the Bitbucket REST API to achieve this.
Here is the how-to for updating commits with the build status:
[Updating build status for commits](https://developer.atlassian.com/server/bitbucket/how-tos/updating-build-status-for-commits/)
Commit statuses are then shown in pull requests and on branches.
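As a rough illustration of that REST call, here is a minimal Python sketch. The host, commit hash, and build key below are made-up placeholders, and authentication is omitted; it only constructs the request object, while `urllib.request.urlopen(req)` would actually send it:

```python
import json
from urllib import request

# All of these values are placeholders -- point them at your own server,
# commit hash, and Jenkins job. Authentication is omitted for brevity.
commit = "0a943a29376f2336b78312d99e65da17048951db"
payload = {
    "state": "SUCCESSFUL",        # or "FAILED" / "INPROGRESS"
    "key": "JENKINS-PIPELINE",    # identifies this build in the PR view
    "name": "my-job #42",
    "url": "https://jenkins.example.com/job/my-job/42/",
    "description": "Build passed",
}

# Build (but do not send) the POST to the build-status resource.
req = request.Request(
    url="https://bitbucket.example.com/rest/build-status/1.0/commits/" + commit,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
```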
Upvotes: 2 |
2018/03/21 | 192 | 680 | <issue_start>username_0: I am new to React. I have developed a page using React, and now I want to measure code coverage. Please suggest which tool I should use for code coverage of React code.
Thanks<issue_comment>username_1: [Jest](https://facebook.github.io/jest/) is probably the best tool, but it is not as easy to use as libraries like [Mocha](https://mochajs.org/), which is simple and easy to use.
A good comparison of the two is provided [here](https://spin.atomicobject.com/2017/05/02/react-testing-jest-vs-mocha/).
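For the coverage part specifically, Jest has coverage reporting built in, so a minimal setup could be as small as this hypothetical `package.json` fragment:

```json
{
  "scripts": {
    "test": "jest --coverage"
  }
}
```

Running `npm test` then prints a per-file coverage table alongside the test results.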
Upvotes: 1 <issue_comment>username_2: After the new update from them, **Jest** is the best option with **Enzyme** (for shallow rendering).
Upvotes: 0 |
2018/03/21 | 475 | 1,789 | <issue_start>username_0: I would like to include the same JavaScript file more than once on a page, because different variables are defined using `data` attributes to fetch different data.
Here is an example of what I mean:
```
```
The above obviously detects and displays the information from the script twice, but the information displayed is the same for both, when it should be different.
My js file:
```
var key = document.getElementById("sc_widget").getAttribute("data-key");
var id = document.getElementById("sc_widget").getAttribute("data-id");

$.ajax({
    type: "post",
    url: "//example.co.uk/sc.php?key=" + key + "&id=" + id,
    contentType: "application/x-www-form-urlencoded",
    success: function(responseData, textStatus, jqXHR) {
        $('.sc_ouput').html(responseData);
    },
    error: function(jqXHR, textStatus, errorThrown) {
        console.log(errorThrown);
    }
})
```
So is there any way to accomplish this?<issue_comment>username_1: `id` attributes must be **unique** in an HTML document.
`document.getElementById("sc_widget")` will always get the *first* element with the matching ID. The others will be ignored as the browser attempts to recover from the error.
Instead, get the *last* matching element, or better, avoid duplicate `id`s entirely and select the widgets by class.
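One way to handle several widgets is to loop over all of them instead of calling `getElementById` once. A minimal sketch, where the `widgets` array stands in for `document.querySelectorAll('.sc_widget')` so the logic is runnable outside a browser (the class name and data values are assumptions):

```javascript
// Each entry mimics a DOM element with data-key / data-id attributes.
const widgets = [
  { dataset: { key: "k1", id: "42" } },
  { dataset: { key: "k2", id: "99" } },
];

// Build one request URL per widget instead of reusing the first element's data.
const requests = widgets.map(w =>
  "//example.co.uk/sc.php?key=" + encodeURIComponent(w.dataset.key) +
  "&id=" + encodeURIComponent(w.dataset.id)
);
console.log(requests.join("\n"));
```

In the real page you would fire one AJAX call per entry of `requests` and write each response next to its own widget.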
Upvotes: 2 <issue_comment>username_2: >
> I would like to include the same javascript file more than once on a
> page.
>
>
>
You don't need to include and re-include the same file.
You simply need to write a function which takes one or more parameters:
```
function myFunction(fruit) {
/* [... CODE HERE ...] */
}
```
If you want to run the script once on the data `apple` and once on the data `banana`, you can then do the following:
```
myFunction('apple');
myFunction('banana');
```
Upvotes: 1 |
2018/03/21 | 1,225 | 4,513 | <issue_start>username_0: I am trying to make a class that sets the time using three integer variables for hours, minutes, and seconds, and a fourth boolean variable to set AM or PM. I am able to get the three integer variables to work correctly, but I don't know what I am doing wrong regarding the boolean variable not being assigned or tested correctly.
```
public class Time
{
    private int hour;
    private int minute;
    private int second;
    private boolean amPm;

    public Time(int setHour, int setMinute, int setSecond, boolean setAmPm)
    {
        hour = setHour;
        minute = setMinute;
        second = setSecond;
        amPm = setAmPm;
    }

    public String toString()
    {
        if (amPm == true)
        {
            String halfDay = "P.M.";
        }
        else
        {
            String halfDay = "A.M.";
        }
        return hour + ":" + minute + ":" + second + " " + halfDay + ".";
    }
}
```<issue_comment>username_1: You can't use `halfDay` there because it's out of scope.
>
> In computer programming, the scope of a name binding – an association of a name to an entity, such as a variable – is the region of a computer program where the binding is valid: where the name can be used to refer to the entity. Such a region is referred to as a scope block. In other parts of the program the name may refer to a different entity (it may have a different binding), or to nothing at all (it may be unbound).
>
>
>
In other words, you declare `halfDay` inside the if/else statement, but it disappears once you close the brackets. If you want to use it in your concatenation, you need to widen its scope by declaring it before the if/else.
```
public class Time
{
private int hour;
private int minute;
private int second;
private boolean amPm;
public Time(int setHour, int setMinute, int setSecond, boolean setAmPm)
{
hour = setHour;
minute = setMinute;
second = setSecond;
amPm = setAmPm;
}
public String toString()
{
String halfDay;
if (amPm == true)
{
halfDay = "P.M.";
}
else
{
halfDay = "A.M.";
}
return hour + ":" + minute + ":" + second + " " + halfDay + ".";
}
}
```
Upvotes: 2 <issue_comment>username_2: The `boolean` is fine, the `halfDay` variable doesn't have *scope*. Give it scope. Like,
```
public String toString()
{
String halfDay;
if (amPm)
{
halfDay = "P.M.";
}
else
{
halfDay = "A.M.";
}
return hour + ":" + minute + ":" + second + " " + halfDay + ".";
}
```
But, I would prefer to not use such a temporary variable. You could do,
```
public String toString()
{
if (amPm)
{
return hour + ":" + minute + ":" + second + " P.M..";
}
return hour + ":" + minute + ":" + second + " A.M..";
}
```
**or** with a ternary and a formatted `String`. Like
```
public String toString()
{
return String.format("%d:%d:%d %s.", hour, minute, second, amPm ? "P.M." : "A.M.");
}
```
Upvotes: 1 <issue_comment>username_3: The issue you've come across here is one of scoping: You are initializing the `halfDay` String inside the if/else statement in the `toString()` function, but it needs to be available outside of that to be used in the return statement.
Your objective can be achieved using a [ternary operator](https://www.sitepoint.com/java-ternary-operator/) like so:
```
@Override // Overriding the toString() method from Object
public String toString() {
return hour + ":" +
minute + ":" +
second + " " +
(amPm ? "P.M." : "A.M.") + ".";
}
```
Upvotes: 2 <issue_comment>username_4: ```
public String toString()
{
String halfDay = "";
if (amPm == true)
{
halfDay = "P.M.";
}
else
{
halfDay = "A.M.";
}
return hour + ":" + minute + ":" + second + " " + halfDay + ".";
}
```
Do this. Your halfDay variable isn't in the scope of your return.
Upvotes: 0 <issue_comment>username_5: Simplified boolean check and changed scope of halfDay:
```
public Time(int setHour, int setMinute, int setSecond, boolean setAmPm)
{
hour = setHour;
minute = setMinute;
second = setSecond;
amPm = setAmPm;
}
public String toString()
{
String halfDay = "A.M.";
if (amPm)
{
halfDay = "P.M.";
}
return hour + ":" + minute + ":" + second + " " + halfDay + ".";
}
```
Upvotes: 1 |
2018/03/21 | 397 | 1,702 | <issue_start>username_0: I am writing an enterprise-scale application with Angular and ngrx. The intention is to use Flux and ngrx throughout. For reuse and separability we require (at least) two state stores that do not interact with each other. But we do need both stores to be active at the same time, and potentially accessed from the same components.
Ngrx seems to be predicated on the assumption that there will only ever be one Store at once. Is there an approach that will allow me to have multiple Store objects (templated of course with different State objects), and have them both loaded and active at the same time?
I'm aware that 'best practice' recommends combining the stores into one. That's not viable here unless there is an entirely novel approach.<issue_comment>username_1: I'd suggest setting up two feature states. Here are the relevant docs:
<https://github.com/ngrx/platform/blob/v5.2.0/docs/store/api.md#feature-module-state-composition>
Though it isn't the same thing as having two separate stores, it is the same for most practical purposes. The feature states are loaded when the module that imports `StoreModule.forFeature('featureName', reducers)` is loaded. You can do this lazily or eagerly. Each feature state will have access to root state, so you can put common state on the root state that both can access. Feature states should never reference each other, as they may not be loaded, and that would negate the reason for having them.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I have created a complete application with multiple stores using actions and reducers. I have uploaded a zip file to GitHub.
<https://github.com/akashprince138/multiple_store>
Upvotes: -1 |
2018/03/21 | 503 | 1,781 | <issue_start>username_0: I have 197 levels relating to location. I want to simplify this by creating a new variable "INSIDE" which stores 1 when the location is a building/home/etc. and 0 when the location is outside. I have tried grepl() but it gives a warning
```
data$Inside<-ifelse(grepl(data$Premise.Description,pattern = c("BUILDING","ROOM","AUTO","BALCONY","BANK","BAR","STORE","CHURCH","COLLEGE","CONDOMINIUM","CENTER","DAY CARE","SCHOOL","HOSPITAL","LIBRARY","PARLOR","OFFICE","MOSQUE","CLUB","PORCH","MALL","WAREHOUSE")),1,0)
```
>
> Warning message:
> In grepl(crime_3yr$Premise.Description, pattern = c("BUILDING", :
> argument 'pattern' has length > 1 and only the first element will be used
>
>
>
I have tried using lapply() but it did not work either.
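For reference, the warning means `grepl()` used only the first keyword; the usual fix is to join the alternatives into a single regular expression with `|` (in R, `paste(keywords, collapse = "|")`). A Python sketch of the same idea, purely for illustration with a made-up subset of the keywords:

```python
import re

# Collapse the keyword list into ONE pattern joined by "|", so a single
# regex search matches any of the alternatives (same idea as in R with
# paste(keywords, collapse = "|")).
keywords = ["BUILDING", "ROOM", "STORE", "SCHOOL", "MALL"]
pattern = "|".join(keywords)

premises = ["BUILDING", "SHOP", "Street", "SHOPPING MALL"]
inside = [1 if re.search(pattern, p) else 0 for p in premises]
print(inside)
```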
I want the output to be like this:
```
BUILDING 1
SHOP 1
Street 0
```
|
2018/03/21 | 361 | 1,342 | <issue_start>username_0: Here is my code, where I try to build a real-time HTML editor. In JavaScript I get the text from the textarea with id=pure, then in the document.body.onkeyup handler I pass the value to the element with id=compiled. It does not work at all. I wonder if the problem is in the open/writeln/close sequence or some other syntax?
```
function compile() {
    var h = document.getElementById("pure");
    var compiled = document.getElementById("compiled").contentWindow.document;
    document.body.onkeyup = function(){
        compiled.open();
        compiled.writeln(h.value);
        compiled.close();
    };
}
compile();
Write your HTML here:
```<issue_comment>username_1: ```
Input something in the input box:
Name:
```
Upvotes: 0 <issue_comment>username_2: Not 100% sure what you're after, looking at your code.
But if all you're after is a textarea where you can put HTML markup and then see a preview, below is an example:
```js
var h = document.getElementById("pure");
var compiled = document.getElementById("compiled");

h.onkeyup = function() {
    compiled.innerHTML = h.value;
    // flag input whose markup the browser had to correct
    h.classList.toggle("error",
        compiled.innerHTML !== h.value);
};
h.onkeyup();
```
```css
.error {
background-color: red;
color: white;
}
```
```html
Write your HTML here:
Hello **world**
```
Upvotes: 1 |
2018/03/21 | 1,344 | 4,932 | <issue_start>username_0: I want to create two-way communication between my Qt apps, and I want to use QProcess to do it. I am successfully calling the child app from the root app and sending test data without any error, but I can't receive any data in the child app. I'll be grateful for any help. I'm using Qt 4.7.1. Below is my test code:
**Root app:**
```
InterProcess::InterProcess(QObject *parent) : QProcess(parent)
{
    process = new QProcess(this);
    process->start(myChildApp);
    process->waitForStarted();
    process->setCurrentWriteChannel(QProcess::StandardOutput);
    process->write("Test");

    connect( process, SIGNAL(error(QProcess::ProcessError)), this, SLOT(error(QProcess::ProcessError)) );
    connect( process, SIGNAL(readyReadStandardError()), this, SLOT(readyReadStandardError()) );
    connect( process, SIGNAL(readyReadStandardOutput()), this, SLOT(readyReadStandardOutput()) );
}

QByteArray InterProcess::read()
{
    QByteArray readBuffer = process->readAllStandardOutput();
    return readBuffer;
}

void InterProcess::error( QProcess::ProcessError error )
{
    qDebug() << "Error!";
    qDebug() << error;
}

void InterProcess::readyReadStandardError()
{
    qDebug() << "Ready to read error.";
    qDebug() << process->readAllStandardError();
}

void InterProcess::readyReadStandardOutput()
{
    qDebug() << "The output:";
    QByteArray readBuffer = process->readAllStandardOutput();
    qDebug() << readBuffer;
}
```
**Child app:**
```
InterProcess::InterProcess(QObject *parent) : QProcess(parent)
{
    process = new QProcess();
    process->setCurrentReadChannel(QProcess::StandardOutput);

    connect( process, SIGNAL(readyRead()), this, SLOT(readyReadStandardOutput()));
    connect( process, SIGNAL(error(QProcess::ProcessError)), this, SLOT(error(QProcess::ProcessError)) );
    connect( process, SIGNAL(readyReadStandardError()), this, SLOT(readyReadStandardError()) );
    connect( process, SIGNAL(readyReadStandardOutput()), this, SLOT(readyReadStandardOutput()) );

    process->waitForReadyRead(5000);
}

void InterProcess::readyReadStandardError()
{
    qDebug() << "Ready to read error.";
    qDebug() << process->readAllStandardError();
    setText("Ready error");
}

void InterProcess::readyReadStandardOutput()
{
    setMessage("2");
    qDebug() << "The output:";
    QByteArray readBuffer = process->readAllStandardOutput();
    qDebug() << readBuffer;
}

void InterProcess::error( QProcess::ProcessError error )
{
    qDebug() << "Error!";
    qDebug() << error;
    setText(QString::number(error)); // the enum has no implicit QString conversion
}
```<issue_comment>username_1: Locally, using UDP is very convenient and efficient
```
void Server::initSocket() {
    udpSocket = new QUdpSocket(this);
    udpSocket->bind(QHostAddress::LocalHost, 7755);
    connect(udpSocket, SIGNAL(readyRead()), this, SLOT(readPendingDatagrams()));
}

void Server::readPendingDatagrams() {
    while (udpSocket->hasPendingDatagrams()) {
        QByteArray datagram;
        datagram.resize(udpSocket->pendingDatagramSize());
        QHostAddress sender;
        quint16 senderPort;
        udpSocket->readDatagram(datagram.data(), datagram.size(),
                                &sender, &senderPort);
        processTheDatagram(datagram);
    }
}
```
Upvotes: 0 <issue_comment>username_2: It's very hard to explain all the mistakes in one answer, so just look at the code and ask if you still have problems.
Here is an example of using QProcess for IPC.
This is your main process, which creates the additional process and connects to its signals:
**MyApplication.h**
```
#ifndef MYAPPLICATION_H
#define MYAPPLICATION_H
#include <QApplication>

class InterProcess;

class MyApplication : public QApplication {
    Q_OBJECT
public:
    MyApplication(int &argc, char **argv);

signals:
    void mainApplicationSignal();

private slots:
    void onInterProcessSignal();

private:
    InterProcess *mProcess;
};

#endif // MYAPPLICATION_H
```
**MyApplication.cpp**
```
#include "MyApplication.h"
#include "InterProcess.h"
MyApplication::MyApplication(int &argc, char **argv) : QApplication(argc, argv) {
    mProcess = new InterProcess(this);

    connect(mProcess, SIGNAL(interProcessSignal()),
            this, SLOT(onInterProcessSignal()));

    mProcess->start();
}

void MyApplication::onInterProcessSignal() {}
```
This is an example implementation of your InterProcess class:
**InterProcess.h**
```
class InterProcess : public QProcess {
    Q_OBJECT
public:
    explicit InterProcess(QObject *parent = nullptr);

signals:
    void interProcessSignal();

private slots:
    void onMainApplicationSignal();
};
```
**InterProcess.cpp**
```
#include "InterProcess.h"
#include "MyApplication.h"
InterProcess::InterProcess(QObject *parent) : QProcess(parent) {
    if(parent) {
        auto myApp = qobject_cast<MyApplication *>(parent);
        if(myApp) {
            connect(myApp, SIGNAL(mainApplicationSignal()),
                    this, SLOT(onMainApplicationSignal()));
        }
    }
}

void InterProcess::onMainApplicationSignal() {}
```
Upvotes: 3 [selected_answer] |
2018/03/21 | 493 | 1,803 | <issue_start>username_0: I am working on an application where I record video via the camera intent. On my Samsung phone `MediaStore.EXTRA_VIDEO_QUALITY` works and the size limit I set is honored, but in the same application on my Google Pixel neither `MediaStore.EXTRA_VIDEO_QUALITY` nor the allocated size limit has any effect with the camera intent.
My code is given below:
```
public void takeVideoFromCamera(){
    File mediaFile = new File(Environment.getExternalStorageDirectory().getAbsolutePath() + "/myvideo.mp4");
    Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
    Uri videoUri;

    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
        // videoUri = FileProvider.getUriForFile(this, this.getApplicationContext().getPackageName() + ".provider", mediaFile);
        videoUri = FileProvider.getUriForFile(this, "i.am.ce.by.murgqcy.provider", mediaFile);
    } else {
        videoUri = Uri.fromFile(mediaFile);
    }

    intent.putExtra(MediaStore.EXTRA_OUTPUT, videoUri);
    intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 0);
    intent.putExtra(MediaStore.EXTRA_SIZE_LIMIT, 5242880L); // 5 * 1024 * 1024 = 5 MB
    intent.putExtra(MediaStore.EXTRA_DURATION_LIMIT, 45);
    startActivityForResult(intent, VIDEO_CAPTURE);
}
```<issue_comment>username_1: ```
intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);
```
Upvotes: 1 <issue_comment>username_2: According to [MediaStore.EXTRA\_VIDEO\_QUALITY](http://developer.android.com/reference/android/provider/MediaStore.html#EXTRA_VIDEO_QUALITY)
You should change the value of ***MediaStore.EXTRA\_VIDEO\_QUALITY*** from 0 to 1.
0 means low quality, while 1 means high quality.
Thus the solution could be `intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 1);`
Upvotes: 2 |
2018/03/21 | 1,440 | 5,761 | <issue_start>username_0: I have two audio files in which a sentence is read (like singing a song) by two different people, so they have different lengths. They are vocals only, with no instruments.
A1: Audio File 1
A2: Audio File 2
Sample sentence : *"Lorem ipsum dolor sit amet, ..."*
[](https://i.stack.imgur.com/eVJqS.png)
I know the time every word starts and ends in A1, and I need to automatically find what time every word starts and ends in A2. (Any language, preferably Python or C#)
Times are saved in XML, so I can split the A1 file by word. So, how do I find the sound of a word in another audio file where the word has a different duration and a different voice?<issue_comment>username_1: My approach for this would be to record the dB volume at a constant interval (such as every 100 milliseconds) and store these volumes in a list or array. I found a way of doing this in Java here: [Decibel values at specific points in wav file](https://stackoverflow.com/questions/13243690/decibel-values-at-specific-points-in-wav-file). It is possible in other languages. Meanwhile, take note of the maximum volume:
```
max = 0;
currentVolume = f(x)
if currentVolume > max
{
max = currentVolume
}
```
Then divide the maximum volume by an editable divisor; in my example I went for 7. Say the maximum volume is 21: 21/7 = 3 dB. Let's call this measure X.
We then pick a second threshold factor, such as 1, and multiply it by X. Whenever the volume is greater than this new value (1\*X), we consider that to be the start of a word. When it is less than that value, we consider it to be the end of a word.
[Visual explanation](https://i.stack.imgur.com/pE5US.png)
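A runnable sketch of that thresholding idea (the readings below are toy values standing in for dB volumes sampled every 100 ms; X = max/7, and word boundaries are where the volume crosses 1\*X):

```python
# Toy dB readings, one per 100 ms interval.
volumes = [0, 1, 9, 21, 14, 2, 0, 0, 8, 15, 3, 0]
x = max(volumes) / 7          # 21 / 7 = 3 dB
threshold = 1 * x

words, start = [], None
for i, v in enumerate(volumes):
    if v > threshold and start is None:
        start = i                      # word begins
    elif v <= threshold and start is not None:
        words.append((start, i))       # word ends
        start = None
print(words)
```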
Upvotes: 2 <issue_comment>username_2: So from what I read, it seems you would want to use [Dynamic Time Warping (DTW)](https://en.wikipedia.org/wiki/Dynamic_time_warping). Of course, I'll leave the explanation for wikipedia, but it is generally used to recognize speech patterns without getting noise from different pronunciation.
Sadly, I am more well versed in C, Java and Python. So I will be suggesting python Libraries.
1. [fastdtw](https://pypi.python.org/pypi/fastdtw)
2. [pydtw](https://pypi.python.org/pypi/pydtw)
3. [mlpy](http://mlpy.sourceforge.net/docs/3.5/dtw.html)
4. [rpy2](https://rpy2.readthedocs.io/en/version_2.8.x/)
With [rpy2](https://rpy2.readthedocs.io/en/version_2.8.x/) you can actually use R's library and use their implementation of DTW in your python code. Sadly, I couldn't find any good tutorials for this but there are good examples if you choose to use R.
Please let me know if that doesn't help, Cheers!
Upvotes: 3 <issue_comment>username_3: Without knowing how sophisticated your understanding of the problem space is it isn't easy to know whether to point you in a direction or provide detail on why this problem is non-trivial.
I'd suggest that you start with something like <https://cloud.google.com/speech/> and try to convert the speech blocks to text and then perform a similarity comparison on these.
If you really want to try to do the processing yourself you could look at doing some spectrographic analysis. Take the wave form data and perform an FFT to get frequency distributions and look for marker patterns that align your samples.
With only single word comparison of different speakers you are probably not going to be able to apply any kind of neural network unless you are able to train them on the 2 speakers entire speech set and use the network to then try to compare the individual word chunks.
It's been a few years since I did any of this so maybe it's easier these days but my recollection is that although this sounds conceptually simple it might prove to be more difficult than you realise.
The Dynamic Time Warping looks like the most promising suggestion.
Upvotes: 2 <issue_comment>username_4: The secret sauce below: pointA - pointB is zero if both points have the same value. Numerically, doing pointA minus pointB lets us identify the file byte offset at which comparing the raw audio curves from a pair of input files gives us this zero value (or a value close to zero in a relative sense, if the two sources differ even slightly).
The approach is to open both files and pluck out the raw audio curve of each. Define two variables, bestSum and currentSum, and set both to MAX\_INT\_VALUE (any arbitrarily high value). Iterate across both files simultaneously, obtaining the integer value of the current raw audio sample from file A and the same from file B, and for each pair subtract the file B integer from the file A integer. Continue this loop until you reach the end of one file. Inside the loop, add the absolute value of that subtraction to currentSum. At the bottom of the loop, update bestSum to become currentSum if currentSum < bestSum, and also store the current file index offset.
Create an outer loop which repeats all of the above while introducing a time offset into one file, then relaunch the inner loop. Your common audio lies at the offset with the minimum total sum value, that is, the offset at which you encountered bestSum.
do not start coding until you have gained intuition that above makes perfect sense
I highly encourage you to plot out the curve of the raw audio for one file to confirm you are accessing this sequence of integers ... do this before attempting above algorithm
it will help to visualize above by viewing each input source audio as a curve and you simply keep one curve steady as you slide the other audio curve left or right until you see the curve shapes match or get very close to matching
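A runnable sketch of that sliding-offset idea: at each offset, sum the absolute sample differences between the two curves, and keep the offset with the smallest sum. Toy sine signals stand in for real audio samples:

```python
import numpy as np

# Slide one signal over the other; the offset with the smallest total
# absolute difference is where the two raw audio curves line up best.
def best_offset(a, b, max_shift):
    best_sum, best_off = float("inf"), 0
    for off in range(-max_shift, max_shift + 1):
        if off >= 0:
            diff = a[off:off + len(b)] - b[:len(a) - off]
        else:
            diff = b[-off:-off + len(a)] - a[:len(b) + off]
        current = np.abs(diff).sum()
        if current < best_sum:
            best_sum, best_off = current, off
    return best_off

sig = np.sin(np.linspace(0, 20, 200))
shifted = np.roll(sig, 15)    # the same curve, delayed by 15 samples
print(best_offset(shifted, sig, 30))
```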
Upvotes: 2 |
2018/03/21 | 878 | 2,922 | <issue_start>username_0: I am a beginner exploring Scala. The following is a Scala function.
```
def printArray[K](array:Array[K]) = array.mkString("Array(" , ", " , ")")
val array2 = Array("a", 2, true)
printArray(array2)
```
The output is
Array(a, 2, true)
**My doubts**
Here we have given the array type as K. What does K mean? Does it mean all types?
How is the function `mkString` able to give the output Array(a, 2, true)?
Basically, I don't understand the concatenation part.
Appreciate your help.<issue_comment>username_1: If you look at your `array2` definition in the *REPL*, you will see that *array2 is of type `Array[Any]`*, where *`Any` is the parent type of all the other types in Scala*:
```
scala> val array2 = Array("a", 2, true)
//array2: Array[Any] = Array(a, 2, true)
```
So when you call the function `def printArray[K](array:Array[K]) = array.mkString("Array(" , ", " , ")")`, `K` is now treated as `Any`, and it returns a string with the *initial string* `Array(`, the *ending string* `)`, and *all the values separated* by `,`.
>
>
> >
> > `def mkString(start: String, sep: String, end: String): String =
> > addString(new StringBuilder(), start, sep, end).toString`
> >
> >
> >
>
>
>
Upvotes: 0 <issue_comment>username_2: `K` is not a concrete type here; it is a type parameter. For more intuition, have a look at this other question: [Type parameter in scala](https://stackoverflow.com/questions/39283549/type-parameter-in-scala)
---
In this specific example `K` is inferred to be `Any` - the most specific type that satisfies all 3 values `"a"`, `2` and `true`
```
val array2: Array[Any] = Array("a", 2, true)
```
---
The `mkString` function joins all the items of a collection into a single string. It adds a separator between items, plus given strings at the beginning and end. Documentation: [mkString](https://www.scala-lang.org/api/current/scala/Array.html#mkString(start:String,sep:String,end:String):String)
Upvotes: 2 <issue_comment>username_3: The `mkString` method called as
```
arr.mkString(prefix, separator, suffix)
```
will invoke `toString` on all array elements, prepend the `prefix`, then concatenate all strings separating them by the `separator`, and finally append the `suffix`.
The type parameter `K` in `printArray[K]` is effectively ignored; it could be replaced by an existential. It's just a method with a bad name and a confusing signature.
When you store any primitive data types (like `Int`) together with types that extend `AnyRef` (like `String`) into the same array, the least upper bound is inferred to be `Any`, so in
```
printArray(array2)
```
the `K` is set to `Any`, and the `mkString` works as described above, gluing together
```
Array( prefix
a "a".toString
, separator
2 2.toString
, separator
true true.toString
) suffix
```
yielding the string `Array(a,2,true)`.
Upvotes: 4 [selected_answer] |
2018/03/21 | 1,906 | 5,523 | <issue_start>username_0: I am asking a kind of generalisation of this question:
[Best way to extract elements from nested lists](https://stackoverflow.com/questions/27029300/best-way-to-extract-elements-from-nested-lists).
It is also somehow related to these questions:
[recursive function for extract elements from deep nested lists/tuples](https://stackoverflow.com/questions/49247894/recursive-function-for-extract-elements-from-deep-nested-lists-tuples)
[Scheme - find most deeply values nested lists](https://stackoverflow.com/questions/8389649/scheme-find-most-deeply-values-nested-lists)
[Scheme - find most deeply nested lists](https://stackoverflow.com/questions/8355428/scheme-find-most-deeply-nested-lists/8360899#8360899)
Essentially, I have some arbitrary nested list structure, where at the bottom there are various numpy arrays that are all the same shape. I want to iterate or slice all these bottom-level arrays whilst preserving the nested list structure in which they live. It is this part about preserving the nested structure in the output which doesn't seem to be answered in these other questions.
So, for example:
```
import numpy as np

A = np.ones((3,3,3))
nest = [[A,A,A],[[A,A,[A,A]],[A,A]],A]
```
and we want, schematically,
```
nest[0,...] == [[A[0,...],A[0,...],A[0,...]],[[A[0,...],A[0,...],[A[0,...],A[0,...]]],[A[0,...],A[0,...]]],A[0,...]]
```
or
```
nest[1:3,5,:] == [[A[1:3,5,:],A[1:3,5,:],A[1:3,5,:]],[[A[1:3,5,:],A[1:3,5,:],[A[1:3,5,:],A[1:3,5,:]]],[A[1:3,5,:],A[1:3,5,:]]],A[1:3,5,:]]
```
I'm sure some clever recursive function or something can do this, my brain is just not coming up with it right now...
I guess it would also be best if this returned views onto the bottom-level arrays rather than copies of parts of them.
edit: Perhaps something like this can work: <https://stackoverflow.com/a/43357135/1447953>. That method would require that numpy slicing operations be converted into functions somehow, which I guess can be done on a case-by-case basis, so perhaps this is the way to go.<issue_comment>username_1: maybe a generator like:
```
import numpy as np

def get_nested_elements(elements):
    # Treat an empty sequence, or one whose first element is a numpy
    # number, as a "bottom level" array and yield it whole; otherwise
    # recurse into each node.
    if not elements or isinstance(elements[0], np.number):
        yield elements
    else:
        for node in elements:
            for e in get_nested_elements(node):
                yield e
```
will return the ndarray if the first element is of type number.
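A quick way to see what this generator does is to run the same recursion on plain values instead of numpy arrays. The sketch below is an adaptation, not the answer's exact code: the leaf test is simplified to "anything that is not a list", so it runs without numpy:

```python
def flatten_nested(elements):
    # Same recursion as get_nested_elements above, but any non-list
    # is treated as a "bottom level" element and yielded as-is.
    if not isinstance(elements, list):
        yield elements
    else:
        for node in elements:
            yield from flatten_nested(node)

nest = [[1, 2], [[3], [4, 5]], 6]
print(list(flatten_nested(nest)))  # [1, 2, 3, 4, 5, 6]
```

Note that, like the generator above, this flattens the structure; it does not by itself preserve the nesting that the question ultimately asks for.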
Upvotes: 2 <issue_comment>username_2: A quick first attempt:
```
def recurse(f, alist):
    def foo(f, item):
        if isinstance(item, list):
            return recurse(f, item)
        else:
            return f(item)
    return [foo(f, item) for item in alist]
```
test case:
```
In [1]: A = np.arange(10)
In [2]: alist = [[A,A],[[A],[A,A]],A]
In [6]: recurse(lambda A: A[3], alist)
Out[6]: [[3, 3], [[3], [3, 3]], 3]
In [7]: recurse(lambda A: A[:3], alist)
Out[7]:
[[array([0, 1, 2]), array([0, 1, 2])],
[[array([0, 1, 2])], [array([0, 1, 2]), array([0, 1, 2])]],
array([0, 1, 2])]
```
not limited to indexing:
```
In [10]: recurse(lambda A: A.sum(), alist)
Out[10]: [[45, 45], [[45], [45, 45]], 45]
```
Upvotes: 1 <issue_comment>username_3: Appreciate the help! I came up with the following voodoo to do what I wanted, partly based on <https://stackoverflow.com/a/43357135/1447953>. I also generalised it to allow binary (and higher) operations with mirrored nested structures. It's basically a generalisation of 'map' to apply over nested structures. Could use better checking of the structure as it goes, for example it doesn't make sure that the lengths match, though I guess 'map' will throw an error if this is the case.
```
def apply_f(f, *iters):
    """Apply some function to matching 'bottom level' objects
    in a mirrored nested structure of lists, and
    return the result in the same nested list structure.

    'iters' should be lists which have identical
    nested structure, and whose bottom level elements can
    be used as arguments to 'f'.
    """
    # We have to descend all list structures in lock-step!
    if all(isinstance(item, list) for item in iters):
        return list(map(lambda *items: apply_f(f, *items), *iters))
    elif any(isinstance(item, list) for item in iters):
        raise ValueError("Inconsistency in nested list structure of arguments detected! "
                         "Nested structures must be identical in order to apply functions over them")
    else:
        return f(*iters)
```
It can be used like this:
```
import numpy as np

A = np.ones((2,2))
a = [A, [A, A], A, [[A, A], A]]

def my_sum(a, b, c):
    return a + b + c
b = apply_f(my_sum,a,a,a)
print(a)
print(b)
# Check structure
print(apply_f(lambda A: A.shape, a))
print(apply_f(lambda A: A.shape, b))
```
Output:
```
[array([[1., 1.],
[1., 1.]]), [array([[1., 1.],
[1., 1.]]), array([[1., 1.],
[1., 1.]])], array([[1., 1.],
[1., 1.]]), [[array([[1., 1.],
[1., 1.]]), array([[1., 1.],
[1., 1.]])], array([[1., 1.],
[1., 1.]])]]
[array([[3., 3.],
[3., 3.]]), [array([[3., 3.],
[3., 3.]]), array([[3., 3.],
[3., 3.]])], array([[3., 3.],
[3., 3.]]), [[array([[3., 3.],
[3., 3.]]), array([[3., 3.],
[3., 3.]])], array([[3., 3.],
[3., 3.]])]]
[(2, 2), [(2, 2), (2, 2)], (2, 2), [[(2, 2), (2, 2)], (2, 2)]]
[(2, 2), [(2, 2), (2, 2)], (2, 2), [[(2, 2), (2, 2)], (2, 2)]]
```
Of course then the applied function can do whatever one likes to the bottom level arrays, whether it be extract elements, slice them, add them together, whatever.
Upvotes: 0 |
2018/03/21 | 543 | 1,704 | <issue_start>username_0: I am working on a large C++ project. I have a bunch of macros that are used in a subset of the project, but I don't want to export them everywhere, to avoid contaminating unrelated code.
Right now, many of my files look like
```
#include // defines MACRO_1, MACRO_2, ... MACRO_N
// bla
// bla
#undef MACRO_1
#undef MACRO_2
// ...
#undef MACRO_N
```
Of course, this is going to break the day I introduce `MACRO_N+1`.
Is there a better way to somehow undefine all these macros at once?
For example, in undefMacros.h:
```
#undef MACRO_1
#undef MACRO_2
// ...
#undef MACRO_N
```
Then in your source files:
```
#include // defines MACRO_1, MACRO_2, ... MACRO_N
// bla
// bla
#include "undefMacros.h"
```
Then you only need to maintain the `#undef` statements in one place, in addition to where the macros are originally defined.
Upvotes: 2 <issue_comment>username_2: A variant on [username_1's answer](https://stackoverflow.com/a/49409544/2096401) would be to combine the definitions and clearing of the macros in one file:
**`MyMacros.h`**
```
#ifndef MACRO_1
# define MACRO_1 ...
# define MACRO_2 ...
...
# define MACRO_N ...
//! Remember to add #undef if new macros are added
#else
# undef MACRO_1
# undef MACRO_2
...
# undef MACRO_N
#endif
```
and then simply include this file at the top and bottom of where you use it:
```
#include "MyMacros.h"
...
...
...
#include "MyMacros.h"
```
Keeping everything in one file should make it easier to remember to add an `#undef` if you add a new macro.
Upvotes: 1 |
2018/03/21 | 599 | 2,097 | <issue_start>username_0: [](https://i.stack.imgur.com/jVZlc.png)
```
firebaseAuth = FirebaseAuth.getInstance();
mDatabase = FirebaseDatabase.getInstance();
mDb = mDatabase.getReference();
FirebaseUser user = firebaseAuth.getCurrentUser();
userKey = user.getUid();
mDb.child(userKey).addValueEventListener(new ValueEventListener() {
@Override
public void onDataChange(DataSnapshot dataSnapshot) {
Log.d(TAG, "Name: " + dataSnapshot.child("user_id").getValue());
userID = String.valueOf(dataSnapshot.child("user_id").getValue());
}
@Override
public void onCancelled(DatabaseError databaseError) {
}
});
```
I do not know what is wrong with my code; I want to retrieve the value of the user id.
2018/03/21 | 554 | 1,771 | <issue_start>username_0: Problem:
I'd like to be able to count the number of lines in a Google Document. For example, the script must return 6 for the following text.
[](https://i.stack.imgur.com/zgg2m.png)
There doesn't seem to be any reliable method of extracting '\n' or '\r' characters from the text though.
```
text.findText(/\r/g) //OR
text.findText(/\n/g)
```
The 2nd line of code is not supposed to work anyway, because according to GAS documentation, 'new line characters are automatically converted to \r'.
2018/03/21 | 1,630 | 4,936 | <issue_start>I get `Execution failed for task ':ChatAppGluonApp:applyRetrobuffer'.` trying to create an Android APK. I'm using Eclipse Oxygen (4.7.3RC2) and Windows 7. The same error comes up with macOS 10.13.1 and Eclipse Oxygen (4.7.2). On both operating systems I'm using Android 26 with Build Tools 26.0.2.
I tried running using Java 8 and 9, but the outcome was the same. The program works fine on Desktop.
[Stacktrace](https://gist.github.com/powerworr/7db5e22c2b9ec5ff88a95bee53290264)
```
buildscript {
repositories {
jcenter()
google()
maven{
url 'http://nexus.gluonhq.com/nexus/content/repositories/releases'
}
}
dependencies {
classpath 'org.javafxports:jfxmobile-plugin:2.0.17'
}
}
apply plugin: 'org.javafxports.jfxmobile'
repositories {
jcenter()
maven {
url 'http://nexus.gluonhq.com/nexus/content/repositories/releases'
}
}
mainClassName = 'de.....ChatApplication'
sourceCompatibility = 1.8
targetCompatibility = 1.8
dependencies {
compile 'com.gluonhq:charm:4.4.0-jdk9'
androidRuntime 'com.gluonhq:charm:4.4.1'
compile 'com.airhacks:afterburner.mfx:1.6.3'
compile files('libs/chatFx.jar')
//compile files('libs/chatFxTest.jar')
compile files('libs/miglayout-core-5.0.jar')
compile files('libs/miglayout-javafx-5.0.jar')
// https://mvnrepository.com/artifact/org.slf4j/slf4j-api
compile group: 'org.slf4j', name: 'slf4j-api', version: '1.7.25'
// https://mvnrepository.com/artifact/com.google.code.findbugs/jsr305
compile group: 'com.google.code.findbugs', name: 'jsr305', version: '3.0.2'
// https://mvnrepository.com/artifact/javax.xml.ws/jaxws-api
compile group: 'javax.xml.ws', name: 'jaxws-api', version: '2.2.6'
}
jfxmobile {
downConfig {
version = '3.7.0'
// Do not edit the line below. Use Gluon Mobile Settings in your project context menu instead
plugins 'display', 'lifecycle', 'statusbar', 'storage'
}
android {
compileSdkVersion = '26'
buildToolsVersion = '26.0.2'
manifest = 'src/android/AndroidManifest.xml'
}
ios {
infoPList = file('src/ios/Default-Info.plist')
forceLinkClasses = [
'com.gluonhq.**.*',
'javax.annotations.**.*',
'javax.inject.**.*',
'javax.json.**.*',
'org.glassfish.json.**.*'
]
}
}
```<issue_comment>username_1: Based on your exception for the `applyRetrobuffer` task:
```
15:26:29.974 [ERROR] [system.err] java.lang.IllegalArgumentException
15:26:29.974 [ERROR] [system.err] at org.objectweb.asm.ClassReader.(Unknown Source)
15:26:29.974 [ERROR] [system.err] at org.objectweb.asm.ClassReader.(Unknown Source)
15:26:29.974 [ERROR] [system.err] at org.javafxports.retrobuffer.ClassAnalyzer.analyze(ClassAnalyzer.java:48)
15:26:29.974 [ERROR] [system.err] at org.javafxports.retrobuffer.Retrobuffer$1.visitClass(Retrobuffer.java:59)
15:26:29.974 [ERROR] [system.err] at org.javafxports.retrobuffer.ClasspathVisitor.visitFile(ClasspathVisitor.java:59)
15:26:29.974 [ERROR] [system.err] at org.javafxports.retrobuffer.ClasspathVisitor.visitFile(ClasspathVisitor.java:41)
15:26:29.974 [ERROR] [system.err] at java.base/java.nio.file.Files.walkFileTree(Files.java:2713)
15:26:29.974 [ERROR] [system.err] at java.base/java.nio.file.Files.walkFileTree(Files.java:2785)
15:26:29.974 [ERROR] [system.err] at org.javafxports.retrobuffer.Retrobuffer.run(Retrobuffer.java:56)
15:26:29.975 [ERROR] [system.err] at org.javafxports.retrobuffer.Main.main(Main.java:45)
```
Android doesn't support Java 9, so when using jfxmobile 2.0.+ all your Android dependencies have to be compiled with Java 8 target.
In your case, the exception indicates that there's still at least one class on the classpath that has been compiled with java 9 or higher as the target.
There is already a filed [issue](https://github.com/javafxports/javafxmobile-plugin/issues/32) to show more information about the class that produces the failure.
I can't say about your local dependencies, but the rest work fine for me. As for `miglayout-*-5.0`, unless you have compiled a local version, it is from 2014.
Anyway, just make sure that you add this to any local dependency you build yourself with Java 9:
```
sourceCompatibility = 1.8
targetCompatibility = 1.8
```
As an aside, when you manage to apply the retrobuffer task successfully, you will have some conflicts with duplicated files from dependencies, like `META-INF/LICENSE.txt` being in different jars. Use `packagingOptions`, like in this [link](https://stackoverflow.com/a/42835652/3956070).
Upvotes: 3 [selected_answer]<issue_comment>username_2: jfxmobile 2.x brings support for Java 9, BUT for mobile you can target iOS only, since there is NO current Android version that supports the Java 9 API yet.
Upvotes: 0 |
2018/03/21 | 949 | 2,851 | <issue_start>I am new to VBA. I have a string **I\_HEAD\_FOR**
I want to extract the substring that starts and ends with the special character '\_' (underscore) in VBA.
I need a code snippet which can do the above task. Could you please help me with it?
In this case, the code should extract the substring **HEAD**.
---
So the idea is to go from:
**I\_HEAD\_FOR**
to get as a result: **HEAD**
2018/03/21 | 1,329 | 4,174 | <issue_start>We are looking to return the rows of a query in groups, displaying all entries of each group in sort order: the group order is random, based on the `set_id`, and within each group rows are ordered by the `sort_id`.
So, randomly it will show:
>
> Carl,
> Phil,
> Wendy,
> Tina,
> Rick,
> Joe
>
>
>
or
>
> Tina,
> Rick,
> Joe,
> Carl,
> Phil,
> Wendy
>
>
>
This query is always showing Tina/Rick/Joe first
```
SELECT * FROM products ORDER BY set_id, rand()
```
Any help would be appreciated
```
+---------+--------+-------+----------+
| id | set_id | name | sort_id |
+---------+--------+-------+----------+
| 1 | AA |Rick | 2 |
| 2 | BB |Carl | 1 |
| 3 | AA |Joe | 3 |
| 4 | AA |Tina | 1 |
| 5 | BB |Phil | 2 |
| 6 | BB |Wendy | 3 |
+---------+--------+-------+----------+
```<issue_comment>username_1: If we strip away the randomness of the group ordering, your query would look like this:
```
SELECT
*
FROM
products
ORDER BY
set_id,
sort_id;
```
The ordering by `set_id` is necessary to "group" the results, without really grouping them. You do not want to group them, because then the rows of the same group would be aggregated, meaning that only one row per group would be put out.
Since you only want to randomize the groups, you need to write another query that assigns a random number to each group, like the one below:
```
SELECT
set_id,
RAND() as 'rnd'
FROM
products
GROUP BY
set_id
```
The `GROUP BY` clause makes sure that each group is only selected once. The result set will look like this:

| set_id | rnd  |
|--------|------|
| AA     | 0.21 |
| BB     | 0.1  |
With that result we can then randomize the output, by combining both queries with a `JOIN` on the `set_id` field. This will add the randomly generated number from the second query to the result set of the first query and therefore extend the static `set_id` with the randomized, but still for all group members equal, `rnd`:
```
SELECT
products.*
FROM
products
JOIN (
SELECT
set_id,
RAND() as 'rnd'
FROM
products
GROUP BY
set_id
) as rnd ON rnd.set_id = products.set_id
ORDER BY
rnd.rnd,
products.set_id,
products.sort_id;
```
Keep in mind that it is important to still order by `products.set_id`, because it is possible for two groups to be assigned the same random number. If the result were not also ordered by `products.set_id`, those groups' members would then be merged.
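The behaviour of this join-on-a-random-per-group-value pattern can be checked outside MySQL too. Below is a small, self-contained sketch using Python's `sqlite3`, with SQLite's `RANDOM()` standing in for MySQL's `RAND()` and the sample table rebuilt in memory (both are assumptions for illustration, not part of the answer's MySQL setup):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE products (id INT, set_id TEXT, name TEXT, sort_id INT);
    INSERT INTO products VALUES
        (1, 'AA', 'Rick', 2), (2, 'BB', 'Carl', 1), (3, 'AA', 'Joe',   3),
        (4, 'AA', 'Tina', 1), (5, 'BB', 'Phil', 2), (6, 'BB', 'Wendy', 3);
""")

# One random number per group, joined back onto every member of that
# group, then ordered by that number first and by sort_id within it.
rows = con.execute("""
    SELECT products.name
    FROM products
    JOIN (SELECT set_id, RANDOM() AS rnd
          FROM products
          GROUP BY set_id) AS r
      ON r.set_id = products.set_id
    ORDER BY r.rnd, products.set_id, products.sort_id
""").fetchall()

names = [name for (name,) in rows]
print(names)
```

Whichever group happens to draw the smaller random number comes first, but each run keeps both groups contiguous and internally ordered by `sort_id`.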
Upvotes: 0 <issue_comment>username_2: If you need a random comma-separated name list, this will do the trick.
This will keep the groups and the correct sorting within the group.
**Query**
```
SELECT
GROUP_CONCAT(Table_names_rand.names) as names
FROM (
SELECT
*
FROM (
SELECT
GROUP_CONCAT(name ORDER BY sort_id) as names
FROM
Table1
GROUP BY
set_id
)
AS Table1_names
ORDER BY
RAND()
)
AS Table_names_rand
```
**Result**
```
| names |
|-------------------------------|
| Carl,Phil,Wendy,Tina,Rick,Joe |
```
or
```
| names |
|-------------------------------|
| Tina,Rick,Joe,Carl,Phil,Wendy |
```
demo <http://www.sqlfiddle.com/#!9/487ac9/9>
If you need the random names as record output:
**Query**
```
SELECT
Table1.name
FROM
Table1
CROSS JOIN (
SELECT
GROUP_CONCAT(Table_names_rand.names) as names
FROM (
SELECT
*
FROM (
SELECT
GROUP_CONCAT(name ORDER BY sort_id) as names
FROM
Table1
GROUP BY
set_id
)
AS Table1_names
ORDER BY
RAND()
)
AS Table_names_rand
)
AS Table_names_rand
ORDER BY
FIND_IN_SET(name, Table_names_rand.names)
```
**Result**
```
| name |
|-------|
| Carl |
| Phil |
| Wendy |
| Tina |
| Rick |
| Joe |
```
or
```
| name |
|-------|
| Tina |
| Rick |
| Joe |
| Carl |
| Phil |
| Wendy |
```
demo <http://www.sqlfiddle.com/#!9/487ac9/28>
Upvotes: 1 |
2018/03/21 | 2,204 | 7,549 | <issue_start>username_0: In my client I receive via ZeroMQ a lot of input, which needs to be constantly updated. My server is written in python, but that should not matter. So this is what I do in my `MainActivity`:
```
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
/********************************NETWORK********************************/
new NetworkCall().execute("");
}
private class NetworkCall extends AsyncTask {
@Override
protected String doInBackground(String... params) {
while (true) {
try {
ZMQ.Context context = ZMQ.context(1);
// Connect to server
ZMQ.Socket requester = context.socket(ZMQ.REQ);
String address = "tcp://xxx.xx.xx.xx";
int port = 5000;
requester.connect(address + ":" + port);
// Initialize poll set
ZMQ.Poller poller = new ZMQ.Poller(1);
poller.register(requester, ZMQ.Poller.POLLIN);
requester.send("COORDINATES");
//while (true) {
String data;
poller.poll();
data = requester.recvStr();
System.out.println(data);
if (data == null) {
try {
sleep(100);
} catch (InterruptedException e) {
e.printStackTrace();
}
} requester.close();
} catch (IllegalStateException ise) {
ise.printStackTrace();
}
}
}
@Override
protected void onPostExecute(String result) {
}
@Override
protected void onPreExecute() {
}
@Override
protected void onProgressUpdate(Void... values) {
}
}
}
```
After executing this code on my device, I'll get like 5-9 input data strings, which I receive from the server, but then the following exception appears:
```
E/AndroidRuntime: FATAL EXCEPTION: AsyncTask #2
Process: com.example.viktoria.gazefocus, PID: 31339
java.lang.RuntimeException: An error occurred while executing doInBackground()
at android.os.AsyncTask$3.done(AsyncTask.java:353)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:383)
at java.util.concurrent.FutureTask.setException(FutureTask.java:252)
at java.util.concurrent.FutureTask.run(FutureTask.java:271)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:245)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
at java.lang.Thread.run(Thread.java:764)
Caused by: com.example.viktoria.gazefocus.zmq.ZError$IOException: java.io.IOException: Too many open files
at com.example.viktoria.gazefocus.zmq.Signaler.makeFdPair(Signaler.java:94)
at com.example.viktoria.gazefocus.zmq.Signaler.(Signaler.java:50)
at com.example.viktoria.gazefocus.zmq.Mailbox.(Mailbox.java:51)
at com.example.viktoria.gazefocus.zmq.Ctx.(Ctx.java:128)
at com.example.viktoria.gazefocus.zmq.ZMQ.zmq\_ctx\_new(ZMQ.java:244)
at com.example.viktoria.gazefocus.zmq.ZMQ.zmqInit(ZMQ.java:277)
at org.zeromq.ZMQ$Context.(ZMQ.java:269)
at org.zeromq.ZMQ.context(ZMQ.java:254)
at com.example.viktoria.gazefocus.MainActivity$NetworkCall.doInBackground(MainActivity.java:73)
at com.example.viktoria.gazefocus.MainActivity$NetworkCall.doInBackground(MainActivity.java:67)
at android.os.AsyncTask$2.call(AsyncTask.java:333)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:245)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
at java.lang.Thread.run(Thread.java:764)
Caused by: java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.makePipe(Native Method)
at sun.nio.ch.PipeImpl.(PipeImpl.java:42)
at sun.nio.ch.SelectorProviderImpl.openPipe(SelectorProviderImpl.java:50)
at java.nio.channels.Pipe.open(Pipe.java:155)
at com.example.viktoria.gazefocus.zmq.Signaler.makeFdPair(Signaler.java:91)
at com.example.viktoria.gazefocus.zmq.Signaler.(Signaler.java:50)
at com.example.viktoria.gazefocus.zmq.Mailbox.(Mailbox.java:51)
at com.example.viktoria.gazefocus.zmq.Ctx.(Ctx.java:128)
at com.example.viktoria.gazefocus.zmq.ZMQ.zmq\_ctx\_new(ZMQ.java:244)
at com.example.viktoria.gazefocus.zmq.ZMQ.zmqInit(ZMQ.java:277)
at org.zeromq.ZMQ$Context.(ZMQ.java:269)
at org.zeromq.ZMQ.context(ZMQ.java:254)
at com.example.viktoria.gazefocus.MainActivity$NetworkCall.doInBackground(MainActivity.java:73)
at com.example.viktoria.gazefocus.MainActivity$NetworkCall.doInBackground(MainActivity.java:67)
at android.os.AsyncTask$2.call(AsyncTask.java:333)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:245)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
at java.lang.Thread.run(Thread.java:764)
```
Apparently too many files are open. After some research (I'm using Ubuntu 16.04), I changed the `ulimit` with `ulimit -n 10000`, but the exception still happens. Sometimes I get more input data, sometimes less. Also, if I put something like `Executor executor = Executors.newFixedThreadPool(5);` into the `onCreate()` method, nothing changes.
How to overcome this issue?
Thanks for reading!<issue_comment>username_1: You have a leak because you're not closing / ending / freeing something. I *think* that the context has to be terminated: `context.term()` after you close the requester...
Upvotes: 2 [selected_answer]<issue_comment>username_2: Well, in [distributed-system](/questions/tagged/distributed-system "show questions tagged 'distributed-system'") design, the infrastructure for signalling / messaging setup costs are not negligible. Some use-cases are more foregiving, some less.
Getting always a new **`Context()`** instance per each method-call and throwing it right away soon after by a clean-up call to it's **`.term()`\***-method is for sure better than having a hung-app or a frozen device, yet it is far from a fair design, respecting the process latencies and an "ecology"-of-resources.
Better first setup a semi-persistent infrastructure of resources ( each `Context()`-instance is typically a very expensive toy to instantiate ( API 4.2+ as of 2018-Q1 ), not so sharp for the `Socket()`-instances, but similar to the `Poller()` and all it's internal-AccessPoint(s) registration-hooks, yet the principle may extend on 'em too ).
Early re-factoring of the code will help not to extend the case with using expensive resources as a "consumable disposable".
The section:
```
while (true) {
try {
ZMQ.Context context = ZMQ.context(1);
// Connect to server
ZMQ.Socket requester = context.socket( ZMQ.REQ );
String address = "tcp://xxx.xx.xx.xx";
int port = 5000;
requester.connect( address + ":" + port );
...
}
...
}
```
is exactly a resources-devastating anti-pattern, altogether with repetitive latencies and even risks of remote-hangups and remote-rejections and similar issues.
Upvotes: 0 |
2018/03/21 | 864 | 2,700 | <issue_start>I have a SQLAlchemy model and a pandas dataframe with a few records which are supposed to be loaded into the table represented by that SQLAlchemy model. But before loading, I need to check whether all rows in the dataframe satisfy the 'UniqueConstraint'.
My Model and dataframe are as follows:
*Model:*
```
class Flight(Base):
    __tablename__ = 'flight'

    flight_id = Column(Integer)
    from_location = Column(String)
    to_location = Column(String)
    schedule = Column(String)

    __table_args__ = (UniqueConstraint('flight_id', 'schedule', name='flight_schedule'),)
```
*Dataframe:*
```
flight_id | from_location | to_location | schedule |
1 | Vancouver | Toronto | 3-Jan |
2 | Amsterdam | Tokyo | 15-Feb |
4 | Fairbanks | Glasgow | 12-Jan |
9 | Halmstad | Athens | 21-Jan |
3 | Brisbane | Lisbon | 4-Feb |
4 | Johannesburg | Venice | 12-Jan |
```
In this case, the checker function should return false, as the 3rd & 6th records in the dataframe violate the unique constraint (the same flight can't be scheduled for 2 different routes at the same time). Any hints/solutions on how to do it?<issue_comment>username_1: IIUC `duplicated`
```
df.duplicated('flight_id',keep=False)
Out[473]:
0 False
1 False
2 True
3 False
4 False
5 True
dtype: bool
```
Or using `groupby`
```
df.groupby('flight_id').transform('nunique').gt(1).any(1)
Out[482]:
0 False
1 False
2 True
3 False
4 False
5 True
dtype: bool
```
Upvotes: 0 <issue_comment>username_2: I think you need [`DataFrame.duplicated`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html) to check dupes per the specified columns, with [`any`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.any.html) to check for at least one `True`:
```
print (df.duplicated(['flight_id', 'schedule']).any())
True
```
**Detail**:
```
print (df.duplicated(['flight_id', 'schedule']))
0 False
1 False
2 False
3 False
4 False
5 True
dtype: bool
```
And if you need to filter the problematic rows, use [`boolean indexing`](http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing) with the parameter `keep=False` to return all dupes:
```
print (df[df.duplicated(['flight_id', 'schedule'], keep=False)])
flight_id from_location to_location schedule
2 4 Fairbanks Glasgow 12-Jan
5 4 Johannesburg Venice 12-Jan
```
**Detail**:
```
print (df.duplicated(['flight_id', 'schedule'], keep=False))
0 False
1 False
2 True
3 False
4 False
5 True
dtype: bool
```
Upvotes: 1 |
2018/03/21 | 382 | 1,542 | <issue_start>I am working on an Android project with my friend. He uses Windows while I am an Ubuntu user. This is the process that I followed to collaborate with him.
He created a new project and uploaded it to GitHub. I forked his repository and cloned that repo on my Ubuntu machine. Then I opened Android Studio and imported the project. Next, I set up origin as my repo and upstream as his repo using the console inside Android Studio. The project is working fine, but as soon as I open any Java file in the project, the compiler goes insane. The compiler does not recognize the AppCompatActivity class or any other library class.
How do I fix this problem? And is this problem due to the change in OS or a change in .gitignore?
The .gitignore from my friend's project looks like the following:
```
.gradle
/local.properties
/.idea/workspace.xml
/.idea/libraries
.DS_Store
/build
```<issue_comment>username_1: Seems like you didn't put build folder in your `.gitignore` file.
Normally you should also put these in your `.gitignore` as well:
```
.idea
*.iml
.gradle
/local.properties
/.idea/workspace.xml
/.idea/libraries
.DS_Store
/build
/captures
.externalNativeBuild
```
Upvotes: 1 <issue_comment>username_2: Maybe you don't have the same SDKs and build tools. Install the required build tools, platform tools and SDKs.
Try a clean build or delete the build folder from the project. And remove local.properties, which may be pointing to your friend's SDK path, and also the .gradle and .idea folders.
Also check your JDK version.
Upvotes: 0 |
2018/03/21 | 957 | 3,003 | <issue_start>username_0: I was wondering how the string of `HTTP_ACCEPT_LANGUAGE` is determined.
If a user has the following string:
`"HTTP_ACCEPT_LANGUAGE" => "en-US,en;q=0.9,he;q=0.8"`
* how are these comma separated parameters determined (OS, browser, IP->geo?)
* what do those params mean?
Side question: is there any part of an HTTP request from which to get the OS language?
I went over Google, but I couldn't understand the `q=n` quality value. Please, please don't just copy Google or PHP.net; I can search and read too, and I would like to understand and make the most of an HTTP request.
Thanks,
Bud<issue_comment>username_1: The string is determined by the client software. The client can request whatever languages it wants, with whatever priorities it wants. The server is not obliged to comply.
The rules that describe the format of the string can be found [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Language).
The [quality value](https://developer.mozilla.org/en-US/docs/Glossary/Quality_values) denotes priority, with higher values sorting first. By your example, servers answering this request should attempt to provide English as a first choice, then Hebrew.
You probably don't need to parse this yourself. You can likely use something like [locale\_accept\_from\_http()](http://php.net/manual/en/locale.acceptfromhttp.php).
Upvotes: 1 <issue_comment>username_2: Generally speaking, `Accept-Language` can be used in one of the following forms:
1. `Accept-Language: *`, which means you are ready to accept any language.
2. `Accept-Language: <language>`, where `<language>` stands for the language code, usually 2 letters.
3. `Accept-Language: <language>-<locale>`, where `<language>-<locale>` can be seen as an extended form of `<language>`, e.g. `en-US`,
4. A compound form, you have already given an example: `"en-US,en;q=0.9,he;q=0.8"`.
>
> how are these comma separated parameters determined (OS, browser, IP->geo?)
>
>
>
The default value is retrieved from OS language settings by the browser.
And users can manually set preferred language(s) from browser's settings panel. If multiple languages are set, their order being the order in `Accept-Language` as you see.
>
> what do those params mean?
>
>
>
They are a list of `<language>;q=<quality>` pairs, separated by `,`.
Default quality is 1, so your `Accept-Language` value actually is:
`en-US;q=1,en;q=0.9,he;q=0.8,rest of languages...`
>
> Is there any part of an HTTP request to get the OS language?
>
>
>
No.
>
> I couldn't understand the q=n quality value.
>
>
>
As explained.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Adding to @username_2's answer re the 'q' values, looking at [RFC7231 section-5.3.5](https://www.rfc-editor.org/rfc/rfc7231#section-5.3.5) (which Obsoletes [RFC 2616](https://www.ietf.org/rfc/rfc2616.txt)) it seems that is not quite correct.
It should be `<language>;q=<value>, <language>;q=<value>, ...` such that an example string
`en,en-US,en-AU;q=0.8,fr;q=0.6,en-GB;q=0.4`
ends up as
* `en;q=1`
* `en-US;q=1`
* `en-AU;q=0.8`
* `fr;q=0.6`
* `en-GB;q=0.4`
Upvotes: 2 |
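To make the `q` handling concrete, here is a rough Python sketch of the parse-and-sort logic described in these answers (illustrative only, not a full RFC 7231 parser; real code should use a proper library such as `locale_accept_from_http()` mentioned above):

```python
def parse_accept_language(header):
    """Split an Accept-Language header into (language, quality) pairs,
    sorted by quality, highest first. A missing q defaults to 1."""
    pairs = []
    for part in header.split(","):
        lang, _, q = part.strip().partition(";q=")
        pairs.append((lang, float(q) if q else 1.0))
    # sorted() is stable, so equal-quality entries keep their original order
    return sorted(pairs, key=lambda p: p[1], reverse=True)

print(parse_accept_language("en,en-US,en-AU;q=0.8,fr;q=0.6,en-GB;q=0.4"))
# [('en', 1.0), ('en-US', 1.0), ('en-AU', 0.8), ('fr', 0.6), ('en-GB', 0.4)]
```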
2018/03/21 | 926 | 2,896 | <issue_start>username_0: This is the modal:
```
{% csrf\_token %}
*place*
#### Test
*date\_range*
{{ agresionForm.fecha }}
Insertar
Cancelar
```
I open it with:
```
$("#modalInsertAgression").show();
```
It does not work on iPhone: when the modal appears, the background stays black and the modal loses focus; you cannot click the modal or the buttons.
**Edit: The modal only works with `$("#modalInsertAgression").modal({backdrop: false});` but I do not want to lose the black background...**
2018/03/21 | 405 | 1,400 | <issue_start>username_0: We're looking for a way to increase the padding (or margins) for a `QListWidget` we are using in our application. We'd like to increase this for all four directions to give the text in our list some extra space
I've looked at the documentation for both [`QListWidget`](http://doc.qt.io/qt-5/qlistwidget-members.html) and [`QListWidgetItem`](http://doc.qt.io/qt-5/qlistwidgetitem-members.html) and can't find anything. For `QListWidget` there's `setContentsMargins` which is inherited from `QWidget` but that is for the widget as a whole (rather than individual entries).
What can we do to solve this? Grateful for help!<issue_comment>username_1: How about this:
```
ui->listWidget->setStyleSheet("QListWidget {padding: 10px;} QListWidget::item { margin: 10px; }");
```
Upvotes: 3 <issue_comment>username_2: [`setSpacing(int)`](http://doc.qt.io/qt-5/qlistview.html#spacing-prop) will increase the padding in all directions
(Thank you G.M. for your help!)
Upvotes: 5 [selected_answer]<issue_comment>username_3: we use css selector for thise
=============================
```
# assuming PyQt5; a QApplication must exist before creating widgets
from PyQt5.QtWidgets import QApplication, QListWidget

app = QApplication([])
lst = QListWidget()  # renamed from "list" to avoid shadowing the built-in
lst.addItem("item 1")
lst.addItem("item 2")
lst.addItem("item 3")
lst.setStyleSheet("""
    QListWidget {
        background-color: red;
        padding: 20px;
    }
    QListWidget::item {
        margin: 20px;
        background-color: blue;
    }
""")
lst.show()
app.exec_()
```
Upvotes: 2 |
2018/03/21 | 338 | 1,127 | <issue_start>username_0: I don't know how to configure npm-watch; I don't understand how to use it from the documentation (readme).
In the project root I have a script folder with some .js files. After any change in the script folder I want to call npm build. How do I do this using npm-watch? How do I set up npm-watch?<issue_comment>username_1: The [documentation](https://github.com/M-Zuber/npm-watch) says:
>
> The keys of the "watch" config should match the names of your "scripts"
>
>
>
It means that in your case the `package.json` file should have:
```
{
"watch": {
"build": "script/*.js"
},
"scripts": {
"build": "npm build",
"watch": "npm-watch"
}
}
```
Upvotes: 3 <issue_comment>username_2: ```
-> Open your project
-> go to terminal
run the command
-> npm i npm-watch
then go to package.json
paste the below code
"watch": {
"build": "script/*.js"
},
"scripts": {
"build": "npm build",
"watch": "npm-watch"
}
Last Step
open the terminal and run the below command
-> npm run watch
```
[check the image](https://i.stack.imgur.com/Jf4F6.png)
Upvotes: 0 |
2018/03/21 | 993 | 3,761 | <issue_start>username_0: I have a gridview in which I show the distance between the registered user's latitude & longitude (set in their profile) and the gridview line's latitude & longitude. I've created a helper class which gives me a function to return said distance
My gridview for distance is this :
```
[
'format'=>'raw',
'attribute'=>'distance (km)',
'value'=> function ($data) {
$latFrom = Yii::$app->user->identity->profile->city->latitude;
$longFrom = Yii::$app->user->identity->profile->city->longitude;
$latTo = $data->createdBy->profile->city->latitude;
$longTo = $data->createdBy->profile->city->longitude;
return GeoHelper::distance($latFrom, $longFrom, $latTo, $longTo);
},
],
```
I'm trying to add sorting for distance in my gridview, but can't seem to find how to do so.
I've tried adding a public $distance property to the search model and setting it as safe and then adding
```
$dataProvider->sort->attributes['distance'] = [
'asc' => ['distance' => SORT_ASC],
'desc' => ['distance' => SORT_DESC],
];
```
But no luck
Any ideas ?<issue_comment>username_1: In your gridview column you define the attribute as
```
'attribute'=>'distance (km)',
```
but in your provider you call it 'distance'
```
$dataProvider->sort->attributes['distance']
```
Try setting the column in your gridview like this:
```
'attribute'=>'distance',
```
If you need the 'km' to show up in the header, set the 'label' property of the column.
Upvotes: 0 <issue_comment>username_2: Of course this is not an issue that can be solved that easily.
The value that you want to sort by is calculated in PHP, but the DB has no clue about those values. As @Sergei Beregov said, you can use `ArrayDataProvider`; that way it's achievable. However, depending on the user count it could be quite expensive (if you have a lot of users).
If your DB supports geo functions, I would suggest computing the distance values in the DB itself. For instance, in MySQL you can use [ST\_Distance()](https://dev.mysql.com/doc/refman/5.7/en/spatial-relation-functions-object-shapes.html) to compute the distance between two points and mix it with ActiveRecord relations in Yii2. I've written some experimental stuff using geo functions in my blog post [Spatial relations in Yii2](https://blog.keo.kz/2017/05/19/yii2-spatial-relations/). Sorry, it's in Russian, but I hope it'll be helpful.
As long as you can compute the distance in the DB, you can easily sort by it in the query in the normal way, like `'distance' => SORT_ASC`
Upvotes: 2 <issue_comment>username_3: I solved it thanks to username_2's help !
In Post model, i created a new find function
```
public static function findWithDistance()
{
$latitudeFrom = Yii::$app->user->identity->profile->city->latitude;
$longFrom = Yii::$app->user->identity->profile->city->longitude;
return parent::find()
->addSelect([
"post.*",
"round(6371 * acos( cos( radians(postcity.latitude) ) * cos( radians({$latitudeFrom}) ) * cos( radians({$longFrom}) - radians(postcity.longitude)) + sin(radians(postcity.latitude)) * sin( radians({$latitudeFrom})))) as distance",
])
->leftJoin('user', 'post.created_by = user.id')
->leftJoin('profile', 'user.id = profile.user_id')
->leftJoin('city postcity', 'profile.location = postcity.id');
}
```
which I call in PostSearch model if the user is logged in:
```
public function search($params)
{
$query = (!Yii::$app->user->isGuest && Yii::$app->user->identity->profile->city!=null) ? Post::findWithDistance() : Post::find();
...
```
Upvotes: 2 [selected_answer] |
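For reference, the spherical-law-of-cosines formula embedded in the SQL above can be sketched in Python (a hypothetical helper mirroring the query, with the same 6371 km Earth radius; the clamp guards against floating-point values just outside acos's domain):

```python
import math

def distance_km(lat_from, lon_from, lat_to, lon_to):
    """Great-circle distance in km, same formula as the SQL above."""
    lat1 = math.radians(lat_from)
    lat2 = math.radians(lat_to)
    dlon = math.radians(lon_from) - math.radians(lon_to)
    cos_angle = (math.cos(lat2) * math.cos(lat1) * math.cos(dlon)
                 + math.sin(lat2) * math.sin(lat1))
    # floating-point noise can push the value slightly outside [-1, 1]
    cos_angle = max(-1.0, min(1.0, cos_angle))
    return 6371 * math.acos(cos_angle)

print(round(distance_km(0, 0, 0, 1), 2))  # one degree of longitude at the equator, 111.19
```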
2018/03/21 | 1,201 | 4,435 | <issue_start>username_0: EDIT: I've done some research about this, but have not been able to find a solution!
I'm reading a configuration from an XML file.
The layout of the configuration is set for certain version.
This version is in the first line of the XML, which is easy to extract.
What I would like to know is, what is the best way to actually parse this file/get another method, based on the version (so all 4 elements, the Major, Minor, Build and Revision).
Right now I've come up with this:
```
switch (version.Major)
{
case 0:
switch (version.Minor)
{
case 0:
switch (version.Build)
{
case 0:
switch (version.Revision)
{
case 0:
return VersionNone(doc);
}
break;
}
break;
}
break;
}
throw new NotImplementedException();
```
But I do not find this elegant (at all) and I feel like there is a way better way to do this.
Anyone who can help?<issue_comment>username_1: For something like this I'd be tempted to build a dictionary of actions.
*Edit:* Updated to include a document parameter in response to a comment from the OP.
```
Dictionary<string, Action<XmlDocument>> versionSpecificMethods = new Dictionary<string, Action<XmlDocument>>{
{"1.0.0.0", DoStuff1},
{"1.2.3.4", DoStuff2},
{"3.1.7.182", DoStuff3}};
private void RunMethodForVersion(string version, XmlDocument someXmlDoc)
{
var codeToRun = versionSpecificMethods[version];
codeToRun.Invoke(someXmlDoc);
}
private static void DoStuff1(XmlDocument doc)
{
// Insert user code here
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: If you want to share code between version, you can try to use inheritance:
```
interface IConfig
{
string GetConnectionString();
string GetUserSettings();
int GetDelay();
}
class Version_1 : IConfig
{
public virtual string GetConnectionString() { ... }
public virtual string GetUserSettings() { ... }
public virtual int GetDelay() { ... }
}
class Version_1_1 : Version_1
{
// Changed UserSettings and delay in xml
public override string GetUserSettings() { ... }
public override int GetDelay() { ... }
}
class Version_1_1_4 : Version_1_1
{
// Changed delay in xml
public override int GetDelay() { ... }
}
```
Now you need to create the required instance of the config depending on the version of your file.
Upvotes: 2 <issue_comment>username_3: You can try this, e.g.:
```
enum Version {
Major = 0,
Build = 1,
Revision = 2
}
public class MyApp {
static Dictionary<Version, object> versionManager = new Dictionary<Version, object> {
{Version.Major, new Major() },
{Version.Build, new Build() },
{Version.Revision, new Revision() }
};
public static object Run( int v, XmlDocument xmlDoc ) {
try {
Version version = (Version)v;
var classObj = versionManager[version];
return classObj.GetType().GetMethod("Parse").Invoke(classObj, new object[] { xmlDoc });//System.Reflection;
} catch {
throw new NotImplementedException();
}
}
}
//Major = 0
class Major {
public Major( ) { }
public string Parse( XmlDocument xmlDoc ) {
return "DONE";
}
}
//Build = 1
class Build {
public Build( ) { }
public string Parse( XmlDocument xmlDoc ) {
return "DONE";
}
}
//Revision = 2
class Revision {
public Revision( ) { }
public string Parse( XmlDocument xmlDoc ) {
return "DONE";
}
}
```
Use ==>
```
MyApp.Run(1/*Version==>Build*/, new XmlDocument());
```
Upvotes: 2 <issue_comment>username_4: Based on @Andy-P's [answer](https://stackoverflow.com/a/49558986/342842), but looks for 'most recent' version instead of exact version:
```
using ParserFunction = System.Func<System.Xml.XmlDocument, object>; // generic arguments reconstructed; they were lost in formatting
public static class ParserFactory
{
private static System.Collections.Generic.SortedDictionary<System.Version, ParserFunction> Parsers
= new System.Collections.Generic.SortedDictionary<System.Version, ParserFunction>();
public static void SetParser(Version version, ParserFunction parser)
{
if (Parsers.ContainsKey(version)) Parsers[version] = parser;
else Parsers.Add(version, parser);
}
public static ParserFunction GetParser(Version version)
{
if (Parsers.Count == 0) return null;
Version lastKey = null;
foreach ( var key in Parsers.Keys)
{
if (version.CompareTo(key) < 0)
{
if ( lastKey == null ) lastKey = key;
break;
}
lastKey = key;
if (version.CompareTo(key) == 0) break;
}
return Parsers[lastKey];
}
}
```
Upvotes: 1 |
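The "most recent version not newer than the requested one" lookup in the last answer is essentially a search over sorted keys; here is a rough language-neutral sketch in Python using `bisect` (illustrative only, with hypothetical parser names):

```python
import bisect

# hypothetical registry: version tuple -> parser callable
parsers = {
    (1, 0, 0, 0): lambda doc: "parsed with v1 layout",
    (2, 0, 0, 0): lambda doc: "parsed with v2 layout",
}

def get_parser(version):
    """Return the parser registered for the closest version <= `version`."""
    keys = sorted(parsers)
    i = bisect.bisect_right(keys, version)
    if i == 0:
        return None  # the file predates every known layout
    return parsers[keys[i - 1]]

print(get_parser((1, 5, 0, 0))(None))  # parsed with v1 layout
```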
2018/03/21 | 576 | 1,676 | <issue_start>username_0: Sorry for the bad title, but I don't know how to describe it better.
I have 3 tables
1.) contests
```
ID Title
----------
1 Contest 1
2 Contest 2
3 Contest 3
```
2.) contest\_series
```
ID contest_id series_id
----------------------------
1 1 3
2 1 2
3 2 1
4 2 2
5 3 3
```
3.) series
```
ID start_date
----------------
1 2018-03-21 14:00:00
2 2018-03-21 15:00:00
3 2018-03-21 16:00:00
```
Now what I try to achieve is, getting a list of contests ordered by the start\_date of the first starting series in the contest.
Wanted Result:
```
contest_id start_date_of_first_series
------------------------
2 2018-03-21 14:00:00
1 2018-03-21 15:00:00
3 2018-03-21 16:00:00
```
Important: the contest\_id in the result needs to be distinct.<issue_comment>username_1: I think this is just a `join` and `group by`:
```
select cs.contest_id, min(s.start_date) as first_start_date
from contest_series cs join
series s
on cs.series_id = s.id
group by cs.contest_id
order by first_start_date;
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I slightly modified it, but @username_1 guided me in the right direction and solved my problem:
```
select contests.* from contests
join (
select contest_series.contest_id, min(series.start) as first_start_date
from contest_series
join series on contest_series.series_id = series.id
group by contest_series.contest_id
) filtered_contests on filtered_contests.contest_id = contests.id
order by first_start_date
```
Upvotes: 0 |
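The join-and-group-by approach can be verified quickly with an in-memory SQLite database using the sample data from the question (a quick sketch; table and column names as in the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE contest_series (id INTEGER, contest_id INTEGER, series_id INTEGER);
    CREATE TABLE series (id INTEGER, start_date TEXT);
    INSERT INTO contest_series VALUES (1,1,3),(2,1,2),(3,2,1),(4,2,2),(5,3,3);
    INSERT INTO series VALUES (1,'2018-03-21 14:00:00'),
                              (2,'2018-03-21 15:00:00'),
                              (3,'2018-03-21 16:00:00');
""")
rows = con.execute("""
    SELECT cs.contest_id, MIN(s.start_date) AS first_start_date
    FROM contest_series cs
    JOIN series s ON cs.series_id = s.id
    GROUP BY cs.contest_id
    ORDER BY first_start_date
""").fetchall()
print(rows)
# [(2, '2018-03-21 14:00:00'), (1, '2018-03-21 15:00:00'), (3, '2018-03-21 16:00:00')]
```

This matches the wanted result in the question: each contest appears once, ordered by the start of its earliest series.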
2018/03/21 | 522 | 1,975 | <issue_start>username_0: Hey, I am data-binding my slider to an attribute with the values 1 and 0. On 1 the switch should be active, on 0 it shouldn't. I successfully data-bound some Input and Select elements, but with the slider I run into the following error:
>
> No value accessor for form control with unspecified name attribute
>
>
>
This is my HTML part:
```
Off
On
```
I do not have any form around it, and I also have a name attribute, so I don't understand why this error happens! Thanks for any help!
**EDIT**
Here is the whole Mat-Card Block:
```
today
Endlos
Off
On
```
**EDIT 2**
Added [**Stackblitz**](https://stackblitz.com/edit/angular-bfrn2u) (Error shows up in Console)<issue_comment>username_1: [**This blitz**](https://stackblitz.com/edit/angular-mvpfu5?file=app/slide-toggle-overview-example.html) works well with your code, are you sure you didn't do something else ?
By the way, one binding will be enough.
```
```
Upvotes: 0 <issue_comment>username_2: I have solved it by simply deleting and re-inserting the element. Seems like this is a bit bugged. I got the same problem with a form-field-control error; deleting and re-inserting the element solved the issue.
Upvotes: 2 [selected_answer]<issue_comment>username_3: I had a similar error, but in my case it was apparently a synchronization issue at the moment of rendering the component's HTML.
I followed some of the solutions proposed on this page, but none of them worked for me, at least not completely.
What actually solved my error was to write the below code snippet inside the parent HTML tag of the elements
I was binding to the variable.
Code:
```
*ngIf="variable-name"
```
The error was apparently caused by the project trying to render the page before the variable was evaluated; at that moment the project just could not find its value. With the above code snippet you make sure that, before rendering the page, you check whether the variable has been initialized.
Upvotes: 1 |
2018/03/21 | 712 | 2,479 | <issue_start>username_0: I have one route which should support 2 different URLs - `/settings/:code` and `/settings`:
```
{
path: '/settings',
alias: '/settings/:code',
name: 'settings',
component: settings
},
```
It is working correctly in my SPA when I go to URL `localhost:8080/settings` and `localhost:8080/settings/123` in both way it's hitting settings component.
How ever when I try to get this `:code` like this:
```
resetCode: this.$route.params.code,
```
from url it is always undefined even when I'm on `localhost:8080/settings/123`.
When I turn alias and path the other way around so it's:
```
{
alias: '/settings',
path: '/settings/:code',
name: 'settings',
component: settings
},
```
then my :code is correct, I mean it's 123 when I go to URL `localhost:8080/settings/123` but in this case it's not displaying settings component when I go to url `localhost:8080/settings`.
So either way is bad. How can I fix it?<issue_comment>username_1: Try this:
```
{
name: 'settings',
path: '/settings',
component: settings,
children: [
{
name: 'settings_code',
path: ':code',
component: settings
}
]
}
```
It is untested, but I think your problem occurs because your alias and path match the same path.
That's why I suggest trying it with a child route.
Upvotes: 2 <issue_comment>username_2: This is not how `alias` works. You can specify url alias with different names but not with url parameters.
```
path: '/settings'
alias: '/settings/:code' // incorrect
alias: '/settingsAlias' // correct (different path name)
path: '/settings/:code'
alias: '/settings' // incorrect ( param is missing )
alias: '/settingsAlias/:code' // correct
alias: '/settingsAlias/:codeAlias // correct
```
Also, the `@username_1 Kleßen` answer seems to be working fine for you. But this is not what child routes are created for.
```
path: '/settings',
component: settings,
children: [
{
path: ':code' // ( incorrect usage ) how is this children?
}
]
```
Child routes should follow with a slash `/` (and not just `/:param` but `/pathname` ). If you use as preceding example, then you're just doing a trick with the router.
Then, what is the correct way of using it?
Well, you can **define an optional parameter**:
```
path: '/settings/:code?' // using ? defines param as optional
```
So, now you can hit `'/settings'` or `'/settings/123'` in the url.
Upvotes: 3 [selected_answer] |
2018/03/21 | 413 | 1,562 | <issue_start>username_0: I want to use double values and int values separately in one method as parameters of the method how can I achieve that? The method is multiplying two integer numbers and two double numbers in the same method. The output should be two lines one for the integer and the other for the double value<issue_comment>username_1: The most efficient way would be to overload the method :
```
void foo(int i)
void foo(double i)
```
But it is also valid to assign an `int` to a `double`, so you could also provide only :
```
void foo(double i)
```
to accept both `int` and `double`.
An alternative to overloading would be to define a generic class with a method that accepts the generic type :
```
public class Bar<T extends Number>{
public void foo(T t){
...
}
}
```
And you could so instantiate them :
```
Bar<Integer> barInt = new Bar<>();
barInt.foo(3);
Bar<Double> barDouble = new Bar<>();
barDouble.foo(3.0);
```
Note that numeric wrappers consume more memory than their primitive counterparts.
So the first way, with overloading, is really the most efficient.
And you still have many other ways to handle this question.
All that to say that you should use one way or another according to your requirements.
Upvotes: 3 [selected_answer]<issue_comment>username_2: For some calculations which are type-safe for int/double you can just use this:
```
<T extends Number> T calc(T a, T b) { ... }
```
If you want to use functions for a special type (rounding/floating behaviour/...) you can use `instanceof` and call a helper method for `INT` or `DOUBLE` or ..., or just do it in `if-else`.
Upvotes: 0 |
2018/03/21 | 681 | 2,419 | <issue_start>username_0: I need to delete duplicate files and only keep the original (oldest file). I need it to work with directories and be able to set the path, i.e. E:media/. The directories will have files that aren't duplicates (I need to keep those) and files with duplicates (sometimes more than 1 duplicate).
I have been able to put together a script that will do this based on hash, but I can't seem to get it to work right if I set the path, and it doesn't work with directories.
```
$files = Get-ChildItem -File |
Select-Object FullName, LastWriteTime, @{n="hash";e= {(Get-FileHash $_).Hash}} |
Sort-Object Hash,LastWriteTime
for ($i=1; $i -lt $files.count; $i++){
Write-Host $i
If ($files[$i].hash -eq $files[$i-1].hash){
Remove-Item -Path $files[$i].fullname
}
}
```
I changed
```
$files=Get-ChildItem E:/media -File
```
and
```
Get-ChildItem E:/media -File
```
but that doesn't work, and I can't figure out how to make it work in a directory; it only works inside said folder, and I have 10000's of folders I need to sort out.
Im stumped and will appreciate any pointers in the right direction, thanks
2018/03/21 | 706 | 2,744 | <issue_start>username_0: I would like to be able to download a set of files from Drive on to my tablet (Nexus 7 2012 running Lineage OS) and then edit, compile and execute the relevant files. These files are all c++ related .cpp, .h and the main file. (BTW I am new to c++ and Termux).
When I currently do this I can edit and compile but cannot execute - I get a permission denied error message. I have followed the instructions from the termux help page, run termux-setup-storage and given permission for the emulator to access the shared folders. These are all setup correctly as far as I can tell.
Checking up on other questions and sites it is well explained that you cannot compile and run on the emulated storage locations (nor the external drive if you have one - I don't).
My issue is that I cannot copy the files from the emulated download folder into the termux folder. I cannot directly download from drive to the termux folder, and do not know how to navigate directly to Drive from within termux.
So more specifically:
1. How do you copy a file from the emulated termux downloads into the termux folder? (adding su does not work).
2. Is it possible to navigate via termux directly into drive and to work there?
3. I have tried to chmod the files in termux but this does not work. I have also tried termux-open and this does not work. Is there some other way to compile and execute c++ files in termux using clang++?
Any other suggestions would be appreciated.
Thanks in advance.
2018/03/21 | 720 | 2,572 | <issue_start>username_0: I have a `Xamarin.Android` project and use `MVVMCross`. I need to add some underlined text in my `.axml` layout.
I found some ways to do it, but they don't work for me.
I have this layout:
```
xml version="1.0" encoding="utf-8"?
```
1. The string.
```
Underlined text
```
I can't use this way because I use two languages and I have the folder `Locales` with two `.txt` files. So the project doesn't take strings from `strings.xml`
2. Paint.
```
TextView MyLink = FindViewById<TextView>(Resource.Id.MyLink);
MyLink.PaintFlags = (MyLink.PaintFlags | Android.Graphics.PaintFlags.UnderlineText);
```
3. FromHtml.
```
txtView.setText(Html.fromHtml("underlined text"));
```
Doesn't work for me; besides, I see a green underline and a comment that this method is deprecated.
How can I mark my `TextView` as underlined?<issue_comment>username_1: Create your own TextView and use it instead of the standard TextView in the axml:
```
public class UnderlineTextView : TextView
{
    private void Initialize()
    {
        //this.PaintFlags = this.PaintFlags | Android.Graphics.PaintFlags.UnderlineText;
        //this.Text = "This text will be underlined";
        String underlineData = this.Text;
        SpannableString content = new SpannableString(underlineData);
        content.SetSpan(new UnderlineSpan(), 0, underlineData.Length, 0);
        // 0 specifies the start index and underlineData.Length specifies the end index of the styling
        this.TextFormatted = content;
    }
}
```
Upvotes: 1 <issue_comment>username_2: Option 2 looks like Java instead of xamarin c#, try:
```
txtView.PaintFlags = (txtView.PaintFlags | Android.Graphics.PaintFlags.UnderlineText);
```
And use the correct Id.
Upvotes: 3 [selected_answer]<issue_comment>username_3: You should be using `txtView.TextFormatted = Html.FromHtml("underlined text");` for it to work as you're expecting.
It's often ideal to use data binding when you're using the MVVM pattern. An approach using data binding with MvvmCross is to use a [ValueConverter](https://www.mvvmcross.com/documentation/fundamentals/value-converters) that accepts string input and outputs a `SpannableString` which you can use with the `TextFormatted` MvvmCross binding.
```
public class UnderlinedStringValueConverter : MvxValueConverter<string, SpannableString>
{
protected override SpannableString Convert(string value, Type targetType, object parameter, CultureInfo culture)
{
var spannable = new SpannableString(value ?? string.Empty);
spannable.SetSpan(new UnderlineSpan(), 0, spannable.Length(), 0);
return spannable;
}
}
```
Then in your `TextView` AXML:
```
```
Upvotes: 1 |
2018/03/21 | 718 | 2,466 | <issue_start>username_0: I have a list of dict's...
```
categories = [{'summarycategory': {'amount':1233}},
{'hhCategory': {}},
{'information': {'mRoles': ['4456'],
'cRoles': None,
'emcRoles': ['spm/4456']}}]
```
I want to get value information.emcRoles. To do this, I do:
```
for x in categories:
for key in x:
if key == "information":
print(x[key]["emcRoles"])
```
There must be a more pythonic way?
Also, need it needs to be null safe. So if, `"information"` is not there, I don't want a null pointer looking for emcRoles.<issue_comment>username_1: don't *loop* on the keys, you're killing the use of the dict key lookup (plain loop is `O(n)`, dict lookup is `O(1)`
Instead, just check if key belongs, and go get it if it does.
```
for x in categories:
if "information" in x:
print(x["information"]["emcRoles"])
```
or use `dict.get` to save a dict key access:
```
for x in categories:
d = x.get("information")
if d is not None: # "if d:" would work as well here
print(d["emcRoles"])
```
to create a list of those infos instead, use a listcomp with a condition (again, listcomp makes it difficult to avoid the double dict key access):
```
[x["information"]["emcRoles"] for x in categories if "information" in x]
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If `information` or `emcRoles` might be missing, you can either "ask for forgiveness", by wrapping it all in a `try..except`
```
try:
for x in categories:
if "information" in x:
print(x["information"]["emcRoles"])
except KeyError:
    pass  # handle gracefully ...
```
or you could use `get()` and provide fallback values as you see fit:
```
for x in categories:
print(x.get("information", {}).get("emcRoles", "fallback_value"))
```
Upvotes: 1 <issue_comment>username_3: A one-liner:
```
next(x for x in categories if 'information' in x)['information']['emcRoles']
```
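Note that `next` raises `StopIteration` if no dict contains the key; passing a default makes the one-liner safe:

```python
categories = [{'summarycategory': {'amount': 1233}}]  # no 'information' key here

match = next((x for x in categories if 'information' in x), None)
roles = match['information']['emcRoles'] if match is not None else None
print(roles)  # None
```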
Upvotes: 2 <issue_comment>username_4: Depending on what else you're doing with your categories list, it might make sense to convert your list of dictionaries to a new dictionary:
```
newdictionary = {key: d[key] for d in categories for key in d}
print(newdictionary['information']['emcRoles'])
```
See [how to convert list of dict to dict](https://stackoverflow.com/questions/5236296/how-to-convert-list-of-dict-to-dict) for more.
Upvotes: 1 |
2018/03/21 | 552 | 2,055 | <issue_start>username_0: I have a lot of classes where each class contains a map with completely different search keys. Each map has 4 items on average => 4 search keys on average.
Example:
```
class A
{
private final static Map<String, String> properties;
static
{
Map<String, String> tmp = new HashMap<>();
tmp.put("NearestNeighbour", "INTER_NEAREST");
tmp.put("Bilinear", "INTER_LINEAR");
tmp.put("Bicubic4x4", "INTER_CUBIC");
properties = Collections.unmodifiableMap(tmp);
}
private enum InterpolationMode
{
NN("NearestNeighbour"),
Bilinear("Bilinear"),
Bicubic("Bicubic");
private String mode;
InterpolationMode(String mode) {
this.mode = mode;
}
public String getMode(){
return mode;
}
}
}
```
In this class my keys for map are `NearestNeighbour, Bilinear, Bicubic4x4` so I created a private enum and retrieve the value from map like this `properties.get(InterpolationMode.Bilinear.getMode());`
The problem is that I have about 20 classes and each class has its own map with different keys (they are not related). A global package enum does not make sense for me, since those search keys are not related in any way. Is it a good idea to create a private enum like that in each class? Or is there a better way to do this that does not use an enum at all?<issue_comment>username_1: Using enums for this purpose is totally fine. What you could consider is using enums as keys (instead of strings) and using `EnumMap` instead of `HashMap`.
```
Map<InterpolationMode, String> tmp = new EnumMap<>(InterpolationMode.class);
tmp.put(InterpolationMode.NN, "INTER_NEAREST");
tmp.put(InterpolationMode.Bilinear, "INTER_LINEAR");
tmp.put(InterpolationMode.Bicubic, "INTER_CUBIC");
```
The advantage of `EnumMap` is that it is more compact and efficient.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Are you creating many classes just to hold different maps of data? If so, it is better to create only one class, then create many objects from that class.
Using global enums is ok, as long as the enums have real meanings (e.g. gender, types of bank accounts).
Upvotes: 0 |
2018/03/21 | 379 | 1,359 | <issue_start>username_0: The Delphi `TList.Sort()` method expects a callback function argument of type `function (Item1, Item2: Pointer): Integer;` for comparing the list items.
I'd like to get rid of typecasting within the callback function and would like to define a callback function like this:
```
function MyTypeListSortCompare( Item1, Item2 : tMyType ) : integer;
begin
result := WideCompareStr(Item1.Name, Item2.Name);
end;
...
MyList.Sort(tListSortCompare(MyTypeListSortCompare));
...
```
but unfortunately this triggers an "Invalid typecast" compiler error.
Is there some way to properly typecast function pointers in Delphi (2006)?<issue_comment>username_1: A typecast is possible, but requires prefixing the function name with "@":
```
var
MyList : TList;
begin
...
MyList.Sort(TListSortCompare(@MyTypeListSortCompare));
...
end;
```
As pointed out in the comments the typecast isn't needed when type-checked pointers are turned off, so in that case this also works:
```
MyList.Sort(@MyTypeListSortCompare);
```
Upvotes: 2 <issue_comment>username_2: I normally do something like this:
```
function MyTypeListSortCompare( Item1, Item2 : Pointer ) : integer;
var
LHS: TMyType absolute Item1;
RHS: TMyType absolute Item2;
begin
result := WideCompareStr(LHS.Name, RHS.Name);
end;
```
Upvotes: 4 [selected_answer] |
2018/03/21 | 829 | 2,606 | <issue_start>username_0: Here is the deal: I have a website that I want to extract some hrefs from, specifically the ones that have the text "LEIA ESTA EDIÇÃO", like in this HTML.
```
[LEIA ESTA EDIÇÃO](http://acervo.estadao.com.br/pagina/#!/20120824-43410-spo-1-pri-a1-not/busca/ministro+Minist%C3%A9rio "LEIA ESTA EDIÇÃO")
```
This is the code I have; it's pretty wrong, I was making some tests to see if it works.
By the way: it has to be Selenium.
```
driver = webdriver.Chrome()
x = 1
while True:
try:
link = ("http://acervo.estadao.com.br/procura/#!/ministro%3B minist%C3%A9rio|||/Acervo/capa//{}/2000|2010|2010///Primeira").format(x)
driver.get(link)
time.sleep(1)
xpath = "//a[contains(text(),'LEIA ESTA EDIÇÃO')]"
links = driver.find_elements_by_xpath(xpath)
bw=('')
for link in links:
bw += link._element.get_attribute("href")
print (bw)
x = x + 1
time.sleep(1)
except NoSuchElementException:
pass
print(x)
time.sleep(1)
```<issue_comment>username_1: You can try the code below to get the required output:
```
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver.get(link)
links = WebDriverWait(driver, 10).until(EC.presence_of_all_elements_located((By.LINK_TEXT, "LEIA ESTA EDIÇÃO")))
references = [link.get_attribute("href") for link in links]
```
Upvotes: 2 <issue_comment>username_2: I would really recommend you to read the [selenium docs](http://selenium-python.readthedocs.io/getting-started.html#simple-usage), the explanations over there are easy and straightforward.
There are some places your code can be improved:
1. You really do not need the `while True`. Just think about it: once you have extracted all of the links, you are done.
2. The try/except is not correctly indented.
3. You should get a list of links and extract the text hrefs out of them.
A simple 1 liner can be (if there is at least 1 a tag with that text):
```
[a_tag.get_attribute('href') for a_tag in driver.find_elements_by_link_text("LEIA ESTA EDIÇÃO")]
```
4. The `bw` variable will become one concatenated string of all of the hrefs; I am pretty sure that is not what you are looking for, but rather a list or some other data structure.
5. I would recommend reading [this answer](https://stackoverflow.com/a/39680089/9491566) about string concatenation in Python.
6. Overall it seems like you can improve your Python. I would really recommend getting more comfortable with the language and flow before jumping into Selenium :)
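To illustrate points 4 and 5: collect the hrefs into a list, and only join them into a single string at the very end if one is actually needed (plain Python, no browser required; the URLs are made up):

```python
# Stand-in for the links scraped from the page (hypothetical URLs).
found = ["http://acervo.example/page/1", "http://acervo.example/page/2"]

hrefs = []
for url in found:
    hrefs.append(url)        # a list keeps the links usable individually

joined = "\n".join(hrefs)    # join once, at the end, if a string is needed
print(hrefs)
print(joined)
```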
Upvotes: 1 |
2018/03/21 | 1,188 | 2,828 | <issue_start>username_0: I am trying to graph the result of multiple runs of a function. The function returns an object that has lists of different lengths.
```
library(ldbounds)
reas1.lin = bounds(t = c(.5,.75,1),iuse = c(3,3),alpha = c(0.025,0.025))
```
The result looks like this:
```
$bounds.type
[1] 2
$spending.type
[1] "Power Family: alpha * t^phi"
$time
[1] 0.50 0.75 1.00
$time2
[1] 0.50 0.75 1.00
$alpha
[1] 0.025 0.025
$overall.alpha
[1] 0.05
$lower.bounds
[1] -2.241403 -2.288491 -2.229551
$upper.bounds
[1] 2.241403 2.288491 2.229551
$exit.pr
[1] 0.0250 0.0375 0.0500
$diff.pr
[1] 0.0250 0.0125 0.0125
attr(,"class")
[1] "bounds"
```
I would like to get a dataframe that looks like this:
```
time1 time2 lower.bound upper.bound exit.pr diffpr type
0.50 0.50 -2.241403 2.241403 0.0250 0.0250 Power Family: alpha * t^phi
0.75 0.75 -2.288491 2.288491 0.0375 0.0125 Power Family: alpha * t^phi
1.00 1.00 -2.229551 2.229551 0.0500 0.0125 Power Family: alpha * t^phi
```
This is how I extracted the data into the above dataframe, but it depends on the number of elements in each list; there must be a more elegant solution to this...
```
example1.lin <- data.frame(matrix(as.numeric(unlist(reas1.lin)[c(3:8,12:23)]),
nrow=3,
byrow=F),type=reas1.lin$spending.type)
```<issue_comment>username_1: Here's one way:
```
nms <- c("time", "time2", "lower.bounds", "upper.bounds", "exit.pr", "diff.pr", "spending.type")
as.data.frame(reas1.lin[nms])
# time time2 lower.bounds upper.bounds exit.pr diff.pr spending.type
# 1 0.50 0.50 -2.241403 2.241403 0.0250 0.0250 Power Family: alpha * t^phi
# 2 0.75 0.75 -2.288491 2.288491 0.0375 0.0125 Power Family: alpha * t^phi
# 3 1.00 1.00 -2.229551 2.229551 0.0500 0.0125 Power Family: alpha * t^phi
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Turns out someone was faster than I was (and uses fewer lines of code). If you are interested in a version that doesn't require manually inserting the names of your list, you can do this:
```
library(ldbounds)
library(tidyverse)
example1 = bounds(t = c(.5,.75,1),iuse = c(3,3),alpha = c(0.025,0.025))
example1 <- map(example1, `length<-`, max(lengths(example1)))
example1 %>% as_data_frame() %>% fill(everything())
```
First, I filled all vectors in the list with NAs to be the same length, then I combined them into a dataframe and replaced NAs with the value in the row above them where necessary.
The sort of quirky and surprising but rather elegant way of just using the length-assignment as a function comes from an answer to [this question](https://stackoverflow.com/questions/34570860/adding-na-to-make-all-list-elements-equal-length).
Upvotes: 1 |
2018/03/21 | 573 | 2,551 | <issue_start>username_0: I'm trying to figure out how the Fingerprint AIO / touchId login would work with our API authentication.
After the user logs in and enables touchId, do we need to generate an access token with long expiry for future fingerprint logins?
Looking through some sample implementations of the feature online I was hoping there was some kind of signature or hash that touchId would return that we could store and validate against.
But from what I found, it mostly just returns a success or failure status.
Is there a flow that’s typical here?
Any insight would be helpful.
Thanks!<issue_comment>username_1: Maybe you can build your own encrypted signature store in **Ionic Native Storage**.
For example, encrypt your userid+password (just an example) and store it in local storage. When calling *fingerprint aio*, read that value out and put it in the `fingerprintOptions` `clientId`/`clientSecret`. If the user authenticates successfully, read the `clientId`/`clientSecret` back via `this.fingerprintOptions.clientId`/`clientSecret`, which gives you your encrypted signature, then send it to your server for decryption. After successful decryption, return a success message and let the user log in.
This is just my concept to share with you.
My concept would be:
1. On first-time setup, ask the user to key in username and password to create
the encrypted signature; send it back to your server to store, and
store one copy in Ionic Native Storage (used later to send to the server to
authenticate).
2. When the user opens the app, check whether the user has registered a signature
and pop up the fingerprint prompt. After successful authentication, the app will
automatically log the user in.
Note: How to do encryption and decryption for the signature is up to you, this is just a concept.
The ionic native storage can be found here:
<https://ionicframework.com/docs/storage/>
Hope this helps you out with your thinking, and sorry for the bad English.
Upvotes: 3 [selected_answer]<issue_comment>username_2: If I am not mistaken, **this is not possible**. The fingerprint scanner only checks whether the fingerprint is enrolled on the device itself. So if you have multiple users on one device, it will never work, since you won't know who the logged-in user is. The plugin does return a code after the scanning, but that code differs every time, since your fingerprint won't be EXACTLY the same every time (due to the position / angle of your finger). I hope Ionic will enable fingerprint login with multiple users on a device and actually verify the user via the fingerprint.
Upvotes: 1 |
2018/03/21 | 985 | 3,799 | <issue_start>username_0: I am a newbie with HSMs. I would like to use an HSM in one of my products to store a key database. I have the following questions:
1) Does all HSM support HTTPS(SSL)?
2) Do we have userspace on HSM where we can run our own programs?
3) Any standard API to access HSM through HTTPS?
NOTE: The user can have any HSM from any service provider.<issue_comment>username_1: It is very much vendor dependent.
You are assuming that the HSM has a linux or desktop-like kernel and GUI. Nope. The HSM is probably an embedded system running a roll-your-own (proprietary) operating system.
The Utimaco 'CryptoServer' line does not support HTTPS or SSL, *but that is an answer to an incorrect question*. Does the software (called SecurityServer) implement a secure connection between the host application and the firmware, running on the HSM? Yes. But it isn't over HTTPS (as of 4.20) and it doesn't use TLS. And the encrypted connection is between your application *and the firmware on the HSM, not the OS that the HSM is plugged into*. Ie, there is no 'man-in-the-middle'.
Does the HSM provide "user space"? No, because the OS on the CryptoServer is an embedded OS and it does not have the concept of a User (as defined by a linux or other desktop OS). If you write custom code (CryptoServers support custom code in C and in Lua), C modules run in kernel space, Lua modules run in a Lua sandbox.
SecurityServer supports CXI (Utimaco proprietary), Java_CXI, JCE, PKCS#11, CSP, CNG, EKM. But not RESTful, which is what an HTTPS connection would be.
So: Ask the vendor.
(note: Yes, I work for Utimaco).
Upvotes: 3 <issue_comment>username_2: (To give a more vendor neutral answer)
**1a) Does all HSM support securing HTTPS(SSL)?**
* I suppose most HSMs supporting RSA encryption are usable for HTTPS/SSL/TLS acceleration
* Note that HSM is usually used only for initial key exchange (e.g. RSA) and consecutive encryption (e.g. AES) is done by the application
**1b) Does all HSM support HTTPS(SSL) as a communication channel between application and HSM hardware?**
* As far as I know only Thales supports SSL/TLS for end-to-end connection into HSM firmware
* Other vendors use their own proprietary protocols for securing communication between application and HSM firmware
**2) Do we have userspace on HSM where we can run our own programs?**
* Some HSM models allow running custom code inside HSM hardware -- see [this answer](https://stackoverflow.com/a/37086893/5128464)
* Note that running custom code inside HSM hardware may break security certifications
**3) Any standard API to access HSM through HTTPS?**
* The 'gold standard' API to access HSMs is [PKCS#11](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=pkcs11)
* Alternative APIs are Java's JCA/JCE, Microsoft's CryptoAPI/CNF
* Some vendors provide alternative APIs or proprietary extensions to abovementioned APIs
* I am not aware of any standardized API to access HSM via HTTPS
---
Disclaimer: It's been some time since I dealt with this, so please do validate my thoughts...
Good luck!
Upvotes: 4 [selected_answer]<issue_comment>username_3: I agree that HSM functionality changes from user to user.
I can talk about Network HSM provided by Gemalto.
1. They access the HSM in a network through static IP or DNS.
2. The standard used by the provider is PKCS #11 2.20 and 2.30 - cryptoki library.
You can basically use the provided SDK to develop your applications; the functionality and usage are based on PKCS, but the vendor may have their own additions or subtractions.
You can also write custom functions (Functionality Module - FM) and download them into the HSM.
3. In the Gemalto Network HSM case, the Message Dispatch library provided by Gemalto is used for communicating with the FM inside an HSM.
Upvotes: 2 |
2018/03/21 | 1,333 | 3,231 | <issue_start>username_0: VB2010: I have text that consists of blocks of text that start with day and time DD HHMM and end only at the next day/time.
Here is my sample text:
```
18 2131 Z50000 ZZ-AAA
PR
PR
AGM TPS P773QQ 1500 DCA 22FEB
21,77,23,M10,F,26,3100,2
OK
18 2134 Z50000 ZZ-AAA
PR
QU HMKKDBB
.DDVZAZC 182134
ARR
FI US1500/AN P773QQ/DA KDCA/AD KMIA/IN 2026/FB 152/LA /LR
DT DDL DCAV 182134 M33A
- OS KMIA /GNO6541/R200RR
18 2134 Z50000 ZZ-AAA
PR
PR
ARR OPN P773QQ 1500 DCA 22FEB
0757
OK
18 2135 Z50000 ZZ-AAA
PR
PR
ARR M58 P773QQ 1500 DCA 22FEB
212
UNKNOWN POL/SPOL
QU HMKKDBB
.DDVZAZC 182134
ARR
FI US1500/AN P773QQ/DA KDCA/AD KMIA/IN 2026/FB 152/LA /LR
DT DDL DCAV 182134 M33A
- OS KMIA /GNO6541/R200RR
18 2136 Z50000 ZZ-AAA
PRF 1500/18 MIA IN 0152 333
18 2137 Z50000 ZZ-AAA
PR
PRZ 1500/18 MIA IN 2026 N/A 333
```
My goal is to get only the blocks of text that have the key phrases ^FI and ^DT in the middle. The matching groups should contain only two blocks: the one starting at 18 2134 and ending at M33A, and then the one from 18 2135 to M33A.
I have tried:
This works for the most part except it starts the match at the prior block.
```
RegexOptions.Singleline Or RegexOptions.Multiline Or RegexOptions.IgnoreCase
^\d\d \d{4}(.*?)^FI US(.*?)^DT DDL(.*?)\r
```
This one I took from another post but can't seem to wrap my head around. It matches only the first part of every block.
```
RegexOptions.Multiline Or RegexOptions.IgnoreCase
^\d\d \d{4}.*\r[\s\S]*?(?=(?:^\d\d \d{4}|$))
```
Haven't used regex in a while, so any help is appreciated.<issue_comment>username_1: This regex should find what you need, if single-line mode is enabled:
`[0-3]\d\s+[0-2]\d[0-5]\d.*?(FI.*?)\n(DT.*?)\n`
**Explanation:**
`[0-3]\d\s+[0-2]\d[0-5]\d` day hour and minute check
`.*?` ungreedy capturing, . includes newline
`(FI.*?)\n` first group, FI line, until line break
`(DT.*?)\n` second group, same deal
Upvotes: 0 <issue_comment>username_2: You may use
```
(?ms)^\d\d +\d{4}\b(?:(?!^(?:\d\d +\d{4}\b|FI|DT)).)*?^(?:FI|DT).*?(?=^\d\d +\d{4}\b|\Z)
```
See the [regex demo](https://regex101.com/r/PqZsoA/3) (Though it is a PCRE regex test, it will work the same in .NET).
**Pattern details**
* `(?ms)` - multiline and singleline options
* `^` - start of a line
* `\d\d +\d{4}\b` - 2 digits, 1 or more spaces and 4 digits as a whole word
* `(?:(?!^(?:\d\d +\d{4}\b|FI|DT)).)*?` - any char, 0+ repetitions, as few as possible, that does not start the sequence: start of a line, 2 digits, 1 or more spaces and 4 digits as a whole word, or `FI` or `DT`
* `^(?:FI|DT)` - `FI` or `DT` at the start of a line
* `.*?` - any 0+ chars, as few as possible
* `(?=^\d\d +\d{4}\b|\Z)` - a positive lookahead that requires `^\d\d +\d{4}\b` (start of a line, 2 digits, 1 or more spaces and 4 digits as a whole word) or `\Z` (end of string) to match immediately to the right of the current location.
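As a sanity check, the same pattern runs unchanged under Python's `re` (every construct it uses is shared with the .NET engine); on a trimmed copy of the question's sample text it matches exactly the two expected blocks:

```python
import re

pattern = (r"(?ms)^\d\d +\d{4}\b(?:(?!^(?:\d\d +\d{4}\b|FI|DT)).)*?"
           r"^(?:FI|DT).*?(?=^\d\d +\d{4}\b|\Z)")

sample = """\
18 2131 Z50000 ZZ-AAA
PR
AGM TPS P773QQ 1500 DCA 22FEB
OK
18 2134 Z50000 ZZ-AAA
PR
QU HMKKDBB
FI US1500/AN P773QQ/DA KDCA/AD KMIA/IN 2026/FB 152/LA /LR
DT DDL DCAV 182134 M33A
- OS KMIA /GNO6541/R200RR
18 2134 Z50000 ZZ-AAA
PR
ARR OPN P773QQ 1500 DCA 22FEB
OK
18 2135 Z50000 ZZ-AAA
PR
UNKNOWN POL/SPOL
FI US1500/AN P773QQ/DA KDCA/AD KMIA/IN 2026/FB 152/LA /LR
DT DDL DCAV 182134 M33A
- OS KMIA /GNO6541/R200RR
18 2136 Z50000 ZZ-AAA
PRF 1500/18 MIA IN 0152 333
"""

# No capturing groups in the pattern, so findall returns the full matches.
blocks = re.findall(pattern, sample)
print([b.splitlines()[0] for b in blocks])
# ['18 2134 Z50000 ZZ-AAA', '18 2135 Z50000 ZZ-AAA']
```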
Upvotes: 2 [selected_answer] |
2018/03/21 | 722 | 2,709 | <issue_start>username_0: I'm trying to come up with the best way to use jQuery to delete all the rows except for the one selected. I've done some searching and I've seen several posts about deleting all records or all records except the first or last row but nothing on deleting all but the one selected.
The scenario would be searching for a list of employees, which would return maybe 5 records. Then there would be a "Select" button, and when you click on that button it would remove all rows except for that one. Actually, from there my idea is to not only remove the rows but then hide the "Select" button and display a text box, as well as displaying an "Add" button below the table to add the employee with the item entered in the textbox.
I've thought about putting a hidden field on each row that holds a row #, passing that row index to a jQuery function, and then looping through all of the tbody tr elements, removing each one that doesn't match the row index passed in?
Another thought would be to put an `Onclick` on each of the "Select" buttons and then in the function use "this" for a reference to the row, but then I don't know how to effectively say `"remove all but $this row"`.
```
<table>
    <thead>
        <tr>
            <th>First Name</th><th>Last Name</th><th>Employee ID</th><th>Position Title</th><th></th>
        </tr>
    </thead>
    <tbody>
    @foreach (var item in Model.listEmployee)
    {
        <tr>
            <td>@item.FirstName</td>
            <td>@item.LastName</td>
            <td>@item.EmployeeId</td>
            <td>@item.PositionTitle</td>
            <td><button id="btnSelect">Select</button></td>
        </tr>
    }
    </tbody>
</table>
```<issue_comment>username_1: Why not filter on your Model.listEmployee on the id of the selected row? This is assuming your current set up is reactive.
Ie:
```
const new_list = old_list.filter(item => item.id !== selected_id)
```
Upvotes: 0 <issue_comment>username_2: First, ids need to be unique, so `id="btnSelect"` on every row is not going to work. Make it a class or a name.
So select the TR and select the siblings and you can remove it.
```js
$("tbody").on("click", "button", function() { //bind the click to the buttons
var button = $(this) //get what was clicked
$(this).closest("tr") //select the TR
.siblings() //get the other TRS
.remove() // um, remove them
button.hide() //hide your button
button.siblings().removeAttr('hidden') //show the input
});
```
```html
<table>
    <tr>
        <th>First Name</th><th>Last Name</th><th>Employee ID</th><th>Position Title</th><th></th>
    </tr>
    <tbody>
        <tr>
            <td><EMAIL></td><td>@item.LastName</td><td>@item.EmployeeId</td><td>@item.PositionTitle</td>
            <td><button>Select</button><input type="text" hidden></td>
        </tr>
        <tr>
            <td><EMAIL></td><td>@item.LastName</td><td>@item.EmployeeId</td><td>@item.PositionTitle</td>
            <td><button>Select</button><input type="text" hidden></td>
        </tr>
        <tr>
            <td><EMAIL></td><td>@item.LastName</td><td>@item.EmployeeId</td><td>@item.PositionTitle</td>
            <td><button>Select</button><input type="text" hidden></td>
        </tr>
        <tr>
            <td><EMAIL></td><td>@item.LastName</td><td>@item.EmployeeId</td><td>@item.PositionTitle</td>
            <td><button>Select</button><input type="text" hidden></td>
        </tr>
    </tbody>
</table>
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 600 | 1,975 | <issue_start>username_0: I tried to write a query, but unfortunately I didn't succeed.
I want to know how many packages were delivered over a given period by a person.
So I want to know how many packages were delivered by John (user\_id = 1) between 01-02-18 and 28-02-18. John drives another car (another plate\_id) every day.
(orders\_drivers.user\_id, plates.plate\_name, orders.delivery\_date, orders.package\_amount)
I have 3 tables:
**orders** with *`plate_id`* *`delivery_date`* *`package_amount`*
**plates** with *`plate_id`* *`plate_name`*
**orders\_drivers** with *`plate_id`* *`plate_date`* *`user_id`*
I tried some solutions but didn't get the expected result. Thanks!
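One possible way to express the query: join `orders` to `orders_drivers` on both `plate_id` and the date, filter on the user and the period, and sum `package_amount`. This is a sketch, not tested against the real schema; the sample rows, the ISO date format, and the assumption that `orders_drivers.plate_date` records which day a driver had a given car are all mine. Demonstrated end-to-end with Python's built-in SQLite:

```python
import sqlite3

# Invented schema/data following the question's column names:
# orders(plate_id, delivery_date, package_amount),
# plates(plate_id, plate_name), orders_drivers(plate_id, plate_date, user_id).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders         (plate_id INT, delivery_date TEXT, package_amount INT);
CREATE TABLE plates         (plate_id INT, plate_name TEXT);
CREATE TABLE orders_drivers (plate_id INT, plate_date TEXT, user_id INT);

INSERT INTO plates VALUES (10, 'ABC-123'), (20, 'XYZ-789');
-- John (user_id = 1) drives plate 10 on Feb 1 and plate 20 on Feb 2.
INSERT INTO orders_drivers VALUES (10, '2018-02-01', 1),
                                  (20, '2018-02-02', 1),
                                  (10, '2018-02-02', 2);
INSERT INTO orders VALUES (10, '2018-02-01', 5),
                          (20, '2018-02-02', 7),
                          (10, '2018-02-02', 3),  -- driven by user 2 that day
                          (10, '2018-03-01', 9);  -- outside the period
""")

query = """
SELECT SUM(o.package_amount)
FROM orders o
JOIN orders_drivers od
  ON od.plate_id = o.plate_id
 AND od.plate_date = o.delivery_date   -- same car on the same day
WHERE od.user_id = ?
  AND o.delivery_date BETWEEN ? AND ?
"""
total = con.execute(query, (1, '2018-02-01', '2018-02-28')).fetchone()[0]
print(total)  # 12  (5 on Feb 1 + 7 on Feb 2)
```

Joining on the date as well as the plate is what ties each delivery to whoever drove that car that day, which matters because John drives a different car every day.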
2018/03/21 | 357 | 1,041 | <issue_start>username_0: I need to remove every `item` from `a` whose `item[0]` is `'1'`:
```
a = [['1','2','3'], ['2','4','9']]
for item in a:
if item[0] == '1':
del item
```<issue_comment>username_1: Use `filter`:
```
new_a = list(filter(lambda item: item[0] != '1', a))
```
The `list` is just so it's compatible regardless of your Python version (`filter` returns a lazy sequence in Python 3).
Upvotes: 0 <issue_comment>username_2: You can use a list comprehension as follows
```
a = [i for i in a if i[0] != '1']
```
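A quick check with the question's sample data:

```python
a = [['1', '2', '3'], ['2', '4', '9']]
a = [i for i in a if i[0] != '1']
print(a)  # [['2', '4', '9']]
```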
Upvotes: 2 <issue_comment>username_3: Do not change the length of a list while iterating over it. Make a new list instead.
```
b = [i for i in a if i[0] != '1']
```
Upvotes: 1 <issue_comment>username_4: A List Comprehension is the best way to solve this problem but if you want to use a for loop here's some Python code for that:
```
a = [['1','2','3'], ['2','1','9'], ['1','6','9'], ['5','6','7']]
# Code
def removeOneList(a):
    # iterate over a shallow copy so removing items from `a` is safe
    for item in a[:]:
        if item[0] == '1':
            a.remove(item)
    return a
print(removeOneList(a))  # [['2', '1', '9'], ['5', '6', '7']]
```
Upvotes: 0 |
2018/03/21 | 890 | 3,153 | <issue_start>username_0: I'm trying to build a very basic chrome extension with reactjs. However, I'm getting the following error:
>
> Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'self' 'unsafe-eval'". Either the 'unsafe-inline' keyword, a hash ('sha256-irwQGzGiWtpl6jTh4akRDdUUXCrtjkPABk5rinXZ5yw='), or a nonce ('nonce-...') is required to enable inline execution.
>
>
>
I don't understand where this is coming from, considering that I don't seem to have any inline scripts.
newtab.html:
```
New Tab
```
test.jsx:
```
import React from "react";
import ReactDOM from "react-dom";
class Hello extends React.PureComponent {
render() {
return (
Hello!
======
);
}
}
const element = ;
ReactDOM.render(
element,
document.getElementById('root')
);
```
manifest.json:
```
{
"manifest_version": 2,
"name": "SuperBasicReact",
"description": "Just trying to make this work",
"version": "0.1",
"chrome_url_overrides": {
"newtab": "newtab.html"
},
"browser_action": {
"default_title": "SuperBasicReact"
},
"permissions": [
"http://*/*",
"https://*/*"
],
"content_scripts": [{
"matches": ["http://*/", "https://*/"],
"js": ["test.jsx", "babel.js"]
}],
"content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'; default-src 'self'"
}
```
I'm using chrome version 65.0.3325.162.
Any and all help will be appreciated.
**edit:**
I have seen ["Refused to execute inline script" in ReactJS Chrome Extension even though there are no inline scripts](https://stackoverflow.com/questions/40952233/), however, I don't actually see a solution present at that link.<issue_comment>username_1: I would first off trying to remove at least a lot of the content security policy you have added in the manifest.json
It is kind of weird that you get a hash code in your error when you dont have any in your manifest. So try adding the hash from the error and then
I would try removing the whole content security policy line in the manifest and then testing again, and if that does not work try only adding `script-src 'self'` and then only `default-src 'self'` and then `object-src 'self'`, lately the CSP with google extensions have been weird so you have to experiment, also the `default-src 'self'` often messes up the html format.
Hope this helps
Upvotes: 0 <issue_comment>username_2: The problem comes from the way Babel in browser script works. Because of CSP limitations on extensions, you will have to transpile the jsx code with Babel locally and only add the transpiled (js) file in your html.
You can run Babel from the command line. Please see the relevant [guide here](https://babeljs.io/docs/usage/cli/). For later development, consider using a tool such as Browserify with the babelify plugin. See [usage examples here](https://github.com/babel/babelify).
Upvotes: 3 [selected_answer]<issue_comment>username_3: Try putting `INLINE_RUNTIME_CHUNK=false` in your .env files and rebuild.
Worked for me
thanks to <https://github.com/facebook/create-react-app/issues/5897>
Upvotes: 2 |
2018/03/21 | 2,920 | 8,884 | <issue_start>username_0: I am trying to automate a test using SVG Nodes. I need to click on a particular node.
I click on the element in Firefox, click F12 to get the console up, & type console.dir($0) to see the details. This is what the element looks like:
```
text
__data__: Object { height: 0, depth: 1, x: 0, … }
attributes: NamedNodeMap [ fill="#fff", font-size="36px", font-family="mapicons", … ]
baseURI: "https://fnord.co.uk/#/jobs/add/job?@=-702x6714391y19z&lf=1!19.73740"
childElementCount: 0
childNodes: NodeList [ #text ]
children: HTMLCollection []
classList: DOMTokenList []
className: SVGAnimatedString { baseVal: "", animVal: "" }
clientHeight: 0
clientLeft: 0
clientTop: 0
clientWidth: 0
dataset: DOMStringMap { }
dx: SVGAnimatedLengthList { baseVal: SVGLengthList, animVal: SVGLengthList }
dy: SVGAnimatedLengthList { baseVal: SVGLengthList, animVal: SVGLengthList }
farthestViewportElement:
firstChild: #text ""
firstElementChild: null
id: ""
innerHTML: "\ue05e"
isConnected: true
lastChild: #text ""
lastElementChild: null
lengthAdjust: SVGAnimatedEnumeration { baseVal: 1, animVal: 1 }
localName: "text"
namespaceURI: "http://www.w3.org/2000/svg"
nearestViewportElement:
nextElementSibling: null
nextSibling: null
nodeName: "text"
nodeType: 1
nodeValue: null
onabort: null
onanimationcancel: null
onanimationend: null
onanimationiteration: null
onanimationstart: null
onauxclick: null
onblur: null
oncanplay: null
oncanplaythrough: null
onchange: null
onclick: null
onclose: null
oncontextmenu: null
oncopy: null
oncut: null
ondblclick: null
ondrag: null
ondragend: null
ondragenter: null
ondragexit: null
ondragleave: null
ondragover: null
ondragstart: null
ondrop: null
ondurationchange: null
onemptied: null
onended: null
onerror: null
onfocus: null
ongotpointercapture: null
oninput: null
oninvalid: null
onkeydown: null
onkeypress: null
onkeyup: null
onload: null
onloadeddata: null
onloadedmetadata: null
onloadend: null
onloadstart: null
onlostpointercapture: null
onmousedown: null
onmouseenter: null
onmouseleave: null
onmousemove: null
onmouseout: null
onmouseover: null
onmouseup: null
onmozfullscreenchange: null
onmozfullscreenerror: null
onpaste: null
onpause: null
onplay: null
onplaying: null
onpointercancel: null
onpointerdown: null
onpointerenter: null
onpointerleave: null
onpointermove: null
onpointerout: null
onpointerover: null
onpointerup: null
onprogress: null
onratechange: null
onreset: null
onresize: null
onscroll: null
onseeked: null
onseeking: null
onselect: null
onselectstart: null
onshow: null
onstalled: null
onsubmit: null
onsuspend: null
ontimeupdate: null
ontoggle: null
ontransitioncancel: null
ontransitionend: null
ontransitionrun: null
ontransitionstart: null
onvolumechange: null
onwaiting: null
onwebkitanimationend: null
onwebkitanimationiteration: null
onwebkitanimationstart: null
onwebkittransitionend: null
onwheel: null
outerHTML: "\ue05e"
ownerDocument: HTMLDocument https://alloy-labs.yotta.co.uk/#/jobs/add/job?@=-702x6714391y19z&lf=1!19.73740
ownerSVGElement:
parentElement:
parentNode:
prefix: null
previousElementSibling:
previousSibling:
requiredExtensions: SVGStringList { length: 0, numberOfItems: 0 }
requiredFeatures: SVGStringList { length: 0, numberOfItems: 0 }
rotate: SVGAnimatedNumberList { baseVal: SVGNumberList, animVal: SVGNumberList }
scrollHeight: 0
scrollLeft: 0
scrollLeftMax: 0
scrollTop: 0
scrollTopMax: 0
scrollWidth: 0
style: CSS2Properties { }
systemLanguage: SVGStringList { length: 0, numberOfItems: 0 }
tabIndex: -1
tagName: "text"
textContent: "\ue05e"
textLength: SVGAnimatedLength { baseVal: SVGLength, animVal: SVGLength }
transform: SVGAnimatedTransformList { baseVal: SVGTransformList […], animVal: SVGTransformList […] }
viewportElement:
x: SVGAnimatedLengthList { baseVal: SVGLengthList, animVal: SVGLengthList }
y: SVGAnimatedLengthList
animVal: SVGLengthList { numberOfItems: 0, length: 0 }
baseVal: SVGLengthList
length: 0
numberOfItems: 0
\_\_proto\_\_: SVGLengthListPrototype { clear: clear(), initialize: initialize(), getItem: getItem(), … }
\_\_proto\_\_: SVGAnimatedLengthListPrototype { baseVal: Getter, animVal: Getter, … }
\_\_proto\_\_: SVGTextElementPrototype
constructor: ()
length: 0
name: "SVGTextElement"
prototype: SVGTextElementPrototype { … }
Symbol(Symbol.hasInstance): undefined
\_\_proto\_\_: function ()
\_\_proto\_\_: SVGTextPositioningElementPrototype
```
I tried finding it using find by classname, but that didn't work:
```
String className="SVGAnimatedString";
List cardTitles = driver.findElements(By.className(className));
```
Evidently className does not mean this kind of className?
So I tried finding it using cssSelector. This works, but is less useful, as the CSS Selector changes each time screen loads.
Having found my element , if I click it just using element.click, nothing happens.
If I click on it using this Javascript, it throws an exception:
```
JavascriptExecutor executor = (JavascriptExecutor) driver;
executor.executeScript("arguments[0].click();", element);
```
The exception message:
>
> org.openqa.selenium.JavascriptException: TypeError: arguments[0].click is not a function
>
>
>
How do I get the click to work on this element?
This is NOT a duplicate of [Selenium WebDriver [Java]: How to Click on elements within an SVG using XPath](https://stackoverflow.com/questions/41829000/selenium-webdriver-java-how-to-click-on-elements-within-an-svg-using-xpath)
Unlike that enquirer, I have managed to find my element.
My problem is with clicking that element.
Browser: Firefox Quantum 59.0.1 (64-bit)
Geckodriver: 0.19.1
Selenium Jar: 3.8.1<issue_comment>username_1: You could use something like this if the classname has something about it that never changes. A partial search would be a simple explanation:
```
targetElement.findElement(By.cssSelector("[class*='partialClassNameString']"));
```
The "\*" indicates that you want to match on either this string or this string as a substring of any web element that has a className attribute.
As far as clicking on the element goes, have you tried using submit? I'm grasping at straws here, but it doesn't hurt to try: instead of .click() you use .submit()
Upvotes: 0 <issue_comment>username_2: I think your find by is wrong. There are also lots of things to add to this post to help debug.
1. What version of firefox, webdriver (firefox, marionette or gecko)
and selenium jar are you using?
2. Does it work in other browsers?
In response to:
*Evidently className does not mean this kind of className*
You can check your find syntax from the console so in your case type the following to find an element that has a class of SVGAnimatedString:
```
$('.SVGAnimatedString')
```
Click on the collapse arrow to see if anything was found.
From the properties you posted your element does not have any class values set:
```
classList: DOMTokenList []
```
At this website <https://demos.telerik.com/kendo-ui/dragdrop/events>
Click on the big circle and repeat
```
console.dir($0)
classList: DOMTokenList [ "k-header" ]
```
And this is the element in the DOM
```
<div id="droptarget" data-role="droptarget" class="k-header">Drag the small circle here.</div>
```
Thus these CSS selectors work:
```
$('.k-header') ==> finds 6, class only
$('div.k-header') ==> finds 4, element type and class
$('div.k-header[data-role="droptarget"]') ===> finds 1, as above with attribute and value
```
What CSS selector have you got to work?
I notice the properties posted also have:
```
clientHeight: 0
clientLeft: 0
clientTop: 0
clientWidth: 0
```
So it looks like you've got a hidden element?
My advice would be to click on the element, show it in the DOM, and try a couple of elements above or below the one you are working with; you may find an element with stable properties that make it easier.
You can also use this code to double check where your webdriver and selenium think the element is (updating for creating the driver):
```
WebDriver webDriver = new FirefoxDriver();
Actions actions = new Actions(webDriver);
webDriver.navigate().to("https://demos.telerik.com/kendo-ui/dragdrop/events");
Thread.sleep(500);
WebElement drop = webDriver.findElement(By.cssSelector("#droptarget"));
//movetoelement should be center and is
actions.moveToElement(drop).contextClick().perform();
Thread.sleep(500);
webDriver.quit();
```
I've worked with SVG elements and the normal .click() has worked fine:
Firefox 57.0.2, 57.0.3, 58.0.2, 59.0 with geckodriver 0.18.0 and selenium-java:3.7.1
Upvotes: 2 [selected_answer] |
2018/03/21 | 1,405 | 4,186 | <issue_start>username_0: So I've got these two arrays:
```
resultList: { date: string, amount: number }[] = [];
dateList: { date: string, amounts: { amount: number }[] }[] = [];
```
One of them has all results, which I want to group by date, hence the second array.
This is the code I'm using to try to achieve this:
```
this.resultList.forEach((result) => {
let dateFound: boolean = false;
this.dateList.forEach((date) => {
if (result.date === date.date) {
dateFound = true;
return;
}
});
if (dateFound == false) {
//create new date entry in dateList
this.dateList.push({date: result.date, amounts: []});
}
//find that date entry and push a value to it's sub array called amounts
this.dateList.find((dates) => {
return dates.date == result.date
}).amounts.push({
amount: result.amount
});
});
```
OUTPUT if you have 3 results of the same date
```
[
{date: '2018-03-21', amounts: [{amount: 1}]},
{date: '2018-03-21', amounts: [{amount: 1},{amount: 43}]},
{date: '2018-03-21', amounts: [{amount: 1},{amount: 43}, {amount: 55}]}
]
```
Desired OUTPUT if you have 3 results of the same date
```
[
{date: '2018-03-21', amounts: [{amount: 1},{amount: 43}, {amount: 55}]}
]
```<issue_comment>username_1: I added an `else` clause to your if condition and removed the `.find()` part:
```
if (dateFound == false) {
//create new date entry in dateList
this.dateList.push({date: result.date, amounts: []});
} else {
for (let d of this.dateList) {
if (d.date == result.date) {
d.amounts.push({amount: result.amount})
}
}
}
```
Upvotes: 1 <issue_comment>username_2: You can do this by first [reducing](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce) your data into an object with the unique dates as the keys and the amounts as the value for each key, and then [mapping](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Map) them into the structure for your desired output, like so:
```js
var data = [
{date: '2018-03-21', amount: 1},
{date: '2018-03-21', amount: 43},
{date: '2018-03-21', amount: 41},
{date: '2018-03-22', amount: 18},
{date: '2018-03-23', amount: 25},
{date: '2018-03-24', amount: 15},
{date: '2018-03-24', amount: 25},
];
// reduce to single object with unique dates as keys, collection of amounts as values
var dateMap = data.reduce((res, curr) => {
// if the date has not been added already
if (!res[curr.date]) {
// create it on the result object
res[curr.date] = []
}
// push the amount into the array for the date
res[curr.date].push({amount: curr.amount});
return res;
}, {});
// map each key of dateMap to an object matching the desired output format
var dateList = Object.keys(dateMap).map(key => {
return {date: key, amounts: dateMap[key]};
});
console.log(dateList);
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can [reduce](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce) the array into a [Map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map). For each date create an object with the `amounts` attribute, and fill it with the amount values from the same date. Then [spread](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) the [`Map.values()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/values) back to an array:
```js
const data = [{"date":"2018-03-21","amount":1},{"date":"2018-03-21","amount":43},{"date":"2018-03-21","amount":41},{"date":"2018-03-22","amount":18},{"date":"2018-03-23","amount":25},{"date":"2018-03-24","amount":15},{"date":"2018-03-24","amount":25}];
const result = [...data
.reduce((r, o) => {
const { date, amount } = o;
r.has(date) || r.set(date, {
date,
amounts: []
});
r.get(date).amounts.push({ amount });
return r;
}, new Map())
.values()
];
console.log(result);
```
Upvotes: 0 |
2018/03/21 | 1,134 | 3,094 | <issue_start>username_0: I want to merge a `List[List[Double]]` based on the values of the elements in the inner Lists. Here's what I have so far:
```
// inner Lists are (timestamp, ID, measurement)
val data = List(List(60, 0, 3.4), List(60, 1, 2.5), List(120, 0, 1.1),
List(180, 0, 5.6), List(180, 1, 4.4), List(180, 2, 6.7))
data
.foldLeft(List[List[Double]]())(
(ret, ll) =>
// if this is the first list, just add it to the return val
if (ret.isEmpty){
List(ll)
// if the timestamps match, add a new (ID, measurement) pair to this inner list
} else if (ret(0)(0) == ll(0)){
{{ret(0) :+ ll(1)} :+ ll(2)} :: ret.drop(1)
// if this is a new timestamp, add it to the beginning of the return val
} else {
ll :: ret
}
)
```
This works, but it doesn't smell optimal to me (especially the right-additions '`:+`'). For my use case, I have a pretty big (~25,000 inner Lists) List of elements, which are themselves all length-3 Lists. At most, there will be a fourfold degeneracy, because the inner lists are `List(timestamp, ID, measurement)` groups, and there are only four unique IDs. Essentially, I want to smush together all of the measurements that have the same timestamps.
Does anyone see a more optimal way of doing this?
I actually start with a `List[Double]` of timestamps and a `List[Double]` of measurements for each of the four IDs, if there's a better way of starting from that point.<issue_comment>username_1: Here is a slightly shorter way to do it:
```
data.
groupBy(_(0)).
mapValues(_.flatMap(_.tail)).
toList.
map(kv => kv._1 :: kv._2)
```
The result looks 1:1 exactly the same as what your algorithm produces:
```
List(List(180.0, 0.0, 5.6, 1.0, 4.4, 2.0, 6.7), List(120.0, 0.0, 1.1), List(60.0, 0.0, 3.4, 1.0, 2.5))
List(List(180.0, 0.0, 5.6, 1.0, 4.4, 2.0, 6.7), List(120.0, 0.0, 1.1), List(60.0, 0.0, 3.4, 1.0, 2.5))
```
Explanation:
* group by timestamp
* in the grouped values, drop the redundant timestamps, and flatten to single list
* tack the timestamp back onto the flat list of ids-&-measurements
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is a possibility:
```
input
.groupBy(_(0))
.map { case (tstp, values) => tstp :: values.flatMap(_.tail) }
```
The idea is just to group inner lists by their first element and then flatten the resulting values.
which returns:
```
List(List(180.0, 0.0, 5.6, 1.0, 4.4, 2.0, 6.7), List(120.0, 0.0, 1.1), List(60.0, 0.0, 3.4, 1.0, 2.5))
```
Upvotes: 2 <issue_comment>username_3: What about representing your measurements with a case class?
```
case class Measurement(timestamp: Int, id: Int, value: Double)
val measurementData = List(Measurement(60, 0, 3.4), Measurement(60, 1, 2.5),
Measurement(120, 0, 1.1), Measurement(180, 0, 5.6),
Measurement(180, 1, 4.4), Measurement(180, 2, 6.7))
measurementData.foldLeft(List[Measurement]())({
case (Nil, m) => List(m)
case (x :: xs, m) if x.timestamp == m.timestamp => m :: xs
case (xs, m) => m :: xs
})
```
Upvotes: 1 |
2018/03/21 | 454 | 1,781 | <issue_start>username_0: I am a beginner and learning WPF Application. I have a simple project and in that I want to read DB Configuration string from App.Config File. But I am not able to do so. Below is my attempt:
**APP.Config File:**
```
xml version="1.0" encoding="utf-8" ?
```
**CS Code:**
```
public static void GetDataFromDB()
{
//var CS = @"Data Source=.\;Initial Catalog=Connect;Integrated Security=SSPI";
// ABOVE CODE WORKS FINE
string CS = ConfigurationManager.ConnectionStrings["DBCS"].ConnectionString;
using (SqlConnection con = new SqlConnection(CS))
{
con.Open();
SqlDataAdapter da = new SqlDataAdapter("Select * from tblTenant", con);
DataSet ds = new DataSet();
da.Fill(ds);
}
}
```
[](https://i.stack.imgur.com/ZYFOr.png)
Edit:
[](https://i.stack.imgur.com/THPRh.png)<issue_comment>username_1: Add "clear" to your app.config before the connection string definition. It will look like this:
```
<connectionStrings>
  <clear />
  <add name="DBCS"
       connectionString="Data Source=.\;Initial Catalog=Connect;Integrated Security=SSPI" />
</connectionStrings>
```
Upvotes: 0 <issue_comment>username_2: Try to change in the App.config:
Upvotes: 0 <issue_comment>username_3: You need to put the connection string in the `App.config` of the running WPF application and not in the DAL or any other class library.
The `ConfigurationManager` class reads the configuration file of the running executable.
Upvotes: 4 [selected_answer]<issue_comment>username_4: You can add Connection string to both DAL and UI projects.
Try changing `Data Source=.\` to `Data Source=.`.
Upvotes: 0 |
2018/03/21 | 849 | 2,352 | <issue_start>username_0: So in this project I'm working on I'm currently trying to do a reset-password function which generates a URL which is then given to the user with a secret attached to it to be validated in the backend.
The problem is that the secret needs to be stored in the db (postgresql) in a bytea format. Is there a way to convert a UUID to a bytea format in php? I've tried some different variations of pack and unpack as well as a few functions I found on the interwebs but with no success.
I'm not allowed to alter the db in any way.<issue_comment>username_1: You can get the bytes from a Ramsey\Uuid\Uuid object like this:
```
$uuid = \Ramsey\Uuid\Uuid::uuid4();
$bytes = $uuid->getBytes();
```
Upvotes: 2 <issue_comment>username_2: FWIW, this is what i do (MySQL context):
```
/**
*
* pack with H* to ensure compatibility with MySQL function UNHEX.
*
* @param string $stringGuid
*
* @return string 16 bytes long
*
* @ref https://stackoverflow.com/questions/2839037/php-mysql-storing-and-retrieving-uuids
*/
function stringGuidAsBinaryGuid($stringGuid)
{
$binary = pack("H*" , str_replace('-' , '' , $stringGuid));
return $binary;
}
function binaryGuidAsStringGuid($binaryGuid)
{
$string = unpack("H*" , $binaryGuid);
$string = preg_replace(
"/([0-9a-f]{8})([0-9a-f]{4})([0-9a-f]{4})([0-9a-f]{4})([0-9a-f]{12})/" ,
"$1-$2-$3-$4-$5" ,
$string["1"]
);
return $string;
}
/***
*
* returns a valid GUID, v4 (string, 36 bytes)
*
* @link http://php.net/manual/en/function.com-create-guid.php
*
* @return string
*
*/
function GUID()
{
// prefer strong crypto variant
if (function_exists('openssl_random_pseudo_bytes') === true) {
$data = openssl_random_pseudo_bytes(16);
$data[6] = chr(ord($data[6]) & 0x0f | 0x40); // set version to 0100
$data[8] = chr(ord($data[8]) & 0x3f | 0x80); // set bits 6-7 to 10
return vsprintf('%s%s-%s-%s-%s-%s%s%s' , str_split(bin2hex($data) , 4));
}
return sprintf(
'%04X%04X-%04X-%04X-%04X-%04X%04X%04X' ,
mt_rand(0 , 65535) ,
mt_rand(0 , 65535) ,
mt_rand(0 , 65535) ,
mt_rand(16384 , 20479) ,
mt_rand(32768 , 49151) ,
mt_rand(0 , 65535) ,
mt_rand(0 , 65535) ,
mt_rand(0 , 65535)
);
}
```
Upvotes: 1 |
2018/03/21 | 948 | 2,773 | <issue_start>username_0: I am trying to add the contents of a CSV file to a new list. It's a list of different types of people with characteristics like the function, a matricule, the last name, the first name and a sex. I managed to read the file, but I don't really know how to add the contents of the file to my list. Here is my code:
```
private static void ReadTest()
{
int count = 0;
string line;
Char c = ';';
StreamReader file= new StreamReader("Listing.csv");
while ((line= file.ReadLine()) != null)
{
String[] substrings = line.Split(c);
foreach (var substring in substrings)
{
Console.WriteLine(substring);
}
count++;
}
file.Close();
System.Console.WriteLine("Number of lines : {0}.", count);
}
static void Main(string[] args)
{
List<Worker> Workers = new List<Worker>();
}
```
|
2018/03/21 | 433 | 1,154 | <issue_start>username_0: I have the following dataframe:
```
'A' 'B' 'Dict'
a f {'k1': 'v1', 'k2': 'v2'}
b h {}
c g {'k3': 'v3'}
… … …
```
And I would like the following:
```
'A' 'B' 'Keys'
a f k1
a f k2
c g k3
… … …
```
That is, getting the keys of a dict to make the rows of the new dataframe. The dict may be empty or contain an arbitrary number of elements.
Here is the solution I am using now. It works, but seems to be quite inefficient and not very pythonic…
```
my_list = []
for row in subset.iterrows():
    for key in row[1][2].keys():
        my_list.append((row[1][0], row[1][1], key))
new_df = pd.DataFrame(my_list)
```
Thanks in advance for your ideas!<issue_comment>username_1: You'll need `stack` here:
```
pd.DataFrame(
df.Dict.tolist(),
index=pd.MultiIndex.from_arrays([df.A, df.B])
).stack().reset_index()
A B level_2 0
0 a f k1 v1
1 a f k2 v2
2 c g k3 v3
```
Upvotes: 1 <issue_comment>username_2: Or you can `set_index()`
```
df.set_index(['A','B'])['Dict'].apply(pd.Series).stack().reset_index()
```
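Run against the sample frame from the question, the keys land in the auto-named `level_2` column, which can then be renamed:

```python
import pandas as pd

df = pd.DataFrame({
    'A': ['a', 'b', 'c'],
    'B': ['f', 'h', 'g'],
    'Dict': [{'k1': 'v1', 'k2': 'v2'}, {}, {'k3': 'v3'}],
})

out = df.set_index(['A', 'B'])['Dict'].apply(pd.Series).stack().reset_index()
# the empty dict row is dropped by stack(); rename level_2 to taste
out = out.rename(columns={'level_2': 'Keys'})
print(out[['A', 'B', 'Keys']])
```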
Upvotes: 3 [selected_answer] |
2018/03/21 | 995 | 3,543 | <issue_start>username_0: I have a problem with lazy loading and webpack.
There is a video of <NAME> showing how easy it is with webpack 4 to create a lazy loaded bundle ([Here](https://www.youtube.com/watch?v=hQBAQh5AFwA)). But when I try to do it with typescript I run into some problems.
```
index.ts
export const someThing = something => import("./lazy/lazy");
```
and
```
lazy/lazy.ts
export default "I am lazy";
```
when I run it without any webpack configuration and name the files to ".js" I get a main chunk and a small chunk for the lazy loaded module.
But when I run it as ".ts" files with a simple webpack configuration I get just the "main.js" file and no extra chunk.
```
webpack.config.js
let config = {
resolve: {
extensions: [".ts", ".js"]
},
module: {
rules: [
{ test: /\.ts$/, use: ["ts-loader"], exclude: /node_modules/ },
]
},
}
module.exports = config;
```
and
```
tsconfig.json
{
"compilerOptions": {
"module": "commonjs",
"target": "es5",
"noImplicitAny": false,
"sourceMap": true,
"lib": [ "es6", "dom" ],
"removeComments": true
}
}
```
Is there something to configure that I am missing?
What exactly is the difference between importing a ".js" file and a ".ts" file?
Take this code :
```
export const LoadMe = () => import('./a-module')
```
* `"module": "commonjs"` compiles to `module.exports.LoadMe = () => require('a-module')`, Webpack can't know if it's dynamic or just a normal `require`
* `"module": "esnext"` compiles to `export const LoadMe = () => import('a-module')`, Webpack knows it's dynamic because it's a call expression to `import`
Upvotes: 5 [selected_answer]<issue_comment>username_2: I have a trick one to lazyLoad a module with its type:
`function lazyLoad(){
return let lazyModule:typeof import('xxx') = require('xxx');
}`
limitation: **the xxx can only be string not a variable**.
Upvotes: 0 <issue_comment>username_3: There is no way to configure the Typescript compiler to not touch certain imports.
The [old way](https://webpack.js.org/api/module-methods/#requireensure) of lazy loading works just as well though:
```js
require.ensure([], require => {
const lodash = require('lodash') as typeof import('lodash');
}, 'lodash-chunk');
```
this can be wrapped in a promise to achieve a very similar behavior to the ES6 `import`
```js
function importLodash(): Promise<typeof import('lodash')> {
return new Promise((resolve, reject) => {
require.ensure([], require => {
resolve(require('lodash'))
}, reject, 'lodash-chunk');
})
}
// then use it
const lodash = await importLodash();
```
\*note- `require.ensure` is not generic - this function will need to be duplicated for every module you want to lazy load.
You'll also need to declare the `require.ensure` interface based on your enabled libs. I use this in my custom typings file
```js
/* typings.d.ts */
declare global {
const require: {
// declare webpack's lazy loading interface
ensure(deps: string[], cb: (lazyRequire: (path: string) => any) => void, chunkName: string): void;
ensure(deps: string[], cb: (lazyRequire: (path: string) => any) => void, errCb: (err: Error) => void, chunkName: string): void;
};
}
```
Upvotes: 1 |
2018/03/21 | 1,174 | 3,177 | <issue_start>username_0: I have data that looks like this:
```
DATE TIME_M EX BID ASK SYM_ROOT SYM_SUFFIX
0 20180312 9:30:00.052465558 V 41.67 43.77 TRIP NaN
1 20180312 9:30:00.207724531 B 41.66 43.61 TRIP NaN
2 20180312 9:30:00.208090941 K 40.80 44.76 TRIP NaN
3 20180312 9:30:00.208116618 Z 41.62 43.83 TRIP NaN
4 20180312 9:30:00.208691471 V 40.76 43.77 TRIP NaN
```
In order to make it look like this:
```
DATE EX BID ASK time
0 2018-03-12 V 41.67 43.77 34200.052466
1 2018-03-12 B 41.66 43.61 34200.207725
2 2018-03-12 K 40.80 44.76 34200.208091
3 2018-03-12 Z 41.62 43.83 34200.208117
4 2018-03-12 V 40.76 43.77 34200.208691
```
I created the following function:
```
def transform_date_time(file):
    # Transform DATE format to include hyphens:
    file['DATE'] = file['DATE'].apply(lambda x: datetime.datetime.strptime(str(x), '%Y%m%d'))
    # Join DATE and TIME_M
    file["newtime"] = pd.to_datetime(file["DATE"].astype(str) + " " + file["TIME_M"].map(str))
    # Get seconds from midnight
    file["midnight"] = pd.to_datetime(file["DATE"].astype(str) + " " + "00:00:00.000000000")
    file['time'] = file["newtime"] - file["midnight"]  # in timedelta format
    file['time'] = file['time'].apply(lambda x: x.total_seconds())
    # Delete columns that will not be used
    columns = ['SYM_ROOT', 'SYM_SUFFIX', 'TIME_M', 'newtime', 'midnight']
    file.drop(columns, inplace=True, axis=1)
    return file
```
So what I am doing is transforming the `DATE` column to include hyphens, and turning the `TIME_M` column into a 'time' column that is seconds from midnight instead of a regular 24-hour time.
My problem is that this takes a while to run. Is there a more efficient way to do the same thing?<issue_comment>username_1: There is a simpler way with `pandas`.
```
df['DATE'] = pd.to_datetime(df['DATE'], format='%Y%m%d')
df['TIME_M'] = pd.to_timedelta(df['TIME_M']).dt.total_seconds()
```
In general, avoid `lambda` as this is just a poorly disguised and generally inefficient loop.
Note that your output for date is a `datetime` object. Internally, it is represented as an integer. The dashes are just there for presentation.
If you *really* need the dashes, you need to convert *back to string*, something I wouldn't recommend unless absolutely required.
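If the dashed string form really is required somewhere downstream, `dt.strftime` does that conversion explicitly (shown on a small throwaway frame):

```python
import pandas as pd

df = pd.DataFrame({'DATE': [20180312, 20180313]})
df['DATE'] = pd.to_datetime(df['DATE'], format='%Y%m%d')
# object-dtype strings, no longer datetime64 -- only do this for display
dashed = df['DATE'].dt.strftime('%Y-%m-%d')
print(dashed.tolist())
```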
Upvotes: 2 [selected_answer]<issue_comment>username_2: DateTime conversions can be expensive. So let's try to avoid them where possible:
Most obvious optimization:
The same date and time (up to the decimal) are repeated, so it would be better to only do the conversion when they change, and reuse the result of the last conversion as long as they stay the same (which, for this data, will probably happen a lot).
When they change you don't even have to use datetime conversions in your case. For the date you can simply insert the `'-'` using substring, and for the time it would be easy to calculate the seconds without converting to time or datetime first. Just extract the 3 parts, multiply by 3600, 60 and add the seconds.
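That idea can be sketched in plain Python (the function names are mine; this assumes `TIME_M` strings shaped like `'9:30:00.052465558'` and `DATE` integers like `20180312`):

```python
def add_dashes(date_int):
    s = str(date_int)                    # '20180312'
    return s[:4] + '-' + s[4:6] + '-' + s[6:]

def seconds_from_midnight(time_str, _cache={}):
    # cache the whole-second part, which repeats for many rows
    whole, _, frac = time_str.partition('.')
    if whole not in _cache:
        h, m, s = (int(p) for p in whole.split(':'))
        _cache[whole] = h * 3600 + m * 60 + s
    return _cache[whole] + (float('0.' + frac) if frac else 0.0)
```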
Upvotes: 0 |
2018/03/21 | 313 | 1,293 | <issue_start>username_0: I am trying to execute some statements within Try block. I am calling an API and performing some operations. There is one specific operation which is resulting in RuntimeError which is puked while running the program, although I am catching it with an exception. How do I go about avoiding the errors being puked from Try block?
```
try:
    # call API and perform some tasks;
    # an error is encountered here
    ...
except RuntimeError as ex:
    print(ex)
```<issue_comment>username_1: The whole point of try and except is that errors thrown by the code in the try block are caught in the except block. Post the actual code so the specific error can be diagnosed.
Upvotes: 0 <issue_comment>username_2: As username_1 <NAME> said, the point of try and except is to catch the error thrown in the try block.
If there is some flow in the try block whose exceptions you want to ignore, you can use a nested try/except for that specific error and keep executing the logic after it is ignored.
for example:
```
try:
    try:
        # call the API and perform some tasks;
        # an error is raised somewhere in here
        ...
    except TheErrorYouWantToIgnore:
        pass
    # keep on doing some stuff even though
    # TheErrorYouWantToIgnore was thrown
    ...
except RuntimeError as ex:
    print(ex)
```
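A runnable version of the same pattern, with the placeholders replaced by invented demo names, showing that execution continues after the ignored error:

```python
log = []

def call_api():
    log.append('api called')
    raise ValueError('bad payload')   # stands in for the error to ignore

try:
    try:
        call_api()
    except ValueError:
        pass                          # swallow just this error
    log.append('kept going')          # still runs after the ignored error
except RuntimeError as ex:
    log.append(f'outer caught: {ex}')

print(log)
```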
Upvotes: 1 |
2018/03/21 | 386 | 1,439 | <issue_start>username_0: I have a list of strings which is returned from a query in List A.
I am trying to use StringUtils.join to combine the values in the list into a string separated by commas and wrapped in quotes, but it is not working as expected.
Values in abcList - [abc, cde, fgh]
```
abcList.addAll(jdbcTemplate.queryForList(abcSql, String.class));
String abc= StringUtils.join(abcList, "','");
abc = "'" +abc+ "'";
```
Expected output - 'abc', 'cde', 'fgh'
Actual output - 'abc, cde, fgh'
I am not sure what I am doing wrong here, as I want to pass the values from the string abc into a query with an "IN" condition. |
2018/03/21 | 403 | 1,589 | <issue_start>username_0: I have this function to toggle a dark mode on and off with a single button.
It checks whether the dark.css stylesheet is already added to the site; if so, it removes it. If there isn't a dark.css, it loads it and appends it to the head.
Now I want to store this information in the localStorage so the browser remembers whether the dark.css should be loaded or not.
```
$(function() {
$('#toggler').click(function(){
if ($('link[href*="css/dark.css"]').length) {
$('link[href*="css/dark.css"]').remove();
}
else {
var darkMode = document.createElement('link');
darkMode.rel="stylesheet";
darkMode.href="css/dark.css";
document.getElementsByTagName('head')[0].appendChild(darkMode);
}
});
})
```
|
2018/03/21 | 631 | 2,529 | <issue_start>username_0: I need to address the following issue
As a client I connect to a server, the server sends blocks of data in the following form:
>
> [4 bytes][msg - block of bytes the size of int(previous 4 bytes)]
>
>
>
When using `twisted` I need to make `dataReceived(self, data)` to be called with the `msg` part, I don't mind receiving the the 4 bytes prefix, but I have I need to make sure I get the entire message block in one piece, not fragmented, one at a time.
Please advise.<issue_comment>username_1: I've ended up writing the following Custom Receiver
```
from twisted.internet.protocol import Protocol
import logging

logger = logging.getLogger(__name__)

HEADER_LENGTH = 4

class CustomReceiver(Protocol):
    _buffer = b''

    def dataReceived(self, data):
        logger.info(f'DATA RECEIVED: {data}')
        data = self._buffer + data
        header = data[:HEADER_LENGTH]
        logger.info(f'header: {header} len: {len(header)}')
        while len(header) == HEADER_LENGTH:
            response_length = int.from_bytes(header, byteorder='big')
            # wait for the rest of the body if it has not all arrived yet
            if len(data) < HEADER_LENGTH + response_length:
                break
            response = data[HEADER_LENGTH:HEADER_LENGTH + response_length]
            self.responseReceived(response)
            data = data[HEADER_LENGTH + response_length:]
            header = data[:HEADER_LENGTH]
        self._buffer = data
```
I'm not sure if I should add a locking mechanism for `dataReceived()`, simultaneous invocations will corrupt the `_buffer` data.
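For what it's worth, the framing itself can be factored into pure helpers (the names are mine), which makes the buffering logic unit-testable without a reactor:

```python
def frame(payload: bytes) -> bytes:
    """Prefix a payload with its length as a 4-byte big-endian integer."""
    return len(payload).to_bytes(4, 'big') + payload

def split_frames(buffer: bytes):
    """Return (complete_messages, leftover_bytes) from a receive buffer."""
    messages = []
    while len(buffer) >= 4:
        length = int.from_bytes(buffer[:4], 'big')
        if len(buffer) < 4 + length:
            break  # body not fully received yet
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer
```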
Upvotes: 0 <issue_comment>username_2: `StatefulProtocol` is helpful for protocols like this.
```
from twisted.protocols.stateful import StatefulProtocol

HEADER_LENGTH = 4

class YourProtocol(StatefulProtocol):
    # Define the first handler and what data it expects.
    def getInitialState(self):
        return (
            # The first handler is self._header
            self._header,
            # And it expects HEADER_LENGTH (4) bytes
            HEADER_LENGTH,
        )

    # When HEADER_LENGTH bytes have been received, this is called.
    def _header(self, data):
        # It returns a tuple representing the next state handler.
        return (
            # The next thing we can handle is a response
            self._response,
            # And the response is made up of this many bytes.
            int.from_bytes(data, byteorder='big'),
        )

    # When the number of bytes from the header has been received,
    # this is called.
    def _response(self, data):
        # Application dispatch of the data
        self.responseReceived(data)
        # Return to the initial state to process the next received data.
        return self.getInitialState()
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 777 | 3,154 | <issue_start>username_0: I have to create code for an observable and an observer. Each observer has its own "update" method, as you already know.
In my observable code, I used the "notifyObservers" method. Its parameter is a string array containing info about my project (command number, name of the hamburger, name of the drink, etc.).
I still have an error in my observer's update method and I don't know where it comes from. Its two parameters are the observable and the string array containing the info I already specified.
here is my code
```
public void update(Observable arg0, Object arg1)
{
if (!(arg0 instanceof BDCommande))
System.out.println("Objet émetteur inconnu\n");
if (arg1 instanceof String[]) {
String[] labels = new String[5];
labels = (String[]) arg1;
}
else
System.out.println("Type de message inconnu");
int num = Integer.parseInt(labels[0]);
}
```
An error is detected in the last line of the method, labels[0] seems to not be recognized.
If someone can help me here, it'd be cool.
Thanks ! ( and sorry for my poor english I'm french )<issue_comment>username_1: I've ended up writing the following Custom Receiver
```
import logging

from twisted.internet.protocol import Protocol

logger = logging.getLogger(__name__)

HEADER_LENGTH = 4

class CustomReceiver(Protocol):
    _buffer = b''

    def dataReceived(self, data):
        logger.info(f'DATA RECEIVED: {data}')
        data = self._buffer + data
        # Consume complete [4-byte header][payload] frames from the buffer.
        while len(data) >= HEADER_LENGTH:
            response_length = int.from_bytes(data[:HEADER_LENGTH], byteorder='big')
            if len(data) < HEADER_LENGTH + response_length:
                break  # the payload has not been fully received yet
            self.responseReceived(data[HEADER_LENGTH:HEADER_LENGTH + response_length])
            data = data[HEADER_LENGTH + response_length:]
        # Keep any incomplete frame for the next dataReceived() call.
        self._buffer = data
```
I'm not sure if I should add a locking mechanism for `dataReceived()`; simultaneous invocations would corrupt the `_buffer` data (although Twisted's reactor is single-threaded, so `dataReceived()` is never invoked concurrently).
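The same length-prefixed parsing can also be exercised outside Twisted. A minimal standalone sketch (the `split_frames` helper name is mine, not part of any library):

```python
HEADER_LENGTH = 4

def split_frames(buffer):
    """Split a byte buffer into complete frames; return (frames, remainder)."""
    frames = []
    while len(buffer) >= HEADER_LENGTH:
        length = int.from_bytes(buffer[:HEADER_LENGTH], byteorder='big')
        if len(buffer) < HEADER_LENGTH + length:
            break  # incomplete payload: keep it buffered for the next read
        frames.append(buffer[HEADER_LENGTH:HEADER_LENGTH + length])
        buffer = buffer[HEADER_LENGTH + length:]
    return frames, buffer

# Frames can arrive split across arbitrary TCP reads:
data = (3).to_bytes(4, 'big') + b'abc' + (2).to_bytes(4, 'big') + b'x'
frames, rest = split_frames(data)
# frames holds the one complete frame; rest holds the incomplete second frame
```

Feeding `rest` plus the next chunk back through the function then completes the second frame.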
Upvotes: 0 <issue_comment>username_2: `StatefulProtocol` is helpful for protocols like this.
```
from twisted.protocols.stateful import StatefulProtocol

HEADER_LENGTH = 4

class YourProtocol(StatefulProtocol):
    # Define the first handler and what data it expects.
    def getInitialState(self):
        return (
            # The first handler is self._header
            self._header,
            # And it expects HEADER_LENGTH (4) bytes
            HEADER_LENGTH,
        )

    # When HEADER_LENGTH bytes have been received, this is called.
    def _header(self, data):
        # It returns a tuple representing the next state handler.
        return (
            # The next thing we can handle is a response
            self._response,
            # And the response is made up of this many bytes.
            int.from_bytes(data, byteorder='big'),
        )

    # When the number of bytes from the header has been received,
    # this is called.
    def _response(self, data):
        # Application dispatch of the data
        self.responseReceived(data)
        # Return to the initial state to process the next received data.
        return self.getInitialState()
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 647 | 2,456 | <issue_start>username_0: Having this timestamp: `1519347000`, I want to convert it into date format inside interpolation like this:
```
{{$ctrl.myTimestamp | date:'MMM d y, hh:mm'}}
```
The resulting value is **Jan 18 1970, 04:02**, which is obviously wrong; the correct one in this case should be **February 23, 2018 12:50**.
[Plunker here](https://plnkr.co/edit/NdzVbBQMsWq1mXKVNv2a?p=preview).
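For reference, decoding the raw value outside Angular confirms the units; a quick Python check (assuming a standard Unix epoch, shown only to illustrate the seconds-vs-milliseconds mismatch):

```python
from datetime import datetime, timezone

ts = 1519347000
# Interpreted as seconds since the epoch (the value's real unit):
as_seconds = datetime.fromtimestamp(ts, tz=timezone.utc)
# Interpreted as if the number were milliseconds (i.e. only ts/1000 seconds):
as_milliseconds = datetime.fromtimestamp(ts / 1000, tz=timezone.utc)
# as_seconds falls on 2018-02-23, while the milliseconds reading
# lands on 1970-01-18, the same wrong date shown above.
```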
Any ideas what's going wrong? |
2018/03/21 | 729 | 2,936 | <issue_start>I would like to write a simple test for a controller which accepts a JSON body. But as soon as I add the `parse.json` BodyParser to the Action, my tests no longer compile.
The setup is basically the plain play-scala-seed project.
Error:
```
[error] ... could not find implicit value for parameter mat: akka.stream.Materializer
[error] status(home) mustBe OK
[error] ^
```
`HomeController`:
```
def index() = Action { implicit request =>
  Ok
}

def json() = Action(parse.json) { implicit request =>
  Ok
}
```
`HomeControllerSpec`:
```
class HomeControllerSpec extends PlaySpec with GuiceOneAppPerTest with Injecting {
  "HomeController POST" should {
    "answer Ok" in {
      val controller = new HomeController(stubControllerComponents())
      val home = controller.json().apply(FakeRequest(POST, "/"))
      status(home) mustBe OK
    }
  }
}
``` |
2018/03/21 | 894 | 3,074 | <issue_start>username_0: I've implemented a quicksort in C++ after reading the Wikipedia article on it. Just trying to refresh my memory from school in prep for interviews for a job change.
Here is my implementation
```
#include <algorithm>
#include <vector>

size_t partition(std::vector<int>& inputs, size_t leftIndex, size_t rightIndex)
{
    int pivotValue = inputs[leftIndex];
    size_t i = leftIndex - 1;
    size_t j = rightIndex + 1;
    while (true)
    {
        while (inputs[++i] < pivotValue);
        while (inputs[--j] > pivotValue);
        if (i >= j)
        {
            return j;
        }
        std::iter_swap(inputs.begin() + i, inputs.begin() + j);
    }
    return 0;
}

void sort(std::vector<int>& inputs, size_t leftIndex, size_t rightIndex)
{
    if (leftIndex < rightIndex)
    {
        size_t pivot = partition(inputs, leftIndex, rightIndex);
        sort(inputs, leftIndex, pivot);
        sort(inputs, pivot + 1, rightIndex);
    }
}

int main()
{
    std::vector<int> inputs = { 3,7,1,2,9,5,4,0,8,6 };
    sort(inputs, 0, inputs.size() - 1);
    return 0;
}
```
It seems to work fine for all my test inputs that I've come up with so far.
*Editing from here for clarity*
If we change
```
if (i >= j)
{
return j;
}
```
to
```
if (i > j)
{
return j;
}
else if(i == j)
{
return j;
}
```
My question is, what set of inputs would exercise the following block of the partition function?
```
if (i > j)
{
return j;
}
```<issue_comment>username_1: There is no `break` in the body of your `while (true) { ... }`, so anything after it is unreachable.
As your test cases run to completion, and there are no exceptions thrown or caught, that means they must have *all* hit `return j;`, *each time* you `partition`ed.
Upvotes: 0 <issue_comment>username_2: Confusion arose because I had implemented the algorithm wrong. As a couple of users pointed out in the comments on the original question (before the edits), the listing on Wikipedia uses a do/while as opposed to a while, so that the increment of i and the decrement of j occur before the evaluation of the condition.
I had assumed there was no functional difference, but there is.
So, given an equivalent partition function:
```
size_t partition(std::vector<int>& inputs, size_t leftIndex, size_t rightIndex)
{
    int pivotValue = inputs[leftIndex];
    size_t i = leftIndex - 1;
    size_t j = rightIndex + 1;
    while (true)
    {
        while (inputs[++i] < pivotValue);
        while (inputs[--j] > pivotValue);
        if (i >= j)
        {
            return j;
        }
        std::iter_swap(inputs.begin() + i, inputs.begin() + j);
    }
    return 0;
}
```
It becomes obvious that the check for i >= j gets a true hit after the loop has walked the markers at each bound closer towards the middle, once they point to the same element or have crossed over each other.
With the fixed increment and decrement loops, an increment and a decrement occur on each iteration of the main loop, whereas before they didn't.
As always, when dealing with algorithms, step through it on paper and it will become clear.
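A quick way to step through it mechanically is a straight transcription into Python (my own sketch; Python's signed ints stand in for the `size_t` wrap-around trick):

```python
def partition(a, left, right):
    # Same Hoare-style partition as the C++ above: move i up and j down
    # before each comparison, and stop once the markers meet or cross.
    pivot = a[left]
    i, j = left - 1, right + 1
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]

def quicksort(a, left, right):
    if left < right:
        p = partition(a, left, right)
        quicksort(a, left, p)
        quicksort(a, p + 1, right)

data = [3, 7, 1, 2, 9, 5, 4, 0, 8, 6]
quicksort(data, 0, len(data) - 1)
# data is now sorted in ascending order
```

Printing `i` and `j` inside the outer loop makes the on-paper trace automatic.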
Upvotes: 2 [selected_answer] |
2018/03/21 | 1,070 | 3,725 | <issue_start>username_0: I'm walking through the tutorials for setting up auth0 as an API gateway authorizer for AWS listed here: <https://auth0.com/docs/integrations/aws-api-gateway/custom-authorizers>
I am using the recommended authorizer from here: <https://github.com/auth0-samples/jwt-rsa-aws-custom-authorizer>
The only modification has been to the config files.
However, when testing the authorizer function, I get the following error:
```
{"name":"JsonWebTokenError","message":"jwt issuer invalid. expected: https://MYSERVICE.auth0.com"}
```
Where MYSERVICE is the auth0 api I have set up. This is confusing, because I've gotten the jwt token through this method:
```
curl --request POST \
--url https://MYSERVICE.auth0.com/oauth/token \
--header 'content-type: application/json' \
--data '{"client_id":"MY_ID","client_secret":"MY_SECRET","audience":"TestApi","grant_type":"client_credentials"}'
```
The resulting token can be loaded into the debugger tool at <https://jwt.io/>, and it reports the iss field as <https://MYSERVICE.auth0.com>
[](https://i.stack.imgur.com/y7i55.png)
Is there a misconfiguration that might cause this issue?<issue_comment>username_1: Went through the entire tutorial after reading your question, and this worked for me (had already done this recently).
It's unclear, but from the error message reported in the question, it looks like the expected issuer does not have a trailing `/` on the end.
However, mine definitely DID have that. Here's a screenshot from JWT.IO of a working token.
[](https://i.stack.imgur.com/62ddb.gif)
You can simply send that to the API (using Postman), appending it as an `Authorization: Bearer {{token}}` header. Using the tutorial's API (AWS petshop), I receive the output:
```
[
{
"id": 1,
"type": "dog",
"price": 249.99
},
{
"id": 2,
"type": "cat",
"price": 124.99
},
{
"id": 3,
"type": "fish",
"price": 0.99
}
]
```
It would be helpful to see your JWT token's `iss` and `aud` (audience) values.
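One more note: issuer validation is an exact string comparison, so even a single trailing-slash difference fails. A quick illustration (the hostname is a placeholder):

```python
token_iss = "https://MYSERVICE.auth0.com/"   # iss claim as minted in the token
configured = "https://MYSERVICE.auth0.com"   # configured issuer, no trailing slash

exact_match = token_iss == configured
# exact_match is False: the strings differ by one character, which is
# enough for issuer validation to reject the token.
```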
Upvotes: 5 [selected_answer]<issue_comment>username_2: Little late to the party, but this is what worked for my Blazor WASM ASP.NET Core 3.1 Web API project when I set up a custom domain and received the same error.
The fix for me was to set the TokenValidationParameters.ValidIssuer = [MY\_CUSTOM\_DOMAIN] in the Startup.cs class of my web service app.
```
public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
    })
    .AddJwtBearer(options =>
    {
        options.Authority = Configuration["Auth0:Authority"];
        options.Audience = Configuration["Auth0:ApiIdentifier"];
        options.TokenValidationParameters.ValidIssuer = Configuration["Auth0:Issuer"];
    });
}
```
Here is my appsettings.config for my server:
```
{
  "AllowedHosts": "*",
  "Auth0": {
    "Authority": "[AUTH0_TENANT_DOMAIN]", (i.e. https://prod-mydomain.us.auth0.com)
    "Issuer": "[MY_CUSTOM_DOMAIN]", (i.e. https://login.mycustomdomain.net/)
    "ApiIdentifier": "[MY_API_DOMAIN]" (i.e. https://example.net/api)
}
}
```
IMPORTANT! => I had to include a trailing "/" in the URL for my custom domain, like this: `https://login.mycustomdomain.net/`. You can verify whether you need the trailing "/" by looking at the `iss` value found in the bearer token (at jwt.io or jwt.ms) passed during the call to your web service.
Upvotes: 0 |
2018/03/21 | 532 | 1,689 | <issue_start>username_0: I have the following code and I made sure its extension and name are correct. However, I still get the error outputted as seen below.
I did see another person ask a similar question here on Stack Overflow, and read the answer, but it did not help me.
[Failed to load a .bin.gz pre trained words2vecx](https://stackoverflow.com/questions/44045881/failed-to-load-a-bin-gz-pre-trained-words2vecx)
Any suggestions how to fix this?
Input:
```
import gensim
word2vec_path = "GoogleNews-vectors-negative300.bin.gz"
word2vec = gensim.models.KeyedVectors.load_word2vec_format(word2vec_path, binary=True)
```
Output:
```
OSError: Not a gzipped file (b've')
```<issue_comment>username_1: The problem is that the file you've downloaded is not a gzip file. If you check the size of the file, it may only be a few KB (that is what happened to me when I downloaded it from [this Github link](https://github.com/mmihaltz/word2vec-GoogleNews-vectors), because it needed git-lfs).
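You can confirm this by inspecting the first bytes of the download: a real gzip stream always starts with the magic bytes `0x1f 0x8b`, while a git-lfs pointer is plain text beginning with `version ...`, which is most likely where the `b've'` in the error comes from. A small check (my own helper, not part of gensim):

```python
def looks_gzipped(first_bytes):
    # Every gzip stream begins with the two magic bytes 0x1f 0x8b.
    return first_bytes[:2] == b"\x1f\x8b"

real_archive = looks_gzipped(b"\x1f\x8b\x08\x00")
lfs_pointer = looks_gzipped(b"version https://git-lfs.github.com/spec/v1\n")
# real_archive is True; lfs_pointer is False, and its first two bytes
# are b've', matching the bytes reported in the OSError above.
```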
Here is an alternate solution to resolve this issue:
Download the model using the below command on your terminal:
```
wget -c "https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz"
```
Then, load the model as you would using gensim:
```
from gensim import models
w = models.KeyedVectors.load_word2vec_format(
'GoogleNews-vectors-negative300.bin', binary=True)
```
Hope this helps you!!
Upvotes: 2 <issue_comment>username_2: **Try this**
```py
from gensim import models

word2vec_path = 'GoogleNews-vectors-negative300.bin.gz'  # local path to the downloaded archive
word2vec = models.KeyedVectors.load_word2vec_format(word2vec_path, binary=True)
```
Upvotes: 0 |
2018/03/21 | 772 | 2,868 | <issue_start>I have recently started converting my React Native app to run on Expo so I could test it on my actual Android device, and I am getting the error below. Since then I have been running my app using the Expo XDE. I am also running on a Windows machine.
The full error message is:
[](https://i.stack.imgur.com/AbxKr.png)
I figured that this has something to do with my index.js, so here it is:
```
import { AppRegistry } from 'react-native';
import App from './App';
AppRegistry.registerComponent('projectTARA', () => 'App');
```<issue_comment>username_1: The problem is you haven't opened your project in Android Studio. The previous project is running in android studio and you are running the latest one in emulator.
Upvotes: 2 <issue_comment>username_2: The following solution worked:
```
AppRegistry.registerComponent('main',() => App);
```
Credits: <NAME>
*(Cannot you people just reply to mark as solve ?)*
Upvotes: 6 [selected_answer]<issue_comment>username_3: I created a project from the docs and saved it as "Practice", so I wrote that line like this:
```
AppRegistry.registerComponent('Practice', () => FixedDimensionsBasics);
```
Then I got an error like the one in the question.
So I checked my AppDelegate.m file in the ios folder and found that the project name is "Project",
so I changed that line to:
```
AppRegistry.registerComponent('Project', () => FixedDimensionsBasics);
```
and my error was solved.
So if you are following the docs, check for that.
Upvotes: 2 <issue_comment>username_4: I came across this issue because I had previously run a React project that was not fully shut down. To fix this, I could have restarted my computer. But I chose to do this instead: (Using MacOS, should work with Linux as well, for Windows you would likely need to use Task Manager or similar):
1. Open terminal
2. Find the program that is using port 8081 by typing "lsof -i :8081". (Note this is LSOF in lowercase)
3. Look at the process ID (under the PID column)
4. Kill the process by typing "kill -9 *PID*" (where *PID* is the process ID number you found in step 3).
After killing the process you should be able to run your react-native project.
Upvotes: 1 <issue_comment>username_5: I removed node\_modules and reinstalled with yarn; that worked for me.
Upvotes: 1 <issue_comment>username_6: **Error?** If you are running your app on the Expo framework, the framework cannot find the registration of the main App, or its entry point.
**Solution?** Go to your entry script; it could be "src/main.js" or, in my case, "App.js".
Then:
```
import { registerRootComponent } from 'expo';
export default function App(){
//your code app code goes here
}
registerRootComponent(App);
```
Refer here: <https://docs.expo.dev/versions/latest/sdk/register-root-component/>
Upvotes: 0 |
2018/03/21 | 821 | 3,188 | <issue_start>Generally, a list comprehension follows the following pattern:
```
my_list = [record for record in records_set]
```
It could be extended to, for example, the following expression, if record is a dictionary:
```
my_list = [(record['attribute_a'],record['attribute_b']) for record in records_set]
```
What if I want the list comprehension to take a list of attributes I want to extract from a record as an argument?
Let's say I have a list:
```
attributes_list = ['attribute_a','attribute_b','attribute_c']
```
As a result of applying it to the list comprehension pattern, I want to get the following list comprehension as a result:
```
my_list = [(record['attribute_a'],record['attribute_b'],record['attribute_c']) for record in records_set]
```
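To pin down the expected output, here is a tiny sample and the result the parameterized version should produce (sketched with `operator.itemgetter` purely as one possible reference; the names are from the example above):

```python
from operator import itemgetter

attributes_list = ['attribute_a', 'attribute_b', 'attribute_c']
records_set = [
    {'attribute_a': 1, 'attribute_b': 2, 'attribute_c': 3},
    {'attribute_a': 4, 'attribute_b': 5, 'attribute_c': 6},
]

# itemgetter with several keys returns a tuple of the looked-up values:
getter = itemgetter(*attributes_list)
my_list = [getter(record) for record in records_set]
# my_list == [(1, 2, 3), (4, 5, 6)]
```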
How do I do it? |
2018/03/21 | 820 | 3,188 | <issue_start>username_0: I'm trying to pass some cmd commands using system() and I would like to be able to "communicate" with cmd, say I code in `system("dir")` in my mainwindow.cpp under my clicked function
this is what it looks like for example
```
void MainWindow::on_pushButton_login_clicked()
{
    std::string platform_server_ip = ui->lineEdit_platform_server_ip->text().toStdString();
    if (platform_server_ip == "dir")
    {
        QMessageBox::information(this, "Login", "all required log in details are correct");
        close();
        const char* c = platform_server_ip.c_str();
        system(c);
        system("ipconfig");
    }
}
```
*I would like to know why it behaves like this and if that's normal*. I've included `CONFIG += console`
in my project file, and checked "run in terminal" (tried it also without) but it never shows me my **desired outcome**.
Instead, what I get is a blank terminal that pops up alongside my GUI; then, when I enter "dir" in my GUI and hit enter, a cmd window pops up really fast and in less than a second it's gone. I've even tried it with `system("ipconfig")` and `system("pause")`,
as well as with one system command like this `system("ipconfig" "&pause")`
**desired outcome:** is just a normal execution of system("ipconfig"), followed by other system commands, that display the same result as typing them in cmd itself.
I've also tried all this in "qt Console application" and I either get the same result, or the output (what would normally be as output on cmd) is then found in "application output" of qt creator.
Is there another better way I can achieve what I want?
I'm truly a noob and would really appreciate some guidance.<issue_comment>username_1: You can try
```
system("cmd /k ipconfig");
```
This will open another terminal window which will stay open (`k` stands for `keep`) at the end of the command execution.
I don't think you need the `CONFIG += console` project setting to achieve this. Calling `system` will start another process, which isn't related at all to the calling application.
If you want to start external programs from within a Qt application, you can use [QProcess class](http://doc.qt.io/qt-5/qprocess.html), which lets you somehow interact with the started processes through standard in/out. For a very simple example, have a form with a push button and a text edit called `textEdit`; in the push button `clicked` slot:
```
QProcess process;
process.start("ipconfig");
process.waitForReadyRead();
ui->textEdit->setText(process.readAll());
process.waitForFinished();
```
This way, you won't see additional console windows, and the command output will be shown directly in your text edit.
This can be generalized in a function like this:
```
bool exec(QString command)
{
    QProcess process;
    process.start(command);
    if (!process.waitForStarted())
    {
        return false; // the process failed to start
    }
    // etc...
    return true;
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: If this is more than just a quick hack/tool, you can look at QProcess for more in-depth control over your process, so that you can read/write the child process pipes.
Upvotes: 0 |
2018/03/21 | 770 | 2,602 | <issue_start>username_0: I have created an exe using cx\_Freeze.
The build was successful.
Then when I was clicking on the exe, I got:-
```
no module named 'queue'
```
So, I researched and added this line:
```
from multiprocessing import Queue
```
Now i'm getting:-
```
no module named 'chardet'.
```
This is the full error i am getting :-
```
Traceback (most recent call last):
File "C:\Users\pc\AppData\Local\Programs\Python\Python36-32\lib\site-pac
kages\cx_Freeze\initscripts\__startup__.py", line 14, in run
module.run()
File "C:\Users\pc\AppData\Local\Programs\Python\Python36-32\lib\site-pac
kages\cx_Freeze\initscripts\Console.py", line 26, in run
exec(code, m.__dict__)
File "normalapi_2103.py", line 1, in
File "C:\Users\pc\AppData\Local\Programs\Python\Python36-32\lib\site-pac
kages\requests-2.18.4-py3.6.egg\requests\\_\_init\_\_.py", line 44, in
import chardet
ModuleNotFoundError: No module named 'chardet'
```
And I am not sure what to import...
I have also tried adding
>
> 'requests' and 'os' packages in setup.py.
>
>
>
But no luck... |
2018/03/21 | 250 | 841 | <issue_start>username_0: Please tell me what was wrong with this line and any suggestions:
```
ALTER TABLE iv_customer_token CHANGE customer_id customer_id INT(10) NOT NULL UNSIGNED;
```
Error:
>
> ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'UNSIGNED' at line 1
>
>
><issue_comment>username_1: `UNSIGNED` needs to come after the data type.
so
```
ALTER TABLE iv_customer_token CHANGE customer_id customer_id INT(10) NOT NULL UNSIGNED;
```
needs to be
```
ALTER TABLE iv_customer_token CHANGE customer_id customer_id INT(10) UNSIGNED NOT NULL ;
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this:
```
ALTER TABLE iv_customer_token MODIFY customer_id INT(10) UNSIGNED NOT NULL;
```
Upvotes: 0 |
2018/03/21 | 676 | 2,347 | <issue_start>username_0: I have the following problem:
I have a class B that extends class A.
The class A has a method M that has a first parameter p1 (required) and a second parameter p2 (optional).
p2 has a default value V.
B overrides the method M, so its signature is the same as A's M.
I am really worried that some developer will change the default value of p2 (V) in A without realizing that M is overridden by child classes and that V is repeated in their signatures.
So, is it possible to create M in a child class of A without repeating V as the default value, but keep the behavior?
```
class A
{
    public function M($p1, $p2 = 'V')
    {
    }
}

class B extends A
{
    public function M($p1, $p2 = null)
    {
        //do something more
        return parent::M($p1, $p2);
        /*
        If the user doesn't pass $p2, it will be null, and null will be passed to A::M too.
        That is not what I want. I would like A::M to get its default value 'V' for $p2.
        */
    }
}
```<issue_comment>username_1: First I want to say it might be a bit weird to not 'trust' developers to use code in a correct way.
But I have a suggestion, since you control the parent class `A`, you can just add some logic into the implementation that just uses value `V` if `null` was passed.
```
class A {
    public function M($p1, $p2 = 'V') {
        if ($p2 === null) $p2 = 'V';
    }
}
```
Upvotes: 1 <issue_comment>username_2: Declare the type of `$p2` in `A`. Then it won't accept a null value.
```
class A
{
    public function M($p1, string $p2 = 'V')
    {
    }
}
```
The `M` method in the child class can still have a `null` default value, which is fine. People *should* be able to change default values when they extend a class. They can change the entire behavior of the `M` method if they want to. But if it calls `parent::M($p1, $p2);` without assigning a string you'll get a type error. The situation you need to handle is the one where *your code* does something unexpected.
Upvotes: 0 <issue_comment>username_3: The PHP function `func_get_args` is very handy here:
```
<?php
function aa($a, $b = null) {
    var_export([$a, $b, func_get_args()]);
}
aa('1 argument');
aa('second NULL', null);
```
In the first case it returns only a one-element array, so you can detect whether the value is the default or was passed explicitly.
Upvotes: 2 [selected_answer] |
2018/03/21 | 524 | 1,685 | <issue_start>Can someone explain to me how this code works?
```
<?php
function append($initial)
{
    $result = func_get_arg(0);
    foreach (func_get_args() as $key => $value) {
        if ($key >= 1)
        {
            $result .= ' ' . $value;
        }
    }
    return $result;
}
echo append('Alex', 'James', 'Garrett');
?>
```
Why do we have a `0` in `func_get_arg(0)`? And since this is a loop over keys `0, 1, 2`, shouldn't it only print Alex, James?
And what does the `as` in `func_get_args() as $key => $value` do? Does it assign names to the array's values?
This is basic but a bit messy!<issue_comment>username_1: Here's how it works:
```
<?php
function append($initial)
{
    // Get the first argument - func_get_arg gets any argument of the function
    $result = func_get_arg(0);
    // Get the remaining arguments and concatenate them into a string
    foreach (func_get_args() as $key => $value) {
        // Ignore the first (0) argument, which is already in the string
        if ($key >= 1)
        {
            $result .= ' ' . $value;
        }
    }
    // Return it
    return $result;
}
// Call the function
echo append('Alex', 'James', 'Garrett');
?>
```
This function does the same as:
```
echo implode(' ', array('Alex', 'James', 'Garrett'));
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Before the foreach loop runs, you have already captured 'Alex', which is at position 0.
```
$result = func_get_arg(0);
foreach () {
}
return $result; // It returns 'Alex'
// foreach() loop
foreach (func_get_args() as $key => $value) {
    /* It keeps looping and only appends once
       the key reaches 1, and then the loop goes on to 2. E.g.: $result[$key] => $value; */
    if ($key >= 1)
    {
        $result .= ' ' . $value;
    }
}
```
Upvotes: 0 |
2018/03/21 | 486 | 1,780 | <issue_start>I am trying to write a string to a file on a new line. Although I added "\n" to the end of the string, I could not get it to work. Code below:
```
Timer t = new Timer();
t.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        for (; true; ) {
            String smsg = new String("$;TRIG;");
            String neuline = "";
            neuline = neuline + "\n";
            neuline = smsg + "\n";
            activeBT.write(neuline.getBytes());
        }
    }
}, 0, 100);
```
can someone help me |
2018/03/21 | 857 | 2,782 | <issue_start>username_0: I am new to python and pandas and I am struggling to figure out how to pull out the 10 counties with the most water used for irrigation in 2014.
```
%matplotlib inline
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
data = pd.read_csv('info.csv') #reads csv
data['Year'] = pd.to_datetime(data['Year'], format='%Y') #converts string to datetime
data.index = data['Year'] #makes year the index
del data['Year'] #delete the duplicate year column
```
This is what the data looks like (this is only partial of the data):
```
County WUCode RegNo Year SourceCode SourceID Annual CountyName
1 IR 311 2014 WELL 1 946 Adams
1 IN 311 2014 INTAKE 1 268056 Adams
1 IN 312 2014 WELL 1 48 Adams
1 IN 312 2014 WELL 2 96 Adams
1 IR 312 2014 INTAKE 1 337968 Adams
3 IR 315 2014 WELL 5 81900 Putnam
3 PS 315 2014 WELL 6 104400 Putnam
```
I have a couple questions:
I am not sure how to pull out only the "IR" in the `WUCode` Column with pandas and I am not sure how to print out a table with the 10 counties with the highest water usage for irrigation in 2014.
I have been able to use the `.loc` function to pull out the information I need, with something like this:
```
data.loc['2014', ['CountyName', 'Annual', 'WUCode']]
```
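A minimal sketch of the remaining steps, illustrative only and using made-up rows mirroring the table above, could look like this (the `groupby(...).sum().nlargest(10)` chain does the ranking):

```python
import pandas as pd

# Hypothetical sample rows modeled on the question's table.
data = pd.DataFrame({
    "WUCode": ["IR", "IN", "IR", "PS", "IR"],
    "Annual": [946, 268056, 337968, 104400, 81900],
    "CountyName": ["Adams", "Adams", "Adams", "Putnam", "Putnam"],
}, index=pd.to_datetime(["2014"] * 5, format="%Y").rename("Year"))

year_2014 = data.loc["2014"]                         # rows for 2014 only
irrigation = year_2014[year_2014["WUCode"] == "IR"]  # irrigation rows only
top = (irrigation.groupby("CountyName")["Annual"]
       .sum()          # total irrigation use per county
       .nlargest(10))  # the ten largest totals
print(top)
```

With the sample rows above this prints Adams (338914) first, then Putnam (81900); on the real data it would list the top ten counties.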
From here I am kind of lost. Help would be appreciated! |
2018/03/21 | 665 | 2,450 | <issue_start>username_0: In my program, I am reading a resource file for a unit test. I use file path as:
```
\\\path\\\to\\\file
```
On my machine (Windows) this runs fine, but on the server (Unix) it fails, and I have to change it to: `/path/to/file`
But Java is supposed to be platform independent. So isn't this behaviour unexpected?<issue_comment>username_1: Use `FileSystem.getSeparator()` or `System.getProperty("file.separator")` instead of using slashes.
EDIT:
You can get an instance of `FileSystem` via `FileSystems.getDefault` (JDK 1.7+)
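For instance, a small self-contained sketch (illustrative, not part of the original answer) comparing the two calls:

```java
import java.nio.file.FileSystems;

public class SeparatorDemo {
    public static void main(String[] args) {
        // Both calls yield "/" on Unix-like systems and "\" on Windows.
        String fromNio = FileSystems.getDefault().getSeparator();
        String fromProp = System.getProperty("file.separator");
        System.out.println(fromNio.equals(fromProp)); // prints "true"
    }
}
```

On a Unix-like system both return `/`; on Windows both return `\`.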
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can use `File.separator` to get the appropriate character in a platform-independent way.
Upvotes: 2 <issue_comment>username_3: Java is platform independent. The file paths and some system calls are not.
As long as the path is relative, you can use `File.separator`:
```
String path = "path" + File.separator + "to" + File.separator + "file";
System.out.println(path); // prints path\to\file on windows
```
Sometimes it's an option is to provide a Properties file and let the user define path of that actual file. This way full paths are okay too. You can read the properties like this:
```
Properties props = new Properties();
props.load(new FileInputStream(filePath));
```
The next question is: how do you specify the location of that file? It might simply be a file on a relative path. If that's not viable for your app, then you can let the user specify it in a system property:
```
java ... -DconfigFile=C:\TEMP\asd.txt .... -jar myapp.jar
```
Then you can access it like this:
```
// prints C:\TEMP\asd.txt if you specified -DconfigFile=C:\TEMP\asd.txt
System.out.println(System.getProperty("configFile"));
```
Upvotes: 1 <issue_comment>username_4: This is the expected behaviour.
Java **code** *compiles* on any machine/OS provided you have the right version of Java installed on it.
However, at run time, your code sees only a variable **value** like another one, which happens to be `\path\to\file`
When it talks to the file system, it uses that particular value ; the file system then tries to find that path you've given to it ; which is why one syntax works fine on Windows but will not work on Linux.
Upvotes: 0 <issue_comment>username_5: Better way of doing this is :
```
val pathUri = Paths.get("./src/test/res/file.txt").toUri()
val input = FileInputStream(File(pathUri))
```
Upvotes: 0 |
2018/03/21 | 807 | 2,919 | <issue_start>username_0: I have some designs I'm following for an iOS project. The font used is Avenir with relatively tight line spacing.
Some of these labels will have dynamic text, so I can't just make the label's size larger since the size should be determined by the content.
By default line spacing for a UILabel ends up pretty large.
[](https://i.stack.imgur.com/OVxqS.png)
If I adjust the `Line Height Multiple` or the `Max Height`, the text along the top ends up cropped.
[](https://i.stack.imgur.com/BztuF.png)
---
[](https://i.stack.imgur.com/Pi0hZ.png)
It should behave like this (Affinity Designer)...
[](https://i.stack.imgur.com/MWl4N.png)
Is there a way to handle this?
Thanks for your help!<issue_comment>username_1: Unfortunately the UILabel has several quirks when it comes to vertical adjustments. A somewhat hacky solution is to move the baseline of the first line down as needed. Depending on if your string ends with a newline, and the amount of tightening you do, you might need to add one or two extra newlines also, otherwise the rendering engine will clip the last line.
The code snippet assumes that `self.label` already has an attributed string assigned to it, and that it has line separator character `0x2028` between the lines. This is usually true when entering multi-line text in IB.
```
// 0x2028 is the unicode line separator character
// Use \n instead if it is what you have
// or calculate the length of the first line in some other way
NSInteger lengthOfFirstLine = [self.label.text componentsSeparatedByString:@"\u2028"][0].length;
NSMutableAttributedString *s = [[NSMutableAttributedString alloc] initWithAttributedString:self.label.attributedText];
// Add two more blank lines so that the rendering engine doesn't clip the last line
[s appendAttributedString:[[NSAttributedString alloc] initWithString:@"\n\n"]];
// Move the baseline offset for the first line down
// the other lines will adjust to this
// 50 is a value you will have to find what looks best for you
[s addAttribute:NSBaselineOffsetAttributeName value:@(-50) range:NSMakeRange(0, lengthOfFirstLine)];
self.label.attributedText = s;
```
Upvotes: 1 <issue_comment>username_2: This works for me. By adding
>
> minimumLineHeight
>
>
>
```
let string = NSMutableAttributedString(string: venue.name)
let style = NSMutableParagraphStyle()
style.lineHeightMultiple = 0.68
style.minimumLineHeight = nameLabel.font.lineHeight
string.addAttribute(NSAttributedString.Key.paragraphStyle,
value: style,
range: NSMakeRange(0, venue.name.count))
nameLabel.attributedText = string
```
Upvotes: 3 [selected_answer] |
2018/03/21 | 319 | 1,104 | <issue_start>username_0: `blue` should be blue without adding ids to elements, without using JavaScript and without knowing the parent `#id`s, as they can change at a future date, while allowing JavaScript to set style attributes that apply correctly.
```css
custom-tag {
color: blue;
/*
* the above should have the specificity of exactly 2×ids + 1×tag
*/
}
/* later loaded CSS, which is not controllable... */
#one custom-tag, #two custom-tag {
color: red;
}
```
```html
blue
orange
green
blue
```<issue_comment>username_1: Increase selector specificity without the need of parents, using `:not()` + non-existing selectors of desired specificity:
```css
any-tag:not(#bogus_id):not(#bogus_id) {
color: blue;
}
#one any-tag, #two any-tag {
color: red;
}
```
```html
blue
orange
green
blue
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can also stack up the selectors like this:
```
.selector.selector
```
which will increase the specificity.
You can also stack them more than twice.
Works in all browsers.
Upvotes: 0 |
2018/03/21 | 3,741 | 10,147 | <issue_start>username_0: I'm trying to calculate the total point scored per player per question.
For each question, I'm retrieving the data in the following format.
```
[
{
"_id":"5ab24e5e49e0f20a06d73ab7",
"player":"Kareltje",
"answers":
[
{
"_id":"5ab227cf07818240934b11a5",
"player":"Peter",
"comment":"",
"points":7,
"__v":0
},
{
"_id":"5ab24bf3b8494c76fd0bb31a",
"player":"André",
"comment":"",
"points":6,
"__v":0
},
{
"_id":"5ab24bf7b8494c76fd0bb31b",
"player":"Maikel",
"comment":"",
"points":5,
"__v":0
},
{
"_id":"5ab24bfab8494c76fd0bb31c",
"player":"Iebele",
"comment":"",
"points":4,
"__v":0
},
{
"_id":"5ab24bfeb8494c76fd0bb31d",
"player":"Bettina",
"comment":"",
"points":3,
"__v":0
},
{
"_id":"5ab24c01b8494c76fd0bb31e",
"player":"Shirley",
"comment":"",
"points":2,
"__v":0
},
{
"_id":"5ab24c04b8494c76fd0bb31f",
"player":"Suzanne",
"comment":"",
"points":1,
"__v":0
}
],
"question":1,"__v":0
},
{
"_id":"5ab24fa21e7caa1132720e7a",
"player":"Maikel",
"answers":
[
{
"_id":"5ab24c04b8494c76fd0bb31f",
"player":"Suzanne",
"comment":"",
"points":7,
"__v":0
},
{
"_id":"5ab24bfab8494c76fd0bb31c",
"player":"Iebele",
"comment":"",
"points":6,
"__v":0
},
{
"_id":"5ab24bf3b8494c76fd0bb31a",
"player":"André",
"comment":"",
"points":5,
"__v":0
},
{
"_id":"5ab24c01b8494c76fd0bb31e",
"player":"Shirley",
"comment":"",
"points":4,
"__v":0
},
{
"_id":"5ab24bf7b8494c76fd0bb31b",
"player":"Maikel",
"comment":"",
"points":3,
"__v":0
},
{
"_id":"5ab24bfeb8494c76fd0bb31d",
"player":"Bettina",
"comment":"",
"points":2,
"__v":0
},
{
"_id":"5ab227cf07818240934b11a5",
"player":"Peter",
"comment":"",
"points":1,
"__v":0
}
],
"question":1,"__v":0
}
]
```
I want to have a total score for each player based on this data, but I can't seem to find code to add up the points per player.
The result should be something like:
Peter: 14
André: 12
Maikel: 10
Iebele: 8
Any ideas on how to achieve this?
I've tried to get the points with the following code:
```
var { data, players } = this.state;
var ArrLength = data.length;
console.log(data);
var j;
var x;
for (j = 0; j < ArrLength; j++) {
let answer = data[j].answers;
for (x = 0; x < answer.length; x++) {
console.log(answer[`${x}`].points);
}
}
```
This at least shows me the points per player in the console.log, but adding them up to get an end result is something I can't seem to figure out.<issue_comment>username_1: You can simply do it with an external object. It uses the player name as the key, checks whether the key is set already (if not, sets it to zero), and sums the points for matching keys.
PS: `_id` is not unique in your json.
Since the JSON you gave is not valid, I'm using the following one:
```
var arr = [
{
answers: [{
_id: "5ab227cf07818240934b11a5",
player: "Peter",
comment: "",
points: 7,
__v: 0
}, {
_id: "5ab24bf3b8494c76fd0bb31a",
player: "André",
comment: "",
points: 6,
__v: 0
}, {
_id: "5ab24bf7b8494c76fd0bb31b",
player: "Maikel",
comment: "",
points: 5,
__v: 0
}, {
_id: "5ab24bfab8494c76fd0bb31c",
player: "Iebele",
comment: "",
points: 4,
__v: 0
}
],
player: "Pieter",
question: 1,
__v: 0,
_id: "5ab24e5e49e0f20a06d73ab7"
}, {
answers: [{
_id: "5ab227cf07818240935b11a5",
player: "Peter",
comment: "",
points: 7,
__v: 0
}, {
_id: "5ab24bf3b8494c76fd8bb31a",
player: "André",
comment: "",
points: 6,
__v: 0
}, {
_id: "5ab24bf7b8494c76fd2bb31b",
player: "Maikel",
comment: "",
points: 5,
__v: 0
}, {
_id: "5ab24bfab8494c76fd9bb31c",
player: "Iebele",
comment: "",
points: 4,
__v: 0
}
],
player: "Kareltje",
question: 1,
__v: 0,
_id: "5ab24e5e49e0f20b86d73ab7"
}
]
var points = {};
for (var i in arr) {
for (var j in arr[i].answers) {
if (!points[arr[i].answers[j].player]) {
points[arr[i].answers[j].player] = 0;
}
points[arr[i].answers[j].player] += arr[i].answers[j].points;
}
}
```
Upvotes: 1 <issue_comment>username_2: You could use a [`Map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) for counting and build an array with the data.
```js
var data = [{ answers: [{ _id: "5ab227cf07818240934b11a5", player: "Peter", comment: "", points: 7, __v: 0 }, { _id: "5ab24bf3b8494c76fd0bb31a", player: "André", comment: "", points: 6, __v: 0 }, { _id: "5ab24bf7b8494c76fd0bb31b", player: "Maikel", comment: "", points: 5, __v: 0 }, { _id: "5ab24bfab8494c76fd0bb31c", player: "Iebele", comment: "", points: 4, __v: 0 }], player: "Pieter", question: 1, __v: 0, _id: "5ab24e5e49e0f20a06d73ab7" }, { answers: [{ _id: "5ab227cf07818240935b11a5", player: "Peter", comment: "", points: 7, __v: 0 }, { _id: "5ab24bf3b8494c76fd8bb31a", player: "André", comment: "", points: 6, __v: 0 }, { _id: "5ab24bf7b8494c76fd2bb31b", player: "Maikel", comment: "", points: 5, __v: 0 }, { _id: "5ab24bfab8494c76fd9bb31c", player: "Iebele", comment: "", points: 4, __v: 0 }], player: "Kareltje", question: 1, __v: 0, _id: "5ab24e5e49e0f20b86d73ab7" }],
score = Array.from(
data.reduce((m, { answers }) =>
answers.reduce((n, { player, points }) =>
n.set(player, (n.get(player) || 0) + points), m), new Map),
([player, score]) => ({ player, score })
);
console.log(score);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 2 <issue_comment>username_3: You can use this alternative.
Using the function `reduce` and the function `forEach`.
* `reduce` to accumulate.
* `forEach` to loop over the answers.
```js
var data = [{ answers: [ { _id: "5ab227cf07818240934b11a5", player: "Peter", comment: "", points: 7, __v: 0 } , { _id: "5ab24bf3b8494c76fd0bb31a", player: "André", comment: "", points: 6, __v: 0 } , { _id: "5ab24bf7b8494c76fd0bb31b", player: "Maikel", comment: "", points: 5, __v: 0 } , { _id: "5ab24bfab8494c76fd0bb31c", player: "Iebele", comment: "", points: 4, __v: 0 } ], player: "Pieter", question: 1, __v: 0, _id: "5ab24e5e49e0f20a06d73ab7"},{ answers: [ { _id: "5ab227cf07818240935b11a5", player: "Peter", comment: "", points: 7, __v: 0 }, { _id: "5ab24bf3b8494c76fd8bb31a", player: "André", comment: "", points: 6, __v: 0 } , { _id: "5ab24bf7b8494c76fd2bb31b", player: "Maikel", comment: "", points: 5, __v: 0 } ,{ _id: "5ab24bfab8494c76fd9bb31c", player: "Iebele", comment: "", points: 4, __v: 0 } ], player: "Kareltje", question: 1, __v: 0, _id: "5ab24e5e49e0f20b86d73ab7"}];
var result = data.reduce((a, {answers}) => {
answers.forEach(({player, points}) => a[player] = (a[player] || 0) + points);
return a;
}, {});
console.log(result);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 2 <issue_comment>username_4: You can use `.reduce` to iterate through the array and aggregate all the scores:
```js
const data = [{"answers":[{"_id":"5ab227cf07818240934b11a5","player":"Peter","comment":"","points":7,"__v":0},{"_id":"5ab24bf3b8494c76fd0bb31a","player":"André","comment":"","points":6,"__v":0},{"_id":"5ab24bf7b8494c76fd0bb31b","player":"Maikel","comment":"","points":5,"__v":0},{"_id":"5ab24bfab8494c76fd0bb31c","player":"Iebele","comment":"","points":4,"__v":0}],"player":"Pieter","question":1,"__v":0,"_id":"5ab24e5e49e0f20a06d73ab7"},{"answers":[{"_id":"5ab227cf07818240935b11a5","player":"Peter","comment":"","points":7,"__v":0},{"_id":"5ab24bf3b8494c76fd8bb31a","player":"André","comment":"","points":6,"__v":0},{"_id":"5ab24bf7b8494c76fd2bb31b","player":"Maikel","comment":"","points":5,"__v":0},{"_id":"5ab24bfab8494c76fd9bb31c","player":"Iebele","comment":"","points":4,"__v":0}],"player":"Kareltje","question":1,"__v":0,"_id":"5ab24e5e49e0f20b86d73ab7"}];
const scores = data.reduce( (scores, d) => {
d.answers.forEach( answer => {
scores[answer.player] = (scores[answer.player]||0) + answer.points;
});
return scores;
}, {});
console.log(scores);
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 1,036 | 3,805 | <issue_start>username_0: In Vue there's the '`updated`' lifecycle method, which is appropriate for doing things to my markup after the data is in. In my case, I'm manually nudging SVG text to align to something after its dimensions change due to new text being input.
Well, it seems like I want to use the '`watch`' block because it allows me to only run my alignment function after a *specific* property is changed, but what I really need is to use the '`updated`' event as it's got to do with the DOM and not the data - but how could it be possible to isolate and run my aligning function on *just* the one element that was edited and caused the '`updated`' to fire?
I'd rather not cause multiple adjustment functions to fire off on objects which were not even updated by the user data entry.<issue_comment>username_1: >
> The watch properties are concerned with data and update is concerned with the markup.. so what ends up happening is, my alignment function is a 'key behind' the input. It runs the alignment, before the markup is rendered. So when it aligns, it does this using previous rendering. Therefore, I need to hook into the event which is concerned with the markup.
>
>
>
Use `Vue.nextTick(() => { /* code you were executing */ })`:
[Docs:](https://v2.vuejs.org/v2/api/#Vue-nextTick)
>
> ### Vue.nextTick( [callback, context] )
>
>
> * **Arguments:**
>
>
> + `{Function} [callback]`
> + `{Object} [context]`
> * **Usage:**
>
>
> Defer the callback to be executed after the next DOM update cycle.
> Use it immediately after you've changed some data to wait for the DOM
> update.
>
>
>
> ```js
> // modify data
> vm.msg = 'Hello'
> // DOM not updated yet
> Vue.nextTick(function () {
> // DOM updated
> })
>
> // usage as a promise (2.1.0+, see note below)
> Vue.nextTick()
> .then(function () {
> // DOM updated
> })
>
> ```
>
>
>
So instead of:
```js
new Vue({
el: '#app',
// ...
watch: {
someProperty() {
someUpdateCode();
}
}
})
```
Do:
```js
new Vue({
el: '#app',
// ...
watch: {
someProperty() {
Vue.nextTick(() => { someUpdateCode(); });
}
}
})
```
Comments: Is the new javascript syntax world-wide?
==================================================
Yes, those are [**arrow functions**](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Functions/Arrow_functions). It's pretty standard JavaScript syntax. It comes from the ES6 (ECMAScript 2015) version.
Basically the code `Vue.nextTick(() => { someUpdateCode(); });` is the same as: `Vue.nextTick(function () { someUpdateCode(); }.bind(this));`
Roughly speaking, `(abc) => { code }` is the same as `function(abc) { code }.bind(this)`. The `.bind(this)` part is important and frequently overlooked. Arrow functions retain the original context's `this`, whereas `function(){}`s have their specific `this` (which is modified by whoever calls that function - via `functionName.call()` or `functionName.apply()`), the `.bind()` is to retain the original context's `this` inside the `function`.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Do it via Vue’s Updated method.
In order to run a specific function based on a specific property change, you can introduce a helper variable that tells you whether your specific property has changed.
*how could it be possible to isolate and run my aligning function on just the one element that was edited and caused the 'updated' to fire?*
**Create a Boolean property and bind it to your element so that, when the element changes, the Boolean property is set to true. In the `updated` method, use an if condition on that Boolean property: when it's true, execute your alignment function and then reset the property to false.**
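A sketch of that flag pattern in Vue 2 options style, illustrative only (`text`, `textChanged`, and `realign()` are made-up names):

```javascript
// The watcher marks *which* property changed; updated() reacts only
// when the flag is set, so unrelated re-renders are ignored.
const options = {
  data() {
    return { text: "", textChanged: false };
  },
  watch: {
    text() {
      this.textChanged = true; // flag the change of the watched property
    },
  },
  updated() {
    if (this.textChanged) {
      this.realign();           // hypothetical DOM-alignment routine
      this.textChanged = false; // reset so later updates are ignored
    }
  },
};
```

`updated()` still fires after every re-render, but the alignment work runs only for renders caused by `text`.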
Upvotes: 0 |
2018/03/21 | 1,052 | 3,819 | <issue_start>username_0: Imagine I have a component that applies certain values to the `class` attribute of its host element, based on certain run-time conditions.
For example, take this TextBox component, which applies class values based on the state of readonly and disabled inputs passed in to it:
```
class TextBoxComponent {
@HostBinding("class.readonly") @Input() public readonly = false;
@HostBinding("class.disabled") @Input() public disabled = false;
}
```
Suppose I want to also pass in arbitrary class values from the container, by setting the `class` attribute.
For example, say I want to pass in a `pull-right` class for positioning:
```
```
What's the most correct way to do this in Angular? Both from the consumer's point of view and coding it to work correctly within the component? |
2018/03/21 | 883 | 2,848 | <issue_start>username_0: I have this assignment for my C class. She gave us pseudo code for the selection sort. There's some other requisites, but those I don't need help with. I followed her code, and even looked online for sample code. And mine follows the same concept but it never works. When I print the "sorted" array, it prints the unsorted numbers and a bunch of other numbers. If I enter '8 7 6 5 4 3 2 1' my array prints '8765432146142368001234567846142368'.
```
#define MAXSIZE 10
void print_array(const int a[], int size);
int main()
{
int get_numbers(int n[]); //Function to retrieve a list of numbers and returns an array of the numbers
int numbers[MAXSIZE]; //Array to store unsorted list of numbers
void selection_sort(int a[], int size);
puts("Enter each number then press enter. (Ctrl + Z) to end.");
get_numbers(numbers);
selection_sort(numbers, MAXSIZE);
print_array(numbers, MAXSIZE);
}
void get_numbers(int n[])
{
int s; //Stores return value of scanf
int i; //Stores amount of values entered
for(i = 0; (s = scanf("%i", &n[i])) != EOF; i++);
if(i == 0)
{
puts("Error: No numbers entered.");
}else if(i > MAXSIZE)
{
puts("Error: Too many values entered.");
print_array(n, MAXSIZE);
}else
{
print_array(n, MAXSIZE);
}
}
void print_array(const int a[], int size)
{
for(int i = 0; i < size; i++)
{
printf("%i", a[i]);
}
}
void selection_sort(int a[], int size)
{
void swap(int n[], int i, int min_index);
for(int i = 0; i < (size - 1); i++)
{
int min = a[i];
int min_index = i;
for(int j = (i + 1); j < size; j++)
{
if(a[j] < min)
{
min = a[j];
min_index = j;
}
}
swap(a, i, min_index);
}
}
void swap(int n[], int i, int min_index)
{
int temp = n[i];
n[i] = n[min_index];
n[min_index] = temp;
}
```<issue_comment>username_1: There are quite a few issues with your program.
Function prototypes need to be declared before any other functions.
If the user enters fewer (or more) values than MAXSIZE, you need to pass the actual count i to print_array. If you pass MAXSIZE you will be reading memory that isn't yours, which is why you get weird values.
Upvotes: 1 <issue_comment>username_2: Thank you for everyone and their tips! I took what <NAME> had mentioned (sorting based on exact number of values, spacing my output). My main problem was that it seemed to be printing a bunch of random numbers which I was unsure where it came from. After spacing I realized it does sort them but there was a random number ex. 8 7 6 5 4 3 2 1 44176288 0 1 2 3 4 5 6 7 8 44176288 0. After some more debugging I realized that I was also printing not at exact array size. After fixing all the different sizes it was fixed.
Upvotes: 0 |
2018/03/21 | 891 | 2,924 | <issue_start>username_0: So I am trying to make an array of every word in a text and the array should be like `[word, startIndex, endIndex]`. I am going to use this to replace words after, after checking the word-type and find a synonym for it to replace it with. But the problem I am facing is splitting each word and storing the start and end index. `text.match(/\b(\w+)\b/g)` works, but I do not get the start and end index that I need. I also tried making some function to parse the text, but it ended up overcomplicated and not really working like it should.
So I wondered if anybody in the JavaScript community here has a better solution or knows how to make an easy function for it.
This is what I would like to happen.
Input:
>
> Norway, officially the Kingdom of Norway, is a sovereign state and unitary monarchy whose territory comprises the western portion of the Scandinavian Peninsula
>
>
>
Output:
>
> ['Norway', 0, 6], ['officially', 8, 18]
>
>
>
And the same for all words<issue_comment>username_1: Partly taken from: [Return positions of a regex match() in Javascript?](https://stackoverflow.com/questions/2295657/return-positions-of-a-regex-match-in-javascript) but adapted to return the length of the match and the match itself:
```js
var wordIndices = (s) => {
 var getAllWords = /\b(\w+)\b/g;
 var output = [];
 var match;
 while ((match = getAllWords.exec(s)) != null) {
 output.push([match[0], match.index, match.index + match[0].length - 1]);
 }
 return output;
}
s = 'Norway, officially the Kingdom of Norway, is a sovereign state and unitary monarchy whose territory comprises the western portion of the Scandinavian Peninsula';
console.log(wordIndices(s))
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I think you example results was slightly wrong `['Norway', 0, 6], ['officially', 9, 19]`, last should have been 8,18..
So the following might be what you're after.
```js
var str1 = `Norway, officially the Kingdom of Norway, is a sovereign state and unitary monarchy whose territory comprises the western portion of the Scandinavian Peninsula`;
var regex1 = RegExp(/\b(\w+)\b/g);
var array1;
var ret = [];
while ((array1 = regex1.exec(str1)) !== null) {
ret.push([array1[0], array1.index,
array1.index + array1[0].length - 1]);
}
console.log(ret);
```
Upvotes: 1 <issue_comment>username_3: If your goal is to replace those words, there is an easier solution.
You can just use `replace` with a callback function.
Example:
```js
const input = 'Norway, officially the Kingdom of Norway, is a sovereign state and unitary monarchy whose territory comprises the western portion of the Scandinavian Peninsula'
const output = input.replace(/\b(\w+)\b/g, (word, group, index) => {
console.log(word, index);
if (word.length <= 3) {
return '...';
} else {
return word;
}
})
console.log(output);
```
Upvotes: 0 |
2018/03/21 | 1,065 | 4,331 | <issue_start>username_0: Hello Internet Hive Mind!
I need to query AWS Athena with nifi, however I need to change the staging directory (the S3 bucket & folder where the results will be saved) for each query sent.
But the s3\_staging\_dir property has to be set on the DBCPConnectionPool Controller Service.
How can I change the value of that property for each different flow file?
Apparently it can't be fetched by expression language alone.
Thanks!<issue_comment>username_1: You don't have to set the property in `DBCPConnectionPool`. The query that you set in the SQL processor will output the results from Athena as flowfiles. You can connect the SQL processor to a `PutS3Object` and specify the bucket name and other necessary properties. This will write the result of your SQL query to the S3 staging directory.
Upvotes: 0 <issue_comment>username_2: I'm not sure the nature of your flow where each query depends on a different staging directory, but there are a couple things to keep in mind.
1. The `DBCPConnectionPool` controller service does allow dynamic properties which evaluate expression language, but that expression language evaluation is performed when the controller service is *enabled*, so "once" per start/stop.
2. The dynamic properties on the controller service *do not* evaluate flowfile attributes.
From [Apache NiFi `DBCPConnectionPool` documentation](https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-dbcp-service-nar/1.5.0/org.apache.nifi.dbcp.DBCPConnectionPool/index.html):
>
> Dynamic Properties:
>
>
> Dynamic Properties allow the user to specify both the name and value
> of a property.
>
>
> ...
>
>
> Specifies a property name and value to be set on the JDBC
> connection(s). If Expression Language is used, evaluation will be
> performed upon the controller service being enabled. Note that no flow
> file input (attributes, e.g.) is available for use in Expression
> Language constructs for these properties. **Supports Expression
> Language: true**
>
>
>
Because of your requirement that the S3 staging directory is different on every request, I think in this case, you would need to pursue one of the following options:
1. [File a Jira](https://issues.apache.org/jira/secure/CreateIssue!default.jspa) requesting native Athena support in NiFi (thoroughly explain why the existing `DBCPConnectionPool` doesn't support your use case)
2. Extend the `DBCPConnectionPool` controller service with your own `AthenaConnectionPool` controller service. There are many tutorials for building your own NiFi components, but the [NiFi Developer Guide > Developing Controller Services](https://nifi.apache.org/docs/nifi-docs/html/developer-guide.html#developing-controller-service) is the best place to start. You can make a controller service which does evaluate incoming flowfile attributes when performing expression language execution, but you will need to manually trigger this, as controller services do not have an `@OnTrigger` phase of their lifecycle. If you also write a custom processor, you can invoke some "re-evaluate" method in the controller service from the `onTrigger()` method of the processor, but existing processors will not call this. Instead, you could [theoretically put a high frequency refresher in the controller service](http://www.nifi.rocks/developing-a-custom-apache-nifi-controller-service/) itself using executors, but this will definitely affect performance
3. Create multiple `DBCPConnectionPool` instances and SQL processors for each staging directory (feasible on the order of 1 - 3, otherwise abysmal)
4. Use the [`ExecuteStreamCommand`](https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.5.0/org.apache.nifi.processors.standard.ExecuteStreamCommand/index.html) processor with [`awscli`](https://docs.aws.amazon.com/cli/latest/reference/athena/) to execute the queries using the command-line tool. This deprives you of the NiFi native SQL tools but allows custom queries on every invocation because `ExecuteStreamCommand` can interpret the flowfile-specific attributes and use them in the query
5. Re-evaluate your flow design and see if there is a way to perform the queries without allowing for arbitrary S3 staging directories on individual query execution
Upvotes: 3 [selected_answer] |
2018/03/21 | 1,287 | 4,900 | <issue_start>username_0: I'm kind of new to PureScript and I was experimenting with effects, in particular async effects.
One of the things I love the most about FP and strict compilers like PureScript's is that they force you to handle all possible results, in particular when you declare that something can fail. If you are, for example, using an `Either`, you need to tell the program what to do both when you have the `Right` answer and when you have an error.
When I first looked at effects I liked the concept of `actions` and `handlers`, and that if a part of your code needs to throw an exception (I imagine this is the last resort you want to reach for) you need to declare it using something like
```
someAction :: forall eff. Eff (exception :: EXCEPTION | eff)
```
and that you can define a handler that removes that effect so you know that from that point onwards you don't have to care about exceptions.
But doing some basic tests with the `Aff` monad and the `purescript-node-fs-aff` library I got some unexpected results.
If I do something like this
```
main :: forall e. Eff (console :: CONSOLE, buffer :: BUFFER, fs :: FS | e) Unit
main = do
_ <- launchAff $ readAndLog "./non-existing-file"
pure unit
readAndLog :: forall eff. String -> Aff (fs :: FS, console :: CONSOLE | eff) Unit
readAndLog path = do
str <- readTextFile UTF8 path
log str
```
If the file doesn't exist, the program will terminate by throwing an exception, and there is nothing telling me that this code can fail and that I should try to protect my program against that failure.
I can in fact be a little more defensive and use `catchError`, but I was expecting that at least the compiler would fail, saying I wasn't declaring the exception as a possible side effect.
```
main :: forall e. Eff (console :: CONSOLE, buffer :: BUFFER, fs :: FS | e) Unit
main = do
_ <- launchAff $ readAndLog "./non-existing-file" `catchError` (\e -> log ("buu: " <> message e))
pure unit
readAndLog :: forall eff. String -> Aff (fs :: FS, console :: CONSOLE | eff) Unit
readAndLog path = do
str <- readTextFile UTF8 path
log str
```
Ideally I would like to do something like `Either` and be responsible to handle the particular errors the operation may have. For example when I read a file I should expect to have an error like `ENOENT` (file does not exist) or `EACCES` (you don't have access), etc. If I want to ignore the particular reason and just log that it failed it's my choice but the type system should enforce me to handle it.<issue_comment>username_1: The compiler would not tell you that because exceptions are assimilated by the `Aff` machinery in the library you are using. Here are the relevant pieces:
```
readTextFile = toAff2 A.readTextFile
toAff2 f a b = toAff (f a b)
toAff p = makeAff \e a -> p $ either e a
```
Where `A.readTextFile` is [the async version](https://pursuit.purescript.org/packages/purescript-node-fs/4.0.1/docs/Node.FS.Async#v:readTextFile) in the Node library.
The expression `either e a` is where it happens. As `Aff` has a `MonadError`, your only recourse is indeed to catch the error. That said, I completely agree with you that the effect should expose the fact exceptions may be thrown. For some reason, only [the synchronous version](https://pursuit.purescript.org/packages/purescript-node-fs/4.0.1/docs/Node.FS.Sync#v:readTextFile) of the Node library does.
Upvotes: 0 <issue_comment>username_2: There are multiple facets to your question.
First, effect rows are soon being removed from standard practice. That has begun with [`v0.12`](https://github.com/purescript/purescript/releases/tag/v0.12.0-rc1). Here is [a public opinion poll](https://twitter.com/paf31/status/908760073303764993) on the matter. The discussions are on GitHub. In a nutshell: the burden outweighs the benefit.
Then there is the problem of knowing which exceptions could be thrown. If you use 3rd party JavaScript anywhere, that will be difficult. In this case I recommend thinking of known exceptions as a lower bound of all exceptions which may be thrown. In other words, even if you catch all known exceptions you must also make a default catch to account for the unknown.
Take a look at <NAME>'s [purescript-checked-exceptions](https://pursuit.purescript.org/packages/purescript-checked-exceptions/1.0.0) library. This shows how to apply checked exceptions to an ExceptT interface.
I have similar work done for JavaScript's native exceptions which is appropriate for FFI. Unfortunately that is not published yet but I will hopefully be able to do so soon. At least you can know it is possible.
The final facet is Aff itself. Unfortunately, Aff is not designed for arbitrary exceptions. It only supports the `Error` type (i.e. the JavaScript type of the same name). Therefore, you would be better to use purescript-checked-exceptions with Aff to add checked exceptions.
Upvotes: 2 |
2018/03/21 | 750 | 2,814 | <issue_start>username_0: I want it so that if the user clicks the purchase button but has typed a word that does not belong to the combobox's list, it will show an "Invalid Customer" message.
I have tried:
```
else if (comboBox1.Text != comboBox1.Items)
MessageBox.Show("Invalid customer.", "Information", MessageBoxButtons.OK, MessageBoxIcon.Information);
```
but it won't compile and there's always a red line error.
2018/03/21 | 1,075 | 3,444 | <issue_start>username_0: This is really two questions.
I have a list of age intervals. For each interval there is a corresponding value. The intervals and values are organized in a list of tuples `age_value_intervals` (see the comments in the code).
I also have a separate list of distinct ages, `ages`, for which I would like to know the value.
The code below is an attempt at mapping the value to the given age.
Now to the questions,
1. In order to assign a value to the `value_map` I iterate over both `ages` and `value_map` using `zip`. I then try to assign to `value`. This doesn't work. Why?
2. I doubt that the method I use is very efficient (if it had worked). Is there a better way to achieve this mapping?
---
```
import numpy as np

# List of tuples defining an age interval and the corresponding value for
# that interval. For instance (20, 30, 10) indicates that the age interval from
# 20 to 30 has the value 10
age_value_intervals = [(20, 30, 10),
                       (30, 35, 5),
                       (35, 42, 50),
                       (50, 56, 40),
                       (56, 60, 30)]
# The ages for which I would like to know the value
ages = [25, 30, 35, 40, 45, 50]
# Empty array used to store the values for the corresponding age
value_map = np.empty(len(ages))
# I want the value to be nan if there is no known value
value_map[:] = np.nan
# Iterate over the ages I want to know the value for
for age, value in zip(ages, value_map):
    # Check if the age is in an interval for which the value is known
    for from_age, to_age, actual_value in age_value_intervals:
        if age >= from_age and age < to_age:
            # Assign the value to the value_map
            # This is where it falls apart (I guess...)
            value = actual_value
            # Move on to the next age since we got a match
            break
# Expected output
value_map = [10, 5, 50, 50, nan, 40]
```<issue_comment>username_1: I recommend you use [`numpy.digitize`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.digitize.html) together with `dict` for this. You can manually account for instances when a value cannot be mapped to a range.
```
import numpy as np
age_value_intervals = [(20, 30, 10),
                       (30, 35, 5),
                       (35, 42, 50),
                       (50, 56, 40),
                       (56, 60, 30)]
ages = np.array([25, 30, 35, 40, 45, 50])
bins = np.array([x[0] for x in age_value_intervals])
mapper = dict(enumerate([x[2] for x in age_value_intervals], 1))
res = np.array([mapper[x] for x in np.digitize(ages, bins)], dtype=float)
for idx in range(len(ages)):
    if not any(i <= ages[idx] <= j for i, j, k in age_value_intervals):
        res[idx] = np.nan
```
Result:
```
array([ 10., 5., 50., 50., nan, 40.])
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: First, as noted in the comments, if you try to assign to the loop variable you are iterating over, the value simply gets lost: `value` is a local name, not a reference into `value_map`.
Secondly, most of the mappings are redundant.
Something like this can probably still be improved but should work:
```
result = []
for check_age in ages:
    for from_age, to_age, value in age_value_intervals:
        if check_age in range(from_age, to_age):
            result.append(value)
print(result)
```
Note, if you need some result added also when the age is *not* in any interval, there needs to be additional code.
Upvotes: 2
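For completeness, here is one more way to sketch the same lookup, not taken from the answers above: pandas' `IntervalIndex` (this assumes a pandas version where `pd.cut` accepts an `IntervalIndex` as `bins`, i.e. 0.23 or later):

```python
import numpy as np
import pandas as pd

age_value_intervals = [(20, 30, 10), (30, 35, 5), (35, 42, 50),
                       (50, 56, 40), (56, 60, 30)]
ages = [25, 30, 35, 40, 45, 50]

# Left-closed, non-overlapping intervals and their values, in matching order.
bins = pd.IntervalIndex.from_tuples(
    [(lo, hi) for lo, hi, _ in age_value_intervals], closed="left")
values = np.array([v for _, _, v in age_value_intervals], dtype=float)

# pd.cut assigns each age to its interval; .codes gives the interval's
# position in `bins`, or -1 when the age falls in no interval.
codes = pd.cut(ages, bins).codes
value_map = np.where(codes >= 0, values[codes], np.nan)
print(value_map.tolist())
```

The `-1` codes of unmatched ages are turned into `nan` by the `np.where` mask, which matches the expected output in the question.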
2018/03/21 | 884 | 2,823 | <issue_start>username_0: I have a list of a custom object, and I want to change its value by passing it to a function:
My code below:
```
func UpdateButtonPressed(_Url : String, referenceArray: inout Array<Item>)
{
var _itemList = [Item]()
var result:String! = ""
let url = URL(string:_Url)
var request = URLRequest(url: url! as URL)
request.httpMethod = "POST"
request.addValue("application/json", forHTTPHeaderField: "Content-Type")
request.addValue("application/json", forHTTPHeaderField: "Accept")
let session = URLSession.shared
let task = session.dataTask(with: request, completionHandler: { (data, response, error) in
guard data != nil else {
print("no data found: \(error)")
return }
do {
let temp = NSString(data: data!, encoding: String.Encoding.utf8.rawValue)
result = String(describing: temp!)
DispatchQueue.main.async {
referenceArray.append(contentsOf: self.UpdateItem(result:result))
}
}
})
task.resume()
}
func UpdateItem(result:String)-> [Item]
{
var _itemList = [Item]()
_itemList = Item.ConvertJSONToItemList(result: result)
return _itemList
}
```
and I call my function as below:
```
var mostRecent = [Item]()
UpdateButtonPressed(_Url:PublicVariables.GetMostRecent(), referenceArray: &mostRecent)
```
But it's not working. How can I do that?
2018/03/21 | 1,129 | 2,696 | <issue_start>username_0: I am struggling a lot to find a tidy way of doing the following.
I have a Pandas DataFrame that looks like this:
```
data = {'Ids': [1, 2, 3, 1, 2, 3, 1, 2, 3], 'Value': [32, 56, 87, 12, 45, 78,
14, 21, 56]}
df=pd.DataFrame(data)
Out[2]:
Ids Value
0 1 32
1 2 56
2 3 87
3 1 12
4 2 45
5 3 78
6 1 14
7 2 21
8 3 56
```
I would like to add another column that identifies each sub-set of data (Ids from 1 to 3) with a label. Something like this:
```
Out[3]:
Case Ids Value
0 A 1 32
1 A 2 56
2 A 3 87
3 B 1 12
4 B 2 45
5 B 3 78
6 C 1 14
7 C 2 21
8 C 3 56
```
I am trying to use the pandas.cut() function this way but I am not having a lot of success:
```
df["test"]=pd.cut(df1.Value, bins=3, labels=["A", "B", "C"], right=False)
```
Is there a nice and tidy way of achieving what I want using Pandas functions? Thank you!<issue_comment>username_1: I think you need [`cumcount`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html) with numpy indexing:
```
a = np.array(["A", "B", "C"])
df['new'] = a[df.groupby('Ids').cumcount()]
print (df)
Ids Value new
0 1 32 A
1 2 56 A
2 3 87 A
3 1 12 B
4 2 45 B
5 3 78 B
6 1 14 C
7 2 21 C
8 3 56 C
```
Upvotes: 3 <issue_comment>username_2: Using `groupby` + `ngroup` + `map`:
```
mapper = {0 : 'A', 1 : 'B', 2 : 'C'}
df['New'] = df.groupby(df.Ids.diff().lt(0).cumsum()).ngroup().map(mapper)
```
Or, using @username_1's neat indexing trick,
```
mapper = np.array(['A', 'B', 'C'])
df['New'] = mapper[df.groupby(df.Ids.diff().lt(0).cumsum()).ngroup()]
```
```
Ids Value New
0 1 32 A
1 2 56 A
2 3 87 A
3 1 12 B
4 2 45 B
5 3 78 B
6 1 14 C
7 2 21 C
8 3 56 C
```
Upvotes: 2 <issue_comment>username_3: You could use:
```
l = np.array(list('ABC'))
df['Case'] = l[(df.Ids == 1).cumsum().sub(1)]
```
Output:
```
Ids Value Case
0 1 32 A
1 2 56 A
2 3 87 A
3 1 12 B
4 2 45 B
5 3 78 B
6 1 14 C
7 2 21 C
8 3 56 C
```
Upvotes: 2 <issue_comment>username_4: This work for your sample data
```
df['new']=np.array(['A','B','C']).repeat(len(df)//3)
df
Out[519]:
Ids Value new
0 1 32 A
1 2 56 A
2 3 87 A
3 1 12 B
4 2 45 B
5 3 78 B
6 1 14 C
7 2 21 C
8 3 56 C
```
Upvotes: 2 |
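As a footnote to the answers above: if the number of cases is not known to be three in advance, the same `diff`/`cumsum` idea generalizes by drawing the labels from the alphabet (a sketch, assuming consecutive-`Ids` blocks as in the sample data):

```python
import string
import numpy as np
import pandas as pd

df = pd.DataFrame({'Ids': [1, 2, 3, 1, 2, 3, 1, 2, 3],
                   'Value': [32, 56, 87, 12, 45, 78, 14, 21, 56]})

# A new case starts wherever Ids stops increasing; cumsum turns those
# restart points into 0, 1, 2, ... group numbers, which index the labels.
group = df.Ids.diff().lt(0).cumsum()
labels = np.array(list(string.ascii_uppercase))
df['Case'] = labels[group]
print(df['Case'].tolist())
```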
2018/03/21 | 833 | 2,050 | <issue_start>username_0: There is an existing train_labels, which has the following attributes
```
('labels_train shape ', (3000,))
('type of labels_train ', )
```
and another array is Y, which has the following attributes
```
('Y ', (3000,1))
('type of Y ', )
```
How to assign `Y` to `labels_train`, or how to make `Y` have the same shape as `labels_train`?
2018/03/21 | 475 | 1,505 | <issue_start>username_0: I have a little doubt about "Mappings section" of the aws cloudformation syntax:
Example:
--------
```
...
Mappings:
accounts:
56565d644801:true
986958470041:true
090960219037:true
05166767667:false
functions:
MyFunction:
handler: src/MyFunction/func.lambda_handler
role: MyRole
events:
- schedule:
rate: rate(12 hours)
enabled: Fn::FindInMap
- accounts
- Ref "AWS::AccountId"
...
```
---
Could the Mappings section be included in a serverless.yml file?
I mean, even though it is valid CloudFormation syntax, would it be possible to include it in the serverless.yml, so that we can later deploy it (serverless | sls deploy ...)?
thanks,<issue_comment>username_1: You might be able to use:
```
functions:
# ...
resources:
Mappings:
accounts:
56565d644801:true
986958470041:true
090960219037:true
05166767667:false
```
Upvotes: 2 <issue_comment>username_2: Just another way to work with mapping is through stage params.
<https://www.serverless.com/framework/docs/guides/parameters>
```
params:
stage1:
schedule:true
stage2:
schedule:false
functions:
MyFunction:
handler: src/MyFunction/func.lambda_handler
role: MyRole
events:
- schedule:
rate: rate(12 hours)
enabled: ${param:schedule}
```
Then call adding the stage arg (default is dev)
```
serverless deploy --stage stage1
```
Upvotes: 0 |
2018/03/21 | 1,397 | 4,933 | <issue_start>username_0: I'm having problems with persisting a one-to-many, many-to-one bi-directional relationship to my database and I can't figure out what I'm doing wrong exactly. Hope that someone can help me further.
I have 2 entities in my database declared as follows:
```
class Lot {
@OneToMany(targetEntity = Request.class, mappedBy = "lot", cascade = CascadeType.ALL, fetch = FetchType.LAZY, orphanRemoval = true)
private Set requests;
}
class Request{
@ManyToOne(optional = true, fetch = FetchType.EAGER)
@JoinColumn(name = "fk_lot")
private Lot lot;
}
```
the problem comes here:
```
class RequestManagerImpl{
// compose Request r object
requestId = saveRequest(r).getId();
// check to see if we need to create a parent object
LotType lot = createRequestInput.getLot();
if(lot != null){
addRequestToLot(Lists.newArrayList(r), lot);
}
}
class LotManagerImpl {
@Override
public Lot addRequestsToLot(List requests, Lot lot) {
if (lot != null && requests != null && !requests.isEmpty()) {
for (Request request : requests) {
lot.addRequest(request);
}
Lot lotValue = lotDAO.saveOrUpdate(lot);
if (lotValue != null) {
updateLotStatusByRequestsStatus(lotValue);
}
return lotValue;
}
}
class LotDAO {
@Override
public Lot saveOrUpdate(Lot lot) {
if (em.contains(lot)) {
em.merge(lot);
} else {
em.persist(lot);
}
return lot;
}
}
class RequestDAO {
@Transactional(propagation = Propagation.REQUIRES_NEW)
Request saveRequest(Request request);
}
```
The em.persist returns me the following stacktrace:
```
Caused by: javax.persistence.PersistenceException: org.hibernate.PersistentObjectException: detached entity passed to persist: disp.entities.Request
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:147)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:155)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:162)
at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:811)
at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:771)
at org.hibernate.jpa.event.internal.core.JpaPersistEventListener$1.cascade(JpaPersistEventListener.java:80)
at org.hibernate.engine.internal.Cascade.cascadeToOne(Cascade.java:458)
at org.hibernate.engine.internal.Cascade.cascadeAssociation(Cascade.java:383)
at org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:193)
at org.hibernate.engine.internal.Cascade.cascadeCollectionElements(Cascade.java:491)
at org.hibernate.engine.internal.Cascade.cascadeCollection(Cascade.java:423)
at org.hibernate.engine.internal.Cascade.cascadeAssociation(Cascade.java:386)
at org.hibernate.engine.internal.Cascade.cascadeProperty(Cascade.java:193)
at org.hibernate.engine.internal.Cascade.cascade(Cascade.java:126)
at org.hibernate.event.internal.AbstractSaveEventListener.cascadeAfterSave(AbstractSaveEventListener.java:445)
at org.hibernate.event.internal.AbstractSaveEventListener.performSaveOrReplicate(AbstractSaveEventListener.java:281)
at org.hibernate.event.internal.AbstractSaveEventListener.performSave(AbstractSaveEventListener.java:182)
at org.hibernate.event.internal.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:125)
at org.hibernate.jpa.event.internal.core.JpaPersistEventListener.saveWithGeneratedId(JpaPersistEventListener.java:67)
at org.hibernate.event.internal.DefaultPersistEventListener.entityIsTransient(DefaultPersistEventListener.java:189)
at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:132)
at org.hibernate.event.internal.DefaultPersistEventListener.onPersist(DefaultPersistEventListener.java:58)
at org.hibernate.internal.SessionImpl.firePersist(SessionImpl.java:780)
at org.hibernate.internal.SessionImpl.persist(SessionImpl.java:765)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at .proxy.$Proxy78.persist(Unknown Source)
at disp.dao.impl.LotDAOImpl.saveOrUpdate(LotDAOImpl.java:123)
```
2018/03/21 | 519 | 1,498 | <issue_start>username_0: I'm trying to make an API call and save the response as a dataframe.
The problem is that I need the data from the 'result' field, and I haven't succeeded in getting it.
I'm basically just trying to save the API response as a CSV file in order to work with it.
P.S. when I do this with a "JSON to CSV converter" from the web it does it as I wish. (example: <https://konklone.io/json/>)
```
import requests
import pandas as pd
import json
res = requests.get("http://api.etherscan.io/api?module=account&action=txlist&"
                   "address=0xddbd2b932c763ba5b1b7ae3b362eac3e8d40121a&startblock=0&"
                   "endblock=99999999&sort=asc&apikey=YourApiKeyToken")
j = res.json()
j
df = pd.DataFrame(j)
df.head()
```
[output example picture](https://i.stack.imgur.com/zCInx.png)
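For reference, a minimal sketch of what usually works for this kind of payload: the Etherscan response is a dict of the form `{"status": ..., "message": ..., "result": [...]}`, and the DataFrame should be built from the list under `result`, not from the whole dict. The sample payload below is invented for illustration; in the real code it would be the `res.json()` value:

```python
import pandas as pd

# Invented stand-in for res.json(); the real response has the same shape.
j = {"status": "1", "message": "OK",
     "result": [{"blockNumber": "47884", "value": "5000000000000000000"},
                {"blockNumber": "47985", "value": "1000000000000000000"}]}

# Build the frame from the list under "result", then save it as CSV.
df = pd.DataFrame(j["result"])
df.to_csv("output_teste.csv", index=False)
print(df.shape)
```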
2018/03/21 | 323 | 1,228 | <issue_start>username_0: I have an activity A that consists of a list of countries.
When I click on one country it opens activity B, which consists of a list of images, and when I click on one of them it opens activity C.
**A(OnClick)-->OpenB-->B(OnClick)-->OpenC**
and when I press the back arrow on the app bar it should go back to activity B.
Does anyone have an idea how I can implement this?
Please help me<issue_comment>username_1: In your backButton you just call finish();
**Example**
```
@Override
public boolean onOptionsItemSelected(MenuItem item) {
if (item.getItemId() == android.R.id.home) // Press Back Icon
{
finish();
}
return super.onOptionsItemSelected(item);
}
```
Upvotes: -1 <issue_comment>username_2: simply add the lines below in your onCreate():
```
getSupportActionBar().setHomeButtonEnabled(true);
getSupportActionBar().setDisplayHomeAsUpEnabled(true);
```
and then add these lines:
```
@Override
public boolean onOptionsItemSelected(MenuItem item) {
int id = item.getItemId();
switch(id) {
case android.R.id.home:
onBackPressed();
return true;
}
return super.onOptionsItemSelected(item);
}
```
hope this will help you!
Upvotes: 0 |
2018/03/21 | 227 | 1,014 | <issue_start>username_0: I have a custom object Location containing only id and name. Is there any trick to create a multi-select picklist from this object?<issue_comment>username_1: You can create a multi-select picklist just like any other field:
Salesforce Classic -> Setup -> Create -> Objects -> MyObject -> Fields and Relationships -> New Field
Upvotes: 0 <issue_comment>username_2: You can create a Multi-Select Picklist on your custom object.
If you want the picklist values to be available in other objects as well, you can create a Global Picklist: go to Setup, type "Picklist Value Sets" in the Quick Find box, choose it, and create the new picklist value set.
If you are in Salesforce Classic, go to Setup and search for Objects; this opens the list of your custom objects. Choose your custom object, then under Fields and Relationships click New and choose **Multi-Select Picklist** as the field type. Specify the name, and for the picklist values either choose from a Global Picklist value set or specify the values directly.
Upvotes: 2 [selected_answer] |
2018/03/21 | 943 | 3,423 | <issue_start>username_0: I know that this question has been asked previously in this link [Facebook OAuth "The domain of this URL isn't included in the app's domain"](https://stackoverflow.com/questions/37063685/facebook-oauth-the-domain-of-this-url-isnt-included-in-the-apps-domain) , but the solution is not working for me.
It’s showing the error like this:
[](https://i.stack.imgur.com/jDNSd.png)
I have included my domain in the app’s domain of facebook api as well,
The redirect url format that I am passing goes like this:
<https://example.com/clients/authorization/facebook/a1njk2nkll55343nlk4/page>
Here the random characters after facebook/ is the id of the client which is dynamic.
Moreover, I am having the same problem while logging through facebook as well.
Thank You.<issue_comment>username_1: This might be a problem due to a security update of the Facebook API.
This is from Facebook developer dashboard!
"In March, we're making a security update to your app settings that will invalidate calls from URIs not listed in the Valid OAuth redirect URIs field"
So, for that, add your site's callback URL in the Valid OAuth Redirect URIs field.
You need to add your oAuth Redirect URI properly in the Facebook Login->Settings: Valid OAuth Redirect URIs
In Valid OAuth Redirect URIs, You can include the uri in this format:
<https://example.com/clients/authorization/facebook>
And the redirect uri while authenticating can be as follow:
<https://example.com/clients/authorization/facebook&a1njk2nkll55343nlk4/page>
Here "&" separates your required other parameters.
The same case applies for facebook Login. Add your full login redirect uri to the Valid OAuth Redirect URIs. Like this:
<https://example.com/login/facebook/callback>
Hope This helps.
Thanks.
Upvotes: 3 [selected_answer]<issue_comment>username_2: A dynamic URL Redirect is not supported by the current behavior.
This is due to a change to the handling of redirect URIs, announced in December 2017 and taking effect this month. Enabling strict mode is required in order to use OAuth redirects:
<https://developers.facebook.com/blog/post/2017/12/18/strict-uri-matching/>
Make sure to set App Domain in:
<https://developers.facebook.com/apps/[your-app-ip]/settings/basic/>
Also add all your callback URLs, including the `https://` prefix.
Please see:
<https://developers.facebook.com/apps/[your-app-ip]/fb-login/settings/>
* If you are using the PHP SDK, please ensure you have the latest version installed (v5.6.2).
Upvotes: 2 <issue_comment>username_3: I had issues with this. I noticed that after moving to https:// it still redirected to http://.
After long hours of looking into the FB SDK, I found that it had issues detecting whether your site is secure.
So, on file
facebookd-v5.*-sdk/Url/FacebookUrlDetectionHandler.php
update two functions. Updated versions:
```
protected function protocolWithActiveSsl($protocol)
{
$protocol = strtolower((string)$protocol);
return in_array($protocol, ['on', '1', 'https', 'ssl', 'https, https'], true);
}
```
and
```
protected function getHostName()
{
...
// Don't append port number if a normal port.
if (($scheme == 'http' && $port == '80') || ($scheme == 'https' && $port == '443') || ($scheme == 'https' && $port == '80')) {
$appendPort = '';
}
return $host . $appendPort;
}
```
Upvotes: 0 |
2018/03/21 | 860 | 3,228 | <issue_start>username_0: I'm trying to access all repositories that have more than 5000 stars on Github. I've written this scraper to work with Node.js (it's running on a Cloud9 environment):
```
var request = require('request');
var fs = require('fs');
var options = {
url: 'https://api.github.com/repositories',
headers: {
'User-Agent': 'myusernamehere'
},
qs: {
stargazers: 5000
}
};
function callback(error, response, body) {
if (!error && response.statusCode == 200) {
console.log(response.headers);
fs.writeFile('output_teste.json', body, function (err) {
if (err) throw err;
console.log('It\'s saved!');
console.log(response.statusCode);
});
} else {
console.log(response.statusCode);
}
}
request(options, callback);
```
But the result is not all of the repositories, just the first page of all of them. How can I use pagination with the Request module? I've tried to find examples within the documentation, but they aren't that clear. Or do I need to do this with another library or maybe another language?
Thanks!<issue_comment>username_1: You should modify your query string to include a value for `since`. You can read more in the GitHub documentation:
<https://developer.github.com/v3/repos/#list-all-public-repositories>
Sample URL with the `since` query string:
>
> <https://api.github.com/repositories?since=364>
>
>
>
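To make that concrete, here is a minimal sketch of building such a request with the `request` module's options object (illustrative only; `repositoriesOptions` is a name I made up, mirroring the question's code):

```javascript
// Build options for the `request` module that list repositories
// created after the repository with the given id.
function repositoriesOptions(since) {
  return {
    url: 'https://api.github.com/repositories',
    headers: { 'User-Agent': 'myusernamehere' },
    qs: { since: since } // becomes ?since=364 on the wire
  };
}

// e.g. request(repositoriesOptions(364), callback);
```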
Upvotes: 1 <issue_comment>username_2: You could use the pagination data provided in `response.headers.link` that's received when making calls to the GitHub API to find out if there are any more pages left for your call.
One approach is to loop through the pages until there are no more left, at which point you can write to file and return from the function.
On each loop you can add to the data that you already have by using `concat` (I assume that the response body is delivered as an array) and then passing on the data to the next function call.
I rewrote your code to include a basic implementation of such a technique:
```
var request = require('request');
var fs = require('fs');
var requestOptions = function(page) {
var url = 'https://api.github.com/repositories?page=' + page;
return {
url: url,
headers: {
'User-Agent': 'myusernamehere'
},
qs: {
stargazers: 5000
}
};
};
function doRequest(page, incomingRepos) {
request(requestOptions(page), function(error, response, body) {
if (!error && response.statusCode == 200) {
console.log(response.headers);
var currentPageRepos = JSON.parse(body);
var joinedRepos = incomingRepos.concat(currentPageRepos);
var linkData = response.headers.link;
      // if there is no Link header, or it has no reference to a next page,
      // then we have reached the last page and can save content and return
      if (!linkData || !linkData.includes('rel="next"')) {
fs.writeFile('output_teste.json', JSON.stringify(joinedRepos), function(err) {
if (err) throw err;
console.log('It\'s saved!');
});
return;
}
page++;
doRequest(page, joinedRepos);
} else {
console.log(response.statusCode);
}
});
}
doRequest(1, []);
```
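As a small refinement (not part of the original answer), you can extract the next page's URL from the `Link` header instead of only testing for a substring; `nextPageUrl` below is an illustrative helper name:

```javascript
// Return the URL tagged rel="next" in a GitHub Link header,
// or null when there is no next page (the header may be absent entirely).
function nextPageUrl(linkHeader) {
  if (!linkHeader) return null;
  var match = linkHeader.match(/<([^>]+)>;\s*rel="next"/);
  return match ? match[1] : null;
}
```

You could then keep requesting the returned URL until the helper returns `null`.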
Upvotes: 0 |
2018/03/21 | 451 | 1,689 | <issue_start>username_0: I have written the following code:
```
Private Sub Worksheet_SelectionChange(ByVal Target As Range)
Dim myValue As Variant
If Range("B22") = "Yes" Then
myValue = InputBox("InsertInitialDetach")
Range("C22").Value = myValue
End If
End Sub
```
This code is meant to do the following: if I select Yes from a Yes/No drop-down list, a message box appears and asks me to fill in a date.
The problem is that even after I fill in the date, the box keeps reappearing and asking for it, whatever I do afterwards. If I move two cells down, for example, the popup asks me for the date again.
Please tell me what I should do to fix this error.
```
Private Sub Worksheet_SelectionChange(ByVal Target As Range)
Dim myValue As Variant
If (Not Intersect(Range("B22"), Target) Is Nothing) And (Range("B22") = "Yes") Then
myValue = InputBox("InsertInitialDetach")
Range("C22").Value = myValue
End If
End Sub
```
It checks each time whether the selection includes `Range("B22")` and whether `Range("B22")` is "Yes".
Upvotes: 2 <issue_comment>username_2: You are using the SelectionChange event, which triggers after any change of the selected area. If you want to trigger on value changes, use the Change event:
```
Private Sub Worksheet_Change(ByVal Target As Range)
Dim myValue As Variant
On Error GoTo ErrorOccured
If Target.Address = "$B$22" And Target.Value = "Yes" Then
myValue = InputBox("InsertInitialDetach")
Range("C22").Value = myValue
End If
ErrorOccured:
'Do when value is not valid
End Sub
```
Upvotes: 2 [selected_answer] |
2018/03/21 | 1,153 | 3,691 | <issue_start>username_0: I'm using a `std::string` to interface with a C-library that requires a `char*` and a length field:
```
std::string buffer(MAX_BUFFER_SIZE, '\0');
TheCLibraryFunction(&buffer[0], buffer.size());
```
However, the `size()` of the string is the actual size, not the size of the string containing actual valid non-null characters (i.e. the equivalent of `strlen()`). What is the best method of telling the `std::string` to reduce its size so that there's ~~only 1 ending null terminator character~~ no explicit null terminator characters? The best solution I can think of is something like:
```
buffer.resize(strlen(buffer.c_str()));
```
Or even:
```
char buffer[MAX_BUFFER_SIZE]{};
TheCLibraryFunction(buffer, sizeof(buffer));
std::string thevalue = buffer;
```
Hoping for some built-in / "modern C++" way of doing this.
EDIT
----
I'd like to clarify the "ending null terminator" requirement I mentioned previously. I didn't mean that I want 1 null terminator explicitly in `std::string`'s buffer, I was more or less thinking of the string as it comes out of `basic_string::c_str()` which has 1 null terminator. However for the purposes of the `resize()`, I want it to represent the size of *actual* non-null characters. Sorry for the confusion.<issue_comment>username_1: You can do this:
```
buffer.erase(std::find(buffer.begin(), buffer.end(), '\0'), buffer.end());
```
Note that [`std::basic_string::erase`](http://en.cppreference.com/w/cpp/string/basic_string/erase) has this overload:
```
basic_string& erase( size_type index = 0, size_type count = npos );
```
A more succinct way (note that if the string contains no `'\0'`, `find` returns `npos` and `erase(npos)` throws `std::out_of_range`):
```
buffer.erase(buffer.find('\0'));
```
Upvotes: 3 <issue_comment>username_2: There are many ways to do this, but the one that seems most "C++" rather than C to me is:
```
str.erase(std::find(str.begin(), str.end(), '\0'), str.end());
```
i.e. Erase everything from the first null to the end.
Upvotes: 5 [selected_answer]<issue_comment>username_3: You can use buffer.find('\0') instead of strlen(buffer.c\_str())
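For example, a minimal sketch of that approach (`trim_at_first_null` is just an illustrative name):

```cpp
#include <cassert>
#include <string>

// Truncate the buffer at the first '\0', mirroring what
// strlen(buffer.c_str()) would report, without leaving std::string.
void trim_at_first_null(std::string& s) {
    std::string::size_type pos = s.find('\0');
    if (pos != std::string::npos) {
        s.resize(pos);
    }
}
```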
Upvotes: 2 <issue_comment>username_4: I know this question already has some answers, but I also want to contribute mine, although this solution only removes null terminators at the end of the string.
```
void rtrim_null(std::string& str) {
str.erase(std::find_if(str.rbegin(), str.rend(), [](int character) {
return '\0' != character;
}).base(), str.end());
}
```
Upvotes: 1 <issue_comment>username_5: Using your
```
std::string buffer(MAX_BUFFER_SIZE, '\0');
TheCLibraryFunction(&buffer[0], buffer.size());
```
To remove the 0-terminators, the simplest is:
`buffer = buffer.c_str();`
(also possibly fastest? - didn't test)
Upvotes: 2 <issue_comment>username_6: One Liner using [boost::trim\_if](https://www.boost.org/doc/libs/1_52_0/doc/html/boost/algorithm/trim_if.html) if you want to trim multiple characters (I wanted to remove `\r`, `\n`, `\0` and `' '`) from the string:
```
#include <boost/algorithm/string.hpp>
#include <iostream>
#include <string>

int main()
{
std::string str("abcd \0 \r\n \0", 12);
std::cout << "Str: " << str << std::endl;
std::cout << "Length: " << str.length() << std::endl;
boost::trim_if(str, boost::is_any_of(std::string("\r\n\0 ", 4)));
std::cout << "Length: " << str.length() << std::endl;
return 0;
}
```
Prints:
```
Str: abcd
Length: 12
Length: 4
```
Upvotes: 0 <issue_comment>username_7: I'd like to give you a very QUICK and EASY answer. Just do:
```
yourStringWithNullChars = yourStringWithNullChars.c_str();
```
then
```
yourStringWithNullChars.shrink_to_fit();
```
Done, no need to search and replace, no need for external libraries. Yes, it's tested; I use it in my program. ;)
Bye.
Upvotes: 0 |
2018/03/21 | 2,630 | 7,237 | <issue_start>username_0: In VS Code I usually open files that have no extension just `filename` . I know I can change the language syntax with *Change Language Mode --> Language that I want* but I don't want to do this manually every time I open such a file. Can I make a default to this language every time I open a file with no extension?
I know I can do this:
```json
"files.associations": {
"*.myphp": "php"
}
```
But what if there is no extension? Also I want to be able to do this without affecting the other file types (that have extension).<issue_comment>username_1: VS Code's globbing doesn't currently seem to have a way to detect files with no extension. Every time someone opens an issue, they point it back to [this issue here](https://github.com/Microsoft/vscode/issues/1800). They detail their [globbing support here](https://code.visualstudio.com/docs/extensionAPI/vscode-api#GlobPattern).
That said, I do have a hacky solution to this. Put this in your "WORKSPACE SETTINGS" (not your general settings unless you really want this to be global).
```
{
"files.associations": {
"[!.]": "php",
"[!.][!.]": "php",
"[!.][!.][!.]": "php",
"[!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php",
"[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]": "php"
},
}
```
This works by adding a rule for every file name length and ensuring that the file cannot have a period in it (up to 20 characters in my example). This is a horrible hacky solution, but if you hold your nose and set it once, you can forget it and things just work. I do this in my dotfiles repo to associate extension-less files with "shellscript".
You can also do this just for a specific directory by using the double star glob: `"**/just_this_dir_name/[!.]": "php"`.
Upvotes: 3 <issue_comment>username_2: Thanks to the closing of issue [Expose 'change language' as command](https://github.com/Microsoft/vscode/issues/1800) and the development of plugin [Modelines](https://marketplace.visualstudio.com/items?itemName=chrislajoie.vscode-modelines), we can now use vim style modeline inside files without suffix and vscode can detect the file type.
Example:
filename: `post-commit`
file content:
```sh
#!/bin/sh
# vim: set ft=sh
echo "hello world linked"
```
After installing the Modelines extension, the file is highlighted properly:
[](https://i.stack.imgur.com/NLpK4.png)
Upvotes: 2 <issue_comment>username_3: If you choose username_1's [solution](https://stackoverflow.com/a/53468881/5589126), you can use a pattern-list entry to write it in one line:
```json
{
"files.associations": {
// all files up to 30 characters long without extension will be associated with `php`
"{[!.],[!.][!.],[!.][!.][!.],[!.][!.][!.][!.],[!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]}": "php"
}
}
```
In example above, all files up to 30 characters long without extension will be associated with `php`.
You can shorten this entry with multiple pattern lists combined in one line, such as `{...}{...}{...}`. The example below works like the previous one, but it is much shorter:
```json
{
"files.associations": {
// all files up to 30 characters long without extension will be associated with `php`
"{[!.],[!.][!.],[!.][!.][!.],[!.][!.][!.][!.],[!.][!.][!.][!.][!.]}{[],[!.],[!.][!.][!.][!.][!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]}{[],[!.],[!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.][!.]}": "php"
}
}
```
---
Also, if you want to set this up for files in a directory where only root has write access, or you don't want to create a `.vscode` folder in it, you can create a rule in your user `settings.json` that includes the path, e.g.:
```json
{
"files.associations": {
"/etc/**/{[!.],[!.][!.],...}": "shellscript"
}
}
```
Another solution for files in a specific directory is to use multiple rules:
```json
{
"files.associations": {
"/etc/**/*": "shellscript", // this rule will work on all files
"/etc/**/*.list": "debsources", // and this one to exclude .list files
"*.list": "debsources" // this rule alone won't work for .list
// files in /etc/** directories
}
}
```
This is because a more specific rule takes precedence over a more general rule.
Upvotes: 0 |