Q: Getting the error (System.InvalidOperationException: 'ExecuteNonQuery: Connection property has not been initialized.') This is my code:
private void btnRegister_Click(object sender, EventArgs e)
{
SqlConnection con = new SqlConnection("Data Source=(local);Initial Catalog=register;Integrated Security=True"); //datasource
SqlCommand cmd = new SqlCommand(@"INSERT INTO [dbo].[register]
([firstname], [lastname], [address], [gender], [email], [phone], [username], [password])
VALUES ('" + txtFname.Text + "', '" + txtLname.Text + "', '" + txtAdd.Text + "', '" + cmbGender.SelectedItem.ToString() + "', '" + txtEmail.Text + "', '" + txtPhone.Text + "', '" + txtUser.Text + "', '" + txtPass.Text + "')");
con.Open();
cmd.ExecuteNonQuery();
con.Close();
MessageBox.Show("Registered successfully"); //end of code
}
This code is supposed to add values into a database table from a Windows Forms app, but it's giving this error:
System.InvalidOperationException: 'ExecuteNonQuery: Connection property has not been initialized.'
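A likely fix, sketched below (not part of the original post): the exception means the SqlCommand was never associated with the SqlConnection, so either pass con to the SqlCommand constructor or set cmd.Connection = con before calling ExecuteNonQuery. Only two columns are shown for brevity, and parameters replace the string concatenation to avoid SQL injection:
using (SqlConnection con = new SqlConnection("Data Source=(local);Initial Catalog=register;Integrated Security=True"))
using (SqlCommand cmd = new SqlCommand(
    "INSERT INTO [dbo].[register] ([firstname], [lastname]) VALUES (@firstname, @lastname)",
    con)) // associate the command with the connection here
{
    cmd.Parameters.AddWithValue("@firstname", txtFname.Text);
    cmd.Parameters.AddWithValue("@lastname", txtLname.Text);
    con.Open();
    cmd.ExecuteNonQuery();
} // the using blocks close the connection automatically
MessageBox.Show("Registered successfully");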
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65412944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: setPlaybackQuality() does not work anymore setPlaybackQuality() does not work anymore, even on official YT Api Demo: https://developers.google.com/youtube/youtube_player_demo
What has changed in how the YouTube Iframe API works?
A: You can read in this issue that the functionality is no longer supported; the method is still there, but as a no-op.
due to changes in our player infrastructure, the player will no longer
honor requests to set a manual playback quality via the API. As
documented, the player has always made a "best effort" to respect the
requested quality.
The documentation will be updated in the future to indicate this call
is no longer supported, though it will still be available as a "no-op"
for compatibility purposes.
A: So, finally, answer from Google:
setPlaybackQuality is now considered a "no-op"; calling this function
will not change the player behavior. The player will use a variety of
signals to determine the optimal playback quality.
Users are able to manually request a specific playback quality via the
quality selector in the player controls.
A: It was also reported in this thread. You could file a bug report if you think this is a bug.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49631563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to save a module from one computer and import it into another computer in IntelliJ with Maven I have a Maven module in IntelliJ which works fine from one computer. I have saved the ".iml" file together with the project in Git. When I check it out on another computer and do:
*"New Project"
*then "File" -> "New Module from Existing Sources" -> Select the ".iml" file, the structure is all there, but no Maven dependencies are resolved.
How do I get IntelliJ to download and import the Maven dependencies?
Things I have tried:
*"Re-build Project"
*Right-click the module and "Re-build module"
*"File" -> "Invalidate Caches / Restart" (both invalidate and restart)
*"Re-import All Maven Projects", this simply deleted the two Maven modules from the project. I then had to re-create the modules as above, once they were there again I had the same problem.
On the command-line "mvn" is able to import the project and resolve all dependencies just fine.
Additional information:
*The ".iml" file, when I look at it in a text editor, does not have any absolute paths in it.
Here is a picture of the module settings window:
A: If your goal is to just import the project on another PC, don't rely on the iml files. Some even consider it bad practice to commit IDE-specific files in Maven projects, as not everyone on a project may use the same IDE version, or even the same IDE. If you take a look at popular .gitignore files (e.g. this one), you'll most often find that any IDE-specific files get excluded.
Consider importing the project's pom.xml:
Import Project -> from external model -> Maven
EDIT
JetBrains recommends NOT including the iml file with Maven or Gradle projects; see here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44634524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Multiple servers behind one public IP address So I am setting up my home network with multiple Raspberry Pis and I have run into an issue, which might be similar, but not exactly identical, to some other queries here on stackoverflow. I am just starting so this might be a pretty newbie question.
Here is the setup: I have a router (a pretty shitty one as we rent the apartment from someone who had the network set up) and want to connect three Raspberry Pis with different functions:
*RPi 1 is running an Apache2 web server and hosting my owncloud instance. As I do not have a static public IP, I am using noip.com to dynamically update a domain to resolve to my current IP address.
*RPi 2 is running a VPN service which I want to be able to use while on the road, e.g. in Internet Cafés and such.
*RPi 3 has a RPi Noir Camera v2 and serves as a baby monitor which is accessible via its private IP address within the network.
So, here comes the question: is there a way to access each of these Raspberries via their private IP addresses from outside my network?
I.e. I want to be able to access the owncloud, the VPN and the baby monitor via their respective private IP addresses? Or do I need to find a way to run all these services on a single machine?
Thanks and sorry for asking basic questions.
A: This can be done via port forwarding on the router.
For example:
for external IP / port 1234 -> forward to internal IP (and possibly different port) of RPi 1
for external IP / port 1235 -> forward to internal IP of RPi 2
and so on..
I use port 1234 as an example for the webserver, because there could be problems when using port 80 on a home network. To access it you can use yourPublicIP:1234/index.html (or dynamic_domain:1234 )
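For illustration only (this is not part of the original answer, and the interface name, internal addresses and ports below are made up), the same mapping could be expressed as DNAT rules on a Linux-based gateway:
# forward public TCP port 1234 to RPi 1's web server on its internal port 80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1234 -j DNAT --to-destination 192.168.1.10:80
# forward public port 1235 to RPi 2's VPN service (protocol/port depend on the VPN software)
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 1235 -j DNAT --to-destination 192.168.1.11:1194
On a typical consumer router the same thing is configured in the web interface, usually under "Port Forwarding" or "Virtual Server".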
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44143921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: My application is working on emulator but not on real devices I am using a simple AsyncTask for getting values from a MySQL database through JSON. It was working fine with the emulator, but when I try it from the mobile I get an error like: java.lang.NullPointerException: Attempt to invoke virtual method 'java.lang.String java.lang.StringBuilder.toString()' on a null object reference.
I tried with a new project but the result is the same. This application is not working on any device except the emulator. Can you help me with this?
My Code is -
public class MainActivity extends AppCompatActivity {
private static final String Latest_Products7 = "Questions";
JSONArray productsArray7 = null;
public static final int CONNECTION_TIMEOUT7=100000;
public static final int READ_TIMEOUT7=150000;
HashMap<String,ArrayList<WorldPopulation>> hasmap = new HashMap<String,ArrayList<WorldPopulation>>();
ArrayList<WorldPopulation> arraylist7 = null;
StringBuilder result7;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
new AsyncLogin7().execute();
}
private class AsyncLogin7 extends AsyncTask<String, String, StringBuilder> {
ProgressDialog pdLoading = new ProgressDialog(MainActivity.this);
HttpURLConnection conn7;
URL url7 = null;
@Override
protected void onPreExecute() {
super.onPreExecute();
pdLoading.setMessage("\tLoading...");
pdLoading.setCancelable(false);
pdLoading.show();
}
@Override
protected StringBuilder doInBackground(String... params) {
try {
// Enter URL address where your php file resides
url7 = new URL("http:/Samplesite/****/somephp.php");
} catch (MalformedURLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
try {
// Setup HttpURLConnection class to send and receive data from php and mysql
conn7 = (HttpURLConnection)url7.openConnection();
conn7.setReadTimeout(READ_TIMEOUT7);
conn7.setConnectTimeout(CONNECTION_TIMEOUT7);
conn7.setRequestMethod("POST");
// setDoInput and setDoOutput method depict handling of both send and receive
conn7.setDoInput(true);
conn7.setDoOutput(true);
// Append parameters to URL
Uri.Builder builder7 = new Uri.Builder().appendQueryParameter("reg_id", "hai") ;
String query7 = builder7.build().getEncodedQuery();
// Open connection for sending data
OutputStream os7 = conn7.getOutputStream();
BufferedWriter writer7 = new BufferedWriter(new OutputStreamWriter(os7, "UTF-8"));
writer7.write(query7);
writer7.flush();
writer7.close();
os7.close();
conn7.connect();
} catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
try {
int response_code7 = conn7.getResponseCode();
// Check if successful connection made
if (response_code7 == HttpURLConnection.HTTP_OK) {
// Read data sent from server
InputStream input7 = conn7.getInputStream();
BufferedReader reader7 = new BufferedReader(new InputStreamReader(input7));
result7 = new StringBuilder();
String line7;
while ((line7 = reader7.readLine()) != null) {
result7.append(line7);
}
// Pass data to onPostExecute method
}
} catch (IOException e) {
e.printStackTrace();
} finally {
conn7.disconnect();
}
return result7;
}
@Override
protected void onPostExecute(StringBuilder result7) {
super.onPostExecute(result7);
Log.e("dai",result7.toString());
Toast.makeText(MainActivity.this,result7.toString(),Toast.LENGTH_LONG).show();
pdLoading.dismiss();
/* Intent intnt = new Intent(Checklist_activity.this,Task_main.class);
intnt.putExtra("task",hasmap);
startActivity(intnt);*/
}
}
}
A: Change
try {
int response_code7 = conn7.getResponseCode();
// Check if successful connection made
if (response_code7 == HttpURLConnection.HTTP_OK) {
// Read data sent from server
InputStream input7 = conn7.getInputStream();
BufferedReader reader7 = new BufferedReader(new InputStreamReader(input7));
result7 = new StringBuilder();
String line7;
while ((line7 = reader7.readLine()) != null) {
result7.append(line7);
}
// Pass data to onPostExecute method
}
} catch (IOException e) {
e.printStackTrace();
} finally {
conn7.disconnect();
}
return result7;
To
try {
int response_code7 = conn7.getResponseCode();
result7 = new StringBuilder();
// Check if successful connection made
if (response_code7 == HttpURLConnection.HTTP_OK) {
// Read data sent from server
InputStream input7 = conn7.getInputStream();
BufferedReader reader7 = new BufferedReader(new InputStreamReader(input7));
String line7;
while ((line7 = reader7.readLine()) != null) {
result7.append(line7);
}
// Pass data to onPostExecute method
}
} catch (IOException e) {
e.printStackTrace();
} finally {
conn7.disconnect();
}
return result7;
A: Try something like this
Log.e("dai",MainActivity.this.result7.toString());
Toast.makeText(MainActivity.this,MainActivity.this.result7.toString(),Toast.LENGTH_LONG).show();
OR
@Override
protected void onPostExecute(StringBuilder result) {
super.onPostExecute(result);
Log.e("dai",result.toString());
Toast.makeText(MainActivity.this,result.toString(),Toast.LENGTH_LONG).show();
pdLoading.dismiss();
/* Intent intnt = new Intent(Checklist_activity.this,Task_main.class);
intnt.putExtra("task",hasmap);
startActivity(intnt);*/
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50327872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
} |
Q: Padding container not functioning correctly It's a bit of a tough one to explain really.
this is it:
<div id="inside-cntr">
<!--GAME CONTENT GOES HERE!-->
<div style="position:relative; margin:15px; margin-top:35px; margin-bottom:20px; padding:1px; width:200px; height:200px; display:inline-block; float:left; background-color:#333;"></div>
<div style="position:relative; margin:15px; margin-top:35px; margin-bottom:20px; padding:1px; width:200px; height:200px; display:inline-block; float:right; background-color:#333;"></div>
<!--GAME CONTENT GOES HERE!-->
#inside-cntr { position:relative; width:760px; height:auto; min-height:50px; margin:0px; background-image:url(../images/global/main-content-inner.jpg); background-repeat:repeat-y; background-position:center; z-index:10; clear:both; }
What is happening is that the two div test blocks do not sit inside the expanding div container when both blocks have float attributes.
Also, I'm not too sure why I have to use such large margins to position the div blocks.
A: #inside-cntr { overflow:hidden; zoom:1; }
Explanation: http://work.arounds.org/clearing-floats/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3508159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to activate JOptionPane from another class? I have a main class with a main GUI from which I want to activate and get values from a new class with a JOptionPane like the code below. Since I already have a main GUI window opened, how and where should I activate/call the class below, and finally, how do I get the values from the JOptionPane? Help is appreciated! Thanks!
import javax.swing.JOptionPane;
import javax.swing.JPanel;
import javax.swing.JTextField;
public class OptionPaneTest {
JPanel myPanel = new JPanel();
JTextField field1 = new JTextField(10);
JTextField field2 = new JTextField(10);
public OptionPaneTest() {
// build the panel and show the dialog (wrapped in a constructor so the snippet compiles)
myPanel.add(field1);
myPanel.add(field2);
JOptionPane.showMessageDialog(null, myPanel);
}
}
Edit:
InputNewPerson nyPerson = new InputNewPerson();
JOptionPane.showMessageDialog(null, nyPerson);
String test = nyPerson.inputName.getText();
A: JOptionPane provides a number of preset dialog types that can be used. However, when you are trying to do something that does not fit the mold of one of those types, it is best to create your own dialog by making a subclass of JDialog. Doing this will give you full control over how the controls are laid out and the ability to respond to button clicks as you want. You will want to add an ActionListener for the OK button. Then, in that callback, you can extract the values from the text fields.
The process of creating a custom dialog should be very similar to how you created the main window for your GUI. Except, instead of extending JFrame, you should extend JDialog. Here is a very basic example. In the example, the ActionListener just closes the dialog. You will want to add more code that extracts the values from the text fields and provides them to where they are needed in the rest of your code.
A: I guess looking at your question, you need something like this. I have made a small JDialog where you will enter a UserName and Answer; these will then be passed to the original GUI to be shown in the respective fields as you press the SUBMIT JButton.
Try your hands on this code and ask any question that may arise :
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
/*
* This is the actual GUI class, which will get
* values from the JDIalog class.
*/
public class GetDialogValues extends JFrame
{
private JTextField userField;
private JTextField questionField;
public GetDialogValues()
{
super("JFRAME");
}
private void createAndDisplayGUI(GetDialogValues gdv)
{
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setLocationByPlatform(true);
JPanel contentPane = new JPanel();
contentPane.setLayout(new GridLayout(0, 2));
JLabel userName = new JLabel("USERNAME : ");
userField = new JTextField();
JLabel questionLabel = new JLabel("Are you feeling GOOD ?");
questionField = new JTextField();
contentPane.add(userName);
contentPane.add(userField);
contentPane.add(questionLabel);
contentPane.add(questionField);
getContentPane().add(contentPane);
pack();
setVisible(true);
InputDialog id = new InputDialog(gdv, "Get INPUT : ", true);
}
public void setValues(final String username, final String answer)
{
SwingUtilities.invokeLater(new Runnable()
{
public void run()
{
userField.setText(username);
questionField.setText(answer);
}
});
}
public static void main(String... args)
{
Runnable runnable = new Runnable()
{
public void run()
{
GetDialogValues gdv = new GetDialogValues();
gdv.createAndDisplayGUI(gdv);
}
};
SwingUtilities.invokeLater(runnable);
}
}
class InputDialog extends JDialog
{
private GetDialogValues gdv;
private JTextField usernameField;
private JTextField questionField;
private JButton submitButton;
private ActionListener actionButton = new ActionListener()
{
public void actionPerformed(ActionEvent ae)
{
if (usernameField.getDocument().getLength() > 0
&& questionField.getDocument().getLength() > 0)
{
gdv.setValues(usernameField.getText().trim()
, questionField.getText().trim());
dispose();
}
else if (usernameField.getDocument().getLength() == 0)
{
JOptionPane.showMessageDialog(null, "Please Enter USERNAME."
, "Invalid USERNAME : ", JOptionPane.ERROR_MESSAGE);
}
else if (questionField.getDocument().getLength() == 0)
{
JOptionPane.showMessageDialog(null, "Please Answer the question"
, "Invalid ANSWER : ", JOptionPane.ERROR_MESSAGE);
}
}
};
public InputDialog(GetDialogValues gdv, String title, boolean isModal)
{
this.gdv = gdv;
setDefaultCloseOperation(JDialog.DISPOSE_ON_CLOSE);
setLayout(new BorderLayout());
setModal(isModal);
setTitle(title);
JPanel panel = new JPanel();
panel.setLayout(new GridLayout(0, 2));
JLabel usernameLabel = new JLabel("Enter USERNAME : ");
usernameField = new JTextField();
JLabel questionLabel = new JLabel("How are you feeling ?");
questionField = new JTextField();
panel.add(usernameLabel);
panel.add(usernameField);
panel.add(questionLabel);
panel.add(questionField);
submitButton = new JButton("SUBMIT");
submitButton.addActionListener(actionButton);
add(panel, BorderLayout.CENTER);
add(submitButton, BorderLayout.PAGE_END);
pack();
setVisible(true);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9700549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Top Encountered CSS Bugs/Issues Please list CSS bugs/issues you encounter and how to solve them or a link to a site that solves them.
Please vote on what bugs you think people will encounter the most.
Thanks!
A: The Internet Explorer box model bug.
A: Double Margin Bug (< IE7)
A: IE6 doesn't support min-height.
You can use conditional comments to set height, which IE6 treats as a min-height.
Or you can use the child selector in CSS, which IE6 can't read, to reinstate height: auto on everything but IE6.
.myDiv {
height: 100px;
min-height: 100px;
}
.parentElement > .myDiv {
height: auto;
}
Using techniques like this can be problematic, but all popular modern browsers work in such a way that it's a valid technique.
A: Almost every HTML/CSS bug that you will encounter will be in Internet Explorer. IE6 has a lot of them, IE7 a bit fewer and IE8 substantially fewer.
Having a proper doctype is a must. Without it the page is rendered in quirks mode, and especially for IE that is bad. It renders the page more or less as IE5 would, with the box model bug and everything.
Here are some common IE bugs:
*Making the content of each element at least one character high. (Can be fixed using overflow.)
*Expanding each element to contain its children even if they are floating elements. (Can be fixed using overflow.)
*Elements that are not positioned but have layout get a z-index, although they shouldn't. (Can be fixed by making the element positioned and giving it a specific z-index, and doing the same for all elements on the same level that need it.)
*Margins are not collapsed correctly. (Use padding instead if possible.)
*Vanishing floating elements. (Give them a specific size.)
*lots more... (including suggestions for fixes)
The most stable fix for most of the bugs is to rearrange the layout to avoid them, or to specify stricter styles (e.g. a specific size).
A: Chalk another one up for IE6:
DropDownList and DIV overlapping problem, with screen shots. The iframe fix is mentioned in the article. I'm not sure if there are CSS bugs that have consistent buggy behavior across all browsers.
A: here a link that list all IE known bugs and how to fix it:
PositionsEverything.net
A: Rumor has it that IE8 will not allow you to center elements with text-align: center;, only the text inside elements themselves. Instead, you must use margin: 0 auto;. If this is in fact the case, nearly all of the interwebs will implode.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/716013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Extract object from black background OpenCV C++ First you have to know that I work with OpenCV in C++ in Visual Studio.
I have a picture like : Original image
I want to create a new picture of the hand but with a lot less black background.
So the final image should look like this: Final Image
I know there are some OpenCV functions that could help me, but I have real trouble implementing the algorithm because OpenCV can't be used in Debug Mode, so it is hard to check what I am doing.
Does anyone have any idea how to proceed?
Thank you very much.
A: Find the contour, find its bounding rectangle, crop.
Here is an example of finding a bounding box: example. A rough sketch of these steps is below.
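The following is only a sketch of those steps, not the code from the linked example; the file name and threshold value are assumptions:
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat img = cv::imread("hand.png");                    // assumed input file
    cv::Mat gray, mask;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 10, 255, cv::THRESH_BINARY);   // keep anything brighter than the black background

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // take the largest contour, compute its bounding rectangle, and crop to it
    double bestArea = 0;
    cv::Rect box;
    for (const auto& c : contours) {
        double area = cv::contourArea(c);
        if (area > bestArea) { bestArea = area; box = cv::boundingRect(c); }
    }
    cv::Mat cropped = img(box).clone();
    cv::imwrite("hand_cropped.png", cropped);
    return 0;
}
If some black border should remain around the hand, the rectangle can be expanded by a few pixels before cropping.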
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37781244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Configuring react-toolbox sass variables with toolbox-loader I want to change the default value of the $appbar-height variable.
I created a toolbox-theme.scss file with
$appbar-height: 3 * $unit !default;
But I get a lot of errors:
ERROR in ./~/css-loader?sourceMap&modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]!./~/sass-loader?sourceMap!./~/toolbox-loader!./~/react-toolbox/lib/app/style.scss
Module build failed:
top: 0;
^
Undefined variable: "$unit".
in /home/jules/projects/tourbnb-frontend/node_modules/react-toolbox/lib/app/style.scss (line 3, column 21)
@ ./~/react-toolbox/lib/app/style.scss 4:14-230 13:2-17:4 14:20-236
ERROR in ./~/css-loader?sourceMap&modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]!./~/sass-loader?sourceMap!./~/toolbox-loader!./src/styles/styles.scss
Module build failed:
undefined
^
Undefined variable: "$unit".
in /home/jules/projects/tourbnb-frontend/src/styles/styles.scss (line 3, column 21)
@ ./src/styles/styles.scss 4:14-261 13:2-17:4 14:20-267
ERROR in ./~/css-loader?sourceMap&modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]!./~/sass-loader?sourceMap!./~/toolbox-loader!./src/components/header/style.scss
Module build failed:
@import "~react-toolbox/lib/button/config";
^
Undefined variable: "$unit".
in /home/jules/projects/tourbnb-frontend/src/components/header/style.scss (line 3, column 21)
@ ./src/components/header/style.scss 4:14-269 13:2-17:4 14:20-275
ERROR in ./~/css-loader?sourceMap&modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]!./~/sass-loader?sourceMap!./~/toolbox-loader!./~/react-toolbox/lib/button/style.scss
Module build failed:
@import "./config";
^
Undefined variable: "$unit".
in /home/jules/projects/tourbnb-frontend/node_modules/react-toolbox/lib/button/style.scss (line 3, column 21)
@ ./~/react-toolbox/lib/button/style.scss 4:14-230 13:2-17:4 14:20-236
ERROR in ./~/css-loader?sourceMap&modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]!./~/sass-loader?sourceMap!./~/toolbox-loader!./~/react-toolbox/lib/app_bar/style.scss
Module build failed:
^
Undefined variable: "$unit".
in /home/jules/projects/tourbnb-frontend/node_modules/react-toolbox/lib/app_bar/style.scss (line 3, column 21)
@ ./~/react-toolbox/lib/app_bar/style.scss 4:14-230 13:2-17:4 14:20-236
ERROR in ./~/css-loader?sourceMap&modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]!./~/sass-loader?sourceMap!./~/toolbox-loader!./~/react-toolbox/lib/ripple/style.scss
Module build failed:
^
Undefined variable: "$unit".
in /home/jules/projects/tourbnb-frontend/node_modules/react-toolbox/lib/ripple/style.scss (line 3, column 21)
@ ./~/react-toolbox/lib/ripple/style.scss 4:14-230 13:2-17:4 14:20-236
If I change my config file to $color-primary-dark: $palette-blue-700 !default; it works all right and changes the color. How is $unit different from $palette-blue-700?
My webpack loader for styles:
{
test: /(\.css|\.scss)$/,
loader: 'style!css?sourceMap&modules&importLoaders=1&localIdentName=[name]__[local]___[hash:base64:5]!sass?sourceMap!toolbox'
}
A: toolbox-loader only imports the _colors.scss file (see the first code line).
You have to import the _globals.scss file manually (or fork toolbox-loader).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/35255862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Android Floating buttons are not set at a fixed position I am showing 2 floating buttons in my activity, but the floating buttons sit at the bottom end when there is no data to show in the activity. If records are shown in the activity then the floating buttons show after the records instead of at a fixed position.
Following is my activity layout
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@color/login_background"
android:orientation="vertical">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="200dp"
android:layout_gravity="center"
android:background="@color/login_header"
android:orientation="vertical">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="5dp"
android:orientation="horizontal">
<ImageView
android:id="@+id/imgInfo"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="5dp"
android:src="@drawable/info" />
<ImageView
android:id="@+id/imgLogout"
android:layout_width="wrap_content"
android:layout_height="20dp"
android:layout_marginStart="290dp"
android:src="@drawable/logout" />
</LinearLayout>
<ImageView
android:layout_width="100dp"
android:layout_height="100dp"
android:layout_gravity="center_horizontal|center_vertical"
android:layout_marginTop="10dp"
android:src="@drawable/logo1" />
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="center"
android:orientation="horizontal"
android:paddingStart="120dp">
<ImageView
android:layout_width="wrap_content"
android:layout_height="40dp"
android:layout_gravity="center_horizontal|center_vertical"
android:background="@color/login_header"
android:src="@drawable/userprofile" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center"
android:fontFamily="@font/segoeui"
android:paddingStart="20dp"
android:text="@string/name"
android:textColor="@color/white"
android:textSize="20sp"
android:textStyle="bold" />
</LinearLayout>
</LinearLayout>
<LinearLayout
android:id="@+id/machineLayout"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@color/white"
android:orientation="vertical">
<android.support.v7.widget.RecyclerView
android:id="@+id/recyclerView"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:scrollbars="vertical">
</android.support.v7.widget.RecyclerView>
</LinearLayout>
</LinearLayout>
<android.support.design.widget.FloatingActionButton
android:id="@+id/search"
android:layout_width="46dp"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_alignParentEnd="true"
android:layout_alignParentRight="true"
android:layout_gravity="bottom|end"
android:layout_margin="@dimen/fab_margin"
app:srcCompat="@drawable/search" />
<android.support.design.widget.FloatingActionButton
android:id="@+id/addNew"
android:layout_width="46dp"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_alignParentEnd="true"
android:layout_alignParentRight="true"
android:layout_gravity="bottom|end"
android:layout_margin="@dimen/fab_margin"
app:srcCompat="@drawable/add" />
Following is the screen shot
A: Try this
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="200dp"
android:layout_gravity="center"
android:orientation="vertical">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="5dp"
android:orientation="horizontal">
<ImageView
android:id="@+id/imgInfo"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="5dp"
android:src="@drawable/ic_message" />
<ImageView
android:id="@+id/imgLogout"
android:layout_width="wrap_content"
android:layout_height="20dp"
android:layout_marginStart="290dp"
android:src="@drawable/ic_message" />
</LinearLayout>
<ImageView
android:layout_width="100dp"
android:layout_height="100dp"
android:layout_gravity="center_horizontal|center_vertical"
android:layout_marginTop="10dp"
android:src="@drawable/kid_goku" />
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="center"
android:orientation="horizontal"
android:paddingStart="120dp">
<ImageView
android:layout_width="wrap_content"
android:layout_height="40dp"
android:layout_gravity="center_horizontal|center_vertical"
android:src="@drawable/kid_goku" />
<TextView
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center"
android:paddingStart="20dp"
android:text="name"
android:textColor="@color/white"
android:textSize="20sp"
android:textStyle="bold" />
</LinearLayout>
</LinearLayout>
<LinearLayout
android:id="@+id/machineLayout"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="@color/white"
android:orientation="vertical">
<android.support.v7.widget.RecyclerView
android:id="@+id/recyclerView"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:scrollbars="vertical">
</android.support.v7.widget.RecyclerView>
</LinearLayout>
</LinearLayout>
<android.support.design.widget.FloatingActionButton
android:id="@+id/search"
android:layout_width="46dp"
android:layout_height="wrap_content"
android:layout_above="@+id/addNew"
android:layout_alignParentEnd="true"
android:layout_alignParentRight="true"
android:layout_margin="@dimen/fab_margin"
app:srcCompat="@drawable/ic_message" />
<android.support.design.widget.FloatingActionButton
android:id="@+id/addNew"
android:layout_width="46dp"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_alignParentEnd="true"
android:layout_alignParentRight="true"
android:layout_gravity="bottom|end"
android:layout_margin="@dimen/fab_margin"
app:srcCompat="@drawable/ic_message" />
</RelativeLayout>
OUTPUT
A: Use CoordinatorLayout as root view.
And also add app:layout_anchorGravity="bottom|right|end" (specify as you need) to the FloatingActionButton
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50232067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Indexing Strategy in Oracle I have a table with 2 million rows.
The NDV (number of distinct values) in the columns is as follows:
A - 3
B - 60
D - 150
E - 600,000
The most frequently updated columns are A & B ( NDV = 3 for both ).
Assuming every query will have either column D or column E in the WHERE clause, which of the following will be the best set of indexes for SELECT statements:
D
D,E,A
E,A
A,E
A: Not really enough information to give a definitive assessment, but some things to consider:
*You're unlikely to get a skip scan benefit, so if you want snappy
response from predicates with leading E or leading D, that will be 2
indexes (one leading with D, and one leading with E) - see the sketch below.
*If A/B are updated frequently (although that's a generic term),
you might choose to leave them out of the index definition in
order to reduce index maintenance overhead.
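A sketch of what those two indexes could look like (table and index names are placeholders, not from the question):
CREATE INDEX my_table_d_idx ON my_table (d);
CREATE INDEX my_table_e_idx ON my_table (e);
Whether to append further columns such as A to either index depends on the actual queries and on how often those columns are updated, as noted above.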
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48754047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: JSON parameter size limit I am calling my WCF Web service using jQuery $.ajax json POST.
One of the input parameters is very long - over 8000 bytes. The data in it is a comma-separated list of GUIDs, like this "78dace54-1eea-4b31-8a43-dcd01e172d14,ce485e64-e7c6-481c-a424-2624371180aa,ede4c606-f743-4e0a-a8cc-59bcffa7feda,f0a81ed1-80db-4f6d-92d7-2fc47759a409".
When that parameter is 8176 bytes long, the request succeeds. When it's 8213 (one more comma and GUID) - the request fails.
It fails from the browser and from Fiddler (HTTP debugging proxy).
I added this to the webservice config:
<configuration>
<system.web.extensions>
<scripting>
<webServices>
<jsonSerialization maxJsonLength="50000000" recursionLimit="50000"/>
</webServices>
</scripting>
</system.web.extensions>
That does not make any difference, the request still fails for input param over 8176 bytes long.
That input param maps into a String on the WCF side.
What am I missing? Thank you!
UPDATE, this solved my problem:
Turns out that this setting controls the total JSON message length
<webServices>
<jsonSerialization maxJsonLength="50000000" recursionLimit="50000"/>
</webServices>
There is another setting that controls maximum length for individual parameters:
<bindings>
<webHttpBinding>
<binding name="Binding_Name" maxReceivedMessageSize="900000">
<readerQuotas maxDepth="32" maxStringContentLength="900000" maxBytesPerRead="900000" maxArrayLength="120000" maxNameTableCharCount="120000"/>
</binding>
</webHttpBinding>
</bindings>
Also, make sure to set this:
<system.web>
<httpRuntime maxRequestLength="900000"/>
Hope this takes care of some headaches out there!
A: The actual limit seems to be 8192 bytes.
You have to check your Web.config in the system.serviceModel tag :
<system.serviceModel>
<bindings>
<basicHttpBinding>
<binding name="Service1Soap" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
<readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384"/>
<security mode="None">
<transport clientCredentialType="None" proxyCredentialType="None" realm=""/>
<message clientCredentialType="UserName" algorithmSuite="Default"/>
</security>
</binding>
</basicHttpBinding>
</bindings>
You need to change maxStringContentLength="8192" to a greater value.
You may also make multiple requests instead of one to get the list of GUIDs page by page, using an offset parameter in each request. For example, to get the list of GUIDs in pages of 200, first request with offset=0, second with offset=200, ... until you get fewer than 200 items.
A: I know it won't be of much help for you but I'd like to point out that the JSON spec does not set any limit; however, it allows parsers to do so:
An implementation may set limits on the size of texts that it
accepts. An implementation may set limits on the maximum depth of
nesting. An implementation may set limits on the range of numbers.
An implementation may set limits on the length and character contents
of strings.
RFC4627: The application/json Media Type for JavaScript Object Notation (JSON)
See if this answer applies to you.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9121158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
} |
Q: Operator overloading for primitive types in C++ In C++, if you've got a class Foo, and you want users of Foo to be able to write:
Foo x;
x += 3;
, you can simply make a member function of Foo, Foo& operator+=(int rhs). If you want to be able to write:
Foo x;
int y = 3;
y += x;
, you cannot accomplish this by writing a member function of Foo, instead one has to write an external function, which usually must be declared as a friend of Foo.
How hard would it be for a future version of C++ to say that this can be written with a Foo member function int& operator+=(int &lhs, Reversed), where Reversed was some empty class whose sole purpose was to distinguish the two versions of the operator?
If this were done, it could probably eliminate the vast majority of the uses of the friend keyword in new code.
A: You can in fact define such an operator, because you are free to overload += for built-in types:
int& operator+=(int &lhs, Foo &rhs) {
lhs += rhs.somefield;
return lhs;
}
On the other hand, instead of writing overloaded functions for all possible operators, you can also provide a function that will allow implicit casts of class Foo to int:
class Foo {
... somefield;
operator int() {
return (int)somefield;
}
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39808976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Attempted to call an undefined method named "getDefaultName" I want to upgrade Symfony from 3.3 to 3.4, but when I do a composer update, I get this error:
[RuntimeException]
An error occurred when executing the "'cache:clear --no-warmup'" command:
PHP Fatal error: Uncaught Symfony\Component\Debug\Exception\UndefinedMethodException: Attempted to call an undefined method named "getDefaultName" of class "Doctrine\Bundle\DoctrineCacheBundle\Command\ContainsCommand". in /srv/http/ocim.formations/vendor/symfony/symfony/src/Symfony/Component/Console/DependencyInjectionAddConsoleCommandPass.php:61
Stack trace:
#0 /srv/http/ocim.formations/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/Compiler/Compiler.php(141): Symfony\Component\Console\DependencyInjection\AddConsoleCommandPass->process(Object(Symfony\Component\DependencyInjection\ContainerBuilder))
#1 /srv/http/ocim.formations/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/ContainerBuilder.php(759): Symfony\Component\DependencyInjection\Compiler\Compiler->compile(Object(Symfony\Component\DependencyInjection\ContainerBuilder))
#2 /srv/http/ocim.formations/vendor/symfony/symfony/src/Symfony/Component/HttpKernel/Kernel.php(643): Symfony\Component\DependencyInjection\ContainerBuilder->compile in /srv/http/ocim.formations/vendor/symfony/symfony/src/Symfony/Component/Console/DependencyInjection/AddConsoleCommandPass.php on line 61
In the browser, there are 2 messages :
(2/2) ContextErrorException
Warning: file_put_contents(/srv/http/ocim.formations/var/cache/dev/appDevDebugProjectContainerDeprecations.log): failed to open stream: Permission denied
in Kernel.php (line 648)
(1/2) FatalThrowableError
Call to undefined method Doctrine\Bundle\DoctrineCacheBundle\Command\ContainsCommand::getDefaultName()
in AddConsoleCommandPass.php (line 61)
Thanks for your help
A: As mentioned here, it may be caused by an outdated Composer version, which uses an older version of Symfony's Console component. So when Composer has previously loaded an older version of this class, it is not autoloaded again when your Symfony instance tries to access it later in the cache:clear command.
The solution may be to update your Composer with composer self-update.
A: I ran into a similar problem today: We use symfony/console 3.4+ in our project, which tries to load the command name via getDefaultName. But Composer internally uses an older version of symfony/console where this method does not exist, since it was added in v3.4.0.
In this case a composer self-update won't help, but you can make sure to add the command to your service-definition like so:
services:
myvendorname.mypackagename.foo.command:
class: MyVendorName\MyPackageName\Command\FooCommand
tags:
- { name: 'console.command', command: 'foo' }
# ^^^^^^^^^^^^^^^^
# This is the important part
This will load the name directly from the service definition and does not try to call getDefaultName().
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49941993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Cordova android gradle error when building I had Android building working for Cordova for a while, then I tried to upgrade the cordova-android version, and now, after having spent a whole day on it, I can't get it to work again.
First it wouldn't download gradle, but I found a solution for that where I could get it to download from localhost.
Now it won't download gradle-2.2.3.pom from either Maven or jcenter. The URLs it complains about all work in the browser, but it fails instantly in the console, clearly not even trying. The thing is, the first time I got it to work I was in China, so I used some proxy settings, I think, to get it to work (since they block everything). I have unset all the proxy settings I could think of and even reinstalled cordova, node+npm, cordova-android and even Android Studio, but I cannot for the life of me get it to work.
This is the actual error I get from the console:
A problem occurred configuring root project 'android'.
Could not resolve all dependencies for configuration ':classpath'.
Could not resolve com.android.tools.build:gradle:2.2.3.
Required by:
project :
Could not resolve com.android.tools.build:gradle:2.2.3.
Could not get resource 'https://jcenter.bintray.com/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.pom'.
Could not GET 'https://jcenter.bintray.com/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.pom'.
Connect to jcenter.bintray.com:443 [jcenter.bintray.com/127.0.1.3] failed: Connection refused: connect
Could not resolve com.android.tools.build:gradle:2.2.3.
Could not get resource 'https://repo1.maven.org/maven2/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.pom'.
Could not GET 'https://repo1.maven.org/maven2/com/android/tools/build/gradle/2.2.3/gradle-2.2.3.pom'.
Connect to repo1.maven.org:443 [repo1.maven.org/127.0.1.2] failed: Connection refused: connect
Any ideas very much appreciated! I'm on Windows 10 and since I reinstalled everything today I'm now on cordova 7.01 and Android studio 2.3.3 (although it also didn't work with the earlier versions).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44996539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: out parameters of struct type not required to be assigned I've noticed some bizarre behavior in my code when accidentally commenting out a line in a function during code review. It was very hard to reproduce but I'll depict a similar example here.
I've got this test class:
public class Test
{
public void GetOut(out EmailAddress email)
{
try
{
Foo(email);
}
catch
{
}
}
public void Foo(EmailAddress email)
{
}
}
There is no assignment to email in GetOut, which normally would produce an error:
The out parameter 'email' must be assigned to before control leaves the current method
However, if EmailAddress is a struct in a separate assembly, there is no error and everything compiles fine.
public struct EmailAddress
{
#region Constructors
public EmailAddress(string email)
: this(email, string.Empty)
{
}
public EmailAddress(string email, string name)
{
this.Email = email;
this.Name = name;
}
#endregion
#region Properties
public string Email { get; private set; }
public string Name { get; private set; }
#endregion
}
Why doesn't the compiler enforce that Email must be assigned to?
Why does this code compile if the struct is created in a separate assembly, but it doesn't compile if the struct is defined in the existing assembly?
A: TLDR: This is a known bug of long standing. I first wrote about it in 2010:
https://blogs.msdn.microsoft.com/ericlippert/2010/01/18/a-definite-assignment-anomaly/
It is harmless and you can safely ignore it, and congratulate yourself on finding a somewhat obscure bug.
Why doesn't the compiler enforce that Email must be definitely assigned?
Oh, it does, in a fashion. It just has a wrong idea of what condition implies that the variable is definitely assigned, as we shall see.
Why does this code compile if the struct is created in a separate assembly, but it doesn't compile if the struct is defined in the existing assembly?
That's the crux of the bug. The bug is a consequence of the intersection of how the C# compiler does definite assignment checking on structs and how the compiler loads metadata from libraries.
Consider this:
struct Foo
{
public int x;
public int y;
}
// Yes, public fields are bad, but this is just
// to illustrate the situation.
void M(out Foo f)
{
OK, at this point what do we know? f is an alias for a variable of type Foo, so the storage has already been allocated and is definitely at least in the state that it came out of the storage allocator. If there was a value placed in the variable by the caller, that value is there.
What do we require? We require that f be definitely assigned at any point where control leaves M normally. So you would expect something like:
void M(out Foo f)
{
f = new Foo();
}
which sets f.x and f.y to their default values. But what about this?
void M(out Foo f)
{
f = new Foo();
f.x = 123;
f.y = 456;
}
That should also be fine. But, and here is the kicker, why do we need to assign the default values only to blow them away a moment later? C#'s definite assignment checker checks to see if every field is assigned! This is legal:
void M(out Foo f)
{
f.x = 123;
f.y = 456;
}
And why should that not be legal? It's a value type. f is a variable, and it already contains a valid value of type Foo, so let's just set the fields, and we're done, right?
Right. So what's the bug?
The bug that you have discovered is: as a cost savings, the C# compiler does not load the metadata for private fields of structs that are in referenced libraries. That metadata can be huge, and it would slow down the compiler for very little win to load it all into memory every time.
And now you should be able to deduce the cause of the bug you've found. When the compiler checks to see if the out parameter is definitely assigned, it compares the number of known fields to the number of fields that were definitely initialized, and in your case it only knows about the zero public fields because the private field metadata was not loaded. The compiler concludes "zero fields required, zero fields initialized, we're good."
Like I said, this bug has been around for more than a decade and people like you occasionally rediscover it and report it. It's harmless, and it is unlikely to be fixed because fixing it is of almost zero benefit but a large performance cost.
And of course the bug does not repro for private fields of structs that are in source code in your project, because obviously the compiler already has information about the private fields at hand.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58631941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Getting unknown attribute error in android studio I am working on a project which I saw in a video tutorial. But when I write app:menu="@menu/bottom_navigation_menu" to link a customized menu, I have a problem. It does not work properly. This is my code error image.
A: You need to add
xmlns:app="http://schemas.android.com/apk/res-auto"
to your main xml element
A: I am assuming you are using a drawer layout as your root layout. If that is the case then add the below line of code to your drawer layout
xmlns:app="http://schemas.android.com/apk/res-auto"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46833000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Genymotion: "Unfortunately has stopped" I've created an application in React Native that works fine in iOS. I've copied the code over to the Android portion of it, and separated out the platform-specific components. When I hit a certain component, the app crashes with an "Unfortunately has stopped".
There are no logs, no error in the console, nothing. What do I look for and where can I look? Logs? Somewhere in code?
In ~/genymotion-log/Google Nexus 6<...>-logcat.txt, I see the following:
05-15 23:50:14.379 D/OpenGLRenderer( 620): Use EGL_SWAP_BEHAVIOR_PRESERVED: true
05-15 23:50:14.380 D/Atlas ( 620): Validating map...
05-15 23:50:14.429 I/OpenGLRenderer( 620): Initialized EGL, version 1.4
05-15 23:50:14.429 D/ ( 620): HostConnection::get() New Host Connection established 0xaf31ca40, tid 1876
05-15 23:50:14.463 D/OpenGLRenderer( 620): Enabling debug mode 0
05-15 23:50:14.489 W/EGL_emulation( 620): eglSurfaceAttrib not implemented
05-15 23:50:14.490 W/OpenGLRenderer( 620): Failed to set EGL_SWAP_BEHAVIOR on surface 0x9e45dfc0, error=EGL_SUCCESS
05-15 23:50:14.490 W/EGL_emulation( 941): eglSurfaceAttrib not implemented
05-15 23:50:14.490 W/OpenGLRenderer( 941): Failed to set EGL_SWAP_BEHAVIOR on surface 0xb43e44a0, error=EGL_SUCCESS
05-15 23:50:14.952 I/ActivityManager( 620): Killing 1492:com.android.onetimeinitializer/u0a10 (adj 15): empty #17
05-15 23:50:15.219 W/OpenGLRenderer( 941): Incorrectly called buildLayer on View: ShortcutAndWidgetContainer, destroying layer...
05-15 23:50:15.440 W/ResourceType( 724): No package identifier when getting value for resource number 0x00000000
05-15 23:50:15.442 W/PackageManager( 724): Failure retrieving resources for com.bidsmart: Resource ID #0x0
05-15 23:50:18.400 W/AudioTrack( 620): AUDIO_OUTPUT_FLAG_FAST denied by client
05-15 23:50:18.424 I/Process ( 1805): Sending signal. PID: 1805 SIG: 9
05-15 23:50:18.463 D/OpenGLRenderer( 620): endAllStagingAnimators on 0xa1a6f780 (RippleDrawable) with handle 0xaf3be470
05-15 23:50:18.468 I/ActivityManager( 620): Process com.bidsmart (pid 1805) has died
05-15 23:50:18.472 W/InputMethodManagerService( 620): Got RemoteException sending setActive(false) notification to pid 1805 uid 10061
A: No fix, but the reason is I'm pushing too much data from the server to the client. Once I ran adb logcat, I got this:
java.lang.OutOfMemoryError: Failed to allocate a 2470012 byte allocation with 48508 free bytes and 47KB until OOM.
Turns out I'm pushing my images over and over to the client until it breaks. iOS can handle it but RN can't.
Link to StackOverflow related thread: Android:java.lang.OutOfMemoryError: Failed to allocate a 23970828 byte allocation with 2097152 free bytes and 2MB until OOM
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37246369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Not able to use Embedding Layer with tf.distribute.MirroredStrategy I am trying to parallelize a model with an embedding layer on TensorFlow version 2.4.1, but it is throwing the following error:
InvalidArgumentError: Cannot assign a device for operation sequential/emb_layer/embedding_lookup/ReadVariableOp: Could not satisfy explicit device specification '' because the node {{colocation_node sequential/emb_layer/embedding_lookup/ReadVariableOp}} was colocated with a group of nodes that required incompatible device '/job:localhost/replica:0/task:0/device:GPU:0'. All available devices [/job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0, /job:localhost/replica:0/task:0/device:XLA_GPU:0, /job:localhost/replica:0/task:0/device:GPU:0].
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=2 requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
GatherV2: GPU CPU XLA_CPU XLA_GPU
Cast: GPU CPU XLA_CPU XLA_GPU
Const: GPU CPU XLA_CPU XLA_GPU
ResourceSparseApplyAdagradV2: CPU
_Arg: GPU CPU XLA_CPU XLA_GPU
ReadVariableOp: GPU CPU XLA_CPU XLA_GPU
Colocation members, user-requested devices, and framework assigned devices, if any:
sequential_emb_layer_embedding_lookup_readvariableop_resource (_Arg) framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
adagrad_adagrad_update_update_0_resourcesparseapplyadagradv2_accum (_Arg) framework assigned device=/job:localhost/replica:0/task:0/device:GPU:0
sequential/emb_layer/embedding_lookup/ReadVariableOp (ReadVariableOp)
sequential/emb_layer/embedding_lookup/axis (Const)
sequential/emb_layer/embedding_lookup (GatherV2)
gradient_tape/sequential/emb_layer/embedding_lookup/Shape (Const)
gradient_tape/sequential/emb_layer/embedding_lookup/Cast (Cast)
Adagrad/Adagrad/update/update_0/ResourceSparseApplyAdagradV2 (ResourceSparseApplyAdagradV2) /job:localhost/replica:0/task:0/device:GPU:0
[[{{node sequential/emb_layer/embedding_lookup/ReadVariableOp}}]] [Op:__inference_train_function_631]
I simplified the model to a basic one to make it reproducible:
import tensorflow as tf
central_storage_strategy = tf.distribute.MirroredStrategy()
with central_storage_strategy.scope():
user_model = tf.keras.Sequential([
tf.keras.layers.Embedding(10, 2, name = "emb_layer")
])
user_model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1), loss="mse")
user_model.fit([1],[[1,2]], epochs=3)
Any help will be highly appreciated. Thanks !
A: So finally I figured out the problem, if anyone is looking for an answer.
TensorFlow does not have a complete GPU implementation of the Adagrad optimizer as of now. The ResourceSparseApplyAdagradV2 operation, which is integral to the embedding-layer update, gives an error on GPU, so Adagrad cannot be used with an embedding layer under data-parallelism strategies. Using Adam or RMSprop works fine.
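For example, the toy model from the question runs under the same strategy once the optimizer is swapped; this is a minimal sketch assuming the same TensorFlow 2.4 setup as above:
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    user_model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10, 2, name="emb_layer")
    ])
    # Adam (or RMSprop) handles the sparse embedding update on GPU
    user_model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")

user_model.fit([1], [[1, 2]], epochs=3)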
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66688358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Indentation of li items in ng-repeat I am using ng-repeat to show list items with some text. And I want every single item to be indented 10-20px to the right from the previous one. I don't have much experience with css.
<li ng-repeat="todo in todos"
ng-class="{'selectedToDo': (todo.id == selectedToDo)}">
{{todo.toDoText}}
</li>
Here is a jsFiddle with my code.
Thanks in advance!
A: you may use ng-style to solve your problem:
<li ng-repeat="todo in todos"
ng-class="{'selectedToDo': (todo.id == selectedToDo)}"
ng-style="{'margin-left': 10*$index+'px'}">
{{todo.toDoText}}
</li>
$index is a variable that will be set by ng-repeat. You may use this to calculate your style.
A: Change your template to the following:
<div ng-controller="MyCtrl">
<ul>
<li ng-repeat="todo in todos"
ng-class="{'selectedToDo': (todo.id == selectedToDo)}" style="text-indent: {{$index * 10}}px">
{{todo.toDoText}}
</li>
</ul>
</div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21754394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: NumberFormatter does not respect minimumFractionDigits while using significant digits I would like to format and display Double with the following rules:
If there are more than 2 fractional digits, display it as 6
significant figures, otherwise just display its original value with a minimum of 2 fractional digits.
In order to format numbers according to my need, I declared an extension with the following method:
extension NSNumber {
func significantFormattedString(minimum: Int = 2, maximum: Int = 6) -> String {
let formatter = NumberFormatter()
formatter.locale = Locale(identifier: "en_US")
formatter.minimumSignificantDigits = minimum
formatter.maximumSignificantDigits = maximum
formatter.minimumFractionDigits = 2
formatter.usesGroupingSeparator = true
formatter.minimumIntegerDigits = 1
formatter.numberStyle = .decimal
return formatter.string(from: self) ?? "-"
}
}
And then when testing with the code below:
let myNumber = NSNumber(value: 7568.9)
let myString = myNumber.significantFormattedString()
print("\(myString)") // prints "7,568.9"
If I comment out the lines specifying minimumSignificantDigits and maximumSignificantDigits, it works as expected (i.e. displaying a minimum of 2 fraction digits, "7,568.90")
Is there anyway to achieve my desired result, or I can only format the result again?
Thanks!
A: You said "If there are more than 2 fractional digits". A number cannot "have" 2 fractional digits. You can add an infinite number of fractional digits (0) to a number and the value will not change. You are confusing the actual value with a format string.
What you really mean is probably "If 100 times the number is an integer". To do this, you will need an if statement:
let formatter = NumberFormatter()
formatter.locale = Locale(identifier: "en_US")
formatter.usesGroupingSeparator = true
formatter.minimumIntegerDigits = 1
formatter.numberStyle = .decimal
            // check whether 10^minimum (here 100) times the value is an integer,
            // i.e. the value fits in at most `minimum` fraction digits
            if (self.doubleValue * pow(10.0, Double(minimum))).truncatingRemainder(dividingBy: 1) == 0 {
formatter.minimumFractionDigits = minimum
} else {
formatter.minimumSignificantDigits = maximum
}
return formatter.string(from: self) ?? "-"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49624484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: PyGTK - localization with Right To Left languages (BiDi) (Almost?) all of the material in the Web regarding PyGTK localization is discussing usage of gettext - i.e., how to properly show the translated strings.
But that's not enough... There are certain languages (Hebrew, Arabic and more) that are written from Right To Left, and therefore, the widgets should be 'swapped'. Packing 'Start' should be at the rightmost, and continue to the left.
I assume that locale.setlocale(locale.LC_ALL, '') should solve the problem.
However, it didn't work (on Hebrew Windows 7 machine).
Here is a sample code, that tries to change the locale to Hebrew and displays 2 buttons - but they are still from Left To Right:
import gtk
import locale
locale.setlocale(locale.LC_ALL, 'Hebrew_Israel.1255')
print locale.setlocale(locale.LC_ALL)
window = gtk.Window(gtk.WINDOW_TOPLEVEL)
window.connect("destroy", lambda w: gtk.main_quit())
box1 = gtk.HBox(False, 0)
window.add(box1)
button1 = gtk.Button("first")
box1.pack_start(button1, True, True, 0)
button2 = gtk.Button("second")
box1.pack_start(button2, True, True, 0)
window.show_all()
gtk.main()
A: gtk.widget_set_default_direction(gtk.TEXT_DIR_RTL)
This sets the default direction for widgets that don't call set_direction.
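Applied to the sample in the question, the call just needs to happen before the widgets are created - a minimal sketch:
import gtk
import locale

locale.setlocale(locale.LC_ALL, 'Hebrew_Israel.1255')

# flip the default direction so pack_start lays widgets out right-to-left
gtk.widget_set_default_direction(gtk.TEXT_DIR_RTL)

window = gtk.Window(gtk.WINDOW_TOPLEVEL)
window.connect("destroy", lambda w: gtk.main_quit())
box1 = gtk.HBox(False, 0)
window.add(box1)
box1.pack_start(gtk.Button("first"), True, True, 0)
box1.pack_start(gtk.Button("second"), True, True, 0)
window.show_all()
gtk.main()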
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25780883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to update nested JSON with mongo/node/mongoose My front end is setup with React, and I am using MongoDB for my database, node/express and mongoose ODM.
All of my data is basically nested JSON essentially like this.
{
    "data": [
        {
            "id": 0,
            "stringA": "a random string",
            "stringB": "another random string",
            "someArray": [
                {
                    "id": 0,
                    "stringInArray": 3,
                    "nestedArrayOne": [
                        {
                            "id": 0,
                            "stringInNestedArray": "asdasd"
                        },
                        {
                            "id": 1,
                            "string2InNestedArray": "asdasd"
                        }
                    ]
                },
                {
                    "id": 1,
                    "stringInArray": 3,
                    "nestedArrayTwo": [
                        {
                            "id": 0,
                            "anotherNestArray": [
                                {
                                    "stringInNestedArray": "string"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}
I apologize if that is difficult to understand. Anyways, I have setup several routes in my Node/express server using mongoose. Get, put and post requests are easy at the top level.
myRouter.route('/')
.get((req, res, next) => {
Data.find()
.then(data=> {
console.log('getting all incidents');
res.statusCode = 200;
res.setHeader('Content-Type', 'application/json');
res.json(data);
})
.catch(err => next(err));
})
.post((req, res, next) => {
MyRouter.create(req.body)
.then(data=> {
res.statusCode = 200;
res.setHeader('Content-Type', 'application/json');
res.json(data);
})
.catch(err => next(err));
})
Second level isn't so bad.
myRouter.route('/:id')
.get((req, res, next) => {
Data.findById(req.params.id)
.then(data=> {
res.statusCode = 200;
res.setHeader('Content-Type', 'application/json');
res.json(data);
})
.catch(err => next(err));
})
.put((req, res, next) => {
Data.findByIdAndUpdate(req.params.id, {
$set: req.body
}, { new: true })
.then(data=> {
res.statusCode = 200;
res.setHeader('Content-Type', 'application/json');
res.json(data);
})
.catch(err => next(err));
})
Once I start getting into routes where I am accessing nested arrays such as
myRouter.route('/:id/array/:arrayId')
or
myRouter.route('/:id/array/:arrayId/anotherArray')
I have no idea where to even start. For a POST request for the first array that I come to I have this code that works fine.
myRouter.route('/:id/array')
.post((req, res, next) => {
Data.findById(req.params.id)
.then(data=> {
if (data) {
data.myArray.push(req.body);
data.save()
.then(data=> {
res.statusCode = 200;
res.setHeader('Content-Type', 'application/json');
res.json(data.myArray[data.myArray.length-1]);
})
.catch(err => next(err));
} else{
err = new Error(`Incident ${req.params.id} not found`);
err.status = 404;
return next(err);
}
})
.catch(err => next(err));
})
Like I said, that POST request works fine; I push data into what starts out as an empty array, save it and then return the last (most recent) entry that I pushed into the array.
Is this the correct way of going about this? What if I want to push data into a nested array within that previous array? (ie myRouter.route('/:id/array/:arrayId/anotherArray'))
I am trying to use the built-in Mongoose functions such as "findById" and "findByIdAndUpdate" however, I can only access the first id in my route and not the id's of my nested arrays as far as I can tell.
Is there a proper way to post new data and update old data in nested arrays without having to basically search the main object, then the first array, then the next array, then the next array then push my data?
I hope this wasn't too terrible to understand, I appreciate the help! Thanks.
A: If you want to add elements to a nested array, you need to use the $push update operator.
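For a deeply nested target such as the '/:id/array/:arrayId/anotherArray' route, you can combine $push with arrayFilters so MongoDB knows which element of the outer array to push into. This is only a sketch based on the field names from the example document above (someArray / nestedArrayOne), so adjust the paths to your real schema:
myRouter.route('/:id/array/:arrayId/anotherArray')
.post((req, res, next) => {
    Data.findByIdAndUpdate(
        req.params.id,
        // push the request body into the nested array of the matching someArray element
        { $push: { 'someArray.$[outer].nestedArrayOne': req.body } },
        // arrayFilters picks the someArray element whose id equals :arrayId
        { arrayFilters: [{ 'outer.id': Number(req.params.arrayId) }], new: true }
    )
    .then(data => {
        res.statusCode = 200;
        res.setHeader('Content-Type', 'application/json');
        res.json(data);
    })
    .catch(err => next(err));
});
Doing it this way avoids loading the whole document, walking the arrays manually and calling save().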
Reference:
*
*https://www.mongodb.com/community/forums/t/pushing-array-of-elements-to-nested-array-in-mongo-db-schema/112494
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72038714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: The reasons to use binding in a JSF form I am new to JSF. Can anybody please explain why the binding attribute is used in the code below:
<h:form id="epox" binding="#{rxManufacturerEditor.form}" />
I am a bit confused about the value and binding attributes; in particular, I don't understand why the binding attribute is specified on the form tag.
A: The only reason to use binding to a backing bean's UIComponent instance that I know of is the ability to manipulate that component programmatically within an action/actionlistener method, or ajax listener method, like in:
UIInput programmaticInput;//getter+setter
String value1, value2;//getter+setter
...
public void modifyInput() {
ELContext ctx = FacesContext.getCurrentInstance().getELContext();
ValueExpression ve = FacesContext.getCurrentInstance().getApplication().getExpressionFactory().createValueExpression(ctx, "#{bean.value2}", Object.class);
programmaticInput.setValueExpression("value", ve);
}
After the action method has been triggered the value of component <h:inputText value="#{bean.value1}" binding="#{bean.programmaticInput} ... /> will be bound to value2 instead of value1.
I rarely use this type of binding, because facelets offer an XML-based view definition without the necessity to (regularly) mess with programmatic components.
Be sure to know that the abovementioned construct fails in Mojarra version older than 2.1.18, forcing view scoped beans to be recreated on every HTTP request. For more details refer to @ViewScoped fails in tag handlers.
More typically, you'd want to use binding to the view in which you can do cross-field validation:
<h:inputText binding="#{input}" ... />
<h:inputText validator="#{bean.validate}" ... >
<f:attribute name="input" value="#{input}" />
</h:inputText>
Here, the whole first input component will be available as an attribute of the second component and therefore its value will be available in the associated validator (method). Another example is to check which of the command components has been triggered in view:
<h:commandButton binding="#{button}" ... />
<h:inputText disabled="#{not empty param[button.clientId]}" ... />
Here, the input text component will be disabled only when the button was pressed.
For more information proceed to the follwing answers by BalusC:
*
*What is component binding in JSF? When it is preferred to be used?
*How does the 'binding' attribute work in JSF? When and how should it be used?
A: The <h:form> tag can be bound to a backing bean property of the corresponding component type (HtmlForm) - just like the other usual tags.
See also: Difference between value and binding
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18955836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Does Safari consider subdomains of a 2nd level domain to be 3rd party? I'd like to set-up a front-end one one host that authenticates with a back-end on another host. Assuming that the domains are:
*
*www.example.com
*api.example.com
Will Safari allow api.example.com to set a cookie in the browser if the request was made while the user was at www.example.com?
A: The answer is no: Safari/WebKit considers sites that share a 2nd-level domain (i.e., example.com) to be 1st-party.
We tested this on some sites hosted on our local machines using dummy domains (www.example.localdev and api.example.localdev) and Safari treated them as 3rd-party. This meant we could not use our client-side site (www) to authenticate a user via our backend (api).
However, upon moving to staging instances on the internet with actual domains (www.example.com and api.example.com) they were treated as 1st-party and everyone went home happy.
WebKit's tracking protection describes supporting the subdomain strategy:
First and third-party. If news.example is shown in the URL bar and it loads a subresource from adtech.example, then news.example is first-party and adtech.example is third-party. Note that different parties have to be different websites. sub.news.example is considered first-party when loaded under news.example because they are considered to be the same site.
But it appears they also adhere strictly to their description of a website as "a registrable domain including all of its subdomains."
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66264741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: noSelectionOption Attribute I am new to JSF and I came across the noSelectionOption attribute in JSF 2.0.
I don't understand the purpose of this attribute. As per the description, it's used when the selection is required and the user selects noSelectionOption causing a validation error.
So, if noSelectionOption = true then the user can select noSelectionOption and bypass that list or menu?
Or, if noSelectionOption = true then the user has to select one of the items, and, if he chooses noSelectionOption then the validation error occurs?
Can the user see noSelectionOption as one of the items under the List or menu if it's true?
Please help me to understand the logic behind this.
A: An f:selectItem that has noSelectionOption set to true represents a "no selection" option, something like this:
-- Select a Colour --   <- noSelectionOption was intended for this case
Red
Green
Blue
Tomato
This item is rendered in the menu, unless hideNoSelectionOption is set to true in your menu component. In that case, the option is selected when the user interacts with the menu.
Just bear in mind that if an entry is required and this "no selection" option is the one selected, there will be a validation error.
An alternative that requires a little more of coding is to use a f:selectItem with value="#{null}", to represent the case in which an user did not select a value. If you have a converter you'll have to check for this null case and, if you feel like it, introduce some custom validators.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13478663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: How to dispose/release/"finalize" unmanaged resources when a shared value gets out of scope I have a type that encapsulates a key to an external resource. Once the key is lost (all values that share it get out of scope), the resource should be released (implicitly) on the next garbage-collection, like memory does for regular values.
So I'm looking for something similar to OOP disposing, or ForeignPtr, only that I represent references to something other than objects from foreign languages (although if ForeignPtr can properly and elegantly work for this too, knowing how would also answer this question).
Is it possible? if so, how?
A: I suggest you look at ResourceT:
ResourceT is a monad transformer which creates a region of code where
you can safely allocate resources.
A: You can use System.Mem.Weak.addFinalizer for this.
Unfortunately the semantics for weak references can be a little difficult to understand at first. The warning note is particularly important.
If you can attach an IORef, MVar, or TVar to your key, and make a weak reference/finalizer associated with that, it's likely to be much more reliable.
In the special case that your key is a pointer to non-managed memory (e.g. some C data structure), you should instead create a ForeignPtr and attach a finalizer to it. The semantics for these finalizers are a little different; rather than an arbitrary IO action the finalizer must be a function pointer to an external function.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26907739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Checking if a datetime field is > than a specific hour of the current day Rails 2.3.5 / Ruby 1.8.7
For a datetime record field, is there a Time method that would make it possible to say "if this time is > 5am this morning" in a single line?
Like:
<td>
<% if my_data.updated_at > 5am this morning %>
Valid
<% else %>
Expired
<% end %>
</td>
I guess otherwise it would be storing now(), changing its 'hour' property to '05' and then comparing the datetime field to that?
Thanks - Working with Times is still confusing to me for some reason.
A: <td style="text-align:center;">
<% if my_data.last_status_update.blank? %>
-
<% else %>
<%=h my_data.last_status_update.strftime("%m-%d-%Y @ %H:%M CST") %>
<% end %>
</td>
<%
if !my_data.last_status_update.blank? && my_data.last_status_update.year == Time.now.year &&
my_data.last_status_update.day == Time.now.day && my_data.last_status_update.hour >= 5
%>
<td style="text-align:center;background:#90ee90">
YES
</td>
<% else %>
<td style="text-align:center;background:#ff9999">
EXPIRED!
</td>
<% end %>
A: I have never played with the Time method, but I guess you could check the Rails API: http://corelib.rubyonrails.org/classes/Time.html.
You could play with Time.at.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/10441015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I add page numbers to my iText7 Pdf after im done generating it? private void addPageNumbers(Document doc)
{
var totalPages = doc.GetPdfDocument().GetNumberOfPages();
for (int i = 1; i <= totalPages; i++)
{
// Write aligned text to the specified by parameters point
doc.ShowTextAligned(new Paragraph(string.Format("page %s of %s", i, totalPages)),
559, 806, i, TextAlignment.RIGHT, VerticalAlignment.TOP, 0);
}
doc.Close();
}
this is the code i tried, but I get the following exception:
iText.Kernel.PdfException: "Cannot draw elements on already flushed
pages."
I need to add the page numbers at the end because, after generating the content of the PDF, I generate a table of contents and move it to the beginning of the document. Therefore I only know the page numbers after generating all the pages.
A: iText by default tries to flush pages (i.e. write their contents to the PdfWriter target stream and free them in memory) early which is shortly after you started the next page. To such a flushed page you obviously cannot add your page x of y header anymore.
There are some ways around this. For example, if you have enough resources available and don't need that aggressive, early flushing, you can switch it of by using a different Document constructor, the one with an extra boolean parameter immediateFlush, and setting this parameter to false.
Thus, instead of
new Document(pdfDoc)
or
new Document(pdfDoc, pageSize)
use
new Document(pdfDoc, pageSize, false)
This is a related answer.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64041344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: In the case of fluctuating validation accuracy and loss curves for binary image classification, how should I analyze and resolve them? I implemented training and evaluation for binary classification of image data through transfer learning with the Keras API. I'd like to compare the performance of several models (ResNet, Inception, Xception, VGG, EfficientNet). The datasets consist of train (approx. 2000 images), valid (approx. 250 images), and test (approx. 250 images).
But I faced a situation that is unfamiliar to me, so I'm asking a couple of questions here.
*
*As shown below, Valid Accuracy or Loss has a very high up and down deviation.
I wonder which one is the problem and what needs to be changed.
(plots: epoch_acc_loss, loss_epoch, acc_epoch)
*If I want to express validation accuracy with number, what should I say in the above case?
Average or maximum or minimum?
*It is being performed using Keras (TensorFlow), and there are many examples in the API for train and validation, but code for test (evaluation) is hard to find. When reporting performance, do people normally stop at validation, or do I need to show the evaluation (test) result as well?
*Now I use Keras API for transfer learning and set this.
include_top=False
conv_base.trainable=False
Summary
I wonder whether transfer learning still has an effect without including the top, and if not,
is there a way to freeze or train from a specific layer of conv_base?
I'm a beginner and don't have much experience, so these could be silly questions, but please give kind advice.
Thanks a lot in advance.
A: *
*It's hard to figure out the problem without any given code/model structure. From your loss graph I can see that your model is facing underfitting (or it has a lot of dropout). Common mistakes that make models underfit are: a very high lr and a primitive structure (so the model can't figure out the dependencies in your data). And you should never forget the principle "garbage in - garbage out", so double-check your data for any structural roughness.
*Well, validation accuracy in your model training logs is the mean accuracy for the validation set. The validation technique is based on statistics - you take a random N% out of your set for validation, so the average is always better if we're talking about multiple experiments (or cross validation).
*I'm not sure if I've understood your question correct here, but if you want to evaluate your model with the metric, that you've specified for it after the training process (fit() function call) you should use model.evaluate(val_x, val_y). Or you may use model.predict(val_x) and compare its results to val_y via your metric function.
*If you are using default weights for keras pretrained models (imagenet weights) and you want to use your own fully-connected part with it, you may use ONLY pretrained feature extractor (conv blocks). So you specify include_top=False. Of course there will be some positive effect (I'd say it will be significant in comparison with randomly initialized weights) because conv blocks have params that were trained to extract correct features from image. Also would recommend here to use so called "fine-tuning" technique - freeze all layers in pretrained part except a few in its end (may be few layers or even 2-3 conv blocks). Here's the example of fine-tuning of EfficientNetB0:
effnet = EfficientNetB0(weights="imagenet", include_top=False, input_shape=(540, 960, 3))
effnet.trainable = True
for layer in effnet.layers:
if 'block7a' not in layer.name and 'top' not in layer.name:
layer.trainable = False
Here I freeze all pretrained weights except last conv block ones. I've looked into the model with effnet.summary() and selected names of blocks that I want to unfreeze.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71434983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I get a method reference for an instance method with a dynamically bound target In C# we can instantiate a Delegate through a method reference to static method or an instance method. Example:
Func<object, object, bool> refEquals = Object.ReferenceEquals; // target = null
Func<string> toStr = new Object().ToString; // target = new Object()
For the latter the Delegate’s target is the new Object(), whereas the former has a null target.
But, how can I instantiate a Delegate for the ToString method reference without a pre-defined target? In this case, I would like that the ToString’s target would be bound to the Delegate’s argument. This could be useful, for instance, to call a certain instance method to all items of an IEnumerable<T>:
Func<object, string> toStr = Object.ToString; // the target (this) would be the Func’s argument
IEnumerable<T> dataSrc = ...
IEnumerable<String> dataSrc = dataSrc.Select(toStr);
However, first line does not compile:
error CS0123: No overload for 'ToString' matches delegate 'System.Func'
Java 8 provides this feature through Reference to an Instance Method of an Arbitrary Object. How can I achieve this same feature in .Net?
I know that we could surpass this limitation with a lambda expression, such as:
Func<Object, String> toStr = item => item.ToString();
However, this incurs a further indirection to call the ToString instance method, and for that reason I am not considering this workaround a valid solution to my question.
A: Via Reflection, you can get an equivalent behavior to that one described in Java 8. You can create an instance of a Delegate with a null target and dynamically binding its first argument to the this method parameter. For your example you can create the toStr delegate in the following way:
MethodInfo methodToStr = typeof(object).GetMethod("ToString");
Func<Object, String> toStr = (Func<Object, String>) Delegate.CreateDelegate(
typeof(Func<Object, String>),
methodToStr);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37525900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: WebSharper F# - How to run a template project created with VS Code and Ionide? I have never worked with .NET before and I would like to know how to run a WebSharper F# project without any IDE.
Context
*
*I'm running Linux with Mono 4.4.2
*The project was created with VS Code and Ionide, using the template websharperserverclient
*I'm able to compile the code using the automatically generated file build.sh or by executing xbuild, but only .dll files are generated, I couldn't see any .exe
I thank in advance for any help!
Updates
Using websharperserverclient I get weird results like the one shown in the picture below, and xsp4 doesn't give any hint about it.
A: WebSharper can run as an ASP.NET module, so the easiest way to start your app is to run xsp4 (mono's self-hosted ASP.NET server) in the project folder. That's good as a quick server for testing; for production you should rather configure a server like Apache or nginx.
Another solution would be to use the websharpersuave template instead, which does generate a self-serving executable.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39559325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: How do I align items properly in React native I want the narration to be bold and at the far left, the amount to be on the same row as the narration, and the date to be at the far left below the narration. But what I do does not seem to work: the transactions list is somewhat misaligned and looks like this:
I have tried all I could, but I cannot seem to get it to look right.
My code looks like this:
import React, {useEffect, useState} from 'react';
import {
ActivityIndicator,
Button,
Image,
ImageBackground,
SafeAreaView,
StyleSheet,
Text,
TouchableOpacity,
View,
} from 'react-native';
import {Header, Avatar, Icon, Card} from '@rneui/themed';
import {FlatList, ScrollView} from 'react-native-gesture-handler';
import {useNavigation} from '@react-navigation/native';
import {Tab} from '@rneui/base';
import AsyncStorage from '@react-native-async-storage/async-storage';
const HomePage = () => {
const [transaction_details, setTransaction_details] = useState([]);
const [isLoading, setLoading] = useState(true);
const navigation = useNavigation();
const Item = ({title}) => (
<View style={styles.item}>
<Text style={styles.title}>{title}</Text>
</View>
);
FlatListItemSeparator = () => {
return (
<View
style={{
height: 1,
width: 350,
backgroundColor: '#D3D3D3',
}}
/>
);
};
showdata = async () => {
let token = await AsyncStorage.getItem('token');
alert(token);
};
getTransactionsList = async () => {
let token = await AsyncStorage.getItem('token');
let email = await AsyncStorage.getItem('email');
fetch('https://******************/api/fetch-transaction/' + email, {
method: 'GET',
headers: {
'Accept': 'application/json',
'Content-type': 'application/json',
'Authorization': `Bearer ${token}`,
},
})
.then(response => response.json())
.then(responseJson => {
setTransaction_details(responseJson.results);
setLoading(false);
});
};
useEffect(() => {
//showdata();
getTransactionsList();
});
/*useEffect(() => {
fetch('https://brotherlike-navies.000webhostapp.com/people/people.php', {
method: 'GET',
headers: {
Accept: 'application/json',
'Content-type': 'application/json',
},
})
.then(response => response.json())
.then(responseJson => {
setTransaction_details(responseJson);
setLoading(false);
});
}, []);
*/
return (
<View style={{flex: 1}}>
<Header
containerStyle={{
backgroundColor: 'transparent',
justifyContent: 'space-around',
}}
leftComponent={
<Avatar
small
rounded
source={{
uri: 'https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSiRne6FGeaSVKarmINpum5kCuJ-pwRiA9ZT6D4_TTnUVACpNbzwJKBMNdiicFDChdFuYA&usqp=CAU',
}}
onPress={() => console.log('Left Clicked!')}
activeOpacity={0.7}
/>
}
rightComponent={
<Icon
name={'mail-outline'}
color={'#00BB23'}
size={32}
onPress={() => navigation.navigate('Accounts')}
/>
}></Header>
<ImageBackground
source={{
uri: 'asset:/logo/bg.JPG',
}}
imageStyle={{borderRadius: 6}}
style={{
top: 15,
paddingTop: 95,
alignSelf: 'center',
width: 328,
height: 145,
borderadius: 9,
justifyContent: 'center',
alignSelf: 'center',
alignItems: 'center',
}}>
<View>
<Text style={styles.accText}>Wallet Balance</Text>
<Text style={styles.text}> 250,000 </Text>
</View>
</ImageBackground>
<View>
<Text
style={{
fontFamily: 'Poppins-Bold',
flexDirection: 'row',
paddingTop: 55,
fontSize: 15,
left: 18,
color: 'gray',
}}>
Recent Transactions
</Text>
</View>
<View style={{flex: 1, marginTop: 35}}>
{isLoading ? (
<ActivityIndicator />
) : (
<FlatList
style={{fontFamily: 'Poppins-Medium', alignSelf: 'center'}}
ItemSeparatorComponent={this.FlatListItemSeparator}
data={transaction_details}
renderItem={({item}) => {
//console.log(item);
return (
<View style={{flex: 2, flexDirection: 'row'}}>
<Text style={styles.PayeeName}>
{item.narration}
{' '}
</Text>
<Text style={styles.date_ofTransaction}>{item.date}</Text>
<Text style={styles.amountValue}>{item.amount}</Text>
</View>
);
}}
keyExtractor={item => item.id.toString()}
/>
)}
</View>
</View>
);
};
export default HomePage;
const styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
padding: 20,
},
date_ofTransaction: {
marginTop: 20,
alignItems: 'flex-start',
alignItems: 'center',
left: -85,
fontFamily: 'Poppins-Light',
fontSize: 9,
},
paragraph: {
fontSize: 18,
fontWeight: 'bold',
textAlign: 'center',
padding: 20,
},
text: {
top: -85,
fontSize: 30,
color: 'white',
textAlign: 'center',
fontFamily: 'Poppins-Bold',
},
mainContainer: {
paddingTop: 90,
justifyContent: 'center',
alignItems: 'center',
},
accText: {
top: -85,
paddingTop: 10,
justifyContent: 'center',
alignItems: 'center',
fontFamily: 'Poppins-Medium',
color: 'white',
textAlign: 'center',
},
PayeeName: {
justifyContent: 'flex-start',
alignItems: 'center',
left: 23,
fontFamily: 'Poppins-Medium',
size: 800,
fontWeight: 'bold',
},
amountValue: {
flexDirection :'row',
alignItems: 'flex-end',
fontFamily: 'Poppins-Medium',
size: 800,
fontWeight: 'bold',
},
});
The alignment is quite poor; I wish I could be shown a guide on how to go about this so I could follow along. I am new to this kind of design in React Native, as I am learning it on my own.
A: What about this approach? I am not sure the {' '} in your code is necessary. This aligns with space-between and looks better for the UI than inserting a random space to separate the items. Also apply the rest of the styles to your liking, for example making the font bold, etc.
<FlatList
data={transaction_details}
ItemSeparatorComponent={this.FlatListItemSeparator}
renderItem={({ item }) => {
return (
<View>
<View style={{ flexDirection: 'row', justifyContent: 'space-between', alignItems: 'center' }}>
<Text>{item.narration}</Text>
<Text>{item.amount}</Text>
</View>
<Text>{item.date}</Text>
</View>
)
}}
keyExtractor={(item) => item.id.toString()}
/>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74854708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: JAVA Tree parser for ANTLR I want to make a Java AST parser and I came across this extremely useful answer.
So as per the instructions I created all the files and there were no errors generating the lexer and parser using the Java.g file, but when compiling the *.java files I get an error in Main.java
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;
import org.antlr.stringtemplate.*;
public class Main {
public static void main(String[] args) throws Exception {
JavaLexer lexer = new JavaLexer(new ANTLRFileStream("Test.java"));
JavaParser parser = new JavaParser(new CommonTokenStream(lexer));
CommonTree tree = (CommonTree)parser.javaSource().getTree();
DOTTreeGenerator gen = new DOTTreeGenerator();
StringTemplate st = gen.toDOT(tree);
System.out.println(st);
}
}
for compilation:
javac -cp antlr-3.4-complete.jar *.java
and the error is:
Main.java:9: error: cannot find symbol
CommonTree tree = (CommonTree)parser.javaSource().getTree();
^
symbol: method javaSource()
location: variable parser of type JavaParser
1 error
I am a beginner and I am really unable to find the problem. Thanks in advance.
A: CommonTree tree = (CommonTree)parser.javaSource().getTree();
This assumes that the start point for the Java grammar you are using is the javaSource rule.
Check your grammar to see whether that is indeed the case. If not, identify the correct starting rule and use that. The methods of the parser are named the same as the rules in the grammar.
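For example, if the grammar you generated the parser from names its entry rule compilationUnit instead of javaSource (the rule name here is only an assumption - open your Java.g and check), the call would look like this:
// replace compilationUnit with whatever the top-level rule in your Java.g is called
CommonTree tree = (CommonTree) parser.compilationUnit().getTree();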
| {
"language": "en",
"url": "https://stackoverflow.com/questions/48272932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: REACT Router v6 not triggering useEffect The useEffect function is not triggering in AddFarm.js path
AddFarm.js - Path: /anadir-granja
const fetchData = async()=>{
console.log("fetchdata")
try{
const res = await axios.get("link")
console.log(res)
}catch(error){
console.log(error)
}
}
useEffect = (() =>{
console.log("useEffect")
fetchData()
}, [])
App.js
return (
<div className="App">
<Routes>
<Route exact path="/" element={<Home/>}/>
<Route exact path="anadir-granja" element={<AddFarm/>}/>
</Routes>
</div>
);
I want to have a web app with many paths where each path is a component. I want to call the useEffect function to fetch data from MongoDB in each component.
A: This is because AddFarm component is not mounted when you go to this path /anadir-granja and the reason is you forgot to put a / before anadir-granja in the path property of the Route component. It should be like this:
<Route exact path="/anadir-granja" element={<AddFarm/>}/>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74844184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Django admin template override not working Django 1.6.11
App structure looks like:
my_project/
|-- new_app/
|-- templates/
in my config:
TEMPLATE_ROOT = os.path.join(BASE_ROOT, 'templates/')
TEMPLATE_DIRS = (
TEMPLATE_ROOT,
)
INSTALLED_APPS = (
'django.contrib.admin',
...
'new_app',
)
I've also tried listing new_app before contrib.admin and that didn't help.
When I copy venv/django/contrib/admin/templates/admin/change_list.html to my /templates/admin/new_app/change_list.html I don't see my customizations show up.
my_project/
|-- new_app/
|-- templates/
|-- admin/
|-- new_app/
|-- change_list.html
When I move change_list.html up one level so it's under the admin path, the changes show up just fine:
my_project/
|-- new_app/
|-- templates/
|-- admin/
|-- change_list.html
|-- new_app/ (now an empty folder)
... but of course that would mean my changes are going to affect every admin page, not just for the app I'm trying to modify.
I've added this to the app's only model within admin.py:
class MyModelAdmin(reversion.VersionAdmin):
change_list_template = 'admin/new_app/change_list.html'
... this gives me some of what I need, but I also need change_list_results.html and there's no ModelAdmin override for that.
I'm following the documentation guide found at readthedocs in section 2.4.8 on page 31, but I don't seem to be having any luck.
A: Have you tried using a templates folder inside your app? Something like this:
my_project/
|-- new_app/
|-- templates/
|-- new_app/
|-- admin/
|-- change_list.html
|-- templates/
A: When several applications provide different versions of the same resource (template, static file, management command, translation), the application listed first in INSTALLED_APPS has precedence. See docs.
Change:
INSTALLED_APPS = (
'django.contrib.admin',
...
'new_app',
)
To:
INSTALLED_APPS = (
'new_app',
'django.contrib.admin',
...
)
Your templates in new_app should now be found before the templates in contrib.admin.
A: I know this is old but I came here because I was having a very similar problem. James Parker got me on the right track by looking at extended versions of ModelAdmin. I had a Mixin and an overloaded Admin like so:
class EvaluationAdmin(ExportMixin, MarkdownxModelAdmin):. It turned out the ExportMixin was hardcoding the template preventing the normal template override from working. I did not find a solution mentioned anywhere and this might not be the most elegant but I fixed it by subclassing ExportMixin and hardcoding the template to my overridden one instead. Just be sure to start with the copy the mixin was using to keep any additional features it was providing ('admin/import_export/change_list_export.html' in this case).
class EvaluationExportMixin(ExportMixin):
change_list_template = 'admin/eval/evaluation/change_list_export.html'
class EvaluationAdmin(EvaluationExportMixin, MarkdownxModelAdmin):
...
Hopefully this might point someone else to where to look and one possible work-around.
A: Go to page 607.
Custom template options The Overriding admin templates section describes how to override or extend the default
admin templates. Use the following options to override the default templates used by the ModelAdmin views:
And there you will find options for how to override templates for specific models with ModelAdmin.
The same(i didn't compare, but seems similar) docs can be found on django site: Custom template options
| {
"language": "en",
"url": "https://stackoverflow.com/questions/39964672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: random 15px margins? I am getting random 15px top and bottom margins and I have no idea where they are coming from.
This is not happening in IE, only FF and Chrome.
In the following sample, the spacing above and below each "Here is content" paragraph was unexpected.
#pageContent {
background-color: #fff;
padding: 10px;
}
#contentHead {
height: 33px;
width: 882px;
color: black;
font-size: 14px;
font-weight: bold;
line-height: 34px;
padding-left: 48px;
text-transform: uppercase;
}
#contentBody{
background-color: #d4d3d1;
border-bottom-left-radius: 10px;
border-bottom-right-radius: 10px;
-moz-border-radius: 0 0 10px 10px;
border: 1px solid #8b8b8b;
}
#contentNoSidebar{
background-color: #000;
color: white;
}
<div id="pageContent">
<div id="contentHead">Sample Page</div>
<div id="contentBody">
<div id="contentNoSidebar">
<p>Here is content</p>
...
<p>Here is content</p>
</div>
</div>
</div>
A: If you're talking about the margins surrounding each <p> tag, that is inherent from the user agent style sheet.
By default paragraph tags have a surrounding margin. If you do something like:
p { margin: 0; padding: 0; }
you should be able to get rid of the margin/padding.
A: Paragraphs default to 1em vertical margin (top and bottom, but they can overlap). I guess you're talking about the div having a bottom margin - but it doesn't, that's the top margin of the p below it.
A: Now that you added a test image, I know what you mean.
Use this CSS to fix the issue:
p { margin: 0; padding: 16px 0 }
In short, provide the same spacing between paragraphs using padding instead of margin.
A: Delete the -moz-border-radius: 0 0 10px 10px; declaration on your content body CSS and use
margin: 0; and padding based on your content. It will work, this is the reason for the reputation. Use Firebug in your Mozilla browser - with it you can easily find the bugs.
A: Your questions seems resolved already, but you might want to start using a CSS reset to override user agent stylesheet stuff for everything. Makes styling your web pages so they look the same on most browsers easier.
html, body, div, span, applet, object, iframe,
h1, h2, h3, h4, h5, h6, p, blockquote, pre,
a, abbr, acronym, address, big, cite, code,
del, dfn, em, img, ins, kbd, q, s, samp,
small, strike, strong, sub, sup, tt, var,
b, u, i, center,
dl, dt, dd, ol, ul, li,
fieldset, form, label, legend,
table, caption, tbody, tfoot, thead, tr, th, td,
article, aside, canvas, details, embed,
figure, figcaption, footer, header, hgroup,
menu, nav, output, ruby, section, summary,
time, mark, audio, video {
margin: 0;
padding: 0;
border: 0;
font-size: 100%;
font: inherit;
vertical-align: baseline;
}
/* HTML5 display-role reset for older browsers */
article, aside, details, figcaption, figure,
footer, header, hgroup, menu, nav, section {
display: block;
}
body {
line-height: 1;
}
ol, ul {
list-style: none;
}
blockquote, q {
quotes: none;
}
blockquote:before, blockquote:after,
q:before, q:after {
content: '';
content: none;
}
table {
border-collapse: collapse;
border-spacing: 0;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4846684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SAS intnx quarter variation sorry it is probably a very simple question, but I can't seem to find an answer to it.
Say, we want to create a table that contains 4 quarters back from the previous month:
%macro asd;
%let today = %sysfunc(today());
%let quarter_count_back = 4;
%let first_quarter = %sysfunc(intnx(month,&today.,-1));
proc sql;
create table quarters
(
Quarters num informat = date9. format = date9.
);
insert into quarters
%do i = 0 %to -&quarter_count_back.+1 %by -1;
values(%sysfunc(intnx(quarter,&first_quarter.,&i.)))
%end;
;
quit;
run;
%mend asd;
%asd;
run;
This code works just fine and creates a table, which starts from APR2016 and goes back in time by quarter. However, if I change the number in the 'first_quarter' line for -2, -3 etc... the code always starts from JAN2016 which just doesn't make any sense to me! For example:
%let first_quarter = %sysfunc(intnx(month,&today.,-2));
It seems logical that if I put this line in the code the table should start from MAR2016 and go back by quarter, but it does not, it starts from JAN2016.
Any ideas on what I am doing wrong here?
Thanks!
A: The default alignment for the INTNX function is the beginning of the interval. If you want it to go back 3 months, that's different than quarters. You can adjust these by looking at the fourth parameter of the INTNX function which controls the alignment. Options are:
*
*Same
*Beginning
*End
If you want three months, try the MONTH.3 interval instead of quarter.
http://support.sas.com/documentation/cdl/en/lefunctionsref/63354/HTML/default/viewer.htm#p10v3sa3i4kfxfn1sovhi5xzxh8n.htm
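As a sketch of how that alignment argument is passed through %sysfunc - 'S' (same) keeps the same relative position inside the interval instead of snapping back to its beginning, which is what produces the JAN2016 result (the macro variable names here are just placeholders):
/* default alignment is B(eginning): intnx(quarter, '01MAR2016'd, 0) -> 01JAN2016 */
/* with S(ame) alignment the month offset inside the quarter is preserved */
%let first_quarter = %sysfunc(intnx(month, &today., -2));
%let one_qtr_back  = %sysfunc(intnx(quarter, &first_quarter., -1, S));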
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36990259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Receiving JSON data on Web API Visual Studio C# I have a web api which is receiving some POST data.
I am able to parse the JSON string when the elements are not nested - but the nested ones just will not parse at all...
Here is the JSON I am receiving:
{
"wlauth": {
"userid": "user",
"password": "pass"
},
"ident": "01234567890",
"identtype": "imsi",
"message": "VGVzdCBNZXNzYWdl"
}
Here is the code from the Controller that handles the Post request:
public IHttpActionResult ReceiveSMSData(SMSReturned data)
{
Debug.WriteLine(data.userid);
Debug.WriteLine(data.password);
Debug.WriteLine(data.Ident);
Debug.WriteLine(data.identtype);
Debug.WriteLine(data.message);
return Ok();
}
From this I get the following in the debug console (the first two lines are blank):
'
01234567890
imsi
VGVzdCBNZXNzYWdl'
So in other words, the non-nested elements appear fine, but the nested ones do not - what should I be doing differently to retrieve those nested elements?
Edit:
Here is the SMSReturned Class:
public class SMSReturned
{
public string wlauth { get; set; }
public string Ident { get; set; }
public string identtype { get; set; }
public string message { get; set; }
public string userid { get; set; }
public string password { get; set; }
}
A: The structure for SMSReturned is missing some elements. Try this:
public class WLAuth
{
public string userid { get; set; }
public string password { get; set; }
}
public class SMSReturned
{
public WLAuth wlauth { get; set; }
public string Ident { get; set; }
public string identtype { get; set; }
public string message { get; set; }
public string userid { get; set; }
public string password { get; set; }
}
and this:
public IHttpActionResult ReceiveSMSData(SMSReturned data)
{
Debug.WriteLine(data.wlauth.userid);
Debug.WriteLine(data.wlauth.password);
Debug.WriteLine(data.Ident);
Debug.WriteLine(data.identtype);
Debug.WriteLine(data.message);
return Ok();
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41883072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Using a pretrained flair model with classic word embeddings I have experimented with flair models and they give really good performance. However, due to them using contextual embeddings they are incredibly slow. I want to use them with classic word embeddings instead.
The code given in the documentation is:
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-english")
# make example sentence
sentence = Sentence("George Washington went to Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
How can I alter this code to use classic word embeddings for NER instead?
A: First, incredibly slow is of course dependent on the compute power you are throwing at your problem, the exact models you tried, and of course the type of data the models are processing. So, as a small disclaimer, if the compute and the data are causing a bottleneck now, smaller and simpler models may still perform slowly in your case.
That being said, here you can see a list of all the embeddings that are supported within flair. Note that "classic" WordEmbeddings are also part of the models that you can choose from.
In your case, you could choose any of the embeddings presented on that page, for instance the FastText embeddings for English, and then use the produced representations in a downstream NER task. Within flair, you could check out the SequenceTagger. Here is a great page to see how this is implemented in HuggingFace, with a training example.
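A rough sketch of what that training setup looks like in flair with classic word embeddings (class and method names follow the flair 0.x tutorials and may differ slightly between versions; the corpus here is just a placeholder - any token-level NER corpus works):
from flair.datasets import CONLL_03
from flair.embeddings import WordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

corpus = CONLL_03()
tag_type = "ner"
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)

# classic static vectors instead of contextual ones;
# 'glove' = GloVe, 'en' should select the FastText English vectors mentioned above
embeddings = WordEmbeddings("glove")

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=tag_dictionary,
                        tag_type=tag_type)

trainer = ModelTrainer(tagger, corpus)
trainer.train("resources/taggers/ner-glove", max_epochs=10)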
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72949990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Firebase authentication (is not a function, is not a constructor) I don't know what is wrong. I'm using Node.js and trying to log in using email/password and Google authentication. I have enabled all of them in Firebase console.
npm Firebase version - 3.1.0
part of code:
var firebase = require('firebase');
var config = {
apiKey: "AIzaSyAH27JhfgCQfGmoGTdv_VaGIaX4P-qAs_A",
authDomain: "pgs-intern.firebaseapp.com",
databaseURL: "https://pgs-intern.firebaseio.com",
storageBucket: "pgs-intern.appspot.com",
};
firebase.initializeApp(config);
app.post('/login', function(req, res) {
var auth = firebase.auth();
firebase.auth().signInWithEmailAndPassword(req.body.login, req.body.password).catch(function(error) {
// Handle Errors here.
var errorCode = error.code;
var errorMessage = error.message;
// ...
});
}
Error: firebase.auth(...).signInWithLoginAndPassword is not a function
or
Error: firebase.auth(...).GoogleAuthProviders is not a constructor when I write
firebase.auth().signInWithPopup(provider).then(function(result) {
// This gives you a Google Access Token. You can use it to access the Google API.
var token = result.credential.accessToken;
// The signed-in user info.
var user = result.user;
// ...
}).catch(function(error) {
// Handle Errors here.
var errorCode = error.code;
var errorMessage = error.message;
// The email of the user's account used.
var email = error.email;
// The firebase.auth.AuthCredential type that was used.
var credential = error.credential;
// ...
});
I just did exactly what is in the documentation.
A: Your first error probably comes from a typo somewhere.
firebase.auth(...).signInWithLoginAndPassword is not a function
Notice it says signInWithLoginAndPassword, the function is called signInWithEmailAndPassword. In the posted code it's used correctly, so it's probably somewhere else.
firebase.auth(...).GoogleAuthProviders is not a constructor
You have not posted the code where you use this, but I assume this error happens when you create your provider variable, that you use in firebase.auth().signInWithPopup(provider)
That line should be var provider = new firebase.auth.GoogleAuthProvider();
Based on the error message, I think you might be doing new firebase.auth().GoogleAuthProvider(); Omit the brackets after auth, if that's the case.
A: There is no way to sign your node.js app into firebase with email+password or one of the social providers.
Server-side processes instead sign into Firebase using so-called service accounts. The crucial difference is in the way you initialize the app:
var admin = require('firebase-admin');
admin.initializeApp({
serviceAccount: "path/to/serviceAccountCredentials.json",
databaseURL: "https://databaseName.firebaseio.com"
});
See this page of the Firebase documentation for details on setting up a server-side process.
A: Do not call GoogleAuthProvider via an Auth() function.
According to the documentation you have to create an instance of GoogleAuthProvider.
let provider = new firebase.auth.GoogleAuthProvider()
Please check the following link https://firebase.google.com/docs/auth/web/google-signin
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38200044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: `[FromQuery]` IEnumerable parsing in ASP.NET Core 3.1? So, when I tested how binding works for an IEnumerable<string> argument, you simply pass the argument's name in the query string, repeatedly, like this: ?a=item1&a=item2&a=item3...
So, what must I write, if I have an argument of type IEnumerable<SimpleObject> a, where SimpleObject is defined as the following:
public class SimpleObject
{
public string Number { get; set; }
public string Text { get; set; }
}
in order to successfully bind it to a list of said objects? Or no such default ModelBinder exists for that mapping? (Please provide a sample ModelBinder in that case)
A: The default model-binding setup supports an indexed format, where each property is specified against an index. This is best demonstrated with an example query-string:
?a[0].Number=1&a[0].Text=item1&a[1].Number=2&a[1].Text=item2
As shown, this sets the following key-value pairs
*
*a[0].Number = 1
*a[0].Text = item1
*a[1].Number = 2
*a[2].Text = item2
This isn't quite covered in the official docs, but there's a section on collections and one on dictionaries. The approach shown above is a combination of these approaches.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62197118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Not able to ssh to a linux instance, which is on private subnet I am trying to ssh into a Linux instance (OCI) which is on a private subnet. To access it, I first created a bastion Windows host. I have configured Git Bash on the bastion server and then I try to connect to the Linux instance from there, but I get a permission denied error.
ssh -i private-key-file-path username@privateIPAddress
Furthermore, I have allowed all connections on all ports in my ingress and egress rules for this private Linux instance. Also, I am able to connect to this Linux machine from PuTTY using the .ppk file.
I just wanted to know if this is the correct approach to connect to the server from Git Bash.
A: That is the command format I use from my bash terminal and it works for me.
I just tried to login with a user other than Ubuntu on the server and it gave me permission denied also even though the user was su status. I changed it back to the user ubuntu and it worked fine.
A: What is the error you get? You can add '-v' flag to the ssh command to get complete verbose log.
If you're using same key for both Putty and through gitbash client, it might not work. Putty uses one form of private key and gitbash needs OpenSSH format of the private key.
You can convert the key between the types using PuttyGen utility.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72548601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Error 10093 on accept() on different thread I created a while loop with the winsock accept() method in it but it throws error 10093 (WSAData not yet initialized) every time it loops.
WSAData IS initialized in the main thread that starts the accept thread.
I don't know if this is anything thread related. The code to start the WSAData and the thread is this:
iResult = WSAStartup(MAKEWORD(2,2), &wsaData);
if (iResult != 0) {
printf("WSAStartup failed with error: %d\n", iResult);
return 1;
}
// Things in between (bind, listen...)
std::thread acceptThread(Accept);
And here is the Accept() method I made (well, the actual accept method that is called):
SOCKET temp = accept(ListenSocket, NULL, NULL);
After that I check "temp" and that's when the error occurs
The WSAStartup does work because it doesn't go in the if.
A: Sockets do not have a thread affinity, so you can freely create a socket in one thread and use it in another thread. You do not need to call WSAStartup() on a per-thread basis. If accept() reports WSANOTINITIALISED then either WSAStartup() really was not called beforehand, or else WSACleanup() was called prematurely.
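For instance, a minimal sketch of an ordering that keeps the accept thread safe - the key point being that WSACleanup() must not run while Accept() is still using the socket:
WSADATA wsaData;
if (WSAStartup(MAKEWORD(2, 2), &wsaData) != 0) return 1;

// ... socket(), bind(), listen() on ListenSocket ...

std::thread acceptThread(Accept);

// make sure the accept thread has finished before tearing Winsock down
acceptThread.join();
closesocket(ListenSocket);
WSACleanup();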
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23097081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Printing characters from different data type in a text file I am trying to print strings from a cell array and numbers from a vector into a text file. I have to put the n-th string of the cell array and the n-th number of the vector on the n-th line of the text file. There will be a space in between these two things.
To do this, I converted the vector into a cell and then concatenated the two cells horizontally. However, I do not know how to add the space in between. Even without that space, the concatenated cell should print out something in the text file; however, it's not printing anything out. Any help? Thanks!
A: I think this sample code should solve your problem.
You get spaces by having spaces between your formats.
You need to use \r\n to get a new line on Windows machines.
strings = {'hello','how','are','you'};
numbers = [1, 2, 3, 4];
fileID = fopen('tester.txt','w');
format = '%s %f \r\n';
for i = 1:length(numbers)
fprintf(fileID,format,strings{i},numbers(i));
end
fclose(fileID);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22256755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Build Python as UCS-4 via pyenv I run into this issue ImportError numpy/core/multiarray.so: undefined symbol: PyUnicodeUCS2_AsASCIIString installing Python in a pyenv-virtualenv environment.
In my case, it happens with the matplotlib package instead of numpy (as in the above question), but it's basically the same issue.
The answer given in that question is a simple:
Rebuild NumPy against a Python built as UCS-4.
I don't know how to do this. In this other question it is said that one has to use:
./configure --enable-unicode=ucs4
but I don't know how to use that command along with pyenv.
This issue is also mentioned in pyenv's repo issue list, and a solution given in a comment. Sadly (for me) I can not understand how to apply the fix explained in said comment.
So my question basically is: how do I build Python as UCS-4 via pyenv?
A: Installing python with pyenv with ucs2:
$ export PYTHON_CONFIGURE_OPTS=--enable-unicode=ucs2
$ pyenv install -v 2.7.11
...
$ pyenv local 2.7.11
$ pyenv versions
system
* 2.7.11 (set by /home/nwani/.python-version)
$ /home/nwani/.pyenv/shims/python
Python 2.7.11 (default, Aug 13 2016, 13:42:13)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sysconfig
>>> sysconfig.get_config_vars()['CONFIG_ARGS']
"'--prefix=/home/nwani/.pyenv/versions/2.7.11' '--enable-unicode=ucs2' '--libdir=/home/nwani/.pyenv/versions/2.7.11/lib' 'LDFLAGS=-L/home/nwani/.pyenv/versions/2.7.11/lib ' 'CPPFLAGS=-I/home/nwani/.pyenv/versions/2.7.11/include '"
Installing python with pyenv with ucs4:
$ pyenv uninstall 2.7.11
pyenv: remove /home/nwani/.pyenv/versions/2.7.11? y
$ export PYTHON_CONFIGURE_OPTS=--enable-unicode=ucs4
$ pyenv install -v 2.7.11
...
$ pyenv local 2.7.11
$ pyenv versions
system
* 2.7.11 (set by /home/nwani/.python-version)
$ /home/nwani/.pyenv/shims/python
Python 2.7.11 (default, Aug 13 2016, 13:49:09)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sysconfig
>>> sysconfig.get_config_vars()['CONFIG_ARGS']
"'--prefix=/home/nwani/.pyenv/versions/2.7.11' '--enable-unicode=ucs4' '--libdir=/home/nwani/.pyenv/versions/2.7.11/lib' 'LDFLAGS=-L/home/nwani/.pyenv/versions/2.7.11/lib ' 'CPPFLAGS=-I/home/nwani/.pyenv/versions/2.7.11/include '"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38928942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Google Drive file download API failure I have an app which prints files from a Google Drive account. It uses the API https://drive.google.com/uc?id=&export=download to download the content and sends it for printing. Instead of downloading the correct content, some junk characters have been downloaded for the last two days. I tried this API in Postman and got the same result.
HttpClient client = new OkHttpClient();
Request request = new Request.Builder()
.url("drive.google.com/…)
.get().addHeader("Authorization", "Bearer xxxx"")
.addHeader("cache-control", "no-cache")
.addHeader("Postman-Token", "02efaa73-543e-4b02-bbd4-570cfbd3f9f4")
.build();
Response response = client.newCall(request).execute();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55160010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Deleting all files in a directory except the ones mentioned in a list I have a directory called a00 containing 3000 files with extension .SAC. I have a text file called gd.list containing names of 88 of those 3000 files. I am trying to write a code that will delete all .SAC files except those mentioned in gd.list
How to do that using shell/bash?
A: If you are feeling brave, try something like
ls *.sac | fgrep -v -f gd.list | xargs echo rm
Note that I've put an echo in that xargs, just to make sure no one has a cut and paste accident.
Note also the limitations of this approach mentioned in the comments. As I said, if you are feeling brave...
A: The rm command is commented out so that you can check and verify that it's working as needed. Then just un-comment that line.
The check directory section will ensure you don't accidentally run the script from the wrong directory and clobber the wrong files.
You can remove the echo deleting line to run silently.
#!/bin/bash
cd /home/me/myfolder2tocleanup/
# Exit if the directory isn't found.
if (($?>0)); then
echo "Can't find work dir... exiting"
exit
fi
for i in *; do
if ! grep -qxFe "$i" filelist.txt; then
echo "Deleting: $i"
# the next line is commented out. Test it. Then uncomment to removed the files
# rm "$i"
fi
done
You can find the answer here https://askubuntu.com/questions/830776/remove-file-but-exclude-all-files-in-a-list by L. D. James
A: there are a few alternatives.
I'd prefer a null-terminated pipeline (find with -printf '%f\0', grep -z, xargs -0), as it more clearly demarcates the file names. The -v keeps only the files not listed in gd.list, -F -x matches list entries literally and as whole lines, and %f drops the leading ./ so that the whole-line match can succeed:
find . -maxdepth 1 -name '*.SAC' -printf '%f\0' | grep -z -Z -x -F -v -f gd.list | xargs -0 echo rm
Again, test this first. Perhaps sort the output and make sure it is unique versus the original file.
For a smaller list of filenames I would recommend just using find with -and -not -name and -delete, but with a larger list that can be tricky.
You could tag the files you want to keep as read-only, then delete the wildcard with the appropriate setting in rm or find to skip read-only files. That assumes you own the read-only flag. You could tag the files as executable, and use find, if the read-only flag is not for you.
Another option would be to move the matching files to a temp folder, delete the wildcard, then move the files you want to keep back. That is assuming you can afford for the files to disappear temporarily.
To make them disappear for a shorter time, move the kept files out to a temp directory, move the original directory out, move the temp directory in, then delete the moved-out directory.
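A rough sketch of the simpler move-aside option above (assumes you are inside a00, that gd.list holds one bare filename per line, and that no names contain newlines; test on a copy first):
mkdir keep_tmp
while IFS= read -r f; do
    mv -- "$f" keep_tmp/        # set aside every file listed in gd.list
done < gd.list
rm -- *.SAC                     # delete all remaining .SAC files
mv keep_tmp/* .                 # bring the kept files back
rmdir keep_tmp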
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51729574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Mui v5 - How to migrate away from withStyles I've spent a day on the change notes and other docs for v5 and can see withStyles is not recommended anymore, in favor of sx. However, withStyles provided something that sx did not: a way to take an existing component and style all aspects of it without touching the underlying API.
Take the following example:
export const ResourceCardHeader = withStyles((theme) => ({
root: {
paddingBottom: 0,
[theme.breakpoints.down('xs')]: {
paddingLeft: theme.spacing(1),
paddingRight: theme.spacing(1),
paddingTop: theme.spacing(1),
},
},
content: {
borderBottom: `2px solid ${theme.palette.primary.main}`,
paddingBottom: '5px',
width: '100%',
whiteSpace: 'nowrap',
},
title: {
fontSize: '1.1rem',
fontWeight: 700,
transition: theme.transitions.create('font-size', {
easing: theme.transitions.easing.easeInOut,
duration: theme.transitions.duration.shortest,
}),
// Hide around x button
maxWidth: 'calc(100% - 8px)',
textOverflow: 'ellipsis',
overflow: 'hidden',
[theme.breakpoints.down('sm')]: {
fontSize: '0.9rem',
},
[theme.breakpoints.down('xs')]: {
fontSize: '0.75rem',
},
},
subheader: {
fontSize: '0.8rem',
transition: theme.transitions.create('font-size', {
easing: theme.transitions.easing.easeInOut,
duration: theme.transitions.duration.shortest,
}),
textOverflow: 'ellipsis',
overflow: 'hidden',
[theme.breakpoints.down('xs')]: {
fontSize: '0.6rem',
},
},
}))(CardHeader)
This is doing... a lot. Moving it to sx seems unreasonable and worse. I would also have to import and re-export the underlying component/props as far as I can tell. More importantly, I can access the styles of the individual inner components (content, title etc). This in all lets me restyle the card component to a very granular level for a specific component. I can't see how sx gives me that, not can I tell what I should be doing instead.
I would also rather migrate away entirely from JSS if possible to keep up to date with the latest mui standards.
What would be the best approach to migrate this Header and other components like it to a more v5-y approach?
A: Try using createTheme (formerly createMuiTheme) to define a global theme for most components, and then wrap the components that need specific styling in a nested ThemeProvider or give them their own local styles. For component-level styles you can also use makeStyles, though in v5 it lives in the legacy @mui/styles package.
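If you want to keep the wrap-and-restyle pattern without JSS, another option (not covered above) is MUI v5's styled() utility, which can target the inner slots through the exported class constants. A rough sketch assuming @mui/material v5 and its cardHeaderClasses export, covering only part of the original styles:
import { styled } from '@mui/material/styles';
import CardHeader, { cardHeaderClasses } from '@mui/material/CardHeader';

// Wraps CardHeader and styles its root plus the content/title/subheader slots,
// playing the same role the withStyles wrapper did.
export const ResourceCardHeader = styled(CardHeader)(({ theme }) => ({
  paddingBottom: 0,
  [`& .${cardHeaderClasses.content}`]: {
    borderBottom: `2px solid ${theme.palette.primary.main}`,
    paddingBottom: '5px',
    width: '100%',
    whiteSpace: 'nowrap',
  },
  [`& .${cardHeaderClasses.title}`]: {
    fontSize: '1.1rem',
    fontWeight: 700,
    [theme.breakpoints.down('sm')]: { fontSize: '0.9rem' },
  },
  [`& .${cardHeaderClasses.subheader}`]: {
    fontSize: '0.8rem',
  },
}));
This keeps a reusable wrapped component, while sx stays handy for one-off tweaks at the call site.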
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71365131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: pivot multiple columns Hi I have a SQL query which gives results like the following table
ID NAME problem_ID date_of_entry elem_id staff_id
1 abc 456 12/12/2014 789 32
1 abc 768 12/01/2014 896 67
1 abc 897 02/14/2014 875 98
2 bcd 723 02/17/2014 287 09
2 bcd 923 09/13/2014 879 01
2 bcd 878 08/23/2014 hgd 34
I want results results as below
ID NAME problem_ID_1 problem_ID_2 problem_ID_3 date_of_entry_1 date_of_entry_2 date_of_entry_3 elem_id_1 elem_id_2 elem_id_3 staff_id_1 staff_id_2 staff_id_3
problem_id, date_of_entry, elem_id and staff_id are all dynamic. Can you please give me an idea of how I should do this using the pivot function or any other way?
A: Try this
I have done it for two columns, problem_ID and date_of_entry; you can add the other two columns to the pivot in the same way.
Fiddle demo here:
http://sqlfiddle.com/#!3/ef8e8e/1
CREATE TABLE #Products
(
ID INT,
NAME VARCHAR(30),
problem_ID INT,
date_of_entry DATE,
elem_id VARCHAR(30),
staff_id INT
);
INSERT INTO #Products
VALUES (1,'abc',456,'2014/12/12',789,32),
(1,'abc',768,'2014/12/01',896,67),
(1,'abc',897,'2014/02/14',875,98),
(2,'bcd',723,'2014/02/17',287,09),
(2,'bcd',923,'2014/09/13',879,01),
(2,'bcd',878,'2014/08/23','hgd',34)
DECLARE @problm VARCHAR(MAX)='',
@daofenty_id VARCHAR(MAX)='',
@aggproblm VARCHAR(MAX)='',
@aggdaofenty_id VARCHAR(MAX)='',
@sql NVARCHAR(max)
SET @problm = (SELECT DISTINCT Quotename('problm'+CONVERT(VARCHAR(50), Row_number() OVER(partition BY ID ORDER BY problem_ID)))
+ ','
FROM #Products
FOR XML PATH(''))
SET @aggproblm = (SELECT DISTINCT ' max('
+ Quotename('problm'+CONVERT(VARCHAR(50), Row_number() OVER(partition BY ID ORDER BY problem_ID)))
+ ') problm'
+ CONVERT(VARCHAR(50), Row_number() OVER(partition BY ID ORDER BY problem_ID))
+ ','
FROM #Products
FOR XML PATH(''))
SET @daofenty_id =(SELECT DISTINCT
+ Quotename('daofenty_id'+CONVERT(VARCHAR(50), Row_number() OVER(partition BY ID ORDER BY date_of_entry)))
+ ','
FROM #Products
FOR XML PATH(''))
SET @aggdaofenty_id = (SELECT DISTINCT + ' max('
+ Quotename('daofenty_id'+CONVERT(VARCHAR(50), Row_number() OVER(partition BY ID ORDER BY date_of_entry)))
+ ') daofenty_id'
+ CONVERT(VARCHAR(50), Row_number() OVER(partition BY ID ORDER BY date_of_entry))
+ ','
FROM #Products
FOR XML PATH(''))
SET @problm = LEFT(@problm, Len(@problm) - 1)
SET @daofenty_id = LEFT(@daofenty_id, Len(@daofenty_id) - 1)
SET @aggproblm = LEFT(@aggproblm, Len(@aggproblm) - 1)
SET @aggdaofenty_id = LEFT(@aggdaofenty_id, Len(@aggdaofenty_id) - 1)
SET @sql = 'SELECT Id,name,' + @aggproblm + ','
+ @aggdaofenty_id + '
FROM (select * from (SELECT ''problm''+convert(varchar(50),row_number() over(partition by ID order by problem_ID)) problm_id, ''daofenty_id''+convert(varchar(50),row_number() over(partition by ID order by date_of_entry)) daofenty_id ,
''elemid''+convert(varchar(50),row_number() over(partition by ID order by elem_id)) elemid , ''staffid''+convert(varchar(50),row_number() over(partition by ID order by staff_id)) staffid,*
FROM #Products) A
) AS T
PIVOT
(max(problem_id) FOR problm_id IN
(' + @problm + ')) AS P1
PIVOT
(max(date_of_entry) FOR daofenty_id IN
('
+ @daofenty_id + ')) AS P1
group by id,name'
--PRINT @sql
EXEC Sp_executesql
@sql
to limit the no. of columns
SET @problm = (SELECT DISTINCT TOP N Quotename('problm'+CONVERT(VARCHAR(50), Row_number() OVER(partition BY ID ORDER BY problem_ID)))
+ ','
FROM #Products
FOR XML PATH(''))
Similarly do the same for other columns..
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26490776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to resolve NoSuchFileException for Files.createDirectories On a production Linux environment, I'm getting a NoSuchFileException from the Files.createDirectories API.
I have checked the Javadoc for Files.createDirectories and it does not itself declare this exception.
I need to know under what conditions we get this error for the Files.createDirectories API.
stackTrace
-----------
java.nio.file.NoSuchFileException: /folder1/folder2/folder abc/ABC-UVW XYZ
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
Code
if(!file.getParentFile().exists()) {
Files.createDirectories(Paths.get(file.getParent()));
//Files.createDirectories() should create all non existent parent directories but instead it throws NoSuchFileException.
//......other code
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57070534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Functional programming in nuclear plants? After reading this question I just wondered whether it would be a good idea to use Haskell (or other functional programming languages) in mission critical industries.
Apart from Erlang, most languages followed imperative/design-by-contract paradigms (Ada, Eiffel, C++).
But what about the functional ones?
The resulting code would be easily maintainable, stable and lots of potential bugs could be eliminated by their strict type systems at compile-time.
Or is lazy evaluation more dangerous than helpful? Are there other security drawbacks?
A: I think you could. The language seems well suited for such situations, assuming you trust the compiler enough to use it in mission critical situation.
Remember that in mission critical situations it is not only your code that is under scrutiny, but all other components too. That includes compiler (Haskell compiler is not among the easiest ones to code review), appropriate certified hardware that runs the software, appropriate hardware that compiles your code, hardware that bootstraps the compilation of the compiler that will compile your code, hell - even wires that connect that all to the power grid and frequency of voltage change in the socket.
If you are interested in looking at mission critical software quality, I suggest looking at NASA software quality procedures. They are very strict and formal, but well these guys throw millions of dollars in space in hope it will survive pretty rough conditions and will make it to Mars or wherever and then autonomously operate and send some nice photos of Martians back to earth.
So, there you go: Haskell is good for mission critical situations, but it'd be an expensive process to bootstrap its usage there.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1147248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: Revision Control and Dependency Resolution with NPM / Node / package.json We have not been committing node_modules folder(s) in our application to revision control. Our build processes and developer instructions include running "npm install" manually on an initial check out to install required node modules. Our package.json files detail specific dependency versions.
Recently, our automated builds broke because a down stream dependency broke due to a recent 3rd party commit which I did not think would be possible. Our package.json file is as follows:
{
"name": "test-package",
"description": "Test Package",
"version": "1.0.0",
"license": "UNLICENSED",
"private": true,
"repository": { "type": "svn", "url": "" },
"dependencies": {
"extend": "3.0.0",
"windows-registry": "0.1.3"
}
}
Specifically, our dependency on "windows-registry" version "0.1.3" broke because of a child dependency of that module ("ref" version "1.2.0"). The dependencies from "windows-registry" package.json file are as follows:
"dependencies": {
"debug": "^2.2.0",
"ffi": "^2.0.0",
"ref": "^1.2.0",
"ref-struct": "^1.0.2",
"ref-union": "^1.0.0"
}
I would assume "windows-registry" would always reference version "1.2.0" of the "ref" package, but it was actually pulling in version "1.3.4" and then recently "1.3.5" which broke our builds. I verified in the package.json file for "ref" that it is not version "1.2.0". The package.json file for "ref" is huge and it has lots of values such as "ref@^1.2.0" under various keys within the file. Interesting parts of the package.json file are as follows:
{
/* Lots of other stuff */
"_spec": "ref@^1.2.0",
"version": "1.3.4"
}
Why is NPM not loading the same consistent repeatable dependency graph? Should we be committing node_modules to our revision control?
A: See this SO answer:
In the simplest terms, the tilde matches the most recent minor version (the middle number). ~1.2.3 will match all 1.2.x versions but will miss 1.3.0.
The caret, on the other hand, is more relaxed. It will update you to the most recent major version (the first number). ^1.2.3 will match any 1.x.x release including 1.3.0, but will hold off on 2.0.0.
As far as your other questions: you should definitely not commit your node_modules folder. You should rather commit a package-lock.json file, which will freezes your dependencies as they are. The shrinkwrap command was typically used for this, but as of npm v5 the lock file is generated by default
I would also suggest looking into yarn, which is an npm compatible package manager that is better and faster at managing complex dependency trees
Finally, since the npm repository more or less enforces semver, it helps to be aware of what each increment is supposed to mean in terms of breaking vs non-breaking changes. In your case, the changes should have been backward compatible if the package author had followed semantic versioning:
Given a version number MAJOR.MINOR.PATCH, increment the:
*
*MAJOR version when you make incompatible API changes,
*MINOR version when you add functionality in a backwards-compatible manner, and
*PATCH version when you make backwards-compatible bug fixes.
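To make the tilde/caret distinction concrete with the ref dependency from the question (the annotations after the arrows are illustrative, not npm syntax):
"ref": "1.2.0"   -> installs exactly 1.2.0
"ref": "~1.2.0"  -> allows 1.2.x (e.g. 1.2.1) but not 1.3.0
"ref": "^1.2.0"  -> allows 1.x.x (e.g. 1.3.4, 1.3.5) but not 2.0.0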
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46083385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: django send_mail with SMTP backend cannot send out email I am deploying a Django project on an ubuntu stack with a postfix SMTP mail server, hosted on Amazon's EC2. I can send out email from the server using the Linux mail program. But when I try to send email using django.core.mail.send_mail, the email is never received.
Here are my settings:
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
I left everything else as default.
I tried
python manage.py shell
Then in the shell, I did
from django.core.mail import *
send_mail(
'TEST',
'THIS IS A TEST',
'[email protected]',
['[email protected]'],
fail_silently=False,
)
This returns 1, but I never received any message at the destination ('[email protected]' in the example).
Is there a tutorial on how to configure a SMTP server to work with Django's mail system? Thanks.
A: I assume that you did specify your EMAIL_HOST, EMAIL_PORT, EMAIL_HOST_USER and EMAIL_HOST_PASSWORD in your settings.py right?
A detailed explanation of how the default django.core.mail.backends.smtp.EmailBackend works is explained -
https://docs.djangoproject.com/en/dev/topics/email/
https://docs.djangoproject.com/en/dev/topics/email/#smtp-backend
And specifically for your email port: did you open the SMTP port in your EC2 security group? SMTP usually defaults to port 25 if you kept the default during your postfix configuration, and it is important that this port was opened when you created your EC2 instance.
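For reference, a minimal sketch of the SMTP settings block for a local postfix relay; the host, port and from-address values here are placeholders to adapt:
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'localhost'        # the postfix server
EMAIL_PORT = 25                 # must also be open in the EC2 security group
EMAIL_HOST_USER = ''            # empty if postfix relays local mail without auth
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
DEFAULT_FROM_EMAIL = 'webmaster@example.com'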
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6645501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: cross tables query Let us say we have 2 tables : A and B.
In table A we have payments for customers. 12 payments each year for each customer identified by numClient column and the amount of payment identified by payment column.
In table B there is the sum of payments of the year for every customer so just one row each year per customer still identified by a numClient column and payment identified by yearPayment column.
I would like a query that lists all customers (displaying numClient) whose yearPayment of table B is different from the sum of his payments in table A.
As those tables cover differents years, I would like to query only for 2018. In table A, the payment date is PaymentDate Column. In table B, the year of payment is YearPayment column.
A: The whole story sounds wrong. Not your words, but the model - why are you using table B? Keep payments where they are (table A). If you have to sum them, do so. Or create a view. But, keeping them separately in two tables just asks for a problem (the one you have now - finding a difference).
Anyway:
select a.id_customer, sum(a.payment), b.sum_payment
from a join b on a.id_customer = b.id_customer
where extract(year from a.date_column) = 2018
and extract(year from b.date_column) = 2018
group by a.id_customer, b.sum_payment
having b.sum_payment <> sum(a.payment)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51982559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
} |
Q: Get the name of the input variable in a function This appears to be a difficult question to answer. Given a function such as the one displayed how would you get the name of the input variable for debugging purposes. i.e.) root -> root.left -> root.right -> root.left.right -> etc...
or i.e.) tree -> tree.left -> tree.right -> tree.left.right -> etc...
function TreeNode(val) {
this.val = val;
this.left = this.right = null;
}
var sum = function(root) {
console.log(root);
if(root === null) return 0;
return root.val + sum(root.left) + sum(root.right);
}
let tree = new TreeNode(1);
tree.left = new TreeNode(2);
tree.right = new TreeNode(3);
tree.left.right = new TreeNode(4);
let x = sum(tree);
console.log(x);
Basically, I want to console.log() the name of the variable rather than root in the sum function.
A:
Basically, I want to console.log() the name of the variable rather than root in the sum function.
You can't. When your sum function is called, it is passed a value. That value is a pointer to an object and there is no connection at all to the variable that the pointer came from. If you did this:
let tree = new TreeNode(1);
let x = y = tree;
sum(x);
sum(y);
there would be no difference at all in the two calls to sum(). They were each passed the exact same value (a pointer to a TreeNode object) and there is no reference at all to x or y or tree in the sum() function.
If you want extra info (like the name of a variable) for debugging reasons and/or logging, then you may have to pass that extra name into the function so you can log it.
A: You can change the sum function for debugging purposes:
function sum(root, path) {
if(!path) {
path = 'root';
}
console.log(path);
if(root === null) return 0;
return root.val + sum(root.left, path+'.left') + sum(root.right, path+'.right');
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50082079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: "Failed to attach to the remote VM" connecting jdb to the android emulator on Windows I’ve been trying to connect jdb to the android emulator for a little while, and have been met repeatedly with:
jdb -sourcepath ./src -attach localhost:8700
java.io.IOException: shmemBase_attach failed: The system cannot find the file specified
at com.sun.tools.jdi.SharedMemoryTransportService.attach0(Native Method)
at com.sun.tools.jdi.SharedMemoryTransportService.attach(SharedMemoryTransportService.java:90)
at com.sun.tools.jdi.GenericAttachingConnector.attach(GenericAttachingConnector.java:98)
at com.sun.tools.jdi.SharedMemoryAttachingConnector.attach(SharedMemoryAttachingConnector.java:45)
at com.sun.tools.example.debug.tty.VMConnection.attachTarget(VMConnection.java:358)
at com.sun.tools.example.debug.tty.VMConnection.open(VMConnection.java:168)
at com.sun.tools.example.debug.tty.Env.init(Env.java:64)
at com.sun.tools.example.debug.tty.TTY.main(TTY.java:1010)
Fatal error:
Unable to attach to target VM.
Not so great. What's the best way of getting round this? I'm running on Windows 7 64bit.
A: Currently this is working for me -- making a socket rather than a shared memory connection.
>jdb -sourcepath .\src -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8700
Beforehand you need to do some setup -- for example, see this set of useful details on setting up a non-eclipse debugger. It includes a good tip for setting your initial breakpoint -- create or edit a jdb.ini file in your home directory, with content like:
stop at com.mine.of.package.some.AClassIn:14
and they'll get loaded and deferred until connection.
edit: forgot to reference Herong Yang's page.
A: Try quitting Android Studio.
I had a similar problem on the Mac due to the ADB daemon already running. Once you quit any running daemons, you should see output similar to the following:
$ adb -d jdwp
28462
1939
^C
$ adb -d forward tcp:7777 jdwp:1939
$ jdb -attach localhost:7777 -sourcepath ./src
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
>
See my other answer to a similar question for more details and how to start/stop the daemon.
A: Answer #1: Map localhost in your hosts file, as I linked to earlier. Just to be sure.
Answer #2: If you're using shared memory, bit-size could easily become an issue. Make sure you're using the same word width everywhere.
A: In order to debug application follow this steps:
Open the application on the device.
Find the PID with jdwp (make sure that 'android:debuggable' is set to true in the manifest):
adb jdwp
Start JVM with the following parameters:
java -agentlib:jdwp=transport=dt_shmem,server=y,address=<port> <class>
Expected output for this command:
Listening for transport dt_shmem at address: <port>
Use jdb to attach the application:
jdb -attach <port>
If jdb successful attached we will see the jdb cli.
Example:
> adb jdwp
12300
> java -agentlib:jdwp=transport=dt_shmem,server=y,address=8700 com.app.app
Listening for transport dt_shmem at address: 8700
> jdb -attach 8700
main[1]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4220174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
} |
Q: Python coroutine can one `send` without first doing `next`? When sending a value to a generator/coroutine, is there a way to avoid that initial next(g)?
def gen(n):
m = (yield) or "did not send m to gen"
print(n, m)
g = gen(10)
next(g)
g.send("sent m to g") # prints "10 sent m to g"
Without next(g), we get
TypeError: can't send non-None value to a just-started generator
A: The error stems from this bit of code in CPython's gen_send_ex2, i.e. it occurs if gi_frame_state is FRAME_CREATED.
The only place that matters for this discussion that sets gi_frame_state is here in gen_send_ex2, after a (possibly None) value has been sent and a frame is about to be evaluated.
Based on that, I'd say no, there's no way to send a non-None value to a just-started generator.
A: Not sure if this is helpful in your specific case, but you could use a decorator to initialize coroutines.
def initialized(coro_func):
def coro_init(*args, **kwargs):
g = coro_func(*args, **kwargs)
next(g)
return g
return coro_init
@initialized
def gen(n):
m = (yield) or "did not send m to gen"
print(n, m)
g = gen(10)
g.send("sent m to g") # prints "10 sent m to g"
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71939628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: While loop in php error Since the foreign key isn't even working in phpMyAdmin, I've decided to use a while loop just to put some values into the table that has the foreign key, and here's the error I encountered:
Fatal error: Maximum execution time of 30 seconds exceeded in /mnt/Target01/338270/
Honestly, I've been working with phpMyAdmin for over a year, but this is the first time this error has occurred, and I'm quite stuck. If you know what to do, please tell me.
A: See set_time_limit and also the memory_limit ini setting.
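For example, raising the limits at the top of the script (the values are arbitrary; fixing the loop's exit condition is still the real cure):
set_time_limit(300);                 // allow up to 5 minutes instead of 30 seconds
ini_set('memory_limit', '256M');     // raise the memory ceiling if needed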
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5429555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: FIRDLJavaScriptExecuter.m WKWebView crashes on alloc/init
This is the line which caused the crash. I am using XCODE 12.2 with new M1 chip. I have updated the pods as well
| {
"language": "en",
"url": "https://stackoverflow.com/questions/65302884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: getting error unrecognized selector sent at tableview when last row of table view is clicked I'm parsing JSON data, populating the table view and doing some validation with the incoming JSON data. Everything works fine. I wrote the code so that when the last table view row is clicked it should open a modal view controller. When it is clicked, I get this error: [tableView1] unrecognized selector sent to instance. Could you help me out? Below is the code.
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
// Navigation logic may go here. Create and push another view controller.
if (indexPath.row == 5) {
if (self.dvController6 == nil)
{
Vad_tycker *temp = [[Vad_tycker alloc] initWithNibName:@"Vad_tycker" bundle:[NSBundle mainBundle]];
self.dvController6 = temp;
[temp release];
}
[self presentModalViewController:self.dvController6 animated:YES];
}
}
A: It seems you have forgotten to hook up tableView1 in Vad_tycker.
You should also cross-check that the correct instance is assigned as the table view's delegate and data source, and make sure the delegate methods are implemented in their respective target classes.
A: I think you forgot to connect the table view's data source and delegate in the Vad_tycker controller.
Also check that the UITableView outlet (tableView1 in your case) is connected to the table view on the view.
Thanks
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9323088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to use a Jenkins tool installation in a docker container? To reduce duplication of effort in my docker containers, I'd like to run pipeline steps both in a docker container, and with Jenkins tool installations available.
This naïve attempt doesn't work - npm is not found
pipeline {
agent { dockerfile true }
tools { nodejs 'LTS' }
stages {
stage('NPM') {
steps { sh 'npm install-ci-test' }
}
}
}
Is this possible?
A: You can make it available when you configure the docker container by mounting the Jenkins folder on the build agent.
pipeline {
agent {
docker {
....
// Make tools folder available in docker (some slaves use mnt while other uses storage)
args '-v /mnt/Jenkins_MCU:/mnt/Jenkins_MCU -v /storage/Jenkins_MCU:/storage/Jenkins_MCU'
...
}
....
stage(...){
environment {
myToolHome = tool 'MyTool'
}
steps {
...
sh "${myToolHome}/path/to/binary arguments"
....
I am not sure how to get the path of the location for jenkins on the build agent, so in this example it is hard coded.
But it makes the tool available in the docker image.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56447619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
} |
Q: Remove "All Members" tab in Buddypress search results Is it possible to remove the "All Members" tab and list of members on the Members page and search results in buddypress? Currently there are two tabs, "All Members" and "My Friends". I do not want to display a list of all my buddypress members so I want to remove this completely hopefully without having to change the core files which I know isn't recommended.
A: To remove the tab instead of hiding it, create a template overload of this file:
buddypress\bp-templates\bp-legacy\buddypress\members\index.php
And simply delete the list element that creates the All Members tab.
To change what is displayed below those tabs, create a template overload of this file:
buddypress\bp-templates\bp-legacy\buddypress\members\members-loop.php
And adjust as necessary.
A: You may make both tabs invisible using CSS, which is less painful (but it can be undone by changing the CSS property using dev tools).
Try searching for both tabs' IDs or classes, then modify your BuddyPress CSS template file with:
.TabClassName {
display:none!important;
}
or
#TabId {
display:none!important;
}
The !important is to override any other tab visibility modifier.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/27880903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: MYSQLI_NUM_ROWS doesn't return anything I have a table called "users" with 1 row.
I have been trying to get the number of rows that exist when the username and password have been entered. This wasn't returning anything, so I have created this code in the most simple form, but still it is not returning anything.
If I run the query on phpmyadmin, it returns the row.
Why could this not be working?
include("../includes/db.php");
$result = $link->query("SELECT * FROM users");
die(mysqli_num_rows($result));
The connection to the database is fine, all the other code works fine on my CMS.
edit:
This is my now working code:
include("../includes/db.php");
if(!isset($_SESSION['loggedin'])){
if(isset($_POST['username'])){
$username = $_POST['username'];
$password = md5($_POST['password']);
$sql = "SELECT * from users WHERE username LIKE '{$username}' AND password LIKE '{$password}' LIMIT 1";
$result = $link->query($sql);
if (!$result->num_rows == 1) {
echo "<p>Invalid username/password combination</p>";
LoginForm();
} else {
echo "<p>Logged in successfully</p>";
$_SESSION['loggedin'] = 1;
}
}else{
LoginForm();
}
}
A: include("../includes/db.php");
$result = $link->query("SELECT * FROM users");
echo $result->num_rows;
My bad for the previous answer. It's been a while since I've used PHP
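Since the login query in the question builds SQL by string interpolation, a prepared statement is also worth considering; a rough sketch reusing the question's $link connection and session logic:
$stmt = $link->prepare("SELECT * FROM users WHERE username = ? AND password = ? LIMIT 1");
$stmt->bind_param("ss", $username, $password);
$stmt->execute();
$stmt->store_result();
if ($stmt->num_rows == 1) {
    echo "<p>Logged in successfully</p>";
    $_SESSION['loggedin'] = 1;
} else {
    echo "<p>Invalid username/password combination</p>";
    LoginForm();
}
$stmt->close();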
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26679776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: SpriteKit: Preload sound file into memory before playing? Just wondering if this is possible. Currently, the first time I play a sound file while the app is running, there is a noticeable delay before the sound actually plays (like it's caching it or something). After this it plays instantly without issue, but if I close the app completely and relaunch it, the delay will be back the first time the sound is played. Here is the code I'm using to play the sound:
[self runAction:[SKAction playSoundFileNamed:@"mySound.caf" waitForCompletion:NO]];
A: One approach you could take is to load the sound in right at the beginning of the scene:
YourScene.h:
@interface YourScene : SKScene
@property (strong, nonatomic) SKAction *yourSoundAction;
@end
YourScene.m:
- (void)didMoveToView: (SKView *) yourView
{
_yourSoundAction = [SKAction playSoundFileNamed:@"yourSoundFile" waitForCompletion:NO];
// the rest of your init code
// possibly wrap this in a check to make sure the scene's only initiated once...
}
This should preload the sound, and you should be able to run it by calling the action on your scene:
[self runAction:_yourSoundAction];
I've tried this myself in a limited scenario and it appears to get rid of the delay.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22826675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19"
} |
Q: Get Table Value in jQuery <table>
<tr>
<td>dovecot</td>
<td></td>
<td>0.00</td>
<td>0.10</td>
<td>0.0</td>
</tr>
<tr>
<td>dpsel</td>
<td>dps-e-learn.in</td>
<td>0.00</td>
<td>0.06</td>
<td>0.0</td>
</tr>
<tr>
<td>svarun</td>
<td>svarun.in</td>
<td>0.00</td>
<td>0.02</td>
<td>0.0</td>
</tr>
<tr>
<td>DELAYED</td>
<td></td>
<td>0.00</td>
<td>0.00</td>
<td>0.1</td>
</tr>
<tr>
<td>hostc1</td>
<td>hostraptor.in</td>
<td>0.00</td>
<td>0.05</td>
<td>0.0</td>
</tr>
<tr>
<td>Top Process</td>
<td>%CPU 0.1</td>
<td colspan="3">
httpd [ecomwel.hostraptor.in] [/wp-content/plugins/nextgen-gallery/xml/media-rss.php?gid7]
</td>
</tr>
<tr>
<td>astrore</td>
<td>astroreddy.com</td>
<td>0.00</td>
<td>0.02</td>
<td>0.0</td>
</tr>
<tr>
<td>cpanel</td>
<td></td>
<td>0.00</td>
<td>0.00</td>
<td>0.0</td>
</tr>
<tr>
<td>named</td>
<td></td>
<td>0.00</td>
<td>0.20</td>
<td>0.0</td>
</tr>
</table>
I have a table like the one above, and I need to search it dynamically by name using jQuery.
For example: I need to search for astrore and get the next element, i.e. the domain name.
The tr/td contents change every 2 minutes.
A: The contains-selector:
var value = $("td:contains('astrore')").next().text();
A: This allows for a repeating check for the value:
function scanForValue(value) {
$("td").each(function() {
if ($(this).text()==value) {
console.log($(this).next().text());
}
});
window.setTimeout("scanForValue('"+value+"');", 120000);
}
scanForValue('astrore');
http://jsfiddle.net/hLKRd/3/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14016355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Search inside a string $variable = 'of course it is unnecessary [http://google.com],
but it is simple["very simple"], and this simple question clearly
needs a simple, understandable answer [(where is it?)] in plain English'
Value of this variable everytime changes.
What I'm trying to do is get the text from [...]. So, if there is [(google)], the match should be (google).
I'm searching for a solution, which can do each of these actions:
*
*get all matches of [...], write into $all
*get only the first match, write into $first
*get only the last match, write into $last
*remove all matches of [...] from the variable (erase)
*remove only first match
*remove only last match
Tried different regex for this, like /[\(.*?\)]/, but the results aren't what one might expect.
A: This should do it:
$variable = 'of course it is unnecessary [http://google.com],
but it is simple["very simple"], and this simple question clearly
needs a simple, understandable answer [(where is it?)] in plain English';
preg_match_all("/(\[(.*?)\])/", $variable, $matches);
$first = reset($matches[2]);
$last = end($matches[2]);
$all = $matches[2];
# To remove all matches
foreach($matches[1] as $key => $value) {
$variable = str_replace($value, '', $variable);
}
# To remove first match
$variable = str_replace($first, '', $variable);
# To remove last match
$variable = str_replace($last, '', $variable);
Note that if you use str_replace to replace the tags, all similar occurrences of the tags will be removed if any exist, not just the first.
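If you only want to strip the first bracketed chunk without the risk of str_replace hitting an identical chunk elsewhere, preg_replace's optional limit argument can do it; for example:
// remove only the first [...] occurrence (the 4th argument is the limit)
$variable = preg_replace('/\[.*?\]/', '', $variable, 1);
// remove every [...] occurrence
$variable = preg_replace('/\[.*?\]/', '', $variable);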
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3734519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Git editor commit error I have my core.editor set to Sublime Text. However, all of my commits automatically fail and I am given the following:
Aborting commit due to empty commit message.
Even though it does open Sublime Text. Is there something else I need to do to prevent this, or will Sublime just not work for this?
Note, I am doing this all from the terminal.
A: The problem is that the Sublime Text command-line tool by default tells the Sublime Text GUI to open a file, and then exits right away, even while the GUI still has the file open. There's an option, though, that'll tell the command-line tool to wait for the file to be closed in the GUI. That option is --wait (or -w for short). So if I try this:
: $; git config core.editor subl
: $; git commit
... I get the following -- with the first line showing briefly then getting erased and replaced by the second:
hint: Waiting for your editor to close the file...
Aborting commit due to empty commit message.
You may or may not see the first line, as this all happens at about the same time as the Sublime Text GUI is becoming visible, and opening up the COMMIT_MESSAGE file. And when you return to the Terminal, you'll see only the second line.
But if I add the option to wait, it works. So I change the editor like so:
: $; git config core.editor 'subl -w'
And then if I do a git commit, and switch from Sublime Text to the Terminal without closing the COMMIT_MESSAGE file, I see:
: $; git commit
hint: Waiting for your editor to close the file...
And then if I go back and close the file (after writing in some text and saving it), I come back to see:
: $; git commit
[master 79d5a7b] Commit message I typed in Sublime Text GUI.
1 file changed, 1 insertion(+), 1 deletion(-)
A: If you're exiting without changing the message, you should be aware that you actually have to type something in.
The same thing happens to me (with gedit) if I don't actually enter something over and above the comment lines it starts with (merges are okay since they automatically add the non-comment "merging a to b"-like text).
However, if your editor is actually starting but the git commit carries on while it's open, then there's an issue with the way the editor is being started.
There's a known issue with Sublime Text in that programs that start it can't always correctly identify that it's still running. I think that may be due to the fact that the command line tool simply tells the GUI program to open the file (starting it first if needed), then the command line tool will exit.
Hence, git will assume it's finished and, since the file hasn't been changed at that point, it gives you that error message.
In terms of fixing that issue, I believe Sublime Text added a -w flag to ensure this didn't happen.
In any case, I prefer explicitly entering a message on the command line with something like:
git commit -m 'fixed my earlier screw-up'
so that I don't have to worry about editors and such.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50184996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Script to read in line, select from value, and print to file So I have a php script, I want to read in a file line by line, each line only contains one id. I want to select using sql for each id in the file, then print the result for each selection in the same file.
so far i have:
while (!feof($file))
{
// Get the current line that the file is reading
$currentLine = fgets($file) ;
//explodes integers by amount of sequential spaces
//$currentLine = preg_split('/[\s,]+/', $currentLine);
echo $currentLine; //this echo statement prints each line correctly
selectQuery($currentLine) ;
}
fclose($file) ;
as a test so far i only have
function selectQuery($currentLine){
echo $currentLine; //this is undefined?
}
A: The result of fgets is never undefined. However, your approach is way too low-level. Use file and array_filter:
$results = array_filter(file('input.filename'), function($line) {
return strpos($line, '4') !== false; // Add filter here
});
var_export($results); // Do something with the results here
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7601692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Oracle TOP N ordered rows I would like to get the top N rows from an Oracle table sorted by date.
The common way to do this, and this solution returns for every question I could find on SO/google.
Select *
from
(select * from
myTable
order by Date desc)
where rownum < N
This solution is in my case impracticable because myTable contains a huge amount of rows, which would
lead to Oracle taking too long to return all rows in the subquery.
Question is, is there a way to limit the number of ORDERED rows returned in the subquery ?
A:
Question is, is there a way to limit the number of ORDERED rows
returned in the subquery ?
The following is what I typically use for top-n type queries (pagination query in this case):
select * from (
select a.*, rownum r
from (
select *
from your_table
where ...
order by ...
) a
where rownum <= :upperBound
)
where r >= :lowerBound;
I usually use an indexed column to sort in inner query, and the use of rownum means Oracle can use the count(stopkey) optimization. So, not necessarily going to do full table scan:
create table t3 as select * from all_objects;
alter table t3 add constraint t_pk primary key(object_id);
analyze table t3 compute statistics;
delete from plan_table;
commit;
explain plan for
select * from (
select a.*, rownum r
from (
select object_id, object_name
from t3
order by object_id
) a
where rownum <= 2000
)
where r >= 1;
select operation, options, object_name, id, parent_id, position, cost, cardinality, other_tag, optimizer
from plan_table
order by id;
You'll find Oracle does a full index scan using t_pk. Also note the use of stopkey option.
Hope that explains my answer ;)
A: Your inference that Oracle must return all rows in the subquery before filtering out the first N is wrong. It will start fetching rows from the subquery, and stop when it has returned N rows.
Having said that, it may be that Oracle needs to select all rows from the table and sort them before it can start returning them. But if there were an index on the column being used in the ORDER BY clause, it might not.
Oracle is in the same position as any other DBMS: if you have a large table with no index on the column you are ordering by, how can it possibly know which rows are the top N without first getting all the rows and sorting them?
A: Order by may become heavy operation if you have lots of data. Take a look at your execution plan. If the data is not real time you could create a material view on these kind of selects...
A: In older versions of Oracle (8.0) you cannot use an ORDER BY clause in a subquery.
So, for those of us who still use such ancient versions, there is another way to deal with it: the UNION operator.
UNION will sort the records by columns in the query:
Example:
SELECT * FROM
(SELECT EMP_NO, EMP_NAME FROM EMP_TABLE
UNION
SELECT 99999999999,'' FROM DUAL)
WHERE ROWNUM<=5
where 99999999999 is bigger then all values in EMP_NO;
Or, if you want to select TOP 5 salary employees with the highest 5 salaries:
SELECT EMP_NO, EMP_NAME, 99999999999999-TMP_EMP_SAL
FROM
(SELECT 99999999999999-EMP_SAL TMP_EMP_SAL, EMP_NO, EMP_NAME
FROM EMP_TABLE
UNION
SELECT 99999999999999,0,'' FROM DUAL)
WHERE ROWNUM<=5;
Regards,
Virgil Ionescu
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6858325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: MySQL sum of change in growth and order by growth Hello, I have two tables: one for the portfolio and one for stock data details. I want to find out the total growth of each user's portfolio based on price changes. For example:
User A:
stock A -4.41
Stock B -1.49
Stock C 0.38
Stock D 1.43
User B
Stock A -2.05
Stock B .05
I want to show table like: USER Growth
A -4.09
B -2.1
Here are my two tables. One is portfolio, where all buy/sell info for users is kept by user_id:
+-------------------+---------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+---------------+------+-----+---------+----------------+
| portfolio_ID | int(11) | NO | PRI | NULL | auto_increment |
| user_id | int(11) | NO | | NULL | |
| contest_id | int(11) | NO | | NULL | |
| company_id | int(11) | NO | | NULL | |
| share_amount | int(11) | NO | | NULL | |
| buy_price | decimal(9,2) | NO | | NULL | |
| total_buy_price | decimal(10,4) | NO | | NULL | |
| buy_date | date | NO | | NULL | |
| commision | decimal(9,2) | NO | | NULL | |
| sell_share_amount | int(11) | NO | | NULL | |
| sell_price | decimal(10,2) | NO | | NULL | |
| total_sell_price | decimal(16,2) | NO | | NULL | |
| sell_date | date | NO | | NULL | |
| sell_commision | decimal(9,2) | NO | | NULL | |
+-------------------+---------------+------+-----+---------+----------------+
Next the daily stock table:
+-----------------+------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------------+------------------+------+-----+---------+-------+
| company_id | int(11) | NO | PRI | NULL | |
| entry_date | date | NO | PRI | NULL | |
| entry_timestamp | int(10) unsigned | NO | | NULL | |
| open | decimal(16,2) | NO | | NULL | |
| high | decimal(16,2) | NO | | NULL | |
| low | decimal(16,2) | NO | | NULL | |
| ltp | decimal(16,2) | NO | | NULL | |
| ycp | decimal(16,2) | NO | | NULL | |
| cse_price | decimal(9,2) | NO | | NULL | |
| cse_volume | decimal(18,2) | NO | | NULL | |
| total_trade | int(30) | NO | | NULL | |
| total_volume | int(30) | NO | | NULL | |
| total_value | decimal(18,4) | NO | | NULL | |
| changes | float(10,2) | NO | | NULL | |
| floating_cap | float(16,2) | NO | | NULL | |
+-----------------+------------------+------+-----+---------+-------+
This is the SQL I am using:
SELECT
po.user_id,
((e.ltp-po.buy_price)/po.buy_price) AS growth
FROM eod_stock AS e
LEFT OUTER JOIN portfolio AS po
ON e.company_id = po.company_id
WHERE po.contest_id = 2
GROUP BY po.user_id;
but it returns only the first change, not the sum of the growth changes, so I then modified it like this:
SELECT
po.user_id,
SUM((e.ltp-po.buy_price)/po.buy_price) AS growth
FROM eod_stock AS e
LEFT OUTER JOIN portfolio AS po
ON e.company_id = po.company_id
WHERE po.contest_id = 2
GROUP BY growth
ORDER BY growth DESC;
but it generates error code 1056: can't group on 'growth'.
here is the sql fiddle:
http://sqlfiddle.com/#!2/b3a83/2
Thanks for reading.
A: You don't want to group by growth. Why did you change the group by?
SELECT po.user_id, SUM((e.ltp-po.buy_price)/po.buy_price) AS growth
FROM eod_stock e LEFT OUTER JOIN
portfolio po
ON e.company_id = po.company_id
WHERE po.contest_id = 2
GROUP BY po.user_id
ORDER BY growth DESC;
However, I suspect that you might actually want the maximum minus the minimum:
SELECT po.user_id,
(MAX(e.ltp-po.buy_price) - MIN(e.ltp-po.buy_price))/po.buy_price AS growth
FROM eod_stock e LEFT OUTER JOIN
portfolio po
ON e.company_id = po.company_id
WHERE po.contest_id = 2
GROUP BY po.user_id
ORDER BY growth DESC;
This gives an overall view over time. However, it doesn't show the direction. So something going from 50 to 100 gets the same value as something going from 100 to 50. This can be fixed, if this is what you are really looking for.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22458832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Add additional information to a region. iBeacons I want to be able to add more information, like an array or a string, when I initialize my CLBeaconRegion so that I can receive it at my didRangeBeacons-method. (not major, or minor)
At the moment, it looks like this:
_advertRegion = [[CLBeaconRegion alloc] initWithProximityUUID:_uuid identifier:@"003-002-001"];
But I really want to initialize it like this or similar:
_advertRegion = [[CLBeaconRegion alloc] initWithProximityUUID:_uuid identifier:@"003-002-001" setArrayOrSomething:myArray];
And also I should obviously be able to take the information from the region like:
[region getArray];
Of course, it doesn't have to be like that, just that you have an idea, what I "need".
What I've tried
*
*I've tried to set/get it through a objc_setAssociatedObject
*I've tried to set it through a setValue forKey
A: I would suggest you just use a separate NSDictionary instance keyed off the same identifier you use when constructing your CLBeaconRegion.
Like this:
// Make this a class variable, or make it part of a singleton object
NSMutableDictionary *beaconRegionData = [[NSMutableDictionary alloc] init];
// Here is the data you want to attach to the region
NSMutableArray *myArray = [[NSMutableArray alloc] init];
// and here is your region
_advertRegion = [[CLBeaconRegion alloc] initWithProximityUUID:_uuid identifier:@"003-002-001"];
// attach your data to the NSDictionary instead
[beaconRegionData setValue:myArray forKey:_advertRegion.identifier];
// and you can get it like this
NSLog(@"Here is my array: %@", [beaconRegionData valueForKey:_advertRegion.identifier]);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19646972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: data is not inserting in database c# winform I am using VS 2013 with a service-based database for my Windows Forms employee management application. While inserting data it shows me "Records inserted", but the data is not actually updated in the database. Can anyone help me?
void metroButton1_Click(object sender, EventArgs e)
{
try
{
for (int i = 0; i < dataGridView1.Rows.Count - 1; i++)
{
con = new SqlConnection(cs.DBcon);
using (SqlCommand cmd = new SqlCommand("INSERT INTO tbl_employee VALUES(@Designation, @Date, @Employee_name,@Leave,@L_Reason,@Performance,@Payment,@Petrol,@Grand_Total)", con))
{
cmd.Parameters.AddWithValue("@Designation", dataGridView1.Rows[i].Cells[0].Value);
cmd.Parameters.AddWithValue("@Date", dataGridView1.Rows[i].Cells[1].Value);
cmd.Parameters.AddWithValue("@Employee_name", dataGridView1.Rows[i].Cells[2].Value);
cmd.Parameters.AddWithValue("@Leave", dataGridView1.Rows[i].Cells[3].Value);
cmd.Parameters.AddWithValue("@L_Reason", dataGridView1.Rows[i].Cells[4].Value);
cmd.Parameters.AddWithValue("@Performance", dataGridView1.Rows[i].Cells[5].Value);
cmd.Parameters.AddWithValue("@Payment", dataGridView1.Rows[i].Cells[6].Value);
cmd.Parameters.AddWithValue("@Petrol", dataGridView1.Rows[i].Cells[7].Value);
cmd.Parameters.AddWithValue("@Grand_Total", dataGridView1.Rows[i].Cells[8].Value);
con.Open();
cmd.ExecuteNonQuery();
con.Close();
}
}
MessageBox.Show("Records inserted.");
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
A: I suggest wrapping the insert in a transaction: call con.BeginTransaction() before ExecuteNonQuery, associate the returned SqlTransaction with the command, and call Commit() on that transaction after ExecuteNonQuery.
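To flesh that out, a sketch only (insertSql stands for the INSERT statement from the question): BeginTransaction is called on the open connection, the command is created against both the connection and the transaction, and Commit is called on the transaction.
con.Open();
using (SqlTransaction tx = con.BeginTransaction())
using (SqlCommand cmd = new SqlCommand(insertSql, con, tx))
{
    // add the @... parameters exactly as in the question, then execute and commit
    cmd.ExecuteNonQuery();
    tx.Commit();
}
con.Close();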
A: You should open the connection before you make the command instance, like this:
void metroButton1_Click(object sender, EventArgs e)
{
try
{
con = new SqlConnection(cs.DBcon);
con.Open(); //Open the connection
for (int i = 0; i < dataGridView1.Rows.Count - 1; i++)
{
using (SqlCommand cmd = new SqlCommand("INSERT INTO tbl_employee VALUES(@Designation, @Date, @Employee_name,@Leave,@L_Reason,@Performance,@Payment,@Petrol,@Grand_Total)", con)) //Now create the command
{
cmd.Parameters.AddWithValue("@Designation", dataGridView1.Rows[i].Cells[0].Value);
cmd.Parameters.AddWithValue("@Date", dataGridView1.Rows[i].Cells[1].Value);
cmd.Parameters.AddWithValue("@Employee_name", dataGridView1.Rows[i].Cells[2].Value);
cmd.Parameters.AddWithValue("@Leave", dataGridView1.Rows[i].Cells[3].Value);
cmd.Parameters.AddWithValue("@L_Reason", dataGridView1.Rows[i].Cells[4].Value);
cmd.Parameters.AddWithValue("@Performance", dataGridView1.Rows[i].Cells[5].Value);
cmd.Parameters.AddWithValue("@Payment", dataGridView1.Rows[i].Cells[6].Value);
cmd.Parameters.AddWithValue("@Petrol", dataGridView1.Rows[i].Cells[7].Value);
cmd.Parameters.AddWithValue("@Grand_Total", dataGridView1.Rows[i].Cells[8].Value);
cmd.ExecuteNonQuery();
}
}
con.Close();
MessageBox.Show("Records inserted.");
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
A: Perform everything inside connection as below
void metroButton1_Click(object sender, EventArgs e)
{
try
{
for (int i = 0; i < dataGridView1.Rows.Count - 1; i++)
{
using (SqlConnection connection = new SqlConnection(cs.DBcon))
// create the parameterized INSERT command on this connection
using (SqlCommand cmd = new SqlCommand("INSERT INTO tbl_employee VALUES(@Designation, @Date, @Employee_name,@Leave,@L_Reason,@Performance,@Payment,@Petrol,@Grand_Total)", connection))
{
    connection.Open();
    cmd.Parameters.AddWithValue("@Designation", dataGridView1.Rows[i].Cells[0].Value);
    cmd.Parameters.AddWithValue("@Date", dataGridView1.Rows[i].Cells[1].Value);
    cmd.Parameters.AddWithValue("@Employee_name", dataGridView1.Rows[i].Cells[2].Value);
    cmd.Parameters.AddWithValue("@Leave", dataGridView1.Rows[i].Cells[3].Value);
    cmd.Parameters.AddWithValue("@L_Reason", dataGridView1.Rows[i].Cells[4].Value);
    cmd.Parameters.AddWithValue("@Performance", dataGridView1.Rows[i].Cells[5].Value);
    cmd.Parameters.AddWithValue("@Payment", dataGridView1.Rows[i].Cells[6].Value);
    cmd.Parameters.AddWithValue("@Petrol", dataGridView1.Rows[i].Cells[7].Value);
    cmd.Parameters.AddWithValue("@Grand_Total", dataGridView1.Rows[i].Cells[8].Value);
    cmd.ExecuteNonQuery();
    connection.Close();
}
}
MessageBox.Show("Records inserted.");
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36811680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: why php5 can find my class, but php7 can't? I have an application based on Laravel 5.2 which is using php 5.6 and I decided that it would be great to move my application to php7 (mostly for performance benefits). Everything seems to be working great, except php7 cannot find one specific class in my application.
The file is stored in app/Libraries/Main/Google/Auth/Auth.php, it has the namespace Google and the class name GoogleAuth, so every time I want to use it I just put use Google\GoogleAuth; at the top of the file. In PHP 5 this works great, but in PHP 7 it cannot find the class. What could be the issue here?
A: I think remain one step, run composer dump-autoload and php artisan clear-compiled
command. May be this command will solve this issue.
UPDATE
"autoload": {
"classmap": [
"database",
"app/Libraries/Main"
],
"psr-4": {
"App\\": "app/"
}
}
After update your composer.json with above code, run below command:
//To clears all compiled files.
php artisan clear-compiled
//To updates the autoload_psr4.php
composer dump-autoload
//updates the autoload_classmap.php
php artisan optimize
| {
"language": "en",
"url": "https://stackoverflow.com/questions/40996786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: what ruby gem should I use to handle tar archive manipulation? I need to download a tar.gz file, and replace a directory in it with the contents of another tar.gz file. So far, I've tried the following gems, and found them lacking
*
*archive-tar2: it lost the penultimate path separator ("/") so couldn't actually extract
*archive-tarsimple: simply didn't extract the compressed tarball, and returned no error msg
*minitar: ran into a bug where it failed for filepaths longer than 100 characters
*archive-tar-minitar - fails the same as its parent Errno::ENAMETOOLONG / File name too long
*libarchive: bundle install failed the gcc compile (even after successful brew install libarchive)
I'm starting to lose faith. Is there a good, up to date, well maintained tar archive gem that just works? I'd prefer one that doesn't call out to the command line, since I'd like to eliminate the possibility of commandline injection attacks. But at this point I'll take anything that avoids manually calling out to a shell.
A: You can also check out archive-tar-minitar; it is partially based on minitar, which you already tested, and it doesn't seem to emit calls to the command line.
A: I ended up giving up with using a gem to manipulate the tar archives, and just doing it by shelling out to the commandline.
`cd #{container} && tar xvfz sdk.tar.gz`
`cd #{container} && tar xvfz Wizard.tar.gz`
#update the framework packaged with the wizard
FileUtils.rm_rf(container + "/Wizard.app/Contents/Resources/SDK.bundle")
FileUtils.rm_rf(container + "/Wizard.app/Contents/Resources/SDK.framework")
FileUtils.mv(container + "/resources/SDK.bundle", container + "/Wizard.app/Contents/Resources/")
FileUtils.mv(container + "/resources/SDK.framework", container + "/Wizard.app/Contents/Resources/")
config_plist = render_to_string({
file: 'site/_wizard_config',
layout: false,
locals: { app_id: @version.app.id },
formats: 'xml'
})
File.open(container + "/Wizard.app/Contents/Resources/Configuration.plist", 'w') { |file| file.write( config_plist ) }
`cd #{container} && rm Wizard.tar.gz`
`cd #{container} && tar -cvf Wizard.tar 'Wizard.app'`
`cd #{container} && gzip Wizard.tar`
All these backticks make me feel like I'm writing Perl again.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22260394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Generating unique IDs for multiple sources Consider N sources of data, each with a stream of events
Event{
long id;
Object data;
}
Some of the events within one stream might have the same id, as events might span across Updated, New etc. So we can see the following two streams:
<1, 2, 3, 1, 5, 2>
<3, 3, 4, 5, 4>
I would now like to combine these into one stream s.t. each order id is definitely going to be unique.
The easy way would be to use a String instead of long and append source number, generating sth like:
<"1 - 1", "1 - 2", "1 - 3", "2-3", "2-3" ... >
Is there a more memory-compact way / better approach?
A: Your String solution is fine and in fact quite common. If you're interested in making it more compact, you may want to use a tuple of integers.
Another common method used in distributed systems is to use range allocation: have a central (singleton) server which allocates ranges in which each client can name its IDs. Such server could allocate, for example, the range 0-99 to client1, 100-199 to client2 etc. When a client exhausts the range it was allocated, it contacts the server again to allocate a new range.
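For illustration, here is a minimal sketch of the client side of such a range allocator (my own sketch; RangeServer is just a stand-in interface for the central server, not a real library):
interface RangeServer {
    long allocateBlock(int blockSize); // returns the first id of a freshly reserved block
}

class RangeAllocator {
    private final RangeServer server;
    private final int blockSize;
    private long next = 0;
    private long end = 0; // forces an allocation on first use

    RangeAllocator(RangeServer server, int blockSize) {
        this.server = server;
        this.blockSize = blockSize;
    }

    synchronized long nextId() {
        if (next >= end) {
            next = server.allocateBlock(blockSize); // e.g. 0, 100, 200, ...
            end = next + blockSize;
        }
        return next++;
    }
}
Each source then keeps plain long ids, and uniqueness across sources comes from the non-overlapping blocks.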
A: Depending on the ranges of your stream/event numbers, you could combine the two numbers into a single int or long, placing the stream number in the top so many bits and the event number in the bottom so many bits. For example:
public static int getCombinedNo(int streamNo, int eventNo) {
if (streamNo >= (1 << 16))
throw new IllegalArgumentException("Stream no too big");
if (eventNo >= (1 << 16))
throw new IllegalArgumentException("Event no too big");
return (streamNo << 16) | eventNo;
}
This will only use 4 bytes per int as opposed to in the order of (say) 50-ish bytes for a typical String of the type you mention. (In this case, it also assumes that neither stream nor event number will exceed 65535.)
But: your string solution is also nice and clear. Is memory really that tight that you can't spare an extra 50 bytes per event?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13515912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Continuous deployments for on prem databases with Azure DevOps Everyone, I'm looking for a way to deploy updates to our on-prem databases using Azure DevOps and I'm running into a roadblock on the release definition. I have my DACPAC ready to go, but I'm not sure how to get that over to my on-prem server.
I see the WinRM-SQL Server DB Deployment as a task, but I'm not sure how to set that up. I have seen a couple of videos that use the SQL Server Database Deploy as an option, but it looks like that task has been deprecated, so it looks like I will need to use the WinRM-SQL task. So, could anyone point me in the right direction on how to set this task up to use my local SQL server, or possibly a tutorial that helps get me started?
A: You will also have to install a release agent on the target server where you will be deploying the database, assign it to a Deployment Group, create your release pipeline template and then run a release. I wrote a blog post about how to deploy a database to an on-prem SQL Server by leveraging Azure DevOps: https://jpvelasco.com/deploying-a-sql-server-database-onto-an-on-prem-server-using-azure-devops/
Hope this helps.
A: If you already created a Deployment Group, within your Pipeline:
*
*Click to Add a new stage: [1]: https://i.stack.imgur.com/vc5TI.png
*On the right side, (SELECT A TEMPLATE screen) type SQL in the search box
*Select: IIS website and SQL database deployment, this will add a Stage with Two tasks: IIS deployment and SQL DB Deploy.
*Delete the IIS Deployment Task
*Configure the SQL DB Deploy task - It does not say it is deprecated.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53728683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: how to insert data with laravel and ajax? I have this error:
http://localhost:8000/monthlyadd 500 (Internal Server Error)
error code:
message: "SQLSTATE[23000]: Integrity constraint violation: 1048 Column 'monthly_id' cannot be null (SQL: insert into `month_monthly` (`month_id`, `monthly_id`) values (1, ?))"
how to solve this error?
my controller code
public function store(Request $request)
{
$monthly = new Monthly();
$monthly->user_id = Auth::id();
$monthly->result = $request->input('result');
$monthly->problem = $request->input('problem');
$monthly->suggestion = $request->input('suggestion');
$monthly->months()->attach(request('month'));
$monthly->years()->attach(request('year'));
$monthly->save();
}
js code:
<script>
$(document).ready(function(){
$('#addform').on('submit', function(e){
e.preventDefault();
$.ajax({
type: "post",
url: "/monthlyadd",
data: $('#addform').serialize(),
success: function(response){
console.log(response)
$('#exampleModal').modal('hide')
alert ("data seved")
},
error: function(error){
console.log(error)
alert ("data not seve")
}
});
});
});
</script>
route:
Route::post('/monthlyadd', 'MonthlyController@store');
modal monthly
public function months()
{
return $this->belongsToMany(Month::class);
}
modal month:
public function monthlies()
{
return $this->belongsTo(Monthly::class);
}
A: Your error is: monthly_id cannot be null.
You can fix it by setting a default value for it in your migration or in phpMyAdmin,
or
in your store function set it to something:
$monthly->monthly_id= 'some-value';
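For the migration route, a rough sketch could look like this (the month_monthly table name and bigint column type are guesses based on the error message, and ->change() requires the doctrine/dbal package):
Schema::table('month_monthly', function (Blueprint $table) {
    // Allow the pivot row to be written even when monthly_id has not been set yet
    $table->unsignedBigInteger('monthly_id')->nullable()->change();
});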
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63666466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: hasNextInt() From Scanner behaving weirdly I have a very simple loop that waits for a number (int) and as long as that number is not exitOption it does not leave the loop, however I get an unexpected error, and I don't know what's causing it.
Edit
Adding another snippet so you can compile
public static void main(String[] args) throws FileNotFoundException,
SecurityException,
IOException,
ClassNotFoundException {
while (controller.selectOptionMM());
/Edit
public boolean selectOptionMM() throws SecurityException,
FileNotFoundException,
IOException {
int cmd = ui.getExitOption();
ui.mainMenu();
cmd = utils.readInteger(">>> "); // this is my problem, right here
// code in next snippet
while (cmd <1 || cmd > ui.getExitOption()) {
System.out.println("Invalid command!");
cmd = utils.readInteger(">>> ");
}
switch (cmd) {
case 1:
case 2:
case 3:
case 4: this.repository.close();
return true;
case 5: return false;
}
return false;
}
Here is what fails:
public int readInteger(String cmdPrompt) {
int cmd = 0;
Scanner input = new Scanner(System.in);
System.out.printf(cmdPrompt);
try {
if (input.hasNextInt())
cmd = input.nextInt(); // first time it works
// Second time it does not allow me to input anything
// catches InputMissmatchException, does not print message
// for said catch
// infinitely prints "Invalid command" from previous snippet
} catch (InputMismatchException ime) {
System.out.println("InputMismatchException: " + ime);
} catch (NoSuchElementException nsee) {
System.out.println("NoSuchElementException: " + nsee);
} catch (IllegalStateException ise) {
} finally {
input.close(); // not sure if I should test with if (input != null) THEN close
}
return cmd;
}
The first time I pass through, it reads the number no problem. Now if the number is not 5 (in this case exitOption), it passes again through readInteger(String cmdPrompt), except this time it jumps to catch (InputMismatchException ime) (debug), except it does not print that message and just jumps to Error, input must be number and Invalid command.
Is something stuck in my input buffer, can I flush it, why is it (input buffer) stuck (with random data)???
I'll try debugging again and see what's stuck in my input buffer, if I can figure out how to see that.
A: The problem is in the call to input.close() - this causes the underlying input stream to be closed. When the input stream being closed is System.in, bad things happen (namely, you can't read from stdin any more). You should be OK just eliminating this line.
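A minimal sketch of the method with that change applied (one Scanner shared for the whole program and never closed):
private static final Scanner INPUT = new Scanner(System.in);

public int readInteger(String cmdPrompt) {
    System.out.printf(cmdPrompt);
    while (!INPUT.hasNextInt()) {
        INPUT.next();                 // discard the non-numeric token
        System.out.printf(cmdPrompt); // and prompt again
    }
    return INPUT.nextInt();
}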
A: input.hasNextInt()
This line throws the exception if there is no integer, so instead of going to the else block it forwards to the catch block. It will never reach the else block if the exception gets caught.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14060165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Picking up correct cell value by mail script where ArrayFormula is applied I don't know anything about Google Scripts. All the scripts mentioned here under including the formulas have been researched on internet and improvised gradually. I need help of experts like you to achieve my outcomes. I am a beginner, hence please write your answer which I can understand. It would be great if you improve my script / formulas with brief explanation in simple language.
I have two sheets:
1) Form Response Sheet (linked to form)
2) DataBank Sheet (importing form response data via query: =query('Form Responses 3'!$A:$BH,"",1))
I am using ArrayFormula in the "DataBank" Sheet to create a unique text report based on values pulled from "Form Responses". This report needs to be emailed to each respondent upon form submit. The report is pulled in the CH column through an ArrayFormula, and once the mail is sent, I am marking "1" in the CM column to ensure duplicate mails are not sent every time the script runs.
=ArrayFormula(IF(ROW($B:$B)=1,"Breif Report",IF(ISBLANK($B:$B),"",if(NOT(ISBLANK($CM:$CM)),"",iferror(vlookup(BU:BU&BV:BV&BW:BW&BX:BX,BriefProfile!$E:$F,2,0),"")))))
Formula Explanation: where Cell in B column is not blank, and where cell in CM column is not blank (mail not sent), then bring the pre-written text based on the look-up value within columns (BU,BV,BW,BX).
What works correct:
1) The ArrayFormula works perfect (it pulls correct pre-written text)
2) Mail Script Works perfect (it sends to mails to those who have not been sent mail earlier)
What does not work:
1) The text report picked for a respondent is same for all respondents. When I remove ArrayFormula and ValuePaste the text report instead of calling it thru the formula mentioned above, then mail script picks up the correct unique report for each respondent and sends mail.
My mail script is mentioned below:
function sendEmails() { // Sends non-duplicate emails with data from the current spreadsheet.
var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("DataBank");
var Avals = sheet.getRange("A1:A").getValues(); // helps getting last fillled row^
var Alast = Avals.filter(String).length; // helps getting last fillled row^
var startRow = 2; // First row of data to process
var numRows = Alast-1; // Number of rows to process - last filled row^
var dataRange = sheet.getRange(startRow, 1, numRows, 91); // Fetch the range of cells A2:B3 //.getrange(row,column,numRows,numColumns) numColumns should equal to max column number where data process is required.
var data = dataRange.getValues();
for (var i = 0; i < data.length; ++i) {
var row = data[i];
var emailAddress = row[1]; // second column, actual column minus one
var message = row[85]; // 85th column, actual column minus one
var emailSent = row[90]; // 90th column, actual column minus one
if (emailSent !== EMAIL_SENT) { // Prevents sending duplicates
var subject = row[84] //'Sending emails from a Spreadsheet';
MailApp.sendEmail(emailAddress, subject, message);
sheet.getRange(startRow + i, 91).setValue(EMAIL_SENT);
Utilities.sleep(120000); // keeps the script waiting untill the sheet gets updated values from "ArrayFormula"
}
SpreadsheetApp.flush(); // Make sure the cell is updated right away in case the script is interrupted
}
}
Can you review my mail script and help me improve it so that it picks up correct unique report?
A: I just solved it. Declaring var data = dataRange.getValues(); within the for loop solved my problem. I found the solution just now. Thank you all!
By declaring the variable before the for loop, I was actually storing static data, and the same data was used in each iteration. When I declared the variable within the for loop, the data was refreshed at each iteration.
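For reference, a sketch of that fix applied to the loop from the question (variable names reused from the original script):
for (var i = 0; i < numRows; ++i) {
  var data = dataRange.getValues(); // re-read inside the loop so recalculated ArrayFormula values are picked up
  var row = data[i];
  var emailAddress = row[1];
  var message = row[85];
  var emailSent = row[90];
  if (emailSent !== EMAIL_SENT) {
    MailApp.sendEmail(emailAddress, row[84], message);
    sheet.getRange(startRow + i, 91).setValue(EMAIL_SENT);
    SpreadsheetApp.flush(); // push the change so the next getValues() sees it
  }
}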
| {
"language": "en",
"url": "https://stackoverflow.com/questions/59599824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Django APScheduler prevent more workers running scheduled task I use APScheduler in Django, on Windows IIS, to run my background script. The problem is that the task gets run multiple times. If I run the same program on my PC, it only runs once, but when I upload it to the Windows server (which hosts my Django app) it runs more times. I guess it has some connection with the number of workers? The job is scheduled, but each time the job task is done, it's like it runs a random number of instances. First 1 time, then 2, then 10, then again 2. Even though I have 'replace_existing=True, coalesce=True, misfire_grace_time=1, max_instances=1'.
planer_zad.py
from apscheduler.schedulers.background import BackgroundScheduler
from blog.views import cron_mail_overdue
def start():
scheduler.add_job(cron_mail_overdue, "cron", hour=7, minute=14, day_of_week='mon-sun', id="task002", replace_existing=True, coalesce= True, misfire_grace_time = 10, max_instances = 1)
scheduler.start()
apps.py
from django.apps import AppConfig
class BlogConfig(AppConfig):
name = 'blog'
def ready(self):
#print('Starting Scheduler...')
from .planer import planer_zad
planer_zad.start()
For test I tried 'interval':
scheduler.add_job(cron_mail_overdue, "interval", minutes=1, id="task002", replace_existing=True, coalesce= True, misfire_grace_time = 10, max_instances = 1)
Tried:
scheduler = BackgroundScheduler({
'apscheduler.executors.default': {
'class': 'apscheduler.executors.pool:ThreadPoolExecutor',
'max_workers': '1'
},
'apscheduler.executors.processpool': {
'type': 'processpool',
'max_workers': '1'
},
'apscheduler.job_defaults.coalesce': 'True',
'apscheduler.job_defaults.max_instances': '1',
'apscheduler.timezone': 'UTC',
})
scheduler.add_job(cron_mail_overdue, "cron", hour=9, minute=3, second=00, day_of_week='mon-sun', id="task002", replace_existing=True, coalesce= True, misfire_grace_time = 10, max_instances = 1)
scheduler.start()
Does not work. Sometimes it runs only once, then 12 times.
A: Just test if the object already exists in ready():
# django/myapp/apps.py
import os

from django.apps import AppConfig
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger

class BlogConfig(AppConfig):
    default_auto_field = 'django.db.models.BigAutoField'
    name = 'blog'

    def __init__(self, app_name, app_module):
        super(BlogConfig, self).__init__(app_name, app_module)
        self.background_scheduler = None

    def ready(self):
        # With the dev server's autoreloader, ready() runs in two processes; only schedule in the main one.
        if os.environ.get('RUN_MAIN', None) != 'true':
            return
        if self.background_scheduler is None:
            self.background_scheduler = BackgroundScheduler()
            self.background_scheduler.add_job(self.task1, CronTrigger.from_crontab('* * * * *'))  # Every minute (debug).
            self.background_scheduler.start()

    def task1(self):
        print("cron task is working")
You can then call it later :
# api.py
from django.apps import apps
@router.get("/background-task")
def background_task(request):
"""
Run a background task.
"""
user = request.user
blog_config= apps.get_app_config('blog')
background_scheduler = blog_config.background_scheduler
return {"status": "Success", "True": str(background_scheduler)}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70997414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Cannot delete rows in GridView(PosgtesQL) by _RowDeleting command I have asp.net web application, and i need to delete rows in my GridView1 which works with PostgreSQL. I need to delete rows, but i don't have to use ObjectDataSource. Here's my GridView1_RowDeleting method:
protected void GridView1_RowDeleting(object sender, GridViewDeleteEventArgs e)
{
int ID2 = Convert.ToInt32(GridView1.DataKeys[e.RowIndex].Values[0]);
string constr = ConfigurationManager.ConnectionStrings["postgresConnectionString"].ConnectionString;
using (NpgsqlConnection cn = new NpgsqlConnection(constr))
{
string query = "DELETE FROM mainpage WHERE id=@ID";
NpgsqlCommand cmd = new NpgsqlCommand(query, cn);
cmd.Parameters.Add("@ID2", NpgsqlDbType.Integer).Value = ID2;
cn.Open();
cmd.ExecuteNonQuery();
}
And here's my GridView in .aspx file:
<asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" CellPadding="4" ForeColor="#333333" GridLines="None" Width="1650px" AutoGenerateDeleteButton="True" OnRowDeleting="GridView1_RowDeleting" >
Each time i'm clicking delete button i get error: "Index is out of range. The index must be a positive number, and its size should not exceed the size of the collection.
Parameter name: index".
I think problem in my int Id.
What i've tryed:
int ID = (int)GridView1.DataKeys[e.RowIndex].Value;
string ID = GridView1.DataKeys[e.RowIndex].Value.ToString(); and much more...
A: I think ArgumentOutOfRangeException occurred because you're not setting DataKeyNames attribute property on the grid, hence the row index is still out of bounds when calling e.RowIndex. You should set it to ID/primary key column name like this:
DataKeyNames="[ID or PK column name]"
Here is an example usage:
<asp:GridView ID="GridView1" runat="server" DataKeyNames="id"
AutoGenerateColumns="False" CellPadding="4" ForeColor="#333333" GridLines="None"
Width="1650px" AutoGenerateDeleteButton="True" OnRowDeleting="GridView1_RowDeleting">
</asp:GridView>
Update 1
Additionally, I found parameter name mismatch on this query:
string query = "DELETE FROM mainpage WHERE id=@ID";
NpgsqlCommand cmd = new NpgsqlCommand(query, cn);
cmd.Parameters.Add("@ID2", NpgsqlDbType.Integer).Value = ID2;
The correct one should be like example below:
string query = "DELETE FROM mainpage WHERE id=@ID";
NpgsqlCommand cmd = new NpgsqlCommand(query, cn);
cmd.Parameters.Add("@ID", NpgsqlDbType.Integer).Value = ID2;
Reference: GridView.DataKeyNames Property
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52218575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Bigquery: Will switching from BQ sandbox to BQ paid will change 60 days data limit setup in sandbox Will switching from BQ sandbox to BQ paid change the 60-day data limit set up in the sandbox?
Also, will I be able to export all GA4 data (last 1 year minimum) post switching to BQ paid?
Currently we only have 60 days of data in the BQ sandbox and want to know if moving to the BQ paid service will remove this limitation.
A: I'm not sure if the change from 60 days is automatic, you may have to change it manually.
Unfortunately, you can't export old data from GA4. Once you are out of the sandbox and have changed the data limit, you will start to get more days stored.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75185088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to open an existing file in a custom syscall and write to it I'm having some trouble opening and writing an existing file with a syscall. The syscall is set up correctly. The syscall itself takes in a file name as a char* but I keep getting errors
linux-2.6.22.19/mycall/mycall.c:24: undefined reference to `open'
linux-2.6.22.19/mycall/mycall.c:25: undefined reference to `write'
linux-2.6.22.19/mycall/mycall.c:26: undefined reference to `close'
What is the proper way to open and write to a file directly from a syscall if I am unable to use open, write, and close?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/57336404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: R package `libs` directory too large after compilation to submit on CRAN I am the developer of the following package https://gitlab.inria.fr/gdurif/pCMF and I have a problem: the compiled library is too big.
When I check it with R CMD check, I get the following note regarding the size of the compiled library:
checking installed package size ... NOTE
  installed size is 31.0Mb
  sub-directories of 1Mb or more:
    libs  30.8Mb
My package is based on C++ code interfaced thanks to the package rcpp and heavily uses the algebra template library Eigen based on the package RcppEigen.
Since Eigen is templated, I think that the compiled library can become large. However, I would like to submit my package on the CRAN and I have no idea at all how I could solve this.
Thanks
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53819970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: What is the expected behavior when the StreamBridge.send method returns false? I notice a lot of usage of the StreamBridge.send method in Spring Cloud Stream apps, but none of them check its return value.
What is the expected behavior when the send method returns false? Should we do the retry if it returns false?
@OlegZhurakousky
Thanks!
A: It means that the channel.send() returned false. As for whether you should retry, I can't say, since I don't know your requirements.
Also, I would be interested in why you are using StreamBridge. Indeed it is a component of s-c-stream, but 90% of what it does could be done in a more idiomatic way without it. In fact it was designed for one specific purpose, hence my interest.
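If you do decide to retry, a rough sketch could look like this (my own illustration; the binding name "orders-out-0" and the retry count are assumptions):
boolean sent = false;
for (int attempt = 0; attempt < 3 && !sent; attempt++) {
    sent = streamBridge.send("orders-out-0", payload);
}
if (!sent) {
    // Decide what a failed send means for your case: log it, throw, or park the message.
    throw new IllegalStateException("send() returned false after 3 attempts");
}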
| {
"language": "en",
"url": "https://stackoverflow.com/questions/73311490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Sort a string to determine if it is an Anagram or Palindrome in Swift Xcode I have an extension on String, with two functions named isAnagramOf and isPalindrome. The first function takes a String as input; it will first replace whitespace with nothing, then sort and compare the strings and return a Bool indicating whether they are anagrams.
The second function, named isPalindrome, will also ignore whitespace and capitalization; it will then reverse the String and compare to determine whether it is a palindrome.
I am new to Swift and following a tutorial, but I kept getting these errors no matter how I tried to write it. I have gone through it at least 10 times now and can't get it to work.
If anyone can help with this code that would be great; I would also be open to someone showing me another way to write it. Perhaps as an array first and then sorting the string, I am not sure though.
extension String {
func isAnagramOf(_ s: String) -> Bool {
let lowerSelf = self.lowercased().replacingOccurrences(of: " ", with: "")
let lowerOther = s.lowercased().replacingOccurrences(of: " ", with: "")
return lowerSelf.sorted() == lowerOther.sorted() // first error:Value of type 'String' has no member 'sorted
}
func isPalindrome() -> Bool {
let f = self.lowercased().replacingOccurrences(of: " ", with: "")
let s = String(describing: f.reversed()) //second error:Value of type 'String' has no member 'reversed'
return f == s
}
}
A: In Swift 3 a String itself is not a collection, so you have to
sort or reverse its characters view:
extension String {
func isAnagramOf(_ s: String) -> Bool {
let lowerSelf = self.lowercased().replacingOccurrences(of: " ", with: "")
let lowerOther = s.lowercased().replacingOccurrences(of: " ", with: "")
return lowerSelf.characters.sorted() == lowerOther.characters.sorted()
}
func isPalindrome() -> Bool {
let f = self.lowercased().replacingOccurrences(of: " ", with: "")
return f == String(f.characters.reversed())
}
}
A slightly more efficient method to check for a palindrome is
extension String {
func isPalindrome() -> Bool {
let f = self.lowercased().replacingOccurrences(of: " ", with: "")
return !zip(f.characters, f.characters.reversed()).contains(where: { $0 != $1 })
}
}
because no new String is created, and the function "short-circuits",
i.e. returns as soon as a non-match is found.
In Swift 4 a String is collection of its characters, and
the code simplifies to
extension String {
func isAnagramOf(_ s: String) -> Bool {
let lowerSelf = self.lowercased().replacingOccurrences(of: " ", with: "")
let lowerOther = s.lowercased().replacingOccurrences(of: " ", with: "")
return lowerSelf.sorted() == lowerOther.sorted()
}
func isPalindrome() -> Bool {
let f = self.lowercased().replacingOccurrences(of: " ", with: "")
return !zip(f, f.reversed()).contains(where: { $0 != $1 })
}
}
Note also that
let f = self.lowercased().replacingOccurrences(of: " ", with: "")
returns a string with all space characters removed. If you want
to remove all whitespace (spaces, tabulators, newlines, ...) then use
for example
let f = self.lowercased().replacingOccurrences(of: "\\s", with: "", options: .regularExpression)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45469045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Selenium Expected Conditions - possible to use 'or'? I'm using Selenium 2 / WebDriver with the Python API, as follows:
from selenium.webdriver.support import expected_conditions as EC
# code that causes an ajax query to be run
WebDriverWait(driver, 10).until( EC.presence_of_element_located( \
(By.CSS_SELECTOR, "div.some_result")));
I want to wait for either a result to be returned (div.some_result) or a "Not found" string. Is that possible? Kind of:
WebDriverWait(driver, 10).until( \
EC.presence_of_element_located( \
(By.CSS_SELECTOR, "div.some_result")) \
or
EC.presence_of_element_located( \
(By.CSS_SELECTOR, "div.no_result")) \
);
I realise I could do this with a CSS selector (div.no_result, div.some_result), but is there a way to do it using the Selenium expected conditions method?
A: Ancient question but,
Consider how WedDriverWait works, in an example independent from selenium:
def is_even(n):
return n % 2 == 0
x = 10
WebDriverWait(x, 5).until(is_even)
This will wait up to 5 seconds for is_even(x) to return True
now, WebDriverWait(7, 5).until(is_even) will take 5 seconds and them raise a TimeoutException
Turns out, you can return any non Falsy value and capture it:
def return_if_even(n):
if n % 2 == 0:
return n
else:
return False
x = 10
y = WebDriverWait(x, 5).until(return_if_even)
print(y) # >> 10
Now consider how the methods of EC works:
print(By.CSS_SELECTOR) # first note this is only a string
>> 'css selector'
cond = EC.presence_of_element_located( ('css selector', 'div.some_result') )
# this is only a function(*ish), and you can call it right away:
cond(driver)
# if element is in page, returns the element, raise an exception otherwise
You probably would want to try something like:
def presence_of_any_element_located(parent, *selectors):
ecs = []
for selector in selectors:
ecs.append(
EC.presence_of_element_located( ('css selector', selector) )
)
# Execute the 'EC' functions agains 'parent'
ecs = [ec(parent) for ec in ecs]
return any(ecs)
this WOULD work if EC.presence_of_element_located returned False when the selector is not found in parent, but it raises an exception instead. An easy-to-understand workaround would be:
def element_in_parent(parent, selector):
matches = parent.find_elements_by_css_selector(selector)
if len(matches) == 0:
return False
else:
return matches
def any_element_in_parent(parent, *selectors):
for selector in selectors:
matches = element_in_parent(parent, selector)
# if there is a match, return right away
if matches:
return matches
# If list was exhausted
return False
# let's try
any_element_in_parent(driver, 'div.some_result', 'div.no_result')
# if found in driver, will return matches, else, return False
# For convenience, let's make a version wich takes a tuple containing the arguments (either one works):
cond = lambda args: any_element_in_parent(*args)
cond( (driver, 'div.some_result', 'div.no_result') )
# exactly same result as above
# At last, wait up until 5 seconds for it
WebDriverWait((driver, 'div.some_result', 'div.no_result'), 5).until(cond)
My goal was to explain, artfulrobot already gave a snippet for general use of actual EC methods, just note that
class A(object):
def __init__(...): pass
def __call__(...): pass
Is just a more flexible way to define functions (actually, a 'function-like', but that's irrelevant in this context)
A: I did it like this:
class AnyEc:
""" Use with WebDriverWait to combine expected_conditions
in an OR.
"""
def __init__(self, *args):
self.ecs = args
def __call__(self, driver):
for fn in self.ecs:
try:
res = fn(driver)
if res:
return True
# Or return res if you need the element found
except:
pass
Then call it like...
from selenium.webdriver.support import expected_conditions as EC
# ...
WebDriverWait(driver, 10).until( AnyEc(
EC.presence_of_element_located(
(By.CSS_SELECTOR, "div.some_result")),
EC.presence_of_element_located(
(By.CSS_SELECTOR, "div.no_result")) ))
Obviously it would be trivial to also implement an AllEc class likewise.
Nb. the try: block is odd. I was confused because some ECs return true/false while others will throw NoSuchElementException for False. The Exceptions are caught by WebDriverWait so my AnyEc thing was producing odd results because the first one to throw an exception meant AnyEc didn't proceed to the next test.
A: Not exactly through EC, but does achieve the same result - with a bonus.
Still using WebDriverWait's until() method, but passing the pure find_elements_*() methods inside a lambda expression:
WebDriverWait(driver, 10).until(lambda driver: driver.find_elements_by_id("id1") or \
driver.find_elements_by_css_selector("#id2"))[0]
The find_elements_*() methods return a list of all matched elements, or an empty one if there aren't such - which is a a boolean false. Thus if the first call doesn't find anything, the second is evaluated; that repeats until either of them finds a match, or the time runs out.
The bonus - as they return values, the index [0] at the end will actually return you the matched element - if you have any use for it, in the follow-up calls.
A: I did this and it worked fine for me:
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//div[some_result] | //div[no_result]")))
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16462177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "25"
} |
Q: HippoMocks throws NotImplementedException when not specifying expectation I am investigating using mocking for unit tests I'm adding to existing code. For this I'm using HippoMocks. This involves another class calling some methods on my mock (which are all virtual). I want to avoid overspecifying all this, but HippoMocks keeps throwing NotImplementedException whenever the other class calls functions on my mock that I have not specified.
The below code exposes my issue.
void test()
{
class SimpleClassToMock
{
public:
virtual void memberFunction1() {}
virtual void memberFunction2() {}
};
MockRepository mocks;
// true or false here makes no difference.
mocks.autoExpect = true;
SimpleClassToMock* m = mocks.Mock<SimpleClassToMock>();
// I care about this function getting called.
mocks.ExpectCall(m, SimpleClassToMock::memberFunction1);
m->memberFunction1();
// HippoMocks fails on the next line by throwing NotImplementedException.
m->memberFunction2();
}
Is there any way to tell HippoMocks not to fail here? I only want to specify the expectations for things I care about for a particular test, not every single thing that is called.
PS: To those that have mocking experience, am I thinking about this all wrong? Is overspecifying the test in cases such as this not a problem/"what you want"?
A: To avoid overspecifying, you can use OnCall to allow them to be called 0-N times (optionally with argument checks, order checks and so on).
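Applied to the example from the question, a minimal sketch would be:
mocks.OnCall(m, SimpleClassToMock::memberFunction2); // may run 0..N times, not part of the expectations
mocks.ExpectCall(m, SimpleClassToMock::memberFunction1);

m->memberFunction1();
m->memberFunction2(); // no longer throws NotImplementedException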
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41547660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Changing the default cursor in sublime text3 - windows10 I am a relatively new user of Sublime, so please consider helping me out.
I am aware that pressing the "insert" key in Windows changes Sublime into overwrite mode
(with an underbar), and pressing "insert" again reverts it back to "append" mode and the vertical line cursor, as discussed here.
My question: is there a way of changing the default cursor (append mode) to the underbar?
Any help is appreciated!
Thanks!
Please excuse any grammatical errors.
A: There is a way to modify the cursor, but it comes with a catch. The feature is only available in the Sublime Text "4" alpha builds, which you can only run if you're a registered user and willing to run alpha-level software, which means occasional random crashes and features not working right as the bugs get ironed out. You're also committing to upgrading to each new build as it's released and reporting issues back to the dev team. If you're interested, start at the Sublime Discord server here. The new builds are posted in #announcements.
You say you're a relatively new user, so I would not recommend running the alpha builds at this time, especially if it's just for this one feature.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/63207699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: React Router, change path alternative of link element Is it possible to change the route in React with react-router in a way other than <Link></Link>?
(e.g. change the route when an onClick or onKeyDown event runs some function)
A: There is another way. You can use history.push in your code:
import { useHistory } from 'react-router-dom';
const YourComponent = () => {
const history = useHistory();
return <button onClick={() => history.push('/profile')}>Profile</button>;
};
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66738120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Unit testing function XPath query results? I'm having a little bit of a dilemma. I have a very basic class with functions returning specific XPath query results.
Here is the code I'm currently using.
[TestFixture]
public class MarketAtAGlance_Test
{
private XmlDocument document;
private MarketAtAGlance marketAtAGlance;
[SetUp]
public void setUp()
{
this.document = new XmlDocument();
// load document from file located in the project
this.marketAtAGlance = new MarketAtAGlance(document);
}
[Test]
public void getHourlyImport_Test()
{
Assert.AreEqual(100.0d, marketAtAGlance.getHourlyImport());
}
[Test]
public void getHourlyExport_Test()
{
Assert.AreEqual(1526.0d, marketAtAGlance.getHourlyExport());
}
}
public class MarketAtAGlance
{
XmlDocument document;
public MarketAtAGlance(XmlDocument document)
{
this.document = document;
}
public double getHourlyImport() {
double value = Convert.ToDouble(document.SelectSingleNode("//information[@id=\"dat11\"]/new_val").InnerText);
return value;
}
public double getHourlyExport() {
double value = Convert.ToDouble(document.SelectSingleNode("//information[@id=\"dat12\"]/new_val").InnerText);
return value;
}
}
This is my first use of unit testing so I'm still unsure of many minor things. As you can see, I'm loading a static XML file located on my hard drive. Should I have the extra dependency or put the XML text in a big string? I'm loading an older XML file (with the same format) because I can test with already known values.
Also, how would I go about unit testing an XmlHttpReader (a class that takes in an XML URL and loads it as a document)?
Any comments on my question or comments about the design?
A: I would construct the XML in the test setup, but limit the XML to only what you need for the test to pass. It looks like your XML document could be very simple in this case.
<someRoot>
<someNode>
<information id='dat11'><new_val>100.0</new_val></information>
<information id='dat12'><new_val>1526.0</new_val></information>
</someNode>
</someRoot>
That XML would pass your test.
I also wouldn't test the XmlHttpReader, if that is a system class. You could mock a dependency to it. You might need to wrap it with something to help you easily decouple it as dependency from your class.
A: For your first question, whether you should have a big XML string or load it from a file: I would say either works. To be honest, though, since you are loading from a file inside the project I would keep it in the project as an embedded resource and load it via reflection. That would take the mess of file structure out of the picture if any of your colleagues run it from their PCs. The only best practice I've really encountered with unit tests is to make sure you're testing properly and make sure others can run the test easily.
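A rough sketch of the embedded-resource approach (the resource name used here is an assumption; adjust it to your default namespace and folder, and mark the XML file as an Embedded Resource):
var assembly = System.Reflection.Assembly.GetExecutingAssembly();
using (var stream = assembly.GetManifestResourceStream("MyTests.TestData.market.xml"))
{
    document = new XmlDocument();
    document.Load(stream);
    marketAtAGlance = new MarketAtAGlance(document);
}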
For your second question, about the XmlHttpReader: it would depend on your output. If you can test that you have valid XML then go for it. I would recommend negative testing as well. Point it to http://stackoverflow.com, or a URL you know will error out, and decorate the test with the appropriate expected error.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1766165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Failed to initiate service connection to simulator? When I install the AFNetworking pod to connect to a web service and then run my program after the pod installation, Xcode shows the message "unable to contact local DTServiceHub to bless simulated connection".
I followed this link for the error from Stack Overflow but it did not solve the issue.
I used: Xcode 8.2.1, OS X 10.11.6 and Objective-C.
Thanks in Advance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44500168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Maximum interval on react loop->addPeriodicTimer is 2147 seconds I'm running a timer using react\eventloop on my ratchet wamp app. I would like to have it run hourly at 3600 seconds, but for some reason if I set the interval higher than 2147 seconds I get this warning:
Warning: stream_select(): The microseconds parameter must be greater than 0 in C:\wamp\www\vendor\react\event-loop\StreamSelectLoop.php on line 255
What's so special about 2147 seconds? And what can I do to bypass this contraint?
The Event Handler
class Pusher implements WampServerInterface, MessageComponentInterface {
    private $loop;

    public function __construct(LoopInterface $loop) {
        $this->loop = $loop;
        $this->loop->addPeriodicTimer(2147, function() {
            //code
        });
    }
    public function onSubscribe(ConnectionInterface $conn, $topic) {}
    public function onUnSubscribe(ConnectionInterface $conn, $topic) {}
    public function onOpen(ConnectionInterface $conn) {}
    public function onClose(ConnectionInterface $conn) {}
    public function onCall(ConnectionInterface $conn, $id, $topic, array $params) {}
    public function onPublish(ConnectionInterface $conn, $topic, $event) {}
    public function onError(ConnectionInterface $conn, \Exception $e) {}
}
The Server
$loop = Factory::create();
$webSock = new Server($loop);
$webSock->listen(8080, '0.0.0.0');
new IoServer(
new HttpServer(
new WsServer(
new SessionProvider(
new WampServer(
new Pusher($loop)),
$sesshandler
)
)
),
$webSock
);
$loop->run();
A: It is because of the limit of PHP integers on 32-bit platforms.
2147 (seconds) * 1000000 (microseconds in one second) ~= PHP_INT_MAX on 32-bit platforms.
On 64-bit platforms the limit would be ~ 300k years.
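A rough illustration of that limit (assuming a 32-bit PHP build):
var_dump(2147 * 1000000); // int(2147000000) - still below PHP_INT_MAX
var_dump(PHP_INT_MAX);    // int(2147483647) on 32-bit platforms
var_dump(2148 * 1000000); // 2148000000 no longer fits in a 32-bit int, so PHP converts it to a float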
The strange thing is that React's React\EventLoop\StreamSelectLoop calls stream_select() only with the microseconds parameter, while it also accepts seconds. Maybe they should fix this issue. As a workaround you could override the StreamSelectLoop implementation so that it makes use of the $tv_sec parameter in stream_select().
I created a pull request, let's see if it will be accepted
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25029559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: PHP Regex cut off space with preg_replace I have some string like this
Name xxx Product 1 Pc 100
Name Pci Product2Pc.200
Name Pcx Product 3 Pcs300
I want to turn Pc into Price
And this is the result that I want
Name xxx Product 1 Price 100
Name Pci Product 2 Price 200
Name Pcx Product 3 Price 300
At first I use
$pattern = array('/(\s*)Product(\s*)/', '/(\s*)(Pc\.?|Pcs)(\s*)/');
But it came to change all of my PC to Price
Name xxx Product 1 Price 100
Name Price i Product 2 Price 200
Name Price x Product 3 Price 300
This is my code now.
$pattern = array('/(\s*)Product(\s*)/', '/[^a-z](Pc\.?|Pcs)[^a-z]/');
$replacement = array(' Product ', ' Price ');
$title = preg_replace($pattern, $replacement, $title, -1);
But it result like this
Name xxx Product 1 Price 100
Name Pci Product Price 00
Name Pcx Product 3 Price 00
Thank you.
A: Regex:
(Product)\s*(\d+)\s*Pc[.s]?\s*(\d+)
Replacement string:
$1 $2 Price $3
DEMO
$string = <<<EOT
Name xxx Product 1 Pc 100
Name Pci Product2Pc.200
Name Pcx Product 3 Pcs300
EOT;
$pattern = "~(Product)\s*(\d+)\s*Pc[.s]?\s*(\d+)~";
echo preg_replace($pattern, "$1 $2 Price $3", $string);
Output:
Name xxx Product 1 Price 100
Name Pci Product 2 Price 200
Name Pcx Product 3 Price 300
A: The reason your attempt is not working is because you are removing things that you don't want to.
You could use the following regular expression.
$title = <<<DATA
Name xxx Product 1 Pc 100
Name Pci Product2Pc.200
Name Pcx Product 3 Pcs300
DATA;
$title = preg_replace('/Product\K\s*(\d+)\D+(\d+)/', ' $1 Price $2', $title);
echo $title;
Output:
Name xxx Product 1 Price 100
Name Pci Product 2 Price 200
Name Pcx Product 3 Price 300
| {
"language": "en",
"url": "https://stackoverflow.com/questions/26074283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Reading strings from file into dynamically allocated array in C I'm trying to learn C and can't seem to figure out how to read in strings from a file into an array. I have a 2D array of chars as an array of strings and try to read those in by using malloc but I keep getting a SegFault. Any tips on how to fix my code?
#include <stdio.h>
#define MAX_WORDS 10
#define MAX_WORD_SIZE 20
unsigned int getLine(char s[], unsigned int uint, FILE *stream);
int main( void ){
FILE *infile1;
unsigned int i = 0;
unsigned int j = 0;
unsigned int index;
char c;
char wordList[ MAX_WORDS+1 ][ MAX_WORD_SIZE + 1];
infile1 = fopen("myFile.txt", "r");
if (!(infile1 == NULL))
printf("fopen1 was successful!\n");
while( (c = getc(infile1)) != EOF){
while ((c = getc(infile1)) != ' '){
wordList[i] = (char *)malloc(sizeof(char) );
wordList[i][j] = getc(infile1);
j++;
}
j = 0;
i++;
}
printf("\nThe words:\n");
for (index = 0; index < i; ++index){
printf("%s\n", wordList[index]);
}
A: How are you compiling this? The compiler should give you an error on the assignment:
wordList[i] = (char *)malloc(sizeof(char) );
The array wordList is not of type char *.
Also you are missing an include for malloc (stdlib.h probably) and you shouldn't be casting the return from malloc.
A: One obvious problem - you are allocating one char for wordList[i], then using it as if you had a character for each wordList[i][j].
You don't need to allocate any memory here, as you've defined wordlist as a 2 dimensional array rather than as an array of pointers or similar.
Next obvious problem - you are reading in characters and never providing an end of string, so if you ever got to the printf() at the end you are going to keep going until there happens to be a 0 somewhere in or after wordList[index] - or run off the end of memory, with a Segfault.
Next problem - do you intend to only detect EOF immediately after reading a space - AND throw away alternate characters?
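For reference, a minimal sketch (not the original code) of reading whitespace-separated words into the fixed-size array with proper termination and bounds checks:
#include <stdio.h>

#define MAX_WORDS 10
#define MAX_WORD_SIZE 20

int main(void) {
    char wordList[MAX_WORDS + 1][MAX_WORD_SIZE + 1];
    unsigned int count = 0;

    FILE *infile = fopen("myFile.txt", "r");
    if (infile == NULL)
        return 1;

    /* Each row holds MAX_WORD_SIZE + 1 = 21 bytes, so "%20s" reads at most
       20 characters and fscanf appends the terminating '\0'. */
    while (count < MAX_WORDS && fscanf(infile, "%20s", wordList[count]) == 1)
        ++count;

    fclose(infile);

    for (unsigned int i = 0; i < count; ++i)
        printf("%s\n", wordList[i]);

    return 0;
}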
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16311942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: ListView - onContextItemSelected manipulate the item instead of toString As you can see in the code below, when I long-click on an item in the ListView, I get a popup menu with options to manipulate the item (delete, update, etc.).
The problem is that I use my function on item.toString instead of the item itself.
How can I get the item itself and pass it as an argument to my functions?
onCreateContextMenu:
public void onCreateContextMenu(final ContextMenu menu, final View v, final ContextMenu.ContextMenuInfo menuInfo) {
super.onCreateContextMenu(menu, v, menuInfo);
if (v.getId() == R.id.db_list_view) {
MenuInflater inflater = getMenuInflater();
inflater.inflate(R.menu.menu_list, menu);
}
}
onContextItemSelected:
public boolean onContextItemSelected(android.view.MenuItem item) {
AdapterView.AdapterContextMenuInfo info = (AdapterView.AdapterContextMenuInfo) item.getMenuInfo();
Object obj = lv.getItemAtPosition(info.position);
String nameToString = obj.toString();
if (item.getTitle().equals("Delete")) {
deletePlayerFromLongClick(nameToString);
} else if (item.getTitle().equals("Update")) {
updatePlayerFromLongClick(nameToString);
} else if (item.getTitle().equals("Change Host/Guest")) {
changeMembership(nameToString);
}
return true;
}
A: Do it like this:
yourAdapter.getItem(info.position);
or
((YourAdapter) lv.getAdapter()).getItem(info.position);
or even simpler,
listOfItem.get(info.position);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46040887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Checking for a Discourse topic title I have a ruby function that creates a discourse topic if the title is not found.
def get_topic(user,title,created_at,bumped_at,last_posted_at,excerpt)
title = title.gsub(/ /,' ')
puts "get_topic.title:'#{title}' user.id:#{user.id} created_at:#{created_at} last_posted_at:#{last_posted_at} excerpt:#{excerpt}"
result = Topic.find_by_title(title)
if result == nil then
result = Topic.create
result.title = title
result.fancy_title = title
result.user_id = user.id
result.last_post_user_id = user.id
result.updated_at = last_posted_at
result.created_at = created_at
result.bumped_at = bumped_at
result.last_posted_at = last_posted_at
result.excerpt = excerpt
result.save!
else
puts result
puts created_at,bumped_at,last_posted_at
if result.user_id != user.id then
result.user_id = user.id
result.last_post_user_id = user.id
end
if result.updated_at != last_posted_at then
result.updated_at = last_posted_at
end
if result.created_at != created_at then
result.created_at = created_at
end
if result.bumped_at != bumped_at then
result.bumped_at = bumped_at
end
if result.last_posted_at != last_posted_at then
result.last_posted_at = last_posted_at
end
if result.excerpt != excerpt then
result.excerpt = excerpt
end
if result.changed.length > 0 then
result.save!
puts result.slug
post = result.first_post
post.created_at = created_at
post.updated_at = result.updated_at
post.baked_at = result.bumped_at
post.last_version_at = result.last_posted_at
post.user_id = result.user_id
post.last_editor_id = result.user_id
post.raw = result.excerpt
post.save!
puts post
#puts user.id
end
end
puts "get_topic.result.slug:#{result.slug}"
return result
end
Before the code creates a topic it searches for the title.
title = title.gsub(/ /,' ')
result = Topic.find_by_title(title)
If a nil result occurs it then creates a topic.
It works for most topics, however, for one title it’s throwing ActiveRecord::RecordInvalid: Validation failed: Title has already been used when the topic is being saved.
The validation settings for topic.title are
validates :title, if: Proc.new { |t| t.new_record? || t.title_changed? },
presence: true,
topic_title_length: true,
censored_words: true,
quality_title: { unless: :private_message? },
max_emojis: true,
unique_among: { unless: Proc.new { |t| (SiteSetting.allow_duplicate_topic_titles? || t.private_message?) },
message: :has_already_been_used,
allow_blank: true,
case_sensitive: false,
collection: Proc.new { Topic.listable_topics } }
I've checked in the console using Topic.find_by_title('title'). No topics exist with the title. I checked using Topic.find_by_sql("select * from topics where title like 'start of title%'"). No topics exist with the first word of the title.
What other checks can be done … what could be causing the error saying the topic title has already been used when there are no topics found with that title?
Further information
I tried turning on allowing duplicate titles. The code ran through without an error yet there were no topics with the title that was causing the error after the import. Feels like the error message is wrong.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/54014464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Way to share task and results between Azure website and workers We need to change our system to a two-tiered structure on azure with an Azure website handling requests and adding tasks to a queue which will then be processed in priority order by a set of Azure worker roles. The website will then return the results to the end user. The data and results sets for each task will be largish (several megabytes). What's the best way to broker this exchange of data.
We could do it via an Azure storage blob but they are quite slow. Is there a better way? Up until now we have been doing everything in scaled azure website which allows all instances access to the same disk.
A: If this is a long-running process I doubt that using blob storage would add that much overhead, although you don't specify what the tasks are.
On Zudio long-running tasks update Table Storage tables with progress and completion status, and we use polling from the browser to check when a task has finished. In the case of a large result returning to the user, we provide a direct link with a shared access signature to the blob with the completion message, so they can download it directly from storage. We're looking at replacing the polling with SignalR running over Service Bus, and having the worker roles send updates directly to the client, but we haven't started that development work yet so I can't tell you how that will actually work.
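A rough sketch of handing out such a link (using the classic WindowsAzure.Storage SDK of that era; the container variable and blob name are illustrative):
CloudBlockBlob blob = container.GetBlockBlobReference("results/job-123.zip");
string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
});
string downloadUrl = blob.Uri + sas;
The web role only ever returns the URL, so the multi-megabyte payload never passes through it.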
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21495511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |