Q: Creating a custom command as an alternative to setValue() where I can control the type speed Hope someone can help with this. I have come across an issue with the application I'm testing. The developers are using the Vue.js library and there are a couple of fields which reformat the entered text. So for example if you enter a phone number, the field will automatically enter the spaces and hyphens where they are needed. This is also the same with the date of birth field, where it automatically enters the slashes if the user does not.
So the issue I have is that both 'setValue()' and 'sendKeys()' enter the text too fast, the cursor in the field sometimes cannot keep up, and the text sometimes appears in the incorrect order. For example, if I try to enter '123456789', sometimes it ends up as '132456798' (or any other combination). This cannot be reproduced manually and sometimes the test does pass, but it's flaky.
What I wanted to do was write a custom command that enters the string in a slower manner. For this I need control over how fast the text is entered. So I was thinking of something where I can pass in a selector and the text, and it enters one character at a time with a 200 millisecond pause between characters. Something like this:
let i = 0;
const speed = 200; // type speed in milliseconds
exports.command = function customSetValue(selector, txt) {
console.log(selector);
console.log(txt);
if (i < txt.length) {
this.execute(function () {
document.getElementsByName(selector).innerHTML += txt.charAt(i);
i++;
setTimeout(customSetValue, speed);
}, [selector, txt]);
}
return this;
};
When running document.getElementsByName(selector) in the browser console I get a match on the required element. But it is not entering any text. Also note that I added a console.log in there and I was actually expecting it to log out 14 times, but it only logged once. So it's as if my if condition is false.
I checked my if condition and it should be true, so I'm not sure why it's not re-running the function. Any help is much appreciated.
Also, if it helps, I am using the .execute() command to inject JavaScript, which is referenced here: https://nightwatchjs.org/api/execute.html
And the idea for this typewriter effect is based on this: https://www.w3schools.com/howto/tryit.asp?filename=tryhow_js_typewriter
A: We ended up taking a different, much simpler approach. Posting it here in case anyone else ever needs something similar:
exports.command = function customSetValue(selector, txt) {
txt.split('').forEach(char => {
this.setValue(selector, char);
this.pause(200); // type speed in milliseconds
});
return this;
};
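For completeness, a rough usage sketch (assuming the command file above is saved as customSetValue.js in a folder listed under custom_commands_path in nightwatch.json; the URL and selector below are placeholders):
module.exports = {
  'types slowly into the formatted phone field': function (browser) {
    browser
      .url('https://example.com/signup') // placeholder URL
      .waitForElementVisible('input[name="phone"]')
      .customSetValue('input[name="phone"]', '123456789')
      .assert.valueContains('input[name="phone"]', '123')
      .end();
  }
};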
| {
"language": "en",
"url": "https://stackoverflow.com/questions/56272103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Toolbar overlaps status bar I've got a problem with my status bar which gets overlapped by the toolbar.
I wanted the behaviour where, when the user scrolls the ListView down, the toolbar disappears behind the status bar so that only the tabs are visible, just like in the WhatsApp and YouTube apps.
To achieve this effect I added this line:
app:layout_scrollFlags="scroll|enterAlways"
to my android.support.v7.widget.Toolbar, but now, as I said before, the status bar gets overlapped by the toolbar.
<?xml version="1.0" encoding="utf-8"?>
<android.support.design.widget.CoordinatorLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/main_content"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:fitsSystemWindows="true"
tools:context=".MainActivity">
<android.support.design.widget.AppBarLayout
android:id="@+id/appbar"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:paddingTop="@dimen/appbar_padding_top"
android:theme="@style/AppTheme.AppBarOverlay">
<android.support.v7.widget.Toolbar
android:id="@+id/toolbar"
android:layout_width="match_parent"
android:layout_height="?attr/actionBarSize"
app:layout_scrollFlags="scroll|enterAlways"
android:background="?attr/colorPrimary"
app:popupTheme="@style/AppTheme.PopupOverlay">
</android.support.v7.widget.Toolbar>
<android.support.design.widget.TabLayout
android:id="@+id/tabs"
android:layout_width="match_parent"
android:layout_height="wrap_content" />
</android.support.design.widget.AppBarLayout>
<android.support.v4.view.ViewPager
android:id="@+id/container"
android:layout_width="match_parent"
android:layout_height="match_parent"
app:layout_behavior="@string/appbar_scrolling_view_behavior" />
</android.support.design.widget.CoordinatorLayout>
Thankful for any help!
UPDATE:
v21\styles.xml
<resources>
<style name="AppTheme.NoActionBar">
<item name="windowActionBar">false</item>
<item name="windowNoTitle">true</item>
<item name="android:windowDrawsSystemBarBackgrounds">true</item>
<item name="android:statusBarColor">@android:color/transparent</item>
</style>
</resources>
styles.xml
<resources>
<!-- Base application theme. -->
<style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
<!-- Customize your theme here. -->
<item name="colorPrimary">@color/colorPrimary</item>
<item name="colorPrimaryDark">@color/colorPrimaryDark</item>
<item name="colorAccent">@color/colorAccent</item>
</style>
<style name="AppTheme.NoActionBar">
<item name="windowActionBar">false</item>
<item name="windowNoTitle">true</item>
</style>
<style name="AppTheme.AppBarOverlay" parent="ThemeOverlay.AppCompat.Dark.ActionBar" />
<style name="AppTheme.PopupOverlay" parent="ThemeOverlay.AppCompat.Light" />
</resources>
A: In v21\styles.xml,
remove
<item name="android:statusBarColor">@android:color/transparent</item>
A: Try adding android:fitsSystemWindows="true" to the android.support.design.widget.AppBarLayout, or to the @style/AppTheme.PopupOverlay style.
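For illustration, applied to the layout from the question this would mean changing only the AppBarLayout opening tag (the rest stays the same):
<android.support.design.widget.AppBarLayout
    android:id="@+id/appbar"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:fitsSystemWindows="true"
    android:paddingTop="@dimen/appbar_padding_top"
    android:theme="@style/AppTheme.AppBarOverlay">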
A: This works for me for the white overlay on the device status bar (the problem described after the update in the question).
I changed:
<item name="android:windowDrawsSystemBarBackgrounds">true</item>
to
<item name="android:windowDrawsSystemBarBackgrounds">false</item>
in my styles.xml file
A: Try setting android:layout_height="?attr/actionBarSize" on the android.support.v7.widget.Toolbar.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33984944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13"
} |
Q: Remove element from array in mongodb I am new to MongoDB and I want to remove some elements from an array.
My document is as below:
{
"_id" : ObjectId("4d525ab2924f0000000022ad"),
"name" : "hello",
"time" : [
{
"stamp" : "2010-07-01T12:01:03.75+02:00",
"reason" : "new"
},
{
"stamp" : "2010-07-02T16:03:48.187+03:00",
"reason" : "update"
},
{
"stamp" : "2010-07-02T16:03:48.187+04:00",
"reason" : "update"
},
{
"stamp" : "2010-07-02T16:03:48.187+05:00",
"reason" : "update"
},
{
"stamp" : "2010-07-02T16:03:48.187+06:00",
"reason" : "update"
}
]
}
In the document, I want to remove the first element (reason: new) and the last element (the one with +06:00).
I want to do it using a mongo query; I am not using any Java/PHP driver.
A: If I'm understanding you correctly, you want to remove the first and last elements of the array if the size of the array is greater than 3. You can do this by using the findAndModify query. In mongo shell you would be using this command:
db.collection.findAndModify({
query: { $where: "this.time.length > 3" },
update: { $pop: {time: 1}, $pop: {time: -1} },
new: true
});
This would find the document in your collection which matches the $where clause.
The $where field allows you to specify any valid javascript method. Please note that it applies the update only to the first matched document.
You might want to look at the following docs also:
*
*http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-JavascriptExpressionsand%7B%7B%24where%7D%7D for more on the $where clause.
*http://www.mongodb.org/display/DOCS/Updating#Updating-%24pop for more on $pop.
*http://www.mongodb.org/display/DOCS/findAndModify+Command for more on findAndModify.
A: You could update it with { $pop: { time: 1 } } to remove the last one, and { $pop: { time : -1 } } to remove the first one. There is probably a better way to handle it though.
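For example, issuing them as two separate updates in the mongo shell (using the _id from the question; db.collection stands in for the real collection name):
db.collection.update({_id: ObjectId("4d525ab2924f0000000022ad")}, {$pop: {time: -1}}) // removes the first element
db.collection.update({_id: ObjectId("4d525ab2924f0000000022ad")}, {$pop: {time: 1}})  // removes the last element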
A: @javaamtho You cannot test for a size greater than 3, only whether it is exactly 3. For sizes greater than some number x you should use the $inc operator and have a field you add either 1 or -1 to, in order to keep track of when you remove or add items (use a separate field outside the array, as below: time_count).
{
"_id" : ObjectId("4d525ab2924f0000000022ad"),
"name" : "hello",
"time_count" : 5,
"time" : [
{
"stamp" : "2010-07-01T12:01:03.75+02:00",
"reason" : "new"
},
{
"stamp" : "2010-07-02T16:03:48.187+03:00",
"reason" : "update"
},
{
"stamp" : "2010-07-02T16:03:48.187+04:00",
"reason" : "update"
},
{
"stamp" : "2010-07-02T16:03:48.187+05:00",
"reason" : "update"
},
{
"stamp" : "2010-07-02T16:03:48.187+06:00",
"reason" : "update"
}
]
}
A: If you would like to leave these time elements, you can use the aggregate command from mongo 2.2+ to retrieve the min and max time elements, unset all time elements, and push the min and max versions back (with some modifications it could do your job):
smax=db.collection.aggregate([{$unwind: "$time"},
{$project: {tstamp:"$time.stamp",treason:"$time.reason"}},
{$group: {_id:"$_id",max:{$max: "$tstamp"}}},
{$sort: {max:1}}])
smin=db.collection.aggregate([{$unwind: "$time"},
{$project: {tstamp:"$time.stamp",treason:"$time.reason"}},
{$group: {_id:"$_id",min:{$min: "$tstamp"}}},
{$sort: {min:1}}])
db.collection.update({},{$unset: {"time": 1}},false,true)
smax.result.forEach(function(o)
{db.collection.update({_id:o._id},{$push:
{"time": {stamp: o.max ,reason: "new"}}},false,true)})
smin.result.forEach(function(o)
{db.collection.update({_id:o._id},{$push:
{"time": {stamp: o.min ,reason: "update"}}},false,true)})
A:
db.collection.findAndModify({
query: {$where: "this.time.length > 3"},
update: {$pop: {time: 1}, $pop: {time: -1}},
new: true });
convert to PHP
| {
"language": "en",
"url": "https://stackoverflow.com/questions/4945825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: Silverlight and MikesFlipControl I would like to know why, when using third-party Silverlight controls that act as container controls, the controls contained within them are not accessible in code-behind until the control is loaded. The example I am looking at is the FlipControl written by Mike Taulty.
When I use his control and place a grid in the front container and textblocks in the behind container, only the grid is available at runtime until the flip is done; the flip then shows the behind container, which then fires the load event of the textblocks. I would like to populate these textblocks before the flip is done, but when I do I get an object reference error because the textblock is null.
Any help on this would be great; here is Mike's blog post on this:
http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2009/04/27/silverlight-3-simple-flip-control-built-on-planeprojection.aspx
Thanks in advance.
A: Although I'm not actually looking at Mike's code (you could do that though), I would imagine he has a single content control to which he has assigned the Front content originally. On flip, a projection is animated until it is edge-on, at which point the Rear content is assigned, and the animation continues.
Hence, at any one time only one of the Front or Rear content can actually be navigated to with something like FindName.
However, if you give each of the root child controls that you place in Front and Rear their own x:Name, you should be able to gain access to your text blocks using the rear name.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/2307099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Samsung Galaxy S8+ : Device Metrics Gives wrong device screen pixel density This is how I am getting the screen density using Android Studio:
float density = getResources().getDisplayMetrics().density;
According to the device specification, the S8+ has a density value of 4.0 and falls under the xxxhdpi category.
Refer to this site: https://material.io/devices/
I have attached a screenshot for reference
here
But the value returned by the above code is 2.9, which seems very wrong
check here
I tried using densityDpi too, but it also returns xxhdpi as opposed to xxxhdpi:
int densityDpi = context.getResources().getDisplayMetrics().densityDpi;
The above code works well on other devices; I tested on the OnePlus 5T, Nexus 6P, Redmi Note 4, Moto G4+ and others. I'm facing issues only with the Galaxy S8+. I haven't tested on the Galaxy S8, but I guess the result will be the same.
Is this a known bug, or am I doing something wrong?
It's getting difficult to manage layouts for S8+ devices without the correct pixel density info.
A: You need to initialize the DisplayMetrics object.
You can try this:
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
float density = metrics.density;
int densityDpi = metrics.densityDpi;
A: Just like Surfman said, your phone isn't at the maximum possible resolution. If you take the resolution reported in your screenshot, divide it by the reported 2.9 and multiply it by 4, you end up with roughly the maximum resolution of 2960x1440.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/47921657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Validation for Search box I need validation for my search box that should allow spaces and %. The below-mentioned characters should not be allowed:
< > ( ) ' " / \ * ; : = { } `(backtick) % + ^ ! - \x00-\x20
I tried using JS code; please see the code:
<script type="text/javascript">
var specialKeys = new Array();
specialKeys.push(8); //Backspace
specialKeys.push(9); //Tab
specialKeys.push(46); //Delete
specialKeys.push(36); //Home
specialKeys.push(35); //End
specialKeys.push(37); //Left
specialKeys.push(39); //Right
function IsAlphaNumeric(e) {
var keyCode = e.keyCode == 0 ? e.charCode : e.keyCode;
var ret = ((keyCode >= 48 && keyCode <= 57) || (keyCode >= 65 && keyCode <= 90) || (keyCode >= 97 && keyCode <= 122) || (specialKeys.indexOf(e.keyCode) != -1 && e.charCode != e.keyCode));
document.getElementById("error").style.display = ret ? "none" : "inline";
return ret;
}
</script>
Let me know what needs to be modified or added in the above-mentioned code.
A: You can use a JavaScript regular expression to validate the input; the following code will do what you want:
$('input').bind('keypress', function (event) {
var regex = new RegExp("^[a-zA-Z0-9\b _ _%]+$");
var key = String.fromCharCode(!event.charCode ? event.which : event.charCode);
if (!regex.test(key)) {
event.preventDefault();
return false;
}
});
Here is a working js fiddle example
Validation Js Fiddle
A: Try this example:
script
$(function(){
$("input.rejectSpecial").on('input', function(){
this.value = this.value.replace(/(\>|\<|\(|\)|\"|\'|\\|\/|\*|;|\:|\=|\{|\}|`|%|\+|\^|\!|\-)/g, '');
});
})
html
<input type="text" placeholder="special letters not allowed" size="30" class="rejectSpecial"/>
<input type="text" placeholder="does nothing" size="30"/>
<input type="text" placeholder="special letters not allowed" size="30" class="rejectSpecial"/>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24401683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Why can I only make GET requests and not POST requests with this python code? I'm trying to get this Etrade stuff up and running... so far I have:
from rauth import OAuth1Service
import webbrowser
def getSession():
# Create a session
# Use actual consumer secret and key in place of 'foo' and 'bar'
service = OAuth1Service(
name = 'etrade',
consumer_key = 'cabf024eaXXXXXXXXX7a0243d8d',
consumer_secret = '3d05c41XXXXXXXXX1949d07c',
request_token_url =
'https://etws.etrade.com/oauth/request_token',
access_token_url =
'https://etws.etrade.com/oauth/access_token',
authorize_url = 'https://us.etrade.com/e/t/etws/authorize?key={}&token={}',
base_url = 'https://etws.etrade.com')
# Get request token and secret
oauth_token, oauth_token_secret = service.get_request_token(params =
{'oauth_callback': 'oob',
'format': 'json'})
auth_url = service.authorize_url.format('cabf0XXXXXXXXXa0243d8d',
oauth_token)
webbrowser.open(auth_url)
verifier = raw_input('Please input the verifier: ')
return service.get_auth_session(oauth_token, oauth_token_secret,
params = {'oauth_verifier': verifier})
session = getSession()
This authentication process works perfectly fine and allows me to make GET/DELETE requests, but when I attempt to make POST requests:
url = 'https://etwssandbox.etrade.com/order/sandbox/rest/previewoptionorder'
para = {
"PreviewOptionOrder": {
"-xmlns": "http://order.etws.etrade.com",
"OptionOrderRequest": {
"accountId": "83550325",
"quantity": "4",
"symbolInfo": {
"symbol": "AAPL",
"callOrPut": "CALL",
"strikePrice": "585",
"expirationYear": "2012",
"expirationMonth": "07",
"expirationDay": "21"
},
"orderAction": "BUY_OPEN",
"priceType": "MARKET",
"orderTerm": "GOOD_FOR_DAY"
}
}
}
resp = session.post(url,data=para)
resp.text
I get an error:
Unauthorized request: consumer key is missing.
I've tried numerous things (granted, I am new to this stuff). I tried authenticating using just requests, to no avail, and I tried passing the oauth1 object to the post function as a keyword argument. Any ideas?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46018488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Order object hold a Customer object or the customer's ID or both? If I have an Order object and each Order has a Customer, what should the Order object hold: a Customer object, the customer's ID, or both?
Does it depend on the context, or, since we are dealing with objects in OOP, should the Order ideally hold a Customer object and not just its ID like the database column?
Thanks.
A: Do not mix DB thinking with PHP code design. Regularly the property should be set to the object itself rather than to its ID, like this:
class Order {
/**
* @var Contact
*/
protected $contact;
public function __construct(Contact $contact) {
$this->contact = $contact;
}
}
But there are cases when it might make sense to just have the ID of the contact. The best example would be that the contact has to be fetched from the database, but you regularly don't need its information, except in special cases. Then it might be useful to only have the ID as a class property and fetch the contact if required (known as 'lazy loading'). Note that 'lazy loading' could be hidden behind the scenes by a framework.
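A minimal sketch of that lazy-loading variant (the ContactRepository and its find() method are invented here purely for illustration):
class Order {
    /**
     * @var int
     */
    protected $contactId;

    /**
     * @var Contact|null
     */
    protected $contact = null;

    public function __construct($contactId) {
        $this->contactId = $contactId;
    }

    public function getContact(ContactRepository $repository) {
        // Fetch the contact from the database only the first time it is needed
        if ($this->contact === null) {
            $this->contact = $repository->find($this->contactId);
        }
        return $this->contact;
    }
}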
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15304001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: PHP for loop to while loop Hey, I'm studying for exams and have this loop:
<?php
$ab = 0;
$xy = 1;
echo "<table>";
for ($i = 0; $i < 5; $i++) {
echo "<tr>";
echo "<td>" . $ab . "</td><td>" . $xy . "</td>";
$ab += $xy; $xy += $ab;
echo "</tr>";
}
echo "</table>";
The question now is: how do I rewrite this as a while loop, and what should I keep in mind?
Thanks!
A: $ab = 0;
$xy = 1;
echo "<table>";
$i = 0;
while ($i < 5) {
echo "<tr><td>$ab</td><td>$xy</td></tr>";
$ab += $xy;
$xy += $ab;
$i++;
}
echo "</table>";
For explanation :
Compared to the "for" loop, you have to initialize the "counter" before opening the loop [ $i = 0 ]
Inside the loop, you specify the condition to continue the loop [ $i < 5 ]
And somewhere into the loop, you increase your "counter" [ $i++ ]
Your "counter" can be increased or decreased, or directly set ; it's all about your code logic and what are your needs.
You can also break the loop whenever you want in case you need to, see an example :
while ($i < 5) {
echo "<tr><td>$ab</td><td>$xy</td></tr>";
$ab += $xy;
$xy += $ab;
if ($ab == 22) { // If $ab is equal to a specific value
/* Do more stuff here if you want to */
break; // stop the loop here
}
$i++;
}
This example also work with the "for" loop.
And there is also another keyword, "continue", used to tell the loop to "jump" to the next iteration:
while ($i < 5) {
$i++; // Don't forget to increase "counter" first, to avoid infinite loop
if ($ab == 22) { // If $ab is equal to a specific value
/* Do more stuff here if you want to */
continue; // ignore this iteration
}
/* The following will be ignored if $ab is equal to 22 */
echo "<tr><td>$ab</td><td>$xy</td></tr>";
$ab += $xy;
$xy += $ab;
}
A: To replace a for loop with a while loop, you can declare a variable before you initiate the while loop which will indicate the current iteration of the loop. Then you can decrement/increment this variable upon each iteration of the while loop. So you would have something like this:
$counter = 0;
while ($counter < 5) {
echo "";
echo "<td>" . $ab . "</td><td>" . $xy . "</td>";
$ab += $xy;
$xy += $ab;
echo "</tr>";
$counter++;
}
in general:
for ($i = 0; $i < x; $i++) {
do stuff
}
is equivalent to:
$counter = 0;
while ($counter < x){
do stuff
$counter++;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38944943",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-4"
} |
Q: How can I attach the jar produced by one maven module to another module with a classifier? I have a project for which I've done some refactoring (and module names clarifications).
As a consequence, the module that was neo4j-connector-impl has now become neo4j-connector-1.6. Now, for better compatibility, I want to have the jar produced by this project also be available as neo4j-connector-impl (its old name, if you follow me).
I have created a pom typed project under those GAV coordinates, and have tried attaching the jar using a mix of maven-dependency-plugin and build-helper-maven-plugin.
Unfortunately, each time I build my project, that jar is not in the Maven repository and Maven says:
[INFO] Installing C:\Users\ndx\Documents\workspaces\git\neo4j-connector\neo4j-connector-impl-parent\neo4j-connector-impl\neo4j-connector-impl-1.6.jar to C:\Users\ndx\.m2\repository\com\netoprise\neo4j-connector-impl\1.6-SNAPSHOT\neo4j-connector-impl-1.6-SNAPSHOT.jar
[DEBUG] Skipped re-installing C:\Users\ndx\Documents\workspaces\git\neo4j-connector\neo4j-connector-impl-parent\neo4j-connector-impl\neo4j-connector-impl-1.6.jar to C:\Users\ndx\.m2\repository\com\netoprise\neo4j-connector-impl\1.6-SNAPSHOT\neo4j-connector-impl-1.6-SNAPSHOT.jar, seems unchanged
So... how can I attach that jar as a classified dependency?
For info, here is my pom:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>com.netoprise</groupId>
<artifactId>neo4j-connector-impl-parent</artifactId>
<version>1.6-SNAPSHOT</version>
</parent>
<artifactId>neo4j-connector-impl</artifactId>
<packaging>pom</packaging>
<description>Compatibility module ensuring previous code can "quite" work</description>
<properties>
<dependencies.directory>${project.build.directory}/dependencies</dependencies.directory>
</properties>
<dependencies>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>${project.artifactId}-1.6</artifactId>
<version>${project.version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.6</version>
<executions>
<execution>
<goals>
<goal>copy-dependencies</goal>
</goals>
<configuration>
<includeGroupIds>${project.groupId}</includeGroupIds>
<excludeTransitive>true</excludeTransitive>
<overWriteSnapshots>true</overWriteSnapshots>
<stripVersion>true</stripVersion>
<outputDirectory>${dependencies.directory}</outputDirectory>
</configuration>
</execution>
</executions>
</plugin>
<!-- attach all projects artifact to this one in order for user projects
to have minimum integration work -->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>attach-artifact</goal>
</goals>
<configuration>
<artifacts>
<!-- for maximum compatibility, the 1.6 version is attached as default jar artifact -->
<artifact>
<file>neo4j-connector-impl-1.6.jar</file>
<type>jar</type>
</artifact>
</artifacts>
<basedir>${dependencies.directory}</basedir>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
EDIT 1 Why are there version numbers in the artifacts? Because that project (neo4j-jca-connector) provides access to the neo4j graph database for various JavaEE servers. The neo4j version SHOULD NOT be constrained by this connector. As a consequence, we have to provide users a way to define, for a version of that connector, which neo4j version is used (considering the 1.6 one as the default, for backwards compatibility).
As a consequence, we have one module for each version of neo4j, and these modules are aggregated here. For the record, Bouncycastle does so for JDK versions... in an even worse way, to my mind, as they use the JDK version directly in the artifactId, which I plan to do only internally: exposed artifacts will be neo4j-connector-impl and neo4j-connector-rar, and classifiers will allow one to select which neo4j version to use.
EDIT 2 For more info, the whole project can be seen on github : https://github.com/Riduidel/neo4j-connector
The parent pom containing the maven-shade-plugin declaration is https://github.com/Riduidel/neo4j-connector/blob/master/neo4j-connector-impl-parent/pom.xml
One version of that connector is https://github.com/Riduidel/neo4j-connector/blob/master/neo4j-connector-impl-parent/neo4j-connector-impl-1.5/pom.xml
And expected aggregator module should be https://github.com/Riduidel/neo4j-connector/blob/master/neo4j-connector-impl-parent/neo4j-connector-impl/pom.xml
A: OK, this is much clearer after reading your comment and EDIT.
So, correct me if I'm wrong:
*
*You want to be able to provide a version of your connector for each neo4j version
*You use modules to do so.
What is still not clear to me is that I cannot see the modules part of your POM. You say that you want to use classifiers (so, of the same project), but Maven modules are different projects...
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15069645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How can I resolve HTTPSConnectionPool(host='oauth2.googleapis.com', port=443): Read timed out. (read timeout=60) Currently, I have a problem downloading data from Google Cloud Storage. Here is the error: HTTPSConnectionPool(host='oauth2.googleapis.com', port=443): Read timed out. (read timeout=60)
Although I increased the timeout to 600, the error still shows 60. Here is my code:
storage_client = storage.Client.from_service_account_json(SERVICE_ACCOUNT_JSON_FILENAME)
bucket = storage_client.get_bucket(BUCKET_NAME)
blob = bucket.blob('blob_name')
blob.download_to_filename('dest_name', timeout=600)
Moreover, I also tried the timeout as a tuple (connect_timeout, read_timeout):
blob.download_to_filename('dest_name', timeout=(600, 600))
but it is still not working.
Could you please help me solve this problem?
Thank you
| {
"language": "en",
"url": "https://stackoverflow.com/questions/66433239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How to plot 3D surface with X, Y, Z when Z is a list of lists in Python? In my case, X is a range(0, 100), Y is a range(0, 10), and Z is a list of lists. Z has the same length as X, which is 100, and each list inside Z has the same dimension as Y.
Z = [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], ..., [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]].
I have the following code, but it does not work; it complains that two or more arrays have incompatible dimensions on axis 1.
fig = plt.figure(figsize=(200, 6))
ax = fig.add_subplot(1, 2, 1, projection='3d')
ax.set_xticklabels(x_ax)
ax.set_yticklabels(y_ax)
ax.set_title("my title of chart")
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False)
ax.set_zlim(0, 100)
fig.colorbar(surf, shrink = 0.5, aspect = 5)
plt.show()
I guess the error is due to the data structure of Z. How do I make a structure compatible with X and Y? Thanks
A: Here is a basic 3D surface plotting procedure. It seems that your X and Y are just 1D arrays. However, X, Y, and Z have to be 2D arrays of the same shape. numpy.meshgrid function is useful for creating 2D mesh from two 1D arrays.
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = np.array(np.linspace(-2,2,100))
y = np.array(np.linspace(-2,2,10))
X,Y = np.meshgrid(x,y)
Z = X * np.exp(-X**2 - Y**2);
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False)
fig.colorbar(surf, shrink = 0.5, aspect = 5)
plt.show()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23815004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How pointer adds extra value in printf? #include <stdio.h>
int main() {
int *const volatile p=5;
printf("%d",5/2 + p);
return 0;
}
Without the pointer the output is 7, and if I add the pointer then the output is 13. Kindly give a step-by-step explanation of this program.
A: The code is invalid. The type of 5/2 + p is int *, while %d requires an int.
Kindly give the step by step explanation of this program.
5/2 + p =                       // 5/2 is equal to 2
2 + p =                         // p has type int *, so pointer arithmetic applies
2 * sizeof(int) + (char*)p =    // p is 5
2 * sizeof(int) + 5 =           // sizeof(int) is 4 on your platform
2 * 4 + 5 =
13
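If the goal really is to print the resulting address, a corrected sketch would cast for the right format specifier (keeping the dubious initialisation of p with 5 only to mirror the question):
#include <stdio.h>

int main() {
    int *const volatile p = (int *)5;   /* explicit cast silences the int-to-pointer warning */
    printf("%p\n", (void *)(5/2 + p));  /* %p expects a void *; prints e.g. 0xd when sizeof(int) == 4 */
    return 0;
}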
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70255248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: React App in same Server and multiple Domains I need to know if it's possible to maintain a React app on an IP (X.XXX.XXX.XX) server and, based on the URL that points to this server, render a different application.
An example of this would be to keep the application on one server, while several companies point their DNS's to the server's IP, and based on the company name, render the correct application.
I searched the internet for a way to do this with React Router but I didn't find one.
A: Use an nginx reverse proxy to redirect based on the URL, pointing to your different applications.
You can maintain the same IP for all of them.
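As a rough sketch (the domain names, ports and upstream addresses below are made up for illustration), the server_name directive lets one nginx instance on a single IP route each company's domain to its own app:
server {
    listen 80;
    server_name company-a.example.com;      # first company's DNS points to this server's IP
    location / {
        proxy_pass http://127.0.0.1:3001;   # app served for company A
    }
}

server {
    listen 80;
    server_name company-b.example.com;      # second company's DNS points to the same IP
    location / {
        proxy_pass http://127.0.0.1:3002;   # app served for company B
    }
}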
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58996570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How can I reference existing groups in my VBA code? I have already grouped cells in a spreadsheet using DATA > GROUP.
Based on a menu-style system, I want to collapse/expand a series of already-created groups in a sheet, so that it only shows the selection chosen via the menu choices. I hope this makes sense.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53284224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to avoid OOM in hsqldb while doing a merge? I have two tables where the first is very large (>50M rows):
CREATE CACHED TABLE Alldistances (
word1 VARCHAR(70),
word2 VARCHAR(70),
distance INTEGER,
distcount INTEGER
);
and a second that can be also quite large (>5M rows):
CREATE CACHED TABLE tempcach (
word1 VARCHAR(70),
word2 VARCHAR(70),
distance INTEGER,
distcount INTEGER
);
Both tables have indexes:
CREATE INDEX mulalldis ON Alldistances (word1, word2, distance);
CREATE INDEX multem ON tempcach (word1, word2, distance);
In my Java program I am using prepared statements to fill/pre-organize data in the tempcach table, and then I merge that table into Alldistances with:
MERGE INTO Alldistances alld USING (
SELECT word1,
word2,
distance,
distcount FROM tempcach
) AS src (
newword1,
newword2,
newdistance,
newcount
) ON (
alld.word1 = src.newword1
AND alld.word2 = src.newword2
AND alld.distance = src.newdistance
) WHEN MATCHED THEN
UPDATE SET alld.distcount = alld.distcount+src.newcount
WHEN NOT MATCHED THEN
INSERT (
word1,
word2,
distance,
distcount
) VALUES (
newword1,
newword2,
newdistance,
newcount
);
The tempcach table is then dropped or truncated and filled with new data.
During the merge I get the OOM, which I guess is because the whole table is loaded into memory during the merge. So I will have to merge in batches, but can I do that in SQL, or should I do it in my Java program? Or is there a smart way to avoid OOM while merging?
A: It is possible to merge in chunks (batches) in SQL. You need to
*
*limit the number of rows from the temp table in each chunk
*delete those same rows
*repeat
The SELECT statement should use an ORDER BY and LIMIT
SELECT word1,
word2,
distance,
distcount FROM tempcach
ORDER BY primary key or unique columns
LIMIT 1000
) AS src (
After the merge, the delete statement will select the same rows to delete
DELETE FROM tempcach WHERE primary key or unique columns IN
(SELECT primary key or unique columns FROM tempcach
ORDER BY primary key or unique columns LIMIT 1000)
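Put together, one iteration of this chunked merge might look like the following sketch (the chunk size of 1000 is arbitrary, and since tempcach has no declared primary key here, the three indexed columns are used for ordering; this also assumes multi-column IN predicates are available, otherwise a surrogate key would be needed):
MERGE INTO Alldistances alld USING (
  SELECT word1, word2, distance, distcount FROM tempcach
  ORDER BY word1, word2, distance LIMIT 1000
) AS src (newword1, newword2, newdistance, newcount)
ON (alld.word1 = src.newword1
  AND alld.word2 = src.newword2
  AND alld.distance = src.newdistance)
WHEN MATCHED THEN
  UPDATE SET alld.distcount = alld.distcount + src.newcount
WHEN NOT MATCHED THEN
  INSERT (word1, word2, distance, distcount)
  VALUES (newword1, newword2, newdistance, newcount);

DELETE FROM tempcach WHERE (word1, word2, distance) IN
  (SELECT word1, word2, distance FROM tempcach
   ORDER BY word1, word2, distance LIMIT 1000);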
A: First, just because this kind of thing annoys me, why are you selecting all the fields of the temporary table in a subselect? Why not the simpler SQL:
MERGE INTO Alldistances alld USING tempcach AS src (
newword1,
newword2,
newdistance,
newcount
) ON (
alld.word1 = src.newword1
AND alld.word2 = src.newword2
AND alld.distance = src.newdistance
) WHEN MATCHED THEN
UPDATE SET alld.distcount = alld.distcount+src.newcount
WHEN NOT MATCHED THEN
INSERT (
word1,
word2,
distance,
distcount
) VALUES (
newword1,
newword2,
newdistance,
newcount
);
What you need to have the database avoid loading the whole table into memory is indexing on both tables.
CREATE INDEX all_data ON Alldistances (word1, word2, distance);
CREATE INDEX tempcach_data ON tempcach (word1, word2, distance);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/15327075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: how to design db schema of an e-learning platform in dynamodb? I am designing a NoSQL DB schema for an online e-learning platform; the plan is to store it in AWS DynamoDB.
This is similar to Udemy: users can access and see all the available courses, and a user can buy more than one course.
As of now, I am thinking of the schema as below:
User
Course
But thinking about the item size limit: if in future a single user buys many courses, or a single course has many modules and videos in it, the item size can grow, and DynamoDB caps items at 400 KB. I have seen blogs suggesting splitting up the item, but wanted to check here for your input as well.
So my questions are
*
*DB-schema-wise, is the way I am thinking about it fine?
*What could be a better way when the entries are many and the item size is large?
I am also exploring GraphQL so that calls will have less latency from the front end, but my worry is about the DB design and large items.
Let me know if anyone has thoughts or valuable suggestions on this :)
Thanks in advance.
A: Modeling one-to-many relationships (e.g. Users to Courses bought) within a single item is a common pattern. However, if the many side of the relationship can grow large, you will likely want a different approach. It sounds like your use case isn't a good fit for this particular pattern.
One way around this limitation is to model the relationship in an item collection. For example, you could model the user and the courses bought within the same partition. Keeping the data together makes it easier to fetch the data in a single query operation.
In this data model, I created a global secondary index named courses using the attribute GSIPK as the primary key for the secondary index. This would let you fetch all courses with a single query of the courses GSI.
Keep in mind that this is just one of many approaches you could take to model your data. Check out this talk from AWS Re:Invent about DynamoDB data modeling. It gives a fantastic walkthrough of some of the key concepts that will help you design your data model.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/67185887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
} |
Q: Difference of two DataTables in c# using LINQ I have two data tables as follows:
dtOne
-------------------------
ID | Name
--------------------------
101 | ABC
102 | XYZ
103 | MNO
104 | PQR
--------------------------
dtTwo
-------------------------
ID | Name
--------------------------
101 | FGH
102 | XYZ
104 | GPS
--------------------------
I just want the result as data which is in dtOne and not in dtTwo (dtOne-dtTwo)
dtResult
-------------------------
ID | Name
--------------------------
103 | MNO
--------------------------
How can I achieve this?
I have used the Except method of LINQ but that gives a result like this:
101 | ABC
103 | MNO
104 | PQR
101 | FGH
104 | GPS
That means Except is matching on both columns.
A: If you are trying to find rows that meet criteria based on multiple columns in the DataTable, try something like the following
string stationToFind = "ABC";
DateTime dateToFind = new DateTime(2016, 5, 26);
var result = dataTable.AsEnumerable().Where(row => row.Field<string>("station") == stationToFind
&& row.Field<DateTime>("date") == dateToFind).ToList();
If you are only expecting one row, then you can use .FirstOrDefault() instead of .ToList()
A: DataTable dt = new DataTable();
dt.Columns.Add("station", typeof(string));
dt.Columns.Add("max_temp", typeof(double));
dt.Columns.Add("min_temp", typeof(double));
dt.Rows.Add("XYZ", 14.5, 3.5);
dt.Rows.Add("XYZ", 14.5, 3.5);
dt.Rows.Add("XYX", 13.5, 3.5);
dt.Rows.Add("ABC", 14.5, 5.5);
dt.Rows.Add("ABC", 12.5, 3.5);
dt.Rows.Add("ABC", 14.5, 5.5);
var maxvalue = dt.AsEnumerable().Max(s => s.Field<double>("max_temp"));
var coll = dt.AsEnumerable().Where(s => s.Field<double>("max_temp").Equals(maxvalue)).
Select(s => new { station = s.Field<string>("station"), maxtemp = s.Field<double>("max_temp"), mintemp = s.Field<double>("min_temp") }).Distinct();
foreach (var item in coll)
{
Console.WriteLine(item.station + " - " + item.maxtemp + " - " + item.mintemp);
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/37475696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Macro to remove highlighting from hyperlinks in Word I'm writing a macro to remove all highlight colors in a Word document except wdGray25 (whose HighlightColorIndex equals 15). The problem arises when the macro runs into a hyperlink highlighted with wdGray25 whose hyperlink/field isn't highlighted when revealed by Alt+F9. In that case the Do While .Execute loop becomes infinite and never exits.
How can I rewrite the code so that the .Execute method doesn't go into an infinite loop? I appreciate your help.
color_array = Array("2", "3", "4", "5", "6", "7", "9", "10", "11", "12", "13", "14")
For Each color_number In color_array
With Selection
.HomeKey Unit:=wdStory
With Selection.Find
.Highlight = True
.Text = ""
Do While .Execute
If Selection.Range.HighlightColorIndex = color_number Then
Selection.Range.HighlightColorIndex = wdNoHighlight
Selection.Collapse wdCollapseEnd
End If
Loop
End With
End With
Next
A: Instead of searching, we can just loop through each Hyperlink in the Hyperlinks collection:
Sub RemoveHighlightFromHyperlinks()
Dim a As Hyperlink
For Each a In ActiveDocument.Hyperlinks
If a.Range.HighlightColorIndex <> 15 Then a.Range.HighlightColorIndex = wdAuto
Next a
End Sub
It loops through all hyperlinks in the document. If HighlightColorIndex <> 15, it removes the highlight (by setting it to wdAuto).
More Information:
*
*MSDN : Hyperlinks Collection (Word)
*MSDN : Hyperlink Object (Word)
*MSDN : Range.HighlightColorIndex Property (Word)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/49224825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Download a Word editable document from an aspx page I currently have an aspx page returning an HTML form (with images and CSS). I would like to get a downloadable, Word-editable version of this form without rewriting all the code. Is it possible?
An example: I have a page 'WebForm2.aspx' with simple content like this:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm2.aspx.cs"
Inherits="WebApplication15.WebForm2" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
<title></title>
</head>
<body>
<form id="form1" runat="server">
<div>
<h1>Title</h1>
<asp:CheckBox runat="server" ID="myTest" Text="Doc 1" /><br />
<asp:CheckBox runat="server" ID="myTest2" Text="Doc 2" /><br />
<asp:CheckBox runat="server" ID="myTest3" Text="Doc 3" /><br />
<asp:CheckBox runat="server" ID="myTest4" Text="Doc 4" /><br />
</div>
</form>
</body>
</html>
I would like, from another page, for example via a button, to let the user download the contents of 'WebForm2.aspx' as an editable Word document.
I added this in the Page_PreRender method:
Response.AddHeader("Content-Type", "application/msword");
It partially works. If I save the document, CSS and images aren't loaded, but if I just open the downloaded content, CSS and images are loaded.
A: MS Word can edit HTML documents, so adding a Content-Type: application/msword record to the header will make the browser open your page in Word, unfortunately without CSS and images.
Add the following code in the PreRender event of your page:
Response.AddHeader("Content-Type", "application/msword");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16165675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Many-to-one prediction using LSTM in Keras, reshaping data I have multiple pandas dataframes containing multiple users, with, for each user, a sequence of sequential timestamps of multiple features. It looks something like the following:
user day feature_1 ... feature_n
0 1 0 6 ... 5
1 1 1 7 ... 4
2 1 2 6 ... 7
3 1 3 7 ... 8
4 1 4 6 ... 4
5 1 5 4 ... 3
6 1 6 3 ... 5
7 1 7 4 ... 6
8 1 8 3 ... 7
9 1 9 4 ... 7
10 2 0 3 ... 5
11 2 1 5 ... 4
I want to predict the value of feature_1, one timestamp ahead, given seven timestamps of information, per user: many-to-one prediction if I am correct.
I have previously clustered my users and want to create one LSTM model per dataframe.
I've already filtered out users with fewer than 8 timestamps, and I know how to create a target per row using the groupby and shift functions in pandas.
The Keras LSTM implementation requires data in the format of a 3D tensor and gives the following help in the documentation: [nSamples, nTimesteps, nFeatures]
I do not understand how I should achieve this, given my dataset and goal. I have tried searching for solutions or tutorials but have been unable to find any so far. Any help would be greatly appreciated!
If it matters, I am using Python 3.6.6, Keras 2.2.4, Tensorflow 1.11.0 and numpy 1.15.2
EDIT: Thanks to the suggestion from @ely I managed to figure it out. I should create a sample for every 7 timestamps, as he explains in his comment. However, I still have not figured out the most pythonic way of achieving this.
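For reference, a rough numpy sketch of that windowing (the window length of 7 and the column names come from the question; df is assumed to be the dataframe shown above, and treating the next timestep's feature_1 as the target is an assumption):
import numpy as np

window = 7
feature_cols = [c for c in df.columns if c.startswith('feature_')]

X_windows, y_targets = [], []
for _, user_df in df.groupby('user'):
    values = user_df.sort_values('day')[feature_cols].to_numpy()
    # one sample per run of 7 consecutive timesteps; target = the following feature_1
    for start in range(len(values) - window):
        X_windows.append(values[start:start + window])
        y_targets.append(values[start + window, 0])  # column 0 is feature_1

X = np.array(X_windows)  # shape: (nSamples, 7, nFeatures)
y = np.array(y_targets)  # shape: (nSamples,)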
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53785887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Java: How to retrieve a specific value from my arraylist? I'm trying to get a value from deep inside an ArrayList.
The attributes are in the following order:
arraylist > [0, 1, 2, 3 etc.] >
(example from array[0]):
[String name = "x"], [Private Time time] > (in time):
[String beginTime="12:00", String endTime="12:30",
long difference="1800000"].
I know how to get to the 0, 1, 2 of the array with the help of .get(i), but how do I go deeper?
I have tried .get(i).get(time).get(difference), but, as expected, it did not work.
Basically what I need to do is to sniff through the array and only take the difference value, and add everything up.
A: I'm thinking that you are storing an array of class Time.
You could do something like
((Time)get(i)).difference
Assuming that difference is an accessible field in the Time class.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/22276621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: How to push notifications from sql database to Flutter App My problem is that I have only found tutorials about pushing notifications from a Firebase database, using FCM, to devices running an app made with Flutter. In my case I have a server with a MySQL database with a lot of tables. The user has the option to choose which table to get notifications from, whenever something is added to that table. I really don't know how to achieve this. I tried to learn about FCM but it doesn't seem to be suitable for my purpose. I also don't know how to push notifications directly from SQL. I'm not using PHP, by the way.
Please don't vote negatively; I'm really sorry, I'm new at this, so I would appreciate your help.
I would be very thankful for your answers.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/64598308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Visualize lowest nodes in hierarchical clustering with dendrogram I'm using linkage to generate an agglomerative hierarchical clustering for a dataset of around 5000 instances. I want to visualize the 'bottom' merges in the hierarchy, that is, the nodes close to the leaves with the smallest distance measures.
Unfortunately, the dendrogram visualization prefers to show the 'top' nodes from the last merges in the algorithm. By default it shows the top 30 nodes, collapsing the bottom of the tree. I can change the P value to show more nodes, but I would have to show all 5000+ to see the lowest levels of the clustering at which point the plot is no longer readable.
MCVE
For example, starting from the linkage documentation example
openExample('stats/CompareClusterAssignmentsToClustersExample')
run CompareClusterAssignmentsToClustersExample
dendrogram(Z, 'Orient', 'Left', 'Labels', species);
Produces a dendrogram with the top 30 nodes visible. The nodes with numerical labels are collapsing lower levels of the tree.
I can increase the number of visible nodes to include all leaves at expense of readability.
dendrogram(Z, size(Z,1), 'Orient', 'Left', 'Labels', species);
What I'd Like
What I'd really like is a zoomed in version of above, like the example below, but showing the first 30 closest clusters.
What I've Tried
I tried providing the function with the first 30 rows of Z,
dendrogram(Z(1:30), 'Orient', 'Left');
but that throws an "Index exceeds matrix dimensions." error when one of the rows references a cluster in a row > 30.
I also tried using the dendrogram Reorder property, but I am having difficulty finding a valid ordering that orders the clusters from closest to farthest.
%The Z matrix is in order from closest cluster to furthest,
% so I can use it to create an ordering
Y = reshape(Z(:, 1:2)', 1, [])
Y = Y(Y<151);
dendrogram(Z, 30, 'Orient', 'Left', 'Labels', species, 'Reorder', Y);
I get the error
In the requested ordering of the nodes, some data points belonging to
the same leaf in the plot are separated by the points belonging to
other leaves. Try to use a different ordering.
It may be the case that such an ordering is not possible if the entire tree is calculated because there would be branch crossings, but I'm hoping that there is a better ordering if I am only looking at a portion of the tree, and clusters at higher levels are not considered.
Question
How can I improve my visualization to show the lowest level clusters in the dendrogram?
A: Emmm...like ylim()?
dendrogram(Z, size(Z,1), 'Orient', 'Left', 'Labels', species);
ylim(max(ylim())-[30,0]);
yields
| {
"language": "en",
"url": "https://stackoverflow.com/questions/45336293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
} |
Q: How can I execute multiple NN training? I have two NVidia GPUs in the machine, but I am not using them.
I have three NN training runs going on my machine. When I am trying to run the fourth one, the script gives me the following error:
my_user@my_machine:~/my_project/training_my_project$ python3 my_project.py
Traceback (most recent call last):
File "my_project.py", line 211, in <module>
load_data(
File "my_project.py", line 132, in load_data
tx = tf.convert_to_tensor(data_x, dtype=tf.float32)
File "/home/my_user/.local/lib/python3.8/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/my_user/.local/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 106, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Failed to allocate scratch buffer for device 0
my_user@my_machine:~/my_project/training_my_project$
How can I resolve this issue?
The following is my RAM usage:
my_user@my_machine:~/my_project/training_my_project$ free -m
total used free shared buff/cache available
Mem: 15947 6651 3650 20 5645 8952
Swap: 2047 338 1709
my_user@my_machine:~/my_project/training_my_project$
The following is my CPU usage:
my_user@my_machine:~$ top -i
top - 12:46:12 up 79 days, 21:14, 2 users, load average: 4,05, 3,82, 3,80
Tasks: 585 total, 2 running, 583 sleeping, 0 stopped, 0 zombie
%Cpu(s): 11,7 us, 1,6 sy, 0,0 ni, 86,6 id, 0,0 wa, 0,0 hi, 0,0 si, 0,0 st
MiB Mem : 15947,7 total, 3638,3 free, 6662,7 used, 5646,7 buff/cache
MiB Swap: 2048,0 total, 1709,4 free, 338,6 used. 8941,6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2081821 my_user 20 0 48,9g 2,5g 471076 S 156,1 15,8 1832:54 python3
2082196 my_user 20 0 48,8g 2,6g 467708 S 148,5 16,8 1798:51 python3
2076942 my_user 20 0 47,8g 1,6g 466916 R 147,5 10,3 2797:51 python3
1594 gdm 20 0 3989336 65816 31120 S 0,7 0,4 38:03.14 gnome-shell
93 root rt 0 0 0 0 S 0,3 0,0 0:38.42 migration/13
1185 root -51 0 0 0 0 S 0,3 0,0 3925:59 irq/54-nvidia
2075861 root 20 0 0 0 0 I 0,3 0,0 1:30.17 kworker/22:0-events
2076418 root 20 0 0 0 0 I 0,3 0,0 1:38.65 kworker/1:0-events
2085325 root 20 0 0 0 0 I 0,3 0,0 1:17.15 kworker/3:1-events
2093002 root 20 0 0 0 0 I 0,3 0,0 1:00.05 kworker/23:0-events
2100000 root 20 0 0 0 0 I 0,3 0,0 0:45.78 kworker/2:2-events
2104688 root 20 0 0 0 0 I 0,3 0,0 0:33.08 kworker/9:0-events
2106767 root 20 0 0 0 0 I 0,3 0,0 0:25.16 kworker/20:0-events
2115469 root 20 0 0 0 0 I 0,3 0,0 0:01.98 kworker/11:2-events
2115470 root 20 0 0 0 0 I 0,3 0,0 0:01.96 kworker/12:2-events
2115477 root 20 0 0 0 0 I 0,3 0,0 0:01.95 kworker/30:1-events
2116059 my_user 20 0 23560 4508 3420 R 0,3 0,0 0:00.80 top
The following is my TF configuration:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"
# os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
# os.environ["CUDA_VISIBLE_DEVICES"] = "99" # Use both gpus for training.
import sys, random
import time
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.callbacks import ModelCheckpoint
import numpy as np
from lxml import etree, objectify
# <editor-fold desc="GPU">
# resolve GPU related issues.
try:
physical_devices = tf.config.list_physical_devices('GPU')
for gpu_instance in physical_devices:
tf.config.experimental.set_memory_growth(gpu_instance, True)
except Exception as e:
pass
# END of try
# </editor-fold>
Please, take the commented lines as commented-out lines.
Relevant source code:
def load_data(fname: str, class_index: int, feature_start_index: int, **selection):
i = 0
file = open(fname)
if "top_n_lines" in selection:
lines = [next(file) for _ in range(int(selection["top_n_lines"]))]
elif "random_n_lines" in selection:
tmp_lines = file.readlines()
lines = random.sample(tmp_lines, int(selection["random_n_lines"]))
else:
lines = file.readlines()
data_x, data_y = [], []
for l in lines:
row = l.strip().split()
x = [float(ix) for ix in row[feature_start_index:]]
y = encode(row[class_index])
data_x.append(x)
data_y.append(y)
# END for l in lines
num_rows = len(data_x)
given_fraction = selection.get("validation_part", 1.0)
if given_fraction > 0.9999:
valid_x, valid_y = data_x, data_y
else:
n = int(num_rows * given_fraction)
data_x, data_y = data_x[n:], data_y[n:]
valid_x, valid_y = data_x[:n], data_y[:n]
# END of if-else block
tx = tf.convert_to_tensor(data_x, np.float32)
ty = tf.convert_to_tensor(data_y, np.float32)
vx = tf.convert_to_tensor(valid_x, np.float32)
vy = tf.convert_to_tensor(valid_y, np.float32)
return tx, ty, vx, vy
# END of the function
A: Using multiple GPUs
If developing on a system with a single GPU, you can simulate multiple GPUs with virtual devices. This enables easy testing of multi-GPU setups without requiring additional resources.
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Create 2 virtual GPUs with 1GB memory each
try:
tf.config.set_logical_device_configuration(
gpus[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=1024),
tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
NOTE: Virtual devices cannot be modified after being initialized
Once there are multiple logical GPUs available to the runtime, you can utilize the multiple GPUs with tf.distribute.Strategy or with manual placement.
With tf.distribute.Strategy best practice for using multiple GPUs, here is a simple example:
tf.debugging.set_log_device_placement(True)
gpus = tf.config.list_logical_devices('GPU')
strategy = tf.distribute.MirroredStrategy(gpus)
with strategy.scope():
inputs = tf.keras.layers.Input(shape=(1,))
predictions = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=predictions)
model.compile(loss='mse',
optimizer=tf.keras.optimizers.SGD(learning_rate=0.2))
This program will run a copy of your model on each GPU, splitting the input data between them, also known as "data parallelism".
For more information about distribution strategies or manual placement, check out the guides on the links.
A: The RAM complaint isn't about your system ram (call it CPU RAM). It's about your GPU RAM.
The moment TF loads, it allocates all the GPU RAM for itself (some small fraction is left over due to page size stuff).
Your sample makes TF dynamically allocate GPU RAM, but it could still end up using all the GPU RAM. Use the code below to provide a hard stop on GPU RAM per process. You'll likely want to change 1024 to 8096 or something like that.
And FYI, use nvidia-smi to monitor your GPU RAM usage.
From the docs:
https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Restrict TensorFlow to only allocate 1GB of memory on the first GPU
try:
tf.config.set_logical_device_configuration(
gpus[0],
[tf.config.LogicalDeviceConfiguration(memory_limit=1024)])
logical_gpus = tf.config.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Virtual devices must be set before GPUs have been initialized
print(e)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71017766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Sort a cell array of strings alphabetically by last name In Matlab, I have a cell array like:
names = {
'John Doe',
'Jane Watkins',
'Jeremy Jason Taylor',
'Roger Adrian'
}
I would like to sort these such that the last names appear in alphabetical order. In my example, it would come out being:
names_sorted = {
'Roger Adrian',
'John Doe',
'Jeremy Jason Taylor',
'Jane Watkins'
}
I know of inelegant ways of doing this. For instance, I could tokenize at space, make a separate last_names cell array, sort that, and apply the indexing to my original array.
My question is, is there a better way?
Because someone is sure to come up with that list of assumptions you can't make with regards to people names in a database, let me assure you that all my names are either "FIRST MIDDLE LAST" or "FIRST LAST". I checked.
A: If all first names had the same length you would be able to use sortrows, but in your case that would require padding and modifying your array anyway, so you're better off converting each name into "LAST FIRST MIDDLE" before applying sort. Fortunately, there's a simple regular expression for that:
names = {'John Doe';'Roger Adrian';'John Fitzgerald Kennedy'};
names_rearranged = regexprep(names,'(.*) (\w*)$','$2 $1')
names_rearranged =
'Doe John'
'Adrian Roger'
'Kennedy John Fitzgerald'
[names_rearranged_sorted, idx_sorted] = sort(names_rearranged);
names_sorted = names(idx_sorted)
names_sorted =
'Roger Adrian'
'John Doe'
'John Fitzgerald Kennedy'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9757957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: AJAX URL requests in Drupal 7 on localhost Apache2 don't include base URL or base path I am currently stuck with a problem in my local Drupal 7 environment. As Drupal 7 site configurations can get very complex, I will try to explain my problem in as much details as possible.
The site sits in a sub folder in my local environment, I have more projects running on my localhost, so preferably I would like to keep the projects separated. In addition, for this site I have two separate folders, one for development and one for production that share the same database, so a solution by adding fake domains would not work here I think (correct me if I'm wrong).
So the main problem seems to be that AJAX requests don't include the base URL or base path and I can't login on http://localhost/mysite/devel/docroot/user because the AJAX request would go to http://localhost/system/ajax or http://localhost/ajax_register/login/ajax and therefore would not return the correct JSON response.
How can this be solved? Are configurations in Apache's httpd.conf or .htaccess enough to make this work?
Here's what I did so far, first in settings.php:
$base_url = 'http://localhost/mysite/devel/docroot';
$base_path = '/mysite/devel/docroot/';
Next, I've tried the following with rewrite rules in httpd.conf:
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTP_REFERER} .*devel.*$ [OR]
RewriteRule ^/sites/(.*)$ /mysite/devel/docroot/sites/$1
RewriteRule ^/system/(.*)$ /mysite/devel/docroot/system/$1
RewriteRule ^/ajax_register/(.*)$ /mysite/devel/docroot/ajax_register/$1
</IfModule>
Here I got the following pop up with HTML (that seems to be from index.php) in the response text instead of the expected JSON response:
An AJAX HTTP error occurred.
HTTP Result Code: 404
Debugging information follows.
Path: http://localhost/ajax_register/login/ajax
StatusText: error
ResponseText: lots of HTML...
Then without rewrite rules but using proxies instead in httpd.conf:
<IfModule mod_proxy.c>
ProxyRequests Off
ProxyPreserveHost On
<Proxy *>
Order deny,allow
Allow from all
</Proxy>
RewriteEngine on
ProxyPass /system/ http://localhost/mysite/devel/docroot/system/
ProxyPassReverse /system/ http://localhost/mysite/devel/docroot/system/
<Location /system/>
Order allow,deny
Allow from all
</Location>
ProxyPass /ajax_register/ http://localhost/mysite/devel/docroot/ajax_register/
ProxyPassReverse /ajax_register/ http://localhost/mysite/devel/docroot/ajax_register/
<Location /ajax_register/>
Order allow,deny
Allow from all
</Location>
</IfModule>
For the proxy directives, a similar 404 not found error was given for the POST AJAX request, except that now the response text is in JSON format.
An AJAX HTTP error occurred.
HTTP Result Code: 404
Debugging information follows.
Path: http://localhost/ajax_register/login/ajax
StatusText: error
ResponseText: some JSON code...
Without both the rewrite rules and the proxy directives I get the following error in the JavaScript pop up:
An AJAX HTTP error occurred.
HTTP Result Code: 404
Debugging information follows.
Path: http://localhost/ajax_register/login/ajax
StatusText: error
ResponseText:
404 Not Found
Not Found
The requested URL /ajax_register/login/ajax was not found on this server.
Finally in .htaccess I've tried to rewrite the base to the following:
RewriteBase /mysite/devel/docroot/
and here I get the same 404 error as was the case when both the rewrite rules and proxy directives are commented out in httpd.conf. I would also like to add that in the database, in the table languages I've set the domain for the English language to localhost.
I don't understand, why is the base not included in front of the AJAX URL requests? And how can I add it? When I query Drupal.settings.basePath in Firebug I do get the value that I've set in settings.php :S - Does someone have any ideas?
A: There are many solutions for your problem, but I'm gonna try to focus on the most mature one. I assume you are using windows as your environment.
You said you were running your projects on localhost, which is where your first mistake is.
Localhost is nothing but a form of weird domain name you specify. When you request a domain like "www.google.com" or "localhost" the following steps occur:
*
*The browser checks a specific file for the requested domain. This file is called hosts file
*If the domain name is found in the hosts file, the browser sends a request to the IP address specified in the hosts file
*If the domain name is not found in the hosts file, the browser queries domain name servers which return the corresponding IP address.
Now localhost is nothing but a domain name specified in your hosts file, which points to the loopback address (127.0.0.1).
So the magic trick here is to bind your custom hosts like "project1", "project2" etc. to that loopback address (127.0.0.1).
Then when you send a request to "project1" in your browser, the server running on port 80 will respond as if you typed "localhost".
The second part you need to take care of is called virtual hosts. When you send a request with a specific domain name, a special header is included in your http request called "Host".
Let's assume that you redirect all these custom domains to the same IP (127.0.0.1). In order for Apache to serve a different project, you should instruct Apache to look at the "Host" header and resolve it to the corresponding project.
Again you do that by setting virtual hosts.
A lot of PHP frameworks and content management systems have some ugly ways to insert a magic "$BASE_PATH" variable, which is a bad practice, as the same thing can be achieved with relative paths in pure HTML and a properly configured server.
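To make that concrete, here is a minimal sketch (the hostname mysite-dev and the paths are assumptions — adjust them to your own layout):
# hosts file (/etc/hosts, or C:\Windows\System32\drivers\etc\hosts on Windows)
127.0.0.1   mysite-dev

# httpd.conf (or an included vhosts file)
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName mysite-dev
    DocumentRoot "/path/to/mysite/devel/docroot"
    <Directory "/path/to/mysite/devel/docroot">
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
With the dev site served from its own document root at http://mysite-dev/, Drupal's AJAX endpoints such as /system/ajax resolve naturally, and the $base_url/$base_path overrides, rewrite rules and proxy directives are no longer needed.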
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16508879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Toggle a div in Angular How can I make a toggle in Angular.JS so a div will open and close when the same div is clicked again? For now it performs the action and opens the div, but when I click on it again, the div needs to close again.
<a ng-href ng-click="openAccordionRow(champion.clean)"> Zerg </a>
and this opens:
<div ng-show="isAccordionOpen(champion.clean)">
info
</div>
JS:
$scope.activeRows = "";
$scope.isAccordionOpen = function(row) {
if ($scope.activeRows === row) {
return true;
} else {
return false;
}
}
$scope.openAccordionRow = function(row) {
$scope.activeRows = row;
}
The problem here is that the same clicked div doesn't close when I press on it again.
A: Currently your code doesn't have anything that even attempts to collapse the row. You could change your code to this.
HTML:
<a ng-href ng-click="toggleAccordionRow(champion.clean)"> Zerg </a>
JavaScript:
$scope.activeRows = "";
$scope.isAccordionOpen = function(row) {
if ($scope.activeRows === row) {
return true;
} else {
return false;
}
}
$scope.toggleAccordionRow = function(row) {
$scope.activeRows = $scope.isAccordionOpen(row) ? "" : row;
}
A: If you want a toggle, just use this code.
$scope.activeRows = true; // in the controller: default to shown
ng-click="activeRows = !activeRows" // in the template (no $scope prefix needed)
ng-show="activeRows" // bind activeRows directly to show
A: The easy way to do such a thing is the way below
<a ng-href ng-click="hideBar =!hideBar"> Zerg </a>
<div ng-show="hideBar">
info
</div>
| {
"language": "en",
"url": "https://stackoverflow.com/questions/29873261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Spark Structured Streaming with Azure EventHub In the context of Spark Streaming with Azure EventHub, I need some help understanding the difference between EventPosition.fromStartOfStream and EventPosition.fromEndOfStream. If I need to trigger the streaming job only once a day with checkpointing enabled, what difference will the code below make?
I've gone through a couple of docs but couldn't find much information on this. Any help would be appreciated.
val ehConf = EventHubsConf(cs).setStartingPositions(positions).setStartingPosition(EventPosition.fromStartOfStream)
val ehConf = EventHubsConf(cs).setStartingPositions(positions).setStartingPosition(EventPosition.fromEndOfStream)
A: If you are checkpointing, then the position given by setStartingPosition won't have any effect; it is only used if no checkpoint is found.
Please see sample code and description here - https://github.com/Azure/azure-event-hubs-spark/blob/564267dd1287b0593f8914b1acf8ff7796b58e3b/docs/spark-streaming-eventhubs-integration.md#per-partition-configuration
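For the once-a-day scenario, the thing that actually decides where reading resumes is the query's checkpoint location, not EventPosition; a minimal sketch (the output path, checkpoint path and parquet sink are assumptions):
import org.apache.spark.sql.streaming.Trigger

val df = spark.readStream
  .format("eventhubs")
  .options(ehConf.toMap)
  .load()

df.writeStream
  .format("parquet")
  .option("path", "/data/eventhub-out")                  // assumed sink path
  .option("checkpointLocation", "/checkpoints/eventhub") // offsets resume from here on the next run
  .trigger(Trigger.Once())                               // process whatever is available, then stop
  .start()
  .awaitTermination()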
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61072555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Docker different digest when push same images with different tags I built one docker image with one tag and then tagged it with a new tag (both tags include the registry URL).
I push the first tag, then the second tag, and the push digests from the 2 tags are different:
16:10:47 + docker build -t 10.88.102.47:8443/my-project/foo:jenkins-305 .
...
16:11:26 + docker tag 10.88.102.47:8443/my-project/foo:jenkins-305 10.88.102.47:8443/my-project/foo:latest
16:11:26 + docker push 10.88.102.47:8443/my-project/foo:jenkins-305
...
16:11:34 jenkins-280: digest: sha256:22a4cd54bf43f03530c475832ca4432adfcf02c796e9c1cdafea72cf07ce7bf4 size: 3654
16:11:35 + docker push 10.88.102.47:8443/my-project/foo:latest
...
16:11:36 latest: digest: sha256:ccb4e8c41339b1a5d780cc5d28064cabf839657617a9c1e1d14eaee507405b20 size: 3632
Pushing tag jenkins-305 - digest 22a4cd54bf43f03530c475832ca4432adfcf02c796e9c1cdafea72cf07ce7bf4
Pushing tag latest - digest ccb4e8c41339b1a5d780cc5d28064cabf839657617a9c1e1d14eaee507405b20
Shouldn't the 2 digests of 2 tags from the same image be identical?
Update
I downloaded both tags on a different machine and see those tags have the same IMAGE ID (digest?):
10.88.102.47:8443/my-project/foo jenkins-305 sha256:5537979d74ac00f75b7830c41c27be5f545ec55b0ab12622f9fad2eec8583a6e 21 minutes ago 689.2 MB
10.88.102.47:8443/my-project/foo latest sha256:5537979d74ac00f75b7830c41c27be5f545ec55b0ab12622f9fad2eec8583a6e 21 minutes ago 689.2 MB
But this digest is different from the digest provided by docker push, why?
A: The registry docs report that the digest is computed over the image manifest, and the manifest is made up of the tag amongst other things.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38072624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
} |
Q: Is there any function/implementation in python similar to peaks(N) in MATLAB? I can obtain a 2D matrix/3D plot using peaks(N) where N is any number in MATLAB. Is there any way to do this in python?
MATLAB example:
Create a 5-by-5 matrix of peaks and display the surface:
figure
peaks(5);
How to do this in python?
A: You can use SciPy
from scipy.signal import find_peaks
peaks, _ = find_peaks(x, height=0) # x is the signal
print("x-values: ", peaks," y-values: ", x[peaks])
SciPy documentation for finding peaks
Or, as a quick solution if your signal is not too noisy, you can manually smooth the signal, differentiate the smoothed signal, find a threshold value and count the zeroes of the derivative :)
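A rough sketch of that manual approach (assuming x is a 1-D NumPy array; the window size and threshold are arbitrary example values):
import numpy as np

kernel = np.ones(5) / 5                            # simple moving-average smoothing
smoothed = np.convolve(x, kernel, mode='same')
slope = np.diff(smoothed)                          # differentiate the smoothed signal
# peaks: slope goes from positive to non-positive and the value clears a threshold
crossings = np.where((slope[:-1] > 0) & (slope[1:] <= 0))[0] + 1
peaks = crossings[smoothed[crossings] > 0.5]       # 0.5 is just an example threshold
print("x-values:", peaks, "y-values:", smoothed[peaks])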
| {
"language": "en",
"url": "https://stackoverflow.com/questions/61195305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Fixing a git mess during a Pull Request I have a large series of git commits made during a pull request for a GitHub project. I've been asked to squash all of the commits together; however, since then I have made an incredible mess of merges, resets and accidental merges over other commits that I've pulled into the repo while the PR has been underway.
What I'd like to be able to do is simply copy the two files that are actually involved in the pull request to another directory, reset back to the very first commit I made for the PR, re-paste my new files with the latest changes and then commit them, so that the PR shows up with a single commit. Is this possible? Thanks!
A: If you want it to look like a single commit, you'll need to squash your commits (for example with an interactive rebase, git rebase -i). Is there a reason why you can't do that?
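For completeness, a rough sketch of two ways to end up with one commit (assuming master is the PR's base branch — substitute whatever yours actually is):
# Option 1: interactive rebase, marking every commit after the first as "squash"
git rebase -i master

# Option 2: closer to what you describe -- keep the final file contents, redo a single commit
git reset --soft master        # moves the branch pointer; your current files stay staged
git commit -m "Squashed commit for the PR"
git push --force-with-lease    # the branch history was rewritten, so a normal push is rejected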
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24601939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Pentaho Kettle/ETL integration with Subversion or Git We are using Pentaho ETL (aka Kettle). Pentaho has a built in enterprise repository for versioning. However, it is very poor. First, there's no way to go through an entire Pentaho project looking for changes. Each file is its own individual change. Second, there doesn't appear to be a way to tag an entire release.
We could use a file based system for storing our Pentaho objects. Jobs will end with a .kjb suffix and Transformations will end with a .ktr suffix. If that file system happened to be a Git or Subversion workspace, we could checkout/push/sync a revision of the Pentaho project, do the changes, and then commit/push those changes back to the repository.
However, there doesn't appear to be a way to check in and out of files from Pentaho itself like you can with Eclipse, or VisualStudio, or any other IDE environment.
Is there a way to integrate Subversion with Pentaho? Is it possible to use Eclipse (which not only integrates with Subversion and Git, but also integrates with Jira and Rally) with Pentaho ETL/Kettle?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17707390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Debugging Paging in Sql Reporting Services I'm working on my first significant Sql Reporting Services project and am having problems with paging. Most of the reports are already working.
What is happening is that I'm getting different numbers and locations of page breaks between the web ReportViewer, PDF and Word outputs. The Word output is the closest, but none of the three are really correct.
I've looked for the obvious, like extra paging and making sure the report does not go outside of the left or right margins. My problem is that I'm not sure how to go about troubleshooting reports whose pages do not break in the correct location.
Does anyone have a suggestion where to start?
I'm using VS2008, SQL2008 DE on Vista Dev box.
A: This isn't really a problem - the different renderers are rendering the report appropriately for their output. The web viewer is optimised for screen-based reading and generally allows more content per page than the PDF viewer does as the PDF viewer is constrained by the paper size that it formats to. Thus you get more pages when rendering for PDF than web; however, the content of the report is exactly the same.
The best illustration of this is the Excel renderer - the Excel renderer renders the entire report onto a single worksheet in most cases (for reports with grouping and page breaks set on the group footer it will render each group on its own worksheet). You wouldn't want the Excel renderer to artificially create worksheets to try to paginate your report. It does the appropriate thing which is to include all the data in one big worksheet even though that may be logically thought of as one big "page".
The web renderer page length is determined by the InteractiveHeight attribute of the report (in the InteractiveSize property in the Properties pane for the report) but the interactive height is an approximation rather than a fixed page break setting and your page breaks may still not conform to the PDF version even though the InteractiveHeight is set to the same length as your target page length.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/1549727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Posting rendered html page to a web service? Best way to transmit data? I am creating a web service that will send an email via Amazon SES. I want to separate the service(API) from the actual application that is going to take in data to be sent. So all I want to send the service is the to address, subject line, and rendered html body of the message. The service in this case is going to be an MVC action result method in C#, and I would like to post the information in JSON. Typically when I post data to a webservice, the data is pretty small and concise.
So my question is, is there a better way to submit HTML to a web service as opposed to just sending JSON formatted like this? Would it be a good idea to base64 encode it? Or is there a better way to do this altogether?
Post body.
{
"to" : "[email protected]",
"subject":"hello",
"body":"<html><body><h1>asdasd</h1></body></html>"
}
My model would look something like.
public class EmailMessage
{
public string toAddress { get; set; }
public string subject { get; set; }
public string body { get; set; }
}
Then my action result would look something like...
[HttpPost]
public ActionResult SendEmail(EmailMessage msg)
{
//Send an email
}
Is there a better way to do this? Thanks in advance for the help.
A: If you are worried about the size of the request and response, you can add GZip support.
public class CompressAttribute : System.Web.Mvc.ActionFilterAttribute
{
public override void OnActionExecuting(ActionExecutingContext filterContext)
{
HttpRequestBase request = filterContext.HttpContext.Request;
string acceptEncoding = request.Headers["Accept-Encoding"];
if (string.IsNullOrEmpty(acceptEncoding)) return;
acceptEncoding = acceptEncoding.ToUpperInvariant();
HttpResponseBase response = filterContext.HttpContext.Response;
if (acceptEncoding.Contains("GZIP") && response.Filter != null)
{
response.AppendHeader("Content-encoding", "gzip");
response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
}
else if (acceptEncoding.Contains("DEFLATE") && response.Filter != null)
{
response.AppendHeader("Content-encoding", "deflate");
response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);
}
}
}
then in your controller add attribute to the action or the controller itself:
[Compress]
public class AccountController : Controller
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16966370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: jquery eq selector loop for tumblr I have a question that I've been stuck on for a while:
I'm using tumblr and trying to flip the img and the text of a "text post". So currently I'm doing it manually, selecting the 8th eq of .text img to insert before the 8th eq of .text p:first-child.
So in a photo/photoset post I could point to the captions and make them appear after the images, but in a text post everything is wrapped in the {body}, thus needing to select the particular post's image eq(8) to be matched and called up before that particular post's text eq(8)
$(".text img").eq(8).insertBefore($('.text p:first-child').eq(8));
$(".text img").eq(7).insertBefore($('.text p:first-child').eq(7));
$(".text img").eq(6).insertBefore($('.text p:first-child').eq(6));
$(".text img").eq(5).insertBefore($('.text p:first-child').eq(5));
$(".text img").eq(4).insertBefore($('.text p:first-child').eq(4));
$(".text img").eq(3).insertBefore($('.text p:first-child').eq(3));
$(".text img").eq(2).insertBefore($('.text p:first-child').eq(2));
$(".text img").eq(1).insertBefore($('.text p:first-child').eq(1));
$(".text img").eq(0).insertBefore($('.text p:first-child').eq(0));
The reason for using a text post to post images was to work around the 10-image restriction on tumblr photoset posts.
Is there a way that I can code it such that every time a new text post is posted it will select that and insertbefore accordingly? The insertbefore is so that in my index-page, the first-child image and the first-child text would appear.
I know perhaps selecting eq is not the way to go...
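For what it's worth, the nine manual calls above can be collapsed into one loop — a sketch that assumes the image/paragraph pairing really is strictly index-for-index:
$(".text img").each(function (i) {
    // pair image i with the i-th first-paragraph, same as the repeated eq() calls
    $(this).insertBefore($(".text p:first-child").eq(i));
});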
(I've placed the relevant portion of the html and css code here, if I'm still lacking anything please let me know!!)
My Html code:
<div class="grid">
<div class ="autopagerize_page_element">
<div id="list">
{block:Posts inlineMediaWidth="1280"}
{block:Text}
{block:IndexPage}
<div class="grid-sizer grid-item">
<div class="text">
<a href="{Permalink}">
{body}
</a>
</div>
</div>
{/block:IndexPage}
{block:PermalinkPage}
<div class="postperm">
<div class="perm-right text1">
{body}
</div>
<div class="perm-left text2">
<div class="captionPerm">{body}</div>
</div>
</div>
{/block:PermalinkPage}
{/block:Text}
{block:Photo}
{block:IndexPage}
<div class="grid-sizer grid-item photo">
<a class="photos" href="{Permalink}">
<img src="{PhotoURL-HighRes}" alt="{PhotoAlt}"/>
</a>
{block:Caption}
<div class="spacePhotos"></div>
<div class="caption">
{Caption}</div>
{/block:Caption}
</div>
{/block:IndexPage}
{block:PermalinkPage}
<div class="postperm">
<div class="perm-right">
<img alt="{PhotoAlt}" src="{PhotoURL-HighRes}">
</div>
<div class="perm-left">
{block:Caption}
<div class="captionPerm">{Caption}</div>
{/block:Caption}
</div>
</div>
{/block:PermalinkPage}
{/block:Photo}
<!------------- PANORAMA* NOT CONFIGURED ------------------->
{block:Panorama}
<div class="post panorama">
{LinkOpenTag}
<img src="{PhotoURL-Panorama}" alt="{PhotoAlt}"/>
{LinkCloseTag}
{block:Caption}
<div class="caption">{Caption}</div>
{/block:Caption}
</div>
{/block:Panorama}
{block:Photoset}
{block:IndexPage}
<div class="grid-sizer grid-item photoset">
<a class="photos" href="{Permalink}">
{block:Photos}
<img alt="" src="{PhotoURL-HighRes}">
{/block:Photos}
</a>
{block:Caption}
<div class="spacePhotos"></div>
<div class="caption">
{Caption}
</div>
{/block:Caption}
</div>
{/block:IndexPage}
{block:PermalinkPage}
<div class="postperm">
<div class="perm-right">
{block:Photos}
<img alt="" src="{PhotoURL-HighRes}">
{/block:Photos}
</div>
<div class="perm-left">
{block:Caption}
<div class="captionPerm">{Caption}</div>
{/block:Caption}
</div>
</div>
{/block:PermalinkPage}
{/block:Photoset}
<!----------------- THE REST! *NOT CONFIGURED ------------------>
{block:Quote}
<div class="post quote">
"{Quote}"
{block:Source}
<div class="source">{Source}</div>
{/block:Source}
</div>
{/block:Quote}{block:Link}
<div class="post link">
<a href="{URL}" class="link" {Target}>{Name}</a>
{block:Description}
<div class="description">{Description}</div>
{/block:Description}
</div>
{/block:Link}{block:Chat}
<div class="post chat">
{block:Title}
<h3><a href="{Permalink}">{Title}</a></h3>
{/block:Title}
<ul class="chat">
{block:Lines}
<li class="{Alt} user_{UserNumber}">
{block:Label}
<span class="label">{Label}</span>
{/block:Label}{Line}
</li>
{/block:Lines}
</ul>
</div>
{/block:Chat}
{block:Video}
<div class="post video">
{Video-500}{block:Caption}
<div class="spacePhotos"></div>
<div class="caption">{Caption}</div>
{/block:Caption}
</div>
{/block:Video}{block:Audio}
<div class="post audio">
{AudioEmbed}{block:Caption}
<div class="caption">{Caption}</div>
{/block:Caption}
</div>
{/block:Audio}
{/block:Posts}
</div>
</div>
</div>
My CSS code:
.index-page .grid-item .caption p,
.index-page .grid-item .text p {
margin:auto;
}
.index-page .grid-item .caption p {
display: none;
padding-top:15px;
line-height:125%;
color:#fff;
text-align:center;
}
.index-page .grid-item .caption p:first-child{
display:block;
}
.index-page .photos img{
display: none;
}
.index-page .photos img:first-child{
display: block;
}
.index-page .text a {
text-decoration:none;
}
.index-page .text p {
display:none;
padding-top:15px;
line-height:125%;
color:#fff;
text-align:center;
max-width:90%;
}
.index-page .text p:first-of-type{
display: block;
}
.read_more {
display: none;
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44404328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to generate different pages from a click in ng-repeat list? I have this code, and this is what it does:
It generates a list of topics from a JSON file.
These topics will be used to determine a page in the "CONTENT" area.
But how can I make a "link" or "way" to call/create a page for every item in that list?
Is there a way to verify which item has been clicked, and call the respective page for that item?
angular.module('duall')
.controller('documentationController', ['$scope', '$http', function($scope, $http){
$scope.docs = [];
$http.get('static/titles.json').success(function(doc){
$scope.docs = doc;
}).error(function(error){
console.log(error);
});
}]);
<div ng-controller="documentationController">
<div class="row">
<div class="col s3" >
<!-- Search Box -->
<div class="input-field col s12" >
<input id="search" type="search" ng-model="q" aria-label="filter docs"/>
<label for="search">
<strong>
Pesquise Algo! :)
</strong>
</label>
<i class="material-icons prefix">search</i>
</div>
<!-- End of Search Box -->
<ul class="animate-container">
<li ng-repeat="docs in docs | filter:q as results">
<a class="" href="{{docs.title}}">
{{docs.title}}
</a>
</li>
<!-- if no results -->
<li ng-if="results.length === 0">
<strong>Nada encontrado :(</strong>
</li>
</ul>
</div> <!-- end col s3 -->
<div class="col s9">
<h1>CONTENT</h1>
</div><!-- end col s9 -->
</div><!-- end ROW -->
</div><!-- end Controller -->
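One common pattern that fits the markup above (a sketch — selectTopic, selectedDoc and the page field are assumed names, not something present in the original titles.json):
<li ng-repeat="doc in docs | filter:q as results">
  <a href="" ng-click="selectTopic(doc)">{{doc.title}}</a>
</li>
...
<div class="col s9" ng-include="selectedDoc.page"></div>
and in the controller:
$scope.selectTopic = function(doc) {
  $scope.selectedDoc = doc; // doc is exactly the item that was clicked
};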
| {
"language": "en",
"url": "https://stackoverflow.com/questions/38958432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Installing R under Program Files I installed R version 3.2.0 in my "C:\Program Files" folder, and then installed RStudio. When I tried to run an R markdown file, it prompted me to install the Rmarkdown package from source. However, I found that Installing RMarkdown packages in RStudio does not work on Windows 7.
Digging a little deeper I found that the R CMD --help command worked, but every R CMD <command> and R CMD <command> --help fails with
'C:\Program' is not recognized as an internal or external command, operable program or batch file
If I had tested my installation, I would have found the problem with
Sys.setenv(LC_COLLATE = "C", LANGUAGE = "en")
library("tools")
testInstalledBasic("both")
since that fails by calling R CMD BATCH during its execution.
The only solution I could find was to uninstall R and reinstall it at "C:" instead of "C:\Program Files."
It seems like installing R into the "Program Files" folder should be possible. Is there some configuration that would make that possible? Or is there some reason this basic scenario isn't supported?
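For what it's worth, the 'C:\Program' is not recognized message suggests the unquoted install path is being split at the space; one quick check (a sketch — the exact version folder is an assumption) is to invoke R with the full path quoted from a Command Prompt:
"C:\Program Files\R\R-3.2.0\bin\x64\R.exe" CMD BATCH myscript.R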
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30701960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Netbeans + jquery = error I'm working in a PHP project and Netbeans insists on marking lines like
$.get("/adminc/utilsAjax.php", { function: "orderIsOpenOrClosed", orderID: orderID, rand: randn }, function(data)
and the closings
});
As errors
I tried using the non-minified version of jQuery and there was no change.
It's like Netbeans ignores the jQuery syntax.
Any ideas?
A: The "function" is a reserved word, and in object notation may require quotes, Netbeans is expecting () after the Function key word.
A: As Bodman Said, function is a reserved word so you need to quote that. But you may also need to quote all the hash keys for netbeans to interpret them correctly for example:
$.get("/adminc/utilsAjax.php", {
"function": "orderIsOpenOrClosed",
"orderID": orderID,
"rand": randn
},
function(data){
// fun body
});
A: You have missed a { after function(data)
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5401765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Rails4 Form with remote - handling error message I have a simple remote form like this:
= form_for :question, :url => question_path, :remote => true, :html =>{:class => 'question-form'} do |form|
and in my controller I check if in the form the EULA is accepted:
def create
if (params[:question][:eula] != 1)
puts "ERROR!"
respond_to do |format|
return format.json {render :json => {:error_message => "FOOOO", :success => false } }
end
end
@question = Question.new(question_params)
respond_to do |format|
if @question.save
format.html { redirect_to @question, notice: 'question was successfully created.' }
format.json { render :show, status: :created, location: @question }
else
format.html { render :new }
format.json { render json: @question.errors, status: :unprocessable_entity }
end
end
end
How can I access :error_message or :success in my create.js.erb file?
A: You can do an Ajax request.
Add an id to the form: :id => "question-form"
= form_for :question, :url => question_path, :remote => true,
:html =>{:class => 'question-form', :id => "question-form"} do |form|
and anywhere in your JavaScript file:
$("#question-form").on("submit", function(e){
e.preventDefault();
var form = $(this);
var request = $.ajax({
method: "POST",
beforeSend: function(xhr) {xhr.setRequestHeader('X-CSRF-Token', $('meta[name="csrf-token"]').attr('content'))},
url: "/questions",
dataType: "json",
data: form.serialize()
})
request.done(function(res){
console.log(res.success);
})
request.catch(function(jqXHR){
console.log(jqXHR.responseJSON)
})
})
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50995995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to reduce ipa size Xamarin.iOS I am trying to reduce my ipa size but can't succeed. I tried with linking SDK assemblies only, but when the app is archived it shows 189 MB for the App Store and 75 MB for the ipa. I tried different options but it never worked.
Then I looked into my bin folder and found out that Xamarin.Swift4 is taking about 80 MB of the size. When I removed that package my ipa was reduced to 34 MB and the store size was reduced to 89 MB, but because of this my app crashed instantly after opening. So it means that I cannot remove the Xamarin.Swift4 libraries.
I have attached my bin folder pic and packages.config file; let me know if there is any solution.
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="Answers" version="1.4" targetFramework="xamarinios10" />
<package id="Crashlytics" version="1.4" targetFramework="xamarinios10" />
<package id="dannycabrera.GetiOSModel" version="1.4.0" targetFramework="xamarinios10" />
<package id="Fabric" version="1.4" targetFramework="xamarinios10" />
<package id="iOSCharts" version="3.1.1.2" targetFramework="xamarinios10" />
<package id="Microsoft.CSharp" version="4.4.1" targetFramework="xamarinios10" />
<package id="Microsoft.NETCore.Platforms" version="2.0.2" targetFramework="xamarinios10" />
<package id="Microsoft.Win32.Primitives" version="4.3.0" targetFramework="xamarinios10" />
<package id="NETStandard.Library" version="2.0.3" targetFramework="xamarinios10" />
<package id="Newtonsoft.Json" version="11.0.2" targetFramework="xamarinios10" />
<package id="System.AppContext" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Collections" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Collections.Concurrent" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.ComponentModel.TypeConverter" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Console" version="4.3.1" targetFramework="xamarinios10" />
<package id="System.Diagnostics.Debug" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Diagnostics.Tools" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Diagnostics.Tracing" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Globalization" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Globalization.Calendars" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.IO" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.IO.Compression" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.IO.Compression.ZipFile" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.IO.FileSystem" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.IO.FileSystem.Primitives" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Linq" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Linq.Expressions" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Net.Http" version="4.3.3" targetFramework="xamarinios10" />
<package id="System.Net.Primitives" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Net.Sockets" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.ObjectModel" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Reflection" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Reflection.Extensions" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Reflection.Primitives" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Resources.ResourceManager" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Runtime" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Runtime.Extensions" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Runtime.Handles" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Runtime.InteropServices" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Runtime.InteropServices.RuntimeInformation" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Runtime.Numerics" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Runtime.Serialization.Formatters" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Runtime.Serialization.Primitives" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Security.Cryptography.Algorithms" version="4.3.1" targetFramework="xamarinios10" />
<package id="System.Security.Cryptography.Encoding" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Security.Cryptography.Primitives" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Security.Cryptography.X509Certificates" version="4.3.2" targetFramework="xamarinios10" />
<package id="System.Text.Encoding" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Text.Encoding.Extensions" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Text.RegularExpressions" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Threading" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Threading.Tasks" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Threading.Timer" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Xml.ReaderWriter" version="4.3.1" targetFramework="xamarinios10" />
<package id="System.Xml.XDocument" version="4.3.0" targetFramework="xamarinios10" />
<package id="System.Xml.XmlDocument" version="4.3.0" targetFramework="xamarinios10" />
<package id="UITextFieldShaker" version="2017.10.19" targetFramework="xamarinios10" />
<package id="Xam.Plugin.Connectivity" version="3.1.1" targetFramework="xamarinios10" />
<package id="Xamarin.Forms" version="3.0.0.446417" targetFramework="xamarinios10" />
<package id="ZXing.Net.Mobile" version="2.3.2" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4" version="4.0.0.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.Core" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.CoreAudio" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.CoreData" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.CoreFoundation" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.CoreGraphics" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.CoreImage" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.CoreMedia" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.Darwin" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.Dispatch" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.Foundation" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.Metal" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.ObjectiveC" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.OS" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.QuartzCore" version="4.1.0" targetFramework="xamarinios10" />
<package id="Xamarin.Swift4.UIKit" version="4.1.0" targetFramework="xamarinios10" />
</packages>
A: You should remove only the libraries you don't use in your project.
Two weeks ago my ipa file was at almost 200 MB; then I deleted all the Swift libraries that were unnecessary for the project, and now I create smaller ipa files than before (80 MB).
So you need to check your application, see if you have unnecessary libraries and delete them; this way you will reduce the ipa size.
A: I would suggest looking into the 'Linker', which you will find in your iOS build options within Visual Studio
A brief overview:
*
*Don't Link - all assemblies left untouched (largest ipa size)
*Link SDK assemblies only - reduces size of SDK (Xamarin.iOS) assemblies by removing everything that your application doesn't use
*Link all assemblies - reduces size of all assemblies by removing everything that your application doesn't use (smallest ipasize)
Note that 'link all assemblies' can cause issues as the linker can't always determine what is used and can therefore remove code from an assembly that is actually required (think web services, reflection, serialisation). In such cases you can set a manual mtouch argument to prevent a specific assembly (or assemblies) from being touched by the linker, as below:
--linkskip=NameOfAssemblyToSkipWithoutFileExtension
or
--linkskip=NameOfFirstAssembly --linkskip=NameOfSecondAssembly
A real use case I've come across where the above is necessary is when using Entity Framework with Xamarin.iOS, as the linker removes code which is then called using reflection which causes the app to crash.
The full documentation for the Linker is available here: https://learn.microsoft.com/en-us/xamarin/ios/deploy-test/linker?tabs=vsmac
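For reference, the linker mode and any extra mtouch arguments end up in the iOS project's .csproj roughly like this (a sketch — the skipped assembly is only an example):
<PropertyGroup>
  <MtouchLink>SdkOnly</MtouchLink>
  <MtouchExtraArgs>--linkskip=Newtonsoft.Json</MtouchExtraArgs>
</PropertyGroup>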
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51301253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: SwiftUI - Changing parent @State doesn't update child View Specific question:
SwiftUI doesn't like us initializing @State using parameters from the parent, but what if the parent holding that @State causes major performance issues?
Example:
How do I make tapping on the top text change the slider to full/empty?
Dragging the slider correctly communicates upwards when the slider changes from full to empty, but tapping the [Overview] full: text doesn't communicate downwards that the slider should change to full/empty.
I could store the underlying Double in the parent view, but it causes major lag and seems unnecessary.
import SwiftUI
// Top level View. It doesn't know anything about specific slider percentages,
// it only knows if the slider got moved to full/empty
struct SliderOverviewView: View {
// Try setting this to true and rerunning.. It DOES work here?!
@State var overview = OverviewModel(state: .empty)
var body: some View {
VStack {
Text("[Overview] full: \(overview.state.rawValue)")
.onTapGesture { // BROKEN: should update child..
switch overview.state {
case .full, .between: overview.state = .empty
case .empty: overview.state = .full
}
}
SliderDetailView(overview: $overview)
}
}
}
// Bottom level View. It knows about specific slider percentages and only
// communicates upwards when percentage goes to 0% or 100%.
struct SliderDetailView: View {
@State var details: DetailModel
init(overview: Binding<OverviewModel>) {
details = DetailModel(overview: overview)
}
var body: some View {
VStack {
Text("[Detail] percentFull: \(details.percentFull)")
Slider(value: $details.percentFull)
.padding(.horizontal, 48)
}
}
}
// Top level model that only knows if slider went to 0% or 100%
struct OverviewModel {
var state: OverviewState
enum OverviewState: String {
case empty
case between
case full
}
}
// Lower level model that knows full slider percentage
struct DetailModel {
@Binding var overview: OverviewModel
var percentFull: Double {
didSet {
if percentFull == 0 {
overview.state = .empty
} else if percentFull == 1 {
overview.state = .full
} else {
overview.state = .between
}
}
}
init(overview: Binding<OverviewModel>) {
_overview = overview
// set inital percent
switch overview.state.wrappedValue {
case .empty:
percentFull = 0.0
case .between:
percentFull = 0.5
case .full:
percentFull = 1.0
}
}
}
struct SliderOverviewView_Previews: PreviewProvider {
static var previews: some View {
SliderOverviewView()
}
}
Why don't I just store percentFull in the OverviewModel?
I'm looking for a pattern so my top level @State struct doesn't need to know EVERY low level detail specific to certain Views.
Running the code example is the clearest way to see my problem.
This question uses a contrived example where an Overview only knows if the slider is full or empty, but the Detail knows what percentFull the slider actually is. The Detail has very detailed control and knowledge of the slider, and only communicates upwards to the Overview when the slider is 0% or 100%
What's my specific case for why I need to do this?
For those curious, my app is running into performance issues because I have several gestures that give the user control over progress. I want my top level ViewModel to store if the gesture is complete or not, but it doesn't need to know the specifics of how far the user has swiped. I'm trying to hide this specific progress Double from my higher level ViewModel to improve app performance.
A: Here is a working, simplified and refactored answer for your issue:
struct ContentView: View {
var body: some View {
SliderOverviewView()
}
}
struct SliderOverviewView: View {
@State private var overview: OverviewModel = OverviewModel(full: false)
var body: some View {
VStack {
Text("[Overview] full: \(overview.full.description)")
.onTapGesture {
overview.full.toggle()
}
SliderDetailView(overview: $overview)
}
}
}
struct SliderDetailView: View {
@Binding var overview: OverviewModel
var body: some View {
VStack {
Text("[Detail] percentFull: \(tellValue(value: overview.full))")
Slider(value: Binding(get: { () -> Double in
return tellValue(value: overview.full)
}, set: { newValue in
if newValue == 1 { overview.full = true }
else if newValue == 0 { overview.full = false }
}))
}
}
func tellValue(value: Bool) -> Double {
if value { return 1 }
else { return 0 }
}
}
struct OverviewModel {
var full: Bool
}
Update:
struct SliderDetailView: View {
@Binding var overview: OverviewModel
@State private var sliderValue: Double = Double()
var body: some View {
VStack {
Text("[Detail] percentFull: \(sliderValue)")
Slider(value: $sliderValue, in: 0.0...1.0)
}
.onAppear(perform: { sliderValue = tellValue(value: overview.full) })
.onChange(of: overview.full, perform: { newValue in
sliderValue = tellValue(value: newValue)
})
.onChange(of: sliderValue, perform: { newValue in
if newValue == 1 { overview.full = true }
else { overview.full = false }
})
}
func tellValue(value: Bool) -> Double {
value ? 1 : 0
}
}
A: I present here a clean alternative using 2 ObservableObjects: a high level OverviewModel that
only deals with whether the slider went to 0% or 100%, and a DetailModel that deals only with the slider percentage.
Dragging the slider correctly communicates upwards when the slider changes from full to empty, and
tapping the [Overview] full: text communicates downwards that the slider should change to full/empty.
import Foundation
import SwiftUI
@main
struct TestApp: App {
var body: some Scene {
WindowGroup {
ContentView()
}
}
}
struct ContentView: View {
@StateObject var overview = OverviewModel()
var body: some View {
SliderOverviewView().environmentObject(overview)
}
}
// Top level View. It doesn't know anything about specific slider percentages,
// it only cares if the slider got moved to full/empty
struct SliderOverviewView: View {
@EnvironmentObject var overview: OverviewModel
var body: some View {
VStack {
Text("[Overview] full: \(overview.state.rawValue)")
.onTapGesture {
switch overview.state {
case .full, .between: overview.state = .empty
case .empty: overview.state = .full
}
}
SliderDetailView()
}
}
}
// Bottom level View. It knows about specific slider percentages and only
// communicates upwards when percentage goes to 0% or 100%.
struct SliderDetailView: View {
@EnvironmentObject var overview: OverviewModel
@StateObject var details = DetailModel()
var body: some View {
VStack {
Text("[Detail] percentFull: \(details.percentFull)")
Slider(value: $details.percentFull).padding(.horizontal, 48)
.onChange(of: details.percentFull) { newVal in
switch newVal {
case 0: overview.state = .empty
case 1: overview.state = .full
default: break
}
}
}
// listen for the high level OverviewModel changes
.onReceive(overview.$state) { theState in
details.percentFull = theState == .full ? 1.0 : 0.0
}
}
}
enum OverviewState: String {
case empty
case between
case full
}
// Top level model that only knows if slider went to 0% or 100%
class OverviewModel: ObservableObject {
@Published var state: OverviewState = .empty
}
// Lower level model that knows full slider percentage
class DetailModel: ObservableObject {
@Published var percentFull = 0.0
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/70920224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: NgRx8 destruct values from action Hi everyone, I would like to destructure the action type and props in an ngrx effect, but I'm stuck on how to do it.
My actions:
export const addTab = createAction(
'[SuperUserTabs] add tab',
props<{ tab: SuperUserHeaderTab, tabType: TabType }>()
);
export const searchCompanyTab = createAction(
'[SuperUserTabs] search company tab'
);
export const searchCardholderTab = createAction(
'[SuperUserTabs] search cardholder tab'
);
Effect:
@Effect({ dispatch: false })
addTab$ = this.actions$.pipe(
ofType(
TabsActions.addTab,
TabsActions.searchCompanyTab,
TabsActions.searchCardholderTab
),
withLatestFrom(this.store.pipe(select(getTabs))),
tap(([action, tabs]) => {
// destruct here
const {type, props} = action;
// some logic
})
);
Any suggestions?
A: Updated: ofType accepts action creators, so you can use them directly:
@Effect({ dispatch: false })
addTab$ = this.actions$.pipe(
ofType(
TabsActions.addTab,
TabsActions.searchCompanyTab,
TabsActions.searchCardholderTab
),
withLatestFrom(this.store.pipe(select(getTabs))),
tap(([action, tabs]) => {
// this object will contain `type` and action payload
const {type, ...payload} = action;
// some logic
console.log(payload)
})
);
==============
The old answer also works:
Because you're using an action creator, which returns a function, you can access the action type through its type property like this:
@Effect({ dispatch: false })
addTab$ = this.actions$.pipe(
ofType(
TabsActions.addTab.type,
TabsActions.searchCompanyTab.type,
TabsActions.searchCardholderTab.type
),
withLatestFrom(this.store.pipe(select(getTabs))),
tap(([action, tabs]) => {
// this object will contain `type` and action payload
const {type, ...payload} = action;
// some logic
console.log(payload)
})
);
demo: https://stackblitz.com/edit/angular-3t2fmx?file=src%2Fapp%2Fstore.ts
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58047476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: onlongclick listner in android 2.0.3 In my Java code I used onLongClickListener and opened an AlertDialog that asks whether to delete or not. But I want to show some options on my long click and act according to the choice. I don't want to use a context menu. Please suggest what changes I should make in my current code.
OnLongClickListener myListener = new OnLongClickListener() {
public boolean onLongClick(final View v) {
// do something on long click
AlertDialog alertDialog = new AlertDialog.Builder(v.getContext()).create();
alertDialog.setTitle("Do you want to Delete?");
alertDialog.setMessage(" "+temp_name);
alertDialog.setButton("OK", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
EstimateTrackerActivity.this.dh.deleteexp(inc_id);
/*//Toast.makeText(EstimateTrackerActivity.this, "id"+id,Toast.LENGTH_LONG).show();
onclick_addcategory(v);*/
onclick_listexpense(v);
spinner.setSelection(temp3);
}
});
alertDialog.setButton2("CANCEL", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
onclick_listexpense(v);
spinner.setSelection(temp3);
}
});
alertDialog.show();
return false;
}
}; tr_inc.setOnLongClickListener(myListener);
}
A: Judging from your problem, I think you are new to Android.
OK, look at the code below.
To create an AlertDialog with a list of selectable items, use the setItems() method:
final CharSequence[] items = {"Red", "Green", "Blue"};
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setTitle("Pick a color");
builder.setItems(items, new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int item) {
Toast.makeText(getApplicationContext(), items[item], Toast.LENGTH_SHORT).show();
}
});
AlertDialog alert = builder.create();
For more info look at Creating an AlertDialog.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/9306049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Add column in hive not allowed from scala/spark code I am trying to add a column to a Hive table when the source data has new columns. All the detection of new columns works well; however, when I try to add the column to the destination table with the code below, I receive an error:
for (f <- df.schema.fields) {
if ("[" + f.name + "]"==chk) {
spark.sqlContext.sql("alter table dbo_nwd_orders add columns (" + f.name + " " + f.dataType.typeName.replace("integer", "int") + ")")
}
}
Error:
WARN HiveExternalCatalog: Could not alter schema of table `default`.`dbo_nwd_orders` in a Hive compatible way. Updating Hive metastore in Spark SQL specific format
InvalidOperationException(message:partition keys can not be changed.)
However, if I catch the alter sentence generated and execute it from hive GUI (HUE), I can add it without issues.
alter table dbo_nwd_orders add columns (newCol int)
Why that sentence is valid from the GUI and not from spark code?
Thank you very much.
A: It has been said multiple times here, but just to reiterate - Spark is not a Hive interface and is not designed for full Hive compatibility in terms of language (Spark targets the SQL standard, Hive uses a custom SQL-like query language) or capabilities (Spark is an ETL solution, Hive is a data warehousing solution).
Even data layouts are not fully compatible between these two.
Spark with Hive support is Spark with access to Hive metastore, not Spark that behaves like Hive.
If you need access to the full set of Hive's features, connect to Hive directly with a native client or a native (not Spark) JDBC connection, and interact with it from there.
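As a rough illustration of the direct JDBC route (a sketch only — the HiveServer2 URL, credentials and the hive-jdbc driver on the classpath are all assumptions):
import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:hive2://hive-server:10000/default", "user", "password")
val stmt = conn.createStatement()
try {
  // the same statement that works from Hue, executed against Hive itself rather than through Spark's catalog
  stmt.execute("alter table dbo_nwd_orders add columns (newCol int)")
} finally {
  stmt.close()
  conn.close()
}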
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50758302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Cannot resolve method 'setText(java.lang.String)' I am trying to add a library called SmoothCheckBox; while trying to use it, it showed me an error: Cannot resolve method 'setText(java.lang.String)'.
This is my class:
............................................................................
package abtech.waiteriano.com.retrievingcontactsexample;
import android.annotation.TargetApi;
import android.content.Context;
import android.os.Build;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.CheckBox;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import cn.refactor.library.SmoothCheckBox;
/**
* Created by Trinity Tuts on 10-01-2015.
*/
public class SelectUserAdapter extends BaseAdapter {
public List<SelectUser> _data;
private ArrayList<SelectUser> arraylist;
Context _c;
ViewHolder v;
public SelectUserAdapter(List<SelectUser> selectUsers, Context context) {
_data = selectUsers;
_c = context;
this.arraylist = new ArrayList<SelectUser>();
this.arraylist.addAll(_data);
}
@Override
public int getCount() {
return _data.size();
}
@Override
public Object getItem(int i) {
return _data.get(i);
}
@Override
public long getItemId(int i) {
return i;
}
@TargetApi(Build.VERSION_CODES.LOLLIPOP)
@Override
public View getView(int i, View convertView, ViewGroup viewGroup) {
View view = convertView;
if (view == null) {
LayoutInflater li = (LayoutInflater) _c.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
view = li.inflate(R.layout.contact_list_item, null);
Log.e("Inside", "here--------------------------- In view1");
} else {
view = convertView;
Log.e("Inside", "here--------------------------- In view2");
}
v = new ViewHolder();
v.check = (SmoothCheckBox) view.findViewById(R.id.contactsCB);
final SelectUser data = (SelectUser) _data.get(i);
v.check.setText(data.getName());
v.check.setChecked(data.getCheckedBox());;
Log.e("Image Thumb", "--------------" + data.getThumb());
/*// Set check box listener android
v.check.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
CheckBox checkBox = (CheckBox) view;
if (checkBox.isChecked()) {
data.setCheckedBox(true);
} else {
data.setCheckedBox(false);
}
}
});*/
view.setTag(data);
return view;
}
// Filter Class
public void filter(String charText) {
charText = charText.toLowerCase(Locale.getDefault());
_data.clear();
if (charText.length() == 0) {
_data.addAll(arraylist);
} else {
for (SelectUser wp : arraylist) {
if (wp.getName().toLowerCase(Locale.getDefault())
.contains(charText)) {
_data.add(wp);
}
}
}
notifyDataSetChanged();
}
static class ViewHolder {
SmoothCheckBox check;
}
}
and the error in this line
v.check.setText(data.getName());
android Monitor Error
Information:Gradle tasks [:app:assembleDebug]
E:\AndroidWorkSpace\RetrievingContactsExample\app\src\main\java\abtech\waiteriano\com\retrievingcontactsexample\SelectUserAdapter.java
Error:(69, 16) error: cannot find symbol method setText(String)
Error:Execution failed for task ':app:compileDebugJavaWithJavac'.
> Compilation failed; see the compiler error output for details.
Information:BUILD FAILED
Information:Total time: 1.206 secs
Information:2 errors
Information:0 warnings
Information:See complete output in console
............................................................................
I don't know how to solve this error; I'd appreciate it if anyone could help me.
Sorry if anything is not clear; I hope this is understandable.
............................................................................
A: I assume you mean this SmoothCheckBox on GitHub.
Looking at the source code, one finds no setText(String) method. If I understand the readme correctly, those check boxes are designed to have a selected and unselected color, but no text.
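If you want a label next to the box, one workaround (a sketch only — R.id.contactName and the extra ViewHolder field are hypothetical, not part of the library) is to put a plain TextView beside the SmoothCheckBox in contact_list_item.xml and set the text on that, keeping SmoothCheckBox purely for the checked state:
// needs: import android.widget.TextView;

// ViewHolder gains a TextView alongside the SmoothCheckBox
static class ViewHolder {
    SmoothCheckBox check;
    TextView name;   // hypothetical extra view in contact_list_item.xml
}

// in getView():
v.check = (SmoothCheckBox) view.findViewById(R.id.contactsCB);
v.name = (TextView) view.findViewById(R.id.contactName);  // hypothetical id

v.name.setText(data.getName());            // the text lives on the TextView
v.check.setChecked(data.getCheckedBox());  // SmoothCheckBox only tracks the checked state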
| {
"language": "en",
"url": "https://stackoverflow.com/questions/44010952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-3"
} |
Q: Does git create new files during merge or overwrite existing ones I have a script test.sh
#!/bin/bash
echo start old file
sleep 20
echo end old file
in the repository, which I execute, and in the meantime I git merge other-branch changes like
#!/bin/bash
echo start new file
sleep 20
echo end new file
into the current branch.
It seems that git on Unix (?) does not directly overwrite the existing file node (?) but instead removes test.sh and creates a new file.
That way it is guaranteed that the script execution will always read the initial file test.sh and terminate with echo end old file.
Note: On my system (Ubuntu 20.04), executing the script while directly overwriting its content in an editor results in the new code being executed, which is bad...
Is that correct and is it also correct on Windows with git-for-windows?
A: I can't answer regarding Windows, but on Ubuntu 18.04 I can confirm that a git checkout or git merge will delete and recreate a changed file, rather than editing it in place. This can be seen in strace output, for example:
unlink("test.sh") = 0
followed later by
openat(AT_FDCWD, "test.sh", O_WRONLY|O_CREAT|O_EXCL, 0666) = 4
It can also be seen if you create a hard link to the file before the git command and then look again afterwards, you will see that you have two different inodes, with different contents. This is to be expected following deletion and recreation, whereas an in-place edit would have preserved the hard linking.
$ ls -l test.sh
-rw-r--r-- 1 myuser mygroup 59 Jun 5 17:04 test.sh
$ ln test.sh test.sh.bak
$ ls -li test.sh*
262203 -rw-r--r-- 2 myuser mygroup 59 Jun 5 17:04 test.sh
262203 -rw-r--r-- 2 myuser mygroup 59 Jun 5 17:04 test.sh.bak
$ git merge mybranch
Updating 009b964..d57f33a
Fast-forward
test.sh | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
$ ls -li test.sh*
262219 -rw-r--r-- 1 myuser mygroup 70 Jun 5 17:05 test.sh
262203 -rw-r--r-- 1 myuser mygroup 59 Jun 5 17:04 test.sh.bak
You mentioned in a comment attached to the question that it is related to Overwrite executing bash script files. Although it would seem not to be the best idea to run a git command affecting a script which is currently still being executed, in fact the delete and recreate behaviour should mean that the existing execution will be unaffected. Even if the bash interpreter has not yet read the whole file into memory, it will have an open filehandle on the existing inode and can continue to access its contents even though that inode is no longer accessible via the filename that it had. See for example What happens to an open file handle on Linux if the pointed file gets moved or deleted
A: On Windows with git-for-windows I see the same behavior:
$ mklink /H test.sh.bak test.sh
$ fsutil hardlink list test.sh.bak
test.sh.bak
test.sh
$ git merge test
$ fsutil hardlink list test.sh.bak
test.sh.bak
Meaning the hard link did not get preserved, which means a new file has been created.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62218749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: footer sum data cannot sum all data in server-side DataTables I have a model, linked here:
https://pastebin.com/HLNQZgkg
public function data_json_judulbuku() {
error_reporting(-1);
if( isset($_GET['id']) ){
$id = $_GET['id'];
}else{
$id = '';
}
and the JavaScript that accesses the DataTable:
https://pastebin.com/f2NrFbBr
I cannot sum all the data in the DataTable; the sum only covers the rows listed on the current page.
Result of the footer sum:
The total value should be 135 and the current page 15, but the display shows 15 for both the total and the current page.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52306762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: varargs parameter list created at runtime public static void method(Set<?>... sets){}
Depending on program flow, the above method is called with two sets, or with three sets, or more (this is not known at compile time).
Is there a way to construct the argument list "on the fly"?
sets is of type Set<?>[]
The following was not fruitful:
Set<Set<Integer>> varargs = new HashSet<Set<Integer>>();
(method recognizes varargs just as one set -> no solution)
Set<Integer>[] varargs = new HashSet<Integer>[2];
returns
"Cannot create generic array of HashSet<Integer>"
I would like to construct an array of arguments, while array size and content is filled at runtime.
A: Set<Integer>[] varargs = new HashSet[2];
varargs[0] = new HashSet<Integer>() ;
A: I believe an array of Set should be defined like this:
Set<Integer>[] varargs = new Set[2];
varargs[0] = new HashSet<Integer>();
varargs[1] = new HashSet<Integer>();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8717000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: cmdline of vim on mac terminal or iTerm2 disappears very slowly I type : and then press the Esc key to exit command mode, but the command line disappears very slowly (about 1-2 seconds), even though you are actually already back in normal mode.
A: vim is waiting a short time to allow for the possibility that the esc key might begin a special key (such as cursor-left or F1).
You can alter this behavior by changing these settings: ttimeout, timeoutlen
and ttimeoutlen.
The timeoutlen option is set by default to 1 second (1000 milliseconds). If you set it to a shorter time (0.1 seconds is fast), it will help.
Some suggest (as in vim's documentation) reducing the timeout, e.g.,
set ttimeout
set ttimeoutlen=100
Related discussion:
*
*Vim Command Line Escape Timeout
*Eliminating delays on ESC in vim and zsh
| {
"language": "en",
"url": "https://stackoverflow.com/questions/33949549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Getting this error when I use IBM Watson Tone Analyzer API. How can I resolve this? requests.exceptions.ConnectionError:
HTTPSConnectionPool(host='api.eu-gb.tone-analyzer.watson.cloud.ibm.com', port=443):
Max retries exceeded with url:
/instances/76db955a-ebbb-46c9-a9ca-121542253a0c/v3/tone?version=2017-09-21&sentences=true
(Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x05533928>:
Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
The console shows part of the output that prints at runtime, and then this "max retries exceeded" error message pops up. Please respond as soon as possible.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/62721562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Firebase authentication phone number for android: SMS template in different languages I am using SMS Firebase authentication in my Android application. However, the SMS I am receiving is weird: part of it is sometimes English, part of it is Arabic, and the rest of the SMS I actually can't make out.
How can I make the received SMS Arabic only, without the other characters?
A: Check whether your phone's language and the Firebase console language are the same; otherwise this can happen. Firebase checks the locale of the phone, and that mismatch is what causes the issue.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/46539758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Create Pivot Table and get Count in Pandas Dataframe I have my dataframe -
import pandas as pd
data = [['2233', 'A', 'FY21'], ['441', 'C', 'FY20'], ['6676', 'A', 'FY19'], ['033', 'C', 'FY16'],
['12', 'A', 'FY18'], ['91', 'B', 'FY15'], ['6676', 'C', 'FY10'], ['441', 'C', 'FY17'],
['12', 'A', 'FY14'], ['441', 'C', 'FY12']]
df = pd.DataFrame(data, columns = ('emp_id', 'category', 'year'))
df
emp_id category year
0 2233 A FY21
1 441 C FY20
2 6676 A FY19
3 033 C FY16
4 12 A FY18
5 91 B FY15
6 6676 C FY10
7 441 C FY17
8 12 A FY14
9 441 C FY12
So basically I want each category (A, B, and C) to become an individual column, with each column containing the count for that category.
What I want as my output -
emp_id A B C
0 2233 1
1 441 3
2 6676 1
3 033 1
4 12 2
5 91 1
6 6676 1
What I was trying -
df['count'] = df.groupby(['emp_id'])['category'].transform('count')
df.drop_duplicates('emp_id', inplace = True)
df
emp_id category year count
0 2233 A FY21 1
1 441 C FY20 3
2 6676 A FY19 2
3 033 C FY16 1
4 12 A FY18 2
5 91 B FY15 1
please help me to get my desired output in python.
A: Use pd.crosstab:
df1 = pd.crosstab(df['emp_id'], df['category']).rename_axis(
columns=None).reset_index()
OUTPUT:
emp_id A B C
0 033 0 0 1
1 12 2 0 0
2 2233 1 0 0
3 441 0 0 3
4 6676 1 0 1
5 91 0 1 0
NOTE:
If you don't need 0 in the output you can use:
df = pd.crosstab(df['emp_id'], df['category']).rename_axis(
columns=None).reset_index().replace(0, '')
OUTPUT:
emp_id A B C
0 033 1
1 12 2
2 2233 1
3 441 3
4 6676 1 1
5 91 1
Updated Answer:
df = (
df.reset_index()
.pivot_table(
index=['emp_id', df.groupby('emp_id')['year'].transform(', '.join)],
columns='category',
values='index',
aggfunc='count',
fill_value=0)
.rename_axis(columns=None)
.reset_index()
)
OUTPUT:
emp_id year A B C
0 033 FY16 0 0 1
1 12 FY18, FY14 2 0 0
2 2233 FY21 1 0 0
3 441 FY20, FY17, FY12 0 0 3
4 6676 FY19, FY10 1 0 1
5 91 FY15 0 1 0
A: You can also use pivot table:
pv = pd.pivot_table(df, index='emp_id', columns='category', aggfunc='count')
pv.fillna('', inplace=True)
print(pv)
year
category A B C
emp_id
033 1
12 2
2233 1
441 3
6676 1 1
91 1
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68139935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: I wonder how to create a proportional table for one categorical variable and one numerical variable (numerical data is proportion)? library(ggplot2)
library(tidyverse)
library(dplyr)
aqi <- read.csv("aqi12_21.csv")
aqi <- select(aqi,State.Name,county.Name,Date,AQI,Category,Defining.Parameter)
aqi <- rename(aqi,State=State.Name,County=county.Name)
aqi <- separate(aqi, Date, c("Year", "Month", "Day"))
AQI_HIGH<-filter(aqi,AQI>100)
average_aqi_state <- AQI_HIGH %>% group_by(State) %>% summarise(average_aqi = mean(AQI))
So I have my average data which looks like:
I don't know how to create a proportional graph (average AQI shown as a percentage) while State remains the categorical variable.
A: Suppose this simplified form of data represents your actual data:
dat <- structure(list(State = c("Alabama", "Alaska", "Arizona", "Others"
), average_aqi = c(300, 550, 150, 1000)), class = "data.frame", row.names = c(NA,
-4L))
If I understand your purpose correctly, you want to get the proportion of average_aqi in this way:
dat |> mutate(avaqi_perc = average_aqi/sum(average_aqi))
# State average_aqi avaqi_perc
#1 Alabama 300 0.150
#2 Alaska 550 0.275
#3 Arizona 150 0.075
#4 Others 1000 0.500
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72036035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Is there a way to change the browser's address bar without refreshing the page? I'm developing a web app. In it I have a section called categories that every time a user clicks one of the categories an update panel loads the appropriate content.
After the user clicked the category I want to change the browser's address bar url from
www.mysite.com/products
to something like
www.mysite.com/products/{selectedCat}
without refreshing the page.
Is there some kind of JavaScript API I can use to achieve this?
A: I believe directly manipulating the address bar to show a completely different URL, without moving to that URL, isn't allowed for security reasons. If you are happy with it being
www.mysite.com/products/#{selectedCat}
i.e. an anchor-style link within the same page, then look into the various history/"back button" scripts that are now present in most JavaScript libraries.
The mention of update panel leads me to guess you are using asp.net, in that case the asp.net ajax history control is a good place to start
A: To add to what the others have already said, edit the window.location.hash property in your onclick function to match the URL you want.
window.location.hash = 'category-name'; // address bar would become http://example.com/#category-name
A: I don't think this is possible (at least changing to a totally different address), as it would be an unintuitive misuse of the address bar, and could promote phishing attacks.
A: With HTML5 you can modify the url without reloading:
If you want to make a new post in the browser's history (i.e. back button will work)
window.history.pushState('Object', 'Title', '/new-url');
If you just want to change the url without being able to go back
window.history.replaceState('Object', 'Title', '/another-new-url');
The object can be used for ajax navigation:
window.history.pushState({ id: 35 }, 'Viewing item #35', '/item/35');
window.onpopstate = function (e) {
var id = e.state.id;
load_item(id);
};
Read more here: http://www.w3.org/TR/html5-author/history.html
A fallback solution: https://github.com/browserstate/history.js
A: This cannot be done the way you're saying it. The method suggested by somej.net is the closest you can get. It's actually very common practice in the AJAX age. Even Gmail uses this.
A:
"window.location.hash"
as suggested by sanchothefat, should be the one and only way of doing it. Everywhere I have seen this feature, the changing part always comes after the # in the URL.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/352343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "93"
} |
Q: jquery $.post, to django view method, request.method shows GET, and cannot retrieve parameters I'm trying to figure out how I can send some data to a view in Django via POST, and return some data, depending on the data sent in the request, from the view to the client via an HttpResponse.
When the POST request is sent from the client, the web console prints,
POST http://<myurl>
GET http://<myurl>/
And the message returned is "get". In the view method, when I try to access the parameters via request.GET.get("key"), None is returned.
I must be misunderstanding something, anyone know what is going on?
views.py
from django.http import HttpResponse
def test(request):
msg = ""
if request.method == "POST":
msg = "post"
elif request.method == "GET":
msg = "get"
return HttpResponse(msg)
javascript/jquery
function _req(url, params, callback ) {
function onResponse( data ) {
console.log( data );
callback(data);
};
$.post(
url,
JSON.stringify(params),
onResponse,
"text");
};
A: The two URLs that are printed show exactly what is happening. You are posting to a URL without a final slash, but you have the default APPEND_SLASH setting, so Django is redirecting to the URL with a final slash appended. Redirects are always GETs.
Make sure you post to the URL with the slash.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/25968145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: No trusted certificate found using CAS and JBoss I'm trying to authenticate through CAS+LDAP in a Jboss app. The config is like this:
*
*Server 1: JBoss 6.1.0. CAS is deployed here.
*Server 1: LDAP using OpenDS.
*Server 2: JBoss with the app to log into.
I've configured both JBoss instances to use SSL correctly, and CAS successfully reads from and authenticates against the LDAP.
When I go (through https) to server1:8443/app/ I'm redirected to server2:8443/cas/ and the login screen is displayed. I log in with a valid user from the LDAP, but when the flow gets back to the app I always get this:
javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: No trusted certificate found
I've read it's a certificate problem and, as this is a dev environment, I'm trying a self-signed certificate. So I did this:
*
*Generate the self-signed certificate on Server1 with
keytool -genkey -alias jbosskey -keypass password -keyalg RSA -keystore server.keystore
*Get the certificate of the Server1 with:
keytool -export -alias jbosskey -keypass password -file server.crt -keystore server.keystore
*Copy the server.crt to Server2 and import it to the truststore of Jboss.
keytool -import -alias server1 -file server.crt -keystore C:\dev\jboss-6.1.0.Final\server.truststore
*This gets me the exception. So I also imported it into the cacerts of the JVM.
keytool -import -alias server1 -file server.crt -keystore C:\dev\jdk160_18\jre\lib\security\cacerts
*Not working, so I tried to add the certificate to the keystore of the Jboss at Server2.
keytool -import -alias server1 -file server.crt -keystore C:\dev\jboss-6.1.0.Final\keystore.jks
The Server1 Jboss server.xml
<Connector protocol="HTTP/1.1" SSLEnabled="true"
port="${jboss.web.https.port}" address="${jboss.bind.address}"
scheme="https" secure="true" clientAuth="false"
keystoreFile="${jboss.server.home.dir}/conf/server.keystore"
keystorePass="password" sslProtocol = "TLS"
/>
The Server2 Jboss server.xml
<Connector protocol="HTTP/1.1" SSLEnabled="true"
port="8443" address="${jboss.bind.address}"
scheme="https" secure="true" clientAuth="false"
keystoreFile="C:\dev\jboss-6.1.0.Final\keystore.jks"
keystorePass="password"
truststoreFile="C:\dev\jboss-6.1.0.Final\server.truststore"
truststorePass="password"
sslProtocol = "TLS" />
I've been stuck on this for a couple of days and don't know if I'm missing something. Did I miss something important with keytool?
Thanks in advance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/17972511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to get SetBinding method I need to do something like this:
http://social.msdn.microsoft.com/Forums/en-US/wpf/thread/982e2fcf-780f-4f1c-9730-cedcd4e24320/
I decided to follow the best way as John Smith advised.
I tried to set the binding in XAML, but it didn't work (the target was always null).
I decided to set the binding manually in code (for debugging purposes), so I need to execute the "SetBinding" method of the DateRange object.
This method doesn't exist on an object of type DateRange.
Any ideas?
<TextBox Grid.Row="1"
Grid.Column="1"
Name="Xml_Name"
>
<TextBox.Text>
<Binding XPath="@name" UpdateSourceTrigger="PropertyChanged" >
<Binding.ValidationRules>
<local:UniqueValidationRule x:Name="uniqueDatasourcesRule001" >
<local:UniqueValidationRule.UniqueCollection>
<local:UniqueDependencyObject uu="{Binding ElementName=Xml_Name, Path=Name, UpdateSourceTrigger=PropertyChanged}" />
</local:UniqueValidationRule.UniqueCollection>
</local:UniqueValidationRule>
</Binding.ValidationRules>
</Binding>
</TextBox.Text>
</TextBox>
public class UniqueDependencyObject : DependencyObject
{
public static readonly DependencyProperty uu11Property =
DependencyProperty.Register("uu", typeof(string), typeof(UniqueDependencyObject));
public string uu
{
set {
SetValue(uu11Property, value); }
get {
return (string)GetValue(uu11Property); }
}
}
public class UniqueValidationRule : ValidationRule
{
public UniqueDependencyObject UniqueCollection
{
get;
set;
}
public override ValidationResult Validate(object value, System.Globalization.CultureInfo cultureInfo)
{
// I set breakpoint to this line and check UniqueCollection.uu - it is always null
////////////////////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////////////////////
return new ValidationResult(true, null);
}
}
// And binding in code:
Binding binding = new Binding();
binding.ElementName = "Xml_Name";
binding.Path = new System.Windows.PropertyPath("Name");
binding.UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged;
UniqueValidationRule uVal = new UniqueValidationRule();
uVal.UniqueCollection = new UniqueDependencyObject();
BindingOperations.SetBinding(uVal.UniqueCollection, UniqueDependencyObject.uu11Property, binding);
A: I haven't read all the details of the forum post you're referring to, but I'm sure you need to know a few things about data binding before you can start using it.
*
*The target of a data binding is a dependency property
*A dependency property has to be declared in a class that is derived from DependencyObject (at least when it is not an attached property, but we don't talk about those here)
*The SetBinding method you're looking for is either a static method in BindingOperations, or a method of FrameworkElement.
So when you're going to set up a binding on some property of your DataRange class, it would have to be derived from DependencyObject, and you would set the binding like this:
DataRange dataRange = ...
Binding binding = ...
BindingOperations.SetBinding(dataRange, DataRange.StartProperty, binding);
If DataRange were derived from FrameworkElement, you could write this:
dataRange.SetBinding(DataRange.StartProperty, binding);
Here DataRange.StartProperty is of type DependencyProperty and represents the Start dependency property of class DataRange.
You should at least read the MSDN articles Data Binding Overview, Dependency Properties Overview and Custom Dependency Properties.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/12036527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to implement the generalized form of std::same_as (i.e. for more than two type parameters) that is agnostic to parameter order? Background
We know that the concept std::same_as is agnostic to order (in other words, symmetric): std::same_as<T, U> is equivalent to std::same_as<U, T> (related question). In this question, I would like to implement something more general: template <typename ... Types> concept same_are = ... that checks whether types in the pack Types are equal to each other.
My attempt
#include <type_traits>
#include <iostream>
#include <concepts>
template <typename T, typename... Others>
concept same_with_others = (... && std::same_as<T, Others>);
template <typename... Types>
concept are_same = (... && same_with_others<Types, Types...>);
template< class T, class U> requires are_same<T, U>
void foo(T a, U b) {
std::cout << "Not integral" << std::endl;
}
// Note the order <U, T> is intentional
template< class T, class U> requires (are_same<U, T> && std::integral<T>)
void foo(T a, U b) {
std::cout << "Integral" << std::endl;
}
int main() {
foo(1, 2);
return 0;
}
(My intention here is to enumerate over every possible ordered pair of types in the pack)
Unfortunately, this code would not compile, with the compiler complaining that the call to foo(int, int) is ambiguous. I believe that it considers are_same<U, T> and are_same<T, U> as not equivalent. I would like to know why the code fails how I can fix it (so that the compiler treats them as equivalent)?
A: From cppreference.com Constraint_normalization
The normal form of any other expression E is the atomic constraint whose expression is E and whose parameter mapping is the identity mapping. This includes all fold expressions, even those folding over the && or || operators.
So
template <typename... Types>
concept are_same = (... && same_with_others<Types, Types...>);
is "atomic".
So indeed are_same<U, T> and are_same<T, U> are not equivalent.
I don't see how to implement it :-(
A: The problem is, with this concept:
template <typename T, typename... Others>
concept are_same = (... && std::same_as<T, Others>);
Is that the normalized form of this concept is... exactly that. We can't "unfold" this (there's nothing to do), and the current rules don't normalize through "parts" of a concept.
In other words, what you need for this to work is for the constraint:
... && (same-as-impl<T, U> && same-as-impl<U, T>)
to normalize into:
... && (is_same_v<T, U> && is_same_v<U, T>)
And consider one fold-expression && constraint to subsume another fold-expression constraint && if its underlying constraint subsumes the other's underlying constraint. If we had that rule, that would make your example work.
It may be possible to add this in the future - but the concern around the subsumption rules is that we do not want to require compilers to go all out and implement a full SAT solver to check constraint subsumption. This one doesn't seem like it makes it that much more complicated (we'd really just add the && and || rules through fold-expressions), but I really have no idea.
Note however that even if we had this kind of fold-expression subsumption, are_same<T, U> would still not subsume std::same_as<T, U>. It would only subsume are_same<U, T>. I am not sure if this would even be possible.
A: churill is right. Using std::conjunction_v might be helpful.
template <typename T, typename... Types>
concept are_same = std::conjunction_v<std::is_same<T, Types>...>;
| {
"language": "en",
"url": "https://stackoverflow.com/questions/58724459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
} |
Q: How to scalar multiply array Write a method scalarMultiply which takes as input a double[] array, and a double scale, and returns void. The method should modify the input array by multiplying each value in the array by scale.
Question to consider: when we modify the input array, do we actually modify the value of the variable array?
Here is what I have done so far, but I don't know what I did wrong, because it is still not working.
public class warm4{
public static void main(String[] args){
double[] array1 = {1,2,3,4};
double scale1 = 3;
}
}
public static void scalarMultiply(double[] array, double scale){
for( int i=0; i<array.length; i++){
array[i] = (array[i]) * scale;
System.out.print(array[i] + " ");
}
}
}
A: You're never calling the scalarMultiply method.
A: You're never calling scalarMultiply, and the number of brackets is incorrect.
public class warm4{
public static void main(String[] args){
double[] array1 = {1,2,3,4};
double scale1 = 3;
scalarMultiply(array1, scale1);
}
public static void scalarMultiply(double[] array, double scale){
for( int i=0; i<array.length; i++){
array[i] = (array[i]) * scale;
System.out.print(array[i] + " ");
}
}
}
A: Your method is OK. But you must call it from your main:
public static void main(String[] args){
double[] array1 = {1,2,3,4};
double scale1 = 3;
scalarMultiply(array1, scale1);
for (int i = 0; i < array1.length; i++) {
System.out.println(array1[i]);
}
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19661475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Animate not queueing correctly I have this working code for moving a heading icon back and forth when you hover over a heading:
jQuery('h1.heading').hover(
function(){
$icon = jQuery('.heading-icon', this);
if( ! $icon.is(':animated') ){
$icon.animate({ "margin-left" : "+=7px" }, 200, function(){
$icon.animate({ "margin-left" : "-=7px" }, 400, function(){
$icon.css('margin-left','auto');
} );
} );
}
},
function(){}
);
However, if you hover over the heading quickly (faster than the animation completes), it gets buggy and the icon ends up moved away from its original location.
I use onComplete functions, and I even tried to use ! $('...').is(':animated') as you can see above, but it did not help. So I thought I would at least reset the position after the animation ends, so that even if it got buggy it would reset to the original position once all animations finished... That worked only partially; it still gets buggy and ends up in the wrong position...
So what's wrong?
How come, for example, the shake effect from jQuery UI queues correctly?
Note: I don't care if the animation runs a few more times; the goal is to make it stay at the right position when (all) the animation(s) end.
Any help? :)
EDIT
I finally reproduced the problem on JSFiddle - http://jsfiddle.net/yhJst/
==> try to hover up and down faster over the headings
EDIT2
It doesn't seem to be happening when there is only one heading ... http://jsfiddle.net/scZcB/3/
A: Here's the problem: in your callback function, you are using animate on the $icon variable. But when you hover over another element, that variable is changed to point at the newly hovered element.
Use $(this) in the callback or the natural queuing :
Natural queuing
jQuery('h1.sc_blogger_title').on('mouseenter', function(){
$icon = jQuery('.sc_title_bubble_icon', this);
if( ! $icon.is(':animated') ){
$icon.animate({ "margin-left" : "+=7px" }, 200).animate({ "margin-left" : "-=7px" }, 400);
}
});
http://jsfiddle.net/yhJst/1/
$(this)
jQuery('h1.sc_blogger_title').on('mouseenter', function(){
$icon = jQuery('.sc_title_bubble_icon', this);
if( ! $icon.is(':animated') ){
$icon.animate({ "margin-left" : "+=7px" }, 200, function(){
$(this).animate({ "margin-left" : "-=7px" }, 400);
});
}
});
http://jsfiddle.net/yhJst/2/
Or use a local variable.
As you discovered, the current variable is a global one. Just add the keyword var.
jQuery('h1.sc_blogger_title').on('mouseenter', function(){
var $icon = jQuery('.sc_title_bubble_icon', this);
if( ! $icon.is(':animated') ){
$icon.animate({ "margin-left" : "+=7px" }, 200, function(){
$icon.animate({ "margin-left" : "-=7px" }, 400, function(){
$icon.css('margin-left','auto');
} );
} );
}
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24783304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can't append char to a StringBuffer 2-dimensional array Does anybody know why I can't append a char to this StringBuffer array (in my example below), and can somebody please show me how it needs to be done?
public class test {
public static void main(String args[]){
StringBuffer[][] templates = new StringBuffer[3][3];
templates[0][0].append('h');
}
}
My output to this code is:
output: Exception in thread "main" java.lang.NullPointerException
at test.main(test.java:6)
It would help me so much, so if you know any solution, please respond.
A: The statement below will just declare the array, but will not initialize its elements:
StringBuffer[][] templates = new StringBuffer[3][3];
You need to initialize your array elements before trying to append content to them. Not doing so will result in a NullPointerException.
Add this initialization
templates[0][0] = new StringBuffer();
and then append
templates[0][0].append('h');
A: You need to initialize the buffers before you append something
templates[0][0] = new StringBuffer();
A: Others have pointed out the correct answer, but what happens when you try to do something like templates[1][2].append('h');?
What you really need is something like this:
public class Test { //<---Classes should be capitalized.
public static final int ARRAY_SIZE = 3; //Constants are your friend.
//Have a method for init of the double array
public static StringBuffer[][] initArray() {
StringBuffer[][] array = new StringBuffer[ARRAY_SIZE][ARRAY_SIZE];
for(int i = 0;i<ARRAY_SIZE;i++) {
for(int j=0;j<ARRAY_SIZE;j++) array[i][j] = new StringBuffer();
}
return array;
}
public static void main(String args[]){
StringBuffer[][] templates = initArray();
templates[0][0].append('h');
//You are now free to conquer the world with your StringBuffer Matrix.
}
}
Using the constants is important, as it is reasonable to expect your matrix size to change. By using constants, you can change it in only one location rather than scattered throughout your program.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21582128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: How do I download images from Airtable and upload them to WordPress using the Airtable and WordPress APIs, preferably in JavaScript? I'm looking to make an automation script that takes images from Airtable, downloads them, and uploads them to the WordPress uploads folder. These images should backfill all the post types with the empty image fields I have created for them. For example, say I made a profile for myself, James Major, and there is an image field; the script should take that picture and fill it in. I don't have a clue where to begin, as I'm new to coding and I am learning on the fly. Thank you for your help.
const Airtable = require('airtable'); // import of the 'airtable' npm package, needed for the line below
var base = new Airtable({ apiKey: 'API KEY HERE' }).base('Base key here');
const table = base('USERS');
const getRecords = async() => {
const records = await table.select({
maxRecords: 0,
view: "Website Export"
}).firstPage();
console.log(records);
};
getRecords();
const getRecordById = async(id) => {
try {
const record = await table.find(id);
console.log(record)
} catch (err) {
console.error(err);
}
};
// const createRecord
getRecordById();
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72808393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Generating sharing message and shortened URL to Twitter I am using a "ghetto" Twitter share on a very simple Python-based mobile site:
return "http://mobile.twitter.com/home?status=" + urllib.quote(link)
I'd like to change the sharing so that the Twitter message gets the page title (which I have) and a shortened URL.
Do any general libraries/micro-frameworks exist for this purpose in Python?
A: I'd recommend implementing a URL-shortener API (e.g. bit.ly). Even better, there are API wrappers available for most of these popular services (e.g. a Python API wrapper for bit.ly - http://code.google.com/p/python-bitly/).
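Building on that, here is a minimal sketch of how the pieces could fit together. shorten_url() is a hypothetical stand-in that you would back with whichever shortener service or wrapper you choose; everything else mirrors the status-URL pattern from the question:
import urllib

def shorten_url(long_url):
    # Stand-in: replace with a call to your chosen shortener service
    # (e.g. the bit.ly wrapper mentioned above). Returning the long URL
    # unchanged keeps the sketch runnable without any credentials.
    return long_url

def tweet_link(title, link):
    # Compose "<page title> <short url>" and URL-encode it for the
    # Twitter status parameter.
    status = u"%s %s" % (title, shorten_url(link))
    return "http://mobile.twitter.com/home?status=" + urllib.quote(status.encode("utf-8"))

# Example: tweet_link(u"My page title", "http://example.com/some/long/path")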
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6412277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Different font size on Different devices in same size class Hi, I am new to size classes. As far as I know, Apple provides one size class (Compact + Regular) for portrait iPhone 4s, 5, 6 and 6+. So how can I set different font sizes on these different devices, via the storyboard or any other way?
Thanks
Happy coding
A: Autolayout and size classes won't target specific devices, so you will have to set the font sizes programmatically. You can check the size of your device using UIScreen.mainScreen().bounds.size.height and set the size of your font accordingly. This solution should clarify it further.
A: As you mentioned in your question, you need to give separate font sizes for different devices.
The first thing to note is that we can't achieve this in the storyboard.
You need to assign different font sizes manually, using if conditions that check the device.
For ex:
if ([[UIScreen mainScreen] bounds].size.height == 568) {
// Assign Font size for iPhone 5
}else if ([[UIScreen mainScreen] bounds].size.height == 667){
// Assign Font size for iPhone 6
}else if ([[UIScreen mainScreen] bounds].size.height == 736){
// Assign Font size for iPhone 6+
}else if ([[UIScreen mainScreen] bounds].size.height == 480){
// Assign Font size for iPhone 4s
}
Note:
*
*You can create a separate Font class, and if you have already done so, you just need to put the above checks in that class.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31849360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: RSStail crashes with segmentation fault 11 I'm trying to monitor an RSS feed using rsstail (https://github.com/flok99/rsstail), by piping the url from rsstail into another program. For some reason, rsstail crashes after a few minutes with Segmentation Fault 11. I have no idea why it would be crashing, and there are no reports online that seem to indicate that rsstail has any issues with segfaults on OSX.
How should I go about figuring out this segfault? I'm considering debugging rsstail using gdb, but this might be overkill for something which by all rights shouldn't be happening.
If anyone has an alternative to rsstail that produces command-line output, that would also be appreciated.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/24445327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: automatic count the number of 1-n instances in a mysql table I have two MySQL tables in a 1-n relationship. In the first table I need to set a field representing the number of instances in the second table, and keep it updated.
What is the best way to accomplish this? Do I have to update the counter every time I insert/delete a record in the second table, or is there an automatic way to do it?
A: To have it as a field in the first table you need to update the counter every time that you insert/delete a record in the second table.
Alternatively, when you need to retrieve the data, you can just query the second table, joining with the first and filtering on the Id from the first table. If you don't need the data every time that you retrieve a record from the first table and if you are inserting/deleting lots from the second table, then this will be the more efficient route.
A: If the run-time query suggested by @Najzero is not preferred, you might think of creating a View with this query and getting the data from the View.
Also, if you need to update the field on every insert/delete, you can consider creating triggers on the INSERT and DELETE operations.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14828153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: how can i validate year textbox in javascript? I am using a textbox on my page which is used to enter a year range in yyyy-yyyy format.
Can you tell me how I can validate the textbox against this format using JavaScript?
Thanks in advance...
A: You can match it with a regular expression:
var pattern = /\d{4}-\d{4}/;
A: If you are not using any JavaScript library (like jQuery, etc.), JavaScript regular expressions can be used to validate the input.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/6224643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Longest prefix+suffix-combination in set of strings I have a set of strings (less than 30) of length 1 to ~30. I need to find the subset of at least two strings that share the longest possible prefix- + suffix-combination.
For example, let the set be
Foobar
Facar
Faobaron
Gweron
Fzobar
The prefix/suffix F/ar has a combined length of 3 and is shared by Foobar, Facar and Fzobar; the prefix/suffix F/obar has a combined length of 5 and is shared by Foobar and Fzobar. The searched-for prefix/suffix is F/obar.
Note that this is not to be confused with the longest common prefix/suffix, since only two or more strings from the set need to share the same prefix+suffix. Also note that the sum of the lengths of both the prefix and the suffix is what is to be maximized, so both need to be taken into account. The prefix or suffix may be the empty string.
Does anyone know of an efficient method to implement this?
A: How about this:
maxLen := -1;
for I := 0 to Len(A) - 1 do
if Len(A[I]) > maxLen then // (1)
for J := 0 to Len(A[I]) do
for K := 0 to Len(A[I]) - J do
if J+K > maxLen then // (2)
begin
prf := LeftStr(A[I], J);
suf := RightStr(A[I], K);
found := False;
for m := 0 to Len(sufList) - 1 do
if (sufList[m] = suf) and (prfList[m] = prf) then
begin
maxLen := J+K;
Result := prf+'/'+suf;
found := True;
// (3)
n := 0;
while n < Len(sufList) do
if Len(sufList[n])+Len(prfList[n]) <= maxLen then
begin
sufList.Delete(n);
prfList.Delete(n);
end
else
Inc(n);
// (end of 3)
Break;
end;
if not found then
begin
sufList.Add(suf);
prfList.Add(prf);
end;
end;
In this example maxLen keeps the sum of the lengths of the longest prefix/suffix found so far. The most important part of it is the line marked with (2): it bypasses lots of unnecessary string comparisons. In section (3) it eliminates any existing prefix/suffix that is shorter than the newly found one (which is duplicated).
| {
"language": "en",
"url": "https://stackoverflow.com/questions/51002654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Node.js `Stream` and chunk order Are Node.js streams order preserving up to and including the 'data' events? I believe they are, and this question also seems to suggest at least that the pipe methods are sequential.
I want chunk order to be preserved so that I can call a function that operates on each chunk before the stream ends, i.e., while one part of the stream is being read, I want the buffered data chunk to be processed by a function in the 'data' event.
However, when I run the following code, where I've used Math.random and setTimeout to test specifically if the events happen in order:
fs.createReadStream(filePath, { encoding: 'ascii' })
.pipe(streamToEntry)
.on('data', (chunk) => {
setTimeout(() => {
console.log(chunk);
}, Math.random() * 1000)
});
data chunks can be logged in an out-of-order fashion.
Is this because of setTimeout() or because the 'data' event is not necessarily called sequentially? i.e., should ordered processing only happen in the pipe methods, or can I process the data at the end sequentially?
A: Your data events are guaranteed to be emitted in order. See this answer for some additional details and some snippets from node's source code, which shows that indeed you will get your data events in order.
The problem appears when you add asynchronous code in your data callback (setTimeout is an example of async code). In this situation, your data callbacks are not guaranteed to finish processing in the order they were called.
What you need to do is ensure that by the time your data callback returns, you have fully processed your data. In other words your callback code needs to be sync code.
fs.createReadStream(filePath, { encoding: 'ascii' })
.pipe(streamToEntry)
.on('data', (chunk) => {
// only synchronous code here
console.log(chunk);
});
To make the question code work, async/await can be used:
fs.createReadStream(filePath, { encoding: 'ascii' })
.pipe(streamToEntry)
.on('data', async (chunk) => {
// only synchronous code here
await setTimeout(() => console.log(chunk), Math.random() * 1000);
});
| {
"language": "en",
"url": "https://stackoverflow.com/questions/53041032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: mod_wsgi problem with MAMP I build mod_wsgi like the following:
$./configure
--with-python=/Library/Frameworks/Python.framework/Versions/2.7/bin/python
--with-apxs=/usr/local/apache2/bin/apxs
checking Apache version... 2.0.63
configure: creating ./config.status
config.status: creating Makefile
$ sudo make
$ sudo make install
and then I copy file from /usr/local/apache2/modules/mod_wsgi.so to /Applications/MAMP/Library/modules/mod_wsgi.so
And then I add
LoadModule wsgi_module modules/mod_wsgi.so
in httpd.conf
I run Apache and I get an error:
$ sudo /Applications/MAMP/Library/bin/apachectl start
Syntax error on line 287 of /Applications/MAMP/conf/apache/httpd.conf:
Cannot load /Applications/MAMP/Library/modules/mod_wsgi.so into server: cannot create object file image or add library
A: Step 1: Make sure your version of MAMP is Version 2 because it includes a Universal Binary installer (32-bit & 64-bit)
Step 2: Modify your Make file and eliminate the other compiler versions, similar to:
CPPFLAGS = -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -DNDEBUG
CFLAGS = -Wc,"-arch i386" -Wc,"-arch x86_64" -Wc
LDFLAGS = -arch i386 -arch x86_64 -F/Library/Frameworks -framework Python -u _PyMac_Error
LDLIBS = -ldl -framework CoreFoundation
Step 3: In httpd.conf: LoadModule wsgi_module modules/mod_wsgi.so
| {
"language": "en",
"url": "https://stackoverflow.com/questions/3257496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Split an Array and get the Index I have an array like this:
[
1,
1,
1,
0,
0,
0,
1,
1,
0,
1,
1
]
In the above array I need to split on the 0 and 1 values and also get their indexes. I wrote the code below for collecting the 0s into a separate array, but I also want their indexes from the main array:
for (int i = 0; i < [Array count]; i++) {
NSString* strV = [Array objectAtIndex:i];
NSLog(@"ArrayCount:%@",strV);
if ([strV isEqual:[NSNumber numberWithInt:0]]) {
[getArray addObject:strV];
}
}
NSLog(@"getArray:%@",getArray.description);
}
Then, how can I get the index of each 0? Can you please help me. Thank you
A: NSMutableArray *arrayContainZero = [NSMutableArray new];
NSMutableArray *arrayWithZeroIndexes = [NSMutableArray new];
for (NSNumber *numberZero in Array)
{
if(numberZero.integerValue == 0)
{
[arrayContainZero addObject:numberZero];
[arrayWithZeroIndexes addObject:@([arrayContainZero indexOfObject:numberZero])];
}
}
A: NSArray *mainArr=@[@1,@1,@1,@0,@0,@0,@1,@1,@0,@1,@1];
NSMutableArray *tempArr=[[NSMutableArray alloc] init];
for(int i=0;i<[mainArr count];i++){
if([[mainArr objectAtIndex:i] isEqualToNumber:[NSNumber numberWithInt:0]]){
NSMutableArray *arr=[[NSMutableArray alloc] initWithArray:tempArr];
[tempArr removeAllObjects];
NSDictionary *dic=[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:0],@"value",[NSNumber numberWithInt:i],@"index",nil];
[tempArr addObject:dic];
[tempArr addObjectsFromArray:arr];
}else if ([[mainArr objectAtIndex:i] isEqualToNumber:[NSNumber numberWithInt:1]]){
NSDictionary *dic=[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:1],@"value",[NSNumber numberWithInt:i],@"index",nil];
[tempArr addObject:dic];
}
}
NSLog(@"%@",tempArr.description);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36401826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Send data to a shiny R program First of all I'm new to shiny.
I'd like to send a matrix of records to a shiny application so I can populate a control and generate a graph.
Is it possible?
A: The trick is not to have your UI code and server code in two separate files, but to write a function which contains both your UI and server code.
Try this:
shinyapp <- function(mat) {
app <- list(
ui = bootstrapPage(
# here comes your ui.R code
),
server = function(input, output) {
# here comes your server.R code
# the argument mat can now be used inside the server function
}
)
runApp(app)
}
This worked for me.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/23431580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Dependency Inversion Principle: High Level and Low Level module example I was going through the following link to understand what high-level and low-level modules mean in the context of Dependency Inversion Principle.
As per the explanation given there, is the following code snippet a good/appropriate example?
public class HighLevel
{
private IAbstraction _abstraction;
public HighLevel(IAbstraction abstraction)
{
_abstraction = abstraction;
}
public void Act()
{
_abstraction.DoSomething();
}
}
public interface IAbstraction
{
void DoSomething();
}
public class LowLevel: IAbstraction
{
public void DoSomething()
{
//Do something
}
}
A: To make a long answer short: yes, this is an example of the Dependency Inversion Principle.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/43603510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
} |
Q: How to capture a video (AND audio) in python, from a camera (or webcam) I'm looking for a solution, either on Linux or on Windows, that allows me to:
*
*record video (+audio) from my webcam & microphone, simultaneously.
*save it as a file.AVI (or mpg or whatever)
*display the video on the screen while recording it
Compression is NOT an issue in my case, and I actually prefer to capture RAW and compress it later.
So far I've done it with an ActiveX component in VB which took care of everything, and I'd like to move to Python (the VB solution is unstable and unreliable).
So far I've only seen code that captures video alone, or individual frames...
I've looked so far at:
*
*OpenCV - couldn't find audio capture there
*PyGame - no simultaneous audio capture (AFAIK)
*VideoCapture - provide only single frames.
*SimpleCV - no audio
*VLC - binding to VideoLAN program into wxPthon - hopefully it will do (still investigating this option)
*kivy - just heard about it, didn't manage to get it working under windows SO FAR.
The question - is there a video & audio capture library for Python?
Or - what are the other options, if any?
A: I would recommend ffmpeg. There is a python wrapper.
http://code.google.com/p/pyffmpeg/
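For what it's worth, a rough sketch of driving the ffmpeg command-line tool from Python (not the pyffmpeg wrapper itself) could look like this on Linux; the v4l2/alsa device names are assumptions and the flags differ on Windows (dshow):
import subprocess

# Grab video from an assumed v4l2 webcam and audio from the default ALSA
# microphone, and let ffmpeg mux both into a single AVI file.
cmd = [
    "ffmpeg",
    "-f", "v4l2", "-i", "/dev/video0",   # video source (assumed device)
    "-f", "alsa", "-i", "default",       # audio source (assumed device)
    "capture.avi",
]
proc = subprocess.Popen(cmd)
# ... record for as long as needed, then stop:
# proc.terminate()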
A: Answer: No. There is no single library/solution in python to do video/audio recording simultaneously. You have to implement both separately and merge the audio and video signal in a smart way to end up with a video/audio file.
I got a solution for the problem you present. My code addresses your three issues:
*
*Records video + audio from webcam and microphone simultaneously.
*It saves the final video/audio file as .AVI
*Un-commenting lines 76, 77 and 78 will make the video to be displayed to screen while recording.
My solution uses pyaudio for audio recording, opencv for video recording, and ffmpeg for muxing the two signals. To be able to record both simultaneously, I use multithreading: one thread records video, and a second one the audio. I have uploaded my code to GitHub and have also included all the essential parts here.
https://github.com/JRodrigoF/AVrecordeR
Note: OpenCV is not able to control the fps at which the webcam records. It can only specify the desired final fps in the file's encoding, but the webcam usually behaves differently depending on its specifications and the light conditions (I found). So the fps has to be controlled at the code level.
import cv2
import pyaudio
import wave
import threading
import time
import subprocess
import os
class VideoRecorder():
# Video class based on openCV
def __init__(self):
self.open = True
self.device_index = 0
self.fps = 6 # fps should be the minimum constant rate at which the camera can
self.fourcc = "MJPG" # capture images (with no decrease in speed over time; testing is required)
self.frameSize = (640,480) # video formats and sizes also depend and vary according to the camera used
self.video_filename = "temp_video.avi"
self.video_cap = cv2.VideoCapture(self.device_index)
self.video_writer = cv2.VideoWriter_fourcc(*self.fourcc)
self.video_out = cv2.VideoWriter(self.video_filename, self.video_writer, self.fps, self.frameSize)
self.frame_counts = 1
self.start_time = time.time()
# Video starts being recorded
def record(self):
# counter = 1
timer_start = time.time()
timer_current = 0
while(self.open==True):
ret, video_frame = self.video_cap.read()
if (ret==True):
self.video_out.write(video_frame)
# print str(counter) + " " + str(self.frame_counts) + " frames written " + str(timer_current)
self.frame_counts += 1
# counter += 1
# timer_current = time.time() - timer_start
time.sleep(0.16)
# gray = cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY)
# cv2.imshow('video_frame', gray)
# cv2.waitKey(1)
else:
break
# 0.16 delay -> 6 fps
#
# Finishes the video recording therefore the thread too
def stop(self):
if self.open==True:
self.open=False
self.video_out.release()
self.video_cap.release()
cv2.destroyAllWindows()
else:
pass
# Launches the video recording function using a thread
def start(self):
video_thread = threading.Thread(target=self.record)
video_thread.start()
class AudioRecorder():
# Audio class based on pyAudio and Wave
def __init__(self):
self.open = True
self.rate = 44100
self.frames_per_buffer = 1024
self.channels = 2
self.format = pyaudio.paInt16
self.audio_filename = "temp_audio.wav"
self.audio = pyaudio.PyAudio()
self.stream = self.audio.open(format=self.format,
channels=self.channels,
rate=self.rate,
input=True,
frames_per_buffer = self.frames_per_buffer)
self.audio_frames = []
# Audio starts being recorded
def record(self):
self.stream.start_stream()
while(self.open == True):
data = self.stream.read(self.frames_per_buffer)
self.audio_frames.append(data)
if self.open==False:
break
# Finishes the audio recording therefore the thread too
def stop(self):
if self.open==True:
self.open = False
self.stream.stop_stream()
self.stream.close()
self.audio.terminate()
waveFile = wave.open(self.audio_filename, 'wb')
waveFile.setnchannels(self.channels)
waveFile.setsampwidth(self.audio.get_sample_size(self.format))
waveFile.setframerate(self.rate)
waveFile.writeframes(b''.join(self.audio_frames))
waveFile.close()
pass
# Launches the audio recording function using a thread
def start(self):
audio_thread = threading.Thread(target=self.record)
audio_thread.start()
def start_AVrecording(filename):
global video_thread
global audio_thread
video_thread = VideoRecorder()
audio_thread = AudioRecorder()
audio_thread.start()
video_thread.start()
return filename
def start_video_recording(filename):
global video_thread
video_thread = VideoRecorder()
video_thread.start()
return filename
def start_audio_recording(filename):
global audio_thread
audio_thread = AudioRecorder()
audio_thread.start()
return filename
def stop_AVrecording(filename):
audio_thread.stop()
frame_counts = video_thread.frame_counts
elapsed_time = time.time() - video_thread.start_time
recorded_fps = frame_counts / elapsed_time
print "total frames " + str(frame_counts)
print "elapsed time " + str(elapsed_time)
print "recorded fps " + str(recorded_fps)
video_thread.stop()
# Makes sure the threads have finished
while threading.active_count() > 1:
time.sleep(1)
# Merging audio and video signal
if abs(recorded_fps - 6) >= 0.01: # If the fps rate was higher/lower than expected, re-encode it to the expected
print "Re-encoding"
cmd = "ffmpeg -r " + str(recorded_fps) + " -i temp_video.avi -pix_fmt yuv420p -r 6 temp_video2.avi"
subprocess.call(cmd, shell=True)
print "Muxing"
cmd = "ffmpeg -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video2.avi -pix_fmt yuv420p " + filename + ".avi"
subprocess.call(cmd, shell=True)
else:
print "Normal recording\nMuxing"
cmd = "ffmpeg -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video.avi -pix_fmt yuv420p " + filename + ".avi"
subprocess.call(cmd, shell=True)
print ".."
# Required and wanted processing of final files
def file_manager(filename):
local_path = os.getcwd()
if os.path.exists(str(local_path) + "/temp_audio.wav"):
os.remove(str(local_path) + "/temp_audio.wav")
if os.path.exists(str(local_path) + "/temp_video.avi"):
os.remove(str(local_path) + "/temp_video.avi")
if os.path.exists(str(local_path) + "/temp_video2.avi"):
os.remove(str(local_path) + "/temp_video2.avi")
if os.path.exists(str(local_path) + "/" + filename + ".avi"):
os.remove(str(local_path) + "/" + filename + ".avi")
A: I've been looking around for a good answer to this, and I think it is GStreamer...
The documentation for the python bindings is extremely light, and most of it seemed centered around the old 0.10 version of GStreamer instead of the new 1.X versions, but GStreamer is an extremely powerful, cross-platform multimedia framework that can stream, mux, transcode, and display just about anything.
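If GStreamer 1.x and the PyGObject bindings are installed, a rough sketch of a combined capture could look like the following. The element names (autovideosrc, autoaudiosrc, x264enc, vorbisenc, matroskamux, ...) and their availability are assumptions that depend on your platform and installed plugins, so treat this as a starting point rather than a tested recipe:
import time
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

# Webcam + microphone into one file, with a live preview via tee.
pipeline = Gst.parse_launch(
    "autovideosrc ! tee name=t "
    "t. ! queue ! videoconvert ! autovideosink "               # on-screen preview
    "t. ! queue ! videoconvert ! x264enc ! queue ! mux. "      # video branch
    "autoaudiosrc ! queue ! audioconvert ! vorbisenc ! mux. "  # audio branch
    "matroskamux name=mux ! filesink location=capture.mkv"
)

pipeline.set_state(Gst.State.PLAYING)
time.sleep(10)                             # record for ~10 seconds

pipeline.send_event(Gst.Event.new_eos())   # ask the muxer to finalize the file
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)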
A: I used JRodrigoF's script for a while on a project. However, I noticed that sometimes the threads would hang and it would cause the program to crash. Another issue is that openCV does not capture video frames at a reliable rate and ffmpeg would distort my video when re-encoding.
I came up with a new solution that records much more reliably and with much higher quality for my application. It presently only works for Windows because it uses pywinauto and the built-in Windows Camera app. The last bit of the script does some error-checking to confirm the video successfully recorded by checking the timestamp of the name of the video.
https://gist.github.com/mjdargen/956cc968864f38bfc4e20c9798c7d670
import pywinauto
import time
import subprocess
import os
import datetime
def win_record(duration):
subprocess.run('start microsoft.windows.camera:', shell=True) # open camera app
# focus window by getting handle using title and class name
# subprocess call opens camera and gets focus, but this provides alternate way
# t, c = 'Camera', 'ApplicationFrameWindow'
# handle = pywinauto.findwindows.find_windows(title=t, class_name=c)[0]
# # get app and window
# app = pywinauto.application.Application().connect(handle=handle)
# window = app.window(handle=handle)
# window.set_focus() # set focus
time.sleep(2) # have to sleep
# take control of camera window to take video
desktop = pywinauto.Desktop(backend="uia")
cam = desktop['Camera']
# cam.print_control_identifiers()
# make sure in video mode
if cam.child_window(title="Switch to Video mode", auto_id="CaptureButton_1", control_type="Button").exists():
cam.child_window(title="Switch to Video mode", auto_id="CaptureButton_1", control_type="Button").click()
time.sleep(1)
# start then stop video
cam.child_window(title="Take Video", auto_id="CaptureButton_1", control_type="Button").click()
time.sleep(duration+2)
cam.child_window(title="Stop taking Video", auto_id="CaptureButton_1", control_type="Button").click()
# retrieve vids from camera roll and sort
dir = 'C:/Users/m/Pictures/Camera Roll'
all_contents = list(os.listdir(dir))
vids = [f for f in all_contents if "_Pro.mp4" in f]
vids.sort()
vid = vids[-1] # get last vid
# compute time difference
vid_time = vid.replace('WIN_', '').replace('_Pro.mp4', '')
vid_time = datetime.datetime.strptime(vid_time, '%Y%m%d_%H_%M_%S')
now = datetime.datetime.now()
diff = now - vid_time
# time different greater than 2 minutes, assume something wrong & quit
if diff.seconds > 120:
quit()
subprocess.run('Taskkill /IM WindowsCamera.exe /F', shell=True) # close camera app
print('Recorded successfully!')
win_record(2)
A: To the questions asked above: yes, the code should also work under Python 3. I adjusted it a little bit and now it works for Python 2 and Python 3 (tested on Windows 7 with 2.7 and 3.6, though you need to have ffmpeg installed, or at least the ffmpeg.exe executable in the same directory; you can get it here: https://www.ffmpeg.org/download.html ). Of course you also need all the other libraries (cv2, numpy, pyaudio) installed, e.g. with:
pip install opencv-python numpy pyaudio
You can now run the code directly:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# VideoRecorder.py
from __future__ import print_function, division
import numpy as np
import cv2
import pyaudio
import wave
import threading
import time
import subprocess
import os
class VideoRecorder():
"Video class based on openCV"
def __init__(self, name="temp_video.avi", fourcc="MJPG", sizex=640, sizey=480, camindex=0, fps=30):
self.open = True
self.device_index = camindex
self.fps = fps # fps should be the minimum constant rate at which the camera can
self.fourcc = fourcc # capture images (with no decrease in speed over time; testing is required)
self.frameSize = (sizex, sizey) # video formats and sizes also depend and vary according to the camera used
self.video_filename = name
self.video_cap = cv2.VideoCapture(self.device_index)
self.video_writer = cv2.VideoWriter_fourcc(*self.fourcc)
self.video_out = cv2.VideoWriter(self.video_filename, self.video_writer, self.fps, self.frameSize)
self.frame_counts = 1
self.start_time = time.time()
def record(self):
"Video starts being recorded"
# counter = 1
timer_start = time.time()
timer_current = 0
while self.open:
ret, video_frame = self.video_cap.read()
if ret:
self.video_out.write(video_frame)
# print(str(counter) + " " + str(self.frame_counts) + " frames written " + str(timer_current))
self.frame_counts += 1
# counter += 1
# timer_current = time.time() - timer_start
time.sleep(1/self.fps)
# gray = cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY)
# cv2.imshow('video_frame', gray)
# cv2.waitKey(1)
else:
break
def stop(self):
"Finishes the video recording therefore the thread too"
if self.open:
self.open=False
self.video_out.release()
self.video_cap.release()
cv2.destroyAllWindows()
def start(self):
"Launches the video recording function using a thread"
video_thread = threading.Thread(target=self.record)
video_thread.start()
class AudioRecorder():
"Audio class based on pyAudio and Wave"
def __init__(self, filename="temp_audio.wav", rate=44100, fpb=1024, channels=2):
self.open = True
self.rate = rate
self.frames_per_buffer = fpb
self.channels = channels
self.format = pyaudio.paInt16
self.audio_filename = filename
self.audio = pyaudio.PyAudio()
self.stream = self.audio.open(format=self.format,
channels=self.channels,
rate=self.rate,
input=True,
frames_per_buffer = self.frames_per_buffer)
self.audio_frames = []
def record(self):
"Audio starts being recorded"
self.stream.start_stream()
while self.open:
data = self.stream.read(self.frames_per_buffer)
self.audio_frames.append(data)
if not self.open:
break
def stop(self):
"Finishes the audio recording therefore the thread too"
if self.open:
self.open = False
self.stream.stop_stream()
self.stream.close()
self.audio.terminate()
waveFile = wave.open(self.audio_filename, 'wb')
waveFile.setnchannels(self.channels)
waveFile.setsampwidth(self.audio.get_sample_size(self.format))
waveFile.setframerate(self.rate)
waveFile.writeframes(b''.join(self.audio_frames))
waveFile.close()
def start(self):
"Launches the audio recording function using a thread"
audio_thread = threading.Thread(target=self.record)
audio_thread.start()
def start_AVrecording(filename="test"):
global video_thread
global audio_thread
video_thread = VideoRecorder()
audio_thread = AudioRecorder()
audio_thread.start()
video_thread.start()
return filename
def start_video_recording(filename="test"):
global video_thread
video_thread = VideoRecorder()
video_thread.start()
return filename
def start_audio_recording(filename="test"):
global audio_thread
audio_thread = AudioRecorder()
audio_thread.start()
return filename
def stop_AVrecording(filename="test"):
audio_thread.stop()
frame_counts = video_thread.frame_counts
elapsed_time = time.time() - video_thread.start_time
recorded_fps = frame_counts / elapsed_time
print("total frames " + str(frame_counts))
print("elapsed time " + str(elapsed_time))
print("recorded fps " + str(recorded_fps))
video_thread.stop()
# Makes sure the threads have finished
while threading.active_count() > 1:
time.sleep(1)
# Merging audio and video signal
if abs(recorded_fps - 6) >= 0.01: # If the fps rate was higher/lower than expected, re-encode it to the expected
print("Re-encoding")
cmd = "ffmpeg -r " + str(recorded_fps) + " -i temp_video.avi -pix_fmt yuv420p -r 6 temp_video2.avi"
subprocess.call(cmd, shell=True)
print("Muxing")
cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video2.avi -pix_fmt yuv420p " + filename + ".avi"
subprocess.call(cmd, shell=True)
else:
print("Normal recording\nMuxing")
cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video.avi -pix_fmt yuv420p " + filename + ".avi"
subprocess.call(cmd, shell=True)
print("..")
def file_manager(filename="test"):
"Required and wanted processing of final files"
local_path = os.getcwd()
if os.path.exists(str(local_path) + "/temp_audio.wav"):
os.remove(str(local_path) + "/temp_audio.wav")
if os.path.exists(str(local_path) + "/temp_video.avi"):
os.remove(str(local_path) + "/temp_video.avi")
if os.path.exists(str(local_path) + "/temp_video2.avi"):
os.remove(str(local_path) + "/temp_video2.avi")
# if os.path.exists(str(local_path) + "/" + filename + ".avi"):
# os.remove(str(local_path) + "/" + filename + ".avi")
if __name__ == '__main__':
start_AVrecording()
time.sleep(5)
stop_AVrecording()
file_manager()
A: If you notice misalignment between video and audio by the code above, please see my solution below
I think the most highly rated answer above does a great job. However, it did not work perfectly when I was using it, especially when you use a low fps rate (say 10). The main issue is with video recording. In order to properly synchronize video and audio recording with ffmpeg, one has to make sure that cv2.VideoCapture() and cv2.VideoWriter() share exactly the same FPS, because the recorded video's length is solely determined by the fps rate and the number of frames.
Following is my suggested update:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# VideoRecorder.py
from __future__ import print_function, division
import numpy as np
import cv2
import pyaudio
import wave
import threading
import time
import subprocess
import os
import ffmpeg
class VideoRecorder():
"Video class based on openCV"
def __init__(self, name="temp_video.avi", fourcc="MJPG", sizex=640, sizey=480, camindex=0, fps=30):
self.open = True
self.device_index = camindex
self.fps = fps # fps should be the minimum constant rate at which the camera can
self.fourcc = fourcc # capture images (with no decrease in speed over time; testing is required)
self.frameSize = (sizex, sizey) # video formats and sizes also depend and vary according to the camera used
self.video_filename = name
self.video_cap = cv2.VideoCapture(self.device_index)
self.video_cap.set(cv2.CAP_PROP_FPS, self.fps)
self.video_writer = cv2.VideoWriter_fourcc(*self.fourcc)
self.video_out = cv2.VideoWriter(self.video_filename, self.video_writer, self.fps, self.frameSize)
self.frame_counts = 1
self.start_time = time.time()
def record(self):
"Video starts being recorded"
# counter = 1
timer_start = time.time()
timer_current = 0
while self.open:
ret, video_frame = self.video_cap.read()
if ret:
self.video_out.write(video_frame)
# print(str(counter) + " " + str(self.frame_counts) + " frames written " + str(timer_current))
self.frame_counts += 1
# print(self.frame_counts)
# counter += 1
# timer_current = time.time() - timer_start
# time.sleep(1/self.fps)
# gray = cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY)
# cv2.imshow('video_frame', gray)
# cv2.waitKey(1)
else:
break
def stop(self):
"Finishes the video recording therefore the thread too"
if self.open:
self.open=False
self.video_out.release()
self.video_cap.release()
cv2.destroyAllWindows()
def start(self):
"Launches the video recording function using a thread"
video_thread = threading.Thread(target=self.record)
video_thread.start()
class AudioRecorder():
"Audio class based on pyAudio and Wave"
def __init__(self, filename="temp_audio.wav", rate=44100, fpb=1024, channels=2):
self.open = True
self.rate = rate
self.frames_per_buffer = fpb
self.channels = channels
self.format = pyaudio.paInt16
self.audio_filename = filename
self.audio = pyaudio.PyAudio()
self.stream = self.audio.open(format=self.format,
channels=self.channels,
rate=self.rate,
input=True,
frames_per_buffer = self.frames_per_buffer)
self.audio_frames = []
def record(self):
"Audio starts being recorded"
self.stream.start_stream()
while self.open:
data = self.stream.read(self.frames_per_buffer)
self.audio_frames.append(data)
if not self.open:
break
def stop(self):
"Finishes the audio recording therefore the thread too"
if self.open:
self.open = False
self.stream.stop_stream()
self.stream.close()
self.audio.terminate()
waveFile = wave.open(self.audio_filename, 'wb')
waveFile.setnchannels(self.channels)
waveFile.setsampwidth(self.audio.get_sample_size(self.format))
waveFile.setframerate(self.rate)
waveFile.writeframes(b''.join(self.audio_frames))
waveFile.close()
def start(self):
"Launches the audio recording function using a thread"
audio_thread = threading.Thread(target=self.record)
audio_thread.start()
def start_AVrecording(filename="test"):
global video_thread
global audio_thread
video_thread = VideoRecorder()
audio_thread = AudioRecorder()
audio_thread.start()
video_thread.start()
return filename
def start_video_recording(filename="test"):
global video_thread
video_thread = VideoRecorder()
video_thread.start()
return filename
def start_audio_recording(filename="test"):
global audio_thread
audio_thread = AudioRecorder()
audio_thread.start()
return filename
def stop_AVrecording(filename="test"):
audio_thread.stop()
frame_counts = video_thread.frame_counts
elapsed_time = time.time() - video_thread.start_time
recorded_fps = frame_counts / elapsed_time
print("total frames " + str(frame_counts))
print("elapsed time " + str(elapsed_time))
print("recorded fps " + str(recorded_fps))
video_thread.stop()
# Makes sure the threads have finished
while threading.active_count() > 1:
time.sleep(1)
video_stream = ffmpeg.input(video_thread.video_filename)
audio_stream = ffmpeg.input(audio_thread.audio_filename)
ffmpeg.output(audio_stream, video_stream, 'out.mp4').run(overwrite_output=True)
# # Merging audio and video signal
# if abs(recorded_fps - 6) >= 0.01: # If the fps rate was higher/lower than expected, re-encode it to the expected
# print("Re-encoding")
# cmd = "ffmpeg -r " + str(recorded_fps) + " -i temp_video.avi -pix_fmt yuv420p -r 6 temp_video2.avi"
# subprocess.call(cmd, shell=True)
# print("Muxing")
# cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video2.avi -pix_fmt yuv420p " + filename + ".avi"
# subprocess.call(cmd, shell=True)
# else:
# print("Normal recording\nMuxing")
# cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video.avi -pix_fmt yuv420p " + filename + ".avi"
# subprocess.call(cmd, shell=True)
# print("..")
def file_manager(filename="test"):
"Required and wanted processing of final files"
local_path = os.getcwd()
if os.path.exists(str(local_path) + "/temp_audio.wav"):
os.remove(str(local_path) + "/temp_audio.wav")
if os.path.exists(str(local_path) + "/temp_video.avi"):
os.remove(str(local_path) + "/temp_video.avi")
if os.path.exists(str(local_path) + "/temp_video2.avi"):
os.remove(str(local_path) + "/temp_video2.avi")
# if os.path.exists(str(local_path) + "/" + filename + ".avi"):
# os.remove(str(local_path) + "/" + filename + ".avi")
if __name__ == '__main__':
start_AVrecording()
# try:
# while True:
# pass
# except KeyboardInterrupt:
# stop_AVrecording()
time.sleep(10)
stop_AVrecording()
print("finishing recording")
file_manager()
A: Using everyone's contributions and following the suggestion of Paul, I was able to come up with the following code:
Recorder.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# VideoRecorder.py
from __future__ import print_function, division
import numpy as np
import sys
import cv2
import pyaudio
import wave
import threading
import time
import subprocess
import os
import ffmpeg
REC_FOLDER = "recordings/"
class Recorder():
def __init__(self, filename):
self.filename = filename
self.video_thread = self.VideoRecorder(self, REC_FOLDER + filename)
self.audio_thread = self.AudioRecorder(self, REC_FOLDER + filename)
def startRecording(self):
self.video_thread.start()
self.audio_thread.start()
def stopRecording(self):
self.video_thread.stop()
self.audio_thread.stop()
def saveRecording(self):
#Save audio / Show video resume
self.audio_thread.saveAudio()
self.video_thread.showFramesResume()
#Merges both streams and writes
video_stream = ffmpeg.input(self.video_thread.video_filename)
audio_stream = ffmpeg.input(self.audio_thread.audio_filename)
while (not os.path.exists(self.audio_thread.audio_filename)):
print("waiting for audio file to exit...")
stream = ffmpeg.output(video_stream, audio_stream, REC_FOLDER + self.filename +".mp4")
try:
ffmpeg.run(stream, capture_stdout=True, capture_stderr=True, overwrite_output=True)
except ffmpeg.Error as e:
print(e.stdout, file=sys.stderr)
print(e.stderr, file=sys.stderr)
class VideoRecorder():
"Video class based on openCV"
def __init__(self, recorder, name, fourcc="MJPG", frameSize=(640,480), camindex=0, fps=15):
self.recorder = recorder
self.open = True
self.duration = 0
self.device_index = camindex
self.fps = fps # fps should be the minimum constant rate at which the camera can
self.fourcc = fourcc # capture images (with no decrease in speed over time; testing is required)
self.video_filename = name + ".avi" # video formats and sizes also depend and vary according to the camera used
self.video_cap = cv2.VideoCapture(self.device_index, cv2.CAP_DSHOW)
self.video_writer = cv2.VideoWriter_fourcc(*fourcc)
self.video_out = cv2.VideoWriter(self.video_filename, self.video_writer, self.fps, frameSize)
self.frame_counts = 1
self.start_time = time.time()
def record(self):
"Video starts being recorded"
counter = 1
while self.open:
ret, video_frame = self.video_cap.read()
if ret:
self.video_out.write(video_frame)
self.frame_counts += 1
counter += 1
self.duration += 1/self.fps
if (video_frame is None): print("I WAS NONEEEEEEEEEEEEEEEEEEEEEE")
gray = cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY)
cv2.imshow('video_frame', gray)
cv2.waitKey(1)
while(self.duration - self.recorder.audio_thread.duration >= 0.2 and self.recorder.audio_thread.open):
time.sleep(0.2)
else:
break
#Release Video
self.video_out.release()
self.video_cap.release()
cv2.destroyAllWindows()
self.video_out = None
def stop(self):
"Finishes the video recording therefore the thread too"
self.open=False
def start(self):
"Launches the video recording function using a thread"
self.thread = threading.Thread(target=self.record)
self.thread.start()
def showFramesResume(self):
#Only stop of video has all frames
frame_counts = self.frame_counts
elapsed_time = time.time() - self.start_time
recorded_fps = self.frame_counts / elapsed_time
print("total frames " + str(frame_counts))
print("elapsed time " + str(elapsed_time))
print("recorded fps " + str(recorded_fps))
class AudioRecorder():
"Audio class based on pyAudio and Wave"
def __init__(self, recorder, filename, rate=44100, fpb=1024, channels=1, audio_index=0):
self.recorder = recorder
self.open = True
self.rate = rate
self.duration = 0
self.frames_per_buffer = fpb
self.channels = channels
self.format = pyaudio.paInt16
self.audio_filename = filename + ".wav"
self.audio = pyaudio.PyAudio()
self.stream = self.audio.open(format=self.format,
channels=self.channels,
rate=self.rate,
input=True,
input_device_index=audio_index,
frames_per_buffer = self.frames_per_buffer)
self.audio_frames = []
def record(self):
"Audio starts being recorded"
self.stream.start_stream()
t_start = time.time_ns()
while self.open:
try:
self.duration += self.frames_per_buffer / self.rate
data = self.stream.read(self.frames_per_buffer)
self.audio_frames.append(data)
except Exception as e:
print('\n' + '*'*80)
print('PyAudio read exception at %.1fms\n' % ((time.time_ns() - t_start)/10**6))
print(e)
print('*'*80 + '\n')
while(self.duration - self.recorder.video_thread.duration >= 0.5):
time.sleep(0.5)
#Closes audio stream
self.stream.stop_stream()
self.stream.close()
self.audio.terminate()
def stop(self):
"Finishes the audio recording therefore the thread too"
self.open = False
def start(self):
"Launches the audio recording function using a thread"
self.thread = threading.Thread(target=self.record)
self.thread.start()
def saveAudio(self):
#Save Audio File
waveFile = wave.open(self.audio_filename, 'wb')
waveFile.setnchannels(self.channels)
waveFile.setsampwidth(self.audio.get_sample_size(self.format))
waveFile.setframerate(self.rate)
waveFile.writeframes(b''.join(self.audio_frames))
waveFile.close()
Main.py
from recorder import Recorder
import time
recorder = Recorder("test1")
recorder.startRecording()
time.sleep(240)
recorder.stopRecording()
recorder.saveRecording()
With this solution, the camera and the audio will wait for each other.
I also tried the FFmpeg Re-encoding and Muxing and even though it was able to synchronize the audio with video, the video had a massive quality drop.
A: You can do this with offline HTML/JS code that does the video-with-audio recording, and use the Python library pywebview to open that page. It should work fine.
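For what it's worth, a minimal sketch of the Python side, assuming pywebview is installed (pip install pywebview) and recorder.html is a hypothetical page containing the getUserMedia/MediaRecorder JavaScript that does the actual capture:

import webview

# recorder.html is assumed to sit next to this script and to hold the
# getUserMedia/MediaRecorder logic that records video with audio in the page.
window = webview.create_window('Recorder', 'recorder.html')
webview.start()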
A: I was randomly getting "[Errno -9999] Unanticipated host error" while using JRodrigoF's solution and found that it's due to a race condition where the audio stream can be closed just before being read for the last time inside record() of the AudioRecorder class.
I modified it slightly so that all the closing procedures are done after the while loop, and added a function list_audio_devices() that shows the list of audio devices to select from. I also added an audio device index as a parameter to choose an audio device.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# VideoRecorder.py
from __future__ import print_function, division
import numpy as np
import cv2
import pyaudio
import wave
import threading
import time
import subprocess
import os
class VideoRecorder():
"Video class based on openCV"
def __init__(self, name="temp_video.avi", fourcc="MJPG", sizex=640, sizey=480, camindex=0, fps=30):
self.open = True
self.device_index = camindex
self.fps = fps # fps should be the minimum constant rate at which the camera can
self.fourcc = fourcc # capture images (with no decrease in speed over time; testing is required)
self.frameSize = (sizex, sizey) # video formats and sizes also depend and vary according to the camera used
self.video_filename = name
self.video_cap = cv2.VideoCapture(self.device_index)
self.video_writer = cv2.VideoWriter_fourcc(*self.fourcc)
self.video_out = cv2.VideoWriter(self.video_filename, self.video_writer, self.fps, self.frameSize)
self.frame_counts = 1
self.start_time = time.time()
def record(self):
"Video starts being recorded"
# counter = 1
timer_start = time.time()
timer_current = 0
while self.open:
ret, video_frame = self.video_cap.read()
if ret:
self.video_out.write(video_frame)
# print(str(counter) + " " + str(self.frame_counts) + " frames written " + str(timer_current))
self.frame_counts += 1
# counter += 1
# timer_current = time.time() - timer_start
time.sleep(1/self.fps)
# gray = cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY)
# cv2.imshow('video_frame', gray)
# cv2.waitKey(1)
else:
break
def stop(self):
"Finishes the video recording therefore the thread too"
if self.open:
self.open=False
self.video_out.release()
self.video_cap.release()
cv2.destroyAllWindows()
def start(self):
"Launches the video recording function using a thread"
video_thread = threading.Thread(target=self.record)
video_thread.start()
class AudioRecorder():
"Audio class based on pyAudio and Wave"
def __init__(self, filename="temp_audio.wav", rate=44100, fpb=2**12, channels=1, audio_index=0):
self.open = True
self.rate = rate
self.frames_per_buffer = fpb
self.channels = channels
self.format = pyaudio.paInt16
self.audio_filename = filename
self.audio = pyaudio.PyAudio()
self.stream = self.audio.open(format=self.format,
channels=self.channels,
rate=self.rate,
input=True,
input_device_index=audio_index,
frames_per_buffer = self.frames_per_buffer)
self.audio_frames = []
def record(self):
"Audio starts being recorded"
self.stream.start_stream()
t_start = time.time_ns()
while self.open:
try:
data = self.stream.read(self.frames_per_buffer)
self.audio_frames.append(data)
except Exception as e:
print('\n' + '*'*80)
print('PyAudio read exception at %.1fms\n' % ((time.time_ns() - t_start)/10**6))
print(e)
print('*'*80 + '\n')
time.sleep(0.01)
self.stream.stop_stream()
self.stream.close()
self.audio.terminate()
waveFile = wave.open(self.audio_filename, 'wb')
waveFile.setnchannels(self.channels)
waveFile.setsampwidth(self.audio.get_sample_size(self.format))
waveFile.setframerate(self.rate)
waveFile.writeframes(b''.join(self.audio_frames))
waveFile.close()
def stop(self):
"Finishes the audio recording therefore the thread too"
if self.open:
self.open = False
def start(self):
"Launches the audio recording function using a thread"
audio_thread = threading.Thread(target=self.record)
audio_thread.start()
def start_AVrecording(filename="test", audio_index=0, sample_rate=44100):
global video_thread
global audio_thread
video_thread = VideoRecorder()
audio_thread = AudioRecorder(audio_index=audio_index, rate=sample_rate)
audio_thread.start()
video_thread.start()
return filename
def start_video_recording(filename="test"):
global video_thread
video_thread = VideoRecorder()
video_thread.start()
return filename
def start_audio_recording(filename="test", audio_index=0, sample_rate=44100):
global audio_thread
audio_thread = AudioRecorder(audio_index=audio_index, rate=sample_rate)
audio_thread.start()
return filename
def stop_AVrecording(filename="test"):
audio_thread.stop()
frame_counts = video_thread.frame_counts
elapsed_time = time.time() - video_thread.start_time
recorded_fps = frame_counts / elapsed_time
print("total frames " + str(frame_counts))
print("elapsed time " + str(elapsed_time))
print("recorded fps " + str(recorded_fps))
video_thread.stop()
# Makes sure the threads have finished
while threading.active_count() > 1:
time.sleep(1)
# Merging audio and video signal
if abs(recorded_fps - 6) >= 0.01: # If the fps rate was higher/lower than expected, re-encode it to the expected
print("Re-encoding")
cmd = "ffmpeg -r " + str(recorded_fps) + " -i temp_video.avi -pix_fmt yuv420p -r 6 temp_video2.avi"
subprocess.call(cmd, shell=True)
print("Muxing")
cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video2.avi -pix_fmt yuv420p " + filename + ".avi"
subprocess.call(cmd, shell=True)
else:
print("Normal recording\nMuxing")
cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video.avi -pix_fmt yuv420p " + filename + ".avi"
subprocess.call(cmd, shell=True)
print("..")
def file_manager(filename="test"):
"Required and wanted processing of final files"
local_path = os.getcwd()
if os.path.exists(str(local_path) + "/temp_audio.wav"):
os.remove(str(local_path) + "/temp_audio.wav")
if os.path.exists(str(local_path) + "/temp_video.avi"):
os.remove(str(local_path) + "/temp_video.avi")
if os.path.exists(str(local_path) + "/temp_video2.avi"):
os.remove(str(local_path) + "/temp_video2.avi")
# if os.path.exists(str(local_path) + "/" + filename + ".avi"):
# os.remove(str(local_path) + "/" + filename + ".avi")
def list_audio_devices(name_filter=None):
pa = pyaudio.PyAudio()
device_index = None
sample_rate = None
for x in range(pa.get_device_count()):
info = pa.get_device_info_by_index(x)
print(pa.get_device_info_by_index(x))
if name_filter is not None and name_filter in info['name']:
device_index = info['index']
sample_rate = int(info['defaultSampleRate'])
break
return device_index, sample_rate
if __name__ == '__main__':
start_AVrecording()
time.sleep(5)
stop_AVrecording()
file_manager()
| {
"language": "en",
"url": "https://stackoverflow.com/questions/14140495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36"
} |
Q: Heroku, Rails: Wildcard SSL causes env['warden'] to return nil In my Rails app I'm using Devise. Users enter the top page from several subdomains like "ja.myapp.com", "www.myapp.com", etc.
I use the other subdomains only for promotion pages, so when users go to the "sign in" page, all of them are redirected to the "www" subdomain page.
When a user signs in, if the user's profile_type is "Student", the user is redirected to the students' home; if it is "Teacher", to the teachers' home.
Everything works fine when I test with Pow, but after I uploaded it to the production server, all users are redirected to the TOP page after signing in.
I rolled back the app and tried signing in again, but the same thing happened.
This happened after I changed the Heroku certs from a "single domain SSL" to a "wildcard subdomains SSL".
After I changed the SSL back to the single-domain one, this no longer happens on the rolled-back site.
I suppose this happened because of the "wildcard subdomain SSL" and the Devise sessions controller, but I'm not so sure about that.
Does anyone have any idea how to fix this problem?
Below is the code.
routes.rb
root :to => 'students#index', :constraints => lambda { |request| request.env['warden'].user.try(:profile_type) == 'Student' }
root :to => 'teachers#index', :constraints => lambda { |request| request.env['warden'].user.try(:profile_type) == 'Teacher' }
root :to => 'pages#home'
devise_for :users, :controllers => {:sessions => "sessions", :registrations => "registrations", :confirmations => "confirmations"}
config/initializers/session_store.rb
MyAPP::Application.config.session_store :cookie_store, key: '_my_app_secure_session', domain: :all
# coding: utf-8
class SessionsController < Devise::SessionsController
after_filter :clear_sign_signout_flash
def after_sign_in_path_for(resource)
url = root_path
end
def new
@title = I18n.t "sessions.new.title"
super
end
def create
super
end
def destroy
super
end
protected
def clear_sign_signout_flash
if flash.keys.include?(:notice)
flash.delete(:notice)
end
end
end
application_controller
class ApplicationController < ActionController::Base
protect_from_forgery
before_filter :set_locale, :www_redirect
private
def www_redirect
#if Rails.env.production
parsed_subdomain = request.subdomains.first
locale = Locale.find_by_subdomain(parsed_subdomain)
if (use_www? && parsed_subdomain != "www") || locale.nil?
redirect_to request.url.sub(parsed_subdomain, "www")
end
#end
end
def use_www?
true
end
def set_locale
if user_signed_in?
if current_user.profile_type == "Teacher"
I18n.locale = I18n.default_locale
else
I18n.locale = current_user.locale.i18n_form.to_sym
end
else
if request.subdomains.first == "www" && cookies[:locale]
I18n.locale = cookies[:locale].to_sym
else
I18n.locale = extract_locale_from_subdomain || I18n.default_locale
cookies[:locale] = {
value: I18n.locale.to_s,
expires: 1.year.from_now,
domain: :all
}
end
end
end
def extract_locale_from_subdomain
parsed_subdomain = request.subdomains.first
locale = Locale.find_by_subdomain(parsed_subdomain)
if locale
I18n.available_locales.include?(locale.i18n_form.to_sym) ? locale.i18n_form.to_sym : nil
else
nil
end
end
end
locale_settings_controller.rb
class LocaleSettingsController < ApplicationController
def switch_language
locale = params[:locale]
locale = "www" if locale == "en"
path = params[:uri]
cookies[:locale] = {
value: locale,
expires: 1.year.from_now,
domain: :all
}
if Rails.env.production?
base_url = "http://" + "#{locale}" + ".myapp.com"
else
base_url = "http://" + "#{locale}" + ".myapp.dev"
end
url = path == "/" ? base_url : base_url + "#{path}"
redirect_to url
end
end
production.rb
config.force_ssl = true
config/initializers/rack_rewrite.rb
require 'rack/rewrite'
LingualBox::Application.config.middleware.insert_before(Rack::Lock, Rack::Rewrite) do
r301 %r{.*}, 'http://www.myapp.com$&', :if => Proc.new {|rack_env|
rack_env['SERVER_NAME'] == 'myapp.com'
}
end if Rails.env == 'production'
| {
"language": "en",
"url": "https://stackoverflow.com/questions/13490408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
} |
Q: Trying to use Javascript Function in AmCharts4 Barchart to convert number of days to years and months I am adding some charts to a cakephp4 app using amcharts 4.
I have an amCharts4 bar chart that shows the number of days that I need to convert to
<years> years, <months> months, <days> days
in my source data I have an "amount" column that contains the number of days, for example 500. I need that to show up as '1 year, 4 months, 15 days. I am using a JS function getFormatedStringFromDays(numberOfDays) which works great.
I'm trying to set a var to be used in the function - I've tried Number(), parseInt(), etc. trying to get the var mypreconvertednumber to be seen as a number. Every attempt I
make produces NaN - I can assign mypreconvertednumber a number (like var mypreconvertednumber = 500;) and of course it works fine.
My source data looks like this (amount is the number I'm trying to convert in the label):
barchart1.data = [{
"type": "Last Check In",
"amount": 500
},
var mypreconvertednumber = "{values.valueX.workingValue.formatNumber('#')}";
var valueLabel = series.bullets.push(new am4charts.LabelBullet());
valueLabel.label.text = getFormatedStringFromDays(mypreconvertednumber);
valueLabel.label.horizontalCenter = "left";
valueLabel.label.fontSize = 10;
valueLabel.label.truncate = false;
valueLabel.label.hideOversized = false;
valueLabel.label.dx = 5;
I'm hoping someone can point me to an amcharts built-in function to process the value label text like in amcharts 3.
"labelFunction": function(data) {
return getFormatedStringFromDays(data);
},
I have been scouring the docs - and probably looked right past the option I need.
Thank you for any help any of you can give me to help solve this.
The function works when I place a number like:
getFormatedStringFromDays('500') // will return 1 year, 4 months, 15 days.
But when I try to use the var mypreconvertednumber = "{values.valueX.workingValue.formatNumber('#')}"; the function returns NaN - ive tried using Number(), parseInt() to attempt to make sure its a number but always get NaN.
getFormatedStringFromDays(mypreconvertednumber) // produces NaN - even though the source data amount is: 500
barchart1.data = [{ "type": "Last Check In", "amount": 500 },...]
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72578510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: XSLT 2.0 creating incremental footnote numbers in HTML output through multi-stage transformation This question builds on the responses to my original question, where it was suggested that I post a followup. This concerns attempting to integrate the XSL code from the previous post.
In the previous question I presented a simplified version of the TEI:XML document I am transforming into HTML using XSLT 2.0 (the full tei file and current xslt can be found here https://xsltfiddle.liberty-development.net/bdxtqT/6). This is a fuller view of the hierarchy, but not all details:
<tei>
<teiHeader/>
<text>
<front/>
<body>
<p xml:lang="LA">
<seg type="typefoo" corresp="#foo601" xml:id="foo361">
<date type="deposition_date" when="1245">Idus
marcii</date>In non hendrerit metus. Sed in posuere
eros, sit amet pharetra lacus.</seg>
<seg type="typefoo" xml:id="foo362">Nullam semper varius
justo, vitae mollis turpis dapibus sit amet.
Donec<note type="public_note">note content</note>
rhoncus tempor urna sit amet imperdiet.</seg>
<seg type="typefoo" xml:id="foo363">Integer
id ante nunc. Curabitur at ligula sed arcu consequat
gravida et id orci. Morbi quis porta dolor.</seg>
<seg type="typefoo" corresp="#fooid2">Sed dictum<note
type="public_note">note content 2</note> sem nec urna sodales
cursus. Donec sit amet nibh tempor, congue ligula semper,
rhoncus odio.</seg>
</p>
</body>
<back>
<p xml:lang="EN">
<seg>
<seg>
</p>
<p xml:lang="FR">
<seg>
<seg>
</p>
</back>
</text>
</tei>
The desired HTML output is as follows. Incremental footnote numbers are created in <sup> based on one of three conditions:
* date[@type="deposition_date"] (add footnote no.),
* seg[@type="typefoo"] (add footnote no.)
* note[@type="public_note"] (replace with footnote no.).
Desired output
<div>
<p>Idus marcii<sup>1</sup>In non hendrerit metus. Sed in
posuere eros, sit amet pharetra lacus.</p><sup>2</sup>
<p>Nullam semper varius justo, vitae mollis turpis
dapibus sit amet. Donec<sup>3</sup> rhoncus tempor
urna sit amet imperdiet.</p>
<p>Integer id ante nunc. Curabitur at ligula sed
arcu consequat gravida et id orci. Morbi quis porta
dolor.</p>
<p>Sed dictum sem<sup>4</sup> nec urna sodales cursus.
Donec sit amet nibh tempor, congue ligula semper,
rhoncus odio.</p><sup>5</sup>
<div>
[...]
<div>
<p><sup>1</sup> 1245</p>
<p><sup>2</sup> foo601</p>
<p><sup>3</sup> note here</p>
<p><sup>4</sup> note here</p>
<p><sup>5</sup> fooid2</p>
</div>
The full XSLT transformation document is found at https://xsltfiddle.liberty-development.net/bdxtqT/6, where one can see the following problems:
* date[@type='deposition_date'] is being entirely replaced, instead of receiving an added footnote marker
* seg[@type='dep_event' and @corresp] is not receiving an added footnote marker, but it appears in the <div> at the bottom of the page.
The XSL file is too long and doesn't seem to paste here correctly. Interact with files here https://xsltfiddle.liberty-development.net/bdxtqT/6.
NB: I am restricted to XSLT 2.0 as this transformation is fired off inside eXist-DB with Xquery 3.1.
Thanks very much!
A: I think, unless you want to prefix all your paths in that template matching / with the variable I suggested to store the result of the marker insertion, one way to merge the existing code with my suggestion is to change the match from / to /* e.g. use
<xsl:template match="/*">
<!-- div for text -->
<div>
<!-- LATIN : always present -->
<h3>Latin</h3>
<xsl:apply-templates select="//tei:body//tei:p"/>
<!-- ENGLISH : always present -->
<h3>English</h3>
<xsl:apply-templates select="//tei:back//tei:p[@xml:lang='EN']"/>
<!-- FRENCH : sometimes present -->
<xsl:if test="//tei:back//tei:p[@xml:lang='FR']">
<h3>French</h3>
<xsl:apply-templates select="//tei:back//tei:p[@xml:lang='FR']"/>
</xsl:if>
<!-- FOOTER for notes -->
<div class="footer">
<!-- FOOTNOTES (uses mode="build_footnotes" to construct a block of footnotes in <div>) -->
<xsl:if test="$footnote-sources">
<div class="footnotes" id="footnotesdiv">
<xsl:apply-templates select="$footnote-sources" mode="build_footnotes"/>
</div>
</xsl:if>
</div>
</div>
</xsl:template>
that would then mean that my suggestion to use
<xsl:template match="/">
<xsl:apply-templates select="$fn-markers-added/node()"/>
</xsl:template>
can be kept and the XSLT processor would apply it.
There is, however, the use of the variable $footnote-sources at the end of the template. As far as I can see from the snippet, its use on nodes from the original input document would not be affected by the introduction of a temporary result with markers added, but it would feel wrong to me to keep processing the original input at that place while the rest works on the temporary result, so I would be inclined to change the variable declaration to
<xsl:variable name="footnote-sources" select="$fn-markers-added/tei:text//tei:seg//date[@type='deposition_date'] |
$fn-markers-added/tei:text//tei:seg//note[@type='public_note'] | $fn-markers-added/tei:text//tei:seg[@corresp]"/>
With those two changes I think my suggestion in the previous answer should then apply. Although, now looking again at the posted source with a tei root element, I wonder how a global variable with paths starting with tei:text would select anything, but perhaps that is an omission in the sample.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/52860271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Nginx gives an Internal Server Error 500 after I have configured basic auth I am trying to do basic auth on Nginx. I have version 1.9.3 up and running on Ubuntu 14.04 and it works fine with a simple html file.
Here is the html file:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title></title>
</head>
<body>
"Some shoddy text"
</body>
</html>
And here is my nginx.conf file:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
server {
listen 80;
server_name 192.168.1.30;
location / {
root /www;
index index.html;
auth_basic "Restricted";
auth_basic_user_file /etc/users;
}
}
}
I used htpasswd to create two users in the "users" file under /etc (username "calvin" password "Calvin", and username "hobbes" password "Hobbes"). It's encrypted and looks like this:
calvin:$apr1$Q8LGMfGw$RbO.cG4R1riIfERU/175q0
hobbes:$apr1$M9KoUUhh$ayGd8bqqlN989ghWdTP4r/
All files belong to root:root. The server IP address is 192.168.1.30 and I am referencing that directly in the conf file.
It all works fine if I comment out the two auth lines and restart nginx, but if I uncomment them, then I do indeed get the username and password prompts when I try to load the site, but immediately thereafter get an Error 500 Internal Server error which seems to persist and I have to restart nginx.
Can anybody see what I'm doing wrong here? I had the same behaviour on the standard Ubuntu 14.04 apt-get version of Nginx (1.4.something), so I don't think it's the nginx version.
A: Not really an answer to your question as you are using MD5. However as this thread pops up when searching for the error, I am attaching this to it.
Similar errors happen when bcrypt is used to generate passwords for auth_basic:
htpasswd -B <file> <user> <pass>
Since bcrypt is not supported within auth_basic at the moment, mysterious 500 errors can be found in the nginx error.log (usually found at /var/log/nginx/error.log); they look something like this:
*1 crypt_r() failed (22: Invalid argument), ...
At present the solution is to generate a new password using md5, which is the default anyway.
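For example, regenerating the entry with the MD5 scheme could look like this (the file path and user name are just placeholders):

# -m forces the (default) MD5 scheme; drop -c if the file already exists
htpasswd -c -m /etc/nginx/.htpasswd someuser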
Edited to address md5 issues as brought up by @EricWolf in the comments:
md5 has its problems for sure; some context can be found in the following threads:
* Is md5 considered insecure?
* Is md5 still considered secure for single use authentications?
Of the two, the speed issue can be mitigated by using fail2ban: by banning on failed basic auth you'll make online brute forcing impractical (guide). You can also use long passwords to try and fortify things a bit, as suggested here.
Other than that it seems this is as good as it gets with nginx...
A: I was running Nginx in a Docker environment and I had the same issue. The reason was that some of the passwords were generated using bcrypt. I resolved it by using nginx:alpine.
A: Do you want a MORE secure password hash with nginx basic_auth? Do this:
echo "username:"$(mkpasswd -m sha-512) >> .htpasswd
SHA-512 is not considered nearly as good as bcrypt, but it's the best nginx supports at the moment.
A: I just stick the htpasswd file under "/etc/nginx" myself.
Assuming it is named htcontrol, then ...
sudo htpasswd -c /etc/nginx/htcontrol calvin
Follow the prompt for the password and the file will be in the correct place.
location / {
...
auth_basic "Restricted";
auth_basic_user_file htcontrol;
}
or auth_basic_user_file /etc/nginx/htcontrol; but the first variant works for me
A: I had goofed up when initially creating a user. As a result, the htpasswd file looked like:
user:
user:$apr1$passwdhashpasswdhashpasswdhash...
After deleting the blank user, everything worked fine.
A: I just had the same problem - after checking log as suggested by @Drazen Urch I've discovered that the file had root:root permissions - after changing to forge:forge (I'm using Forge with Digital Ocean) - the problem went away.
A: Well, just use correct RFC 2307 syntax:
passwordvalue = schemeprefix encryptedpassword
schemeprefix = "{" scheme "}"
scheme = "crypt" / "md5" / "sha" / altscheme
altscheme = "x-" keystring
encryptedpassword = encrypted password
For example, the SHA1 hash of "helloworld" for user admin, in {SHA} syntax, will be:
admin:{SHA}at+xg6SiyUovktq1redipHiJpaE=
I had the same error because I wrote {SHA1}, which is against the RFC syntax. When I fixed it, everything worked like a charm. {sha} will not work either; only {SHA} is correct.
A: First, check out your nginx error logs:
tail -f /var/log/nginx/error.log
In my case, I found the error:
[crit] 18901#18901: *6847 open() "/root/temp/.htpasswd" failed (13: Permission denied),
The /root/temp directory is one of my test directories, and cannot be read by nginx. After change it to /etc/apache2/ (follow the official guide https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-http-basic-authentication/) everything works fine.
===
After executing the ps command we can see that the nginx worker processes run as the user www-data. I had tried chown www-data:www-data /root/temp to make sure www-data could access this file, but it still did not work. To be honest, I don't have a very deep understanding of Linux file permissions, so I changed the location to /etc/apache2/ to fix this in the end. And after testing, you can also put the .htpasswd file in other directories under /etc (like /etc/nginx).
A: I too was facing the same problem while setting up authentication for kibana. Here is the error in my /var/log/nginx/error.log file
2020/04/13 13:43:08 [crit] 49662#49662: *152 crypt_r() failed (22:
Invalid argument), client: 157.42.72.240, server: 168.61.168.150,
request: “GET / HTTP/1.1”, host: “168.61.168.150”
I resolved this issue by adding authentication using this.
sudo sh -c "echo -n 'kibanaadmin:' >> /etc/nginx/htpasswd.users"
sudo sh -c "openssl passwd -apr1 >> /etc/nginx/htpasswd.users"
You can refer this post if you are trying to setup kibana and got this issue.
https://medium.com/@shubham.singh98/log-monitoring-with-elk-stack-c5de72f0a822?postPublishedType=repub
A: In my case, I was passing a plain-text password with the -p flag, and coincidentally my password started with a $ character.
So I updated my password and the error was gone.
NB: The other answers helped me a lot in figuring out my problem. I am posting my solution here in case anyone gets stuck in a rare case like mine.
A: In my case, I had my auth_basic setup protecting an nginx location that was served by a proxy_pass configuration.
The configured proxy_pass location wasn't returning a successful HTTP200 response, which caused nginx to respond with an Internal Server Error after I had entered the correct username and password.
If you have a similar setup, ensure that the proxy_pass location protected by auth_basic is returning an HTTP200 response after you rule out username/password issues.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/31833583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "29"
} |
Q: Why does this LinkedList addLast implementation work? I am working on adding an item to the end of a linked list (this isn't homework...just an exercise for myself).
Here is the program:
public class CustomLinkedList {
private static Node head = null;
private int size = 0;
public static void main(String[] args) {
CustomLinkedList myList = new CustomLinkedList();
myList.add(5);
myList.add(9);
myList.add(3);
System.out.println("List Size: " + myList.size);
myList.print();
}
private int size() {
return this.size;
}
private void print() {
Node temp = head;
for (int i=0; i<=size-1;i++){
System.out.print(temp.value + " ");
temp = temp.next;
}
System.out.println();
}
private void add(int value) {
if (head == null) {
head = new Node();
head.value = value;
head.next = null;
size++;
} else {
Node temp = head;
while (temp.next != null) {
temp = temp.next;
}
temp.next = new Node();
(temp.next).value = value;
size++;
}
}
}
Here is my Node class:
public class Node {
public int value;
public Node next;
public int getValue(){
return this.value;
}
}
Here is what I think is happening:
1. I have an original/ongoing list starting with "head."
2. I want to add to that list.
3. To add to it, I need to find the end of it. I do that by creating a new node called temp (which is just a copy of the original list).
4. I traverse the copy (temp) until I reached the end.
5. Once I reach the end, I create a new node.
To me, this is where my code stops. Now, in my mind, I need to add code that says, "Alright, you have your new node, you know where it needs to go, so let's go through the real list and add it."
But I don't have that. According to my debugger (image below), the right thing is happening, but I don't see the magic that is adding the new node to the original list. How is this working?
Edit:
I did look at other implementations (like the one here); it looked very similar. However, I still couldn't figure out why it works without assigning temp to head (or to head.next). I believe I get linked lists in theory; I just don't understand why this bit works.
A: Your confusion is thinking that temp is different from head. It's not.
They are both variables holding references to the same Node object. Changes made via either variable are reflected in the (same) object they reference. When you add a Node to temp, you add it to the real list.
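A tiny sketch of that aliasing, using the Node class from the question (purely illustrative):

Node head = new Node();
Node temp = head;                       // temp and head now reference the SAME object
temp.next = new Node();                 // mutate the object through temp...
System.out.println(head.next != null);  // ...and head sees the change: prints true

So when your add() walks temp down to the last node and sets temp.next, it is setting the next field of a node that is still reachable from head, which is why the list grows without any reassignment of head.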
| {
"language": "en",
"url": "https://stackoverflow.com/questions/36253275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Setting fs.default.name in core-site.xml Sets HDFS to Safemode I installed the Cloudera CDH4 distribution on a single machine in pseudo-distributed mode and successfully tested that it was working correctly (e.g. I can run MapReduce programs, insert data on the Hive server, etc.). However, if I change the core-site.xml file to have fs.default.name set to the machine name rather than localhost and restart the NameNode service, HDFS enters safemode.
Before the change of fs.default.name, I ran the following to check the state of the HDFS:
$ hadoop dfsadmin -report
...
Configured Capacity: 18503614464 (17.23 GB)
Present Capacity: 13794557952 (12.85 GB)
DFS Remaining: 13790785536 (12.84 GB)
DFS Used: 3772416 (3.60 MB)
DFS Used%: 0.03%
Under replicated blocks: 2
Blocks with corrupt replicas: 0
Missing blocks: 0
Then I made the modification to core-site.xml (with the machine name being hadoop):
<property>
<name>fs.default.name</name>
<value>hdfs://hadoop:8020</value>
</property>
I restarted the service and reran the report.
$ sudo service hadoop-hdfs-namenode restart
$ hadoop dfsadmin -report
...
Safe mode is ON
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
An interesting note is that I can still perform some HDFS commands. For example, I can run
$ hadoop fs -ls /tmp
However, if I try to read a file using hadoop fs -cat or try to place a file in the HDFS, I am told the NameNode is in safemode.
$ hadoop fs -put somefile .
put: Cannot create file/user/hadinstall/somefile._COPYING_. Name node is in safe mode.
The reason I need the fs.default.name to be set to the machine name is because I need to communicate with this machine on port 8020 (the default NameNode port). If fs.default.name is left to localhost, then the NameNode service will not listen to external connection requests.
I am at a loss as to why this is happening and would appreciate any help.
A: The issue stemmed from domain name resolution. The /etc/hosts file needed to be modified so that both localhost and the fully qualified domain name point to the IP address of the Hadoop machine.
192.168.0.201 hadoop.fully.qualified.domain.com localhost
A: Safemode is an HDFS state in which the file system is mounted read-only; no replication is performed, nor can files be created or deleted. Filesystem operations that only access the filesystem metadata, like 'ls' in your case, will still work.
The NameNode can be manually forced to leave safemode with hadoop dfsadmin -safemode leave. Verify the safemode status with hadoop dfsadmin -safemode get, and then run the dfsadmin report to see if it shows data. If, after getting out of safe mode, the report still does not show any data, then I suspect communication between the namenode and the datanode is not happening. Check the namenode and datanode logs after this step.
The next step could be to try restarting the datanode process, and the last resort would be to format the namenode, which will result in loss of data.
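To recap the commands mentioned above in one place:

hadoop dfsadmin -safemode get     # check whether safemode is on
hadoop dfsadmin -safemode leave   # manually force the NameNode out of safemode
hadoop dfsadmin -report           # re-run the report to see if data shows up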
| {
"language": "en",
"url": "https://stackoverflow.com/questions/19412328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8"
} |
Q: Checkbox styling Is there a way to style a checkbox and a label with border, so the user can see only the label (the checkbox is hidden), and when the user clicks on the label, the label will change the color of the background and the text? This click should also work as clicking on the checkbox.
If there is a way, how should I do this?
or
How to hide the checkbox and leave only the label do the work with changing colors?
A: Put them side to side (in html structure) and use the adjacent sibling selector +
Something like this
html
<input type="checkbox" id="box1" />
<label for="box1">checkbox #1</label>
css
input[type="checkbox"]{
position:absolute;
visibility:hidden;
z-index:-1;
}
input[type="checkbox"]:checked + label{
color:red;
}
You could style the label (2nd rule) as you want, of course.
demo at http://jsfiddle.net/kb67J/1/
| {
"language": "en",
"url": "https://stackoverflow.com/questions/18993424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: Multiple Nuxt3 custom modules disable each other I developed 2 custom modules, let's say module1 and module2.
module1 uses addImports (to auto-import composable and utils) and module2 uses addComponent (to, obviously, auto-import some components) along a components list in directory ./runtime/components.
// module1
addImports([
{
from: resolve(runtimeDir, 'composables/useFoo'),
name: 'useFoo',
as: 'useFoo'
}
]);
// module2
const components = fs.readdirSync(resolve('./runtime/components')).map((component: string) => {
return component.split('.').slice(0,-1).join('.');
});
// components = [ 'Component1', 'Component2' ]
components.map((component: string) => {
addComponent({
name: component,
filePath: resolve(`./runtime/components/${component}.vue`),
global: true
});
});
After publishing both and testing them by installing them in a test Nuxt 3 project, it appears that they work fine individually, but as soon as I add both to the nuxt.config.ts file's modules array, the first one in the array breaks the second one:
// In test project
// addComponent of module2 does not work
modules: [
'module1',
'module2'
]
// addImports of module1 does not work properly
modules: [
'module2',
'module1'
]
I have no idea what I can do or what I am doing wrong :/
Can someone help, please?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/75210185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to count days between two dates So I would like to count how many days there are between two date pickers.
So I tried to make two arrays but it didn't work.
Any ideas?
Here are my date pickers
A: Convert your calendar string day, month, year to Date class.
More discussion here: Java string to date conversion
e.g.
DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
Date d1, d2;
try {
d1 = dateFormat.parse(year + "-" + month + "-" + day); //as per your example
d2 = //do the same thing as above
long days = getDifferenceDays(d1, d2);
}
catch (ParseException e) {
e.printStackTrace();
}
public static long getDifferenceDays(Date d1, Date d2) {
long diff = d2.getTime() - d1.getTime();
return TimeUnit.DAYS.convert(diff, TimeUnit.MILLISECONDS);
}
A: Create a method getDates()
private static ArrayList<Date> getDates(String dateString1, String dateString2)
{
ArrayList<Date> arrayofdates = new ArrayList<Date>();
DateFormat df1 = new SimpleDateFormat("dd-MM-yyyy");
Date date1 = null;
Date date2 = null;
try {
date1 = df1 .parse(dateString1);
date2 = df1 .parse(dateString2);
} catch (ParseException e) {
e.printStackTrace();
}
Calendar calender1 = Calendar.getInstance();
Calendar calender2 = Calendar.getInstance();
calender1.setTime(date1);
calender2.setTime(date2);
while(!calender1.after(calender2))
{
arrayofdates.add(calender1.getTime());
calender1.add(Calendar.DATE, 1);
}
return arrayofdates;
}
Then pass the parameters to this method to get the array of dates.
As you are using a DatePicker:
DateFormat df1 = new SimpleDateFormat("dd-MM-yyyy");
ArrayList<Date> mBaseDateList = getDates(df1.format(cal1.getTime()), df1.format(cal2.getTime()));
A: Scanner in = new Scanner(System.in);
int n = in.nextInt();
Date d1, d2;
Calendar cal = Calendar.getInstance();
SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
for (int i = 0; i < n; i++) {
try {
d2 = sdf.parse(in.next());
d1 = sdf.parse(in.next());
long differene = (d2.getTime() - d1.getTime()) / (1000 * 60 * 60 * 24);
System.out.println(Math.abs(differene));
} catch (Exception e) {
}
}
A: public static int getDaysBetweenDates(Date fromDate, Date toDate) {
return Math.abs((int) ((toDate.getTime() - fromDate.getTime()) / (1000 * 60 * 60 * 24)));
}
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55020082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Index keys of an object where the value satisfies a condition I'm trying to write a function that takes two objects and the name of a key that both objects contain, assuming the associated values are both arrays, and compares the length of the two arrays.
The general form of the function is this:
const compareByKeyLength = <T extends Record<string, unknown[]>, K extends keyof T>(
a: T,
b: T,
key: K,
) => {
return a[key].length < b[key].length ? 1 : -1;
};
However, I want to modify this so that T does not necessarily extend Record<string, unknown[]>. That is, it cannot be assumed that T can be indexed by any arbitrary string to get an array; some keys on the object will have values of other types. What I want is to limit the allowed values for K such that only the keys whose values are arrays can be given as input.
This is my best guess as to how to implement this, but as you can see, the compiler still doesn't accept that the values will be arrays:
// Find all keys in an object whose type is an array.
type ArrayKeys<O> = {
[K in keyof O]: O[K] extends any[] ? K : never
}[keyof O];
// Example to test if `ArrayKeys` works.
const example = {
foo: [1, 2, 3],
bar: "bar",
baz: ["one", "two", "three"],
};
// Evaluates to: "foo" | "baz"
type ExampleArrayKeys = ArrayKeys<typeof example>;
// Somehow `T[ArrayKeys<T>]` doesn't prove that the value accessed will be an array. Not sure why.
const compareByKeyLength = <T,>(
a: T,
b: T,
key: ArrayKeys<T>,
) => {
return a[key].length < b[key].length ? 1 : -1; // Error: Property 'length' does not exist on type 'T[ArrayKeys<T>]'.
};
TS Playground
My questions:
* What is the reason my attempt doesn't do what I want? Is it a limitation of the compiler to understand that T[ArrayKeys<T>] will always be an array? Or is that actually unsound and I'm just not seeing why?
* Is what I'm trying to do expressible in the type system, or do I need to use runtime checks to verify that the values are actually arrays?
A: Maybe this is what you need:
const compareByKeyLength = <
A extends Record<Key, any[]>,
B extends Record<Key, any[]>,
Key extends keyof A & keyof B
>
(
a: A,
b: B,
key: Key,
) => {
return a[key].length < b[key].length ? 1 : -1;
};
compareByKeyLength({a: [], b: 123}, {a: [], c: 123}, "a")
I introduce the generic type Key which is both a keyof A and keyof B. Then I specify that the inputs a and b are of the generic types A and B which both have the key Key with an array type any[].
Playground
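If you would rather keep a single generic for both arguments, a similar constraint works; this is just a sketch along the same lines, not part of the answer above:
const compareByKeyLength2 = <T extends Record<K, unknown[]>, K extends PropertyKey>(
  a: T,
  b: T,
  key: K,
) => {
  return a[key].length < b[key].length ? 1 : -1;
};

compareByKeyLength2({ foo: [1, 2], bar: "x" }, { foo: [1], bar: "y" }, "foo"); // ok
// compareByKeyLength2({ foo: [1], bar: "x" }, { foo: [1], bar: "y" }, "bar"); // error: bar is not an array
The trade-off is that both arguments must now share the type T, whereas the two-generic version above lets them differ as long as both have the selected key.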
| {
"language": "en",
"url": "https://stackoverflow.com/questions/72119858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to style a photo gallery? I am looking for a method to design a photo gallery. I am currently using a table, but I am not sure whether that is the correct approach, and I am also unsure about the styling.
It has many images; the upload button is used to upload photos and the message area is used to show upload messages.
I just need to show the images, allow users to save images on the server, and show the messages (uploaded / cannot upload). I can show the images, save them, and send the messages to the message section, but I am not sure how to style it.
Another problem is that I need all images to be the same size, as each is a different size now.
I need it to be the same on all browsers.
<table>
<tbody>
<tr>
<td><img 1></td>
<td>upload photo btn</td>
<td><label>message</label></td>
<td><img 3></td>
<td>upload photo btn</td>
<td><label>message</label></td>
</tr>
</tbody>
</table>
A: Semantically tables should be used for tabular data.
Check out the answer to this question for some options.
Depending on the age of the browsers you want to support, you could also consider using a flex box layout. Flex box is nice and easy to code but not fully supported in all browsers yet.
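A minimal flexbox sketch, assuming the thumbnails sit in a container with a class such as .gallery (the class name and the 200x150 size are only examples); object-fit: cover crops each photo so they all render at the same size, but note that object-fit, like flexbox, is a newer property with limited support in old browsers:
.gallery {
  display: flex;
  flex-wrap: wrap;   /* let thumbnails flow onto new rows */
}
.gallery img {
  width: 200px;      /* every thumbnail gets the same box */
  height: 150px;
  object-fit: cover; /* crop instead of stretching */
  margin: 5px;
}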
A: In my opinion this shouldn't be accomplished using tables; a better way is the CSS3 column-count property. It lets you split your content into however many columns you want, so you don't have to use tables at all.
Demo
Demo (Added -webkit support)
html, body {
width: 100%;
height: 100%;
}
.wrap {
column-count: 3;
-moz-column-count: 3;
-webkit-column-count: 3;
}
.holder {
border: 1px solid #f00;
}
img {
max-width: 100%;
}
Note: the CSS3 column-count property is not widely supported, but there are many polyfills available; you can search online and use one for cross-browser compatibility.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16870040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Unable to use DirectX.Capture library I am unable to use DirectX.Capture library because the following code from the example gave me errors:
Capture capture = new Capture(
Filters.VideoInputDevices[0],
Filters.AudioInputDevices[0]);
capture.Start();
capture.Stop();
An object reference is required for the non-static field, method, or
property 'DirectX.Capture.Filters.AudioInputDevices'
An object
reference is required for the non-static field, method, or property
'DirectX.Capture.Filters.VideoInputDevices'
Why? What am I doing wrong? How can I fix it?
A: The message is saying that the member AudioInputDevices and the member VideoInputDevices are not declared as static in the type DirectX.Capture.Filters, but you are using them as if they were static.
To reference a member that's not static, you need to instantiate that type, by calling the constructor (directly, or indirectly via some kind of factory method) of that type (DirectX.Capture.Filters).
In other words, you need something like this:
var filters = new DirectX.Capture.Filters(...);
var capture = new Capture(filters.VideoInputDevices[0], filters.AudioInputDevices[0]);
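If the Filters class in your copy of the library happens to expose a parameterless constructor (this is an assumption — check the library source, since the constructor signature is not shown in the question), the whole example might reduce to:
// Assumes Filters has a parameterless constructor; verify against your library version.
var filters = new DirectX.Capture.Filters();
var capture = new Capture(filters.VideoInputDevices[0], filters.AudioInputDevices[0]);
capture.Start();
// ... record ...
capture.Stop();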
| {
"language": "en",
"url": "https://stackoverflow.com/questions/28606938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Elasticsearch: Exact multiple match phrases on same field I want the following query to return results where exact phrases separated by OR match in a particular field.
{
"query": {
"nested": {
"path": "positions",
"query": {
"bool": {
"must": [
{
"query_string": {
"default_field": "positions.companyname",
"query": "microsoft OR gartner OR IBM"
}
},
{
"query_string": {
"default_field": "positions.title",
"query": "(Chief Information Security Officer) OR (Chief Digital Officer)"
}
}
]
}
},
"inner_hits": {
"highlight": {
"fields": {
"positions.title": {}
}
}
}
}
}
}
The results should contain exact Chief Information Security Officer OR Chief Digital Officer,
but not Chief Digital Marketing Officer OR Chief Information Officer as it is currently being returned.
Also, the field may not necessarily have the exact phrase to be searched.
For example:
"CIO Chief Information Officer" -> FALSE
"Head at Digital - Chief Digital Officer" -> TRUE
"Former lead Chief Information Security Officer" -> TRUE
"Chief Information Officer" -> False
I guess the point I am trying to make is those phrases should always be next to each other(proximity).
A: For your use case I would suggest you to use match_phrase query inside a bool query's should clause.
Something like this should work:
GET stackoverflow/_search
{
"query": {
"bool": {
"should": [
{
"match_phrase": {
"text": "Chief Information Security Officer"
}
},
{
"match_phrase": {
"text": "Chief Digital Officer"
}
}
]
}
}
}
A: This query would do it.
{
"query": {
"nested": {
"path": "positions",
"query": {
"bool": {
"must": [
{
"query_string": {
"default_field": "positions.companyname",
"query": "microsoft OR gartner OR IBM"
}
},
{
"bool": {
"should": [
{
"match_phrase": {
"positions.title": "chief information security officer"
}
},
{
"match_phrase": {
"positions.title": "chief digital officer"
}
}
]
}
}
]
}
}
}
}
}
match_phrase makes sure that the exact phrase is being searched for. To match multiple phrases on the same field, use a bool query with should clauses.
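If you want to be explicit that at least one of the two phrases has to match (which is already the default for a bool query that contains only should clauses), you can state it with minimum_should_match; this is optional and not part of the answer above:
"bool": {
  "minimum_should_match": 1,
  "should": [
    { "match_phrase": { "positions.title": "chief information security officer" } },
    { "match_phrase": { "positions.title": "chief digital officer" } }
  ]
}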
| {
"language": "en",
"url": "https://stackoverflow.com/questions/55636676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Which IDE should be used for Java application development? I want to develop applications for Nokia mobile phones. I googled and found that J2ME is the platform with which apps can be developed for such phones.
Please suggest any IDE in which I can develop applications for Android, BlackBerry, Nokia (the platforms Nokia commonly uses), etc.
Is it possible to have a single IDE for all types of development described above?
I work on Microsoft.NET technology and am new to Java Development.
A: For Android I'd go with Eclipse; see the following for the Android SDK and Eclipse setup:
*
*Android SDK
*Eclipse Plugin for Android
*Blackberry
*Nokia S60
A: Have a look at Mobile Tools for Java. It is based on Eclipse and widely used among developers! You can add a large number of plugins to fulfill your needs if necessary.
A: I think there are two main IDEs used for Java: Eclipse and NetBeans. Since Eclipse has the better plugin for Android, I would recommend it if you want to use just one.
A: There was an eclipse for J2ME (eclipse pulsar). But some people prefer NetBeans.
For Android, eclipse.
This question is not going to live long, I think XD. It is the kind of "which is better" question that gets closed.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/7281934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
} |
Q: Displaying a confirmation message after doing a query with AJAX and PHP I am trying to call a function with Javascript, HTML, AJAX, and PHP.
I am still new to Javascript, AJAX, and PHP.
Background info: I am trying to add a new customer to my database but I am now trying to get a confirmation message on the same page without the page refreshing.
This is what I have:
<script type="text/javascript">
function addCustomer(ln,fn,pn,dob)
{
if (window.XMLHttpRequest)
{// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("txtHint").innerHTML=xmlhttp.responseText;
}
}
xmlhttp.open("GET","addCustomer.php?add_LN="+ln+"?add_FN="+fn+"?add_PN="+pn+"?add_DOB="+dob,true);
xmlhttp.send();
}
</script>
<p>Suggestions: <span id="txtHint"></span></p>
<form action="return addCustomer(this.add_LN, this.add_FN, this.add_PN, this.add_DOB);">
<input type="text" name="add_LN" id="add_LN" />
<br />
<br />
<input type="text" name="add_FN" id="add_FN" />
<br />
<br />
<input type="text" name="add_PN" id="add_PN" />
<br />
<br />
<input type="text" name="add_DOB" id="add_DOB" />
<br />
<br />
<input type="submit" name="add_Customer" id="add_Customer" value="Add Customer" />
</form>
<span id="txtHint"></span>
</p> </td>
</tr>
</table>
</div>
I get a "This link appears broken"
With:
http://127.0.0.1/cpsc471/return%20addCustomer(this.add_LN,%20this.add_FN,%20this.add_PN,%20this.add_DOB);?add_LN=ee&add_FN=123&add_PN=eee&add_DOB=eee&add_Customer=Add+Customer
in the url bar.
What am I doing wrong? :) I do not want the page to refresh. I am trying to just get the confirmation or non-confirmation to display.
Thanks again for all the help!
A: Your problem is here:
<form action="return addCustomer(this.add_LN, this.add_FN, this.add_PN, this.add_DOB);">
You don't want the form action to be a javascript call, you need to do this onclick or something.
The arguments above by @thiefmaster and @martin are correct: using jQuery is much simpler. You are right about the framework concern, but in this case you would otherwise have to hand-code a lot of troublesome details yourself (responses, detecting a successful AJAX call, different browsers, etc.).
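A minimal way to wire this up without the page navigating away might look like the sketch below (it reuses the IDs and the addCustomer.php endpoint from the question; the onsubmit handler returns false so the browser never submits the form, and the query-string parameters are joined with & instead of ?):
<form onsubmit="addCustomer(
        document.getElementById('add_LN').value,
        document.getElementById('add_FN').value,
        document.getElementById('add_PN').value,
        document.getElementById('add_DOB').value); return false;">
  ...
</form>
and inside addCustomer:
xmlhttp.open("GET", "addCustomer.php?add_LN=" + encodeURIComponent(ln) +
    "&add_FN=" + encodeURIComponent(fn) +
    "&add_PN=" + encodeURIComponent(pn) +
    "&add_DOB=" + encodeURIComponent(dob), true);
xmlhttp.send();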
A: xmlhttp.onreadystatechange=function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
// put your codes here like
document.getElementById("txtHint").innerHTML=xmlhttp.responseText;
// document.getElementById("show_label").innerHTML='text as you wish like updated enjoy';
}
}
And add a label with the id of show_label
| {
"language": "en",
"url": "https://stackoverflow.com/questions/5591885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: How to change the calendar table end date in power bi?
Here is my problem, I want to change the end date of my
calendar table based on what I have selected in the "Previous 12 Months Dates Table".
I used my "Previous 12 Months Dates Table" as a dropdown slicer filter and get the max date in that table and used it in my calendar table as my end date.
I wrote a measure that returns the max date from my "Previous 12 Months Dates Table", and I tried to debug it by displaying the measure value in a table visual while selecting a specific date in the dropdown slicer; there it works fine.
But when I use that measure in my "Calendar Table" and try to debug, it returns the default max date, not the date that I selected in the dropdown slicer.
Here is the behavior I want to achieve.
For example, if I select EOMONTH Jan 2022, it should show the previous 12 months ending at EOMONTH Jan 2022 and also calculate the 12-month rolling average up to EOMONTH Jan 2022.
The previous 12 months view and the 12-month rolling average work as expected when I change the calendar end date manually; that is why my only problem is how to change the calendar end date based on the date selected in the dropdown slicer.
Thanks in advance.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/71891578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: How to resolve this deprecated problem in onActivityCreated? I am getting "'onActivityCreated(android.os.Bundle)' is deprecated".
package example.com.fragmentrecycler;
import android.os.Bundle;
import androidx.annotation.Nullable;
import androidx.fragment.app.Fragment;
import androidx.recyclerview.widget.LinearLayoutManager;
import androidx.recyclerview.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
/**
* A simple {@link Fragment} subclass.
*/
public class ListFrag extends Fragment {
RecyclerView recyclerView;
RecyclerView.Adapter myAdapter;
RecyclerView.LayoutManager layoutManager;
View view;
public ListFrag() {
// Required empty public constructor
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
// Inflate the layout for this fragment
view = inflater.inflate(R.layout.fragment_list, container, false);
return view;
}
@Override
public void onActivityCreated(@Nullable Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
}
public void notifyDataChanged()
{
myAdapter.notifyDataSetChanged();
}
}
This is the Java file in which I am getting this error. This code is part of a full project. I do not see what is wrong in my code.
A: onActivityCreated() is deprecated in API level 28.
There is no error shown because no error exists. Deprecated means that a newer or better way exists to handle this. So you need to change onActivityCreated() to onCreate(). But as far as I can see, you don't need this call at all if your Fragment already has onCreateView(). Keep your project clean and make a new .java file instead of coding everything in one single file. (A small onViewCreated() sketch follows the list below.)
*
*onCreate() is for Activity
*onCreateView() is for Fragment
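If you still need a hook where the view hierarchy exists (for example to set up the RecyclerView fields declared in the question), onViewCreated() is the usual place for that kind of work. A rough sketch — R.id.recycler_view and the adapter are assumed names, not taken from the question:
@Override
public void onViewCreated(View view, @Nullable Bundle savedInstanceState) {
    super.onViewCreated(view, savedInstanceState);
    recyclerView = view.findViewById(R.id.recycler_view); // assumed ID from fragment_list.xml
    layoutManager = new LinearLayoutManager(getContext());
    recyclerView.setLayoutManager(layoutManager);
    // myAdapter = new MyAdapter(...);   // create your adapter here
    // recyclerView.setAdapter(myAdapter);
}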
| {
"language": "en",
"url": "https://stackoverflow.com/questions/68499896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: EventEmitter memory leak from function within socket.io handler I have a server that requires a dynamic value that is continuously updated from a spawned process. I tried to make a custom listener outside of the scope of the server because I was receiving potential memory leak warnings, but I'm still receiving these messages after several connects/disconnects occur on the server. Why aren't the listeners that were added after the initial connect being removed in the disconnect event handler?
var express = require('express');
var http = require('http');
var spawn = require('child_process').spawn;
var util = require('util');
var fs = require('fs');
var EventEmitter = require('events').EventEmitter;
var sys = require('sys');
var app = express(),
server = http.createServer(app),
io = require('socket.io').listen(server);
function Looper(req) {
this.req = req;
EventEmitter.call(this);
}
sys.inherits(Looper, EventEmitter);
Looper.prototype.run = function() {
var self = this;
var cmd = spawn('./flow',[this.req]); // <-- script that outputs req every second
cmd.stdout.setEncoding('utf8');
cmd.stdout.on('data', function(data) {
self.emit('output',data);
});
}
Looper.prototype.output = function(callback) {
this.on('output', function(data) {
return callback(data.trim());
});
}
var looper = new Looper('blah');
looper.run();
app.use(express.static(__dirname + '/public'));
app.get('/', function(req, res) {
res.send(
"<script src='/socket.io/socket.io.js'></script>\n"+
"<script>\n"+
"\tvar socket=io.connect('http://127.0.0.1:3000');\n"+
"\tsocket.on('stream', function(data) {\n"+
"\t\tconsole.log(data);\n"+
"\t});\n"+
"</script>\n"
);
});
server.listen(3000);
io.sockets.on('connection', function(webSocket) {
looper.output(function(res) {
webSocket.emit('stream',res);
});
webSocket.on('disconnect', function() {
looper.removeListener('output',looper.output); // <- does not remove the listener added when the connection was made
});
});
A: You add(!) an extra callback function to the 'output' event every time you call looper.output. I don't know what you want to achieve, but to get the callback only once, use this.once('output', ...), or move the callback registration into the object, or remove the old function first...
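For the disconnect case specifically, one way to make the removal actually work is to keep a reference to the exact function you registered and remove that same reference; a sketch built on the code in the question:
io.sockets.on('connection', function(webSocket) {
  // named handler, so the very same function object can be removed later
  var onOutput = function(data) {
    webSocket.emit('stream', data.trim());
  };
  looper.on('output', onOutput);

  webSocket.on('disconnect', function() {
    looper.removeListener('output', onOutput); // removes only this socket's listener
  });
});
The original removeListener('output', looper.output) has no effect because looper.output itself was never registered as a listener — only the anonymous function created inside it was.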
| {
"language": "en",
"url": "https://stackoverflow.com/questions/21409828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: R segfaults when starting R by opening a source file My R.app (R version 3.3.2, R.app GUI 1.68 (7288) x86_64-apple-darwin13.4.0) on macOS Sierra is odd.
When I start R.app by clicking the app icon, I have no problems.
But if I start R.app by double-clicking a source file (e.g., test.R) in Finder.app or if I type open test.R in Terminal.app, I get a segfault error ("memory not mapped").
*** caught segfault ***
address 0x18, cause 'memory not mapped'
Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
>
Selection:
It is OK if R.app is already running when test.R is opened.
When R.app segfaults, I am sometimes lucky to have control and can close the app normally. But at other times, R.app halts with the spinning wheel appearing, and I have to Force-Quit the app. After I force-quit R.app, the next time I start R.app, I am permanently stuck (with R.app halting trying to open test.R as soon as I restart R.app), in which case I solve the problem by starting R from Terminal by typing /Applications/R.app/Contents/MacOS/R.
What could be the problem?
| {
"language": "en",
"url": "https://stackoverflow.com/questions/41314761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Compare md5 after tftp transfer Because I use the tftp command to transfer an important file, I would like to compare MD5 sums in order to validate the transfer.
Note: the file has already been transferred in the example below
int main(int argc, char *argv[])
{
FILE *input_md5_fd;
FILE *md5_fd;
size_t md5_length;
size_t nbr_items;
char *input_md5 = NULL;
char *file_md5 = NULL;
int ret;
chdir("/tmp");
system("tftp -g -r INPUT_MD5 192.168.0.1");
input_md5_fd = fopen("INPUT_MD5", "r");
if (input_md5_fd != NULL)
{
fprintf(stdout,"MD5 transfered\n");
fseek(input_md5_fd,0L,SEEK_END);
md5_length = ftell(md5_fd);
input_md5 = malloc(md5_length + 1);
fseek(input_md5_fd,0L,SEEK_SET);
nbr_items = fread(input_md5, 1, md5_length, input_md5_fd);
input_md5[nbr_items] = 0;
fprintf(stdout, "length = %lu B\n",md5_length);
fprintf(stdout, "contains %s\n", input_md5);
fclose(input_md5_fd);
}
else
{
return -1;
}
system("md5sum IMPORTANT_FILE > /tmp/file_md5.txt");
md5_fd = fopen("file_md5.txt", "r");
if (md5_fd != NULL)
{
file_md5 = malloc(md5_length +1);
rewind(md5_fd);
nbr_items = fread(file_md5, 1, md5_length, md5_fd);
file_md5[nbr_items] = 0;
fprintf(stdout, "contains %s\n", file_md5);
fclose(md5_fd);
}
else
{
return -1;
}
printf("file_md5 = %s\n", file_md5);
printf("input_md5 = %s\n", input_md5);
ret = strncmp(file_md5, input_md5, md5_length);
printf("ret = %d\n", ret);
free(input_md5);
free(file_md5);
}
Output :
MD5 transfered
length = 33 B
contains a95ef51ec6b1b06f61c97559ddf4868f
contains a95ef51ec6b1b06f61c97559ddf4868f
file_md5 = a95ef51ec6b1b06f61c97559ddf4868f
input_md5 = a95ef51ec6b1b06f61c97559ddf4868f
ret = 22
The input files contain :
# cat /tmp/INPUT_MD5
a95ef51ec6b1b06f61c97559ddf4868f
# cat /tmp/file_md5
a95ef51ec6b1b06f61c97559ddf4868f XXX_XX-XXXX-XX-DD.DDD-DD.DDD.DD.bin
X being char and D decimal values.
Why is ret not equal to 0? In addition, I don't know where the 34 comes from
EDIT :
CODE HAS BEEN UPDATED; the problem came from the md5_length definition. The long type has been changed to size_t
A: You risk printing out and comparing garbage since you don't ensure the strings are nul terminated.
You need to do
fread(file_md5, 1, md5_length, md5_fd);
file_md5[md5_length] = 0;
And similar for input_md5. Or to do it properly, use the return value of fread() to add the nul terminator in the proper place, check if fread() fails, Check how much it returned.
If you also place your debug output inside quotes, it'll also be easier to spot unwanted whitespace or unprintable characters:
fprintf(stdout, "contains '%s'\n", input_md5);
| {
"language": "en",
"url": "https://stackoverflow.com/questions/34353763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: ocaml mutual recursion error I am new to OCaml and want help with a recursive function defined on a recursive data type. I have defined a data type as follows
type 'a slist = S of 'a sexp list
and
'a sexp = Symbol of 'a | L of 'a slist
The function (subst) I'm writing checks for a symbol a in the defined slist and substitutes b for it. For example, subst 1 10 S[ 1; 4; S[L[3; 1;]; 3];] returns S[10; 4; S[L[S[3; 10;]]; 3;]. My code is as follows:
let rec subst a b sl = match sl with
S s -> match s with
[] -> []
| p :: q -> match p with
Symbol sym -> if sym = a then S[b] :: (**subst** a b S[q]) else
S[p] :: (subst a b S[q])
L l -> (subst a b l) :: (subst a b S[q])
| L lis -> subst a b lis;;
I am getting the error :
This function is applied to too many arguments; Maybe you forgot a ';'
Please help
A: Your type can be defined in a simpler way, you don't need slist:
type 'a sexp = Symbol of 'a | L of 'a sexp list
Your problem is that subst a b S[q] is read as subst a b S [q], that is the function subst applied to 4 arguments. You must write subst a b (S[q]) instead.
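With the simplified type above, the whole substitution collapses into a short recursive map; this is a sketch using that single-type definition rather than the slist/sexp pair from the question:
let rec subst a b e =
  match e with
  | Symbol s -> if s = a then Symbol b else e
  | L l -> L (List.map (subst a b) l)

(* subst 1 10 (L [Symbol 1; Symbol 4; L [Symbol 3; Symbol 1]])
   = L [Symbol 10; Symbol 4; L [Symbol 3; Symbol 10]] *)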
| {
"language": "en",
"url": "https://stackoverflow.com/questions/16939190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Teradata CASE WHEN attribute IN ( Select... from)
I have selected a bunch of IDs, and I am trying to use CASE WHEN to produce a column that flags when the IDs I actually want are among the IDs I selected earlier.
But it shows: Select Failed. 3771: Illegal expression in WHEN clause of CASE expression.
Any suggestions to get it right?
Please help.
P.S. My goal is to select the IDs that appear for the first time.
| {
"language": "en",
"url": "https://stackoverflow.com/questions/74885129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
} |
Q: Type mismatch java.io.Serializable and GenTraversableOnce I'm trying to get a parser to take a sequence of semicolon-separated words and convert them into an array.
Here's an SSCCE.
import util.parsing.combinator._
class Element {
def getSuper() = "TODO"
}
class Comp extends RegexParsers with PackratParsers {
lazy val element: PackratParser[Element] = (
"foo") ^^ {
s => new Element
}
lazy val list: PackratParser[Array[Element]] = (
(element ~ ";" ~ list) |
element ~ ";") ^^
{
case a ~ ";" ~ b => Array(a) ++ b
case a ~ ";" => Array(a)
}
}
object Compiler extends Comp {
def main(args: Array[String]) = {
println(parseAll(list, "foo; foo; foo;"))
}
}
It's not working and it's not compiling; if it were, I wouldn't be asking about it. This is the error message I'm getting. Is there a way to convert from Serializable to GenTraversableOnce?
~/Documents/Git/Workspace/Uncool/Scales$ scalac stov.scala
stov.scala:19: error: type mismatch;
found : java.io.Serializable
required: scala.collection.GenTraversableOnce[?]
case a ~ ";" ~ b => Array(a) ++ b
^
one error found
A: My suspicion goes on the | combinator.
The type of (element ~ ";" ~ list) is ~[~[Element, String], Array[Element]] and the type of element ~ ";" is ~[Element, String].
Thus when applying the | combinator on these parsers, it returns a Parser[U] where U is a supertype of T ([U >: T]).
Here the type of T is ~[~[Element, String], Array[Element]] and the type of U is ~[Element, String].
So the most specific common supertype of Array[Element] and String is Serializable.
Between ~[Element, String] and Element it's Object. That's why the type of | is ~[Serializable, Object].
So when applying the map operation, you need to provide a function ~[Serializable, Object] => U where U is Array[Element] in your case since the return type of your function is PackratParser[Array[Element]].
Now the only possible match is:
case obj ~ ser => //do what you want
Now you see that the pattern you're trying to match in your map is fundamentally wrong. Even if you return an empty array (just so that it compiles), you'll see that it leads to a match error at runtime.
That said, what I suggest is first to map separately each combinator:
lazy val list: PackratParser[Array[Element]] =
(element ~ ";" ~ list) ^^ {case a ~ ";" ~ b => Array(a) ++ b} |
(element ~ ";") ^^ {case a ~ ";" => Array(a)}
But the pattern you are looking for is already implemented using the rep combinator (you could also take a look at repsep but you'd need to handle the last ; separately):
lazy val list: PackratParser[Array[Element]] = rep(element <~ ";") ^^ (_.toArray)
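If you prefer the repsep flavour mentioned above, one way to handle the trailing semicolon might be (a sketch, not tested against the rest of the grammar):
lazy val list: PackratParser[Array[Element]] =
  (rep1sep(element, ";") <~ ";") ^^ (_.toArray)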
| {
"language": "en",
"url": "https://stackoverflow.com/questions/30473466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |
Q: how to display javascript validation message on html popup Want to display JavaScript validation messages inside an HTML popup window, which would be a separate HTML page. Currently the message is displayed on the same page from which the validation has been called.
$("#getStarted").val('Try xxxx for Free');
$('#message1').text('User ' + emailID + ' already exists.');
$("#stRegister").val("");
document.getElementById("stRegister").focus();
<input name="stRegister" id="stRegister" type="email" class="registerinput" placeholder="Enter your business email ID" onkeypress="$('#message1').text('');">
<div class="col-md-4 col-md-offset-6 hidden-xs hidden-sm">
<div style="margin-left:8px;" class="alerts" id="message1">
</div>
</div>
A: var win = window.open(url, windowName, [windowFeatures]); then modify the DOM through the returned window object.
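A small sketch of that idea — here the popup content is written on the fly instead of loading a separate HTML file, just to keep the example self-contained, and emailID is the variable already used in the question; note that pop-up blockers may stop this if it is not triggered by a user action:
var popup = window.open('', 'validationPopup', 'width=320,height=160');
popup.document.body.innerHTML =
    '<p>User ' + emailID + ' already exists.</p>';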
A: Not sure if you can do it in a separate window. However, for validation you can use the window.confirm function natively built into browsers. Here is an example:
// window.confirm returns a boolean based on the user action
let validate = window.confirm('Do you want to continue?');
if (validate) {
console.log('validated');
} else {
console.log('not validated');
}
A: You can use in your script something like:
window.alert("what you want here");
| {
"language": "en",
"url": "https://stackoverflow.com/questions/50718456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
} |
Q: Can we use class transitions in jQuery like with CSS3? Do any of you know the answer to the following problem. It's driving me crazy that I cannot find a solution.
I want to animate in jQuery by adding/replacing a class and transitioning between the two, NOT by using inline styling.
This is exactly how CSS3 transition works, and its really great and allows for quick prototyping. But I want to do it in jQuery because I want it to be normalised across all browsers.
I've used jQuery animate for a long time, but I've only used it to add inline-styling. This gets really messy to manage and means I can't have the end result written in CSS and separate the presentation layer from the logic.
Consider the following example. In CSS3 if I want to animate a div height by changing the height of a div I can do this:
*
*A div has a class .object which has an initial height of 100px.
*An event runs, and the div now has the classes .object and .active.
*With the .active class, the CSS now sets the height to 150px
*Using CSS3 transition, I can allow a smooth transition between the two over 2 seconds with easing.
If i want to do the same thing in jQuery:
*
*A div has a class .object which has an initial height of 100px.
*An event runs, and I use .animate to animate the height of the object to 150px
*As a result of this, the CSS now sets the height to 150px using inline styling
*Using parameters I can set the speed in ms with easing.
You may be aware of jQuery UI which allows transition on toggleClass and switchClass:
http://jqueryui.com/demos/toggleClass/
However, annoyingly it won't work for the example above because it only works if the initial class has no specific height set. To test this, inspect the element in the demo and give it a height and watch the demo break.
Hopefully someone can help me with this
A: You could try DOM mutation events, which are dispatched e.g. when a property of an element is changed.
There are jQuery plugins which mimic the behavior of mutation events (although I haven't tried any of those).
But feel free to google mutation events in jQuery, and that should probably help you.
A: You can do it with http://jqueryui.com/demos/switchClass/
(I asked a similar question a few weeks ago: can .classA styles be animated to .classB styles easily?)
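It may also be worth trying the animated class methods that jQuery UI adds to core jQuery (addClass/removeClass/toggleClass with a duration), which tween between the computed styles before and after the class change — a sketch, assuming jQuery UI is loaded:
$('.object').addClass('active', 2000, 'swing', function() {
  // runs once the two-second transition has finished
});

// and later, to animate back:
$('.object').removeClass('active', 2000);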
| {
"language": "en",
"url": "https://stackoverflow.com/questions/8473720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
} |